`ps_data__`
This contains the data for each "table", in JSON format. Because the data is stored as JSON, this table's schema does not change when columns are added, removed or changed in the Sync Streams (or legacy Sync Rules) and client-side schema.
`ps_data_local__`
Same as the previous point, but for [local-only](/client-sdks/advanced/local-only-usage) tables.
`` (`VIEW`)
These are views on the `ps_data__` tables above, with each column defined in the client-side schema extracted from the JSON. For example, a `description` text column would be extracted as `CAST(data ->> '$.description' as TEXT)`.
`ps_untyped`
Any synced table that is not defined in the client-side schema is placed here. If the table is added to the schema at a later point, the data is then migrated to `ps_data__`.
`ps_oplog`
This is the operation history data as received from the [PowerSync Service](/architecture/powersync-service), grouped per bucket.
`ps_crud`
The client-side upload queue (see [Writing Data](#writing-data-via-sqlite-database-and-upload-queue) below).
`ps_buckets`
A small amount of metadata for each bucket.
`ps_migrations`
Table keeping track of Client SDK schema migrations.
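As an illustration of the JSON-backed view mechanism described above, here is a minimal sketch in plain SQLite. The table, view and column names are made up for this example; the SDK manages the real definitions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Schema-less data table, mirroring the shape of ps_data__<table>
    CREATE TABLE "ps_data__todos" (id TEXT PRIMARY KEY, data TEXT);

    -- View extracting typed columns from the JSON payload.
    -- json_extract(data, '$.x') is equivalent to data ->> '$.x' (SQLite >= 3.38).
    CREATE VIEW todos AS
    SELECT id,
           CAST(json_extract(data, '$.description') AS TEXT) AS description,
           CAST(json_extract(data, '$.completed') AS INTEGER) AS completed
    FROM "ps_data__todos";
""")

conn.execute(
    "INSERT INTO ps_data__todos VALUES (?, ?)",
    ("t1", '{"description": "buy milk", "completed": 0}'),
)
row = conn.execute("SELECT description, completed FROM todos").fetchone()
print(row)  # ('buy milk', 0)
```

Because only the JSON payload changes shape, adding or removing a column in the client-side schema only requires redefining the view, not migrating the underlying table.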
Most rows will be present in at least two tables — the `ps_data__` table, and in `ps_oplog`.
The copy of the row in `ps_oplog` may be newer than the one in `ps_data__`. This is because of the checkpoint system in PowerSync that gives the system its consistency properties. When a full [checkpoint](/architecture/consistency) has been downloaded, data is copied over from `ps_oplog` to the individual `ps_data__` tables.
It is possible for different [buckets](/architecture/powersync-service#bucket-system) to include overlapping data (for example, if multiple buckets contain data from the same table). If rows with the same table and ID have been synced via multiple buckets, it may be present multiple times in `ps_oplog`, but only one will be preserved in the `ps_data__` table (the one with the highest `op_id`).
**Raw Tables Instead of JSON-Backed SQLite Views**: If you run into limitations with the above JSON-based SQLite view system, check out the [Raw Tables experimental feature](/client-sdks/advanced/raw-tables) which allows you to define and manage raw SQLite tables to work around some of the limitations of PowerSync's default JSON-backed SQLite views system. We are actively seeking feedback on the raw tables functionality.
## Writing Data (via SQLite Database and Upload Queue)
Any mutations on the SQLite database, namely updates, deletes and inserts, are immediately reflected in the SQLite database, and also automatically placed into an **upload queue** by the Client SDK.
The upload queue is a blocking [FIFO](https://en.wikipedia.org/wiki/FIFO_%28computing_and_electronics%29) queue.
The upload queue is automatically managed by the PowerSync Client SDK.
The Client SDK processes the upload queue by invoking an `uploadData()` function [that you define](/configuration/app-backend/client-side-integration) when you integrate the Client SDK. Your `uploadData()` function implementation should call your [backend application API](/configuration/app-backend/setup) to persist the mutations to the backend source database.
We designed PowerSync this way so that you can apply your own backend business logic, validations and authorization to any mutations going to your source database.
The PowerSync Client SDK automatically takes care of network failures and retries. If processing mutations in the upload queue fails (e.g. because the user is offline), it is automatically retried.
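The queue semantics described above can be sketched as follows. This is not the SDK's implementation; it is just an illustration of a blocking FIFO queue that only advances once the head batch is acknowledged by your `uploadData()` function:

```python
import time
from collections import deque

class UploadQueue:
    """Illustrative blocking FIFO queue (mirroring ps_crud semantics):
    the head batch must be acknowledged before the queue advances."""

    def __init__(self, upload_data, retry_delay=0.0):
        self.queue = deque()
        self.upload_data = upload_data  # stands in for your uploadData() call
        self.retry_delay = retry_delay

    def add(self, batch):
        self.queue.append(batch)  # one batch per client-side transaction

    def process(self, max_attempts=3):
        while self.queue:
            batch = self.queue[0]  # always the head: strict FIFO order
            for _ in range(max_attempts):
                try:
                    self.upload_data(batch)
                    break  # acknowledged by the backend
                except ConnectionError:
                    time.sleep(self.retry_delay)  # e.g. offline: retry
            else:
                return False  # still failing: queue stays blocked here
            self.queue.popleft()  # acknowledged: advance
        return True

uploaded = []
q = UploadQueue(upload_data=uploaded.append)
q.add([{"op": "PUT", "table": "todos", "id": "t1"}])
q.process()
print(uploaded)
```

Note that a persistently failing batch blocks everything behind it, which is why the error-handling strategies discussed under [Consistency](/architecture/consistency) matter.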
# Consistency
Source: https://docs.powersync.com/architecture/consistency
PowerSync uses the concept of "checkpoints" to ensure that data is consistent.
## PowerSync: Designed for Causal+ Consistency
PowerSync is designed to have [causal+ consistency](https://jepsen.io/consistency/models/causal), while providing enough flexibility for applications to perform their own data validations and conflict handling. PowerSync's consistency properties have been [tested and verified](https://github.com/nurturenature/jepsen-powersync#readme).
## How It Works: Checkpoints
A checkpoint is a single point-in-time on the server (similar to an [LSN in Postgres](https://www.postgresql.org/docs/current/datatype-pg-lsn.html)) with a consistent state — only fully committed transactions are part of the state.
The client only updates its local state when it has all the data matching a checkpoint, and then it updates the state to exactly match that of the checkpoint. There is no intermediate state while downloading large sets of changes such as large server-side transactions. Different tables and [buckets](/architecture/powersync-service#bucket-system) are all included in the same consistent checkpoint, to ensure that the state is consistent over all data in the client.
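As a simplified sketch of this behavior (the real tables are `ps_oplog` and the `ps_data__` tables, with more columns and per-bucket tracking), applying a checkpoint atomically might look like:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE oplog (bucket TEXT, op_id INTEGER, row_id TEXT, data TEXT);
    CREATE TABLE data_table (id TEXT PRIMARY KEY, data TEXT);
""")

def apply_checkpoint(conn, checkpoint_op_id, have_all_data):
    # Nothing is applied until every operation up to the checkpoint is present;
    # the copy then happens in one transaction, so no partial state is visible.
    if not have_all_data:
        return False
    with conn:  # single transaction
        conn.execute("""
            INSERT OR REPLACE INTO data_table (id, data)
            SELECT row_id, data FROM (
                SELECT row_id, data, MAX(op_id) FROM oplog
                WHERE op_id <= ? GROUP BY row_id
            )
        """, (checkpoint_op_id,))
    return True

# Two versions of row r1 arrive via the oplog; only the newest is kept.
conn.executemany("INSERT INTO oplog VALUES (?, ?, ?, ?)",
                 [("b", 1, "r1", "v1"), ("b", 2, "r1", "v2"), ("b", 3, "r2", "x")])
apply_checkpoint(conn, 3, have_all_data=True)
print(dict(conn.execute("SELECT id, data FROM data_table")))
```

This also shows the deduplication rule mentioned earlier: when the same row appears multiple times, the operation with the highest `op_id` wins.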
## Client-Side Mutations
Client-side mutations are applied on top of the last checkpoint received from the server, as well as being persisted into an [upload queue](/architecture/client-architecture#writing-data-via-sqlite-database-and-upload-queue).
While mutations are present in the upload queue, the client does not advance to a new checkpoint. This means the client never has to resolve conflicts locally.
Only once all the client-side mutations have been acknowledged by the server, and the data for that new checkpoint is downloaded by the client, does the client advance to the next checkpoint. This ensures that the operations are always ordered correctly on the client.
There is one nuanced case here, which is buckets with [Priority 0](/sync/advanced/prioritized-sync#special-case:-priority-0) if you are using [Prioritized Syncing](/sync/advanced/prioritized-sync).
## Types of Client-Side Mutations/Operations
The client automatically records mutations to the client-side database as `PUT`, `PATCH` or `DELETE` operations — corresponding to `INSERT`, `UPDATE` or `DELETE` statements in SQLite. These are grouped together in a batch per client-side transaction.
Since the [developer has full control](/architecture/client-architecture#writing-data-via-sqlite-database-and-upload-queue) over how mutations are applied to the source database, more advanced operations can be modeled on top of these three. See [Custom Conflict Resolution](/handling-writes/custom-conflict-resolution) for examples.
## Validation and Conflict Handling
With PowerSync offering [full flexibility](/architecture/client-architecture#writing-data-via-sqlite-database-and-upload-queue) in how mutations are applied on the server, it is also the developer's responsibility to implement this correctly to avoid consistency issues.
Some scenarios to consider:
While the client was offline, a row was modified on the client-side. By the time the client is online again, that row has been deleted on the source database. Some options for handling the mutation in your backend:
* Discard the mutation.
* Discard the entire transaction.
* Re-create the row.
* Record the failed mutation elsewhere, potentially notifying the user and allowing the user to resolve the issue.
Some other examples include foreign-key or not-null constraints, maximum size of numeric fields, unique constraints, and access restrictions (such as row-level security policies).
In an online-only application, the user typically sees the error as soon as it occurs, and can correct the issue as required. In an offline-capable application that syncs asynchronously with the server, these errors may occur much later than when the mutation was made, so more care is required to handle these cases.
Special care must be taken so that issues such as these do not block the upload queue. The upload queue in the PowerSync Client SDK is a blocking [FIFO](https://en.wikipedia.org/wiki/FIFO_%28computing_and_electronics%29) queue, and the queue cannot advance if the backend does not acknowledge a mutation. As mentioned above, if the queue cannot be cleared, the client does not move on to the next checkpoint of synced data.
There is no single correct choice on how to handle write failures such as mentioned above — the best action depends on the specific application and scenario. However, we do have some suggestions for general approaches:
1. In general, consider relaxing constraints somewhat on the backend where they are not absolutely required. It may be better to accept data that is somewhat inconsistent (e.g. a client not applying all expected validations), rather than discarding the data completely.
2. If it is critical to preserve all client mutations and preserve the order of mutations:
1. Block the client's upload queue on unexpected errors (don't acknowledge the mutation in your backend API).
2. Implement error monitoring to be notified of issues, and resolve the issues as soon as possible.
3. If it is critical to preserve all client mutations, but the exact order may not be critical:
1. On a constraint error, persist the transaction in a separate queue on your backend, and acknowledge the change.
2. The backend queue can then be inspected and retried asynchronously, without blocking the client-side upload queue.
4. If it is acceptable to lose some mutations due to constraint errors:
1. Discard the mutation, or the entire transaction if the changes must all be applied together.
2. Implement error notifications to detect these issues.
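As an illustration of approach 3 above (preserve all mutations, but relax ordering), a backend write endpoint could divert constraint failures to its own queue and still acknowledge the batch, so the client's upload queue keeps advancing. The table names and mutation shape here are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE todos (id TEXT PRIMARY KEY, list_id TEXT NOT NULL);
    CREATE TABLE failed_mutations (payload TEXT, error TEXT);
""")

def handle_upload(conn, mutations):
    """Apply client mutations; persist constraint failures in a backend-side
    queue for asynchronous inspection/retry, and acknowledge either way."""
    for m in mutations:
        try:
            with conn:  # each mutation applied in its own transaction
                conn.execute(
                    "INSERT OR REPLACE INTO todos (id, list_id) VALUES (?, ?)",
                    (m["id"], m.get("list_id")),
                )
        except sqlite3.IntegrityError as e:
            conn.execute("INSERT INTO failed_mutations VALUES (?, ?)",
                         (repr(m), str(e)))
    conn.commit()
    return "ok"  # acknowledge the whole batch so the client queue advances

handle_upload(conn, [
    {"id": "t1", "list_id": "l1"},
    {"id": "t2", "list_id": None},  # violates the NOT NULL constraint
])
print(conn.execute("SELECT COUNT(*) FROM failed_mutations").fetchone()[0])  # 1
```

The `failed_mutations` table can then feed error monitoring and asynchronous retries without blocking any client.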
See also:
* [Handling Update Conflicts](/handling-writes/handling-update-conflicts)
* [Custom Conflict Resolution](/handling-writes/custom-conflict-resolution)
## Questions?
If you have any questions about consistency, please [join our Discord](https://discord.gg/powersync) to discuss.
# PowerSync Protocol
Source: https://docs.powersync.com/architecture/powersync-protocol
Overview of the sync protocol used between PowerSync clients and the PowerSync Service for efficient delta syncing.
This contains a broad overview of the sync protocol used between PowerSync clients and the [PowerSync Service](/architecture/powersync-service).
For details, see the implementation in the various PowerSync Client SDKs.
## Design
The PowerSync protocol is designed to efficiently sync changes to clients, while maintaining [consistency](/architecture/consistency) and integrity of data.
The same process is used for:
* Downloading the initial set of data
* Bulk downloading changes after being offline for a while
* And incrementally streaming changes while connected.
## Concepts
### Buckets
All synced data is grouped into [buckets](/architecture/powersync-service#bucket-system). A bucket represents a collection of synced rows, synced to any number of users.
[Buckets](/architecture/powersync-service#bucket-system) are a core concept that allows PowerSync to efficiently scale to tens of thousands of concurrent clients per PowerSync Service instance, and incrementally sync changes to hundreds of thousands of rows (or even [a million or more](/resources/performance-and-limits#sync-powersync-service-→-client)) to each client.
Each bucket keeps an ordered list of changes to rows within the bucket (operation history) — generally as `PUT` or `REMOVE` operations.
* `PUT` is the equivalent of `INSERT OR REPLACE`
* `REMOVE` is slightly different from `DELETE`: a row is only deleted from the client if it has been removed from *all* buckets synced to the client.
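A simplified sketch of the `REMOVE` semantics (it ignores operation ordering across buckets, which the real operation history tracks):

```python
def resolve_visible_rows(ops_by_bucket):
    """Illustrative only: a row stays on the client while at least one
    synced bucket still contains a PUT for it."""
    containing = {}  # row id -> set of buckets currently containing it
    for bucket, ops in ops_by_bucket.items():
        for op, row_id in ops:
            if op == "PUT":
                containing.setdefault(row_id, set()).add(bucket)
            elif op == "REMOVE":
                containing.get(row_id, set()).discard(bucket)
    # A row is only deleted locally once no bucket contains it.
    return {row_id for row_id, buckets in containing.items() if buckets}

state = resolve_visible_rows({
    'lists["A"]': [("PUT", "list1"), ("REMOVE", "list1")],  # left this bucket
    'shared_lists["team1"]': [("PUT", "list1")],            # still present here
})
print(state)  # {'list1'} - not deleted locally
```

This is why a row moving out of one bucket does not delete it on clients that still sync it via another bucket.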
As a practical example of how buckets manifest themselves, let's say you have a bucket named `user_todo_lists` that contains the to-do lists for a user, and that bucket utilizes a `user_id` parameter (which will be obtained from the JWT). Now let's say users with IDs `A` and `B` exist in the source database. PowerSync will then replicate data from the source database and create individual buckets with bucket IDs `user_todo_lists["A"]` and `user_todo_lists["B"]`.
As you can see, buckets are essentially scoped by their parameters (`A` and `B` in this example), so they are always synced as a whole. For user `A` to receive only their relevant to-do lists, they would sync the entire contents of the bucket `user_todo_lists["A"]`.
### Checkpoints
A checkpoint is a sequential ID that represents a single point-in-time for consistency purposes. This is further explained in [Consistency](/architecture/consistency).
### Checksums for Verifying Data Integrity
For any checkpoint, the client and server compute a per-bucket checksum. This is essentially the sum of the checksums of individual operations within the bucket, with each individual checksum being a hash of the operation data.
The checksum helps to ensure that the client has all the correct data. In the hypothetical scenario where the bucket data becomes corrupted on the PowerSync Service, the checksums will stop matching, and the client will re-download the entire bucket.
Note: Checksums are not a cryptographically secure method of verifying data integrity. Rather, they are designed to detect simple data mismatches, whether due to bugs, bucket data tampering, or other corruption issues.
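A sketch of this scheme (the actual hash function and modulus used by PowerSync may differ):

```python
import hashlib

MOD = 2**32

def op_checksum(op_data: str) -> int:
    # Hash of an individual operation's data (hash choice is illustrative).
    return int.from_bytes(hashlib.sha256(op_data.encode()).digest()[:4], "big")

def bucket_checksum(ops) -> int:
    # Plain summation means the bucket checksum can be updated incrementally
    # as operations are appended, and is preserved by compacting.
    return sum(op_checksum(op) for op in ops) % MOD

ops = ['{"id":"t1","v":1}', '{"id":"t2","v":1}']
full = bucket_checksum(ops)
# Incremental update: extend the previous checksum instead of recomputing.
incremental = (bucket_checksum(ops[:1]) + op_checksum(ops[1])) % MOD
print(full == incremental)  # True
```

The additive structure is what allows the checksum to survive compacting: merged marker entries can carry the combined checksum of the operations they replace.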
### Compacting
To avoid indefinite growth in size of buckets, the operation history of a bucket can be [compacted](/maintenance-ops/compacting-buckets). Stale updates are replaced with marker entries, which can be merged together, while keeping the same checksums.
## Protocol
A client initiates a sync session using:
1. A JWT token that typically contains the `user_id`, and additional parameters (optional).
2. A list of current buckets that the client has, and the latest operation ID in each.
The server then responds with a stream of:
1. **Checkpoint available**: A new checkpoint ID, with a checksum for each bucket in the checkpoint.
2. **Data**: New operations for the above checkpoint for each relevant bucket, starting from the last operation ID as sent by the client.
3. **Checkpoint complete**: Sent once all data for the checkpoint have been sent.
The server then waits until a new checkpoint is available, then repeats the above sequence.
The stream can be interrupted at any time, at which point the client will initiate a new session, resuming from the last point.
If a checksum validation fails on the client, the client will delete the bucket and start a new sync session.
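The message sequence above can be sketched as a client-side loop that stages data per checkpoint and only applies it once the checkpoint is complete. The message shapes here are illustrative, not the exact wire format:

```python
def sync_session(messages, local_buckets):
    """Illustrative handler for the server's message stream. Data is staged
    per checkpoint and only applied on 'checkpoint_complete'."""
    pending = {}       # bucket -> staged ops for the current checkpoint
    checkpoint_id = None
    applied = []       # checkpoints fully applied to local state
    for msg in messages:
        if msg["type"] == "checkpoint_available":
            checkpoint_id = msg["checkpoint_id"]
            pending = {bucket: [] for bucket in msg["checksums"]}
        elif msg["type"] == "data":
            pending[msg["bucket"]].extend(msg["ops"])
        elif msg["type"] == "checkpoint_complete":
            # All data present: apply atomically (no intermediate state).
            for bucket, ops in pending.items():
                local_buckets.setdefault(bucket, []).extend(ops)
            applied.append(checkpoint_id)
    return applied

local = {}
applied = sync_session([
    {"type": "checkpoint_available", "checkpoint_id": 10, "checksums": {"b1": 123}},
    {"type": "data", "bucket": "b1", "ops": [("PUT", "r1")]},
    {"type": "checkpoint_complete"},
], local)
print(applied, local)
```

If the stream is interrupted before `checkpoint_complete`, the staged data is simply resumed from the last acknowledged operation ID in the next session.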
Data for individual rows is represented [using JSON](/architecture/client-architecture#client-side-schema-and-sqlite-database-structure). The protocol itself is schemaless — the client is expected to use its own copy of the schema, and gracefully handle schema differences.
#### Write Checkpoints
Write checkpoints are used to ensure clients have synced their own mutations back before applying downloaded data locally.
Creating a write checkpoint is a separate operation, which is performed by the client after all mutations have been uploaded (i.e. the client's [upload queue](/architecture/client-architecture#writing-data-via-sqlite-database-and-upload-queue) has been successfully fully processed and is empty). It is [important](/handling-writes/writing-client-changes#why-must-my-write-endpoint-be-synchronous) that this happens after the data has been written to the backend source database.
The server then keeps track of the current CDC stream position on the database (LSN in Postgres and SQL Server, resume token in MongoDB and GTID+Binlog Position in MySQL), and notifies the client when the data has been replicated, as part of checkpoint data in the normal data stream.
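A sketch of the server-side bookkeeping this implies (simplified; positions are plain integers standing in for an LSN, resume token or GTID):

```python
class WriteCheckpointTracker:
    """Sketch of server-side write checkpoint tracking: remember the CDC
    position at checkpoint creation, and resolve the checkpoint once
    replication has advanced past that position."""

    def __init__(self):
        self.pending = {}  # write checkpoint id -> CDC position to wait for
        self.next_id = 1

    def create(self, current_cdc_position):
        # Called once the client's upload queue has fully drained.
        checkpoint_id = self.next_id
        self.pending[checkpoint_id] = current_cdc_position
        self.next_id += 1
        return checkpoint_id

    def replicated_up_to(self, position):
        # Called as replication advances; resolved write checkpoints are
        # reported to the client as part of the normal checkpoint data stream.
        done = [cp for cp, pos in self.pending.items() if pos <= position]
        for cp in done:
            del self.pending[cp]
        return done

tracker = WriteCheckpointTracker()
cp = tracker.create(current_cdc_position=100)
print(tracker.replicated_up_to(99))   # [] - client's write not replicated yet
print(tracker.replicated_up_to(100))  # resolved: safe to apply new checkpoints
```

Until its write checkpoint resolves, the client keeps waiting rather than applying a checkpoint that might not yet contain its own mutations.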
# PowerSync Service
Source: https://docs.powersync.com/architecture/powersync-service
Understand the PowerSync Service architecture, including the bucket system, data replication, and real-time streaming sync.
When we say "PowerSync instance" we are referring to an instance of the [PowerSync Service](https://github.com/powersync-ja/powersync-service), the server-side component of the sync engine responsible for the *read path* from the source database to client-side SQLite databases. The primary purposes of the PowerSync Service are (1) replicating data from your source database (Postgres, MongoDB, MySQL, SQL Server), and (2) streaming data to clients. Both happen based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)).
## Bucket System
The concept of *buckets* is core to PowerSync and its scalability.
*Buckets* are basically partitions of data that allow the PowerSync Service to efficiently query the correct data that a specific client needs to sync.
With [Sync Streams](/sync/streams/overview), buckets are created **implicitly** based on your stream definitions, their queries, and subqueries. You don't need to understand or manage buckets directly — the PowerSync Service handles this automatically.
For example, if you define a stream like:
```yaml theme={null}
streams:
user_lists:
auto_subscribe: true
query: SELECT * FROM lists WHERE owner_id = auth.user_id()
```
PowerSync automatically creates the appropriate buckets internally based on the query parameters.
With legacy [Sync Rules](/sync/rules/overview), you explicitly define the buckets using `bucket_definitions` and specify which [parameters](/sync/rules/overview#parameters) are used for each bucket.
### How Buckets Work
To understand how buckets enable efficient syncing, consider this example: Let's say you have data scoped to users — the to-do lists for each user. Based on the data that exists in your source database, PowerSync will create individual buckets for each user. If users with IDs `1`, `2`, and `3` exist in your source database, PowerSync will create buckets with IDs `user_todo_lists["1"]`, `user_todo_lists["2"]`, and `user_todo_lists["3"]`.
When a user with `user_id=1` in their JWT connects to the PowerSync Service, PowerSync can very efficiently look up the appropriate bucket to sync, i.e. `user_todo_lists["1"]`.
With legacy Sync Rules, a bucket ID is formed from the bucket definition name and its parameter values, for example `user_todo_lists["1"]`. With Sync Streams, the bucket IDs are generated automatically based on your stream queries — you don't need to define and name buckets explicitly.
### Deduplication for Scalability
The bucket system also allows for high-scalability because it *deduplicates* data that is shared between different users.
For example, suppose that instead of `user_todo_lists`, we have `org_todo_lists` buckets, each containing the to-do lists for an *organization*, and we use an `organization_id` parameter from the JWT for this bucket. Now suppose that users with IDs `1` and `2` both belong to an organization with an ID of `1`. In this scenario, both users will sync from a bucket with a bucket ID of `org_todo_lists["1"]`.
This also means that the PowerSync Service has to keep track of less state per-user — and therefore, server-side resource requirements don't scale linearly with the number of users/clients.
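A sketch of how parameter-scoped bucket IDs lead to this deduplication (using the legacy Sync Rules naming convention shown earlier):

```python
import json

def bucket_id(definition, *params):
    # Legacy Sync Rules convention: definition name + JSON array of
    # parameter values, e.g. org_todo_lists["1"].
    return definition + json.dumps(list(params))

# Both users belong to organization "1", so they subscribe to the same bucket:
subscriptions = {
    user: bucket_id("org_todo_lists", org_id)
    for user, org_id in {"user1": "1", "user2": "1"}.items()
}
print(set(subscriptions.values()))  # {'org_todo_lists["1"]'}
```

Because the bucket ID is derived purely from the parameters, any number of users sharing those parameter values share one stored copy of the bucket's operation history.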
## Operation History
Each bucket stores the *recent history* of operations on each row, not just the latest state of the row.
This is another core part of the PowerSync architecture — the PowerSync Service can efficiently query the *operations* that each client needs to receive in order to be up to date. Tracking of operation history is also key to the data integrity and [consistency](/architecture/consistency) properties of PowerSync.
When a change occurs in the source database that affects a certain bucket (based on your Sync Streams, or legacy Sync Rules), that change will be appended to the operation history in that bucket. Buckets are therefore treated as "append-only" data structures. That being said, to avoid an ever-growing operation history, the buckets can be [compacted](/maintenance-ops/compacting-buckets) (this is automatically done on PowerSync Cloud).
## Bucket Storage
The PowerSync Service persists the bucket state in durable storage: there is a pluggable storage layer for bucket data, and MongoDB and Postgres are currently supported as *bucket storage* databases. The *bucket storage* database is separate from the connection to your *source database* (Postgres, MongoDB, MySQL or SQL Server). Our cloud-hosting offering (PowerSync Cloud) uses MongoDB Atlas as the *bucket storage* database.
Persisting the bucket state in a database is also part of how PowerSync achieves high scalability: it means that the PowerSync Service can have a low memory footprint even as you scale to very large volumes of synced data and users/clients.
## Replication From the Source Database
As mentioned above, one of the primary purposes of the PowerSync Service is replicating data from the source database, based on your Sync Streams (or legacy Sync Rules):
When the PowerSync Service replicates data from the source database, it:
1. Pre-processes the data according to your [Sync Streams](/sync/streams/overview) (or [Sync Rules](/sync/rules/overview)), splitting data into *buckets* (as explained above) and transforming the data if required.
2. Persists each operation into the relevant buckets, ready to be streamed to clients.
### Initial Replication vs. Incremental Replication
Whenever a new version of Sync Streams (or legacy Sync Rules) is deployed, initial replication takes place by taking a snapshot of all tables/collections they reference.
After that, data is incrementally replicated using a change data capture stream (the specific mechanism depends on the source database type: Postgres logical replication, MongoDB change streams, the MySQL binlog, or SQL Server Change Data Capture).
## Streaming Sync
As mentioned above, the other primary purpose of the PowerSync Service is streaming data to clients.
The PowerSync Service authenticates clients/users using [JWTs](/configuration/auth/overview). Once a client/user is authenticated:
1. The PowerSync Service calculates a list of buckets for the user to sync based on their Sync Stream subscriptions (or [Parameter Queries](/sync/rules/parameter-queries) in legacy Sync Rules).
2. The Service streams any operations added to those buckets since the last time the client/user connected.
The Service then continuously monitors for buckets that are added or removed, as well as for new operations within those buckets, and streams those changes.
Only the internal *bucket storage* of the PowerSync Service is used — the source database is not queried directly during streaming.
For more details on exactly how streaming sync works, see [PowerSync Protocol](/architecture/powersync-protocol#protocol).
## Source Code Repo
The repo for the PowerSync Service can be found here: [powersync-ja/powersync-service](https://github.com/powersync-ja/powersync-service)
# Attachments / Files
Source: https://docs.powersync.com/client-sdks/advanced/attachments
Keep files out of your database and handle attachments in an entirely storage-agnostic way. PowerSync syncs minimal metadata while an offline-first queue automatically handles uploads, downloads, and retries.
## Introduction
The `@powersync/attachments` package (JavaScript/TypeScript) and `powersync_attachments_helper` package (Flutter/Dart) are deprecated. Attachment functionality is now built into the PowerSync SDKs. Please use the [built-in attachment helpers](#sdk-%26-demo-reference) instead, and see the [migration notes](#migrating-from-deprecated-packages).
While PowerSync excels at syncing structured data, storing large files (images, videos, PDFs) directly in SQLite is not recommended. Embedding files as base64-encoded data or binary blobs in database rows can lead to many issues.
Instead, PowerSync uses a **metadata + storage provider pattern**: sync small metadata records through PowerSync while storing actual files in purpose-built storage systems (S3, Supabase Storage, Cloudflare R2, etc.). This approach provides:
* **Optimal performance** - Database stays small and fast
* **Automatic queue management** - Background uploads/downloads with retry logic
* **Offline-first support** - Local files available immediately, sync happens in background
* **Cache management** - Automatic cleanup of unused files
* **Platform flexibility** - Works across web, mobile, and desktop
## SDK & Demo Reference
We provide attachment helpers for multiple platforms:
| SDK | Package | Min. SDK version | Demo App |
| ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **JavaScript/TypeScript** | [Built-in attachments (alpha)](https://github.com/powersync-ja/powersync-js/tree/main/packages/common/src/attachments) | Web v1.33.0, React Native v1.30.0, Node.js v0.17.0 | [React Native Todo](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-todolist) · [React Web Todo](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-supabase-todolist) |
| **Flutter** | [Built-in attachments (alpha)](https://pub.dev/documentation/powersync_core/latest/topics/attachments-topic.html) | v1.16.0 | [Flutter Todo](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist) |
| **Swift** | [Built-in attachments (alpha)](https://github.com/powersync-ja/powersync-swift/blob/main/Sources/PowerSync/attachments/README.md) | v1.0.0 | [iOS Demo](https://github.com/powersync-ja/powersync-swift/tree/main/Demo) |
| **Kotlin** | [Built-in attachments (alpha)](https://github.com/powersync-ja/powersync-kotlin/tree/main/common/src/commonMain/kotlin/com/powersync/attachments) | v1.0.0 | [Android Todo](https://github.com/powersync-ja/powersync-kotlin/tree/main/demos/android-supabase-todolist) |
Most demo applications use Supabase Storage as the storage provider, but the patterns are adaptable to any storage system.
## How It Works
### Workflow
1. **Save file** - Your app calls `saveFile()` with file data and an `updateHook` to handle linking the attachment to your data model
2. **Queue for upload** - File is saved locally and a record is created in the attachments table with state `QUEUED_UPLOAD`
3. **Background upload** - The attachment queue automatically uploads file to remote storage (S3/Supabase/etc.)
4. **Remote storage** - File is stored in remote storage with the attachment ID
5. **State update** - The `updateHook` runs, updating your data model with the attachment ID and marking the file locally as `SYNCED`
6. **Cross-device sync** - PowerSync syncs the data model changes to other clients
7. **Data model updated** - Other clients receive the updated data model with the new attachment reference (e.g., `user.photo_id = "id-123"`)
8. **Watch detects attachment** - Other clients' `watchAttachments()` callback detects the new attachment reference and creates a record in the attachments table with state `QUEUED_DOWNLOAD`
9. **File download** - The attachment queue automatically downloads the file from remote storage
10. **Local storage** - File is saved to local storage on the other client
11. **State update** - File is marked locally as `SYNCED` and ready for use
### Attachment States
| State | Description |
| ----------------- | ------------------------------------------------------------------ |
| `QUEUED_UPLOAD` | File saved locally, waiting to upload to remote storage |
| `QUEUED_DOWNLOAD` | Data model synced from another device, file needs to be downloaded |
| `SYNCED` | File exists both locally and in remote storage, fully synchronized |
| `QUEUED_DELETE` | Marked for deletion from both local and remote storage |
| `ARCHIVED` | No longer referenced in your data model, candidate for cleanup |
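As a rough sketch, the lifecycle described above can be modeled as a transition graph. The exact transitions are internal to the SDKs; this only reflects the flow documented here (upload/download → synced → archived → delete):

```python
# Illustrative lifecycle transitions between the attachment states above.
TRANSITIONS = {
    "QUEUED_UPLOAD": {"SYNCED"},
    "QUEUED_DOWNLOAD": {"SYNCED"},
    "SYNCED": {"ARCHIVED", "QUEUED_DELETE"},
    "ARCHIVED": {"QUEUED_DELETE", "QUEUED_DOWNLOAD"},  # cleanup, or re-referenced
    "QUEUED_DELETE": set(),
}

def can_transition(current, target):
    return target in TRANSITIONS.get(current, set())

print(can_transition("QUEUED_UPLOAD", "SYNCED"))  # True
print(can_transition("QUEUED_DELETE", "SYNCED"))  # False
```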
## Core Components
### Attachment Table
The **Attachment Table** is a local-only table that stores metadata about each file. It's not synced through PowerSync's sync rules - instead, it's managed entirely by the attachment queue on each device.
**Metadata stored:**
* `id` - Unique attachment identifier (UUID)
* `filename` - File name with extension (e.g., `photo-123.jpg`)
* `localUri` - Path to file in local storage
* `size` - File size in bytes
* `mediaType` - MIME type (e.g., `image/jpeg`)
* `state` - Current sync state (see states above)
* `hasSynced` - Boolean indicating if file has ever been uploaded
* `timestamp` - Last update time
* `metaData` - Optional JSON string for custom data
**Key characteristics:**
* **Local-only** - Each device maintains its own attachment table
* **Automatic management** - Queue handles all inserts/updates
* **Cross-client coordination** - Your data model (e.g., `users.photo_id`) tells each client which files it needs
### Remote Storage Adapter
The **Remote Storage Adapter** is an interface you implement to connect PowerSync with your cloud storage provider. It's completely platform-agnostic - implementations can use S3, Supabase Storage, Cloudflare R2, Azure Blob, or even IPFS.
**Interface methods:**
* `uploadFile(fileData, attachment)` - Upload file to cloud storage
* `downloadFile(attachment)` - Download file from cloud storage
* `deleteFile(attachment)` - Delete file from cloud storage
**Common pattern:**
For security reasons, client-side implementations should use **signed URLs**:
1. Request a signed upload/download URL from your backend
2. Your backend validates permissions and generates a temporary URL
3. Client uploads/downloads directly to storage using the signed URL
4. Never expose storage credentials to clients
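A minimal sketch of a remote storage adapter built around signed URLs. Method names follow Python conventions here (the SDK interfaces use `uploadFile`/`downloadFile`/`deleteFile`); the `get_signed_url` callback stands in for a request to your own backend, and the injectable `transport` (an in-memory fake below) stands in for real HTTP calls:

```python
class SignedUrlRemoteStorage:
    """Sketch of a remote storage adapter using signed URLs. Storage
    credentials never reach the client; only short-lived URLs do."""

    def __init__(self, get_signed_url, transport):
        self.get_signed_url = get_signed_url  # backend validates permissions
        self.transport = transport            # (method, url, body) -> bytes | None

    def upload_file(self, file_data, attachment):
        url = self.get_signed_url("upload", attachment["filename"])
        self.transport("PUT", url, file_data)

    def download_file(self, attachment):
        url = self.get_signed_url("download", attachment["filename"])
        return self.transport("GET", url, None)

    def delete_file(self, attachment):
        url = self.get_signed_url("delete", attachment["filename"])
        self.transport("DELETE", url, None)

# In-memory fake transport, so the sketch runs without a storage account.
store = {}
def fake_transport(method, url, body):
    if method == "PUT":
        store[url] = body
    elif method == "GET":
        return store[url]
    elif method == "DELETE":
        del store[url]

adapter = SignedUrlRemoteStorage(
    get_signed_url=lambda op, name: f"https://storage.example.com/{name}?sig=abc",
    transport=fake_transport,
)
adapter.upload_file(b"jpeg-bytes", {"filename": "photo-123.jpg"})
print(adapter.download_file({"filename": "photo-123.jpg"}))  # b'jpeg-bytes'
```

In a real implementation, `transport` would be your platform's HTTP client, and `get_signed_url` would call a hypothetical backend endpoint that checks the user's permissions before signing.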
### Local Storage Adapter
The **Local Storage Adapter** handles file persistence on the device. PowerSync provides implementations for common platforms and allows you to create custom adapters.
**Interface methods:**
* `initialize()` - Set up storage (create directories, etc.)
* `saveFile(path, data)` - Write file to storage
* `readFile(path)` - Read file from storage
* `deleteFile(path)` - Remove file from storage
* `fileExists(path)` - Check if file exists
* `getLocalUri(filename)` - Get full path for a filename
**Built-in adapters:**
* **IndexedDB** - For web browsers (`IndexDBFileSystemStorageAdapter`)
* **Node.js Filesystem** - For Node/Electron (`NodeFileSystemAdapter`)
* **React Native** - For React Native with Expo or bare React Native we have a dedicated package [(`@powersync/attachments-storage-react-native`)](https://github.com/powersync-ja/powersync-js/tree/main/packages/attachments-storage-react-native)
* **Native mobile storage** - For Flutter, Kotlin, Swift
The React Native local storage adapter requires Expo 54 or later.
### Attachment Queue
The **Attachment Queue** is the orchestrator that manages the entire attachment lifecycle. It:
* **Watches your data model** - You pass a `watchAttachments` function as a parameter that monitors which files your app references
* **Manages state transitions** - Automatically moves files through states (upload/download → synced → archive → delete)
* **Handles retries** - Failed operations are retried on the next sync interval
* **Performs cleanup** - Removes archived files that are no longer needed
* **Verifies integrity** - Checks local files exist and repairs inconsistencies
**Watched Attachments pattern:**
The queue needs to know which attachments exist in your data model. The `watchAttachments` function you provide monitors your data model and returns a list of attachment IDs that your app references. The queue compares this list with its internal attachment table to determine:
* **New attachments** - Download them
* **Missing attachments** - Upload them
* **Removed attachments** - Archive them
The `watchAttachments` queries are reactive and execute whenever the watched tables change, keeping the attachment queue synchronized with your data model.
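The reconciliation step can be sketched as a simple set comparison (simplified: the real queue also tracks upload state and verifies local file existence):

```python
def reconcile(referenced_ids, local_table_ids):
    """Compare attachment IDs referenced by the data model (from the
    watchAttachments query) against the local attachments table."""
    referenced, local = set(referenced_ids), set(local_table_ids)
    return {
        "download": referenced - local,  # newly referenced: fetch the file
        "archive": local - referenced,   # no longer referenced: archive
    }

plan = reconcile(referenced_ids={"a1", "a2"}, local_table_ids={"a2", "a3"})
print(sorted(plan["download"]), sorted(plan["archive"]))  # ['a1'] ['a3']
```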
There are a few scenarios you might encounter:
**Single Attachment Type**
For a single attachment type, you watch one table. For example, if users have profile photos:
```sql theme={null}
SELECT photo_id FROM users WHERE photo_id IS NOT NULL
```
**Multiple Attachment Types - Single Queue**
You can watch multiple attachment types using a single queue by combining queries with SQL `UNION` or `UNION ALL`. This allows you to monitor attachments across different tables (e.g., `users.photo_id`, `documents.document_id`, `videos.video_id`) in one queue. Each attachment type may have different file extensions, which can be handled in the query by selecting the extension from your data model or using type-specific defaults.
For example:
```sql theme={null}
SELECT photo_id as id, photo_file_extension as file_extension
FROM users
WHERE photo_id IS NOT NULL
UNION ALL
SELECT document_id as id, document_file_extension as file_extension
FROM documents
WHERE document_id IS NOT NULL
UNION ALL
SELECT video_id as id, video_file_extension as file_extension
FROM videos
WHERE video_id IS NOT NULL
```
Use `UNION ALL` when you want to include all rows (including duplicates), or `UNION` when you want to automatically deduplicate results. For attachment watching, `UNION ALL` is typically preferred since attachment IDs should already be unique.
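The difference can be illustrated with a toy TypeScript analogy over plain arrays (the helper names here are hypothetical, not part of any SDK):

```typescript
// UNION ALL keeps every row from both inputs, including duplicates.
function unionAll<T>(a: T[], b: T[]): T[] {
  return [...a, ...b];
}

// UNION deduplicates the combined rows.
function union<T>(a: T[], b: T[]): T[] {
  return [...new Set([...a, ...b])];
}
```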
The UNION query executes whenever any of the watched tables change, which may have higher database overhead compared to watching a single table. Implementation examples are shown in the [Initialize Attachment Queue](#initialize-attachment-queue) section below.
**Multiple Attachment Types - Multiple Queues**
Alternatively, you can create separate queues for different attachment types. Each queue watches its own table(s) with a simpler query, allowing independent configuration and management, at the cost of some additional memory per queue. Implementation examples are shown in the [Initialize Attachment Queue](#initialize-attachment-queue) section below.
## Implementation Guide
### Installation
```bash JavaScript/TypeScript theme={null}
# The attachment helpers are included with the @powersync/web,
# @powersync/node and @powersync/react-native packages.
# For the React Native storage adapters, additionally install:
npm install @powersync/attachments-storage-react-native
```
```bash Flutter theme={null}
# Included with the Flutter SDK - see the SDK installation guide.
```
```swift Swift theme={null}
// Included with the Swift SDK - see the SDK installation guide.
```
```kotlin Kotlin theme={null}
// Included with the Kotlin SDK - see the SDK installation guide.
```
### Setup: Add Attachment Table to Schema
```typescript JavaScript/TypeScript theme={null}
import { Schema, Table, column, AttachmentTable } from '@powersync/web';
const appSchema = new Schema({
users: new Table({
name: column.text,
email: column.text,
photo_id: column.text // References attachment ID
}),
// Add the attachment table
attachments: new AttachmentTable()
});
```
```dart Flutter theme={null}
import 'package:powersync/powersync.dart';
import 'package:powersync_core/attachments/attachments.dart';
final schema = Schema([
Table('users', [
Column.text('name'),
Column.text('email'),
Column.text('photo_id'), // References attachment ID
]),
AttachmentsQueueTable(),
]);
```
```swift Swift theme={null}
import PowerSync
let users = Table(
name: "users",
columns: [
Column.text("name"),
Column.text("email"),
Column.text("photo_id"), // References attachment ID
]
)
let schema = Schema(
tables: [
users,
// Add the local-only table which stores attachment states
createAttachmentTable(name: "attachments")
]
)
```
```kotlin Kotlin theme={null}
import com.powersync.attachments.createAttachmentsTable
import com.powersync.db.schema.Column
import com.powersync.db.schema.Schema
import com.powersync.db.schema.Table
val users = Table(
name = "users",
columns = listOf(
Column.text("name"),
Column.text("email"),
Column.text("photo_id") // References attachment ID
)
)
val schema = Schema(
users,
// Add the local-only table which stores attachment states
createAttachmentsTable("attachments")
)
```
### Configure Storage Adapters
```typescript JavaScript/TypeScript theme={null}
// For web browsers (IndexedDB)
import { IndexDBFileSystemStorageAdapter } from '@powersync/web';
const localStorage = new IndexDBFileSystemStorageAdapter('my-app-files');
// For Node.js/Electron (filesystem)
// import { NodeFileSystemAdapter } from '@powersync/node';
// const localStorage = new NodeFileSystemAdapter('./user-attachments');
// For React Native (Expo or bare React Native)
// Need to install @powersync/attachments-storage-react-native
//
// For Expo projects, also install expo-file-system
// import { ExpoFileSystemStorageAdapter } from '@powersync/attachments-storage-react-native';
// const localStorage = new ExpoFileSystemStorageAdapter();
//
// For bare React Native, also install @dr.pogodin/react-native-fs
// import { ReactNativeFileSystemStorageAdapter } from '@powersync/attachments-storage-react-native';
// const localStorage = new ReactNativeFileSystemStorageAdapter();
// Remote storage adapter (example with signed URLs)
const remoteStorage = {
async uploadFile(fileData: ArrayBuffer, attachment: AttachmentRecord) {
// Request signed upload URL from your backend
const { uploadUrl } = await fetch('/api/attachments/upload-url', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
filename: attachment.filename,
contentType: attachment.mediaType
})
}).then(r => r.json());
// Upload to cloud storage using signed URL
await fetch(uploadUrl, {
method: 'PUT',
body: fileData,
headers: {
'Content-Type': attachment.mediaType || 'application/octet-stream'
}
});
},
async downloadFile(attachment: AttachmentRecord): Promise<ArrayBuffer> {
// Request signed download URL from your backend
const { downloadUrl } = await fetch(
`/api/attachments/${attachment.id}/download-url`
).then(r => r.json());
// Download from cloud storage
const response = await fetch(downloadUrl);
return response.arrayBuffer();
},
async deleteFile(attachment: AttachmentRecord) {
// Delete via your backend
await fetch(`/api/attachments/${attachment.id}`, {
method: 'DELETE'
});
}
};
```
```dart Flutter theme={null}
import 'dart:convert';
import 'dart:io';
import 'dart:typed_data';
import 'package:path_provider/path_provider.dart';
import 'package:powersync_core/attachments/attachments.dart';
import 'package:powersync_core/attachments/io.dart';
import 'package:http/http.dart' as http;
// For Flutter (native platforms)
Future<IOLocalStorage> getLocalStorage() async {
final appDocDir = await getApplicationDocumentsDirectory();
final attachmentsDir = Directory('${appDocDir.path}/attachments');
return IOLocalStorage(attachmentsDir);
}
// Remote storage adapter (example with signed URLs)
class SignedUrlStorageAdapter implements RemoteStorage {
@override
Future<void> uploadFile(
Stream<List<int>> fileData,
Attachment attachment,
) async {
// Request signed upload URL from your backend
final response = await http.post(
Uri.parse('/api/attachments/upload-url'),
headers: {'Content-Type': 'application/json'},
body: jsonEncode({
'filename': attachment.filename,
'contentType': attachment.mediaType,
}),
);
final uploadUrl = jsonDecode(response.body)['uploadUrl'] as String;
// Collect stream data
final bytes = <int>[];
await for (final chunk in fileData) {
bytes.addAll(chunk);
}
// Upload to cloud storage using signed URL
await http.put(
Uri.parse(uploadUrl),
body: Uint8List.fromList(bytes),
headers: {
'Content-Type': attachment.mediaType ?? 'application/octet-stream',
},
);
}
@override
Future<Stream<List<int>>> downloadFile(Attachment attachment) async {
// Request signed download URL from your backend
final response = await http.get(
Uri.parse('/api/attachments/${attachment.id}/download-url'),
);
final downloadUrl = jsonDecode(response.body)['downloadUrl'] as String;
// Download from cloud storage
final httpResponse = await http.get(Uri.parse(downloadUrl));
return Stream.value(httpResponse.bodyBytes);
}
@override
Future<void> deleteFile(Attachment attachment) async {
// Delete via your backend
await http.delete(
Uri.parse('/api/attachments/${attachment.id}'),
);
}
}
```
```swift Swift theme={null}
import Foundation
import PowerSync
// For iOS/macOS (FileManager)
func getAttachmentsDirectoryPath() throws -> String {
guard let documentsURL = FileManager.default.urls(
for: .documentDirectory,
in: .userDomainMask
).first else {
throw PowerSyncAttachmentError.attachmentError("Could not determine attachments directory path")
}
return documentsURL.appendingPathComponent("attachments").path
}
let localStorage = FileManagerStorageAdapter()
// Remote storage adapter (example with signed URLs)
class SignedUrlStorageAdapter: RemoteStorageAdapter {
func uploadFile(fileData: Data, attachment: Attachment) async throws {
// Request signed upload URL from your backend
struct UploadUrlResponse: Codable {
let uploadUrl: String
}
let requestBody = [
"filename": attachment.filename,
"contentType": attachment.mediaType ?? "application/octet-stream"
]
var request = URLRequest(url: URL(string: "/api/attachments/upload-url")!)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.httpBody = try JSONSerialization.data(withJSONObject: requestBody)
let (data, _) = try await URLSession.shared.data(for: request)
let response = try JSONDecoder().decode(UploadUrlResponse.self, from: data)
// Upload to cloud storage using signed URL
var uploadRequest = URLRequest(url: URL(string: response.uploadUrl)!)
uploadRequest.httpMethod = "PUT"
uploadRequest.setValue(attachment.mediaType ?? "application/octet-stream", forHTTPHeaderField: "Content-Type")
uploadRequest.httpBody = fileData
let (_, uploadResponse) = try await URLSession.shared.data(for: uploadRequest)
guard let httpResponse = uploadResponse as? HTTPURLResponse,
(200...299).contains(httpResponse.statusCode) else {
throw PowerSyncAttachmentError.generalError("Upload failed")
}
}
func downloadFile(attachment: Attachment) async throws -> Data {
// Request signed download URL from your backend
struct DownloadUrlResponse: Codable {
let downloadUrl: String
}
let request = URLRequest(url: URL(string: "/api/attachments/\(attachment.id)/download-url")!)
let (data, _) = try await URLSession.shared.data(for: request)
let response = try JSONDecoder().decode(DownloadUrlResponse.self, from: data)
// Download from cloud storage
let downloadRequest = URLRequest(url: URL(string: response.downloadUrl)!)
let (fileData, _) = try await URLSession.shared.data(for: downloadRequest)
return fileData
}
func deleteFile(attachment: Attachment) async throws {
// Delete via your backend
var request = URLRequest(url: URL(string: "/api/attachments/\(attachment.id)")!)
request.httpMethod = "DELETE"
let (_, response) = try await URLSession.shared.data(for: request)
guard let httpResponse = response as? HTTPURLResponse,
(200...299).contains(httpResponse.statusCode) else {
throw PowerSyncAttachmentError.generalError("Delete failed")
}
}
}
let remoteStorage = SignedUrlStorageAdapter()
```
```kotlin Kotlin theme={null}
import com.powersync.attachments.LocalStorage
import com.powersync.attachments.RemoteStorage
import com.powersync.attachments.Attachment
import com.powersync.attachments.storage.IOLocalStorageAdapter
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flowOf
import kotlinx.io.files.Path
// For local storage (uses IOLocalStorageAdapter by default)
// On Android: "${applicationContext.filesDir.canonicalPath}/attachments"
val attachmentsDirectory = Path("attachments").toString()
val localStorage: LocalStorage = IOLocalStorageAdapter()
// Remote storage adapter (example with signed URLs)
val remoteStorage = object : RemoteStorage {
override suspend fun uploadFile(
fileData: Flow<ByteArray>,
attachment: Attachment
) {
// Request signed upload URL from your backend
val uploadUrl: String = TODO("Fetch a signed upload URL from your API")
// Upload to cloud storage using signed URL
// Collect the flow and upload
val bytes = mutableListOf<ByteArray>()
fileData.collect { bytes.add(it) }
val allBytes = bytes.flatMap { it.toList() }.toByteArray()
// Upload allBytes to uploadUrl
// ... your HTTP upload implementation
}
override suspend fun downloadFile(attachment: Attachment): Flow<ByteArray> {
// Request signed download URL from your backend
val downloadUrl: String = TODO("Fetch a signed download URL from your API")
// Download from cloud storage
val response: ByteArray = TODO("Download from downloadUrl with your HTTP client")
return flowOf(response) // or convert your ByteArray stream to a Flow
}
override suspend fun deleteFile(attachment: Attachment) {
// Delete via your backend
// ... your HTTP delete implementation
}
}
```
**Security Best Practice:** Always use your backend to generate signed URLs and validate permissions. Never expose storage credentials directly to clients.
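As a toy illustration of the signed-URL pattern, here is HMAC-based signing and verification using Node's built-in `crypto` module. In practice you would use your storage provider's presigning API (e.g. S3 presigned URLs); the function names and URL format below are hypothetical:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Toy sketch of backend-side URL signing: an HMAC over the path plus an
// expiry timestamp. Prefer your storage provider's presigning API in practice.
function makeSignedUrl(path: string, secret: string, expiresAt: number): string {
  const signature = createHmac('sha256', secret)
    .update(`${path}:${expiresAt}`)
    .digest('hex');
  return `${path}?expires=${expiresAt}&sig=${signature}`;
}

function verifySignedUrl(url: string, secret: string, now: number): boolean {
  const [path, query] = url.split('?');
  const params = new URLSearchParams(query);
  const expiresAt = Number(params.get('expires'));
  if (!Number.isFinite(expiresAt) || now > expiresAt) return false;
  const expected = createHmac('sha256', secret)
    .update(`${path}:${expiresAt}`)
    .digest('hex');
  const actual = params.get('sig') ?? '';
  // Constant-time comparison to avoid leaking signature bytes via timing
  return (
    actual.length === expected.length &&
    timingSafeEqual(Buffer.from(actual), Buffer.from(expected))
  );
}
```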
### Initialize Attachment Queue
```typescript JavaScript/TypeScript theme={null}
import { AttachmentQueue } from '@powersync/web';
const attachmentQueue = new AttachmentQueue({
db: db, // PowerSync database instance
localStorage,
remoteStorage,
// Define which attachments exist in your data model
watchAttachments: (onUpdate) => {
db.watch(
`SELECT photo_id FROM users WHERE photo_id IS NOT NULL`,
[],
{
onResult: async (result) => {
const attachments = result.rows?._array.map(row => ({
id: row.photo_id,
fileExtension: 'jpg'
})) ?? [];
await onUpdate(attachments);
}
}
);
},
// Optional configuration
syncIntervalMs: 30000, // Sync every 30 seconds
downloadAttachments: true, // Auto-download referenced files
archivedCacheLimit: 100 // Keep 100 archived files before cleanup
});
// Start the sync process
await attachmentQueue.startSync();
```
```dart Flutter theme={null}
import 'package:logging/logging.dart';
import 'package:powersync/powersync.dart';
import 'package:powersync_core/attachments/attachments.dart';
final logger = Logger('AttachmentQueue');
late AttachmentQueue attachmentQueue;
Future initializeAttachmentQueue(PowerSyncDatabase db) async {
attachmentQueue = AttachmentQueue(
db: db,
remoteStorage: SignedUrlStorageAdapter(),
localStorage: await getLocalStorage(),
// Define which attachments exist in your data model
watchAttachments: () => db.watch('''
SELECT photo_id as id
FROM users
WHERE photo_id IS NOT NULL
''').map(
(results) => [
for (final row in results)
WatchedAttachmentItem(
id: row['id'] as String,
fileExtension: 'jpg',
)
],
),
// Optional configuration
syncInterval: const Duration(seconds: 30), // Sync every 30 seconds
downloadAttachments: true, // Auto-download referenced files
archivedCacheLimit: 100, // Keep 100 archived files before cleanup
logger: logger,
);
// Start the sync process
await attachmentQueue.startSync();
}
```
```swift Swift theme={null}
let attachmentQueue = AttachmentQueue(
db: db, // PowerSync database instance
remoteStorage: remoteStorage,
attachmentsDirectory: try getAttachmentsDirectoryPath(),
// Define which attachments exist in your data model
watchAttachments: {
try db.watch(
sql: """
SELECT photo_id
FROM users
WHERE photo_id IS NOT NULL
""",
parameters: [],
mapper: { cursor in
try WatchedAttachmentItem(
id: cursor.getString(name: "photo_id"),
fileExtension: "jpg"
)
}
)
},
// Optional configuration
syncInterval: 30.0, // Sync every 30 seconds
downloadAttachments: true, // Auto-download referenced files
archivedCacheLimit: 100 // Keep 100 archived files before cleanup
)
// Start the sync process
try await attachmentQueue.startSync()
```
```kotlin Kotlin theme={null}
import com.powersync.attachments.AttachmentQueue
import com.powersync.attachments.WatchedAttachmentItem
import com.powersync.db.getString
import kotlinx.coroutines.flow.Flow
import kotlin.time.Duration.Companion.seconds
val attachmentQueue = AttachmentQueue(
db = db, // PowerSync database instance
remoteStorage = remoteStorage,
attachmentsDirectory = attachmentsDirectory,
localStorage = localStorage, // Optional, defaults to IOLocalStorageAdapter()
// Define which attachments exist in your data model
watchAttachments = {
db.watch(
sql = """
SELECT photo_id
FROM users
WHERE photo_id IS NOT NULL
""",
parameters = null
) { cursor ->
WatchedAttachmentItem(
id = cursor.getString("photo_id"),
fileExtension = "jpg"
)
}
},
// Optional configuration
syncInterval = 30.seconds, // Sync every 30 seconds
downloadAttachments = true, // Auto-download referenced files
archivedCacheLimit = 100 // Keep 100 archived files before cleanup
)
// Start the sync process
attachmentQueue.startSync()
```
The `watchAttachments` callback is crucial - it tells the queue which files your app needs based on your data model. The queue uses this to automatically download, upload, or archive files.
#### Watching Multiple Attachment Types
When watching multiple attachment types, you need to provide the `fileExtension` for each attachment. You can store this in your data model tables or derive it from other fields. Here are examples for both patterns:
**Pattern 2: Single Queue with UNION**
```typescript JavaScript/TypeScript theme={null}
// Example: Watching users.photo_id, documents.document_id, and videos.video_id
// Assuming your tables store file extensions
const attachmentQueue = new AttachmentQueue({
db: db,
localStorage,
remoteStorage,
watchAttachments: (onUpdate) => {
db.watch(
`SELECT photo_id as id, photo_file_extension as file_extension
FROM users
WHERE photo_id IS NOT NULL
UNION ALL
SELECT document_id as id, document_file_extension as file_extension
FROM documents
WHERE document_id IS NOT NULL
UNION ALL
SELECT video_id as id, video_file_extension as file_extension
FROM videos
WHERE video_id IS NOT NULL`,
[],
{
onResult: async (result) => {
const attachments = result.rows?._array.map(row => ({
id: row.id,
fileExtension: row.file_extension
})) ?? [];
await onUpdate(attachments);
}
}
);
},
// ... other options
});
await attachmentQueue.startSync();
```
```dart Flutter theme={null}
// Example: Watching users.photo_id, documents.document_id, and videos.video_id
// Assuming your tables store file extensions
attachmentQueue = AttachmentQueue(
db: db,
remoteStorage: SignedUrlStorageAdapter(),
localStorage: await getLocalStorage(),
watchAttachments: () => db.watch('''
SELECT photo_id as id, photo_file_extension as file_extension
FROM users
WHERE photo_id IS NOT NULL
UNION ALL
SELECT document_id as id, document_file_extension as file_extension
FROM documents
WHERE document_id IS NOT NULL
UNION ALL
SELECT video_id as id, video_file_extension as file_extension
FROM videos
WHERE video_id IS NOT NULL
''').map(
(results) => [
for (final row in results)
WatchedAttachmentItem(
id: row['id'] as String,
fileExtension: row['file_extension'] as String,
)
],
),
// ... other options
);
await attachmentQueue.startSync();
```
```swift Swift theme={null}
// Example: Watching users.photo_id, documents.document_id, and videos.video_id
// Assuming your tables store file extensions
let attachmentQueue = AttachmentQueue(
db: db,
remoteStorage: remoteStorage,
attachmentsDirectory: try getAttachmentsDirectoryPath(),
watchAttachments: {
try db.watch(
sql: """
SELECT photo_id as id, photo_file_extension as file_extension
FROM users
WHERE photo_id IS NOT NULL
UNION ALL
SELECT document_id as id, document_file_extension as file_extension
FROM documents
WHERE document_id IS NOT NULL
UNION ALL
SELECT video_id as id, video_file_extension as file_extension
FROM videos
WHERE video_id IS NOT NULL
""",
parameters: [],
mapper: { cursor in
try WatchedAttachmentItem(
id: cursor.getString(name: "id"),
fileExtension: cursor.getString(name: "file_extension")
)
}
)
},
// ... other options
)
try await attachmentQueue.startSync()
```
```kotlin Kotlin theme={null}
// Example: Watching users.photo_id, documents.document_id, and videos.video_id
// Assuming your tables store file extensions
val attachmentQueue = AttachmentQueue(
db = db,
remoteStorage = remoteStorage,
attachmentsDirectory = attachmentsDirectory,
localStorage = localStorage,
watchAttachments = {
db.watch(
sql = """
SELECT photo_id as id, photo_file_extension as file_extension
FROM users
WHERE photo_id IS NOT NULL
UNION ALL
SELECT document_id as id, document_file_extension as file_extension
FROM documents
WHERE document_id IS NOT NULL
UNION ALL
SELECT video_id as id, video_file_extension as file_extension
FROM videos
WHERE video_id IS NOT NULL
""",
parameters = null
) { cursor ->
WatchedAttachmentItem(
id = cursor.getString("id"),
fileExtension = cursor.getString("file_extension")
)
}
},
// ... other options
)
attachmentQueue.startSync()
```
**Pattern 3: Multiple Queues**
```typescript JavaScript/TypeScript theme={null}
// Create separate queues for different attachment types
const photoQueue = new AttachmentQueue({
db: db,
localStorage,
remoteStorage,
watchAttachments: (onUpdate) => {
db.watch(
`SELECT photo_id FROM users WHERE photo_id IS NOT NULL`,
[],
{
onResult: async (result) => {
const attachments = result.rows?._array.map(row => ({
id: row.photo_id,
fileExtension: 'jpg'
})) ?? [];
await onUpdate(attachments);
}
}
);
},
});
const documentQueue = new AttachmentQueue({
db: db,
localStorage,
remoteStorage,
watchAttachments: (onUpdate) => {
db.watch(
`SELECT document_id FROM documents WHERE document_id IS NOT NULL`,
[],
{
onResult: async (result) => {
const attachments = result.rows?._array.map(row => ({
id: row.document_id,
fileExtension: 'pdf'
})) ?? [];
await onUpdate(attachments);
}
}
);
},
});
await Promise.all([
photoQueue.startSync(),
documentQueue.startSync()
]);
```
```dart Flutter theme={null}
// Create separate queues for different attachment types
final photoQueue = AttachmentQueue(
db: db,
remoteStorage: SignedUrlStorageAdapter(),
localStorage: await getLocalStorage(),
watchAttachments: () => db.watch('''
SELECT photo_id as id
FROM users
WHERE photo_id IS NOT NULL
''').map(
(results) => [
for (final row in results)
WatchedAttachmentItem(
id: row['id'] as String,
fileExtension: 'jpg',
)
],
),
);
final documentQueue = AttachmentQueue(
db: db,
remoteStorage: SignedUrlStorageAdapter(),
localStorage: await getLocalStorage(),
watchAttachments: () => db.watch('''
SELECT document_id as id
FROM documents
WHERE document_id IS NOT NULL
''').map(
(results) => [
for (final row in results)
WatchedAttachmentItem(
id: row['id'] as String,
fileExtension: 'pdf',
)
],
),
);
await Future.wait([
photoQueue.startSync(),
documentQueue.startSync(),
]);
```
```swift Swift theme={null}
// Create separate queues for different attachment types
let photoQueue = AttachmentQueue(
db: db,
remoteStorage: remoteStorage,
attachmentsDirectory: try getAttachmentsDirectoryPath(),
watchAttachments: {
try db.watch(
sql: """
SELECT photo_id
FROM users
WHERE photo_id IS NOT NULL
""",
parameters: [],
mapper: { cursor in
try WatchedAttachmentItem(
id: cursor.getString(name: "photo_id"),
fileExtension: "jpg"
)
}
)
}
)
let documentQueue = AttachmentQueue(
db: db,
remoteStorage: remoteStorage,
attachmentsDirectory: try getAttachmentsDirectoryPath(),
watchAttachments: {
try db.watch(
sql: """
SELECT document_id
FROM documents
WHERE document_id IS NOT NULL
""",
parameters: [],
mapper: { cursor in
try WatchedAttachmentItem(
id: cursor.getString(name: "document_id"),
fileExtension: "pdf"
)
}
)
}
)
try await photoQueue.startSync()
try await documentQueue.startSync()
```
```kotlin Kotlin theme={null}
// Create separate queues for different attachment types
val photoQueue = AttachmentQueue(
db = db,
remoteStorage = remoteStorage,
attachmentsDirectory = attachmentsDirectory,
localStorage = localStorage,
watchAttachments = {
db.watch(
sql = """
SELECT photo_id
FROM users
WHERE photo_id IS NOT NULL
""",
parameters = null
) { cursor ->
WatchedAttachmentItem(
id = cursor.getString("photo_id"),
fileExtension = "jpg"
)
}
}
)
val documentQueue = AttachmentQueue(
db = db,
remoteStorage = remoteStorage,
attachmentsDirectory = attachmentsDirectory,
localStorage = localStorage,
watchAttachments = {
db.watch(
sql = """
SELECT document_id
FROM documents
WHERE document_id IS NOT NULL
""",
parameters = null
) { cursor ->
WatchedAttachmentItem(
id = cursor.getString("document_id"),
fileExtension = "pdf"
)
}
}
)
photoQueue.startSync()
documentQueue.startSync()
```
### Upload an Attachment
```typescript JavaScript/TypeScript theme={null}
async function uploadProfilePhoto(imageBlob: Blob, userId: string) {
const arrayBuffer = await imageBlob.arrayBuffer();
const attachment = await attachmentQueue.saveFile({
data: arrayBuffer,
fileExtension: 'jpg',
mediaType: 'image/jpeg',
// updateHook runs in same transaction, ensuring atomicity
updateHook: async (tx, attachment) => {
await tx.execute(
'UPDATE users SET photo_id = ? WHERE id = ?',
[attachment.id, userId]
);
}
});
return attachment;
}
// The queue will:
// 1. Save file locally immediately
// 2. Create attachment record with state QUEUED_UPLOAD
// 3. Update user record in same transaction
// 4. Automatically upload file in background
// 5. Update state to SYNCED when complete
```
```dart Flutter theme={null}
import 'dart:io';
import 'dart:typed_data';
import 'package:powersync_core/attachments/attachments.dart';
Future<Attachment> uploadProfilePhoto(
File imageFile,
String userId,
) async {
final imageBytes = await imageFile.readAsBytes();
final attachment = await attachmentQueue.saveFile(
data: Stream.value(imageBytes),
mediaType: 'image/jpeg',
fileExtension: 'jpg',
// updateHook runs in same transaction, ensuring atomicity
updateHook: (context, attachment) async {
await context.execute(
'UPDATE users SET photo_id = ? WHERE id = ?',
[attachment.id, userId],
);
},
);
return attachment;
}
// The queue will:
// 1. Save file locally immediately
// 2. Create attachment record with state QUEUED_UPLOAD
// 3. Update user record in same transaction
// 4. Automatically upload file in background
// 5. Update state to SYNCED when complete
```
```swift Swift theme={null}
func uploadProfilePhoto(imageData: Data, userId: String) async throws -> Attachment {
let attachment = try await attachmentQueue.saveFile(
data: imageData,
mediaType: "image/jpeg",
fileExtension: "jpg",
// updateHook runs in same transaction, ensuring atomicity
updateHook: { tx, attachment in
try tx.execute(
sql: "UPDATE users SET photo_id = ? WHERE id = ?",
parameters: [attachment.id, userId]
)
}
)
return attachment
}
// The queue will:
// 1. Save file locally immediately
// 2. Create attachment record with state QUEUED_UPLOAD
// 3. Update user record in same transaction
// 4. Automatically upload file in background
// 5. Update state to SYNCED when complete
```
```kotlin Kotlin theme={null}
import kotlinx.coroutines.flow.flowOf
suspend fun uploadProfilePhoto(imageBytes: ByteArray, userId: String): Attachment {
val attachment = attachmentQueue.saveFile(
data = flowOf(imageBytes),
mediaType = "image/jpeg",
fileExtension = "jpg",
// updateHook runs in same transaction, ensuring atomicity
updateHook = { tx, attachment ->
tx.execute(
"UPDATE users SET photo_id = ? WHERE id = ?",
listOf(attachment.id, userId)
)
}
)
return attachment
}
// The queue will:
// 1. Save file locally immediately
// 2. Create attachment record with state QUEUED_UPLOAD
// 3. Update user record in same transaction
// 4. Automatically upload file in background
// 5. Update state to SYNCED when complete
```
The `updateHook` parameter is the recommended way to link attachments to your data model. It runs in the same database transaction, ensuring data consistency.
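Why the same-transaction guarantee matters can be shown with a toy in-memory "transaction" (purely illustrative; the real `updateHook` receives the SDK's own transaction context):

```typescript
// Toy illustration of why the updateHook runs inside the same transaction:
// if any step fails, nothing is committed, so an attachment record can never
// exist without the row that references it (or vice versa).
type Store = { attachments: string[]; userPhotoIds: Record<string, string> };

function saveWithHook(
  store: Store,
  attachmentId: string,
  hook: (draft: Store) => void
): void {
  // Work on a deep copy; commit only if every step succeeds.
  const draft: Store = structuredClone(store);
  draft.attachments.push(attachmentId);
  hook(draft); // runs "inside" the transaction
  store.attachments = draft.attachments; // commit both writes together
  store.userPhotoIds = draft.userPhotoIds;
}
```

If the hook throws, the draft is discarded and the store is left untouched, which is the atomicity the real transaction provides.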
### Download/Access an Attachment
```typescript JavaScript/TypeScript theme={null}
// Downloads happen automatically when watchAttachments references a file
async function getProfilePhotoUri(userId: string): Promise<string | null> {
const user = await db.get(
'SELECT photo_id FROM users WHERE id = ?',
[userId]
);
if (!user?.photo_id) {
return null;
}
const attachment = await db.get(
'SELECT * FROM attachments WHERE id = ?',
[user.photo_id]
);
if (!attachment) {
return null;
}
if (attachment.state === 'SYNCED' && attachment.local_uri) {
return attachment.local_uri;
}
return null;
}
// Example: Display image in React with watch query
function ProfilePhoto({ userId }: { userId: string }) {
const [photoUri, setPhotoUri] = useState<string | null>(null);
useEffect(() => {
const watch = db.watch(
`SELECT a.local_uri, a.state
FROM users u
LEFT JOIN attachments a ON a.id = u.photo_id
WHERE u.id = ?`,
[userId],
{
onResult: (result) => {
const row = result.rows?._array[0];
if (row?.state === 'SYNCED' && row?.local_uri) {
setPhotoUri(row.local_uri);
}
}
}
);
return () => watch.close();
}, [userId]);
if (!photoUri) {
return <div>Loading photo...</div>;
}
return <img src={photoUri} alt="Profile photo" />;
}
```
```dart Flutter theme={null}
import 'dart:io';
import 'package:flutter/material.dart';
import 'package:path_provider/path_provider.dart';
import 'package:powersync/powersync.dart';
import 'package:powersync_core/attachments/attachments.dart';
// Downloads happen automatically when watchAttachments references a file
Future<String?> getProfilePhotoUri(
PowerSyncDatabase db,
String userId,
) async {
final user = await db.get(
'SELECT photo_id FROM users WHERE id = ?',
[userId],
);
if (user == null || user['photo_id'] == null) {
return null;
}
final attachment = await db.get(
'SELECT * FROM attachments_queue WHERE id = ?',
[user['photo_id']],
);
if (attachment == null) {
return null;
}
final state = AttachmentState.fromInt(attachment['state'] as int);
final localUri = attachment['local_uri'] as String?;
if (state == AttachmentState.synced && localUri != null) {
// Resolve full path from local storage
final appDocDir = await getApplicationDocumentsDirectory();
return '${appDocDir.path}/attachments/$localUri';
}
return null;
}
// Example: Display image in Flutter with StreamBuilder
StreamBuilder<List<Map<String, dynamic>>>(
stream: db.watch('''
SELECT a.local_uri, a.state
FROM users u
LEFT JOIN attachments_queue a ON a.id = u.photo_id
WHERE u.id = ?
''').map((results) => results.toList()),
builder: (context, snapshot) {
if (!snapshot.hasData || snapshot.data!.isEmpty) {
return const CircularProgressIndicator();
}
final row = snapshot.data!.first;
final state = AttachmentState.fromInt(row['state'] as int);
final localUri = row['local_uri'] as String?;
if (state == AttachmentState.synced && localUri != null) {
// Load and display image
return Image.file(File(localUri));
}
return const Text('Loading photo...');
},
)
```
```swift Swift theme={null}
// Downloads happen automatically when watchAttachments references a file
func getProfilePhotoUri(userId: String) async throws -> String? {
guard let user = try await db.getOptional(
sql: "SELECT photo_id FROM users WHERE id = ?",
parameters: [userId],
mapper: { cursor in
try cursor.getStringOptional(name: "photo_id")
}
), let photoId = user else {
return nil
}
guard let attachment = try await db.getOptional(
sql: "SELECT * FROM attachments WHERE id = ?",
parameters: [photoId],
mapper: { cursor in
try Attachment.fromCursor(cursor)
}
) else {
return nil
}
if attachment.state == .synced, let localUri = attachment.localUri {
return localUri
}
return nil
}
// Example: Display image in SwiftUI with watch query
struct ProfilePhotoView: View {
let userId: String
@State private var photoUri: String?
var body: some View {
Group {
if let photoUri = photoUri {
AsyncImage(url: URL(fileURLWithPath: photoUri)) { image in
image.resizable()
} placeholder: {
ProgressView()
}
} else {
Text("Loading photo...")
}
}
.task {
do {
for try await results in try db.watch(
sql: """
SELECT a.local_uri, a.state
FROM users u
LEFT JOIN attachments a ON a.id = u.photo_id
WHERE u.id = ?
""",
parameters: [userId],
mapper: { cursor in
(
state: try AttachmentState.from(cursor.getInt(name: "state")),
localUri: try cursor.getStringOptional(name: "local_uri")
)
}
) {
if let first = results.first,
first.state == .synced,
let localUri = first.localUri {
photoUri = localUri
}
}
} catch {
print("Error watching photo: \(error)")
}
}
}
}
```
```kotlin Kotlin theme={null}
import com.powersync.attachments.AttachmentState
import com.powersync.db.getString
import com.powersync.db.getStringOptional
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.map
// Downloads happen automatically when watchAttachments references a file
suspend fun getProfilePhotoUri(userId: String): String? {
val user = db.get(
"SELECT photo_id FROM users WHERE id = ?",
listOf(userId)
) { cursor ->
cursor.getStringOptional("photo_id")
}
if (user == null) {
return null
}
val attachment = db.get(
"SELECT * FROM attachments WHERE id = ?",
listOf(user)
) { cursor ->
com.powersync.attachments.Attachment.fromCursor(cursor)
}
if (attachment == null) {
return null
}
if (attachment.state == AttachmentState.SYNCED && attachment.localUri != null) {
return attachment.localUri
}
return null
}
// Example: Watch attachment state in Compose/UI
fun watchProfilePhoto(userId: String): Flow<String?> {
return db.watch(
sql = """
SELECT a.local_uri, a.state
FROM users u
LEFT JOIN attachments a ON a.id = u.photo_id
WHERE u.id = ?
""",
parameters = listOf(userId)
) { cursor ->
val state = AttachmentState.fromLong(cursor.getLong("state"))
val localUri = cursor.getStringOptional("local_uri")
if (state == AttachmentState.SYNCED && localUri != null) {
localUri
} else {
null
}
}.map { results -> results.firstOrNull() }
}
```
### Delete an Attachment
```typescript JavaScript/TypeScript theme={null}
async function deleteProfilePhoto(userId: string, photoId: string) {
await attachmentQueue.deleteFile({
id: photoId,
// updateHook ensures atomic deletion
updateHook: async (tx, attachment) => {
await tx.execute(
'UPDATE users SET photo_id = NULL WHERE id = ?',
[userId]
);
}
});
console.log('Photo queued for deletion');
// The queue will:
// 1. Delete from remote storage
// 2. Delete local file
// 3. Remove attachment record
}
// Alternative: Remove reference and let queue archive it automatically
async function removePhotoReference(userId: string) {
await db.execute(
'UPDATE users SET photo_id = NULL WHERE id = ?',
[userId]
);
// The watchAttachments callback will detect this change
// The queue will automatically archive the unreferenced attachment
// After reaching archivedCacheLimit, it will be deleted
}
```
```dart Flutter theme={null}
Future<void> deleteProfilePhoto(
String userId,
String photoId,
) async {
await attachmentQueue.deleteFile(
attachmentId: photoId,
// updateHook ensures atomic deletion
updateHook: (context, attachment) async {
await context.execute(
'UPDATE users SET photo_id = NULL WHERE id = ?',
[userId],
);
},
);
print('Photo queued for deletion');
// The queue will:
// 1. Delete from remote storage
// 2. Delete local file
// 3. Remove attachment record
}
// Alternative: Remove reference and let queue archive it automatically
Future<void> removePhotoReference(
PowerSyncDatabase db,
String userId,
) async {
await db.execute(
'UPDATE users SET photo_id = NULL WHERE id = ?',
[userId],
);
// The watchAttachments callback will detect this change
// The queue will automatically archive the unreferenced attachment
// After reaching archivedCacheLimit, it will be deleted
}
```
```swift Swift theme={null}
func deleteProfilePhoto(userId: String, photoId: String) async throws {
try await attachmentQueue.deleteFile(
attachmentId: photoId,
// updateHook ensures atomic deletion
updateHook: { tx, attachment in
try tx.execute(
sql: "UPDATE users SET photo_id = NULL WHERE id = ?",
parameters: [userId]
)
}
)
print("Photo queued for deletion")
// The queue will:
// 1. Delete from remote storage
// 2. Delete local file
// 3. Remove attachment record
}
// Alternative: Remove reference and let queue archive it automatically
func removePhotoReference(userId: String) async throws {
try await db.execute(
sql: "UPDATE users SET photo_id = NULL WHERE id = ?",
parameters: [userId]
)
// The watchAttachments callback will detect this change
// The queue will automatically archive the unreferenced attachment
// After reaching archivedCacheLimit, it will be deleted
}
```
```kotlin Kotlin theme={null}
suspend fun deleteProfilePhoto(userId: String, photoId: String) {
attachmentQueue.deleteFile(
attachmentId = photoId,
// updateHook ensures atomic deletion
updateHook = { tx, attachment ->
tx.execute(
"UPDATE users SET photo_id = NULL WHERE id = ?",
listOf(userId)
)
}
)
// The queue will:
// 1. Delete from remote storage
// 2. Delete local file
// 3. Remove attachment record
}
// Alternative: Remove reference and let queue archive it automatically
suspend fun removePhotoReference(userId: String) {
db.writeTransaction { tx ->
tx.execute(
"UPDATE users SET photo_id = NULL WHERE id = ?",
listOf(userId)
)
}
// The watchAttachments callback will detect this change
// The queue will automatically archive the unreferenced attachment
// After reaching archivedCacheLimit, it will be deleted
}
```
## Advanced Topics
### Error Handling
Implement custom error handling to control retry behavior:
```typescript JavaScript/TypeScript theme={null}
import { AttachmentErrorHandler } from '@powersync/web';
const errorHandler: AttachmentErrorHandler = {
async onDownloadError(attachment, error) {
console.error(`Download failed: ${attachment.filename}`, error);
// Return true to retry, false to archive
if (error.message.includes('404')) {
return false; // File doesn't exist, don't retry
}
return true; // Retry on network errors
},
async onUploadError(attachment, error) {
console.error(`Upload failed: ${attachment.filename}`, error);
return true; // Always retry uploads
},
async onDeleteError(attachment, error) {
console.error(`Delete failed: ${attachment.filename}`, error);
return true; // Retry deletes
}
};
const queue = new AttachmentQueue({
// ... other options
errorHandler
});
```
```dart Flutter theme={null}
import 'package:powersync_core/attachments/attachments.dart';
final errorHandler = AttachmentErrorHandler(
onDownloadError: (attachment, exception, stackTrace) async {
print('Download failed: ${attachment.filename}');
print('Error: $exception');
// Return true to retry, false to archive
if (exception.toString().contains('404')) {
return false; // File doesn't exist, don't retry
}
return true; // Retry on network errors
},
onUploadError: (attachment, exception, stackTrace) async {
print('Upload failed: ${attachment.filename}');
print('Error: $exception');
return true; // Always retry uploads
},
onDeleteError: (attachment, exception, stackTrace) async {
print('Delete failed: ${attachment.filename}');
print('Error: $exception');
return true; // Retry deletes
},
);
final queue = AttachmentQueue(
// ... other options
errorHandler: errorHandler,
);
```
```swift Swift theme={null}
class CustomErrorHandler: SyncErrorHandler {
func onDownloadError(attachment: Attachment, error: Error) async -> Bool {
print("Download failed: \(attachment.filename), error: \(error)")
// Return true to retry, false to archive
if let urlError = error as? URLError, urlError.code == .badServerResponse {
return false // File doesn't exist (404), don't retry
}
return true // Retry on network errors
}
func onUploadError(attachment: Attachment, error: Error) async -> Bool {
print("Upload failed: \(attachment.filename), error: \(error)")
return true // Always retry uploads
}
func onDeleteError(attachment: Attachment, error: Error) async -> Bool {
print("Delete failed: \(attachment.filename), error: \(error)")
return true // Retry deletes
}
}
let queue = AttachmentQueue(
db: db,
remoteStorage: remoteStorage,
attachmentsDirectory: try getAttachmentsDirectoryPath(),
watchAttachments: watchAttachments,
errorHandler: CustomErrorHandler()
)
```
```kotlin Kotlin theme={null}
import com.powersync.attachments.SyncErrorHandler
val errorHandler = object : SyncErrorHandler {
override suspend fun onDownloadError(
attachment: Attachment,
exception: Exception
): Boolean {
println("Download failed: ${attachment.filename}: $exception")
// Return true to retry, false to archive
if (exception.message?.contains("404") == true) {
return false // File doesn't exist, don't retry
}
return true // Retry on network errors
}
override suspend fun onUploadError(
attachment: Attachment,
exception: Exception
): Boolean {
println("Upload failed: ${attachment.filename}: $exception")
return true // Always retry uploads
}
override suspend fun onDeleteError(
attachment: Attachment,
exception: Exception
): Boolean {
println("Delete failed: ${attachment.filename}: $exception")
return true // Retry deletes
}
}
val queue = AttachmentQueue(
// ... other options
errorHandler = errorHandler
)
```
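The retry decision in the handlers above is just a pure function of the error, so it can be factored out and unit tested. A minimal sketch in plain JavaScript (no PowerSync APIs; the function name and the message-matching heuristic are illustrative assumptions):

```javascript
// Decides whether a failed download should be retried (true) or the
// attachment archived (false). Permanent errors such as a missing remote
// file (404) should not be retried; transient network errors should.
function shouldRetryDownload(error) {
  const message = String(error && error.message ? error.message : error);
  if (message.includes('404') || message.includes('Not Found')) {
    return false; // File doesn't exist remotely; archive instead
  }
  return true; // Assume a transient (network) failure; retry
}
```

You would then call this helper from `onDownloadError`, keeping the handler itself a one-liner.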
### Custom Storage Adapters
The following is an example of how to implement a custom storage adapter for IPFS:
```typescript JavaScript/TypeScript theme={null}
import { LocalStorageAdapter, RemoteStorageAdapter } from '@powersync/web';
// Example: IPFS remote storage
class IPFSStorageAdapter implements RemoteStorageAdapter {
async uploadFile(fileData: ArrayBuffer, attachment: AttachmentRecord) {
// Upload to IPFS
const cid = await ipfs.add(fileData);
// Store CID in your backend for retrieval
await fetch('/api/ipfs-cids', {
method: 'POST',
body: JSON.stringify({ attachmentId: attachment.id, cid })
});
}
async downloadFile(attachment: AttachmentRecord): Promise<ArrayBuffer> {
// Retrieve CID from backend
const { cid } = await fetch(`/api/ipfs-cids/${attachment.id}`)
.then(r => r.json());
// Download from IPFS
return ipfs.cat(cid);
}
async deleteFile(attachment: AttachmentRecord) {
// IPFS is immutable, but you can unpin and remove from backend
await fetch(`/api/ipfs-cids/${attachment.id}`, { method: 'DELETE' });
}
}
```
```dart Flutter theme={null}
// Example: IPFS remote storage
class IPFSStorageAdapter implements RemoteStorage {
@override
Future<void> uploadFile(
  Stream<List<int>> fileData,
Attachment attachment,
) async {
// Collect the stream
final bytes = <int>[];
await for (final chunk in fileData) {
bytes.addAll(chunk);
}
// Upload to IPFS
final cid = await ipfs.add(Uint8List.fromList(bytes));
// Store CID in your backend for retrieval
await http.post(
Uri.parse('/api/ipfs-cids'),
body: jsonEncode({
'attachmentId': attachment.id,
'cid': cid,
}),
);
}
@override
Future<Stream<List<int>>> downloadFile(Attachment attachment) async {
// Retrieve CID from backend
final response = await http.get(
Uri.parse('/api/ipfs-cids/${attachment.id}'),
);
final cid = jsonDecode(response.body)['cid'] as String;
// Download from IPFS
final data = await ipfs.cat(cid);
return Stream.value(data);
}
@override
Future deleteFile(Attachment attachment) async {
// IPFS is immutable, but you can unpin and remove from backend
await http.delete(
Uri.parse('/api/ipfs-cids/${attachment.id}'),
);
}
}
```
```swift Swift theme={null}
// Example: IPFS remote storage
class IPFSStorageAdapter: RemoteStorageAdapter {
func uploadFile(fileData: Data, attachment: Attachment) async throws {
// Upload to IPFS
// let cid = try await ipfs.add(fileData)
// Store CID in your backend for retrieval
struct CIDRequest: Codable {
let attachmentId: String
let cid: String
}
let requestBody = CIDRequest(attachmentId: attachment.id, cid: "your-cid-here")
var request = URLRequest(url: URL(string: "/api/ipfs-cids")!)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.httpBody = try JSONEncoder().encode(requestBody)
_ = try await URLSession.shared.data(for: request)
}
func downloadFile(attachment: Attachment) async throws -> Data {
// Retrieve CID from backend
struct CIDResponse: Codable {
let cid: String
}
let request = URLRequest(url: URL(string: "/api/ipfs-cids/\(attachment.id)")!)
let (data, _) = try await URLSession.shared.data(for: request)
let response = try JSONDecoder().decode(CIDResponse.self, from: data)
// Download from IPFS
// let fileData = try await ipfs.cat(response.cid)
// return fileData
return Data() // Replace with actual IPFS download
}
func deleteFile(attachment: Attachment) async throws {
// IPFS is immutable, but you can unpin and remove from backend
var request = URLRequest(url: URL(string: "/api/ipfs-cids/\(attachment.id)")!)
request.httpMethod = "DELETE"
_ = try await URLSession.shared.data(for: request)
}
}
```
```kotlin Kotlin theme={null}
// Example: IPFS remote storage
class IPFSStorageAdapter : RemoteStorage {
override suspend fun uploadFile(
fileData: Flow<ByteArray>,
attachment: Attachment
) {
// Collect the flow
val bytes = mutableListOf<ByteArray>()
fileData.collect { bytes.add(it) }
val allBytes = bytes.flatMap { it.toList() }.toByteArray()
// Upload to IPFS
val cid: String = TODO("upload to IPFS")
// Store CID in your backend for retrieval
// ... your HTTP POST to store CID
}
override suspend fun downloadFile(attachment: Attachment): Flow<ByteArray> {
    // Retrieve CID from backend
    val cid: String = TODO("fetch CID from your API")
    // Download from IPFS
    val data: ByteArray = TODO("download from IPFS")
    return flowOf(data)
}
override suspend fun deleteFile(attachment: Attachment) {
// IPFS is immutable, but you can unpin and remove from backend
// ... your HTTP DELETE implementation
}
}
```
### Verification and Recovery
`verifyAttachments()` is always called internally during `startSync()`.
This method does the following:
1. Verifies that local files exist at their expected paths
2. Repairs broken `localUri` references
3. Archives attachments with missing files
4. Requeues downloads for synced files with missing local copies
```typescript JavaScript/TypeScript theme={null}
await attachmentQueue.verifyAttachments();
```
```dart Flutter theme={null}
// Coming soon: this method is not yet exposed publicly in the Flutter SDK
```
```swift Swift theme={null}
try await attachmentQueue.waitForInit()
```
```kotlin Kotlin theme={null}
// Coming soon: this method is not yet exposed publicly in the Kotlin SDK
```
### Cache Management
Control archived file retention:
```typescript JavaScript/TypeScript theme={null}
const queue = new AttachmentQueue({
// ... other options
archivedCacheLimit: 200 // Keep 200 archived files; oldest deleted when limit reached
});
// For manually expiring the cache
await queue.expireCache();
```
```dart Flutter theme={null}
final queue = AttachmentQueue(
// ... other options
archivedCacheLimit: 200, // Keep 200 archived files; oldest deleted when limit reached
);
// For manually expiring the cache
await queue.expireCache();
```
```swift Swift theme={null}
let queue = AttachmentQueue(
db: db,
remoteStorage: remoteStorage,
attachmentsDirectory: try getAttachmentsDirectoryPath(),
watchAttachments: watchAttachments,
// ... other options
archivedCacheLimit: 200 // Keep 200 archived files; oldest deleted when limit reached
)
// For manually expiring the cache
try await queue.expireCache()
```
```kotlin Kotlin theme={null}
val queue = AttachmentQueue(
// ... other options
archivedCacheLimit = 200 // Keep 200 archived files; oldest deleted when limit reached
)
// For manually expiring the cache
queue.expireCache()
```
### Offline-First Considerations
The attachment queue is designed for offline-first apps:
* **Local-first operations** - Files are saved locally immediately, synced later
* **Automatic retry** - Failed uploads/downloads retry when connection returns
* **Queue persistence** - Queue state survives app restarts
* **Conflict-free** - Files are immutable, identified by UUID
* **Bandwidth efficient** - Only syncs when needed, respects network conditions
## Migrating From Deprecated Packages
If you are migrating from the now deprecated attachment helpers for Dart or JavaScript, follow the notes below:
A simple migration from `powersync_attachments_helper` to the new utilities is to adopt the new library with a different attachment queue table name and drop the legacy package. Existing local attachments are lost, but they will be re-downloaded automatically.
Import `AttachmentTable` and `AttachmentQueue` directly from your platform SDK (`@powersync/web`, `@powersync/node`, or `@powersync/react-native`), then remove `@powersync/attachments` from your dependencies.
**React Native only:** also install `@powersync/attachments-storage-react-native` plus either `expo-file-system` (Expo 54+) or `@dr.pogodin/react-native-fs`.
**What changed:**
| Before (`@powersync/attachments`) | After (platform SDK) |
| --------------------------------------------- | ------------------------------------------------------------------------ |
| `AbstractAttachmentQueue` subclass | `AttachmentQueue` instantiated directly |
| `onAttachmentIdsChange(ids: string[])` | `watchAttachments` — items must be `{ id, fileExtension }`, not just IDs |
| `newAttachmentRecord()` + `saveToQueue()` | `saveFile({ data, fileExtension, updateHook })` |
| `init()` | `startSync()` |
| Single `storage` adapter | `localStorage` + `remoteStorage` (two separate adapters) |
| `syncInterval` | `syncIntervalMs` |
| `cacheLimit` | `archivedCacheLimit` |
| `AttachmentTable` option: `name` | `viewName` |
| `AttachmentTable` option: `additionalColumns` | Removed — use the built-in `meta_data` column (JSON string) instead |
| Error handlers return `{ retry: boolean }` | Return `Promise<boolean>`; `onDeleteError` is now also required |
**Tip:** use a different `viewName` (e.g. `attachment_queue`) to avoid a SQLite conflict with the old `attachments` table during the transition.
**Data on existing users:** the new local attachments table starts empty. Files already in remote storage will re-download automatically once referenced by your `watchAttachments` query. Files that were only ever stored locally and never uploaded have no remote copy and will not be recoverable.
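One change from the table above worth calling out: `watchAttachments` must emit `{ id, fileExtension }` items rather than the bare ID arrays that `onAttachmentIdsChange` used. If your queries previously tracked only IDs, a small adapter can bridge the shapes during migration (this helper and its default extension are hypothetical; use whatever extension your app actually stores):

```javascript
// Hypothetical migration helper: converts a bare list of attachment IDs
// (the old onAttachmentIdsChange shape) into the { id, fileExtension }
// items the new watchAttachments query is expected to emit.
function toWatchedAttachmentItems(ids, defaultExtension = 'jpg') {
  return ids.map((id) => ({ id, fileExtension: defaultExtension }));
}
```

Longer term, prefer selecting the file extension directly in your `watchAttachments` SQL so each attachment carries its real extension.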
## Related Resources
* **[An Implementation Walkthrough Using The Flutter/Dart Attachment Helpers](https://www.powersync.com/blog/building-offline-first-file-uploads-with-powersync-attachments-helper)** - Blog post on building offline-first uploads
***
# Background Syncing
Source: https://docs.powersync.com/client-sdks/advanced/background-syncing
Run PowerSync operations while your app is inactive or in the background
Applications often need to sync data when they're not in active use. This document explains background syncing implementations with PowerSync.
## Platform Support
Background syncing has been tested in:
* **Flutter** - Using [workmanager](https://github.com/fluttercommunity/flutter_workmanager/)
* **React Native & Expo** - Using Expo's `BackgroundTask` API. See our [demo](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-background-sync) and accompanying [blog post](https://www.powersync.com/blog/keep-background-apps-fresh-with-expo-background-tasks-and-powersync).
* **Kotlin - Android** - Implementation details in the [Supabase To-Do List demo](https://github.com/powersync-ja/powersync-kotlin/blob/main/demos/supabase-todolist/docs/BackgroundSync.md)
These examples can be adapted for other platforms/frameworks. For implementation questions or assistance, chat to us on [Discord](https://discord.gg/powersync).
## Flutter Implementation Guide
### Prerequisites
1. Complete the [workmanager platform setup](https://github.com/fluttercommunity/flutter_workmanager/#platform-setup)
2. Review the [Supabase To-Do List Demo](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist) for context
### Configure the Background Task
In `main.dart`:
```dart theme={null}
const simpleTaskKey = "com.domain.myapp.taskId";
// Mandatory if the App is obfuscated or using Flutter 3.1+.
// Note: this must be a top-level function, not nested inside main().
@pragma('vm:entry-point')
void callbackDispatcher() {
  Workmanager().executeTask((task, inputData) async {
    switch (task) {
      case simpleTaskKey:
        // Initialize PowerSync database and connection
        final currentConnector = await openDatabase();
        db.connect(connector: currentConnector!);
        // Perform database operations
        await TodoList.create('New background task item');
        // Upload the local write to the remote database
        await currentConnector.uploadData(db);
        break;
    }
    // Close database when done
    await db.close();
    return Future.value(true);
  });
}
void main() async {
  // ... existing setup code ...
  // Initialize the workmanager with your callback
  Workmanager().initialize(
    callbackDispatcher,
    // Shows notifications during task execution (useful for debugging)
    isInDebugMode: true,
  );
  // ... rest of your app initialization ...
}
```
Note specifically in the switch statement:
```dart theme={null}
// currentConnector is the connector to the remote DB
// openDatabase sets the db variable to the PowerSync database
final currentConnector = await openDatabase();
// connect PowerSync to the remote database
db.connect(connector: currentConnector!);
// a database write operation
await TodoList.create('Buy new shoes');
// Sync with the remote database
await currentConnector.uploadData(db);
```
1. Since WorkManager executes in a new process, you need to set up the PowerSync local database and connect to the remote database using your connector.
2. Run a write (in the case of this demo app, we create a 'todo list')
3. Make sure to run `currentConnector.uploadData(db);` so that the local write is uploaded to the remote database.
### Testing
Add a test button:
```dart theme={null}
ElevatedButton(
  child: const Text("Start the Flutter background service"),
  onPressed: () async {
    await Workmanager().cancelAll();
    await Workmanager().registerOneOffTask(
      simpleTaskKey,
      simpleTaskKey,
      initialDelay: Duration(seconds: 10),
      inputData: {
        'int': 1,
      },
    );
  },
),
```
Press the button, background the app, wait 10 seconds, then verify new records in the remote database.
### Platform Compatibility
#### Android
* Implementation works as expected.
#### iOS
* At the time of last testing this (January 2024), we were only able to get part of this to work using the branch for [this PR](https://github.com/fluttercommunity/flutter_workmanager/pull/511) into workmanager.
* While testing, we were not able to get iOS background fetch to work; however, this is most likely an [issue](https://github.com/fluttercommunity/flutter_workmanager/issues/515) with the package.
***
# CRDT Data Structures
Source: https://docs.powersync.com/client-sdks/advanced/crdts
PowerSync does not use [CRDTs](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type) directly as part of its sync or conflict resolution process, but CRDT data structures (from a library such as [Yjs](https://github.com/yjs/yjs) or y-crdt) may be persisted and synced using PowerSync.
This may be useful for cases such as document editing, where last-write-wins is not sufficient for conflict resolution. PowerSync becomes the provider for CRDT data — both for local storage and for propagating changes to other clients.
### Example Implementations
For an example implementation, refer to the following demo built using the PowerSync Web SDK:
* [Yjs Document Collaboration Demo](https://github.com/powersync-ja/powersync-js/tree/main/demos/yjs-react-supabase-text-collab)
***
# JSON, Arrays and Custom Types
Source: https://docs.powersync.com/client-sdks/advanced/custom-types-arrays-and-json
PowerSync supports JSON/JSONB and arrays, and can sync other custom types by serializing them to text.
PowerSync supports JSON/JSONB and array columns. They are synced as JSON text and can be queried with SQLite JSON functions on the client. Other custom Postgres types can be synced by serializing their values to text in the client-side schema. When updating client data, you have the option to replace the entire column value with a string or enable [advanced schema options](#advanced-schema-options-to-process-writes) to track more granular changes and include custom metadata.
## JSON and JSONB
The PowerSync Service treats JSON and JSONB columns as text and provides many helpers for working with JSON in [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)).
**Note:** Native Postgres arrays, JSON arrays, and JSONB arrays are effectively all equivalent in PowerSync.
### Postgres
JSON columns are represented as:
```sql theme={null}
ALTER TABLE todos
ADD COLUMN custom_payload json;
```
### Sync Streams
PowerSync treats JSON columns as text. Use `json_extract()` and other JSON functions in stream queries. Subscribe per list to sync only that list's todos:
```yaml theme={null}
config:
edition: 3
streams:
my_json_todos:
auto_subscribe: true
with:
owned_lists: SELECT id AS list_id FROM lists WHERE owner_id = auth.user_id()
query: SELECT * FROM todos WHERE json_extract(custom_payload, '$.json_list') IN owned_lists
```
The client subscribes once per list (e.g. `db.syncStream('my_json_todos', { list_id: listId }).subscribe()`).
PowerSync treats JSON columns as text and provides transformation functions in Sync Rules such as `json_extract()`.
```yaml theme={null}
bucket_definitions:
my_json_todos:
# Separate bucket per To-Do list
parameters: SELECT id AS list_id FROM lists WHERE owner_id = request.user_id()
data:
- SELECT * FROM todos WHERE json_extract(custom_payload, '$.json_list') = bucket.list_id
```
### Client SDK
**Schema**
Add your JSON column as a `text` column in your client-side schema definition. For advanced update tracking, see [Advanced Schema Options](#advanced-schema-options).
```dart theme={null}
Table(
name: 'todos',
columns: [
Column.text('custom_payload'),
// ... other columns ...
],
// Optionally, enable advanced update tracking options (see details at the end of this page):
trackPreviousValues: true,
trackMetadata: true,
ignoreEmptyUpdates: true,
)
```
```javascript theme={null}
const todos = new Table(
{
custom_payload: column.text,
// ... other columns ...
},
{
// Optionally, enable advanced update tracking options (see details at the end of this page):
trackPrevious: true,
trackMetadata: true,
ignoreEmptyUpdates: true,
}
);
```
```csharp theme={null}
new Table
{
Name = "todos",
Columns =
{
["custom_payload"] = ColumnType.Text,
// ... other columns ...
},
// Optionally, enable advanced update tracking options (see details at the end of this page):
TrackPreviousValues = new TrackPreviousOptions(),
TrackMetadata = true,
IgnoreEmptyUpdates = true
}
```
Example not yet available.
**Writing Changes**
You can write the entire updated column value as a string, or, with `trackPreviousValues` enabled, compare the previous and new values to process only the changes you care about:
```dart theme={null}
// Full replacement (basic):
await db.execute('UPDATE todos set custom_payload = ?, _metadata = ? WHERE id = ?', [
'{"foo": "bar", "baz": 123}',
'op-metadata-example', // Example metadata value
'00000000-0000-0000-0000-000000000000'
]);
// Diffing columns in uploadData (advanced):
// See details about these advanced schema options at the end of this page
import 'dart:convert';
if (op.op == UpdateType.put && op.previousValues != null) {
var oldJson = jsonDecode(op.previousValues['custom_payload'] ?? '{}');
var newJson = jsonDecode(op.opData['custom_payload'] ?? '{}');
var metadata = op.metadata; // Access metadata here
// Compare oldJson and newJson to determine what changed
// Use metadata as needed as you process the upload
}
```
```javascript theme={null}
// Full replacement (basic):
await db.execute(
'UPDATE todos set custom_payload = ?, _metadata = ? WHERE id = ?',
['{"foo": "bar", "baz": 123}', 'op-metadata-example', '00000000-0000-0000-0000-000000000000']
);
// Diffing columns in uploadData (advanced):
// See details about these advanced schema options at the end of this page
if (op.op === UpdateType.PUT && op.previousValues) {
const oldJson = JSON.parse(op.previousValues['custom_payload'] ?? '{}');
const newJson = JSON.parse(op.opData['custom_payload'] ?? '{}');
const metadata = op.metadata; // Access metadata here
// Compare oldJson and newJson to determine what changed
// Use metadata as needed as you process the upload
}
```
```csharp theme={null}
// Full replacement (basic):
await db.Execute(
"UPDATE todos SET custom_payload = ?, _metadata = ? WHERE id = ?",
new object[] { "{\"foo\": \"bar\", \"baz\": 123}", "op-metadata-example", "00000000-0000-0000-0000-000000000000" }
);
// Diffing columns in UploadData (advanced):
// See details about these advanced schema options at the end of this page
using Newtonsoft.Json;
if (op.Op.ToString() == "PUT" && op.PreviousValues != null)
{
var oldJson = JsonConvert.DeserializeObject<Dictionary<string, object>>(
op.PreviousValues.GetValueOrDefault("custom_payload", "{}")?.ToString() ?? "{}"
);
var newJson = JsonConvert.DeserializeObject<Dictionary<string, object>>(
(op.OpData != null ? op.OpData.GetValueOrDefault("custom_payload", "{}")?.ToString() ?? "{}" : "{}") ?? "{}"
);
var metadata = op.Metadata; // Access metadata here
// Compare oldJson and newJson to determine what changed
// Use metadata as needed as you process the upload
}
```
Example not yet available.
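The "compare oldJson and newJson to determine what changed" step in the snippets above can be implemented with a small diff helper. A minimal sketch in plain JavaScript (no PowerSync APIs; the function name is illustrative):

```javascript
// Shallow-diffs two JSON objects (already parsed from the column's text
// value) and returns the keys that were added, removed, or changed.
function diffJsonColumns(oldJson, newJson) {
  const changes = {};
  const keys = new Set([...Object.keys(oldJson), ...Object.keys(newJson)]);
  for (const key of keys) {
    // Serialize each value so nested objects compare by content
    const before = JSON.stringify(oldJson[key]);
    const after = JSON.stringify(newJson[key]);
    if (before !== after) {
      changes[key] = { before: oldJson[key], after: newJson[key] };
    }
  }
  return changes;
}
```

In an `uploadData` handler you would call this with `JSON.parse(op.previousValues['custom_payload'] ?? '{}')` and `JSON.parse(op.opData['custom_payload'] ?? '{}')`, then upload only the changed keys.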
## Arrays
PowerSync treats array columns as JSON text. This means that the SQLite JSON operators can be used on any array columns.
Additionally, array membership is supported in [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) so you can sync rows based on whether a parameter value appears in an array column.
**Note:** Native Postgres arrays, JSON arrays, and JSONB arrays are effectively all equivalent in PowerSync.
### Postgres
Array columns are defined in Postgres using the following syntax:
```sql theme={null}
ALTER TABLE todos
ADD COLUMN unique_identifiers text[];
```
### Sync Streams
Array columns are converted to text by the PowerSync Service. A text array as defined above would be synced to clients as the following string:
`["00000000-0000-0000-0000-000000000000", "12345678-1234-1234-1234-123456789012"]`
**Array Membership**
Sync rows where a subscription parameter value is in the row's array column using `IN`:
```yaml theme={null}
config:
edition: 3
streams:
custom_todos:
query: SELECT * FROM todos WHERE subscription.parameter('list_id') IN unique_identifiers
```
The client subscribes per list (e.g. `db.syncStream('custom_todos', { list_id: listId }).subscribe()`).
It's possible to sync rows dynamically based on the contents of array columns using the `IN` operator:
```yaml theme={null}
bucket_definitions:
custom_todos:
# Separate bucket per To-Do list
parameters: SELECT id AS list_id FROM lists WHERE owner_id = request.user_id()
data:
- SELECT * FROM todos WHERE bucket.list_id IN unique_identifiers
```
See these additional details when using the `IN` operator: [Operators](/sync/supported-sql#operators)
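On the client, the same membership check can be expressed directly on the synced JSON text. A minimal sketch of the equivalent logic in plain JavaScript (the function name is illustrative; it mirrors what the `IN` operator does server-side):

```javascript
// The array column arrives on the client as JSON text, e.g.
// '["00000000-...", "12345678-..."]'. Membership is a parse-and-includes check.
function rowMatchesList(uniqueIdentifiersText, listId) {
  const identifiers = JSON.parse(uniqueIdentifiersText ?? '[]');
  return Array.isArray(identifiers) && identifiers.includes(listId);
}
```

The same check can be done in a SQLite query with the JSON table-valued function, e.g. `WHERE EXISTS (SELECT 1 FROM json_each(unique_identifiers) WHERE json_each.value = ?)`.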
### Client SDK
**Schema**
Add your array column as a `text` column in your client-side schema definition. For advanced update tracking, see [Advanced Schema Options](#advanced-schema-options).
```javascript theme={null}
const todos = new Table(
{
unique_identifiers: column.text,
// ... other columns ...
},
{
// Optionally, enable advanced update tracking options (see details at the end of this page):
trackPrevious: true,
trackMetadata: true,
ignoreEmptyUpdates: true,
}
);
```
```dart theme={null}
Table(
name: 'todos',
columns: [
Column.text('unique_identifiers'),
// ... other columns ...
],
// Optionally, enable advanced update tracking options (see details at the end of this page):
trackPreviousValues: true,
trackMetadata: true,
ignoreEmptyUpdates: true,
)
```
```csharp theme={null}
new Table
{
Name = "todos",
Columns =
{
["unique_identifiers"] = ColumnType.Text,
// ... other columns ...
},
// Optionally, enable advanced update tracking options (see details at the end of this page):
TrackPreviousValues = new TrackPreviousOptions(),
TrackMetadata = true,
IgnoreEmptyUpdates = true
}
```
Example not yet available.
**Writing Changes**
You can write the entire updated column value as a string, or, with `trackPreviousValues` enabled, compare the previous and new values to process only the changes you care about:
```javascript theme={null}
// Full replacement (basic):
await db.execute(
'UPDATE todos set unique_identifiers = ?, _metadata = ? WHERE id = ?',
['["DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF", "ABCDEFAB-ABCD-ABCD-ABCD-ABCDEFABCDEF"]', 'op-metadata-example', '00000000-0000-0000-0000-000000000000']
);
// Diffing columns in uploadData (advanced):
// See details about these advanced schema options at the end of this page
if (op.op === UpdateType.PUT && op.previousValues) {
const oldArray = JSON.parse(op.previousValues['unique_identifiers'] ?? '[]');
const newArray = JSON.parse(op.opData['unique_identifiers'] ?? '[]');
const metadata = op.metadata; // Access metadata here
// Compare oldArray and newArray to determine what changed
// Use metadata as needed as you process the upload
}
```
```dart theme={null}
// Full replacement (basic):
await db.execute('UPDATE todos set unique_identifiers = ?, _metadata = ? WHERE id = ?', [
'["DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF", "ABCDEFAB-ABCD-ABCD-ABCD-ABCDEFABCDEF"]',
'op-metadata-example', // Example metadata value
'00000000-0000-0000-0000-000000000000'
]);
// Diffing columns in uploadData (advanced):
// See details about these advanced schema options at the end of this page
if (op.op == UpdateType.put && op.previousValues != null) {
final oldArray = jsonDecode(op.previousValues['unique_identifiers'] ?? '[]');
final newArray = jsonDecode(op.opData['unique_identifiers'] ?? '[]');
final metadata = op.metadata; // Access metadata here
// Compare oldArray and newArray to determine what changed
// Use metadata as needed as you process the upload
}
```
```csharp theme={null}
// Full replacement (basic):
await db.Execute(
"UPDATE todos SET unique_identifiers = ?, _metadata = ? WHERE id = ?",
new object[] {
"[\"DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF\", \"ABCDEFAB-ABCD-ABCD-ABCD-ABCDEFABCDEF\"]",
"op-metadata-example",
"00000000-0000-0000-0000-000000000000"
}
);
// Diffing columns in UploadData (advanced):
// See details about these advanced schema options at the end of this page
// (requires `using Newtonsoft.Json;` at the top of the file)
if (op.Op.ToString() == "PUT" && op.PreviousValues != null)
{
var oldArray = JsonConvert.DeserializeObject<List<string>>(
op.PreviousValues.GetValueOrDefault("unique_identifiers", "[]")?.ToString() ?? "[]"
);
var newArray = JsonConvert.DeserializeObject<List<string>>(
op.OpData?.GetValueOrDefault("unique_identifiers", "[]")?.ToString() ?? "[]"
);
var metadata = op.Metadata; // Access metadata here
// Compare oldArray and newArray to determine what changed
// Use metadata as needed as you process the upload
}
```
Example not yet available.
**Attention Supabase users:** Supabase can handle writes with arrays, but you must convert from string to array using `jsonDecode` in the connector's `uploadData` function. The default implementation of `uploadData` does not handle complex types like arrays automatically.
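For the JavaScript SDKs, this conversion can be sketched as a small helper applied to the CRUD record before upserting it to Supabase. This is a hedged sketch: the helper name and the `arrayColumns` parameter are illustrative, not part of any SDK.

```javascript theme={null}
// Hypothetical helper for a Supabase connector's uploadData function.
// Columns listed in arrayColumns arrive as JSON strings from SQLite,
// but Supabase expects real arrays for Postgres array columns.
function decodeArrayColumns(opData, arrayColumns) {
  const record = { ...opData };
  for (const column of arrayColumns) {
    if (typeof record[column] === 'string') {
      // Parse the JSON string back into a JavaScript array.
      record[column] = JSON.parse(record[column]);
    }
  }
  return record;
}
```

Inside `uploadData`, you would call this on each operation's data before passing the record to Supabase's `.upsert()`.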
## Custom Types
PowerSync respects Postgres custom types: DOMAIN types sync as their inner type, custom type columns as JSON objects, arrays of custom types as JSON arrays, and ranges (and multi-ranges) as structured JSON. This behavior is the default for Sync Streams. For configuration and legacy behavior, see [Compatibility](/sync/advanced/compatibility#custom-postgres-types). For type handling in queries, see [Types](/sync/types).
### Postgres
Postgres allows developers to create custom data types for columns. For example:
```sql theme={null}
create type location_address AS (
street text,
city text,
state text,
zip numeric
);
```
### Sync Streams
The custom type column is serialized as JSON and you can use `json_extract()` and other JSON functions in stream queries:
```yaml theme={null}
config:
edition: 3
streams:
todos_by_city:
query: SELECT * FROM todos WHERE json_extract(location, '$.city') = subscription.parameter('city')
```
Custom type columns are converted to text by the PowerSync Service.
Depending on whether the `custom_postgres_types` [compatibility option](/sync/advanced/compatibility) is enabled,
PowerSync would sync the row as:
* `{"street":"1000 S Colorado Blvd.","city":"Denver","state":"CO","zip":80211}` if the option is enabled.
* `("1000 S Colorado Blvd.",Denver,CO,80211)` if the option is disabled.
You can use regular string and JSON manipulation functions in Sync Rules. This means that individual values of the type
can be synced with `json_extract` if the `custom_postgres_types` compatibility option is enabled.
Without the option, the entire column must be synced as text.
### Client SDK
**Schema**
Add your custom type column as a `text` column in your client-side schema definition. For advanced update tracking, see [Advanced Schema Options](#advanced-schema-options).
```javascript theme={null}
const todos = new Table(
{
location: column.text,
// ... other columns ...
},
{
// Optionally, enable advanced update tracking options (see details at the end of this page):
trackPrevious: true,
trackMetadata: true,
ignoreEmptyUpdates: true,
}
);
```
```dart theme={null}
Table(
name: 'todos',
columns: [
Column.text('location'),
// ... other columns ...
],
// Optionally, enable advanced update tracking options (see details at the end of this page):
trackPreviousValues: true,
trackMetadata: true,
ignoreEmptyUpdates: true,
)
```
```csharp theme={null}
new Table
{
Name = "todos",
Columns =
{
["location"] = ColumnType.Text,
// ... other columns ...
},
// Optionally, enable advanced update tracking options (see details at the end of this page):
TrackPreviousValues = new TrackPreviousOptions(),
TrackMetadata = true,
IgnoreEmptyUpdates = true
}
```
Example not yet available.
**Writing Changes**
You can write the entire updated column value as a string, or, with `trackPreviousValues` enabled, compare the previous and new values to process only the changes you care about:
```javascript theme={null}
// Full replacement (basic):
await db.execute(
'UPDATE todos SET location = ?, _metadata = ? WHERE id = ?',
['("1234 Update Street",Denver,CO,80212)', 'op-metadata-example', 'faffcf7a-75f9-40b9-8c5d-67097c6b1c3b']
);
// Diffing columns in uploadData (advanced):
// See details about these advanced schema options at the end of this page
if (op.op === UpdateType.PUT && op.previousValues) {
const oldCustomType = op.previousValues['location'] ?? 'null';
const newCustomType = op.opData['location'] ?? 'null';
const metadata = op.metadata; // Access metadata here
// Compare oldCustomType and newCustomType to determine what changed
// Use metadata as needed as you process the upload
}
```
```dart theme={null}
// Full replacement (basic):
await db.execute('UPDATE todos SET location = ?, _metadata = ? WHERE id = ?', [
'("1234 Update Street",Denver,CO,80212)',
'op-metadata-example', // Example metadata value
'faffcf7a-75f9-40b9-8c5d-67097c6b1c3b'
]);
// Diffing columns in uploadData (advanced):
// See details about these advanced schema options at the end of this page
if (op.op == UpdateType.put && op.previousValues != null) {
final oldCustomType = op.previousValues['location'] ?? 'null';
final newCustomType = op.opData['location'] ?? 'null';
final metadata = op.metadata; // Access metadata here
// Compare oldCustomType and newCustomType to determine what changed
// Use metadata as needed as you process the upload
}
```
```csharp theme={null}
// Full replacement (basic):
await db.Execute(
"UPDATE todos SET location = ?, _metadata = ? WHERE id = ?",
new object[] { "(\"1234 Update Street\",Denver,CO,80212)", "op-metadata-example", "faffcf7a-75f9-40b9-8c5d-67097c6b1c3b" }
);
// Diffing columns in UploadData (advanced):
// See details about these advanced schema options at the end of this page
if (op.Op.ToString() == "PUT" && op.PreviousValues != null)
{
var oldCustomType = op.PreviousValues.GetValueOrDefault("location", "null")?.ToString() ?? "null";
var newCustomType = op.OpData.GetValueOrDefault("location", "null")?.ToString() ?? "null";
var metadata = op.Metadata; // Access metadata here
// Compare oldCustomType and newCustomType to determine what changed
// Use metadata as needed as you process the upload
}
```
Example not yet available.
## Bonus: Mashup
What if we had a column defined as an array of custom types, where a field in the custom type was JSON? Consider the following Postgres schema:
```sql theme={null}
-- define custom type
CREATE TYPE extended_location AS (
address_label text,
json_address json
);
-- add column
ALTER TABLE todos
ADD COLUMN custom_locations extended_location[];
```
## Advanced Schema Options to Process Writes
With arrays and JSON fields, it's common for only part of the value to change during an update. To make handling these writes easier, you can enable advanced schema options that let you track exactly what changed in each row—not just the new state.
* `trackPreviousValues` (or `trackPrevious` in our JS SDKs): Access previous values for diffing JSON or array fields. Accessible later via `CrudEntry.previousValues`.
* `trackMetadata`: Adds a `_metadata` column for storing custom metadata. Value of the column is accessible later via `CrudEntry.metadata`.
* `ignoreEmptyUpdates`: Skips updates when no data has actually changed.
These advanced schema options were introduced in the following SDK versions:
* Flutter v1.13.0
* React Native v1.20.1
* JavaScript/Web v1.20.1
* Kotlin v1.1.0
* Swift v1.1.0
* Node.js v0.4.0
* .NET v0.0.6-alpha.1
# Data Encryption
Source: https://docs.powersync.com/client-sdks/advanced/data-encryption
### In Transit Encryption
Data is always encrypted in transit using TLS — both between the client and PowerSync, and between PowerSync [and the source database](/configuration/source-db/postgres-maintenance#tls).
### At Rest Encryption
The client-side database can be encrypted at rest. This is currently available for:
[SQLCipher](https://www.zetetic.net/sqlcipher/) support is available for Dart/Flutter through the `powersync_sqlcipher` SDK. See usage details in the package README:
[SQLCipher](https://www.zetetic.net/sqlcipher/) support is available for PowerSync's React Native SDK through the `@powersync/op-sqlite` package. See usage details in the package README:
The Web SDK uses the [ChaCha20 cipher algorithm by default](https://utelle.github.io/SQLite3MultipleCiphers/docs/ciphers/cipher_chacha20/). See usage details in the package README:
Additionally, a minimal example demonstrating encryption of the web database is available [here](https://github.com/powersync-ja/powersync-js/tree/main/demos/example-vite-encryption).
Encryption support is available for PowerSync's Node.js SDK using [`better-sqlite3-multiple-ciphers`](https://www.npmjs.com/package/better-sqlite3-multiple-ciphers). See usage details and code examples in the [Node.js SDK reference](/client-sdks/reference/node#encryption-and-custom-sqlite-drivers).
Encryption support is available for PowerSync's Kotlin SDK (since version 1.9.0) using [`SQLite3MultipleCiphers`](https://utelle.github.io/SQLite3MultipleCiphers/) via the [`com.powersync:sqlite3multipleciphers`](https://central.sonatype.com/artifact/com.powersync/sqlite3multipleciphers) package. This allows you to encrypt your local SQLite database with various cipher algorithms.
**Setup:**
1. Replace your dependency on `com.powersync:core` with `com.powersync:common` of the same version.
2. Add a dependency on `com.powersync:sqlite3multipleciphers`.
3. Since `:core` includes a Ktor client implementation (and `:common` does not), you'll need to [add one manually](https://ktor.io/docs/client-engines.html) if you're not already using Ktor:
* Android/JVM: `io.ktor:ktor-client-okhttp`
* Apple targets (Kotlin/Native): `io.ktor:ktor-client-darwin`
4. Use the appropriate encrypted database factory when creating your `PowerSyncDatabase`:
```kotlin theme={null}
// Android
val database = PowerSyncDatabase(
factory = AndroidEncryptedDatabaseFactory(
context,
Key.Passphrase("your encryption key")
),
schema = yourSchema,
dbFilename = "your_database"
)
// JVM
val database = PowerSyncDatabase(
factory = JavaEncryptedDatabaseFactory(
Key.Passphrase("your encryption key")
),
schema = yourSchema,
dbFilename = "your_database"
)
// Kotlin/Native (Apple targets)
val database = PowerSyncDatabase(
factory = NativeEncryptedDatabaseFactory(
Key.Passphrase("your encryption key")
),
schema = yourSchema,
dbFilename = "your_database"
)
```
Store encryption keys securely rather than hardcoding them in your code.
For more details, see the [`sqlite3multipleciphers` README](https://github.com/powersync-ja/powersync-kotlin/tree/main/sqlite3multipleciphers) in the PowerSync Kotlin SDK repository.
Encryption support is available for PowerSync's Swift SDK (since version 1.10.0) using [`SQLite3MultipleCiphers`](https://utelle.github.io/SQLite3MultipleCiphers/). Encryption keys are configured with the `initialStatements` parameter on `PowerSyncDatabase()` which allows running `PRAGMA key` statements.
**Setup requirements:**
The PowerSync Swift SDK depends on [CSQLite](https://github.com/powersync-ja/CSQLite) to build and link SQLite.
That package can be configured to optionally link SQLite3 Multiple Ciphers by enabling the `Encryption` trait. Due to SwiftPM limitations, we can't directly expose that trait on the Swift SDK.
Instead, we recommend directly depending on CSQLite with the encryption trait, which will enable the same for the SDK (since each package can only appear in a build once). Since Xcode doesn't support specifying package traits when adding dependencies, you first need to add a local Swift package as a workaround.
1. Create a local `Package.swift` in your project that depends on CSQLite with the `Encryption` trait:
```swift theme={null}
// swift-tools-version: 6.2
import PackageDescription
let package = Package(
name: "helper",
products: [
.library(name: "helper", targets: ["helper"]),
],
dependencies: [
.package(url: "https://github.com/powersync-ja/CSQLite.git", exact: "3.51.2", traits: ["Encryption"]),
],
targets: [
.target(name: "helper", dependencies: [.product(name: "CSQLite", package: "CSQLite")]),
]
)
```
2. Add a dependency to this local package from Xcode and resolve packages. This enables `sqlite3mc` for your entire app, including the PowerSync framework.
3. Configure encryption when opening the database:
```swift theme={null}
let db = PowerSyncDatabase(
schema: yourSchema,
initialStatements: ["pragma key = 'your encryption key'"]
)
```
Store encryption keys securely (e.g., in Keychain) rather than hardcoding them in your code.
For a complete working example, see the [SwiftEncryptionDemo](https://github.com/powersync-ja/powersync-swift/tree/main/Demos/SwiftEncryptionDemo) in the PowerSync Swift SDK repository.
Support for encryption on other platforms is planned. In the meantime, let us know your needs and use cases on [Discord](https://discord.gg/powersync).
### End-to-end Encryption
For end-to-end encryption, the encrypted data can be synced using PowerSync. The data can then either be encrypted and decrypted directly in memory by the application, or a separate local-only table can be used to persist the decrypted data — allowing querying the data directly.
[Raw SQLite Tables](/client-sdks/advanced/raw-tables) can be used for full control over the SQLite schema and managing tables for the decrypted data. We have a [React & Supabase example app](https://github.com/powersync-community/react-supabase-chat-e2ee) that demonstrates this approach. See also the accompanying [blog post](https://www.powersync.com/blog/building-an-e2ee-chat-app-with-powersync-supabase).
## See Also
* Database Setup → [Security & IP Filtering](/configuration/source-db/security-and-ip-filtering)
* Resources → [Security](/resources/security)
# GIS Data: PostGIS
Source: https://docs.powersync.com/client-sdks/advanced/gis-data-postgis
For Postgres, PowerSync integrates well with PostGIS and provides tools for working with geo data.
Custom types, arrays and [PostGIS](https://postgis.net/) are frequently presented together since geospatial data is often complex and multidimensional. It's therefore recommended to first quickly scan the content in [Custom Types, Arrays and JSON](/client-sdks/advanced/custom-types-arrays-and-json).
### PostGIS
In Supabase, the PostGIS extension needs to be added to your project to use this type. Run the following command in the SQL editor to include the PostGIS extension:
```sql theme={null}
CREATE extension IF NOT EXISTS postgis;
```
The `geography` and `geometry` types are now available in your Postgres.
## Supabase Configuration Example:
This example builds on the To-Do List demo app in our [Supabase integration guide](/integrations/supabase/guide).
### Add custom type, array and PostGIS columns to the `todos` table
```sql theme={null}
--SQL command to update the todos table with 3 additional columns:
ALTER TABLE todos
ADD COLUMN address location_address null,
ADD COLUMN contact_numbers text [] null,
ADD COLUMN location geography (point) null
```
### Insert a row of data into the table
```sql theme={null}
-- Grab the id of a list object and a user id and create a new todo
INSERT INTO public.todos(description, list_id, created_by, address, location, contact_numbers) VALUES ('Bread', 'list_id', 'user_id', '("1000 S Colorado Blvd.","Denver","CO",80211)', st_point(39.742043, -104.991531), '{000-000-0000, 000-000-0000, 000-000-0000}');
```
Note the following:
**Custom type**: Specify the value for the `address` column by wrapping it in single quotes and comma-separating the different location\_address properties.
* `'("1000 S Colorado Blvd.","Denver","CO",80211)'`
**Array**: Specify the value of the `contact_numbers` column by surrounding the comma-separated array items with curly braces.
* `'{000-000-0000, 000-000-0000, 000-000-0000}'`
**PostGIS**: Specify the value of the `location` column by using the `st_point` function, passing in the latitude and longitude.
* `st_point(39.742043, -104.991531)`
### What this data looks like in Postgres
Postgres' internal binary representation of the PostGIS type is as follows:
| location |
| -------------------------------------------------- |
| 0101000020E6100000E59CD843FBDE4340E9818FC18AC052C0 |
## On the Client
### AppSchema example
```js theme={null}
export const AppSchema = new Schema([
new Table({
name: 'todos',
columns: [
new Column({ name: 'list_id', type: ColumnType.TEXT }),
new Column({ name: 'created_at', type: ColumnType.TEXT }),
new Column({ name: 'completed_at', type: ColumnType.TEXT }),
new Column({ name: 'description', type: ColumnType.TEXT }),
new Column({ name: 'completed', type: ColumnType.INTEGER }),
new Column({ name: 'created_by', type: ColumnType.TEXT }),
new Column({ name: 'completed_by', type: ColumnType.TEXT }),
new Column({ name: 'address', type: ColumnType.TEXT }),
new Column({ name: 'contact_numbers', type: ColumnType.TEXT }),
new Column({ name: 'location', type: ColumnType.TEXT })
],
indexes: [new Index({ name: 'list', columns: [new IndexedColumn({ name: 'list_id' })] })]
}),
new Table({
name: 'lists',
columns: [
new Column({ name: 'created_at', type: ColumnType.TEXT }),
new Column({ name: 'name', type: ColumnType.TEXT }),
new Column({ name: 'owner_id', type: ColumnType.TEXT })
]
})
]);
```
Note:
* The custom type, array and PostGIS type have been defined as `TEXT` in the AppSchema. The Postgres PostGIS capabilities are not available because the PowerSync SDK uses SQLite, which only has a limited number of types. This means that everything is replicated into the SQLite database as TEXT values.
* Depending on your application, you may need to implement functions in the client to parse the values and then other functions to write them back to the Postgres database.
### What does the data look like in SQLite?
For the most part, the data looks exactly how it's stored in the Postgres database, i.e.
1. **Custom Type**: It has the same format as if you inserted it using a SQL statement, i.e.
1. `(1000 S Colorado Blvd.,Denver,CO,80211)`
2. **Array**: Array types act similarly, showing the data in the same way it was inserted, e.g.
1. `{000-000-0000, 000-000-0000, 000-000-0000}`
3. **PostGIS**: The `geography` type is transformed into an encoded form of the value.
1. If you insert coordinates as `st_point(39.742043, -104.991531)` then it is shown as `0101000020E6100000E59CD843FBDE4340E9818FC18AC052C0`
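If your application needs the individual fields, you may have to parse these text forms yourself. Below is a naive sketch, assuming no embedded commas, quotes or braces in the values (Postgres' full quoting rules for composite and array text are more involved):

```javascript theme={null}
// Naive parser for composite-type text, e.g.
// '(1000 S Colorado Blvd.,Denver,CO,80211)' -> array of field strings.
function parseCompositeText(value) {
  return value.slice(1, -1).split(',').map((part) => part.trim());
}

// Naive parser for Postgres array text, e.g.
// '{000-000-0000, 000-000-0000}' -> array of item strings.
function parsePgArrayText(value) {
  const inner = value.slice(1, -1).trim();
  return inner === '' ? [] : inner.split(',').map((part) => part.trim());
}
```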
## Sync Streams
### PostGIS
Example use case: Extract x (long) and y (lat) values from a PostGIS type, to use these values independently in an application.
PowerSync supports the following PostGIS functions in Sync Streams (or legacy Sync Rules); see [Operators and Functions](/sync/supported-sql#functions):
1. `ST_AsGeoJSON`
2. `ST_AsText`
3. `ST_X`
4. `ST_Y`
IMPORTANT NOTE: These functions will only work if your Postgres instance has the PostGIS extension installed and you’re storing values as type `geography` or `geometry`.
```yaml theme={null}
config:
edition: 3
streams:
global:
queries:
- SELECT * FROM lists
- SELECT *, st_x(location) as longitude, st_y(location) as latitude FROM todos
```
```yaml theme={null}
bucket_definitions:
global:
data:
- SELECT * FROM lists
- SELECT *, st_x(location) as longitude, st_y(location) as latitude from todos
```
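Once `longitude` and `latitude` are synced as plain columns, the client can use the values directly. For example, a hypothetical great-circle distance helper for rows read from `todos` (the haversine formula below is illustrative, not part of any SDK):

```javascript theme={null}
// Great-circle distance (haversine) between two points, in kilometers.
// Inputs are the latitude/longitude values synced via st_y()/st_x().
function distanceKm(lat1, lon1, lat2, lon2) {
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.asin(Math.sqrt(a)); // mean Earth radius ~6371 km
}
```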
# Local-Only Usage
Source: https://docs.powersync.com/client-sdks/advanced/local-only-usage
Some use cases require data persistence before the user has registered or signed in.
In some of those cases, the user may want to register and start syncing data with other devices or users at a later point, while other users may keep on using the app without ever registering or going online.
PowerSync supports these scenarios. By default, all local changes will be stored in the upload queue, and will be uploaded to the backend server if the user registers at a later point.
A caveat is that if the user never registers, this queue will keep on growing in size indefinitely. For many applications this should be small enough to not be significant, but some data-intensive applications may want to avoid the indefinite queue growth.
There are two general approaches we recommend for this:
### 1. Local-only tables
```dart theme={null}
final table = Table.localOnly(
...
)
```
**Flutter + Drift users:** If you're using local-only tables with `viewName` overrides, Drift's watch streams may not update correctly. See the [troubleshooting guide](/client-sdks/orms/flutter-orm-support#troubleshooting:-watch-streams-with-local-only-tables) for the solution.
```js theme={null}
const lists = new Table({
...
}, {
localOnly: true
});
```
```kotlin theme={null}
val Table = Table(
...
localOnly = true
)
```
```swift theme={null}
let table = Table(
...
localOnly: true
)
```
```csharp theme={null}
public static Table Todos = new Table
{
Name = "todos",
Columns =
{
// ... column definitions ...
},
LocalOnly = true
};
```
Example not yet available.
Use local-only tables until the user has registered or signed in. This would not store any data in the upload queue, avoiding any overhead or growth in database size.
Once the user registers, move the data over to synced tables, at which point the data would be placed in the upload queue.
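A rough sketch of such a migration is shown below. It is hypothetical: the `local_` table naming, the `db` instance, and the transaction API shape are assumptions modeled on the JavaScript SDK.

```javascript theme={null}
// Hypothetical sketch: promote rows from local-only tables to their synced
// counterparts once the user registers. Table names are illustrative.
function buildPromotionStatements(table) {
  // INSERT ... SELECT copies all rows in one statement; inserting into the
  // synced view is what places the rows in the upload queue.
  return [
    `INSERT INTO ${table} SELECT * FROM local_${table}`,
    `DELETE FROM local_${table}`
  ];
}

async function promoteLocalData(db, tables) {
  // One transaction, so a crash can't leave rows duplicated across tables.
  await db.writeTransaction(async (tx) => {
    for (const table of tables) {
      for (const sql of buildPromotionStatements(table)) {
        await tx.execute(sql);
      }
    }
  });
}
```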
The following example implementations are available:
| Client framework | Link |
| -------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- |
| Flutter To-Do List App (with Supabase) | [supabase-todolist-optional-sync](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist-optional-sync) |
| React To-Do List App (with Supabase) | [react-supabase-todolist-optional-sync](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-supabase-todolist-optional-sync) |
### 2. Clearing the upload queue
The upload queue can be cleared periodically (for example on every app start-up), avoiding the growth in database size over time. This can be done using:
```sql theme={null}
DELETE FROM ps_crud
```
It is up to the application to then re-create the queue when the user registers, or upload data directly from the existing tables instead.
A small amount of metadata per row is also stored in the `ps_oplog` table. We do not recommend deleting this data, as it can cause or hide consistency issues when later uploading the data. If the overhead in `ps_oplog` is too much, use the local-only tables approach instead.
### Local-only columns on synced tables
If you need individual local-only columns on a table that is otherwise synced (rather than an entirely local-only table), this can be achieved with [raw tables](/client-sdks/advanced/raw-tables#local-only-columns).
# Pre-Seeding SQLite Databases
Source: https://docs.powersync.com/client-sdks/advanced/pre-seeded-sqlite
Optimizing Initial Sync by Pre-Seeding SQLite Databases.
# Overview
When syncing large amounts of data to connected clients, it can be useful to pre-seed the SQLite database with an initial snapshot of the data. This can help to reduce the initial sync time and improve the user experience.
To achieve this, you can run server-side processes using the [PowerSync Node.js SDK](/client-sdks/reference/node) to pre-seed SQLite files. These SQLite files can then be uploaded to blob storage providers such as AWS S3, Azure Blob Storage, or Google Cloud Storage and downloaded directly by client applications. Client applications can then initialize the pre-seeded SQLite file, effectively bypassing the initial sync process.
## Demo App
If you're interested in seeing an end-to-end example, we've prepared a demo repo that can be used as a template for your own implementation. This repo covers all of the key concepts and code examples shown in this page.
Self-hosted PowerSync instance connected to a PostgreSQL database, using the PowerSync Node.js SDK, React Native SDK and AWS S3 for storing the pre-seeded SQLite files.
# Main Concepts
## Generate a scoped JWT token
In most cases you'd want to pre-seed the SQLite database with user specific data and not all data from the source database, as you normally would when using PowerSync. For this you would need to generate JWT tokens that include the necessary properties to satisfy the conditions of the queries in your Sync Streams (or legacy Sync Rules).
Let's say we have the following sync config:
```yaml theme={null}
sync_config:
content: |
config:
edition: 3
streams:
store_products:
query: SELECT * FROM products WHERE store_id = auth.parameter('store_id')
```
```yaml theme={null}
sync_config:
content: |
bucket_definitions:
store_products:
parameters: SELECT id as store_id FROM stores WHERE id = request.jwt() ->> 'store_id'
data:
- SELECT * FROM products WHERE store_id = bucket.store_id
```
In the example above the `store_id` is part of the JWT payload and is used to filter products by store for a user. Given this we would want to do the following:
1. Query the source database, directly from the Node.js application, for all the store ids you'd want a pre-seeded SQLite database for.
2. Generate a JWT token for each store and include the `store_id` in the payload.
3. In the Node.js application which implements the PowerSync SDK, return the JWT token in the `fetchCredentials()` function.
This will ensure that only the data for a specific store is pre-seeded into the SQLite database.
Here's an example of a function that generates a JWT token based on the `store_id` using the [`jose`](https://github.com/panva/jose) library:
```typescript theme={null}
import * as jose from 'jose';
export const generateToken = async (subject: string, store_id: string) => {
return await new jose.SignJWT({store_id: store_id}) // Set the store_id in the payload
.setProtectedHeader({ alg: 'HS256', kid: "My Kid" })
.setSubject(subject)
.setIssuedAt(new Date())
.setAudience('powersync')
.setExpirationTime('1h')
.sign(Buffer.from("My Base64 Encoded Secret", 'base64url'));
};
```
## Pre-seeding script
Once you've got a plan in place for generating the JWT tokens, you can write a simple script to connect to the PowerSync instance and pre-seed the SQLite database. Here's an example of a script that does this:
```typescript theme={null}
async function prepareDatabase (storeId: string) {
const backupPath = `/path/to/sqlite/${storeId}.sqlite`;
const connector = new Connector();
await powersync.connect(connector);
await powersync.waitForFirstSync();
await powersync.execute("DELETE FROM ps_kv WHERE key = ?", ["client_id"]);
await powersync.execute(`VACUUM INTO '${backupPath}'`);
await uploadFile(storeId, `${storeId}.sqlite`, backupPath);
await powersync.disconnect();
await powersync.close();
}
```
Some critical points to note:
* You will need to wait for the first sync to complete before deleting the `client_id` key and vacuuming the database. This makes sure all of the data is synced to the database before we proceed.
* The `client_id` key is used to identify the client device and is typically set when the client connects to the PowerSync instance. So when pre-seeding the database, we need to delete the `client_id` key to avoid conflicts when the client connects to the PowerSync instance.
* It's important to note that you will need to use the [`VACUUM INTO`](https://sqlite.org/lang_vacuum.html) command to create a clean, portable SQLite database file. This will help to reduce the size of the database file and provide an optimized version for the client to download.
* In this example the upload function is using AWS S3, but you can use any blob storage provider that you prefer.
### Scheduling and Cleaning Up
To enhance the process you can consider doing the following:
* To keep the pre-seeded SQLite databases fresh, schedule a cron job for periodic regeneration, ensuring that new clients always download the latest snapshot of the initial sync data.
* After each run, perform some environment cleanup to avoid disk bloat. This can be done by deleting the pre-seeded SQLite database files after they have been uploaded to the blob storage provider.
## Client Side Usage
When the client application boots, before connecting to the PowerSync instance, check if a SQLite database exists in the application's permanent storage. If it does, use it; otherwise, download a pre-seeded SQLite database from the blob storage provider.
Here's an example of a function that checks if a file exists in the application's permanent storage:
```typescript theme={null}
import { File, Paths } from 'expo-file-system/next';
export const FilePath = `${Paths.document.uri}`;
export const fileExists = (storeId: string) => {
const file = new File(FilePath, `${storeId}.sqlite`);
return file.exists;
}
```
Here's an example of a function that downloads the pre-seeded SQLite database from the blob storage provider:
```typescript theme={null}
export const downloadFile = async (storeId: string) => {
// Retrieve a pre-signed URL from the server that allows the client to download the file.
const response = await fetch(`https://your-api-url.com/database?store_id=${storeId}`);
const { databaseUrl } = await response.json();
// Download the file to the permanent location on the device.
const newFile = new File(FilePath, `${storeId}.sqlite`);
await File.downloadFileAsync(databaseUrl, newFile);
}
```
It's important that when the client downloads the pre-seeded SQLite database, it is stored in a permanent location on the device. This means the database will not be deleted when the app is restarted.
Depending on which PowerSync SDK you are using, you may need to use framework specific methods to store the file in a permanent location on the device. For example, with React Native + Expo you can use the [`expo-file-system`](https://docs.expo.dev/versions/latest/sdk/filesystem/) module to store the file in a permanent location on the device.
Once the database is downloaded, initialize the `PowerSyncDatabase` class with the file path and connect to the PowerSync instance.
```typescript theme={null}
import { OPSqliteOpenFactory } from '@powersync/op-sqlite';
import { PowerSyncDatabase } from '@powersync/react-native';
import { AppSchema } from './Schema';
// databasePath is the path to the pre-seeded SQLite database file on the device.
export const configureDatabase = async (storeId: string) => {
const opSqlite = new OPSqliteOpenFactory({
dbFilename: `${storeId}.sqlite`,
dbLocation: FilePath.replace('file://', '')
});
const powersync = new PowerSyncDatabase({
schema: AppSchema,
database: opSqlite,
});
// Call init() first, this will ensure the database is initialized, but not connected to the PowerSync instance.
await powersync.init();
// Insert a new `client_id` key into the `ps_kv` table to avoid conflicts when the client connects to the PowerSync instance.
await powersync.execute("INSERT INTO ps_kv (key, value) VALUES (?, ?)", ["client_id", "1234567890"]);
// Connect to the PowerSync instance.
await powersync.connect(connector);
}
```
It's important that you insert a new `client_id` key into the `ps_kv` table to avoid conflicts when the client connects to the PowerSync instance.
At this point the client would connect to the PowerSync instance and sync the data from where the pre-seeded snapshot was created, bypassing the initial sync process.
# Querying JSON Data in SQLite
Source: https://docs.powersync.com/client-sdks/advanced/query-json-in-sqlite
How to query JSON data synced from your backend and stored as strings in SQLite
# Overview
When syncing data from your backend source database to PowerSync, JSON columns (whether from MongoDB documents, PostgreSQL JSONB columns, or other JSON data types) are stored as `TEXT` in SQLite. See the [type mapping guide](/sync/types) for more details. This guide shows you how to effectively query and filter JSON data using SQLite's powerful JSON functions on the client.
## Understanding JSON Storage in PowerSync
Your backend source database might store structured data as JSON in various ways:
* **MongoDB**: Nested documents and arrays
* **PostgreSQL**: JSONB, JSON, array, or custom types
* **MySQL**: JSON columns
* **SQL Server**: JSON columns
Regardless of the source, PowerSync syncs these JSON structures to SQLite as `TEXT` columns. On the client side, you can query this data using SQLite's built-in JSON functions without needing to parse it yourself. Learn more about [how PowerSync handles JSON, arrays, and custom types](/client-sdks/advanced/custom-types-arrays-and-json#javascript).
## Example Data Structure
Let's use a task management system where tasks have nested metadata:
```json theme={null}
{
  "id": "task_123",
  "title": "Redesign homepage",
  "assignees": [
    {
      "user_id": "user_001",
      "role": "designer",
      "hours_allocated": 20
    },
    {
      "user_id": "user_002",
      "role": "developer",
      "hours_allocated": 40
    }
  ],
  "tags": ["urgent", "frontend", "design"],
  "metadata": {
    "priority": 1,
    "sprint": "2024-Q1",
    "dependencies": ["task_100", "task_101"]
  }
}
```
In SQLite, the `assignees`, `tags`, and `metadata` columns are stored as JSON strings. For details on how different backend types map to SQLite, see [database types and mapping](/sync/types).
## JSON Extraction Basics
### Standard [`json_extract()`](https://sqlite.org/json1.html#jex) Function
Extract values from JSON using path expressions:
```sql theme={null}
SELECT
  id,
  title,
  json_extract(metadata, '$.priority') AS priority,
  json_extract(metadata, '$.sprint') AS sprint
FROM tasks;
```
**Path syntax:**
* `$` - root element
* `.` - object member access
* `[index]` - array element access
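The path syntax above can be verified with a short standalone script; the `tasks` table and its row below are hypothetical stand-ins for a synced PowerSync table:

```python
import json
import sqlite3

# Minimal in-memory demo of json_extract() path syntax.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id TEXT, title TEXT, metadata TEXT)")
conn.execute(
    "INSERT INTO tasks VALUES (?, ?, ?)",
    ("task_123", "Redesign homepage",
     json.dumps({"priority": 1, "sprint": "2024-Q1",
                 "dependencies": ["task_100", "task_101"]})),
)

row = conn.execute(
    """
    SELECT
      json_extract(metadata, '$.priority'),        -- object member access
      json_extract(metadata, '$.sprint'),
      json_extract(metadata, '$.dependencies[0]')  -- array element access
    FROM tasks
    """
).fetchone()
print(row)  # (1, '2024-Q1', 'task_100')
```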
### Shorthand: The -> and ->> Operators
SQLite provides convenient [shorthand operators](https://sqlite.org/json1.html#jptr) for JSON extraction:
```sql theme={null}
SELECT
  id,
  title,
  metadata -> '$.dependencies' AS dependencies_array, -- maintains JSON array
  metadata ->> '$.sprint' AS sprint
FROM tasks;
```
**The difference between [-> and ->>](https://sqlite.org/json1.html#jptr):**
* `->` returns JSON (preserves type information, quotes strings)
* `->>` extracts the value unquoted (strings as TEXT, numbers/booleans as their native types)
```sql theme={null}
-- Using -> (returns JSON)
SELECT metadata -> '$.priority' FROM tasks;
-- Result: 1 (as a SQLite TEXT value)
-- Using ->> (returns parsed value)
SELECT metadata ->> '$.priority' FROM tasks;
-- Result: 1 (as a SQLite INTEGER value)
-- For strings, the difference is clearer:
SELECT metadata -> '$.sprint' FROM tasks;
-- Result: "2024-Q1" (with quotes, as JSON)
SELECT metadata ->> '$.sprint' FROM tasks;
-- Result: 2024-Q1 (without quotes, as text)
```
**When to use which:**
* Use `->>` when extracting **final values** for display or comparison
* Use `->` when extracting **intermediate JSON** for further processing
* `->>` preserves data types (numbers stay numbers, not strings)
### Nested Path Access
Access deeply nested values:
```sql theme={null}
-- All three are equivalent:
json_extract(metadata, '$.dependencies[0]')
metadata -> '$.dependencies[0]'
metadata -> '$.dependencies' -> '$[0]'
```
## Querying Arrays with [`json_each()`](https://sqlite.org/json1.html#jeach)
### Flattening Simple Arrays
For the `tags` array, use `json_each()` to create one row per element:
```sql theme={null}
SELECT
  t.id,
  t.title,
  tag.value AS tag
FROM tasks t,
  json_each(t.tags) AS tag
WHERE tag.value = 'urgent';
```
**What's happening:**
* `json_each(t.tags)` creates a virtual table with one row per tag
* `tag.value` contains each individual tag string
* You can filter, join, or aggregate these expanded rows
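A minimal runnable sketch of this flattening, using hypothetical tasks:

```python
import json
import sqlite3

# json_each() turns each tags array into one row per element, so WHERE can
# filter on individual tags.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id TEXT, title TEXT, tags TEXT)")
conn.executemany("INSERT INTO tasks VALUES (?, ?, ?)", [
    ("task_123", "Redesign homepage", json.dumps(["urgent", "frontend", "design"])),
    ("task_124", "Write changelog", json.dumps(["docs"])),
])

urgent = conn.execute(
    """
    SELECT t.id, tag.value
    FROM tasks t, json_each(t.tags) AS tag
    WHERE tag.value = 'urgent'
    """
).fetchall()
print(urgent)  # [('task_123', 'urgent')]
```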
### Querying Nested Objects in Arrays
For complex objects like `assignees`:
```sql theme={null}
SELECT
  t.id,
  t.title,
  assignee.value ->> '$.user_id' AS user_id,
  assignee.value ->> '$.role' AS role,
  assignee.value -> '$.hours_allocated' AS hours
FROM tasks t,
  json_each(t.assignees) AS assignee
WHERE (assignee.value ->> '$.role') = 'developer';
```
**Key points:**
* Each `assignee.value` is a JSON object representing one assignee
* Use `->>` to extract text values for comparison
* Use `->` when you need numeric values for calculations
## Real-World Query Examples
### Example 1: Finding Tasks by Assignee
**Use case:** Show all tasks assigned to a specific user.
```sql theme={null}
SELECT DISTINCT
  t.id,
  t.title,
  t.metadata ->> '$.priority' AS priority
FROM tasks t,
  json_each(t.assignees) AS assignee
WHERE (assignee.value ->> '$.user_id') = 'user_001'
ORDER BY t.metadata ->> '$.priority';
```
### Example 2: Calculating Total Hours by Role
**Use case:** Aggregate hours across all tasks grouped by role.
```sql theme={null}
SELECT
  assignee.value ->> '$.role' AS role,
  SUM(assignee.value ->> '$.hours_allocated') AS total_hours,
  COUNT(DISTINCT t.id) AS task_count
FROM tasks t,
  json_each(t.assignees) AS assignee
GROUP BY role
ORDER BY total_hours DESC;
```
### Example 3: Tasks with Specific Tags
**Use case:** Find tasks tagged with multiple specific tags.
```sql theme={null}
-- Tasks with BOTH 'urgent' AND 'frontend' tags
SELECT DISTINCT t.*
FROM tasks t
WHERE EXISTS (
    SELECT 1 FROM json_each(t.tags)
    WHERE value = 'urgent'
  )
  AND EXISTS (
    SELECT 1 FROM json_each(t.tags)
    WHERE value = 'frontend'
  );
```
Or using a simpler approach for single tags:
```sql theme={null}
-- Tasks with 'urgent' tag
SELECT *
FROM tasks t,
  json_each(t.tags) AS tag
WHERE tag.value = 'urgent';
```
### Example 4: Filtering by Array Contents
**Use case:** Find tasks that depend on a specific task ID.
```sql theme={null}
SELECT *
FROM tasks t,
  json_each(t.metadata -> '$.dependencies') AS dep
WHERE dep.value = 'task_100';
```
### Example 5: Checking for Array Membership
**Use case:** Check if a task has any dependencies.
```sql theme={null}
SELECT
  id,
  title,
  json_array_length(metadata -> '$.dependencies') AS dep_count
FROM tasks
WHERE json_array_length(metadata -> '$.dependencies') > 0;
```
## Working with Comma or Delimiter-Separated Values
Sometimes JSON strings contain delimiter-separated values (like `"NYC;LAX;MIA"`). Here's how to query them efficiently:
```sql theme={null}
-- Assume tasks have a field: "approved_by": "user_001;user_002;user_003"
-- Find tasks approved by a specific user
SELECT *
FROM tasks
WHERE instr(
  ';' || (metadata ->> '$.approved_by') || ';',
  ';user_001;'
) > 0;
```
**Why this pattern works:**
* Wraps the value: `";user_001;user_002;user_003;"`
* Searches for `;user_001;` ensuring exact delimiter-bounded match
* Prevents false matches (won't match "user\_0011" when searching for "user\_001")
**Avoid `LIKE` for delimited strings:**
```sql theme={null}
-- ❌ WRONG - can match partial values
WHERE (metadata ->> '$.approved_by') LIKE '%user_001%'
-- This would incorrectly match "user_0011" or "user_001_archive"
-- ✅ CORRECT - exact delimiter match
WHERE instr(';' || (metadata ->> '$.approved_by') || ';', ';user_001;') > 0
```
## Advanced Techniques
### Using CTEs for Cleaner Queries
Common Table Expressions make complex JSON queries more readable:
```sql theme={null}
WITH task_assignees AS (
  SELECT
    t.id,
    t.title,
    assignee.value ->> '$.user_id' AS user_id,
    assignee.value ->> '$.role' AS role,
    assignee.value ->> '$.hours_allocated' AS hours
  FROM tasks t,
    json_each(t.assignees) AS assignee
)
SELECT
  user_id,
  role,
  SUM(hours) AS total_hours,
  COUNT(*) AS assignment_count
FROM task_assignees
WHERE hours > 10
GROUP BY user_id, role;
```
### Combining Multiple JSON Arrays
Query across multiple nested arrays:
```sql theme={null}
SELECT DISTINCT
  t.id,
  t.title,
  assignee.value ->> '$.user_id' AS assigned_to,
  tag.value AS tag
FROM tasks t,
  json_each(t.assignees) AS assignee,
  json_each(t.tags) AS tag
WHERE tag.value IN ('urgent', 'high-priority')
  AND assignee.value ->> '$.role' = 'developer';
```
**Cartesian product warning:** When using multiple `json_each()` calls, you create a Cartesian product. A task with 3 assignees and 4 tags creates 12 rows. Use `DISTINCT` when needed and filter early to minimize row expansion.
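The row expansion is easy to demonstrate with hypothetical data:

```python
import json
import sqlite3

# Two json_each() calls form a Cartesian product: a single task with
# 3 assignees and 4 tags expands to 12 rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id TEXT, assignees TEXT, tags TEXT)")
conn.execute("INSERT INTO tasks VALUES (?, ?, ?)", (
    "task_123",
    json.dumps([{"user_id": "user_001"}, {"user_id": "user_002"}, {"user_id": "user_003"}]),
    json.dumps(["urgent", "frontend", "design", "q1"]),
))

rows = conn.execute(
    "SELECT COUNT(*) FROM tasks t, json_each(t.assignees) AS a, json_each(t.tags) AS tag"
).fetchone()[0]
print(rows)  # 12
```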
### Checking for Key Existence
Verify if a JSON key exists:
```sql theme={null}
-- Check if 'sprint' key exists
SELECT *
FROM tasks
WHERE json_extract(metadata, '$.sprint') IS NOT NULL;

-- Or using shorthand
SELECT *
FROM tasks
WHERE metadata -> '$.sprint' IS NOT NULL;
```
## Performance Optimization
**Important Performance Considerations**
1. **Index JSON columns for better performance**: If you frequently query JSON fields, add indexes to the JSON string columns in your `AppSchema`:
```typescript theme={null}
// Note: the `id` column is added automatically by PowerSync and is not declared here.
const tasks = new Table(
  {
    title: column.text,
    metadata: column.text,
    tags: column.text,
  },
  {
    indexes: {
      tagsIndex: ['tags']
    }
  }
);
```
2. **Minimize `json_each()` usage**: Each `json_each()` call expands rows. For a table with 10,000 tasks averaging 5 assignees each, you're processing 50,000 rows.
3. **Use EXISTS for membership checks**: More efficient than joining:
```sql theme={null}
-- ✅ BETTER for large datasets
SELECT * FROM tasks t
WHERE EXISTS (
  SELECT 1 FROM json_each(t.tags) WHERE value = 'urgent'
);
-- vs joining which creates all row combinations
```
4. **Cache extracted values in CTEs**: Extract once, use multiple times:
```sql theme={null}
WITH task_metrics AS (
  SELECT
    t.id,
    t.title,
    t.metadata,
    COUNT(assignee.value) AS assignee_count,
    SUM(assignee.value ->> '$.hours_allocated') AS total_hours
  FROM tasks t,
    json_each(t.assignees) AS assignee
  GROUP BY t.id, t.title, t.metadata
)
SELECT *
FROM task_metrics
WHERE metadata ->> '$.sprint' = '2024-Q1'
  AND assignee_count > 1
ORDER BY total_hours DESC;
```
## Useful JSON Functions
Beyond extraction, SQLite offers many JSON utilities:
```sql theme={null}
-- Get array length
SELECT json_array_length(tags) FROM tasks;

-- Check JSON validity
SELECT json_valid(metadata) FROM tasks;

-- Get all object keys
SELECT json_each.key, json_each.value
FROM tasks,
  json_each(tasks.metadata)
WHERE id = 'task_123';

-- Get JSON type of a value
SELECT json_type(metadata -> '$.priority') FROM tasks;
-- Returns: 'integer', 'text', 'array', 'object', 'null', etc.

-- Aggregate JSON arrays
SELECT json_group_array(tag.value)
FROM tasks t,
  json_each(t.tags) AS tag
WHERE t.id = 'task_123';
```
## Common Gotchas
**Watch out for these common issues:**
1. **NULL vs missing keys**: `json_extract()` returns `NULL` for non-existent paths. Always check for NULL:
```sql theme={null}
WHERE COALESCE(metadata ->> '$.priority', 999) = 1
```
2. **Type mismatches**:
```sql theme={null}
-- ❌ String comparison (wrong!)
WHERE metadata -> '$.priority' > 5
-- ✅ BEST: Use ->> for direct numeric extraction
WHERE metadata ->> '$.priority' > 5
```
3. **Array index bounds**: Out-of-bounds array access returns NULL, not an error:
```sql theme={null}
SELECT metadata -> '$.dependencies[99]' -- Returns NULL if not enough elements
```
4. **Quotes in JSON strings**: Use `->>` to get unquoted text, not `->`:
```sql theme={null}
-- ❌ Returns: "2024-Q1" (with quotes)
WHERE metadata -> '$.sprint' = '2024-Q1'
-- ✅ Returns: 2024-Q1 (without quotes)
WHERE metadata ->> '$.sprint' = '2024-Q1'
```
5. **Performance on large arrays**: `json_each()` on arrays with thousands of elements can be slow. Consider data restructuring for such cases.
## Summary
Querying JSON data in SQLite effectively requires:
* Understanding that JSON is stored as strings but queryable with built-in functions
* Using `json_extract()` or the shorthand `->` and `->>` operators
* Leveraging `json_each()` to flatten arrays for filtering and aggregation
* Being mindful of type conversions and NULL handling
* Optimizing queries by filtering early and considering denormalization for critical paths
With these techniques, you can query complex nested data structures synced from your backend while maintaining good performance on mobile and edge devices.
For complete SQLite JSON function reference, see the [SQLite JSON documentation](https://www.sqlite.org/json1.html).
# Raw SQLite Tables to Bypass JSON View Limitations
Source: https://docs.powersync.com/client-sdks/advanced/raw-tables
Use raw tables for native SQLite functionality and improved performance.
Raw tables are an experimental feature. We're actively seeking feedback on:
* API design and developer experience
* Additional features or optimizations needed
View the [roadmap doc](https://docs.google.com/document/d/1h2sayKHsQ2hwSAaBlR8z7ReEVDeJ2t1Zzs8N0um5SJc/edit?tab=t.0) to see our latest thinking on the future of this feature, and join our [Discord community](https://discord.gg/powersync) to share your experience and get help.
By default, PowerSync uses a [JSON-based view system](/architecture/client-architecture#schema) where data is stored schemalessly in JSON format and then presented through SQLite views based on the client-side schema. Raw tables allow you to define native SQLite tables in the client-side schema, bypassing this.
This eliminates overhead associated with extracting values from the JSON data and provides access to advanced SQLite features like foreign key constraints and custom indexes.
**Availability**
Features described on this page were introduced in the following versions of our client SDKs:
* **JavaScript** (Node: `0.18.0`, React-Native: `1.31.0`, Web: `1.35.0`)
* **Dart**: Version 1.18.0 of `package:powersync`.
* **Kotlin**: Version 1.11.0.
* **Swift**: Version 1.12.0.
* **Rust**: Version 0.0.4.
* **.NET**: Not yet available.
## When to Use Raw Tables
Consider raw tables when you need:
* **Indexes** - PowerSync's default schema has basic support for indexes on columns, while raw tables give you complete control to create indexes on expressions, use `GENERATED` columns, etc.
* **Improved performance** for complex queries (e.g., `SELECT SUM(value) FROM transactions`) - raw tables more efficiently get these values directly from the SQLite column, instead of extracting the value from the JSON object on every row.
* **Reduced storage overhead** - eliminate the JSON object overhead for each row in the `data` column of `ps_data__` tables.
* **To manually create tables** - Sometimes you need full control over table creation, for example when implementing custom triggers.
**Advanced SQLite features** like `FOREIGN KEY` and `ON DELETE CASCADE` constraints need [special consideration](#using-foreign-keys).
## How Raw Tables Work
### Current JSON-Based System
Currently the sync system involves two general steps:
1. Download bucket operations from the PowerSync Service.
2. Once the client has a complete checkpoint and no pending local changes in the upload queue, sync the local database with the bucket operations.
The bucket operations use JSON to store the individual operation data. The local database uses tables with a simple schemaless `ps_data__` structure containing only an `id` (TEXT) and `data` (JSON) column.
PowerSync automatically creates views on that table that extract JSON fields to resemble standard tables reflecting your schema.
### Raw Tables Approach
When opting in to raw tables, you are responsible for creating the tables before using them - PowerSync will no longer create them automatically.
Because PowerSync takes no control over raw tables, you need to manually:
1. Define how PowerSync's [schemaless protocol](/architecture/powersync-protocol#protocol) maps to your raw tables — see [Define sync mapping for raw tables](#define-sync-mapping-for-raw-tables)
2. Define triggers that capture local writes from raw tables — see [Capture local writes with triggers](#capture-local-writes-with-triggers)
For the purpose of this example, consider a simple table like this:
```sql theme={null}
CREATE TABLE todo_lists (
  id TEXT NOT NULL PRIMARY KEY,
  created_by TEXT NOT NULL,
  title TEXT NOT NULL,
  content TEXT
) STRICT;
```
### Define sync mapping for raw tables
To sync into the raw `todo_lists` table instead of `ps_data__`, PowerSync needs the SQL statements extracting columns from the untyped JSON protocol used during syncing.
Internally, this involves two SQL statements:
1. A `put` SQL statement for upserts, responsible for creating a `todo_list` row or updating it based on its `id` and data columns.
2. A `delete` SQL statement responsible for deletions.
The PowerSync client, as part of our SDKs, automatically runs these statements in response to sync lines sent by the PowerSync Service.
In most cases, these statements can be inferred automatically. However, the statements can also be given explicitly if customization is needed.
#### Inferring sync statements
In most cases, the `put` and `delete` statements are obvious when looking at the structure of the table.
With the `todo_list` example, a delete statement would `DELETE FROM todo_lists WHERE id = $row_id_to_delete`.
Similarly, a `put` statement would use a straightforward upsert to create or update rows.
When the SDK knows the name of the local table you're inserting into, it can infer statements automatically
by analyzing the `CREATE TABLE` structure.
A raw table with inferred statements is declared by providing its name along with a `RawTableSchema`:
```javascript JavaScript theme={null}
// Raw tables are not included in the regular Schema() object.
// Instead, add them afterwards using withRawTables().
const mySchema = new Schema({
  // Define your PowerSync-managed schema here
  // ...
});

mySchema.withRawTables({
  todo_lists: {
    schema: {},
  }
});
```
```dart Dart theme={null}
// Raw tables are not part of the regular tables list and can be defined with the optional rawTables parameter.
const schema = Schema([], rawTables: [
  RawTable.inferred(
    name: 'todo_lists',
    schema: RawTableSchema(),
  ),
]);
```
```kotlin Kotlin theme={null}
// To define a raw table, include it in the list of tables passed to the Schema
val schema = Schema(listOf(
    RawTable(
        name = "todo_lists",
        schema = RawTableSchema(),
    )
))
```
```swift Swift theme={null}
// To define a raw table, include it in the list of tables passed to the Schema
let lists = RawTable(
    name: "todo_lists",
    schema: RawTableSchema()
)
let schema = Schema(lists)
```
```csharp .NET theme={null}
Unfortunately, raw tables are not yet available in the .NET SDK.
```
```rust Rust theme={null}
use powersync::schema::{RawTable, RawTableSchema, Schema};

pub fn app_schema() -> Schema {
    let mut schema = Schema::default();
    let table = RawTable::with_schema("todo_lists", RawTableSchema::default());
    schema.raw_tables.push(table);
    schema
}
```
**When to use inferred statements**
If you have a local table that directly corresponds to the schema of a synced output table,
inferred statements greatly simplify the schema setup.
You will need explicit sync statements if, for instance:
* you want to apply transformations on synced values before inserting them into your local database.
* you need custom default values for synced `NULL` values.
* you're using the [rest column pattern](#the-_extra-column-pattern) to help with migrations.
* you have a custom setup where a raw table stores data from multiple source tables.
If the name of the SQLite table and the name of the synced table aren't the same, the inferred
statements can be customized.
For instance, say you had a `local_users` table in your SQLite database and want to sync rows
from the `users` table in your backend.
Here, the name of the raw table must be `users` to match PowerSync definitions, but the `RawTableSchema`
type on every SDK has an optional `tableName` field that can be set to `local_users` in this case.
#### Explicit sync statements
To pass statements explicitly, use the `put` and `delete` parameters available in each SDK.
A statement consists of two parts:
1. An SQL string of the statement to run. It should use positional parameters (`?`) as placeholders for values from the synced row.
2. An array describing how each positional parameter is instantiated.
`delete` statements can reference the id of the affected row, while `put` statements can also reference individual column values.
A `rest` parameter is also available, see [migrations](#the-_extra-column-pattern) for details on how that can be useful.
Declaring these statements and parameters happens as part of the schema passed to PowerSync databases:
```javascript JavaScript theme={null}
// Raw tables are not included in the regular Schema() object.
// Instead, add them afterwards using withRawTables().
// The values of parameters are described as a JSON array either containing:
//  - the string 'Id' to reference the id of the affected row.
//  - the object { Column: name } to reference the value of the column 'name'.
const mySchema = new Schema({
  // Define your PowerSync-managed schema here
  // ...
});

mySchema.withRawTables({
  // The name here doesn't have to match the name of the table in SQL. Instead, it's used to match
  // the table name from the backend source database as sent by the PowerSync Service.
  todo_lists: {
    put: {
      sql: 'INSERT OR REPLACE INTO todo_lists (id, created_by, title, content) VALUES (?, ?, ?, ?)',
      params: ['Id', { Column: 'created_by' }, { Column: 'title' }, { Column: 'content' }]
    },
    delete: {
      sql: 'DELETE FROM todo_lists WHERE id = ?',
      params: ['Id']
    }
  }
});

// We will simplify this API after understanding the use-cases for raw tables better.
```
```dart Dart theme={null}
// Raw tables are not part of the regular tables list and can be defined with the optional rawTables parameter.
final schema = Schema(const [], rawTables: const [
  RawTable(
    // The name here doesn't have to match the name of the table in SQL. Instead, it's used to match
    // the table name from the backend source database as sent by the PowerSync Service.
    name: 'todo_lists',
    put: PendingStatement(
      sql: 'INSERT OR REPLACE INTO todo_lists (id, created_by, title, content) VALUES (?, ?, ?, ?)',
      params: [
        .id(),
        .column('created_by'),
        .column('title'),
        .column('content'),
      ],
    ),
    delete: PendingStatement(
      sql: 'DELETE FROM todo_lists WHERE id = ?',
      params: [
        .id(),
      ],
    ),
  ),
]);
```
```kotlin Kotlin theme={null}
// To define a raw table, include it in the list of tables passed to the Schema
val schema = Schema(listOf(
    RawTable(
        // The name here doesn't have to match the name of the table in SQL. Instead, it's used to match
        // the table name from the backend database as sent by the PowerSync service.
        name = "todo_lists",
        put = PendingStatement(
            "INSERT OR REPLACE INTO todo_lists (id, created_by, title, content) VALUES (?, ?, ?, ?)",
            listOf(
                PendingStatementParameter.Id,
                PendingStatementParameter.Column("created_by"),
                PendingStatementParameter.Column("title"),
                PendingStatementParameter.Column("content")
            )
        ),
        delete = PendingStatement(
            "DELETE FROM todo_lists WHERE id = ?", listOf(PendingStatementParameter.Id)
        )
    )
))
```
```swift Swift theme={null}
// To define a raw table, include it in the list of tables passed to the Schema
let lists = RawTable(
    name: "todo_lists",
    put: PendingStatement(
        sql: "INSERT OR REPLACE INTO todo_lists (id, created_by, title, content) VALUES (?, ?, ?, ?)",
        parameters: [.id, .column("created_by"), .column("title"), .column("content")]
    ),
    delete: PendingStatement(
        sql: "DELETE FROM todo_lists WHERE id = ?",
        parameters: [.id]
    )
)

let schema = Schema(lists)
```
```csharp .NET theme={null}
Unfortunately, raw tables are not yet available in the .NET SDK.
```
```rust Rust theme={null}
use powersync::schema::{PendingStatement, PendingStatementValue, RawTable, Schema};

pub fn app_schema() -> Schema {
    let mut schema = Schema::default();
    let lists = RawTable::with_statements(
        "todo_lists",
        PendingStatement {
            sql: "INSERT OR REPLACE INTO todo_lists (id, created_by, title, content) VALUES (?, ?, ?, ?)".into(),
            params: vec![
                PendingStatementValue::Id,
                PendingStatementValue::Column("created_by".into()),
                PendingStatementValue::Column("title".into()),
                PendingStatementValue::Column("content".into()),
            ],
        },
        PendingStatement {
            sql: "DELETE FROM todo_lists WHERE id = ?".into(),
            params: vec![PendingStatementValue::Id],
        },
    );
    schema.raw_tables.push(lists);
    schema
}
```
After adding raw tables to the schema, you're also responsible for creating them by executing the corresponding `CREATE TABLE` statement before `connect()`-ing the database.
### Capture local writes with triggers
PowerSync uses an internal SQLite table to collect local writes. For PowerSync-managed views, a trigger for insertions, updates and deletions automatically forwards local mutations into this table. When using raw tables, defining those triggers is your responsibility.
The [PowerSync SQLite extension](https://github.com/powersync-ja/powersync-sqlite-core) creates an insert-only virtual table named `powersync_crud` with these columns:
```sql theme={null}
-- This table is part of the PowerSync SQLite core extension
CREATE VIRTUAL TABLE powersync_crud(
  -- The type of operation: 'PUT', 'PATCH' or 'DELETE'
  op TEXT,
  -- The id of the affected row
  id TEXT,
  -- The name of the affected table, as defined in the schema
  type TEXT,
  -- optional (not set on deletes): The column values for the row
  data TEXT,
  -- optional: Previous column values to include in a CRUD entry
  old_values TEXT,
  -- optional: Metadata for the write to include in a CRUD entry
  metadata TEXT
);
```
The virtual table associates local mutations with the current transaction and ensures writes made during the sync process (applying server-side changes) don't count as local writes.
The role of triggers is to insert into `powersync_crud` to record writes on raw tables.
Like [with statements](#inferring-sync-statements), these triggers can usually be inferred from the schema of the table.
#### Inferred triggers
The `powersync_create_raw_table_crud_trigger` SQL function is available in migrations to create triggers for
raw tables. It takes three arguments:
1. A JSON description of the raw table with options, which can be generated by PowerSync SDKs.
2. The name of the trigger to create.
3. The type of write for which to generate a trigger (`INSERT`, `UPDATE` or `DELETE`). Typically, you'd generate all three.
`powersync_create_raw_table_crud_trigger` parses the structure of tables from the database schema, so it
must be called *after* the raw table has been created.
```javascript JavaScript theme={null}
const table: RawTable = { name: 'todo_lists', schema: {} };

await database.execute("CREATE TABLE todo_lists (...)");
for (const write of ["INSERT", "UPDATE", "DELETE"]) {
  await database.execute(
    "SELECT powersync_create_raw_table_crud_trigger(?, ?, ?)",
    [JSON.stringify(Schema.rawTableToJson(table)), `todo_lists_${write}`, write],
  );
}
```
```dart Dart theme={null}
const table = RawTable.inferred(
  name: 'todo_lists',
  schema: RawTableSchema(),
);

await database.execute("CREATE TABLE todo_lists (...)");
for (final write in ["INSERT", "UPDATE", "DELETE"]) {
  await database.execute(
    "SELECT powersync_create_raw_table_crud_trigger(?, ?, ?)",
    [json.encode(table), "todo_lists_$write", write],
  );
}
```
```kotlin Kotlin theme={null}
// To define a raw table, include it in the list of tables passed to the Schema
val table = RawTable(
    name = "todo_lists",
    schema = RawTableSchema(),
)

database.execute("CREATE TABLE todo_lists (...)")
for (write in listOf("INSERT", "UPDATE", "DELETE")) {
    database.execute(
        "SELECT powersync_create_raw_table_crud_trigger(?, ?, ?)",
        listOf(table.jsonDescription(), "todo_lists_$write", write),
    )
}
```
```swift Swift theme={null}
let lists = RawTable(
    // The name here specifies the name of the table in your backend database or sync configuration.
    name: "todo_lists",
    schema: RawTableSchema()
)

try await database.execute("CREATE TABLE todo_lists (...)")
for write in ["INSERT", "UPDATE", "DELETE"] {
    try await database.execute(
        sql: "SELECT powersync_create_raw_table_crud_trigger(?, ?, ?)",
        parameters: [
            lists.jsonDescription(),
            "todo_lists_\(write)",
            write,
        ]
    )
}
```
```csharp .NET theme={null}
Unfortunately, raw tables are not yet available in the .NET SDK.
```
```rust Rust theme={null}
use powersync::schema::{RawTable, RawTableSchema};

pub async fn configure_raw_tables(db: &PowerSyncDatabase) -> Result<(), PowerSyncError> {
    let raw_table = RawTable::with_schema("todo_lists", RawTableSchema::default());
    let serialized_table = serde_json::to_string(&raw_table).unwrap();

    let mut writer = db.writer().await?;
    writer.execute("CREATE TABLE todo_lists (...);")?;

    let mut trigger_stmt = writer.prepare("SELECT powersync_create_raw_table_crud_trigger(?, ?, ?)");
    for write in &["INSERT", "UPDATE", "DELETE"] {
        trigger_stmt.query_one(
            params![serialized_table, format!("todo_lists_{write}"), write],
            |_| Ok(()),
        )?;
    }
    Ok(())
}
```
Note that these triggers are created just once! It is your responsibility to drop and re-create them after
altering the table.
Regular JSON-based tables include [advanced options](/client-sdks/advanced/custom-types-arrays-and-json#advanced-schema-options-to-process-writes).
These are also available on raw tables and they affect the generated trigger.
You can track previous values, mark a raw table as insert-only or configure the trigger to ignore
empty updates by passing an `options` parameter (Rust, Swift, Dart, Kotlin)
or set the options on the object literal when defining raw tables (JavaScript).
#### Explicit triggers
Triggers on raw tables can also be defined explicitly instead of using `powersync_create_raw_table_crud_trigger`.
It is your responsibility to set up and migrate these triggers along with the table:
```sql theme={null}
CREATE TRIGGER todo_lists_insert
AFTER INSERT ON todo_lists
FOR EACH ROW
BEGIN
  INSERT INTO powersync_crud (op, id, type, data) VALUES ('PUT', NEW.id, 'todo_lists', json_object(
    'created_by', NEW.created_by,
    'title', NEW.title,
    'content', NEW.content
  ));
END;

CREATE TRIGGER todo_lists_update
AFTER UPDATE ON todo_lists
FOR EACH ROW
BEGIN
  SELECT CASE
    WHEN (OLD.id != NEW.id)
    THEN RAISE (FAIL, 'Cannot update id')
  END;

  -- TODO: You may want to replace the json_object with a powersync_diff call of the old and new values, or
  -- use your own diff logic to avoid marking unchanged columns as updated.
  INSERT INTO powersync_crud (op, id, type, data) VALUES ('PATCH', NEW.id, 'todo_lists', json_object(
    'created_by', NEW.created_by,
    'title', NEW.title,
    'content', NEW.content
  ));
END;

CREATE TRIGGER todo_lists_delete
AFTER DELETE ON todo_lists
FOR EACH ROW
BEGIN
  INSERT INTO powersync_crud (op, id, type) VALUES ('DELETE', OLD.id, 'todo_lists');
END;
```
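To see the trigger mechanics in isolation, here is a plain-SQLite sketch. Since `powersync_crud` is a virtual table provided by the PowerSync extension, an ordinary `crud_log` table stands in for it here; the trigger bodies otherwise mirror the insert and delete triggers above:

```python
import sqlite3

# AFTER INSERT/DELETE triggers record each local write into a log table,
# the same pattern PowerSync uses to capture writes on raw tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE todo_lists (
        id TEXT NOT NULL PRIMARY KEY,
        created_by TEXT NOT NULL,
        title TEXT NOT NULL,
        content TEXT
    );
    CREATE TABLE crud_log (op TEXT, id TEXT, type TEXT, data TEXT);

    CREATE TRIGGER todo_lists_insert AFTER INSERT ON todo_lists
    FOR EACH ROW BEGIN
        INSERT INTO crud_log (op, id, type, data) VALUES ('PUT', NEW.id, 'todo_lists',
            json_object('created_by', NEW.created_by, 'title', NEW.title, 'content', NEW.content));
    END;

    CREATE TRIGGER todo_lists_delete AFTER DELETE ON todo_lists
    FOR EACH ROW BEGIN
        INSERT INTO crud_log (op, id, type) VALUES ('DELETE', OLD.id, 'todo_lists');
    END;
""")

conn.execute("INSERT INTO todo_lists VALUES ('l1', 'user_001', 'Groceries', NULL)")
conn.execute("DELETE FROM todo_lists WHERE id = 'l1'")
ops = conn.execute("SELECT op, id FROM crud_log ORDER BY rowid").fetchall()
print(ops)  # [('PUT', 'l1'), ('DELETE', 'l1')]
```

The real `powersync_crud` virtual table additionally ties writes to the current transaction and ignores writes made while applying synced data, which a plain log table cannot do.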
#### Using foreign keys
Raw tables support advanced table constraints including foreign keys. When enabling foreign keys however, you need to be aware of the following:
1. While PowerSync will always apply synced data in a transaction, there is no way to control the order in which rows get applied.
For this reason, foreign keys need to be configured with `DEFERRABLE INITIALLY DEFERRED`.
2. When using [stream priorities](/sync/advanced/prioritized-sync), you need to ensure you don't have foreign keys from high-priority
rows to lower-priority data. PowerSync applies data in one transaction per priority, so these foreign keys would not work.
3. As usual when using foreign keys, note that they need to be explicitly enabled with `pragma foreign_keys = on`.
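The effect of `DEFERRABLE INITIALLY DEFERRED` can be demonstrated with plain SQLite; the table names below are illustrative:

```python
import sqlite3

# Synced rows can be applied in any order within a transaction, so a child row
# may be inserted before its parent. A deferred constraint is only checked at
# COMMIT, which is why this succeeds.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # foreign keys must be enabled explicitly
conn.executescript("""
    CREATE TABLE todo_lists (id TEXT PRIMARY KEY);
    CREATE TABLE todos (
        id TEXT PRIMARY KEY,
        list_id TEXT REFERENCES todo_lists (id) DEFERRABLE INITIALLY DEFERRED
    );
""")

with conn:  # one transaction, mirroring how PowerSync applies a checkpoint
    conn.execute("INSERT INTO todos VALUES ('todo_1', 'list_1')")  # child first
    conn.execute("INSERT INTO todo_lists VALUES ('list_1')")       # parent later

count = conn.execute("SELECT COUNT(*) FROM todos").fetchone()[0]
print(count)  # 1
```

Without `DEFERRABLE INITIALLY DEFERRED`, the first insert would fail immediately with a foreign key violation.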
## Local-Only Columns
Raw tables allow you to add columns that exist only on the client and are never synced to the backend. This is useful for client-specific state like user preferences, local notes, or UI flags that should persist across app restarts but have no equivalent in the backend database.
Local-only columns are not supported with PowerSync's default [JSON-based view system](/architecture/client-architecture#schema). Raw tables are required for this functionality.
Building on the `todo_lists` example above, you can add local-only columns such as `is_pinned` and `local_notes`:
```sql theme={null}
CREATE TABLE IF NOT EXISTS todo_lists (
id TEXT NOT NULL PRIMARY KEY,
-- Synced columns
created_by TEXT NOT NULL,
title TEXT NOT NULL,
content TEXT,
-- Local-only columns (not synced)
is_pinned INTEGER NOT NULL DEFAULT 0,
local_notes TEXT
) STRICT;
```
### With inferred statements and triggers
Both the inferred `put` and `delete` statements as well as triggers generated by `powersync_create_raw_table_crud_trigger`
support local-only columns.
To configure this, include a `syncedColumns` array on the `RawTableSchema`:
```javascript JavaScript theme={null}
const table: RawTable = {
name: 'todo_lists',
schema: {
syncedColumns: ['created_by', 'title', 'content'],
},
};
```
```dart Dart theme={null}
const table = RawTable.inferred(
name: 'todo_lists',
schema: RawTableSchema(
syncedColumns: ['created_by', 'title', 'content'],
),
);
```
```kotlin Kotlin theme={null}
// To define a raw table, include it in the list of tables passed to the Schema
val table = RawTable(
name = "todo_lists",
schema = RawTableSchema(
syncedColumns = listOf("created_by", "title", "content"),
),
)
```
```swift Swift theme={null}
let lists = RawTable(
name: "todo_lists",
schema: RawTableSchema(
syncedColumns: ["created_by", "title", "content"]
)
)
```
```csharp .NET theme={null}
Unfortunately, raw tables are not yet available in the .NET SDK.
```
```rust Rust theme={null}
use powersync::schema::{RawTable, RawTableSchema};
let raw_table = RawTable::with_schema("todo_lists", {
let mut info = RawTableSchema::default();
// Columns not included in this list will not be synced.
info.synced_columns = Some(vec!["created_by", "title", "content"]);
info
});
```
### With explicit statements
The standard raw table setup requires modifications to support local-only columns:
#### Use upsert instead of INSERT OR REPLACE
The `put` statement must use `INSERT ... ON CONFLICT(id) DO UPDATE SET` instead of `INSERT OR REPLACE`. `INSERT OR REPLACE` deletes and re-inserts the row, which resets local-only columns to their defaults on every sync update. An upsert only updates the specified synced columns, leaving local-only columns intact.
Only synced columns should be referenced in the `put` params. Local-only columns are omitted entirely:
```javascript JavaScript theme={null}
schema.withRawTables({
todo_lists: {
put: {
sql: `INSERT INTO todo_lists (id, created_by, title, content)
VALUES (?, ?, ?, ?)
ON CONFLICT(id) DO UPDATE SET
created_by = excluded.created_by,
title = excluded.title,
content = excluded.content`,
params: ['Id', { Column: 'created_by' }, { Column: 'title' }, { Column: 'content' }]
},
delete: {
sql: 'DELETE FROM todo_lists WHERE id = ?',
params: ['Id']
}
}
});
```
```dart Dart theme={null}
final schema = Schema(const [], rawTables: const [
RawTable(
name: 'todo_lists',
put: PendingStatement(
sql: '''INSERT INTO todo_lists (id, created_by, title, content)
VALUES (?, ?, ?, ?)
ON CONFLICT(id) DO UPDATE SET
created_by = excluded.created_by,
title = excluded.title,
content = excluded.content''',
params: [
PendingStatementValue.id(),
PendingStatementValue.column('created_by'),
PendingStatementValue.column('title'),
PendingStatementValue.column('content'),
],
),
delete: PendingStatement(
sql: 'DELETE FROM todo_lists WHERE id = ?',
params: [
PendingStatementValue.id(),
],
),
),
]);
```
```kotlin Kotlin theme={null}
val schema = Schema(listOf(
RawTable(
name = "todo_lists",
put = PendingStatement(
"""INSERT INTO todo_lists (id, created_by, title, content)
VALUES (?, ?, ?, ?)
ON CONFLICT(id) DO UPDATE SET
created_by = excluded.created_by,
title = excluded.title,
content = excluded.content""",
listOf(
PendingStatementParameter.Id,
PendingStatementParameter.Column("created_by"),
PendingStatementParameter.Column("title"),
PendingStatementParameter.Column("content")
)
),
delete = PendingStatement(
"DELETE FROM todo_lists WHERE id = ?", listOf(PendingStatementParameter.Id)
)
)
))
```
```swift Swift theme={null}
let lists = RawTable(
name: "todo_lists",
put: PendingStatement(
sql: """
INSERT INTO todo_lists (id, created_by, title, content)
VALUES (?, ?, ?, ?)
ON CONFLICT(id) DO UPDATE SET
created_by = excluded.created_by,
title = excluded.title,
content = excluded.content
""",
parameters: [.id, .column("created_by"), .column("title"), .column("content")]
),
delete: PendingStatement(
sql: "DELETE FROM todo_lists WHERE id = ?",
parameters: [.id]
)
)
let schema = Schema(lists)
```
#### Exclude local-only columns from triggers
The `json_object()` in both the INSERT and UPDATE triggers should only reference synced columns. Local-only columns must not appear in the CRUD payload sent to the backend.
Additionally, the UPDATE trigger needs a `WHEN` clause that checks only synced columns. Without it, changes to local-only columns would fire the trigger and produce unnecessary CRUD entries that get uploaded. The `WHEN` clause must use `IS NOT` instead of `!=` for NULL-safe comparisons. `NULL != NULL` evaluates to `NULL` in SQLite, which would cause the trigger to skip legitimate changes to nullable synced columns.
```sql theme={null}
CREATE TRIGGER todo_lists_insert
AFTER INSERT ON todo_lists
FOR EACH ROW
BEGIN
INSERT INTO powersync_crud (op, id, type, data) VALUES ('PUT', NEW.id, 'todo_lists', json_object(
'created_by', NEW.created_by,
'title', NEW.title,
'content', NEW.content
));
END;
-- WHEN clause ensures this only fires for synced column changes.
-- Uses IS NOT instead of != for correct NULL handling.
CREATE TRIGGER todo_lists_update
AFTER UPDATE ON todo_lists
FOR EACH ROW
WHEN
OLD.created_by IS NOT NEW.created_by
OR OLD.title IS NOT NEW.title
OR OLD.content IS NOT NEW.content
BEGIN
INSERT INTO powersync_crud (op, id, type, data) VALUES ('PATCH', NEW.id, 'todo_lists', json_object(
'created_by', NEW.created_by,
'title', NEW.title,
'content', NEW.content
));
END;
CREATE TRIGGER todo_lists_delete
AFTER DELETE ON todo_lists
FOR EACH ROW
BEGIN
INSERT INTO powersync_crud (op, id, type) VALUES ('DELETE', OLD.id, 'todo_lists');
END;
```
With this setup, local-only columns can be queried and updated using standard SQL without affecting sync:
```sql theme={null}
-- Updating a local-only column does not produce a CRUD entry
UPDATE todo_lists SET is_pinned = 1 WHERE id = '...';
-- Local-only columns can be used in queries and ordering
SELECT * FROM todo_lists ORDER BY is_pinned DESC, title ASC;
```
## Migrations
In PowerSync's [JSON-based view system](/architecture/client-architecture#schema), the client-side schema is applied to the schemaless data, meaning no migrations are required. Raw tables, however, are excluded from this, so it is the developer's responsibility to manage migrations for these tables.
### Adding raw tables as a new table
When you add new tables to your Sync Streams (or legacy Sync Rules), clients will start to sync data for those tables - even if the tables aren't mentioned in the client's schema yet. So at the time you introduce a new raw table to your app, it's possible that PowerSync has already synced some data for that table, which would be stored in `ps_untyped`. When adding regular tables, PowerSync automatically extracts rows from `ps_untyped`. With raw tables, that step is your responsibility. To copy data, run these statements in a transaction after creating the table:
```sql theme={null}
INSERT INTO my_table (id, my_column, ...)
SELECT id, data ->> 'my_column' FROM ps_untyped WHERE type = 'my_table';
DELETE FROM ps_untyped WHERE type = 'my_table';
```
This does not apply if you've used the raw table from the beginning (and never called `connect()` without it) - you only need this step for raw tables you already had locally.
Another workaround is to clear PowerSync data when changing raw tables and opt for a full resync.
### Migrating to raw tables
To migrate from PowerSync-managed tables to raw tables, first:
1. Open the database with the new schema mentioning raw tables. PowerSync will copy data from tables previously managed by PowerSync into `ps_untyped`.
2. Create raw tables.
3. Run the `INSERT INTO ... SELECT` statement shown above to copy `ps_untyped` data into your raw tables.
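Combining these steps for the `todo_lists` example, the migration could look like this (a sketch; adjust the columns to your own schema):

```sql theme={null}
BEGIN;
CREATE TABLE todo_lists (
  id TEXT NOT NULL PRIMARY KEY,
  created_by TEXT NOT NULL,
  title TEXT NOT NULL,
  content TEXT
) STRICT;

-- Extract previously-synced rows from ps_untyped into the new raw table.
INSERT INTO todo_lists (id, created_by, title, content)
SELECT id, data ->> 'created_by', data ->> 'title', data ->> 'content'
FROM ps_untyped WHERE type = 'todo_lists';

DELETE FROM ps_untyped WHERE type = 'todo_lists';
COMMIT;
```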
### Migrations on raw tables
For JSON-based tables, migrations are trivial since all rows are stored as complete JSON objects.
Adding or removing columns only affects the views over the unchanged JSON data, making the schema effectively stateless.
For raw tables, the situation is different. When adding a new column, for instance, existing rows would
not have a value for it even if one had already been synced.
Suppose a new column is added with a simple migration: `ALTER TABLE todo_list ADD COLUMN priority INTEGER`.
This adds the new column on the client, with null values for each existing row.
If the client updates the schema before the server and then syncs the changes, every row effectively
resyncs with populated values for the new column, so clients observe a consistent state after the sync.
If new values have been synced before the client updates, existing rows may not receive values for the new
column until those rows are synced again! This is why special approaches are needed when migrating synced
tables.
#### Deleting data on migrations
One option that makes migrations safe (with obvious downsides) is to simply reset the database before
migrating: `await db.disconnectAndClear(soft: true)` deletes materialized sync rows while keeping
downloaded data active. Afterwards, migrations can alter the schema in any way before you reconnect.
In a soft clear, data doesn't have to be downloaded again in most cases. This might reduce the downtime
in which no data is available, but a network connection is necessary for data to become
available again.
#### Triggering resync on migrations
An alternative to deleting data is to trigger a resync *without* clearing tables.
For example:
```sql theme={null}
-- We need an (optimistic) default value for existing rows
ALTER TABLE todo_list ADD COLUMN priority INTEGER DEFAULT 1 NOT NULL;
SELECT powersync_trigger_resync(TRUE);
```
The optimistic default value would be overridden on the next completed sync (depending on when
the user is online again).
This means that the app is still usable offline after an update, but having optimistic state
on the client is a caveat because PowerSync normally has [stronger consistency guarantees](/architecture/consistency#consistency).
There may be cases where the approach of deleting data is a safer choice.
#### The `_extra` column pattern
Another option to avoid data inconsistencies in migrations is to ensure the raw table stores
a full row as expected by PowerSync.
To do that, you can introduce an extra column on your table designed to hold values from the backend
database that a client is not yet aware of:
```sql theme={null}
CREATE TABLE todo_lists (
id TEXT NOT NULL PRIMARY KEY,
created_by TEXT NOT NULL,
title TEXT NOT NULL,
content TEXT,
_extra TEXT
) STRICT;
```
The `_extra` column is not used in the app, but the sync service can be informed about it using
the `Rest` column source:
```javascript JavaScript theme={null}
mySchema.withRawTables({
// The name here doesn't have to match the name of the table in SQL. Instead, it's used to match
// the table name from the backend source database as sent by the PowerSync Service.
todo_lists: {
put: {
sql: 'INSERT OR REPLACE INTO todo_lists (id, created_by, title, content, _extra) VALUES (?, ?, ?, ?, ?)',
params: ['Id', { Column: 'created_by' }, { Column: 'title' }, { Column: 'content' }, 'Rest']
},
delete: ...
}
});
```
```dart Dart theme={null}
final schema = Schema(const [], rawTables: const [
RawTable(
name: 'todo_lists',
put: PendingStatement(
sql: 'INSERT OR REPLACE INTO todo_lists (id, created_by, title, content, _extra) VALUES (?, ?, ?, ?, ?)',
params: [
.id(),
.column('created_by'),
.column('title'),
.column('content'),
.rest(),
],
),
delete: PendingStatement(...),
),
]);
```
```kotlin Kotlin theme={null}
val schema = Schema(listOf(
RawTable(
name = "todo_lists",
put = PendingStatement(
"INSERT OR REPLACE INTO todo_lists (id, created_by, title, content, _extra) VALUES (?, ?, ?, ?, ?)",
listOf(
PendingStatementParameter.Id,
PendingStatementParameter.Column("created_by"),
PendingStatementParameter.Column("title"),
PendingStatementParameter.Column("content"),
PendingStatementParameter.Rest,
)
),
delete = PendingStatement(...)
)
))
```
```swift Swift theme={null}
let lists = RawTable(
name: "todo_lists",
put: PendingStatement(
sql: "INSERT OR REPLACE INTO todo_lists (id, created_by, title, content, _extra) VALUES (?, ?, ?, ?, ?)",
parameters: [.id, .column("created_by"), .column("title"), .column("content"), .rest]
),
delete: ...
)
```
```csharp .NET theme={null}
Unfortunately, raw tables are not yet available in the .NET SDK.
```
```rust Rust theme={null}
use powersync::schema::{PendingStatement, PendingStatementValue, RawTable, Schema};
let lists = RawTable::with_statements(
"todo_lists",
PendingStatement {
sql: "INSERT OR REPLACE INTO todo_lists (id, created_by, title, content, _extra) VALUES (?, ?, ?, ?, ?)".into(),
params: vec![
PendingStatementValue::Id,
PendingStatementValue::Column("created_by".into()),
PendingStatementValue::Column("title".into()),
PendingStatementValue::Column("content".into()),
PendingStatementValue::Rest,
]
},
...
);
```
If PowerSync then syncs a row like `{"created_by": "User", "title": "title", "content": "content", "tags": "Important"}`,
this put statement would set `_extra` to `{"tags":"Important"}`, ensuring that the entire source row
can be recovered from a row in the raw table.
This then allows writing migrations:
1. Adding new columns by using `json_extract(_extra, '$.newColumnName')` as a default value.
2. Removing existing columns by updating `_extra = json_set(_extra, '$.droppedColumnName', droppedColumnName)` before dropping
the column.
Don't forget to drop the CRUD triggers before running these statements in migrations (and to recreate them
afterwards), since these updates shouldn't result in `ps_crud` writes.
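For instance, adding a `priority` column while recovering values that may already have been synced into `_extra` could look like this (a sketch; run it with the CRUD triggers dropped):

```sql theme={null}
ALTER TABLE todo_lists ADD COLUMN priority INTEGER;

-- Backfill from _extra where the backend already synced a value.
UPDATE todo_lists SET priority = json_extract(_extra, '$.priority');

-- Remove the value from _extra so it isn't stored twice.
UPDATE todo_lists SET _extra = json_remove(_extra, '$.priority');
```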
## Deleting data and raw tables
APIs that clear an entire PowerSync database, such as `disconnectAndClear()`, don't affect raw tables by default. You can use the `clear` parameter on the `RawTable` constructor to set an SQL statement to run when clearing the database. Typically, something like `DELETE FROM $tableName` would be a reasonable statement to run.
`clear` statements are not inferred automatically and must always be set explicitly.
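As a sketch in JavaScript (the `put`/`delete` placeholders stand in for the statements defined earlier; the exact field placement may differ slightly per SDK, so consult your SDK's `RawTable` API):

```javascript
// Hypothetical raw table definition with an explicit clear statement.
const todoLists = {
  name: 'todo_lists',
  put: { /* put statement as defined earlier */ },
  delete: { /* delete statement as defined earlier */ },
  // Run when the database is cleared, e.g. via disconnectAndClear().
  clear: 'DELETE FROM todo_lists'
};
```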
Raw tables themselves are not managed by PowerSync; to delete them, you need to drop them manually with `DROP TABLE`.
# Sequential ID Mapping
Source: https://docs.powersync.com/client-sdks/advanced/sequential-id-mapping
Learn how to map a local UUID to a remote sequential (auto-incrementing) ID.
## Introduction
When auto-incrementing / sequential IDs are used on the backend source database, the ID can only be generated on the backend, not on the client while offline.
To handle this, you can use a secondary UUID on the client, then map it to a sequential ID when performing an update on the backend source database.
This allows using a sequential primary key for each record, with a UUID as a secondary ID.
This mapping must be performed wherever the UUIDs are referenced, including for every foreign key column.
To illustrate this, we will use the [React To-Do List demo app](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-supabase-todolist) and modify it to use UUIDs
on the client and map them to sequential IDs on the backend source database (Supabase in this case).
### Overview
Before we get started, let's outline the changes we will have to make:
1. Update the `lists` and `todos` tables.
2. Add two triggers that map the UUID to the integer ID and vice versa.
3. Update your Sync Streams (or legacy Sync Rules) to use the UUID column instead of the integer ID.
The following components/files will have to be updated:
* *Files*:
* `AppSchema.ts`
* `fts_setup.ts`
* `SupabaseConnector.ts`
* *Components*:
* `lists.tsx`
* `page.tsx`
* `SearchBarWidget.tsx`
* `TodoListsWidget.tsx`
## Schema
In order to map the UUID to the integer ID, we need to update the
* `lists` table by adding a `uuid` column, which will be the secondary ID, and
* `todos` table by adding a `uuid` column, and a `list_uuid` foreign key column which references the `uuid` column in the `lists` table.
```sql schema {3, 13, 21, 26} theme={null}
create table public.lists (
id serial,
uuid uuid not null unique,
created_at timestamp with time zone not null default now(),
name text not null,
owner_id uuid not null,
constraint lists_pkey primary key (id),
constraint lists_owner_id_fkey foreign key (owner_id) references auth.users (id) on delete cascade
) tablespace pg_default;
create table public.todos (
id serial,
uuid uuid not null unique,
created_at timestamp with time zone not null default now(),
completed_at timestamp with time zone null,
description text not null,
completed boolean not null default false,
created_by uuid null,
completed_by uuid null,
list_id int not null,
list_uuid uuid not null,
constraint todos_pkey primary key (id),
constraint todos_created_by_fkey foreign key (created_by) references auth.users (id) on delete set null,
constraint todos_completed_by_fkey foreign key (completed_by) references auth.users (id) on delete set null,
constraint todos_list_id_fkey foreign key (list_id) references lists (id) on delete cascade,
constraint todos_list_uuid_fkey foreign key (list_uuid) references lists (uuid) on delete cascade
) tablespace pg_default;
```
With the schema updated, we now need a method to synchronize and map the `list_id` and `list_uuid` in the `todos` table, with the `id` and `uuid` columns in the `lists` table.
We can achieve this by creating SQL triggers.
## Create SQL Triggers
We need to create triggers that can look up the integer ID for the given UUID and vice versa.
These triggers will maintain consistency between `list_id` and `list_uuid` in the `todos` table by ensuring that they remain synchronized with the `id` and `uuid` columns in the `lists` table,
even if changes are made to either field.
We will create the following two triggers that cover either scenario of updating the `list_id` or `list_uuid` in the `todos` table:
1. `update_integer_id`, and
2. `update_uuid_column`
The `update_integer_id` trigger ensures that whenever a `list_uuid` value is inserted or updated in the `todos` table,
the corresponding `list_id` is fetched from the `lists` table and updated automatically. It also validates that the `list_uuid` exists in the `lists` table; otherwise, it raises an exception.
```sql theme={null}
CREATE OR REPLACE FUNCTION func_update_integer_id()
RETURNS TRIGGER AS $$
BEGIN
IF TG_OP = 'INSERT' THEN
-- Always update list_id on INSERT
SELECT id INTO NEW.list_id
FROM lists
WHERE uuid = NEW.list_uuid;
IF NOT FOUND THEN
RAISE EXCEPTION 'UUID % does not exist in lists', NEW.list_uuid;
END IF;
ELSIF TG_OP = 'UPDATE' THEN
-- Only update list_id if list_uuid changes
IF NEW.list_uuid IS DISTINCT FROM OLD.list_uuid THEN
SELECT id INTO NEW.list_id
FROM lists
WHERE uuid = NEW.list_uuid;
IF NOT FOUND THEN
RAISE EXCEPTION 'UUID % does not exist in lists', NEW.list_uuid;
END IF;
END IF;
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER update_integer_id
BEFORE INSERT OR UPDATE ON todos
FOR EACH ROW
EXECUTE FUNCTION func_update_integer_id();
```
The `update_uuid_column` trigger ensures that whenever a `list_id` value is inserted or updated in the todos table, the corresponding `list_uuid` is fetched from the
`lists` table and updated automatically. It also validates that the `list_id` exists in the `lists` table.
```sql update_uuid_column theme={null}
CREATE OR REPLACE FUNCTION func_update_uuid_column()
RETURNS TRIGGER AS $$
BEGIN
IF TG_OP = 'INSERT' THEN
-- Always update list_uuid on INSERT
SELECT uuid INTO NEW.list_uuid
FROM lists
WHERE id = NEW.list_id;
IF NOT FOUND THEN
RAISE EXCEPTION 'ID % does not exist in lists', NEW.list_id;
END IF;
ELSIF TG_OP = 'UPDATE' THEN
-- Only update list_uuid if list_id changes
IF NEW.list_id IS DISTINCT FROM OLD.list_id THEN
SELECT uuid INTO NEW.list_uuid
FROM lists
WHERE id = NEW.list_id;
IF NOT FOUND THEN
RAISE EXCEPTION 'ID % does not exist in lists', NEW.list_id;
END IF;
END IF;
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER update_uuid_column
BEFORE INSERT OR UPDATE ON todos
FOR EACH ROW
EXECUTE FUNCTION func_update_uuid_column();
```
We now have triggers in place that will handle the mapping for our updated schema and
can move on to updating your Sync Streams (or legacy Sync Rules) to use the UUID column instead of the integer ID.
## Update Sync Streams
As sequential IDs can only be created on the backend source database, we need to use UUIDs in the client. The sync config is updated to use the `uuid` column as the `id` column for the `lists` and `todos` tables, explicitly defining which columns to select so that `list_id` (the integer ID) is no longer exposed to the client.
```yaml theme={null}
config:
edition: 3
streams:
user_lists:
auto_subscribe: true
with:
user_lists_param: SELECT id FROM lists WHERE owner_id = auth.user_id()
queries:
- "SELECT lists.uuid AS id, lists.created_at, lists.name, lists.owner_id FROM lists WHERE lists.id IN user_lists_param"
- "SELECT todos.uuid AS id, todos.created_at, todos.completed_at, todos.description, todos.completed, todos.created_by, todos.list_uuid FROM todos WHERE todos.list_id IN user_lists_param"
```
```yaml sync-config.yaml {4, 7-8} theme={null}
bucket_definitions:
user_lists:
# Separate bucket per todo list
parameters: select id from lists where owner_id = request.user_id()
data:
# Explicitly define all the columns
- select uuid as id, created_at, name, owner_id from lists where id = bucket.id
- select uuid as id, created_at, completed_at, description, completed, created_by, list_uuid from todos where list_id = bucket.id
```
We can now move on to updating the client to use UUIDs.
## Update Client to Use UUIDs
With the Sync Streams updated, the client no longer receives the `list_id` column in the `todos` table.
We start by updating `AppSchema.ts` and replacing `list_id` with `list_uuid` in the `todos` table.
```typescript AppSchema.ts {3, 11} theme={null}
const todos = new Table(
{
list_uuid: column.text,
created_at: column.text,
completed_at: column.text,
description: column.text,
created_by: column.text,
completed_by: column.text,
completed: column.integer
},
{ indexes: { list: ['list_uuid'] } }
);
```
The `uploadData` function in `SupabaseConnector.ts` needs to be updated to use the new `uuid` column in both tables.
```typescript SupabaseConnector.ts {13, 17, 20} theme={null}
export class SupabaseConnector extends BaseObserver implements PowerSyncBackendConnector {
// other code
async uploadData(database: AbstractPowerSyncDatabase): Promise<void> {
// other code
try {
for (const op of transaction.crud) {
lastOp = op;
const table = this.client.from(op.table);
let result: any;
switch (op.op) {
case UpdateType.PUT:
const record = { ...op.opData, uuid: op.id };
result = await table.upsert(record);
break;
case UpdateType.PATCH:
result = await table.update(op.opData).eq('uuid', op.id);
break;
case UpdateType.DELETE:
result = await table.delete().eq('uuid', op.id);
break;
}
}
} catch (ex: any) {
// other code
}
}
}
```
For the remaining files, we simply need to replace any reference to `list_id` with `list_uuid`.
```typescript fts_setup.ts {3} theme={null}
export async function configureFts(): Promise<void> {
await createFtsTable('lists', ['name'], 'porter unicode61');
await createFtsTable('todos', ['description', 'list_uuid']);
}
```
```tsx page.tsx {4, 14} theme={null}
const TodoEditSection = () => {
// code
const { data: todos } = useQuery(
`SELECT * FROM ${TODOS_TABLE} WHERE list_uuid=? ORDER BY created_at DESC, id`,
[listID]
);
// code
const createNewTodo = async (description: string) => {
// other code
await powerSync.execute(
`INSERT INTO
${TODOS_TABLE}
(id, created_at, created_by, description, list_uuid)
VALUES
(uuid(), datetime(), ?, ?, ?)`,
[userID, description, listID!]
);
}
}
```
```tsx TodoListsWidget.tsx {10, 18} theme={null}
export function TodoListsWidget(props: TodoListsWidgetProps) {
// hooks and navigation
const { data: listRecords, isLoading } = useQuery(`
SELECT
${LISTS_TABLE}.*, COUNT(${TODOS_TABLE}.id) AS total_tasks, SUM(CASE WHEN ${TODOS_TABLE}.completed = true THEN 1 ELSE 0 END) as completed_tasks
FROM
${LISTS_TABLE}
LEFT JOIN ${TODOS_TABLE}
ON ${LISTS_TABLE}.id = ${TODOS_TABLE}.list_uuid
GROUP BY
${LISTS_TABLE}.id;
`);
const deleteList = async (id: string) => {
await powerSync.writeTransaction(async (tx) => {
// Delete associated todos
await tx.execute(`DELETE FROM ${TODOS_TABLE} WHERE list_uuid = ?`, [id]);
// Delete list record
await tx.execute(`DELETE FROM ${LISTS_TABLE} WHERE id = ?`, [id]);
});
};
}
```
```tsx SearchBarWidget.tsx {8, 19} theme={null}
export const SearchBarWidget: React.FC = () => {
const handleInputChange = async (value: string) => {
if (value.length !== 0) {
let listsSearchResults: any[] = [];
const todoItemsSearchResults = await searchTable(value, 'todos');
for (let i = 0; i < todoItemsSearchResults.length; i++) {
const res = await powersync.get(`SELECT * FROM ${LISTS_TABLE} WHERE id = ?`, [
todoItemsSearchResults[i]['list_uuid']
]);
todoItemsSearchResults[i]['list_name'] = res.name;
}
if (!todoItemsSearchResults.length) {
listsSearchResults = await searchTable(value, 'lists');
}
const formattedListResults: SearchResult[] = listsSearchResults.map(
(result) => new SearchResult(result['id'], result['name'])
);
const formattedTodoItemsResults: SearchResult[] = todoItemsSearchResults.map((result) => {
return new SearchResult(result['list_uuid'], result['list_name'] ?? '', result['description']);
});
setSearchResults([...formattedTodoItemsResults, ...formattedListResults]);
}
};
}
```
# State Management Libraries
Source: https://docs.powersync.com/client-sdks/advanced/state-management
Use PowerSync with state management libraries in Dart/Flutter
This guide is currently specific to the Dart/Flutter SDK. We may expand it to cover other SDKs in the future.
Our [demo apps](/intro/examples) for Flutter are intentionally kept simple to focus on demonstrating PowerSync APIs. Instead of using heavy state management solutions, they use simple global fields to make the PowerSync database accessible to widgets.
When adopting PowerSync in your own app, you might want a more sophisticated approach for state management. This guide explains how PowerSync's Dart/Flutter SDK integrates with popular state management packages.
Adopting PowerSync can actually simplify your app architecture by using a local SQLite database as the single source of truth for all data. For a general discussion on how PowerSync fits into modern app architecture, see [this blog post](https://dinkomarinac.dev/building-local-first-flutter-apps-with-riverpod-drift-and-powersync).
PowerSync exposes database queries with the standard `Future` and `Stream` classes from `dart:async`. Given how widely used these are
in the Dart ecosystem, PowerSync works well with all popular approaches for state management, such as:
1. Providers with `package:provider`: Create your database as a `Provider` and expose watched queries to child widgets with `StreamProvider`!
The provider for databases should `close()` the database in `dispose`.
2. Providers with `package:riverpod`: We mention relevant snippets [below](#riverpod).
3. Dependency injection with `package:get_it`: PowerSync databases can be registered with `registerSingletonAsync`. Again, make sure
to `close()` the database in the `dispose` callback.
4. The BLoC pattern with the `bloc` package: You can easily listen to watched queries in Cubits (although, if you find your
Blocs and Cubits becoming trivial wrappers around database streams, consider just `watch()`ing database queries in widgets directly -
that doesn't make your app [less testable](/client-sdks/advanced/unit-testing)!).
To simplify state management, avoid the use of hydrated blocs and cubits for state that depends on database queries. With PowerSync,
regular data is already available locally and doesn't need a second local cache.
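A minimal `package:provider` setup for approach 1 above could look like this (a sketch; `openMyDatabase()` is a placeholder for your own code that creates a `PowerSyncDatabase`):

```dart theme={null}
Provider<PowerSyncDatabase>(
  create: (_) => openMyDatabase(),
  // Close the database when the provider is disposed.
  dispose: (_, db) => db.close(),
  child: const MyApp(),
);
```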
## Riverpod
We have a [complete example](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist-drift) using PowerSync with modern Flutter libraries like Riverpod, Drift, and `auto_route`.
A good way to open PowerSync databases with Riverpod is to use an async provider. You can manage your `connect` and `disconnect` calls there, for instance by listening to authentication state:
```dart theme={null}
@Riverpod(keepAlive: true)
Future<PowerSyncDatabase> powerSyncInstance(Ref ref) async {
final db = PowerSyncDatabase(
schema: schema,
path: await _getDatabasePath(),
logger: attachedLogger,
);
await db.initialize();
// TODO: Listen for auth changes and connect() the database here.
ref.listen(yourAuthProvider, (prev, next) {
if (next.isAuthenticated && !prev.isAuthenticated) {
db.connect(connector: MyConnector());
}
// ...
});
ref.onDispose(db.close);
return db;
}
```
### Querying Data
To expose auto-updating query results, use a `StreamProvider` that reads from the database:
```dart theme={null}
final _lists = StreamProvider((ref) async* {
final database = await ref.read(powerSyncInstanceProvider.future);
yield* database.watch('SELECT * FROM lists');
});
```
### Waiting for sync
If you were awaiting `waitForFirstSync` before, you can keep doing that:
```dart theme={null}
final db = await ref.read(powerSyncInstanceProvider.future);
await db.waitForFirstSync();
```
Alternatively, you can expose the sync status as a provider and use that to determine
whether the synchronization has completed:
```dart theme={null}
final syncStatus = statefulProvider((ref, change) {
final status = Stream.fromFuture(ref.read(powerSyncInstanceProvider.future))
.asyncExpand((db) => db.statusStream);
final sub = status.listen(change);
ref.onDispose(sub.cancel);
return const SyncStatus();
});
@riverpod
bool didCompleteSync(Ref ref, [BucketPriority? priority]) {
final status = ref.watch(syncStatus);
if (priority != null) {
return status.statusForPriority(priority).hasSynced ?? false;
} else {
return status.hasSynced ?? false;
}
}
final class MyWidget extends ConsumerWidget {
const MyWidget({super.key});
@override
Widget build(BuildContext context, WidgetRef ref) {
final didSync = ref.watch(didCompleteSyncProvider());
if (!didSync) {
return const Text('Busy with sync...');
}
// ... content after first sync
}
}
```
### Attachment queue
If you're using the attachment queue helper to synchronize media assets, you can also wrap that in a provider:
```dart theme={null}
@Riverpod(keepAlive: true)
Future<YourAttachmentQueue> attachmentQueue(Ref ref) async {
final db = await ref.read(powerSyncInstanceProvider.future);
final queue = YourAttachmentQueue(db, remoteStorage);
await queue.init();
return queue;
}
```
Reading and awaiting this provider can then be used to show attachments:
```dart theme={null}
final class PhotoWidget extends ConsumerWidget {
  final TodoItem todo;

  const PhotoWidget({super.key, required this.todo});

  @override
  Widget build(BuildContext context, WidgetRef ref) {
    final photoState = ref.watch(_getPhotoStateProvider(todo.photoId));
    if (!photoState.hasValue) {
      return Container();
    }

    final data = photoState.value;
    if (data == null) {
      return Container();
    }

    String? filePath = data.photoPath;
    bool fileIsDownloading = !data.fileExists;
    bool fileArchived =
        data.attachment?.state == AttachmentState.archived.index;

    if (fileArchived) {
      return Column(
        crossAxisAlignment: CrossAxisAlignment.center,
        mainAxisAlignment: MainAxisAlignment.center,
        children: [
          const Text("Unavailable"),
          const SizedBox(height: 8),
        ],
      );
    }

    if (fileIsDownloading) {
      return const Text("Downloading...");
    }

    File imageFile = File(filePath!);
    int lastModified = imageFile.existsSync()
        ? imageFile.lastModifiedSync().millisecondsSinceEpoch
        : 0;
    Key key = ObjectKey('$filePath:$lastModified');

    return Image.file(
      key: key,
      imageFile,
      width: 50,
      height: 50,
    );
  }
}

class _ResolvedPhotoState {
  String? photoPath;
  bool fileExists;
  Attachment? attachment;

  _ResolvedPhotoState(
      {required this.photoPath, required this.fileExists, this.attachment});
}

@riverpod
Future<_ResolvedPhotoState> _getPhotoState(Ref ref, String? photoId) async {
  if (photoId == null) {
    return _ResolvedPhotoState(photoPath: null, fileExists: false);
  }

  final queue = await ref.read(attachmentQueueProvider.future);
  final photoPath = await queue.getLocalUri('$photoId.jpg');

  bool fileExists = await File(photoPath).exists();

  final row = await queue.db
      .getOptional('SELECT * FROM attachments_queue WHERE id = ?', [photoId]);

  if (row != null) {
    Attachment attachment = Attachment.fromRow(row);
    return _ResolvedPhotoState(
        photoPath: photoPath, fileExists: fileExists, attachment: attachment);
  }

  return _ResolvedPhotoState(
      photoPath: photoPath, fileExists: fileExists, attachment: null);
}
```
# Unit Testing
Source: https://docs.powersync.com/client-sdks/advanced/unit-testing
Set up unit tests for PowerSync in Dart/Flutter
This guide is currently specific to the Dart/Flutter SDK. We may expand it to cover other SDKs in the future.
For unit testing your projects using PowerSync (for example, testing whether your queries run as expected), you'll need the `powersync-sqlite-core` binary in your project's root directory.
## Setup
1. Download the PowerSync SQLite binary
* Go to [powersync-sqlite-core Releases](https://github.com/powersync-ja/powersync-sqlite-core/releases)
* Download the binary for your OS
2. Rename the binary
* Remove the architecture suffix from the filename
* Examples:
* `powersync_x64.dll` → `powersync.dll` (Windows)
* `libpowersync_aarch64.dylib` → `libpowersync.dylib` (macOS)
* `libpowersync_x64.so` → `libpowersync.so` (Linux)
3. Place the binary
* Move the renamed binary to your project's root directory
## Example Test
This example shows basic unit testing with PowerSync in Flutter. For more information, see the [Flutter unit testing documentation](https://docs.flutter.dev/cookbook/testing/unit/introduction).
```dart theme={null}
import 'dart:io';

import 'package:path/path.dart';
import 'package:powersync/powersync.dart';
import 'package:test/test.dart';

const schema = Schema([
  Table('customers', [Column.text('name'), Column.text('email')])
]);

late PowerSyncDatabase testDB;

Future<String> getTestDatabasePath() async {
  const dbFilename = 'powersync-test.db';
  final dir = Directory.current.absolute.path;
  return join(dir, dbFilename);
}

Future<void> openTestDatabase() async {
  testDB = PowerSyncDatabase(
    schema: schema,
    path: await getTestDatabasePath(),
  );
  await testDB.initialize();
}

void main() {
  setUp(openTestDatabase);

  test('INSERT', () async {
    await testDB.execute(
        'INSERT INTO customers(name, email) VALUES(?, ?)',
        ['John Doe', 'john@hotmail.com']);

    final results = await testDB.getAll('SELECT name, email FROM customers');
    expect(results.length, 1);
    expect(results.first['name'], 'John Doe');
    expect(results.first['email'], 'john@hotmail.com');
  });
}
```
#### If you have trouble loading the extension, confirm the following
Ensure that the SQLite3 binary installed on your system has extension loading enabled. You can confirm this as follows:
* Run `sqlite3` in your command-line interface.
* In the sqlite3 prompt run `PRAGMA compile_options;`
* Check the output for the option `ENABLE_LOAD_EXTENSION`.
* If you see `ENABLE_LOAD_EXTENSION`, it means extension loading is enabled.
If the above steps don't work, you can also confirm if extension loading is enabled by trying to load the extension in your command-line interface.
* Run `sqlite3` in your command-line interface.
* Run `.load /path/to/file/libpowersync.dylib` (macOS) or `.load /path/to/file/libpowersync.so` (Linux) or `.load /path/to/file/powersync.dll` (Windows).
* If this runs without error, then extension loading is enabled. If it fails with an error message about extension loading being disabled, then it’s not enabled in your SQLite installation.
If it is not enabled, you will have to download a compiled SQLite binary with extension loading enabled (e.g. using Homebrew) or [compile SQLite](https://www.sqlite.org/howtocompile.html) with extension loading enabled and
include it in your project's folder alongside the extension.
# Cascading Delete
Source: https://docs.powersync.com/client-sdks/cascading-delete
Perform cascading deletes on the client-side database
PowerSync [uses SQLite views](/architecture/client-architecture#schema) instead of standard tables, so SQLite features like foreign key constraints and cascading deletes are not available.
There's no built-in support for cascading deletes on the client, but you can achieve this in two ways:
1. Manual deletion in a transaction - Delete all related records in a single transaction (recommended for most cases). Every local mutation performed against SQLite via the PowerSync SDK is returned in `uploadData`; as long as you use `.execute()` for the mutation, the operation will be present in the upload queue.
2. Triggers - Create triggers on the [internal tables](/architecture/client-architecture#schema) (more complex, but more automatic). The triggers are created on the internal tables, not on the views defined by the client schema, similar to what is done [here](https://github.com/powersync-ja/powersync-js/blob/e77b1abfbed91988de1f4c707c24855cd66b2219/demos/react-supabase-todolist/src/app/utils/fts_setup.ts#L50).
## Example: Manual Transaction
This example from the [React Native To-Do List demo app](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-todolist) shows how to delete a `list` and all its associated `todos` in a single transaction:
```typescript theme={null}
const deleteList = async (id: string) => {
  await system.powersync.writeTransaction(async (tx) => {
    // Delete associated todos
    await tx.execute(`DELETE FROM ${TODO_TABLE} WHERE list_id = ?`, [id]);
    // Delete list record
    await tx.execute(`DELETE FROM ${LIST_TABLE} WHERE id = ?`, [id]);
  });
};
```
Every mutation performed via `.execute()` is added to the upload queue and returned in `uploadData`. PowerSync will also delete local records when your backend performs cascade deletes on the source database (as long as those tables are in the publication).
For example, if you delete a record from the local `lists` table and Supabase cascade-deletes the related `todos` records, PowerSync will also delete those local `todos` records once the client is online.
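## Example: Trigger-Based Deletion
For the trigger-based approach (option 2), a minimal sketch is shown below. It assumes the default internal schema, where each synced table `t` is backed by `ps_data__t` with an `id` column and a JSON `data` column — verify these names against your own database before relying on this. Note that deleting directly from internal tables bypasses the upload queue, so this pattern only mirrors deletes locally; your backend must still perform its own cascade.

```sql theme={null}
-- Hypothetical example: when a list row is deleted from the internal
-- ps_data__lists table, delete its todos from ps_data__todos.
-- Table and column names are assumptions based on the default internal schema.
CREATE TRIGGER IF NOT EXISTS cascade_delete_todos
AFTER DELETE ON ps_data__lists
BEGIN
  DELETE FROM ps_data__todos
  WHERE json_extract(data, '$.list_id') = OLD.id;
END;
```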
# Expo Go Support
Source: https://docs.powersync.com/client-sdks/frameworks/expo-go-support
PowerSync supports Expo Go with @powersync/adapter-sql-js
Expo Go is a sandbox environment that allows you to quickly test your application without building a development build. To enable PowerSync in Expo Go, we provide a JavaScript-based database adapter: [`@powersync/adapter-sql-js`](https://www.npmjs.com/package/@powersync/adapter-sql-js).
# @powersync/adapter-sql-js
`@powersync/adapter-sql-js` is a development package for PowerSync which uses SQL.js to provide a pure JavaScript SQLite implementation. This eliminates the need for native dependencies and enables development with Expo Go and other JavaScript-only environments. Under the hood, it uses our custom fork [powersync-sql-js](https://github.com/powersync-ja/powersync-sql-js) - a fork of SQL.js (SQLite compiled to JavaScript via Emscripten) that loads PowerSync's Rust core extension.
This package is in an **alpha** release.
**Expo Go Sandbox Environment Only** This adapter is specifically designed for Expo Go and similar JavaScript-only environments. It will be much slower than native database adapters and has limitations. Every write operation triggers a complete rewrite of the entire database file to persistent storage, not just the changed data. In addition to the performance overheads, this adapter doesn't provide any of the SQLite consistency guarantees - you may end up with missing data or a corrupted database file if the app is killed while writing to the database file.
## Usage
### Quickstart
1. Create a new Expo app:
```bash theme={null}
npx create-expo-app@latest my-app
```
2. Navigate to your app directory and start the development server:
```bash theme={null}
cd my-app && npm run ios
```
3. In a new terminal tab, install PowerSync dependencies:
```bash theme={null}
npm install @powersync/react-native @powersync/adapter-sql-js
```
4. Replace the code in `app/(tabs)/index.tsx` with:
```tsx app/(tabs)/index.tsx theme={null}
import { SQLJSOpenFactory } from "@powersync/adapter-sql-js";
import { PowerSyncDatabase, Schema } from "@powersync/react-native";
import { useEffect, useState } from "react";
import { Text } from "react-native";

export const powerSync = new PowerSyncDatabase({
  schema: new Schema({}), // todo: define the schema - see Next Steps below
  database: new SQLJSOpenFactory({
    dbFilename: "example.db",
  }),
});

export default function HomeScreen() {
  const [version, setVersion] = useState<string | null>(null);

  useEffect(() => {
    powerSync.get("select powersync_rs_version();").then((r) => {
      setVersion(JSON.stringify(r));
    });
  }, []);

  return <>{version && <Text>PowerSync Initialized - {version}</Text>}</>;
}
```
### Existing App
To add the adapter to an existing Expo app:
1. Install the SQL.js adapter:
```bash theme={null}
npm install @powersync/adapter-sql-js
```
2. Set up PowerSync using the SQL.js factory:
```tsx SystemProvider.tsx theme={null}
import { SQLJSOpenFactory } from "@powersync/adapter-sql-js";
import { PowerSyncDatabase, Schema } from "@powersync/react-native";

export const powerSync = new PowerSyncDatabase({
  schema: new Schema({}), // todo: define the schema - see Next Steps below
  database: new SQLJSOpenFactory({
    dbFilename: "example.db",
  }),
});
```
## Next Steps
After adding PowerSync to your app:
1. [**Define what data to sync by setting up Sync Rules**](/sync/rules/overview)
2. [**Implement your SQLite client schema**](/client-sdks/reference/react-native-and-expo#1-define-the-client-side-schema)
3. [**Connect to PowerSync and your backend**](/client-sdks/reference/react-native-and-expo#3-integrate-with-your-backend)
## Data Persistence
The default version of this adapter uses in-memory persistence, but you can specify your own `persister` option to the open factory.
See an example in the package [README](https://www.npmjs.com/package/@powersync/adapter-sql-js).
## Moving Beyond Expo Go
When you're ready to move beyond the Expo Go sandbox environment - whether for native development builds or production deployment - we recommend switching to our native database adapters:
* [OP-SQLite](https://www.npmjs.com/package/@powersync/op-sqlite) (Recommended) - Offers built-in encryption support and better React Native New Architecture compatibility
* [React Native Quick SQLite](https://www.npmjs.com/package/@journeyapps/react-native-quick-sqlite) - Our original native adapter
These database adapters cannot run in Expo Go because they require native code compilation. Specifically, PowerSync needs a SQLite implementation that can load our [Rust core extension](https://github.com/powersync-ja/powersync-sqlite-core), which isn't possible in Expo Go's prebuilt app container.
These adapters provide better performance, full SQLite consistency guarantees, and are suitable for both development builds and production deployment. See the SDKs [Installation](/client-sdks/reference/react-native-and-expo#install-peer-dependencies) details for setup instructions.
### Switching Between Adapters - Example
If you want to keep using Expo Go alongside development and production builds, you can switch between different adapters based on the Expo `executionEnvironment`:
```js SystemProvider.tsx theme={null}
import { SQLJSOpenFactory } from "@powersync/adapter-sql-js";
import { PowerSyncDatabase } from "@powersync/react-native";
import Constants from "expo-constants";
const isExpoGo = Constants.executionEnvironment === "storeClient";

export const powerSync = new PowerSyncDatabase({
  schema: AppSchema,
  database: isExpoGo
    ? new SQLJSOpenFactory({
        dbFilename: "app.db",
      })
    : {
        dbFilename: "sqlite.db",
      },
});
```
# Dart/Flutter Web Support (Beta)
Source: https://docs.powersync.com/client-sdks/frameworks/flutter-web-support
Web support for Flutter in version `^1.9.0` is currently in a **beta** release. It is functionally ready for production use, provided that you've tested your use cases.
Please see the [Limitations](#limitations) detailed below.
## Demo app
The easiest way to test Flutter Web support is to run the [Supabase Todo-List](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist) demo app:
1. Clone the [powersync.dart](https://github.com/powersync-ja/powersync.dart/tree/main) repo.
   * **Note**: If you are an existing user updating to the latest code after a git pull, run `melos exec 'flutter pub upgrade'` in the repo's root and make sure it succeeds.
2. Run `melos prepare` in the repo's root.
3. `cd` into the `demos/supabase-todolist` folder.
4. If you haven't yet: `cp lib/app_config_template.dart lib/app_config.dart` (optionally update this config with your own Supabase and PowerSync project details).
5. Run `flutter run -d chrome`.
## Installing PowerSync in your own project
Install the [latest version](https://pub.dev/packages/powersync/versions) of the package, for example:
```bash theme={null}
flutter pub add powersync:'^1.9.0'
```
### Additional config
#### Assets
Web support requires `sqlite3.wasm` and worker (`powersync_db.worker.js` and `powersync_sync.worker.js`) assets to be served from the web application. They can be downloaded to the web directory by running the following command in your application's root folder.
```bash theme={null}
dart run powersync:setup_web
```
The same code is used for initializing native and web `PowerSyncDatabase` clients.
#### OPFS for improved performance
This SDK supports different storage modes of the SQLite database with varying levels of performance and compatibility:
* **IndexedDB**: Highly compatible with different browsers, but performance is slow.
* **OPFS** (Origin-Private File System): Significantly faster but requires additional configuration.
OPFS is the preferred mode when it is available. Otherwise database storage falls back to IndexedDB.
Enabling OPFS requires adding two headers to the HTTP server response when a client requests the Flutter web application:
* `Cross-Origin-Opener-Policy`: Needs to be set to `same-origin`.
* `Cross-Origin-Embedder-Policy`: Needs to be set to `require-corp`.
When running the app locally, you can use the following command to include the required headers:
```bash theme={null}
flutter run -d chrome --web-header "Cross-Origin-Opener-Policy=same-origin" --web-header "Cross-Origin-Embedder-Policy=require-corp"
```
When serving a Flutter Web app in production, the [Flutter docs](https://docs.flutter.dev/deployment/web#building-the-app-for-release) recommend building the web app with `flutter build web`, then serving the content with an HTTP server. The server should be configured to use the above headers.
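As a sketch of what that server-side configuration can look like, here is a minimal Node static file server (the `build/web` path and port are illustrative) that serves the build output with both headers set on every response:

```typescript theme={null}
import { createServer } from "node:http";
import { readFile } from "node:fs/promises";
import { join, normalize } from "node:path";

// The two headers required to enable OPFS (cross-origin isolation):
export const opfsHeaders: Record<string, string> = {
  "Cross-Origin-Opener-Policy": "same-origin",
  "Cross-Origin-Embedder-Policy": "require-corp",
};

export function serveWebBuild(root = "build/web", port = 8080) {
  const server = createServer(async (req, res) => {
    // Map "/" to index.html and resolve the file inside the build directory.
    const urlPath = req.url === "/" ? "/index.html" : (req.url ?? "/");
    const filePath = join(root, normalize(urlPath));
    try {
      const body = await readFile(filePath);
      res.writeHead(200, opfsHeaders);
      res.end(body);
    } catch {
      res.writeHead(404, opfsHeaders);
      res.end("Not found");
    }
  });
  return server.listen(port);
}
```

Any production-grade static file server or CDN works equally well, as long as both headers are present on every response that serves the app.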
**Further reading**:
[Drift](https://drift.simonbinder.eu/) uses the same packages as our [`sqlite_async`](https://github.com/powersync-ja/sqlite_async.dart) package under the hood, and has excellent documentation for how the web filesystem is selected. See [here](https://drift.simonbinder.eu/platforms/web/) for web compatibility notes and [here](https://drift.simonbinder.eu/platforms/web/#additional-headers) for additional notes on the required web headers.
## Limitations
The API for Web is essentially the same as for native platforms; however, some features within `PowerSyncDatabase` clients are not available.
### Imports
Flutter Web does not support importing directly from `sqlite3.dart` as it uses `dart:ffi`.
Change imports from:
```dart theme={null}
import 'package:powersync/sqlite3.dart';
```
to:
```dart theme={null}
import 'package:powersync/sqlite3_common.dart';
```
in code which needs to run on the Web platform. Isolated native-specific code can still import from `sqlite3.dart`.
### Database connections
Web database connections do not support concurrency. A single database connection is used. `readLock` and `writeLock` contexts do not implement checks for preventing writable queries in read connections and vice-versa.
Direct access to the synchronous `CommonDatabase` (`sqlite.Database` equivalent for web) connection is not available. `computeWithDatabase` is not available on web.
# Next.js + PowerSync
Source: https://docs.powersync.com/client-sdks/frameworks/next-js
A guide for creating a new Next.js application with PowerSync for offline/local first functionality
## Introduction
In this tutorial, we'll explore how to enhance a Next.js application with offline-first capabilities using PowerSync. In the following sections, we'll walk through the process of integrating PowerSync into a Next.js application, setting up local-first storage, and handling synchronization efficiently.
PowerSync is tailored for client-side applications — there isn't much benefit to using SSR with PowerSync. Some frameworks like Next.js push towards enabling SSR by default, which means code is evaluated in a Node.js runtime. The PowerSync Web SDK requires browser APIs which are not available in Node.js. For ergonomics, the SDK performs no-ops if used in Node.js (rather than throwing errors), but you should not expect any data from PowerSync during server-side rendering. If you are using SSR in your application, we recommend explicitly isolating PowerSync to client-side code.
## Setup
### Next.js Project Setup
Let's start by bootstrapping a new Next.js application using [`create-next-app`](https://nextjs.org/docs/app/api-reference/cli/create-next-app).
```shell npm theme={null}
npx create-next-app@latest my-powersync-app
```
```shell yarn theme={null}
yarn create next-app my-powersync-app
```
```shell pnpm theme={null}
pnpm create next-app my-powersync-app
```
When running this command you'll be presented with a few options. Our suggested selections for the setup options Next.js offers are:
```shell theme={null}
Would you like to use TypeScript? Yes
Would you like to use ESLint? Yes
Would you like to use Tailwind CSS? Yes
Would you like your code inside a `src/` directory? Yes
Would you like to use App Router? (recommended) Yes
Would you like to use Turbopack for `next dev`? Yes
Would you like to customize the import alias (`@/*` by default)? Yes
```
Turbopack is supported in Next.js 16+. If you're using an older version of Next.js, see the [Webpack configuration (legacy)](#webpack-configuration-legacy) section below.
### Install PowerSync Dependencies
Using PowerSync in a Next.js application requires the [PowerSync Web SDK](https://www.npmjs.com/package/@powersync/web) and its peer dependencies.
In addition to this we'll also install [`@powersync/react`](https://www.npmjs.com/package/@powersync/react), which provides several hooks and providers for easier integration.
```shell npm theme={null}
npm install @powersync/web @journeyapps/wa-sqlite @powersync/react
```
```shell yarn theme={null}
yarn add @powersync/web @journeyapps/wa-sqlite @powersync/react
```
```shell pnpm theme={null}
pnpm install @powersync/web @journeyapps/wa-sqlite @powersync/react
```
This SDK currently requires [@journeyapps/wa-sqlite](https://www.npmjs.com/package/@journeyapps/wa-sqlite) as a peer dependency.
### Copy Worker Assets
When using Turbopack, you need to copy the PowerSync worker files to your public directory. Add a `postinstall` script to your `package.json`:
```json package.json theme={null}
{
"scripts": {
"postinstall": "powersync-web copy-assets -o public"
}
}
```
Then run the script to copy the assets:
```shell npm theme={null}
npm run postinstall
```
```shell yarn theme={null}
yarn postinstall
```
```shell pnpm theme={null}
pnpm postinstall
```
This copies the pre-bundled worker files to `public/@powersync/`, which are required since Turbopack doesn't support dynamic imports of workers yet.
Add `public/@powersync/*` to your `.gitignore` file since these are generated assets.
## Next.js Config Setup
For Next.js 16+ with Turbopack, the configuration is minimal:
```typescript next.config.ts theme={null}
module.exports = {
  images: {
    disableStaticImages: true
  },
  turbopack: {}
};
Run `pnpm dev` to start the development server and check that everything compiles correctly before moving on to the next section.
### Webpack configuration (legacy)
If you're using an older version of Next.js (before 16) or prefer to use Webpack, use this configuration instead:
```typescript next.config.ts theme={null}
module.exports = {
  webpack: (config: any, { isServer }: any) => {
    config.experiments = {
      ...config.experiments,
      asyncWebAssembly: true,
      topLevelAwait: true,
    };

    if (!isServer) {
      config.module.rules.push({
        test: /\.wasm$/,
        type: "asset/resource",
      });
    }

    return config;
  }
}
```
## Configure a PowerSync Instance
Now that we've got our project setup, let's create a new PowerSync Cloud instance and connect our client to it.
For the purposes of this demo, we'll be using Supabase as the backend source database that PowerSync will connect to.
To set up a new PowerSync instance, follow the steps covered in the [Installation - Database Connection](/configuration/source-db/connection) docs page.
## Configure PowerSync in your project
### Add core PowerSync files
Start by adding a new directory in `./src/lib` named `powersync`.
#### `AppSchema`
Create a new file called `AppSchema.ts` in the newly created `powersync` directory and add your App Schema to the file. Here is an example of this.
```typescript lib/powersync/AppSchema.ts theme={null}
import { column, Schema, Table } from '@powersync/web';
const lists = new Table({
  created_at: column.text,
  name: column.text,
  owner_id: column.text
});

const todos = new Table(
  {
    list_id: column.text,
    created_at: column.text,
    completed_at: column.text,
    description: column.text,
    created_by: column.text,
    completed_by: column.text,
    completed: column.integer
  },
  { indexes: { list: ['list_id'] } }
);

export const AppSchema = new Schema({
  todos,
  lists
});

// For types
export type Database = (typeof AppSchema)['types'];
export type TodoRecord = Database['todos'];
// OR:
// export type Todo = RowType<typeof todos>;
export type ListRecord = Database['lists'];
```
This defines the local SQLite database schema and PowerSync will hydrate the tables once the SDK connects to the PowerSync instance.
#### `BackendConnector`
Create a new file called `BackendConnector.ts` in the `powersync` directory and add the following to the file.
```typescript lib/powersync/BackendConnector.ts theme={null}
import { AbstractPowerSyncDatabase, PowerSyncBackendConnector, UpdateType } from '@powersync/web';
export class BackendConnector implements PowerSyncBackendConnector {
  private powersyncUrl: string | undefined;
  private powersyncToken: string | undefined;

  constructor() {
    this.powersyncUrl = process.env.NEXT_PUBLIC_POWERSYNC_URL;
    // This token is for development only.
    // For production applications, integrate with an auth provider or custom auth.
    this.powersyncToken = process.env.NEXT_PUBLIC_POWERSYNC_TOKEN;
  }

  async fetchCredentials() {
    // TODO: Use an authentication service or custom implementation here.
    if (this.powersyncToken == null || this.powersyncUrl == null) {
      return null;
    }

    return {
      endpoint: this.powersyncUrl,
      token: this.powersyncToken
    };
  }

  async uploadData(database: AbstractPowerSyncDatabase): Promise<void> {
    const transaction = await database.getNextCrudTransaction();
    if (!transaction) {
      return;
    }

    try {
      for (const op of transaction.crud) {
        // The data that needs to be changed in the remote db
        const record = { ...op.opData, id: op.id };
        switch (op.op) {
          case UpdateType.PUT:
            // TODO: Instruct your backend API to CREATE a record
            break;
          case UpdateType.PATCH:
            // TODO: Instruct your backend API to PATCH a record
            break;
          case UpdateType.DELETE:
            // TODO: Instruct your backend API to DELETE a record
            break;
        }
      }

      await transaction.complete();
    } catch (error: any) {
      console.error(`Data upload error - discarding`, error);
      await transaction.complete();
    }
  }
}
```
There are two core functions to this file:
* `fetchCredentials()` - Used to return a JWT token to the PowerSync Service for authentication.
* `uploadData()` - Used to upload changes captured in the local SQLite database that need to be sent to the backend source database, in this case Supabase. We'll get back to this further down.
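The `switch` in `uploadData` typically maps each queued operation onto a REST call. As a self-contained sketch of that mapping (the `CrudOp` shape and the `/api/<table>/<id>` routes below are illustrative assumptions, not the SDK's actual types or your real API):

```typescript theme={null}
// Illustrative stand-in for the SDK's crud entries; the real entries come
// from transaction.crud (CrudEntry in @powersync/web).
type CrudOp = {
  op: "PUT" | "PATCH" | "DELETE";
  table: string;
  id: string;
  opData?: Record<string, unknown>;
};

export function toRequest(entry: CrudOp): {
  method: string;
  url: string;
  body?: unknown;
} {
  const url = `/api/${entry.table}/${entry.id}`;
  switch (entry.op) {
    case "PUT":
      // PUT carries the full row; include the id so the backend can create it.
      return { method: "PUT", url, body: { ...entry.opData, id: entry.id } };
    case "PATCH":
      // PATCH carries only the changed columns.
      return { method: "PATCH", url, body: entry.opData };
    case "DELETE":
      return { method: "DELETE", url };
  }
}
```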
You'll notice that we need to add a `.env` file to our project which will contain two variables:
* `NEXT_PUBLIC_POWERSYNC_URL` - This is the PowerSync instance url. You can grab this from the PowerSync Cloud dashboard.
* `NEXT_PUBLIC_POWERSYNC_TOKEN` - For development purposes we'll be using a development token. To generate one, please follow the steps outlined in [Development Token](/configuration/auth/development-tokens) from our installation docs.
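A matching `.env` file would look like this (both values are placeholders — substitute your own instance URL and generated development token):

```shell .env theme={null}
NEXT_PUBLIC_POWERSYNC_URL=https://your-instance-id.powersync.journeyapps.com
NEXT_PUBLIC_POWERSYNC_TOKEN=your-development-token
```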
### Create Providers
Create a new directory in `./src/app/components` named `providers`
#### `SystemProvider`
Add a new file in the newly created `providers` directory called `SystemProvider.tsx`.
```typescript components/providers/SystemProvider.tsx theme={null}
'use client';

import { AppSchema } from '@/lib/powersync/AppSchema';
import { BackendConnector } from '@/lib/powersync/BackendConnector';
import { PowerSyncContext } from '@powersync/react';
import { PowerSyncDatabase, WASQLiteOpenFactory, createBaseLogger, LogLevel } from '@powersync/web';
import React, { Suspense } from 'react';

const logger = createBaseLogger();
logger.useDefaults();
logger.setLevel(LogLevel.DEBUG);

const factory = new WASQLiteOpenFactory({
  dbFilename: 'powersync.db',
  // Use the pre-bundled worker from public/@powersync/
  // This is required since Turbopack doesn't support dynamic imports of workers yet
  worker: '/@powersync/worker/WASQLiteDB.umd.js'
});

export const db = new PowerSyncDatabase({
  database: factory,
  schema: AppSchema,
  flags: {
    disableSSRWarning: true
  },
  sync: {
    // Use the pre-bundled sync worker from public/@powersync/
    worker: '/@powersync/worker/SharedSyncImplementation.umd.js'
  }
});

const connector = new BackendConnector();
db.connect(connector);

export const SystemProvider = ({ children }: { children: React.ReactNode }) => {
  return (
    <Suspense>
      <PowerSyncContext.Provider value={db}>{children}</PowerSyncContext.Provider>
    </Suspense>
  );
};

export default SystemProvider;
```
The `SystemProvider` is responsible for initializing the `PowerSyncDatabase`. The worker paths point to the pre-bundled workers copied to the public directory by the `powersync-web copy-assets` command.
We also instantiate our `BackendConnector` and pass an instance of that to `db.connect()`. This will connect to the PowerSync instance, validate the token supplied in the `fetchCredentials` function and then start syncing with the PowerSync Service.
#### Update `layout.tsx`
In our main `layout.tsx` we'll update the `RootLayout` function to use the `SystemProvider`.
```typescript app/layout.tsx theme={null}
'use client';

import { SystemProvider } from '@/app/components/providers/SystemProvider';
import { Geist, Geist_Mono } from "next/font/google";
import "./globals.css";

const geistSans = Geist({
  variable: "--font-geist-sans",
  subsets: ["latin"],
});

const geistMono = Geist_Mono({
  variable: "--font-geist-mono",
  subsets: ["latin"],
});

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en">
      <body className={`${geistSans.variable} ${geistMono.variable}`}>
        <SystemProvider>{children}</SystemProvider>
      </body>
    </html>
  );
}
```
#### Use PowerSync
##### Reading Data
In our `page.tsx` we can now use the `useQuery` hook or other PowerSync functions to read data from the SQLite database and render the results in our application.
```typescript app/page.tsx theme={null}
'use client';

import { useState, useEffect } from 'react';
import { useQuery, useStatus, usePowerSync } from '@powersync/react';

export default function Page() {
  // Hook
  const powersync = usePowerSync();

  // Get database status information e.g. downloading, uploading and lastSynced dates
  const status = useStatus();

  // Example 1: Reactive query
  const { data: lists } = useQuery('SELECT * FROM lists');

  // Example 2: Standard query (note the variable name must differ from
  // Example 1 if you keep both approaches in the same component)
  const [staticLists, setStaticLists] = useState<any[]>([]);
  useEffect(() => {
    powersync.getAll('SELECT * FROM lists').then(setStaticLists);
  }, []);

  return (
    <ul>
      {lists.map((list: any) => (
        <li key={list.id}>{list.name}</li>
      ))}
    </ul>
  );
}
```
##### Writing Data
Using the `execute` function we can also write data into our local SQLite database.
```typescript theme={null}
await powersync.execute("INSERT INTO lists (id, created_at, name, owner_id) VALUES (?, ?, ?, ?)", [uuid(), new Date().toISOString(), "Test", user_id]);
```
Changes made against the local data are stored in the upload queue and processed by the `uploadData` function in the `BackendConnector` class.
# Nuxt Integration
Source: https://docs.powersync.com/client-sdks/frameworks/nuxt
PowerSync has first-class support for Nuxt. Use this guide to get started.
## Introduction
`@powersync/nuxt` is a Nuxt module that wraps [`@powersync/vue`](https://www.npmjs.com/package/@powersync/vue) and provides everything you need to build offline-first Nuxt applications. It re-exports all `@powersync/vue` composables so this is the only PowerSync dependency you need, and it adds a Nuxt Devtools integration with a PowerSync diagnostics panel for inspecting sync status, local data, config, and logs.
**Alpha:** The Nuxt PowerSync integration is currently in Alpha. APIs and behavior may change. We welcome feedback in [Discord](https://discord.com/invite/powersync) or on [GitHub](https://github.com/powersync-ja/powersync-js).
PowerSync is tailored for client-side applications — there isn't much benefit to using SSR with PowerSync. Nuxt evaluates plugins server-side unless you use the `.client.ts` suffix. The PowerSync Web SDK requires browser APIs that are not available in Node.js; it performs no-ops in a Node.js runtime rather than throwing errors, but you should not expect any data from PowerSync during server-side rendering. Always create your PowerSync plugin as `plugins/powersync.client.ts` to ensure it runs only in the browser.
For a complete working example, see the [Nuxt + Supabase Todo List demo](https://github.com/powersync-ja/powersync-js/tree/main/demos/nuxt-supabase-todolist).
## Setup
### Install PowerSync Dependencies
```shell npm theme={null}
npm install @powersync/nuxt
```
```shell pnpm theme={null}
pnpm add @powersync/nuxt @powersync/vue @powersync/web
```
With **npm** (v7+), peer dependencies are installed automatically. With **pnpm**, you must install peer dependencies explicitly, as shown above.
### Add the Module
Add `@powersync/nuxt` to the `modules` array in `nuxt.config.ts` and include the required Vite configuration:
```typescript nuxt.config.ts theme={null}
export default defineNuxtConfig({
  modules: ['@powersync/nuxt'],
  vite: {
    optimizeDeps: {
      exclude: ['@powersync/web']
    },
    worker: {
      format: 'es'
    }
  }
});
```
If you are using Tailwind CSS in your project, see the [Known Issues](#known-issues) section.
## Configure PowerSync in your Project
### Define your Schema
Create a file at `powersync/AppSchema.ts` and define your local SQLite schema. PowerSync will hydrate these tables once the SDK connects to your PowerSync instance.
```typescript powersync/AppSchema.ts theme={null}
import { column, Schema, Table } from '@powersync/web';
const lists = new Table({
  created_at: column.text,
  name: column.text,
  owner_id: column.text
});

const todos = new Table(
  {
    list_id: column.text,
    created_at: column.text,
    completed_at: column.text,
    description: column.text,
    created_by: column.text,
    completed_by: column.text,
    completed: column.integer
  },
  { indexes: { list: ['list_id'] } }
);

export const AppSchema = new Schema({
  todos,
  lists
});

// For types
export type Database = (typeof AppSchema)['types'];
export type TodoRecord = Database['todos'];
export type ListRecord = Database['lists'];
```
Learn more about defining your schema in the [JavaScript Web SDK reference](/client-sdk-references/javascript-web#1-define-the-schema).
### Create your Connector
Create a file at `powersync/PowerSyncConnector.ts`. The connector handles authentication and uploading local changes to your backend.
```typescript powersync/PowerSyncConnector.ts theme={null}
import { UpdateType, type AbstractPowerSyncDatabase, type PowerSyncBackendConnector } from '@powersync/web';

export class PowerSyncConnector implements PowerSyncBackendConnector {
  async fetchCredentials() {
    // Return a JWT for the PowerSync Service to authenticate the client.
    // See https://docs.powersync.com/installation/authentication-setup
    // For quick local testing, use a development token:
    // https://docs.powersync.com/installation/authentication-setup/development-tokens
    return {
      endpoint: '[Your PowerSync instance URL]',
      token: '[Your auth token]'
    };
  }

  async uploadData(db: AbstractPowerSyncDatabase) {
    // Send local changes to your backend.
    // See https://docs.powersync.com/client-sdk-references/javascript-web#3-integrate-with-your-backend
    const transaction = await db.getNextCrudTransaction();
    if (!transaction) return;

    try {
      for (const op of transaction.crud) {
        const record = { ...op.opData, id: op.id };
        switch (op.op) {
          case UpdateType.PUT:
            // TODO: send CREATE to your backend API
            break;
          case UpdateType.PATCH:
            // TODO: send PATCH to your backend API
            break;
          case UpdateType.DELETE:
            // TODO: send DELETE to your backend API
            break;
        }
      }
      await transaction.complete();
    } catch (error: any) {
      console.error('Data upload error - discarding', error);
      await transaction.complete();
    }
  }
}
```
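To make the `TODO` branches concrete, one common pattern is to translate each CRUD entry into a REST request. The endpoint layout (`/api/<table>/<id>`) and the local `CrudOp` shape below are illustrative assumptions, not a PowerSync requirement; `UpdateType` values are the strings `'PUT' | 'PATCH' | 'DELETE'`:

```typescript
// Hypothetical sketch: build an HTTP request descriptor per CRUD operation.
// The op shape mirrors the CrudEntry fields used above (op, table, id, opData).
type CrudOp = {
  op: 'PUT' | 'PATCH' | 'DELETE';
  table: string;
  id: string;
  opData?: Record<string, unknown>;
};

export function toRequest(op: CrudOp): { method: string; url: string; body?: string } {
  const url = `/api/${op.table}/${op.id}`;
  switch (op.op) {
    case 'PUT': // row created locally -> create upstream
      return { method: 'PUT', url, body: JSON.stringify({ ...op.opData, id: op.id }) };
    case 'PATCH': // row updated locally -> partial update upstream
      return { method: 'PATCH', url, body: JSON.stringify(op.opData ?? {}) };
    case 'DELETE':
      return { method: 'DELETE', url };
  }
}

// Inside uploadData, each descriptor could then be sent with fetch(url, { method, body, ... }).
```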
### Create the Plugin
Create a [Nuxt plugin](https://nuxt.com/docs/guide/directory-structure/plugins) at `plugins/powersync.client.ts`. The `.client.ts` suffix ensures this only runs in the browser.
```typescript plugins/powersync.client.ts theme={null}
import { NuxtPowerSyncDatabase, createPowerSyncPlugin } from '@powersync/nuxt';
import { AppSchema } from '~/powersync/AppSchema';
import { PowerSyncConnector } from '~/powersync/PowerSyncConnector';
export default defineNuxtPlugin({
  async setup(nuxtApp) {
    const db = new NuxtPowerSyncDatabase({
      database: {
        dbFilename: 'my-app.sqlite'
      },
      schema: AppSchema
    });

    const connector = new PowerSyncConnector();
    await db.init();
    await db.connect(connector);

    const plugin = createPowerSyncPlugin({ database: db });
    nuxtApp.vueApp.use(plugin);
  }
});
```
## Using PowerSync
The module automatically exposes all `@powersync/vue` composables. You can import and use them directly in any component or composable.
### Reading Data
```vue components/TodoList.vue theme={null}
<script setup lang="ts">
// Minimal sketch: useQuery and useStatus are the re-exported @powersync/vue composables.
import { useQuery, useStatus } from '@powersync/nuxt';

const status = useStatus();
const { data: todos, isLoading } = useQuery('SELECT * FROM todos ORDER BY created_at');
</script>

<template>
  <p>Status: {{ status.connected ? 'Connected' : 'Offline' }}</p>
  <p v-if="isLoading">Loading...</p>
  <ul v-else>
    <li v-for="todo in todos" :key="todo.id">{{ todo.description }}</li>
  </ul>
</template>
```
### Writing Data
Use `execute` to write to the local SQLite database. Changes are queued and uploaded to your backend via `uploadData` in the connector.
```typescript theme={null}
import { usePowerSync } from '@powersync/nuxt';
import { v4 as uuid } from 'uuid';
const powersync = usePowerSync();
await powersync.execute(
  'INSERT INTO lists (id, created_at, name, owner_id) VALUES (?, ?, ?, ?)',
  [uuid(), new Date().toISOString(), 'My List', currentUserId]
);
```
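Related statements can be grouped atomically with `writeTransaction`. A sketch using the same schema (`listId` and `currentUserId` are assumed to be in scope):

```typescript
import { usePowerSync } from '@powersync/nuxt';
import { v4 as uuid } from 'uuid';

const powersync = usePowerSync();

// Both statements commit together locally and are queued for upload in order.
await powersync.writeTransaction(async (tx) => {
  await tx.execute(
    'INSERT INTO todos (id, created_at, list_id, description, created_by) VALUES (?, ?, ?, ?, ?)',
    [uuid(), new Date().toISOString(), listId, 'Buy milk', currentUserId]
  );
  await tx.execute('UPDATE lists SET name = ? WHERE id = ?', ['Groceries', listId]);
});
```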
## Kysely ORM (Optional)
The module optionally exposes a `usePowerSyncKysely()` composable for type-safe query building. You must install the driver and opt in via config.
Install the driver:
```shell npm theme={null}
npm install @powersync/kysely-driver
```
```shell pnpm theme={null}
pnpm add @powersync/kysely-driver
```
Enable it in `nuxt.config.ts`:
```typescript nuxt.config.ts theme={null}
export default defineNuxtConfig({
  modules: ['@powersync/nuxt'],
  powersync: {
    kysely: true
  },
  vite: {
    optimizeDeps: {
      exclude: ['@powersync/web']
    },
    worker: {
      format: 'es'
    }
  }
});
```
Then use `usePowerSyncKysely` with your schema's `Database` type for full type safety:
```typescript theme={null}
import { usePowerSyncKysely } from '@powersync/nuxt';
import { type Database } from '~/powersync/AppSchema';
const db = usePowerSyncKysely<Database>();
const lists = await db.selectFrom('lists').selectAll().execute();
```
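Typed writes work the same way. A sketch of an insert against the `lists` table from the schema above (`currentUserId` is assumed to be in scope):

```typescript
import { usePowerSyncKysely } from '@powersync/nuxt';
import { type Database } from '~/powersync/AppSchema';
import { v4 as uuid } from 'uuid';

const db = usePowerSyncKysely<Database>();

// Column names and value types are validated against Database at compile time.
await db
  .insertInto('lists')
  .values({
    id: uuid(),
    created_at: new Date().toISOString(),
    name: 'My List',
    owner_id: currentUserId
  })
  .execute();
```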
## Diagnostics & Inspector
The `@powersync/nuxt` module includes a PowerSync diagnostics panel (Inspector) that you can open from the **Nuxt Devtools** PowerSync tab or at **`/__powersync-inspector`**. It shows sync status, local data, config, and logs. Diagnostics must be explicitly enabled (see below).
### Enabling Diagnostics
Add `powersync: { useDiagnostics: true }` to your `nuxt.config.ts`:
```typescript nuxt.config.ts theme={null}
export default defineNuxtConfig({
  modules: ['@powersync/nuxt'],
  powersync: {
    useDiagnostics: true
  },
  vite: {
    optimizeDeps: {
      exclude: ['@powersync/web']
    },
    worker: {
      format: 'es'
    }
  }
});
```
When `useDiagnostics: true` is set, `NuxtPowerSyncDatabase` automatically:
* Extends your schema with the diagnostics schema
* Sets up diagnostics recording and logging
* Stores the connector internally so the inspector can access it
No changes to your plugin code are needed.
### Accessing the Inspector
Once diagnostics are enabled, you can open the inspector in two ways:
* **Nuxt Devtools**: open Devtools in your browser and look for the PowerSync tab
* **Direct URL**: navigate to `http://localhost:3000/__powersync-inspector`
The inspector provides the following views:
* **Sync Status** — real-time connection status, sync progress, upload queue statistics, and error monitoring
* **Data Inspector** — browse and search your local SQLite tables
* **Bucket Inspector** — browse your buckets and their data
* **Config Inspector** — view your PowerSync configuration, connection options, and schema
* **Logs** — real-time log output with syntax highlighting and search
## Known Issues
PowerSync Inspector uses `unocss` as a transitive dependency, which can conflict with Tailwind CSS. If you use Tailwind, add the following to your `nuxt.config.ts`:
```typescript nuxt.config.ts theme={null}
export default defineNuxtConfig({
  unocss: {
    autoImport: false
  }
});
```
# React Hooks
Source: https://docs.powersync.com/client-sdks/frameworks/react
The `@powersync/react` package provides React hooks for use with the [JavaScript Web SDK](/client-sdks/reference/javascript-web) or [React Native SDK](/client-sdks/reference/react-native-and-expo). These hooks are designed to support reactivity, and can be used to automatically re-render React components when query results update or to access PowerSync connectivity status changes.
The main hooks available are:
* `useQuery`: This allows you to access the results of a watched query. The response includes `isLoading`, `isFetching` and `error` properties.
* `useStatus`: Access the PowerSync connectivity status. This can be used to update the UI based on whether the client is connected or not.
* `useSuspenseQuery`: This hook also allows you to access the results of a watched query, but its loading and fetching states are handled through [Suspense](https://react.dev/reference/react/Suspense). It automatically converts certain loading/fetching states into Suspense signals, triggering Suspense boundaries in parent components.
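A minimal component combining `useQuery` and `useStatus` — a sketch that assumes a `lists` table and a PowerSync context provider higher in the tree:

```tsx
import { useQuery, useStatus } from '@powersync/react';

export const ListsWidget = () => {
  const status = useStatus();
  const { data: lists, isLoading, error } = useQuery('SELECT id, name FROM lists ORDER BY name');

  if (error) return <p>Error: {error.message}</p>;
  if (isLoading) return <p>Loading...</p>;

  return (
    <div>
      <p>{status.connected ? 'Connected' : 'Offline'}</p>
      <ul>
        {lists.map((list) => (
          <li key={list.id}>{list.name}</li>
        ))}
      </ul>
    </div>
  );
};
```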
For advanced watch query features like incremental updates and differential results for React Hooks, see [Live Queries / Watch Queries](/client-sdks/watch-queries).
The full API Reference and example code can be found here:
# React Native Web Support
Source: https://docs.powersync.com/client-sdks/frameworks/react-native-web-support
[React Native for Web](https://necolas.github.io/react-native-web/) enables developers to use the same React Native codebase for both mobile and web platforms.
**Availability**
Support for React Native Web is available since version 1.12.1 of the PowerSync [React Native SDK](/client-sdks/reference/react-native-and-expo) and version 1.8.0 of the [JavaScript Web SDK](/client-sdks/reference/javascript-web), and is currently in a **beta** release.
A demo app showcasing this functionality is available here:
## Configuring PowerSync in your React Native for Web project
To ensure that PowerSync features are fully supported in your React Native Web project, follow the steps below. This documentation covers the necessary web worker configuration, database instantiation, and multi-platform implementations.
### 1. Install Web SDK
The [PowerSync Web SDK](/client-sdks/reference/javascript-web), alongside the [PowerSync React Native SDK](/client-sdks/reference/react-native-and-expo), is required for Web support.
See installation instructions [here](https://www.npmjs.com/package/@powersync/web).
### 2. Configure Web Workers
For React Native for Web, workers need to be configured when instantiating `PowerSyncDatabase`. An example of this is available [here](https://github.com/powersync-ja/powersync-js/blob/main/demos/react-native-web-supabase-todolist/library/powersync/system.ts).
To do this, copy the contents of `node_modules/@powersync/web/dist` to the root of your project (typically in the `public` directory). To make it easier to manage these files in the `public` directory, it is recommended to place the contents in a nested directory like `@powersync`.
The [`@powersync/web`](https://github.com/powersync-ja/powersync-js/tree/main/packages/web) package includes a CLI utility which can copy the required assets to the `public` directory (configurable with the `--output` option).
```bash theme={null}
# Places assets into public/@powersync by default. Override with `--output path/from_current_working_dir`.
npx @powersync/web copy-assets
# or: pnpm dlx @powersync/web copy-assets
```
### 3. Instantiate Web Workers
The example below demonstrates how to instantiate the workers (PowerSync requires a database and a sync worker) when instantiating `PowerSyncDatabase`. You can either specify a path to the worker (they are available in the `worker` directory of the `dist` contents), or provide a factory function to create the worker.
```js theme={null}
const factory = new WASQLiteOpenFactory({
  dbFilename: 'sqlite.db',

  // Option 1: Specify a path to the database worker
  worker: '/@powersync/worker/WASQLiteDB.umd.js'

  // Option 2: Or provide a factory function to create the worker.
  // The worker name should be unique for the database filename to avoid conflicts if multiple clients with different databases are present.
  // worker: (options) => {
  //   if (options?.flags?.enableMultiTabs) {
  //     return new SharedWorker(`/@powersync/worker/WASQLiteDB.umd.js`, {
  //       name: `shared-DB-worker-${options?.dbFilename}`
  //     });
  //   } else {
  //     return new Worker(`/@powersync/worker/WASQLiteDB.umd.js`, {
  //       name: `DB-worker-${options?.dbFilename}`
  //     });
  //   }
  // }
});

this.powersync = new PowerSyncDatabaseWeb({
  schema: AppSchema,
  database: factory,
  sync: {
    // Option 1: You can specify a path to the sync worker
    worker: '/@powersync/worker/SharedSyncImplementation.umd.js'

    // Option 2: Or provide a factory function to create the worker.
    // The worker name should be unique for the database filename to avoid conflicts if multiple clients with different databases are present.
    // worker: (options) => {
    //   return new SharedWorker(`/@powersync/worker/SharedSyncImplementation.umd.js`, {
    //     name: `shared-sync-${options?.dbFilename}`
    //   });
    // }
  }
});
```
This `PowerSyncDatabaseWeb` database will be used alongside the native `PowerSyncDatabase` to support platform-specific implementations. See the [Instantiating PowerSync](#implementations) section below for more details.
### 4. Enable multiple platforms
To target both mobile and web platforms, you need to adjust the Metro configuration and handle platform-specific libraries accordingly.
#### Metro config
Refer to the example [here](https://github.com/powersync-ja/powersync-js/blob/main/demos/react-native-web-supabase-todolist/metro.config.js). Setting `config.resolver.resolveRequest` allows Metro to behave differently based on the platform.
```js theme={null}
config.resolver.resolveRequest = (context, moduleName, platform) => {
  if (platform === 'web') {
    // Depending on `@powersync/web` for functionality, ignore mobile specific dependencies.
    if (['react-native-prompt-android', '@powersync/react-native'].includes(moduleName)) {
      return {
        type: 'empty'
      };
    }
    const mapping = { 'react-native': 'react-native-web', '@powersync/web': '@powersync/web/dist/index.umd.js' };
    if (mapping[moduleName]) {
      console.log('remapping', moduleName);
      return context.resolveRequest(context, mapping[moduleName], platform);
    }
  } else {
    // Depending on `@powersync/react-native` for functionality, ignore `@powersync/web` dependencies.
    if (['@powersync/web'].includes(moduleName)) {
      return {
        type: 'empty'
      };
    }
  }

  // Ensure you call the default resolver.
  return context.resolveRequest(context, moduleName, platform);
};
```
#### Implementations
Many `react-native` and `web` packages are implemented with only their specific platform in mind. As such, there may be times when you need to check the platform and provide alternative implementations.
**Instantiating PowerSync**
The following snippet constructs the correct `PowerSyncDatabase` depending on the platform that the code is executing on.
```js theme={null}
import { PowerSyncDatabase as PowerSyncDatabaseNative } from '@powersync/react-native';
import { PowerSyncDatabase as PowerSyncDatabaseWeb, WASQLiteOpenFactory } from '@powersync/web';

if (PowerSyncDatabaseNative) {
  this.powersync = new PowerSyncDatabaseNative({
    schema: AppSchema,
    database: {
      dbFilename: 'sqlite.db'
    }
  });
} else {
  const factory = new WASQLiteOpenFactory({
    dbFilename: 'sqlite.db',
    worker: '/@powersync/worker/WASQLiteDB.umd.js'
  });

  this.powersync = new PowerSyncDatabaseWeb({
    schema: AppSchema,
    database: factory,
    sync: {
      worker: '/@powersync/worker/SharedSyncImplementation.umd.js'
    }
  });
}
```
**Implementations that don't support both mobile and web**
```js theme={null}
import { Platform } from 'react-native';
import rnPrompt from 'react-native-prompt-android';

// Example conditional implementation
export async function prompt(
  title = '',
  description = '',
  onInput = (_input: string | null): void | Promise<void> => {},
  options: { placeholder: string | undefined } = { placeholder: undefined }
) {
  const isWeb = Platform.OS === 'web';

  let name: string | null = null;
  if (isWeb) {
    name = window.prompt(`${title}\n${description}`, options.placeholder);
  } else {
    name = await new Promise((resolve) => {
      rnPrompt(
        title,
        description,
        (input) => {
          resolve(input);
        },
        { placeholder: options.placeholder, style: 'shimo' }
      );
    });
  }

  await onInput(name);
}
```
Which can then be used agnostically in a component.
```js theme={null}
import { Button } from 'react-native';
import { prompt } from 'util/prompt';
// Example usage in a component (the Button title is illustrative):
<Button
  title="Add Todo"
  onPress={() => {
    prompt(
      'Add a new Todo',
      '',
      (text) => {
        if (!text) {
          return;
        }
        return createNewTodo(text);
      },
      { placeholder: 'Todo description' }
    );
  }}
/>;
```
### 5. Configure UMD target
React Native Web requires the UMD target of `@powersync/web` (available at `@powersync/web/umd`). To fully support this target version, configure the following in your project:
1. Add `config.resolver.unstable_enablePackageExports = true;` to your `metro.config.js` file.
2. TypeScript projects: In the `tsconfig.json` file specify the `moduleResolution` to be `Bundler`.
```json theme={null}
"compilerOptions": {
"moduleResolution": "Bundler"
}
```
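Step 1 looks like this in practice. This sketch assumes an Expo-managed project (`expo/metro-config`); bare React Native projects use `@react-native/metro-config` instead:

```js metro.config.js theme={null}
const { getDefaultConfig } = require('expo/metro-config');

const config = getDefaultConfig(__dirname);

// Resolve package.json `exports` entries, required for `@powersync/web/umd`.
config.resolver.unstable_enablePackageExports = true;

module.exports = config;
```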
# TanStack Query & TanStack DB
Source: https://docs.powersync.com/client-sdks/frameworks/tanstack
PowerSync integrates with multiple TanStack libraries.
## TanStack Query
PowerSync integrates with [TanStack Query](https://tanstack.com/query/latest/docs/framework/react/overview) (formerly React Query) through the `@powersync/tanstack-react-query` package.
This package wraps TanStack's `useQuery` and `useSuspenseQuery` hooks, bringing many of TanStack's advanced asynchronous state management features to PowerSync web and React Native applications, including:
* **Loading and error states** via [`useQuery`](https://tanstack.com/query/latest/docs/framework/react/guides/queries)
* [**React Suspense**](https://tanstack.com/query/latest/docs/framework/react/guides/suspense) **support**: `useSuspenseQuery` automatically converts certain loading states into Suspense signals, triggering Suspense boundaries in parent components.
* [**Caching queries**](https://tanstack.com/query/latest/docs/framework/react/guides/caching): Queries are cached with a unique key and reused across the app, so subsequent instances of the same query won't refire unnecessarily.
* **Built-in support for** [**pagination**](https://tanstack.com/query/latest/docs/framework/react/guides/paginated-queries)
#### Additional hooks
We plan to support more TanStack Query hooks over time. If there are specific hooks you're interested in, please let us know on [Discord](https://discord.gg/powersync).
### Example Use Case
When navigating to or refreshing a page, you may notice a brief UI "flicker" (10-50ms). Here are a few ways to manage this with TanStack Query:
* **First load**: When a page loads for the first time, use a loading indicator or a Suspense fallback to handle queries. See the [examples](https://www.npmjs.com/package/@powersync/tanstack-react-query#usage).
* **Subsequent loads**: With TanStack's query caching, subsequent loads of the same page won't refire queries, which reduces the flicker effect.
* **Block navigation until components are ready**: Using `useSuspenseQuery`, you can ensure that navigation from page A to page B only happens after the queries for page B have loaded. You can do this by combining `useSuspenseQuery` with the `<Suspense>` element and React Router’s [`v7_startTransition`](https://reactrouter.com/en/main/upgrading/future#v7_starttransition) future flag, which blocks navigation until all suspending components are ready.
### Usage and Examples
For more examples and usage details, see the package [README](https://www.npmjs.com/package/@powersync/tanstack-react-query).
The full API Reference can be found here:
## TanStack DB
The **TanStack DB** integration lets you use [TanStack DB](https://tanstack.com/db/latest) collections backed by PowerSync. In-memory collections stay in sync with PowerSync's SQLite database for offline-first, reactive data and [backend sync](/handling-writes/writing-client-changes).
The PowerSync TanStack DB collection is currently in an [Alpha](/resources/feature-status) release.
### Quick start
Install the TanStack DB PowerSync collection package together with a PowerSync SDK. Then define your schema, initialize the PowerSync database, and create a collection. Optionally [connect a backend connector](/configuration/app-backend/client-side-integration) for sync.
```bash theme={null}
npm install @tanstack/powersync-db-collection @powersync/web @journeyapps/wa-sqlite
```
```bash theme={null}
yarn add @tanstack/powersync-db-collection @powersync/web @journeyapps/wa-sqlite
```
```bash theme={null}
pnpm install @tanstack/powersync-db-collection @powersync/web @journeyapps/wa-sqlite
```
```ts theme={null}
// Other SDKs are also supported
import { PowerSyncDatabase, Schema, Table, column } from '@powersync/web';
import { createCollection } from '@tanstack/react-db';
import { powerSyncCollectionOptions } from '@tanstack/powersync-db-collection';

// Define schema and init PowerSync database
const APP_SCHEMA = new Schema({
  documents: new Table({
    name: column.text,
    author: column.text,
    created_at: column.text,
    archived: column.integer
  })
});

const db = new PowerSyncDatabase({
  database: { dbFilename: 'app.sqlite' },
  schema: APP_SCHEMA
});
// Optional: db.connect(connector) for backend sync

// Create a TanStack DB collection (types inferred from table)
const documentsCollection = createCollection(
  powerSyncCollectionOptions({
    database: db,
    table: APP_SCHEMA.props.documents
  })
);
```
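Once created, the collection accepts optimistic mutations which PowerSync persists to SQLite and queues for upload. A sketch (field values and `documentId` are illustrative):

```typescript
// Insert applies to in-memory state immediately and is persisted via PowerSync.
documentsCollection.insert({
  id: crypto.randomUUID(),
  name: 'Quarterly report',
  author: 'alice',
  created_at: new Date().toISOString(),
  archived: 0
});

// Updates are expressed as draft mutations.
documentsCollection.update(documentId, (draft) => {
  draft.archived = 1;
});
```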
### Features
* **Blazing fast in-memory queries** — Built on differential data flow, live queries update incrementally instead of re-running entire queries, so they stay fast even for complex queries across multiple collections.
* **Reactive data flow** — Live queries update automatically when underlying data changes, so components re-render only when necessary.
* **Optimistic updates** — Mutations apply to local state immediately for instant feedback; TanStack DB keeps optimistic state on top of synced data and rolls back automatically if the server request fails.
* **Cross-collection queries** — Live queries can join across collections, seamlessly querying PowerSync and other TanStack DB collections simultaneously.
* **Schema validation and rich types** — Use a custom schema (e.g. Zod) to validate mutations and transform SQLite types into rich JavaScript types such as `Date`, boolean, and JSON. You can keep SQLite-compatible input for writes and expose transformed types on read, or accept rich input with a separate deserialization schema for synced data. See [Create a TanStack DB collection](https://tanstack.com/db/latest/docs/collections/powersync-collection#option-3-transform-sqlite-input-types-to-rich-output-types).
* **Metadata tracking** — Attach custom metadata to insert, update, and delete operations. PowerSync persists it and exposes it in `CrudEntry` when processing uploads in your connector. See [Accessing metadata during upload](https://tanstack.com/db/latest/docs/collections/powersync-collection#accessing-metadata-during-upload).
* **Configuration options** — `powerSyncCollectionOptions` supports schema and deserialization schemas, optional serializers, `onDeserializationError`, and `syncBatchSize`. See [PowerSync Collection](https://tanstack.com/db/latest/docs/collections/powersync-collection#4-create-a-tanstack-db-collection) (Configuration Options).
* **TanStackDB transactions** — Batch multiple operations with `PowerSyncTransactor` and `createTransaction`, control commit timing, and wait for persistence. See [Advanced transactions](https://tanstack.com/db/latest/docs/collections/powersync-collection#advanced-transactions).
### Framework support
PowerSync works with all TanStack DB framework adapters:
* React ([`@tanstack/react-db`](https://tanstack.com/db/latest/docs/framework/react/overview))
* Vue ([`@tanstack/vue-db`](https://tanstack.com/db/latest/docs/framework/vue/overview))
* Solid ([`@tanstack/solid-db`](https://tanstack.com/db/latest/docs/framework/solid/overview))
* Svelte ([`@tanstack/svelte-db`](https://tanstack.com/db/latest/docs/framework/svelte/overview))
* Angular ([`@tanstack/angular-db`](https://tanstack.com/db/latest/docs/framework/angular/overview))
### Documentation
# Vue Composables
Source: https://docs.powersync.com/client-sdks/frameworks/vue
The [`@powersync/vue`](https://www.npmjs.com/package/@powersync/vue) package is a Vue-specific wrapper for PowerSync. It provides Vue [composables](https://vuejs.org/guide/reusability/composables) that are designed to support reactivity, and can be used to automatically re-render components when query results update or to access PowerSync connectivity status changes.
The main composables available are:
* `useQuery`: This allows you to access the results of a watched query. The response includes `isLoading`, `isFetching` and `error` properties.
* `useStatus`: Access the PowerSync connectivity status. This can be used to update the UI based on whether the client is connected or not.
For advanced watch query features like incremental updates and differential results for Vue Hooks, see [Live Queries / Watch Queries](/client-sdks/watch-queries).
The full API Reference and example code can be found here:
# Full-Text Search
Source: https://docs.powersync.com/client-sdks/full-text-search
Use SQLite's FTS5 extension for client-side full-text search
PowerSync supports full-text search using the [SQLite FTS5 extension](https://www.sqlite.org/fts5.html). This requires creating FTS5 tables to index your data and updating them with SQLite triggers.
## SDK Support
Full-text search has been demonstrated in the following SDKs:
* [**Dart/Flutter SDK**](/client-sdks/reference/flutter): Uses the [sqlite\_async](https://pub.dev/documentation/sqlite_async/latest/) package for migrations
* [**JavaScript Web SDK**](/client-sdks/reference/javascript-web): Requires version 0.5.0 or greater (including [wa-sqlite](https://github.com/powersync-ja/wa-sqlite) 0.2.0+)
* [**React Native SDK**](/client-sdks/reference/react-native-and-expo): Requires version 1.16.0 or greater (including [@powersync/react-native-quick-sqlite](https://github.com/powersync-ja/react-native-quick-sqlite) 2.2.1+)
* [**Swift SDK**](/client-sdks/reference/swift)
Note that the availability of FTS in our SDKs is dependent on the underlying `sqlite` package used. It may be supported in our other SDKs, especially if the `FTS5` extension is available, but would be untested. Check with us on [Discord](https://discord.gg/powersync) if you have a use case and need help getting started.
## Example Implementations
FTS is implemented in the following demo apps:
* [Flutter To-Do List App](https://github.com/powersync-ja/powersync.dart/tree/master/demos/supabase-todolist)
* [React To-Do List App](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-supabase-todolist)
* [React Native To-Do List App](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-todolist)
* [Swift To-Do List App](https://github.com/powersync-ja/powersync-swift/tree/main/Demos/PowerSyncExample)
We explain the Flutter/Dart implementation in more detail below. Example code is shown mainly in Dart, but references to the React, React Native and Swift equivalents are included where relevant, so you should be able to cross-reference.
## Walkthrough (Dart): Full-text search in the To-Do List Demo App
### Setup
First, we need to set up the FTS tables to match the `lists` and `todos` tables already created in this demo app. Don't worry if you already have data in the tables, as it will be copied into the new FTS tables.
FTS tables are created when instantiating the client-side PowerSync database.
```dart theme={null}
// https://github.com/powersync-ja/powersync.dart/blob/master/demos/supabase-todolist/lib/powersync.dart#L186
Future<void> openDatabase() async {
  ...
  await configureFts(db);
}
```
To simplify implementation, these examples make use of SQLite migrations. The migrations are run in [migrations/fts\_setup.dart](https://github.com/powersync-ja/powersync.dart/blob/master/demos/supabase-todolist/lib/migrations/fts_setup.dart) in the Flutter implementation. Here we use the [sqlite\_async](https://pub.dev/documentation/sqlite_async/latest/) Dart package to generate the migrations.
```dart theme={null}
// migrations/fts_setup.dart
/// This is where you can add more migrations to generate FTS tables
/// that correspond to the tables in your schema and populate them
/// with the data you would like to search on
Future<void> configureFts(PowerSyncDatabase db) async {
  migrations
    ..add(createFtsMigration(
        migrationVersion: 1,
        tableName: 'lists',
        columns: ['name'],
        tokenizationMethod: 'porter unicode61'))
    ..add(createFtsMigration(
      migrationVersion: 2,
      tableName: 'todos',
      columns: ['description', 'list_id'],
    ));
  await migrations.migrate(db);
}
```
The key part is the `createFtsMigration` function, which corresponds to the following (Dart example):
```dart theme={null}
// migrations/fts_setup.dart
/// Create a Full Text Search table for the given table and columns
/// with an option to use a different tokenizer otherwise it defaults
/// to unicode61. It also creates the triggers that keep the FTS table
/// and the PowerSync table in sync.
SqliteMigration createFtsMigration(
    {required int migrationVersion,
    required String tableName,
    required List<String> columns,
    String tokenizationMethod = 'unicode61'}) {
  String internalName =
      schema.tables.firstWhere((table) => table.name == tableName).internalName;
  String stringColumns = columns.join(', ');

  return SqliteMigration(migrationVersion, (tx) async {
    // Add FTS table
    await tx.execute('''
      CREATE VIRTUAL TABLE IF NOT EXISTS fts_$tableName
      USING fts5(id UNINDEXED, $stringColumns, tokenize='$tokenizationMethod');
    ''');
    // Copy over records already in table
    await tx.execute('''
      INSERT INTO fts_$tableName(rowid, id, $stringColumns)
      SELECT rowid, id, ${generateJsonExtracts(ExtractType.columnOnly, 'data', columns)}
      FROM $internalName;
    ''');
    // Add INSERT, UPDATE and DELETE triggers to keep the FTS table in sync with the table
    await tx.execute('''
      CREATE TRIGGER IF NOT EXISTS fts_insert_trigger_$tableName AFTER INSERT
      ON $internalName
      BEGIN
        INSERT INTO fts_$tableName(rowid, id, $stringColumns)
        VALUES (
          NEW.rowid,
          NEW.id,
          ${generateJsonExtracts(ExtractType.columnOnly, 'NEW.data', columns)}
        );
      END;
    ''');
    await tx.execute('''
      CREATE TRIGGER IF NOT EXISTS fts_update_trigger_$tableName AFTER UPDATE
      ON $internalName BEGIN
        UPDATE fts_$tableName
        SET ${generateJsonExtracts(ExtractType.columnInOperation, 'NEW.data', columns)}
        WHERE rowid = NEW.rowid;
      END;
    ''');
    await tx.execute('''
      CREATE TRIGGER IF NOT EXISTS fts_delete_trigger_$tableName AFTER DELETE
      ON $internalName BEGIN
        DELETE FROM fts_$tableName WHERE rowid = OLD.rowid;
      END;
    ''');
  });
}
```
After this is run, you should have the FTS tables and their associated triggers in your SQLite database.
### FTS Search Delegate
To show off this new functionality, we have incorporated FTS into the search button at the top of the screen in the To-Do List demo app.
Clicking on the search icon will open a search bar which will allow you to search for `lists` or `todos` that you have generated.
It uses a custom search delegate widget found in [widgets/fts\_search\_delegate.dart](https://github.com/powersync-ja/powersync.dart/blob/master/demos/supabase-todolist/lib/widgets/fts_search_delegate.dart) (Flutter) and [widgets/SearchBarWidget.tsx](https://github.com/powersync-ja/powersync-js/blob/main/demos/react-supabase-todolist/src/components/widgets/SearchBarWidget.tsx) (Web) to display the search results.
### FTS Helper
We added a helper in [lib/fts\_helpers.dart](https://github.com/powersync-ja/powersync.dart/blob/master/demos/supabase-todolist/lib/fts_helpers.dart) (Flutter) and [utils/fts\_helpers.ts](https://github.com/powersync-ja/powersync-js/blob/main/demos/react-supabase-todolist/src/app/utils/fts_helpers.ts) (Web) that allows you to add additional search functionality which can be found in the [SQLite FTS5 extension](https://www.sqlite.org/fts5.html) documentation.
```dart theme={null}
// lib/fts_helpers.dart
String _createSearchTermWithOptions(String searchTerm) {
  // Adding * to the end of the search term will match any word that starts
  // with the search term, e.g. searching "bl" will match "blue", "black", etc.
  // Consult the FTS5 full-text query syntax documentation for more options.
  String searchTermWithOptions = '$searchTerm*';
  return searchTermWithOptions;
}

/// Search the FTS table for the given searchTerm and return results ordered by the
/// rank of their relevance
Future<ResultSet> search(String searchTerm, String tableName) async {
  String searchTermWithOptions = _createSearchTermWithOptions(searchTerm);
  return await db.execute(
      'SELECT * FROM fts_$tableName WHERE fts_$tableName MATCH ? ORDER BY rank',
      [searchTermWithOptions]);
}
```
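The Web demos' `fts_helpers.ts` follows the same pattern. A TypeScript sketch of the two pieces (the `fts_` table naming matches the setup above; `db` is assumed to be an open PowerSync database):

```typescript
// Append * so the last token becomes a prefix query:
// "bl" matches "blue", "black", etc. See the FTS5 query syntax docs for more options.
export function createSearchTermWithOptions(searchTerm: string): string {
  return `${searchTerm}*`;
}

// Build the MATCH query for a given table, ordered by relevance rank.
export function searchQuery(tableName: string): string {
  return `SELECT * FROM fts_${tableName} WHERE fts_${tableName} MATCH ? ORDER BY rank`;
}

// Usage, assuming an open PowerSync database instance:
// const results = await db.getAll(searchQuery('todos'), [createSearchTermWithOptions('bl')]);
```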
## Implementations in other SDKs
* The React, React Native and Swift implementations do not use migrations to create the FTS tables. They create the FTS tables separately, see for example:
* [utils/fts\_setup.ts](https://github.com/powersync-ja/powersync-js/blob/main/demos/react-supabase-todolist/src/app/utils/fts_setup.ts) (React)
* [library/fts/fts\_setup.ts](https://github.com/powersync-ja/powersync-js/blob/main/demos/react-native-supabase-todolist/library/fts/fts_setup.ts) (React Native)
* [PowerSync/FtsSetup](https://github.com/powersync-ja/powersync-swift/blob/11def989bfbdc4f6ffe192192cd076abe17743c0/Demo/PowerSyncExample/PowerSync/FtsSetup.swift#L121) (Swift)
* See below for relevant snippets in the demo implementations.
```ts theme={null}
// See https://github.com/powersync-ja/powersync-js/blob/main/demos/react-supabase-todolist/src/components/providers/SystemProvider.tsx#L41
export const SystemProvider = ({ children }: { children: React.ReactNode }) => {
  ...
  React.useEffect(() => {
    ...
    configureFts();
  });
};
```
```ts theme={null}
// See https://github.com/powersync-ja/powersync-js/blob/main/demos/react-native-supabase-todolist/library/powersync/system.ts#L75
export class System {
  ...
  powersync: PowerSyncDatabase;
  ...
  async init() {
    ...
    await configureFts(this.powersync);
  }
}
```
```swift theme={null}
// See https://github.com/powersync-ja/powersync-swift/blob/main/Demo/PowerSyncExample/PowerSync/SystemManager.swift#L89
```
# Experimental: High Performance Diffs
Source: https://docs.powersync.com/client-sdks/high-performance-diffs
Efficiently get row changes using trigger-based table diffs (JS)
# Overview
While [basic/incremental watch queries](/client-sdks/watch-queries) enable reactive UIs by automatically re‑running queries when underlying data changes and returning updated results, they don't specify which individual rows were modified. To get these details, you can use [**differential watch queries**](/client-sdks/watch-queries#differential-watch-queries), which return a structured diff between successive query results. However, on large result sets they can be slow because they re‑run the query and compare full results (e.g., scanning \~1,000 rows to detect 1 new item). That's why we introduced **trigger‑based table diffs**: a more performant approach that uses SQLite triggers to record changes on a table as they happen. This means the overhead of tracking changes is proportional to the number of rows inserted, updated, or deleted, rather than to the size of the result set.
**JavaScript Only**: Trigger-based table diffs are currently only supported in our JavaScript SDKs, starting from:
* Web v1.26.0
* React Native v1.24.0
* Node.js v0.10.0
The `db.triggers` APIs are experimental. We're actively seeking feedback on:
* API design and developer experience
* Additional features or optimizations needed
Join our [Discord community](https://discord.gg/powersync) to share your experience and get help.
## Comparison: Trigger-Based Diffs vs Differential Watch Queries
* **Scope**: Trigger-based diffs track row-level changes on a single table. Differential watches work with arbitrary query results (including joins).
* **Overhead**: Trigger-based diffs do per-row work at write time (overhead grows with number of affected rows). Differential watches re-query and compare result sets on each change (overhead grows with result set size).
* **Processing path**: Trigger-based diffs record changes at write time and require a `writeLock` during processing (only a single `writeLock` is allowed). Differential watches run on read connections and re-query/compare results on each change (often concurrent on some platforms).
* **Storage/shape**: Trigger-based diffs store changes as rows in a temporary SQLite table that you can query with SQL. Differential watch diffs are exposed to app code as JS objects/arrays.
* **Filtering**: Trigger-based diffs can filter/skip storing diff records inside the SQLite trigger, which prevents emissions at a lower level. Differential watches query the SQLite DB on any change to the query's dependent tables, and the changes are filtered after querying SQLite.
**In summary**: Differential watch queries are the most flexible since they work with arbitrary multi-table queries, but they can be slow on large result sets. For those cases, trigger-based diffs are more efficient, though they only track a single table and add some write overhead.
## Trigger-Based Diffs
Trigger-based diffs create temporary SQLite triggers and a temporary table to record row‑level inserts, updates, and deletes as they happen. You can then query the diff table with SQL to process the changes.
**SQLite triggers and PowerSync views**
In PowerSync, the tables you define in the client schema are exposed as SQLite views. The actual data is stored in underlying SQLite tables, with each row's values encoded as JSON (commonly in a single `data` column).
SQLite cannot attach triggers to INSERT/UPDATE/DELETE operations on views — triggers must target the underlying base tables. The `db.triggers` API handles these details for you:
* You can reference the view name in `source`; PowerSync resolves and targets the corresponding underlying table internally.
* Column filters are applied by inspecting JSON changes in the underlying row and determining whether the configured columns changed.
* Diff rows can be queried as if they were real columns (not raw JSON) using the `withExtractedDiff(...)` helper.
You can also create your own triggers manually (for example, as shown in the [Full‑Text Search example](/client-sdks/full-text-search)), but be mindful of the view/trigger limitation and target the underlying table rather than the view.
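To make the limitation concrete, a manually created trigger must target the underlying table (e.g. `ps_data__todos` for a `todos` view), with conditions expressed against the JSON `data` column. The sketch below only assembles the trigger SQL as a string; the table and diff-table names are illustrative, and the actual SQL generated by the `db.triggers` API may differ:

```javascript
// Sketch: build CREATE TRIGGER SQL against the underlying PowerSync table.
// PowerSync stores row values as JSON in a `data` column, so column
// conditions use json_extract rather than plain column references.
function buildInsertDiffTriggerSql(internalTable, diffTable, whenClause) {
  return `
    CREATE TEMP TRIGGER IF NOT EXISTS ${diffTable}_insert
    AFTER INSERT ON ${internalTable}
    WHEN ${whenClause}
    BEGIN
      INSERT INTO ${diffTable} (id, operation, data)
      VALUES (NEW.id, 'INSERT', NEW.data);
    END;`;
}

const sql = buildInsertDiffTriggerSql(
  'ps_data__todos', // underlying table backing the `todos` view
  'todos_diff',     // hypothetical temporary diff table
  `json_extract(NEW.data, '$.list_id') = 'list-1'`
);
console.log(sql.includes('AFTER INSERT ON ps_data__todos')); // true
```

In practice, prefer `trackTableDiff` or `createDiffTrigger`, which resolve the view-to-table mapping and clean up triggers for you.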
## Tracking and reacting to changes (recommended)
The primary API is `trackTableDiff`. It wraps the lower-level trigger setup, automatically manages a `writeLock` during processing, exposes a `DIFF` table alias to join against, and cleans up when you call the returned `stop()` function. Think of it as an automatic "watch" that processes diffs as they occur.
```javascript theme={null}
const stop = await db.triggers.trackTableDiff({
  // PowerSync source table/view to trigger and track changes from.
  // This should be present in the PowerSync database's schema.
  source: 'todos',
  // Specifies which columns from the source table to track in the diff records.
  // Defaults to all columns in the source table.
  // Use an empty array to track only the ID and operation.
  columns: ['list_id'],
  // Required WHEN clause per operation to filter inside the trigger. Use 'TRUE' to track all.
  when: { INSERT: sanitizeSQL`json_extract(NEW.data, '$.list_id') = ${firstList.id}` },
  onChange: async (context) => {
    // Fetch the todo records that were inserted during this diff
    const newTodos = await context.withDiff(/* sql */ `
      SELECT todos.*
      FROM DIFF
      JOIN todos ON DIFF.id = todos.id
    `);
    // Handle new todos here
  }
});

// Later, dispose triggers and internal resources
await stop();
```
### Filtering with `when`
The required `when` parameter lets you add conditions that determine when the triggers should fire. This corresponds to a SQLite [WHEN](https://sqlite.org/lang_createtrigger.html) clause in the trigger body.
* Use `NEW` for `INSERT`/`UPDATE` and `OLD` for `DELETE`.
* Row data is stored as JSON in the `data` column; the row identifier is `id`.
* Use `json_extract(NEW.data, '$.column')` or `json_extract(OLD.data, '$.column')` to reference logical columns.
* Set the clause to `'TRUE'` to track all changes for a given operation.
Example:
```javascript theme={null}
const stop = await db.triggers.trackTableDiff({
  source: 'todos',
  when: {
    // Track all INSERTs
    INSERT: 'TRUE',
    // Only UPDATEs where status becomes 'active' for a specific record
    UPDATE: sanitizeSQL`NEW.id = ${sanitizeUUID('abcd')} AND json_extract(NEW.data, '$.status') = 'active'`,
    // Only DELETEs for a specific list
    DELETE: sanitizeSQL`json_extract(OLD.data, '$.list_id') = 'abcd'`
  }
});
```
The strings in `when` are embedded directly into the SQLite trigger creation SQL. Sanitize any user‑derived values. The `sanitizeSQL` helper performs some basic sanitization; additional sanitization is recommended.
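For illustration only, the kind of escaping involved when embedding a user-derived string into SQL looks like the following. This is not the actual `sanitizeSQL` implementation; it just demonstrates why sanitization matters before values are interpolated into trigger SQL:

```javascript
// Naive single-quote escaping for a string literal embedded in SQL.
// Real code should use the SDK's sanitize helpers (or parameterized
// queries where possible); this only shows the basic idea.
function quoteSqlString(value) {
  return `'${String(value).replace(/'/g, "''")}'`;
}

const clause = `json_extract(NEW.data, '$.status') = ${quoteSqlString("it's done")}`;
console.log(clause); // json_extract(NEW.data, '$.status') = 'it''s done'
```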
## Lower-level: createDiffTrigger (advanced)
Set up temporary triggers that write change operations into a temporary table you control. Prefer `trackTableDiff` unless you need to manage lifecycle and locking manually (e.g., buffer diffs to process them later). Note that since the table is created as a temporary table on the SQLite write connection, it can only be accessed within operations performed inside a writeLock.
```javascript theme={null}
// Define the temporary table to store the diff
const tempTable = 'listsDiff';

// Configure triggers to record INSERT and UPDATE operations on `lists`
const dispose = await db.triggers.createDiffTrigger({
  // PowerSync source table/view to trigger and track changes from.
  // This should be present in the PowerSync database's schema.
  source: 'lists',
  // Destination table to send changes to.
  // This table is created internally as a SQLite temporary table.
  // This table will be dropped once the trigger is removed.
  destination: tempTable,
  // Required WHEN clause per operation to filter inside the trigger. Use 'TRUE' to track all.
  when: {
    INSERT: 'TRUE',
    UPDATE: sanitizeSQL`json_extract(NEW.data, '$.name') IS NOT NULL`
  },
  // Specifies which columns from the source table to track in the diff records.
  // Defaults to all columns in the source table.
  // Use an empty array to track only the ID and operation.
  columns: ['name']
});

// ... perform writes on `lists` ...

// Consume and clear changes within a writeLock
await db.writeLock(async (tx) => {
  const changes = await tx.getAll(/* sql */ `
    SELECT * FROM ${tempTable}
  `);

  // Process changes here

  // Clear after processing
  await tx.execute(/* sql */ `DELETE FROM ${tempTable};`);
});

// Later, clean up triggers and temp table
await dispose();
```
# Infinite Scrolling
Source: https://docs.powersync.com/client-sdks/infinite-scrolling
Infinite scrolling is a software design technique that loads content continuously as the user scrolls down the page/screen.
There are a few ways to accomplish infinite scrolling with PowerSync, either by querying data from the local SQLite database, or by [lazy-loading](https://en.wikipedia.org/wiki/Lazy_loading) or lazy-syncing data from your backend.
Here is an overview of the different options with pros and cons:
### 1) Pre-sync all data and query the local database
PowerSync currently [performs well](/resources/performance-and-limits) with syncing up to 1,000,000 rows per client.
This means that in many cases, you can sync a sufficient amount of data to let a user keep scrolling a list or feed that basically feels "infinite" to them.
| Pros | Cons |
| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| It works offline and is low-latency (data loads quickly from the local database). We don't need to load data from the backend via the network when the user reaches the bottom of the page/feed/list. | This approach won't work when the total volume of data becomes too large for the local database - for example, when there's a wide range of tables that the user needs to be able to infinite scroll. It is also less effective if your app allows the user to apply filters to the displayed data, since filtering can result in fewer pages being displayed from a large dataset, and therefore limited scrolling. |
### 2) Control data sync using subscription or client parameters
**Sync Streams** (recommended): Use [subscription parameters](/sync/streams/parameters#subscription-parameters) to subscribe to specific data on demand. For example, a client can subscribe to a specific "page" of data when the user scrolls to it. This is more flexible than client parameters — each subscription is independent and multiple tabs/views can subscribe with different parameters simultaneously.
**Sync Rules** (legacy): PowerSync supports the use of [client parameters](/sync/rules/client-parameters) which are specified directly by the client. The app can dynamically change these parameters on the client-side and they can be accessed in Sync Rules on the server-side. The developer can use these parameters to limit/control which data is synced, but since they are not trusted (because they are not passed via the JWT authentication token) they should not be used for access control. You should still filter data by e.g. user ID for access control purposes (using [token parameters](/sync/rules/parameter-queries) from the JWT).
Usage example: To lazy-load/lazy-sync data for infinite scrolling, you could split your data into 'pages' and use a subscription parameter (Sync Streams) or client parameter (Sync Rules) to specify which pages to sync to a user.
| Pros | Cons |
| ---------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- |
| Does not require updating flags in your backend source database. Enables client-side control over what data is synced. | We can only sync additional data when the user is online. There will be latency while the user waits for the additional data to sync. |
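As a sketch of the client-side logic for this option, the app could derive the set of "pages" to sync from the scroll position and pass them as subscription (or client) parameters. All names here (`pagesToSync`, `pageSize`, row heights) are illustrative, not part of the PowerSync API:

```javascript
// Map a scroll offset to the set of page numbers that should be synced.
// rowHeight: rendered height of one row in px; pageSize: rows per "page".
function pagesToSync(scrollOffsetPx, rowHeight, pageSize, prefetchPages = 1) {
  const lastVisibleRow = Math.floor(scrollOffsetPx / rowHeight);
  const currentPage = Math.floor(lastVisibleRow / pageSize);
  // Sync everything up to the current page, plus a prefetch margin so the
  // next page is (ideally) already local when the user reaches it.
  const pages = [];
  for (let p = 0; p <= currentPage + prefetchPages; p++) pages.push(p);
  return pages;
}

// e.g. 2,500px scrolled, 50px rows, 20 rows per page → row 50 → page 2
console.log(pagesToSync(2500, 50, 20)); // [0, 1, 2, 3]
```

The resulting page list would then be passed to your Sync Stream subscription (or set as a client parameter) whenever it changes.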
### 3) Sync limited data and then load more data from an API
In this scenario we can sync a smaller number of rows to the user initially. If the user reaches the end of the page/feed/list, we make an online API call to load additional data from the backend to display to the user.
| Pros | Cons |
| ---------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| This requires syncing less data to each user, which will result in a faster initial sync time. | We can only load additional data when the user is online. There will be some latency to load the additional data (similar to a cloud-first app making API calls). In your app code, records loaded from the API will have to be treated differently from the records loaded from the local SQLite database. |
### 4) Client-side triggers a server-side function to flag data to sync
You could add a flag to certain records in your backend source database which are used by your [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview) to determine which records to sync to specific users. Then your app could make an API call which triggers a function that updates the flags on certain records, causing more records to be synced to the user.
## Questions?
Ask on [Discord](https://discord.gg/powersync) if you need help implementing infinite scrolling.
# Dart/Flutter ORM Support
Source: https://docs.powersync.com/client-sdks/orms/flutter-orm-support
ORM support is available via the following package (currently in a beta release):
This package enables using the [Drift](https://pub.dev/packages/drift) persistence library (ORM) with the PowerSync Dart/Flutter SDK. The Drift integration gives Flutter developers the flexibility to write queries in either Dart or SQL.
Importantly, it supports propagating change notifications from the PowerSync side to Drift, which is necessary for streaming queries.
The use of this package is recommended for Flutter developers who already know Drift, or specifically want the benefits of an ORM for their PowerSync projects.
### Example implementation
An example project which showcases setting up and using Drift with PowerSync is available here:
## Troubleshooting: Watch Streams with Local-Only Tables
When using local-only tables with a `viewName` that differs from the table name, Drift's `watch()` streams may not receive update notifications. This happens because PowerSync sends notifications using the internal table name (e.g., `local_items`), but Drift is listening for the view name (e.g., `items`).
**Example problem:**
```dart theme={null}
// PowerSync schema with viewName override
Table.localOnly(
  'local_items', // Internal table name
  [...],
  viewName: 'items', // User-facing view name
)
```
**Solution:** Use `transformTableUpdates` to map internal names to view names:
```dart theme={null}
import 'package:drift/drift.dart' show TableUpdate;

final connection = SqliteAsyncDriftConnection(
  powerSyncDatabase,
  transformTableUpdates: (notification) {
    return notification.tables.map((tableName) {
      if (tableName.startsWith('local_')) {
        // Convert local_items → items
        return TableUpdate(tableName.substring(6));
      }
      return TableUpdate(tableName);
    }).toSet();
  },
);

final db = AppDatabase(connection);
```
This ensures Drift receives notifications with the expected view names, allowing watch streams to work correctly.
### Support for Other Flutter ORMs
Other ORMs for Flutter, like [Floor](https://pinchbv.github.io/floor/), are not currently supported. It is technically possible to open a separate connection to the same database file using Floor but there are two big caveats to that:
**Write locks**
Every write transaction (or write statement) will lock the database for other writes for the duration of the transaction. While transactions are typically short, if multiple happen to run at the same time they may fail with a SQLITE\_BUSY or similar error.
**External modifications**
Often, ORMs only emit change notifications for writes made through the same library. To support streaming queries, PowerSync requires the ORM to allow external modifications to trigger the same change notifications, so streaming queries are unlikely to work out-of-the-box.
# Drizzle
Source: https://docs.powersync.com/client-sdks/orms/js/drizzle
This package enables using [Drizzle](https://orm.drizzle.team/) with the PowerSync [React Native](/client-sdks/reference/react-native-and-expo) and [JavaScript Web](/client-sdks/reference/javascript-web) SDKs.
## Setup
Set up the PowerSync Database and wrap it with Drizzle.
```js theme={null}
import { wrapPowerSyncWithDrizzle } from '@powersync/drizzle-driver';
import { PowerSyncDatabase } from '@powersync/web';
import { relations } from 'drizzle-orm';
import { index, integer, sqliteTable, text } from 'drizzle-orm/sqlite-core';
import { AppSchema } from './schema';

export const lists = sqliteTable('lists', {
  id: text('id'),
  name: text('name')
});

export const todos = sqliteTable('todos', {
  id: text('id'),
  description: text('description'),
  list_id: text('list_id'),
  created_at: text('created_at')
});

export const listsRelations = relations(lists, ({ one, many }) => ({
  todos: many(todos)
}));

export const todosRelations = relations(todos, ({ one, many }) => ({
  list: one(lists, {
    fields: [todos.list_id],
    references: [lists.id]
  })
}));

export const drizzleSchema = {
  lists,
  todos,
  listsRelations,
  todosRelations
};

// As an alternative to manually defining a PowerSync schema, generate the local
// PowerSync schema from the Drizzle schema with the `DrizzleAppSchema` constructor:
// import { DrizzleAppSchema } from '@powersync/drizzle-driver';
// export const AppSchema = new DrizzleAppSchema(drizzleSchema);
//
// This is optional, but recommended, since you will only need to maintain one
// schema on the client-side. Read on to learn more.

export const powerSyncDb = new PowerSyncDatabase({
  database: {
    dbFilename: 'test.sqlite'
  },
  schema: AppSchema
});

// This is the DB you will use in queries
export const db = wrapPowerSyncWithDrizzle(powerSyncDb, {
  schema: drizzleSchema
});
```
## Schema Conversion
The `DrizzleAppSchema` constructor simplifies the process of integrating Drizzle with PowerSync. It infers the [client-side PowerSync schema](/intro/setup-guide#define-your-client-side-schema) from your Drizzle schema definition, providing a unified development experience.
As the PowerSync schema only supports SQLite types (`text`, `integer`, and `real`), the same limitation extends to the Drizzle table definitions.
To use it, define your Drizzle tables and supply the schema to the `DrizzleAppSchema` function:
```js theme={null}
import { DrizzleAppSchema } from '@powersync/drizzle-driver';
import { sqliteTable, text } from 'drizzle-orm/sqlite-core';

// Define a Drizzle table
const lists = sqliteTable('lists', {
  id: text('id').primaryKey().notNull(),
  created_at: text('created_at'),
  name: text('name').notNull(),
  owner_id: text('owner_id')
});

export const drizzleSchema = {
  lists
};

// Infer the PowerSync schema from your Drizzle schema
export const AppSchema = new DrizzleAppSchema(drizzleSchema);
```
### Defining PowerSync Options
The PowerSync table definition allows additional options supported by PowerSync's app schema beyond those supported by Drizzle.
They can be specified as follows. Note that these options exclude indexes, since indexes can be specified directly on a Drizzle table.
```js theme={null}
import { DrizzleAppSchema } from '@powersync/drizzle-driver';
// import { DrizzleAppSchema, type DrizzleTableWithPowerSyncOptions} from '@powersync/drizzle-driver'; for TypeScript
const listsWithOptions = { tableDefinition: lists, options: { localOnly: true } };
// const listsWithOptions: DrizzleTableWithPowerSyncOptions = { tableDefinition: lists, options: { localOnly: true } }; for TypeScript
export const drizzleSchemaWithOptions = {
lists: listsWithOptions
};
export const AppSchema = new DrizzleAppSchema(drizzleSchemaWithOptions);
```
### Converting a Single Table From Drizzle to PowerSync
Drizzle tables can also be converted on a table-by-table basis with `toPowerSyncTable`.
```js theme={null}
import { toPowerSyncTable } from '@powersync/drizzle-driver';
import { Schema } from '@powersync/web';
import { sqliteTable, text } from 'drizzle-orm/sqlite-core';

// Define a Drizzle table
const lists = sqliteTable('lists', {
  id: text('id').primaryKey().notNull(),
  created_at: text('created_at'),
  name: text('name').notNull(),
  owner_id: text('owner_id')
});

const psLists = toPowerSyncTable(lists); // converts the Drizzle table to a PowerSync table
// toPowerSyncTable(lists, { localOnly: true }); - allows for PowerSync table configuration

export const AppSchema = new Schema({
  lists: psLists // names the table `lists` in the PowerSync schema
});
```
## Compilable queries
To use Drizzle queries in your hooks and composables, they currently need to be converted using `toCompilableQuery`.
```js theme={null}
import { toCompilableQuery } from "@powersync/drizzle-driver";
const query = db.select().from(users);
const { data: listRecords, isLoading } = useQuery(toCompilableQuery(query));
```
## Usage Examples
Below are examples comparing Drizzle and PowerSync syntax for common database operations.
### Select Operations
```js Drizzle theme={null}
const result = await db.select().from(users);
// [{ id: '1', name: 'user1' }, { id: '2', name: 'user2' }]
```
```js PowerSync theme={null}
const result = await powerSyncDb.getAll('SELECT * from users');
// [{ id: '1', name: 'user1' }, { id: '2', name: 'user2' }]
```
### Insert Operations
```js Drizzle theme={null}
await db.insert(users).values({ id: '1', name: 'John' });
const result = await db.select().from(users);
// [{ id: '1', name: 'John' }]
```
```js PowerSync theme={null}
await powerSyncDb.execute('INSERT INTO users (id, name) VALUES(1, ?)', ['John']);
const result = await powerSyncDb.getAll('SELECT * from users');
// [{ id: '1', name: 'John' }]
```
### Delete Operations
```js Drizzle theme={null}
await db.insert(users).values({ id: '2', name: 'Ben' });
await db.delete(users).where(eq(users.name, 'Ben'));
const result = await db.select().from(users);
// []
```
```js PowerSync theme={null}
await powerSyncDb.execute('INSERT INTO users (id, name) VALUES(2, ?)', ['Ben']);
await powerSyncDb.execute(`DELETE FROM users WHERE name = ?`, ['Ben']);
const result = await powerSyncDb.getAll('SELECT * from users');
// []
```
### Update Operations
```js Drizzle theme={null}
await db.insert(users).values({ id: '3', name: 'Lucy' });
await db.update(users).set({ name: 'Lucy Smith' }).where(eq(users.name, 'Lucy'));
const result = await db.select({ name: users.name }).from(users).get();
// 'Lucy Smith'
```
```js PowerSync theme={null}
await powerSyncDb.execute('INSERT INTO users (id, name) VALUES(3, ?)', ['Lucy']);
await powerSyncDb.execute('UPDATE users SET name = ? WHERE name = ?', ['Lucy Smith', 'Lucy']);
const result = await powerSyncDb.get('SELECT name FROM users WHERE name = ?', ['Lucy Smith'])
// 'Lucy Smith'
```
### Watched Queries
For watched queries with Drizzle, it's recommended to use the `watch()` function from the Drizzle integration, which takes in a Drizzle query.
```js Drizzle theme={null}
const query = db.select().from(users);
db.watch(query, {
  onResult(results) {
    console.log(results);
  },
});
// [{ id: '1', name: 'John' }]
```
```js PowerSync theme={null}
powerSyncDb.watch("select * from users", [], {
  onResult(results) {
    console.log(results.rows?._array);
  },
});
// [{ id: '1', name: 'John' }]
```
### Transactions
```js Drizzle theme={null}
await db.transaction(async (transaction) => {
  await transaction.insert(users).values({ id: "4", name: "James" });
  await transaction
    .update(users)
    .set({ name: "James Smith" })
    .where(eq(users.name, "James"));
});
const result = await db.select({ name: users.name }).from(users).get();
// 'James Smith'
```
```js PowerSync theme={null}
await powerSyncDb.writeTransaction(async (transaction) => {
  await transaction.execute('INSERT INTO users (id, name) VALUES(4, ?)', ['James']);
  await transaction.execute("UPDATE users SET name = ? WHERE name = ?", ['James Smith', 'James']);
});
const result = await powerSyncDb.get('SELECT name FROM users WHERE name = ?', ['James Smith'])
// 'James Smith'
```
## Developer Notes
### Table Constraint Restrictions
The Drizzle ORM relies on the underlying PowerSync table definitions which are subject to certain limitations.
This means that most Drizzle [constraint features](https://orm.drizzle.team/docs/indexes-constraints) (such as cascading deletes, foreign key checks, and unique constraints) are currently not supported.
# Kysely
Source: https://docs.powersync.com/client-sdks/orms/js/kysely
This package enables using [Kysely](https://kysely.dev/) with PowerSync React Native and web SDKs.
It gives JavaScript developers the flexibility to write queries in either JavaScript/TypeScript or SQL, and provides type-safe imperative APIs.
## Setup
Set up the PowerSync Database and wrap it with Kysely.
### JavaScript Setup
```js theme={null}
import { wrapPowerSyncWithKysely } from '@powersync/kysely-driver';
import { PowerSyncDatabase } from '@powersync/web';

// Define schema as in: https://docs.powersync.com/intro/setup-guide#define-your-client-side-schema
import { appSchema } from './schema';

export const powerSyncDb = new PowerSyncDatabase({
  database: {
    dbFilename: 'test.sqlite'
  },
  schema: appSchema
});

export const db = wrapPowerSyncWithKysely(powerSyncDb);
```
### TypeScript Setup
```ts theme={null}
import { wrapPowerSyncWithKysely } from '@powersync/kysely-driver';
import { PowerSyncDatabase } from "@powersync/web";

// Define schema as in: https://docs.powersync.com/intro/setup-guide#define-your-client-side-schema
import { appSchema, Database } from "./schema";

export const powerSyncDb = new PowerSyncDatabase({
  database: {
    dbFilename: "test.sqlite"
  },
  schema: appSchema,
});

// `db` now automatically contains types for defined tables
export const db = wrapPowerSyncWithKysely<Database>(powerSyncDb);
```
For more information on Kysely typing, see [their documentation](https://kysely.dev/docs/getting-started#types).
## Usage Examples
Below are examples comparing Kysely and PowerSync syntax for common database operations.
### Select Operations
```js Kysely theme={null}
const result = await db.selectFrom('users').selectAll().execute();
// [{ id: '1', name: 'user1' }, { id: '2', name: 'user2' }]
```
```js PowerSync theme={null}
const result = await powerSyncDb.getAll('SELECT * from users');
// [{ id: '1', name: 'user1' }, { id: '2', name: 'user2' }]
```
### Insert Operations
```js Kysely theme={null}
await db.insertInto('users').values({ id: '1', name: 'John' }).execute();
const result = await db.selectFrom('users').selectAll().execute();
// [{ id: '1', name: 'John' }]
```
```js PowerSync theme={null}
await powerSyncDb.execute('INSERT INTO users (id, name) VALUES(1, ?)', ['John']);
const result = await powerSyncDb.getAll('SELECT * from users');
// [{ id: '1', name: 'John' }]
```
### Delete Operations
```js Kysely theme={null}
await db.insertInto('users').values({ id: '2', name: 'Ben' }).execute();
await db.deleteFrom('users').where('name', '=', 'Ben').execute();
const result = await db.selectFrom('users').selectAll().execute();
// []
```
```js PowerSync theme={null}
await powerSyncDb.execute('INSERT INTO users (id, name) VALUES(2, ?)', ['Ben']);
await powerSyncDb.execute(`DELETE FROM users WHERE name = ?`, ['Ben']);
const result = await powerSyncDb.getAll('SELECT * from users');
// []
```
### Update Operations
```js Kysely theme={null}
await db.insertInto('users').values({ id: '3', name: 'Lucy' }).execute();
await db.updateTable('users').where('name', '=', 'Lucy').set('name', 'Lucy Smith').execute();
const result = await db.selectFrom('users').select('name').executeTakeFirstOrThrow();
// 'Lucy Smith'
```
```js PowerSync theme={null}
await powerSyncDb.execute('INSERT INTO users (id, name) VALUES(3, ?)', ['Lucy']);
await powerSyncDb.execute('UPDATE users SET name = ? WHERE name = ?', ['Lucy Smith', 'Lucy']);
const result = await powerSyncDb.get('SELECT name FROM users WHERE name = ?', ['Lucy Smith'])
// 'Lucy Smith'
```
### Watched Queries
For watched queries with Kysely, it's recommended to use the `watch()` function from the wrapper package, which takes in a Kysely query.
```js Kysely theme={null}
const query = db.selectFrom('users').selectAll();
db.watch(query, {
  onResult(results) {
    console.log(results);
  },
});
// [{ id: '1', name: 'John' }]
```
```js PowerSync theme={null}
powerSyncDb.watch("select * from users", [], {
  onResult(results) {
    console.log(results.rows?._array);
  },
});
// [{ id: '1', name: 'John' }]
```
### Transactions
```js Kysely theme={null}
await db.transaction().execute(async (transaction) => {
  await transaction.insertInto('users').values({ id: '4', name: 'James' }).execute();
  await transaction.updateTable('users').where('name', '=', 'James').set('name', 'James Smith').execute();
});

const result = await db.selectFrom('users').select('name').executeTakeFirstOrThrow();
// 'James Smith'
```
```js Kysely with Raw SQL theme={null}
await db.transaction().execute(async (transaction) => {
  await sql`INSERT INTO users (id, name) VALUES ('4', 'James');`.execute(transaction);
  await transaction.updateTable('users').where('name', '=', 'James').set('name', 'James Smith').execute();
});

const result = await db.selectFrom('users').select('name').executeTakeFirstOrThrow();
// 'James Smith'
```
```js PowerSync theme={null}
await powerSyncDb.writeTransaction(async (transaction) => {
  await transaction.execute('INSERT INTO users (id, name) VALUES(4, ?)', ['James']);
  await transaction.execute("UPDATE users SET name = ? WHERE name = ?", ['James Smith', 'James']);
});
const result = await powerSyncDb.get('SELECT name FROM users WHERE name = ?', ['James Smith'])
// 'James Smith'
```
# JavaScript ORMs Overview
Source: https://docs.powersync.com/client-sdks/orms/js/overview
Reference for using ORMs in PowerSync's JavaScript-based SDKs
The following ORMs and query libraries are officially supported:
Kysely query builder for PowerSync.
Drizzle ORM for PowerSync.
TanStack DB collection for PowerSync.
# TanStack DB
Source: https://docs.powersync.com/client-sdks/orms/js/tanstack-db
# Kotlin SQL Libraries
Source: https://docs.powersync.com/client-sdks/orms/kotlin/overview
Reference for using PowerSync with SQL mapping libraries on Kotlin.
The PowerSync Kotlin SDK allows syncing SQLite databases with your backend source database, and gives you full control over which queries you run on your client.
However, manually writing SQL queries and parsing their results can be error-prone.
Libraries like [SQLDelight](https://sqldelight.github.io/sqldelight) and [Room](https://developer.android.com/jetpack/androidx/releases/room) make this process safer by validating your schema and queries at compile-time, as well as generating code to map from raw SQLite rows into statically typed structures.
Starting with version `1.6.0` of the PowerSync Kotlin SDK, both SQLDelight and Room are officially supported on all platforms!
Support for these libraries was added recently, so we're still seeking feedback on the developer experience and don't have complete examples for them yet.
Contributions and feedback are welcome!
Join our [Discord community](https://discord.gg/powersync) to share your experience and get help.
Use SQLDelight on PowerSync databases.
Use PowerSync with Room databases.
If you're not sure which library to use, consider that Room requires [raw tables](/client-sdks/advanced/raw-tables) and is more complex to set up, so:
* SQLDelight is easier to use if you're starting with an existing PowerSync database.
* We mainly recommend the Room integration if you have an existing Room database you want to add sync to.
# Room (Alpha)
Source: https://docs.powersync.com/client-sdks/orms/kotlin/room
Room support is currently in alpha.
While we don't expect any major changes and the library is tested on multiple platforms, it depends on raw tables,
an unstable PowerSync feature.
PowerSync supports the Room database library for Kotlin (Multiplatform).
## Features
When adopting the Room integration for PowerSync:
* PowerSync will use the connection pool of the Room database for efficient queries (avoiding e.g. "database is locked" errors).
* Local writes from Room will update watched PowerSync queries, and they will trigger a CRUD upload.
* Writes from PowerSync (including those made by the sync client) will immediately update your Room flows.
## Installation
PowerSync acts as an addon to your existing Room database, which means that (unlike with most other PowerSync SDKs)
you are still responsible for schema management.
Room requires [raw tables](/client-sdks/advanced/raw-tables), as the views managed by PowerSync are incompatible with
the schema verification when Room opens the database.
To add PowerSync to your Room database,
1. Add a dependency on `com.powersync:core` and `com.powersync:integration-room`.
2. Add a dependency on `androidx.sqlite:sqlite-bundled`: since PowerSync uses a SQLite extension (which is not supported by the platform SQLite libraries on either Android or iOS), you need to bundle a SQLite library with your app.
On the `RoomDatabase.Builder`, call `setDriver()` with a PowerSync-enabled driver:
```Kotlin theme={null}
val driver = BundledSQLiteDriver().also {
it.loadPowerSyncExtension() // Extension method by PowerSync
}
Room.databaseBuilder(...).setDriver(driver).build()
```
## Setup
Because PowerSync syncs into tables that you've created with Room, it needs to know which SQL statements to run for
inserts, updates and deletes.
Let's say you had a table like the following:
```Kotlin theme={null}
@Entity(tableName = "todos")
data class TodoItem(
    // Note that PowerSync uses textual ids (usually randomly-generated UUIDs)
    @PrimaryKey val id: String,
    val description: String,
    @ColumnInfo(name = "created_by") val authorId: String
)
```
To inform PowerSync about that table, include it as a `RawTable` in the schema:
```Kotlin theme={null}
val schema = Schema(
RawTable(
name = "todos",
put =
PendingStatement(
"INSERT OR REPLACE INTO todos (id, description, created_by) VALUES (?, ?, ?)",
listOf(
PendingStatementParameter.Id,
PendingStatementParameter.Column("description"),
PendingStatementParameter.Column("created_by"),
),
),
delete =
PendingStatement(
"DELETE FROM todos WHERE id = ?",
listOf(PendingStatementParameter.Id),
),
),
)
```
Here:
* The SQL statements must match the schema created by Room.
* The `RawTable.name` and `PendingStatementParameter.Column` values must match the table and column names of the synced
table from the PowerSync Service, derived from your Sync Rules.
For more details, see [raw tables](/client-sdks/advanced/raw-tables).
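To illustrate how these parameters are filled in when PowerSync applies a synced row, here is a small sketch (shown in JavaScript for brevity; this illustrates the concept, not the SDK's internal implementation): `Id` resolves to the row's id, and `Column` resolves to the named field of the synced JSON data.

```js theme={null}
// Illustration only: resolve pending-statement parameters into the SQL
// argument list for the `put` statement defined above.
function resolveParameters(params, row) {
  // 'Id' maps to the row id; { column: ... } maps to a field of the synced data.
  return params.map((p) => (p === 'Id' ? row.id : row.data[p.column]));
}

const putParams = ['Id', { column: 'description' }, { column: 'created_by' }];
const syncedRow = { id: 'todo-1', data: { description: 'Buy milk', created_by: 'user-1' } };

console.log(resolveParameters(putParams, syncedRow));
// → ['todo-1', 'Buy milk', 'user-1']
```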
After these steps, you can open your Room database like you normally would. Then, you can use the
following method to obtain a `PowerSyncDatabase` instance which is backed by Room:
```Kotlin theme={null}
val schema = Schema(...)
val pool = RoomConnectionPool(yourRoomDatabase, schema)
val powersync = PowerSyncDatabase.opened(
pool = pool,
scope = this,
schema = schema,
identifier = "databaseName", // Prefer to use the same path/name as your Room database
logger = Logger,
)
```
The returned `PowerSyncDatabase` behaves just like a regular PowerSync database, meaning that you can call
`connect` to establish a sync connection:
```Kotlin theme={null}
powersync.connect(
YourBackendConnector(),
options = SyncOptions(
// Raw tables require the new client implementation.
newClientImplementation = true
)
)
```
## Usage
To run queries, you can keep defining Room DAOs in the usual way:
```Kotlin theme={null}
@Dao
interface TodoItemsDao {
    @Insert
    suspend fun create(item: TodoItem)

    @Query("SELECT * FROM todos")
    fun watchAll(): Flow<List<TodoItem>>
}

// ...
todoItemsDao.create(
    TodoItem(
        id = Uuid.random().toHexDashString(),
        description = "My first todo item",
        authorId = currentUserId
    )
)

todoItemsDao.watchAll().collect { items ->
    println("This flow emits events for writes from Room and synced data from PowerSync")
}
```
## Local writes
To transfer local writes from Room to PowerSync:
1. Create triggers on your Room tables to insert rows into `ps_crud`. See [raw tables](/client-sdks/advanced/raw-tables#capture-local-writes-with-triggers) for details.
2. Ensure the `RoomConnectionPool` is constructed with your `schema` (as shown above). When the schema is provided, the pool will notify PowerSync about writes to every raw table referenced in the schema.
3. Alternatively, after performing writes through Room, invoke:
```Kotlin theme={null}
pool.transferPendingRoomUpdatesToPowerSync()
```
This explicitly transfers any pending Room updates to PowerSync if you prefer to control the timing.
# SQLDelight (Beta)
Source: https://docs.powersync.com/client-sdks/orms/kotlin/sqldelight
PowerSync supports the SQLDelight library to safely build and run SQL statements on all platforms.
SQLDelight support is currently in beta.
There are some limitations to be aware of:
1. PowerSync migrates all databases to `user_version` 1 when created (it will never downgrade a database). If you want to use SQLDelight's schema versioning, start from version `2`.
2. `CREATE TABLE` statements in `.sq` files are only used at build time to verify queries. At runtime, PowerSync creates tables as views from your schema and ignores those statements. If you want SQLDelight to manage the schema, configure PowerSync to use [raw tables](/client-sdks/advanced/raw-tables).
3. Functions and tables provided by the PowerSync core SQLite extension are not visible to `.sq` files currently. We may revisit this with a custom dialect in the future.
## Features
When adopting SQLDelight with PowerSync, you can safely define your SQL statements and let
the SQLDelight compiler generate code to map rows into typed classes.
All `Flow`s from SQLDelight will automatically update for PowerSync writes (including those from
sync).
## Installation
This guide assumes that you already have a PowerSync database for Kotlin. See the [general documentation](/client-sdks/reference/kotlin) for notes on getting started with PowerSync.
To use SQLDelight, you can generally follow [SQLDelight](https://sqldelight.github.io/sqldelight/2.1.0/multiplatform_sqlite/) documentation. A few steps are different though, and these are highlighted here.
In addition to SQLDelight, add a dependency on `com.powersync:integration-sqldelight`, using the same version you use for the
PowerSync Kotlin SDK.
When defining your schema, note that the `CREATE TABLE` statements don't actually run. PowerSync creates views
for the schema passed to the `PowerSyncDatabase` factory instead. This also means that triggers, views and indexes
defined in `.sq` files are ignored.
To ensure your defined queries are valid, the `CREATE TABLE` syntax should still mirror your PowerSync schema.
Next, ensure that SQLDelight is not linking `sqlite3` (the PowerSync SDK takes care of that,
and you don't want to link it twice). Also, ensure the async generator is active because the
PowerSync driver does not support synchronous reads:
```Kotlin theme={null}
sqldelight {
databases {
linkSqlite.set(false)
create("YourDatabase") {
generateAsync.set(true)
deriveSchemaFromMigrations.set(false)
dialect("app.cash.sqldelight:sqlite-3-38-dialect")
}
}
}
```
## Usage
Open a PowerSync database [in the usual way](https://docs.powersync.com/client-sdks/reference/kotlin#getting-started)
and finally pass it to the constructor of your generated SQLDelight database:
```kotlin theme={null}
val powersync = PowerSyncDatabase(...)
val sqldelight = YourDatabase(PowerSyncDriver(powersync))
```
That's it! The `PowerSyncDriver` will automatically keep the two databases in sync and update SQLDelight flows
for all writes, regardless of whether they've been issued against the `sqldelight` database or against the source `powersync` connection.
### Example
```sql theme={null}
CREATE TABLE todo_items (
id TEXT NOT NULL,
title TEXT NOT NULL,
author_id TEXT NOT NULL
);
all:
SELECT * FROM todo_items;
create:
INSERT INTO todo_items (id, title, author_id) VALUES (uuid(), ?, ?);
```
```Kotlin theme={null}
sqldelight.todosQueries.create("my title", "some-author-id")
sqldelight.todosQueries.all().asFlow().mapToList(Dispatchers.IO).collect {
println("This flow emits events for writes from SQLDelight and synced data from PowerSync")
}
```
# ORM Support Overview
Source: https://docs.powersync.com/client-sdks/orms/overview
Use type-safe ORMs with PowerSync instead of raw SQL queries
## Our Approach to ORM Support
As much as some developers love to drop into raw SQL for advanced queries, it can be annoying to have to write SQL for simple queries, often because there’s no type-safety. Using an ORM helps address this challenge.
With PowerSync, our philosophy is to not force a specific ORM on developers. Instead, we allow any approach from raw SQL queries to working with popular ORM libraries.
We specifically avoid implementing our own ORM since we feel it's better to support popular existing ORMs, which likely do a much better job than we can. It also makes it easier to switch to/from PowerSync if you can keep most of your database code the same.
## Platform-Specific Information
## Learn More
See our blog post: [Using ORMs With PowerSync](https://www.powersync.com/blog/using-orms-with-powersync)
# GRDB (Alpha)
Source: https://docs.powersync.com/client-sdks/orms/swift/grdb
PowerSync integrates with the [GRDB library](https://github.com/groue/GRDB.swift), a powerful SQLite tool for Swift development. GRDB is a full-fledged SQLite ecosystem that offers SQLite connection creation and pooling, SQL generation (ORM functionality), database observation (reactive queries), robust concurrency, migrations, and SwiftUI integration with [GRDBQuery](https://github.com/groue/GRDBQuery).
This integration allows you to combine PowerSync's sync capabilities with GRDB's mature tooling and Swift-friendly patterns. It provides an easier adoption path for existing GRDB users while also enabling access to GRDB's ecosystem of libraries.
GRDB support was added in v1.9.0 of the PowerSync Swift SDK and is currently in an **alpha** release.
There are some limitations to be aware of:
* Updating the PowerSync schema using `updateSchema` is not yet supported.
* Xcode previews may not yet work correctly.
* The current implementation uses the SQLite session API for change tracking, which may consume more memory than the standard PowerSync implementation. A more efficient tracking mechanism is planned.
* You may see thread priority inversion warnings in Xcode. We're working to ensure consistent quality-of-service classes across threads.
* The schema definition process requires manually defining both the PowerSync `AppSchema` and GRDB record types separately. Future versions may allow these to be declared together or derived from each other.
## Features
When using GRDB with PowerSync:
* **Easier adoption for existing GRDB users**: The familiar GRDB API lowers the barrier to entry for teams already using GRDB.
* **Access to GRDB ecosystem**: Use libraries built on GRDB like GRDBQuery (SwiftUI data layer with automatic UI updates) and SQLiteData.
* **Type-safe query generation**: GRDB's ORM provides compile-time error checking and Swift-idiomatic patterns that make SQLite development more productive. You get features like database observation (similar to PowerSync's watch functionality), migration support, and record protocols that reduce boilerplate while maintaining flexibility to drop down to raw SQL when needed.
* **Direct SQLite access**: GRDB provides more direct access to the actual SQLite connections being used. This enables advanced SQLite operations like registering custom SQLite functions.
## Setup
This guide assumes that you have completed the [Getting Started](/client-sdks/reference/swift#getting-started) steps in the SDK documentation, or are at least familiar with them. The GRDB-specific configuration described below applies to the "Instantiate the PowerSync Database" step (step 2) in the Getting Started guide.
To set up PowerSync with GRDB, create a `DatabasePool` with PowerSync configuration:
```swift theme={null}
var config = Configuration()
try config.configurePowerSync(
schema: schema
)
let documentsDir = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
let dbURL = documentsDir.appendingPathComponent("test.sqlite")
let pool = try DatabasePool(
path: dbURL.path,
configuration: config
)
```
You can then pass this pool when creating the `PowerSyncDatabase`:
```swift theme={null}
let powerSync = openPowerSyncWithGRDB(
pool: pool,
schema: schema,
identifier: "mydatabase.sqlite"
)
```
The returned `PowerSyncDatabase` behaves just like a regular PowerSync database, meaning that you can call `connect` to establish a sync connection.
## Usage
Using the `DatabasePool` in the PowerSync SDK means that the same locking mechanisms are shared between instances of the `PowerSyncDatabase` and the `DatabasePool`, so consumers can safely alternate between both clients.
You can use PowerSync queries:
```swift theme={null}
try await powerSync.execute(
"INSERT INTO users(id, name, count) VALUES(uuid(), 'steven', 1)"
)
let initialUsers = try await powerSync.getAll(
"SELECT * FROM users"
) { cursor in
try cursor.getString(name: "name")
}
print("initial users \(initialUsers)")
```
And also use GRDB queries:
```swift theme={null}
// Define a GRDB record type
struct Users: Codable, Identifiable, FetchableRecord, PersistableRecord {
var id: String
var name: String
var count: Int
enum Columns {
static let name = Column(CodingKeys.name)
static let count = Column(CodingKeys.count)
}
}
let grdbUsers = try await pool.read { db in
try Users.fetchAll(db)
}
```
## Demo App
The [PowerSync Swift GRDB Demo App](https://github.com/powersync-ja/powersync-swift/tree/main/Demos/GRDBDemo) showcases how to use GRDB with PowerSync.
## Architecture
The GRDB integration works by sharing the same underlying SQLite database between PowerSync and GRDB. Instead of PowerSync creating its own SQLite database instance (as in the standard implementation), the integration uses a GRDB `DatabasePool` that has been configured with [PowerSync's Rust core extension](https://github.com/powersync-ja/powersync-sqlite-core) (required for PowerSync features).
When you create a `DatabasePool` with PowerSync configuration and pass it to `openPowerSyncWithGRDB`, PowerSync uses that same `DatabasePool` interface for all database operations.
This shared architecture means that you can use both the GRDB `DatabasePool` and PowerSync `PowerSyncDatabase` interfaces interchangeably.
# Client SDKs Overview
Source: https://docs.powersync.com/client-sdks/overview
Client-side SDKs for syncing data with PowerSync
PowerSync provides client SDKs for multiple frameworks. Each SDK manages a local SQLite database that syncs with your backend.
## Choose Your SDK
Select your client framework to get started:
## Common Tasks
Once you've installed an SDK, these guides cover the core functionality:
Query your local SQLite database
Insert, update, and delete records
Build reactive UIs with live queries
Common patterns and code examples
## Additional Resources
Use type-safe ORMs with PowerSync
Platform compatibility for each SDK
Working demo apps and starter templates
# Reading Data
Source: https://docs.powersync.com/client-sdks/reading-data
Query data from your local SQLite database using SQL queries
On the client-side, you can read data directly from the local SQLite database using standard SQL queries.
## Basic Queries
Read data using SQL queries:
```typescript React Native, Web, Node.js & Capacitor (TS) theme={null}
// Get all todos
const todos = await db.getAll('SELECT * FROM todos');
// Get a single todo
const todo = await db.get('SELECT * FROM todos WHERE id = ?', [todoId]);
// Watch for changes (reactive query)
const stream = db.watch('SELECT * FROM todos WHERE list_id = ?', [listId]);
for await (const todos of stream) {
// Update UI when data changes
console.log(todos);
}
```
```kotlin Kotlin theme={null}
// Get all todos
val todos = database.getAll("SELECT * FROM todos") { cursor ->
Todo.fromCursor(cursor)
}
// Get a single todo
val todo = database.get("SELECT * FROM todos WHERE id = ?", listOf(todoId)) { cursor ->
Todo.fromCursor(cursor)
}
// Watch for changes
database.watch("SELECT * FROM todos WHERE list_id = ?", listOf(listId))
.collect { todos ->
// Update UI when data changes
}
```
```swift Swift theme={null}
// Get all todos
let todos = try await db.getAll(
sql: "SELECT * FROM todos",
mapper: { cursor in
TodoContent(
description: try cursor.getString(name: "description")!,
completed: try cursor.getBooleanOptional(name: "completed")
)
}
)
// Watch for changes
for try await todos in db.watch(
sql: "SELECT * FROM todos WHERE list_id = ?",
parameters: [listId]
) {
// Update UI when data changes
}
```
```dart Dart/Flutter theme={null}
// Get all todos
final todos = await db.getAll('SELECT * FROM todos');
// Get a single todo
final todo = await db.get('SELECT * FROM todos WHERE id = ?', [todoId]);
// Watch for changes
db.watch('SELECT * FROM todos WHERE list_id = ?', [listId])
.listen((todos) {
// Update UI when data changes
});
```
```csharp .NET theme={null}
// Define a result type with properties matching the schema columns (some columns omitted here for brevity)
// public class TodoResult { public string id; public string description; public int completed; ... }
// Get all todos
var todos = await db.GetAll<TodoResult>("SELECT * FROM todos");
// Get a single todo
var todo = await db.Get<TodoResult>("SELECT * FROM todos WHERE id = ?", [todoId]);
// You can also query without specifying a type to get dynamic results:
dynamic asset = await db.Get("SELECT id, description, make FROM assets");
Console.WriteLine($"Asset ID: {asset.id}");
```
## Live Queries / Watch Queries
For reactive UI updates that automatically refresh when data changes, use watch queries. These queries execute whenever dependent tables are modified.
See [Live Queries / Watch Queries](/client-sdks/watch-queries) for more details.
## ORM Support
PowerSync integrates with popular ORM libraries, which provide type safety and additional tooling. Using an ORM is often preferable to writing raw SQL queries, especially for common operations.
See [ORM Support](/client-sdks/orms/overview) to learn which ORMs PowerSync supports and how to get started.
## Advanced Topics
* [Usage Examples](/client-sdks/usage-examples) - Code examples for common use cases
* [Full-Text Search](/client-sdks/full-text-search) - Full-text search using the [SQLite FTS5 extension](https://www.sqlite.org/fts5.html)
* [Query JSON in SQLite](/client-sdks/advanced/query-json-in-sqlite) - Learn how to work with JSON data in SQLite
* [Infinite Scrolling](/client-sdks/infinite-scrolling) - Efficiently load large datasets
* [High Performance Diffs](/client-sdks/high-performance-diffs) - Efficiently get row changes for large datasets
# Capacitor SDK (alpha)
Source: https://docs.powersync.com/client-sdks/reference/capacitor
Full SDK guide for using PowerSync in Capacitor clients
This SDK is distributed via NPM
Refer to `packages/capacitor` in the `powersync-js` repo on GitHub
Full API reference for the SDK
Gallery of example projects/demo apps built with Capacitor and PowerSync
Changelog for the SDK
This SDK is currently in an [**alpha** release](/resources/feature-status).
The SDK is largely built on our stable [Web SDK](/client-sdks/reference/javascript-web), so that functionality can be considered stable. However, the [Capacitor Community SQLite](https://github.com/capacitor-community/sqlite) integration for mobile platforms is in alpha for real-world testing and feedback. There are [known limitations](#limitations) currently.
**Built on the Web SDK**
The PowerSync Capacitor SDK is built on top of the [PowerSync Web SDK](/client-sdks/reference/javascript-web). It shares the same API and usage patterns as the Web SDK. The main differences are:
* Uses Capacitor-specific SQLite implementation (`@capacitor-community/sqlite`) for native Android and iOS platforms
* Certain features are not supported on native Android and iOS platforms, see [limitations](#limitations) below for details
All code examples from the Web SDK apply to Capacitor — use `@powersync/web` for imports instead of `@powersync/capacitor`. See the [JavaScript Web SDK reference](/client-sdks/reference/javascript-web) for ORM support, SPA framework integration, and developer notes.
### SDK Features
* **Real-time streaming of database changes**: Changes made by one user are instantly streamed to all other users with access to that data. This keeps clients automatically in sync without manual polling or refresh logic.
* **Direct access to a local SQLite database**: Data is stored locally, so apps can read and write instantly without network calls. This enables offline support and faster user interactions.
* **Asynchronous background execution**: The SDK performs database operations in the background to avoid blocking the application’s main thread. This means that apps stay responsive, even during heavy data activity.
* **Query subscriptions for live updates**: The SDK supports query subscriptions that automatically push real-time updates to client applications as data changes, keeping your UI reactive and up to date.
* **Automatic schema management**: PowerSync syncs schemaless data and applies a client-defined schema using SQLite views. This architecture means that PowerSync SDKs can handle schema changes gracefully without requiring explicit migrations on the client-side.
## Installation
Add the [PowerSync Capacitor NPM package](https://www.npmjs.com/package/@powersync/capacitor) to your project:
```bash theme={null}
npm install @powersync/capacitor
```
```bash theme={null}
yarn add @powersync/capacitor
```
```bash theme={null}
pnpm install @powersync/capacitor
```
**Install Peer Dependencies**
You must also install the following peer dependencies:
```bash theme={null}
npm install @capacitor-community/sqlite @powersync/web @journeyapps/wa-sqlite
```
```bash theme={null}
yarn add @capacitor-community/sqlite @powersync/web @journeyapps/wa-sqlite
```
```bash theme={null}
pnpm install @capacitor-community/sqlite @powersync/web @journeyapps/wa-sqlite
```
After installing, sync your Capacitor project:
```bash theme={null}
npx cap sync
```
## Getting Started
**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Streams (or legacy Sync Rules) (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
### 1. Define the Client-Side Schema
This refers to the schema for the managed SQLite database exposed by the PowerSync Client SDKs, which your app can read from and write to. The schema is applied when the database is instantiated (as we'll show in the next step) — no migrations are required.
**Generate schema automatically**
In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and click the **Connect** button in the top bar to generate the client-side schema in your preferred language. The schema is generated based on your Sync Rules.
Similar functionality exists in the [CLI](/tools/cli).
**Note:** The generated schema will not include an `id` column, as the client SDK automatically creates an `id` column of type `text`. Consequently, it is not necessary to specify an `id` column in your schema. For additional information on IDs, refer to [Client ID](/sync/advanced/client-id).
The types available are `text`, `integer` and `real`. These should map directly to the values produced by your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
**Example**:
**Note on imports**: While you install `@powersync/capacitor`, the Capacitor SDK extends the Web SDK so you import general components from `@powersync/web` (installed as a peer dependency). See the [JavaScript Web SDK schema definition section](/client-sdks/reference/javascript-web#1-define-the-client-side-schema) for more advanced examples.
```js theme={null}
// AppSchema.ts
import { column, Schema, Table } from '@powersync/web';
const lists = new Table({
created_at: column.text,
name: column.text,
owner_id: column.text
});
const todos = new Table(
{
list_id: column.text,
created_at: column.text,
completed_at: column.text,
description: column.text,
created_by: column.text,
completed_by: column.text,
completed: column.integer
},
{ indexes: { list: ['list_id'] } }
);
export const AppSchema = new Schema({
todos,
lists
});
// For types
export type Database = (typeof AppSchema)['types'];
export type TodoRecord = Database['todos'];
// OR:
// export type Todo = RowType<typeof todos>;
export type ListRecord = Database['lists'];
```
**Note**: No need to declare a primary key `id` column, as PowerSync will automatically create this.
### 2. Instantiate the PowerSync Database
Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
**Example**:
The Capacitor PowerSyncDatabase automatically detects the platform and uses the appropriate database drivers:
* **Android and iOS**: Uses [Capacitor Community SQLite](https://github.com/capacitor-community/sqlite) for native database access
* **Web**: Falls back to the PowerSync Web SDK
```js theme={null}
import { PowerSyncDatabase } from '@powersync/capacitor';
// Import general components from the Web SDK package
import { Schema } from '@powersync/web';
import { Connector } from './Connector';
import { AppSchema } from './AppSchema';
/**
* The Capacitor PowerSyncDatabase will automatically detect the platform
* and use the appropriate database drivers.
*/
export const db = new PowerSyncDatabase({
// The schema you defined in the previous step
schema: AppSchema,
database: {
// Filename for the SQLite database — it's important to only instantiate one instance per file.
dbFilename: 'powersync.db'
}
});
```
When using custom database factories, be sure to specify the `CapacitorSQLiteOpenFactory` for Capacitor platforms:
```js theme={null}
import { PowerSyncDatabase } from '@powersync/capacitor';
import { WASQLiteOpenFactory, CapacitorSQLiteOpenFactory } from '@powersync/capacitor';
import { Schema } from '@powersync/web';
const db = new PowerSyncDatabase({
schema: AppSchema,
database: isWeb
? new WASQLiteOpenFactory({ dbFilename: "mydb.sqlite" })
: new CapacitorSQLiteOpenFactory({ dbFilename: "mydb.sqlite" })
});
```
Once you've instantiated your PowerSync database, call the [connect()](https://powersync-ja.github.io/powersync-js/web-sdk/classes/AbstractPowerSyncDatabase#connect) method to sync data with your backend.
**Note**: This section assumes you want to use PowerSync to sync your backend source database with SQLite in your app. If you only want to use PowerSync to manage your local SQLite database without sync, instantiate the PowerSync database without calling `connect()` and refer to our [Local-Only](/client-sdks/advanced/local-only-usage) guide.
```js theme={null}
export const setupPowerSync = async () => {
// Uses the backend connector that will be created in the next section
const connector = new Connector();
db.connect(connector);
};
```
### 3. Integrate with your Backend
The PowerSync backend connector provides the connection between your application backend and the PowerSync client-side managed SQLite database. It is used to:
1. Retrieve an auth token to connect to the PowerSync instance.
2. Upload client-side writes to your backend API. Any writes that are made to the SQLite database are placed into an upload queue by the PowerSync Client SDK and automatically uploaded to your app backend (where you apply those changes to the backend source database) when the user is connected.
Accordingly, the connector must implement two methods:
1. [PowerSyncBackendConnector.fetchCredentials](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L16) - This method will be automatically invoked by the PowerSync Client SDK every couple of minutes to obtain authentication credentials. See [Authentication Setup](/configuration/auth/overview) for instructions on how the credentials should be generated.
2. [PowerSyncBackendConnector.uploadData](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L24) - This method will be automatically invoked by the PowerSync Client SDK whenever it needs to upload client-side writes to your app backend via your backend API. Therefore, in your implementation, you need to define how your backend API is called. See [Writing Client Changes](/handling-writes/writing-client-changes) for considerations on the app backend implementation.
**Example**:
See the [JavaScript Web SDK backend integration section](/client-sdks/reference/javascript-web#3-integrate-with-your-backend) for connector examples with Supabase and Firebase authentication, and handling `uploadData` with batch operations.
```js theme={null}
import { UpdateType } from '@powersync/web';
export class Connector {
async fetchCredentials() {
// Implement fetchCredentials to obtain a JWT from your authentication service.
// See https://docs.powersync.com/configuration/auth/overview
return {
endpoint: '[Your PowerSync instance URL or self-hosted endpoint]',
// Use a development token (see Authentication Setup https://docs.powersync.com/configuration/auth/development-tokens) to get up and running quickly
token: 'An authentication token'
};
}
async uploadData(database) {
// Implement uploadData to send local changes to your backend service.
// You can omit this method if you only want to sync data from the database to the client
// See example implementation here: https://docs.powersync.com/client-sdks/reference/javascript-web#3-integrate-with-your-backend
}
}
```
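Inside `uploadData`, the SDK hands you queued CRUD entries (in the JS SDK these carry `op`, `table`, `id` and `opData` fields). As a sketch of how one entry could be mapped to a REST call — the `/api/data` route is a hypothetical backend endpoint, not a prescribed contract:

```js theme={null}
// Illustration only: convert a queued CRUD entry into an HTTP request shape.
function crudEntryToRequest(entry) {
  switch (entry.op) {
    case 'PUT': // row created (or fully replaced)
      return { method: 'PUT', path: `/api/data/${entry.table}/${entry.id}`, body: entry.opData };
    case 'PATCH': // row updated; opData contains only the changed columns
      return { method: 'PATCH', path: `/api/data/${entry.table}/${entry.id}`, body: entry.opData };
    case 'DELETE': // row deleted
      return { method: 'DELETE', path: `/api/data/${entry.table}/${entry.id}`, body: null };
    default:
      throw new Error(`Unknown op: ${entry.op}`);
  }
}

const req = crudEntryToRequest({ op: 'PATCH', table: 'todos', id: '42', opData: { completed: 1 } });
console.log(req.method, req.path); // PATCH /api/data/todos/42
```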
## Using PowerSync: CRUD functions
Once the PowerSync instance is configured, you can start using the SQLite DB functions.
**All CRUD examples from the JavaScript Web SDK apply**: The Capacitor SDK uses the same API as the Web SDK. See the [JavaScript Web SDK CRUD functions section](/client-sdks/reference/javascript-web#using-powersync-crud-functions) for examples of `get`, `getAll`, `watch`, `execute`, `writeTransaction`, incremental watch updates, and differential results.
The most commonly used CRUD functions to interact with your SQLite data are:
* [PowerSyncDatabase.get](/client-sdks/reference/javascript-web#fetching-a-single-item) - get (SELECT) a single row from a table.
* [PowerSyncDatabase.getAll](/client-sdks/reference/javascript-web#querying-items-powersync.getall) - get (SELECT) a set of rows from a table.
* [PowerSyncDatabase.watch](/client-sdks/reference/javascript-web#watching-queries-powersync.watch) - execute a read query every time source tables are modified.
* [PowerSyncDatabase.execute](/client-sdks/reference/javascript-web#mutations-powersync.execute) - execute a write (INSERT/UPDATE/DELETE) query.
### Fetching a Single Item
The [get](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#get) method executes a read-only (SELECT) query and returns a single result. It throws an exception if no result is found. Use [getOptional](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#getoptional) to return a single optional result (returns `null` if no result is found).
```js theme={null}
// Find a list item by ID
export const findList = async (id) => {
  const result = await db.get('SELECT * FROM lists WHERE id = ?', [id]);
  return result;
};
```
### Querying Items (PowerSync.getAll)
The [getAll](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#getall) method returns a set of rows from a table.
```js theme={null}
// Get all list IDs
export const getLists = async () => {
  const results = await db.getAll('SELECT * FROM lists');
  return results;
};
```
### Watching Queries (PowerSync.watch)
The [watch](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#watch) method executes a read query whenever a change to a dependent table is made.
```javascript theme={null}
async function* pendingLists(): AsyncIterable<any[]> {
  for await (const result of db.watch(
    `SELECT * FROM lists WHERE state = ?`,
    ['pending']
  )) {
    yield result.rows?._array ?? [];
  }
}
```
```javascript theme={null}
const pendingLists = (onResult: (lists: any[]) => void): void => {
  db.watch(
    'SELECT * FROM lists WHERE state = ?',
    ['pending'],
    {
      onResult: (result: any) => {
        onResult(result.rows?._array ?? []);
      }
    }
  );
};
```
For advanced watch query features like incremental updates and differential results, see [Live Queries / Watch Queries](/client-sdks/watch-queries).
### Mutations (PowerSync.execute, PowerSync.writeTransaction)
The [execute](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#execute) method can be used for executing single SQLite write statements.
```js theme={null}
// Delete a list item by ID
export const deleteList = async (id) => {
  const result = await db.execute('DELETE FROM lists WHERE id = ?', [id]);
  return result;
};

// OR: using a transaction
const deleteList = async (id) => {
  await db.writeTransaction(async (tx) => {
    // Delete associated todos
    await tx.execute(`DELETE FROM ${TODOS_TABLE} WHERE list_id = ?`, [id]);
    // Delete list record
    await tx.execute(`DELETE FROM ${LISTS_TABLE} WHERE id = ?`, [id]);
  });
};
```
## Configure Logging
```js theme={null}
import { createBaseLogger, LogLevel } from '@powersync/web';

const logger = createBaseLogger();

// Configure the logger to use the default console output
logger.useDefaults();

// Set the minimum log level to DEBUG to see all log messages
// Available levels: DEBUG, INFO, WARN, ERROR, TRACE, OFF
logger.setLevel(LogLevel.DEBUG);
```
Enable verbose output in the developer tools for detailed logs.
## Limitations
* Encryption for native mobile platforms is not yet supported.
* Multiple tab support is not available for native Android and iOS targets.
* `PowerSyncDatabase.executeRaw` does not support results where multiple columns would have the same name in SQLite.
* `PowerSyncDatabase.execute` has limited support on Android. The SQLCipher Android driver exposes queries and executions as separate APIs, so there is no single method that handles both. While `PowerSyncDatabase.execute` accepts both, on Android a statement is treated as a query only when the SQL starts with `select` (case-insensitive).
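The Android query-detection rule above can be sketched as follows. This is an illustrative reimplementation of the rule as described (including trimming leading whitespace, which is an assumption), not the SDK's actual source:

```javascript
// Sketch of the described heuristic: a statement counts as a query only when
// its SQL starts with "select", case-insensitively.
function isTreatedAsQuery(sql) {
  return sql.trimStart().toLowerCase().startsWith('select');
}

console.log(isTreatedAsQuery('SELECT * FROM lists'));    // true
console.log(isTreatedAsQuery('DELETE FROM lists'));      // false
// Note: a CTE query does not start with "select", so under this heuristic it
// would not be detected as a query even though it returns rows.
console.log(isTreatedAsQuery('WITH t AS (SELECT 1) SELECT * FROM t')); // false
```

The CTE case illustrates why this limitation matters: statements that return rows but do not begin with `select` may not be handled as queries on Android.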
## Additional Usage Examples
For more usage examples including accessing connection status, monitoring sync progress, and waiting for initial sync, see the [Usage Examples](/client-sdks/usage-examples) page.
## ORM Support
See [JavaScript ORM Support](/client-sdks/orms/js/overview) for details.
## Troubleshooting
See [Troubleshooting](/debugging/troubleshooting) for pointers to debug common issues.
## Supported Platforms
See [Supported Platforms -> Capacitor SDK](/resources/supported-platforms#capacitor-sdk).
## Upgrading the SDK
Run the below command in your project folder:
```bash theme={null}
npm upgrade @powersync/capacitor @powersync/web
```
```bash theme={null}
yarn upgrade @powersync/capacitor @powersync/web
```
```bash theme={null}
pnpm upgrade @powersync/capacitor @powersync/web
```
# API Reference
Source: https://docs.powersync.com/client-sdks/reference/capacitor-api
# .NET SDK (alpha)
Source: https://docs.powersync.com/client-sdks/reference/dotnet
Full SDK guide for using PowerSync in .NET clients.
* This SDK is distributed via NuGet.
* Refer to the `powersync-dotnet` repo on GitHub.
* A full API Reference for this SDK is not yet available; this is planned for a future release.
* Gallery of example projects/demo apps built with .NET PowerSync.
* Changelog for the SDK.
This SDK is currently in an [**alpha** release](/resources/feature-status). It is not suitable for production use as breaking changes may still occur.
## Supported Frameworks and Targets
The PowerSync .NET SDK supports:
* **.NET Versions**: 6, 8, and 9
* **.NET Framework**: Version 4.8 (requires additional configuration)
* **MAUI**: Cross-platform support for Android, iOS, and Windows
* **WPF**: Windows desktop applications
**Current Limitations**:
* Blazor (web) platforms are not yet supported.
For more details, please refer to the package [README](https://github.com/powersync-ja/powersync-dotnet/tree/main?tab=readme-ov-file).
## SDK Features
* **Real-time streaming of database changes**: Changes made by one user are instantly streamed to all other users with access to that data. This keeps clients automatically in sync without manual polling or refresh logic.
* **Direct access to a local SQLite database**: Data is stored locally, so apps can read and write instantly without network calls. This enables offline support and faster user interactions.
* **Asynchronous background execution**: The SDK performs database operations in the background to avoid blocking the application’s main thread. This means that apps stay responsive, even during heavy data activity.
* **Query subscriptions for live updates**: The SDK supports query subscriptions that automatically push real-time updates to client applications as data changes, keeping your UI reactive and up to date.
* **Automatic schema management**: PowerSync syncs schemaless data and applies a client-defined schema using SQLite views. This architecture means that PowerSync SDKs can handle schema changes gracefully without requiring explicit migrations on the client-side.
## Quickstart
For desktop/server/binary use-cases and WPF, add the [`PowerSync.Common`](https://www.nuget.org/packages/PowerSync.Common/) NuGet package to your project:
```bash theme={null}
dotnet add package PowerSync.Common --prerelease
```
For MAUI apps, add both [`PowerSync.Common`](https://www.nuget.org/packages/PowerSync.Common/) and [`PowerSync.Maui`](https://www.nuget.org/packages/PowerSync.Maui/) NuGet packages to your project:
```bash theme={null}
dotnet add package PowerSync.Common --prerelease
dotnet add package PowerSync.Maui --prerelease
```
Add `--prerelease` while this package is in alpha. To install a specific version, use `--version` instead: `dotnet add package PowerSync.Common --version 0.0.6-alpha.1`
**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Streams (or legacy Sync Rules) (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
### 1. Define the Client-Side Schema
This refers to the schema for the managed SQLite database exposed by the PowerSync Client SDKs, which your app can read from and write to. The schema is applied when the database is instantiated (as we'll show in the next step) — no migrations are required.
**Generate schema automatically**
In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and click the **Connect** button in the top bar to generate the client-side schema in your preferred language. The schema will be generated based on your Sync Rules.
Similar functionality exists in the [CLI](/tools/cli).
**Note:** The generated schema will not include an `id` column, as the client SDK automatically creates an `id` column of type `text`. Consequently, it is not necessary to specify an `id` column in your schema. For additional information on IDs, refer to [Client ID](/sync/advanced/client-id).
You can use [this example](https://github.com/powersync-ja/powersync-dotnet/blob/main/demos/CommandLine/AppSchema.cs) as a reference when defining your schema.
The types available are `text`, `integer` and `real`. These should map directly to the values produced by your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
#### Schema definition syntax
There are two supported syntaxes for defining the schema:
**Attribute-based (recommended)** — Annotate a C# class with `[Table]`, `[Column]`, and `[Index]` attributes. The same class can then be used directly as the result type in queries, so you define your data structure once:
```cs theme={null}
using PowerSync.Common.DB.Schema;
using PowerSync.Common.DB.Schema.Attributes;

[Table("todos"), Index("list", ["list_id"])]
public class Todo
{
    [Column("id")]
    public string TodoId { get; set; }

    [Column("list_id")]
    public string ListId { get; set; }

    [Column("created_at")]
    public string CreatedAt { get; set; }

    [Column("completed")]
    public bool Completed { get; set; }

    // ... other columns
}

public static Schema PowerSyncSchema = new Schema(typeof(Todo));

// The same Todo class is used for queries:
var todos = await db.GetAll<Todo>("SELECT * FROM todos");
```
Unlike the other syntaxes where PowerSync automatically creates an `id` column, the attribute-based syntax requires you to explicitly declare it. The SDK identifies the `id` property by looking for either a property named `id`, or any property with a `[Column("id")]` attribute (case-insensitive). Having none or more than one is an error.
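As an illustration of the resolution rule described above, the lookup could be sketched with reflection along these lines. This is a hypothetical reimplementation for clarity — the `ColumnAttribute` stand-in and `IdResolver` are illustrative, not the SDK's actual types or internals:

```cs
using System;
using System.Linq;
using System.Reflection;

// Hypothetical stand-in for the SDK's [Column] attribute, for illustration only.
[AttributeUsage(AttributeTargets.Property)]
public class ColumnAttribute : Attribute
{
    public string Name { get; }
    public ColumnAttribute(string name) => Name = name;
}

public static class IdResolver
{
    // Find exactly one property that is either named "id" or mapped to the
    // "id" column via [Column("id")], case-insensitively; anything else is an error.
    public static PropertyInfo Resolve(Type type)
    {
        var matches = type.GetProperties().Where(p =>
            string.Equals(p.Name, "id", StringComparison.OrdinalIgnoreCase) ||
            string.Equals(p.GetCustomAttribute<ColumnAttribute>()?.Name, "id",
                StringComparison.OrdinalIgnoreCase)).ToList();

        if (matches.Count != 1)
            throw new InvalidOperationException(
                $"Expected exactly one id property on {type.Name}, found {matches.Count}.");
        return matches[0];
    }
}
```

Under this rule, the `Todo` class above resolves its id via the `[Column("id")]` attribute on `TodoId`, even though the property itself has a different name.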
If you prefer to keep your schema definition separate from your data classes, you can use the object initializer syntax instead:
```cs theme={null}
using PowerSync.Common.DB.Schema;

class AppSchema
{
    public static Table Todos = new Table
    {
        Name = "todos",
        Columns =
        {
            ["list_id"] = ColumnType.Text,
            ["created_at"] = ColumnType.Text,
            ["completed"] = ColumnType.Integer,
            // ... other columns
        },
        Indexes =
        {
            ["list"] = ["list_id"]
        }
    };

    public static Schema PowerSyncSchema = new Schema(Todos);
}
```
### 2. Instantiate the PowerSync Database
Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
**Example**:
The initialization syntax differs slightly between the Common and MAUI SDKs:
```cs theme={null}
using PowerSync.Common.Client;

class Demo
{
    static async Task Main()
    {
        var db = new PowerSyncDatabase(new PowerSyncDatabaseOptions
        {
            Database = new SQLOpenOptions { DbFilename = "tododemo.db" },
            Schema = AppSchema.PowerSyncSchema,
        });
        await db.Init();
    }
}
```
```cs theme={null}
using PowerSync.Common.Client;
using PowerSync.Common.MDSQLite;
using PowerSync.Maui.SQLite;

class Demo
{
    static async Task Main()
    {
        // Ensures the DB file is stored in a platform appropriate location
        var dbPath = Path.Combine(FileSystem.AppDataDirectory, "maui-example.db");

        var factory = new MAUISQLiteDBOpenFactory(new MDSQLiteOpenFactoryOptions()
        {
            DbFilename = dbPath
        });

        var db = new PowerSyncDatabase(new PowerSyncDatabaseOptions()
        {
            Database = factory, // Supply a factory
            Schema = AppSchema.PowerSyncSchema,
        });
        await db.Init();
    }
}
```
### 3. Integrate with your Backend
The PowerSync backend connector provides the connection between your application backend and the PowerSync client-side managed SQLite database. It is used to:
1. Retrieve an auth token to connect to the PowerSync instance.
2. Upload client-side writes to your backend API. Any writes that are made to the SQLite database are placed into an upload queue by the PowerSync Client SDK and automatically uploaded to your app backend (where you apply those changes to the backend source database) when the user is connected.
Accordingly, the connector must implement two methods:
1. [PowerSyncBackendConnector.FetchCredentials](https://github.com/powersync-ja/powersync-dotnet/blob/main/demos/CommandLine/NodeConnector.cs#L50) - This method will be automatically invoked by the PowerSync Client SDK every couple of minutes to obtain authentication credentials. See [Authentication Setup](/configuration/auth/overview) for instructions on how the credentials should be generated.
2. [PowerSyncBackendConnector.UploadData](https://github.com/powersync-ja/powersync-dotnet/blob/main/demos/CommandLine/NodeConnector.cs#L72) - This method will be automatically invoked by the PowerSync Client SDK whenever it needs to upload client-side writes to your app backend via your backend API. Therefore, in your implementation, you need to define how your backend API is called. See [Writing Client Changes](/handling-writes/writing-client-changes) for considerations on the app backend implementation.
**Example**:
```cs theme={null}
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

using PowerSync.Common.Client;
using PowerSync.Common.Client.Connection;
using PowerSync.Common.DB.Crud;

public class MyConnector : IPowerSyncBackendConnector
{
    private readonly HttpClient _httpClient;

    // User credentials for the current session
    public string UserId { get; private set; }

    // Service endpoints
    private readonly string _backendUrl;
    private readonly string _powerSyncUrl;

    private string? _clientId;

    public MyConnector()
    {
        _httpClient = new HttpClient();

        // In a real app, this would come from your authentication system
        UserId = "user-123";

        // Configure your service endpoints
        _backendUrl = "https://your-backend-api.example.com";
        _powerSyncUrl = "https://your-powersync-instance.powersync.journeyapps.com";
    }

    public async Task<PowerSyncCredentials> FetchCredentials()
    {
        try
        {
            // Implement FetchCredentials to obtain a JWT from your authentication service.
            // See https://docs.powersync.com/configuration/auth/overview
            var authToken = "your-auth-token"; // Use a development token (see Authentication Setup https://docs.powersync.com/configuration/auth/development-tokens) to get up and running quickly

            // Return credentials with PowerSync endpoint and JWT token
            return new PowerSyncCredentials(_powerSyncUrl, authToken);
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Error fetching credentials: {ex.Message}");
            throw;
        }
    }

    public async Task UploadData(IPowerSyncDatabase database)
    {
        // Get the next transaction to upload
        CrudTransaction? transaction;
        try
        {
            transaction = await database.GetNextCrudTransaction();
        }
        catch (Exception ex)
        {
            Console.WriteLine($"UploadData Error: {ex.Message}");
            return;
        }

        // If there's no transaction, there's nothing to upload
        if (transaction == null)
        {
            return;
        }

        // Get client ID if not already retrieved
        _clientId ??= await database.GetClientId();

        try
        {
            // Convert PowerSync operations to your backend format
            var batch = new List<object>();
            foreach (var operation in transaction.Crud)
            {
                batch.Add(new
                {
                    op = operation.Op.ToString(), // INSERT, UPDATE, DELETE
                    table = operation.Table,
                    id = operation.Id,
                    data = operation.OpData
                });
            }

            // Send the operations to your backend
            var payload = JsonSerializer.Serialize(new { batch });
            var content = new StringContent(payload, Encoding.UTF8, "application/json");

            HttpResponseMessage response = await _httpClient.PostAsync($"{_backendUrl}/api/data", content);
            response.EnsureSuccessStatusCode();

            // Mark the transaction as completed
            await transaction.Complete();
        }
        catch (Exception ex)
        {
            Console.WriteLine($"UploadData Error: {ex.Message}");
            throw;
        }
    }
}
```
With your database instantiated and your connector ready, call `connect` to start syncing data with your backend:
```cs theme={null}
await db.Connect(new MyConnector());
await db.WaitForFirstSync(); // Optional, to wait for a complete snapshot of data to be available
```
**Note**: This section assumes you want to use PowerSync to sync your backend source database with SQLite in your app. If you only want to use PowerSync to manage your local SQLite database without sync, instantiate the PowerSync database without calling `connect()` and refer to our [Local-Only](/client-sdks/advanced/local-only-usage) guide.
## Using PowerSync: CRUD functions
Once the PowerSync instance is configured you can start using the SQLite DB functions.
The most commonly used CRUD functions to interact with your SQLite data are:
* `PowerSyncDatabase.Get` - get (SELECT) a single row from a table.
* `PowerSyncDatabase.GetAll` - get (SELECT) a set of rows from a table.
* `PowerSyncDatabase.Watch` - execute a read query every time source tables are modified.
* `PowerSyncDatabase.Execute` - execute a write (INSERT/UPDATE/DELETE) query.
### Fetching a Single Item
The `Get` method executes a read-only (SELECT) query and returns a single result. It throws an exception if no result is found. Use `GetOptional` to return a single optional result (returns `null` if no result is found).
```cs theme={null}
// Define a result type with properties matching the schema columns (some columns omitted here for brevity)
// public class ListResult { public string id; public string name; public string owner_id; ... }

var list = await db.Get<ListResult>("SELECT * FROM lists WHERE id = ?", [listId]);
```
### Querying Items (PowerSync.GetAll)
The `GetAll` method returns a set of rows from a table.
```cs theme={null}
// Define a result type with properties matching the schema columns (some columns omitted here for brevity)
// public class ListResult { public string id; public string name; public string owner_id; ... }

var lists = await db.GetAll<ListResult>("SELECT * FROM lists");
```
### Watching Queries (PowerSync.Watch)
The `Watch` method executes a read query whenever a change to a dependent table is made. It returns an `IAsyncEnumerable` so you can use `await foreach` to consume results.
```csharp theme={null}
// Define a result type with properties matching the schema columns (some columns omitted here for brevity)
// public class ListResult { public string id; public string name; public string owner_id; ... }

// Optional cancellation token to stop watching
var cts = new CancellationTokenSource();

// Register listener synchronously on the calling thread...
var listener = db.Watch<ListResult>(
    "SELECT * FROM lists WHERE owner_id = ?",
    [ownerId],
    new SQLWatchOptions { Signal = cts.Token }
);

// ...then listen to changes on another thread (or await foreach directly if already in an async context)
_ = Task.Run(async () =>
{
    await foreach (var results in listener)
    {
        Console.WriteLine("Lists: ");
        foreach (var result in results)
        {
            Console.WriteLine($"{result.id}: {result.name}");
        }
    }
}, cts.Token);

// To stop watching, cancel the token: cts.Cancel();
```
### Mutations (PowerSync.Execute)
The `Execute` method can be used for executing single SQLite write statements.
```cs theme={null}
// Use db.Execute for inserts, updates and deletes:
await db.Execute(
    "insert into lists (id, name, owner_id, created_at) values (uuid(), 'New User', ?, datetime())",
    [connector.UserId]
);
```
## Configure Logging
Enable logging to help you debug your app. By default, the SDK uses a no-op logger that doesn't output any logs. To enable logging, you can configure a custom logger using .NET's `ILogger` interface:
```cs theme={null}
using Microsoft.Extensions.Logging;
using PowerSync.Common.Client;

// Create a logger factory
ILoggerFactory loggerFactory = LoggerFactory.Create(builder =>
{
    builder.AddConsole(); // Enable console logging
    builder.SetMinimumLevel(LogLevel.Information); // Set minimum log level
});

var logger = loggerFactory.CreateLogger("PowerSyncLogger");

var db = new PowerSyncDatabase(new PowerSyncDatabaseOptions
{
    Database = new SQLOpenOptions { DbFilename = "powersync.db" },
    Schema = AppSchema.PowerSyncSchema,
    Logger = logger
});
```
## Additional Usage Examples
For more usage examples including accessing connection status, monitoring sync progress, and waiting for initial sync, see the [Usage Examples](/client-sdks/usage-examples) page.
## Troubleshooting
See [Troubleshooting](/debugging/troubleshooting) for pointers to debug common issues.
## Supported Platforms
See [Supported Platforms -> .NET SDK](/resources/supported-platforms#net-sdk).
## Upgrading the SDK
To upgrade to the latest version of the PowerSync package, run the below command in your project folder:
```bash theme={null}
dotnet add package PowerSync.Common --prerelease
```
```bash theme={null}
dotnet add package PowerSync.Common --prerelease
dotnet add package PowerSync.Maui --prerelease
```
Add `--prerelease` while this package is in alpha. To install a specific version, use `--version` instead: `dotnet add package PowerSync.Common --version 0.0.6-alpha.1`
# Dart/Flutter SDK
Source: https://docs.powersync.com/client-sdks/reference/flutter
Full SDK guide for using PowerSync in Dart/Flutter clients
* The SDK is distributed via pub.dev.
* Refer to the `powersync.dart` repo on GitHub.
* Full API reference for the SDK.
* Gallery of example projects/demo apps built with Flutter and PowerSync.
* Changelog for the SDK.
### Quickstart
Get started quickly by using the self-hosted **Flutter** + **Supabase** template
📂 GitHub Repo
[https://github.com/powersync-community/flutter-powersync-supabase](https://github.com/powersync-community/flutter-powersync-supabase)
### SDK Features
* **Real-time streaming of database changes**: Changes made by one user are instantly streamed to all other users with access to that data. This keeps clients automatically in sync without manual polling or refresh logic.
* **Direct access to a local SQLite database**: Data is stored locally, so apps can read and write instantly without network calls. This enables offline support and faster user interactions.
* **Asynchronous background execution**: The SDK performs database operations in the background to avoid blocking the application’s main thread. This means that apps stay responsive, even during heavy data activity.
* **Query subscriptions for live updates**: The SDK supports query subscriptions that automatically push real-time updates to client applications as data changes, keeping your UI reactive and up to date.
* **Automatic schema management**: PowerSync syncs schemaless data and applies a client-defined schema using SQLite views. This architecture means that PowerSync SDKs can handle schema changes gracefully without requiring explicit migrations on the client-side.
Web support is currently in a beta release. Refer to [Flutter Web Support](/client-sdks/frameworks/flutter-web-support) for more details.
## Installation
Add the [PowerSync pub.dev package](https://pub.dev/packages/powersync) to your project:
```bash theme={null}
flutter pub add powersync
```
## Getting Started
**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Streams (or legacy Sync Rules) (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
For this reference document, we assume that you have created a Flutter project and have the following directory structure:
```plaintext theme={null}
lib/
├── models/
│   ├── schema.dart
│   └── todolist.dart
├── powersync/
│   ├── my_backend_connector.dart
│   └── powersync.dart
├── widgets/
│   ├── lists_widget.dart
│   └── todos_widget.dart
└── main.dart
```
### 1. Define the Client-Side Schema
The first step is to define the client-side schema, which refers to the schema for the managed SQLite database exposed by the PowerSync Client SDKs, that your app can read from and write to. The client-side schema is typically mainly derived from your backend source database schema and your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)), but can also include other tables such as local-only tables. Note that schema migrations are not required on the SQLite database due to the schemaless nature of the [PowerSync protocol](/architecture/powersync-protocol): schemaless data is synced to the client-side SQLite database, and the client-side schema is then applied to that data using *SQLite views* to allow for structured querying of the data. The schema is applied when the local PowerSync database is constructed (as we'll show in the next step).
**Generate schema automatically**
In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and click the **Connect** button in the top bar to generate the client-side schema in your preferred language. The schema will be generated based on your Sync Rules.
Similar functionality exists in the [CLI](/tools/cli).
**Note:** The generated schema will not include an `id` column, as the client SDK automatically creates an `id` column of type `text`. Consequently, it is not necessary to specify an `id` column in your schema. For additional information on IDs, refer to [Client ID](/sync/advanced/client-id).
The types available are `text`, `integer` and `real`. These should map directly to the values produced by your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
**Example**:
```dart lib/models/schema.dart theme={null}
import 'package:powersync/powersync.dart';

const schema = Schema([
  Table('todos', [
    Column.text('list_id'),
    Column.text('created_at'),
    Column.text('completed_at'),
    Column.text('description'),
    Column.integer('completed'),
    Column.text('created_by'),
    Column.text('completed_by'),
  ], indexes: [
    // Index to allow efficient lookup within a list
    Index('list', [IndexedColumn('list_id')])
  ]),
  Table('lists', [
    Column.text('created_at'),
    Column.text('name'),
    Column.text('owner_id')
  ])
]);
```
**Note**: No need to declare a primary key `id` column, as PowerSync will automatically create this.
### 2. Instantiate the PowerSync Database
Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
To instantiate `PowerSyncDatabase`, inject the Schema you defined in the previous step and a file path — it's important to only instantiate one instance of `PowerSyncDatabase` per file.
**Example**:
```dart lib/powersync/powersync.dart theme={null}
import 'package:path/path.dart';
import 'package:path_provider/path_provider.dart';
import 'package:powersync/powersync.dart';

import '../main.dart';
import '../models/schema.dart';

openDatabase() async {
  final dir = await getApplicationSupportDirectory();
  final path = join(dir.path, 'powersync-dart.db');

  // Set up the database
  // Inject the Schema you defined in the previous step and a file path
  db = PowerSyncDatabase(schema: schema, path: path);
  await db.initialize();
}
```
Once you've instantiated your PowerSync database, call the [connect()](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncDatabase/connect.html) method to sync data with your backend. This method requires the backend connector that will be created in the next step.
**Note**: This section assumes you want to use PowerSync to sync your backend source database with SQLite in your app. If you only want to use PowerSync to manage your local SQLite database without sync, instantiate the PowerSync database without calling `connect()` and refer to our [Local-Only](/client-sdks/advanced/local-only-usage) guide.
```dart lib/main.dart {34} theme={null}
import 'package:flutter/material.dart';
import 'package:powersync/powersync.dart';

import 'powersync/powersync.dart';

late PowerSyncDatabase db;

Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized();
  await openDatabase();
  runApp(const DemoApp());
}

class DemoApp extends StatefulWidget {
  const DemoApp({super.key});

  @override
  State<DemoApp> createState() => _DemoAppState();
}

class _DemoAppState extends State<DemoApp> {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Demo',
      home: // TODO: Implement your own UI here.
          // You could listen for authentication state changes to connect or disconnect from PowerSync
          StreamBuilder(
        stream: // TODO: some stream,
        builder: (ctx, snapshot) {
          // TODO: implement your own condition here
          if ( ... ) {
            // Uses the backend connector that will be created in the next step
            db.connect(connector: MyBackendConnector(db));
            // TODO: implement your own UI here
          }
        },
      ),
    );
  }
}
```
### 3. Integrate with your Backend
The PowerSync backend connector provides the connection between your application backend and the PowerSync client-side managed SQLite database. It is used to:
1. Retrieve an auth token to connect to the PowerSync instance.
2. Upload client-side writes to your backend API. Any writes that are made to the SQLite database are placed into an upload queue by the PowerSync Client SDK and automatically uploaded to your app backend (where you apply those changes to the backend source database) when the user is connected.
Accordingly, the connector must implement two methods:
1. [PowerSyncBackendConnector.fetchCredentials](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncBackendConnector/fetchCredentials.html) - This method will be automatically invoked by the PowerSync Client SDK every couple of minutes to obtain authentication credentials. See [Authentication Setup](/configuration/auth/overview) for instructions on how the credentials should be generated.
2. [PowerSyncBackendConnector.uploadData](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncBackendConnector/uploadData.html) - This method will be automatically invoked by the PowerSync Client SDK whenever it needs to upload client-side writes to your app backend via your backend API. Therefore, in your implementation, you need to define how your backend API is called. See [Writing Client Changes](/handling-writes/writing-client-changes) for considerations on the app backend implementation.
**Example**:
```dart lib/powersync/my_backend_connector.dart theme={null}
import 'package:powersync/powersync.dart';
class MyBackendConnector extends PowerSyncBackendConnector {
PowerSyncDatabase db;
MyBackendConnector(this.db);
@override
Future fetchCredentials() async {
// Implement fetchCredentials to obtain a JWT from your authentication service.
// See https://docs.powersync.com/configuration/auth/overview
// See example implementation here: https://pub.dev/documentation/powersync/latest/powersync/DevConnector/fetchCredentials.html
return PowerSyncCredentials(
endpoint: 'https://xxxxxx.powersync.journeyapps.com',
// Use a development token (see Authentication Setup https://docs.powersync.com/configuration/auth/development-tokens) to get up and running quickly
token: 'An authentication token'
);
}
// Implement uploadData to send local changes to your backend service
// You can omit this method if you only want to sync data from the server to the client
// See example implementation here: https://docs.powersync.com/client-sdks/reference/flutter#3-integrate-with-your-backend
@override
Future<void> uploadData(PowerSyncDatabase database) async {
// This function is called whenever there is data to upload, whether the
// device is online or offline.
// If this call throws an error, it is retried periodically.
final transaction = await database.getNextCrudTransaction();
if (transaction == null) {
return;
}
// The data that needs to be changed in the remote db
for (var op in transaction.crud) {
switch (op.op) {
case UpdateType.put:
// TODO: Instruct your backend API to CREATE a record
case UpdateType.patch:
// TODO: Instruct your backend API to PATCH a record
case UpdateType.delete:
// TODO: Instruct your backend API to DELETE a record
}
}
// Completes the transaction and moves onto the next one
await transaction.complete();
}
}
```
## Using PowerSync: CRUD functions
Once the PowerSync instance is configured, you can start using the SQLite DB functions.
The most commonly used CRUD functions to interact with your SQLite data are:
* [PowerSyncDatabase.get](/client-sdks/reference/flutter#fetching-a-single-item) - get (SELECT) a single row from a table.
* [PowerSyncDatabase.getAll](/client-sdks/reference/flutter#querying-items-powersync.getall) - get (SELECT) a set of rows from a table.
* [PowerSyncDatabase.watch](/client-sdks/reference/flutter#watching-queries-powersync.watch) - execute a read query every time source tables are modified.
* [PowerSyncDatabase.execute](/client-sdks/reference/flutter#mutations-powersync.execute) - execute a write (INSERT/UPDATE/DELETE) query.
For the following examples, we will define a `TodoList` model class that represents a List of todos.
```dart lib/models/todolist.dart theme={null}
/// This is a simple model class representing a TodoList
class TodoList {
final int id;
final String name;
final DateTime createdAt;
final DateTime updatedAt;
TodoList({
required this.id,
required this.name,
required this.createdAt,
required this.updatedAt,
});
factory TodoList.fromRow(Map<String, dynamic> row) {
return TodoList(
id: row['id'],
name: row['name'],
createdAt: DateTime.parse(row['created_at']),
updatedAt: DateTime.parse(row['updated_at']),
);
}
}
```
### Fetching a Single Item
The [get](https://pub.dev/documentation/powersync/latest/sqlite_async/SqliteQueries/get.html) method executes a read-only (SELECT) query and returns a single result. It throws an exception if no result is found. Use [getOptional](https://pub.dev/documentation/powersync/latest/sqlite_async/SqliteQueries/getOptional.html) to return a single optional result (returns `null` if no result is found).
The following is an example of selecting a list item by ID:
```dart lib/widgets/lists_widget.dart theme={null}
import '../main.dart';
import '../models/todolist.dart';
Future<TodoList> find(id) async {
final result = await db.get('SELECT * FROM lists WHERE id = ?', [id]);
return TodoList.fromRow(result);
}
```
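When the row may legitimately be absent, `getOptional` avoids the exception. A minimal sketch of the same lookup (the `findOptional` helper name is ours, not part of the SDK):

```dart theme={null}
import '../main.dart';
import '../models/todolist.dart';

/// Returns the matching list, or null if no row with this ID exists.
Future<TodoList?> findOptional(id) async {
  final result = await db.getOptional('SELECT * FROM lists WHERE id = ?', [id]);
  return result == null ? null : TodoList.fromRow(result);
}
```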
### Querying Items (PowerSync.getAll)
The [getAll](https://pub.dev/documentation/powersync/latest/sqlite_async/SqliteQueries/getAll.html) method returns a set of rows from a table.
```dart lib/widgets/lists_widget.dart theme={null}
import 'package:powersync/sqlite3.dart';
import '../main.dart';
Future<List<String>> getLists() async {
ResultSet results = await db.getAll('SELECT id FROM lists WHERE id IS NOT NULL');
List<String> ids = results.map((row) => row['id'] as String).toList();
return ids;
}
```
### Watching Queries (PowerSync.watch)
The [watch](https://pub.dev/documentation/powersync/latest/sqlite_async/SqliteQueries/watch.html) method executes a read query whenever a change to a dependent table is made.
```dart theme={null}
StreamBuilder(
stream: db.watch('SELECT * FROM lists WHERE state = ?', ['pending']),
builder: (context, snapshot) {
if (snapshot.hasData) {
// TODO: implement your own UI here based on the result set
return ...;
} else {
return const Center(child: CircularProgressIndicator());
}
},
)
```
### Mutations (PowerSync.execute)
The [execute](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncDatabase/execute.html) method can be used for executing single SQLite write statements.
```dart lib/widgets/todos_widget.dart {12-15} theme={null}
import 'package:flutter/material.dart';
import '../main.dart';
// Example Todos widget
class TodosWidget extends StatelessWidget {
const TodosWidget({super.key});
@override
Widget build(BuildContext context) {
return FloatingActionButton(
onPressed: () async {
await db.execute(
'INSERT INTO lists(id, created_at, name, owner_id) VALUES(uuid(), datetime(), ?, ?)',
['name', '123'],
);
},
tooltip: '+',
child: const Icon(Icons.add),
);
}
}
```
## Configure Logging
Since version 1.1.2 of the SDK, logging is enabled by default and outputs logs from PowerSync to the console in debug mode.
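The SDK's logging is based on the standard Dart [`logging`](https://pub.dev/packages/logging) package, so log output can be tuned through it. A minimal sketch of routing records to the console (the level and message format here are illustrative assumptions, not SDK defaults):

```dart theme={null}
import 'package:logging/logging.dart';

void configureLogging() {
  // Capture records at INFO level and above
  Logger.root.level = Level.INFO;
  Logger.root.onRecord.listen((record) {
    print('[${record.loggerName}] ${record.level.name}: ${record.message}');
  });
}
```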
## Additional Usage Examples
For more usage examples including accessing connection status, monitoring sync progress, and waiting for initial sync, see the [Usage Examples](/client-sdks/usage-examples) page.
## ORM Support
See [ORM Support](/client-sdks/orms/flutter-orm-support) for details.
## Troubleshooting
See [Troubleshooting](/debugging/troubleshooting) for pointers to debug common issues.
## Supported Platforms
See [Supported Platforms -> Dart SDK](/resources/supported-platforms#dart-sdk).
## Upgrading the SDK
To upgrade to a newer version of the PowerSync package, run the below command in your project folder:
```bash theme={null}
flutter pub upgrade powersync
```
# API Reference
Source: https://docs.powersync.com/client-sdks/reference/flutter-api
# JavaScript Web SDK
Source: https://docs.powersync.com/client-sdks/reference/javascript-web
Full SDK guide for using PowerSync in JavaScript Web clients
* This SDK is distributed via NPM: [`@powersync/web`](https://www.npmjs.com/package/@powersync/web)
* Source code: refer to `packages/web` in the [`powersync-js`](https://github.com/powersync-ja/powersync-js) repo on GitHub
* Full [API reference](https://powersync-ja.github.io/powersync-js/web-sdk) for the SDK
* Gallery of example projects/demo apps built with JavaScript Web stacks and PowerSync
* Changelog for the SDK
### Quickstart
📂 GitHub Repo: [powersync-community/vite-react-ts-powersync-supabase](https://github.com/powersync-community/vite-react-ts-powersync-supabase/)
### SDK Features
* **Real-time streaming of database changes**: Changes made by one user are instantly streamed to all other users with access to that data. This keeps clients automatically in sync without manual polling or refresh logic.
* **Direct access to a local SQLite database**: Data is stored locally, so apps can read and write instantly without network calls. This enables offline support and faster user interactions.
* **Asynchronous background execution**: The SDK performs database operations in the background to avoid blocking the application’s main thread. This means that apps stay responsive, even during heavy data activity.
* **Query subscriptions for live updates**: The SDK supports query subscriptions that automatically push real-time updates to client applications as data changes, keeping your UI reactive and up to date.
* **Automatic schema management**: PowerSync syncs schemaless data and applies a client-defined schema using SQLite views. This architecture means that PowerSync SDKs can handle schema changes gracefully without requiring explicit migrations on the client-side.
## Single-Page Application (SPA) Frameworks
The PowerSync [JavaScript Web SDK](../javascript-web) is compatible with popular Single-Page Application (SPA) frameworks like React, Vue, Angular, and Svelte. Integration packages are provided specifically for the following:
* **React** ([`@powersync/react`](https://www.npmjs.com/package/@powersync/react)): Wrapper package to support reactivity and live queries.
* **Vue** ([`@powersync/vue`](https://www.npmjs.com/package/@powersync/vue)): Wrapper package to support reactivity and live queries.
* **TanStack**: PowerSync integrates with TanStack Query and TanStack DB for reactive data management.
* **Nuxt** ([module](/client-sdks/frameworks/nuxt)): PowerSync Nuxt module to build offline/local-first apps using Nuxt.
For React or React Native apps:
* The [`@powersync/react`](#react-hooks) package is best for most basic use cases, especially when you only need reactive queries with loading and error states.
* For more advanced scenarios, such as query caching and pagination, [TanStack Query](#tanstack-query) is a powerful solution. The [`@powersync/tanstack-react-query`](#tanstack-query) package extends the `useQuery` hook from `@powersync/react` and adds functionality from [TanStack Query](https://tanstack.com/query/latest/docs/framework/react/overview), making it a better fit for advanced use cases or performance-optimized apps.
* For reactive data management and live query support across multiple frameworks, consider [TanStack DB](#tanstack-db). PowerSync works with all TanStack DB framework adapters (React, Vue, Solid, Svelte, Angular).
If you have a Vue app, use the Vue-specific package: [`@powersync/vue`](#vue-composables).
## Installation
Add the [PowerSync Web NPM package](https://www.npmjs.com/package/@powersync/web) to your project:
```bash theme={null}
npm install @powersync/web
```
```bash theme={null}
yarn add @powersync/web
```
```bash theme={null}
pnpm install @powersync/web
```
**Install Peer Dependencies**
This SDK currently requires [`@journeyapps/wa-sqlite`](https://www.npmjs.com/package/@journeyapps/wa-sqlite) as a peer dependency. Install it in your app with:
```bash theme={null}
npm install @journeyapps/wa-sqlite
```
```bash theme={null}
yarn add @journeyapps/wa-sqlite
```
```bash theme={null}
pnpm install @journeyapps/wa-sqlite
```
## Getting Started
**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Streams (or legacy Sync Rules) (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
### 1. Define the Client-Side Schema
This refers to the schema for the managed SQLite database exposed by the PowerSync Client SDK, which your app can read from and write to. The schema is applied when the database is instantiated (as we'll show in the next step); no migrations are required.
**Generate schema automatically**
In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and click the **Connect** button in the top bar to generate the client-side schema in your preferred language. The schema is generated based on your Sync Streams (or legacy Sync Rules).
Similar functionality exists in the [CLI](/tools/cli).
**Note:** The generated schema will not include an `id` column, as the client SDK automatically creates an `id` column of type `text`. Consequently, it is not necessary to specify an `id` column in your schema. For additional information on IDs, refer to [Client ID](/sync/advanced/client-id).
The types available are `text`, `integer` and `real`. These should map directly to the values produced by your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
**Example**:
```js theme={null}
// AppSchema.ts
import { column, Schema, Table } from '@powersync/web';
const lists = new Table({
created_at: column.text,
name: column.text,
owner_id: column.text
});
const todos = new Table(
{
list_id: column.text,
created_at: column.text,
completed_at: column.text,
description: column.text,
created_by: column.text,
completed_by: column.text,
completed: column.integer
},
{ indexes: { list: ['list_id'] } }
);
export const AppSchema = new Schema({
todos,
lists
});
// For types
export type Database = (typeof AppSchema)['types'];
export type TodoRecord = Database['todos'];
// OR:
// export type Todo = RowType<typeof todos>;
export type ListRecord = Database['lists'];
```
**Note**: No need to declare a primary key `id` column, as PowerSync will automatically create this.
### 2. Instantiate the PowerSync Database
Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
**Example**:
```js theme={null}
import { PowerSyncDatabase } from '@powersync/web';
import { Connector } from './Connector';
import { AppSchema } from './AppSchema';
export const db = new PowerSyncDatabase({
// The schema you defined in the previous step
schema: AppSchema,
database: {
// Filename for the SQLite database — it's important to only instantiate one instance per file.
dbFilename: 'powersync.db'
// Optional. Directory where the database file is located.
// dbLocation: 'path/to/directory'
}
});
```
**SDK versions lower than 1.2.0**
In SDK versions lower than 1.2.0, you will need to use the deprecated [WASQLitePowerSyncDatabaseOpenFactory](https://powersync-ja.github.io/powersync-js/web-sdk/classes/WASQLitePowerSyncDatabaseOpenFactory) syntax to instantiate the database.
Once you've instantiated your PowerSync database, call the [connect()](https://powersync-ja.github.io/powersync-js/web-sdk/classes/AbstractPowerSyncDatabase#connect) method to sync data with your backend.
**Note**: This section assumes you want to use PowerSync to sync your backend source database with SQLite in your app. If you only want to use PowerSync to manage your local SQLite database without sync, instantiate the PowerSync database without calling `connect()` and refer to our [Local-Only](/client-sdks/advanced/local-only-usage) guide.
```js theme={null}
export const setupPowerSync = async () => {
// Uses the backend connector that will be created in the next section
const connector = new Connector();
db.connect(connector);
};
```
### 3. Integrate with your Backend
The PowerSync backend connector provides the connection between your application backend and the PowerSync client-side managed SQLite database. It is used to:
1. Retrieve an auth token to connect to the PowerSync instance.
2. Upload client-side writes to your backend API. Any writes that are made to the SQLite database are placed into an upload queue by the PowerSync Client SDK and automatically uploaded to your app backend (where you apply those changes to the backend source database) when the user is connected.
Accordingly, the connector must implement two methods:
1. [PowerSyncBackendConnector.fetchCredentials](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L16) - This method will be automatically invoked by the PowerSync Client SDK every couple of minutes to obtain authentication credentials. See [Authentication Setup](/configuration/auth/overview) for instructions on how the credentials should be generated.
2. [PowerSyncBackendConnector.uploadData](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L24) - This method will be automatically invoked by the PowerSync Client SDK whenever it needs to upload client-side writes to your app backend via your backend API. Therefore, in your implementation, you need to define how your backend API is called. See [Writing Client Changes](/handling-writes/writing-client-changes) for considerations on the app backend implementation.
**Example**:
```js theme={null}
import { UpdateType } from '@powersync/web';
export class Connector {
async fetchCredentials() {
// Implement fetchCredentials to obtain a JWT from your authentication service.
// See https://docs.powersync.com/configuration/auth/overview
return {
endpoint: '[Your PowerSync instance URL or self-hosted endpoint]',
// Use a development token (see Authentication Setup https://docs.powersync.com/configuration/auth/development-tokens) to get up and running quickly
token: 'An authentication token'
};
}
async uploadData(database) {
// Implement uploadData to send local changes to your backend service.
// You can omit this method if you only want to sync data from the server to the client
// See example implementation here: https://docs.powersync.com/client-sdks/reference/javascript-web#3-integrate-with-your-backend
}
}
```
## Using PowerSync: CRUD functions
Once the PowerSync instance is configured, you can start using the SQLite DB functions.
The most commonly used CRUD functions to interact with your SQLite data are:
* [PowerSyncDatabase.get](/client-sdks/reference/javascript-web#fetching-a-single-item) - get (SELECT) a single row from a table.
* [PowerSyncDatabase.getAll](/client-sdks/reference/javascript-web#querying-items-powersync.getall) - get (SELECT) a set of rows from a table.
* [PowerSyncDatabase.watch](/client-sdks/reference/javascript-web#watching-queries-powersync.watch) - execute a read query every time source tables are modified.
* [PowerSyncDatabase.execute](/client-sdks/reference/javascript-web#mutations-powersync.execute) - execute a write (INSERT/UPDATE/DELETE) query.
### Fetching a Single Item
The [get](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#get) method executes a read-only (SELECT) query and returns a single result. It throws an exception if no result is found. Use [getOptional](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#getoptional) to return a single optional result (returns `null` if no result is found).
```js theme={null}
// Find a list item by ID
export const findList = async (id) => {
const result = await db.get('SELECT * FROM lists WHERE id = ?', [id]);
return result;
}
```
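When the row may legitimately be absent, `getOptional` avoids the exception. A minimal sketch (the `findListOptional` name is ours; `db` is the PowerSync database instance created in step 2):

```js theme={null}
// Find a list item by ID, returning null instead of throwing when absent
export const findListOptional = async (id) => {
  const result = await db.getOptional('SELECT * FROM lists WHERE id = ?', [id]);
  return result ?? null;
}
```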
### Querying Items (PowerSync.getAll)
The [getAll](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#getall) method returns a set of rows from a table.
```js theme={null}
// Get all list IDs
export const getLists = async () => {
const results = await db.getAll('SELECT * FROM lists');
return results;
}
```
### Watching Queries (PowerSync.watch)
The [watch](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#watch) method executes a read query whenever a change to a dependent table is made.
```javascript theme={null}
async function* pendingLists(): AsyncIterable<any[]> {
for await (const result of db.watch(
`SELECT * FROM lists WHERE state = ?`,
['pending']
)) {
yield result.rows?._array ?? [];
}
}
```
```javascript theme={null}
const pendingLists = (onResult: (lists: any[]) => void): void => {
db.watch(
'SELECT * FROM lists WHERE state = ?',
['pending'],
{
onResult: (result: any) => {
onResult(result.rows?._array ?? []);
}
}
);
}
```
For advanced watch query features like incremental updates and differential results, see [Live Queries / Watch Queries](/client-sdks/watch-queries).
### Mutations (PowerSync.execute, PowerSync.writeTransaction)
The [execute](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#execute) method can be used for executing single SQLite write statements.
```js theme={null}
// Delete a list item by ID
export const deleteList = async (id) => {
const result = await db.execute('DELETE FROM lists WHERE id = ?', [id]);
return result;
}
// OR: using a transaction
const deleteList = async (id) => {
await db.writeTransaction(async (tx) => {
// Delete associated todos
await tx.execute(`DELETE FROM ${TODOS_TABLE} WHERE list_id = ?`, [id]);
// Delete list record
await tx.execute(`DELETE FROM ${LISTS_TABLE} WHERE id = ?`, [id]);
});
};
```
## Configure Logging
```js theme={null}
import { createBaseLogger, LogLevel } from '@powersync/web';
const logger = createBaseLogger();
// Configure the logger to use the default console output
logger.useDefaults();
// Set the minimum log level to DEBUG to see all log messages
// Available levels: DEBUG, INFO, WARN, ERROR, TRACE, OFF
logger.setLevel(LogLevel.DEBUG);
```
Enable verbose output in the developer tools for detailed logs.
Additionally, the [WASQLiteDBAdapter](https://powersync-ja.github.io/powersync-js/web-sdk/classes/WASQLiteDBAdapter) opens SQLite connections inside a shared web worker. This worker can be inspected in Chrome by accessing:
```
chrome://inspect/#workers
```
## Additional Usage Examples
For more usage examples including accessing connection status, monitoring sync progress, and waiting for initial sync, see the [Usage Examples](/client-sdks/usage-examples) page.
## ORM Support
See [JavaScript ORM Support](/client-sdks/orms/js/overview) for details.
## Troubleshooting
See [Troubleshooting](/debugging/troubleshooting) for pointers to debug common issues.
## Supported Platforms
See [Supported Platforms -> JS/Web SDK](/resources/supported-platforms#js%2Fweb-sdk).
## Upgrading the SDK
Run one of the below commands in your project folder, depending on your package manager:
```bash theme={null}
npm upgrade @powersync/web @journeyapps/wa-sqlite
```
```bash theme={null}
yarn upgrade @powersync/web @journeyapps/wa-sqlite
```
```bash theme={null}
pnpm upgrade @powersync/web @journeyapps/wa-sqlite
```
## Developer Notes
### Connection Methods
This SDK supports two methods for streaming sync commands:
1. **WebSocket (Default)**
* The implementation leverages RSocket for handling reactive socket streams.
* Back-pressure is effectively managed through client-controlled command requests.
* Sync commands are transmitted efficiently as BSON (binary) documents.
* This method is **recommended** since it will support the future [BLOB column support](https://roadmap.powersync.com/c/88-support-for-blob-column-types) feature.
2. **HTTP Streaming (Legacy)**
* This is the original implementation method.
* This method will not support the future BLOB column feature.
By default, the `PowerSyncDatabase.connect()` method uses WebSocket. You can optionally specify the `connectionMethod` to override this:
```js theme={null}
// WebSocket (default)
powerSync.connect(connector);
// HTTP Streaming
powerSync.connect(connector, { connectionMethod: SyncStreamConnectionMethod.HTTP });
```
### SQLite Virtual File Systems
This SDK supports multiple Virtual File Systems (VFS), responsible for storing the local SQLite database:
#### 1. IDBBatchAtomicVFS (Default)
* This system utilizes IndexedDB as its underlying storage mechanism.
* Multiple tabs are fully supported across most modern browsers.
* Users may experience stability issues when using Safari. For example, the `RangeError: Maximum call stack size exceeded` error. See [Troubleshooting](/debugging/troubleshooting#rangeerror-maximum-call-stack-size-exceeded-on-ios-or-safari) for more details.
#### 2. OPFS-based Alternatives
PowerSync supports two OPFS (Origin Private File System) implementations that generally offer improved performance:
##### OPFSCoopSyncVFS (Recommended)
* This implementation provides comprehensive multi-tab support across all major browsers.
* It offers the most reliable compatibility with Safari and Safari iOS.
* Example configuration:
```js theme={null}
import { PowerSyncDatabase, WASQLiteOpenFactory, WASQLiteVFS } from '@powersync/web';
export const db = new PowerSyncDatabase({
schema: AppSchema,
database: new WASQLiteOpenFactory({
dbFilename: 'exampleVFS.db',
vfs: WASQLiteVFS.OPFSCoopSyncVFS,
flags: {
enableMultiTabs: typeof SharedWorker !== 'undefined'
}
}),
flags: {
enableMultiTabs: typeof SharedWorker !== 'undefined'
}
});
```
##### AccessHandlePoolVFS
* This implementation delivers optimal performance for single-tab applications.
* The system is not designed to handle multiple tab scenarios.
* The configuration is similar to `OPFSCoopSyncVFS`, but requires using `WASQLiteVFS.AccessHandlePoolVFS`.
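For reference, a configuration sketch mirroring the `OPFSCoopSyncVFS` example above with the VFS swapped out (multi-tab flags are omitted since this VFS is single-tab only; the filename is illustrative):

```js theme={null}
import { PowerSyncDatabase, WASQLiteOpenFactory, WASQLiteVFS } from '@powersync/web';
import { AppSchema } from './AppSchema';

export const db = new PowerSyncDatabase({
  schema: AppSchema,
  database: new WASQLiteOpenFactory({
    dbFilename: 'example.db',
    vfs: WASQLiteVFS.AccessHandlePoolVFS
  })
});
```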
#### VFS Compatibility Matrix
| VFS Type | Multi-Tab Support (Standard Browsers) | Multi-Tab Support (Safari/iOS) | Notes |
| ------------------- | ------------------------------------- | ------------------------------ | ------------------------------------- |
| IDBBatchAtomicVFS | ✅ | ❌ | Default, some Safari stability issues |
| OPFSCoopSyncVFS | ✅ | ✅ | Recommended for multi-tab support |
| AccessHandlePoolVFS | ❌ | ❌ | Best for single-tab applications |
**Note**: There are known issues with OPFS when using Safari's incognito mode.
### Managing OPFS Storage
Unlike IndexedDB, OPFS storage cannot be managed through browser developer tools. The following utility functions can help you manage OPFS storage programmatically:
```js theme={null}
// Clear all OPFS storage
async function purgeVFS() {
await powerSync.disconnect();
await powerSync.close();
const root = await navigator.storage.getDirectory();
await new Promise(resolve => setTimeout(resolve, 1)); // Allow .db-wal to become deletable
for await (const [name, entry] of root.entries()) {
try {
if (entry.kind === 'file') {
await root.removeEntry(name);
} else if (entry.kind === 'directory') {
await root.removeEntry(name, { recursive: true });
}
} catch (err) {
console.error(`Failed to delete ${entry.kind}: ${name}`, err);
}
}
}
// List OPFS entries
async function listVfsEntries() {
const root = await navigator.storage.getDirectory();
for await (const [name, entry] of root.entries()) {
console.log(`${entry.kind}: ${name}`);
}
}
```
### Multiple Tab Support
* Multiple tab support is not currently available on Android.
* For Safari, use the [`OPFSCoopSyncVFS`](/client-sdks/reference/javascript-web#sqlite-virtual-file-systems) virtual file system to ensure stable multi-tab functionality.
* If you encounter a `RangeError: Maximum call stack size exceeded` error, see [Troubleshooting](/debugging/troubleshooting#rangeerror-maximum-call-stack-size-exceeded-on-ios-or-safari) for solutions.
Using PowerSync between multiple tabs is supported on some web browsers. Multiple tab support relies on shared web workers for database and sync streaming operations. When enabled, shared web workers named `shared-DB-worker-[dbFileName]` and `shared-sync-[dbFileName]` will be created.
#### `shared-DB-worker-[dbFileName]`
The shared database worker will ensure writes to the database will instantly be available between tabs.
#### `shared-sync-[dbFileName]`
The shared sync worker connects directly to the PowerSync backend instance and applies changes to the database. Note that the shared sync worker calls the `fetchCredentials` and `uploadData` methods of the most recently opened available tab. When that tab is closed, this responsibility shifts to the previously opened tab.
Currently, using the SDK in multiple tabs without enabling the [enableMultiTabs](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/web/src/db/adapters/web-sql-flags.ts#L23) flag will spawn a standard web worker per tab for DB operations. These workers are safe to operate on the DB concurrently, however changes from one tab may not update watches on other tabs. Only one tab can sync from the PowerSync instance at a time. The sync status will not be shared between tabs, only the oldest tab will connect and display the latest sync status.
Support is enabled by default if available. This can be disabled as below:
```js theme={null}
export const db = new PowerSyncDatabase({
schema: AppSchema,
database: {
dbFilename: 'my_app_db.sqlite'
},
flags: {
/**
* Multiple tab support is enabled by default if available.
* This can be disabled by setting this flag to false.
*/
enableMultiTabs: false
}
});
```
### Using PowerSyncDatabase Flags
This guide provides an overview of the customizable flags available for the `PowerSyncDatabase` in the JavaScript Web SDK. These flags allow you to enable or disable specific features to suit your application's requirements.
#### Configuring Flags
You can configure flags during the initialization of the `PowerSyncDatabase`. Flags can be set using the `flags` property, which allows you to enable or disable specific functionalities.
```javascript theme={null}
import { PowerSyncDatabase, resolveWebPowerSyncFlags, WebPowerSyncFlags } from '@powersync/web';
import { AppSchema } from '@/library/powersync/AppSchema';
// Define custom flags
const customFlags: WebPowerSyncFlags = resolveWebPowerSyncFlags({
enableMultiTabs: true,
broadcastLogs: true,
disableSSRWarning: false,
ssrMode: false,
useWebWorker: true,
});
// Create the PowerSync database instance
export const db = new PowerSyncDatabase({
schema: AppSchema,
database: {
dbFilename: 'example.db',
},
flags: customFlags,
});
```
#### Available Flags
**`enableMultiTabs`** — default: `true`
Enables support for multiple tabs using shared web workers. When enabled, multiple tabs can interact with the same database and sync data seamlessly.
**`broadcastLogs`** — default: `false`
Enables the broadcasting of logs for debugging purposes. This flag helps monitor shared worker logs in a multi-tab environment.
**`disableSSRWarning`** — default: `false`
Disables warnings when running in SSR (Server-Side Rendering) mode.
**`ssrMode`** — default: `false`
Enables SSR mode. In this mode, only empty query results will be returned, and syncing with the backend is disabled.
**`useWebWorker`** — default: `true`
Enables the use of web workers for database operations. Disabling this flag also disables multi-tab support.
#### Flag Behavior
**Example 1: Multi-Tab Support**
By default, multi-tab support is enabled if supported by the browser. To explicitly disable this feature:
```javascript theme={null}
export const db = new PowerSyncDatabase({
schema: AppSchema,
database: {
dbFilename: 'my_app_db.sqlite',
},
flags: {
enableMultiTabs: false,
},
});
```
When disabled, each tab will use independent workers, and changes in one tab will not automatically propagate to others.
**Example 2: SSR Mode**
To enable SSR mode and suppress warnings:
```javascript theme={null}
export const db = new PowerSyncDatabase({
schema: AppSchema,
database: {
dbFilename: 'my_app_db.sqlite',
},
flags: {
ssrMode: true,
disableSSRWarning: true,
},
});
```
**Example 3: Verbose Debugging with Broadcast Logs**
To enable detailed logging for debugging:
```javascript theme={null}
export const db = new PowerSyncDatabase({
schema: AppSchema,
database: {
dbFilename: 'my_app_db.sqlite',
},
flags: {
broadcastLogs: true,
},
});
```
Logs will include detailed insights into database and sync operations.
#### Recommendations
1. **Set `enableMultiTabs`** to `true` if your application requires seamless data sharing across multiple tabs.
2. **Set `useWebWorker`** to `true` for efficient database operations using web workers.
3. **Set `broadcastLogs`** to `true` during development to troubleshoot and monitor database and sync operations.
4. **Set `disableSSRWarning`** to `true` when running in SSR mode to avoid unnecessary console warnings.
5. **Test combinations** of flags to validate their behavior in your application's specific use case.
# API Reference
Source: https://docs.powersync.com/client-sdks/reference/javascript-web-api
# Kotlin SDK
Source: https://docs.powersync.com/client-sdks/reference/kotlin
Full SDK guide for using PowerSync in Kotlin clients
* The PowerSync Kotlin SDK is distributed via [Maven Central](https://central.sonatype.com/artifact/com.powersync/core)
* Source code: refer to the [`powersync-kotlin`](https://github.com/powersync-ja/powersync-kotlin) repo on GitHub
* Full API reference for the SDK
* Gallery of example projects/demo apps built with Kotlin and PowerSync
* Changelog for the SDK
### SDK Features
* **Real-time streaming of database changes**: Changes made by one user are instantly streamed to all other users with access to that data. This keeps clients automatically in sync without manual polling or refresh logic.
* **Direct access to a local SQLite database**: Data is stored locally, so apps can read and write instantly without network calls. This enables offline support and faster user interactions.
* **Asynchronous background execution**: The SDK performs database operations in the background to avoid blocking the application’s main thread. This means that apps stay responsive, even during heavy data activity.
* **Query subscriptions for live updates**: The SDK supports query subscriptions that automatically push real-time updates to client applications as data changes, keeping your UI reactive and up to date.
* **Automatic schema management**: PowerSync syncs schemaless data and applies a client-defined schema using SQLite views. This architecture means that PowerSync SDKs can handle schema changes gracefully without requiring explicit migrations on the client-side.
## Installation
Add the [PowerSync SDK](https://central.sonatype.com/artifact/com.powersync/core) to your project by adding the following to your Gradle configuration:
```toml gradle/libs.versions.toml theme={null}
[versions]
# Please check the latest version at https://github.com/powersync-ja/powersync-kotlin/releases/
powersync = "1.10.0"
[libraries]
powersync-core = { module = "com.powersync:core", version.ref = "powersync" }
powersync-integration-supabase = { module = "com.powersync:connector-supabase", version.ref = "powersync" }
```
```Kotlin build.gradle.kts icon="https://mintcdn.com/powersync/GTJdSKFSfUc2Sxtc/logo/gradle.svg?fit=max&auto=format&n=GTJdSKFSfUc2Sxtc&q=85&s=bb14bd89bac7520f103a2ad2abc17053" theme={null}
kotlin {
//...
sourceSets {
commonMain.dependencies {
implementation(libs.powersync.core)
// If you want to use the Supabase Connector, also add the following:
implementation(libs.powersync.integration.supabase)
}
//...
}
}
```
```Kotlin build.gradle.kts icon="https://mintcdn.com/powersync/GTJdSKFSfUc2Sxtc/logo/gradle.svg?fit=max&auto=format&n=GTJdSKFSfUc2Sxtc&q=85&s=bb14bd89bac7520f103a2ad2abc17053" theme={null}
kotlin {
//...
sourceSets {
commonMain.dependencies {
implementation("com.powersync:core:$powersyncVersion")
// If you want to use the Supabase Connector, also add the following:
implementation("com.powersync:connector-supabase:$powersyncVersion")
}
//...
}
}
```
In a Kotlin Multiplatform project targeting iOS, macOS, tvOS or watchOS, you also need to
install the PowerSync SQLite extension.
The best way to do that depends on how you [integrate Kotlin](https://kotlinlang.org/docs/multiplatform/multiplatform-ios-integration-overview.html) into the Xcode project.
With the [direct integration](https://kotlinlang.org/docs/multiplatform/multiplatform-direct-integration.html), you can add the SQLite extension as a dependency
in Xcode. In your Xcode project settings, under "Package Dependencies", add a package and use
`https://github.com/powersync-ja/powersync-sqlite-core-swift.git` as the package URL.
Use a version dependency, starting with the [latest version](https://github.com/powersync-ja/powersync-sqlite-core-swift/releases).
If you have an existing `Package.swift` file, depend on the SQLite extension like this:
```Swift Package.swift theme={null}
dependencies: [
.package(
url: "https://github.com/powersync-ja/powersync-sqlite-core-swift.git",
// Refer to github.com/powersync-ja/powersync-sqlite-core-swift/releases for the latest version.
exact: "0.4.11"
)
]
```
Note that CocoaPods will become read-only in late 2026, and we won't be able to update the
SQLite extension through CocoaPods afterwards.
Add the following to the `cocoapods` config in your `build.gradle.kts`:
```Kotlin theme={null}
cocoapods {
//...
pod("powersync-sqlite-core") {
linkOnly = true
}
framework {
isStatic = true
export("com.powersync:core")
}
//...
}
```
The `linkOnly = true` attribute and `isStatic = true` framework setting ensure that the `powersync-sqlite-core` binaries are statically linked.
For Android and JVM targets, the extension is embedded in the SDK and doesn't need to be installed manually.
**Supported platforms**
* PowerSync supports Android, JVM and Apple (iOS, macOS, tvOS, watchOS) targets through Kotlin Multiplatform.
* On the JVM, the following platforms are supported: Linux AArch64, Linux X64, macOS AArch64, macOS X64, Windows X64.
## Getting Started
**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Streams (or legacy Sync Rules) (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
### 1. Define the Client-Side Schema
The first step is to define the client-side schema: the schema for the managed SQLite database exposed by the PowerSync Client SDKs, which your app can read from and write to. The client-side schema is typically derived from your backend source database schema and your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)), but can also include other tables such as local-only tables. Note that schema migrations are not required on the SQLite database due to the schemaless nature of the [PowerSync protocol](/architecture/powersync-protocol): schemaless data is synced to the client-side SQLite database, and the client-side schema is then applied to that data using *SQLite views* to allow for structured querying of the data. The schema is applied when the local PowerSync database is constructed (as we'll show in the next step).
**Generate schema automatically**
In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and click the **Connect** button in the top bar to generate the client-side schema in your preferred language. The schema will be generated based on your Sync Rules.
Similar functionality exists in the [CLI](/tools/cli).
**Note:** The generated schema will not include an `id` column, as the client SDK automatically creates an `id` column of type `text`. Consequently, it is not necessary to specify an `id` column in your schema. For additional information on IDs, refer to [Client ID](/sync/advanced/client-id).
The types available are `text`, `integer` and `real`. These should map directly to the values produced by your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
**Example**:
```kotlin theme={null}
// AppSchema.kt
import com.powersync.db.schema.Column
import com.powersync.db.schema.Index
import com.powersync.db.schema.IndexedColumn
import com.powersync.db.schema.Schema
import com.powersync.db.schema.Table
val AppSchema: Schema = Schema(
listOf(
Table(
name = "todos",
columns = listOf(
Column.text("list_id"),
Column.text("created_at"),
Column.text("completed_at"),
Column.text("description"),
Column.integer("completed"),
Column.text("created_by"),
Column.text("completed_by")
),
// Index to allow efficient lookup within a list
indexes = listOf(
Index("list", listOf(IndexedColumn.descending("list_id")))
)
),
Table(
name = "lists",
columns = listOf(
Column.text("created_at"),
Column.text("name"),
Column.text("owner_id")
)
)
)
)
```
**Note**: No need to declare a primary key `id` column, as PowerSync will automatically create this.
### 2. Instantiate the PowerSync Database
Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
**Example**:
a. Create a platform-specific `DatabaseDriverFactory`, which is used when constructing the `PowerSyncDatabase` to create the SQLite database driver.
```kotlin theme={null}
// commonMain
import com.powersync.DatabaseDriverFactory
import com.powersync.PowerSyncDatabase
// Android
val driverFactory = DatabaseDriverFactory(this)
// iOS & Desktop
val driverFactory = DatabaseDriverFactory()
```
b. Build a `PowerSyncDatabase` instance using the `DatabaseDriverFactory`. The schema you created in the previous step is provided as a parameter:
```kotlin theme={null}
// commonMain
val database = PowerSyncDatabase(
    factory = driverFactory, // The factory you defined above
    schema = AppSchema, // The schema you defined in the previous step
    dbFilename = "powersync.db"
    // logger = YourLogger // Optionally include your own Logger that must conform to the Kermit Logger
    // dbDirectory = "path/to/directory" // Optional. Directory path where the database file is located. This parameter is ignored for iOS.
)
```
c. Connect the `PowerSyncDatabase` to sync data with your backend:
**Note**: This section assumes you want to use PowerSync to sync your backend source database with SQLite in your app. If you only want to use PowerSync to manage your local SQLite database without sync, instantiate the PowerSync database without calling `connect()` and refer to our [Local-Only](/client-sdks/advanced/local-only-usage) guide.
```kotlin theme={null}
// commonMain
// Uses the backend connector that will be created in the next step
database.connect(MyConnector())
```
**Special case: Compose Multiplatform**
The artifact `com.powersync:powersync-compose` provides a simpler API:
```kotlin theme={null}
// commonMain
val database = rememberPowerSyncDatabase(schema)
remember {
database.connect(MyConnector())
}
```
### 3. Integrate with your Backend
Create a connector to integrate with your backend. The PowerSync backend connector provides the connection between your application backend and the PowerSync managed database. It is used to:
1. Retrieve an auth token to connect to the PowerSync instance.
2. Upload client-side writes to your backend API. Any writes that are made to the SQLite database are placed into an upload queue by the PowerSync Client SDK and automatically uploaded to your app backend (where you apply those changes to the backend source database) when the user is connected.
Accordingly, the connector must implement two methods:
1. `PowerSyncBackendConnector.fetchCredentials` - This method will be automatically invoked by the PowerSync Client SDK every couple of minutes to obtain authentication credentials. See [Authentication Setup](/configuration/auth/overview) for instructions on how the credentials should be generated.
2. `PowerSyncBackendConnector.uploadData` - This method will be automatically invoked by the PowerSync Client SDK whenever it needs to upload client-side writes to your app backend via your backend API. Therefore, in your implementation, you need to define how your backend API is called. See [Writing Client Changes](/handling-writes/writing-client-changes) for considerations on the app backend implementation.
**Example**:
```kotlin theme={null}
// PowerSync.kt
import com.powersync.PowerSyncDatabase
import com.powersync.connectors.PowerSyncBackendConnector
import com.powersync.connectors.PowerSyncCredentials
class MyConnector : PowerSyncBackendConnector() {
override suspend fun fetchCredentials(): PowerSyncCredentials {
// implement fetchCredentials to obtain the necessary credentials to connect to your backend
// See an example implementation in https://github.com/powersync-ja/powersync-kotlin/blob/main/connectors/supabase/src/commonMain/kotlin/com/powersync/connector/supabase/SupabaseConnector.kt
return PowerSyncCredentials(
    endpoint = "[Your PowerSync instance URL or self-hosted endpoint]",
    // Use a development token (see Authentication Setup https://docs.powersync.com/configuration/auth/development-tokens) to get up and running quickly
    token = "An authentication token"
)
}
override suspend fun uploadData(database: PowerSyncDatabase) {
// Implement uploadData to send local changes to your backend service
// You can omit this method if you only want to sync data from the server to the client
// See an example implementation under Usage Examples (sub-page)
// See https://docs.powersync.com/handling-writes/writing-client-changes for considerations.
}
}
```
**Note**: If you are using Supabase, you can use [SupabaseConnector.kt](https://github.com/powersync-ja/powersync-kotlin/blob/main/connectors/supabase/src/commonMain/kotlin/com/powersync/connector/supabase/SupabaseConnector.kt) as a starting point.
## Using PowerSync: CRUD functions
Once the PowerSync instance is configured, you can start using the SQLite DB functions.
The most commonly used CRUD functions to interact with your SQLite data are:
* [PowerSyncDatabase.get](/client-sdks/reference/kotlin#fetching-a-single-item) - get (SELECT) a single row from a table.
* [PowerSyncDatabase.getAll](/client-sdks/reference/kotlin#querying-items-powersync-getall) - get (SELECT) a set of rows from a table.
* [PowerSyncDatabase.watch](/client-sdks/reference/kotlin#watching-queries-powersync-watch) - execute a read query every time source tables are modified.
* [PowerSyncDatabase.execute](/client-sdks/reference/kotlin#mutations-powersync-execute) - execute a write (INSERT/UPDATE/DELETE) query.
### Fetching a Single Item
The `get` method executes a read-only (SELECT) query and returns a single result. It throws an exception if no result is found. Use `getOptional` to return a single optional result (returns `null` if no result is found).
```kotlin theme={null}
// Find a list item by ID
suspend fun find(id: Any): TodoList {
return database.get(
"SELECT * FROM lists WHERE id = ?",
listOf(id)
) { cursor ->
TodoList.fromCursor(cursor)
}
}
```
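When a missing row is an expected case rather than an error, the same query can be written with `getOptional`. A sketch mirroring the example above (the hypothetical `findOrNull` name is ours):

```kotlin
// Find a list item by ID, returning null instead of throwing when no row matches
suspend fun findOrNull(id: Any): TodoList? {
    return database.getOptional(
        "SELECT * FROM lists WHERE id = ?",
        listOf(id)
    ) { cursor ->
        TodoList.fromCursor(cursor)
    }
}
```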
### Querying Items (PowerSync.getAll)
The `getAll` method executes a read-only (SELECT) query and returns a set of rows.
```kotlin theme={null}
// Get all list IDs
suspend fun getLists(): List<String> {
return database.getAll(
"SELECT id FROM lists WHERE id IS NOT NULL"
) { cursor ->
cursor.getString("id")
}
}
```
### Watching Queries (PowerSync.watch)
The `watch` method executes a read query whenever a change to a dependent table is made.
```kotlin theme={null}
fun watchPendingLists(): Flow<List<ListItem>> =
db.watch(
"SELECT * FROM lists WHERE state = ?",
listOf("pending"),
) { cursor ->
ListItem(
id = cursor.getString("id"),
name = cursor.getString("name"),
)
}
```
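The returned `Flow` can be collected like any other Kotlin flow. A sketch of consuming it, assuming a `CoroutineScope` named `scope` (for example `viewModelScope`) is available:

```kotlin
// Collect the flow: the block runs again each time the query results change
scope.launch {
    watchPendingLists().collect { pending ->
        println("Pending lists: ${pending.size}")
    }
}
```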
### Mutations (PowerSync.execute)
The `execute` method executes a write query (INSERT, UPDATE, DELETE) and returns the results (if any).
```kotlin theme={null}
suspend fun insertCustomer(name: String, email: String) {
database.writeTransaction { tx ->
tx.execute(
sql = "INSERT INTO customers (id, name, email) VALUES (uuid(), ?, ?)",
parameters = listOf(name, email)
)
}
}
suspend fun updateCustomer(id: String, name: String, email: String) {
database.execute(
sql = "UPDATE customers SET name = ?, email = ? WHERE id = ?",
parameters = listOf(name, email, id)
)
}
suspend fun deleteCustomer(id: String? = null) {
// If no id is provided, delete the first customer in the database
val targetId =
id ?: database.getOptional(
sql = "SELECT id FROM customers LIMIT 1",
mapper = { cursor ->
cursor.getString(0)!!
}
) ?: return
database.writeTransaction { tx ->
tx.execute(
sql = "DELETE FROM customers WHERE id = ?",
parameters = listOf(targetId)
)
}
}
```
## Configure Logging
You can supply your own Logger, which must conform to the [Kermit Logger](https://kermit.touchlab.co/docs/), as shown here:
```kotlin theme={null}
PowerSyncDatabase(
    // ...
    logger = YourLogger
)
```
If you don't supply a Logger, a default Kermit Logger is created that logs at `Warn` severity and above in release builds and at `Verbose` severity in debug builds:
```kotlin theme={null}
val defaultLogger: Logger = Logger.apply {
    // Severity is set to Verbose in Debug and Warn in Release
    if (BuildConfig.isDebug) {
        setMinSeverity(Severity.Verbose)
    } else {
        setMinSeverity(Severity.Warn)
    }
}
```
You can then use the Logger anywhere in your code for debugging:
```kotlin theme={null}
import co.touchlab.kermit.Logger
Logger.i("Some information")
Logger.e("Some error")
...
```
## Additional Usage Examples
For more usage examples including accessing connection status, monitoring sync progress, and waiting for initial sync, see the [Usage Examples](/client-sdks/usage-examples) page.
## ORM / SQL Library Support
The PowerSync SDK for Kotlin can be used with the SQLDelight and Room libraries, making it easier to define and
run SQL queries.
For details, see the [SQL Library Support](/client-sdks/orms/kotlin/overview) page.
## Troubleshooting
See [Troubleshooting](/debugging/troubleshooting) for pointers to debug common issues.
## Supported Platforms
See [Supported Platforms -> Kotlin SDK](/resources/supported-platforms#kotlin-sdk).
## Upgrading the SDK
Update your project's Gradle file (`build.gradle.kts`) with the latest version of the [SDK](https://central.sonatype.com/artifact/com.powersync/core).
## Developer Notes
### Client Implementation
The PowerSync Service sends encoded instructions about data to sync to connected clients.
These instructions are decoded by our SDKs, and on Kotlin there are two implementations available for this:
1. **Kotlin (default)**
* This is the original implementation method, mostly implemented in Kotlin.
* Most upcoming features will not be ported to this implementation, and we intend to remove it eventually.
2. **Rust (currently experimental)**
* This is a newer implementation, mostly implemented in Rust but still using Kotlin for networking.
* Apart from newer features, this implementation is also more performant.
* We [encourage interested users to try it out](https://releases.powersync.com/announcements/improved-sync-performance-in-our-client-sdks)
and report feedback, as we intend to make it the default after a stabilization period.
To enable the Rust client, pass `SyncOptions(newClientImplementation = true)` as a second parameter when
[connecting](https://powersync-ja.github.io/powersync-kotlin/core/com.powersync/-power-sync-database/connect.html).
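For example, with the connector from earlier, enabling the Rust client could look like this (a sketch; see the linked `connect` API reference for the exact parameter list and the `SyncOptions` import):

```kotlin
// Opt in to the experimental Rust sync client when connecting
database.connect(
    MyConnector(),
    options = SyncOptions(newClientImplementation = true)
)
```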
# API Reference
Source: https://docs.powersync.com/client-sdks/reference/kotlin-api
# Node.js client SDK (Beta)
Source: https://docs.powersync.com/client-sdks/reference/node
SDK reference for using PowerSync in Node.js clients.
This page describes the PowerSync *client* SDK for Node.js.
If you're interested in using PowerSync for your Node.js backend, no special package is required.
Instead, follow our guides on [app backend setup](/configuration/app-backend/setup).
* This SDK is distributed via NPM
* Refer to `packages/node` in the `powersync-js` repo on GitHub
* Full API reference for the SDK
* Gallery of example projects/demo apps built with Node.js and PowerSync
* Changelog for the SDK
This SDK is currently in a [**beta** release](/resources/feature-status) and can be considered production-ready for tested use cases.
### SDK Features
* **Real-time streaming of database changes**: Changes made by one user are instantly streamed to all other users with access to that data. This keeps clients automatically in sync without manual polling or refresh logic.
* **Direct access to a local SQLite database**: Data is stored locally, so apps can read and write instantly without network calls. This enables offline support and faster user interactions.
* **Asynchronous background execution**: The SDK performs database operations in the background to avoid blocking the application’s main thread. This means that apps stay responsive, even during heavy data activity.
* **Query subscriptions for live updates**: The SDK supports query subscriptions that automatically push real-time updates to client applications as data changes, keeping your UI reactive and up to date.
* **Automatic schema management**: PowerSync syncs schemaless data and applies a client-defined schema using SQLite views. This architecture means that PowerSync SDKs can handle schema changes gracefully without requiring explicit migrations on the client-side.
## Quickstart
Add the [PowerSync Node NPM package](https://www.npmjs.com/package/@powersync/node) to your project:
```bash theme={null}
npm install @powersync/node
```
```bash theme={null}
yarn add @powersync/node
```
```bash theme={null}
pnpm install @powersync/node
```
**Install Peer Dependencies**
The PowerSync SDK for Node.js supports multiple drivers. More details are available under [Encryption and Custom SQLite Drivers](/client-sdks/reference/node#encryption-and-custom-sqlite-drivers). We currently recommend the `better-sqlite3` package for most users:
```bash theme={null}
npm install better-sqlite3
```
```bash theme={null}
yarn add better-sqlite3
```
```bash theme={null}
pnpm install better-sqlite3
```
Previous versions of the PowerSync SDK for Node.js used the `@powersync/better-sqlite3` fork as a
required peer dependency.
This is no longer recommended. After upgrading to `@powersync/node` version `0.12.0` or later, ensure
the old package is no longer installed by running `npm uninstall @powersync/better-sqlite3`.
**Common Installation Issues**
The `better-sqlite3` package requires native compilation, which depends on certain system tools.
Prebuilt binaries are available and used by default, but a compilation from source may be triggered depending on the Node.js
or Electron version used.
This compilation process is handled by `node-gyp` and may fail if required dependencies are missing or misconfigured.
Refer to the [PowerSync Node package README](https://www.npmjs.com/package/@powersync/node) for more details.
**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Streams (or legacy Sync Rules) (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
### 1. Define the Client-Side Schema
This refers to the schema for the managed SQLite database exposed by the PowerSync Client SDKs, which your app can read from and write to. The schema is applied when the database is instantiated (as we'll show in the next step) — no migrations are required.
You can use [this example](https://github.com/powersync-ja/powersync-js/blob/e5a57a539150f4bc174e109d3898b6e533de272f/demos/example-node/src/powersync.ts#L47-L77) as a reference when defining your schema.
**Generate schema automatically**
In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and click the **Connect** button in the top bar to generate the client-side schema in your preferred language. The schema will be generated based on your Sync Rules.
Similar functionality exists in the [CLI](/tools/cli).
**Note:** The generated schema will not include an `id` column, as the client SDK automatically creates an `id` column of type `text`. Consequently, it is not necessary to specify an `id` column in your schema. For additional information on IDs, refer to [Client ID](/sync/advanced/client-id).
The types available are `text`, `integer` and `real`. These should map directly to the values produced by your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
Select JavaScript and replace the suggested import with `@powersync/node`.
### 2. Instantiate the PowerSync Database
Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
**Example**:
```js theme={null}
import { PowerSyncDatabase } from '@powersync/node';
import { Connector } from './Connector';
import { AppSchema } from './Schema';
export const db = new PowerSyncDatabase({
// The schema you defined in the previous step
schema: AppSchema,
database: {
// Filename for the SQLite database — it's important to only instantiate one instance per file.
dbFilename: 'powersync.db',
// Optional. Directory where the database file is located.
// dbLocation: 'path/to/directory'
},
});
```
### 3. Integrate with your Backend
The PowerSync backend connector provides the connection between your application backend and the PowerSync client-side managed SQLite database. It is used to:
1. Retrieve an auth token to connect to the PowerSync instance.
2. Upload client-side writes to your backend API. Any writes that are made to the SQLite database are placed into an upload queue by the PowerSync Client SDK and automatically uploaded to your app backend (where you apply those changes to the backend source database) when the user is connected.
Accordingly, the connector must implement two methods:
1. [PowerSyncBackendConnector.fetchCredentials](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L16) - This method will be automatically invoked by the PowerSync Client SDK every couple of minutes to obtain authentication credentials. See [Authentication Setup](/configuration/auth/overview) for instructions on how the credentials should be generated.
2. [PowerSyncBackendConnector.uploadData](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L24) - This method will be automatically invoked by the PowerSync Client SDK whenever it needs to upload client-side writes to your app backend via your backend API. Therefore, in your implementation, you need to define how your backend API is called. See [Writing Client Changes](/handling-writes/writing-client-changes) for considerations on the app backend implementation.
**Example**:
```js theme={null}
import { UpdateType } from '@powersync/node';
export class Connector implements PowerSyncBackendConnector {
constructor() {
// set up a connection to your server for uploads
this.serverConnectionClient = TODO;
}
async fetchCredentials() {
// Implement fetchCredentials to obtain a JWT from your authentication service.
// See https://docs.powersync.com/configuration/auth/overview
return {
endpoint: '[Your PowerSync instance URL or self-hosted endpoint]',
// Use a development token (see Authentication Setup https://docs.powersync.com/configuration/auth/development-tokens) to get up and running quickly
token: 'An authentication token'
};
}
async uploadData(database) {
// Implement uploadData to send local changes to your backend service.
// You can omit this method if you only want to sync data from the database to the client
// See example implementation here: https://docs.powersync.com/client-sdks/reference/javascript-web#3-integrate-with-your-backend
}
}
```
With your database instantiated and your connector ready, call `connect()` to start syncing data with your backend:
```js theme={null}
await db.connect(new Connector());
await db.waitForFirstSync(); // Optional, to wait for a complete snapshot of data to be available
```
**Note**: This section assumes you want to use PowerSync to sync your backend source database with SQLite in your app. If you only want to use PowerSync to manage your local SQLite database without sync, instantiate the PowerSync database without calling `connect()` and refer to our [Local-Only](/client-sdks/advanced/local-only-usage) guide.
## Usage
After connecting the client database, it is ready to be used. The API to run queries and updates is identical to our
[JavaScript/Web SDK](/client-sdks/reference/javascript-web#using-powersync%3A-crud-functions):
```js theme={null}
// Use db.get() to fetch a single row:
console.log(await db.get('SELECT powersync_rs_version();'));
// Or db.getAll() to fetch all:
console.log(await db.getAll('SELECT * FROM lists;'));
// And db.execute for inserts, updates and deletes:
await db.execute(
"INSERT INTO lists (id, created_at, name, owner_id) VALUES (uuid(), datetime('now'), ?, uuid());",
['My new list']
);
```
### Watch Queries
The `db.watch()` method executes a read query whenever a change to a dependent table is made.
```javascript theme={null}
async function* pendingLists(): AsyncIterable<any[]> {
for await (const result of db.watch(
`SELECT * FROM lists WHERE state = ?`,
['pending']
)) {
yield result.rows?._array ?? [];
}
}
```
```javascript theme={null}
const pendingLists = (onResult: (lists: any[]) => void): void => {
db.watch(
'SELECT * FROM lists WHERE state = ?',
['pending'],
{
onResult: (result: any) => {
onResult(result.rows?._array ?? []);
}
}
);
}
```
For advanced watch query features like incremental updates and differential results, see [Live Queries / Watch Queries](/client-sdks/watch-queries).
PowerSync runs queries asynchronously on a background pool of workers and automatically configures WAL to allow a writer and multiple readers to operate in parallel.
## Configure Logging
```js theme={null}
import { createBaseLogger, LogLevel } from '@powersync/node';
const logger = createBaseLogger();
// Configure the logger to use the default console output
logger.useDefaults();
// Set the minimum log level to DEBUG to see all log messages
// Available levels: DEBUG, INFO, WARN, ERROR, TRACE, OFF
logger.setLevel(LogLevel.DEBUG);
```
Enable verbose output in the developer tools for detailed logs.
## Additional Usage Examples
For more usage examples including accessing connection status, monitoring sync progress, and waiting for initial sync, see the [Usage Examples](/client-sdks/usage-examples) page.
## ORM Support
See [JavaScript ORM Support](/client-sdks/orms/js/overview) for details.
## Troubleshooting
See [Troubleshooting](/debugging/troubleshooting) for pointers to debug common issues.
## Supported Platforms
See [Supported Platforms -> Node.js SDK](/resources/supported-platforms#node-js-sdk).
## Upgrading the SDK
Run the command below for your package manager in your project folder:
```bash theme={null}
npm upgrade @powersync/node
```
```bash theme={null}
yarn upgrade @powersync/node
```
```bash theme={null}
pnpm upgrade @powersync/node
```
## Encryption and Custom SQLite Drivers
The SDK has an optional dependency on `better-sqlite3` which is used as the default SQLite
driver for that package.
Because that dependency is optional, it can be replaced or removed to customize how SQLite
gets loaded. This section lists common options.
### Encryption
To encrypt databases managed by the PowerSync SDK for Node.js, replace the `better-sqlite3`
dependency with the [`better-sqlite3-multiple-ciphers`](https://www.npmjs.com/package/better-sqlite3-multiple-ciphers) fork.
That package has the same API as `better-sqlite3` while bundling [SQLite3MultipleCiphers](https://github.com/utelle/SQLite3MultipleCiphers)
instead of upstream SQLite.
The [node example](https://github.com/powersync-ja/powersync-js/tree/main/demos/example-node) in the PowerSync
repository can use both `better-sqlite3` and `better-sqlite3-multiple-ciphers` and may be a useful example here.
Because PowerSync attempts to dynamically load `better-sqlite3` at runtime, using a different package
requires patching the database worker. To do that, create a file (say `database.worker.js`) with the following
contents:
```Typescript theme={null}
// This worker uses bindings to sqlite3 multiple ciphers instead of the original better-sqlite3 worker.
import Database from 'better-sqlite3-multiple-ciphers';
import { startPowerSyncWorker } from '@powersync/node/worker.js';
async function resolveBetterSqlite3() {
return Database;
}
startPowerSyncWorker({ loadBetterSqlite3: resolveBetterSqlite3 });
```
When opening the database, instruct PowerSync to use the custom worker.
Also use the `initializeConnection` option to install an encryption key:
```Typescript theme={null}
const encryptionKey = 'todo: generate encryption key and store it safely';
const db = new PowerSyncDatabase({
schema: AppSchema,
database: {
dbFilename: 'app.db',
openWorker: (_, options) => {
return new Worker(new URL('./database.worker.js', import.meta.url), options);
},
initializeConnection: async (db) => {
if (encryptionKey.length) {
const escapedKey = encryptionKey.replaceAll("'", "''");
await db.execute(`pragma key = '${escapedKey}'`);
}
// Make sure the database is readable, this fails early if the key is wrong.
await db.execute('pragma user_version');
}
},
logger
});
```
If you're using a custom compilation toolchain, for instance because you're compiling from TypeScript
or are applying a bundler to your project, loading workers may require additional configuration on that
toolchain.
### `node:sqlite`
Recent versions of Node.js contain an [experimental SQLite API](https://nodejs.org/api/sqlite.html).
Using the builtin SQLite API can reduce code size and external native dependencies. To enable it,
remove your dependency on `better-sqlite3` and configure PowerSync to use the builtin APIs:
```JavaScript theme={null}
const database = new PowerSyncDatabase({
schema: AppSchema,
database: {
dbFilename: 'app.db',
dbLocation: directory,
// Use node:sqlite instead of better-sqlite3
implementation: { type: 'node:sqlite' }
}
});
```
There are stability issues when using PowerSync with this API, and it's not recommended outside of
testing purposes at the moment.
# API Reference
Source: https://docs.powersync.com/client-sdks/reference/node-api
# React Native & Expo SDK
Source: https://docs.powersync.com/client-sdks/reference/react-native-and-expo
Full SDK guide for using PowerSync in React Native clients
* This SDK is distributed via NPM
* Refer to `packages/react-native` in the powersync-js repo on GitHub
* Full API reference for the PowerSync SDK
* Gallery of example projects/demo apps built with React Native and PowerSync
* Changelog for the SDK
### SDK Features
* **Real-time streaming of database changes**: Changes made by one user are instantly streamed to all other users with access to that data. This keeps clients automatically in sync without manual polling or refresh logic.
* **Direct access to a local SQLite database**: Data is stored locally, so apps can read and write instantly without network calls. This enables offline support and faster user interactions.
* **Asynchronous background execution**: The SDK performs database operations in the background to avoid blocking the application’s main thread. This means that apps stay responsive, even during heavy data activity.
* **Query subscriptions for live updates**: The SDK supports query subscriptions that automatically push real-time updates to client applications as data changes, keeping your UI reactive and up to date.
* **Automatic schema management**: PowerSync syncs schemaless data and applies a client-defined schema using SQLite views. This architecture means that PowerSync SDKs can handle schema changes gracefully without requiring explicit migrations on the client-side.
## Using Hooks
A separate `@powersync/react` package is available containing React hooks for PowerSync. See its README for example code.
## Installation
Add the [PowerSync React Native NPM package](https://www.npmjs.com/package/@powersync/react-native) to your project:
```bash theme={null}
npx expo install @powersync/react-native
```
```bash theme={null}
yarn expo add @powersync/react-native
```
```bash theme={null}
pnpm expo install @powersync/react-native
```
**Install Peer Dependencies**
PowerSync requires a SQLite database adapter. Choose between:
[PowerSync OP-SQLite](https://www.npmjs.com/package/@powersync/op-sqlite) offers:
* Built-in encryption support via SQLCipher
* Smoother transition to React Native's New Architecture
```bash theme={null}
npx expo install @powersync/op-sqlite @op-engineering/op-sqlite
```
```bash theme={null}
yarn expo add @powersync/op-sqlite @op-engineering/op-sqlite
```
```bash theme={null}
pnpm expo install @powersync/op-sqlite @op-engineering/op-sqlite
```
The [@journeyapps/react-native-quick-sqlite](https://www.npmjs.com/package/@journeyapps/react-native-quick-sqlite) package is the original database adapter for React Native and therefore more battle-tested in production environments.
```bash theme={null}
npx expo install @journeyapps/react-native-quick-sqlite
```
```bash theme={null}
yarn expo add @journeyapps/react-native-quick-sqlite
```
```bash theme={null}
pnpm expo install @journeyapps/react-native-quick-sqlite
```
**iOS with `use_frameworks!`**
If your iOS project uses `use_frameworks!`, add the `@journeyapps/react-native-quick-sqlite` plugin to your `app.json` or `app.config.js` and configure the `staticLibrary` option:
```json theme={null}
{
"expo": {
"plugins": [
[
"@journeyapps/react-native-quick-sqlite",
{
"staticLibrary": true
}
]
]
}
}
```
This plugin automatically configures the necessary build settings for `react-native-quick-sqlite` to work with `use_frameworks!`.
**Using Expo Go?** Our native database adapters listed above (OP-SQLite and React Native Quick SQLite) are not compatible with Expo Go's sandbox environment. To run PowerSync with Expo Go, install our JavaScript-based adapter `@powersync/adapter-sql-js` instead. See details [here](/client-sdks/frameworks/expo-go-support).
**Polyfills and additional notes:**
* For async iterator support with watched queries, additional polyfills are required. See the [Babel plugins section](https://www.npmjs.com/package/@powersync/react-native#babel-plugins-watched-queries) in the README.
* When using the **OP-SQLite** package, we recommend adding this [metro config](https://github.com/powersync-ja/powersync-js/tree/main/packages/react-native#metro-config-optional)
to avoid build issues.
## Getting Started
**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Streams (or legacy Sync Rules) (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
### 1. Define the Client-Side Schema
The first step is to define the client-side schema, which refers to the schema for the managed SQLite database exposed by the PowerSync Client SDKs, that your app can read from and write to. The client-side schema is typically mainly derived from your backend source database schema and your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)), but can also include other tables such as local-only tables. Note that schema migrations are not required on the SQLite database due to the schemaless nature of the [PowerSync protocol](/architecture/powersync-protocol): schemaless data is synced to the client-side SQLite database, and the client-side schema is then applied to that data using *SQLite views* to allow for structured querying of the data. The schema is applied when the local PowerSync database is constructed (as we'll show in the next step).
**Generate schema automatically**
In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and click the **Connect** button in the top bar to generate the client-side schema in your preferred language. The schema will be generated based on your Sync Rules.
Similar functionality exists in the [CLI](/tools/cli).
**Note:** The generated schema will not include an `id` column, as the client SDK automatically creates an `id` column of type `text`. Consequently, it is not necessary to specify an `id` column in your schema. For additional information on IDs, refer to [Client ID](/sync/advanced/client-id).
The types available are `text`, `integer` and `real`. These should map directly to the values produced by your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
**Example**:
**Note**: There is no need to declare a primary key `id` column, as PowerSync will automatically create it.
```typescript powersync/AppSchema.ts theme={null}
import { column, Schema, Table } from '@powersync/react-native';
const lists = new Table({
created_at: column.text,
name: column.text,
owner_id: column.text
});
const todos = new Table(
{
list_id: column.text,
created_at: column.text,
completed_at: column.text,
description: column.text,
created_by: column.text,
completed_by: column.text,
completed: column.integer
},
{ indexes: { list: ['list_id'] } }
);
export const AppSchema = new Schema({
todos,
lists
});
// For types
export type Database = (typeof AppSchema)['types'];
export type TodoRecord = Database['todos'];
// OR:
// export type Todo = RowType<typeof todos>;
export type ListRecord = Database['lists'];
```
### 2. Instantiate the PowerSync Database
Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
**Example**:
For getting started and testing PowerSync, use the [@journeyapps/react-native-quick-sqlite](https://github.com/powersync-ja/react-native-quick-sqlite) package.
By default, this SDK requires `@journeyapps/react-native-quick-sqlite` as a peer dependency.
```typescript powersync/system.ts theme={null}
import { PowerSyncDatabase } from '@powersync/react-native';
import { AppSchema } from './Schema';
export const powersync = new PowerSyncDatabase({
// The schema you defined in the previous step
schema: AppSchema,
// For other options see,
// https://powersync-ja.github.io/powersync-js/web-sdk/globals#powersyncopenfactoryoptions
database: {
// Filename for the SQLite database — it's important to only instantiate one instance per file.
// For other database options see,
// https://powersync-ja.github.io/powersync-js/web-sdk/globals#sqlopenoptions
dbFilename: 'powersync.db'
}
});
```
If you want to include encryption with SQLCipher, use the [@powersync/op-sqlite](https://www.npmjs.com/package/@powersync/op-sqlite) package.
If you've already installed `@journeyapps/react-native-quick-sqlite`, you will have to uninstall it and then install both `@powersync/op-sqlite` and its peer dependency `@op-engineering/op-sqlite` to use this.
```typescript powersync/system.ts theme={null}
import { PowerSyncDatabase } from '@powersync/react-native';
import { OPSqliteOpenFactory } from '@powersync/op-sqlite'; // Add this import
import { AppSchema } from './Schema';
// Create the factory
const opSqlite = new OPSqliteOpenFactory({
dbFilename: 'powersync.db'
});
export const powersync = new PowerSyncDatabase({
// For other options see,
schema: AppSchema,
// Override the default database
database: opSqlite
});
```
**SDK versions lower than 1.8.0**
In SDK versions lower than 1.8.0, you will need to use the deprecated [RNQSPowerSyncDatabaseOpenFactory](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/RNQSPowerSyncDatabaseOpenFactory) syntax to instantiate the database.
Once you've instantiated your PowerSync database, call the [connect()](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/AbstractPowerSyncDatabase#connect) method to sync data with your backend.
**Note**: This section assumes you want to use PowerSync to sync your backend source database with SQLite in your app. If you only want to use PowerSync to manage your local SQLite database without sync, instantiate the PowerSync database without calling `connect()` and refer to our [Local-Only](/client-sdks/advanced/local-only-usage) guide.
```typescript powersync/system.ts theme={null}
import { Connector } from './Connector';
export const setupPowerSync = async () => {
// Uses the backend connector that will be created in the next section
const connector = new Connector();
powersync.connect(connector);
};
```
### 3. Integrate with your Backend
The PowerSync backend connector provides the connection between your application backend and the PowerSync client-side managed SQLite database. It is used to:
1. Retrieve an auth token to connect to the PowerSync instance.
2. Upload client-side writes to your backend API. Any writes that are made to the SQLite database are placed into an upload queue by the PowerSync Client SDK and automatically uploaded to your app backend (where you apply those changes to the backend source database) when the user is connected.
Accordingly, the connector must implement two methods:
1. [PowerSyncBackendConnector.fetchCredentials](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L16) - This is called every couple of minutes and is used to obtain credentials for your app backend API. -> See [Authentication Setup](/configuration/auth/overview) for instructions on how the credentials should be generated.
2. [PowerSyncBackendConnector.uploadData](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L24) - Use this to upload client-side changes to your app backend.
-> See [Writing Client Changes](/handling-writes/writing-client-changes) for considerations on the app backend implementation.
**Example**:
```typescript powersync/Connector.ts theme={null}
import { PowerSyncBackendConnector, AbstractPowerSyncDatabase, UpdateType } from "@powersync/react-native"
export class Connector implements PowerSyncBackendConnector {
/**
* Implement fetchCredentials to obtain a JWT from your authentication service.
* See https://docs.powersync.com/configuration/auth/custom
*/
async fetchCredentials() {
return {
// The PowerSync instance URL or self-hosted endpoint
endpoint: 'https://xxxxxx.powersync.journeyapps.com',
/**
* To get started quickly, use a development token. See Authentication Setup:
* https://docs.powersync.com/configuration/auth/development-tokens
*/
token: 'An authentication token'
};
}
/**
* Implement uploadData to send local changes to your backend service.
* You can omit this method if you only want to sync data from the database to the client.
* See an example implementation here: https://docs.powersync.com/client-sdks/reference/react-native-and-expo#3-integrate-with-your-backend
*/
async uploadData(database: AbstractPowerSyncDatabase) {
/**
* For batched crud transactions, use database.getCrudBatch(n);
* https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/SqliteBucketStorage#getcrudbatch
*/
const transaction = await database.getNextCrudTransaction();
if (!transaction) {
return;
}
for (const op of transaction.crud) {
// The data that needs to be changed in the remote db
const record = { ...op.opData, id: op.id };
switch (op.op) {
case UpdateType.PUT:
// TODO: Instruct your backend API to CREATE a record
break;
case UpdateType.PATCH:
// TODO: Instruct your backend API to PATCH a record
break;
case UpdateType.DELETE:
// TODO: Instruct your backend API to DELETE a record
break;
}
}
// Completes the transaction and moves onto the next one
await transaction.complete();
}
}
```
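The `switch` in `uploadData` above is plain mapping logic that can be unit-tested without the SDK. The sketch below assumes a hypothetical REST backend (the paths, verbs, and the `toRequest` helper are illustrative, not part of PowerSync), and the op types are matched as plain strings here:

```javascript theme={null}
// Map a queued CRUD op to an HTTP request descriptor for a hypothetical
// REST backend. In the SDK you would compare against UpdateType.PUT etc.
function toRequest(op) {
  const record = { ...op.opData, id: op.id };
  switch (op.op) {
    case 'PUT': // a new row was created locally
      return { method: 'POST', path: `/${op.table}`, body: record };
    case 'PATCH': // an existing row was updated locally
      return { method: 'PATCH', path: `/${op.table}/${op.id}`, body: record };
    case 'DELETE': // a row was deleted locally
      return { method: 'DELETE', path: `/${op.table}/${op.id}` };
    default:
      throw new Error(`Unknown op type: ${op.op}`);
  }
}

console.log(toRequest({ op: 'PUT', table: 'todos', id: '1', opData: { description: 'x' } }).method); // POST
```

Keeping this mapping in a pure function makes it easy to cover all three op types in tests before wiring it into the connector.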
## Using PowerSync: CRUD functions
Once the PowerSync instance is configured, you can start using the SQLite DB functions.
The most commonly used CRUD functions to interact with your SQLite data are:
* [PowerSyncDatabase.get](#fetching-a-single-item) - get (`SELECT`) a single row from a table.
* [PowerSyncDatabase.getAll](#querying-items-powersync-getall) - get (`SELECT`) a set of rows from a table.
* [PowerSyncDatabase.watch](#watching-queries-powersync-watch) - execute a read query every time source tables are modified.
* [PowerSyncDatabase.execute](#mutations-powersync-execute) - execute a write (`INSERT`/`UPDATE`/`DELETE`) query.
### Fetching a Single Item
The [get](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#get) method executes a read-only (`SELECT`) query and returns a single result. It throws an exception if no result is found. Use [getOptional](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#getoptional) to return a single optional result (returns `null` if no result is found).
```js TodoItemWidget.jsx theme={null}
import React from 'react';
import { Text } from 'react-native';
import { powersync } from "../powersync/system";
export const TodoItemWidget = ({id}) => {
  const [todoItem, setTodoItem] = React.useState(null);
  const [error, setError] = React.useState(null);
  React.useEffect(() => {
    // .get returns the first item of the result. Throws an exception if no result is found.
    powersync.get('SELECT * from todos WHERE id = ?', [id])
      .then(setTodoItem)
      .catch(ex => setError(ex.message))
  }, []);
  return <Text>{error || todoItem?.description}</Text>
}
```
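The difference between `get` and `getOptional` is only their behavior on an empty result set. A self-contained sketch of that contract (not the SDK implementation) over an in-memory array:

```javascript theme={null}
// Return the first row, or null when the result set is empty (getOptional).
function getOptional(rows) {
  return rows.length ? rows[0] : null;
}

// Return the first row, throwing when the result set is empty (get).
function get(rows) {
  const row = getOptional(rows);
  if (row === null) throw new Error('Result set is empty');
  return row;
}

console.log(getOptional([])); // null
console.log(get([{ id: '1', description: 'buy milk' }]).description); // buy milk
```

Prefer `getOptional` when a missing row is an expected state rather than an error.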
### Querying Items (`PowerSync.getAll`)
The [getAll](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#getall) method returns a set of rows from a table.
```js ListsWidget.jsx theme={null}
import React from 'react';
import { FlatList, Text } from 'react-native';
import { powersync } from "../powersync/system";
export const ListsWidget = () => {
  const [lists, setLists] = React.useState([]);
  React.useEffect(() => {
    powersync.getAll('SELECT * from lists').then(setLists)
  }, []);
  return (
    <FlatList
      data={lists.map(list => ({key: list.id, ...list}))}
      renderItem={({item}) => <Text>{item.name}</Text>}
    />
  )
}
```
### Watching Queries (`PowerSync.watch`)
The [watch](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#watch) method executes a read query whenever a change to a dependent table is made. It can be used with an `AsyncGenerator`, or with a callback.
```javascript theme={null}
async function* pendingLists(): AsyncIterable<any[]> {
for await (const result of db.watch(
`SELECT * FROM lists WHERE state = ?`,
['pending']
)) {
yield result.rows?._array ?? [];
}
}
```
```javascript theme={null}
const pendingLists = (onResult: (lists: any[]) => void): void => {
db.watch(
'SELECT * FROM lists WHERE state = ?',
['pending'],
{
onResult: (result: any) => {
onResult(result.rows?._array ?? []);
}
}
);
}
```
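The async-iterator form plugs into an ordinary `for await` loop. In the runnable sketch below, the watch stream is simulated with a plain async generator so the consumption pattern stands alone; in a real app the stream would come from `db.watch` as above:

```javascript theme={null}
// Simulated watch stream: each yield is the full result set after a change.
async function* fakeWatch() {
  yield [{ id: '1', state: 'pending' }];
  yield [{ id: '1', state: 'pending' }, { id: '2', state: 'pending' }];
}

// Consume the stream, keeping only the latest result set.
async function collectLatest(stream) {
  let latest = [];
  for await (const rows of stream) {
    latest = rows; // in a real app, push this into component state
  }
  return latest;
}

collectLatest(fakeWatch()).then(rows => console.log(rows.length)); // 2
```

Because each yield replaces the previous result set, consumers only ever render the latest state of the query.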
For advanced watch query features like incremental updates and differential results, see [Live Queries / Watch Queries](/client-sdks/watch-queries).
### Mutations (`PowerSync.execute`)
The [execute](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#execute) method can be used for executing single SQLite write statements.
```js ListsWidget.jsx theme={null}
import React from 'react';
import { Alert, Button, FlatList, Text, View } from 'react-native';
import { powersync } from "../powersync/system";
export const ListsWidget = () => {
  // Populate lists with one of the methods listed above
  const [lists, setLists] = React.useState([]);
  return (
    <View>
      <FlatList
        data={lists.map(list => ({key: list.id, ...list}))}
        renderItem={({item}) => (
          <View>
            <Text>{item.name}</Text>
            <Button
              title="Delete"
              onPress={async () => {
                try {
                  await powersync.execute(`DELETE FROM lists WHERE id = ?`, [item.id])
                  // Watched queries should automatically reload after mutation
                } catch (ex) {
                  Alert.alert('Error', ex.message)
                }
              }}
            />
          </View>
        )}
      />
      <Button
        title="Create List"
        onPress={async () => {
          try {
            await powersync.execute('INSERT INTO lists (id, created_at, name, owner_id) VALUES (uuid(), datetime(), ?, ?) RETURNING *', [
              'A list name',
              "[The user's uuid]"
            ])
            // Watched queries should automatically reload after mutation
          } catch (ex) {
            Alert.alert('Error', ex.message)
          }
        }}
      />
    </View>
  )
}
```
## Configure Logging
```js theme={null}
import { createBaseLogger, LogLevel } from '@powersync/react-native';
const logger = createBaseLogger();
// Configure the logger to use the default console output
logger.useDefaults();
// Set the minimum log level to DEBUG to see all log messages
// Available levels: DEBUG, INFO, WARN, ERROR, TRACE, OFF
logger.setLevel(LogLevel.DEBUG);
```
Enable verbose output in the developer tools for detailed logs.
## Additional Usage Examples
For more usage examples including accessing connection status, monitoring sync progress, and waiting for initial sync, see the [Usage Examples](/client-sdks/usage-examples) page.
## ORM Support
See [JavaScript ORM Support](/client-sdks/orms/js/overview) for details.
## Troubleshooting
See [Troubleshooting](/debugging/troubleshooting) for pointers to debug common issues.
## Supported Platforms
See [Supported Platforms -> React Native SDK](/resources/supported-platforms#react-native-sdk).
## Upgrading the SDK
Run the below command in your project folder:
```bash theme={null}
npm upgrade @powersync/react-native @journeyapps/react-native-quick-sqlite
```
```bash theme={null}
yarn upgrade @powersync/react-native @journeyapps/react-native-quick-sqlite
```
```bash theme={null}
pnpm upgrade @powersync/react-native @journeyapps/react-native-quick-sqlite
```
## Developer Notes
### Connection Methods
This SDK supports two methods for streaming sync commands:
1. **WebSocket (Default)**
* The implementation leverages RSocket for handling reactive socket streams.
* Back-pressure is effectively managed through client-controlled command requests.
* Sync commands are transmitted efficiently as BSON (binary) documents.
* This method is **recommended** since it will support the future [BLOB column support](https://roadmap.powersync.com/c/88-support-for-blob-column-types) feature.
2. **HTTP Streaming (Legacy)**
* This is the original implementation method.
* This method will not support the future BLOB column feature.
By default, the `PowerSyncDatabase.connect()` method uses WebSocket. You can optionally specify the `connectionMethod` to override this:
```js theme={null}
// WebSocket (default)
powerSync.connect(connector);
// HTTP Streaming
powerSync.connect(connector, { connectionMethod: SyncStreamConnectionMethod.HTTP });
```
### Android: Flipper Network Plugin for HTTP Streams
**Not needed when using websockets, which is the default since `@powersync/react-native@1.11.0`.**
If you are connecting to PowerSync using HTTP streams, you require additional configuration on Android. React Native does not support streams out of the box, so we use the [polyfills mentioned](/client-sdks/reference/react-native-and-expo#installation). There is currently an open [issue](https://github.com/facebook/flipper/issues/2495) where the Flipper network plugin does not allow Stream events to fire. This plugin needs to be [disabled](https://stackoverflow.com/questions/69235694/react-native-cant-connect-to-sse-in-android/69235695#69235695) in order for HTTP streams to work.
**If you are using Java (Expo \< 50):**
Comment out the following from `android/app/src/debug/java/com//ReactNativeFlipper.java`:
```java theme={null}
// NetworkFlipperPlugin networkFlipperPlugin = new NetworkFlipperPlugin();
// NetworkingModule.setCustomClientBuilder(
// new NetworkingModule.CustomClientBuilder() {
// @Override
// public void apply(OkHttpClient.Builder builder) {
// builder.addNetworkInterceptor(new FlipperOkhttpInterceptor(networkFlipperPlugin));
// }
// });
// client.addPlugin(networkFlipperPlugin);
```
Disable the dev client network inspector in `android/gradle.properties`:
```bash theme={null}
# Enable network inspector
EX_DEV_CLIENT_NETWORK_INSPECTOR=false
```
**If you are using Kotlin (Expo > 50):**
Comment out the following from `onCreate` in `android/app/src/main/java/com//example/MainApplication.kt`
```kotlin theme={null}
// if (BuildConfig.DEBUG) {
// ReactNativeFlipper.initializeFlipper(this, reactNativeHost.reactInstanceManager)
// }
```
### Development on iOS Simulator
Testing offline mode on an iOS simulator by disabling the host machine's entire internet connection will cause the device to remain offline even after the internet connection has been restored. This issue seems to affect all network requests in an application.
# API Reference
Source: https://docs.powersync.com/client-sdks/reference/react-native-api
# Rust SDK (pre-alpha)
Source: https://docs.powersync.com/client-sdks/reference/rust
Full SDK guide for using PowerSync in Rust applications.
This SDK is currently in a **pre-alpha / experimental** state and is intended for gathering external feedback. It is not suitable for production use.
We also can't guarantee continued support for the SDK at this time.
If you're interested in using the PowerSync Rust SDK, please [contact us](/resources/contact-us) with details about your use case.
* The SDK is distributed via crates.io
* Refer to the `powersync-native` repo on GitHub
* Full API reference for the SDK
* Gallery of example projects/demo apps built with Rust and PowerSync
* Changelog for the SDK
### SDK Features
* **Real-time streaming of database changes**: Changes made by one user are instantly streamed to all other users with access to that data. This keeps clients automatically in sync without manual polling or refresh logic.
* **Direct access to a local SQLite database**: Data is stored locally, so apps can read and write instantly without network calls. This enables offline support and faster user interactions.
* **Asynchronous background execution**: The SDK performs database operations in the background to avoid blocking the application’s main thread. This means that apps stay responsive, even during heavy data activity.
* **Query subscriptions for live updates**: The SDK supports query subscriptions that automatically push real-time updates to client applications as data changes, keeping your UI reactive and up to date.
* **Automatic schema management**: PowerSync syncs schemaless data and applies a client-defined schema using SQLite views. This architecture means that PowerSync SDKs can handle schema changes gracefully without requiring explicit migrations on the client-side.
## Installation
Add the [`powersync` crate](https://crates.io/crates/powersync) to your project's `Cargo.toml` by running:
```shell theme={null}
cargo add powersync
```
## Getting Started
**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Streams (or legacy Sync Rules) (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
### 1. Define the Client-Side Schema
The first step is to define the client-side schema, which refers to the schema for the managed SQLite database exposed by the PowerSync Client SDKs, that your app can read from and write to. The client-side schema is typically mainly derived from your backend source database schema and your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)), but can also include other tables such as local-only tables. Note that schema migrations are not required on the SQLite database due to the schemaless nature of the [PowerSync protocol](/architecture/powersync-protocol): schemaless data is synced to the client-side SQLite database, and the client-side schema is then applied to that data using *SQLite views* to allow for structured querying of the data. The schema is applied when the local PowerSync database is constructed (as we'll show in the next step).
**Generate schema automatically**
In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and click the **Connect** button in the top bar to generate the client-side schema in your preferred language. The schema will be generated based on your Sync Rules.
Similar functionality exists in the [CLI](/tools/cli).
**Note:** The generated schema will not include an `id` column, as the client SDK automatically creates an `id` column of type `text`. Consequently, it is not necessary to specify an `id` column in your schema. For additional information on IDs, refer to [Client ID](/sync/advanced/client-id).
The types available are `text`, `integer` and `real`. These should map directly to the values produced by your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
**Example**:
```Rust src/schema.rs theme={null}
use powersync::schema::{Column, Schema, Table};
pub fn app_schema() -> Schema {
let mut schema = Schema::default();
let todos = Table::create(
"todos",
vec![
Column::text("list_id"),
Column::text("created_at"),
Column::text("completed_at"),
Column::text("description"),
Column::integer("completed"),
Column::text("created_by"),
Column::text("completed_by"),
],
|_| {},
);
let lists = Table::create(
"lists",
vec![
Column::text("created_at"),
Column::text("name"),
Column::text("owner_id"),
],
|_| {},
);
schema.tables.push(todos);
schema.tables.push(lists);
schema
}
```
**Note**: No need to declare a primary key `id` column, as PowerSync will automatically create this.
### 2. Instantiate the PowerSync Database
Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
#### Process setup
PowerSync is based on SQLite, and statically links a SQLite extension that needs to be enabled for the process before the SDK can be used. The SDK offers a utility to register the extension, and we recommend calling it early in `main()`:
```Rust lib/main.rs theme={null}
use powersync::env::PowerSyncEnvironment;
mod schema;
fn main() {
PowerSyncEnvironment::powersync_auto_extension()
.expect("could not load PowerSync core extension");
// TODO: Start database and your app
}
```
#### Database setup
For maximum flexibility, the PowerSync Rust SDK can be configured with different asynchronous runtimes and HTTP clients used to connect to the PowerSync service.
These dependencies can be configured through the [`PowerSyncEnvironment`](https://docs.rs/powersync/latest/powersync/env/struct.PowerSyncEnvironment.html)
struct, which wraps:
1. An HTTP client (using traits from the `http-client` crate). We recommend enabling the `curl_client` feature
on that crate and then using an `IsahcClient`. The `H1Client` is known not to work with PowerSync because it can't cancel response streams properly.
2. An asynchronous pool giving out leases to SQLite connections.
3. A timer implementation allowing the sync client to implement delayed retries on connection errors.
This is typically provided by async runtimes like Tokio.
To configure PowerSync, begin by configuring a connection pool:
Use `ConnectionPool::open` to open a database file with multiple connections configured with WAL mode:
```Rust theme={null}
use powersync::{ConnectionPool, error::PowerSyncError};
use powersync::env::PowerSyncEnvironment;
fn open_pool() -> Result<ConnectionPool, PowerSyncError> {
ConnectionPool::open("database.db")
}
```
```Rust theme={null}
use powersync::ConnectionPool;
use powersync::env::PowerSyncEnvironment;
use powersync::error::PowerSyncError;
use rusqlite::Connection;
fn open_pool() -> Result<ConnectionPool, PowerSyncError> {
let connection = Connection::open_in_memory()?;
Ok(ConnectionPool::single_connection(connection))
}
```
Next, create a database and start asynchronous tasks used by the sync client when connecting.
To be compatible with different executors, the SDK uses a model based on long-lived actors instead of
spawning tasks dynamically. All asynchronous processes are exposed through `PowerSyncDatabase::async_tasks()`;
these tasks must be spawned before connecting.
Ensure you depend on `powersync` with the `tokio` feature enabled.
```Rust theme={null}
#[tokio::main]
async fn main() {
PowerSyncEnvironment::powersync_auto_extension()
.expect("could not load PowerSync core extension");
let pool = open_pool().expect("open pool");
let client = Arc::new(IsahcClient::new());
let env = PowerSyncEnvironment::custom(
client.clone(),
pool,
Box::new(PowerSyncEnvironment::tokio_timer()),
);
let db = PowerSyncDatabase::new(env, schema::app_schema());
db.async_tasks().spawn_with_tokio();
}
```
Ensure you depend on `powersync` with the `smol` feature enabled.
```Rust theme={null}
use std::sync::Arc;

use http_client::isahc::IsahcClient; // from the http-client crate (`curl_client` feature)
use powersync::PowerSyncDatabase;
use powersync::env::PowerSyncEnvironment;

async fn start_app() {
    let pool = open_pool().expect("open pool");
    let client = Arc::new(IsahcClient::new());
    let env = PowerSyncEnvironment::custom(
        client.clone(),
        pool,
        // Use the async_io crate to implement timers in PowerSync
        Box::new(PowerSyncEnvironment::async_io_timer()),
    );
    let db = PowerSyncDatabase::new(env, schema::app_schema());
    // TODO: Use a custom multi-threaded executor instead of the default
    let tasks = db.async_tasks().spawn_with(smol::spawn);
    for task in tasks {
        // The task will automatically stop once the database is dropped, but we
        // want to keep it running until then.
        task.detach();
    }
}

fn main() {
    PowerSyncEnvironment::powersync_auto_extension()
        .expect("could not load PowerSync core extension");
    smol::block_on(start_app());
}
```
PowerSync is executor-agnostic and supports all async Rust runtimes. You need to provide:
1. A future that delays execution by scheduling its waker through a timer.
2. A way to spawn futures as a task that is polled independently.
PowerSync uses the [`Timer`](https://docs.rs/powersync/latest/powersync/env/trait.Timer.html)
trait for timers; a custom implementation can be installed by creating a `PowerSyncEnvironment` with
`PowerSyncEnvironment::custom` and passing your timer.
Spawning tasks is only necessary once after opening the database. All tasks necessary for the sync
client are exposed through `PowerSyncDatabase::async_tasks`. You can spawn these by providing
a function turning these futures into independent tasks via `AsyncDatabaseTasks::spawn_with`.
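As a simplified illustration of requirement 1, the sketch below shows the waker-scheduling idea behind a timer-driven future. This is not PowerSync's `Timer` trait or any of its APIs; it uses Python's asyncio for brevity, where the event loop's `call_later` plays the role of the timer and resolving the future wakes whatever awaits it.

```python
import asyncio
import time

# Illustration only (not PowerSync's Timer trait): a future that completes
# after a delay by scheduling its wake-up through the event loop's timer.
def delay(loop: asyncio.AbstractEventLoop, seconds: float) -> asyncio.Future:
    fut = loop.create_future()
    # call_later is the "timer"; resolving the future wakes whatever awaits it.
    loop.call_later(seconds, fut.set_result, None)
    return fut

async def main() -> float:
    loop = asyncio.get_running_loop()
    start = time.monotonic()
    await delay(loop, 0.05)
    return time.monotonic() - start

elapsed = asyncio.run(main())
print(f"resumed after {elapsed:.3f}s")
```

A custom `Timer` implementation for the Rust SDK plays the same role: it schedules the waker of a pending future so that connection retries can resume later without blocking a thread.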
Finally, instruct PowerSync to sync data from your backend:
```Rust theme={null}
// MyBackendConnector is defined in the next step...
db.connect(SyncOptions::new(MyBackendConnector {
    client,
    db: db.clone(),
})).await;
```
**Note**: This section assumes you want to use PowerSync to sync your backend source database with SQLite in your app. If you only want to use PowerSync to manage your local SQLite database without sync, instantiate the PowerSync database without calling `connect()` and refer to our [Local-Only](/client-sdks/advanced/local-only-usage) guide.
### 3. Integrate with your Backend
Create a connector to integrate with your backend. The PowerSync backend connector provides the connection between your application backend and the PowerSync managed database. It is used to:
1. Retrieve an auth token to connect to the PowerSync instance.
2. Upload client-side writes to your backend API. Any writes that are made to the SQLite database are placed into an upload queue by the PowerSync Client SDK and automatically uploaded to your app backend (where you apply those changes to the backend source database) when the user is connected.
Accordingly, the connector must implement two methods:
1. `fetch_credentials` - This method will be automatically invoked by the PowerSync Client SDK every couple of minutes to obtain authentication credentials. See [Authentication Setup](/configuration/auth/overview) for instructions on how the credentials should be generated.
2. `upload_data` - This method will be automatically invoked by the PowerSync Client SDK whenever it needs to upload client-side writes to your app backend via your backend API. Therefore, in your implementation, you need to define how your backend API is called. See [Writing Client Changes](/handling-writes/writing-client-changes) for considerations on the app backend implementation.
**Example**:
```Rust theme={null}
struct MyBackendConnector {
    client: Arc<IsahcClient>,
    db: PowerSyncDatabase,
}

#[async_trait]
impl BackendConnector for MyBackendConnector {
    async fn fetch_credentials(&self) -> Result<PowerSyncCredentials, PowerSyncError> {
        // Implement fetch_credentials to obtain the necessary credentials to connect to your backend.
        // See an example implementation in https://github.com/powersync-ja/powersync-native/blob/508193b0822b8dad1a534a16462e2fcd36a9ac68/examples/egui_todolist/src/database.rs#L119-L133
        Ok(PowerSyncCredentials {
            endpoint: "[Your PowerSync instance URL or self-hosted endpoint]".to_string(),
            // Use a development token (see Authentication Setup https://docs.powersync.com/configuration/auth/development-tokens)
            // to get up and running quickly
            token: "An authentication token".to_string(),
        })
    }

    async fn upload_data(&self) -> Result<(), PowerSyncError> {
        // Implement upload_data to send local changes to your backend service.
        // You can omit this method if you only want to sync data from the server to the client.
        // See an example implementation under Usage Examples (sub-page).
        // See https://docs.powersync.com/handling-writes/writing-client-changes for considerations.
        let mut local_writes = self.db.crud_transactions();
        while let Some(tx) = local_writes.try_next().await? {
            todo!("Inspect tx.crud for local writes that need to be uploaded to your backend");
            tx.complete().await?;
        }
        Ok(())
    }
}
```
## Using PowerSync: CRUD functions
Once the PowerSync instance is configured you can start using the SQLite DB functions.
The most commonly used CRUD functions to interact with your SQLite data are:
* [reader](#reads) - run statements reading from the database.
* [watch_statement](#watching-queries) - execute a read query every time source tables are modified.
* [writer](#mutations) - write to the database.
### Reads
To obtain a connection suitable for reads, call and await `PowerSyncDatabase::reader()`.
The returned connection lease can be used like a `rusqlite::Connection` to run queries.
```Rust theme={null}
async fn find(db: &PowerSyncDatabase, id: &str) -> Result<(), PowerSyncError> {
    let reader = db.reader().await?;
    let mut stmt = reader.prepare("SELECT * FROM lists WHERE id = ?")?;
    let mut rows = stmt.query(params![id])?;
    while let Some(row) = rows.next()? {
        let id: String = row.get("id")?;
        let name: String = row.get("name")?;
        println!("Found todo list: {id}, {name}");
    }
    Ok(())
}
```
### Watching Queries
The `watch_statement` method executes a read query whenever a change to a dependent table is made.
```Rust theme={null}
async fn watch_pending_lists(db: &PowerSyncDatabase) -> Result<(), PowerSyncError> {
    let stream = db.watch_statement(
        "SELECT * FROM lists WHERE state = ?".to_string(),
        params!["pending"],
        |stmt, params| {
            let mut rows = stmt.query(params)?;
            let mut mapped = vec![];
            while let Some(row) = rows.next()? {
                mapped.push(() /* TODO: Read row into list struct */)
            }
            Ok(mapped)
        },
    );
    let mut stream = pin!(stream);
    // Note: The stream is never-ending, so you probably want to poll it in an
    // independent async task.
    while let Some(_event) = stream.try_next().await? {
        // Update UI to display rows
    }
    Ok(())
}
```
### Mutations
Local writes to tables are automatically captured with triggers. To obtain a connection suitable for
writes, use the `PowerSyncDatabase::writer` method.
The `execute` method executes a write query (INSERT, UPDATE, DELETE) and returns the results (if any):
```Rust theme={null}
async fn insert_customer(
    db: &PowerSyncDatabase,
    name: &str,
    email: &str,
) -> Result<(), PowerSyncError> {
    let writer = db.writer().await?;
    writer.execute(
        "INSERT INTO customers (id, name, email) VALUES (uuid(), ?, ?)",
        params![name, email],
    )?;
    Ok(())
}
```
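To make the trigger-based capture mentioned above concrete, here is a stdlib-only Python sketch. The table and trigger names are made up for illustration; PowerSync's real queue table (`ps_crud`) and its triggers are managed internally by the core extension. An `AFTER INSERT` trigger copies each local write into an upload-queue table, which an uploader can later drain in order.

```python
import sqlite3

# Hypothetical schema mimicking trigger-based write capture (not PowerSync's
# real internal tables): each insert is mirrored into an upload queue.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id TEXT PRIMARY KEY, name TEXT, email TEXT);
CREATE TABLE upload_queue (seq INTEGER PRIMARY KEY AUTOINCREMENT, op TEXT, data TEXT);
CREATE TRIGGER customers_insert AFTER INSERT ON customers BEGIN
  INSERT INTO upload_queue (op, data)
  VALUES ('PUT', json_object('id', NEW.id, 'name', NEW.name, 'email', NEW.email));
END;
""")
conn.execute("INSERT INTO customers VALUES ('c1', 'Ada', 'ada@example.com')")
queued = conn.execute("SELECT op, data FROM upload_queue ORDER BY seq").fetchall()
print(queued)
```

The `seq` column is what gives the queue its ordering guarantee: writes are uploaded in the order they were made locally.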
If you're looking for transactions, use the [`transaction`](https://docs.rs/rusqlite/latest/rusqlite/struct.Connection.html#method.transaction)
method from `rusqlite` on `writer`.
## Configure Logging
The Rust SDK uses the `log` crate internally, so you can configure it with any backend, e.g. with
`env_logger`:
```Rust theme={null}
fn main() {
    env_logger::init();
    // ...
}
```
## Additional Usage Examples
For more usage examples including accessing connection status, monitoring sync progress, and waiting for initial sync, see the [Usage Examples](/client-sdks/usage-examples) page.
## ORM / SQL Library Support
The Rust SDK does not currently support any higher-level SQL libraries, but we're investigating
support for Diesel and sqlx.
Please reach out to us if you're interested in these or other integrations.
## Troubleshooting
See [Troubleshooting](/debugging/troubleshooting) for pointers to debug common issues.
## Supported Platforms
See [Supported Platforms -> Rust SDK](/resources/supported-platforms#rust-sdk).
## Upgrading the SDK
To update the PowerSync SDK, run `cargo update powersync` or manually update to the
[latest version](https://crates.io/crates/powersync/versions).
# Swift SDK
Source: https://docs.powersync.com/client-sdks/reference/swift
Full SDK guide for using PowerSync in Swift clients
## Kotlin -> Swift SDK
The PowerSync Swift SDK makes use of the [PowerSync Kotlin SDK](https://github.com/powersync-ja/powersync-kotlin) with the API tool [SKIE](https://skie.touchlab.co/) under the hood to help generate and publish a Swift package. The Swift SDK abstracts the Kotlin SDK behind pure Swift Protocols, enabling us to fully leverage Swift's native features and libraries. Our ultimate goal is to deliver a Swift-centric experience for developers.
### SDK Features
* **Real-time streaming of database changes**: Changes made by one user are instantly streamed to all other users with access to that data. This keeps clients automatically in sync without manual polling or refresh logic.
* **Direct access to a local SQLite database**: Data is stored locally, so apps can read and write instantly without network calls. This enables offline support and faster user interactions.
* **Asynchronous background execution**: The SDK performs database operations in the background to avoid blocking the application’s main thread. This means that apps stay responsive, even during heavy data activity.
* **Query subscriptions for live updates**: The SDK supports query subscriptions that automatically push real-time updates to client applications as data changes, keeping your UI reactive and up to date.
* **Automatic schema management**: PowerSync syncs schemaless data and applies a client-defined schema using SQLite views. This architecture means that PowerSync SDKs can handle schema changes gracefully without requiring explicit migrations on the client-side.
## Installation
You can add the PowerSync Swift package to your project using either `Package.swift` or Xcode:
```swift theme={null}
let package = Package(
    //...
    dependencies: [
        //...
        .package(
            url: "https://github.com/powersync-ja/powersync-swift",
            exact: ""
        ),
    ],
    targets: [
        .target(
            name: "YourTargetName",
            dependencies: [
                .product(
                    name: "PowerSync",
                    package: "powersync-swift"
                )
            ]
        )
    ]
)
```
1. Follow [this guide](https://developer.apple.com/documentation/xcode/adding-package-dependencies-to-your-app#Add-a-package-dependency) to add a package to your project.
2. Use `https://github.com/powersync-ja/powersync-swift.git` as the URL
3. Include the exact version (e.g., `1.0.x`)
## Getting Started
**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Streams (or legacy Sync Rules) (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
### 1. Define the Client-Side Schema
This refers to the schema for the managed SQLite database exposed by the PowerSync Client SDKs, which your app can read from and write to. The schema is applied when the database is instantiated (as we'll show in the next step); no migrations are required.
**Generate schema automatically**
In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance, then click the **Connect** button in the top bar to generate the client-side schema in your preferred language. The schema is generated based on your Sync Streams (or legacy Sync Rules).
Similar functionality exists in the [CLI](/tools/cli).
**Note:** The generated schema will not include an `id` column, as the client SDK automatically creates an `id` column of type `text`. Consequently, it is not necessary to specify an `id` column in your schema. For additional information on IDs, refer to [Client ID](/sync/advanced/client-id).
The types available are `text`, `integer` and `real`. These should map directly to the values produced by your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
**Example**:
```swift theme={null}
import Foundation
import PowerSync

let LISTS_TABLE = "lists"
let TODOS_TABLE = "todos"

let lists = Table(
    name: LISTS_TABLE,
    columns: [
        // ID column is automatically included
        .text("name"),
        .text("created_at"),
        .text("owner_id")
    ]
)

let todos = Table(
    name: TODOS_TABLE,
    // ID column is automatically included
    columns: [
        .text("list_id"),
        .text("photo_id"),
        .text("description"),
        // 0 or 1 to represent false or true
        .integer("completed"),
        .text("created_at"),
        .text("completed_at"),
        .text("created_by"),
        .text("completed_by")
    ],
    indexes: [
        Index(
            name: "list_id",
            columns: [
                IndexedColumn.ascending("list_id")
            ]
        )
    ]
)

let AppSchema = Schema(lists, todos)
```
**Note**: No need to declare a primary key `id` column, as PowerSync will automatically create this.
### 2. Instantiate the PowerSync Database
Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
**Example**:
```swift theme={null}
let schema = AppSchema // Comes from the AppSchema defined above
let db = PowerSyncDatabase(
    schema: schema,
    dbFilename: "powersync-swift.sqlite"
)
```
### 3. Integrate with your Backend
Create a connector to integrate with your backend. The PowerSync backend connector provides the connection between your application backend and the PowerSync managed database. It is used to:
1. Retrieve an auth token to connect to the PowerSync instance.
2. Upload client-side writes to your backend API. Any writes that are made to the SQLite database are placed into an upload queue by the PowerSync Client SDK and automatically uploaded to your app backend (where you apply those changes to the backend source database) when the user is connected.
Accordingly, the connector must implement two methods:
1. `PowerSyncBackendConnectorProtocol.fetchCredentials` - This method will be automatically invoked by the PowerSync Client SDK every couple of minutes to obtain authentication credentials. See [Authentication Setup](/configuration/auth/overview) for instructions on how the credentials should be generated.
2. `PowerSyncBackendConnectorProtocol.uploadData` - This method will be automatically invoked by the PowerSync Client SDK whenever it needs to upload client-side writes to your app backend via your backend API. Therefore, in your implementation, you need to define how your backend API is called. See [Writing Client Changes](/handling-writes/writing-client-changes) for considerations on the app backend implementation.
**Example**:
```swift theme={null}
import PowerSync

final class MyConnector: PowerSyncBackendConnectorProtocol {
    func fetchCredentials() async throws -> PowerSyncCredentials? {
        // Implement fetchCredentials to obtain the necessary credentials to connect to your backend.
        // See an example implementation in https://github.com/powersync-ja/powersync-swift/blob/main/Demo/PowerSyncExample/PowerSync/SupabaseConnector.swift
        return PowerSyncCredentials(
            endpoint: "Your PowerSync instance URL or self-hosted endpoint",
            // Use a development token (see Authentication Setup https://docs.powersync.com/configuration/auth/development-tokens)
            // to get up and running quickly
            token: "An authentication token"
        )
    }

    func uploadData(database: PowerSyncDatabaseProtocol) async throws {
        // Implement uploadData to send local changes to your backend service.
        // You can omit this method if you only want to sync data from the server to the client.
        // See an example implementation under Usage Examples (sub-page).
        // See https://docs.powersync.com/handling-writes/writing-client-changes for considerations.
    }
}
```
Connect the PowerSync database to sync data with your backend:
```swift theme={null}
let connector = MyConnector()
try await powerSync.connect(connector: connector)
```
**Note**: This section assumes you want to use PowerSync to sync your backend source database with SQLite in your app. If you only want to use PowerSync to manage your local SQLite database without sync, instantiate the PowerSync database without calling `connect()` and refer to our [Local-Only](/client-sdks/advanced/local-only-usage) guide.
## Using PowerSync: CRUD functions
Once the PowerSync instance is configured you can start using the SQLite DB functions.
The most commonly used CRUD functions to interact with your SQLite data are:
* [PowerSyncDatabase.get](/client-sdks/reference/swift#fetching-a-single-item) - get (SELECT) a single row from a table.
* [PowerSyncDatabase.getOptional](/client-sdks/reference/swift#fetching-a-single-item) - get (SELECT) a single row from a table and return `null` if not found.
* [PowerSyncDatabase.getAll](/client-sdks/reference/swift#querying-items-powersync-getall) - get (SELECT) a set of rows from a table.
* [PowerSyncDatabase.watch](/client-sdks/reference/swift#watching-queries-powersync-watch) - execute a read query every time source tables are modified.
* [PowerSyncDatabase.execute](/client-sdks/reference/swift#mutations-powersync-execute) - execute a write (INSERT/UPDATE/DELETE) query.
### Fetching a Single Item (PowerSync.get / PowerSync.getOptional)
The `get` method executes a read-only (SELECT) query and returns a single result. It throws an exception if no result is found. Use `getOptional` to return a single optional result (returns `null` if no result is found).
```swift theme={null}
// Find a list item by ID
func getList(_ id: String) async throws -> ListContent {
    try await self.db.get(
        sql: "SELECT * FROM \(LISTS_TABLE) WHERE id = ?",
        parameters: [id],
        mapper: { cursor in
            ListContent(
                id: try cursor.getString(name: "id")!,
                name: try cursor.getString(name: "name")!,
                createdAt: try cursor.getString(name: "created_at")!,
                ownerId: try cursor.getString(name: "owner_id")!
            )
        }
    )
}
```
### Querying Items (PowerSync.getAll)
The `getAll` method executes a read-only (SELECT) query and returns a set of rows.
```swift theme={null}
// Get all lists
func getLists() async throws -> [ListContent] {
    try await self.db.getAll(
        sql: "SELECT * FROM \(LISTS_TABLE)",
        parameters: [],
        mapper: { cursor in
            ListContent(
                id: try cursor.getString(name: "id")!,
                name: try cursor.getString(name: "name")!,
                createdAt: try cursor.getString(name: "created_at")!,
                ownerId: try cursor.getString(name: "owner_id")!
            )
        }
    )
}
```
### Watching Queries (PowerSync.watch)
The `watch` method executes a read query whenever a change to a dependent table is made.
```swift theme={null}
func watchPendingLists() throws -> AsyncThrowingStream<[ListContent], Error> {
    try db.watch(
        sql: "SELECT * FROM lists WHERE state = ?",
        parameters: ["pending"]
    ) { cursor in
        try ListContent(
            id: cursor.getString(name: "id"),
            name: cursor.getString(name: "name")
        )
    }
}
```
### Mutations (PowerSync.execute)
The `execute` method executes a write query (INSERT, UPDATE, DELETE) and returns the results (if any).
```swift theme={null}
func insertTodo(_ todo: NewTodo, _ listId: String) async throws {
    try await db.execute(
        sql: "INSERT INTO \(TODOS_TABLE) (id, created_at, created_by, description, list_id, completed) VALUES (uuid(), datetime(), ?, ?, ?, ?)",
        parameters: [connector.currentUserID, todo.description, listId, todo.isComplete]
    )
}

func updateTodo(_ todo: Todo) async throws {
    try await db.execute(
        sql: "UPDATE \(TODOS_TABLE) SET description = ?, completed = ?, completed_at = datetime(), completed_by = ? WHERE id = ?",
        parameters: [todo.description, todo.isComplete, connector.currentUserID, todo.id]
    )
}

func deleteTodo(id: String) async throws {
    try await db.writeTransaction(callback: { transaction in
        _ = try transaction.execute(
            sql: "DELETE FROM \(TODOS_TABLE) WHERE id = ?",
            parameters: [id]
        )
    })
}
```
## Configure Logging
You can include your own Logger that must conform to the [LoggerProtocol](https://powersync-ja.github.io/powersync-swift/documentation/powersync/loggerprotocol) as shown here.
```swift theme={null}
let logger = DefaultLogger(minSeverity: .debug)

let db = PowerSyncDatabase(
    schema: schema,
    dbFilename: "powersync-swift.sqlite",
    logger: logger
)
```
The `DefaultLogger` supports the following severity levels: `.debug`, `.info`, `.warn`, `.error`.
## Additional Usage Examples
For more usage examples including accessing connection status, monitoring sync progress, and waiting for initial sync, see the [Usage Examples](/client-sdks/usage-examples) page.
## ORM Support
PowerSync supports the [GRDB](/client-sdks/orms/swift/grdb) library for Swift.
## Troubleshooting
See [Troubleshooting](/debugging/troubleshooting) for pointers to debug common issues.
## Supported Platforms
See [Supported Platforms -> Swift SDK](/resources/supported-platforms#swift-sdk).
## Upgrading the SDK
Update the version number in `Package.swift` or via Xcode Package Dependencies as documented in the installation instructions: [Installation](/client-sdks/reference/swift#installation)
# API Reference
Source: https://docs.powersync.com/client-sdks/reference/swift-api
# Usage Examples
Source: https://docs.powersync.com/client-sdks/usage-examples
Code examples and common patterns for using PowerSync Client SDKs
## Using transactions to group changes
Read and write transactions present a context where multiple changes can be made then finally committed to the DB or rolled back. This ensures that either all the changes get persisted, or no change is made to the DB (in the case of a rollback or exception).
The [writeTransaction(callback)](https://pub.dev/documentation/powersync/latest/sqlite_async/SqliteQueries/writeTransaction.html) method combines all writes into a single transaction, only committing to persistent storage once.
```dart theme={null}
deleteList(SqliteDatabase db, String id) async {
  await db.writeTransaction((tx) async {
    // Delete the main list
    await tx.execute('DELETE FROM lists WHERE id = ?', [id]);
    // Delete any children of the list
    await tx.execute('DELETE FROM todos WHERE list_id = ?', [id]);
  });
}
```
Also see [readTransaction(callback)](https://pub.dev/documentation/powersync/latest/sqlite_async/SqliteQueries/readTransaction.html)
Read and write transactions present a context where multiple changes can be made then finally committed to the DB or rolled back. This ensures that either all the changes get persisted, or no change is made to the DB (in the case of a rollback or exception).
[PowerSyncDatabase.writeTransaction(callback)](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#writetransaction) automatically commits changes after the transaction callback is completed if [`tx.rollback()`](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/db/DBAdapter.ts#L53) has not explicitly been called. If an exception is thrown in the callback then changes are automatically rolled back.
```js theme={null}
// ListsWidget.jsx
import React from 'react';
import {Alert, Button, FlatList, Text, View} from 'react-native';

export const ListsWidget = () => {
  // Populate lists with one of the methods listed above
  const [lists, setLists] = React.useState([]);

  return (
    <View>
      <FlatList
        data={lists.map((list) => ({key: list.id, ...list}))}
        renderItem={({item}) => (
          <View>
            <Text>{item.name}</Text>
            <Button
              title="Delete"
              onPress={async () => {
                try {
                  await PowerSync.writeTransaction(async (tx) => {
                    // Delete the main list
                    await tx.execute(`DELETE FROM lists WHERE id = ?`, [item.id]);
                    // Delete any children of the list
                    await tx.execute(`DELETE FROM todos WHERE list_id = ?`, [item.id]);
                    // Transactions are automatically committed at the end of execution
                    // Transactions are automatically rolled back if an exception occurred
                  });
                  // Watched queries should automatically reload after mutation
                } catch (ex) {
                  Alert.alert('Error', ex.message);
                }
              }}
            />
          </View>
        )}
      />
      <Button
        title="Create List"
        onPress={async () => {
          try {
            await PowerSync.execute(
              'INSERT INTO lists (id, created_at, name, owner_id) VALUES (uuid(), datetime(), ?, ?) RETURNING *',
              ['A list name', "[The user's uuid]"]
            );
            // Watched queries should automatically reload after mutation
          } catch (ex) {
            Alert.alert('Error', ex.message);
          }
        }}
      />
    </View>
  );
};
```
Also see [PowerSyncDatabase.readTransaction(callback)](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#readtransaction).
Read and write transactions present a context where multiple changes can be made then finally committed to the DB or rolled back. This ensures that either all the changes get persisted, or no change is made to the DB (in the case of a rollback or exception).
[PowerSyncDatabase.writeTransaction(callback)](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#writetransaction) automatically commits changes after the transaction callback is completed if `tx.rollback()` has not explicitly been called. If an exception is thrown in the callback then changes are automatically rolled back.
```js theme={null}
// ListsWidget.jsx
import React, { useState } from 'react';

export const ListsWidget = () => {
  // Populate lists with one of the methods listed above
  const [lists, setLists] = useState([]);

  return (
    <div>
      {lists.map((list) => (
        <div key={list.id}>
          {list.name}
          <button
            onClick={async () => {
              try {
                await PowerSync.writeTransaction(async (tx) => {
                  // Delete the main list
                  await tx.execute(`DELETE FROM lists WHERE id = ?`, [list.id]);
                  // Delete any children of the list
                  await tx.execute(`DELETE FROM todos WHERE list_id = ?`, [list.id]);
                  // Transactions are automatically committed at the end of execution
                  // Transactions are automatically rolled back if an exception occurred
                });
                // Watched queries should automatically reload after mutation
              } catch (ex) {
                alert(`Error: ${ex.message}`);
              }
            }}
          >
            Delete
          </button>
        </div>
      ))}
      <button
        onClick={async () => {
          try {
            await PowerSync.execute(
              'INSERT INTO lists (id, created_at, name, owner_id) VALUES (uuid(), datetime(), ?, ?) RETURNING *',
              ['A list name', "[The user's uuid]"]
            );
            // Watched queries should automatically reload after mutation
          } catch (ex) {
            alert(`Error: ${ex.message}`);
          }
        }}
      >
        Create List
      </button>
    </div>
  );
};
```
Also see [PowerSyncDatabase.readTransaction(callback)](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#readtransaction).
Example not yet available.
Example not yet available.
Use `writeTransaction` to group statements that can write to the database.
```kotlin theme={null}
database.writeTransaction {
    database.execute(
        sql = "DELETE FROM lists WHERE id = ?",
        parameters = listOf(listId)
    )
    database.execute(
        sql = "DELETE FROM todos WHERE list_id = ?",
        parameters = listOf(listId)
    )
}
```
Read and write transactions present a context where multiple changes can be made then finally committed to the DB or rolled back. This ensures that either all the changes get persisted, or no change is made to the DB (in the case of a rollback or exception).
```swift theme={null}
// Delete a list and its todos in a transaction
func deleteList(db: PowerSyncDatabase, listId: String) async throws {
    try await db.writeTransaction { tx in
        try await tx.execute(sql: "DELETE FROM lists WHERE id = ?", parameters: [listId])
        try await tx.execute(sql: "DELETE FROM todos WHERE list_id = ?", parameters: [listId])
    }
}
```
Also see [`readTransaction`](https://powersync-ja.github.io/powersync-swift/documentation/powersync/queries/readtransaction\(callback:\)).
```cs theme={null}
using PowerSync.Common.Client;

class Demo
{
    static async Task DeleteList(PowerSyncDatabase db, string listId)
    {
        await db.WriteTransaction(async (tx) =>
        {
            // Delete the main list
            await tx.Execute("DELETE FROM lists WHERE id = ?", new object[] { listId });
            // Delete any children of the list
            await tx.Execute("DELETE FROM todos WHERE list_id = ?", new object[] { listId });
            // Transactions are automatically committed at the end of execution
            // Transactions are automatically rolled back if an exception occurred
        });
    }
}
```
Example not yet available.
## Listen for changes in data
Use [watch](https://pub.dev/documentation/powersync/latest/sqlite_async/SqliteQueries/watch.html) to watch for changes to the dependent tables of any SQL query.
```dart theme={null}
StreamBuilder(
  stream: db.watch('SELECT * FROM lists WHERE state = ?', ['pending']),
  builder: (context, snapshot) {
    if (snapshot.hasData) {
      // TODO: implement your own UI here based on the result set
      return ...;
    } else {
      return const Center(child: CircularProgressIndicator());
    }
  },
)
```
Use [PowerSyncDatabase.watch](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#watch) to watch for changes in source tables.
```javascript theme={null}
async function* pendingLists(): AsyncIterable<any[]> {
  for await (const result of db.watch(
    `SELECT * FROM lists WHERE state = ?`,
    ['pending']
  )) {
    yield result.rows?._array ?? [];
  }
}
```
```javascript theme={null}
const pendingLists = (onResult: (lists: any[]) => void): void => {
  db.watch(
    'SELECT * FROM lists WHERE state = ?',
    ['pending'],
    {
      onResult: (result: any) => {
        onResult(result.rows?._array ?? []);
      }
    }
  );
}
```
For advanced watch query features like incremental updates and differential results, see [Live Queries / Watch Queries](/client-sdks/watch-queries).
Use [PowerSyncDatabase.watch](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#watch) to watch for changes in source tables.
```javascript theme={null}
async function* pendingLists(): AsyncIterable<any[]> {
  for await (const result of db.watch(
    `SELECT * FROM lists WHERE state = ?`,
    ['pending']
  )) {
    yield result.rows?._array ?? [];
  }
}
```
```javascript theme={null}
const pendingLists = (onResult: (lists: any[]) => void): void => {
  db.watch(
    'SELECT * FROM lists WHERE state = ?',
    ['pending'],
    {
      onResult: (result: any) => {
        onResult(result.rows?._array ?? []);
      }
    }
  );
}
```
For advanced watch query features like incremental updates and differential results, see [Live Queries / Watch Queries](/client-sdks/watch-queries).
The Capacitor SDK uses the same API as the [JavaScript Web SDK](/client-sdks/reference/javascript-web#watching-queries-powersync.watch). Use `db.watch()` to watch for changes in source tables.
```javascript theme={null}
async function* pendingLists(): AsyncIterable<any[]> {
  for await (const result of db.watch(
    `SELECT * FROM lists WHERE state = ?`,
    ['pending']
  )) {
    yield result.rows?._array ?? [];
  }
}
```
```javascript theme={null}
const pendingLists = (onResult: (lists: any[]) => void): void => {
  db.watch(
    'SELECT * FROM lists WHERE state = ?',
    ['pending'],
    {
      onResult: (result: any) => {
        onResult(result.rows?._array ?? []);
      }
    }
  );
}
```
For advanced watch query features like incremental updates and differential results, see [Live Queries / Watch Queries](/client-sdks/watch-queries).
Use [PowerSyncDatabase.watch](https://powersync-ja.github.io/powersync-js/node-sdk/classes/PowerSyncDatabase#watch) to watch for changes in source tables.
```javascript theme={null}
async function* pendingLists(): AsyncIterable<any[]> {
  for await (const result of db.watch(
    `SELECT * FROM lists WHERE state = ?`,
    ['pending']
  )) {
    yield result.rows?._array ?? [];
  }
}
```
```javascript theme={null}
const pendingLists = (onResult: (lists: any[]) => void): void => {
  db.watch(
    'SELECT * FROM lists WHERE state = ?',
    ['pending'],
    {
      onResult: (result: any) => {
        onResult(result.rows?._array ?? []);
      }
    }
  );
}
```
For advanced watch query features like incremental updates and differential results, see [Live Queries / Watch Queries](/client-sdks/watch-queries).
Use the `watch` method to watch for changes to the dependent tables of any SQL query.
```kotlin theme={null}
fun watchPendingLists(): Flow<List<ListItem>> =
    db.watch(
        "SELECT * FROM lists WHERE state = ?",
        listOf("pending"),
    ) { cursor ->
        ListItem(
            id = cursor.getString("id"),
            name = cursor.getString("name"),
        )
    }
```
Use `watch` to watch for changes to the dependent tables of any SQL query.
```swift theme={null}
func watchPendingLists() throws -> AsyncThrowingStream<[ListContent], Error> {
try db.watch(
sql: "SELECT * FROM lists WHERE state = ?",
parameters: ["pending"],
) { cursor in
try ListContent(
id: cursor.getString(name: "id"),
name: cursor.getString(name: "name"),
)
}
}
```
Use `db.Watch()` to watch queries for changes. `Watch` returns an `IAsyncEnumerable` (since v0.0.11-alpha.1).
```cs theme={null}
using PowerSync.Common.Client;
// Watch for changes (define a result type matching your query, e.g. ListResult)
var cts = new CancellationTokenSource();
var listener = db.Watch("SELECT * FROM lists", [], new SQLWatchOptions { Signal = cts.Token });
await foreach (var results in listener)
{
// Update UI when data changes
Console.WriteLine($"Result count: {results.Length}");
}
// To cancel watching: cts.Cancel();
```
Use `watch_statement` to watch for changes to the dependent tables of any SQL query.
```rust theme={null}
async fn watch_pending_lists(db: &PowerSyncDatabase) -> Result<(), PowerSyncError> {
let stream = db.watch_statement(
"SELECT * FROM lists WHERE state = ?".to_string(),
params!["pending"],
|stmt, params| {
let mut rows = stmt.query(params)?;
let mut mapped = vec![];
while let Some(row) = rows.next()? {
mapped.push(() /* TODO: Read row into list struct */)
}
Ok(mapped)
},
);
let mut stream = pin!(stream);
// Note: The stream is never-ending, so you probably want to call this in an independent async
// task.
while let Some(event) = stream.try_next().await? {
// Update UI to display rows
}
Ok(())
}
```
## Insert, update, and delete data in the local database
Use [execute](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncDatabase/execute.html) to run INSERT, UPDATE or DELETE queries.
```dart theme={null}
FloatingActionButton(
onPressed: () async {
await db.execute(
'INSERT INTO customers(id, name, email) VALUES(uuid(), ?, ?)',
['Fred', 'fred@example.org'],
);
},
tooltip: '+',
child: const Icon(Icons.add),
);
```
Use [PowerSyncDatabase.execute](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#execute) to run INSERT, UPDATE or DELETE queries.
```js theme={null}
const handleButtonClick = async () => {
await db.execute(
'INSERT INTO customers(id, name, email) VALUES(uuid(), ?, ?)',
['Fred', 'fred@example.org']
);
};
return (
  // Render a button (component choice is app-specific) that calls handleButtonClick
  <Button title="+ add" onPress={handleButtonClick} />
);
```
Use [PowerSyncDatabase.execute](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#execute) to run INSERT, UPDATE or DELETE queries.
```js theme={null}
const handleButtonClick = async () => {
await db.execute(
'INSERT INTO customers(id, name, email) VALUES(uuid(), ?, ?)',
['Fred', 'fred@example.org']
);
};
return (
  // Render a button that calls handleButtonClick
  <button onClick={handleButtonClick}>+ add</button>
);
```
Example not yet available.
Example not yet available.
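The JavaScript-based SDKs share the same `PowerSyncDatabase.execute` API, so the Web SDK pattern above should carry over. A hedged sketch — `db` is stubbed here so the call shape is self-contained; in an app it is your opened `PowerSyncDatabase`:

```javascript
// Sketch: inserting a row via the shared JavaScript execute() API.
// The stub `db` records calls; replace it with your opened PowerSyncDatabase.
const executed = [];
const db = {
  execute: async (sql, parameters) => executed.push({ sql, parameters }),
};

async function addCustomer(name, email) {
  await db.execute(
    'INSERT INTO customers(id, name, email) VALUES(uuid(), ?, ?)',
    [name, email]
  );
}
```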
Use `execute` to run `INSERT`, `UPDATE` or `DELETE` queries.
```kotlin theme={null}
suspend fun updateCustomer(id: String, name: String, email: String) {
database.execute(
"UPDATE customers SET name = ? WHERE email = ?",
listOf(name, email)
)
}
```
Use `execute` to run `INSERT`, `UPDATE` or `DELETE` queries.
```swift theme={null}
// Insert a new TODO
func insertTodo(_ todo: NewTodo, _ listId: String) async throws {
try await db.execute(
sql: "INSERT INTO \(TODOS_TABLE) (id, created_at, created_by, description, list_id, completed) VALUES (uuid(), datetime(), ?, ?, ?, ?)",
parameters: [connector.currentUserID, todo.description, listId, todo.isComplete]
)
}
```
Use `Execute` to run `INSERT`, `UPDATE` or `DELETE` queries.
```cs theme={null}
// Insert a new customer
await db.Execute(
"INSERT INTO customers(id, name, email) VALUES(uuid(), ?, ?)",
new object[] { "Fred", "fred@example.org" }
);
```
Use `PowerSyncDatabase::writer` and `execute` to run INSERT, UPDATE or DELETE queries. Obtain a write connection with `db.writer().await?`, then call `writer.execute(sql, params![...])?`:
```rust theme={null}
use rusqlite::params;
async fn insert_customer(
db: &PowerSyncDatabase,
name: &str,
email: &str,
) -> Result<(), PowerSyncError> {
let writer = db.writer().await?;
writer.execute(
"INSERT INTO customers (id, name, email) VALUES (uuid(), ?, ?)",
params![name, email],
)?;
Ok(())
}
```
## Send changes in local data to your backend service
Override [uploadData](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncBackendConnector/uploadData.html) to send local updates to your backend service.
```dart theme={null}
@override
Future<void> uploadData(PowerSyncDatabase database) async {
final batch = await database.getCrudBatch();
if (batch == null) return;
for (var op in batch.crud) {
switch (op.op) {
case UpdateType.put:
// Send the data to your backend service
// Replace `_myApi` with your own API client or service
await _myApi.put(op.table, op.opData!);
break;
default:
// TODO: implement the other operations (patch, delete)
break;
}
}
await batch.complete();
}
```
Override [uploadData](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L24) to send local updates to your backend service.
```js theme={null}
// Implement the uploadData method in your backend connector
async function uploadData(database) {
const batch = await database.getCrudBatch();
if (batch === null) return;
for (const op of batch.crud) {
switch (op.op) {
case 'put':
// Send the data to your backend service
// replace `_myApi` with your own API client or service
await _myApi.put(op.table, op.opData);
break;
default:
// TODO: implement the other operations (patch, delete)
break;
}
}
await batch.complete();
}
```
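The `put` branch above can be extended to the other operation types. A sketch — `_myApi.patch` and `_myApi.delete` are hypothetical client methods (stubbed below so the dispatch is self-contained); each CRUD entry carries `op`, `table`, `id` and `opData`:

```javascript
// Sketch: dispatching all three CRUD operation types from the upload queue.
// `_myApi` is a hypothetical backend client; the stub just records calls.
const calls = [];
const _myApi = {
  put: async (table, data) => calls.push(['put', table, data.id]),
  patch: async (table, id, data) => calls.push(['patch', table, id]),
  delete: async (table, id) => calls.push(['delete', table, id]),
};

async function applyOperation(op) {
  switch (op.op) {
    case 'put':
      // Create or replace the full row
      return _myApi.put(op.table, { id: op.id, ...op.opData });
    case 'patch':
      // Partial update of an existing row
      return _myApi.patch(op.table, op.id, op.opData);
    case 'delete':
      return _myApi.delete(op.table, op.id);
  }
}
```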
Override [uploadData](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L24) to send local updates to your backend service.
```js theme={null}
// Implement the uploadData method in your backend connector
async function uploadData(database) {
const batch = await database.getCrudBatch();
if (batch === null) return;
for (const op of batch.crud) {
switch (op.op) {
case 'put':
// Send the data to your backend service
// replace `_myApi` with your own API client or service
await _myApi.put(op.table, op.opData);
break;
default:
// TODO: implement the other operations (patch, delete)
break;
}
}
await batch.complete();
}
```
Example not yet available.
Example not yet available.
Override `uploadData` to send local updates to your backend service. If you are using Supabase, see [SupabaseConnector.kt](https://github.com/powersync-ja/powersync-kotlin/blob/main/connectors/supabase/src/commonMain/kotlin/com/powersync/connector/supabase/SupabaseConnector.kt) for a complete implementation.
```kotlin theme={null}
/**
* This function is called whenever there is data to upload, whether the device is online or offline.
* If this call throws an error, it is retried periodically.
*/
override suspend fun uploadData(database: PowerSyncDatabase) {
val transaction = database.getNextCrudTransaction() ?: return;
var lastEntry: CrudEntry? = null;
try {
for (entry in transaction.crud) {
lastEntry = entry;
val table = supabaseClient.from(entry.table)
when (entry.op) {
UpdateType.PUT -> {
val data = entry.opData?.toMutableMap() ?: mutableMapOf()
data["id"] = entry.id
table.upsert(data)
}
UpdateType.PATCH -> {
table.update(entry.opData!!) {
filter {
eq("id", entry.id)
}
}
}
UpdateType.DELETE -> {
table.delete {
filter {
eq("id", entry.id)
}
}
}
}
}
transaction.complete(null);
} catch (e: Exception) {
println("Data upload error - retrying last entry: ${lastEntry!!}, $e")
throw e
}
}
```
Override `uploadData` to send local updates to your backend service.
```swift theme={null}
class MyConnector: PowerSyncBackendConnector {
override func uploadData(database: PowerSyncDatabaseProtocol) async throws {
let batch = try await database.getCrudBatch()
guard let batch = batch else { return }
for entry in batch.crud {
switch entry.op {
case .put:
// Send the data to your backend service
// Replace `_myApi` with your own API client or service
try await _myApi.put(table: entry.table, data: entry.opData)
default:
// TODO: implement the other operations (patch, delete)
break
}
}
try await batch.complete(writeCheckpoint: nil)
}
}
```
Override `UploadData` to send local updates to your backend service.
```cs theme={null}
public class MyConnector : IPowerSyncBackendConnector
{
public async Task UploadData(IPowerSyncDatabase database)
{
var transaction = await database.GetNextCrudTransaction();
if (transaction == null) return;
try
{
foreach (var operation in transaction.Crud)
{
switch (operation.Op)
{
case UpdateType.PUT:
// Send the data to your backend service
// Replace _myApi with your own API client or service
await _myApi.Put(operation.Table, operation.OpData);
break;
default:
// TODO: implement the other operations (PATCH, DELETE)
break;
}
}
await transaction.Complete();
}
catch (Exception ex)
{
Console.WriteLine($"Upload error: {ex.Message}");
throw;
}
}
}
```
Implement `upload_data` on your `BackendConnector` to send local changes to your backend service. Use `db.crud_transactions()` and iterate with `try_next()`; for each transaction, inspect `tx.crud` and call your API, then `tx.complete().await?`:
```rust theme={null}
use async_trait::async_trait;
#[async_trait]
impl BackendConnector for MyBackendConnector {
async fn upload_data(&self) -> Result<(), PowerSyncError> {
let mut local_writes = self.db.crud_transactions();
while let Some(tx) = local_writes.try_next().await? {
for op in &tx.crud {
// Send the data to your backend service
// Replace with your own API client or service
// match on op.op (e.g. Put, Patch, Delete) and op.table, op.id, op.op_data
}
tx.complete().await?;
}
Ok(())
}
}
```
## Accessing PowerSync connection status information
Use [SyncStatus](https://pub.dev/documentation/powersync/latest/powersync/SyncStatus-class.html) and register an event listener with [statusStream](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncDatabase/statusStream.html) to listen for status changes to your PowerSync instance.
```dart theme={null}
class _StatusAppBarState extends State<StatusAppBar> {
late SyncStatus _connectionState;
StreamSubscription? _syncStatusSubscription;
@override
void initState() {
super.initState();
_connectionState = db.currentStatus;
_syncStatusSubscription = db.statusStream.listen((event) {
setState(() {
_connectionState = db.currentStatus;
});
});
}
@override
void dispose() {
super.dispose();
_syncStatusSubscription?.cancel();
}
@override
Widget build(BuildContext context) {
final statusIcon = _getStatusIcon(_connectionState);
return AppBar(
title: Text(widget.title),
actions: [
// ...
statusIcon
],
);
}
}
Widget _getStatusIcon(SyncStatus status) {
if (status.anyError != null) {
// The error message is verbose, could be replaced with something
// more user-friendly
if (!status.connected) {
return _makeIcon(status.anyError!.toString(), Icons.cloud_off);
} else {
return _makeIcon(status.anyError!.toString(), Icons.sync_problem);
}
} else if (status.connecting) {
return _makeIcon('Connecting', Icons.cloud_sync_outlined);
} else if (!status.connected) {
return _makeIcon('Not connected', Icons.cloud_off);
} else if (status.uploading && status.downloading) {
// The status changes often between downloading, uploading and both,
// so we use the same icon for all three
return _makeIcon('Uploading and downloading', Icons.cloud_sync_outlined);
} else if (status.uploading) {
return _makeIcon('Uploading', Icons.cloud_sync_outlined);
} else if (status.downloading) {
return _makeIcon('Downloading', Icons.cloud_sync_outlined);
} else {
return _makeIcon('Connected', Icons.cloud_queue);
}
}
```
Use [PowerSyncDatabase.connected](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#connected) and register an event listener with [PowerSyncDatabase.registerListener](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#registerlistener) to listen for status changes to your PowerSync instance.
```js theme={null}
// Example of using connected status to show online or offline
// Tap into connected
const [connected, setConnected] = React.useState(powersync.connected);
React.useEffect(() => {
// Register listener for changes made to the powersync status
return powersync.registerListener({
statusChanged: (status) => {
setConnected(status.connected);
}
});
}, [powersync]);
// Icon to show connected or not connected to powersync
// as well as the last synced time
<Button
  title={connected ? 'Connected' : 'Disconnected'}
  onPress={() => {
    Alert.alert(
      'Status',
      `${connected ? 'Connected' : 'Disconnected'}. \nLast Synced at ${
        powersync.currentStatus?.lastSyncedAt?.toISOString() ?? '-'
      }\nVersion: ${powersync.sdkVersion}`
    );
  }}
/>;
```
Use [PowerSyncDatabase.connected](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#connected) and register an event listener with [PowerSyncDatabase.registerListener](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#registerlistener) to listen for status changes to your PowerSync instance.
```js theme={null}
// Example of using connected status to show online or offline
// Tap into connected
const [connected, setConnected] = React.useState(powersync.connected);
React.useEffect(() => {
// Register listener for changes made to the powersync status
return powersync.registerListener({
statusChanged: (status) => {
setConnected(status.connected);
}
});
}, [powersync]);
// Button to show connected or not connected to powersync
// as well as the last synced time
<button
  onClick={() => {
    alert(
      `Status: ${connected ? 'Connected' : 'Disconnected'}.\nLast Synced at ${
        powersync.currentStatus?.lastSyncedAt?.toISOString() ?? '-'
      }\nVersion: ${powersync.sdkVersion}`
    );
  }}
>
  {connected ? 'Connected' : 'Disconnected'}
</button>;
```
Example not yet available.
Example not yet available.
```kotlin theme={null}
// Initialize the DB
val db = remember { PowerSyncDatabase(factory, schema) }
// Get the status as a flow
val status = db.currentStatus.asFlow().collectAsState(initial = null)
// Use the emitted values from the flow e.g. to check if connected
val isConnected = status.value?.connected
```
Use [`currentStatus`](https://powersync-ja.github.io/powersync-swift/documentation/powersync/powersyncdatabaseprotocol/currentstatus) and observe changes to listen for status changes to your PowerSync instance.
```swift theme={null}
import Foundation
import SwiftUI
import PowerSync
struct PowerSyncConnectionIndicator: View {
private let powersync: any PowerSyncDatabaseProtocol
@State private var connected: Bool = false
init(powersync: any PowerSyncDatabaseProtocol) {
self.powersync = powersync
}
var body: some View {
let iconName = connected ? "wifi" : "wifi.slash"
let description = connected ? "Online" : "Offline"
Image(systemName: iconName)
.accessibility(label: Text(description))
.task {
self.connected = powersync.currentStatus.connected
for await status in powersync.currentStatus.asFlow() {
self.connected = status.connected
}
}
}
}
```
Use [SyncStatus](https://github.com/powersync-ja/powersync-dotnet/blob/main/PowerSync/PowerSync.Common/DB/Crud/SyncStatus.cs) and `db.Events.OnStatusChanged.ListenAsync` (since v0.0.11-alpha.1) to listen for status changes to your PowerSync instance.
```cs theme={null}
using PowerSync.Common.Client;
using PowerSync.Common.DB.Crud;
class StatusIndicator
{
private SyncStatus? _currentStatus;
public async Task StartListeningAsync(PowerSyncDatabase db, CancellationToken ct = default)
{
var listener = db.Events.OnStatusChanged.ListenAsync(ct);
await foreach (var update in listener)
{
_currentStatus = update.Status;
UpdateStatusIcon(_currentStatus);
}
}
private void UpdateStatusIcon(SyncStatus status)
{
var dataFlow = status.DataFlowStatus;
var hasError = dataFlow.DownloadError != null || dataFlow.UploadError != null;
if (hasError)
{
var errorMessage = dataFlow.DownloadError?.Message ?? dataFlow.UploadError?.Message ?? "Unknown error";
Console.WriteLine(status.Connected ? $"Error: {errorMessage} - Sync problem" : $"Error: {errorMessage} - Not connected");
}
else if (status.Connecting)
{
Console.WriteLine("Connecting...");
}
else if (!status.Connected)
{
Console.WriteLine("Not connected");
}
else if (dataFlow.Uploading && dataFlow.Downloading)
{
Console.WriteLine("Uploading and downloading");
}
else if (dataFlow.Uploading)
{
Console.WriteLine("Uploading");
}
else if (dataFlow.Downloading)
{
Console.WriteLine("Downloading");
}
else
{
Console.WriteLine("Connected");
}
}
}
```
Example not yet available.
## Wait for the initial sync to complete
Use the [hasSynced](https://pub.dev/documentation/powersync/latest/powersync/SyncStatus/hasSynced.html) property (available since version 1.5.1 of the SDK) and register a listener to indicate to the user whether the initial sync is in progress.
```dart theme={null}
// Example of using hasSynced to show whether the first sync has completed
/// Global reference to the database
final PowerSyncDatabase db;
bool hasSynced = false;
StreamSubscription? _syncStatusSubscription;
// Use the exposed statusStream
Stream<SyncStatus> watchSyncStatus() {
return db.statusStream;
}
@override
void initState() {
super.initState();
_syncStatusSubscription = watchSyncStatus().listen((status) {
setState(() {
hasSynced = status.hasSynced ?? false;
});
});
}
@override
Widget build(BuildContext context) {
return Text(hasSynced ? 'Initial sync completed!' : 'Busy with initial sync...');
}
// Don't forget to dispose of stream subscriptions when the view is disposed
void dispose() {
super.dispose();
_syncStatusSubscription?.cancel();
}
```
For async use cases, see the [waitForFirstSync](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncDatabase/waitForFirstSync.html) method, which returns a Future that completes once the first full sync has completed.
Use the [hasSynced](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/SyncStatus#hassynced) property (available since version 1.4.1 of the SDK) and register an event listener with [PowerSyncDatabase.registerListener](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#registerlistener) to indicate to the user whether the initial sync is in progress.
```js theme={null}
// Example of using hasSynced to show whether the first sync has completed
// Tap into hasSynced
const [hasSynced, setHasSynced] = React.useState(powerSync.currentStatus?.hasSynced || false);
React.useEffect(() => {
// Register listener for changes made to the powersync status
return powerSync.registerListener({
statusChanged: (status) => {
setHasSynced(!!status.hasSynced);
}
});
}, [powerSync]);
return <Text>{hasSynced ? 'Initial sync completed!' : 'Busy with initial sync...'}</Text>;
```
For async use cases, see [PowerSyncDatabase.waitForFirstSync](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/AbstractPowerSyncDatabase#waitforfirstsync), which returns a promise that resolves once the first full sync has completed (it queries the internal SQL [ps\_buckets](/architecture/client-architecture) table to determine if data has been synced).
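In plain async code, the same gate looks like this. A sketch — `db` is stubbed below so the call order is self-contained; in an app it is your opened `PowerSyncDatabase`:

```javascript
// Sketch: wait for the first full sync before reading synced data.
// The stub `db` stands in for an opened PowerSyncDatabase.
const db = {
  waitForFirstSync: async () => {},
  getAll: async (sql) => [{ id: '1', name: 'groceries' }],
};

async function loadAfterFirstSync() {
  await db.waitForFirstSync(); // resolves once the first full sync completes
  return db.getAll('SELECT * FROM lists');
}
```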
Use the [hasSynced](https://powersync-ja.github.io/powersync-js/web-sdk/classes/SyncStatus#hassynced) property (available since version 0.4.1 of the SDK) and register an event listener with [PowerSyncDatabase.registerListener](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#registerlistener) to indicate to the user whether the initial sync is in progress.
```js theme={null}
// Example of using hasSynced to show whether the first sync has completed
// Tap into hasSynced
const [hasSynced, setHasSynced] = React.useState(powerSync.currentStatus?.hasSynced || false);
React.useEffect(() => {
// Register listener for changes made to the powersync status
return powerSync.registerListener({
statusChanged: (status) => {
setHasSynced(!!status.hasSynced);
}
});
}, [powerSync]);
return <div>{hasSynced ? 'Initial sync completed!' : 'Busy with initial sync...'}</div>;
```
For async use cases, see [PowerSyncDatabase.waitForFirstSync()](https://powersync-ja.github.io/powersync-js/web-sdk/classes/AbstractPowerSyncDatabase#waitforfirstsync), which returns a promise that resolves once the first full sync has completed (it queries the internal SQL [ps\_buckets](/architecture/client-architecture) table to determine if data has been synced).
Example not yet available.
Example not yet available.
Use the `hasSynced` property and register a listener to indicate to the user whether the initial sync is in progress.
```kotlin theme={null}
val db = remember { PowerSyncDatabase(factory, schema) }
val status = db.currentStatus.asFlow().collectAsState(initial = null)
val hasSynced by remember { derivedStateOf { status.value?.hasSynced } }
when {
hasSynced == null || hasSynced == false -> {
Box(
modifier = Modifier.fillMaxSize().background(MaterialTheme.colors.background),
contentAlignment = Alignment.Center
) {
Text(
text = "Busy with initial sync...",
style = MaterialTheme.typography.h6
)
}
}
else -> {
    // ... show rest of UI
}
}
```
For async use cases, use the `waitForFirstSync` method, a suspending function that returns once the first full sync has completed.
Use the `hasSynced` property and observe status changes to indicate to the user whether the initial sync is in progress.
```swift theme={null}
struct WaitForFirstSync: View {
private let powersync: any PowerSyncDatabaseProtocol
@State var didSync: Bool = false
init(powersync: any PowerSyncDatabaseProtocol) {
self.powersync = powersync
}
var body: some View {
if !didSync {
ProgressView().task {
do {
try await powersync.waitForFirstSync()
} catch {
// TODO: Handle errors
}
}
}
}
}
```
For async use cases, use [`waitForFirstSync`](https://powersync-ja.github.io/powersync-swift/documentation/powersync/powersyncdatabaseprotocol/waitforfirstsync\(\)).
Use the [HasSynced](https://github.com/powersync-ja/powersync-dotnet/blob/main/PowerSync/PowerSync.Common/DB/Crud/SyncStatus.cs) property (available since version 0.0.6-alpha.1 of the SDK) to indicate to the user whether the initial sync is in progress.
```cs theme={null}
using PowerSync.Common.Client;
// Example of using HasSynced to show whether the first sync has completed
if (status?.HasSynced == true)
{
Console.WriteLine("Initial sync completed!");
}
else
{
Console.WriteLine("Busy with initial sync...");
}
// For async use cases, use WaitForFirstSync which returns a task that completes once the first full sync has completed
await db.WaitForFirstSync();
// Wait for a specific priority level to complete syncing
// The priority parameter is available since version 0.0.6-alpha.1 of the SDK
var prioritySyncRequest = new PowerSyncDatabase.PrioritySyncRequest{ Priority = 1 };
await db.WaitForFirstSync(request: prioritySyncRequest);
```
Example not yet available.
## Report sync download progress
You can show users a progress bar when data downloads using the `downloadProgress` property from the
[SyncStatus](https://pub.dev/documentation/powersync/latest/powersync/SyncStatus/downloadProgress.html) class.
`downloadProgress.downloadedFraction` gives you a value from 0.0 to 1.0 representing the total sync progress. This is especially useful for long-running initial syncs.
As an example, this widget renders a progress bar when a download is active:
```dart theme={null}
import 'package:flutter/material.dart';
import 'package:powersync/powersync.dart' hide Column;
class SyncProgressBar extends StatelessWidget {
final PowerSyncDatabase db;
/// When set, show progress towards the [BucketPriority] instead of towards
/// the full sync.
final BucketPriority? priority;
const SyncProgressBar({
super.key,
required this.db,
this.priority,
});
@override
Widget build(BuildContext context) {
return StreamBuilder(
stream: db.statusStream,
initialData: db.currentStatus,
builder: (context, snapshot) {
final status = snapshot.requireData;
final progress = switch (priority) {
null => status.downloadProgress,
var priority? => status.downloadProgress?.untilPriority(priority),
};
if (progress != null) {
return Center(
child: Column(
children: [
const Text('Busy with sync...'),
LinearProgressIndicator(value: progress.downloadedFraction),
Text(
'${progress.downloadedOperations} out of ${progress.totalOperations}')
],
),
);
} else {
return const SizedBox.shrink();
}
},
);
}
}
```
Also see:
* [SyncDownloadProgress API](https://pub.dev/documentation/powersync/latest/powersync/SyncDownloadProgress-extension-type.html)
* [Demo component](https://github.com/powersync-ja/powersync.dart/blob/main/demos/supabase-todolist/lib/widgets/guard_by_sync.dart)
You can show users a progress bar when data downloads using the `downloadProgress` property from the [SyncStatus](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/SyncStatus) class. This is especially useful for long-running initial syncs. `downloadProgress.downloadedFraction` gives you a value from 0.0 to 1.0 representing the total sync progress.
Example:
```jsx theme={null}
import { useStatus } from '@powersync/react';
import { FC, ReactNode } from 'react';
import { View } from 'react-native';
import { Text, LinearProgress } from '@rneui/themed';
export const SyncProgressBar: FC<{ priority?: number }> = ({ priority }) => {
const status = useStatus();
const progressUntilNextSync = status.downloadProgress;
const progress = priority == null ? progressUntilNextSync : progressUntilNextSync?.untilPriority(priority);
if (progress == null) {
  return <></>;
}
return (
  <View>
    <LinearProgress variant="determinate" value={progress.downloadedFraction} />
    {progress.downloadedOperations == progress.totalOperations ? (
      <Text>Applying server-side changes</Text>
    ) : (
      <Text>
        Downloaded {progress.downloadedOperations} out of {progress.totalOperations}.
      </Text>
    )}
  </View>
);
};
```
Also see:
* [SyncStatus API](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/SyncStatus)
* [Demo component](https://github.com/powersync-ja/powersync-js/blob/main/demos/react-native-supabase-todolist/library/widgets/GuardBySync.tsx)
You can show users a progress bar when data downloads using the `downloadProgress` property from the
[SyncStatus](https://powersync-ja.github.io/powersync-js/web-sdk/classes/SyncStatus) class. This is especially useful for long-running initial syncs. `downloadProgress.downloadedFraction` gives you a value from 0.0 to 1.0 representing the total sync progress.
Example (React, using [MUI](https://mui.com) components):
```jsx theme={null}
import { Box, LinearProgress, Stack, Typography } from '@mui/material';
import { useStatus } from '@powersync/react';
import { FC, ReactNode } from 'react';
export const SyncProgressBar: FC<{ priority?: number }> = ({ priority }) => {
const status = useStatus();
const progressUntilNextSync = status.downloadProgress;
const progress = priority == null ? progressUntilNextSync : progressUntilNextSync?.untilPriority(priority);
if (progress == null) {
  return <></>;
}
return (
  <Stack spacing={1}>
    <LinearProgress variant="determinate" value={progress.downloadedFraction * 100} />
    <Box>
      {progress.downloadedOperations == progress.totalOperations ? (
        <Typography>Applying server-side changes</Typography>
      ) : (
        <Typography>
          Downloaded {progress.downloadedOperations} out of {progress.totalOperations}.
        </Typography>
      )}
    </Box>
  </Stack>
);
};
```
Also see:
* [SyncStatus API](https://powersync-ja.github.io/powersync-js/web-sdk/classes/SyncStatus)
* [Demo component](https://github.com/powersync-ja/powersync-js/blob/main/demos/react-supabase-todolist/src/components/widgets/GuardBySync.tsx)
Example not yet available.
Example not yet available.
You can show users a progress bar when data downloads using the `syncStatus.downloadProgress` property. This is especially useful for long-running initial syncs. `downloadProgress.downloadedFraction` gives a value from 0.0 to 1.0 representing the total sync progress.
Example (Compose):
```kotlin theme={null}
import androidx.compose.foundation.background
import androidx.compose.foundation.layout.Arrangement
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.foundation.layout.padding
import androidx.compose.material.LinearProgressIndicator
import androidx.compose.material.MaterialTheme
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp
import com.powersync.PowerSyncDatabase
import com.powersync.bucket.BucketPriority
import com.powersync.compose.composeState
/**
* Shows a progress bar while a sync is active.
*
* The [priority] parameter can be set to, instead of showing progress until the end of the entire
* sync, only show progress until data in the [BucketPriority] is synced.
*/
@Composable
fun SyncProgressBar(
db: PowerSyncDatabase,
priority: BucketPriority? = null,
) {
val state by db.currentStatus.composeState()
val progress = state.downloadProgress?.let {
if (priority == null) {
it
} else {
it.untilPriority(priority)
}
}
if (progress == null) {
return
}
Column(
modifier = Modifier.fillMaxSize().background(MaterialTheme.colors.background),
horizontalAlignment = Alignment.CenterHorizontally,
verticalArrangement = Arrangement.Center,
) {
LinearProgressIndicator(
modifier = Modifier.fillMaxWidth().padding(8.dp),
progress = progress.fraction,
)
if (progress.downloadedOperations == progress.totalOperations) {
Text("Applying server-side changes...")
} else {
Text("Downloaded ${progress.downloadedOperations} out of ${progress.totalOperations}.")
}
}
}
```
Also see:
* [SyncDownloadProgress API](https://powersync-ja.github.io/powersync-kotlin/core/com.powersync.sync/-sync-download-progress/index.html)
* [Demo component](https://github.com/powersync-ja/powersync-kotlin/blob/main/demos/supabase-todolist/shared/src/commonMain/kotlin/com/powersync/demos/components/GuardBySync.kt)
You can show users a progress bar when data downloads using the `downloadProgress` property from the [`SyncStatusData`](https://powersync-ja.github.io/powersync-swift/documentation/powersync/syncstatusdata/) object. `downloadProgress.downloadedFraction` gives you a value from 0.0 to 1.0 representing the total sync progress. This is especially useful for long-running initial syncs.
Example:
```swift theme={null}
struct SyncProgressIndicator: View {
private let powersync: any PowerSyncDatabaseProtocol
private let priority: BucketPriority?
@State private var status: SyncStatusData? = nil
init(powersync: any PowerSyncDatabaseProtocol, priority: BucketPriority? = nil) {
self.powersync = powersync
self.priority = priority
}
var body: some View {
VStack {
if let totalProgress = status?.downloadProgress {
let progress = if let priority = self.priority {
totalProgress.untilPriority(priority: priority)
} else {
totalProgress
}
ProgressView(value: progress.fraction)
if progress.downloadedOperations == progress.totalOperations {
Text("Applying server-side changes...")
} else {
Text("Downloaded \(progress.downloadedOperations) out of \(progress.totalOperations)")
}
}
}.task {
status = powersync.currentStatus
for await status in powersync.currentStatus.asFlow() {
self.status = status
}
}
}
}
```
Also see:
* [SyncStatusData API](https://powersync-ja.github.io/powersync-swift/documentation/powersync/syncstatusdata/)
* [SyncDownloadProgress API](https://powersync-ja.github.io/powersync-swift/documentation/powersync/syncdownloadprogress/)
* [Demo component](https://github.com/powersync-ja/powersync-swift/blob/main/Demo/PowerSyncExample/Components/ListView.swift)
You can show users a progress bar when data downloads using the `DownloadProgress()` method from the [SyncStatus](https://github.com/powersync-ja/powersync-dotnet/blob/main/PowerSync/PowerSync.Common/DB/Crud/SyncStatus.cs) class. `DownloadProgress().DownloadedFraction` gives you a value from 0.0 to 1.0 representing the total sync progress. This is especially useful for long-running initial syncs.
**Version compatibility**: The `DownloadProgress()` method is available since version 0.0.6-alpha.1 of the SDK. The event listener uses `db.Events.OnStatusChanged.ListenAsync` (since v0.0.11-alpha.1).
Example:
```cs theme={null}
using PowerSync.Common.Client;
using PowerSync.Common.DB.Crud;
class SyncProgressBar
{
private PowerSyncDatabase _db;
private int? _priority; // Optional: when set, show progress towards this priority instead of the full sync
private SyncStatus? _currentStatus;
public async Task StartListeningAsync(CancellationToken ct = default)
{
var listener = _db.Events.OnStatusChanged.ListenAsync(ct);
await foreach (var update in listener)
{
_currentStatus = update.Status;
DisplayProgress();
}
}
public SyncProgressBar(PowerSyncDatabase db, int? priority = null)
{
_db = db;
_priority = priority;
}
public void DisplayProgress()
{
var status = _currentStatus;
var totalProgress = status?.DownloadProgress();
var progress = _priority == null
? totalProgress
: totalProgress?.UntilPriority(_priority.Value);
if (progress != null)
{
var fraction = progress.DownloadedFraction; // 0.0 to 1.0
var downloadedOps = progress.DownloadedOperations;
var totalOps = progress.TotalOperations;
Console.WriteLine($"Sync progress: {fraction * 100:F1}%");
if (downloadedOps == totalOps)
{
Console.WriteLine("Applying server-side changes...");
}
else
{
Console.WriteLine($"Downloaded {downloadedOps} out of {totalOps} operations");
}
}
}
}
```
Example not yet available.
# Live Queries / Watch Queries
Source: https://docs.powersync.com/client-sdks/watch-queries
Build reactive UIs with watch queries that update when data changes
Watch queries, also known as live queries, are essential for building reactive apps where the UI automatically updates when the underlying data changes. PowerSync's watch functionality allows you to listen for SQL query result changes and receive updates whenever the dependent tables are modified.
# Overview
PowerSync provides multiple approaches to watching queries, each designed for different use cases and performance requirements:
1. Basic Watch Queries - These queries work across all SDKs, providing real-time updates when dependent tables change.
2. Incremental Watch Queries - Only emit updates when data actually changes, preventing unnecessary re-renders. Available in JavaScript SDKs only.
3. Differential Watch Queries - Provide detailed information about what specifically changed between result sets. Available in JavaScript SDKs only.
Choose the approach that best fits your platform and performance needs.
# Basic Watch Queries
PowerSync supports the following basic watch queries based on your platform. These APIs return query results whenever the underlying tables change and are available across all SDKs.
Scroll horizontally to find your preferred platform/framework for an example:
This method is only being maintained for backwards compatibility purposes. Use the improved `db.query.watch()` API instead (see [Incremental Watch Queries](#incremental-watch-queries) below).
The original watch method uses the `AsyncIterator` pattern. This is the foundational watch API that works across all JavaScript environments and is maintained for backwards compatibility.
```javascript theme={null}
async function* pendingLists(): AsyncIterable<any[]> {
for await (const result of db.watch(
`SELECT * FROM lists WHERE state = ?`,
['pending']
)) {
yield result.rows?._array ?? [];
}
}
```
This method is only being maintained for backwards compatibility purposes. Use the improved `db.query.watch()` API instead (see [Incremental Watch Queries](#incremental-watch-queries) below).
The callback-based watch method that doesn't require `AsyncIterator` polyfills. Use this approach when you need smoother React Native compatibility or prefer synchronous method signatures:
```javascript theme={null}
const pendingLists = (onResult: (lists: any[]) => void): void => {
db.watch(
'SELECT * FROM lists WHERE state = ?',
['pending'],
{
onResult: (result: any) => {
onResult(result.rows?._array ?? []);
}
}
);
}
```
React hook that combines watch functionality with built-in loading, fetching, and error states. Use this when you need convenient state management without React Suspense:
```javascript theme={null}
const {
data: pendingLists,
isLoading,
isFetching,
error
} = useQuery('SELECT * FROM lists WHERE state = ?', ['pending']);
```
React Suspense-based hook that automatically handles loading and error states through Suspense boundaries. Use this when you want to leverage React's concurrent features and avoid manual state handling:
```javascript theme={null}
const { data: pendingLists } = useSuspenseQuery('SELECT * FROM lists WHERE state = ?', ['pending']);
```
Vue composition API hook with built-in loading, fetching, and error states. Use this for reactive watch queries in Vue applications:
```javascript theme={null}
const {
data: pendingLists,
isLoading,
isFetching,
error
} = useQuery('SELECT * FROM lists WHERE state = ?', ['pending']);
```
Use this method to watch for changes to the dependent tables of any SQL query:
```dart theme={null}
StreamBuilder(
stream: db.watch('SELECT * FROM lists WHERE state = ?', ['pending']),
builder: (context, snapshot) {
if (snapshot.hasData) {
// TODO: implement your own UI here based on the result set
return ...;
} else {
return const Center(child: CircularProgressIndicator());
}
},
)
```
Use this method to watch for changes to the dependent tables of any SQL query:
```kotlin theme={null}
fun watchPendingLists(): Flow<List<ListItem>> =
db.watch(
"SELECT * FROM lists WHERE state = ?",
listOf("pending"),
) { cursor ->
ListItem(
id = cursor.getString("id"),
name = cursor.getString("name"),
)
}
```
Use this method to watch for changes to the dependent tables of any SQL query:
```swift theme={null}
func watchPendingLists() throws -> AsyncThrowingStream<[ListContent], Error> {
try db.watch(
sql: "SELECT * FROM lists WHERE state = ?",
parameters: ["pending"]
) { cursor in
try ListContent(
id: cursor.getString(name: "id"),
name: cursor.getString(name: "name"),
)
}
}
```
Use this method to watch for changes to the dependent tables of any SQL query:
```csharp theme={null}
// Define a result type with properties matching the schema columns (some columns omitted here for brevity)
// public class ListResult { public string id; public string name; public string owner_id; ... }
// Optional cancellation token to stop watching
var cts = new CancellationTokenSource();
// Register listener synchronously on the calling thread...
var listener = db.Watch(
"SELECT * FROM lists WHERE owner_id = ?",
[ownerId],
new SQLWatchOptions { Signal = cts.Token }
);
// ...then listen to changes on another thread (or await foreach directly if already in an async context)
_ = Task.Run(async () =>
{
await foreach (var results in listener)
{
Console.WriteLine("Lists: ");
foreach (var result in results)
{
Console.WriteLine($"{result.id}: {result.name}");
}
}
}, cts.Token);
// To stop watching, cancel the token: cts.Cancel();
```
Use this method to watch for changes to the dependent tables of any SQL query:
```rust theme={null}
async fn watch_pending_lists(db: &PowerSyncDatabase) -> Result<(), PowerSyncError> {
let stream = db.watch_statement(
"SELECT * FROM lists WHERE state = ?".to_string(),
params!["pending"],
|stmt, params| {
let mut rows = stmt.query(params)?;
let mut mapped = vec![];
while let Some(row) = rows.next()? {
mapped.push(() /* TODO: Read row into list struct */)
}
Ok(mapped)
},
);
let mut stream = pin!(stream);
// Note: The stream is never-ending, so you probably want to call this in an independent async
// task.
while let Some(event) = stream.try_next().await? {
// Update UI to display rows
}
Ok(())
}
```
# Incremental Watch Queries
Basic watch queries can cause performance issues in UI frameworks like React because they return new data on every dependent table change, even when the actual data in the query hasn't changed. This can lead to excessive re-renders as components receive updates unnecessarily.
Incremental watch queries solve this by comparing result sets using configurable comparators and only emitting updates when the comparison detects actual data changes. These queries still query the SQLite database under the hood on each dependent table change, but compare the result sets and only yield results if a change has been made.
**JavaScript Only**: Incremental and differential watch queries are currently only available in the JavaScript SDKs starting from:
* Web v1.25.0
* React Native v1.23.1
* Node.js v0.8.1
Basic Syntax:
```javascript theme={null}
db.query({ sql: 'SELECT * FROM lists WHERE state = ?', parameters: ['pending'] }).watch();
```
Scroll horizontally to find your preferred approach for an example:
The `WatchedQuery` class provides an improved API: it includes loading, fetching, and error states, supports multiple listeners, cleans up automatically when the PowerSync client closes, and offers the new `updateSettings()` API for dynamic parameter changes. This is the preferred approach for JavaScript SDKs:
```javascript theme={null}
// Create an instance of a WatchedQuery
const pendingLists = db
.query({
sql: 'SELECT * FROM lists WHERE state = ?',
parameters: ['pending']
})
.watch();
// The registerListener method can be used multiple times to listen for updates
const dispose = pendingLists.registerListener({
onData: (data) => {
// This callback will be called whenever the data changes
console.log('Data updated:', data);
},
onStateChange: (state) => {
// This callback will be called whenever the state changes
// The state contains metadata about the query, such as isFetching, isLoading, etc.
console.log('State changed:', state.error, state.isFetching, state.isLoading, state.data);
},
onError: (error) => {
// This callback will be called if the query fails
console.error('Query error:', error);
}
});
```
`WatchedQuery` class with configurable comparator that compares result sets before emitting to listeners, preventing unnecessary listener invocations when data hasn't changed. Use this when you want shared query instances plus result set comparison for incremental updates:
```javascript theme={null}
// Create an instance of a WatchedQuery
const pendingLists = db
.query({
sql: 'SELECT * FROM lists WHERE state = ?',
parameters: ['pending']
})
.watch({
comparator: {
checkEquality: (current, previous) => {
// This comparator will only report updates if the data changes.
return JSON.stringify(current) === JSON.stringify(previous);
}
}
});
// Register listeners as before...
```
React hook that preserves object references for unchanged items and uses row-level comparators to minimize re-renders. Use this when you want built-in state management plus incremental updates for React components:
```javascript theme={null}
const {
data: pendingLists,
isLoading,
isFetching,
error
} = useQuery('SELECT * FROM lists WHERE state = ?', ['pending'], {
rowComparator: {
keyBy: (item) => item.id,
compareBy: (item) => JSON.stringify(item)
}
});
```
React Suspense hook that preserves object references for unchanged items and uses row-level comparators to minimize re-renders. Use this when you want concurrent React features, automatic state handling, and memoization-friendly object stability:
```javascript theme={null}
const { data: lists } = useSuspenseQuery('SELECT * FROM lists WHERE state = ?', ['pending'], {
rowComparator: {
keyBy: (item) => item.id,
compareBy: (item) => JSON.stringify(item)
}
});
```
Providing a `rowComparator` to the React hooks ensures that components only re-render when the query result actually changes. When combined with React memoization (e.g., `React.memo`) on row components that receive query row objects as props, this approach prevents unnecessary updates at the individual row component level, resulting in more efficient UI rendering.
```jsx theme={null}
const TodoListsWidget = () => {
const { data: lists } = useQuery('[SQL]', [...parameters], { rowComparator: DEFAULT_ROW_COMPARATOR });
return (
<div>
{/* The individual row widgets will only re-render if the corresponding row has changed */}
{lists.map((listRecord) => (
<TodoWidget key={listRecord.id} record={listRecord} />
))}
</div>
);
};
const TodoWidget = React.memo(({ record }) => {
return <div>{record.name}</div>;
});
```
Existing `AsyncIterator` API with configurable comparator that compares current and previous result sets, only yielding when the comparator detects changes. Use this if you want to maintain the familiar `AsyncIterator` pattern from the basic watch query API:
```javascript theme={null}
async function* pendingLists(): AsyncIterable<any[]> {
for await (const result of db.watch('SELECT * FROM lists WHERE state = ?', ['pending'], {
comparator: {
checkEquality: (current, previous) => JSON.stringify(current) === JSON.stringify(previous)
}
})) {
yield result.rows?._array ?? [];
}
}
```
Existing Callback API with configurable comparator that compares result sets and only invokes the callback when changes are detected. Use this if you want to maintain the familiar callback pattern from the basic watch query API:
```javascript theme={null}
const pendingLists = (onResult: (lists: any[]) => void): void => {
db.watch(
'SELECT * FROM lists WHERE state = ?',
['pending'],
{
onResult: (result: any) => {
onResult(result.rows?._array ?? []);
}
},
{
comparator: {
checkEquality: (current, previous) => {
// This comparator will only report updates if the data changes.
return JSON.stringify(current) === JSON.stringify(previous);
}
}
}
);
};
```
# Differential Watch Queries
Differential watch queries go a step further than incremental watch queries by computing and reporting diffs between result sets (added/removed/updated items) while preserving object references for unchanged items. This enables more precise UI updates.
**JavaScript Only**: Incremental and differential watch queries are currently only available in the JavaScript SDKs starting from:
* Web v1.25.0
* React Native v1.23.1
* Node.js v0.8.1
For large result sets where re-running and comparing full query results becomes expensive, consider using trigger-based table diffs. See [High Performance Diffs](/client-sdks/high-performance-diffs).
Basic syntax:
```javascript theme={null}
db.query({ sql: 'SELECT * FROM lists WHERE state = ?', parameters: ['pending'] }).differentialWatch();
```
Use differential watch when you need to know exactly which items were added, removed, or updated rather than re-processing entire result sets:
```javascript theme={null}
// Create an instance of a WatchedQuery
const pendingLists = db
.query({
sql: 'SELECT * FROM lists WHERE state = ?',
parameters: ['pending']
})
.differentialWatch();
// The registerListener method can be used multiple times to listen for updates
const dispose = pendingLists.registerListener({
onData: (data) => {
// This callback will be called whenever the data changes
console.log('Data updated:', data);
},
onStateChange: (state) => {
// This callback will be called whenever the state changes
// The state contains metadata about the query, such as isFetching, isLoading, etc.
console.log('State changed:', state.error, state.isFetching, state.isLoading, state.data);
},
onError: (error) => {
// This callback will be called if the query fails
console.error('Query error:', error);
},
onDiff: (diff) => {
// This callback will be called whenever the data changes.
console.log('Data updated:', diff.added, diff.updated);
}
});
```
By default, the `differentialWatch()` method uses a `DEFAULT_ROW_COMPARATOR`. This comparator identifies (keys) each row by its `id` column if present, or otherwise by the JSON string of the entire row. For row comparison, it uses the JSON string representation of the full row. This approach is generally safe and effective for most queries.
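In spirit, the default comparator behaves like the following sketch. This is illustrative only — it is not the SDK's actual `DEFAULT_ROW_COMPARATOR` export, just a stand-in that mirrors the documented keying and comparison behavior:

```typescript
// Illustrative stand-in for the default row comparator described above.
const defaultRowComparatorSketch = {
  // Key each row by its `id` column if present, otherwise by the JSON string of the entire row.
  keyBy: (row: Record<string, unknown>): string =>
    typeof row.id === 'string' ? row.id : JSON.stringify(row),
  // Compare rows by the JSON string representation of the full row.
  compareBy: (row: Record<string, unknown>): string => JSON.stringify(row)
};
```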
For some queries, performance can be improved by supplying a custom `rowComparator`, for example one that compares rows by a `hash` column generated in or stored alongside the row in SQLite. Such hashes currently require manual implementation.
```javascript theme={null}
const pendingLists = db
.query({
sql: 'SELECT * FROM lists WHERE state = ?',
parameters: ['pending']
})
.differentialWatch({
rowComparator: {
keyBy: (item) => item.id,
compareBy: (item) => item._hash
}
});
```
The [Yjs Document Collaboration Demo
app](https://github.com/powersync-ja/powersync-js/tree/main/demos/yjs-react-supabase-text-collab) showcases the use of
differential watch queries. New document updates are passed to Yjs for consolidation as they are synced. See the
implementation
[here](https://github.com/powersync-ja/powersync-js/blob/main/demos/yjs-react-supabase-text-collab/src/library/powersync/PowerSyncYjsProvider.ts)
for more details.
# The `WatchedQuery` Class
Both incremental and differential queries use the new `WatchedQuery` class. Instances of this class are built via the new `query` method on the database, combined with the `watch` and `differentialWatch` methods:
```javascript theme={null}
const watchedQuery = db.query({ sql: 'SELECT * FROM lists', parameters: [] }).watch();
```
This class provides advanced features:
* Automatically reprocesses itself if the PowerSync schema has been updated with `updateSchema`.
* Automatically closes itself when the PowerSync client has been closed.
* Allows for the query parameters to be updated after instantiation.
* Allows shared listening to state changes.
* New `updateSettings` API for dynamic parameter updates (see below).
## Query Sharing
`WatchedQuery` instances can be shared across components:
```javascript theme={null}
// Create a shared query instance
const sharedListsQuery = db.query({ sql: 'SELECT * FROM lists WHERE state = ?', parameters: ['pending'] }).watch();
// Multiple components can listen to the same query
const dispose1 = sharedListsQuery.registerListener({
onData: (data) => updatePendingListsDisplay(data)
});
const dispose2 = sharedListsQuery.registerListener({
onData: (data) => updatePendingListsCount(data.length)
});
```
## Dynamic Parameter Updates
Update query parameters to affect all listeners of the query:
```javascript theme={null}
// Updates to query parameters can be performed in a single place, affecting all listeners
sharedListsQuery.updateSettings({
query: new GetAllQuery({ sql: 'SELECT * FROM lists WHERE state = ?', parameters: ['canceled'] })
});
```
## React Hook for External WatchedQuery Instances
When you need to share query instances across components or manage their lifecycle independently from component mounting, use the `useWatchedQuerySubscription` hook. This is ideal for global state management, query caching, or when multiple components need to listen to the same data:
```javascript theme={null}
// Managing the WatchedQuery externally can extend its lifecycle and allow in-memory caching between components.
const pendingLists = db
.query({
sql: 'SELECT * FROM lists WHERE state = ?',
parameters: ['pending']
})
.watch();
// In the component
export const MyComponent = () => {
// In React one could import the `pendingLists` query or create a context provider for various queries
const { data } = useWatchedQuerySubscription(pendingLists);
return (
<div>
{data.map((item) => (
<div key={item.id}>{item.name}</div>
))}
</div>
);
};
```
# Writing Data
Source: https://docs.powersync.com/client-sdks/writing-data
Write data to your local SQLite database and manage the upload queue
Write data using SQL `INSERT`, `UPDATE`, or `DELETE` statements. PowerSync automatically queues these writes and uploads them to your backend via the `uploadData()` function in your [backend connector](/intro/setup-guide#connect-to-powersync-service-instance).
## Basic Write Operations
```typescript React Native (TS), Web & Node.js theme={null}
// Insert a new todo
await db.execute(
'INSERT INTO todos (id, created_at, list_id, description) VALUES (uuid(), date(), ?, ?)',
[listId, 'Buy groceries']
);
// Update a todo
await db.execute(
'UPDATE todos SET completed = 1, completed_at = date() WHERE id = ?',
[todoId]
);
// Delete a todo
await db.execute('DELETE FROM todos WHERE id = ?', [todoId]);
```
```kotlin Kotlin theme={null}
// Insert a new todo
database.writeTransaction {
database.execute(
sql = "INSERT INTO todos (id, created_at, list_id, description) VALUES (uuid(), date(), ?, ?)",
parameters = listOf(listId, "Buy groceries")
)
}
// Update a todo
database.execute(
sql = "UPDATE todos SET completed = 1, completed_at = date() WHERE id = ?",
parameters = listOf(todoId)
)
// Delete a todo
database.execute(
sql = "DELETE FROM todos WHERE id = ?",
parameters = listOf(todoId)
)
```
```swift Swift theme={null}
// Insert a new todo
try await db.execute(
sql: "INSERT INTO todos (id, created_at, list_id, description) VALUES (uuid(), date(), ?, ?)",
parameters: [listId, "Buy groceries"]
)
// Update a todo
try await db.execute(
sql: "UPDATE todos SET completed = 1, completed_at = date() WHERE id = ?",
parameters: [todoId]
)
// Delete a todo
try await db.execute(
sql: "DELETE FROM todos WHERE id = ?",
parameters: [todoId]
)
```
```dart Dart/Flutter theme={null}
// Insert a new todo
await db.execute(
'INSERT INTO todos (id, created_at, list_id, description) VALUES (uuid(), date(), ?, ?)',
[listId, 'Buy groceries']
);
// Update a todo
await db.execute(
'UPDATE todos SET completed = 1, completed_at = date() WHERE id = ?',
[todoId]
);
// Delete a todo
await db.execute('DELETE FROM todos WHERE id = ?', [todoId]);
```
```csharp .NET theme={null}
// Insert a new todo
await db.Execute(
"INSERT INTO todos (id, created_at, list_id, description) VALUES (uuid(), datetime(), ?, ?)",
new[] { listId, "Buy groceries" }
);
// Update a todo
await db.Execute(
"UPDATE todos SET completed = 1, completed_at = datetime() WHERE id = ?",
new[] { todoId }
);
// Delete a todo
await db.Execute(
"DELETE FROM todos WHERE id = ?",
new[] { todoId }
);
```
**Best practice**: Use UUIDs when inserting new rows on the client side. UUIDs can be generated offline/locally, allowing for unique identification of records created in the client database before they are synced to the server. See [Client ID](/sync/advanced/client-id) for more details.
## ORM Support
PowerSync integrates with popular ORM libraries, which provide type safety and additional tooling. Using an ORM is often preferable to writing raw SQL queries, especially for common operations.
See [ORM Support](/client-sdks/orms/overview) to learn which ORMs PowerSync supports and how to get started.
## Write Operations and Upload Queue
PowerSync automatically queues writes and uploads them to your backend. The upload queue stores three types of operations:
| Operation | Purpose | Contents | SQLite Statement |
| --------- | ------------------- | -------------------------------------------------------- | --------------------------------- |
| `PUT` | Create new row | Contains the value for each non-null column | Generated by `INSERT` statements. |
| `PATCH` | Update existing row | Contains the row `id`, and value of each changed column. | Generated by `UPDATE` statements. |
| `DELETE` | Delete existing row | Contains the row `id` | Generated by `DELETE` statements. |
For details on how writes are uploaded to your backend, see [Writing Client Changes](/handling-writes/writing-client-changes).
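As an illustration, the three operation types could map onto a REST-style backend as sketched below. The routes and the `CrudEntry` shape here are hypothetical stand-ins (the real SDKs expose their own crud-entry types), and your `uploadData()` implementation defines the actual endpoints:

```typescript
// Minimal illustrative types -- local stand-ins, not the SDK's actual exports.
type CrudOp = 'PUT' | 'PATCH' | 'DELETE';

interface CrudEntry {
  op: CrudOp;
  table: string;
  id: string;
  opData?: Record<string, unknown>;
}

// Translate one queued operation into a REST-style backend call description.
function toBackendRequest(entry: CrudEntry): { method: string; path: string; body?: unknown } {
  switch (entry.op) {
    case 'PUT':
      // Create: opData holds the value of each non-null column.
      return { method: 'PUT', path: `/api/${entry.table}/${entry.id}`, body: entry.opData };
    case 'PATCH':
      // Update: opData holds only the changed columns.
      return { method: 'PATCH', path: `/api/${entry.table}/${entry.id}`, body: entry.opData };
    case 'DELETE':
      // Delete: only the row id is needed.
      return { method: 'DELETE', path: `/api/${entry.table}/${entry.id}` };
  }
}
```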
## Advanced Topics
* [Usage Examples](/client-sdks/usage-examples) - Code examples for common use cases
# Client-Side Integration With Your Backend
Source: https://docs.powersync.com/configuration/app-backend/client-side-integration
The 'backend connector' provides the connection between the PowerSync Client SDK and your [backend application](/configuration/app-backend/setup).
## How PowerSync Uses Your Backend
After you've [instantiated](/intro/setup-guide#instantiate-the-powersync-database) the client-side PowerSync database, you will call `connect()` on it, which causes the PowerSync Client SDK to connect to the [PowerSync Service](/architecture/powersync-service) for the purpose of syncing data to the client-side SQLite database, *and* to connect to your backend application as needed, for two potential purposes:
| Purpose | Description |
| ------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Uploading mutations to your backend:** | Mutations that are made to the client-side SQLite database are uploaded to your backend application, where you control how they're applied to your backend source database (Postgres, MongoDB, MySQL, or SQL Server). This is how PowerSync achieves bi-directional syncing of data: The [PowerSync Service](/architecture/powersync-service) provides the *server-to-client read path* based on your [Sync Streams or Sync Rules (legacy)](/sync/overview), and the *client-to-server write path* goes via your backend. |
| **Authentication integration:** (optional) | PowerSync uses JWTs for authentication between the Client SDK and PowerSync Service. Some [authentication providers](/configuration/auth/overview#common-authentication-providers) generate JWTs for users which PowerSync can verify directly. For others, some code must be [added to your application backend](/configuration/auth/custom) to generate the JWTs. |
## 'Backend Connector'
Accordingly, you must pass a *backend connector* as an argument when you call `connect()` on the client-side PowerSync database. You must define that backend connector, and it must implement two functions/methods:
| Purpose | Function | Description |
| ---------------------------------------- | -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Uploading mutations to your backend:** | `uploadData()` | The PowerSync Client SDK automatically calls this function to upload client-side mutations to your backend. Whenever you write to the client-side SQLite database, those writes are also automatically placed into an *upload queue* by the Client SDK, and the Client SDK processes the entries in the upload queue by calling `uploadData()`. You should define your `uploadData()` function to call your backend application API to upload and apply the write operations to your backend source database. The Client SDK automatically handles retries in the case of failures. See [Writing Data](/client-sdks/writing-data) in the *Client SDKs* section for more details on the implementation of `uploadData()`. |
| **Authentication integration:** | `fetchCredentials()` | This is called every couple of minutes and is used to obtain a JWT as well as the endpoint URL for your PowerSync Service instance. The PowerSync Client SDK uses that JWT to authenticate against the PowerSync Service specified in the endpoint URL. `fetchCredentials()` typically returns an object with `token` (JWT) and `endpoint` fields. See [Authentication Setup](/configuration/auth/overview) for more details on JWT authentication. |
Some authentication providers generate JWTs for users which PowerSync can verify directly, and in that case, your `fetchCredentials()` function implementation can simply return that JWT from client-side state. Your `fetchCredentials()` implementation only needs to retrieve a JWT from your backend if you are using [Custom Authentication](/configuration/auth/custom) integration. See the [Authentication Overview](/configuration/auth/overview) for more background.
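The shape of a backend connector can be sketched as follows. This is a minimal TypeScript-flavored sketch under stated assumptions: the concrete connector interface and database API differ per SDK, and the endpoint and token values are placeholders:

```typescript
interface PowerSyncCredentials {
  endpoint: string; // URL of your PowerSync Service instance
  token: string;    // JWT the Client SDK presents to the PowerSync Service
}

// Sketch of a backend connector. In the real SDKs this implements the SDK's
// connector interface; the values and database shape below are illustrative.
class DemoConnector {
  // Called periodically by the Client SDK to obtain fresh credentials.
  async fetchCredentials(): Promise<PowerSyncCredentials> {
    // If your auth provider issues PowerSync-compatible JWTs, you can return
    // the token from client-side state instead of calling your backend.
    return {
      endpoint: 'https://your-instance.powersync.journeyapps.com', // placeholder
      token: 'your-jwt' // placeholder: retrieve from your auth provider or backend
    };
  }

  // Called by the Client SDK to process entries in the upload queue.
  // How queued writes are read and applied is SDK-specific (see Writing Data).
  async uploadData(database: { getNextCrudTransaction(): Promise<unknown> }): Promise<void> {
    const transaction = await database.getNextCrudTransaction();
    if (transaction == null) return; // nothing queued
    // Send the transaction's operations to your backend API here, then mark
    // the transaction complete so the SDK removes it from the queue.
  }
}
```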
## Example Implementation
For an example implementation of a PowerSync 'backend connector', see the SDK guide for your platform:
## More Examples
For additional implementation examples, see the [Examples](/intro/examples) section.
# CloudCode (for MongoDB Backend Functionality)
Source: https://docs.powersync.com/configuration/app-backend/cloudcode
We've made optional functionality available to MongoDB customers that handles the [backend integration](/configuration/app-backend/setup) required by PowerSync.
This makes PowerSync easier to implement for developers migrating from [MongoDB Atlas Device Sync](/migration-guides/atlas-device-sync) who prefer not having to maintain their own backend code and infrastructure (PowerSync's [typical architecture](/configuration/app-backend/setup) is to use your own backend to process mutations uploaded from clients, and to generate JWTs for authentication if needed).
Specifically, you can use the CloudCode feature of JourneyApps Platform, a [sibling product](https://journeyapps.com) of PowerSync. [CloudCode](https://docs.journeyapps.com/reference/cloudcode/cloudcode-overview) is a serverless cloud functions engine based on Node.js and AWS Lambda. It's provided as a fully-managed service running on the same cloud infrastructure as the rest of PowerSync Cloud. PowerSync and JourneyApps Platform share the same login system, so you don’t need to create a separate account to use CloudCode. For further background, see [this post on our blog](https://www.powersync.com/blog/turnkey-backend-functionality-conflict-resolution-for-powersync).
We are currently making JourneyApps Platform CloudCode available for free to all our customers who use PowerSync with MongoDB. It does require a bit of "white glove" onboarding from our team. [Contact us](/resources/contact-us) if you want to use this functionality.
# Using CloudCode for MongoDB Backend Functionality
There is a MongoDB template available in CloudCode that provides the backend functionality needed for a PowerSync MongoDB implementation. Here is how to use it:
## Create a New JourneyApps Platform Project
To create a new JourneyApps Platform project in order to use CloudCode:
Navigate to the [JourneyApps Admin Portal](https://accounts.journeyapps.com/portal/admin). You should see a list of your projects if you've created any.
Select **Create Project** at the top right of the screen.
Select **JourneyApps Platform Project** and click **Next**.
Enter a project name and click **Next**.
There are options available for managing version control for the project. For simplicity we recommend selecting **Basic (Revisions)** and **JourneyApps** as the Git provider.
Select **TypeScript** as your template language, and **MongoDB CRUD & Auth Backend** as your template. Then click **Create App**.
## Overview of the CloudCode Tasks Created From the Template
To view the CloudCode tasks that were created in the new project using this template, select **CloudCode** at the top of the IDE:
Here you will find four CloudCode tasks:
Here's the purpose of each task:
| Task | Used For | Description |
| --------------- | ------------------------------------------------------------------ | ----------- |
| `generate_keys` | [Authentication Integration](/configuration/app-backend/setup) | A task that can be used to generate a private/public key pair which the `jwks` and `token` tasks (see below) require. This task does **not** expose an HTTP endpoint and should only be used for development and getting started. |
| `jwks` | [Authentication Integration](/configuration/app-backend/setup) | This task [exposes an HTTP endpoint](https://docs.journeyapps.com/reference/cloudcode/triggering-a-cloudcode-task/trigger-cc-via-http) with a `GET` function which returns the public [JWKS](https://stytch.com/blog/understanding-jwks/) details. |
| `token` | [Authentication Integration](/configuration/app-backend/setup) | This task exposes an HTTP endpoint with a `GET` function. The endpoint can be called by your [`fetchCredentials()` function](/configuration/app-backend/client-side-integration) when implementing the PowerSync Client SDK, to generate a JWT that the Client SDK can use to authenticate against the PowerSync Service. |
| `upload` | [Client Mutations](/configuration/app-backend/setup) | This task exposes an HTTP endpoint with a `POST` function which processes uploaded mutations from a PowerSync client and persists them to the source MongoDB database. The endpoint can be called by your [`uploadData()` function](/configuration/app-backend/client-side-integration) when implementing the PowerSync Client SDK. |
If you will not be using [Custom Authentication](/configuration/auth/custom), you do not need the authentication-related tasks. Some [authentication providers](/configuration/auth/overview#common-authentication-providers) (e.g. Auth0, Clerk, Stytch, Keycloak, Azure AD, Google Identity, WorkOS, etc.) already generate JWTs for users which PowerSync can work with directly. If you are *not* using one of those authentication providers, you will need to implement [Custom Authentication](/configuration/auth/custom).
## Setup: Deployment Configuration
Before using the tasks, we need to configure a "deployment".
1. At the top of the IDE, select **Deployments**.
2. Create a new deployment by using the **+** button at the top right, *or* use the default `Testing` deployment. You can configure different deployments for different environments (e.g. staging, production).
3. Now select the **Deployment settings** button for the deployment.
4. In the **Deployment settings** - **General** tab, capture a **Domain** value in the text field. This domain name determines where the HTTP endpoints exposed by these CloudCode tasks can be accessed. The application will validate the domain name to make sure it's available.
5. Select **Save**.
6. Deploy the deployment: you can do so by selecting the **Deploy app** button, which can be found on the far right for each of the deployments you have configured. After the deployment is completed, it will take a few minutes for the domain to be available.
7. Your new domain will be available at `<domain>.poweredbyjourney.com`, where `<domain>` is the value you configured in the previous steps. Open the browser and navigate to the new domain. You should be presented with `Cannot GET /`, because there is no index route.
## Setup: Authentication Integration (Optional)
If you will not be using [Custom Authentication](/configuration/auth/custom), you can skip this part. See the explanatory note about authentication above.
### 1. Generate Key Pair
First, you need to generate a public/private key pair. Do the following to generate the key pair:
1. Open the `generate_keys` CloudCode task.
2. Select the **Test CloudCode Task** button at the top right. This will print the public and private key in the task logs window.
3. Copy and paste the `POWERSYNC_PUBLIC_KEY` and `POWERSYNC_PRIVATE_KEY` to a file — we'll need this in the next step.
This step is only meant for testing and development because the keys are printed in the log files.
For production, [generate a key pair locally](https://github.com/powersync-ja/powersync-jwks-example?tab=readme-ov-file#1-generate-a-key-pair) and move onto step 2 and 3.
### 2. Configure Environment Variables
The following variables need to be set on the deployment for authentication integration purposes:
* `POWERSYNC_PUBLIC_KEY` - This is the `POWERSYNC_PUBLIC_KEY` from the values generated in step 1.
* `POWERSYNC_PRIVATE_KEY` - This is the `POWERSYNC_PRIVATE_KEY` from the values generated in step 1.
* `POWERSYNC_URL` - This is your PowerSync instance URL that can be found in the [PowerSync Dashboard](https://dashboard.powersync.com/).
See the [How to Set Environment Variables](#how-to-set-environment-variables) section below for instructions.
### 3. Test
Open your browser and navigate to `<domain>.poweredbyjourney.com/jwks` (using the domain name you picked in [Setup: Deployment Configuration](#setup:-deployment-configuration)).
If the setup was successful, the `jwks` task will render the keys in JSON format. Make sure the format of your JWKS keys matches the format [in this example](https://hlstmcktecziostiaplz.supabase.co/functions/v1/powersync-jwks) JWKS endpoint.
## Setup: Handling Client Mutations
The following variables need to be set on the deployment for the purpose of handling uploaded client mutations:
* `POWERSYNC_URL` - This is your PowerSync instance URL that can be found in the [PowerSync Dashboard](https://dashboard.powersync.com/).
* `MONGO_URI` - This is the URI of your MongoDB source database, e.g. `mongodb+srv://<username>:<password>@<host>/<database>`
See the next section for instructions.
## How to Set Environment Variables
To set environment variables, do the following:
1. At the top of the IDE, select **Deployments**.
2. Click on **Deployment settings** for the relevant deployment.
3. Select the **Environment Variables** tab.
4. Capture the variable name in the **Name** text field.
5. Capture the variable value in the **Value** text field.
6. (Suggested) Check the **Masked** checkbox to obfuscate the variable value for security purposes.
7. Repeat until all the variables are added.
To finalize the setup, do the following:
1. Select the **Save** button. This is important; otherwise, the variables will not be saved.
2. Deploy the deployment: you can do so by selecting the **Deploy app** button.
## Usage: Authentication Integration (Optional)
Make sure you've configured a deployment and set up environment variables as described in the **Setup** steps above before using the HTTP API endpoints exposed by the CloudCode tasks.
### Token
You would call the `token` HTTP API endpoint when you [implement](/configuration/app-backend/client-side-integration) the `fetchCredentials()` function in your client application.
Send an HTTP GET request to `<domain>.poweredbyjourney.com/token?user_id=<user_id>` to fetch a JWT for a user. You must provide a `user_id` in the query string of the request, as this is included in the JWT that is generated.
The response of the request will be structured like this:
```json theme={null}
{"token":"..."}
```
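On the client side, a `fetchCredentials()` implementation can call this endpoint and parse the response above. Here's a minimal sketch; the domain and user ID are placeholders, and error handling is kept deliberately simple:

```typescript theme={null}
// Hypothetical domain — substitute the one you configured in
// "Setup: Deployment Configuration".
const DOMAIN = 'your-domain.poweredbyjourney.com';

// Build the token endpoint URL; the task expects the user ID in the query string.
function buildTokenUrl(domain: string, userId: string): string {
  return `https://${domain}/token?user_id=${encodeURIComponent(userId)}`;
}

// Fetch a JWT for the given user from the CloudCode `token` task.
async function fetchToken(userId: string): Promise<string> {
  const response = await fetch(buildTokenUrl(DOMAIN, userId));
  if (!response.ok) {
    throw new Error(`Token request failed with status ${response.status}`);
  }
  // Response shape: {"token":"..."}
  const { token } = (await response.json()) as { token: string };
  return token;
}
```

The returned token can then be passed back from `fetchCredentials()` along with your PowerSync endpoint URL.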
### JWKS
The `jwks` HTTP API endpoint is used by PowerSync to validate the token returned from the `<domain>.poweredbyjourney.com/token` endpoint. This URL must be set in the configuration of your PowerSync instance.
Send an HTTP GET request to `<domain>.poweredbyjourney.com/jwks`.
An example of the response format can be found using [this link](https://hlstmcktecziostiaplz.supabase.co/functions/v1/powersync-jwks).
## Usage: Handling Client Mutations
### Upload
You would call the `upload` HTTP API endpoint when you [implement](/configuration/app-backend/client-side-integration) the `uploadData()` function in your client application.
Send an HTTP POST request to `<domain>.poweredbyjourney.com/upload`.
The body of the request payload should look like this:
```json theme={null}
{
"batch": [{
"op": "PUT",
"table": "lists",
"id": "61d19021-0565-4686-acc4-3ea4f8c48839",
"data": {
"created_at": "2024-10-31 10:33:24",
"name": "Name",
"owner_id": "8ea4310a-b7c0-4dd7-ae54-51d6e1596b83"
}
}]
}
```
* `batch` should be an array of mutations from the PowerSync Client SDK.
* `op` refers to the type of each mutation recorded by the PowerSync Client SDK (`PUT`, `PATCH` or `DELETE`). Refer to [Writing Data](/client-sdks/writing-data) and [Writing Client Changes](/handling-writes/writing-client-changes) for details.
* `table` refers to the table in SQLite where the mutation originates from, and should match the name of a collection in MongoDB.
The API will respond with HTTP status `200` if the write was successful.
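An `uploadData()` implementation would assemble this payload from the client's upload queue and POST it to the endpoint. The sketch below uses simplified types (the real Client SDK exposes richer CRUD batch types; see the client-side integration docs) and a placeholder domain:

```typescript theme={null}
// Simplified stand-in for the Client SDK's CRUD entry type.
interface CrudEntry {
  op: 'PUT' | 'PATCH' | 'DELETE';
  table: string;
  id: string;
  opData?: Record<string, unknown>;
}

// Convert queued mutations into the request body shape shown above.
function toUploadPayload(entries: CrudEntry[]) {
  return {
    batch: entries.map((e) => ({
      op: e.op,
      table: e.table,
      id: e.id,
      data: e.opData
    }))
  };
}

// POST the batch to the CloudCode `upload` task; a 200 response means
// the write was applied successfully.
async function uploadBatch(domain: string, entries: CrudEntry[]): Promise<void> {
  const response = await fetch(`https://${domain}/upload`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(toUploadPayload(entries))
  });
  if (!response.ok) {
    throw new Error(`Upload failed with status ${response.status}`);
  }
}
```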
## Customization: Handling Client Mutations
You can make changes to the way the `upload` task writes data to the source MongoDB database.
Here is how:
1. Go to **CloudCode** at the top of the IDE in your JourneyApps Platform project
2. Select and expand the `upload` task in the panel on the left.
3. The `index.ts` contains the entry point function that accepts the HTTP request and has a `MongoDBStorage` class which interacts with the MongoDB database to perform inserts, updates and deletes. To adjust how mutations are performed, take a look at the `updateBatch` function.
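Conceptually, an `updateBatch`-style function maps each mutation op onto a MongoDB write. The sketch below only *plans* the calls rather than executing them, so the mapping is easy to inspect; the template's actual `MongoDBStorage` implementation may differ in details:

```typescript theme={null}
// Illustrative only — not the template's actual code.
type Mutation = {
  op: 'PUT' | 'PATCH' | 'DELETE';
  table: string;
  id: string;
  data?: Record<string, unknown>;
};

// Describe the MongoDB operation that would be performed for a mutation.
function planMutation(m: Mutation) {
  switch (m.op) {
    case 'PUT':
      // Full row replacement; upsert so first-time inserts also succeed.
      return {
        collection: m.table,
        method: 'replaceOne',
        filter: { _id: m.id },
        update: { ...m.data },
        upsert: true
      };
    case 'PATCH':
      // Partial update of only the changed fields.
      return {
        collection: m.table,
        method: 'updateOne',
        filter: { _id: m.id },
        update: { $set: { ...m.data } }
      };
    case 'DELETE':
      return { collection: m.table, method: 'deleteOne', filter: { _id: m.id } };
  }
}
```

Customizations such as schema validation or per-table authorization checks would typically slot in before the write is executed.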
## Production Considerations
Before going into production with this solution, you will need to set up authentication on the HTTP endpoints exposed by the CloudCode tasks.
If you need more data validation and/or authorization than what is provided by the template, that will need to be customized too. Consider introducing schema validation of the data being written to the source MongoDB database. You should use a [purpose-built](https://json-schema.org/tools?query=&sortBy=name&sortOrder=ascending&groupBy=toolingTypes&licenses=&languages=&drafts=&toolingTypes=&environments=&showObsolete=false) library for this, and use [MongoDB Schema Validation](https://www.mongodb.com/docs/manual/core/schema-validation/) to enforce the types in the database.
Please [contact us](/resources/contact-us) for assistance on any of the above.
# App Backend Setup
Source: https://docs.powersync.com/configuration/app-backend/setup
PowerSync generally assumes that you have some kind of "backend application" as part of your overall application architecture — whether it's a backend-as-a-service (e.g. Supabase), a custom backend (e.g. Node.js, Rails, Laravel, Django, ASP.NET), some kind of serverless cloud functions (e.g. Azure Functions, AWS Lambda, Google Cloud Functions, Cloudflare Workers, etc.), or any other equivalent system that allows you to run privileged logic securely.
When you integrate PowerSync into your app project, PowerSync relies on that "backend application" for a few potential purposes:
1. **Allowing client-side mutations to be uploaded** and [applied](/handling-writes/writing-client-changes) to the backend source database (Postgres, MongoDB, MySQL, or SQL Server). When you write to the client-side SQLite database provided by PowerSync, those mutations are also placed into an [upload queue](/architecture/client-architecture#writing-data-via-sqlite-database-and-upload-queue). The PowerSync Client SDK manages uploading of those mutations to your backend using the `uploadData()` function that you define in your [client-side](/configuration/app-backend/client-side-integration) *backend connector* implementation. Your `uploadData()` implementation should call your backend application API to apply the mutations to your source database. The reason why we designed PowerSync this way is to give you full control over things like server-side data validation and authorization of mutations, while PowerSync itself requires minimal permissions.
2. **Authentication integration (optional):** *If* you are implementing custom authentication (see below), your backend is responsible for securely generating the [JWTs](/configuration/auth/overview) used by the PowerSync Client SDK to authenticate with the [PowerSync Service](/architecture/powersync-service).
If you will only use the backend for applying mutations and not for authentication, you can also use some kind of data API service or API platform (e.g. Hasura).
### Processing Mutations From Clients
* **Server-Side Implementation**: [Writing Client Changes](/handling-writes/writing-client-changes) provides guidance on how you can handle mutations in your backend application.
* **Client-Side Implementation**: See [Client-Side Integration](/configuration/app-backend/client-side-integration)
### Authentication (Optional)
Some authentication providers already generate JWTs for users which PowerSync can work with directly — see [Authentication Setup](/configuration/auth/overview).
For others, some backend code/logic must be added to your backend application to generate the JWTs needed for PowerSync — see [Custom Authentication](/configuration/auth/custom).
In your [client-side](/configuration/app-backend/client-side-integration) *backend connector* implementation, you need to define the `fetchCredentials()` function so that it returns a JWT which can be used by PowerSync Client SDK to authenticate with the [PowerSync Service](/architecture/powersync-service).
## Backend Options
If you already have some kind of backend application as part of your stack, it's best to use that existing backend. Otherwise, there are several options for what you can use (this is not an exhaustive list):
### Custom Backend
Our [Example Projects](/intro/examples#backend-examples) page provides simple reference implementations of custom backends (e.g. Node.js, Django, Rails, .NET, etc.)
### Backend-as-a-Service / Developer Platforms
There are several backend-as-a-service developer platforms that work well with PowerSync, for example:
* **Supabase** (for Postgres): Several of our demo apps demonstrate how to use [Supabase](https://supabase.com/) as the Postgres-based backend. Supabase provides an authentication service, PostgREST data APIs, and edge functions for more custom logic. See our [integration guide](/integrations/supabase/guide).
* **Neon** (for Postgres): Similarly to Supabase, [Neon](http://neon.tech/) provides PostgREST data APIs and an authentication service. See our [integration guide](/integrations/neon).
### Serverless Functions
You can use a serverless functions system like Azure Functions, AWS Lambda, Google Cloud Functions, Cloudflare Workers, Vercel Functions, Netlify Functions, Fastly Compute, Deno Deploy, etc.
### For MongoDB: PowerSync Hosted/Managed Option
For developers using MongoDB as a backend source database, an alternative option is to use CloudCode, a serverless cloud functions environment provided by a sibling product of PowerSync that runs in the same cloud environment as PowerSync Cloud. We have a template that you can use as a turnkey starting point. See the [documentation](/configuration/app-backend/cloudcode).
# Auth0
Source: https://docs.powersync.com/configuration/auth/auth0
Setting up Auth0 Authentication with PowerSync
On Auth0, create a new API:
* **Name**: PowerSync
* **Identifier**: PowerSync instance URL, e.g. `https://{instance}.powersync.journeyapps.com`
On the PowerSync instance, add the Auth0 JWKS URI: `https://{auth0-domain}/.well-known/jwks.json`
In the application, generate access tokens with the PowerSync instance URL as the audience, and use this to connect to PowerSync.
# Custom Authentication
Source: https://docs.powersync.com/configuration/auth/custom
Any authentication provider can be supported by generating custom JWTs for PowerSync.
A quick way to get started during development before implementing custom auth is to use [Development Tokens](/configuration/auth/development-tokens).
When you set up custom authentication, you define the [`fetchCredentials()` function](/configuration/app-backend/client-side-integration) in your *backend connector* to retrieve a JWT from your backend application API, making use of your [existing app-to-backend](/configuration/app-backend/setup) authentication:
## Custom Authentication Flow
The process is as follows:
1. Your client app authenticates the user using the app's authentication provider (either a third-party authentication provider or a custom one) and typically gets a session token.
2. The client makes a call to your backend API (authenticated using the above session token), which generates and signs a JWT for PowerSync. (You define the [`fetchCredentials()` function](/configuration/app-backend/client-side-integration) in your *backend connector* so that it makes the API call, and the PowerSync Client SDK automatically invokes `fetchCredentials()` as needed).
1. For example implementations of this backend API endpoint, see [Custom Backend Examples](/intro/examples#backend-examples)
3. The client connects to the PowerSync Service using the above JWT (this is automatically managed by the PowerSync Client SDK).
4. The PowerSync Service verifies the JWT.
## JWT Requirements
Requirements for the signed JWT:
1. The JWT must be signed using a key in the JWKS ([Option 1](#option-1%3A-asymmetric-jwts-—-using-jwks-recommended)) or the HS256 key ([Option 2](#option-2%3A-symmetric-jwts-—-using-hs256))
2. JWT must have a `kid` matching that of the key.
3. The `aud` of the JWT must match the PowerSync instance URL (for Cloud) or one of the audiences configured in `client_auth.audience` (for self-hosted).
1. To get the instance URL when using PowerSync Cloud: In the [PowerSync Dashboard](https://dashboard.powersync.com/), click **Connect** in the top bar and copy the instance URL from the dialog.
2. Alternatively, specify a custom audience in the instance settings (Cloud) or in your config file ([self-hosted](#self-hosted-configuration)).
4. The JWT must expire in 24 hours or less, and 60 minutes or less is recommended. Specifically, both `iat` and `exp` fields must be present, with a difference of 86,400 or less between them.
5. The user ID must be used as the `sub` of the JWT.
6. Additional fields can be added which can be referenced in Sync Streams (as [`auth.parameters()`](/sync/streams/overview#accessing-parameters)) or Sync Rules [parameter queries](/sync/rules/parameter-queries).
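The claim requirements above can be sanity-checked programmatically before you wire a token into the Client SDK. Below is a sketch that validates a decoded JWT payload; the claim names are standard JWT fields, and the instance URL is whatever your deployment uses:

```typescript theme={null}
// Decoded JWT payload with the claims PowerSync cares about.
interface JwtPayload {
  sub?: string;
  aud?: string | string[];
  iat?: number;
  exp?: number;
}

// Return a list of problems; an empty list means the claims look valid.
function checkPowerSyncClaims(payload: JwtPayload, instanceUrl: string): string[] {
  const problems: string[] = [];
  const audiences = Array.isArray(payload.aud) ? payload.aud : [payload.aud];
  if (!audiences.includes(instanceUrl)) {
    problems.push('aud must include the PowerSync instance URL');
  }
  if (!payload.sub) {
    problems.push('sub (user ID) is required');
  }
  if (payload.iat == null || payload.exp == null) {
    problems.push('both iat and exp are required');
  } else if (payload.exp - payload.iat > 86_400) {
    problems.push('token lifetime must be 24 hours (86,400s) or less');
  }
  return problems;
}
```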
## Option 1: Asymmetric JWTs — Using JWKS (Recommended)
This is the recommended approach for production environments. Asymmetric keys provide better security by separating signing (private key) from verification (public key), making key rotation easier and more secure.
A key pair (private + public key) is required to sign and verify JWTs. The private key is used to sign the JWT, and the public key is used to verify it.
PowerSync requires the public key(s) to be specified in [JSON Web Key Set (JWKS)](https://auth0.com/docs/secure/tokens/json-web-tokens/json-web-key-sets) format.
The JWKS can be configured in one of two ways:
* Expose the JWKS on a public URL. PowerSync fetches the keys from this endpoint. We have an example endpoint available [here](https://hlstmcktecziostiaplz.supabase.co/functions/v1/powersync-jwks) — ensure that your response looks similar. This option is useful if you're using an external authentication service with an existing JWKS endpoint and you want to automate key rotation without manual deploys.
* Configure the JWKS directly. Provide the keys directly in your PowerSync instance configuration. This option is useful if you generate tokens yourself and want simpler setup.
Requirements for the public key in the JWKS:
1. Supported signature schemes: RSA, EdDSA and ECDSA.
2. Key type (`kty`): `RSA`, `OKP` (EdDSA) or `EC` (ECDSA).
3. Algorithm (`alg`):
1. `RS256`, `RS384` or `RS512` for RSA
2. `EdDSA` for EdDSA
3. `ES256`, `ES384` or `ES512` for ECDSA
4. Curve (`crv`) - only relevant for EdDSA and ECDSA:
1. `Ed25519` or `Ed448` for EdDSA
2. `P-256`, `P-384` or `P-521` for ECDSA
5. A `kid` must be specified and must match the `kid` in the JWT.
Refer to [this example](https://github.com/powersync-ja/powersync-jwks-example) for creating and verifying JWTs for PowerSync authentication.
Since there is no way to revoke a JWT once issued without rotating the key, we recommend using short expiration periods (e.g. 5 minutes). JWTs older than 60 minutes are not accepted by PowerSync.
### Rotating Keys
If a private key is compromised, rotate the key in the JWKS.
The rotation process differs depending on your JWKS configuration method:
#### JWKS on a Public URL
When using a JWKS exposed on a public URL, PowerSync refreshes the keys from the endpoint every few minutes and will detect new keys immediately.
There is a possibility of false authentication errors until PowerSync refreshes the keys. These errors are typically retried by the client and will have little impact. However, to periodically rotate keys without any authentication failures, follow this process:
1. Add a new key to the JWKS at your endpoint.
2. Wait 5 minutes to ensure PowerSync has fetched the new key.
3. Start signing new JWT tokens using the new key.
4. Wait until all existing tokens have expired.
5. Remove the old key from your JWKS endpoint.
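During steps 1 and 2, your JWKS endpoint would briefly serve both the old and the new key side by side, so tokens signed with either `kid` remain verifiable. For example (the `kid` values and key contents here are placeholders):

```json theme={null}
{
  "keys": [
    {
      "kty": "RSA",
      "alg": "RS256",
      "kid": "old-key",
      "n": "[rsa-modulus]",
      "e": "AQAB"
    },
    {
      "kty": "RSA",
      "alg": "RS256",
      "kid": "new-key",
      "n": "[rsa-modulus]",
      "e": "AQAB"
    }
  ]
}
```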
#### Direct JWKS Configuration
When the JWKS is configured directly in PowerSync (not via a public URL), you must deploy configuration changes for PowerSync to use the new key:
1. Add the new key to your JWKS configuration.
2. Deploy the configuration changes (via the **Save and Deploy** button in the PowerSync Dashboard, or restart the PowerSync Service for self-hosted).
3. Start signing new JWT tokens using the new key.
4. Wait until all existing tokens have expired.
5. Remove the old key from the JWKS configuration and deploy again.
### PowerSync Cloud Configuration
1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to the **Client Auth** view.
2. Configure your JWKS and audience settings. You can either configure the JWKS directly in JSON format (use the **JWKS** section), or configure a **JWKS URI**.
3. Click **Save and Deploy** to apply the changes.
### Self-Hosted Configuration
You can configure authentication using either:
* A JWKS URI endpoint
* Static public keys in the configuration file
This can be configured via your [`config.yaml`](/configuration/powersync-service/self-hosted-instances):
```yaml config.yaml theme={null}
client_auth:
# Option 1: JWKS URI endpoint
jwks_uri: http://demo-backend:6060/api/auth/keys
# Option 2: Static collection of public keys for JWT verification
# jwks:
# keys:
# - kty: 'RSA'
# n: '[rsa-modulus]'
# e: '[rsa-exponent]'
# alg: 'RS256'
# kid: '[key-id]'
audience: ['powersync-dev', 'powersync']
```
## Option 2: Symmetric JWTs — Using HS256
Using shared secrets (HS256) for JWT signing is generally not recommended for production environments due to security risks. We recommend using asymmetric keys (Option 1) instead, which provide better security through public/private key separation.
PowerSync supports HS256 symmetric JWTs for development and testing purposes.
### Generating a Shared Secret
You can generate a shared secret in the terminal using the following command:
```bash theme={null}
openssl rand -base64 32
```
### Base64 URL Encode the Shared Secret
Once you've generated the shared secret, you will need to Base64 URL encode it before setting it in the PowerSync instance Client Auth configuration.
You can use the following command to Base64 URL encode the shared secret:
```bash theme={null}
echo -n "your-value-here" | base64 -w 0 | tr '+/' '-_' | tr -d '='
```
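If you prefer to do the encoding in Node.js rather than with shell tools, `Buffer` supports the `base64url` encoding directly (this is equivalent to the shell pipeline above, including the stripped padding):

```typescript theme={null}
// Base64 URL encode a shared secret using Node's Buffer.
function base64UrlEncode(value: string): string {
  return Buffer.from(value, 'utf8').toString('base64url');
}
```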
### Set the Shared Secret in the PowerSync Instance
1. Go to the [PowerSync Cloud Dashboard](https://dashboard.powersync.com/) and select your project and instance.
2. Go to the **Client Auth** view.
3. Find the section labeled **HS256 Authentication Tokens (ADVANCED)** and click the **+** button to add a new token.
4. Set the **KID** to a unique identifier for the token (you'll use the same KID to sign the token). Set the **Shared Secret** to the Base64 URL encoded shared secret.
5. Click **Save and Deploy**.
1. For self-hosted instances, add the shared secret to your PowerSync Service configuration file, e.g.:
```yaml powersync.yaml theme={null}
client_auth:
jwks:
keys:
- kty: oct
alg: 'HS256'
kid: '[key-id]'
k: '[base64url-encoded-shared-secret]'
```
2. Restart the PowerSync Service.
### Generate New JWTs Using the KID and Shared Secret
Using your newly-created shared secret, you can generate JWT tokens [in your backend](/configuration/app-backend/setup) using the same KID you set in the PowerSync Service configuration. Here's an example TypeScript function using the [`jose`](https://github.com/panva/jose) library:
```typescript theme={null}
import * as jose from 'jose';
export const generateToken = async (payload: Record<string, any>, userId: string) => {
return await new jose.SignJWT(payload)
.setProtectedHeader({ alg: 'HS256', kid: 'your-kid' })
.setSubject(userId)
.setIssuer('https://your-domain.com')
.setAudience('https://your-powersync-instance.com')
.setExpirationTime('60m')
// Note: The shared secret should be read from a secure source or environment variable and not hardcoded.
.sign(Buffer.from('your-base64url-encoded-shared-secret', 'base64url'));
};
```
This JWT can then be used to authenticate with the PowerSync Service. In your [`fetchCredentials()` function](/configuration/app-backend/client-side-integration), you will need to retrieve the token from your backend API.
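For local sanity checks, HS256 signing and verification can also be reproduced with Node's built-in `crypto` module and no extra dependencies. This is a development sketch, not a replacement for a vetted JWT library such as `jose`; the `kid` and secret values below are placeholders:

```typescript theme={null}
import { createHmac, timingSafeEqual } from 'node:crypto';

// Base64 URL encode a string or buffer (JWT segments use this encoding).
const b64url = (data: Buffer | string): string =>
  Buffer.from(data).toString('base64url');

// Sign a payload as an HS256 JWT with the given shared secret and kid.
function signHs256(payload: object, secret: Buffer, kid: string): string {
  const header = { alg: 'HS256', typ: 'JWT', kid };
  const signingInput = `${b64url(JSON.stringify(header))}.${b64url(JSON.stringify(payload))}`;
  const signature = createHmac('sha256', secret).update(signingInput).digest('base64url');
  return `${signingInput}.${signature}`;
}

// Verify an HS256 JWT signature (does not check exp/aud claims).
function verifyHs256(token: string, secret: Buffer): boolean {
  const [header, payload, signature] = token.split('.');
  const expected = createHmac('sha256', secret).update(`${header}.${payload}`).digest();
  const actual = Buffer.from(signature, 'base64url');
  return actual.length === expected.length && timingSafeEqual(actual, expected);
}
```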
# Development Tokens
Source: https://docs.powersync.com/configuration/auth/development-tokens
Generate temporary development tokens for authentication.
PowerSync allows generating temporary development tokens for authentication.
This is useful for developers who want to get up and running quickly, without full auth integration.
This can also be used to generate a token for a specific user to debug issues.
## Generating a Development Token
1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance
2. Go to the **Client Auth** view
3. Check the **Development tokens** setting and save your changes
4. Click the **Connect** button in the top bar
5. Enter a user ID:
* If your Sync Streams/Rules data isn't filtered by user (same data syncs to all users), you can use any value (e.g., `test-user`).
* If your data is filtered by parameters, use a user ID that matches a user in your database. PowerSync uses this (e.g. `auth.user_id()` in Sync Streams or `request.user_id()` in Sync Rules) to determine what to sync.
6. Click **Generate Token** and copy the token
Development tokens expire after 12 hours.
Follow the steps below. Steps 1 and 2 configure signing keys and your PowerSync Service config; in Step 3 you can use the PowerSync CLI (recommended) or the [test-client](https://github.com/powersync-ja/powersync-service/tree/main/test-client) to generate the token.
Generate a temporary private/public key-pair (RS256) or shared key (HS256) for JWT signing and verification.
Use an online JWK generator like [mkjwk.org](https://mkjwk.org/) (select RSA, 2048 bits, Signature use, RS256 algorithm).
Or generate locally with Node.js:
```bash theme={null}
# Install pem-jwk if needed
npm install -g pem-jwk
# Generate private key
openssl genrsa -out private-key.pem 2048
# Convert public key to JWK format
openssl rsa -in private-key.pem -pubout | pem-jwk
```
Use an online JWK generator like [mkjwk.org](https://mkjwk.org/) (select oct, 256 bits, Signature use, HS256 algorithm) - this outputs base64url directly.
Or generate and convert using OpenSSL:
```bash theme={null}
# Generate and convert to base64url
openssl rand -base64 32 | tr '+/' '-_' | tr -d '='
```
For production environments, shared secrets (HS256) are not recommended.
Add the `client_auth` parameter to your PowerSync config (e.g. `service.yaml`):
Copy the JWK values from [mkjwk.org](https://mkjwk.org/) or the `pem-jwk` output, then add to your config:
```yaml config.yaml theme={null}
# Client (application end user) authentication settings
client_auth:
# static collection of public keys for JWT verification
jwks:
keys:
- kty: 'RSA'
n: '[rsa-modulus]'
e: '[rsa-exponent]'
alg: 'RS256'
kid: 'dev-key-1'
```
Copy the `k` value from mkjwk.org or the OpenSSL output, then add to your config:
```yaml config.yaml theme={null}
# Client (application end user) authentication settings
client_auth:
audience: ['http://localhost:8080', 'http://127.0.0.1:8080']
# static collection of public keys for JWT verification
jwks:
keys:
- kty: oct
alg: 'HS256'
k: '[base64url-encoded-shared-secret]'
kid: 'dev-key-1'
```
These examples use static `jwks: keys:` for simplicity. For production, we recommend using `jwks_uri` to point to a JWKS endpoint instead. See [Custom Authentication](/configuration/auth/custom) for more details.
Choose either the [PowerSync CLI](/tools/cli) (recommended) or the test-client:
Apply your config changes (e.g. restart your PowerSync Service or run `powersync docker reset` if running locally with Docker), then run:
```bash theme={null}
powersync generate token --subject=test-user
```
Replace `test-user` with the user ID you want to authenticate:
* If your Sync Streams/Rules data isn't filtered by user (same data syncs to all users), you can use any value (e.g., `test-user`).
* If your data is filtered by parameters, use a user ID that matches a user in your database. PowerSync uses this (e.g. `auth.user_id()` in Sync Streams or `request.user_id()` in Sync Rules) to determine what to sync.
1. If you have not done so already, clone the [`powersync-service` repo](https://github.com/powersync-ja/powersync-service/tree/main)
2. Install and build:
* In the project root: `pnpm install` and `pnpm build`
* In the `test-client` directory: `pnpm build`
3. Generate a token from the `test-client` directory, pointing at your config file:
```bash theme={null}
node dist/bin.js generate-token --config path/to/config.yaml --sub test-user
```
Replace `test-user` with the user ID you want to authenticate:
* If your Sync Streams/Rules data isn't filtered by user (same data syncs to all users), you can use any value (e.g., `test-user`).
* If your data is filtered by parameters, use a user ID that matches a user in your database. PowerSync uses this (e.g. `auth.user_id()` in Sync Streams or `request.user_id()` in Sync Rules) to determine what to sync.
Development tokens expire after 12 hours.
## Usage
Development tokens can be used for testing purposes either with the [Sync Diagnostics Client](https://diagnostics-app.powersync.com), the [test-client](https://github.com/powersync-ja/powersync-service/tree/main/test-client), or your app itself (for development purposes).
### Using with Sync Diagnostics Client
The [Sync Diagnostics Client](https://diagnostics-app.powersync.com) allows you to quickly test syncing and inspect a user's SQLite database, to verify that your PowerSync Service configuration and Sync Streams / Sync Rules behave as expected.
1. Open the [Sync Diagnostics Client](https://diagnostics-app.powersync.com)
2. Enter the generated development token at **PowerSync Token**.
3. Enter your PowerSync Service endpoint URL at **PowerSync Endpoint** unless already prepopulated.
4. Click **Proceed**.
5. Wait for the syncing to complete and inspect the synced data in SQLite.
### Using with `test-client`
The [test-client](https://github.com/powersync-ja/powersync-service/tree/main/test-client) is useful for testing syncing without persisting anything to a client-side SQLite database. Amongst other things, it can be used for load testing, simulating many clients syncing concurrently. Consult the [README](https://github.com/powersync-ja/powersync-service/tree/main/test-client#readme) for details on how to provide the development token as an argument to supported `test-client` commands.
### Using with Your Application
To use the temporary development token in your application, update the [`fetchCredentials()` function](/configuration/app-backend/client-side-integration) in your *backend connector* to return the generated token.
```typescript React Native, Web & Capacitor (TS) theme={null}
async fetchCredentials(): Promise<PowerSyncCredentials> {
// for development: use development token
return {
endpoint: 'https://your-instance.powersync.com',
token: 'your-development-token-here'
};
}
```
```typescript Node.js (TS) theme={null}
async fetchCredentials() {
// for development: use development token
return {
endpoint: 'https://your-instance.powersync.com',
token: 'your-development-token-here'
};
}
```
```kotlin Kotlin theme={null}
override suspend fun fetchCredentials(): PowerSyncCredentials {
// for development: use development token
return PowerSyncCredentials(
endpoint = "https://your-instance.powersync.com",
token = "your-development-token-here"
)
}
```
```swift Swift theme={null}
func fetchCredentials() async throws -> PowerSyncCredentials {
// for development: use development token
return PowerSyncCredentials(
endpoint: "https://your-instance.powersync.com",
token: "your-development-token-here"
)
}
```
```dart Dart/Flutter theme={null}
@override
Future<PowerSyncCredentials> fetchCredentials() async {
return PowerSyncCredentials(
endpoint: 'https://your-instance.powersync.com',
token: 'your-development-token-here'
);
}
```
```csharp .NET theme={null}
public async Task<PowerSyncCredentials> FetchCredentials()
{
var powerSyncUrl = "https://your-instance.powersync.com";
var authToken = "your-development-token-here";
// Return credentials with PowerSync endpoint and JWT token
return new PowerSyncCredentials(powerSyncUrl, authToken);
}
```
# Firebase Auth
Source: https://docs.powersync.com/configuration/auth/firebase-auth
Setting up Firebase Authentication with PowerSync
Configure authentication on the PowerSync instance with the following settings:
* **JWKS URI**: `https://www.googleapis.com/service_accounts/v1/jwk/securetoken@system.gserviceaccount.com`
* **JWT Audience**: Firebase project ID
Firebase signs these tokens using RS256.
PowerSync will periodically refresh the keys using the above JWKS URI, and validate tokens against the configured audience (token `aud` value).
The Firebase user UID will be available as:
* `auth.user_id()` in [Sync Streams](/sync/streams/overview) (recommended)
* `request.user_id()` in [Sync Rules](/sync/rules/overview) (previously `token_parameters.user_id`)
To use a different identifier as the user ID in Sync Streams / Sync Rules (for example, user email), use [Custom Authentication](/configuration/auth/custom).
### PowerSync Cloud Configuration
1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to the **Client Auth** view.
2. Configure your Firebase **JWKS URI** and **JWT Audience** settings.
3. Click **Save and Deploy** to apply the changes.
### Self-Hosted Configuration
This can be configured via your [`config.yaml`](/configuration/powersync-service/self-hosted-instances):
```yaml config.yaml theme={null}
client_auth:
  # JWKS URIs can be specified here.
  jwks_uri: 'https://www.googleapis.com/service_accounts/v1/jwk/securetoken@system.gserviceaccount.com'
  # JWT audience: your Firebase project ID
  audience: ['<your-firebase-project-id>']
```
# Authentication Setup
Source: https://docs.powersync.com/configuration/auth/overview
## Client Authentication
PowerSync clients (i.e. apps used by your users that embed the PowerSync Client SDK) authenticate against the server-side [PowerSync Service](/architecture/powersync-service) using [JWTs](https://jwt.io/) (signed tokens).
When you call [`connect()`](/intro/setup-guide#connect-to-powersync-service-instance) on the client-side [PowerSync database](/intro/setup-guide#instantiate-the-powersync-database), you pass a [*backend connector*](/configuration/app-backend/client-side-integration#‘backend-connector’) as an argument, in which you define a `fetchCredentials()` function that must return a JWT as well as the endpoint URL of the PowerSync Service instance that the client will connect to. See [here](/configuration/app-backend/client-side-integration#example-implementation) for example implementations. Your `fetchCredentials()` function is called automatically by the PowerSync Client SDK whenever it needs a fresh JWT.
## Client Authentication Options
### Development & Testing
For a quick way to get up and running during development, you can generate [Development Tokens](/configuration/auth/development-tokens) directly from the [PowerSync Dashboard](https://dashboard.powersync.com/) (PowerSync Cloud) or [locally](/configuration/auth/development-tokens#self-hosted) with a self-hosted setup.
### Proper Authentication Integration (Needed for Production)
**Use Existing JWT from Auth Provider:** Some authentication providers already generate JWTs for users which PowerSync can verify directly — see [Common Authentication Providers](#common-authentication-providers) below. In this scenario, your [`fetchCredentials()` function](#client-authentication) can return the existing JWT from your client-side state.
**Custom Auth Integration: Generate JWTs:** For others, some backend code must be added to your application backend to generate the JWTs needed for PowerSync — see [Custom Authentication](/configuration/auth/custom). In this scenario, your `fetchCredentials()` function should make an API call to your [backend application](/configuration/app-backend/setup) to obtain a JWT.
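To sketch the custom-auth flow, a `fetchCredentials()` implementation can call your backend for a fresh token. Everything below is illustrative: the `/api/powersync-token` route, the `{ token }` response shape, and the endpoint URL are stand-ins for your own backend's details, and the fetch function is injectable purely to make the helper testable:

```typescript
// Hypothetical helper for a backend connector's fetchCredentials().
// Assumes your backend exposes a route that returns { token: string }.
export async function fetchPowerSyncToken(
  backendUrl: string,
  fetchFn: typeof fetch = fetch // injectable for testing
): Promise<{ endpoint: string; token: string }> {
  const response = await fetchFn(`${backendUrl}/api/powersync-token`);
  if (!response.ok) {
    throw new Error(`Token request failed with status ${response.status}`);
  }
  const { token } = (await response.json()) as { token: string };
  return {
    // The PowerSync Service instance URL for your project
    endpoint: 'https://your-instance.powersync.com',
    token
  };
}
```

In a real connector you would typically also forward the user's session (for example a cookie or Authorization header) so the backend can mint a token for the correct user.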
## Common Authentication Providers
PowerSync supports JWT-based authentication from various providers. The table below shows commonly used authentication providers, their JWKS URLs, and any specific configuration requirements.
| Provider | Configuration Notes | Documentation | JWKS URL |
| ----------------------------------------- | ---------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- |
| **Supabase** | Uses Supabase's **JWT Secret** | [Supabase Auth Setup](/configuration/auth/supabase-auth) | Direct integration available |
| **Firebase Auth / GCP Identity Platform** | JWT Audience: Firebase project ID | [Firebase Auth Setup](/configuration/auth/firebase-auth) | `https://www.googleapis.com/service_accounts/v1/jwk/securetoken@system.gserviceaccount.com` |
| **Auth0** | JWT Audience: PowerSync instance URL | [Auth0 Setup](/configuration/auth/auth0) | `https://{auth0-domain}/.well-known/jwks.json` |
| **Clerk** | Additional configuration may be required | [Clerk Documentation](https://clerk.com/docs/backend-requests/making/jwt-templates#create-a-jwt-template) | `https://{yourClerkDomain}/.well-known/jwks.json` |
| **Stytch** | Additional configuration may be required | [Stytch Documentation](https://stytch.com/docs/api/jwks-get) | `https://{live_or_test}.stytch.com/v1/sessions/jwks/{project-id}` |
| **Keycloak** | Additional configuration may be required | [Keycloak Documentation](https://documentation.cloud-iam.com/how-to-guides/configure-remote-jkws.html) | `https://{your-keycloak-domain}/auth/realms/{realm-name}/protocol/openid-connect/certs` |
| **Amazon Cognito** | Additional configuration may be required | [Cognito Documentation](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-using-tokens-verifying-a-jwt.html) | `https://cognito-idp.{region}.amazonaws.com/{userPoolId}/.well-known/jwks.json` |
| **Azure AD** | Additional configuration may be required | [Azure AD Documentation](https://learn.microsoft.com/en-us/entra/identity-platform/access-tokens) | `https://login.microsoftonline.com/{tenantId}/discovery/v2.0/keys` |
| **Google Identity** | Additional configuration may be required | [Google Identity Documentation](https://developers.google.com/identity/openid-connect/openid-connect#discovery) | `https://www.googleapis.com/oauth2/v3/certs` |
| **SuperTokens** | Additional configuration may be required | [SuperTokens Documentation](https://supertokens.com/docs/quickstart/integrations/aws-lambda/session-verification/using-jwt-authorizer) | `https://{YOUR_SUPER_TOKENS_CORE_CONNECTION_URI}/.well-known/jwks.json` |
| **WorkOS** | Additional configuration may be required | [WorkOS Documentation](https://workos.com/docs/reference/user-management/session-tokens/jwks) | `https://api.workos.com/sso/jwks/{YOUR_CLIENT_ID}` |
| **Custom JWT** | See custom auth requirements | [Custom Auth Setup](/configuration/auth/custom) | Your own JWKS endpoint |
# Supabase Auth
Source: https://docs.powersync.com/configuration/auth/supabase-auth
PowerSync can verify Supabase JWTs directly when connected to a Supabase-hosted Postgres database.
You can implement various types of authentication when using PowerSync with Supabase:
#### Standard [Supabase Auth](https://supabase.com/docs/guides/auth)
These examples show how to implement [`fetchCredentials()` in your client-side *backend connector*](/configuration/app-backend/client-side-integration#‘backend-connector’) to get the Supabase JWT from the Supabase Client Library:
* [JavaScript example](https://github.com/powersync-ja/powersync-js/blob/58fd05937ec9ac993622666742f53200ee694585/demos/react-supabase-todolist/src/library/powersync/SupabaseConnector.ts#L87)
* [Dart/Flutter example](https://github.com/powersync-ja/powersync.dart/blob/9ef224175c8969f5602c140bcec6dd8296c31260/demos/supabase-todolist/lib/powersync.dart#L38)
* [Kotlin example](https://github.com/powersync-ja/powersync-kotlin/blob/4f60e2089745dda21b0d486c70f47adbbe24d289/connectors/supabase/src/commonMain/kotlin/com/powersync/connector/supabase/SupabaseConnector.kt#L75)
#### Anonymous Sign-Ins
This example shows the use of Supabase's `signInAnonymously()` followed by an implementation of [`fetchCredentials()`](/configuration/app-backend/client-side-integration#‘backend-connector’) that gets the JWT from the Supabase Client Library:
* [JavaScript example](https://github.com/powersync-ja/powersync-js/blob/58fd05937ec9ac993622666742f53200ee694585/demos/react-multi-client/src/library/SupabaseConnector.ts#L47)
#### Fully Custom Auth
This example shows how to implement Supabase Edge Functions to generate custom JWTs for PowerSync (either for signed-in users or anonymous users) as well as expose a JWKS endpoint:
* [Example](https://github.com/powersync-ja/powersync-jwks-example/)
#### External Auth Providers
We've heard from the community that Supabase's [support for third-party auth providers](https://supabase.com/blog/third-party-auth-mfa-phone-send-hooks) works with PowerSync, but we don't have any examples for this yet.
## Supabase JWT Signing Keys
Supabase supports two types of JWT signing keys:
| Type | Algorithm | Description |
| --------------------------- | ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------- |
| **Legacy JWT signing keys** | HS256 (symmetric) | Uses a shared secret to sign and verify tokens. This is the original method. |
| **New JWT signing keys** | RS256 (asymmetric) | Uses public/private key pairs. Supabase signs tokens with a private key, and PowerSync verifies them using a public key fetched via JWKS. |
PowerSync supports both methods. Which configuration you need depends on your Supabase project's JWT settings and your PowerSync deployment type.
To check which signing keys your Supabase project uses, go to [Project Settings > JWT](https://supabase.com/dashboard/project/_/settings/jwt) in your Supabase Dashboard.
## PowerSync Cloud
When using PowerSync Cloud with a Supabase-hosted database, PowerSync can auto-detect your Supabase project from the database connection string and configure authentication automatically.
### Using New JWT Signing Keys
This is the recommended approach for Supabase projects using asymmetric JWT signing keys.
1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to the **Client Auth** view.
2. Enable the **Use Supabase Auth** checkbox.
3. Leave the **Supabase JWT Secret** field empty (it's not needed for new keys).
4. Click **Save and Deploy**.
PowerSync auto-detects your Supabase project from the database connection string and configures the JWKS URI (`https://<project-ref>.supabase.co/auth/v1/.well-known/jwks.json`) and JWT audience (`authenticated`) automatically.
### Using Legacy JWT Signing Keys
Legacy JWT signing keys use HS256 (symmetric encryption with shared secrets), which is less secure than asymmetric keys. We recommend migrating to [new JWT signing keys](#migrating-from-legacy-to-new-jwt-signing-keys) for better security.
Use this approach if your Supabase project still uses the legacy HS256 symmetric signing keys.
1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to the **Client Auth** view.
2. Enable the **Use Supabase Auth** checkbox.
3. Copy your **JWT Secret** from your Supabase project's [JWT settings](https://supabase.com/dashboard/project/_/settings/jwt).
4. Paste the secret into the **Supabase JWT Secret (optional) Legacy** field.
5. Click **Save and Deploy**.
### Manual JWKS Configuration
Use manual configuration when PowerSync cannot auto-detect your Supabase project. This happens when:
* You're using a non-standard database connection string
* You're connecting to a self-hosted Supabase instance
* You're using Supabase local development (Docker)
Steps:
1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), go to the **Client Auth** view.
2. Leave the **Use Supabase Auth** checkbox **unchecked**.
3. Add a **JWKS URI**, e.g.:
```
http://localhost:54321/auth/v1/.well-known/jwks.json
```
4. Add `authenticated` as an accepted **JWT Audience**.
5. Click **Save and Deploy**.
If you skip adding the `authenticated` audience, you will see `PSYNC_S2105` errors ("JWT payload is missing a required claim 'aud'").
## Self-Hosted PowerSync
For self-hosted PowerSync instances, configure authentication in your [`config.yaml`](/configuration/powersync-service/self-hosted-instances).
### Using New JWT Signing Keys
When using a standard Supabase connection string, PowerSync auto-detects your Supabase project:
```yaml theme={null}
client_auth:
  supabase: true
```
PowerSync will automatically configure:
* **JWKS URI**: `https://<project-ref>.supabase.co/auth/v1/.well-known/jwks.json`
* **Audience**: `authenticated`
You'll see a log message confirming the configuration:
```
Configured Supabase Auth with https://<project-ref>.supabase.co/auth/v1/.well-known/jwks.json
```
### Using Legacy JWT Signing Keys
Legacy JWT signing keys use HS256 (symmetric encryption with shared secrets), which is less secure than asymmetric keys. We recommend migrating to [new JWT signing keys](#migrating-from-legacy-to-new-jwt-signing-keys) for better security.
For projects using legacy HS256 symmetric signing keys, provide your JWT secret:
```yaml theme={null}
client_auth:
  supabase: true
  supabase_jwt_secret: your-jwt-secret-here
```
Get your JWT secret from your Supabase project's [JWT settings](https://supabase.com/dashboard/project/_/settings/jwt).
### Manual JWKS Configuration
Use manual configuration in any of these scenarios:
* PowerSync cannot detect your Supabase project from the connection string
* You're using self-hosted Supabase
* You're using Supabase local development (Docker)
* You need explicit control over the authentication settings
```yaml theme={null}
client_auth:
  jwks_uri: http://localhost:54321/auth/v1/.well-known/jwks.json
  audience:
    - authenticated
```
When using manual configuration, do not set `supabase: true`. Use `jwks_uri` and `audience` directly.
## Migrating from Legacy to New JWT Signing Keys
If you're migrating your Supabase project from legacy JWT signing keys to the new asymmetric keys:
### Step 1: Complete the Supabase Migration
Follow **all steps** in [Supabase's JWT signing keys migration guide](https://supabase.com/blog/jwt-signing-keys#start-using-asymmetric-jwts-today), including the **"Rotate to asymmetric JWTs"** step.
The migration is not complete until you complete the "Rotate to asymmetric JWTs" step in Supabase. Skipping this step will cause authentication failures.
### Step 2: Update PowerSync Configuration
**For PowerSync Cloud and self-hosted with standard Supabase connections:**
* No changes required. PowerSync auto-detects and uses the new JWKS endpoint.
* If you previously provided a legacy JWT secret, you can remove it (it's no longer needed).
**For manual JWKS configurations:**
* Ensure your **JWKS URI** (`jwks_uri`) points to your Supabase JWKS endpoint.
* Verify the `authenticated` **JWT Audience** (`audience`) is configured.
### Step 3: Clear Cached Tokens
Have all users sign out and sign back in. This ensures they receive new tokens signed with the asymmetric keys.
## Troubleshooting
Debugging [error codes](/debugging/error-codes):
### `PSYNC_S2101`: Could not find an appropriate key in the keystore
This error indicates PowerSync cannot verify the JWT signature. Common causes:
| Cause | Solution |
| ---------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Incomplete Supabase migration** | Complete the "Rotate to asymmetric JWTs" step in Supabase's [migration guide](https://supabase.com/blog/jwt-signing-keys#start-using-asymmetric-jwts-today). |
| **Cached tokens** | Have users sign out and sign back in to get fresh tokens. |
| **Non-standard connection string** | PowerSync couldn't auto-detect your Supabase project. Use [manual JWKS configuration](#manual-jwks-configuration). |
| **Wrong JWT secret** | For legacy keys, verify the JWT secret matches your Supabase project settings. |
### `PSYNC_S2105`: JWT payload is missing a required claim "aud"
This error occurs when using manual JWKS configuration without specifying an audience. Add `authenticated` to your audience configuration.
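When debugging either of these errors, it can help to decode a token locally and inspect the header's `kid` and the payload's `aud` claim. A minimal sketch using Node's `Buffer` (for debugging only: it performs no signature verification):

```typescript
// Decode a JWT's header and payload without verifying the signature,
// so you can inspect the `kid` (header) and `aud` (payload) values
// that PowerSync checks. Never skip verification server-side.
export function decodeJwt(token: string): { header: any; payload: any } {
  const [rawHeader, rawPayload] = token.split('.');
  const decode = (part: string) =>
    JSON.parse(Buffer.from(part, 'base64url').toString('utf8'));
  return { header: decode(rawHeader), payload: decode(rawPayload) };
}
```

For Supabase-issued tokens in the setups described above, `decodeJwt(token).payload.aud` should be `authenticated`.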
### Auto-detection not working
If PowerSync logs this warning:
```
Supabase Auth is enabled, but no Supabase connection string found. Skipping Supabase JWKS URL configuration.
```
This means PowerSync couldn't detect your Supabase project from the database connection string. Use [manual JWKS configuration](#manual-jwks-configuration) instead.
## Sync Streams
The Supabase user UUID will be available as:
* `auth.user_id()` in [Sync Streams](/sync/streams/overview).
* `request.user_id()` in [Sync Rules](/sync/rules/overview)
To use a different identifier as the user ID in Sync Streams / Sync Rules (for example, user email), use [Custom Authentication](/configuration/auth/custom).
# Stytch + Supabase
Source: https://docs.powersync.com/configuration/auth/supabase-auth/stytch
When using [Stytch](https://stytch.com/) for authentication with Supabase projects, PowerSync is compatible with both Consumer and B2B SaaS Stytch project types.
## Consumer Authentication
See this community project for detailed setup instructions:
## B2B SaaS Authentication
The high-level approach is:
* Users authenticate via [Stytch](https://stytch.com/)
* Extract the user and org IDs from the Stytch JWT
* Generate a Supabase JWT by calling a Supabase Edge Function that uses the Supabase JWT Secret for signing a new JWT
* Set the `kid` in the JWT header
* You can obtain this from any other Supabase JWT by extracting the `kid` value from its header — this value is static, even across database upgrades.
* Set the `aud` field to `authenticated`
* Set the `sub` field in the JWT payload to the user ID
* Pass this new JWT into your PowerSync `fetchCredentials()` function
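The JWT-signing step in the Edge Function can be sketched with Node's built-in `crypto` module; no JWT library is strictly required for HS256. The claim values and one-hour expiry below are illustrative placeholders, not prescribed by PowerSync:

```typescript
import { createHmac } from 'node:crypto';

// Base64url-encode without padding, as JWTs require (RFC 7515).
const b64url = (input: string): string =>
  Buffer.from(input).toString('base64url');

// Sign an HS256 JWT. The `kid`, secret, and claim values are
// placeholders for your Supabase project's actual values.
export function signSupabaseJwt(
  userId: string,
  kid: string,
  jwtSecret: string
): string {
  const header = b64url(JSON.stringify({ alg: 'HS256', typ: 'JWT', kid }));
  const now = Math.floor(Date.now() / 1000);
  const payload = b64url(
    JSON.stringify({
      sub: userId,          // the user ID extracted from the Stytch JWT
      aud: 'authenticated', // required audience
      iat: now,
      exp: now + 3600       // 1-hour expiry (illustrative)
    })
  );
  const signature = createHmac('sha256', jwtSecret)
    .update(`${header}.${payload}`)
    .digest('base64url');
  return `${header}.${payload}.${signature}`;
}
```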
Use the below settings in your [PowerSync Dashboard](https://dashboard.powersync.com/):
Reach out to us directly on our [Discord server](https://discord.gg/powersync) if you have any issues with setting up auth.
# PowerSync Cloud Instances
Source: https://docs.powersync.com/configuration/powersync-service/cloud-instances
Create and configure PowerSync Cloud instances of the PowerSync Service.
## Create a PowerSync Instance
When creating a project in the [PowerSync Dashboard](https://dashboard.powersync.com/), *Development* and *Production* instances of the PowerSync Service will be created by default. Select the instance you want to configure.
If you need to create a new instance, follow the steps below.
1. In the dashboard, select your project and open the instance selection dropdown. Click **Add Instance**.
2. Give your instance a name, such as "Production".
3. \[Optional] You can change the default cloud region from US to EU, JP (Japan), AU (Australia) or BR (Brazil) if desired.
* Note: Additional cloud regions will be considered on request, especially for customers on our Enterprise plan. Please [contact us](/resources/contact-us) if you need a different region.
4. Click **Create Instance**.
## Instance Settings
After creating an instance, you can configure various settings through the [PowerSync Dashboard](https://dashboard.powersync.com/):
* **Database Connections**: Connect your instance to your source database. See [Source Database Connection](/configuration/source-db/connection) for details.
* **Client Auth**: Configure how clients authenticate. See [Authentication Setup](/configuration/auth/overview) for details.
* **Sync Streams / Sync Rules (legacy)**: Define what data to sync to clients. See [Sync Streams & Sync Rules Overview](/sync/overview) for details.
* **Settings**: Advanced instance-specific settings.
For more information about managing instances, see the [PowerSync Dashboard](/tools/powersync-dashboard) documentation.
# Self-Hosted Instance Configuration
Source: https://docs.powersync.com/configuration/powersync-service/self-hosted-instances
Configure the PowerSync Service for self-hosted deployments.
## Configuration Methods
The PowerSync Service is configured using key/value pairs in a config file, and supports the following configuration methods:
1. Inject config as an environment variable (which contains the Base64 encoding of a config file)
2. Use a config file mounted on a volume
3. Specify the config as a command line parameter (again Base64 encoded)
Both YAML and JSON config files are supported. You can see examples of the above configuration methods in the [docker-compose](https://github.com/powersync-ja/self-host-demo/blob/d61cea4f1e0cc860599e897909f11fb54420c3e6/docker-compose.yaml#L46) file of our `self-host-demo` app.
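For the environment-variable method, the config file is Base64-encoded into a single line before being injected. A sketch of the encoding step (the sample config content is a stand-in for your real file; check the linked `docker-compose` file for the exact variable name the service expects):

```shell
# Create a minimal stand-in config file (use your real config.yaml)
printf 'replication:\n  connections: []\n' > config.yaml

# Base64-encode into a single line (portable across GNU and BSD base64)
POWERSYNC_CONFIG_B64=$(base64 < config.yaml | tr -d '\n')

# Sanity check: decoding round-trips to the original file contents
echo "$POWERSYNC_CONFIG_B64" | base64 -d | diff - config.yaml && echo "round-trip OK"
```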
## Configuration File Structure
Below is a skeleton config file you can copy and paste to edit locally:
```yaml config.yaml theme={null}
# Settings for source database replication
replication:
  # Specify database connection details
  # Note only 1 connection is currently supported
  # Multiple connection support is on the roadmap
  connections:
    - type: postgresql
      # The PowerSync server container can access the Postgres DB via the DB's service name.
      # In this case the hostname is pg-db
      # The connection URI or individual parameters can be specified.
      uri: postgresql://postgres:mypassword@pg-db:5432/postgres
      # SSL settings
      sslmode: disable # 'verify-full' (default) or 'verify-ca' or 'disable'
      # Note: 'disable' is only suitable for local/private networks, not for public networks

# Connection settings for bucket storage (MongoDB and Postgres are supported)
storage:
  # Option 1: MongoDB Storage
  type: mongodb
  uri: mongodb://mongo:27017/powersync_demo
  # Use these if authentication is required. The user should have `readWrite` and `dbAdmin` roles
  # username: myuser
  # password: mypassword
  # Option 2: Postgres Storage
  # type: postgresql
  # This accepts the same parameters as a Postgres replication source connection
  # uri: postgresql://powersync_storage_user:secure_password@storage-db:5432/postgres
  # sslmode: disable

# The port which the PowerSync API server will listen on
port: 80

# Specify Sync Streams or legacy Sync Rules (see Sync Streams section below).
# Referencing a separate file is recommended so you can edit streams/rules without nesting YAML.
sync_config:
  path: sync-config.yaml

# Settings for client authentication
client_auth:
  # Enable this if using Supabase Auth
  # supabase: true
  # supabase_jwt_secret: your-secret
  # JWKS URIs can be specified here.
  jwks_uri: http://demo-backend:6060/api/auth/keys
  # JWKS audience
  audience: ['powersync-dev', 'powersync']

# Settings for telemetry reporting
# See https://docs.powersync.com/self-hosting/telemetry
telemetry:
  # Opt out of reporting anonymized usage metrics to PowerSync telemetry service
  disable_telemetry_sharing: false

# System-level configuration options
system:
  # Service logging configuration
  logging:
    # Log level for the service logs
    level: info # 'silly', 'debug', 'verbose', 'http', 'info', 'warn', 'error'
    format: text # 'json' or 'text'
```
### Example
A detailed `powersync.yaml` config example with additional comments can be found here:
### Config File Schema
The config file schema is available here:
## Source Database Connections
Specify the connection to your source database in the `replication` section of the config file:
```yaml config.yaml theme={null}
# Settings for source database replication
replication:
  # Specify database connection details
  # Note only 1 connection is currently supported
  # Multiple connection support is on the roadmap
  connections:
    - type: postgresql
      # The PowerSync server container can access the Postgres DB via the DB's service name.
      # In this case the hostname is pg-db
      # The connection URI or individual parameters can be specified.
      uri: postgresql://postgres:mypassword@pg-db:5432/postgres
      # SSL settings
      sslmode: disable # 'verify-full' (default) or 'verify-ca' or 'disable'
      # Note: 'disable' is only suitable for local/private networks, not for public networks
```
For details on connecting to your source database, see [Connect PowerSync to Your Source Database](/intro/setup-guide#3-connect-powersync-to-your-source-database) in the Setup Guide.
If you are using hosted Supabase, you will need to enable IPv6 for Docker as per [the Docker docs](https://docs.docker.com/config/daemon/ipv6/). If your host OS does not support Docker IPv6 (e.g. macOS), you will need to run Supabase locally.
This is because Supabase only allows direct database connections over IPv6 — PowerSync cannot connect using the connection pooler.
## Bucket Storage Database
The PowerSync Service requires a storage database to store the data and metadata for [buckets](/architecture/powersync-service#bucket-system). You can use either MongoDB or Postgres for this purpose. The bucket storage database should be specified in the `storage` section of the config file:
```yaml config.yaml theme={null}
# Connection settings for bucket storage (MongoDB and Postgres are supported)
storage:
  # Option 1: MongoDB Storage
  type: mongodb
  uri: mongodb://mongo:27017/powersync_demo
  # Use these if authentication is required. The user should have `readWrite` and `dbAdmin` roles
  # username: myuser
  # password: mypassword
  # Option 2: Postgres Storage
  # type: postgresql
  # This accepts the same parameters as a Postgres replication source connection
  # uri: postgresql://powersync_storage_user:secure_password@storage-db:5432/postgres
  # sslmode: disable
```
The *bucket storage database* is separate from your *source database*.
### MongoDB Storage
MongoDB requires at least one replica set node. A single node is fine for development/staging environments, but a 3-node replica set is recommended [for production](/maintenance-ops/self-hosting/deployment-architecture) deployments.
[MongoDB Atlas](https://www.mongodb.com/products/platform/atlas-database) enables replica sets by default for new clusters.
However, if you're using your own environment you can enable this manually by running:
```bash theme={null}
mongosh "mongodb+srv://powersync.abcdef.mongodb.net/" --apiVersion 1 --username myuser --eval 'try{rs.status().ok && quit(0)} catch {} rs.initiate({_id: "rs0", version: 1, members: [{ _id: 0, host : "mongo:27017" }]})'
```
If you are rolling your own Docker environment, you can include this init script in your `docker-compose` file to configure a replica set as a once-off operation:
```yaml theme={null}
# Initializes the MongoDB replica set. This service will not usually be actively running
mongo-rs-init:
  image: mongo:7.0
  depends_on:
    - mongo
  restart: "no"
  entrypoint:
    - bash
    - -c
    - 'sleep 10 && mongosh --host mongo:27017 --eval ''try{rs.status().ok && quit(0)} catch {} rs.initiate({_id: "rs0", version: 1, members: [{ _id: 0, host : "mongo:27017" }]})'''
```
### Postgres Storage
Available since version 1.3.8 of the [`powersync-service`](https://hub.docker.com/r/journeyapps/powersync-service), you can use Postgres as an alternative bucket storage database.
#### Database Setup
You'll need to create a dedicated user and schema for PowerSync bucket storage. You can either:
1. Let PowerSync create the schema (recommended):
```sql theme={null}
CREATE USER powersync_storage_user WITH PASSWORD 'secure_password';
-- The user should only have access to the schema it created
GRANT CREATE ON DATABASE postgres TO powersync_storage_user;
```
2. Or manually create the schema:
```sql theme={null}
CREATE USER powersync_storage_user WITH PASSWORD 'secure_password';
CREATE SCHEMA IF NOT EXISTS powersync AUTHORIZATION powersync_storage_user;
GRANT CONNECT ON DATABASE postgres TO powersync_storage_user;
GRANT USAGE ON SCHEMA powersync TO powersync_storage_user;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA powersync TO powersync_storage_user;
```
#### Demo App
A demo app with Postgres bucket storage is available [here](https://github.com/powersync-ja/self-host-demo/tree/main/demos/nodejs-postgres-bucket-storage).
#### Postgres Version Requirements
Separate Postgres servers are required for replication connections (i.e. source database) and bucket storage **if using Postgres versions below 14**.
| Postgres Version | Server configuration |
| ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Below 14 | Separate servers are required for the source and bucket storage. Replication will be blocked if the same server is detected. |
| 14 and above | The source database and bucket storage database can be on the same server. Using the same database (with separate schemas) is supported but may lead to higher CPU usage. Using separate servers remains an option. |
## Sync Streams
Your Sync Streams (or legacy Sync Rules) configuration can be in a separate file (recommended) or inline in the main config. The `sync_config:` key is used for both Sync Streams and Sync Rules.
**Separate file**: Referencing a file with `path:` keeps your main config tidy and makes editing Sync Streams/Sync Rules easier. Ensure the file is available at that path (e.g. in the same directory as your main config or on a mounted volume).
```yaml Sync Streams — Separate File (Recommended) theme={null}
# sync-config.yaml (reference from main config with sync_config: path: sync-config.yaml)
config:
  edition: 3
streams:
  todos:
    auto_subscribe: true
    query: SELECT * FROM todos WHERE owner_id = auth.user_id()
```
```yaml Sync Streams — Inline theme={null}
sync_config:
  content: |
    config:
      edition: 3
    streams:
      todos:
        auto_subscribe: true
        query: SELECT * FROM todos WHERE owner_id = auth.user_id()
```
```yaml Sync Rules — Separate File (Legacy) theme={null}
# sync-config.yaml (reference from main config with sync_config: path: sync-config.yaml)
bucket_definitions:
  global:
    data:
      - SELECT * FROM lists
      - SELECT * FROM todos
```
```yaml Sync Rules — Inline (Legacy) theme={null}
sync_config:
  content: |
    bucket_definitions:
      global:
        data:
          - SELECT * FROM lists
          - SELECT * FROM todos
```
For more information, see [Sync Streams](/sync/streams/overview) (recommended) or [Sync Rules](/sync/rules/overview) (legacy).
To verify that your Sync Rules are functioning correctly, inspect the contents of your bucket storage database.
#### MongoDB Example
If you are running MongoDB in Docker, run the following:
```bash theme={null}
docker exec -it {MongoDB container name} mongosh "mongodb://{MongoDB service host}/{MongoDB database name}" --eval "db.bucket_data.find().pretty()"
# Example
docker exec -it self-host-demo-mongo-1 mongosh "mongodb://localhost:27017/powersync_demo" --eval "db.bucket_data.find().pretty()"
```
## Client Authentication
Client authentication is configured in the `client_auth` section:
```yaml config.yaml theme={null}
client_auth:
  # Enable this if using Supabase Auth
  # supabase: true
  # supabase_jwt_secret: your-secret
  # Option 1: JWKS URI endpoint
  jwks_uri: http://demo-backend:6060/api/auth/keys
  # Option 2: Static collection of public keys for JWT verification
  # jwks:
  #   keys:
  #     - kty: 'RSA'
  #       n: '[rsa-modulus]'
  #       e: '[rsa-exponent]'
  #       alg: 'RS256'
  #       kid: '[key-id]'
  # JWKS audience
  audience: ['powersync-dev', 'powersync']
```
For production environments, we recommend using JWKS with asymmetric keys (RS256, EdDSA, or ECDSA) rather than shared secrets (HS256). Asymmetric keys provide better security through public/private key separation and easier key rotation. See [Custom Authentication](/configuration/auth/custom) for more details.
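To illustrate the asymmetric approach, Node's built-in `crypto` module can generate an RS256 key pair and export the public half in the JWK shape that a JWKS endpoint serves. The `kid` value is an arbitrary placeholder:

```typescript
import { generateKeyPairSync } from 'node:crypto';

// Generate an RS256 key pair: the private key stays on your backend
// for signing tokens; only the public key is published.
const { publicKey, privateKey } = generateKeyPairSync('rsa', {
  modulusLength: 2048
});

// Export the public key as a JWK, i.e. one entry in the `keys`
// array returned by a JWKS endpoint.
const jwk = {
  ...publicKey.export({ format: 'jwk' }),
  alg: 'RS256',
  kid: 'my-key-id' // arbitrary stable identifier; signed tokens reference it in their header
};

// A JWKS endpoint would respond with: { keys: [jwk] }
// while `privateKey` is used server-side to sign JWTs.
```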
For more details, see [Client Authentication](/configuration/auth/overview).
## Environment Variables
The config file uses custom tags for environment variable substitution.
Using `!env [variable name]` will substitute the value of the environment variable named `[variable name]`.
Only environment variables with names starting with `PS_` can be substituted.
See examples here:
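For example, a hedged sketch of `!env` substitution in `config.yaml` (the `PS_DATABASE_URI` and `PS_JWKS_URI` variable names are placeholders; both must be set in the service's environment and must start with the `PS_` prefix to be substituted):

```yaml config.yaml theme={null}
# Substitutes the value of the PS_DATABASE_URI environment variable
replication:
  connections:
    - type: postgresql
      uri: !env PS_DATABASE_URI

# Substitutes the value of the PS_JWKS_URI environment variable
client_auth:
  jwks_uri: !env PS_JWKS_URI
```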
# Source Database Connection
Source: https://docs.powersync.com/configuration/source-db/connection
Connect a PowerSync Cloud instance to your source database.
Each database provider has its own quirks when it comes to specifying connection details, so we have documented database-specific and provider-specific instructions below:
Jump to: [Postgres](#postgres-provider-specifics) | [MongoDB](#mongodb-specifics) | [MySQL](#mysql-beta-specifics) | [SQL Server](#sql-server-alpha-specifics)
The below instructions are currently written for PowerSync Cloud. For self-hosted PowerSync instances, specify database connection details in the config file as documented [here](/configuration/powersync-service/self-hosted-instances#source-database-connections).
## Postgres Provider Specifics
Select your Postgres hosting provider for steps to connect your newly-created PowerSync instance to your Postgres database:
1. From your Supabase Dashboard, select **Connect** in the top navigation bar (or follow this [link](https://supabase.com/dashboard/project/_?showConnect=true)).
2. In the **Direct connection** section, copy the complete connection string (including the `[YOUR-PASSWORD]` placeholder).
3. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to **Database Connections**.
4. Click **Connect to Source Database** and ensure the **Postgres** tab is selected.
5. Paste the connection string into the **URI** field. PowerSync will automatically parse this URI to populate the database connection details.
6. Update the **Username** and **Password** fields to use the `powersync_role` and password you created when configuring your Supabase for PowerSync (see [Source Database Setup](/configuration/source-db/setup#supabase)).
7. Note: PowerSync includes Supabase's CA certificate by default, so you can use `verify-full` SSL mode without additional configuration.
8. Verify your setup by clicking **Test Connection** and resolve any errors.
9. Click **Save Connection**.
PowerSync will now create an isolated cloud environment for your instance. This typically takes a minute or two.
### Enable Supabase Auth
After your database connection is configured, enable Supabase Auth:
1. In the PowerSync Dashboard, go to **Client Auth** for your instance.
2. Enable the **Use Supabase Auth** checkbox.
3. If your Supabase project uses the legacy JWT signing keys, copy your JWT Secret from your Supabase project's settings ([JWT Keys](https://supabase.com/dashboard/project/_/settings/jwt)) and paste the secret into the **Supabase JWT Secret (optional) Legacy** field in the PowerSync Dashboard. If you're using Supabase's new [JWT signing keys](https://supabase.com/blog/jwt-signing-keys), you can leave this field empty (PowerSync will auto-configure the JWKS endpoint for your project).
4. Click **Save and Deploy** to apply the changes.
### Troubleshooting
Supabase is configured with a maximum of 4 logical replication slots, with one often used for Supabase Realtime (unrelated to PowerSync).
It is therefore easy to run out of replication slots, resulting in an error such as "All replication slots are in use" when deploying. To resolve this, delete inactive replication slots by running this query:
```sql theme={null}
select slot_name, pg_drop_replication_slot(slot_name) from pg_replication_slots where active = false;
```
1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to the **Database Connections** view.
2. Click **Connect to Source Database** and ensure the **Postgres** tab is selected.
3. [Locate the connection details from AWS RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToPostgreSQLInstance.html):
* Copy the **"Endpoint"** value from the AWS Management Console.
* Paste the endpoint into the "**Host**" field in the PowerSync Dashboard.
* Complete the remaining fields: "**Name**", "**Host**", "**Port**", "**Database name**", "**Username**", "**Password**" and "**SSL Mode**" are required. You can paste a connection string into the "**URI**" field to simplify this.
* "**Name**" can be any name for the connection.
* "**Port**" is 5432 for Postgres databases.
* "**Username**" and "**Password**" map to the `powersync_role` created in [Source Database Setup](/configuration/source-db/setup).
* PowerSync has the AWS RDS CA certificate pre-configured — `verify-full` SSL mode can be used directly, without any additional configuration required.
4. Click **Test Connection** and fix any errors.
5. Click **Save Connection**.
PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete.
### Troubleshooting
If you get an error such as "IPs in this range are not supported", the instance is likely not configured to be publicly accessible. A DNS lookup on the host should give a public IP, and not for example `10.x.x.x` or `172.31.x.x`.
1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to the **Database Connections** view.
2. Click **Connect to Source Database** and ensure the **Postgres** tab is selected.
3. Fill in your connection details from Azure.
1. "**Name**", "**Host**", "**Port**", "**Database name**", "**Username**", "**Password**" and "**SSL Mode**" are required. You can also paste a connection string into the "**URI**" field to simplify data entry.
2. "**Name**" can be any name for the connection.
3. "**Port**" is 5432 for Postgres databases.
4. "**Username**" and "**Password**" map to the `powersync_role` created in [Source Database Setup](/configuration/source-db/setup).
5. PowerSync has the Azure CA certificate pre-configured — `verify-full` SSL mode can be used directly, without any additional configuration required.
4. Click **Test Connection** and fix any errors.
* If you encounter the error `"must be superuser or replication role to start walsender"`, ensure that you've followed all the steps for enabling logical replication documented [here](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-logical#prerequisites-for-logical-replication-and-logical-decoding).
5. Click **Save Connection**.
PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete.
1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to the **Database Connections** view.
2. Click **Connect to Source Database** and ensure the **Postgres** tab is selected.
3. Fill in your connection details from Google Cloud SQL.
* "**Name**", "**Host**", "**Port**", "**Database name**", "**Username**", "**Password**" and "**SSL Mode**" are required. You can paste a connection string into the "**URI**" field to simplify data entry.
* "**Name**" can be any name for the connection.
* "**Port**" is 5432 for Postgres databases.
* "**Username**" and "**Password**" map to the `powersync_role` created in [Source Database Setup](/configuration/source-db/setup).
* The server certificate can be downloaded from Google Cloud SQL.
* If SSL is enforced, a client certificate and key must also be created on Google Cloud SQL, and configured on the PowerSync instance.
4. Click **Test Connection** and fix any errors.
5. Click **Save Connection**.
PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete.
1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to the **Database Connections** view.
2. Click **Connect to Source Database** and ensure the **Postgres** tab is selected.
3. Fill in your connection details from [Neon](https://neon.tech/).
1. "**Name**", "**Host**", "**Port**", "**Database name**", "**Username**", "**Password**" and "**SSL Mode**" are required. You can paste a connection string into the "**URI**" field to simplify data entry.
2. "**Name**" can be any name for the connection.
3. "**Port**" is 5432 for Postgres databases.
4. "**Username**" and "**Password**" map to the `powersync_role` created in [Source Database Setup](/configuration/source-db/setup).
5. Note that if you're using a self-signed SSL certificate for your database server, click the **Download Certificate** button to dynamically fetch the recommended certificate directly from your server.
6. If you get an error such as `server certificate not trusted: SELF_SIGNED_CERT_IN_CHAIN`, click **Download Certificate** to attempt automatic resolution.
4. Click **Test Connection** and fix any errors.
5. Click **Save Connection**.
PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete.
1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to the **Database Connections** view.
2. Click **Connect to Source Database** and ensure the **Postgres** tab is selected.
3. Fill in your connection details from [Fly Postgres](https://fly.io/docs/postgres/).
1. "**Name**", "**Host**", "**Port**", "**Database name**", "**Username**", "**Password**" and "**SSL Mode**" are required. You can paste a connection string into the "**URI**" field to simplify data entry.
2. "**Name**" can be any name for the connection.
3. "**Port**" is 5432 for Postgres databases.
4. "**Username**" and "**Password**" map to the `powersync_role` created in [Source Database Setup](/configuration/source-db/setup).
5. Note that if you're using a self-signed SSL certificate for your database server, click the **Download Certificate** button to dynamically fetch the recommended certificate directly from your server.
6. If you get an error such as `server certificate not trusted: SELF_SIGNED_CERT_IN_CHAIN`, click **Download Certificate** to attempt automatic resolution.
4. Click **Test Connection** and fix any errors.
5. Click **Save Connection**.
PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete.
1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to the **Database Connections** view.
2. Click **Connect to Source Database** and ensure the **Postgres** tab is selected.
3. Head to your PlanetScale database dashboard page at `https://app.planetscale.com//` and click on the "Connect" button to get your database connection parameters.
1. In the PowerSync Dashboard, "**Name**", "**Host**", "**Port**", "**Database name**", "**Username**" and "**Password**" are required.
2. "**Name**" can be any name for the connection.
3. "**Host**" is the `host` connection parameter for your database.
4. "**Port**" is 5432 for Postgres databases.
5. "**Username**" and "**Password**" map to the `powersync_role` created in [Source Database Setup](/configuration/source-db/setup).
1. Important: PlanetScale requires your branch ID to be appended to your username. The username should be `powersync_role.`. Your PlanetScale branch ID can be found on the same connection details page.
6. **SSL Mode** can remain the default `verify-full`.
4. Click **Test Connection** and fix any errors.
5. Click **Save Connection**.
PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete.
For other providers and self-hosted databases:
1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to the **Database Connections** view.
2. Click **Connect to Source Database** and ensure the **Postgres** tab is selected.
3. Fill in your connection details:
   1. "**Name**", "**Host**", "**Port**", "**Database name**", "**Username**", "**Password**" and "**SSL Mode**" are required. You can paste a connection string into the "**URI**" field to simplify data entry.
   2. "**Name**" can be any name for the connection.
   3. "**Port**" is 5432 for Postgres databases.
   4. "**Username**" and "**Password**" map to the `powersync_role` created in [Source Database Setup](/configuration/source-db/setup).
   5. If you're using a self-signed SSL certificate for your database server, click the **Download Certificate** button to dynamically fetch the recommended certificate directly from your server.
   6. If you get an error such as `server certificate not trusted: SELF_SIGNED_CERT_IN_CHAIN`, click **Download Certificate** to attempt automatic resolution.
4. Click **Test Connection** and fix any errors.
5. Click **Save Connection**.
PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete.
Make sure that your Postgres database allows access to PowerSync's IPs — see [Security and IP Filtering](/configuration/source-db/security-and-ip-filtering)
Also see:
* [Postgres Source Database Setup](/configuration/source-db/setup#postgres)
* Security & IP Filtering: [TLS with Postgres](/configuration/source-db/security-and-ip-filtering#powersync-cloud:-tls-with-postgres)
## MongoDB Specifics
1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to **Database Connections**.
2. Click **Connect to Source Database** and ensure the **MongoDB** tab is selected.
3. Fill in your connection details from MongoDB:
1. Copy your cluster's connection string from MongoDB and paste it into the **URI** field in the PowerSync Dashboard. PowerSync will automatically parse this URI to populate other connection details.
* The format should be `mongodb+srv://[username:password@]host/[database]`. For example, `mongodb+srv://admin:@cluster0.abcde1.mongodb.net/powersync`
2. Enter your database user's password into the **Password** field. See the necessary permissions in [Source Database Setup](/configuration/source-db/setup#mongodb).
3. "**Database name**" is the database in your cluster to replicate.
4. Click **Test Connection** and fix any errors. If you have any issues connecting, reach out to our support engineers on our [Discord server](https://discord.gg/powersync) or otherwise [contact us](/resources/contact-us).
5. Click **Save Connection**.
PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete.
Make sure that your MongoDB database allows access to PowerSync's IPs — see [Security and IP Filtering](/configuration/source-db/security-and-ip-filtering)
Also see:
* [MongoDB Source Database Setup](/configuration/source-db/setup#mongodb)
* [MongoDB Atlas Device Sync Migration Guide](/migration-guides/atlas-device-sync)
## MySQL (Beta) Specifics
Select your MySQL hosting provider for steps to connect your newly-created PowerSync instance to your MySQL database:
To enable binary logging and GTID replication in AWS Aurora, you need to create a [DB Parameter Group](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Reference.ParameterGroups.html)
and configure it with the necessary parameters. Follow these steps:
1. Navigate to the [Amazon RDS console](https://console.aws.amazon.com/rds/).
In the navigation pane, choose **Parameter groups** and click **Create Parameter Group**.
2. Add all the required [binlog configuration](/configuration/source-db/setup#binlog-configuration) parameters.
3. Associate your newly created parameter group with your Aurora cluster:
1. In the navigation pane, choose **Databases**.
2. Select your Aurora cluster.
3. Choose **Modify**.
4. In the **DB Parameter Group** section, select the parameter group you created.
5. Click **Continue** and then **Apply** immediately.
4. Whitelist PowerSync's IPs in your Aurora cluster's security group to allow access. See [Security and IP Filtering](/configuration/source-db/security-and-ip-filtering) for more details.
5. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to the **Database Connections** view.
6. Click **Connect to Source Database** and ensure the **MySQL** tab is selected.
7. Fill in your MySQL connection details from AWS Aurora:
1. "**Name**", "**Host**", "**Port**", "**Database name**", "**Username**", "**Password**" are required.
2. "**Name**" can be any name for the connection.
3. "**Host**" is the endpoint for your Aurora cluster.
4. "**Database name**" is the default database to replicate.
5. "**Username**" and "**Password**" map to your database user.
8. Click **Test Connection** and fix any errors.
9. Click **Save Connection**.
PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete.
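The parameter group in step 2 typically enables row-based binary logging and GTID replication. A hedged sketch of common values (verify against the [binlog configuration](/configuration/source-db/setup#binlog-configuration) requirements; parameter names follow the Aurora MySQL cluster parameter group conventions):

```ini theme={null}
binlog_format = ROW
gtid-mode = ON
enforce_gtid_consistency = ON
```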
For other providers and self-hosted databases:
1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to the **Database Connections** view.
2. Click **Connect to Source Database** and ensure the **MySQL** tab is selected.
3. Fill in your MySQL connection details:
1. "**Name**", "**Host**", "**Port**", "**Database name**", "**Username**", "**Password**" are required.
2. "**Name**" can be any name for the connection.
3. "**Host**" is the endpoint for your database.
4. "**Database name**" is the default database to replicate. Additional databases are derived by qualifying the tables in the Sync Rules.
5. "**Username**" and "**Password**" map to your database user.
4. Click **Test Connection** and fix any errors.
5. Click **Save Connection**.
PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete.
Make sure that your MySQL database allows access to PowerSync's IPs — see [Security and IP Filtering](/configuration/source-db/security-and-ip-filtering)
Also see:
* [MySQL Source Database Setup](/configuration/source-db/setup#mysql-beta)
## SQL Server (Alpha) Specifics
SQL Server support was [introduced](https://releases.powersync.com/announcements/powersync-service) in version 1.18.1 of the PowerSync Service.
1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to the **Database Connections** view.
2. Click **Connect to Source Database** and ensure the **SQL Server** tab is selected.
3. Fill in your SQL Server connection details:
1. "**Name**", "**Host**", "**Port**", "**Database name**", "**Username**", "**Password**" are required.
2. "**Name**" can be any name for the connection.
3. "**Host**" is the endpoint for your SQL Server instance.
4. "**Port**" is typically 1433 for SQL Server (default port).
5. "**Database name**" is the database where CDC is enabled.
6. "**Username**" and "**Password**" map to the database user created in [Source Database Setup](/configuration/source-db/setup#sql-server-alpha) (e.g., `powersync_user`).
4. Click **Test Connection** and fix any errors.
5. Click **Save Connection**.
PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete.
Make sure that your SQL Server database allows access to PowerSync's IPs — see [Security and IP Filtering](/configuration/source-db/security-and-ip-filtering)
Also see:
* [SQL Server Source Database Setup](/configuration/source-db/setup#sql-server-alpha)
# Postgres Maintenance
Source: https://docs.powersync.com/configuration/source-db/postgres-maintenance
## Logical Replication Slots
Postgres logical replication slots are used to keep track of [replication](/architecture/powersync-service#replication-from-the-source-database) progress (recorded as a [LSN](https://www.postgresql.org/docs/current/datatype-pg-lsn.html)).
Every time a new version of [Sync Streams or Sync Rules](/sync/overview) is deployed, PowerSync creates a new replication slot, then switches over and deletes the old replication slot once reprocessing for the new Sync Streams/Rules version is complete.
The replication slots can be viewed using this query:
```sql theme={null}
select slot_name, confirmed_flush_lsn, active, pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) as lag from pg_replication_slots;
```
Example output:
| slot\_name | confirmed\_flush\_lsn | active | lag |
| ---------------------- | --------------------- | ------ | -------- |
| powersync\_1\_c3c8cf21 | 0/70D8240 | 1 | 56 bytes |
| powersync\_2\_e62d7e0f | 0/70D8240 | 1 | 56 bytes |
In some cases, a replication slot may remain unused, for example when a PowerSync instance has been deprovisioned. An unused slot prevents Postgres from deleting older WAL entries.
While retaining WAL is desired behavior during temporary replication downtime, it can result in excessive disk usage if the slot is never used again.
Inactive slots can be dropped using:
```sql theme={null}
select slot_name, pg_drop_replication_slot(slot_name) from pg_replication_slots where active = false;
```
Postgres prevents active slots from being dropped. If a slot is nevertheless dropped (e.g. while a PowerSync instance is disconnected), PowerSync automatically re-creates the slot and restarts replication.
### Maximum Replication Slots
Postgres is configured with a maximum number of replication slots per server. Since each PowerSync instance uses one replication slot for replication and an additional one while deploying a new Sync Streams/Rules version, the maximum number of PowerSync instances connected to one Postgres server is equal to the maximum number of replication slots, minus 1.
If other clients are also using replication slots, this number is reduced further.
The maximum number of slots can be configured by setting `max_replication_slots` (not all hosting providers expose this), and checked using:
```sql theme={null}
select current_setting('max_replication_slots');
```
If this number is exceeded, you'll see an error such as "all replication slots are in use".
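To check how much headroom remains before hitting that error, the number of slots in use can be compared to the configured maximum; a sketch:

```sql theme={null}
select
  count(*) as slots_in_use,
  current_setting('max_replication_slots')::int as max_slots,
  current_setting('max_replication_slots')::int - count(*) as slots_available
from pg_replication_slots;
```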
# Private Endpoints
Source: https://docs.powersync.com/configuration/source-db/private-endpoints
## PowerSync Cloud: AWS Private Endpoints
To avoid exposing a database in AWS to the public internet, using AWS Private Endpoints ([AWS PrivateLink](https://aws.amazon.com/privatelink/)) is an option that provides private networking between the source database and the PowerSync Service. Private Endpoints are currently available on our [Team and Enterprise plans](https://www.powersync.com/pricing).
We use Private Endpoints instead of VPC peering to ensure that no other resources are exposed between the VPCs.
Do not rely on Private Endpoints as the only form of security. Always use strong database passwords, and use client certificates if additional security is required.
## Current Limitations
1. Private Endpoints are currently only supported for Postgres and MongoDB instances. [Contact us](/resources/contact-us) if you need this for MySQL or SQL Server.
2. Self-service is not yet available on the PowerSync side — [contact PowerSync support](/resources/contact-us) to configure the instance.
3. Only AWS is supported currently — other cloud providers are not supported yet.
4. The **Test Connection** function on the [PowerSync Dashboard](https://dashboard.powersync.com/) is not supported yet — the instance has to be deployed to test the connection.
## Concepts
* [AWS PrivateLink](https://aws.amazon.com/privatelink/) is the overarching feature on AWS.
* **VPC/Private Endpoint Service** is the service that exposes the database, and lives in the same VPC as the source database. It provides a one-way connection to the database without exposing other resources in the VPC.
* **Endpoint Service Name** is a unique identifier for this Endpoint Service.
* Each Endpoint Service may have multiple Private Endpoints in different VPCs.
* **VPC/Private Endpoint** is the endpoint in the PowerSync VPC. This is what the PowerSync instance connects to.
For custom Endpoint Services for Postgres:
* **Network Load Balancer (NLB)** is a load balancer that exposes the source database to the Endpoint Service.
* **Target Group** specifies the IPs and ports for the Network Load Balancer to expose.
* **Listener** for the Network Load Balancer is what describes the incoming port on the Network Load Balancer (the port that the PowerSync instance connects to).
## Private Endpoint Setup
MongoDB Atlas supports creating an Endpoint Service per project for AWS.
**Limitations:**
1. Only Atlas clusters in AWS are supported.
2. The Atlas cluster must be in one of the PowerSync [AWS regions](#aws-regions). Cross-region endpoints are not yet supported by MongoDB Atlas.
3. This is only supported for *Atlas* clusters — PowerSync does not support PrivateLink for MongoDB clusters self-hosted in AWS.
### 1. Configure the Endpoint Service
1. In the Atlas project dashboard, go to **Network Access** → **Private Endpoint** → **Dedicated Cluster**.
2. Select **Add Private Endpoint**.
3. Select **AWS** and the relevant AWS region.
4. Wait for the **Endpoint Service** to be created.
5. "Your VPC ID" and "Your Subnet IDs" are not relevant for PowerSync; leave those blank.
6. Avoid running the command to create the "VPC Interface Endpoint"; this step is handled by PowerSync.
7. Note the **Endpoint Service Name**. This is displayed in the command to run, as the `--service-name` option.
The Service Name should look something like `com.amazonaws.vpce.us-east-1.vpce-svc-0123456`.
Skip the final step of configuring the "VPC Endpoint ID" — this will be done later.
### 2. PowerSync Setup
On PowerSync Cloud, create a new instance, but do not configure the connection yet. Copy the **Instance ID**.
[Contact us](/resources/contact-us) and provide:
1. The **Endpoint Service Name**.
2. The PowerSync **Instance ID**.
We will then configure the instance to use the Endpoint Service for the database connection, and provide you with a **VPC Endpoint ID**, in the form `vpce-12346`.
### 3. Finish Atlas Endpoint Service Setup
On the Atlas Private Endpoint Configuration, in the final step, specify the **VPC Endpoint ID** from above.
If you have already closed the dialog, go through the process of creating a Private Endpoint again. It should have the same Endpoint Service Name as before.
Check that the **Endpoint Status** changes to *Available*.
### 4. Get the Connection String
1. On the Atlas cluster, select **Connect**.
2. Select **Private Endpoint** as the connection type, and select the provisioned endpoint.
3. Select **Drivers** as the connection method, and copy the connection string.
The connection string should look something like `mongodb+srv://:@your-cluster-pl-0.abcde.mongodb.net/`.
### 5. Deploy
Once the Private Endpoint has been created on the PowerSync side, it will be visible in the instance settings
under the connection details, as **VPC Endpoint Hostname**.
Configure the instance with the connection string from the previous step, then deploy.
Monitor the logs to ensure the instance can connect after deploying.
To configure a Private Endpoint Service, a network load balancer is required to forward traffic to the database.
This can be used with a Postgres database running on an EC2 instance, or an RDS instance.
For AWS RDS, the guide below does not handle dynamic IPs if the RDS instance's IP changes. This needs additional work to automatically update the IP — see this [AWS blog post](https://aws.amazon.com/blogs/database/access-amazon-rds-across-vpcs-using-aws-privatelink-and-network-load-balancer/) on the topic. This is specifically relevant if using an RDS cluster with failover support.
Use the following steps to configure the Endpoint Service:
### 1. Create a Target Group
1. Obtain the RDS instance's private IP address. Make sure this points to a writable instance.
2. Create a **Target Group** with IP addresses as target type, using the IP address from above. Use TCP protocol, and specify the database port (typically `5432` for Postgres).
Note: The IP address of your RDS instance may change over time. To maintain a consistent connection, consider implementing automation to monitor and update the target group's IP address as needed. See the [AWS blog post](https://aws.amazon.com/blogs/database/access-amazon-rds-across-vpcs-using-aws-privatelink-and-network-load-balancer/) on the topic.
### 2. Create a Network Load Balancer (NLB)
1. Select the same VPC as your RDS instance.
2. Choose at least two subnets in different availability zones.
3. Configure a TCP listener and pick a port (for example `5432` again).
4. Associate the listener with the target group created earlier.
### 3. Modify the Security Group
1. Modify the security group associated with your RDS instance to permit traffic from the load balancer IP range.
### 4. Create a VPC Endpoint Service
1. In the AWS Management Console, navigate to the VPC service and select **Endpoint Services**.
2. Click on **Create Endpoint Service**.
3. Select the Network Load Balancer created in the previous step.
4. If the load balancer is in one of the PowerSync regions (see below), it is not required to select any "Supported Region". If the load balancer is in a different region, select the region corresponding to your PowerSync instance here. Note that this will incur additional AWS charges for the cross-region support.
5. Decide whether to require acceptance for endpoint connections. Disabling acceptance can simplify the process but may reduce control over connections.
6. Under **Supported IP address types**, select both IPv4 and IPv6.
7. After creating the endpoint service, note the **Service Name**. This identifier will be used when configuring PowerSync to connect via PrivateLink.
8. Configure the Endpoint Service to accept connections from the principal `arn:aws:iam::131569880293:root`. See the [AWS documentation](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#add-remove-permissions) for details.
### 5. PowerSync Setup
On PowerSync Cloud, create a new instance, but do not configure the connection yet.
[Contact us](/resources/contact-us) and provide the Service Name from above, as well as the PowerSync instance ID created above. We will then configure the instance to use the Endpoint Service for the database connection.
### 6. Deploy
Once the Private Endpoint has been created on the PowerSync side, it will be visible in the instance settings
under the connection details, as **VPC Endpoint Hostname**.
Verify the connection details, and deploy the instance. Monitor the logs to ensure the instance can connect after deploying.
## AWS Regions
PowerSync Cloud currently runs in the AWS regions below. Make sure the region matching your PowerSync instance is supported by the Endpoint Service.
1. US: `us-east-1`
2. EU: `eu-west-1`
3. BR: `sa-east-1`
4. JP: `ap-northeast-1`
5. AU: `ap-southeast-2`
# Security & IP Filtering
Source: https://docs.powersync.com/configuration/source-db/security-and-ip-filtering
## PowerSync Cloud: IP Filtering
For enhanced security, you can restrict database access to PowerSync Cloud's IP addresses. Below are the IP ranges for each region:
**US (`us-east-1`)**

```
50.19.5.255
34.193.39.149
18.234.18.91
18.233.128.219
34.202.251.156
```

**EU (`eu-west-1`)**

```
79.125.70.43
18.200.209.88
18.234.18.91
18.233.128.219
34.202.251.156
```

**JP (`ap-northeast-1`)**

```
54.248.194.85
57.180.73.135
18.234.18.91
18.233.128.219
34.202.251.156
```

**AU (`ap-southeast-2`)**

```
52.63.101.65
13.211.184.238
18.234.18.91
18.233.128.219
34.202.251.156
```

**BR (`sa-east-1`)**

```
54.207.21.139
54.232.53.97
18.234.18.91
18.233.128.219
34.202.251.156
```

**IPv6 (all regions)**

```
2602:817::/44
```
Do not rely on IP filtering as a primary form of security. Always use strong database passwords, and use client certificates if additional security is required. Support for private endpoints is also available in certain scenarios (see below).
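For self-managed Postgres servers, the same restriction can also be expressed in `pg_hba.conf`. A minimal sketch using the first US address above (the role name and auth method are assumptions; adapt them to your setup and repeat the line for each PowerSync IP):

```
# TYPE     DATABASE  USER            ADDRESS          METHOD
hostssl    all       powersync_role  50.19.5.255/32   scram-sha-256
```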
## PowerSync Cloud: AWS Private Endpoints
See [Private Endpoints](./private-endpoints) for using a private network to your database using AWS PrivateLink (AWS only).
## PowerSync Cloud: TLS with Postgres
PowerSync Cloud always enforces TLS on connections to the database, and certificate validation cannot be disabled. PowerSync supports TLS version 1.2 and 1.3.
The **Server Certificate** is always validated. The following two **SSL Modes** are supported:
1. `verify-full` - This verifies the certificate, and checks that the hostname matches. By default, we include CA certificates for AWS RDS, Azure and Supabase. Alternatively, CA certificates to trust can be explicitly specified (any number of certificates in PEM format).
2. `verify-ca` - This verifies the certificate, but does not check the hostname. Because of this, public certificate authorities are not supported — an explicit CA must be specified. This mode can be used with self-signed certificates.
In some cases, hitting the **Test Connection** button when adding a source database connection in the [PowerSync Dashboard](https://dashboard.powersync.com/) will automatically retrieve the certificate for `verify-ca` mode.
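If you need to inspect the server's certificate chain yourself (for example, to obtain the CA certificate to trust in `verify-ca` mode), one option is `openssl`; the hostname below is a placeholder:

```bash theme={null}
# Print the Postgres server's certificate chain (uses the Postgres STARTTLS handshake)
openssl s_client -connect db.example.com:5432 -starttls postgres -showcerts </dev/null
```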
Once deployed, the current connections and TLS versions can be viewed using this query:
```sql theme={null}
select
usename,
ssl,
version,
client_addr,
application_name,
backend_type
from
pg_stat_ssl
join pg_stat_activity on pg_stat_ssl.pid = pg_stat_activity.pid
where
ssl = true;
```
## See Also
* [Security](/resources/security): General security overview
* [Data Encryption](/client-sdks/advanced/data-encryption)
# Source Database Setup
Source: https://docs.powersync.com/configuration/source-db/setup
Configure your backend source database for PowerSync, including permissions and replication settings.
Jump to: [Postgres](#postgres) | [MongoDB](#mongodb) | [MySQL](#mysql-beta) | [SQL Server](#sql-server-alpha)
## Postgres
**Version compatibility**: PowerSync requires Postgres version 11 or greater.
Configuring your Postgres database for PowerSync generally involves three tasks:
1. Ensure logical replication is enabled
2. Create a PowerSync database user
3. Create `powersync` logical replication publication
We have documented steps for some specific hosting providers:
**Supabase**
### 1. Ensure logical replication is enabled
No action required: Supabase has logical replication enabled by default.
### 2. Create a PowerSync database user
```sql theme={null}
-- Create a role/user with replication privileges for PowerSync
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;
-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role;
```
To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist).
### 3. Create `powersync` publication
```sql theme={null}
-- Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
```
Note that the PowerSync Service has to read all updates present in the publication, regardless of whether the table is referenced in your Sync Streams / Sync Rules definitions. This can cause large spikes in memory usage or introduce replication delays, so if you're dealing with large data volumes, you'll want to specify a comma-separated subset of tables to replicate instead of `FOR ALL TABLES`.
The snippet above replicates all tables and is the simplest way to get started in a dev environment.
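As a sketch of the restricted setup, assuming hypothetical `lists` and `todos` tables, limiting both the grant and the publication would look like this (in place of the `FOR ALL TABLES` snippets above):

```sql theme={null}
-- Hypothetical example: limit PowerSync to specific tables
GRANT SELECT ON public.lists, public.todos TO powersync_role;
-- The publication must still be named "powersync"
CREATE PUBLICATION powersync FOR TABLE public.lists, public.todos;
```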
Also see our [Supabase integration guide](/integrations/supabase).
**AWS RDS**
### Prerequisites
The instance must be publicly accessible using an IPv4 address.
Access may be restricted to specific IPs if required — see [IP Filtering](/configuration/source-db/security-and-ip-filtering).
### 1. Ensure logical replication is enabled
Set the `rds.logical_replication` parameter to `1` in the parameter group for the instance.
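If you're using the AWS CLI rather than the console, the parameter can be set on the instance's parameter group (the group name below is a placeholder); the change is applied on the next reboot:

```bash theme={null}
# Set rds.logical_replication=1 on the instance's parameter group
aws rds modify-db-parameter-group \
  --db-parameter-group-name my-db-parameter-group \
  --parameters "ParameterName=rds.logical_replication,ParameterValue=1,ApplyMethod=pending-reboot"
```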
### 2. Create a PowerSync database user
Create a PowerSync user on Postgres:
```sql theme={null}
-- SQL to create powersync user
CREATE ROLE powersync_role WITH BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Allow the role to perform replication tasks
GRANT rds_replication TO powersync_role;
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;
-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role;
```
To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication (as well as for any other publications that may exist).
### 3. Create `powersync` publication
```sql theme={null}
-- Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
```
Note that the PowerSync Service has to read all updates present in the publication, regardless of whether the table is referenced in your Sync Streams / Sync Rules definitions. This can cause large spikes in memory usage or introduce replication delays, so if you're dealing with large data volumes, you'll want to specify a comma-separated subset of tables to replicate instead of `FOR ALL TABLES`.
The snippet above replicates all tables and is the simplest way to get started in a dev environment.
**Azure Database for PostgreSQL**
PowerSync supports both **Azure Database for PostgreSQL** and **Azure Database for PostgreSQL Flexible Server**.
### Prerequisites
The database must be accessible on the public internet. Once you have created your database, navigate to **Settings** → **Networking** and enable **Public access.**
### 1. Ensure logical replication is enabled
Follow the steps as noted in [this Microsoft article](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-logical#prerequisites-for-logical-replication-and-logical-decoding) to allow logical replication.
### 2. Create a PowerSync database user
```sql theme={null}
-- Create a role/user with replication privileges for PowerSync
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;
-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role;
```
To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist).
### 3. Create `powersync` publication
```sql theme={null}
-- Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
```
Note that the PowerSync Service has to read all updates present in the publication, regardless of whether the table is referenced in your Sync Streams / Sync Rules definitions. This can cause large spikes in memory usage or introduce replication delays, so if you're dealing with large data volumes, you'll want to specify a comma-separated subset of tables to replicate instead of `FOR ALL TABLES`.
The snippet above replicates all tables and is the simplest way to get started in a dev environment.
**Google Cloud SQL**
### 1. Ensure logical replication is enabled
In Google Cloud SQL for Postgres, logical replication is enabled using database flags.
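For example, using the `gcloud` CLI (the instance name below is a placeholder):

```bash theme={null}
# Enable logical decoding on the Cloud SQL instance
# Note: --database-flags replaces ALL existing flags on the instance,
# so include any other flags you have already set.
gcloud sql instances patch my-postgres-instance \
  --database-flags=cloudsql.logical_decoding=on
```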
### 2. Create a PowerSync database user
```sql theme={null}
-- Create a role/user with replication privileges for PowerSync
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;
-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role;
```
To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist).
### 3. Create `powersync` publication
```sql theme={null}
-- Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
```
Note that the PowerSync Service has to read all updates present in the publication, regardless of whether the table is referenced in your Sync Streams / Sync Rules definitions. This can cause large spikes in memory usage or introduce replication delays, so if you're dealing with large data volumes, you'll want to specify a comma-separated subset of tables to replicate instead of `FOR ALL TABLES`.
The snippet above replicates all tables and is the simplest way to get started in a dev environment.
**Neon**
### 1. Ensure logical replication is enabled
To [ensure logical replication is enabled](https://neon.tech/docs/guides/logical-replication-postgres#prepare-your-source-neon-database):
1. Select your project in the Neon Console.
2. On the Neon Dashboard, select **Settings**.
3. Select **Logical Replication**.
4. Click **Enable** to ensure logical replication is enabled.
### 2. Create a PowerSync database user
```sql theme={null}
-- Create a role/user with replication privileges for PowerSync
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;
-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role;
```
To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist).
### 3. Create `powersync` publication
```sql theme={null}
-- Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
```
Note that the PowerSync Service has to read all updates present in the publication, regardless of whether the table is referenced in your Sync Streams / Sync Rules definitions. This can cause large spikes in memory usage or introduce replication delays, so if you're dealing with large data volumes, you'll want to specify a comma-separated subset of tables to replicate instead of `FOR ALL TABLES`.
The snippet above replicates all tables and is the simplest way to get started in a dev environment.
Also see our [Neon integration guide](/integrations/neon).
**Fly Postgres**
Fly Postgres is a [Fly](https://fly.io/) app with [flyctl](https://fly.io/docs/flyctl/) sugar on top to help you bootstrap and manage a database cluster for your apps.
### 1. Ensure logical replication is enabled
Once you've deployed your Fly Postgres cluster, you can use the following command to ensure logical replication is enabled:
```bash theme={null}
fly pg config update --wal-level=logical
```
### 2. Create a PowerSync database user
```sql theme={null}
-- Create a role/user with replication privileges for PowerSync
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;
-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role;
```
To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist).
### 3. Create `powersync` publication
```sql theme={null}
-- Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
```
Note that the PowerSync Service has to read all updates present in the publication, regardless of whether the table is referenced in your Sync Streams / Sync Rules definitions. This can cause large spikes in memory usage or introduce replication delays, so if you're dealing with large data volumes, you'll want to specify a comma-separated subset of tables to replicate instead of `FOR ALL TABLES`.
The snippet above replicates all tables and is the simplest way to get started in a dev environment.
**PlanetScale**
### 1. Ensure logical replication is enabled
No action required: PlanetScale has logical replication (`wal_level = logical`) enabled by default.
### 2. Create a PowerSync database user
```sql theme={null}
-- Create a role/user with replication privileges for PowerSync
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;
-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role;
```
To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist).
### 3. Create `powersync` publication
```sql theme={null}
-- Create a publication to replicate tables.
-- PlanetScale does not support FOR ALL TABLES, so
-- specify each table you want to sync.
-- The publication must be named "powersync"
CREATE PUBLICATION powersync
FOR TABLE public.lists, public.todos;
```
**Render**
Logical replication can be enabled for [Render Postgres](https://render.com/docs/postgresql) but you need to contact their support team. Here are some prerequisites before contacting them:
* The disk size must be at least 10 GB.
* You must be on a Professional workspace or higher.
The Render support team will ask you the following:
* Database user for replication (you can use the default or create a new user yourself)
* Schema(s)
* Publication name (only if you want them to set `FOR ALL TABLES`; otherwise, you'll be able to create publications per table yourself later)
If you want to create the publication `FOR ALL TABLES`, you must let their support team know that you want the publication name to be `powersync`.
Additional notes they'll share with you:
> We will reserve approximately 1/8 of your storage for `wal_keep_size`. This will not be available for your normal operations and will always be reserved no matter what.
> We will also schedule maintenance for the database to pick up the changes. It will be initially scheduled for 14 days out with a deadline of 30 days out. Once the maintenance is added, you can reschedule to any time between immediately and the deadline. If you do nothing, it will run automatically at the initially scheduled time of 14 days out.
For other providers and self-hosted databases:
Need help? Simply contact us on [Discord](https://discord.gg/powersync) and we'll help you get set up.
### 1. Ensure logical replication is enabled
PowerSync reads the Postgres WAL using logical replication in order to create [buckets](/architecture/powersync-service#bucket-system) in accordance with your [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview).
If you are managing Postgres yourself, set `wal_level = logical` in your config file:
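```
# postgresql.conf
wal_level = logical
```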
Alternatively, you can use the following SQL commands to check and ensure logical replication is enabled:
```sql theme={null}
-- Check the replication type
SHOW wal_level;
-- Ensure logical replication is enabled
ALTER SYSTEM SET wal_level = logical;
```
Note that Postgres must be restarted after changing this config.
If you're using a managed Postgres service, there may be a setting for this in the relevant section of the service's admin console.
### 2. Create a PowerSync database user
```sql theme={null}
-- Create a role/user with replication privileges for PowerSync
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;
-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role;
```
To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist).
### 3. Create `powersync` publication
```sql theme={null}
-- Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
```
Note that the PowerSync Service has to read all updates present in the publication, regardless of whether the table is referenced in your Sync Streams / Sync Rules definitions. This can cause large spikes in memory usage or introduce replication delays, so if you're dealing with large data volumes, you'll want to specify a comma-separated subset of tables to replicate instead of `FOR ALL TABLES`.
The snippet above replicates all tables and is the simplest way to get started in a dev environment.
### Unsupported Hosted Postgres Providers
Due to the logical replication requirement, not all Postgres hosting providers are supported. Notably, some "serverless Postgres" providers do not support logical replication, and are therefore not supported by PowerSync yet.
### See Also
* [Postgres Maintenance: Logical Replication Slots](/configuration/source-db/postgres-maintenance)
## MongoDB
**Version compatibility**: PowerSync requires MongoDB version 6.0 or greater.
For more information on migrating from MongoDB Atlas Device Sync to PowerSync, see our [migration guide](/migration-guides/atlas-device-sync).
### Permissions Required: MongoDB Atlas
For MongoDB Atlas databases, the minimum permissions when using built-in roles are:
```
readWrite@<database>._powersync_checkpoints
read@<database>
```
To allow PowerSync to automatically enable [`changeStreamPreAndPostImages`](#post-images) on replicated collections (i.e. when the [**Post Images**](#post-images) setting for the MongoDB connection on your PowerSync instance is set to **Auto-Configure**, the default for new PowerSync instances), additionally add the `dbAdmin` permission:
```
readWrite@<database>._powersync_checkpoints
read@<database>
dbAdmin@<database>
```
If you are replicating from multiple databases in the cluster, you need read permissions on the entire cluster, in addition to the above:
```
readAnyDatabase@admin
```
### Privileges Required: Self-Hosted / Custom Roles
For self-hosted MongoDB, or for creating custom roles on MongoDB Atlas, PowerSync requires the following privileges/granted actions:
* `listCollections`: This privilege must be granted on the database being replicated.
* `find`: This privilege must be granted either at the database level or on specific collections.
* `changeStream`: This privilege must be granted at the database level (not on individual collections). In MongoDB Atlas, set `collection: ""` or check `Apply to any collection` to apply this privilege to all collections in the database.
* If replicating from multiple databases, this must apply to the entire cluster. Specify `db: ""` or check `Apply to any database` in MongoDB Atlas.
* For the `_powersync_checkpoints` collection add the following privileges: `createCollection`, `dropCollection`, `find`, `changeStream`, `insert`, `update`, and `remove`
* To allow PowerSync to automatically enable [`changeStreamPreAndPostImages`](#post-images) on replicated collections (i.e. the [**Post Images**](#post-images) setting for the MongoDB connection on your PowerSync instance is set to **Auto-Configure**, which is the default for new PowerSync instances), additionally add the `collMod` permission on the database and all collections being replicated.
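As a minimal sketch, assuming a database named `app_db` (a placeholder), a custom role covering these privileges could be created in `mongosh` along these lines:

```js theme={null}
// Run in mongosh; create the role in the admin database so its
// privileges can reference a specific application database
db.getSiblingDB("admin").createRole({
  role: "powersync_role",
  privileges: [
    // listCollections, find and changeStream on the replicated database
    { resource: { db: "app_db", collection: "" },
      actions: ["listCollections", "find", "changeStream"] },
    // full access to the PowerSync checkpoints collection
    { resource: { db: "app_db", collection: "_powersync_checkpoints" },
      actions: ["createCollection", "dropCollection", "find", "changeStream",
                "insert", "update", "remove"] }
  ],
  roles: []
});
```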
### Post Images
To replicate data from MongoDB to PowerSync in a consistent manner, PowerSync uses Change Streams with [post-images](https://www.mongodb.com/docs/v6.0/reference/command/collMod/#change-streams-with-document-pre--and-post-images) to get the complete document after each change.
This requires the `changeStreamPreAndPostImages` option to be enabled on replicated collections.
PowerSync supports three configuration options for post-images:
1. **Off**: (`post_images: off`): Uses `fullDocument: 'updateLookup'` for backwards compatibility. This was the default for older instances. However, this may lead to consistency issues, so we strongly recommend enabling post-images instead.
2. **Auto-Configure**: (`post_images: auto_configure`) The **default** for new instances: Automatically enables the `changeStreamPreAndPostImages` option on collections as needed. Requires the permissions/privileges mentioned above. If a collection is removed from [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview), you need to manually disable `changeStreamPreAndPostImages` on that collection.
3. **Read-only**: (`post_images: read_only`): Uses `fullDocument: 'required'` and requires `changeStreamPreAndPostImages: { enabled: true }` to be set on every collection referenced in your [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview). Replication will error if this is not configured. This option is ideal when permissions are restricted.
To manually configure collections for `read_only` mode, run this command on each collection:
```js theme={null}
db.runCommand({
  collMod: '<collection_name>',
  changeStreamPreAndPostImages: { enabled: true }
});
```
You can view which collections have the option enabled using:
```js theme={null}
db.getCollectionInfos().filter(
(c) => c.options?.changeStreamPreAndPostImages?.enabled
);
```
Post-images can be configured for PowerSync instances as follows:
Configure the **Post Images** setting in the database connection configuration in the
[PowerSync Dashboard](https://dashboard.powersync.com/). Select your project
and instance and go to **Database Connections** to edit the connection settings.
Configure `post_images` in the `config.yaml` file.
### MongoDB Atlas Private Endpoints Using AWS PrivateLink
If you need to use private endpoints with MongoDB Atlas, see [Private Endpoints](/configuration/source-db/private-endpoints) (AWS only).
## MySQL (Beta)
**Version compatibility**: PowerSync requires MySQL version 5.7 or greater.
PowerSync reads from the MySQL [binary log](https://dev.mysql.com/doc/refman/8.4/en/binary-log.html) (binlog) to replicate changes. We use a modified version of the [Zongji MySQL](https://github.com/powersync-ja/powersync-mysql-zongji) binlog listener to achieve this.
### Binlog Configuration
To ensure that PowerSync can read the binary log, you need to configure your MySQL server to enable binary logging and configure it with the following server command options:
* [`server_id`](https://dev.mysql.com/doc/refman/8.4/en/replication-options.html#sysvar_server_id): Uniquely identifies the MySQL server instance in the replication topology. Default value is **`1`**.
* [`log_bin`](https://dev.mysql.com/doc/refman/8.4/en/replication-options-binary-log.html#sysvar_log_bin): **`ON`**. Enables binary logging. Default is **`ON`** for MySQL 8.0 and later, but **`OFF`** for MySQL 5.7.
* [`enforce_gtid_consistency`](https://dev.mysql.com/doc/refman/8.4/en/replication-options-gtids.html#sysvar_enforce_gtid_consistency): **`ON`**. Enforces GTID consistency. Default is **`OFF`**.
* [`gtid_mode`](https://dev.mysql.com/doc/refman/8.4/en/replication-options-gtids.html#sysvar_gtid_mode): **`ON`**. Enables GTID based logging. Default is **`OFF`**.
* [`binlog_format`](https://dev.mysql.com/doc/refman/8.4/en/replication-options-binary-log.html#sysvar_binlog_format): **`ROW`**. Sets the binary log format to row-based replication. This is required for PowerSync to correctly replicate changes. Default is **`ROW`**.
* [`binlog_row_image`](https://dev.mysql.com/doc/refman/8.4/en/replication-options-binary-log.html#sysvar_binlog_row_image): **`FULL`**. Captures the complete row data for each change. This is required for PowerSync to correctly replicate changes. Default is **`FULL`**. The `MINIMAL`/`NOBLOB` options will be supported in a future release.
These can be specified in a MySQL [option file](https://dev.mysql.com/doc/refman/8.4/en/option-files.html):
```
server_id=1
log_bin=ON
enforce_gtid_consistency=ON
gtid_mode=ON
binlog_format=ROW
binlog_row_image=FULL
```
### Database User Configuration
PowerSync also requires a MySQL user with **`REPLICATION`** and **`SELECT`** privileges on the source databases. These can be added by running the following SQL commands:
```sql theme={null}
-- Create a user with necessary privileges
CREATE USER 'repl_user'@'%' IDENTIFIED BY '<password>';
-- Grant replication client privilege
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repl_user'@'%';
-- Grant select access to the specific database
GRANT SELECT ON <database>.* TO 'repl_user'@'%';
-- Apply changes
FLUSH PRIVILEGES;
```
It is possible to constrain the MySQL user further and limit access to specific tables. Care should be taken to ensure that all the tables in your [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview) are included in the grants.
```sql theme={null}
-- Grant select to the users and the invoices tables in the source database
GRANT SELECT ON <database>.users TO 'repl_user'@'%';
GRANT SELECT ON <database>.invoices TO 'repl_user'@'%';
-- Apply changes
FLUSH PRIVILEGES;
```
### Additional Configuration (Optional)
#### Binlog
The binlog can be configured to limit logging to specific databases. By default, events for tables in all the databases on the MySQL server will be logged.
* [`binlog-do-db`](https://dev.mysql.com/doc/refman/8.4/en/replication-options-binary-log.html#option_mysqld_binlog-do-db): Only updates for tables in the specified database will be logged.
* [`binlog-ignore-db`](https://dev.mysql.com/doc/refman/8.4/en/replication-options-binary-log.html#option_mysqld_binlog-ignore-db): No updates for tables in the specified database will be logged.
Examples:
```
# Only row events for tables in the user_db and invoices_db databases will appear in the binlog.
binlog-do-db=user_db
binlog-do-db=invoices_db
```
```
# Row events for tables in the user_db will be ignored. Events for any other database will be logged.
binlog-ignore-db=user_db
```
## SQL Server (Alpha)
**Version compatibility**:
* PowerSync requires SQL Server 2019+ or Azure SQL Database.
* SQL Server support was introduced in version 1.18.1 of the PowerSync Service.
PowerSync can replicate data from a change data capture (CDC) enabled SQL Server. The CDC process builds up change tables based on changes to tracked tables, by scanning the SQL Server transaction log on a fixed interval.
PowerSync then polls these change tables using built-in stored procedures and applies the changes to the PowerSync [bucket storage](/architecture/powersync-service#bucket-system).
For more information about CDC, see:
* [Change Data Capture (SQL Server)](https://learn.microsoft.com/en-us/sql/relational-databases/track-changes/about-change-data-capture-sql-server)
* [Change Data Capture (Azure SQL Database)](https://learn.microsoft.com/en-us/azure/azure-sql/database/change-data-capture-overview?view=azuresql)
### Supported Editions/Versions
| Database | Edition | Version | Min Service Tier |
| ---------------- | ------------------------------------------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| SQL Server 2022+ | Standard, Enterprise, Developer, Evaluation | 16.0+ | N/A |
| Azure SQL\* | Database, Managed instance | N/A | Any service tier on vCore purchasing model. S3 tier and up on DTU purchasing model. See: [Azure SQL Database compute requirements](https://learn.microsoft.com/en-us/azure/azure-sql/database/change-data-capture-overview?view=azuresql#azure-sql-database-compute-requirements) |
\* Azure SQL Database is always running on the latest version of the SQL Server DB Engine
### Limitations / Known Issues
* Schema change handling is not supported yet.
* Spatial data types are returned as JSON objects as supplied by the Tedious `node-mssql` client. See the notes [here](https://github.com/tediousjs/node-mssql?tab=readme-ov-file#geography-and-geometry).
* There is an inherent latency in replicating data from SQL Server to PowerSync. See [Latency](#latency) for more details.
### Database Setup Requirements
#### 1. Enable CDC on the Database
Change Data Capture (CDC) needs to be enabled on the database:
```sql theme={null}
-- Enable CDC on the database if not already enabled
USE <database>; -- Only for SQL Server. To switch databases on Azure SQL, you have to connect to the specific database.
IF (SELECT is_cdc_enabled FROM sys.databases WHERE name = '<database>') = 0
BEGIN
EXEC sys.sp_cdc_enable_db;
END
```
#### 2. Create the PowerSync Database User
Create a database user for PowerSync with the following permissions:
**Required permissions:**
* Read/Write permissions on the `_powersync_checkpoints` table
* Read permissions on the replicated tables
* `cdc_reader` role (grants access to CDC changetables and functions)
* `SELECT` permission on the CDC schema (grants access to CDC metadata tables)
* `VIEW DATABASE PERFORMANCE STATE` (SQL Server and Azure SQL)
* `VIEW SERVER PERFORMANCE STATE` (SQL Server only)
Create the login for the user first. This is done on the server / master database level:
```sql theme={null}
-- Create a SQL login for the powersync_user if missing. Note SQL Logins are created at the server level.
USE [master]; -- USE only works on SQL Server. For Azure SQL you have to connect to the master database to run these commands.
IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = 'powersync_user')
BEGIN
CREATE LOGIN [powersync_user] WITH PASSWORD = 'YOUR_DB_USER_PASSWORD', CHECK_POLICY = ON;
END
```
Create the database user next. This is done on the specific database level:
```sql theme={null}
-- Create the powersync_user database user if missing. Note DB users are created at the database level.
USE [<database>]; -- USE only works on SQL Server. For Azure SQL you have to connect to the specific database to run these commands.
IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = 'powersync_user')
BEGIN
CREATE USER [powersync_user] FOR LOGIN [powersync_user];
END
```
Grant the necessary permissions for the user:
```sql theme={null}
-- Grant SELECT on the specific replicated tables
GRANT SELECT ON dbo.<table> TO [powersync_user];
-- Grant access to CDC tables and functions using the cdc_reader role
IF IS_ROLEMEMBER('cdc_reader', 'powersync_user') = 0
BEGIN
ALTER ROLE cdc_reader ADD MEMBER powersync_user;
END
-- Grant select on the CDC schema
GRANT SELECT ON SCHEMA::cdc TO [powersync_user];
-- Grant the necessary permissions to the user to access the performance state views
-- Note: For Azure SQL, only VIEW DATABASE PERFORMANCE STATE is required. Granted at the database level.
-- PowerSync uses this to access the sys.dm_db_log_stats DMV and the sys.dm_db_partition_stats DMV
GRANT VIEW DATABASE PERFORMANCE STATE TO [powersync_user];
-- VIEW SERVER PERFORMANCE STATE is only necessary on SQL Server (not Azure SQL). Granted at the server/master database level.
-- PowerSync requires this permission to access the sys.dm_db_log_stats DMV on SQL Server.
USE [master];
BEGIN
GRANT VIEW SERVER PERFORMANCE STATE TO [powersync_user];
END
```
For Azure SQL Database, the `VIEW SERVER PERFORMANCE STATE` permission is not
available and not required. Only `VIEW DATABASE PERFORMANCE STATE` is needed.
#### 3. Create the PowerSync Checkpoints Table
PowerSync requires a `_powersync_checkpoints` table to generate regular checkpoints. CDC must be enabled for this table:
```sql theme={null}
-- Create the PowerSync checkpoints table in your schema
IF OBJECT_ID('dbo._powersync_checkpoints', 'U') IS NULL
BEGIN
CREATE TABLE dbo._powersync_checkpoints (
id INT IDENTITY PRIMARY KEY,
last_updated DATETIME NOT NULL DEFAULT GETUTCDATE()
);
END
-- Enable CDC for the powersync checkpoints table if not already enabled
-- Note: the cdc_reader role is created the first time CDC is enabled on a table
IF NOT EXISTS (SELECT 1 FROM cdc.change_tables WHERE source_object_id = OBJECT_ID(N'dbo._powersync_checkpoints'))
BEGIN
EXEC sys.sp_cdc_enable_table
@source_schema = N'dbo',
@source_name = N'_powersync_checkpoints',
@role_name = N'cdc_reader',
@supports_net_changes = 0;
END
```
Grant read/write access to the table for the `powersync_user`:
```sql theme={null}
GRANT SELECT, INSERT, UPDATE ON dbo._powersync_checkpoints TO [powersync_user];
```
#### 4. Enable CDC on Tables
CDC must be enabled for all tables that need to be replicated:
```sql theme={null}
-- Enable CDC for specific tables in your schema if not already enabled
IF NOT EXISTS (SELECT 1 FROM cdc.change_tables WHERE source_object_id = OBJECT_ID(N'dbo.<table_name>'))
BEGIN
EXEC sys.sp_cdc_enable_table
@source_schema = N'dbo',
@source_name = N'<table_name>',
@role_name = N'cdc_reader',
@supports_net_changes = 0;
END
```
Repeat this for each table you want to replicate. Note that PowerSync does not currently use the net changes functionality so `@supports_net_changes` can be set to `0`.
### CDC Management
Management and performance tuning of CDC is left to the developer and is primarily done by modifying the change capture jobs. See [Change Data Capture Jobs (SQL Server)](https://learn.microsoft.com/en-us/sql/relational-databases/track-changes/administer-and-monitor-change-data-capture-sql-server?view=sql-server-ver17) for more details.
Capture Job settings of interest to PowerSync:
* **Polling Interval:** The frequency at which the capture job reads changes from the transaction log. Default is every 5 seconds. Can be set to 0 so that there is no delay between scans, but this will impact database performance.
* **Max Trans:** The maximum number of transactions that are processed per scan. Default is 500.
* **Max Scans:** The maximum number of scans that are performed per capture job scan cycle. Default is 10.
Cleanup Job settings of interest to PowerSync:
* **Retention:** The retention period before data is expired from the CDC tables. Default is 3 days. If your PowerSync instance is offline for longer than this period, data will need to be fully re-synced. Specified in minutes.
Recommended Capture Job settings:
| Parameter | Recommended Value |
| ----------------- | ----------------- |
| `maxtrans` | 5000 |
| `maxscans` | 10 |
| `pollinginterval` | 1 second |
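The recommended values above can be applied with `sys.sp_cdc_change_job` (run in the database where CDC is enabled). A sketch, to be verified against your SQL Server version; `pollinginterval` is specified in seconds:

```sql theme={null}
-- Apply the recommended capture job settings
EXEC sys.sp_cdc_change_job
    @job_type = N'capture',
    @maxtrans = 5000,
    @maxscans = 10,
    @pollinginterval = 1;
-- Restart the capture job for the new settings to take effect
EXEC sys.sp_cdc_stop_job @job_type = N'capture';
EXEC sys.sp_cdc_start_job @job_type = N'capture';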
For Azure SQL Database, the CDC capture and cleanup jobs are managed automatically. Manual configuration is greatly limited.
See [Azure CDC Customization Limitations](https://learn.microsoft.com/en-us/azure/azure-sql/database/change-data-capture-overview?view=azuresql#cdc-customization).
The main limitation is that the capture job polling interval cannot be modified and is fixed at 20 seconds. It is, however, still possible to [manually trigger](https://learn.microsoft.com/en-us/azure/azure-sql/database/change-data-capture-overview?view=azuresql#manual-cdc-control) the capture job on demand.
#### Latency
Due to the fundamental differences in how CDC works compared to logical replication (Postgres) or binlog reading (MySQL), there is an inherent latency in replicating data from SQL Server to PowerSync. The latency is determined by two factors:
1. **Transaction Log Scan Interval**: The frequency at which the CDC capture job scans the transaction log for changes. The default value of 5 seconds can be changed by modifying the capture job settings on SQL Server. The recommended value is 1 second, but this can also be set to 0 based on the database load. For Azure SQL Database, the default value is 20 seconds and cannot be changed. See [Azure CDC Customization Limitations](https://learn.microsoft.com/en-us/azure/azure-sql/database/change-data-capture-overview?view=azuresql#cdc-customization) for more details.
2. **Polling Interval**: The frequency at which PowerSync polls the CDC change tables for changes. The default value is once every 1000ms. This can be changed by setting the `pollingIntervalMs` parameter in the PowerSync configuration.
Configuration parameters for SQL Server like `pollingIntervalMs` and `pollingBatchSize` (see below) can currently only be set when self-hosting PowerSync (e.g. via your config file or the [PowerSync CLI](/tools/cli)). We are working on exposing these settings in the PowerSync Dashboard for PowerSync Cloud instances.
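As a rough mental model, the two delays above stack. A sketch of the worst case, assuming the delays are simply additive (an approximation; actual lag also depends on load and batch sizes):

```js theme={null}
// Rough worst-case replication lag from a commit on SQL Server to
// PowerSync picking up the change. Additivity is an approximation.
function worstCaseLagMs(cdcScanIntervalMs, powersyncPollingIntervalMs) {
  return cdcScanIntervalMs + powersyncPollingIntervalMs;
}

// SQL Server defaults: 5s CDC scan + 1000ms PowerSync polling
console.log(worstCaseLagMs(5000, 1000)); // 6000
// Azure SQL Database: fixed 20s CDC scan + 1000ms PowerSync polling
console.log(worstCaseLagMs(20000, 1000)); // 21000
```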
### Memory Management
During each polling cycle, PowerSync will read a limited number of transactions from the CDC change tables. The default value of 10 transactions can be changed by setting the `pollingBatchSize` parameter in the PowerSync configuration.
Increasing this will increase throughput at the cost of increased memory usage. If the volume of transactions being replicated is high, and memory is available, it is recommended to increase this value.
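For a back-of-the-envelope throughput estimate, assume every polling cycle processes a full batch (an upper bound; real throughput also depends on transaction size and database load):

```js theme={null}
// Upper bound on transactions replicated per second, assuming each
// polling cycle reads a full batch of pollingBatchSize transactions.
function maxTransactionsPerSecond(pollingBatchSize, pollingIntervalMs) {
  return pollingBatchSize * (1000 / pollingIntervalMs);
}

console.log(maxTransactionsPerSecond(10, 1000));  // defaults: 10 tx/s
console.log(maxTransactionsPerSecond(100, 1000)); // larger batch: 100 tx/s
```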
## Next Step
Next, connect PowerSync to your database:
# Error Codes Reference
Source: https://docs.powersync.com/debugging/error-codes
Complete list of PowerSync error codes with explanations and troubleshooting guidance.
This reference documents PowerSync error codes organized by component, with troubleshooting suggestions for developers. Use the search bar to look up specific error codes (e.g., `PSYNC_R0001`).
# PSYNC\_Rxxxx: Sync Rules issues
* **PSYNC\_R0001**:
Catch-all [Sync Rules](/sync/rules/overview) parsing error, if no more specific error is available
## PSYNC\_R11xx: YAML syntax issues
## PSYNC\_R12xx: YAML structure (schema) issues
## PSYNC\_R21xx: SQL syntax issues
## PSYNC\_R22xx: SQL supported feature issues
## PSYNC\_R23xx: SQL schema mismatch issues
## PSYNC\_R24xx: SQL security warnings
# PSYNC\_Sxxxx: Service issues
* **PSYNC\_S0001**:
Internal assertion.
If you see this error, it might indicate a bug in the service code.
* **PSYNC\_S0102**:
TEARDOWN was not acknowledged.
This happens when the TEARDOWN argument was not supplied when running
the service teardown command. The TEARDOWN argument is required since
this is a destructive command.
Run the command with `teardown TEARDOWN` to confirm.
## PSYNC\_S1xxx: Replication issues
* **PSYNC\_S1002**:
Row too large.
There is a 15MB size limit on every replicated row - rows larger than
this cannot be replicated.
* **PSYNC\_S1003**:
Sync rules have been locked by another process for replication.
This error is normal in some circumstances:
1. In some cases, if a process was forcefully terminated, this error may occur for up to a minute.
2. During rolling deploys, this error may occur until the old process stops replication.
If the error persists for longer, this may indicate that multiple replication processes are running.
Make sure there is only one replication process apart from rolling deploys.
* **PSYNC\_S1004**:
JSON nested object depth exceeds the limit of 20.
This may occur if there is very deep nesting in JSON or embedded documents.
## PSYNC\_S11xx: Postgres replication issues
* **PSYNC\_S1101**:
Replication assertion error.
If you see this error, it might indicate a bug in the service code.
* **PSYNC\_S1103**:
Aborted initial replication.
This is not an actual error - it is expected when the replication process
is stopped, or if replication is stopped for any other reason.
* **PSYNC\_S1104**:
Explicit cacert is required for `sslmode: verify-ca`.
Use either verify-full, or specify a certificate with verify-ca.
* **PSYNC\_S1105**:
`database` is required in connection config.
Specify the database explicitly, or in the `uri` field.
* **PSYNC\_S1106**:
`hostname` is required in connection config.
Specify the hostname explicitly, or in the `uri` field.
* **PSYNC\_S1107**:
`username` is required in connection config.
Specify the username explicitly, or in the `uri` field.
* **PSYNC\_S1108**:
`password` is required in connection config.
Specify the password explicitly, or in the `uri` field.
* **PSYNC\_S1109**:
Invalid database URI.
Check the URI scheme and format.
* **PSYNC\_S1110**:
Invalid port number.
Only ports in the range 1024 - 65535 are supported.
* **PSYNC\_S1141**:
Publication does not exist.
Run: `CREATE PUBLICATION powersync FOR ALL TABLES` on the source database.
* **PSYNC\_S1142**:
Publication does not publish all changes.
Create a publication using `WITH (publish = "insert, update, delete, truncate")` (the default).
* **PSYNC\_S1143**:
Publication uses publish\_via\_partition\_root.
* **PSYNC\_S1144**:
Invalid Postgres server configuration for replication and sync bucket storage.
The same Postgres server, running an unsupported version of Postgres, has been configured for both replication and sync bucket storage.
Using the same Postgres server is only supported on Postgres 14 and above.
This error typically indicates that the Postgres version is below 14.
Either upgrade the Postgres server to version 14 or above, or use a different Postgres server for sync bucket storage.
* **PSYNC\_S1145**:
Table has RLS enabled, but the replication role does not have the BYPASSRLS attribute.
We recommend using a dedicated replication role with the BYPASSRLS attribute for replication:
```sql theme={null}
ALTER ROLE powersync_role BYPASSRLS
```
An alternative is to create explicit policies for the replication role. If you have done that,
you may ignore this warning.
## PSYNC\_S12xx: MySQL replication issues
## PSYNC\_S13xx: MongoDB replication issues
* **PSYNC\_S1301**:
Generic MongoServerError.
* **PSYNC\_S1302**:
Generic MongoNetworkError.
* **PSYNC\_S1303**:
MongoDB internal TLS error.
If connecting to a shared cluster on MongoDB Atlas, this could be an IP Access List issue.
Check that the service IP is allowed to connect to the cluster.
* **PSYNC\_S1304**:
MongoDB connection DNS error.
Check that the hostname is correct.
* **PSYNC\_S1305**:
MongoDB connection timeout.
Check that the hostname is correct, and that the service IP is allowed to connect to the cluster.
* **PSYNC\_S1306**:
MongoDB authentication error.
Check the username and password.
* **PSYNC\_S1307**:
MongoDB authorization error.
Check that the user has the required privileges.
* **PSYNC\_S1341**:
Sharded MongoDB Clusters are not supported yet.
* **PSYNC\_S1342**:
Standalone MongoDB instances are not supported - use a replica-set.
* **PSYNC\_S1343**:
PostImages not enabled on a source collection.
Use `post_images: auto_configure` to configure post images automatically, or enable manually:
```
db.runCommand({
collMod: 'collection-name',
changeStreamPreAndPostImages: { enabled: true }
});
```
* **PSYNC\_S1344**:
The MongoDB Change Stream has been invalidated.
Possible causes:
* Some change stream documents do not have postImages.
* startAfter/resumeToken is not valid anymore.
* The replication connection has changed.
* The database has been dropped.
Replication will be stopped for this Change Stream. Replication will restart with a new Change Stream.
* **PSYNC\_S1345**:
Failed to read MongoDB Change Stream due to a timeout.
This may happen if there is a significant delay on the source database in reading the change stream.
If this is not resolved after retries, replication may need to be restarted from scratch.
* **PSYNC\_S1346**:
Failed to read MongoDB Change Stream.
See the error cause for more details.
* **PSYNC\_S1347**:
Timeout while getting a resume token for an initial snapshot.
This may happen if there is very high load on the source database.
## PSYNC\_S14xx: MongoDB storage replication issues
* **PSYNC\_S1402**:
Max transaction tries exceeded.
## PSYNC\_S15xx: SQL Server replication issues
* **PSYNC\_S1500**:
Required updates in the Change Data Capture (CDC) are no longer available.
Possible causes:
* Older data has been cleaned up due to exceeding the retention period.
## PSYNC\_S2xxx: Service API
* **PSYNC\_S2001**:
Generic internal server error (HTTP 500).
See the error details for more info.
* **PSYNC\_S2002**:
Route not found (HTTP 404).
* **PSYNC\_S2003**:
503 service unavailable due to restart.
Wait a while then retry the request.
* **PSYNC\_S2004**:
415 unsupported media type.
This code always indicates an issue with the client.
## PSYNC\_S21xx: Auth errors originating on the client.
This does not include auth configuration errors on the service.
* **PSYNC\_S2101**:
Generic authentication error.
Common causes:
1. **JWT signing key mismatch** (Supabase): The client is using tokens signed with a different key type (legacy vs. new JWT signing keys) than PowerSync expects. If you've migrated to new JWT signing keys, ensure users sign out and back in to get fresh tokens. See [Migrating from Legacy to New JWT Signing Keys](/installation/authentication-setup/supabase-auth#migrating-from-legacy-to-new-jwt-signing-keys).
2. **Missing or invalid key ID (kid)**: The token's kid header doesn't match any keys in PowerSync's keystore.
3. **Incorrect JWT secret or JWKS endpoint**: Verify your authentication configuration matches your auth provider's settings.
* **PSYNC\_S2102**:
Could not verify the auth token signature.
Typical causes include:
1. Token kid is not found in the keystore.
2. Signature does not match the kid in the keystore.
* **PSYNC\_S2103**:
Token has expired. Check the expiry date on the token.
* **PSYNC\_S2104**:
Token expiration period is too long. Issue shorter-lived tokens.
* **PSYNC\_S2105**:
Token audience does not match expected values.
Check the aud value on the token, compared to the audience values allowed in the service config.
* **PSYNC\_S2106**:
No token provided. An auth token is required for every request.
The Authorization header must start with "Token" or "Bearer", followed by the JWT.
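A quick sanity check on the header format the service expects (an illustrative sketch, not part of any SDK):

```js theme={null}
// The Authorization header must be "Token <jwt>" or "Bearer <jwt>".
function isValidAuthHeader(header) {
  return /^(Token|Bearer) \S+$/.test(header ?? '');
}

console.log(isValidAuthHeader('Bearer eyJhbGci.payload.sig')); // true
console.log(isValidAuthHeader('eyJhbGci.payload.sig'));        // false - scheme missing
```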
## PSYNC\_S22xx: Auth integration errors
* **PSYNC\_S2201**:
Generic auth configuration error. See the message for details.
* **PSYNC\_S2202**:
IPv6 support is not enabled for the JWKS URI.
Use an endpoint that supports IPv4.
* **PSYNC\_S2203**:
IPs in this range are not supported.
Make sure to use a publicly-accessible JWKS URI.
* **PSYNC\_S2204**:
JWKS request failed.
## PSYNC\_S23xx: Sync API errors
* **PSYNC\_S2302**:
No sync rules available.
This error may happen if:
1. Sync rules have not been deployed.
2. Sync rules have been deployed, but are still being processed.
View the replicator logs to see if the sync rules are being processed.
* **PSYNC\_S2304**:
Maximum active concurrent connections limit has been reached.
* **PSYNC\_S2305**:
Too many buckets.
There is a limit on the number of buckets per active connection (default of 1,000). See [Limit on Number of Buckets Per Client](/sync/rules/organize-data-into-buckets#limit-on-number-of-buckets-per-client) and [Performance and Limits](/resources/performance-and-limits).
## PSYNC\_S24xx: Sync API errors - MongoDB Storage
* **PSYNC\_S2401**:
Could not get clusterTime.
* **PSYNC\_S2402**:
Failed to connect to the MongoDB storage database.
* **PSYNC\_S2403**:
Query timed out. Could be due to a large query or a temporary load issue on the storage database.
Retry the request.
* **PSYNC\_S2404**:
Query failure on the storage database. See error details for more information.
## PSYNC\_S25xx: Sync API errors - Postgres Storage
## PSYNC\_S3xxx: Service configuration issues
## PSYNC\_S31xx: Auth configuration issues
* **PSYNC\_S3102**:
Invalid jwks\_uri.
* **PSYNC\_S3103**:
Only http(s) is supported for jwks\_uri.
## PSYNC\_S32xx: Replication configuration issue.
* **PSYNC\_S3201**:
Failed to validate module configuration.
## PSYNC\_S4xxx: Management / Dev APIs
* **PSYNC\_S4001**:
Internal assertion error.
This error may indicate a bug in the service code.
* **PSYNC\_S4104**:
No active sync rules.
* **PSYNC\_S4105**:
Sync rules API disabled.
When a sync rules file is configured, the dynamic sync rules API is disabled.
# Troubleshooting
Source: https://docs.powersync.com/debugging/troubleshooting
Summary of common issues, troubleshooting tools and pointers.
## Common issues
**Tip**: Asking the AI bot on this page, or on the [#gpt-help](https://discord.com/channels/1138230179878154300/1304118313093173329) channel on our [Discord server](https://discord.com/invite/powersync), is a good way to troubleshoot common issues.
### `SqliteException: Could not load extension` or similar
This client-side error or similar typically occurs when PowerSync is used in conjunction with either another SQLite library or the standard system SQLite library. PowerSync is generally not compatible with multiple SQLite sources. If another SQLite library exists in your project dependencies, remove it if it is not required. In some cases, there might be other workarounds. For example, in Flutter projects, we've seen this issue with `sqflite 2.2.6`, but `sqflite 2.3.3+1` does not throw the same exception.
### `RangeError: Maximum call stack size exceeded` on iOS or Safari
This client-side error commonly occurs when using the PowerSync Web SDK on Safari or iOS (including iOS simulator).
**Solutions:**
1. **Use OPFSCoopSyncVFS (Recommended)**: Switch to the `OPFSCoopSyncVFS` virtual file system, which provides better Safari compatibility and multi-tab support:
```js theme={null}
import { PowerSyncDatabase, WASQLiteOpenFactory, WASQLiteVFS } from '@powersync/web';
export const db = new PowerSyncDatabase({
schema: AppSchema,
database: new WASQLiteOpenFactory({
dbFilename: 'exampleVFS.db',
vfs: WASQLiteVFS.OPFSCoopSyncVFS,
flags: {
enableMultiTabs: typeof SharedWorker !== 'undefined'
}
}),
flags: {
enableMultiTabs: typeof SharedWorker !== 'undefined'
}
});
```
2. **Disable Web Workers (Alternative)**: Set the `useWebWorker` flag to `false`, but note that this disables multi-tab support:
```js theme={null}
export const db = new PowerSyncDatabase({
schema: AppSchema,
database: {
dbFilename: 'powersync.db'
},
flags: {
useWebWorker: false
}
});
```
## Tools
Troubleshooting techniques depend on the type of issue:
1. **Connection issues between client and server:** See the tools below.
2. **Expected data not appearing on device:** See the tools below.
3. **Data lagging behind on PowerSync Service:** Data on the PowerSync Service instance cannot currently be inspected directly. This is something we are investigating.
4. **Writes to the backend source database are failing:** PowerSync is not actively involved: use normal debugging techniques (server-side logging; client and server-side error tracking).
5. **Updates are slow to sync, or queries run slow**: See [Performance](#performance)
### Sync Diagnostics Client
Access the Sync Diagnostics Client here: [https://diagnostics-app.powersync.com](https://diagnostics-app.powersync.com)
This is a standalone web app that presents data from the perspective of a specific user. It can be used to:
* See stats about the user's local database.
* Inspect tables, rows and buckets on the device.
* Query the local SQL database.
* Identify common issues, e.g. too many buckets.
See the [Readme](https://github.com/powersync-ja/powersync-js/tree/main/tools/diagnostics-app#readme) for further details.
### Instance Logs
See [Monitoring and Alerting](/maintenance-ops/monitoring-and-alerting).
### SyncStatus API
We also provide diagnostics via the `SyncStatus` APIs in the client SDKs. Examples include the connection status, last completed sync time, and local upload queue size.
If for example, a change appears to be missing on the client, you can check if the last completed sync time is greater than the time the change occurred.
For usage details, refer to the respective [client SDK docs](/client-sdks/overview).
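The staleness check described above can be factored into a small helper. Feed it the last completed sync time from your SDK's `SyncStatus` (e.g. `db.currentStatus.lastSyncedAt` on the JavaScript SDKs); the helper itself is illustrative:

```js theme={null}
// Returns true if a change made at `changeTime` should already have been
// downloaded, based on the last completed sync. A null/undefined
// lastSyncedAt means no full sync has completed yet.
function isChangeSynced(lastSyncedAt, changeTime) {
  if (!lastSyncedAt) return false;
  return new Date(lastSyncedAt).getTime() >= new Date(changeTime).getTime();
}
```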
The JavaScript SDKs ([React Native](/client-sdks/reference/react-native-and-expo), [web](/client-sdks/reference/javascript-web)) also log the contents of bucket changes to `console.debug` if verbose logging is enabled. This should log which `PUT`/`PATCH`/`DELETE` operations have been applied from the server.
### Inspect local SQLite Database
Another useful debugging tool as a developer is to open the SQLite file and inspect the contents. We share an example of how to do this on iOS from macOS in this video:
Essentially, run the following to grab the SQLite file.
On the iOS simulator (from macOS):
`find ~/Library/Developer/CoreSimulator/Devices -name "mydb.sqlite"`
On Android:
`adb pull data/data/com.mydomain.app/files/mydb.sqlite`
Our [Sync Diagnostics Client](/tools/diagnostics-client) and several of our [demo apps](/intro/examples) also contain a SQL console view to inspect the local database contents. Consider implementing similar functionality in your app. See a React example [here](https://github.com/powersync-ja/powersync-js/blob/main/tools/diagnostics-app/src/app/views/sql-console.tsx).
### Client-side Logging
Our client SDKs support logging to troubleshoot issues. Here's how to enable logging in each SDK:
* **JavaScript-based SDKs** (Web, React Native, and Node.js) - You can use our built-in logger based on [js-logger](https://www.npmjs.com/package/js-logger) for logging. Create the base logger with `const logger = createBaseLogger()` and enable with `logger.useDefaults()` and set level with `logger.setLevel(LogLevel.DEBUG)`. For the Web SDK, you can also enable the `debugMode` flag to log SQL queries on Chrome's Performance timeline.
* **Dart/Flutter SDK** - Logging is enabled by default since version 1.1.2 and outputs logs to the console in debug mode.
* **Kotlin SDK** - Uses [Kermit Logger](https://kermit.touchlab.co/docs/). By default shows `Warnings` in release and `Verbose` in debug mode.
* **Swift SDK** - Supports configurable logging with `DefaultLogger` and custom loggers implementing `LoggerProtocol`. Supports severity levels: `.debug`, `.info`, `.warn`, and `.error`.
* **.NET SDK** - Uses .NET's `ILogger` interface. Configure with `LoggerFactory` to enable console logging and set minimum log level.
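Putting the JavaScript calls from the first bullet together (the import assumes the Web SDK package; React Native and Node.js are analogous):

```js theme={null}
import { createBaseLogger, LogLevel } from '@powersync/web';

// Enable verbose client-side logging for troubleshooting.
const logger = createBaseLogger();
logger.useDefaults();             // attach the default console handler
logger.setLevel(LogLevel.DEBUG);  // log at DEBUG verbosity
```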
## Performance
When running into issues with data sync performance, first review our expected [Performance and Limits](/resources/performance-and-limits).
These are some common pointers when it comes to diagnosing and understanding performance issues:
1. You will notice differences in performance based on the **row size** (think 100-byte rows vs 8 KB rows)
2. The **initial sync** on a client can take a while in cases where the operations history is large. See [Compacting Buckets](/maintenance-ops/compacting-buckets) to optimize sync performance.
3. You can get big performance gains by using **transactions & batching** as explained in this [blog post](https://www.powersync.com/blog/flutter-database-comparison-sqlite-async-sqflite-objectbox-isar).
### Web: Logging queries on the performance timeline
Enabling the `debugMode` flag in the [Web SDK](/client-sdks/reference/javascript-web) logs all SQL queries on the Performance timeline in Chrome's Developer Tools (after recording). This can help identify slow-running queries.
This includes:
* PowerSync queries from client code.
* Internal statements from PowerSync, including queries saving sync data, and begin/commit statements.
This excludes:
* The time waiting for the global transaction lock (all overhead in worker communication is still included). This means you won't see concurrent queries in most cases.
* Internal statements from `powersync-sqlite-core`.
Enable this mode when instantiating `PowerSyncDatabase`:
```js theme={null}
export const db = new PowerSyncDatabase({
schema: AppSchema,
database: {
dbFilename: 'powersync.db',
debugMode: true // Defaults to false. To enable in development builds, use
// debugMode: process.env.NODE_ENV !== 'production'
}
});
```
# Custom Conflict Resolution
Source: https://docs.powersync.com/handling-writes/custom-conflict-resolution
How to implement custom conflict resolution strategies in PowerSync to handle concurrent updates from multiple clients.
The default behavior is "**last write wins per field**". Updates to different fields on the same record don't conflict with each other. The server processes operations in the order received, so if two users modify the *same* field, the last update to reach the server wins.
For most apps, this works fine. But some scenarios demand more complex conflict resolution strategies.
## When You Might Need Custom Conflict Resolution
**Retail inventory**: Two clerks ring up the same item while offline. You need to subtract both quantities, not replace one count with the other.
**Healthcare records**: A doctor updates the diagnosis while a nurse updates vitals on the same patient record. Both changes matter; you can't lose either.
**Order workflows**: Once an order ships, it should lock. Status must progress logically (pending → processing → shipped), not jump around randomly.
**Collaborative documents**: Multiple people edit different paragraphs simultaneously. Automatic merging prevents losing anyone's work.
## How Data Flows Through PowerSync
Understanding the data flow helps you decide where to implement conflict resolution.
### Client to Backend
When a user updates data in your app:
1. **Client writes to local SQLite** - Changes happen instantly, even offline
2. **PowerSync queues the operation** - Stored in the upload queue
3. **Client sends operation(s) to your backend** - Your `uploadData` function processes it
4. **Backend writes to source database** - Postgres, MySQL, MongoDB etc.
### Backend to Client
When data changes on the server:
1. **Source database updates** - Direct writes or changes from other clients
2. **PowerSync Service detects changes** - Through replication stream
3. **Clients download updates** - Based on their Sync Streams (or legacy Sync Rules)
4. **Local SQLite updates** - Changes merge into the client's database
**Conflicts arise when**: Multiple clients modify the same row (or fields) before syncing, or when a client's changes conflict with server-side rules.
***
## Understanding Operations & `CrudEntry`
PowerSync tracks three operation types:
* **PUT** - Creates new row or replaces entire row (includes all non-null columns)
* **PATCH** - Updates specific fields only (includes ID + changed columns)
* **DELETE** - Removes row (includes only ID)
### `CrudEntry` Structure
When your `uploadData` receives transactions, each one has this structure:
```typescript theme={null}
interface CrudEntry {
  clientId: number; // Auto-incrementing client-side ID
  id: string; // ID of the changed row
  op: UpdateType; // 'PUT' | 'PATCH' | 'DELETE'
  table: string; // Table name
  opData?: Record<string, any>; // Changed column values (optional)
  transactionId?: number; // Groups ops from the same transaction
  metadata?: string; // Custom metadata (requires trackMetadata)
  trackPrevious?: Record<string, any>; // Previous values (requires trackPrevious)
}
```
### What Your Backend Receives
**Client-side connector sends:**
```javascript theme={null}
// uploadData in your client connector
async uploadData(database) {
const transaction = await database.getNextCrudTransaction();
if (!transaction) return;
// Send to your backend API
await fetch('https://yourapi.com/data', {
method: 'POST',
body: JSON.stringify({
batch: transaction.crud // Array of CrudEntry objects
})
});
await transaction.complete();
}
```
The following structure is only received by the backend if the transactions are not mutated in your client's `uploadData` function.
**Backend API receives:**
```json theme={null}
{
"batch": [
{
"op": "PATCH",
"table": "todos",
"id": "44f21466-d031-11f0-94bd-62f5a66ac26c",
"opData": {
"completed": 1,
"completed_at": "2025-12-03T10:20:04.658Z",
"completed_by": "c7b8cc68-41dd-4643-b559-66664ab6c7c5"
}
}
]
}
```
Operations are **idempotent** - your backend may receive the same operation multiple times. Use `clientId` and the operation's ID to detect and skip duplicates.
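A minimal duplicate-check sketch, keying on `clientId` plus the operation's table, ID and type. The in-memory `Set` is illustrative only; a real backend would persist processed keys:

```js theme={null}
const processedOps = new Set();

// Returns true if this CrudEntry was already processed, and records it
// otherwise. Key fields follow the CrudEntry structure above.
function isDuplicate(entry) {
  const key = `${entry.clientId}:${entry.table}:${entry.id}:${entry.op}`;
  if (processedOps.has(key)) return true;
  processedOps.add(key);
  return false;
}

const op = { clientId: 1, table: 'todos', id: '44f21466', op: 'PATCH' };
console.log(isDuplicate(op)); // false - first time seen, apply it
console.log(isDuplicate(op)); // true - replay, skip it
```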
***
## Implementation Examples
The following examples demonstrate the core logic and patterns for implementing conflict resolution strategies. All client-side code is written for React/Web applications, backend examples use Node.js, and database queries target Postgres. While these examples should work as-is, they're intended as reference implementations: focus on understanding the underlying patterns and adapt them to your specific stack and requirements.
***
## Strategy 1: Timestamp-Based Detection
The idea is simple: add a `modified_at` timestamp to each row. When a client updates a row, compare their timestamp to the one in the database. If theirs is older, someone else changed the row while they were offline, so you treat it as a conflict.
This is great for quick staleness checks. You are not merging changes, just stopping outdated writes, similar to noticing a Google Doc changed while you were editing a local copy.
The only real catch is **clock drift**. If server and client clocks are out of sync, you can get false conflicts. And if clients generate timestamps themselves, make sure they all use the same timezone.
### Database Schema
**Source database (Postgres):**
```sql theme={null}
CREATE TABLE tasks (
id UUID PRIMARY KEY,
title TEXT,
status TEXT,
modified_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Auto-update timestamp on every change
CREATE OR REPLACE FUNCTION update_modified_at()
RETURNS TRIGGER AS $$
BEGIN
NEW.modified_at = NOW();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER tasks_modified_at
BEFORE UPDATE ON tasks
FOR EACH ROW
EXECUTE FUNCTION update_modified_at();
```
### Backend Conflict Detection
**Backend API (Node.js):**
```javascript theme={null}
async function handleUpdate(operation, userId) {
const { id, opData } = operation;
const clientModifiedAt = opData.modified_at;
// Get current server state
const result = await db.query(
'SELECT * FROM tasks WHERE id = $1',
[id]
);
if (!result.rows[0]) {
// Row was deleted by another client
console.log(`Conflict: Row ${id} deleted`);
return { conflict: 'row_deleted' };
}
const serverModifiedAt = result.rows[0].modified_at;
// Client's version is older than server's
if (new Date(clientModifiedAt) < new Date(serverModifiedAt)) {
console.log(`Conflict: Stale update for ${id}`);
return {
conflict: 'stale_update',
serverVersion: result.rows[0],
clientVersion: opData
};
}
// No conflict - apply update
await db.query(
'UPDATE tasks SET title = $1, status = $2 WHERE id = $3',
[opData.title, opData.status, id]
);
return { success: true };
}
```
Timestamps can be unreliable if servers have **clock skew**. Additionally, if clients are writing timestamps (rather than letting the database generate them), ensure all clients use the same timezone/localization as the server. For critical data, use sequence numbers instead.
***
## Strategy 2: Sequence Number Versioning
Instead of timestamps, you can use a `version` number that increments on every change. It works like a counter on the row. Each time someone updates it, the version increases by one. When a client sends an update, they include the version they last saw. If it doesn’t match the current version in the database, another update happened and you reject the write.
This avoids **clock drift** entirely because the database manages the counter, so clients can’t get out of sync.
The tradeoff is that it’s all or nothing. You can’t merge simultaneous edits to different fields. You only know that the row changed, so the update is rejected. Use this when you want strong conflict detection and are fine asking users to refresh and redo their edits rather than risking corrupted data.
### Database Schema
**Source database (Postgres):**
```sql theme={null}
CREATE TABLE documents (
id UUID PRIMARY KEY,
content TEXT,
version BIGSERIAL NOT NULL
);
```
### Backend Conflict Detection
**Backend API (Node.js):**
```javascript theme={null}
async function handleUpdateWithVersion(operation) {
const { id, opData } = operation;
const clientVersion = opData.version;
const result = await db.query(
'SELECT version FROM documents WHERE id = $1',
[id]
);
if (!result.rows[0]) {
return { conflict: 'row_deleted' };
}
// node-postgres returns BIGINT columns as strings, so normalize before comparing
const serverVersion = Number(result.rows[0].version);
// Client's version doesn't match the server's
if (Number(clientVersion) !== serverVersion) {
return {
conflict: 'version_mismatch',
expected: serverVersion,
received: clientVersion
};
}
// Apply the update and increment the version
await db.query(
'UPDATE documents SET content = $1, version = version + 1 WHERE id = $2',
[opData.content, id]
);
return { success: true };
}
```
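One caveat: the SELECT-then-UPDATE sequence above leaves a small window in which a concurrent write can land between the check and the update. A common hardening is compare-and-swap semantics: re-check the version in the UPDATE itself (`UPDATE documents SET content = $1, version = version + 1 WHERE id = $2 AND version = $3`) and treat an affected-row count of zero as a conflict. The in-memory sketch below (a hypothetical `casUpdate` helper, with a Map standing in for the database) distills that rule:

```javascript theme={null}
// In-memory sketch of compare-and-swap versioning; a real backend would
// express the same rule in a single UPDATE statement and check rowCount.
function casUpdate(store, id, expectedVersion, content) {
  const row = store.get(id);
  if (!row) return { conflict: 'row_deleted' };
  if (row.version !== expectedVersion) {
    return { conflict: 'version_mismatch', expected: row.version };
  }
  store.set(id, { content, version: row.version + 1 });
  return { success: true };
}

const store = new Map([['doc1', { content: 'a', version: 3 }]]);
console.log(casUpdate(store, 'doc1', 3, 'b')); // { success: true }
// A second writer still holding version 3 is rejected:
console.log(casUpdate(store, 'doc1', 3, 'c')); // version_mismatch, expected: 4
```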
***
## Strategy 3: Field-Level Last Write Wins
Here things get more fine-grained. Instead of tracking changes for the whole row, you track them per field. If one user updates the title and another updates the status, both changes can succeed because they touched different fields.
You store a timestamp for each field you care about. When an update comes in, you compare the client’s timestamp for each field to what’s in the database and only apply the fields that are newer. This allows concurrent edits to coexist as long as they are not modifying the same field.
The downside is extra complexity. You end up with more timestamp columns, and your backend has to compare fields one by one. But for apps like task managers or form builders, where different parts of a record are often edited independently, this avoids a lot of unnecessary conflicts.
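Before wiring this into schema and backend code, the core rule can be expressed as a small pure function. This sketch (a hypothetical `pickNewerFields` helper, not a PowerSync API) keeps only the fields whose client timestamp beats the server's:

```javascript theme={null}
// Illustrative helper: keep a field only when the client's per-field
// timestamp is newer than the server's, or the server has none yet.
function pickNewerFields(opData, clientTs, serverTs) {
  const accepted = {};
  for (const [field, value] of Object.entries(opData)) {
    if (field === 'id') continue;
    const client = clientTs[`${field}_modified_at`];
    const server = serverTs[`${field}_modified_at`];
    if (!server || (client && new Date(client) > new Date(server))) {
      accepted[field] = value;
    }
  }
  return accepted;
}

console.log(pickNewerFields(
  { title: 'New title', status: 'done' },
  { title_modified_at: '2024-03-01T12:00:00Z', status_modified_at: '2024-03-01T09:00:00Z' },
  { title_modified_at: '2024-03-01T10:00:00Z', status_modified_at: '2024-03-01T10:00:00Z' }
)); // { title: 'New title' } - the stale status edit is dropped
```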
### Database Schema
**Source database (Postgres):**
```sql theme={null}
CREATE TABLE tasks (
id UUID PRIMARY KEY,
title TEXT,
title_modified_at TIMESTAMPTZ,
status TEXT,
status_modified_at TIMESTAMPTZ,
assignee TEXT,
assignee_modified_at TIMESTAMPTZ
);
```
### Client Schema with Metadata
**Client schema:**
```typescript theme={null}
const tasks = new Table(
{
title: column.text,
status: column.text,
assignee: column.text,
// Store per-field timestamps in metadata
},
{
trackMetadata: true // Enables _metadata column
}
);
```
### Client Updates with Timestamps
**Client code:**
```typescript theme={null}
await powerSync.execute(
'UPDATE tasks SET title = ?, _metadata = ? WHERE id = ?',
[
'Updated title',
JSON.stringify({
title_modified_at: new Date().toISOString(),
status_modified_at: existingTask.status_modified_at // Keep existing
}),
taskId
]
);
```
### Backend Field-Level Resolution
**Backend API (Node.js):**
```javascript theme={null}
async function fieldLevelLWW(operation) {
const { id, opData, metadata } = operation;
const timestamps = metadata ? JSON.parse(metadata) : {};
// Get current field timestamps from database
const result = await db.query(
'SELECT title_modified_at, status_modified_at, assignee_modified_at FROM tasks WHERE id = $1',
[id]
);
if (!result.rows[0]) {
return { conflict: 'row_deleted' };
}
const currentTimestamps = result.rows[0];
const updates = [];
const values = [];
let paramCount = 1;
// Check each field that was updated
for (const [field, value] of Object.entries(opData)) {
if (field === 'id') continue;
const clientTimestamp = timestamps[`${field}_modified_at`];
const serverTimestamp = currentTimestamps[`${field}_modified_at`];
// Only update if client's version is newer (or server has no timestamp)
if (!serverTimestamp ||
(clientTimestamp && new Date(clientTimestamp) > new Date(serverTimestamp))) {
updates.push(`${field} = $${paramCount}`);
updates.push(`${field}_modified_at = $${paramCount + 1}`);
values.push(value, clientTimestamp);
paramCount += 2;
}
}
if (updates.length > 0) {
values.push(id);
await db.query(
`UPDATE tasks SET ${updates.join(', ')} WHERE id = $${paramCount}`,
values
);
}
return { success: true };
}
```
***
## Strategy 4: Business Rule Validation
Sometimes conflicts aren’t about timing at all; they’re about your business rules. Maybe an order that has shipped can’t be edited, a status can’t jump from `pending` to `completed` without passing through `processing`, or prices can only change with manager approval.
This approach isn’t about catching concurrent edits. It’s about enforcing valid state transitions. You look at the current state in the database, compare it to what the client wants, and decide whether that move is allowed.
This is where your domain rules live. The logic becomes the gatekeeper that blocks changes that don’t make sense. You can also layer it with other methods: check timestamps first, then validate your business rules, and only then apply the update.
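That layering idea can be sketched in a few lines (all names here are illustrative, not part of any PowerSync API): each check runs in order, and the first conflict short-circuits the rest.

```javascript theme={null}
// Run checks in order and stop at the first conflict.
function resolveLayered(serverRow, opData, checks) {
  for (const check of checks) {
    const result = check(serverRow, opData);
    if (result.conflict) return result;
  }
  return { success: true };
}

// Layer 1: reject stale writes (Strategy 1)
const checkTimestamp = (row, op) =>
  new Date(op.modified_at) < new Date(row.modified_at)
    ? { conflict: 'stale_update' }
    : {};

// Layer 2: enforce a business rule (no pending -> completed jumps)
const checkTransition = (row, op) =>
  op.status === 'completed' && row.status === 'pending'
    ? { conflict: 'invalid_transition' }
    : {};

const row = { status: 'pending', modified_at: '2024-03-01T10:00:00Z' };
console.log(
  resolveLayered(
    row,
    { status: 'completed', modified_at: '2024-03-01T11:00:00Z' },
    [checkTimestamp, checkTransition]
  )
); // { conflict: 'invalid_transition' }
```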
### Backend with Business Rules
**Backend API (Node.js):**
```javascript theme={null}
async function validateOrderUpdate(operation) {
const { id, opData } = operation;
const result = await db.query(
'SELECT * FROM orders WHERE id = $1',
[id]
);
if (!result.rows[0]) {
return { conflict: 'row_deleted' };
}
const serverOrder = result.rows[0];
// Rule 1: Shipped orders are immutable
if (serverOrder.status === 'shipped' || serverOrder.status === 'completed') {
return {
conflict: 'order_locked',
message: 'Cannot modify shipped or completed orders'
};
}
// Rule 2: Validate status transitions
const validTransitions = {
'pending': ['processing', 'cancelled'],
'processing': ['shipped', 'cancelled'],
'shipped': ['completed'],
'completed': [],
'cancelled': []
};
if (opData.status &&
!validTransitions[serverOrder.status]?.includes(opData.status)) {
return {
conflict: 'invalid_transition',
message: `Cannot change status from ${serverOrder.status} to ${opData.status}`
};
}
// Rule 3: Price changes need approval flag
if (opData.price !== undefined &&
opData.price !== serverOrder.price &&
!opData.manager_approved) {
return {
conflict: 'approval_required',
message: 'Price changes require manager approval'
};
}
// Rule 4: Stock level must be positive
if (opData.quantity !== undefined && opData.quantity < 0) {
return {
conflict: 'invalid_quantity',
message: 'Quantity cannot be negative'
};
}
// All validations passed
const updateFields = [];
const updateValues = [];
let paramCount = 1;
for (const [field, value] of Object.entries(opData)) {
if (field === 'id') continue;
updateFields.push(`${field} = $${paramCount}`);
updateValues.push(value);
paramCount++;
}
updateValues.push(id);
await db.query(
`UPDATE orders SET ${updateFields.join(', ')} WHERE id = $${paramCount}`,
updateValues
);
return { success: true };
}
```
***
## Strategy 5: Server-Side Conflict Recording
Sometimes you can’t automatically fix a conflict. Both versions might be valid, and you need a human to choose. In those cases you record the conflict instead of picking a winner. You save both versions in a `write_conflicts` table and sync that back to the client so the user can decide.
The flow is simple: detect the conflict, store the client and server versions, surface it in the UI, and let the user choose or merge. After they resolve it, you mark the conflict as handled.
This is the safest option for high-stakes data where losing either version isn’t acceptable, like medical records, legal documents, or financial entries. The tradeoff is extra UI work and shifting the final decision to the user.
### Step 1: Create Conflicts Table
**Source database (Postgres):**
```sql theme={null}
CREATE TABLE write_conflicts (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
table_name TEXT NOT NULL,
row_id UUID NOT NULL,
conflict_type TEXT NOT NULL,
client_data JSONB NOT NULL,
server_data JSONB NOT NULL,
resolved BOOLEAN DEFAULT FALSE,
user_id UUID NOT NULL,
created_at TIMESTAMPTZ DEFAULT NOW()
);
```
### Step 2: Sync Conflicts to Clients
**Sync Streams / Sync Rules:**
```yaml theme={null}
config:
edition: 3
streams:
user_data:
queries:
- SELECT * FROM tasks WHERE user_id = auth.user_id()
- SELECT * FROM write_conflicts WHERE user_id = auth.user_id() AND NOT resolved
```
```yaml theme={null}
bucket_definitions:
user_data:
parameters:
- SELECT request.user_id() as user_id
data:
- SELECT * FROM tasks WHERE user_id = bucket.user_id
- SELECT * FROM write_conflicts WHERE user_id = bucket.user_id AND resolved = FALSE
```
### Step 3: Record Conflicts in Backend
**Backend API (Node.js):**
```javascript theme={null}
async function handleUpdateWithConflictRecording(operation, userId) {
const { id, opData } = operation;
const result = await db.query(
'SELECT * FROM tasks WHERE id = $1',
[id]
);
if (!result.rows[0]) {
return { conflict: 'row_deleted' };
}
const serverData = result.rows[0];
const clientModifiedAt = opData.modified_at;
const serverModifiedAt = serverData.modified_at;
// Detect conflict
if (new Date(clientModifiedAt) < new Date(serverModifiedAt)) {
// Record for manual resolution
await db.query(
`INSERT INTO write_conflicts
(table_name, row_id, conflict_type, client_data, server_data, user_id)
VALUES ($1, $2, $3, $4, $5, $6)`,
[
'tasks',
id,
'update_conflict',
JSON.stringify(opData),
JSON.stringify(serverData),
userId
]
);
// Don't apply the update - let user resolve it
return { conflict: 'recorded' };
}
// No conflict - apply update
await db.query(
'UPDATE tasks SET title = $1, status = $2 WHERE id = $3',
[opData.title, opData.status, id]
);
return { success: true };
}
```
### Step 4: Build Resolution UI
**Client UI (React):**
```typescript theme={null}
import { useQuery } from '@powersync/react';
import { powerSync } from './db';
function ConflictResolver() {
const { data: conflicts } = useQuery(
'SELECT * FROM write_conflicts WHERE resolved = FALSE'
);
const resolveConflict = async (
conflictId: string,
useClientVersion: boolean
) => {
const conflict = conflicts.find(c => c.id === conflictId);
const clientData = JSON.parse(conflict.client_data);
if (useClientVersion) {
// Reapply client's changes
const fields = Object.keys(clientData).filter(k => k !== 'id');
const updates = fields.map(f => `${f} = ?`).join(', ');
await powerSync.execute(
`UPDATE ${conflict.table_name} SET ${updates} WHERE id = ?`,
[...fields.map(f => clientData[f]), conflict.row_id]
);
}
// If using server version, it's already applied
// Mark as resolved
await powerSync.execute(
'UPDATE write_conflicts SET resolved = TRUE WHERE id = ?',
[conflictId]
);
};
if (!conflicts || conflicts.length === 0) {
return null;
}
return (
  <div>
    <h3>⚠️ {conflicts.length} Conflict(s) Need Your Attention</h3>
    {conflicts.map(conflict => {
      const clientData = JSON.parse(conflict.client_data);
      const serverData = JSON.parse(conflict.server_data);
      return (
        <div key={conflict.id}>
          <h4>Conflict in {conflict.table_name}</h4>
          <p>from {new Date(conflict.created_at).toLocaleString()}</p>
          <h5>Your Changes:</h5>
          <ul>
            {Object.entries(clientData).map(([key, value]) => (
              <li key={key}>{key}: {JSON.stringify(value)}</li>
            ))}
          </ul>
          <button onClick={() => resolveConflict(conflict.id, true)}>
            Keep My Version
          </button>
          <h5>Server Version:</h5>
          <ul>
            {Object.entries(serverData).map(([key, value]) => (
              <li key={key}>{key}: {JSON.stringify(value)}</li>
            ))}
          </ul>
          <button onClick={() => resolveConflict(conflict.id, false)}>
            Keep Server Version
          </button>
        </div>
      );
    })}
  </div>
);
}
```
***
## Strategy 6: Change-Level Status Tracking
This approach works differently. Instead of merging everything in one atomic update, you log each field change as its own row in a separate table. If a user edits the title of a task, you still apply an optimistic update to the main table, but you also write a row to a `field_changes` table that records who changed what and to which value.
Your backend then processes these changes asynchronously. Each one gets a status like `pending`, `applied`, or `failed`. If a change fails validation, you mark it as `failed` and surface the error in the UI. The user can see exactly which fields succeeded and which didn’t, and retry the failed ones without resubmitting everything.
This gives you excellent visibility. You get a clear history of every change, who made it, and when it happened. The cost is extra writes, since every field update creates an additional log entry. But for compliance-heavy systems or any app that needs detailed auditing, the tradeoff could be worth it.
The implementation below shows the full version with complete status tracking. If you don't need all that complexity, see the simpler variations at the end of this section.
### Step 1: Create Change Log Table
**Source database (Postgres):**
```sql theme={null}
CREATE TABLE field_changes (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
table_name TEXT NOT NULL,
row_id UUID NOT NULL,
field_name TEXT NOT NULL,
new_value TEXT,
status TEXT DEFAULT 'pending', -- 'pending', 'applied', 'failed'
error_message TEXT,
user_id UUID NOT NULL,
created_at TIMESTAMPTZ DEFAULT NOW()
);
```
### Step 2: Client Writes to Both Tables
**Client code:**
```typescript theme={null}
async function updateTaskField(
taskId: string,
field: string,
newValue: any,
powerSync: PowerSyncDatabase
) {
await powerSync.writeTransaction(async (tx) => {
// Optimistic update to main table
await tx.execute(
`UPDATE tasks SET ${field} = ? WHERE id = ?`,
[newValue, taskId]
);
// Log the change for server tracking
await tx.execute(
`INSERT INTO field_changes
(table_name, row_id, field_name, new_value, user_id)
VALUES (?, ?, ?, ?, ?)`,
['tasks', taskId, field, String(newValue), getCurrentUserId()]
);
});
}
```
### Step 3: Backend Processes Changes
**Backend API (Node.js):**
```javascript theme={null}
async function processFieldChanges() {
const result = await db.query(
`SELECT * FROM field_changes
WHERE status = 'pending'
ORDER BY created_at ASC
LIMIT 100`
);
for (const change of result.rows) {
try {
// Validate the change
const isValid = await validateFieldChange(change);
if (!isValid.valid) {
await db.query(
`UPDATE field_changes
SET status = 'failed', error_message = $1
WHERE id = $2`,
[isValid.reason, change.id]
);
continue;
}
// Apply to main table (validate table_name and field_name against an
// allowlist before interpolating them into SQL)
await db.query(
`UPDATE ${change.table_name}
SET ${change.field_name} = $1
WHERE id = $2`,
[change.new_value, change.row_id]
);
// Mark as applied
await db.query(
`UPDATE field_changes SET status = 'applied' WHERE id = $1`,
[change.id]
);
} catch (error) {
await db.query(
`UPDATE field_changes
SET status = 'failed', error_message = $1
WHERE id = $2`,
[error.message, change.id]
);
}
}
}
async function validateFieldChange(change) {
// Example validation
if (change.field_name === 'price' && parseFloat(change.new_value) < 0) {
return { valid: false, reason: 'Price cannot be negative' };
}
return { valid: true };
}
```
### Step 4: Display Change Status
**Client UI (React):**
```typescript theme={null}
function TaskEditor({ taskId }: { taskId: string }) {
const { data: pendingChanges } = useQuery(
`SELECT * FROM field_changes
WHERE row_id = ?
AND table_name = 'tasks'
AND status IN ('pending', 'failed')
ORDER BY created_at DESC`,
[taskId]
);
const retryChange = async (changeId: string) => {
await powerSync.execute(
'UPDATE field_changes SET status = ? WHERE id = ?',
['pending', changeId]
);
};
return (
  <div>
    {/* Your task editing form */}
    {pendingChanges && pendingChanges.length > 0 && (
      <ul>
        {pendingChanges.map(change => (
          <li key={change.id}>
            {change.status === 'pending' && (
              <span>⏳ Syncing {change.field_name}...</span>
            )}
            {change.status === 'failed' && (
              <span>
                ❌ Failed to update {change.field_name}: {change.error_message}
                <button onClick={() => retryChange(change.id)}>Retry</button>
              </span>
            )}
          </li>
        ))}
      </ul>
    )}
  </div>
);
}
```
### Other Variations
The implementation above syncs the `field_changes` table bidirectionally, giving you full visibility into change status on the client. But there are two simpler approaches that reduce overhead when you don't need complete status tracking:
#### Insert-Only (Fire and Forget)
For scenarios where you just need to record changes without tracking their status. For example, logging analytics events or recording simple increment/decrement operations.
How it works:
* Mark the table as `insertOnly: true` in your client schema
* Don't include the `field_changes` table in your Sync Rules
* Changes are uploaded to the server but never downloaded back to clients
**Client schema:**
```typescript theme={null}
const fieldChanges = new Table(
{
table_name: column.text,
row_id: column.text,
field_name: column.text,
new_value: column.text,
user_id: column.text
},
{
insertOnly: true // Only allows INSERT operations
}
);
```
**When to use:** Analytics logging, audit trails that don't need client visibility, simple increment/decrement where conflicts are rare.
**Tradeoff:** No status visibility on the client. You can't show pending/failed states or implement retry logic.
#### Pending-Only (Temporary Tracking)
For scenarios where you want to show sync status temporarily but don't need a permanent history on the client.
How it works:
* Use a normal table on the client (not `insertOnly`)
* Don't include the `field_changes` table in your Sync Rules
* Pending changes stay on the client until they're uploaded and the server processes them
* Once the server processes a change and PowerSync syncs the next checkpoint, the change automatically disappears from the client
**Client schema:**
```typescript theme={null}
const pendingChanges = new Table({
table_name: column.text,
row_id: column.text,
field_name: column.text,
new_value: column.text,
status: column.text,
user_id: column.text
});
```
**Show pending indicator:**
```typescript theme={null}
function SyncIndicator({ taskId }: { taskId: string }) {
const { data: pending } = useQuery(
`SELECT COUNT(*) as count FROM pending_changes
WHERE row_id = ? AND status = 'pending'`,
[taskId]
);
if (!pending?.[0]?.count) return null;
return (
  <span>
    ⏳ {pending[0].count} change{pending[0].count > 1 ? 's' : ''} syncing...
  </span>
);
}
```
**When to use:** Showing "syncing..." indicators, temporary status tracking without long-term storage overhead, cases where you want automatic cleanup after sync.
**Tradeoff:** Can't show detailed server-side error messages (unless the server writes to a separate errors table that *is* in Sync Rules). No long-term history on the client.
## Strategy 7: Cumulative Operations (Inventory)
For scenarios like inventory management, simply replacing values causes data loss. When two clerks simultaneously sell the same item while offline, both sales must be honored. The solution is to treat certain fields as **deltas** rather than absolute values: you subtract incoming quantities from the current stock rather than replacing the count.
This requires your backend to recognize which operations should be cumulative. For inventory quantity changes, you apply the delta (e.g., `-3 units`) to the current value rather than setting it directly. This ensures all concurrent sales are properly recorded without overwriting each other.
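The core of the approach fits in a few lines. This sketch (an illustrative `applyDelta` helper, not a PowerSync API) shows why deltas compose where absolute writes collide:

```javascript theme={null}
// Pure sketch of delta semantics: both offline sales are applied in turn,
// instead of the second absolute write clobbering the first.
function applyDelta(currentQuantity, delta) {
  const next = currentQuantity + delta;
  if (next < 0) return { conflict: 'insufficient_stock', currentQuantity };
  return { success: true, newQuantity: next };
}

// Two clerks each sell 3 of 10 units while offline:
const afterFirstSale = applyDelta(10, -3).newQuantity;            // 7
const afterSecondSale = applyDelta(afterFirstSale, -3).newQuantity; // 4
console.log(afterSecondSale); // 4 - both sales honored
// An oversell is rejected rather than driving stock negative:
console.log(applyDelta(2, -3)); // { conflict: 'insufficient_stock', currentQuantity: 2 }
```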
### Database Schema
**Source database (Postgres):**
```sql theme={null}
CREATE TABLE inventory (
id UUID PRIMARY KEY,
product_id UUID NOT NULL,
quantity INTEGER NOT NULL DEFAULT 0,
last_updated TIMESTAMPTZ DEFAULT NOW()
);
-- Prevent negative inventory
ALTER TABLE inventory ADD CONSTRAINT positive_quantity CHECK (quantity >= 0);
```
### Backend: Delta Detection and Application
The key is detecting when an operation should be treated as a delta versus an absolute value. You can identify this through table/field combinations, metadata flags, or operation patterns.
**Backend API (Node.js):**
```javascript theme={null}
async function handleInventoryOperation(db, operation) {
const { table, id, op, opData } = operation;
// Identify cumulative fields for specific tables
if (table === 'inventory' && 'quantity' in opData) {
return await applyInventoryDelta(db, operation);
}
// Default handling for other fields/tables
return await handleGenericOperation(db, operation);
}
async function applyInventoryDelta(db, operation) {
const { id, opData } = operation;
const quantityChange = opData.quantity; // This is the delta, not absolute value
// Get current inventory
const result = await db.query(
'SELECT quantity FROM inventory WHERE id = $1',
[id]
);
if (!result.rows[0]) {
return {
conflict: 'inventory_not_found',
message: `Inventory ${id} does not exist`
};
}
const currentQuantity = result.rows[0].quantity;
const newQuantity = currentQuantity + quantityChange;
// Validate: prevent negative inventory
if (newQuantity < 0) {
console.warn(`Insufficient stock: ${id} has ${currentQuantity}, attempted change: ${quantityChange}`);
return {
conflict: 'insufficient_stock',
message: `Cannot reduce inventory by ${Math.abs(quantityChange)}. Only ${currentQuantity} available.`,
currentQuantity
};
}
// Apply the delta atomically
await db.query(
`UPDATE inventory
SET quantity = quantity + $1,
last_updated = NOW()
WHERE id = $2`,
[quantityChange, id]
);
return {
success: true,
newQuantity,
previousQuantity: currentQuantity
};
}
```
### Client Implementation
On the client side, you need to ensure updates are sent as deltas, not absolute values. When a sale occurs, send the change amount:
**Client code:**
```typescript theme={null}
// When selling 3 units
await powerSync.execute(
'UPDATE inventory SET quantity = quantity - ? WHERE id = ?',
[3, inventoryId] // Send -3 as the delta
);
```
The backend receives this as a PATCH operation where `opData.quantity = -3`, which it then adds to the current quantity rather than replacing it.
### Alternative Approaches
**1. Metadata Flags**: Include operation type in metadata to signal delta operations:
```typescript theme={null}
await powerSync.execute(
'UPDATE inventory SET quantity = ?, _metadata = ? WHERE id = ?',
[
-3,
JSON.stringify({ operation_type: 'delta' }),
inventoryId
]
);
```
Backend checks metadata and applies accordingly.
**2. Separate Transactions Table**: Track each quantity change as its own row, then aggregate them. This provides full audit history but requires syncing an additional table.
**3. Operation-Based Detection**: Infer cumulative operations from the pattern. Negative values likely indicate sales (deltas), while large positive values might be absolute restocks requiring different handling.
***
## Using Custom Metadata
Track additional context about operations using the `_metadata` column.
### Enable in Schema
**Client schema:**
```typescript theme={null}
const tasks = new Table(
{
title: column.text,
status: column.text,
},
{
trackMetadata: true // Enables _metadata column
}
);
```
### Write Metadata
**Client code:**
```typescript theme={null}
await powerSync.execute(
'UPDATE tasks SET title = ?, _metadata = ? WHERE id = ?',
[
'New title',
JSON.stringify({
source: 'mobile_app',
device: 'iPhone 12',
priority: 'high',
reason: 'customer_request'
}),
taskId
]
);
```
### Access in Backend
**Backend API (Node.js):**
```javascript theme={null}
async function processOperation(operation) {
const metadata = operation.metadata ? JSON.parse(operation.metadata) : {};
// Route high-priority operations differently
if (metadata.priority === 'high') {
await processHighPriority(operation);
return;
}
// Track which device made the change
console.log(`Change from: ${metadata.device || 'unknown'}`);
// Custom conflict resolution based on metadata
if (metadata.reason === 'customer_request') {
// Customer requests might override other updates
await forceApplyOperation(operation);
} else {
await standardProcessing(operation);
}
}
```
**Common use cases:**
* Track which device/app version made the change
* Flag operations requiring special handling
* Store user context (role, department)
* Implement source-based conflict resolution (mobile trumps web)
* Pass approval flags or business context
***
## Complete Backend Example
Here's how to tie it all together in a Node.js backend with Postgres.
**Backend API (Node.js + Express):**
```javascript theme={null}
import express from 'express';
import { Pool } from 'pg';
const app = express();
const pool = new Pool({
connectionString: process.env.DATABASE_URL
});
app.post('/api/data', async (req, res) => {
const { batch } = req.body;
const userId = req.user.id; // From auth middleware
const db = await pool.connect();
try {
await db.query('BEGIN');
for (const operation of batch) {
// Choose strategy based on table
if (operation.table === 'orders') {
await handleOrderOperation(db, operation, userId);
} else if (operation.table === 'tasks') {
await handleTaskOperation(db, operation, userId);
} else {
// Default handling
await handleGenericOperation(db, operation);
}
}
await db.query('COMMIT');
res.json({ success: true });
} catch (error) {
await db.query('ROLLBACK');
console.error('Operation failed:', error);
res.status(500).json({ error: error.message });
} finally {
db.release();
}
});
async function handleOrderOperation(db, op, userId) {
if (op.op === 'PUT') {
// Use business rule validation (Strategy 4)
const result = await validateOrderUpdate(db, op);
if (result.conflict) {
throw new Error(result.message);
}
} else if (op.op === 'PATCH') {
await handleOrderPatch(db, op, userId);
} else if (op.op === 'DELETE') {
await handleOrderDelete(db, op);
}
}
async function handleTaskOperation(db, op, userId) {
if (op.op === 'PUT' || op.op === 'PATCH') {
// Use timestamp detection with conflict recording (Strategy 1 + 5)
const result = await handleUpdateWithConflictRecording(db, op, userId);
if (result.conflict && result.conflict !== 'recorded') {
console.warn('Conflict detected:', result);
}
} else if (op.op === 'DELETE') {
await db.query('DELETE FROM tasks WHERE id = $1', [op.id]);
}
}
async function handleGenericOperation(db, op) {
// Default last-write-wins
// (validate op.table and field names against an allowlist before
// interpolating them into SQL - they originate from the client)
if (op.op === 'PUT') {
const fields = Object.keys(op.opData);
const values = Object.values(op.opData);
const placeholders = fields.map((_, i) => `$${i + 1}`).join(', ');
const updates = fields.map((f, i) => `${f} = $${i + 1}`).join(', ');
await db.query(
`INSERT INTO ${op.table} (id, ${fields.join(', ')})
VALUES ($${fields.length + 1}, ${placeholders})
ON CONFLICT (id) DO UPDATE SET ${updates}`,
[...values, op.id]
);
} else if (op.op === 'PATCH') {
const fields = Object.keys(op.opData);
const values = Object.values(op.opData);
const updates = fields.map((f, i) => `${f} = $${i + 1}`).join(', ');
await db.query(
`UPDATE ${op.table} SET ${updates} WHERE id = $${fields.length + 1}`,
[...values, op.id]
);
} else if (op.op === 'DELETE') {
await db.query(`DELETE FROM ${op.table} WHERE id = $1`, [op.id]);
}
}
app.listen(3000, () => {
console.log('Backend listening on port 3000');
});
```
***
## Best Practices
**Design for idempotency:**
Operations arrive multiple times. Check for existing records before inserting, use upserts, or track operation IDs to skip duplicates.
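A minimal sketch of operation-ID de-duplication (illustrative names; the Set is in-memory here, whereas a real backend would persist processed IDs, e.g. behind a unique constraint):

```javascript theme={null}
// Wrap a handler so replayed operations become no-ops.
function makeIdempotent(apply) {
  const seen = new Set();
  return (op) => {
    if (seen.has(op.id)) return { skipped: true };
    seen.add(op.id);
    return apply(op);
  };
}

let applied = 0;
const handler = makeIdempotent(() => ({ applied: ++applied }));
console.log(handler({ id: 'op-1' })); // { applied: 1 }
console.log(handler({ id: 'op-1' })); // { skipped: true } - the retry is a no-op
```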
**Test offline scenarios:**
Simulate two clients going offline, making conflicting changes, then syncing. Does your resolution strategy behave as expected?
**Provide clear UI feedback:**
Show sync status prominently. Users should know when their changes are pending, synced, or conflicted.
**Consider partial failures:**
If batch processing fails midway, how do you recover? Use database transactions and mark progress carefully.
**Log conflicts for analysis:**
Track how often conflicts occur and why. This data helps you improve UX or adjust resolution strategies.
**Leverage CRDTs for collaborative docs:**
For scenarios with real-time collaboration, consider CRDTs to automatically handle concurrent edits. For information on CRDTs, see [our separate guide](/client-sdks/advanced/crdts).
**Collaborative editing without using CRDTs:**
You can use PowerSync for collaborative text editing without the complexity of CRDTs. See Matthew Weidner's blog post on [collaborative text editing using PowerSync](https://www.powersync.com/blog/collaborative-text-editing-over-powersync).
# Data Pipelines
Source: https://docs.powersync.com/handling-writes/custom-write-checkpoints
Use Custom Write Checkpoints to handle asynchronous data uploads, as in chained data pipelines.
**Availability**:
Custom Write Checkpoints are available for customers on our [Team and Enterprise](https://www.powersync.com/pricing) plans.
To ensure [consistency](/architecture/consistency), PowerSync relies on Write Checkpoints. These checkpoints ensure that clients have uploaded their own local changes/mutations to the server before applying downloaded data from the server to the local database.
The essential requirement is that the client must get a Write Checkpoint after uploading its last write/mutation. Then, when downloading data from the server, the client checks whether the Write Checkpoint is part of the largest [sync checkpoint](https://github.com/powersync-ja/powersync-service/blob/main/docs/sync-protocol.md) received from the server (i.e. from the PowerSync Service). If it is, the client applies the server-side state to the local database.
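The gate described above amounts to a single comparison. The sketch below uses illustrative names; the actual check lives inside the PowerSync client SDK:

```javascript theme={null}
// Downloaded data is only applied once the client's last Write Checkpoint
// is covered by the sync checkpoint received from the server.
function canApplyCheckpoint(lastWriteCheckpoint, syncCheckpoint) {
  // Nothing uploaded yet, or the server has seen our last write: safe to apply
  return lastWriteCheckpoint === null || lastWriteCheckpoint <= syncCheckpoint;
}

console.log(canApplyCheckpoint(10n, 12n)); // true: uploads are covered
console.log(canApplyCheckpoint(15n, 12n)); // false: wait for replication
```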
The default Write Checkpoints implementation relies on uploads being acknowledged *synchronously*, i.e. the change persists in the source database (to which PowerSync is connected) before the [`uploadData` call](/configuration/app-backend/client-side-integration) completes.
Problems occur if the persistence in the source database happens *asynchronously*. If the client's upload is meant to mutate the source database (and eventually does), but this is delayed, it will effectively seem as if the client's uploaded changes were reverted on the server, and then applied again thereafter.
Chained *data pipelines* are a common example of asynchronous uploads -- e.g. data uploads are first written to a different upstream database, or a separate queue for processing, and then finally replicated to the 'source database' (to which PowerSync is connected).
For example, consider the following data pipeline:
1. The client makes a change locally and the local database is updated.
2. The client uploads this change to the server.
3. The server resolves the request and writes the change into an intermediate database (not the source database yet).
4. The client thinks the upload is complete (i.e. persisted into the source database). It requests a Write Checkpoint from the PowerSync Service.
5. The PowerSync Service increments the replication `HEAD` in the source database, and creates a Write Checkpoint for the client. The Write Checkpoint number is returned and recorded in the client.
6. The PowerSync Service replicates past the previous replication `HEAD` (but the changes are still not present in the source database).
7. It should be fine for the client to apply the state of the server to the local database. But the server state does not include the client's uploaded changes mentioned in #2. This is the same as if the client's uploaded changes were rejected (not applied) by the server. This results in the client reverting the changes in its local database.
8. Eventually the change is written to the source database, and increments the replication `HEAD`.
9. The PowerSync Service replicates this change and sends it to the client. The client then reapplies the changes to its local database.
In the above case, the client may see the Write Checkpoint before the data has been replicated. This will cause the client to revert its changes, then apply them again later when it has actually replicated, causing data to "flicker" in the app.
For these use cases, Custom Write Checkpoints should be implemented.
## Custom Write Checkpoints
*Custom Write Checkpoints* allow the developer to define Write Checkpoints and insert them into the replication stream directly, instead of relying on the PowerSync Service to create and return them. An example of this is having the backend persist Write Checkpoints to a dedicated table which is processed as part of the replication stream.
The PowerSync Service then needs to process the (ordered) replication events and correlate the checkpoint table changes to Write Checkpoint events.
## Example Implementation
A self-hosted Node.js demo with Postgres is available in PowerSync's demo repositories.
## Implementation Details
This outlines what a Custom Write Checkpoints implementation entails.
### Custom Write Checkpoint Table
Create a dedicated `checkpoints` table, which should contain the following checkpoint payload information in some form:
```TypeScript theme={null}
export type CheckpointPayload = {
  /**
   * The user account id
   */
  user_id: string;
  /**
   * The client id relating to the user account.
   * A single user can have multiple clients.
   * A client is analogous to a device session.
   * Checkpoints are tracked separately for each `user_id` + `client_id`.
   */
  client_id: string;
  /**
   * A strictly increasing Write Checkpoint identifier.
   * This number is generated by the application backend.
   */
  checkpoint: bigint;
};
```
### Replication Requirements
Replication events for the Custom Write Checkpoint table (`checkpoints` in this example) need to be enabled.
For Postgres, this involves adding the table to the [PowerSync logical replication publication](/configuration/source-db/setup), for example:
```SQL theme={null}
create publication powersync for table public.lists, public.todos, public.checkpoints;
```
### Sync Rules Requirements
You need to enable the `write_checkpoints` sync event in your Sync Rules. This event should map the rows from the `checkpoints` table to the `CheckpointPayload` payload.
```YAML theme={null}
# sync-rules.yaml

# Register the custom write_checkpoints event
event_definitions:
  write_checkpoints:
    payloads:
      # This defines where the replicated Custom Write Checkpoints should be extracted from
      - SELECT user_id, checkpoint, client_id FROM checkpoints

# Define Sync Rules as usual
bucket_definitions:
  global:
    data:
      ...
```
### Application
Your application should handle Custom Write Checkpoints on both the frontend and backend.
#### Frontend
Your client backend connector should make a call to the application backend to create a Custom Write Checkpoint record after uploading items in the `uploadData` method. The Write Checkpoint number should be supplied to the CRUD transactions' `complete` method.
```TypeScript theme={null}
async function uploadData(database: AbstractPowerSyncDatabase): Promise<void> {
  const transaction = await database.getNextCrudTransaction();
  if (!transaction) {
    return;
  }

  // Get the unique client ID from the PowerSync Database SQLite storage
  const clientId = await database.getClientId();

  for (const operation of transaction.crud) {
    // Upload the items to the application backend
    // ....
  }

  await transaction.complete(await getCheckpoint(clientId));
}

async function getCheckpoint(clientId: string): Promise<string> {
  /**
   * Should perform a request to the application backend which should create the
   * Write Checkpoint record and return the corresponding checkpoint number.
   */
  return "the Write Checkpoint number from the request";
}
```
#### Backend
The backend should create a Write Checkpoint record when the client requests it. The record should automatically increment the Write Checkpoint number for the associated `user_id` and `client_id`.
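As an illustrative sketch, the increment logic boils down to a counter keyed on `user_id` + `client_id`. The in-memory model below is hypothetical (not a PowerSync API); the Postgres example that follows shows the same logic as an upsert:

```typescript
// Illustrative in-memory equivalent of the checkpoint upsert:
// each (user_id, client_id) pair has its own strictly increasing counter.
const checkpoints = new Map<string, bigint>();

function createCheckpoint(userId: string, clientId: string): bigint {
  const key = `${userId}/${clientId}`;
  const next = (checkpoints.get(key) ?? 0n) + 1n;
  checkpoints.set(key, next);
  return next;
}
```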
#### Postgres Example
With the following table defined in the database...
```SQL theme={null}
CREATE TABLE checkpoints (
  user_id VARCHAR(255),
  client_id VARCHAR(255),
  checkpoint BIGINT,
  PRIMARY KEY (user_id, client_id)
);
```
...the backend should have a route which creates `checkpoints` records:
```TypeScript theme={null}
router.put('/checkpoint', async (req, res) => {
  if (!req.body) {
    res.status(400).send({
      message: 'Invalid body provided'
    });
    return;
  }

  const client = await pool.connect();
  // These could be obtained from the session
  const { user_id = 'UserID', client_id = '1' } = req.body;
  try {
    const response = await client.query(
      `
      INSERT INTO checkpoints (user_id, client_id, checkpoint)
      VALUES ($1, $2, 1)
      ON CONFLICT (user_id, client_id)
      DO UPDATE SET checkpoint = checkpoints.checkpoint + 1
      RETURNING checkpoint;
      `,
      [user_id, client_id]
    );

    // Return the Write Checkpoint number
    res.status(200).send({
      checkpoint: response.rows[0].checkpoint
    });
  } finally {
    // Release the connection even if the query fails
    client.release();
  }
});
```
An example implementation can be seen in the [Node.js backend demo](https://github.com/powersync-ja/powersync-nodejs-backend-todolist-demo/blob/main/src/api/data.js), including examples for [MongoDB](https://github.com/powersync-ja/powersync-nodejs-backend-todolist-demo/blob/main/src/persistance/mongo/mongo-persistance.js) and [MySQL](https://github.com/powersync-ja/powersync-nodejs-backend-todolist-demo/blob/main/src/persistance/mysql/mysql-persistance.js).
# Handling Update Conflicts
Source: https://docs.powersync.com/handling-writes/handling-update-conflicts
What happens when two users update the same records while offline?
**The default behavior is essentially "last write wins", but this can be** [**customized by the developer**](/handling-writes/custom-conflict-resolution)**.**
The upload queue on the client stores three types of operations:
1. PUT / Create new row — contains the value for each non-null column
2. PATCH / Update existing row — contains the ID, and value of each changed column
3. DELETE / Delete existing row — contains the ID
It is [up to your app backend](/handling-writes/writing-client-changes) to implement these operations and associated conflict handling.
The operations must be idempotent — i.e. the backend may receive the same operation multiple times in some scenarios, and must handle that appropriately.
* A per-client incrementing operation ID is included with each operation that can be used to deduplicate operations, and/or the backend can implement the operations in an idempotent way (e.g. ignore DELETE on a row that is already deleted).
A conflict may arise when two clients update the same record before seeing the other client’s update, or one client deletes the record while the other updates it.
Typically, the backend should be implemented to handle writes as follows:
1. Deletes always win: If one client deletes a row, any future updates to that row are ignored. The row may be created again with the same ID.
2. For multiple concurrent updates, the last update (as received by the server) to each individual field wins.
1. If you require different behavior to "last write wins", implement [custom conflict resolution](/handling-writes/custom-conflict-resolution).
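These rules can be modeled as a small reducer over incoming upload operations. The following is an illustrative in-memory sketch only; the `Store`, `deleted` set, and `applyOperation` helper are hypothetical names, not part of any PowerSync API:

```typescript
type Operation =
  | { op: 'PUT'; id: string; data: Record<string, unknown> }
  | { op: 'PATCH'; id: string; data: Record<string, unknown> }
  | { op: 'DELETE'; id: string };

type Store = Map<string, Record<string, unknown>>;

function applyOperation(store: Store, deleted: Set<string>, op: Operation): void {
  switch (op.op) {
    case 'PUT':
      // A row may be created again with the same ID after a delete.
      deleted.delete(op.id);
      store.set(op.id, { ...op.data });
      break;
    case 'PATCH': {
      // Deletes win: ignore updates to rows that have been deleted.
      if (deleted.has(op.id)) return;
      const existing = store.get(op.id);
      if (!existing) return; // Idempotent: nothing to update.
      // Last write wins per field: later values overwrite earlier ones.
      store.set(op.id, { ...existing, ...op.data });
      break;
    }
    case 'DELETE':
      // Idempotent: deleting an already-deleted row is a no-op.
      store.delete(op.id);
      deleted.add(op.id);
      break;
  }
}
```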
The server could implement some validations. For example, the server could have a record of orders, and once an order is marked as "completed", reject any further updates to the order.
Future versions may include support for custom operations, e.g. "increment column by 1".
### Using CRDTs to Merge Updates Automatically
CRDT data structures such as [Yjs](https://github.com/yjs/yjs) can be stored and synced using PowerSync, allowing you to build collaborative apps that merge users' updates automatically.
See the [CRDTs](/client-sdks/advanced/crdts) section for more detail.
Built-in support for CRDT operations in PowerSync may also be added in the future.
# Handling Write / Validation Errors
Source: https://docs.powersync.com/handling-writes/handling-write-validation-errors
The general approach is that for transient errors (e.g. server or database unavailable), the changes are kept in the client-side upload queue, and retried at 5 second intervals, keeping the original order. In the future it will be possible to control the retry behavior.
For validation errors or write conflicts (see the definition of this below in [Technical Details](/handling-writes/handling-write-validation-errors#additional-technical-details)), changes are automatically rolled back on the client.
Custom logic can be implemented to propagate validation failures back to clients asynchronously. For additional details on how to do that, see the section on [Custom Conflict Resolution.](/handling-writes/custom-conflict-resolution)
## Additional Technical Details
For each change (or batch of changes), some possible scenarios are:
1. Change failed, for example due to network or temporary server error. The change is kept in the queue.
2. Change acknowledged and applied on the server. The client syncs back the change, which would match what the client already had.
3. Change acknowledged but rejected (e.g. validation error). The client rolls back the change.
4. Change acknowledged and partially applied or otherwise altered. The client syncs back the state as applied on the server.
In all cases, PowerSync ensures that the client state is fully consistent with the server state, once the queue is empty.
### Backend implementation recommendations
The backend should respond with "success" (HTTP 2xx) even in the case of write conflicts or validation failures, unless developer intervention is desired.
Error responses should be reserved for:
1. Network errors.
2. Temporary server errors (e.g. high load, or database unavailable).
3. Unexpected bugs or schema mismatches, where the change should stay in the client-side queue.
If a bug triggers an error, it has to be fixed before the changes from the client can be processed. It is recommended to use an error reporting service on both the server and the client to be alerted of cases like this.
To propagate validation failures or write conflicts back to the client, either:
1. Include error details in the body of a success response (HTTP 2xx).
2. Write the details to a different table, asynchronously synchronized back to the client.
For more details on strategies, see:
#### Dead-letter queue
Optionally, the server can implement a "dead-letter queue":
* If a change cannot be processed due to a conflict, schema mismatch and/or bug, the change can be persisted in a separate queue on the backend.
* This can then be manually inspected and processed by the developer or administrator, instead of blocking the client.
* Note that this could result in out-of-order updates if the client continues sending updates, despite earlier updates being persisted in the dead-letter queue.
While the client could implement a dead-letter queue, this is not recommended, since this cannot easily be inspected by the developer. The information is also often not sufficient to present to the user in a friendly way or to allow manual conflict resolution.
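A minimal sketch of such a backend-side dead-letter queue is shown below; all names here are hypothetical and the `apply` callback stands in for your actual database write:

```typescript
type WriteOp = {
  op: 'PUT' | 'PATCH' | 'DELETE';
  table: string;
  id: string;
  data?: Record<string, unknown>;
};
type DeadLetter = { op: WriteOp; error: string; receivedAt: Date };

// Failed changes are parked here for manual inspection by a developer
// or administrator, instead of blocking the client's upload queue.
const deadLetterQueue: DeadLetter[] = [];

function processBatch(
  ops: WriteOp[],
  apply: (op: WriteOp) => void
): { applied: number; deadLettered: number } {
  let applied = 0;
  let deadLettered = 0;
  for (const op of ops) {
    try {
      apply(op);
      applied++;
    } catch (e) {
      // Park the failing change and continue with the rest of the batch.
      deadLetterQueue.push({ op, error: String(e), receivedAt: new Date() });
      deadLettered++;
    }
  }
  return { applied, deadLettered };
}
```

Note that, as the text above warns, continuing past a parked change can produce out-of-order updates if the client keeps sending writes that depend on it.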
## How changes are rolled back
There is no explicit "roll-back" operation on the client — but a similar effect is achieved by the internals of PowerSync. The core principle is that when the client completes a sync with an empty upload queue, the local database will be consistent with the server-side database.
This is achieved as follows:
1. The client keeps a copy of the data as synced from the server, and continuously updates this.
2. Once all the changes from the client are uploaded, and the local "server state" is up to date, it updates the local database with the local server state.
3. If the local change was applied by the server, it will be synced back and included in the local "server state".
4. If the local change was discarded by the server, the server state will not change, and the client will revert to the last known state.
5. If another conflicting write "won", that write will be present in the server state, and will overwrite the local changes.
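Conceptually, the state visible to the app is the last-synced server state with the pending upload queue replayed on top. The toy model below illustrates this with hypothetical names (it is not how the SDK is implemented internally):

```typescript
type Row = Record<string, unknown>;

// Toy model: the visible local state is the last-synced server state
// with all not-yet-acknowledged local changes replayed on top.
// `changes: null` represents a pending local delete.
function visibleState(
  serverState: Map<string, Row>,
  uploadQueue: { id: string; changes: Row | null }[]
): Map<string, Row> {
  const result = new Map(serverState);
  for (const entry of uploadQueue) {
    if (entry.changes === null) {
      result.delete(entry.id);
    } else {
      result.set(entry.id, { ...(result.get(entry.id) ?? {}), ...entry.changes });
    }
  }
  return result;
}
```

Once the queue is empty, the visible state is exactly the server state: if the server discarded a change, nothing remains to replay, which is the "roll-back" effect described above.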
# Writing Client Changes
Source: https://docs.powersync.com/handling-writes/writing-client-changes
Your backend application needs to expose an API endpoint to apply write operations to your backend source database that are received from the PowerSync Client SDK.
Your backend application receives the write operations based on how you defined your `uploadData()` function in the `PowerSyncBackendConnector` in your client-side app. See [Client-Side Integration](/configuration/app-backend/client-side-integration) for details.
Since you get to define the client-side `uploadData()` function as you wish, you have full control over how to structure your backend application API to accept write operations from the client. For example, you can have:
1. A single API endpoint that accepts a batch of write operations from the client, with minimal client-side processing.
2. Separate API endpoints based on the types of write operations. In your `uploadData()`, you can call the respective endpoints as needed.
3. A combination of the above.
You can also use any API style you want — e.g. REST, GraphQL, gRPC, etc.
It's important that your API endpoint be blocking/synchronous with underlying writes to the backend source database (Postgres, MongoDB, MySQL, or SQL Server).
In other words, don't place writes into something like a queue for processing later — process them immediately. For more details, see the explainer below.
PowerSync uses a server-authoritative architecture with a checkpoint system for conflict resolution and [consistency](/architecture/consistency). The client advances to a new write checkpoint after uploads have been processed, so if the client believes that the server has written changes into your backend source database (Postgres, MongoDB, MySQL, or SQL Server), but the next checkpoint does not contain your uploaded changes, those changes will be removed from the client. This could manifest as UI glitches for your end-users, where the changes disappear from the device for a few seconds and then re-appear.
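A sketch of this blocking shape, assuming a hypothetical `applyToDatabase` helper that writes directly to the source database (the names and types here are illustrative, not a prescribed PowerSync API):

```typescript
type CrudOp = {
  op: 'PUT' | 'PATCH' | 'DELETE';
  table: string;
  id: string;
  data?: Record<string, unknown>;
};

// Apply every operation to the source database *before* responding,
// so the client only advances its write checkpoint once the writes
// are actually visible to replication.
async function handleUploadBatch(
  ops: CrudOp[],
  applyToDatabase: (op: CrudOp) => Promise<void>
): Promise<{ status: number }> {
  for (const op of ops) {
    await applyToDatabase(op); // blocking: no background queue
  }
  return { status: 200 };
}
```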
### Write operations recorded on the client
The upload queue on the client stores three types of operations:
| Operation | Purpose | Contents | SQLite Statement |
| --------- | ------------------- | -------------------------------------------------------- | --------------------------------- |
| `PUT` | Create new row | Contains the value for each non-null column | Generated by `INSERT` statements. |
| `PATCH` | Update existing row | Contains the row `id`, and value of each changed column. | Generated by `UPDATE` statements. |
| `DELETE` | Delete existing row | Contains the row `id` | Generated by `DELETE` statements. |
### Recommendations
The PowerSync Client SDK does not prescribe any specific request/response format for your backend application API that accepts the write operations. You can implement it as you wish.
We do however recommend the following:
1. Use a batch endpoint to handle high volumes of write operations.
2. Use an error response (`5xx`) only when the write operations cannot be applied due to a temporary error (e.g. backend source database not available). In this scenario, the PowerSync Client SDK can retry uploading the write operation and it should succeed at a later time.
3. For validation errors or write conflicts, you should avoid returning an error response (`4xx`), since it will block the PowerSync client's upload queue. Instead, it is best to return a `2xx` response, and if needed, propagate the validation or other error message(s) back to the client, for example by:
1. Including the error details in the `2xx` response.
2. Writing the error(s) into a separate table/collection that is synced to the client, so that the client/user can handle the error(s).
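One possible shape for such a `2xx` response carrying per-operation outcomes is sketched below; the field names and the `validate` callback are illustrative, not a PowerSync convention:

```typescript
type OpResult =
  | { id: string; status: 'applied' }
  | { id: string; status: 'rejected'; reason: string };

// Always succeed at the HTTP level so the client's upload queue keeps
// moving; per-operation validation outcomes travel in the response body.
function buildUploadResponse(
  ops: { id: string; validate: () => string | null }[]
): { httpStatus: 200; results: OpResult[] } {
  const results: OpResult[] = ops.map((op) => {
    const reason = op.validate();
    return reason === null
      ? { id: op.id, status: 'applied' as const }
      : { id: op.id, status: 'rejected' as const, reason };
  });
  return { httpStatus: 200, results };
}
```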
For details on approaches, see:
For details on handling write conflicts, see:
### Example backend implementations
See our [Example Projects](/intro/examples#backend-examples) page for examples of custom backend implementations (e.g. Django, Node.js, Rails, etc.) that you can use as a guide for your implementation.
For Postgres developers, using [Supabase](/integrations/supabase/guide) is an easy alternative to a custom backend. Several of our example/demo apps demonstrate how to use [Supabase](https://supabase.com/) as the backend. These examples use the [PostgREST API](https://supabase.com/docs/guides/api) exposed by Supabase to upload write operations. Alternatively, Supabase's [Edge Functions](https://supabase.com/docs/guides/functions) can also be used.
# Neon + PowerSync
Source: https://docs.powersync.com/integrations/neon
Tutorial-style integration guide for creating synced / local-first / offline-first apps with Neon and PowerSync, using a demo notes web app written in TypeScript.
Used in conjunction with **Neon**, PowerSync enables developers to build synced, local-first & offline-first apps that are robust in poor network conditions and that have highly responsive frontends while relying on [Neon](https://neon.tech/) for their backend. This guide provides instructions for how to configure PowerSync for use with your Neon project.
Before you proceed, this guide assumes that you have already signed up for free accounts with both Neon and PowerSync Cloud (our cloud-hosted offering). If you haven't signed up for a **PowerSync** (Cloud) account yet, [click here](https://accounts.powersync.com/portal/powersync-signup?s=docs) (and if you haven't signed up for Neon yet, [do so now](https://console.neon.tech/signup)).
For web apps, this guide assumes that you have [pnpm](https://pnpm.io/installation#using-npm) installed.
This guide takes 10-15 minutes to complete.
## Architecture
Upon successful integration of Neon + PowerSync, your system architecture will look like this:
The local SQLite database embedded in the PowerSync SDK is automatically kept in sync with the Neon Postgres database (based on your Sync Streams as you will see later in this guide). Client-side data modifications are persisted in the local SQLite database as well as stored in an upload queue that gets processed via the Neon Data API when network connectivity is available. Therefore reads and writes can happen in the app regardless of whether the user is online or offline, by using the local SQLite database.
For more details on PowerSync's general architecture, [see here](/architecture/architecture-overview).
## Integration Guide/Tutorial Overview
We will follow these steps to get an offline-first 'Notes' demo app up and running:
* Create a Neon project with Auth and Data API
* Set up the database schema
* Configure logical replication
* Create connection to Neon
* Configure authentication
* Configure Sync Streams
Test the configuration using our provided PowerSync-Neon 'Notes' demo app.
## Configure Neon
### Create a Neon Project with Auth and Data API
1. Go to [pg.new](https://pg.new) to create a new Neon project.
2. In the Neon Console, navigate to your project and enable:
* **Neon Auth** — Go to the **Auth** page in the left sidebar and enable it
* **Data API** — Go to the **Data API** page in the left sidebar and enable it
### Set Up the Database
The demo app uses Drizzle ORM for schema management. The schema includes `notes` and `paragraphs` tables with Row Level Security (RLS) policies.
Clone the demo repository and run the migration:
```bash theme={null}
git clone https://github.com/powersync-ja/powersync-js.git
cd powersync-js
pnpm install
pnpm build:packages
cd demos/react-neon-tanstack-query-notes
```
Create a `.env` file in the project root with your database connection string:
```env theme={null}
DATABASE_URL=postgresql://user:password@your-project-id.pooler.region.neon.tech/neondb?sslmode=require
```
Find your connection string in the Neon Console → Dashboard → Connect → Connection string (select "Pooled connection") → Copy snippet.
Run the migration to create the tables and RLS policies:
```bash theme={null}
pnpm db:migrate
```
This creates the following schema:
```sql theme={null}
CREATE TABLE "notes" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"owner_id" text DEFAULT auth.user_id() NOT NULL,
"title" text DEFAULT 'untitled note' NOT NULL,
"created_at" timestamp with time zone DEFAULT now(),
"updated_at" timestamp with time zone DEFAULT now(),
"shared" boolean DEFAULT false
);
CREATE TABLE "paragraphs" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"note_id" uuid REFERENCES notes(id),
"content" text NOT NULL,
"created_at" timestamp with time zone DEFAULT now()
);
```
The migration also sets up RLS policies so users can only access their own notes (and shared notes).
### Configure Logical Replication, User and Publication
PowerSync uses logical replication to sync data from your Neon database.
### 1. Ensure logical replication is enabled
To [ensure logical replication is enabled](https://neon.tech/docs/guides/logical-replication-postgres#prepare-your-source-neon-database):
1. Select your project in the Neon Console.
2. On the Neon Dashboard, select **Settings**.
3. Select **Logical Replication**.
4. Click **Enable** to ensure logical replication is enabled.
### 2. Create a PowerSync database user
```sql theme={null}
-- Create a role/user with replication privileges for PowerSync
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;
-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role;
```
To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist).
### 3. Create `powersync` publication
```sql theme={null}
-- Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
```
Note that the PowerSync Service has to read all updates present in the publication, regardless of whether the table is referenced in your Sync Streams / Sync Rules definitions. This can cause large spikes in memory usage or introduce replication delays, so if you're dealing with large data volumes, you'll want to specify a comma-separated subset of tables to replicate instead of `FOR ALL TABLES`.
The snippet above replicates all tables and is the simplest way to get started in a dev environment.
## Configuring PowerSync
### Create a PowerSync Cloud Instance
When creating a project in the [PowerSync Dashboard](https://dashboard.powersync.com/), *Development* and *Production* instances of the PowerSync Service will be created by default. Select the instance you want to configure.
If you need to create a new instance, follow the steps below.
1. In the dashboard, select your project and open the instance selection dropdown. Click **Add Instance**.
2. Give your instance a name, such as "Production".
3. \[Optional] You can change the default cloud region from US to EU, JP (Japan), AU (Australia) or BR (Brazil) if desired.
* Note: Additional cloud regions will be considered on request, especially for customers on our Enterprise plan. Please [contact us](/resources/contact-us) if you need a different region.
4. Click **Create Instance**.
### Connect PowerSync to Your Neon Database
1. From your Neon Console, select **Connect** in the top navigation bar. Ensure the format is set to "Connection string", and click on "Copy snippet":
2. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to the **Database Connections** view.
3. Click **Connect to Source Database** and ensure the "Postgres" tab is selected.
4. Paste the connection string into the **URI** field. PowerSync will automatically parse this URI to populate the database connection details.
5. Update the **Username** and **Password** fields to use the `powersync_role` and password you created when configuring your Neon for PowerSync (see [Source Database Setup](/configuration/source-db/setup#neon)).
6. Note: PowerSync includes Neon's CA certificate by default, so you can use `verify-full` SSL mode without additional configuration.
7. Verify your setup by clicking **Test Connection** and resolve any errors.
8. Click **Save Connection**.
PowerSync will now create an isolated cloud environment for your instance. This typically takes a minute or two.
### Configure Neon Auth
After your database connection is configured, enable Neon Auth:
1. In the PowerSync Dashboard, go to the **Client Auth** view.
2. Check the **Development tokens** setting (useful for testing).
3. Populate the **"JWKS URI"** with the value from the **"JWKS URL"** field in the Neon Console → Auth → Configuration page.
4. Populate the **"JWT Audience"** with your Neon Auth project root URL (e.g., `https://ep-restless-resonance-adom1z4w.neonauth.c-2.us-east-1.aws.neon.tech`).
The `aud` field is very sensitive; be sure to enter it exactly as shown above, in particular without a trailing `/` character.
5. Click **Save and Deploy** to apply the changes.
### Configure Sync Streams
[Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) allow developers to control which data gets synced to which user devices using a SQL-like syntax in a YAML file. For the demo app, we're going to specify that each user can only see their own notes (plus any shared notes).
1. In the PowerSync Dashboard, select your project and instance and go to the **Sync Streams** view (shown as **Sync Rules** if using legacy Sync Rules).
2. Edit the sync config in the editor and replace the contents with the below:
```yaml theme={null}
config:
  edition: 3

streams:
  user_notes:
    auto_subscribe: true
    # Sync notes and paragraphs belonging to the authenticated user
    queries:
      - SELECT * FROM notes WHERE owner_id = auth.user_id()
      - SELECT paragraphs.* FROM paragraphs
          INNER JOIN notes ON notes.id = paragraphs.note_id
          WHERE notes.owner_id = auth.user_id()
  shared_notes:
    auto_subscribe: true
    # Sync all shared notes to all users (not recommended for production)
    queries:
      - SELECT * FROM notes WHERE shared = TRUE
      - SELECT paragraphs.* FROM paragraphs
          INNER JOIN notes ON notes.id = paragraphs.note_id
          WHERE notes.shared = TRUE
```
```yaml theme={null}
config:
  edition: 2

bucket_definitions:
  by_user:
    # Only sync rows belonging to the user
    parameters: SELECT id as note_id FROM notes WHERE owner_id = request.user_id()
    data:
      - SELECT * FROM notes WHERE id = bucket.note_id
      - SELECT * FROM paragraphs WHERE note_id = bucket.note_id
  # Sync all shared notes to all users (not recommended for production)
  shared_notes:
    parameters: SELECT id as note_id FROM notes WHERE shared = TRUE
    data:
      - SELECT * FROM notes WHERE id = bucket.note_id
      - SELECT * FROM paragraphs WHERE note_id = bucket.note_id
```
3. Click **"Validate"** and ensure there are no errors. This validates your sync config against your Postgres database.
4. Click **"Deploy"** to deploy your sync config.
* For additional information on PowerSync's Sync Streams, refer to the [Sync Streams](/sync/streams/overview) documentation.
* For legacy Sync Rules, refer to the [Sync Rules](/sync/rules/overview) documentation.
## Test Everything (Using Our Demo App)
In this step you'll test your setup using the 'Notes' demo app. This is a modified version of Neon's demo app.
### Configure Environment Variables
In the demo project directory (`powersync-js/demos/react-neon-tanstack-query-notes`), update your `.env` file with the following:
```env theme={null}
# Neon Data API URL
# Find this in Neon Console → Data API page → "API URL"
VITE_NEON_DATA_API_URL=https://your-project-id.data-api.neon.tech
# Neon Auth Base URL
# Find this in Neon Console → Auth page → "Auth URL"
VITE_NEON_AUTH_URL=https://your-project-id.auth.neon.tech
# PowerSync instance URL
# Find this in PowerSync Dashboard → Connect button
VITE_POWERSYNC_URL=https://your-instance.powersync.journeyapps.com
```
### Run the App
Start the development server:
```bash theme={null}
pnpm dev
```
Open [http://localhost:5173](http://localhost:5173) in your browser.
Once signed in to the demo app, you should see a blank list of notes, so go ahead and create a new note. Try disabling wifi on your device to test out the offline capabilities. Once back online, you should see the data automatically sync.
### Test Sync (Optional)
During development, you can use the **Sync Test** feature in the PowerSync Dashboard to validate your Sync Rules:
1. Click on **"Sync Test"** in the PowerSync Dashboard.
2. Enter the UUID of a user in your Neon Auth database to generate a test JWT.
3. Click **"Launch Sync Diagnostics Client"** to test the Sync Rules.
For more information, explore the [PowerSync docs](/) or join us on [our community Discord](https://discord.gg/powersync) where our team is always available to answer questions.
After deployment, update your Neon Auth settings to allow your Vercel domain. Go to Neon Console → Auth page and add your Vercel URL (e.g., `https://your-project.vercel.app`) to the allowed origins.
# Integrations Overview
Source: https://docs.powersync.com/integrations/overview
Learn how to integrate PowerSync with your favorite tools.
Currently, the following integration guides are available:
If you'd like to see an integration that is not currently available, [let us know on Discord](https://discord.gg/powersync).
# Serverpod + PowerSync
Source: https://docs.powersync.com/integrations/serverpod
Easily add offline-capable sync to your Serverpod projects with PowerSync
Used in conjunction with [Serverpod](https://serverpod.dev/), PowerSync enables developers to build local-first apps that are robust in poor network conditions
and that have highly responsive frontends while relying on Serverpod for shared models in a full-stack Dart project.
This guide walks you through configuring PowerSync within your Serverpod project.
## Overview
PowerSync works by:
1. Automatically streaming changes from your Postgres backend source database into a SQLite database on the client.
2. Collecting local writes that users have performed on the SQLite database, and allowing you to upload those writes to your backend.
See [Architecture Overview](/architecture/architecture-overview) for a full overview.
To integrate PowerSync into a Serverpod project, a few aspects need to be considered:
Your Serverpod models need to be persisted into a Postgres database.
PowerSync needs access to your Postgres database to stream changes to users.
To ensure each user only has access to the data they're supposed to see, Serverpod
authenticates users against PowerSync.
After configuring your clients, your Serverpod projects are offline-ready!
This guide shows all steps in detail. Here, we assume you're working with a fresh Serverpod project.
You can follow along by creating a `notes` project using the Serverpod CLI:
```
# If you haven't already, dart pub global activate serverpod_cli
serverpod create notes
```
Of course, all steps and migrations also apply to established projects.
## Database setup
Begin by configuring your Postgres database for PowerSync. PowerSync requires logical replication
to be enabled. With the `docker-compose.yaml` file generated by Serverpod, add a `command` to the `postgres`
service to enable this option.
This is also a good opportunity to add a health check, which helps PowerSync connect at the right time later:
```yaml theme={null}
services:
  # Development services
  postgres:
    image: pgvector/pgvector:pg16
    ports:
      - "8090:5432"
    command: ["postgres", "-c", "wal_level=logical"] # Added for PowerSync
    environment:
      POSTGRES_USER: postgres
      POSTGRES_DB: notes
      # ...
    healthcheck: # Added for PowerSync
      test: ["CMD-SHELL", "pg_isready -U postgres -d postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
    volumes:
      - notes_data:/var/lib/postgresql/data
```
You can also find sources for the completed demo [in this repository](https://github.com/powersync-community/powersync-serverpod-demo).
More information about setting up Postgres for PowerSync is available [here](/configuration/source-db/setup).
Next, configure existing models to be persisted in the database. In the template created by
Serverpod, edit `notes_server/lib/src/greeting.spy.yaml`:
```yaml theme={null}
### A greeting message which can be sent to or from the server.
class: Greeting
table: greeting # Added table key
fields:
  ### Important! Each model used with PowerSync needs to have a UUID id column.
  id: UuidValue,defaultModel=random,defaultPersist=random
  ### The user id owning this greeting, used for access control in PowerSync
  owner: String
  ### The greeting message.
  message: String
  ### The author of the greeting message.
  author: String
  ### The time when the message was created.
  timestamp: DateTime
```
PowerSync works best when ids are stable. And since clients can also create rows locally, using
randomized ids reduces the chance of collisions. This is why we prefer UUIDs over the default
incrementing key.
After making the changes, run `serverpod generate` and ignore the issues in `greeting_endpoint.dart` for now.
Instead, run `serverpod create-migration` and note the generated path:
```
$ serverpod create-migration
✓ Creating migration (87ms)
• Migration created: migrations/
✅ Done.
```
We will use the migration that adds the `greeting` table to also configure replication for PowerSync to hook into.
To do so, edit `notes_server/migrations//migration.sql`.
At the end of that file, after `COMMIT;`, add this:
```sql theme={null}
-- Create a role/user with replication privileges for PowerSync
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;
-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role;
```
To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist).
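For example, a restricted grant for this guide's schema (only the `greeting` table) might look like this instead of the broad `GRANT` above:

```sql
-- Restricted alternative: read access to specific tables only
GRANT SELECT ON public.greeting TO powersync_role;
```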
This is also a good place to set up a Postgres publication that a PowerSync Service will subscribe to:
```sql theme={null}
-- Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
```
Note that the PowerSync Service has to read all updates present in the publication, regardless of whether the table is referenced in your Sync Streams / Sync Rules definitions. This can cause large spikes in memory usage or introduce replication delays, so if you're dealing with large data volumes, you'll want to specify a comma-separated subset of tables to replicate instead of `FOR ALL TABLES`.
The snippet above replicates all tables and is the simplest way to get started in a dev environment.
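For example, a publication restricted to this guide's single table would look like:

```sql
-- Replicate only the greeting table instead of FOR ALL TABLES
CREATE PUBLICATION powersync FOR TABLE public.greeting;
```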
After adding these statements to `migration.sql`, also add them to `definition.sql`: Serverpod
runs that file when instantiating the database from scratch, and `migration.sql` would be ignored in that case.
## PowerSync configuration
PowerSync requires a service to process Postgres writes into a form that can be synced to clients.
Additionally, your Serverpod backend will be responsible for generating JWTs to authenticate clients as
they connect to this service.
To set that up, begin by generating an RSA key to sign these JWTs. In the server project, run
`dart pub add jose` to add a package supporting JWTs in Dart.
Then, create a `tool/generate_keys.dart` that prints a new key pair when run:
```dart theme={null}
import 'dart:convert';
import 'dart:math';
import 'package:jose/jose.dart';
void main() {
var generatedKey = JsonWebKey.generate('RS256').toJson();
final kid = 'powersync-${generateRandomString(8)}';
generatedKey = {...generatedKey, 'kid': kid};
print('''
PS_JWK_N: ${generatedKey['n']}
PS_JWK_E: ${generatedKey['e']}
PS_JWK_KID: $kid
''');
final encodedKeys = base64Encode(utf8.encode(json.encode(generatedKey)));
print('JWT signing keys for backend: $encodedKeys');
}
String generateRandomString(int length) {
final random = Random.secure();
final buffer = StringBuffer();
for (var i = 0; i < length; i++) {
const alphabet = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
buffer.writeCharCode(alphabet.codeUnitAt(random.nextInt(alphabet.length)));
}
return buffer.toString();
}
```
Run `dart run tool/generate_keys.dart` and save its output; it's needed for the next step as well.
For development, you can add the PowerSync Service to the compose file.
It needs access to the source database, a Postgres database to store intermediate data,
and the public half of the generated signing key.
```yaml theme={null}
services:
powersync:
restart: unless-stopped
image: journeyapps/powersync-service:latest
depends_on:
postgres:
condition: service_healthy
command: ["start", "-r", "unified"]
volumes:
- ./powersync.yaml:/config/config.yaml
environment:
POWERSYNC_CONFIG_PATH: /config/config.yaml
      # Use the credentials created in the previous step; /notes is the Postgres database name
PS_SOURCE_URI: "postgresql://powersync_role:myhighlyrandompassword@postgres:5432/notes"
PS_STORAGE_URI: "postgresql://powersync_role:myhighlyrandompassword@postgres:5432/powersync_storage"
      PS_JWK_N: # output from generate_keys.dart
PS_JWK_E: AQAB # output from generate_keys.dart
PS_JWK_KID: # output from generate_keys.dart
ports:
- 8095:8080
```
To configure PowerSync, create a file called `powersync.yaml` next to the compose file.
This file configures how PowerSync connects to the source database, how to authenticate users,
and which data to sync:
```yaml theme={null}
replication:
connections:
- type: postgresql
uri: !env PS_SOURCE_URI
# SSL settings
sslmode: disable # 'verify-full' (default) or 'verify-ca' or 'disable'
# Connection settings for bucket storage
storage:
type: postgresql
uri: !env PS_STORAGE_URI
sslmode: disable # 'verify-full' (default) or 'verify-ca' or 'disable'
# The port which the PowerSync API server will listen on
port: 8080
sync_config:
content: |
config:
edition: 3
streams:
      greetings:
        # For each user, sync all greetings they own.
auto_subscribe: true # Sync by default
query: SELECT * FROM greeting WHERE owner = auth.user_id()
client_auth:
audience: [powersync]
jwks:
keys:
- kty: RSA
n: !env PS_JWK_N
e: !env PS_JWK_E
alg: RS256
kid: !env PS_JWK_KID
```
More information on the available options can be found under [Service Configuration](/configuration/powersync-service/self-hosted-instances).
## Authentication
PowerSync processes the entire source database into [buckets](/architecture/powersync-service#bucket-system), an efficient representation
for sync. With the configuration shown here, there is one such bucket per user storing all `greeting`s owned by that user.
For security, it is crucial that each user only has access to their own bucket. This is why PowerSync gives you full control over authentication:
1. When a client connects to PowerSync, it fetches an authentication token from your Serverpod instance.
2. Your Dart backend logic returns a JWT describing what data the user should have access to.
3. In the `sync_config` section, you reference properties of these JWTs (such as `auth.user_id()`) to control which data is visible to the connecting clients.
In this guide, we will use a single virtual user for everything. For real projects, follow
[Serverpod documentation on authentication](https://docs.serverpod.dev/tutorials/guides/authentication).
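For reference, the payload of such a JWT decodes to claims like the following (illustrative values); `auth.user_id()` in the stream query resolves to the `sub` claim:

```json
{
  "sub": "global_user",
  "aud": ["powersync"],
  "iat": 1700000000,
  "exp": 1700000600
}
```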
PowerSync needs two endpoints, one to request a JWT and one to upload local writes from clients to the backend source database.
In `notes_server/lib/src/powersync_endpoint.dart`, create those endpoints:
```dart theme={null}
import 'dart:convert';
import 'dart:isolate';
import 'generated/protocol.dart';
import 'package:serverpod/serverpod.dart';
import 'package:jose/jose.dart';
class PowerSyncEndpoint extends Endpoint {
  Future<String> createJwt(Session session) async {
// TODO: Throw if the session is unauthenticated.
// TODO: Extract user-id from session outside
final userId = 'global_user';
final token = await Isolate.run(() => _createPowerSyncToken(userId));
// Also create default greeting if none exist for this user.
if (await Greeting.db.count(session) == 0) {
await Greeting.db.insertRow(
session,
Greeting(
owner: userId,
message: 'Hello from Serverpod and PowerSync',
author: 'admin',
timestamp: DateTime.now(),
),
);
}
return token;
}
  Future<void> createGreeting(Session session, Greeting greeting) async {
// TODO: Throw if the session is unauthenticated.
await Greeting.db.insertRow(session, greeting);
}
  Future<void> updateGreeting(Session session, UuidValue id,
      {String? message}) async {
// TODO: Throw if the session is unauthenticated, or if the user should not
// be able to update this greeting.
    await session.db.transaction((tx) async {
      final row = await Greeting.db.findById(session, id, transaction: tx);
      await Greeting.db.updateRow(session, row!.copyWith(message: message),
          transaction: tx);
    });
}
  Future<void> deleteGreeting(Session session, UuidValue id) async {
// TODO: Throw if the session is unauthenticated, or if the user should not
// be able to delete this greeting.
await Greeting.db.deleteWhere(session, where: (tbl) => tbl.id.equals(id));
}
}
Future<String> _createPowerSyncToken(String userId) async {
final decoded = _jsonUtf8.decode(base64.decode(_signingKey));
  final signingKey = JsonWebKey.fromJson(decoded as Map<String, dynamic>);
final now = DateTime.now();
final builder = JsonWebSignatureBuilder()
..jsonContent = {
'sub': userId,
'iat': now.millisecondsSinceEpoch ~/ 1000,
'exp': now.add(Duration(minutes: 10)).millisecondsSinceEpoch ~/ 1000,
'aud': ['powersync'],
'kid': _keyId,
}
..addRecipient(signingKey, algorithm: 'RS256');
final jwt = builder.build();
return jwt.toCompactSerialization();
}
final _jsonUtf8 = JsonCodec().fuse(Utf8Codec());
const _signingKey = 'TODO'; // The "JWT signing keys for backend" bit from tool/generate_keys.dart
const _keyId = 'TODO'; // PS_JWK_KID from tool/generate_keys.dart
```
You can delete the existing `greeting_endpoint.dart` file; it's no longer needed, since PowerSync is used to fetch data from your server.
Also remove the invocations related to future calls in `lib/server.dart`.
Don't forget to run `serverpod generate` afterwards.
## Data sync
With all services configured, it's time to spin up the development services:
```
docker compose down
docker compose up --detach --scale powersync=0
# This creates the PowerSync role
dart run bin/main.dart --role maintenance --apply-migrations
# Create the PowerSync bucket storage database, use password from docker-compose.yaml
psql -h 127.0.0.1 -p 8090 -U postgres
Password for user postgres:
postgres=# CREATE DATABASE powersync_storage WITH OWNER = powersync_role;
postgres=# \q
# Start PowerSync Service
docker compose up --detach
# Start backend
dart run bin/main.dart
```
With your Serverpod backend and PowerSync running, you can start connecting your clients.
Go to the `_flutter` project generated by Serverpod and run `dart pub add powersync path path_provider`.
Next, replace `main.dart` with this demo:
```dart theme={null}
import 'package:flutter/foundation.dart';
import 'package:notes_client/notes_client.dart';
import 'package:flutter/material.dart';
import 'package:path/path.dart';
import 'package:path_provider/path_provider.dart';
import 'package:powersync/powersync.dart' hide Column;
import 'package:powersync/powersync.dart' as ps;
import 'package:serverpod_flutter/serverpod_flutter.dart';
/// Sets up a global client object that can be used to talk to the server from
/// anywhere in our app. The client is generated from your server code
/// and is set up to connect to a Serverpod running on a local server on
/// the default port. You will need to modify this to connect to staging or
/// production servers.
/// In a larger app, you may want to use the dependency injection of your choice
/// instead of using a global client object. This is just a simple example.
late final Client client;
late final PowerSyncDatabase db;
late String serverUrl;
void main() async {
// When you are running the app on a physical device, you need to set the
// server URL to the IP address of your computer. You can find the IP
// address by running `ipconfig` on Windows or `ifconfig` on Mac/Linux.
// You can set the variable when running or building your app like this:
// E.g. `flutter run --dart-define=SERVER_URL=https://api.example.com/`
const serverUrlFromEnv = String.fromEnvironment('SERVER_URL');
final serverUrl =
serverUrlFromEnv.isEmpty ? 'http://$localhost:8080/' : serverUrlFromEnv;
client = Client(serverUrl)
..connectivityMonitor = FlutterConnectivityMonitor();
db = PowerSyncDatabase(
// For more options on defining the schema, see https://docs.powersync.com/client-sdks/reference/flutter#1-define-the-client-side-schema
schema: Schema([
ps.Table('greeting', [
ps.Column.text('owner'),
ps.Column.text('message'),
ps.Column.text('author'),
ps.Column.text('timestamp'),
])
]),
path: await getDatabasePath(),
logger: attachedLogger,
);
await db.initialize();
await db.connect(connector: ServerpodConnector(client.powerSync));
Object? lastError;
db.statusStream.listen((status) {
final error = status.anyError;
if (error != null && error != lastError) {
debugPrint('PowerSync error: $error');
}
lastError = error;
});
runApp(const MyApp());
}
Future<String> getDatabasePath() async {
const dbFilename = 'powersync-demo.db';
// getApplicationSupportDirectory is not supported on Web
if (kIsWeb) {
return dbFilename;
}
final dir = await getApplicationSupportDirectory();
return join(dir.path, dbFilename);
}
final class ServerpodConnector extends PowerSyncBackendConnector {
final EndpointPowerSync _service;
ServerpodConnector(this._service);
@override
  Future<PowerSyncCredentials?> fetchCredentials() async {
final token = await _service.createJwt();
return PowerSyncCredentials(
endpoint: 'http://localhost:8095',
token: token,
);
}
@override
  Future<void> uploadData(PowerSyncDatabase database) async {
if (await database.getCrudBatch() case final pendingWrites?) {
for (final write in pendingWrites.crud) {
if (write.table != 'greeting') {
throw 'TODO: handle other tables';
}
switch (write.op) {
case UpdateType.put:
          await _service.createGreeting(
              Greeting.fromJson({...write.opData!, 'id': write.id}));
case UpdateType.patch:
await _service.updateGreeting(
UuidValue.fromString(write.id),
message: write.opData!['message'] as String?,
);
case UpdateType.delete:
await _service.deleteGreeting(UuidValue.fromString(write.id));
}
}
await pendingWrites.complete();
}
}
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Serverpod Demo',
theme: ThemeData(primarySwatch: Colors.blue),
home: const GreetingListPage(),
);
}
}
final class GreetingListPage extends StatelessWidget {
const GreetingListPage({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('PowerSync + Serverpod'),
actions: const [_ConnectionState()],
),
body: StreamBuilder(
stream:
db.watch('SELECT id, message, author FROM greeting ORDER BY id'),
builder: (context, snapshot) {
if (snapshot.hasData) {
return ListView(
children: [
for (final row in snapshot.requireData)
_GreetingRow(
key: ValueKey(row['id']),
id: row['id'],
message: row['message'],
author: row['author'],
),
],
);
} else if (snapshot.hasError) {
return Text(snapshot.error.toString());
} else {
return const CircularProgressIndicator();
}
},
),
);
}
}
final class _GreetingRow extends StatelessWidget {
final String id;
final String message;
final String author;
const _GreetingRow(
{super.key,
required this.id,
required this.message,
required this.author});
@override
Widget build(BuildContext context) {
return ListTile(
title: Row(
children: [
Expanded(child: Text(message)),
IconButton(
onPressed: () async {
await db.execute('DELETE FROM greeting WHERE id = ?', [id]);
},
icon: Icon(Icons.delete),
color: Colors.red,
),
],
),
subtitle: Text('Greeting from $author'),
);
}
}
final class _ConnectionState extends StatelessWidget {
const _ConnectionState({super.key});
@override
Widget build(BuildContext context) {
return StreamBuilder(
stream: db.statusStream,
initialData: db.currentStatus,
builder: (context, snapshot) {
final data = snapshot.requireData;
return Icon(data.connected ? Icons.wifi : Icons.cloud_off);
},
);
}
}
```
Ensure containers are running (`docker compose up`), start your backend `dart run bin/main.dart` in `notes_server`
and finally launch your app.
When the app is loaded, you should see a greeting synced from the server. To verify PowerSync is working,
here are some things to try:
1. Update in the source database: Connect to the Postgres database again (`psql -h 127.0.0.1 -p 8090 -U postgres`) and
run a query like `update greeting set message = upper(message);`. Note how the app's UI reflects these changes without
you having to write any code for these updates.
2. Click on a delete icon to see local writes automatically being uploaded to the backend.
3. Add new items to the database, then stop your backend to simulate being offline. Deleting items still updates the client
   immediately; the changes are written to Postgres once your backend comes back online.
## Next steps
This guide demonstrated a minimal setup with PowerSync and Serverpod. To expand on this, you could explore:
* Web support: PowerSync supports Flutter web, but needs [additional assets](/client-sdks/frameworks/flutter-web-support).
* Authentication: If you already have an existing backend that is publicly-reachable, serving a [JWKS URL](https://docs.powersync.com/configuration/auth/custom)
would be safer than using pre-shared keys.
* Deploying: The easiest way to run PowerSync is to [let us host it for you](https://accounts.powersync.com/portal/powersync-signup)
(you still have full control over your source database and backend).
You can also explore [self-hosting](https://docs.powersync.com/intro/powersync-overview) the PowerSync Service.
# Improve Supabase Connector
Source: https://docs.powersync.com/integrations/supabase/connector-performance
Learn how to improve the performance of the Supabase Connector for the [React Native To-Do List demo app](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-todolist).
# Background
The demos in the [powersync-js](https://github.com/powersync-ja/powersync-js/tree/main/demos) monorepo provide minimal working examples that illustrate the use of PowerSync with different frameworks.
The demos are therefore not necessarily optimized for performance and can be improved.
This tutorial demonstrates how to improve the Supabase Connector's performance by implementing two batching strategies that reduce the number of database operations.
# Batching Strategies
The two batching strategies that will be implemented are:
1. Sequential Merge Strategy, and
2. Pre-sorted Batch Strategy
Overview:
* Merge adjacent `PUT` and `DELETE` operations for the same table
* Limit the number of operations that are merged into a single API request to Supabase
Shoutout to @christoffer\_configura for the original implementation of this optimization.
```typescript {6-12, 15, 17-19, 21, 23-24, 28-40, 43, 47-60, 63-64, 79} theme={null}
async uploadData(database: AbstractPowerSyncDatabase): Promise<void> {
const transaction = await database.getNextCrudTransaction();
if (!transaction) {
return;
}
/**
* Maximum number of PUT or DELETE operations that are merged into a single API request to Supabase.
* Larger numbers can speed up the sync process considerably, but watch out for possible payload size limitations.
* A value of 1 or below disables merging.
*/
const MERGE_BATCH_LIMIT = 100;
let batchedOps: CrudEntry[] = [];
try {
console.log(`Processing transaction with ${transaction.crud.length} operations`);
for (let i = 0; i < transaction.crud.length; i++) {
const cruds = transaction.crud;
const op = cruds[i];
const table = this.client.from(op.table);
batchedOps.push(op);
let result: any;
let batched = 1;
switch (op.op) {
case UpdateType.PUT:
const records = [{ ...cruds[i].opData, id: cruds[i].id }];
while (
i + 1 < cruds.length &&
cruds[i + 1].op === op.op &&
cruds[i + 1].table === op.table &&
batched < MERGE_BATCH_LIMIT
) {
i++;
records.push({ ...cruds[i].opData, id: cruds[i].id });
batchedOps.push(cruds[i]);
batched++;
}
result = await table.upsert(records);
break;
case UpdateType.PATCH:
batchedOps = [op];
result = await table.update(op.opData).eq('id', op.id);
break;
case UpdateType.DELETE:
batchedOps = [op];
const ids = [op.id];
while (
i + 1 < cruds.length &&
cruds[i + 1].op === op.op &&
cruds[i + 1].table === op.table &&
batched < MERGE_BATCH_LIMIT
) {
i++;
ids.push(cruds[i].id);
batchedOps.push(cruds[i]);
batched++;
}
result = await table.delete().in('id', ids);
break;
}
if (batched > 1) {
console.log(`Merged ${batched} ${op.op} operations for table ${op.table}`);
}
}
await transaction.complete();
} catch (ex: any) {
console.debug(ex);
if (typeof ex.code == 'string' && FATAL_RESPONSE_CODES.some((regex) => regex.test(ex.code))) {
/**
* Instead of blocking the queue with these errors,
* discard the (rest of the) transaction.
*
* Note that these errors typically indicate a bug in the application.
* If protecting against data loss is important, save the failing records
* elsewhere instead of discarding, and/or notify the user.
*/
console.error('Data upload error - discarding:', ex);
await transaction.complete();
} else {
// Error may be retryable - e.g. network error or temporary server error.
// Throwing an error here causes this call to be retried after a delay.
throw ex;
}
}
}
```
Overview:
* Create three collections to group operations by type:
* `putOps`: For `PUT` operations, organized by table name
* `deleteOps`: For `DELETE` operations, organized by table name
* `patchOps`: For `PATCH` operations (partial updates)
* Loop through all operations, sort them into the three collections, and then process all operations in batches.
```typescript {8-11, 17-20, 23, 26-29, 32-53, 56, 72} theme={null}
async uploadData(database: AbstractPowerSyncDatabase): Promise<void> {
const transaction = await database.getNextCrudTransaction();
if (!transaction) {
return;
}
try {
// Group operations by type and table
const putOps: { [table: string]: any[] } = {};
const deleteOps: { [table: string]: string[] } = {};
let patchOps: CrudEntry[] = [];
// Organize operations
for (const op of transaction.crud) {
switch (op.op) {
case UpdateType.PUT:
if (!putOps[op.table]) {
putOps[op.table] = [];
}
putOps[op.table].push({ ...op.opData, id: op.id });
break;
case UpdateType.PATCH:
patchOps.push(op);
break;
case UpdateType.DELETE:
if (!deleteOps[op.table]) {
deleteOps[op.table] = [];
}
deleteOps[op.table].push(op.id);
break;
}
}
// Execute bulk operations
for (const table of Object.keys(putOps)) {
const result = await this.client.from(table).upsert(putOps[table]);
if (result.error) {
console.error(result.error);
throw new Error(`Could not bulk PUT data to Supabase table ${table}: ${JSON.stringify(result)}`);
}
}
for (const table of Object.keys(deleteOps)) {
const result = await this.client.from(table).delete().in('id', deleteOps[table]);
if (result.error) {
console.error(result.error);
throw new Error(`Could not bulk DELETE data from Supabase table ${table}: ${JSON.stringify(result)}`);
}
}
// Execute PATCH operations individually since they can't be easily batched
for (const op of patchOps) {
const result = await this.client.from(op.table).update(op.opData).eq('id', op.id);
if (result.error) {
console.error(result.error);
throw new Error(`Could not PATCH data in Supabase: ${JSON.stringify(result)}`);
}
}
await transaction.complete();
} catch (ex: any) {
console.debug(ex);
if (typeof ex.code == 'string' && FATAL_RESPONSE_CODES.some((regex) => regex.test(ex.code))) {
/**
* Instead of blocking the queue with these errors,
* discard the (rest of the) transaction.
*
* Note that these errors typically indicate a bug in the application.
* If protecting against data loss is important, save the failing records
* elsewhere instead of discarding, and/or notify the user.
*/
console.error('Data upload error - discarding transaction:', ex);
await transaction.complete();
} else {
// Error may be retryable - e.g. network error or temporary server error.
// Throwing an error here causes this call to be retried after a delay.
throw ex;
}
}
}
```
# Differences
### Sequential merge strategy
```typescript theme={null}
const MERGE_BATCH_LIMIT = 100;
let batchedOps: CrudEntry[] = [];
```
* Processes operations sequentially
* Merges consecutive operations of the same type up to a batch limit
* More dynamic/streaming approach
### Pre-sorted batch strategy
```typescript theme={null}
const putOps: { [table: string]: any[] } = {};
const deleteOps: { [table: string]: string[] } = {};
let patchOps: CrudEntry[] = [];
```
* Pre-sorts all operations by type and table
* Processes each type in bulk after grouping
### Sequential merge strategy
* Uses a sliding window approach with `MERGE_BATCH_LIMIT`
* Merges consecutive operations up to the limit
* More granular control over batch sizes
* Better for mixed operation types
### Pre-sorted batch strategy
* Groups ALL operations of the same type together
* Executes one bulk operation per type per table
* Better for large numbers of similar operations
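To make the trade-off concrete, here is a rough sketch (hypothetical helpers, not part of the connector) that counts how many API requests each strategy would issue for a given operation sequence:

```typescript
type Op = { op: "PUT" | "PATCH" | "DELETE"; table: string };

// Sequential merge: adjacent PUT/DELETE operations on the same table share
// one request (up to a batch limit); every PATCH is its own request.
function sequentialMergeRequests(ops: Op[], limit = 100): number {
  let requests = 0;
  for (let i = 0; i < ops.length; i++) {
    requests++;
    if (ops[i].op === "PATCH") continue;
    let batched = 1;
    while (
      i + 1 < ops.length &&
      ops[i + 1].op === ops[i].op &&
      ops[i + 1].table === ops[i].table &&
      batched < limit
    ) {
      i++;
      batched++;
    }
  }
  return requests;
}

// Pre-sorted batching: one request per (type, table) pair for PUT/DELETE,
// plus one request per individual PATCH.
function preSortedRequests(ops: Op[]): number {
  const bulkGroups = new Set<string>();
  let patches = 0;
  for (const op of ops) {
    if (op.op === "PATCH") patches++;
    else bulkGroups.add(`${op.op}:${op.table}`);
  }
  return bulkGroups.size + patches;
}

// Interleaved writes break sequential merging, but not pre-sorted grouping.
const ops: Op[] = [
  { op: "PUT", table: "todos" },
  { op: "DELETE", table: "lists" },
  { op: "PUT", table: "todos" },
  { op: "PUT", table: "todos" },
];
console.log(sequentialMergeRequests(ops)); // 3 requests
console.log(preSortedRequests(ops)); // 2 requests
```

The counts show why pre-sorting wins on request volume for interleaved writes, at the cost of holding all grouped records in memory at once.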
## Key similarities and differences

Both strategies share the following:

* Handling of CRUD operations (PUT, PATCH, DELETE) to sync local changes to Supabase
* Transaction management with `getNextCrudTransaction()`
* Similar error handling for fatal and retryable errors
* Completing the transaction after successful processing

They differ in:

* Operation grouping strategy
* Batching methodology
# Use cases

Choose the **sequential merge strategy** when:

* You need more granular control over batch sizes
* You want more detailed operation logging
* You need to handle mixed operation types more efficiently

**Best for**: Mixed operation types. **Optimizes for**: Memory efficiency. **Trade-off**: Potentially more network requests.

Choose the **pre-sorted batch strategy** when:

* You have a large number of similar operations
* You want to minimize the number of network requests

**Best for**: Large volumes of similar operations. **Optimizes for**: Minimal network requests. **Trade-off**: Higher memory usage.
# Supabase + PowerSync
Source: https://docs.powersync.com/integrations/supabase/guide
Tutorial-style integration guide for creating offline-first apps with Supabase and PowerSync, using a demo to-do list app in Flutter, React Native, Web, Kotlin and Swift.
Used in conjunction with **Supabase**, PowerSync enables developers to build local-first & offline-first apps that are robust in poor network conditions and that have highly responsive frontends while relying on [Supabase](https://supabase.com/) for their backend. This guide provides instructions for how to configure PowerSync for use with your Supabase project.
Before you proceed, this guide assumes that you have already signed up for free accounts with both Supabase and PowerSync Cloud (our cloud-hosted offering). If you haven't signed up for a **PowerSync** (Cloud) account yet, [click here](https://accounts.powersync.com/portal/powersync-signup?s=docs) (and if you haven't signed up for Supabase yet, [click here](https://supabase.com/dashboard/sign-up)).
For mobile/desktop apps, this guide assumes that you already have **Flutter / React Native / Kotlin / Xcode** set up.
For web apps, this guide assumes that you have [pnpm](https://pnpm.io/installation#using-npm) installed.
This guide takes 10-15 minutes to complete.
## Architecture
Upon successful integration of Supabase + PowerSync, your system architecture will look like this: (click to enlarge image)
The local SQLite database embedded in the PowerSync SDK is automatically kept in sync with the Supabase Postgres database (based on your Sync Streams as you will see later in this guide). Client-side data modifications are persisted in the local SQLite database as well as stored in an upload queue that gets processed via the Supabase client library when network connectivity is available. Therefore reads and writes can happen in the app regardless of whether the user is online or offline, by using the local SQLite database.
For more details on PowerSync's general architecture, [see here](/architecture/architecture-overview).
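Conceptually, the write path described above can be sketched like this (a toy illustration only, not the PowerSync SDK API):

```typescript
// Conceptual sketch: a client-side write is applied to the local store
// immediately (reads see it right away) and also appended to an upload
// queue, which is drained once connectivity returns.
type TodoRow = { id: string; description: string };

class OfflineFirstStore {
  private local = new Map<string, TodoRow>();
  private uploadQueue: TodoRow[] = [];

  write(row: TodoRow): void {
    this.local.set(row.id, row); // read-your-writes, even offline
    this.uploadQueue.push(row); // pending upload to the backend
  }

  read(id: string): TodoRow | undefined {
    return this.local.get(id);
  }

  // Called when the device is back online; returns the number of rows sent.
  flush(upload: (row: TodoRow) => void): number {
    let uploaded = 0;
    while (this.uploadQueue.length > 0) {
      upload(this.uploadQueue.shift()!);
      uploaded++;
    }
    return uploaded;
  }
}

const store = new OfflineFirstStore();
store.write({ id: "1", description: "buy milk" }); // works offline
console.log(store.read("1")?.description); // "buy milk"
```

In the real system the local store is SQLite, the queue is managed by the SDK, and the flush is your connector's `uploadData` implementation using the Supabase client library.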
## Integration Guide/Tutorial Overview
We will follow these steps to get an offline-first 'To-Do List' demo app up and running:
* Create the demo database schema
* Create the Postgres user and publication
* Create connection to Supabase
* Configure Sync Streams
Test the configuration using our provided PowerSync-Supabase 'To-Do List' demo app with your framework of choice.
## Configure Supabase
Create a new Supabase project (or use an existing project if you prefer) and follow the below steps.
### Create the Demo Database Schema
To set up the Postgres database for our *To-Do List* demo app, we will create two new tables: `lists` and `todos`. The demo app will have access to these tables even while offline.
Run the below SQL statements in your **Supabase SQL Editor**:
```sql theme={null}
create table
public.lists (
id uuid not null default gen_random_uuid (),
created_at timestamp with time zone not null default now(),
name text not null,
owner_id uuid not null,
constraint lists_pkey primary key (id),
constraint lists_owner_id_fkey foreign key (owner_id) references auth.users (id) on delete cascade
) tablespace pg_default;
create table
public.todos (
id uuid not null default gen_random_uuid (),
created_at timestamp with time zone not null default now(),
completed_at timestamp with time zone null,
description text not null,
completed boolean not null default false,
created_by uuid null,
completed_by uuid null,
list_id uuid not null,
constraint todos_pkey primary key (id),
constraint todos_created_by_fkey foreign key (created_by) references auth.users (id) on delete set null,
constraint todos_completed_by_fkey foreign key (completed_by) references auth.users (id) on delete set null,
constraint todos_list_id_fkey foreign key (list_id) references lists (id) on delete cascade
) tablespace pg_default;
```
### Create a PowerSync Database User
PowerSync uses the Postgres [Write Ahead Log (WAL)](https://www.postgresql.org/docs/current/wal-intro.html) to replicate data changes in order to keep PowerSync SDK clients up to date.
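Logical replication must be enabled on the source database for this to work; Supabase enables it by default. You can confirm this in the SQL Editor:

```sql
SHOW wal_level; -- should return 'logical'
```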
Run the below SQL statement in your **Supabase SQL Editor** to create a Postgres role/user with replication privileges:
```sql theme={null}
-- Create a role/user with replication privileges for PowerSync
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;
-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role;
```
To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist).
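For example, a restricted grant for the demo schema might look like this instead of the broad `GRANT` above:

```sql
-- Restricted alternative: read access to the demo tables only
GRANT SELECT ON public.lists, public.todos TO powersync_role;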
### Create the Postgres Publication
Run the below SQL statement in your **Supabase SQL Editor** to create a Postgres publication:
```sql theme={null}
-- Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
```
Note that the PowerSync Service has to read all updates present in the publication, regardless of whether the table is referenced in your Sync Streams / Sync Rules definitions. This can cause large spikes in memory usage or introduce replication delays, so if you're dealing with large data volumes, you'll want to specify a comma-separated subset of tables to replicate instead of `FOR ALL TABLES`.
The snippet above replicates all tables and is the simplest way to get started in a dev environment.
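For example, a publication restricted to the demo tables would look like:

```sql
-- Replicate only specific tables instead of FOR ALL TABLES
CREATE PUBLICATION powersync FOR TABLE public.lists, public.todos;
```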
## Configure PowerSync
### Create a PowerSync Cloud Instance
When creating a project in the [PowerSync Dashboard](https://dashboard.powersync.com/), *Development* and *Production* instances of the PowerSync Service will be created by default. Select the instance you want to configure.
If you need to create a new instance, follow the steps below.
1. In the dashboard, select your project and open the instance selection dropdown. Click **Add Instance**.
2. Give your instance a name, such as "Production".
3. \[Optional] You can change the default cloud region from US to EU, JP (Japan), AU (Australia) or BR (Brazil) if desired.
* Note: Additional cloud regions will be considered on request, especially for customers on our Enterprise plan. Please [contact us](/resources/contact-us) if you need a different region.
4. Click **Create Instance**.
### Connect PowerSync to Your Supabase
1. From your Supabase Dashboard, select **Connect** in the top navigation bar (or follow this [link](https://supabase.com/dashboard/project/_?showConnect=true)):
2. In the **Direct connection** section, copy the complete connection string (including the `[YOUR-PASSWORD]` placeholder):
3. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to **Database Connections**.
4. Click **Connect to Source Database** and ensure the **Postgres** tab is selected.
5. Paste the connection string into the **URI** field. PowerSync will automatically parse this URI to populate the database connection details.
6. Update the **Username** and **Password** fields to use the `powersync_role` and password you created when configuring your Supabase for PowerSync (see [Source Database Setup](/configuration/source-db/setup#supabase)).
7. Verify your setup by clicking **Test Connection** and resolve any errors. Note: PowerSync includes Supabase's CA certificate by default, so you can use `verify-full` SSL mode without additional configuration.
8. Click **Save Connection**.
PowerSync will now create an isolated cloud environment for your instance. This typically takes a minute or two.
### Enable Supabase Auth
After your database connection is configured, enable Supabase Auth:
1. In the PowerSync Dashboard, go to **Client Auth** for your instance.
2. Enable the **Use Supabase Auth** checkbox.
3. If your Supabase project uses the legacy JWT signing keys, copy your JWT Secret from your Supabase project's settings ([JWT Keys](https://supabase.com/dashboard/project/_/settings/jwt)) and paste the secret into the **Supabase JWT Secret (optional) Legacy** field in the PowerSync Dashboard. If you're using Supabase's new [JWT signing keys](https://supabase.com/blog/jwt-signing-keys), you can leave this field empty (PowerSync will auto-configure the JWKS endpoint for your project).
4. Click **Save and Deploy** to apply the changes.
### Configure Sync Streams
[Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) allow developers to control which data gets synced to which user devices using a SQL-like syntax in a YAML file. For the demo app, we're going to specify that each user can only see their own to-do lists and list items.
1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to the **Sync Streams** view (shown as **Sync Rules** if using legacy Sync Rules).
2. Edit the Sync Streams in the editor and replace the contents with the below:
```yaml theme={null}
config:
edition: 3
streams:
user_data:
auto_subscribe: true
queries:
- SELECT * FROM lists WHERE owner_id = auth.user_id()
- SELECT todos.* FROM todos INNER JOIN lists ON todos.list_id = lists.id WHERE lists.owner_id = auth.user_id()
```
```yaml theme={null}
bucket_definitions:
user_lists:
# Separate bucket per To-Do list
parameters: select id as list_id from lists where owner_id = request.user_id()
data:
- select * from lists where id = bucket.list_id
- select * from todos where list_id = bucket.list_id
```
2. Click **"Validate"** and ensure there are no errors. This validates your sync config against your Postgres database.
3. Click **"Deploy"** to deploy your sync config.
* For additional information on PowerSync's Sync Streams, refer to the [Sync Streams](/sync/streams/overview) documentation.
* For legacy Sync Rules, refer to the [Sync Rules](/sync/rules/overview) documentation.
* If you're wondering how Sync Streams relate to Supabase Postgres [RLS](https://supabase.com/docs/guides/auth/row-level-security), see [this subsection](/integrations/supabase/rls-and-sync-streams).
## Test Everything (Using Our Demo App)
In this step you'll test your setup using a 'To-Do List' demo app provided by PowerSync.
#### Clone the demo app
Clone the demo app based on your framework:
```bash Flutter theme={null}
git clone https://github.com/powersync-ja/powersync.dart.git
cd powersync.dart/demos/supabase-todolist/
```
```bash React Native theme={null}
git clone https://github.com/powersync-ja/powersync-js.git
cd powersync-js/demos/react-native-supabase-todolist
```
```bash JavaScript Web theme={null}
git clone https://github.com/powersync-ja/powersync-js.git
cd powersync-js/demos/react-supabase-todolist
```
```bash Kotlin theme={null}
git clone https://github.com/powersync-ja/powersync-kotlin.git
# Open `demos/supabase-todolist` in Android Studio
```
```bash Swift theme={null}
git clone https://github.com/powersync-ja/powersync-swift.git
# Open the Demo directory in XCode and follow the README instructions.
```
#### Configure the demo app to use your PowerSync instance
Locate the relevant config file for your framework:
```bash Flutter theme={null}
cp lib/app_config_template.dart lib/app_config.dart
# Edit `lib/app_config.dart` and insert the necessary credentials as detailed below.
```
```bash React Native theme={null}
# Edit the `.env` file and insert the necessary credentials as detailed below.
```
```bash JavaScript Web theme={null}
cp .env.local.template .env.local
# Edit `.env.local` and insert the necessary credentials as detailed below.
```
```bash Kotlin theme={null}
# Make a `local.properties` file in the root and fill in the relevant variables (see points below for further details):
# local.properties
sdk.dir=/path/to/android/sdk
# Enter your PowerSync instance URL
POWERSYNC_URL=https://foo.powersync.journeyapps.com
# Enter your Supabase project's URL and public anon key
SUPABASE_URL=https://foo.supabase.co # from https://supabase.com/dashboard/project/_/settings/api
SUPABASE_ANON_KEY=foo # from https://supabase.com/dashboard/project/_/settings/api-keys
```
```bash Swift theme={null}
# Edit the `_Secrets` file and insert the necessary credentials as detailed below.
```
1. In the relevant config file, replace the values for `supabaseUrl` (from the [Project URL](https://supabase.com/dashboard/project/_/settings/api) section in the Supabase dashboard) and `supabaseAnonKey` (from the [API Keys](https://supabase.com/dashboard/project/_/settings/api-keys) section in the Supabase dashboard)
2. For the value of `powersyncUrl`, click **Connect** in the top bar of the [PowerSync Dashboard](https://dashboard.powersync.com/) and copy the instance URL from the dialog.
#### Run the app
```bash Flutter theme={null}
# Ensure you have [melos](https://melos.invertase.dev/~melos-latest/getting-started) installed.
melos bootstrap
flutter run
```
```bash React Native theme={null}
# In the repo root directory:
pnpm install
pnpm build:packages
# In `demos/react-native-supabase-todolist`:
# Run on iOS
pnpm ios
# Run on Android
pnpm android
```
```bash JavaScript Web theme={null}
# In the repo root directory:
pnpm install
pnpm build:packages
# In `demos/react-supabase-todolist`:
pnpm dev
```
```bash Kotlin theme={null}
# Run the app on Android or iOS in Android Studio using the Run widget.
```
```bash Swift theme={null}
# Run the app using XCode.
```
To make the demo app easier to test, you can disable email confirmation in your Supabase Auth settings. In your Supabase project, go to "Authentication" -> "Providers" -> "Email" and then disable "Confirm email". If you keep email confirmation enabled, the Supabase user confirmation email will reference the default Supabase Site URL of `http://localhost:3000` — you can ignore this.
Once signed in to the demo app, you should see a blank list of to-do lists, so go ahead and create a new list. Try placing your device into airplane mode to test out the offline capabilities. Once the device is back online, you should see the data automatically appear in your Supabase dashboard (e.g. in the Table Editor).
For more information, explore the [PowerSync docs](/) or join us on [our community Discord](https://discord.gg/powersync) where our team is always available to answer questions.
## Bonus: Optional Extras
If you plan on sharing this demo app with other people, you may want to set up demo data triggers so that new user signups don't see an empty screen.
It's useful for users to see some data when they sign up to the demo app. The trigger below automatically creates sample data when a user registers; you can run it in the Supabase SQL Editor. See [Supabase: Managing User Data](https://supabase.com/docs/guides/auth/managing-user-data#using-trigger) for more details.
```sql theme={null}
create function public.handle_new_user_sample_data()
returns trigger as $$
declare
new_list_id uuid;
begin
insert into public.lists (name, owner_id)
values ('Shopping list', new.id)
returning id into new_list_id;
insert into public.todos(description, list_id, created_by)
values ('Bread', new_list_id, new.id);
insert into public.todos(description, list_id, created_by)
values ('Apples', new_list_id, new.id);
return new;
end;
$$ language plpgsql security definer;
create trigger new_user_sample_data after insert on auth.users for each row execute procedure public.handle_new_user_sample_data();
```
# Local Development
Source: https://docs.powersync.com/integrations/supabase/local-development
Local development with Supabase and PowerSync.
Developers using [Supabase local dev](https://supabase.com/docs/guides/cli) might prefer being able to develop against PowerSync locally too, for use cases such as running end-to-end integration tests.
Local development is possible with either self-hosted PowerSync or PowerSync Cloud instances. Self-hosting PowerSync for local development is the recommended workflow as it's more user-friendly.
## Self-hosted Supabase & PowerSync (via Docker)
An example implementation and demo is available here; see its README for instructions.
## Self-hosted Supabase & PowerSync Cloud (via ngrok)
This guide describes an example local dev workflow that uses ngrok and the PowerSync CLI.
This guide assumes that you have both ngrok and the Supabase CLI installed.
This guide only covers using ngrok. Other configurations such as an NGINX reverse proxy are also possible.
### Configure Supabase for SSL
```bash theme={null}
# start supabase
supabase start
# get the name of the supabase-db container
docker ps -f name=supabase-db --format '{{.Names}}'
# The rest of the script assumes it's "supabase-db_supabase-test"
# bash in the container
docker exec -it supabase-db_supabase-test /bin/bash
# Now run in the container:
cd /etc/postgresql-custom
# Create a cert
openssl req -days 3650 -new -text -nodes -subj '/C=US/O=Dev/CN=supabase_dev' -keyout server.key -out server.csr
openssl req -days 3650 -x509 -text -in server.csr -key server.key -out server.cert
chown postgres:postgres server.*
# Enable ssl
echo -e '\n\nssl = on\nssl_ciphers = '\''HIGH:MEDIUM:+3DES:!aNULL'\''\nssl_prefer_server_ciphers = on\nssl_cert_file = '\''/etc/postgresql-custom/server.cert'\''\nssl_key_file = '\''/etc/postgresql-custom/server.key'\''' >> supautils.conf
# Now Ctrl+D to exit bash, and restart the container:
docker restart supabase-db_supabase-test
# Check logs for any issues:
docker logs supabase-db_supabase-test
# (optional, for debugging) validate SSL is enabled
psql -d postgres postgres
postgres=> show ssl; # should return "on"
```
### Start ngrok
Here we obtain the local port that Supabase is listening on and point ngrok at it.
```bash theme={null}
# look for the PORTS value of the supabase-db_supabase-test container
docker ps -f name=supabase-db --format '{{.Ports}}'
# should see something like 0.0.0.0:54322->5432/tcp
# use the first port
ngrok tcp 54322
# should then see something like this:
Forwarding tcp://4.tcp.us-cal-1.ngrok.io:19263 -> localhost:54322
```
Make a note of the hostname (`4.tcp.us-cal-1.ngrok.io`) and port number (`19263`); your values will differ.
### Connect PowerSync (GUI)
1. Configure your PowerSync instance using the hostname and port number you noted previously. The default Postgres password is `postgres`; you may want to change this. Note: make sure that the **Host** field does not contain the `tcp://` URI scheme output by ngrok.
2. Set the SSL Mode to `verify-ca` and click **Download certificate**.
3. Click **Test Connection**.
4. Click **Save** to provision your instance.
### Connect PowerSync (CLI)
Refer to: [CLI (Beta)](/tools/cli)
### Integration Test Example
Coming soon. Reach us on [Discord](https://discord.gg/powersync) in the meantime if you have any questions about testing.
# Real-time Streaming
Source: https://docs.powersync.com/integrations/supabase/realtime-streaming
If your app uses Supabase Realtime to listen for database changes (e.g. via [Stream](https://supabase.com/docs/reference/dart/stream) in the Supabase Flutter client library), it's fairly simple to obtain the same behavior using PowerSync.
Postgres changes are constantly streamed to the [PowerSync Service](/architecture/powersync-service) via the logical replication publication.
When the PowerSync Client SDK is online, the behavior is as follows:
1. Data changes are streamed from the PowerSync Service to the SDK client over HTTPS
2. Using the `watch()` API, client-side SQLite database changes can be streamed to your app UI
When the SDK is offline, the streaming stops, but automatically resumes when connectivity is restored.
Example implementations of `watch()` can be found below:
* [React Native example](https://github.com/powersync-ja/powersync-js/blob/92384f75ec95c64ee843e2bb7635a16ca4142945/demos/django-react-native-todolist/library/stores/ListStore.ts#L5)
* [Flutter example](https://github.com/powersync-ja/powersync.dart/blob/master/demos/supabase-todolist/lib/models/todo_list.dart#L46)
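As a minimal sketch (assuming the Dart SDK and a `todos` table with a `list_id` column, as in the demo schema), a watched query looks like this:

```dart theme={null}
// Sketch only: watch() returns a Stream that re-emits the query results
// whenever the underlying tables change (locally or via sync).
final stream = db.watch(
  'SELECT * FROM todos WHERE list_id = ?',
  parameters: [listId],
);
final subscription = stream.listen((results) {
  // Rebuild your UI from the latest rows here.
});
```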
# RLS and Sync Streams
Source: https://docs.powersync.com/integrations/supabase/rls-and-sync-streams
PowerSync's [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) and Supabase's support for [Row Level Security (RLS)](https://supabase.com/docs/guides/auth/row-level-security) can be used in conjunction. Here are some high level similarities and differences:
* RLS should be used as the authoritative set of security rules applied to your users' CRUD operations that reach Postgres.
* Sync Streams (or legacy Sync Rules) are only applied for data that is to be downloaded to clients — they do not apply to uploaded data.
* Sync Streams / Sync Rules can typically be considered to be complementary to RLS, and will generally mirror your RLS setup.
Supabase tables are often created with auto-increment IDs. For easiest use of PowerSync, make sure to convert them to text IDs as detailed [here](/sync/advanced/client-id).
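For example, instead of an auto-increment ID, the demo's `lists` table could be defined with a UUID-based ID (a sketch; adapt the columns to your own schema):

```sql theme={null}
-- Instead of: id bigint generated always as identity
create table public.lists (
  id uuid not null default gen_random_uuid() primary key,
  name text not null,
  owner_id uuid not null references auth.users (id)
);
```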
### Example
Continuing with the schema set up during the guide, below are the RLS policies for the to-do list app:
```sql theme={null}
alter table public.lists
enable row level security;
alter table public.todos
enable row level security;
create policy "owned lists" on public.lists for ALL using (
auth.uid() = owner_id
);
create policy "todos in owned lists" on public.todos for ALL using (
auth.uid() IN (
SELECT lists.owner_id FROM lists WHERE (lists.id = todos.list_id)
)
);
```
`auth.uid()` in a Supabase RLS policy maps to:
* `auth.user_id()` in [Sync Streams](/sync/streams/overview)
* `request.user_id()` (previously `token_parameters.user_id`) in legacy [Sync Rules](/sync/rules/overview)
If you compare these to your sync config, you'll see the access patterns are quite similar.
If you have any questions, join us on [our community Discord](https://discord.gg/powersync) where our team is always available to help.
# Demo Apps & Example Projects
Source: https://docs.powersync.com/intro/examples
Explore demo apps and example projects to see PowerSync in action across different platforms and backends.
The best way to understand how PowerSync works is to explore it hands-on. Browse our collection of demo apps and example projects to see PowerSync in action, experiment with different features, or use as a reference for your own app.
All examples are organized by platform and backend technology. You can adapt any example to work with your preferred backend (see [App Backend](/configuration/app-backend)).
We continuously expand our collection of example projects. If you need an example that isn't available yet, [let us know on Discord](https://discord.gg/powersync).
## Official Demos/Example Projects
#### Supabase Backend
* [To-Do List App](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist#readme)
* Includes [Full-Text Search](/client-sdks/full-text-search) capabilities
* Demonstrates [File/Attachment Handling](/client-sdks/advanced/attachments)
* [To-Do List App + Drift](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist-drift#readme)
* See [Dart/Flutter ORM Support](/client-sdks/orms/flutter-orm-support) for more details on our Drift integration.
* [To-Do List App with Local-Only Tables](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist-optional-sync#readme) - Shows data persistence without syncing
* See [Local-Only Usage](/client-sdks/advanced/local-only-usage) for more background.
* [Simple Chat App](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-simple-chat#readme)
* [Trello Clone App](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-trello#readme)
#### Dart Custom Backend
* [Built with Flutter and Serverpod](https://github.com/powersync-community/powersync-serverpod-demo)
* [Built with Jaspr, shelf, Riverpod and Drift](https://github.com/powersync-community/self-host-dart-fullstack)
#### Node.js Custom Backend
* [To-Do List App with Firebase Auth](https://github.com/powersync-ja/powersync.dart/tree/main/demos/firebase-nodejs-todolist#readme)
* Corresponding backend: [Node.js To-Do List Backend with Firebase Auth](https://github.com/powersync-ja/powersync-nodejs-firebase-backend-todolist-demo)
#### Rails Custom Backend
* [GoToFun App](https://github.com/powersync-ja/powersync-rails-flutter-demo)
* This repo contains both the Flutter app and Rails backend
#### Django Custom Backend
* [To-Do List App](https://github.com/powersync-ja/powersync.dart/tree/main/demos/django-todolist#readme)
* Corresponding backend: [Django To-Do List Backend](https://github.com/powersync-ja/powersync-django-backend-todolist-demo)
#### Self-Hosted Node.js Postgres Backend
* [Flutter Home Screen Widget Demo](https://github.com/powersync-community/flutter-home-widget)
#### Supabase Backend
* [To-Do List App](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-todolist#readme)
* Demonstrates [File/Attachment Handling](/client-sdks/advanced/attachments)
* [PowerChat - Group Chat App](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-group-chat#readme)
* [To-Do List App: React Native Web & Electron](https://github.com/powersync-community/powersync-react-native-web-expo-electron#readme)
* [Background Sync Demo](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-background-sync)
* Demonstrates [Background Syncing](/client-sdks/advanced/background-syncing) using Expo's BackgroundTask API
#### Django Custom Backend
* [To-Do List App](https://github.com/powersync-ja/powersync-js/tree/main/demos/django-react-native-todolist#readme)
* Corresponding backend: [Django To-Do List Backend](https://github.com/powersync-ja/powersync-django-backend-todolist-demo)
#### Node.js Backend
* ['MBnB' Listing Search App with Node.js MongoDB Backend](https://github.com/powersync-ja/powersync-react-native-mongodb-mbnb#readme)
* This repo contains both the React Native app and Node.js backend
#### Other
* [Point of Sale (POS) App](https://github.com/powersync-community/powersync-pos-demo#readme)
* [OP-SQLite Barebones Demo](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-barebones-opsqlite#readme)
* This is a barebones example of using the OP-SQLite driver with the PowerSync React Native Client SDK. See [here](/client-sdks/reference/react-native-and-expo#install-peer-dependencies) for more background.
#### Supabase Backend
* [React To-Do List App](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-supabase-todolist#readme) (PWA compatible)
* Includes [Full-Text Search](/client-sdks/full-text-search) capabilities
* [React To-Do List App with Local-Only Tables](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-supabase-todolist-optional-sync#readme) - Shows data persistence without syncing
* See [Local-Only Usage](/client-sdks/advanced/local-only-usage) for more background.
* [React Multi-Client Widget](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-multi-client#readme)
* Featured on the [PowerSync website](https://www.powersync.com/demo) demonstrating real-time data flow between clients
* [Vue To-Do List App](https://github.com/powersync-ja/powersync-js/tree/main/demos/vue-supabase-todolist#readme)
* [Nuxt To-Do List App](https://github.com/powersync-ja/powersync-js/tree/main/demos/nuxt-supabase-todolist#readme)
* [Angular To-Do List App](https://github.com/powersync-ja/powersync-js/tree/main/demos/angular-supabase-todolist#readme)
* [Yjs CRDT Text Collaboration Demo](https://github.com/powersync-ja/powersync-js/tree/main/demos/yjs-react-supabase-text-collab#readme)
* [Vite + React + TS + PowerSync + Supabase](https://github.com/powersync-community/vite-react-ts-powersync-supabase#readme)
* [E2EE Chat App](https://github.com/powersync-community/react-supabase-chat-e2ee#readme) - End-to-end encrypted group chat demo
#### Framework Integration Examples
* [Electron](https://github.com/powersync-ja/powersync-js/tree/main/demos/example-electron) - PowerSync in an Electron web app (renderer process)
* Also see [Node.js + Electron](https://github.com/powersync-ja/powersync-js/tree/main/demos/example-electron-node) for PowerSync in the main process, and see [this blog post](https://www.powersync.com/blog/speeding-up-electron-apps-with-powersync) for more background.
* [Next.js](https://github.com/powersync-ja/powersync-js/blob/main/demos/example-nextjs#readme) - Minimal setup with Next.js
* [Webpack](https://github.com/powersync-ja/powersync-js/blob/main/demos/example-webpack#readme) - Bundling with Webpack
* [Vite](https://github.com/powersync-ja/powersync-js/blob/main/demos/example-vite#readme) - Bundling with Vite
* [Vite with Encryption](https://github.com/powersync-ja/powersync-js/blob/main/demos/example-vite-encryption#readme) - Web database encryption demo
#### Examples
* [Capacitor Example](https://github.com/powersync-ja/powersync-js/tree/main/demos/example-capacitor#readme) - PowerSync in a Capacitor app with web, Android, and iOS support
#### Examples
* [CLI Example](https://github.com/powersync-ja/powersync-js/tree/main/demos/example-node) - Node.js CLI client connecting to PowerSync and running live queries
* [Electron Main Process](https://github.com/powersync-ja/powersync-js/tree/main/demos/example-electron-node#readme) - PowerSync in Electron's main process using the Node.js SDK. See [this blog post](https://www.powersync.com/blog/speeding-up-electron-apps-with-powersync) for more background
* [Node.js + Drizzle Minimal Demo](https://github.com/powersync-community/nodejs-drizzle-example)
#### Supabase Backend
* [To-Do List App](https://github.com/powersync-ja/powersync-kotlin/tree/main/demos/supabase-todolist#readme)
* Supports Android, iOS, and Desktop (JVM) targets
* Includes a guide for [implementing background sync on Android](https://github.com/powersync-ja/powersync-kotlin/blob/main/demos/supabase-todolist/docs/BackgroundSync.md)
* [Native Android To-Do List App](https://github.com/powersync-ja/powersync-kotlin/tree/main/demos/android-supabase-todolist#readme)
* Demonstrates [File/Attachment Handling](/client-sdks/advanced/attachments)
#### Other
* [Java Example](https://github.com/powersync-community/java-kmp-sdk-example#readme) - shows how the Kotlin SDK can be used in a Java 8+ application.
#### Supabase Backend
* [To-Do List App](https://github.com/powersync-ja/powersync-swift/tree/main/Demos/PowerSyncExample)
* Includes [Full-Text Search](/client-sdks/full-text-search) capabilities
* Demonstrates [File/Attachment Handling](/client-sdks/advanced/attachments)
* [To-Do List App + GRDB](https://github.com/powersync-ja/powersync-swift/tree/main/Demos/GRDBDemo)
* Demonstrates [GRDB integration](/client-sdks/orms/swift/grdb) with PowerSync
* [Encryption Demo](https://github.com/powersync-ja/powersync-swift/tree/main/Demos/SwiftEncryptionDemo)
* Demonstrates [Data Encryption](/usage/use-case-examples/data-encryption) using SQLite3MultipleCiphers
* [Counter App](https://github.com/powersync-community/swift-supabase-counter#readme)
#### Examples
* [CLI Application](https://github.com/powersync-ja/powersync-dotnet/tree/main/demos/CommandLine#readme)
* Includes an optional [Supabase connector](https://github.com/powersync-ja/powersync-dotnet/blob/main/demos/CommandLine/SupabaseConnector.cs)
* [WPF To-Do List App](https://github.com/powersync-ja/powersync-dotnet/tree/main/demos/WPF#readme)
* A Windows desktop to-do list app built with WPF.
* [MAUI To-Do List App](https://github.com/powersync-ja/powersync-dotnet/tree/main/demos/MAUITodo#readme)
* A cross-platform to-do list app for Android, iOS, and Windows.
#### Examples
* [egui To-Do List](https://github.com/powersync-ja/powersync-native/blob/main/README.md) - Desktop to-do list example using the egui framework and a self-hosted Node.js + Postgres backend.
#### Django
* [Django Backend for To-Do List App](https://github.com/powersync-ja/powersync-django-backend-todolist-demo)
* For use with:
* React Native [To-Do List App](https://github.com/powersync-ja/powersync-js/tree/main/demos/django-react-native-todolist)
* Flutter [To-Do List App](https://github.com/powersync-ja/powersync.dart/tree/main/demos/django-todolist)
#### Node.js
* [Node.js Backend for To-Do List App](https://github.com/powersync-ja/powersync-nodejs-backend-todolist-demo)
* [Node.js Backend with Firebase Auth](https://github.com/powersync-ja/powersync-nodejs-firebase-backend-todolist-demo)
* For use with: Flutter [To-Do List App with Firebase Auth](https://github.com/powersync-ja/powersync.dart/tree/main/demos/firebase-nodejs-todolist)
#### Rails
* [Rails Backend for GoToFun App](https://github.com/powersync-ja/powersync-rails-flutter-demo/tree/main/gotofun-backend)
* For use with: Flutter [GoToFun App](https://github.com/powersync-ja/powersync-rails-flutter-demo/tree/main/gotofun-app)
#### .NET
* [.NET Backend for To-Do List App](https://github.com/powersync-ja/powersync-dotnet-backend-demo)
#### Complete Stacks with Docker Compose
* [To-Do List App with Docker Compose](https://github.com/powersync-ja/self-host-demo) - React web app with various backend configurations:
* [Node.js + Postgres](https://github.com/powersync-ja/self-host-demo/tree/main/demos/nodejs)
* [Node.js + Postgres + Postgres Bucket Storage](https://github.com/powersync-ja/self-host-demo/tree/main/demos/nodejs-postgres-bucket-storage)
* [Node.js + MongoDB](https://github.com/powersync-ja/self-host-demo/tree/main/demos/nodejs-mongodb)
* [Node.js + MySQL](https://github.com/powersync-ja/self-host-demo/tree/main/demos/nodejs-mysql)
* [Node.js + SQL Server](https://github.com/powersync-ja/self-host-demo/tree/main/demos/nodejs-mssql)
* [Supabase (Postgres) + Local Development](https://github.com/powersync-ja/self-host-demo/tree/main/demos/supabase)
* [Django + Postgres](https://github.com/powersync-ja/self-host-demo/tree/main/demos/django)
## Community
### Community GitHub Org
Browse the Community GitHub Org for a collection of community-based starter projects, templates, demos and other projects to help you succeed with PowerSync:
### Notable Community Projects
This is a list of additional projects we've spotted from community members 🙌 Note that these projects haven't necessarily been vetted by us.
* [Laravel Backend by @IsmailAshour](https://github.com/IsmailAshour/powersync-laravel-backend)
* [Flutter Instagram Clone with Supabase + Firebase by @Gambley1](https://github.com/Gambley1/flutter-instagram-offline-first-clone)
* [Jepsen PowerSync Testing - Formal consistency validation framework by @nurturenature](https://github.com/nurturenature/jepsen-powersync)
* [Bavard - An Eloquent-inspired ORM for Dart/Flutter by @ILDaviz](https://ildaviz.github.io/bavard/)
* [SolidJS Hooks for PowerSync Queries by @aboviq](https://github.com/aboviq/powersync-solid)
* [Effect + Kysely + Stytch Integration by @guillempuche](https://github.com/guillempuche/localfirst_react_server)
* [Tauri + Shadcn UI by @MrLightful](https://github.com/MrLightful/powersync-tauri)
* [Expo Web Integration by @ImSingee](https://github.com/ImSingee/powersync-web-workers)
* Note: Our [React Native Web support](/client-sdks/frameworks/react-native-web-support) now eliminates the need to patch the `@powersync/web` module
* [Attachments Library for Node.js by @muhammedv](https://www.npmjs.com/package/@muhammedv/powersync-attachments-for-node)
### Notable Community Tutorials
* [Account Optional Apps with PowerSync](https://www.maxmntl.com/blog/optional-account-powersync/)
* Tutorial for starting your new-user app experience fully local (without sync), and then switching users to a synced experience
* [Building an Offline-First Chat App Using PowerSync and Supabase](https://bndkt.com/blog/2023/building-an-offline-first-chat-app-using-powersync-and-supabase)
* Postgres (Supabase) + React Native + Expo + Tamagui
* [Building an Offline-First Mobile App with PowerSync](https://blog.stackademic.com/building-an-offline-first-mobile-app-with-powersync-40674d8b7ea1)
* Postgres + Flutter + Nest.js + Prisma ORM + Firebase Auth
* [Implementing Local-First Architecture: A Guide to MongoDB Cluster and PowerSync Integration](https://blog.stackademic.com/implementing-local-first-architecture-a-guide-to-mongodb-cluster-and-powersync-integration-6b21fa8059a1)
* MongoDB Atlas + Next.js
## Additional Resources
Haven't found what you're looking for?
Additional tutorial-style technical posts can be found on the [PowerSync Blog](https://www.powersync.com/blog). Popular pages include:
* [Migrating a MongoDB Atlas Device Sync App to PowerSync](https://www.powersync.com/blog/migrating-a-mongodb-atlas-device-sync-app-to-powersync)
* [PowerSync and Supabase: Just the Basics](https://www.powersync.com/blog/powersync-and-supabase-just-the-basics)
* [Flutter Tutorial: Building An Offline-First Chat App With Supabase And PowerSync](https://www.powersync.com/blog/flutter-tutorial-building-an-offline-first-chat-app-with-supabase-and-powersync)
* [Speeding Up Electron Apps With PowerSync](https://www.powersync.com/blog/speeding-up-electron-apps-with-powersync)
* [Building an E2EE Chat App with PowerSync + Supabase](https://www.powersync.com/blog/building-an-e2ee-chat-app-with-powersync-supabase)
* [Collaborative Text Editing Over PowerSync](https://www.powersync.com/blog/collaborative-text-editing-over-powersync) (Without CRDTs)
# PowerSync Docs
Source: https://docs.powersync.com/intro/powersync-overview
PowerSync is a sync engine that keeps backend databases in sync with client-side embedded SQLite databases. It lets you avoid the complexities of using APIs to move app state [over the network](https://www.powersync.com/blog/escaping-the-network-tarpit), and enables real-time reactive local-first & offline-first apps that remain available even when network connectivity is poor or non-existent.
## Ready to Get Started?
Get up and running with implementing PowerSync in your project.
The fastest way to get a feel for PowerSync is to try one of our demos.
Learn about PowerSync's philosophy, key concepts and architecture.
Use the official PowerSync Agent Skills to get started with PowerSync quickly using AI-powered coding tools.
Step-by-step guide to migrate from Atlas Device Sync to PowerSync.
## Supported Backend Source Databases
PowerSync is designed to be backend database agnostic, and supports these source databases:
## Supported Client SDKs
PowerSync is also designed to be client-side stack agnostic, and currently has client SDKs available for:
Looking for an SDK that's not listed above? Upvote it or submit it on [our roadmap](https://roadmap.powersync.com/).
## Need Help?
Can't find what you are looking for in these docs? Try **Ask AI** on this site which is trained on all our documentation, repositories and Discord discussions. Also join us on [our community Discord server](https://discord.gg/powersync) where you can browse topics from the PowerSync community and chat with our team.
# PowerSync Philosophy
Source: https://docs.powersync.com/intro/powersync-philosophy
Our vision is that a local-first or offline-first app architecture should be easier for the developer to implement than cloud-first, and give a better experience for the end-user — even when they're online.
### What PowerSync Means for End-users
The app just works, whether fully online, fully offline, or with spotty connectivity.
The app is always [fast and responsive](https://www.powersync.com/blog/local-first-is-a-big-deal-especially-for-the-web#to-the-user-everything-feels-instant-no-loading-spinners) — no need to wait for network requests.
### What PowerSync Means for the Developer
PowerSync lets you avoid the complexities of using APIs to move app state [over the network](https://www.powersync.com/blog/escaping-the-network-tarpit). Its goal is to solve the hard problems of keeping data in sync, without getting in your way.
You use a standard Postgres, MongoDB, MySQL, or SQL Server \[[1](#footnotes)] database on the server, a standard SQLite database on the client, and your [own backend](/configuration/app-backend/setup) to process mutations. PowerSync simply keeps the SQLite database in sync with your backend database.
#### State Management
Once you have a local SQLite database that is always in sync, [state management](https://www.powersync.com/blog/local-first-state-management-with-sqlite) becomes much easier:
* No need for custom caching logic, whether in-memory or persisted.
* No need for maintaining in-memory state across the application.
[All state is in the local database](https://www.powersync.com/blog/local-first-state-management-with-sqlite). Queries are [reactive](/client-sdks/watch-queries) — updating whenever the underlying data changes.
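The idea can be illustrated with plain SQLite — this is not the PowerSync SDK API, just a Python stdlib sketch of "all state lives in the local database"; the SDKs expose the same pattern through reactive watch queries that re-emit automatically:

```python
import sqlite3

# Plain SQLite stand-in for the synced client database (not the PowerSync API).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE todos (id TEXT PRIMARY KEY, description TEXT, completed INTEGER)")

def pending_todos():
    # In an app, this would be a watch query that re-emits whenever the data changes.
    return db.execute(
        "SELECT description FROM todos WHERE completed = 0 ORDER BY description"
    ).fetchall()

db.execute("INSERT INTO todos VALUES ('1', 'buy milk', 0)")
print(pending_todos())  # [('buy milk',)]

# A local write (or a change synced from the server) updates the single source of truth...
db.execute("UPDATE todos SET completed = 1 WHERE id = '1'")
# ...and the same query reflects it — no separate in-memory cache to invalidate.
print(pending_todos())  # []
```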
#### Flexibility
PowerSync allows you to fully customize what data is synced to the client. Syncing the entire database is extremely simple, but it is just as easy to use [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) to transform and filter data for each client (partial sync).
Writing back to the backend source database [remains fully under the developer's control](/handling-writes/writing-client-changes) — use your own authentication, validation, and constraints.
Our goal is also to be stack-agnostic: whether you are switching from MySQL to Postgres, from Flutter to React Native, or using multiple different stacks — our aim is to maintain maximum engineering optionality for developers.
#### Performance
[SQLite is *fast*](https://www.powersync.com/blog/sqlite-optimizations-for-ultra-high-performance). It can perform tens of thousands of updates per second, and reads are even faster, with seamless support for concurrent reads. Once your queries filter through thousands of rows, indexes keep them fast.
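As an illustration (plain SQLite via Python's stdlib, not PowerSync-specific): with an index in place, a filtered query over thousands of rows becomes a B-tree lookup rather than a full scan, which SQLite's query planner confirms:

```python
import sqlite3

# Populate a table with 10,000 rows spread across 100 lists.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE todos (id INTEGER PRIMARY KEY, list_id INTEGER, description TEXT)")
db.executemany(
    "INSERT INTO todos (list_id, description) VALUES (?, ?)",
    [(i % 100, f"todo {i}") for i in range(10_000)],
)

# Without an index this query scans all 10k rows; with one it's an index lookup.
db.execute("CREATE INDEX idx_todos_list ON todos (list_id)")
plan = db.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM todos WHERE list_id = 42"
).fetchone()[3]
print(plan)  # e.g. SEARCH todos USING INDEX idx_todos_list (list_id=?)
```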
#### Simplicity
You use plain Postgres, MongoDB, MySQL, or SQL Server on the server — no extensions, and no significant change in your schema required \[[2](#footnotes)]. PowerSync [uses](/configuration/source-db/setup) Postgres logical replication, MongoDB change streams, the MySQL binlog, or SQL Server Change Data Capture (CDC) to replicate changes to the [PowerSync Service](/architecture/powersync-service), where data is transformed and partitioned according to [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)), and persisted in a way that allows efficiently streaming incremental changes to each client.
PowerSync has been used in apps with hundreds of tables. There are no complex migrations to run: You define your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) and [client-side schema](/intro/setup-guide#define-your-client-side-schema), and the data is automatically kept in sync. If you [change Sync Streams/Rules](/maintenance-ops/implementing-schema-changes), the relevant new set of data is applied atomically on the client. When you do need to make schema changes on the server while still supporting older clients, we [have the processes in place](/maintenance-ops/implementing-schema-changes) to do that without hassle.
No need for CRDTs \[[3](#footnotes)]. PowerSync is a server-client sync platform: since no peer-to-peer syncing is involved, CRDTs can be overkill. Instead, we use a server reconciliation architecture with a default approach of "last write wins", with the ability to [customize conflict resolution if required](/handling-writes/handling-update-conflicts) — the developer is in [full control of the write process](/handling-writes/writing-client-changes). Our [strong consistency guarantees](/architecture/consistency) give you peace of mind about the integrity of data on the client.
### See Also
* [Local-First Software is a Big Deal, Especially for the Web](https://www.powersync.com/blog/local-first-is-a-big-deal-especially-for-the-web)
* [PowerSync Architecture](/architecture/architecture-overview)
### Footnotes
* \[1] Support for more databases planned. See [our roadmap](https://roadmap.powersync.com/) for details.
* \[2] In some cases denormalization is required to effectively partition the data to sync to different users.
* \[3] If you want to use CRDTs for fine-grained collaboration like text editing, we have [examples](/client-sdks/advanced/crdts) of how to do that in conjunction with PowerSync, storing CRDT data in Postgres.
# Self-Hosting
Source: https://docs.powersync.com/intro/self-hosting
An introduction to self-hosting PowerSync in your own infrastructure (PowerSync Open Edition or PowerSync Enterprise Self-Hosted Edition).
The [PowerSync Service](https://github.com/powersync-ja/powersync-service) can be self-hosted using Docker. It is published to Docker Hub as [journeyapps/powersync-service](https://hub.docker.com/r/journeyapps/powersync-service).
* Note that the [PowerSync Dashboard](https://dashboard.powersync.com/) is currently not available when self-hosting PowerSync.
* Please reach out on our [Discord](https://discord.gg/powersync) if you have any questions not yet covered in these docs.
## Overview Video
This video provides a quick introduction to the PowerSync Open Edition:
## Demo Project
The quickest way to get a feel for the system is to run our example project on your development machine using Docker Compose. You can find it here:
## Local Development
To run PowerSync locally, see [Local Development](/tools/local-development). The easiest path is the [PowerSync CLI](/tools/cli), which sets up a Docker Compose stack for you.
## Full Installation
* See our [Setup Guide](/intro/setup-guide) section for instructions on setting up the PowerSync Service and integrating PowerSync into your app project.
* For in-depth instance configuration details, see [Configuration Details → PowerSync Service Setup → Self-Hosted Instance Configuration](/configuration/powersync-service/self-hosted-instances)
* For advanced/production topics, see the [Maintenance & Ops → Self-Hosting](/maintenance-ops/self-hosting/overview) section.
## Deployment Platform Guides
Guides for deploying self-hosted PowerSync on common platforms:
Coolify is an open-source & self-hostable alternative to Heroku / Netlify / Vercel / etc.
Railway is a managed cloud platform (PaaS) for deploying and scaling applications, services, and databases via containers.
Amazon Elastic Container Service (ECS) is a fully managed container orchestration service.
## Enterprise Self-Hosted Edition
Self-hosting of PowerSync is also available in an *Enterprise Self-Hosted Edition* with dedicated support plans, advanced functionality and custom pricing. See the *Self-Host PowerSync* section on our [Pricing](https://www.powersync.com/pricing) page for more details.
To get started on the Enterprise Self-Hosted Edition please [contact us](mailto:support@powersync.com).
# PowerSync Setup Guide
Source: https://docs.powersync.com/intro/setup-guide
This guide walks you through adding PowerSync to your app project step-by-step.
# 1. Configure Your Source Database
PowerSync needs to connect to your source database (Postgres, MongoDB, MySQL or SQL Server) to replicate data. Before setting up PowerSync, you need to configure your database with the appropriate permissions and replication settings.
Using the [PowerSync CLI](/tools/cli) and want an automatically integrated Postgres instance for local development? You can skip to [Step 2](#2-set-up-powersync-service-instance) and set one up with the **CLI (Self-Hosted)** tab.
Configuring Postgres for PowerSync involves three main tasks:
1. **Enable logical replication**: PowerSync reads the Postgres WAL using logical replication. Set `wal_level = logical` in your Postgres configuration.
2. **Create a PowerSync database user**: Create a role with replication privileges and read-only access to your tables.
3. **Create a `powersync` publication**: Create a logical replication publication named `powersync` to specify which tables to replicate.
```sql General theme={null}
-- 1. Enable logical replication (requires restart)
ALTER SYSTEM SET wal_level = logical;
-- 2. Create PowerSync database user/role with replication privileges and read-only access to your tables
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;
-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role;
-- 3. Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
```
```sql Supabase theme={null}
-- Supabase has logical replication enabled by default
-- Just create the user and publication:
-- Create PowerSync database user/role with replication privileges and read-only access to your tables
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;
-- Optionally, grant SELECT on all future tables (to cater for schema additions)
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role;
-- Create a publication to replicate tables. The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
```
```bash Docker (Self-hosting) theme={null}
# 1. Create a Docker network (if not already created)
# This allows various PowerSync containers to communicate with each other
docker network create powersync-network
# 2. Run Postgres source database with logical replication enabled (required for PowerSync)
docker run -d \
--name powersync-postgres \
--network powersync-network \
-e POSTGRES_PASSWORD="my_secure_password" \
-p 5432:5432 \
postgres:18 \
postgres -c wal_level=logical
# 3. Configure PowerSync user and publication
# This creates a PowerSync database user/role with replication privileges and read-only access to your tables
# Read-only (SELECT) access is also granted to all future tables (to cater for schema additions)
# It also creates a publication to replicate tables. The publication must be named "powersync"
docker exec -it powersync-postgres psql -U postgres -c "
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO powersync_role;
CREATE PUBLICATION powersync FOR ALL TABLES;"
```
**Version compatibility**: PowerSync requires Postgres version 11 or greater.
**Learn More**
* For more details on Postgres setup, including provider-specific guides (Supabase, AWS RDS, etc.), see [Source Database Setup](/configuration/source-db/setup#postgres).
* **Self-hosting PowerSync?** See the [Self-Host-Demo App](https://github.com/powersync-ja/self-host-demo/tree/main/demos/nodejs) for a complete working example of connecting a Postgres source database to PowerSync.
For MongoDB Atlas databases, the minimum permissions when using built-in roles are:
```
read@
readWrite@._powersync_checkpoints
```
To allow PowerSync to automatically enable `changeStreamPreAndPostImages` on replicated collections (optional, but recommended), additionally add:
```
dbAdmin@
```
**Version compatibility**: PowerSync requires MongoDB version 6.0 or greater.
**Learn More**
* For more details including instructions for self-hosted MongoDB, or for custom roles on MongoDB Atlas, see [Source Database Setup](/configuration/source-db/setup#mongodb).
* **Self-hosting PowerSync?** See the [Self-Host-Demo App](https://github.com/powersync-ja/self-host-demo/tree/main/demos/nodejs-mongodb) for a complete working example of connecting a MongoDB source database to PowerSync.
For MySQL, you need to configure binary logging and create a user with replication privileges:
```sql theme={null}
-- Configure binary logging
-- Add to MySQL option file (my.cnf or my.ini):
server_id=
log_bin=ON
enforce_gtid_consistency=ON
gtid_mode=ON
binlog_format=ROW
-- Create a user with necessary privileges
CREATE USER 'repl_user'@'%' IDENTIFIED BY '';
-- Grant replication client privilege
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repl_user'@'%';
-- Grant select access to the specific database
GRANT SELECT ON .* TO 'repl_user'@'%';
-- Apply changes
FLUSH PRIVILEGES;
```
**Version compatibility**: PowerSync requires MySQL version 5.7 or greater.
**Learn More**
* For more details on MySQL setup, see [Source Database Setup](/configuration/source-db/setup#mysql-beta).
* **Self-hosting PowerSync?** See the [Self-Host-Demo App](https://github.com/powersync-ja/self-host-demo/tree/main/demos/nodejs-mysql) for a complete working example of connecting a MySQL source database to PowerSync.
Refer to [these instructions](/configuration/source-db/setup#sql-server-alpha).
**Self-hosting PowerSync?** See the [Self-Host-Demo App](https://github.com/powersync-ja/self-host-demo/tree/main/demos/nodejs-mssql) for a complete working example of connecting a SQL Server source database to PowerSync.
# 2. Set Up PowerSync Service Instance
PowerSync is available as a cloud-hosted service (PowerSync Cloud) or can be self-hosted (PowerSync Open Edition or PowerSync Enterprise Self-Hosted Edition).
If you haven't yet, sign up for a free PowerSync Cloud account [here](https://accounts.powersync.com/portal/powersync-signup?s=docs).
After signing up, you will be taken to the [PowerSync Dashboard](https://dashboard.powersync.com/).
Here, create a new project. *Development* and *Production* instances of the PowerSync Service will be created by default in the project.
If you haven't yet, sign up for a free PowerSync Cloud account [here](https://accounts.powersync.com/portal/powersync-signup?s=docs).
Install the [PowerSync CLI](/tools/cli) (requires Node.js/npm), then log in and scaffold the config directory:
```bash theme={null}
npm install -g powersync
powersync login
powersync init cloud
```
This creates a `powersync/` directory with `service.yaml` (instance name, region, connection, auth) and `sync-config.yaml` (sync config). Edit `powersync/service.yaml` to set your instance name and region — you'll configure the database connection in the next step.
Then create the Cloud instance:
```bash theme={null}
powersync link cloud --create --project-id=
```
Find your project ID in the [PowerSync Dashboard](https://dashboard.powersync.com) URL, or run `powersync fetch instances` after logging in.
Recommended for getting started: the CLI scaffolds your config directory and generates the Docker Compose stack (including a Postgres instance for the source database and storage) so you can run PowerSync locally with minimal setup. For custom setups use the **Manual (Self-Hosted)** tab. Install the [PowerSync CLI](/tools/cli) (requires Node.js/npm); alternative installation options (e.g. installers via GitHub releases) will be available in the near future. Then run:
```bash theme={null}
npm install -g powersync
powersync init self-hosted
powersync docker configure --database postgres --storage postgres
```
This configures Postgres for both the source database and bucket storage, and creates `powersync/docker/docker-compose.yaml`. Other databases are supported as well; you'll learn more about this in the next step. Before starting, replace `powersync/sync-config.yaml` with this minimal sync config:
```yaml theme={null}
config:
edition: 2
streams:
todos:
# Streams without parameters sync the same data to all users
auto_subscribe: true
query: "SELECT * FROM todos"
```
You'll update this with your actual tables/collections in a later step.
The Docker Postgres instance runs init scripts only on first start, so create your application's tables before running `powersync docker start` for the first time. See the [Docker usage docs](https://github.com/powersync-ja/powersync-cli/blob/main/docs/usage-docker.md) in the PowerSync CLI repository for more details.
Then start the PowerSync Service:
```bash theme={null}
powersync docker start
```
Run `powersync status` to verify it's running.
**Learn More**
* [Self-Hosting Introduction](/intro/self-hosting)
* [Self-Host Demo App](https://github.com/powersync-ja/self-host-demo) for complete working examples.
* [Self-Hosted Service Configuration](/configuration/powersync-service/self-hosted-instances) for more details on the config file structure.
* [CLI documentation](/tools/cli)
Self-hosted PowerSync runs via Docker. The commands below illustrate the basic PowerSync Service requirements.
Below is a minimal example using Postgres for bucket storage. MongoDB is also supported as bucket storage. The source database connection is configured in the next step — you can use the Docker-managed Postgres from Step 1 or point to an external database instead.
```bash theme={null}
# 1. Create a directory for your config
mkdir powersync-service && cd powersync-service
# 2. Set up bucket storage (Postgres and MongoDB are supported)
docker run -d \
--name powersync-postgres-storage \
--network powersync-network \
-p 5433:5432 \
-e POSTGRES_PASSWORD="my_secure_storage_password" \
-e POSTGRES_DB=powersync_storage \
postgres:18
## Set up Postgres storage user
docker exec -it powersync-postgres-storage psql -U postgres -d powersync_storage -c "
CREATE USER powersync_storage_user WITH PASSWORD 'my_secure_user_password';
GRANT CREATE ON DATABASE powersync_storage TO powersync_storage_user;"
# 3. Create config.yaml (see below)
# 4. Run PowerSync Service
# The Service config can be specified as an environment variable (shown below), as a filepath, or as a command line parameter
# See these docs for more details: https://docs.powersync.com/configuration/powersync-service/self-hosted-instances
docker run -d \
--name powersync \
--network powersync-network \
-p 8080:8080 \
-e POWERSYNC_CONFIG_B64="$(base64 -i ./config.yaml)" \
journeyapps/powersync-service:latest
```
**Basic `config.yaml` structure:**
```yaml theme={null}
# Source database connection (see the next step for more details)
replication:
connections:
- type: postgresql # or mongodb, mysql, mssql
uri: postgresql://powersync_role:myhighlyrandompassword@powersync-postgres:5432/postgres
sslmode: disable # Only for local/private networks
# Connection settings for bucket storage (Postgres and MongoDB are supported)
storage:
type: postgresql
uri: postgresql://powersync_storage_user:my_secure_user_password@powersync-postgres-storage:5432/powersync_storage
sslmode: disable # Use 'disable' only for local/private networks
# Sync Streams config (defined in a later step)
sync_config:
content: |
config:
edition: 3
streams:
shared_data:
auto_subscribe: true
queries:
- SELECT * FROM lists
- SELECT * FROM todos
```
**Note**: This example assumes you've configured your source database with the required user and publication (see the previous step)
and are running it via Docker in the 'powersync-network' network.
If you are not using Docker, you will need to specify the connection details in the `config.yaml` file manually (see next step for more details).
**Learn More**
* [Self-Hosting Introduction](/intro/self-hosting)
* [Self-Host Demo App](https://github.com/powersync-ja/self-host-demo) for complete working examples.
* [Self-Hosted Service Configuration](/configuration/powersync-service/self-hosted-instances) for more details on the config file structure.
* [CLI documentation](/tools/cli)
# 3. Connect PowerSync to Your Source Database
The next step is to connect your PowerSync Service instance to your source database.
In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance, then go to **Database Connections**:
1. Click **Connect to Source Database**
2. Select the appropriate database type tab (Postgres, MongoDB, MySQL or SQL Server)
3. Fill in your connection details:
**Note**: Use the username (e.g., `powersync_role`) and password you created in Step 1: Configure your Source Database.
* **Postgres**: Host, Port (5432), Database name, Username, Password, SSL Mode
* **MongoDB**: Connection URI (e.g., `mongodb+srv://user:pass@cluster.mongodb.net/database`)
* **MySQL**: Host, Port (3306), Database name, Username, Password
* **SQL Server**: Name, Host, Port (1433), Database name, Username, Password
4. Click **Test Connection** to verify
5. Click **Save Connection**
PowerSync will now deploy and configure an isolated cloud environment, which can take a few minutes.
**Learn More**
For more details on database connections, including provider-specific connection details (Supabase, AWS RDS, MongoDB Atlas, etc.), see [Source Database Connection](/configuration/source-db/connection).
Edit `powersync/service.yaml` (created in the previous step) with your connection details. Use `!env` for secrets:
**Note**: Use the username (e.g., `powersync_role`) and password you created in Step 1: Configure your Source Database.
```yaml Postgres theme={null}
replication:
connections:
- type: postgresql
uri: postgresql://powersync_role:myhighlyrandompassword@host:5432/postgres
sslmode: disable # 'verify-full' (default) or 'verify-ca' or 'disable'
# Note: 'disable' is only suitable for local/private networks
```
```yaml MongoDB theme={null}
replication:
connections:
- type: mongodb
uri: mongodb+srv://user:password@cluster.mongodb.net/database
post_images: auto_configure
```
```yaml MySQL theme={null}
replication:
connections:
- type: mysql
uri: mysql://repl_user:password@host:3306/database
```
```yaml SQL Server theme={null}
replication:
connections:
- type: mssql
uri: mssql://user:password@host:1433/database
schema: dbo
```
You will run `powersync deploy` in a later step to deploy your config to the PowerSync Cloud instance.
**Learn More**
For more details on database connections, including provider-specific connection details (Supabase, AWS RDS, MongoDB Atlas, etc.), see [Source Database Connection](/configuration/source-db/connection).
If you used Docker in the previous step, the source database connection is already configured. `service.yaml` reads the connection URI from `!env PS_DATA_SOURCE_URI`. The Docker-managed Postgres (`pg-db`) was also pre-configured with `wal_level=logical` and a `powersync` publication by the init scripts.
If you want to use an **external database** instead, update `PS_DATA_SOURCE_URI` in `powersync/docker/.env` with your connection details, then restart:
```bash theme={null}
powersync docker reset
```
You'll also need to complete the source database setup from Step 1 (replication user, publication) on your external database before this will work.
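As an illustration, the `.env` entry might look like this (hypothetical host and credentials — use your own connection details):

```bash
# powersync/docker/.env — point PowerSync at an external source database
PS_DATA_SOURCE_URI=postgresql://powersync_role:myhighlyrandompassword@db.example.com:5432/postgres
```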
Configure the source database connection in your `config.yaml` file (as you did in the previous step). Examples for the different database types are below.
**Note**: Use the username (e.g., `powersync_role`) and password you created in Step 1: Configure your Source Database.
```yaml Postgres theme={null}
replication:
connections:
- type: postgresql
uri: postgresql://powersync_role:myhighlyrandompassword@powersync-postgres:5432/postgres
sslmode: disable # 'verify-full' (default) or 'verify-ca' or 'disable'
# Note: 'disable' is only suitable for local/private networks, not for public networks
```
```yaml MySQL theme={null}
replication:
connections:
- type: mysql
uri: mysql://repl_user:password@host:3306/database
```
```yaml SQL Server theme={null}
replication:
connections:
- type: mssql
uri: mssql://user:password@host:1433/database
schema: dbo
additionalConfig:
trustServerCertificate: true
pollingIntervalMs: 1000
pollingBatchSize: 20
```
**Learn More**
See the [self-host-demo app](https://github.com/powersync-ja/self-host-demo) for complete working examples of the different database types.
# 4. Define Sync Streams
PowerSync uses either **Sync Streams** (or legacy **Sync Rules**) to control which data gets synced to which users/devices. Both use SQL-like queries defined in YAML format.
Sync Streams are now in beta and production-ready. We recommend Sync Streams for new projects — they offer a simpler syntax and support on-demand syncing for web apps.
Start with simple **auto-subscribed streams** that sync data to all users by default:
```yaml Postgres Example theme={null}
config:
edition: 3
streams:
shared_data:
auto_subscribe: true
queries:
- SELECT * FROM todos
- SELECT * FROM lists WHERE NOT archived
```
```yaml MongoDB Example theme={null}
config:
edition: 3
streams:
shared_data:
auto_subscribe: true
# MongoDB uses "_id" but PowerSync uses "id" on the client
queries:
- SELECT _id as id, * FROM lists
- SELECT _id as id, * FROM todos WHERE archived = false
```
```yaml MySQL Example theme={null}
config:
edition: 3
streams:
shared_data:
auto_subscribe: true
queries:
- SELECT * FROM todos
- SELECT * FROM lists WHERE NOT archived
```
```yaml SQL Server Example theme={null}
config:
edition: 3
streams:
shared_data:
auto_subscribe: true
queries:
- SELECT * FROM todos
- SELECT * FROM lists WHERE NOT archived
```
**Learn more:** [Sync Streams documentation](/sync/streams/overview)
Sync Rules is the original system for controlling data sync. Use this if you prefer a fully released (non-beta) solution.
```yaml Postgres Example theme={null}
bucket_definitions:
global:
data:
- SELECT * FROM todos
- SELECT * FROM lists WHERE archived = false
```
```yaml MongoDB Example theme={null}
bucket_definitions:
global:
data:
# MongoDB uses "_id" but PowerSync uses "id" on the client
- SELECT _id as id, * FROM lists
- SELECT _id as id, * FROM todos WHERE archived = false
```
```yaml MySQL Example theme={null}
bucket_definitions:
global:
data:
- SELECT * FROM todos
- SELECT * FROM lists WHERE archived = 0
```
```yaml SQL Server Example theme={null}
bucket_definitions:
global:
data:
- SELECT * FROM todos
- SELECT * FROM lists WHERE archived = 0
```
**Learn more:** [Sync Rules documentation](/sync/rules/overview)
### Deploy Your Configuration
In the [PowerSync Dashboard](https://dashboard.powersync.com/):
1. Select your project and instance
2. Go to the **Sync Streams** or **Sync Rules** view (depending on which you’re using)
3. Edit the YAML directly in the dashboard
4. Click **Deploy** to validate and deploy your configuration
Edit `powersync/sync-config.yaml` with your sync config, then validate and deploy to the linked Cloud instance:
```bash theme={null}
powersync validate
powersync deploy
```
This deploys your full config (connection, auth, and sync config). For subsequent sync-only changes, use `powersync deploy sync-config` instead.
Edit `powersync/sync-config.yaml` with your sync config. The default file has a placeholder (`SELECT * FROM todos`) — replace it with your actual table/collection names. Then apply the changes:
```bash theme={null}
powersync validate
powersync docker reset
```
Add a `sync_config` section to your `config.yaml`. Using a separate file (recommended) keeps the main config tidy:
**Recommended — reference a separate file:**
```yaml config.yaml theme={null}
sync_config:
path: sync-config.yaml
```
Put your streams or rules in `sync-config.yaml` (see [Self-Hosted Instance Configuration](/configuration/powersync-service/self-hosted-instances#sync-streams--sync-rules) for full examples). Alternatively, you can use inline `content: |` with the YAML nested under `sync_config`.
Table/collection names in your configuration must match the table names defined in your client-side schema (defined in a later step below).
# 5. Generate a Development Token
For quick development and testing, you can generate a temporary development token instead of implementing full authentication.
You'll use this token for two purposes:
* **Testing with the *Sync Diagnostics Client*** (in the next step) to verify your setup and Sync Streams (or legacy Sync Rules)
* **Connecting your app** (in a later step) to test the client SDK integration
1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance
2. Go to the **Client Auth** view
3. Check the **Development tokens** setting and save your changes
4. Click the **Connect** button in the top bar
5. **Enter token subject**: Since you're starting with simple streams or buckets that sync all data to all users (as we recommended in the previous step), you can just put something like `test-user` as the token subject (which would normally be the user ID you want to test with).
6. Click **Generate token** and copy the token
Development tokens expire after 12 hours.
Generate a development token with:
```bash theme={null}
powersync generate token --subject=test-user
```
Replace `test-user` with a user ID of your choice (this would normally be the user ID you want to test with).
Requires `allow_temporary_tokens` to be enabled on the instance. Add it to `powersync/service.yaml` if you haven't already, then redeploy:
```yaml theme={null}
client_auth:
allow_temporary_tokens: true
```
```bash theme={null}
powersync deploy
```
Development tokens expire after 12 hours.
Follow the steps below. Steps 1 and 2 configure signing keys and your PowerSync config; in Step 3 you can use the **CLI (recommended)** or the test-client to generate the token.
Generate a temporary private/public key-pair (RS256) or shared key (HS256) for JWT signing and verification.
Use an online JWK generator like [mkjwk.org](https://mkjwk.org/) (select RSA, 2048 bits, Signature use, RS256 algorithm).
Or generate locally with Node.js:
```bash theme={null}
# Install pem-jwk if needed
npm install -g pem-jwk
# Generate private key
openssl genrsa -out private-key.pem 2048
# Convert public key to JWK format
openssl rsa -in private-key.pem -pubout | pem-jwk
```
Use an online JWK generator like [mkjwk.org](https://mkjwk.org/) (select oct, 256 bits, Signature use, HS256 algorithm) - this outputs base64url directly.
Or generate and convert using OpenSSL:
```bash theme={null}
# Generate and convert to base64url
openssl rand -base64 32 | tr '+/' '-_' | tr -d '='
```
For production environments, shared secrets (HS256) are not recommended.
Add the `client_auth` parameter to your PowerSync config (e.g. `service.yaml`):
Copy the JWK values from [mkjwk.org](https://mkjwk.org/) or the `pem-jwk` output, then add to your config:
```yaml config.yaml theme={null}
# Client (application end user) authentication settings
client_auth:
# static collection of public keys for JWT verification
jwks:
keys:
- kty: 'RSA'
n: '[rsa-modulus]'
e: '[rsa-exponent]'
alg: 'RS256'
kid: 'dev-key-1'
```
Copy the `k` value from mkjwk.org or the OpenSSL output, then add to your config:
```yaml config.yaml theme={null}
# Client (application end user) authentication settings
client_auth:
audience: ['http://localhost:8080', 'http://127.0.0.1:8080']
# static collection of public keys for JWT verification
jwks:
keys:
- kty: oct
alg: 'HS256'
k: '[base64url-encoded-shared-secret]'
kid: 'dev-key-1'
```
These examples use static `jwks: keys:` for simplicity. For production, we recommend using `jwks_uri` to point to a JWKS endpoint instead. See [Custom Authentication](/configuration/auth/custom) for more details.
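To make the token shape concrete, here is a Python stdlib sketch of an HS256 JWT like the one the service verifies against the shared key above. This is not the PowerSync CLI's implementation — the secret, subject, and audience values are illustrative assumptions:

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Stand-in for your random shared key; its b64url form is the `k` value in client_auth.jwks.
secret = bytes(range(32))
print("k:", b64url(secret))

header = {"alg": "HS256", "typ": "JWT", "kid": "dev-key-1"}
payload = {
    "sub": "test-user",                   # token subject = user ID
    "aud": "http://localhost:8080",       # must match client_auth.audience
    "iat": int(time.time()),
    "exp": int(time.time()) + 12 * 3600,  # development tokens are short-lived
}
signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}"
signature = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
token = f"{signing_input}.{b64url(signature)}"
print(token.count("."))  # 2 — header.payload.signature
```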
Choose either the [PowerSync CLI](/tools/cli) (recommended) or the test-client:
Apply your config changes (e.g. restart your PowerSync Service or run `powersync docker reset` if running locally with Docker), then run:
```bash theme={null}
powersync generate token --subject=test-user
```
Replace `test-user` with the user ID you want to authenticate:
* If your Sync Streams/Rules data isn't filtered by user (same data syncs to all users), you can use any value (e.g., `test-user`).
* If your data is filtered by user-specific parameters, use a user ID that matches a user in your database. PowerSync exposes this value to your Sync Streams (as `auth.user_id()`) or Sync Rules (as `request.user_id()`) to determine what to sync.
1. If you have not done so already, clone the [`powersync-service` repo](https://github.com/powersync-ja/powersync-service/tree/main)
2. Install and build:
* In the project root: `pnpm install` and `pnpm build`
* In the `test-client` directory: `pnpm build`
3. Generate a token from the `test-client` directory, pointing at your config file:
```bash theme={null}
node dist/bin.js generate-token --config path/to/config.yaml --sub test-user
```
Replace `test-user` with the user ID you want to authenticate:
* If your Sync Streams/Rules data isn't filtered by user (same data syncs to all users), you can use any value (e.g., `test-user`).
* If your data is filtered by user-specific parameters, use a user ID that matches a user in your database. PowerSync exposes this value to your Sync Streams (as `auth.user_id()`) or Sync Rules (as `request.user_id()`) to determine what to sync.
Development tokens expire after 12 hours.
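If you want to script HS256 development tokens yourself (for example in a test harness), such a token is simply an HMAC-SHA256-signed JWT. Below is a minimal sketch using only Node's standard library. The shared secret, `kid`, and `aud` values are placeholders that must match your `client_auth` config:

```typescript theme={null}
import { createHmac } from 'node:crypto';

// Placeholder: the base64url-encoded shared secret (the `k` value in your config)
const sharedSecret = Buffer.from('wpNd0Z25wpBT-7wqnBgMzTqNnbQGTCsQViUy6Dg0S5U', 'base64url');

const b64url = (s: string): string => Buffer.from(s).toString('base64url');

// Header: HS256, with `kid` matching the key id in your config
const header = b64url(JSON.stringify({ alg: 'HS256', typ: 'JWT', kid: 'dev-key-1' }));

// Payload: `sub` is the user ID; `aud` must match a configured audience value
const now = Math.floor(Date.now() / 1000);
const payload = b64url(JSON.stringify({
  sub: 'test-user',
  aud: 'http://localhost:8080',
  iat: now,
  exp: now + 12 * 60 * 60 // expire after 12 hours, like the development tokens above
}));

const signature = createHmac('sha256', sharedSecret)
  .update(`${header}.${payload}`)
  .digest('base64url');

const token = `${header}.${payload}.${signature}`;
console.log(token);
```

The printed token can be used anywhere a development token is accepted, such as the Sync Diagnostics Client described in the next section.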
# 6. \[Optional] Test Sync with the Sync Diagnostics Client
Before implementing the PowerSync Client SDK in your app, you can validate that syncing is working correctly using our [Sync Diagnostics Client](https://diagnostics-app.powersync.com) (this hosted version works with both PowerSync Cloud and self-hosted setups).
Use the development token you generated in the [previous step](#5-generate-a-development-token) to connect and verify your setup:
1. Go to [https://diagnostics-app.powersync.com](https://diagnostics-app.powersync.com)
2. Enter your development token at **PowerSync Token** (from the [Generate a Development Token](#5-generate-a-development-token) step above)
3. Enter your PowerSync instance URL at **PowerSync Endpoint** (found in the [PowerSync Dashboard](https://dashboard.powersync.com/) - click **Connect** in the top bar)
4. Click **Proceed**
1. Go to [https://diagnostics-app.powersync.com](https://diagnostics-app.powersync.com)
2. Enter your development token at **PowerSync Token** (from the [Generate a Development Token](#5-generate-a-development-token) step above)
3. Enter your PowerSync Service endpoint at **PowerSync Endpoint** (the URL where your self-hosted service is running, e.g. `http://localhost:8080` if running locally)
4. Click **Proceed**
The Sync Diagnostics Client can also be run as a local standalone web app — see the [README](https://github.com/powersync-ja/powersync-js/tree/main/tools/diagnostics-app#readme) for instructions.
The Sync Diagnostics Client will connect to your PowerSync Service instance and display [information](https://github.com/powersync-ja/powersync-js/tree/main/tools/diagnostics-app#functionality) about the synced data, and allow you to [query](https://github.com/powersync-ja/powersync-js/tree/main/tools/diagnostics-app#sql-console) the client-side SQLite database.
**Checkpoint:**
Inspect your synced tables in the Sync Diagnostics Client — these should match the Sync Streams (or legacy Sync Rules) you [defined previously](#4-define-sync-streams-or-sync-rules). This confirms your setup is working correctly before integrating the client SDK into your app.
# 7. Use the Client SDK
Now it's time to integrate PowerSync into your app. This involves installing the Client SDK, defining your client-side schema, instantiating the database, connecting to your PowerSync Service instance, and reading/writing data.
### Install the Client SDK
Add the PowerSync Client SDK to your app project. PowerSync provides SDKs for various platforms and frameworks.
Add the [PowerSync React Native NPM package](https://www.npmjs.com/package/@powersync/react-native) to your project:
```bash theme={null}
npx expo install @powersync/react-native
```
```bash theme={null}
yarn expo add @powersync/react-native
```
```bash theme={null}
pnpm expo install @powersync/react-native
```
**Install Peer Dependencies**
PowerSync requires a SQLite database adapter. Choose between:
[PowerSync OP-SQLite](https://www.npmjs.com/package/@powersync/op-sqlite) offers:
* Built-in encryption support via SQLCipher
* Smoother transition to React Native's New Architecture
```bash theme={null}
npx expo install @powersync/op-sqlite @op-engineering/op-sqlite
```
```bash theme={null}
yarn expo add @powersync/op-sqlite @op-engineering/op-sqlite
```
```bash theme={null}
pnpm expo install @powersync/op-sqlite @op-engineering/op-sqlite
```
The [@journeyapps/react-native-quick-sqlite](https://www.npmjs.com/package/@journeyapps/react-native-quick-sqlite) package is the original database adapter for React Native and therefore more battle-tested in production environments.
```bash theme={null}
npx expo install @journeyapps/react-native-quick-sqlite
```
```bash theme={null}
yarn expo add @journeyapps/react-native-quick-sqlite
```
```bash theme={null}
pnpm expo install @journeyapps/react-native-quick-sqlite
```
**iOS with `use_frameworks!`**
If your iOS project uses `use_frameworks!`, add the `@journeyapps/react-native-quick-sqlite` plugin to your `app.json` or `app.config.js` and configure the `staticLibrary` option:
```json theme={null}
{
"expo": {
"plugins": [
[
"@journeyapps/react-native-quick-sqlite",
{
"staticLibrary": true
}
]
]
}
}
```
This plugin automatically configures the necessary build settings for `react-native-quick-sqlite` to work with `use_frameworks!`.
**Using Expo Go?** The native database adapters listed above (OP-SQLite and React Native Quick SQLite) are not compatible with Expo Go's sandbox environment. To run PowerSync with Expo Go, install our JavaScript-based adapter `@powersync/adapter-sql-js` instead. See details [here](/client-sdks/frameworks/expo-go-support).
**Polyfills and additional notes:**
* For async iterator support with watched queries, additional polyfills are required. See the [Babel plugins section](https://www.npmjs.com/package/@powersync/react-native#babel-plugins-watched-queries) in the README.
* When using the **OP-SQLite** package, we recommend adding this [metro config](https://github.com/powersync-ja/powersync-js/tree/main/packages/react-native#metro-config-optional)
to avoid build issues.
Add the [PowerSync Web NPM package](https://www.npmjs.com/package/@powersync/web) to your project:
```bash theme={null}
npm install @powersync/web
```
```bash theme={null}
yarn add @powersync/web
```
```bash theme={null}
pnpm install @powersync/web
```
**Install Peer Dependencies**
This SDK currently requires [`@journeyapps/wa-sqlite`](https://www.npmjs.com/package/@journeyapps/wa-sqlite) as a peer dependency. Install it in your app with:
```bash theme={null}
npm install @journeyapps/wa-sqlite
```
```bash theme={null}
yarn add @journeyapps/wa-sqlite
```
```bash theme={null}
pnpm install @journeyapps/wa-sqlite
```
Add the [PowerSync Node NPM package](https://www.npmjs.com/package/@powersync/node) to your project:
```bash theme={null}
npm install @powersync/node
```
```bash theme={null}
yarn add @powersync/node
```
```bash theme={null}
pnpm install @powersync/node
```
**Install Peer Dependencies**
The PowerSync SDK for Node.js supports multiple drivers. More details are available under [Encryption and Custom SQLite Drivers](/client-sdks/reference/node#encryption-and-custom-sqlite-drivers). We currently recommend the `better-sqlite3` package for most users:
```bash theme={null}
npm install better-sqlite3
```
```bash theme={null}
yarn add better-sqlite3
```
```bash theme={null}
pnpm install better-sqlite3
```
Previous versions of the PowerSync SDK for Node.js used the `@powersync/better-sqlite3` fork as a
required peer dependency. This is no longer recommended. After upgrading to `@powersync/node` version
`0.12.0` or later, ensure the old package is no longer installed by running
`npm uninstall @powersync/better-sqlite3`.
**Common Installation Issues**
The `better-sqlite3` package requires native compilation, which depends on certain system tools.
Prebuilt binaries are available and used by default, but a build from source may be triggered depending on the Node.js
or Electron version in use.
This compilation process is handled by `node-gyp` and may fail if required dependencies are missing or misconfigured.
Refer to the [PowerSync Node package README](https://www.npmjs.com/package/@powersync/node) for more details.
Add the [PowerSync Capacitor NPM package](https://www.npmjs.com/package/@powersync/capacitor) to your project:
```bash theme={null}
npm install @powersync/capacitor
```
```bash theme={null}
yarn add @powersync/capacitor
```
```bash theme={null}
pnpm install @powersync/capacitor
```
**Install Peer Dependencies**
You must also install the following peer dependencies:
```bash theme={null}
npm install @capacitor-community/sqlite @powersync/web @journeyapps/wa-sqlite
```
```bash theme={null}
yarn add @capacitor-community/sqlite @powersync/web @journeyapps/wa-sqlite
```
```bash theme={null}
pnpm install @capacitor-community/sqlite @powersync/web @journeyapps/wa-sqlite
```
After installing, sync your Capacitor project:
```bash theme={null}
npx cap sync
```
Add the [PowerSync pub.dev package](https://pub.dev/packages/powersync) to your project:
```bash theme={null}
flutter pub add powersync
```
Add the [PowerSync SDK](https://central.sonatype.com/artifact/com.powersync/core) to your project by adding the following to your `build.gradle.kts` file:
```toml gradle/libs.versions.toml theme={null}
[versions]
# Please check the latest version at https://github.com/powersync-ja/powersync-kotlin/releases/
powersync = "1.10.0"
[libraries]
powersync-core = { module = "com.powersync:core", version.ref = "powersync" }
powersync-integration-supabase = { module = "com.powersync:connector-supabase", version.ref = "powersync" }
```
```Kotlin build.gradle.kts icon="https://mintcdn.com/powersync/GTJdSKFSfUc2Sxtc/logo/gradle.svg?fit=max&auto=format&n=GTJdSKFSfUc2Sxtc&q=85&s=bb14bd89bac7520f103a2ad2abc17053" theme={null}
kotlin {
//...
sourceSets {
commonMain.dependencies {
implementation(libs.powersync.core)
// If you want to use the Supabase Connector, also add the following:
implementation(libs.powersync.integration.supabase)
}
//...
}
}
```
```Kotlin build.gradle.kts icon="https://mintcdn.com/powersync/GTJdSKFSfUc2Sxtc/logo/gradle.svg?fit=max&auto=format&n=GTJdSKFSfUc2Sxtc&q=85&s=bb14bd89bac7520f103a2ad2abc17053" theme={null}
kotlin {
//...
sourceSets {
commonMain.dependencies {
implementation("com.powersync:core:$powersyncVersion")
// If you want to use the Supabase Connector, also add the following:
implementation("com.powersync:connector-supabase:$powersyncVersion")
}
//...
}
}
```
In a Kotlin Multiplatform project targeting iOS, macOS, tvOS or watchOS, you also need to
install the PowerSync SQLite extension.
The best way to do that depends on how you [integrate Kotlin](https://kotlinlang.org/docs/multiplatform/multiplatform-ios-integration-overview.html) into the Xcode project.
PowerSync works with the [direct integration](https://kotlinlang.org/docs/multiplatform/multiplatform-direct-integration.html): you can add the SQLite extension as a dependency
in Xcode. In your Xcode project settings, under "Package Dependencies", add a package and use
`https://github.com/powersync-ja/powersync-sqlite-core-swift.git` as the package URL.
Use a version dependency, starting with the [latest version](https://github.com/powersync-ja/powersync-sqlite-core-swift/releases).
If you have an existing `Package.swift` file, depend on the SQLite extension like this:
```Swift Package.swift theme={null}
dependencies: [
.package(
url: "https://github.com/powersync-ja/powersync-sqlite-core-swift.git",
// Refer to github.com/powersync-ja/powersync-sqlite-core-swift/releases for the latest version.
exact: "0.4.11",
)
]
```
Note that CocoaPods will become read-only in late 2026, and we won't be able to update the
SQLite extension through CocoaPods afterwards.
Add the following to the `cocoapods` config in your `build.gradle.kts`:
```Kotlin theme={null}
cocoapods {
//...
pod("powersync-sqlite-core") {
linkOnly = true
}
framework {
isStatic = true
export("com.powersync:core")
}
//...
}
```
The `linkOnly = true` attribute and `isStatic = true` framework setting ensure that the `powersync-sqlite-core` binaries are statically linked.
For Android and JVM targets, the extension is embedded in the SDK and doesn't need to be installed manually.
You can add the PowerSync Swift package to your project using either `Package.swift` or Xcode:
```swift theme={null}
let package = Package(
//...
dependencies: [
//...
.package(
url: "https://github.com/powersync-ja/powersync-swift",
        exact: "<version>"
),
],
targets: [
.target(
name: "YourTargetName",
dependencies: [
.product(
name: "PowerSync",
package: "powersync-swift"
)
]
)
]
)
```
1. Follow [this guide](https://developer.apple.com/documentation/xcode/adding-package-dependencies-to-your-app#Add-a-package-dependency) to add a package to your project.
2. Use `https://github.com/powersync-ja/powersync-swift.git` as the URL
3. Include the exact version (e.g., `1.0.x`)
For desktop/server/binary use-cases and WPF, add the [`PowerSync.Common`](https://www.nuget.org/packages/PowerSync.Common/) NuGet package to your project:
```bash theme={null}
dotnet add package PowerSync.Common --prerelease
```
For MAUI apps, add both [`PowerSync.Common`](https://www.nuget.org/packages/PowerSync.Common/) and [`PowerSync.Maui`](https://www.nuget.org/packages/PowerSync.Maui/) NuGet packages to your project:
```bash theme={null}
dotnet add package PowerSync.Common --prerelease
dotnet add package PowerSync.Maui --prerelease
```
Add `--prerelease` while this package is in alpha. To install a specific version, use `--version` instead: `dotnet add package PowerSync.Common --version 0.0.6-alpha.1`
Add the [PowerSync crate](https://crates.io/crates/powersync) to your project by running the following command (this adds the dependency to your `Cargo.toml`):
```shell theme={null}
cargo add powersync
```
### Define Your Client-Side Schema
This is the schema for the managed SQLite database exposed by the PowerSync Client SDKs, which your app can read from and write to. The schema is applied when the database is instantiated (as we'll show in the next step). No migrations are required.
*PowerSync Cloud:* The easiest way to generate your schema is using the [PowerSync Dashboard](https://dashboard.powersync.com/). Click the **Connect** button in the top bar to generate the client-side schema based on your Sync Streams (or legacy Sync Rules) in your preferred language.
Here's an example schema for a simple `todos` table:
```typescript React Native (TS) theme={null}
import { column, Schema, Table } from '@powersync/react-native';
const todos = new Table(
{
list_id: column.text,
created_at: column.text,
completed_at: column.text,
description: column.text,
created_by: column.text,
completed_by: column.text,
completed: column.integer
},
{ indexes: { list: ['list_id'] } }
);
export const AppSchema = new Schema({
todos
});
```
```typescript Web & Capacitor (TS) theme={null}
import { column, Schema, Table } from '@powersync/web';
const todos = new Table(
{
list_id: column.text,
created_at: column.text,
completed_at: column.text,
description: column.text,
created_by: column.text,
completed_by: column.text,
completed: column.integer
},
{ indexes: { list: ['list_id'] } }
);
export const AppSchema = new Schema({
todos
});
```
```typescript Node.js (TS) theme={null}
import { column, Schema, Table } from '@powersync/node';
const todos = new Table(
{
list_id: column.text,
created_at: column.text,
completed_at: column.text,
description: column.text,
created_by: column.text,
completed_by: column.text,
completed: column.integer
},
{ indexes: { list: ['list_id'] } }
);
export const AppSchema = new Schema({
todos
});
```
```kotlin Kotlin theme={null}
import com.powersync.db.schema.Column
import com.powersync.db.schema.Schema
import com.powersync.db.schema.Table
import com.powersync.db.schema.Index
import com.powersync.db.schema.IndexedColumn
val AppSchema: Schema = Schema(
listOf(
Table(
name = "todos",
columns = listOf(
Column.text("list_id"),
Column.text("created_at"),
Column.text("completed_at"),
Column.text("description"),
Column.integer("completed"),
Column.text("created_by"),
Column.text("completed_by")
),
indexes = listOf(
Index("list", listOf(IndexedColumn.descending("list_id")))
)
)
)
)
```
```swift Swift theme={null}
import PowerSync
let todos = Table(
name: "todos",
columns: [
Column.text("list_id"),
Column.text("description"),
Column.integer("completed"),
Column.text("created_at"),
Column.text("completed_at"),
Column.text("created_by"),
Column.text("completed_by")
],
indexes: [
Index(
name: "list_id",
columns: [IndexedColumn.ascending("list_id")]
)
]
)
let AppSchema = Schema(todos)
```
```dart Dart/Flutter theme={null}
import 'package:powersync/powersync.dart';
const schema = Schema(([
Table('todos', [
Column.text('list_id'),
Column.text('created_at'),
Column.text('completed_at'),
Column.text('description'),
Column.integer('completed'),
Column.text('created_by'),
Column.text('completed_by'),
], indexes: [
Index('list', [IndexedColumn('list_id')])
])
]));
```
```csharp .NET theme={null}
using PowerSync.Common.DB.Schema;
using PowerSync.Common.DB.Schema.Attributes;
[Table("todos"), Index("list", ["list_id"])]
public class Todo
{
// Attribute-based schema requires an explicit id; other syntaxes define an implicit id key. Learn more in the .NET SDK reference.
[Column("id")]
public string TodoId { get; set; }
[Column("list_id")]
public string ListId { get; set; }
[Column("created_at")]
public string CreatedAt { get; set; }
[Column("completed_at")]
public string CompletedAt { get; set; }
[Column("description")]
public string Description { get; set; }
[Column("created_by")]
public string CreatedBy { get; set; }
[Column("completed_by")]
public string CompletedBy { get; set; }
[Column("completed")]
public bool Completed { get; set; }
}
public static Schema PowerSyncSchema = new Schema(typeof(Todo));
```
This uses the recommended attribute-based syntax, where your C# class doubles as both the schema definition and the result type for queries — so you only define your data structure once. If you prefer to keep your schema definition separate from your data classes, an object initializer syntax is also available. See the [.NET SDK reference](/client-sdks/reference/dotnet#schema-definition-syntax) for details.
```rust Rust theme={null}
use powersync::schema::{Column, Schema, Table};
pub fn app_schema() -> Schema {
let mut schema = Schema::default();
let todos = Table::create(
"todos",
vec![
Column::text("list_id"),
Column::text("created_at"),
Column::text("completed_at"),
Column::text("description"),
Column::integer("completed"),
Column::text("created_by"),
Column::text("completed_by"),
],
|_| {},
);
schema.tables.push(todos);
schema
}
```
**Note**: The schema does not explicitly specify an `id` column, since PowerSync automatically creates an `id` column of type `text`. PowerSync [recommends](/sync/advanced/client-id) using UUIDs.
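For example, inserts typically generate the `id` client-side so that writes work offline. A minimal sketch using Node's built-in UUID generator — the `todos` columns follow the example schema above, and the statement shape is what you'd pass to the SDK's `execute` call:

```typescript theme={null}
import { randomUUID } from 'node:crypto';

// Generate the row ID client-side (PowerSync recommends UUIDs)
const id = randomUUID();

// Example insert against the `todos` table from the schema above;
// with a PowerSync database instance this would be `await db.execute(sql, params)`
const sql = 'INSERT INTO todos (id, description, completed) VALUES (?, ?, ?)';
const params = [id, 'Buy milk', 0];
console.log(sql, params);
```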
**Learn More**
The client-side schema uses three column types: `text`, `integer`, and `real`. These map directly to values from your Sync Streams (or legacy Sync Rules) and are automatically cast if needed. For details on how backend database types map to SQLite types, see [Types](/sync/types).
### Instantiate the PowerSync Database
Now that you have your client-side schema defined, instantiate the PowerSync database in your app. This creates the client-side SQLite database that will be kept in sync with your source database based on your Sync Streams (or legacy Sync Rules).
```typescript React Native (TS) theme={null}
import { PowerSyncDatabase } from '@powersync/react-native';
import { AppSchema } from './Schema';
export const db = new PowerSyncDatabase({
schema: AppSchema,
database: {
dbFilename: 'powersync.db'
}
});
```
```typescript Web (TS) theme={null}
import { PowerSyncDatabase } from '@powersync/web';
import { AppSchema } from './Schema';
export const db = new PowerSyncDatabase({
schema: AppSchema,
database: {
dbFilename: 'powersync.db'
}
});
```
```typescript Node.js (TS) theme={null}
import { PowerSyncDatabase } from '@powersync/node';
import { AppSchema } from './Schema';
export const db = new PowerSyncDatabase({
schema: AppSchema,
database: {
dbFilename: 'powersync.db'
}
});
```
```typescript Capacitor (TS) theme={null}
import { PowerSyncDatabase } from '@powersync/capacitor';
// Import general components from the Web SDK package
import { Schema } from '@powersync/web';
import { Connector } from './Connector';
import { AppSchema } from './AppSchema';
/**
* The Capacitor PowerSyncDatabase will automatically detect the platform
* and use the appropriate database drivers.
*/
export const db = new PowerSyncDatabase({
// The schema you defined in the previous step
schema: AppSchema,
database: {
// Filename for the SQLite database — it's important to only instantiate one instance per file.
dbFilename: 'powersync.db'
}
});
```
```kotlin Kotlin theme={null}
import com.powersync.DatabaseDriverFactory
import com.powersync.PowerSyncDatabase
// Android
val driverFactory = DatabaseDriverFactory(this)
// iOS & Desktop
// val driverFactory = DatabaseDriverFactory()
val database = PowerSyncDatabase(
    factory = driverFactory,
    schema = AppSchema,
    dbFilename = "powersync.db"
)
```
```swift Swift theme={null}
import PowerSync
let db = PowerSyncDatabase(
schema: AppSchema,
dbFilename: "powersync.sqlite"
)
```
```dart Dart/Flutter theme={null}
import 'package:powersync/powersync.dart';
import 'package:path_provider/path_provider.dart';
import 'package:path/path.dart';
openDatabase() async {
final dir = await getApplicationSupportDirectory();
final path = join(dir.path, 'powersync-dart.db');
db = PowerSyncDatabase(schema: schema, path: path);
await db.initialize();
}
```
```csharp .NET - Common theme={null}
using PowerSync.Common.Client;
class Demo
{
static async Task Main()
{
var db = new PowerSyncDatabase(new PowerSyncDatabaseOptions
{
Database = new SQLOpenOptions { DbFilename = "tododemo.db" },
Schema = AppSchema.PowerSyncSchema,
});
await db.Init();
}
}
```
```csharp .NET - MAUI theme={null}
using PowerSync.Common.Client;
using PowerSync.Common.MDSQLite;
using PowerSync.Maui.SQLite;
class Demo
{
static async Task Main()
{
// Ensures the DB file is stored in a platform appropriate location
var dbPath = Path.Combine(FileSystem.AppDataDirectory, "maui-example.db");
var factory = new MAUISQLiteDBOpenFactory(new MDSQLiteOpenFactoryOptions()
{
DbFilename = dbPath
});
var db = new PowerSyncDatabase(new PowerSyncDatabaseOptions()
{
Database = factory, // Supply a factory
Schema = AppSchema.PowerSyncSchema,
});
await db.Init();
}
}
```
```rust Rust theme={null}
// 1. Process setup: register PowerSync extension early (e.g. in main()).
// 2. Open a connection pool, create env, then database. Spawn async tasks
// before connecting (see Connect step). Requires powersync with tokio feature.
use powersync::{ConnectionPool, PowerSyncDatabase, error::PowerSyncError};
use powersync::env::PowerSyncEnvironment;
use std::sync::Arc;
use http_client::IsahcClient;
fn open_pool() -> Result<ConnectionPool, PowerSyncError> {
ConnectionPool::open("powersync.db")
}
// This example shows the Tokio runtime. You must call
// `PowerSyncEnvironment::powersync_auto_extension()` before using the SDK and spawn async
// tasks with `db.async_tasks().spawn_with_tokio()` (or `spawn_with` for other runtimes)
// before connecting. See the Rust SDK reference for in-memory pools, smol, or custom runtimes.
#[tokio::main]
async fn main() {
PowerSyncEnvironment::powersync_auto_extension()
.expect("could not load PowerSync core extension");
let pool = open_pool().expect("open pool");
let client = Arc::new(IsahcClient::new());
let env = PowerSyncEnvironment::custom(
client.clone(),
pool,
Box::new(PowerSyncEnvironment::tokio_timer()),
);
let db = PowerSyncDatabase::new(env, app_schema());
db.async_tasks().spawn_with_tokio();
// Connect with a backend connector in the next step.
}
```
### Connect to PowerSync Service Instance
Connect your client-side PowerSync database to the PowerSync Service instance you created in [step 2](#2-set-up-powersync-service-instance) by defining a *backend connector* and calling `connect()`. The backend connector handles authentication and uploading mutations to your backend.
**Note**: This section assumes you want to use PowerSync to sync your backend source database with SQLite in your app. If you only want to use PowerSync to manage your local SQLite database without sync, instantiate the PowerSync database without calling `connect()` and refer to our [Local-Only](/client-sdks/advanced/local-only-usage) guide.
You don't have to worry about the *backend connector* implementation details right now — you can leave the boilerplate as-is and come back to it later.
For development, you can use the development token you generated in the [Generate a Development Token](#5-generate-a-development-token) step above. For production, you'll implement proper JWT authentication as we'll explain further below.
```typescript React Native (TS) theme={null}
import { AbstractPowerSyncDatabase, PowerSyncBackendConnector, PowerSyncCredentials } from '@powersync/react-native';
import { db } from './Database';
class Connector implements PowerSyncBackendConnector {
  async fetchCredentials(): Promise<PowerSyncCredentials> {
// for development: use development token
return {
endpoint: 'https://your-instance.powersync.com',
token: 'your-development-token-here'
};
}
async uploadData(database: AbstractPowerSyncDatabase) {
const transaction = await database.getNextCrudTransaction();
if (!transaction) return;
for (const op of transaction.crud) {
const record = { ...op.opData, id: op.id };
// upload to your backend API
}
await transaction.complete();
}
}
// connect the database to PowerSync Service
const connector = new Connector();
await db.connect(connector);
```
```typescript Web & Capacitor (TS) theme={null}
import { AbstractPowerSyncDatabase, PowerSyncBackendConnector, PowerSyncCredentials } from '@powersync/web';
import { db } from './Database';
class Connector implements PowerSyncBackendConnector {
  async fetchCredentials(): Promise<PowerSyncCredentials> {
// for development: use development token
return {
endpoint: 'https://your-instance.powersync.com',
token: 'your-development-token-here'
};
}
async uploadData(database: AbstractPowerSyncDatabase) {
const transaction = await database.getNextCrudTransaction();
if (!transaction) return;
for (const op of transaction.crud) {
const record = { ...op.opData, id: op.id };
// upload to your backend API
}
await transaction.complete();
}
}
// connect the database to PowerSync Service
const connector = new Connector();
await db.connect(connector);
```
```typescript Node.js (TS) theme={null}
import { PowerSyncBackendConnector } from '@powersync/node';
export class Connector implements PowerSyncBackendConnector {
async fetchCredentials() {
// for development: use development token
return {
endpoint: 'https://your-instance.powersync.com',
token: 'your-development-token-here'
};
}
async uploadData(database) {
// upload to your backend API
}
}
// connect the database to PowerSync Service
const connector = new Connector();
await db.connect(connector);
```
```kotlin Kotlin theme={null}
import com.powersync.PowerSyncCredentials
import com.powersync.PowerSyncDatabase
class MyConnector : PowerSyncBackendConnector {
override suspend fun fetchCredentials(): PowerSyncCredentials {
// for development: use development token
return PowerSyncCredentials(
endpoint = "https://your-instance.powersync.com",
token = "your-development-token-here"
)
}
override suspend fun uploadData(database: PowerSyncDatabase) {
val transaction = database.getNextCrudTransaction() ?: return
for (op in transaction.crud) {
val record = op.opData + ("id" to op.id)
// upload to your backend API
}
transaction.complete()
}
}
// connect the database to PowerSync Service
database.connect(MyConnector())
```
```swift Swift theme={null}
import PowerSync
class Connector: PowerSyncBackendConnector {
func fetchCredentials() async throws -> PowerSyncCredentials {
// for development: use development token
return PowerSyncCredentials(
endpoint: "https://your-instance.powersync.com",
token: "your-development-token-here"
)
}
func uploadData(database: PowerSyncDatabase) async throws {
guard let transaction = try await database.getNextCrudTransaction() else {
return
}
for op in transaction.crud {
var record = op.opData
record["id"] = op.id
// upload to your backend API
}
try await transaction.complete()
}
}
// connect the database to PowerSync Service
let connector = Connector()
await db.connect(connector: connector)
```
```dart Dart/Flutter theme={null}
import 'package:powersync/powersync.dart';
class Connector extends PowerSyncBackendConnector {
@override
  Future<PowerSyncCredentials> fetchCredentials() async {
return PowerSyncCredentials(
endpoint: 'https://your-instance.powersync.com',
token: 'your-development-token-here'
);
}
@override
  Future<void> uploadData(PowerSyncDatabase database) async {
final transaction = await database.getNextCrudTransaction();
if (transaction == null) return;
for (final op in transaction.crud) {
final record = {...op.opData, 'id': op.id};
// upload to your backend API
}
await transaction.complete();
}
}
// connect the database to PowerSync Service
final connector = Connector();
await db.connect(connector);
```
```csharp .NET theme={null}
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;
using PowerSync.Common.Client;
using PowerSync.Common.Client.Connection;
using PowerSync.Common.DB.Crud;
public class MyConnector : IPowerSyncBackendConnector
{
public MyConnector()
{
}
    public async Task<PowerSyncCredentials> FetchCredentials()
{
var powerSyncUrl = "https://your-instance.powersync.com";
var authToken = "your-development-token-here";
// Return credentials with PowerSync endpoint and JWT token
return new PowerSyncCredentials(powerSyncUrl, authToken);
}
public async Task UploadData(IPowerSyncDatabase database)
{
// upload to your backend API
}
}
// connect the database to PowerSync Service
await db.Connect(new MyConnector());
```
```rust Rust theme={null}
use async_trait::async_trait;
use powersync::{BackendConnector, PowerSyncCredentials, PowerSyncDatabase, SyncOptions};
use powersync::error::PowerSyncError;
use std::sync::Arc;
use http_client::IsahcClient;

struct MyBackendConnector {
    client: Arc<IsahcClient>,
    db: PowerSyncDatabase,
}
#[async_trait]
impl BackendConnector for MyBackendConnector {
    async fn fetch_credentials(&self) -> Result<PowerSyncCredentials, PowerSyncError> {
// for development: use development token
Ok(PowerSyncCredentials {
endpoint: "https://your-instance.powersync.com".to_string(),
token: "your-development-token-here".to_string(),
})
}
async fn upload_data(&self) -> Result<(), PowerSyncError> {
let mut local_writes = self.db.crud_transactions();
while let Some(tx) = local_writes.try_next().await? {
// upload to your backend API
tx.complete().await?;
}
Ok(())
}
}
// connect the database to PowerSync Service
db.connect(SyncOptions::new(MyBackendConnector {
client,
db: db.clone(),
}))
.await;
```
Once connected, you can read from and write to the client-side SQLite database. Changes from your source database will be automatically synced down into the SQLite database. For client-side mutations to be uploaded back to your source database, you need to complete the backend integration as we'll explain below.
### Read Data
Read data using SQL queries. The data comes from your client-side SQLite database:
```typescript React Native, Web, Node.js & Capacitor (TS) theme={null}
// Get all todos
const todos = await db.getAll('SELECT * FROM todos');
// Get a single todo
const todo = await db.get('SELECT * FROM todos WHERE id = ?', [todoId]);
// Watch for changes (reactive query)
const stream = db.watch('SELECT * FROM todos WHERE list_id = ?', [listId]);
for await (const todos of stream) {
// Update UI when data changes
console.log(todos);
}
// Note: The above example requires async iterator support in React Native.
// If you encounter issues, use one of these callback-based APIs instead:
// Option 1: Using onResult callback
// const abortController = new AbortController();
// db.watch(
// 'SELECT * FROM todos WHERE list_id = ?',
// [listId],
// {
// onResult: (todos) => {
// // Update UI when data changes
// console.log(todos);
// }
// },
// { signal: abortController.signal }
// );
// Option 2: Using the query builder API
// const query = db
// .query({
// sql: 'SELECT * FROM todos WHERE list_id = ?',
// parameters: [listId]
// })
// .watch();
// query.registerListener({
// onData: (todos) => {
// // Update UI when data changes
// console.log(todos);
// }
// });
```
```kotlin Kotlin theme={null}
// Get all todos
val todos = database.getAll("SELECT * FROM todos") { cursor ->
Todo.fromCursor(cursor)
}
// Get a single todo
val todo = database.get("SELECT * FROM todos WHERE id = ?", listOf(todoId)) { cursor ->
Todo.fromCursor(cursor)
}
// Watch for changes
database.watch("SELECT * FROM todos WHERE list_id = ?", listOf(listId))
.collect { todos ->
// Update UI when data changes
}
```
```swift Swift theme={null}
// Get all todos
let todos = try await db.getAll(
sql: "SELECT * FROM todos",
mapper: { cursor in
TodoContent(
description: try cursor.getString(name: "description")!,
completed: try cursor.getBooleanOptional(name: "completed")
)
}
)
// Watch for changes
for try await todos in db.watch(
sql: "SELECT * FROM todos WHERE list_id = ?",
parameters: [listId]
) {
// Update UI when data changes
}
```
```dart Dart/Flutter theme={null}
// Get all todos
final todos = await db.getAll('SELECT * FROM todos');
// Get a single todo
final todo = await db.get('SELECT * FROM todos WHERE id = ?', [todoId]);
// Watch for changes
db.watch('SELECT * FROM todos WHERE list_id = ?', [listId])
.listen((todos) {
// Update UI when data changes
});
```
```csharp .NET theme={null}
// Define a result type with properties matching schema columns (some columns omitted for brevity)
// public class ListResult { public string id; public string name; public string owner_id; public string created_at; ... }
// Use db.Get() to fetch a single row:
var list = await db.Get<ListResult>("SELECT * FROM lists WHERE id = ?", [listId]);
// Use db.GetAll() to fetch all rows:
var lists = await db.GetAll<ListResult>("SELECT * FROM lists");
// Watch for changes to query results
var query = await db.Watch("SELECT * FROM lists", null, new WatchHandler<ListResult>
{
OnResult = (results) => Console.WriteLine($"Lists updated: {results.Length} items"),
OnError = (error) => Console.WriteLine($"Error: {error.Message}")
});
// Call query.Dispose() to stop watching for updates
query.Dispose();
```
```rust Rust theme={null}
use rusqlite::params;
use futures::TryStreamExt; // for try_next() on the watch stream
// Get all todos
async fn get_all_todos(db: &PowerSyncDatabase) -> Result<(), PowerSyncError> {
let reader = db.reader().await?;
let mut stmt = reader.prepare("SELECT * FROM todos")?;
let mut rows = stmt.query(params![])?;
while let Some(row) = rows.next()? {
let id: String = row.get("id")?;
let description: String = row.get("description")?;
// use row data
}
Ok(())
}
// Get a single todo
async fn find_todo(db: &PowerSyncDatabase, todo_id: &str) -> Result<(), PowerSyncError> {
let reader = db.reader().await?;
let mut stmt = reader.prepare("SELECT * FROM todos WHERE id = ?")?;
let mut rows = stmt.query(params![todo_id])?;
while let Some(row) = rows.next()? {
let id: String = row.get("id")?;
let description: String = row.get("description")?;
println!("Found todo: {id}, {description}");
}
Ok(())
}
// Watch for changes
async fn watch_todos(db: &PowerSyncDatabase, list_id: &str) -> Result<(), PowerSyncError> {
let stream = db.watch_statement(
"SELECT * FROM todos WHERE list_id = ?".to_string(),
params![list_id],
|stmt, params| {
let mut rows = stmt.query(params)?;
let mut mapped = vec![];
while let Some(row) = rows.next()? {
mapped.push(() /* TODO: Read row into struct */);
}
Ok(mapped)
},
);
let mut stream = std::pin::pin!(stream);
while let Some(_event) = stream.try_next().await? {
// Update UI when data changes
}
Ok(())
}
```
**Learn More**
* [Reading Data](/client-sdks/reading-data) - Details on querying synced data
* [ORMs Overview](/client-sdks/orms/overview) - Using type-safe ORMs with PowerSync
* [Live Queries / Watch Queries](/client-sdks/watch-queries) - Building reactive UIs with automatic updates
### Write Data
Write data using SQL `INSERT`, `UPDATE`, or `DELETE` statements. PowerSync automatically queues these mutations and uploads them to your backend via the `uploadData()` function, once you've fully implemented your *backend connector* (as we'll talk about below).
```typescript React Native (TS), Web & Node.js theme={null}
// Insert a new todo
await db.execute(
'INSERT INTO todos (id, created_at, list_id, description) VALUES (uuid(), date(), ?, ?)',
[listId, 'Buy groceries']
);
// Update a todo
await db.execute(
'UPDATE todos SET completed = 1, completed_at = date() WHERE id = ?',
[todoId]
);
// Delete a todo
await db.execute('DELETE FROM todos WHERE id = ?', [todoId]);
```
```kotlin Kotlin theme={null}
// Insert a new todo
database.writeTransaction { ctx ->
ctx.execute(
sql = "INSERT INTO todos (id, created_at, list_id, description) VALUES (uuid(), date(), ?, ?)",
parameters = listOf(listId, "Buy groceries")
)
}
// Update a todo
database.execute(
sql = "UPDATE todos SET completed = 1, completed_at = date() WHERE id = ?",
parameters = listOf(todoId)
)
// Delete a todo
database.execute(
sql = "DELETE FROM todos WHERE id = ?",
parameters = listOf(todoId)
)
```
```swift Swift theme={null}
// Insert a new todo
try await db.execute(
sql: "INSERT INTO todos (id, created_at, list_id, description) VALUES (uuid(), date(), ?, ?)",
parameters: [listId, "Buy groceries"]
)
// Update a todo
try await db.execute(
sql: "UPDATE todos SET completed = 1, completed_at = date() WHERE id = ?",
parameters: [todoId]
)
// Delete a todo
try await db.execute(
sql: "DELETE FROM todos WHERE id = ?",
parameters: [todoId]
)
```
```dart Dart/Flutter theme={null}
// Insert a new todo
await db.execute(
'INSERT INTO todos (id, created_at, list_id, description) VALUES (uuid(), date(), ?, ?)',
[listId, 'Buy groceries']
);
// Update a todo
await db.execute(
'UPDATE todos SET completed = 1, completed_at = date() WHERE id = ?',
[todoId]
);
// Delete a todo
await db.execute('DELETE FROM todos WHERE id = ?', [todoId]);
```
```csharp .NET theme={null}
// Insert a new todo
await db.Execute(
"INSERT INTO todos (id, created_at, list_id, description) VALUES (uuid(), datetime(), ?, ?)",
[listId, "Buy groceries"]
);
// Update a todo
await db.Execute(
"UPDATE todos SET completed = 1, completed_at = datetime() WHERE id = ?",
[todoId]
);
// Delete a todo
await db.Execute("DELETE FROM todos WHERE id = ?", [todoId]);
```
```rust Rust theme={null}
use rusqlite::params;
// Insert a new todo
async fn insert_todo(
db: &PowerSyncDatabase,
list_id: &str,
description: &str,
) -> Result<(), PowerSyncError> {
let writer = db.writer().await?;
writer.execute(
"INSERT INTO todos (id, created_at, list_id, description) VALUES (uuid(), date(), ?, ?)",
params![list_id, description],
)?;
Ok(())
}
// Update a todo
async fn complete_todo(db: &PowerSyncDatabase, todo_id: &str) -> Result<(), PowerSyncError> {
let writer = db.writer().await?;
writer.execute(
"UPDATE todos SET completed = 1, completed_at = date() WHERE id = ?",
params![todo_id],
)?;
Ok(())
}
// Delete a todo
async fn delete_todo(db: &PowerSyncDatabase, todo_id: &str) -> Result<(), PowerSyncError> {
let writer = db.writer().await?;
writer.execute("DELETE FROM todos WHERE id = ?", params![todo_id])?;
Ok(())
}
```
**Best practice**: Use UUIDs when inserting new rows on the client side. UUIDs can be generated offline/locally, allowing for unique identification of records created in the client database before they are synced to the server. See [Client ID](/sync/advanced/client-id) for more details.
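The write examples above rely on a `uuid()` SQL function to generate the ID inside the statement. Equivalently, you can generate the UUID in application code before the insert, for example with the standard `crypto.randomUUID()` in JavaScript/TypeScript (the commented-out `db.execute` call is the same insert as above, shown for context):

```typescript
import { randomUUID } from 'node:crypto';

// Generate the row ID client-side so the record is uniquely identifiable
// even before it has ever been synced to the server.
const todoId = randomUUID();

// The same insert as above, with the ID bound as a parameter
// (db is your PowerSync database instance):
// await db.execute(
//   'INSERT INTO todos (id, created_at, list_id, description) VALUES (?, date(), ?, ?)',
//   [todoId, listId, 'Buy groceries']
// );
```

Generating the ID in application code is useful when you need to reference the new row immediately, e.g. to navigate to its detail screen right after the insert.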
**Learn More**
For more details, see the [Writing Data](/client-sdks/writing-data) page.
# Next Steps
For production deployments, you'll need to:
1. **[Implement Authentication](/configuration/auth/overview)**: Replace development tokens with proper JWT-based authentication. PowerSync supports various authentication providers including Supabase, Firebase Auth, Auth0, Clerk, and custom JWT implementations.
2. **Configure & Integrate Your Backend Application**: Set up your backend to handle mutations uploaded from clients.
* [Server-Side Setup](/configuration/app-backend/setup)
* [Client-Side Integration](/configuration/app-backend/client-side-integration)
### Additional Resources
* Learn more about [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview) for controlling partial syncing.
* Explore [Live Queries / Watch Queries](/client-sdks/watch-queries) for reactive UI updates.
* Check out [Example Projects](/intro/examples) for complete implementations.
* Review the [Client SDK References](/client-sdks/overview) for client-side platform-specific details.
# Questions?
Try "Ask AI" on this site which is trained on all our documentation, repositories and Discord discussions. Also join us on [our community Discord server](https://discord.gg/powersync) where you can browse topics from the PowerSync community and chat with our team.
# Understanding the SQLite Database
Source: https://docs.powersync.com/maintenance-ops/client-database-diagnostics
Guide for analyzing and understanding the local SQLite database
## Get the SQLite file
A SQLite database file can use any extension: `.db`, `.sqlite`, `.sqlite3`, etc. The extension doesn't affect functionality; all contain the same SQLite format. To ensure no recent changes are lost, do one of the following before pulling the file:
* Pull the associated [Write-Ahead Log (WAL)](https://www.sqlite.org/wal.html) file as well (it has the same name as the database, with a `-wal` suffix).
* Alternatively, run `PRAGMA wal_checkpoint(TRUNCATE);` on the database first, which merges the WAL file's changes into the main database file.
Ensure your emulator is running, then replace `com.package-name` with your application's package name and `your-db-name.sqlite` with your database file name.
This method may not work on Windows. Alternatively, you can copy the database to `/sdcard/` then use `adb pull`, though this may encounter permission issues on some devices.
```shell theme={null}
adb exec-out run-as com.package-name cat databases/your-db-name.sqlite > "your/local/path/your-db-name.sqlite"
adb exec-out run-as com.package-name cat databases/your-db-name.sqlite-wal > "your/local/path/your-db-name.sqlite-wal"
```
**Common database locations:**
* [React Native Quick SQLite](/client-sdks/reference/react-native-and-expo#react-native-quick-sqlite-2): `/data/data/com.package-name/files/`
* [OP-SQLite](/client-sdks/reference/react-native-and-expo#op-sqlite): `/data/data/com.package-name/databases/`
**Note:** If the database is in a different location, first find it with:
```shell theme={null}
adb shell run-as com.package-name find /data/data/com.package-name -name "your-db-name.sqlite"
```
Replace `your-db-name.sqlite` with your database file name and extension.
```shell theme={null}
find ~/Library/Developer/CoreSimulator/Devices -type f -name 'your-db-name.sqlite'
find ~/Library/Developer/CoreSimulator/Devices -type f -name 'your-db-name.sqlite-wal'
```
**Common database location:**
* App sandbox: `Library/Application Support/`
The Write-Ahead Log (WAL) file is not used in web environments, since browser-based SQLite implementations handle transactions differently.
Web applications use browser-based storage APIs. Database files are managed by the browser and not directly accessible via filesystem paths.
**Storage options:**
* **OPFS (Origin Private File System)**: Native filesystem API with better performance (Chrome 102+, Firefox 111+, Safari 17.2+)
* **IndexedDB**: A key-value storage API. Unlike OPFS, IndexedDB doesn't store complete database files - it stores data in a structured format that cannot be directly downloaded as a SQLite file.
Run the JavaScript code in your browser's console (F12 → Console) while on your application's page.
**Export database to your computer (OPFS only):**
```javascript theme={null}
// For OPFS
async function downloadDatabase() {
const root = await navigator.storage.getDirectory();
const fileHandle = await root.getFileHandle('your-db-name.sqlite');
const file = await fileHandle.getFile();
// Download the file
const url = URL.createObjectURL(file);
const a = document.createElement('a');
a.href = url;
a.download = 'your-db-name.sqlite';
a.click();
URL.revokeObjectURL(url);
}
downloadDatabase();
```
**Browser DevTools (inspect only):**
* Chrome/Edge: `F12` → Application → Storage → IndexedDB or OPFS
* Firefox: `F12` → Storage → IndexedDB
* Safari: Develop → Show Web Inspector → Storage
## Inspecting the SQLite file
### 1. Open your SQLite file
Use the `sqlite3` command-line tool or a GUI tool like [DB Browser for SQLite](https://sqlitebrowser.org/) to open your database file:
```shell theme={null}
sqlite3 your-db-name.sqlite
```
### 2. Merge the WAL file
Temporary changes are stored in a separate [Write-Ahead Log (WAL)](https://www.sqlite.org/wal.html) `.wal` file. To measure the database size accurately, merge these changes into the main database:
```sql theme={null}
PRAGMA wal_checkpoint(TRUNCATE);
```
### 3. Get storage statistics
Query the built-in `dbstat` virtual table to see how much space each table uses on disk:
```sql theme={null}
SELECT name, pgsize AS storage_size, payload AS data_size
FROM dbstat
WHERE aggregate = true;
```
This returns:
* `name`: Table name
* `storage_size`: Total storage used on disk (in bytes, including SQLite overhead)
* `data_size`: Actual payload size (in bytes)
The `dbstat` table is automatically available in SQLite and provides low-level information about physical storage. Values represent on-disk usage including SQLite's internal structures (page headers, B-trees, indexes, free space), which is why they're larger than your logical data size.
## Understanding the size breakdown
PowerSync databases contain more data than just your application tables to support the sync functionality:
1. **Application data**: Your synced data in `ps_data__` tables
2. **Operation log (`ps_oplog`)**: A complete copy of all synced data required for offline conflict resolution and sync
3. **Indexes**: For efficient queries and lookups
4. **PowerSync metadata**: System tables and views for managing sync state (see [Client Architecture](https://docs.powersync.com/architecture/client-architecture#schema))
5. **SQLite overhead**: Page structure, alignment, fragmentation, and internal bookkeeping
The difference between `storage_size` and `data_size` in the `dbstat` results shows SQLite's storage overhead. The `ps_oplog` table will typically be one of the largest tables since it maintains a full copy of your synced data.
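From the `dbstat` results, the per-table overhead fraction is simply `(storage_size - data_size) / storage_size`. A small TypeScript helper for summarizing the query output (the row shape mirrors the column aliases in the query above):

```typescript
interface DbStatRow {
  name: string;
  storage_size: number; // bytes on disk, including SQLite overhead
  data_size: number;    // bytes of actual row payload
}

// Return tables sorted by on-disk size, with overhead as a percentage.
function summarizeStorage(rows: DbStatRow[]) {
  return rows
    .map((r) => ({
      name: r.name,
      storageKb: r.storage_size / 1024,
      overheadPct:
        r.storage_size === 0 ? 0 : ((r.storage_size - r.data_size) / r.storage_size) * 100,
    }))
    .sort((a, b) => b.storageKb - a.storageKb);
}
```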
To see just the JSON data size in `ps_oplog` (excluding SQLite overhead), run:
```sql theme={null}
SELECT sum(length(data)) / 1024.0 / 1024.0 AS size_mb FROM ps_oplog;
```
This measures only the raw JSON payloads, which will be smaller than the on-disk storage reported by `dbstat`.
## Reducing SQLite file size
Both of the methods below for reducing the size of the SQLite file can be executed on the client using `powerSync.execute()`.
Consider these optimizations if your app's database is growing larger than expected or you're working with high data volumes in production.
### VACUUM Command
The `VACUUM` command reclaims unused space in the database:
```sql theme={null}
VACUUM;
```
The `VACUUM` command has important constraints:
* **Disk space**: Requires enough free disk space to create a temporary copy of the entire database
* **Database locking**: Locks the database during execution, which may affect app responsiveness
* Ensure sufficient space is available and run during low-activity periods
### Increase page size
Increasing the page size from the default **4KB** (4096 bytes) to **16KB** (16384 bytes) can reduce storage overhead significantly.
**IndexedDB Compatibility Issue**: Changing the page size is *not* supported when using `IndexedDB` on web platforms and could corrupt the database. Only use this optimization for native SQLite implementations.
**Additional caveats:**
* May increase overhead for many small writes.
* Best suited for apps with larger data records
The page size must be set before any tables are created and before running `VACUUM`. It should be one of the first **PRAGMA** statements after opening a new database:
```sql theme={null}
PRAGMA page_size = 16384;
```
If you're changing the page size on an existing database, you must run `VACUUM` immediately after setting it to apply the change. For optimal results, set the page size when first creating the database.
# Compacting Buckets
Source: https://docs.powersync.com/maintenance-ops/compacting-buckets
[Buckets](/architecture/powersync-service#bucket-system) store data as a history of changes, not only the current state.
This allows clients to download incremental changes efficiently — only changed rows have to be downloaded. However, over time this history can grow large, causing new clients to potentially take a long time to download the initial set of data. To handle this, we compact the history of each bucket.
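A client that last synced up to operation `n` therefore only needs to download operations with an `op_id` greater than `n`. As a toy TypeScript illustration of that filter (the operation shape is simplified for the example):

```typescript
type BucketOp = { opId: number; type: 'PUT' | 'REMOVE' | 'MOVE' | 'CLEAR'; rowId?: string };

// Operations a client must download, given the last op_id it has already applied.
function opsToDownload(bucket: BucketOp[], lastSyncedOpId: number): BucketOp[] {
  return bucket.filter((op) => op.opId > lastSyncedOpId);
}
```

A new client (with `lastSyncedOpId` of 0) downloads the whole history, which is exactly why the history length matters and why compacting helps.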
## Compacting
### PowerSync Cloud
The cloud-hosted version of PowerSync will automatically compact all buckets once per day.
Support to manually trigger compacting is available in the [PowerSync Dashboard](https://dashboard.powersync.com/): Select your project and instance, go to the **Settings** view, and click the **Compact** button in the "Compact operation history" section. Support to trigger compacting from the [CLI](/tools/cli) will be added soon.
[Defragmenting](/maintenance-ops/compacting-buckets#defragmenting) may still be required.
### Self-hosted PowerSync
For self-hosted setups (PowerSync Open Edition & PowerSync Enterprise Self-Hosted Edition), the `compact` command in the Docker image can be used to compact all buckets. This can be run manually, or on a regular schedule using Kubernetes [CronJob](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/) or similar scheduling functionality.
[Defragmenting](/maintenance-ops/compacting-buckets#defragmenting) may still be required.
## Background
### Bucket operations
Each bucket is an ordered list of `PUT`, `REMOVE`, `MOVE` and `CLEAR` operations. In normal operation, only `PUT` and `REMOVE` operations are created.
A simplified view of a bucket may look like this:
```bash theme={null}
(1, PUT, row1, <data>)
(2, PUT, row2, <data>)
(3, PUT, row1, <data>)
(4, REMOVE, row2)
```
### Compacting step 1 - MOVE operations
The first step of compacting involves `MOVE` operations. A `MOVE` operation indicates only that the operation is no longer needed, since a later `PUT` or `REMOVE` operation replaces the row.
After this compact step, the bucket may look like this:
```bash theme={null}
(1, MOVE)
(2, MOVE)
(3, PUT, row1, <data>)
(4, REMOVE, row2)
```
This does not reduce the number of operations to download, but can reduce the amount of data to download.
### Compacting step 2 - CLEAR operations
The second step of compacting takes a sequence of `CLEAR`, `MOVE` and/or `REMOVE` operations at the start of the bucket, and replaces them all with a single `CLEAR` operation. The `CLEAR` operation indicates to the client that "this is the start of the bucket, delete any prior operations that you may have".
After this compacting step, the bucket may look like this:
```bash theme={null}
(2, CLEAR)
(3, PUT, row1, <data>)
(4, REMOVE, row2)
```
This reduces the number of operations for new clients to download in some cases.
The `CLEAR` operation can only remove operations at the start of the bucket, not in the middle of the bucket, which leads us to the next step.
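The two steps can be modeled as pure functions over the operation list. The TypeScript sketch below is a simplified model for intuition, not the PowerSync Service implementation: step 1 turns superseded `PUT`/`REMOVE` operations into `MOVE`s, and step 2 collapses the leading run of non-`PUT` operations into a single `CLEAR`.

```typescript
type Op = { opId: number; type: 'PUT' | 'REMOVE' | 'MOVE' | 'CLEAR'; rowId?: string };

// Step 1: an op is superseded if a later PUT or REMOVE targets the same row.
function moveSuperseded(bucket: Op[]): Op[] {
  return bucket.map((op, i) => {
    const superseded =
      (op.type === 'PUT' || op.type === 'REMOVE') &&
      bucket
        .slice(i + 1)
        .some((later) => (later.type === 'PUT' || later.type === 'REMOVE') && later.rowId === op.rowId);
    return superseded ? { opId: op.opId, type: 'MOVE' as const } : op;
  });
}

// Step 2: collapse the leading run of CLEAR/MOVE/REMOVE ops into one CLEAR,
// keeping the op_id of the last operation in that run.
function clearPrefix(bucket: Op[]): Op[] {
  let n = 0;
  while (n < bucket.length && bucket[n].type !== 'PUT') n++;
  if (n <= 1) return bucket;
  return [{ opId: bucket[n - 1].opId, type: 'CLEAR' }, ...bucket.slice(n)];
}

function compact(bucket: Op[]): Op[] {
  return clearPrefix(moveSuperseded(bucket));
}
```

Running `compact` on the four-operation bucket from the first example yields the compacted bucket shown above: a `CLEAR` at operation 2, followed by the surviving `PUT` and `REMOVE`.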
### Defragmenting
There are cases the above compacting steps cannot handle efficiently. The key factor is that the oldest PUT operation in a bucket determines how much of the history can be compacted. This means:
1. If a row has never been updated since its initial creation, its original PUT operation remains at the start of the bucket
2. All operations that come after this oldest PUT cannot be fully compacted
3. This is particularly problematic when you have:
* A small number of rarely-changed rows in the same bucket as frequently-updated rows
* The rarely-changed rows' original PUT operations "block" compacting of the entire bucket
* The frequently-updated rows continue to accumulate operations that can't be fully compacted
For example, imagine this sequence of statements:
```sql theme={null}
-- Insert a single row that rarely changes
INSERT INTO lists(name) VALUES('a');
-- Insert 50k rows that change frequently
INSERT INTO lists (name) SELECT 'b' FROM generate_series(1, 50000);
-- Delete those 50k rows, but keep 'a'
DELETE FROM lists WHERE name = 'b';
```
After compacting, the bucket looks like this:
```bash theme={null}
(1, PUT, row_1, <data>) -- This original PUT blocks further compacting
(2, MOVE)
(3, MOVE)
...
(50001, MOVE)
(50002, REMOVE, row2)
(50003, REMOVE, row3)
...
(100001, REMOVE, row50000)
```
This is inefficient because:
1. The original PUT operation for row 'a' remains at the start
2. All subsequent operations can't be fully compacted
3. We end up with over 100k operations for what should be a simple bucket
To handle this case, we "defragment" the bucket by updating existing rows in the source database. This creates new PUT operations at the end of the bucket, allowing the compact steps to efficiently compact the entire history:
```sql theme={null}
-- Touch all rows to create new PUT operations
UPDATE lists SET name = name;
-- OR touch specific rows at the start of the bucket
UPDATE lists SET name = name WHERE name = 'a';
```
After defragmenting and compacting, the bucket looks like this:
```bash theme={null}
(100001, CLEAR)
(100002, PUT, row_1, <data>)
```
The bucket is now back to two operations, allowing new clients to sync efficiently.
Note: All rows in the bucket must be updated for this to be effective. If some rows are never updated, they will continue to block compacting of the entire bucket.
**Bucket Design Tip**: If you have a mix of frequently-updated and rarely-changed rows, consider splitting them into separate buckets. This prevents the rarely-changed rows from blocking compacting of the frequently-updated ones.
### When to Defragment
You should consider defragmenting your buckets when:
1. **High Operations-to-Rows Ratio**: If you notice that the number of operations significantly exceeds the number of rows in a bucket. You can inspect this using the [Sync Diagnostics Client](https://github.com/powersync-ja/powersync-js/tree/main/tools/diagnostics-app).
2. **Frequent Updates**: Tables that are frequently updated (e.g., status fields, counters, or audit logs)
3. **Large Data Churn**: Tables where you frequently insert and delete many rows
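For point 1, the ratio is easy to check once you have per-bucket counts (for example from the diagnostics app). A hypothetical TypeScript helper; the field names and the default threshold are illustrative, not an official cutoff:

```typescript
interface BucketStats {
  bucket: string;
  totalOperations: number;
  totalRows: number;
}

// Flag buckets whose operation history has grown well beyond the row count.
// The threshold is an illustrative default; tune it for your workload.
function bucketsNeedingDefrag(stats: BucketStats[], threshold = 3): string[] {
  return stats
    .filter((s) => s.totalRows > 0 && s.totalOperations / s.totalRows > threshold)
    .map((s) => s.bucket);
}
```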
### Defragmenting Strategies
There are manual and automated approaches to defragmenting:
1. **Manual Defragmentation**
* Use the [PowerSync Dashboard](https://dashboard.powersync.com/) to manually trigger defragmentation: Select your project and instance, go to the **Settings** view, and click the **Defragment** button in the "Compact operation history" section
* Best for one-time cleanup or after major data changes
2. **Scheduled Defragmentation**
* Set up a cron job to regularly update rows
* Recommended for frequently updated tables or tables with large churn
* Example using pg\_cron:
```sql theme={null}
-- Daily defragmentation for high-churn tables
UPDATE audit_logs SET last_updated = now()
WHERE last_updated < now() - interval '1 day';
-- Weekly defragmentation for other tables
UPDATE users SET last_updated = now()
WHERE last_updated < now() - interval '1 week';
```
* This will cause clients to re-sync each updated row, while preventing the number of operations from growing indefinitely. Depending on how often rows in the bucket are modified, the interval can be increased or decreased.
### Defragmenting Trade-offs
Defragmenting + compacting as described above can significantly reduce the number of operations in a bucket, at the cost of existing clients needing to re-sync that data. When and how to do this depends on the specific use-case and data update patterns.
Key considerations:
1. **Frequency**: More frequent defragmentation means fewer operations per sync but more frequent re-syncs
2. **Scope**: Defragmenting all rows at once is more efficient but causes a larger sync cycle
3. **Monitoring**: Use the [Sync Diagnostics Client](https://github.com/powersync-ja/powersync-js/tree/main/tools/diagnostics-app) to track operations-to-rows ratio
## Sync Streams Deployments
Whenever modifications to [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) are deployed, all buckets are re-created from scratch. This has a similar effect to fully defragmenting and compacting all buckets. This was recommended as a workaround before explicit compacting became available ([released July 26, 2024](https://releases.powersync.com/announcements/bucket-compacting)).
Soon, we will use [incremental sync rule reprocessing](https://github.com/orgs/powersync-ja/discussions/349) to process changed definitions only.
## Technical details
See the [documentation](https://github.com/powersync-ja/powersync-service/blob/main/docs/compacting-operations.md) in the `powersync-service` repo for more technical details on compacting.
# Deploying Schema Changes
Source: https://docs.powersync.com/maintenance-ops/deploying-schema-changes
The deploy process for schema or [Sync Streams](/sync/streams/overview) / [Sync Rules](/sync/rules/overview) updates depends on the type of change.
See the appropriate subsections below for details on the various scenarios.
Example: Add a new table that a new version of the app depends on, or add a new column to an existing table.
1. Apply source schema changes (i.e. in the Postgres database), often as a pre-deploy step of step 2
2. Deploy backend application changes
3. Deploy [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview) changes
4. Wait for reprocessing to complete
5. Publish the app (may be deployed with delayed publishing at any prior point)
The approach here is to have the Sync Rules handle both the old and the new table name during the migration period.
Maintenance mode is used on the backend here for simplicity. Other processes may be used to avoid maintenance mode, but that doesn't affect the PowerSync system.
1. Deploy Sync Rules containing both the old and the new table name, with a mapping (alias) from the new name to the old one (so that both end up with the old name on the client). This will cause validation errors because of a missing table, but PowerSync will still allow the deploy.
2. Wait for Sync Rule reprocessing to complete.
3. Put the backend in maintenance mode.
1. i.e. The backend needs to be made unavailable to avoid breaking things during migrations.
4. Apply the source schema changes (i.e. in Postgres database)
5. Deploy backend changes and re-activate backend.
6. Remove the old table from Sync Rules, then deploy and activate the Sync Rules.
Pass in a "`schema_version`" or similar parameter from the client, and use this in Sync Rules to use either the old or new table name in the data queries.
See this section for details:
[Multiple Client Versions](/sync/advanced/multiple-client-versions)
Treat this as two separate steps and follow the process for both **Renaming a Table on the Server** and **Renaming a Table on the Client**.
Use the `ifnull` function in Sync Rules to output whichever column is available. This would handle both the old and new schema versions:
```sql theme={null}
SELECT IFNULL(description_new, description_old) AS description FROM assets
```
This may produce a validation error because of a missing column, but PowerSync will still allow the deploy.
Once the changes have been deployed and replicated, the old reference can be removed from the Sync Rules:
```sql theme={null}
SELECT description_new AS description FROM assets
```
Use the same approach as for renaming tables.
If the column types have the same representation in Sync Rules, the type can be changed freely without issues (for example changing between `VARCHAR` and `TEXT`).
Other type changes, for example changing between `INT` and `TEXT`, require more care.
To change the type, it is usually best to create a new column with the new type, then remove the old column once nothing uses it anymore.
When changing the type of a column without renaming, use a column type mapping to still use the old type for existing client applications.
# Implementing Schema Changes
Source: https://docs.powersync.com/maintenance-ops/implementing-schema-changes
## Introduction
The [PowerSync protocol](/architecture/powersync-protocol) is schemaless, and not directly affected by schema changes.
Replicating data from the source database to [buckets](/architecture/powersync-service#bucket-system) may be affected by server-side changes to the schema (in the case of Postgres), and may need [reprocessing](/maintenance-ops/compacting-buckets) in some cases.
The [client-side schema](/intro/setup-guide#define-your-client-side-schema) is just a view on top of the schemaless data. Updating this client-side schema is immediate when the new version of the app runs, with no client-side migrations required.
The developer is responsible for keeping client-side schema changes backwards-compatible with older versions of client apps. PowerSync has some functionality to assist with this:
1. [Different stream queries](/sync/advanced/multiple-client-versions) can be applied based on [connection parameters](/sync/streams/parameters#connection-parameters) such as client version. (In Sync Rules, this uses [client parameters](/sync/rules/client-parameters).)
2. Stream queries can apply simple data transformations to keep data in a format compatible with older clients, for example by aliasing or casting columns. (In Sync Rules, this is done via [data query expressions](/sync/rules/data-queries).)
## Client-Side Impact of Schema and Sync Rule Changes
As mentioned above, the PowerSync system itself is schemaless — the client syncs any data as received, in JSON format, regardless of the data model on the client.
The schema as supplied on the client is only a view on top of the schemaless data.
1. If tables/collections not described by the client-side schema are synced, the data is stored internally, but is not accessible.
2. The same applies to columns/fields not described by the client-side schema.
3. When there is a type mismatch, SQLite's `CAST` functionality is used to cast to the type described by the schema.
1. Data is internally stored as JSON.
2. SQLite's `CAST` is used to cast values to `TEXT`, `INTEGER` or `REAL`.
3. Casting between types should never error, but it may not fully represent the original data. For example, casting an arbitrary string to `INTEGER` will likely result in a "0" value.
4. Full rules for casting between types are described [in the SQLite documentation here](https://www.sqlite.org/lang_expr.html#castexpr).
4. Removing a table/collection is handled on the client as if the table exists with no data.
5. Removing a column/field is handled on the client as if the values are `undefined`.
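The casting behavior in point 3 can be surprising, so it is worth modeling. The TypeScript function below approximates SQLite's text-to-`INTEGER` cast: take the longest leading integer prefix of the string, and fall back to 0 when there is none. This is a simplification; the full rules are in the SQLite documentation linked above.

```typescript
// Approximates SQLite's CAST(text AS INTEGER): extract the longest leading
// integer prefix of the trimmed string; if there is no such prefix, return 0.
function castTextToInteger(value: string): number {
  const match = value.trim().match(/^[+-]?\d+/);
  return match ? parseInt(match[0], 10) : 0;
}
```

So a synced field holding `"abc"` becomes `0` if the client schema declares the column as `INTEGER`, without any error being raised.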
Nothing in PowerSync will fail hard if there are incompatible schema changes. But depending on how the app uses the data, app logic may break. For example, removing a table/collection that the app actively uses may break workflows in the app.
To avoid certain types of breaking changes on older clients, data transformations may be used — via column aliasing/casting in [Sync Streams](/sync/streams/queries#selecting-columns), or [data query expressions](/sync/rules/data-queries) in Sync Rules.
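For example, if a column is renamed on the server, a query can alias it back to the old name so that older clients keep working. The table and column names below are hypothetical; see the linked Sync Streams and Sync Rules docs for the exact query syntax:

```sql theme={null}
-- Hypothetical: "description" was renamed to "details" in the source database,
-- but older clients still expect a "description" column.
SELECT id, details AS description FROM todos
```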
## Postgres Specifics
PowerSync keeps the [buckets](/architecture/powersync-service#bucket-system) up to date with any incremental data changes, as recorded in the Postgres [WAL](https://www.postgresql.org/docs/8.0/wal.html) / received in the logical replication stream. These changes are the result of DML (Data Manipulation Language) queries.
However, this does not include DDL (Data Definition Language), which includes:
1. Creating, dropping or renaming tables.
2. Changing replica identity of a table.
3. Adding, dropping or renaming columns.
4. Changing the type of a column.
### Postgres schema changes affecting Sync Streams
#### DROP table
Dropping a table is not directly detected by PowerSync, and previous data may be preserved. To make sure the data is removed, `TRUNCATE` the table before dropping, or remove the table from your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)).
#### CREATE table
The new table is detected as soon as data is inserted.
#### DROP and re-CREATE table
This is a special case of combining `DROP` and `CREATE`. If a dropped table is created again, *and* data is inserted into the new table, the schema change is detected by PowerSync. PowerSync will delete the old data in this case, as if `TRUNCATE` was called before dropping.
#### RENAME table
A renamed table is handled similarly to dropping the old table, and creating a new table with the new name.
The rename is only detected when data is inserted, updated or deleted in the new table. At this point, PowerSync effectively does a `TRUNCATE` of the old table, and replicates the new table.
This may be a slow operation if the table is large, and all other replication will be blocked until the new table is replicated.
#### Change REPLICA IDENTITY
The replica identity of a table is considered changed if either:
1. The type of replica identity changes (`DEFAULT`, `INDEX`, `FULL`, `NOTHING`).
2. The name or type of any column that is part of the replica identity changes.
The latter can happen if:
1. Using `REPLICA IDENTITY FULL`, and any column is added, removed, renamed, or the type changed.
2. Using `REPLICA IDENTITY DEFAULT`, and the type of any column in the primary key is changed.
3. Using `REPLICA IDENTITY INDEX`, and the type of any column in the replica index is changed.
4. The primary key or replica index is removed or changed.
When the replica identity changes, the entire table is re-replicated again. This may be a slow operation if the table is large, and all other replication will be blocked until the table is replicated again.
Sync Streams / Sync Rules affected by schema changes will fail "soft" — an alert would be generated, but the system will continue processing changes.
#### Column changes
Column changes such as adding, dropping, renaming columns, or changing column types, are not automatically detected by PowerSync (unless it affects the replica identity as described above).
Adding a column with a `NULL` default value will generally not cause issues. Existing records will have a missing value instead of a `NULL` value, but these are generally treated the same on the client.
Adding a column with a different default value, whether it's a static or computed value, will not have this default automatically replicated for existing rows. To propagate this value, make an update to every existing row.
Removing a column will not have the values automatically removed for existing rows on PowerSync. To propagate the change, make an update to every existing row.
Changing a column type, and/or changing the value of a column using an `ALTER TABLE` statement, will not be automatically replicated to PowerSync. In some cases, the change will have no effect on PowerSync (for example, changing between `VARCHAR` and `TEXT` types). When the values are expected to change, make an update to every existing row to propagate the changes.
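A self-assignment update is one way to touch every row without changing values the application cares about; each updated row is then re-replicated with its current state. The table and column names here are hypothetical:

```sql theme={null}
-- Hypothetical names. After e.g. adding a column with a non-NULL default,
-- touch every row so the new values are picked up via logical replication:
UPDATE todos SET priority = priority;
```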
#### Publication changes
A table is not replicated unless it is part of the [powersync publication](/configuration/source-db/setup).
If a table is added to the publication, it is treated the same as a new table, and any existing data is replicated. This may be a slow operation if the table is large, and all other replication will be blocked until the new table is replicated.
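For example, adding an existing table to the publication (assuming the publication is named `powersync`, per the setup guide linked above) starts replication of that table:

```sql theme={null}
-- Adds the (hypothetical) "lists" table to the publication;
-- its existing rows are then replicated, which may take a while for large tables.
ALTER PUBLICATION powersync ADD TABLE lists;
```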
There are additional changes that can be made to a table in a publication:
1. Which operations are replicated (insert, update, delete and truncate).
2. Which rows are replicated (row filters).
Those changes are not automatically picked up by PowerSync during replication, and can cause PowerSync to miss changes if the changes are filtered out. PowerSync will not automatically recover the data when for example removing a row filter. Use these with caution.
## MongoDB Specifics
Since MongoDB is schemaless, schema changes generally do not impact PowerSync. However, adding, dropping, and renaming collections require special consideration.
### Adding Collections
Sync Streams (or legacy Sync Rules) can include collections that do not yet exist in the source database. These collections will be created in MongoDB when data is first inserted. PowerSync will begin replicating changes as they occur in the source database.
### Dropping Collections
Due to a limitation in the replication process, dropping a collection does not immediately propagate to synced clients. To ensure the change is reflected, perform an additional `insert`, `update`, `replace`, or `delete` operation in any collection within a synced database.
### Renaming Collections
Renaming a synced collection to a name that *is not included* in Sync Streams (or legacy Sync Rules) has the same effect as dropping the collection.
Renaming an unsynced collection to a name that is included in your [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview) triggers an initial snapshot replication. The time required for this process depends on the collection size.
Circular renames (e.g., renaming `todos` → `todos_old` → `todos`) are not directly supported. To reprocess the database after such changes, a [Sync Streams](/sync/streams/overview) update (or [Sync Rules](/sync/rules/overview)) must be deployed.
## MySQL (Beta) Specifics
PowerSync keeps the [buckets](/architecture/powersync-service#bucket-system) up to date with any incremental data changes as recorded in the MySQL [binary log](https://dev.mysql.com/doc/refman/8.4/en/binary-log.html).
The binary log also provides DDL (Data Definition Language) query updates, which include:
1. Creating, dropping or renaming tables.
2. Truncating tables. (Not technically a schema change, but truncate statements appear in the query updates regardless.)
3. Changing replica identity of a table. (Creation, deletion or modification of primary keys, unique indexes, etc.)
4. Adding, dropping, renaming or changing the types of columns.
For MySQL, PowerSync detects schema changes by parsing the DDL queries in the binary log. It may not always be possible to parse these queries correctly, especially if they are complex or use non-standard syntax.
In such cases, PowerSync ignores the schema change, but logs a warning with the schema change query. If required, the schema change can then be handled manually by redeploying your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)), which triggers a re-replication.
### MySQL schema changes affecting Sync Streams
#### DROP table
PowerSync will detect when a table is dropped, and automatically remove the data from the buckets.
#### CREATE table
Table creation is detected and handled the first time row events for the new table appear on the binary log.
#### TRUNCATE table
PowerSync will detect truncate statements in the binary log, and consequently remove all data from the buckets for that table.
#### RENAME table
A renamed table is handled similarly to dropping the old table, and then creating a new table with existing data under the new name.
This may be a slow operation if the table is large, since the "new" table has to be re-replicated. Replication will be blocked until the new table is replicated.
#### Change REPLICA IDENTITY
The replica identity of a table is considered to be changed if either:
1. The type of replica identity changes (`DEFAULT`, `INDEX`, `FULL`, `NOTHING`).
2. The name or type of columns which form part of the replica identity changes.
The latter can happen if:
1. Using `REPLICA IDENTITY FULL`, and any column is added, removed, renamed, or the type changed.
2. Using `REPLICA IDENTITY DEFAULT`, and the type of any column in the primary key is changed.
3. Using `REPLICA IDENTITY INDEX`, and the type of any column in the replica index is changed.
4. The primary key or replica index is removed or changed.
When the replica identity changes, the entire table is replicated again. This may be a slow operation if the table is large, and all other replication will be blocked until the table is replicated again.
Sync Streams / Sync Rules affected by schema changes will fail "soft" — an alert would be generated, but the system will continue processing changes.
#### Column changes
Column changes such as adding, dropping or renaming columns, or changing column types, are detected by PowerSync but will generally not result in re-replication (unless the replica identity is affected as described above).
Adding a column with a `NULL` default value will generally not cause issues. Existing records will have a missing value instead of a `NULL` value, but these are generally treated the same on the client.
Adding a column with a different default value, whether it's a static or computed value, will not have this default automatically replicated for existing rows. To propagate this value, make an update to every existing row.
Removing a column will not have the values automatically removed for existing rows on PowerSync. To propagate the change, make an update to every existing row.
Changing a column type, and/or changing the default value of a column using an `ALTER TABLE` statement, will not be automatically replicated to PowerSync.
In some cases, the change will have no effect on PowerSync (for example, changing between `VARCHAR` and `TEXT` types). When the values are expected to change, make an update to every existing row to propagate the changes.
## See Also
* [JSON, Arrays and Custom Types](/client-sdks/advanced/custom-types-arrays-and-json)
* [Deploying Schema Changes](/maintenance-ops/deploying-schema-changes)
# Monitoring and Alerting
Source: https://docs.powersync.com/maintenance-ops/monitoring-and-alerting
Overview of monitoring and alerting functionality for PowerSync Cloud instances
You can monitor activity and alert on issues and usage for your PowerSync Cloud instance(s):
* **Monitor Usage**: View time-series and aggregated usage data with [Usage Metrics](#usage-metrics)
* **Monitor Service and Replication Activity**: Track your PowerSync Service and replication logs with [Instance Logs](#instance-logs)
* **Configure Alerts**: Set up alerts for connection or replication issues or usage activity \*
* Includes [Issue Alerts](#issue-alerts) and/or [Usage Alerts](#usage-alerts)
* **Alert Notifications**: Set up [Email notifications](#email-notifications) or [Webhooks](#webhooks) to report events (like issue or usage alerts) to external systems \*
These features can assist with troubleshooting common issues (e.g. replication errors due to a logical replication slot problem), investigating usage spikes, or being notified when usage exceeds a specific threshold.
\* The availability of these features depends on your PowerSync Cloud plan. See the table below for a summary. More details are provided further below.
### Summary of Feature Availability (by PowerSync Cloud Plan)
Monitoring and alerting functionality varies by [PowerSync Cloud plan](https://www.powersync.com/pricing). This table provides a summary of availability:
| Feature | Free | Pro | Team & Enterprise |
| ------------------------ | ------------- | ------------------------ | ------------------------ |
| **Usage Metrics** | Available | Available | Available |
| **Instance Logs** | Available | Available | Available |
| **Log retention period** | 24 hours | 7 days | 30 days |
| **Issue Alerts** | Available | Available | Available |
| **Usage Alerts** | Not available | Not available | Available |
| **Alert Notifications** | Email | Email, Webhooks | Email, Webhooks |
**Self-hosting PowerSync**
Similar monitoring and alerting functionality is planned for PowerSync Open Edition users and Enterprise Self-Hosted customers.
For Open Edition users, alerting APIs are currently available in an early access release. For Enterprise Self-Hosted customers we are planning a full alerting service that includes customizable alerts and webhook integrations.
Until this is available, please chat to us on our [Discord](https://discord.gg/powersync) to discuss your use case or any questions.
## Usage Metrics
View time-series and aggregated usage data for your PowerSync instance(s), including storage size, concurrent connections, and synced data and operations. This data lets you monitor activity, spot patterns or spikes, and budget while tracking your position within our [Cloud pricing plans](https://www.powersync.com/pricing).
### View Usage Metrics
Access usage metrics in the [PowerSync Dashboard](https://dashboard.powersync.com/). Select your project and instance and go to the **Metrics** view:
You have the following options:
* **Filter options**: Filter data by time range.
* **Granularity**: View data at a daily, hourly or per-minute granularity.
* **Aggregates**: View and copy aggregates for each usage metric.
This usage data is also available programmatically via APIs in an early access release. Chat to us on our [Discord](https://discord.gg/powersync) if you require details.
## Instance Logs
You can review logs for your PowerSync instance(s) to troubleshoot replication or sync service issues. Logs capture activity from the PowerSync Service and Replicator processes.
* **Service/API logs**: Reflect sync processes from the PowerSync Service to clients.
* **Replicator logs**: Reflect replication activity from your source database to the PowerSync Service.
**Availability**
The log retention period varies by plan:
* **Free** plan: Logs from the last 24 hours
* **Pro** plan: Logs from the last 7 days
* **Team & Enterprise** plans: Logs from the last 30 days
### View Instance Logs
Access instance logs through the [PowerSync Dashboard](https://dashboard.powersync.com/). Select your project and instance and go to the **Logs** view:
You can manage logs with the following options:
* **Filter Options**: Filter logs by level (`Note`, `Error`, `Warning`, `Debug`) and by date range.
* **Sorting**: Sort logs by newest or oldest first.
* **Metadata**: Display metadata like `user_id` and `user_agent` in the logs if available.
* **View Mode**: Tail logs in real-time or view them statically.
* **Stack Traces**: Option to show or hide stack traces for errors.
## Custom Metadata in Sync Logs
Custom metadata in sync logs allows clients to attach additional context to their PowerSync connection for improved observability and analytics. This metadata appears in the Service/API logs, making it easier to track, debug, and analyze sync behavior across your app. For example, you can tag connections with app version, feature flags, or business context.
### How to Use Custom Metadata
You can specify application metadata when calling `PowerSyncDatabase.connect()`. To update the metadata, reconnect with new metadata values.
**Version compatibility**: This feature requires JavaScript/Web SDK v1.30.0+, React Native SDK v1.28.0+, Node.js SDK v0.15.0+, or Capacitor SDK v0.2.0+, and PowerSync Service v1.17.0+.
```javascript theme={null}
import { PowerSyncDatabase } from '@powersync/web'; // Update this to the appropriate SDK package
const powerSync = new PowerSyncDatabase({
schema: AppSchema,
database: {
dbFilename: 'powersync.db'
}
});
// Set custom metadata when connecting
powerSync.connect(connector, {
appMetadata: {
app_version: '1.2.3',
feature_flag: 'new_sync_flow'
}
});
```
**Version compatibility**: This feature requires Dart/Flutter SDK v1.17.0+ and PowerSync Service v1.17.0+.
```dart theme={null}
import 'package:powersync/powersync.dart';
final powerSync = PowerSyncDatabase(
schema: AppSchema,
path: 'powersync.db'
);
await powerSync.initialize();
// Set custom metadata when connecting
const options = SyncOptions(
appMetadata: {
'app_version': '1.2.3',
'feature_flag': 'new_sync_flow'
}
);
powerSync.connect(
connector: MyConnector(),
options: options
);
```
**Version compatibility**: This feature requires Kotlin SDK v1.10.0+ and PowerSync Service v1.17.0+.
```kotlin theme={null}
import com.powersync.DatabaseDriverFactory
import com.powersync.PowerSyncDatabase
// Android
val driverFactory = DatabaseDriverFactory(this)
// iOS & Desktop
// val driverFactory = DatabaseDriverFactory()
val powerSync = PowerSyncDatabase(
factory = driverFactory,
schema = AppSchema,
dbFilename = "powersync.db"
)
// Set custom metadata when connecting
powerSync.connect(
connector = MyConnector(),
appMetadata = mapOf(
"app_version" to "1.2.3",
"feature_flag" to "new_sync_flow"
)
)
```
**Version compatibility**: This feature requires Swift SDK v1.9.0+ and PowerSync Service v1.17.0+.
```swift theme={null}
import PowerSync
let schema = AppSchema
let connector = Connector() // This connector must conform to PowerSyncBackendConnector
let powerSync = PowerSyncDatabase(
schema: schema,
dbFilename: "powersync.db"
)
// Set custom metadata when connecting
try await powerSync.connect(
connector: connector,
options: ConnectOptions(
appMetadata: [
"app_version": "1.2.3",
"feature_flag": "new_sync_flow"
]
)
)
```
**Version compatibility**: This feature requires .NET SDK v0.0.6-alpha.1+ and PowerSync Service v1.17.0+.
```csharp theme={null}
using PowerSync.Common.Client;
using PowerSync.Common.Client.Sync.Stream;
var powerSync = new PowerSyncDatabase(new PowerSyncDatabaseOptions
{
Database = new SQLOpenOptions { DbFilename = "powersync.db" },
Schema = AppSchema.PowerSyncSchema,
});
await powerSync.Init();
// Set custom metadata when connecting
await powerSync.Connect(
connector: new MyConnector(),
options: new PowerSyncConnectionOptions
{
AppMetadata = new Dictionary<string, string>
{
{ "app_version", "1.2.3" },
{ "feature_flag", "new_sync_flow" }
}
}
);
```
Example not yet available.
### View Custom Metadata in Logs
Custom metadata appears in the **Service/API logs** section of the [PowerSync Dashboard](https://dashboard.powersync.com/). Navigate to your project and instance, then go to the **Logs** view. The metadata is included in **Sync Stream Started** and **Sync Stream Completed** log entries.
Make sure the **Metadata** checkbox is enabled in the logs view to see custom metadata in log entries.
Note the following when using custom metadata:
* Keep metadata values concise. `app_metadata` is limited to 20 keys, with each string value capped at 100 characters.
* Avoid including sensitive information in metadata as it will appear in logs.
* Metadata is set per connection. Reconnect with new metadata when user context or app state changes (e.g., feature flags).
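A client-side pre-flight check against these limits can be sketched as follows. This is a hypothetical helper, not part of the PowerSync SDK:

```javascript theme={null}
// Hypothetical helper, not part of the PowerSync SDK.
// Checks app_metadata against the documented limits:
// at most 20 keys, each string value at most 100 characters.
function validateAppMetadata(metadata) {
  const keys = Object.keys(metadata);
  if (keys.length > 20) return false;
  return keys.every((key) => String(metadata[key]).length <= 100);
}
```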
## Issue Alerts
Issue alerts capture potential problems with your instance, such as connection or replication issues.
**Availability**
* Issue alerts are available on all Cloud plans.
### Configure Issue Alerts
Issue alerts are set up per instance. To set up a new alert, navigate to the **Alerts** section in the [PowerSync Dashboard](https://dashboard.powersync.com/) and click **Create Issue Alert**.
When creating or editing an issue alert, you can configure:
* **Alert Name**: Give your alert a descriptive name to help identify it
* **Issue Type**: Select the type of issue to monitor from the dropdown:
* **Database Connection Issue**: Trigger when there is a connection problem
* **Replication Issue**: Trigger when there is an issue with the replication process
* **Severity Levels**: Choose which severity levels should trigger this alert:
* **Warning**: For non-critical issues
* **Fatal**: For critical issues that require immediate attention
**Important: Set Up Notification Rules**
Creating an issue alert only defines *what* to monitor. To actually receive notifications when alerts trigger, you must also set up [Email Rules](#email-notifications) or [Webhooks](#webhooks) and configure them to notify for "Issue alert state change" events. See the [Alert Notifications](#alert-notifications) section below.
## Usage Alerts
Usage alerts trigger when specific usage metrics exceed a defined threshold. This helps with troubleshooting usage spikes, or unexpected usage activity.
**Availability**
Usage alerts are available on **Team** and **Enterprise** plans.
### Configure Usage Alerts
Usage alerts are set up per instance. Navigate to the **Alerts** section in the [PowerSync Dashboard](https://dashboard.powersync.com/) and click **Create Usage Alert**.
When creating or editing a usage alert, you can configure:
* **Alert Name**: Give your alert a descriptive name to help identify it
* **Metric**: Select from the following usage metrics to monitor:
* Data Synced
* Data Replicated
* Operations Synced
* Operations Replicated
* Peak Concurrent Connections
* Storage Size
These metrics correspond to the data shown in the [Usage Metrics](#view-usage-metrics) workspace and align with the PowerSync Service parameters outlined in our [pricing](https://www.powersync.com/pricing).
* **Window**: The number of minutes to look back when evaluating usage. All usage data points within this time window are included when determining if the configured threshold has been crossed
* **Aggregation**: Choose how to aggregate all data points within the window before comparing to the threshold:
* **Avg**: Calculate the average of all values
* **Max**: Use the highest value
* **Min**: Use the lowest value
* **Condition**: Set whether the alert triggers when usage goes **Above** or **Below** the specified threshold
* **Threshold Value**: The numeric limit for the selected metric (in bytes for size-based metrics; count for all other metrics)
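Conceptually, these settings combine roughly as follows. This is an illustrative sketch only, not the PowerSync service's actual implementation:

```javascript theme={null}
// Illustrative sketch only — not the PowerSync service's actual code.
// dataPoints: usage values observed within the configured window.
function usageAlertTriggered(dataPoints, { aggregation, condition, threshold }) {
  if (dataPoints.length === 0) return false;
  const aggregate = {
    avg: (xs) => xs.reduce((a, b) => a + b, 0) / xs.length,
    max: (xs) => Math.max(...xs),
    min: (xs) => Math.min(...xs)
  }[aggregation](dataPoints);
  return condition === 'above' ? aggregate > threshold : aggregate < threshold;
}
```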
**Important: Set Up Notification Rules**
Creating a usage alert only defines *what* to monitor. To actually receive notifications when alerts trigger, you must also set up [Email Rules](#email-notifications) or [Webhooks](#webhooks) and configure them to notify for "Usage alert state change" events. See the [Alert Notifications](#alert-notifications) section below.
## Alert Notifications
Set up notification rules to be informed of issue or usage alerts, as well as deploy state changes. PowerSync provides multiple notification methods that trigger both when an alert becomes active and when it returns to normal (indicating the monitored conditions are back within acceptable thresholds).
* **Email Rules**: Send alerts directly to your email address
* **Webhooks**: Notify external systems and services
**Availability**
* **Email Rules**: Available on all plans (**Free**, **Pro**, **Team** and **Enterprise**)
* **Webhooks**: Available on **Pro**, **Team** and **Enterprise** plans
### Email Rules
Email rules allow you to receive alerts directly to your email address when specific events occur in PowerSync.
#### Set Up Email Rules
Navigate to the **Alerts** section in the [PowerSync Dashboard](https://dashboard.powersync.com/) and scroll down to the **Notification Rules** section. Click **Create Email Rule** to set up email notifications.
Accounts on the Free plan are restricted to a single email rule; customers on paid plans can create an unlimited number of email rules.
When creating or editing an email rule, you can configure:
* **Recipient Email**: Specify the email address that will receive the notifications (required)
* **Event Triggers**: Select one or more of the following events to trigger the email notification:
* **Usage alert state change**: Fired when a usage alert changes between 'monitoring' and 'alerting' (a threshold has been crossed)
* **Issue alert state change**: Fired when an issue alert changes between 'monitoring' and 'alerting' (the instance has active issues)
* **Deploy state change**: Fired when an instance deploy starts, completes or fails. This includes deprovisioning an instance
* **Enabled**: Toggle to control whether the email rule is active
### Webhooks
Webhooks enable you to notify external systems when specific events occur in PowerSync.
#### Set Up Webhooks
Navigate to the **Alerts** section in the [PowerSync Dashboard](https://dashboard.powersync.com/) and scroll down to the **Notification Rules** section. Click **Create Webhook Rule** to set up webhook notifications.
When creating or editing a webhook rule, you can configure:
* **Webhook Endpoint (URL)**: Define the endpoint that will receive the webhook request (starting with `https://`) (required)
* **Event Triggers**: Select one or more of the following events to trigger the webhook:
* **Usage alert state change**: Fired when a usage alert changes between 'monitoring' and 'alerting' (a threshold has been crossed)
* **Issue alert state change**: Fired when an issue alert changes between 'monitoring' and 'alerting' (the instance has active issues)
* **Deploy state change**: Fired when an instance deploy starts, completes or fails. This includes deprovisioning an instance
* **Enabled**: Toggle to control whether the webhook rule is active
* **Retries**: Configure the number of retry attempts for failed webhook deliveries
After creating a webhook, a secret is automatically generated and copied to your clipboard. Store this secret since you'll need it to verify the webhook request signature.
### Webhook Signature Verification
Every webhook request contains an `x-journey-signature` header, which is a base64-encoded HMAC (Hash-based Message Authentication Code). To verify the request, you need to compute the HMAC using the shared secret that was generated when you created the webhook, and compare it to the value in the `x-journey-signature` header.
**JavaScript Example:**
```javascript theme={null}
import { createHmac } from 'crypto';
// Extract the signature from the request headers
const signature = request.header('x-journey-signature');
// Create an HMAC using your webhook secret and the request body
let verify = createHmac('sha256', '') // The secret provided during webhook setup
.update(Buffer.from(request.body, 'utf-8'))
.digest('base64');
// Compare the computed HMAC with the signature from the request
if (signature === verify) {
console.log("success");
} else {
console.log("verification failed");
}
```
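When comparing HMACs, a constant-time comparison avoids leaking information through timing differences. A variant of the check above using Node's `crypto.timingSafeEqual` (a sketch; the helper name is our own):

```javascript theme={null}
import { createHmac, timingSafeEqual } from 'crypto';

// Sketch: constant-time signature verification (helper name is our own).
function verifyWebhookSignature(secret, rawBody, signatureHeader) {
  const expected = createHmac('sha256', secret)
    .update(Buffer.from(rawBody, 'utf-8'))
    .digest('base64');
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  // timingSafeEqual throws if lengths differ, so compare lengths first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```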
# Production Readiness Best Practices Guide
Source: https://docs.powersync.com/maintenance-ops/production-readiness-guide
Key recommendations for ensuring your deployment is ready for production
Here are the recommended items you should implement as part of supporting PowerSync in a production environment.
1. Client SDK Diagnostics - Implement a sync diagnostics screen/view in your client application that provides critical sync information.
2. Client logging - Implement logging in your client application to capture sync events and errors.
3. Issue Alerts - Trigger notifications when the PowerSync replicator runs into errors.
4. Database - Making sure your database is ready for production when integrated with PowerSync.
# Client specific
## SDK Diagnostics
It’s important to know what’s going on in a PowerSync-enabled client application; this becomes especially useful when debugging issues with end users.
We recommend adding a view/screen in your application that offers diagnostic information about a client. Include the following client-specific information:
1. `connected` - Boolean; True if the client is connected to the PowerSync Service instance. False if not.
2. `connecting` - Boolean; True if the client is attempting to connect to the PowerSync Service instance. False if not.
3. `uploading` - Boolean; If the client has a network connection and changes are present in the upload queue, this is set to true while the client attempts to upload changes to the backend API via the `uploadData` function. This field is found on the `dataFlowStatus` object.
4. `downloading` - Boolean; True if the client is connected to the PowerSync Service and new data is available for download; false otherwise. This field is found on the `dataFlowStatus` object.
5. `hasSynced` - Boolean; True if the client completed a full sync at least once. False if the client never completed a full sync.
6. `lastSyncedAt` - DateTime; Timestamp of when the client last completed a full sync.
Each of the PowerSync Client SDKs has a `SyncStatus` class that can be used to access the fields mentioned above.
* [Flutter](https://pub.dev/documentation/powersync/latest/powersync/SyncStatus-class.html)
* [Kotlin](https://powersync-ja.github.io/powersync-kotlin/core/com.powersync.sync/-sync-status/index.html?query=data%20class%20SyncStatus%20:%20SyncStatusData)
* [Swift](https://powersync-ja.github.io/powersync-swift/documentation/powersync/syncstatusdata)
* [Web](https://powersync-ja.github.io/powersync-js/web-sdk/classes/SyncStatus)
* [React Native](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/SyncStatus)
* [Node.js](https://powersync-ja.github.io/powersync-js/node-sdk/classes/SyncStatus)
* [.NET (Alpha)](https://github.com/powersync-ja/powersync-dotnet/blob/2728eab0d13849686ff3f9a603040940744599e1/PowerSync/PowerSync.Common/DB/Crud/SyncStatus.cs)
In addition to the `SyncStatus` fields above, it's also useful to monitor the current length of the upload queue.
The upload queue contains all local mutations that still need to be processed by the client-specific `uploadData` implementation.
To get this information, simply count the number of rows in the internal `ps_crud` SQLite table, e.g.
```sqlite theme={null}
SELECT COUNT(*) AS row_count FROM ps_crud;
```
If you're interested in learning more about the internal PowerSync SQLite schema, see the [Client Architecture](/architecture/client-architecture#schema) section of the docs.
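Putting the fields together, a diagnostics view can be driven by a simple formatter over the `SyncStatus` fields and the queue count. This is a sketch only; the helper and the simplified `status` shape are our own:

```javascript theme={null}
// Sketch only — the helper and the simplified `status` shape are our own.
function formatSyncDiagnostics(status, uploadQueueCount) {
  return [
    `Connected: ${status.connected}`,
    `Connecting: ${status.connecting}`,
    `Uploading: ${status.dataFlowStatus?.uploading ?? false}`,
    `Downloading: ${status.dataFlowStatus?.downloading ?? false}`,
    `Has synced: ${status.hasSynced ?? false}`,
    `Last synced at: ${status.lastSyncedAt ?? 'never'}`,
    `Upload queue length: ${uploadQueueCount}`
  ].join('\n');
}
```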
## Client logging
### Using Sentry for Log Aggregation
This is just an example of how to implement logging with Sentry; the actual implementation is up to you as the developer. You don't have to use Sentry, but we recommend using some form of log aggregation service in production.
```typescript App Entry Point theme={null}
import * as Sentry from '@sentry/react';
import { createRoot } from 'react-dom/client';
createRoot(document.getElementById("root")!,
{
onUncaughtError: Sentry.reactErrorHandler((error, errorInfo) => {
console.warn('Uncaught error', error, errorInfo.componentStack);
}),
// Callback called when React catches an error in an ErrorBoundary.
onCaughtError: Sentry.reactErrorHandler(),
// Callback called when React automatically recovers from errors.
onRecoverableError: Sentry.reactErrorHandler(),
}).render(
<App /> // Your root application component
);
```
```typescript System.ts theme={null}
import * as Sentry from '@sentry/react';
import { createBaseLogger, LogLevel, PowerSyncDatabase } from '@powersync/react-native';
// Initialize Sentry
Sentry.init({
dsn: 'YOUR_SENTRY_DSN_HERE',
transport: Sentry.makeBrowserOfflineTransport(Sentry.makeFetchTransport), // Handle offline scenarios
enableLogs: true // Enable Sentry logging
});
const logger = createBaseLogger();
logger.useDefaults();
logger.setLevel(LogLevel.WARN);
logger.setHandler((messages, context) => {
if (!context?.level) return;
// Get the main message and combine any additional data
const messageArray = Array.from(messages);
const mainMessage = String(messageArray[0] || 'Empty log message');
const extraData = messageArray.slice(1).reduce((acc, curr) => ({ ...acc, ...curr }), {});
const level = context.level.name.toLowerCase();
// Add breadcrumb: creates a trail of events leading up to errors
// This helps debug by showing PowerSync state/operations before crashes
// Breadcrumbs appear in Sentry error reports for context
// We capture all levels (including info/debug) since we might want to know
// what operations happened before an error occurred
Sentry.addBreadcrumb({
message: mainMessage,
level: level as Sentry.SeverityLevel,
data: extraData,
timestamp: Date.now()
});
// Only send warnings and errors to Sentry
if (level == 'warn' || level == 'error') {
console[level](`PowerSync ${level.toUpperCase()}:`, mainMessage, extraData);
Sentry.logger[level](mainMessage, extraData);
}
});
// Create PowerSync instance
export const powerSync = new PowerSyncDatabase({
schema: AppSchema,
database: {
dbFilename: 'example.db'
},
logger: logger // Pass the logger to PowerSync
});
// Register a listener to monitor PowerSync status changes and log upload/download errors that are not handled directly by the SDK
powerSync.registerListener({
statusChanged: (status) => {
// Check for download errors and log them with context
if(status.dataFlowStatus?.downloadError) {
logger.error("PowerSync sync download failed", {
userSession: connector.currentSession, // Current user session for tracking
lastSyncAt: status?.lastSyncedAt, // When the last successful sync occurred
connected: status?.connected, // Network connection status
sdkVersion: powerSync.sdkVersion || 'unknown', // PowerSync SDK version for debugging
downloadError: status.dataFlowStatus?.downloadError // The actual download error details
});
}
// Check for upload errors and log them with context
if(status.dataFlowStatus?.uploadError) {
logger.error("PowerSync sync upload failed", {
userSession: connector.currentSession, // Current user session for tracking
lastSyncAt: status?.lastSyncedAt, // When the last successful sync occurred
connected: status?.connected, // Network connection status
sdkVersion: powerSync.sdkVersion || 'unknown', // PowerSync SDK version for debugging
uploadError: status.dataFlowStatus?.uploadError // The actual upload error details
});
}
}
});
// Example usage with additional context (`userID` and `status` are assumed
// to be in scope in your application code)
logger.error('PowerSync sync failed', {
userId: userID,
lastSyncAt: status?.lastSyncedAt,
connected: status?.connected,
sdkVersion: powerSync.sdkVersion || 'unknown',
});
```
### Best Practices
* **Log Level Management:** Use appropriate log levels (`WARN`/`ERROR`) in production
* **Structured Logging:** Include relevant context like user IDs, operation types, timestamps
* **Offline Resilience:** Always have a local fallback for critical logs
* **Performance:** Be mindful of log volume to avoid performance impacts
* **Privacy:** Ensure sensitive data is not logged or is properly sanitized
* **Retention:** Implement log rotation/cleanup for local storage to manage device storage (if applicable)
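As an illustration of the privacy and structured-logging points above, a small helper can redact sensitive fields before extra data is passed to the logger or `Sentry.addBreadcrumb`. The key list here is an assumption; extend it to match your application's data:

```typescript
// Sketch: redact sensitive fields before logging. The key list is an
// assumption -- extend it to match your application's data.
const SENSITIVE_KEYS = ['password', 'token', 'secret', 'authorization'];

export function sanitizeLogData(
  data: Record<string, unknown>
): Record<string, unknown> {
  const clean: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(data)) {
    const sensitive = SENSITIVE_KEYS.some((s) => key.toLowerCase().includes(s));
    clean[key] = sensitive ? '[REDACTED]' : value;
  }
  return clean;
}

// e.g. sanitizeLogData({ userId: 'u1', accessToken: 'abc' })
//      keeps userId and redacts accessToken
```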
# Issue Alerts
## PowerSync Cloud
The PowerSync Cloud dashboard offers features that make it easy to monitor replication from your source database to your PowerSync Service instance and to raise alerts when issues occur.
We highly recommend you read the sections below and configure alerts as suggested.
### Replication Issue Alerts
At a minimum, we recommend creating an issue alert for `Replication issues`. For detailed instructions on how to configure issue alerts, see the [Issue Alerts](/maintenance-ops/monitoring-and-alerting#issue-alerts) section of the [Monitoring and Alerting](/maintenance-ops/monitoring-and-alerting) docs.
Here's a quick example of what the issue alert should look like to catch replication issues:
Once configured, create a [Webhook](/maintenance-ops/monitoring-and-alerting#webhooks) alert or [Email](/maintenance-ops/monitoring-and-alerting#email-notifications) notifications to ensure you are notified when replication issues arise.
## PowerSync Self-Host
There are a few options for viewing the health of, and errors from, a self-hosted PowerSync Service:
### Health Check Endpoints
The PowerSync Service offers a few HTTP endpoints you can probe to perform health checks on an instance. These endpoints return a specific HTTP status code depending on the current health of the instance, but do not give specific error information.
For more information on this, see the [Health Checks](/maintenance-ops/self-hosting/healthchecks#health-check-endpoints) docs.
### Diagnostics API
The PowerSync Service Diagnostics API is an easy way to get details on specific errors occurring on an instance.
To configure replication issue alerts for self-hosted instances, we recommend using the Diagnostics API, which ships with the PowerSync Service, as the source of replication issue information.
First, make sure the Diagnostics API is configured for your PowerSync Service. To do so, follow the steps outlined in the [PowerSync Self-Host Diagnostics](/maintenance-ops/self-hosting/diagnostics) docs.
Once enabled, send a request to the Diagnostics API to see the current status. The response will look something like this:
```json theme={null}
{
"data": {
"connections": [
{
"id": "default",
"postgres_uri": "postgresql://powersync:5432/postgres",
"connected": true,
"errors": []
}
],
"active_sync_rules": {
"connections": [
{
"id": "default",
"tag": "default",
"slot_name": "powersync_1_6489",
"initial_replication_done": true,
"last_lsn": "00000000/0AB81970",
"last_keepalive_ts": "2025-08-26T15:51:49.746Z",
"last_checkpoint_ts": "2025-08-26T15:44:10.624Z",
"replication_lag_bytes": 0,
"tables": [
{
"schema": "public",
"name": "counters",
"replication_id": [
"id"
],
"data_queries": true,
"parameter_queries": false,
"errors": []
}
]
}
],
"errors": []
}
}
}
```
The easiest way to check for replication issues is to poll the Diagnostics endpoint at intervals and watch the `errors` arrays; these are populated as errors arise on the service.
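As a sketch of such an interval check, the helper below flattens every `errors` array from a diagnostics response shaped like the sample above. The interfaces cover only the fields read here; verify them against your PowerSync Service version:

```typescript
// Minimal shapes mirroring the sample diagnostics response above
// (only the fields read by the check below).
interface TableStatus {
  name: string;
  errors: string[];
}

interface SyncRuleConnection {
  id: string;
  tables: TableStatus[];
}

interface Diagnostics {
  data: {
    connections: { id: string; errors: string[] }[];
    active_sync_rules?: {
      connections: SyncRuleConnection[];
      errors: string[];
    };
  };
}

// Flatten every `errors` array in the response; a non-empty result
// means a replication issue that should be alerted on.
export function collectReplicationErrors(d: Diagnostics): string[] {
  const errors: string[] = [];
  for (const conn of d.data.connections) errors.push(...conn.errors);
  const rules = d.data.active_sync_rules;
  if (rules) {
    errors.push(...rules.errors);
    for (const conn of rules.connections) {
      for (const table of conn.tables) errors.push(...table.errors);
    }
  }
  return errors;
}
```

Run this against the endpoint on a schedule and raise an alert whenever the returned array is non-empty.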
# Database Best Practices
## Postgres
### Managing & Monitoring Replication Lag
Because PowerSync relies on Postgres logical replication, it's important to configure `max_slot_wal_keep_size` appropriately and to monitor the lag of the replication slots used by PowerSync in a production environment, to ensure that slot lag does not exceed `max_slot_wal_keep_size`.
The `max_slot_wal_keep_size` Postgres [configuration parameter](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-MAX-SLOT-WAL-KEEP-SIZE) limits the size of the Write-Ahead Log (WAL) files that replication slots can hold.
The WAL growth rate is expected to increase substantially during the initial replication of large datasets with high update frequency, particularly for tables included in the PowerSync publication.
During normal operation (after Sync Streams (or legacy Sync Rules) are deployed) the WAL growth rate is much smaller than the initial replication period, since the PowerSync Service can replicate \~5k operations per second, meaning the WAL lag is typically in the MB range as opposed to the GB range.
When deciding what to set the `max_slot_wal_keep_size` configuration parameter to, take the following into account:
1. Database size - This impacts the time it takes to complete the initial replication from the source Postgres database.
2. Sync Streams (or legacy Sync Rules) complexity - This also impacts the time it takes to complete the initial replication.
3. Postgres update frequency - The frequency of updates (of tables included in the publication you create for PowerSync) during initial replication. The WAL growth rate is directly proportional to this.
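These factors can be combined into a rough back-of-envelope estimate. All inputs and the safety factor below are illustrative assumptions, not PowerSync guidance:

```typescript
// Back-of-envelope WAL budget: during initial replication, WAL accumulates
// at roughly the database's WAL write rate for the duration of the initial
// sync. The safety factor adds headroom for bursts.
export function estimateWalBudgetBytes(
  initialReplicationSeconds: number,
  walWriteBytesPerSecond: number,
  safetyFactor = 2
): number {
  return initialReplicationSeconds * walWriteBytesPerSecond * safetyFactor;
}

// e.g. a 2-hour initial replication at 1 MiB/s of WAL:
const budgetBytes = estimateWalBudgetBytes(2 * 3600, 1024 * 1024);
const budgetGiB = budgetBytes / 2 ** 30; // ~14 GiB -> compare to max_slot_wal_keep_size
```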
To view the current replication slots that are being used by PowerSync you can run the following query:
```sql theme={null}
SELECT slot_name,
plugin,
slot_type,
active,
pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS replication_lag
FROM pg_replication_slots;
```
To view the current configured value of the `max_slot_wal_keep_size` you can run the following query:
```sql theme={null}
SELECT setting AS max_slot_wal_keep_size
FROM pg_settings
WHERE name = 'max_slot_wal_keep_size';
```
It's recommended to check the current replication slot lag and `max_slot_wal_keep_size` when deploying Sync Streams (or legacy Sync Rules) changes to your PowerSync Service instance, especially when you're working with large database volumes.
If you notice that the replication lag is greater than the current `max_slot_wal_keep_size`, it's recommended to increase the value of `max_slot_wal_keep_size` on the connected source Postgres database to accommodate the lag and ensure the PowerSync Service can complete initial replication without further delays.
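A monitoring check along these lines might look as follows. The field names and the 80% threshold are assumptions for this sketch:

```typescript
// Illustrative alert check: flag replication slots whose retained WAL is
// approaching max_slot_wal_keep_size.
interface SlotLag {
  slot_name: string;
  lag_bytes: number; // pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)
}

export function slotsNearLimit(
  slots: SlotLag[],
  maxSlotWalKeepSizeBytes: number,
  warnRatio = 0.8 // alert at 80% of the configured limit
): string[] {
  return slots
    .filter((s) => s.lag_bytes >= maxSlotWalKeepSizeBytes * warnRatio)
    .map((s) => s.slot_name);
}
```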
### Managing Replication Slots
Under normal operating conditions, when new Sync Streams (or legacy Sync Rules) are deployed to a PowerSync Service instance, a new replication slot is created and used for replication. The old replication slot from the previous version of the sync configuration remains until reprocessing is complete, at which point the PowerSync Service removes it.
However, in some cases a replication slot may remain without being used. This usually happens when a PowerSync Service instance is de-provisioned or stopped, whether intentionally or due to unexpected errors, and results in excessive disk usage as the WAL continues to grow.
To check which replication slots used by a PowerSync Service are no longer active, the following query can be executed against the source Postgres database:
```sql theme={null}
SELECT slot_name,
pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS replication_lag
FROM pg_replication_slots WHERE active = false;
```
If you have inactive replication slots that need to be cleaned up, you can drop them using the following query:
```sql theme={null}
SELECT slot_name,
pg_drop_replication_slot(slot_name)
FROM pg_replication_slots
WHERE active = false;
```
An alternative to manually checking for inactive replication slots is to configure the `idle_replication_slot_timeout` parameter on the source Postgres database.
The `idle_replication_slot_timeout` [configuration parameter](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-IDLE-REPLICATION-SLOT-TIMEOUT) is only available from PostgreSQL 18 onwards.
It invalidates replication slots that have remained inactive for longer than the configured timeout.
It's recommended to configure this parameter on source Postgres databases, as it prevents runaway WAL growth from replication slots that are no longer active or used by the PowerSync Service.
# Deploy PowerSync on AWS ECS
Source: https://docs.powersync.com/maintenance-ops/self-hosting/aws-ecs
Guide to deploying PowerSync on AWS ECS with Fargate
[AWS ECS](https://aws.amazon.com/ecs/) with Fargate provides a serverless container orchestration platform for running PowerSync without managing servers.
## Prerequisites
Before deploying PowerSync on AWS ECS, ensure you have:
* AWS account with permissions for EC2, ECS, ALB, IAM and Secrets Manager
* AWS CLI installed and configured
* Understanding of the [deployment architecture](/maintenance-ops/self-hosting/deployment-architecture) for production vs development setup
## 1. PowerSync Configuration
Create your `powersync.yaml` configuration file following the [Self-Hosted Configuration Guide](/configuration/powersync-service/self-hosted-instances).
Your configuration must include:
* [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)): Define which data to sync to clients
* [Client Auth](/configuration/auth/overview): Your authentication provider's JWKS
* [Source Database](/configuration/source-db/setup): Connection details for your source database
* [Bucket Storage](/configuration/powersync-service/self-hosted-instances#bucket-storage-database): Connection details for your bucket storage database. PowerSync supports MongoDB or Postgres as bucket storage databases. In this guide, we focus on MongoDB.
For bucket storage, we recommend configuring an **AWS PrivateLink** to establish a secure, private connection between your ECS tasks and MongoDB Atlas that doesn't traverse the public internet.
Follow the [AWS PrivateLink guide for MongoDB Atlas](https://aws.amazon.com/blogs/apn/connecting-applications-securely-to-a-mongodb-atlas-data-plane-with-aws-privatelink/) to configure the VPC endpoint and update your MongoDB connection string to use the private endpoint. As seen in the [Secrets Manager](#5-secrets-manager) setup, use the updated connection string in your `PS_MONGO_URI` secret.
For self-hosting MongoDB bucket storage on an EC2 instance, refer to these AWS guides (they target Amazon DocumentDB, but the installation steps also apply):
1. [Launch an EC2 Instance](https://docs.aws.amazon.com/dms/latest/sbs/chap-mongodb2documentdb.01.html)
2. [Install and Configure MongoDB](https://docs.aws.amazon.com/dms/latest/sbs/chap-mongodb2documentdb.02.html)
3. **Network Configuration**
* Place MongoDB EC2 instance in the same VPC as your ECS tasks
* Configure security groups to allow ECS tasks to connect to MongoDB on port 27017:
```bash theme={null}
# Create MongoDB security group
MONGO_SG=$(aws ec2 create-security-group \
--group-name mongodb-sg \
--description "MongoDB for PowerSync" \
--vpc-id $VPC_ID \
--query 'GroupId' --output text)
# Allow ECS tasks to connect to MongoDB ($ECS_SG is the ECS tasks security group created later in the Security Groups section)
aws ec2 authorize-security-group-ingress \
--group-id $MONGO_SG \
--protocol tcp --port 27017 --source-group $ECS_SG
```
## 2. VPC and Networking Setup
This guide uses bash variables throughout for easy copy-paste execution.
```bash theme={null}
# Set your AWS region and account ID
AWS_REGION="us-east-1" # Change to your region
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
# Set your VPC ID (or create a new VPC)
VPC_ID="vpc-xxxxx"
# Set PowerSync version (check Docker Hub for latest: https://hub.docker.com/r/journeyapps/powersync-service/tags)
PS_VERSION="1.19.0"
```
### VPC Architecture Overview
PowerSync on ECS requires a VPC with both **public** and **private** subnets:
* **Public subnets**: Host the Application Load Balancer (ALB) and NAT Gateway with direct internet access
* **Private subnets**: Host ECS tasks for security, with outbound-only internet access via NAT Gateway
**Network Flow:**
```
Internet → Internet Gateway → Public Subnets (ALB, NAT) → Private Subnets (ECS Tasks)
```
**Default VPC users**: The AWS default VPC only contains public subnets. You must create private subnets following the steps below.
### Check Existing Subnets
```bash theme={null}
# List all subnets in your VPC
aws ec2 describe-subnets \
--filters "Name=vpc-id,Values=$VPC_ID" \
--query 'Subnets[*].[SubnetId,CidrBlock,MapPublicIpOnLaunch,AvailabilityZone]' \
--output table
```
If `MapPublicIpOnLaunch` is `True`, those are public subnets. Save the public subnet IDs:
```bash theme={null}
# Get public subnets (for ALB and NAT Gateway)
PUBLIC_SUBNET_1=$(aws ec2 describe-subnets \
--filters "Name=vpc-id,Values=$VPC_ID" "Name=map-public-ip-on-launch,Values=true" \
--query 'Subnets[0].SubnetId' --output text)
PUBLIC_SUBNET_2=$(aws ec2 describe-subnets \
--filters "Name=vpc-id,Values=$VPC_ID" "Name=map-public-ip-on-launch,Values=true" \
--query 'Subnets[1].SubnetId' --output text)
echo "Public Subnet 1: $PUBLIC_SUBNET_1"
echo "Public Subnet 2: $PUBLIC_SUBNET_2"
```
### Create Private Subnets
Create two private subnets in different availability zones for high availability:
```bash theme={null}
# Get available zones in your region
AZ1=$(aws ec2 describe-availability-zones --region $AWS_REGION --query 'AvailabilityZones[0].ZoneName' --output text)
AZ2=$(aws ec2 describe-availability-zones --region $AWS_REGION --query 'AvailabilityZones[1].ZoneName' --output text)
echo "Availability Zone 1: $AZ1"
echo "Availability Zone 2: $AZ2"
# Get VPC CIDR to determine available address space
VPC_CIDR=$(aws ec2 describe-vpcs --vpc-ids $VPC_ID --query 'Vpcs[0].CidrBlock' --output text)
echo "VPC CIDR: $VPC_CIDR"
# Create first private subnet (adjust CIDR if conflicts exist)
PRIVATE_SUBNET_1=$(aws ec2 create-subnet \
--vpc-id $VPC_ID \
--cidr-block 172.31.96.0/20 \
--availability-zone $AZ1 \
--tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=powersync-private-1}]' \
--query 'Subnet.SubnetId' \
--output text)
echo "Private Subnet 1: $PRIVATE_SUBNET_1"
# Create second private subnet (adjust CIDR if conflicts exist)
PRIVATE_SUBNET_2=$(aws ec2 create-subnet \
--vpc-id $VPC_ID \
--cidr-block 172.31.112.0/20 \
--availability-zone $AZ2 \
--tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=powersync-private-2}]' \
--query 'Subnet.SubnetId' \
--output text)
echo "Private Subnet 2: $PRIVATE_SUBNET_2"
```
**CIDR Block Configuration**: The example uses `172.31.96.0/20` and `172.31.112.0/20`, which work for the default VPC (`172.31.0.0/16`). If you get a CIDR conflict error, adjust these blocks to match unused address space in your VPC. Each /20 block provides 4,096 IP addresses.
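The address count follows directly from the prefix length: a /N block contains 2^(32−N) addresses, of which AWS reserves 5 in every subnet (network, router, DNS, future use, broadcast). A quick sketch:

```typescript
// A /N IPv4 block contains 2^(32 - N) addresses; AWS reserves 5 addresses
// in every subnet (network, router, DNS, future use, broadcast).
export function subnetAddressCount(prefixLength: number): number {
  return 2 ** (32 - prefixLength);
}

export function awsUsableAddresses(prefixLength: number): number {
  return subnetAddressCount(prefixLength) - 5;
}

// e.g. a /20 block: 4096 total addresses, 4091 usable in AWS
```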
**Create Route Table for Private Subnets:**
```bash theme={null}
# Create private route table
PRIVATE_RTB=$(aws ec2 create-route-table \
--vpc-id $VPC_ID \
--tag-specifications 'ResourceType=route-table,Tags=[{Key=Name,Value=powersync-private-rtb}]' \
--query 'RouteTable.RouteTableId' \
--output text)
echo "Private Route Table: $PRIVATE_RTB"
# Associate private subnets with route table
aws ec2 associate-route-table \
--route-table-id $PRIVATE_RTB \
--subnet-id $PRIVATE_SUBNET_1
aws ec2 associate-route-table \
--route-table-id $PRIVATE_RTB \
--subnet-id $PRIVATE_SUBNET_2
echo "Private subnets created and associated with route table"
```
### NAT Gateway Setup
ECS tasks in private subnets need outbound internet access for:
* Pulling container images from Amazon ECR
* Fetching JWKS for authentication (if applicable in your client authentication setup)
* Connecting to external services
**Create NAT Gateway:**
```bash theme={null}
# Allocate Elastic IP
EIP_ALLOC=$(aws ec2 allocate-address \
--domain vpc \
--query 'AllocationId' \
--output text)
echo "Elastic IP Allocation: $EIP_ALLOC"
# Create NAT Gateway in a PUBLIC subnet
NAT_GW=$(aws ec2 create-nat-gateway \
--subnet-id $PUBLIC_SUBNET_1 \
--allocation-id $EIP_ALLOC \
--query 'NatGateway.NatGatewayId' \
--output text)
echo "NAT Gateway: $NAT_GW"
# Wait for NAT Gateway to become available (takes ~2 minutes)
echo "Waiting for NAT Gateway to become available (this takes ~2 minutes)..."
aws ec2 wait nat-gateway-available --nat-gateway-ids $NAT_GW
echo "NAT Gateway is now available"
```
**Add Route to Private Route Table:**
```bash theme={null}
# Add default route to NAT Gateway in private route table
aws ec2 create-route \
--route-table-id $PRIVATE_RTB \
--destination-cidr-block 0.0.0.0/0 \
--nat-gateway-id $NAT_GW
```
**Verify Setup:**
```bash theme={null}
# Verify private route table
aws ec2 describe-route-tables \
--route-table-ids $PRIVATE_RTB \
--query 'RouteTables[0].Routes' \
--output table
# Should show:
# - 172.31.0.0/16 -> local (VPC internal routing)
# - 0.0.0.0/0 -> nat-xxxxx (Internet via NAT)
```
### Create Security Groups
```bash theme={null}
# ALB security group (allows HTTPS from internet)
ALB_SG=$(aws ec2 create-security-group \
--group-name powersync-alb-sg \
--description "PowerSync ALB" \
--vpc-id $VPC_ID \
--query 'GroupId' --output text)
echo "ALB Security Group: $ALB_SG"
aws ec2 authorize-security-group-ingress \
--group-id $ALB_SG \
--protocol tcp --port 443 --cidr 0.0.0.0/0
# ECS security group (allows traffic from ALB only)
ECS_SG=$(aws ec2 create-security-group \
--group-name powersync-ecs-sg \
--description "PowerSync ECS tasks" \
--vpc-id $VPC_ID \
--query 'GroupId' --output text)
echo "ECS Security Group: $ECS_SG"
aws ec2 authorize-security-group-ingress \
--group-id $ECS_SG \
--protocol tcp --port 8080 --source-group $ALB_SG
echo "Security groups created successfully"
```
## 3. Application Load Balancer
### Domain Setup
PowerSync requires a domain name for SSL certificate provisioning. You can either:
* Use an existing domain by creating a Route 53 hosted zone and updating your registrar's nameservers
* Register a new domain directly through Route 53
For detailed instructions, follow the official AWS guides:
* [Configuring DNS routing for a new domain](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-configuring-new-domain.html) - For existing domains
* [Registering a new domain](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-register.html) - To register through Route 53
Once your hosted zone is created, export the zone ID:
```bash theme={null}
export HOSTED_ZONE_ID=$(aws route53 list-hosted-zones-by-name \
--dns-name yourdomain.com \
--query 'HostedZones[0].Id' \
--output text)
echo "Hosted Zone ID: $HOSTED_ZONE_ID"
```
### Request SSL Certificate
For secure HTTPS connections, request an SSL certificate using AWS Certificate Manager (ACM):
```bash theme={null}
# Set your domain name
POWERSYNC_DOMAIN="powersync.yourdomain.com" # Change to your domain
# Request certificate
CERT_ARN=$(aws acm request-certificate \
--domain-name $POWERSYNC_DOMAIN \
--validation-method DNS \
--region $AWS_REGION \
--query 'CertificateArn' \
--output text)
echo "Certificate ARN: $CERT_ARN"
# Get validation record details
VALIDATION_NAME=$(aws acm describe-certificate \
--certificate-arn $CERT_ARN \
--region $AWS_REGION \
--query 'Certificate.DomainValidationOptions[0].ResourceRecord.Name' \
--output text)
VALIDATION_VALUE=$(aws acm describe-certificate \
--certificate-arn $CERT_ARN \
--region $AWS_REGION \
--query 'Certificate.DomainValidationOptions[0].ResourceRecord.Value' \
--output text)
echo "Validation Name: $VALIDATION_NAME"
echo "Validation Value: $VALIDATION_VALUE"
```
**Add DNS Validation Record:**
Add the `CNAME` record using your DNS provider's management console:
| Type | Name | Value | TTL |
| ------- | ------------------- | -------------------- | ----- |
| `CNAME` | `[VALIDATION_NAME]` | `[VALIDATION_VALUE]` | `300` |
Add the `CNAME` record using AWS CLI:
```bash theme={null}
aws route53 change-resource-record-sets \
--hosted-zone-id $HOSTED_ZONE_ID \
--change-batch '{
"Changes": [{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": "'$VALIDATION_NAME'",
"Type": "CNAME",
"TTL": 300,
"ResourceRecords": [{"Value": "'$VALIDATION_VALUE'"}]
}
}]
}'
```
**Wait for Certificate Validation:**
```bash theme={null}
aws acm wait certificate-validated --certificate-arn $CERT_ARN --region $AWS_REGION
```
### Create ALB
```bash theme={null}
# Create load balancer
ALB_ARN=$(aws elbv2 create-load-balancer \
--name powersync-alb \
--subnets $PUBLIC_SUBNET_1 $PUBLIC_SUBNET_2 \
--security-groups $ALB_SG \
--scheme internet-facing \
--query 'LoadBalancers[0].LoadBalancerArn' \
--output text)
echo "ALB ARN: $ALB_ARN"
# Create target group
TG_ARN=$(aws elbv2 create-target-group \
--name powersync-tg \
--protocol HTTP \
--port 8080 \
--vpc-id $VPC_ID \
--target-type ip \
--health-check-path /probes/liveness \
--health-check-interval-seconds 30 \
--query 'TargetGroups[0].TargetGroupArn' \
--output text)
echo "Target Group ARN: $TG_ARN"
# Create HTTPS listener
LISTENER_ARN=$(aws elbv2 create-listener \
--load-balancer-arn $ALB_ARN \
--protocol HTTPS \
--port 443 \
--certificates CertificateArn=$CERT_ARN \
--default-actions Type=forward,TargetGroupArn=$TG_ARN \
--query 'Listeners[0].ListenerArn' \
--output text)
echo "Listener ARN: $LISTENER_ARN"
# Configure WebSocket support
# PowerSync uses long-lived WebSocket connections for real-time sync
# Default ALB timeout is 60s, which would disconnect clients prematurely
# Setting to 3600s (1 hour) prevents unnecessary disconnections
aws elbv2 modify-load-balancer-attributes \
--load-balancer-arn $ALB_ARN \
--attributes Key=idle_timeout.timeout_seconds,Value=3600
```
## 4. DNS Configuration
Point your domain to the load balancer:
```bash theme={null}
# Get ALB DNS name
ALB_DNS=$(aws elbv2 describe-load-balancers \
--names powersync-alb \
--query 'LoadBalancers[0].DNSName' \
--output text)
ALB_ZONE=$(aws elbv2 describe-load-balancers \
--names powersync-alb \
--query 'LoadBalancers[0].CanonicalHostedZoneId' \
--output text)
echo "ALB DNS: $ALB_DNS"
echo "ALB Zone: $ALB_ZONE"
```
Using an external DNS provider, create a `CNAME` record pointing to the ALB DNS name:
| Type | Name | Value | TTL |
| ------- | -------------------------- | ----------- | ----- |
| `CNAME` | `powersync.yourdomain.com` | `[ALB_DNS]` | `300` |
Or, using Route 53, create an alias `A` record pointing to the ALB:
```bash theme={null}
aws route53 change-resource-record-sets \
--hosted-zone-id $HOSTED_ZONE_ID \
--change-batch '{
"Changes": [{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": "'$POWERSYNC_DOMAIN'",
"Type": "A",
"AliasTarget": {
"HostedZoneId": "'$ALB_ZONE'",
"DNSName": "'$ALB_DNS'",
"EvaluateTargetHealth": true
}
}
}]
}'
```
## 5. Secrets Manager
Store your PowerSync configuration and connection strings securely in AWS Secrets Manager. This allows you to reference them in your ECS task definition without hardcoding sensitive information.
```bash theme={null}
# Store config
aws secretsmanager create-secret \
--name powersync/config \
--secret-string file://powersync.yaml
# Store connection strings
# Set your source database connection string (e.g., PostgreSQL, MongoDB, MySQL, or SQL Server)
aws secretsmanager create-secret \
--name powersync/data-source-uri \
--secret-string "postgresql://user:pass@host:5432/db"
# Set your replication bucket storage connection string (e.g., MongoDB or Postgres)
aws secretsmanager create-secret \
--name powersync/storage-uri \
--secret-string "mongodb://user:pass@host:27017/?replicaSet=rs0"
aws secretsmanager create-secret \
--name powersync/jwks-url \
--secret-string "https://your-auth-provider.com/.well-known/jwks.json"
```
AWS Secrets Manager automatically appends a 6-character suffix to secret ARNs (e.g., `powersync/config-AbCdEf`).
ECS task definitions support **prefix matching**, allowing you to reference secrets using just the base name:
* Created as: `powersync/config-AbCdEf` (with suffix)
* Referenced as: `arn:aws:secretsmanager:region:account:secret:powersync/config` (without suffix)
This means you don't need to update task definitions when secrets are rotated.
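For illustration, the suffix-less ARN can be assembled from the region, account ID, and secret name (a hypothetical helper mirroring the example above):

```typescript
// Hypothetical helper: assemble the suffix-less secret ARN that an ECS
// task definition can reference thanks to prefix matching.
export function secretArn(region: string, accountId: string, name: string): string {
  return `arn:aws:secretsmanager:${region}:${accountId}:secret:${name}`;
}

// e.g. secretArn('us-east-1', '123456789012', 'powersync/config')
// -> 'arn:aws:secretsmanager:us-east-1:123456789012:secret:powersync/config'
```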
## 6. ECS Task Definition
The ECS task definition specifies how to run the PowerSync container, including environment variables, secrets, resource limits, and health checks.
### Create IAM Role
```bash theme={null}
# Create execution role
aws iam create-role \
--role-name PowerSyncTaskExecutionRole \
--assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"Service": "ecs-tasks.amazonaws.com"},
"Action": "sts:AssumeRole"
}]
}'
# Wait for role to propagate
sleep 10
aws iam attach-role-policy \
--role-name PowerSyncTaskExecutionRole \
--policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
# Add Secrets Manager access
aws iam put-role-policy \
--role-name PowerSyncTaskExecutionRole \
--policy-name SecretsAccess \
--policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": ["secretsmanager:GetSecretValue"],
"Resource": "arn:aws:secretsmanager:'$AWS_REGION':'$AWS_ACCOUNT_ID':secret:powersync/*"
}]
}'
# Save role ARN
TASK_EXECUTION_ROLE_ARN="arn:aws:iam::$AWS_ACCOUNT_ID:role/PowerSyncTaskExecutionRole"
echo "Task Execution Role ARN: $TASK_EXECUTION_ROLE_ARN"
```
### Create Cluster
```bash theme={null}
aws ecs create-cluster \
--cluster-name powersync-cluster \
--capacity-providers FARGATE
```
### Register Task Definition
The task definitions below allocate **2 vCPU and 2GB memory** per container. You can adjust resources based on your workload — see [Deployment Architecture](/maintenance-ops/self-hosting/deployment-architecture) for scaling guidance (recommended baseline: 1 vCPU, 1GB memory).
For production deployments, run separate replication and API processes to enable zero-downtime rolling updates. This allows independent scaling of API containers.
**Create Replication Task Definition**
```bash theme={null}
cat > replication-task-definition.json <<'EOF'
... (replication task definition JSON elided) ...
EOF
aws ecs register-task-definition --cli-input-json file://replication-task-definition.json
```
**Create API Task Definition**
```bash theme={null}
cat > api-task-definition.json <<'EOF'
... (API task definition JSON elided) ...
EOF
aws ecs register-task-definition --cli-input-json file://api-task-definition.json
```
This basic setup runs both replication and API processes in the same container. This is not recommended for production.
Generate the task definition using your environment variables:
```bash theme={null}
cat > task-definition.json <<'EOF'
... (task definition JSON elided) ...
EOF
aws ecs register-task-definition --cli-input-json file://task-definition.json
```
## 7. Deploy ECS Service
Create the ECS services that run the PowerSync tasks.
For production deployments, run separate replication and API processes to enable zero-downtime rolling updates. This allows independent scaling of API containers.
**Deploy Replication Service (1 Instance)**
```bash theme={null}
aws ecs create-service \
--cluster powersync-cluster \
--service-name powersync-replication \
--task-definition powersync-replication \
--desired-count 1 \
--launch-type FARGATE \
--network-configuration "awsvpcConfiguration={
subnets=[$PRIVATE_SUBNET_1,$PRIVATE_SUBNET_2],
securityGroups=[$ECS_SG],
assignPublicIp=DISABLED
}" \
--deployment-configuration "minimumHealthyPercent=0,maximumPercent=100"
```
**Deploy API Service (2+ Instances)**
```bash theme={null}
aws ecs create-service \
--cluster powersync-cluster \
--service-name powersync-api \
--task-definition powersync-api \
--desired-count 2 \
--launch-type FARGATE \
--network-configuration "awsvpcConfiguration={
subnets=[$PRIVATE_SUBNET_1,$PRIVATE_SUBNET_2],
securityGroups=[$ECS_SG],
assignPublicIp=DISABLED
}" \
--load-balancers "targetGroupArn=$TG_ARN,containerName=powersync-api,containerPort=8080" \
--health-check-grace-period-seconds 120 \
--deployment-configuration "minimumHealthyPercent=100,maximumPercent=200"
```
**Verify HA Deployment:**
```bash theme={null}
# Check replication service status
aws ecs describe-services \
--cluster powersync-cluster \
--services powersync-replication \
--query 'services[0].[serviceName,status,runningCount,desiredCount]' \
--output table
# Check API service status
aws ecs describe-services \
--cluster powersync-cluster \
--services powersync-api \
--query 'services[0].[serviceName,status,runningCount,desiredCount]' \
--output table
# Wait for tasks to be running (takes 2-3 minutes)
echo "Waiting for tasks to start..."
sleep 60
# Test endpoint (replace with your domain)
curl https://$POWERSYNC_DOMAIN/probes/liveness
# View API logs
aws logs tail /ecs/powersync-api --follow
# View replication logs
aws logs tail /ecs/powersync-replication --follow
```
This basic setup runs both replication and API processes in the same container. Running multiple instances (`desired-count > 1`) will cause **Sync Rule lock errors during rolling updates** when deploying new task definitions. A single-instance setup is not recommended for production.
```bash theme={null}
aws ecs create-service \
--cluster powersync-cluster \
--service-name powersync-service \
--task-definition powersync-service \
--desired-count 1 \
--launch-type FARGATE \
--network-configuration "awsvpcConfiguration={
subnets=[$PRIVATE_SUBNET_1,$PRIVATE_SUBNET_2],
securityGroups=[$ECS_SG],
assignPublicIp=DISABLED
}" \
--load-balancers "targetGroupArn=$TG_ARN,containerName=powersync,containerPort=8080" \
--health-check-grace-period-seconds 120 \
--deployment-configuration "minimumHealthyPercent=0,maximumPercent=100"
```
**Verify Basic Deployment:**
```bash theme={null}
# Check service status
aws ecs describe-services \
--cluster powersync-cluster \
--services powersync-service \
--query 'services[0].[serviceName,status,runningCount,desiredCount]' \
--output table
# Wait for task to be running (takes 2-3 minutes)
echo "Waiting for tasks to start..."
sleep 60
# Test endpoint (replace with your domain)
curl https://$POWERSYNC_DOMAIN/probes/liveness
# View logs
aws logs tail /ecs/powersync --follow
```
## Production Enhancements
For production deployments, consider adding the following enhancements:
### Daily Compact Job (Recommended)
PowerSync requires [daily compaction](/maintenance-ops/compacting-buckets) to optimize bucket storage. Schedule it as an ECS task with EventBridge:
Generate the compact task definition:
```bash theme={null}
cat > compact-task-definition.json <
```
### Auto Scaling (High-Availability Setup)
The auto-scaling configuration below only scales based on CPU usage. We are working on expanding this page with additional details on how to also auto-scale based on the number of concurrent connections per API pod. As seen in the [Deployment Architecture](/maintenance-ops/self-hosting/deployment-architecture) documentation, it is recommended to have 1 API pod per 100 concurrent client connections.
```bash theme={null}
aws application-autoscaling register-scalable-target \
--service-namespace ecs \
--resource-id service/powersync-cluster/powersync-api \
--scalable-dimension ecs:service:DesiredCount \
--min-capacity 2 \
--max-capacity 10
aws application-autoscaling put-scaling-policy \
--service-namespace ecs \
--resource-id service/powersync-cluster/powersync-api \
--scalable-dimension ecs:service:DesiredCount \
--policy-name cpu-scaling \
--policy-type TargetTrackingScaling \
--target-tracking-scaling-policy-configuration '{
"TargetValue": 70.0,
"PredefinedMetricSpecification": {"PredefinedMetricType": "ECSServiceAverageCPUUtilization"}
}'
```
## Troubleshooting
| Symptom | Solution |
| ----------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
| Tasks fail health checks | Check logs: `aws logs tail /ecs/powersync --follow`. Increase `startPeriod` in the health check to 120. |
| 502 Bad Gateway | Verify security groups allow ALB→ECS on port 8080. Check that tasks are running: `aws ecs list-tasks --cluster powersync-cluster`. |
| WebSocket disconnects | Verify the ALB idle timeout is 3600s (set in [Step 3](#3-application-load-balancer)). |
| Can't pull image | Verify the NAT Gateway exists and the route table is configured correctly. Check that the NAT Gateway has internet access. |
| Secrets not loaded | Check that the IAM role has `secretsmanager:GetSecretValue` permission. Verify the secrets exist: `aws secretsmanager list-secrets`. |
| Sync Rule lock errors during deploy | Multiple instances are running without an HA setup. Use the [High Availability Setup](#high-availability-setup) for production. |
| CIDR block conflicts | Adjust the CIDR blocks in [Step 2](#2-vpc-and-networking-setup) to match available VPC address space. |
| Certificate validation fails | Verify DNS nameservers are updated and propagated. Check that the validation CNAME record exists in Route 53. |
### Additional Resources
* [AWS ECS Best Practices](https://docs.aws.amazon.com/AmazonECS/latest/bestpracticesguide/) - AWS's official guide covering security, networking, monitoring, and performance optimization for ECS deployments
* [Self-Host Demo Repository](https://github.com/powersync-ja/self-host-demo) - Working example implementations of PowerSync self-hosting across different platforms and configurations
# Deploy PowerSync Service on Coolify
Source: https://docs.powersync.com/maintenance-ops/self-hosting/coolify
Guide for deploying the [PowerSync Service](/architecture/powersync-service) on Coolify
[Coolify](https://coolify.io/) is an open-source, self-hosted platform that simplifies the deployment and management of applications, databases, and services on your own infrastructure.
Think of it as a self-hosted alternative to platforms like Heroku or Netlify.
Before following this guide, you should:
* Read through the [Service Configuration](/configuration/powersync-service/self-hosted-instances)
guide to understand the requirements and configuration options. This guide assumes you have already done so, and will only cover the Coolify specific setup.
* Have Coolify installed and running.
## Background
For the PowerSync Service to function correctly, you will need:
* A database,
* Authentication service, and
* Data upload service.
The easiest way to get started is to use **Supabase** as it provides all three. However, you can also use a different database, and custom authentication and data upload services.
## Steps
Add the [`Compose file`](#base-compose-file) as a Docker Compose Empty resource to your project.
Update the environment variables and config files.
Instructions for each can be found in the [Configuration options](#configuration-options) section.
Click on the `Deploy` button to deploy the PowerSync Service.
The PowerSync Service will now be available at
* `http://localhost:8080` if default config was used, or
* `http://{your_coolify_domain}:{PS_PORT}` if a custom domain or port was specified.
To check the health of the PowerSync Service, see [Health Checks](/maintenance-ops/self-hosting/healthchecks).
## Configuration Options
The following configuration options should be updated:
* Environment variables
* `sync-config.yaml` file (according to your data requirements)
* `powersync.yaml` file
| Environment Variable | Value |
| -------------------- | ----- |
| `PS_DATABASE_TYPE` | `postgresql` |
| `PS_DATABASE_URI` | **Connection string obtained from Supabase.** See step 5 in [Connect PowerSync to Your Supabase](/integrations/supabase/guide#connect-powersync-to-your-supabase) |
| `PS_PORT` | **Keep default value (8080)** |
| `PS_MONGO_URI` | `mongodb://mongo:27017` |
| `PS_JWKS_URL` | **Keep default value** |
```yaml {5} theme={null}
...
# Client (application end user) authentication settings
client_auth:
  # Enable this if using Supabase Auth
  supabase: true
...
```
| Environment Variable | Value |
| -------------------- | ----- |
| `PS_DATABASE_TYPE` | `postgresql`, `mongodb`, `mysql`, or SQL Server |
| `PS_DATABASE_URI` | The database connection URI (according to your database type) where your data is stored. |
| `PS_PORT` | **Default value (8080)**. You can change this if you want the PowerSync Service to be available on a different port. |
| `PS_MONGO_URI` | `mongodb://mongo:27017` |
| `PS_JWKS_URL` | The URL of the JWKS endpoint of your authentication service. |
```yaml {5, 11-15,18, 23} theme={null}
...
# Client (application end user) authentication settings
client_auth:
  # Enable this if using Supabase Auth
  supabase: false
  # JWKS URIs can be specified here
  jwks_uri: !env PS_JWKS_URL
  # Optional static collection of public keys for JWT verification
  jwks:
    keys:
      - kty: 'oct'
        k: 'use_a_better_token_in_production'
        alg: 'HS256'
  # JWKS audience
  audience: ["powersync-dev", "powersync", "http://localhost:8080"]

api:
  tokens:
    # These tokens are used for local admin API route authentication
    - use_a_better_token_in_production
```
## Base `Compose` File
The following Compose file serves as a universal starting point for deploying the PowerSync Service on Coolify.
```yaml theme={null}
services:
  mongo:
    image: mongo:7.0
    command: --replSet rs0 --bind_ip_all --quiet
    restart: unless-stopped
    ports:
      - 27017:27017
    volumes:
      - mongo_storage:/data/db

  # Initializes the MongoDB replica set. This service will not usually be actively running
  mongo-rs-init:
    image: mongo:7.0
    depends_on:
      - mongo
    restart: on-failure
    entrypoint:
      - bash
      - -c
      - 'mongosh --host mongo:27017 --eval ''try{rs.status().ok && quit(0)} catch {} rs.initiate({_id: "rs0", version: 1, members: [{ _id: 0, host : "mongo:27017" }]})'''

  # PowerSync Service
  powersync:
    image: journeyapps/powersync-service:latest
    container_name: powersync
    depends_on:
      - mongo-rs-init
    command: [ "start", "-r", "unified"]
    restart: unless-stopped
    environment:
      - NODE_OPTIONS="--max-old-space-size=1000"
      - POWERSYNC_CONFIG_PATH=/home/config/powersync.yaml
      - PS_DATABASE_TYPE=${PS_DEMO_BACKEND_DATABASE_TYPE:-postgresql}
      - PS_DATABASE_URI=${PS_DATABASE_URI:-postgresql://postgres:postgres@localhost:5432/postgres}
      - PS_PORT=${PS_PORT:-8080}
      - PS_MONGO_URI=${PS_MONGO_URI:-mongodb://mongo:27017}
      - PS_SUPABASE_AUTH=${USE_SUPABASE_AUTH:-false}
      - PS_JWKS_URL=${PS_JWKS_URL:-http://localhost:6060/api/auth/keys}
    ports:
      - ${PS_PORT}:${PS_PORT}
    volumes:
      - ./volumes/config:/home/config
      - type: bind
        source: ./volumes/config/sync-config.yaml
        target: /home/config/sync-config.yaml
        content: |
          config:
            edition: 3

          streams:
            user_list_data:
              # Sync all lists and todos for the authenticated user
              auto_subscribe: true
              queries:
                - SELECT * FROM lists WHERE owner_id = auth.user_id()
                - SELECT * FROM todos WHERE list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
      - type: bind
        source: ./volumes/config/powersync.yaml
        target: /home/config/powersync.yaml
        content: |
          # yaml-language-server: $schema=../schema/schema.json

          # Note that this example uses YAML custom tags for environment variable substitution.
          # Using `!env [variable name]` will substitute the value of the environment variable named
          # [variable name].

          # migrations:
          #   # Migrations run automatically by default.
          #   # Setting this to true will skip automatic migrations.
          #   # Migrations can be triggered externally by altering the container `command`.
          #   disable_auto_migration: true

          # Settings for telemetry reporting
          # See https://docs.powersync.com/self-hosting/telemetry
          telemetry:
            # Opt out of reporting anonymized usage metrics to PowerSync telemetry service
            disable_telemetry_sharing: false

          # Settings for source database replication
          replication:
            # Specify database connection details
            # Note only 1 connection is currently supported
            # Multiple connection support is on the roadmap
            connections:
              - type: !env PS_DATABASE_TYPE
                # The PowerSync server container can access the Postgres DB via the DB's service name.
                # In this case the hostname is pg-db
                # The connection URI or individual parameters can be specified.
                # Individual params take precedence over URI params
                uri: !env PS_DATABASE_URI
                # Or use individual params
                # hostname: pg-db # From the Docker Compose service name
                # port: 5432
                # database: postgres
                # username: postgres
                # password: mypassword

                # SSL settings
                sslmode: disable # 'verify-full' (default) or 'verify-ca' or 'disable'
                # 'disable' is OK for local/private networks, not for public networks

                # Required for verify-ca, optional for verify-full
                # This should be the certificate(s) content in PEM format
                # cacert: !env PS_PG_CA_CERT

                # Include a certificate here for HTTPs
                # This should be the certificate content in PEM format
                # client_certificate: !env PS_PG_CLIENT_CERT

                # This should be the key content in PEM format
                # client_private_key: !env PS_PG_CLIENT_PRIVATE_KEY

          # Connection settings for bucket storage
          # This is valid if using the `mongo` service defined in this Compose file
          storage:
            type: mongodb
            uri: !env PS_MONGO_URI
            # Use these if authentication is required. The user should have `readWrite` and `dbAdmin` roles
            # username: my-mongo-user
            # password: my-password

          # The port which the PowerSync API server will listen on
          port: !env PS_PORT

          # Specify Sync Streams (or legacy Sync Rules)
          sync_config:
            path: /home/config/sync-config.yaml

          # Client (application end user) authentication settings
          client_auth:
            # Enable this if using Supabase Auth
            supabase: true

            # JWKS URIs can be specified here
            jwks_uri: !env PS_JWKS_URL

            # Optional static collection of public keys for JWT verification
            # jwks:
            #   keys:
            #     - kty: 'RSA'
            #       n: !env PS_JWK_N
            #       e: !env PS_JWK_E
            #       alg: 'RS256'
            #       kid: !env PS_JWK_KID

            # JWKS audience
            audience: ["powersync-dev", "powersync"]

          api:
            tokens:
              # These tokens are used for local admin API route authentication
              - use_a_better_token_in_production
```
# Deployment Architecture
Source: https://docs.powersync.com/maintenance-ops/self-hosting/deployment-architecture
Infrastructure requirements, scaling, and deployment architecture for self-hosted PowerSync
## Minimal Setup
A minimal "development" setup (e.g. for a staging or a QA environment) is:
1. A single PowerSync "compute" container (API + replication) with 512MB memory, 1 vCPU.
2. A single MongoDB node in replica set mode, 2GB memory, 1 vCPU. M10+ when using Atlas.
3. Load balancer for TLS.
This setup has no redundancy. If the replica set fails, you may need to recreate it from scratch, which will re-sync all clients.
## Production
For production, we recommend running a high-availability setup with the following baseline requirements:
1. 1x PowerSync replication container, 1GB memory, 1 vCPU
2. 2+ PowerSync API containers, 1GB memory each, 1 vCPU each.
3. A 3-node MongoDB replica set, 2+GB memory each. Refer to the MongoDB documentation for deployment requirements. M10+ when using Atlas.
4. A load balancer with redundancy.
5. Run a daily compact job.
For further robustness, in cases where you have larger rows and higher load, we recommend increasing the memory and CPU requirements to 2GB memory and 2 vCPU. This applies to both the replication and API containers.
When scaling up, add 1x PowerSync API container per 100 connections. The MongoDB replica set should be scaled based on CPU and memory usage.
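The per-container guideline translates into a simple capacity calculation. A minimal sketch, where the target of 750 concurrent connections is an assumed example:

```bash theme={null}
# Hypothetical capacity calculation: API containers needed at the
# recommended 100 concurrent connections per container.
connections=750                               # assumed example target
containers=$(( (connections + 99) / 100 ))    # ceiling division
echo "${containers} API containers for ${connections} connections"
```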
### Replication Container
The replication container handles replicating from the source database to PowerSync's bucket storage.
The replication process is run using the docker command `start -r sync`, for example `docker run powersync start -r sync`.
Only one process can replicate at a time. If multiple are running concurrently, you may see an error `[PSYNC_S1003] Sync rules have been locked by another process for replication`.
If you use rolling deploys, it is normal to see this error for a short duration while multiple processes are running.
Memory and CPU usage of the replication container is primarily driven by write load on the source database. A good starting point is 1GB memory and 1 vCPU for the container, but this may be scaled down depending on the load patterns.
Set the environment variable `NODE_OPTIONS=--max-old-space-size=800` for 800MB, or set to 80% of the total assigned memory if scaling up or down.
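The 80% rule can be derived directly from the container's memory limit. A minimal sketch, assuming an example limit of 1024 MiB:

```bash theme={null}
# Sketch: size the Node.js heap at 80% of the container memory limit (in MiB).
container_mem_mib=1024                          # assumed example limit
heap_mib=$(( container_mem_mib * 80 / 100 ))    # 80% of assigned memory
echo "NODE_OPTIONS=--max-old-space-size=${heap_mib}"
```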
### API Containers
The API container handles streaming sync connections, as well as any other API calls.
The API process is run using the docker command `start -r api`, for example `docker run powersync start -r api`.
Each API container is limited to 200 concurrent connections, but we recommend targeting 100 concurrent connections or less per container. This may change as we implement additional performance optimizations.
Memory and CPU usage of API containers are driven by:
1. Number of concurrent connections.
2. Number of buckets per connection.
3. Amount of data synced to each connection.
A good starting point is 1GB memory and 1 vCPU per container, but this may be scaled up or down depending on the specific load patterns.
Set the environment variable `NODE_OPTIONS=--max-old-space-size=800` for 800MB, or set to 80% of the total assigned memory if scaling up or down.
### Compact Job
We recommend running a compact job daily as a cron job, or after any large maintenance jobs. For details, see the documentation on [Compacting Buckets](/maintenance-ops/compacting-buckets).
Run the compact job using the docker command `compact`, for example `docker run powersync compact`.
The compact job uses up to 1GB memory for compacting, if available. Set the environment variable `NODE_OPTIONS=--max-old-space-size=800` for 800MB, or set to 80% of the total assigned memory if scaling up or down.
### Load Balancer
A load balancer is required in front of the API containers to provide TLS support and load balancing. Most cloud providers have built-in options for load balancing, such as ALB on AWS.
It is currently required to host the API container on a dedicated subdomain; running it on the same subdomain as another service is not supported.
For self-hosting, [nginx](https://nginx.org/en/) is always a good option. A basic nginx configuration could look like this:
```nginx theme={null}
server {
    listen 443 ssl;
    server_name powersync.example.org;

    # SSL configuration here

    # Reverse proxy settings
    location / {
        proxy_pass http://powersync_server_ip:powersync_port; # Replace with your powersync details
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Disable proxy response buffering.
        # This is not relevant for websocket connections, but is important when using
        # HTTP streaming connections (configured in the PowerSync Client SDK).
        proxy_buffering off;
    }
}
```
When using nginx as a Kubernetes ingress, set the proxy buffering option as an annotation on the ingress:
```yaml theme={null}
nginx.ingress.kubernetes.io/proxy-buffering: "off"
```
### Health Checks
If the load balancer supports health checks, it may be configured to poll the API container at `/probes/liveness`. This endpoint is expected to have a 200 response when the container is healthy. See [Health Checks](/maintenance-ops/self-hosting/healthchecks) for details.
### Migrations
Occasionally, new versions of the PowerSync Service image may require migrations on the underlying storage database. This is also specifically required the first time the service starts up on a new storage database.
By default, migrations are run as part of the replication and API containers. In some cases, a migration may add significant delay to the container startup.
To avoid this startup delay, the migrations may be run as a separate job on each update, before replacing the rest of the containers. To run the migrations, run the docker command `migrate up`, for example `docker run powersync migrate up`.
In this case, disable automatic migrations in the config:
```yaml theme={null}
# powersync.yaml
migrations:
  # Setting this to false (default) enables automatic migrations on startup.
  # When set to true, migrations must be triggered manually by modifying the container `command`.
  disable_auto_migration: true
```
Note that if you disable automatic migrations, and do not run the migration job manually,
the service may run with an outdated storage schema version. This may lead to unexpected and potentially difficult-to-debug errors in the service.
## Backups
We recommend using Git to backup your configuration files.
None of the containers use any local storage, so no backups are required there.
The bucket storage database may be backed up using the recommendations for the storage database system. This is not a strong requirement, since this data can be recovered by re-replicating from the source database.
## Self-Hosted Architecture Diagram
# Diagnostics
Source: https://docs.powersync.com/maintenance-ops/self-hosting/diagnostics
How to use the PowerSync Service Diagnostics API
All self-hosted PowerSync Service instances ship with a Diagnostics API.
This API provides the following diagnostic information:
* Connections → Connected backend source database and any active errors associated with the connection.
* Active Sync Streams / Sync Rules → Currently deployed Sync Streams (or legacy Sync Rules) and their status.
## CLI
If you have the [PowerSync CLI](/tools/cli) installed, use `powersync status` to check instance status without calling the API directly. This works with any running PowerSync instance, local or remote.
```bash theme={null}
powersync status
# Extract a specific field
powersync status --output=json | jq '.connections[0]'
```
## Diagnostics API
### Configuration
1. To enable the Diagnostics API, specify an API token in your PowerSync YAML file:
```yaml powersync.yaml theme={null}
api:
  tokens:
    - YOUR_API_TOKEN
```
Make sure to use a secure API token as part of this configuration.
2. Restart the PowerSync Service.
3. Once configured, send an HTTP request to your PowerSync Service Diagnostics API endpoint. Include the API token set in step 1 as a Bearer token in the Authorization header.
```shell theme={null}
curl -X POST http://localhost:8080/api/admin/v1/diagnostics \
-H "Authorization: Bearer YOUR_API_TOKEN"
```
# Health Checks
Source: https://docs.powersync.com/maintenance-ops/self-hosting/healthchecks
## Overview
PowerSync Service provides health check endpoints and configuration options to help you monitor the health and readiness of your deployment. These checks allow you to catch issues before they impact your users.
## Health Check Endpoints
The following HTTP endpoints are available:
* **Startup Probe:**\
`GET /probes/startup`
* `200` – Service has started up correctly
* `400` – Service has **not** yet started
* **Liveness Probe:**\
`GET /probes/liveness`
* `200` – Service is alive
* `400` – Service is **not** alive
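In Kubernetes, these endpoints map directly onto startup and liveness probes. A hypothetical container spec fragment, assuming the default port 8080:

```yaml theme={null}
# Hypothetical Kubernetes probes using the endpoints above
startupProbe:
  httpGet:
    path: /probes/startup
    port: 8080
  periodSeconds: 5
  failureThreshold: 30   # allows up to ~150s for startup
livenessProbe:
  httpGet:
    path: /probes/liveness
    port: 8080
  periodSeconds: 10
```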
## Example: Docker Health Checks
A configuration with Docker Compose might look like:
```yaml theme={null}
healthcheck:
  test: ["CMD", "node", "-e", "fetch('http://localhost:${PS_PORT}/probes/liveness').then(r => r.ok ? process.exit(0) : process.exit(1)).catch(() => process.exit(1))"]
  interval: 5s
  timeout: 1s
  retries: 15
```
You can find a complete example in the [self-host-demo app](https://github.com/powersync-ja/self-host-demo/blob/main/services/powersync.yaml).
## Advanced: Configurable Health Check Probes (v1.12.0+)
Starting with version **1.12.0**, PowerSync Service supports configurable health check probes.\
You can now choose between filesystem-based and HTTP-based probes, or use both, via the config file. This is especially useful for environments with restricted I/O.
**Configuration options:**
```yaml theme={null}
healthcheck:
  probes:
    use_filesystem: true # Enables filesystem-based health probes
    use_http: true # Enables HTTP-based health probes
```
If no `healthcheck` configuration is provided, the service defaults to the previous behavior for backwards compatibility.
# Metrics
Source: https://docs.powersync.com/maintenance-ops/self-hosting/metrics
Managing and using the PowerSync Service Metrics
## Metrics Endpoint
PowerSync exposes instance metrics via a Prometheus-compatible endpoint. This allows you to integrate with Prometheus or other monitoring systems that scrape Prometheus endpoints.
Scraping the Prometheus endpoint manually is not recommended; use Prometheus or another compatible monitoring tool. PowerSync does not currently support pushing to OpenTelemetry collectors.
### Configuration
1. To enable metrics, update your PowerSync YAML file to include the `prometheus_port` and set a port number.
```yaml powersync.yaml theme={null}
telemetry:
  # Set the port at which the Prometheus metrics will be exposed
  prometheus_port: 9090
```
2. Update your Docker compose file to forward the `prometheus_port`.
```yaml docker-compose.yaml theme={null}
ports:
  # Forward port 8080 for the PowerSync Service
  - 8080:8080
  # Forward port 9090 for Prometheus metrics
  - 9090:9090
```
Once enabled, restart the service and the metrics endpoint will return Prometheus-formatted metrics, as described in the [What is Collected](/maintenance-ops/self-hosting/telemetry#whatiscollected) section of the [Telemetry](/maintenance-ops/self-hosting/telemetry) docs.
If you're running multiple containers (e.g. splitting up replication containers and API containers) you need to scrape the metrics separately for each container.
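As a sketch, a minimal Prometheus scrape configuration for this setup might look as follows (the `powersync` target hostname is an assumption based on the Compose service name):

```yaml theme={null}
# Hypothetical prometheus.yml fragment scraping the metrics port set above
scrape_configs:
  - job_name: powersync
    scrape_interval: 30s
    static_configs:
      - targets: ['powersync:9090']
```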
# Migrating Between Instances
Source: https://docs.powersync.com/maintenance-ops/self-hosting/migrating-instances
Migrating users between PowerSync instances
## Overview
In some cases, you may want to migrate users between PowerSync instances. This may be between cloud and self-hosted instances, or even just to change the endpoint.
If the PowerSync instances use the same source database and have the same basic configuration and Sync Streams (or legacy Sync Rules), you can migrate users by just changing the endpoint to the new instance.
To make this process easier, we recommend using an API to retrieve the PowerSync endpoint, instead of hardcoding the endpoint in the client application. If you're using custom authentication, this can be done in the same API call as getting the authentication token.
There should be no downtime for users when switching between endpoints. The client will have to re-sync all data, but this will all happen automatically, and the client will atomically switch between the two. The main effect visible to users will be a delay in syncing new data while the client is re-syncing. All data will remain available to read on the client for the entire process.
# Multiple PowerSync Instances
Source: https://docs.powersync.com/maintenance-ops/self-hosting/multiple-instances
Scaling using multiple instances
## Overview
Multiple instances are not required in most cases. See the [Deployment Architecture](/maintenance-ops/self-hosting/deployment-architecture) for details on standard horizontal scaling setups.
When exceeding a couple thousand concurrent connections, the standard PowerSync setup may not scale sufficiently to handle the load. In this case, we recommend you [contact us](/resources/contact-us) to discuss the options. Below is a basic overview of how multiple PowerSync instances can be used to scale.
Each PowerSync "instance" is a single endpoint (URL), that is backed by:
1. One replication container.
2. Multiple API containers, scaling horizontally.
3. One bucket storage database.
This setup is described in the [Deployment Architecture](/maintenance-ops/self-hosting/deployment-architecture).
To scale further, multiple copies of this setup can be run, using the same source database.
## Mapping users to PowerSync endpoints
Since each PowerSync instance maintains its own copy of the bucket data, the exact list of operations and associated checksum will be different between them. This means the same client must connect to the same endpoint every time, otherwise they will have to re-sync all their data every time they switch. Multiple PowerSync instances cannot be load-balanced behind the same subdomain.
To ensure the same user always connects to the same endpoint, we recommend:
1. Do an API lookup from the client application to get the PowerSync endpoint, don't hardcode it in the application.
2. Either store the endpoint associated with each user, or compute it automatically using a hash function on the user id e.g. `hash(user_id) % n` where `n` is your number of instances.
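As a sketch of the hash-based mapping (the endpoint naming scheme and `example.org` domain are hypothetical):

```bash theme={null}
# Hypothetical sketch: deterministically map a user ID to one of n endpoints,
# so the same user always reconnects to the same PowerSync instance.
endpoint_for_user() {
  user_id="$1"; n="$2"
  # cksum gives a stable CRC-32 of the user ID; modulo selects an instance
  idx=$(( $(printf '%s' "$user_id" | cksum | cut -d' ' -f1) % n ))
  echo "https://powersync-${idx}.example.org"
}
endpoint_for_user "user-123" 3
```

In practice this lookup would live in your backend's endpoint/token API rather than in the client, so the mapping can evolve without shipping an app update.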
# Self-Hosting Maintenance & Ops
Source: https://docs.powersync.com/maintenance-ops/self-hosting/overview
The [PowerSync CLI](/tools/cli) provides commands that work alongside any running self-hosted instance: `powersync status`, `powersync validate`, `powersync generate schema`, `powersync generate token`. You don't need to have set up the instance with the CLI to use these.
## Production Topics
Details for production self-hosted PowerSync deployments, including architecture/setup recommendations, security, health checks, maintenance, and monitoring.
## Deployment Platforms
Guides for deploying self-hosted PowerSync on common platforms:
Coolify is an open-source & self-hostable alternative to Heroku / Netlify / Vercel / etc.
Railway is a managed cloud platform (PaaS) for deploying and scaling applications, services, and databases via containers.
Amazon Elastic Container Service (ECS) is a fully managed container orchestration service.
# Railway + PowerSync
Source: https://docs.powersync.com/maintenance-ops/self-hosting/railway
Deploy PowerSync Service with a custom backend on Railway, including Postgres source database, bucket storage, and sync diagnostics client.
[Railway](https://railway.com/) is a managed cloud platform (PaaS) for deploying and scaling applications, services, and databases via containers.
## Step 1: Deploy on Railway
Find the "PowerSync Starter (Postgres)" template on the Railway Marketplace, or click the button below to get started:
The starter template will deploy and boot with a default configuration for all of the services. You can always add more services to your project as you need them and update the configuration as you go.
Once you've opened the deployed project in Railway, you'll see the following services:
* **PowerSync Service**: The PowerSync Service is responsible for syncing data between the Postgres database and your client applications.
* **Postgres Source Data**: The Postgres source data is the database that contains your application data.
* **Postgres (PowerSync Bucket Storage)**: The Postgres (PowerSync Bucket Storage) is the database that contains your PowerSync bucket data.
* **Demo Node.js Backend**: The Node.js backend is the backend for your application. It is responsible for generating JWT tokens and handling API requests for upload events from a connected client. Note that this backend is not secured at all and is intended for demo purposes only.
* **Sync Diagnostics Client**: The Sync Diagnostics Client is a web app that implements the [PowerSync Web SDK](/client-sdks/reference/javascript-web) and allows you to test your PowerSync connection and see the data that is being synced.
* **Execute Scripts**: The Execute Scripts service is used to apply schema changes to your PowerSync Postgres instance.
This template automatically creates `lists` and `todos` tables in your Postgres database. The default Sync Rules are configured to sync these tables to your clients.
The `Execute Scripts` service creates the **powersync** publication for these tables. We recommend limiting the publication to only the tables you want clients to download.
Once you're up and running with the default `lists` and `todos` tables, you can add more tables at any time using either of these approaches:
**Option 1: Use your existing Postgres tools**
Manage your database schema as you normally would. For example, using `psql`:
```shell theme={null}
psql $POSTGRES_URL <<'SQL'
CREATE TABLE notes (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  list_id UUID REFERENCES lists(id) ON DELETE CASCADE,
  content TEXT,
  created_at TIMESTAMP DEFAULT now()
);
-- The powersync publication already exists (created by the Execute Scripts service),
-- so add the new table to it rather than creating it again:
ALTER PUBLICATION powersync ADD TABLE notes;
SQL
```
**Option 2: Use the Execute Scripts service**
The Execute Scripts service can also be used as a general-purpose tool to apply schema changes to your PowerSync Postgres instance:
1. Add your new table creation statements and publication updates to the Execute Scripts code
2. Redeploy the Execute Scripts service
**After adding tables with either option:**
1. Update the Sync Rules to include the new tables in the `sync_config` section of your YAML config.
2. Re-encode the YAML config to base64 and update the `POWERSYNC_CONFIG_B64` environment variable. See [Understanding the PowerSync Service Configuration](#understanding-the-powersync-service-configuration) for more details.
## Step 2: Test with the Sync Diagnostics Client
1. Generate a development token
* The `Demo Node.js Backend` service has a `/api/auth/token` endpoint you can hit to get a development JWT token. You can use this endpoint to generate a development token.
* You can also generate a development token by following the [Generate development token](/configuration/auth/development-tokens#self-hosted) tutorial.
2. Open the `Sync Diagnostics Client` service in the browser.
3. Paste your token to test your connection and Sync Rules.
## Step 3: Connect Your Client
Follow our client-side SDK guides to connect your app to your backend and PowerSync instance, and to learn how to implement your client-side application.
## Step 4: Implement your Backend
PowerSync is designed to integrate with your existing backend infrastructure for persisting client mutations to your backend source database. See our guide to learn more about how to implement your backend application.
## Understanding the PowerSync Service Configuration
The PowerSync Service configuration is written in YAML and converted to base64 to be used as an environment variable.
If you need to make changes to the configuration, you can copy and edit the example YAML file below, [base64 encode](https://www.base64encode.org/) it, and update the `POWERSYNC_CONFIG_B64` environment variable in the `PowerSync Service` service. This will be required if you need to update the Sync Rules of your project.
```yaml config.yaml theme={null}
replication:
  connections:
    - type: postgresql
      uri: !env PS_POSTGRES_SOURCE_URL
      sslmode: disable

storage:
  type: postgresql
  uri: !env PS_POSTGRES_BUCKET_URL
  sslmode: disable

port: 80

sync_config:
  content: |
    bucket_definitions:
      global:
        data:
          - SELECT * FROM lists
          - SELECT * FROM todos

client_auth:
  jwks_uri: !env PS_AUTH_JWKS
  audience:
    - !env PS_AUTH_AUD
```
# Securing Your Deployment
Source: https://docs.powersync.com/maintenance-ops/self-hosting/securing-your-deployment
From a security perspective, the primary activity required will be placing a load balancer with TLS in front of PowerSync.
This section is a work in progress. Please reach out on [our Discord](https://discord.gg/powersync) if you have specific questions.
Below is an architecture diagram of a successful deployment:
Data doesn't always flow in the direction of your firewall rules, so the below table documents which components are making connections to others:
| Request Originator | Request Destination | Protocol |
| ------------------ | -------------------------- | ----------- |
| PowerSync Service | Postgres | TCP |
| PowerSync Service | MongoDB | TCP |
| PowerSync Service | OpenTelemetry Collector | TCP or UDP |
| PowerSync Service | JWKS Endpoint | TCP (HTTPS) |
| App Client | PowerSync Service (via LB) | TCP (HTTPS) |
| App Client | App Backend | TCP (HTTPS) |
| App Backend | Postgres | TCP |
# Telemetry
Source: https://docs.powersync.com/maintenance-ops/self-hosting/telemetry
PowerSync integrates with OpenTelemetry
## Overview
PowerSync uses OpenTelemetry to gather metrics about usage and health.
This telemetry is shared with the PowerSync team unless you opt-out. This allows us to gauge adoption and usage patterns across deployments so that we can better allocate R\&D capacity and ultimately better serve our customers (including you!). The metrics are linked to a random UUID and are therefore completely anonymous.
## What is Collected
Below are the data points collected every few minutes and associated with a random UUID representing your instance:
Type definitions for each metric dimension are available in the [powersync-service](https://github.com/powersync-ja/powersync-service/blob/main/packages/types/src/metrics.ts) repository.
| Dimension | Type |
| --------------------------------- | ------- |
| data\_replicated\_bytes | counter |
| data\_synced\_bytes | counter |
| rows\_replicated\_total | counter |
| transactions\_replicated\_total | counter |
| chunks\_replicated\_total | counter |
| operations\_synced\_total | counter |
| replication\_storage\_size\_bytes | gauge |
| operation\_storage\_size\_bytes | gauge |
| parameter\_storage\_size\_bytes | gauge |
| concurrent\_connections | gauge |
To scrape your self-hosted PowerSync Service metrics, please see the [Metrics](/maintenance-ops/self-hosting/metrics) docs page for more details.
### Opting Out
To disable the sending of telemetry to PowerSync, set the `disable_telemetry_sharing` key in your [configuration file](/configuration/powersync-service/self-hosted-instances) (`config.yaml` or `config.json`) to `true`:
```yaml powersync.yaml theme={null}
telemetry:
# Opt out of reporting anonymized usage metrics to PowerSync telemetry service
disable_telemetry_sharing: true
```
# Update Sync Streams (Sync Config)
Source: https://docs.powersync.com/maintenance-ops/self-hosting/update-sync-rules
How to update Sync Streams (or legacy Sync Rules) in a self-hosted PowerSync deployment
There are three ways to update your sync config in a self-hosted deployment:
1. **CLI** — Edit your config and apply with `powersync docker reset`
2. **Config file** — Update your config and restart the service
3. **API endpoint** — Deploy at runtime without restarting
During deployment, existing Sync Streams/Sync Rules continue serving clients while the new sync config is processed. Clients seamlessly transition once [initial replication](/architecture/powersync-service#initial-replication-vs-incremental-replication) completes.
Run `powersync validate` in the CLI before deploying to catch errors in your sync config without applying changes.
## Option 1: CLI
If you set up PowerSync using the CLI (`powersync docker`), update your sync config and apply it without a full service restart:
Update `powersync/sync-config.yaml` in your project directory, then validate your changes:
```bash theme={null}
powersync validate
```
Apply the updated config:
```bash theme={null}
powersync docker reset
```
This restarts the PowerSync Service and applies your updated sync config.
## Option 2: Config File
Define your sync config in `powersync.yaml` either inline or via a separate file. See [Self-Hosted Instance Configuration](/configuration/powersync-service/self-hosted-instances) for the full config reference.
Update the `sync_config:` section in your `powersync.yaml`. The `sync_config:` key is used for both Sync Streams and Sync Rules:
```yaml Sync Streams — Separate File (Recommended) theme={null}
sync_config:
path: sync-config.yaml
```
```yaml Sync Streams — Inline theme={null}
sync_config:
  content: |
    config:
      edition: 3
    streams:
      users:
        auto_subscribe: true
        query: SELECT * FROM public.users
```
```yaml Sync Rules — Separate File (Legacy) theme={null}
sync_config:
path: sync-config.yaml
```
```yaml Sync Rules — Inline (Legacy) theme={null}
sync_config:
  content: |
    bucket_definitions:
      global:
        data:
          - SELECT * FROM public.users
```
Restart your service to apply changes:
```shell theme={null}
docker compose restart powersync
```
Once the service starts up, it will load the updated sync config and begin processing it while continuing to serve the existing config until initial replication completes.
## Option 3: Deploy via API
Deploy sync config at runtime without restarting. Useful for quick iterations during development.
The API is disabled when Sync Streams (or legacy Sync Rules) are defined in `powersync.yaml`.
Sync Streams (or legacy Sync Rules) defined in `powersync.yaml` always take precedence.
Add an API token to your `powersync.yaml` and restart:
```yaml powersync.yaml theme={null}
api:
  tokens:
    - !env PS_API_TOKEN
```
```shell theme={null}
curl -X POST http://<host>:<port>/api/sync-rules/v1/deploy \
  -H "Content-Type: application/yaml" \
  -H "Authorization: Bearer ${PS_API_TOKEN}" \
  -d @sync-rules.yaml
```
Use `/api/sync-rules/v1/validate` first to check for errors without deploying.
### Additional Endpoints
| Endpoint | Method | Description |
| ------------------------------ | ------ | ------------------------------------------------ |
| `/api/sync-rules/v1/current` | GET | Get active and pending Sync Streams / Sync Rules |
| `/api/sync-rules/v1/reprocess` | POST | Restart replication from scratch |
## Troubleshooting
Common errors when using the API:
| Error Code | Meaning |
| ------------- | ------------------------------------------------------------------ |
| `PSYNC_S4105` | Sync Streams / Sync Rules defined in config file - API is disabled |
| `PSYNC_S4104` | No Sync Streams / Sync Rules deployed yet |
| `PSYNC_R0001` | Invalid Sync Streams / Sync Rules YAML - check `details` field |
See [Error Codes Reference](/debugging/error-codes) for the complete list.
# MongoDB Atlas Device Sync Migration Guide
Source: https://docs.powersync.com/migration-guides/atlas-device-sync
This guide lays out all the steps of migrating from MongoDB Atlas Device Sync to PowerSync.
## Introduction
Migrating from the deprecated MongoDB Atlas Device Sync to PowerSync allows you to benefit from efficient data synchronization using open and proven technologies. Users get always-available, instantly-responsive offline-first apps that also stream data updates in real-time when online.
## Why PowerSync?
PowerSync’s [history](https://www.powersync.com/company) goes as far back as 2009, when the original version of the sync engine was developed as part of an app development platform used by some of the world’s largest industrial companies to power offline-capable apps deployed in harsh environments.
PowerSync was spun off as a standalone product in 2023, and gives engineering teams a proven, open and robust sync engine with a familiar **server-client** [architecture](/architecture/architecture-overview).
PowerSync’s MongoDB connector has been **developed in collaboration with MongoDB** to provide an easy setup process. It reached **General Availability (GA) status** with its [V1 release](https://www.powersync.com/blog/powersyncs-mongodb-connector-hits-ga-with-version-1-0) and is fully supported for production use. Multiple MongoDB customers currently use PowerSync in production environments.
The server-side [PowerSync Service](/architecture/powersync-service) connects to MongoDB and pre-processes and pre-indexes data to be efficiently synced to users based on defined *Sync Streams* (or legacy *Sync Rules*). Client applications embedding the *PowerSync Client SDK* connect to the PowerSync Service to sync only a relevant subset of data to each user, based on the Sync Streams (or legacy Sync Rules). Incremental updates in MongoDB are synced to clients in real-time.
Client applications get a SQLite database that they can read from and write to. PowerSync provides for bi-directional syncing so that mutations in the client-side SQLite database are automatically synced back to the source MongoDB database. If users are offline or have patchy connectivity, PowerSync automatically manages network failures and retries.
By introducing PowerSync as a sync engine, you get:
* **Predictable sync behavior** that syncs relevant data to each user.
* **Instantly responsive user experience** as the app works with a zero-latency SQLite database.
* **Consistency guarantees** ensuring consistent state of the client-side SQLite database.
* **Real-time multi-user applications** as data updates are streamed to connected clients in real-time.
* **Offline-first capabilities** enabling apps to continue to work regardless of network conditions.
Please review this guide to understand the required changes and prerequisites. Following the provided steps will help your team transition smoothly.
If you need further assistance at any point, you can:
* **Ask AI** (see lower right corner of this site), which is trained on all our documentation, repositories and Discord discussions.
* [Set up a call](https://calendly.com/powersync/powersync-chat) with us.
* Ask us anything on our [Discord server](https://discord.gg/powersync).
* [Contact us](mailto:hello@powersync.com) through email.
## Architecture: Before and After
If you have MongoDB Atlas Device Sync deployed today, at a high level your architecture will look something like this:
Migrating to PowerSync results in this architecture: (new components in green)
Here is a quick overview of the resulting PowerSync architecture:
* The **PowerSync Service** is the server-side component of PowerSync. It's available as a cloud-hosted service ([PowerSync Cloud](https://powersync.com/pricing)), or you can [self-host](/intro/self-hosting) using our Open Edition.
* **Authentication**: PowerSync piggybacks off your app’s existing [authentication](/configuration/auth/overview), and JWTs are used to authenticate between clients and the PowerSync Service. If you are using Atlas Device SDKs for authentication, you will need to implement an authentication provider.
* **PowerSync Client SDKs** use **SQLite** under the hood. Even though MongoDB is a "NoSQL" document database, PowerSync’s use of SQLite works well with MongoDB, since the [PowerSync protocol](/architecture/powersync-protocol) is schemaless (it syncs schemaless JSON data) and we dynamically apply a [client-side schema](/intro/setup-guide#define-your-client-side-schema) to the data in SQLite using SQLite views. Client-side queries can be written in SQL or you can make use of an ORM (we provide a few [ORM integrations](https://www.powersync.com/blog/using-orms-with-powersync)). Working with embedded documents and arrays from MongoDB is easy with SQLite due to [its JSON support](/client-sdks/advanced/query-json-in-sqlite).
* **Reads vs Writes**: PowerSync handles syncing of reads differently from writes (mutations)
* **Reads**: The PowerSync Service connects to your MongoDB database for real-time replication of data, and syncs data to clients based on [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). Sync Streams/Rules are more flexible than MongoDB Realm Flexible Sync, but are defined on the server-side, not on the client-side.
* **Writes**: The client-side application can perform writes (mutations) directly on the SQLite database. The PowerSync Client SDK automatically places those mutations into an [upload queue](/architecture/client-architecture#writing-data-via-sqlite-database-and-upload-queue) and invokes an `uploadData()` function (defined by you) as needed to upload those mutations sequentially to your backend application.
* **Authorization**: Authorization is controlled separately for reads vs. writes.
* **Reads**: The [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) control which users can access which data.
* **Writes**: Your backend application controls authorization for how users can modify data, when it receives uploaded mutations from clients.
* **Backend Application**: PowerSync requires a backend API interface to upload mutations to MongoDB (and optionally for custom authentication too). There are currently two options:
* **"Bring your own backend"**: If you already have a backend application as part of your stack, you should use your existing backend. If you don’t yet have one, but would like to run your own backend, we have example implementations available. See the [instructions below](#2-accept-uploads-on-the-backend) for more details.
* **Serverless cloud functions (hosted/managed)**: An alternative option is to use CloudCode, a serverless cloud functions environment provided by PowerSync. We have a template available that you can use as a turnkey starting point. Details are [explained below](#2-accept-uploads-on-the-backend).
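The `uploadData()` flow described under **Writes** above can be sketched as follows for the JavaScript SDK. The `https://example.com/api/data` endpoint and the request body shape are illustrative assumptions; adapt them to your own backend API:

```javascript
// Minimal sketch of a backend connector's uploadData(): drain the client-side
// upload queue one transaction at a time and POST each mutation to your backend.
class DemoConnector {
  async uploadData(database) {
    const transaction = await database.getNextCrudTransaction();
    if (!transaction) return; // queue is empty

    for (const op of transaction.crud) {
      // op.op is 'PUT' | 'PATCH' | 'DELETE'; op.table, op.id and op.opData
      // describe the mutation made against the local SQLite database.
      const response = await fetch('https://example.com/api/data', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ op: op.op, table: op.table, id: op.id, data: op.opData }),
      });
      if (!response.ok) {
        // Throwing leaves the transaction in the queue; the SDK retries later.
        throw new Error(`Upload failed: ${response.status}`);
      }
    }
    await transaction.complete(); // remove the transaction from the queue
  }
}
```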
## Migration Steps
Follow the steps below to migrate a MongoDB Atlas Device Sync app to PowerSync.
It is not necessary to remove Realm in order to install PowerSync. It is possible to initially run Realm and PowerSync in parallel, and remove Realm once PowerSync has been set up.
### 1. Follow the PowerSync Setup Guide
Follow the steps for MongoDB and your client platform/framework in our standard [Setup Guide](/intro/setup-guide):
* [Configure Your Source Database](/intro/setup-guide#1-configure-your-source-database)
* [Set Up PowerSync Service Instance](/intro/setup-guide#2-set-up-powersync-service-instance)
* [Connect PowerSync To Your Source Database](/intro/setup-guide#3-connect-powersync-to-your-source-database) (MongoDB)
* [Define Sync Streams or Sync Rules](/intro/setup-guide#4-define-sync-streams-or-sync-rules)
* [Generate a Development Token](/intro/setup-guide#5-generate-a-development-token)
* [Test Sync with the Sync Diagnostics Client](/intro/setup-guide#6-%5Boptional%5D-test-sync-with-the-sync-diagnostics-client)
* [Use the Client SDK](/intro/setup-guide#7-use-the-client-sdk)
* [Install the Client SDK](/intro/setup-guide#install-the-client-sdk)
* [Define Your Client-Side Schema](/intro/setup-guide#define-your-client-side-schema)
* [Instantiate the PowerSync Database](/intro/setup-guide#instantiate-the-powersync-database)
* [Connect to PowerSync Service Instance](/intro/setup-guide#connect-to-powersync-service-instance)
* [Read and Write Data (Using SQLite)](/intro/setup-guide#read-data)
For specific details on working with embedded documents and arrays from MongoDB, see our guide on [Querying JSON Data in SQLite](/client-sdks/advanced/query-json-in-sqlite)
Once you have completed the *Setup Guide*, the only two remaining steps are to configure & integrate a backend application to handle mutations uploaded from clients, and to implement authentication.
### 2. Accept Uploads on the Backend
MongoDB Atlas Device Sync provides built-in writes/uploads to the MongoDB database.
PowerSync offers [full customizability](/architecture/client-architecture#writing-data-via-sqlite-database-and-upload-queue) regarding how writes (mutations) are applied to the source MongoDB database, via your own application backend. This gives you control to apply your own business logic, data validations, authorization and conflict resolution logic.
There are two options:
* **"Bring your own backend"**: If you already have a backend application as part of your stack, you should use your existing backend. This can be any kind of backend environment including a custom backend (e.g. Node.js, Rails, Laravel, Django, ASP.NET), an API platform (e.g. Hasura), some kind of serverless cloud functions (e.g. Azure Functions, AWS Lambda, Google Cloud Functions, Cloudflare Workers, etc.), or any other equivalent system that allows you to run privileged logic securely or apply mutations to your MongoDB database securely. If you don’t yet have a backend application, but would like to run your own backend environment, we have example implementations available (see below).
* **Serverless cloud functions (hosted/managed)**: PowerSync offers serverless cloud functions hosted on the same infrastructure as PowerSync Cloud which can be used for the needed backend functionality. We provide a MongoDB-specific template for this which can be used as a turnkey solution.
#### Using Your Own Custom Backend API
This option gives you complete control over the backend. The simplest backend implementation is to simply apply mutations to MongoDB as they are received, which results in a last-write-wins conflict resolution strategy. See [App Backend Setup](/configuration/app-backend/setup) and [Writing Client Changes](/handling-writes/writing-client-changes) for more details.
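The last-write-wins approach can be sketched as a small handler that applies each uploaded mutation against a MongoDB-style collection API. The `{ op, table, id, data }` payload shape is an illustrative assumption, not a fixed PowerSync format:

```javascript
// Sketch of last-write-wins conflict resolution: apply each uploaded mutation
// directly, so the most recently received write simply overwrites earlier ones.
async function applyMutation(db, { op, table, id, data }) {
  const collection = db.collection(table);
  switch (op) {
    case 'PUT': // create or fully replace the document
      return collection.replaceOne({ _id: id }, { _id: id, ...data }, { upsert: true });
    case 'PATCH': // merge only the changed fields
      return collection.updateOne({ _id: id }, { $set: data });
    case 'DELETE':
      return collection.deleteOne({ _id: id });
    default:
      throw new Error(`Unknown operation: ${op}`);
  }
}
```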
We have [example backend implementations](/intro/examples#backend-examples) available (e.g. Node.js, Django, Rails, .NET).
The [Migrating A MongoDB Atlas Device Sync App To PowerSync](https://www.powersync.com/blog/migrating-a-mongodb-atlas-device-sync-app-to-powersync) practical example on our blog also provides an example of a custom Node.js backend implementation.
On the client-side, you need to wire up the `uploadData()` function in the "backend connector" to use your own backend API. The [Client-Side Integration With Your Backend](/configuration/app-backend/client-side-integration) section of our docs provides more details on this.
#### Using PowerSync’s Serverless Cloud Functions
PowerSync provides serverless cloud functions for backend functionality, with a template available for MongoDB. See the [step-by-step instructions](/configuration/app-backend/cloudcode) on how to use the template. The template can be customized, or it can be used as-is.
The template provides [turnkey conflict resolution](https://www.powersync.com/blog/turnkey-backend-functionality-conflict-resolution-for-powersync#turnkey-conflict-resolution) which roughly matches the built-in conflict resolution behavior provided by MongoDB Atlas Device Sync.
PowerSync's serverless cloud functions require a bit of "white glove" assistance from our team. If you want to use this option, please [get in touch with us](https://www.powersync.com/contact) so we can get you set up.
For more information, see our blog post: [Turnkey Backend Functionality & Conflict Resolution for PowerSync](https://www.powersync.com/blog/turnkey-backend-functionality-conflict-resolution-for-powersync).
### 3. Set Up Authentication Integration
For quick development and testing purposes, the *Setup Guide* from step 1 instructs you to generate a temporary development token to use for authentication.
At some point you will need to replace the development tokens with proper JWT-based authentication integration. PowerSync supports various authentication providers including Supabase, Firebase Auth, Auth0, Clerk, and custom JWT implementations.
The [Authentication Setup](/configuration/auth/overview) section of our docs provides full details on this.
## Questions? Need help?
* **Ask AI** (see lower right corner of this site), which is trained on all our documentation, repositories and Discord discussions.
* [Get in touch](https://www.powersync.com/contact) with us.
# Contact Us
Source: https://docs.powersync.com/resources/contact-us
## Need help or have questions?
### Discord community
Join our [Discord](https://discord.gg/powersync) server where you can browse topics from our community, ask questions, share feedback, or just say hello :)
### Support for Pro, Team & Enterprise customers
If you are a customer on our Pro, Team or Enterprise (Cloud or Self-Hosted) [plans](https://www.powersync.com/pricing), you can contact us using the support details provided to you during onboarding.
You are also welcome to use our [Discord](https://discord.gg/powersync) community for questions, but please note that [support SLAs](https://www.powersync.com/legal/commercial-license-and-services-agreement#appendix-c) (Team and Enterprise plans) are not available for Discord support.
## Found a bug?
Bugs can be logged as [GitHub issues](https://github.com/powersync-ja) on the respective repo.
## Feedback or ideas?
* [Submit an idea](https://roadmap.powersync.com/tabs/5-roadmap/submit-idea) via our public roadmap
* Or [schedule a chat](https://calendly.com/powersync/powersync-chat) with someone from our product team.
## Pricing or commercial questions?
Please [shoot us an email](mailto:help@powersync.com) to get in touch.
# FAQ
Source: https://docs.powersync.com/resources/faq
Frequently Asked Questions about PowerSync.
A major product principle that has guided us is to provide a real, open-source database on the client-side, with a specific focus on [SQLite](https://sqlite.org/). This is as opposed to some kind of cache, key-value store, or non-standards-based relational datastore.
This approach leverages the power of SQLite and its ecosystem:
* **SQL functionality & concepts**: Millions of developers are already well-versed in SQL constructs and syntax, which means that there’s an instant familiarity with using SQLite. It also means having access to its rich functionality such as aggregations, joins, advanced indexing and JSON support.
* **Ecosystem & extensibility:** SQLite brings a lot with it: You can use popular ORMs that you’re already familiar with, such as Drizzle, Kysely and Drift. You can use SQLite extensions such as SQLCipher for encryption and FTS5 for full-text search. You can use standard tools for inspecting the database and doing more in-depth debugging. You get all the benefits of the SQLite community and the innovation around it: SQLite just keeps becoming more popular, and people keep doing more new interesting things with it.
* **Performance & maturity:** SQLite is also really fast, and extremely battle-tested: the SQLite team estimates that there are more than a trillion SQLite databases deployed, and every line in the codebase has 600 lines of test code.
The ubiquity of SQLite also creates opportunities for adopting PowerSync in the SQLite “installed base”: wherever you find SQLite, you can likely use PowerSync too. Usage of SQLite also means low lock-in. PowerSync is designed to be a “pluggable middleware” layer rather than a high lock-in monolithic system. It sits between popular backend databases on the server-side, and SQLite on the client-side. Replacing it with a different sync engine is fairly straightforward. Since PowerSync is built to work with open technologies and is itself open too, you can have an end-to-end stack optimized for low risk.
**PowerSync uses near real-time streaming of changes to the client (\< 1s delay).**
A persistent connection is used to continuously stream changes to the client.
This is implemented using a standard HTTP/2 request with a streaming response, or WebSockets.
A polling API will also be available for cases where the client only needs to update data periodically, and prefers to not keep a connection open.
The real-time streaming is not designed for "update as you type" — it still depends on explicitly saving changes. Real-time collaboration is supported as long as users do not edit the same data (same columns of the same rows) at the same time.
Concurrently working on text documents is not supported out of the box. This is solved better by CRDTs — see the [CRDTs](/client-sdks/advanced/crdts) section.
See the section on [Performance and Limits](/resources/performance-and-limits).
If no sync rule changes were deployed in this period, the user will only need to download the incremental changes that happened since the user was last connected.
*For example, a new record should not be displayed until the server received it, or it should be displayed as pending, or the entire screen must block with a spinner.*
**While PowerSync does not have out-of-the-box support for this due to the great variety of requirements, this is easy to build on top of the sync system.** A simple approach is to store a "status" or "pending changes" column on the table, and set that whenever the client makes a change. When the server receives the change, it then sets it to "processed" / "no pending changes". So when the server has processed the change, the client automatically syncs that status back.

For more granular information, record individual changes in a separate table, as explained in [Custom Conflict Resolution](/handling-writes/custom-conflict-resolution).

Note: Blocking the entire screen with a spinner is not recommended, since the change may take a very long time to be processed if the user is offline.
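The "pending changes" column approach can be sketched with the parameterized client-side API. The `todos` table and `pending` column here are illustrative assumptions:

```javascript
// Sketch: mark a row as pending whenever the client mutates it. The backend
// clears the flag once it has persisted the change, and the cleared value
// syncs back down, so the UI can stop showing the row as pending.
async function updateDescription(db, id, description) {
  await db.execute(
    'UPDATE todos SET description = ?, pending = 1 WHERE id = ?',
    [description, id]
  );
}
```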
**Right now, we don’t have support for replicating data via APIs.** A workaround would be to have custom code to replicate the data from the API to a PostgreSQL instance, then sync that with PowerSync. We may add a way in the future to replicate the data directly from an API to the PowerSync Service, without a database in between.
**Yes.** The PowerSync Client SDKs support real-time streaming of changes, and can automatically rerun a query if the underlying data changed. It does not support incrementally updating the result set yet, but it should be fast if the query is indexed appropriately, and the result set is small enough.
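A live query of this kind looks roughly like the following sketch. It assumes the callback-style `watch(sql, parameters, handler)` shape from the PowerSync JS SDK and an example `todos` table; verify the exact signature against your SDK version:

```javascript
// Sketch of a live query: the handler re-runs whenever data in the
// queried tables changes.
function watchTodos(db, onRows) {
  db.watch('SELECT * FROM todos ORDER BY created_at', [], {
    onResult: (result) => onRows(result.rows?._array ?? []),
  });
}
```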
See [Troubleshooting](/debugging/troubleshooting)
**Client-side transactions are supported**, and use standard SQLite locking to avoid conflicts. **Client-server transactions are not supported.** This would require online connectivity to detect conflicts and retry the transaction, which is not possible for changes made offline. Instead, it is recommended to model the data to allow atomic changes (see previous sections on conflict detection).
**This is generally not recommended, but it can be used in some cases, with caveats.**
See the section on [client ID](/sync/advanced/client-id) for details.
**An attachment sync or caching system can be built on top of PowerSync.**
See the section on [Attachments](/client-sdks/advanced/attachments) for details.
Currently, PowerSync can only read from supported source databases (such as Postgres, MongoDB and MySQL) directly. GraphQL or REST APIs can be used for the write path by the PowerSync SDK.
By default PowerSync is not susceptible to SQL injection. The PowerSync execute API is parameterized, and as long as developers use that, SQL injection is not possible. It is however the developer's responsibility to ensure that they use the parameterized API and don't directly insert user-provided data into underlying SQLite tables.
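For example, with the parameterized execute API (assumed `db.execute(sql, params)` shape):

```javascript
// Safe: the user-provided value is bound as a parameter and never parsed as SQL.
async function findListsByName(db, userInput) {
  return db.execute('SELECT * FROM lists WHERE name = ?', [userInput]);
}

// Unsafe (don't do this): interpolating user input directly into the statement
// allows SQL injection, e.g. userInput = "x' OR '1'='1".
// return db.execute(`SELECT * FROM lists WHERE name = '${userInput}'`);
```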
See [getCrudBatch()](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncDatabase/getCrudBatch.html) and [getNextCrudTransaction()](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncDatabase/getNextCrudTransaction.html).
Use `getCrudBatch()` when you don't care about atomic transactions and want to do bulk updates for performance reasons.
PowerSync will only sync the difference (buckets added or removed).
# Feature Status
Source: https://docs.powersync.com/resources/feature-status
PowerSync feature states and their implications for factors such as API stability and support.
Features in PowerSync are introduced through a phased release cycle to ensure quality and stability. Below is an overview of the four release stages, namely Closed Alpha, Open Alpha, Beta, and V1:
| **Stage** | **Production Readiness** | **API Stability** | **Support** | **Documentation** |
| ---------------- | --------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------- | ----------------------- | ----------------------------------------------- |
| **Closed Alpha** | Not production-ready; purpose is early feedback and testing of new ideas. | Subject to breaking changes. | Not covered under SLAs. | Limited or placeholder documentation. |
| **Open Alpha** | Not production-ready; purpose is broader testing and wider public feedback. | Subject to changes based on feedback. | Not covered under SLAs. | Basic documentation provided. |
| **Beta** | Production-ready for tested use cases. | Fully stable; breaking changes clearly communicated. | Covered under SLAs. | Documentation provided; may contain known gaps. |
| **V1** | Production-ready for all main use cases. | Fully stable; backwards compatibility maintained as far as possible; breaking changes clearly communicated. | Covered under SLAs. | Comprehensive and finalized documentation. |
# Service Release Channels
PowerSync Service features are deployed to different release channels throughout their lifecycle.
## Open Edition
The latest stable PowerSync Docker image is available under the `latest` tag and can be pulled using:
```bash theme={null}
docker pull journeyapps/powersync-service:latest
```
Development images may be released for bleeding-edge feature additions or hotfix testing purposes. These images are usually versioned as `0.0.0-dev-XXXXXXXXXXXXXX` prereleases.
## PowerSync Cloud
In the PowerSync Dashboard, developers can configure the service version channel for their instance. This option is available in the Settings view for each instance.
### Stable
The Stable channel provides the most reliable release of the PowerSync Service. It includes features that may be in the `V1`, `Beta`, or `Open Alpha` stages. `Open Alpha` features in this channel are typically mature but may still have bugs or known issues.
### Next
The Next channel builds on the Stable channel and includes new features, fixes, or modifications to existing stable functionality that may require additional testing or validation.
# Feature Status Summary
Below is a summary of the current main PowerSync features and their release states:
| **Category / Item** | **Status** |
| --------------------------------- | ------------ |
| **Database Connectors** | |
| SQL Server | Alpha |
| MySQL | Beta |
| MongoDB | V1 |
| Postgres | V1 |
| | |
| **PowerSync Service** | |
| Enterprise Self-Hosted | Closed Alpha |
| Sync Streams | Beta |
| Postgres Bucket Storage | V1 |
| | |
| **Client SDKs** | |
| Rust SDK | Experimental |
| .NET SDK | Alpha |
| Capacitor SDK | Alpha |
| TanStack Query | Alpha |
| Node.js SDK | Beta |
| OP-SQLite Support | Beta |
| Flutter Web Support | Beta |
| React Native Web Support | Beta |
| Flutter SQLCipher | Beta |
| Kotlin SQLite3MultipleCiphers | Beta |
| Vue Composables | Beta |
| Swift SDK | V1 |
| Kotlin SDK | V1 |
| JavaScript/Web SDK | V1 |
| Dart/Flutter SDK | V1 |
| React Native SDK | V1 |
| React Hooks | V1 |
| | |
| **ORMs/SQL Libraries** | |
| Room (Kotlin) | Alpha |
| TanStack DB (JS) | Alpha |
| GRDB (Swift) | Alpha |
| Drift (Flutter) | Beta |
| Drizzle (JS) | Beta |
| Kysely (JS) | Beta |
| SQLDelight (Kotlin) | Beta |
| | |
| **Attachment Helpers** | |
| Kotlin | Alpha |
| Swift | Alpha |
| JavaScript (new built-in library) | Alpha |
| Flutter (new built-in library) | Alpha |
| | |
| **Other** | |
| CLI | Beta |
Also see:
* [PowerSync Roadmap](https://roadmap.powersync.com)
# HIPAA Compliance
Source: https://docs.powersync.com/resources/hipaa
Details on HIPAA compliance with PowerSync Cloud
Note: HIPAA compliance is only available on the Team and Enterprise plans of PowerSync Cloud.
The Health Insurance Portability and Accountability Act (HIPAA) is a comprehensive U.S. federal law that protects the privacy and security of individuals' health information, known as Protected Health Information (**PHI**) or electronic PHI (**ePHI**).
Entities that handle ePHI must comply with the HIPAA Privacy Rule, Security Rule, and Breach Notification Rule.
PowerSync serves as a **Business Associate (BA)** for customers (the **Covered Entity** or their BA) who utilize our service to synchronize healthcare-related data. As a BA, PowerSync has specific legal obligations to safeguard ePHI that passes through our synchronization service.
To achieve HIPAA compliance when using PowerSync, two primary conditions must be met:
1. The customer must execute a **Business Associate Agreement (BAA)** with PowerSync.
2. The customer must use the PowerSync Service within a HIPAA-compliant configuration, e.g., using required encryption, proper access controls (MFA), a custom deployment setup, and network restrictions.
We also ensure that all our upstream vendors and sub-processors who may handle ePHI (such as cloud infrastructure providers) are covered by their own BAAs and comply with their obligations.
**Mandatory Bucket Storage Requirement**\
With a standard setup, PowerSync Cloud provides “bucket storage” (persistent database storage where bucket data such as operation history and metadata are stored by the PowerSync Service) as part of the cloud service. For HIPAA-compliant setups, however, the customer must provide a dedicated MongoDB Atlas cluster in their own Atlas account to serve as the bucket storage database for the PowerSync Service instance(s).
## Customer Responsibilities
The customer remains the owner of their application, databases, and client devices, and therefore holds critical responsibilities in the shared compliance model:
* **Business Associate Agreement (BAA)**\
Customers **must sign a BAA** with PowerSync *before* storing or synchronizing any ePHI using the service. The BAA can be requested by emailing [hello@powersync.com](mailto:hello@powersync.com).
* **Source Database**\
Customers must ensure their **source database** (which PowerSync connects to) is hosted in a HIPAA-compliant environment and is protected by the appropriate vendor BAAs (e.g., with AWS, Azure, or GCP).
* **Bucket Storage - MongoDB Database**\
Customers must ensure their **bucket storage MongoDB Atlas cluster** (which PowerSync connects to) is hosted in a HIPAA-compliant environment.
* **Client Device Security**\
Customers must implement all necessary **administrative, physical, and technical safeguards** on the **client-side devices** (mobile, web app). This includes device access controls, encryption of the client-side PowerSync SQLite database, and secure disposal of data when a user or device is de-provisioned.
* **Data Filtering and Access Control**\
Customers must configure Sync Streams / Sync Rules (legacy) to ensure only the minimum necessary ePHI is synchronized to specific client devices, and must ensure the authentication setup is correctly implemented to restrict data to the correct client devices.
* **Network Restrictions (IP Filtering, AWS Private Endpoints)**\
Customers must use [AWS PrivateLink](/configuration/source-db/private-endpoints) where possible, or configure and restrict source database and bucket storage database access to PowerSync Cloud’s [IP addresses](/configuration/source-db/security-and-ip-filtering).
* **Breach Notification**\
Customers must follow their internal policies for notifying individuals and/or [HHS](https://www.hhs.gov/), and reporting breaches discovered by the customer to PowerSync as required by the BAA.
* **PowerSync Dashboard Account**\
Customers are in full control of their PowerSync Cloud account and are responsible for managing the users who have access to the PowerSync Dashboard. Multi-factor authentication (MFA) must be enabled for the PowerSync Dashboard.
## PowerSync’s Responsibilities (as BA)
PowerSync’s core responsibility is to protect ePHI while it is in transit and temporarily processed by our synchronization service.
As a Business Associate, PowerSync is directly liable for compliance with certain provisions of the HIPAA Rules and adheres to the terms of the BAA by:
* **Technical Safeguards**\
**Encrypting ePHI in transit** (using TLS/SSL) between the customer's databases, the PowerSync Service, and the client devices.
* **Vendor Management**\
Ensuring all underlying cloud infrastructure providers (sub-BAs) that handle ePHI have executed a **BAA** with PowerSync.
* **Breach Reporting**\
Notifying the customer immediately upon the discovery of a **security incident** or **breach** involving unsecured ePHI processed or stored by the PowerSync Service, as outlined in the BAA.
* **Infrastructure and Auditing**\
Maintaining appropriate **administrative and physical controls** over our infrastructure, including access management, logging, monitoring, and regular third-party audits (e.g. SOC 2) to validate our security posture.
## Shared Model of Responsibility
HIPAA compliance is a continuous, shared process between the customer and PowerSync (BA).
| Area of Responsibility | Customer | PowerSync (Business Associate) |
| :--------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------ |
| **Source Database** | Responsible for the security and HIPAA status of the source database hosting. | Responsible for the secure, encrypted connection to the database. |
| **Bucket Storage Database** | Responsible for the security and HIPAA status of the bucket storage database hosting. | Responsible for the secure, encrypted connection to the database. |
| **Synchronization Service** | Responsible for proper configuration of Sync Streams / Sync Rules data filtering to prevent unnecessary data exposure. | Responsible for securing the PowerSync Service infrastructure and ensuring data is encrypted while processed. |
| **Client Devices (e.g., Mobile App, Web App)** | **Wholly Responsible** for securing the client-side SQLite database, applying user authentication, authorization, and data purge policies on the device. | Responsible for securing the client-side SDKs. |
## Frequently Asked Questions
### What is the difference between SOC 2 and HIPAA?
**SOC 2 (Service Organization Control 2)** is an auditing procedure that validates a company’s controls relevant to security, availability, processing integrity, confidentiality, and privacy. It is not industry-specific.
**HIPAA** is a federal regulation specific to the U.S. healthcare industry that dictates the protection of PHI.
A strong **SOC 2 Type 2 report** provides independent assurance that PowerSync maintains the necessary security posture to meet the administrative and technical safeguards required for a HIPAA Business Associate. [Learn more](/resources/security).
### How often is PowerSync audited?
PowerSync undergoes **annual third-party audits** of our security controls (e.g., SOC 2 Type 2). These audits review the controls that are foundational to our ability to fulfill HIPAA BAA requirements.
### Where can I find PowerSync’s BAA?
The BAA is available upon request to customers seeking to process ePHI. Please contact [hello@powersync.com](mailto:hello@powersync.com) to initiate the BAA execution process. **Only the Team and Enterprise plans on PowerSync Cloud are supported.**
### Is a HIPAA Compliance Report available?
Yes. To provide independent assurance of our security controls, PowerSync can provide a HIPAA Compliance Report to customers on the Team or Enterprise plans of PowerSync Cloud. To request a copy, please contact [hello@powersync.com](mailto:hello@powersync.com).
# Local-First Software
Source: https://docs.powersync.com/resources/local-first-software
How does PowerSync fit in to the local-first software movement?
## What is local-first software?
### The vision of local-first
Local-first software is a term coined by the research lab [Ink & Switch](https://www.inkandswitch.com/) in its [2019 manifesto essay](https://www.inkandswitch.com/local-first/).
Ink & Switch's rationale for local-first is to get the best of both worlds of stand-alone desktop apps (so-called "old-fashioned" software) and cloud software:
> *"We would like both the convenient cross-device access and real-time collaboration provided by cloud apps, and also the personal ownership of your own data embodied by ‘old-fashioned’ software".*
The manifesto proceeds to define local-first as software that:
> *"prioritizes the use of local storage (the disk built into your computer) and local networks (such as your home WiFi) over servers in remote data centers".*
It also puts emphasis on the primacy of the local copy of data:
> "In local-first applications \[...] we treat the copy of the data on your local device \[...] as the primary copy. Servers still exist, but they hold secondary copies of your data in order to assist with access from multiple devices."
Expanding on this, the manifesto identifies [7 ideals](https://www.inkandswitch.com/local-first/#seven-ideals-for-local-first-software) "to strive for in local-first software", which we will explore further below.
**Much more theoretical research is still needed** to practically build software that conforms to all of the ideals of local-first software as envisioned by Ink & Switch, since it will need a fully decentralized architecture and needs many complex requirements to be addressed (see [here](https://www.powersync.com/blog/local-first-software-origins-and-evolution#why-are-the-ideals-of-local-first-difficult-to-achieve) for more details). In the meantime, the manifesto essay does provide [practical guidance](https://www.inkandswitch.com/local-first/#for-practitioners) on things that developers can do to bring their software closer to the ideals.
### Local-first in practice today
Most implementations that are referred to as "local-first" today conform to only a subset of the local-first ideals envisioned by Ink & Switch. We argue that a practical definition of most local-first implementations today is the following:
> Local-first implementations today generally refer to apps that work with a local client database which syncs automatically with a backend database in the background. All reads and writes go to the local database first.
This kind of architecture already enables large benefits for both end-users (speed, network resilience, real-time collaboration, offline usage) as well as for developers (reduced backend complexity, simplified state management, etc.). Refer to [References](/resources/local-first-software#references) for more on this.
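The "local database first" pattern described above can be sketched as follows. This is a simplified illustration, not actual PowerSync SDK code: `LocalDb` is a hypothetical in-memory stand-in for the client-side SQLite database, and the real SDKs additionally run a background sync engine that drains the upload queue.

```typescript
// Minimal sketch of the local-first read/write pattern.
interface Row { id: string; [col: string]: unknown }

class LocalDb {
  private rows = new Map<string, Row>();
  uploadQueue: Row[] = []; // pending local mutations, uploaded in the background

  // Writes land locally first and are queued for upload -- no network round trip.
  write(row: Row): void {
    this.rows.set(row.id, row);
    this.uploadQueue.push(row);
  }

  // Reads are served from the local copy, so they work offline with near-zero latency.
  read(id: string): Row | undefined {
    return this.rows.get(id);
  }
}

const db = new LocalDb();
db.write({ id: 'todo-1', description: 'Buy milk' });
const todo = db.read('todo-1'); // instant local read; syncing happens in the background
```

The key property is that the app never blocks on the network: the local copy is always readable and writable, and synchronization is a separate, asynchronous concern.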
## Does PowerSync allow building local-first software?
### High-level concepts
Here's how building software with [PowerSync](https://www.powersync.com/) as its sync engine stacks up in terms of the high-level definitions of local-first software mentioned above:
| Local-First Concept / Definition | Does PowerSync Enable This? |
| -------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Software that prioritizes the use of local storage. All reads and writes go to the local database first. | Yes. PowerSync allows developers to build software that uses a local database for reads and writes. |
| Software that treats the data on the user's local device as the primary copy of the data. | Yes, generally. PowerSync allows the developer to treat the data in the local end-user's database as the primary copy of the data. PowerSync does use a server-authoritative architecture where the server can [resolve conflicts](/handling-writes/handling-update-conflicts) and all clients then update to match the server state. But the client [will not update](/architecture/consistency) its local state to the server state until all pending client changes have been processed by the server. |
| Software with a decentralized architecture, which allows the software "to outlive any backend services managed by their vendors" | No. PowerSync does not use a decentralized architecture. PowerSync uses a server-authoritative architecture. However, there are ways to ensure a degree of longevity of software built using PowerSync (see below). |
### The 7 ideals of local-first
Here's how applications built using PowerSync can be brought closer to the [7 ideals of local-first](https://www.inkandswitch.com/local-first/#seven-ideals-for-local-first-software) in the Ink & Switch manifesto essay:
| 7 Ideals of Local-First | PowerSync Perspective |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Fast**: By accessing data locally, the software should be able to respond near-instantaneously to user input | PowerSync inherently provides this: All reads and writes use a local SQLite database, resulting in near-zero latency for accessing data. |
| **Multi-Device**: Data should be synchronized across all of the devices on which a user does their work. | PowerSync automatically syncs data to different user devices. |
| **Offline**: The user should be able to read and write their data anytime, even while offline. | PowerSync allows for offline usage of applications for arbitrarily long periods of time. Developers can also optionally create apps as [offline-only](/client-sdks/advanced/local-only-usage) and turn on syncing of data when it suits them, including on a per-user basis. When syncing is configured, data is synced to users based on the defined [Sync Streams](/sync/streams/overview) (or [Sync Rules](/sync/rules/overview)) for offline access. Mutations made while the user is offline are placed in an upload queue and [uploaded](/configuration/app-backend/client-side-integration) when connectivity is available (this is managed automatically by the PowerSync Client SDK). |
| **Collaboration**: The ideal is to support real-time collaboration that is on par with the best cloud apps today. | PowerSync allows building collaborative applications either with [custom conflict resolution](/handling-writes/custom-conflict-resolution), or [using CRDT](/client-sdks/advanced/crdts) data structures stored as blob data for fine-grained collaboration. |
| **Longevity**: Work the user did with the software should continue to be accessible indefinitely, even after the company that produced the software is gone. | PowerSync relies on open-source and source-available software, meaning that the end-user can self-host Postgres (open-source) and the [PowerSync Service](/architecture/powersync-service) (source-available) should they wish to continue using PowerSync to sync data after the software producer shuts down backend services. There is also an onus on the software developer to ensure longevity, such as allowing exporting of data and avoiding reliance on other proprietary backend services. |
| **Privacy**: The software should use end-to-end encryption so that servers that store a copy of users’ files only hold encrypted data that they cannot read. | For details on end-to-end encryption with PowerSync, refer to our [Encryption](/client-sdks/advanced/data-encryption) section. |
| **User Control:** No company should be able to restrict what a user is allowed to do with the software. | In theory, the server-authoritative architecture of PowerSync allows the vendor's backend to override the user's local data (once all pending changes by the user have been [processed by the server](/architecture/consistency)). However, this is ultimately in the control of the developer. |
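The upload-queue behavior mentioned in the **Offline** row above can be sketched as follows. This is a hedged stand-in, not the actual SDK API: the `CrudEntry`/`CrudTransaction` shapes and the `apply` callback are illustrative, loosely modeled on the pattern PowerSync client SDKs use, where an `uploadData` hook drains pending local mutations and marks each transaction complete once the backend has accepted it.

```typescript
// Simplified sketch of draining the client-side upload queue.
type CrudOp = 'PUT' | 'PATCH' | 'DELETE';

interface CrudEntry {
  op: CrudOp;      // kind of local mutation
  table: string;   // client-side table name
  id: string;      // row id
  opData?: Record<string, unknown>; // changed columns (PUT/PATCH)
}

interface CrudTransaction {
  crud: CrudEntry[];
  complete(): Promise<void>; // removes this batch from the queue
}

async function uploadData(
  getNextCrudTransaction: () => Promise<CrudTransaction | null>,
  apply: (entry: CrudEntry) => Promise<void>
): Promise<number> {
  let uploaded = 0;
  // Drain one transaction at a time; a transaction is only marked complete
  // (removed from the queue) after the backend accepted all of its entries.
  for (;;) {
    const tx = await getNextCrudTransaction();
    if (!tx) break;
    for (const entry of tx.crud) {
      await apply(entry); // e.g. POST the mutation to your backend API
      uploaded++;
    }
    await tx.complete();
  }
  return uploaded;
}
```

Because a transaction is completed only after all of its entries upload successfully, an interrupted upload is simply retried later from the same queue position, which is what makes offline writes safe.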
## References
* [Local-First Software: Origins and Evolution](https://www.powersync.com/blog/local-first-software-origins-and-evolution)
* [Local-First Software is a Big Deal, Especially for the Web](https://www.powersync.com/blog/local-first-is-a-big-deal-especially-for-the-web)
# Performance and Limits
Source: https://docs.powersync.com/resources/performance-and-limits
Expected performance and limits for PowerSync Cloud.
[PowerSync Cloud plans](https://www.powersync.com/pricing) have the limits and performance expectations outlined below.
The PowerSync Cloud **Team** and **Enterprise** plans allow several of these limits to be customized based on your specific needs.
## Limits
| **Component** | **Limit** | **Details** |
| ----------------------------- | --------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Synced buckets per user** | Maximum: 1,000 by default; configurable up to 10,000 by request | Sync requests exceeding this limit will fail with an error. The limit can be increased upon request for Team and Enterprise customers; however, performance degrades as the number of buckets exceeds 1,000. |
| **Maximum row/document size** | 15MB | Applies to both source database rows and transformed rows synced to clients. |
| **Concurrent connections** | Maximum: configurable (50k+ per instance) | PowerSync Service instances have default limits configured based on the [Pricing plan](https://www.powersync.com/pricing). These limits can be increased upon request for Team and Enterprise customers, and currently scale to over 50,000 per instance. |
| **Data hosted** | Maximum: configurable | PowerSync Service instances have default limits configured based on the [Pricing plan](https://www.powersync.com/pricing). These limits can be increased upon request for Enterprise customers. |
| **Columns per table** | 1,999 | Hard limit of the client schema, excluding the `id` column. |
| **Number of users** | No limit | No hard limit on unique users. |
| **Number of tables** | No limit | Hundreds of tables may impact startup and sync performance. |
## Performance Expectations
### Database Replication (Source DB → PowerSync Service)
* **Small rows**: 2,000-4,000 operations per second
* **Large rows**: Up to 5MB per second
* **Transaction processing**: \~60 transactions per second for smaller transactions
* **Reprocessing**: Same rates apply when reprocessing Sync Streams/Sync Rules or adding new tables
### Sync (PowerSync Service → Client)
* **Rows per client**:
* Good performance expected up to 1 million rows per client
* Up to 10 million rows per client may still work, but this requires testing with the specific SDK
* Increasing number of rows increases initial sync time, memory usage and database size
* **Sync speed**: Expect a rate of 2,000-20,000 operations per second per client, depending on the client
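As a rough illustration, the figures above can be used to estimate initial sync time. The formula below is an assumption (rows divided by per-client throughput); actual times also depend on row size, network conditions, and the specific SDK.

```typescript
// Back-of-the-envelope estimate of initial sync time, using the per-client
// throughput range quoted above (2,000-20,000 operations per second).
// Illustrative only -- not a guarantee.
function estimateInitialSyncSeconds(rows: number, opsPerSecond: number): number {
  return rows / opsPerSecond;
}

// 1 million rows per client:
const slow = estimateInitialSyncSeconds(1_000_000, 2_000);  // 500 s (~8.3 min) at the low end
const fast = estimateInitialSyncSeconds(1_000_000, 20_000); // 50 s at the high end
```

This is why keeping the rows-per-client count down (via well-scoped Sync Streams or Sync Rules) has a direct impact on first-launch experience.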
# Security & HIPAA
Source: https://docs.powersync.com/resources/security
Details on PowerSync Cloud's cybersecurity posture and how to report issues
PowerSync is trusted by tens of thousands of developers for building and deploying secure applications.
## PowerSync is SOC 2 Type 2 Audited
SOC 2 Type 2 audit reports are available to customers on the [Team and Enterprise plans](https://www.powersync.com/pricing) of PowerSync Cloud, as well as customers using the Enterprise Self-Hosted Edition.
## PowerSync Cloud Security
### General
* Customer data is encrypted at rest, access to that data by support staff is strictly controlled by access control mechanisms, and robust write-only logging is present across the entire stack.
* All HTTP connections are encrypted using TLS.
* Additionally, customers on our [Enterprise plan](https://www.powersync.com/pricing) can request their data to be housed in managed, isolated tenants.
* Independent third-party cybersecurity penetration testing reports are available to customers on our [Enterprise plan](https://www.powersync.com/pricing).
### AWS Private Endpoints
See [Private Endpoints](/configuration/source-db/private-endpoints) for details on connecting to your database over a private network using AWS PrivateLink.
We use Private Endpoints instead of VPC peering to ensure that no other resources are exposed between VPCs.
### HIPAA Compliance
PowerSync Cloud is HIPAA compliant. You can sync Protected Health Information (PHI) or electronic PHI (ePHI) using PowerSync Cloud provided that you fulfill your obligations under our shared responsibility model. Refer to our [HIPAA Compliance](/resources/hipaa) page for details.
## Client-Side Security
Refer to: [Data Encryption](/client-sdks/advanced/data-encryption)
## Security Reporting
### Our Commitment
Security of our users’ data is of utmost importance at PowerSync. We welcome the disclosure of any vulnerability you may find in our product.
We will treat each security report with the utmost seriousness. We commit to communicating promptly while we investigate the impact on our customers, and will remediate the issue if deemed necessary. Having said that, we generally see a deluge of very low-quality reports, many of them AI-generated, and a response from our team is not guaranteed if your submission falls into this category.
We ask that you uphold the principles of Responsible Disclosure, including but not limited to:
* Make every effort to avoid accessing data of other users, and avoid disruption of our services.
* Keep within our [Terms of Service](https://www.powersync.com/legal/licensing-terms).
* Avoid publicly disclosing any vulnerability until PowerSync has had reasonable time to resolve or mitigate the issue.
Additionally, avoid any social engineering or phishing on our customers or employees, and do not physically access any of our properties.
If you follow the responsible disclosure guidelines, we commit to:
* Treat each report with the utmost seriousness.
* Communicate promptly, and work with you to understand and resolve the issue.
PowerSync does not operate a bug bounty program at this time, but may choose to offer a reward for security reports at our discretion.
### How to Report an Issue
Contact [security@powersync.com](mailto:security@powersync.com) with details on the issue.
Include at least the following information:
* A description and severity of the issue.
* Steps to reproduce the issue.
* Any sensitive details that you may have accidentally accessed during the research.
If you plan to provide sensitive credentials or data in the report, please let us know, and we will provide you with a public GPG key for encryption.
### What Reports We Are Interested In
We are interested in any reports affecting the security of our product.
We are not interested in reports of:
* Common non-vulnerabilities, such as those listed [here](https://bughunters.google.com/about/rules/google-friends/google-and-alphabet-vulnerability-reward-program-vrp-rules#non-qualifying-vulnerabilities).
* Issues that are not exploitable.
* Security best practice concerns. For example, issues pertaining to password policies such as password complexity, password reuse, etc.
* Results from automated scans.
* Social engineering or phishing attacks.
* Extracting data using a compromised device or credentials.
Please reach out to us if anything is unclear.
### See Also
* [Security & IP Filtering](/configuration/source-db/security-and-ip-filtering)
* [Data Encryption](/client-sdks/advanced/data-encryption)
# Supported Platforms
Source: https://docs.powersync.com/resources/supported-platforms
Supported platforms and major features by PowerSync Client SDK
## Dart/Flutter SDK
| Platform / Feature | Supported? | Notes |
| --------------------------- | ----------------------------------- | ------------------------------------------------------------- |
| Flutter Android | Yes (x86-64, aarch64, armv7) | |
| Flutter iOS | Yes | |
| Flutter macOS | Yes (x86-64, aarch64) | |
| Flutter Windows | Yes (x86-64 only) | |
| Flutter Linux | Yes (x86-64, aarch64) | |
| Flutter web | Yes | Only dart2js is tested; dart2wasm has issues |
| Dart web | With custom setup | |
| Dart macOS | With custom setup | |
| Dart Windows | With custom setup (x86-64 only) | |
| Dart Linux | With custom setup (x86-64, aarch64) | Dart also supports armv7 and riscv64gc, but we currently don't |
| HTTP connection method | Yes | |
| WebSocket connection method | No | |
## React Native SDK
| Platform / Feature | Supported? | Notes |
| --------------------------- | ----------------- | ------------------ |
| React Native | Yes | |
| React Native w/ Expo | Yes | |
| React Native for Web | Yes | |
| React Strict DOM | YMMV - not tested | |
| React Native for Windows | No | |
| HTTP connection method | Yes | Legacy (supported) |
| WebSocket connection method | Yes | Default |
## JS/Web SDK
| Platform / Feature | Supported? | Notes |
| --------------------------- | ---------- | ---------------------------------- |
| Chrome & Chrome-based | Yes | See VFS notes |
| Firefox | Yes | OPFS not supported in private tabs |
| Safari | Yes | OPFS not supported in private tabs |
| HTTP connection method | Yes | |
| WebSocket connection method | Yes | |
## Capacitor SDK
| Platform / Feature | Supported? | Notes |
| --------------------------- | ---------- | -------------------------------------------------- |
| iOS | Yes | Uses native SQLite via Capacitor Community SQLite. |
| Android | Yes | Uses native SQLite via Capacitor Community SQLite. |
| Web | Yes | Uses WASQLite via the PowerSync Web SDK. |
| Electron | Yes | Uses WASQLite via the PowerSync Web SDK. |
| HTTP connection method | Yes | |
| WebSocket connection method | Yes | |
## Node.js SDK
| Platform / Feature | Supported? | Notes |
| --------------------------- | ---------- | ----- |
| macOS | Yes | |
| Linux | Yes | |
| Windows | Yes | |
| HTTP connection method | Yes | |
| WebSocket connection method | Yes | |
## Kotlin SDK
| Platform / Feature | Supported? | Notes |
| --------------------------- | ------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Android | Yes (x86-64, x86, aarch64, armv7) | |
| Android native | No | |
| iOS | Yes (aarch64 device, x86-64 and aarch64 simulators) | |
| macOS (native) | Yes (x86-64, aarch64) | |
| macOS catalyst (native) | No | Blocked on [KT-40442: Support building Kotlin/Native for Mac Catalyst (x86-64 and arm64)](https://youtrack.jetbrains.com/issue/KT-40442/Support-building-Kotlin-Native-for-Mac-Catalyst-x86-64-and-arm64) |
| watchOS | Yes (aarch64 device, armv8 32-bit pointers ABI, x86-64 and aarch64 simulators) | |
| tvOS | Yes (aarch64 device, x86-64 and aarch64 simulators) | |
| visionOS | No | Blocked on [KT-59571: Add support for visionOS SDK](https://youtrack.jetbrains.com/issue/KT-59571/Add-support-for-visionOS-SDK) |
| Windows (JVM) | Yes (x86-64 only) | |
| Linux (JVM) | Yes (x86-64, aarch64) | |
| macOS (JVM) | Yes (x86-64, aarch64) | |
| Linux (native) | No | Maybe soon |
| Windows (native) | No | Maybe soon |
| JS | No | |
| WebAssembly | No | |
| HTTP connection method | Yes | |
| WebSocket connection method | Yes | Note: Only as an automated fallback for clients without backpressure support. |
## Swift SDK
| Platform / Feature | Supported? | Notes |
| ---------------------------------- | ---------- | --------------------------------------------------------------------------- |
| macOS | Yes | |
| iOS | Yes | |
| watchOS | Yes | watchOS 26 not supported yet |
| iPadOS | Yes | |
| tvOS | Yes | Added in v1.11.0 |
| macOS Catalyst | No | KT-40442 Support building Kotlin/Native for Mac Catalyst (x86-64 and arm64) |
| visionOS | No | KT-59571 Add support for visionOS SDK |
| Non-Apple targets (Linux, Windows) | No | No good way to link PowerSync |
| HTTP connection method | Yes | |
| WebSocket connection method | No | |
## .NET SDK
| Platform / Feature | Supported? | Notes |
| --------------------------- | ----------------- | ----------------------- |
| WPF | No | Some known build issues |
| MAUI | Yes | |
| Winforms | YMMV - not tested | |
| CLI Windows | Yes | |
| CLI Mac | Yes | |
| Avalonia UI | YMMV - not tested | |
| HTTP connection method | Yes | |
| WebSocket connection method | No | |
## Rust SDK
| Platform / Feature | Supported? | Notes |
| --------------------------- | ---------- | ------------------------------------------------------------------------------- |
| All | Yes | The SDK supports all `std` Rust targets, but is currently only tested on Linux. |
| HTTP connection method | Yes | |
| WebSocket connection method | No | |
# Usage & Billing
Source: https://docs.powersync.com/resources/usage-and-billing
Usage & billing for PowerSync Cloud (our cloud-hosted offering).
## How billing works
When using [PowerSync Cloud](https://www.powersync.com/pricing), your organization may contain multiple projects. Each project can contain multiple instances. For example:
* **Organization**: Acme Corporation
* **Project**: Travel App
* **Instance**: Staging
* **Instance**: Production
* **Project**: Admin App
* **Instance**: Staging
* **Instance**: Production
Read more: [Hierarchy: Organization, project, instance](/tools/powersync-dashboard#hierarchy-organization-project-instance)
Your organization only has a single subscription with a single plan (Free, Pro, Team or Enterprise).
Usage quotas (e.g. data processing, storage, sync operations) apply to your entire organization, regardless of the number of projects.
Upgrading to a paid plan unlocks all benefits for every project in your organization. For example, no instances in a "Pro" organization will be paused. See our [pricing page](https://www.powersync.com/pricing) for plan details.
### Invoicing
Usage for all projects in your organization is aggregated in a monthly billing cycle. These totals are reflected in your monthly invoice.
On our paid plans, the base fee (plus applicable tax) is charged at the start of every billing cycle.
If your month's usage exceeds your plan's limits, the overage will be charged at the end of the billing cycle.
Your current billing cycle's usage and upcoming invoice total can be tracked in the Dashboard - learn more in [View and manage your subscription](/resources/usage-and-billing#view-and-manage-your-subscription).
Invoices will be automatically charged to your provided payment card. Learn more in [Spending caps](/resources/usage-and-billing#spending-caps).
## View and manage your subscription
Your PowerSync usage and billing can be tracked and managed in the [PowerSync Dashboard](https://dashboard.powersync.com/) at the organization level.
### Subscriptions
In the "**Subscriptions**" tab you can:
1. View your active subscription
2. View your usage for the current billing cycle
3. View the amount of your upcoming invoice
4. Upgrade or cancel your [PowerSync subscription](https://www.powersync.com/pricing)
### Billing settings
In the "**Billing**" tab you can:
1. Update billing details, such as your billing organization name, address and email address which should receive invoices and receipts.
2. Manage your credit card(s) used for payments.
* Credit card details are never stored on our servers; all billing is securely processed by our payment provider, [Stripe](https://stripe.com/).
### Spending caps
Spending caps are not yet available, but are planned for a future release.
In the meantime, Pro plan invoices over \$100 and Team plan invoices over \$1,000 will not immediately be charged. In these cases, we will reach out to the organization owner for review. This threshold amount can be customized per organization — [let us know](/resources/contact-us) if you need a higher or lower amount configured.
## Limits
Usage limits for PowerSync Cloud are specified on our [Pricing page](https://www.powersync.com/pricing).
### Inactive instances
Instances on the Free plan that have had no deploys or client connections for over 7 days will be deprovisioned. This helps us optimize our cloud resources and ensure a better experience for all users.
If your instance is deprovisioned, you can easily restart it from the [PowerSync Dashboard](https://dashboard.powersync.com/) or [CLI](/tools/cli) by deploying your [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview) to it. Note that this will reprocess the instance's data from scratch, causing data to re-sync to existing users.
For projects in production we recommend subscribing to a [paid plan](https://www.powersync.com/pricing) to avoid any interruptions. To upgrade to a paid plan, navigate to your organization in the [PowerSync Dashboard](https://dashboard.powersync.com/) and visit the **Plans & Billing** section.
# Pricing Example
Source: https://docs.powersync.com/resources/usage-and-billing/pricing-example
Practical example of how pricing is calculated on the Pro or Team plan of PowerSync Cloud (usage-based pricing)
## Chat app example
Use this real-world example of a basic chat app to gauge your PowerSync usage and costs, on the [Pro plan](https://www.powersync.com/pricing) of PowerSync Cloud. This is not an exact estimate, but it can help you better understand how your PowerSync usage would be billed. Usage costs for the Team plan can be calculated in the same way since only the base plan fee differs.
This use case has the peculiarity that all data is user-generated and necessarily shared with other users (in the form of messages). More typical use cases might sync the same server-side data with many different users and have less user-generated data to sync.
### Overview: Costs by usage
To illustrate typical costs, consider an example chat app, where users can initiate chats with other users. Users can see their active chats in a list, read messages, and send messages.
For this app, all messages are stored on a backend database like Postgres. PowerSync is used to make sure users see new messages in real-time, and can access or create messages even when their devices are offline.
#### Assumptions
User base assumptions:
* **Daily Active Users (DAUs) are 10% of total app installations.** These are the users that actively open and use your app on a given day, which is typically a small subset of your total app installations. For the calculations below, we estimated DAUs as 10% of the total number of app installations. We use this assumption as an input to calculate the total number of messages sent and received every day.
* **Peak concurrent connections are 10% of DAUs.** This is the maximum number of users actively using your app at exactly the same time, which is typically a small subset of your Daily Active Users. For the calculations below, we estimated peak concurrent connections as 10% of Daily Active Users.
Data size, transfer and storage assumptions:
* **Messages are 0.25 KB in size on average.** 1KB can store around half a page’s worth of text. We assume the average message size on this app will be a quarter of that.
* **DAUs send and receive a combined total of 100 messages per day**, generating 100 rows in the messages table each day.
* **Message data is only stored on local databases for three months.** Using PowerSync’s [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview), only messages sent and received in the last 3 months are stored in the local database embedded within a user’s app.
* **No attachments synced through PowerSync.** Attachments like files or photos are not synced through PowerSync.
* **1 PowerSync instance.** The backend database connects to a single PowerSync instance. A more typical setup may use 2 PowerSync instances: one for syncing from the staging database and one for the production database. Since staging data volumes are often negligible, we’ve ignored that in this example.
#### Table of Assumptions
| DAUs as % of all installs | 10% |
| ------------------------------------------ | ------------------ |
| Peak concurrent connections as % of DAUs | 10% |
| Messages sent and received per day per DAU | 100 |
| Message average size | 0.25 KB |
| Messages kept on local database for | 3 months (90 days) |
For 50,000 app installs (5,000 Daily Active Users): **\$51/month** on the Pro plan.
## Data synced
| | |
| --------------------- | ------------------------------------------------------------------- |
| Data synced per month | 100 messages / day \* 5,000 DAUs \* 0.25 KB \* 30 = 3.75 GB / month |
| Total data synced costs / month | |
| ------------------------------- | --------------- |
| Usage: | 3.75 GB |
| Less included usage: | (30 GB) |
| Cost for additional usage: | \$0 |
| **Total usage costs** | **\$0 / month** |
## Data hosted
| | |
| ------------------------------------------ | ----------------------------------------------------------------- |
| Total size of replicated data to be hosted | 100 messages / day \* 5,000 DAUs \* 0.25 KB \* 90 days = 11.25 GB |
| Total data hosted costs / month | |
| ------------------------------- | ---------------- |
| Usage: | 11.25 GB |
| Less included usage: | (10 GB) |
| Cost for additional usage: | 1.25 GB (rounded up to 2 GB) \* \$1 / GB |
| **Total usage costs** | **\$2 / month** |
## Peak concurrent connections
| | |
| ------------------------------------------- | --------------------------------------------------- |
| Total number of peak concurrent connections | 5,000 DAUs \* 10% = 500 peak concurrent connections |
| Total peak concurrent connections costs / month | |
| ----------------------------------------------- | --------------- |
| Usage: | 500 |
| Less included usage: | (1,000) |
| Cost for additional usage: | \$0 |
| **Total usage costs** | **\$0 / month** |
| Total monthly costs | |
| --------------------------- | ---------------- |
| Pro Plan | \$49 / month |
| Data synced | \$ 0 / month |
| Data hosted | \$ 2 / month |
| Peak concurrent connections | \$ 0 / month |
| **Total monthly costs** | **\$51 / month** |
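The totals above can be reproduced with a short calculation. A sketch under the stated assumptions (the per-GB round-up is an assumption made here to match the \$2 data-hosted figure; actual billing granularity may differ):

```python theme={null}
import math

# Pro plan figures from the tables above.
BASE_FEE = 49
INCLUDED_SYNCED_GB = 30      # then $1.00 / GB
INCLUDED_HOSTED_GB = 10      # then $1.00 / GB
INCLUDED_CONNECTIONS = 1000  # then $30 / 1,000

def monthly_cost(daus, msgs_per_day=100, msg_kb=0.25, retention_days=90):
    synced_gb = daus * msgs_per_day * msg_kb * 30 / 1_000_000  # KB -> GB
    hosted_gb = daus * msgs_per_day * msg_kb * retention_days / 1_000_000
    peak_connections = daus * 0.10

    synced = max(0, math.ceil(synced_gb - INCLUDED_SYNCED_GB)) * 1.00
    hosted = max(0, math.ceil(hosted_gb - INCLUDED_HOSTED_GB)) * 1.00
    connections = max(0, math.ceil((peak_connections - INCLUDED_CONNECTIONS) / 1000)) * 30
    return BASE_FEE + synced + hosted + connections

assert monthly_cost(5_000) == 51     # 50,000 installs example
assert monthly_cost(100_000) == 579  # 1,000,000 installs example
```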
For 1,000,000 app installs (100,000 Daily Active Users): **\$579/month** on the Pro plan.
## Data synced
| | |
| --------------------- | ------------------------------------------------------------------- |
| Data synced per month | 100 messages / day \* 100,000 DAUs \* 0.25 KB \* 30 = 75 GB / month |
| Total data synced costs / month | |
| ------------------------------- | -------------------- |
| Usage: | 75 GB |
| Less included usage: | (30 GB) |
| Cost for additional usage: | 45 GB \* \$1.00 / GB |
| **Total usage costs** | **\$45 / month** |
## Data hosted
| | |
| ------------------------------------------ | ----------------------------------------------------------------- |
| Total size of replicated data to be hosted | 100 messages / day \* 100,000 DAUs \* 0.25 KB \* 90 days = 225 GB |
| Total data hosted costs / month | |
| ------------------------------- | ------------------ |
| Usage: | 225 GB |
| Less included usage: | (10 GB) |
| Cost for additional usage: | 215 GB \* \$1 / GB |
| **Total usage costs** | **\$215 / month** |
## Peak concurrent connections
| | |
| ------------------------------------------- | -------------------------------------------------------- |
| Total number of peak concurrent connections | 100,000 DAUs \* 10% = 10,000 peak concurrent connections |
| Total peak concurrent connections costs / month | |
| ----------------------------------------------- | --------------------- |
| Usage: | 10,000 |
| Less included usage: | (1,000) |
| Cost for additional usage: | 9,000 \* \$30 / 1,000 |
| **Total usage costs** | **\$270 / month** |
| Total monthly costs | |
| --------------------------- | ----------------- |
| Pro Plan | \$ 49 / month |
| Data synced | \$45 / month |
| Data hosted | \$215 / month |
| Peak concurrent connections | \$270 / month |
| **Total monthly costs** | **\$579 / month** |
For 10,000,000 app installs (1,000,000 Daily Active Users): **\$5,979/month** on the Pro plan.
At this scale, our [Enterprise plan](https://www.powersync.com/pricing) is typically more cost effective and provides more predictable billing.
## Data synced
| | |
| --------------------- | ---------------------------------------------------------------------- |
| Data synced per month | 100 messages / day \* 1,000,000 DAUs \* 0.25 KB \* 30 = 750 GB / month |
| Total data synced costs / month | |
| ------------------------------- | --------------------- |
| Usage: | 750 GB |
| Less included usage: | (30 GB) |
| Cost for additional usage: | 720 GB \* \$1.00 / GB |
| **Total usage costs** | **\$720 / month** |
## Data hosted
| | |
| ------------------------------------------ | --------------------------------------------------------------------- |
| Total size of replicated data to be hosted | 100 messages / day \* 1,000,000 DAUs \* 0.25 KB \* 90 days = 2,250 GB |
| Total data hosted costs / month | |
| ------------------------------- | -------------------- |
| Usage: | 2,250 GB |
| Less included usage: | (10 GB) |
| Cost for additional usage: | 2,240 GB \* \$1 / GB |
| **Total usage costs** | **\$2,240 / month** |
## Peak concurrent connections
| | |
| ------------------------------------------- | ----------------------------------------------------------- |
| Total number of peak concurrent connections | 1,000,000 DAUs \* 10% = 100,000 peak concurrent connections |
| Total peak concurrent connections costs / month | |
| ----------------------------------------------- | ---------------------- |
| Usage: | 100,000 |
| Less included usage: | (1,000) |
| Cost for additional usage: | 99,000 \* \$30 / 1,000 |
| **Total usage costs** | **\$2,970 / month** |
| Total monthly costs | |
| --------------------------- | ---------------------- |
| Pro Plan | \$ 49.00 / month |
| Data synced | \$720.00 / month |
| Data hosted | \$2,240.00 / month |
| Peak concurrent connections | \$2,970.00 / month |
| **Total monthly costs** | **\$5,979.00 / month** |
# FAQ & Troubleshooting
Source: https://docs.powersync.com/resources/usage-and-billing/usage-and-billing-faq
Usage and billing FAQs and troubleshooting strategies.
We have simplified our Cloud pricing plans and billing. Learn more in the [blog post](https://www.powersync.com/blog/simplified-cloud-pricing-based-on-data-synced).
We are continuously improving the reporting and tools to help you troubleshoot usage. Please [reach out](/resources/contact-us) if you have any feedback or need help understanding or managing your usage.
# Usage and Billing Metrics FAQs
You can track usage in two ways:
* **Individual instances**: Visit the [Usage metrics](/maintenance-ops/monitoring-and-alerting#usage-metrics) workspace in the PowerSync Dashboard to see metrics for a specific instance.
* **Organization-wide**: Go to your organization in the [PowerSync Dashboard](https://dashboard.powersync.com/) and check the **Plan Usage** section for aggregated metrics across all instances in your current billing cycle.
A sync operation occurs when a single row is synced from the PowerSync Service to a user device.
The PowerSync Service maintains a history of operations for each row to ensure efficient streaming and data integrity. This means:
* Every row change (insert, update, delete) creates a new operation, and this operations history accumulates over time.
* When a new client connects, it downloads the entire history on first sync.
* Existing clients only download new operations since their last sync.
As a result, sync operation counts often exceed the number of actual data mutations, especially for frequently updated rows. This is normal.
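A toy illustration of this accumulation (the structures below are purely illustrative, not the Service's actual storage format):

```python theme={null}
# One row updated 50 times leaves 50 operations in the bucket history.
op_log = [{"op_id": i, "op": "PUT", "row_id": "todo-1"} for i in range(1, 51)]

# A new client's first sync downloads the entire history: 50 ops for 1 row.
first_sync = op_log
assert len(first_sync) == 50

# An existing client that last synced at op_id 48 only downloads what's new.
incremental_sync = [op for op in op_log if op["op_id"] > 48]
assert len(incremental_sync) == 2

# Compacting collapses the history to the latest operation per row, so
# future first syncs download far fewer operations.
compacted = {op["row_id"]: op for op in op_log}
assert len(compacted) == 1
```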
You can manage operations history through:
* Daily automatic compacting (built into PowerSync Cloud)
* Regular [defragmentation](/maintenance-ops/compacting-buckets#defragmenting) (recommended for frequently updated data)
See the [Usage Troubleshooting](#usage-troubleshooting) section for more details.
**Billing note:** Sync operations are not billed under the [updated Cloud pricing model](https://www.powersync.com/blog/simplified-cloud-pricing-based-on-data-synced). Billing for data throughput is based on "data synced" instead. You can still use sync operation counts for diagnostics.
A concurrent connection is one client actively connected to the PowerSync Service. When a device calls `.connect()`, it establishes one long-lived connection for streaming real-time updates.
Key points about concurrent connections:
* Billing is based on peak concurrent connections, which is the highest number of simultaneous connections during the billing cycle.
* **Billing (Pro/Team)**: 1,000 connections are included, then \$30 per 1,000 over the included amount.
* PowerSync Cloud Pro plan is limited to 3,000 concurrent connections.
* PowerSync Cloud Team plan is limited to 10,000 concurrent connections by default.
* PowerSync Cloud Free plans are limited to 50 peak concurrent connections.
* When limits are reached, new connection attempts receive a 429 HTTP response while existing connections continue syncing. Clients automatically retry after a delay and reconnect once capacity is available.
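The PowerSync SDKs handle this retry behavior for you; the sketch below (with a hypothetical `try_connect` callable) only illustrates the backoff pattern for anyone implementing a custom client:

```python theme={null}
import time

def connect_with_backoff(try_connect, max_attempts=5, base_delay=1.0):
    """Retry `try_connect` (a hypothetical callable returning an HTTP
    status code) with exponential backoff while the service returns 429."""
    for attempt in range(max_attempts):
        status = try_connect()
        if status != 429:
            return status
        time.sleep(base_delay * 2 ** attempt)  # wait longer before each retry
    return 429

# Simulated service that is over capacity for the first two attempts.
responses = iter([429, 429, 200])
status = connect_with_backoff(lambda: next(responses), base_delay=0)
assert status == 200
```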
Data synced is the only metric used for data throughput billing in our [updated Cloud pricing model](https://www.powersync.com/blog/simplified-cloud-pricing-based-on-data-synced).
It measures the total uncompressed size of data synced from PowerSync Service instances to client devices. If the same data is synced by multiple users, each transfer counts toward the total.
**Billing (Pro/Team)**: 30 GB included, then \$1.00 per GB over the included amount.
The PowerSync Service hosts three types of data:
1. A current copy of the data, which should be roughly equal to the subset of your source data covered by your Sync Streams (or legacy Sync Rules).
2. A history of all operations on data in buckets, which can be larger than the source since it includes history and one row can be in multiple buckets.
3. Data for parameter lookups, which is typically small.
Because of this structure, your hosted data size may be larger than your source database size.
**Billing (Pro/Team)**: 10 GB included, then \$1.00 per GB over the included amount.
**Note:** The data processing billing metric has been removed in our [updated Cloud pricing model](https://www.powersync.com/blog/simplified-cloud-pricing-based-on-data-synced).
Data processing was calculated as the total uncompressed size of data replicated from your source database(s) to PowerSync Service instances, plus data synced from PowerSync Service instances to user devices. These values are still available in your [Usage metrics](/maintenance-ops/monitoring-and-alerting#usage-metrics) as "Data replicated per day/hour" and "Data synced per day/hour".
Data replicated refers to activity from your backend source database (Postgres, MongoDB, MySQL, or SQL Server) to the PowerSync Service — this is not billed.
Data synced refers to data streamed from the PowerSync Service to client devices — this is used for billing.
# Billing FAQs
Go to your organization in the [PowerSync Dashboard](https://dashboard.powersync.com/) and open the **Plan Usage** section. This shows your total usage (aggregated across all projects) for your current billing cycle. Data updates once a day.
Update your billing details in the **Plans & Billing** section of the [PowerSync Dashboard](https://dashboard.powersync.com/) at the organization level.
Review your historic invoices in the Stripe Customer Portal by signing in with your billing email [here](https://billing.stripe.com/p/login/7sI6pU48L42cguc7ss). We may surface these in the Dashboard in the future.
Under the updated pricing for Pro and Team plans, the following metrics are billed:
* **Data synced**: 30 GB included, then \$1.00 per GB over the included amount.
* **Peak concurrent connections**: 1,000 included, then \$30 per 1,000 over the included amount.
* **Data hosted**: 10 GB included, then \$1.00 per GB over the included amount (unchanged from before).
The following metrics are not billed:
* Replication operations (count)
* Data replicated (per GB)
* Sync operations (count)
See the blog post for details: [Simplified Cloud Pricing Based On Data Synced](https://www.powersync.com/blog/simplified-cloud-pricing-based-on-data-synced). For plan specifics, see [our Pricing](https://www.powersync.com/pricing).
# Usage Troubleshooting
If you're seeing unexpected spikes in your usage metrics, here's how to diagnose and fix common issues:
## Common Usage Patterns
### More Operations Than Rows
If you're syncing significantly more operations than you have rows in your database, this usually indicates a large operations history has built up. This is common with frequently updated data.
**Solution:** [Defragmentation](/maintenance-ops/compacting-buckets#defragmenting) reduces the operations history by compacting buckets. While defragmentation triggers additional sync operations for existing users, it significantly reduces operations for new installations.
Use the [Sync Diagnostics Client](https://github.com/powersync-ja/powersync-js/tree/main/tools/diagnostics-app) to compare total rows vs. operations synced to identify if this is affecting you.
### Repetitive Syncing by the Same User
If you see the same user syncing repeatedly in quick succession, this could indicate a client code issue.
**First steps to troubleshoot:**
1. **Check SDK version**: Ensure you're using the latest SDK version.
2. **Review client logs**: Check your client-side logs for connection issues or sync loops.
3. **Check instance logs**: Review [Instance logs](/maintenance-ops/monitoring-and-alerting#instance-logs) to see sync patterns and identify which users are affected.
If you need help, [contact us](/resources/contact-us) with your logs for further diagnosis.
## Concurrent Connections
The most common cause of excessive concurrent connections is opening multiple copies of `PowerSyncDatabase` and calling `.connect()` on each. Debug your connection handling by reviewing your code and [Instance logs](/maintenance-ops/monitoring-and-alerting#instance-logs). Ensure you're only opening one connection per user/session.
## Sync Operations
Sync operations are not billed in our updated pricing model, but they're useful for diagnosing spikes in data synced and understanding how data mutations affect usage.
While sync operations typically correspond to data mutations on synced rows (those included in your Sync Streams or legacy Sync Rules), several scenarios can affect your operation count:
### Key Scenarios
1. **New App Installations:**
New users need to sync the complete operations history. We help manage this by running automatic daily compacting on Cloud instances and providing manual [defragmentation options](/maintenance-ops/compacting-buckets#defragmenting) in the PowerSync Dashboard.
2. **Existing Users:**
Compacting and defragmenting reduce operations history but trigger additional sync operations for existing users. See our [defragmenting guide](/maintenance-ops/compacting-buckets#defragmenting) to optimize this.
3. **Sync Rule Deployments:**
When you deploy changes to Sync Streams (or legacy Sync Rules), PowerSync recreates buckets from scratch. New app installations sync fewer operations since the operations history is reset, but existing users temporarily experience increased sync operations as they re-sync the updated buckets.
We're working on [incremental sync rule reprocessing](https://roadmap.powersync.com/c/85-more-efficient-sync-reprocessing), which will only reprocess buckets whose definitions have changed.
4. **Unsynced Columns:**
Any row update triggers a new operation in the logical replication stream, regardless of which columns changed. PowerSync tracks changes at the row level, not the column level. This means updates to columns not included in your Sync Streams (or legacy Sync Rules) still create sync operations, and even a no-op update like `UPDATE mytable SET id = id` generates a new operation for each affected row.
Selectively syncing columns helps with data access control and reducing data transfer size, but it doesn't reduce the number of sync operations.
## Data Synced
Data synced measures the total uncompressed bytes streamed from the PowerSync Service to clients. Spikes typically come from either many sync operations (high churn) or large rows (large payloads), and can also occur during first-time syncs, defragmentation, or Sync Rule updates.
If your spikes in data synced correspond with spikes in sync operations, also see the [Sync Operations](#sync-operations) troubleshooting guidelines above.
### Diagnose Data Synced Spikes
1. **Pinpoint when it spiked:**
Use [Usage Metrics](/maintenance-ops/monitoring-and-alerting#usage-metrics) to find the exact hour/day of the spike.
2. **Inspect instance logs for size:**
In [Instance Logs](/maintenance-ops/monitoring-and-alerting#instance-logs), enable Metadata and search for "Sync stream complete" to see the size of data transferred and operations synced per stream.
You may need to scroll to load more logs. If you need a CSV export of your logs for a limited time-range, [contact us](/resources/contact-us). For certain scenarios, these are easier to search than the instance logs in the dashboard.
3. **Compare operations vs row sizes:**
If operations are high and size scales with it, you likely have tables being updated frequently, or a large operations history has built up. See our [defragmenting guide](/maintenance-ops/compacting-buckets#defragmenting). If operations are moderate but size is large, your rows likely contain large data (e.g., large JSON columns or blobs).
4. **Identify large payloads in your database:**
Check typical row sizes for frequently updated tables and look for large columns (e.g., long TEXT/JSON fields, embedded files).
5. **Consider recent maintenance and app changes:**
Defragmentation and Sync Rule deploys cause existing clients to re-sync content, temporarily increasing data synced. New app installs trigger initial full sync, so expect higher usage when onboarding new sets of users.
## Data Hosted
Your hosted data size may be larger than your source database size because it includes the history of all operations on data in buckets. This can be bigger than the source since it includes history, and one row can be in multiple buckets.
Data hosted can temporarily spike during Sync Rule deployments and defragmentation because buckets are reprocessed. During this window, both the previous and new bucket data may exist concurrently.
# Troubleshooting Strategies
## 1. Identify Timing
Use [Usage Metrics](/maintenance-ops/monitoring-and-alerting#usage-metrics) to pinpoint usage spikes.
## 2. Review Logs
Use [Instance Logs](/maintenance-ops/monitoring-and-alerting#instance-logs) to review sync service logs during the spike(s). Enable the **Metadata** option, then search for "Sync stream complete" entries (use your browser's search function) to review how many operations synced, the size of data transferred, and which clients/users were involved.
You may need to scroll to load more logs. If you need a CSV export of your logs for a limited time-range, [contact us](/resources/contact-us). For certain scenarios, these are easier to search than the instance logs in the dashboard.
## 3. Compare Metrics
Use the [Sync Diagnostics Client](https://github.com/powersync-ja/powersync-js/tree/main/tools/diagnostics-app) to compare total rows vs. operations synced to the user device. If you're seeing significantly more operations than rows, you might benefit from [defragmentation](/maintenance-ops/compacting-buckets#defragmenting).
## 4. Detailed Sync Operations
Use the [test-client](https://github.com/powersync-ja/powersync-service/blob/main/test-client/src/bin.ts)'s `fetch-operations` command with the `--raw` flag:
```bash theme={null}
node dist/bin.js fetch-operations --raw --token your-jwt --endpoint https://12345.powersync.journeyapps.com
```
This returns the individual operations for a user in JSON. Example response:
```json theme={null}
{
  "by_user[\"0b32a7cb-26fb-4993-9c60-9291a430337e\"]": [
    {
      "op_id": "0",
      "op": "CLEAR",
      "checksum": 2082236117
    },
    {
      "op_id": "1145383",
      "op": "PUT",
      "object_type": "todos",
      "object_id": "69688ea0-d3f6-46c9-81a2-cdbe54eeb54d",
      "checksum": 3246341700,
      "subkey": "6752f74f8176c1b5ba851480/fcb2cd3c-dcef-5c46-8b17-7b83d31fda2b",
      "data": "{\"id\":\"69688ea0-d3f6-46c9-81a2-cdbe54eeb54d\",\"created_at\":\"2024-09-16 10:16:35.352665Z\",\"description\":\"Buy groceries\",\"user_id\":\"0b32a7cb-26fb-4993-9c60-9291a430337e\"}"
    },
    {
      "op_id": "1145387",
      "op": "PUT",
      "object_type": "todos",
      "object_id": "7e4a4550-af3b-4876-a01a-10dc0084f0a6",
      "checksum": 1103209588,
      "subkey": "6752f74f8176c1b5ba851480/75bbc91d-cfc9-5b22-9f85-ea31a8720bf8",
      "data": "{\"id\":\"7e4a4550-af3b-4876-a01a-10dc0084f0a6\",\"created_at\":\"2024-10-07 16:17:37Z\",\"description\":\"Plant tomatoes\",\"user_id\":\"0b32a7cb-26fb-4993-9c60-9291a430337e\"}"
    }
  ]
}
```
# Accident Forgiveness
Accidentally ran up a high bill? No problem — we've got your back. Reach out to us at [support@powersync.com](mailto:support@powersync.com) and we'll work with you to resolve the issue and prevent it from happening again.
# Case Sensitivity
Source: https://docs.powersync.com/sync/advanced/case-sensitivity
For simplicity, we recommend using only lower case identifiers for all table/collection and column/field names used in PowerSync. If you need to use a different case, continue reading.
### Case in Sync Rules
PowerSync converts all table/collection and column/field names to lower-case by default in Sync Rule queries (this is how Postgres also behaves). To preserve the case, surround the names with double quotes, for example:
```sql theme={null}
SELECT "ID" as id, "Description", "ListID" FROM "TODOs" WHERE "TODOs"."ListID" = bucket.list_id
```
When using `SELECT *`, the original case is preserved for the returned columns/fields.
### Client-Side Case
On the client side, the case of table and column names in the [client-side schema](/intro/setup-guide#define-your-client-side-schema) must match the case produced by Sync Rules exactly. For the above example, use the following in Dart:
```dart theme={null}
Table('TODOs', [
Column.text('Description'),
Column.text('ListID')
])
```
SQLite itself is case-insensitive. When querying and modifying the data on the client, any case may be used. For example, the above table may be queried using `SELECT description FROM todos WHERE listid = ?`.
Operations (`PUT`/`PATCH`/`DELETE`) are stored in the upload queue using the case as defined in the schema above for table and column names, not the case used in queries.
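This case-insensitivity is plain SQLite behavior and can be verified with any SQLite client, independent of PowerSync:

```python theme={null}
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE TODOs (Description TEXT, ListID TEXT)")
db.execute("INSERT INTO todos (description, listid) VALUES (?, ?)",
           ("Buy milk", "list-1"))

# Identifier case doesn't matter when querying:
row = db.execute("SELECT Description FROM todos WHERE LISTID = ?",
                 ("list-1",)).fetchone()
assert row == ("Buy milk",)
```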
As another example, in this Sync Rule query:
```sql theme={null}
SELECT ID, todo_description as Description FROM todo_items as TODOs
```
Each identifier in the example is unquoted and converted to lower case. That means the client-side schema would be:
```dart theme={null}
Table('todos', [
Column.text('description')
])
```
# Client ID
Source: https://docs.powersync.com/sync/advanced/client-id
On the client, PowerSync only supports a single primary key column called `id`, of type `text`.
For tables where the client will create new rows:
* Postgres, MySQL and SQL Server: use a UUID for `id`. Use the `uuid()` helper to generate a random UUID (v4) on the client.
* MongoDB: use an `ObjectId` for `_id`. Generate an `ObjectId()` in your app code and store it in the client's `id` column as a string; this will map to MongoDB's `_id`.
To use a different column/field from the server-side database as the record ID on the client, use a column/field alias in your [Sync Streams](/sync/streams/overview) query (or [Sync Rules](/sync/rules/overview) data query):
```sql theme={null}
SELECT client_id as id FROM my_data
```
MongoDB uses `_id` as the name of the ID field in collections. You must use `SELECT _id as id` (and include any other columns you need) in [Sync Streams](/sync/streams/overview) queries and [Sync Rules](/sync/rules/overview) data queries when using MongoDB as the backend source database. When inserting new documents from the client, prefer `ObjectId` values for `_id` (stored in the client's `id` column).
Custom transformations can also be used for the ID column. This is useful in certain scenarios for example when dealing with join tables, because PowerSync doesn't currently support composite primary keys. For example:
```sql theme={null}
-- Concatenate multiple columns into a single id column
SELECT item_id || '.' || category_id as id, * FROM item_categories
-- the source database schema for the above example is CREATE TABLE item_categories(item_id uuid, category_id uuid, PRIMARY KEY(item_id, category_id));
```
If you want to upload data to a table with a custom record ID, ensure that `uploadData()` isn't blindly using a field named `id` when handling CRUD operations. See the [Sequential ID mapping tutorial](/client-sdks/advanced/sequential-id-mapping#update-client-to-use-uuids) for an example where the record ID is aliased to `uuid` on the backend.
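For the `item_categories` example above, the upload handler could split the concatenated `id` back into its component keys before writing to the backend. A minimal sketch (the CRUD-entry shape and helper name are hypothetical, not the SDK's actual format):

```python theme={null}
def split_composite_id(entry):
    """Split the concatenated client id back into the composite key columns."""
    item_id, category_id = entry["id"].split(".", 1)
    return {"item_id": item_id, "category_id": category_id, **entry.get("data", {})}

# Hypothetical CRUD entry for the item_categories table:
entry = {"id": "a1b2.c3d4", "data": {"position": 7}}
row = split_composite_id(entry)
assert row == {"item_id": "a1b2", "category_id": "c3d4", "position": 7}
```

Using a separator that cannot appear in the component values (UUIDs never contain `.`) keeps the split unambiguous.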
PowerSync does not perform any validation that IDs are unique. Duplicate IDs on a client could occur in any of these scenarios:
1. A non-unique column is used for the ID.
2. Multiple table partitions are used (Postgres), with the same ID present in different partitions.
3. Multiple data queries returning the same record. This is typically not an issue if the queries return the same values (same transformations used in each query).
We recommend using a unique index on the fields in the source database to ensure uniqueness — this will prevent (1) at least.
If the client does sync multiple records with the same ID, only one will be present in the final database. This would typically be the one modified last, but this is subject to change — do not depend on any specific record being picked.
### Postgres: Strategies for Auto-Incrementing IDs
With auto-incrementing / sequential IDs (e.g. `sequence` type in Postgres), the issue is that the ID can only be generated on the server, and not on the client while offline. If this *must* be used, there are some options, depending on the use case.
#### Option 1: Generate ID when server receives record
If the client does not use the ID as a reference (foreign key) elsewhere, insert any unique value on the client in the `id` field, then generate a new ID when the server receives it.
#### Option 2: Pre-create records on the server
For some use cases, it could work to have the server pre-create a set of e.g. 100 draft records for each user. While offline, the client can populate these records without needing to generate new IDs. This is similar to providing an employee with a paper book of blank invoices — each with an invoice number pre-printed.
This does mean that a user has a limit on how many records can be populated while offline.
Care must be taken if a user can populate the same records from different devices while offline — ideally each device must have a unique set of pre-created records.
#### Option 3: Use an ID mapping
Use UUIDs on the client, then map them to sequential IDs when performing an update on the server. This allows using a sequential primary key for each record, with a UUID as a secondary ID.
This mapping must be performed wherever the UUIDs are referenced, including for every foreign key column.
For more information, have a look at [Sequential ID Mapping](/client-sdks/advanced/sequential-id-mapping).
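A minimal sketch of the server-side mapping step, assuming a lookup from client UUIDs to sequential IDs has already been populated (the `uuidToSeq` map and the column list are illustrative):

```javascript theme={null}
// Replace client UUIDs with sequential ids before writing to the server
// database. Every column that holds a UUID reference must be mapped,
// including foreign keys (here 'id' and 'list_id' as an example).
function mapIds(row, uuidColumns, uuidToSeq) {
  const mapped = { ...row };
  for (const col of uuidColumns) {
    if (mapped[col] != null && uuidToSeq.has(mapped[col])) {
      mapped[col] = uuidToSeq.get(mapped[col]);
    }
  }
  return mapped;
}
```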
# Compatibility
Source: https://docs.powersync.com/sync/advanced/compatibility
Configure sync behavior: enable latest backwards-incompatible fixes (recommended for new projects) or keep legacy behavior.
To ensure consistency, it is important that the PowerSync Service does not interpret the same source row in different ways after updating to a new version.
At the same time, we want to fix bugs or other inaccuracies that have accumulated during the development of the Service.
## Overview
To make this trade‑off explicit, you choose whether to keep the existing behavior or turn on newer fixes that slightly change how data is processed.
Use the `config` block in your sync config YAML to choose the behavior. There are two ways to turn fixes on:
1. Set an `edition` to enable the full set of fixes for that edition. This is the recommended approach for new projects.
2. Toggle individual options for more fine‑grained control.
For older projects, the previous behavior remains the default. New projects should enable all current fixes.
### Configuration
For new projects, it is recommended to enable all current fixes by setting the latest available `edition`:
```yaml theme={null}
config:
  edition: 3 # Recommended to set to the latest available edition (see 'Supported fixes' table below)
streams:
  # ...
```
Or, specify options individually:
```yaml theme={null}
config:
  timestamps_iso8601: true
  versioned_bucket_ids: true
  fixed_json_extract: true
  custom_postgres_types: true
```
## Sync Streams Requirement
**New Sync Streams configurations should use `edition: 3`**, which enables the new compiler with an expanded SQL feature set (including `JOIN`, CTEs, multiple queries per stream, `BETWEEN`, `CASE`, and more):
```yaml theme={null}
config:
  edition: 3
streams:
  my_stream:
    query: SELECT * FROM my_table WHERE user_id = auth.user_id()
```
**Upgrading from alpha**: If you have existing Sync Streams using `edition: 2`, upgrade to `edition: 3` to enable the new compiler and its expanded SQL feature set. See [Supported SQL](/sync/supported-sql) for the full list of supported features.
## Supported fixes
This table lists all fixes currently supported:
| Name | Explanation | Added in Service version | Fixed in edition |
| ----------------------- | ------------------------------- | ------------------------ | ---------------- |
| `timestamps_iso8601` | [Link](#timestamps-iso8601) | 1.15.0 | 2 |
| `versioned_bucket_ids` | [Link](#versioned-bucket-ids) | 1.15.0 | 2 |
| `fixed_json_extract` | [Link](#fixed-json-extract) | 1.15.0 | 2 |
| `custom_postgres_types` | [Link](#custom-postgres-types)  | 1.15.3                   | 2                |
### `timestamps_iso8601`
PowerSync is supposed to encode timestamps according to the ISO-8601 standard.
Without this fix, the service encoded timestamps from MongoDB and Postgres source databases incorrectly.
To ensure time values from Postgres compare lexicographically, they're also padded to six digits of accuracy when encoded.
Since MongoDB only stores values with an accuracy of milliseconds, only three digits of accuracy are used.
For instance, the value `2025-09-22T14:29:30` would be encoded as follows:
* For Postgres: `2025-09-22 14:29:30` without the fix, `2025-09-22T14:29:30.000000` with the fix applied.
* For MongoDB: `2025-09-22 14:29:30.000` without the fix, `2025-09-22T14:29:30.000` with the fix applied.
Note that MySQL has never been affected by this issue, and thus behaves the same regardless of the option used.
#### Configurable sub-second datetime precision
When the `timestamps_iso8601` option is enabled, PowerSync will synchronize date and time values with a higher
precision depending on the source database.
You can use the `timestamp_max_precision` option to configure the actual precision to use.
For instance, a Postgres timestamp value would sync as `2025-09-22T14:29:30.000000` by default.
If you don't want that level of precision, you can use the following options to make it sync as `2025-09-22T14:29:30.000`:
```yaml sync-config.yaml theme={null}
config:
  edition: 3
  timestamp_max_precision: milliseconds
```
Valid options for `timestamp_max_precision` are `seconds`, `milliseconds`, `microseconds` and `nanoseconds`. When an explicit
value is given, all synced time values will use that precision.
If a source value has a higher precision, it will be truncated (it is not rounded).
If a source value has a lower precision, it will be padded (so setting the option to `microseconds` with a MongoDB source database
will sync values as `2025-09-22T14:29:30.123000`, with the last three sub-second digits always being set to zero).
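The truncation and padding rule can be modeled in a few lines of code. This is an illustration of the behavior described above, not the service's actual implementation:

```javascript theme={null}
// Models how sub-second digits are truncated (never rounded) or zero-padded
// to the configured precision. Illustration only.
function applyPrecision(timestamp, digits) {
  const [base, frac = ''] = timestamp.split('.');
  if (digits === 0) return base;
  return `${base}.${frac.slice(0, digits).padEnd(digits, '0')}`;
}

// applyPrecision('2025-09-22T14:29:30.123', 6)       -> '2025-09-22T14:29:30.123000'
// applyPrecision('2025-09-22T14:29:30.123456789', 3) -> '2025-09-22T14:29:30.123'
```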
If no option is given, the default precision depends on the source database:
| Source database | Default precision | Max precision | Notes |
| --------------- | ----------------- | ------------- | ------------------------------------------------------------------------------------------------------- |
| MongoDB | Milliseconds | Milliseconds | |
| Postgres | Microseconds | Microseconds | |
| MySQL | Milliseconds | Microseconds | Defaults to milliseconds, but can be expanded with the option. |
| SQL Server      | Nanoseconds       | Nanoseconds   | SQL Server supports 7 digits of accuracy; the sync service pads values to 9 digits for nanoseconds.     |
### `versioned_bucket_ids`
Sync Rules define buckets, to which the rows to sync are assigned. When you run a full defragmentation or
redeploy Sync Rules, the same bucket identifiers are re-used when the data is processed again.
Because the second iteration computes different checksums for the same bucket IDs, clients may download data
twice before realizing that something is off and starting from scratch.
Applying this fix improves client-side progress estimation and is more efficient, since data is not downloaded twice.
### `fixed_json_extract`
This fixes the `json_extract` function, as well as the `->` and `->>` operators, in Sync Rules to behave like
recent SQLite versions: the path is only split on `.` if it starts with `$.`.
For instance, `json_extract('{"foo.bar": "baz"}', 'foo.bar')` would evaluate to:
1. `baz` with the option enabled.
2. `null` with the option disabled.
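The difference can be sketched as a simplified model of the two path-splitting behaviors (illustration only, not the PowerSync Service implementation):

```javascript theme={null}
// Simplified model: with the fix, a path without the '$.' prefix is treated
// as one literal key; without the fix, the path is always split on '.'.
function jsonExtract(doc, path, fixed) {
  const obj = JSON.parse(doc);
  if (fixed && !path.startsWith('$.')) {
    return obj[path] ?? null; // whole path is a single key
  }
  const parts = path.replace(/^\$\./, '').split('.');
  return parts.reduce((v, k) => (v == null ? null : v[k] ?? null), obj);
}

// jsonExtract('{"foo.bar": "baz"}', 'foo.bar', true)  -> 'baz'
// jsonExtract('{"foo.bar": "baz"}', 'foo.bar', false) -> null
```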
### `custom_postgres_types`
If you have custom Postgres types in your backend source database schema, older versions of the PowerSync Service
would not recognize these values and sync them with the textual wire representation used by Postgres.
This is especially noticeable when defining `DOMAIN` types with e.g. a `REAL` inner type: The wrapped
`DOMAIN` type should get synced as a real value as well, but it would actually get synced as a string.
With this fix applied:
* `DOMAIN` types are synced as their inner type.
* Array types of custom types get parsed correctly, and sync as a JSON array.
* Custom types get parsed and synced as a JSON object containing their members.
* Ranges sync as a JSON object corresponding to the following TypeScript definition:
```TypeScript theme={null}
export type Range<T> =
  | {
      lower: T | null;
      upper: T | null;
      lower_exclusive: boolean;
      upper_exclusive: boolean;
    }
  | 'empty';
```
* Multi-ranges sync as an array of ranges.
# Multiple Client Versions
Source: https://docs.powersync.com/sync/advanced/multiple-client-versions
In some cases, different client versions may need different output schemas.
When schema changes are additive, old clients would just ignore the new tables and columns, and no special handling is required. However, in some cases, the schema changes may be more drastic and may need separate Sync Streams (or Sync Rules) based on the client version.
To distinguish between client versions, clients can pass version information to the PowerSync Service. In [Sync Streams](/sync/streams/overview), these are called connection parameters (accessed via `connection.parameter()`). In legacy [Sync Rules](/sync/rules/overview), these are called [client parameters](/sync/rules/client-parameters).
An example using different table names based on the client's `schema_version`:
```yaml theme={null}
# Client passes connection params, e.g.: {"schema_version": "1"}
streams:
  assets_v1:
    query: SELECT * FROM assets AS assets_v1
      WHERE user_id = auth.user_id()
      AND connection.parameter('schema_version') = '1'
  assets_v2:
    query: SELECT * FROM assets AS assets_v2
      WHERE user_id = auth.user_id()
      AND connection.parameter('schema_version') = '2'
```
```yaml theme={null}
# Client passes in, e.g.: "params": {"schema_version": "1"}
assets_v1:
  parameters: SELECT request.user_id() AS user_id
    WHERE request.parameters() ->> 'schema_version' = '1'
  data:
    - SELECT * FROM assets AS assets_v1 WHERE user_id = bucket.user_id
assets_v2:
  parameters: SELECT request.user_id() AS user_id
    WHERE request.parameters() ->> 'schema_version' = '2'
  data:
    - SELECT * FROM assets AS assets_v2 WHERE user_id = bucket.user_id
```
Handle queries based on parameters set by the client with care. The client can send any value for these parameters, so it's not a good place to do authorization. If the parameter must be authenticated, use parameters from the JWT instead.
# Advanced Topics
Source: https://docs.powersync.com/sync/advanced/overview
Advanced topics relating to Sync Streams / Sync Rules.
# Partitioned Tables (Postgres)
Source: https://docs.powersync.com/sync/advanced/partitioned-tables
Partitioned tables and wildcard table name matching
For partitioned tables in Postgres, each individual partition is replicated and processed using [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)).
To use the same queries and same output table name for each partition, use `%` for wildcard suffix matching of the table name:
```yaml theme={null}
streams:
  user_todos:
    queries:
      # Wildcard matches all user partition tables (e.g. users_2024, users_2025)
      - SELECT * FROM "users_%" WHERE id = auth.user_id()
      # Wildcard matches all todo partition tables (e.g. todos_2024, todos_2025)
      - SELECT * FROM "todos_%" AS todos WHERE user_id = auth.user_id()
```
```yaml theme={null}
by_user:
  # Use wildcard in a parameter query
  parameters: SELECT id AS user_id FROM "users_%"
  data:
    # Use wildcard in a data query
    - SELECT * FROM "todos_%" AS todos WHERE user_id = bucket.user_id
```
The wildcard character can only be used as the last character in the table name.
When using wildcard table names, the original table suffix is available in the special `_table_suffix` column. This works the same way in both Sync Streams and Sync Rules:
```yaml theme={null}
streams:
  active_todos:
    query: SELECT * FROM "todos_%" AS todos WHERE _table_suffix != 'archived'
```
```sql theme={null}
SELECT * FROM "todos_%" AS todos WHERE _table_suffix != 'archived'
```
When no table alias is provided, the original table name is preserved.
`publish_via_partition_root` on the publication is not supported — the individual partitions must be published.
# Prioritized Sync
Source: https://docs.powersync.com/sync/advanced/prioritized-sync
In some scenarios, you may want to sync tables using different priorities. For example, you may want to sync a subset of all tables first to log a user in as fast as possible, then sync the remaining tables in the background.
## Overview
PowerSync supports defining sync priorities, which allows you to control the sync order for different data. This is particularly useful when certain data should be available sooner than others.
In Sync Streams, priorities are assigned to streams and PowerSync manages the underlying buckets internally. (In legacy Sync Rules, priorities were assigned to buckets explicitly.)
**Availability**
This feature was introduced in version **1.7.1** of the PowerSync Service, and in the following SDK versions:
* [Flutter v1.12.0](/client-sdks/reference/flutter)
* [React Native v1.18.1](/client-sdks/reference/react-native-and-expo)
* [JavaScript Web v1.14.2](/client-sdks/reference/javascript-web)
* [Kotlin v1.0.0-BETA26](/client-sdks/reference/kotlin)
* [Swift v1.0.0-Beta.8](/client-sdks/reference/swift)
* [.NET v0.0.6-alpha.1](/client-sdks/reference/dotnet)
## Why Use Sync Priorities?
PowerSync's standard sync protocol ensures that:
* The local data view is only updated when a fully consistent checkpoint is available.
* All pending local changes must be uploaded, acknowledged, and synced back before new data is applied.
While this guarantees consistency, it can lead to delays, especially for large datasets or continuous client-side updates. Sync priorities provide a way to speed up syncing of high-priority data while still maintaining overall integrity.
## How It Works
Each bucket is assigned a priority value between 0 and 3, where:
* 0 is the highest priority and has special behavior (detailed below).
* 3 is the default and lowest priority.
* Lower numbers indicate higher priority.
Higher-priority data syncs first, and lower-priority data syncs later. If you only use a single priority, there is no difference between priorities 1-3. The difference only comes in when you use multiple different priorities.
In Sync Streams, you assign priorities directly to streams. PowerSync manages buckets internally, so you don't need to think about bucket structure. Each stream with a given priority will have its data synced at that priority level.
```yaml theme={null}
streams:
  lists:
    auto_subscribe: true
    query: SELECT * FROM lists WHERE owner_id = auth.user_id()
    priority: 1 # Syncs first
  todos:
    auto_subscribe: true
    query: SELECT * FROM todos WHERE list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
    priority: 2 # Syncs after lists
```
Clients can also override the priority when subscribing:
```js theme={null}
// Override the stream's default priority for this subscription
const sub = await db.syncStream('todos', { list_id: 'abc' }).subscribe({ priority: 1 });
```
When different components subscribe to the same stream with the same parameters but different priorities, PowerSync uses the highest priority for syncing. That higher priority is kept until the subscription ends (or its TTL expires). Subscriptions with different parameters are independent and do not conflict.
In Sync Rules, you assign priorities to bucket definitions. The priority determines when data in that bucket syncs relative to other buckets.
```yaml theme={null}
bucket_definitions:
  user_lists:
    priority: 1 # Syncs first
    parameters: SELECT id AS list_id FROM lists WHERE user_id = request.user_id()
    data:
      - SELECT * FROM lists WHERE id = bucket.list_id
  user_todos:
    priority: 2 # Syncs after lists
    parameters: SELECT id AS list_id FROM lists WHERE user_id = request.user_id()
    data:
      - SELECT * FROM todos WHERE list_id = bucket.list_id
```
## Syntax and Configuration
In Sync Streams, set the `priority` option on the stream definition:
```yaml theme={null}
streams:
  high_priority_data:
    auto_subscribe: true
    query: SELECT * FROM important_table WHERE user_id = auth.user_id()
    priority: 1
  low_priority_data:
    auto_subscribe: true
    query: SELECT * FROM background_table WHERE user_id = auth.user_id()
    priority: 2
```
In Sync Rules, priorities can be defined using the `priority` YAML key on bucket definitions, or with the `_priority` attribute inside parameter queries:
```yaml theme={null}
bucket_definitions:
  # Using the `priority` YAML key
  user_data:
    priority: 1
    parameters: SELECT request.user_id() AS id WHERE ...
    data:
      # ...
  # Using the `_priority` attribute (useful for multiple parameter queries with different priorities)
  project_data:
    parameters: SELECT id AS project_id, 2 AS _priority FROM projects WHERE ...
    data:
      # ...
```
Priorities must be static and cannot depend on row values within a parameter query.
## Example: Syncing Lists Before Todos
Consider a scenario where you want to display lists immediately while loading todos in the background. This approach allows users to view and interact with lists right away without waiting for todos to sync.
```yaml theme={null}
config:
  edition: 3
streams:
  lists:
    auto_subscribe: true
    query: SELECT * FROM lists WHERE owner_id = auth.user_id()
    priority: 1 # Syncs first
  todos:
    auto_subscribe: true
    query: |
      SELECT * FROM todos
      WHERE list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
    priority: 2 # Syncs after lists
```
The `lists` stream syncs first (priority 1), allowing users to see and interact with their lists immediately. The `todos` stream syncs afterward (priority 2), loading in the background.
```yaml theme={null}
bucket_definitions:
  user_lists:
    priority: 1 # Syncs first
    parameters: SELECT id AS list_id FROM lists WHERE user_id = request.user_id()
    data:
      - SELECT * FROM lists WHERE id = bucket.list_id
  user_todos:
    priority: 2 # Syncs after lists
    parameters: SELECT id AS list_id FROM lists WHERE user_id = request.user_id()
    data:
      - SELECT * FROM todos WHERE list_id = bucket.list_id
```
The `user_lists` bucket syncs first (priority 1), allowing users to see and interact with their lists immediately. The `user_todos` bucket syncs afterward (priority 2), loading in the background.
## Behavioral Considerations
* **Interruption for Higher Priority Data**: Syncing lower-priority data *may* be interrupted if new data for higher-priority streams/buckets arrives.
* **Local Changes & Consistency**: If local writes fail due to validation or permission issues, they are only reverted after *all* data has synced.
* **Deleted Data**: Deleted data may only be removed after *all* priorities have completed syncing. Future updates may improve this behavior.
* **Data Ordering**: Lower-priority data will never appear before higher-priority data.
## Special Case: Priority 0
Priority 0 buckets sync regardless of pending uploads.
For example, in a collaborative document editing app (e.g., using Yjs), each change is stored as a separate row. Since out-of-order updates don’t affect document integrity, Priority 0 can ensure immediate availability of updates.
Caution: If misused, Priority 0 may cause flickering or inconsistencies, as updates could arrive out of order.
## Consistency Considerations
PowerSync's full consistency guarantees only apply once all priorities have completed syncing.
When higher-priority data is synced, all inserts and updates at that priority level will be consistent. However, deletes are only applied when the full sync completes, so you may still have some stale data at those priority levels.
Consider the following example:
Imagine a task management app where users create lists and todos. Some users have millions of todos. To improve first-load speed:
* Lists are assigned Priority 1, syncing first to allow UI rendering.
* Todos are assigned Priority 2, loading in the background.
Now, if another user adds new todos, it’s possible for the list count (synced at Priority 1) to temporarily not match the actual todos (synced at Priority 2). If real-time accuracy is required, both lists and todos should use the same priority.
## Client-Side Considerations
PowerSync's client SDKs provide APIs to allow applications to track sync status at different priority levels. Developers can leverage these to ensure critical data is available before proceeding with UI updates or background processing. This includes:
1. `waitForFirstSync(priority: int)`: when passing the optional `priority` parameter, this method waits for the specified priority level to complete syncing.
2. `SyncStatus.priorityStatusEntries()`: a list containing sync information for each priority that was seen by the PowerSync Service.
3. `SyncStatus.statusForPriority(priority: int)`: takes a fixed priority and returns the sync state for that priority by looking it up in `priorityStatusEntries`.
## Example
Using the above, we can render a lists component only once the user's lists (priority 1) have completed syncing, and otherwise display a message indicating that the sync is still in progress:
```dart theme={null}
// Define the priority level for lists
static final _listsPriority = BucketPriority(1);

@override
Widget build(BuildContext context) {
  // Use FutureBuilder to wait for the first sync of the specified priority to complete
  return FutureBuilder(
    future: db.waitForFirstSync(priority: _listsPriority),
    builder: (context, snapshot) {
      if (snapshot.connectionState == ConnectionState.done) {
        // Use StreamBuilder to render the lists once the sync completes
        return StreamBuilder(
          stream: TodoList.watchListsWithStats(),
          builder: (context, snapshot) {
            if (snapshot.data case final todoLists?) {
              return ListView(
                padding: const EdgeInsets.symmetric(vertical: 8.0),
                children: todoLists.map((list) {
                  return ListItemWidget(list: list);
                }).toList(),
              );
            } else {
              return const CircularProgressIndicator();
            }
          },
        );
      } else {
        return const Text('Busy with sync...');
      }
    },
  );
}
```
Example implementations of prioritized sync are also available in the following apps:
* Flutter: [Supabase To-Do List](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist)
* Kotlin:
* [Supabase To-Do List (KMP)](https://github.com/powersync-ja/powersync-kotlin/blob/main/demos/supabase-todolist/shared/src/commonMain/kotlin/com/powersync/demos/App.kt#L46)
* [Supabase To-Do List (Android)](https://github.com/powersync-ja/powersync-kotlin/blob/main/demos/android-supabase-todolist/app/src/main/java/com/powersync/androidexample/screens/HomeScreen.kt#L69)
* Swift: [Supabase To-Do List](https://github.com/powersync-ja/powersync-swift/tree/main/Demos/PowerSyncExample)
# Schemas and Connections
Source: https://docs.powersync.com/sync/advanced/schemas-and-connections
## Schemas (Postgres)
When no schema is specified, the Postgres `public` schema is used for every query. A different schema can be specified as a prefix:
```sql theme={null}
-- Note: the schema must be in double quotes
SELECT * FROM "other"."assets"
```
## High Availability / Replicated Databases (Postgres)
When the source Postgres database is replicated, for example with Amazon RDS Multi-AZ deployments, specify a single connection with multiple host endpoints. Each host endpoint will be tried in sequence, with the first available primary connection being used.
For this, each endpoint must point to the same physical database, with the same replication slots. This is the case when block-level replication is used between the databases, but not when streaming physical or logical replication is used. In those cases, replication slots are unique on each host, and all data would be re-synced in a fail-over event.
## Multiple Separate Database Connections (Planned)
This feature will be available in a future release. See this [item on our roadmap](https://roadmap.powersync.com/c/84-support-for-sharding-multiple-database-connections).
In the future, it will be possible to configure PowerSync with multiple separate source database connections, where each connection is concurrently replicated.
You should not add multiple connections to multiple replicas of the same database — this would cause data duplication. Only use this when the data on each connection does not overlap.
It will be possible for each connection to be configured with a "tag", to distinguish these connections in Sync Rules. The same tag may be used for multiple connections (if the schema is the same in each).
By default, queries will reference the "default" tag. To use a different connection or connections, assign a different tag, and specify it in the query as a schema prefix. In this case, the schema itself must also be specified.
```sql theme={null}
-- Note the usage of quotes here
SELECT * FROM "secondconnection.public"."assets"
```
# Sharded Databases
Source: https://docs.powersync.com/sync/advanced/sharded-databases
Sharding is often used in backend databases to handle higher data volumes.
In the case of Postgres, PowerSync cannot replicate [foreign tables](https://www.postgresql.org/docs/current/ddl-foreign-data.html).
However, PowerSync does have options available to support sharded databases in general.
When using MongoDB, MySQL, or SQL Server as the backend source database, PowerSync does not currently support connecting to sharded clusters.
The primary options are:
1. Use a separate PowerSync Service instance per database.
2. Add a connection for each database in the same PowerSync Service instance ([planned](https://roadmap.powersync.com/c/84-support-for-sharding-multiple-database-connections); this capability will be available in a future release).
Where feasible, using separate PowerSync Service instances would give better performance and give more control over how changes are rolled out, especially around Sync Rule reprocessing.
Some specific scenarios:
#### 1. Different tables on different databases
This is common when separate "services" use separate databases, but multiple tables across those databases need to be synced to the same users.
Use a single PowerSync Service instance, with a separate connection for each source database ([planned](https://roadmap.powersync.com/c/84-support-for-sharding-multiple-database-connections); this capability will be available in a future release). Use a unique [connection tag](/sync/advanced/schemas-and-connections) for each source database, allowing them to be distinguished in your [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview).
#### 2a. All data for any single customer is contained in a single shard
This is common when sharding per customer account / organization.
In this case, use a separate PowerSync Service instance for each database.
#### 2b. Most customer data is in a single shard, but some data is in a shared database
If the amount of shared data is small, still use a separate PowerSync Service instance for each database, but also add the shared database connection to each PowerSync Service instance using a separate connection tag ([planned](https://roadmap.powersync.com/c/84-support-for-sharding-multiple-database-connections); this capability will be available in a future release).
#### 3. Only some tables are sharded
In some cases, most tables would be on a shared server, with only a few large tables being sharded.
For this case, use a single PowerSync Service instance. Add each shard as a new connection on this instance ([planned](https://roadmap.powersync.com/c/84-support-for-sharding-multiple-database-connections); this capability will be available in a future release), all with the same connection tag, so that the same [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview) apply to each.
# Guide: Sync Data by Time
Source: https://docs.powersync.com/sync/advanced/sync-data-by-time
Learn how to sync data by time using Sync Streams or legacy Sync Rules.
A common need in offline-first apps is syncing data based on time, for example, only syncing issues updated in the last 7 days instead of the entire dataset.
You might expect to write something like:
```yaml theme={null}
# Sync Streams
streams:
  issues_after_start_date:
    query: SELECT * FROM issues WHERE updated_at > subscription.parameter('start_at')

# Sync Rules
bucket_definitions:
  issues_after_start_date:
    parameters: SELECT request.parameters() ->> 'start_at' as start_at
    data:
      - SELECT * FROM issues WHERE updated_at > bucket.start_at
```
However, this won't work. Here's why.
# The Problem
PowerSync pre-computes and caches which rows belong to which parameters to enable efficient streaming. This means parameter-based filtering is limited to equality checks (`=`, `IN`, `IS NULL`) — range operators like `>`, `<`, `>=`, or `<=` are not supported on parameters.
Additionally, time-based functions like `now()` aren't allowed in parameter expressions because the result changes depending on when the query runs, making pre-computation impossible.
These constraints apply to both Sync Streams and legacy Sync Rules.
This guide covers a few practical workarounds.
We are working on a more elegant solution for this problem. When ready, this guide will be updated accordingly.
# Workarounds
## 1: Pre-defined time ranges
Add a boolean column to your table that indicates whether a row falls within a specific time range. Keep this column updated in your source database using a scheduled job.
For example, add an `updated_this_week` column:
```sql theme={null}
ALTER TABLE issues ADD COLUMN updated_this_week BOOLEAN DEFAULT false;
```
Update it periodically using a cron job (e.g., with pg\_cron):
```sql theme={null}
UPDATE issues SET updated_this_week = (updated_at > now() - interval '7 days');
```
```yaml theme={null}
config:
  edition: 3
streams:
  recent_issues:
    auto_subscribe: true
    query: SELECT * FROM issues WHERE updated_this_week = true
```
For multiple time ranges, define a stream per range and let the client subscribe to the one it needs:
```yaml theme={null}
config:
  edition: 3
streams:
  issues_1week:
    query: SELECT * FROM issues WHERE updated_this_week = true
  issues_1month:
    query: SELECT * FROM issues WHERE updated_this_month = true
```
The client subscribes to the desired range:
```javascript theme={null}
// Subscribe to one-week range
await db.syncStream('issues_1week').subscribe();
// Or subscribe to one-month range
await db.syncStream('issues_1month').subscribe();
```
```yaml theme={null}
bucket_definitions:
  recent_issues:
    data:
      - SELECT * FROM issues WHERE updated_this_week = true
```
For multiple time ranges, add multiple bucket definitions and let the client choose which bucket to sync:
```yaml theme={null}
bucket_definitions:
  issues_1week:
    parameters: SELECT WHERE request.parameters() ->> 'range' = '1week'
    data:
      - SELECT * FROM issues WHERE updated_this_week = true
  issues_1month:
    parameters: SELECT WHERE request.parameters() ->> 'range' = '1month'
    data:
      - SELECT * FROM issues WHERE updated_this_month = true
```
The client passes the desired range as a client parameter:
```javascript theme={null}
await db.connect(connector, {
  params: {
    range: '1week',
  },
})
```
This approach works well when you have a small, fixed set of time ranges. However, it requires schema changes and a scheduled job to keep the columns updated.
If you need more flexibility like letting users pick arbitrary date ranges, see Workaround 2 below.
## 2: Buckets Per Date
Instead of pre-defined ranges, create a bucket for each date and let the client specify which dates to sync.
Use `substring` to extract the date portion from a timestamp and match it with `=`:
```yaml theme={null}
config:
  edition: 3
streams:
  issues_by_date:
    query: SELECT * FROM issues WHERE substring(updated_at, 1, 10) = subscription.parameter('date')
```
The client subscribes once per date it wants to sync:
```javascript theme={null}
await db.syncStream('issues_by_date', { date: '2026-01-07' }).subscribe();
await db.syncStream('issues_by_date', { date: '2026-01-08' }).subscribe();
await db.syncStream('issues_by_date', { date: '2026-01-09' }).subscribe();
```
Each subscription can be managed independently — you can subscribe and unsubscribe to individual dates without affecting others.
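For example, a client could build the subscription list for a rolling window of dates. The `lastNDays` helper below is a hypothetical sketch, not part of the PowerSync SDK:

```javascript theme={null}
// Build 'YYYY-MM-DD' partition keys for the last n days (UTC), newest first.
function lastNDays(n, today = new Date()) {
  return Array.from({ length: n }, (_, i) => {
    const d = new Date(today);
    d.setUTCDate(d.getUTCDate() - i);
    return d.toISOString().slice(0, 10);
  });
}

// Usage with the stream above:
// for (const date of lastNDays(7)) {
//   await db.syncStream('issues_by_date', { date }).subscribe();
// }
```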
```yaml theme={null}
bucket_definitions:
  issues_by_update_at:
    parameters: SELECT value as date FROM json_each(request.parameters() ->> 'dates')
    data:
      - SELECT * FROM issues WHERE substring(updated_at, 1, 10) = bucket.date
```
The client passes the dates it wants as client parameters:
```javascript theme={null}
await db.connect(connector, {
  params: {
    dates: ["2026-01-07", "2026-01-08", "2026-01-09"],
  },
})
```
This gives users full control over which dates to sync, with no schema changes or scheduled jobs required.
The trade-off is granularity. In this example we're using daily buckets. If you need finer precision (hourly), syncing a large range means many buckets, which can degrade sync performance and approach [PowerSync's limit of 1,000 buckets per user](https://docs.powersync.com/resources/performance-and-limits#performance-and-limits). If you use larger buckets (monthly), you lose the ability to filter accurately.
You must commit to a single granularity: daily means too many buckets for long ranges, while monthly means losing precision for recent data. If that's a problem (say, you want hourly precision for recent data but don't want hundreds of buckets when syncing a full month), see Workaround 3 below.
## 3: Multiple Granularities
Combine multiple granularities in a single definition. This lets you use larger buckets (days) for older data and smaller buckets (hours, minutes) for recent data.
```yaml theme={null}
config:
edition: 3
streams:
issues_by_partition:
queries:
# By day (e.g., "2026-01-07")
- SELECT * FROM issues WHERE substring(updated_at, 1, 10) = subscription.parameter('partition')
# By hour (e.g., "2026-01-07T14")
- SELECT * FROM issues WHERE substring(updated_at, 1, 13) = subscription.parameter('partition')
# By 10 minutes (e.g., "2026-01-07T14:3")
- SELECT * FROM issues WHERE substring(updated_at, 1, 15) = subscription.parameter('partition')
```
The client subscribes once per partition, mixing granularities as needed:
```javascript theme={null}
await db.syncStream('issues_by_partition', { partition: '2026-01-05' }).subscribe();
await db.syncStream('issues_by_partition', { partition: '2026-01-06' }).subscribe();
await db.syncStream('issues_by_partition', { partition: '2026-01-07T10' }).subscribe();
await db.syncStream('issues_by_partition', { partition: '2026-01-07T11' }).subscribe();
await db.syncStream('issues_by_partition', { partition: '2026-01-07T12:0' }).subscribe();
await db.syncStream('issues_by_partition', { partition: '2026-01-07T12:1' }).subscribe();
await db.syncStream('issues_by_partition', { partition: '2026-01-07T12:2' }).subscribe();
```
Each query naturally acts as a filter based on the length of the partition value — a day-format partition only matches the day query, an hour-format partition only matches the hour query, and so on.
The equivalent with legacy Sync Rules uses client parameters:
```yaml theme={null}
bucket_definitions:
issues_by_time:
parameters: SELECT value as partition FROM json_each(request.parameters() ->> 'partitions')
data:
# By day (e.g., "2026-01-07")
- SELECT * FROM issues WHERE substring(updated_at, 1, 10) = bucket.partition
# By hour (e.g., "2026-01-07T14")
- SELECT * FROM issues WHERE substring(updated_at, 1, 13) = bucket.partition
# By 10 minutes (e.g., "2026-01-07T14:3")
- SELECT * FROM issues WHERE substring(updated_at, 1, 15) = bucket.partition
```
The client then mixes granularities as needed:
```javascript theme={null}
await db.connect(connector, {
params: {
partitions: [
"2026-01-05",
"2026-01-06",
"2026-01-07T10",
"2026-01-07T11",
"2026-01-07T12:0",
"2026-01-07T12:1",
"2026-01-07T12:2"
]
},
})
```
This syncs January 5–6 by day, the morning of January 7 by hour, and the last 30 minutes in 10-minute chunks, without creating hundreds of buckets.
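The client has to compute these partition keys from timestamps. A minimal sketch, assuming ISO-8601 timestamps (`partitionKey` is a hypothetical helper, not part of the SDK); note that SQL's `substring` is 1-indexed while JavaScript's `substring` is 0-indexed, so the prefix lengths carry over directly:

```javascript theme={null}
// Hypothetical helper (not part of the PowerSync SDK): derive the partition
// key for an ISO-8601 timestamp at each granularity used above. The prefix
// lengths mirror the substring(updated_at, 1, N) calls in the configuration.
function partitionKey(isoTimestamp, granularity) {
  const prefixLengths = {
    day: 10,        // "2026-01-07"
    hour: 13,       // "2026-01-07T14"
    tenMinutes: 15, // "2026-01-07T14:3"
  };
  return isoTimestamp.substring(0, prefixLengths[granularity]);
}
```

Client code would call this once per time segment, choosing coarser granularities for older data.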
The trade-off is complexity. The client must decide which granularity to use for each time segment, and each row belongs to multiple buckets, which increases replication overhead.
When using multiple time granularities (e.g., monthly, daily, hourly), rows move between buckets as time passes. Since each granularity creates a different bucket ID, the client must re-download the row from the new bucket even if it already has the data. This re-download overhead can nullify the benefits of granular filtering. For this reason, in some cases it may be better to sync entire months avoiding the re-sync overhead, even if you sync more data initially.
To summarize the trade-offs: each row belongs to multiple buckets (replication overhead), rows moving between granularities cause re-sync overhead, and the added complexity may not justify the gains over Workaround 2.
# Conclusion
Time-based sync is a common need, but PowerSync doesn't support range operators or time-based functions on parameters directly.
To recap the workarounds:
* **Pre-defined time ranges** — Simplest option. Use when you have a fixed set of time ranges and don't mind schema changes.
* **Buckets Per Date** — More flexible. Use when you need arbitrary date ranges but can live with a single granularity.
* **Multiple Granularities** — Most flexible. Use when you need precision for recent data without syncing hundreds of buckets. Be mindful of the re-sync overhead.
We're working on a more elegant solution. This guide will be updated when it's ready.
# Sync Streams and Sync Rules
Source: https://docs.powersync.com/sync/overview
PowerSync Sync Streams and the legacy Sync Rules allow developers to control which data syncs to which clients/devices (i.e. they enable partial sync).
## Sync Streams (Beta) — Recommended
[Sync Streams](/sync/streams/overview) are now in beta and considered production-ready. We recommend Sync Streams for all new projects, and encourage existing projects to [migrate](/sync/streams/migration). Sync Streams are designed to give developers flexibility to either dynamically sync data on-demand, or to "sync data upfront" for offline-first use cases.
Key improvements in Sync Streams over legacy Sync Rules include:
* **On-demand syncing**: You define Sync Streams on the PowerSync Service, and a client can then subscribe to them one or more times with different parameters, on-demand. You still have the option of auto-subscribing streams when a client connects, for "sync data upfront" behavior.
* **Temporary caching-like behavior**: Each subscription includes a configurable TTL that keeps data active after the client unsubscribes, acting as a warm cache for re-subscribing.
* **Simpler developer experience**: Simplified syntax and mental model, and capabilities such as your UI components automatically managing subscriptions (for example, React hooks).
## Sync Rules (Legacy)
Sync Rules are the legacy approach for controlling data sync. They remain available and supported for existing projects.
If you're currently using Sync Rules and want to migrate to Sync Streams, see our [migration docs](/sync/streams/migration).
## How It Works
You may also find it useful to look at the [PowerSync Service architecture](/architecture/powersync-service) for background.
Each [PowerSync Service](/architecture/powersync-service) instance has a deployed *Sync Streams* (or legacy *Sync Rules*) configuration. This takes the form of a YAML file which contains:
* **In the case of Sync Streams:** Definitions of the streams that exist. Each stream has a SQL-like query (which can also contain limited subqueries) that defines the data in the stream and references the necessary parameters.
* **In the case of Sync Rules:** Definitions of the different [buckets](/architecture/powersync-service#bucket-system) that exist, with SQL-like queries to specify the parameters used by each bucket (if any), as well as the data contained in each bucket.
A *parameter* is a value that can be used in Sync Streams (or legacy Sync Rules) to create dynamic sync behavior for each user/client. Each client syncs only the relevant [*buckets*](/architecture/powersync-service#bucket-system) based on the parameters for that client.
* Sync Streams can make use of *authentication parameters* from the JWT token (such as the user ID or other JWT claims), *connection parameters* (specified at connection), and *subscription parameters* (specified by the client when it subscribes to a stream at any time). See [Using Parameters](/sync/streams/parameters).
* Sync Rules can make use of *authentication parameters* from the JWT token, as well as [*client parameters*](/sync/rules/client-parameters) (passed directly from the client when it connects to the PowerSync Service).
It is also possible to have buckets/streams with no parameters. In the case of Sync Rules, these buckets sync to all users/clients automatically.
The concept of *buckets* is core to PowerSync and key to its performance and scalability. The [PowerSync Service architecture overview](/architecture/powersync-service) provides more background on this.
* In *Sync Streams*, buckets and parameters are implicit — they are automatically created based on the streams, their queries and subqueries. You don't need to explicitly define the buckets that exist.
* In legacy *Sync Rules*, buckets and their parameters are [explicitly defined](/sync/rules/overview#bucket-definition).
There are limitations on the SQL syntax and functionality that is supported in Sync Streams and Sync Rules. See [Supported SQL](/sync/supported-sql) for details and limitations.
In addition to filtering data based on parameters, Sync Streams and Sync Rules also enable:
* Selecting only specific tables/collections and columns/fields to sync.
* Filtering data based on static conditions.
* Transforming column/field names and values.
### Sync Streams/Rules Determine Replication From the Source Database
A PowerSync Service instance [replicates and transforms](/architecture/powersync-service#replication-from-the-source-database) relevant data from your backend source database according to your Sync Streams (or legacy Sync Rules). During replication, data and metadata are persisted in [buckets](/architecture/powersync-service#bucket-system) on the PowerSync Service. Buckets are incrementally updated so that they contain the latest state as well as a history of changes (operations). This is key to how PowerSync achieves efficient delta syncing — having the operation history for each bucket allows clients to sync only the deltas that they need to get up to date (see [Protocol](/architecture/powersync-protocol#protocol) for more details).
As a practical example, let's say you have a bucket named `user_todo_lists` that contains the to-do lists for a user, and that bucket utilizes a `user_id` parameter (which will be embedded in the JWT). Now let's say users with IDs `A` and `B` exist in the source database. PowerSync will then replicate data from the source database and create individual buckets with IDs `user_todo_lists["A"]` and `user_todo_lists["B"]`. When the user with ID `A` connects, they can efficiently sync just the bucket with ID `user_todo_lists["A"]`.
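Conceptually, the bucket ID in this example is the bucket definition name combined with the JSON-serialized parameter values. A simplified illustration (not the PowerSync Service's actual implementation):

```javascript theme={null}
// Simplified illustration of bucket ID formation; the actual PowerSync
// Service implementation may differ in details.
function bucketId(definitionName, parameterValues) {
  return `${definitionName}${JSON.stringify(parameterValues)}`;
}

// bucketId('user_todo_lists', ['A']) produces 'user_todo_lists["A"]'
```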
### Sync Streams/Rules Determine Real-Time Streaming Sync to Clients
Whenever buckets are updated (buckets added or removed, or operations added to existing buckets), these changes are [streamed in real-time](/architecture/powersync-service#streaming-sync) to clients based on the Sync Streams (or legacy Sync Rules).
This syncing behavior can be highly dynamic: in the case of Sync Streams, syncing will dynamically adjust based on the stream subscriptions (which can make use of *subscription parameters*), as well as *connection parameters* and *authentication parameters* (from the JWT). In the case of Sync Rules, syncing will dynamically adjust based on changes in *client parameters* and *authentication parameters*.
The bucket data is persisted in SQLite on the client-side, where it is easily queryable based on the [client-side schema](/intro/setup-guide#define-your-client-side-schema), which corresponds to the Sync Streams/Rules.
For more information on the client-side SQLite database structure, see [Client Architecture](/architecture/client-architecture#client-side-schema-and-sqlite-database-structure).
# Client Parameters
Source: https://docs.powersync.com/sync/rules/client-parameters
Pass parameters from the client directly for use in Sync Rules.
Use client parameters with caution. Please make sure to read the [Security consideration](#security-consideration) section below.
Client parameters are parameters that are passed to the PowerSync Service instance from the client SDK, and can be used in Sync Rules' [parameter queries](/sync/rules/parameter-queries) to further filter data.
PowerSync already supports using **token parameters** in parameter queries. An example of a token parameter is a user ID, and this is commonly used to filter synced data by the user. These parameters are embedded in the JWT [authentication token](/configuration/auth/custom), and therefore can be considered trusted and can be used for access control purposes.
**Client parameters** are specified directly by the client (i.e. not through the JWT authentication token). The advantage of client parameters is that they give client-side control over what data to sync, and can therefore be used to further filter or limit synced data. A common use case is [lazy-loading](/client-sdks/infinite-scrolling#2-control-data-sync-using-client-parameters), where data is split into pages and a client parameter can be used to specify which page(s) to sync to a user, and this can update dynamically as the user paginates (or reaches the end of an infinite-scrolling feed).
[Sync Streams](/sync/streams/overview) make it easier to manage dynamic parameters, especially for apps where parameters are managed across different UI components and tabs. Sync Streams offer *subscription parameters* (specified when subscribing to a stream) and *connection parameters* (the equivalent of client parameters).
We recommend Sync Streams for new projects, and [migrating](/sync/streams/migration) existing projects.
### Usage
Client parameters are defined when [instantiating the PowerSync database](/intro/setup-guide#instantiate-the-powersync-database), within the options of PowerSync's `connect()` method:
```js theme={null}
const connector = new DemoConnector();
const powerSync = db;
function connectPowerSync() {
powerSync.connect(connector, {
    params: { "current_page": 1 } // Specify client parameters here (example value)
});
}
```
The parameter is then available in [Sync Rules](/sync/rules/overview) under `request.parameters` (alongside the already supported `request.user_id`).
In this example, only 'posts' from the user's current page are synced:
```yaml theme={null}
# sync-rules.yaml
bucket_definitions:
shared_posts:
parameters: SELECT (request.parameters() ->> 'current_page') as page_number
data:
- SELECT * FROM posts WHERE page_number = bucket.page_number
```
### Security consideration
An important consideration with client parameters is that a client can pass any value, and sync data accordingly. Hence, client parameters should always be treated with care, and should not be used for access control purposes. Where permissions are required, use token parameters (`request.jwt()`) instead, or use token parameters in combination with client parameters.
The following examples show **secure** vs. **insecure** ways of using client and token parameters:
#### Secure (using a token parameter only):
```yaml theme={null}
# sync-rules.yaml
bucket_definitions:
selected_projects:
# Sync projects based on org_id from the JWT
# Since these parameters are embedded in the JWT (authentication token)
# they can be considered trusted
parameters: SELECT id as project_id FROM projects WHERE org_id IN request.jwt() ->> 'app_metadata.org_id'
data:
- ...
```
#### Insecure (using a client parameter only):
```yaml theme={null}
# sync-rules.yaml
bucket_definitions:
selected_projects:
# Do NOT do this: Sync projects based on a client parameter
# request.parameters() are specified by the client directly
# Because the client can send any value for these parameters
# it's not a good place to do authorization
parameters: SELECT id as project_id FROM projects WHERE id in request.parameters() ->> 'selected_projects'
data:
- ...
```
#### Secure (using a token parameter combined with a client parameter):
```yaml theme={null}
# sync-rules.yaml
bucket_definitions:
selected_projects:
# Sync projects based on org_id from the JWT, and additionally sync archived projects
# only when specifically requested by the client
# The JWT is a Supabase specific example with a
# custom field set in app_metadata
parameters: SELECT id as project_id FROM projects WHERE org_id IN request.jwt() ->> 'app_metadata.org_id' AND archived = true AND request.parameters() ->> 'include_archived'
data:
- ...
```
### Warning on potentially dangerous queries
Based on the above security consideration, the [PowerSync Dashboard](https://dashboard.powersync.com/) will warn developers when client parameters are being used in Sync Rules in an insecure way (i.e. where the query does not also include a parameter from `request.jwt()`).
The below Sync Rules will display the warning:
> Potentially dangerous query based on parameters set by the client. The client can send any value for these parameters so it's not a good place to do authorization.
```yaml theme={null}
# sync-rules.yaml
bucket_definitions:
selected_projects:
parameters: SELECT request.parameters() ->> 'project_id' as project_id
data:
- ...
```
This warning can be disabled by specifying `accept_potentially_dangerous_queries: true` in the bucket definition:
```yaml theme={null}
# sync-rules.yaml
bucket_definitions:
selected_projects:
accept_potentially_dangerous_queries: true
parameters: SELECT request.parameters() ->> 'project_id' as project_id
data:
- ...
```
# Data Queries
Source: https://docs.powersync.com/sync/rules/data-queries
Data Queries select the data that form part of a [bucket](/architecture/powersync-service#bucket-system), using the bucket [parameters](/sync/rules/overview#parameters).
Multiple Data Queries can be specified for a single [bucket definition](/sync/rules/overview#bucket-definition).
## Every Data Query Must Use Every Bucket Parameter
Data Queries are used to group data into buckets, so each Data Query must use every bucket [parameter](/sync/rules/overview#parameters).
When PowerSync does [incremental replication](/architecture/powersync-service#initial-replication-vs-incremental-replication) of data from your source database, it evaluates every row/document received on the CDC stream and computes the list of [buckets](/architecture/powersync-service#bucket-system) that the row/document belongs to. This allows PowerSync to efficiently update only the specific buckets affected by each change event. PowerSync uses the Data Queries in the Sync Rules bucket definitions to determine which rows/documents belong to which buckets. If it were possible for a bucket parameter to *not* be used in the `WHERE` clause of a Data Query, the bucket IDs to which the row/document belongs would be ambiguous: we would have to assume "all possible values" for the ambiguous parameter, and the row/document would have to be exploded into many buckets. To avoid this, PowerSync imposes the constraint that every Data Query must use every parameter defined on the bucket.
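To illustrate why this constraint exists, here is a hypothetical sketch of the evaluation step (not the PowerSync Service's actual implementation): a changed row can only be mapped to an unambiguous bucket ID if every bucket parameter resolves to a concrete value from that row.

```javascript theme={null}
// Hypothetical sketch of incremental bucket evaluation. Given a changed row
// and the columns a data query maps to the bucket parameters, compute the
// bucket ID the row belongs to. A parameter with no corresponding column
// would make the bucket ID ambiguous -- hence the constraint.
function evaluateBucketId(definitionName, parameterColumns, row) {
  const values = parameterColumns.map((column) => {
    if (!(column in row)) {
      throw new Error(`Ambiguous bucket: parameter column '${column}' missing`);
    }
    return row[column];
  });
  return `${definitionName}${JSON.stringify(values)}`;
}
```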
## Supported SQL
The supported SQL in Data Queries is based on a small subset of the SQL standard syntax. Not all SQL constructs are supported. See [Supported SQL](/sync/supported-sql) for full details.
## Examples
#### Grouping by Parameter Query Values
```yaml theme={null}
bucket_definitions:
owned_lists:
parameters: |
SELECT id as list_id FROM lists WHERE
owner_id = request.user_id()
data:
- SELECT * FROM lists WHERE lists.id = bucket.list_id
- SELECT * FROM todos WHERE todos.list_id = bucket.list_id
```
#### Selecting Output Columns/Fields
When specific columns/fields are selected, only those columns/fields are synced to the client.
This is good practice, to ensure the synced data does not unintentionally change when new columns are added to the schema (in the case of Postgres) or to the data structure (in the case of MongoDB).
Note: An `id` column must always be present, and must have a `text` type. If the primary key is different, use a column alias and/or transformations to output a `text` id column.
```yaml theme={null}
bucket_definitions:
global:
data:
- SELECT id, name, owner_id FROM lists
```
MongoDB uses `_id` as the name of the ID field in collections. Therefore, PowerSync requires using `SELECT _id as id` in the data queries when [using MongoDB](/configuration/source-db/setup) as the backend source database.
#### Renaming Columns/Fields
Different names (aliases) may be specified for columns/fields:
```yaml theme={null}
bucket_definitions:
global:
data:
- SELECT id, name, created_timestamp AS created_at FROM lists
```
#### Transforming Columns/Fields
A limited set of operators and functions are available to transform the output value of columns/fields.
```yaml theme={null}
bucket_definitions:
global:
data:
# Cast number to text
- SELECT id, item_number :: text AS item_number FROM todos
# Alternative syntax for the same cast
- SELECT id, CAST(item_number as TEXT) AS item_number FROM todos
# Convert binary data (bytea) to base64
- SELECT id, base64(thumbnail) AS thumbnail_base64 FROM todos
# Extract field from JSON or JSONB column
- SELECT id, metadata_json ->> 'description' AS description FROM todos
# Convert time to epoch number
- SELECT id, unixepoch(created_at) AS created_at FROM todos
```
# Global Buckets
Source: https://docs.powersync.com/sync/rules/global-buckets
The simplest Sync Rules are for "global" data — synced to all users, using "Global Buckets".
Any bucket with no *Parameter Query* in the bucket definition is automatically a *Global Bucket*. These buckets will be synced to all clients/users.
For example, the following Sync Rules sync all `todos` and only unarchived `lists` to all clients/users:
```yaml theme={null}
bucket_definitions:
global_bucket:
data:
# Sync all todos
- SELECT * FROM todos
# Sync all lists except archived ones
- SELECT * FROM lists WHERE archived = false
```
`global_bucket` is *not* a reserved keyword. You can give the bucket any name. If no **Parameter Query** is specified in the bucket definition, the bucket is automatically a global bucket.
**Note**: The table/collection names that your Data Queries select from in your Sync Rules must match the table names defined in your [client-side schema](/intro/setup-guide#define-your-client-side-schema).
As explained in the [Overview & Key Concepts](/sync/rules/overview#potential-parameter-values-determine-created-buckets), PowerSync uses the possible values for parameters (found in your source database) to generate individual buckets that can be efficiently synced by clients/users depending on the specific parameters that apply to them. By contrast, with a Global Bucket, only a single bucket will be generated on the PowerSync Service that is shared between all your users/clients.
# Guide: Many-to-Many and Join Tables
Source: https://docs.powersync.com/sync/rules/many-to-many-join-tables
Strategies for handling many-to-many relationships in Sync Rules, which don't support JOINs directly.
Join tables are often used to implement many-to-many relationships between tables. Join queries are not directly supported in PowerSync Sync Rules, and require some workarounds depending on the use case. This guide contains some recommended strategies.
**Using Sync Streams?** Sync Streams support [JOINs](/sync/streams/queries#using-joins) and [nested subqueries](/sync/streams/queries#using-subqueries), which handle most many-to-many relationships directly without the workarounds described here. See [Many-to-Many with Sync Streams](/sync/streams/examples#many-to-many-relationships) for examples.
**Postgres users:** For Postgres source databases, you can use the [`pg_ivm` extension](https://www.powersync.com/blog/using-pg-ivm-to-enable-joins-in-powersync) to create incrementally maintained materialized views with JOINs that can be referenced directly in Sync Rules. This approach avoids the need to denormalize your schema.
## Example
As an example, consider a social media application. The app has message boards. Each user can subscribe to boards, make posts, and comment on posts. Posts may also have one or more topics.
```sql theme={null}
create table users (
id uuid not null default gen_random_uuid (),
name text not null,
last_activity timestamp with time zone,
constraint users_pkey primary key (id)
);
create table boards (
id uuid not null default gen_random_uuid (),
name text not null,
constraint boards_pkey primary key (id)
);
create table posts (
id uuid not null default gen_random_uuid (),
board_id uuid not null,
created_at timestamp with time zone not null default now(),
author_id uuid not null,
title text not null,
body text not null,
constraint posts_pkey primary key (id),
constraint posts_author_id_fkey foreign key (author_id) references users (id),
constraint posts_board_id_fkey foreign key (board_id) references boards (id)
);
create table comments (
id uuid not null default gen_random_uuid (),
post_id uuid not null,
created_at timestamp with time zone not null default now(),
author_id uuid not null,
body text not null,
constraint comments_pkey primary key (id),
constraint comments_author_id_fkey foreign key (author_id) references users (id),
constraint comments_post_id_fkey foreign key (post_id) references posts (id)
);
create table board_subscriptions (
id uuid not null default gen_random_uuid (),
user_id uuid not null,
board_id uuid not null,
constraint board_subscriptions_pkey primary key (id),
constraint board_subscriptions_board_id_fkey foreign key (board_id) references boards (id),
constraint board_subscriptions_user_id_fkey foreign key (user_id) references users (id)
);
create table topics (
id uuid not null default gen_random_uuid (),
label text not null,
constraint topics_pkey primary key (id)
);
create table post_topics (
id uuid not null default gen_random_uuid (),
board_id uuid not null,
post_id uuid not null,
topic_id uuid not null,
constraint post_topics_pkey primary key (id),
constraint post_topics_board_id_fkey foreign key (board_id) references boards (id),
constraint post_topics_post_id_fkey foreign key (post_id) references posts (id),
constraint post_topics_topic_id_fkey foreign key (topic_id) references topics (id)
);
```
### Many-to-many: Bucket parameters
For this app, we generally want to sync all posts in boards that users have subscribed to. To simplify these examples, we assume a user has to be subscribed to a board to post.
Boards make a nice grouping of data for Sync Rules: We sync the boards that a user has subscribed to, and the same board data is synced to all users subscribed to that board.
The relationship between users and boards is a many-to-many, specified via the `board_subscriptions` table.
To start with, in our PowerSync Sync Rules, we define a [bucket](/sync/rules/organize-data-into-buckets) and sync the posts. The [parameter query](/sync/rules/parameter-queries) is defined using the `board_subscriptions` table:
```yaml theme={null}
bucket_definitions:
board_data:
parameters: SELECT board_id FROM board_subscriptions WHERE user_id = request.user_id()
data:
- SELECT * FROM posts WHERE board_id = bucket.board_id
```
### Avoiding joins in data queries: Denormalize relationships (comments)
Next, we also want to sync comments for those boards. There is a one-to-many relationship between boards and comments, via the `posts` table. This means conceptually we can add comments to the same board bucket. With general SQL, the query could be:
```sql theme={null}
SELECT comments.* FROM comments
JOIN posts ON posts.id = comments.post_id
WHERE board_id = bucket.board_id
```
Unfortunately, joins are not supported in PowerSync's Sync Rules. Instead, we denormalize the data to add a direct foreign key relationship between comments and boards: (Postgres example)
```sql theme={null}
ALTER TABLE comments ADD COLUMN board_id uuid;
ALTER TABLE comments ADD CONSTRAINT comments_board_id_fkey FOREIGN KEY (board_id) REFERENCES boards (id);
```
Now we can add it to the bucket definition in our Sync Rules:
```yaml theme={null}
bucket_definitions:
board_data:
parameters: SELECT board_id FROM board_subscriptions WHERE user_id = request.user_id()
data:
- SELECT * FROM posts WHERE board_id = bucket.board_id
# Add comments:
- SELECT * FROM comments WHERE board_id = bucket.board_id
```
Now we want to sync topics of posts. In this case we added `board_id` from the start, so `post_topics` is simple in our Sync Rules:
```yaml theme={null}
bucket_definitions:
board_data:
parameters: SELECT board_id FROM board_subscriptions WHERE user_id = request.user_id()
data:
- SELECT * FROM posts WHERE board_id = bucket.board_id
- SELECT * FROM comments WHERE board_id = bucket.board_id
# Add post_topics:
- SELECT * FROM post_topics WHERE board_id = bucket.board_id
```
### Many-to-many strategy: Sync everything (topics)
Now we need to sync the topics for all posts synced to the device. There is a many-to-many relationship between posts and topics, and by extension between boards and topics. This means there is no simple direct way to partition topics into buckets — the same topics can be used on any number of boards.
If the topics table is limited in size (say 1,000 or less), the simplest solution is to just sync all topics in our Sync Rules:
```yaml theme={null}
bucket_definitions:
global_topics:
data:
- SELECT * FROM topics
```
### Many-to-many strategy: Denormalize data (topics, user names)
If there are many thousands of topics, we may want to avoid syncing everything. One option is to denormalize the data by copying the topic label over to `post_topics`: (Postgres example)
```sql theme={null}
ALTER TABLE post_topics ADD COLUMN topic_label text not null;
```
Now we don't need to sync the `topics` table itself, as everything is included in `post_topics`. Assuming the topic label never or rarely changes, this could be a good solution.
Next up, we want to sync the relevant user profiles, so we can show them together with comments and posts. For simplicity, we sync profiles for all users subscribed to a board.
One option is to add the author name to each board subscription, similar to what we've done for `topics`: (Postgres example)
```sql theme={null}
ALTER TABLE board_subscriptions ADD COLUMN user_name text;
```
Sync Rules:
```yaml theme={null}
bucket_definitions:
board_data:
parameters: SELECT board_id FROM board_subscriptions WHERE user_id = request.user_id()
data:
- SELECT * FROM posts WHERE board_id = bucket.board_id
- SELECT * FROM comments WHERE board_id = bucket.board_id
- SELECT * FROM post_topics WHERE board_id = bucket.board_id
# Add subscriptions which include the names:
- SELECT * FROM board_subscriptions WHERE board_id = bucket.board_id
```
### Many-to-many strategy: Array of IDs (user profiles)
If we need to sync more than just the name (say, a last activity date, profile picture and bio text), the above approach doesn't scale as well. Instead, we want to sync the `users` table directly. To place user profiles in the board's bucket, we add an array column that tracks each user's subscribed boards.
Adding an array to the schema in Postgres:
```sql theme={null}
ALTER TABLE users ADD COLUMN subscribed_board_ids uuid[];
```
By using an array instead of or in addition to a join table, we can use it directly in Sync Rules:
```yaml theme={null}
bucket_definitions:
board_data:
parameters: SELECT board_id FROM board_subscriptions WHERE user_id = request.user_id()
data:
- SELECT * FROM posts WHERE board_id = bucket.board_id
- SELECT * FROM comments WHERE board_id = bucket.board_id
- SELECT * FROM post_topics WHERE board_id = bucket.board_id
# Add participating users:
    - SELECT id, name, last_activity, profile_picture, bio FROM users WHERE bucket.board_id IN subscribed_board_ids
```
This approach does require some extra effort to keep the array up to date. One option is to use a trigger in the case of Postgres:
```sql theme={null}
CREATE OR REPLACE FUNCTION recalculate_subscribed_boards()
RETURNS TRIGGER AS $$
BEGIN
-- Recalculate subscribed_board_ids for the affected user
UPDATE users
SET subscribed_board_ids = (
SELECT array_agg(board_id)
FROM board_subscriptions
WHERE user_id = COALESCE(NEW.user_id, OLD.user_id)
)
WHERE id = COALESCE(NEW.user_id, OLD.user_id);
RETURN NULL;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_board_subscriptions_change
AFTER INSERT OR UPDATE OR DELETE ON board_subscriptions
FOR EACH ROW
EXECUTE FUNCTION recalculate_subscribed_boards();
```
Note that this approach does have scaling limitations. When the number of board subscriptions per user becomes large (say over 100 rows per user), then:
1. Updating the `subscribed_board_ids` array in Postgres becomes slower.
2. The overhead is even more pronounced on PowerSync, since PowerSync maintains a separate copy of the data in each bucket.
In those cases, another approach may be more suitable.
# Organize Data Into Buckets
Source: https://docs.powersync.com/sync/rules/organize-data-into-buckets
Designing your Sync Rules is about *organizing data into buckets*, and creating the bucket definitions accordingly. Each [bucket definition](/sync/rules/overview#bucket-definition) defines a set of tables/collections and rows/documents to sync.
* If there's some data you want to sync to *all* your users/clients, you can add bucket definitions for one or more [Global Buckets](/sync/rules/global-buckets). This is the simplest way to get started with PowerSync.
* If there's data that you want to filter by user, so that different users/clients get different subsets of data, you can add bucket definitions to your Sync Rules specifically for that purpose, using the `user_id` *Authentication Parameter* that comes from the JWT.
* You can also filter data based on other parameters, such as project, organization, etc. — using additional bucket definitions with those parameters.
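For example, a minimal Sync Rules file combining a global bucket with a per-user bucket might look like this (the table names are illustrative):

```yaml theme={null}
bucket_definitions:
  # Synced to all users - no Parameter Query makes this a Global Bucket
  global_announcements:
    data:
      - SELECT * FROM announcements
  # Synced per user, filtered using the user_id from the JWT
  user_documents:
    parameters: SELECT request.user_id() as user_id
    data:
      - SELECT * FROM documents WHERE owner_id = bucket.user_id
```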
When designing your buckets, it is recommended (but not required) to group all data to which the same parameters apply into a single bucket definition.
## Defining Buckets
Sync Rules take the form of a YAML file containing all your [bucket definitions](/sync/rules/overview#bucket-definition).
A *bucket definition* contains two sets of queries:
1. [Parameter Queries](/sync/rules/parameter-queries): Select bucket parameters
2. [Data Queries](/sync/rules/data-queries): Select data in the bucket using the bucket parameters
Here is an example of Sync Rules containing a single bucket definition, which will sync only the `lists` that belong to the user:
```yaml theme={null}
bucket_definitions:
  user_lists:
    # select parameters for the bucket - in this case we are just selecting the user_id
    parameters: SELECT request.user_id() as user_id # (request.user_id() comes from the JWT token)
    data:
      # select data rows/documents using the parameters above
      - SELECT * FROM lists WHERE owner_id = bucket.user_id
```
You can choose any name for a bucket. The Sync Rules above contain a single bucket definition, with a bucket name of `user_lists`.
**Note**: The table/collection names that your Data Queries select from in your Sync Rules must match the table names defined in your [client-side schema](/intro/setup-guide#define-your-client-side-schema).
The supported SQL in *Parameter Queries* and *Data Queries* is based on a small subset of the SQL standard syntax. Not all SQL constructs are supported. See [Supported SQL](/sync/supported-sql).
## Limit on Number of Buckets Per Client
There is a maximum cap on the number of buckets that each user/client can sync — [the default limit](/resources/performance-and-limits) is 1,000.
In practice, this means the total number of results returned by all the **Parameter Queries** across all your bucket definitions *for a specific user/client* cannot exceed that limit. If it does, the PowerSync Service returns a [`PSYNC_S2305` error](/debugging/error-codes#psync_s23xx:-sync-api-errors).
Note that this limit only applies to *each individual user/client*. You can have many more buckets in total in your project, as long as each user syncs no more buckets than the limit. For example, your PowerSync Service instance could track, say, 1,000,000 buckets in total, while each user syncs only a small fraction of them.
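As a rough sketch of how this adds up: the number of buckets a given user syncs equals the number of rows returned by the Parameter Queries for that user (table names are illustrative):

```yaml theme={null}
bucket_definitions:
  user_lists:
    # If this query returns 200 rows for a user, that user syncs 200 buckets
    parameters: SELECT list_id FROM user_lists WHERE user_id = request.user_id()
    data:
      - SELECT * FROM todos WHERE list_id = bucket.list_id
```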
This limit can be increased upon request for [Team and Enterprise](https://www.powersync.com/pricing) customers. Note, however, that performance degrades as the number of buckets per client grows beyond 1,000. See [Performance and Limits](/resources/performance-and-limits).
# Sync Rules (Legacy)
Source: https://docs.powersync.com/sync/rules/overview
Understand Sync Rules, the legacy mechanism for controlling data sync with explicit bucket definitions and parameter queries.
PowerSync Sync Rules is the legacy mechanism to control which data gets synced to which clients/devices (i.e. they enable *partial sync*).
**Sync Streams Recommended**
[Sync Streams](/sync/streams/overview) are now in beta and production-ready. We recommend Sync Streams for all new projects — they offer a simpler developer experience, on-demand syncing with subscription parameters, and caching-like behavior with TTL.
Existing projects should [migrate to Sync Streams](/sync/streams/migration). Sync Rules remain supported but are considered legacy.
Sync Rules are defined in a YAML file. For PowerSync Cloud, they are edited and deployed to a specific PowerSync instance in the [PowerSync Dashboard](/tools/powersync-dashboard#project-&-instance-level). For self-hosting setups, they are defined as part of your [instance configuration](/configuration/powersync-service/self-hosted-instances).
## Key Concepts
### Bucket Definition
*Sync Rules* are a set of *bucket definitions*. A *bucket definition* defines the actual individual [buckets](/architecture/powersync-service#bucket-system) that will be created by the PowerSync Service when it replicates data from your source database. Clients then sync the individual buckets that are relevant to them based on their parameters (see below).
Each *bucket definition* consists of:
* A custom **name** for the bucket, e.g. `user_lists`
* Zero or more [**Parameter Queries**](#parameter-queries), which explicitly select the **parameters** for the bucket. If no Parameter Query is specified in the bucket definition, it's automatically a [global bucket](#global-buckets).
* One or more [**Data Queries**](#data-queries), which select the data for the bucket. Data Queries can make use of the **parameters** selected by the Parameter Queries.
```yaml Sync Rules with a single bucket definition theme={null}
bucket_definitions:
  user_lists: # name for the bucket
    # Parameter Query, selecting a user_id parameter:
    parameters: SELECT request.user_id() as user_id
    # Data Query, selecting data, filtering using the user_id parameter:
    data:
      - SELECT * FROM lists WHERE owner_id = bucket.user_id
```
### Parameters
A **Parameter** is a value that can be used in the Sync Rules to create dynamic sync behavior for each user/client. Each client syncs [only the relevant buckets](/architecture/powersync-service#bucket-system) based on the parameters for that client (i.e. the results that would be returned from the **Parameter Query** for that client).
### Parameter Queries
**Parameter Queries** are SQL-like queries that explicitly define the parameters for a bucket.
The following values can be selected in Parameter Queries:
* **Authentication Parameters** (see below)
* **Client Parameters** (see below)
* **Values From a Table/Collection** (see below)
See [Parameter Queries](/sync/rules/parameter-queries) for more details and examples. Also see [Supported SQL](/sync/supported-sql) for limitations.
### Authentication Parameters
This is a set of parameters specified in the user's JWT authentication token, and they are signed as part of the JWT (see [Authentication](/configuration/auth/overview)). This always includes the JWT subject (`sub`) which is the user ID, but may include additional and custom parameters (other claims in the JWT). *Authentication Parameters* are used to identify the user, and specify permissions for the user. They need to be explicitly selected in the Parameter Query to be used in Sync Rules:
```yaml Example of selecting Authentication Parameter in a Parameter Query theme={null}
parameters: SELECT request.user_id() as user_id
```
See [Parameter Queries](/sync/rules/parameter-queries) for more details and examples.
### Client Parameters
Clients can specify **Client Parameters** when connecting to PowerSync (i.e. when [`connect()` is called](/intro/setup-guide#connect-to-powersync-service-instance)). *Client Parameters* need to be explicitly selected in the Parameter Query to be used in Sync Rules:
```yaml Example of selecting a Client Parameter in a Parameter Query theme={null}
parameters: SELECT (request.parameters() ->> 'current_project') as current_project
```
The `->>` operator in the above example extracts a value from a string containing JSON (which is the format provided by `request.parameters()`). See [Operators and Functions](/sync/supported-sql#operators).
A client can pass any value for a Client Parameter. Hence, Client Parameters should always be treated with care, and should [not be used](/sync/rules/client-parameters#security-consideration) for access control purposes.
That being said, Client Parameters can be useful for use cases such as syncing different buckets based on state in the client app, for example only syncing data for the project currently selected, or syncing different buckets based on the client version ([see here](/sync/advanced/multiple-client-versions)).
See [Client Parameters](/sync/rules/client-parameters) and [Parameter Queries](/sync/rules/parameter-queries) for more details and examples.
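As an illustrative sketch, a bucket definition that only syncs data for the client's currently-selected project might look like this (the `current_project` parameter and `tasks` table are hypothetical):

```yaml theme={null}
bucket_definitions:
  selected_project:
    # The client passes e.g. { current_project: 'abc123' } when calling connect()
    parameters: SELECT (request.parameters() ->> 'current_project') as project_id
    data:
      - SELECT * FROM tasks WHERE project_id = bucket.project_id
```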
### Values From a Table/Collection
A **Parameter Query** can select a parameter from a table/collection in your source database, e.g.:
```yaml Selecting a parameter from a table in the source database theme={null}
SELECT primary_list_id FROM users WHERE users.id = request.user_id()
```
See [Parameter Queries](/sync/rules/parameter-queries) for more details and examples.
### Data Queries
**Data Queries** select the data for the bucket, and can make use of any of the bucket **parameters** (i.e. values returned by the **Parameter Queries**).
When referencing a parameter, the syntax `bucket.` should be used in the Data Query:
```yaml theme={null}
data:
  - SELECT * FROM lists WHERE owner_id = bucket.user_id
```
See [Data Queries](/sync/rules/data-queries) for more details and examples. Also see [Supported SQL](/sync/supported-sql) for limitations.
### Global Buckets
If no **Parameter Query** is specified in the bucket definition, the bucket is automatically a global bucket. These buckets will be synced to all clients/users.
See [Global Buckets](/sync/rules/global-buckets) for more details and examples.
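A minimal sketch of a global bucket (the `application_settings` table is illustrative):

```yaml theme={null}
bucket_definitions:
  global_settings:
    # No Parameter Query, so this bucket is synced to all users
    data:
      - SELECT * FROM application_settings
```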
## Potential Parameter Values Determine Created Buckets
When your PowerSync Service instance [replicates data from your source database](/architecture/powersync-service#replication-from-the-source-database) based on your Sync Rules (i.e. your bucket definitions), it finds all possible values for your defined parameters in the relevant tables/collections in your source database, and creates individual buckets based on those values.
For example, let's say you have a bucket named `user_todo_lists` that contains the to-do lists for a user, and that bucket utilizes a `user_id` parameter (which will be obtained from the JWT) to scope those to-do lists. Now let's say users with IDs `1`, `2` and `3` exist in the source database. PowerSync will then replicate data from the source database and preemptively create individual buckets with bucket IDs of `user_todo_lists["1"]`, `user_todo_lists["2"]` and `user_todo_lists["3"]`.
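As a sketch, that `user_todo_lists` bucket definition could look like this (the `todo_lists` table and `owner_id` column are illustrative):

```yaml theme={null}
bucket_definitions:
  user_todo_lists:
    parameters: SELECT request.user_id() as user_id
    data:
      - SELECT * FROM todo_lists WHERE owner_id = bucket.user_id
```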
This architecture is key to the scalability and performance of PowerSync. See the [PowerSync Service architecture overview](/architecture/powersync-service) for more background.
## Designing Sync Rules
Designing your Sync Rules is basically about *organizing data into buckets*, and creating the bucket definitions accordingly.
See [Organize Data Into Buckets](/sync/rules/organize-data-into-buckets).
# Parameter Queries
Source: https://docs.powersync.com/sync/rules/parameter-queries
*Parameter Queries* allow [parameters](/sync/rules/overview#parameters) to be defined on a [bucket](/sync/rules/overview#bucket-definition) to group data.
Each [bucket](/sync/rules/overview#bucket-definition) can have zero or more Parameter Queries.
Parameter Queries can return multiple rows/documents. The values selected in each row/document become parameters for the bucket.
The following values can be selected in Parameter Queries:
* **Authentication Parameters**, which come from the JWT token.
* **Client Parameters**, which are passed directly from clients (specified [at connection](/sync/rules/client-parameters#usage))
* **Values From a Table/Collection** (in your source database)
Parameter Queries are not run directly on your source database. Instead, the Parameter Queries in your Sync Rules are used to pre-process rows/documents as they are [replicated from your source database](/architecture/powersync-service#replication-from-the-source-database). During replication, parameter values are indexed for [efficient use](/architecture/powersync-service#bucket-system) in the sync process.
## Using Authentication Parameters
The following functions allow you to select Authentication Parameters in your Parameter Queries:
| Function | Description |
| ------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `request.user_id()` | Returns the JWT subject (`sub`). Same as `request.jwt() ->> 'sub'` (see below) |
| `request.jwt()` | Returns the entire (signed) JWT payload as a JSON string. If there are other *claims* in your JWT (in addition to the user ID), you can select them from this JSON string. |
Since `request.jwt()` is a string containing JSON, use the `->>` [operator](/sync/supported-sql#operators) to select values from it:
```sql theme={null}
request.jwt() ->> 'sub' -- the 'subject' of the JWT - same as request.user_id()
```
As an example, Supabase Auth includes [various claims](https://supabase.com/docs/guides/auth/jwt-fields) in their JWTs:
```sql theme={null}
request.jwt() ->> 'role' -- 'authenticated' or 'anonymous'
request.jwt() ->> 'email' -- automatic email field
request.jwt() ->> 'app_metadata.custom_field' -- custom field added by a service account (authenticated)
```
This is a simple example of Sync Rules with a single bucket definition with a Parameter Query that selects the user ID from the JWT:
```yaml theme={null}
bucket_definitions:
  # Bucket Name
  user_lists:
    # Parameter Query
    parameters: SELECT request.user_id() as user_id
    # Data Query
    data:
      - SELECT * FROM lists WHERE lists.owner_id = bucket.user_id
```
A legacy syntax for Parameter Queries used `token_parameters.user_id` to return the JWT subject. Example:
```yaml theme={null}
bucket_definitions:
  by_user_parameter:
    parameters: SELECT token_parameters.user_id as user_id
    data:
      - SELECT * FROM lists WHERE lists.owner_id = bucket.user_id
```
That legacy syntax also allowed custom claims from the JWT, but only if they were nested under a `parameters` claim in the JWT.
If you are still using this legacy syntax, you can migrate to the current syntax as follows:
1. `token_parameters.user_id` references can simply be updated to `request.user_id()`
2. For custom parameters, if you keep your custom JWT in the format required by the legacy syntax, you can update `token_parameters.my_custom_field` references to `request.jwt() ->> 'parameters.my_custom_field'`
3. Alternatively, you can get custom parameters directly from the JWT payload/claims, e.g. `request.jwt() ->> 'my_custom_field'`
Example:
```yaml theme={null}
bucket_definitions:
  by_user_parameter:
    # request.user_id() is the same as the previous token_parameters.user_id
    parameters: SELECT request.user_id() as user_id
    data:
      - SELECT * FROM lists WHERE lists.owner_id = bucket.user_id
```
## Using Client Parameters
| Function | Description |
| ---------------------- | ---------------------------------------------------------------------------- |
| `request.parameters()` | Returns [Client Parameters](/sync/rules/client-parameters) as a JSON string. |
Example usage:
```sql theme={null}
request.parameters() ->> 'param' -- select Client Parameter named 'param'
```
For full details, see the dedicated page on [Client Parameters](/sync/rules/client-parameters).
## Using Values From a Table/Collection
A Parameter Query can select a parameter from a table/collection in your source database, for example:
```yaml theme={null}
bucket_definitions:
  user_lists_table:
    # This is similar to the 'user_lists' example above, but with the advantage that access
    # can instantly be revoked by deleting the user row/document from the source database:
    parameters: SELECT id as user_id FROM users WHERE users.id = request.user_id()
    data:
      - SELECT * FROM lists WHERE lists.user_id = bucket.user_id
```
## Supported SQL
The supported SQL in Parameter Queries is based on a small subset of the SQL standard syntax. Not all SQL constructs are supported. See [Supported SQL](/sync/supported-sql) for full details.
## Usage Examples
### Filter on Additional Columns/Fields
```yaml theme={null}
bucket_definitions:
  admin_users:
    parameters: |
      SELECT id as user_id FROM users WHERE
        users.id = request.user_id() AND
        users.is_admin = true
    data:
      - SELECT * FROM lists WHERE lists.owner_id = bucket.user_id
```
### Group According to Different Columns/Fields
```yaml theme={null}
bucket_definitions:
  primary_list:
    parameters: |
      SELECT primary_list_id FROM users WHERE users.id = request.user_id()
    data:
      - SELECT * FROM todos WHERE todos.list_id = bucket.primary_list_id
```
### Using Different Tables/Collections for Parameters
```yaml theme={null}
bucket_definitions:
  owned_lists:
    parameters: |
      SELECT id as list_id FROM lists WHERE owner_id = request.user_id()
    data:
      - SELECT * FROM lists WHERE lists.id = bucket.list_id
      - SELECT * FROM todos WHERE todos.list_id = bucket.list_id
```
### Multiple Columns/Fields
Parameter Queries may select multiple columns/fields as bucket parameters.
Note that [every bucket parameter must be used in every Data Query](/sync/rules/data-queries#every-data-query-must-use-every-bucket-parameter).
```yaml theme={null}
bucket_definitions:
  owned_org_lists:
    parameters: |
      SELECT id as list_id, org_id FROM lists WHERE
        owner_id = request.user_id()
    data:
      - SELECT * FROM lists WHERE lists.id = bucket.list_id and lists.org_id = bucket.org_id
      - SELECT * FROM todos WHERE todos.list_id = bucket.list_id and todos.org_id = bucket.org_id
```
### Using a Join Table/Collection
In this example, the Parameter Query can return multiple rows/documents, resulting in multiple sets of bucket parameters for a single user.
```yaml theme={null}
bucket_definitions:
  user_lists:
    parameters: |
      SELECT list_id FROM user_lists WHERE user_lists.user_id = request.user_id()
    data:
      - SELECT * FROM lists WHERE lists.id = bucket.list_id
      - SELECT * FROM todos WHERE todos.list_id = bucket.list_id
```
Keep in mind that the total number of buckets per user should [remain limited](/sync/rules/organize-data-into-buckets#limit-on-number-of-buckets-per-client) (\<= 1,000 [by default](/resources/performance-and-limits)), so buckets should not be too granular.
For more advanced details on many-to-many relationships and join tables, see [this guide](/sync/rules/many-to-many-join-tables).
### Expanding JSON Array Into Multiple Parameters
Using the `json_each()` [function](/sync/supported-sql#functions) and `->` [operator](/sync/supported-sql#operators), we can expand a parameter that is a JSON array into multiple rows, thereby filtering by multiple parameter values:
```yaml theme={null}
bucket_definitions:
  user_projects:
    parameters: SELECT project_id FROM json_each(request.jwt() -> 'project_ids')
    data:
      - SELECT * FROM projects WHERE id = bucket.project_id
```
### Multiple Parameter Queries
Multiple Parameter Queries can be used in the same bucket definition; however, the output columns must be exactly the same for each of these Parameter Queries:
```yaml theme={null}
bucket_definitions:
  user_lists:
    parameters:
      - SELECT id as list_id FROM lists WHERE owner_id = request.user_id()
      - SELECT list_id FROM user_lists WHERE user_lists.user_id = request.user_id()
    data:
      - SELECT * FROM lists WHERE lists.id = bucket.list_id
      - SELECT * FROM todos WHERE todos.list_id = bucket.list_id
```
Keep in mind that the total number of buckets per user should [remain limited](/sync/rules/organize-data-into-buckets#limit-on-number-of-buckets-per-client) (\<= 1,000 [by default](/resources/performance-and-limits)), so buckets should not be too granular.
### No Output Columns/Fields
A Parameter Query with no output columns may be specified to only sync the bucket to a subset of users.
```yaml theme={null}
bucket_definitions:
  global_admins:
    parameters: |
      SELECT FROM users WHERE
        users.id = request.user_id() AND
        users.is_admin = true
    data:
      - SELECT * FROM admin_settings
```
### No Parameter Query
Any bucket with no *Parameter Query* in the bucket definition is automatically a *Global Bucket*. These buckets will be synced to all clients/users.
See [Global Buckets](/sync/rules/global-buckets)
# Client-Side Usage
Source: https://docs.powersync.com/sync/streams/client-usage
Subscribe to Sync Streams from your client app, manage subscriptions, and track sync progress.
After [defining your streams](/sync/streams/overview#defining-streams) on the server-side, your client app subscribes to them to start syncing data (this is an explicit operation unless streams are configured to [auto-subscribe](/sync/streams/overview#using-auto-subscribe)). This page covers everything you need to use Sync Streams from your client code.
## Quick Start
Streams that are configured to [auto-subscribe](/sync/streams/overview#using-auto-subscribe) will automatically start syncing as soon as you connect to your PowerSync instance in your client-side application.
For any other streams, the basic pattern is: **subscribe** to a stream, **wait** for data to sync, then **unsubscribe** when done.
```js theme={null}
// Subscribe to a stream with parameters
const sub = await db.syncStream('list_todos', { list_id: 'abc123' }).subscribe();
// Wait for initial data to sync
await sub.waitForFirstSync();
// Your data is now available - query it normally
const todos = await db.getAll('SELECT * FROM todos WHERE list_id = ?', ['abc123']);
// When leaving the screen or component...
sub.unsubscribe();
```
```dart theme={null}
// Subscribe to a stream with parameters
final sub = await db.syncStream('list_todos', {'list_id': 'abc123'}).subscribe();
// Wait for initial data to sync
await sub.waitForFirstSync();
// Your data is now available - query it normally
final todos = await db.getAll('SELECT * FROM todos WHERE list_id = ?', ['abc123']);
// When leaving the screen or component...
sub.unsubscribe();
```
```kotlin theme={null}
// Subscribe to a stream with parameters
val sub = database.syncStream("list_todos", mapOf("list_id" to JsonParam.String("abc123")))
    .subscribe()
// Wait for initial data to sync
sub.waitForFirstSync()
// Your data is now available - query it normally
val todos = database.getAll("SELECT * FROM todos WHERE list_id = ?", listOf("abc123"))
// When leaving the screen or component...
sub.unsubscribe()
```
```swift theme={null}
// Subscribe to a stream with parameters
let sub = try await db.syncStream(name: "list_todos", params: ["list_id": JsonValue.string("abc123")]).subscribe()
// Wait for initial data to sync
try await sub.waitForFirstSync()
// Your data is now available - query it normally
let todos = try await db.getAll(sql: "SELECT * FROM todos WHERE list_id = ?", parameters: ["abc123"])
// When leaving the screen or component...
try await sub.unsubscribe()
```
```csharp theme={null}
// Subscribe to a stream with parameters
var sub = await db.SyncStream("list_todos", new() { ["list_id"] = "abc123" }).Subscribe();
// Wait for initial data to sync
await sub.WaitForFirstSync();
// Your data is now available - query it normally
var todos = await db.GetAll("SELECT * FROM todos WHERE list_id = ?", new[] { "abc123" });
// When leaving the screen or component...
sub.Unsubscribe();
```
## Framework Integrations
Most developers use framework-specific hooks that handle subscription lifecycle automatically.
The `useSyncStream` hook automatically subscribes when the component mounts and unsubscribes when it unmounts:
```jsx theme={null}
function TodoList({ listId }) {
  // Automatically subscribes/unsubscribes based on component lifecycle
  const stream = useSyncStream({ name: 'list_todos', parameters: { list_id: listId } });
  // Hooks must be called unconditionally, so run the query before checking sync state
  const { data: todos } = useQuery('SELECT * FROM todos WHERE list_id = ?', [listId]);

  // Check if data has synced
  if (!stream?.subscription.hasSynced) {
    return <LoadingSpinner />;
  }

  // Data is ready - render
  return <TodoItems todos={todos} />;
}
```
You can also have `useQuery` wait for a stream before running:
```jsx theme={null}
// This query waits for the stream to sync before executing
const { data: todos } = useQuery(
  'SELECT * FROM todos WHERE list_id = ?',
  [listId],
  { streams: [{ name: 'list_todos', parameters: { list_id: listId }, waitForStream: true }] }
);
```
Both the `useQuery` and `useQueries` hooks automatically subscribe when the component mounts and unsubscribe when it unmounts:
```jsx theme={null}
function TodoList({ listId }) {
  // Automatically subscribes/unsubscribes based on component lifecycle
  const stream = useSyncStream({ name: 'list_todos', parameters: { list_id: listId } });
  const { data: todos, isLoading } = useQuery({
    queryKey: ['test'],
    query: 'SELECT 1',
    streams: [{ name: 'list_todos', parameters: { list_id: listId }, waitForStream: true }]
  });

  // Check if data has synced
  if (isLoading) {
    return <LoadingSpinner />;
  }

  // Data is ready - render
  return <TodoItems todos={todos} />;
}
```
```jsx theme={null}
function TodoList({ listId }) {
  // Automatically subscribes/unsubscribes based on component lifecycle
  const { allData, anyPending } = useQueries({
    queries: [
      { queryKey: ['test1'], query: 'SELECT 1', streams: [{ name: 'a' }] },
      { queryKey: ['test2'], query: 'SELECT 2' }
    ],
    combine: (results) => ({
      allData: results.map((r) => r.data),
      anyPending: results.some((r) => r.isPending)
    })
  });
  // ...
}
```
The `useSyncStream` composable automatically subscribes when the component mounts and unsubscribes when it unmounts. A minimal sketch (the component names and reactive shape are illustrative; see the Vue SDK reference for exact usage):

```vue theme={null}
<script setup>
import { useSyncStream } from '@powersync/vue';

const props = defineProps(['listId']);
// Automatically subscribes/unsubscribes based on component lifecycle
const stream = useSyncStream({ name: 'list_todos', parameters: { list_id: props.listId } });
</script>

<template>
  <LoadingSpinner v-if="!stream?.subscription.hasSynced" />
  <TodoItems v-else :list-id="listId" />
</template>
```
You can also have `useQuery` wait for a stream before running:
```js theme={null}
// This query waits for the stream to sync before executing
const { data: todos } = useQuery(
  'SELECT * FROM todos WHERE list_id = ?',
  [listId],
  {
    streams: [
      {
        name: 'list_todos',
        parameters: { list_id: listId },
        waitForStream: true
      }
    ]
  }
);
```
The `composeSyncStream` extension subscribes to a stream for as long as the composable is part of the composition. It returns a `SyncStreamStatus?` for the subscription so you can check sync state.
The `composeSyncStream` helper was added in Kotlin SDK v1.11.0.
```kotlin theme={null}
@Composable
fun TodoList(database: PowerSyncDatabase, listId: String) {
    val status = database.composeSyncStream(
        name = "list_todos",
        parameters = mapOf("list_id" to JsonParam.String(listId))
    )

    if (status?.subscription?.hasSynced != true) {
        LoadingSpinner()
        return
    }

    val todos = database.getAll("SELECT * FROM todos WHERE list_id = ?", listOf(listId))
    TodoItems(todos = todos)
}
```
You can pass `ttl` and `priority` for cache duration and [sync priority](/sync/advanced/prioritized-sync):
```kotlin theme={null}
database.composeSyncStream(
    name = "list_todos",
    parameters = mapOf("list_id" to JsonParam.String(listId)),
    ttl = 1.hours,
    priority = StreamPriority(1)
)
```
## Type-Safe Stream Wrappers
When you generate your client-side schema from the [PowerSync Dashboard](https://dashboard.powersync.com) or CLI, typed stream wrappers are generated alongside the schema for all SDKs. These catch typos in stream names and parameter names at compile time — mistakes that would otherwise cause silent data-missing bugs only detectable by inspecting sync status.
For example, without typed wrappers this fails silently — no error, but data won't sync:
```js theme={null}
// Wrong stream name, wrong parameter key — no compile error, data just won't sync
await db.syncStream('note', { project_id: 'abc' }).subscribe();
```
### The Generated Code
The schema generator produces typed wrappers at the bottom of your generated schema file. Given streams like:
```yaml theme={null}
streams:
  lists:
    query: SELECT * FROM lists WHERE owner_id = auth.user_id()
  todos:
    query: SELECT * FROM todos WHERE list_id = subscription.parameter('list')
```
The generated output (JavaScript/TypeScript) looks like:
```typescript theme={null}
import { column, Schema, Table, PowerSyncDatabase, SyncStream } from '@powersync/web';
// OR: import { ... } from '@powersync/react-native';

// ... table definitions ...

export const AppSchema = new Schema({ lists, todos });

export function typedStreams(db: PowerSyncDatabase) {
  return {
    lists(): SyncStream {
      return db.syncStream('lists', {});
    },
    todos(params: { list: string }): SyncStream {
      return db.syncStream('todos', params);
    }
  };
}
```
### Usage
Use the generated wrappers instead of calling `db.syncStream()` directly. Each method returns a `SyncStream`, so you can chain `.subscribe()`, `.subscribe({ ttl, priority })`, and all other methods covered on this page.
```typescript theme={null}
import { typedStreams } from './powersync/schema';
// Stream without subscription parameters
const sub = await typedStreams(db).lists().subscribe();
// Stream with subscription parameters — names and types are enforced
const sub = await typedStreams(db).todos({ list: 'list-id-abc' }).subscribe();
// Works with framework hooks
const stream = useSyncStream(typedStreams(db).todos({ list: listId }));
```
```dart theme={null}
// Stream without subscription parameters
final sub = await TypedSyncStreams(db).lists().subscribe();
// Stream with subscription parameters
final sub = await TypedSyncStreams(db).todos(list: 'list-id-abc').subscribe();
```
```kotlin theme={null}
// Stream without subscription parameters
val sub = TypedSyncStreams(db).lists().subscribe()
// Stream with subscription parameters
val sub = TypedSyncStreams(db).todos(list = "list-id-abc").subscribe()
```
```swift theme={null}
// Stream without subscription parameters
let sub = try await TypedSyncStreams(db).lists().subscribe()
// Stream with subscription parameters
let sub = try await TypedSyncStreams(db).todos(list: "list-id-abc").subscribe()
```
```csharp theme={null}
// Stream without subscription parameters
var sub = await new TypedSyncStreams(db).Lists().Subscribe();
// Stream with subscription parameters
var sub = await new TypedSyncStreams(db).Todos(list: "list-id-abc").Subscribe();
```
Type-safe wrappers are only generated for streams that do **not** have `auto_subscribe: true`. Auto-subscribe streams start syncing automatically on connect and don't require explicit client subscriptions, so no wrapper is generated for them.
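For reference, an auto-subscribe stream is declared on the server side with the `auto_subscribe` flag, following the stream syntax shown earlier:

```yaml theme={null}
streams:
  lists:
    query: SELECT * FROM lists WHERE owner_id = auth.user_id()
    # Clients start syncing this stream automatically on connect
    auto_subscribe: true
```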
## Checking Sync Status
You can check whether a subscription has synced and monitor download progress:
```js theme={null}
const sub = await db.syncStream('list_todos', { list_id: 'abc123' }).subscribe();
// Check if this subscription has completed initial sync
const status = db.currentStatus.forStream(sub);
console.log(status?.subscription.hasSynced); // true/false
console.log(status?.progress); // download progress
```
```dart theme={null}
final sub = await db.syncStream('list_todos', {'list_id': 'abc123'}).subscribe();
// Check if this subscription has completed initial sync
final status = db.currentStatus.forStream(sub);
print(status?.subscription.hasSynced); // true/false
print(status?.progress); // download progress
```
```kotlin theme={null}
val sub = database.syncStream("list_todos", mapOf("list_id" to JsonParam.String("abc123")))
    .subscribe()
// Check if this subscription has completed initial sync
val status = database.currentStatus.forStream(sub)
println(status?.subscription?.hasSynced) // true/false
println(status?.progress) // download progress
```
```swift theme={null}
let sub = try await db.syncStream(name: "list_todos", params: ["list_id": JsonValue.string("abc123")]).subscribe()
// Check if this subscription has completed initial sync
let status = db.currentStatus.forStream(stream: sub)
print(status?.subscription.hasSynced ?? false) // true/false
print(status?.progress) // download progress
```
```csharp theme={null}
var sub = await db.SyncStream("list_todos", new() { ["list_id"] = "abc123" }).Subscribe();
// Check if this subscription has completed initial sync
var status = db.CurrentStatus.ForStream(sub);
Console.WriteLine(status?.Subscription.HasSynced); // true/false
Console.WriteLine(status?.Progress); // download progress
```
## TTL (Time-To-Live)
TTL controls how long data remains cached after you unsubscribe. This enables "warm cache" behavior — when users navigate back to a screen, data may already be available without waiting for a sync.
**Default behavior:** Data is cached for 24 hours after unsubscribing. For most apps, this default works well.
### Setting a Custom TTL
```js theme={null}
// Cache for 1 hour after unsubscribe (TTL in seconds)
const hourSub = await db.syncStream('todos', { list_id: 'abc' })
  .subscribe({ ttl: 3600 });
// Cache indefinitely (data never expires)
const foreverSub = await db.syncStream('todos', { list_id: 'abc' })
  .subscribe({ ttl: Infinity });
// No caching (remove data immediately on unsubscribe)
const noCacheSub = await db.syncStream('todos', { list_id: 'abc' })
  .subscribe({ ttl: 0 });
```
```dart theme={null}
// Cache for 1 hour after unsubscribe
final hourSub = await db.syncStream('todos', {'list_id': 'abc'})
    .subscribe(ttl: const Duration(hours: 1));
// Cache for 7 days
final weekSub = await db.syncStream('todos', {'list_id': 'abc'})
    .subscribe(ttl: const Duration(days: 7));
```
```kotlin theme={null}
// Cache for 1 hour after unsubscribe
val hourSub = database.syncStream("todos", mapOf("list_id" to JsonParam.String("abc")))
    .subscribe(ttl = 1.hours)
// Cache for 7 days
val weekSub = database.syncStream("todos", mapOf("list_id" to JsonParam.String("abc")))
    .subscribe(ttl = 7.days)
```
```swift theme={null}
// Cache for 1 hour after unsubscribe (TTL in seconds)
let hourSub = try await db.syncStream(name: "todos", params: ["list_id": JsonValue.string("abc")])
    .subscribe(ttl: 60 * 60, priority: nil)
// Cache for 7 days
let weekSub = try await db.syncStream(name: "todos", params: ["list_id": JsonValue.string("abc")])
    .subscribe(ttl: 60 * 60 * 24 * 7, priority: nil)
```
```csharp theme={null}
// Cache for 1 hour after unsubscribe
var hourSub = await db.SyncStream("todos", new() { ["list_id"] = "abc" })
    .Subscribe(new SyncStreamSubscribeOptions { Ttl = TimeSpan.FromHours(1) });
// Cache for 7 days
var weekSub = await db.SyncStream("todos", new() { ["list_id"] = "abc" })
    .Subscribe(new SyncStreamSubscribeOptions { Ttl = TimeSpan.FromDays(7) });
```
### How TTL Works
* **Per-subscription**: Each `(stream name, parameters)` pair has its own TTL.
* **First subscription wins**: If you subscribe to the same stream with the same parameters multiple times, the TTL from the first subscription is used.
* **After unsubscribe**: Data continues syncing for the TTL duration, then is removed from the client-side SQLite database.
```js theme={null}
// Example: User opens two lists with different TTLs
const subA = await db.syncStream('todos', { list_id: 'A' }).subscribe({ ttl: 43200 }); // 12h
const subB = await db.syncStream('todos', { list_id: 'B' }).subscribe({ ttl: 86400 }); // 24h
// Each subscription is independent
// List A data cached for 12h after unsubscribe
// List B data cached for 24h after unsubscribe
```
## Priority Override
Streams can have a default priority set in the YAML sync configuration (see [Prioritized Sync](/sync/advanced/prioritized-sync)). When subscribing, you can override this priority for a specific subscription:
```js theme={null}
// Override the stream's default priority
const sub = await db.syncStream('todos', { list_id: 'abc' }).subscribe({ priority: 1 });
```
When different components subscribe to the same stream with the same parameters but different priorities, PowerSync uses the highest priority for syncing. That higher priority is kept until the subscription ends (or its TTL expires). Subscriptions with different parameters are independent and do not conflict.
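As a rough mental model (not the SDK's internal implementation), resolving the effective priority of overlapping subscriptions can be sketched as taking the most urgent value. Priorities are numeric, and lower numbers sync first:

```js theme={null}
// Illustrative model only, not actual SDK internals.
// Priorities are numeric and lower values sync first (0 is most urgent),
// so the "highest" priority among overlapping subscriptions is the minimum.
function effectivePriority(subscriptionPriorities, streamDefault = 3) {
  if (subscriptionPriorities.length === 0) return streamDefault;
  return Math.min(...subscriptionPriorities);
}

// Two components subscribe to the same stream with the same parameters:
console.log(effectivePriority([3, 1])); // 1: the override applies while both are active
console.log(effectivePriority([]));     // 3: back to the stream default once all end
```

Once the higher-priority subscription ends (or its TTL expires), syncing falls back to the remaining subscriptions' priorities or the stream's default.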
## Connection Parameters
Connection parameters are a more advanced feature for values that apply to all streams in a session. They're the Sync Streams equivalent of [Client Parameters](/sync/rules/client-parameters) in legacy Sync Rules.
For most use cases, **subscription parameters** (passed when subscribing) are more flexible and recommended. Use connection parameters only when you need a single global value across all streams, like an environment flag.
Define streams that use connection parameters:
```yaml theme={null}
streams:
  config:
    auto_subscribe: true
    query: SELECT * FROM config WHERE env = connection.parameter('environment')
```
Set connection parameters when connecting:
```js theme={null}
await db.connect(connector, {
  params: { environment: 'production' }
});
```
```dart theme={null}
await db.connect(
  connector: connector,
  params: {'environment': 'production'},
);
```
```kotlin theme={null}
database.connect(
    connector,
    params = mapOf("environment" to JsonParam.String("production"))
)
```
```swift theme={null}
try await db.connect(
    connector: connector,
    options: ConnectOptions(params: ["environment": JsonValue.string("production")])
)
```
```csharp theme={null}
await db.Connect(connector, new ConnectOptions {
  Params = new() { ["environment"] = "production" }
});
```
## API Reference
For quick reference, here are the key methods available in each SDK:
| Method | Description |
| --------------------------------- | -------------------------------------------------------------------------------------------------------------- |
| `db.syncStream(name, params)` | Get a `SyncStream` instance for a stream with optional parameters |
| `stream.subscribe(options)` | Subscribe to the stream. Returns a `SyncStreamSubscription` |
| `subscription.waitForFirstSync()` | Wait until the subscription has completed its initial sync |
| `subscription.unsubscribe()` | Unsubscribe from the stream (data [remains cached](/sync/streams/client-usage#how-ttl-works) for TTL duration) |
| `db.currentStatus.forStream(sub)` | Get sync status and progress for a subscription |
# Common Table Expressions (CTEs)
Source: https://docs.powersync.com/sync/streams/ctes
Reuse common query patterns within a stream using CTEs to simplify configurations and improve efficiency.
When a stream needs reusable filtering logic, you can define it once in a Common Table Expression (CTE) and reuse it in that stream's queries. This keeps stream definitions DRY and makes them easier to maintain. For the supported syntax of the `with` block and CTE rules, see [Supported SQL — CTE and WITH syntax](/sync/supported-sql#cte-and-with-syntax).
## Why Use CTEs
Consider an app where users belong to organizations. Several tables need to filter by the user's organizations:
```yaml theme={null}
# Without CTEs - repetitive and hard to maintain
streams:
  org_projects:
    query: |
      SELECT * FROM projects
      WHERE org_id IN (SELECT org_id FROM org_members WHERE user_id = auth.user_id())
  org_repositories:
    query: |
      SELECT * FROM repositories
      WHERE org_id IN (SELECT org_id FROM org_members WHERE user_id = auth.user_id())
  org_settings:
    query: |
      SELECT * FROM settings
      WHERE org_id IN (SELECT org_id FROM org_members WHERE user_id = auth.user_id())
```
The same subquery appears three times. You can merge these into one stream and define the logic once using a CTE:
```yaml theme={null}
# With a CTE and multiple queries
streams:
  org_data:
    with:
      user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
    queries:
      - SELECT * FROM projects WHERE org_id IN user_orgs
      - SELECT * FROM repositories WHERE org_id IN user_orgs
      - SELECT * FROM settings WHERE org_id IN user_orgs
```
If the membership logic changes, you update it in one place.
## Defining CTEs
Define CTEs in a `with` block inside a stream. Each CTE has a name and a `SELECT` query:
```yaml theme={null}
streams:
  my_stream:
    with:
      cte_name: SELECT columns FROM table WHERE conditions
    query: SELECT * FROM some_table WHERE col IN cte_name
```
The CTE query can include any filtering logic, including parameters:
```yaml theme={null}
streams:
  user_data:
    with:
      user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
      active_projects: SELECT id FROM projects WHERE archived = false
    query: SELECT * FROM projects WHERE org_id IN user_orgs AND id IN active_projects
```
## Using CTEs in Queries
Once defined in a stream's `with` block, use the CTE name in that stream's query or queries. You can use it like a subquery or join it as if it were a table.
**Short-hand syntax** (when the CTE has exactly one column):
```yaml theme={null}
streams:
  projects:
    with:
      user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
    query: SELECT * FROM projects WHERE org_id IN user_orgs
```
The short-hand `IN cte_name` is equivalent to `IN (SELECT * FROM cte_name)`. If the CTE has more than one column, this form is an error; use explicit subquery or join syntax instead.
**Explicit subquery syntax** (when you need to select specific columns):
```yaml theme={null}
streams:
  projects:
    with:
      user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
    query: SELECT * FROM projects WHERE org_id IN (SELECT org_id FROM user_orgs)
```
**Join syntax** (you can join a CTE as if it were a table). Only `INNER JOIN` is supported:
```yaml theme={null}
streams:
  projects:
    with:
      user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
    query: SELECT projects.* FROM projects INNER JOIN user_orgs ON user_orgs.org_id = projects.org_id
```
## Combining with Multiple Queries
CTEs work well with the `queries` feature (multiple queries per stream). This lets you share the CTE and keep all query results in one stream: the client only needs to manage one subscription instead of multiple.
```yaml theme={null}
streams:
  user_data:
    with:
      my_org: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
    queries:
      - SELECT * FROM projects WHERE org_id IN my_org
      - SELECT * FROM repositories WHERE org_id IN my_org
      - SELECT * FROM team_members WHERE org_id IN my_org
```
## Complete Example
A full configuration using CTEs. Each stream that needs shared logic defines its own `with` block:
```yaml theme={null}
config:
  edition: 3
streams:
  # Organization-level data (auto-sync) - one stream with CTE and multiple queries
  org_and_projects:
    auto_subscribe: true
    with:
      user_orgs: |
        SELECT org_id FROM org_memberships WHERE user_id = auth.user_id()
      accessible_projects: |
        SELECT id FROM projects
        WHERE org_id IN (SELECT org_id FROM org_memberships WHERE user_id = auth.user_id())
        OR id IN (SELECT project_id FROM project_shares WHERE shared_with = auth.user_id())
    queries:
      - SELECT * FROM organizations WHERE id IN user_orgs
      - SELECT * FROM projects WHERE id IN accessible_projects
  # Project details (on-demand) - same CTE and param, so one stream with multiple queries
  project_details:
    with:
      accessible_projects: |
        SELECT id FROM projects
        WHERE org_id IN (SELECT org_id FROM org_memberships WHERE user_id = auth.user_id())
        OR id IN (SELECT project_id FROM project_shares WHERE shared_with = auth.user_id())
    queries:
      - |
        SELECT * FROM tasks
        WHERE project_id = subscription.parameter('project_id')
        AND project_id IN accessible_projects
      - |
        SELECT * FROM files
        WHERE project_id = subscription.parameter('project_id')
        AND project_id IN accessible_projects
```
## Limitations
The following rules apply to CTEs. For the full syntax reference, see [Supported SQL — CTE and WITH syntax](/sync/supported-sql#cte-and-with-syntax).
**Sync Streams do not support global CTEs.** Use a `with` block only inside a stream. To reuse logic across streams, define the same CTE (or equivalent subquery) in each stream that needs it, or combine streams using [multiple queries per stream](/sync/streams/queries#multiple-queries-per-stream) so one stream can share a single CTE across queries.
**CTEs cannot reference other CTEs.** Each CTE must be self-contained:
```yaml theme={null}
# This won't work - cte2 cannot reference cte1
streams:
  my_stream:
    with:
      cte1: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
      cte2: SELECT id FROM projects WHERE org_id IN cte1 # Error!
```
If you need to chain filters, use nested subqueries in your stream query instead:
```yaml theme={null}
streams:
  tasks:
    with:
      user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
    query: |
      SELECT * FROM tasks
      WHERE project_id IN (
        SELECT id FROM projects WHERE org_id IN user_orgs
      )
```
**The short-hand `IN cte_name` works only when the CTE has exactly one column.** If the CTE has multiple columns, use explicit subquery syntax or join the CTE as a table.
**CTE names take precedence over table/collection names.** If you define a CTE with the same name as a database table/collection, the CTE will be used. Choose distinct names to avoid confusion.
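For instance, in this hypothetical configuration (illustrative table names) a CTE named `settings` shadows any `settings` table, so the stream query reads from the CTE:

```yaml theme={null}
streams:
  documents_with_settings:
    with:
      # This CTE shadows a "settings" table for queries in this stream
      settings: SELECT id FROM user_settings WHERE user_id = auth.user_id()
    # "IN settings" resolves to the CTE above, not the settings table
    query: SELECT * FROM documents WHERE settings_id IN settings
```

Renaming the CTE (e.g. to `my_settings`) removes the ambiguity.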
# Examples, Patterns & Demos
Source: https://docs.powersync.com/sync/streams/examples
Common patterns, use case examples, and working demo apps for Sync Streams.
## Common Patterns
These patterns show how to combine Sync Streams features to solve common real-world scenarios.
### Organization-Scoped Data
For apps where users belong to an organization (or company, team, workspace, etc.), use JWT claims to scope data. The `org_id` in the JWT ensures users only see data from their organization, without needing to pass it from the client.
```yaml theme={null}
streams:
  # All projects in the user's organization (auto-sync on connect)
  org_projects:
    auto_subscribe: true
    query: SELECT * FROM projects WHERE org_id = auth.parameter('org_id')
  # Tasks for a specific project (sync on-demand)
  project_tasks:
    query: |
      SELECT * FROM tasks
      WHERE project_id = subscription.parameter('project_id')
      AND project_id IN (SELECT id FROM projects WHERE org_id = auth.parameter('org_id'))
```
Your backend should include the `org_id` in the JWT payload when issuing tokens — e.g. `{ "sub": "user-123", "org_id": "org-456" }`. Clients auto-subscribe to `org_projects` when they connect, so the project list is available offline immediately. Subscribe to `project_tasks` when the user opens a project:
```js theme={null}
// When the user opens a project view
const sub = await db.syncStream('project_tasks', { project_id: projectId }).subscribe();
await sub.waitForFirstSync();
// Unsubscribe when the user navigates away
sub.unsubscribe();
```
For more complex organization structures where users can belong to multiple organizations, see [Expanding JSON Arrays](/sync/streams/parameters#expanding-json-arrays).
### Role-Based Access
When different users should see different data based on their role, use JWT claims to apply visibility rules. This keeps authorization logic on the server side where it's secure.
```yaml theme={null}
streams:
  # Admins see all articles, others see only published or their own
  articles:
    auto_subscribe: true
    query: |
      SELECT * FROM articles
      WHERE org_id = auth.parameter('org_id')
      AND (
        status = 'published'
        OR author_id = auth.user_id()
        OR auth.parameter('role') = 'admin'
      )
```
Your backend should include both `org_id` and `role` in the JWT — e.g. `{ "sub": "user-123", "org_id": "org-456", "role": "admin" }`. The `role` claim is set by your backend so users can't escalate their own privileges. In this example, clients auto-subscribe to `articles` when they connect — no client-side subscription call needed.
### Shared Resources
For apps where users can share items with each other (like documents or folders), combine ownership checks with a "shares table" lookup. This syncs both items the user owns and items others have shared with them.
```yaml theme={null}
streams:
  my_documents:
    auto_subscribe: true
    query: |
      SELECT * FROM documents
      WHERE owner_id = auth.user_id()
      OR id IN (SELECT document_id FROM document_shares WHERE shared_with = auth.user_id())
```
Clients auto-subscribe to `my_documents` when they connect, so the user's documents (owned and shared) are available immediately.
### Syncing Related Data
When a detail view needs data from multiple tables (like an issue and its comments), use a [CTE](/sync/streams/ctes) and [multiple queries per stream](/sync/streams/queries#multiple-queries-per-stream) to define the authorization check once and sync both tables in one subscription.
```yaml theme={null}
streams:
  issue_with_comments:
    with:
      my_projects: SELECT project_id FROM project_members WHERE user_id = auth.user_id()
    queries:
      - |
        SELECT * FROM issues
        WHERE id = subscription.parameter('issue_id')
        AND project_id IN my_projects
      - |
        SELECT comments.* FROM comments
        INNER JOIN issues ON comments.issue_id = issues.id
        WHERE comments.issue_id = subscription.parameter('issue_id')
        AND issues.project_id IN my_projects
```
Subscribe once when the user opens an issue:
```js theme={null}
// When the user opens an issue view
const issueSub = await db.syncStream('issue_with_comments', { issue_id: issueId }).subscribe();
await issueSub.waitForFirstSync();
// Unsubscribe when the user navigates away
issueSub.unsubscribe();
```
If multiple streams share the same filtering logic, consider using [CTEs](/sync/streams/ctes) to avoid repetition, and [multiple queries per stream](/sync/streams/queries#multiple-queries-per-stream) so the client only needs to manage one subscription instead of several. This is more efficient and results in fewer sync buckets.
### User's Default or Primary Item
When users have a "default" or "primary" item stored in their profile, you can sync related data automatically without the client needing to know the ID upfront.
```yaml theme={null}
streams:
  # Sync todos from the user's primary list
  primary_list_todos:
    auto_subscribe: true
    query: |
      SELECT * FROM todos
      WHERE list_id IN (
        SELECT primary_list_id FROM users WHERE id = auth.user_id()
      )
```
The subquery looks up the user's `primary_list_id` from the `users` table, then syncs all `todos` from that list. When the user changes their primary list in the database, the synced data updates automatically. Clients auto-subscribe to `primary_list_todos` when they connect — no client-side subscription call needed.
### Hierarchical Data
When your data has parent-child relationships across multiple levels, you can traverse the hierarchy using nested subqueries or joins. This is common in apps where access to child records is determined by membership at a higher level.
For example, consider an app with organizations, projects, and tasks. Users belong to organizations, and should see all tasks in projects that belong to their organizations:
```
Organization → Projects → Tasks
      ↑
User membership
```
**Using nested subqueries:**
```yaml theme={null}
streams:
  org_tasks:
    auto_subscribe: true
    query: |
      SELECT * FROM tasks
      WHERE project_id IN (
        SELECT id FROM projects WHERE org_id IN (
          SELECT org_id FROM org_members WHERE user_id = auth.user_id()
        )
      )
```
The query reads from inside out: find the user's organizations, then find projects in those organizations, then find tasks in those projects.
**Using joins** (often easier to read for deeply nested hierarchies):
```yaml theme={null}
streams:
  org_tasks:
    auto_subscribe: true
    query: |
      SELECT tasks.* FROM tasks
      INNER JOIN projects ON tasks.project_id = projects.id
      INNER JOIN org_members ON projects.org_id = org_members.org_id
      WHERE org_members.user_id = auth.user_id()
```
Both queries produce the same result. PowerSync handles these nested relationships efficiently, so you don't need to denormalize your database or add redundant foreign keys. Clients auto-subscribe to `org_tasks` when they connect — no client-side subscription call needed.
### Many-to-Many Relationships
Many-to-many relationships (like users subscribing to boards) typically use a join table. Sync Streams support `INNER JOIN`s, so you can traverse these relationships directly without denormalizing your schema.
Consider a social app where users subscribe to message boards:
```
Users ←→ board_subscriptions ←→ Boards → Posts → Comments
```
```yaml theme={null}
streams:
  # Posts from boards the user subscribes to
  board_posts:
    auto_subscribe: true
    query: |
      SELECT posts.* FROM posts
      INNER JOIN board_subscriptions ON posts.board_id = board_subscriptions.board_id
      WHERE board_subscriptions.user_id = auth.user_id()
  # Comments on those posts (no denormalization needed)
  board_comments:
    auto_subscribe: true
    query: |
      SELECT comments.* FROM comments
      INNER JOIN posts ON comments.post_id = posts.id
      INNER JOIN board_subscriptions ON posts.board_id = board_subscriptions.board_id
      WHERE board_subscriptions.user_id = auth.user_id()
  # User profiles for co-subscribers (people who share a board with me)
  board_users:
    auto_subscribe: true
    query: |
      SELECT users.* FROM users
      INNER JOIN board_subscriptions ON users.id = board_subscriptions.user_id
      WHERE board_subscriptions.board_id IN (
        SELECT board_id FROM board_subscriptions WHERE user_id = auth.user_id()
      )
```
Clients auto-subscribe to all three streams when they connect. Each query joins through `board_subscriptions` to find relevant data: posts in the user's boards, comments on those posts, and other users sharing those boards.
Unlike with legacy [Sync Rules](/sync/rules/many-to-many-join-tables), you don't need to denormalize your schema or maintain array columns to handle these relationships.
## Use Case Examples
Complete configurations for common application types.
### To-do List App
Sync the list of `lists` upfront, but only sync `todos` when the user opens a specific list:
```yaml theme={null}
config:
  edition: 3
streams:
  # Always available - user can see their lists offline
  lists:
    auto_subscribe: true
    query: SELECT * FROM lists WHERE owner_id = auth.user_id()
  # Loaded on demand - only sync todos for the list being viewed
  list_todos:
    query: |
      SELECT * FROM todos
      WHERE list_id = subscription.parameter('list_id')
      AND list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
```
Clients auto-subscribe to `lists` when they connect. Subscribe to `list_todos` when the user opens a list:
```js theme={null}
// Lists are already synced (auto_subscribe: true)
const lists = await db.getAll('SELECT * FROM lists');
// When user opens a list
const sub = await db.syncStream('list_todos', { list_id: selectedListId }).subscribe();
await sub.waitForFirstSync();
// Todos are now available locally
const todos = await db.getAll('SELECT * FROM todos WHERE list_id = ?', [selectedListId]);
// Unsubscribe when user navigates back to the list overview
sub.unsubscribe();
```
### Chat Application
Chat apps typically have many conversations but users only view one at a time. Sync the conversation list upfront so users can see all their chats immediately, but load messages on-demand to avoid syncing potentially thousands of messages across all conversations.
```yaml theme={null}
config:
  edition: 3
streams:
  # User's conversations - always show the conversation list
  my_conversations:
    auto_subscribe: true
    query: |
      SELECT * FROM conversations
      WHERE id IN (SELECT conversation_id FROM participants WHERE user_id = auth.user_id())
  # Messages - only load for the active conversation
  conversation_messages:
    query: |
      SELECT * FROM messages
      WHERE conversation_id = subscription.parameter('conversation_id')
      AND conversation_id IN (
        SELECT conversation_id FROM participants WHERE user_id = auth.user_id()
      )
```
Clients auto-subscribe to `my_conversations` when they connect. Subscribe to `conversation_messages` when the user opens a conversation:
```js theme={null}
// Conversations are already synced (auto_subscribe: true)
const conversations = await db.getAll('SELECT * FROM conversations');
// When user opens a conversation
const sub = await db.syncStream('conversation_messages', {
  conversation_id: conversationId
}).subscribe();
await sub.waitForFirstSync();
// Unsubscribe when user closes the conversation
sub.unsubscribe();
```
### Project Management App
This example shows a multi-tenant project management app where users can access public projects or projects they're members of. Each stream that needs "accessible projects" defines a CTE in that stream (Sync Streams do not support a top-level `with` block).
```yaml theme={null}
config:
  edition: 3
streams:
  # Organization data - always available
  org_info:
    auto_subscribe: true
    query: SELECT * FROM organizations WHERE id = auth.parameter('org_id')
  # All accessible projects - always available for navigation
  projects:
    auto_subscribe: true
    with:
      user_projects: |
        SELECT id FROM projects
        WHERE org_id = auth.parameter('org_id')
        AND (is_public OR id IN (
          SELECT project_id FROM project_members WHERE user_id = auth.user_id()
        ))
    query: SELECT * FROM projects WHERE id IN user_projects
  # Project details - on demand when user opens a project (one CTE, multiple queries)
  project_details:
    with:
      user_projects: |
        SELECT id FROM projects
        WHERE org_id = auth.parameter('org_id')
        AND (is_public OR id IN (
          SELECT project_id FROM project_members WHERE user_id = auth.user_id()
        ))
    queries:
      - |
        SELECT * FROM tasks
        WHERE project_id = subscription.parameter('project_id')
        AND project_id IN user_projects
      - |
        SELECT * FROM files
        WHERE project_id = subscription.parameter('project_id')
        AND project_id IN user_projects
```
Your backend should include `org_id` in the JWT — e.g. `{ "sub": "user-123", "org_id": "org-456" }`. Clients auto-subscribe to `org_info` and `projects` when they connect. Subscribe to project details when the user opens a project:
```js theme={null}
// org_info and projects are already synced (auto_subscribe: true)
const projects = await db.getAll('SELECT * FROM projects');
// When user opens a project
const sub = await db.syncStream('project_details', { project_id: projectId }).subscribe();
await sub.waitForFirstSync();
// Unsubscribe when user navigates away
sub.unsubscribe();
```
### Organization Workspace (Using Multiple Queries)
When several tables share the same access pattern, you can group them into a single stream using multiple queries and a CTE. Sync is more efficient and the client only needs to manage one subscription instead of multiple.
```yaml theme={null}
config:
  edition: 3
streams:
  # All org-level data syncs together in one stream
  org_data:
    auto_subscribe: true
    with:
      user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
    queries:
      - SELECT * FROM organizations WHERE id IN user_orgs
      - SELECT * FROM projects WHERE org_id IN user_orgs
      - SELECT * FROM team_members WHERE org_id IN user_orgs
  # Project details - on demand. CTE includes subscription.parameter so queries stay simple.
  project_details:
    with:
      selected_project: |
        SELECT projects.id FROM projects
        INNER JOIN org_members ON org_members.org_id = projects.org_id AND org_members.user_id = auth.user_id()
        WHERE projects.id = subscription.parameter('project_id')
    queries:
      - SELECT * FROM tasks WHERE project_id IN selected_project
      - SELECT * FROM files WHERE project_id IN selected_project
      - SELECT * FROM comments WHERE project_id IN selected_project
```
The `user_orgs` CTE in `org_data` looks up org membership using `auth.user_id()`. In `project_details`, the CTE can include `subscription.parameter('project_id')` so it both authorizes (user must be in the project's org) and applies the selected project — the queries then just filter by `project_id IN selected_project`. Clients auto-subscribe to `org_data` when they connect. Subscribe to `project_details` when the user opens a project:
```js theme={null}
// org_data is already synced (auto_subscribe: true)
const projects = await db.getAll('SELECT * FROM projects');
// When user opens a project
const sub = await db.syncStream('project_details', { project_id: projectId }).subscribe();
await sub.waitForFirstSync();
// Unsubscribe when user navigates away
sub.unsubscribe();
```
The `project_details` stream uses a [CTE](/sync/streams/ctes) and groups tasks, files, and comments for a specific project into a single subscription.
## Demo Apps
Working demo apps that demonstrate Sync Streams in action. These show how to combine auto-subscribe streams (for data that should always be available) with on-demand streams (for data loaded when needed).
Try the [`react-supabase-todolist-sync-streams`](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-supabase-todolist-sync-streams) demo app by following the instructions in the README.
In this demo:
* The app syncs `lists` by default, so they're available immediately and offline (demonstrating auto-subscribe behavior).
* The app syncs `todos` on demand when a user opens a list (demonstrating subscription parameters).
* When the user navigates back to the same list, they won't see a loading state, because the data is cached locally (demonstrating TTL caching behavior).
Try the [`supabase-todolist`](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist) demo app, which supports Sync Streams.
Deploy the following Sync Streams:
```yaml theme={null}
config:
  edition: 3
streams:
  lists:
    auto_subscribe: true
    query: SELECT * FROM lists WHERE owner_id = auth.user_id()
  todos:
    query: |
      SELECT * FROM todos
      WHERE list_id = subscription.parameter('list')
      AND list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
```
In this demo:
* The app syncs `lists` by default, so they're available immediately and offline (demonstrating auto-subscribe behavior).
* The app syncs `todos` on demand when a user opens a list (demonstrating subscription parameters).
* When the user navigates back to the same list, they won't see a loading state, because the data is cached locally (demonstrating TTL caching behavior).
Sync Streams support is available. Demo app coming soon.
# Migrating from Sync Rules
Source: https://docs.powersync.com/sync/streams/migration
How to migrate existing projects from legacy Sync Rules to Sync Streams.
## Why Migrate?
PowerSync's original Sync Rules system was optimized for offline-first use cases where you want to "sync everything upfront" when the client connects, so data is available locally if the user goes offline.
However, many developers are building apps where users are mostly online, and you don't want to make users wait to sync a lot of data upfront. This is especially true for **web apps**: users are mostly online, you often want to sync only the data needed for the current page, and users frequently have multiple browser tabs open — each needing different subsets of data.
### The Problem with Client Parameters
[Client Parameters](/sync/rules/client-parameters) in Sync Rules partially support on-demand syncing — for example, using a `project_ids` array to sync only specific projects. However, manually managing these arrays across different browser tabs becomes painful:
* You need to aggregate IDs across all open tabs
* You need additional logic for different data types (tables)
* If you want to keep data around after a tab closes (caching), you need even more management
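As a sketch of that bookkeeping (a hypothetical helper, not part of any PowerSync SDK), a client using a `project_ids` client parameter would have to merge IDs across tabs and reconnect whenever the merged set changes:

```js theme={null}
// Hypothetical bookkeeping for legacy client parameters.
// Each open tab reports the project IDs it needs; the client must
// merge them into a single array before passing it on connect.
function mergeTabParams(tabsToProjectIds) {
  const ids = new Set();
  for (const list of Object.values(tabsToProjectIds)) {
    for (const id of list) ids.add(id);
  }
  return { project_ids: [...ids].sort() };
}

// Tab A views projects p1 and p2; tab B views p2 and p3:
mergeTabParams({ tabA: ['p1', 'p2'], tabB: ['p2', 'p3'] });
// → { project_ids: ['p1', 'p2', 'p3'] }
```

With Sync Streams, none of this is needed: each tab simply subscribes to the streams it renders.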
### How Sync Streams Solve This
Sync Streams address these limitations:
1. **On-demand syncing**: Define streams once, then subscribe from your app one or more times with different parameters. No need to manage arrays of IDs — each subscription is independent.
2. **Multi-tab support**: Each subscription manages its own lifecycle. Open the same list in two tabs? Each tab subscribes independently. Close one? The other keeps working.
3. **Built-in caching**: Each subscription has a configurable `ttl` that keeps data cached after unsubscribing. When users return to a screen, data may already be available — no loading state needed.
4. **Simpler, more powerful syntax**: Queries with subqueries, JOINs, and CTEs. No separate [parameter queries](/sync/rules/overview#parameter-queries). The syntax is closer to plain SQL and supports more SQL features than Sync Rules.
5. **Framework integration**: [React hooks and Kotlin Compose](/sync/streams/client-usage#framework-integrations) extensions let your UI components automatically manage subscriptions based on what's rendered.
### Still Need Offline-First?
If you want "sync everything upfront" behavior (like Sync Rules), set [`auto_subscribe: true`](/sync/streams/overview#using-auto-subscribe) on your Sync Streams and clients will subscribe automatically when they connect.
## Requirements
* PowerSync Service v1.20.0+ (Cloud instances already meet this)
* Latest SDK versions with [Rust-based sync client](https://releases.powersync.com/announcements/improved-sync-performance-in-our-client-sdks) (enabled by default on latest SDKs)
* `config: edition: 3` in your sync config
| SDK | Minimum Version | Rust Client Default |
| ------------ | --------------- | ------------------- |
| JS Web | v1.27.0 | v1.32.0 |
| React Native | v1.25.0 | v1.29.0 |
| React hooks | v1.8.0 | — |
| Node.js | v0.11.0 | v0.16.0 |
| Capacitor | v0.0.1 | v0.3.0 |
| Dart/Flutter | v1.16.0 | v1.17.0 |
| Kotlin | v1.7.0 | v1.9.0 |
| Swift | v1.11.0 | v1.8.0 |
| .NET | v0.0.8-alpha.1 | v0.0.5-alpha.1 |
If you're on an SDK version below the "Rust Client Default" version, enable the Rust client manually:
**JavaScript:**
```js theme={null}
await db.connect(new MyConnector(), {
  clientImplementation: SyncClientImplementation.RUST
});
```
**Dart:**
```dart theme={null}
database.connect(
  connector: YourConnector(),
  options: const SyncOptions(
    syncImplementation: SyncClientImplementation.rust,
  ),
);
```
**Kotlin:**
```kotlin theme={null}
database.connect(MyConnector(), options = SyncOptions(
    newClientImplementation = true,
))
```
**Swift:**
```swift theme={null}
@_spi(PowerSyncExperimental) import PowerSync
try await db.connect(connector: connector, options: ConnectOptions(
    newClientImplementation: true,
))
```
## Migration Tool
You can generate a Sync Streams draft from your existing Sync Rules in two ways:
1. **Dashboard:** In the [PowerSync Dashboard](https://dashboard.powersync.com/), use the **Migrate to Sync Streams** button. It converts your Sync Rules into a Sync Streams draft that you can review before deploying.
2. **CLI:** Run `powersync migrate sync-rules` to produce a Sync Streams draft from your current sync config.
A standalone migration tool is also available [here](https://powersync-community.github.io/bucket-definitions-to-sync-streams/).
The output uses `auto_subscribe: true` by default, preserving your existing sync-everything-upfront behavior so no client-side changes are required when you first deploy.
**Next steps:** Review the draft, then deploy it (via the Dashboard or `powersync deploy sync-config`). After that, you can optionally migrate individual streams to on-demand subscriptions over time — remove `auto_subscribe: true` from specific streams and update client code to use the `syncStream()` API where it makes sense for your app.
## Stream Definition Reference
```yaml theme={null}
config:
  edition: 3
streams:
  <stream_name>:
    # CTEs (optional) - define the with: block inside each stream
    with:
      <cte_name>: SELECT ... FROM ...
    # Behavior options (place above query/queries)
    auto_subscribe: true # Auto-subscribe clients on connect (default: false)
    priority: 1 # Sync priority (optional). Lower number -> higher priority
    accept_potentially_dangerous_queries: true # Silence security warnings (default: false)
    # Query options (use one)
    query: SELECT * FROM <table> WHERE ... # Single query
    queries: # Multiple queries
      - SELECT * FROM <table> WHERE ...
      - SELECT * FROM <table> WHERE ...
```
| Option | Default | Description |
| -------------------------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `query` | — | SQL-like query defining which data to sync. Use either `query` or `queries`, not both. See [Writing Queries](/sync/streams/queries). |
| `queries` | — | Array of queries defining which data to sync. More efficient than defining separate streams: the client manages one subscription and PowerSync merges the data from all queries (see [Multiple Queries per Stream](/sync/streams/queries#multiple-queries-per-stream)). |
| `with` | — | [CTEs](/sync/streams/ctes) available to this stream's queries. Define the `with` block inside each stream. |
| `auto_subscribe` | `false` | When `true`, clients automatically subscribe on connect. |
| `priority` | — | Sync priority (lower value = higher priority). See [Prioritized Sync](/sync/advanced/prioritized-sync). |
| `accept_potentially_dangerous_queries` | `false` | Silences security warnings when queries use client-controlled parameters (i.e. *connection parameters* and *subscription parameters*), as opposed to *authentication parameters* that are signed as part of the JWT. Set to `true` only if you've verified the query is safe. See [Using Parameters](/sync/streams/parameters). |
## Migration Examples
### Global Data (No Parameters)
In Sync Rules, a ["global" bucket](/sync/rules/global-buckets) syncs the same data to all users. In Sync Streams, you achieve this with queries that have no parameters. Add [`auto_subscribe: true`](/sync/streams/overview#using-auto-subscribe) to maintain the Sync Rules behavior where data syncs automatically on connect.
**Sync Rules:**
```yaml theme={null}
bucket_definitions:
  global:
    data:
      - SELECT * FROM todos
      - SELECT * FROM lists WHERE archived = false
```
**Sync Streams:**
```yaml theme={null}
config:
  edition: 3
streams:
  shared_data:
    auto_subscribe: true # Sync automatically like Sync Rules
    queries:
      - SELECT * FROM todos
      - SELECT * FROM lists WHERE archived = false
```
Without `auto_subscribe: true`, clients would need to explicitly subscribe to these streams. This gives you flexibility to migrate incrementally or switch to on-demand syncing later.
### User-Scoped Data
**Sync Rules:**
```yaml theme={null}
bucket_definitions:
  user_lists:
    priority: 1
    parameters: SELECT request.user_id() as user_id
    data:
      - SELECT * FROM lists WHERE owner_id = bucket.user_id
```
**Sync Streams:**
```yaml theme={null}
config:
  edition: 3
streams:
  user_lists:
    auto_subscribe: true
    priority: 1
    query: SELECT * FROM lists WHERE owner_id = auth.user_id()
```
### Data with Subqueries (Replaces Parameter Queries)
**Sync Rules:**
```yaml theme={null}
bucket_definitions:
  owned_lists:
    parameters: |
      SELECT id as list_id FROM lists WHERE owner_id = request.user_id()
    data:
      - SELECT * FROM lists WHERE lists.id = bucket.list_id
      - SELECT * FROM todos WHERE todos.list_id = bucket.list_id
```
**Sync Streams:**
```yaml theme={null}
config:
  edition: 3
streams:
  owned_lists:
    auto_subscribe: true
    query: SELECT * FROM lists WHERE owner_id = auth.user_id()
  list_todos:
    query: |
      SELECT * FROM todos
      WHERE list_id = subscription.parameter('list_id')
        AND list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
```
### Client Parameters → Subscription Parameters
**Sync Rules** used global [Client Parameters](/sync/rules/client-parameters):
```yaml theme={null}
bucket_definitions:
  posts:
    parameters: SELECT (request.parameters() ->> 'current_page') as page_number
    data:
      - SELECT * FROM posts WHERE page_number = bucket.page_number
```
**Sync Streams** use Subscription Parameters, which are more flexible — you can subscribe multiple times with different values:
```yaml theme={null}
config:
  edition: 3
streams:
  posts:
    query: SELECT * FROM posts WHERE page_number = subscription.parameter('page_number')
```
```js theme={null}
// Subscribe to multiple pages simultaneously
const page1 = await db.syncStream('posts', { page_number: 1 }).subscribe();
const page2 = await db.syncStream('posts', { page_number: 2 }).subscribe();
```
## Parameter Syntax Changes
| Sync Rules | Sync Streams |
| -------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `request.user_id()` | `auth.user_id()` |
| `request.jwt() ->> 'claim'` | `auth.parameter('claim')` |
| `request.parameters() ->> 'key'` | `subscription.parameter('key')` ([subscription parameter](/sync/streams/parameters#subscription-parameters)) or `connection.parameter('key')` ([connection parameter](/sync/streams/parameters#connection-parameters)) |
| `bucket.param_name` | Use the parameter directly in the query e.g. `subscription.parameter('key')` |
## Client-Side Changes
After updating your sync config, update your client code to use subscriptions:
```js theme={null}
// Before (Sync Rules with Client Parameters)
await db.connect(connector, {
  params: { current_project: projectId }
});
// After (Sync Streams with Subscriptions)
await db.connect(connector);
const sub = await db.syncStream('project_data', { project_id: projectId }).subscribe();
```
See [Client-Side Usage](/sync/streams/client-usage) for detailed examples.
# Sync Streams
Source: https://docs.powersync.com/sync/streams/overview
Sync Streams enable partial syncing, letting you define exactly which data from your backend can sync to each client using simple SQL-like queries.
Instead of syncing entire tables, you tell PowerSync exactly which data each user/client can sync. You write simple SQL-like queries to define streams of data, and your client app subscribes to the streams it needs. PowerSync handles the rest, keeping data in sync in real-time and making it available offline.
For example, you might create a stream that syncs only the current user's to-do items, another for shared projects they have access to, and another for reference data that everyone needs. Your app subscribes to these streams on demand, and only that data syncs to the client-side SQLite database. Offline-first apps that need all relevant data available upfront can use `auto_subscribe: true` so streams sync automatically when clients connect.
**Beta Release**
Sync Streams are now in beta and production-ready. We recommend Sync Streams for all new projects, and encourage existing projects to [migrate from Sync Rules](/sync/streams/migration).
We welcome your feedback — please share with us in [Discord](https://discord.gg/powersync).
## Defining Streams
Streams are defined in a YAML configuration file. Each stream has a **name** and a **query** that specifies which rows to sync using SQL-like syntax. The query can reference [parameters](/sync/overview#how-it-works) like the authenticated user's ID to personalize what each user receives.
In the [PowerSync Dashboard](https://dashboard.powersync.com/):
1. Select your project and instance
2. Go to **Sync Streams**
3. Edit the YAML directly in the dashboard
4. Click **Deploy** to validate and deploy
```yaml theme={null}
config:
  edition: 3
streams:
  todos:
    query: SELECT * FROM todos WHERE owner_id = auth.user_id()
```
Add a `sync_config` section to your `config.yaml`. Using a **separate file** is recommended (e.g. `sync_config: path: sync-config.yaml`). Put the stream definition in that file:
```yaml sync-config.yaml theme={null}
config:
  edition: 3
streams:
  todos:
    query: SELECT * FROM todos WHERE owner_id = auth.user_id()
```
You can also use inline `sync_config: content: |` with the YAML nested in your main config. See [Self-Hosted Instance Configuration](/configuration/powersync-service/self-hosted-instances#sync-streams--sync-rules) for both options.
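Expanded into full YAML, the separate-file setup referenced inline above would look roughly like this in `config.yaml` (a sketch based on the `sync_config: path:` option mentioned in this section; see the linked self-hosted configuration page for the authoritative shape):

```yaml theme={null}
# config.yaml (sketch): point the service at a separate sync config file
sync_config:
  path: sync-config.yaml
```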
Available stream options:
```yaml theme={null}
config:
  edition: 3
streams:
  <stream_name>:
    # CTEs (optional) - define the with: block inside each stream
    with:
      <cte_name>: SELECT ... FROM ...
    # Behavior options (place above query/queries)
    auto_subscribe: true # Auto-subscribe clients on connect (default: false)
    priority: 1 # Sync priority (optional). Lower number -> higher priority
    accept_potentially_dangerous_queries: true # Silence security warnings (default: false)
    # Query options (use one)
    query: SELECT * FROM <table> WHERE ... # Single query
    queries: # Multiple queries
      - SELECT * FROM <table> WHERE ...
      - SELECT * FROM <table> WHERE ...
```
| Option | Default | Description |
| -------------------------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `query` | — | SQL-like query defining which data to sync. Use either `query` or `queries`, not both. See [Writing Queries](/sync/streams/queries). |
| `queries` | — | Array of queries defining which data to sync. More efficient than defining separate streams: the client manages one subscription and PowerSync merges the data from all queries (see [Multiple Queries per Stream](/sync/streams/queries#multiple-queries-per-stream)). |
| `with` | — | [CTEs](/sync/streams/ctes) available to this stream's queries. Define the `with` block inside each stream. |
| `auto_subscribe` | `false` | When `true`, clients automatically subscribe on connect. |
| `priority` | — | Sync priority (lower value = higher priority). See [Prioritized Sync](/sync/advanced/prioritized-sync). |
| `accept_potentially_dangerous_queries` | `false` | Silences security warnings when queries use client-controlled parameters (i.e. *connection parameters* and *subscription parameters*), as opposed to *authentication parameters* that are signed as part of the JWT. Set to `true` only if you've verified the query is safe. See [Using Parameters](/sync/streams/parameters). |
## Basic Examples
There are two independent concepts to understand:
* *What* data the stream returns. For example:
* *Global data*: No parameters. Same data for all users (e.g. reference tables like categories).
* *Filtered data*: Filters the data by a parameter value. This can make use of *auth parameters* from the JWT token (such as the user ID or other JWT claims), *subscription parameters* (specified by the client when it subscribes to a stream at any time), or *connection parameters* (specified at connection). Different users will get different sets of data based on the parameters. See [Using Parameters](/sync/streams/parameters) for the full reference.
* *When* the client syncs the data
* *Auto-subscribe*: Client automatically subscribes on connect (`auto_subscribe: true`)
* *On-demand*: Client explicitly subscribes when needed (default behavior)
### Global Data
Data without parameters is "global" data, meaning the same data goes to all users/clients. This is useful for reference tables:
```yaml theme={null}
config:
  edition: 3
streams:
  # Same categories for everyone
  categories:
    query: SELECT * FROM categories
  # Same active products for everyone
  products:
    query: SELECT * FROM products WHERE active = true
```
Global data streams still require clients to subscribe explicitly unless you set `auto_subscribe: true`.
### Filtering Data by User
Use `auth.user_id()` or other [JWT claims](/sync/streams/parameters#auth-parameters) to return different data per user:
```yaml theme={null}
config:
  edition: 3
streams:
  # Each user gets their own lists
  my_lists:
    query: SELECT * FROM lists WHERE owner_id = auth.user_id()
  # Each user gets their own orders
  my_orders:
    query: SELECT * FROM orders WHERE user_id = auth.user_id()
```
### Filtering Data Based on Subscription Parameters
Use `subscription.parameter()` for data that clients subscribe to explicitly:
```yaml theme={null}
config:
  edition: 3
streams:
  # Sync todos for a specific list when the client subscribes with a list_id
  list_todos:
    query: |
      SELECT * FROM todos
      WHERE list_id = subscription.parameter('list_id')
        AND list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
```
```js theme={null}
// Client subscribes with the list they want to view
const sub = await db.syncStream('list_todos', { list_id: 'abc123' }).subscribe();
```
### Using Auto-Subscribe
Set `auto_subscribe: true` to sync data automatically when clients connect. This is useful for:
* Reference data that all users need, or that is needed on many screens in the app
* User data that should always be available offline
* Maintaining [Sync Rules](/sync/rules/overview) default behavior ("sync everything upfront") when migrating to Sync Streams
```yaml theme={null}
config:
  edition: 3
streams:
  # Global data, synced automatically
  categories:
    auto_subscribe: true
    query: SELECT * FROM categories
  # User-scoped data, synced automatically
  my_orders:
    auto_subscribe: true
    query: SELECT * FROM orders WHERE user_id = auth.user_id()
  # Parameterized data, subscribed on-demand (no auto_subscribe)
  order_items:
    query: |
      SELECT * FROM order_items
      WHERE order_id = subscription.parameter('order_id')
        AND order_id IN (SELECT id FROM orders WHERE user_id = auth.user_id())
```
## Client-Side Usage
Subscribe to streams from your client app:
```js theme={null}
const sub = await db.syncStream('list_todos', { list_id: 'abc123' })
  .subscribe({ ttl: 3600 });
// Wait for this subscription to have synced
await sub.waitForFirstSync();
// When the component needing the subscription is no longer active...
sub.unsubscribe();
```
**React hooks:**
```jsx theme={null}
const stream = useSyncStream({ name: 'list_todos', parameters: { list_id: 'abc123' } });
// Check download progress or subscription information
stream?.progress;
stream?.subscription.hasSynced;
```
The `useQuery` hook can wait for Sync Streams before running queries:
```jsx theme={null}
const { data } = useQuery(
  'SELECT * FROM todos WHERE list_id = ?',
  [listId],
  { streams: [{ name: 'list_todos', parameters: { list_id: listId }, waitForStream: true }] }
);
```
```dart theme={null}
final sub = await db
    .syncStream('list_todos', {'list_id': 'abc123'})
    .subscribe(ttl: const Duration(hours: 1));
// Wait for this subscription to have synced
await sub.waitForFirstSync();
// When the component needing the subscription is no longer active...
sub.unsubscribe();
```
```kotlin theme={null}
val sub = database.syncStream("list_todos", mapOf("list_id" to JsonParam.String("abc123")))
    .subscribe(ttl = 1.0.hours)
// Wait for this subscription to have synced
sub.waitForFirstSync()
// When the component needing the subscription is no longer active...
sub.unsubscribe()
```
```swift theme={null}
let sub = try await db.syncStream(name: "list_todos", params: ["list_id": JsonValue.string("abc123")])
    .subscribe(ttl: 60 * 60, priority: nil) // 1 hour
// Wait for this subscription to have synced
try await sub.waitForFirstSync()
// When the component needing the subscription is no longer active...
try await sub.unsubscribe()
```
```csharp theme={null}
var sub = await db.SyncStream("list_todos", new() { ["list_id"] = "abc123" })
    .Subscribe(new SyncStreamSubscribeOptions { Ttl = TimeSpan.FromHours(1) });
// Wait for this subscription to have synced
await sub.WaitForFirstSync();
// When the component needing the subscription is no longer active...
sub.Unsubscribe();
```
### TTL (Time-To-Live)
Each subscription has a `ttl` that keeps data cached after unsubscribing. This enables warm cache behavior — when users return to a screen and you re-subscribe to relevant streams, data is already available on the client. Default TTL is 24 hours. See [Client-Side Usage](/sync/streams/client-usage) for details.
```js theme={null}
// Set TTL in seconds when subscribing
const sub = await db.syncStream('todos', { list_id: 'abc' })
  .subscribe({ ttl: 3600 }); // Cache for 1 hour after unsubscribe
```
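Conceptually, subscribed data is never evicted while a subscription is active, and becomes evictable only once the TTL has elapsed after unsubscribing. A toy model of that lifecycle (illustrative only; the real eviction logic lives inside the client SDK):

```js theme={null}
// Toy model of TTL-based cache retention for one stream subscription.
// Illustrative only; not the SDK's actual implementation.
class StreamCacheEntry {
  constructor(ttlSeconds) {
    this.ttlSeconds = ttlSeconds;
    this.active = true;       // currently subscribed
    this.unsubscribedAt = null;
  }
  unsubscribe(nowSeconds) {
    this.active = false;
    this.unsubscribedAt = nowSeconds;
  }
  // Data may be evicted only when inactive and past its TTL.
  isEvictable(nowSeconds) {
    if (this.active) return false;
    return nowSeconds - this.unsubscribedAt >= this.ttlSeconds;
  }
}

const entry = new StreamCacheEntry(3600); // 1 hour TTL
entry.unsubscribe(1000);
console.log(entry.isEvictable(1000 + 1800)); // false - still within TTL (warm cache)
console.log(entry.isEvictable(1000 + 3600)); // true - TTL elapsed, data may be dropped
```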
## Developer Notes
* **SQL Syntax**: Stream queries use a SQL-like syntax with `SELECT` statements. You can use subqueries, `INNER JOIN`, and [CTEs](/sync/streams/ctes) for filtering. `GROUP BY`, `ORDER BY`, and `LIMIT` are not supported. See [Writing Queries](/sync/streams/queries) for details on joins, multiple queries per stream, and other features.
* **Type Conversion**: Data types from your source database (Postgres, MongoDB, MySQL, SQL Server) are converted when synced to the client's SQLite database. SQLite has a limited type system, so most types become `text` and you may need to parse or cast values in your app code. See [Type Mapping](/sync/types) for details on how each type is handled.
* **Primary Key**: PowerSync requires every synced table to have a primary key column named `id` of type `text`. If your backend uses a different column name or type, you'll need to map it. For MongoDB, collections use `_id` as the ID field; you must alias it in your stream queries (e.g. `SELECT _id as id, * FROM your_collection`).
* **Case Sensitivity**: To avoid issues across different databases and platforms, use **lowercase identifiers** for all table and column names in your Sync Streams. If your backend uses mixed case, see [Case Sensitivity](/sync/advanced/case-sensitivity) for how to handle it.
* **Bucket Limits**: PowerSync uses internal partitions called [buckets](/architecture/powersync-service#bucket-system) to efficiently sync data. There's a default [limit of 1,000 buckets](/resources/performance-and-limits) per user/client. Each unique combination of a stream and its parameters creates one bucket, so keep this in mind when designing streams that use subscription parameters. You can use [multiple queries per stream](/sync/streams/queries#multiple-queries-per-stream) to reduce bucket count.
* **Troubleshooting**: If data isn't syncing as expected, the [Sync Diagnostics Client](/tools/diagnostics-client) helps you inspect what's happening for a specific user — you can see which buckets the user has and what data is being synced.
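The bucket-limit note above implies a simple estimate: count unique (stream, parameters) combinations across your subscriptions. A back-of-the-envelope sketch (`estimateBuckets` is a hypothetical helper; the real accounting happens in the PowerSync Service):

```js theme={null}
// Rough bucket estimate: one bucket per unique (stream, parameters) pair.
// Illustrative only; actual bucket accounting is done by the PowerSync Service.
function estimateBuckets(subscriptions) {
  const unique = new Set(
    subscriptions.map((s) => s.stream + ':' + JSON.stringify(s.params ?? {}))
  );
  return unique.size;
}

const subs = [
  { stream: 'categories' },                            // global, no parameters
  { stream: 'list_todos', params: { list_id: 'a' } },
  { stream: 'list_todos', params: { list_id: 'b' } },
  { stream: 'list_todos', params: { list_id: 'a' } },  // duplicate - same bucket
];
console.log(estimateBuckets(subs)); // 3 - well under the default 1,000-bucket limit
```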
## Examples & Demos
See [Examples & Demos](/sync/streams/examples) for working demo apps and complete application patterns.
## Migrating from Legacy Sync Rules
If you have an existing project using legacy Sync Rules, see the [Migration Guide](/sync/streams/migration) for step-by-step instructions, syntax changes, and examples.
# Using Parameters
Source: https://docs.powersync.com/sync/streams/parameters
Filter data dynamically using subscription, auth, and connection parameters in your stream queries.
Parameters let you filter data dynamically based on who the user is and what they need to see. Sync Streams support three types of parameters, each serving a different purpose.
## Subscription Parameters
Passed from the client when it subscribes to a stream. This is the most common way to request specific data on demand.
For example, if a user opens two different to-do lists, the client subscribes to the same `list_todos` stream twice, once for each list:
```yaml theme={null}
streams:
  list_todos:
    query: SELECT * FROM todos WHERE list_id = subscription.parameter('list_id')
```
```js theme={null}
// User opens List A - subscribe with list_id = 'list-a'
const subA = await db.syncStream('list_todos', { list_id: 'list-a' }).subscribe();
// User also opens List B - subscribe again with list_id = 'list-b'
const subB = await db.syncStream('list_todos', { list_id: 'list-b' }).subscribe();
// Both lists' todos are now syncing independently
```
| Function | Description |
| ------------------------------- | ------------------------------------------- |
| `subscription.parameter('key')` | Get a single parameter by name |
| `subscription.parameters()` | All parameters as JSON (for dynamic access) |
## Auth Parameters
Claims from the user's JWT token. Use these to filter data based on who the user is. These values are secure and tamper-proof since they are signed as part of the JWT by your authentication system.
```yaml theme={null}
streams:
  my_lists:
    query: SELECT * FROM lists WHERE owner_id = auth.user_id()
  # Access custom JWT claims
  org_data:
    query: SELECT * FROM projects WHERE org_id = auth.parameter('org_id')
```
| Function | Description |
| ----------------------- | ----------------------------------------------- |
| `auth.user_id()` | The user's ID (same as `auth.parameter('sub')`) |
| `auth.parameter('key')` | Get a specific JWT claim |
| `auth.parameters()` | Full JWT payload as JSON |
## Connection Parameters
Specified "globally" at the connection level, before any streams are subscribed. These are the equivalent of [Client Parameters](/sync/rules/client-parameters) in Sync Rules. Use them when you need a value that applies across all streams for the session.
```yaml theme={null}
streams:
  app_config:
    query: SELECT * FROM config WHERE environment = connection.parameter('environment')
```
| Function | Description |
| ----------------------------- | --------------------------------- |
| `connection.parameter('key')` | Get a single connection parameter |
| `connection.parameters()` | All connection parameters as JSON |
Changing connection parameters requires reconnecting. For values that change during a session, use subscription parameters instead.
See [Client Usage](/sync/streams/client-usage#connection-parameters) for details on specifying connection parameters in your client-side code.
## When to Use Each
**Subscription parameters** are the most flexible option. Use them when the client needs to choose what data to sync at runtime. Each subscription operates independently, so a user can have multiple subscriptions to the same stream with different parameters.
**Auth parameters** are the most secure option. Use them when you need to filter data based on who the user is. Since these values come from the signed JWT, they can't be tampered with by the client.
**Connection parameters** apply globally across all streams for the session. Use them for values that rarely change, like environment flags or feature toggles. Keep in mind that changing them requires reconnecting.
For most use cases, subscription parameters are the best choice. They're more flexible and work well with modern app patterns like multiple tabs.
## Expanding JSON Arrays
If a user's JWT contains an array of IDs (e.g., `{ "project_ids": ["proj-1", "proj-2", "proj-3"] }`), you can expand it to sync all matching records. The example below syncs all three projects to the user/client:
**Shorthand syntax** (recommended):
```yaml theme={null}
streams:
  # User's JWT contains: { "project_ids": ["proj-1", "proj-2", "proj-3"] }
  my_projects:
    auto_subscribe: true
    query: SELECT * FROM projects WHERE id IN auth.parameter('project_ids')
```
**JOIN syntax** with table-valued function:
```yaml theme={null}
streams:
  my_projects:
    auto_subscribe: true
    query: |
      SELECT * FROM projects
      JOIN json_each(auth.parameter('project_ids')) AS allowed ON projects.id = allowed.value
```
**Subquery syntax**:
```yaml theme={null}
streams:
  my_projects:
    auto_subscribe: true
    query: |
      SELECT * FROM projects
      WHERE id IN (SELECT value FROM json_each(auth.parameter('project_ids')))
```
All three sync the same data: projects whose IDs are in the user's JWT `project_ids` claim.
`json_each()` works with auth and connection parameters. It can also be used with columns from joined tables in some cases (e.g. `SELECT * FROM lists WHERE id IN (SELECT lists.value FROM access_control a, json_each(a.allowed_lists) as lists WHERE a.user = auth.user_id())`).
## Combining Parameters
You can combine different parameter types in a single query. A common pattern is using subscription parameters for on-demand data while using auth parameters for authorization:
```yaml theme={null}
streams:
  # User subscribes with a list_id, but can only see lists they have access to
  list_items:
    query: |
      SELECT * FROM items
      WHERE list_id = subscription.parameter('list_id')
        AND list_id IN (
          SELECT id FROM lists
          WHERE owner_id = auth.user_id()
             OR id IN (SELECT list_id FROM list_shares WHERE shared_with = auth.user_id())
        )
```
See [Writing Queries](/sync/streams/queries) for more filtering techniques using subqueries and joins.
# Writing Queries
Source: https://docs.powersync.com/sync/streams/queries
Learn query syntax for filtering with subqueries and joins, selecting columns, and transforming data types.
This page covers query syntax for Sync Streams: filtering, selecting columns, and transforming data.
For parameter usage, see [Using Parameters](/sync/streams/parameters). For real-world patterns, see [Examples, Patterns & Demos](/sync/streams/examples).
## Basic Queries
The simplest stream query syncs all rows from a table:
```yaml theme={null}
streams:
  categories:
    auto_subscribe: true
    query: SELECT * FROM categories
```
Add a `WHERE` clause to filter:
```yaml theme={null}
streams:
  active_products:
    auto_subscribe: true
    query: SELECT * FROM products WHERE active = true
```
## Filtering by User
Most apps need to sync different data to different users. Use `auth.user_id()` to filter by the authenticated user:
```yaml theme={null}
streams:
  my_lists:
    auto_subscribe: true
    query: SELECT * FROM lists WHERE owner_id = auth.user_id()
```
This syncs only the lists owned by the current user. The user ID comes from the `sub` claim in their JWT token. See [Auth Parameters](/sync/streams/parameters#auth-parameters).
## On-Demand Data with Subscription Parameters
For data that should only sync when the user navigates to a specific screen, use subscription parameters. The client passes these when subscribing to a stream:
```yaml theme={null}
streams:
  list_todos:
    query: SELECT * FROM todos WHERE list_id = subscription.parameter('list_id')
```
**Authorization:** This example filters only by `subscription.parameter('list_id')`. Any client can pass any `list_id`, so a user could access another user's todos. For production, add an authorization check so the user can only see lists they own or have access to — for example, add `AND list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id() OR id IN (SELECT list_id FROM list_shares WHERE shared_with = auth.user_id()))`. See [Combining Parameters with Subqueries](#combining-parameters-with-subqueries) below.
```js theme={null}
// When user opens a specific list, subscribe with that list's ID
const sub = await db.syncStream('list_todos', { list_id: 'abc123' }).subscribe();
```
See [Using Parameters](/sync/streams/parameters) for the full reference on parameters.
## Selecting Columns
Select specific columns instead of `*` to reduce data transfer:
```yaml theme={null}
streams:
  users:
    query: SELECT id, name, email, avatar_url FROM users WHERE org_id = auth.parameter('org_id')
```
### Renaming Columns
Use `AS` to rename columns in the synced data:
```yaml theme={null}
streams:
  todos:
    query: SELECT id, name, created_timestamp AS created_at FROM todos
```
### Type Transformations
PowerSync syncs data to SQLite on the client. You may need to transform types for compatibility:
```yaml theme={null}
streams:
  items:
    query: |
      SELECT
        id,
        CAST(item_number AS TEXT) AS item_number,        -- Cast to text
        metadata_json ->> 'description' AS description,  -- Extract field from JSON
        base64(thumbnail) AS thumbnail_base64,           -- Binary to base64
        unixepoch(created_at) AS created_at              -- DateTime to epoch
      FROM items
```
See [Type Mapping](/sync/types) for details on how each database type is handled.
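On the client, those transformed values arrive in SQLite-friendly form and often need converting back into richer types in app code. A hedged sketch (column names follow the example above; `parseItemRow` is a hypothetical app-side helper, not an SDK API):

```js theme={null}
// Convert a synced SQLite row back into richer JS types.
// Column names follow the transformation example above; illustrative only.
function parseItemRow(row) {
  return {
    id: row.id,
    itemNumber: row.item_number,                // already cast to text server-side
    description: row.description,               // field extracted from JSON
    createdAt: new Date(row.created_at * 1000), // epoch seconds -> Date
  };
}

const parsed = parseItemRow({
  id: '1',
  item_number: '42',
  description: 'A sample item',
  created_at: 1700000000,
});
console.log(parsed.createdAt.toISOString()); // 2023-11-14T22:13:20.000Z
```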
## Using Subqueries
Subqueries let you filter based on related tables. Use `IN (SELECT ...)` to sync data where a foreign key matches rows in another table:
```yaml theme={null}
streams:
  # Sync comments for issues owned by the current user
  my_issue_comments:
    query: |
      SELECT * FROM comments
      WHERE issue_id IN (SELECT id FROM issues WHERE owner_id = auth.user_id())
```
### Nested Subqueries
Subqueries can be nested to traverse multiple levels of relationships. This is useful for normalized database schemas:
```yaml theme={null}
streams:
  # Sync tasks for projects in organizations the user belongs to
  org_tasks:
    query: |
      SELECT * FROM tasks
      WHERE project_id IN (
        SELECT id FROM projects WHERE org_id IN (
          SELECT org_id FROM org_members WHERE user_id = auth.user_id()
        )
      )
```
### Combining Parameters with Subqueries
A common pattern is using subscription parameters to select what data to sync, while using subqueries for authorization:
```yaml theme={null}
streams:
  # User subscribes with a list_id, but can only see lists they own or that are shared with them
  list_items:
    query: |
      SELECT * FROM items
      WHERE list_id = subscription.parameter('list_id')
        AND list_id IN (
          SELECT id FROM lists
          WHERE owner_id = auth.user_id()
             OR id IN (SELECT list_id FROM list_shares WHERE shared_with = auth.user_id())
        )
```
## Using Joins
For complex queries that traverse multiple tables, join syntax is often easier to read than nested subqueries. You can use `JOIN` or `INNER JOIN` (they're equivalent). For the exact supported JOIN syntax and restrictions, see [Supported SQL — JOIN syntax](/sync/supported-sql#join-syntax).
Consider this query:
```yaml theme={null}
streams:
  # Nested subquery version
  user_comments:
    query: |
      SELECT * FROM comments WHERE issue_id IN (
        SELECT id FROM issues WHERE project_id IN (
          SELECT project_id FROM project_members WHERE user_id = auth.user_id()
        )
      )
```
The same query using joins:
```yaml theme={null}
streams:
  # Join version - same result, easier to read
  user_comments:
    query: |
      SELECT comments.* FROM comments
      INNER JOIN issues ON comments.issue_id = issues.id
      INNER JOIN project_members ON issues.project_id = project_members.project_id
      WHERE project_members.user_id = auth.user_id()
```
Both queries sync the same data. Choose whichever style is clearer for your use case.
### Multiple Joins
You can chain multiple joins to traverse complex relationships. This example joins four tables to sync checkpoints for assignments the user has access to.
```yaml theme={null}
streams:
  my_checkpoints:
    query: |
      SELECT checkpoint.* FROM user_assignment_scope uas
      JOIN assignment a ON a.id = uas.assignment_id
      JOIN assignment_checkpoint ac ON ac.assignment_id = a.id
      JOIN checkpoint ON checkpoint.id = ac.checkpoint_id
      WHERE uas.user_id = auth.user_id()
        AND a.active = true
```
### Self-Joins
You can join the same table multiple times; aliases are required to distinguish each occurrence (e.g. `gm1` and `gm2` for the two `group_memberships` joins). This is useful for finding related records through a shared relationship — for example, finding all users who share a group with the current user:
```yaml theme={null}
streams:
  users_in_my_groups:
    query: |
      SELECT users.* FROM users
      JOIN group_memberships gm1 ON users.id = gm1.user_id
      JOIN group_memberships gm2 ON gm1.group_id = gm2.group_id
      WHERE gm2.user_id = auth.user_id()
```
### Join Limitations
When writing stream queries with JOINs, keep in mind: use only `JOIN` or `INNER JOIN`; select columns from a single table (e.g. `comments.*`); and use simple equality conditions (`table1.column = table2.column`). For the full list of supported JOIN syntax and invalid examples, see [Supported SQL — JOIN syntax](/sync/supported-sql#join-syntax).
## Multiple Queries per Stream
You can group multiple queries into a single stream using `queries` instead of `query`. This is useful when several tables share the same access pattern:
```yaml theme={null}
streams:
  user_data:
    auto_subscribe: true
    queries:
      - SELECT * FROM notes WHERE owner_id = auth.user_id()
      - SELECT * FROM settings WHERE user_id = auth.user_id()
      - SELECT * FROM preferences WHERE user_id = auth.user_id()
```
You subscribe once to the stream, and PowerSync merges the data from all queries. This is more efficient than defining separate streams, each of which requires its own subscription.
### When to Use Multiple Queries
Use `queries` when:
* Multiple tables have the same filtering logic (e.g., all filtered by `user_id`)
* You want the client to subscribe once while PowerSync merges the data from all queries, reducing bucket count (see Developer Notes)
* Related data should sync together
```yaml theme={null}
streams:
  # All project-related data syncs together
  project_details:
    queries:
      - SELECT * FROM tasks WHERE project_id = subscription.parameter('project_id')
      - SELECT * FROM files WHERE project_id = subscription.parameter('project_id')
      - SELECT * FROM comments WHERE project_id = subscription.parameter('project_id')
```
### Compatibility Requirements
For multiple queries in one stream to be valid, they must use compatible parameter inputs. In practice, this means they should filter on the same parameters in the same way:
```yaml theme={null}
# Valid - all queries use the same parameter pattern
streams:
  user_content:
    queries:
      - SELECT * FROM notes WHERE user_id = auth.user_id()
      - SELECT * FROM bookmarks WHERE user_id = auth.user_id()

# Valid - all queries use the same subscription parameter
streams:
  project_data:
    queries:
      - SELECT * FROM tasks WHERE project_id = subscription.parameter('project_id')
      - SELECT * FROM files WHERE project_id = subscription.parameter('project_id')
```
### Combining with CTEs
Multiple queries combine well with [Common Table Expressions (CTEs)](/sync/streams/ctes): define the filtering logic once in a CTE, reuse it across queries, and keep all results in one stream so clients manage a single subscription instead of many:
```yaml theme={null}
streams:
  org_data:
    auto_subscribe: true
    with:
      user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
    queries:
      - SELECT * FROM projects WHERE org_id IN user_orgs
      - SELECT * FROM repositories WHERE org_id IN user_orgs
      - SELECT * FROM team_members WHERE org_id IN user_orgs
```
## Complete Example
A full configuration combining multiple techniques:
```yaml theme={null}
config:
  edition: 3

streams:
  # Global reference data (no parameters, auto-subscribed)
  categories:
    auto_subscribe: true
    query: SELECT id, name, CAST(sort_order AS TEXT) AS sort_order FROM categories

  # User's own items with transformed fields (auth parameter, auto-subscribed)
  my_items:
    auto_subscribe: true
    query: |
      SELECT
        id,
        name,
        metadata ->> 'status' AS status,
        unixepoch(created_at) AS created_at,
        base64(thumbnail) AS thumbnail
      FROM items
      WHERE owner_id = auth.user_id()

  # On-demand item details (subscription parameter with auth check)
  item_comments:
    query: |
      SELECT * FROM comments
      WHERE item_id = subscription.parameter('item_id')
        AND item_id IN (SELECT id FROM items WHERE owner_id = auth.user_id())
```
See [Examples & Patterns](/sync/streams/examples) for real-world examples like multi-tenant apps and role-based access, and [Supported SQL](/sync/supported-sql) for all available operators and functions.
# Supported SQL
Source: https://docs.powersync.com/sync/supported-sql
SQL syntax, operators, and functions supported in Sync Streams and Sync Rules queries.
This page documents the SQL supported in [Sync Streams](/sync/streams/overview) and [Sync Rules (legacy)](/sync/rules/overview).
Some fundamental restrictions on the usage of SQL expressions are:
1. They must be deterministic — no random or time-based functions.
2. No external state can be used.
3. They must operate on data available within a single row/document. For example, no aggregation functions are allowed.
For parameter-specific WHERE restrictions, see [Filtering: WHERE Clause](#filtering-where-clause).
## Query Syntax
The supported SQL is based on a subset of standard SQL syntax. Sync Streams support more SQL features than the legacy Sync Rules.
**Sync Streams:**
* `SELECT` with column selection and [`WHERE` filtering](#filtering-where-clause)
* [Subqueries](/sync/streams/queries#using-subqueries) with `IN (SELECT ...)` and nested subqueries
* [`INNER JOIN`](#join-syntax) (selected columns must come from a single table)
* [Common Table Expressions (CTEs)](#cte-and-with-syntax) via the `with:` block
* Multiple queries per stream via `queries:`
* Table-valued functions such as `json_each()` for [expanding arrays](/sync/streams/parameters#expanding-json-arrays)
* `BETWEEN` and `CASE` expressions
* A limited set of [operators](#operators) and [functions](#functions)
**Not supported**: aggregation, sorting, or set operations (`GROUP BY`, `ORDER BY`, `LIMIT`, `UNION`, etc.). See [Writing Queries](/sync/streams/queries) for details.
**Sync Rules (legacy):**
* Simple `SELECT` with column selection
* `WHERE` filtering on parameters (see [Filtering: WHERE Clause](#filtering-where-clause))
* A limited set of [operators](#operators) and [functions](#functions)
**Not supported**: subqueries, JOINs, CTEs, aggregation, sorting, or set operations (`GROUP BY`, `ORDER BY`, `LIMIT`, `UNION`, etc.).
## Filtering: WHERE Clause
Sync queries support a subset of SQL `WHERE` syntax. Allowed operators and combinations differ between Sync Streams and Sync Rules, and are more restrictive than standard SQL.
**Sync Streams:**
**`=` and `IS NULL`** — Compare a row column to a static value, a parameter, or another column:
```sql theme={null}
-- Static value
WHERE status = 'active'
WHERE deleted_at IS NULL
-- Parameter (auth, connection, or subscription)
WHERE owner_id = auth.user_id()
WHERE region = connection.parameter('region')
```
**`AND`** — Fully supported. You can mix parameter comparisons, subqueries, and row-value conditions in the same clause.
```sql theme={null}
-- Two parameter conditions
WHERE owner_id = auth.user_id()
AND org_id = auth.parameter('org_id')
-- Parameter condition + row-value condition
WHERE owner_id = auth.user_id()
AND status = 'active'
-- Parameter condition + subquery
WHERE list_id = subscription.parameter('list_id')
AND list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
```
**`OR`** — Supported, including `OR` nested inside `AND`. PowerSync rewrites combinations like `A AND (B OR C)` into separate branches before evaluating. Each `OR` branch must be a valid filter on its own; you cannot have a branch that only makes sense when combined with the other.
```sql theme={null}
-- Top-level OR
WHERE owner_id = auth.user_id()
OR shared_with = auth.user_id()
-- OR nested inside AND
WHERE status = 'active'
AND (owner_id = auth.user_id() OR shared_with = auth.user_id())
```
**`NOT`** — Supported for simple conditions on row values. `NOT IN` with a literal set of values is supported: use a JSON array string (e.g. `'["draft", "hidden"]'`), or the `ARRAY['draft', 'hidden']` and `ROW('draft', 'hidden')` forms. You cannot negate a subquery or a parameter array expansion.
```sql theme={null}
-- Simple row-value conditions
WHERE status != 'archived'
WHERE deleted_at IS NOT NULL
-- NOT IN with JSON array string (any of these forms)
WHERE category NOT IN '["draft", "hidden"]'
WHERE category NOT IN ARRAY['draft', 'hidden']
WHERE category NOT IN ROW('draft', 'hidden')
-- Not supported: negating a subquery
-- WHERE issue_id NOT IN (SELECT id FROM issues WHERE owner_id = auth.user_id())
-- Not supported: negating a parameter array
-- WHERE id NOT IN subscription.parameter('excluded_ids')
```
**Sync Rules (legacy):**
**`=` and `IS NULL`** — Compare a row column to a static value or a bucket parameter:
```sql theme={null}
-- Static value
WHERE status = 'active'
WHERE deleted_at IS NULL
-- Bucket parameter
WHERE owner_id = bucket.user_id
```
**`AND`** — Supported in both Parameter Queries and Data Queries. In Parameter Queries, each condition may match a different parameter. However, you cannot combine two `IN` expressions on parameters in the same `AND`; split them into separate Parameter Queries instead.
```sql theme={null}
-- Supported: parameter condition + row-value condition
WHERE users.id = request.user_id()
AND users.is_admin = true
-- Not supported: two IN expressions on parameters in the same AND
-- WHERE bucket.list_id IN lists.allowed_ids
-- AND bucket.org_id IN lists.allowed_org_ids
```
**`OR`** — Supported when both sides of the `OR` reference the exact same set of parameters. If the two sides use different parameters, use separate parameter queries instead.
```sql theme={null}
-- Supported: both sides reference the same parameter
WHERE lists.owner_id = request.user_id()
OR lists.shared_with = request.user_id()
-- Not supported: sides reference different parameters
-- WHERE lists.owner_id = request.user_id()
-- OR lists.org_id = bucket.org_id
```
**`NOT`** — Supported for simple row-value conditions. Not supported on parameter-matching expressions.
```sql theme={null}
-- Supported
WHERE status != 'archived'
WHERE deleted_at IS NOT NULL
WHERE NOT users.is_admin = true
-- Not supported in parameter queries
-- WHERE NOT users.id = request.user_id()
```
## Operators
Operators can be used in `WHERE` clauses and in `SELECT` expressions. When filtering on parameters (e.g. `auth.user_id()`, `subscription.parameter('id')`), some combinations are restricted — see [Filtering: WHERE Clause](#filtering-where-clause).
* **Comparison:** `=`, `!=`, `<`, `>`, `<=`, `>=` — If either side is `null`, the result is `null`.
* **Null:** `IS NULL`, `IS NOT NULL`
* **Logical:** `AND`, `OR`, `NOT` — See [Filtering: WHERE Clause](#filtering-where-clause) for restrictions when filtering on parameters.
* **Mathematical:** `+`, `-`, `*`, `/`
* `||` — Joins two text values together.
* `json -> 'path'` — Returns the value at the path as JSON text.
* `json ->> 'path'` — Returns the value at the path, extracted as a plain SQLite value.
* **Sync Streams:** `left IN right` — `left` can be a row column and `right` a parameter array (e.g. `id IN subscription.parameter('ids')`), or `left` a parameter and `right` a row JSON array column. Also supports subqueries: `id IN (SELECT ...)`.
* **Sync Rules:** Returns true if `left` is in the `right` JSON array. In Data Queries, `left` must be a row column and `right` cannot be a bucket parameter. In Parameter Queries, either side may be a parameter.
* `x BETWEEN a AND b`, `x NOT BETWEEN a AND b` — True if `x` is in the inclusive range `[a, b]`. Usable in `WHERE` or as a `SELECT` expression. If any operand is `null`, the result is `null`.
Example: `WHERE price BETWEEN 10 AND 100`
Supported in Sync Streams only. Not available in Sync Rules.
* `&&` — True if the JSON array in `left` and the set `right` share at least one value. Use it when the row stores an array (e.g. a `tagged_users` column). `left` must be a row column (JSON array); `right` must be a subquery or parameter array.
Example: `WHERE tagged_users && (SELECT id FROM org_members WHERE org_id = auth.parameter('org_id'))`
Use `IN` when the row has a single value to check against a set; use `&&` when the row has an array and you want to match any element.
Supported in Sync Streams only. Not available in Sync Rules.
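As an illustrative sketch combining the Sync Streams-only operators above (the `products` and `posts` tables and their columns are hypothetical; the `&&` condition mirrors the example given earlier):

```yaml theme={null}
streams:
  # Hypothetical: sync mid-priced products using BETWEEN
  mid_priced_products:
    auto_subscribe: true
    query: SELECT * FROM products WHERE price BETWEEN 10 AND 100

  # Hypothetical: posts.tagged_users is a JSON array of user IDs;
  # && matches posts that tag anyone in the user's organization
  posts_tagging_my_org:
    query: |
      SELECT * FROM posts
      WHERE tagged_users && (SELECT id FROM org_members WHERE org_id = auth.parameter('org_id'))
```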
## Functions
Functions can be used to transform columns/fields before being synced to a client. They operate on row data or parameters. Type names below (`text`, `integer`, `real`, `blob`, `null`) refer to [SQLite storage classes](https://www.sqlite.org/datatype3.html).
Most functions are from [SQLite built-in functions](https://www.sqlite.org/lang_corefunc.html) and [SQLite JSON functions](https://www.sqlite.org/json1.html).
* **[upper(text)](https://www.sqlite.org/lang_corefunc.html#upper)** — Convert text to upper case.
* **[lower(text)](https://www.sqlite.org/lang_corefunc.html#lower)** — Convert text to lower case.
* **[substring(text, start, length)](https://www.sqlite.org/lang_corefunc.html#substr)** — Extracts a portion of a string based on specified start index and length. Start index is 1-based. Example: `substring(created_at, 1, 10)` returns the date portion of the timestamp.
* **[hex(data)](https://www.sqlite.org/lang_corefunc.html#hex)** — Convert blob or text data to hexadecimal text.
* **base64(data)** — Convert blob or text data to base64 text.
* **[length(data)](https://www.sqlite.org/lang_corefunc.html#length)** — For text, return the number of characters. For blob, return the number of bytes. For null, return null. For integer and real, convert to text and return the number of characters.
* `CAST(x AS type)` or `x :: type` — Cast to `text`, `numeric`, `integer`, `real`, or `blob`. See [Type mapping](/sync/types) and [SQLite types](https://www.sqlite.org/datatype3.html).
* **[typeof(data)](https://www.sqlite.org/lang_corefunc.html#typeof)** — Returns `text`, `integer`, `real`, `blob`, or `null`.
* **[json\_each(data)](https://www.sqlite.org/json1.html#jeach)** — Expands a JSON array into rows.
* **Sync Streams:** Works with auth and connection parameters (e.g. `JOIN json_each(auth.parameter('ids')) AS t` or `WHERE id IN (SELECT value FROM json_each(auth.parameter('ids')))`). Can also be used with columns from joined tables in some cases (e.g. `SELECT * FROM lists WHERE id IN (SELECT lists.value FROM access_control a, json_each(a.allowed_lists) as lists WHERE a.user = auth.user_id())`). See [Expanding JSON arrays](/sync/streams/parameters#expanding-json-arrays).
* **Sync Rules:** Expands a JSON array or object from a request or token parameter into a set of parameter rows. Example: `SELECT value AS project_id FROM json_each(request.jwt() -> 'project_ids')`.
* **[json\_extract(data, path)](https://www.sqlite.org/json1.html#jex)** — Same as `->>` operator, but the path must start with `$.`
* **[json\_array\_length(data)](https://www.sqlite.org/json1.html#jarraylen)** — Given a JSON array (as text), returns the length of the array. If data is null, returns null. If the value is not a JSON array, returns 0.
* **[json\_valid(data)](https://www.sqlite.org/json1.html#jvalid)** — Returns 1 if the data can be parsed as JSON, 0 otherwise.
* **json\_keys(data)** — Returns the set of keys of a JSON object as a JSON array. Example: `SELECT * FROM items WHERE bucket.user_id IN json_keys(permissions_json)`.
* **[ifnull(x, y)](https://www.sqlite.org/lang_corefunc.html#ifnull)** — Returns x if non-null, otherwise returns y.
* **[iif(x, y, z)](https://www.sqlite.org/lang_corefunc.html#iif)** — Returns y if x is true, otherwise returns z.
* **[unixepoch(time-value, \[modifier\])](https://www.sqlite.org/lang_datefunc.html)** — Returns a time-value as Unix timestamp. If modifier is "subsec", the result is a floating point number, with milliseconds included in the fraction. The time-value argument is required — this function cannot be used to get the current time.
* **[datetime(time-value, \[modifier\])](https://www.sqlite.org/lang_datefunc.html)** — Returns a time-value as a date and time string, in the format YYYY-MM-DD HH:MM:SS. If the specifier is "subsec", milliseconds are also included. If the modifier is "unixepoch", the argument is interpreted as a Unix timestamp. Both modifiers can be included: `datetime(timestamp, 'unixepoch', 'subsec')`. The time-value argument is required — this function cannot be used to get the current time.
* **[uuid\_blob(id)](https://sqlite.org/src/file/ext/misc/uuid.c)** — Convert a UUID string to bytes.
* **[ST\_AsGeoJSON(geometry)](/client-sdks/advanced/gis-data-postgis)** — Convert [PostGIS](/client-sdks/advanced/gis-data-postgis) (in Postgres) geometry from WKB to GeoJSON. Combine with JSON operators to extract specific fields.
* **[ST\_AsText(geometry)](/client-sdks/advanced/gis-data-postgis)** — Convert [PostGIS](/client-sdks/advanced/gis-data-postgis) (in Postgres) geometry from WKB to Well-Known Text (WKT).
* **[ST\_X(point)](/client-sdks/advanced/gis-data-postgis)** — Get the X coordinate of a [PostGIS](/client-sdks/advanced/gis-data-postgis) point (in Postgres).
* **[ST\_Y(point)](/client-sdks/advanced/gis-data-postgis)** — Get the Y coordinate of a [PostGIS](/client-sdks/advanced/gis-data-postgis) point (in Postgres).
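As a combined sketch of several of these functions in one stream query (the `stores` table and its columns are hypothetical; each function is taken from the list above):

```yaml theme={null}
streams:
  # Hypothetical schema: stores has created_at (timestamp),
  # logo (bytea) and location (PostGIS geometry) columns
  stores:
    auto_subscribe: true
    query: |
      SELECT
        id,
        upper(name) AS name,
        substring(created_at, 1, 10) AS created_date, -- Date portion only
        base64(logo) AS logo_base64,                  -- Binary to base64
        ST_AsGeoJSON(location) AS location_geojson    -- PostGIS geometry to GeoJSON
      FROM stores
```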
If you need an operator or function not listed, [contact us](/resources/contact-us) so we can consider adding it.
## JOIN Syntax
Supported in Sync Streams only. Not available in Sync Rules.
Sync Streams support a subset of join syntax. The following rules define what is valid:
* **Only inner joins:** Use `JOIN` or `INNER JOIN`. `LEFT`, `RIGHT`, and `OUTER` joins are not supported.
* **Single output table:** All selected columns must come from one table. Use `table.*` or list columns from that table (e.g. `comments.*`, `comments.id`). Selecting columns from multiple tables is invalid.
* **Simple join conditions:** Join conditions must be equality comparisons of the form `table1.column = table2.column`. Other comparisons (e.g. `a.x > b.y`) are not supported.
* **Table-valued functions in JOINs:** `json_each()` and similar functions work with auth or connection parameters (e.g. `json_each(auth.parameter('ids'))`). They can also be used with columns from joined tables in some cases.
```sql theme={null}
-- Valid: columns from one table
SELECT comments.* FROM comments INNER JOIN issues ON comments.issue_id = issues.id
-- Invalid: columns from multiple tables
SELECT comments.*, issues.title FROM comments JOIN issues ON comments.issue_id = issues.id
-- Invalid: non-equality join condition
SELECT * FROM a JOIN b ON a.x > b.y
```
For how to use JOINs in your stream queries (when to use them, patterns, and examples), see [Using Joins](/sync/streams/queries#using-joins).
## CTE and WITH Syntax
Supported in Sync Streams only. Not available in Sync Rules.
Common Table Expressions (CTEs) are defined in a `with:` block inside a stream. Each CTE is a name and a single `SELECT` query. The following rules apply:
* **CTEs cannot reference other CTEs.** Each CTE must be self-contained. To chain logic (e.g. orgs → projects), use nested subqueries in your stream query and reference only the CTE at the leaf level.
* **CTE names take precedence over table names.** If a CTE has the same name as a database table, the CTE is used. Use distinct names to avoid confusion.
* **Short-hand `IN cte_name`** works only when the CTE has exactly one column.
```yaml theme={null}
# Valid: CTE in a stream
streams:
projects:
with:
user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
query: SELECT * FROM projects WHERE org_id IN user_orgs
# Invalid: CTE referencing another CTE
# streams:
# my_stream:
# with:
# user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
# project_ids: SELECT id FROM projects WHERE org_id IN user_orgs # Error
```
For how to use CTEs, see [Common Table Expressions (CTEs)](/sync/streams/ctes).
## CASE Expressions
Supported in Sync Streams only. Not available in Sync Rules.
`CASE` is allowed anywhere an expression is allowed — in `SELECT` columns or `WHERE` clauses.
**Searched CASE** — Each `WHEN` is an independent boolean condition:
```sql theme={null}
CASE
  WHEN condition1 THEN result1
  WHEN condition2 THEN result2
  ELSE default_result
END
```
```sql theme={null}
-- Compute a label based on a column value
SELECT id,
  CASE
    WHEN score >= 90 THEN 'A'
    WHEN score >= 70 THEN 'B'
    ELSE 'C'
  END AS grade
FROM results
```
**Simple CASE** — Compares one expression against a list of values:
```sql theme={null}
CASE expression
  WHEN value1 THEN result1
  WHEN value2 THEN result2
  ELSE default_result
END
```
```sql theme={null}
-- Map numeric status codes to readable labels
SELECT id,
  CASE status
    WHEN 1 THEN 'pending'
    WHEN 2 THEN 'active'
    WHEN 3 THEN 'closed'
    ELSE 'unknown'
  END AS status_label
FROM tasks
```
`ELSE` is optional. If omitted and no `WHEN` matches, the result is `null`.
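Since `CASE` is allowed in `SELECT` columns, it can be used directly in a stream query. A sketch reusing the status mapping above (the `tasks` table, its columns, and the stream name are hypothetical):

```yaml theme={null}
streams:
  # Hypothetical: label tasks by numeric status while syncing
  my_tasks:
    query: |
      SELECT
        id,
        title,
        CASE status
          WHEN 1 THEN 'pending'
          WHEN 2 THEN 'active'
          ELSE 'unknown'
        END AS status_label
      FROM tasks
      WHERE owner_id = auth.user_id()
```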
# Types
Source: https://docs.powersync.com/sync/types
PowerSync's Sync Streams and Sync Rules use the [SQLite type system](https://www.sqlite.org/datatype3.html).
The supported client-side SQLite types are:
1. `null`
2. `integer`: a 64-bit signed integer
3. `real`: a 64-bit floating point number
4. `text`: A UTF-8 text string
5. `blob`: Binary data
## Postgres Type Mapping
Postgres types are mapped to SQLite types as follows:
| Postgres Data Type | PowerSync / SQLite Column Type | Notes |
| ---------------------- | ------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `text`, `varchar` | `text` | |
| `int2`, `int4`, `int8` | `integer` | |
| `numeric` / `decimal` | `text` | These types have arbitrary precision in Postgres, so can only be represented accurately as text in SQLite |
| `bool` | `integer` | `1` for true, `0` for false. There is no dedicated boolean data type in SQLite. |
| `float4`, `float8` | `real` | |
| `enum` | `text` | |
| `uuid` | `text` | |
| `timestamptz` | `text` | Format: `YYYY-MM-DD hh:mm:ss.sssZ`. This is compatible with ISO8601 and SQLite's functions. Precision matches the precision used in Postgres. `-infinity` becomes `0000-01-01 00:00:00Z` and `infinity` becomes `9999-12-31 23:59:59Z`. |
| `timestamp` | `text` | Format: `YYYY-MM-DD hh:mm:ss.sss`. In most cases, timestamptz should be used instead. `-infinity` becomes `0000-01-01 00:00:00` and `infinity` becomes `9999-12-31 23:59:59`. |
| `date`, `time` | `text` | |
| `json`, `jsonb` | `text` | `json` and `jsonb` values are treated as `text` values in their serialized representation. [JSON functions and operators](/sync/supported-sql#operators) operate directly on these `text` values. |
| `interval` | `text` | |
| `macaddr` | `text` | |
| `inet` | `text` | |
| `bytea` | `blob` | Cannot sync directly to client — convert to hex or base64 first. See [Operators & Functions](/sync/supported-sql). |
| `geometry` (PostGIS) | `text` | Hex string of the binary data. Use the [ST functions](/sync/supported-sql#functions) to convert to other formats |
| Arrays | `text` | JSON array. |
| `DOMAIN` types | `text` / depends | Depending on [compatibility options](/sync/advanced/compatibility#custom-postgres-types), inner type or raw wire representation (legacy). |
| Custom types | `text` | Depending on [compatibility options](/sync/advanced/compatibility#custom-postgres-types), JSON object or raw wire representation (legacy). |
| (Multi-)ranges | `text` | Depending on [compatibility options](/sync/advanced/compatibility#custom-postgres-types), JSON object (array for multi-ranges) or raw wire representation (legacy). |
Binary data can be accessed in the Sync Streams / Sync Rules, but cannot be used as [parameters](/sync/overview#how-it-works). To sync binary columns/fields to clients, those columns need to be converted to hex or base64 representation using the relevant [functions](/sync/supported-sql#functions).
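For example, a hypothetical `profiles` table with a `bytea` avatar column could be synced as base64 text using the documented `base64()` function:

```yaml theme={null}
streams:
  # Hypothetical: profiles.avatar is a Postgres bytea column
  my_profile:
    query: |
      SELECT id, display_name, base64(avatar) AS avatar_base64
      FROM profiles
      WHERE id = auth.user_id()
```

The client then decodes `avatar_base64` back to bytes when it needs the raw data.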
## MongoDB Type Mapping
MongoDB types are mapped to SQLite types as follows:
| BSON Type | PowerSync / SQLite Column Type | Notes |
| ------------------ | ------------------------------ | ------------------------------------------------------------------------------------------------------------------ |
| `String` | `text` | |
| `Int`, `Long` | `integer` | |
| `Double` | `real` | |
| `Decimal128` | `text` | |
| `Object` | `text` | Converted to a JSON string |
| `Array` | `text` | Converted to a JSON string |
| `ObjectId` | `text` | Lower-case hex string |
| `UUID` | `text` | Lower-case hex string |
| `Boolean` | `integer` | `1` for true, `0` for false. There is no dedicated boolean data type in SQLite. |
| `Date` | `text` | Format: `YYYY-MM-DD hh:mm:ss.sssZ` |
| `Null` | `null` | |
| `Binary` | `blob` | Cannot sync directly to client — convert to hex or base64 first. See [Operators & Functions](/sync/supported-sql). |
| Regular Expression | `text` | JSON text in the format `{"pattern":"...","options":"..."}` |
| `Timestamp` | `integer` | Converted to a 64-bit integer |
| `Undefined` | `null` | |
| `DBPointer` | `text` | JSON text in the format `{"collection":"...","oid":"...","db":"...","fields":...}` |
| `JavaScript` | `text` | JSON text in the format `{"code": "...", "scope": ...}` |
| `Symbol` | `text` | |
| `MinKey`, `MaxKey` | `null` | |
* Data is converted to a flat list of columns, one column per top-level field in the MongoDB document.
* Special BSON types are converted to plain SQLite alternatives. For example, `ObjectId`, `Date`, `UUID` are all converted to a plain `TEXT` column.
* Nested objects and arrays are converted to JSON, and [JSON functions and operators](/sync/supported-sql#operators) can be used to query them (in the Sync Streams / Sync Rules and/or on the client-side SQLite statements).
* Binary data nested in objects or arrays is not supported.
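Since nested objects arrive as JSON text, the documented JSON operators apply directly in queries. A sketch (the `customers` collection and its `address` field are hypothetical):

```yaml theme={null}
streams:
  # Hypothetical: customers documents contain a nested address object
  customers:
    query: |
      SELECT
        id,
        name,
        address ->> 'city' AS city -- Extract a field from the nested object
      FROM customers
```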
Binary data can be accessed in the Sync Streams / Sync Rules, but cannot be used as [parameters](/sync/overview#how-it-works). To sync binary columns/fields to clients, those columns need to be converted to hex or base64 representation using the relevant [functions](/sync/supported-sql#functions).
## MySQL (Beta) Type Mapping
MySQL types are mapped to SQLite types as follows:
| MySQL Data Type | PowerSync / SQLite Column Type | Notes |
| -------------------------------------------------------------- | ------------------------------ | ------------------------------------------------------------------------------------------- |
| `tinyint`, `smallint`, `mediumint`, `bigint`, `integer`, `int` | `integer` | |
| `numeric`, `decimal` | `text` | |
| `bool`, `boolean` | `integer` | `1` for true, `0` for false. There is no dedicated boolean data type in SQLite. |
| `float`, `double`, `real` | `real` | |
| `enum` | `text` | |
| `set` | `text` | Converted to JSON array |
| `char`, `varchar` | `text` | |
| `tinytext`, `text`, `mediumtext`, `longtext` | `text` | |
| `timestamp` | `text` | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` |
| `date` | `text` | Format: `YYYY-MM-DD` |
| `time`, `datetime` | `text` | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` |
| `year` | `text` | |
| `json` | `text` | There is no dedicated JSON type in SQLite — JSON functions operate directly on text values. |
| `bit` | `blob` | \* See note below regarding syncing binary types |
| `binary`, `varbinary` | `blob` | |
| `image` | `blob` | |
| `geometry`, `geometrycollection` | `blob` | |
| `point`, `multipoint` | `blob` | |
| `linestring`, `multilinestring` | `blob` | |
| `polygon`, `multipolygon` | `blob` | |
Binary data can be accessed in the Sync Streams / Sync Rules, but cannot be used as [parameters](/sync/overview#how-it-works). To sync binary columns/fields to clients, those columns need to be converted to hex or base64 representation using the relevant [functions](/sync/supported-sql#functions).
## SQL Server (Alpha) Type Mapping
SQL Server types are mapped to SQLite types as follows:
| SQL Server Data Type | PowerSync / SQLite Column Type | Notes |
| ---------------------------------------------------------- | ------------------------------ | ------------------------------------------------------ |
| `tinyint`, `smallint`, `int`, `bigint` | `integer` | |
| `numeric`, `decimal` | `text` | Numeric string |
| `float`, `real` | `real` | |
| `bit` | `integer` | |
| `money`, `smallmoney` | `text` | Numeric string |
| `xml` | `text` | |
| `char`, `nchar`, `ntext` | `text` | |
| `varchar`, `nvarchar`, `text` | `text` | |
| `uniqueidentifier` | `text` | |
| `timestamp` | `text` | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` |
| `date` | `text` | Format: `YYYY-MM-DD` |
| `time` | `text` | Format: `HH:mm:ss.sss` |
| `datetime`, `datetime2`, `smalldatetime`, `datetimeoffset` | `text` | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` |
| `json` | `text` | Only exists for Azure SQL Database and SQL Server 2025 |
| `geometry`, `geography` | `text` | `text` of JSON object describing the spatial data type |
| `binary`, `varbinary`, `image` | `blob` | \* See note below regarding binary types |
| `rowversion`, `timestamp` | `blob` | \* See note below regarding binary types |
| User-defined types: `hierarchyid` | `blob` | \* See note below regarding binary types |
Binary data can be accessed in the Sync Streams / Sync Rules, but cannot be used as [parameters](/sync/overview#how-it-works). To sync binary columns/fields to clients, those columns need to be converted to hex or base64 representation using the relevant [functions](/sync/supported-sql#functions).
# AI Tools
Source: https://docs.powersync.com/tools/ai-tools
Resources for working with PowerSync with AI-powered coding tools
# PowerSync and AI Coding Tools
This is a growing collection of resources designed to help you work with PowerSync using AI-powered IDE tools like Cursor, Claude, or Windsurf. These tools can help you implement PowerSync features faster and more efficiently.
## PowerSync Agent Skills
PowerSync Agent Skills give your AI agents the PowerSync-specific context they need to perform tasks in your codebase, including actions specific to PowerSync.
To get started quickly, run the following command in your terminal and follow the prompts to add the PowerSync Agent Skills to your project.
```bash theme={null}
npx skills add powersync-ja/agent-skills
```
For more information, see the official PowerSync Agent Skills repository on GitHub.
## AI-Accessible Documentation
### Markdown Version of Documentation Pages
For any page within our documentation, you can obtain the Markdown version, which is more easily readable by LLMs. There are several methods to do this:
1. Press **CTRL/CMD+C** to copy the page in Markdown.
2. Use the context menu on a page to view or copy the page in Markdown.
3. Append `.md` to the URL to view the Markdown version, for example:
```
https://docs.powersync.com/client-sdks/reference/javascript-web.md
```
### Feed a Page to ChatGPT or Claude Directly
Use the context menu on a page to send it directly to ChatGPT or Claude for ingestion.
### Full Documentation Text
We provide text versions of our documentation that LLMs can easily ingest:
* **Full Documentation**: [https://docs.powersync.com/llms-full.txt](https://docs.powersync.com/llms-full.txt)
* Our entire documentation site in a single text file
* Perfect for giving your AI assistant complete context about PowerSync
* **Page Outline**: [https://docs.powersync.com/llms.txt](https://docs.powersync.com/llms.txt)
* All documentation pages in a single text file
* This helps AI assistants in indexing our documentation
## Community Resources
Join our [Discord community](https://discord.com/invite/powersync) to share your experiences in using AI tools with PowerSync and to learn from other developers.
# CLI
Source: https://docs.powersync.com/tools/cli
Manage PowerSync Cloud and self-hosted instances from the command line.
The PowerSync CLI lets you manage PowerSync Service instances, deploy sync config (your Sync Streams or Sync Rules), generate client schemas, run diagnostics, and more. It is distributed as the [powersync](https://www.npmjs.com/package/powersync) npm package.
The CLI is currently in [beta](/resources/feature-status). We recommend it for new and existing projects.
For a full step-by-step flow using the CLI, use the [Setup Guide](/intro/setup-guide): choose the **CLI (Cloud)** or **CLI (Self-Hosted)** tab in steps 2–5 to configure your instance, connect the source database, deploy sync config, and generate development tokens.
The CLI was overhauled in version 0.9.0; the redesign is based on this [design proposal](https://docs.google.com/document/d/1iqpJF2gog2jB-ZWeN8TBEjcad8aBKNKbue2yJ21q_-s/edit).
**Main improvements:**
* **Project-based config** — A `powersync/` directory in your repo holds `service.yaml` and `sync-config.yaml`, so you version and review config with your app code.
* **Self-hosted support** — Most commands work against any linked instance, PowerSync Cloud and self-hosted. You can also use `powersync docker configure` to scaffold a local PowerSync stack with no manual setup.
* **Better validation** — `powersync validate` checks your config before deploy and reports errors with line and column numbers.
* **Config Studio** — `powersync edit config` opens a built-in editor with schema validation, autocomplete, inline errors, and more. See the [Config Studio README](https://github.com/powersync-ja/powersync-cli/tree/main/packages/editor).
* **Browser login** — `powersync login` opens a browser to create or paste a PAT and stores it; in CI use `PS_ADMIN_TOKEN`.
* **Plugins** — npm-based plugin system ([OCLIF](https://oclif.io)); install with `powersync plugins install ` or build with `@powersync/cli-core`.
* **Open source** — Source and advanced docs are in the [PowerSync CLI repo](https://github.com/powersync-ja/powersync-cli).
See [Migrating from the previous CLI](#migrating-from-the-previous-cli) if you used the older flow.
## Installation
Install globally or run via `npx`:
```bash theme={null}
npm install -g powersync
```
```bash theme={null}
npx powersync@0.9.0 # 0.9.0 is the first version with the new CLI
```
## Authentication (Cloud)
Cloud commands require a PowerSync **personal access token (PAT)**.
**Interactive login (recommended for local use):**
```bash theme={null}
powersync login
```
You can open a browser to [create a token in the PowerSync Dashboard](https://dashboard.powersync.com/account/access-tokens) or paste an existing token. The CLI stores the token in secure storage when available (e.g. macOS Keychain), or in a config file after confirmation.
**CI and scripts:** Set the `PS_ADMIN_TOKEN` environment variable. The CLI uses `PS_ADMIN_TOKEN` when set; otherwise it uses the token from `powersync login`.
```bash theme={null}
export PS_ADMIN_TOKEN=your-personal-access-token
powersync fetch instances --project-id=
```
To clear stored credentials: `powersync logout`.
## Config files
The CLI uses a config directory (default `powersync/`) with YAML files:
| File | Purpose |
| ------------------ | --------------------------------------------------------------------------- |
| `service.yaml` | Instance configuration: name, region, replication connection, client auth |
| `sync-config.yaml` | Sync Streams (or Sync Rules) |
| `cli.yaml` | Link file (written by `powersync link`); ties this directory to an instance |
### Developer notes
* Use the **`!env`** tag for secrets, e.g. `uri: !env PS_DATABASE_URI` (or `!env VAR::number` / `!env VAR::boolean` for types).
* Edit files in your IDE, then run `powersync validate` and `powersync deploy`. For schema validation and `!env` support in your editor, run **`powersync configure ide`**; or run **`powersync edit config`** to open Config Studio (built-in web-based editor).
* To use one config directory across multiple instances (e.g. dev, staging, prod), see the CLI usage docs on [configuring multiple instances](https://github.com/powersync-ja/powersync-cli/blob/main/docs/usage.md#configuring-multiple-instances-eg-dev-staging-production).
* For Cloud secrets in `service.yaml`, use `password: secret: !env VAR` to supply the value from an environment variable at deploy time; after the first deploy you can switch to `secret_ref: default_password` to reuse the stored secret. [Details](https://github.com/powersync-ja/powersync-cli#cloud-secrets-format-serviceyaml)
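As a sketch of the `!env` tag in practice, a `service.yaml` replication connection might look like this (the variable names `PS_DATABASE_URI` and `PS_DB_POOL_SIZE` are illustrative):

```yaml theme={null}
replication:
  connections:
    - type: postgresql
      # resolved from the environment when the config is loaded
      uri: !env PS_DATABASE_URI
      sslmode: verify-full
# for non-string fields, append a type: !env VAR::number or !env VAR::boolean
```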
## Cloud workflows
With a Cloud-linked setup you can create instances, deploy and pull config, and run all Cloud commands.
### Create a new instance
```bash theme={null}
powersync login
powersync init cloud
```
Edit `powersync/service.yaml` (name, region, replication, auth) and sync config; use `!env` for secrets.
```bash theme={null}
powersync link cloud --create --project-id=
```
Add `--org-id=` if your token has multiple orgs.
```bash theme={null}
powersync validate
powersync deploy
```
Use `--directory=` for a different config folder.
### Use an existing instance (pull)
Pull config from an instance that already exists (e.g. created in the Dashboard):
```bash theme={null}
powersync login
powersync pull instance --project-id= --instance-id=
```
Then edit `service.yaml` and `sync-config.yaml` as needed, run `powersync validate`, and `powersync deploy`. Run `powersync pull instance` again (no IDs if already linked) to refresh from the cloud.
### Run commands without local config
To run commands (e.g. `powersync generate schema`, `powersync status`) against an instance managed elsewhere (e.g. Dashboard):
* **Link once:** `powersync link cloud --instance-id= --project-id=` (writes `cli.yaml`); later commands use that instance.
* **Or pass each time:** `--instance-id`, `--project-id`, and `--org-id` when the token has multiple orgs. Or set `INSTANCE_ID`, `PROJECT_ID`, `ORG_ID` in the environment.
The CLI resolves instance and linking context in a fixed order: flags take precedence, then environment variables, then values in `cli.yaml`. For the full resolution order and how to set up multiple instances (e.g. dev, staging, prod), see [supplying linking information for Cloud and self-hosted commands](https://github.com/powersync-ja/powersync-cli/blob/main/docs/usage.md#supplying-linking-information-for-cloud-and-self-hosted-commands) in the CLI usage docs.
## Self-hosted workflows
Support is limited: you **link** to an existing PowerSync API and run a **subset of commands**. The CLI does not create, deploy to, or pull config from your server; you manage the server and its config yourself. For local development, use **Docker** to run a PowerSync Service (and optional DB/storage) in containers.
### Authenticate
In your PowerSync instance config, define API tokens in `service.yaml`:
```yaml theme={null}
api:
tokens:
- dev-token-do-not-use-in-production # or !env MY_API_TOKEN
```
```bash theme={null}
powersync link self-hosted --api-url
```
This writes `cli.yaml` with the API URL.
In `cli.yaml` set `api_key: !env PS_ADMIN_TOKEN` (or a literal value matching a server token), or set the **`PS_ADMIN_TOKEN`** environment variable. If both are set, the environment variable takes precedence.
```yaml theme={null}
# powersync/cli.yaml (self-hosted)
type: self-hosted
api_url: https://powersync.example.com
api_key: !env PS_ADMIN_TOKEN # or a literal value matching one of the tokens in service.yaml
```
### Scaffold and link (no Docker)
When you already have a running PowerSync API:
```bash theme={null}
powersync init self-hosted
# Edit powersync/service.yaml with instance details and api.tokens
powersync link self-hosted --api-url https://powersync.example.com
powersync status
```
Use `--directory=` for a different config folder.
### Supported commands (self-hosted)
Only these commands apply to self-hosted instances: **`powersync status`**, **`powersync generate schema`**, **`powersync generate token`**, **`powersync validate`**, **`powersync fetch instances`** (scans current directory for folders with `cli.yaml`).
Cloud-only commands (**`powersync deploy`**, **`powersync pull instance`**, **`powersync fetch config`**, **`powersync destroy`**, **`powersync stop`**) do not apply.
### Docker (local development)
Run a PowerSync Service (and optional DB/storage) in containers on your machine—no remote server.
```bash theme={null}
powersync init self-hosted
powersync docker configure # links to the local API automatically
powersync docker start
```
Then use the same commands as any self-hosted instance (`powersync status`, `powersync generate schema`, etc.). To stop: **`powersync docker stop`** (add `--remove` to remove containers, `--remove-volumes` to reset so init scripts run again). For a clean setup: **`powersync docker reset`** (stop and remove, then start).
For the full Docker workflow, all flags (`--database`, `--storage`, `--remove`, `--remove-volumes`), and how the template layout and init scripts work, see [Docker usage](https://github.com/powersync-ja/powersync-cli/blob/main/docs/usage-docker.md) in the CLI repo. Run `powersync docker --help` for command options.
## Common commands
| Command | Description |
| --------------------------------------------------------------- | ---------------------------------------------------------------------------- |
| `powersync login` | Store PAT for Cloud (interactive or paste token); use `PS_ADMIN_TOKEN` in CI |
| `powersync logout` | Remove stored token |
| `powersync init cloud` | Scaffold Cloud config directory |
| `powersync init self-hosted` | Scaffold self-hosted config directory |
| `powersync configure ide` | IDE: YAML schema validation and `!env` support |
| `powersync link cloud --project-id=` | Link to existing Cloud instance |
| `powersync link cloud --create --project-id=` | Create new Cloud instance and link |
| `powersync link self-hosted --api-url=` | Link to self-hosted instance |
| `powersync pull instance --project-id= --instance-id=` | Download Cloud config to local files |
| `powersync deploy` | Deploy full config to linked Cloud instance |
| `powersync deploy service-config` | \[Cloud] Deploy only service config |
| `powersync deploy sync-config` | \[Cloud] Deploy only sync config |
| `powersync validate` | Validate config and sync rules/streams |
| `powersync edit config` | Open Config Studio (Monaco editor) |
| `powersync status` | Instance diagnostics (Cloud and self-hosted) |
| `powersync generate schema --output=ts --output-path=schema.ts` | Generate client schema |
| `powersync generate token --subject=user-123` | Generate development JWT |
| `powersync fetch instances` | List Cloud and linked instances |
| `powersync fetch config` | \[Cloud] Print instance config (YAML/JSON) |
| `powersync migrate sync-rules` | Migrate Sync Rules to Sync Streams |
| `powersync destroy --confirm=yes` | \[Cloud] Permanently destroy instance |
| `powersync stop --confirm=yes` | \[Cloud] Stop instance (restart with deploy) |
Run `powersync --help`, or add `--help` to any command, to see available flags. The full [command reference](https://github.com/powersync-ja/powersync-cli/blob/main/cli/README.md#commands) is in the CLI repo.
## Deploying from CI (e.g. GitHub Actions)
You can automate sync config (and full config) deployments using the CLI in CI. Use the config directory as the source of truth: keep `service.yaml` and `sync-config.yaml` in the repo (with secrets via `!env` and CI secrets), then run `powersync deploy` (or `powersync deploy sync-config`).
**Secrets:** Set `PS_ADMIN_TOKEN` to your PowerSync personal access token. If the workflow does not use a linked directory, also set `INSTANCE_ID` and `PROJECT_ID` (and `ORG_ID` only if your token has multiple organizations). For self-hosted, `API_URL` can specify the PowerSync API base URL.
Example: deploy sync config on push to main
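A minimal GitHub Actions workflow might look like the following sketch. The workflow file name, Node version, path filter, and secret names are illustrative — adjust them to your setup, and note it assumes the repo contains a linked config directory (`powersync/cli.yaml`):

```yaml theme={null}
# .github/workflows/powersync-deploy.yml
name: Deploy PowerSync sync config
on:
  push:
    branches: [main]
    paths: ['powersync/**']
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # Validate first; config errors fail the job before anything is deployed
      - run: npx powersync validate
        env:
          PS_ADMIN_TOKEN: ${{ secrets.PS_ADMIN_TOKEN }}
      - run: npx powersync deploy sync-config
        env:
          PS_ADMIN_TOKEN: ${{ secrets.PS_ADMIN_TOKEN }}
```

If the directory is not linked, also pass `INSTANCE_ID` and `PROJECT_ID` (and `ORG_ID` if needed) as environment variables, as described above.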
## Migrating from the previous CLI
If you used the older PowerSync CLI (e.g. `npx powersync init` to set token and org/project, then `powersync instance set`, `powersync instance deploy`, etc.), the new CLI uses a different flow. Version 0.9.0 and above are **not backwards compatible** with 0.8.0. If you are not ready to migrate, you can stay on the old CLI:
```bash theme={null}
npm install -g @powersync/cli@0.8.0
```
Otherwise, upgrade to the latest **powersync** npm package and follow the mapping below.
| Previous CLI | New CLI |
| ----------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `npx powersync init` (enter token, org, project) | **`powersync login`** (token only). Then **`powersync init cloud`** to scaffold config, or **`powersync pull instance --project-id=... --instance-id=...`** to pull an existing instance. |
| `powersync instance set --instanceId=` | **`powersync link cloud --instance-id= --project-id=`** (writes `cli.yaml` in config directory). Or use `--directory` for a specific folder. |
| `powersync instance deploy` (interactive or long flag list) | Edit **`powersync/service.yaml`** and **`powersync/sync-config.yaml`**, then **`powersync deploy`**. Config is in files, not command args. |
| `powersync instance config` | **`powersync fetch config`** (output as YAML or JSON with `--output`). |
| Deploy only sync rules | **`powersync deploy sync-config`**. |
| `powersync instance schema` | **`powersync generate schema --output=... --output-path=...`** (and/or **`powersync status`** for diagnostics). |
| Org/project stored by init | Pass **`--org-id`** and **`--project-id`** when needed, or use **`powersync link cloud`** so they are stored in **`powersync/cli.yaml`**. For CI, use env vars: **`PS_ADMIN_TOKEN`**, **`INSTANCE_ID`**, **`PROJECT_ID`**, **`ORG_ID`** (optional). |
**Summary:** Authenticate with **`powersync login`** (or `PS_ADMIN_TOKEN` in CI). Use a **config directory** with `service.yaml` and `sync-config.yaml` as the source of truth. **Link** with **`powersync link cloud`** or **`powersync pull instance`**, then run **`powersync deploy`** or **`powersync deploy sync-config`**. No more setting “current instance” separately from config—the directory and `cli.yaml` define the target.
## Additional documentation (CLI repository)
More information is available in the [PowerSync CLI repository](https://github.com/powersync-ja/powersync-cli).
| Resource | Description |
| ---------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [CLI README](https://github.com/powersync-ja/powersync-cli/blob/main/cli/README.md) | Getting started, Cloud and self-hosted overview, and full **command reference** with all flags. |
| [General usage](https://github.com/powersync-ja/powersync-cli/blob/main/docs/usage.md) | **How the CLI works**: local config vs linking, resolution order (flags → env vars → `cli.yaml`), and **configuring multiple instances** (e.g. dev/staging/prod with separate directories or `!env` in `cli.yaml`). |
| [Docker (local development)](https://github.com/powersync-ja/powersync-cli/blob/main/docs/usage-docker.md) | Self-hosted Docker workflow, configure/start/stop/reset, database and storage modules, and template layout. |
| [Config Studio (editor)](https://github.com/powersync-ja/powersync-cli/tree/main/packages/editor) | Built-in Monaco-powered editor for `service.yaml` and `sync-config.yaml` (`powersync edit config`), schema validation, and local development. |
| [Examples](https://github.com/powersync-ja/powersync-cli/blob/main/examples/README.md) | Sample projects initialized with the CLI (e.g. Cloud pull, self-hosted Postgres, self-hosted Supabase). |
## Known issues and limitations
* When secure storage is unavailable, `powersync login` may store the token in a plaintext config file after explicit confirmation.
* Self-hosted: the CLI does not create or manage instances on your server, or deploy config to it; it only links to an existing API and runs a subset of commands (status, generate schema/token, validate). The sole exception is **Docker**: it starts a local PowerSync Service (and optional DB/storage) in containers on your machine for development — not a remote or production instance.
* Some validation checks require a connected instance to complete successfully; validation of an unprovisioned instance may show errors that resolve after the first deployment.
## Reference
* [powersync npm package](https://www.npmjs.com/package/powersync) — Package and version history
* [PowerSync Dashboard](https://dashboard.powersync.com/account/access-tokens) — Create or revoke personal access tokens
* [PowerSync CLI repository](https://github.com/powersync-ja/powersync-cli) — Source code, usage docs, Docker usage, and examples
# Sync Diagnostics Client
Source: https://docs.powersync.com/tools/diagnostics-client
# Local Development
Source: https://docs.powersync.com/tools/local-development
Getting started with self-hosted PowerSync on your development machine.
The easiest way to run PowerSync locally is with the [PowerSync CLI](/tools/cli), which scaffolds and manages a Docker Compose stack for you.
See the [Setup Guide](/intro/setup-guide) for step-by-step CLI instructions: `powersync init self-hosted`, then `powersync docker configure`, then `powersync docker start`.
If you'd prefer to write your own Docker Compose setup, here's a minimal example.
## Docker Compose Example
Create a working directory with three files:
```
powersync/
docker-compose.yaml
service.yaml
sync-config.yaml
```
### `docker-compose.yaml`
This example uses Postgres as the source database and MongoDB as bucket storage. Postgres is also supported as bucket storage — see [Self-Hosted Instance Configuration](/configuration/powersync-service/self-hosted-instances) for details.
```yaml theme={null}
services:
powersync:
restart: unless-stopped
depends_on:
mongo-rs-init:
condition: service_completed_successfully
postgres:
condition: service_healthy
image: journeyapps/powersync-service:latest
command: ["start", "-r", "unified"]
volumes:
- ./service.yaml:/config/service.yaml
- ./sync-config.yaml:/config/sync-config.yaml
environment:
POWERSYNC_CONFIG_PATH: /config/service.yaml
ports:
- 8080:8080
# Source database (Postgres with logical replication enabled)
postgres:
image: postgres:latest
restart: always
environment:
- POSTGRES_USER=postgres
- POSTGRES_DB=postgres
- POSTGRES_PASSWORD=postgres
- PGPORT=5432
volumes:
- pg_data:/var/lib/postgresql/data
ports:
- "5432:5432"
command: ["postgres", "-c", "wal_level=logical"]
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres -d postgres"]
interval: 5s
timeout: 5s
retries: 5
# MongoDB used internally for bucket storage
mongo:
image: mongo:7.0
command: --replSet rs0 --bind_ip_all --quiet
restart: unless-stopped
ports:
- 27017:27017
volumes:
- mongo_storage:/data/db
# Initializes the MongoDB replica set. This service will not usually be actively running
mongo-rs-init:
image: mongo:7.0
depends_on:
- mongo
restart: on-failure
entrypoint:
- bash
- -c
- 'mongosh --host mongo:27017 --eval ''try{rs.status().ok && quit(0)} catch {} rs.initiate({_id: "rs0", version: 1, members: [{ _id: 0, host : "mongo:27017" }]})'''
volumes:
mongo_storage:
pg_data:
```
### `service.yaml`
The main PowerSync Service configuration. See [Self-Hosted Instance Configuration](/configuration/powersync-service/self-hosted-instances) for the full reference.
```yaml theme={null}
# Source database connection
replication:
connections:
- type: postgresql
uri: postgresql://postgres:postgres@postgres:5432/postgres
sslmode: disable # verify-full, verify-ca, or disable
# Bucket storage (MongoDB shown; Postgres is also supported)
storage:
type: mongodb
uri: mongodb://mongo:27017/powersync_storage
# The port which the PowerSync API server will listen on
port: 8080
# Points to the sync config file
sync_config:
path: sync-config.yaml
# Settings for client authentication
client_auth:
# Enable this if using Supabase Auth
supabase: false
# Or enter a static collection of public keys for JWT verification, and generate a development token using the CLI (`powersync generate token`)
# jwks:
# keys:
# - kty: 'RSA'
# n: '[rsa-modulus]'
# e: '[rsa-exponent]'
# alg: 'RS256'
# kid: '[key-id]'
# JWKS audience
# audience: ['powersync-dev', 'powersync', 'http://localhost:8080']
```
### `sync-config.yaml`
Defines what data syncs to clients. See [Sync Streams](/sync/streams/overview) for full syntax.
```yaml theme={null}
config:
edition: 3
streams:
global:
# Streams without parameters sync the same data to all users
auto_subscribe: true
queries:
- SELECT * FROM todos
- SELECT * FROM lists
```
### Start the stack
```bash theme={null}
docker compose up
```
## Resources
* [PowerSync CLI](https://github.com/powersync-ja/powersync-cli) — open source CLI; use it to scaffold and run a Docker-based local stack
* [self-host-demo](https://github.com/powersync-ja/self-host-demo) — complete working examples with Docker Compose
* [Self-Hosted Instance Configuration](/configuration/powersync-service/self-hosted-instances) — full `service.yaml` reference
* [Sync Streams](/sync/streams/overview) — sync config syntax
* [Generate a Development Token](/intro/setup-guide#5-generate-a-development-token) — for testing without a full auth setup
# Tools
Source: https://docs.powersync.com/tools/overview
* **[PowerSync Dashboard](/tools/powersync-dashboard)** — Dashboard for PowerSync Cloud. Allows managing your PowerSync organization, projects and instances.
* **[CLI](/tools/cli)** — Manage PowerSync Cloud and self-hosted instances from the command line.
* **[Sync Diagnostics Client](/tools/diagnostics-client)** — Web app to inspect and debug syncing. Works with both cloud and self-hosted PowerSync Service instances.
* **[Local Development](/tools/local-development)** — Using Docker Compose to self-host PowerSync for development purposes.
* **[AI Tools](/tools/ai-tools)** — Resources for working with PowerSync with AI-powered coding tools.
# PowerSync Dashboard
Source: https://docs.powersync.com/tools/powersync-dashboard
Introduction to and overview of the PowerSync Dashboard
The PowerSync Dashboard is available in [PowerSync Cloud](https://www.powersync.com/pricing) (our cloud-hosted offering) and provides an interface for managing your PowerSync organization, projects, instances, and account settings.
The dashboard is available here: [https://dashboard.powersync.com/](https://dashboard.powersync.com/)
### Hierarchy: Organization, project, instance
* After successfully [signing up](https://accounts.powersync.com/portal/powersync-signup?s=docs) for PowerSync Cloud, your **PowerSync account** is created.
* Your account is assigned an **organization** on the [Free pricing plan](https://www.powersync.com/pricing).
* To get started, you'll need to create a **project** in your organization.
When you click "Create a New Project", you can opt to automatically create a *Development* and *Production* **instance** for that project (recommended). An instance runs a copy of the [PowerSync Service](/architecture/powersync-service) and connects to your [backend source database](/configuration/source-db/connection). You can also change the region of each instance from the default (US) to EU, JP (Japan), AU (Australia) or BR (Brazil).
Here is an example of how this hierarchy might be used by a customer:
* **Organization**: Wanderlust Inc.
* **Project**: Wanderlust Tracker
* **Instance**: Development
* **Instance**: Production
### Dashboard Overview
The PowerSync Dashboard is organized into three main levels, each providing different functionality:
1. **Organization Level** - Manage projects, team members, and organization and billing settings
2. **Account Level** - Manage your personal account settings and access tokens
3. **Project & Instance Level** - Configure and monitor your PowerSync instances
### Organization Level
URL structure: `https://dashboard.powersync.com/org/{orgId}/projects`
At the organization level, you can manage your PowerSync projects and organization-wide settings. Navigate to your organization to access:
* **Projects** - View all projects in your organization, create new projects, and manage project settings
* **Team** - Invite team members to your organization, manage user roles, and remove access
* **Plans & Billing** - View or update your PowerSync subscription plan and manage billing details
* **Plan Usage** - Monitor usage metrics across all projects in your organization for the current billing cycle
* **Settings** - Update the organization name
### Account Level
URL structure: `https://dashboard.powersync.com/account/me`
At the account level, you can manage your personal account settings:
* **Account Details** - View your account email and name
* **Security** - Reset your password and configure multi-factor authentication (MFA)
* **Access Tokens** - Create and manage personal access tokens for use with the [CLI](/tools/cli)
### Project & Instance Level
URL structure: `https://dashboard.powersync.com/org/{orgId}/project/{projectId}/{instanceId}/{view}`
When you navigate to a specific instance, you'll see a left sidebar with various views for configuring and monitoring that instance:
* **Health** - Overview of the instance's connection health, deploy history, replication status, and recently connected clients
* **Database Connections** - Configure and manage the source database connection
* **Client Auth** - Configure authentication settings
* **Sync Streams / Sync Rules** - Edit, validate, and deploy your sync config
* **Sync Test** - Test your Sync Streams (or legacy Sync Rules)
* **Client SDK Setup** - Generate the [client-side schema](/intro/setup-guide#define-your-client-side-schema) based on your deployed [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview)
* **Write API** - Resources for exposing the write API endpoint
* **Logs** - View replication and service logs
* **Metrics** - Monitor usage metrics and performance
* **Settings** - Advanced options such as updating the [Service version](#advanced-service-version-locking), [compacting or defragmenting buckets](/maintenance-ops/compacting-buckets), and options for deprovisioning and destroying your instance
In the top bar, you'll see a "Connect" button that provides quick access to your instance URL and other resources for connecting to your instance.
#### Common Tasks
Here are some of the most common tasks you'll perform in the dashboard:
* **Edit and deploy Sync Streams / Sync Rules** - Select your project and instance and go to the **Sync Streams** (or legacy **Sync Rules**) view to edit your sync config, then click **"Validate"** and **"Deploy"** to deploy
* **Generate development token** - Navigate to the **Client Auth** view and ensure the **Development tokens** setting is checked. Click the "Connect" button in the top bar and follow the instructions to generate a [development token](/configuration/auth/development-tokens).
* **Launch the Sync Diagnostics Client** - Navigate to the **Sync Test** view, generate a development token, and click "Launch" to open the [Sync Diagnostics Client](/tools/diagnostics-client).
* **Copy your instance URL** - Click **Connect** in the top bar and copy the instance URL from the dialog.
* **Generate client-side schema** - Click the **Connect** button in the top bar to generate the [client-side schema](/intro/setup-guide#define-your-client-side-schema) based on your deployed [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview) in your preferred language
* **Monitor instance health** - Navigate to the **Health** view to see an overview of your instance status, database connections, and recent deploys
* **View logs** - Navigate to the **Logs** view to review replication and client sync logs
* **Monitor metrics** - Navigate to the **Metrics** view to track usage metrics
#### Advanced: Service Version Locking
Customers on our [Team and Enterprise plans](https://www.powersync.com/pricing) can lock their PowerSync Cloud instances to a specific version of the PowerSync Service. This option is available under your instance settings.
Versions are specified as `major.minor.patch`. When locked, only new `.patch` releases will automatically be applied to the instance.
**Downgrade limitations:** Not all downgrade paths are available automatically. If you need to downgrade to an older version, please [contact our team](/resources/contact-us) for assistance.