# Architecture Overview

Source: https://docs.powersync.com/architecture/architecture-overview

The core components of PowerSync are the service and client SDKs. The [PowerSync Service](/architecture/powersync-service) and client SDKs operate in unison to keep client-side SQLite databases in sync with a backend source database. Learn about their architecture:

### Protocol

Learn about the sync protocol used between PowerSync clients and the [PowerSync Service](/architecture/powersync-service):

# Client Architecture

Source: https://docs.powersync.com/architecture/client-architecture

Learn how the PowerSync Client SDK manages connections, authentication, and the local SQLite database.

The [PowerSync Client SDK](/client-sdks/overview) is embedded into a software application. The Client SDK manages the client connection to your [PowerSync Service](/architecture/powersync-service) instance, authenticating via a [JWT](/configuration/auth/overview). The connection between the client and the PowerSync Service is encrypted, and uses either HTTP streams or WebSockets (depending on the specific [Client SDK](/client-sdks/overview) being used).

The Client SDK provides access to a managed [SQLite](/resources/faq#why-does-powersync-use-sqlite-as-the-client-side-database) database that is automatically kept in sync with the backend source database via the PowerSync Service, based on the [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) that are active on the PowerSync Service instance.

## Reading Data (SQLite)

App clients always read data from the client-side [SQLite](https://sqlite.org/) database, regardless of whether the user is online or offline. When the user is online and the app is connected to the PowerSync Service, changes on the source database are reflected in real time in the SQLite database. [Live Queries / Watch Queries](/client-sdks/watch-queries) allow the app UI to have real-time reactivity too.
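As a minimal illustration of the read path: local reads are plain SQLite queries and never involve the network. The sketch below uses Python's `sqlite3` with a hypothetical `lists` table standing in for a PowerSync-managed view; connectivity only affects how fresh the local data is, not whether queries work.

```python
import sqlite3

# Hypothetical `lists` table standing in for a PowerSync-managed view.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE lists (id TEXT PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO lists VALUES ('1', 'groceries')")  # synced earlier

# Reads work identically online or offline -- no network involved.
rows = db.execute("SELECT name FROM lists").fetchall()
print(rows)  # [('groceries',)]
```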
## Client-Side Schema and SQLite Database Structure

When you implement the PowerSync Client SDK in your application, you need to define a [client-side schema](/intro/setup-guide#define-your-client-side-schema) with tables, columns and indexes that correspond to your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). You provide this schema when the PowerSync-managed SQLite database is [instantiated](/intro/setup-guide#instantiate-the-powersync-database).

The tables defined in your client-side schema are usable in SQL queries as if they were actual SQLite tables, while in reality they are created as *SQLite views* based on the schemaless JSON data being synced (see [PowerSync Protocol](/architecture/powersync-protocol)). The PowerSync Client SDK automatically maintains the following tables in the SQLite database:
| Table | Description |
| ----- | ----------- |
| `ps_data__` tables | Contains the data for each "table", in JSON format. Since JSON is used, this table's schema does not change when columns are added, removed or changed in the Sync Streams (or legacy Sync Rules) and client-side schema. |
| `ps_data_local__` tables | Same as the above, but for [local-only](/client-sdks/advanced/local-only-usage) tables. |
| Schema tables (`VIEW`s) | These are views on the above `ps_data` tables, with each column defined in the client-side schema extracted from the JSON. For example, a `description` text column would be `CAST(data ->> '$.description' as TEXT)`. |
| `ps_untyped` | Any synced table that is not defined in the client-side schema is placed here. If the table is added to the schema at a later point, the data is then migrated to `ps_data__`. |
| `ps_oplog` | The operation history data as received from the [PowerSync Service](/architecture/powersync-service), grouped per bucket. |
| `ps_crud` | The client-side upload queue (see [Writing Data](#writing-data-via-sqlite-database-and-upload-queue) below). |
| `ps_buckets` | A small amount of metadata for each bucket. |
| `ps_migrations` | Keeps track of Client SDK schema migrations. |
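The JSON-backed view pattern described above can be sketched with plain SQLite. This is a simplified illustration, not the SDK's actual implementation: the real `ps_data__*` tables and view definitions are managed for you, and `json_extract` is used here as the older, widely available equivalent of the `->>` operator.

```python
import sqlite3

# Simplified sketch of a ps_data__* table: one JSON blob per row.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE ps_data__lists (id TEXT PRIMARY KEY, data TEXT)")
db.execute(
    "INSERT INTO ps_data__lists VALUES "
    "('1', '{\"name\": \"groceries\", \"archived\": 0}')"
)

# Each schema column is extracted from the JSON in a view, so adding or
# removing columns never requires an ALTER TABLE on the data table.
db.execute("""
    CREATE VIEW lists AS
    SELECT id,
           CAST(json_extract(data, '$.name') AS TEXT) AS name,
           CAST(json_extract(data, '$.archived') AS INTEGER) AS archived
    FROM ps_data__lists
""")

row = db.execute("SELECT name, archived FROM lists WHERE id = '1'").fetchone()
print(row)  # ('groceries', 0)
```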
Most rows will be present in at least two tables — the `ps_data__` table, and in `ps_oplog`. The copy of the row in `ps_oplog` may be newer than the one in `ps_data__`. This is because of the checkpoint system in PowerSync that gives the system its consistency properties. When a full [checkpoint](/architecture/consistency) has been downloaded, data is copied over from `ps_oplog` to the individual `ps_data__` tables.

It is possible for different [buckets](/architecture/powersync-service#bucket-system) to include overlapping data (for example, if multiple buckets contain data from the same table). If a row with the same table and ID has been synced via multiple buckets, it may be present multiple times in `ps_oplog`, but only one copy will be preserved in the `ps_data__` table (the one with the highest `op_id`).

**Raw Tables Instead of JSON-Backed SQLite Views**: If you run into limitations with the above JSON-based SQLite view system, check out the [Raw Tables experimental feature](/client-sdks/advanced/raw-tables), which allows you to define and manage raw SQLite tables to work around some of the limitations of PowerSync's default JSON-backed SQLite views. We are actively seeking feedback on the raw tables functionality.

## Writing Data (via SQLite Database and Upload Queue)

Any mutations on the SQLite database, namely updates, deletes and inserts, are immediately reflected in the SQLite database, and also automatically placed into an **upload queue** by the Client SDK. The upload queue is a blocking [FIFO](https://en.wikipedia.org/wiki/FIFO_%28computing_and_electronics%29) queue, which is automatically managed by the PowerSync Client SDK.

The Client SDK processes the upload queue by invoking an `uploadData()` function [that you define](/configuration/app-backend/client-side-integration) when you integrate the Client SDK. Your `uploadData()` implementation should call your [backend application API](/configuration/app-backend/setup) to persist the mutations to the backend source database. We designed PowerSync this way because it allows you to apply your own backend business logic, validations and authorization to any mutations going to your source database.

The PowerSync Client SDK automatically takes care of network failures and retries. If processing a mutation in the upload queue fails (e.g. because the user is offline), it is automatically retried.

# Consistency

Source: https://docs.powersync.com/architecture/consistency

PowerSync uses the concept of "checkpoints" to ensure that data is consistent.
## PowerSync: Designed for Causal+ Consistency

PowerSync is designed to have [causal+ consistency](https://jepsen.io/consistency/models/causal), while providing enough flexibility for applications to perform their own data validations and conflict handling. PowerSync's consistency properties have been [tested and verified](https://github.com/nurturenature/jepsen-powersync#readme).

## How It Works: Checkpoints

A checkpoint is a single point-in-time on the server (similar to an [LSN in Postgres](https://www.postgresql.org/docs/current/datatype-pg-lsn.html)) with a consistent state — only fully committed transactions are part of the state. The client only updates its local state when it has all the data matching a checkpoint, and then it updates the state to exactly match that of the checkpoint. There is no intermediate state while downloading large sets of changes, such as large server-side transactions. Different tables and [buckets](/architecture/powersync-service#bucket-system) are all included in the same consistent checkpoint, to ensure that the state is consistent over all data in the client.

## Client-Side Mutations

Client-side mutations are applied on top of the last checkpoint received from the server, as well as being persisted into an [upload queue](/architecture/client-architecture#writing-data-via-sqlite-database-and-upload-queue). While mutations are present in the upload queue, the client does not advance to a new checkpoint. This means the client never has to resolve conflicts locally. Only once all the client-side mutations have been acknowledged by the server, and the data for that new checkpoint is downloaded by the client, does the client advance to the next checkpoint. This ensures that the operations are always ordered correctly on the client.

There is one nuanced case here, which is buckets with [Priority 0](/sync/advanced/prioritized-sync#special-case:-priority-0) if you are using [Prioritized Syncing](/sync/advanced/prioritized-sync).
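The checkpoint-advancement rule above can be sketched as follows. All names here are illustrative, not SDK APIs: a client only advances to a checkpoint once its own upload queue is empty and all data for that checkpoint has been downloaded.

```python
# Illustrative sketch of the checkpoint rule -- not the actual SDK logic.
class Client:
    def __init__(self):
        self.applied_checkpoint = 0
        self.upload_queue = []   # pending local mutations (FIFO)
        self.downloaded = {}     # checkpoint_id -> fully downloaded?

    def try_advance(self, checkpoint_id):
        if self.upload_queue:
            return False         # local writes not yet acknowledged
        if not self.downloaded.get(checkpoint_id):
            return False         # checkpoint data incomplete
        self.applied_checkpoint = checkpoint_id
        return True

client = Client()
client.upload_queue.append({"op": "PUT", "id": "1"})
client.downloaded[42] = True
print(client.try_advance(42))     # False: upload queue not empty

client.upload_queue.clear()       # backend acknowledged the write
print(client.try_advance(42))     # True
print(client.applied_checkpoint)  # 42
```

Because the client holds back until both conditions are met, it never has to merge a partially downloaded checkpoint with unacknowledged local writes.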
## Types of Client-Side Mutations/Operations

The client automatically records mutations to the client-side database as `PUT`, `PATCH` or `DELETE` operations — corresponding to `INSERT`, `UPDATE` or `DELETE` statements in SQLite. These are grouped together in a batch per client-side transaction. Since the [developer has full control](/architecture/client-architecture#writing-data-via-sqlite-database-and-upload-queue) over how mutations are applied to the source database, more advanced operations can be modeled on top of these three. See [Custom Conflict Resolution](/handling-writes/custom-conflict-resolution) for examples.

## Validation and Conflict Handling

With PowerSync offering [full flexibility](/architecture/client-architecture#writing-data-via-sqlite-database-and-upload-queue) in how mutations are applied on the server, it is also the developer's responsibility to implement this correctly to avoid consistency issues. One scenario to consider: while the client was offline, a row was modified on the client side; by the time the client is online again, that row has been deleted on the source database. Some options for handling the mutation in your backend:

* Discard the mutation.
* Discard the entire transaction.
* Re-create the row.
* Record the failed mutation elsewhere, potentially notifying the user and allowing the user to resolve the issue.

Some other examples include foreign-key or not-null constraints, maximum size of numeric fields, unique constraints, and access restrictions (such as row-level security policies). In an online-only application, the user typically sees the error as soon as it occurs, and can correct the issue as required. In an offline-capable application that syncs asynchronously with the server, these errors may occur much later than when the mutation was made, so more care is required to handle these cases. Special care must be taken so that issues such as these do not block the upload queue.
The upload queue in the PowerSync Client SDK is a blocking [FIFO](https://en.wikipedia.org/wiki/FIFO_%28computing_and_electronics%29) queue, and the queue cannot advance if the backend does not acknowledge a mutation. And as mentioned above, if the queue cannot be cleared, the client does not move on to the next checkpoint of synced data.

There is no single correct choice for handling write failures such as those mentioned above — the best action depends on the specific application and scenario. However, we do have some suggestions for general approaches:

1. In general, consider relaxing constraints somewhat on the backend where they are not absolutely required. It may be better to accept data that is somewhat inconsistent (e.g. a client not applying all expected validations), rather than discarding the data completely.
2. If it is critical to preserve all client mutations and preserve the order of mutations:
   1. Block the client's upload queue on unexpected errors (don't acknowledge the mutation in your backend API).
   2. Implement error monitoring to be notified of issues, and resolve the issues as soon as possible.
3. If it is critical to preserve all client mutations, but the exact order may not be critical:
   1. On a constraint error, persist the transaction in a separate queue on your backend, and acknowledge the change.
   2. The backend queue can then be inspected and retried asynchronously, without blocking the client-side upload queue.
4. If it is acceptable to lose some mutations due to constraint errors:
   1. Discard the mutation, or the entire transaction if the changes must all be applied together.
   2. Implement error notifications to detect these issues.

See also:

* [Handling Update Conflicts](/handling-writes/handling-update-conflicts)
* [Custom Conflict Resolution](/handling-writes/custom-conflict-resolution)

## Questions?

If you have any questions about consistency, please [join our Discord](https://discord.gg/powersync) to discuss.
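Approach 3 above (preserving failed writes in a separate backend-side queue while still acknowledging the upload) can be sketched as follows. Everything here is a hypothetical stand-in: `handle_upload` is not a real PowerSync API, `apply_to_database` represents your backend write logic, and the in-memory list represents a durable queue.

```python
# Hypothetical backend sketch: park constraint failures instead of
# blocking the client's upload queue.
failed_writes = []  # stand-in for a durable backend-side retry queue

def apply_to_database(mutation):
    # Stand-in for your real backend write logic and its constraints.
    if mutation.get("amount", 0) < 0:
        raise ValueError("constraint violated: amount must be >= 0")

def handle_upload(mutation):
    try:
        apply_to_database(mutation)
    except ValueError as err:
        # Record the failed write for async inspection/retry, and still
        # acknowledge it, so the client-side queue keeps moving.
        failed_writes.append({"mutation": mutation, "error": str(err)})
    return {"acknowledged": True}

print(handle_upload({"id": "1", "amount": 10}))  # {'acknowledged': True}
print(handle_upload({"id": "2", "amount": -5}))  # {'acknowledged': True}
print(len(failed_writes))                        # 1
```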
# PowerSync Protocol

Source: https://docs.powersync.com/architecture/powersync-protocol

Overview of the sync protocol used between PowerSync clients and the PowerSync Service for efficient delta syncing.

This contains a broad overview of the sync protocol used between PowerSync clients and the [PowerSync Service](/architecture/powersync-service). For details, see the implementation in the various PowerSync Client SDKs.

## Design

The PowerSync protocol is designed to efficiently sync changes to clients, while maintaining [consistency](/architecture/consistency) and integrity of data. The same process is used for:

* Downloading the initial set of data
* Bulk downloading changes after being offline for a while
* Incrementally streaming changes while connected

## Concepts

### Buckets

All synced data is grouped into [buckets](/architecture/powersync-service#bucket-system). A bucket represents a collection of synced rows, synced to any number of users.

[Buckets](/architecture/powersync-service#bucket-system) are a core concept that allows PowerSync to efficiently scale to tens of thousands of concurrent clients per PowerSync Service instance, and to incrementally sync changes to hundreds of thousands of rows (or even [a million or more](/resources/performance-and-limits#sync-powersync-service-→-client)) to each client.

Each bucket keeps an ordered list of changes to rows within the bucket (operation history) — generally as `PUT` or `REMOVE` operations:

* `PUT` is the equivalent of `INSERT OR REPLACE`.
* `REMOVE` is slightly different from `DELETE`: a row is only deleted from the client if it has been removed from *all* buckets synced to the client.

As a practical example of how buckets manifest, let's say you have a bucket named `user_todo_lists` that contains the to-do lists for a user, and that bucket utilizes a `user_id` parameter (which will be obtained from the JWT). Now let's say users with IDs `A` and `B` exist in the source database.
PowerSync will then replicate data from the source database and create individual buckets with bucket IDs `user_todo_lists["A"]` and `user_todo_lists["B"]`. As you can see, buckets are essentially scoped by their parameters (`A` and `B` in this example), so they are always synced as a whole. For user `A` to receive only their relevant to-do lists, they would sync the entire contents of the bucket `user_todo_lists["A"]`.

### Checkpoints

A checkpoint is a sequential ID that represents a single point-in-time for consistency purposes. This is further explained in [Consistency](/architecture/consistency).

### Checksums for Verifying Data Integrity

For any checkpoint, the client and server compute a per-bucket checksum. This is essentially the sum of the checksums of the individual operations within the bucket, with each individual checksum being a hash of the operation data. The checksum helps to ensure that the client has all the correct data. In the hypothetical scenario where the bucket data becomes corrupted on the PowerSync Service, the checksums will stop matching, and the client will re-download the entire bucket.

Note: Checksums are not a cryptographically secure method of verifying data integrity. Rather, they are designed to detect simple data mismatches, whether due to bugs, bucket data tampering, or other corruption issues.

### Compacting

To avoid indefinite growth in the size of buckets, the operation history of a bucket can be [compacted](/maintenance-ops/compacting-buckets). Stale updates are replaced with marker entries, which can be merged together, while keeping the same checksums.

## Protocol

A client initiates a sync session using:

1. A JWT token that typically contains the `user_id`, and optional additional parameters.
2. A list of the buckets that the client currently has, and the latest operation ID in each.

The server then responds with a stream of:

1. **Checkpoint available**: A new checkpoint ID, with a checksum for each bucket in the checkpoint.
2.
**Data**: New operations for the above checkpoint for each relevant bucket, starting from the last operation ID as sent by the client.
3. **Checkpoint complete**: Sent once all data for the checkpoint has been sent.

The server then waits until a new checkpoint is available, and repeats the above sequence. The stream can be interrupted at any time, at which point the client will initiate a new session, resuming from the last point.

If a checksum validation fails on the client, the client will delete the bucket and start a new sync session. Data for individual rows is represented [using JSON](/architecture/client-architecture#client-side-schema-and-sqlite-database-structure). The protocol itself is schemaless — the client is expected to use its own copy of the schema, and gracefully handle schema differences.

#### Write Checkpoints

Write checkpoints are used to ensure clients have synced their own mutations back before applying downloaded data locally. Creating a write checkpoint is a separate operation, which is performed by the client after all mutations have been uploaded (i.e. the client's [upload queue](/architecture/client-architecture#writing-data-via-sqlite-database-and-upload-queue) has been successfully and fully processed, and is empty). It is [important](/handling-writes/writing-client-changes#why-must-my-write-endpoint-be-synchronous) that this happens after the data has been written to the backend source database. The server then keeps track of the current CDC stream position on the database (the LSN in Postgres and SQL Server, the resume token in MongoDB, and the GTID + binlog position in MySQL), and notifies the client when the data has been replicated, as part of the checkpoint data in the normal data stream.

# PowerSync Service

Source: https://docs.powersync.com/architecture/powersync-service

Understand the PowerSync Service architecture, including the bucket system, data replication, and real-time streaming sync.
When we say "PowerSync instance" we are referring to an instance of the [PowerSync Service](https://github.com/powersync-ja/powersync-service), which is the server-side component of the sync engine responsible for the *read path* from the source database to client-side SQLite databases.

The primary purposes of the PowerSync Service are (1) replicating data from your source database (Postgres, MongoDB, MySQL, SQL Server), and (2) streaming data to clients. Both of these happen based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)).

## Bucket System

The concept of *buckets* is core to PowerSync and its scalability. *Buckets* are basically partitions of data that allow the PowerSync Service to efficiently query the correct data that a specific client needs to sync.

With [Sync Streams](/sync/streams/overview), buckets are created **implicitly** based on your stream definitions, their queries, and subqueries. You don't need to understand or manage buckets directly — the PowerSync Service handles this automatically. For example, if you define a stream like:

```yaml
streams:
  user_lists:
    auto_subscribe: true
    query: SELECT * FROM lists WHERE owner_id = auth.user_id()
```

PowerSync automatically creates the appropriate buckets internally based on the query parameters. With legacy [Sync Rules](/sync/rules/overview), you explicitly define the buckets using `bucket_definitions` and specify which [parameters](/sync/rules/overview#parameters) are used for each bucket.

### How Buckets Work

To understand how buckets enable efficient syncing, consider this example: let's say you have data scoped to users — the to-do lists for each user. Based on the data that exists in your source database, PowerSync will create individual buckets for each user. If users with IDs `1`, `2`, and `3` exist in your source database, PowerSync will create buckets with IDs `user_todo_lists["1"]`, `user_todo_lists["2"]`, and `user_todo_lists["3"]`.
When a user with `user_id=1` in their JWT connects to the PowerSync Service, PowerSync can very efficiently look up the appropriate bucket to sync, i.e. `user_todo_lists["1"]`.

With legacy Sync Rules, a bucket ID is formed from the bucket definition name and its parameter values, for example `user_todo_lists["1"]`. With Sync Streams, bucket IDs are generated automatically based on your stream queries — you don't need to define and name buckets explicitly.

### Deduplication for Scalability

The bucket system also allows for high scalability because it *deduplicates* data that is shared between different users. For example, let's pretend that instead of `user_todo_lists`, we have `org_todo_lists` buckets, each containing the to-do lists for an *organization*, and that we use an `organization_id` parameter from the JWT for this bucket. Now let's pretend that users with IDs `1` and `2` both belong to an organization with an ID of `1`. In this scenario, both users `1` and `2` will sync from a bucket with a bucket ID of `org_todo_lists["1"]`. This also means that the PowerSync Service has to keep track of less state per user — and therefore, server-side resource requirements don't scale linearly with the number of users/clients.

## Operation History

Each bucket stores the *recent history* of operations on each row, not just the latest state of the row. This is another core part of the PowerSync architecture — the PowerSync Service can efficiently query the *operations* that each client needs to receive in order to be up to date. Tracking of operation history is also key to the data integrity and [consistency](/architecture/consistency) properties of PowerSync.

When a change occurs in the source database that affects a certain bucket (based on your Sync Streams, or legacy Sync Rules), that change is appended to the operation history in that bucket. Buckets are therefore treated as "append-only" data structures.
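The append-only operation history can be sketched as follows. These are illustrative in-memory structures, not the actual PowerSync Service implementation: each change is appended with an increasing `op_id`, so a reconnecting client only needs the operations after the last `op_id` it already has.

```python
# Illustrative append-only bucket, e.g. org_todo_lists["1"],
# shared by every user in the organization.
bucket = []

def append(op_type, row_id, data=None):
    # op_ids are strictly increasing; existing entries are never rewritten.
    bucket.append({"op_id": len(bucket) + 1, "op": op_type,
                   "id": row_id, "data": data})

append("PUT", "list-a", {"name": "groceries"})
append("PUT", "list-b", {"name": "chores"})
append("PUT", "list-a", {"name": "groceries (edited)"})  # update = new PUT

def ops_after(last_op_id):
    # What a client that stopped at last_op_id still needs to download.
    return [op for op in bucket if op["op_id"] > last_op_id]

# A client that already has op_id 2 downloads just the newest operation.
print(ops_after(2))  # [{'op_id': 3, 'op': 'PUT', 'id': 'list-a', ...}]
```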
That being said, to avoid an ever-growing operation history, buckets can be [compacted](/maintenance-ops/compacting-buckets) (this is done automatically on PowerSync Cloud).

## Bucket Storage

The PowerSync Service persists the bucket state in durable storage: there is a pluggable storage layer for bucket data, and MongoDB and Postgres are currently supported as *bucket storage* databases. The *bucket storage* database is separate from the connection to your *source database* (Postgres, MongoDB, MySQL or SQL Server). Our cloud-hosted offering (PowerSync Cloud) uses MongoDB Atlas as the *bucket storage* database.

Persisting the bucket state in a database is also part of how PowerSync achieves high scalability: it means that the PowerSync Service can have a low memory footprint even as you scale to very large volumes of synced data and users/clients.

## Replication From the Source Database

As mentioned above, one of the primary purposes of the PowerSync Service is replicating data from the source database, based on your Sync Streams (or legacy Sync Rules). When the PowerSync Service replicates data from the source database, it:

1. Pre-processes the data according to your [Sync Streams](/sync/streams/overview) (or [Sync Rules](/sync/rules/overview)), splitting data into *buckets* (as explained above) and transforming the data if required.
2. Persists each operation into the relevant buckets, ready to be streamed to clients.

### Initial Replication vs. Incremental Replication

Whenever a new version of Sync Streams (or legacy Sync Rules) is deployed, initial replication takes place by taking a snapshot of all tables/collections they reference. After that, data is incrementally replicated using a change data capture stream (the specific mechanism depends on the source database type: Postgres logical replication, MongoDB change streams, the MySQL binlog, or SQL Server Change Data Capture).
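The bucket-splitting step of replication can be sketched as follows. This is a hypothetical sketch, not the Service's implementation: it mirrors a stream like `SELECT * FROM lists WHERE owner_id = auth.user_id()`, routing each replicated row into a bucket keyed by its `owner_id` and appending a `PUT` operation; the real Service persists these operations to bucket storage.

```python
from collections import defaultdict

# bucket_id -> operation history (in-memory stand-in for bucket storage)
buckets = defaultdict(list)

def replicate(row):
    # Route the row into a bucket keyed by the stream parameter (owner_id),
    # mirroring how bucket IDs like user_lists["1"] are scoped.
    bucket_id = f'user_lists["{row["owner_id"]}"]'
    buckets[bucket_id].append({"op": "PUT", "id": row["id"], "data": row})

# Changes arriving from the source database's CDC stream:
replicate({"id": "l1", "owner_id": "1", "name": "groceries"})
replicate({"id": "l2", "owner_id": "2", "name": "chores"})
replicate({"id": "l3", "owner_id": "1", "name": "errands"})

print(sorted(buckets))                   # ['user_lists["1"]', 'user_lists["2"]']
print(len(buckets['user_lists["1"]']))   # 2
```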
## Streaming Sync

As mentioned above, the other primary purpose of the PowerSync Service is streaming data to clients. The PowerSync Service authenticates clients/users using [JWTs](/configuration/auth/overview). Once a client/user is authenticated:

1. The PowerSync Service calculates the list of buckets for the user to sync based on their Sync Stream subscriptions (or [Parameter Queries](/sync/rules/parameter-queries) in legacy Sync Rules).
2. The Service streams any operations added to those buckets since the last time the client/user connected.

The Service then continuously monitors for buckets that are added or removed, as well as for new operations within those buckets, and streams those changes. Only the internal *bucket storage* of the PowerSync Service is used — the source database is not queried directly during streaming. For more details on exactly how streaming sync works, see [PowerSync Protocol](/architecture/powersync-protocol#protocol).

## Source Code Repo

The repo for the PowerSync Service can be found at [powersync-ja/powersync-service](https://github.com/powersync-ja/powersync-service).

# Attachments / Files

Source: https://docs.powersync.com/client-sdks/advanced/attachments

Keep files out of your database and handle attachments in an entirely storage-agnostic way. PowerSync syncs minimal metadata while an offline-first queue automatically handles uploads, downloads, and retries.

## Introduction

The `@powersync/attachments` package (JavaScript/TypeScript) and `powersync_attachments_helper` package (Flutter/Dart) are deprecated. Attachment functionality is now built into the PowerSync SDKs. Please use the [built-in attachment helpers](#sdk-%26-demo-reference) instead, and see the [migration notes](#migrating-from-deprecated-packages).

While PowerSync excels at syncing structured data, storing large files (images, videos, PDFs) directly in SQLite is not recommended. Embedding files as base64-encoded data or binary blobs in database rows can lead to many issues.
Instead, PowerSync uses a **metadata + storage provider pattern**: sync small metadata records through PowerSync while storing actual files in purpose-built storage systems (S3, Supabase Storage, Cloudflare R2, etc.). This approach provides:

* **Optimal performance** - Database stays small and fast
* **Automatic queue management** - Background uploads/downloads with retry logic
* **Offline-first support** - Local files available immediately, sync happens in background
* **Cache management** - Automatic cleanup of unused files
* **Platform flexibility** - Works across web, mobile, and desktop

## SDK & Demo Reference

We provide attachment helpers for multiple platforms:

| SDK | Package | Min. SDK version | Demo App |
| --- | --- | --- | --- |
| **JavaScript/TypeScript** | [Built-in attachments (alpha)](https://github.com/powersync-ja/powersync-js/tree/main/packages/common/src/attachments) | Web v1.33.0, React Native v1.30.0, Node.js v0.17.0 | [React Native Todo](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-todolist) · [React Web Todo](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-supabase-todolist) |
| **Flutter** | [Built-in attachments (alpha)](https://pub.dev/documentation/powersync_core/latest/topics/attachments-topic.html) | v1.16.0 | [Flutter Todo](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist) |
| **Swift** | [Built-in attachments (alpha)](https://github.com/powersync-ja/powersync-swift/blob/main/Sources/PowerSync/attachments/README.md) | v1.0.0 | [iOS Demo](https://github.com/powersync-ja/powersync-swift/tree/main/Demo) |
| **Kotlin** | [Built-in attachments (alpha)](https://github.com/powersync-ja/powersync-kotlin/tree/main/common/src/commonMain/kotlin/com/powersync/attachments) | v1.0.0 | [Android Todo](https://github.com/powersync-ja/powersync-kotlin/tree/main/demos/android-supabase-todolist) |

Most demo applications use Supabase Storage as the storage provider, but the patterns are adaptable to any storage system.

## How It Works

*PowerSync attachments flow & architecture (diagram)*

### Workflow

1. **Save file** - Your app calls `saveFile()` with file data and an `updateHook` to handle linking the attachment to your data model
2. **Queue for upload** - File is saved locally and a record is created in the attachments table with state `QUEUED_UPLOAD`
3. **Background upload** - The attachment queue automatically uploads the file to remote storage (S3/Supabase/etc.)
4. **Remote storage** - File is stored in remote storage with the attachment ID
5. **State update** - The `updateHook` runs, updating your data model with the attachment ID and marking the file locally as `SYNCED`
6. **Cross-device sync** - PowerSync syncs the data model changes to other clients
7. **Data model updated** - Other clients receive the updated data model with the new attachment reference (e.g., `user.photo_id = "id-123"`)
8. **Watch detects attachment** - Other clients' `watchAttachments()` callback detects the new attachment reference and creates a record in the attachments table with state `QUEUED_DOWNLOAD`
9. **File download** - The attachment queue automatically downloads the file from remote storage
10. **Local storage** - File is saved to local storage on the other client
11.
**State update** - File is marked locally as `SYNCED` and ready for use

### Attachment States

| State | Description |
| --- | --- |
| `QUEUED_UPLOAD` | File saved locally, waiting to upload to remote storage |
| `QUEUED_DOWNLOAD` | Data model synced from another device, file needs to be downloaded |
| `SYNCED` | File exists both locally and in remote storage, fully synchronized |
| `QUEUED_DELETE` | Marked for deletion from both local and remote storage |
| `ARCHIVED` | No longer referenced in your data model, candidate for cleanup |

## Core Components

### Attachment Table

The **Attachment Table** is a local-only table that stores metadata about each file. It's not synced through PowerSync's sync rules - instead, it's managed entirely by the attachment queue on each device.

**Metadata stored:**

* `id` - Unique attachment identifier (UUID)
* `filename` - File name with extension (e.g., `photo-123.jpg`)
* `localUri` - Path to the file in local storage
* `size` - File size in bytes
* `mediaType` - MIME type (e.g., `image/jpeg`)
* `state` - Current sync state (see states above)
* `hasSynced` - Boolean indicating if the file has ever been uploaded
* `timestamp` - Last update time
* `metaData` - Optional JSON string for custom data

**Key characteristics:**

* **Local-only** - Each device maintains its own attachment table
* **Automatic management** - The queue handles all inserts/updates
* **Cross-client coordination** - Your data model (e.g., `users.photo_id`) tells each client which files it needs

### Remote Storage Adapter

The **Remote Storage Adapter** is an interface you implement to connect PowerSync with your cloud storage provider. It's completely platform-agnostic - implementations can use S3, Supabase Storage, Cloudflare R2, Azure Blob, or even IPFS.
**Interface methods:**

* `uploadFile(fileData, attachment)` - Upload a file to cloud storage
* `downloadFile(attachment)` - Download a file from cloud storage
* `deleteFile(attachment)` - Delete a file from cloud storage

**Common pattern:** For security reasons, client-side implementations should use **signed URLs**:

1. Request a signed upload/download URL from your backend
2. Your backend validates permissions and generates a temporary URL
3. The client uploads/downloads directly to storage using the signed URL
4. Never expose storage credentials to clients

### Local Storage Adapter

The **Local Storage Adapter** handles file persistence on the device. PowerSync provides implementations for common platforms and allows you to create custom adapters.

**Interface methods:**

* `initialize()` - Set up storage (create directories, etc.)
* `saveFile(path, data)` - Write a file to storage
* `readFile(path)` - Read a file from storage
* `deleteFile(path)` - Remove a file from storage
* `fileExists(path)` - Check if a file exists
* `getLocalUri(filename)` - Get the full path for a filename

**Built-in adapters:**

* **IndexedDB** - For web browsers (`IndexDBFileSystemStorageAdapter`)
* **Node.js Filesystem** - For Node/Electron (`NodeFileSystemAdapter`)
* **React Native** - For React Native (with Expo or bare React Native) we have a dedicated package: [`@powersync/attachments-storage-react-native`](https://github.com/powersync-ja/powersync-js/tree/main/packages/attachments-storage-react-native)
* **Native mobile storage** - For Flutter, Kotlin, Swift

The React Native local storage adapter requires Expo 54 or later.

### Attachment Queue

The **Attachment Queue** is the orchestrator that manages the entire attachment lifecycle.
It:

* **Watches your data model** - You pass a `watchAttachments` function as a parameter that monitors which files your app references
* **Manages state transitions** - Automatically moves files through states (upload/download → synced → archive → delete)
* **Handles retries** - Failed operations are retried on the next sync interval
* **Performs cleanup** - Removes archived files that are no longer needed
* **Verifies integrity** - Checks local files exist and repairs inconsistencies

**Watched Attachments pattern:**

The queue needs to know which attachments exist in your data model. The `watchAttachments` function you provide monitors your data model and returns a list of attachment IDs that your app references. The queue compares this list with its internal attachment table to determine:

* **New attachments** - Download them
* **Missing attachments** - Upload them
* **Removed attachments** - Archive them

The `watchAttachments` queries are reactive and execute whenever the watched tables change, keeping the attachment queue synchronized with your data model. There are a few scenarios you might encounter:

**Single Attachment Type**

For a single attachment type, you watch one table. For example, if users have profile photos:

```sql theme={null}
SELECT photo_id FROM users WHERE photo_id IS NOT NULL
```

**Multiple Attachment Types - Single Queue**

You can watch multiple attachment types using a single queue by combining queries with SQL `UNION` or `UNION ALL`. This allows you to monitor attachments across different tables (e.g., `users.photo_id`, `documents.document_id`, `videos.video_id`) in one queue. Each attachment type may have different file extensions, which can be handled in the query by selecting the extension from your data model or using type-specific defaults.
For example:

```sql theme={null}
SELECT photo_id as id, photo_file_extension as file_extension FROM users WHERE photo_id IS NOT NULL
UNION ALL
SELECT document_id as id, document_file_extension as file_extension FROM documents WHERE document_id IS NOT NULL
UNION ALL
SELECT video_id as id, video_file_extension as file_extension FROM videos WHERE video_id IS NOT NULL
```

Use `UNION ALL` when you want to include all rows (including duplicates), or `UNION` when you want to automatically deduplicate results. For attachment watching, `UNION ALL` is typically preferred since attachment IDs should already be unique. The UNION query executes whenever any of the watched tables change, which may have higher database overhead compared to watching a single table. Implementation examples are shown in the [Initialize Attachment Queue](#initialize-attachment-queue) section below.

**Multiple Attachment Types - Multiple Queues**

Alternatively, you can create separate queues for different attachment types. Each queue watches its own specific table(s) with simpler queries, allowing for independent configuration and management. Multiple queues may use more memory, but each queue runs a simpler query. Implementation examples are shown in the [Initialize Attachment Queue](#initialize-attachment-queue) section below.

## Implementation Guide

### Installation

```bash JavaScript/TypeScript theme={null}
# Attachment support is included with the @powersync/web, @powersync/node and @powersync/react-native packages.
# For the React Native storage adapters, additionally install:
npm install @powersync/attachments-storage-react-native
```

```bash Flutter theme={null}
# Included with the Flutter SDK - see the SDK installation guide.
```

```swift Swift theme={null}
// Included with the Swift SDK - see the SDK installation guide.
```

```kotlin Kotlin theme={null}
// Included with the Kotlin SDK - see the SDK installation guide.
```

### Setup: Add Attachment Table to Schema

```typescript JavaScript/TypeScript theme={null}
import { Schema, Table, column, AttachmentTable } from '@powersync/web';

const appSchema = new Schema({
  users: new Table({
    name: column.text,
    email: column.text,
    photo_id: column.text // References attachment ID
  }),
  // Add the attachment table
  attachments: new AttachmentTable()
});
```

```dart Flutter theme={null}
import 'package:powersync/powersync.dart';
import 'package:powersync_core/attachments/attachments.dart';

final schema = Schema([
  Table('users', [
    Column.text('name'),
    Column.text('email'),
    Column.text('photo_id'), // References attachment ID
  ]),
  AttachmentsQueueTable(),
]);
```

```swift Swift theme={null}
import PowerSync

let users = Table(
  name: "users",
  columns: [
    Column.text("name"),
    Column.text("email"),
    Column.text("photo_id"), // References attachment ID
  ]
)

let schema = Schema(
  tables: [
    users,
    // Add the local-only table which stores attachment states
    createAttachmentTable(name: "attachments")
  ]
)
```

```kotlin Kotlin theme={null}
import com.powersync.attachments.createAttachmentsTable
import com.powersync.db.schema.Column
import com.powersync.db.schema.Schema
import com.powersync.db.schema.Table

val users = Table(
  name = "users",
  columns = listOf(
    Column.text("name"),
    Column.text("email"),
    Column.text("photo_id") // References attachment ID
  )
)

val schema = Schema(
  users,
  // Add the local-only table which stores attachment states
  createAttachmentsTable("attachments")
)
```

### Configure Storage Adapters

```typescript JavaScript/TypeScript theme={null}
// For web browsers (IndexedDB)
import { IndexDBFileSystemStorageAdapter } from '@powersync/web';

const localStorage = new
IndexDBFileSystemStorageAdapter('my-app-files');

// For Node.js/Electron (filesystem)
// import { NodeFileSystemAdapter } from '@powersync/node';
// const localStorage = new NodeFileSystemAdapter('./user-attachments');

// For React Native (Expo or bare React Native)
// Need to install @powersync/attachments-storage-react-native
//
// For Expo projects, also install expo-file-system
// import { ExpoFileSystemStorageAdapter } from '@powersync/attachments-storage-react-native';
// const localStorage = new ExpoFileSystemStorageAdapter();
//
// For bare React Native, also install @dr.pogodin/react-native-fs
// import { ReactNativeFileSystemStorageAdapter } from '@powersync/attachments-storage-react-native';
// const localStorage = new ReactNativeFileSystemStorageAdapter();

// Remote storage adapter (example with signed URLs)
const remoteStorage = {
  async uploadFile(fileData: ArrayBuffer, attachment: AttachmentRecord) {
    // Request signed upload URL from your backend
    const { uploadUrl } = await fetch('/api/attachments/upload-url', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        filename: attachment.filename,
        contentType: attachment.mediaType
      })
    }).then(r => r.json());

    // Upload to cloud storage using signed URL
    await fetch(uploadUrl, {
      method: 'PUT',
      body: fileData,
      headers: {
        'Content-Type': attachment.mediaType || 'application/octet-stream'
      }
    });
  },

  async downloadFile(attachment: AttachmentRecord): Promise<ArrayBuffer> {
    // Request signed download URL from your backend
    const { downloadUrl } = await fetch(
      `/api/attachments/${attachment.id}/download-url`
    ).then(r => r.json());

    // Download from cloud storage
    const response = await fetch(downloadUrl);
    return response.arrayBuffer();
  },

  async deleteFile(attachment: AttachmentRecord) {
    // Delete via your backend
    await fetch(`/api/attachments/${attachment.id}`, { method: 'DELETE' });
  }
};
```

```dart Flutter theme={null}
import 'dart:io';
import 'dart:typed_data';
import
'package:path_provider/path_provider.dart';
import 'dart:convert';
import 'package:powersync_core/attachments/attachments.dart';
import 'package:powersync_core/attachments/io.dart';
import 'package:http/http.dart' as http;

// For Flutter (native platforms)
Future<IOLocalStorage> getLocalStorage() async {
  final appDocDir = await getApplicationDocumentsDirectory();
  final attachmentsDir = Directory('${appDocDir.path}/attachments');
  return IOLocalStorage(attachmentsDir);
}

// Remote storage adapter (example with signed URLs)
class SignedUrlStorageAdapter implements RemoteStorage {
  @override
  Future<void> uploadFile(
    Stream<List<int>> fileData,
    Attachment attachment,
  ) async {
    // Request signed upload URL from your backend
    final response = await http.post(
      Uri.parse('/api/attachments/upload-url'),
      headers: {'Content-Type': 'application/json'},
      body: jsonEncode({
        'filename': attachment.filename,
        'contentType': attachment.mediaType,
      }),
    );
    final uploadUrl = jsonDecode(response.body)['uploadUrl'] as String;

    // Collect stream data
    final bytes = <int>[];
    await for (final chunk in fileData) {
      bytes.addAll(chunk);
    }

    // Upload to cloud storage using signed URL
    await http.put(
      Uri.parse(uploadUrl),
      body: Uint8List.fromList(bytes),
      headers: {
        'Content-Type': attachment.mediaType ?? 'application/octet-stream',
      },
    );
  }

  @override
  Future<Stream<List<int>>> downloadFile(Attachment attachment) async {
    // Request signed download URL from your backend
    final response = await http.get(
      Uri.parse('/api/attachments/${attachment.id}/download-url'),
    );
    final downloadUrl = jsonDecode(response.body)['downloadUrl'] as String;

    // Download from cloud storage
    final httpResponse = await http.get(Uri.parse(downloadUrl));
    return Stream.value(httpResponse.bodyBytes);
  }

  @override
  Future<void> deleteFile(Attachment attachment) async {
    // Delete via your backend
    await http.delete(
      Uri.parse('/api/attachments/${attachment.id}'),
    );
  }
}
```

```swift Swift theme={null}
import Foundation
import PowerSync

// For iOS/macOS (FileManager)
func getAttachmentsDirectoryPath() throws -> String {
  guard let documentsURL = FileManager.default.urls(
    for: .documentDirectory,
    in: .userDomainMask
  ).first else {
    throw PowerSyncAttachmentError.attachmentError("Could not determine attachments directory path")
  }
  return documentsURL.appendingPathComponent("attachments").path
}

let localStorage = FileManagerStorageAdapter()

// Remote storage adapter (example with signed URLs)
class SignedUrlStorageAdapter: RemoteStorageAdapter {
  func uploadFile(fileData: Data, attachment: Attachment) async throws {
    // Request signed upload URL from your backend
    struct UploadUrlResponse: Codable { let uploadUrl: String }

    let requestBody = [
      "filename": attachment.filename,
      "contentType": attachment.mediaType ?? "application/octet-stream"
    ]
    var request = URLRequest(url: URL(string: "/api/attachments/upload-url")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONSerialization.data(withJSONObject: requestBody)

    let (data, _) = try await URLSession.shared.data(for: request)
    let response = try JSONDecoder().decode(UploadUrlResponse.self, from: data)

    // Upload to cloud storage using signed URL
    var uploadRequest = URLRequest(url: URL(string: response.uploadUrl)!)
    uploadRequest.httpMethod = "PUT"
    uploadRequest.setValue(attachment.mediaType ?? "application/octet-stream", forHTTPHeaderField: "Content-Type")
    uploadRequest.httpBody = fileData

    let (_, uploadResponse) = try await URLSession.shared.data(for: uploadRequest)
    guard let httpResponse = uploadResponse as? HTTPURLResponse,
          (200...299).contains(httpResponse.statusCode) else {
      throw PowerSyncAttachmentError.generalError("Upload failed")
    }
  }

  func downloadFile(attachment: Attachment) async throws -> Data {
    // Request signed download URL from your backend
    struct DownloadUrlResponse: Codable { let downloadUrl: String }

    let request = URLRequest(url: URL(string: "/api/attachments/\(attachment.id)/download-url")!)
    let (data, _) = try await URLSession.shared.data(for: request)
    let response = try JSONDecoder().decode(DownloadUrlResponse.self, from: data)

    // Download from cloud storage
    let downloadRequest = URLRequest(url: URL(string: response.downloadUrl)!)
    let (fileData, _) = try await URLSession.shared.data(for: downloadRequest)
    return fileData
  }

  func deleteFile(attachment: Attachment) async throws {
    // Delete via your backend
    var request = URLRequest(url: URL(string: "/api/attachments/\(attachment.id)")!)
    request.httpMethod = "DELETE"
    let (_, response) = try await URLSession.shared.data(for: request)
    guard let httpResponse = response as? HTTPURLResponse,
          (200...299).contains(httpResponse.statusCode) else {
      throw PowerSyncAttachmentError.generalError("Delete failed")
    }
  }
}

let remoteStorage = SignedUrlStorageAdapter()
```

```kotlin Kotlin theme={null}
import com.powersync.attachments.LocalStorage
import com.powersync.attachments.RemoteStorage
import com.powersync.attachments.Attachment
import com.powersync.attachments.storage.IOLocalStorageAdapter
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flowOf
import kotlinx.io.files.Path

// For local storage (uses IOLocalStorageAdapter by default)
// On Android: "${applicationContext.filesDir.canonicalPath}/attachments"
val attachmentsDirectory = Path("attachments").toString()
val localStorage: LocalStorage = IOLocalStorageAdapter()

// Remote storage adapter (example with signed URLs)
val remoteStorage = object : RemoteStorage {
  override suspend fun uploadFile(
    fileData: Flow<ByteArray>,
    attachment: Attachment
  ) {
    // Request signed upload URL from your backend
    val uploadUrl: String = TODO("fetch from your API")

    // Upload to cloud storage using signed URL
    // Collect the flow and upload
    val bytes = mutableListOf<ByteArray>()
    fileData.collect { bytes.add(it) }
    val allBytes = bytes.flatMap { it.toList() }.toByteArray()
    // Upload allBytes to uploadUrl
    // ... your HTTP upload implementation
  }

  override suspend fun downloadFile(attachment: Attachment): Flow<ByteArray> {
    // Request signed download URL from your backend
    val downloadUrl: String = TODO("fetch from your API")

    // Download from cloud storage
    val response: ByteArray = TODO("your HTTP download implementation")
    return flowOf(response) // or convert your ByteArray to Flow
  }

  override suspend fun deleteFile(attachment: Attachment) {
    // Delete via your backend
    // ... your HTTP delete implementation
  }
}
```

**Security Best Practice:** Always use your backend to generate signed URLs and validate permissions. Never expose storage credentials directly to clients.
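The signed-URL flow described above hinges on a secret that only the backend holds: the client receives a URL that encodes *which* file it may touch and *until when*, and the backend (or storage provider) can verify that grant without a database lookup. The following is a minimal, self-contained sketch of that idea using a plain HMAC over the attachment ID and expiry. The `mintSignedUrl`/`verifySignedUrl` names and the `/files/...` route are illustrative only - they are not part of PowerSync, and a production backend would typically call its storage provider's presigning API (e.g. S3 presigned URLs) instead:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Backend-only secret; never shipped to clients. Placeholder value for illustration.
const SECRET = 'backend-only-secret';

// Mint a URL granting access to one attachment until the expiry timestamp.
function mintSignedUrl(attachmentId: string, ttlSeconds: number, now = Date.now()): string {
  const expiresAt = now + ttlSeconds * 1000;
  const sig = createHmac('sha256', SECRET)
    .update(`${attachmentId}:${expiresAt}`)
    .digest('hex');
  return `/files/${attachmentId}?expires=${expiresAt}&sig=${sig}`;
}

// Verify signature and expiry before serving (or accepting) the file.
function verifySignedUrl(attachmentId: string, expiresAt: number, sig: string, now = Date.now()): boolean {
  if (now > expiresAt) return false; // grant has expired
  const expected = createHmac('sha256', SECRET)
    .update(`${attachmentId}:${expiresAt}`)
    .digest('hex');
  const a = Buffer.from(sig, 'hex');
  const b = Buffer.from(expected, 'hex');
  // Constant-time comparison avoids leaking signature bytes via timing
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Because the expiry is part of the signed payload, a client cannot extend its own grant by editing the `expires` query parameter - any change invalidates the signature.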
### Initialize Attachment Queue

```typescript JavaScript/TypeScript theme={null}
import { AttachmentQueue } from '@powersync/web';

const attachmentQueue = new AttachmentQueue({
  db: db, // PowerSync database instance
  localStorage,
  remoteStorage,

  // Define which attachments exist in your data model
  watchAttachments: (onUpdate) => {
    db.watch(
      `SELECT photo_id FROM users WHERE photo_id IS NOT NULL`,
      [],
      {
        onResult: async (result) => {
          const attachments = result.rows?._array.map(row => ({
            id: row.photo_id,
            fileExtension: 'jpg'
          })) ?? [];
          await onUpdate(attachments);
        }
      }
    );
  },

  // Optional configuration
  syncIntervalMs: 30000, // Sync every 30 seconds
  downloadAttachments: true, // Auto-download referenced files
  archivedCacheLimit: 100 // Keep 100 archived files before cleanup
});

// Start the sync process
await attachmentQueue.startSync();
```

```dart Flutter theme={null}
import 'package:logging/logging.dart';
import 'package:powersync/powersync.dart';
import 'package:powersync_core/attachments/attachments.dart';

final logger = Logger('AttachmentQueue');
late AttachmentQueue attachmentQueue;

Future<void> initializeAttachmentQueue(PowerSyncDatabase db) async {
  attachmentQueue = AttachmentQueue(
    db: db,
    remoteStorage: SignedUrlStorageAdapter(),
    localStorage: await getLocalStorage(),

    // Define which attachments exist in your data model
    watchAttachments: () => db.watch('''
      SELECT photo_id as id FROM users WHERE photo_id IS NOT NULL
    ''').map(
      (results) => [
        for (final row in results)
          WatchedAttachmentItem(
            id: row['id'] as String,
            fileExtension: 'jpg',
          )
      ],
    ),

    // Optional configuration
    syncInterval: const Duration(seconds: 30), // Sync every 30 seconds
    downloadAttachments: true, // Auto-download referenced files
    archivedCacheLimit: 100, // Keep 100 archived files before cleanup
    logger: logger,
  );

  // Start the sync process
  await attachmentQueue.startSync();
}
```

```swift Swift theme={null}
let attachmentQueue = AttachmentQueue(
  db: db, // PowerSync database instance
  remoteStorage:
remoteStorage,
  attachmentsDirectory: try getAttachmentsDirectoryPath(),

  // Define which attachments exist in your data model
  watchAttachments: {
    try db.watch(
      sql: """
        SELECT photo_id FROM users WHERE photo_id IS NOT NULL
      """,
      parameters: [],
      mapper: { cursor in
        try WatchedAttachmentItem(
          id: cursor.getString(name: "photo_id"),
          fileExtension: "jpg"
        )
      }
    )
  },

  // Optional configuration
  syncInterval: 30.0, // Sync every 30 seconds
  downloadAttachments: true, // Auto-download referenced files
  archivedCacheLimit: 100 // Keep 100 archived files before cleanup
)

// Start the sync process
try await attachmentQueue.startSync()
```

```kotlin Kotlin theme={null}
import com.powersync.attachments.AttachmentQueue
import com.powersync.attachments.WatchedAttachmentItem
import com.powersync.db.getString
import kotlinx.coroutines.flow.Flow
import kotlin.time.Duration.Companion.seconds

val attachmentQueue = AttachmentQueue(
  db = db, // PowerSync database instance
  remoteStorage = remoteStorage,
  attachmentsDirectory = attachmentsDirectory,
  localStorage = localStorage, // Optional, defaults to IOLocalStorageAdapter()

  // Define which attachments exist in your data model
  watchAttachments = {
    db.watch(
      sql = """
        SELECT photo_id FROM users WHERE photo_id IS NOT NULL
      """,
      parameters = null
    ) { cursor ->
      WatchedAttachmentItem(
        id = cursor.getString("photo_id"),
        fileExtension = "jpg"
      )
    }
  },

  // Optional configuration
  syncInterval = 30.seconds, // Sync every 30 seconds
  downloadAttachments = true, // Auto-download referenced files
  archivedCacheLimit = 100 // Keep 100 archived files before cleanup
)

// Start the sync process
attachmentQueue.startSync()
```

The `watchAttachments` callback is crucial - it tells the queue which files your app needs based on your data model. The queue uses this to automatically download, upload, or archive files.

#### Watching Multiple Attachment Types

When watching multiple attachment types, you need to provide the `fileExtension` for each attachment.
You can store this in your data model tables or derive it from other fields. Here are examples of both multi-type patterns (the single-attachment case above being Pattern 1):

**Pattern 2: Single Queue with UNION**

```typescript JavaScript/TypeScript theme={null}
// Example: Watching users.photo_id, documents.document_id, and videos.video_id
// Assuming your tables store file extensions
const attachmentQueue = new AttachmentQueue({
  db: db,
  localStorage,
  remoteStorage,
  watchAttachments: (onUpdate) => {
    db.watch(
      `SELECT photo_id as id, photo_file_extension as file_extension FROM users WHERE photo_id IS NOT NULL
       UNION ALL
       SELECT document_id as id, document_file_extension as file_extension FROM documents WHERE document_id IS NOT NULL
       UNION ALL
       SELECT video_id as id, video_file_extension as file_extension FROM videos WHERE video_id IS NOT NULL`,
      [],
      {
        onResult: async (result) => {
          const attachments = result.rows?._array.map(row => ({
            id: row.id,
            fileExtension: row.file_extension
          })) ?? [];
          await onUpdate(attachments);
        }
      }
    );
  },
  // ... other options
});

await attachmentQueue.startSync();
```

```dart Flutter theme={null}
// Example: Watching users.photo_id, documents.document_id, and videos.video_id
// Assuming your tables store file extensions
attachmentQueue = AttachmentQueue(
  db: db,
  remoteStorage: SignedUrlStorageAdapter(),
  localStorage: await getLocalStorage(),
  watchAttachments: () => db.watch('''
    SELECT photo_id as id, photo_file_extension as file_extension FROM users WHERE photo_id IS NOT NULL
    UNION ALL
    SELECT document_id as id, document_file_extension as file_extension FROM documents WHERE document_id IS NOT NULL
    UNION ALL
    SELECT video_id as id, video_file_extension as file_extension FROM videos WHERE video_id IS NOT NULL
  ''').map(
    (results) => [
      for (final row in results)
        WatchedAttachmentItem(
          id: row['id'] as String,
          fileExtension: row['file_extension'] as String,
        )
    ],
  ),
  // ... other options
);

await attachmentQueue.startSync();
```

```swift Swift theme={null}
// Example: Watching users.photo_id, documents.document_id, and videos.video_id
// Assuming your tables store file extensions
let attachmentQueue = AttachmentQueue(
  db: db,
  remoteStorage: remoteStorage,
  attachmentsDirectory: try getAttachmentsDirectoryPath(),
  watchAttachments: {
    try db.watch(
      sql: """
        SELECT photo_id as id, photo_file_extension as file_extension FROM users WHERE photo_id IS NOT NULL
        UNION ALL
        SELECT document_id as id, document_file_extension as file_extension FROM documents WHERE document_id IS NOT NULL
        UNION ALL
        SELECT video_id as id, video_file_extension as file_extension FROM videos WHERE video_id IS NOT NULL
      """,
      parameters: [],
      mapper: { cursor in
        try WatchedAttachmentItem(
          id: cursor.getString(name: "id"),
          fileExtension: cursor.getString(name: "file_extension")
        )
      }
    )
  },
  // ... other options
)

try await attachmentQueue.startSync()
```

```kotlin Kotlin theme={null}
// Example: Watching users.photo_id, documents.document_id, and videos.video_id
// Assuming your tables store file extensions
val attachmentQueue = AttachmentQueue(
  db = db,
  remoteStorage = remoteStorage,
  attachmentsDirectory = attachmentsDirectory,
  localStorage = localStorage,
  watchAttachments = {
    db.watch(
      sql = """
        SELECT photo_id as id, photo_file_extension as file_extension FROM users WHERE photo_id IS NOT NULL
        UNION ALL
        SELECT document_id as id, document_file_extension as file_extension FROM documents WHERE document_id IS NOT NULL
        UNION ALL
        SELECT video_id as id, video_file_extension as file_extension FROM videos WHERE video_id IS NOT NULL
      """,
      parameters = null
    ) { cursor ->
      WatchedAttachmentItem(
        id = cursor.getString("id"),
        fileExtension = cursor.getString("file_extension")
      )
    }
  },
  // ... other options
)

attachmentQueue.startSync()
```

**Pattern 3: Multiple Queues**

```typescript JavaScript/TypeScript theme={null}
// Create separate queues for different attachment types
const photoQueue = new AttachmentQueue({
  db: db,
  localStorage,
  remoteStorage,
  watchAttachments: (onUpdate) => {
    db.watch(
      `SELECT photo_id FROM users WHERE photo_id IS NOT NULL`,
      [],
      {
        onResult: async (result) => {
          const attachments = result.rows?._array.map(row => ({
            id: row.photo_id,
            fileExtension: 'jpg'
          })) ?? [];
          await onUpdate(attachments);
        }
      }
    );
  },
});

const documentQueue = new AttachmentQueue({
  db: db,
  localStorage,
  remoteStorage,
  watchAttachments: (onUpdate) => {
    db.watch(
      `SELECT document_id FROM documents WHERE document_id IS NOT NULL`,
      [],
      {
        onResult: async (result) => {
          const attachments = result.rows?._array.map(row => ({
            id: row.document_id,
            fileExtension: 'pdf'
          })) ?? [];
          await onUpdate(attachments);
        }
      }
    );
  },
});

await Promise.all([
  photoQueue.startSync(),
  documentQueue.startSync()
]);
```

```dart Flutter theme={null}
// Create separate queues for different attachment types
final photoQueue = AttachmentQueue(
  db: db,
  remoteStorage: SignedUrlStorageAdapter(),
  localStorage: await getLocalStorage(),
  watchAttachments: () => db.watch('''
    SELECT photo_id as id FROM users WHERE photo_id IS NOT NULL
  ''').map(
    (results) => [
      for (final row in results)
        WatchedAttachmentItem(
          id: row['id'] as String,
          fileExtension: 'jpg',
        )
    ],
  ),
);

final documentQueue = AttachmentQueue(
  db: db,
  remoteStorage: SignedUrlStorageAdapter(),
  localStorage: await getLocalStorage(),
  watchAttachments: () => db.watch('''
    SELECT document_id as id FROM documents WHERE document_id IS NOT NULL
  ''').map(
    (results) => [
      for (final row in results)
        WatchedAttachmentItem(
          id: row['id'] as String,
          fileExtension: 'pdf',
        )
    ],
  ),
);

await Future.wait([
  photoQueue.startSync(),
  documentQueue.startSync(),
]);
```

```swift Swift theme={null}
// Create separate queues for different attachment types
let photoQueue =
AttachmentQueue(
  db: db,
  remoteStorage: remoteStorage,
  attachmentsDirectory: try getAttachmentsDirectoryPath(),
  watchAttachments: {
    try db.watch(
      sql: """
        SELECT photo_id FROM users WHERE photo_id IS NOT NULL
      """,
      parameters: [],
      mapper: { cursor in
        try WatchedAttachmentItem(
          id: cursor.getString(name: "photo_id"),
          fileExtension: "jpg"
        )
      }
    )
  }
)

let documentQueue = AttachmentQueue(
  db: db,
  remoteStorage: remoteStorage,
  attachmentsDirectory: try getAttachmentsDirectoryPath(),
  watchAttachments: {
    try db.watch(
      sql: """
        SELECT document_id FROM documents WHERE document_id IS NOT NULL
      """,
      parameters: [],
      mapper: { cursor in
        try WatchedAttachmentItem(
          id: cursor.getString(name: "document_id"),
          fileExtension: "pdf"
        )
      }
    )
  }
)

try await photoQueue.startSync()
try await documentQueue.startSync()
```

```kotlin Kotlin theme={null}
// Create separate queues for different attachment types
val photoQueue = AttachmentQueue(
  db = db,
  remoteStorage = remoteStorage,
  attachmentsDirectory = attachmentsDirectory,
  localStorage = localStorage,
  watchAttachments = {
    db.watch(
      sql = """
        SELECT photo_id FROM users WHERE photo_id IS NOT NULL
      """,
      parameters = null
    ) { cursor ->
      WatchedAttachmentItem(
        id = cursor.getString("photo_id"),
        fileExtension = "jpg"
      )
    }
  }
)

val documentQueue = AttachmentQueue(
  db = db,
  remoteStorage = remoteStorage,
  attachmentsDirectory = attachmentsDirectory,
  localStorage = localStorage,
  watchAttachments = {
    db.watch(
      sql = """
        SELECT document_id FROM documents WHERE document_id IS NOT NULL
      """,
      parameters = null
    ) { cursor ->
      WatchedAttachmentItem(
        id = cursor.getString("document_id"),
        fileExtension = "pdf"
      )
    }
  }
)

photoQueue.startSync()
documentQueue.startSync()
```

### Upload an Attachment

```typescript JavaScript/TypeScript theme={null}
async function uploadProfilePhoto(imageBlob: Blob, userId: string) {
  const arrayBuffer = await imageBlob.arrayBuffer();

  const attachment = await attachmentQueue.saveFile({
    data: arrayBuffer,
    fileExtension: 'jpg',
    mediaType:
'image/jpeg',
    // updateHook runs in same transaction, ensuring atomicity
    updateHook: async (tx, attachment) => {
      await tx.execute(
        'UPDATE users SET photo_id = ? WHERE id = ?',
        [attachment.id, userId]
      );
    }
  });

  return attachment;
}

// The queue will:
// 1. Save file locally immediately
// 2. Create attachment record with state QUEUED_UPLOAD
// 3. Update user record in same transaction
// 4. Automatically upload file in background
// 5. Update state to SYNCED when complete
```

```dart Flutter theme={null}
import 'dart:io';
import 'dart:typed_data';
import 'package:powersync_core/attachments/attachments.dart';

Future<Attachment> uploadProfilePhoto(
  File imageFile,
  String userId,
) async {
  final imageBytes = await imageFile.readAsBytes();

  final attachment = await attachmentQueue.saveFile(
    data: Stream.value(imageBytes),
    mediaType: 'image/jpeg',
    fileExtension: 'jpg',
    // updateHook runs in same transaction, ensuring atomicity
    updateHook: (context, attachment) async {
      await context.execute(
        'UPDATE users SET photo_id = ? WHERE id = ?',
        [attachment.id, userId],
      );
    },
  );

  return attachment;
}

// The queue will:
// 1. Save file locally immediately
// 2. Create attachment record with state QUEUED_UPLOAD
// 3. Update user record in same transaction
// 4. Automatically upload file in background
// 5. Update state to SYNCED when complete
```

```swift Swift theme={null}
func uploadProfilePhoto(imageData: Data, userId: String) async throws -> Attachment {
  let attachment = try await attachmentQueue.saveFile(
    data: imageData,
    mediaType: "image/jpeg",
    fileExtension: "jpg",
    // updateHook runs in same transaction, ensuring atomicity
    updateHook: { tx, attachment in
      try tx.execute(
        sql: "UPDATE users SET photo_id = ? WHERE id = ?",
        parameters: [attachment.id, userId]
      )
    }
  )
  return attachment
}

// The queue will:
// 1. Save file locally immediately
// 2. Create attachment record with state QUEUED_UPLOAD
// 3. Update user record in same transaction
// 4. Automatically upload file in background
// 5. Update state to SYNCED when complete
```

```kotlin Kotlin theme={null}
import kotlinx.coroutines.flow.flowOf

suspend fun uploadProfilePhoto(imageBytes: ByteArray, userId: String): Attachment {
  val attachment = attachmentQueue.saveFile(
    data = flowOf(imageBytes),
    mediaType = "image/jpeg",
    fileExtension = "jpg",
    // updateHook runs in same transaction, ensuring atomicity
    updateHook = { tx, attachment ->
      tx.execute(
        "UPDATE users SET photo_id = ? WHERE id = ?",
        listOf(attachment.id, userId)
      )
    }
  )
  return attachment
}

// The queue will:
// 1. Save file locally immediately
// 2. Create attachment record with state QUEUED_UPLOAD
// 3. Update user record in same transaction
// 4. Automatically upload file in background
// 5. Update state to SYNCED when complete
```

The `updateHook` parameter is the recommended way to link attachments to your data model. It runs in the same database transaction, ensuring data consistency.

### Download/Access an Attachment

```typescript JavaScript/TypeScript theme={null}
// Downloads happen automatically when watchAttachments references a file
async function getProfilePhotoUri(userId: string): Promise<string | null> {
  const user = await db.get(
    'SELECT photo_id FROM users WHERE id = ?',
    [userId]
  );

  if (!user?.photo_id) {
    return null;
  }

  const attachment = await db.get(
    'SELECT * FROM attachments WHERE id = ?',
    [user.photo_id]
  );

  if (!attachment) {
    return null;
  }

  if (attachment.state === 'SYNCED' && attachment.local_uri) {
    return attachment.local_uri;
  }

  return null;
}

// Example: Display image in React with watch query
function ProfilePhoto({ userId }: { userId: string }) {
  const [photoUri, setPhotoUri] = useState<string | null>(null);

  useEffect(() => {
    const watch = db.watch(
      `SELECT a.local_uri, a.state
       FROM users u
       LEFT JOIN attachments a ON a.id = u.photo_id
       WHERE u.id = ?`,
      [userId],
      {
        onResult: (result) => {
          const row = result.rows?._array[0];
          if (row?.state === 'SYNCED' && row?.local_uri) {
            setPhotoUri(row.local_uri);
          }
        }
      }
    );
    return () => watch.close();
  }, [userId]);

  if
(!photoUri) {
    return <div>Loading photo...</div>;
  }

  return <img src={photoUri} alt="Profile" />;
}
```

```dart Flutter theme={null}
import 'dart:io';
import 'package:flutter/material.dart';
import 'package:path_provider/path_provider.dart';
import 'package:powersync/powersync.dart';
import 'package:powersync_core/attachments/attachments.dart';

// Downloads happen automatically when watchAttachments references a file
Future<String?> getProfilePhotoUri(
  PowerSyncDatabase db,
  String userId,
) async {
  final user = await db.getOptional(
    'SELECT photo_id FROM users WHERE id = ?',
    [userId],
  );

  if (user == null || user['photo_id'] == null) {
    return null;
  }

  final attachment = await db.getOptional(
    'SELECT * FROM attachments_queue WHERE id = ?',
    [user['photo_id']],
  );

  if (attachment == null) {
    return null;
  }

  final state = AttachmentState.fromInt(attachment['state'] as int);
  final localUri = attachment['local_uri'] as String?;

  if (state == AttachmentState.synced && localUri != null) {
    // Resolve full path from local storage
    final appDocDir = await getApplicationDocumentsDirectory();
    return '${appDocDir.path}/attachments/$localUri';
  }

  return null;
}

// Example: Display image in Flutter with StreamBuilder
StreamBuilder<List<Row>>(
  stream: db.watch('''
    SELECT a.local_uri, a.state
    FROM users u
    LEFT JOIN attachments_queue a ON a.id = u.photo_id
    WHERE u.id = ?
  ''').map((results) => results.toList()),
  builder: (context, snapshot) {
    if (!snapshot.hasData || snapshot.data!.isEmpty) {
      return const CircularProgressIndicator();
    }

    final row = snapshot.data!.first;
    final state = AttachmentState.fromInt(row['state'] as int);
    final localUri = row['local_uri'] as String?;

    if (state == AttachmentState.synced && localUri != null) {
      // Load and display image
      return Image.file(File(localUri));
    }

    return const Text('Loading photo...');
  },
)
```

```swift Swift theme={null}
// Downloads happen automatically when watchAttachments references a file
func getProfilePhotoUri(userId: String) async throws -> String?
{
  guard let user = try await db.getOptional(
    sql: "SELECT photo_id FROM users WHERE id = ?",
    parameters: [userId],
    mapper: { cursor in
      try cursor.getStringOptional(name: "photo_id")
    }
  ), let photoId = user else {
    return nil
  }

  guard let attachment = try await db.getOptional(
    sql: "SELECT * FROM attachments WHERE id = ?",
    parameters: [photoId],
    mapper: { cursor in
      try Attachment.fromCursor(cursor)
    }
  ) else {
    return nil
  }

  if attachment.state == .synced, let localUri = attachment.localUri {
    return localUri
  }

  return nil
}

// Example: Display image in SwiftUI with watch query
struct ProfilePhotoView: View {
  let userId: String
  @State private var photoUri: String?

  var body: some View {
    Group {
      if let photoUri = photoUri {
        AsyncImage(url: URL(fileURLWithPath: photoUri)) { image in
          image.resizable()
        } placeholder: {
          ProgressView()
        }
      } else {
        Text("Loading photo...")
      }
    }
    .task {
      do {
        for try await results in try db.watch(
          sql: """
            SELECT a.local_uri, a.state
            FROM users u
            LEFT JOIN attachments a ON a.id = u.photo_id
            WHERE u.id = ?
          """,
          parameters: [userId],
          mapper: { cursor in
            (
              state: try AttachmentState.from(cursor.getInt(name: "state")),
              localUri: try cursor.getStringOptional(name: "local_uri")
            )
          }
        ) {
          if let first = results.first,
             first.state == .synced,
             let localUri = first.localUri {
            photoUri = localUri
          }
        }
      } catch {
        print("Error watching photo: \(error)")
      }
    }
  }
}
```

```kotlin Kotlin theme={null}
import com.powersync.attachments.AttachmentState
import com.powersync.db.getString
import com.powersync.db.getStringOptional
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.map

// Downloads happen automatically when watchAttachments references a file
suspend fun getProfilePhotoUri(userId: String): String?
{ val user = db.get( "SELECT photo_id FROM users WHERE id = ?", listOf(userId) ) { cursor -> cursor.getStringOptional("photo_id") } if (user == null) { return null } val attachment = db.get( "SELECT * FROM attachments WHERE id = ?", listOf(user) ) { cursor -> com.powersync.attachments.Attachment.fromCursor(cursor) } if (attachment == null) { return null } if (attachment.state == AttachmentState.SYNCED && attachment.localUri != null) { return attachment.localUri } return null } // Example: Watch attachment state in Compose/UI fun watchProfilePhoto(userId: String): Flow { return db.watch( sql = """ SELECT a.local_uri, a.state FROM users u LEFT JOIN attachments a ON a.id = u.photo_id WHERE u.id = ? """, parameters = listOf(userId) ) { cursor -> val state = AttachmentState.fromLong(cursor.getLong("state")) val localUri = cursor.getStringOptional("local_uri") if (state == AttachmentState.SYNCED && localUri != null) { localUri } else { null } }.map { results -> results.firstOrNull() } } ```
### Delete an Attachment ```typescript JavaScript/TypeScript theme={null} async function deleteProfilePhoto(userId: string, photoId: string) { await attachmentQueue.deleteFile({ id: photoId, // updateHook ensures atomic deletion updateHook: async (tx, attachment) => { await tx.execute( 'UPDATE users SET photo_id = NULL WHERE id = ?', [userId] ); } }); console.log('Photo queued for deletion'); // The queue will: // 1. Delete from remote storage // 2. Delete local file // 3. Remove attachment record } // Alternative: Remove reference and let queue archive it automatically async function removePhotoReference(userId: string) { await db.execute( 'UPDATE users SET photo_id = NULL WHERE id = ?', [userId] ); // The watchAttachments callback will detect this change // The queue will automatically archive the unreferenced attachment // After reaching archivedCacheLimit, it will be deleted } ``` ```dart Flutter theme={null} Future deleteProfilePhoto( String userId, String photoId, ) async { await attachmentQueue.deleteFile( attachmentId: photoId, // updateHook ensures atomic deletion updateHook: (context, attachment) async { await context.execute( 'UPDATE users SET photo_id = NULL WHERE id = ?', [userId], ); }, ); print('Photo queued for deletion'); // The queue will: // 1. Delete from remote storage // 2. Delete local file // 3. 
Remove attachment record } // Alternative: Remove reference and let queue archive it automatically Future removePhotoReference( PowerSyncDatabase db, String userId, ) async { await db.execute( 'UPDATE users SET photo_id = NULL WHERE id = ?', [userId], ); // The watchAttachments callback will detect this change // The queue will automatically archive the unreferenced attachment // After reaching archivedCacheLimit, it will be deleted } ``` ```swift Swift theme={null} func deleteProfilePhoto(userId: String, photoId: String) async throws { try await attachmentQueue.deleteFile( attachmentId: photoId, // updateHook ensures atomic deletion updateHook: { tx, attachment in try tx.execute( sql: "UPDATE users SET photo_id = NULL WHERE id = ?", parameters: [userId] ) } ) print("Photo queued for deletion") // The queue will: // 1. Delete from remote storage // 2. Delete local file // 3. Remove attachment record } // Alternative: Remove reference and let queue archive it automatically func removePhotoReference(userId: String) async throws { try await db.execute( sql: "UPDATE users SET photo_id = NULL WHERE id = ?", parameters: [userId] ) // The watchAttachments callback will detect this change // The queue will automatically archive the unreferenced attachment // After reaching archivedCacheLimit, it will be deleted } ``` ```kotlin Kotlin theme={null} suspend fun deleteProfilePhoto(userId: String, photoId: String) { attachmentQueue.deleteFile( attachmentId = photoId, // updateHook ensures atomic deletion updateHook = { tx, attachment -> tx.execute( "UPDATE users SET photo_id = NULL WHERE id = ?", listOf(userId) ) } ) // The queue will: // 1. Delete from remote storage // 2. Delete local file // 3. 
Remove attachment record } // Alternative: Remove reference and let queue archive it automatically suspend fun removePhotoReference(userId: String) { db.writeTransaction { tx -> tx.execute( "UPDATE users SET photo_id = NULL WHERE id = ?", listOf(userId) ) } // The watchAttachments callback will detect this change // The queue will automatically archive the unreferenced attachment // After reaching archivedCacheLimit, it will be deleted } ``` ## Advanced Topics ### Error Handling Implement custom error handling to control retry behavior: ```typescript JavaScript/TypeScript theme={null} import { AttachmentErrorHandler } from '@powersync/web'; const errorHandler: AttachmentErrorHandler = { async onDownloadError(attachment, error) { console.error(`Download failed: ${attachment.filename}`, error); // Return true to retry, false to archive if (error.message.includes('404')) { return false; // File doesn't exist, don't retry } return true; // Retry on network errors }, async onUploadError(attachment, error) { console.error(`Upload failed: ${attachment.filename}`, error); return true; // Always retry uploads }, async onDeleteError(attachment, error) { console.error(`Delete failed: ${attachment.filename}`, error); return true; // Retry deletes } }; const queue = new AttachmentQueue({ // ... 
other options errorHandler }); ``` ```dart Flutter theme={null} import 'package:powersync_core/attachments/attachments.dart'; final errorHandler = AttachmentErrorHandler( onDownloadError: (attachment, exception, stackTrace) async { print('Download failed: ${attachment.filename}'); print('Error: $exception'); // Return true to retry, false to archive if (exception.toString().contains('404')) { return false; // File doesn't exist, don't retry } return true; // Retry on network errors }, onUploadError: (attachment, exception, stackTrace) async { print('Upload failed: ${attachment.filename}'); print('Error: $exception'); return true; // Always retry uploads }, onDeleteError: (attachment, exception, stackTrace) async { print('Delete failed: ${attachment.filename}'); print('Error: $exception'); return true; // Retry deletes }, ); final queue = AttachmentQueue( // ... other options errorHandler: errorHandler, ); ``` ```swift Swift theme={null} class CustomErrorHandler: SyncErrorHandler { func onDownloadError(attachment: Attachment, error: Error) async -> Bool { print("Download failed: \(attachment.filename), error: \(error)") // Return true to retry, false to archive if let urlError = error as? 
URLError, urlError.code == .badServerResponse {
            return false // File doesn't exist (404), don't retry
        }
        return true // Retry on network errors
    }

    func onUploadError(attachment: Attachment, error: Error) async -> Bool {
        print("Upload failed: \(attachment.filename), error: \(error)")
        return true // Always retry uploads
    }

    func onDeleteError(attachment: Attachment, error: Error) async -> Bool {
        print("Delete failed: \(attachment.filename), error: \(error)")
        return true // Retry deletes
    }
}

let queue = AttachmentQueue(
    db: db,
    remoteStorage: remoteStorage,
    attachmentsDirectory: try getAttachmentsDirectoryPath(),
    watchAttachments: watchAttachments,
    errorHandler: CustomErrorHandler()
)
```

```kotlin Kotlin theme={null}
import com.powersync.attachments.Attachment
import com.powersync.attachments.SyncErrorHandler

val errorHandler = object : SyncErrorHandler {
    override suspend fun onDownloadError(
        attachment: Attachment,
        exception: Exception
    ): Boolean {
        println("Download failed: ${attachment.filename}: $exception")
        // Return true to retry, false to archive
        if (exception.message?.contains("404") == true) {
            return false // File doesn't exist, don't retry
        }
        return true // Retry on network errors
    }

    override suspend fun onUploadError(
        attachment: Attachment,
        exception: Exception
    ): Boolean {
        println("Upload failed: ${attachment.filename}: $exception")
        return true // Always retry uploads
    }

    override suspend fun onDeleteError(
        attachment: Attachment,
        exception: Exception
    ): Boolean {
        println("Delete failed: ${attachment.filename}: $exception")
        return true // Retry deletes
    }
}

val queue = AttachmentQueue(
    // ...
other options
    errorHandler = errorHandler
)
```

### Custom Storage Adapters

The following is an example of how to implement a custom storage adapter for IPFS:

```typescript JavaScript/TypeScript theme={null}
import { LocalStorageAdapter, RemoteStorageAdapter } from '@powersync/web';

// Example: IPFS remote storage
class IPFSStorageAdapter implements RemoteStorageAdapter {
  async uploadFile(fileData: ArrayBuffer, attachment: AttachmentRecord) {
    // Upload to IPFS
    const cid = await ipfs.add(fileData);

    // Store CID in your backend for retrieval
    await fetch('/api/ipfs-cids', {
      method: 'POST',
      body: JSON.stringify({ attachmentId: attachment.id, cid })
    });
  }

  async downloadFile(attachment: AttachmentRecord): Promise<ArrayBuffer> {
    // Retrieve CID from backend
    const { cid } = await fetch(`/api/ipfs-cids/${attachment.id}`)
      .then(r => r.json());

    // Download from IPFS
    return ipfs.cat(cid);
  }

  async deleteFile(attachment: AttachmentRecord) {
    // IPFS is immutable, but you can unpin and remove from backend
    await fetch(`/api/ipfs-cids/${attachment.id}`, { method: 'DELETE' });
  }
}
```

```dart Flutter theme={null}
// Example: IPFS remote storage
class IPFSStorageAdapter implements RemoteStorage {
  @override
  Future<void> uploadFile(
    Stream<List<int>> fileData,
    Attachment attachment,
  ) async {
    // Collect the stream
    final bytes = <int>[];
    await for (final chunk in fileData) {
      bytes.addAll(chunk);
    }

    // Upload to IPFS
    final cid = await ipfs.add(Uint8List.fromList(bytes));

    // Store CID in your backend for retrieval
    await http.post(
      Uri.parse('/api/ipfs-cids'),
      body: jsonEncode({
        'attachmentId': attachment.id,
        'cid': cid,
      }),
    );
  }

  @override
  Future<Stream<List<int>>> downloadFile(Attachment attachment) async {
    // Retrieve CID from backend
    final response = await http.get(
      Uri.parse('/api/ipfs-cids/${attachment.id}'),
    );
    final cid = jsonDecode(response.body)['cid'] as String;

    // Download from IPFS
    final data = await ipfs.cat(cid);
    return Stream.value(data);
  }

  @override
  Future<void> deleteFile(Attachment attachment) async {
    // IPFS is immutable, but you can unpin and remove from backend
    await http.delete(
      Uri.parse('/api/ipfs-cids/${attachment.id}'),
    );
  }
}
```

```swift Swift theme={null}
// Example: IPFS remote storage
class IPFSStorageAdapter: RemoteStorageAdapter {
    func uploadFile(fileData: Data, attachment: Attachment) async throws {
        // Upload to IPFS
        // let cid = try await ipfs.add(fileData)

        // Store CID in your backend for retrieval
        struct CIDRequest: Codable {
            let attachmentId: String
            let cid: String
        }
        let requestBody = CIDRequest(attachmentId: attachment.id, cid: "your-cid-here")

        var request = URLRequest(url: URL(string: "/api/ipfs-cids")!)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = try JSONEncoder().encode(requestBody)
        _ = try await URLSession.shared.data(for: request)
    }

    func downloadFile(attachment: Attachment) async throws -> Data {
        // Retrieve CID from backend
        struct CIDResponse: Codable {
            let cid: String
        }
        let request = URLRequest(url: URL(string: "/api/ipfs-cids/\(attachment.id)")!)
        let (data, _) = try await URLSession.shared.data(for: request)
        let response = try JSONDecoder().decode(CIDResponse.self, from: data)

        // Download from IPFS
        // let fileData = try await ipfs.cat(response.cid)
        // return fileData
        return Data() // Replace with actual IPFS download
    }

    func deleteFile(attachment: Attachment) async throws {
        // IPFS is immutable, but you can unpin and remove from backend
        var request = URLRequest(url: URL(string: "/api/ipfs-cids/\(attachment.id)")!)
        request.httpMethod = "DELETE"
        _ = try await URLSession.shared.data(for: request)
    }
}
```

```kotlin Kotlin theme={null}
import com.powersync.attachments.Attachment
import com.powersync.attachments.RemoteStorage
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flowOf

// Example: IPFS remote storage
class IPFSStorageAdapter : RemoteStorage {
    override suspend fun uploadFile(
        fileData: Flow<ByteArray>,
        attachment: Attachment
    ) {
        // Collect the flow
        val bytes = mutableListOf<ByteArray>()
        fileData.collect { bytes.add(it) }
        val allBytes = bytes.flatMap { it.toList() }.toByteArray()

        // Upload to IPFS
        val cid: String = TODO("upload allBytes to IPFS")

        // Store CID in your backend for retrieval
        // ... your HTTP POST to store the CID
    }

    override suspend fun downloadFile(attachment: Attachment): Flow<ByteArray> {
        // Retrieve CID from backend
        val cid: String = TODO("fetch CID from your API")

        // Download from IPFS
        val data: ByteArray = TODO("download from IPFS using cid")
        return flowOf(data)
    }

    override suspend fun deleteFile(attachment: Attachment) {
        // IPFS is immutable, but you can unpin and remove from backend
        // ... your HTTP DELETE implementation
    }
}
```

### Verification and Recovery

`verifyAttachments()` is always called internally during `startSync()`. This method:

1. Verifies that local files exist at the expected paths
2. Repairs broken `localUri` references
3. Archives attachments with missing files
4. Requeues downloads for synced files with missing local copies

```typescript JavaScript/TypeScript theme={null}
await attachmentQueue.verifyAttachments();
```

```dart Flutter theme={null}
Coming soon: not yet exposed publicly
```

```swift Swift theme={null}
try await attachmentQueue.waitForInit()
```

```kotlin Kotlin theme={null}
Coming soon: not yet exposed publicly
```

### Cache Management

Control archived file retention:

```typescript JavaScript/TypeScript theme={null}
const queue = new AttachmentQueue({
  // ... other options
  archivedCacheLimit: 200 // Keep 200 archived files; oldest deleted when limit reached
});

// For manually expiring the cache
queue.expireCache()
```

```dart Flutter theme={null}
final queue = AttachmentQueue(
  // ... other options
  archivedCacheLimit: 200, // Keep 200 archived files; oldest deleted when limit reached
);

// For manually expiring the cache
await queue.expireCache();
```

```swift Swift theme={null}
let queue = AttachmentQueue(
    db: db,
    remoteStorage: remoteStorage,
    attachmentsDirectory: try getAttachmentsDirectoryPath(),
    watchAttachments: watchAttachments,
    // ...
other options archivedCacheLimit: 200 // Keep 200 archived files; oldest deleted when limit reached ) // For manually expiring the cache try await queue.expireCache() ``` ```kotlin Kotlin theme={null} val queue = AttachmentQueue( // ... other options archivedCacheLimit = 200 // Keep 200 archived files; oldest deleted when limit reached ) // For manually expiring the cache queue.expireCache() ``` ### Offline-First Considerations The attachment queue is designed for offline-first apps: * **Local-first operations** - Files are saved locally immediately, synced later * **Automatic retry** - Failed uploads/downloads retry when connection returns * **Queue persistence** - Queue state survives app restarts * **Conflict-free** - Files are immutable, identified by UUID * **Bandwidth efficient** - Only syncs when needed, respects network conditions ## Migrating From Deprecated Packages If you are migrating from the now deprecated attachment helpers for Dart or JavaScript, follow the notes below: A fairly simple migration from `powersync_attachments_helper` to the new utilities would be to adopt the new library with a different Attachment Queue table name and drop the legacy package. This means existing attachments are lost, but will be re-downloaded automatically. Import `AttachmentTable` and `AttachmentQueue` directly from your platform SDK (`@powersync/web`, `@powersync/node`, or `@powersync/react-native`), then remove `@powersync/attachments` from your dependencies. **React Native only:** also install `@powersync/attachments-storage-react-native` plus either `expo-file-system` (Expo 54+) or `@dr.pogodin/react-native-fs`. 
**What changed:**

| Before (`@powersync/attachments`) | After (platform SDK) |
| --------------------------------- | -------------------- |
| `AbstractAttachmentQueue` subclass | `AttachmentQueue` instantiated directly |
| `onAttachmentIdsChange(ids: string[])` | `watchAttachments` — items must be `{ id, fileExtension }`, not just IDs |
| `newAttachmentRecord()` + `saveToQueue()` | `saveFile({ data, fileExtension, updateHook })` |
| `init()` | `startSync()` |
| Single `storage` adapter | `localStorage` + `remoteStorage` (two separate adapters) |
| `syncInterval` | `syncIntervalMs` |
| `cacheLimit` | `archivedCacheLimit` |
| `AttachmentTable` option: `name` | `viewName` |
| `AttachmentTable` option: `additionalColumns` | Removed — use the built-in `meta_data` column (JSON string) instead |
| Error handlers return `{ retry: boolean }` | Return `Promise<boolean>`; `onDeleteError` is now also required |

**Tip:** use a different `viewName` (e.g. `attachment_queue`) to avoid a SQLite conflict with the old `attachments` table during the transition.

**Data on existing users:** the new local attachments table starts empty. Files already in remote storage will re-download automatically once referenced by your `watchAttachments` query. Files that were only ever stored locally and never uploaded have no remote copy and will not be recoverable.

## Related Resources

* **[An Implementation Walkthrough Using The Flutter/Dart Attachment Helpers](https://www.powersync.com/blog/building-offline-first-file-uploads-with-powersync-attachments-helper)** - Blog post on building offline-first uploads

***

# Background Syncing

Source: https://docs.powersync.com/client-sdks/advanced/background-syncing

Run PowerSync operations while your app is inactive or in the background

Applications often need to sync data when they're not in active use. This document explains background syncing implementations with PowerSync.
## Platform Support

Background syncing has been tested in:

* **Flutter** - Using [workmanager](https://github.com/fluttercommunity/flutter_workmanager/)
* **React Native & Expo** - Using Expo's `BackgroundTask` API. See our [demo](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-background-sync) and accompanying [blog post](https://www.powersync.com/blog/keep-background-apps-fresh-with-expo-background-tasks-and-powersync).
* **Kotlin - Android** - Implementation details in the [Supabase To-Do List demo](https://github.com/powersync-ja/powersync-kotlin/blob/main/demos/supabase-todolist/docs/BackgroundSync.md)

These examples can be adapted for other platforms/frameworks. For implementation questions or assistance, chat to us on [Discord](https://discord.gg/powersync).

## Flutter Implementation Guide

### Prerequisites

1. Complete the [workmanager platform setup](https://github.com/fluttercommunity/flutter_workmanager/#platform-setup)
2. Review the [Supabase To-Do List Demo](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist) for context

### Configure the Background Task

In `main.dart`:

```dart theme={null}
void main() async {
  // ... existing setup code ...

  const simpleTaskKey = "com.domain.myapp.taskId";

  // Mandatory if the App is obfuscated or using Flutter 3.1+
  @pragma('vm:entry-point')
  void callbackDispatcher() {
    Workmanager().executeTask((task, inputData) async {
      switch (task) {
        case simpleTaskKey:
          // Initialize PowerSync database and connection
          final currentConnector = await openDatabase();
          db.connect(connector: currentConnector!);

          // Perform database operations
          await TodoList.create('New background task item');
          await currentConnector.uploadData(db);
          await TodoList.create('testing1111');
          await currentConnector.uploadData(db);
          // print("$simpleTaskKey was executed.
inputData = $inputData");
          break;
      }

      // Close database when done
      await db.close();
      return Future.value(true);
    });
  }

  // Initialize the workmanager with your callback
  Workmanager().initialize(
    callbackDispatcher,
    // Shows notifications during task execution (useful for debugging)
    isInDebugMode: true,
  );

  // ... rest of your app initialization ...
}
```

Note specifically in the switch statement:

```dart theme={null}
// currentConnector is the connector to the remote DB
// openDatabase sets the db variable to the PowerSync database
final currentConnector = await openDatabase();

// connect PowerSync to the remote database
db.connect(connector: currentConnector!);

// a database write operation
await TodoList.create('Buy new shoes');

// Sync with the remote database
await currentConnector.uploadData(db);
```

1. Since WorkManager executes in a new process, you need to set up the PowerSync local database and connect to the remote database using your connector.
2. Run a write (in the case of this demo app, we create a to-do list item).
3. Make sure to run `currentConnector.uploadData(db);` so that the local write is uploaded to the remote database.

### Testing

Add a test button:

```dart theme={null}
ElevatedButton(
  child: const Text("Start the Flutter background service"),
  onPressed: () async {
    await Workmanager().cancelAll();
    // print("RUN BACKGROUND TASK");
    await Workmanager().registerOneOffTask(
      simpleTaskKey,
      simpleTaskKey,
      initialDelay: Duration(seconds: 10),
      inputData: {
        'int': 1,
      },
    );
  },
),
```

Press the button, background the app, wait 10 seconds, then verify new records in the remote database.

### Platform Compatibility

#### Android

* Implementation works as expected.

#### iOS

* At the time of last testing this (January 2024), we were only able to get part of this to work using the branch for [this PR](https://github.com/fluttercommunity/flutter_workmanager/pull/511) into workmanager.
* While testing we were not able to get iOS background fetching to work, however this is most likely an [issue](https://github.com/fluttercommunity/flutter_workmanager/issues/515) with the package.

# CRDT Data Structures

Source: https://docs.powersync.com/client-sdks/advanced/crdts

PowerSync does not use [CRDTs](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type) directly as part of its sync or conflict resolution process, but CRDT data structures (from a library such as [Yjs](https://github.com/yjs/yjs) or y-crdt) may be persisted and synced using PowerSync. This may be useful for cases such as document editing, where last-write-wins is not sufficient for conflict resolution.

PowerSync becomes the provider for CRDT data — both for local storage and for propagating changes to other clients.

### Example Implementations

For an example implementation, refer to the following demo built using the PowerSync Web SDK:

* [Yjs Document Collaboration Demo](https://github.com/powersync-ja/powersync-js/tree/main/demos/yjs-react-supabase-text-collab)

# JSON, Arrays and Custom Types

Source: https://docs.powersync.com/client-sdks/advanced/custom-types-arrays-and-json

PowerSync supports JSON/JSONB and arrays, and can sync other custom types by serializing them to text.

PowerSync supports JSON/JSONB and array columns. They are synced as JSON text and can be queried with SQLite JSON functions on the client. Other custom Postgres types can be synced by serializing their values to text in the client-side schema. When updating client data, you have the option to replace the entire column value with a string or enable [advanced schema options](#advanced-schema-options-to-process-writes) to track more granular changes and include custom metadata.

## JSON and JSONB

The PowerSync Service treats JSON and JSONB columns as text and provides many helpers for working with JSON in [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)).
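To see what "synced as text, queried with SQLite JSON functions" means in practice, here is a runnable sketch of the client-side query pattern. It uses plain Python `sqlite3` as a stand-in for the PowerSync-managed database; the `todos` table and payload values are illustrative, not part of the PowerSync API:

```python
import sqlite3

# In a PowerSync app the synced SQLite database already exists; here we
# build a stand-in table to demonstrate the JSON queries themselves.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE todos (id TEXT PRIMARY KEY, custom_payload TEXT)")
db.execute(
    "INSERT INTO todos VALUES (?, ?)",
    ("t1", '{"foo": "bar", "baz": 123, "json_list": "list-1"}'),
)
db.execute(
    "INSERT INTO todos VALUES (?, ?)",
    ("t2", '{"foo": "qux", "json_list": "list-2"}'),
)

# json_extract() operates on the JSON text, the same way it does in
# Sync Stream queries and client-side watch queries.
rows = db.execute(
    "SELECT id, json_extract(custom_payload, '$.foo') FROM todos "
    "WHERE json_extract(custom_payload, '$.json_list') = ?",
    ("list-1",),
).fetchall()
print(rows)  # [('t1', 'bar')]
```

The same `json_extract()` expressions shown here are what the Sync Stream and client SDK examples below rely on.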
**Note:** Native Postgres arrays, JSON arrays, and JSONB arrays are effectively all equivalent in PowerSync.

### Postgres

JSON columns are represented as:

```sql theme={null}
ALTER TABLE todos ADD COLUMN custom_payload json;
```

### Sync Streams

PowerSync treats JSON columns as text. Use `json_extract()` and other JSON functions in stream queries. Subscribe per list to sync only that list's todos:

```yaml theme={null}
config:
  edition: 3

streams:
  my_json_todos:
    auto_subscribe: true
    with:
      owned_lists: SELECT id AS list_id FROM lists WHERE owner_id = auth.user_id()
    query: SELECT * FROM todos WHERE json_extract(custom_payload, '$.json_list') IN owned_lists
```

The client subscribes once per list (e.g. `db.syncStream('my_json_todos', { list_id: listId }).subscribe()`).

PowerSync treats JSON columns as text and provides transformation functions in Sync Rules such as `json_extract()`.

```yaml theme={null}
bucket_definitions:
  my_json_todos:
    # Separate bucket per To-Do list
    parameters: SELECT id AS list_id FROM lists WHERE owner_id = request.user_id()
    data:
      - SELECT * FROM todos WHERE json_extract(custom_payload, '$.json_list') = bucket.list_id
```

### Client SDK

**Schema**

Add your JSON column as a `text` column in your client-side schema definition. For advanced update tracking, see [Advanced Schema Options](#advanced-schema-options).

```dart theme={null}
Table(
  name: 'todos',
  columns: [
    Column.text('custom_payload'),
    // ... other columns ...
  ],
  // Optionally, enable advanced update tracking options (see details at the end of this page):
  trackPreviousValues: true,
  trackMetadata: true,
  ignoreEmptyUpdates: true,
)
```

```javascript theme={null}
const todos = new Table(
  {
    custom_payload: column.text,
    // ... other columns ...
}, { // Optionally, enable advanced update tracking options (see details at the end of this page): trackPrevious: true, trackMetadata: true, ignoreEmptyUpdates: true, } ); ``` ```csharp theme={null} new Table { Name = "todos", Columns = { ["custom_payload"] = ColumnType.Text, // ... other columns ... }, // Optionally, enable advanced update tracking options (see details at the end of this page): TrackPreviousValues = new TrackPreviousOptions(), TrackMetadata = true, IgnoreEmptyUpdates = true } ``` Example not yet available. **Writing Changes** You can write the entire updated column value as a string, or, with `trackPreviousValues` enabled, compare the previous and new values to process only the changes you care about: ```dart theme={null} // Full replacement (basic): await db.execute('UPDATE todos set custom_payload = ?, _metadata = ? WHERE id = ?', [ '{"foo": "bar", "baz": 123}', 'op-metadata-example', // Example metadata value '00000000-0000-0000-0000-000000000000' ]); // Diffing columns in uploadData (advanced): // See details about these advanced schema options at the end of this page import 'dart:convert'; if (op.op == UpdateType.put && op.previousValues != null) { var oldJson = jsonDecode(op.previousValues['custom_payload'] ?? '{}'); var newJson = jsonDecode(op.opData['custom_payload'] ?? '{}'); var metadata = op.metadata; // Access metadata here // Compare oldJson and newJson to determine what changed // Use metadata as needed as you process the upload } ``` ```javascript theme={null} // Full replacement (basic): await db.execute( 'UPDATE todos set custom_payload = ?, _metadata = ? WHERE id = ?', ['{"foo": "bar", "baz": 123}', 'op-metadata-example', '00000000-0000-0000-0000-000000000000'] ); // Diffing columns in uploadData (advanced): // See details about these advanced schema options at the end of this page if (op.op === UpdateType.PUT && op.previousValues) { const oldJson = JSON.parse(op.previousValues['custom_payload'] ?? 
'{}');
  const newJson = JSON.parse(op.opData['custom_payload'] ?? '{}');
  const metadata = op.metadata; // Access metadata here
  // Compare oldJson and newJson to determine what changed
  // Use metadata as needed as you process the upload
}
```

```csharp theme={null}
// Full replacement (basic):
await db.Execute(
    "UPDATE todos SET custom_payload = ?, _metadata = ? WHERE id = ?",
    new object[] { "{\"foo\": \"bar\", \"baz\": 123}", "op-metadata-example", "00000000-0000-0000-0000-000000000000" }
);

// Diffing columns in UploadData (advanced):
// See details about these advanced schema options at the end of this page
using Newtonsoft.Json;

if (op.Op.ToString() == "PUT" && op.PreviousValues != null)
{
    var oldJson = JsonConvert.DeserializeObject<Dictionary<string, object>>(
        op.PreviousValues.GetValueOrDefault("custom_payload", "{}")?.ToString() ?? "{}"
    );
    var newJson = JsonConvert.DeserializeObject<Dictionary<string, object>>(
        (op.OpData != null ? op.OpData.GetValueOrDefault("custom_payload", "{}")?.ToString() ?? "{}" : "{}") ?? "{}"
    );
    var metadata = op.Metadata; // Access metadata here
    // Compare oldJson and newJson to determine what changed
    // Use metadata as needed as you process the upload
}
```

Example not yet available.

## Arrays

PowerSync treats array columns as JSON text. This means that the SQLite JSON operators can be used on any array columns. Additionally, array membership is supported in [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) so you can sync rows based on whether a parameter value appears in an array column.

**Note:** Native Postgres arrays, JSON arrays, and JSONB arrays are effectively all equivalent in PowerSync.

### Postgres

Array columns are defined in Postgres using the following syntax:

```sql theme={null}
ALTER TABLE todos ADD COLUMN unique_identifiers text[];
```

### Sync Streams

Array columns are converted to text by the PowerSync Service.
A text array as defined above would be synced to clients as the following string: `["00000000-0000-0000-0000-000000000000", "12345678-1234-1234-1234-123456789012"]` **Array Membership** Sync rows where a subscription parameter value is in the row's array column using `IN`: ```yaml theme={null} config: edition: 3 streams: custom_todos: query: SELECT * FROM todos WHERE subscription.parameter('list_id') IN unique_identifiers ``` The client subscribes per list (e.g. `db.syncStream('custom_todos', { list_id: listId }).subscribe()`). It's possible to sync rows dynamically based on the contents of array columns using the `IN` operator: ```yaml theme={null} bucket_definitions: custom_todos: # Separate bucket per To-Do list parameters: SELECT id AS list_id FROM lists WHERE owner_id = request.user_id() data: - SELECT * FROM todos WHERE bucket.list_id IN unique_identifiers ``` See these additional details when using the `IN` operator: [Operators](/sync/supported-sql#operators) ### Client SDK **Schema** Add your array column as a `text` column in your client-side schema definition. For advanced update tracking, see [Advanced Schema Options](#advanced-schema-options). ```javascript theme={null} const todos = new Table( { unique_identifiers: column.text, // ... other columns ... }, { // Optionally, enable advanced update tracking options (see details at the end of this page): trackPrevious: true, trackMetadata: true, ignoreEmptyUpdates: true, } ); ``` ```dart theme={null} Table( name: 'todos', columns: [ Column.text('unique_identifiers'), // ... other columns ... ], // Optionally, enable advanced update tracking options (see details at the end of this page): trackPreviousValues: true, trackMetadata: true, ignoreEmptyUpdates: true, ) ``` ```csharp theme={null} new Table { Name = "todos", Columns = { ["unique_identifiers"] = ColumnType.Text, // ... other columns ... 
}, // Optionally, enable advanced update tracking options (see details at the end of this page): TrackPreviousValues = new TrackPreviousOptions(), TrackMetadata = true, IgnoreEmptyUpdates = true } ``` Example not yet available. **Writing Changes** You can write the entire updated column value as a string, or, with `trackPreviousValues` enabled, compare the previous and new values to process only the changes you care about: ```javascript theme={null} // Full replacement (basic): await db.execute( 'UPDATE todos set unique_identifiers = ?, _metadata = ? WHERE id = ?', ['["DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF", "ABCDEFAB-ABCD-ABCD-ABCD-ABCDEFABCDEF"]', 'op-metadata-example', '00000000-0000-0000-0000-000000000000'] ); // Diffing columns in uploadData (advanced): // See details about these advanced schema options at the end of this page if (op.op === UpdateType.PUT && op.previousValues) { const oldArray = JSON.parse(op.previousValues['unique_identifiers'] ?? '[]'); const newArray = JSON.parse(op.opData['unique_identifiers'] ?? '[]'); const metadata = op.metadata; // Access metadata here // Compare oldArray and newArray to determine what changed // Use metadata as needed as you process the upload } ``` ```dart theme={null} // Full replacement (basic): await db.execute('UPDATE todos set unique_identifiers = ?, _metadata = ? WHERE id = ?', [ '["DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF", "ABCDEFAB-ABCD-ABCD-ABCD-ABCDEFABCDEF"]', 'op-metadata-example', // Example metadata value '00000000-0000-0000-0000-000000000000' ]); // Diffing columns in uploadData (advanced): // See details about these advanced schema options at the end of this page if (op.op == UpdateType put && op.previousValues != null) { final oldArray = jsonDecode(op.previousValues['unique_identifiers'] ?? '[]'); final newArray = jsonDecode(op.opData['unique_identifiers'] ?? 
'[]'); final metadata = op.metadata; // Access metadata here // Compare oldArray and newArray to determine what changed // Use metadata as needed as you process the upload } ``` ```csharp theme={null} // Full replacement (basic): await db.Execute( "UPDATE todos SET unique_identifiers = ?, _metadata = ? WHERE id = ?", new object[] { "[\"DEADBEEF-DEAD-BEEF-DEAD-BEEFDEADBEEF\", \"ABCDEFAB-ABCD-ABCD-ABCD-ABCDEFABCDEF\"]", "op-metadata-example", "00000000-0000-0000-0000-000000000000" } ); // Diffing columns in UploadData (advanced): // See details about these advanced schema options at the end of this page using Newtonsoft.Json; if (op.Op.ToString() == "PUT" && op.PreviousValues != null) { var oldArray = JsonConvert.DeserializeObject<List<string>>( (op.PreviousValues != null ? op.PreviousValues.GetValueOrDefault("unique_identifiers", "[]")?.ToString() : "[]") ?? "[]" ); var newArray = JsonConvert.DeserializeObject<List<string>>( (op.OpData != null ? op.OpData.GetValueOrDefault("unique_identifiers", "[]")?.ToString() : "[]") ?? "[]" ); var metadata = op.Metadata; // Access metadata here // Compare oldArray and newArray to determine what changed // Use metadata as needed as you process the upload } ``` Example not yet available. **Attention Supabase users:** Supabase can handle writes with arrays, but you must convert from string to array using `jsonDecode` in the connector's `uploadData` function. The default implementation of `uploadData` does not handle complex types like arrays automatically. ## Custom Types PowerSync respects Postgres custom types: DOMAIN types sync as their inner type, custom type columns as JSON objects, arrays of custom types as JSON arrays, and ranges (and multi-ranges) as structured JSON. This behavior is the default for Sync Streams. For configuration and legacy behavior, see [Compatibility](/sync/advanced/compatibility#custom-postgres-types). For type handling in queries, see [Types](/sync/types). 
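To make the default serialization concrete, here is a minimal, illustrative sketch of reading such columns on the client. The `location` and `locations` column names are assumptions for this example; the synced TEXT values parse with plain `JSON.parse`:

```javascript theme={null}
// Illustrative sketch (not SDK API): with the default Sync Streams behavior,
// a custom-type column arrives as a JSON object string, and an array of
// custom types as a JSON array string. Column names here are hypothetical.
const row = {
  location: '{"street":"1000 S Colorado Blvd.","city":"Denver","state":"CO","zip":80211}',
  locations: '[{"city":"Denver"},{"city":"Boulder"}]'
};

const address = JSON.parse(row.location);
const cities = JSON.parse(row.locations).map((l) => l.city);

console.log(address.city); // Denver
console.log(cities); // [ 'Denver', 'Boulder' ]
```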
### Postgres Postgres allows developers to create custom data types for columns. For example: ```sql theme={null} CREATE TYPE location_address AS ( street text, city text, state text, zip numeric ); ``` ### Sync Streams The custom type column is serialized as JSON and you can use `json_extract()` and other JSON functions in stream queries: ```yaml theme={null} config: edition: 3 streams: todos_by_city: query: SELECT * FROM todos WHERE json_extract(location, '$.city') = subscription.parameter('city') ``` Custom type columns are converted to text by the PowerSync Service. Depending on whether the `custom_postgres_types` [compatibility option](/sync/advanced/compatibility) is enabled, PowerSync syncs the column value as: * `{"street":"1000 S Colorado Blvd.","city":"Denver","state":"CO","zip":80211}` if the option is enabled. * `("1000 S Colorado Blvd.",Denver,CO,80211)` if the option is disabled. You can use regular string and JSON manipulation functions in Sync Rules. This means that individual values of the type can be synced with `json_extract` if the `custom_postgres_types` compatibility option is enabled. Without the option, the entire column must be synced as text. ### Client SDK **Schema** Add your custom type column as a `text` column in your client-side schema definition. For advanced update tracking, see [Advanced Schema Options](#advanced-schema-options). ```javascript theme={null} const todos = new Table( { location: column.text, // ... other columns ... }, { // Optionally, enable advanced update tracking options (see details at the end of this page): trackPrevious: true, trackMetadata: true, ignoreEmptyUpdates: true, } ); ``` ```dart theme={null} Table( name: 'todos', columns: [ Column.text('location'), // ... other columns ... 
], // Optionally, enable advanced update tracking options (see details at the end of this page): trackPreviousValues: true, trackMetadata: true, ignoreEmptyUpdates: true, ) ``` ```csharp theme={null} new Table { Name = "todos", Columns = { ["location"] = ColumnType.Text, // ... other columns ... }, // Optionally, enable advanced update tracking options (see details at the end of this page): TrackPreviousValues = new TrackPreviousOptions(), TrackMetadata = true, IgnoreEmptyUpdates = true } ``` Example not yet available. **Writing Changes** You can write the entire updated column value as a string, or, with `trackPreviousValues` enabled, compare the previous and new values to process only the changes you care about: ```javascript theme={null} // Full replacement (basic): await db.execute( 'UPDATE todos set location = ?, _metadata = ? WHERE id = ?', ['("1234 Update Street",Denver,CO,80212)', 'op-metadata-example', 'faffcf7a-75f9-40b9-8c5d-67097c6b1c3b'] ); // Diffing columns in uploadData (advanced): // See details about these advanced schema options at the end of this page if (op.op === UpdateType.PUT && op.previousValues) { const oldCustomType = op.previousValues['location'] ?? 'null'; const newCustomType = op.opData['location'] ?? 'null'; const metadata = op.metadata; // Access metadata here // Compare oldCustomType and newCustomType to determine what changed // Use metadata as needed as you process the upload } ``` ```dart theme={null} // Full replacement (basic): await db.execute('UPDATE todos set location = ?, _metadata = ? WHERE id = ?', [ '("1234 Update Street",Denver,CO,80212)', 'op-metadata-example', // Example metadata value 'faffcf7a-75f9-40b9-8c5d-67097c6b1c3b' ]); // Diffing columns in uploadData (advanced): // See details about these advanced schema options at the end of this page if (op.op == UpdateType.put && op.previousValues != null) { final oldCustomType = op.previousValues['location'] ?? 'null'; final newCustomType = op.opData['location'] ?? 
'null'; final metadata = op.metadata; // Access metadata here // Compare oldCustomType and newCustomType to determine what changed // Use metadata as needed as you process the upload } ``` ```csharp theme={null} // Full replacement (basic): await db.Execute( "UPDATE todos SET location = ?, _metadata = ? WHERE id = ?", new object[] { "(\"1234 Update Street\",Denver,CO,80212)", "op-metadata-example", "faffcf7a-75f9-40b9-8c5d-67097c6b1c3b" } ); // Diffing columns in UploadData (advanced): // See details about these advanced schema options at the end of this page if (op.Op.ToString() == "PUT" && op.PreviousValues != null) { var oldCustomType = op.PreviousValues.GetValueOrDefault("location", "null")?.ToString() ?? "null"; var newCustomType = op.OpData.GetValueOrDefault("location", "null")?.ToString() ?? "null"; var metadata = op.Metadata; // Access metadata here // Compare oldCustomType and newCustomType to determine what changed // Use metadata as needed as you process the upload } ``` Example not yet available. ## Bonus: Mashup What if we had a column defined as an array of custom types, where a field in the custom type was JSON? Consider the following Postgres schema: ```sql theme={null} -- define custom type CREATE TYPE extended_location AS ( address_label text, json_address json ); -- add column ALTER TABLE todos ADD COLUMN custom_locations extended_location[]; ``` ## Advanced Schema Options to Process Writes With arrays and JSON fields, it's common for only part of the value to change during an update. To make handling these writes easier, you can enable advanced schema options that let you track exactly what changed in each row—not just the new state. * `trackPreviousValues` (or `trackPrevious` in our JS SDKs): Access previous values for diffing JSON or array fields. Accessible later via `CrudEntry.previousValues`. * `trackMetadata`: Adds a `_metadata` column for storing custom metadata. Value of the column is accessible later via `CrudEntry.metadata`. 
* `ignoreEmptyUpdates`: Skips updates when no data has actually changed. These advanced schema options were introduced in the following SDK versions: * Flutter v1.13.0 * React Native v1.20.1 * JavaScript/Web v1.20.1 * Kotlin v1.1.0 * Swift v1.1.0 * Node.js v0.4.0 * .NET v0.0.6-alpha.1 # Data Encryption Source: https://docs.powersync.com/client-sdks/advanced/data-encryption ### In Transit Encryption Data is always encrypted in transit using TLS — both between the client and PowerSync, and between PowerSync [and the source database](/configuration/source-db/postgres-maintenance#tls). ### At Rest Encryption The client-side database can be encrypted at rest. This is currently available for: [SQLCipher](https://www.zetetic.net/sqlcipher/) support is available for Dart/Flutter through the `powersync_sqlcipher` SDK. See usage details in the package README: [SQLCipher](https://www.zetetic.net/sqlcipher/) support is available for PowerSync's React Native SDK through the `@powersync/op-sqlite` package. See usage details in the package README: The Web SDK uses the [ChaCha20 cipher algorithm by default](https://utelle.github.io/SQLite3MultipleCiphers/docs/ciphers/cipher_chacha20/). See usage details in the package README: Additionally, a minimal example demonstrating encryption of the web database is available [here](https://github.com/powersync-ja/powersync-js/tree/main/demos/example-vite-encryption). Encryption support is available for PowerSync's Node.js SDK using [`better-sqlite3-multiple-ciphers`](https://www.npmjs.com/package/better-sqlite3-multiple-ciphers). See usage details and code examples in the [Node.js SDK reference](/client-sdks/reference/node#encryption-and-custom-sqlite-drivers). 
Encryption support is available for PowerSync's Kotlin SDK (since version 1.9.0) using [`SQLite3MultipleCiphers`](https://utelle.github.io/SQLite3MultipleCiphers/) via the [`com.powersync:sqlite3multipleciphers`](https://central.sonatype.com/artifact/com.powersync/sqlite3multipleciphers) package. This allows you to encrypt your local SQLite database with various cipher algorithms. **Setup:** 1. Replace your dependency on `com.powersync:core` with `com.powersync:common` of the same version. 2. Add a dependency on `com.powersync:sqlite3multipleciphers`. 3. Since `:core` includes a Ktor client implementation, you'll need to [add one manually](https://ktor.io/docs/client-engines.html) if you're not already using Ktor: * Android/JVM: `io.ktor:ktor-client-okhttp` * Apple targets (Kotlin/Native): `io.ktor:ktor-client-darwin` 4. Use the appropriate encrypted database factory when creating your `PowerSyncDatabase`: ```kotlin theme={null} // Android val database = PowerSyncDatabase( factory = AndroidEncryptedDatabaseFactory( context, Key.Passphrase("your encryption key") ), schema = yourSchema, dbFilename = "your_database" ) // JVM val database = PowerSyncDatabase( factory = JavaEncryptedDatabaseFactory( Key.Passphrase("your encryption key") ), schema = yourSchema, dbFilename = "your_database" ) // Kotlin/Native (Apple targets) val database = PowerSyncDatabase( factory = NativeEncryptedDatabaseFactory( Key.Passphrase("your encryption key") ), schema = yourSchema, dbFilename = "your_database" ) ``` Store encryption keys securely rather than hardcoding them in your code. For more details, see the [`sqlite3multipleciphers` README](https://github.com/powersync-ja/powersync-kotlin/tree/main/sqlite3multipleciphers) in the PowerSync Kotlin SDK repository. Encryption support is available for PowerSync's Swift SDK (since version 1.10.0) using [`SQLite3MultipleCiphers`](https://utelle.github.io/SQLite3MultipleCiphers/). 
Encryption keys are configured with the `initialStatements` parameter on `PowerSyncDatabase()` which allows running `PRAGMA key` statements. **Setup requirements:** The PowerSync Swift SDK depends on [CSQLite](https://github.com/powersync-ja/CSQLite) to build and link SQLite. That package can be configured to optionally link SQLite3 Multiple Ciphers by enabling the `Encryption` trait. Due to SwiftPM limitations, we can't directly expose that trait on the Swift SDK. Instead, we recommend directly depending on CSQLite with the encryption trait, which will enable the same for the SDK (since each package can only appear in a build once). Since Xcode doesn't support specifying package traits when adding dependencies, you first need to add a local Swift package as a workaround. 1. Create a local `Package.swift` in your project that depends on CSQLite with the `Encryption` trait: ```swift theme={null} // swift-tools-version: 6.2 import PackageDescription let package = Package( name: "helper", products: [ .library(name: "helper", targets: ["helper"]), ], dependencies: [ .package(url: "https://github.com/powersync-ja/CSQLite.git", exact: "3.51.2", traits: ["Encryption"]), ], targets: [ .target(name: "helper", dependencies: [.product(name: "CSQLite", package: "CSQLite")]), ] ) ``` 2. Add a dependency to this local package from Xcode and resolve packages. This enables `sqlite3mc` for your entire app, including the PowerSync framework. 3. Configure encryption when opening the database: ```swift theme={null} let db = PowerSyncDatabase( schema: yourSchema, initialStatements: ["pragma key = 'your encryption key'"] ) ``` Store encryption keys securely (e.g., in Keychain) rather than hardcoding them in your code. For a complete working example, see the [SwiftEncryptionDemo](https://github.com/powersync-ja/powersync-swift/tree/main/Demos/SwiftEncryptionDemo) in the PowerSync Swift SDK repository. Support for encryption on other platforms is planned. 
In the meantime, let us know your needs and use cases on [Discord](https://discord.gg/powersync). ### End-to-end Encryption For end-to-end encryption, the encrypted data can be synced using PowerSync. The data can then either be encrypted and decrypted directly in memory by the application, or a separate local-only table can be used to persist the decrypted data — allowing the data to be queried directly. [Raw SQLite Tables](/client-sdks/advanced/raw-tables) can be used for full control over the SQLite schema and managing tables for the decrypted data. We have a [React & Supabase example app](https://github.com/powersync-community/react-supabase-chat-e2ee) that demonstrates this approach. See also the accompanying [blog post](https://www.powersync.com/blog/building-an-e2ee-chat-app-with-powersync-supabase). ## See Also * Database Setup → [Security & IP Filtering](/configuration/source-db/security-and-ip-filtering) * Resources → [Security](/resources/security) # GIS Data: PostGIS Source: https://docs.powersync.com/client-sdks/advanced/gis-data-postgis For Postgres, PowerSync integrates well with PostGIS and provides tools for working with geo data. Custom types, arrays and [PostGIS](https://postgis.net/) are frequently presented together since geospatial data is often complex and multidimensional. It's therefore recommended to first quickly scan the content in [Custom Types, Arrays and JSON](/client-sdks/advanced/custom-types-arrays-and-json). ### PostGIS In Supabase, the PostGIS extension needs to be added to your project to use PostGIS types. Run the following command in the SQL editor to include the PostGIS extension: ```sql theme={null} CREATE EXTENSION IF NOT EXISTS postgis; ``` The `geography` and `geometry` types are now available in your Postgres database. ## Supabase Configuration Example This example builds on the To-Do List demo app in our [Supabase integration guide](/integrations/supabase/guide). 
### Add custom type, array and PostGIS columns to the `todos` table ```sql theme={null} -- SQL command to update the todos table with 3 additional columns: ALTER TABLE todos ADD COLUMN address location_address null, ADD COLUMN contact_numbers text[] null, ADD COLUMN location geography(point) null; ``` ### Insert a row of data into the table ```sql theme={null} -- Grab the id of a list object and a user id and create a new todo INSERT INTO public.todos(description, list_id, created_by, address, location, contact_numbers) VALUES ('Bread', 'list_id', 'user_id', '("1000 S Colorado Blvd.","Denver","CO",80211)', st_point(39.742043, -104.991531), '{000-000-0000, 000-000-0000, 000-000-0000}'); ``` Note the following: **Custom type**: Specify the value for the `address` column by wrapping the value in single quotes and comma-separating the different `location_address` properties. * `'("1000 S Colorado Blvd.","Denver","CO",80211)'` **Array**: Specify the value of the `contact_numbers` column by surrounding the comma-separated array items in curly braces. 
* `'{000-000-0000, 000-000-0000, 000-000-0000}'` **PostGIS**: Specify the value of the `location` column by using the `st_point` function and passing in the latitude and longitude. * `st_point(39.742043, -104.991531)` ### What this data looks like in Postgres Postgres' internal binary representation of the PostGIS type is as follows: | location | | -------------------------------------------------- | | 0101000020E6100000E59CD843FBDE4340E9818FC18AC052C0 | ## On the Client ### AppSchema example ```js theme={null} export const AppSchema = new Schema([ new Table({ name: 'todos', columns: [ new Column({ name: 'list_id', type: ColumnType.TEXT }), new Column({ name: 'created_at', type: ColumnType.TEXT }), new Column({ name: 'completed_at', type: ColumnType.TEXT }), new Column({ name: 'description', type: ColumnType.TEXT }), new Column({ name: 'completed', type: ColumnType.INTEGER }), new Column({ name: 'created_by', type: ColumnType.TEXT }), new Column({ name: 'completed_by', type: ColumnType.TEXT }), new Column({ name: 'address', type: ColumnType.TEXT }), new Column({ name: 'contact_numbers', type: ColumnType.TEXT }), new Column({ name: 'location', type: ColumnType.TEXT }), ], indexes: [new Index({ name: 'list', columns: [new IndexedColumn({ name: 'list_id' })] })] }), new Table({ name: 'lists', columns: [ new Column({ name: 'created_at', type: ColumnType.TEXT }), new Column({ name: 'name', type: ColumnType.TEXT }), new Column({ name: 'owner_id', type: ColumnType.TEXT }) ] }) ]); ``` Note: * The custom type, array and PostGIS type have been defined as `TEXT` in the AppSchema. The Postgres PostGIS capabilities are not available because the PowerSync SDK uses SQLite, which only has a limited number of types. This means that everything is replicated into the SQLite database as TEXT values. * Depending on your application, you may need to implement functions in the client to parse the values and then other functions to write them back to the Postgres database. 
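As noted above, you may need client-side helpers to parse these text values. The sketch below is illustrative (the function names are not part of any SDK) and handles double-quoted composite fields, but not escaped quotes or nested composites:

```javascript theme={null}
// Parse a Postgres composite value such as '("1000 S Colorado Blvd.",Denver,CO,80211)'
// into an object. Field names mirror the location_address type above.
function parseLocationAddress(text) {
  const inner = text.slice(1, -1); // strip the surrounding parentheses
  const fields = [];
  let current = '';
  let inQuotes = false;
  for (const ch of inner) {
    if (ch === '"') {
      inQuotes = !inQuotes; // toggle quoting; commas inside quotes are literal
    } else if (ch === ',' && !inQuotes) {
      fields.push(current);
      current = '';
    } else {
      current += ch;
    }
  }
  fields.push(current);
  const [street, city, state, zip] = fields;
  return { street, city, state, zip: Number(zip) };
}

// Parse a Postgres array value such as '{000-000-0000, 111-111-1111}' into a list.
function parseTextArray(text) {
  const inner = text.slice(1, -1).trim(); // strip the surrounding curly braces
  if (inner === '') return [];
  return inner.split(',').map((item) => item.trim().replace(/^"|"$/g, ''));
}

console.log(parseLocationAddress('("1000 S Colorado Blvd.",Denver,CO,80211)').city); // Denver
console.log(parseTextArray('{000-000-0000, 111-111-1111}').length); // 2
```

Writing back to Postgres reverses this: re-assemble the composite or array text, or, for Supabase, convert to a real array in the connector's `uploadData` as noted in [Custom Types, Arrays and JSON](/client-sdks/advanced/custom-types-arrays-and-json).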
### What does the data look like in SQLite? The data looks exactly like it is stored in the Postgres database, i.e.: 1. **Custom Type**: It has the same format as if you inserted it using a SQL statement, i.e. 1. `(1000 S Colorado Blvd.,Denver,CO,80211)` 2. **Array**: Arrays behave similarly, in that the data is shown the same way it was inserted, e.g. 1. `{000-000-0000, 000-000-0000, 000-000-0000}` 3. **PostGIS**: The `geography` type is transformed into an encoded form of the value. 1. If you insert coordinates as `st_point(39.742043, -104.991531)` then it is shown as `0101000020E6100000E59CD843FBDE4340E9818FC18AC052C0` ## Sync Streams ### PostGIS Example use case: Extract x (long) and y (lat) values from a PostGIS type, to use these values independently in an application. PowerSync supports the following PostGIS functions in Sync Streams (or legacy Sync Rules): [Operators and Functions](/sync/supported-sql#functions) 1. `ST_AsGeoJSON` 2. `ST_AsText` 3. `ST_X` 4. `ST_Y` IMPORTANT NOTE: These functions will only work if your Postgres instance has the PostGIS extension installed and you're storing values as type `geography` or `geometry`. ```yaml theme={null} config: edition: 3 streams: global: queries: - SELECT * FROM lists - SELECT *, st_x(location) as longitude, st_y(location) as latitude FROM todos ``` ```yaml theme={null} bucket_definitions: global: data: - SELECT * FROM lists - SELECT *, st_x(location) as longitude, st_y(location) as latitude FROM todos ``` # Local-Only Usage Source: https://docs.powersync.com/client-sdks/advanced/local-only-usage Some use cases require data persistence before the user has registered or signed in. In some of those cases, the user may want to register and start syncing data with other devices or users at a later point, while other users may keep on using the app without ever registering or going online. PowerSync supports these scenarios. 
By default, all local changes will be stored in the upload queue, and will be uploaded to the backend server if the user registers at a later point. A caveat is that if the user never registers, this queue will keep on growing in size indefinitely. For many applications this should be small enough to not be significant, but some data-intensive applications may want to avoid the indefinite queue growth. There are two general approaches we recommend for this: ### 1. Local-only tables ```dart theme={null} final table = Table.localOnly( ... ) ``` **Flutter + Drift users:** If you're using local-only tables with `viewName` overrides, Drift's watch streams may not update correctly. See the [troubleshooting guide](/client-sdks/orms/flutter-orm-support#troubleshooting:-watch-streams-with-local-only-tables) for the solution. ```js theme={null} const lists = new Table({ ... }, { localOnly: true }); ``` ```kotlin theme={null} val table = Table( ... localOnly = true ) ``` ```swift theme={null} let table = Table( ... localOnly: true ) ``` ```csharp theme={null} public static Table Todos = new Table { Name = "todos", Columns = { // ... column definitions ... }, LocalOnly = true }; ``` Example not yet available. Use local-only tables until the user has registered or signed in. This would not store any data in the upload queue, avoiding any overhead or growth in database size. Once the user registers, move the data over to synced tables, at which point the data would be placed in the upload queue. 
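One way to sketch this hand-over in JavaScript: swap in the synced schema, then copy rows across in a single transaction. The `local_todos` and `todos` table names, the `syncedSchema` object, and the `promoteLocalData` helper are all hypothetical; the demo implementations listed below show the full pattern.

```javascript theme={null}
// Hypothetical sketch: promote local-only data to synced tables after sign-up.
// Assumes a local-only 'local_todos' table mirroring a synced 'todos' table.
async function promoteLocalData(db, syncedSchema) {
  // Swap the client-side schema so 'todos' now refers to a synced table.
  await db.updateSchema(syncedSchema);

  // Copy rows across in one transaction. Inserting through the synced
  // 'todos' view places the copied rows in the upload queue.
  await db.writeTransaction(async (tx) => {
    await tx.execute(
      'INSERT INTO todos (id, description, list_id) SELECT id, description, list_id FROM local_todos'
    );
    await tx.execute('DELETE FROM local_todos');
  });
}
```

`updateSchema` and `writeTransaction` are shown as on the JS SDK's `PowerSyncDatabase`; other SDKs expose equivalent calls.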
The following example implementations are available: | Client framework | Link | | -------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | | Flutter To-Do List App (with Supabase) | [supabase-todolist-optional-sync](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist-optional-sync) | | React To-Do List App (with Supabase) | [react-supabase-todolist-optional-sync](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-supabase-todolist-optional-sync) | ### 2. Clearing the upload queue The upload queue can be cleared periodically (for example on every app start-up), avoiding the growth in database size over time. This can be done using: ```sql theme={null} DELETE FROM ps_crud ``` It is up to the application to then re-create the queue when the user registers, or upload data directly from the existing tables instead. A small amount of metadata per row is also stored in the `ps_oplog` table. We do not recommend deleting this data, as it can cause or hide consistency issues when later uploading the data. If the overhead in `ps_oplog` is too much, use the local-only tables approach instead. ### Local-only columns on synced tables If you need individual local-only columns on a table that is otherwise synced (rather than an entirely local-only table), this can be achieved with [raw tables](/client-sdks/advanced/raw-tables#local-only-columns). # Pre-Seeding SQLite Databases Source: https://docs.powersync.com/client-sdks/advanced/pre-seeded-sqlite Optimizing Initial Sync by Pre-Seeding SQLite Databases. # Overview When syncing large amounts of data to connected clients, it can be useful to pre-seed the SQLite database with an initial snapshot of the data. This can help to reduce the initial sync time and improve the user experience. 
To achieve this, you can run server-side processes using the [PowerSync Node.js SDK](/client-sdks/reference/node) to pre-seed SQLite files. These SQLite files can then be uploaded to blob storage providers such as AWS S3, Azure Blob Storage, or Google Cloud Storage and downloaded directly by client applications. Client applications can then initialize the pre-seeded SQLite file, effectively bypassing the initial sync process. ## Demo App If you're interested in seeing an end-to-end example, we've prepared a demo repo that can be used as a template for your own implementation. This repo covers all of the key concepts and code examples shown in this page. Self-hosted PowerSync instance connected to a PostgreSQL database, using the PowerSync Node.js SDK, React Native SDK and AWS S3 for storing the pre-seeded SQLite files. # Main Concepts ## Generate a scoped JWT token In most cases you'd want to pre-seed the SQLite database with user specific data and not all data from the source database, as you normally would when using PowerSync. For this you would need to generate JWT tokens that include the necessary properties to satisfy the conditions of the queries in your Sync Streams (or legacy Sync Rules). Let's say we have the following sync config: ```yaml theme={null} sync_config: content: | config: edition: 3 streams: store_products: query: SELECT * FROM products WHERE store_id = auth.parameter('store_id') ``` ```yaml theme={null} sync_config: content: | bucket_definitions: store_products: parameters: SELECT id as store_id FROM stores WHERE id = request.jwt() ->> 'store_id' data: - SELECT * FROM products WHERE store_id = bucket.store_id ``` In the example above the `store_id` is part of the JWT payload and is used to filter products by store for a user. Given this we would want to do the following: 1. Query the source database, directly from the Node.js application, for all the store ids you'd want a pre-seeded SQLite database for. 2. 
Generate a JWT token for each store and include the `store_id` in the payload. 3. In the Node.js application that implements the PowerSync SDK, return the JWT token in the `fetchCredentials()` function. This will ensure that only the data for a specific store is pre-seeded into the SQLite database. Here's an example of a function that generates a JWT token based on the `store_id` using the [`jose`](https://github.com/panva/jose) library: ```typescript theme={null} import * as jose from 'jose'; export const generateToken = async (subject: string, store_id: string) => { return await new jose.SignJWT({store_id: store_id}) // Set the store_id in the payload .setProtectedHeader({ alg: 'HS256', kid: "My Kid" }) .setSubject(subject) .setIssuedAt(new Date()) .setAudience('powersync') .setExpirationTime('1h') .sign(Buffer.from("My Base64 Encoded Secret", 'base64url')); }; ``` ## Pre-seeding script Once you've got a plan in place for generating the JWT tokens, you can write a simple script to connect to the PowerSync instance and pre-seed the SQLite database. Here's an example of a script that does this: ```typescript theme={null} async function prepareDatabase(storeId: string) { const backupPath = `/path/to/sqlite/${storeId}.sqlite`; const connector = new Connector(); await powersync.connect(connector); await powersync.waitForFirstSync(); await powersync.execute("DELETE FROM ps_kv WHERE key = ?", ["client_id"]); await powersync.execute(`VACUUM INTO '${backupPath}'`); await uploadFile(storeId, `${storeId}.sqlite`, backupPath); await powersync.disconnect(); await powersync.close(); } ``` Some critical points to note: * You will need to wait for the first sync to complete before deleting the `client_id` key and vacuuming the database. This makes sure all of the data is synced to the database before we proceed. * The `client_id` key is used to identify the client device and is typically set when the client connects to the PowerSync instance. 
So when pre-seeding the database, we need to delete the `client_id` key to avoid conflicts when the client connects to the PowerSync instance. * It's important to note that you will need to use the [`VACUUM INTO`](https://sqlite.org/lang_vacuum.html) command to create a clean, portable SQLite database file. This will help to reduce the size of the database file and provide an optimized version for the client to download. * In this example the upload function is using AWS S3, but you can use any blob storage provider that you prefer. ### Scheduling and Cleaning Up To enhance the process, consider the following: * To keep the pre-seeded SQLite databases fresh, schedule a cron job for periodic regeneration, ensuring that new clients always download the latest snapshot of the initial sync data. * After each run, perform some environment cleanup to avoid disk bloat. This can be done by deleting the pre-seeded SQLite database files after they have been uploaded to the blob storage provider. ## Client Side Usage When the client application boots, before connecting to the PowerSync instance, check if a SQLite database exists in the application's permanent storage. If it does, use it; otherwise, download a pre-seeded SQLite database from the blob storage provider. Here's an example of a function that checks if a file exists in the application's permanent storage: ```typescript theme={null} import { File, Paths } from 'expo-file-system/next'; export const FilePath = `${Paths.document.uri}`; export const fileExists = (storeId: string) => { const file = new File(FilePath, `${storeId}.sqlite`); return file.exists; } ``` Here's an example of a function that downloads the pre-seeded SQLite database from the blob storage provider: ```typescript theme={null} export const downloadFile = async (storeId: string) => { // Retrieve a pre-signed URL from the server that allows the client to download the file. 
const response = await fetch(`https://your-api-url.com/database?store_id=${storeId}`); const { databaseUrl } = await response.json(); // Download the file to the permanent location on the device. const newFile = new File(FilePath, `${storeId}.sqlite`); await File.downloadFileAsync(databaseUrl, newFile); } ``` It's important to note that when the client downloads the pre-seeded SQLite database, it must be stored in a permanent location on the device. This means that the database will not be deleted when the app is restarted. Depending on which PowerSync SDK you are using, you may need to use framework-specific methods to achieve this. For example, with React Native + Expo you can use the [`expo-file-system`](https://docs.expo.dev/versions/latest/sdk/filesystem/) module. Once the database is downloaded, initialize the `PowerSyncDatabase` class with the file path and connect to the PowerSync instance. ```typescript theme={null} import { OPSqliteOpenFactory } from '@powersync/op-sqlite'; import { PowerSyncDatabase } from '@powersync/react-native'; import { AppSchema } from './Schema'; // FilePath is the permanent location of the pre-seeded SQLite database file on the device. export const configureDatabase = async (storeId: string) => { const opSqlite = new OPSqliteOpenFactory({ dbFilename: `${storeId}.sqlite`, dbLocation: FilePath.replace('file://', '') }); const powersync = new PowerSyncDatabase({ schema: AppSchema, database: opSqlite, }); // Call init() first, this will ensure the database is initialized, but not connected to the PowerSync instance. await powersync.init(); // Insert a new `client_id` key into the `ps_kv` table (use a value unique to this client) to avoid conflicts when the client connects to the PowerSync instance. await powersync.execute("INSERT INTO ps_kv (key, value) VALUES (?, ?)", ["client_id", "1234567890"]); // Connect to the PowerSync instance. 
await powersync.connect(connector); } ``` It's important that you insert a new `client_id` key into the `ps_kv` table to avoid conflicts when the client connects to the PowerSync instance. At this point the client would connect to the PowerSync instance and sync the data from where the pre-seeded snapshot was created, bypassing the initial sync process. # Querying JSON Data in SQLite Source: https://docs.powersync.com/client-sdks/advanced/query-json-in-sqlite How to query JSON data synced from your backend and stored as strings in SQLite # Overview When syncing data from your backend source database to PowerSync, JSON columns (whether from MongoDB documents, PostgreSQL JSONB columns, or other JSON data types) are stored as `TEXT` in SQLite. See the [type mapping guide](/sync/types) for more details. This guide shows you how to effectively query and filter JSON data using SQLite's powerful JSON functions on the client. ## Understanding JSON Storage in PowerSync Your backend source database might store structured data as JSON in various ways: * **MongoDB**: Nested documents and arrays * **PostgreSQL**: JSONB, JSON, array, or custom types * **MySQL**: JSON columns * **SQL Server**: JSON columns Regardless of the source, PowerSync syncs these JSON structures to SQLite as `TEXT` columns. On the client side, you can query this data using SQLite's built-in JSON functions without needing to parse it yourself. Learn more about [how PowerSync handles JSON, arrays, and custom types](/client-sdks/advanced/custom-types-arrays-and-json#javascript). 
## Example Data Structure Let's use a task management system where tasks have nested metadata: ```json theme={null} { "id": "task_123", "title": "Redesign homepage", "assignees": [ { "user_id": "user_001", "role": "designer", "hours_allocated": 20 }, { "user_id": "user_002", "role": "developer", "hours_allocated": 40 } ], "tags": ["urgent", "frontend", "design"], "metadata": { "priority": 1, "sprint": "2024-Q1", "dependencies": ["task_100", "task_101"] } } ``` In SQLite, the `assignees`, `tags`, and `metadata` columns are stored as JSON strings. For details on how different backend types map to SQLite, see [database types and mapping](/sync/types). ## JSON Extraction Basics ### Standard [`json_extract()`](https://sqlite.org/json1.html#jex) Function Extract values from JSON using path expressions: ```sql theme={null} SELECT id, title, json_extract(metadata, '$.priority') AS priority, json_extract(metadata, '$.sprint') AS sprint FROM tasks; ``` **Path syntax:** * `$` - root element * `.` - object member access * `[index]` - array element access ### Shorthand: The -> and ->> Operators SQLite provides convenient [shorthand operators](https://sqlite.org/json1.html#jptr) for JSON extraction: ```sql theme={null} SELECT id, title, metadata -> '$.dependencies' AS dependencies_array, -- maintains JSON array metadata ->> '$.sprint' AS sprint FROM tasks; ``` **The difference between [-> and ->>](https://sqlite.org/json1.html#jptr):** * `->` returns JSON (preserves type information, quotes strings) * `->>` extracts the value unquoted (strings as TEXT, numbers/booleans as their native types) ```sql theme={null} -- Using -> (returns JSON) SELECT metadata -> '$.priority' FROM tasks; -- Result: 1 (as a SQLite TEXT value) -- Using ->> (returns parsed value) SELECT metadata ->> '$.priority' FROM tasks; -- Result: 1 (as a SQLite INTEGER value) -- For strings, the difference is clearer: SELECT metadata -> '$.sprint' FROM tasks; -- Result: "2024-Q1" (with quotes, as JSON) SELECT 
metadata ->> '$.sprint' FROM tasks;
-- Result: 2024-Q1 (without quotes, as text)
```

**When to use which:**

* Use `->>` when extracting **final values** for display or comparison
* Use `->` when extracting **intermediate JSON** for further processing
* `->>` preserves data types (numbers stay numbers, not strings)

### Nested Path Access

Access deeply nested values:

```sql theme={null}
-- All three access the same nested element:
json_extract(metadata, '$.dependencies[0]')
metadata -> '$.dependencies[0]'
metadata -> '$.dependencies' -> '$[0]'
```

## Querying Arrays with [`json_each()`](https://sqlite.org/json1.html#jeach)

### Flattening Simple Arrays

For the `tags` array, use `json_each()` to create one row per element:

```sql theme={null}
SELECT
  t.id,
  t.title,
  tag.value AS tag
FROM tasks t, json_each(t.tags) AS tag
WHERE tag.value = 'urgent';
```

**What's happening:**

* `json_each(t.tags)` creates a virtual table with one row per tag
* `tag.value` contains each individual tag string
* You can filter, join, or aggregate these expanded rows

### Querying Nested Objects in Arrays

For complex objects like `assignees`:

```sql theme={null}
SELECT
  t.id,
  t.title,
  assignee.value ->> '$.user_id' AS user_id,
  assignee.value ->> '$.role' AS role,
  assignee.value ->> '$.hours_allocated' AS hours
FROM tasks t, json_each(t.assignees) AS assignee
WHERE (assignee.value ->> '$.role') = 'developer';
```

**Key points:**

* Each `assignee.value` is a JSON object representing one assignee
* Use `->>` to extract text values for comparison
* Use `->>` (not `->`) when you need native numeric values for calculations, since `->` returns the value as JSON text

## Real-World Query Examples

### Example 1: Finding Tasks by Assignee

**Use case:** Show all tasks assigned to a specific user.
```sql theme={null} SELECT DISTINCT t.id, t.title, t.metadata ->> '$.priority' AS priority FROM tasks t, json_each(t.assignees) AS assignee WHERE (assignee.value ->> '$.user_id') = 'user_001' ORDER BY t.metadata ->> '$.priority'; ``` ### Example 2: Calculating Total Hours by Role **Use case:** Aggregate hours across all tasks grouped by role. ```sql theme={null} SELECT assignee.value ->> '$.role' AS role, SUM(assignee.value ->> '$.hours_allocated') AS total_hours, COUNT(DISTINCT t.id) AS task_count FROM tasks t, json_each(t.assignees) AS assignee GROUP BY role ORDER BY total_hours DESC; ``` ### Example 3: Tasks with Specific Tags **Use case:** Find tasks tagged with multiple specific tags. ```sql theme={null} -- Tasks with BOTH 'urgent' AND 'frontend' tags SELECT DISTINCT t.* FROM tasks t WHERE EXISTS ( SELECT 1 FROM json_each(t.tags) WHERE value = 'urgent' ) AND EXISTS ( SELECT 1 FROM json_each(t.tags) WHERE value = 'frontend' ); ``` Or using a simpler approach for single tags: ```sql theme={null} -- Tasks with 'urgent' tag SELECT * FROM tasks t, json_each(t.tags) AS tag WHERE tag.value = 'urgent'; ``` ### Example 4: Filtering by Array Contents **Use case:** Find tasks that depend on a specific task ID. ```sql theme={null} SELECT * FROM tasks t, json_each(t.metadata -> '$.dependencies') AS dep WHERE dep.value = 'task_100'; ``` ### Example 5: Checking for Array Membership **Use case:** Check if a task has any dependencies. ```sql theme={null} SELECT id, title, json_array_length(metadata -> '$.dependencies') AS dep_count FROM tasks WHERE json_array_length(metadata -> '$.dependencies') > 0; ``` ## Working with Comma or Delimiter-Separated Values Sometimes JSON strings contain delimiter-separated values (like `"NYC;LAX;MIA"`). 
Here's how to query them efficiently: ```sql theme={null} -- Assume tasks have a field: "approved_by": "user_001;user_002;user_003" -- Find tasks approved by a specific user SELECT * FROM tasks WHERE instr( ';' || (metadata ->> '$.approved_by') || ';', ';user_001;' ) > 0; ``` **Why this pattern works:** * Wraps the value: `";user_001;user_002;user_003;"` * Searches for `;user_001;` ensuring exact delimiter-bounded match * Prevents false matches (won't match "user\_0011" when searching for "user\_001") **Avoid `LIKE` for delimited strings:** ```sql theme={null} -- ❌ WRONG - can match partial values WHERE (metadata ->> '$.approved_by') LIKE '%user_001%' -- This would incorrectly match "user_0011" or "user_001_archive" -- ✅ CORRECT - exact delimiter match WHERE instr(';' || (metadata ->> '$.approved_by') || ';', ';user_001;') > 0 ``` ## Advanced Techniques ### Using CTEs for Cleaner Queries Common Table Expressions make complex JSON queries more readable: ```sql theme={null} WITH task_assignees AS ( SELECT t.id, t.title, assignee.value ->> '$.user_id' AS user_id, assignee.value ->> '$.role' AS role, assignee.value ->> '$.hours_allocated' AS hours FROM tasks t, json_each(t.assignees) AS assignee ) SELECT user_id, role, SUM(hours) AS total_hours, COUNT(*) AS assignment_count FROM task_assignees WHERE hours > 10 GROUP BY user_id, role; ``` ### Combining Multiple JSON Arrays Query across multiple nested arrays: ```sql theme={null} SELECT DISTINCT t.id, t.title, assignee.value ->> '$.user_id' AS assigned_to, tag.value AS tag FROM tasks t, json_each(t.assignees) AS assignee, json_each(t.tags) AS tag WHERE tag.value IN ('urgent', 'high-priority') AND assignee.value ->> '$.role' = 'developer'; ``` **Cartesian product warning:** When using multiple `json_each()` calls, you create a Cartesian product. A task with 3 assignees and 4 tags creates 12 rows. Use `DISTINCT` when needed and filter early to minimize row expansion. 
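The row-expansion arithmetic above is easy to check in isolation. Here is a minimal sketch using Python's stdlib `sqlite3` module (the data is hypothetical: one task with 3 assignees and 4 tags):

```python theme={null}
import sqlite3, json

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tasks (id TEXT, assignees TEXT, tags TEXT)")
con.execute(
    "INSERT INTO tasks VALUES (?, ?, ?)",
    (
        "task_123",
        json.dumps([{"user_id": f"user_{n}"} for n in range(3)]),  # 3 assignees
        json.dumps(["urgent", "frontend", "design", "q1"]),        # 4 tags
    ),
)

# Two json_each() calls form a Cartesian product: 3 assignees x 4 tags = 12 rows.
(rows,) = con.execute(
    """
    SELECT COUNT(*)
    FROM tasks t, json_each(t.assignees) AS assignee, json_each(t.tags) AS tag
    """
).fetchone()
print(rows)  # 12
```

This is why filtering early (and using `DISTINCT` where appropriate) matters when combining multiple `json_each()` calls.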
### Checking for Key Existence Verify if a JSON key exists: ```sql theme={null} -- Check if 'sprint' key exists SELECT * FROM tasks WHERE json_extract(metadata, '$.sprint') IS NOT NULL; -- Or using shorthand SELECT * FROM tasks WHERE metadata -> '$.sprint' IS NOT NULL; ``` ## Performance Optimization **Important Performance Considerations** 1. **Index JSON columns for better performance**: If you frequently query JSON fields, add indexes to the JSON string columns in your `AppSchema`: ```typescript theme={null} const tasks = new Table( { id: column.text, title: column.text, metadata: column.text, tags: column.text, }, { indexes: { tagsIndex: ['tags'] } } ); ``` 2. **Minimize `json_each()` usage**: Each `json_each()` call expands rows. For a table with 10,000 tasks averaging 5 assignees each, you're processing 50,000 rows. 3. **Use EXISTS for membership checks**: More efficient than joining: ```sql theme={null} -- ✅ BETTER for large datasets SELECT * FROM tasks t WHERE EXISTS ( SELECT 1 FROM json_each(t.tags) WHERE value = 'urgent' ); -- vs joining which creates all row combinations ``` 4. 
**Cache extracted values in CTEs**: Extract once, use multiple times: ```sql theme={null} WITH task_metrics AS ( SELECT t.id, t.title, t.metadata, COUNT(assignee.value) AS assignee_count, SUM(assignee.value ->> '$.hours_allocated') AS total_hours FROM tasks t, json_each(t.assignees) AS assignee GROUP BY t.id, t.title, t.metadata ) SELECT * FROM task_metrics WHERE metadata ->> '$.sprint' = '2024-Q1' AND assignee_count > 1 ORDER BY total_hours DESC; ``` ## Useful JSON Functions Beyond extraction, SQLite offers many JSON utilities: ```sql theme={null} -- Get array length SELECT json_array_length(tags) FROM tasks; -- Check JSON validity SELECT json_valid(metadata) FROM tasks; -- Get all object keys SELECT json_each.key, json_each.value FROM tasks, json_each(tasks.metadata) WHERE id = 'task_123'; -- Get JSON type of a value SELECT json_type(metadata -> '$.priority') FROM tasks; -- Returns: 'integer', 'text', 'array', 'object', 'null', etc. -- Aggregate JSON arrays SELECT json_group_array(tag.value) FROM tasks t, json_each(t.tags) AS tag WHERE t.id = 'task_123'; ``` ## Common Gotchas **Watch out for these common issues:** 1. **NULL vs missing keys**: `json_extract()` returns `NULL` for non-existent paths. Always check for NULL: ```sql theme={null} WHERE COALESCE(metadata ->> '$.priority', 999) = 1 ``` 2. **Type mismatches**: ```sql theme={null} -- ❌ String comparison (wrong!) WHERE metadata -> '$.priority' > 5 -- ✅ BEST: Use ->> for direct numeric extraction WHERE metadata ->> '$.priority' > 5 ``` 3. **Array index bounds**: Out-of-bounds array access returns NULL, not an error: ```sql theme={null} SELECT metadata -> '$.dependencies[99]' -- Returns NULL if not enough elements ``` 4. **Quotes in JSON strings**: Use `->>` to get unquoted text, not `->`: ```sql theme={null} -- ❌ Returns: "2024-Q1" (with quotes) WHERE metadata -> '$.sprint' = '2024-Q1' -- ✅ Returns: 2024-Q1 (without quotes) WHERE metadata ->> '$.sprint' = '2024-Q1' ``` 5. 
**Performance on large arrays**: `json_each()` on arrays with thousands of elements can be slow. Consider data restructuring for such cases. ## Summary Querying JSON data in SQLite effectively requires: * Understanding that JSON is stored as strings but queryable with built-in functions * Using `json_extract()` or the shorthand `->` and `->>` operators * Leveraging `json_each()` to flatten arrays for filtering and aggregation * Being mindful of type conversions and NULL handling * Optimizing queries by filtering early and considering denormalization for critical paths With these techniques, you can query complex nested data structures synced from your backend while maintaining good performance on mobile and edge devices. For complete SQLite JSON function reference, see the [SQLite JSON documentation](https://www.sqlite.org/json1.html). # Raw SQLite Tables to Bypass JSON View Limitations Source: https://docs.powersync.com/client-sdks/advanced/raw-tables Use raw tables for native SQLite functionality and improved performance. Raw tables are an experimental feature. We're actively seeking feedback on: * API design and developer experience * Additional features or optimizations needed View the [roadmap doc](https://docs.google.com/document/d/1h2sayKHsQ2hwSAaBlR8z7ReEVDeJ2t1Zzs8N0um5SJc/edit?tab=t.0) to see our latest thinking on the future of this feature, and join our [Discord community](https://discord.gg/powersync) to share your experience and get help. By default, PowerSync uses a [JSON-based view system](/architecture/client-architecture#schema) where data is stored schemalessly in JSON format and then presented through SQLite views based on the client-side schema. Raw tables allow you to define native SQLite tables in the client-side schema, bypassing this. This eliminates overhead associated with extracting values from the JSON data and provides access to advanced SQLite features like foreign key constraints and custom indexes. 
**Availability**

Features described on this page were introduced in the following versions of our client SDKs:

* **JavaScript** (Node: `0.18.0`, React-Native: `1.31.0`, Web: `1.35.0`)
* **Dart**: Version 1.18.0 of `package:powersync`.
* **Kotlin**: Version 1.11.0.
* **Swift**: Version 1.12.0.
* **Rust**: Version 0.0.4.
* This feature is not yet available on our .NET SDK.

## When to Use Raw Tables

Consider raw tables when you need:

* **Indexes** - PowerSync's default schema has basic support for indexes on columns, while raw tables give you complete control to create indexes on expressions, use `GENERATED` columns, etc.
* **Improved performance** for complex queries (e.g., `SELECT SUM(value) FROM transactions`) - raw tables more efficiently get these values directly from the SQLite column, instead of extracting the value from the JSON object on every row.
* **Reduced storage overhead** - eliminate the JSON object overhead for each row in the `ps_data__<table>.data` column.
* **To manually create tables** - Sometimes you need full control over table creation, for example when implementing custom triggers.

**Advanced SQLite features** like `FOREIGN KEY` and `ON DELETE CASCADE` constraints need [special consideration](#using-foreign-keys).

## How Raw Tables Work

### Current JSON-Based System

Currently, the sync system involves two general steps:

1. Download bucket operations from the PowerSync Service.
2. Once the client has a complete checkpoint and no pending local changes in the upload queue, sync the local database with the bucket operations.

The bucket operations use JSON to store the individual operation data. The local database uses tables with a simple schemaless `ps_data__<table>` structure containing only an `id` (TEXT) and `data` (JSON) column. PowerSync automatically creates views on that table that extract JSON fields to resemble standard tables reflecting your schema.

### Raw Tables Approach

When opting in to raw tables, you are responsible for creating the tables before using them - PowerSync will no longer create them automatically. Because PowerSync does not manage raw tables, you need to manually:

1. Define how PowerSync's [schemaless protocol](/architecture/powersync-protocol#protocol) maps to your raw tables — see [Define sync mapping for raw tables](#define-sync-mapping-for-raw-tables)
2. Define triggers that capture local writes from raw tables — see [Capture local writes with triggers](#capture-local-writes-with-triggers)

For the purpose of this example, consider a simple table like this:

```sql theme={null}
CREATE TABLE todo_lists (
  id TEXT NOT NULL PRIMARY KEY,
  created_by TEXT NOT NULL,
  title TEXT NOT NULL,
  content TEXT
) STRICT;
```

### Define sync mapping for raw tables

To sync into the raw `todo_lists` table instead of `ps_data__<table>`, PowerSync needs the SQL statements that extract columns from the untyped JSON protocol used during syncing. Internally, this involves two SQL statements:

1.
A `put` SQL statement for upserts, responsible for creating a `todo_lists` row or updating it based on its `id` and data columns.
2. A `delete` SQL statement responsible for deletions.

The PowerSync client (as part of our SDKs) automatically runs these statements in response to sync lines sent from the PowerSync Service. In most cases, these statements can be inferred automatically. However, the statements can also be given explicitly if customization is needed.

#### Inferring sync statements

In most cases, the `put` and `delete` statements are obvious when looking at the structure of the table. With the `todo_lists` example, a delete statement would be `DELETE FROM todo_lists WHERE id = $row_id_to_delete`. Similarly, a `put` statement would use a straightforward upsert to create or update rows.

When the SDK knows the name of the local table you're inserting into, it can infer statements automatically by analyzing the `CREATE TABLE` structure. The name of raw tables can be provided with the `RawTableSchema` type:

```javascript JavaScript theme={null}
// Raw tables are not included in the regular Schema() object.
// Instead, add them afterwards using withRawTables().
const mySchema = new Schema({
  // Define your PowerSync-managed schema here
  // ...
});
mySchema.withRawTables({
  todo_lists: {
    schema: {},
  }
});
```

```dart Dart theme={null}
// Raw tables are not part of the regular tables list and can be defined with the optional rawTables parameter.
const schema = Schema([], rawTables: [ RawTable.inferred( name: 'todo_lists', schema: RawTableSchema(), ), ]); ``` ```kotlin Kotlin theme={null} // To define a raw table, include it in the list of tables passed to the Schema val schema = Schema(listOf( RawTable( name = "todo_lists", schema = RawTableSchema(), ) )) ``` ```swift Swift theme={null} // To define a raw table, include it in the list of tables passed to the Schema let lists = RawTable( name: "todo_lists", schema: RawTableSchema() ) let schema = Schema(lists) ``` ```csharp .NET theme={null} Unfortunately, raw tables are not yet available in the .NET SDK. ``` ```rust Rust theme={null} use powersync::schema::{RawTable, RawTableSchema, Schema}; pub fn app_schema() -> Schema { let mut schema = Schema::default(); let table = RawTable::with_schema("todo_lists", RawTableSchema::default()); schema.raw_tables.push(table); schema } ``` **When to use inferred statements** If you have a local table that directly corresponds to the schema of a synced output table, inferred statements greatly simplify the schema setup. You will need explicit sync statements if, for instance: * you want to apply transformations on synced values before inserting them into your local database. * you need custom default values for synced `NULL` values. * you're using the [rest column pattern](#the-_extra-column-pattern) to help with migrations. * you have a custom setup where a raw table stores data from multiple source tables. If the name of the SQLite table and the name of the synced table aren't the same, the inferred statements can be customized. For instance, say you had a `local_users` table in your SQLite database and want to sync rows from the `users` table in your backend. Here, the name of the raw table must be `users` to match PowerSync definitions, but the `RawTableSchema` type on every SDK has an optional `tableName` field that can be set to `local_users` in this case. 
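Following the JavaScript shape from the examples above, such a customization might look like this (a sketch based on the prose: the backend table is `users`, the local SQLite table is `local_users`):

```javascript theme={null}
// The key must match the synced table name sent by the PowerSync Service.
mySchema.withRawTables({
  users: {
    schema: {
      // Local SQLite table that actually stores the synced rows.
      tableName: 'local_users',
    },
  },
});
```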
#### Explicit sync statements

To pass statements explicitly, use the `put` and `delete` parameters available in each SDK. A statement consists of two parts:

1. An SQL string of the statement to run. It should use positional parameters (`?`) as placeholders for values from the synced row.
2. An array describing how each positional parameter is instantiated. `delete` statements can reference the id of the affected row, while `put` statements can also reference individual column values. A `rest` parameter is also available; see [migrations](#the-_extra-column-pattern) for details on how that can be useful.

Declaring these statements and parameters happens as part of the schema passed to PowerSync databases:

```javascript JavaScript theme={null}
// Raw tables are not included in the regular Schema() object.
// Instead, add them afterwards using withRawTables().
// The values of parameters are described as a JSON array either containing:
//  - the string 'Id' to reference the id of the affected row.
//  - the object { Column: name } to reference the value of the column 'name'.
const mySchema = new Schema({
  // Define your PowerSync-managed schema here
  // ...
});
mySchema.withRawTables({
  // The name here doesn't have to match the name of the table in SQL. Instead, it's used to match
  // the table name from the backend source database as sent by the PowerSync Service.
  todo_lists: {
    put: {
      sql: 'INSERT OR REPLACE INTO todo_lists (id, created_by, title, content) VALUES (?, ?, ?, ?)',
      params: ['Id', { Column: 'created_by' }, { Column: 'title' }, { Column: 'content' }]
    },
    delete: {
      sql: 'DELETE FROM todo_lists WHERE id = ?',
      params: ['Id']
    }
  }
});
// We will simplify this API after understanding the use-cases for raw tables better.
```

```dart Dart theme={null}
// Raw tables are not part of the regular tables list and can be defined with the optional rawTables parameter.
final schema = Schema(const [], rawTables: const [ RawTable( // The name here doesn't have to match the name of the table in SQL. Instead, it's used to match // the table name from the backend source database as sent by the PowerSync Service. name: 'todo_lists', put: PendingStatement( sql: 'INSERT OR REPLACE INTO todo_lists (id, created_by, title, content) VALUES (?, ?, ?, ?)', params: [ .id(), .column('created_by'), .column('title'), .column('content'), ], ), delete: PendingStatement( sql: 'DELETE FROM todo_lists WHERE id = ?', params: [ .id(), ], ), ), ]); ``` ```kotlin Kotlin theme={null} // To define a raw table, include it in the list of tables passed to the Schema val schema = Schema(listOf( RawTable( // The name here doesn't have to match the name of the table in SQL. Instead, it's used to match // the table name from the backend database as sent by the PowerSync service. name = "todo_lists", put = PendingStatement( "INSERT OR REPLACE INTO todo_lists (id, created_by, title, content) VALUES (?, ?, ?, ?)", listOf( PendingStatementParameter.Id, PendingStatementParameter.Column("created_by"), PendingStatementParameter.Column("title"), PendingStatementParameter.Column("content") ) ), delete = PendingStatement( "DELETE FROM todo_lists WHERE id = ?", listOf(PendingStatementParameter.Id) ) ) )) ``` ```swift Swift theme={null} // To define a raw table, include it in the list of tables passed to the Schema let lists = RawTable( name: "todo_lists", put: PendingStatement( sql: "INSERT OR REPLACE INTO todo_lists (id, created_by, title, content) VALUES (?, ?, ?, ?)", parameters: [.id, .column("created_by"), .column("title"), .column("content")] ), delete: PendingStatement( sql: "DELETE FROM todo_lists WHERE id = ?", parameters: [.id], ), ) let schema = Schema(lists) ``` ```csharp .NET theme={null} Unfortunately, raw tables are not yet available in the .NET SDK. 
``` ```rust Rust theme={null} use powersync::schema::{PendingStatement, PendingStatementValue, RawTable, Schema}; pub fn app_schema() -> Schema { let mut schema = Schema::default(); let lists = RawTable::with_statements( "todo_lists", PendingStatement { sql: "INSERT OR REPLACE INTO todo_lists (id, created_by, title, content) VALUES (?, ?, ?, ?)".into(), params: vec![ PendingStatementValue::Id, PendingStatementValue::Column("created_by".into()), PendingStatementValue::Column("title".into()), PendingStatementValue::Column("content".into()), ] }, PendingStatement { sql: "DELETE FROM todo_lists WHERE id = ?".into(), params:vec![PendingStatementValue::Id] } ); schema.raw_tables.push(lists); schema } ``` After adding raw tables to the schema, you're also responsible for creating them by executing the corresponding `CREATE TABLE` statement before `connect()`-ing the database. ### Capture local writes with triggers PowerSync uses an internal SQLite table to collect local writes. For PowerSync-managed views, a trigger for insertions, updates and deletions automatically forwards local mutations into this table. When using raw tables, defining those triggers is your responsibility. 
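To illustrate the mechanism in isolation, here is a plain-SQLite sketch, runnable with Python's stdlib `sqlite3` module. An ordinary `crud_log` table stands in for PowerSync's internal write log (which is provided by the PowerSync extension in a real app), and a trigger forwards local inserts into it:

```python theme={null}
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE todo_lists (
  id TEXT NOT NULL PRIMARY KEY,
  title TEXT NOT NULL
);

-- Ordinary table standing in for PowerSync's internal write log.
CREATE TABLE crud_log (op TEXT, id TEXT, type TEXT, data TEXT);

-- Forward local inserts on the raw table into the write log.
CREATE TRIGGER todo_lists_insert AFTER INSERT ON todo_lists
FOR EACH ROW BEGIN
  INSERT INTO crud_log (op, id, type, data)
  VALUES ('PUT', NEW.id, 'todo_lists', json_object('title', NEW.title));
END;
""")

con.execute("INSERT INTO todo_lists (id, title) VALUES ('list_1', 'Groceries')")
print(con.execute("SELECT op, id, type, data FROM crud_log").fetchone())
# ('PUT', 'list_1', 'todo_lists', '{"title":"Groceries"}')
```

In a real app the trigger inserts into the extension's write log instead of a plain table, which also lets PowerSync distinguish local writes from changes applied during sync.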
The [PowerSync SQLite extension](https://github.com/powersync-ja/powersync-sqlite-core) creates an insert-only virtual table named `powersync_crud` with these columns:

```sql theme={null}
-- This table is part of the PowerSync SQLite core extension
CREATE VIRTUAL TABLE powersync_crud(
  -- The type of operation: 'PUT', 'PATCH' or 'DELETE'
  op TEXT,
  -- The id of the affected row
  id TEXT,
  -- The name of the affected table as defined in the schema
  type TEXT,
  -- optional (not set on deletes): The column values for the row
  data TEXT,
  -- optional: Previous column values to include in a CRUD entry
  old_values TEXT,
  -- optional: Metadata for the write to include in a CRUD entry
  metadata TEXT
);
```

The virtual table associates local mutations with the current transaction and ensures writes made during the sync process (applying server-side changes) don't count as local writes. The role of triggers is to insert into `powersync_crud` to record writes on raw tables. Like [with statements](#inferring-sync-statements), these triggers can usually be inferred from the schema of the table.

#### Inferred triggers

The `powersync_create_raw_table_crud_trigger` SQL function is available in migrations to create triggers for raw tables. It takes three arguments:

1. A JSON description of the raw table with options, which can be generated by PowerSync SDKs.
2. The name of the trigger to create.
3. The type of write for which to generate a trigger (`INSERT`, `UPDATE` or `DELETE`). Typically, you'd generate all three.

`powersync_create_raw_table_crud_trigger` parses the structure of tables from the database schema, so it must be called *after* the raw table has been created.
```javascript JavaScript theme={null}
const table: RawTable = {
  name: 'todo_lists',
  schema: {}
};

await database.execute("CREATE TABLE todo_lists (...)");
for (const write of ["INSERT", "UPDATE", "DELETE"]) {
  await database.execute(
    "SELECT powersync_create_raw_table_crud_trigger(?, ?, ?)",
    [JSON.stringify(Schema.rawTableToJson(table)), `todo_lists_${write}`, write],
  );
}
```

```dart Dart theme={null}
const table = RawTable.inferred(
  name: 'todo_lists',
  schema: RawTableSchema(),
);

await database.execute("CREATE TABLE todo_lists (...)");
for (final write in ["INSERT", "UPDATE", "DELETE"]) {
  await database.execute(
    "SELECT powersync_create_raw_table_crud_trigger(?, ?, ?)",
    [json.encode(table), "todo_lists_$write", write],
  );
}
```

```kotlin Kotlin theme={null}
// To define a raw table, include it in the list of tables passed to the Schema
val table = RawTable(
  name = "todo_lists",
  schema = RawTableSchema(),
)

database.execute("CREATE TABLE todo_lists (...)")
for (write in listOf("INSERT", "UPDATE", "DELETE")) {
  database.execute(
    "SELECT powersync_create_raw_table_crud_trigger(?, ?, ?)",
    listOf(table.jsonDescription(), "todo_lists_$write", write),
  )
}
```

```swift Swift theme={null}
let lists = RawTable(
  // The name here specifies the name of the table in your backend database or sync configuration.
  name: "todo_lists",
  schema: RawTableSchema()
)

try await database.execute("CREATE TABLE todo_lists (...)")
for write in ["INSERT", "UPDATE", "DELETE"] {
  try await database.execute(
    sql: "SELECT powersync_create_raw_table_crud_trigger(?, ?, ?)",
    parameters: [
      lists.jsonDescription(),
      "todo_lists_\(write)",
      write,
    ]
  )
}
```

```csharp .NET theme={null}
Unfortunately, raw tables are not yet available in the .NET SDK.
```

```rust Rust theme={null}
use powersync::schema::{RawTable, RawTableSchema};

pub async fn configure_raw_tables(db: &PowerSyncDatabase) -> Result<(), PowerSyncError> {
    let raw_table = RawTable::with_schema("todo_lists", RawTableSchema::default());
    let serialized_table = serde_json::to_string(&raw_table).unwrap();

    let mut writer = db.writer().await?;
    writer.execute("CREATE TABLE todo_lists (...);")?;

    let mut trigger_stmt = writer.prepare("SELECT powersync_create_raw_table_crud_trigger(?, ?, ?)")?;
    for write in &["INSERT", "UPDATE", "DELETE"] {
        trigger_stmt.query_one(
            params![serialized_table, format!("todo_lists_{write}"), write],
            |_| Ok(()),
        )?;
    }

    Ok(())
}
```

Note that these triggers only need to be created once - but it is your responsibility to drop and re-create them after altering the table.

Regular JSON-based tables include [advanced options](/client-sdks/advanced/custom-types-arrays-and-json#advanced-schema-options-to-process-writes). These are also available on raw tables, and they affect the generated trigger. You can track previous values, mark a raw table as insert-only, or configure the trigger to ignore empty updates by passing an `options` parameter (Rust, Swift, Dart, Kotlin) or by setting the options on the object literal when defining raw tables (JavaScript).

#### Explicit triggers

Triggers on raw tables can also be defined explicitly instead of using `powersync_create_raw_table_crud_trigger`.
It is your responsibility to set up and migrate these triggers along with the table:

```sql theme={null}
CREATE TRIGGER todo_lists_insert AFTER INSERT ON todo_lists
FOR EACH ROW BEGIN
  INSERT INTO powersync_crud (op, id, type, data) VALUES ('PUT', NEW.id, 'todo_lists', json_object(
    'created_by', NEW.created_by,
    'title', NEW.title,
    'content', NEW.content
  ));
END;

CREATE TRIGGER todo_lists_update AFTER UPDATE ON todo_lists
FOR EACH ROW BEGIN
  SELECT CASE WHEN (OLD.id != NEW.id) THEN RAISE (FAIL, 'Cannot update id') END;

  -- TODO: You may want to replace the json_object with a powersync_diff call of the old and new values, or
  -- use your own diff logic to avoid marking unchanged columns as updated.
  INSERT INTO powersync_crud (op, id, type, data) VALUES ('PATCH', NEW.id, 'todo_lists', json_object(
    'created_by', NEW.created_by,
    'title', NEW.title,
    'content', NEW.content
  ));
END;

CREATE TRIGGER todo_lists_delete AFTER DELETE ON todo_lists
FOR EACH ROW BEGIN
  INSERT INTO powersync_crud (op, id, type) VALUES ('DELETE', OLD.id, 'todo_lists');
END;
```

#### Using foreign keys

Raw tables support advanced table constraints, including foreign keys. When enabling foreign keys, however, you need to be aware of the following:

1. While PowerSync will always apply synced data in a transaction, there is no way to control the order in which rows get applied. For this reason, foreign keys need to be configured with `DEFERRABLE INITIALLY DEFERRED`.
2. When using [stream priorities](/sync/advanced/prioritized-sync), you need to ensure you don't have foreign keys from high-priority rows to lower-priority data. PowerSync applies data in one transaction per priority, so these foreign keys would not work.
3. As usual when using foreign keys, note that they need to be explicitly enabled with `pragma foreign_keys = on`.

## Local-Only Columns

Raw tables allow you to add columns that exist only on the client and are never synced to the backend.
This is useful for client-specific state like user preferences, local notes, or UI flags that should persist across app restarts but have no equivalent in the backend database. Local-only columns are not supported with PowerSync's default [JSON-based view system](/architecture/client-architecture#schema). Raw tables are required for this functionality. Building on the `todo_lists` example above, you can add local-only columns such as `is_pinned` and `local_notes`: ```sql theme={null} CREATE TABLE IF NOT EXISTS todo_lists ( id TEXT NOT NULL PRIMARY KEY, -- Synced columns created_by TEXT NOT NULL, title TEXT NOT NULL, content TEXT, -- Local-only columns (not synced) is_pinned INTEGER NOT NULL DEFAULT 0, local_notes TEXT ) STRICT; ``` ### With inferred statements and triggers Both the inferred `put` and `delete` statements as well as triggers generated by `powersync_create_raw_table_crud_trigger` support local-only columns. To configure this, include a `syncedColumns` array on the `RawTableSchema`: ```javascript JavaScript theme={null} const table: RawTable = { name: 'todo_lists', schema: { syncedColumns: ['created_by', 'title', 'content'], }, }; ``` ```dart Dart theme={null} const table = RawTable.inferred( name: 'todo_lists', schema: RawTableSchema( syncedColumns: ['created_by', 'title', 'content'], ), ); ``` ```kotlin Kotlin theme={null} // To define a raw table, include it in the list of tables passed to the Schema val table = RawTable( name = "todo_lists", schema = RawTableSchema( syncedColumns = listOf("created_by", "title", "content"), ), ) ``` ```swift Swift theme={null} let lists = RawTable( name: "todo_lists", schema: RawTableSchema( syncedColumns: ["created_by", "title", "content"] ) ) ``` ```csharp .NET theme={null} Unfortunately, raw tables are not yet available in the .NET SDK. 
``` ```rust Rust theme={null} use powersync::schema::{RawTable, RawTableSchema}; let raw_table = RawTable::with_schema("todo_lists", { let mut info = RawTableSchema::default(); // Columns not included in this list will not be synced. info.synced_columns = Some(vec!["created_by", "title", "content"]); info }); ``` ### With explicit statements The standard raw table setup requires modifications to support local-only columns: #### Use upsert instead of INSERT OR REPLACE The `put` statement must use `INSERT ... ON CONFLICT(id) DO UPDATE SET` instead of `INSERT OR REPLACE`. `INSERT OR REPLACE` deletes and re-inserts the row, which resets local-only columns to their defaults on every sync update. An upsert only updates the specified synced columns, leaving local-only columns intact. Only synced columns should be referenced in the `put` params. Local-only columns are omitted entirely: ```javascript JavaScript theme={null} schema.withRawTables({ todo_lists: { put: { sql: `INSERT INTO todo_lists (id, created_by, title, content) VALUES (?, ?, ?, ?) ON CONFLICT(id) DO UPDATE SET created_by = excluded.created_by, title = excluded.title, content = excluded.content`, params: ['Id', { Column: 'created_by' }, { Column: 'title' }, { Column: 'content' }] }, delete: { sql: 'DELETE FROM todo_lists WHERE id = ?', params: ['Id'] } } }); ``` ```dart Dart theme={null} final schema = Schema(const [], rawTables: const [ RawTable( name: 'todo_lists', put: PendingStatement( sql: '''INSERT INTO todo_lists (id, created_by, title, content) VALUES (?, ?, ?, ?) 
ON CONFLICT(id) DO UPDATE SET created_by = excluded.created_by, title = excluded.title, content = excluded.content''', params: [ PendingStatementValue.id(), PendingStatementValue.column('created_by'), PendingStatementValue.column('title'), PendingStatementValue.column('content'), ], ), delete: PendingStatement( sql: 'DELETE FROM todo_lists WHERE id = ?', params: [ PendingStatementValue.id(), ], ), ), ]); ``` ```kotlin Kotlin theme={null} val schema = Schema(listOf( RawTable( name = "todo_lists", put = PendingStatement( """INSERT INTO todo_lists (id, created_by, title, content) VALUES (?, ?, ?, ?) ON CONFLICT(id) DO UPDATE SET created_by = excluded.created_by, title = excluded.title, content = excluded.content""", listOf( PendingStatementParameter.Id, PendingStatementParameter.Column("created_by"), PendingStatementParameter.Column("title"), PendingStatementParameter.Column("content") ) ), delete = PendingStatement( "DELETE FROM todo_lists WHERE id = ?", listOf(PendingStatementParameter.Id) ) ) )) ``` ```swift Swift theme={null} let lists = RawTable( name: "todo_lists", put: PendingStatement( sql: """ INSERT INTO todo_lists (id, created_by, title, content) VALUES (?, ?, ?, ?) ON CONFLICT(id) DO UPDATE SET created_by = excluded.created_by, title = excluded.title, content = excluded.content """, parameters: [.id, .column("created_by"), .column("title"), .column("content")] ), delete: PendingStatement( sql: "DELETE FROM todo_lists WHERE id = ?", parameters: [.id], ), ) let schema = Schema(lists) ``` #### Exclude local-only columns from triggers The `json_object()` in both the INSERT and UPDATE triggers should only reference synced columns. Local-only columns must not appear in the CRUD payload sent to the backend. Additionally, the UPDATE trigger needs a `WHEN` clause that checks only synced columns. Without it, changes to local-only columns would fire the trigger and produce unnecessary CRUD entries that get uploaded. 
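Because raw tables and their triggers are plain SQLite, the effect of such a `WHEN` clause can be verified outside of any SDK. The sketch below uses Python's built-in `sqlite3` module with a stand-in `crud_log` table (the real `powersync_crud` table only exists inside a PowerSync database, and the table and column names here are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE todo_lists (
  id TEXT NOT NULL PRIMARY KEY,
  title TEXT,                           -- synced column
  is_pinned INTEGER NOT NULL DEFAULT 0  -- local-only column
);

-- Stand-in for powersync_crud, used here only to observe trigger firings.
CREATE TABLE crud_log (op TEXT, id TEXT, title TEXT);

-- Only fires when the synced column actually changes.
CREATE TRIGGER todo_lists_update AFTER UPDATE ON todo_lists
FOR EACH ROW
WHEN OLD.title IS NOT NEW.title
BEGIN
  INSERT INTO crud_log VALUES ('PATCH', NEW.id, NEW.title);
END;
""")

db.execute("INSERT INTO todo_lists (id, title) VALUES ('l1', 'groceries')")
db.execute("UPDATE todo_lists SET is_pinned = 1 WHERE id = 'l1'")     # local-only: trigger skipped
db.execute("UPDATE todo_lists SET title = 'chores' WHERE id = 'l1'")  # synced: trigger fires
print(db.execute("SELECT op, title FROM crud_log").fetchall())  # [('PATCH', 'chores')]
```

Without the `WHEN` clause, the first `UPDATE` would also log an entry, which is exactly the spurious upload described above.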
The `WHEN` clause must use `IS NOT` instead of `!=` for NULL-safe comparisons: `NULL != NULL` evaluates to `NULL` in SQLite, which would cause the trigger to skip legitimate changes to nullable synced columns.

```sql theme={null}
CREATE TRIGGER todo_lists_insert
AFTER INSERT ON todo_lists
FOR EACH ROW
BEGIN
  INSERT INTO powersync_crud (op, id, type, data) VALUES
    ('PUT', NEW.id, 'todo_lists', json_object(
      'created_by', NEW.created_by,
      'title', NEW.title,
      'content', NEW.content
    ));
END;

-- WHEN clause ensures this only fires for synced column changes.
-- Uses IS NOT instead of != for correct NULL handling.
CREATE TRIGGER todo_lists_update
AFTER UPDATE ON todo_lists
FOR EACH ROW
WHEN OLD.created_by IS NOT NEW.created_by
  OR OLD.title IS NOT NEW.title
  OR OLD.content IS NOT NEW.content
BEGIN
  INSERT INTO powersync_crud (op, id, type, data) VALUES
    ('PATCH', NEW.id, 'todo_lists', json_object(
      'created_by', NEW.created_by,
      'title', NEW.title,
      'content', NEW.content
    ));
END;

CREATE TRIGGER todo_lists_delete
AFTER DELETE ON todo_lists
FOR EACH ROW
BEGIN
  INSERT INTO powersync_crud (op, id, type) VALUES ('DELETE', OLD.id, 'todo_lists');
END;
```

With this setup, local-only columns can be queried and updated using standard SQL without affecting sync:

```sql theme={null}
-- Updating a local-only column does not produce a CRUD entry
UPDATE todo_lists SET is_pinned = 1 WHERE id = '...';

-- Local-only columns can be used in queries and ordering
SELECT * FROM todo_lists ORDER BY is_pinned DESC, title ASC;
```

## Migrations

In PowerSync's [JSON-based view system](/architecture/client-architecture#schema), the client-side schema is applied to the schemaless data, meaning no migrations are required. Raw tables, however, are excluded from this, so it is the developer's responsibility to manage migrations for these tables.
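One common way to handle this responsibility is a small version-based migration runner. The sketch below is not a PowerSync API; it is a generic SQLite pattern (shown with Python's built-in `sqlite3`, with an illustrative `MIGRATIONS` list) that tracks the applied schema version in the `user_version` pragma and runs any pending statements on startup:

```python
import sqlite3

# Append-only list: entry N migrates the schema from version N to N + 1.
MIGRATIONS = [
    "CREATE TABLE todo_lists (id TEXT NOT NULL PRIMARY KEY, title TEXT NOT NULL)",
    "ALTER TABLE todo_lists ADD COLUMN content TEXT",
]

def migrate(db: sqlite3.Connection) -> None:
    # user_version starts at 0 for a fresh database.
    version = db.execute("PRAGMA user_version").fetchone()[0]
    for target, statement in enumerate(MIGRATIONS[version:], start=version + 1):
        db.execute(statement)
        db.execute(f"PRAGMA user_version = {target}")

db = sqlite3.connect(":memory:")
migrate(db)
migrate(db)  # second call is a no-op: the database is already at the latest version

columns = [row[1] for row in db.execute("PRAGMA table_info(todo_lists)")]
print(columns)  # ['id', 'title', 'content']
```

The sections below discuss what those migration statements should look like so they stay consistent with data synced by PowerSync.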
### Adding raw tables as a new table

When you're adding new tables to your Sync Streams (or legacy Sync Rules), clients will start to sync data on those tables - even if the tables aren't mentioned in the client's schema yet. So at the time you're introducing a new raw table to your app, it's possible that PowerSync has already synced some data for that table, which would be stored in `ps_untyped`. When adding regular tables, PowerSync will automatically extract rows from `ps_untyped`. With raw tables, that step is your responsibility. To copy data, run these statements in a transaction after creating the table:

```sql theme={null}
INSERT INTO my_table (id, my_column, ...)
  SELECT id, data ->> 'my_column' FROM ps_untyped WHERE type = 'my_table';
DELETE FROM ps_untyped WHERE type = 'my_table';
```

This does not apply if you've been using the raw table from the beginning (and never called `connect()` without it) - you only need this for raw tables you already had locally. Another workaround is to clear PowerSync data when changing raw tables and opt for a full resync.

### Migrating to raw tables

To migrate from PowerSync-managed tables to raw tables:

1. Open the database with the new schema mentioning raw tables. PowerSync will copy data from tables previously managed by PowerSync into `ps_untyped`.
2. Create the raw tables.
3. Run the `INSERT INTO ... SELECT` statement shown above to copy `ps_untyped` data into your raw tables.

### Migrations on raw tables

For JSON-based tables, migrations are trivial since all rows are stored as complete JSON objects. Adding or removing columns only affects views over unchanged JSON data, so the schema is effectively stateless. For raw tables, the situation is different: when adding a new column, for instance, existing rows would not have a value for it even if one could have been synced already. Suppose a new column is added with a simple migration: `ALTER TABLE todo_list ADD COLUMN priority INTEGER`.
This adds the new column on the client, with null values for each existing row. If the client updates the schema before the server and then syncs the changes, every row is effectively resynced and reflects populated values for the new column, so clients observe a consistent state after the sync. If new values have been synced before the client updates, however, existing rows may not receive the new column until those rows are synced again! This is why special approaches are needed when migrating synced tables.

#### Deleting data on migrations

One option that makes migrations safe (with obvious downsides) is to simply reset the database before migrating: `await db.disconnectAndClear(soft: true)` deletes materialized sync rows while keeping the downloaded sync data. Afterwards, migrations can alter the schema in any way before you reconnect. In a soft clear, data doesn't have to be downloaded again in most cases. This might reduce the downtime in which no data is available, but a network connection is necessary for data to become available again.

#### Triggering resync on migrations

An alternative to deleting data is to trigger a re-sync *without* clearing tables. For example:

```sql theme={null}
-- We need an (optimistic) default value for existing rows
ALTER TABLE todo_list ADD COLUMN priority INTEGER DEFAULT 1 NOT NULL;
SELECT powersync_trigger_resync(TRUE);
```

The optimistic default value would be overridden on the next completed sync (depending on when the user is online again). This means that the app is still usable offline after an update, but having optimistic state on the client is a caveat because PowerSync normally has [stronger consistency guarantees](/architecture/consistency#consistency). There may be cases where the approach of deleting data is a safer choice.

#### The `_extra` column pattern

Another option to avoid data inconsistencies in migrations is to ensure the raw table stores a full row as expected by PowerSync.
To do that, you can introduce an extra column on your table designed to hold values from the backend database that a client is not yet aware of: ```sql theme={null} CREATE TABLE todo_lists ( id TEXT NOT NULL PRIMARY KEY, created_by TEXT NOT NULL, title TEXT NOT NULL, content TEXT, _extra TEXT ) STRICT; ``` The `_extra` column is not used in the app, but the sync service can be informed about it using the `Rest` column source: ```javascript JavaScript theme={null} mySchema.withRawTables({ // The name here doesn't have to match the name of the table in SQL. Instead, it's used to match // the table name from the backend source database as sent by the PowerSync Service. todo_lists: { put: { sql: 'INSERT OR REPLACE INTO todo_lists (id, created_by, title, content, _extra) VALUES (?, ?, ?, ?, ?)', params: ['Id', { Column: 'created_by' }, { Column: 'title' }, { Column: 'content' }, 'Rest'] }, delete: ... } }); ``` ```dart Dart theme={null} final schema = Schema(const [], rawTables: const [ RawTable( name: 'todo_lists', put: PendingStatement( sql: 'INSERT OR REPLACE INTO todo_lists (id, created_by, title, content, _extra) VALUES (?, ?, ?, ?, ?)', params: [ .id(), .column('created_by'), .column('title'), .column('content'), .rest(), ], ), delete: PendingStatement(...), ), ]); ``` ```kotlin Kotlin theme={null} val schema = Schema(listOf( RawTable( name = "todo_lists", put = PendingStatement( "INSERT OR REPLACE INTO todo_lists (id, created_by, title, content, _extra) VALUES (?, ?, ?, ?, ?)", listOf( PendingStatementParameter.Id, PendingStatementParameter.Column("created_by"), PendingStatementParameter.Column("title"), PendingStatementParameter.Column("content"), PendingStatementParameter.Rest, ) ), delete = PendingStatement(...) 
  )
))
```

```swift Swift theme={null}
let lists = RawTable(
  name: "todo_lists",
  put: PendingStatement(
    sql: "INSERT OR REPLACE INTO todo_lists (id, created_by, title, content, _extra) VALUES (?, ?, ?, ?, ?)",
    parameters: [.id, .column("created_by"), .column("title"), .column("content"), .rest]
  ),
  delete: ...
)
```

```csharp .NET theme={null}
Unfortunately, raw tables are not yet available in the .NET SDK.
```

```rust Rust theme={null}
use powersync::schema::{PendingStatement, PendingStatementValue, RawTable, Schema};

let lists = RawTable::with_statements(
    "todo_lists",
    PendingStatement {
        sql: "INSERT OR REPLACE INTO todo_lists (id, created_by, title, content, _extra) VALUES (?, ?, ?, ?, ?)".into(),
        params: vec![
            PendingStatementValue::Id,
            PendingStatementValue::Column("created_by".into()),
            PendingStatementValue::Column("title".into()),
            PendingStatementValue::Column("content".into()),
            PendingStatementValue::Rest,
        ]
    },
    ...
);
```

If PowerSync then syncs a row like `{"created_by": "User", "title": "title", "content": "content", "tags": "Important"}`, this put statement would set `_extra` to `{"tags":"Important"}`, ensuring that the entire source row can be recovered from a row in the raw table.

This then allows writing migrations:

1. Adding new columns by using `json_extract(_extra, '$.newColumnName')` as a default value.
2. Removing existing columns by updating `_extra = json_set(_extra, '$.droppedColumnName', droppedColumnName)` before dropping the column.

Don't forget to delete triggers before running these statements in migrations, since these updates shouldn't result in `ps_crud` writes.

## Deleting data and raw tables

APIs that clear an entire PowerSync database, such as `disconnectAndClear()`, don't affect raw tables by default. You can use the `clear` parameter on the `RawTable` constructor to set an SQL statement to run when clearing the database. Typically, something like `DELETE FROM $tableName` would be a reasonable statement to run.
`clear` statements are not inferred automatically and must always be set explicitly. Raw tables themselves are not managed by PowerSync and need to be dropped manually to delete them.

# Sequential ID Mapping

Source: https://docs.powersync.com/client-sdks/advanced/sequential-id-mapping

Learn how to map a local UUID to a remote sequential (auto-incrementing) ID.

## Introduction

When auto-incrementing / sequential IDs are used on the backend source database, the ID can only be generated on the backend source database, and not on the client while offline. To handle this, you can use a secondary UUID on the client, then map it to a sequential ID when performing an update on the backend source database. This allows using a sequential primary key for each record, with a UUID as a secondary ID.

This mapping must be performed wherever the UUIDs are referenced, including for every foreign key column.

To illustrate this, we will use the [React To-Do List demo app](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-supabase-todolist) and modify it to use UUIDs on the client and map them to sequential IDs on the backend source database (Supabase in this case).

### Overview

Before we get started, let's outline the changes we will have to make:

1. Update the `lists` and `todos` tables.
2. Add two triggers that will map the UUID to the integer ID and vice versa.
3. Update your Sync Streams (or legacy Sync Rules) to use the UUID column instead of the integer ID.

The following components/files will have to be updated:

* *Files*:
  * `AppSchema.ts`
  * `fts_setup.ts`
  * `SupabaseConnector.ts`
* *Components*:
  * `lists.tsx`
  * `page.tsx`
  * `SearchBarWidget.tsx`
  * `TodoListsWidget.tsx`

## Schema

In order to map the UUID to the integer ID, we need to update the

* `lists` table by adding a `uuid` column, which will be the secondary ID, and
* `todos` table by adding a `uuid` column, and a `list_uuid` foreign key column which references the `uuid` column in the `lists` table.
```sql schema {3, 13, 21, 26} theme={null} create table public.lists ( id serial, uuid uuid not null unique, created_at timestamp with time zone not null default now(), name text not null, owner_id uuid not null, constraint lists_pkey primary key (id), constraint lists_owner_id_fkey foreign key (owner_id) references auth.users (id) on delete cascade ) tablespace pg_default; create table public.todos ( id serial, uuid uuid not null unique, created_at timestamp with time zone not null default now(), completed_at timestamp with time zone null, description text not null, completed boolean not null default false, created_by uuid null, completed_by uuid null, list_id int not null, list_uuid uuid not null, constraint todos_pkey primary key (id), constraint todos_created_by_fkey foreign key (created_by) references auth.users (id) on delete set null, constraint todos_completed_by_fkey foreign key (completed_by) references auth.users (id) on delete set null, constraint todos_list_id_fkey foreign key (list_id) references lists (id) on delete cascade, constraint todos_list_uuid_fkey foreign key (list_uuid) references lists (uuid) on delete cascade ) tablespace pg_default; ``` With the schema updated, we now need a method to synchronize and map the `list_id` and `list_uuid` in the `todos` table, with the `id` and `uuid` columns in the `lists` table. We can achieve this by creating SQL triggers. ## Create SQL Triggers We need to create triggers that can look up the integer ID for the given UUID and vice versa. These triggers will maintain consistency between `list_id` and `list_uuid` in the `todos` table by ensuring that they remain synchronized with the `id` and `uuid` columns in the `lists` table; even if changes are made to either field. We will create the following two triggers that cover either scenario of updating the `list_id` or `list_uuid` in the `todos` table: 1. `update_integer_id`, and 2. 
`update_uuid_column` The `update_integer_id` trigger ensures that whenever a `list_uuid` value is inserted or updated in the `todos` table, the corresponding `list_id` is fetched from the `lists` table and updated automatically. It also validates that the `list_uuid` exists in the `lists` table; otherwise, it raises an exception. ```sql theme={null} CREATE OR REPLACE FUNCTION func_update_integer_id() RETURNS TRIGGER AS $$ BEGIN IF TG_OP = 'INSERT' THEN -- Always update list_id on INSERT SELECT id INTO NEW.list_id FROM lists WHERE uuid = NEW.list_uuid; IF NOT FOUND THEN RAISE EXCEPTION 'UUID % does not exist in lists', NEW.list_uuid; END IF; ELSIF TG_OP = 'UPDATE' THEN -- Only update list_id if list_uuid changes IF NEW.list_uuid IS DISTINCT FROM OLD.list_uuid THEN SELECT id INTO NEW.list_id FROM lists WHERE uuid = NEW.list_uuid; IF NOT FOUND THEN RAISE EXCEPTION 'UUID % does not exist in lists', NEW.list_uuid; END IF; END IF; END IF; RETURN NEW; END; $$ LANGUAGE plpgsql; CREATE TRIGGER update_integer_id BEFORE INSERT OR UPDATE ON todos FOR EACH ROW EXECUTE FUNCTION func_update_integer_id(); ``` The `update_uuid_column` trigger ensures that whenever a `list_id` value is inserted or updated in the todos table, the corresponding `list_uuid` is fetched from the `lists` table and updated automatically. It also validates that the `list_id` exists in the `lists` table. 
```sql update_uuid_column theme={null} CREATE OR REPLACE FUNCTION func_update_uuid_column() RETURNS TRIGGER AS $$ BEGIN IF TG_OP = 'INSERT' THEN -- Always update list_uuid on INSERT SELECT uuid INTO NEW.list_uuid FROM lists WHERE id = NEW.list_id; IF NOT FOUND THEN RAISE EXCEPTION 'ID % does not exist in lists', NEW.list_id; END IF; ELSIF TG_OP = 'UPDATE' THEN -- Only update list_uuid if list_id changes IF NEW.list_id IS DISTINCT FROM OLD.list_id THEN SELECT uuid INTO NEW.list_uuid FROM lists WHERE id = NEW.list_id; IF NOT FOUND THEN RAISE EXCEPTION 'ID % does not exist in lists', NEW.list_id; END IF; END IF; END IF; RETURN NEW; END; $$ LANGUAGE plpgsql; CREATE TRIGGER update_uuid_column BEFORE INSERT OR UPDATE ON todos FOR EACH ROW EXECUTE FUNCTION func_update_uuid_column(); ``` We now have triggers in place that will handle the mapping for our updated schema and can move on to updating your Sync Streams (or legacy Sync Rules) to use the UUID column instead of the integer ID. ## Update Sync Streams As sequential IDs can only be created on the backend source database, we need to use UUIDs in the client. The sync config is updated to use the `uuid` column as the `id` column for the `lists` and `todos` tables, explicitly defining which columns to select so that `list_id` (the integer ID) is no longer exposed to the client. 
```yaml theme={null} config: edition: 3 streams: user_lists: auto_subscribe: true with: user_lists_param: SELECT id FROM lists WHERE owner_id = auth.user_id() queries: - "SELECT lists.uuid AS id, lists.created_at, lists.name, lists.owner_id FROM lists WHERE lists.id IN user_lists_param" - "SELECT todos.uuid AS id, todos.created_at, todos.completed_at, todos.description, todos.completed, todos.created_by, todos.list_uuid FROM todos WHERE todos.list_id = user_lists_param" ``` ```yaml sync-config.yaml {4, 7-8} theme={null} bucket_definitions: user_lists: # Separate bucket per todo list parameters: select id from lists where owner_id = request.user_id() data: # Explicitly define all the columns - select uuid as id, created_at, name, owner_id from lists where id = bucket.id - select uuid as id, created_at, completed_at, description, completed, created_by, list_uuid from todos where list_id = bucket.id ``` We can now move on to updating the client to use UUIDs. ## Update Client to Use UUIDs With Sync Streams updated, we no longer have the `list_id` column in the `todos` table. We start by updating `AppSchema.ts` and replacing `list_id` with `list_uuid` in the `todos` table. ```typescript AppSchema.ts {3, 11} theme={null} const todos = new Table( { list_uuid: column.text, created_at: column.text, completed_at: column.text, description: column.text, created_by: column.text, completed_by: column.text, completed: column.integer }, { indexes: { list: ['list_uuid'] } } ); ``` The `uploadData` function in `SupabaseConnector.ts` needs to be updated to use the new `uuid` column in both tables. 
```typescript SupabaseConnector.ts {13, 17, 20} theme={null}
export class SupabaseConnector extends BaseObserver implements PowerSyncBackendConnector {
  // other code
  async uploadData(database: AbstractPowerSyncDatabase): Promise<void> {
    // other code
    try {
      for (const op of transaction.crud) {
        lastOp = op;
        const table = this.client.from(op.table);
        let result: any;
        switch (op.op) {
          case UpdateType.PUT:
            const record = { ...op.opData, uuid: op.id };
            result = await table.upsert(record);
            break;
          case UpdateType.PATCH:
            result = await table.update(op.opData).eq('uuid', op.id);
            break;
          case UpdateType.DELETE:
            result = await table.delete().eq('uuid', op.id);
            break;
        }
      }
    } catch (ex: any) {
      // other code
    }
  }
}
```

For the remaining files, we simply need to replace any reference to `list_id` with `list_uuid`.

```typescript fts_setup.ts {3} theme={null}
export async function configureFts(): Promise<void> {
  await createFtsTable('lists', ['name'], 'porter unicode61');
  await createFtsTable('todos', ['description', 'list_uuid']);
}
```

```tsx page.tsx {4, 14} theme={null}
const TodoEditSection = () => {
  // code
  const { data: todos } = useQuery(
    `SELECT * FROM ${TODOS_TABLE} WHERE list_uuid=? ORDER BY created_at DESC, id`,
    [listID]
  );
  // code
  const createNewTodo = async (description: string) => {
    // other code
    await powerSync.execute(
      `INSERT INTO ${TODOS_TABLE} (id, created_at, created_by, description, list_uuid) VALUES (uuid(), datetime(), ?, ?, ?)`,
      [userID, description, listID!]
    );
  }
}
```

```tsx TodoListsWidget.tsx {10, 18} theme={null}
export function TodoListsWidget(props: TodoListsWidgetProps) {
  // hooks and navigation
  const { data: listRecords, isLoading } = useQuery(`
    SELECT
      ${LISTS_TABLE}.*,
      COUNT(${TODOS_TABLE}.id) AS total_tasks,
      SUM(CASE WHEN ${TODOS_TABLE}.completed = true THEN 1 ELSE 0 END) as completed_tasks
    FROM ${LISTS_TABLE}
    LEFT JOIN ${TODOS_TABLE} ON ${LISTS_TABLE}.id = ${TODOS_TABLE}.list_uuid
    GROUP BY ${LISTS_TABLE}.id;
  `);

  const deleteList = async (id: string) => {
    await powerSync.writeTransaction(async (tx) => {
      // Delete associated todos
      await tx.execute(`DELETE FROM ${TODOS_TABLE} WHERE list_uuid = ?`, [id]);
      // Delete list record
      await tx.execute(`DELETE FROM ${LISTS_TABLE} WHERE id = ?`, [id]);
    });
  };
}
```

```tsx SearchBarWidget.tsx {8, 19} theme={null}
export const SearchBarWidget: React.FC = () => {
  const handleInputChange = async (value: string) => {
    if (value.length !== 0) {
      let listsSearchResults: any[] = [];
      const todoItemsSearchResults = await searchTable(value, 'todos');
      for (let i = 0; i < todoItemsSearchResults.length; i++) {
        const res = await powersync.get(`SELECT * FROM ${LISTS_TABLE} WHERE id = ?`, [
          todoItemsSearchResults[i]['list_uuid']
        ]);
        todoItemsSearchResults[i]['list_name'] = res.name;
      }
      if (!todoItemsSearchResults.length) {
        listsSearchResults = await searchTable(value, 'lists');
      }
      const formattedListResults: SearchResult[] = listsSearchResults.map(
        (result) => new SearchResult(result['id'], result['name'])
      );
      const formattedTodoItemsResults: SearchResult[] = todoItemsSearchResults.map((result) => {
        return new SearchResult(result['list_uuid'], result['list_name'] ??
'', result['description']); }); setSearchResults([...formattedTodoItemsResults, ...formattedListResults]); } }; } ``` # State Management Libraries Source: https://docs.powersync.com/client-sdks/advanced/state-management Use PowerSync with state management libraries in Dart/Flutter This guide is currently specific to the Dart/Flutter SDK. We may expand it to cover other SDKs in the future. Our [demo apps](/intro/examples) for Flutter are intentionally kept simple to focus on demonstrating PowerSync APIs. Instead of using heavy state management solutions, they use simple global fields to make the PowerSync database accessible to widgets. When adopting PowerSync in your own app, you might want a more sophisticated approach for state management. This guide explains how PowerSync's Dart/Flutter SDK integrates with popular state management packages. Adopting PowerSync can actually simplify your app architecture by using a local SQLite database as the single source of truth for all data. For a general discussion on how PowerSync fits into modern app architecture, see [this blog post](https://dinkomarinac.dev/building-local-first-flutter-apps-with-riverpod-drift-and-powersync). PowerSync exposes database queries with the standard `Future` and `Stream` classes from `dart:async`. Given how widely used these are in the Dart ecosystem, PowerSync works well with all popular approaches for state management, such as: 1. Providers with `package:provider`: Create your database as a `Provider` and expose watched queries to child widgets with `StreamProvider`! The provider for databases should `close()` the database in `dispose`. 2. Providers with `package:riverpod`: We mention relevant snippets [below](#riverpod). 3. Dependency injection with `package:get_it`: PowerSync databases can be registered with `registerSingletonAsync`. Again, make sure to `close()` the database in the `dispose` callback. 4. 
The BLoC pattern with the `bloc` package: You can easily listen to watched queries in Cubits (although, if you find your Blocs and Cubits becoming trivial wrappers around database streams, consider just `watch()`ing database queries in widgets directly. That doesn't make your app [less testable](/client-sdks/advanced/unit-testing)!). To simplify state management, avoid the use of hydrated blocs and cubits for state that depends on database queries. With PowerSync, regular data is already available locally and doesn't need a second local cache. ## Riverpod We have a [complete example](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist-drift) using PowerSync with modern Flutter libraries like Riverpod, Drift, and `auto_route`. A good way to open PowerSync databases with Riverpod is to use an async provider. You can manage your `connect` and `disconnect` calls there, for instance by listening to authentication state: ```dart theme={null} @Riverpod(keepAlive: true) Future powerSyncInstance(Ref ref) async { final db = PowerSyncDatabase( schema: schema, path: await _getDatabasePath(), logger: attachedLogger, ); await db.initialize(); // TODO: Listen for auth changes and connect() the database here. ref.listen(yourAuthProvider, (prev, next) { if (next.isAuthenticated && !prev.isAuthenticated) { db.connect(connector: MyConnector()); } // ... 
}); ref.onDispose(db.close); return db; } ``` ### Querying Data To expose auto-updating query results, use a `StreamProvider` that reads from the database: ```dart theme={null} final _lists = StreamProvider((ref) async* { final database = await ref.read(powerSyncInstanceProvider.future); yield* database.watch('SELECT * FROM lists'); }); ``` ### Waiting for sync If you were awaiting `waitForFirstSync` before, you can keep doing that: ```dart theme={null} final db = await ref.read(powerSyncInstanceProvider.future); await db.waitForFirstSync(); ``` Alternatively, you can expose the sync status as a provider and use that to determine whether the synchronization has completed: ```dart theme={null} final syncStatus = statefulProvider((ref, change) { final status = Stream.fromFuture(ref.read(powerSyncInstanceProvider.future)) .asyncExpand((db) => db.statusStream); final sub = status.listen(change); ref.onDispose(sub.cancel); return const SyncStatus(); }); @riverpod bool didCompleteSync(Ref ref, [BucketPriority? priority]) { final status = ref.watch(syncStatus); if (priority != null) { return status.statusForPriority(priority).hasSynced ?? false; } else { return status.hasSynced ?? false; } } final class MyWidget extends ConsumerWidget { const MyWidget({super.key}); @override Widget build(BuildContext context, WidgetRef ref) { final didSync = ref.watch(didCompleteSyncProvider()); if (!didSync) { return const Text('Busy with sync...'); } // ... 
content after first sync } } ``` ### Attachment queue If you're using the attachment queue helper to synchronize media assets, you can also wrap that in a provider: ```dart theme={null} @Riverpod(keepAlive: true) Future attachmentQueue(Ref ref) async { final db = await ref.read(powerSyncInstanceProvider.future); final queue = YourAttachmentQueue(db, remoteStorage); await queue.init(); return queue; } ``` Reading and awaiting this provider can then be used to show attachments: ```dart theme={null} final class PhotoWidget extends ConsumerWidget { final TodoItem todo; const PhotoWidget({super.key, required this.todo}); @override Widget build(BuildContext context, WidgetRef ref) { final photoState = ref.watch(_getPhotoStateProvider(todo.photoId)); if (!photoState.hasValue) { return Container(); } final data = photoState.value; if (data == null) { return Container(); } String? filePath = data.photoPath; bool fileIsDownloading = !data.fileExists; bool fileArchived = data.attachment?.state == AttachmentState.archived.index; if (fileArchived) { return Column( crossAxisAlignment: CrossAxisAlignment.center, mainAxisAlignment: MainAxisAlignment.center, children: [ const Text("Unavailable"), const SizedBox(height: 8), ], ); } if (fileIsDownloading) { return const Text("Downloading..."); } File imageFile = File(filePath!); int lastModified = imageFile.existsSync() ? imageFile.lastModifiedSync().millisecondsSinceEpoch : 0; Key key = ObjectKey('$filePath:$lastModified'); return Image.file( key: key, imageFile, width: 50, height: 50, ); } } class _ResolvedPhotoState { String? photoPath; bool fileExists; Attachment? attachment; _ResolvedPhotoState( {required this.photoPath, required this.fileExists, this.attachment}); } @riverpod Future<_ResolvedPhotoState> _getPhotoState(Ref ref, String? 
photoId) async { if (photoId == null) { return _ResolvedPhotoState(photoPath: null, fileExists: false); } final queue = await ref.read(attachmentQueueProvider.future); final photoPath = await queue.getLocalUri('$photoId.jpg'); bool fileExists = await File(photoPath).exists(); final row = await queue.db .getOptional('SELECT * FROM attachments_queue WHERE id = ?', [photoId]); if (row != null) { Attachment attachment = Attachment.fromRow(row); return _ResolvedPhotoState( photoPath: photoPath, fileExists: fileExists, attachment: attachment); } return _ResolvedPhotoState( photoPath: photoPath, fileExists: fileExists, attachment: null); } ``` # Unit Testing Source: https://docs.powersync.com/client-sdks/advanced/unit-testing Set up unit tests for PowerSync in Dart/Flutter This guide is currently specific to the Dart/Flutter SDK. We may expand it to cover other SDKs in the future. For unit testing your projects using PowerSync (for example, testing whether your queries run as expected), you'll need the `powersync-sqlite-core` binary in your project's root directory. ## Setup 1. Download the PowerSync SQLite binary * Go to [powersync-sqlite-core Releases](https://github.com/powersync-ja/powersync-sqlite-core/releases) * Download the binary for your OS 2. Rename the binary * Remove the architecture suffix from the filename * Examples: * `powersync_x64.dll` → `powersync.dll` (Windows) * `libpowersync_aarch64.dylib` → `libpowersync.dylib` (macOS) * `libpowersync_x64.so` → `libpowersync.so` (Linux) 3. Place the binary * Move the renamed binary to your project's root directory ## Example Test This example shows basic unit testing with PowerSync in Flutter. For more information, see the [Flutter unit testing documentation](https://docs.flutter.dev/cookbook/testing/unit/introduction). 
```dart theme={null}
import 'dart:io';

import 'package:flutter_test/flutter_test.dart';
import 'package:path/path.dart';
import 'package:powersync/powersync.dart';

const schema = Schema([
  Table('customers', [Column.text('name'), Column.text('email')])
]);

late PowerSyncDatabase testDB;

String getTestDatabasePath() {
  const dbFilename = 'powersync-test.db';
  final dir = Directory.current.absolute.path;
  return join(dir, dbFilename);
}

Future<void> openTestDatabase() async {
  testDB = PowerSyncDatabase(
    schema: schema,
    path: getTestDatabasePath(),
    // Optionally pass a custom logger via the `logger` parameter.
  );
  await testDB.initialize();
}

void main() {
  setUp(openTestDatabase);

  test('INSERT', () async {
    await testDB.execute(
        'INSERT INTO customers(name, email) VALUES(?, ?)',
        ['John Doe', 'john@hotmail.com']);

    final results = await testDB.getAll('SELECT * FROM customers');
    expect(results.length, 1);
    expect(results.first['name'], 'John Doe');
    expect(results.first['email'], 'john@hotmail.com');
  });
}
```

#### If you have trouble loading the extension, confirm the following

Ensure that the SQLite3 binary installed on your system has extension loading enabled. You can confirm this as follows:

* Run `sqlite3` in your command-line interface.
* In the sqlite3 prompt, run `PRAGMA compile_options;`
* Check the output for the option `ENABLE_LOAD_EXTENSION`.
* If you see `ENABLE_LOAD_EXTENSION`, extension loading is enabled.

If the above steps don't work, you can also confirm whether extension loading is enabled by trying to load the extension in your command-line interface:

* Run `sqlite3` in your command-line interface.
* Run `.load /path/to/file/libpowersync.dylib` (macOS), `.load /path/to/file/libpowersync.so` (Linux), or `.load /path/to/file/powersync.dll` (Windows).
* If this runs without error, extension loading is enabled. If it fails with an error message about extension loading being disabled, it's not enabled in your SQLite installation.

If it is not enabled, you will have to download a compiled SQLite binary with extension loading enabled (e.g.
using Homebrew) or [compile SQLite](https://www.sqlite.org/howtocompile.html) with extension loading enabled and include it in your project's folder alongside the extension. # Cascading Delete Source: https://docs.powersync.com/client-sdks/cascading-delete Perform cascading deletes on the client-side database PowerSync [uses SQLite views](/architecture/client-architecture#schema) instead of standard tables, so SQLite features like foreign key constraints and cascading deletes are not available. There's no built-in support for cascading deletes on the client, but you can achieve this in two ways: 1. Manual deletion in a transaction - Delete all related records in a single transaction (recommended for most cases) Every local mutation performed against SQLite via the PowerSync SDK will be returned in `uploadData`. As long as you're using `.execute()` for the mutation, the operation will be present in the upload queue. 2. Triggers - Create triggers on the [internal tables](/architecture/client-architecture#schema) (more complex, but more automatic) You create triggers on the internal tables (not the views defined by the client schema), similar to what is done [here](https://github.com/powersync-ja/powersync-js/blob/e77b1abfbed91988de1f4c707c24855cd66b2219/demos/react-supabase-todolist/src/app/utils/fts_setup.ts#L50). 
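For the manual approach, the set of statements can be computed generically. Below is a hedged sketch (the `cascadeDeleteStatements` helper and `ChildRelation` type are our own illustration, not part of the PowerSync SDK) that builds parameterized `DELETE` statements in child-first order, so the parent row is removed last:

```typescript
// Hypothetical helper: describe a child table and the column that
// references the parent's id.
type ChildRelation = { table: string; foreignKey: string };

// Build parameterized DELETE statements, children first, parent last.
function cascadeDeleteStatements(
  parentTable: string,
  parentId: string,
  children: ChildRelation[]
): { sql: string; params: string[] }[] {
  const deletes = children.map((c) => ({
    sql: `DELETE FROM ${c.table} WHERE ${c.foreignKey} = ?`,
    params: [parentId],
  }));
  // Delete the parent record last so children never reference a missing parent.
  deletes.push({
    sql: `DELETE FROM ${parentTable} WHERE id = ?`,
    params: [parentId],
  });
  return deletes;
}
```

Each statement would then be run with `tx.execute(sql, params)` inside a single `writeTransaction`, as in the manual transaction example in the next section, so every operation lands in the upload queue.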
## Example: Manual Transaction This example from the [React Native To-Do List demo app](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-native-supabase-todolist) shows how to delete a `list` and all its associated `todos` in a single transaction: ```typescript theme={null} const deleteList = async (id: string) => { await system.powersync.writeTransaction(async (tx) => { // Delete associated todos await tx.execute(`DELETE FROM ${TODO_TABLE} WHERE list_id = ?`, [id]); // Delete list record await tx.execute(`DELETE FROM ${LIST_TABLE} WHERE id = ?`, [id]); }); }; ``` Every mutation performed via `.execute()` is added to the upload queue and returned in `uploadData`. PowerSync will also delete local records when your backend performs cascade deletes on the source database (as long as those tables are in the publication). For example, if you delete a record from the local `lists` table and Supabase cascade deletes a record from the `todo` table, PowerSync will also delete the local `todo` record when online. # Expo Go Support Source: https://docs.powersync.com/client-sdks/frameworks/expo-go-support PowerSync supports Expo Go with @powersync/adapter-sql-js Expo Go is a sandbox environment that allows you to quickly test your application without building a development build. To enable PowerSync in Expo Go, we provide a JavaScript-based database adapter: [`@powersync/adapter-sql-js`](https://www.npmjs.com/package/@powersync/adapter-sql-js). # @powersync/adapter-sql-js `@powersync/adapter-sql-js` is a development package for PowerSync which uses SQL.js to provide a pure JavaScript SQLite implementation. This eliminates the need for native dependencies and enables development with Expo Go and other JavaScript-only environments. Under the hood, it uses our custom fork [powersync-sql-js](https://github.com/powersync-ja/powersync-sql-js) - a fork of SQL.js (SQLite compiled to JavaScript via Emscripten) that loads PowerSync's Rust core extension. 
This package is in an **alpha** release.

**Expo Go Sandbox Environment Only**

This adapter is specifically designed for Expo Go and similar JavaScript-only environments. It will be much slower than native database adapters and has limitations: every write operation triggers a complete rewrite of the entire database file to persistent storage, not just the changed data. In addition to the performance overhead, this adapter doesn't provide any of the SQLite consistency guarantees - you may end up with missing data or a corrupted database file if the app is killed while writing to the database file.

## Usage

### Quickstart

1. Create a new Expo app:

```bash theme={null}
npx create-expo-app@latest my-app
```

2. Navigate to your app directory and start the development server:

```bash theme={null}
cd my-app && npm run ios
```

3. In a new terminal tab, install PowerSync dependencies:

```bash theme={null}
npm install @powersync/react-native @powersync/adapter-sql-js
```

4. Replace the code in `app/(tabs)/index.tsx` with:

```tsx app/(tabs)/index.tsx theme={null}
import { SQLJSOpenFactory } from "@powersync/adapter-sql-js";
import { PowerSyncDatabase, Schema } from "@powersync/react-native";
import { useEffect, useState } from "react";
import { Text } from "react-native";

export const powerSync = new PowerSyncDatabase({
  schema: new Schema({}), // todo: define the schema - see Next Steps below
  database: new SQLJSOpenFactory({
    dbFilename: "example.db",
  }),
});

export default function HomeScreen() {
  const [version, setVersion] = useState<string | null>(null);

  useEffect(() => {
    powerSync.get("select powersync_rs_version();").then((r) => {
      setVersion(JSON.stringify(r));
    });
  }, []);

  return <>{version && <Text>PowerSync Initialized - {version}</Text>}</>;
}
```

Alternatively, to add PowerSync to an existing Expo app:

1. Install the SQL.js adapter:

```bash theme={null}
npm install @powersync/adapter-sql-js
```

2. Set up PowerSync using the SQL.js factory:

```tsx SystemProvider.tsx theme={null}
import { SQLJSOpenFactory } from "@powersync/adapter-sql-js";
import { PowerSyncDatabase, Schema } from "@powersync/react-native";
import { useEffect, useState } from "react";
import { Text } from "react-native";

export const powerSync = new PowerSyncDatabase({
  schema: new Schema({}), // todo: define the schema - see Next Steps below
  database: new SQLJSOpenFactory({
    dbFilename: "example.db",
  }),
});

export default function HomeScreen() {
  const [version, setVersion] = useState<string | null>(null);

  useEffect(() => {
    powerSync.get("select powersync_rs_version();").then((r) => {
      setVersion(JSON.stringify(r));
    });
  }, []);

  return <>{version && <Text>PowerSync Initialized - {version}</Text>}</>;
}
```

## Next Steps

After adding PowerSync to your app:

1. [**Define what data to sync by setting up Sync Rules**](/sync/rules/overview)
2. [**Implement your SQLite client schema**](/client-sdks/reference/react-native-and-expo#1-define-the-client-side-schema)
3. [**Connect to PowerSync and your backend**](/client-sdks/reference/react-native-and-expo#3-integrate-with-your-backend)

## Data Persistence

The default version of this adapter uses in-memory persistence, but you can specify your own `persister` option to the open factory. See an example in the package [README](https://www.npmjs.com/package/@powersync/adapter-sql-js).

## Moving Beyond Expo Go

When you're ready to move beyond the Expo Go sandbox environment - whether for native development builds or production deployment - we recommend switching to our native database adapters:

* [OP-SQLite](https://www.npmjs.com/package/@powersync/op-sqlite) (Recommended) - Offers built-in encryption support and better React Native New Architecture compatibility
* [React Native Quick SQLite](https://www.npmjs.com/package/@journeyapps/react-native-quick-sqlite) - Our original native adapter

These database adapters cannot run in Expo Go because they require native code compilation.
Specifically, PowerSync needs a SQLite implementation that can load our [Rust core extension](https://github.com/powersync-ja/powersync-sqlite-core), which isn't possible in Expo Go's prebuilt app container. These adapters provide better performance, full SQLite consistency guarantees, and are suitable for both development builds and production deployment. See the SDKs [Installation](/client-sdks/reference/react-native-and-expo#install-peer-dependencies) details for setup instructions. ### Switching Between Adapters - Example If you want to keep using Expo Go alongside development and production builds, you can switch between different adapters based on the Expo `executionEnvironment`: ```js SystemProvider.tsx theme={null} import { SQLJSOpenFactory } from "@powersync/adapter-sql-js"; import { PowerSyncDatabase } from "@powersync/react-native"; import Constants from "expo-constants"; const isExpoGo = Constants.executionEnvironment === "storeClient"; export const powerSync = new PowerSyncDatabase({ schema: AppSchema, database: isExpoGo ? new SQLJSOpenFactory({ dbFilename: "app.db", }) : { dbFilename: "sqlite.db", }, }); ``` # Dart/Flutter Web Support (Beta) Source: https://docs.powersync.com/client-sdks/frameworks/flutter-web-support Web support for Flutter in version `^1.9.0` is currently in a **beta** release. It is functionally ready for production use, provided that you've tested your use cases. Please see the [Limitations](#limitations) detailed below. ## Demo app The easiest way to test Flutter Web support is to run the [Supabase Todo-List](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist) demo app: 1. Clone the [powersync.dart](https://github.com/powersync-ja/powersync.dart/tree/main) repo. 1. **Note**: If you are an existing user updating to the latest code after a git pull, run `melos exec 'flutter pub upgrade'` in the repo's root and make sure it succeeds. 2. Run `melos prepare` in the repo's root 3. 
cd into the `demos/supabase-todolist` folder 4. If you haven’t yet: `cp lib/app_config_template.dart lib/app_config.dart` (optionally update this config with your own Supabase and PowerSync project details). 5. Run `flutter run -d chrome` ## Installing PowerSync in your own project Install the [latest version](https://pub.dev/packages/powersync/versions) of the package, for example: ```bash theme={null} flutter pub add powersync:'^1.9.0' ``` ### Additional config #### Assets Web support requires `sqlite3.wasm` and worker (`powersync_db.worker.js` and `powersync_sync.worker.js`) assets to be served from the web application. They can be downloaded to the web directory by running the following command in your application's root folder. ```bash theme={null} dart run powersync:setup_web ``` The same code is used for initializing native and web `PowerSyncDatabase` clients. #### OPFS for improved performance This SDK supports different storage modes of the SQLite database with varying levels of performance and compatibility: * **IndexedDB**: Highly compatible with different browsers, but performance is slow. * **OPFS** (Origin-Private File System): Significantly faster but requires additional configuration. OPFS is the preferred mode when it is available. Otherwise database storage falls back to IndexedDB. Enabling OPFS requires adding two headers to the HTTP server response when a client requests the Flutter web application: * `Cross-Origin-Opener-Policy`: Needs to be set to `same-origin`. * `Cross-Origin-Embedder-Policy`: Needs to be set to `require-corp`. 
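Whatever serves the app must attach these two headers to every response. As a framework-agnostic illustration (the helper name below is our own sketch, not part of any PowerSync package), merging them into a response-header map looks like this:

```typescript
// Hypothetical helper: merge the headers required for cross-origin
// isolation (and therefore OPFS) into an existing response-header map.
function withCrossOriginIsolationHeaders(
  headers: Record<string, string> = {}
): Record<string, string> {
  return {
    ...headers,
    "Cross-Origin-Opener-Policy": "same-origin",
    "Cross-Origin-Embedder-Policy": "require-corp",
  };
}

// Example with Node's http module:
//   res.writeHead(200, withCrossOriginIsolationHeaders({ "Content-Type": "text/html" }));
```

The same pair of headers applies whether you serve the build with Node, nginx, or a CDN; only the configuration syntax differs.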
When running the app locally, you can use the following command to include the required headers:

```bash theme={null}
flutter run -d chrome --web-header "Cross-Origin-Opener-Policy=same-origin" --web-header "Cross-Origin-Embedder-Policy=require-corp"
```

When serving a Flutter Web app in production, the [Flutter docs](https://docs.flutter.dev/deployment/web#building-the-app-for-release) recommend building the web app with `flutter build web`, then serving the content with an HTTP server. The server should be configured to send the above headers.

**Further reading**: [Drift](https://drift.simonbinder.eu/) uses the same packages as our [`sqlite_async`](https://github.com/powersync-ja/sqlite_async.dart) package under the hood, and has excellent documentation for how the web filesystem is selected. See [here](https://drift.simonbinder.eu/platforms/web/) for web compatibility notes and [here](https://drift.simonbinder.eu/platforms/web/#additional-headers) for additional notes on the required web headers.

## Limitations

The API for Web is essentially the same as for native platforms; however, some features within `PowerSyncDatabase` clients are not available.

### Imports

Flutter Web does not support importing directly from `sqlite3.dart` as it uses `dart:ffi`. Change imports from:

```dart theme={null}
import 'package:powersync/sqlite3.dart';
```

to:

```dart theme={null}
import 'package:powersync/sqlite3_common.dart';
```

in code which needs to run on the Web platform. Isolated native-specific code can still import from `sqlite3.dart`.

### Database connections

Web database connections do not support concurrency. A single database connection is used. `readLock` and `writeLock` contexts do not implement checks for preventing writable queries in read connections and vice versa.

Direct access to the synchronous `CommonDatabase` (`sqlite.Database` equivalent for web) connection is not available. `computeWithDatabase` is not available on web.
# Next.js + PowerSync Source: https://docs.powersync.com/client-sdks/frameworks/next-js A guide for creating a new Next.js application with PowerSync for offline/local first functionality ## Introduction In this tutorial, we'll explore how to enhance a Next.js application with offline-first capabilities using PowerSync. In the following sections, we'll walk through the process of integrating PowerSync into a Next.js application, setting up local-first storage, and handling synchronization efficiently. PowerSync is tailored for client-side applications — there isn't much benefit to using SSR with PowerSync. Some frameworks like Next.js push towards enabling SSR by default, which means code is evaluated in a Node.js runtime. The PowerSync Web SDK requires browser APIs which are not available in Node.js. For ergonomics, the SDK performs no-ops if used in Node.js (rather than throwing errors), but you should not expect any data from PowerSync during server-side rendering. If you are using SSR in your application, we recommend explicitly isolating PowerSync to client-side code. ## Setup ### Next.js Project Setup Let's start by bootstrapping a new Next.js application using [`create-next-app`](https://nextjs.org/docs/app/api-reference/cli/create-next-app). ```shell npm theme={null} npx create-next-app@latest my-powersync-app ``` ```shell yarn theme={null} yarn create next-app my-powersync-app ``` ```shell pnpm theme={null} pnpm create next-app my-powersync-app ``` When running this command you'll be presented with a few options. The PowerSync suggested selection for the setup options Next.js offers are: ```shell theme={null} Would you like to use TypeScript? Yes Would you like to use ESLint? Yes Would you like to use Tailwind CSS? Yes Would you like your code inside a `src/` directory? Yes Would you like to use App Router? (recommended) Yes Would you like to use Turbopack for `next dev`? Yes Would you like to customize the import alias (`@/*` by default)? 
Yes
```

Turbopack is supported in Next.js 16+. If you're using an older version of Next.js, see the [Webpack configuration (legacy)](#webpack-configuration-legacy) section below.

### Install PowerSync Dependencies

Using PowerSync in a Next.js application requires the [PowerSync Web SDK](https://www.npmjs.com/package/@powersync/web) and its peer dependencies. In addition, we'll also install [`@powersync/react`](https://www.npmjs.com/package/@powersync/react), which provides several hooks and providers for easier integration.

```shell npm theme={null}
npm install @powersync/web @journeyapps/wa-sqlite @powersync/react
```

```shell yarn theme={null}
yarn add @powersync/web @journeyapps/wa-sqlite @powersync/react
```

```shell pnpm theme={null}
pnpm install @powersync/web @journeyapps/wa-sqlite @powersync/react
```

This SDK currently requires [@journeyapps/wa-sqlite](https://www.npmjs.com/package/@journeyapps/wa-sqlite) as a peer dependency.

### Copy Worker Assets

When using Turbopack, you need to copy the PowerSync worker files to your public directory. Add a `postinstall` script to your `package.json`:

```json package.json theme={null}
{
  "scripts": {
    "postinstall": "powersync-web copy-assets -o public"
  }
}
```

Then run the script to copy the assets:

```shell npm theme={null}
npm run postinstall
```

```shell yarn theme={null}
yarn postinstall
```

```shell pnpm theme={null}
pnpm postinstall
```

This copies the pre-bundled worker files to `public/@powersync/`, which are required since Turbopack doesn't support dynamic imports of workers yet. Add `public/@powersync/*` to your `.gitignore` file since these are generated assets.
## Next.js Config Setup For Next.js 16+ with Turbopack, the configuration is minimal: ```typescript next.config.ts theme={null} module.exports = { images: { disableStaticImages: true }, turbopack: {} }; ``` Run `pnpm dev` to start the development server and check that everything compiles correctly, before moving onto the next section. ### Webpack configuration (legacy) If you're using an older version of Next.js (before 16) or prefer to use Webpack, use this configuration instead: ```typescript next.config.ts theme={null} module.exports = { webpack: (config: any, { isServer }: any) => { config.experiments = { ...config.experiments, asyncWebAssembly: true, topLevelAwait: true, }; if (!isServer) { config.module.rules.push({ test: /\.wasm$/, type: "asset/resource", }); } return config; } } ``` ## Configure a PowerSync Instance Now that we've got our project setup, let's create a new PowerSync Cloud instance and connect our client to it. For the purposes of this demo, we'll be using Supabase as the backend source database that PowerSync will connect to. To set up a new PowerSync instance, follow the steps covered in the [Installation - Database Connection](/configuration/source-db/connection) docs page. ## Configure PowerSync in your project ### Add core PowerSync files Start by adding a new directory in `./src/lib` named `powersync`. #### `AppSchema` Create a new file called `AppSchema.ts` in the newly created `powersync` directory and add your App Schema to the file. Here is an example of this. 
```typescript lib/powersync/AppSchema.ts theme={null}
import { column, Schema, Table } from '@powersync/web';

const lists = new Table({
  created_at: column.text,
  name: column.text,
  owner_id: column.text
});

const todos = new Table(
  {
    list_id: column.text,
    created_at: column.text,
    completed_at: column.text,
    description: column.text,
    created_by: column.text,
    completed_by: column.text,
    completed: column.integer
  },
  { indexes: { list: ['list_id'] } }
);

export const AppSchema = new Schema({
  todos,
  lists
});

// For types
export type Database = (typeof AppSchema)['types'];
export type TodoRecord = Database['todos'];
// OR:
// export type Todo = RowType<typeof todos>;
export type ListRecord = Database['lists'];
```

This defines the local SQLite database schema and PowerSync will hydrate the tables once the SDK connects to the PowerSync instance.

#### `BackendConnector`

Create a new file called `BackendConnector.ts` in the `powersync` directory and add the following to the file.

```typescript lib/powersync/BackendConnector.ts theme={null}
import { AbstractPowerSyncDatabase, PowerSyncBackendConnector, UpdateType } from '@powersync/web';

export class BackendConnector implements PowerSyncBackendConnector {
  private powersyncUrl: string | undefined;
  private powersyncToken: string | undefined;

  constructor() {
    this.powersyncUrl = process.env.NEXT_PUBLIC_POWERSYNC_URL;
    // This token is for development only.
    // For production applications, integrate with an auth provider or custom auth.
    this.powersyncToken = process.env.NEXT_PUBLIC_POWERSYNC_TOKEN;
  }

  async fetchCredentials() {
    // TODO: Use an authentication service or custom implementation here.
    if (this.powersyncToken == null || this.powersyncUrl == null) {
      return null;
    }

    return {
      endpoint: this.powersyncUrl,
      token: this.powersyncToken
    };
  }

  async uploadData(database: AbstractPowerSyncDatabase): Promise<void> {
    const transaction = await database.getNextCrudTransaction();
    if (!transaction) {
      return;
    }

    try {
      for (const op of transaction.crud) {
        // The data that needs to be changed in the remote db
        const record = { ...op.opData, id: op.id };
        switch (op.op) {
          case UpdateType.PUT:
            // TODO: Instruct your backend API to CREATE a record
            break;
          case UpdateType.PATCH:
            // TODO: Instruct your backend API to PATCH a record
            break;
          case UpdateType.DELETE:
            // TODO: Instruct your backend API to DELETE a record
            break;
        }
      }

      await transaction.complete();
    } catch (error: any) {
      console.error(`Data upload error - discarding`, error);
      await transaction.complete();
    }
  }
}
```

There are two core functions in this file:

* `fetchCredentials()` - Used to return a JWT to the PowerSync Service for authentication.
* `uploadData()` - Used to upload changes captured in the local SQLite database that need to be sent to the backend source database, in this case Supabase. We'll get back to this further down.

You'll notice that we need to add a `.env` file to our project which will contain two variables:

* `NEXT_PUBLIC_POWERSYNC_URL` - This is the PowerSync instance URL. You can grab this from the PowerSync Cloud dashboard.
* `NEXT_PUBLIC_POWERSYNC_TOKEN` - For development purposes we'll be using a development token. To generate one, please follow the steps outlined in [Development Token](/configuration/auth/development-tokens) from our installation docs.

### Create Providers

Create a new directory in `./src/app/components` named `providers`.

#### `SystemProvider`

Add a new file in the newly created `providers` directory called `SystemProvider.tsx`.
```typescript components/providers/SystemProvider.tsx theme={null}
'use client';

import { AppSchema } from '@/lib/powersync/AppSchema';
import { BackendConnector } from '@/lib/powersync/BackendConnector';
import { PowerSyncContext } from '@powersync/react';
import { PowerSyncDatabase, WASQLiteOpenFactory, createBaseLogger, LogLevel } from '@powersync/web';
import React, { Suspense } from 'react';

const logger = createBaseLogger();
logger.useDefaults();
logger.setLevel(LogLevel.DEBUG);

const factory = new WASQLiteOpenFactory({
  dbFilename: 'powersync.db',
  // Use the pre-bundled worker from public/@powersync/
  // This is required since Turbopack doesn't support dynamic imports of workers yet
  worker: '/@powersync/worker/WASQLiteDB.umd.js'
});

export const db = new PowerSyncDatabase({
  database: factory,
  schema: AppSchema,
  flags: {
    disableSSRWarning: true
  },
  sync: {
    // Use the pre-bundled sync worker from public/@powersync/
    worker: '/@powersync/worker/SharedSyncImplementation.umd.js'
  }
});

const connector = new BackendConnector();
db.connect(connector);

export const SystemProvider = ({ children }: { children: React.ReactNode }) => {
  return (
    <Suspense>
      <PowerSyncContext.Provider value={db}>{children}</PowerSyncContext.Provider>
    </Suspense>
  );
};

export default SystemProvider;
```

The `SystemProvider` is responsible for initializing the `PowerSyncDatabase` and exposing it to the component tree via `PowerSyncContext.Provider`. The worker paths point to the pre-bundled workers copied to the public directory by the `powersync-web copy-assets` command.

We also instantiate our `BackendConnector` and pass an instance of that to `db.connect()`. This will connect to the PowerSync instance, validate the token supplied in the `fetchCredentials` function and then start syncing with the PowerSync Service.

#### Update `layout.tsx`

In our main `layout.tsx` we'll update the `RootLayout` function to use the `SystemProvider`.
```typescript app/layout.tsx theme={null}
'use client';

import { SystemProvider } from '@/app/components/providers/SystemProvider';
import { Geist, Geist_Mono } from "next/font/google";
import "./globals.css";

const geistSans = Geist({
  variable: "--font-geist-sans",
  subsets: ["latin"],
});

const geistMono = Geist_Mono({
  variable: "--font-geist-mono",
  subsets: ["latin"],
});

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en">
      <body className={`${geistSans.variable} ${geistMono.variable}`}>
        <SystemProvider>{children}</SystemProvider>
      </body>
    </html>
  );
}
```

#### Use PowerSync

##### Reading Data

In our `page.tsx` we can now use the `useQuery` hook or other PowerSync functions to read data from the SQLite database and render the results in our application.

```typescript app/page.tsx theme={null}
'use client';

import { useState, useEffect } from 'react';
import { useQuery, useStatus, usePowerSync } from '@powersync/react';

export default function Page() {
  // Hook
  const powersync = usePowerSync();

  // Get database status information e.g. downloading, uploading and lastSynced dates
  const status = useStatus();

  // Example 1: Reactive query
  const { data: lists } = useQuery("SELECT * FROM lists");

  // Example 2: Standard one-off query (stored separately from the reactive results above)
  const [staticLists, setStaticLists] = useState<any[]>([]);
  useEffect(() => {
    powersync.getAll('SELECT * from lists').then(setStaticLists);
  }, []);

  return (
    <ul>
      {lists.map((list) => (
        <li key={list.id}>{list.name}</li>
      ))}
    </ul>
) } ``` ##### Writing Data Using the `execute` function we can also write data into our local SQLite database. ```typescript theme={null} await powersync.execute("INSERT INTO lists (id, created_at, name, owner_id) VALUES (?, ?, ?, ?)", [uuid(), new Date(), "Test", user_id]); ``` Changes made against the local data will be stored in the upload queue and will be processed by the `uploadData` in the BackendConnector class. # Nuxt Integration Source: https://docs.powersync.com/client-sdks/frameworks/nuxt PowerSync has first class support for Nuxt. Use this guide to get started. ## Introduction `@powersync/nuxt` is a Nuxt module that wraps [`@powersync/vue`](https://www.npmjs.com/package/@powersync/vue) and provides everything you need to build offline-first Nuxt applications. It re-exports all `@powersync/vue` composables so this is the only PowerSync dependency you need, and it adds a Nuxt Devtools integration with a PowerSync diagnostics panel for inspecting sync status, local data, config, and logs. **Alpha:** The Nuxt PowerSync integration is currently in Alpha. APIs and behavior may change. We welcome feedback in [Discord](https://discord.com/invite/powersync) or on [GitHub](https://github.com/powersync-ja/powersync-js). PowerSync is tailored for client-side applications — there isn't much benefit to using SSR with PowerSync. Nuxt evaluates plugins server-side unless you use the `.client.ts` suffix. The PowerSync Web SDK requires browser APIs that are not available in Node.js; it performs no-ops in a Node.js runtime rather than throwing errors, but you should not expect any data from PowerSync during server-side rendering. Always create your PowerSync plugin as `plugins/powersync.client.ts` to ensure it runs only in the browser. For a complete working example, see the [Nuxt + Supabase Todo List demo](https://github.com/powersync-ja/powersync-js/tree/main/demos/nuxt-supabase-todolist). 
## Setup ### Install PowerSync Dependencies ```shell npm theme={null} npm install @powersync/nuxt ``` ```shell pnpm theme={null} pnpm add @powersync/nuxt @powersync/vue @powersync/web ``` With **npm** (v7+), peer dependencies are installed automatically. With **pnpm**, you must install peer dependencies explicitly, as shown above. ### Add the Module Add `@powersync/nuxt` to the `modules` array in `nuxt.config.ts` and include the required Vite configuration: ```typescript nuxt.config.ts theme={null} export default defineNuxtConfig({ modules: ['@powersync/nuxt'], vite: { optimizeDeps: { exclude: ['@powersync/web'] }, worker: { format: 'es' } } }); ``` If you are using Tailwind CSS in your project, see the [Known Issues](#known-issues) section. ## Configure PowerSync in your Project ### Define your Schema Create a file at `powersync/AppSchema.ts` and define your local SQLite schema. PowerSync will hydrate these tables once the SDK connects to your PowerSync instance. ```typescript powersync/AppSchema.ts theme={null} import { column, Schema, Table } from '@powersync/web'; const lists = new Table({ created_at: column.text, name: column.text, owner_id: column.text }); const todos = new Table( { list_id: column.text, created_at: column.text, completed_at: column.text, description: column.text, created_by: column.text, completed_by: column.text, completed: column.integer }, { indexes: { list: ['list_id'] } } ); export const AppSchema = new Schema({ todos, lists }); // For types export type Database = (typeof AppSchema)['types']; export type TodoRecord = Database['todos']; export type ListRecord = Database['lists']; ``` Learn more about defining your schema in the [JavaScript Web SDK reference](/client-sdk-references/javascript-web#1-define-the-schema). ### Create your Connector Create a file at `powersync/PowerSyncConnector.ts`. The connector handles authentication and uploading local changes to your backend. 
```typescript powersync/PowerSyncConnector.ts theme={null} import { UpdateType, type PowerSyncBackendConnector } from '@powersync/web'; export class PowerSyncConnector implements PowerSyncBackendConnector { async fetchCredentials() { // Return a JWT for the PowerSync Service to authenticate the client. // See https://docs.powersync.com/installation/authentication-setup // For quick local testing, use a development token: // https://docs.powersync.com/installation/authentication-setup/development-tokens return { endpoint: '[Your PowerSync instance URL]', token: '[Your auth token]' }; } async uploadData(db: any) { // Send local changes to your backend. // See https://docs.powersync.com/client-sdk-references/javascript-web#3-integrate-with-your-backend const transaction = await db.getNextCrudTransaction(); if (!transaction) return; try { for (const op of transaction.crud) { const record = { ...op.opData, id: op.id }; switch (op.op) { case UpdateType.PUT: // TODO: send CREATE to your backend API break; case UpdateType.PATCH: // TODO: send PATCH to your backend API break; case UpdateType.DELETE: // TODO: send DELETE to your backend API break; } } await transaction.complete(); } catch (error: any) { console.error('Data upload error - discarding', error); await transaction.complete(); } } } ``` ### Create the Plugin Create a [Nuxt plugin](https://nuxt.com/docs/guide/directory-structure/plugins) at `plugins/powersync.client.ts`. The `.client.ts` suffix ensures this only runs in the browser. 
```typescript plugins/powersync.client.ts theme={null}
import { NuxtPowerSyncDatabase, createPowerSyncPlugin } from '@powersync/nuxt';
import { AppSchema } from '~/powersync/AppSchema';
import { PowerSyncConnector } from '~/powersync/PowerSyncConnector';

export default defineNuxtPlugin({
  async setup(nuxtApp) {
    const db = new NuxtPowerSyncDatabase({
      database: { dbFilename: 'my-app.sqlite' },
      schema: AppSchema
    });

    const connector = new PowerSyncConnector();
    await db.init();
    await db.connect(connector);

    const plugin = createPowerSyncPlugin({ database: db });
    nuxtApp.vueApp.use(plugin);
  }
});
```

## Using PowerSync

The module automatically exposes all `@powersync/vue` composables. You can import and use them directly in any component or composable.

### Reading Data

Use the `useQuery` composable for reactive reads; a minimal component might look like this:

```vue components/TodoList.vue theme={null}
<script setup lang="ts">
import { useQuery } from '@powersync/nuxt';

const { data: todos } = useQuery('SELECT * FROM todos ORDER BY created_at');
</script>

<template>
  <ul>
    <li v-for="todo in todos" :key="todo.id">{{ todo.description }}</li>
  </ul>
</template>
```

### Writing Data

Use `execute` to write to the local SQLite database. Changes are queued and uploaded to your backend via `uploadData` in the connector.

```typescript theme={null}
import { usePowerSync } from '@powersync/nuxt';
import { v4 as uuid } from 'uuid';

const powersync = usePowerSync();
await powersync.execute(
  'INSERT INTO lists (id, created_at, name, owner_id) VALUES (?, ?, ?, ?)',
  [uuid(), new Date().toISOString(), 'My List', currentUserId]
);
```

## Kysely ORM (Optional)

The module optionally exposes a `usePowerSyncKysely()` composable for type-safe query building. You must install the driver and opt in via config.
Install the driver:

```shell npm theme={null}
npm install @powersync/kysely-driver
```

```shell pnpm theme={null}
pnpm add @powersync/kysely-driver
```

Enable it in `nuxt.config.ts`:

```typescript nuxt.config.ts theme={null}
export default defineNuxtConfig({
  modules: ['@powersync/nuxt'],
  powersync: {
    kysely: true
  },
  vite: {
    optimizeDeps: { exclude: ['@powersync/web'] },
    worker: { format: 'es' }
  }
});
```

Then use `usePowerSyncKysely` with your schema's `Database` type for full type safety:

```typescript theme={null}
import { usePowerSyncKysely } from '@powersync/nuxt';
import { type Database } from '~/powersync/AppSchema';

const db = usePowerSyncKysely<Database>();
const lists = await db.selectFrom('lists').selectAll().execute();
```

## Diagnostics & Inspector

The `@powersync/nuxt` module includes a PowerSync diagnostics panel (Inspector) that you can open from the **Nuxt Devtools** PowerSync tab or at **`/__powersync-inspector`**. It shows sync status, local data, config, and logs. Diagnostics must be explicitly enabled (see below).

### Enabling Diagnostics

Add `powersync: { useDiagnostics: true }` to your `nuxt.config.ts`:

```typescript nuxt.config.ts theme={null}
export default defineNuxtConfig({
  modules: ['@powersync/nuxt'],
  powersync: {
    useDiagnostics: true
  },
  vite: {
    optimizeDeps: { exclude: ['@powersync/web'] },
    worker: { format: 'es' }
  }
});
```

When `useDiagnostics: true` is set, `NuxtPowerSyncDatabase` automatically:

* Extends your schema with the diagnostics schema
* Sets up diagnostics recording and logging
* Stores the connector internally so the inspector can access it

No changes to your plugin code are needed.
### Accessing the Inspector

Once diagnostics are enabled, you can open the inspector in two ways:

* **Nuxt Devtools**: open Devtools in your browser and look for the PowerSync tab
* **Direct URL**: navigate to `http://localhost:3000/__powersync-inspector`

The inspector provides the following views:

* **Sync Status** — real-time connection status, sync progress, upload queue statistics, and error monitoring
* **Data Inspector** — browse and search your local SQLite tables
* **Bucket Inspector** — browse your buckets and their data
* **Config Inspector** — view your PowerSync configuration, connection options, and schema
* **Logs** — real-time log output with syntax highlighting and search

## Known Issues

PowerSync Inspector uses `unocss` as a transitive dependency, which can conflict with Tailwind CSS. If you use Tailwind, add the following to your `nuxt.config.ts`:

```typescript nuxt.config.ts theme={null}
export default defineNuxtConfig({
  unocss: { autoImport: false }
});
```

# React Hooks

Source: https://docs.powersync.com/client-sdks/frameworks/react

The `@powersync/react` package provides React hooks for use with the [JavaScript Web SDK](/client-sdks/reference/javascript-web) or [React Native SDK](/client-sdks/reference/react-native-and-expo). These hooks are designed to support reactivity, and can be used to automatically re-render React components when query results update, or to access PowerSync connectivity status changes.

The main hooks available are:

* `useQuery`: Access the results of a watched query. The response includes `isLoading`, `isFetching` and `error` properties.
* `useStatus`: Access the PowerSync connectivity status. This can be used to update the UI based on whether the client is connected or not.
* `useSuspenseQuery`: This hook also allows you to access the results of a watched query, but its loading and fetching states are handled through [Suspense](https://react.dev/reference/react/Suspense).
It automatically converts certain loading/fetching states into Suspense signals, triggering Suspense boundaries in parent components.

For advanced watch query features like incremental updates and differential results for React Hooks, see [Live Queries / Watch Queries](/client-sdks/watch-queries).

The full API Reference and example code can be found here:

# React Native Web Support

Source: https://docs.powersync.com/client-sdks/frameworks/react-native-web-support

[React Native for Web](https://necolas.github.io/react-native-web/) enables developers to use the same React Native codebase for both mobile and web platforms.

**Availability**

Support for React Native Web is available since version 1.12.1 of the PowerSync [React Native SDK](/client-sdks/reference/react-native-and-expo) and version 1.8.0 of the [JavaScript Web SDK](/client-sdks/reference/javascript-web), and is currently in a **beta** release.

A demo app showcasing this functionality is available here:

## Configuring PowerSync in your React Native for Web project

To ensure that PowerSync features are fully supported in your React Native Web project, follow the steps below. This documentation covers the necessary web worker configuration, database instantiation, and multi-platform implementations.

### 1. Install Web SDK

The [PowerSync Web SDK](/client-sdks/reference/javascript-web), alongside the [PowerSync React Native SDK](/client-sdks/reference/react-native-and-expo), is required for web support. See installation instructions [here](https://www.npmjs.com/package/@powersync/web).

### 2. Configure Web Workers

For React Native for Web, workers need to be configured when instantiating `PowerSyncDatabase`. An example of this is available [here](https://github.com/powersync-ja/powersync-js/blob/main/demos/react-native-web-supabase-todolist/library/powersync/system.ts).

To do this, copy the contents of `node_modules/@powersync/web/dist` to the root of your project (typically into the `public` directory).
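As a sketch, this copy can be done with standard shell commands. The nested `public/@powersync` target directory is an assumption chosen to mirror the CLI utility's default output; adjust the paths to your project layout.

```shell
# Copy the bundled PowerSync worker assets into the app's public directory.
# (Assumes @powersync/web is installed; public/@powersync is an illustrative target.)
mkdir -p public/@powersync
cp -r node_modules/@powersync/web/dist/. public/@powersync/
```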
To make it easier to manage these files in the `public` directory, it is recommended to place the contents in a nested directory like `@powersync`. The [`@powersync/web`](https://github.com/powersync-ja/powersync-js/tree/main/packages/web) package includes a CLI utility which can copy the required assets to the `public` directory (configurable with the `--output` option).

```bash theme={null}
# Places assets into public/@powersync by default. Override with `--output path/from_current_working_dir`.
npx @powersync/web copy-assets
# or
pnpm @powersync/web copy-assets
```

### 3. Instantiate Web Workers

The example below demonstrates how to instantiate the workers (PowerSync requires a database worker and a sync worker) when instantiating `PowerSyncDatabase`. You can either specify a path to the worker (they are available in the `worker` directory of the `dist` contents), or provide a factory function to create the worker.

```js theme={null}
const factory = new WASQLiteOpenFactory({
  dbFilename: 'sqlite.db',
  // Option 1: Specify a path to the database worker
  worker: '/@powersync/worker/WASQLiteDB.umd.js'
  // Option 2: Or provide a factory function to create the worker.
  // The worker name should be unique for the database filename to avoid conflicts if multiple clients with different databases are present.
  // worker: (options) => {
  //   if (options?.flags?.enableMultiTabs) {
  //     return new SharedWorker(`/@powersync/worker/WASQLiteDB.umd.js`, {
  //       name: `shared-DB-worker-${options?.dbFilename}`
  //     });
  //   } else {
  //     return new Worker(`/@powersync/worker/WASQLiteDB.umd.js`, {
  //       name: `DB-worker-${options?.dbFilename}`
  //     });
  //   }
  // }
});

this.powersync = new PowerSyncDatabaseWeb({
  schema: AppSchema,
  database: factory,
  sync: {
    // Option 1: You can specify a path to the sync worker
    worker: '/@powersync/worker/SharedSyncImplementation.umd.js'
    // Option 2: Or provide a factory function to create the worker.
    // The worker name should be unique for the database filename to avoid conflicts if multiple clients with different databases are present.
    // worker: (options) => {
    //   return new SharedWorker(`/@powersync/worker/SharedSyncImplementation.umd.js`, {
    //     name: `shared-sync-${options?.dbFilename}`
    //   });
    // }
  }
});
```

This `PowerSyncDatabaseWeb` database will be used alongside the native `PowerSyncDatabase` to support platform-specific implementations. See the [Instantiating PowerSync](#implementations) section below for more details.

### 4. Enable multiple platforms

To target both mobile and web platforms, you need to adjust the Metro configuration and handle platform-specific libraries accordingly.

#### Metro config

Refer to the example [here](https://github.com/powersync-ja/powersync-js/blob/main/demos/react-native-web-supabase-todolist/metro.config.js). Setting `config.resolver.resolveRequest` allows Metro to behave differently based on the platform.

```js theme={null}
config.resolver.resolveRequest = (context, moduleName, platform) => {
  if (platform === 'web') {
    // Depending on `@powersync/web` for functionality, ignore mobile-specific dependencies.
    if (['react-native-prompt-android', '@powersync/react-native'].includes(moduleName)) {
      return { type: 'empty' };
    }
    const mapping = {
      'react-native': 'react-native-web',
      '@powersync/web': '@powersync/web/dist/index.umd.js'
    };
    if (mapping[moduleName]) {
      console.log('remapping', moduleName);
      return context.resolveRequest(context, mapping[moduleName], platform);
    }
  } else {
    // Depending on `@powersync/react-native` for functionality, ignore `@powersync/web` dependencies.
    if (['@powersync/web'].includes(moduleName)) {
      return { type: 'empty' };
    }
  }
  // Ensure you call the default resolver.
  return context.resolveRequest(context, moduleName, platform);
};
```

#### Implementations

Many `react-native` and `web` packages are implemented with only their specific platform in mind, so there may be times where you need to evaluate the platform and provide alternative implementations.

**Instantiating PowerSync**

The following snippet constructs the correct `PowerSyncDatabase` depending on the platform the code is executing on.

```js theme={null}
import React from 'react';
import { PowerSyncDatabase as PowerSyncDatabaseNative } from '@powersync/react-native';
import { PowerSyncDatabase as PowerSyncDatabaseWeb } from '@powersync/web';

if (PowerSyncDatabaseNative) {
  this.powersync = new PowerSyncDatabaseNative({
    schema: AppSchema,
    database: { dbFilename: 'sqlite.db' }
  });
} else {
  const factory = new WASQLiteOpenFactory({
    dbFilename: 'sqlite.db',
    worker: '/@powersync/worker/WASQLiteDB.umd.js'
  });
  this.powersync = new PowerSyncDatabaseWeb({
    schema: AppSchema,
    database: factory,
    sync: {
      worker: '/@powersync/worker/SharedSyncImplementation.umd.js'
    }
  });
}
```

**Implementations that don't support both mobile and web**

```js theme={null}
import { Platform } from 'react-native';
import rnPrompt from 'react-native-prompt-android';

// Example conditional implementation
export async function prompt(
  title = '',
  description = '',
  onInput = (_input: string | null): void | Promise<void> => {},
  options: { placeholder: string | undefined } = { placeholder: undefined }
) {
  const isWeb = Platform.OS === 'web';
  let name: string | null = null;

  if (isWeb) {
    name = window.prompt(`${title}\n${description}`, options.placeholder);
  } else {
    name = await new Promise((resolve) => {
      rnPrompt(
        title,
        description,
        (input) => {
          resolve(input);
        },
        { placeholder: options.placeholder, style: 'shimo' }
      );
    });
  }
  await onInput(name);
}
```

Which can then be used agnostically in a component.
```js theme={null}
import { Button } from 'react-native';
import { prompt } from 'util/prompt';

// Illustrative completion: the cross-platform prompt() helper is invoked
// from a standard react-native Button.
<Button
  title="Create list"
  onPress={() => {
    prompt('Add a new list', 'Enter a name', (name) => {
      // TODO: handle the entered name
    });
  }}
/>;
```