# Architecture Overview

Source: https://docs.powersync.com/architecture/architecture-overview

The core components of PowerSync are the service and client SDKs. The [PowerSync Service](/architecture/powersync-service) and client SDK operate in unison to keep client-side SQLite databases in sync with a backend database. Learn about their architecture:

### Protocol

Learn about the sync protocol used between PowerSync clients and a [PowerSync Service](/architecture/powersync-service):

### Self-Hosted Architecture

For more details on the typical architecture of a production self-hosted deployment, see here:

# Client Architecture

Source: https://docs.powersync.com/architecture/client-architecture

### Reading and Writing Data

From the client-side perspective, there are two data flow paths:

* Reading data from the server, or downloading data (to the SQLite database)
* Writing changes back to the server, or uploading data (from the SQLite database)

#### Reading Data

App clients always read data from a local SQLite database. The local database is asynchronously hydrated from the PowerSync Service.

A developer configures [Sync Rules](/usage/sync-rules) for their PowerSync instance to control which data is synced to which users. The PowerSync Service connects directly to the backend database and uses a change stream to hydrate dynamic data partitions, called [sync buckets](/usage/sync-rules/organize-data-into-buckets). Sync buckets are used to partition data according to the configured Sync Rules. (In most use cases, only a subset of data is required in a client's database and not a copy of the entire backend database.)

The local SQLite database embedded in the PowerSync SDK is automatically kept in sync with the backend database, based on the [Sync Rules](/usage/sync-rules) configured by the developer:

#### Writing Data

Client-side data modifications, namely updates, deletes and inserts, are persisted in the embedded SQLite database as well as stored in an upload queue. The upload queue is a blocking [FIFO](https://en.wikipedia.org/wiki/FIFO_%28computing_and_electronics%29) queue that gets processed when network connectivity is available.

Each entry in the queue is processed by writing the entry to your existing backend application API, using a function [defined by you](/installation/client-side-setup/integrating-with-your-backend) (the developer). This ensures that existing backend business logic is honored when uploading data changes. For more information, see the section on [integrating with your backend](/installation/client-side-setup/integrating-with-your-backend).

### Schema

On the client, the application [defines a schema](/installation/client-side-setup/define-your-schema) with tables, columns and indexes. These are then usable as if they were actual SQLite tables, while in reality they are created as SQLite views.

The client SDK maintains the following tables:

1. `ps_data__<table>` - contains the data for `<table>`, in JSON format. This table's schema does not change when columns are added, removed or changed.
2. `ps_data_local__<table>` - same as the above, but for local-only tables.
3. `<table>` (VIEW) - a view on the above table, with each defined column extracted from the JSON field. For example, a "description" text column would be `CAST(data ->> '$.description' as TEXT)`.
4. `ps_untyped` - any synced table that is not defined in the client-side schema is placed here. If the table is added to the schema at a later point, the data is then migrated to `ps_data__<table>`.
5. `ps_oplog` - data as received by the [PowerSync Service](/architecture/powersync-service), grouped per bucket.
6. `ps_crud` - the local upload queue.
7. `ps_buckets` - a small amount of metadata for each bucket.
8. `ps_migrations` - a table keeping track of SDK schema migrations.

Most rows will be present in at least two tables: the `ps_data__<table>` table and `ps_oplog`. A row may be present multiple times in `ps_oplog` if it was synced via multiple buckets. The copy in `ps_oplog` may be newer than the one in `ps_data__<table>`. Only once a full checkpoint has been downloaded is the data copied over to the individual tables. If multiple rows with the same table and id have been synced, only one is preserved (the one with the highest `op_id`).

If you run into limitations with the above JSON-based SQLite view system, check out [this experimental feature](/usage/use-case-examples/raw-tables) which allows you to define and manage raw SQLite tables to work around some limitations. We are actively seeking feedback about this functionality.

# Consistency

Source: https://docs.powersync.com/architecture/consistency

PowerSync uses the concept of "checkpoints" to ensure the data is consistent.

## PowerSync: Designed for causal+ consistency

PowerSync is designed to have [Causal+ Consistency](https://jepsen.io/consistency/models/causal), while providing enough flexibility for applications to perform their own data validations and conflict handling.

## How it works: Checkpoints

A checkpoint is a single point-in-time on the server (similar to an [LSN in Postgres](https://www.postgresql.org/docs/current/datatype-pg-lsn.html)) with a consistent state — only fully committed transactions are part of the state.

The client only updates its local state when it has all the data matching a checkpoint, and then it updates the state to exactly match that of the checkpoint. There is no intermediate state while downloading large sets of changes, such as large server-side transactions. Different tables and sync buckets are all included in the same consistent checkpoint, to ensure that the state is consistent across all data in the app.

## Local client changes

Local changes are applied on top of the last checkpoint received from the server, as well as being persisted into an upload queue. While changes are present in the upload queue, the client does not advance to a new checkpoint. This means the client never has to resolve conflicts locally.

Only once all the local changes have been acknowledged by the server, and the data for that new checkpoint is downloaded by the client, does the client advance to the next checkpoint. This ensures that the operations are always ordered correctly on the client.

## Types of local operations

The client automatically records changes to the local database as PUT, PATCH or DELETE operations — corresponding to INSERT, UPDATE or DELETE statements. These are grouped together in a batch per local transaction.

Since the developer has full control over how operations are applied, more advanced operations can be modeled on top of these three. For example, an insert-only "operations" table can be added that records additional metadata for individual operations.

## Validation and conflict handling

With PowerSync offering full flexibility in how changes are applied on the server, it is also the developer's responsibility to implement this correctly to avoid consistency issues.

Some scenarios to consider: While the client was offline, a record was modified locally. By the time the client is online again, that record has been deleted. Some options for handling the change:

* Discard the change.
* Discard the entire transaction.
* Re-create the record.
* Record the change elsewhere, potentially notifying the user and allowing the user to resolve the issue.

Some other examples include foreign-key or not-null constraints, maximum size of numeric fields, unique constraints, and access restrictions (such as row-level security policies).
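To make these options concrete, the sketch below shows one place where such decisions typically live: the client's upload handler. It uses the Dart connector API that appears later in this document (`getNextCrudTransaction`, `transaction.crud`, `transaction.complete()`); `MyApiClient` and `ConstraintViolation` are hypothetical stand-ins for your own backend integration, and this is only one possible policy, not the recommended one for every app.

```dart
import 'package:powersync/powersync.dart';

/// Hypothetical client for your backend API.
class MyApiClient {
  /// Applies a single change on the backend. Throws [ConstraintViolation]
  /// when the server rejects it (deleted record, unique constraint, etc.).
  Future<void> apply(CrudEntry op) async {
    // ... call your backend application API here ...
  }
}

class ConstraintViolation implements Exception {}

class MyConnector extends PowerSyncBackendConnector {
  final api = MyApiClient();

  @override
  Future<PowerSyncCredentials?> fetchCredentials() async {
    // See "Integrate with your Backend" in the SDK sections below.
    return null;
  }

  @override
  Future<void> uploadData(PowerSyncDatabase database) async {
    final transaction = await database.getNextCrudTransaction();
    if (transaction == null) return;

    try {
      for (final op in transaction.crud) {
        await api.apply(op);
      }
      // All changes accepted: acknowledge so the client can advance to the
      // next checkpoint.
      await transaction.complete();
    } on ConstraintViolation {
      // One possible policy: discard the rejected transaction so it does not
      // block the upload queue. Alternatives: persist it to a server-side
      // queue first, or surface it to the user before completing.
      await transaction.complete();
    }
    // Any other exception (e.g. a network failure) propagates, leaving the
    // transaction in the queue so it is retried later.
  }
}
```

Whichever policy you choose, the key property to preserve is that `complete()` is only called once the change has been handled: applied, queued elsewhere, or deliberately discarded.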
With an online-only application, the user typically sees the error as soon as it occurs, and can make changes as required. In an offline-capable application, these errors may occur much later than when the change was made, so more care is required to handle these cases. Special care must be taken so that issues such as those do not block the upload queue — the queue cannot advance if the server does not acknowledge a change. There is no single correct choice on how to handle write failures such as mentioned above — the best action depends on the specific application and scenario. However, we do have some suggestions for general approaches: 1. In general, consider relaxing constraints somewhat on the server where it is not absolutely important. It may be better to accept data that is somewhat inconsistent (e.g. a client not applying all expected validations), rather than discarding the data completely. 2. If it is critical to preserve all client changes and preserve the order of changes: 1. Block the client's queue on unexpected errors (don't acknowledge the change). 2. Implement error monitoring to be notified of issues, and resolve the issues as soon as possible. 3. If it is critical to preserve all client changes, but the exact order may not be critical: 1. On a constraint error, persist the transaction in a separate server-side queue, and acknowledge the change. 2. The server-side queue can then be inspected and retried asynchronously, without blocking the client-side queue. 4. If it is acceptable to lose some changes due to constraint errors: 1. Discard the change, or the entire transaction if the changes must all be applied together. 2. Implement error notifications to detect these issues. See also: * [Handling Update Conflicts](/usage/lifecycle-maintenance/handling-update-conflicts) * [Custom Conflict Resolution](/usage/lifecycle-maintenance/handling-update-conflicts/custom-conflict-resolution) If you have any questions about consistency, please [join our Discord](https://discord.gg/powersync) to discuss. # PowerSync Protocol Source: https://docs.powersync.com/architecture/powersync-protocol This contains a broad overview of the sync protocol used between PowerSync clients and a [PowerSync Service](/architecture/powersync-service) instance. For details, see the implementation in the various client SDKs. ## Design The PowerSync protocol is designed to efficiently sync changes to clients, while maintaining [consistency](/architecture/consistency) and integrity of data. The same process is used to download the initial set of data, bulk download changes after being offline for a while, and incrementally stream changes while connected. ## Concepts ### Buckets All synced data is grouped into [buckets](/usage/sync-rules/organize-data-into-buckets). A bucket represents a collection of synced rows, synced to any number of users. [Buckets](/usage/sync-rules/organize-data-into-buckets) is a core concept that allows PowerSync to efficiently scale to thousands of concurrent users, incrementally syncing changes to hundreds of thousands of rows to each. Each bucket keeps an ordered list of changes to rows within the bucket — generally as "PUT" or "REMOVE" operations. * PUT is the equivalent of "INSERT OR REPLACE" * REMOVE is slightly different from "DELETE": a row is only deleted from the client if it has been removed from *all* buckets synced to the client. ### Checkpoints A checkpoint is a sequential id that represents a single point-in-time for consistency purposes. 
This is further explained in [Consistency](/architecture/consistency).

### Checksums

For any checkpoint, the client and server can compute a per-bucket checksum. This is essentially the sum of checksums of individual operations within the bucket, with each individual checksum being a hash of the operation data. The checksum helps to ensure that the client has all the correct data. If the bucket data changes on the server, for example because of a manual edit to the underlying bucket storage, the checksums will stop matching, and the client will re-download the entire bucket.

Note: Checksums are not a cryptographically secure method to verify data integrity. Rather, they are designed to detect simple data mismatches, whether due to bugs, manual data modification, or other corruption issues.

### Compacting

To avoid indefinite growth in the size of buckets, the history of a bucket can be compacted. Stale updates are replaced with marker entries, which can be merged together, while keeping the same checksums.

## Protocol

A client initiates a sync session using:

1. A JWT token that typically contains the `user_id`, and additional parameters (optional).
2. A list of current buckets and the latest operation id in each.

The server then responds with a stream of:

1. "Checkpoint available": A new checkpoint id, with a checksum for each bucket in the checkpoint.
2. "Data": New operations for the above checkpoint for each relevant bucket, starting from the last operation id as sent by the client.
3. "Checkpoint complete": Sent once all data for the checkpoint has been sent.

The server then waits until a new checkpoint is available, then repeats the above sequence.

The stream can be interrupted at any time, at which point the client will initiate a new session, resuming from the last point. If a checksum validation fails on the client, the client will delete the bucket and start a new sync session.

Data for individual rows is represented using JSON. The protocol itself is schemaless: the client is expected to use its own copy of the schema, and gracefully handle schema differences.

#### Write Checkpoints

Write checkpoints are used to ensure clients have synced their own changes back before applying downloaded data locally.

Creating a write checkpoint is a separate operation, which is performed by the client after all data has been uploaded. It is important that this happens after the data has been written to the backend source database. The server then keeps track of the current CDC stream position on the database (LSN in Postgres, resume token in MongoDB, or binlog position in MySQL), and notifies the client when the data has been replicated, as part of checkpoint data in the normal data stream.

# PowerSync Service

Source: https://docs.powersync.com/architecture/powersync-service

Each PowerSync instance runs a copy of the PowerSync Service. The primary purpose of this service is to stream changes to clients. This service has the following components:

## Replication

The service continuously replicates data from the source database, then:

1. Pre-processes the data according to the [sync rules](/usage/sync-rules) (both data queries and parameter queries), splitting data into [sync buckets](/usage/sync-rules/organize-data-into-buckets) and transforming the data if required.
2. Persists each operation into the relevant sync buckets, ready to be streamed to clients.

The recent history of operations to each row is stored, not only the current version.
This supports the "append-only" structure of sync buckets, which allows clients to efficiently stream changes while maintaining data integrity. Sync buckets can be compacted to avoid an ever-growing history. Replication is initially performed by taking a snapshot of all tables defined in the sync rules, then data is incrementally replicated using [logical replication](https://www.postgresql.org/docs/current/logical-replication.html). When sync rules are updated, this process restarts with a new snapshot. ## Authentication The service authenticates users using [JWTs](/installation/authentication-setup), before allowing access to data. ## Streaming Sync Once a user is authenticated: 1. The service calculates a list of buckets for the user to sync using [parameter queries](/usage/sync-rules/parameter-queries). 2. The service streams any operations added to those buckets since the last time the user connected. The service then continuously monitors for buckets that are added or removed, as well as for new operations within those buckets, and streams those changes. Only the internal (replicated) storage of the PowerSync Service is used — the source database is not queried directly during streaming. ## Source Code To access the source code for the PowerSync Service, refer to the [powersync-service](https://github.com/powersync-ja/powersync-service) repo on GitHub. ## See Also * [PowerSync Overview](/intro/powersync-overview) # .NET (alpha) Source: https://docs.powersync.com/client-sdk-references/dotnet SDK reference for using PowerSync in .NET clients. This SDK is distributed via NuGet [\[External link\].](https://www.nuget.org/packages/PowerSync.Common/) Refer to the powersync-dotnet repo on GitHub. A full API Reference for this SDK is not yet available. This is planned for a future release. Gallery of .NET example projects/demo apps built with PowerSync. This SDK is currently in an [**alpha** release](/resources/feature-status). It is not suitable for production use as breaking changes may still occur. ## Supported Frameworks and Targets The PowerSync .NET SDK supports: * **.NET Versions**: 6, 8, and 9 * **.NET Framework**: Version 4.8 (requires additional configuration) * **MAUI**: Cross-platform support for Android, iOS, and Windows * **WPF**: Windows desktop applications **Current Limitations**: * Blazor (web) platforms are not yet supported. For more details, please refer to the package [README](https://github.com/powersync-ja/powersync-dotnet/tree/main?tab=readme-ov-file). ## SDK Features * Provides real-time streaming of database changes. * Offers direct access to the SQLite database, enabling the use of SQL on both client and server sides. * Enables subscription to queries for receiving live updates. * Eliminates the need for client-side database migrations as these are managed automatically. ## Quickstart For desktop/server/binary use-cases and WPF, add the [`PowerSync.Common`](https://www.nuget.org/packages/PowerSync.Common/) NuGet package to your project: ```bash dotnet add package PowerSync.Common --prerelease ``` For MAUI apps, add both [`PowerSync.Common`](https://www.nuget.org/packages/PowerSync.Common/) and [`PowerSync.Maui`](https://www.nuget.org/packages/PowerSync.Maui/) NuGet packages to your project: ```bash dotnet add package PowerSync.Common --prerelease dotnet add package PowerSync.Maui --prerelease ``` Add `--prerelease` while this package is in alpha. 
Next, make sure that you have:

* Signed up for a PowerSync Cloud account ([here](https://accounts.journeyapps.com/portal/powersync-signup?s=docs)) or [self-host PowerSync](/self-hosting/getting-started).
* [Configured your backend database](/installation/database-setup) and connected it to your PowerSync instance.

### 1. Define the schema

The first step is defining the schema for the local SQLite database. This schema represents a "view" of the downloaded data. No migrations are required — the schema is applied directly when the local PowerSync database is constructed (as we'll show in the next step). You can use [this example](https://github.com/powersync-ja/powersync-dotnet/blob/main/demos/CommandLine/AppSchema.cs) as a reference when defining your schema.

### 2. Instantiate the PowerSync Database

Next, you need to instantiate the PowerSync database — this is the core managed database. Its primary functions are to record all changes in the local database, whether online or offline. In addition, it automatically uploads changes to your app backend when connected.

**Example**: The initialization syntax differs slightly between the Common and MAUI SDKs:

```cs
using PowerSync.Common.Client;

class Demo
{
    static async Task Main()
    {
        var db = new PowerSyncDatabase(new PowerSyncDatabaseOptions
        {
            Database = new SQLOpenOptions { DbFilename = "tododemo.db" },
            Schema = AppSchema.PowerSyncSchema,
        });
        await db.Init();
    }
}
```

```cs
using PowerSync.Common.Client;
using PowerSync.Common.MDSQLite;
using PowerSync.Maui.SQLite;

class Demo
{
    static async Task Main()
    {
        // Ensures the DB file is stored in a platform appropriate location
        var dbPath = Path.Combine(FileSystem.AppDataDirectory, "maui-example.db");

        var factory = new MAUISQLiteDBOpenFactory(new MDSQLiteOpenFactoryOptions()
        {
            DbFilename = dbPath
        });

        var db = new PowerSyncDatabase(new PowerSyncDatabaseOptions()
        {
            Database = factory, // Supply a factory
            Schema = AppSchema.PowerSyncSchema,
        });
        await db.Init();
    }
}
```

### 3. Integrate with your Backend

The PowerSync backend connector provides the connection between your application backend and the PowerSync client-side managed SQLite database. It is used to:

1. Retrieve an auth token to connect to the PowerSync instance.
2. Apply local changes on your backend application server (and from there, to your backend database).

Accordingly, the connector must implement two methods:

1. [PowerSyncBackendConnector.FetchCredentials](https://github.com/powersync-ja/powersync-dotnet/blob/main/demos/CommandLine/NodeConnector.cs#L50) - This is called every couple of minutes and is used to obtain credentials for your app backend API. -> See [Authentication Setup](/installation/authentication-setup) for instructions on how the credentials should be generated.
2. [PowerSyncBackendConnector.UploadData](https://github.com/powersync-ja/powersync-dotnet/blob/main/demos/CommandLine/NodeConnector.cs#L72) - Use this to upload client-side changes to your app backend. -> See [Writing Client Changes](/installation/app-backend-setup/writing-client-changes) for considerations on the app backend implementation.
**Example**: ```cs using System; using System.Collections.Generic; using System.Net.Http; using System.Text; using System.Text.Json; using System.Threading.Tasks; using PowerSync.Common.Client; using PowerSync.Common.Client.Connection; using PowerSync.Common.DB.Crud; public class MyConnector : IPowerSyncBackendConnector { private readonly HttpClient _httpClient; // User credentials for the current session public string UserId { get; private set; } // Service endpoints private readonly string _backendUrl; private readonly string _powerSyncUrl; private string? _clientId; public MyConnector() { _httpClient = new HttpClient(); // In a real app, this would come from your authentication system UserId = "user-123"; // Configure your service endpoints _backendUrl = "https://your-backend-api.example.com"; _powerSyncUrl = "https://your-powersync-instance.powersync.journeyapps.com"; } public async Task FetchCredentials() { try { // Obtain a JWT from your authentication service. // See https://docs.powersync.com/installation/authentication-setup // If you're using Supabase or Firebase, you can re-use the JWT from those clients, see // - https://docs.powersync.com/installation/authentication-setup/supabase-auth // - https://docs.powersync.com/installation/authentication-setup/firebase-auth var authToken = "your-auth-token"; // Use a development token (see Authentication Setup https://docs.powersync.com/installation/authentication-setup/development-tokens) to get up and running quickly // Return credentials with PowerSync endpoint and JWT token return new PowerSyncCredentials(_powerSyncUrl, authToken); } catch (Exception ex) { Console.WriteLine($"Error fetching credentials: {ex.Message}"); throw; } } public async Task UploadData(IPowerSyncDatabase database) { // Get the next transaction to upload CrudTransaction? transaction; try { transaction = await database.GetNextCrudTransaction(); } catch (Exception ex) { Console.WriteLine($"UploadData Error: {ex.Message}"); return; } // If there's no transaction, there's nothing to upload if (transaction == null) { return; } // Get client ID if not already retrieved _clientId ??= await database.GetClientId(); try { // Convert PowerSync operations to your backend format var batch = new List(); foreach (var operation in transaction.Crud) { batch.Add(new { op = operation.Op.ToString(), // INSERT, UPDATE, DELETE table = operation.Table, id = operation.Id, data = operation.OpData }); } // Send the operations to your backend var payload = JsonSerializer.Serialize(new { batch }); var content = new StringContent(payload, Encoding.UTF8, "application/json"); HttpResponseMessage response = await _httpClient.PostAsync($"{_backendUrl}/api/data", content); response.EnsureSuccessStatusCode(); // Mark the transaction as completed await transaction.Complete(); } catch (Exception ex) { Console.WriteLine($"UploadData Error: {ex.Message}"); throw; } } } ``` With your database instantiated and your connector ready, call `connect` to start the synchronization process: ```cs await db.Connect(new MyConnector()); await db.WaitForFirstSync(); // Optional, to wait for a complete snapshot of data to be available ``` ## Usage After connecting the client database, it is ready to be used. 
You can run queries and make updates as follows: ```cs // Use db.Get() to fetch a single row: Console.WriteLine(await db.Get("SELECT powersync_rs_version();")); // Or db.GetAll() to fetch all: // Where List result is defined: // record ListResult(string id, string name, string owner_id, string created_at); Console.WriteLine(await db.GetAll("SELECT * FROM lists;")); // Use db.Watch() to watch queries for changes (await is used to wait for initialization): await db.Watch("select * from lists", null, new WatchHandler { OnResult = (results) => { Console.WriteLine("Results: "); foreach (var result in results) { Console.WriteLine(result.id + ":" + result.name); } }, OnError = (error) => { Console.WriteLine("Error: " + error.Message); } }); // And db.Execute for inserts, updates and deletes: await db.Execute( "insert into lists (id, name, owner_id, created_at) values (uuid(), 'New User', ?, datetime())", [connector.UserId] ); ``` ## Configure Logging Enable logging to help you debug your app. By default, the SDK uses a no-op logger that doesn't output any logs. To enable logging, you can configure a custom logger using .NET's `ILogger` interface: ```cs using Microsoft.Extensions.Logging; using PowerSync.Common.Client; // Create a logger factory ILoggerFactory loggerFactory = LoggerFactory.Create(builder => { builder.AddConsole(); // Enable console logging builder.SetMinimumLevel(LogLevel.Information); // Set minimum log level }); var logger = loggerFactory.CreateLogger("PowerSyncLogger"); var db = new PowerSyncDatabase(new PowerSyncDatabaseOptions { Database = new SQLOpenOptions { DbFilename = "powersync.db" }, Schema = AppSchema.PowerSyncSchema, Logger = logger }); ``` # Flutter Source: https://docs.powersync.com/client-sdk-references/flutter Full SDK reference for using PowerSync in Flutter/Dart clients The SDK is distributed via pub.dev [\[External link\].](https://pub.dev/packages/powersync) Refer to the powersync.dart repo on GitHub. Full API reference for the PowerSync SDK [\[External link\].](https://pub.dev/documentation/powersync/latest/powersync/powersync-library.html) Gallery of example projects/demo apps built with Flutter and PowerSync. ### SDK Features * **Real-time streaming of database changes**: Changes made by one user are instantly streamed to all other users with access to that data. This keeps clients automatically in sync without manual polling or refresh logic. * **Direct access to a local SQLite database**: Data is stored locally, so apps can read and write instantly without network calls. This enables offline support and faster user interactions. * **Asynchronous background execution**: The SDK performs database operations in the background to avoid blocking the application’s main thread. This means that apps stay responsive, even during heavy data activity. * **Query subscriptions for live updates**: The SDK supports query subscriptions that automatically push real-time updates to client applications as data changes, keeping your UI reactive and up to date. * **Automatic schema management**: PowerSync syncs schemaless data and applies a client-defined schema using SQLite views. This architecture means that PowerSync SDKs can handle schema changes gracefully without requiring explicit migrations on the client-side. Web support is currently in a beta release. Refer to [Flutter Web Support](/client-sdk-references/flutter/flutter-web-support) for more details. 
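To give a feel for how these pieces fit together before diving into the steps below, here is a condensed sketch of a typical integration. The schema, table names, `startPowerSync` function and the connector are placeholders only; each step is covered in detail under Getting Started.

```dart
import 'package:powersync/powersync.dart';

// Placeholder schema — see "1. Define the Schema" below.
const schema = Schema([
  Table('lists', [Column.text('name'), Column.text('created_at')]),
]);

Future<void> startPowerSync(String path, PowerSyncBackendConnector connector) async {
  // Open the local SQLite database with the client-side schema.
  final db = PowerSyncDatabase(schema: schema, path: path);
  await db.initialize();

  // Connect to the PowerSync Service through your backend connector
  // (implemented in "3. Integrate with your Backend" below).
  db.connect(connector: connector);

  // Read and watch data from the local database.
  final lists = await db.getAll('SELECT * FROM lists');
  print('Synced ${lists.length} lists');

  db.watch('SELECT * FROM lists ORDER BY created_at').listen((results) {
    // Called again whenever the underlying tables change.
  });
}
```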
## Installation Add the [PowerSync pub.dev package](https://pub.dev/packages/powersync) to your project: ```bash flutter pub add powersync ``` ## Getting Started Before implementing the PowerSync SDK in your project, make sure you have completed these steps: * Signed up for a PowerSync Cloud account ([here](https://accounts.journeyapps.com/portal/powersync-signup?s=docs)) or [self-host PowerSync](/self-hosting/getting-started). * [Configured your backend database](/installation/database-setup) and connected it to your PowerSync instance. * [Installed](/client-sdk-references/flutter#installation) the PowerSync Flutter SDK. For this reference document, we assume that you have created a Flutter project and have the following directory structure: ```plaintext lib/ ├── models/ ├── schema.dart └── todolist.dart ├── powersync/ ├── my_backend_connector.dart └── powersync.dart ├── widgets/ ├── lists_widget.dart ├── todos_widget.dart ├── main.dart ``` ### 1. Define the Schema The first step is defining the schema for the local SQLite database. This will be provided as a `schema` parameter to the [PowerSyncDatabase](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncDatabase/PowerSyncDatabase.html) constructor. This schema represents a "view" of the downloaded data. No migrations are required — the schema is applied directly when the PowerSync database is constructed. **Generate schema automatically** In the [dashboard](/usage/tools/powersync-dashboard), the schema can be generated based off your sync rules by right-clicking on an instance and selecting **Generate client-side schema**. Similar functionality exists in the [CLI](/usage/tools/cli). The types available are `text`, `integer` and `real`. These should map directly to the values produced by the [Sync Rules](/usage/sync-rules). If a value doesn't match, it is cast automatically. For details on how Postgres types are mapped to the types below, see the section on [Types](/usage/sync-rules/types) in the *Sync Rules* documentation. **Example**: ```dart lib/models/schema.dart import 'package:powersync/powersync.dart'; const schema = Schema(([ Table('todos', [ Column.text('list_id'), Column.text('created_at'), Column.text('completed_at'), Column.text('description'), Column.integer('completed'), Column.text('created_by'), Column.text('completed_by'), ], indexes: [ // Index to allow efficient lookup within a list Index('list', [IndexedColumn('list_id')]) ]), Table('lists', [ Column.text('created_at'), Column.text('name'), Column.text('owner_id') ]) ])); ``` **Note**: No need to declare a primary key `id` column, as PowerSync will automatically create this. ### 2. Instantiate the PowerSync Database Next, you need to instantiate the PowerSync database — this is the core managed client-side database. Its primary functions are to record all changes in the local database, whether online or offline. In addition, it automatically uploads changes to your app backend when connected. To instantiate `PowerSyncDatabase`, inject the Schema you defined in the previous step and a file path — it's important to only instantiate one instance of `PowerSyncDatabase` per file. 
**Example**:

```dart lib/powersync/powersync.dart
import 'package:path/path.dart';
import 'package:path_provider/path_provider.dart';
import 'package:powersync/powersync.dart';

import '../main.dart';
import '../models/schema.dart';

openDatabase() async {
  final dir = await getApplicationSupportDirectory();
  final path = join(dir.path, 'powersync-dart.db');

  // Set up the database
  // Inject the Schema you defined in the previous step and a file path
  db = PowerSyncDatabase(schema: schema, path: path);
  await db.initialize();
}
```

Once you've instantiated your PowerSync database, you will need to call the [connect()](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncDatabase/connect.html) method to activate it. This method requires the backend connector that will be created in the next step.

```dart lib/main.dart {35}
import 'package:flutter/material.dart';
import 'package:powersync/powersync.dart';

import 'powersync/powersync.dart';

late PowerSyncDatabase db;

Future main() async {
  WidgetsFlutterBinding.ensureInitialized();

  await openDatabase();

  runApp(const DemoApp());
}

class DemoApp extends StatefulWidget {
  const DemoApp({super.key});

  @override
  State<DemoApp> createState() => _DemoAppState();
}

class _DemoAppState extends State<DemoApp> {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Demo',
      home:
          // TODO: Implement your own UI here.
          // You could listen for authentication state changes to connect or disconnect from PowerSync
          StreamBuilder(
        stream: // TODO: some stream,
        builder: (ctx, snapshot) {
          // TODO: implement your own condition here
          if ( ... ) {
            // Uses the backend connector that will be created in the next step
            db.connect(connector: MyBackendConnector());
            // TODO: implement your own UI here
          }
        },
      )
    );
  }
}
```

### 3. Integrate with your Backend

The PowerSync backend connector provides the connection between your application backend and the PowerSync client-side managed SQLite database. It is used to:

1. [Retrieve an auth token](/installation/authentication-setup) to connect to the PowerSync instance.
2. [Apply local changes](/installation/app-backend-setup/writing-client-changes) on your backend application server (and from there, to your backend database).

Accordingly, the connector must implement two methods:

1. [PowerSyncBackendConnector.fetchCredentials](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncBackendConnector/fetchCredentials.html) - This is called every couple of minutes and is used to obtain credentials for your app backend API. -> See [Authentication Setup](/installation/authentication-setup) for instructions on how the credentials should be generated.
2. [PowerSyncBackendConnector.uploadData](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncBackendConnector/uploadData.html) - Use this to upload client-side changes to your app backend. -> See [Writing Client Changes](/installation/app-backend-setup/writing-client-changes) for considerations on the app backend implementation.
**Example**: ```dart lib/powersync/my_backend_connector.dart import 'package:powersync/powersync.dart'; class MyBackendConnector extends PowerSyncBackendConnector { PowerSyncDatabase db; MyBackendConnector(this.db); @override Future fetchCredentials() async { // Implement fetchCredentials to obtain a JWT from your authentication service // If you're using Supabase or Firebase, you can re-use the JWT from those clients, see // - https://docs.powersync.com/installation/authentication-setup/supabase-auth // - https://docs.powersync.com/installation/authentication-setup/firebase-auth // See example implementation here: https://pub.dev/documentation/powersync/latest/powersync/DevConnector/fetchCredentials.html return PowerSyncCredentials( endpoint: 'https://xxxxxx.powersync.journeyapps.com', // Use a development token (see Authentication Setup https://docs.powersync.com/installation/authentication-setup/development-tokens) to get up and running quickly token: 'An authentication token' ); } // Implement uploadData to send local changes to your backend service // You can omit this method if you only want to sync data from the server to the client // See example implementation here: https://docs.powersync.com/client-sdk-references/flutter#3-integrate-with-your-backend @override Future uploadData(PowerSyncDatabase database) async { // This function is called whenever there is data to upload, whether the // device is online or offline. // If this call throws an error, it is retried periodically. final transaction = await database.getNextCrudTransaction(); if (transaction == null) { return; } // The data that needs to be changed in the remote db for (var op in transaction.crud) { switch (op.op) { case UpdateType.put: // TODO: Instruct your backend API to CREATE a record case UpdateType.patch: // TODO: Instruct your backend API to PATCH a record case UpdateType.delete: //TODO: Instruct your backend API to DELETE a record } } // Completes the transaction and moves onto the next one await transaction.complete(); } } ``` ## Using PowerSync: CRUD functions Once the PowerSync instance is configured you can start using the SQLite DB functions. The most commonly used CRUD functions to interact with your SQLite data are: * [PowerSyncDatabase.get](/client-sdk-references/flutter#fetching-a-single-item) - get (SELECT) a single row from a table. * [PowerSyncDatabase.getAll](/client-sdk-references/flutter#querying-items-powersync.getall) - get (SELECT) a set of rows from a table. * [PowerSyncDatabase.watch](/client-sdk-references/flutter#watching-queries-powersync.watch) - execute a read query every time source tables are modified. * [PowerSyncDatabase.execute](/client-sdk-references/flutter#mutations-powersync.execute) - execute a write (INSERT/UPDATE/DELETE) query. For the following examples, we will define a `TodoList` model class that represents a List of todos. 
```dart lib/models/todolist.dart
/// This is a simple model class representing a TodoList
class TodoList {
  final String id;
  final String name;
  final DateTime createdAt;
  final DateTime updatedAt;

  TodoList({
    required this.id,
    required this.name,
    required this.createdAt,
    required this.updatedAt,
  });

  factory TodoList.fromRow(Map<String, dynamic> row) {
    return TodoList(
      id: row['id'],
      name: row['name'],
      createdAt: DateTime.parse(row['created_at']),
      updatedAt: DateTime.parse(row['updated_at']),
    );
  }
}
```

### Fetching a Single Item

The [get](https://pub.dev/documentation/powersync/latest/sqlite_async/SqliteQueries/get.html) method executes a read-only (SELECT) query and returns a single result. It throws an exception if no result is found. Use [getOptional](https://pub.dev/documentation/powersync/latest/sqlite_async/SqliteQueries/getOptional.html) to return a single optional result (returns `null` if no result is found).

The following is an example of selecting a list item by ID:

```dart lib/widgets/lists_widget.dart
import '../main.dart';
import '../models/todolist.dart';

Future<TodoList> find(id) async {
  final result = await db.get('SELECT * FROM lists WHERE id = ?', [id]);
  return TodoList.fromRow(result);
}
```

### Querying Items (PowerSync.getAll)

The [getAll](https://pub.dev/documentation/powersync/latest/sqlite_async/SqliteQueries/getAll.html) method returns a set of rows from a table.

```dart lib/widgets/lists_widget.dart
import 'package:powersync/sqlite3.dart';

import '../main.dart';

Future<List<String>> getLists() async {
  ResultSet results = await db.getAll('SELECT id FROM lists WHERE id IS NOT NULL');
  List<String> ids = results.map((row) => row['id'] as String).toList();
  return ids;
}
```

### Watching Queries (PowerSync.watch)

The [watch](https://pub.dev/documentation/powersync/latest/sqlite_async/SqliteQueries/watch.html) method executes a read query whenever a change to a dependent table is made.

```dart lib/widgets/todos_widget.dart {13-17}
import 'package:flutter/material.dart';

import '../main.dart';
import '../models/todolist.dart';

// Example Todos widget
class TodosWidget extends StatelessWidget {
  const TodosWidget({super.key});

  @override
  Widget build(BuildContext context) {
    return StreamBuilder(
      // You can watch any SQL query
      stream: db
          .watch('SELECT * FROM lists ORDER BY created_at, id')
          .map((results) {
        return results.map(TodoList.fromRow).toList(growable: false);
      }),
      builder: (context, snapshot) {
        if (snapshot.hasData) {
          // TODO: implement your own UI here based on the result set
          return ...;
        } else {
          return const Center(child: CircularProgressIndicator());
        }
      },
    );
  }
}
```

### Mutations (PowerSync.execute)

The [execute](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncDatabase/execute.html) method can be used for executing single SQLite write statements.

```dart lib/widgets/todos_widget.dart {12-15}
import 'package:flutter/material.dart';

import '../main.dart';

// Example Todos widget
class TodosWidget extends StatelessWidget {
  const TodosWidget({super.key});

  @override
  Widget build(BuildContext context) {
    return FloatingActionButton(
      onPressed: () async {
        await db.execute(
          'INSERT INTO lists(id, created_at, name, owner_id) VALUES(uuid(), datetime(), ?, ?)',
          ['name', '123'],
        );
      },
      tooltip: '+',
      child: const Icon(Icons.add),
    );
  }
}
```

## Configure Logging

Since version 1.1.2 of the SDK, logging is enabled by default and outputs logs from PowerSync to the console in debug mode.

## Additional Usage Examples

See [Usage Examples](/client-sdk-references/flutter/usage-examples) for further examples of the SDK.
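Expanding on the *Configure Logging* note above, the sketch below shows one way to route the SDK's log output through your own handler. It assumes the `logger` parameter accepts a `Logger` from `package:logging` (the same parameter used by the unit-testing and Riverpod examples elsewhere in these docs); the logger name and formatting are arbitrary.

```dart
import 'package:logging/logging.dart';
import 'package:powersync/powersync.dart';

PowerSyncDatabase openWithCustomLogging(Schema schema, String path) {
  // A detached logger keeps PowerSync output separate from your app's root logger.
  final logger = Logger.detached('MyPowerSync')..level = Level.INFO;

  logger.onRecord.listen((record) {
    // Forward to the console, a file, or a crash reporter as needed.
    print('[${record.level.name}] ${record.loggerName}: ${record.message}');
  });

  return PowerSyncDatabase(schema: schema, path: path, logger: logger);
}
```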
## ORM Support See [Flutter ORM Support](/client-sdk-references/flutter/flutter-orm-support) for details. ## Troubleshooting See [Troubleshooting](/resources/troubleshooting) for pointers to debug common issues. # API Reference Source: https://docs.powersync.com/client-sdk-references/flutter/api-reference # Encryption Source: https://docs.powersync.com/client-sdk-references/flutter/encryption # Flutter ORM Support (Alpha) Source: https://docs.powersync.com/client-sdk-references/flutter/flutter-orm-support An introduction to using ORMs with PowerSync is available on our blog [here](https://www.powersync.com/blog/using-orms-with-powersync). ORM support is available via the following package (currently in an alpha release): This package enables using the [Drift](https://pub.dev/packages/drift) persistence library (ORM) with the PowerSync Flutter SDK. The Drift integration gives Flutter developers the flexibility to write queries in either Dart or SQL. Importantly, it supports propagating change notifications from the PowerSync side to Drift, which is necessary for streaming queries. The use of this package is recommended for Flutter developers who already know Drift, or specifically want the benefits of an ORM for their PowerSync projects. ### Example implementation An example project which showcases setting up and using Drift with PowerSync is available here: ### Support for Other Flutter ORMs Other ORMs for Flutter, like [Floor](https://pinchbv.github.io/floor/), are not currently supported. It is technically possible to open a separate connection to the same database file using Floor but there are two big caveats to that: **Write locks** Every write transaction (or write statement) will lock the database for other writes for the duration of the transaction. While transactions are typically short, if multiple happen to run at the same time they may fail with a SQLITE\_BUSY or similar error. **External modifications** Often, ORMs only detect notifications made using the same library. In order to support streaming queries, PowerSync requires the ORM to allow external modifications to trigger the same change notifications, meaning streaming queries are unlikely to work out-of-the-box. # Flutter Web Support (Beta) Source: https://docs.powersync.com/client-sdk-references/flutter/flutter-web-support Web support for Flutter in version `^1.9.0` is currently in a **beta** release. It is functionally ready for production use, provided that you've tested your use cases. Please see the [Limitations](#limitations) detailed below. ## Demo app The easiest way to test Flutter Web support is to run the [Supabase Todo-List](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist) demo app: 1. Clone the [powersync.dart](https://github.com/powersync-ja/powersync.dart/tree/main) repo. 1. **Note**: If you are an existing user updating to the latest code after a git pull, run `melos exec 'flutter pub upgrade'` in the repo's root and make sure it succeeds. 2. Run `melos prepare` in the repo's root 3. cd into the `demos/supabase-todolist` folder 4. If you haven’t yet: `cp lib/app_config_template.dart lib/app_config.dart` (optionally update this config with your own Supabase and PowerSync project details). 5. 
Run `flutter run -d chrome`

## Installing PowerSync in your own project

Install the [latest version](https://pub.dev/packages/powersync/versions) of the package, for example:

```bash
flutter pub add powersync:'^1.9.0'
```

### Additional config

#### Assets

Web support requires `sqlite3.wasm` and worker (`powersync_db.worker.js` and `powersync_sync.worker.js`) assets to be served from the web application. They can be downloaded to the web directory by running the following command in your application's root folder:

```bash
dart run powersync:setup_web
```

The same code is used for initializing native and web `PowerSyncDatabase` clients.

#### OPFS for improved performance

This SDK supports different storage modes of the SQLite database with varying levels of performance and compatibility:

* **IndexedDB**: Highly compatible with different browsers, but performance is slow.
* **OPFS** (Origin-Private File System): Significantly faster but requires additional configuration.

OPFS is the preferred mode when it is available. Otherwise, database storage falls back to IndexedDB.

Enabling OPFS requires adding two headers to the HTTP server response when a client requests the Flutter web application:

* `Cross-Origin-Opener-Policy`: Needs to be set to `same-origin`.
* `Cross-Origin-Embedder-Policy`: Needs to be set to `require-corp`.

When running the app locally, you can use the following command to include the required headers:

```bash
flutter run -d chrome --web-header "Cross-Origin-Opener-Policy=same-origin" --web-header "Cross-Origin-Embedder-Policy=require-corp"
```

When serving a Flutter Web app in production, the [Flutter docs](https://docs.flutter.dev/deployment/web#building-the-app-for-release) recommend building the web app with `flutter build web`, then serving the content with an HTTP server. The server should be configured to use the above headers.

**Further reading**: [Drift](https://drift.simonbinder.eu/) uses the same packages as our [`sqlite_async`](https://github.com/powersync-ja/sqlite_async.dart) package under the hood, and has excellent documentation for how the web filesystem is selected. See [here](https://drift.simonbinder.eu/platforms/web/) for web compatibility notes and [here](https://drift.simonbinder.eu/platforms/web/#additional-headers) for additional notes on the required web headers.

## Limitations

The API for Web is essentially the same as for native platforms; however, some features within `PowerSyncDatabase` clients are not available.

### Imports

Flutter Web does not support importing directly from `sqlite3.dart` as it uses `dart:ffi`. Change imports from:

```dart
import 'package:powersync/sqlite3.dart';
```

to:

```dart
import 'package:powersync/sqlite3_common.dart';
```

in code which needs to run on the Web platform. Isolated native-specific code can still import from `sqlite3.dart`.

### Database connections

Web database connections do not support concurrency. A single database connection is used. `readLock` and `writeLock` contexts do not implement checks for preventing writable queries in read connections and vice-versa.

Direct access to the synchronous `CommonDatabase` (`sqlite.Database` equivalent for web) connection is not available. `computeWithDatabase` is not available on web.

# State Management

Source: https://docs.powersync.com/client-sdk-references/flutter/state-management

Guidance on using PowerSync with popular Flutter state management libraries.
Our [demo apps](/resources/demo-apps-example-projects) for Flutter are intentionally kept simple to put a focus on demonstrating PowerSync APIs. Instead of using heavy state management solutions, they use simple global fields to make the PowerSync database accessible to widgets. When adopting PowerSync, you might be interested in using a more sophisticated approach for state management. This section explains how PowerSync's Flutter SDK integrates with popular packages for state management. Adopting PowerSync can simplify the architecture of your app by using a local SQLite database as the single source of truth for all data. For a general discussion on how PowerSync fits into modern app architecture on Flutter, also see [this blogpost](https://dinkomarinac.dev/building-local-first-flutter-apps-with-riverpod-drift-and-powersync). PowerSync exposes database queries with the standard `Future` and `Stream` classes from `dart:async`. Given how widely used these are in the Dart ecosystem, PowerSync works well with all popular approaches for state management, such as: 1. Providers with `package:provider`: Create your database as a `Provider` and expose watched queries to child widgets with `StreamProvider`! The provider for databases should `close()` the database in `dispose`. 2. Providers with `package:riverpod`: We mention relevant snippets [below](#riverpod). 3. Dependency injection with `package:get_it`: PowerSync databases can be registered with `registerSingletonAsync`. Again, make sure to `close()` the database in the `dispose` callback. 4. The BLoC pattern with the `bloc` package: You can easily listen to watched queries in Cubits (although, if you find your Blocs and Cubits becoming trivial wrappers around database streams, consider just `watch()`ing database queries in widgets directly. That doesn't make your app [less testable](/client-sdk-references/flutter/unit-testing)!). To simplify state management, avoid the use of hydrated blocs and cubits for state that depends on database queries. With PowerSync, regular data is already available locally and doesn't need a second local cache. ## Riverpod We have a [complete example](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist-drift) on using PowerSync with modern Flutter libraries like Riverpod, Drift and `auto_route`. A good way to open PowerSync databases with Riverpod is to use an async provider. You can also manage your `connect` and `disconnect` calls there, for instance by listening to the authentication state: ```dart @Riverpod(keepAlive: true) Future powerSyncInstance(Ref ref) async { final db = PowerSyncDatabase( schema: schema, path: await _getDatabasePath(), logger: attachedLogger, ); await db.initialize(); // TODO: Listen for auth changes and connect() the database here. ref.listen(yourAuthProvider, (prev, next) { if (next.isAuthenticated && !prev.isAuthenticated) { db.connect(connector: MyConnector()); } // ... 
}); ref.onDispose(db.close); return db; } ``` ### Running queries To expose auto-updating query results, use a `StreamProvider` reading the database: ```dart final _lists = StreamProvider((ref) async* { final database = await ref.read(powerSyncInstanceProvider.future); yield* database.watch('SELECT * FROM lists'); }); ``` ### Waiting for sync If you were awaiting `waitForFirstSync` before, you can keep doing that: ```dart final db = await ref.read(powerSyncInstanceProvider.future); await db.waitForFirstSync(); ``` Alternatively, you can expose the sync status as a provider and use that to determine whether the synchronization has completed: ```dart final syncStatus = statefulProvider((ref, change) { final status = Stream.fromFuture(ref.read(powerSyncInstanceProvider.future)) .asyncExpand((db) => db.statusStream); final sub = status.listen(change); ref.onDispose(sub.cancel); return const SyncStatus(); }); @riverpod bool didCompleteSync(Ref ref, [BucketPriority? priority]) { final status = ref.watch(syncStatus); if (priority != null) { return status.statusForPriority(priority).hasSynced ?? false; } else { return status.hasSynced ?? false; } } final class MyWidget extends ConsumerWidget { const MyWidget({super.key}); @override Widget build(BuildContext context, WidgetRef ref) { final didSync = ref.watch(didCompleteSyncProvider()); if (!didSync) { return const Text('Busy with sync...'); } // ... content after first sync } } ``` ### Attachment queue If you're using the attachment queue helper to synchronize media assets, you can also wrap that in a provider: ```dart @Riverpod(keepAlive: true) Future attachmentQueue(Ref ref) async { final db = await ref.read(powerSyncInstanceProvider.future); final queue = YourAttachmentQueue(db, remoteStorage); await queue.init(); return queue; } ``` Reading and awaiting this provider can then be used to show attachments: ```dart final class PhotoWidget extends ConsumerWidget { final TodoItem todo; const PhotoWidget({super.key, required this.todo}); @override Widget build(BuildContext context, WidgetRef ref) { final photoState = ref.watch(_getPhotoStateProvider(todo.photoId)); if (!photoState.hasValue) { return Container(); } final data = photoState.value; if (data == null) { return Container(); } String? filePath = data.photoPath; bool fileIsDownloading = !data.fileExists; bool fileArchived = data.attachment?.state == AttachmentState.archived.index; if (fileArchived) { return Column( crossAxisAlignment: CrossAxisAlignment.center, mainAxisAlignment: MainAxisAlignment.center, children: [ const Text("Unavailable"), const SizedBox(height: 8), ], ); } if (fileIsDownloading) { return const Text("Downloading..."); } File imageFile = File(filePath!); int lastModified = imageFile.existsSync() ? imageFile.lastModifiedSync().millisecondsSinceEpoch : 0; Key key = ObjectKey('$filePath:$lastModified'); return Image.file( key: key, imageFile, width: 50, height: 50, ); } } class _ResolvedPhotoState { String? photoPath; bool fileExists; Attachment? attachment; _ResolvedPhotoState( {required this.photoPath, required this.fileExists, this.attachment}); } @riverpod Future<_ResolvedPhotoState> _getPhotoState(Ref ref, String? 
photoId) async { if (photoId == null) { return _ResolvedPhotoState(photoPath: null, fileExists: false); } final queue = await ref.read(attachmentQueueProvider.future); final photoPath = await queue.getLocalUri('$photoId.jpg'); bool fileExists = await File(photoPath).exists(); final row = await queue.db .getOptional('SELECT * FROM attachments_queue WHERE id = ?', [photoId]); if (row != null) { Attachment attachment = Attachment.fromRow(row); return _ResolvedPhotoState( photoPath: photoPath, fileExists: fileExists, attachment: attachment); } return _ResolvedPhotoState( photoPath: photoPath, fileExists: fileExists, attachment: null); } ``` # Unit Testing Source: https://docs.powersync.com/client-sdk-references/flutter/unit-testing Guidelines for unit testing with PowerSync For unit-testing your projects using PowerSync (e.g. testing whether your queries run as expected) you will need the `powersync-sqlite-core` binary in your project's root directory. 1. Download the PowerSync SQLite binary * Go to the [Releases](https://github.com/powersync-ja/powersync-sqlite-core/releases) for `powersync-sqlite-core`. * Download the binary compatible with your OS. 2. Rename the binary * Rename the binary by removing the architecture suffix. * Example: `powersync_x64.dll` to `powersync.dll` * Example: `libpowersync_aarch64.dylib` to `libpowersync.dylib` * Example: `libpowersync_x64.so` to `libpowersync.so` 3. Place the binary in your project * Move the renamed binary to the root directory of your project. This snippet below is only included as a guide to unit testing in Flutter with PowerSync. For more information refer to the [official Flutter unit testing documentation](https://docs.flutter.dev/cookbook/testing/unit/introduction). ```dart import 'dart:io'; import 'package:powersync/powersync.dart'; import 'package:path/path.dart'; const schema = Schema([ Table('customers', [Column.text('name'), Column.text('email')]) ]); late PowerSyncDatabase testDB; String getTestDatabasePath() async { const dbFilename = 'powersync-test.db'; final dir = Directory.current.absolute.path; return join(dir, dbFilename); } Future openTestDatabase() async { testDB = PowerSyncDatabase( schema: schema, path: await getTestDatabasePath(), logger: testLogger, ); await testDB.initialize(); } test('INSERT', () async { await testDB.execute( 'INSERT INTO customers(name, email) VALUES(?, ?)', ['John Doe', 'john@hotmail.com']); final results = await testDB.getAll('SELECT * FROM customers'); expect(results.length, 1); expect(results, ['John Doe', 'john@hotmail.com']); }); ``` #### If you have trouble with loading the extension, confirm the following Ensure that your SQLite3 binary install on your system has extension loading enabled. You can confirm this by doing the following * Run `sqlite3` in your command-line interface. * In the sqlite3 prompt run `PRAGMA compile_options;` * Check the output for the option `ENABLE_LOAD_EXTENSION`. * If you see `ENABLE_LOAD_EXTENSION`, it means extension loading is enabled. If the above steps don't work, you can also confirm if extension loading is enabled by trying to load the extension in your command-line interface. * Run `sqlite3` in your command-line interface. * Run `.load /path/to/file/libpowersync.dylib` (macOS) or `.load /path/to/file/libpowersync.so` (Linux) or `.load /path/to/file/powersync.dll` (Windows). * If this runs without error, then extension loading is enabled. 
If it fails with an error message about extension loading being disabled, then it’s not enabled in your SQLite installation. If it is not enabled, you will have to download a compiled SQLite binary with extension loading enabled (e.g. using Homebrew) or [compile SQLite](https://www.sqlite.org/howtocompile.html) with extension loading enabled and include it in your project's folder alongside the extension. # Usage Examples Source: https://docs.powersync.com/client-sdk-references/flutter/usage-examples Code snippets and guidelines for common scenarios ## Using transactions to group changes Read and write transactions present a context where multiple changes can be made then finally committed to the DB or rolled back. This ensures that either all the changes get persisted, or no change is made to the DB (in the case of a rollback or exception). The [writeTransaction(callback)](https://pub.dev/documentation/powersync/latest/sqlite_async/SqliteQueries/writeTransaction.html) method combines all writes into a single transaction, only committing to persistent storage once. ```dart deleteList(SqliteDatabase db, String id) async { await db.writeTransaction((tx) async { // Delete the main list await tx.execute('DELETE FROM lists WHERE id = ?', [id]); // Delete any children of the list await tx.execute('DELETE FROM todos WHERE list_id = ?', [id]); }); } ``` Also see [readTransaction(callback)](https://pub.dev/documentation/powersync/latest/sqlite_async/SqliteQueries/readTransaction.html) . ## Subscribe to changes in data Use [watch](https://pub.dev/documentation/powersync/latest/sqlite_async/SqliteQueries/watch.html) to watch for changes to the dependent tables of any SQL query. ```dart StreamBuilder( // You can watch any SQL query stream: db.watch('SELECT * FROM customers order by id asc'), builder: (context, snapshot) { if (snapshot.hasData) { // TODO: implement your own UI here based on the result set return ...; } else { return const Center(child: CircularProgressIndicator()); } }, ) ``` ## Insert, update, and delete data in the local database Use [execute](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncDatabase/execute.html) to run INSERT, UPDATE or DELETE queries. ```dart FloatingActionButton( onPressed: () async { await db.execute( 'INSERT INTO customers(id, name, email) VALUES(uuid(), ?, ?)', ['Fred', 'fred@example.org'], ); }, tooltip: '+', child: const Icon(Icons.add), ); ``` ## Send changes in local data to your backend service Override [uploadData](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncBackendConnector/uploadData.html) to send local updates to your backend service. ```dart @override Future uploadData(PowerSyncDatabase database) async { final batch = await database.getCrudBatch(); if (batch == null) return; for (var op in batch.crud) { switch (op.op) { case UpdateType.put: // Send the data to your backend service // Replace `_myApi` with your own API client or service await _myApi.put(op.table, op.opData!); break; default: // TODO: implement the other operations (patch, delete) break; } } await batch.complete(); } ``` ## Accessing PowerSync connection status information Use [SyncStatus](https://pub.dev/documentation/powersync/latest/powersync/SyncStatus-class.html) and register an event listener with [statusStream](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncDatabase/statusStream.html) to listen for status changes to your PowerSync instance. 
```dart class _StatusAppBarState extends State { late SyncStatus _connectionState; StreamSubscription? _syncStatusSubscription; @override void initState() { super.initState(); _connectionState = db.currentStatus; _syncStatusSubscription = db.statusStream.listen((event) { setState(() { _connectionState = db.currentStatus; }); }); } @override void dispose() { super.dispose(); _syncStatusSubscription?.cancel(); } @override Widget build(BuildContext context) { final statusIcon = _getStatusIcon(_connectionState); return AppBar( title: Text(widget.title), actions: [ ... statusIcon ], ); } } Widget _getStatusIcon(SyncStatus status) { if (status.anyError != null) { // The error message is verbose, could be replaced with something // more user-friendly if (!status.connected) { return _makeIcon(status.anyError!.toString(), Icons.cloud_off); } else { return _makeIcon(status.anyError!.toString(), Icons.sync_problem); } } else if (status.connecting) { return _makeIcon('Connecting', Icons.cloud_sync_outlined); } else if (!status.connected) { return _makeIcon('Not connected', Icons.cloud_off); } else if (status.uploading && status.downloading) { // The status changes often between downloading, uploading and both, // so we use the same icon for all three return _makeIcon('Uploading and downloading', Icons.cloud_sync_outlined); } else if (status.uploading) { return _makeIcon('Uploading', Icons.cloud_sync_outlined); } else if (status.downloading) { return _makeIcon('Downloading', Icons.cloud_sync_outlined); } else { return _makeIcon('Connected', Icons.cloud_queue); } } ``` ## Wait for the initial sync to complete Use the [hasSynced](https://pub.dev/documentation/powersync/latest/powersync/SyncStatus/hasSynced.html) property (available since version 1.5.1 of the SDK) and register a listener to indicate to the user whether the initial sync is in progress. ```dart // Example of using hasSynced to show whether the first sync has completed /// Global reference to the database final PowerSyncDatabase db; bool hasSynced = false; StreamSubscription? _syncStatusSubscription; // Use the exposed statusStream Stream watchSyncStatus() { return db.statusStream; } @override void initState() { super.initState(); _syncStatusSubscription = watchSyncStatus.listen((status) { setState(() { hasSynced = status.hasSynced ?? false; }); }); } @override Widget build(BuildContext context) { return Text(hasSynced ? 'Initial sync completed!' : 'Busy with initial sync...'); } // Don't forget to dispose of stream subscriptions when the view is disposed void dispose() { super.dispose(); _syncStatusSubscription?.cancel(); } ``` For async use cases, see the [waitForFirstSync](https://pub.dev/documentation/powersync/latest/powersync/PowerSyncDatabase/waitForFirstSync.html) method which returns a promise that resolves once the first full sync has completed. ## Report sync download progress You can show users a progress bar when data downloads using the `downloadProgress` property from the [SyncStatus](https://pub.dev/documentation/powersync/latest/powersync/SyncStatus/downloadProgress.html) class. `downloadProgress.downloadedFraction` gives you a value from 0.0 to 1.0 representing the total sync progress. This is especially useful for long-running initial syncs. 
As an example, this widget renders a progress bar when a download is active: ```dart import 'package:flutter/material.dart'; import 'package:powersync/powersync.dart' hide Column; class SyncProgressBar extends StatelessWidget { final PowerSyncDatabase db; /// When set, show progress towards the [BucketPriority] instead of towards /// the full sync. final BucketPriority? priority; const SyncProgressBar({ super.key, required this.db, this.priority, }); @override Widget build(BuildContext context) { return StreamBuilder( stream: db.statusStream, initialData: db.currentStatus, builder: (context, snapshot) { final status = snapshot.requireData; final progress = switch (priority) { null => status.downloadProgress, var priority? => status.downloadProgress?.untilPriority(priority), }; if (progress != null) { return Center( child: Column( children: [ const Text('Busy with sync...'), LinearProgressIndicator(value: progress?.downloadedFraction), Text( '${progress.downloadedOperations} out of ${progress.totalOperations}') ], ), ); } else { return const SizedBox.shrink(); } }, ); } } ``` Also see: * [SyncDownloadProgress API](https://pub.dev/documentation/powersync/latest/powersync/SyncDownloadProgress-extension-type.html) * [Demo component](https://github.com/powersync-ja/powersync.dart/blob/main/demos/supabase-todolist/lib/widgets/guard_by_sync.dart) # Introduction Source: https://docs.powersync.com/client-sdk-references/introduction PowerSync supports multiple client-side frameworks with official SDKs Select your client framework for the full SDK reference, getting started instructions and example code: # JavaScript Web Source: https://docs.powersync.com/client-sdk-references/javascript-web Full SDK reference for using PowerSync in JavaScript Web clients This SDK is distributed via NPM [\[External link\].](https://www.npmjs.com/package/@powersync/web) Refer to packages/web in the powersync-js repo on GitHub. Full API reference for the PowerSync SDK [\[External link\].](https://powersync-ja.github.io/powersync-js/web-sdk) Gallery of example projects/demo apps built with JavaScript Web stacks and PowerSync. ### SDK Features * **Real-time streaming of database changes**: Changes made by one user are instantly streamed to all other users with access to that data. This keeps clients automatically in sync without manual polling or refresh logic. * **Direct access to a local SQLite database**: Data is stored locally, so apps can read and write instantly without network calls. This enables offline support and faster user interactions. * **Asynchronous background execution**: The SDK performs database operations in the background to avoid blocking the application’s main thread. This means that apps stay responsive, even during heavy data activity. * **Query subscriptions for live updates**: The SDK supports query subscriptions that automatically push real-time updates to client applications as data changes, keeping your UI reactive and up to date. * **Automatic schema management**: PowerSync syncs schemaless data and applies a client-defined schema using SQLite views. This architecture means that PowerSync SDKs can handle schema changes gracefully without requiring explicit migrations on the client-side. 
## Installation Add the [PowerSync Web NPM package](https://www.npmjs.com/package/@powersync/web) to your project: ```bash npm install @powersync/web ``` ```bash yarn add @powersync/web ``` ```bash pnpm install @powersync/web ``` **Required peer dependencies** This SDK currently requires [`@journeyapps/wa-sqlite`](https://www.npmjs.com/package/@journeyapps/wa-sqlite) as a peer dependency. Install it in your app with: ```bash npm install @journeyapps/wa-sqlite ``` ```bash yarn add @journeyapps/wa-sqlite ``` ```bash pnpm install @journeyapps/wa-sqlite ``` By default, this SDK connects to a PowerSync instance via WebSocket (from `@powersync/web@1.6.0`) or HTTP streaming (before `@powersync/web@1.6.0`). See [Developer Notes](/client-sdk-references/javascript-web#developer-notes) for more details on connection methods. ## Getting Started Before implementing the PowerSync SDK in your project, make sure you have completed these steps: * Signed up for a PowerSync Cloud account ([here](https://accounts.journeyapps.com/portal/powersync-signup?s=docs)) or [self-host PowerSync](/self-hosting/getting-started). * [Configured your backend database](/installation/database-setup) and connected it to your PowerSync instance. * [Installed](/client-sdk-references/javascript-web#installation) the PowerSync Web SDK. ### 1. Define the Schema The first step is defining the schema for the local SQLite database. This schema represents a "view" of the downloaded data. No migrations are required — the schema is applied directly when the local PowerSync database is constructed (as we'll show in the next step). **Generate schema automatically** In the [dashboard](/usage/tools/powersync-dashboard), the schema can be generated based off your sync rules by right-clicking on an instance and selecting **Generate client-side schema**. Similar functionality exists in the [CLI](/usage/tools/cli). The types available are `text`, `integer` and `real`. These should map directly to the values produced by the [Sync Rules](/usage/sync-rules). If a value doesn't match, it is cast automatically. For details on how Postgres types are mapped to the types below, see the section on [Types](/usage/sync-rules/types) in the *Sync Rules* documentation. **Example**: ```js // AppSchema.ts import { column, Schema, Table } from '@powersync/web'; const lists = new Table({ created_at: column.text, name: column.text, owner_id: column.text }); const todos = new Table( { list_id: column.text, created_at: column.text, completed_at: column.text, description: column.text, created_by: column.text, completed_by: column.text, completed: column.integer }, { indexes: { list: ['list_id'] } } ); export const AppSchema = new Schema({ todos, lists }); // For types export type Database = (typeof AppSchema)['types']; export type TodoRecord = Database['todos']; // OR: // export type Todo = RowType; export type ListRecord = Database['lists']; ``` **Note**: No need to declare a primary key `id` column, as PowerSync will automatically create this. ### 2. Instantiate the PowerSync Database Next, you need to instantiate the PowerSync database — this is the core managed database. Its primary functions are to record all changes in the local database, whether online or offline. In addition, it automatically uploads changes to your app backend when connected. 
**Example**: ```js import { PowerSyncDatabase } from '@powersync/web'; import { Connector } from './Connector'; import { AppSchema } from './AppSchema'; export const db = new PowerSyncDatabase({ // The schema you defined in the previous step schema: AppSchema, database: { // Filename for the SQLite database — it's important to only instantiate one instance per file. dbFilename: 'powersync.db' // Optional. Directory where the database file is located.' // dbLocation: 'path/to/directory' } }); ``` **SDK versions lower than 1.2.0** In SDK versions lower than 1.2.0, you will need to use the deprecated [WASQLitePowerSyncDatabaseOpenFactory](https://powersync-ja.github.io/powersync-js/web-sdk/classes/WASQLitePowerSyncDatabaseOpenFactory) syntax to instantiate the database. Once you've instantiated your PowerSync database, you will need to call the [connect()](https://powersync-ja.github.io/powersync-js/web-sdk/classes/AbstractPowerSyncDatabase#connect) method to activate it. ```js export const setupPowerSync = async () => { // Uses the backend connector that will be created in the next section const connector = new Connector(); db.connect(connector); }; ``` ### 3. Integrate with your Backend The PowerSync backend connector provides the connection between your application backend and the PowerSync client-slide managed SQLite database. It is used to: 1. Retrieve an auth token to connect to the PowerSync instance. 2. Apply local changes on your backend application server (and from there, to your backend database) Accordingly, the connector must implement two methods: 1. [PowerSyncBackendConnector.fetchCredentials](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L16) - This is called every couple of minutes and is used to obtain credentials for your app backend API. -> See [Authentication Setup](/installation/authentication-setup) for instructions on how the credentials should be generated. 2. [PowerSyncBackendConnector.uploadData](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L24) - Use this to upload client-side changes to your app backend. -> See [Writing Client Changes](/installation/app-backend-setup/writing-client-changes) for considerations on the app backend implementation. **Example**: ```js import { UpdateType } from '@powersync/web'; export class Connector { async fetchCredentials() { // Implement fetchCredentials to obtain a JWT from your authentication service. // See https://docs.powersync.com/installation/authentication-setup // If you're using Supabase or Firebase, you can re-use the JWT from those clients, see // - https://docs.powersync.com/installation/authentication-setup/supabase-auth // - https://docs.powersync.com/installation/authentication-setup/firebase-auth return { endpoint: '[Your PowerSync instance URL or self-hosted endpoint]', // Use a development token (see Authentication Setup https://docs.powersync.com/installation/authentication-setup/development-tokens) to get up and running quickly token: 'An authentication token' }; } async uploadData(database) { // Implement uploadData to send local changes to your backend service. 
// You can omit this method if you only want to sync data from the database to the client // See example implementation here: https://docs.powersync.com/client-sdk-references/javascript-web#3-integrate-with-your-backend } } ``` ## Using PowerSync: CRUD functions Once the PowerSync instance is configured you can start using the SQLite DB functions. The most commonly used CRUD functions to interact with your SQLite data are: * [PowerSyncDatabase.get](/client-sdk-references/javascript-web#fetching-a-single-item) - get (SELECT) a single row from a table. * [PowerSyncDatabase.getAll](/client-sdk-references/javascript-web#querying-items-powersync.getall) - get (SELECT) a set of rows from a table. * [PowerSyncDatabase.watch](/client-sdk-references/javascript-web#watching-queries-powersync.watch) - execute a read query every time source tables are modified. * [PowerSyncDatabase.execute](/client-sdk-references/javascript-web#mutations-powersync.execute) - execute a write (INSERT/UPDATE/DELETE) query. ### Fetching a Single Item The [get](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#get) method executes a read-only (SELECT) query and returns a single result. It throws an exception if no result is found. Use [getOptional](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#getoptional) to return a single optional result (returns `null` if no result is found).

```js
// Find a list item by ID
export const findList = async (id) => {
  const result = await db.get('SELECT * FROM lists WHERE id = ?', [id]);
  return result;
}
```

### Querying Items (PowerSync.getAll) The [getAll](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#getall) method returns a set of rows from a table.

```js
// Get all list IDs
export const getLists = async () => {
  const results = await db.getAll('SELECT * FROM lists');
  return results;
}
```

### Watching Queries (PowerSync.watch) The [watch](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#watch) method executes a read query whenever a change to a dependent table is made.

```js
// Watch changes to lists
const abortController = new AbortController();

export const watchLists = async (onUpdate) => {
  for await (const update of PowerSync.watch(
    'SELECT * from lists',
    [],
    { signal: abortController.signal }
  )) {
    onUpdate(update);
  }
}
```

### Mutations (PowerSync.execute, PowerSync.writeTransaction) The [execute](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#execute) method can be used for executing single SQLite write statements.

```js
// Delete a list item by ID
export const deleteList = async (id) => {
  const result = await db.execute('DELETE FROM lists WHERE id = ?', [id]);
  return result;
}

// OR: using a transaction
const deleteList = async (id) => {
  await db.writeTransaction(async (tx) => {
    // Delete associated todos
    await tx.execute(`DELETE FROM ${TODOS_TABLE} WHERE list_id = ?`, [id]);
    // Delete list record
    await tx.execute(`DELETE FROM ${LISTS_TABLE} WHERE id = ?`, [id]);
  });
};
```

## Configure Logging

```js
import { createBaseLogger, LogLevel } from '@powersync/web';

const logger = createBaseLogger();

// Configure the logger to use the default console output
logger.useDefaults();

// Set the minimum log level to DEBUG to see all log messages
// Available levels: DEBUG, INFO, WARN, ERROR, TRACE, OFF
logger.setLevel(LogLevel.DEBUG);
```

Enable verbose output in the developer tools for detailed logs.
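The SDK's own log output can also be routed through this logger by passing it to the database instance. The snippet below is a minimal sketch, assuming the `logger` option is available on `PowerSyncDatabase` in your SDK version and reusing the `AppSchema` defined in the earlier steps:

```js
import { createBaseLogger, LogLevel, PowerSyncDatabase } from '@powersync/web';
import { AppSchema } from './AppSchema';

const logger = createBaseLogger();
logger.useDefaults();
logger.setLevel(LogLevel.DEBUG);

// Pass the configured logger when constructing the database so that
// SDK-internal messages (sync stream, upload queue, etc.) use it as well.
export const db = new PowerSyncDatabase({
  schema: AppSchema,
  database: { dbFilename: 'powersync.db' },
  logger
});
```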
Additionally, the [WASQLiteDBAdapter](https://powersync-ja.github.io/powersync-js/web-sdk/classes/WASQLiteDBAdapter) opens SQLite connections inside a shared web worker. This worker can be inspected in Chrome by accessing: ``` chrome://inspect/#workers ``` ## Additional Usage Examples See [Usage Examples](/client-sdk-references/javascript-web/usage-examples) for further examples of the SDK. ## Developer Notes ### Connection Methods This SDK supports two methods for streaming sync commands: 1. **WebSocket (Default)** * The implementation leverages RSocket for handling reactive socket streams. * Back-pressure is effectively managed through client-controlled command requests. * Sync commands are transmitted efficiently as BSON (binary) documents. * This method is **recommended** since it will support the future [BLOB column support](https://roadmap.powersync.com/c/88-support-for-blob-column-types) feature. 2. **HTTP Streaming (Legacy)** * This is the original implementation method. * This method will not support the future BLOB column feature. By default, the `PowerSyncDatabase.connect()` method uses WebSocket. You can optionally specify the `connectionMethod` to override this: ```js // WebSocket (default) powerSync.connect(connector); // HTTP Streaming powerSync.connect(connector, { connectionMethod: SyncStreamConnectionMethod.HTTP }); ``` ### SQLite Virtual File Systems This SDK supports multiple Virtual File Systems (VFS), responsible for storing the local SQLite database: #### 1. IDBBatchAtomicVFS (Default) * This system utilizes IndexedDB as its underlying storage mechanism. * Multiple tabs are fully supported across most modern browsers. * Users may experience stability issues when using Safari. #### 2. OPFS-based Alternatives PowerSync supports two OPFS (Origin Private File System) implementations that generally offer improved performance: ##### OPFSCoopSyncVFS (Recommended) * This implementation provides comprehensive multi-tab support across all major browsers. * It offers the most reliable compatibility with Safari and Safari iOS. * Example configuration: ```js import { PowerSyncDatabase, WASQLiteOpenFactory, WASQLiteVFS } from '@powersync/web'; export const db = new PowerSyncDatabase({ schema: AppSchema, database: new WASQLiteOpenFactory({ dbFilename: 'exampleVFS.db', vfs: WASQLiteVFS.OPFSCoopSyncVFS, flags: { enableMultiTabs: typeof SharedWorker !== 'undefined' } }), flags: { enableMultiTabs: typeof SharedWorker !== 'undefined' } }); ``` ##### AccessHandlePoolVFS * This implementation delivers optimal performance for single-tab applications. * The system is not designed to handle multiple tab scenarios. * The configuration is similar to `OPFSCoopSyncVFS`, but requires using `WASQLiteVFS.AccessHandlePoolVFS`. #### VFS Compatibility Matrix | VFS Type | Multi-Tab Support (Standard Browsers) | Multi-Tab Support (Safari/iOS) | Notes | | ------------------- | ------------------------------------- | ------------------------------ | ------------------------------------- | | IDBBatchAtomicVFS | ✅ | ❌ | Default, some Safari stability issues | | OPFSCoopSyncVFS | ✅ | ✅ | Recommended for multi-tab support | | AccessHandlePoolVFS | ❌ | ❌ | Best for single-tab applications | **Note**: There are known issues with OPFS when using Safari's incognito mode. ### Managing OPFS Storage Unlike IndexedDB, OPFS storage cannot be managed through browser developer tools. 
The following utility functions can help you manage OPFS storage programmatically: ```js // Clear all OPFS storage async function purgeVFS() { await powerSync.disconnect(); await powerSync.close(); const root = await navigator.storage.getDirectory(); await new Promise(resolve => setTimeout(resolve, 1)); // Allow .db-wal to become deletable for await (const [name, entry] of root.entries!()) { try { if (entry.kind === 'file') { await root.removeEntry(name); } else if (entry.kind === 'directory') { await root.removeEntry(name, { recursive: true }); } } catch (err) { console.error(`Failed to delete ${entry.kind}: ${name}`, err); } } } // List OPFS entries async function listVfsEntries() { const root = await navigator.storage.getDirectory(); for await (const [name, entry] of root.entries()) { console.log(`${entry.kind}: ${name}`); } } ``` ## ORM Support See [JavaScript ORM Support](/client-sdk-references/javascript-web/javascript-orm/overview) for details. ## Troubleshooting See [Troubleshooting](/resources/troubleshooting) for pointers to debug common issues. # API Reference Source: https://docs.powersync.com/client-sdk-references/javascript-web/api-reference # Encryption Source: https://docs.powersync.com/client-sdk-references/javascript-web/encryption # Drizzle Source: https://docs.powersync.com/client-sdk-references/javascript-web/javascript-orm/drizzle This package enables using [Drizzle](https://orm.drizzle.team/) with the PowerSync [React Native](/client-sdk-references/react-native-and-expo) and [JavaScript Web](/client-sdk-references/javascript-web) SDKs. ## Setup Set up the PowerSync Database and wrap it with Drizzle. ```js import { wrapPowerSyncWithDrizzle } from '@powersync/drizzle-driver'; import { PowerSyncDatabase } from '@powersync/web'; import { relations } from 'drizzle-orm'; import { index, integer, sqliteTable, text } from 'drizzle-orm/sqlite-core'; import { AppSchema } from './schema'; export const lists = sqliteTable('lists', { id: text('id'), name: text('name') }); export const todos = sqliteTable('todos', { id: text('id'), description: text('description'), list_id: text('list_id'), created_at: text('created_at') }); export const listsRelations = relations(lists, ({ one, many }) => ({ todos: many(todos) })); export const todosRelations = relations(todos, ({ one, many }) => ({ list: one(lists, { fields: [todos.list_id], references: [lists.id] }) })); export const drizzleSchema = { lists, todos, listsRelations, todosRelations }; // As an alternative to manually defining a PowerSync schema, generate the local PowerSync schema from the Drizzle schema with the `DrizzleAppSchema` constructor: // import { DrizzleAppSchema } from '@powersync/drizzle-driver'; // export const AppSchema = new DrizzleAppSchema(drizzleSchema); // // This is optional, but recommended, since you will only need to maintain one schema on the client-side // Read on to learn more. export const powerSyncDb = new PowerSyncDatabase({ database: { dbFilename: 'test.sqlite' }, schema: AppSchema }); // This is the DB you will use in queries export const db = wrapPowerSyncWithDrizzle(powerSyncDb, { schema: drizzleSchema }); ``` ## Schema Conversion The `DrizzleAppSchema` constructor simplifies the process of integrating Drizzle with PowerSync. It infers the local [PowerSync schema](/installation/client-side-setup/define-your-schema) from your Drizzle schema definition, providing a unified development experience. 
As the PowerSync schema only supports SQLite types (`text`, `integer`, and `real`), the same limitation extends to the Drizzle table definitions. To use it, define your Drizzle tables and supply the schema to the `DrizzleAppSchema` function:

```js
import { DrizzleAppSchema } from '@powersync/drizzle-driver';
import { sqliteTable, text } from 'drizzle-orm/sqlite-core';

// Define a Drizzle table
const lists = sqliteTable('lists', {
  id: text('id').primaryKey().notNull(),
  created_at: text('created_at'),
  name: text('name').notNull(),
  owner_id: text('owner_id')
});

export const drizzleSchema = {
  lists
};

// Infer the PowerSync schema from your Drizzle schema
export const AppSchema = new DrizzleAppSchema(drizzleSchema);
```

### Defining PowerSync Options The PowerSync table definition allows additional options supported by PowerSync's app schema beyond those supported by Drizzle. They can be specified as follows. Note that these options exclude indexes, as indexes can be specified in a Drizzle table.

```js
import { DrizzleAppSchema } from '@powersync/drizzle-driver';
// import { DrizzleAppSchema, type DrizzleTableWithPowerSyncOptions} from '@powersync/drizzle-driver'; for TypeScript

const listsWithOptions = { tableDefinition: lists, options: { localOnly: true } };
// const listsWithOptions: DrizzleTableWithPowerSyncOptions = { tableDefinition: lists, options: { localOnly: true } }; for TypeScript

export const drizzleSchemaWithOptions = {
  lists: listsWithOptions
};

export const AppSchema = new DrizzleAppSchema(drizzleSchemaWithOptions);
```

### Converting a Single Table From Drizzle to PowerSync Drizzle tables can also be converted on a table-by-table basis with `toPowerSyncTable`.

```js
import { toPowerSyncTable } from '@powersync/drizzle-driver';
import { Schema } from '@powersync/web';
import { sqliteTable, text } from 'drizzle-orm/sqlite-core';

// Define a Drizzle table
const lists = sqliteTable('lists', {
  id: text('id').primaryKey().notNull(),
  created_at: text('created_at'),
  name: text('name').notNull(),
  owner_id: text('owner_id')
});

const psLists = toPowerSyncTable(lists); // converts the Drizzle table to a PowerSync table
// toPowerSyncTable(lists, { localOnly: true }); - allows for PowerSync table configuration

export const AppSchema = new Schema({
  lists: psLists // names the table `lists` in the PowerSync schema
});
```

## Compilable queries To use Drizzle queries in your hooks and composables, they currently need to be converted using `toCompilableQuery`.

```js
import { toCompilableQuery } from "@powersync/drizzle-driver";

const query = db.select().from(users);
const { data: listRecords, isLoading } = useQuery(toCompilableQuery(query));
```

## Usage Examples Below are examples comparing Drizzle and PowerSync syntax for common database operations.
### Select Operations ```js Drizzle const result = await db.select().from(users); // [{ id: '1', name: 'user1' }, { id: '2', name: 'user2' }] ``` ```js PowerSync const result = await powerSyncDb.getAll('SELECT * from users'); // [{ id: '1', name: 'user1' }, { id: '2', name: 'user2' }] ``` ### Insert Operations ```js Drizzle await db.insert(users).values({ id: '1', name: 'John' }); const result = await db.select().from(users); // [{ id: '1', name: 'John' }] ``` ```js PowerSync await powerSyncDb.execute('INSERT INTO users (id, name) VALUES(1, ?)', ['John']); const result = await powerSyncDb.getAll('SELECT * from users'); // [{ id: '1', name: 'John' }] ``` ### Delete Operations ```js Drizzle await db.insert(users).values({ id: '2', name: 'Ben' }); await db.delete(users).where(eq(users.name, 'Ben')); const result = await db.select().from(users); // [] ``` ```js PowerSync await powerSyncDb.execute('INSERT INTO users (id, name) VALUES(2, ?)', ['Ben']); await powerSyncDb.execute(`DELETE FROM users WHERE name = ?`, ['Ben']); const result = await powerSyncDb.getAll('SELECT * from users'); // [] ``` ### Update Operations ```js Drizzle await db.insert(users).values({ id: '3', name: 'Lucy' }); await db.update(users).set({ name: 'Lucy Smith' }).where(eq(users.name, 'Lucy')); const result = await db.select({ name: users.name }).from(users).get(); // 'Lucy Smith' ``` ```js PowerSync await powerSyncDb.execute('INSERT INTO users (id, name) VALUES(3, ?)', ['Lucy']); await powerSyncDb.execute('UPDATE users SET name = ? WHERE name = ?', ['Lucy Smith', 'Lucy']); const result = await powerSyncDb.get('SELECT name FROM users WHERE name = ?', ['Lucy Smith']) // 'Lucy Smith' ``` ### Watched Queries For watched queries with Drizzle it's recommended to use the `watch()` function from the Drizzle integration which takes in a Drizzle query. ```js Drizzle const query = db.select().from(users); db.watch(query, { onResult(results) { console.log(results); }, }); // [{ id: '1', name: 'John' }] ``` ```js PowerSync powerSyncDb.watch("select * from users", [], { onResult(results) { console.log(results.rows?._array); }, }); // [{ id: '1', name: 'John' }] ``` ### Transactions ```js Drizzle await db.transaction(async (transaction) => { await db.insert(users).values({ id: "4", name: "James" }); await db .update(users) .set({ name: "Lucy James Smith" }) .where(eq(users.name, "James")); }); const result = await db.select({ name: users.name }).from(users).get(); // 'James Smith' ``` ```js PowerSync await powerSyncDb.writeTransaction((transaction) => { await transaction.execute('INSERT INTO users (id, name) VALUES(4, ?)', ['James']); await transaction.execute("UPDATE users SET name = ? WHERE name = ?", ['James Smith', 'James']); }) const result = await powerSyncDb.get('SELECT name FROM users WHERE name = ?', ['James Smith']) // 'James Smith' ``` ## Developer Notes ### Table Constraint Restrictions The Drizzle ORM relies on the underlying PowerSync table definitions which are subject to certain limitations. This means that most Drizzle [constraint features](https://orm.drizzle.team/docs/indexes-constraints) (such as cascading deletes, foreign checks, unique) are currently not supported. # Kysely Source: https://docs.powersync.com/client-sdk-references/javascript-web/javascript-orm/kysely This package enables using [Kysely](https://kysely.dev/) with PowerSync React Native and web SDKs. It gives JavaScript developers the flexibility to write queries in either JavaScript/TypeScript or SQL, and provides type-safe imperative APIs. 
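As a rough sketch of that flexibility (assuming a `db` instance wrapped with `wrapPowerSyncWithKysely`, as shown in the Setup section below, and the `lists`/`todos` tables used elsewhere in these docs), the same database can be queried through the typed builder or through Kysely's `sql` template tag:

```js
import { sql } from 'kysely';

// Typed query-builder style
const lists = await db.selectFrom('lists').selectAll().execute();

// Raw SQL style, executed against the same wrapped database
const openTodos = await sql`SELECT * FROM todos WHERE completed = 0`.execute(db);

console.log(lists, openTodos.rows);
```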
## Setup Set up the PowerSync Database and wrap it with Kysely. ### JavaScript Setup ```js import { wrapPowerSyncWithKysely } from '@powersync/kysely-driver'; import { PowerSyncDatabase } from '@powersync/web'; // Define schema as in: https://docs.powersync.com/usage/installation/client-side-setup/define-your-schema import { appSchema } from './schema'; export const powerSyncDb = new PowerSyncDatabase({ database: { dbFilename: 'test.sqlite' }, schema: appSchema }); export const db = wrapPowerSyncWithKysely(powerSyncDb); ``` ### TypeScript Setup ```js import { wrapPowerSyncWithKysely } from '@powersync/kysely-driver'; import { PowerSyncDatabase } from "@powersync/web"; // Define schema as in: https://docs.powersync.com/usage/installation/client-side-setup/define-your-schema import { appSchema, Database } from "./schema"; export const powerSyncDb = new PowerSyncDatabase({ database: { dbFilename: "test.sqlite" }, schema: appSchema, }); // `db` now automatically contains types for defined tables export const db = wrapPowerSyncWithKysely(powerSyncDb) ``` For more information on Kysely typing, see [their documentation](https://kysely.dev/docs/getting-started#types). ## Usage Examples Below are examples comparing Kysely and PowerSync syntax for common database operations. ### Select Operations ```js Kysely const result = await db.selectFrom('users').selectAll().execute(); // [{ id: '1', name: 'user1' }, { id: '2', name: 'user2' }] ``` ```js PowerSync const result = await powerSyncDb.getAll('SELECT * from users'); // [{ id: '1', name: 'user1' }, { id: '2', name: 'user2' }] ``` ### Insert Operations ```js Kysely await db.insertInto('users').values({ id: '1', name: 'John' }).execute(); const result = await db.selectFrom('users').selectAll().execute(); // [{ id: '1', name: 'John' }] ``` ```js PowerSync await powerSyncDb.execute('INSERT INTO users (id, name) VALUES(1, ?)', ['John']); const result = await powerSyncDb.getAll('SELECT * from users'); // [{ id: '1', name: 'John' }] ``` ### Delete Operations ```js Kysely await db.insertInto('users').values({ id: '2', name: 'Ben' }).execute(); await db.deleteFrom('users').where('name', '=', 'Ben').execute(); const result = await db.selectFrom('users').selectAll().execute(); // [] ``` ```js PowerSync await powerSyncDb.execute('INSERT INTO users (id, name) VALUES(2, ?)', ['Ben']); await powerSyncDb.execute(`DELETE FROM users WHERE name = ?`, ['Ben']); const result = await powerSyncDb.getAll('SELECT * from users'); // [] ``` ### Update Operations ```js Kysely await db.insertInto('users').values({ id: '3', name: 'Lucy' }).execute(); await db.updateTable('users').where('name', '=', 'Lucy').set('name', 'Lucy Smith').execute(); const result = await db.selectFrom('users').select('name').executeTakeFirstOrThrow(); // 'Lucy Smith' ``` ```js PowerSync await powerSyncDb.execute('INSERT INTO users (id, name) VALUES(3, ?)', ['Lucy']); await powerSyncDb.execute('UPDATE users SET name = ? WHERE name = ?', ['Lucy Smith', 'Lucy']); const result = await powerSyncDb.get('SELECT name FROM users WHERE name = ?', ['Lucy Smith']) // 'Lucy Smith' ``` ### Watched Queries For watched queries with Kysely it's recommended to use the `watch()` function from the wrapper package which takes in a Kysely query. 
```js Kysely const query = db.selectFrom('users').selectAll(); db.watch(query, { onResult(results) { console.log(results); }, }); // [{ id: '1', name: 'John' }] ``` ```js PowerSync powerSyncDb.watch("select * from users", [], { onResult(results) { console.log(results.rows?._array); }, }); // [{ id: '1', name: 'John' }] ``` ### Transactions ```js Kysely await db.transaction().execute(async (transaction) => { await transaction.insertInto('users').values({ id: '4', name: 'James' }).execute(); await transaction.updateTable('users').where('name', '=', 'James').set('name', 'James Smith').execute(); }); const result = await db.selectFrom('users').select('name').executeTakeFirstOrThrow(); // 'James Smith' ``` ```js Kysely with Raw SQL await db.transaction().execute(async (transaction) => { await sql`INSERT INTO users (id, name) VALUES ('4', 'James');`.execute(transaction) await transaction.updateTable('users').where('name', '=', 'James').set('name', 'James Smith').execute(); }); const result = await db.selectFrom('users').select('name').executeTakeFirstOrThrow(); // 'James Smith' ``` ```js PowerSync await powerSyncDb.writeTransaction((transaction) => { await transaction.execute('INSERT INTO users (id, name) VALUES(4, ?)', ['James']); await transaction.execute("UPDATE users SET name = ? WHERE name = ?", ['James Smith', 'James']); }) const result = await powerSyncDb.get('SELECT name FROM users WHERE name = ?', ['James Smith']) // 'James Smith' ``` # ORM Overview Source: https://docs.powersync.com/client-sdk-references/javascript-web/javascript-orm/overview Reference for using PowerSync with ORMs in JavaScript and React Native An introduction to using ORMs with PowerSync is available on our blog [here](https://www.powersync.com/blog/using-orms-with-powersync). The following ORMs are officially supported: Kysely query builder for PowerSync. Drizzle ORM for PowerSync. # JavaScript SPA Frameworks Source: https://docs.powersync.com/client-sdk-references/javascript-web/javascript-spa-frameworks Compatibility with SPA frameworks The PowerSync [JavaScript Web SDK](../javascript-web) is compatible with popular Single-Page Application (SPA) frameworks like React, Vue, Angular, and Svelte. For [React](#react-hooks) and [Vue](#vue-composables) specifically, wrapper packages are available to support reactivity and live queries, making it easier for developers to leverage PowerSync's features. PowerSync also integrates with [TanStack Query for React](#tanstack-query) (details below). This integration provides a wide range of developer tools and paves the way for future live query support in other frameworks. Notable community library: * Using SolidJS? Check out [powersync-solid](https://github.com/aboviq/powersync-solid) for SolidJS hooks for PowerSync queries. ### Which package should I choose for queries? For React or React Native apps: * The [`@powersync/react`](#react-hooks) package is best for most basic use cases, especially when you only need reactive queries with loading and error states. * For more advanced scenarios, such as query caching and pagination, TanStack is a powerful solution. The [`@powersync/tanstack-react-query`](#tanstack-query) package extends the `useQuery` hook from `@powersync/react` and adds functionality from [TanStack Query](https://tanstack.com/query/latest/docs/framework/react/overview), making it a better fit for advanced use cases or performance-optimized apps. If you have a Vue app, use the Vue-specific package: [`@powersync/vue`](#vue-composables). 
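To illustrate the first option, here is a minimal sketch using `useQuery` and `useStatus` from `@powersync/react` (covered in the React Hooks section below). It assumes the PowerSync database has been supplied to the component tree via the package's `PowerSyncContext` provider, and that a `lists` table exists as in the earlier examples:

```jsx
import { useQuery, useStatus } from '@powersync/react';

export const ListsSummary = () => {
  // Re-renders automatically whenever the underlying tables change
  const { data: lists, isLoading, error } = useQuery('SELECT * FROM lists');
  const status = useStatus();

  if (error) return <div>Error: {error.message}</div>;
  if (isLoading) return <div>Loading...</div>;

  return (
    <div>
      <p>{status.connected ? 'Connected' : 'Offline'}</p>
      <p>{lists.length} lists synced locally</p>
    </div>
  );
};
```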
## React Hooks The `@powersync/react` package provides React hooks for use with the [JavaScript Web SDK](./) or [React Native SDK](../react-native-and-expo/). These hooks are designed to support reactivity, and can be used to automatically re-render React components when query results update or to access PowerSync connectivity status changes. The main hooks available are: * `useQuery`: This allows you to access the results of a watched query. The response includes `isLoading`, `isFetching` and `error` properties. * `useStatus`: Access the PowerSync connectivity status. This can be used to update the UI based on whether the client is connected or not. * `useSuspenseQuery`: This hook also allows you to access the results of a watched query, but its loading and fetching states are handled through [Suspense](https://react.dev/reference/react/Suspense). It automatically converts certain loading/fetching states into Suspense signals, triggering Suspense boundaries in parent components. The full API Reference and example code can be found here: ## TanStack Query PowerSync integrates with [TanStack Query](https://tanstack.com/query/latest/docs/framework/react/overview) (formerly React Query) through the `@powersync/tanstack-react-query` package. This package wraps TanStack's `useQuery` and `useSuspenseQuery` hooks, bringing many of TanStack's advanced asynchronous state management features to PowerSync web and React Native applications, including: * **Loading and error states** via [`useQuery`](https://tanstack.com/query/latest/docs/framework/react/guides/queries) * [**React Suspense**](https://tanstack.com/query/latest/docs/framework/react/guides/suspense) **support**: `useSuspenseQuery` automatically converts certain loading states into Suspense signals, triggering Suspense boundaries in parent components. * [**Caching queries**](https://tanstack.com/query/latest/docs/framework/react/guides/caching): Queries are cached with a unique key and reused across the app, so subsequent instances of the same query won't refire unnecessarily. * **Built-in support for** [**pagination**](https://tanstack.com/query/latest/docs/framework/react/guides/paginated-queries) #### Additional hooks We plan to support more TanStack Query hooks over time. If there are specific hooks you're interested in, please let us know on [Discord](https://discord.gg/powersync). ### Example Use Case When navigating to or refreshing a page, you may notice a brief UI "flicker" (10-50ms). Here are a few ways to manage this with TanStack Query: * **First load**: When a page loads for the first time, use a loading indicator or a Suspense fallback to handle queries. See the [examples](https://www.npmjs.com/package/@powersync/tanstack-react-query#usage). * **Subsequent loads**: With TanStack's query caching, subsequent loads of the same page won't refire queries, which reduces the flicker effect. * **Block navigation until components are ready**: Using `useSuspenseQuery`, you can ensure that navigation from page A to page B only happens after the queries for page B have loaded. You can do this by combining `useSuspenseQuery` with the `` element and React Router’s [`v7_startTransition`](https://reactrouter.com/en/main/upgrading/future#v7_starttransition) future flag, which blocks navigation until all suspending components are ready. ### Usage and Examples For more examples and usage details, see the package [README](https://www.npmjs.com/package/@powersync/tanstack-react-query). 
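As an illustrative sketch of the caching behavior (again assuming the database is provided via `PowerSyncContext` from `@powersync/react`, and a `lists` table as in the earlier examples), a watched, cached query with `@powersync/tanstack-react-query` looks roughly like this:

```jsx
import { useQuery } from '@powersync/tanstack-react-query';

export const TodoListsWidget = () => {
  const { data: todoLists, isLoading, error } = useQuery({
    // Cached under this key and shared by all components that use the same key
    queryKey: ['todoLists'],
    query: 'SELECT * FROM lists'
  });

  if (isLoading) return <div>Loading...</div>;
  if (error) return <div>Error: {error.message}</div>;

  return (
    <ul>
      {todoLists.map((list) => (
        <li key={list.id}>{list.name}</li>
      ))}
    </ul>
  );
};
```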
The full API Reference can be found here: ## Vue Composables The [`powersync/vue`](https://www.npmjs.com/package/@powersync/vue) package is a Vue-specific wrapper for PowerSync. It provides Vue [composables](https://vuejs.org/guide/reusability/composables) that are designed to support reactivity, and can be used to automatically re-render components when query results update or to access PowerSync connectivity status changes. The main hooks available are: * `useQuery`: This allows you to access the results of a watched query. The response includes `isLoading`, `isFetching` and `error` properties. * `useStatus`: Access the PowerSync connectivity status. This can be used to update the UI based on whether the client is connected or not. The full API Reference and example code can be found here: # Usage Examples Source: https://docs.powersync.com/client-sdk-references/javascript-web/usage-examples Code snippets and guidelines for common scenarios ## Multiple Tab Support * Multiple tab support is not currently available on Android. * For Safari, use the [`OPFSCoopSyncVFS`](/client-sdk-references/javascript-web#sqlite-virtual-file-systems) virtual file system to ensure stable multi-tab functionality. Using PowerSync between multiple tabs is supported on some web browsers. Multiple tab support relies on shared web workers for database and sync streaming operations. When enabled, shared web workers named `shared-DB-worker-[dbFileName]` and `shared-sync-[dbFileName]` will be created. #### `shared-DB-worker-[dbFileName]` The shared database worker will ensure writes to the database will instantly be available between tabs. #### `shared-sync-[dbFileName]` The shared sync worker connects directly to the PowerSync backend instance and applies changes to the database. Note that the shared sync worker will call the `fetchCredentials` and `uploadData` method of the latest opened available tab. Closing a tab will shift the latest tab to the previously opened one. Currently, using the SDK in multiple tabs without enabling the [enableMultiTabs](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/web/src/db/adapters/web-sql-flags.ts#L23) flag will spawn a standard web worker per tab for DB operations. These workers are safe to operate on the DB concurrently, however changes from one tab may not update watches on other tabs. Only one tab can sync from the PowerSync instance at a time. The sync status will not be shared between tabs, only the oldest tab will connect and display the latest sync status. Support is enabled by default if available. This can be disabled as below: ```js export const db = new PowerSyncDatabase({ schema: AppSchema, database: { dbFilename: 'my_app_db.sqlite' }, flags: { /** * Multiple tab support is enabled by default if available. * This can be disabled by setting this flag to false. */ enableMultiTabs: false } }); ``` ## Using transactions to group changes Read and write transactions present a context where multiple changes can be made then finally committed to the DB or rolled back. This ensures that either all the changes get persisted, or no change is made to the DB (in the case of a rollback or exception). [PowerSyncDatabase.writeTransaction(callback)](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#writetransaction) automatically commits changes after the transaction callback is completed if `tx.rollback()` has not explicitly been called. 
If an exception is thrown in the callback then changes are automatically rolled back. For example, a component can delete a list and its todos in a single write transaction: ```js
// ListsWidget.jsx
import React, { useState } from 'react';

export const ListsWidget = () => {
  const [lists, setLists] = useState([]);

  const deleteList = async (id) => {
    await db.writeTransaction(async (tx) => {
      // Delete the main list
      await tx.execute('DELETE FROM lists WHERE id = ?', [id]);
      // Delete any children of the list
      await tx.execute('DELETE FROM todos WHERE list_id = ?', [id]);
    });
  };

  return (
    <ul>
      {lists.map((list) => (
        <li key={list.id}>
          {list.name}
          <button onClick={() => deleteList(list.id)}>Delete</button>
        </li>
      ))}
    </ul>
); }; ``` Also see [PowerSyncDatabase.readTransaction(callback)](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#readtransaction). ## Subscribe to changes in data Use [PowerSyncDatabase.watch](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#watch) to watch for changes in source tables. The `watch` method can be used with a `AsyncIterable` signature as follows: ```js async *attachmentIds(): AsyncIterable { for await (const result of this.powersync.watch( `SELECT photo_id as id FROM ${TODO_TABLE} WHERE photo_id IS NOT NULL`, [] )) { yield result.rows?._array.map((r) => r.id) ?? []; } } ``` As of version **1.3.3** of the SDK, the `watch` method can also be used with a callback: ```js attachmentIds(onResult: (ids: string[]) => void): void { this.powersync.watch( `SELECT photo_id as id FROM ${TODO_TABLE} WHERE photo_id IS NOT NULL`, [], { onResult: (result) => { onResult(result.rows?._array.map((r) => r.id) ?? []); } } ); } ``` ## Insert, update, and delete data in the local database Use [PowerSyncDatabase.execute](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#execute) to run INSERT, UPDATE or DELETE queries. ```js const handleButtonClick = async () => { await db.execute( 'INSERT INTO customers(id, name, email) VALUES(uuid(), ?, ?)', ['Fred', 'fred@example.org'] ); }; return ( ); ``` ## Send changes in local data to your backend service Override [uploadData](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L24) to send local updates to your backend service. ```js // Implement the uploadData method in your backend connector async function uploadData(database) { const batch = await database.getCrudBatch(); if (batch === null) return; for (const op of batch.crud) { switch (op.op) { case 'put': // Send the data to your backend service // replace `_myApi` with your own API client or service await _myApi.put(op.table, op.opData); break; default: // TODO: implement the other operations (patch, delete) break; } } await batch.complete(); } ``` ## Accessing PowerSync connection status information Use [PowerSyncDatabase.connected](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#connected) and register an event listener with [PowerSyncDatabase.registerListener](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#registerlistener) to listen for status changes to your PowerSync instance. ```js // Example of using connected status to show online or offline // Tap into connected const [connected, setConnected] = React.useState(powersync.connected); React.useEffect(() => { // Register listener for changes made to the powersync status return powersync.registerListener({ statusChanged: (status) => { setConnected(status.connected); } }); }, [powersync]); // Icon to show connected or not connected to powersync // as well as the last synced time { Alert.alert( 'Status', `${connected ? 'Connected' : 'Disconnected'}. \nLast Synced at ${powersync.currentStatus?.lastSyncedAt.toISOString() ?? 
'-' }\nVersion: ${powersync.sdkVersion}` ); }} />; ``` ## Wait for the initial sync to complete Use the [hasSynced](https://powersync-ja.github.io/powersync-js/web-sdk/classes/SyncStatus#hassynced) property (available since version 0.4.1 of the SDK) and register an event listener with [PowerSyncDatabase.registerListener](https://powersync-ja.github.io/powersync-js/web-sdk/classes/PowerSyncDatabase#registerlistener) to indicate to the user whether the initial sync is in progress.

```js
// Example of using hasSynced to show whether the first sync has completed

// Tap into hasSynced
const [hasSynced, setHasSynced] = React.useState(powerSync.currentStatus?.hasSynced || false);

React.useEffect(() => {
  // Register listener for changes made to the powersync status
  return powerSync.registerListener({
    statusChanged: (status) => {
      setHasSynced(!!status.hasSynced);
    }
  });
}, [powerSync]);

return <div>{hasSynced ? 'Initial sync completed!' : 'Busy with initial sync...'}</div>
;
```

For async use cases, see [PowerSyncDatabase.waitForFirstSync()](https://powersync-ja.github.io/powersync-js/web-sdk/classes/AbstractPowerSyncDatabase#waitforfirstsync), which returns a promise that resolves once the first full sync has completed (it queries the internal SQL [ps\_buckets](/architecture/client-architecture) table to determine if data has been synced). ## Report sync download progress You can show users a progress bar when data downloads using the `downloadProgress` property from the [SyncStatus](https://powersync-ja.github.io/powersync-js/web-sdk/classes/SyncStatus) class. This is especially useful for long-running initial syncs. `downloadProgress.downloadedFraction` gives you a value from 0.0 to 1.0 representing the total sync progress. Example (React, using [MUI](https://mui.com) components):

```jsx
import { Box, LinearProgress, Stack, Typography } from '@mui/material';
import { useStatus } from '@powersync/react';
import { FC, ReactNode } from 'react';

export const SyncProgressBar: FC<{ priority?: number }> = ({ priority }) => {
  const status = useStatus();
  const progressUntilNextSync = status.downloadProgress;
  const progress = priority == null ? progressUntilNextSync : progressUntilNextSync?.untilPriority(priority);

  if (progress == null) {
    return <></>;
  }

  return (
    <Stack spacing={1}>
      <LinearProgress variant="determinate" value={progress.downloadedFraction * 100} />
      <Box>
        {progress.downloadedOperations == progress.totalOperations ? (
          <Typography>Applying server-side changes</Typography>
        ) : (
          <Typography>
            Downloaded {progress.downloadedOperations} out of {progress.totalOperations}.
          </Typography>
        )}
      </Box>
    </Stack>
  );
};
```

Also see: * [SyncStatus API](https://powersync-ja.github.io/powersync-js/web-sdk/classes/SyncStatus) * [Demo component](https://github.com/powersync-ja/powersync-js/blob/main/demos/react-supabase-todolist/src/components/widgets/GuardBySync.tsx) ## Using PowerSyncDatabase Flags This guide provides an overview of the customizable flags available for the `PowerSyncDatabase` in the JavaScript Web SDK. These flags allow you to enable or disable specific features to suit your application's requirements. ### Configuring Flags You can configure flags during the initialization of the `PowerSyncDatabase`. Flags can be set using the `flags` property, which allows you to enable or disable specific functionalities.

```javascript
import { PowerSyncDatabase, resolveWebPowerSyncFlags, WebPowerSyncFlags } from '@powersync/web';
import { AppSchema } from '@/library/powersync/AppSchema';

// Define custom flags
const customFlags: WebPowerSyncFlags = resolveWebPowerSyncFlags({
  enableMultiTabs: true,
  broadcastLogs: true,
  disableSSRWarning: false,
  ssrMode: false,
  useWebWorker: true,
});

// Create the PowerSync database instance
export const db = new PowerSyncDatabase({
  schema: AppSchema,
  database: {
    dbFilename: 'example.db',
  },
  flags: customFlags,
});
```

#### Available Flags * `enableMultiTabs` (default: `true`): Enables support for multiple tabs using shared web workers. When enabled, multiple tabs can interact with the same database and sync data seamlessly. * `broadcastLogs` (default: `false`): Enables the broadcasting of logs for debugging purposes. This flag helps monitor shared worker logs in a multi-tab environment. * `disableSSRWarning` (default: `false`): Disables warnings when running in SSR (Server-Side Rendering) mode. * `ssrMode` (default: `false`): Enables SSR mode. In this mode, only empty query results will be returned, and syncing with the backend is disabled. * `useWebWorker` (default: `true`): Enables the use of web workers for database operations. Disabling this flag also disables multi-tab support. ### Flag Behavior #### Example 1: Multi-Tab Support By default, multi-tab support is enabled if supported by the browser.
To explicitly disable this feature: ```javascript export const db = new PowerSyncDatabase({ schema: AppSchema, database: { dbFilename: 'my_app_db.sqlite', }, flags: { enableMultiTabs: false, }, }); ``` When disabled, each tab will use independent workers, and changes in one tab will not automatically propagate to others. #### Example 2: SSR Mode To enable SSR mode and suppress warnings: ```javascript export const db = new PowerSyncDatabase({ schema: AppSchema, database: { dbFilename: 'my_app_db.sqlite', }, flags: { ssrMode: true, disableSSRWarning: true, }, }); ``` #### Example 3: Verbose Debugging with Broadcast Logs To enable detailed logging for debugging: ```javascript export const db = new PowerSyncDatabase({ schema: AppSchema, database: { dbFilename: 'my_app_db.sqlite', }, flags: { broadcastLogs: true, }, }); ``` Logs will include detailed insights into database operations and synchronization. ### Recommendations 1. **Set `enableMultiTabs`** to `true` if your application requires seamless data sharing across multiple tabs. 2. **Set `useWebWorker`** to `true` for efficient database operations using web workers. 3. **Set `broadcastLogs`** to `true` during development to troubleshoot and monitor database and sync operations. 4. **Set `disableSSRWarning`** to `true` when running in SSR mode to avoid unnecessary console warnings. 5. **Test combinations** of flags to validate their behavior in your application's specific use case. # Kotlin Multiplatform Source: https://docs.powersync.com/client-sdk-references/kotlin-multiplatform The PowerSync KMP SDK is distributed via Maven Central [\[External link\].](https://central.sonatype.com/artifact/com.powersync/core) Refer to the powersync-kotlin repo on GitHub. Full API reference for the PowerSync SDK [\[External link\].](https://powersync-ja.github.io/powersync-kotlin) Gallery of example projects/demo apps built with Kotlin Multiplatform and PowerSync. ### SDK Features * **Real-time streaming of database changes**: Changes made by one user are instantly streamed to all other users with access to that data. This keeps clients automatically in sync without manual polling or refresh logic. * **Direct access to a local SQLite database**: Data is stored locally, so apps can read and write instantly without network calls. This enables offline support and faster user interactions. * **Asynchronous background execution**: The SDK performs database operations in the background to avoid blocking the application’s main thread. This means that apps stay responsive, even during heavy data activity. * **Query subscriptions for live updates**: The SDK supports query subscriptions that automatically push real-time updates to client applications as data changes, keeping your UI reactive and up to date. * **Automatic schema management**: PowerSync syncs schemaless data and applies a client-defined schema using SQLite views. This architecture means that PowerSync SDKs can handle schema changes gracefully without requiring explicit migrations on the client-side. Supported targets: Android, iOS and Desktop. ## Installation Add the [PowerSync SDK](https://central.sonatype.com/artifact/com.powersync/core) to your project by adding the following to your `build.gradle.kts` file: ```gradle kotlin { //... sourceSets { commonMain.dependencies { api("com.powersync:core:$powersyncVersion") // If you want to use the Supabase Connector, also add the following: implementation("com.powersync:connectors:$powersyncVersion") } //... 
} } ``` **CocoaPods configuration (recommended for iOS)** Add the following to the `cocoapods` config in your `build.gradle.kts`: ```gradle cocoapods { //... pod("powersync-sqlite-core") { linkOnly = true } framework { isStatic = true export("com.powersync:core") } //... } ``` The `linkOnly = true` attribute and `isStatic = true` framework setting ensure that the `powersync-sqlite-core` binaries are statically linked. **JVM compatibility for Desktop** * The following platforms are supported: Linux AArch64, Linux X64, MacOS AArch64, MacOS X64, Windows X64. * See this [example build.gradle file](https://github.com/powersync-ja/powersync-kotlin/blob/main/demos/hello-powersync/composeApp/build.gradle.kts) for the relevant JVM config. ## Getting Started Before implementing the PowerSync SDK in your project, make sure you have completed these steps: * Signed up for a PowerSync Cloud account ([here](https://accounts.journeyapps.com/portal/powersync-signup?s=docs)) or [self-host PowerSync](/self-hosting/getting-started). * [Configured your backend database](/installation/database-setup) and connected it to your PowerSync instance. * [Installed](/client-sdk-references/kotlin-multiplatform#installation) the PowerSync SDK. ### 1. Define the Schema The first step is defining the schema for the local SQLite database, which is provided to the `PowerSyncDatabase` constructor via the `schema` parameter. This schema represents a "view" of the downloaded data. No migrations are required — the schema is applied directly when the PowerSync database is constructed. The types available are `text`, `integer` and `real`. These should map directly to the values produced by the [Sync Rules](/usage/sync-rules). If a value doesn't match, it is cast automatically. **Example**: ```kotlin // AppSchema.kt import com.powersync.db.schema.Column import com.powersync.db.schema.Index import com.powersync.db.schema.IndexedColumn import com.powersync.db.schema.Schema import com.powersync.db.schema.Table val AppSchema: Schema = Schema( listOf( Table( name = "todos", columns = listOf( Column.text('list_id'), Column.text('created_at'), Column.text('completed_at'), Column.text('description'), Column.integer('completed'), Column.text('created_by'), Column.text('completed_by') ), // Index to allow efficient lookup within a list indexes = listOf( Index("list", listOf(IndexedColumn.descending("list_id"))) ) ), Table( name = "lists", columns = listOf( Column.text('created_at'), Column.text('name'), Column.text('owner_id') ) ) ) ) ``` **Note**: No need to declare a primary key `id` column, as PowerSync will automatically create this. ### 2. Instantiate the PowerSync Database Next, you need to instantiate the PowerSync database — this is the core managed database. Its primary functions are to record all changes in the local database, whether online or offline. In addition, it automatically uploads changes to your app backend when connected. **Example**: a. Create platform specific `DatabaseDriverFactory` to be used by the `PowerSyncBuilder` to create the SQLite database driver. ```kotlin // commonMain import com.powersync.DatabaseDriverFactory import com.powersync.PowerSyncDatabase // Android val driverFactory = DatabaseDriverFactory(this) // iOS & Desktop val driverFactory = DatabaseDriverFactory() ``` b. Build a `PowerSyncDatabase` instance using the `PowerSyncBuilder` and the `DatabaseDriverFactory`. 
The schema you created in a previous step is provided as a parameter:

```kotlin
// commonMain
val database = PowerSyncDatabase(
    factory = driverFactory, // The factory you defined above
    schema = AppSchema, // The schema you defined in the previous step
    dbFilename = "powersync.db"
    // logger = YourLogger // Optionally include your own Logger that must conform to the Kermit Logger
    // dbDirectory = "path/to/directory" // Optional. Directory path where the database file is located. This parameter is ignored for iOS.
)
```

c. Connect the `PowerSyncDatabase` to the backend connector:

```kotlin
// commonMain
// Uses the backend connector that will be created in the next step
database.connect(MyConnector())
```

**Special case: Compose Multiplatform**

The artifact `com.powersync:powersync-compose` provides a simpler API:

```kotlin
// commonMain
val database = rememberPowerSyncDatabase(schema)

remember {
    database.connect(MyConnector())
}
```

### 3. Integrate with your Backend

Create a connector to integrate with your backend. The PowerSync backend connector provides the connection between your application backend and the PowerSync managed database.

It is used to:

1. Retrieve an auth token to connect to the PowerSync instance.
2. Apply local changes on your backend application server (and from there, to your backend database)

Accordingly, the connector must implement two methods:

1. `PowerSyncBackendConnector.fetchCredentials` - This is called every couple of minutes and is used to obtain credentials for your app backend API. -> See [Authentication Setup](/installation/authentication-setup) for instructions on how the credentials should be generated.
2. `PowerSyncBackendConnector.uploadData` - Use this to upload client-side changes to your app backend. -> See [Writing Client Changes](/installation/app-backend-setup/writing-client-changes) for considerations on the app backend implementation.

**Example**:

```kotlin
// PowerSync.kt
import com.powersync.PowerSyncDatabase
import com.powersync.connectors.PowerSyncBackendConnector
import com.powersync.connectors.PowerSyncCredentials

class MyConnector : PowerSyncBackendConnector() {
    override suspend fun fetchCredentials(): PowerSyncCredentials {
        // Implement fetchCredentials to obtain the necessary credentials to connect to your backend
        // See an example implementation in https://github.com/powersync-ja/powersync-kotlin/blob/main/connectors/supabase/src/commonMain/kotlin/com/powersync/connector/supabase/SupabaseConnector.kt

        return PowerSyncCredentials(
            endpoint = "[Your PowerSync instance URL or self-hosted endpoint]",
            // Use a development token (see Authentication Setup https://docs.powersync.com/installation/authentication-setup/development-tokens) to get up and running quickly
            token = "An authentication token"
        )
    }

    override suspend fun uploadData(database: PowerSyncDatabase) {
        // Implement uploadData to send local changes to your backend service
        // You can omit this method if you only want to sync data from the server to the client
        // See an example implementation under Usage Examples (sub-page)
        // See https://docs.powersync.com/installation/app-backend-setup/writing-client-changes for considerations.
    }
}
```

**Note**: If you are using Supabase, you can use [SupabaseConnector.kt](https://github.com/powersync-ja/powersync-kotlin/blob/main/connectors/supabase/src/commonMain/kotlin/com/powersync/connector/supabase/SupabaseConnector.kt) as a starting point.

## Using PowerSync: CRUD functions

Once the PowerSync instance is configured you can start using the SQLite DB functions.
The most commonly used CRUD functions to interact with your SQLite data are:

* [PowerSyncDatabase.get](/client-sdk-references/kotlin-multiplatform#fetching-a-single-item) - get (SELECT) a single row from a table.
* [PowerSyncDatabase.getAll](/client-sdk-references/kotlin-multiplatform#querying-items-powersync-getall) - get (SELECT) a set of rows from a table.
* [PowerSyncDatabase.watch](/client-sdk-references/kotlin-multiplatform#watching-queries-powersync-watch) - execute a read query every time source tables are modified.
* [PowerSyncDatabase.execute](/client-sdk-references/kotlin-multiplatform#mutations-powersync-execute) - execute a write (INSERT/UPDATE/DELETE) query.

### Fetching a Single Item

The `get` method executes a read-only (SELECT) query and returns a single result. It throws an exception if no result is found. Use `getOptional` to return a single optional result (returns `null` if no result is found).

```kotlin
// Find a list item by ID
suspend fun find(id: Any): TodoList {
    return database.get(
        "SELECT * FROM lists WHERE id = ?",
        listOf(id)
    ) { cursor ->
        TodoList.fromCursor(cursor)
    }
}
```

### Querying Items (PowerSync.getAll)

The `getAll` method executes a read-only (SELECT) query and returns a set of rows.

```kotlin
// Get all list IDs
suspend fun getLists(): List<String> {
    return database.getAll(
        "SELECT id FROM lists WHERE id IS NOT NULL"
    ) { cursor ->
        cursor.getString("id")
    }
}
```

### Watching Queries (PowerSync.watch)

The `watch` method executes a read query whenever a change to a dependent table is made.

```kotlin
// You can watch any SQL query
fun watchCustomers(): Flow<List<User>> {
    // TODO: implement your UI based on the result set
    return database.watch(
        "SELECT * FROM customers"
    ) { cursor ->
        User(
            id = cursor.getString("id"),
            name = cursor.getString("name"),
            email = cursor.getString("email")
        )
    }
}
```

### Mutations (PowerSync.execute)

The `execute` method executes a write query (INSERT, UPDATE, DELETE) and returns the results (if any).

```kotlin
suspend fun insertCustomer(name: String, email: String) {
    database.writeTransaction { tx ->
        tx.execute(
            sql = "INSERT INTO customers (id, name, email) VALUES (uuid(), ?, ?)",
            parameters = listOf(name, email)
        )
    }
}

suspend fun updateCustomer(id: String, name: String, email: String) {
    database.execute(
        sql = "UPDATE customers SET name = ? WHERE email = ?",
        parameters = listOf(name, email)
    )
}

suspend fun deleteCustomer(id: String? = null) {
    // If no id is provided, delete the first customer in the database
    val targetId =
        id ?: database.getOptional(
            sql = "SELECT id FROM customers LIMIT 1",
            mapper = { cursor ->
                cursor.getString(0)!!
            }
        ) ?: return

    database.writeTransaction { tx ->
        tx.execute(
            sql = "DELETE FROM customers WHERE id = ?",
            parameters = listOf(targetId)
        )
    }
}
```

## Configure Logging

You can include your own Logger that must conform to the [Kermit Logger](https://kermit.touchlab.co/docs/), as shown here:

```kotlin
PowerSyncDatabase(
    ...
    logger: Logger? = YourLogger
)
```

If you don't supply a Logger, a default Kermit Logger is created with settings to only show `Warnings` in release and `Verbose` in debug, as follows:

```kotlin
val defaultLogger: Logger = Logger

// Severity is set to Verbose in Debug and Warn in Release
if (BuildConfig.isDebug) {
    Logger.setMinSeverity(Severity.Verbose)
} else {
    Logger.setMinSeverity(Severity.Warn)
}
return defaultLogger
```

You can use the Logger anywhere in your code to debug, as follows:

```kotlin
import co.touchlab.kermit.Logger

Logger.i("Some information")
Logger.e("Some error")
...
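// Kermit also provides the other standard severities, for example:
Logger.d("Some debug output")
Logger.w("Some warning")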
```

## Additional Usage Examples

See [Usage Examples](/client-sdk-references/kotlin-multiplatform/usage-examples) for further examples of the SDK.

## ORM Support

ORM support is not yet available; we are still investigating options. Please [let us know](/resources/contact-us) what your needs around ORMs are.

## Troubleshooting

See [Troubleshooting](/resources/troubleshooting) for pointers to debug common issues.

# Usage Examples
Source: https://docs.powersync.com/client-sdk-references/kotlin-multiplatform/usage-examples

Code snippets and guidelines for common scenarios

## Using transactions to group changes

Use `writeTransaction` to group statements that can write to the database.

```kotlin
database.writeTransaction { tx ->
    tx.execute(
        sql = "DELETE FROM lists WHERE id = ?",
        parameters = listOf(listId)
    )
    tx.execute(
        sql = "DELETE FROM todos WHERE list_id = ?",
        parameters = listOf(listId)
    )
}
```

## Subscribe to changes in data

Use the `watch` method to watch for changes to the dependent tables of any SQL query.

```kotlin
// You can watch any SQL query
fun watchCustomers(): Flow<List<User>> {
    // TODO: implement your UI based on the result set
    return database.watch("SELECT * FROM customers", mapper = { cursor ->
        User(
            id = cursor.getString("id"),
            name = cursor.getString("name"),
            email = cursor.getString("email")
        )
    })
}
```

## Insert, update, and delete data in the local database

Use `execute` to run INSERT, UPDATE or DELETE queries.

```kotlin
suspend fun updateCustomer(id: String, name: String, email: String) {
    database.execute(
        "UPDATE customers SET name = ? WHERE email = ?",
        listOf(name, email)
    )
}
```

## Send changes in local data to your backend service

Override `uploadData` to send local updates to your backend service. If you are using Supabase, see [SupabaseConnector.kt](https://github.com/powersync-ja/powersync-kotlin/blob/main/connectors/supabase/src/commonMain/kotlin/com/powersync/connector/supabase/SupabaseConnector.kt) for a complete implementation.

```kotlin
/**
 * This function is called whenever there is data to upload, whether the device is online or offline.
 * If this call throws an error, it is retried periodically.
 */
override suspend fun uploadData(database: PowerSyncDatabase) {
    val transaction = database.getNextCrudTransaction() ?: return

    var lastEntry: CrudEntry? = null
    try {
        for (entry in transaction.crud) {
            lastEntry = entry

            val table = supabaseClient.from(entry.table)
            when (entry.op) {
                UpdateType.PUT -> {
                    val data = entry.opData?.toMutableMap() ?: mutableMapOf()
                    data["id"] = entry.id
                    table.upsert(data)
                }

                UpdateType.PATCH -> {
                    table.update(entry.opData!!) {
                        filter {
                            eq("id", entry.id)
                        }
                    }
                }

                UpdateType.DELETE -> {
                    table.delete {
                        filter {
                            eq("id", entry.id)
                        }
                    }
                }
            }
        }

        transaction.complete(null)
    } catch (e: Exception) {
        println("Data upload error - retrying last entry: ${lastEntry!!}, $e")
        throw e
    }
}
```

## Accessing PowerSync connection status information

```kotlin
// Initialize the DB
val db = remember { PowerSyncDatabase(factory, schema) }
// Get the status as a flow
val status = db.currentStatus.asFlow().collectAsState(initial = null)
// Use the emitted values from the flow e.g. to check if connected
val isConnected = status.value?.connected
```

## Wait for the initial sync to complete

Use the `hasSynced` property and register a listener to indicate to the user whether the initial sync is in progress.
```kotlin
val db = remember { PowerSyncDatabase(factory, schema) }
val status = db.currentStatus.asFlow().collectAsState(initial = null)
val hasSynced by remember { derivedStateOf { status.value?.hasSynced } }

when {
    hasSynced == null || hasSynced == false -> {
        Box(
            modifier = Modifier.fillMaxSize().background(MaterialTheme.colors.background),
            contentAlignment = Alignment.Center
        ) {
            Text(
                text = "Busy with initial sync...",
                style = MaterialTheme.typography.h6
            )
        }
    }

    else -> {
        // ... show rest of UI
    }
}
```

For async use cases, use the `waitForFirstSync` method, which is a suspending function that returns once the first full sync has completed.

## Report sync download progress

You can show users a progress bar while data downloads using the `syncStatus.downloadProgress` property. This is especially useful for long-running initial syncs. `downloadProgress.downloadedFraction` gives a value from 0.0 to 1.0 representing the total sync progress.

Example (Compose):

```kotlin
import androidx.compose.foundation.background
import androidx.compose.foundation.layout.Arrangement
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.foundation.layout.padding
import androidx.compose.material.LinearProgressIndicator
import androidx.compose.material.MaterialTheme
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp
import com.powersync.PowerSyncDatabase
import com.powersync.bucket.BucketPriority
import com.powersync.compose.composeState

/**
 * Shows a progress bar while a sync is active.
 *
 * The [priority] parameter can be set to, instead of showing progress until the end of the entire
 * sync, only show progress until data in the [BucketPriority] is synced.
 */
@Composable
fun SyncProgressBar(
    db: PowerSyncDatabase,
    priority: BucketPriority? = null,
) {
    val state by db.currentStatus.composeState()
    val progress = state.downloadProgress?.let {
        if (priority == null) {
            it
        } else {
            it.untilPriority(priority)
        }
    }

    if (progress == null) {
        return
    }

    Column(
        modifier = Modifier.fillMaxSize().background(MaterialTheme.colors.background),
        horizontalAlignment = Alignment.CenterHorizontally,
        verticalArrangement = Arrangement.Center,
    ) {
        LinearProgressIndicator(
            modifier = Modifier.fillMaxWidth().padding(8.dp),
            progress = progress.fraction,
        )

        if (progress.downloadedOperations == progress.totalOperations) {
            Text("Applying server-side changes...")
        } else {
            Text("Downloaded ${progress.downloadedOperations} out of ${progress.totalOperations}.")
        }
    }
}
```

Also see:

* [SyncDownloadProgress API](https://powersync-ja.github.io/powersync-kotlin/core/com.powersync.sync/-sync-download-progress/index.html)
* [Demo component](https://github.com/powersync-ja/powersync-kotlin/blob/main/demos/supabase-todolist/shared/src/commonMain/kotlin/com/powersync/demos/components/GuardBySync.kt)

# Node.js client (alpha)
Source: https://docs.powersync.com/client-sdk-references/node

SDK reference for using PowerSync in Node.js clients.

This page describes the PowerSync *client* SDK for Node.js. If you're interested in using PowerSync for your Node.js backend, no special package is required. Instead, follow our guides on [app backend setup](/installation/app-backend-setup).
This SDK is distributed via NPM [\[External link\].](https://www.npmjs.com/package/@powersync/node)

Refer to packages/node in the powersync-js repo on GitHub.

Full API reference for the PowerSync SDK [\[External link\].](https://powersync-ja.github.io/powersync-js/node-sdk)

Gallery of example projects/demo apps built with Node.js and PowerSync.

This SDK is currently in an [**alpha** release](/resources/feature-status). It is not suitable for production use as breaking changes may still occur.

### SDK Features

* **Real-time streaming of database changes**: Changes made by one user are instantly streamed to all other users with access to that data. This keeps clients automatically in sync without manual polling or refresh logic.
* **Direct access to a local SQLite database**: Data is stored locally, so apps can read and write instantly without network calls. This enables offline support and faster user interactions.
* **Asynchronous background execution**: The SDK performs database operations in the background to avoid blocking the application’s main thread. This means that apps stay responsive, even during heavy data activity.
* **Query subscriptions for live updates**: The SDK supports query subscriptions that automatically push real-time updates to client applications as data changes, keeping your UI reactive and up to date.
* **Automatic schema management**: PowerSync syncs schemaless data and applies a client-defined schema using SQLite views. This architecture means that PowerSync SDKs can handle schema changes gracefully without requiring explicit migrations on the client-side.

## Quickstart

Add the [PowerSync Node NPM package](https://www.npmjs.com/package/@powersync/node) to your project:

```bash
npm install @powersync/node
```

```bash
yarn add @powersync/node
```

```bash
pnpm install @powersync/node
```

**Required peer dependencies**

This SDK requires [`@powersync/better-sqlite3`](https://www.npmjs.com/package/@powersync/better-sqlite3) as a peer dependency:

```bash
npm install @powersync/better-sqlite3
```

```bash
yarn add @powersync/better-sqlite3
```

```bash
pnpm install @powersync/better-sqlite3
```

**Common installation issues**

The `@powersync/better-sqlite3` package requires native compilation, which depends on certain system tools. This compilation process is handled by `node-gyp` and may fail if required dependencies are missing or misconfigured. Refer to the [PowerSync Node package README](https://www.npmjs.com/package/@powersync/node) for more details.

Next, make sure that you have:

* Signed up for a PowerSync Cloud account ([here](https://accounts.journeyapps.com/portal/powersync-signup?s=docs)) or [self-host PowerSync](/self-hosting/getting-started).
* [Configured your backend database](/installation/database-setup) and connected it to your PowerSync instance.

### 1. Define the schema

The first step is defining the schema for the local SQLite database. This schema represents a "view" of the downloaded data. No migrations are required — the schema is applied directly when the local PowerSync database is constructed (as we'll show in the next step).

You can use [this example](https://github.com/powersync-ja/powersync-js/blob/e5a57a539150f4bc174e109d3898b6e533de272f/demos/example-node/src/powersync.ts#L47-L77) as a reference when defining your schema.

**Generate schema automatically**

In the [dashboard](/usage/tools/powersync-dashboard), the schema can be generated based off your sync rules by right-clicking on an instance and selecting **Generate client-side schema**.
Select JavaScript and replace the suggested import with `@powersync/node`.

Similar functionality exists in the [CLI](/usage/tools/cli).

### 2. Instantiate the PowerSync Database

Next, you need to instantiate the PowerSync database — this is the core managed database. Its primary functions are to record all changes in the local database, whether online or offline. In addition, it automatically uploads changes to your app backend when connected.

**Example**:

```js
import { PowerSyncDatabase } from '@powersync/node';
import { Connector } from './Connector';
import { AppSchema } from './Schema';

export const db = new PowerSyncDatabase({
  // The schema you defined in the previous step
  schema: AppSchema,
  database: {
    // Filename for the SQLite database — it's important to only instantiate one instance per file.
    dbFilename: 'powersync.db',
    // Optional. Directory where the database file is located.
    // dbLocation: 'path/to/directory'
  },
});
```

### 3. Integrate with your Backend

The PowerSync backend connector provides the connection between your application backend and the PowerSync client-side managed SQLite database.

It is used to:

1. Retrieve an auth token to connect to the PowerSync instance.
2. Apply local changes on your backend application server (and from there, to your backend database)

Accordingly, the connector must implement two methods:

1. [PowerSyncBackendConnector.fetchCredentials](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L16) - This is called every couple of minutes and is used to obtain credentials for your app backend API. -> See [Authentication Setup](/installation/authentication-setup) for instructions on how the credentials should be generated.
2. [PowerSyncBackendConnector.uploadData](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L24) - Use this to upload client-side changes to your app backend. -> See [Writing Client Changes](/installation/app-backend-setup/writing-client-changes) for considerations on the app backend implementation.

**Example**:

```js
import { UpdateType } from '@powersync/node';

export class Connector implements PowerSyncBackendConnector {
  constructor() {
    // Setup a connection to your server for uploads
    this.serverConnectionClient = TODO;
  }

  async fetchCredentials() {
    // Implement fetchCredentials to obtain a JWT from your authentication service.
    // See https://docs.powersync.com/installation/authentication-setup
    // If you're using Supabase or Firebase, you can re-use the JWT from those clients, see
    // - https://docs.powersync.com/installation/authentication-setup/supabase-auth
    // - https://docs.powersync.com/installation/authentication-setup/firebase-auth
    return {
      endpoint: '[Your PowerSync instance URL or self-hosted endpoint]',
      // Use a development token (see Authentication Setup https://docs.powersync.com/installation/authentication-setup/development-tokens) to get up and running quickly
      token: 'An authentication token'
    };
  }

  async uploadData(database) {
    // Implement uploadData to send local changes to your backend service.
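    // For example (sketch): process the local write queue with getNextCrudTransaction().
    // `yourBackendApi` is a placeholder for your own API client.
    //
    //   const transaction = await database.getNextCrudTransaction();
    //   if (!transaction) return;
    //   for (const op of transaction.crud) {
    //     // op.op is the operation type (PUT/PATCH/DELETE); op.table, op.id and op.opData describe the change
    //     await yourBackendApi.applyChange(op.table, op.id, op.op, op.opData);
    //   }
    //   await transaction.complete();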
    // You can omit this method if you only want to sync data from the server to the client
    // See example implementation here: https://docs.powersync.com/client-sdk-references/javascript-web#3-integrate-with-your-backend
  }
}
```

With your database instantiated and your connector ready, call `connect` to start the synchronization process:

```js
await db.connect(new Connector());
await db.waitForFirstSync(); // Optional, to wait for a complete snapshot of data to be available
```

## Usage

After connecting the client database, it is ready to be used. The API to run queries and updates is identical to our [web SDK](/client-sdk-references/javascript-web#using-powersync%3A-crud-functions):

```js
// Use db.get() to fetch a single row:
console.log(await db.get('SELECT powersync_rs_version();'));

// Or db.getAll() to fetch all:
console.log(await db.getAll('SELECT * FROM lists;'));

// Use db.watch() to watch queries for changes:
const watchLists = async () => {
  for await (const rows of db.watch('SELECT * FROM lists;')) {
    console.log('Has todo lists', rows.rows!._array);
  }
};
watchLists();

// And db.execute for inserts, updates and deletes:
await db.execute(
  "INSERT INTO lists (id, created_at, name, owner_id) VALUES (uuid(), datetime('now'), ?, uuid());",
  ['My new list']
);
```

PowerSync runs queries asynchronously on a background pool of workers and automatically configures WAL to allow a writer and multiple readers to operate in parallel.

## Configure Logging

```js
import { createBaseLogger, LogLevel } from '@powersync/node';

const logger = createBaseLogger();

// Configure the logger to use the default console output
logger.useDefaults();

// Set the minimum log level to DEBUG to see all log messages
// Available levels: DEBUG, INFO, WARN, ERROR, TRACE, OFF
logger.setLevel(LogLevel.DEBUG);
```

Enable verbose output in the developer tools for detailed logs.

# JavaScript ORM Support
Source: https://docs.powersync.com/client-sdk-references/node/javascript-orm-support

# React Native & Expo
Source: https://docs.powersync.com/client-sdk-references/react-native-and-expo

Full SDK reference for using PowerSync in React Native clients

This SDK is distributed via NPM [\[External link\].](https://www.npmjs.com/package/@powersync/react-native)

Refer to packages/react-native in the powersync-js repo on GitHub.

Full API reference for the PowerSync SDK [\[External link\].](https://powersync-ja.github.io/powersync-js/react-native-sdk)

Gallery of example projects/demo apps built with React Native and PowerSync.

### SDK Features

* **Real-time streaming of database changes**: Changes made by one user are instantly streamed to all other users with access to that data. This keeps clients automatically in sync without manual polling or refresh logic.
* **Direct access to a local SQLite database**: Data is stored locally, so apps can read and write instantly without network calls. This enables offline support and faster user interactions.
* **Asynchronous background execution**: The SDK performs database operations in the background to avoid blocking the application’s main thread. This means that apps stay responsive, even during heavy data activity.
* **Query subscriptions for live updates**: The SDK supports query subscriptions that automatically push real-time updates to client applications as data changes, keeping your UI reactive and up to date.
* **Automatic schema management**: PowerSync syncs schemaless data and applies a client-defined schema using SQLite views.
This architecture means that PowerSync SDKs can handle schema changes gracefully without requiring explicit migrations on the client-side. ## Installation **PowerSync is not compatible with Expo Go.** PowerSync uses a native plugin and is therefore only compatible with Expo Dev Builds. Add the [PowerSync React Native NPM package](https://www.npmjs.com/package/@powersync/react-native) to your project: ```bash npx expo install @powersync/react-native ``` ```bash yarn expo add @powersync/react-native ``` ``` pnpm expo install @powersync/react-native ``` **Required peer dependencies** This SDK requires [@journeyapps/react-native-quick-sqlite](https://www.npmjs.com/package/@journeyapps/react-native-quick-sqlite) as a peer dependency. Install it as follows: ```bash npx expo install @journeyapps/react-native-quick-sqlite ``` ```bash yarn expo add @journeyapps/react-native-quick-sqlite ``` ``` pnpm expo install @journeyapps/react-native-quick-sqlite ``` Alternatively, you can install OP-SQLite with the [PowerSync OP-SQLite package](https://github.com/powersync-ja/powersync-js/tree/main/packages/powersync-op-sqlite) which offers [built-in encryption support via SQLCipher](/usage/use-case-examples/data-encryption) and a smoother transition to React Native's New Architecture. **Polyfills and additional notes:** * For async iterator support with watched queries, additional polyfills are required. See the [Babel plugins section](https://www.npmjs.com/package/@powersync/react-native#babel-plugins-watched-queries) in the README. * By default, this SDK connects to a PowerSync instance via WebSocket (from `@powersync/react-native@1.11.0`) or HTTP streaming (before `@powersync/react-native@1.11.0`). See [Developer Notes](/client-sdk-references/react-native-and-expo#developer-notes) for more details on connection methods and platform-specific requirements. * When using the OP-SQLite package, we recommend adding this [metro config](https://github.com/powersync-ja/powersync-js/tree/main/packages/react-native#metro-config-optional) to avoid build issues. ## Getting Started Before implementing the PowerSync SDK in your project, make sure you have completed these steps: * Signed up for a PowerSync Cloud account ([here](https://accounts.journeyapps.com/portal/powersync-signup?s=docs)) or [self-host PowerSync](/self-hosting/getting-started). * [Configured your backend database](/installation/database-setup) and connected it to your PowerSync instance. * [Installed](/client-sdk-references/react-native-and-expo#installation) the PowerSync React Native SDK. ### 1. Define the Schema The first step is defining the schema for the local SQLite database. This schema represents a "view" of the downloaded data. No migrations are required — the schema is applied directly when the PowerSync database is constructed (as we'll show in the next step). **Generate schema automatically** In the [dashboard](/usage/tools/powersync-dashboard), the schema can be generated based off your sync rules by right-clicking on an instance and selecting **Generate client-side schema**. Similar functionality exists in the [CLI](/usage/tools/cli). The types available are `text`, `integer` and `real`. These should map directly to the values produced by the [Sync Rules](/usage/sync-rules). If a value doesn't match, it is cast automatically. For details on how Postgres types are mapped to the types below, see the section on [Types](/usage/sync-rules/types) in the *Sync Rules* documentation. 
**Example**:

**Note**: No need to declare a primary key `id` column, as PowerSync will automatically create this.

```typescript powersync/AppSchema.ts
import { column, Schema, Table } from '@powersync/react-native';

const lists = new Table({
  created_at: column.text,
  name: column.text,
  owner_id: column.text
});

const todos = new Table(
  {
    list_id: column.text,
    created_at: column.text,
    completed_at: column.text,
    description: column.text,
    created_by: column.text,
    completed_by: column.text,
    completed: column.integer
  },
  { indexes: { list: ['list_id'] } }
);

export const AppSchema = new Schema({
  todos,
  lists
});

// For types
export type Database = (typeof AppSchema)['types'];
export type TodoRecord = Database['todos'];
// OR:
// export type Todo = RowType<typeof todos>;

export type ListRecord = Database['lists'];
```

### 2. Instantiate the PowerSync Database

Next, you need to instantiate the PowerSync database — this is the core managed database. Its primary functions are to record all changes in the local database, whether online or offline. In addition, it automatically uploads changes to your app backend when connected.

**Example**:

For getting started and testing PowerSync, use the [@journeyapps/react-native-quick-sqlite](https://github.com/powersync-ja/react-native-quick-sqlite) package. By default, this SDK requires `@journeyapps/react-native-quick-sqlite` as a peer dependency.

```typescript powersync/system.ts
import { PowerSyncDatabase } from '@powersync/react-native';
import { AppSchema } from './Schema';

export const powersync = new PowerSyncDatabase({
  // The schema you defined in the previous step
  schema: AppSchema,
  // For other options see,
  // https://powersync-ja.github.io/powersync-js/web-sdk/globals#powersyncopenfactoryoptions
  database: {
    // Filename for the SQLite database — it's important to only instantiate one instance per file.
    // For other database options see,
    // https://powersync-ja.github.io/powersync-js/web-sdk/globals#sqlopenoptions
    dbFilename: 'powersync.db'
  }
});
```

If you want to include encryption with SQLCipher, use the [@powersync/op-sqlite](https://www.npmjs.com/package/@powersync/op-sqlite) package. If you've already installed `@journeyapps/react-native-quick-sqlite`, you will have to uninstall it and then install both `@powersync/op-sqlite` and its peer dependency `@op-engineering/op-sqlite` to use this.

```typescript powersync/system.ts
import { PowerSyncDatabase } from '@powersync/react-native';
import { OPSqliteOpenFactory } from '@powersync/op-sqlite'; // Add this import
import { AppSchema } from './Schema';

// Create the factory
const opSqlite = new OPSqliteOpenFactory({
  dbFilename: 'powersync.db'
});

export const powersync = new PowerSyncDatabase({
  // For other options see,
  schema: AppSchema,
  // Override the default database
  database: opSqlite
});
```

**SDK versions lower than 1.8.0**

In SDK versions lower than 1.8.0, you will need to use the deprecated [RNQSPowerSyncDatabaseOpenFactory](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/RNQSPowerSyncDatabaseOpenFactory) syntax to instantiate the database.

Once you've instantiated your PowerSync database, you will need to call the [connect()](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/AbstractPowerSyncDatabase#connect) method to activate it.
```typescript powersync/system.ts
import { Connector } from './Connector';

export const setupPowerSync = async () => {
  // Uses the backend connector that will be created in the next section
  const connector = new Connector();
  powersync.connect(connector);
};
```

### 3. Integrate with your Backend

The PowerSync backend connector provides the connection between your application backend and the PowerSync client-side managed SQLite database.

It is used to:

1. Retrieve an auth token to connect to the PowerSync instance.
2. Apply local changes on your backend application server (and from there, to Postgres)

Accordingly, the connector must implement two methods:

1. [PowerSyncBackendConnector.fetchCredentials](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L16) - This is called every couple of minutes and is used to obtain credentials for your app backend API. -> See [Authentication Setup](/installation/authentication-setup) for instructions on how the credentials should be generated.
2. [PowerSyncBackendConnector.uploadData](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L24) - Use this to upload client-side changes to your app backend. -> See [Writing Client Changes](/installation/app-backend-setup/writing-client-changes) for considerations on the app backend implementation.

**Example**:

```typescript powersync/Connector.ts
import { PowerSyncBackendConnector, AbstractPowerSyncDatabase, UpdateType } from "@powersync/react-native"

export class Connector implements PowerSyncBackendConnector {
  /**
   * Implement fetchCredentials to obtain a JWT from your authentication service.
   * See https://docs.powersync.com/installation/authentication-setup
   * If you're using Supabase or Firebase, you can re-use the JWT from those clients, see:
   * https://docs.powersync.com/installation/authentication-setup/supabase-auth
   * https://docs.powersync.com/installation/authentication-setup/firebase-auth
   */
  async fetchCredentials() {
    return {
      // The PowerSync instance URL or self-hosted endpoint
      endpoint: 'https://xxxxxx.powersync.journeyapps.com',
      /**
       * To get started quickly, use a development token, see:
       * Authentication Setup: https://docs.powersync.com/installation/authentication-setup/development-tokens
       */
      token: 'An authentication token'
    };
  }

  /**
   * Implement uploadData to send local changes to your backend service.
   * You can omit this method if you only want to sync data from the server to the client
   * See example implementation here: https://docs.powersync.com/client-sdk-references/react-native-and-expo#3-integrate-with-your-backend
   */
  async uploadData(database: AbstractPowerSyncDatabase) {
    /**
     * For batched crud transactions, use database.getCrudBatch(n);
     * https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/SqliteBucketStorage#getcrudbatch
     */
    const transaction = await database.getNextCrudTransaction();

    if (!transaction) {
      return;
    }

    for (const op of transaction.crud) {
      // The data that needs to be changed in the remote db
      const record = { ...op.opData, id: op.id };
      switch (op.op) {
        case UpdateType.PUT:
          // TODO: Instruct your backend API to CREATE a record
          break;
        case UpdateType.PATCH:
          // TODO: Instruct your backend API to PATCH a record
          break;
        case UpdateType.DELETE:
          // TODO: Instruct your backend API to DELETE a record
          break;
      }
    }

    // Completes the transaction and moves onto the next one
    await transaction.complete();
  }
}
```

## Using PowerSync: CRUD functions

Once the PowerSync instance is configured you can start using the SQLite DB functions.

The most commonly used CRUD functions to interact with your SQLite data are:

* [PowerSyncDatabase.get](/client-sdk-references/react-native-and-expo#fetching-a-single-item) - get (SELECT) a single row from a table.
* [PowerSyncDatabase.getAll](/client-sdk-references/react-native-and-expo#querying-items-powersync-getall) - get (SELECT) a set of rows from a table.
* [PowerSyncDatabase.watch](/client-sdk-references/react-native-and-expo#watching-queries-powersync-watch) - execute a read query every time source tables are modified.
* [PowerSyncDatabase.execute](/client-sdk-references/react-native-and-expo#mutations-powersync-execute) - execute a write (INSERT/UPDATE/DELETE) query.

### Fetching a Single Item

The [get](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#get) method executes a read-only (SELECT) query and returns a single result. It throws an exception if no result is found. Use [getOptional](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#getoptional) to return a single optional result (returns `null` if no result is found).

```js TodoItemWidget.jsx
import React from 'react';
import { Text } from 'react-native';
import { powersync } from "../powersync/system";

export const TodoItemWidget = ({id}) => {
    const [todoItem, setTodoItem] = React.useState([]);
    const [error, setError] = React.useState([]);

    React.useEffect(() => {
      // .get returns the first item of the result. Throws an exception if no result is found.
      powersync.get('SELECT * from todos WHERE id = ?', [id])
        .then(setTodoItem)
        .catch(ex => setError(ex.message))
    }, []);

    return <Text>{error || todoItem.description}</Text>
}
```

### Querying Items (PowerSync.getAll)

The [getAll](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#getall) method returns a set of rows from a table.
```js ListsWidget.jsx
import React from 'react';
import { FlatList, Text } from 'react-native';
import { powersync } from "../powersync/system";

export const ListsWidget = () => {
  const [lists, setLists] = React.useState([]);

  React.useEffect(() => {
    powersync.getAll('SELECT * from lists').then(setLists)
  }, []);

  return (
    <FlatList
      data={lists.map(list => ({key: list.id, ...list}))}
      renderItem={({item}) => <Text>{item.name}</Text>}
    />)
}
```

### Watching Queries (PowerSync.watch)

The [watch](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#watch) method executes a read query whenever a change to a dependent table is made. It can be used with an `AsyncGenerator`, or with a callback.

```js ListsWidget.jsx
import React from 'react';
import { FlatList, Text } from 'react-native';
import { powersync } from "../powersync/system";

export const ListsWidget = () => {
  const [lists, setLists] = React.useState([]);

  React.useEffect(() => {
    const abortController = new AbortController();

    // Option 1: Use with AsyncGenerator
    (async () => {
      for await(const update of powersync.watch('SELECT * from lists', [], {signal: abortController.signal})) {
        setLists(update)
      }
    })();

    // Option 2: Use a callback (available since version 1.3.3 of the SDK)
    powersync.watch('SELECT * from lists', [], { onResult: (result) => setLists(result) }, { signal: abortController.signal });

    return () => {
      abortController.abort();
    }
  }, []);

  return (
    <FlatList
      data={lists.map(list => ({ key: list.id, ...list }))}
      renderItem={({ item }) => <Text>{item.name}</Text>}
    />)
}
```

### Mutations (PowerSync.execute)

The [execute](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#execute) method can be used for executing single SQLite write statements.

```js ListsWidget.jsx
import React from 'react';
import { Alert, Button, FlatList, Text, View } from 'react-native';
import { powersync } from "../powersync/system";

export const ListsWidget = () => {
  // Populate lists with one of methods listed above
  const [lists, setLists] = React.useState([]);

  return (
    <View>
      <FlatList
        data={lists.map(list => ({key: list.id, ...list}))}
        renderItem={({item}) => (
          <View>
            <Text>{item.name}</Text>
            <Button
              title="Delete"
              onPress={() => {
                powersync
                  .execute('DELETE FROM lists WHERE id = ?', [item.id])
                  .catch((ex) => Alert.alert('Error', ex.message));
              }}
            />
          </View>
        )}
      />
    </View>
  );
}
```

## Send changes in local data to your backend service

Override [uploadData](https://github.com/powersync-ja/powersync-js/blob/ed5bb49b5a1dc579050304fab847feb8d09b45c7/packages/common/src/client/connection/PowerSyncBackendConnector.ts#L24) to send local updates to your backend service.

```js
// Implement the uploadData method in your backend connector
async function uploadData(database) {
  const batch = await database.getCrudBatch();
  if (batch === null) return;

  for (const op of batch.crud) {
    switch (op.op) {
      case 'put':
        // Send the data to your backend service
        // replace `_myApi` with your own API client or service
        await _myApi.put(op.table, op.opData);
        break;
      default:
        // TODO: implement the other operations (patch, delete)
        break;
    }
  }

  await batch.complete();
}
```

## Accessing PowerSync connection status information

Use [PowerSyncDatabase.connected](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#connected) and register an event listener with [PowerSyncDatabase.registerListener](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#registerlistener) to listen for status changes to your PowerSync instance.
```js
// Example of using connected status to show online or offline

// Tap into connected
const [connected, setConnected] = React.useState(powersync.connected);

React.useEffect(() => {
  // Register listener for changes made to the powersync status
  return powersync.registerListener({
    statusChanged: (status) => {
      setConnected(status.connected);
    }
  });
}, [powersync]);

// Icon to show connected or not connected to powersync
// as well as the last synced time
// (the Icon component below comes from your UI library of choice)
<Icon
  name={connected ? 'wifi' : 'wifi-off'}
  onPress={() => {
    Alert.alert(
      'Status',
      `${connected ? 'Connected' : 'Disconnected'}. \nLast Synced at ${powersync.currentStatus?.lastSyncedAt?.toISOString() ?? '-'
      }\nVersion: ${powersync.sdkVersion}`
    );
  }}
/>;
```

## Wait for the initial sync to complete

Use the [hasSynced](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/SyncStatus#hassynced) property (available since version 1.4.1 of the SDK) and register an event listener with [PowerSyncDatabase.registerListener](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/PowerSyncDatabase#registerlistener) to indicate to the user whether the initial sync is in progress.

```js
// Example of using hasSynced to show whether the first sync has completed

// Tap into hasSynced
const [hasSynced, setHasSynced] = React.useState(powerSync.currentStatus?.hasSynced || false);

React.useEffect(() => {
  // Register listener for changes made to the powersync status
  return powerSync.registerListener({
    statusChanged: (status) => {
      setHasSynced(!!status.hasSynced);
    }
  });
}, [powerSync]);

return <Text>{hasSynced ? 'Initial sync completed!' : 'Busy with initial sync...'}</Text>;
```

For async use cases, see [PowerSyncDatabase.waitForFirstSync](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/AbstractPowerSyncDatabase#waitforfirstsync), which returns a promise that resolves once the first full sync has completed (it queries the internal SQL [ps\_buckets](/architecture/client-architecture) table to determine if data has been synced).

## Report sync download progress

You can show users a progress bar when data downloads using the `downloadProgress` property from the [SyncStatus](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/SyncStatus) class. This is especially useful for long-running initial syncs. `downloadProgress.downloadedFraction` gives you a value from 0.0 to 1.0 representing the total sync progress.

Example:

```jsx
import { useStatus } from '@powersync/react';
import { FC, ReactNode } from 'react';
import { View } from 'react-native';
import { Text, LinearProgress } from '@rneui/themed';

export const SyncProgressBar: FC<{ priority?: number }> = ({ priority }) => {
  const status = useStatus();
  const progressUntilNextSync = status.downloadProgress;
  const progress = priority == null ? progressUntilNextSync : progressUntilNextSync?.untilPriority(priority);

  if (progress == null) {
    return <></>;
  }

  return (
    <View>
      <LinearProgress value={progress.downloadedFraction} variant="determinate" />
      {progress.downloadedOperations == progress.totalOperations ? (
        <Text>Applying server-side changes</Text>
      ) : (
        <Text>
          Downloaded {progress.downloadedOperations} out of {progress.totalOperations}.
        </Text>
      )}
    </View>
  );
};
```

Also see:

* [SyncStatus API](https://powersync-ja.github.io/powersync-js/react-native-sdk/classes/SyncStatus)
* [Demo component](https://github.com/powersync-ja/powersync-js/blob/main/demos/react-native-supabase-todolist/library/widgets/GuardBySync.tsx)

# Swift
Source: https://docs.powersync.com/client-sdk-references/swift

Refer to the powersync-swift repo on GitHub.
Full API reference for the PowerSync SDK [\[External link\].](https://powersync-ja.github.io/powersync-swift/documentation/powersync) Gallery of example projects/demo apps built with PowerSync and Swift. ## Kotlin Multiplatform -> Swift SDK The PowerSync Swift SDK makes use of the [PowerSync Kotlin Multiplatform SDK](https://github.com/powersync-ja/powersync-kotlin) with the API tool [SKIE](https://skie.touchlab.co/) under the hood to help generate and publish a Swift package. The Swift SDK abstracts the Kotlin SDK behind pure Swift Protocols, enabling us to fully leverage Swift's native features and libraries. Our ultimate goal is to deliver a Swift-centric experience for developers. ### SDK Features * **Real-time streaming of database changes**: Changes made by one user are instantly streamed to all other users with access to that data. This keeps clients automatically in sync without manual polling or refresh logic. * **Direct access to a local SQLite database**: Data is stored locally, so apps can read and write instantly without network calls. This enables offline support and faster user interactions. * **Asynchronous background execution**: The SDK performs database operations in the background to avoid blocking the application’s main thread. This means that apps stay responsive, even during heavy data activity. * **Query subscriptions for live updates**: The SDK supports query subscriptions that automatically push real-time updates to client applications as data changes, keeping your UI reactive and up to date. * **Automatic schema management**: PowerSync syncs schemaless data and applies a client-defined schema using SQLite views. This architecture means that PowerSync SDKs can handle schema changes gracefully without requiring explicit migrations on the client-side. ## Installation You can add the PowerSync Swift package to your project using either `Package.swift` or Xcode: ```swift let package = Package( //... dependencies: [ //... .package( url: "https://github.com/powersync-ja/powersync-swift", exact: "" ), ], targets: [ .target( name: "YourTargetName", dependencies: [ .product( name: "PowerSync", package: "powersync-swift" ) ] ) ] ) ``` 1. Follow [this guide](https://developer.apple.com/documentation/xcode/adding-package-dependencies-to-your-app#Add-a-package-dependency) to add a package to your project. 2. Use `https://github.com/powersync-ja/powersync-swift.git` as the URL 3. Include the exact version (e.g., `1.0.x`) ## Getting Started Before implementing the PowerSync SDK in your project, make sure you have completed these steps: * Signed up for a PowerSync Cloud account ([here](https://accounts.journeyapps.com/portal/powersync-signup?s=docs)) or [self-host PowerSync](/self-hosting/getting-started). * [Configured your backend database](/installation/database-setup) and connected it to your PowerSync instance. * [Installed](/client-sdk-references/swift#installation) the PowerSync SDK. ### 1. Define the Schema The first step is defining the schema for the local SQLite database, which is provided to the `PowerSyncDatabase` constructor via the `schema` parameter. This schema represents a "view" of the downloaded data. No migrations are required — the schema is applied directly when the PowerSync database is constructed. The types available are `text`, `integer` and `real`. These should map directly to the values produced by the [Sync Rules](/usage/sync-rules). If a value doesn't match, it is cast automatically. 
**Example**:

```swift
import Foundation
import PowerSync

let LISTS_TABLE = "lists"
let TODOS_TABLE = "todos"

let lists = Table(
    name: LISTS_TABLE,
    columns: [
        // ID column is automatically included
        .text("name"),
        .text("created_at"),
        .text("owner_id")
    ]
)

let todos = Table(
    name: TODOS_TABLE,
    // ID column is automatically included
    columns: [
        .text("list_id"),
        .text("photo_id"),
        .text("description"),
        // 0 or 1 to represent false or true
        .integer("completed"),
        .text("created_at"),
        .text("completed_at"),
        .text("created_by"),
        .text("completed_by")
    ],
    indexes: [
        Index(
            name: "list_id",
            columns: [
                IndexedColumn.ascending("list_id")
            ]
        )
    ]
)

let AppSchema = Schema(lists, todos)
```

**Note**: No need to declare a primary key `id` column, as PowerSync will automatically create this.

### 2. Instantiate the PowerSync Database

Next, you need to instantiate the PowerSync database — this is the core managed database. Its primary function is to record all changes in the local database, whether online or offline. In addition, it automatically uploads changes to your app backend when connected.

**Example**:

```swift
let schema = AppSchema // Comes from the AppSchema defined above
let db = PowerSyncDatabase(
    schema: schema,
    dbFilename: "powersync-swift.sqlite"
)
```

### 3. Integrate with your Backend

Create a connector to integrate with your backend. The PowerSync backend connector provides the connection between your application backend and the PowerSync managed database.

It is used to:

1. Retrieve an auth token to connect to the PowerSync instance.
2. Apply local changes on your backend application server (and from there, to your backend database)

Accordingly, the connector must implement two methods:

1. `PowerSyncBackendConnector.fetchCredentials` - This is called every couple of minutes and is used to obtain credentials for your app's backend API. -> See [Authentication Setup](/installation/authentication-setup) for instructions on how the credentials should be generated.
2. `PowerSyncBackendConnector.uploadData` - Use this to upload client-side changes to your app backend. -> See [Writing Client Changes](/installation/app-backend-setup/writing-client-changes) for considerations on the app backend implementation.

**Example**:

```swift
import PowerSync

@Observable
class MyConnector: PowerSyncBackendConnector {
    override func fetchCredentials() async throws -> PowerSyncCredentials? {
        // Implement fetchCredentials to obtain the necessary credentials to connect to your backend
        // See an example implementation in https://github.com/powersync-ja/powersync-swift/blob/main/Demo/PowerSyncExample/PowerSync/SupabaseConnector.swift

        return PowerSyncCredentials(
            endpoint: "Your PowerSync instance URL or self-hosted endpoint",
            // Use a development token (see Authentication Setup https://docs.powersync.com/installation/authentication-setup/development-tokens)
            // to get up and running quickly
            token: "An authentication token"
        )
    }

    override func uploadData(database: PowerSyncDatabaseProtocol) async throws {
        // Implement uploadData to send local changes to your backend service
        // You can omit this method if you only want to sync data from the server to the client
        // See an example implementation under Usage Examples (sub-page)
        // See https://docs.powersync.com/installation/app-backend-setup/writing-client-changes for considerations.
    }
}
```

## Using PowerSync: CRUD functions

Once the PowerSync instance is configured you can start using the SQLite DB functions.
The most commonly used CRUD functions to interact with your SQLite data are: * [PowerSyncDatabase.get](/client-sdk-references/swift#fetching-a-single-item) - get (SELECT) a single row from a table. * [PowerSyncDatabase.getOptional](/client-sdk-references/swift#fetching-a-single-item) - get (SELECT) a single row from a table and return `null` if not found. * [PowerSyncDatabase.getAll](/client-sdk-references/swift#querying-items-powersync-getall) - get (SELECT) a set of rows from a table. * [PowerSyncDatabase.watch](/client-sdk-references/swift#watching-queries-powersync-watch) - execute a read query every time source tables are modified. * [PowerSyncDatabase.execute](/client-sdk-references/swift#mutations-powersync-execute) - execute a write (INSERT/UPDATE/DELETE) query. ### Fetching a Single Item ( PowerSync.get / PowerSync.getOptional) The `get` method executes a read-only (SELECT) query and returns a single result. It throws an exception if no result is found. Use `getOptional` to return a single optional result (returns `null` if no result is found). ```swift // Find a list item by ID func getList(_ id: String) async throws { try await self.db.getAll( sql: "SELECT * FROM \(LISTS_TABLE) WHERE id = ?", parameters: [id], mapper: { cursor in ListContent( id: try cursor.getString(name: "id")!, name: try cursor.getString(name: "name")!, createdAt: try cursor.getString(name: "created_at")!, ownerId: try cursor.getString(name: "owner_id")! ) } ) } ``` ### Querying Items (PowerSync.getAll) The `getAll` method executes a read-only (SELECT) query and returns a set of rows. ```swift // Get all lists func getLists() async throws { try await self.db.getAll( sql: "SELECT * FROM \(LISTS_TABLE)", parameters: [], mapper: { cursor in ListContent( id: try cursor.getString(name: "id")!, name: try cursor.getString(name: "name")!, createdAt: try cursor.getString(name: "created_at")!, ownerId: try cursor.getString(name: "owner_id")! ) } ) } ``` ### Watching Queries (PowerSync.watch) The `watch` method executes a read query whenever a change to a dependent table is made. ```swift // You can watch any SQL query func watchLists(_ callback: @escaping (_ lists: [ListContent]) -> Void ) async { do { for try await lists in try self.db.watch( sql: "SELECT * FROM \(LISTS_TABLE)", parameters: [], mapper: { cursor in try ListContent( id: cursor.getString(name: "id"), name: cursor.getString(name: "name"), createdAt: cursor.getString(name: "created_at"), ownerId: cursor.getString(name: "owner_id") ) } ) { callback(lists) } } catch { print("Error in watch: \(error)") } } ``` ### Mutations (PowerSync.execute) The `execute` method executes a write query (INSERT, UPDATE, DELETE) and returns the results (if any). ```swift func insertTodo(_ todo: NewTodo, _ listId: String) async throws { try await db.execute( sql: "INSERT INTO \(TODOS_TABLE) (id, created_at, created_by, description, list_id, completed) VALUES (uuid(), datetime(), ?, ?, ?, ?)", parameters: [connector.currentUserID, todo.description, listId, todo.isComplete] ) } func updateTodo(_ todo: Todo) async throws { try await db.execute( sql: "UPDATE \(TODOS_TABLE) SET description = ?, completed = ?, completed_at = datetime(), completed_by = ? 
WHERE id = ?", parameters: [todo.description, todo.isComplete, connector.currentUserID, todo.id] ) } func deleteTodo(id: String) async throws { try await db.writeTransaction(callback: { transaction in _ = try transaction.execute( sql: "DELETE FROM \(TODOS_TABLE) WHERE id = ?", parameters: [id] ) }) } ``` ## Configure Logging You can include your own Logger that must conform to the [LoggerProtocol](https://powersync-ja.github.io/powersync-swift/documentation/powersync/loggerprotocol) as shown here. ```swift let logger = DefaultLogger(minSeverity: .debug) let db = PowerSyncDatabase( schema: schema, dbFilename: "powersync-swift.sqlite", logger: logger ) ``` The `DefaultLogger` supports the following severity levels: `.debug`, `.info`, `.warn`, `.error`. ## Additional Usage Examples See [Usage Examples](/client-sdk-references/swift/usage-examples) for further examples of the SDK. ## ORM Support ORM support is not yet available, we are still investigating options. Please [let us know](/resources/contact-us) what your needs around ORMs are. ## Troubleshooting See [Troubleshooting](/resources/troubleshooting) for pointers to debug common issues. # Usage Examples Source: https://docs.powersync.com/client-sdk-references/swift/usage-examples Code snippets and guidelines for common scenarios in Swift ## Using transactions to group changes Read and write transactions present a context where multiple changes can be made then finally committed to the DB or rolled back. This ensures that either all the changes get persisted, or no change is made to the DB (in the case of a rollback or exception). ```swift // Delete a list and its todos in a transaction func deleteList(db: PowerSyncDatabase, listId: String) async throws { try await db.writeTransaction { tx in try await tx.execute(sql: "DELETE FROM lists WHERE id = ?", parameters: [listId]) try await tx.execute(sql: "DELETE FROM todos WHERE list_id = ?", parameters: [listId]) } } ``` Also see [`readTransaction`](https://powersync-ja.github.io/powersync-swift/documentation/powersync/queries/readtransaction\(callback:\)). ## Subscribe to changes in data Use `watch` to watch for changes to the dependent tables of any SQL query. ```swift // Watch for changes to the lists table func watchLists(_ callback: @escaping (_ lists: [ListContent]) -> Void ) async { do { for try await lists in try self.db.watch( sql: "SELECT * FROM \(LISTS_TABLE)", parameters: [], mapper: { cursor in try ListContent( id: cursor.getString(name: "id"), name: cursor.getString(name: "name"), createdAt: cursor.getString(name: "created_at"), ownerId: cursor.getString(name: "owner_id") ) } ) { callback(lists) } } catch { print("Error in watch: \(error)") } } ``` ## Insert, update, and delete data in the local database Use `execute` to run INSERT, UPDATE or DELETE queries. ```swift // Insert a new TODO func insertTodo(_ todo: NewTodo, _ listId: String) async throws { try await db.execute( sql: "INSERT INTO \(TODOS_TABLE) (id, created_at, created_by, description, list_id, completed) VALUES (uuid(), datetime(), ?, ?, ?, ?)", parameters: [connector.currentUserID, todo.description, listId, todo.isComplete] ) } ``` ## Send changes in local data to your backend service Override `uploadData` to send local updates to your backend service. 
```swift class MyConnector: PowerSyncBackendConnector { override func uploadData(database: PowerSyncDatabaseProtocol) async throws { let batch = try await database.getCrudBatch() guard let batch = batch else { return } for entry in batch.crud { switch entry.op { case .put: // Send the data to your backend service // Replace `_myApi` with your own API client or service try await _myApi.put(table: entry.table, data: entry.opData) default: // TODO: implement the other operations (patch, delete) break } } try await batch.complete(writeCheckpoint: nil) } } ``` ## Accessing PowerSync connection status information Use [`currentStatus`](https://powersync-ja.github.io/powersync-swift/documentation/powersync/powersyncdatabaseprotocol/currentstatus) and observe changes to listen for status changes to your PowerSync instance. ```swift import Foundation import SwiftUI import PowerSync struct PowerSyncConnectionIndicator: View { private let powersync: any PowerSyncDatabaseProtocol @State private var connected: Bool = false init(powersync: any PowerSyncDatabaseProtocol) { self.powersync = powersync } var body: some View { let iconName = connected ? "wifi" : "wifi.slash" let description = connected ? "Online" : "Offline" Image(systemName: iconName) .accessibility(label: Text(description)) .task { self.connected = powersync.currentStatus.connected for await status in powersync.currentStatus.asFlow() { self.connected = status.connected } } } } ``` ## Wait for the initial sync to complete Use the `hasSynced` property and observe status changes to indicate to the user whether the initial sync is in progress. ```swift struct WaitForFirstSync: View { private let powersync: any PowerSyncDatabaseProtocol @State var didSync: Bool = false init(powersync: any PowerSyncDatabaseProtocol) { self.powersync = powersync } var body: some View { if !didSync { ProgressView().task { do { try await powersync.waitForFirstSync() } catch { // TODO: Handle errors } } } } } ``` For async use cases, use [`waitForFirstSync`](https://powersync-ja.github.io/powersync-swift/documentation/powersync/powersyncdatabaseprotocol/waitforfirstsync\(\)). ## Report sync download progress You can show users a progress bar when data downloads using the `downloadProgress` property from the [`SyncStatusData`](https://powersync-ja.github.io/powersync-swift/documentation/powersync/syncstatusdata/) object. `downloadProgress.downloadedFraction` gives you a value from 0.0 to 1.0 representing the total sync progress. This is especially useful for long-running initial syncs. Example: ```swift struct SyncProgressIndicator: View { private let powersync: any PowerSyncDatabaseProtocol private let priority: BucketPriority? @State private var status: SyncStatusData? = nil init(powersync: any PowerSyncDatabaseProtocol, priority: BucketPriority? 
= nil) { self.powersync = powersync self.priority = priority } var body: some View { VStack { if let totalProgress = status?.downloadProgress { let progress = if let priority = self.priority { totalProgress.untilPriority(priority: priority) } else { totalProgress } ProgressView(value: progress.fraction) if progress.downloadedOperations == progress.totalOperations { Text("Applying server-side changes...") } else { Text("Downloaded \(progress.downloadedOperations) out of \(progress.totalOperations)") } } }.task { status = powersync.currentStatus for await status in powersync.currentStatus.asFlow() { self.status = status } } } } ``` Also see: * [SyncStatusData API](https://powersync-ja.github.io/powersync-swift/documentation/powersync/syncstatusdata/) * [SyncDownloadProgress API](https://powersync-ja.github.io/powersync-swift/documentation/powersync/syncdownloadprogress/) * [Demo component](https://github.com/powersync-ja/powersync-swift/blob/main/Demo/PowerSyncExample/Components/ListView.swift) # App Backend Setup Source: https://docs.powersync.com/installation/app-backend-setup PowerSync generally assumes that you have some kind of "backend application" as part of your overall application architecture - whether it's Supabase, Node.js, Rails, Laravel, Django, ASP.NET, some kind of serverless cloud functions (e.g. Azure Functions, AWS Lambda, Google Cloud Functions, Cloudflare Workers, etc.), or anything else. When you integrate PowerSync into your app project, PowerSync relies on that "backend application" for a few purposes: 1. **Allowing client-side write operations to be uploaded** and [applied](/installation/app-backend-setup/writing-client-changes) to the backend database (Postgres, MongoDB or MySQL). When you write to the client-side SQLite database provided by PowerSync, those writes are also placed into an upload queue. The PowerSync Client SDK manages uploading of those writes to your backend using the `uploadData()` function that you defined in the [Client-Side Setup](/installation/client-side-setup/integrating-with-your-backend) part of the implementation. That `uploadData()` function should call your backend application API to apply the writes to your backend database. We designed PowerSync this way to give you full control over things like data validation and authorization of writes, while PowerSync itself requires minimal permissions. 2. **Authentication integration:** Your backend is responsible for securely generating the [JWTs](/installation/authentication-setup) used by the PowerSync Client SDK to authenticate with the [PowerSync Service](/architecture/powersync-service). ### Processing Writes from Clients The next section, [Writing Client Changes](/installation/app-backend-setup/writing-client-changes), provides guidance on how you can handle write operations in your backend application. ### Authentication General authentication for your app users is outside the scope of PowerSync. A service such as [Auth0](https://auth0.com/) or [Clerk](https://clerk.com/) may be used, or any other authentication system. PowerSync assumes that you have some kind of authentication system already in place that allows you to communicate securely between your client-side app and backend application.
The `fetchCredentials()` function that you defined in the [Client-Side Setup](/installation/client-side-setup/integrating-with-your-backend) can therefore call your backend application API to generate a JWT which can be used by PowerSync Client SDK to authenticate with the [PowerSync Service](/architecture/powersync-service). See [Authentication Setup](/installation/authentication-setup) for details. ### Backend Implementation Examples See our [Example Projects](/resources/demo-apps-example-projects#backend-examples) page for examples of custom backend implementations (e.g. Django, Node.js, Rails, etc.) For Postgres developers, using [Supabase](/integration-guides/supabase-+-powersync) is an easy alternative to a custom backend. Several of our demo apps demonstrate how to use [Supabase](https://supabase.com/) as the Postgres backend. ### Hosted/Managed Option for MongoDB For developers using MongoDB as a source backend database, an alternative option to running your own backend is to use CloudCode, a serverless cloud functions environment provided by us. We have a template that you can use as a turnkey starting point. See our [documentation here](/usage/tools/cloudcode). # Writing Client Changes Source: https://docs.powersync.com/installation/app-backend-setup/writing-client-changes Your backend application needs to expose an API endpoint to apply write operations to your backend database that are received from the PowerSync Client SDK. Your backend application receives the write operations based on how you defined your `uploadData()` function in the `PowerSyncBackendConnector` in your client-side app. See [Integrate with your Backend](/installation/client-side-setup/integrating-with-your-backend) in the [Client-Side Setup](/installation/client-side-setup) section for details. Since you get to define the client-side `uploadData()` function as you wish, you have full control over how to structure your backend application API to accept write operations from the client. For example, you can have: 1. A single API endpoint that accepts a batch of write operations from the client, with minimal client-side processing. 2. Separate API endpoints based on the types of write operations. In your `uploadData()`, you can call the respective endpoints as needed. 3. A combination of the above. You can also use any API style you want — e.g. REST, GraphQL, gRPC, etc. It's important that your API endpoint be blocking/synchronous with underlying writes to the backend database (Postgres, MongoDB or MySQL). In other words, don't place writes into something like a queue for processing later — process them immediately. For more details, see the explainer below. PowerSync uses a server-authoritative architecture with a checkpoint system for conflict resolution and [consistency](/architecture/consistency). The client advances to a new write checkpoint after uploads have been processed, so if the client believes that the server has written changes into your backend database (Postgres, MongoDB or MySQL), but the next checkpoint does not contain your uploaded changes, those changes will be removed from the client. This could manifest as UI glitches for your end-users, where the changes disappear from the device for a few seconds and then re-appear. 
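For illustration, below is a minimal sketch of such a blocking batch endpoint, assuming a Node.js/Express backend that writes to Postgres via the `pg` driver. The route path, payload shape, and table allow-list are illustrative assumptions rather than a prescribed format; the operation types it handles are described in the next section.

```typescript
import express from 'express';
import { Pool } from 'pg';

const app = express();
app.use(express.json());
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Assumed payload shape: { batch: [{ op: 'PUT' | 'PATCH' | 'DELETE', table: string, id: string, data?: Record<string, unknown> }] }
app.post('/api/upload', async (req, res) => {
  const allowedTables = new Set(['lists', 'todos']); // validate table (and column) names against an allow-list in production
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    for (const entry of req.body.batch as Array<{ op: string; table: string; id: string; data?: Record<string, unknown> }>) {
      if (!allowedTables.has(entry.table)) continue;
      const data = entry.data ?? {};
      const columns = Object.keys(data);
      if (entry.op === 'PUT' || entry.op === 'PATCH') {
        // Upsert keeps the endpoint idempotent if the client retries the same batch.
        const placeholders = columns.map((_, i) => `$${i + 2}`).join(', ');
        const updates = columns.map((c, i) => `${c} = $${i + 2}`).join(', ');
        await client.query(
          `INSERT INTO ${entry.table} (id${columns.length ? ', ' + columns.join(', ') : ''})
           VALUES ($1${columns.length ? ', ' + placeholders : ''})
           ON CONFLICT (id) DO UPDATE SET ${updates || 'id = EXCLUDED.id'}`,
          [entry.id, ...columns.map((c) => data[c])]
        );
      } else if (entry.op === 'DELETE') {
        await client.query(`DELETE FROM ${entry.table} WHERE id = $1`, [entry.id]);
      }
    }
    await client.query('COMMIT');
    // Writes are committed before responding, so the client's next checkpoint will include them.
    res.status(200).json({ ok: true });
  } catch (err) {
    await client.query('ROLLBACK');
    // 5xx signals a temporary failure; the PowerSync client will retry the upload later.
    res.status(500).json({ error: 'temporary failure, please retry' });
  } finally {
    client.release();
  }
});

app.listen(3000);
```

Validation failures should still be answered with a `2xx` response (with error details in the body or in a synced error table), per the recommendations below.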
### Write operations recorded on the client The upload queue on the client stores three types of operations: | Operation | Purpose | Contents | SQLite Statement | | --------- | ------------------- | -------------------------------------------------------- | --------------------------------- | | `PUT` | Create new row | Contains the value for each non-null column | Generated by `INSERT` statements. | | `PATCH` | Update existing row | Contains the row `id`, and value of each changed column. | Generated by `UPDATE` statements. | | `DELETE` | Delete existing row | Contains the row `id` | Generated by `DELETE` statements. | ### Recommendations The PowerSync Client SDK does not prescribe any specific request/response format for your backend application API that accepts the write operations. You can implement it as you wish. We do however recommend the following: 1. Use a batch endpoint to handle high volumes of write operations. 2. Use an error response (`5xx`) only when the write operations cannot be applied due to a temporary error (e.g. backend database not available). In this scenario, the PowerSync Client SDK can retry uploading the write operation and it should succeed at a later time. 3. For validation errors or write conflicts, you should avoid returning an error response (`4xx`), since it will block the PowerSync client's upload queue. Instead, it is best to return a `2xx` response, and if needed, propagate the validation or other error message(s) back to the client, for example by: 1. Including the error details in the `2xx` response. 2. Writing the error(s) into a separate table/collection that is synced to the client, so that the client/user can handle the error(s). For details on approaches, see: For details on handling write conflicts, see: ### Example backend implementations See our [Example Projects](/resources/demo-apps-example-projects#backend-examples) page for examples of custom backend implementations (e.g. Django, Node.js, Rails, etc.) that you can use as a guide for your implementation. For Postgres developers, using [Supabase](/integration-guides/supabase-+-powersync) is an easy alternative to a custom backend. Several of our example/demo apps demonstrate how to use [Supabase](https://supabase.com/) as the backend. These examples use the [PostgREST API](https://supabase.com/docs/guides/api) exposed by Supabase to upload write operations. Alternatively, Supabase's [Edge Functions](https://supabase.com/docs/guides/functions) can also be used. # Authentication Setup Source: https://docs.powersync.com/installation/authentication-setup ## Overview PowerSync clients (i.e. apps used by your users that embed the PowerSync Client SDK) authenticate against the server-side [PowerSync Service](/architecture/powersync-service) using [JWTs](https://jwt.io/) (signed tokens) that are generated by your application backend. 
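As a rough sketch of what issuing such a token from your backend can look like, here is an example using Node.js with the `jose` library. The key handling, `kid`, and audience value are assumptions for illustration; the exact requirements are covered under [Custom](/installation/authentication-setup/custom) authentication further below.

```typescript
import { SignJWT, importPKCS8 } from 'jose';

// Assumed: an RSA private key (PKCS#8 PEM) whose public counterpart is published on your JWKS endpoint.
export async function createPowerSyncToken(userId: string): Promise<string> {
  const privateKey = await importPKCS8(process.env.POWERSYNC_PRIVATE_KEY!, 'RS256');
  return new SignJWT({})
    .setProtectedHeader({ alg: 'RS256', kid: 'powersync-1' })      // kid must match the key in your JWKS
    .setSubject(userId)                                            // sub: the user ID referenced in Sync Rules
    .setAudience('https://<instance>.powersync.journeyapps.com')   // aud: your PowerSync instance URL
    .setIssuedAt()
    .setExpirationTime('5m')                                       // keep expiry short; 60 minutes is the maximum
    .sign(privateKey);
}
```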
Before using PowerSync, an application's existing architecture may look like this: The [PowerSync Service](/architecture/powersync-service) uses database native credentials and authenticates directly against the [backend database](/installation/database-setup) using the configured credentials: When the PowerSync client SDK is included in an app project, it uses [existing app-to-backend](/installation/app-backend-setup) authentication to [retrieve a JSON Web Token (JWT)](/installation/authentication-setup): The PowerSync client SDK uses the retrieved JWT to authenticate directly against the PowerSync Service: Users are not persisted in PowerSync, and there is no server-to-server communication used for client authentication. ## Common Authentication Providers PowerSync supports JWT-based authentication from various providers. The table below shows commonly used authentication providers, their JWKS URLs, and any specific configuration requirements. Scroll the table horizontally. | Provider | JWKS URL | Configuration Notes | Documentation | | ----------------------------------------- | ------------------------------------------------------------------------------------------- | ---------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | | **Supabase** | Direct integration available | Uses Supabase's **JWT Secret** | [Supabase Auth Setup](/installation/authentication-setup/supabase-auth) | | **Firebase Auth / GCP Identity Platform** | `https://www.googleapis.com/service_accounts/v1/jwk/securetoken@system.gserviceaccount.com` | JWT Audience: Firebase project ID | [Firebase Auth Setup](/installation/authentication-setup/firebase-auth) | | **Auth0** | `https://{auth0-domain}/.well-known/jwks.json` | JWT Audience: PowerSync instance URL | [Auth0 Setup](/installation/authentication-setup/auth0) | | **Clerk** | `https://{yourClerkDomain}/.well-known/jwks.json` | Additional configuration may be required | [Clerk Documentation](https://clerk.com/docs/backend-requests/making/jwt-templates#create-a-jwt-template) | | **Stytch** | `https://{live_or_test}.stytch.com/v1/sessions/jwks/{project-id}` | Additional configuration may be required | [Stytch Documentation](https://stytch.com/docs/api/jwks-get) | | **Keycloak** | `https://{your-keycloak-domain}/auth/realms/{realm-name}/protocol/openid-connect/certs` | Additional configuration may be required | [Keycloak Documentation](https://documentation.cloud-iam.com/how-to-guides/configure-remote-jkws.html) | | **Amazon Cognito** | `https://cognito-idp.{region}.amazonaws.com/{userPoolId}/.well-known/jwks.json` | Additional configuration may be required | [Cognito Documentation](https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-using-tokens-verifying-a-jwt.html) | | **Azure AD** | `https://login.microsoftonline.com/{tenantId}/discovery/v2.0/keys` | Additional configuration may be required | [Azure AD Documentation](https://learn.microsoft.com/en-us/entra/identity-platform/access-tokens) | | **Google Identity** | `https://www.googleapis.com/oauth2/v3/certs` | Additional configuration may be required | [Google Identity Documentation](https://developers.google.com/identity/openid-connect/openid-connect#discovery) | | **SuperTokens** | `https://{YOUR_SUPER_TOKENS_CORE_CONNECTION_URI}/.well-known/jwks.json` | Additional configuration may be required | [SuperTokens 
Documentation](https://supertokens.com/docs/quickstart/integrations/aws-lambda/session-verification/using-jwt-authorizer) | | **WorkOS** | `https://api.workos.com/sso/jwks/{YOUR_CLIENT_ID}` | Additional configuration may be required | [WorkOS Documentation](https://workos.com/docs/reference/user-management/session-tokens/jwks) | | **Custom JWT** | Your own JWKS endpoint | See custom auth requirements | [Custom Auth Setup](/installation/authentication-setup/custom) | ## Authentication Options Some authentication providers already generate JWTs for users which PowerSync can verify directly — see the documentation for individual providers (e.g. [Supabase Auth](/installation/authentication-setup/supabase-auth), [Firebase Auth](/installation/authentication-setup/firebase-auth)). For others, some backend code must be added to your application backend to generate the JWTs needed for PowerSync — see [Custom](/installation/authentication-setup/custom) authentication. For a quick way to get up and running during development, you can generate [Development Tokens](/installation/authentication-setup/development-tokens) directly from the [PowerSync Dashboard](/usage/tools/powersync-dashboard) (PowerSync Cloud) or locally with a self-hosted setup. # Auth0 Source: https://docs.powersync.com/installation/authentication-setup/auth0 Setting up Auth0 Authentication with PowerSync On Auth0, create a new API: * Name: PowerSync * Identifier: PowerSync instance URL, e.g. `https://{instance}.powersync.journeyapps.com` On the PowerSync instance, add the Auth0 JWKS URI: `https://{auth0-domain}/.well-known/jwks.json` In the application, generate access tokens with the PowerSync instance URL as the audience, and use this to connect to PowerSync. # Custom Source: https://docs.powersync.com/installation/authentication-setup/custom Any authentication provider can be supported by generating custom JWTs for PowerSync. For a quick way to get started before implementing custom auth, [Development Tokens](/installation/authentication-setup/development-tokens) can be used instead. The process is as follows: 1. The client authenticates the user using the app's authentication provider (either a third-party authentication provider or a custom one) and typically gets a session token. 2. The client makes a backend call (authenticated using the above session token), which generates and signs a JWT for PowerSync. 1. For example implementations of this backend endpoint, see [Custom Backend Examples](/resources/demo-apps-example-projects#backend-examples) 3. The client connects to the PowerSync Service using the above JWT. 4. PowerSync verifies the JWT. The requirements are: A key pair (private + public key) is required to sign and verify JWTs. The private key is used to sign the JWT, and the public key is advertised on a public JWKS URL. Requirements for the key in the JWKS URL: 1. The URL must be a public URL in the [JWKS](https://auth0.com/docs/secure/tokens/json-web-tokens/json-web-key-sets) format. 1. We have an example endpoint available [here](https://hlstmcktecziostiaplz.supabase.co/functions/v1/powersync-jwks); ensure that your response looks similar. 2. Supported signature schemes: RSA, EdDSA and ECDSA. 3. Key type (`kty`): `RSA`, `OKP` (EdDSA) or `EC` (ECDSA). 4. Algorithm (`alg`): 1. `RS256`, `RS384` or `RS512` for RSA 2. `EdDSA` for EdDSA 3. `ES256`, `ES384` or `ES512` for ECDSA 5. Curve (`crv`) - only relevant for EdDSA and ECDSA: 1. `Ed25519` or `Ed448` for EdDSA 2. `P-256`, `P-384` or `P-521` for ECDSA 6.
A `kid` must be specified and must match the `kid` in the JWT. Requirements for the signed JWT: 1. The JWT must be signed using a key in the JWKS URL. 2. JWT must have a `kid` matching the key in the JWKS URL. 3. The `aud` of the JWT must match the PowerSync instance URL. 1. To get the instance URL of a PowerSync instance when using PowerSync Cloud: In the project tree on the [PowerSync dashboard](https://powersync.journeyapps.com/), click on the "Copy instance URL" icon. 2. Alternatively, specify a custom audience in the instance settings. 4. The JWT must expire in 60 minutes or less. Specifically, both `iat` and `exp` fields must be present, with a difference of 3600 or less between them. 5. The user ID must be used as the `sub` of the JWT. 6. Additional fields can be added which can be referenced in Sync Rules [parameter queries](/usage/sync-rules/parameter-queries). Refer to [this example](https://github.com/powersync-ja/powersync-jwks-example) for creating and verifying JWTs for PowerSync authentication. Since there is no way to revoke a JWT once issued without rotating the key, we recommend using short expiration periods (e.g. 5 minutes). JWTs older than 60 minutes are not accepted by PowerSync. #### Rotating Keys If a private key is compromised, rotate the key on the JWKS endpoint. PowerSync refreshes the keys from the endpoint every couple of minutes, after which old tokens will not be accepted anymore. There is a possibility of false authentication errors until PowerSync refreshes the keys. These errors are typically retried by the client and will have little impact. However, to periodically rotate keys without any authentication failures, follow this process: 1. Add a new key to the JWKS endpoint. 2. Wait an hour (or more) to make sure PowerSync has the new key. 3. Start signing new JWT tokens using the new key. 4. Wait until all existing tokens have expired. 5. Remove the old key from the JWKS endpoint. # Development Tokens Source: https://docs.powersync.com/installation/authentication-setup/development-tokens PowerSync allows generating temporary development tokens for authentication. This is useful for developers who want to get up and running quickly, without a full custom auth implementation. This may also be used to generate a token for a specific user to debug issues. ## Generating a Development Token: ### PowerSync Cloud - Dashboard: 1. **Enable setting**: The "Enable development tokens" setting must be set on the PowerSync instance. It can be set in the instance's config (In the [PowerSync dashboard](https://powersync.journeyapps.com/): Edit instance -> *Client Auth*). 1. **Generate token**: Call the "Generate development token" action for your instance. In the [PowerSync dashboard](https://powersync.journeyapps.com/), this can be done via the command palette (CMD+SHIFT+P / SHIFT+SHIFT), or by selecting it from an instance's options (right-click on an instance for options). 1. Enter token subject / user ID: This is the ID of the user you want to authenticate and is used in [sync rules](/usage/sync-rules) as `request.user_id()` (previously, `token_parameters.user_id`) 1. Copy the generated token. Note that these tokens expire after 12 hours. ### Self-hosted Setup / Local Development For self-hosted [local development](/self-hosting/local-development), the [powersync-service test client](https://github.com/powersync-ja/powersync-service/tree/main/test-client) contains a script to generate a development token, given a .yaml config file with an HS256 key. 
Run the following command: ```bash node dist/bin.js generate-token --config path/to/powersync.yaml --sub test-user ``` For more information on generating development tokens, see the [Generate development tokens tutorial](/tutorials/self-host/generate-dev-token). ## Usage To use the temporary development token, update the `fetchCredentials()` function in your backend connector to return the generated token (see [Integrate with your Backend](/installation/client-side-setup/integrating-with-your-backend) for more information). Example (Dart/Flutter syntax shown): ```dart return PowerSyncCredentials( endpoint: AppConfig.powersyncUrl, token: 'temp-token-here'); ``` # Firebase Auth Source: https://docs.powersync.com/installation/authentication-setup/firebase-auth Setting up Firebase Authentication with PowerSync Configure authentication on the PowerSync instance with the following settings: * JWKS URI: [https://www.googleapis.com/service\_accounts/v1/jwk/securetoken@system.gserviceaccount.com](https://www.googleapis.com/service_accounts/v1/jwk/securetoken@system.gserviceaccount.com) * JWT Audience: Firebase project ID Firebase signs these tokens using RS256. PowerSync will periodically refresh the keys using the above JWKS URI, and validate tokens against the configured audience (token `aud` value). The Firebase user UID will be available as `request.user_id()` (previously `token_parameters.user_id`). To use a different identifier as the user ID in sync rules (for example user email), use [Custom authentication](/installation/authentication-setup/custom). # Supabase Auth Source: https://docs.powersync.com/installation/authentication-setup/supabase-auth PowerSync can verify Supabase JWTs directly when connected to a Supabase-hosted Postgres database. You can implement various types of auth: * Standard [Supabase Auth](https://supabase.com/docs/guides/auth) * JavaScript [example](https://github.com/powersync-ja/powersync-js/blob/58fd05937ec9ac993622666742f53200ee694585/demos/react-supabase-todolist/src/library/powersync/SupabaseConnector.ts#L87) * Dart/Flutter [example](https://github.com/powersync-ja/powersync.dart/blob/9ef224175c8969f5602c140bcec6dd8296c31260/demos/supabase-todolist/lib/powersync.dart#L38) * Kotlin [example](https://github.com/powersync-ja/powersync-kotlin/blob/4f60e2089745dda21b0d486c70f47adbbe24d289/connectors/supabase/src/commonMain/kotlin/com/powersync/connector/supabase/SupabaseConnector.kt#L75) * Anonymous Sign-Ins * JavaScript [Example](https://github.com/powersync-ja/powersync-js/blob/58fd05937ec9ac993622666742f53200ee694585/demos/react-multi-client/src/library/SupabaseConnector.ts#L47) * Fully custom auth * [Example](https://github.com/powersync-ja/powersync-jwks-example/) * Experimental: We've also heard from the community that Supabase's newly released [support for external auth providers](https://supabase.com/blog/third-party-auth-mfa-phone-send-hooks) works, but we don't have any examples for this yet. ## Enabling Supabase Auth To implement either **Supabase Auth** or **Anonymous Sign-Ins**, enable the relevant setting on the PowerSync instance, and provide your Supabase JWT Secret. Internally, this setting allows PowerSync to verify and use Supabase JWTs directly using HS256 and the provided secret. ### PowerSync Cloud instances: 1. In the PowerSync Dashboard, right-click on your instance to edit it. 2.
Under the **"Client Auth"** tab, enable **"Use Supabase Auth"** and enter your Supabase **JWT Secret** (from the [JWT Keys](https://supabase.com/dashboard/project/_/settings/jwt) section in the Supabase dashboard): 3. Click **"Save and deploy"** to deploy the updates to your instance. ### Self-hosted instances: This can be enabled via your [`config.yaml`](/self-hosting/installation/powersync-service-setup): ```yaml client_auth: # Enable this if using Supabase Auth supabase: true supabase_jwt_secret: your-jwt-secret ``` ## Sync Rules The Supabase user UUID will be available as `request.user_id()` in [Sync Rules](/usage/sync-rules). To use a different identifier as the user ID in sync rules (for example user email), use [Custom authentication](/installation/authentication-setup/custom). # Stytch + Supabase Source: https://docs.powersync.com/installation/authentication-setup/supabase-auth/stytch-+-supabase PowerSync is compatible with both Consumer and B2B SaaS Stytch project types when using [Stytch](https://stytch.com/) for authentication with Supabase projects. ## Consumer Authentication See this community project for detailed setup instructions: [https://github.com/guillempuche/localfirst\_react\_server](https://github.com/guillempuche/localfirst_react_server) ## B2B SaaS Authentication The high-level approach is: * Users authenticate via [Stytch](https://stytch.com/) * Extract the user and org IDs from the Stytch JWT * Generate a Supabase JWT by calling a Supabase Edge Function that uses the Supabase JWT Secret for signing a new JWT * Set the `KID` in the JWT header * You can obtain this from any other Supabase JWT by extracting the KID value from the header — this value is static, even across database upgrades. * Set the `AUD` field to `authenticated` * Set the `SUB` field in the JWT payload to the user ID * Pass this new JWT into your PowerSync `fetchCredentials` function Use the below settings in your [PowerSync Dashboard](/usage/tools/powersync-dashboard): Reach out to us directly on our [Discord server](https://discord.gg/powersync) if you have any issues with setting up auth. # Client-Side Setup Source: https://docs.powersync.com/installation/client-side-setup Include the PowerSync Client SDK in your project ## Overview If you're following the [Implementation Outline](/installation/quickstart-guide#implementation-outline): after configuring your database, connecting your PowerSync instance to it, and defining basic [Sync Rules](/usage/sync-rules), the next step is to include the appropriate *PowerSync Client SDK* package in your app project. At a high level, this involves the following steps: 1. [Install the Client SDK](#installing-the-client-sdk) (see below) 2. [Define your Client-Side Schema](/installation/client-side-setup/define-your-schema) * The PowerSync Client SDKs expose a managed SQLite database that your app can read from and write to. The client-side schema refers to the schema for that SQLite database. 3. [Instantiate the PowerSync Database](/installation/client-side-setup/instantiate-powersync-database) * This instantiates the aforementioned managed SQLite database. 4. [Integrate with your Backend](/installation/client-side-setup/integrating-with-your-backend) \[Optional] * This allows write operations on the client-side SQLite database to be uploaded to your backend and applied to your backend database. * Integrating with your backend is also part of [authentication](/installation/authentication-setup) integration.
For initial development and testing, you can use [Development Tokens](/installation/authentication-setup/development-tokens), and then implement proper authentication integration at a later time. ## Installing the Client SDK PowerSync offers a variety of client SDKs. Please see the steps based on your app language and framework: Add the [PowerSync pub.dev package](https://pub.dev/packages/powersync) to your project: ```bash flutter pub add powersync ``` See the full SDK reference for further details and getting started instructions: **PowerSync is not compatible with Expo Go.** PowerSync uses a native plugin and is therefore only compatible with Expo Dev Builds. Add the [PowerSync React Native NPM package](https://www.npmjs.com/package/@powersync/react-native) to your project: ```bash npx expo install @powersync/react-native ``` ```bash yarn expo add @powersync/react-native ``` ``` pnpm expo install @powersync/react-native ``` **Required peer dependencies** This SDK requires [@journeyapps/react-native-quick-sqlite](https://www.npmjs.com/package/@journeyapps/react-native-quick-sqlite) as a peer dependency. Install it as follows: ```bash npx expo install @journeyapps/react-native-quick-sqlite ``` ```bash yarn expo add @journeyapps/react-native-quick-sqlite ``` ``` pnpm expo install @journeyapps/react-native-quick-sqlite ``` Alternatively, you can install OP-SQLite with the [PowerSync OP-SQLite package](https://github.com/powersync-ja/powersync-js/tree/main/packages/powersync-op-sqlite) which offers [built-in encryption support via SQLCipher](/usage/use-case-examples/data-encryption) and a smoother transition to React Native's New Architecture. **Polyfills and additional notes:** * For async iterator support with watched queries, additional polyfills are required. See the [Babel plugins section](https://www.npmjs.com/package/@powersync/react-native#babel-plugins-watched-queries) in the README. * By default, this SDK connects to a PowerSync instance via WebSocket (from `@powersync/react-native@1.11.0`) or HTTP streaming (before `@powersync/react-native@1.11.0`). See [Developer Notes](/client-sdk-references/react-native-and-expo#developer-notes) for more details on connection methods and platform-specific requirements. * When using the OP-SQLite package, we recommend adding this [metro config](https://github.com/powersync-ja/powersync-js/tree/main/packages/react-native#metro-config-optional) to avoid build issues. See the full SDK reference for further details and getting started instructions: Add the [PowerSync Web NPM package](https://www.npmjs.com/package/@powersync/web) to your project: ```bash npm install @powersync/web ``` ```bash yarn add @powersync/web ``` ```bash pnpm install @powersync/web ``` **Required peer dependencies** This SDK currently requires [`@journeyapps/wa-sqlite`](https://www.npmjs.com/package/@journeyapps/wa-sqlite) as a peer dependency. Install it in your app with: ```bash npm install @journeyapps/wa-sqlite ``` ```bash yarn add @journeyapps/wa-sqlite ``` ```bash pnpm install @journeyapps/wa-sqlite ``` By default, this SDK connects to a PowerSync instance via WebSocket (from `@powersync/web@1.6.0`) or HTTP streaming (before `@powersync/web@1.6.0`). See [Developer Notes](/client-sdk-references/javascript-web#developer-notes) for more details on connection methods. 
See the full SDK reference for further details and getting started instructions: Add the [PowerSync SDK](https://central.sonatype.com/artifact/com.powersync/core) to your project by adding the following to your `build.gradle.kts` file: ```gradle kotlin { //... sourceSets { commonMain.dependencies { api("com.powersync:core:$powersyncVersion") // If you want to use the Supabase Connector, also add the following: implementation("com.powersync:connectors:$powersyncVersion") } //... } } ``` **CocoaPods configuration (recommended for iOS)** Add the following to the `cocoapods` config in your `build.gradle.kts`: ```gradle cocoapods { //... pod("powersync-sqlite-core") { linkOnly = true } framework { isStatic = true export("com.powersync:core") } //... } ``` The `linkOnly = true` attribute and `isStatic = true` framework setting ensure that the `powersync-sqlite-core` binaries are statically linked. See the full SDK reference for further details and getting started instructions: You can add the PowerSync Swift package to your project using either `Package.swift` or Xcode: ```swift let package = Package( //... dependencies: [ //... .package( url: "https://github.com/powersync-ja/powersync-swift", exact: "" ), ], targets: [ .target( name: "YourTargetName", dependencies: [ .product( name: "PowerSync", package: "powersync-swift" ) ] ) ] ) ``` 1. Follow [this guide](https://developer.apple.com/documentation/xcode/adding-package-dependencies-to-your-app#Add-a-package-dependency) to add a package to your project. 2. Use `https://github.com/powersync-ja/powersync-swift.git` as the URL. 3. Include the exact version (e.g., `1.0.x`). See the full SDK reference for further details and getting started instructions: Add the [PowerSync Node NPM package](https://www.npmjs.com/package/@powersync/node) to your project: ```bash npm install @powersync/node ``` ```bash yarn add @powersync/node ``` ```bash pnpm install @powersync/node ``` **Required peer dependencies** This SDK requires [`@powersync/better-sqlite3`](https://www.npmjs.com/package/@powersync/better-sqlite3) as a peer dependency: ```bash npm install @powersync/better-sqlite3 ``` ```bash yarn add @powersync/better-sqlite3 ``` ```bash pnpm install @powersync/better-sqlite3 ``` **Common installation issues** The `@powersync/better-sqlite3` package requires native compilation, which depends on certain system tools. This compilation process is handled by `node-gyp` and may fail if required dependencies are missing or misconfigured. Refer to the [PowerSync Node package README](https://www.npmjs.com/package/@powersync/node) for more details. See the full SDK reference for further details and getting started instructions: For desktop/server/binary use-cases and WPF, add the [`PowerSync.Common`](https://www.nuget.org/packages/PowerSync.Common/) NuGet package to your project: ```bash dotnet add package PowerSync.Common --prerelease ``` For MAUI apps, add both [`PowerSync.Common`](https://www.nuget.org/packages/PowerSync.Common/) and [`PowerSync.Maui`](https://www.nuget.org/packages/PowerSync.Maui/) NuGet packages to your project: ```bash dotnet add package PowerSync.Common --prerelease dotnet add package PowerSync.Maui --prerelease ``` Add `--prerelease` while this package is in alpha. See the full SDK reference for further details and getting started instructions: ## Next Steps For an overview of the client-side steps required to set up PowerSync in your app, continue reading the next sections. 1.
[Define your Client-Side Schema](/installation/client-side-setup/define-your-schema) 2. [Instantiate the PowerSync Database](/installation/client-side-setup/instantiate-powersync-database) 3. [Integrate with your Backend](/installation/client-side-setup/integrating-with-your-backend) For a walkthrough with example implementations for your platform, see the *Getting Started* section of the corresponding SDK reference linked above. # Define your Schema Source: https://docs.powersync.com/installation/client-side-setup/define-your-schema The PowerSync Client SDKs expose a managed SQLite database that your app can read from and write to. The client-side schema refers to the schema for that SQLite database. The client-side schema is typically mainly derived from your backend database schema and [Sync Rules](/usage/sync-rules), but can also include other tables such as local-only tables. Note that schema migrations are not required on the SQLite database due to the schemaless nature of the [PowerSync protocol](/architecture/powersync-protocol): schemaless data is synced to the client-side SQLite database, and the client-side schema is then applied to that data using *SQLite views* to allow for structured querying of the data. **Generate schema automatically (PowerSync Cloud)** In the [PowerSync Dashboard](/usage/tools/powersync-dashboard), the schema can be generated based off your [Sync Rules](/usage/sync-rules) by right-clicking on an instance and selecting **Generate client-side schema**. Similar functionality exists in the PowerSync [CLI](/usage/tools/cli). **Note:** The generated schema will exclude an `id` column, as the client SDK automatically creates an `id` column of type `text`. Consequently, it is not necessary to specify an `id` column in your schema. For additional information on IDs, refer to [Client ID](/usage/sync-rules/client-id). ## Example implementation For an example implementation of the client-side schema, see the *Getting Started* section of the SDK reference for your platform: ### Flutter * [1. Define the Schema](/client-sdk-references/flutter#1-define-the-schema) ### React Native & Expo * [1. Define the Schema](/client-sdk-references/react-native-and-expo#1-define-the-schema) ### JavaScript Web * [1. Define the Schema](/client-sdk-references/javascript-web#1-define-the-schema) ### Kotlin Multiplatform * [1. Define the Schema](/client-sdk-references/kotlin-multiplatform#1-define-the-schema) ### Swift * [1. Define the Schema](/client-sdk-references/swift#1-define-the-schema) ### Node.js (alpha) * [1. Define the Schema](/client-sdk-references/node#1-define-the-schema) ### .NET (alpha) * [1. Define the Schema](/client-sdk-references/dotnet#1-define-the-schema) ## ORM Support For details on ORM support in PowerSync, refer to [Using ORMs with PowerSync](https://www.powersync.com/blog/using-orms-with-powersync) on our blog. ## Next Step The next step is to instantiate the client-side PowerSync database: Instantiate the PowerSync Database → # Instantiate PowerSync Database Source: https://docs.powersync.com/installation/client-side-setup/instantiate-powersync-database This instantiates the client-side managed SQLite database. PowerSync streams changes from your backend database into the client-side SQLite database, based on your [Sync Rules](/usage/sync-rules). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline. 
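As a rough illustration, here is a minimal sketch using the JavaScript/Web SDK (the table and column names are assumed examples; see the SDK reference for your platform for the exact API):

```typescript
import { column, PowerSyncDatabase, Schema, Table } from '@powersync/web';

// Client-side schema: applied as views over the synced SQLite data.
const AppSchema = new Schema({
  lists: new Table({ name: column.text, created_at: column.text, owner_id: column.text })
});

// The managed client-side SQLite database.
export const db = new PowerSyncDatabase({
  schema: AppSchema,
  database: { dbFilename: 'example.sqlite' }
});
```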
Any writes that are made to the SQLite database are placed into an upload queue by the PowerSync Client SDK and automatically uploaded to your app backend (where you apply those changes to the backend database) when the user is connected. This is explained in the next section, [Integrate with your Backend](/installation/client-side-setup/integrating-with-your-backend). ## Example implementation For an example implementation of instantiating the client-side database, see the *Getting Started* section of the client SDK reference for your platform: ### Flutter * [2. Instantiate the PowerSync Database](/client-sdk-references/flutter#2-instantiate-the-powersync-database) ### React Native & Expo * [2. Instantiate the PowerSync Database](/client-sdk-references/react-native-and-expo#2-instantiate-the-powersync-database) ### JavaScript Web * [2. Instantiate the PowerSync Database](/client-sdk-references/javascript-web#2-instantiate-the-powersync-database) ### Kotlin Multiplatform * [2. Instantiate the PowerSync Database](/client-sdk-references/kotlin-multiplatform#2-instantiate-the-powersync-database) ### Swift * [2. Instantiate the PowerSync Database](/client-sdk-references/swift#2-instantiate-the-powersync-database) ### Node.js (alpha) * [2. Instantiate the PowerSync Database](/client-sdk-references/node#2-instantiate-the-powersync-database) ### .NET (alpha) * [2. Instantiate the PowerSync Database](/client-sdk-references/dotnet#2-instantiate-the-powersync-database) ## Additional Examples For additional implementation examples, see [Example / Demo Apps](/resources/demo-apps-example-projects). ## ORM Support For details on ORM support in PowerSync, refer to [Using ORMs with PowerSync](https://www.powersync.com/blog/using-orms-with-powersync) on our blog. ## Next Step The next step is to implement the client-side integration with your backend application: Integrate with your Backend → # Integrate with your Backend Source: https://docs.powersync.com/installation/client-side-setup/integrating-with-your-backend The 'backend connector' provides the connection between the PowerSync Client SDK and your backend application. After you've [instantiated](/installation/client-side-setup/instantiate-powersync-database) the client-side PowerSync database, you will call `connect()` on it, which causes the PowerSync Client SDK to connect to the [PowerSync Service](/architecture/powersync-service) for the purpose of syncing data to the client-side SQLite database, *and* to connect to your backend application as needed, for two purposes: | Purpose | Description | | ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Uploading writes to your backend:** | Writes that are made to the client-side SQLite database are uploaded to your backend application, where you control how they're applied to your backend database (Postgres, MongoDB or MySQL). This is how PowerSync achieves bi-directional syncing of data. | | **Authentication integration:** | PowerSync uses JWTs for authentication between the Client SDK and PowerSync Service. Your backend application should be able to generate JWTs that the PowerSync Client SDK can retrieve and use for authentication against your [PowerSync Service](/architecture/powersync-service) instance. 
| Accordingly, you must pass a *backend connector* as an argument when you call `connect()` on the client-side PowerSync database. You must define that backend connector, and it must implement two functions/methods: | Purpose | Function | Description | | --- | --- | --- | | **Uploading writes to your backend:** | `uploadData()` | The PowerSync Client SDK automatically calls this function to upload client-side write operations to your backend. Whenever you write to the client-side SQLite database, those writes are also automatically placed into an *upload queue* by the Client SDK, and the Client SDK processes the entries in the upload queue by calling `uploadData()`. You should define your `uploadData()` function to call your backend application API to upload and apply the write operations to your backend database. The Client SDK automatically handles retries in the case of failures. See [Writing Client Changes](/installation/app-backend-setup/writing-client-changes) for considerations on the backend implementation. | | **Authentication integration:** | `fetchCredentials()` | This is called every couple of minutes and is used to obtain a JWT from your backend. The PowerSync Client SDK uses that JWT to authenticate against the PowerSync Service. See [Authentication Setup](/installation/authentication-setup) for instructions on how the JWTs should be generated. | ## Example implementation For an example implementation of a PowerSync 'backend connector', see the *Getting Started* section of the SDK reference for your platform: ### Flutter * [3. Integrate with your Backend](/client-sdk-references/flutter#3-integrate-with-your-backend) ### React Native & Expo * [3. Integrate with your Backend](/client-sdk-references/react-native-and-expo#3-integrate-with-your-backend) ### JavaScript Web * [3. Integrate with your Backend](/client-sdk-references/javascript-web#3-integrate-with-your-backend) ### Node.js (alpha) * [3. Integrate with your Backend](/client-sdk-references/node#3-integrate-with-your-backend) ### Kotlin Multiplatform * [3. Integrate with your Backend](/client-sdk-references/kotlin-multiplatform#3-integrate-with-your-backend) ### Swift * [3. Integrate with your Backend](/client-sdk-references/swift#3-integrate-with-your-backend) ## More Examples For additional implementation examples, see the [Example / Demo Apps](/resources/demo-apps-example-projects) section. ## Next Step The next step is to implement the necessary server-side functionality in your backend application to handle the above: App Backend Setup → # Database Connection Source: https://docs.powersync.com/installation/database-connection Connect a PowerSync instance to your backend database. This page covers PowerSync Cloud.
For self-hosted PowerSync, refer to [this section](/self-hosting/installation/powersync-service-setup#powersync-configuration). ## Create a PowerSync Instance 1. In the **Overview** workspace of the [PowerSync Dashboard](/usage/tools/powersync-dashboard), you will be prompted to create your first instance: If you've previously created an instance in your project, you can create an additional instance by navigating to **Manage instances** and clicking **Create new instance**: You can also create an entirely new [project](/usage/tools/powersync-dashboard#hierarchy%3A-organization%2C-project%2C-instance) with its own set of instances. Click on the PowerSync icon in the top left corner of the Dashboard or on **Admin Portal** at the top of the Dashboard, and then click on **Create Project**. 2. Give your instance a name, such as "Testing". 3. \[Optional] You can change the default cloud region from US to EU, JP (Japan), AU (Australia) or BR (Brazil) if desired. * Note: Additional cloud regions will be considered on request, especially for customers on our Enterprise plan. Please [contact us](/resources/contact-us) if you need a different region. 4. \[Optional] You can opt in to using the `Next` version of the Service, which may contain early access or experimental features. Always use the `Stable` version in production. 5. Click **Next**. ## Specify Connection Details Each database provider has their quirks when it comes to specifying connection details, so we have documented database-specific and provider-specific instructions below: ## Postgres Provider Specifics Select your Postgres hosting provider for steps to connect your newly-created PowerSync instance to your Postgres database: 1. From your Supabase Dashboard, select **Connect** in the top navigation bar (or follow this [link](https://supabase.com/dashboard/project/_?showConnect=true)): 2. In the **Direct connection** section, copy the complete connection string (including the `[YOUR-PASSWORD]` placeholder) 3. Back in the PowerSync Dashboard, paste the connection string into the **URI** field. PowerSync will automatically parse this URI to populate the database connection details. 4. Update the **Username** and **Password** fields to use the `powersync_role` and password you created when configuring your Supabase for PowerSync (see [Source Database Setup](/installation/database-setup#supabase)). 5. Note: PowerSync includes Supabase's CA certificate by default, so you can use `verify-full` SSL mode without additional configuration. 6. Your connection settings should look similar to this: 7. Verify your setup by clicking **Test Connection** and resolve any errors. 8. Click **Next**. 9. PowerSync will detect the Supabase connection and prompt you to enable Supabase auth. To enable it, copy your JWT Secret from your project's settings ([JWT Keys](https://supabase.com/dashboard/project/_/settings/jwt) section in the Supabase dashboard) and paste it here: 10. Click **Enable Supabase auth** to finalize your connection settings. PowerSync will now create an isolated cloud environment for your instance. This typically takes a minute or two. You can update your instance settings by navigating to the **Manage instances** workspace, opening your instance options and selecting **Edit instance**. ### Troubleshooting Supabase is configured with a maximum of 4 logical replication slots, with one often used for Supabase Realtime. 
It is therefore easy to run out of replication slots, resulting in an error such as "All replication slots are in use" when deploying. To resolve this, delete inactive replication slots by running this query: ```sql select slot_name, pg_drop_replication_slot(slot_name) from pg_replication_slots where active = false; ``` 1. [Locate the connection details from RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToPostgreSQLInstance.html): * Copy the **"Endpoint"** value. * Paste the endpoint into the "**Host**" field. * Complete the remaining fields: "**Name**", "**Host**", "**Port**", "**Database name**", "**Username**", "**Password**" and "**SSL Mode"** are required. You can paste a connection string into the "**URI**" field to simplify this. * "**Name**" can be any name for the connection. * "**Port**" is 5432 for Postgres databases. * "**Username**" and "**Password**" maps to the `powersync_role` created in [Source Database Setup](/installation/database-setup). * PowerSync has the AWS RDS CA certificate pre-configured — `verify-full` SSL mode can be used directly, without any additional configuration required. * If you want to query your source database via the PowerSync Dashboard, enable "**Allow querying data from the dashboard?**". 2. Click **"Test Connection"** and fix any errors. 3. Click **"Save".** PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete. ### Troubleshooting If you get an error such as "IPs in this range are not supported", the instance is likely not configured to be publicly accessible. A DNS lookup on the host should give a public IP, and not for example `10.x.x.x` or `172.31.x.x`. 1. Fill in your connection details from Azure. 1. "**Name**", "**Host**", "**Port**", "**Database name**", "**Username**", "**Password**" and "**SSL Mode"** are required. You can also paste a connection string into the "**URI**" field to simplify data entry. 2. "**Name**" can be any name for the connection. 3. "**Port**" is 5432 for Postgres databases. 4. "**Username**" and "**Password**" maps to the `powersync_role` created in [Source Database Setup](/installation/database-setup). 5. PowerSync has the Azure CA certificate pre-configured — `verify-full` SSL mode can be used directly, without any additional configuration required. 6. If you want to query your source database via the PowerSync Dashboard, enable "**Allow querying data from the dashboard?**". 2. Click **"Test Connection"** and fix any errors. * If you encounter the error `"must be superuser or replication role to start walsender"`, ensure that you've followed all the steps for enabling logical replication documented [here](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-logical#prerequisites-for-logical-replication-and-logical-decoding). 3. Click **"Save".** PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete. 1. Fill in your connection details from Google Cloud SQL. * "**Name**", "**Host**", "**Port**", "**Database name**", "**Username**", "**Password**" and "**SSL Mode"** are required. You can paste a connection string into the "**URI**" field to simplify data entry. * "**Name**" can be any name for the connection. * "**Port**" is 5432 for Postgres databases. * "**Username**" and "**Password**" maps to the `powersync_role` created in [Source Database Setup](/installation/database-setup). * The server certificate can be downloaded from Google Cloud SQL. 
* If SSL is enforced, a client certificate and key must also be created on Google Cloud SQL, and configured on the PowerSync instance. * If you want to query your source database via the PowerSync Dashboard, enable "**Allow querying data from the dashboard?**". 2. Click **"Test Connection"** and fix any errors. 3. Click **"Save".** PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete. 1. Fill in your connection details from [Neon](https://neon.tech/). 1. "**Name**", "**Host**", "**Port**", "**Database name**", "**Username**", "**Password**" and "**SSL Mode"** are required. You can paste a connection string into the "**URI**" field to simplify data entry. 2. "**Name**" can be any name for the connection. 3. "**Port**" is 5432 for Postgres databases. 4. "**Username**" and "**Password**" maps to the `powersync_role` created in [Source Database Setup](/installation/database-setup). 5. Note that if you're using a self-signed SSL certificate for your database server, click the "Download Certificate" button to dynamically fetch the recommended certificate directly from your server. 6. Also note if you get any error such as `server certificate not trusted: SELF_SIGNED_CERT_IN_CHAIN`, click "Download Certificate" to attempt automatic resolution. 7. If you want to query your source database via the PowerSync Dashboard, enable "**Allow querying data from the dashboard?**". 2. Click **"Test Connection"** and fix any errors. 3. Click **"Save".** PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete. 1. Fill in your connection details from [Fly Postgres](https://fly.io/docs/postgres/). 1. "**Name**", "**Host**", "**Port**", "**Database name**", "**Username**", "**Password**" and "**SSL Mode"** are required. You can paste a connection string into the "**URI**" field to simplify data entry. 2. "**Name**" can be any name for the connection. 3. "**Port**" is 5432 for Postgres databases. 4. "**Username**" and "**Password**" maps to the `powersync_role` created in [Source Database Setup](/installation/database-setup). 5. Note that if you're using a self-signed SSL certificate for your database server, click the "Download Certificate" button to dynamically fetch the recommended certificate directly from your server. 6. Also note if you get any error such as `server certificate not trusted: SELF_SIGNED_CERT_IN_CHAIN`, click "Download Certificate" to attempt automatic resolution. 7. If you want to query your source database via the PowerSync Dashboard, enable "**Allow querying data from the dashboard?**". 2. Click **"Test Connection"** and fix any errors. 3. Click **"Save".** PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete. 1. Head to your PlanetScale database dashboard page at `https://app.planetscale.com//` and click on the "Connect" button to get your database connection parameters. 1. In the PowerSync dashboard, "**Name**", "**Host**", "**Port**", "**Database name**", "**Username**" and "**Password**" are required. 2. "**Name**" can be any name for the connection. 3. "**Host**" is the `host` connection parameter for your database. 4. "**Port**" is 5432 for Postgres databases. 5. "**Username**" and "**Password**" maps to the `powersync_role` created in [Source Database Setup](/installation/database-setup). 1. Important: PlanetScale requires your branch ID to be appended to your username. 
The username should be `powersync_role`.`<your-branch-id>`. Your PlanetScale branch ID can be found on the same connection details page. 6. **SSL Mode** can remain the default `verify-full`. 7. If you want to query your source database via the PowerSync Dashboard, enable "**Allow querying data from the dashboard?**". 2. Click **"Test Connection"** and fix any errors. 3. Click **"Save".** PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete. For other providers and self-hosted databases: 1. Fill in your connection details. 2. "**Name**", "**Host**", "**Port**", "**Database name**", "**Username**", "**Password**" and "**SSL Mode"** are required. You can paste a connection string into the "**URI**" field to simplify data entry. 3. "**Name**" can be any name for the connection. 4. "**Port**" is 5432 for Postgres databases. 5. "**Username**" and "**Password**" maps to the `powersync_role` created in [Source Database Setup](/installation/database-setup). 6. Note that if you're using a self-signed SSL certificate for your database server, click the "Download Certificate" button to dynamically fetch the recommended certificate directly from your server. 7. Also note if you get any error such as `server certificate not trusted: SELF_SIGNED_CERT_IN_CHAIN`, click "Download Certificate" to attempt automatic resolution. 8. If you want to query your source database via the PowerSync Dashboard, enable "**Allow querying data from the dashboard?**". 9. Click **"Test Connection"** and fix any errors. 10. Click **"Save".** PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete. ## MongoDB Specifics 1. Fill in your connection details from MongoDB: 1. Copy your cluster's connection string and paste it into the PowerSync instance **URI** field. PowerSync will automatically parse this URI to populate other connection details. * The format should be `mongodb+srv://[username:password@]host/[database]`. For example, `mongodb+srv://admin:@cluster0.abcde1.mongodb.net/powersync` 2. Enter your database user's password into the **Password** field. See the necessary permissions in [Source Database Setup](/installation/database-setup#mongodb). 3. "**Database name**" is the database in your cluster to replicate. 2. Click **"Test Connection"** and fix any errors. If you have any issues connecting, reach out to our support engineers on our [Discord server](https://discord.gg/powersync) or otherwise [contact us](/resources/contact-us). 1. Make sure that your database allows access to PowerSync's IPs — see [Security and IP Filtering](/installation/database-setup/security-and-ip-filtering) 3. Click **"Save"**. PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete. Also see: * [MongoDB Atlas Device Sync Migration Guide](/migration-guides/mongodb-atlas) * [MongoDB Setup](/installation/database-setup#mongodb) ## MySQL (Alpha) Specifics 1. Fill in your connection details from MySQL: 1. "**Name**" can be any name for the connection. 2. "**Host**" is your database server's hostname, and "**Database name**" is the database to replicate. 3. "**Username**" and "**Password**" maps to your database user. 2. Click **"Test Connection"** and fix any errors. If you have any issues connecting, reach out to our support engineers on our [Discord server](https://discord.gg/powersync) or otherwise [contact us](/resources/contact-us). 1.
Make sure that your database allows access to PowerSync's IPs — see [Security and IP Filtering](/installation/database-setup/security-and-ip-filtering) 3. Click **"Save".** PowerSync deploys and configures an isolated cloud environment for you, which can take a few minutes to complete. # Source Database Setup Source: https://docs.powersync.com/installation/database-setup Configure your backend database for PowerSync, including permissions and replication settings. Jump to: [Postgres](#postgres) | [MongoDB](#mongodb) | [MySQL](#mysql-alpha) ## Postgres **Version compatibility**: PowerSync requires Postgres version 11 or greater. Configuring your Postgres database for PowerSync generally involves three tasks: 1. Ensure logical replication is enabled 2. Create a PowerSync database user 3. Create `powersync` logical replication publication We have documented steps for some hosting providers: ### 1. Ensure logical replication is enabled No action required: Supabase has logical replication enabled by default. ### 2. Create a PowerSync database user ```sql -- Create a role/user with replication privileges for PowerSync CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword'; -- Set up permissions for the newly created role -- Read-only (SELECT) access is required GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role; ``` To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist). ### 3. Create "powersync" publication ```sql -- Create a publication to replicate tables. -- Specify a subset of tables to replicate if required. -- The publication must be named "powersync" CREATE PUBLICATION powersync FOR ALL TABLES; ``` ### Prerequisites The instance must be publicly accessible using an IPv4 address. ![](https://mintlify.s3.us-west-1.amazonaws.com/powersync/images/setup-1.avif) Access may be restricted to specific IPs if required — see [IP Filtering](/installation/database-setup/security-and-ip-filtering). ### 1. Ensure logical replication is enabled Set the `rds.logical_replication` parameter to `1` in the parameter group for the instance: ![](https://mintlify.s3.us-west-1.amazonaws.com/powersync/images/setup-2.png) ### 2. Create a PowerSync database user Create a PowerSync user on Postgres: ```sql -- SQL to create powersync user CREATE ROLE powersync_role WITH BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword'; -- Allow the role to perform replication tasks GRANT rds_replication TO powersync_role; -- Set up permissions for the newly created role -- Read-only (SELECT) access is required GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role; ``` To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication (as well as for any other publications that may exist). ### 3. Create "powersync" publication ```sql -- Create a publication to replicate tables. -- Specify a subset of tables to replicate if required. -- The publication must be named "powersync" CREATE PUBLICATION powersync FOR ALL TABLES; ``` PowerSync supports both "Azure Database for PostgreSQL" and "Azure Database for PostgreSQL Flexible Server". ### Prerequisites The database must be accessible on the public internet. Once you have created your database, navigate to **Settings** → **Networking** and enable **Public access.** ### 1. 
Ensure logical replication is enabled Follow the steps as noted in [this Microsoft article](https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/concepts-logical#prerequisites-for-logical-replication-and-logical-decoding) to allow logical replication. ### 2. Create a PowerSync database user ```sql -- Create a role/user with replication privileges for PowerSync CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword'; -- Set up permissions for the newly created role -- Read-only (SELECT) access is required GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role; ``` To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist). ### 3. Create "powersync" publication ```sql -- Create a publication to replicate tables. -- Specify a subset of tables to replicate if required. -- The publication must be named "powersync" CREATE PUBLICATION powersync FOR ALL TABLES; ``` ### 1. Ensure logical replication is enabled In Google Cloud SQL Postgres, enabling the logical replication is done using flags: ![](https://mintlify.s3.us-west-1.amazonaws.com/powersync/images/setup-3.png) ### 2. Create a PowerSync database user ```sql -- Create a role/user with replication privileges for PowerSync CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword'; -- Set up permissions for the newly created role -- Read-only (SELECT) access is required GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role; ``` To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist). ### 3. Create "powersync" publication ```sql -- Create a publication to replicate tables. -- Specify a subset of tables to replicate if required. -- The publication must be named "powersync" CREATE PUBLICATION powersync FOR ALL TABLES; ``` Neon is a serverless Postgres environment with an innovative pricing model that separates storage and compute. ### 1. Ensure logical replication is enabled To [Ensure logical replication is enabled](https://neon.tech/docs/guides/logical-replication-postgres#prepare-your-source-neon-database): 1. Select your project in the Neon Console. 2. On the Neon Dashboard, select **Settings**. 3. Select **Logical Replication**. 4. Click **Enable** to Ensure logical replication is enabled. ### 2. Create a PowerSync database user ```sql -- Create a role/user with replication privileges for PowerSync CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword'; -- Set up permissions for the newly created role -- Read-only (SELECT) access is required GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role; ``` To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist). ### 3. Create "powersync" publication ```sql -- Create a publication to replicate tables. -- Specify a subset of tables to replicate if required. 
-- The publication must be named "powersync" CREATE PUBLICATION powersync FOR ALL TABLES; ``` Fly Postgres is a [Fly](https://fly.io/) app with [flyctl](https://fly.io/docs/flyctl/) sugar on top to help you bootstrap and manage a database cluster for your apps. ### 1. Ensure logical replication is enabled Once you've deployed your Fly Postgres cluster, you can use the following command to Ensure logical replication is enabled: ```bash fly pg config update --wal-level=logical ``` ![](https://mintlify.s3.us-west-1.amazonaws.com/powersync/images/setup-5.avif) ### 2. Create a PowerSync database user ```sql -- Create a role/user with replication privileges for PowerSync CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword'; -- Set up permissions for the newly created role -- Read-only (SELECT) access is required GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role; ``` To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist). ### 3. Create "powersync" publication ```sql -- Create a publication to replicate tables. -- Specify a subset of tables to replicate if required. -- The publication must be named "powersync" CREATE PUBLICATION powersync FOR ALL TABLES; ``` ### 1. Ensure logical replication is enabled No action required: PlanetScale has logical replication (`wal_level = logical`) enabled by default. ### 2. Create a PowerSync database user ```sql -- Create a role/user with replication privileges for PowerSync CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword'; -- Set up permissions for the newly created role -- Read-only (SELECT) access is required GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role; ``` To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist). ### 3. Create "powersync" publication ```sql -- Create a publication to replicate tables. -- PlanetScale does not support ON ALL TABLES so -- Specify each table you want to sync -- The publication must be named "powersync" CREATE PUBLICATION powersync FOR TABLE public.lists, public.todos; ``` For other providers and self-hosted databases: Need help? Simply contact us on [Discord](https://discord.gg/powersync) and we'll help you get set up. ### 1. Ensure logical replication is enabled PowerSync reads the Postgres WAL using logical replication in order to create sync buckets in accordance with the specified PowerSync [Sync Rules](/usage/sync-rules). If you are managing Postgres yourself, set `wal_level = logical` in your config file: ![](https://mintlify.s3.us-west-1.amazonaws.com/powersync/images/setup-6.avif) Alternatively, you can use the below SQL commands to check and Ensure logical replication is enabled: ```sql -- Check the replication type SHOW wal_level; -- Ensure logical replication is enabled ALTER SYSTEM SET wal_level = logical; ``` Note that Postgres must be restarted after changing this config. If you're using a managed Postgres service, there may be a setting for this in the relevant section of the service's admin console. ### 2. 
Create a PowerSync database user

```sql
-- Create a role/user with replication privileges for PowerSync
CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword';

-- Set up permissions for the newly created role
-- Read-only (SELECT) access is required
GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role;
```

To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist). See the sketch after the publication step below for an example.

### 3. Create "powersync" publication

```sql
-- Create a publication to replicate tables.
-- Specify a subset of tables to replicate if required.
-- The publication must be named "powersync"
CREATE PUBLICATION powersync FOR ALL TABLES;
```
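If you do want to limit replication to specific tables, the following is a minimal sketch of a restricted setup. It assumes two hypothetical application tables named `lists` and `todos`; substitute your own table names.

```sql
-- Grant read access only to the tables that should be synced,
-- instead of granting SELECT on all tables in the schema
GRANT SELECT ON public.lists, public.todos TO powersync_role;

-- Create the publication for the same subset of tables.
-- The publication must still be named "powersync"
CREATE PUBLICATION powersync FOR TABLE public.lists, public.todos;
```

A table that is missing from either the grant or the publication will not be replicated by PowerSync.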
### Unsupported Hosted Postgres Providers

Due to the logical replication requirement, not all Postgres hosting providers are supported. Notably, some "serverless Postgres" providers do not support logical replication, and are therefore not supported by PowerSync yet.

## MongoDB

**Version compatibility**: PowerSync requires MongoDB version 6.0 or greater.

### Permissions required - MongoDB Atlas

For MongoDB Atlas databases, the minimum permissions when using built-in roles are:

```
readWrite@<database>._powersync_checkpoints
read@<database>
```

To allow PowerSync to automatically enable [`changeStreamPreAndPostImages`](#post-images) on replicated collections (the default for new PowerSync instances), additionally add the `dbAdmin` permission:

```
readWrite@<database>._powersync_checkpoints
read@<database>
dbAdmin@<database>
```

If you are replicating from multiple databases in the cluster, you need read permissions on the entire cluster, in addition to the above:

```
readAnyDatabase@admin
```

### Privileges required - Self-hosted / Custom roles

For self-hosted MongoDB, or for creating custom roles on MongoDB Atlas, PowerSync requires the following privileges/granted actions:

* On the database being replicated: `listCollections`
* On all collections in the database: `changeStream`
  * This must apply to the entire database, not individual collections. Specify `collection: ""`
  * If replicating from multiple databases, this must apply to the entire cluster. Specify `db: ""`
* On each collection being replicated: `find`
* On the `_powersync_checkpoints` collection: `createCollection`, `dropCollection`, `find`, `changeStream`, `insert`, `update`, and `remove`
* To allow PowerSync to automatically enable [`changeStreamPreAndPostImages`](#post-images) on replicated collections, additionally add the `collMod` permission on all replicated collections.

### Post-Images

To replicate data from MongoDB to PowerSync in a consistent manner, PowerSync uses Change Streams with [post-images](https://www.mongodb.com/docs/v6.0/reference/command/collMod/#change-streams-with-document-pre--and-post-images) to get the complete document after each change. This requires the `changeStreamPreAndPostImages` option to be enabled on replicated collections.

PowerSync supports three configuration options for post-images:

1. **Off** (`post_images: off`): Uses `fullDocument: 'updateLookup'` for backwards compatibility. This was the default for older instances. However, this may lead to consistency issues, so we strongly recommend enabling post-images instead.
2. **Automatic** (`post_images: auto_configure`): The **default** for new instances. Automatically enables the `changeStreamPreAndPostImages` option on collections as needed. Requires the permissions/privileges mentioned above. If a collection is removed from [Sync Rules](/usage/sync-rules), developers can manually disable `changeStreamPreAndPostImages`.
3. **Read-only** (`post_images: read_only`): Uses `fullDocument: 'required'` and requires `changeStreamPreAndPostImages: { enabled: true }` to be set on every collection referenced in the [Sync Rules](/usage/sync-rules). Replication will error if this is not configured. This option is ideal when permissions are restricted.

To manually configure collections for `read_only` mode, run this on each collection (replace `<collection>` with the collection name):

```js
db.runCommand({
  collMod: '<collection>',
  changeStreamPreAndPostImages: { enabled: true }
});
```

You can view which collections have the option enabled using:

```js
db.getCollectionInfos().filter(c => c.options?.changeStreamPreAndPostImages?.enabled)
```

Post-images can be configured for PowerSync instances as follows:

* Configure the **Post Images** setting in the connection configuration in the Dashboard (right-click on your instance to edit it).
* Configure `post_images` in the `config.yaml` file.

### MongoDB Atlas private endpoints using AWS PrivateLink

If you need to use private endpoints with MongoDB Atlas, see [Private Endpoints](/installation/database-setup/private-endpoints) (AWS only).

### Migrating from MongoDB Atlas Device Sync

For more information on migrating from Atlas Device Sync to PowerSync, see our [migration guide](/migration-guides/mongodb-atlas).

## MySQL (Alpha)

This section is a work in progress. More details for MySQL connections are coming soon. In the meantime, ask on our [Discord server](https://discord.gg/powersync) if you have any questions.

**Version compatibility**: PowerSync requires MySQL version 5.7 or greater.

MySQL connections use the [binary log](https://dev.mysql.com/doc/refman/8.4/en/binary-log.html) to replicate changes. Generally, this requires the following config:

* `gtid_mode`: `ON`
* `enforce_gtid_consistency`: `ON`
* `binlog_format`: `ROW`

PowerSync also requires a user with replication permissions on the database. An example:

```sql
-- Create a user with necessary privileges
CREATE USER 'repl_user'@'%' IDENTIFIED BY 'good_password';

-- Grant replication client privilege
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repl_user'@'%';

-- Grant access to the specific database
GRANT ALL PRIVILEGES ON powersync.* TO 'repl_user'@'%';

-- Apply changes
FLUSH PRIVILEGES;
```
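Before connecting PowerSync, it can help to confirm that the binary log settings listed above are actually in effect on the server. A minimal check, using standard MySQL statements:

```sql
-- Verify the binary log / GTID configuration required by PowerSync
SHOW VARIABLES WHERE Variable_name IN ('gtid_mode', 'enforce_gtid_consistency', 'binlog_format');
-- Expected values: gtid_mode = ON, enforce_gtid_consistency = ON, binlog_format = ROW
```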
## Next Step

Next, connect PowerSync to your database:

* Refer to **Database Connection**.
* Refer to **PowerSync Service Setup** in the Self-Hosting section.

# Private Endpoints
Source: https://docs.powersync.com/installation/database-setup/private-endpoints

## PowerSync Cloud: AWS Private Endpoints

To avoid exposing a database in AWS to the public internet, using AWS Private Endpoints ([AWS PrivateLink](https://aws.amazon.com/privatelink/)) is an option that provides private networking between the source database and the PowerSync Service.

Private Endpoints are currently available on our [Team and Enterprise plans](https://www.powersync.com/pricing).

We use Private Endpoints instead of VPC peering to ensure that no other resources are exposed between the VPCs.

Do not rely on Private Endpoints as the only form of security. Always use strong database passwords, and use client certificates if additional security is required.

## Current Limitations

1. Private Endpoints are currently only supported for Postgres and MongoDB instances. [Contact us](/resources/contact-us) if you need this for MySQL.
2. Self-service is not yet available on the PowerSync side — [contact PowerSync support](/resources/contact-us) to configure the instance.
3. Only AWS is supported currently — other cloud providers are not supported yet.
4. The **"Test Connection"** function on the [PowerSync Dashboard](/usage/tools/powersync-dashboard) is not supported yet - the instance has to be deployed to test the connection.

## Concepts

* [AWS PrivateLink](https://aws.amazon.com/privatelink/) is the overarching feature on AWS.
* VPC/Private Endpoint Service is the service that exposes the database, and lives in the same VPC as the source database. It provides a one-way connection to the database without exposing other resources in the VPC.
  * *Endpoint Service Name* is a unique identifier for this Endpoint Service.
  * Each Endpoint Service may have multiple Private Endpoints in different VPCs.
* VPC/Private Endpoint is the endpoint in the PowerSync VPC. This is what the PowerSync instance connects to.

For custom Endpoint Services for Postgres:

* Network Load Balancer (NLB) is a load balancer that exposes the source database to the Endpoint Service.
  * *Target Group* specifies the IPs and ports for the Network Load Balancer to expose.
  * *Listener* for the Network Load Balancer is what describes the incoming port on the Network Load Balancer (the port that the PowerSync instance connects to).

## Private Endpoint Setup

MongoDB Atlas supports creating an Endpoint Service per project for AWS.

Limitations:

1. Only Atlas clusters in AWS are supported.
2. The Atlas cluster must be in one of the PowerSync AWS regions - see the list below. Cross-region endpoints are not yet supported by MongoDB Atlas.
3. This is only supported for Atlas clusters - PowerSync does not support PrivateLink for MongoDB clusters self-hosted in AWS.

### 1. Configure the Endpoint Service

1. In the Atlas project dashboard, go to Network Access → Private Endpoint → Dedicated Cluster.
2. Select "Add Private Endpoint".
3. Select AWS and the relevant AWS region.
4. Wait for the Endpoint Service to be created.
5. "Your VPC ID" and "Your Subnet IDs" are not relevant for PowerSync - leave those blank.
6. Avoid running the command to create the "VPC Interface Endpoint"; this step is handled by PowerSync.
7. Note the Endpoint Service Name. This is displayed in the command to run, as the `--service-name` option. The Service Name should look something like `com.amazonaws.vpce.us-east-1.vpce-svc-0123456`.

Skip the final step of configuring the VPC Endpoint ID - this will be done later.

### 2. PowerSync Setup

On PowerSync, create a new instance, but do not configure the connection yet. Copy the Instance ID.

[Contact us](/resources/contact-us) and provide:

1. The Endpoint Service Name.
2. The PowerSync Instance ID.

We will then configure the instance to use the Endpoint Service for the database connection, and provide you with a VPC Endpoint ID, in the form `vpce-12346`.

### 3. Finish Atlas Endpoint Service Setup

On the Atlas Private Endpoint Configuration, in the final step, specify the VPC Endpoint ID from above. If you have already closed the dialog, go through the process of creating a Private Endpoint again. It should have the same Endpoint Service Name as before.

Check that the Endpoint Status changes to *Available*.

### 4. Get the Connection String

1. On the Atlas Cluster, select "Connect".
2. Select "Private Endpoint" as the connection type, and select the provisioned endpoint.
3. Select "Drivers" as the connection method, and copy the connection string.
The connection string should look something like `mongodb+srv://<username>:<password>@your-cluster-pl-0.abcde.mongodb.net/`.

### 5. Deploy

Once the Private Endpoint has been created on the PowerSync side, it will be visible in the instance settings under the connection details, as "VPC Endpoint Hostname". Configure the instance with the connection string from the previous step, then deploy.

Monitor the logs to ensure the instance can connect after deploying.

To configure a Private Endpoint Service, a network load balancer is required to forward traffic to the database. This can be used with a Postgres database running on an EC2 instance, or an RDS instance.

For AWS RDS, the guide below does not handle dynamic IPs if the RDS instance's IP changes. This needs additional work to automatically update the IP - see this [AWS blog post](https://aws.amazon.com/blogs/database/access-amazon-rds-across-vpcs-using-aws-privatelink-and-network-load-balancer/) on the topic. This is specifically relevant if using an RDS cluster with failover support.

Use the following steps to configure the Endpoint Service:

### 1. Create a Target Group

1. Obtain the RDS Instance's private IP address. Make sure this points to a writable instance.
2. Create a Target Group with IP addresses as target type, using the IP address from above. Use TCP protocol, and specify the database port (typically `5432` for Postgres).
3. Note: The IP address of your RDS instance may change over time. To maintain a consistent connection, consider implementing automation to monitor and update the target group's IP address as needed. See the [AWS blog post](https://aws.amazon.com/blogs/database/access-amazon-rds-across-vpcs-using-aws-privatelink-and-network-load-balancer/) on the topic.

### 2. Create a Network Load Balancer (NLB)

1. Select the same VPC as your RDS instance.
2. Choose at least two subnets in different availability zones.
3. Configure a TCP listener and pick a port (for example `5432` again).
4. Associate the listener with the target group created earlier.

### 3. Modify the Security Group

1. Modify the security group associated with your RDS instance to permit traffic from the load balancer IP range.

### 4. Create a VPC Endpoint Service

1. In the AWS Management Console, navigate to the VPC service and select Endpoint Services.
2. Click on "Create Endpoint Service".
3. Select the Network Load Balancer created in the previous step.
4. If the load balancer is in one of the PowerSync regions (see below), it is not required to select any "Supported Region". If the load balancer is in a different region, select the region corresponding to your PowerSync instance here. Note that this will incur additional AWS charges for the cross-region support.
5. Decide whether to require acceptance for endpoint connections. Disabling acceptance can simplify the process but may reduce control over connections.
6. Under "Supported IP address types", select both IPv4 and IPv6.
7. After creating the endpoint service, note the Service Name. This identifier will be used when configuring PowerSync to connect via PrivateLink.
8. Configure the Endpoint Service to accept connections from the principal `arn:aws:iam::131569880293:root`. See the [AWS documentation](https://docs.aws.amazon.com/vpc/latest/privatelink/configure-endpoint-service.html#add-remove-permissions) for details.

### 5. PowerSync Setup

On PowerSync, create a new instance, but do not configure the connection yet.
[Contact us](/resources/contact-us) and provide the Service Name from above, as well as the PowerSync instance ID created above. We will then configure the instance to use the Endpoint Service for the database connection.

### 6. Deploy

Once the Private Endpoint has been created on the PowerSync side, it will be visible in the instance settings under the connection details, as "VPC Endpoint Hostname". Verify the connection details, and deploy the instance.

Monitor the logs to ensure the instance can connect after deploying.

## AWS Regions

PowerSync currently runs in the AWS regions below. Make sure the region matching your PowerSync instance is supported by the Endpoint Service.

1. US: `us-east-1`
2. EU: `eu-west-1`
3. BR: `sa-east-1`
4. JP: `ap-northeast-1`
5. AU: `ap-southeast-2`

# Security & IP Filtering
Source: https://docs.powersync.com/installation/database-setup/security-and-ip-filtering

## TLS with Postgres

PowerSync always [enforces TLS](/usage/lifecycle-maintenance/postgres-maintenance#tls) on connections to the database, and certificate validation cannot be disabled.

## PowerSync Cloud: IP Filtering

For enhanced security, you can restrict database access to PowerSync Cloud's IP addresses. Below are the IP ranges for each region:

```
50.19.5.255
34.193.39.149
18.234.18.91
18.233.128.219
34.202.251.156
```

```
79.125.70.43
18.200.209.88
18.234.18.91
18.233.128.219
34.202.251.156
```

```
54.248.194.85
57.180.73.135
18.234.18.91
18.233.128.219
34.202.251.156
```

```
52.63.101.65
13.211.184.238
18.234.18.91
18.233.128.219
34.202.251.156
```

```
54.207.21.139
54.232.53.97
18.234.18.91
18.233.128.219
34.202.251.156
```

```
2602:817::/44
```

Do not rely on IP filtering as a primary form of security. Always use strong database passwords, and use client certificates if additional security is required.

## PowerSync Cloud: AWS Private Endpoints

See [Private Endpoints](./private-endpoints) for setting up a private network connection to your database using AWS PrivateLink (AWS only).

## See Also

* [Data Encryption](/usage/use-case-examples/data-encryption)
* [Security](/resources/security)

# Quickstart Guide / Installation Overview
Source: https://docs.powersync.com/installation/quickstart-guide

PowerSync is designed to be stack agnostic, and currently supports [Postgres](/installation/database-setup#postgres), [MongoDB](/installation/database-setup#mongodb) and [MySQL](/installation/database-setup#mysql-alpha) (alpha) as the backend source database, and has the following official client-side SDKs available:

* [**Flutter**](/client-sdk-references/flutter) (mobile and [web](/client-sdk-references/flutter/flutter-web-support))
* [**React Native**](/client-sdk-references/react-native-and-expo) (mobile and [web](/client-sdk-references/react-native-and-expo/react-native-web-support))
* [**JavaScript Web**](/client-sdk-references/javascript-web) (including integrations for React & Vue)
* [**Kotlin Multiplatform**](/client-sdk-references/kotlin-multiplatform)
* [**Swift**](/client-sdk-references/swift)
* [**Node.js**](/client-sdk-references/node) (alpha)
* [**.NET**](/client-sdk-references/dotnet) (alpha)

Support for additional platforms is on our [Roadmap](https://roadmap.powersync.com/). If one isn't supported today, please add your vote or submit a new idea on our roadmap, and check back soon.
**Postgres Developers: Using Supabase?** If you are using [Supabase](https://supabase.com/) as your backend, we provide a [PowerSync\<>Supabase](/integration-guides/supabase-+-powersync) integration guide which includes a tutorial and demo app to quickly learn how to use PowerSync with Supabase. ## Implementation Outline The following outlines our recommended steps to implement PowerSync in your project: Sign up for a free PowerSync Cloud account [here](https://accounts.journeyapps.com/portal/powersync-signup?s=docs) if you want to use our cloud-hosted service. PowerSync can also be self-hosted — see instructions in step 3. Configure your source database for PowerSync — see [Source Database Setup](/installation/database-setup). Connect your database to your instance of the PowerSync Service: 1. Using PowerSync Cloud: See [Database Connection](/installation/database-connection) 2. Using self-hosted PowerSync: Refer to [this section](/self-hosting/installation/powersync-service-setup#powersync-configuration). Define [Sync Rules](/usage/sync-rules) in PowerSync — this enables dynamic partial replication: syncing just a relevant subset of data to each user/client instead of your entire database. * Learn about Sync Rules in our introductory [blog post](https://www.powersync.com/blog/sync-rules-from-first-principles-partial-replication-to-sqlite). * We recommend starting with one or two simple [Global Data](/usage/sync-rules/example-global-data) queries. Generate a [Development Token](/installation/authentication-setup/development-tokens) so you can get up and running quickly, without implementing full authentication integration yet. Use our hosted [Diagnostics App](https://github.com/powersync-ja/powersync-js/tree/main/tools/diagnostics-app) to validate that your backend source database is syncing into SQLite as expected based on your Sync Rules. Implement PowerSync in your app using one of our Client SDKs: 1. At this point, we recommend continuing to use your Development Token from step 5 for simplicity. 2. To get a quick feel for PowerSync, you may want to implement a "Hello World" app as a start. Or you can jump straight into installing the client SDK in your existing app. See [Client-Side Setup](/installation/client-side-setup) or follow end-to-end getting started instructions in the [full SDK reference](/client-sdk-references/introduction). 3. Verify that downloads from your source database are working. Data should reflect in your UI and you can also [inspect the SQLite database](/resources/troubleshooting#inspect-local-sqlite-database). Implement authentication for clients (JWT-based) — see our [docs here](/installation/authentication-setup). Implement your [backend application](/installation/app-backend-setup) to accept and process writes from clients. * We have backend examples available [here](/resources/demo-apps-example-projects#backend-examples) for environments like Node.js, Django and Rails. ## Questions? Join us on [our community Discord server](https://discord.gg/powersync) where you can browse topics from the PowerSync community and ask questions. Our engineers are there to help, and we also have an AI bot on the [#gpt-help](https://discord.com/channels/1138230179878154300/1304118313093173329) channel that provides decent answers to common questions. 
# Deploy PowerSync Service on Coolify
Source: https://docs.powersync.com/integration-guides/coolify

Integration guide for deploying the [PowerSync Service](/architecture/powersync-service) on Coolify

[Coolify](https://coolify.io/) is an open-source, self-hosted platform that simplifies the deployment and management of applications, databases, and services on your own infrastructure. Think of it as a self-hosted alternative to platforms like Heroku or Netlify.

Before following this guide, you should:

* Read through the [PowerSync Service Setup](/self-hosting/installation/powersync-service-setup) guide to understand the requirements and configuration options. This guide assumes you have already done so, and will only cover the Coolify-specific setup.
* Have Coolify installed and running.

# Background

For the PowerSync Service to function correctly, you will need:

* A database,
* Authentication service, and
* Data upload service.

The easiest way to get started is to use **Supabase** as it provides all three. However, you can also use a different database, and custom authentication and data upload services.

# Steps

Add the [`Compose file`](/integration-guides/coolify#base-docker-compose-yaml-file) as a Docker Compose Empty resource to your project.

Update the environment variables and config files.

Click on the `Deploy` button to deploy the PowerSync Service.

The PowerSync Service will now be available at

* `http://localhost:8080` if default config was used, or
* `http://{your_coolify_domain}:{PS_PORT}` if a custom domain or port was specified.

To check the health of the PowerSync Service, see [Healthchecks](/self-hosting/lifecycle-maintenance/healthchecks).

# Configuration options

The following configuration options should be updated:

* Environment variables
* `sync_rules.yaml` file (according to your data requirements)
* `powersync.yaml` file
| Environment Variable | Value |
| --- | --- |
| `PS_DATABASE_TYPE` | `postgresql` |
| `PS_DATABASE_URI` | **Connection string obtained from Supabase.** See step 5 in [Connect PowerSync to Your Supabase](/integration-guides/supabase-+-powersync#connect-powersync-to-your-supabase) |
| `PS_PORT` | **Keep default value (8080)** |
| `PS_MONGO_URI` | `mongodb://mongo:27017` |
| `PS_JWKS_URL` | **Keep default value** |
```yaml {5}
...
# Client (application end user) authentication settings
client_auth:
  # Enable this if using Supabase Auth
  supabase: true
...
```
| Environment Variable | Value |
| --- | --- |
| `PS_DATABASE_TYPE` | `postgresql` OR `mongodb` OR `mysql` |
| `PS_DATABASE_URI` | The database connection URI (according to your database type) where your data is stored. |
| `PS_PORT` | **Default value (8080).** You can change this if you want the PowerSync Service to be available on a different port. |
| `PS_MONGO_URI` | `mongodb://mongo:27017` |
| `PS_JWKS_URL` | The URL of the JWKS endpoint of your authentication service. |
```yaml {5, 11-15,18, 23}
...
# Client (application end user) authentication settings
client_auth:
  # Enable this if using Supabase Auth
  supabase: false

  # JWKS URIs can be specified here
  jwks_uri: !env PS_JWKS_URL

  # Optional static collection of public keys for JWT verification
  jwks:
    keys:
      - kty: 'oct'
        k: 'use_a_better_token_in_production'
        alg: 'HS256'

  # JWKS audience
  audience: ["powersync-dev", "powersync", "http://localhost:8080"]

api:
  tokens:
    # These tokens are used for local admin API route authentication
    - use_a_better_token_in_production
```
# Base `Compose` file The following Compose file serves as a universal starting point for deploying the PowerSync Service on Coolify. ```yaml services: mongo: image: mongo:7.0 command: --replSet rs0 --bind_ip_all --quiet restart: unless-stopped ports: - 27017:27017 volumes: - mongo_storage:/data/db # Initializes the MongoDB replica set. This service will not usually be actively running mongo-rs-init: image: mongo:7.0 depends_on: - mongo restart: on-failure entrypoint: - bash - -c - 'mongosh --host mongo:27017 --eval ''try{rs.status().ok && quit(0)} catch {} rs.initiate({_id: "rs0", version: 1, members: [{ _id: 0, host : "mongo:27017" }]})''' # PowerSync Service powersync: image: journeyapps/powersync-service:latest container_name: powersync depends_on: - mongo-rs-init command: [ "start", "-r", "unified"] restart: unless-stopped environment: - NODE_OPTIONS="--max-old-space-size=1000" - POWERSYNC_CONFIG_PATH=/home/config/powersync.yaml - PS_DATABASE_TYPE=${PS_DEMO_BACKEND_DATABASE_TYPE:-postgresql} - PS_DATABASE_URI=${PS_DATABASE_URI:-postgresql://postgres:postgres@localhost:5432/postgres} - PS_PORT=${PS_PORT:-8080} - PS_MONGO_URI=${PS_MONGO_URI:-mongodb://mongo:27017} - PS_SUPABASE_AUTH=${USE_SUPABASE_AUTH:-false} - PS_JWKS_URL=${PS_JWKS_URL:-http://localhost:6060/api/auth/keys} ports: - ${PS_PORT}:${PS_PORT} volumes: - ./volumes/config:/home/config - type: bind source: ./volumes/config/sync_rules.yaml target: /home/config/sync_rules.yaml content: | bucket_definitions: user_lists: # Separate bucket per To-Do list parameters: select id as list_id from lists where owner_id = request.user_id() data: - select * from lists where id = bucket.list_id - select * from todos where list_id = bucket.list_id - type: bind source: ./volumes/config/powersync.yaml target: /home/config/powersync.yaml content: | # yaml-language-server: $schema=../schema/schema.json # Note that this example uses YAML custom tags for environment variable substitution. # Using `!env [variable name]` will substitute the value of the environment variable named # [variable name]. # migrations: # # Migrations run automatically by default. # # Setting this to true will skip automatic migrations. # # Migrations can be triggered externally by altering the container `command`. # disable_auto_migration: true # Settings for telemetry reporting # See https://docs.powersync.com/self-hosting/telemetry telemetry: # Opt out of reporting anonymized usage metrics to PowerSync telemetry service disable_telemetry_sharing: false # Settings for source database replication replication: # Specify database connection details # Note only 1 connection is currently supported # Multiple connection support is on the roadmap connections: - type: !env PS_DATABASE_TYPE # The PowerSync server container can access the Postgres DB via the DB's service name. # In this case the hostname is pg-db # The connection URI or individual parameters can be specified. 
# Individual params take precedence over URI params uri: !env PS_BACKEND_DATABASE_URI # Or use individual params # hostname: pg-db # From the Docker Compose service name # port: 5432 # database: postgres # username: postgres # password: mypassword # SSL settings sslmode: disable # 'verify-full' (default) or 'verify-ca' or 'disable' # 'disable' is OK for local/private networks, not for public networks # Required for verify-ca, optional for verify-full # This should be the certificate(s) content in PEM format # cacert: !env PS_PG_CA_CERT # Include a certificate here for HTTPs # This should be the certificate content in PEM format # client_certificate: !env PS_PG_CLIENT_CERT # This should be the key content in PEM format # client_private_key: !env PS_PG_CLIENT_PRIVATE_KEY # This is valid if using the `mongo` service defined in `ps-mongo.yaml` # Connection settings for sync bucket storage storage: type: mongodb uri: !env PS_MONGO_URI # Use these if authentication is required. The user should have `readWrite` and `dbAdmin` roles # username: my-mongo-user # password: my-password # The port which the PowerSync API server will listen on port: !env PS_PORT # Specify sync rules sync_rules: path: /home/config/sync_rules.yaml # Client (application end user) authentication settings client_auth: # Enable this if using Supabase Auth supabase: true # JWKS URIs can be specified here jwks_uri: !env PS_JWKS_URL # Optional static collection of public keys for JWT verification # jwks: # keys: # - kty: 'RSA' # n: !env PS_JWK_N # e: !env PS_JWK_E # alg: 'RS256' # kid: !env PS_JWK_KID # JWKS audience audience: ["powersync-dev", "powersync"] api: tokens: # These tokens are used for local admin API route authentication - use_a_better_token_in_production ``` {/* # Steps Add the PowerSync Service resource to your project by either scrolling through the `Services` section or by searching for `powersync` in the search bar. The default one-click deployable PowerSync Service uses * MongoDB for internal storage, * PostgreSQL for replication, and * [Sync Rules](/usage/sync-rules) as defined for the To-Do List demo application found in [Demo Apps / Example Projects](/resources/demo-apps-example-projects). If you are running the demo To-Do List application, you can jump to Step 4 and simply deploy the PowerSync Service. Navigate to the `Environment Variables` tab and update the environment variables as per your requirements. For more information on what environment variables are available, see [Environment Variables](/tutorials/self-host/coolify#environment-variables). Navigate to the `Storages` tab and update the `sync_rules.yaml` and `powersync.yaml` files as needed. For more information see [Sync Rules](/usage/sync-rules) and the skeleton config file in [PowerSync Service Setup](/self-hosting/installation/powersync-service-setup). You can expand the content by dragging the bottom right corner of the editor. There are two parameters whose values should be changed manually if necessary. - `disable_telemetry_sharing` in telemetry, and - `supabase` in client_auth Click on the `Deploy` button to deploy the PowerSync Service. The PowerSync Service will now be available at * `http://localhost:8080` if default config was used, or * `http://{your_coolify_domain}:{PS_PORT}` if a custom domain or port was specified. */} {/* ## What to do next */} {/* Update your backend/client `.env` file with the PowerSync URL from [Step 4](#step-4-deploy-the-powersync-service) above. 
For this example we assume we have an environment variable named `POWERSYNC_URL`. ```bash POWERSYNC_URL=http://localhost:8080 ``` */} {/* ## Environment Variables
Environment Variable Description Example
POWERSYNC_CONFIG_PATH This is the path (inside the container) to the YAML config file /home/config/powersync.yaml
PS_DATABASE_TYPE Database replication type postgresql
PS_BACKEND_DATABASE_URI Database connection URI postgresql://postgres:postgres@localhost:5432/postgres
PS_PORT The port the PowerSync API is accessible on 8080
PS_MONGO_URI The MongoDB URI used internally by the PowerSync Service mongodb://mongo:27017
PS_JWKS_URL Auth URL http://localhost:6060/api/auth/keys
*/} # FlutterFlow + PowerSync Source: https://docs.powersync.com/integration-guides/flutterflow-+-powersync Integration guide for creating local-first apps with FlutterFlow and PowerSync with Supabase as the backend. Used in conjunction with **FlutterFlow**, PowerSync enables developers to build local-first apps that are robust in poor network conditions and that have highly responsive frontends while relying on Supabase for their backend. This guide walks you through configuring PowerSync within your FlutterFlow project that has Supabase integration enabled. **New and Improved integration**: Welcome to our updated FlutterFlow integration guide. This version introduces a dedicated [PowerSync FlutterFlow Library](https://marketplace.flutterflow.io/item/dm1cuOwYzDv6yQL2QOFb), offering a simpler and more robust solution compared to the [previous version](/integration-guides/flutterflow-+-powersync/powersync-+-flutterflow-legacy) which required extensive custom code. Key improvements are: * Uses the new [PowerSync FlutterFlow Library](https://marketplace.flutterflow.io/item/dm1cuOwYzDv6yQL2QOFb) * Supports Web-based test mode * Streamlined Setup * No more dozens of custom actions * Working Attachments package - learn how to sync attachments [here](/integration-guides/flutterflow-+-powersync/handling-attachments). Note that using libraries in FlutterFlow requires being on a [paid plan with FlutterFlow](https://www.flutterflow.io/pricing). If this is not an option for you, you can use our [legacy guide](/integration-guides/flutterflow-+-powersync/powersync-+-flutterflow-legacy) with custom code to integrate PowerSync in your FlutterFlow project. This guide uses **Supabase** as the backend database provider for its seamless integration with PowerSync. However, you can integrate a different backend using custom actions. For more information, refer to the [Custom backend connectors](#custom-backend-connectors) section. ## Guide Overview Before starting this guide, you'll need: * A PowerSync account ([sign up here](https://accounts.journeyapps.com/portal/powersync-signup?s=docs)). * A Supabase account ([sign up here](https://supabase.com/dashboard/sign-up)). * A [paid plan](https://www.flutterflow.io/pricing) with FlutterFlow for the ability to import a Library into a project. This guide walks you through building a basic item management app from scratch and takes about 30-40 minutes to complete. You should then be able to use this knowledge to build and extend your own app. 1. Configure Supabase and PowerSync Prerequisites 2. Initialize Your FlutterFlow Project 3. Build a Sign-in Screen 4. Read Data 5. Create Data 6. Update Data (Guide coming soon) 7. Delete Data 8. Sign Out 9. (New) Display Connectivity and Sync Status 10. Secure Your App 11. Enable RLS in Supabase 12. Update Sync Rules in PowerSync ## Configure Supabase 1. Create a new project in Supabase. 2. To set up the Postgres database for our demo app, we will create a `lists` table. The demo app will have access to this table even while offline. Run the below SQL statement in your **Supabase SQL Editor**: ```sql create table public.lists ( id uuid not null default gen_random_uuid (), created_at timestamp with time zone not null default now(), name text not null, owner_id uuid not null, constraint lists_pkey primary key (id), constraint lists_owner_id_fkey foreign key (owner_id) references auth.users (id) on delete cascade ) tablespace pg_default ``` 3. 
PowerSync uses the Postgres [Write Ahead Log (WAL)](https://www.postgresql.org/docs/current/wal-intro.html) to replicate data changes in order to keep PowerSync SDK clients up to date. Run the below SQL statement in your **Supabase SQL Editor** to create a Postgres role/user with replication privileges: ```sql -- Create a role/user with replication privileges for PowerSync CREATE ROLE powersync_role WITH REPLICATION BYPASSRLS LOGIN PASSWORD 'myhighlyrandompassword'; -- Set up permissions for the newly created role -- Read-only (SELECT) access is required GRANT SELECT ON ALL TABLES IN SCHEMA public TO powersync_role; ``` To restrict read access to specific tables, explicitly list allowed tables for both the `SELECT` privilege, and for the publication mentioned in the next step (as well as for any other publications that may exist). 4. Create a Postgres publication using the SQL Editor. This will enable data to be replicated from Supabase so that your FlutterFlow app can download it. ```sql -- Create a publication to replicate tables. -- Specify a subset of tables to replicate if required. -- The publication must be named "powersync" CREATE PUBLICATION powersync FOR ALL TABLES; ``` ## Configure PowerSync ### Create a PowerSync Cloud Instance 1. In the **Overview** workspace of the [PowerSync Dashboard](/usage/tools/powersync-dashboard), you will be prompted to create your first instance: If you've previously created an instance in your project, you can create an additional instance by navigating to **Manage instances** and clicking **Create new instance**: You can also create an entirely new [project](/usage/tools/powersync-dashboard#hierarchy%3A-organization%2C-project%2C-instance) with its own set of instances. Click on the PowerSync icon in the top left corner of the Dashboard or on **Admin Portal** at the top of the Dashboard, and then click on **Create Project**. 2. Give your instance a name, such as "Testing". 3. \[Optional] You can change the default cloud region from US to EU, JP (Japan), AU (Australia) or BR (Brazil) if desired. * Note: Additional cloud regions will be considered on request, especially for customers on our Enterprise plan. Please [contact us](/resources/contact-us) if you need a different region. 4. \[Optional] You can opt in to using the `Next` version of the Service, which may contain early access or experimental features. Always use the `Stable` version in production. 5. Click **Next**. ### Connect PowerSync to Your Supabase 1. From your Supabase Dashboard, select **Connect** in the top navigation bar (or follow this [link](https://supabase.com/dashboard/project/_?showConnect=true)): 2. In the **Direct connection** section, copy the complete connection string (including the `[YOUR-PASSWORD]` placeholder) 3. Back in the PowerSync Dashboard, paste the connection string into the **URI** field. PowerSync will automatically parse this URI to populate the database connection details. 4. Update the **Username** and **Password** fields to use the `powersync_role` and password you created when configuring your Supabase for PowerSync (see [Source Database Setup](/installation/database-setup#supabase)). 5. Note: PowerSync includes Supabase's CA certificate by default, so you can use `verify-full` SSL mode without additional configuration. 6. Your connection settings should look similar to this: 7. Verify your setup by clicking **Test Connection** and resolve any errors. 8. Click **Next**. 9. 
PowerSync will detect the Supabase connection and prompt you to enable Supabase auth. To enable it, copy your JWT Secret from your project's settings ([JWT Keys](https://supabase.com/dashboard/project/_/settings/jwt) section in the Supabase dashboard) and paste it here: 10. Click **Enable Supabase auth** to finalize your connection settings. PowerSync will now create an isolated cloud environment for your instance. This typically takes a minute or two. You can update your instance settings by navigating to the **Manage instances** workspace, opening your instance options and selecting **Edit instance**. ### Configure Sync Rules [Sync Rules](/usage/sync-rules) allow developers to control which data gets synced to which user devices using a SQL-like syntax in a YAML file. For the demo app, we're going to specify that each user can only see their own lists. 1. To update your Sync Rules, open the `sync-rules.yaml` file. 2. Replace the `sync-rules.yaml` file's contents with the below: ```yaml # This will sync the entire table to all users - we will refine this later bucket_definitions: global: data: - SELECT * FROM lists ``` 3. In the top right, click **"Validate sync rules"** and ensure there are no errors. This validates your sync rules against your Postgres database. 4. In the top right, click **"Deploy sync rules"** and select your instance. 5. Confirm in the dialog and wait a couple of minutes for the deployment to complete. * For additional information on PowerSync's Sync Rules, refer to the [Sync Rules](/usage/sync-rules) documentation. * If you're wondering how Sync Rules relate to Supabase Postgres [RLS](https://supabase.com/docs/guides/auth/row-level-security), see [this subsection](/integration-guides/supabase-+-powersync/rls-and-sync-rules). ## Initialize Your FlutterFlow Project 1. Create a new Blank project in FlutterFlow. 2. Under **"App Settings" -> "Integrations"**, enable "Supabase". 1. Enter your Supabase "API URL" and public "Anon Key". You can find these under **"Project Settings" -> "API Keys" -> `anon` `public`** in your Supabase dashboard. 2. Click "Get Schema". 3. Add the [PowerSync Library](https://marketplace.flutterflow.io/item/dm1cuOwYzDv6yQL2QOFb) to your FlutterFlow account. 4. Under **"App Settings" -> "Project Dependencies" -> "FlutterFlow Libraries"** click "Add Library". 1. Select the "PowerSync" library. 2. Add your schema: 1. On the PowerSync Dashboard, right-click on your instance and select "Generate Client-Side Schema" and select "FlutterFlow" as the language. 2. Copy and paste the generated schema into the "PowerSyncSchema" field. 3. Copy and paste your PowerSync instance URL into the "PowerSyncUrl" field. 4. Note: The default path for the "HomePage" field under "Library Pages" can be left as is and ignored. FlutterFlow does not currently provide a way to remove it. 5. Close the library config. 5. Under **"Custom Pub Dependencies"**, add a dependency on `powersync_core:1.3.0`: This version of `powersync_core` is required for running FlutterFlow on Web. ## Build A Sign-In Screen 1. Under the **"Page Selector"**, click **"Add Page, Component, or Flow"**. 2. Select the "Auth 1" template and name the page `Login`. 3. Delete the *Sign Up*, *Forgot Password* and *Social Login* buttons — we will only be supporting Sign In for this demo app. 4. Under **"App Settings" -> "App Settings" -> "Authentication"**: 1. Enable Authentication. 2. Set "Authentication Type" to "Supabase". 3. Set "Entry Page" to the `Login` page you just created. 4. 
Set "Logged In Page" to "HomePage". 5. In your Supabase Dashboard, under **"Authentication"**, click on **"Add User" -> "Create new user"** and create a user for yourself to test with: 6. Test your app with test mode: **Checkpoint:** You should now be able to log into the app using the Supabase user account you just created. After logging in you should see a blank screen. ## Read Data We will now create our first UI and bind it to the data in the local SQLite database on the device. There are three ways to read data from the SQLite database using PowerSync's FlutterFlow library: 1. Auto-updating queries for Layout Elements with Dynamic Children e.g. the ListView Element * This uses the library's `PowerSyncQuery` component. 2. Auto-updating queries for basic Layout Elements e.g. Text Elements. * This uses the library's `PowerSyncStateUpdater` component. 3. Once-off reads for static data. * This uses the library's `PowerSyncQueryOnce` custom action. ### Prepare Supabase Tables for Reads For reading data in FlutterFlow, you need a Custom Function per Supabase table to map Supabase rows to data that can be used by the library. This is because FlutterFlow Libraries do not support Supabase classes. 1. Navigate to **"Custom Code"** and add a Custom Function. 2. Name the function `supabaseRowsToList` (if your Supabase table name is "Customers", you would name this `supabaseRowsToCustomers`). 3. Under **Function Settings** on the right, set the "Return Value" to `Supabase Row` 1. Check "Is List". 2. Uncheck "Nullable". 3. Under "Table Name", select `lists`. 4. Also under Function Settings, click "Add Arguments". 1. Set its "Name" to `supabaseRows` 2. Set its "Type" to "JSON". 3. Check "Is List". 4. Uncheck "Nullable". 5. In the Function Code, paste the following code: ```dart /// MODIFY CODE ONLY BELOW THIS LINE return supabaseRows.map((r) => ListsRow(r)).toList(); ``` 6. Click "Save Function". ### 1. Auto-updating queries for Layout Elements with Dynamic Children #### Create a Component to display List Items 1. Under the **"Page Selector"**, click **"Add Page, Component, or Flow"**. 2. Select the **"New Component"** tab. 3. Select "Create Blank" and call the component `ListItems`. 4. Under the **"Widget Palette"**, drag a "ListView" widget into the `ListItems` component. 5. Still under the **"Widget Palette"**, drag a "ListTile" into the `ListView` widget. 6. Under the **"Widget Tree"**, select the `ListItems` component. 1. At the top right under "Component Parameters" click "Add Parameters". 2. Click "Add Parameter". 3. Set its "Name" to `lists`. 4. Set its "Type" to `Supabase Row`. 5. Check "Is List". 6. Under "Table Name", select `lists`. 7. Click "Confirm". 7. Still under the **"Widget Tree"**, select the "ListView" widget. 1. Select the **"Generate Dynamic Children"** panel on the right. 2. Set the "Variable Name" to `listItem`. 3. Set the "Value" to the component parameter created in the previous step (`lists`). 4. Click "Confirm". 5. Click "Save". 6. Click "Ok" when being prompted about the widget generating its children dynamically. 8. Still under the **"Widget Tree"**, select the `ListTile` widget. 1. In the **"Properties"** panel on the right, under "Title", click on the settings icon next to "Text". 2. Set as "listItem Item". 3. Under "Available Options", select "Get Row Field". 4. Under "Supabase Row Fields", select "name". 5. Click "Confirm". 9. Repeat Step 8 above for the "Subtitle", setting it to "created\_at". #### Display the List Component and populate it with Data 1. 
Under the **"Page Selector"**, select your `HomePage`. 2. Under the **"Widget Palette"**, select the "Components and custom widgets imported from library projects" panel. 3. Drag the `PowerSyncQuery` library component into your page. 4. In the Properties panel on the right, under **"Component Parameters" -> "child"**: 1. Click on "Unknown". 2. Select `ListItems` we previously created. 3. Click on `lists`. 4. Set the "Value" to "Custom Functions" -> `supabaseRowsToList` we created previously. 5. Under the `supabaseRows` argument, set the "Value" to "Widget Builder Parameters" -> `rows`. 6. Click "Confirm". 7. Click "Confirm". 5. Still under "Component Parameters" add the SQL query to fetch all list items from the SQLite database: 1. Paste the following into the "sql \[String]" field: `select * from lists order by created_at;` 2. For this query there are no parameters - this will be covered further down in the guide. 6. Still under "Component Parameters", check "watch \[Boolean]". This ensures that the query auto-updates. #### Test your App 1. Check that there are no project issues or errors. 2. Reload your app or start another test session. 3. Notice that your homepage is still blank. This is because the `lists` table is empty in Supabase. Create a test row in the table by clicking on "Insert" -> "Insert Row" in your Supabase Table Editor. 1. Leave `id` and `created_at` blank. 2. Enter a name such as "Test from Supabase". 3. Click "Select Record" for `owner_id` and select your test user. **Checkpoint:** You should now see your single test row magically appear in your app: ### 2. Auto-updating queries for basic Layout Elements In this section, we will be making the `ListView` component clickable and navigate the user to a page which will eventually display the list's To-Do items. This page will show the selected list's name in the title bar ("AppBar"). This uses Page State and the `PowerSyncStateUpdater` library component. #### Create a Page Parameter This parameter will store the selected list's ID. 1. Under the **"Page Selector"**, click **"Add Page, Component, or Flow"**. 2. Create a blank page and name it `Todos`. 3. Under the **"Widget Tree"**, select your `Todos` page. 4. At the top right of the **"Properties"** panel on the right, click on the plus icon for Page Parameters. 5. Click "Add Parameter". 6. Set the "Parameter Name" to `id`. 7. Set the "Type" to "String". 8. Click "Confirm". #### Create a Local Page State Variable This variable will store the selected list row. 1. Still in the **"Widget Tree"** with the `Todos` page selected: 2. Select the **"State Management Panel"** on the right. 3. Click on "Add Field". 4. Set "Field Name" to `list`. 5. Set the "Type" to "Supabase Row". 6. Under "Table Name", select `lists`. 7. Click "Confirm". #### Bind the Page Title to the Page State 1. Under the **"Widget Palette"**, select the "Components and custom widgets imported from library projects" panel. 2. Drag the `PowerSyncStateUpdater` library component into your page. 3. Under the **"Widget Tree"**, select the `PowerSyncStateUpdater` component. 4. In the **"Properties"** panel on the right, under "Component Parameters": 1. Add the SQL query to fetch the selected list from the SQLite database. Paste the following into the "sql \[String]" field: `select * from lists where id = :id;` 2. Click on "parameters \[Json]" select "Create Map (JSON)" as the variable. 1. Under "Add Map Entries", click "Add Key Value Pair". 2. Set the "Key" to `id`. 3. 
Set the "Value" to the page parameter created previously called `id`. 4. Check "watch \[Boolean]". This ensures that the query auto-updates. 5. Click "Confirm". 5. Still under "Component Parameters", configure the "onData" action: 1. Open the "Action Flow Editor". 2. Select the "Callback" trigger type. 3. Click "Add Action". 4. Search for "update page" and select "Update Page State". 5. Click "Add Field". 6. Select your `list` page state variable. 7. Set "Select Update Type" to "Set Value". 8. Set "Value to set" to "Custom Functions" -> `supabaseRowsToList`. 9. Set the "Value" to "Callback Parameters" -> `rows` 10. Click "Confirm". 11. Under "Available Options", select "Item at Index". 12. Set "List Index Options" to "First" 13. Click "Confirm". 14. Close the Action Flow Editor. 6. Still under the **"Widget Tree"**, select the "AppBar" -> "Text" widget. 1. In the **"Properties"** panel on the right, click on settings icon next to "Text". 2. Click on "Page State" -> "List". 3. Set "Supabase Row Fields" to "name". 4. (Optional) Set the "Default Variable Value" to `List Name`. 5. Click "Confirm". #### Make the `ListView` Component Clickable 1. Under the **"Page Selector"**, select your `ListItems` component. 2. Under the **"Widget Tree"**, select the `ListTile` widget. 3. In the **"Actions"** panel on the right, click "Add Action". "On Tap" should be selected by default. 4. In the "Navigation" subsection, select "Navigate To". 5. Select the "Todos" page. 6. Under "Parameters" click "Pass". 7. "id" should be auto-selected, click on it. 8. Click on the settings icon next to "Value" 9. Set it to "listItem Item". 10. Under "Available Options" select "Get Row Field" 11. Under "Supabase Row Fields" select "id". 12. Click "Confirm". 13. (Optional) Enable the back button to navigate back: 1. Under the **"Page Selector"**, select your `Todos` page. 2. Under the **"Widget Tree"**, select the "AppBar" component. 3. In the **"Properties"** panel on the right, enable "Show Default Button". #### Test your App Instant Reload your app or start another test session. **Checkpoint:** You should now be able to click on a list item and it should navigate you to a new page showing the name of the list in the title bar: ### 3. Once off reads for static data This section is a work in progress. Please reach out on [our Discord](https://discord.gg/powersync) if you have any questions. ## Create Data You will now update the app so that we can capture new list entries. 1. Under the **"Page Selector"**, select your `HomePage` page. 2. Under the **"Widget Palette"**, search for "float" and drag the "FAB" widget onto your page. 3. In the **"Actions"** panel on the right, click "Add Action". 1. Under "Custom Action" -> "PowerSync", select "powersyncWrite". 2. Under the "Set Action Arguments" -> "sql" section, add the SQL query to create a new list item. For the purpose of this guide we are hardcoding the list's name, normally you would build UI for this. 1. Paste the following into the "Value" field: `INSERT INTO lists(id, created_at, name, owner_id) VALUES(uuid(), datetime(), 'new item', :userId);` 3. Under the "parameters" section, set the `userId` parameter we're using the above query: 1. Click on "UNSET". 2. Select "Create Map (JSON)" as the variable. 3. Under "Add Map Entries", click "Add Key Value Pair". 4. Set the "Key" to `userId`. 5. Set the "Value" to "Authenticated User" -> "User ID". 6. Click "Confirm". **Checkpoint:** Reload your app and click on the + floating action button. 
A new list item should appear, which also automatically syncs to Supabase: ## Update Data Updating data is possible today using the `powersyncWrite` helper of the Library, and a guide will be published soon. In the meantime, use the section below about [Deleting Data](#delete-data) as a reference. Please reach out on [our Discord](https://discord.gg/powersync) if you have any questions. ## Delete Data In this section we will add the ability to swipe on a `ListTile` to delete it. 1. Under the **"Page Selector"**, select your `ListItems` component. 2. Under the **"Widget Tree"**, select the `ListTile` widget. 3. In the **"Properties"** panel on the right, enable "Slidable". 4. Click "Open Slidable". 5. Select the "SlidableActionWidget". 6. In the **"Actions"** panel on the right, click "Add Action". 1. Under "Custom Action" -> "PowerSync", select "powersyncWrite". 2. Under the "Set Action Arguments" -> "sql" section, add the SQL query to delete the list item. 1. Paste the following into the "Value" field: `delete from lists where id = :id;` 3. Under the "parameters" section, set the `id` parameter we're using in the above query: 1. Click on "UNSET". 2. Select "Create Map (JSON)" as the variable. 3. Under "Add Map Entries", click "Add Key Value Pair". 4. Set the "Key" to `id`. 5. Set the "Value" to "listItem Item". 6. Under "Available Options" select "Get Row Field". 7. Under "Supabase Row Fields" select "id". 8. Click "Confirm". 9. Click "Confirm". **Checkpoint:** Reload your app and swipe on a list item. Delete it, and note how it is deleted from the list as well as from Supabase. ## Sign Out 1. Navigate to **"Custom Code"** and create a new Custom Action called `signOut` without Arguments or Return Values and paste the below code: In the below code, `power_sync_b0w5r9` is the project ID of the PowerSync library. Update it if it changes. ```dart // Automatic FlutterFlow imports import '/backend/supabase/supabase.dart'; import "package:power_sync_b0w5r9/backend/schema/structs/index.dart" as power_sync_b0w5r9_data_schema; import 'package:ff_theme/flutter_flow/flutter_flow_theme.dart'; import '/flutter_flow/flutter_flow_util.dart'; import '/custom_code/actions/index.dart'; // Imports other custom actions import '/flutter_flow/custom_functions.dart'; // Imports custom functions import 'package:flutter/material.dart'; // Begin custom action code // DO NOT REMOVE OR MODIFY THE CODE ABOVE! import 'package:power_sync_b0w5r9/custom_code/actions/initialize_power_sync.dart' as ps; Future signOut() async { final database = await ps.getOrInitializeDatabase(); //await database.disconnectAndClear(); // this will completely delete all the local data, use with caution as there may be items still in the upload queue await database.disconnect(); // this will simply disconnect from the PowerSync Service and preserve all local data } // Set your action name, define your arguments and return parameter, // and then add the boilerplate code using the green button on the right! ``` 2. Click "Save Action". 3. Under the **"Page Selector"**, select your `HomePage` page. 4. Under the **"Widget Palette"**, drag a "Button" onto the right of your "AppBar". 5. In the **"Properties"** panel on the right, rename the "Button Text" to `Sign Out`. 6. Switch to the **"Actions"** panel and open the **"Action Flow Editor"**. 7. Select "On Tap" as the action trigger. 8. Click "Add Action" and add a call to the `signOut` Custom Action. 9. Chain another Action and add a call to "Supabase Authentication" -> "Log Out": 10. Click "Close". 
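If you also want to remove all local data when a user signs out (for example on a shared device), a variant of the `signOut` action above could call `disconnectAndClear()` instead of `disconnect()`. The following is a minimal sketch using the same library helper; note the caution in the code comments above, since any entries still in the upload queue are lost when local data is cleared:

```dart
import 'package:power_sync_b0w5r9/custom_code/actions/initialize_power_sync.dart'
    as ps;

Future<void> signOutAndClear() async {
  final database = await ps.getOrInitializeDatabase();
  // Disconnects from the PowerSync Service and deletes all local data.
  // Only do this once you are confident all local writes have been uploaded.
  await database.disconnectAndClear();
}
```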
**Checkpoint:** You should now be able to reload your app and sign out and in again. ## (Optional) Display Connectivity and Sync Status The PowerSync library provides a built-in component that displays real-time connectivity and synchronization status. Since the sync state is available globally as part of your app state, you can easily monitor the database status throughout your application. To add this status indicator: 1. Under the **Widget Palette**, select the "Components and custom widgets imported from library projects" panel. 2. Drag the `PowerSyncConnectivity` component into your home page's "AppBar". ## Secure Your App PowerSync's [Sync Rules](/usage/sync-rules) and Supabase's support for [Row Level Security (RLS)](https://supabase.com/docs/guides/auth/row-level-security) can be used in conjunction. Here are some high-level similarities and differences: * RLS should be used as the authoritative set of security rules applied to your users' CRUD operations that reach Postgres. * Sync Rules are only applied for data that is to be downloaded to clients — they do not apply to uploaded data. * Sync Rules can typically be considered to be complementary to RLS, and will generally mirror your RLS setup. ### Enable RLS in Supabase Run the below in your Supabase console to ensure that only list owners can perform actions on the lists table where `owner_id` matches their user id: ```sql alter table public.lists enable row level security; create policy "owned lists" on public.lists for ALL using ( auth.uid() = owner_id ); ``` ### Update Sync Rules Currently all lists are synced to all users, regardless of who the owner of the list is. You will now update this so that only a user's lists are synced to their device: 1. Navigate to the [PowerSync Dashboard](https://powersync.journeyapps.com/) and open your `sync-rules.yaml` file. 2. Delete the existing content and paste the below contents: ```yaml bucket_definitions: user_lists: parameters: select request.user_id() as user_id data: - select * from lists where owner_id = bucket.user_id ``` 3. Click on **"Validate"**. 4. Click on **"Deploy sync rules"**. 5. Wait for the deploy to complete. **Checkpoint:** Your app should continue running seamlessly as before. ## Arrays, JSON and Other Types For column values, PowerSync supports three basic types: integers, doubles, and strings. These types have been chosen because they're natively supported by SQLite while also being easy to transport as JSON. Of course, you may want to store other values in your Postgres database as well. When syncing a value that doesn't fit into the three fundamental types, PowerSync will [encode it as a JSON string](/usage/use-case-examples/custom-types-arrays-and-json#custom-types). To use those values in your app, you'll need to apply a mapping so that you display the correct values and use the correct representation when uploading data. As an example, let's consider an added `tags` column on the `lists` table used in this guide. These tags will be encoded as a string array in Postgres: ```sql CREATE TABLE public.lists ( -- ... existing columns, tags text[] DEFAULT '{"default", "tags"}' ); ``` Like all array values, PowerSync will transport this as a JSON string. For instance, a row with the default tags would be represented as this string: `["default", "tags"]`. 
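To see this encoding in practice, you could read the raw value from the local SQLite database. Below is a minimal sketch, assuming the `db` instance exposed by the library's `initialize_power_sync.dart` action file:

```dart
import 'package:power_sync_b0w5r9/custom_code/actions/initialize_power_sync.dart'
    show db;

Future<void> printTags() async {
  final row = await db.get('SELECT tags FROM lists LIMIT 1');
  // Prints the JSON-encoded string '["default", "tags"]', not a Dart List.
  print(row['tags']);
}
```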
FlutterFlow does not support extracting a list from that string, so the [custom functions](#read-data) responsible for mapping SQLite rows to FlutterFlow classes need to be aware of the transformation and reverse it: ```dart /// MODIFY CODE ONLY BELOW THIS LINE return supabaseRows.map((r) { return ListsRow({ ...r, 'tags': jsonDecode(r['tags'] as String), }); }).toList(); ``` This transforms the `'["default", "tags"]'` string as it appears in the local database into `["default", "tags"]`, the list value expected for this row. A similar approach is necessary when making local writes. The local database should stay consistent with the data synced by PowerSync, so all [local writes](#create-data) should store array and JSON values as JSON-encoded strings. Finally, the PowerSync mapping also needs to be reverted when uploading rows to Postgres. For a `text[]` column, for instance, the local string value would not be accepted by Supabase. For this reason, the upload behavior for columns with advanced types needs to be customized. **New feature:** This option has been added in version `0.0.7` of the PowerSync FlutterFlow library. Please make sure you're using that version or later. To customize the uploading behavior, create a new custom action (e.g. `applyPowerSyncOptions`). After the default imports, put this snippet: ```dart import 'package:power_sync_b0w5r9/custom_code/actions/initialize_power_sync.dart'; Future applyPowerSyncOptions() async { // Add your function code here! powerSyncOptions.transformData = (table, data) { switch (table) { case 'lists': data['tags'] = jsonDecode(data['tags'] as String); } }; } ``` Also, add this function to your `main.dart` as a final action. When setting `powerSyncOptions.transformData`, a callback is invoked every time a created or updated row is uploaded to Supabase. This allows you to customize how individual values are represented for Postgres. In this case, the `tags` column of the `lists` table is decoded as JSON so that it's uploaded as a proper array while being stored locally as a JSON string. ## Custom Backend Connectors To enable an easy setup, the PowerSync FlutterFlow library integrates with Supabase by default. This means that as long as you use Supabase for authentication in your app, PowerSync will automatically connect as soon as users log in, and can automatically upload local writes to a Supabase database. For apps that don't use Supabase, you can disable this default behavior and instead rely on your own backend connectors. For this, create your own custom action (e.g. `applyPowerSyncOptions`). It's important that this action runs before anything else in your app uses PowerSync, so add this action to your `main.dart` as a final action. ```dart import 'package:power_sync_b0w5r9/custom_code/actions/initialize_power_sync.dart'; import 'package:powersync/powersync.dart' as ps; Future applyPowerSyncOptions() async { // Disable the default Supabase integration powerSyncOptions.useSupabaseConnector = false; final db = await getOrInitializeDatabase(); // TODO: Write your own connector and call connect/disconnect when a user logs // in. 
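  // A typical connector shape (a sketch, not part of the library): fetchCredentials()
  // returns PowerSyncCredentials containing your PowerSync instance endpoint and a JWT
  // issued by your own auth system, while uploadData() reads pending local writes with
  // database.getCrudBatch(), sends them to your backend API, and then calls
  // batch.complete() so the next batch can be processed.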
db.connect(connector: _MyCustomConnector()); } final class _MyCustomConnector extends ps.PowerSyncBackendConnector { @override Future fetchCredentials() { // TODO: implement fetchCredentials throw UnimplementedError(); } @override Future uploadData(ps.PowerSyncDatabase database) { // TODO: implement uploadData throw UnimplementedError(); } } ``` For more information on writing backend connectors, see [integrating with your backend](/client-sdk-references/flutter#3-integrate-with-your-backend). ## Known Issues, Limitations and Gotchas Below is a list of known issues and limitations. 1. Deploying to the Apple App Store currently requires some workarounds due to limitations in FlutterFlow: 1. Download the code from FlutterFlow. 2. Open the `Podfile` located in the `ios/` directory. 3. The following option in the `Podfile` needs to be updated from `use_frameworks! :linkage => :static` to `use_frameworks!` (remove everything after the exclamation sign). 4. After removing that option, clean the build folder and build the project again. 5. You should now be able to submit to the App Store. 2. Exporting the code from FlutterFlow using the "Download Code" action in FlutterFlow requires the same workaround listed above. 3. Other common issues and troubleshooting techniques are documented here: [Troubleshooting](/resources/troubleshooting). # Flutter Web Source: https://docs.powersync.com/integration-guides/flutterflow-+-powersync/flutter-web PowerSync supports Flutter Web. This section is a work in progress — reach out to us on our [Discord](https://discord.gg/powersync) if you need assistance in the meantime. # Full-Text Search Source: https://docs.powersync.com/integration-guides/flutterflow-+-powersync/full-text-search PowerSync supports [Full-Text Search](/usage/use-case-examples/full-text-search) on all Flutter platforms. This section is a work in progress — reach out to us on our [Discord](https://discord.gg/powersync) if you need assistance in the meantime. # Handling Attachments Source: https://docs.powersync.com/integration-guides/flutterflow-+-powersync/handling-attachments Learn how to sync attachments such as images and PDFs with PowerSync, FlutterFlow and Supabase Storage. You can synchronize attachments, such as images and PDFs, between user devices and a remote storage provider using the [`powersync_attachments_helper`](https://pub.dev/packages/powersync_attachments_helper) package for Flutter. This guide uses Supabase Storage as the remote storage provider to store and serve photos. Other media types, like [PDFs](/tutorials/client/attachments-and-files/pdf-attachment), are also supported. At a high level, the \[`powersync_attachments_helper`] package syncs attachments by: * Storing files locally on the device in a structured way, linking them to specific database records. * Maintaining attachment metadata in the local SQLite database to track the sync state of each attachment. * Managing uploads, downloads, and retries through a local attachment queue to ensure local files stay in sync with remote storage. * Providing a file operations API with methods to add, remove, and retrieve attachments. ## Prerequisites To follow this guide, ensure you have completed the [FlutterFlow + PowerSync integration guide](/integration-guides/flutterflow-+-powersync). At minimum, you should have implemented everything up to step 4, which involves reading data where your app's `lists` are displayed and clickable. 
## Update schema to track attachments Here we add a `photo_id` column to the `lists` table to link a photo to a list. ### Update Supabase schema 1. In your Supabase dashboard, run the below SQL statement in your Supabase SQL Editor to add the `photo_id` column to the `lists` table: ```sql ALTER TABLE public.lists ADD COLUMN photo_id text; ``` 2. In FlutterFlow, under **"App Settings" -> "Integrations"**, click "Get Schema". ### Update PowerSync schema The schema of the local SQLite database should now be updated to include the new `photo_id` column. Additionally, we need to set up a local-only table to store the metadata of photos which is being managed by the helper package. 1. In the PowerSync Dashboard, generate your updated client-side schema: Right-click on your instance and select "Generate Client-Side Schema" and select "FlutterFlow" as the language. 2. In FlutterFlow, under "App Settings" -> "Project Dependencies" -> "FlutterFlow Libraries", click "View Details" of the PowerSync library. 3. Copy and paste the generated schema into the "PowerSyncSchema" field. ## Configure Supabase Storage 1. To configure Supabase Storage for your app, navigate to the **Storage** section of your Supabase project and create a new bucket: 2. Give the storage bucket a name, such as **media**, and hit "Save". 3. Next, configure a policy for this bucket. For the purpose of this demo, we will allow all user operations on the media bucket. 4. Create a new policy for the **media** bucket: 2. Give the new policy a name, and allow SELECT, INSERT, UPDATE, and DELETE. 3. Proceed to review and save the policy. 4) Finally, back in FlutterFlow, create an App Constant to store the bucket name: 1. Under **"App Values" -> "Constants"**, click "Add App Constant". 2. Set "Constant Name" to `supabaseStorageBucket`. 3. Click "Create". 4. Set the "Value" to the name of your Supabase Storage bucket, e.g. `media`. ## Add the PowerSync Attachments Helper to your project 1. Under **"App Settings" -> "Project Dependencies" -> "Custom Pub Dependencies"** click "Add Pub Dependency". 2. Enter `powersync_attachments_helper: ^0.6.18`. 3. Click "Add". ## Create `setUpAttachments` Custom Action This creates an attachment queue which is responsible for tracking, storing and synching attachment metadata and CRUD operations. 1. Navigate to **"Custom Code"** and add a Custom Action. 2. Name the action `setUpAttachments`. 3. Add the following code: In the below code, `power_sync_b0w5r9` is the project ID of the PowerSync library. Update it if it changes. ```dart // DO NOT REMOVE OR MODIFY THE CODE ABOVE! import 'dart:async'; import 'dart:io'; import 'package:powersync/powersync.dart' as powersync; import 'package:powersync_attachments_helper/powersync_attachments_helper.dart'; import 'package:power_sync_b0w5r9/custom_code/actions/initialize_power_sync.dart' show db; Future setUpAttachments() async { // Add your function code here! await _initializeAttachmentQueue(db); } PhotoAttachmentQueue? 
attachmentQueue; final _remoteStorage = SupabaseStorageAdapter(); class SupabaseStorageAdapter implements AbstractRemoteStorageAdapter { @override Future uploadFile(String filename, File file, {String mediaType = 'text/plain'}) async { _checkSupabaseBucketIsConfigured(); try { await Supabase.instance.client.storage .from(FFAppConstants.supabaseStorageBucket) .upload(filename, file, fileOptions: FileOptions(contentType: mediaType)); } catch (error) { throw Exception(error); } } @override Future downloadFile(String filePath) async { _checkSupabaseBucketIsConfigured(); try { return await Supabase.instance.client.storage .from(FFAppConstants.supabaseStorageBucket) .download(filePath); } catch (error) { throw Exception(error); } } @override Future deleteFile(String filename) async { _checkSupabaseBucketIsConfigured(); try { await Supabase.instance.client.storage .from(FFAppConstants.supabaseStorageBucket) .remove([filename]); } catch (error) { throw Exception(error); } } void _checkSupabaseBucketIsConfigured() { if (FFAppConstants.supabaseStorageBucket.isEmpty) { throw Exception( 'Supabase storage bucket is not configured in App Constants'); } } } /// Function to handle errors when downloading attachments /// Return false if you want to archive the attachment Future onDownloadError(Attachment attachment, Object exception) async { if (exception.toString().contains('Object not found')) { return false; } return true; } class PhotoAttachmentQueue extends AbstractAttachmentQueue { PhotoAttachmentQueue(db, remoteStorage) : super( db: db, remoteStorage: remoteStorage, onDownloadError: onDownloadError); @override init() async { if (FFAppConstants.supabaseStorageBucket.isEmpty) { log.info( 'No Supabase bucket configured, skip setting up PhotoAttachmentQueue watches'); return; } await super.init(); } @override Future saveFile(String fileId, int size, {mediaType = 'image/jpeg'}) async { String filename = '$fileId.jpg'; Attachment photoAttachment = Attachment( id: fileId, filename: filename, state: AttachmentState.queuedUpload.index, mediaType: mediaType, localUri: getLocalFilePathSuffix(filename), size: size, ); return attachmentsService.saveAttachment(photoAttachment); } @override Future deleteFile(String fileId) async { String filename = '$fileId.jpg'; Attachment photoAttachment = Attachment( id: fileId, filename: filename, state: AttachmentState.queuedDelete.index); return attachmentsService.saveAttachment(photoAttachment); } @override StreamSubscription watchIds({String fileExtension = 'jpg'}) { log.info('Watching photos in lists table...'); return db.watch(''' SELECT photo_id FROM lists WHERE photo_id IS NOT NULL ''').map((results) { return results.map((row) => row['photo_id'] as String).toList(); }).listen((ids) async { List idsInQueue = await attachmentsService.getAttachmentIds(); List relevantIds = ids.where((element) => !idsInQueue.contains(element)).toList(); syncingService.processIds(relevantIds, fileExtension); }); } } Future _initializeAttachmentQueue(powersync.PowerSyncDatabase db) async { final queue = attachmentQueue = PhotoAttachmentQueue(db, _remoteStorage); await queue.init(); } ``` 4. Click "Save Action". ## Add Final Actions to your `main.dart` We need to call `initializePowerSync` from the Library to create the PowerSync database, and then call `setUpAttachments` to create the attachments queue. These actions need to happen in this specific order since `setUpAttachments` depends on having the database ready. 1. Still under **Custom Code**, select `main.dart`. 
Under **File Settings -> Final Actions**, click the plus icon. 2. Select `initializePowerSync`. 3. Click the plus icon again, and select `setUpAttachments`. 4. Save your changes. **Continue by using Local Run** Due to a known FlutterFlow limitation, web test mode will crash when both Supabase integration is enabled and actions are added to `main.dart`. Please continue by using Local Run to test your app. ## Create `resolveItemPicture` Custom Action (downloads) This action handles downloads by taking an attachment ID and returning an `UploadedFile`, which is FLutterFlow's representation of an in-memory file asset. This action calls `attachmentQueue.getLocalUri()` and reads contents from the underlying file. 1. Create another Custom Action and name it `resolveItemPicture`. 2. Add the following code: ```dart // DO NOT REMOVE OR MODIFY THE CODE ABOVE! import 'dart:io'; import 'set_up_attachments.dart'; Future resolveItemPicture(String? id) async { if (id == null) { return null; } final name = '$id.jpg'; final path = await attachmentQueue?.getLocalUri(name); if (path == null) { return null; } final file = File(path); if (!await file.exists()) { return null; } return FFUploadedFile( name: name, bytes: await file.readAsBytes(), ); } ``` 3. Under **Action Settings -> Define Arguments** on the right, click "Add Arguments". 1. Set the "Name" to `id`. 4. Click "Save Action". 5. Click "Yes" when prompted about parameters in the settings not matching parameters in the code editor. ## Create `setItemPicture` Custom Action (uploads) This action handles uploads by passing the `UploadedFile` to local storage and then to the upload queue. 1. Create another Custom Action and name it `setItemPicture`. 2. Add the following code: In the below code, `power_sync_b0w5r9` is the project ID of the PowerSync library. Update it if it changes. ```dart // DO NOT REMOVE OR MODIFY THE CODE ABOVE! import 'package:power_sync_b0w5r9/custom_code/actions/initialize_power_sync.dart' show db; import 'package:powersync/powersync.dart' as powersync; import 'set_up_attachments.dart' show attachmentQueue; Future setItemPicture( FFUploadedFile? picture, Future Function(String? photoId) applyToDatabase, ) async { if (picture == null) { await applyToDatabase(null); return; } final queue = attachmentQueue; if (queue == null) { return; } String photoId = powersync.uuid.v4(); final storageDirectory = await queue.getStorageDirectory(); await queue.localStorage .saveFile('$storageDirectory/$photoId.jpg', picture.bytes!); queue.saveFile(photoId, picture.bytes!.length); await applyToDatabase(photoId); } ``` 3. Under **Action Settings -> Define Arguments** on the right, click "Add Arguments". 1. Set the "Name" to `picture`. 2. Under "Type" select "UploadedFile". 4. Click "Add Arguments" again. 1. Set the "Name" to `applyToDatabase`. 2. Under "Type" select "Action". 3. Add an Action Parameter. 4. Set the "Name" to `photoId`. 5. Set its "Type" to "String". 5. Click "Save Action". 6. Click "Yes" when prompted about parameters in the settings not matching parameters in the code editor. 7. Check the Custom Actions for any errors. **Compilation errors:** If, at this stage, you receive errors for any of the custom actions, test your app and ensure there are no errors in your Device Logs. FlutterFlow does occasionally show false compilation errors which can safely be ignored. ## Create a Custom Component to display and upload photos Next, we'll create a custom component that displays an image and includes a button to upload or replace the image file. 
You can use this component throughout your app wherever you need to display and update images. ### Create the UI widgets of the component 1. Under the **"Page Selector"**, click "Add Page, Component, or Flow". 2. Select the "New Component" tab. 3. Select "Create Blank" and call the component `ListImage`. 4. Under the **"Widget Tree"**, click on "Add a child to this widget". 1. Add the "Image" widget. 2. Expand the width of the image to fill the available space. 5. Click on "Add a child to this widget" for the `ListImage` again. 1. Add the "Button" widget. 2. Select "Wrap in Column" when prompted. ### Set component parameters and state variables 1. Still under the **"Widget Tree"**, select the `ListImage` component. 1. At the top right under "Component Parameters" click "Add Parameters". 2. Click "Add Parameter". 3. Set its "Name" to `listId`. 4. Set its "Type" to `String`. 5. Click "Confirm". 2. In the same panel, add Local Component State Variables: 1. Define a variable to store the image file: 1. Click "Add Field". 2. Set its "Field Name" to `image`. 3. Set its "Type" to `Uploaded File`. 2. Define a variable that stores the ID of the image: 1. Click "Add Field" again. 2. Set its "Field Name" to `photoId`. 3. Set its "Type" to `String`. 4. Check "Nullable". 3. Define a variable that indicates whether an image is loaded or not. We'll use this to set conditional visibility of the component: 1. Click "Add Field" again. 2. Set its "Field Name" to `imageLoaded`. 3. Set its "Type" to `Boolean`. 4. Toggle "Initial Field Value" on and off (click it twice) to set it to false. 4. Click "Confirm". ### Set conditional visibility and state 1. Back under the **"Widget Tree"**, select the `Image` widget. 1. In the **"Properties"** panel on the right, enable "Conditional" under "Visibility". 1. Click on "Unset". 2. Select the "Component State" -> `imageLoaded` state variable. 3. Click "Confirm". 2. Further down in the **"Properties"** panel, set the "Image Type" to "Uploaded File". 1. Select the "Component State" -> `image` state variable. 2. Click "Confirm". ### Define the Image widget logic 1. Under the **"Widget Tree"**, select the "Column" component within your `ListImage` component. 1. Click "Add a child to this widget". 2. Add the "Container" widget. 3. In the **"Properties"** panel on the right, set both its "Width" and "Height" to 0. This container can be hidden. 4. Back under the **"Widget Tree"**, click "Add a child to this widget" for the "Container". 5. Select the "Components and custom widgets imported from library projects" panel, and select the `PowerSyncStateUpdater` component. 6. In the **"Properties"** panel on the right, under the "Component Properties" section: 1. Under the "sql" section, add the SQL query to fetch the selected list: `select * from lists where id = :id;` 2. Under the "parameters" section, set the `id` parameter we're using in the above query: 3. Click on "UNSET". 4. Select "Create Map (JSON)" as the variable. 5. Under "Add Map Entries", click "Add Key Value Pair". 6. Set the "Key" to `id`. 7. Set the "Value" to "Component Parameters" -> `listId`. 8. Click "Confirm". 9. Check "watch \[Boolean]". This ensures that the query auto-updates. 10. Configure the "onData" action: 1. Open the "Action Flow Editor". 2. Select the "Callback" trigger type. 3. Click "Add Action". 1. Search for "update com" and select "Update Component State". 2. Click "Add Field". 3. Select your `photoId` state variable. 4. Set "Select Update Type" to "Set Value". 5. 
Set "Value to set" to "Callback Parameters" -> `rows`. 6. Under "Available Options", select "Item at Index". 7. Under "List Index Options" select "First". 8. Under "Available Options" select "JSON Path". 9. Set "JSON Path" to `$.photo_id`. 10. Click "Confirm". 11. Set "Update Type" to "No Rebuild". 4. Chain another action and select the `resolveItemPicture` custom action. 1. Under "Set Action Arguments", click on the settings icon next to "Value". 2. Select "Callback Parameters" -> `rows`. 3. Under "Available Options", select "Item at Index". 4. Under "List Index Options" select "First". 5. Under "Available Options" select "JSON Path". 6. Set "JSON Path" to `$.photo_id`. 7. Click "Confirm". 8. Set "Action Output Variable Name" to `picture`. 5. Chain another action, search for "update com" and select "Update Component State". 1. Click on "Add Field". 2. Select the "image - Uploaded File" state variable. 3. Under "Select Update Type", select "Set Value". 4. Set the "Value to set" to the "Action Outputs" -> `picture` variable. 5. Click "Confirm". 6. Click on "Add Field" again. 7. Select the "imageLoaded" state variable. 8. Under "Select Update Type", select "Set Value". 9. Click on the settings icon next to "Value to set" and select "Code Expression". 10. Select "Code Expression". 11. Click on "Add argument". 12. Select the "var1" placeholder argument. 13. Set its "Name" to `photoId`. 14. Check "Nullable". 15. Set the "Value" to the "Component State" -> `photoId` state variable. 16. Set the "Expression" to `photoId != null && photoId != 'null'`. 17. Ensure there are no errors. 18. Click "Confirm". 6. Close the Action Flow Editor. ### Define the Button widget logic 1. Under the **"Widget Tree"**, select the "Button" widget. 2. In the **"Properties"** panel on the right, under "Button Text", update the text to `Add/replace image`. 3. Switch to the **"Actions"** panel and open the **"Action Flow Editor"**. 4. Select the "On Tap" trigger type. 5. Add an action, search for "media" and select "Upload/Save Media". 6. Under "Upload Type" select "Local Upload (Widget State). 7. Chain another action and select the `setItemPicture` custom action. 8. Under "Set Action Arguments", under the "picture" argument, set the "Value", to the "Widget State" -> "Uploaded Local File" variable. 9. Click "Confirm". 10. Under the "applyToDatabase" argument, add an action and under "Custom Action" -> "PowerSync", select "powersyncWrite". 11. Under the "Set Action Arguments" -> "sql" section, add the SQL query to update the photo. 1. Paste the following into the "Value" field: `update lists set photo_id = :photo where id = :id;` 2. Under the "parameters" section, set the `photo` parameter and `id` parameters we're using the above query: 3. Click on "UNSET". 4. Select "Create Map (JSON)" as the variable. 5. Under "Add Map Entries", click "Add Key Value Pair". 6. Set the "Key" to `photo`. 7. Set the "Value" to "Action Parameter" -> `photoId`. 8. Click "Confirm". 9. Add another Key Value Pair. 10. Set the "Key" to `id`. 11. Set the Value to "Component Parameters" -> `listId`. 12. Click "Confirm". ## Add the `ListImage` Custom Component to your page 1. Under the **"Page Selector"**, select the `Todos` page. 2. Under the **"Widget Tree"**, right-click on the `PowerSyncStateUpdater` library component. 1. Select "Wrap Widget". 2. Select the "Container" widget. 3. In the **"Properties"** panel on the right, set the Container's "Width" and "Height" to 0 respectively. This container can be hidden. 3. 
Back under the **"Widget Tree"**, add a child to the "Column" widget. 1. Select the "Components and custom widgets defined in this project" panel, and select the `ListImage` component. 2. In the `ListImage` **"Properties"** panel on the right, under "Component Parameters", click on the settings icons next to "listId \[String]". 3. Select the "Page Parameter" -> `id` variable. 4. Click "Confirm". **Test your app:** You should now be able to test your app, select a list item and add or replace an image on the next page: In Supabase, notice how the image is uploaded to your bucket in Supabase Storage, and the corresponding list has the `photo_id` column set with a reference to the file. # FlutterFlow + PowerSync Legacy Guide Source: https://docs.powersync.com/integration-guides/flutterflow-+-powersync/powersync-+-flutterflow-legacy Legacy integration guide for creating local-first apps with FlutterFlow and PowerSync with Supabase as the backend. This guide demonstrates our previous FlutterFlow integration approach that uses custom actions. For a simpler and more robust solution, we recommend following our [updated guide](/integration-guides/flutterflow-+-powersync) which leverages the official PowerSync FlutterFlow library.