This SDK is currently in a pre-alpha / experimental state and is intended for gathering external feedback. It is not suitable for production use. We also can’t guarantee continued support for the SDK at this time. If you’re interested in using the PowerSync Rust SDK, please contact us with details about your use case.

SDK Features

  • Real-time streaming of database changes: Changes made by one user are instantly streamed to all other users with access to that data. This keeps clients automatically in sync without manual polling or refresh logic.
  • Direct access to a local SQLite database: Data is stored locally, so apps can read and write instantly without network calls. This enables offline support and faster user interactions.
  • Asynchronous background execution: The SDK performs database operations in the background to avoid blocking the application’s main thread. This means that apps stay responsive, even during heavy data activity.
  • Query subscriptions for live updates: The SDK supports query subscriptions that automatically push real-time updates to client applications as data changes, keeping your UI reactive and up to date.
  • Automatic schema management: PowerSync syncs schemaless data and applies a client-defined schema using SQLite views. This architecture means that PowerSync SDKs can handle schema changes gracefully without requiring explicit migrations on the client-side.

Installation

Add the PowerSync SDK to your project by running the following command, which adds the dependency to your Cargo.toml file:
cargo add powersync

Getting Started

Prerequisites: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Rules (steps 1-4 in the Setup Guide).

1. Define the Client-Side Schema

The first step is to define the client-side schema: the schema for the managed SQLite database exposed by the PowerSync Client SDKs, which your app can read from and write to. The client-side schema is typically derived mainly from your backend source database schema and your Sync Rules, but it can also include other tables, such as local-only tables. Note that schema migrations are not required on the SQLite database due to the schemaless nature of the PowerSync protocol: schemaless data is synced to the client-side SQLite database, and the client-side schema is then applied to that data using SQLite views to allow for structured querying. The schema is applied when the local PowerSync database is constructed (as we’ll show in the next step).
Generate schema automatically: In the PowerSync Dashboard, select your project and instance and click the Connect button in the top bar to generate the client-side schema in your preferred language. The schema will be generated based on your Sync Rules. Similar functionality exists in the CLI.
Note: The generated schema will not include an id column, as the client SDK automatically creates an id column of type text. Consequently, it is not necessary to specify an id column in your schema. For additional information on IDs, refer to Client ID.
The types available are text, integer and real. These should map directly to the values produced by the Sync Rules. If a value doesn’t match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see Types. Example:
src/schema.rs
use powersync::schema::{Column, Schema, Table};

pub fn app_schema() -> Schema {
    let mut schema = Schema::default();
    let todos = Table::create(
        "todos",
        vec![
            Column::text("list_id"),
            Column::text("created_at"),
            Column::text("completed_at"),
            Column::text("description"),
            Column::integer("completed"),
            Column::text("created_by"),
            Column::text("completed_by"),
        ],
        |_| {},
    );

    let lists = Table::create(
        "lists",
        vec![
            Column::text("created_at"),
            Column::text("name"),
            Column::text("owner_id"),
        ],
        |_| {},
    );

    schema.tables.push(todos);
    schema.tables.push(lists);
    schema
}
Note: There is no need to declare a primary key id column; PowerSync creates this automatically.

2. Instantiate the PowerSync Database

Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your Sync Rules. In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.

Process setup

PowerSync is based on SQLite, and statically links a SQLite extension that needs to be enabled for the process before the SDK can be used. The SDK offers a utility to register the extension, and we recommend calling it early in main():
lib/main.rs
use powersync::env::PowerSyncEnvironment;

mod schema;

fn main() {
    PowerSyncEnvironment::powersync_auto_extension()
        .expect("could not load PowerSync core extension");

    // TODO: Start database and your app
}

Database setup

For maximum flexibility, the PowerSync Rust SDK can be configured with different asynchronous runtimes and HTTP clients used to connect to the PowerSync service. These dependencies can be configured through the PowerSyncEnvironment struct, which wraps:
  1. An HTTP client (using traits from the http-client crate). We recommend enabling the curl_client feature on that crate and then using an IsahcClient. The H1Client is known not to work with PowerSync because it can’t cancel response streams properly.
  2. An asynchronous pool giving out leases to SQLite connections.
  3. A timer implementation allowing the sync client to implement delayed retries on connection errors. This is typically provided by async runtimes like Tokio.
To configure PowerSync, begin by configuring a connection pool:
Use ConnectionPool::open to open a database file with multiple connections configured with WAL mode:
use powersync::{ConnectionPool, error::PowerSyncError};
use powersync::env::PowerSyncEnvironment;

fn open_pool() -> Result<ConnectionPool, PowerSyncError>{
    ConnectionPool::open("database.db")
}
Next, create a database and start the asynchronous tasks used by the sync client when connecting. To be compatible with different executors, the SDK uses a model based on long-lived actors instead of spawning tasks dynamically. All asynchronous processes are exposed through PowerSyncDatabase::async_tasks(); these tasks must be spawned before connecting.
Ensure you depend on powersync with the tokio feature enabled.
use std::sync::Arc;

// IsahcClient comes from the http-client crate's curl_client feature.
use http_client::isahc::IsahcClient;
use powersync::PowerSyncDatabase;
use powersync::env::PowerSyncEnvironment;

#[tokio::main]
async fn main() {
    PowerSyncEnvironment::powersync_auto_extension()
        .expect("could not load PowerSync core extension");

    let pool = open_pool().expect("open pool");
    let client = Arc::new(IsahcClient::new());
    let env = PowerSyncEnvironment::custom(
        client.clone(),
        pool,
        Box::new(PowerSyncEnvironment::tokio_timer()),
    );

    let db = PowerSyncDatabase::new(env, schema::app_schema());
    db.async_tasks().spawn_with_tokio();
}
Finally, instruct PowerSync to sync data from your backend:
// MyBackendConnector is defined in the next step...
db.connect(SyncOptions::new(MyBackendConnector {
    client,
    db: db.clone(),
})).await;
Note: This section assumes you want to use PowerSync to sync your backend source database with SQLite in your app. If you only want to use PowerSync to manage your local SQLite database without sync, instantiate the PowerSync database without calling connect(), and refer to our Local-Only guide.

3. Integrate with your Backend

Create a connector to integrate with your backend. The PowerSync backend connector provides the connection between your application backend and the PowerSync managed database. It is used to:
  1. Retrieve an auth token to connect to the PowerSync instance.
  2. Upload client-side writes to your backend API. Any writes that are made to the SQLite database are placed into an upload queue by the PowerSync Client SDK and automatically uploaded to your app backend (where you apply those changes to the backend source database) when the user is connected.
Accordingly, the connector must implement two methods:
  1. fetch_credentials - This method will be automatically invoked by the PowerSync Client SDK every couple of minutes to obtain authentication credentials. See Authentication Setup for instructions on how the credentials should be generated.
  2. upload_data - This method will be automatically invoked by the PowerSync Client SDK whenever it needs to upload client-side writes to your app backend via your backend API. Therefore, in your implementation, you need to define how your backend API is called. See Writing Client Changes for considerations on the app backend implementation.
Example:
struct MyBackendConnector {
    client: Arc<dyn http_client::HttpClient>,
    db: PowerSyncDatabase,
}

#[async_trait]
impl BackendConnector for MyBackendConnector {
    async fn fetch_credentials(&self) -> Result<PowerSyncCredentials, PowerSyncError> {
        // Implement fetch_credentials to obtain the credentials needed to connect to your backend.
        // See an example implementation in https://github.com/powersync-ja/powersync-native/blob/508193b0822b8dad1a534a16462e2fcd36a9ac68/examples/egui_todolist/src/database.rs#L119-L133

        Ok(PowerSyncCredentials {
            endpoint: "[Your PowerSync instance URL or self-hosted endpoint]".to_string(),
            // Use a development token (see Authentication Setup: https://docs.powersync.com/configuration/auth/development-tokens) to get up and running quickly
            token: "An authentication token".to_string(),
        })
    }

    async fn upload_data(&self) -> Result<(), PowerSyncError> {
        // Implement upload_data to send local changes to your backend service
        // You can omit this method if you only want to sync data from the server to the client
        // See an example implementation under Usage Examples (sub-page)
        // See https://docs.powersync.com/handling-writes/writing-client-changes for considerations.
        let mut local_writes = self.db.crud_transactions();
        while let Some(tx) = local_writes.try_next().await? {
            todo!("Inspect tx.crud for local writes that need to be uploaded to your backend");
            tx.complete().await?;
        }

        Ok(())
    }
}

Using PowerSync: CRUD functions

Once the PowerSync instance is configured you can start using the SQLite DB functions. The most commonly used CRUD functions to interact with your SQLite data are:
  • reader - run statements that read from the database.
  • watch_statement - re-run a read query every time its source tables are modified.
  • writer - write to the database.

Reads

To obtain a connection suitable for reads, call and await PowerSyncDatabase::reader(). The returned connection lease can be used as a rusqlite::Connection to run queries.
async fn find(db: &PowerSyncDatabase, id: &str) -> Result<(), PowerSyncError> {
    let reader = db.reader().await?;
    let mut stmt = reader.prepare("SELECT * FROM lists WHERE id = ?")?;
    let mut rows = stmt.query(params![id])?;
    while let Some(row) = rows.next()? {
        let id: String = row.get("id")?;
        let name: String = row.get("name")?;

        println!("Found todo list: {id}, {name}");
    }

    Ok(())
}
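Because the lease behaves like a rusqlite::Connection, other rusqlite conveniences such as query_row also work. A minimal sketch (count_lists is a hypothetical helper; the lists table comes from the schema defined earlier):

```rust
async fn count_lists(db: &PowerSyncDatabase) -> Result<i64, PowerSyncError> {
    let reader = db.reader().await?;
    // query_row is plain rusqlite API: it runs the query and maps the
    // single result row with the provided closure.
    let count: i64 = reader.query_row("SELECT COUNT(*) FROM lists", [], |row| row.get(0))?;
    Ok(count)
}
```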

Watching Queries

The watch_statement method executes a read query whenever a change to a dependent table is made.
async fn watch_pending_lists(db: &PowerSyncDatabase) -> Result<(), PowerSyncError> {
    let stream = db.watch_statement(
        "SELECT * FROM lists WHERE state = ?".to_string(),
        params!["pending"],
        |stmt, params| {
            let mut rows = stmt.query(params)?;
            let mut mapped = vec![];

            while let Some(row) = rows.next()? {
                mapped.push(() /* TODO: Read row into list struct */)
            }

            Ok(mapped)
        },
    );
    let mut stream = pin!(stream);

    // Note: The stream is never-ending, so you probably want to call this in an independent async
    // task.
    while let Some(event) = stream.try_next().await? {
        // Update UI to display rows
    }
    Ok(())
}

Mutations

Local writes to tables are automatically captured with triggers. To obtain a connection suitable for writes, use the PowerSyncDatabase::writer method. The execute method runs a write statement (INSERT, UPDATE or DELETE) and returns its result, if any.
async fn insert_customer(
    db: &PowerSyncDatabase,
    name: &str,
    email: &str,
) -> Result<(), PowerSyncError> {
    let writer = db.writer().await?;
    writer.execute(
        "INSERT INTO customers (id, name, email) VALUES (uuid(), ?, ?)",
        params![name, email],
    )?;
    Ok(())
}
If you’re looking for transactions, use the transaction method from rusqlite on writer.
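As a sketch (assuming the writer lease dereferences mutably to a rusqlite::Connection, as the reader lease does for reads), a transaction might look like this; move_todos is a hypothetical helper built on the todos and lists tables from the schema above:

```rust
async fn move_todos(
    db: &PowerSyncDatabase,
    from_list: &str,
    to_list: &str,
) -> Result<(), PowerSyncError> {
    let mut writer = db.writer().await?;
    // rusqlite's transaction() requires a mutable connection; both statements
    // commit atomically, or roll back together if either fails.
    let tx = writer.transaction()?;
    tx.execute(
        "UPDATE todos SET list_id = ? WHERE list_id = ?",
        params![to_list, from_list],
    )?;
    tx.execute("DELETE FROM lists WHERE id = ?", params![from_list])?;
    tx.commit()?;
    Ok(())
}
```

Note that a rusqlite Transaction rolls back on drop by default, so an early return via ? leaves the database unchanged.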

Configure Logging

The Rust SDK uses the log crate internally, so you can configure it with any backend, e.g. with env_logger:
fn main() {
    env_logger::init();
    // ...
}
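env_logger reads the RUST_LOG environment variable at startup, so you can control verbosity per run. For example (assuming the SDK logs under a powersync target):

```shell
RUST_LOG=powersync=debug cargo run
```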

Additional Usage Examples

For more usage examples including accessing connection status, monitoring sync progress, and waiting for initial sync, see the Usage Examples page.

ORM / SQL Library Support

The Rust SDK does not currently support any higher-level SQL libraries, but we’re investigating support for Diesel and sqlx. Please reach out to us if you’re interested in these or other integrations.

Troubleshooting

See Troubleshooting for pointers to debug common issues.

Supported Platforms

See Supported Platforms -> Rust SDK.

Upgrading the SDK

To update the PowerSync SDK, run cargo update powersync, or manually update the powersync version in your Cargo.toml to the latest release.