
Introduction

The @powersync/attachments package (JavaScript/TypeScript) and powersync_attachments_helper package (Flutter/Dart) are deprecated. Attachment functionality is now built into the PowerSync SDKs. Please use the built-in attachment helpers instead, and see the migration notes.
While PowerSync excels at syncing structured data, storing large files (images, videos, PDFs) directly in SQLite is not recommended. Embedding files as base64-encoded data or binary blobs in database rows bloats the database, slows queries, and inflates sync payloads. Instead, PowerSync uses a metadata + storage provider pattern: sync small metadata records through PowerSync while storing actual files in purpose-built storage systems (S3, Supabase Storage, Cloudflare R2, etc.). This approach provides:
  • Optimal performance - Database stays small and fast
  • Automatic queue management - Background uploads/downloads with retry logic
  • Offline-first support - Local files available immediately, sync happens in background
  • Cache management - Automatic cleanup of unused files
  • Platform flexibility - Works across web, mobile, and desktop
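As a minimal sketch of the metadata + storage provider pattern (all names here are hypothetical, not SDK APIs): the row synced through PowerSync carries only a small attachment reference, while the file bytes live in object storage under a key derived from that reference.

```typescript
// Sketch of the metadata + storage provider pattern (names are illustrative).
// The synced row stays tiny; the actual file lives in object storage.
interface UserRow {
  id: string;
  name: string;
  photo_id: string | null; // reference to the attachment, not the bytes
}

// Derive the object-storage key for an attachment reference.
function storageKey(attachmentId: string, fileExtension: string): string {
  return `${attachmentId}.${fileExtension}`;
}

const user: UserRow = { id: "user-1", name: "Ada", photo_id: "id-123" };
// The database row is a few bytes; the JPEG lives at e.g. <bucket>/id-123.jpg
const key = user.photo_id ? storageKey(user.photo_id, "jpg") : null;
```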

SDK & Demo Reference

We provide attachment helpers for multiple platforms:
  • JavaScript/TypeScript - Built-in attachments (alpha). Min. SDK version: Web v1.33.0, React Native v1.30.0, Node.js v0.17.0. Demos: React Native Todo · React Web Todo
  • Flutter - Built-in attachments (alpha). Min. SDK version: v1.16.0. Demo: Flutter Todo
  • Swift - Built-in attachments (alpha). Min. SDK version: v1.0.0. Demo: iOS Demo
  • Kotlin - Built-in attachments (alpha). Min. SDK version: v1.0.0. Demo: Android Todo
Most demo applications use Supabase Storage as the storage provider, but the patterns are adaptable to any storage system.

How It Works

[Diagram: PowerSync attachments flow & architecture]

Workflow

  1. Save file - Your app calls saveFile() with file data and an updateHook to handle linking the attachment to your data model
  2. Queue for upload - File is saved locally and a record is created in the attachments table with state QUEUED_UPLOAD
  3. Background upload - The attachment queue automatically uploads file to remote storage (S3/Supabase/etc.)
  4. Remote storage - File is stored in remote storage with the attachment ID
  5. State update - The updateHook runs, updating your data model with the attachment ID and marking the file locally as SYNCED
  6. Cross-device sync - PowerSync syncs the data model changes to other clients
  7. Data model updated - Other clients receive the updated data model with the new attachment reference (e.g., user.photo_id = "id-123")
  8. Watch detects attachment - Other clients’ watchAttachments() callback detects the new attachment reference and creates a record in the attachments table with state QUEUED_DOWNLOAD
  9. File download - The attachment queue automatically downloads the file from remote storage
  10. Local storage - File is saved to local storage on the other client
  11. State update - File is marked locally as SYNCED and ready for use

Attachment States

  • QUEUED_UPLOAD - File saved locally, waiting to upload to remote storage
  • QUEUED_DOWNLOAD - Data model synced from another device, file needs to be downloaded
  • SYNCED - File exists both locally and in remote storage, fully synchronized
  • QUEUED_DELETE - Marked for deletion from both local and remote storage
  • ARCHIVED - No longer referenced in your data model, candidate for cleanup
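The lifecycle above can be sketched as a small transition map. This is an illustration of the documented states, not the SDK's internal implementation:

```typescript
// Illustrative transition map for the documented attachment states.
// This mirrors the state table above; the SDK's internal logic may differ.
type AttachmentState =
  | "QUEUED_UPLOAD"
  | "QUEUED_DOWNLOAD"
  | "SYNCED"
  | "QUEUED_DELETE"
  | "ARCHIVED";

const transitions: Record<AttachmentState, AttachmentState[]> = {
  QUEUED_UPLOAD: ["SYNCED"],              // upload completed
  QUEUED_DOWNLOAD: ["SYNCED"],            // download completed
  SYNCED: ["ARCHIVED", "QUEUED_DELETE"],  // unreferenced, or deleteFile() called
  ARCHIVED: ["QUEUED_DELETE", "QUEUED_DOWNLOAD"], // cleanup, or re-referenced
  QUEUED_DELETE: [],                      // record removed after deletion
};

function canTransition(from: AttachmentState, to: AttachmentState): boolean {
  return transitions[from].includes(to);
}
```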

Core Components

Attachment Table

The Attachment Table is a local-only table that stores metadata about each file. It’s not synced through PowerSync’s sync rules - instead, it’s managed entirely by the attachment queue on each device. Metadata stored:
  • id - Unique attachment identifier (UUID)
  • filename - File name with extension (e.g., photo-123.jpg)
  • localUri - Path to file in local storage
  • size - File size in bytes
  • mediaType - MIME type (e.g., image/jpeg)
  • state - Current sync state (see states above)
  • hasSynced - Boolean indicating if file has ever been uploaded
  • timestamp - Last update time
  • metaData - Optional JSON string for custom data
Key characteristics:
  • Local-only - Each device maintains its own attachment table
  • Automatic management - Queue handles all inserts/updates
  • Cross-client coordination - Your data model (e.g., users.photo_id) tells each client which files it needs
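The fields above translate roughly to the following record shape. This is a sketch based on the field list, not the SDK's exact type definition:

```typescript
// Approximate shape of an attachment record, based on the fields listed above.
// The SDK's actual type may differ in naming and optionality.
interface AttachmentRecord {
  id: string;         // UUID
  filename: string;   // e.g. "photo-123.jpg"
  localUri?: string;  // path in local storage once saved
  size?: number;      // bytes
  mediaType?: string; // MIME type, e.g. "image/jpeg"
  state: string;      // one of the sync states above
  hasSynced: boolean; // true once the file has ever been uploaded
  timestamp: number;  // last update time (epoch ms)
  metaData?: string;  // optional JSON string for custom data
}

const record: AttachmentRecord = {
  id: "id-123",
  filename: "photo-123.jpg",
  mediaType: "image/jpeg",
  state: "QUEUED_UPLOAD",
  hasSynced: false,
  timestamp: Date.now(),
};
```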

Remote Storage Adapter

The Remote Storage Adapter is an interface you implement to connect PowerSync with your cloud storage provider. It's completely platform-agnostic: implementations can use S3, Supabase Storage, Cloudflare R2, Azure Blob, or even IPFS. Interface methods:
  • uploadFile(fileData, attachment) - Upload file to cloud storage
  • downloadFile(attachment) - Download file from cloud storage
  • deleteFile(attachment) - Delete file from cloud storage
Common pattern: For security, client-side implementations should use signed URLs:
  1. Request a signed upload/download URL from your backend
  2. Your backend validates permissions and generates a temporary URL
  3. Client uploads/downloads directly to storage using the signed URL
  4. Never expose storage credentials to clients

Local Storage Adapter

The Local Storage Adapter handles file persistence on the device. PowerSync provides implementations for common platforms and allows you to create custom adapters. Interface methods:
  • initialize() - Set up storage (create directories, etc.)
  • saveFile(path, data) - Write file to storage
  • readFile(path) - Read file from storage
  • deleteFile(path) - Remove file from storage
  • fileExists(path) - Check if file exists
  • getLocalUri(filename) - Get full path for a filename
Built-in adapters:
  • IndexedDB - For web browsers (IndexDBFileSystemStorageAdapter)
  • Node.js Filesystem - For Node/Electron (NodeFileSystemAdapter)
  • React Native - For Expo or bare React Native projects, use the dedicated @powersync/attachments-storage-react-native package
  • Native mobile storage - For Flutter, Kotlin, Swift
The React Native local storage adapter requires Expo 54 or later.

Attachment Queue

The Attachment Queue is the orchestrator that manages the entire attachment lifecycle. It:
  • Watches your data model - You pass a watchAttachments function as a parameter that monitors which files your app references
  • Manages state transitions - Automatically moves files through states (upload/download → synced → archive → delete)
  • Handles retries - Failed operations are retried on the next sync interval
  • Performs cleanup - Removes archived files that are no longer needed
  • Verifies integrity - Checks local files exist and repairs inconsistencies
Watched Attachments pattern: The queue needs to know which attachments exist in your data model. The watchAttachments function you provide monitors your data model and returns a list of attachment IDs that your app references. The queue compares this list with its internal attachment table to determine:
  • New attachments - Download them
  • Missing attachments - Upload them
  • Removed attachments - Archive them
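The comparison the queue performs can be sketched as a set difference between the watched IDs and the IDs already tracked in the local attachments table. This is a simplified two-way diff covering the download and archive cases, for illustration only:

```typescript
// Illustrative reconciliation between watched attachment IDs (from your data
// model) and the IDs the queue already tracks in the local attachments table.
interface ReconcileResult {
  toDownload: string[]; // referenced in the data model, unknown locally
  toArchive: string[];  // tracked locally, no longer referenced
}

function reconcile(watchedIds: string[], trackedIds: string[]): ReconcileResult {
  const watched = new Set(watchedIds);
  const tracked = new Set(trackedIds);
  return {
    toDownload: watchedIds.filter((id) => !tracked.has(id)),
    toArchive: trackedIds.filter((id) => !watched.has(id)),
  };
}

// e.g. "b" is newly referenced, "c" was removed from the data model
const result = reconcile(["a", "b"], ["a", "c"]);
```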
The watchAttachments queries are reactive and execute whenever the watched tables change, keeping the attachment queue synchronized with your data model. There are a few scenarios you might encounter:
Single Attachment Type
For a single attachment type, you watch one table. For example, if users have profile photos:
SELECT photo_id FROM users WHERE photo_id IS NOT NULL
Multiple Attachment Types - Single Queue
You can watch multiple attachment types using a single queue by combining queries with SQL UNION or UNION ALL. This allows you to monitor attachments across different tables (e.g., users.photo_id, documents.document_id, videos.video_id) in one queue. Each attachment type may have different file extensions, which can be handled in the query by selecting the extension from your data model or using type-specific defaults. For example:
SELECT photo_id as id, photo_file_extension as file_extension
FROM users 
WHERE photo_id IS NOT NULL

UNION ALL

SELECT document_id as id, document_file_extension as file_extension
FROM documents 
WHERE document_id IS NOT NULL

UNION ALL

SELECT video_id as id, video_file_extension as file_extension
FROM videos 
WHERE video_id IS NOT NULL
Use UNION ALL when you want to include all rows (including duplicates), or UNION when you want to automatically deduplicate results. For attachment watching, UNION ALL is typically preferred since attachment IDs should already be unique.
The UNION query executes whenever any of the watched tables change, which may have higher database overhead compared to watching a single table. Implementation examples are shown in the Initialize Attachment Queue section below.
Multiple Attachment Types - Multiple Queues
Alternatively, you can create separate queues for different attachment types. Each queue watches its own specific table(s) with simpler queries, allowing for independent configuration and management.
Multiple queues may use more memory, but each queue watches simpler queries. Implementation examples are shown in the Initialize Attachment Queue section below.

Implementation Guide

Installation

The attachment helpers are included in the @powersync/web, @powersync/node, and @powersync/react-native packages. For the React Native local storage adapters, additionally install @powersync/attachments-storage-react-native.

Setup: Add Attachment Table to Schema

import { Schema, Table, column, AttachmentTable } from '@powersync/web';

const appSchema = new Schema({
  users: new Table({
    name: column.text,
    email: column.text,
    photo_id: column.text  // References attachment ID
  }),
  // Add the attachment table
  attachments: new AttachmentTable()
});

Configure Storage Adapters

// For web browsers (IndexedDB)
import { IndexDBFileSystemStorageAdapter } from '@powersync/web';
const localStorage = new IndexDBFileSystemStorageAdapter('my-app-files');

// For Node.js/Electron (filesystem)
// import { NodeFileSystemAdapter } from '@powersync/node';
// const localStorage = new NodeFileSystemAdapter('./user-attachments');

// For React Native (Expo or bare React Native)
// Need to install @powersync/attachments-storage-react-native
//
// For Expo projects, also install expo-file-system
// import { ExpoFileSystemStorageAdapter } from '@powersync/attachments-storage-react-native';
// const localStorage = new ExpoFileSystemStorageAdapter();
//
// For bare React Native, also install @dr.pogodin/react-native-fs
// import { ReactNativeFileSystemStorageAdapter } from '@powersync/attachments-storage-react-native';
// const localStorage = new ReactNativeFileSystemStorageAdapter();

// Remote storage adapter (example with signed URLs)
const remoteStorage = {
  async uploadFile(fileData: ArrayBuffer, attachment: AttachmentRecord) {
    // Request signed upload URL from your backend
    const { uploadUrl } = await fetch('/api/attachments/upload-url', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ 
        filename: attachment.filename,
        contentType: attachment.mediaType 
      })
    }).then(r => r.json());
    
    // Upload to cloud storage using signed URL
    await fetch(uploadUrl, {
      method: 'PUT',
      body: fileData,
      headers: { 
        'Content-Type': attachment.mediaType || 'application/octet-stream' 
      }
    });
  },
  
  async downloadFile(attachment: AttachmentRecord): Promise<ArrayBuffer> {
    // Request signed download URL from your backend
    const { downloadUrl } = await fetch(
      `/api/attachments/${attachment.id}/download-url`
    ).then(r => r.json());
    
    // Download from cloud storage
    const response = await fetch(downloadUrl);
    return response.arrayBuffer();
  },
  
  async deleteFile(attachment: AttachmentRecord) {
    // Delete via your backend
    await fetch(`/api/attachments/${attachment.id}`, {
      method: 'DELETE'
    });
  }
};
Security Best Practice: Always use your backend to generate signed URLs and validate permissions. Never expose storage credentials directly to clients.

Initialize Attachment Queue

import { AttachmentQueue } from '@powersync/web';

const attachmentQueue = new AttachmentQueue({
  db: db,  // PowerSync database instance
  localStorage,
  remoteStorage,
  
  // Define which attachments exist in your data model
  watchAttachments: (onUpdate) => {
    db.watch(
      `SELECT photo_id FROM users WHERE photo_id IS NOT NULL`,
      [],
      {
        onResult: async (result) => {
          const attachments = result.rows?._array.map(row => ({
            id: row.photo_id,
            fileExtension: 'jpg'
          })) ?? [];
          await onUpdate(attachments);
        }
      }
    );
  },
  
  // Optional configuration
  syncIntervalMs: 30000,      // Sync every 30 seconds
  downloadAttachments: true,  // Auto-download referenced files
  archivedCacheLimit: 100     // Keep 100 archived files before cleanup
});

// Start the sync process
await attachmentQueue.startSync();
The watchAttachments callback is crucial - it tells the queue which files your app needs based on your data model. The queue uses this to automatically download, upload, or archive files.

Watching Multiple Attachment Types

When watching multiple attachment types, you need to provide the fileExtension for each attachment. You can store this in your data model tables or derive it from other fields. Pattern 1 (a single attachment type) is shown in the Initialize Attachment Queue section above; examples for the remaining patterns follow.
Pattern 2: Single Queue with UNION
// Example: Watching users.photo_id, documents.document_id, and videos.video_id
// Assuming your tables store file extensions
const attachmentQueue = new AttachmentQueue({
  db: db,
  localStorage,
  remoteStorage,
  
  watchAttachments: (onUpdate) => {
    db.watch(
      `SELECT photo_id as id, photo_file_extension as file_extension
       FROM users 
       WHERE photo_id IS NOT NULL
       
       UNION ALL
       
       SELECT document_id as id, document_file_extension as file_extension
       FROM documents 
       WHERE document_id IS NOT NULL
       
       UNION ALL
       
       SELECT video_id as id, video_file_extension as file_extension
       FROM videos 
       WHERE video_id IS NOT NULL`,
      [],
      {
        onResult: async (result) => {
          const attachments = result.rows?._array.map(row => ({
            id: row.id,
            fileExtension: row.file_extension
          })) ?? [];
          await onUpdate(attachments);
        }
      }
    );
  },
  
  // ... other options
});

await attachmentQueue.startSync();
Pattern 3: Multiple Queues
// Create separate queues for different attachment types
const photoQueue = new AttachmentQueue({
  db: db,
  localStorage,
  remoteStorage,
  watchAttachments: (onUpdate) => {
    db.watch(
      `SELECT photo_id FROM users WHERE photo_id IS NOT NULL`,
      [],
      {
        onResult: async (result) => {
          const attachments = result.rows?._array.map(row => ({
            id: row.photo_id,
            fileExtension: 'jpg'
          })) ?? [];
          await onUpdate(attachments);
        }
      }
    );
  },
});

const documentQueue = new AttachmentQueue({
  db: db,
  localStorage,
  remoteStorage,
  watchAttachments: (onUpdate) => {
    db.watch(
      `SELECT document_id FROM documents WHERE document_id IS NOT NULL`,
      [],
      {
        onResult: async (result) => {
          const attachments = result.rows?._array.map(row => ({
            id: row.document_id,
            fileExtension: 'pdf'
          })) ?? [];
          await onUpdate(attachments);
        }
      }
    );
  },
});

await Promise.all([
  photoQueue.startSync(),
  documentQueue.startSync()
]);

Upload an Attachment

async function uploadProfilePhoto(imageBlob: Blob, userId: string) {
  const arrayBuffer = await imageBlob.arrayBuffer();
  
  const attachment = await attachmentQueue.saveFile({
    data: arrayBuffer,
    fileExtension: 'jpg',
    mediaType: 'image/jpeg',
    
    // updateHook runs in same transaction, ensuring atomicity
    updateHook: async (tx, attachment) => {
      await tx.execute(
        'UPDATE users SET photo_id = ? WHERE id = ?',
        [attachment.id, userId]
      );
    }
  });
  
  return attachment;
}

// The queue will:
// 1. Save file locally immediately
// 2. Create attachment record with state QUEUED_UPLOAD
// 3. Update user record in same transaction
// 4. Automatically upload file in background
// 5. Update state to SYNCED when complete
The updateHook parameter is the recommended way to link attachments to your data model. It runs in the same database transaction, ensuring data consistency.

Download/Access an Attachment

// Downloads happen automatically when watchAttachments references a file

async function getProfilePhotoUri(userId: string): Promise<string | null> {
  const user = await db.get(
    'SELECT photo_id FROM users WHERE id = ?',
    [userId]
  );
  
  if (!user?.photo_id) {
    return null;
  }
  
  const attachment = await db.get(
    'SELECT * FROM attachments WHERE id = ?',
    [user.photo_id]
  );
  
  if (!attachment) {
    return null;
  }
  
  if (attachment.state === 'SYNCED' && attachment.local_uri) {
    return attachment.local_uri;
  }
  
  return null;
}

// Example: Display image in React with watch query
function ProfilePhoto({ userId }: { userId: string }) {
  const [photoUri, setPhotoUri] = useState<string | null>(null);
  
  useEffect(() => {
    const watch = db.watch(
      `SELECT a.local_uri, a.state 
       FROM users u 
       LEFT JOIN attachments a ON a.id = u.photo_id 
       WHERE u.id = ?`,
      [userId],
      {
        onResult: (result) => {
          const row = result.rows?._array[0];
          if (row?.state === 'SYNCED' && row?.local_uri) {
            setPhotoUri(row.local_uri);
          }
        }
      }
    );
    
    return () => watch.close();
  }, [userId]);
  
  if (!photoUri) {
    return <div>Loading photo...</div>;
  }
  
  return <img src={photoUri} alt="Profile" />;
}

Delete an Attachment

async function deleteProfilePhoto(userId: string, photoId: string) {
  await attachmentQueue.deleteFile({
    id: photoId,
    
    // updateHook ensures atomic deletion
    updateHook: async (tx, attachment) => {
      await tx.execute(
        'UPDATE users SET photo_id = NULL WHERE id = ?',
        [userId]
      );
    }
  });
  
  console.log('Photo queued for deletion');
  // The queue will:
  // 1. Delete from remote storage
  // 2. Delete local file
  // 3. Remove attachment record
}

// Alternative: Remove reference and let queue archive it automatically
async function removePhotoReference(userId: string) {
  await db.execute(
    'UPDATE users SET photo_id = NULL WHERE id = ?',
    [userId]
  );
  
  // The watchAttachments callback will detect this change
  // The queue will automatically archive the unreferenced attachment
  // After reaching archivedCacheLimit, it will be deleted
}

Advanced Topics

Error Handling

Implement custom error handling to control retry behavior:
import { AttachmentErrorHandler } from '@powersync/web';

const errorHandler: AttachmentErrorHandler = {
  async onDownloadError(attachment, error) {
    console.error(`Download failed: ${attachment.filename}`, error);
    
    // Return true to retry, false to archive
    if (error.message.includes('404')) {
      return false; // File doesn't exist, don't retry
    }
    return true; // Retry on network errors
  },
  
  async onUploadError(attachment, error) {
    console.error(`Upload failed: ${attachment.filename}`, error);
    return true; // Always retry uploads
  },
  
  async onDeleteError(attachment, error) {
    console.error(`Delete failed: ${attachment.filename}`, error);
    return true; // Retry deletes
  }
};

const queue = new AttachmentQueue({
  // ... other options
  errorHandler
});

Custom Storage Adapters

The following is an example of how to implement a custom storage adapter for IPFS:
import { LocalStorageAdapter, RemoteStorageAdapter } from '@powersync/web';

// Example: IPFS remote storage
class IPFSStorageAdapter implements RemoteStorageAdapter {
  async uploadFile(fileData: ArrayBuffer, attachment: AttachmentRecord) {
    // Upload to IPFS
    const cid = await ipfs.add(fileData);
    // Store CID in your backend for retrieval
    await fetch('/api/ipfs-cids', {
      method: 'POST',
      body: JSON.stringify({ attachmentId: attachment.id, cid })
    });
  }
  
  async downloadFile(attachment: AttachmentRecord): Promise<ArrayBuffer> {
    // Retrieve CID from backend
    const { cid } = await fetch(`/api/ipfs-cids/${attachment.id}`)
      .then(r => r.json());
    // Download from IPFS
    return ipfs.cat(cid);
  }
  
  async deleteFile(attachment: AttachmentRecord) {
    // IPFS is immutable, but you can unpin and remove from backend
    await fetch(`/api/ipfs-cids/${attachment.id}`, { method: 'DELETE' });
  }
}

Verification and Recovery

verifyAttachments() is always called internally during startSync(). This method:
  1. Checks that local files exist at expected paths
  2. Repairs broken localUri references
  3. Archives attachments with missing files
  4. Requeues downloads for synced files with missing local copies
await attachmentQueue.verifyAttachments();

Cache Management

Control archived file retention:
const queue = new AttachmentQueue({
  // ... other options
  archivedCacheLimit: 200  // Keep 200 archived files; oldest deleted when limit reached
});

// To expire the cache manually:
queue.expireCache();
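The archivedCacheLimit behavior can be sketched as pruning the oldest archived records beyond the limit. This is an illustration of the documented semantics; the SDK's eviction policy may differ in detail:

```typescript
// Illustrative pruning of archived attachments beyond a cache limit:
// keep the most recently updated `limit` records, evict the rest.
interface Archived {
  id: string;
  timestamp: number; // last update time (epoch ms)
}

function idsToEvict(archived: Archived[], limit: number): string[] {
  const newestFirst = [...archived].sort((a, b) => b.timestamp - a.timestamp);
  return newestFirst.slice(limit).map((a) => a.id);
}

// With a limit of 2, only the oldest record is evicted.
const evict = idsToEvict(
  [
    { id: "old", timestamp: 1 },
    { id: "mid", timestamp: 2 },
    { id: "new", timestamp: 3 },
  ],
  2
);
```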

Offline-First Considerations

The attachment queue is designed for offline-first apps:
  • Local-first operations - Files are saved locally immediately, synced later
  • Automatic retry - Failed uploads/downloads retry when connection returns
  • Queue persistence - Queue state survives app restarts
  • Conflict-free - Files are immutable, identified by UUID
  • Bandwidth efficient - Only syncs when needed, respects network conditions
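The retry behavior described above amounts to re-attempting queued operations on each sync tick, with failures simply remaining queued for the next interval. A simplified sketch, not the SDK's actual scheduler:

```typescript
// Simplified sketch of interval-based retry: every tick, re-attempt whatever
// is still queued; a failed operation stays queued for the next tick.
type Op = { id: string; attempts: number; done: boolean };

function syncTick(ops: Op[], attempt: (op: Op) => boolean): void {
  for (const op of ops) {
    if (op.done) continue;
    op.attempts += 1;
    op.done = attempt(op); // false => remains queued
  }
}

const queued: Op[] = [{ id: "upload-1", attempts: 0, done: false }];
const flaky = (op: Op) => op.attempts >= 2; // succeeds on the second attempt
syncTick(queued, flaky); // attempt 1 fails; operation stays queued
syncTick(queued, flaky); // attempt 2 succeeds
```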

Migrating From Deprecated Packages

If you are migrating from the now-deprecated attachment helpers for Dart or JavaScript, follow the notes below:
The simplest migration from powersync_attachments_helper to the new utilities is to adopt the new library with a different attachment queue table name and drop the legacy package. Existing local attachments are lost, but they will be re-downloaded automatically.