Keep files out of your database and handle attachments in an entirely storage-agnostic way. PowerSync syncs minimal metadata while an offline-first queue automatically handles uploads, downloads, and retries.
The @powersync/attachments package (JavaScript/TypeScript) and the powersync_attachments_helper package (Flutter/Dart) are deprecated. Attachment functionality is now built into the PowerSync SDKs. Please use the built-in attachment helpers instead, and see the migration notes.
While PowerSync excels at syncing structured data, storing large files (images, videos, PDFs) directly in SQLite is not recommended. Embedding files as base64-encoded data or binary blobs in database rows bloats the database, slows queries, and inflates sync payloads.

Instead, PowerSync uses a metadata + storage provider pattern: sync small metadata records through PowerSync while storing the actual files in purpose-built storage systems (S3, Supabase Storage, Cloudflare R2, etc.). This approach provides:
Optimal performance - Database stays small and fast
Automatic queue management - Background uploads/downloads with retry logic
Offline-first support - Local files available immediately, sync happens in background
Cache management - Automatic cleanup of unused files
Platform flexibility - Works across web, mobile, and desktop
Save file - Your app calls saveFile() with file data and an updateHook to handle linking the attachment to your data model
Queue for upload - File is saved locally and a record is created in the attachments table with state QUEUED_UPLOAD
Background upload - The attachment queue automatically uploads the file to remote storage (S3/Supabase/etc.)
Remote storage - File is stored in remote storage with the attachment ID
State update - The updateHook runs, updating your data model with the attachment ID and marking the file locally as SYNCED
Cross-device sync - PowerSync syncs the data model changes to other clients
Data model updated - Other clients receive the updated data model with the new attachment reference (e.g., user.photo_id = "id-123")
Watch detects attachment - Other clients’ watchAttachments() callback detects the new attachment reference and creates a record in the attachments table with state QUEUED_DOWNLOAD
File download - The attachment queue automatically downloads the file from remote storage
Local storage - File is saved to local storage on the other client
State update - File is marked locally as SYNCED and ready for use
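The lifecycle above can be sketched as a small state machine. This is an illustrative model only, not the SDK's actual types: the state names match the docs, but the functions (saveFile, markSynced, queueDownload) are hypothetical stand-ins for what the queue does internally.

```typescript
// Hypothetical sketch of the attachment lifecycle -- not the real SDK API.
type AttachmentState = 'QUEUED_UPLOAD' | 'QUEUED_DOWNLOAD' | 'SYNCED' | 'ARCHIVED';

interface LocalAttachment {
  id: string;
  state: AttachmentState;
}

// Uploader side: saveFile() stores the file locally and queues an upload.
function saveFile(id: string): LocalAttachment {
  return { id, state: 'QUEUED_UPLOAD' };
}

// The background queue moves a queued record to SYNCED once the
// upload (or download) against remote storage succeeds.
function markSynced(att: LocalAttachment): LocalAttachment {
  return { ...att, state: 'SYNCED' };
}

// Downloader side: watchAttachments() on another device sees the new
// reference (e.g. user.photo_id = "id-123") and queues a download.
function queueDownload(id: string): LocalAttachment {
  return { id, state: 'QUEUED_DOWNLOAD' };
}

const uploaded = markSynced(saveFile('id-123'));        // device A
const downloaded = markSynced(queueDownload('id-123')); // device B
```

Both devices converge on SYNCED for the same attachment ID; only the path there differs (upload vs. download).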
The Attachment Table is a local-only table that stores metadata about each file. It's not synced through PowerSync's sync rules; instead, it's managed entirely by the attachment queue on each device.

Metadata stored:
id - Unique attachment identifier (UUID)
filename - File name with extension (e.g., photo-123.jpg)
localUri - Path to file in local storage
size - File size in bytes
mediaType - MIME type (e.g., image/jpeg)
state - Current sync state (see states above)
hasSynced - Boolean indicating if file has ever been uploaded
timestamp - Last update time
metaData - Optional JSON string for custom data
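The fields above can be summarized as a record type. This interface is a sketch that mirrors the documented fields; the SDK's actual exported type may differ in naming or optionality.

```typescript
// Hedged sketch of an attachment metadata record, mirroring the fields
// listed above. Not the SDK's exact type definition.
interface AttachmentRecord {
  id: string;            // Unique attachment identifier (UUID)
  filename: string;      // File name with extension, e.g. "photo-123.jpg"
  localUri?: string;     // Path to file in local storage
  size?: number;         // File size in bytes
  mediaType?: string;    // MIME type, e.g. "image/jpeg"
  state: 'QUEUED_UPLOAD' | 'QUEUED_DOWNLOAD' | 'SYNCED' | 'ARCHIVED';
  hasSynced: boolean;    // Has the file ever been uploaded?
  timestamp: number;     // Last update time
  metaData?: string;     // Optional JSON string for custom data
}

const example: AttachmentRecord = {
  id: 'id-123',
  filename: 'photo-123.jpg',
  mediaType: 'image/jpeg',
  state: 'SYNCED',
  hasSynced: true,
  timestamp: Date.now()
};
```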
Key characteristics:
Local-only - Each device maintains its own attachment table
Automatic management - Queue handles all inserts/updates
Cross-client coordination - Your data model (e.g., users.photo_id) tells each client which files it needs
The Remote Storage Adapter is an interface you implement to connect PowerSync with your cloud storage provider. It's completely platform-agnostic: implementations can use S3, Supabase Storage, Cloudflare R2, Azure Blob, or even IPFS.

Interface methods:
uploadFile(fileData, attachment) - Upload file to cloud storage
downloadFile(attachment) - Download file from cloud storage
deleteFile(attachment) - Delete file from cloud storage
Common pattern:
For security reasons, client-side implementations should use signed URLs:
Request a signed upload/download URL from your backend
Your backend validates permissions and generates a temporary URL
Client uploads/downloads directly to storage using the signed URL
The Local Storage Adapter handles file persistence on the device. PowerSync provides implementations for common platforms and allows you to create custom adapters.

Interface methods:
initialize() - Set up storage (create directories, etc.)
saveFile(path, data) - Write file to storage
readFile(path) - Read file from storage
deleteFile(path) - Remove file from storage
fileExists(path) - Check if file exists
getLocalUri(filename) - Get full path for a filename
Built-in adapters:
IndexedDB - For web browsers (IndexDBFileSystemStorageAdapter)
Node.js Filesystem - For Node/Electron (NodeFileSystemAdapter)
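To make the interface concrete, here is an illustrative in-memory adapter implementing the methods listed above. This is for demonstration only (real apps should use the built-in adapters); the class name and URI scheme are invented for the example.

```typescript
// Illustrative only: an in-memory local storage adapter implementing the
// interface methods listed above. Real apps would use a built-in adapter.
class InMemoryStorageAdapter {
  private files = new Map<string, Uint8Array>();

  // Set up storage; nothing to create for an in-memory map.
  async initialize(): Promise<void> {}

  async saveFile(path: string, data: Uint8Array): Promise<void> {
    this.files.set(path, data);
  }

  async readFile(path: string): Promise<Uint8Array> {
    const data = this.files.get(path);
    if (!data) throw new Error(`File not found: ${path}`);
    return data;
  }

  async deleteFile(path: string): Promise<void> {
    this.files.delete(path);
  }

  async fileExists(path: string): Promise<boolean> {
    return this.files.has(path);
  }

  getLocalUri(filename: string): string {
    return `memory://attachments/${filename}`;
  }
}
```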
The Attachment Queue is the orchestrator that manages the entire attachment lifecycle. It:
Watches your data model - You pass a watchAttachments function as a parameter that monitors which files your app references
Manages state transitions - Automatically moves files through states (upload/download → synced → archive → delete)
Handles retries - Failed operations are retried on the next sync interval
Performs cleanup - Removes archived files that are no longer needed
Verifies integrity - Checks local files exist and repairs inconsistencies
Watched Attachments pattern:
The queue needs to know which attachments exist in your data model. The watchAttachments function you provide monitors your data model and returns a list of attachment IDs that your app references. The queue compares this list with its internal attachment table to determine:
New attachments - Download them
Missing attachments - Upload them
Removed attachments - Archive them
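The comparison the queue performs can be sketched as a simple set difference. This is a simplified assumption (the real queue also tracks per-record sync state and handles the upload case for locally created files); the function name is hypothetical.

```typescript
// Simplified sketch of the queue's reconciliation step: compare the IDs
// your data model references against the local attachments table.
function diffAttachments(
  watchedIds: string[], // IDs referenced by your data model
  localIds: string[]    // IDs already in the local attachments table
) {
  const watched = new Set(watchedIds);
  const local = new Set(localIds);
  return {
    // New references with no local record -> queue a download
    toDownload: watchedIds.filter((id) => !local.has(id)),
    // Local records no longer referenced -> archive them
    toArchive: localIds.filter((id) => !watched.has(id))
  };
}

const plan = diffAttachments(['a', 'b'], ['b', 'c']);
// plan.toDownload -> ['a'], plan.toArchive -> ['c']
```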
The watchAttachments queries are reactive and execute whenever the watched tables change, keeping the attachment queue synchronized with your data model.

There are a few scenarios you might encounter:

**Single Attachment Type**

For a single attachment type, you watch one table. For example, if users have profile photos:
```sql
SELECT photo_id FROM users WHERE photo_id IS NOT NULL
```
**Multiple Attachment Types - Single Queue**

You can watch multiple attachment types using a single queue by combining queries with SQL UNION or UNION ALL. This allows you to monitor attachments across different tables (e.g., users.photo_id, documents.document_id, videos.video_id) in one queue. Each attachment type may have different file extensions, which can be handled in the query by selecting the extension from your data model or using type-specific defaults. For example:
```sql
SELECT photo_id as id, photo_file_extension as file_extension
FROM users WHERE photo_id IS NOT NULL
UNION ALL
SELECT document_id as id, document_file_extension as file_extension
FROM documents WHERE document_id IS NOT NULL
UNION ALL
SELECT video_id as id, video_file_extension as file_extension
FROM videos WHERE video_id IS NOT NULL
```
Use UNION ALL when you want to include all rows (including duplicates), or UNION when you want to automatically deduplicate results. For attachment watching, UNION ALL is typically preferred since attachment IDs should already be unique.
The UNION query executes whenever any of the watched tables change, which may have higher database overhead compared to watching a single table. Implementation examples are shown in the Initialize Attachment Queue section below.
**Multiple Attachment Types - Multiple Queues**

Alternatively, you can create separate queues for different attachment types. Each queue watches its own specific table(s) with simpler queries, allowing for independent configuration and management.
Multiple queues may use more memory, but each queue watches simpler queries. Implementation examples are shown in the Initialize Attachment Queue section below.
```typescript
// For web browsers (IndexedDB)
import { IndexDBFileSystemStorageAdapter } from '@powersync/web';

const localStorage = new IndexDBFileSystemStorageAdapter('my-app-files');

// For Node.js/Electron (filesystem)
// import { NodeFileSystemAdapter } from '@powersync/node';
// const localStorage = new NodeFileSystemAdapter('./user-attachments');

// For React Native (Expo or bare React Native)
// Need to install @powersync/attachments-storage-react-native
//
// For Expo projects, also install expo-file-system
// import { ExpoFileSystemStorageAdapter } from '@powersync/attachments-storage-react-native';
// const localStorage = new ExpoFileSystemStorageAdapter();
//
// For bare React Native, also install @dr.pogodin/react-native-fs
// import { ReactNativeFileSystemStorageAdapter } from '@powersync/attachments-storage-react-native';
// const localStorage = new ReactNativeFileSystemStorageAdapter();

// Remote storage adapter (example with signed URLs)
const remoteStorage = {
  async uploadFile(fileData: ArrayBuffer, attachment: AttachmentRecord) {
    // Request signed upload URL from your backend
    const { uploadUrl } = await fetch('/api/attachments/upload-url', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        filename: attachment.filename,
        contentType: attachment.mediaType
      })
    }).then(r => r.json());

    // Upload to cloud storage using signed URL
    await fetch(uploadUrl, {
      method: 'PUT',
      body: fileData,
      headers: {
        'Content-Type': attachment.mediaType || 'application/octet-stream'
      }
    });
  },

  async downloadFile(attachment: AttachmentRecord): Promise<ArrayBuffer> {
    // Request signed download URL from your backend
    const { downloadUrl } = await fetch(
      `/api/attachments/${attachment.id}/download-url`
    ).then(r => r.json());

    // Download from cloud storage
    const response = await fetch(downloadUrl);
    return response.arrayBuffer();
  },

  async deleteFile(attachment: AttachmentRecord) {
    // Delete via your backend
    await fetch(`/api/attachments/${attachment.id}`, {
      method: 'DELETE'
    });
  }
};
```
Security Best Practice: Always use your backend to generate signed URLs and validate permissions. Never expose storage credentials directly to clients.
```typescript
import { AttachmentQueue } from '@powersync/web';

const attachmentQueue = new AttachmentQueue({
  db: db, // PowerSync database instance
  localStorage,
  remoteStorage,
  // Define which attachments exist in your data model
  watchAttachments: (onUpdate) => {
    db.watch(
      `SELECT photo_id FROM users WHERE photo_id IS NOT NULL`,
      [],
      {
        onResult: async (result) => {
          const attachments = result.rows?._array.map(row => ({
            id: row.photo_id,
            fileExtension: 'jpg'
          })) ?? [];
          await onUpdate(attachments);
        }
      }
    );
  },
  // Optional configuration
  syncIntervalMs: 30000,     // Sync every 30 seconds
  downloadAttachments: true, // Auto-download referenced files
  archivedCacheLimit: 100    // Keep 100 archived files before cleanup
});

// Start the sync process
await attachmentQueue.startSync();
```
The watchAttachments callback is crucial - it tells the queue which files your app needs based on your data model. The queue uses this to automatically download, upload, or archive files.
When watching multiple attachment types, you need to provide the fileExtension for each attachment. You can store this in your data model tables or derive it from other fields. Here are examples for both patterns:

**Pattern 2: Single Queue with UNION**
```typescript
// Example: Watching users.photo_id, documents.document_id, and videos.video_id
// Assuming your tables store file extensions
const attachmentQueue = new AttachmentQueue({
  db: db,
  localStorage,
  remoteStorage,
  watchAttachments: (onUpdate) => {
    db.watch(
      `SELECT photo_id as id, photo_file_extension as file_extension
       FROM users WHERE photo_id IS NOT NULL
       UNION ALL
       SELECT document_id as id, document_file_extension as file_extension
       FROM documents WHERE document_id IS NOT NULL
       UNION ALL
       SELECT video_id as id, video_file_extension as file_extension
       FROM videos WHERE video_id IS NOT NULL`,
      [],
      {
        onResult: async (result) => {
          const attachments = result.rows?._array.map(row => ({
            id: row.id,
            fileExtension: row.file_extension
          })) ?? [];
          await onUpdate(attachments);
        }
      }
    );
  },
  // ... other options
});

await attachmentQueue.startSync();
```
**Pattern 3: Multiple Queues**
```typescript
// Create separate queues for different attachment types
const photoQueue = new AttachmentQueue({
  db: db,
  localStorage,
  remoteStorage,
  watchAttachments: (onUpdate) => {
    db.watch(
      `SELECT photo_id FROM users WHERE photo_id IS NOT NULL`,
      [],
      {
        onResult: async (result) => {
          const attachments = result.rows?._array.map(row => ({
            id: row.photo_id,
            fileExtension: 'jpg'
          })) ?? [];
          await onUpdate(attachments);
        }
      }
    );
  },
});

const documentQueue = new AttachmentQueue({
  db: db,
  localStorage,
  remoteStorage,
  watchAttachments: (onUpdate) => {
    db.watch(
      `SELECT document_id FROM documents WHERE document_id IS NOT NULL`,
      [],
      {
        onResult: async (result) => {
          const attachments = result.rows?._array.map(row => ({
            id: row.document_id,
            fileExtension: 'pdf'
          })) ?? [];
          await onUpdate(attachments);
        }
      }
    );
  },
});

await Promise.all([
  photoQueue.startSync(),
  documentQueue.startSync()
]);
```
```typescript
async function uploadProfilePhoto(imageBlob: Blob, userId: string) {
  const arrayBuffer = await imageBlob.arrayBuffer();

  const attachment = await attachmentQueue.saveFile({
    data: arrayBuffer,
    fileExtension: 'jpg',
    mediaType: 'image/jpeg',
    // updateHook runs in same transaction, ensuring atomicity
    updateHook: async (tx, attachment) => {
      await tx.execute(
        'UPDATE users SET photo_id = ? WHERE id = ?',
        [attachment.id, userId]
      );
    }
  });

  return attachment;
}

// The queue will:
// 1. Save file locally immediately
// 2. Create attachment record with state QUEUED_UPLOAD
// 3. Update user record in same transaction
// 4. Automatically upload file in background
// 5. Update state to SYNCED when complete
```
The updateHook parameter is the recommended way to link attachments to your data model. It runs in the same database transaction, ensuring data consistency.
```typescript
async function deleteProfilePhoto(userId: string, photoId: string) {
  await attachmentQueue.deleteFile({
    id: photoId,
    // updateHook ensures atomic deletion
    updateHook: async (tx, attachment) => {
      await tx.execute(
        'UPDATE users SET photo_id = NULL WHERE id = ?',
        [userId]
      );
    }
  });

  console.log('Photo queued for deletion');
  // The queue will:
  // 1. Delete from remote storage
  // 2. Delete local file
  // 3. Remove attachment record
}

// Alternative: Remove reference and let queue archive it automatically
async function removePhotoReference(userId: string) {
  await db.execute(
    'UPDATE users SET photo_id = NULL WHERE id = ?',
    [userId]
  );
  // The watchAttachments callback will detect this change
  // The queue will automatically archive the unreferenced attachment
  // After reaching archivedCacheLimit, it will be deleted
}
```
verifyAttachments() is always called internally during startSync(). This method:
1. Checks that local files exist at their expected paths
2. Repairs broken localUri references
3. Archives attachments with missing files
4. Requeues downloads for synced files with missing local copies
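The repair logic can be sketched as follows. This is a simplified model of the behavior described above, using invented names and plain objects rather than the SDK's internals.

```typescript
// Hedged sketch of the per-record verification pass described above.
interface Att {
  id: string;
  state: 'QUEUED_UPLOAD' | 'QUEUED_DOWNLOAD' | 'SYNCED' | 'ARCHIVED';
  localUri?: string;
}

function verifyAttachment(att: Att, localFileExists: boolean): Att {
  // Local file present at its expected path: nothing to repair.
  if (localFileExists) return att;

  if (att.state === 'SYNCED') {
    // A remote copy exists: clear the broken localUri and requeue a download.
    return { ...att, localUri: undefined, state: 'QUEUED_DOWNLOAD' };
  }

  // No local file and no confirmed remote copy: archive the record.
  return { ...att, localUri: undefined, state: 'ARCHIVED' };
}
```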
If you are migrating from the now-deprecated attachment helpers for Dart or JavaScript, follow the notes below:
powersync_attachments_helper (Dart)
@powersync/attachments (JS)
A fairly simple migration from powersync_attachments_helper to the new utilities is to adopt the new library with a different Attachment Queue table name and drop the legacy package. This means locally cached attachments are lost, but they will be re-downloaded automatically.
Import AttachmentTable and AttachmentQueue directly from your platform SDK (@powersync/web, @powersync/node, or @powersync/react-native), then remove @powersync/attachments from your dependencies.

React Native only: also install @powersync/attachments-storage-react-native plus either expo-file-system (Expo 54+) or @dr.pogodin/react-native-fs.

What changed:
| Before (@powersync/attachments) | After (platform SDK) |
| --- | --- |
| AbstractAttachmentQueue subclass | AttachmentQueue instantiated directly |
| onAttachmentIdsChange(ids: string[]) | watchAttachments — items must be { id, fileExtension }, not just IDs |
| newAttachmentRecord() + saveToQueue() | saveFile({ data, fileExtension, updateHook }) |
| init() | startSync() |
| Single storage adapter | localStorage + remoteStorage (two separate adapters) |
| syncInterval | syncIntervalMs |
| cacheLimit | archivedCacheLimit |
| AttachmentTable option: name | viewName |
| AttachmentTable option: additionalColumns | Removed — use the built-in meta_data column (JSON string) instead |
| Error handlers return { retry: boolean } | Return Promise<boolean>; onDeleteError is now also required |
Tip: use a different viewName (e.g. attachment_queue) to avoid a SQLite conflict with the old attachments table during the transition.

Data on existing users: the new local attachments table starts empty. Files already in remote storage will re-download automatically once referenced by your watchAttachments query. Files that were only ever stored locally and never uploaded have no remote copy and will not be recoverable.