Backup & Restore Your Salesforce Data

A free, open-source desktop app that gives you complete control over your Salesforce data. Export to CSV files or directly to your data warehouse. Verify everything. Restore when needed.

1-15 Parallel Workers
4 Export Destinations
4GB Default Memory
🔐 Connect to Salesforce
📋 Select Objects
⚙️ Choose Destination
▶️ Run Backup
✅ Verify

Data Backup Features

Everything you need to safely export your Salesforce data, from simple CSV exports to enterprise data warehouses.

📅

Incremental Backup Available

Only back up records that have changed since your last backup. Save time and API calls on large orgs.

What it does

Instead of exporting all records every time, incremental backup only fetches records that were created or modified since your last successful backup. Perfect for daily backups of large orgs.

How to use
  • Check the "Incremental Backup" checkbox on the Backup page
  • The app automatically remembers your last backup date per object
  • The first run is always a full backup; subsequent runs are incremental
Technical Details: Uses LastModifiedDate > [timestamp] in the SOQL WHERE clause. Backup history is stored in ~/.backupforce/backup_history.json. Objects without a LastModifiedDate field (such as History objects and Custom Metadata) automatically fall back to a full backup.
📄 View BackupHistory.java
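The incremental filter described above can be sketched as a small query builder. The class and method names here are illustrative, not taken from the actual codebase:

```java
import java.time.Instant;
import java.util.List;

public class IncrementalQueryBuilder {

    // Builds the SOQL for one object. A null lastBackup means this is the
    // first run, so no filter is added and a full export happens.
    public static String build(String object, List<String> fields, Instant lastBackup) {
        String soql = "SELECT " + String.join(", ", fields) + " FROM " + object;
        if (lastBackup != null) {
            // SOQL datetime literals are unquoted ISO-8601 UTC values.
            soql += " WHERE LastModifiedDate > " + lastBackup;
        }
        return soql;
    }
}
```

Passing null on the first run matches the "first run is always a full backup" behavior above.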
⏭️

Skip If Already Backed Up Available

Resume interrupted backups by skipping objects that already match your Salesforce record count.

What it does

If your backup gets interrupted (crash, timeout, out of memory), you can restart it and the app will skip objects that are already fully backed up. It compares record counts in your destination with Salesforce.

How to use
  • Go to Configure Database settings
  • Check "Skip tables if record count matches Salesforce"
  • Make sure "Drop and recreate tables" is unchecked
  • When you restart a backup, matching objects show "SKIPPED"
Technical Details: Queries SELECT COUNT(DISTINCT Id) FROM table in Snowflake/PostgreSQL/SQL Server and compares to Salesforce's record count via getRecordCount(). Only available for database destinations (not CSV yet).
📄 View JdbcDatabaseSink.java
✏️

Field Selection Available

Choose exactly which fields to back up per object. Skip formula fields and reduce file sizes.

What it does

Instead of backing up all 500+ fields on an object, select only the ones you need. Essential fields (Id, Name, CreatedDate, etc.) are always included automatically.

How to use
  • Click the ✏️ pencil icon next to any object in the list
  • A dialog shows all available fields with checkboxes
  • Use "Select All" / "Deselect All" for quick selection
  • System and essential fields are pre-selected
Technical Details: Uses Salesforce REST API /describe to get field metadata. Builds a custom SOQL SELECT statement with only chosen fields. Field selections are remembered per object.
📄 View FieldSelectionDialog.java
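The "essential fields are always included" guarantee can be sketched as a merge of the user's picks with a fixed list. The class name and the exact essential-field list are assumptions for illustration:

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class FieldSelection {

    // Illustrative essential-field list; the app derives the real one from metadata.
    private static final List<String> ESSENTIAL = List.of("Id", "Name", "CreatedDate");

    // Merges the user's picks with the always-included essentials.
    // A LinkedHashSet keeps the order stable and drops duplicates.
    public static String buildSelect(String object, List<String> userPicked) {
        Set<String> fields = new LinkedHashSet<>(ESSENTIAL);
        fields.addAll(userPicked);
        return "SELECT " + String.join(", ", fields) + " FROM " + object;
    }
}
```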
🔍

Custom WHERE Filter Available

Filter your backup with any SOQL WHERE clause. Backup only active records, specific record types, or date ranges.

What it does

Add custom filtering conditions that apply to all objects during backup. Great for backing up only active accounts, opportunities from a specific date range, or records owned by certain users.

How to use
  • Check "Use Custom WHERE" checkbox
  • Enter your condition: IsDeleted = false AND CreatedDate > 2024-01-01T00:00:00Z
  • Don't include the "WHERE" keyword - just the condition
  • Works together with incremental backup (conditions are combined with AND)
Technical Details: The WHERE clause is appended to the Bulk API v2 query. If combined with incremental backup, the final query becomes: WHERE (LastModifiedDate > [timestamp]) AND ([your condition])
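The combination rule above is easy to show in code. This is a sketch with hypothetical names, parenthesizing each condition so SOQL operator precedence cannot change the meaning:

```java
public class WhereClauseCombiner {

    // Combines the incremental condition and the user's custom condition.
    // Either argument may be null when that filter is not in use.
    public static String combine(String incremental, String custom) {
        if (incremental == null && custom == null) return "";
        if (incremental == null) return " WHERE " + custom;
        if (custom == null) return " WHERE " + incremental;
        // Both filters active: WHERE (incremental) AND (custom)
        return " WHERE (" + incremental + ") AND (" + custom + ")";
    }
}
```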

Backup Verification New

Verify your backup is complete by comparing record counts with Salesforce. Build trust in your data.

What it does

After a backup completes, verification checks that every object in your destination has the same record count as Salesforce. You get a confidence report: HIGH (exact match), MODERATE (small variance), or LOW (significant difference).

How to use
  • Click the Verify button after a backup completes
  • Or check "Verify after backup" to run automatically
  • Set default in Preferences → "Verify backup after completion by default"
Technical Details: For CSV: counts rows in each CSV file (excluding header). For databases: runs SELECT COUNT(DISTINCT Id). Compares against Salesforce /query?q=SELECT COUNT() FROM Object. Results saved to backup history.
📄 View BackupVerifier.java
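The confidence scoring could look like the sketch below. The 1% variance threshold is an illustrative assumption, not the app's actual number:

```java
public class VerificationReport {

    public enum Confidence { HIGH, MODERATE, LOW }

    // Scores one object by comparing Salesforce's count with the destination's.
    // Exact match is HIGH; under 1% variance (hypothetical threshold) is
    // MODERATE; anything larger is LOW.
    public static Confidence score(long salesforceCount, long destinationCount) {
        if (salesforceCount == destinationCount) return Confidence.HIGH;
        long diff = Math.abs(salesforceCount - destinationCount);
        double variance = salesforceCount == 0 ? 1.0 : (double) diff / salesforceCount;
        return variance < 0.01 ? Confidence.MODERATE : Confidence.LOW;
    }
}
```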
📎

Attachments & Files Available

Download actual file content from Attachments, ContentVersion, and Documents - not just metadata.

What it does

When backing up Attachment, ContentVersion, or Document objects, the actual binary file content (PDFs, images, Word docs, etc.) is downloaded and saved alongside the record metadata.

Supported Objects
  • Attachment → Body field
  • ContentVersion → VersionData field
  • Document → Body field
Storage
  • CSV: Files saved to [backup-folder]/blobs/[ObjectName]/[RecordId]_filename.ext. CSV includes a BlobFilePath column.
  • Snowflake: Binary stored in BINARY(8MB) column
  • PostgreSQL: Binary stored in BYTEA column
  • SQL Server: Binary stored in VARBINARY(MAX) column
Technical Details: Uses Salesforce REST API to download binary content via /sobjects/[Object]/[Id]/Body or /sobjects/ContentVersion/[Id]/VersionData. Memory-efficient streaming for large files. Default heap is 4GB to handle large attachments.
📄 View blob download code
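The two paths involved, the REST endpoint for the binary body and the local blobs/ file layout, can be sketched as pure helpers. The class name and the file-name sanitization rule are illustrative assumptions:

```java
import java.nio.file.Path;

public class BlobPaths {

    // REST endpoint for a record's binary content. ContentVersion exposes
    // VersionData; Attachment and Document expose Body.
    public static String restPath(String object, String recordId) {
        String field = object.equals("ContentVersion") ? "VersionData" : "Body";
        return "/services/data/v62.0/sobjects/" + object + "/" + recordId + "/" + field;
    }

    // Local layout: [backup-folder]/blobs/[ObjectName]/[RecordId]_filename.ext
    public static Path localPath(Path backupFolder, String object, String recordId, String fileName) {
        // Replace characters that are illegal in file names on common platforms.
        String safe = fileName.replaceAll("[\\\\/:*?\"<>|]", "_");
        return backupFolder.resolve("blobs").resolve(object).resolve(recordId + "_" + safe);
    }
}
```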

Parallel Processing Available

Back up as many as 15 objects simultaneously (default: 5). Large orgs with 200+ objects finish in hours, not days.

What it does

Instead of backing up objects one at a time, BackupForce runs parallel backup workers. While waiting for Salesforce to process one Bulk API job, other objects are already being processed.

How to configure
  • Go to Preferences → Backup Defaults → Parallel Threads
  • Set between 1-15 workers (default: 5)
  • Lower values = less memory/API pressure, good for large objects
  • Higher values = faster backups, good for many small objects
How it works
  • You select 100 objects to back up
  • N workers start simultaneously (where N = your configured value)
  • As each worker finishes, it picks up the next object in queue
  • Progress bar shows overall completion
Technical Details: Uses Java ExecutorService with a configurable thread pool (1-15). Salesforce allows 100 concurrent Bulk API jobs per org; we limit to 15 to avoid consuming too many org resources. Each worker manages its own Bulk API v2 job lifecycle: create job → upload query → poll for completion → download results. Thread-safe progress tracking via AtomicInteger counters. Preference stored in Java Preferences at com.backupforce/threads.
📄 View parallel backup code
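The worker-queue behavior above maps directly onto a fixed-size ExecutorService: N threads, each picking up the next object when it finishes. A minimal sketch, with a counter standing in for real backup work:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelBackup {

    // Runs one task per object on a fixed-size pool and returns how many
    // completed. Thread-safe progress tracking via AtomicInteger, as in
    // the description above.
    public static int run(List<String> objects, int workers) {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        AtomicInteger done = new AtomicInteger();
        for (String object : objects) {
            pool.submit(() -> {
                // A real worker would create the Bulk API job for `object`,
                // poll it, and stream results to the destination here.
                done.incrementAndGet();
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }
}
```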
🚀

Salesforce Bulk API v2 Available

The fastest way to export data from Salesforce. Handle millions of records efficiently.

What it does

Bulk API v2 is Salesforce's most efficient API for large data operations. Unlike regular REST API (2,000 records per call) or SOAP API (batch mode), Bulk v2 processes queries server-side and streams results.

Why it matters
  • Export 10 million records without hitting API limits
  • Server-side query processing (no pagination on client)
  • Automatic result locator handling for large results
  • Supports SOQL queries including relationships
Technical Details: Creates query jobs via POST /services/data/v62.0/jobs/query. Polls /jobs/query/[jobId] until state is JobComplete. Downloads results via /jobs/query/[jobId]/results with streaming. Handles Sforce-Locator header for multi-part results.
📄 View BulkV2Client.java
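The poll-until-JobComplete loop can be sketched with the job-state lookup injected as a supplier, so the control flow is visible without a live org. In the real client the supplier would be a GET on /jobs/query/[jobId]; everything else here is an illustrative assumption:

```java
import java.util.function.Supplier;

public class BulkV2Poller {

    // Polls a job-state supplier until it reports a terminal state.
    // Bulk API v2 query jobs end in JobComplete, Failed, or Aborted.
    public static String waitForCompletion(Supplier<String> jobState, int maxPolls) {
        for (int i = 0; i < maxPolls; i++) {
            String state = jobState.get();
            if (state.equals("JobComplete") || state.equals("Failed") || state.equals("Aborted")) {
                return state;
            }
            // A real client would sleep with backoff between polls.
        }
        throw new IllegalStateException("Job did not finish within " + maxPolls + " polls");
    }
}
```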

Export Destinations

Choose where to store your backups. Start simple with CSV files, or integrate directly with your data warehouse.

📁

CSV Files Available

Export to simple CSV files. Easy to open in Excel, import to other systems, or archive.

Output Structure
  • Account.csv - One CSV file per object
  • blobs/Attachment/ - Binary files organized by object
  • _backup_manifest.json - Metadata about the backup
How to use
  • Select destination: CSV Folder
  • Click Browse to choose output folder
  • Files are written as each object completes
Technical Details: Uses Apache Commons CSV with RFC 4180 format. UTF-8 encoding with BOM for Excel compatibility. Handles NULL values, embedded quotes, and newlines in field data.
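The RFC 4180 quoting rule that Apache Commons CSV applies is worth seeing spelled out. This hand-rolled sketch only illustrates the rule; the app itself delegates to the library:

```java
public class Rfc4180 {

    // Quotes one field per RFC 4180: fields containing commas, double
    // quotes, or line breaks are wrapped in quotes, and embedded quotes
    // are doubled. NULL becomes an empty field.
    public static String escape(String field) {
        if (field == null) return "";
        boolean needsQuoting = field.contains(",") || field.contains("\"")
                || field.contains("\n") || field.contains("\r");
        if (!needsQuoting) return field;
        return "\"" + field.replace("\"", "\"\"") + "\"";
    }
}
```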
❄️

Snowflake Available

Stream data directly to Snowflake. Supports SSO authentication and automatic schema creation.

Features
  • Auto-creates tables matching Salesforce object structure
  • Maps Salesforce types to Snowflake types
  • SSO via "externalbrowser" authenticator
  • Binary data stored in BINARY columns
Connection Settings
  • Account: your-account.snowflakecomputing.com
  • Database: SALESFORCE_BACKUP
  • Schema: TBS_DATA (or your preference)
  • Warehouse: COMPUTE_WH
Technical Details: Uses Snowflake JDBC driver with net.snowflake:snowflake-jdbc. Table DDL generated dynamically from Salesforce /describe metadata. Batch inserts via PreparedStatement.addBatch().
📄 View JdbcDatabaseSink.java
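Generating the table DDL from describe metadata might look like the sketch below. The type map and class name are illustrative assumptions; the app builds the real mapping dynamically from /describe:

```java
import java.util.Map;
import java.util.stream.Collectors;

public class SnowflakeDdl {

    // Illustrative Salesforce-to-Snowflake type mapping (not exhaustive).
    private static final Map<String, String> TYPES = Map.of(
            "string", "VARCHAR",
            "double", "NUMBER",
            "boolean", "BOOLEAN",
            "datetime", "TIMESTAMP_NTZ",
            "base64", "BINARY");

    // fields maps column name -> Salesforce field type. Identifiers are
    // quoted to preserve case, as the destination sinks do.
    public static String createTable(String table, Map<String, String> fields) {
        String cols = fields.entrySet().stream()
                .map(e -> "\"" + e.getKey() + "\" " + TYPES.getOrDefault(e.getValue(), "VARCHAR"))
                .collect(Collectors.joining(", "));
        return "CREATE TABLE IF NOT EXISTS \"" + table + "\" (" + cols + ")";
    }
}
```

A LinkedHashMap for `fields` keeps the column order stable across runs.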
🐘

PostgreSQL Available

Back up to your PostgreSQL database. Great for on-premise data warehouses or cloud PostgreSQL.

Type Mapping
  • Salesforce Text → TEXT
  • Salesforce Number → NUMERIC
  • Salesforce Date → DATE
  • Salesforce DateTime → TIMESTAMP
  • Salesforce Boolean → BOOLEAN
  • Binary data → BYTEA
Technical Details: Uses the PostgreSQL JDBC driver. Table and column names are quoted to preserve case and handle reserved words. Uses ON CONFLICT DO NOTHING to skip rows that already exist when tables are not being recreated.
🗄️

SQL Server Available

Direct integration with Microsoft SQL Server for enterprise backup strategies.

Type Mapping
  • Salesforce Text → NVARCHAR(MAX)
  • Salesforce Number → DECIMAL
  • Salesforce DateTime → DATETIME2
  • Binary data → VARBINARY(MAX)
Technical Details: Uses Microsoft JDBC driver mssql-jdbc. Supports Windows authentication and SQL authentication. Schema and table names use bracket notation [schema].[table].

Data Restore Coming Soon

Restore your backed-up data to Salesforce. Whether recovering from data loss, migrating between orgs, or setting up sandboxes - restore makes it simple.

🔗

Relationship Resolution Coming Soon

Automatically resolve lookup and master-detail relationships. Old IDs map to new IDs seamlessly.

The Problem

When you restore an Opportunity, it has an AccountId pointing to the old Account record's ID. But in the target org, that Account has a different ID (or doesn't exist yet).

The Solution
  • Restore parent objects first (Accounts before Opportunities)
  • Track Old ID → New ID mapping for every inserted record
  • Replace lookup field values with new IDs before insert
Technical Details: Uses topological sort to determine restore order based on relationship metadata. Stores mappings in _id_mapping.json. Handles circular references by doing a two-pass insert (insert with nulls, then update with resolved IDs).
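The parent-first ordering is a topological sort over "child depends on parent" edges (e.g. Opportunity depends on Account). A sketch using Kahn's algorithm, with hypothetical names; objects left over when the queue drains are in a cycle and would need the two-pass insert described above:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RestoreOrder {

    // Returns the objects parent-first, so new parent IDs exist before any
    // child referencing them is inserted. parentsOf maps each object to
    // the objects its lookups point at.
    public static List<String> sort(List<String> objects, Map<String, List<String>> parentsOf) {
        Map<String, Integer> pending = new HashMap<>();       // unresolved parents per object
        Map<String, List<String>> childrenOf = new HashMap<>();
        for (String o : objects) pending.put(o, 0);
        for (String child : objects) {
            for (String parent : parentsOf.getOrDefault(child, List.of())) {
                pending.merge(child, 1, Integer::sum);
                childrenOf.computeIfAbsent(parent, k -> new ArrayList<>()).add(child);
            }
        }
        Deque<String> ready = new ArrayDeque<>();
        for (String o : objects) if (pending.get(o) == 0) ready.add(o);
        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String o = ready.poll();
            order.add(o);
            for (String child : childrenOf.getOrDefault(o, List.of())) {
                if (pending.merge(child, -1, Integer::sum) == 0) ready.add(child);
            }
        }
        return order; // shorter than `objects` if a cycle remains
    }
}
```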
🔄

Cross-Org Transformation Coming Soon

Migrate data between different orgs. Map RecordTypes, picklist values, and users automatically.

What it solves

Source org has RecordType "Enterprise Account" with ID 012xxx. Target org has same RecordType but ID 012yyy. Same for picklist values, users, profiles, and more.

Automatic Mapping
  • RecordTypes: Matched by DeveloperName
  • Users: Matched by Email or Username
  • Picklists: Matched by API value
  • Profiles: Matched by Name
Technical Details: Pre-restore phase queries target org metadata. Builds transformation map. Dry-run mode shows what will change without actually inserting. Transformation rules stored in _transformation_rules.json.
🔬

Dry Run Mode Coming Soon

Preview exactly what will happen before committing any changes. See transformations, warnings, and potential issues.

What you see
  • Number of records to be inserted per object
  • Fields that will be transformed
  • Unmapped values (missing RecordTypes, etc.)
  • Relationship resolution preview
  • Estimated API usage
How to use
  • Select your backup folder or database connection
  • Configure transformation options
  • Click "Dry Run" instead of "Restore"
  • Review the report, then run actual restore

Download

Pre-built releases include bundled Java runtime. No additional software required - just download, extract, and run.

🪟

Windows

Windows 10/11 (64-bit)

BackupForce-portable.zip
🍎

macOS

macOS 11+ (Apple Silicon & Intel)

BackupForce.dmg
🐧

Linux

Ubuntu 20.04+, RHEL 8+

backupforce.tar.gz

Architecture

BackupForce is a JavaFX desktop application built for reliability and performance.

┌────────────────────────────────────────────────────────────────┐
│                          BackupForce                           │
├────────────────────────────────────────────────────────────────┤
│ UI Layer (JavaFX)                                              │
│ ├── FXML Controllers (Dashboard, Backup, Restore, Preferences) │
│ └── CSS Theming (Windows 11 Dark, VS Code Dark)                │
├────────────────────────────────────────────────────────────────┤
│ Service Layer                                                  │
│ ├── BackupService ─────→ 1-15 parallel workers (configurable)  │
│ ├── RestoreService ────→ Relationship resolution, ID mapping   │
│ ├── VerificationService → Count comparison, confidence scoring │
│ └── BackupHistory ─────→ Incremental tracking, last modified   │
├────────────────────────────────────────────────────────────────┤
│ API Layer                                                      │
│ ├── BulkV2Client ──────→ Query jobs, result streaming          │
│ ├── SalesforceClient ──→ REST API, describe, blob download     │
│ └── OAuthManager ──────→ OAuth 2.0, token refresh              │
├────────────────────────────────────────────────────────────────┤
│ Storage Layer (DataSink interface)                             │
│ ├── CsvExporter ───────→ Apache Commons CSV, blob files        │
│ ├── SnowflakeSink ─────→ Snowflake JDBC, SSO support           │
│ ├── PostgresSink ──────→ PostgreSQL JDBC                       │
│ └── SqlServerSink ─────→ Microsoft JDBC                        │
├────────────────────────────────────────────────────────────────┤
│ Configuration                                                  │
│ ├── ConnectionManager ─→ Saved connections (encrypted)         │
│ ├── Preferences ───────→ User settings                         │
│ └── BackupHistory ─────→ ~/.backupforce/backup_history.json    │
└────────────────────────────────────────────────────────────────┘

Frequently Asked Questions

How is this different from Salesforce's Data Export service?
Salesforce's built-in Data Export runs weekly (or monthly on lower editions) and gives you a ZIP file of CSVs. BackupForce lets you backup on-demand, choose specific objects and fields, export directly to databases like Snowflake, run incremental backups, and verify that your backup is complete. Plus, it works with all Salesforce editions.
What Salesforce editions are supported?
BackupForce works with any Salesforce edition that supports the Bulk API v2 and REST API. This includes Enterprise, Unlimited, Developer, and Performance editions. Professional Edition works if you have API access enabled (usually requires an add-on).
How many API calls does a backup use?
Bulk API v2 jobs don't count against your daily REST API limits - they use a separate allocation. A typical backup of 100 objects might use: 100 Bulk API jobs (one per object), plus a few hundred REST calls for metadata and record counts. For most orgs, this is well within limits.
Can I schedule automated backups?
Not yet within the app, but it's on the roadmap. For now, you can use Windows Task Scheduler, cron, or any automation tool to launch BackupForce with command-line arguments for unattended backups.
Is my data secure?
BackupForce runs entirely on your machine - no data is sent to any third-party servers. OAuth tokens are stored locally in your OS credential store. Database passwords for saved connections are encrypted. Your backup files are stored wherever you choose.
What about custom objects and fields?
Fully supported. BackupForce queries the Salesforce /describe API to discover all objects and fields, including custom ones (__c), managed package objects, and custom metadata types (__mdt).

Building from Source

Requirements

git clone https://github.com/victorfelisbino/BackupForce.git
cd BackupForce
mvn clean package -DskipTests
java -jar target/BackupForce.jar

Creating Native Executables

# Windows
.\scripts\build-portable.ps1

# macOS  
./scripts/build-mac.sh

# The output is in the /dist folder

Documentation