I once had a 670 MB SQLite database and a simple requirement: put it on a static site so users could search it by keyword. Use a backend? The whole project was static; I didn’t want to spin up a server just for this. Upload the raw DB and have the browser download it? Nobody is waiting for 670 MB.
That’s when I found sql.js-httpvfs, and the problem went away. It builds on sql.js, adding an HTTP Range Request-based virtual file system so the browser only fetches the SQLite pages a query actually needs. That same 670 MB database? A simple key lookup transfers roughly 1 KB.
How it works: fetching only what you need
SQLite stores data in fixed-size pages (4096 bytes by default). Every B-Tree node, every index entry, and every row maps to a specific page number. When you run an indexed query, SQLite only needs to read the pages along the B-Tree path; it never has to scan the whole table.
sql.js-httpvfs exploits this by replacing the Emscripten VFS (Virtual File System) layer: where stock sql.js reads from an in-memory ArrayBuffer, this version issues HTTP Range Requests.
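To make that concrete: a read of page 100 in a database with the default 4096-byte pages would translate into a request like the following (file name and offsets illustrative):

```http
GET /db.sqlite3 HTTP/1.1
Host: example.com
Range: bytes=409600-413695
```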
The server returns just those 4096 bytes, which get handed to the SQLite engine. Everything runs inside a Web Worker, so the main thread is never blocked, and all queries are async.
To reduce round trips, the library implements three virtual read heads that each track access patterns. If a read head detects sequential page access, it automatically ramps up prefetching, from one page at a time to several pages per request. This matters a lot for full-text search, which traverses many tree nodes in sequence.
Index design determines transfer size
The most important thing to understand about this approach: your queries must use indexes, or the benefits largely disappear.
A SCAN TABLE means SQLite has to read every page in the table; under HTTP Range Requests, that means downloading the whole table. A COVERING INDEX lets the query work entirely within the index B-Tree, without touching the data rows at all.
Use EXPLAIN QUERY PLAN to confirm which one you’re getting.
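A quick sketch with a hypothetical posts table; the index name and columns are made up for illustration:

```sql
-- Hypothetical schema: posts(id, category, title, body)
EXPLAIN QUERY PLAN
SELECT title FROM posts WHERE category = 'sqlite';

-- Good: SEARCH posts USING COVERING INDEX idx_posts_category_title (category=?)
-- Bad:  SCAN posts   -- full scan: every table page gets downloaded
```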
Installation and initialization
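The package is installed from npm as usual:

```shell
npm install sql.js-httpvfs
```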
sql.js-httpvfs needs two additional static assets: sql-wasm.wasm and a Worker JS file. Both are included in the package; with Vite, you just need to copy them into your public directory.
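One low-tech way to do this is to copy the files out of the package’s dist folder into public/, which Vite serves as-is (paths assume the package’s default dist layout):

```shell
cp node_modules/sql.js-httpvfs/dist/sqlite.worker.js public/
cp node_modules/sql.js-httpvfs/dist/sql-wasm.wasm public/
```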
Next, initialize the worker.
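A minimal sketch using the library’s createDbWorker entry point; the asset paths and the db.sqlite3 URL are assumptions based on the copy step above:

```typescript
import { createDbWorker } from "sql.js-httpvfs";

// Asset paths match the files copied into public/; db.sqlite3 is hypothetical.
const worker = await createDbWorker(
  [
    {
      from: "inline",
      config: {
        serverMode: "full",      // a single database file, served whole
        url: "/db.sqlite3",      // fetched piecewise via Range Requests
        requestChunkSize: 4096,  // must match the database page size
      },
    },
  ],
  "/sqlite.worker.js",
  "/sql-wasm.wasm"
);
```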
requestChunkSize defaults to 4096, matching the SQLite default page size. If you set your database page size to 1024, adjust this value to match.
Preparing the database
How the database is configured directly affects transfer efficiency, so do a few preparation steps before uploading.
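The preparation boils down to a couple of pragmas plus a VACUUM, run in the sqlite3 shell before uploading; the 1024-byte page size is one reasonable choice, not a requirement:

```sql
PRAGMA journal_mode = DELETE;  -- WAL sidecar files can't be served statically
PRAGMA page_size = 1024;       -- smaller pages mean smaller minimum transfers
VACUUM;                        -- rewrites the file so the new page size applies
```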
When designing indexes, think about your query patterns and use covering indexes wherever possible.
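For example, with a hypothetical posts table queried by category and sorted by date, an index that includes every selected column lets SQLite answer from the index B-Tree alone:

```sql
-- Hypothetical schema: posts(id, category, created_at, title, body)
CREATE INDEX idx_posts_cat_date_title
  ON posts (category, created_at DESC, title);

-- Now served entirely from the index (covering index):
-- SELECT title FROM posts WHERE category = ? ORDER BY created_at DESC;
```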
For full-text search, use FTS5.
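A sketch of an external-content FTS5 table over the same hypothetical posts table; external content avoids storing the text twice:

```sql
CREATE VIRTUAL TABLE posts_fts USING fts5(
  title, body,
  content='posts', content_rowid='id'
);

-- Populate the index from the existing rows once
INSERT INTO posts_fts(rowid, title, body)
  SELECT id, title, body FROM posts;
```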
Running queries
Once the worker is set up, querying looks similar to regular sql.js, except that everything returns a Promise.
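A sketch of a simple lookup, assuming the worker from the initialization step and the hypothetical posts table; I’m assuming bind parameters are passed as a second argument, sql.js-style:

```typescript
// Indexed primary-key lookup: only a few B-Tree pages are fetched
const rows = await worker.db.query(
  "SELECT title, body FROM posts WHERE id = ?",
  [42]
);
// rows is an array of row objects, one property per column
```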
Full-text search works the same way.
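Against the posts_fts table sketched earlier, a MATCH query might look like this (schema hypothetical):

```typescript
const hits = await worker.db.query(
  "SELECT rowid, title FROM posts_fts WHERE posts_fts MATCH ? LIMIT 20",
  ["sqlite"]
);
```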
Monitoring transfer size
One of my favorite parts: you can see exactly how many bytes each query fetches.
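A sketch of the dev-time check; I’m assuming the stats call lives on the inner worker proxy (worker.worker), so verify the exact location against the version you install:

```typescript
// Dev-only: log cumulative transfer after running some queries
const stats = await worker.worker.getStats();
console.log(
  `${stats.totalFetchedBytes} bytes fetched in ${stats.totalRequests} requests`
);
```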
getStats() returns an object with totalFetchedBytes (cumulative transfer) and totalRequests (cumulative request count). During development I display these numbers on screen to verify that indexes are actually working.
Splitting the database into chunks
For large databases you can split the file into fixed-size chunks, which makes CDN caching much more effective.
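The standard split utility does the job; the chunks just need a predictable numeric suffix (the 10 MiB chunk size is an example):

```shell
# Produces db.sqlite3.000, db.sqlite3.001, ...
split -b 10485760 -d -a 3 db.sqlite3 'db.sqlite3.'
```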
Then switch to serverMode: 'chunked' in the worker config.
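A sketch of the chunked configuration; the field values are examples, and databaseLengthBytes must be the exact byte size of the original file:

```typescript
const chunkedConfig = {
  from: "inline" as const,
  config: {
    serverMode: "chunked" as const,
    urlPrefix: "/db.sqlite3.",       // chunks: db.sqlite3.000, .001, ...
    serverChunkSize: 10485760,       // must match the size used with split
    databaseLengthBytes: 702545920,  // exact size of the original file (example)
    requestChunkSize: 4096,
  },
};
```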
Deploying to GitHub Pages
Put the database and static assets in your repo (or use Git LFS), then push. GitHub’s static file server supports Range Requests out of the box; no special configuration is needed.
S3, Cloudflare Pages, and Netlify all support Range Requests as well, so any of those work directly.
One thing to watch: CORS. If your frontend and database are on different origins, the server hosting the database must send CORS headers that allow your site’s origin and the Range request header.
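For example, a response from the database origin might carry headers like these (the origin value is a placeholder for your own site):

```http
Access-Control-Allow-Origin: https://yourname.github.io
Access-Control-Allow-Headers: Range
Access-Control-Expose-Headers: Content-Range, Content-Length, Accept-Ranges
```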
Limitations
A few things worth knowing before you commit to this approach:
It is read-only. HTTP Range Requests are a read operation; writes require a backend. If you need both reads and writes in the browser, look at the official SQLite Wasm with OPFS or wa-sqlite.
There is no cache eviction. Pages downloaded during a session are cached in Worker memory, but that cache never shrinks. Users who run many different queries will see memory usage grow continuously.
It is experimental. The author describes this as demo-level code in the README, and it should not be used where high stability is required.
If you’re new to sql.js, Getting Started with sql.js: SQLite in the Browser covers the fundamentals. For use cases that need offline writes, Offline Web Apps with sql.js and IndexedDB walks through a complete implementation. If you want a broader comparison of browser storage options, Browser Storage Solutions Compared covers the landscape.
