SQLite Internals

This document describes the internal implementation of @queuert/sqlite — the tables it creates, how it handles concurrency within SQLite’s single-writer model, and where it diverges from the PostgreSQL adapter.

The adapter creates its schema via migrateToLatest(). All table names use a configurable prefix (default: queuert_).

The {tablePrefix}job table mirrors the PostgreSQL schema with SQLite-appropriate types:

Column               Type                           Description
id                   configurable (default: text)   Primary key. Type set via idType, value generated by idGenerator
type_name            TEXT                           Job type identifier
chain_id             same as id                     Foreign key to root job
chain_type_name      TEXT                           Type name of the chain
chain_index          INTEGER                        Position in chain (0 for root)
input                TEXT                           Job input as JSON string
output               TEXT                           Completion output as JSON string
status               TEXT                           Job state, constrained by CHECK
created_at           TEXT                           ISO 8601 timestamp
scheduled_at         TEXT                           ISO 8601 timestamp
completed_at         TEXT                           ISO 8601 timestamp
completed_by         TEXT                           Worker ID
attempt              INTEGER                        Attempt count
last_attempt_at      TEXT                           ISO 8601 timestamp
last_attempt_error   TEXT                           Error as JSON string
leased_by            TEXT                           Worker ID holding the lease
leased_until         TEXT                           ISO 8601 timestamp
deduplication_key    TEXT                           Deduplication key
chain_trace_context  TEXT                           W3C traceparent
trace_context        TEXT                           W3C traceparent

Key differences from PostgreSQL:

  • No enum type: Status uses CHECK (status IN ('blocked', 'pending', 'running', 'completed')) instead of a custom enum
  • TEXT for JSON: SQLite stores JSON as plain TEXT, not JSONB. The adapter uses json_each() for parameterized array operations
  • TEXT for timestamps: ISO 8601 strings with datetime('now', 'subsec') for subsecond precision
  • Application-generated IDs: Since SQLite lacks gen_random_uuid(), IDs are generated by the idGenerator function before insertion

The chain_id column references job(id). SQLite has foreign keys disabled by default — the adapter issues PRAGMA foreign_keys = ON at connection time. This must be done on every connection; it is not persisted.

The checkForeignKeys option (default: true) causes migrateToLatest() to verify that PRAGMA foreign_keys is enabled and throws with a clear error message if not. Disable this check only if foreign key enforcement is managed externally.
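
The check itself amounts to querying the pragma on the live connection. A sketch of the logic (assertForeignKeysEnabled and the queryPragma parameter are hypothetical names; the adapter's actual error message may differ):

```typescript
// Sketch: verify PRAGMA foreign_keys on the current connection before migrating.
type PragmaQuery = (sql: string) => number;

function assertForeignKeysEnabled(queryPragma: PragmaQuery): void {
  // PRAGMA foreign_keys returns 1 when enforcement is on for this connection.
  const enabled = queryPragma("PRAGMA foreign_keys");
  if (enabled !== 1) {
    throw new Error(
      "SQLite foreign keys are disabled. Run `PRAGMA foreign_keys = ON` on every " +
      "connection, or set checkForeignKeys: false if enforcement is managed externally."
    );
  }
}
```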

The {tablePrefix}job_blocker table is structurally identical to PostgreSQL:

Column               Type                 Description
job_id               foreign key to job   The blocked job
blocked_by_chain_id  foreign key to job   Root job ID of the blocker chain
index                INTEGER              Position in blockers array
trace_context        TEXT                 PRODUCER span context

Primary key: (job_id, blocked_by_chain_id).

The migrations table has the same structure as in PostgreSQL, tracking applied migrations by name and timestamp.

The SQLite adapter creates the same set of indexes as PostgreSQL, using partial indexes (WHERE clauses) where supported:

Index                            Definition                                                                                      Purpose
job_acquisition_idx              (type_name, scheduled_at) WHERE status = 'pending'                                              Job acquisition
job_chain_index_idx              UNIQUE (chain_id, chain_index)                                                                  Chain position uniqueness
job_deduplication_idx            (deduplication_key, created_at DESC) WHERE deduplication_key IS NOT NULL AND chain_index = 0    Deduplication lookup
job_expired_lease_idx            (type_name, leased_until) WHERE status = 'running' AND leased_until IS NOT NULL                 Lease reaping
job_blocker_chain_idx            (blocked_by_chain_id) on job_blocker                                                            Blocker resolution
job_chain_listing_idx            (created_at DESC) WHERE chain_index = 0                                                         Chain listing
job_listing_idx                  (created_at DESC)                                                                               Job listing
job_listing_status_idx           (status, created_at DESC)                                                                       Filtered listing
job_listing_type_name_idx        (type_name, created_at DESC)                                                                    Type-filtered listing
job_chain_listing_type_name_idx  (type_name, created_at DESC) WHERE chain_index = 0                                              Type-filtered chain listing

SQLite’s concurrency model differs fundamentally from PostgreSQL. There is no row-level locking — writes are serialized at the database level.

The adapter uses BEGIN IMMEDIATE for all write transactions:

BEGIN IMMEDIATE;
-- operations
COMMIT;

BEGIN IMMEDIATE acquires a RESERVED lock at transaction start, preventing other writers from starting. The default BEGIN DEFERRED only takes the lock on the first write, which can fail with SQLITE_BUSY partway through a transaction; taking the lock up front moves any busy error to the BEGIN statement, before any work has been done.
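
A transaction wrapper following this pattern might look like the following sketch (withImmediateTransaction is a hypothetical name, and the exec parameter stands in for the adapter's raw SQL executor):

```typescript
// Sketch: run a callback inside a BEGIN IMMEDIATE transaction.
type Exec = (sql: string) => void;

async function withImmediateTransaction<T>(
  exec: Exec,
  fn: () => Promise<T>
): Promise<T> {
  exec("BEGIN IMMEDIATE"); // take the write lock up front; SQLITE_BUSY surfaces here
  try {
    const result = await fn();
    exec("COMMIT");
    return result;
  } catch (err) {
    exec("ROLLBACK"); // undo everything on any callback failure
    throw err;
  }
}
```

Because the lock is taken before any work runs, a busy error at BEGIN leaves nothing to undo and can simply be retried.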

Since SQLite serializes writes at the database level, the adapter adds an application-level AsyncLock to prevent concurrent access from async code within the same process:

const lock = createAsyncLock();

executeSql: async ({ txCtx, sql, params }) => {
  if (txCtx) return executeRaw(/* ... */); // Lock already held
  await lock.acquire();
  try {
    return executeRaw(/* ... */);
  } finally {
    lock.release();
  }
};
  • Outside a transaction: Every SQL execution acquires the lock
  • Inside a transaction: The lock was already acquired by withTransaction, so individual operations skip it

Custom SqliteStateProvider implementations must use createAsyncLock() to ensure correct serialization.
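
A minimal promise-chain mutex with the acquire()/release() shape used above could look like this (a sketch for illustration, not the library's actual implementation):

```typescript
// Sketch: an async mutex. Each acquire() waits on the previous holder's
// promise, so callers are granted the lock strictly in arrival order.
function createAsyncLock() {
  let tail: Promise<void> = Promise.resolve(); // what the next acquirer waits on
  let releaseCurrent: () => void = () => {};   // resolves the current holder's slot
  return {
    async acquire(): Promise<void> {
      const prev = tail;
      let resolveNext!: () => void;
      tail = new Promise<void>((resolve) => (resolveNext = resolve));
      await prev;                   // wait for all earlier holders to release
      releaseCurrent = resolveNext; // arm release() for this holder
    },
    release(): void {
      releaseCurrent(); // wake the next waiter, if any
    },
  };
}
```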

SQLite has no equivalent to FOR UPDATE SKIP LOCKED. The acquisition query uses a subquery-based atomic UPDATE instead:

UPDATE job
SET status = 'running', attempt = attempt + 1
WHERE id = (
  SELECT id FROM job
  WHERE type_name IN (SELECT value FROM json_each(?))
    AND status = 'pending'
    AND scheduled_at <= datetime('now', 'subsec')
  ORDER BY scheduled_at ASC
  LIMIT 1
)
RETURNING *, EXISTS(...) AS has_more

This is safe because SQLite’s exclusive locking ensures only one writer executes at a time — there is no concurrent acquisition to contend with. The trade-off is that acquisitions from multiple processes are serialized rather than running in parallel as they do under PostgreSQL’s SKIP LOCKED.
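
Note that json_each(?) takes the whole array as a single TEXT parameter, so the caller serializes the type-name list before binding it (a sketch; the type names here are hypothetical):

```typescript
// The type-name array is bound as one JSON-encoded TEXT parameter,
// which json_each(?) then expands into rows inside the query.
const typeNames = ["send-email", "resize-image"]; // hypothetical job type names
const jsonParam = JSON.stringify(typeNames);
```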

The adapter supports nested operations via SQLite savepoints:

SAVEPOINT queuert_sp;
-- nested operation
RELEASE SAVEPOINT queuert_sp;
-- or on error: ROLLBACK TO SAVEPOINT queuert_sp;

Used for partial rollback within transactions — if a user callback or observability event fails, the savepoint rolls back without aborting the outer transaction.
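
The savepoint flow above can be sketched as a wrapper (withSavepoint is a hypothetical name; exec stands in for raw SQL execution on the open transaction):

```typescript
// Sketch: run a callback inside a savepoint so its failure rolls back
// only the nested work, not the outer transaction.
type Exec = (sql: string) => void;

async function withSavepoint<T>(exec: Exec, fn: () => Promise<T>): Promise<T> {
  exec("SAVEPOINT queuert_sp");
  try {
    const result = await fn();
    exec("RELEASE SAVEPOINT queuert_sp");
    return result;
  } catch (err) {
    exec("ROLLBACK TO SAVEPOINT queuert_sp"); // undo the nested work only
    exec("RELEASE SAVEPOINT queuert_sp");     // then discard the savepoint
    throw err;
  }
}
```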

SQLite does not support data-modifying (writable) CTEs with RETURNING the way PostgreSQL does. Operations that PostgreSQL handles in a single CTE are split into multiple sequential queries within a transaction:

  • addJobBlockers: Separate INSERT for blockers, then UPDATE for job status
  • deleteJobChains: Separate SELECT to find connected chains, DELETE blockers, DELETE jobs
  • unblockJobs: Separate DELETE for resolved blockers, SELECT to check remaining, UPDATE for unblocked jobs

This results in more round-trips per operation, but is safe under SQLite’s exclusive locking model. See Adapter Architecture for the design rationale.

SQLite uses Julian day conversion for time calculations:

MAX(0, CAST(
  (julianday(job.scheduled_at) - julianday(datetime('now', 'subsec'))) * 86400000
  AS INTEGER
)) AS available_in_ms

This calculates milliseconds until a scheduled job is ready, multiplying the Julian day difference by 86,400,000 (milliseconds per day).
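
The same computation expressed in TypeScript terms, for reference (a sketch; the adapter does this in SQL so that "now" comes from the database, and availableInMs is a hypothetical name):

```typescript
// Milliseconds until scheduledAt, floored at zero, mirroring the SQL expression.
function availableInMs(scheduledAt: string, now: Date = new Date()): number {
  return Math.max(0, new Date(scheduledAt).getTime() - now.getTime());
}
```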

SQLite has no built-in notification mechanism like PostgreSQL’s LISTEN/NOTIFY. The adapter uses the in-process notify adapter (createInProcessNotifyAdapter), which provides synchronous event delivery within a single process. This means SQLite deployments are limited to single-process operation for notification delivery.

For multi-process deployments, an external notify adapter (Redis or NATS) can be paired with the SQLite state adapter.

SQLite does not reclaim disk space from deleted rows automatically by default. The adapter exposes a vacuum() method that runs PRAGMA incremental_vacuum to free reclaimable pages without rewriting the entire database:

await stateAdapter.vacuum();

This requires PRAGMA auto_vacuum = INCREMENTAL to be set on the database before any tables are created:

const db = new Database("queue.db");
db.pragma("auto_vacuum = INCREMENTAL");
db.pragma("foreign_keys = ON");

Incremental vacuum frees pages that are already marked as free by prior DELETE operations. It does not rewrite the database or defragment it — it only returns free pages to the OS. This makes it safe to call frequently (e.g., after each cleanup run) without blocking other operations for extended periods.

listJobChains joins each root row with the last job in the chain. The status filter applies to the joined last job and cannot use an index — only typeName and date range filters narrow the scan before the join. Without these filters, every root row is scanned and joined. On deployments with frequent writes, unfiltered scans over large tables can extend write queue wait times because the read lock is held longer.

listJobs uses straightforward indexed scans without a join and is efficient at any scale.