Understanding the architecture that makes EdgeLock reliable and fast.
The fundamental principle behind EdgeLock's simplicity and correctness is that each unique lock name (e.g., 'user-signup-flow') is managed by a single, logical coordinator instance at any given time. When you attempt to acquire a lock, your request is routed to the specific coordinator responsible for that lock name.
This coordinator runs in a single-threaded environment. This design eliminates entire classes of complex race conditions common in traditional distributed locking systems where multiple clients might try to modify the same lock state simultaneously across different machines or threads. With EdgeLock, all operations for a specific lock are serialized through its dedicated coordinator.
This allows developers to reason about lock state more easily, similar to managing state within a single-threaded application, while still benefiting from a globally distributed and resilient infrastructure.
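To make the routing idea concrete, here is a minimal sketch of the one-coordinator-per-lock-name model. The `Router` and `Coordinator` classes are hypothetical illustrations, not EdgeLock's actual implementation: the point is that every request for the same lock name reaches the same coordinator object, so its state is only ever touched from one place.

```javascript
// Hypothetical sketch: one coordinator per lock name.
class Coordinator {
  constructor() {
    // All operations on this lock go through this one object.
    this.held = false;
  }
  acquire() {
    if (this.held) return false; // already held, reject
    this.held = true;
    return true;
  }
  release() {
    this.held = false;
  }
}

class Router {
  constructor() {
    this.coordinators = new Map();
  }
  // Requests for the same lock name always route to the same coordinator.
  coordinatorFor(lockName) {
    let c = this.coordinators.get(lockName);
    if (!c) {
      c = new Coordinator();
      this.coordinators.set(lockName, c);
    }
    return c;
  }
}
```

Because `coordinatorFor('user-signup-flow')` always returns the same instance, two competing acquirers cannot disagree about the lock's state: exactly one `acquire()` succeeds.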
While the coordinator itself is single-threaded, interacting with persistent storage involves asynchronous operations (like checking lock status or updating it). A naive implementation could still lead to race conditions if multiple requests arrive concurrently and their storage operations interleave incorrectly.
Consider this simplified example (illustrative, not actual EdgeLock code):
```javascript
// If two requests run this concurrently *without* proper gating:
async function attemptLock() {
  let currentStatus = await storage.get('lock-status'); // Request 1 gets 'unlocked'
  // Context switch! Request 2 runs...
  // let currentStatus = await storage.get('lock-status'); // Request 2 *also* gets 'unlocked'
  if (currentStatus === 'unlocked') {
    await storage.put('lock-status', 'locked'); // Request 1 sets 'locked'
    // Context switch! Request 2 runs...
    // await storage.put('lock-status', 'locked'); // Request 2 *also* sets 'locked'
    return true; // Both requests think they acquired the lock!
  }
  return false;
}
```
EdgeLock prevents this through internal mechanisms often referred to as "input gates". Essentially, it ensures that asynchronous operations within a single logical request (like acquiring a lock, which might involve reading then writing) are treated atomically with respect to other incoming requests for the *same lock*. A new request for the same lock name won't start processing until the critical storage operations of the previous request are sequenced, preventing the interleaving shown above. This guarantees that operations occur in a well-defined order, making your locking logic correct by default.
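The gating behavior can be sketched with a simple per-lock queue that runs each request's read-then-write sequence to completion before the next one starts. The `InputGate` class and in-memory `makeStorage` helper below are illustrative assumptions, not EdgeLock internals, but they show why serializing the whole sequence fixes the interleaving above.

```javascript
// Hypothetical sketch of an "input gate": a per-lock queue that serializes
// async read-then-write sequences so they cannot interleave.
class InputGate {
  constructor() {
    this.tail = Promise.resolve();
  }
  // Runs fn only after every previously enqueued fn has fully finished.
  run(fn) {
    const result = this.tail.then(fn);
    // Keep the chain alive even if fn rejects.
    this.tail = result.then(() => {}, () => {});
    return result;
  }
}

// Minimal async in-memory storage, standing in for real persistent storage.
function makeStorage() {
  const data = new Map([['lock-status', 'unlocked']]);
  return {
    get: async (key) => data.get(key),
    put: async (key, value) => { data.set(key, value); },
  };
}

async function attemptLock(storage, gate) {
  return gate.run(async () => {
    const status = await storage.get('lock-status');
    if (status === 'unlocked') {
      await storage.put('lock-status', 'locked');
      return true; // only one caller can observe 'unlocked'
    }
    return false;
  });
}
```

With the gate in place, two concurrent `attemptLock` calls yield exactly one `true`: the second caller's `get` cannot run until the first caller's `put` has completed.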
When you successfully acquire a lock and perform an action, you need assurance that the lock state is durable and won't be lost due to transient failures. EdgeLock achieves this by persisting lock state durably across multiple physical machines and potentially geographic locations.
An acquire() call fully resolves only once the system has confirmation that the lock state change (e.g., marking the lock as held with a specific TTL) has been safely written to persistent storage. Similarly, internal mechanisms ("output gates") ensure that external confirmations (such as the response to your API call) are not sent until the underlying state changes are durable. This prevents scenarios where your application believes it acquired a lock, but the state change was lost before being persisted.
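The output-gate idea can be sketched as follows. The `DurableStorage` class and its `flush()` method are hypothetical stand-ins for a real storage layer acknowledging that a write has been replicated; the key point is that `acquire()` does not resolve until that acknowledgment arrives, so the caller can never observe "acquired" for state that was never persisted.

```javascript
// Hypothetical sketch of an "output gate": the acquire() response is held
// back until the storage write is confirmed durable.
class DurableStorage {
  constructor() {
    this.committed = new Map();
    this.pending = [];
  }
  // Writes are buffered; the returned promise resolves only on flush().
  put(key, value) {
    return new Promise((resolve) => {
      this.pending.push(() => {
        this.committed.set(key, value);
        resolve();
      });
    });
  }
  // Simulates the storage layer acknowledging a durable write.
  flush() {
    for (const commit of this.pending) commit();
    this.pending = [];
  }
  readCommitted(key) {
    return this.committed.get(key);
  }
}

async function acquire(storage, name) {
  // The output gate: resolve (i.e., answer the caller) only after the
  // put() promise confirms the state change is durable.
  await storage.put(name, 'locked');
  return 'acquired';
}
```

If the process crashed before `flush()`, the caller would simply never have received a success response, so there is no window where the application holds a lock that storage has forgotten.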
This commitment to durability ensures data integrity even in the face of node failures or network partitions, providing strong consistency guarantees for your critical sections.
While durability requires writes to persistent storage (which inherently involves latency), EdgeLock employs several optimizations to keep locking latency low.
These optimizations, combined with the inherent correctness guarantees of the single-coordinator model, allow EdgeLock to offer locking that is simultaneously easy to use, correct by default, and highly performant.