Stop caching and use your database to save time and latency

Embracing Embedded Replicas over Traditional KV Stores for caching.

As a developer for the last two decades, I’ve navigated the tricky waters of database optimization more times than I can count. Instead of honing my SQL skills, I often found myself reaching for the nearest band-aid solution. Let’s face it, why dive into the depths of query optimization when you can slap on a quick fix and call it a day?

More often than not, I’ve turned to the trusty Key-Value store as my quick fix. It’s like the duct tape of the database world – great for patching up those not-so-stellar queries and indexes. But here’s the kicker: as our applications start resembling a complex web more than a neat stack, our good old duct-tape solutions like KV stores begin to stumble over issues like latency and consistency.

In this post, we’re going to explore the wild world of data interaction and how we can give it a modern makeover with Turso Embedded Replicas to dramatically reduce latency and simplify your codebase for easier development.


Key-Value (KV) stores match unique keys with corresponding values, which can vary from basic data points to complex entities. Renowned for their speed and scalability, KV stores are great for tasks like caching API responses for web and mobile apps.

Despite their efficiency in simple scenarios, KV stores often struggle with intricate data interactions, and using them as a cache brings little benefit when each query is unique or not already in the cache. Users who fetch reports with different filters each day are unlikely to benefit from KV caching, unless they enjoy reading the same report every day.
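To make this concrete, here is a minimal cache-aside sketch in TypeScript. A `Map` stands in for a real KV store like Redis, and `fetchReport` is a hypothetical expensive query, both assumptions for illustration. Because the cache key encodes the full filter set, every new filter combination is a miss:

```typescript
// A Map stands in for a KV store such as Redis (hypothetical setup).
const kv = new Map<string, string>();

let misses = 0;

// Stand-in for an expensive report query against the database.
function fetchReport(filters: Record<string, string>): string {
  return `report for ${JSON.stringify(filters)}`;
}

// Cache-aside lookup: the key is derived from the full filter set,
// so any filter combination not seen before goes straight to the database.
function cachedReport(filters: Record<string, string>): string {
  const key = JSON.stringify(filters);
  const hit = kv.get(key);
  if (hit !== undefined) return hit;
  misses++;
  const value = fetchReport(filters);
  kv.set(key, value);
  return value;
}
```

With varied filters the hit rate collapses: two identical requests produce one miss, but every distinct filter set adds another.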

Tools like Redis have expanded the traditional boundaries of KV stores, incorporating features that inch closer to relational databases. Yet, these advancements don't fully bridge the gap in handling complex data relationships and queries.

This shortfall in KV stores, particularly in multifaceted data scenarios, prompts a significant consideration: the potential of using databases themselves as caches, especially in environments where data complexity and relational integrity are paramount.


The spectrum of databases, ranging from structured relational systems to adaptable NoSQL varieties, plays a crucial role in data management and the effectiveness of caching strategies.

This is increasingly important as Large Language Models (LLMs) benefit greatly from efficient caching mechanisms to enhance performance and reduce costs.

In web development, the separation between frontend and database always results in some added latency, which makes KV stores, despite their limitations, an attractive option for quick data access instead of the database.

However, the limited query capabilities of KV stores lead to a growing interest in using databases directly as caches, especially for complex data. Edge databases, reducing latency by bringing data closer to users, are at the forefront of this evolution, enhancing access speed and challenging traditional database scaling, which is particularly beneficial for data-intensive technologies like LLMs.

On another note, over the past decade, Edge Computing has emerged as the standard for delivering content and executing functions close to users. Platforms like AWS Lambda, Cloudflare Workers, and recently Deno Deploy have been instrumental in this shift. However, integrating databases into this edge infrastructure presents significant challenges, from complex connection pooling to adapting transmission protocols.

When it comes to 'edge databases,' there's a common misconception worth addressing. Just because a database can be accessed in an edge runtime environment doesn't automatically make it an 'edge database' in the truest sense. This distinction is crucial.

Often, what's happening is that there's an edge proxy acting as a middleman between your application and the actual database, which could be located far from the edge. This setup doesn't really bring the database closer to the edge; instead, it adds another hop in the data path. Ironically, this can introduce more latency, the very thing we're trying to avoid, especially in caching scenarios.

This means when we talk about 'edge compatibility,' we're not necessarily discussing databases physically located at the edge, but rather those that are accessible through edge computing infrastructure, albeit with potential trade-offs in performance.

Embedded Replicas

In the quest for ever-faster web applications, developers have traditionally turned to Key-Value (KV) stores as caching layers to speed up data retrieval. However, with the advent of embedded replicas, the need for separate KV stores is being reevaluated.

What if, instead of reaching out to a remote cache or database, your application could directly interact with a database located on the same server?

Embedded replicas present a groundbreaking approach, placing a local, read-only replica of your database on the same server as your application. This proximity eliminates the network latency typically encountered with KV stores and edge databases, ensuring rapid reads.

Furthermore, embedded replicas adeptly tackle a significant modern web challenge: consistent functionality amidst fluctuating connectivity. In situations where applications might be offline or distant from an edge node, Turso's embedded replicas maintain swift data access, ensuring a smooth user experience regardless of connectivity constraints.

The adoption of embedded replicas signifies a major leap in achieving speed and reliability in web and mobile applications, fundamentally altering the landscape of web performance and dependability.

Replication has traditionally been hard and expensive, but Turso makes it easy and cheap.


In conclusion, KV stores and traditional databases had a good run, but they drop the ball when it comes to modern web architecture. Edge databases gave it a shot, trying to fill those gaps, but let’s face it, they’re still doing the tango with latency and sync issues.

Enter Embedded Replicas – the cool new kid on the block. They cosy up right next to your application server with their read-only database replica, slashing latency like a ninja and making read operations quicker than ever. It’s like giving your app a super-speed boost without the energy drink crash.

Provide your libSQL client with the Turso syncUrl:

import { createClient } from "@libsql/client";

const client = createClient({
   url: "file:local.db",
   syncUrl: process.env.DB,
   authToken: process.env.TOKEN,
});

When you’re ready to bring in new data, you can call the sync() operation in the background:

await client.sync()
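Putting this together, reads are then served from the local file with no network round trip, while sync keeps it fresh. The sketch below assumes a hypothetical `reports` table, and recent versions of `@libsql/client` also accept a `syncInterval` option (seconds between automatic syncs) so you don't have to call `sync()` yourself, though it's worth verifying against your client version's documentation:

```typescript
import { createClient } from "@libsql/client";

const client = createClient({
   url: "file:local.db",      // local replica file: reads never leave the machine
   syncUrl: process.env.DB,   // remote Turso primary to pull changes from
   authToken: process.env.TOKEN,
   syncInterval: 60,          // assumption: auto-sync every 60 seconds; check your client version
});

// Served from local.db with no network hop; "reports" is illustrative.
const result = await client.execute("SELECT id, title FROM reports LIMIT 10");
console.log(result.rows);
```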

So there you have it – not only do embedded replicas keep your data in tip-top shape, they also jazz up the user experience in ways old-school methods can only dream of. Piotr Sarna wrote an excellent post exploring how you can get microsecond read latency on AWS Lambda with local databases that is certainly worth reading if you haven’t already.

Are you ready to speed up your application development and response times with embedded replicas? Sign up today.