Why Litebase?
TL;DR
I’m building Litebase to give people what they want: the simplicity, speed, and reliability of SQLite with the high availability, durability, and scalability it was never designed for. SQLite is astonishingly capable, but everyone knows where it hits the ceiling. Litebase exists to lift that ceiling.
If you ask most developers what relational database they should start with, I bet you’ll hear one of three foundational answers: PostgreSQL, MySQL, or SQLite. In my opinion, these three are excellent choices for general-purpose transactional data processing in modern web applications. In this post, though, I’m going to attempt to convince you to take a closer look at SQLite. If you’re happily running PostgreSQL or MySQL, please keep doing what you’re doing; they’re great. This post isn’t meant to convince you that SQLite is objectively the best option, either. It certainly has its own shortcomings, but I believe it can hang with the best of them.
There’s something genuinely special about SQLite. Developers who use it tend to speak about it with a different kind of respect. They aren’t loud, they aren’t arguing on social media, and they aren’t hyping it as the next big trend. Instead, they’re quietly building, quietly shipping, and quietly collecting checks. They understand the profound power of having a database that is, quite literally, just a file. SQLite is beloved because it’s simple, fast, and incredibly reliable. It’s the rare technology that consistently over-delivers while demanding almost nothing in return.
And that sentiment isn’t just anecdotal; it shows up everywhere, especially on Hacker News, where developers continually share the sentiment that SQLite punches way above its weight. It might generate fewer stories than PostgreSQL, but those stories attract disproportionately more comments, upvotes, and genuine enthusiasm. Developers routinely praise it as “astonishing,” “shockingly fast,” and “ridiculously reliable.” In fact, many developers treat it as the default database for side projects, internal tools, single-node SaaS, desktop apps, mobile apps, games, and embedded systems. Which makes sense, since SQLite is the most deployed database in the world.
But SQLite has its limits, which are well-documented and a big part of the reason I’m working on Litebase.
The limits of SQLite
Getting started with SQLite is easy. But easy doesn’t always mean right, especially as you move into production and begin to scale. When you’re getting an application off the ground, it’s crucial you understand how the technology works and know whether it will properly carry you forward into the future.
Let’s discuss some of the most common limitations of SQLite that tend to be pain points for developers as their applications scale:
- The single-writer limitation: SQLite uses coarse-grained locking, allowing only one writer at a time.
- Difficult schema changes: SQLite has limited support for altering table schemas, often requiring complex workarounds.
- Not built to scale beyond a single machine: SQLite was designed to run as an embedded database, so it is not built with distributed systems, high availability, failover, or multi-node coordination in mind.
- Not suitable for very large datasets: SQLite has practical limits on database size, which can be a constraint for applications with massive amounts of data.
These aren’t controversial points, and even the strongest SQLite enthusiasts acknowledge them. But given these limitations, you may already come to the conclusion that SQLite is not the right tool for the job when building web applications. Let’s take a closer look at these to get a better understanding.
Single writer
SQLite’s single-writer limitation is one of its most well-known constraints. In order to make changes to the database file, SQLite must exclusively lock the file while writing. While this limitation may seem insurmountable, in practice it’s far more manageable than people expect. The first thing to keep in mind is how long your transactions remain open. If you start a transaction and then perform a slow operation like an HTTP request or heavy computation, you’re holding the write lock the entire time.
```sql
BEGIN TRANSACTION;
-- Slow operation here (e.g., HTTP request, heavy computation)
UPDATE users SET last_login = CURRENT_TIMESTAMP WHERE id = 1;
COMMIT;
```

This is where most real-world contention comes from. Keeping transactions small, focused, and short-lived dramatically improves concurrency. Instead, try to perform as much of the work outside of the transaction as possible:
```sql
-- Perform slow operation here (e.g., HTTP request, heavy computation)
BEGIN TRANSACTION;
UPDATE users SET last_login = CURRENT_TIMESTAMP WHERE id = 1;
COMMIT;
```

The next thing to try, and one of the most effective techniques for increasing write throughput, is to batch writes. Instead of performing dozens of tiny write transactions throughout the request cycle, group them together and commit them in one fast, atomic operation:
```sql
BEGIN TRANSACTION;
UPDATE users SET last_login = CURRENT_TIMESTAMP WHERE id = 1;
INSERT INTO audit_log (user_id, action, timestamp) VALUES (1, 'login', CURRENT_TIMESTAMP);
-- More write operations here
COMMIT;
```

This can also be done when inserting multiple rows, using a single transaction and a single INSERT statement with multiple value sets:
```sql
BEGIN TRANSACTION;
INSERT INTO orders (user_id, product_id, quantity)
VALUES
  (1, 101, 2),
  (1, 102, 1),
  (1, 103, 5);
COMMIT;
```

This reduces lock churn and keeps the database available for readers. SQLite’s WAL (Write-Ahead Logging) mode helps here as well: WAL allows readers to continue accessing the database while a writer is active, giving you far better concurrency than the default rollback journal mode.
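As a rough sketch of both techniques using Python’s built-in sqlite3 module (the file path and table are invented for illustration), enabling WAL and batching writes looks like this:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")  # illustrative file path
conn = sqlite3.connect(path)

# Switch to WAL so readers can keep reading while a writer is active.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # 'wal'

conn.execute(
    "CREATE TABLE orders (user_id INTEGER, product_id INTEGER, quantity INTEGER)"
)

# Batch many small writes into one short transaction instead of
# committing each row individually.
rows = [(1, 101, 2), (1, 102, 1), (1, 103, 5)]
with conn:  # opens a transaction, commits on success
    conn.executemany(
        "INSERT INTO orders (user_id, product_id, quantity) VALUES (?, ?, ?)",
        rows,
    )

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 3
```

The `with conn:` block keeps the write lock held only for the duration of the batched insert, which is exactly the short-lived transaction shape discussed above.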
Finally, I think it is smart to consider using multiple databases for different domains or tenants of your application. I say this with the caveat that managing more than one database will always be more complex than managing a single database, so be sure to weigh the trade-offs carefully. But if your application has distinct areas that don’t need to share data frequently, splitting them into separate databases can reduce contention and improve performance.
For example, you might have one database for your core application data, another that receives heavy write traffic such as logs or analytics, and others for things like caching or queues, if you are using SQLite for those purposes. This way, write-heavy operations don’t block reads on your main application data, and you can scale each database independently based on its specific needs.
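A minimal sketch of this layout, using Python’s sqlite3 module (the file names and schemas are invented for illustration). SQLite’s ATTACH DATABASE lets you query across the files on the occasions you do need to join them:

```python
import os
import sqlite3
import tempfile

base = tempfile.mkdtemp()
app_path = os.path.join(base, "app.db")            # core application data
analytics_path = os.path.join(base, "analytics.db")  # write-heavy events

# Core application data lives in its own file...
app = sqlite3.connect(app_path)
app.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
app.execute("INSERT INTO users (name) VALUES ('ada')")
app.commit()

# ...while write-heavy analytics go to a separate file, so its
# write lock never contends with the core database.
analytics = sqlite3.connect(analytics_path)
analytics.execute("CREATE TABLE events (user_id INTEGER, action TEXT)")
analytics.execute("INSERT INTO events VALUES (1, 'login')")
analytics.commit()

# When a cross-domain query is unavoidable, ATTACH joins the files
# within a single connection.
app.execute("ATTACH DATABASE ? AS analytics", (analytics_path,))
row = app.execute(
    "SELECT u.name, e.action FROM users u "
    "JOIN analytics.events e ON e.user_id = u.id"
).fetchone()
print(row)  # ('ada', 'login')
```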
Difficult schema changes
SQLite supports only a limited set of ALTER TABLE operations, which can make schema migrations challenging. For example, you can’t change a column’s data type, and DROP COLUMN was only added in SQLite 3.35.0, with restrictions.

```sql
ALTER TABLE users DROP COLUMN age;              -- ERROR before SQLite 3.35.0
ALTER TABLE users ALTER COLUMN name TYPE TEXT;  -- ERROR: not supported
```

Instead, you often have to create a new table with the desired schema, copy the data over, drop the old table, and rename the new one. This process can be cumbersome and error-prone, especially for large tables.
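Here is a sketch of that workaround, the “create, copy, drop, rename” dance, using Python’s sqlite3 module with an invented users table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
conn.execute("INSERT INTO users (name, age) VALUES ('ada', 36)")

# The classic workaround for "drop the age column":
# build the new table, copy the rows, then swap the names.
with conn:  # one transaction so the swap is all-or-nothing
    conn.execute("CREATE TABLE users_new (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users_new (id, name) SELECT id, name FROM users")
    conn.execute("DROP TABLE users")
    conn.execute("ALTER TABLE users_new RENAME TO users")

cols = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
print(cols)  # ['id', 'name']
```

Note this sketch omits the extra steps a real migration needs: recreating indexes, triggers, and views on the new table, and handling foreign keys that reference it.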
Single machine
As an embedded database, SQLite works quite well on a single machine. But if your application begins to take off and you add more application servers to support the increased load, you’ll hit the central challenge of this situation: how to share the SQLite database file across multiple servers. In practice, you can’t. SQLite is not designed to be accessed concurrently from multiple machines, and even if the servers share a network file system, attempting to do so can lead to database corruption and data loss.
Maximum database size
Depending on what you think of as a “very large” dataset, you may be quite surprised that SQLite can handle databases up to 281 terabytes in size. That figure comes from the maximum page count of 4,294,967,294 pages multiplied by the maximum page size of 65,536 bytes. With the default page size of 4,096 bytes, the same page count caps the database at approximately 16 tebibytes (about 17.6 TB). So while you can technically have a database up to 281 terabytes, I think the important trade-off to consider here is how page size and database performance interact. The larger the page size, the more data SQLite has to read and write for each operation, which can lead to inefficiencies, especially for workloads with many small transactions.
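The arithmetic behind those two figures is easy to sanity-check:

```python
# Back-of-the-envelope check of SQLite's documented size limits.
MAX_PAGE_COUNT = 4_294_967_294  # maximum number of pages per database

default_bytes = MAX_PAGE_COUNT * 4_096   # default 4 KiB page size
max_bytes = MAX_PAGE_COUNT * 65_536      # maximum 64 KiB page size

default_tb = default_bytes / 10**12
max_tb = max_bytes / 10**12
print(f"~{default_tb:.1f} TB with 4 KiB pages")   # ~17.6 TB (about 16 TiB)
print(f"~{max_tb:.1f} TB with 64 KiB pages")      # ~281.5 TB
```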
That being said, sticking with the default of 4096 bytes per page and 16 terabytes is more than sufficient for many applications. The only way around this limitation is to shard your data across multiple databases, which adds significant complexity to your application logic. You could shard by user, region, date, or any other logical partitioning scheme that makes sense for your data access patterns. But again, this adds complexity and overhead to your application.
Where Litebase comes in
So where does Litebase fit into all of this? Is Litebase meant to solve all of these problems? Not exactly. Litebase is being built to extend SQLite’s capabilities as a distributed database in a way that addresses many of these limitations, and as things progress I think we’ll be able to tackle even more of them.
Distributed system
First off, Litebase runs as a distributed system that allows one or more servers to provide remote access to SQLite databases over HTTP. Each cluster has a primary node that handles writes and one or more replicas that handle reads. This architecture allows Litebase to scale read operations horizontally by adding more replicas, while the primary node manages write operations, thus addressing the single-writer limitation of SQLite.
Schema changes
I haven’t yet solved the problem of difficult schema changes, but it’s on my radar. I’m exploring ways to implement online schema migrations that can be performed without significant downtime or complex workarounds. This is still an area of active research and development.
Sharing the database file
What about sharing the database file across multiple machines? This is actually one of the areas where this experiment started. The maintainers of SQLite explicitly advise against using SQLite over a network file system due to the risk of database corruption. When SQLite runs in WAL mode, it uses a memory-mapped file to coordinate access to the database file. Memory-mapped files do not work over the network, so typically this would be a non-starter. However, after wrestling with this problem for a while, I realized that if I could control how the database file is stored and accessed, I could work around this limitation, manage the shared memory in heap, and safely propagate update notifications from the primary to replicas over the network. This is where the idea of a distributed file storage layer came into being.
Litebase uses a distributed file storage layer that tiers data across different storage classes, from fast local disk, to network storage, and durable object storage. This allows Litebase to store and manage SQLite database files in a way that is safe and efficient, even when accessed from multiple nodes in a cluster.
Scaling writes
Write scaling is a hard problem, and while Litebase introduces some optimizations to improve write throughput like in-memory locking and write queues, it doesn’t fundamentally change SQLite’s single-writer architecture. I’m currently exploring using BEGIN CONCURRENT transactions to allow multiple writers to operate concurrently, but this is still an experimental feature. In the future, I also hope to introduce features like:
- Implicit batching of write transactions on the server.
- Support for async write transactions through something like BEGIN ASYNC, which would allow the server to acknowledge writes before they are fully committed to durable storage and commit them at the most optimal time.
- Some sort of sharding mechanism that can distribute writes across databases.
In terms of storage limits, Litebase’s distributed file storage layer currently splits database files into smaller chunks that are stored independently in object storage. Later down the line, I plan to introduce sharding capabilities that can distribute data across multiple database files to form a single database, allowing applications to scale beyond SQLite’s inherent size limitations. This is a much loftier goal, and will require significant research and development to get right, but I’m excited about the possibilities.
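As a toy illustration of the chunking idea (this is not Litebase’s actual storage format; the chunk size and helper functions are invented), splitting a file’s bytes into fixed-size pieces that could be stored as independent objects looks like this:

```python
CHUNK_SIZE = 4 * 1024  # hypothetical chunk size in bytes


def split_into_chunks(data: bytes, chunk_size: int = CHUNK_SIZE) -> list[bytes]:
    """Split a byte string into fixed-size chunks (the last may be shorter)."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]


def reassemble(chunks: list[bytes]) -> bytes:
    """Concatenate chunks back into the original byte string."""
    return b"".join(chunks)


blob = bytes(10_000)  # stand-in for a database file's contents
chunks = split_into_chunks(blob)
print(len(chunks))  # 3 chunks: 4096 + 4096 + 1808 bytes
assert reassemble(chunks) == blob
```

Each chunk could then be written and fetched independently, which is what makes tiering across local disk, network storage, and object storage practical.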
Where things stand today
I’ve been diligently working on Litebase, adding new features, and fixing bugs. I plan on announcing the first public alpha of Litebase very soon, but if you are interested you can go ahead and access the code on GitHub. Documentation, which I view as the most important piece, is still a work in progress, but will be available soon. If you’re interested in being among the first to try it out, please join the mailing list to get notified as soon as it’s available.
If you are interested in supporting the project, please consider starring the GitHub repository. You can also sponsor the project on GitHub Sponsors.
Conclusion
So, why build Litebase? Why not just use PostgreSQL or MySQL? All of this has already been solved, right?
Short answer: Because I want to.
Longer answer:
- Because people love SQLite, but they outgrow it.
- Because developers want simplicity, but the cloud made things complex.
- Because object storage is the new hard drive.
- Because stateless compute is the new normal.
- Because serverless made us rethink everything, except databases.
- Because a file is still the perfect abstraction for data.
What do you think? Would you consider using Litebase for your next project? I’d love to hear your thoughts, feedback, and questions.