I was thinking of making it possible for SQLite to be used with static pages.
My idea is to modify SQLite to use AJAX with the HTTP Range header to fetch B-tree pages from the server as they are needed. SQLite already has a VFS (virtual file system) layer, so this shouldn't be too hard.
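A minimal sketch of what such a VFS read hook could do. The function names are illustrative, and the page size is assumed to be known up front (SQLite stores the real page size in the database file header); SQLite numbers pages starting at 1, so page N occupies bytes `(N-1)*pageSize` through `N*pageSize - 1`:

```javascript
// Compute the HTTP Range header for one SQLite page.
function pageRangeHeader(pageNumber, pageSize) {
  const start = (pageNumber - 1) * pageSize;
  const end = start + pageSize - 1; // Range is inclusive on both ends
  return `bytes=${start}-${end}`;
}

// A VFS-style on-demand page read (hypothetical helper, not SQLite's API):
async function fetchPage(url, pageNumber, pageSize) {
  const response = await fetch(url, {
    headers: { Range: pageRangeHeader(pageNumber, pageSize) },
  });
  // 206 Partial Content means the server actually honored the Range header;
  // a 200 would mean it sent the whole file, defeating the purpose.
  if (response.status !== 206) throw new Error('server ignored Range header');
  return new Uint8Array(await response.arrayBuffer());
}
```

This only works against servers that support Range requests, which static hosts generally do.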
I am not sure how fast it would be, and it might waste a lot of bandwidth. That's why I haven't built it yet.
This would mainly be useful with static hosts like GitHub Pages.
Hm, interesting. I made this, if you want to take a look: https://github.com/TomasHubelbauer/sqlite-javascript. You can test it using the Demo link in the readme and then use the prefilled value in the Load from URL prompt. Based on this, I find your idea very doable. I didn't do it, but while working on this I had async fetching of pages using Range requests in mind. Shame I didn't get to it while I was still on this project; it could have been easy to put together a demo. Similarly, I made https://github.com/TomasHubelbauer/fatcow-icons, which parses a big ZIP file piecewise on demand. As you scroll, new parts of the archive are fetched and icons extracted, so you don't need to download the whole thing.
I have SQLite 'on top' of a Chrome cache filesystem. My use case is to torrent the database, as I also have support for files. The caveat is that I primarily use it as a key-value store (the B-tree storage part of SQLite).
These are used to share structured and unstructured data in a p2p way, where a torrent can be either a database (structured) or a fileset (unstructured).
All of this is part of a bigger project that is basically a new sort of p2p "app browser" with a local-first emphasis, where every app can share its functionality through RPC (locally or remotely), with distribution going over DHTs and torrents.
Yes. Maybe you could increase the size of the B-tree pages?
The only reason databases use B-trees rather than red-black or AVL trees is the overhead of reading data from the hard drive: a wide node packs many keys into one read, so each lookup needs far fewer reads.
This would be an interesting hack.
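To make the disk-read argument concrete, here is a rough depth comparison. The fan-out of 100 is an assumed figure; the real fan-out depends on page size and key size:

```javascript
// With N keys, a binary tree does about log2(N) node reads per lookup,
// while a B-tree with fan-out F does about logF(N). Each node read is one
// disk seek (or, in the proposed scheme, one HTTP round trip).
const N = 1_000_000;
const binaryDepth = Math.ceil(Math.log2(N));                // about 20 reads
const btreeDepth = Math.round(Math.log(N) / Math.log(100)); // about 3 reads
```

Over a network, where each round trip costs milliseconds rather than microseconds, the difference between 3 and 20 sequential fetches per lookup matters even more than it does on disk.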
You can increase the page size [1]. That will increase the size of the B-tree nodes [2] (and of other pages too, but that's probably what you want):
> The upper bound on [the number of keys on an interior b-tree page] is as many keys as will fit on the page.
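For reference, the page size is set with a pragma; it must be a power of two between 512 and 65536 bytes, and changing it on an existing database requires rebuilding the file:

```sql
-- Takes effect on a new database, or on an existing one after VACUUM
-- rebuilds the file at the new page size. 65536 is SQLite's maximum.
PRAGMA page_size = 65536;
VACUUM;
```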
I think a tricky part of this idea would be locking. Usually SQLite relies on the locking of the underlying filesystem. You could add your own mechanism that assigns a lock to a single client connection, but what if it never unlocks? (On a single machine you can tell whether the client process has crashed.) You could add a timeout, but what if the client process then responds after all?
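One common way around that tension, sketched below with illustrative names, is a lease rather than a permanent lock: the grant carries an expiry time, so a crashed client simply lets it lapse, and a slow-but-alive client must re-acquire before it may write again:

```javascript
// Minimal lease-lock sketch. `now` is passed in explicitly so the expiry
// logic is easy to reason about (a real server would use its own clock).
class LeaseLock {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.holder = null;  // current holder's client id, or null if free
    this.expiresAt = 0;  // timestamp at which the lease lapses
  }
  acquire(clientId, now) {
    // Grant if the lock is free, expired, or already held by this client.
    if (this.holder === null || now >= this.expiresAt || this.holder === clientId) {
      this.holder = clientId;
      this.expiresAt = now + this.ttlMs;
      return true;
    }
    return false;
  }
  release(clientId) {
    // Only the current holder may release; a stale release is ignored.
    if (this.holder === clientId) this.holder = null;
  }
}
```

The remaining subtlety is exactly the one raised above: a client whose lease expired mid-write must detect that and abort rather than commit, which is why real systems pair the lease with a fencing token checked on every write.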
Even with an artificially slow connection it seems to be responsive enough. Fetching pages on demand can only be better for bandwidth; the real issue is going to be latency.
I haven't looked at the code but this sounds cool.
There is also something called Kademlia, a distributed hash table protocol used by BitTorrent to keep track of peers for torrents.
https://en.wikipedia.org/wiki/Kademlia
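The core trick in Kademlia is its XOR distance metric: the distance between two IDs is their bitwise XOR read as an integer, and a lookup repeatedly hops to known nodes closer to the target key. A toy illustration with small integer IDs (real implementations use 160-bit IDs):

```javascript
// Kademlia distance: bitwise XOR of two node/key IDs, read as an integer.
// d(a, a) = 0 and d(a, b) = d(b, a), which is what lets every node agree
// on which peers are "closest" to a key without any coordination.
const distance = (a, b) => a ^ b;

// Given a target key, pick the known node closest to it.
const closest = (nodes, key) =>
  nodes.reduce((best, n) => (distance(n, key) < distance(best, key) ? n : best));
```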