completely clear the cache
dump the current cache to a string to preserve between sessions
a serialized version of the cache to pass to a new api instance
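The dump/restore cycle can be illustrated with a small sketch. The real serialization format of dumpCache is opaque and may differ; here the cache is modeled as a Map of hash to cached text, round-tripped through JSON the way you might store it in a file or localStorage between sessions.

```typescript
// Hypothetical model of the cache: hash -> cached text.
const cache = new Map<string, string>([["a".repeat(64), "cached text"]]);

// "dump": serialize the cache to a string to persist it somewhere.
const dumped = JSON.stringify([...cache.entries()]);

// later, in a new session: "restore" the cache from the saved string
// and pass it to a new api instance.
const restored = new Map<string, string>(JSON.parse(dumped));
```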
get the entries associated with a list hash
A list hash is the root hash, or any hash with the type 80000000. NOTE: these are hashed differently from files.
the hash to get entries for
the entries
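Since the root hash lists items and each item's hash lists its files, you can walk the whole tree with repeated getEntries calls. A hypothetical sketch, with the api call replaced by an in-memory stand-in (the Entry shape and all names here are assumptions, not the library's actual types):

```typescript
// Assumed entry shape for illustration only.
interface Entry {
  hash: string;
  documentId: string;
}

// Stand-in for the real getEntries api call, backed by a fixed map.
const lists = new Map<string, Entry[]>([
  ["root-hash", [{ hash: "item-hash", documentId: "doc-id" }]],
  ["item-hash", [{ hash: "file-hash", documentId: "doc-id.metadata" }]],
]);

function getEntries(hash: string): Entry[] {
  return lists.get(hash) ?? [];
}

const items = getEntries("root-hash"); // one entry per item
const files = getEntries(items[0].hash); // the files inside the first item
```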
get the raw binary data associated with a hash
the hash to get the data for
the data
gets the root hash and the current generation
When calling putRootHash, you should pass the generation you got from this call. That way you tell reMarkable you're updating the previous state.
the root hash and the current generation
get raw text data associated with a hash
We assume text data are small, and so cache the entire text. If you want to avoid this, use getHash combined with a TextDecoder.
the hash to get text for
the text
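The TextDecoder alternative mentioned above is standard: fetch the raw bytes yourself and decode them without any caching. A minimal sketch (the bytes here are produced locally with TextEncoder just to make the example self-contained):

```typescript
// Stand-in for raw bytes you would get back from a hash lookup.
const bytes = new TextEncoder().encode("hello reMarkable");

// Decode the raw bytes as UTF-8 without going through the text cache.
const text = new TextDecoder().decode(bytes);
```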
the same as putText, but with extra validation for Content
put a set of entries to make an entry list file
To fully upload an item:
the id of the list to upload - this should be the item id if uploading an item list, or "root" if uploading a new root list.
the entries to upload
the new list entry and a promise to finish the upload
put a raw file onto the server
This returns the new expected entry of the file you uploaded, and a promise that resolves when the upload finishes successfully. By splitting these two operations, you can start using the uploaded entry while the file finishes uploading.
NOTE: This won't update the state of the reMarkable until this entry is incorporated into the root hash.
the id of the file to upload
the bytes to upload
the new entry and a promise to finish the upload
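The split between "entry now, upload later" can be sketched as follows. fakePutFile and the Entry shape are stand-ins invented for illustration, not the library's real signatures; the point is the shape of the return value: a usable entry plus a promise that settles when the upload completes.

```typescript
// Assumed entry shape for illustration only.
interface Entry {
  id: string;
  hash: string;
}

// Stand-in for putFile: returns the expected entry immediately
// plus a promise that settles once the (simulated) upload finishes.
function fakePutFile(id: string, bytes: Uint8Array): [Entry, Promise<void>] {
  const entry: Entry = { id, hash: "fake-hash-of-bytes" };
  const finished = new Promise<void>((resolve) => setTimeout(resolve, 0));
  return [entry, finished];
}

const [entry, finished] = fakePutFile("doc-id", new Uint8Array([1, 2, 3]));
// `entry` is usable right away, e.g. to build a new entry list…
finished.then(() => {
  // …but only fold the change into the root hash once the upload is done.
});
```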
the same as putText, but with extra validation for Metadata
update the current root hash
This will fail if generation doesn't match the current server generation. This ensures that you are updating what you expect. If you get a GenerationError, that indicates that the server was updated after you last got the generation. You should call getRootHash and then recompute the changes you want from the new root hash. If you ignore the updated root hash and just call putRootHash again, you will overwrite the changes made by the other update.
the new root hash
the generation of the current root hash
Optional broadcast: boolean - an option in the request
the new root hash and the new generation
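This generation check is a form of optimistic concurrency control, which can be sketched against a tiny in-memory stand-in for the server (FakeServer is invented for illustration; the real api talks to the reMarkable cloud):

```typescript
class GenerationError extends Error {}

// In-memory stand-in for the server's root-hash state.
class FakeServer {
  rootHash = "old-hash";
  generation = 1;

  getRootHash(): [string, number] {
    return [this.rootHash, this.generation];
  }

  // Rejects the write unless the caller's generation matches the server's.
  putRootHash(hash: string, generation: number): number {
    if (generation !== this.generation) throw new GenerationError();
    this.rootHash = hash;
    return ++this.generation;
  }
}

const server = new FakeServer();
const [, gen] = server.getRootHash();
const newGen = server.putRootHash("new-hash", gen); // matches, so it succeeds

// A write with a stale generation is rejected instead of
// silently overwriting the other update.
let stale = false;
try {
  server.putRootHash("other-hash", gen);
} catch (e) {
  stale = e instanceof GenerationError;
}
```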
the same as putFile, but with caching for text
access to the low-level reMarkable api
This class gives more granular access to the reMarkable cloud, but is more dangerous.
Overview
reMarkable uses an immutable file system, where each file is referenced by the 32-byte sha256 hash of its contents. Each file also has an id used to keep track of updates, so to "update" a file, you upload a new file and change the hash associated with its id.
Each "item" (a document or a collection) is actually a list of files. The whole reMarkable state is then a list of these lists. Finally, the hash of that list is called the rootHash. To update anything, you have to update the root hash to point to a new list of updated items.
This can be dangerous, as corrupting the root hash can destroy all of your files. It is therefore highly recommended to save your current root hash (getRootHash) before using this api to attempt file writes, so you can recover a previous "snapshot" should anything go wrong.

Items
Each item is a collection of individual files. Using getEntries on the root hash will give you a list of entries that correspond to items. Using getEntries on any of those items will get you the files that make up that item.

The documented files are:

- <docid>.pdf - a raw pdf document
- <docid>.epub - a raw epub document
- <docid>.content - a json file roughly describing document properties (see DocumentContent)
- <docid>.metadata - metadata about the document (see Metadata)
- <docid>.pagedata - a text file where each line is the template of that page
- <docid>/<pageid>.rm - [speculative] raw reMarkable vectors, text, etc.
- <docid>/<pageid>-metadata.json - [speculative] metadata about the individual page
- <docid>.highlights/<pageid>.json - [speculative] highlights on the page

Some items will have both a .pdf and an .epub file, likely due to preparing for export. Collections only have .content and .metadata files, with .content only containing tags.

Caching
Since everything is tied to the hash of its contents, we can aggressively cache results. We assume that text contents are "small" and so fully cache them, whereas we treat binary files as large and only record that they exist, to avoid redundant future writes.
By default, this only persists as long as the api instance is alive. However, for performance reasons, you should call dumpCache to persist the cache between sessions.

Remarks
Generally all hashes are 64-character hex strings, and all ids are uuid4.
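For file contents, that 64-character string is the lowercase hex rendering of a sha256 digest, as described in the Overview (list hashes are computed differently, per getEntries). A minimal sketch using Node's built-in crypto module:

```typescript
import { createHash } from "node:crypto";

// A file is referenced by the sha256 hash of its contents,
// rendered as a 64-character hex string.
const contents = new TextEncoder().encode("file contents");
const hash = createHash("sha256").update(contents).digest("hex");
```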