I use Fastmail for my personal email, and I like to
keep a backup of my email on my personal computer. Why make a backup? When I am done reading or
replying to an email, I make a split-second decision on whether to delete or
archive it on Fastmail’s server. If it turns out I deleted something that I need later, I can always
look in my backup. The backup also predates my use of Fastmail and serves as a
service-independent store of my email.
My old method of backing up the email was to forward all my email to a Gmail
account, then use POP to download the email with a hacked-together script. This
had the added benefit that the Gmail account also served as a searchable backup.
Unfortunately the Gmail account ran out of storage and the POP script kept
hanging for some reason, which together motivated me to get away from this
convoluted backup strategy.
The replacement script uses JMAP to connect directly to
Fastmail and download all messages. It is intended to run periodically: each
run picks an end time 24 hours in the past, downloads all email older than
that, and records the end time. The next run searches for mail between the
previous end time and a new end time, again 24 hours in the past.
Why pick a time in the past? Well, I’m not confident that if you search up until
this exact moment, you are guaranteed to get every message. A message could come
in, then two seconds later you send a query, but it hits a server that doesn’t
know about your message yet. I’m sure an hour is more than enough leeway, but
since this is a backup, we might as well make it a 24-hour delay.
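In sketch form, the window calculation is a few lines (the function name here
is illustrative, not necessarily what the real script uses):

```python
from datetime import datetime, timedelta, timezone

def compute_window(last_end_time, now=None):
    """Return the (start, end) query window: from the previously recorded
    end time up to 24 hours before now, leaving slack for server lag."""
    if now is None:
        now = datetime.now(timezone.utc)
    end = now - timedelta(hours=24)  # stay well behind "now" for safety
    return last_end_time, end
```

The end time of one run becomes the start time of the next, so no message can
fall between two windows.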
Note that I am querying all mail, regardless of which mailbox it is in, so even
if I have put a message in the trash, my backup script will find it and save it.
JMAP is a modern JSON-based replacement for IMAP and much easier to use, such
that the entire script is 135 lines, even with my not-exactly-terse use of
Python.
Here is the script, with some notes below.
The get_session function is run once at the beginning of the script, and
fetches some important data from the server, including the account ID and the
URL template used to download individual emails.
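In sketch form, get_session looks something like this. The field names come
from the JMAP session object defined in RFC 8620; I've split the parsing into
its own function so it's easy to see which fields we keep:

```python
import json
import urllib.request

SESSION_URL = "https://api.fastmail.com/jmap/session"

def parse_session(session):
    """Pull out the three things the script needs from a JMAP session object."""
    return {
        "account_id": session["primaryAccounts"]["urn:ietf:params:jmap:mail"],
        "api_url": session["apiUrl"],
        # Template with {accountId}/{blobId}/{type}/{name} placeholders.
        "download_url": session["downloadUrl"],
    }

def get_session(token):
    """Fetch the session object from Fastmail (runs once at startup)."""
    req = urllib.request.Request(
        SESSION_URL, headers={"Authorization": "Bearer " + token}
    )
    with urllib.request.urlopen(req) as resp:
        return parse_session(json.load(resp))
```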
The query function does the bulk of the work, sending the same JSON request
repeatedly to page through the search results. Each request is actually two
method calls: first Email/query, which returns a list of ids, and then
Email/get, which fetches some metadata for each id. I wrote this as a
generator to make the
main part of my script simpler. The paging is performed by capturing the ID of
the final result of one query, and asking the next query to start at that
position plus one (lines 73-74). We are done when the query returns no results.
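The two-part request plus the anchor-based paging can be sketched like this.
I've factored the HTTP POST out into a call argument so the paging logic
stands on its own; the filter and property names follow RFC 8621, but treat
the details as illustrative rather than a copy of the real script:

```python
def build_request(account_id, start, end, anchor=None):
    """One JMAP request: Email/query over a time window, then Email/get
    on the resulting ids via a back-reference."""
    query_args = {
        "accountId": account_id,
        # No mailbox condition, so trash and spam are included too.
        "filter": {"after": start, "before": end},
        "sort": [{"property": "receivedAt", "isAscending": True}],
        "limit": 50,
    }
    if anchor is not None:
        # Resume one past the last id we saw in the previous page.
        query_args["anchor"] = anchor
        query_args["anchorOffset"] = 1
    return {
        "using": ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
        "methodCalls": [
            ["Email/query", query_args, "q"],
            ["Email/get", {
                "accountId": account_id,
                "#ids": {"resultOf": "q", "name": "Email/query", "path": "/ids"},
                "properties": ["id", "blobId", "receivedAt", "subject"],
            }, "g"],
        ],
    }

def query(call, account_id, start, end):
    """Yield email metadata dicts, paging until a query comes back empty."""
    anchor = None
    while True:
        resp = call(build_request(account_id, start, end, anchor))
        ids = resp["methodResponses"][0][1]["ids"]
        if not ids:
            return
        for email in resp["methodResponses"][1][1]["list"]:
            yield email
        anchor = ids[-1]
```

Because call is injected, the paging loop can be exercised against canned
responses without talking to a server.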
The download_email function uses the blob ID to fetch the entire email and
saves it to disk. This doesn’t really need to be its own function, but it
will help if I later decide to use multiple threads to do the downloading.
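download_email amounts to expanding the downloadUrl template from the session
and writing the bytes to disk. Something like this, with the filename and
content-type choices being illustrative:

```python
import urllib.parse
import urllib.request
from pathlib import Path

def blob_url(template, account_id, blob_id, name="email.eml"):
    """Fill in the downloadUrl template from the JMAP session object."""
    return (template
            .replace("{accountId}", urllib.parse.quote(account_id, safe=""))
            .replace("{blobId}", urllib.parse.quote(blob_id, safe=""))
            .replace("{type}", urllib.parse.quote("message/rfc822", safe=""))
            .replace("{name}", urllib.parse.quote(name, safe="")))

def download_email(token, url, dest):
    """Fetch one raw message and save it to dest.

    Kept as a separate function so it would be easy to hand off to a
    thread pool later."""
    req = urllib.request.Request(url, headers={"Authorization": "Bearer " + token})
    with urllib.request.urlopen(req) as resp:
        Path(dest).write_bytes(resp.read())
```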
Finally, the main part of the script reads configuration from a YAML file,
including the last end time. It loops through the results of query, calling
download_email on each result, and then writes the configuration back out to
the YAML file with the updated last_end_time.
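The overall shape of that main section is roughly the following. I've left
out the YAML load/dump (yaml.safe_load at startup, yaml.safe_dump at the end,
via PyYAML) and passed the query generator and downloader in as arguments, so
this sketch only shows the orchestration:

```python
from datetime import datetime, timedelta, timezone

def run_backup(config, query_fn, download_fn):
    """One backup pass: fetch everything between the saved end time and a
    new end time 24 hours ago, then advance the marker.

    query_fn(start, end) yields dicts containing at least 'blobId';
    download_fn(blob_id) saves one message to disk."""
    start = config["last_end_time"]
    end = (datetime.now(timezone.utc)
           - timedelta(hours=24)).strftime("%Y-%m-%dT%H:%M:%SZ")
    for email in query_fn(start, end):
        download_fn(email["blobId"])
    # Only advance the marker after every download has succeeded, so a
    # failed run simply retries the same window next time.
    return {**config, "last_end_time": end}
```

Updating last_end_time only at the end means a crashed run re-downloads the
same window, which is harmless for a backup.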