Rewrite the vault to use one RWLock rather than two separate locks for
data and reference counts.
In certain circumstances, different requests could end up in undefined
states if a user was deleted successfully while, at the same time,
another goroutine/thread was loading that same user.
While I have not been able to reproduce this in a test, restricting
access to a single lock rather than two should avoid corner cases where
logic executes outside of the lock scope.
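A minimal sketch of the idea, with illustrative type and field names rather than the actual bridge code: a single `sync.RWMutex` guards both the user data and the reference counts, so a delete and a concurrent load always serialize against the same lock.

```go
package vault

import "sync"

type User struct {
	ID string
}

type Vault struct {
	lock sync.RWMutex

	users map[string]*User // vault data
	refs  map[string]int   // reference counts
}

// DeleteUser removes both the user data and its reference count under a
// single write lock.
func (v *Vault) DeleteUser(id string) {
	v.lock.Lock()
	defer v.lock.Unlock()

	delete(v.users, id)
	delete(v.refs, id)
}

// LoadUser looks the user up and bumps its reference count under the same
// lock, so it can never observe a half-deleted user.
func (v *Vault) LoadUser(id string) (*User, bool) {
	v.lock.Lock()
	defer v.lock.Unlock()

	user, ok := v.users[id]
	if ok {
		v.refs[id]++
	}

	return user, ok
}

// HasUser is a read-only query and only needs the read lock.
func (v *Vault) HasUser(id string) bool {
	v.lock.RLock()
	defer v.lock.RUnlock()

	_, ok := v.users[id]

	return ok
}
```

Because both the delete and the load take the same lock, the load either sees the user before the delete or not at all; there is no window in which the data and the reference counts disagree.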
When performing a factory reset, we don't want to wipe all keychain
entries. The only entry that should remain is the vault's passphrase,
which we need to decrypt the vault at the next startup (otherwise it
would be reported as corrupt).
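A hedged sketch of that behaviour, using a plain map in place of the real keychain API and a made-up entry name (`bridge-vault-key`):

```go
package main

import "fmt"

// vaultPassphraseKey is an assumed entry name, used only for illustration.
const vaultPassphraseKey = "bridge-vault-key"

// clearKeychain deletes every entry except the vault passphrase, so the
// vault can still be decrypted on the next startup.
func clearKeychain(entries map[string]string) {
	for name := range entries {
		if name == vaultPassphraseKey {
			continue
		}

		delete(entries, name)
	}
}

func main() {
	entries := map[string]string{
		"bridge-vault-key": "secret",
		"imap-credentials": "user:pass",
	}

	clearKeychain(entries)

	fmt.Println(entries) // map[bridge-vault-key:secret]
}
```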
Update UIDValidity to be a timestamp: the number of seconds since
1 February 2023. This avoids the problem where we lose the last
UIDValidity value when the vault is missing, corrupted or deleted.
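A sketch of the computation, assuming a `uint32` UIDValidity and a UTC epoch. Since the value grows with wall-clock time, a freshly generated UIDValidity is always larger than any previously issued one, even if the old value was lost with the vault:

```go
package main

import (
	"fmt"
	"time"
)

// Epoch used for UIDValidity: 1 February 2023 (UTC).
var uidValidityEpoch = time.Date(2023, time.February, 1, 0, 0, 0, 0, time.UTC)

// newUIDValidity returns the number of seconds elapsed since the epoch.
func newUIDValidity() uint32 {
	return uint32(time.Since(uidValidityEpoch) / time.Second)
}

func main() {
	fmt.Println(newUIDValidity())
}
```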
Update go-proton-api and Gluon to include memory-reduction changes, and
modify the sync process to take into account how much memory is used
during the sync stage.
The sync process now has an extra stage which first downloads the message
metadata, so that we only download up to `syncMaxDownloadRequesMem`
worth of messages, or 250 messages in total. This allows the download
request to scale automatically to accommodate many small or a few very
large messages.
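A rough sketch of that batching step. The metadata type, field names and the concrete limit value are assumptions for illustration; only `syncMaxDownloadRequesMem` and the 250-message cap come from the change itself:

```go
package syncer

// messageMetadata is a stand-in for the metadata fetched by the new first
// stage; only the fields needed for batching are shown.
type messageMetadata struct {
	ID   string
	Size uint64
}

const (
	syncMaxDownloadRequesMem = uint64(128 * 1024 * 1024) // illustrative value only
	maxMessagesPerBatch      = 250
)

// chunkForDownload splits the metadata into batches so that a single batch
// never exceeds the memory budget or 250 messages, whichever comes first.
// Many small messages hit the count limit; a few very large ones hit the
// memory limit.
func chunkForDownload(metadata []messageMetadata) [][]messageMetadata {
	var (
		batches [][]messageMetadata
		current []messageMetadata
		memUsed uint64
	)

	for _, m := range metadata {
		if len(current) > 0 && (memUsed+m.Size > syncMaxDownloadRequesMem || len(current) >= maxMessagesPerBatch) {
			batches = append(batches, current)
			current = nil
			memUsed = 0
		}

		current = append(current, m)
		memUsed += m.Size
	}

	if len(current) > 0 {
		batches = append(batches, current)
	}

	return batches
}
```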
The IDs are then sent to a download goroutine which downloads each
message and its attachments. The result is forwarded to another
goroutine which builds the actual message. This stage tries to ensure
that we don't use more than `syncMaxMessageBuildingMem` to build these
messages.
Finally, the result is sent to a last goroutine which applies the
changes to Gluon and waits for them to be completed.
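A simplified sketch of how such a three-stage pipeline can be wired together with channels; the type and function names are illustrative and the memory budgeting is elided:

```go
package syncer

import "sync"

// Illustrative stand-ins for the real sync types.
type downloadedMessage struct {
	ID          string
	Literal     []byte
	Attachments [][]byte
}

type builtMessage struct {
	ID      string
	Literal []byte
}

// runPipeline wires the three stages together: download -> build -> apply.
// Each stage runs in its own goroutine and hands its results to the next
// one over a channel, so downloading, building and applying overlap.
func runPipeline(ids []string, download func(string) downloadedMessage, build func(downloadedMessage) builtMessage, apply func(builtMessage)) {
	downloadCh := make(chan downloadedMessage)
	buildCh := make(chan builtMessage)

	var wg sync.WaitGroup

	// Stage 1: download the message bodies and attachments.
	wg.Add(1)
	go func() {
		defer wg.Done()
		defer close(downloadCh)

		for _, id := range ids {
			downloadCh <- download(id)
		}
	}()

	// Stage 2: build the messages, keeping the in-flight build memory
	// below syncMaxMessageBuildingMem (budgeting elided here).
	wg.Add(1)
	go func() {
		defer wg.Done()
		defer close(buildCh)

		for msg := range downloadCh {
			buildCh <- build(msg)
		}
	}()

	// Stage 3: apply the built messages to Gluon and wait for completion.
	wg.Add(1)
	go func() {
		defer wg.Done()

		for msg := range buildCh {
			apply(msg)
		}
	}()

	wg.Wait()
}
```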
The new process is currently limited to 2GB. Dynamic scaling will be
implemented in a follow-up. For systems with less than 2GB of memory, we
fall back to a set of values that is known to work.
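A sketch of that fallback, with made-up constant values standing in for the real defaults:

```go
package syncer

// Illustrative values only; the real defaults live in the bridge config.
const (
	twoGB = uint64(2 * 1024 * 1024 * 1024)

	defaultDownloadRequestMem = uint64(256 * 1024 * 1024)
	defaultMessageBuildingMem = uint64(512 * 1024 * 1024)

	lowMemDownloadRequestMem = uint64(64 * 1024 * 1024)
	lowMemMessageBuildingMem = uint64(128 * 1024 * 1024)
)

// syncLimits picks the sync memory limits based on total system memory:
// systems with less than 2GB get a conservative set of values that is
// known to work; everything else uses the defaults.
func syncLimits(totalMemory uint64) (downloadMem, buildMem uint64) {
	if totalMemory < twoGB {
		return lowMemDownloadRequestMem, lowMemMessageBuildingMem
	}

	return defaultDownloadRequestMem, defaultMessageBuildingMem
}
```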
Deleting the file isn't enough because it's still held in memory
and is written back to disk on the next write (SetLastVersion during
bridge teardown).
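An illustrative, standalone sketch of the failure mode and one possible remedy, assuming the file in question is the vault file (SetLastVersion being a vault write); the type and field names are not the bridge's real ones:

```go
package vault

import (
	"os"
	"sync"
)

// fileVault is a minimal stand-in for the real vault.
type fileVault struct {
	lock sync.Mutex
	path string
	data map[string]string // in-memory copy of the vault contents
}

// Reset removes the on-disk file and clears the in-memory copy; without
// the second step, a later write such as SetLastVersion would serialize
// the stale data straight back to disk.
func (v *fileVault) Reset() error {
	v.lock.Lock()
	defer v.lock.Unlock()

	if err := os.Remove(v.path); err != nil && !os.IsNotExist(err) {
		return err
	}

	v.data = make(map[string]string)

	return nil
}
```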
Make sure that we use at least 16 workers for sync; otherwise, double
the current sync worker count.
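A small sketch of one way to read that rule, taking the larger of 16 and twice the current worker count:

```go
package syncer

// syncWorkers returns the number of workers to use for sync: at least 16,
// otherwise twice the currently configured worker count.
func syncWorkers(current int) int {
	if doubled := 2 * current; doubled > 16 {
		return doubled
	}

	return 16
}
```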
Finally, this patch also logs how long it takes to transfer all the
messages from the server.
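A sketch of that measurement using logrus; `transfer` is a stand-in for the real sync call:

```go
package syncer

import (
	"time"

	"github.com/sirupsen/logrus"
)

// logTransferDuration wraps the transfer and logs how long it took to
// fetch all messages from the server.
func logTransferDuration(transfer func() error) error {
	start := time.Now()

	err := transfer()

	logrus.WithField("duration", time.Since(start)).Info("Finished transferring messages from server")

	return err
}
```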