Rewrite the vault to use one RWLock rather than two separate locks for
data and reference counts.
In certain circumstances, different requests could end up in undefined
states if a user was deleted successfully while, at the same time,
another goroutine/thread was loading that user.
While I have not been able to reproduce this in a test, restricting
access to one lock rather than two should avoid corner cases where
logic executes outside of the lock scope.
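For illustration, a minimal sketch of the single-lock approach; the
Vault and UserData shapes here are hypothetical stand-ins, not the real
vault types:

```go
package vault

import "sync"

// Vault guards both the user data and the reference counts with a
// single RWMutex, so no logic can observe one without the other.
type Vault struct {
	lock sync.RWMutex

	users    map[string]*UserData
	refCount map[string]int
}

type UserData struct{ /* ... */ }

// GetUser loads a user and bumps its reference count under the same
// lock, so a concurrent DeleteUser can never see a half-loaded state.
func (v *Vault) GetUser(id string) (*UserData, bool) {
	v.lock.Lock()
	defer v.lock.Unlock()

	user, ok := v.users[id]
	if !ok {
		return nil, false
	}

	v.refCount[id]++

	return user, true
}

// DeleteUser removes a user only when no references remain; the check
// and the delete happen atomically under the single lock.
func (v *Vault) DeleteUser(id string) bool {
	v.lock.Lock()
	defer v.lock.Unlock()

	if v.refCount[id] > 0 {
		return false
	}

	delete(v.users, id)
	delete(v.refCount, id)

	return true
}
```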
It is possible, on slower machines, that the new event poll task is not
yet registered, so attempts to cancel it have nothing to cancel.
In this case, we rely on the refresh event to cancel the task; at that
point the task is guaranteed to exist.
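A minimal sketch of that hand-off; poller, startPollTask and onRefresh
are hypothetical names used for illustration only:

```go
package events

import (
	"context"
	"sync"
)

type poller struct {
	mu     sync.Mutex
	cancel context.CancelFunc // nil until the poll task is registered
}

// startPollTask stores the task's cancel func before the task runs, so
// a later cancellation always has something to cancel.
func (p *poller) startPollTask(parent context.Context, run func(context.Context)) {
	ctx, cancel := context.WithCancel(parent)

	p.mu.Lock()
	p.cancel = cancel
	p.mu.Unlock()

	go run(ctx)
}

// onRefresh is called from the refresh event handler; by then the poll
// task is guaranteed to be registered, so cancel is never a no-op.
func (p *poller) onRefresh() {
	p.mu.Lock()
	defer p.mu.Unlock()

	if p.cancel != nil {
		p.cancel()
		p.cancel = nil
	}
}
```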
We don't want to start processing events until those events have
somewhere to be sent.
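The gating can be sketched as follows; syncDone, events and handle are
hypothetical names for the real channels and handler:

```go
package events

import "context"

type event struct{ /* ... */ }

func handle(event) { /* apply the event */ }

// processEvents blocks until the sync is done, i.e. until the events
// have somewhere to be sent, and only then drains the event stream.
func processEvents(ctx context.Context, syncDone <-chan struct{}, events <-chan event) {
	select {
	case <-ctx.Done():
		return
	case <-syncDone:
	}

	for {
		select {
		case <-ctx.Done():
			return
		case ev := <-events:
			handle(ev)
		}
	}
}
```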
Also, to be safe, we remove and re-add the Gluon user while clearing
its sync status, although this shouldn't strictly be necessary.
fix(GODT-2327): Only start processing events once sync is finished
fix(GODT-2327): avoid windows delete all deadlock
fix(GODT-2327): Clear update channels whenever clearing sync status
fix(GODT-2327): Properly cancel event stream when handling refresh
fix(GODT-2327): Remove unnecessary sync abort call
fix(GODT-2327): Fix lint issue
fix(GODT-2327): Don't retry with abortable context because it's canceled
fix(GODT-2327): Loop to retry until sync has completed
fix(GODT-2327): Better sleep (with context)
Updates go-proton-api and Gluon to include memory reduction changes, and
modifies the sync process to take into account how much memory is used
during the sync stage.
The sync process now has an extra stage which first downloads the
message metadata to ensure that we only download up to
`syncMaxDownloadRequesMem` worth of messages, or 250 messages total.
This allows the download request to scale automatically to accommodate
many small or a few very large messages.
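As a sketch, the batching rule could look like this; the names are
stand-ins, and only the memory budget and the 250-message cap come from
the change itself:

```go
package syncing

const (
	maxChunkMem   = 128 << 20 // stand-in for syncMaxDownloadRequesMem
	maxChunkCount = 250       // hard cap per download request
)

type messageMeta struct {
	ID   string
	Size int // size reported by the metadata stage
}

// chunkMetadata splits the metadata into download batches, closing a
// batch once it would exceed the memory budget or the message cap.
func chunkMetadata(metadata []messageMeta) [][]messageMeta {
	var (
		chunks [][]messageMeta
		cur    []messageMeta
		mem    int
	)

	for _, m := range metadata {
		if len(cur) > 0 && (mem+m.Size > maxChunkMem || len(cur) >= maxChunkCount) {
			chunks = append(chunks, cur)
			cur, mem = nil, 0
		}

		cur = append(cur, m)
		mem += m.Size
	}

	if len(cur) > 0 {
		chunks = append(chunks, cur)
	}

	return chunks
}
```

This is what lets the request scale: many small messages fill a batch
up to the 250-message cap, while a few large ones close it early on
memory.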
The IDs are then sent to a download goroutine, which downloads each
message and its attachments. The result is forwarded to another
goroutine, which builds the actual message. This stage tries to ensure
that we don't use more than `syncMaxMessageBuildingMem` to build these
messages.
Finally, the result is sent to a last goroutine, which applies the
changes to Gluon and waits for them to be completed.
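A condensed sketch of the stage wiring; every type and stage function
here is a hypothetical placeholder:

```go
package syncing

import "context"

type (
	messageID    string
	rawMessage   struct{ /* message + attachments */ }
	builtMessage struct{ /* assembled message literal */ }
)

func download(ctx context.Context, id messageID) rawMessage    { return rawMessage{} }
func build(raw rawMessage) builtMessage                        { return builtMessage{} }
func applyToGluon(ctx context.Context, msg builtMessage) error { return nil }

// runSyncPipeline wires the stages together with channels: download,
// build, then apply-and-wait, so each stage can bound its own memory.
func runSyncPipeline(ctx context.Context, ids []messageID) error {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel() // unblocks the producer stages if apply fails

	downloadCh := make(chan rawMessage)
	buildCh := make(chan builtMessage)

	go func() { // stage: download each message and its attachments
		defer close(downloadCh)
		for _, id := range ids {
			select {
			case downloadCh <- download(ctx, id):
			case <-ctx.Done():
				return
			}
		}
	}()

	go func() { // stage: build the actual messages
		defer close(buildCh)
		for raw := range downloadCh {
			select {
			case buildCh <- build(raw):
			case <-ctx.Done():
				return
			}
		}
	}()

	// Final stage: apply to Gluon and wait for the changes to complete.
	for msg := range buildCh {
		if err := applyToGluon(ctx, msg); err != nil {
			return err
		}
	}

	return nil
}
```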
The new process is currently limited to 2GB. Dynamic scaling will be
implemented in a follow-up. For systems with less than 2GB of memory,
we limit the values to a set that is known to work.
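Sketched as a simple guard; every constant here is a hypothetical
placeholder for the values known to work:

```go
package syncing

const (
	twoGB = 2 << 30 // threshold from this change

	// Hypothetical defaults and conservative low-memory fallbacks.
	defaultDownloadMem uint64 = 512 << 20
	defaultBuildingMem uint64 = 1 << 30
	lowMemDownloadMem  uint64 = 64 << 20
	lowMemBuildingMem  uint64 = 128 << 20
)

// syncLimits picks the memory budgets for the sync stages, falling
// back to conservative values on systems with less than 2GB of memory.
func syncLimits(totalMemory uint64) (downloadMem, buildingMem uint64) {
	if totalMemory < twoGB {
		return lowMemDownloadMem, lowMemBuildingMem
	}

	return defaultDownloadMem, defaultBuildingMem
}
```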
This change implements safe.Mutex and safe.RWMutex, which wrap the
sync.Mutex and sync.RWMutex types and are assigned a globally unique
integer ID. The safe.Lock and safe.RLock methods sort the mutexes
by this integer ID before locking to ensure that locks for a given
set of mutexes are always performed in the same order, avoiding
deadlocks.
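Roughly sketched, the idea looks like this; it is a simplified take on
the described safe package, not its exact code:

```go
package safe

import (
	"sort"
	"sync"
	"sync/atomic"
)

var nextID uint64 // source of globally unique mutex IDs

// Mutex wraps sync.Mutex and carries a globally unique ID.
type Mutex struct {
	sync.Mutex
	id uint64
}

func NewMutex() *Mutex {
	return &Mutex{id: atomic.AddUint64(&nextID, 1)}
}

// Lock acquires all given mutexes in ascending ID order, runs fn, then
// releases them in reverse order. Any two goroutines locking an
// overlapping set therefore always lock in the same order, which rules
// out lock-ordering deadlocks.
func Lock(fn func(), mutexes ...*Mutex) {
	sorted := append([]*Mutex(nil), mutexes...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].id < sorted[j].id })

	for _, m := range sorted {
		m.Mutex.Lock()
	}

	defer func() {
		for i := len(sorted) - 1; i >= 0; i-- {
			sorted[i].Mutex.Unlock()
		}
	}()

	fn()
}
```

A caller then writes safe.Lock(func() { /* critical section */ }, muA,
muB) and never has to think about lock ordering at the call site.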
Depending on the timing of bridge closure, it was possible for the
IMAP/SMTP servers to not yet have started serving. By grouping their
startup in a cancelable goroutine group (*xsync.Group), we mitigate
this issue.
Further, depending on the timing of an internet disconnection during
user login, it was possible for a user to be improperly logged in. This
change fixes that and adds test coverage for it.
Lastly, depending on timing, certain background tasks (updates check,
connectivity ping) could be improperly started or closed. This change
groups them in the *xsync.Group as well so that they are closed
properly.
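The pattern can be sketched like this; it is a simplified stand-in for
*xsync.Group, whose real API may differ:

```go
package xsync

import (
	"context"
	"sync"
)

// Group runs related goroutines under one context so they can be
// canceled and awaited together during shutdown.
type Group struct {
	ctx    context.Context
	cancel context.CancelFunc
	wg     sync.WaitGroup
}

func NewGroup(parent context.Context) *Group {
	ctx, cancel := context.WithCancel(parent)
	return &Group{ctx: ctx, cancel: cancel}
}

// Go runs fn with the group's context; fn must return once ctx is done.
func (g *Group) Go(fn func(ctx context.Context)) {
	g.wg.Add(1)

	go func() {
		defer g.wg.Done()
		fn(g.ctx)
	}()
}

// CancelAndWait stops every task and blocks until all have returned,
// so bridge closure cannot race with servers that are still starting.
func (g *Group) CancelAndWait() {
	g.cancel()
	g.wg.Wait()
}
```

Bridge would then register the IMAP/SMTP servers and the background
tasks via Go and call CancelAndWait on closure.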