We don't want to start processing events until those events have
somewhere to be sent to.
Also, to be safe, ensure we remove and re-add the Gluon user while
clearing its sync status, even though this shouldn't be necessary.
fix(GODT-2327): Only start processing events once sync is finished
fix(GODT-2327): Avoid Windows delete-all deadlock
fix(GODT-2327): Clear update channels whenever clearing sync status
fix(GODT-2327): Properly cancel event stream when handling refresh
fix(GODT-2327): Remove unnecessary sync abort call
fix(GODT-2327): Fix lint issue
fix(GODT-2327): Don't retry with abortable context because it's canceled
fix(GODT-2327): Loop to retry until sync has completed
fix(GODT-2327): Better sleep (with context)
Updates go-proton-api and Gluon to include memory-reduction changes and
modifies the sync process to take into account how much memory is used
during the sync stage.
The sync process now has an extra stage which first downloads the message
metadata to ensure that we only download up to `syncMaxDownloadRequesMem`
worth of messages, or 250 messages total. This allows the download
request to scale automatically to accommodate many small or a few very
large messages.
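As a rough sketch (not the actual Bridge code), the metadata batching
could look like the following; `MessageMetadata`, `maxBatchMem`, and
`maxBatchCount` are hypothetical names standing in for the real ones:

```go
// MessageMetadata is a hypothetical stand-in for the metadata returned
// by the listing stage; only the size matters for batching.
type MessageMetadata struct {
	ID   string
	Size uint64
}

// chunkByMemory splits message metadata into batches so that each batch
// stays under both a memory budget and a message-count cap. A single
// oversized message still forms its own batch.
func chunkByMemory(metadata []MessageMetadata, maxBatchMem uint64, maxBatchCount int) [][]MessageMetadata {
	var (
		batches [][]MessageMetadata
		cur     []MessageMetadata
		curMem  uint64
	)

	for _, m := range metadata {
		// Start a new batch once adding this message would exceed
		// either the memory budget or the count cap.
		if len(cur) > 0 && (curMem+m.Size > maxBatchMem || len(cur) >= maxBatchCount) {
			batches = append(batches, cur)
			cur, curMem = nil, 0
		}

		cur = append(cur, m)
		curMem += m.Size
	}

	if len(cur) > 0 {
		batches = append(batches, cur)
	}

	return batches
}
```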
The IDs are then sent to a download goroutine which downloads the
message and its attachments. The result is then forwarded to another
goroutine which builds the actual message. This stage tries to ensure
that we don't use more than `syncMaxMessageBuildingMem` to build these
messages.
Finally the result is sent to a last goroutine which applies the
changes to Gluon and waits for them to be completed.
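A minimal sketch of how these stages could be wired together with
channels, assuming `golang.org/x/sync/errgroup`; `DownloadedMessage`,
`BuiltMessage`, `downloadBatch`, `buildRFC822`, and `applyToGluon` are
hypothetical placeholders for the real download, build, and apply steps:

```go
// runSyncPipeline wires the download, build and apply stages together.
// Each stage runs in its own goroutine and passes results to the next
// stage over a channel, so the stages overlap instead of running serially.
func runSyncPipeline(ctx context.Context, batches [][]MessageMetadata) error {
	g, ctx := errgroup.WithContext(ctx)

	downloadCh := make(chan []DownloadedMessage)
	buildCh := make(chan []BuiltMessage)

	g.Go(func() error {
		defer close(downloadCh)
		for _, batch := range batches {
			msgs, err := downloadBatch(ctx, batch) // messages + attachments
			if err != nil {
				return err
			}
			select {
			case downloadCh <- msgs:
			case <-ctx.Done():
				return ctx.Err()
			}
		}
		return nil
	})

	g.Go(func() error {
		defer close(buildCh)
		for msgs := range downloadCh {
			built, err := buildRFC822(ctx, msgs) // bounded by the build memory budget
			if err != nil {
				return err
			}
			select {
			case buildCh <- built:
			case <-ctx.Done():
				return ctx.Err()
			}
		}
		return nil
	})

	g.Go(func() error {
		for built := range buildCh {
			// Apply the changes to Gluon and wait for them to complete.
			if err := applyToGluon(ctx, built); err != nil {
				return err
			}
		}
		return nil
	})

	return g.Wait()
}
```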
The new process is currently limited to 2GB. Dynamic scaling will be
implemented in a follow-up. For systems with less than 2GB of memory we
limit the values to a set of values that is known to work.
For every update sent to Gluon, wait and check the returned error to see
if an error occurred.
Note: updates can't be inspected at the call site, as doing so can lead
to deadlocks.
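A minimal sketch of this pattern, assuming a hypothetical update type
whose `Done` channel yields the error with which Gluon completed the
update (the actual Gluon API may differ):

```go
// Update is a hypothetical stand-in for Gluon's update type; Done is
// assumed to yield the error with which the update was completed.
type Update interface {
	Done() <-chan error
}

// publishAndWait sends an update to Gluon's update channel and then waits
// for Gluon to report whether applying the update succeeded. Waiting here,
// rather than inspecting the update at the call site, avoids the deadlock.
func publishAndWait(ctx context.Context, updateCh chan<- Update, update Update) error {
	select {
	case updateCh <- update:
	case <-ctx.Done():
		return ctx.Err()
	}

	// Block until Gluon has processed the update and reported its error.
	select {
	case err := <-update.Done():
		return err
	case <-ctx.Done():
		return ctx.Err()
	}
}
```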
Prevent an infinite error loop in event parsing by not returning errors
if we already have, or no longer have, a given address. This occurs
because we sync the latest state at Bridge startup but still receive the
events which contain these changes later.
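For illustration, the address-created case could be made idempotent
along these lines (`Address` and `applyAddressCreated` are hypothetical;
deletion is handled symmetrically by skipping already-absent addresses):

```go
// Address is a hypothetical stand-in with just the fields needed here.
type Address struct {
	ID    string
	Email string
}

// applyAddressCreated applies an address-created event idempotently: an
// address that already exists (because startup sync saw it first) is
// skipped rather than treated as an error, which would otherwise cause
// the event to be retried forever.
func applyAddressCreated(addresses map[string]Address, addr Address) error {
	if _, ok := addresses[addr.ID]; ok {
		return nil // already known from startup sync; nothing to do
	}

	addresses[addr.ID] = addr

	return nil
}
```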
Do not treat creation/update/deletion of labels unknown to Bridge as an
error, as the Gluon cache still needs to receive these events to correct
its internal state.
During sync a user may continue to perform operations on the server, so
it is possible we run into a message which has a labelID we are not
aware of. To counter this we issue `CreateMessage` updates with
`IgnoreUnknownMailboxIDs` set to true. Eventually, after sync, the state
will resolve itself with events.
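A hedged sketch of what issuing such an update might look like; the
`CreateMessageUpdate` struct (assumed to implement the `Update` interface
from the earlier sketch) is illustrative, not Gluon's exact API:

```go
// syncCreateMessage publishes a message-creation update during sync. Any
// label IDs that Gluon doesn't know about yet are ignored rather than
// rejected; later API events will bring the state back in line.
func syncCreateMessage(ctx context.Context, updateCh chan<- Update, msg BuiltMessage) error {
	update := &CreateMessageUpdate{
		Message:                 msg,
		IgnoreUnknownMailboxIDs: true, // tolerate labels created mid-sync
	}

	// Reuse the publish-and-wait pattern so the error is still checked.
	return publishAndWait(ctx, updateCh, update)
}
```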
Add special-case handling for draft messages so that if a draft is
updated via an event it is correctly updated on the IMAP client via the
new `imap.MessageUpdated` event.
This patch also updates Gluon to the latest version.
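The draft special case might look roughly like this; `isDraft`,
`newMessageUpdated`, and `newMessageFlagsUpdated` are hypothetical
helpers standing in for the real logic around `imap.MessageUpdated`:

```go
// handleMessageUpdateEvent forwards a message-update event to Gluon.
// Drafts are special-cased: their body can change on the server, so the
// IMAP side must replace the whole message literal rather than just
// refresh flags and labels.
func handleMessageUpdateEvent(ctx context.Context, updateCh chan<- Update, msg BuiltMessage) error {
	if isDraft(msg) {
		// Replace the whole message so the client sees the edited draft.
		return publishAndWait(ctx, updateCh, newMessageUpdated(msg))
	}

	// Non-drafts only need their flags/labels refreshed.
	return publishAndWait(ctx, updateCh, newMessageFlagsUpdated(msg))
}
```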
Only delete messages when unlabeled from trash/spam if they only exist
in All Mail and (Spam or Trash).
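The deletion condition amounts to a simple label-set check; this sketch
uses hypothetical label-ID constants:

```go
// shouldDeleteOnUnlabel reports whether removing a message from Trash or
// Spam should permanently delete it: only when the message exists solely
// in All Mail plus (Spam or Trash), i.e. it has no other label.
func shouldDeleteOnUnlabel(labelIDs []string) bool {
	for _, id := range labelIDs {
		switch id {
		case allMailLabelID, trashLabelID, spamLabelID:
			// These don't count as "other" locations.
		default:
			return false // still present elsewhere; just unlabel
		}
	}

	return true
}
```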
This patch also ports delete_from_trash.feature and uses STATUS rather
than FETCH to count messages in a mailbox.
This change implements safe.Mutex and safe.RWMutex, which wrap the
sync.Mutex and sync.RWMutex types and are assigned a globally unique
integer ID. The safe.Lock and safe.RLock methods sort the mutexes
by this integer ID before locking to ensure that the locks for a given
set of mutexes are always acquired in the same order, avoiding
deadlocks.
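A minimal sketch of the lock-ordering idea (the real safe package may
differ in detail; safe.RWMutex/safe.RLock would follow the same pattern):

```go
package safe

import (
	"sort"
	"sync"
	"sync/atomic"
)

var nextMutexID uint64 // globally unique ID source

// Mutex wraps sync.Mutex with a globally unique ID used for lock ordering.
type Mutex struct {
	sync.Mutex
	id uint64
}

func NewMutex() *Mutex {
	return &Mutex{id: atomic.AddUint64(&nextMutexID, 1)}
}

// Lock acquires all given mutexes in ascending ID order, so any two
// callers locking an overlapping set always lock in the same order,
// which rules out lock-order-inversion deadlocks. fn runs with all
// mutexes held.
func Lock(fn func(), mutexes ...*Mutex) {
	sorted := append([]*Mutex(nil), mutexes...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].id < sorted[j].id })

	for _, m := range sorted {
		m.Mutex.Lock()
	}
	defer func() {
		// Unlock in reverse order (not required for correctness, but tidy).
		for i := len(sorted) - 1; i >= 0; i-- {
			sorted[i].Mutex.Unlock()
		}
	}()

	fn()
}
```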
Labels can be held locally and updated in memory. This greatly improves
the responsiveness of IMAP mailbox operations, as we don't need to fetch
all of a user's labels to find the parent whenever a mailbox is moved.
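For instance, a local label cache could look like this (hypothetical
types; the real state lives in the user and is fed by label events):

```go
// Label is a hypothetical stand-in with just the fields needed here.
type Label struct {
	ID       string
	ParentID string
	Name     string
}

// labelCache keeps the user's labels in memory so parent lookups during
// mailbox renames/moves don't need an API round trip.
type labelCache struct {
	mu     sync.RWMutex
	labels map[string]Label // keyed by label ID
}

// parent returns the parent label of the given label, if any, straight
// from memory.
func (c *labelCache) parent(labelID string) (Label, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()

	label, ok := c.labels[labelID]
	if !ok || label.ParentID == "" {
		return Label{}, false
	}

	parent, ok := c.labels[label.ParentID]

	return parent, ok
}

// apply updates the cache in place when a label event arrives.
func (c *labelCache) apply(label Label) {
	c.mu.Lock()
	defer c.mu.Unlock()

	c.labels[label.ID] = label
}
```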
This fixes various race conditions and leaks related to the user's sync
and API event stream. It was possible for a sync/stream to begin after a
user was already closed; this change prevents that by managing the
goroutines related to sync/stream within cancellable groups.
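Sketched below with a context-backed group (the names are illustrative;
Bridge's actual group type may differ):

```go
// group runs goroutines tied to a user's lifetime: once Close is called,
// no new goroutine can start and all running ones are cancelled and
// waited for, so a sync/stream can never start after its user is closed.
type group struct {
	ctx    context.Context
	cancel context.CancelFunc
	wg     sync.WaitGroup
	mu     sync.Mutex
	closed bool
}

func newGroup() *group {
	ctx, cancel := context.WithCancel(context.Background())
	return &group{ctx: ctx, cancel: cancel}
}

// Go starts fn unless the group is already closed.
func (g *group) Go(fn func(ctx context.Context)) {
	g.mu.Lock()
	defer g.mu.Unlock()

	if g.closed {
		return // user already closed; don't start a late sync/stream
	}

	g.wg.Add(1)
	go func() {
		defer g.wg.Done()
		fn(g.ctx)
	}()
}

// Close cancels all goroutines and blocks until they have exited.
func (g *group) Close() {
	g.mu.Lock()
	g.closed = true
	g.cancel()
	g.mu.Unlock()

	g.wg.Wait()
}
```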
Add missing Close calls.
Properly handle nil channel for `user.startSync`.
This patch also updates liteapi and Gluon to the latest master and dev
versions, respectively.