We don't want to start processing events until those events have
somewhere to be sent to.
Also, to be safe, ensure we remove and re-add the gluon user while
clearing its sync status. This shouldn't be necessary.
fix(GODT-2327): Only start processing events once sync is finished
fix(GODT-2327): avoid windows delete all deadlock
fix(GODT-2327): Clear update channels whenever clearing sync status
fix(GODT-2327): Properly cancel event stream when handling refresh
fix(GODT-2327): Remove unnecessary sync abort call
fix(GODT-2327): Fix lint issue
fix(GODT-2327): Don't retry with abortable context because it's canceled
fix(GODT-2327): Loop to retry until sync has completed
fix(GODT-2327): Better sleep (with context)
There is a concurrency bug due to competing sync calls that can occur
when we clear the sync status in the Vault. Running sync at the end of
addIMAPUser() avoids the problem.
This patch also removes the execution of a sync task for
`user.ClearSyncStatus()`.
Updates go-proton-api and Gluon to include memory reduction changes and
modifies the sync process to take into account how much memory is used
during the sync stage.
The sync process now has an extra stage which first downloads the
message metadata to ensure that we only download up to
`syncMaxDownloadRequesMem` messages or 250 messages total. This allows
the download request to scale automatically to accommodate many small
or a few very large messages.
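For illustration, a minimal sketch of the batching step, assuming
hypothetical names and budgets in place of `syncMaxDownloadRequesMem`
and the 250-message cap:

```go
package sketch

// Illustrative placeholders; the real sync code works with go-proton-api
// metadata and the syncMaxDownloadRequesMem / 250-message limits.
type messageMeta struct {
	ID   string
	Size uint64
}

const (
	maxDownloadMem uint64 = 128 << 20 // hypothetical per-batch download budget, in bytes
	maxBatchCount         = 250       // hard cap on messages per download request
)

// batchByMemory splits the message metadata into download batches so that a
// batch holds at most maxBatchCount messages and at most maxDownloadMem bytes.
// Many small messages fill a batch up to the count cap; a few very large
// messages fill it up to the memory cap instead.
func batchByMemory(metadata []messageMeta) [][]messageMeta {
	var (
		batches [][]messageMeta
		batch   []messageMeta
		mem     uint64
	)

	for _, m := range metadata {
		if len(batch) > 0 && (len(batch) == maxBatchCount || mem+m.Size > maxDownloadMem) {
			batches = append(batches, batch)
			batch, mem = nil, 0
		}

		batch = append(batch, m)
		mem += m.Size
	}

	if len(batch) > 0 {
		batches = append(batches, batch)
	}

	return batches
}
```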
The IDs are then sent to a download go-routine which downloads the
message and its attachments. The result is then forwarded to another
go-routine which builds the actual message. This stage tries to ensure
that we don't use more than `syncMaxMessageBuildingMem` to build these
messages.
Finally the result is sent to a last go-routine which applies the
changes to Gluon and waits for them to be completed.
The new process is currently limited to 2GB. Dynamic scaling will be
implemented in a follow-up. For systems with less than 2GB of memory we
limit these parameters to a set of values that is known to work.
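A rough sketch of how the stages could be chained with channels and
goroutines; all types and stage bodies below are placeholders, not the
actual bridge implementation:

```go
package sketch

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// Illustrative types standing in for the real go-proton-api / Gluon types.
type (
	downloadBatch []string            // message IDs fetched in one request
	rawMessage    struct{ ID string } // downloaded message plus attachments
	builtMessage  struct{ ID string } // fully built RFC822 literal
)

// runStages sketches how the download, build and apply stages are chained
// with channels and goroutines; the stage bodies are placeholders.
func runStages(ctx context.Context, batches []downloadBatch) error {
	rawCh := make(chan rawMessage)
	builtCh := make(chan builtMessage)

	g, ctx := errgroup.WithContext(ctx)

	// Download stage: fetch each batch's messages and attachments.
	g.Go(func() error {
		defer close(rawCh)
		for _, batch := range batches {
			for _, id := range batch {
				select {
				case rawCh <- rawMessage{ID: id}: // placeholder for the real download
				case <-ctx.Done():
					return ctx.Err()
				}
			}
		}
		return nil
	})

	// Build stage: assemble the messages, staying under the build memory budget.
	g.Go(func() error {
		defer close(builtCh)
		for raw := range rawCh {
			select {
			case builtCh <- builtMessage{ID: raw.ID}: // placeholder for the real build
			case <-ctx.Done():
				return ctx.Err()
			}
		}
		return nil
	})

	// Apply stage: push the updates into Gluon and wait for them to complete.
	g.Go(func() error {
		for range builtCh {
			// placeholder: create the message in Gluon and wait on the update
		}
		return nil
	})

	return g.Wait()
}
```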
For every update sent to gluon, wait and check the error code to see if
an error occurred.
Note: Updates can't be inspected at the call site as this can lead to
deadlocks.
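As a sketch of the pattern, assuming a hypothetical update type whose
result can be awaited (Gluon's real update API may differ):

```go
package sketch

import (
	"context"
	"fmt"
)

// waitableUpdate is a hypothetical stand-in for a Gluon update whose outcome
// can be awaited once Gluon has applied it; the real interface may differ.
type waitableUpdate interface {
	// Done is assumed to yield nil or the error from applying the update.
	Done() <-chan error
}

// publishAndWait sends an update into Gluon's update channel and then waits
// for the result. Doing the wait here, inside the goroutine that owns the
// update stream, rather than at the call site, avoids the deadlock mentioned
// in the note above.
func publishAndWait(ctx context.Context, updateCh chan<- waitableUpdate, update waitableUpdate) error {
	select {
	case updateCh <- update:
	case <-ctx.Done():
		return ctx.Err()
	}

	select {
	case err := <-update.Done():
		if err != nil {
			return fmt.Errorf("gluon failed to apply update: %w", err)
		}
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}
```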
When we send a message, the send recorder records the sent message.
When the client then appends an identical message to the sent folder,
the deduplication works and instead returns the message ID of the
existing proton message, rather than creating a new message. Gluon is
expected to notice that it already has this message ID and perform its
own deduplication internally.
However, it can happen that gluon doesn't yet have this message ID,
because we haven't yet received the "Message Created" event from the
API. To prevent this, we poll the events after send and wait for all
new events to be applied.
There's still a chance that the event wasn't generated yet on the API
side. Not sure what we can do about this.
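A hedged sketch of the idea, using a hypothetical eventPoller interface
rather than the bridge's real event service:

```go
package sketch

import (
	"context"
	"time"
)

// eventPoller is a hypothetical abstraction over the user's event loop; the
// real bridge triggers and waits on its event service differently.
type eventPoller interface {
	// Trigger asks the event loop to poll the API immediately.
	Trigger()
	// Applied is assumed to be closed once the triggered poll's events have
	// been applied.
	Applied() <-chan struct{}
}

// waitForSentMessageEvents polls for new events right after sending so that
// gluon learns about the sent message before the client appends a duplicate
// to the Sent folder. If the API hasn't generated the event yet this can
// still miss it, so a timeout keeps the send path from blocking forever.
func waitForSentMessageEvents(ctx context.Context, poller eventPoller, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	poller.Trigger()

	select {
	case <-poller.Applied():
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}
```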
After sending, a client might append to the sent folder over IMAP.
In this case, we perform deduplication and return the message ID of the
sent message. However, if we haven't yet processed this message in
gluon, the deduplication doesn't work as expected.
This change polls the event stream immediately after send. Note that it
doesn't wait for these events to be processed; that should be done in a
follow-up commit.
This change implements safe.Mutex and safe.RWMutex, which wrap the
sync.Mutex and sync.RWMutex types and are assigned a globally unique
integer ID. The safe.Lock and safe.RLock methods sort the mutexes
by this integer ID before locking to ensure that locks for a given
set of mutexes are always performed in the same order, avoiding
deadlocks.
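A minimal sketch of the idea, leaving out the RWMutex/RLock variants;
the real safe package may differ in detail:

```go
package safe

import (
	"sort"
	"sync"
	"sync/atomic"
)

// nextID hands out a globally unique ID to every Mutex created below.
var nextID uint64

// Mutex wraps sync.Mutex together with a unique ID that defines a global
// lock order.
type Mutex struct {
	m  sync.Mutex
	id uint64
}

// NewMutex returns a Mutex with a fresh globally unique ID.
func NewMutex() *Mutex {
	return &Mutex{id: atomic.AddUint64(&nextID, 1)}
}

// Lock acquires all given mutexes in ascending ID order, runs fn, then
// releases them in reverse order. Because every caller locks any shared set
// of mutexes in the same order, two goroutines can't deadlock each other.
func Lock(fn func(), mutexes ...*Mutex) {
	sorted := append([]*Mutex(nil), mutexes...)

	sort.Slice(sorted, func(i, j int) bool { return sorted[i].id < sorted[j].id })

	for _, m := range sorted {
		m.m.Lock()
	}

	defer func() {
		for i := len(sorted) - 1; i >= 0; i-- {
			sorted[i].m.Unlock()
		}
	}()

	fn()
}
```

With this scheme, safe.Lock(fn, b, a) and safe.Lock(fn, a, b) acquire
the two mutexes in the same order, so neither call can block the other
indefinitely.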