Do not abort removing files on the first error. Collect errors and try to
remove as many files as possible. Previously, aborting on the first error
could leave some state files behind on Windows.
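A rough sketch of the idea; the helper name `removeStateFiles` and the exact error handling are illustrative, not the actual Bridge code:

```go
package state

import (
	"errors"
	"fmt"
	"os"
)

// removeStateFiles keeps removing files after a failure and reports all
// collected errors at the end instead of aborting on the first one.
func removeStateFiles(paths []string) error {
	var errs []error

	for _, p := range paths {
		// On Windows a file may still be locked by another process;
		// record the error and continue with the remaining files.
		if err := os.Remove(p); err != nil && !os.IsNotExist(err) {
			errs = append(errs, fmt.Errorf("remove %q: %w", p, err))
		}
	}

	return errors.Join(errs...)
}
```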
This helps the export tool deal with problems that arise during message
assembly after everything has been successfully decrypted.
The original behavior is still available under `DecryptAndBuildRFC822`.
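As a hedged sketch of what such a split can look like, every type and function below is an illustrative stand-in rather than the real go-proton-api surface:

```go
package export

import "fmt"

// Illustrative stand-ins for the real message builder types.
type decryptedMessage struct{ body []byte }

func decrypt(raw []byte) (decryptedMessage, error)   { return decryptedMessage{body: raw}, nil }
func buildRFC822(m decryptedMessage) ([]byte, error) { return m.body, nil }

// exportMessage decrypts first and assembles second, so an assembly failure
// can be handled separately from a decryption failure.
func exportMessage(raw []byte) ([]byte, error) {
	dec, err := decrypt(raw)
	if err != nil {
		return nil, fmt.Errorf("decrypt: %w", err)
	}

	literal, err := buildRFC822(dec)
	if err != nil {
		return nil, fmt.Errorf("assemble after successful decryption: %w", err)
	}

	return literal, nil
}
```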
* Remove distinction between values with and without reply.
* Hide types that don't need to be public.
* Don't allow direct access to the request's internal types.
Fix: the path we were checking was not updated for V3.
Ensure that we only inspect items that start with the correct prefix.
Some implementations (e.g. KeepassXC) return values which are not
valid.
Finally, remove unnecessary attributes.
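A minimal sketch of this kind of filtering; the prefix constant and the validation callback are assumptions made for illustration, not the real keychain code:

```go
package secrets

import "strings"

// keyPrefix is an illustrative prefix, not the real one.
const keyPrefix = "protonmail/v3/"

// filterItems keeps only the items that start with the expected prefix and
// pass validation, skipping invalid values that some secret-service
// implementations (e.g. KeepassXC) can return.
func filterItems(items []string, isValid func(string) bool) []string {
	var out []string

	for _, it := range items {
		if !strings.HasPrefix(it, keyPrefix) {
			continue // not one of ours
		}

		if !isValid(it) {
			continue // e.g. an entry whose values are not valid
		}

		out = append(out, it)
	}

	return out
}
```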
When rebuilding attachments, ensure that more complicated mime types are
properly reconstructed.
If we fail to parse the mime type, use the raw value as is.
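For illustration, this is one way such a fallback can be written with the standard library's `mime` package; the helper name is hypothetical:

```go
package main

import (
	"fmt"
	"mime"
)

// contentType parses the mime type so that more complicated values
// (parameters, odd casing) are rebuilt consistently, and falls back to the
// raw value if parsing fails.
func contentType(raw string) string {
	mediaType, params, err := mime.ParseMediaType(raw)
	if err != nil {
		// If we fail to parse the mime type, use the value as is.
		return raw
	}

	return mime.FormatMediaType(mediaType, params)
}

func main() {
	fmt.Println(contentType(`multipart/mixed; boundary="abc"`))
	fmt.Println(contentType("not a valid type")) // falls back to the raw value
}
```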
When attaching the public key, we take the root mime part, create a new root,
and put the old root alongside an additional public key mime part.
But when moving the root, we would copy all content headers, even empty ones,
leaving us with `Content-Disposition: ""`, which would fail to parse.
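A minimal sketch of the kind of fix described, assuming the headers live in a `textproto.MIMEHeader`; the helper name is hypothetical and the real code may differ:

```go
package message

import (
	"net/textproto"
	"strings"
)

// copyContentHeaders copies only non-empty Content-* headers, so an empty
// Content-Disposition is not carried over to the relocated root part.
func copyContentHeaders(dst, src textproto.MIMEHeader) {
	for key, values := range src {
		if !strings.HasPrefix(key, "Content-") {
			continue
		}

		for _, value := range values {
			if value == "" {
				continue // skip empty headers such as Content-Disposition: ""
			}

			dst.Add(key, value)
		}
	}
}
```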
Update go-proton-api and Gluon to include memory reduction changes, and
modify the sync process to take into account how much memory is used
during the sync stage.
The sync process now has an extra stage which first downloads the message
metadata to ensure that we only download up to `syncMaxDownloadRequesMem`
worth of messages, or 250 messages total. This allows the download
request to scale automatically to accommodate many small or a few very large
messages.
The IDs are then sent to a download goroutine which downloads the
message and its attachments. The result is then forwarded to another
goroutine which builds the actual message. This stage tries to ensure
that we don't use more than `syncMaxMessageBuildingMem` to build these
messages.
Finally, the result is sent to a last goroutine which applies the
changes to Gluon and waits for them to be completed.
The new process is currently limited to 2GB. Dynamic scaling will be
implemented in a follow-up. For systems with less than 2GB of memory, we
limit the values to a set that is known to work.
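The sketch below illustrates the shape of such a staged pipeline under stated assumptions: the batching limits, type names, and stage wiring are made up for illustration and do not mirror the actual Bridge implementation.

```go
package syncer

const (
	maxDownloadMem = 128 << 20 // illustrative stand-in for syncMaxDownloadRequesMem
	maxBatchCount  = 250       // at most 250 messages per download request
)

type metadata struct {
	id   string
	size uint64
}

type rawMessage struct{ id string }
type builtMessage struct{ id string }

// batchByMemory groups message metadata so that each download request stays
// under both the memory budget and the per-request message cap.
func batchByMemory(meta []metadata) [][]metadata {
	var (
		batches [][]metadata
		current []metadata
		mem     uint64
	)

	for _, m := range meta {
		if len(current) > 0 && (mem+m.size > maxDownloadMem || len(current) >= maxBatchCount) {
			batches = append(batches, current)
			current, mem = nil, 0
		}

		current = append(current, m)
		mem += m.size
	}

	if len(current) > 0 {
		batches = append(batches, current)
	}

	return batches
}

// runSync wires the stages together with channels: one goroutine downloads
// each batch, another builds the messages, and the caller applies the
// results to Gluon and waits for them to complete.
func runSync(
	batches [][]metadata,
	download func([]metadata) []rawMessage,
	build func([]rawMessage) []builtMessage,
	apply func([]builtMessage) error,
) error {
	rawCh := make(chan []rawMessage)
	builtCh := make(chan []builtMessage)

	go func() {
		defer close(rawCh)
		for _, b := range batches {
			rawCh <- download(b) // downloads messages and attachments
		}
	}()

	go func() {
		defer close(builtCh)
		for r := range rawCh {
			builtCh <- build(r) // bounded by a building-memory budget in the real code
		}
	}()

	for b := range builtCh {
		if err := apply(b); err != nil {
			return err
		}
	}

	return nil
}
```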