Built-in and configured S3 storage
S3 storage provides durability, consistent performance and high availability. It also removes limits such as the maximum number of objects per bucket.
Recommendations
Internal implementation (Ceph, MinIO, etc.)
Take care to spread the data across several machines, and pay attention to the network and to the underlying disks (SSD/NVMe only, see the recommendations above).
⚠️ A 1 Gb/s network to the S3 storage is only suitable for small installations, as all e-mails are persisted there.
Implementation through a purchased service (Scality RING, etc.)
The main point of attention is the network between the mail servers and the S3 storage: latency is generally higher, and rate limits apply.
Use with webmail and EAS synchronization is only possible with BlueMind version 5.3.
Scenarios involving thick clients are even more demanding: when they reconnect, these clients download a whole batch of complete messages, so resetting a single Outlook client can be enough to saturate the provider's request quota.
Backup
S3 storage is said to be durable, which may suggest that a backup is unnecessary. In practice, a software bug or an ill-advised administration command are possibilities that must be taken into account. Restoring in a disaster recovery scenario would require specific code to be developed and is not natively handled by BlueMind at present. It is therefore recommended to enable S3 versioning, so that deletions only create delete markers that expire much later and objects are not lost immediately.
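The exact way to enable versioning depends on your S3 implementation. As an illustration only, here is a minimal sketch using the AWS CLI (not shipped with BlueMind) against an S3-compatible endpoint; the endpoint, bucket name and 180-day retention below are placeholders to adapt:
# Assumes the AWS CLI is installed and a (possibly arbitrary) default region is configured
aws --endpoint-url http://s3.bluemind.lab:8333 s3api put-bucket-versioning \
    --bucket S3-BUCKET-NAME --versioning-configuration Status=Enabled
# Expire delete markers and purge non-current versions after 180 days,
# if the S3 implementation supports lifecycle rules
cat << EOF > lifecycle.json
{
  "Rules": [
    {
      "ID": "expire-old-versions",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "ExpiredObjectDeleteMarker": true },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 180 }
    }
  ]
}
EOF
aws --endpoint-url http://s3.bluemind.lab:8333 s3api put-bucket-lifecycle-configuration \
    --bucket S3-BUCKET-NAME --lifecycle-configuration file://lifecycle.json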
Configuring S3 storage for a new BlueMind installation
For a new installation of BlueMind, you need to install BlueMind on a blank server before configuring and activating S3 storage:
1. Check Installation Prerequisites
2. Proceed to Installing BlueMind
⚠️ Do not use the setup wizard at this stage.
3. Install the subscription manually.
4. Install the new packages:
apt update && apt install bm-plugin-cli-setup bm-setup-wizard
5. Launch the setup wizard and fill in the necessary information:
bm-cli setup install \
--domain "nom-du-domaine.tld" \
--external-url "url-externe.tld" \
--set-contact "email-de-contact@domaine.tld" \
--admin0-pass 'CHANGEME' \
--sw-pass 'CHANGEME2' \
--domain-admin-pass 'CHANGEME3'
6. Optional: activate Filehosting if desired:
bm-cli filehosting activate --domain bluemind.lab --group user
7. Configure BlueMind's connection to the S3 storage server:
⚠️ The example below must be edited and adapted to your environment. The # comments are explanatory only and must be removed before applying the file, as JSON does not allow comments.
cat << EOF > bm-s3-config.json
{
"archive_kind": "s3",
"sds_s3_endpoint": "http://s3.bluemind.lab:8333", # ATTENTION, doit contenir le protocole sinon erreur : "The URI scheme of endpointOverride must not be null"
"sds_s3_bucket": "NOM DU BUCKET S3",
"sds_s3_region": "", # Peut être vide
"sds_s3_access_key": "ACCESS KEY S3",
"sds_s3_secret_key": "SECRET KEY S3",
"sds_filehosting_storetype": "s3", # Optionnel, Configuration S3 pour le file hosting.
"sds_filehosting_endpoint": "http://s3.bluemind.lab:8333", # Optionnel, Configuration S3 pour le file hosting.
"sds_filehosting_s3_bucket": "NOM DU BUCKET S3", # Optionnel, Configuration S3 pour le file hosting.
"sds_filehosting_s3_region": "", # Optionnel, Configuration S3 pour le file hosting.
"sds_filehosting_s3_access_key": "ACCESS KEY S3", # Optionnel, Configuration S3 pour le file hosting.
"sds_filehosting_s3_secret_key": "SECRET KEY S3" # Optionnel, Configuration S3 pour le file hosting.
}
EOF
8. Activate S3 configuration:
bm-cli sysconf mset --format=json bm-s3-config.json
💡 No need to restart the service, the configuration is applied immediately.
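Optionally, the endpoint, credentials and bucket name can be checked from the BlueMind server before or after applying the configuration. A minimal sketch, assuming the AWS CLI is available (it is not shipped with BlueMind) and reusing the placeholder values from the example above:
# List the bucket through the S3-compatible endpoint to validate connectivity, credentials and bucket name
AWS_ACCESS_KEY_ID='S3 ACCESS KEY' AWS_SECRET_ACCESS_KEY='S3 SECRET KEY' AWS_DEFAULT_REGION=us-east-1 \
    aws --endpoint-url http://s3.bluemind.lab:8333 s3 ls s3://S3-BUCKET-NAME
# The region value is usually ignored by S3-compatible servers but must be set for the AWS CLI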
Migrating the e-mail spool to the new storage
- Follow steps 1 to 7 of the procedure in the previous chapter.
⚠️ DO NOT APPLY STEP 8 (Activate S3 configuration)
- Copy e-mails from the old storage to the new one using the bm-cli sds migrate command:
bm-cli sds migrate config2.json --workers 8 --format=json --no-filehosting
# ^ adjust the number of workers as needed
💡 This can be a time-consuming process, but it can be done on the fly. The command can also be interrupted and restarted.
- Now apply step 8 (Activate S3 configuration).
- Run the e-mail copy command again to migrate any delta between the first migration and the activation.
Migrating Filehosting with Rclone
The bm-cli sds migrate command can migrate Filehosting only from local storage (file) to an SDS. The Rclone tool allows you to migrate it from local storage, but also from one SDS to another.
In the commands below:
- src: the current SDS server (optional in the case of local storage)
- dst: the destination SDS server
- replace src:BUCKET_SOURCE with the absolute Filehosting path (/var/spool/bm-filehosting) in the case of a migration from local storage
- Install Rclone:
# Ubuntu/Debian
apt install rclone
# RedHat
yum install rclone
- Configure the Rclone remotes with the rclone config command (a sample remote definition is sketched after this procedure).
- Prepare the JSON configuration file (s3-destination.json) for the new SDS storage server (see Configuration for a new installation above, Step 7).
- Copy data from the old S3 storage to the new one:
rclone sync --progress src:BUCKET_SOURCE dst:BUCKET_DESTINATION
💡 This step can be interrupted and restarted.
- Apply the configuration to the new S3 storage:
rclone sync --progress src:BUCKET_SOURCE dst:BUCKET_DESTINATION && bm-cli sysconf mset --format=json s3-destination.json
💡 No service restart is required.
- Relaunch a synchronization of data from the old S3 server to the new one, for e-mails that arrived between the last synchronization and the switchover:
rclone copy --progress src:BUCKET_SOURCE dst:BUCKET_DESTINATION
⚠️ Warning: at this stage you MUST use the copy command. The sync command can delete e-mails in the destination.
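As mentioned in step 2, the Rclone remotes can also be declared directly in the Rclone configuration file (~/.config/rclone/rclone.conf for the user running the migration) instead of going through the interactive rclone config dialog. A minimal sketch with two S3-compatible remotes, where the endpoints and keys are placeholders to adapt:
# Example remote definitions for the source and destination S3 storage (placeholder values)
[src]
type = s3
provider = Other
endpoint = http://old-s3.example.lab:8333
access_key_id = OLD_ACCESS_KEY
secret_access_key = OLD_SECRET_KEY

[dst]
type = s3
provider = Other
endpoint = http://s3.bluemind.lab:8333
access_key_id = NEW_ACCESS_KEY
secret_access_key = NEW_SECRET_KEY

As an optional final check, not part of the procedure above, Rclone can compare both remotes before the old storage is decommissioned:
# Compare object counts and total size on both remotes
rclone size src:BUCKET_SOURCE
rclone size dst:BUCKET_DESTINATION
# Or check that every source object exists in the destination (extra destination objects are ignored)
rclone check --one-way src:BUCKET_SOURCE dst:BUCKET_DESTINATION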