{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"\ud83d\udc2e + \ud83d\udc0b = \ud83d\udc95 \u00b6 Help mailcow \u00b6 Please consider a support contract for a small monthly fee at Servercow EN / Servercow DE to support further development. We support you while you support us . :) If you are super awesome and would like to support without a contract, you can get a SAL license that confirms your awesomeness (a flexible one-time payment) at Servercow EN / Servercow DE . Get support \u00b6 There are two ways to achieve support for your mailcow installation. Commercial support \u00b6 For professional and prioritized commercial support you can sign a basic support subscription at Servercow EN / Servercow DE . For custom inquiries or questions please contact us at info@servercow.de instead. Furthermore we do also provide a fully featured and managed mailcow here . This way we take care about the technical magic underneath and you can enjoy your whole mail experience in a hassle-free way. Community support and chat \u00b6 The other alternative is our free community-support on our various channels below. Please notice, that this support is driven by our awesome community around mailcow. This kind of support is best-effort, voluntary and there is no guarantee for anything. Our mailcow community @ community.mailcow.email Telegram @ t.me/mailcow . Telegram @ t.me/mailcowOfftopic . Telegram desktop clients are available for multiple platforms . You can search the groups history for keywords. For bug tracking, feature requests and code contributions only: GitHub @ mailcow/mailcow-dockerized Demo \u00b6 You can find a demo at demo.mailcow.email , use the following credentials to login: Administrator : admin / moohoo Domain administrator : department / moohoo Mailbox : demo@440044.xyz / moohoo Overview \u00b6 The integrated mailcow UI allows administrative work on your mail server instance as well as separated domain administrator and mailbox user access: DKIM and ARC support Black- and whitelists per domain and per user Spam score management per-user (reject spam, mark spam, greylist) Allow mailbox users to create temporary spam aliases Prepend mail tags to subject or move mail to sub folder (per-user) Allow mailbox users to toggle incoming and outgoing TLS enforcement Allow users to reset SOGo ActiveSync device caches imapsync to migrate or pull remote mailboxes regularly TFA: Yubikey OTP and U2F USB (Google Chrome and derivatives only), TOTP Add domains, mailboxes, aliases, domain aliases and SOGo resources Add whitelisted hosts to forward mail to mailcow Fail2ban-like integration Quarantine system Antivirus scanning incl. macro scanning in office documents Integrated basic monitoring A lot more... mailcow: dockerized comes with multiple containers linked in one bridged network. Each container represents a single application. ACME ClamAV (optional) Dovecot MariaDB Memcached Netfilter (Fail2ban-like integration by @mkuron ) Nginx Oletools via Olefy PHP Postfix Redis Rspamd SOGo Solr (optional) Unbound A Watchdog to provide basic monitoring Docker volumes to keep dynamic data - take care of them! 
crypt-vol-1 mysql-socket-vol-1 mysql-vol-1 postfix-vol-1 redis-vol-1 rspamd-vol-1 sogo-userdata-backup-vol-1 sogo-web-vol-1 solr-vol-1 vmail-index-vol-1 vmail-vol-1","title":"Information & Support"},{"location":"#_1","text":"","title":"\ud83d\udc2e + \ud83d\udc0b = \ud83d\udc95"},{"location":"#help-mailcow","text":"Please consider a support contract for a small monthly fee at Servercow EN / Servercow DE to support further development. We support you while you support us . :) If you are super awesome and would like to support without a contract, you can get a SAL license that confirms your awesomeness (a flexible one-time payment) at Servercow EN / Servercow DE .","title":"Help mailcow"},{"location":"#get-support","text":"There are two ways to achieve support for your mailcow installation.","title":"Get support"},{"location":"#commercial-support","text":"For professional and prioritized commercial support you can sign a basic support subscription at Servercow EN / Servercow DE . For custom inquiries or questions please contact us at info@servercow.de instead. Furthermore we do also provide a fully featured and managed mailcow here . This way we take care about the technical magic underneath and you can enjoy your whole mail experience in a hassle-free way.","title":"Commercial support"},{"location":"#community-support-and-chat","text":"The other alternative is our free community-support on our various channels below. Please notice, that this support is driven by our awesome community around mailcow. This kind of support is best-effort, voluntary and there is no guarantee for anything. Our mailcow community @ community.mailcow.email Telegram @ t.me/mailcow . Telegram @ t.me/mailcowOfftopic . Telegram desktop clients are available for multiple platforms . You can search the groups history for keywords. For bug tracking, feature requests and code contributions only: GitHub @ mailcow/mailcow-dockerized","title":"Community support and chat"},{"location":"#demo","text":"You can find a demo at demo.mailcow.email , use the following credentials to login: Administrator : admin / moohoo Domain administrator : department / moohoo Mailbox : demo@440044.xyz / moohoo","title":"Demo"},{"location":"#overview","text":"The integrated mailcow UI allows administrative work on your mail server instance as well as separated domain administrator and mailbox user access: DKIM and ARC support Black- and whitelists per domain and per user Spam score management per-user (reject spam, mark spam, greylist) Allow mailbox users to create temporary spam aliases Prepend mail tags to subject or move mail to sub folder (per-user) Allow mailbox users to toggle incoming and outgoing TLS enforcement Allow users to reset SOGo ActiveSync device caches imapsync to migrate or pull remote mailboxes regularly TFA: Yubikey OTP and U2F USB (Google Chrome and derivatives only), TOTP Add domains, mailboxes, aliases, domain aliases and SOGo resources Add whitelisted hosts to forward mail to mailcow Fail2ban-like integration Quarantine system Antivirus scanning incl. macro scanning in office documents Integrated basic monitoring A lot more... mailcow: dockerized comes with multiple containers linked in one bridged network. Each container represents a single application. ACME ClamAV (optional) Dovecot MariaDB Memcached Netfilter (Fail2ban-like integration by @mkuron ) Nginx Oletools via Olefy PHP Postfix Redis Rspamd SOGo Solr (optional) Unbound A Watchdog to provide basic monitoring Docker volumes to keep dynamic data - take care of them! 
crypt-vol-1 mysql-socket-vol-1 mysql-vol-1 postfix-vol-1 redis-vol-1 rspamd-vol-1 sogo-userdata-backup-vol-1 sogo-web-vol-1 solr-vol-1 vmail-index-vol-1 vmail-vol-1","title":"Overview"},{"location":"b_n_r-accidental_deletion/","text":"So you deleted a mailbox and have no backups, he? If you noticed your mistake within a few hours, you can probably recover the users data. SOGo \u00b6 We automatically create daily backups (24h interval starting from running up -d) in /var/lib/docker/volumes/mailcowdockerized_sogo-userdata-backup-vol-1/_data/ . Make sure the user you want to restore exists in your mailcow . Re-create them if they are missing. Copy the file named after the user you want to restore to __MAILCOW_DIRECTORY__/data/conf/sogo . 1. Copy the backup: cp /var/lib/docker/volumes/mailcowdockerized_sogo-userdata-backup-vol-1/_data/restoreme@example.org __MAILCOW_DIRECTORY__/data/conf/sogo 2. Run docker-compose exec -u sogo sogo-mailcow sogo-tool restore -F ALL /etc/sogo restoreme@example.org Run sogo-tool without parameters to check for possible restore options. 3. Delete the copied backup by running rm __MAILCOW_DIRECTORY__/data/conf/sogo 4. Restart SOGo and Memcached: docker-compose restart sogo-mailcow memcached-mailcow Mail \u00b6 In case of an accidental deletion of a mailbox, you will be able to recover for (by default) 5 days. This depends on the MAILDIR_GC_TIME parameter in mailcow.conf . A deleted mailbox is copied in its encrypted form to /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data/_garbage . The folder inside _garbage follows the structure [timestamp]_[domain_sanitized][user_sanitized] , for example 1629109708_exampleorgtest in case of test@example.org deleted on 1629109708. To restore make sure you are actually restoring to the same mailcow it was deleted from or you use the same encryption keys in crypt-vol-1 . Make sure the user you want to restore exists in your mailcow . Re-create them if they are missing. Copy the folders from /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data/_garbage/[timestamp]_[domain_sanitized][user_sanitized] back to /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data/[domain]/[user] and resync the folder and recalc the quota: docker-compose exec dovecot-mailcow doveadm force-resync -u restoreme@example.net '*' docker-compose exec dovecot-mailcow doveadm quota recalc -u restoreme@example.net","title":"Recover accidentally deleted data"},{"location":"b_n_r-accidental_deletion/#sogo","text":"We automatically create daily backups (24h interval starting from running up -d) in /var/lib/docker/volumes/mailcowdockerized_sogo-userdata-backup-vol-1/_data/ . Make sure the user you want to restore exists in your mailcow . Re-create them if they are missing. Copy the file named after the user you want to restore to __MAILCOW_DIRECTORY__/data/conf/sogo . 1. Copy the backup: cp /var/lib/docker/volumes/mailcowdockerized_sogo-userdata-backup-vol-1/_data/restoreme@example.org __MAILCOW_DIRECTORY__/data/conf/sogo 2. Run docker-compose exec -u sogo sogo-mailcow sogo-tool restore -F ALL /etc/sogo restoreme@example.org Run sogo-tool without parameters to check for possible restore options. 3. Delete the copied backup by running rm __MAILCOW_DIRECTORY__/data/conf/sogo 4. Restart SOGo and Memcached: docker-compose restart sogo-mailcow memcached-mailcow","title":"SOGo"},{"location":"b_n_r-accidental_deletion/#mail","text":"In case of an accidental deletion of a mailbox, you will be able to recover for (by default) 5 days. 
This depends on the MAILDIR_GC_TIME parameter in mailcow.conf . A deleted mailbox is copied in its encrypted form to /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data/_garbage . The folder inside _garbage follows the structure [timestamp]_[domain_sanitized][user_sanitized] , for example 1629109708_exampleorgtest in case of test@example.org deleted on 1629109708. To restore make sure you are actually restoring to the same mailcow it was deleted from or you use the same encryption keys in crypt-vol-1 . Make sure the user you want to restore exists in your mailcow . Re-create them if they are missing. Copy the folders from /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data/_garbage/[timestamp]_[domain_sanitized][user_sanitized] back to /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data/[domain]/[user] and resync the folder and recalc the quota: docker-compose exec dovecot-mailcow doveadm force-resync -u restoreme@example.net '*' docker-compose exec dovecot-mailcow doveadm quota recalc -u restoreme@example.net","title":"Mail"},{"location":"b_n_r-backup/","text":"Backup \u00b6 Manual \u00b6 You can use the provided script helper-scripts/backup_and_restore.sh to backup mailcow automatically. Please do not copy this script to another location. To run a backup, write \"backup\" as first parameter and either one or more components to backup as following parameters. You can also use \"all\" as second parameter to backup all components. Append --delete-days n to delete backups older than n days. # Syntax: # ./helper-scripts/backup_and_restore.sh backup (vmail|crypt|redis|rspamd|postfix|mysql|all|--delete-days) # Backup all, delete backups older than 3 days ./helper-scripts/backup_and_restore.sh backup all --delete-days 3 # Backup vmail, crypt and mysql data, delete backups older than 30 days ./helper-scripts/backup_and_restore.sh backup vmail crypt mysql --delete-days 30 # Backup vmail ./helper-scripts/backup_and_restore.sh backup vmail The script will ask you for a backup location. Inside of this location it will create folders in the format \"mailcow_DATE\". You should not rename those folders to not break the restore process. To run a backup unattended, define MAILCOW_BACKUP_LOCATION as environment variable before starting the script: MAILCOW_BACKUP_LOCATION=/opt/backup /opt/mailcow-dockerized/helper-scripts/backup_and_restore.sh backup all Cronjob \u00b6 You can run the backup script regularly via cronjob. Make sure BACKUP_LOCATION exists: 5 4 * * * cd /opt/mailcow-dockerized/; MAILCOW_BACKUP_LOCATION=/mnt/mailcow_backups /opt/mailcow-dockerized/helper-scripts/backup_and_restore.sh backup mysql crypt redis --delete-days 3 Per default cron sends the full result of each backup operation by email. If you want cron to only mail on error (non-zero exit code) you may want to use the following snippet. Pathes need to be modified according to your setup (this script is a user contribution). This following script may be placed in /etc/cron.daily/mailcow-backup - do not forget to mark it as executable via chmod +x : #!/bin/sh # Backup mailcow data # https://mailcow.github.io/mailcow-dockerized-docs/b_n_r_backup/ set -e OUT=\"$(mktemp)\" export MAILCOW_BACKUP_LOCATION=\"/opt/backup\" SCRIPT=\"/opt/mailcow-dockerized/helper-scripts/backup_and_restore.sh\" PARAMETERS=\"backup all\" OPTIONS=\"--delete-days 30\" # run command set +e \"${SCRIPT}\" ${PARAMETERS} ${OPTIONS} 2>&1 > \"$OUT\" RESULT=$? 
if [ $RESULT -ne 0 ] then echo \"${SCRIPT} ${PARAMETERS} ${OPTIONS} encounters an error:\" echo \"RESULT=$RESULT\" echo \"STDOUT / STDERR:\" cat \"$OUT\" fi Backup strategy with rsync and mailcow backup script \u00b6 Create the destination directory for mailcows helper script: mkdir -p /external_share/backups/backup_script Create cronjobs: 25 1 * * * rsync -aH --delete /opt/mailcow-dockerized /external_share/backups/mailcow-dockerized 40 2 * * * rsync -aH --delete /var/lib/docker/volumes /external_share/backups/var_lib_docker_volumes 5 4 * * * cd /opt/mailcow-dockerized/; BACKUP_LOCATION=/external_share/backups/backup_script /opt/mailcow-dockerized/helper-scripts/backup_and_restore.sh backup mysql crypt redis --delete-days 3 # If you want to, use the acl util to backup permissions of some/all folders/files: getfacl -Rn /path On the destination (in this case /external_share/backups ) you may want to have snapshot capabilities (ZFS, Btrfs etc.). Snapshot daily and keep for n days for a consistent backup. Do not rsync to a Samba share, you need to keep the correct permissions! To restore you'd simply need to run rsync the other way round and restart Docker to re-read the volumes. Run docker-compose pull and docker-compose up -d . If you are lucky Redis and MariaDB can automatically fix the inconsistent databases (if they are inconsistent). In case of a corrupted database you'd need to use the helper script to restore the inconsistent elements. If a restore fails, try to extract the backups and copy the files back manually. Keep the file permissions!","title":"Backup"},{"location":"b_n_r-backup/#backup","text":"","title":"Backup"},{"location":"b_n_r-backup/#manual","text":"You can use the provided script helper-scripts/backup_and_restore.sh to backup mailcow automatically. Please do not copy this script to another location. To run a backup, write \"backup\" as first parameter and either one or more components to backup as following parameters. You can also use \"all\" as second parameter to backup all components. Append --delete-days n to delete backups older than n days. # Syntax: # ./helper-scripts/backup_and_restore.sh backup (vmail|crypt|redis|rspamd|postfix|mysql|all|--delete-days) # Backup all, delete backups older than 3 days ./helper-scripts/backup_and_restore.sh backup all --delete-days 3 # Backup vmail, crypt and mysql data, delete backups older than 30 days ./helper-scripts/backup_and_restore.sh backup vmail crypt mysql --delete-days 30 # Backup vmail ./helper-scripts/backup_and_restore.sh backup vmail The script will ask you for a backup location. Inside of this location it will create folders in the format \"mailcow_DATE\". You should not rename those folders to not break the restore process. To run a backup unattended, define MAILCOW_BACKUP_LOCATION as environment variable before starting the script: MAILCOW_BACKUP_LOCATION=/opt/backup /opt/mailcow-dockerized/helper-scripts/backup_and_restore.sh backup all","title":"Manual"},{"location":"b_n_r-backup/#cronjob","text":"You can run the backup script regularly via cronjob. Make sure BACKUP_LOCATION exists: 5 4 * * * cd /opt/mailcow-dockerized/; MAILCOW_BACKUP_LOCATION=/mnt/mailcow_backups /opt/mailcow-dockerized/helper-scripts/backup_and_restore.sh backup mysql crypt redis --delete-days 3 Per default cron sends the full result of each backup operation by email. If you want cron to only mail on error (non-zero exit code) you may want to use the following snippet. 
Pathes need to be modified according to your setup (this script is a user contribution). This following script may be placed in /etc/cron.daily/mailcow-backup - do not forget to mark it as executable via chmod +x : #!/bin/sh # Backup mailcow data # https://mailcow.github.io/mailcow-dockerized-docs/b_n_r_backup/ set -e OUT=\"$(mktemp)\" export MAILCOW_BACKUP_LOCATION=\"/opt/backup\" SCRIPT=\"/opt/mailcow-dockerized/helper-scripts/backup_and_restore.sh\" PARAMETERS=\"backup all\" OPTIONS=\"--delete-days 30\" # run command set +e \"${SCRIPT}\" ${PARAMETERS} ${OPTIONS} 2>&1 > \"$OUT\" RESULT=$? if [ $RESULT -ne 0 ] then echo \"${SCRIPT} ${PARAMETERS} ${OPTIONS} encounters an error:\" echo \"RESULT=$RESULT\" echo \"STDOUT / STDERR:\" cat \"$OUT\" fi","title":"Cronjob"},{"location":"b_n_r-backup/#backup-strategy-with-rsync-and-mailcow-backup-script","text":"Create the destination directory for mailcows helper script: mkdir -p /external_share/backups/backup_script Create cronjobs: 25 1 * * * rsync -aH --delete /opt/mailcow-dockerized /external_share/backups/mailcow-dockerized 40 2 * * * rsync -aH --delete /var/lib/docker/volumes /external_share/backups/var_lib_docker_volumes 5 4 * * * cd /opt/mailcow-dockerized/; BACKUP_LOCATION=/external_share/backups/backup_script /opt/mailcow-dockerized/helper-scripts/backup_and_restore.sh backup mysql crypt redis --delete-days 3 # If you want to, use the acl util to backup permissions of some/all folders/files: getfacl -Rn /path On the destination (in this case /external_share/backups ) you may want to have snapshot capabilities (ZFS, Btrfs etc.). Snapshot daily and keep for n days for a consistent backup. Do not rsync to a Samba share, you need to keep the correct permissions! To restore you'd simply need to run rsync the other way round and restart Docker to re-read the volumes. Run docker-compose pull and docker-compose up -d . If you are lucky Redis and MariaDB can automatically fix the inconsistent databases (if they are inconsistent). In case of a corrupted database you'd need to use the helper script to restore the inconsistent elements. If a restore fails, try to extract the backups and copy the files back manually. Keep the file permissions!","title":"Backup strategy with rsync and mailcow backup script"},{"location":"b_n_r-coldstandby/","text":"Cold-standby backup \u00b6 mailcow offers an easy way to create a consistent copy of itself to be rsync'ed to a remote location without downtime. This may also be used to transfer your mailcow to a new server. You should know \u00b6 The provided script will work on default installations. It may break when you use unsupported volume overrides. We don't support that and we will not include hacks to support that. Please run and maintain a fork if you plan to keep your changes. The script will use the same pathes as your default mailcow installation. That is the mailcow base directory - for most users /opt/mailcow-dockerized - as well as the mountpoints. To find the pathes of your source volumes we use docker inspect and read the destination directory of every volume related to your mailcow compose project. This means we will also transfer volumes you may have added in a override file. Local bind mounts may or may not work. The use rsync with the --delete flag. The destination will be an exact copy of the source. mariabackup is used to create a consistent copy of the SQL data directory. After rsync'ing the data we will run docker-compose pull and remove old image tags from the destination. 
Your source will not be changed at any time. You may want to make sure to use the same /etc/docker/daemon.json on the remote target. You should not run disk snapshots (e.g. via ZFS, LVM etc.) on the target at the very same time as this script is run. Versioning is not part of this script; we rely on the destination (snapshots or backups). You may also want to use any other tool for that. Prepare \u00b6 You will need an SSH-enabled destination and a keyfile to connect to said destination. The key should not be protected by a password for the script to work unattended. In your mailcow base directory, e.g. /opt/mailcow-dockerized , you will find a file create_cold_standby.sh . Edit this file and change the exported variables: export REMOTE_SSH_KEY=/path/to/keyfile export REMOTE_SSH_PORT=22 export REMOTE_SSH_HOST=mailcow-backup.host.name The key must be owned and readable by root only. Both the source and destination require rsync >= v3.1.0. The destination must have Docker and docker-compose v1 available. The script will detect errors automatically and exit. You may want to test the connection by running ssh mailcow-backup.host.name -p22 -i /path/to/keyfile . Backup and refresh the cold-standby \u00b6 Run the first backup; this may take a while depending on the connection: bash /opt/mailcow-dockerized/create_cold_standby.sh That was easy, wasn't it? Updating your cold-standby is just as easy: bash /opt/mailcow-dockerized/create_cold_standby.sh It's the same command. Automated backups with cron \u00b6 First make sure that the cron service is enabled and running: systemctl enable cron.service && systemctl start cron.service To automate the backups to the cold-standby server you can use a cron job. To edit the cron jobs for the root user run: crontab -e Add the following lines to synchronize the cold-standby server daily at 03:00. In this example, errors of the last execution are logged into a file. PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 0 3 * * * bash /opt/mailcow-dockerized/create_cold_standby.sh 2> /var/log/mailcow-coldstandby-sync.log If saved correctly, the cron job should be shown by typing: crontab -l","title":"Cold-standby (rolling backup)"},{"location":"b_n_r-coldstandby/#cold-standby-backup","text":"mailcow offers an easy way to create a consistent copy of itself to be rsync'ed to a remote location without downtime. This may also be used to transfer your mailcow to a new server.","title":"Cold-standby backup"},{"location":"b_n_r-coldstandby/#you-should-know","text":"The provided script will work on default installations. It may break when you use unsupported volume overrides. We don't support that and we will not include hacks to support that. Please run and maintain a fork if you plan to keep your changes. The script will use the same paths as your default mailcow installation. That is the mailcow base directory - for most users /opt/mailcow-dockerized - as well as the mountpoints. To find the paths of your source volumes we use docker inspect and read the destination directory of every volume related to your mailcow compose project. This means we will also transfer volumes you may have added in an override file. Local bind mounts may or may not work. We use rsync with the --delete flag. The destination will be an exact copy of the source. mariabackup is used to create a consistent copy of the SQL data directory. After rsync'ing the data we will run docker-compose pull and remove old image tags from the destination. Your source will not be changed at any time.
You may want to make sure to use the same /etc/docker/daemon.json on the remote target. You should not run disk snapshots (e.g. via ZFS, LVM etc.) on the target at the very same time as this script is run. Versioning is not part of this script, we rely on the destination (snapshots or backups). You may also want to use any other tool for that.","title":"You should know"},{"location":"b_n_r-coldstandby/#prepare","text":"You will need a SSH-enabled destination and a keyfile to connect to said destination. The key should not be protected by a password for the script to work unattended. In your mailcow base directory, e.g. /opt/mailcow-dockerized you will find a file create_cold_standby.sh . Edit this file and change the exported variables: export REMOTE_SSH_KEY=/path/to/keyfile export REMOTE_SSH_PORT=22 export REMOTE_SSH_HOST=mailcow-backup.host.name The key must be owned and readable by root only. Both the source and destination require rsync >= v3.1.0. The destination must have Docker and docker-compose v1 available. The script will detect errors automatically and exit. You may want to test the connection by running ssh mailcow-backup.host.name -p22 -i /path/to/keyfile .","title":"Prepare"},{"location":"b_n_r-coldstandby/#backup-and-refresh-the-cold-standby","text":"Run the first backup, this may take a while depending on the connection: bash /opt/mailcow-dockerized/create_cold_standby.sh That was easy, wasn't it? Updating your cold-standby is just as easy: bash /opt/mailcow-dockerized/create_cold_standby.sh It's the same command.","title":"Backup and refresh the cold-standby"},{"location":"b_n_r-coldstandby/#automated-backups-with-cron","text":"First make sure that the cron service is enabled and running: systemctl enable cron.service && systemctl start cron.service To automate the backups to the cold-standby server you can use a cron job. To edit the cron jobs for the root user run: crontab -e Add the following lines to synchronize the cold standby server daily at 03:00. In this example errors of the last execution are logged into a file. PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 0 3 * * * bash /opt/mailcow-dockerized/create_cold_standby.sh 2> /var/log/mailcow-coldstandby-sync.log If saved correctly, the cron job should be shown by typing: crontab -l","title":"Automated backups with cron"},{"location":"b_n_r-restore/","text":"Restore \u00b6 Please do not copy this script to another location. To run a restore, start mailcow , use the script with \"restore\" as first parameter. # Syntax: # ./helper-scripts/backup_and_restore.sh restore The script will ask you for a backup location containing the mailcow_DATE folders.","title":"Restore"},{"location":"b_n_r-restore/#restore","text":"Please do not copy this script to another location. To run a restore, start mailcow , use the script with \"restore\" as first parameter. # Syntax: # ./helper-scripts/backup_and_restore.sh restore The script will ask you for a backup location containing the mailcow_DATE folders.","title":"Restore"},{"location":"client/","text":"mailcow supports a variety of email clients, both on desktop computers and on smartphones. Below, you can find a number of configuration guides that explain how to configure your mailcow account. Tip If you access this page by logging into your mailcow server and clicking the \"Show configuration guides for email clients and smartphones\" link, all of the guides will be personalized with your email address and server name. 
Success Since you accessed this page after logging into your mailcow server, all of the guides have been personalized with your email address and server name. Android Apple iOS / macOS eM Client KDE Kontact / KMail Microsoft Outlook Mozilla Thunderbird Windows Mail Windows Phone Manual configuration","title":"Overview"},{"location":"debug-admin_login_sogo/","text":"This is an experimental feature that allows admins and domain admins to directly log into SOGo as a mailbox user, without knowing the users password. For this, an additional link to SOGo is displayed in the mailbox list. Multiple concurrent admin-logins to different mailboxes are also possible when using this feature. Enabling the feature \u00b6 The feature is disabled by default. It can be enabled in the mailcow.conf by setting: ALLOW_ADMIN_EMAIL_LOGIN=y and recreating the affected containers with docker-compose up -d Drawbacks when enabled \u00b6 Each SOGo page-load and each Active-Sync request will cause an additional execution of an internal PHP script. This might impact load-times of SOGo / EAS. In most cases, this should not be noticeable but should be kept in mind if you face any performance issues. SOGo will not display a logout link for admin-logins, to login normally one has to logout from the mailcow UI so the PHP session is destroyed. Technical details \u00b6 SOGoTrustProxyAuthentication option is set to YES which makes SOGo trust the x-webobjects-remote-user header. Dovecot will receive a random master-password which is valid for all mailboxes when used by the SOGo container. Clicking on the SOGo button in the mailbox list will open sogo-auth.php which checks permissions, sets session variables and redirects to the SOGo mailbox. Each SOGo, CardDAV, CalDAV and EAS http request will cause an additional, nginx internal auth_request call to sogo-auth.php with the following behavior: If a basic_auth header is present, the script will validate the credentials in place of SOGo and provide the following headers: x-webobjects-remote-user , Authorization and x-webobjects-auth-type . If no basic_auth header is present, the script will check for an active mailcow admin session for the requested email user and provide the same headers but with the dovecot master password used in the Authorization header. If both fails the headers will be set empty, which makes SOGo use its standard authentication methods. All of these options / behaviors are disabled if the ALLOW_ADMIN_EMAIL_LOGIN is not enabled in the config.","title":"Admin login to SOGo"},{"location":"debug-admin_login_sogo/#enabling-the-feature","text":"The feature is disabled by default. It can be enabled in the mailcow.conf by setting: ALLOW_ADMIN_EMAIL_LOGIN=y and recreating the affected containers with docker-compose up -d","title":"Enabling the feature"},{"location":"debug-admin_login_sogo/#drawbacks-when-enabled","text":"Each SOGo page-load and each Active-Sync request will cause an additional execution of an internal PHP script. This might impact load-times of SOGo / EAS. In most cases, this should not be noticeable but should be kept in mind if you face any performance issues. SOGo will not display a logout link for admin-logins, to login normally one has to logout from the mailcow UI so the PHP session is destroyed.","title":"Drawbacks when enabled"},{"location":"debug-admin_login_sogo/#technical-details","text":"SOGoTrustProxyAuthentication option is set to YES which makes SOGo trust the x-webobjects-remote-user header. 
Dovecot will receive a random master-password which is valid for all mailboxes when used by the SOGo container. Clicking on the SOGo button in the mailbox list will open sogo-auth.php which checks permissions, sets session variables and redirects to the SOGo mailbox. Each SOGo, CardDAV, CalDAV and EAS http request will cause an additional, nginx internal auth_request call to sogo-auth.php with the following behavior: If a basic_auth header is present, the script will validate the credentials in place of SOGo and provide the following headers: x-webobjects-remote-user , Authorization and x-webobjects-auth-type . If no basic_auth header is present, the script will check for an active mailcow admin session for the requested email user and provide the same headers but with the dovecot master password used in the Authorization header. If both fails the headers will be set empty, which makes SOGo use its standard authentication methods. All of these options / behaviors are disabled if the ALLOW_ADMIN_EMAIL_LOGIN is not enabled in the config.","title":"Technical details"},{"location":"debug-asan_rspamd/","text":"A quick guide to deeply analyze a malfunctioning Rspamd. docker-compose exec rspamd-mailcow bash if ! grep -qi 'apt-stable-asan' /etc/apt/sources.list.d/rspamd.list; then sed -i 's/apt-stable/apt-stable-asan/i' /etc/apt/sources.list.d/rspamd.list fi apt-get update ; apt-get upgrade rspamd nano /docker-entrypoint.sh # Before \"exec \"$@\"\" add the following lines: export G_SLICE=always-malloc export ASAN_OPTIONS=new_delete_type_mismatch=0:detect_leaks=1:detect_odr_violation=0:log_path=/tmp/rspamd-asan:quarantine_size_mb=2048:malloc_context_size=8:fast_unwind_on_malloc=0 Restart Rspamd: docker-compose restart rspamd-mailcow Your memory consumption will increase by a lot, it will also steadily grow, which is not related to a possible memory leak you are looking for. Leave the container running for a few minutes, hours or days (it should match the time you usually wait for the leak to \"happen\") and restart it: docker-compose restart rspamd-mailcow . Now enter the container by running docker-compose exec rspamd-mailcow bash , change the directory to /tmp and copy the asan Files to your desired location or upload them via termbin.com ( cat /tmp/rspamd-asan.* | nc termbin.com 9999 ).","title":"Advanced: Find memory leaks in Rspamd"},{"location":"debug-attach_service/","text":"Attaching a Container to your Shell \u00b6 To attach a container to your shell you can simply run docker-compose exec $Service_Name /bin/bash Connecting to Services \u00b6 If you want to connect to a service / application directly it is always a good idea to source mailcow.conf to get all relevant variables into your environment. MySQL \u00b6 source mailcow.conf docker-compose exec mysql-mailcow mysql -u${DBUSER} -p${DBPASS} ${DBNAME} Redis \u00b6 docker-compose exec redis-mailcow redis-cli Service Descriptions \u00b6 Here is a brief overview of what container / service does what: Service Name Service Descriptions unbound-mailcow Local (DNSSEC) DNS Resolver mysql-mailcow Stores SOGo's and most of mailcow's settings postfix-mailcow Receives and sends mails dovecot-mailcow User logins and sieve filter redis-mailcow Storage back-end for DKIM keys and Rspamd rspamd-mailcow Mail filtering system. 
Used for av handling, dkim signing, spam handling clamd-mailcow Scans attachments for viruses olefy-mailcow Scans attached office documents for macro-viruses solr-mailcow Provides full-text search in Dovecot sogo-mailcow Webmail client that handles Microsoft ActiveSync and Cal- / CardDav nginx-mailcow Nginx remote proxy that handles all mailcow related HTTP / HTTPS requests acme-mailcow Automates HTTPS (SSL/TLS) certificate deployment memcached-mailcow Internal caching system for mailcow services watchdog-mailcow Allows the monitoring of docker containers / services php-fpm-mailcow Powers the mailcow web UI netfilter-mailcow Fail2Ban like integration","title":"Attach a Container"},{"location":"debug-attach_service/#attaching-a-container-to-your-shell","text":"To attach a container to your shell you can simply run docker-compose exec $Service_Name /bin/bash","title":"Attaching a Container to your Shell"},{"location":"debug-attach_service/#connecting-to-services","text":"If you want to connect to a service / application directly it is always a good idea to source mailcow.conf to get all relevant variables into your environment.","title":"Connecting to Services"},{"location":"debug-attach_service/#mysql","text":"source mailcow.conf docker-compose exec mysql-mailcow mysql -u${DBUSER} -p${DBPASS} ${DBNAME}","title":"MySQL"},{"location":"debug-attach_service/#redis","text":"docker-compose exec redis-mailcow redis-cli","title":"Redis"},{"location":"debug-attach_service/#service-descriptions","text":"Here is a brief overview of what container / service does what: Service Name Service Descriptions unbound-mailcow Local (DNSSEC) DNS Resolver mysql-mailcow Stores SOGo's and most of mailcow's settings postfix-mailcow Receives and sends mails dovecot-mailcow User logins and sieve filter redis-mailcow Storage back-end for DKIM keys and Rspamd rspamd-mailcow Mail filtering system. Used for av handling, dkim signing, spam handling clamd-mailcow Scans attachments for viruses olefy-mailcow Scans attached office documents for macro-viruses solr-mailcow Provides full-text search in Dovecot sogo-mailcow Webmail client that handles Microsoft ActiveSync and Cal- / CardDav nginx-mailcow Nginx remote proxy that handles all mailcow related HTTP / HTTPS requests acme-mailcow Automates HTTPS (SSL/TLS) certificate deployment memcached-mailcow Internal caching system for mailcow services watchdog-mailcow Allows the monitoring of docker containers / services php-fpm-mailcow Powers the mailcow web UI netfilter-mailcow Fail2Ban like integration","title":"Service Descriptions"},{"location":"debug-common_problems/","text":"Here we list common problems and possible solutions: Mail loops back to myself \u00b6 Please check in your mailcow UI if you made the domain a backup MX : I can receive but not send mails \u00b6 There are a lot of things that could prevent you from sending mail: Check if your IP address is on any blacklists. You could use dnsbl.info or any other similar service to check for your IP address. There are some consumer ISP routers out there, that block mail ports for non whitelisted domains. Please check if you can reach your server on the ports 465 or 587 : # telnet 74.125.133.27 465 Trying 74.125.133.27... Connected to 74.125.133.27. Escape character is '^]'. My mails are identified as Spam \u00b6 Please read our guide on DNS configuration . docker-compose throws weird errors \u00b6 ... like: ERROR: Invalid interpolation format ... AttributeError: 'NoneType' object has no attribute 'keys' . 
ERROR: In file './docker-compose.yml' service 'version' doesn't have any configuration options . When you encounter one of these or similar messages while trying to run mailcow: dockerized, please check that you have the latest versions of Docker and docker-compose Container XY is unhealthy \u00b6 This error tries to tell you that one of the (health) conditions for a certain container is not met, and therefore it can't be started. This can have several reasons; the most common is an updated git clone combined with an old Docker image, or vice versa. A wrongly configured firewall could also cause such a failure. The containers need to be able to talk to each other over the network 172.22.1.1/24. It might also be a wrongly linked file (e.g. an SSL certificate) that prevents a crucial container (nginx) from starting, so always check your logs to get an idea where your problem is coming from. Address already in use \u00b6 If you get an error message like: ERROR: for postfix-mailcow Cannot start service postfix-mailcow: driver failed programming external connectivity on endpoint mailcowdockerized_postfix-mailcow_1: Error starting userland proxy: listen tcp 0.0.0.0:25: bind: address already in use while trying to start / install mailcow: dockerized, make sure you've followed our section on the prerequisites . XYZ can't connect to ... \u00b6 Please check your local firewall! Docker and iptables-based firewalls sometimes create conflicting rules, so disable the firewall on your host to determine whether your connection issues are caused by such conflicts. If they are, you need to manually create appropriate rules in your host firewall to permit the necessary connections. If you experience connection problems from home, please check your ISP router's firewall too: some of them block mail traffic on the SUBMISSION (587) or SMTPS (465) ports. It could also be that your ISP is blocking port 25 (SMTP). While Linux users can choose from a variety of tools 1 to check if a port is open, Windows users only have the PowerShell command Test-NetConnection -ComputerName host -Port port available by default. To enable telnet on Windows after Vista, please check this guide or enter the following command in a terminal with administrator privileges: dism /online /Enable-Feature /FeatureName:TelnetClient Inotify instance limit for user 5000 (UID vmail) exceeded ( see #453 ) \u00b6 Docker containers use the Docker host's inotify limits. Setting them on your Docker host will pass them to the container. Dovecot keeps restarting (see #2672 ) \u00b6 Check that you have at least the following files in data/assets/ssl : cert.pem dhparams.pem key.pem If dhparams.pem is missing, you can generate it with openssl dhparam -out data/assets/ssl/dhparams.pem 4096 netcat , nmap , openssl , telnet , etc. \u21a9","title":"Common Problems"},{"location":"debug-common_problems/#mail-loops-back-to-myself","text":"Please check in your mailcow UI if you made the domain a backup MX :","title":"Mail loops back to myself"},{"location":"debug-common_problems/#i-can-receive-but-not-send-mails","text":"There are a lot of things that could prevent you from sending mail: Check if your IP address is on any blacklists. You could use dnsbl.info or any other similar service to check your IP address. There are some consumer ISP routers out there that block mail ports for non-whitelisted domains. Please check if you can reach your server on the ports 465 or 587 : # telnet 74.125.133.27 465 Trying 74.125.133.27... Connected to 74.125.133.27.
Escape character is '^]'.","title":"I can receive but not send mails"},{"location":"debug-common_problems/#my-mails-are-identified-as-spam","text":"Please read our guide on DNS configuration .","title":"My mails are identified as Spam"},{"location":"debug-common_problems/#docker-compose-throws-weird-errors","text":"... like: ERROR: Invalid interpolation format ... AttributeError: 'NoneType' object has no attribute 'keys' . ERROR: In file './docker-compose.yml' service 'version' doesn't have any configuration options . When you encounter one of these or similar messages while trying to run mailcow: dockerized, please check that you have the latest versions of Docker and docker-compose","title":"docker-compose throws weird errors"},{"location":"debug-common_problems/#container-xy-is-unhealthy","text":"This error tries to tell you that one of the (health) conditions for a certain container is not met, and therefore it can't be started. This can have several reasons; the most common is an updated git clone combined with an old Docker image, or vice versa. A wrongly configured firewall could also cause such a failure. The containers need to be able to talk to each other over the network 172.22.1.1/24. It might also be a wrongly linked file (e.g. an SSL certificate) that prevents a crucial container (nginx) from starting, so always check your logs to get an idea where your problem is coming from.","title":"Container XY is unhealthy"},{"location":"debug-common_problems/#address-already-in-use","text":"If you get an error message like: ERROR: for postfix-mailcow Cannot start service postfix-mailcow: driver failed programming external connectivity on endpoint mailcowdockerized_postfix-mailcow_1: Error starting userland proxy: listen tcp 0.0.0.0:25: bind: address already in use while trying to start / install mailcow: dockerized, make sure you've followed our section on the prerequisites .","title":"Address already in use"},{"location":"debug-common_problems/#xyz-cant-connect-to","text":"Please check your local firewall! Docker and iptables-based firewalls sometimes create conflicting rules, so disable the firewall on your host to determine whether your connection issues are caused by such conflicts. If they are, you need to manually create appropriate rules in your host firewall to permit the necessary connections. If you experience connection problems from home, please check your ISP router's firewall too: some of them block mail traffic on the SUBMISSION (587) or SMTPS (465) ports. It could also be that your ISP is blocking port 25 (SMTP). While Linux users can choose from a variety of tools 1 to check if a port is open, Windows users only have the PowerShell command Test-NetConnection -ComputerName host -Port port available by default. To enable telnet on Windows after Vista, please check this guide or enter the following command in a terminal with administrator privileges: dism /online /Enable-Feature /FeatureName:TelnetClient","title":"XYZ can't connect to ..."},{"location":"debug-common_problems/#inotify-instance-limit-for-user-5000-uid-vmail-exceeded-see-453","text":"Docker containers use the Docker host's inotify limits.
Setting them on your Docker host will pass them to the container.","title":"Inotify instance limit for user 5000 (UID vmail) exceeded (see #453)"},{"location":"debug-common_problems/#dovecot-keeps-restarting-see-2672","text":"Check that you have at least the following files in data/assets/ssl : cert.pem dhparams.pem key.pem If dhparams.pem is missing, you can generate it with openssl dhparam -out data/assets/ssl/dhparams.pem 4096 netcat , nmap , openssl , telnet , etc. \u21a9","title":"Dovecot keeps restarting (see #2672)"},{"location":"debug-logs/","text":"Warning This section only applies for Dockers default logging driver (JSON). To view the logs of all mailcow: dockerized related containers, you can use docker-compose logs inside your mailcow-dockerized folder that contains your mailcow.conf . This is usually a bit much, but you could trim the output with --tail=100 to the last 100 lines per container, or add a -f to follow the live output of all your services. To view the logs of a specific service you can use docker-compose logs [options] $service_name Info The available options for the command docker-compose logs are: --no-color : Produce monochrome output. -f : Follow the log output. -t : Show timestamps. --tail=\"all\" : Number of lines to show from the end of the logs for each container.","title":"Logs"},{"location":"debug-mysql_aria/","text":"MariaDB: Aria recovery after crash \u00b6 If your server crashed and MariaDB logs an error similar to [ERROR] mysqld: Aria recovery failed. Please run aria_chk -r on all Aria tables (*.MAI) and delete all aria_log.######## files you may want to try the following to recover the database to a healthy state: Start the stack and wait until mysql-mailcow begins to report a restarting state. Check by running docker-compose ps . Now run the following commands: # Stop the stack, don't run \"down\" docker-compose stop # Run a bash in the stopped container as user mysql docker-compose run --rm --entrypoint '/bin/sh -c \"gosu mysql bash\"' mysql-mailcow # cd to the SQL data directory cd /var/lib/mysql # Run aria_chk aria_chk --check --force */*.MAI # Delete aria log files rm aria_log.* Now run docker-compose down followed by docker-compose up -d .","title":"Recover crashed Aria storage engine"},{"location":"debug-mysql_aria/#mariadb-aria-recovery-after-crash","text":"If your server crashed and MariaDB logs an error similar to [ERROR] mysqld: Aria recovery failed. Please run aria_chk -r on all Aria tables (*.MAI) and delete all aria_log.######## files you may want to try the following to recover the database to a healthy state: Start the stack and wait until mysql-mailcow begins to report a restarting state. Check by running docker-compose ps . Now run the following commands: # Stop the stack, don't run \"down\" docker-compose stop # Run a bash in the stopped container as user mysql docker-compose run --rm --entrypoint '/bin/sh -c \"gosu mysql bash\"' mysql-mailcow # cd to the SQL data directory cd /var/lib/mysql # Run aria_chk aria_chk --check --force */*.MAI # Delete aria log files rm aria_log.* Now run docker-compose down followed by docker-compose up -d .","title":"MariaDB: Aria recovery after crash"},{"location":"debug-mysql_upgrade/","text":"Run a manual mysql_upgrade \u00b6 This step is usually not necessary. 
docker-compose stop mysql-mailcow watchdog-mailcow docker-compose run --rm --entrypoint '/bin/sh -c \"gosu mysql mysqld --skip-grant-tables & sleep 10 && bash && exit 0\"' mysql-mailcow As soon as the SQL shell spawned, run mysql_upgrade and exit the container: mysql_upgrade exit","title":"Manual MySQL upgrade"},{"location":"debug-mysql_upgrade/#run-a-manual-mysql_upgrade","text":"This step is usually not necessary. docker-compose stop mysql-mailcow watchdog-mailcow docker-compose run --rm --entrypoint '/bin/sh -c \"gosu mysql mysqld --skip-grant-tables & sleep 10 && bash && exit 0\"' mysql-mailcow As soon as the SQL shell spawned, run mysql_upgrade and exit the container: mysql_upgrade exit","title":"Run a manual mysql_upgrade"},{"location":"debug-reset_pw/","text":"mailcow Admin Account \u00b6 Resets the mailcow admin account to a random password. Older mailcow: dockerized installations may find the mailcow-reset-admin.sh script in their mailcow root directory (mailcow_path). cd mailcow_path ./helper-scripts/mailcow-reset-admin.sh Reset MySQL Passwords \u00b6 Stop the stack by running docker-compose stop . When the containers came to a stop, run this command: docker-compose run --rm --entrypoint '/bin/sh -c \"gosu mysql mysqld --skip-grant-tables & sleep 10 && mysql -hlocalhost -uroot && exit 0\"' mysql-mailcow 1. Find database name \u00b6 # source mailcow.conf # docker-compose exec mysql-mailcow mysql -u${DBUSER} -p${DBPASS} ${DBNAME} MariaDB [(none)]> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | mailcow_database | <===== | mysql | | performance_schema | +--------------------+ 4 rows in set (0.00 sec) 2. Reset one or more users \u00b6 2.1 Maria DB < 10.4 (older mailcow installations) \u00b6 Both \"password\" and \"authentication_string\" exist. Currently \"password\" is used, but better set both. MariaDB [(none)]> SELECT user FROM mysql.user; +--------------+ | user | +--------------+ | mailcow | <===== | root | +--------------+ 2 rows in set (0.00 sec) MariaDB [(none)]> FLUSH PRIVILEGES; MariaDB [(none)]> UPDATE mysql.user SET authentication_string = PASSWORD('gotr00t'), password = PASSWORD('gotr00t') WHERE User = 'root'; MariaDB [(none)]> UPDATE mysql.user SET authentication_string = PASSWORD('mookuh'), password = PASSWORD('mookuh') WHERE User = 'mailcow' AND Host = '%'; MariaDB [(none)]> FLUSH PRIVILEGES; 2.2 Maria DB >= 10.4 (current mailcows) \u00b6 MariaDB [(none)]> SELECT user FROM mysql.user; +--------------+ | user | +--------------+ | mailcow | <===== | root | +--------------+ 2 rows in set (0.00 sec) MariaDB [(none)]> FLUSH PRIVILEGES; MariaDB [(none)]> ALTER USER 'mailcow'@'%' IDENTIFIED BY 'mookuh'; MariaDB [(none)]> ALTER USER 'root'@'%' IDENTIFIED BY 'gotr00t'; MariaDB [(none)]> ALTER USER 'root'@'localhost' IDENTIFIED BY 'gotr00t'; MariaDB [(none)]> FLUSH PRIVILEGES; Remove Two-Factor Authentication \u00b6 For mailcow WebUI: \u00b6 This works similar to resetting a MySQL password, now we do it from the host without connecting to the MySQL CLI: source mailcow.conf docker-compose exec mysql-mailcow mysql -u${DBUSER} -p${DBPASS} ${DBNAME} -e \"DELETE FROM tfa WHERE username='YOUR_USERNAME';\" For SOGo: \u00b6 docker-compose exec -u sogo sogo-mailcow sogo-tool user-preferences set defaults user@example.com SOGoGoogleAuthenticatorEnabled '{\"SOGoGoogleAuthenticatorEnabled\":0}'","title":"Reset Passwords (incl. 
SQL)"},{"location":"debug-reset_pw/#mailcow-admin-account","text":"Resets the mailcow admin account to a random password. Older mailcow: dockerized installations may find the mailcow-reset-admin.sh script in their mailcow root directory (mailcow_path). cd mailcow_path ./helper-scripts/mailcow-reset-admin.sh","title":"mailcow Admin Account"},{"location":"debug-reset_pw/#reset-mysql-passwords","text":"Stop the stack by running docker-compose stop . When the containers came to a stop, run this command: docker-compose run --rm --entrypoint '/bin/sh -c \"gosu mysql mysqld --skip-grant-tables & sleep 10 && mysql -hlocalhost -uroot && exit 0\"' mysql-mailcow","title":"Reset MySQL Passwords"},{"location":"debug-reset_pw/#1-find-database-name","text":"# source mailcow.conf # docker-compose exec mysql-mailcow mysql -u${DBUSER} -p${DBPASS} ${DBNAME} MariaDB [(none)]> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | mailcow_database | <===== | mysql | | performance_schema | +--------------------+ 4 rows in set (0.00 sec)","title":"1. Find database name"},{"location":"debug-reset_pw/#2-reset-one-or-more-users","text":"","title":"2. Reset one or more users"},{"location":"debug-reset_pw/#21-maria-db-104-older-mailcow-installations","text":"Both \"password\" and \"authentication_string\" exist. Currently \"password\" is used, but better set both. MariaDB [(none)]> SELECT user FROM mysql.user; +--------------+ | user | +--------------+ | mailcow | <===== | root | +--------------+ 2 rows in set (0.00 sec) MariaDB [(none)]> FLUSH PRIVILEGES; MariaDB [(none)]> UPDATE mysql.user SET authentication_string = PASSWORD('gotr00t'), password = PASSWORD('gotr00t') WHERE User = 'root'; MariaDB [(none)]> UPDATE mysql.user SET authentication_string = PASSWORD('mookuh'), password = PASSWORD('mookuh') WHERE User = 'mailcow' AND Host = '%'; MariaDB [(none)]> FLUSH PRIVILEGES;","title":"2.1 Maria DB < 10.4 (older mailcow installations)"},{"location":"debug-reset_pw/#22-maria-db-104-current-mailcows","text":"MariaDB [(none)]> SELECT user FROM mysql.user; +--------------+ | user | +--------------+ | mailcow | <===== | root | +--------------+ 2 rows in set (0.00 sec) MariaDB [(none)]> FLUSH PRIVILEGES; MariaDB [(none)]> ALTER USER 'mailcow'@'%' IDENTIFIED BY 'mookuh'; MariaDB [(none)]> ALTER USER 'root'@'%' IDENTIFIED BY 'gotr00t'; MariaDB [(none)]> ALTER USER 'root'@'localhost' IDENTIFIED BY 'gotr00t'; MariaDB [(none)]> FLUSH PRIVILEGES;","title":"2.2 Maria DB >= 10.4 (current mailcows)"},{"location":"debug-reset_pw/#remove-two-factor-authentication","text":"","title":"Remove Two-Factor Authentication"},{"location":"debug-reset_pw/#for-mailcow-webui","text":"This works similar to resetting a MySQL password, now we do it from the host without connecting to the MySQL CLI: source mailcow.conf docker-compose exec mysql-mailcow mysql -u${DBUSER} -p${DBPASS} ${DBNAME} -e \"DELETE FROM tfa WHERE username='YOUR_USERNAME';\"","title":"For mailcow WebUI:"},{"location":"debug-reset_pw/#for-sogo","text":"docker-compose exec -u sogo sogo-mailcow sogo-tool user-preferences set defaults user@example.com SOGoGoogleAuthenticatorEnabled '{\"SOGoGoogleAuthenticatorEnabled\":0}'","title":"For SOGo:"},{"location":"debug-reset_tls/","text":"In case you encounter problems with your certificate, key or Let's Encrypt account, please try to reset the TLS assets: source mailcow.conf docker-compose down rm -rf data/assets/ssl mkdir data/assets/ssl openssl req -x509 -newkey rsa:4096 -keyout 
data/assets/ssl-example/key.pem -out data/assets/ssl-example/cert.pem -days 365 -subj \"/C=DE/ST=NRW/L=Willich/O=mailcow/OU=mailcow/CN=${MAILCOW_HOSTNAME}\" -sha256 -nodes cp -n -d data/assets/ssl-example/*.pem data/assets/ssl/ docker-compose up -d This will stop mailcow, source the variables we need, create a self-signed certificate and start mailcow. If you use Let's Encrypt you should be careful as you will create a new account and a new set of certificates. You will run into a ratelimit sooner or later. Please also note that previous TLSA records will be invalid.","title":"Reset TLS certificates"},{"location":"debug-rm_volumes/","text":"You may want to remove a set of persistent data to resolve a conflict or to start over. mailcowdockerized can vary and depends on your compose project name (if it's unchanged, mailcowdockerized is the correct value). If you are unsure about volume names, run docker volume ls for a full list. Delete a single volume: docker volume rm mailcowdockerized_${VOLUME_NAME} Remove volume mysql-vol-1 to remove all MySQL data. Remove volume redis-vol-1 to remove all Redis data. Remove volume vmail-vol-1 to remove all contents of /var/vmail mounted to dovecot-mailcow . Remove volume rspamd-vol-1 to remove all Rspamd data. Remove volume crypt-vol-1 to remove all crypto data. This will render all mails unreadable. Alternatively, running docker-compose down -v will destroy all mailcow: dockerized volumes and delete any related containers and networks.","title":"Remove Persistent Data"},{"location":"debug/","text":"When a problem occurs, then always for a reason! What you want to do in such a case is: Read your logs; follow them to see what the reason for your problem is. Follow the leads given to you in your logfiles and start investigating. Restarting the troubled service or the whole stack to see if the problem persists. Read the documentation of the troubled service and search it's bugtracker for your problem. Search our issues for your problem. Create an issue over at our GitHub repository if you think your problem might be a bug or a missing feature you badly need. But please make sure, that you include all the logs and a full description to your problem. Please do not ask for support on Git. Join our Telegram community or find the official support packages at Servercow .","title":"Introduction"},{"location":"firststeps-disable_ipv6/","text":"This is ONLY recommended if you do not have an IPv6 enabled network on your host! If you really need to, you can disable the usage of IPv6 in the compose file. Additionally, you can also disable the startup of container \"ipv6nat-mailcow\", as it's not needed if you won't use IPv6. Instead of editing docker-compose.yml directly, it is preferable to create an override file for it and implement your changes to the service there. Unfortunately, this right now only seems to work for services, not for network settings. To disable IPv6 on the mailcow network, open docker-compose.yml with your favourite text editor and search for the network section (it's near the bottom of the file). 1. Modify docker-compose.yml Change enable_ipv6: true to enable_ipv6: false : networks: mailcow-network: [...] enable_ipv6: true # <<< set to false [...] 2. Disable ipv6nat-mailcow To disable the ipv6nat-mailcow container as well, go to your mailcow directory and create a new file called \"docker-compose.override.yml\": NOTE: If you already have an override file, of course don't recreate it, but merge the lines below into your existing one accordingly! 
# cd /opt/mailcow-dockerized # touch docker-compose.override.yml Open the file in your favourite text editor and fill in the following: version: '2.1' services: ipv6nat-mailcow: image: bash:latest restart: \"no\" entrypoint: [\"echo\", \"ipv6nat disabled in compose.override.yml\"] For these changes to be effective, you need to fully stop and then restart the stack, so containers and networks are recreated: docker-compose down docker-compose up -d 3. Disable IPv6 in unbound-mailcow Edit data/conf/unbound/unbound.conf and set do-ip6 to \"no\": server: [...] do-ip6: no [...] Restart Unbound: docker-compose restart unbound-mailcow 4. Disable IPv6 in postfix-mailcow Create data/conf/postfix/extra.cf and set smtp_address_preference to ipv4 : smtp_address_preference = ipv4 inet_protocols = ipv4 Restart Postfix: docker-compose restart postfix-mailcow","title":"Disable IPv6"},{"location":"firststeps-dmarc_reporting/","text":"DMARC Reporting done via Rspamd DMARC Module. Rspamd documentation can be found here: https://rspamd.com/doc/modules/dmarc.html Important: Change example.com , mail.example.com and Example to reflect your setup DMARC reporting requires additional attention, especially over the first few days All receiving domains hosted on mailcow send from one reporting domain. It is recommended to use the parent domain of your MAILCOW_HOSTNAME : If your MAILCOW_HOSTNAME is mail.example.com change the following config to domain = \"example.com\"; Set email equally, e.g. email = \"noreply-dmarc@example.com\"; It is optional but recommended to create an email user noreply-dmarc in mailcow to handle bounces. Enable DMARC reporting \u00b6 Create the file data/conf/rspamd/local.d/dmarc.conf and set the following content: reporting { enabled = true; email = 'noreply-dmarc@example.com'; domain = 'example.com'; org_name = 'Example'; helo = 'rspamd'; smtp = 'postfix'; smtp_port = 25; from_name = 'Example DMARC Report'; msgid_from = 'rspamd.mail.example.com'; max_entries = 2k; keys_expire = 2d; } Create or modify docker-compose.override.yml in the mailcow-dockerized base directory: version: '2.1' services: rspamd-mailcow: environment: - MASTER=${MASTER:-y} labels: ofelia.enabled: \"true\" ofelia.job-exec.rspamd_dmarc_reporting.schedule: \"@every 24h\" ofelia.job-exec.rspamd_dmarc_reporting.command: \"/bin/bash -c \\\"[[ $${MASTER} == y ]] && /usr/bin/rspamadm dmarc_report > /var/lib/rspamd/dmarc_reports_last_log 2>&1 || exit 0\\\"\" ofelia-mailcow: depends_on: - rspamd-mailcow Run docker-compose up -d Send a copy reports to yourself \u00b6 To receive a hidden copy of reports generated by Rspamd you can set a bcc_addrs list in the reporting config section of data/conf/rspamd/local.d/dmarc.conf : reporting { enabled = true; email = 'noreply-dmarc@example.com'; bcc_addrs = [\"noreply-dmarc@example.com\",\"parsedmarc@example.com\"]; [...] Rspamd will load changes in real time, so you won't need to restart the container at this point. This can be useful if you... ...want to check that your DMARC reports are sent correctly and authenticated. ...want to analyze your own reports to get statistics, i.e. to use with ParseDMARC or other analytic systems. 
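Before the first scheduled run, it can help to double-check that Rspamd actually picked up the reporting settings. A minimal sketch, assuming the default rspamd-mailcow service name and that the rspamadm configdump subcommand is available in your Rspamd version:
# Dump the effective DMARC module configuration as seen by Rspamd
docker-compose exec rspamd-mailcow rspamadm configdump dmarc
# Watch recent Rspamd log output for DMARC-related entries
docker-compose logs --tail=200 rspamd-mailcow | grep -i dmarc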
Troubleshooting \u00b6 Check when the report schedule last ran: docker-compose exec rspamd-mailcow date -r /var/lib/rspamd/dmarc_reports_last_log See the latest report output: docker-compose exec rspamd-mailcow cat /var/lib/rspamd/dmarc_reports_last_log Manually trigger a DMARC report: docker-compose exec rspamd-mailcow rspamadm dmarc_report Validate that Rspamd has recorded data in Redis: docker-compose exec redis-mailcow redis-cli KEYS 'dmarc;*' docker-compose exec redis-mailcow redis-cli HGETALL \"dmarc;example.com;20211231\" Change DMARC reporting frequency \u00b6 In the example above reports are sent once every 24 hours. Ofelia's schedule uses the same implementation as cron in Go; the supported syntax is described at cron Documentation . To change the schedule: Edit docker-compose.override.yml and adjust ofelia.job-exec.rspamd_dmarc_reporting.schedule: \"@every 24h\" to a desired value, for example to \"@midnight\" Run docker-compose up -d Run docker-compose restart ofelia-mailcow Disable DMARC Reporting \u00b6 To disable reporting: Set enabled to false in data/conf/rspamd/local.d/dmarc.conf Revert changes done in docker-compose.override.yml to rspamd-mailcow and ofelia-mailcow Run docker-compose up -d","title":"DMARC Reporting"},{"location":"firststeps-dmarc_reporting/#enable-dmarc-reporting","text":"Create the file data/conf/rspamd/local.d/dmarc.conf and set the following content: reporting { enabled = true; email = 'noreply-dmarc@example.com'; domain = 'example.com'; org_name = 'Example'; helo = 'rspamd'; smtp = 'postfix'; smtp_port = 25; from_name = 'Example DMARC Report'; msgid_from = 'rspamd.mail.example.com'; max_entries = 2k; keys_expire = 2d; } Create or modify docker-compose.override.yml in the mailcow-dockerized base directory: version: '2.1' services: rspamd-mailcow: environment: - MASTER=${MASTER:-y} labels: ofelia.enabled: \"true\" ofelia.job-exec.rspamd_dmarc_reporting.schedule: \"@every 24h\" ofelia.job-exec.rspamd_dmarc_reporting.command: \"/bin/bash -c \\\"[[ $${MASTER} == y ]] && /usr/bin/rspamadm dmarc_report > /var/lib/rspamd/dmarc_reports_last_log 2>&1 || exit 0\\\"\" ofelia-mailcow: depends_on: - rspamd-mailcow Run docker-compose up -d","title":"Enable DMARC reporting"},{"location":"firststeps-dmarc_reporting/#send-a-copy-reports-to-yourself","text":"To receive a hidden copy of reports generated by Rspamd you can set a bcc_addrs list in the reporting config section of data/conf/rspamd/local.d/dmarc.conf : reporting { enabled = true; email = 'noreply-dmarc@example.com'; bcc_addrs = [\"noreply-dmarc@example.com\",\"parsedmarc@example.com\"]; [...] Rspamd will load changes in real time, so you won't need to restart the container at this point. This can be useful if you... ...want to check that your DMARC reports are sent correctly and authenticated. ...want to analyze your own reports to get statistics, i.e.
to use with ParseDMARC or other analytic systems.","title":"Send a copy reports to yourself"},{"location":"firststeps-dmarc_reporting/#troubleshooting","text":"Check when the report schedule last ran: docker-compose exec rspamd-mailcow date -r /var/lib/rspamd/dmarc_reports_last_log See the latest report output: docker-compose exec rspamd-mailcow cat /var/lib/rspamd/dmarc_reports_last_log Manually trigger a DMARC report: docker-compose exec rspamd-mailcow rspamadm dmarc_report Validate that Rspamd has recorded data in Redis: docker-compose exec redis-mailcow redis-cli KEYS 'dmarc;*' docker-compose exec redis-mailcow redis-cli HGETALL \"dmarc;example.com;20211231\"","title":"Troubleshooting"},{"location":"firststeps-dmarc_reporting/#change-dmarc-reporting-frequency","text":"In the example above reports are sent once every 24 hours. Ofelia's schedule uses the same implementation as cron in Go; the supported syntax is described at cron Documentation . To change the schedule: Edit docker-compose.override.yml and adjust ofelia.job-exec.rspamd_dmarc_reporting.schedule: \"@every 24h\" to a desired value, for example to \"@midnight\" Run docker-compose up -d Run docker-compose restart ofelia-mailcow","title":"Change DMARC reporting frequency"},{"location":"firststeps-dmarc_reporting/#disable-dmarc-reporting","text":"To disable reporting: Set enabled to false in data/conf/rspamd/local.d/dmarc.conf Revert changes done in docker-compose.override.yml to rspamd-mailcow and ofelia-mailcow Run docker-compose up -d","title":"Disable DMARC Reporting"},{"location":"firststeps-ip_bindings/","text":"Warning Changing the binding does not affect source NAT. See SNAT for required steps. IPv4 binding \u00b6 To adjust one or multiple IPv4 bindings, open mailcow.conf and edit one, multiple or all variables as per your needs: # For technical reasons, http bindings are a bit different from other service bindings. # You will find the following variables, separated by a bind address and its port: # Example: HTTP_BIND=1.2.3.4 HTTP_PORT=80 HTTP_BIND= HTTPS_PORT=443 HTTPS_BIND= # Other services are bound by using the following format: # SMTP_PORT=1.2.3.4:25 will bind SMTP to the IP 1.2.3.4 on port 25 # Important! Specifying an IPv4 address will skip all IPv6 bindings since Docker 20.x. # doveadm, SQL as well as Solr are bound to local ports only, please do not change that, unless you know what you are doing. SMTP_PORT=25 SMTPS_PORT=465 SUBMISSION_PORT=587 IMAP_PORT=143 IMAPS_PORT=993 POP_PORT=110 POPS_PORT=995 SIEVE_PORT=4190 DOVEADM_PORT=127.0.0.1:19991 SQL_PORT=127.0.0.1:13306 SOLR_PORT=127.0.0.1:18983 To apply your changes, run docker-compose down followed by docker-compose up -d . IPv6 binding \u00b6 Changing IPv6 bindings is different from IPv4. Again, this has a technical background. A docker-compose.override.yml file will be used instead of editing the docker-compose.yml file directly. This is to maintain updatability, as the docker-compose.yml file gets updated regularly and your changes will most likely be overwritten. Edit or create a file docker-compose.override.yml with the following content. Its content will be merged with the productive docker-compose.yml file. An imaginary IPv6 address 2a00:dead:beef::abc is given. The first suffix :PORT1 defines the external port, while the second suffix :PORT2 routes to the corresponding port inside the container and must not be changed.
version: '2.1' services: dovecot-mailcow: ports: - '2a00:dead:beef::abc:143:143' - '2a00:dead:beef::abc:993:993' - '2a00:dead:beef::abc:110:110' - '2a00:dead:beef::abc:995:995' - '2a00:dead:beef::abc:4190:4190' postfix-mailcow: ports: - '2a00:dead:beef::abc:25:25' - '2a00:dead:beef::abc:465:465' - '2a00:dead:beef::abc:587:587' nginx-mailcow: ports: - '2a00:dead:beef::abc:80:80' - '2a00:dead:beef::abc:443:443' To apply your changes, run docker-compose down followed by docker-compose up -d .","title":"IP bindings"},{"location":"firststeps-ip_bindings/#ipv4-binding","text":"To adjust one or multiple IPv4 bindings, open mailcow.conf and edit one, multiple or all variables as per your needs: # For technical reasons, http bindings are a bit different from other service bindings. # You will find the following variables, separated by a bind address and its port: # Example: HTTP_BIND=1.2.3.4 HTTP_PORT=80 HTTP_BIND= HTTPS_PORT=443 HTTPS_BIND= # Other services are bound by using the following format: # SMTP_PORT=1.2.3.4:25 will bind SMTP to the IP 1.2.3.4 on port 25 # Important! Specifying an IPv4 address will skip all IPv6 bindings since Docker 20.x. # doveadm, SQL as well as Solr are bound to local ports only, please do not change that, unless you know what you are doing. SMTP_PORT=25 SMTPS_PORT=465 SUBMISSION_PORT=587 IMAP_PORT=143 IMAPS_PORT=993 POP_PORT=110 POPS_PORT=995 SIEVE_PORT=4190 DOVEADM_PORT=127.0.0.1:19991 SQL_PORT=127.0.0.1:13306 SOLR_PORT=127.0.0.1:18983 To apply your changes, run docker-compose down followed by docker-compose up -d .","title":"IPv4 binding"},{"location":"firststeps-ip_bindings/#ipv6-binding","text":"Changing IPv6 bindings is different from IPv4. Again, this has a technical background. A docker-compose.override.yml file will be used instead of editing the docker-compose.yml file directly. This is to maintain updatability, as the docker-compose.yml file gets updated regularly and your changes will most likely be overwritten. Edit to create a file docker-compose.override.yml with the following content. Its content will be merged with the productive docker-compose.yml file. An imaginary IPv6 2a00:dead:beef::abc is given. The first suffix :PORT1 defines the external port, while the second suffix :PORT2 routes to the corresponding port inside the container and must not be changed. version: '2.1' services: dovecot-mailcow: ports: - '2a00:dead:beef::abc:143:143' - '2a00:dead:beef::abc:993:993' - '2a00:dead:beef::abc:110:110' - '2a00:dead:beef::abc:995:995' - '2a00:dead:beef::abc:4190:4190' postfix-mailcow: ports: - '2a00:dead:beef::abc:25:25' - '2a00:dead:beef::abc:465:465' - '2a00:dead:beef::abc:587:587' nginx-mailcow: ports: - '2a00:dead:beef::abc:80:80' - '2a00:dead:beef::abc:443:443' To apply your changes, run docker-compose down followed by docker-compose up -d .","title":"IPv6 binding"},{"location":"firststeps-local_mta/","text":"The easiest option would be to disable the listener on port 25/tcp. 
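Before touching any configuration, it can be worth confirming which process is actually bound to port 25/tcp on the host. A minimal check, assuming the iproute2 ss utility is installed (lsof or netstat work similarly):
# Show the process currently listening on port 25 on the host
ss -ltnp | grep ':25 '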
Postfix users disable the listener by commenting the following line (starting with smtp or 25 ) in /etc/postfix/master.cf : #smtp inet n - - - - smtpd Furthermore, to relay over a dockerized mailcow, you may want to add 172.22.1.1 as relayhost and remove the Docker interface from \"inet_interfaces\": postconf -e 'relayhost = 172.22.1.1' postconf -e \"mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128\" postconf -e \"inet_interfaces = loopback-only\" postconf -e \"relay_transport = relay\" postconf -e \"default_transport = smtp\" Now it is important to not have the same FQDN in myhostname as you use for your dockerized mailcow. Check your local (non-Docker) Postfix' main.cf for myhostname and set it to something different, for example local.my.fqdn.tld . \"172.22.1.1\" is the mailcow created network gateway in Docker. Relaying over this interface is necessary (instead of - for example - relaying directly over ${MAILCOW_HOSTNAME}) to relay over a known internal network. Restart Postfix after applying your changes.","title":"Local MTA on Docker host"},{"location":"firststeps-logging/","text":"Logging in mailcow: dockerized consists of multiple stages, but is, after all, much more flexible and easier to integrate into a logging daemon than before. In Docker the containerized application (PID 1) writes its output to stdout. For real one-application containers this works just fine. Run docker-compose logs --help to learn more. Some containers log or stream to multiple destinations. No container will keep persistent logs in it. Containers are transient items! In the end, every line of logs will reach the Docker daemon - unfiltered. The default logging driver is \"json\" . Filtered logs \u00b6 Some logs are filtered and written to Redis keys but also streamed to a Redis channel. The Redis channel is used to stream logs with failed authentication attempts to be read by netfilter-mailcow. The Redis keys are persistent and will keep 10000 lines of logs for the web UI. This mechanism makes it possible to use whatever Docker logging driver you want to, without losing the ability to read logs from the UI or ban suspicious clients with netfilter-mailcow. Redis keys will only hold logs from applications and filter out system messages (think of cron etc.). Logging drivers \u00b6 Via docker-compose.override.yml \u00b6 Here is the good news: Since Docker has some great logging drivers, you can integrate mailcow: dockerized into your existing logging environment with ease. Create a docker-compose.override.yml and add, for example, this block to use the \"gelf\" logging plugin for postfix-mailcow : version: '2.1' services: postfix-mailcow: # or any other logging: driver: \"gelf\" options: gelf-address: \"udp://graylog:12201\" Another example for Syslog : version: '2.1' services: postfix-mailcow: # or any other logging: driver: \"syslog\" options: syslog-address: \"udp://127.0.0.1:514\" syslog-facility: \"local3\" dovecot-mailcow: # or any other logging: driver: \"syslog\" options: syslog-address: \"udp://127.0.0.1:514\" syslog-facility: \"local3\" rspamd-mailcow: # or any other logging: driver: \"syslog\" options: syslog-address: \"udp://127.0.0.1:514\" syslog-facility: \"local3\" # For Rsyslog only: # To move local3 input to /var/log/mailcow.log and stop processing, create a file \"/etc/rsyslog.d/docker.conf\": local3.* /var/log/mailcow.logs & ~ # Restart rsyslog afterwards. 
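To verify which logging driver is actually in effect after recreating a container, Docker itself can be queried. A small sketch, assuming the default mailcow container names (the format strings are standard Docker templates, not mailcow-specific options):
# Show the logging driver used by the running postfix container
docker inspect --format '{{.HostConfig.LogConfig.Type}}' $(docker ps -qf name=postfix-mailcow)
# Show the Docker-wide default logging driver
docker info --format '{{.LoggingDriver}}'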
via daemon.json (globally) \u00b6 If you want to change the logging driver globally , edit Dockers daemon configuration file /etc/docker/daemon.json and restart the Docker service: { ... \"log-driver\": \"gelf\", \"log-opts\": { \"gelf-address\": \"udp://graylog:12201\" } ... } For Syslog: { ... \"log-driver\": \"syslog\", \"log-opts\": { \"syslog-address\": \"udp://1.2.3.4:514\" } ... } Restart the Docker daemon and run docker-compose down && docker-compose up -d to recreate the containers with the new logging driver.","title":"Logging"},{"location":"firststeps-logging/#filtered-logs","text":"Some logs are filtered and written to Redis keys but also streamed to a Redis channel. The Redis channel is used to stream logs with failed authentication attempts to be read by netfilter-mailcow. The Redis keys are persistent and will keep 10000 lines of logs for the web UI. This mechanism makes it possible to use whatever Docker logging driver you want to, without losing the ability to read logs from the UI or ban suspicious clients with netfilter-mailcow. Redis keys will only hold logs from applications and filter out system messages (think of cron etc.).","title":"Filtered logs"},{"location":"firststeps-logging/#logging-drivers","text":"","title":"Logging drivers"},{"location":"firststeps-logging/#via-docker-composeoverrideyml","text":"Here is the good news: Since Docker has some great logging drivers, you can integrate mailcow: dockerized into your existing logging environment with ease. Create a docker-compose.override.yml and add, for example, this block to use the \"gelf\" logging plugin for postfix-mailcow : version: '2.1' services: postfix-mailcow: # or any other logging: driver: \"gelf\" options: gelf-address: \"udp://graylog:12201\" Another example for Syslog : version: '2.1' services: postfix-mailcow: # or any other logging: driver: \"syslog\" options: syslog-address: \"udp://127.0.0.1:514\" syslog-facility: \"local3\" dovecot-mailcow: # or any other logging: driver: \"syslog\" options: syslog-address: \"udp://127.0.0.1:514\" syslog-facility: \"local3\" rspamd-mailcow: # or any other logging: driver: \"syslog\" options: syslog-address: \"udp://127.0.0.1:514\" syslog-facility: \"local3\" # For Rsyslog only: # To move local3 input to /var/log/mailcow.log and stop processing, create a file \"/etc/rsyslog.d/docker.conf\": local3.* /var/log/mailcow.logs & ~ # Restart rsyslog afterwards.","title":"Via docker-compose.override.yml"},{"location":"firststeps-logging/#via-daemonjson-globally","text":"If you want to change the logging driver globally , edit Dockers daemon configuration file /etc/docker/daemon.json and restart the Docker service: { ... \"log-driver\": \"gelf\", \"log-opts\": { \"gelf-address\": \"udp://graylog:12201\" } ... } For Syslog: { ... \"log-driver\": \"syslog\", \"log-opts\": { \"syslog-address\": \"udp://1.2.3.4:514\" } ... } Restart the Docker daemon and run docker-compose down && docker-compose up -d to recreate the containers with the new logging driver.","title":"via daemon.json (globally)"},{"location":"firststeps-rp/","text":"You don't need to change the Nginx site that comes with mailcow: dockerized. mailcow: dockerized trusts the default gateway IP 172.22.1.1 as proxy. 1. Make sure you change HTTP_BIND and HTTPS_BIND in mailcow.conf to a local address and set the ports accordingly, for example: HTTP_BIND = 127 .0.0.1 HTTP_PORT = 8080 HTTPS_BIND = 127 .0.0.1 HTTPS_PORT = 8443 This will also change the bindings inside the Nginx container! 
This is important, if you decide to use a proxy within Docker. IMPORTANT: Do not use port 8081, 9081 or 65510! Recreate affected containers by running docker-compose up -d . Important information, please read them carefully! Info If you plan to use a reverse proxy and want to use another server name that is not MAILCOW_HOSTNAME, you need to read Adding additional server names for mailcow UI at the bottom of this page. Warning Make sure you run generate_config.sh before you enable any site configuration examples below. The script generate_config.sh copies snake-oil certificates to the correct location, so the services will not fail to start due to missing files. Warning If you enable TLS SNI ( ENABLE_TLS_SNI in mailcow.conf), the certificate paths in your reverse proxy must match the correct paths in data/assets/ssl/{hostname}. The certificates will be split into data/assets/ssl/{hostname1,hostname2,etc} and therefore will not work when you copy the examples from below pointing to data/assets/ssl/cert.pem etc. Info Using the site configs below will forward ACME requests to mailcow and let it handle certificates itself. The downside of using mailcow as ACME client behind a reverse proxy is, that you will need to reload your webserver after acme-mailcow changed/renewed/created the certificate. You can either reload your webserver daily or write a script to watch the file for changes. On many servers logrotate will reload the webserver daily anyway. If you want to use a local certbot installation, you will need to change the SSL certificate parameters accordingly. Make sure you run a post-hook script when you decide to use external ACME clients. You will find an example at the bottom of this page. 2. Configure your local webserver as reverse proxy: Apache 2.4 \u00b6 Required modules: a2enmod rewrite proxy proxy_http headers ssl Let's Encrypt will follow our rewrite, certificate requests in mailcow will work fine. Take care of highlighted lines. ServerName CHANGE_TO_MAILCOW_HOSTNAME ServerAlias autodiscover.* ServerAlias autoconfig.* RewriteEngine on RewriteCond %{HTTPS} off RewriteRule ^/?(.*) https://%{HTTP_HOST}/$1 [R=301,L] ProxyPass / http://127.0.0.1:8080/ ProxyPassReverse / http://127.0.0.1:8080/ ProxyPreserveHost On ProxyAddHeaders On RequestHeader set X-Forwarded-Proto \"http\" ServerName CHANGE_TO_MAILCOW_HOSTNAME ServerAlias autodiscover.* ServerAlias autoconfig.* # You should proxy to a plain HTTP session to offload SSL processing ProxyPass /Microsoft-Server-ActiveSync http://127.0.0.1:8080/Microsoft-Server-ActiveSync connectiontimeout=4000 ProxyPassReverse /Microsoft-Server-ActiveSync http://127.0.0.1:8080/Microsoft-Server-ActiveSync ProxyPass / http://127.0.0.1:8080/ ProxyPassReverse / http://127.0.0.1:8080/ ProxyPreserveHost On ProxyAddHeaders On RequestHeader set X-Forwarded-Proto \"https\" SSLCertificateFile MAILCOW_PATH/data/assets/ssl/cert.pem SSLCertificateKeyFile MAILCOW_PATH/data/assets/ssl/key.pem # If you plan to proxy to a HTTPS host: #SSLProxyEngine On # If you plan to proxy to an untrusted HTTPS host: #SSLProxyVerify none #SSLProxyCheckPeerCN off #SSLProxyCheckPeerName off #SSLProxyCheckPeerExpire off Nginx \u00b6 Let's Encrypt will follow our rewrite, certificate requests will work fine. Take care of highlighted lines. 
server { listen 80 default_server; listen [::]:80 default_server; server_name CHANGE_TO_MAILCOW_HOSTNAME autodiscover.* autoconfig.*; return 301 https://$host$request_uri; } server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name CHANGE_TO_MAILCOW_HOSTNAME autodiscover.* autoconfig.*; ssl_certificate MAILCOW_PATH/data/assets/ssl/cert.pem; ssl_certificate_key MAILCOW_PATH/data/assets/ssl/key.pem; ssl_session_timeout 1d; ssl_session_cache shared:SSL:50m; ssl_session_tickets off; # See https://ssl-config.mozilla.org/#server=nginx for the latest ssl settings recommendations # An example config is given below ssl_protocols TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5:!SHA1:!kRSA; ssl_prefer_server_ciphers off; location /Microsoft-Server-ActiveSync { proxy_pass http://127.0.0.1:8080/Microsoft-Server-ActiveSync; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_connect_timeout 75; proxy_send_timeout 3650; proxy_read_timeout 3650; proxy_buffers 64 256k; client_body_buffer_size 512k; client_max_body_size 0; } location / { proxy_pass http://127.0.0.1:8080/; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; client_max_body_size 0; } } HAProxy (community supported) \u00b6 Warning This is an unsupported community contribution. Feel free to provide fixes. Important/Fixme : This example only forwards HTTPS traffic and does not use mailcow's built-in ACME client. frontend https-in bind :::443 v4v6 ssl crt mailcow.pem default_backend mailcow backend mailcow option forwardfor http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } server mailcow 127.0.0.1:8080 check Traefik v2 (community supported) \u00b6 Warning This is an unsupported community contribution. Feel free to provide fixes. Important : This config only covers the reverse proxying of the web panel (nginx-mailcow) using Traefik v2. If you also want to reverse proxy the mail services such as dovecot, postfix etc., you'll need to adapt the following config for each container and create an EntryPoint in your traefik.toml or traefik.yml (depending on which config you use) for each port. For this section we'll assume you have your Traefik 2 [certificatesresolvers] properly configured in your Traefik configuration file and that you are using ACME. The following example uses Let's Encrypt, but feel free to change it to your own cert resolver. If you need one, you can find a basic Traefik 2 toml config file with all of the above implemented here: traefik.toml . It can be used for this example or as a hint on how to adapt your config. So, first of all, we are going to disable the acme-mailcow container since we'll use the certificates that Traefik will provide us. For this we'll have to set SKIP_LETS_ENCRYPT=y in our mailcow.conf , and run docker-compose up -d to apply the changes. Then we'll create a docker-compose.override.yml file in order to override the main docker-compose.yml found in your mailcow root folder.
- traefik.http.routers.moo.rule=Host(`${MAILCOW_HOSTNAME}`) # Enables tls over the router we created before. - traefik.http.routers.moo.tls=true # Specifies which kind of cert resolver we'll use, in this case le (Let's Encrypt). - traefik.http.routers.moo.tls.certresolver=le # Creates a service called "moo" for the container, and specifies which internal port of the container # should traefik route the incoming data to. - traefik.http.services.moo.loadbalancer.server.port=${HTTP_PORT} # Specifies which entrypoint (external port) should traefik listen to, for this container. # websecure being port 443, check the traefik.toml file linked above. - traefik.http.routers.moo.entrypoints=websecure # Make sure traefik uses the web network, not the mailcowdockerized_mailcow-network - traefik.docker.network=web certdumper : image : humenius/traefik-certs-dumper container_name : traefik_certdumper network_mode : none volumes : # mount the folder which contains Traefik's `acme.json' file # in this case Traefik is started from its own docker-compose in ../traefik - ../traefik/data:/traefik:ro # mount mailcow's SSL folder - ./data/assets/ssl/:/output:rw environment : # only change this, if you're using another domain for mailcow's web frontend compared to the standard config - DOMAIN=${MAILCOW_HOSTNAME} networks : web : external : true Start the new containers with docker-compose up -d . Now, there's only one thing left to do: set up the certs so that the mail services can use them as well. Since Traefik 2 uses an ACME v2 format to save ALL the certificates from all the domains we have, we'll need a way to dump the certs. Luckily, we have this tiny container which grabs the acme.json file through a volume and a variable DOMAIN=example.org , and with these, the container will output the cert.pem and key.pem files. For this we'll simply run the traefik-certs-dumper container, binding the /traefik volume to the folder where our acme.json is saved, binding the /output volume to our mailcow data/assets/ssl/ folder, and setting the DOMAIN=example.org variable to the domain we want the certs dumped for. This container will watch the acme.json file for any changes and regenerate the cert.pem and key.pem files directly in data/assets/ssl/ , the path bound to the container's /output path. You can use the command line to run it, or use the docker-compose shown here . After we have the certs dumped, we'll have to reload the configs of our postfix and dovecot containers and check the certs; you can see how here . Aaand that should be it \ud83d\ude0a. You can check that the Traefik router works fine through Traefik's dashboard, the Traefik logs, or by accessing the configured domain via HTTPS, and/or check HTTPS, SMTP and IMAP with the commands shown on the page linked before. Optional: Post-hook script for non-mailcow ACME clients \u00b6 Using a local certbot (or any other ACME client) requires restarting some containers; you can do this with a post-hook script.
Make sure you change the paths accordingly: #!/bin/bash cp /etc/letsencrypt/live/my.domain.tld/fullchain.pem /opt/mailcow-dockerized/data/assets/ssl/cert.pem cp /etc/letsencrypt/live/my.domain.tld/privkey.pem /opt/mailcow-dockerized/data/assets/ssl/key.pem postfix_c=$(docker ps -qaf name=postfix-mailcow) dovecot_c=$(docker ps -qaf name=dovecot-mailcow) nginx_c=$(docker ps -qaf name=nginx-mailcow) docker restart ${postfix_c} ${dovecot_c} ${nginx_c} Adding additional server names for mailcow UI \u00b6 If you plan to use a server name that is not MAILCOW_HOSTNAME in your reverse proxy, make sure to populate that name in mailcow.conf via ADDITIONAL_SERVER_NAMES first. Names must be separated by commas and must not contain spaces. If you skip this step, mailcow may respond to your reverse proxy with an incorrect site. ADDITIONAL_SERVER_NAMES=webmail.domain.tld,other.example.tld Run docker-compose up -d to apply.","title":"Reverse Proxy"},{"location":"firststeps-rp/#apache-24","text":"Required modules: a2enmod rewrite proxy proxy_http headers ssl Let's Encrypt will follow our rewrite, certificate requests in mailcow will work fine. Take care of highlighted lines. ServerName CHANGE_TO_MAILCOW_HOSTNAME ServerAlias autodiscover.* ServerAlias autoconfig.* RewriteEngine on RewriteCond %{HTTPS} off RewriteRule ^/?(.*) https://%{HTTP_HOST}/$1 [R=301,L] ProxyPass / http://127.0.0.1:8080/ ProxyPassReverse / http://127.0.0.1:8080/ ProxyPreserveHost On ProxyAddHeaders On RequestHeader set X-Forwarded-Proto \"http\" ServerName CHANGE_TO_MAILCOW_HOSTNAME ServerAlias autodiscover.* ServerAlias autoconfig.* # You should proxy to a plain HTTP session to offload SSL processing ProxyPass /Microsoft-Server-ActiveSync http://127.0.0.1:8080/Microsoft-Server-ActiveSync connectiontimeout=4000 ProxyPassReverse /Microsoft-Server-ActiveSync http://127.0.0.1:8080/Microsoft-Server-ActiveSync ProxyPass / http://127.0.0.1:8080/ ProxyPassReverse / http://127.0.0.1:8080/ ProxyPreserveHost On ProxyAddHeaders On RequestHeader set X-Forwarded-Proto \"https\" SSLCertificateFile MAILCOW_PATH/data/assets/ssl/cert.pem SSLCertificateKeyFile MAILCOW_PATH/data/assets/ssl/key.pem # If you plan to proxy to a HTTPS host: #SSLProxyEngine On # If you plan to proxy to an untrusted HTTPS host: #SSLProxyVerify none #SSLProxyCheckPeerCN off #SSLProxyCheckPeerName off #SSLProxyCheckPeerExpire off ","title":"Apache 2.4"},{"location":"firststeps-rp/#nginx","text":"Let's Encrypt will follow our rewrite, certificate requests will work fine. Take care of highlighted lines. 
server { listen 80 default_server; listen [::]:80 default_server; server_name CHANGE_TO_MAILCOW_HOSTNAME autodiscover.* autoconfig.*; return 301 https://$host$request_uri; } server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name CHANGE_TO_MAILCOW_HOSTNAME autodiscover.* autoconfig.*; ssl_certificate MAILCOW_PATH/data/assets/ssl/cert.pem; ssl_certificate_key MAILCOW_PATH/data/assets/ssl/key.pem; ssl_session_timeout 1d; ssl_session_cache shared:SSL:50m; ssl_session_tickets off; # See https://ssl-config.mozilla.org/#server=nginx for the latest ssl settings recommendations # An example config is given below ssl_protocols TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5:!SHA1:!kRSA; ssl_prefer_server_ciphers off; location /Microsoft-Server-ActiveSync { proxy_pass http://127.0.0.1:8080/Microsoft-Server-ActiveSync; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_connect_timeout 75; proxy_send_timeout 3650; proxy_read_timeout 3650; proxy_buffers 64 256k; client_body_buffer_size 512k; client_max_body_size 0; } location / { proxy_pass http://127.0.0.1:8080/; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; client_max_body_size 0; } }","title":"Nginx"},{"location":"firststeps-rp/#haproxy-community-supported","text":"Warning This is an unsupported community contribution. Feel free to provide fixes. Important/Fixme : This example only forwards HTTPS traffic and does not use mailcows built-in ACME client. frontend https-in bind :::443 v4v6 ssl crt mailcow.pem default_backend mailcow backend mailcow option forwardfor http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } server mailcow 127.0.0.1:8080 check","title":"HAProxy (community supported)"},{"location":"firststeps-rp/#traefik-v2-community-supported","text":"Warning This is an unsupported community contribution. Feel free to provide fixes. Important : This config only covers the \"reverseproxing\" of the webpannel (nginx-mailcow) using Traefik v2, if you also want to reverseproxy the mail services such as dovecot, postfix... you'll just need to adapt the following config to each container and create an EntryPoint on your traefik.toml or traefik.yml (depending which config you use) for each port. For this section we'll assume you have your Traefik 2 [certificatesresolvers] properly configured on your traefik configuration file, and also using acme, also, the following example uses Lets Encrypt, but feel free to change it to your own cert resolver. You can find a basic Traefik 2 toml config file with all the above implemented which can be used for this example here traefik.toml if you need one, or a hint on how to adapt your config. So, first of all, we are going to disable the acme-mailcow container since we'll use the certs that traefik will provide us. For this we'll have to set SKIP_LETS_ENCRYPT=y on our mailcow.conf , and run docker-compose up -d to apply the changes. Then we'll create a docker-compose.override.yml file in order to override the main docker-compose.yml found in your mailcow root folder. 
version : '2.1' services : nginx-mailcow : networks : # add Traefik's network web : labels : - traefik.enable=true # Creates a router called \"moo\" for the container, and sets up a rule to link the container to certain rule, # in this case, a Host rule with our MAILCOW_HOSTNAME var. - traefik.http.routers.moo.rule=Host(`${MAILCOW_HOSTNAME}`) # Enables tls over the router we created before. - traefik.http.routers.moo.tls=true # Specifies which kind of cert resolver we'll use, in this case le (Lets Encrypt). - traefik.http.routers.moo.tls.certresolver=le # Creates a service called \"moo\" for the container, and specifies which internal port of the container # should traefik route the incoming data to. - traefik.http.services.moo.loadbalancer.server.port=${HTTP_PORT} # Specifies which entrypoint (external port) should traefik listen to, for this container. # websecure being port 443, check the traefik.toml file liked above. - traefik.http.routers.moo.entrypoints=websecure # Make sure traefik uses the web network, not the mailcowdockerized_mailcow-network - traefik.docker.network=web certdumper : image : humenius/traefik-certs-dumper container_name : traefik_certdumper network_mode : none volumes : # mount the folder which contains Traefik's `acme.json' file # in this case Traefik is started from its own docker-compose in ../traefik - ../traefik/data:/traefik:ro # mount mailcow's SSL folder - ./data/assets/ssl/:/output:rw environment : # only change this, if you're using another domain for mailcow's web frontend compared to the standard config - DOMAIN=${MAILCOW_HOSTNAME} networks : web : external : true Start the new containers with docker-compose up -d . Now, there's only one thing left to do, which is setup the certs so that the mail services can use them as well, since Traefik 2 uses an acme v2 format to save ALL the license from all the domains we have, we'll need to find a way to dump the certs, lucky we have this tiny container which grabs the acme.json file trough a volume, and a variable DOMAIN=example.org , and with these, the container will output the cert.pem and key.pem files, for this we'll simply run the traefik-certs-dumper container binding the /traefik volume to the folder where our acme.json is saved, bind the /output volume to our mailcow data/assets/ssl/ folder, and set up the DOMAIN=example.org variable to the domain we want the certs dumped from. This container will watch over the acme.json file for any changes, and regenerate the cert.pem and key.pem files directly into data/assets/ssl/ being the path binded to the container's /output path. You can use the command line to run it, or use the docker-compose shown here . After we have the certs dumped, we'll have to reload the configs from our postfix and dovecot containers, and check the certs, you can see how here . Aaand that should be it \ud83d\ude0a, you can check if the Traefik router works fine trough Traefik's dashboard / traefik logs / accessing the setted domain trough https, or / and check HTTPS, SMTP and IMAP trough the commands shown on the page linked before.","title":"Traefik v2 (community supported)"},{"location":"firststeps-rp/#optional-post-hook-script-for-non-mailcow-acme-clients","text":"Using a local certbot (or any other ACME client) requires to restart some containers, you can do this with a post-hook script. 
Make sure you change the paths accordingly: #!/bin/bash cp /etc/letsencrypt/live/my.domain.tld/fullchain.pem /opt/mailcow-dockerized/data/assets/ssl/cert.pem cp /etc/letsencrypt/live/my.domain.tld/privkey.pem /opt/mailcow-dockerized/data/assets/ssl/key.pem postfix_c=$(docker ps -qaf name=postfix-mailcow) dovecot_c=$(docker ps -qaf name=dovecot-mailcow) nginx_c=$(docker ps -qaf name=nginx-mailcow) docker restart ${postfix_c} ${dovecot_c} ${nginx_c}","title":"Optional: Post-hook script for non-mailcow ACME clients"},{"location":"firststeps-rp/#adding-additional-server-names-for-mailcow-ui","text":"If you plan to use a server name that is not MAILCOW_HOSTNAME in your reverse proxy, make sure to populate that name in mailcow.conf via ADDITIONAL_SERVER_NAMES first. Names must be separated by commas and must not contain spaces. If you skip this step, mailcow may respond to your reverse proxy with an incorrect site. ADDITIONAL_SERVER_NAMES=webmail.domain.tld,other.example.tld Run docker-compose up -d to apply.","title":"Adding additional server names for mailcow UI"},{"location":"firststeps-rspamd_ui/","text":"Rspamd is an easy to use spam filtering tool presently installed with mailcow. Go to the mailcow web admin interface Navigate to the Access tab. (Configuration > Configuration & Details > Access) Modify the Rspamd UI password Go to https://${MAILCOW_HOSTNAME}/rspamd in a browser and log in! Additional configuration options and documentation can be found here : https://rspamd.com/webui/","title":"Rspamd UI"},{"location":"firststeps-snat/","text":"SNAT is used to change the source address of the packets sent by mailcow. It can be used to change the outgoing IP address on systems with multiple IP addresses. Open mailcow.conf , set either or both of the following parameters: # Use this IPv4 for outgoing connections (SNAT) SNAT_TO_SOURCE=1.2.3.4 # Use this IPv6 for outgoing connections (SNAT) SNAT6_TO_SOURCE=dead:beef Run docker-compose up -d . The values are read by netfilter-mailcow. netfilter-mailcow will make sure, the post-routing rules are on position 1 in the netfilter table. It does automatically delete and re-create them if they are found on another position than 1. Check the output of docker-compose logs --tail=200 netfilter-mailcow to ensure the SNAT settings have been applied.","title":"SNAT"},{"location":"firststeps-ssl/","text":"Let's Encrypt (out-of-the-box) \u00b6 The \"acme-mailcow\" container will try to obtain a LE certificate for ${MAILCOW_HOSTNAME} , autodiscover.ADDED_MAIL_DOMAIN and autoconfig.ADDED_MAIL_DOMAIN . Warning mailcow must be available on port 80 for the acme-client to work. Our reverse proxy example configurations do cover that. You can also use any external ACME client (certbot for example) to obtain certificates, but you will need to make sure, that they are copied to the correct location and a post-hook reloads affected containers. See more in the Reverse Proxy documentation. By default, which means 0 domains are added to mailcow, it will try to obtain a certificate for ${MAILCOW_HOSTNAME} . For each domain you add, it will try to resolve autodiscover.ADDED_MAIL_DOMAIN and autoconfig.ADDED_MAIL_DOMAIN to its IPv6 address or - if IPv6 is not configured in your domain - IPv4 address. If it succeeds, a name will be added as SAN to the certificate request. Only names that can be validated, will be added as SAN. For every domain you remove, the certificate will be moved and a new certificate will be requested. 
It is not possible to keep domains in a certificate when we are not able to validate the challenge for them. If you want to re-run the ACME client, use docker-compose restart acme-mailcow and monitor its logs with docker-compose logs --tail=200 -f acme-mailcow . Additional domain names \u00b6 Edit \"mailcow.conf\" and add a parameter ADDITIONAL_SAN like this: Do not use quotes ( \" ) and do not use spaces between the names! ADDITIONAL_SAN=smtp.*,cert1.example.com,cert2.example.org,whatever.* Each name will be validated against its IPv6 address or - if IPv6 is not configured in your domain - IPv4 address. A wildcard name like smtp.* will try to obtain a smtp.DOMAIN_NAME SAN for each domain added to mailcow. Run docker-compose up -d to recreate affected containers automatically. Info Using names other than MAILCOW_HOSTNAME to access the mailcow UI may need further configuration. If you plan to use a server name that is not MAILCOW_HOSTNAME to access the mailcow UI (for example by adding mail.* to ADDITIONAL_SAN ), make sure to populate that name in mailcow.conf via ADDITIONAL_SERVER_NAMES . Names must be separated by commas and must not contain spaces. If you skip this step, mailcow may respond with an incorrect site. ADDITIONAL_SERVER_NAMES=webmail.domain.tld,other.example.tld Run docker-compose up -d to apply. Force renewal \u00b6 To force a renewal, you need to create a file named force_renew and restart the acme-mailcow container: cd /opt/mailcow-dockerized touch data/assets/ssl/force_renew docker-compose restart acme-mailcow # Now check the logs for a renewal docker-compose logs --tail=200 -f acme-mailcow The file will be deleted automatically. Validation errors and how to skip validation \u00b6 You can skip the IP verification by setting SKIP_IP_CHECK=y in mailcow.conf (no quotes). Be warned that a misconfiguration will get you ratelimited by Let's Encrypt! This is primarily useful for multi-IP setups where the IP check would return the incorrect source IP address. Due to using dynamic IPs for acme-mailcow, source NAT is not consistent over restarts. If you encounter problems with \"HTTP validation\", but your IP address confirmation succeeds, you are most likely using firewalld, ufw or any other firewall that disallows connections from br-mailcow to your external interface. Both firewalld and ufw disallow this by default. It is often not enough to just stop these firewall services. You'd need to stop mailcow ( docker-compose down ), stop the firewall service, flush the chains and restart Docker. You can also skip this validation method by setting SKIP_HTTP_VERIFICATION=y in \"mailcow.conf\". Be warned that this is discouraged. In most cases, the HTTP verification is skipped to work around unknown NAT reflection issues, which are not resolved by ignoring this specific network misconfiguration. If you encounter problems generating TLSA records in the DNS overview within mailcow, you most likely have NAT reflection issues that you should fix. If you changed a SKIP_* parameter, run docker-compose up -d to apply your changes. Disable Let's Encrypt \u00b6 Disable Let's Encrypt completely \u00b6 Set SKIP_LETS_ENCRYPT=y in \"mailcow.conf\" and recreate \"acme-mailcow\" by running docker-compose up -d . Skip all names but ${MAILCOW_HOSTNAME} \u00b6 Add ONLY_MAILCOW_HOSTNAME=y to \"mailcow.conf\" and recreate \"acme-mailcow\" by running docker-compose up -d . The Let's Encrypt subjectAltName limit of 100 domains \u00b6 Let's Encrypt currently has a limit of 100 Domain Names per Certificate .
By default, \"acme-mailcow\" will create a single SAN certificate for all validated domains (see the first section and Additional domain names ). This provides best compatibility but means the Let's Encrypt limit exceeds if you add too many domains to a single mailcow installation. To solve this, you can configure ENABLE_SSL_SNI to generate: A main server certificate with MAILCOW_HOSTNAME and all fully qualified domain names in the ADDITIONAL_SAN config One additional certificate for each domain found in the database with autodiscover. , autoconfig. and any other ADDITIONAL_SAN configured in this format (subdomain.*). Limitations: A certificate name ADDITIONAL_SAN=test.example.com will be added as SAN to the main certificate. A separate certificate/key pair will not be generated for this format. Postfix, Dovecot and Nginx will then serve these certificates with SNI. Set ENABLE_SSL_SNI=y in \"mailcow.conf\" and recreate \"acme-mailcow\" by running docker-compose up -d . Warning Not all clients support SNI, see Dovecot documentation or Wikipedia . You should make sure these clients use the MAILCOW_HOSTNAME for secure connections if you enable this feature. Here is an example: MAILCOW_HOSTNAME=server.email.tld ADDITIONAL_SAN=webmail.email.tld,mail.* Mailcow email domains: \"domain1.tld\" and \"domain2.tld\" The following certificates will be generated: server.email.tld, webmail.email.tld -> this is the default certificate, all clients can connect with these domains mail.domain1.tld, autoconfig.domain1.tld, autodiscover.domain1.tld -> individual certificate for domain1.tld, cannot be used by clients without SNI support mail.domain2.tld, autoconfig.domain2.tld, autodiscover.domain2.tld -> individual certificate for domain2.tld, cannot be used by clients without SNI support How to use your own certificate \u00b6 Make sure you disable mailcows internal LE client (see above). To use your own certificates, just save the combined certificate (containing the certificate and intermediate CA/CA if any) to data/assets/ssl/cert.pem and the corresponding key to data/assets/ssl/key.pem . IMPORTANT: Do not use symbolic links! Make sure you copy the certificates and do not link them to data/assets/ssl . Restart affected services afterwards: docker restart $(docker ps -qaf name=postfix-mailcow) docker restart $(docker ps -qaf name=nginx-mailcow) docker restart $(docker ps -qaf name=dovecot-mailcow) See Post-hook script for non-mailcow ACME clients for a full example script. Test against staging ACME directory \u00b6 Edit mailcow.conf and add LE_STAGING=y . Run docker-compose up -d to activate your changes. Custom directory URL \u00b6 Edit mailcow.conf and add the corresponding directory URL to the new variable DIRECTORY_URL : DIRECTORY_URL=https://acme-custom-v9000.api.letsencrypt.org/directory You cannot use LE_STAGING with DIRECTORY_URL . If both are set, only LE_STAGING is used. Run docker-compose up -d to activate your changes. Check your configuration \u00b6 Run docker-compose logs acme-mailcow to find out why a validation fails. To check if nginx serves the correct certificate, simply use a browser of your choice and check the displayed certificate. 
To check the certificate served by Postfix, Dovecot and Nginx we will use openssl : # Connect via SMTP (587) echo \"Q\" | openssl s_client -starttls smtp -crlf -connect mx.mailcow.email:587 # Connect via IMAP (143) echo \"Q\" | openssl s_client -starttls imap -showcerts -connect mx.mailcow.email:143 # Connect via HTTPS (443) echo \"Q\" | openssl s_client -connect mx.mailcow.email:443 To validate the expiry dates as returned by openssl against MAILCOW_HOSTNAME, you are able to use our helper script: cd /opt/mailcow-dockerized bash helper-scripts/expiry-dates.sh","title":"Advanced SSL"},{"location":"firststeps-ssl/#lets-encrypt-out-of-the-box","text":"The \"acme-mailcow\" container will try to obtain a LE certificate for ${MAILCOW_HOSTNAME} , autodiscover.ADDED_MAIL_DOMAIN and autoconfig.ADDED_MAIL_DOMAIN . Warning mailcow must be available on port 80 for the acme-client to work. Our reverse proxy example configurations do cover that. You can also use any external ACME client (certbot for example) to obtain certificates, but you will need to make sure, that they are copied to the correct location and a post-hook reloads affected containers. See more in the Reverse Proxy documentation. By default, which means 0 domains are added to mailcow, it will try to obtain a certificate for ${MAILCOW_HOSTNAME} . For each domain you add, it will try to resolve autodiscover.ADDED_MAIL_DOMAIN and autoconfig.ADDED_MAIL_DOMAIN to its IPv6 address or - if IPv6 is not configured in your domain - IPv4 address. If it succeeds, a name will be added as SAN to the certificate request. Only names that can be validated, will be added as SAN. For every domain you remove, the certificate will be moved and a new certificate will be requested. It is not possible to keep domains in a certificate, when we are not able validate the challenge for those. If you want to re-run the ACME client, use docker-compose restart acme-mailcow and monitor its logs with docker-compose logs --tail=200 -f acme-mailcow .","title":"Let's Encrypt (out-of-the-box)"},{"location":"firststeps-ssl/#additional-domain-names","text":"Edit \"mailcow.conf\" and add a parameter ADDITIONAL_SAN like this: Do not use quotes ( \" ) and do not use spaces between the names! ADDITIONAL_SAN=smtp.*,cert1.example.com,cert2.example.org,whatever.* Each name will be validated against its IPv6 address or - if IPv6 is not configured in your domain - IPv4 address. A wildcard name like smtp.* will try to obtain a smtp.DOMAIN_NAME SAN for each domain added to mailcow. Run docker-compose up -d to recreate affected containers automatically. Info Using names other name MAILCOW_HOSTNAME to access the mailcow UI may need further configuration. If you plan to use a server name that is not MAILCOW_HOSTNAME to access the mailcow UI (for example by adding mail.* to ADDITIONAL_SAN make sure to populate that name in mailcow.conf via ADDITIONAL_SERVER_NAMES . Names must be separated by commas and must not contain spaces. If you skip this step, mailcow may respond with an incorrect site. 
ADDITIONAL_SERVER_NAMES=webmail.domain.tld,other.example.tld Run docker-compose up -d to apply.","title":"Additional domain names"},{"location":"firststeps-ssl/#force-renewal","text":"To force a renewal, you need to create a file named force_renew and restart the acme-mailcow container: cd /opt/mailcow-dockerized touch data/assets/ssl/force_renew docker-compose restart acme-mailcow # Now check the logs for a renewal docker-compose logs --tail=200 -f acme-mailcow The file will be deleted automatically.","title":"Force renewal"},{"location":"firststeps-ssl/#validation-errors-and-how-to-skip-validation","text":"You can skip the IP verification by setting SKIP_IP_CHECK=y in mailcow.conf (no quotes). Be warned that a misconfiguration will get you ratelimited by Let's Encrypt! This is primarily useful for multi-IP setups where the IP check would return the incorrect source IP address. Due to using dynamic IPs for acme-mailcow, source NAT is not consistent over restarts. If you encounter problems with \"HTTP validation\", but your IP address confirmation succeeds, you are most likely using firewalld, ufw or any other firewall, that disallows connections from br-mailcow to your external interface. Both firewalld and ufw disallow this by default. It is often not enough to just stop these firewall services. You'd need to stop mailcow ( docker-compose down ), stop the firewall service, flush the chains and restart Docker. You can also skip this validation method by setting SKIP_HTTP_VERIFICATION=y in \"mailcow.conf\". Be warned that this is discouraged. In most cases, the HTTP verification is skipped to workaround unknown NAT reflection issues, which are not resolved by ignoring this specific network misconfiguration. If you encounter problems generating TLSA records in the DNS overview within mailcow, you are most likely having issues with NAT reflection you should fix. If you changed a SKIP_* parameter, run docker-compose up -d to apply your changes.","title":"Validation errors and how to skip validation"},{"location":"firststeps-ssl/#disable-lets-encrypt","text":"","title":"Disable Let's Encrypt"},{"location":"firststeps-ssl/#disable-lets-encrypt-completely","text":"Set SKIP_LETS_ENCRYPT=y in \"mailcow.conf\" and recreate \"acme-mailcow\" by running docker-compose up -d .","title":"Disable Let's Encrypt completely"},{"location":"firststeps-ssl/#skip-all-names-but-mailcow_hostname","text":"Add ONLY_MAILCOW_HOSTNAME=y to \"mailcow.conf\" and recreate \"acme-mailcow\" by running docker-compose up -d .","title":"Skip all names but ${MAILCOW_HOSTNAME}"},{"location":"firststeps-ssl/#the-lets-encrypt-subjectaltname-limit-of-100-domains","text":"Let's Encrypt currently has a limit of 100 Domain Names per Certificate . By default, \"acme-mailcow\" will create a single SAN certificate for all validated domains (see the first section and Additional domain names ). This provides best compatibility but means the Let's Encrypt limit exceeds if you add too many domains to a single mailcow installation. To solve this, you can configure ENABLE_SSL_SNI to generate: A main server certificate with MAILCOW_HOSTNAME and all fully qualified domain names in the ADDITIONAL_SAN config One additional certificate for each domain found in the database with autodiscover. , autoconfig. and any other ADDITIONAL_SAN configured in this format (subdomain.*). Limitations: A certificate name ADDITIONAL_SAN=test.example.com will be added as SAN to the main certificate. 
A separate certificate/key pair will not be generated for this format. Postfix, Dovecot and Nginx will then serve these certificates with SNI. Set ENABLE_SSL_SNI=y in \"mailcow.conf\" and recreate \"acme-mailcow\" by running docker-compose up -d . Warning Not all clients support SNI, see Dovecot documentation or Wikipedia . You should make sure these clients use the MAILCOW_HOSTNAME for secure connections if you enable this feature. Here is an example: MAILCOW_HOSTNAME=server.email.tld ADDITIONAL_SAN=webmail.email.tld,mail.* Mailcow email domains: \"domain1.tld\" and \"domain2.tld\" The following certificates will be generated: server.email.tld, webmail.email.tld -> this is the default certificate, all clients can connect with these domains mail.domain1.tld, autoconfig.domain1.tld, autodiscover.domain1.tld -> individual certificate for domain1.tld, cannot be used by clients without SNI support mail.domain2.tld, autoconfig.domain2.tld, autodiscover.domain2.tld -> individual certificate for domain2.tld, cannot be used by clients without SNI support","title":"The Let's Encrypt subjectAltName limit of 100 domains"},{"location":"firststeps-ssl/#how-to-use-your-own-certificate","text":"Make sure you disable mailcows internal LE client (see above). To use your own certificates, just save the combined certificate (containing the certificate and intermediate CA/CA if any) to data/assets/ssl/cert.pem and the corresponding key to data/assets/ssl/key.pem . IMPORTANT: Do not use symbolic links! Make sure you copy the certificates and do not link them to data/assets/ssl . Restart affected services afterwards: docker restart $(docker ps -qaf name=postfix-mailcow) docker restart $(docker ps -qaf name=nginx-mailcow) docker restart $(docker ps -qaf name=dovecot-mailcow) See Post-hook script for non-mailcow ACME clients for a full example script.","title":"How to use your own certificate"},{"location":"firststeps-ssl/#test-against-staging-acme-directory","text":"Edit mailcow.conf and add LE_STAGING=y . Run docker-compose up -d to activate your changes.","title":"Test against staging ACME directory"},{"location":"firststeps-ssl/#custom-directory-url","text":"Edit mailcow.conf and add the corresponding directory URL to the new variable DIRECTORY_URL : DIRECTORY_URL=https://acme-custom-v9000.api.letsencrypt.org/directory You cannot use LE_STAGING with DIRECTORY_URL . If both are set, only LE_STAGING is used. Run docker-compose up -d to activate your changes.","title":"Custom directory URL"},{"location":"firststeps-ssl/#check-your-configuration","text":"Run docker-compose logs acme-mailcow to find out why a validation fails. To check if nginx serves the correct certificate, simply use a browser of your choice and check the displayed certificate. To check the certificate served by Postfix, Dovecot and Nginx we will use openssl : # Connect via SMTP (587) echo \"Q\" | openssl s_client -starttls smtp -crlf -connect mx.mailcow.email:587 # Connect via IMAP (143) echo \"Q\" | openssl s_client -starttls imap -showcerts -connect mx.mailcow.email:143 # Connect via HTTPS (443) echo \"Q\" | openssl s_client -connect mx.mailcow.email:443 To validate the expiry dates as returned by openssl against MAILCOW_HOSTNAME, you are able to use our helper script: cd /opt/mailcow-dockerized bash helper-scripts/expiry-dates.sh","title":"Check your configuration"},{"location":"firststeps-sync_jobs_migration/","text":"Sync jobs are used to copy or move existing emails from an external IMAP server or within mailcow's existing mailboxes. 
Info Depending on your mailbox's ACL you may not have the option to add a sync job. Please contact your domain administrator if so. Setup a Sync Job \u00b6 In the \"Mail Setup\" or \"User Settings\" interface, create a new sync job. If you are an administrator, select the username of the downstream mailcow mailbox in the \"Username\" dropdown. Fill in the \"Host\" and \"Port\" fields with their respective correct values from the upstream IMAP server. In the \"Username\" and \"Password\" fields, supply the correct access credentials from the upstream IMAP server. Select the \"Encryption Method\". If the upstream IMAP server uses port 143, the encryption method is likely TLS; for port 993 it is likely SSL. Nevertheless, you can use PLAIN authentication, but it is strongly discouraged. For all the other fields, you can leave them as is or modify them as desired. Make sure to tick \"Active\" and click \"Add\". Info Once completed, log into the mailbox and check if all emails are imported correctly. If all goes well, all your mails shall end up in your new mailbox. And don't forget to delete or deactivate the sync job after it is used.","title":"Sync job migration"},{"location":"firststeps-sync_jobs_migration/#setup-a-sync-job","text":"In the \"Mail Setup\" or \"User Settings\" interface, create a new sync job. If you are an administrator, select the username of the downstream mailcow mailbox in the \"Username\" dropdown. Fill in the \"Host\" and \"Port\" fields with their respective correct values from the upstream IMAP server. In the \"Username\" and \"Password\" fields, supply the correct access credentials from the upstream IMAP server. Select the \"Encryption Method\". If the upstream IMAP server uses port 143, the encryption method is likely TLS; for port 993 it is likely SSL. Nevertheless, you can use PLAIN authentication, but it is strongly discouraged. For all the other fields, you can leave them as is or modify them as desired. Make sure to tick \"Active\" and click \"Add\". Info Once completed, log into the mailbox and check if all emails are imported correctly. If all goes well, all your mails shall end up in your new mailbox. And don't forget to delete or deactivate the sync job after it is used.","title":"Setup a Sync Job"},{"location":"i_u_m_deinstall/","text":"To remove mailcow: dockerized with all its volumes, images and containers, do: docker-compose down -v --rmi all --remove-orphans Info -v Remove named volumes declared in the volumes section of the Compose file and anonymous volumes attached to containers. --rmi Remove images. Type must be one of: all : Remove all images used by any service. local : Remove only images that don't have a custom tag set by the image field. --remove-orphans Remove containers for services not defined in the compose file. By default docker-compose down only removes currently active containers and networks defined in the docker-compose.yml .","title":"Deinstallation"},{"location":"i_u_m_install/","text":"You need Docker (a version >= 20.10.2 is required) and Docker Compose (a version <= 2.0 is required). 1. Learn how to install Docker and Docker Compose . Quick installation for most operating systems: Docker curl -sSL https://get.docker.com/ | CHANNEL=stable sh # After the installation process is finished, you may need to enable the service and make sure it is started (e.g. CentOS 7) systemctl enable --now docker Docker-Compose Warning mailcow requires the latest version of docker-compose v1. 
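You can check which Compose generation is already installed before you continue; a quick check (the version string in the comment is only an example):
# Compose v1 prints a 1.x version string, e.g. docker-compose version 1.29.2
docker-compose --version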
It is highly recommended to use the commands below to install docker-compose . Package managers (e.g. apt , yum ) likely won't give you the correct version. Note: This command downloads docker-compose from the official Docker Github repository and is a safe method. The snippet will determine the latest supported version by mailcow. In almost all cases this is the latest version available (exceptions are broken releases or major changes not yet supported by mailcow). curl -L https://github.com/docker/compose/releases/download/$(curl -Ls https://www.servercow.de/docker-compose/latest.php)/docker-compose-$(uname -s)-$(uname -m) > /usr/local/bin/docker-compose chmod +x /usr/local/bin/docker-compose Please use the latest Docker engine available and do not use the engine that ships with your distros repository. 1.1. On SELinux enabled systems, e.g. CentOS 7: Check if \"container-selinux\" package is present on your system: rpm -qa | grep container-selinux If the above command returns an empty or no output, you should install it via your package manager. Check if docker has SELinux support enabled: docker info | grep selinux If the above command returns an empty or no output, create or edit /etc/docker/daemon.json and add \"selinux-enabled\": true . Example file content: { \"selinux-enabled\": true } Restart the docker daemon and verify SELinux is now enabled. This step is required to make sure mailcows volumes are properly labeled as declared in the compose file. If you are interested in how this works, you can check out the readme of https://github.com/containers/container-selinux which links to a lot of useful information on that topic. 2. Clone the master branch of the repository, make sure your umask equals 0022. Please clone the repository as root user and also control the stack as root. We will modify attributes - if necessary - while boostrapping the containers automatically and make sure everything is secured. The update.sh script must therefore also be run as root. It might be necessary to change ownership and other attributes of files you will otherwise not have access to. We drop permissions for every exposed application and will not run an exposed service as root! Controlling the Docker daemon as non-root user does not give you additional security. The unprivileged user will spawn the containers as root likewise. The behaviour of the stack is identical. $ su # umask 0022 # <- Verify it is 0022 # cd /opt # git clone https://github.com/mailcow/mailcow-dockerized # cd mailcow-dockerized 3. Generate a configuration file. Use a FQDN ( host.domain.tld ) as hostname when asked. ./generate_config.sh 4. Change configuration if you want or need to. nano mailcow.conf If you plan to use a reverse proxy, you can, for example, bind HTTPS to 127.0.0.1 on port 8443 and HTTP to 127.0.0.1 on port 8080. You may need to stop an existing pre-installed MTA which blocks port 25/tcp. See this chapter to learn how to reconfigure Postfix to run besides mailcow after a successful installation. Some updates modify mailcow.conf and add new parameters. It is hard to keep track of them in the documentation. Please check their description and, if unsure, ask at the known channels for advise. 4.1. Users with a MTU not equal to 1500 (e.g. OpenStack): Whenever you run into trouble and strange phenomena, please check your MTU. Edit docker-compose.yml and change the network settings according to your MTU. Add the new driver_opts parameter like this: networks: mailcow-network: ... 
driver_opts: com.docker.network.driver.mtu: 1450 ... 4.2. Users without an IPv6 enabled network on their host system: Enable IPv6. Finally. If you do not have an IPv6 enabled network on your host and you don't care for a better internet (thehe), it is recommended to disable IPv6 for the mailcow network to prevent unforeseen issues. 5. Pull the images and run the compose file. The parameter -d will start mailcow: dockerized detached: docker-compose pull docker-compose up -d Done! You can now access https://${MAILCOW_HOSTNAME} with the default credentials admin + password moohoo . Info If you are not using mailcow behind a reverse proxy, you should redirect all HTTP requests to HTTPS . The database will be initialized right after a connection to MySQL can be established. Your data will persist in multiple Docker volumes, that are not deleted when you recreate or delete containers. Run docker volume ls to see a list of all volumes. You can safely run docker-compose down without removing persistent data.","title":"Installation"},{"location":"i_u_m_migration/","text":"Warning This guide assumes you intend to migrate an existing mailcow server (source) over to a brand new, empty server (target). It takes no care about preserving any existing data on your target server and will erase anything within /var/lib/docker/volumes and thus any Docker volumes you may have already set up. Tip Alternatively, you can use the ./helper-scripts/backup_and_restore.sh script to create a full backup on the source machine, then install mailcow on the target machine as usual, copy over your mailcow.conf and use the same script to restore your backup to the target machine. 1. Install Docker and Docker Compose on your new server. Quick installation for most operation systems: Docker curl -sSL https://get.docker.com/ | CHANNEL=stable sh # After the installation process is finished, you may need to enable the service and make sure it is started (e.g. CentOS 7) systemctl enable docker.service docker-compose curl -L https://github.com/docker/compose/releases/download/$(curl -Ls https://www.servercow.de/docker-compose/latest.php)/docker-compose-$(uname -s)-$(uname -m) > /usr/local/bin/docker-compose chmod +x /usr/local/bin/docker-compose Please use the latest Docker engine available and do not use the engine that ships with your distros repository. 2. Stop Docker and assure Docker has stopped: systemctl stop docker.service systemctl status docker.service 3. Run the following commands on the source machine (take care of adding the trailing slashes in the first path parameter as shown below!) - WARNING: This command will erase anything that may already exist under /var/lib/docker/volumes on the target machine : rsync -aHhP --numeric-ids --delete /opt/mailcow-dockerized/ root@target-machine.example.com:/opt/mailcow-dockerized rsync -aHhP --numeric-ids --delete /var/lib/docker/volumes/ root@target-machine.example.com:/var/lib/docker/volumes 4. Shut down mailcow and stop Docker on the source machine. cd /opt/mailcow-dockerized docker-compose down systemctl stop docker.service 5. Repeat step 3 with the same commands. This will be much quicker than the first time. 6. Switch over to the target machine and start Docker. systemctl start docker.service 7. Now pull the mailcow Docker images on the target machine. cd /opt/mailcow-dockerized docker-compose pull 8. Start the whole mailcow stack and everything should be done! docker-compose up -d 9. 
Finally, change your DNS settings to point to the target server.","title":"Migration"},{"location":"i_u_m_update/","text":"Automatic update \u00b6 An update script in your mailcow-dockerized directory will take care of updates. But use it with caution! If you think you made a lot of changes to the mailcow code, you should use the manual update guide below. Run the update script: ./update.sh If it needs to, it will ask you how you wish to proceed. Merge errors will be reported. Some minor conflicts will be auto-corrected (in favour for the mailcow: dockerized repository code). Options \u00b6 # Options can be combined # - Check for updates and show changes ./update.sh --check # Do not try to update docker-compose, **make sure to use the latest docker-compose available** ./update.sh --no-update-compose # - Do not start mailcow after applying an update ./update.sh --skip-start # - Force update (unattended, but unsupported, use at own risk) ./update.sh --force # - Run garbage collector to cleanup old image tags and exit ./update.sh --gc # - Update with merge strategy option \"ours\" instead of \"theirs\" # This will **solve conflicts** when merging in favor for your local changes and should be avoided. Local changes will always be kept, unless we changed file XY, too. ./update.sh --ours # - Don't update, but prefetch images and exit ./update.sh --prefetch I forgot what I changed before running update.sh \u00b6 See git log --pretty=oneline | grep -i \"before update\" , you will have an output similar to this: 22cd00b5e28893ef9ddef3c2b5436453cc5223ab Before update on 2020-09-28_19_25_45 dacd4fb9b51e9e1c8a37d84485b92ffaf6c59353 Before update on 2020-08-07_13_31_31 Run git diff 22cd00b5e28893ef9ddef3c2b5436453cc5223ab to see what changed. Can I roll back? \u00b6 Yes. See the topic above, instead of a diff, you run checkout: docker-compose down # Replace commit ID 22cd00b5e28893ef9ddef3c2b5436453cc5223ab by your ID git checkout 22cd00b5e28893ef9ddef3c2b5436453cc5223ab docker-compose pull docker-compose up -d Hooks \u00b6 You can hook into the update mechanism by adding scripts called pre_commit_hook.sh and post_commit_hook.sh to your mailcows root directory. See this for more details. Footnotes \u00b6 There is no release cycle regarding updates.","title":"Update"},{"location":"i_u_m_update/#automatic-update","text":"An update script in your mailcow-dockerized directory will take care of updates. But use it with caution! If you think you made a lot of changes to the mailcow code, you should use the manual update guide below. Run the update script: ./update.sh If it needs to, it will ask you how you wish to proceed. Merge errors will be reported. Some minor conflicts will be auto-corrected (in favour for the mailcow: dockerized repository code).","title":"Automatic update"},{"location":"i_u_m_update/#options","text":"# Options can be combined # - Check for updates and show changes ./update.sh --check # Do not try to update docker-compose, **make sure to use the latest docker-compose available** ./update.sh --no-update-compose # - Do not start mailcow after applying an update ./update.sh --skip-start # - Force update (unattended, but unsupported, use at own risk) ./update.sh --force # - Run garbage collector to cleanup old image tags and exit ./update.sh --gc # - Update with merge strategy option \"ours\" instead of \"theirs\" # This will **solve conflicts** when merging in favor for your local changes and should be avoided. Local changes will always be kept, unless we changed file XY, too. 
./update.sh --ours # - Don't update, but prefetch images and exit ./update.sh --prefetch","title":"Options"},{"location":"i_u_m_update/#i-forgot-what-i-changed-before-running-updatesh","text":"See git log --pretty=oneline | grep -i \"before update\" ; you will get output similar to this: 22cd00b5e28893ef9ddef3c2b5436453cc5223ab Before update on 2020-09-28_19_25_45 dacd4fb9b51e9e1c8a37d84485b92ffaf6c59353 Before update on 2020-08-07_13_31_31 Run git diff 22cd00b5e28893ef9ddef3c2b5436453cc5223ab to see what changed.","title":"I forgot what I changed before running update.sh"},{"location":"i_u_m_update/#can-i-roll-back","text":"Yes. See the topic above; instead of a diff, you run a checkout: docker-compose down # Replace commit ID 22cd00b5e28893ef9ddef3c2b5436453cc5223ab by your ID git checkout 22cd00b5e28893ef9ddef3c2b5436453cc5223ab docker-compose pull docker-compose up -d","title":"Can I roll back?"},{"location":"i_u_m_update/#hooks","text":"You can hook into the update mechanism by adding scripts called pre_commit_hook.sh and post_commit_hook.sh to your mailcow's root directory. See this for more details.","title":"Hooks"},{"location":"i_u_m_update/#footnotes","text":"There is no release cycle regarding updates.","title":"Footnotes"},{"location":"model-acl/","text":"Editing a domain administrator or a mailbox user allows you to set restrictions on that account. Important : For overlapping modules like sync jobs, which both domain administrators and mailbox users can be granted access to, the domain administrator's permissions are inherited when logging in as a mailbox user. Some examples: 1. A domain administrator does not have access to sync jobs but can log in as a mailbox user When logging in as the mailbox user, they do not gain access to sync jobs, even if the given mailbox user has access when logging in directly 2. A domain administrator has access to sync jobs and can log in as a mailbox user The mailbox user they log in as does not have access to sync jobs The domain administrator, now logged in as the mailbox user, carries this permission over and can access sync jobs 3. A domain administrator logs in as a mailbox user Every permission that does not exist in a domain administrator's ACL is automatically granted (example: time-limited alias, TLS policy etc.)","title":"ACL"},{"location":"model-passwd/","text":"Fully supported hashing methods \u00b6 The most current mailcow fully supports the following hashing methods. The default hashing method is written in bold: BLF-CRYPT SSHA SSHA256 SSHA512 The methods above can be used in mailcow.conf as the MAILCOW_PASS_SCHEME value. Read-only hashing methods \u00b6 The following methods are supported read only . If you plan to use SOGo (as per default), you need a SOGo compatible hashing method. Please see the note at the bottom of this page on how to update the view if necessary. With SOGo disabled, all hashing methods below can still be read by mailcow and Dovecot. ARGON2I (SOGo compatible) ARGON2ID (SOGo compatible) CLEAR CLEARTEXT CRYPT (SOGo compatible) DES-CRYPT LDAP-MD5 (SOGo compatible) MD5 (SOGo compatible) MD5-CRYPT (SOGo compatible) PBKDF2 (SOGo compatible) PLAIN (SOGo compatible) PLAIN-MD4 PLAIN-MD5 PLAIN-TRUNC SHA (SOGo compatible) SHA1 (SOGo compatible) SHA256 (SOGo compatible) SHA256-CRYPT (SOGo compatible) SHA512 (SOGo compatible) SHA512-CRYPT (SOGo compatible) SMD5 (SOGo compatible) That means mailcow is able to verify users with a hash like {MD5}1a1dc91c907325c69271ddf0c944bc72 from the database. 
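If you are unsure what a hash produced by a given scheme looks like, you can generate one with Dovecot's doveadm pw tool inside the running dovecot-mailcow container. A minimal sketch (scheme and password are example values; mailcow hashes newly set passwords itself, so nothing needs to be inserted manually):
# Prints a hash of the form {BLF-CRYPT}$2y$...
docker-compose exec dovecot-mailcow doveadm pw -s BLF-CRYPT -p moohoo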
The value of MAILCOW_PASS_SCHEME will always be used to encrypt new passwords. I changed the password hashes in the \"mailbox\" SQL table and cannot login. A \"view\" needs to be updated. You can trigger this by restarting sogo-mailcow: docker-compose restart sogo-mailcow","title":"Password hashing"},{"location":"model-passwd/#fully-supported-hashing-methods","text":"The most current mailcow fully supports the following hashing methods. The default hashing method is written in bold: BLF-CRYPT SSHA SSHA256 SSHA512 The methods above can be used in mailcow.conf as MAILCOW_PASS_SCHEME value.","title":"Fully supported hashing methods"},{"location":"model-passwd/#read-only-hashing-methods","text":"The following methods are supported read only . If you plan to use SOGo (as per default), you need a SOGo compatible hashing method. Please see the note at the bottom of this page how to update the view if necessary. With SOGo disabled, all hashing methods below will be able to be read by mailcow and Dovecot. ARGON2I (SOGo compatible) ARGON2ID (SOGo compatible) CLEAR CLEARTEXT CRYPT (SOGo compatible) DES-CRYPT LDAP-MD5 (SOGo compatible) MD5 (SOGo compatible) MD5-CRYPT (SOGo compatible) PBKDF2 (SOGo compatible) PLAIN (SOGo compatible) PLAIN-MD4 PLAIN-MD5 PLAIN-TRUNC SHA (SOGo compatible) SHA1 (SOGo compatible) SHA256 (SOGo compatible) SHA256-CRYPT (SOGo compatible) SHA512 (SOGo compatible) SHA512-CRYPT (SOGo compatible) SMD5 (SOGo compatible) That means mailcow is able to verify users with a hash like {MD5}1a1dc91c907325c69271ddf0c944bc72 from the database. The value of MAILCOW_PASS_SCHEME will always be used to encrypt new passwords. I changed the password hashes in the \"mailbox\" SQL table and cannot login. A \"view\" needs to be updated. You can trigger this by restarting sogo-mailcow: docker-compose restart sogo-mailcow","title":"Read-only hashing methods"},{"location":"model-sender_rcv/","text":"When a mailbox is created, a user is allowed to send mail from and receive mail for his own mailbox address. Mailbox me @example . org is created . example . org is a primary domain . Note : a mailbox cannot be created in an alias domain . me @example . org is only known as me @example . org . me @example . org is allowed to send as me @example . org . We can add an alias domain for example.org: Alias domain alias . com is added and assigned to primary domain example . org . me @example . org is now known as me @example . org and me @alias . com . me @example . org is now allowed to send as me @example . org and me @alias . com . We can add aliases for a mailbox to receive mail for and to send from this new address. It is important to know, that you are not able to receive mail for my-alias@my-alias-domain.tld . You would need to create this particular alias. me @example . org is assigned the alias alias @example . org me @example . org is now known as me @example . org , me @alias . com , alias @example . org me @example . org is NOT known as alias @alias . com . Please note that this does not apply to catch-all aliases: Alias domain alias . com is added and assigned to primary domain example . org me @example . org is assigned the catch - all alias @example . org me @example . org is still just known as me @example . org , which is the only available send - as option Any email send to alias . com will match the catch - all alias for example . org Administrators and domain administrators can edit mailboxes to allow specific users to send as other mailbox users (\"delegate\" them). 
You can choose between mailbox users or completely disable the sender check for domains. SOGo \"mail from\" addresses \u00b6 Mailbox users can, obviously, select their own mailbox address, as well as all alias addresses and aliases that exist through alias domains. If you want to select another existing mailbox user as your \"mail from\" address, this user has to delegate you access through SOGo (see SOGo documentation). Moreover a mailcow (domain) administrator needs to grant you access as described above.","title":"Sender and receiver model"},{"location":"model-sender_rcv/#sogo-mail-from-addresses","text":"Mailbox users can, obviously, select their own mailbox address, as well as all alias addresses and aliases that exist through alias domains. If you want to select another existing mailbox user as your \"mail from\" address, this user has to delegate you access through SOGo (see SOGo documentation). Moreover a mailcow (domain) administrator needs to grant you access as described above.","title":"SOGo \"mail from\" addresses"},{"location":"prerequisite-dns/","text":"Below you can find a list of recommended DNS records . While some are mandatory for a mail server (A, MX), others are recommended to build a good reputation score (TXT/SPF) or used for auto-configuration of mail clients (SRV). References \u00b6 A good article covering all relevant topics: \"3 DNS Records Every Email Marketer Must Know\" Another great one, but Zimbra as an example platform: \"Best Practices on Email Protection: SPF, DKIM and DMARC\" An in-depth discussion of SPF, DKIM and DMARC: \"How to eliminate spam and protect your name with DMARC\" A thorough guide on understanding DMARC: \"Demystifying DMARC: A guide to preventing email spoofing\" Reverse DNS of your IP address \u00b6 Make sure that the PTR record of your IP address matches the FQDN of your mailcow host: ${MAILCOW_HOSTNAME} 1 . This record is usually set at the provider you leased the IP address (server) from. The minimal DNS configuration \u00b6 This example shows you a set of records for one domain managed by mailcow. Each domain that is added to mailcow needs at least this set of records to function correctly. # Name Type Value mail IN A 1.2.3.4 autodiscover IN CNAME mail.example.org. (your ${MAILCOW_HOSTNAME}) autoconfig IN CNAME mail.example.org. (your ${MAILCOW_HOSTNAME}) @ IN MX 10 mail.example.org. (your ${MAILCOW_HOSTNAME}) DKIM, SPF and DMARC \u00b6 In the example DNS zone file snippet below, a simple SPF TXT record is used to only allow THIS server (the MX) to send mail for your domain. Every other server is disallowed but able to (\" ~all \"). Please refer to SPF Project for further reading. # Name Type Value @ IN TXT \"v=spf1 mx a -all\" It is highly recommended to create a DKIM TXT record in your mailcow UI and set the corresponding TXT record in your DNS records. Please refer to OpenDKIM for further reading. # Name Type Value dkim._domainkey IN TXT \"v=DKIM1; k=rsa; t=s; s=email; p=...\" The last step in protecting yourself and others is the implementation of a DMARC TXT record, for example by using the DMARC Assistant ( check ). # Name Type Value _dmarc IN TXT \"v=DMARC1; p=reject; rua=mailto:mailauth-reports@example.org\" The advanced DNS configuration \u00b6 SRV records specify the server(s) for a specific protocol on your domain. If you want to explicitly announce a service as not provided, give \".\" as the target address (instead of \"mail.example.org.\"). Please refer to RFC 2782 . 
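Once the SRV records listed below are published, you can verify them from any machine with dig. A minimal check, assuming example.org is one of your mail domains (adjust the service labels to the records you actually created):
# Expected output: 0 1 443 mail.example.org.
dig +short _autodiscover._tcp.example.org SRV
# Expected output: 0 1 587 mail.example.org.
dig +short _submission._tcp.example.org SRV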
# Name Type Priority Weight Port Value _autodiscover._tcp IN SRV 0 1 443 mail.example.org. (your ${MAILCOW_HOSTNAME}) _caldavs._tcp IN SRV 0 1 443 mail.example.org. (your ${MAILCOW_HOSTNAME}) _caldavs._tcp IN TXT \"path=/SOGo/dav/\" _carddavs._tcp IN SRV 0 1 443 mail.example.org. (your ${MAILCOW_HOSTNAME}) _carddavs._tcp IN TXT \"path=/SOGo/dav/\" _imap._tcp IN SRV 0 1 143 mail.example.org. (your ${MAILCOW_HOSTNAME}) _imaps._tcp IN SRV 0 1 993 mail.example.org. (your ${MAILCOW_HOSTNAME}) _pop3._tcp IN SRV 0 1 110 mail.example.org. (your ${MAILCOW_HOSTNAME}) _pop3s._tcp IN SRV 0 1 995 mail.example.org. (your ${MAILCOW_HOSTNAME}) _sieve._tcp IN SRV 0 1 4190 mail.example.org. (your ${MAILCOW_HOSTNAME}) _smtps._tcp IN SRV 0 1 465 mail.example.org. (your ${MAILCOW_HOSTNAME}) _submission._tcp IN SRV 0 1 587 mail.example.org. (your ${MAILCOW_HOSTNAME}) Testing \u00b6 Here are some tools you can use to verify your DNS configuration: MX Toolbox (DNS, SMTP, RBL) port25.com (DKIM, SPF) Mail-tester (DKIM, DMARC, SPF) DMARC Analyzer (DMARC, SPF) MultiRBL.valli.org (DNSBL, RBL, FCrDNS) Misc \u00b6 Optional DMARC Statistics \u00b6 If you are interested in statistics, you can additionally register with some of the many below DMARC statistic services - or self-host your own. Tip It is worth considering that if you request DMARC statistic reports to your mailcow server and your mailcow server is not configured correctly to receive these reports, you may not get accurate and complete results. Please consider using an alternative email domain for receiving DMARC reports. It is worth mentioning, that the following suggestions are not a comprehensive list of all services and tools available, but only a small few of the many choices. Postmaster Tool parsedmarc (self-hosted) Fraudmarc Postmark Dmarcian Tip These services may provide you with a TXT record you need to insert into your DNS records as the provider specifies. Please ensure you read the provider's documentation from the service you choose as this process may vary. Email test for SPF, DKIM and DMARC: \u00b6 To run a rudimentary email authentication check, send a mail to check-auth at verifier.port25.com and wait for a reply. You will find a report similar to the following: ========================================================== Summary of Results ========================================================== SPF check: pass \"iprev\" check: pass DKIM check: pass DKIM check: pass SpamAssassin check: ham ========================================================== Details: ========================================================== .... The full report will contain more technical details. Fully Qualified Domain Name (FQDN) \u00b6 A Fully Qualified Domain Name ( FQDN ) is the complete (absolute) domain name for a specific computer or host, on the Internet. The FQDN consists of at least three parts divided by a dot: the hostname, the domain name, and the Top Level Domain ( TLD for short). In the example of mx.mailcow.email the hostname would be mx , the domain name mailcow and the TLD email . 
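To cross-check the reverse DNS requirement from the footnote above, compare the FQDN of your host with the PTR record of its public IP address. A small sketch (1.2.3.4 is a placeholder for your public IP):
# Should print the FQDN you use as MAILCOW_HOSTNAME, e.g. mail.example.org
hostname -f
# Should print that same FQDN
dig +short -x 1.2.3.4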
\u21a9","title":"DNS setup"},{"location":"prerequisite-dns/#references","text":"A good article covering all relevant topics: \"3 DNS Records Every Email Marketer Must Know\" Another great one, but Zimbra as an example platform: \"Best Practices on Email Protection: SPF, DKIM and DMARC\" An in-depth discussion of SPF, DKIM and DMARC: \"How to eliminate spam and protect your name with DMARC\" A thorough guide on understanding DMARC: \"Demystifying DMARC: A guide to preventing email spoofing\"","title":"References"},{"location":"prerequisite-dns/#reverse-dns-of-your-ip-address","text":"Make sure that the PTR record of your IP address matches the FQDN of your mailcow host: ${MAILCOW_HOSTNAME} 1 . This record is usually set at the provider you leased the IP address (server) from.","title":"Reverse DNS of your IP address"},{"location":"prerequisite-dns/#the-minimal-dns-configuration","text":"This example shows you a set of records for one domain managed by mailcow. Each domain that is added to mailcow needs at least this set of records to function correctly. # Name Type Value mail IN A 1.2.3.4 autodiscover IN CNAME mail.example.org. (your ${MAILCOW_HOSTNAME}) autoconfig IN CNAME mail.example.org. (your ${MAILCOW_HOSTNAME}) @ IN MX 10 mail.example.org. (your ${MAILCOW_HOSTNAME})","title":"The minimal DNS configuration"},{"location":"prerequisite-dns/#dkim-spf-and-dmarc","text":"In the example DNS zone file snippet below, a simple SPF TXT record is used to only allow THIS server (the MX) to send mail for your domain. Every other server is disallowed but able to (\" ~all \"). Please refer to SPF Project for further reading. # Name Type Value @ IN TXT \"v=spf1 mx a -all\" It is highly recommended to create a DKIM TXT record in your mailcow UI and set the corresponding TXT record in your DNS records. Please refer to OpenDKIM for further reading. # Name Type Value dkim._domainkey IN TXT \"v=DKIM1; k=rsa; t=s; s=email; p=...\" The last step in protecting yourself and others is the implementation of a DMARC TXT record, for example by using the DMARC Assistant ( check ). # Name Type Value _dmarc IN TXT \"v=DMARC1; p=reject; rua=mailto:mailauth-reports@example.org\"","title":"DKIM, SPF and DMARC"},{"location":"prerequisite-dns/#the-advanced-dns-configuration","text":"SRV records specify the server(s) for a specific protocol on your domain. If you want to explicitly announce a service as not provided, give \".\" as the target address (instead of \"mail.example.org.\"). Please refer to RFC 2782 . # Name Type Priority Weight Port Value _autodiscover._tcp IN SRV 0 1 443 mail.example.org. (your ${MAILCOW_HOSTNAME}) _caldavs._tcp IN SRV 0 1 443 mail.example.org. (your ${MAILCOW_HOSTNAME}) _caldavs._tcp IN TXT \"path=/SOGo/dav/\" _carddavs._tcp IN SRV 0 1 443 mail.example.org. (your ${MAILCOW_HOSTNAME}) _carddavs._tcp IN TXT \"path=/SOGo/dav/\" _imap._tcp IN SRV 0 1 143 mail.example.org. (your ${MAILCOW_HOSTNAME}) _imaps._tcp IN SRV 0 1 993 mail.example.org. (your ${MAILCOW_HOSTNAME}) _pop3._tcp IN SRV 0 1 110 mail.example.org. (your ${MAILCOW_HOSTNAME}) _pop3s._tcp IN SRV 0 1 995 mail.example.org. (your ${MAILCOW_HOSTNAME}) _sieve._tcp IN SRV 0 1 4190 mail.example.org. (your ${MAILCOW_HOSTNAME}) _smtps._tcp IN SRV 0 1 465 mail.example.org. (your ${MAILCOW_HOSTNAME}) _submission._tcp IN SRV 0 1 587 mail.example.org. 
(your ${MAILCOW_HOSTNAME})","title":"The advanced DNS configuration"},{"location":"prerequisite-dns/#testing","text":"Here are some tools you can use to verify your DNS configuration: MX Toolbox (DNS, SMTP, RBL) port25.com (DKIM, SPF) Mail-tester (DKIM, DMARC, SPF) DMARC Analyzer (DMARC, SPF) MultiRBL.valli.org (DNSBL, RBL, FCrDNS)","title":"Testing"},{"location":"prerequisite-dns/#misc","text":"","title":"Misc"},{"location":"prerequisite-dns/#optional-dmarc-statistics","text":"If you are interested in statistics, you can additionally register with some of the many below DMARC statistic services - or self-host your own. Tip It is worth considering that if you request DMARC statistic reports to your mailcow server and your mailcow server is not configured correctly to receive these reports, you may not get accurate and complete results. Please consider using an alternative email domain for receiving DMARC reports. It is worth mentioning, that the following suggestions are not a comprehensive list of all services and tools available, but only a small few of the many choices. Postmaster Tool parsedmarc (self-hosted) Fraudmarc Postmark Dmarcian Tip These services may provide you with a TXT record you need to insert into your DNS records as the provider specifies. Please ensure you read the provider's documentation from the service you choose as this process may vary.","title":"Optional DMARC Statistics"},{"location":"prerequisite-dns/#email-test-for-spf-dkim-and-dmarc","text":"To run a rudimentary email authentication check, send a mail to check-auth at verifier.port25.com and wait for a reply. You will find a report similar to the following: ========================================================== Summary of Results ========================================================== SPF check: pass \"iprev\" check: pass DKIM check: pass DKIM check: pass SpamAssassin check: ham ========================================================== Details: ========================================================== .... The full report will contain more technical details.","title":"Email test for SPF, DKIM and DMARC:"},{"location":"prerequisite-dns/#fully-qualified-domain-name-fqdn","text":"A Fully Qualified Domain Name ( FQDN ) is the complete (absolute) domain name for a specific computer or host, on the Internet. The FQDN consists of at least three parts divided by a dot: the hostname, the domain name, and the Top Level Domain ( TLD for short). In the example of mx.mailcow.email the hostname would be mx , the domain name mailcow and the TLD email . \u21a9","title":"Fully Qualified Domain Name (FQDN)"},{"location":"prerequisite-system/","text":"Before you run mailcow: dockerized , there are a few requirements that you should check: Warning Do not try to install mailcow on a Synology/QNAP device (any NAS), OpenVZ, LXC or other container platforms. KVM, ESX, Hyper-V and other full virtualization platforms are supported. Info mailcow: dockerized requires some ports to be open for incoming connections, so make sure that your firewall is not blocking these. Make sure that no other application is interfering with mailcow's configuration, such as another mail service A correct DNS setup is crucial to every good mailserver setup, so please make sure you got at least the basics covered before you begin! Make sure that your system has a correct date and time setup . This is crucial for various components like two factor TOTP authentication. Minimum System Resources \u00b6 OpenVZ, Virtuozzo and LXC are not supported . 
Please make sure that your system has at least the following resources: Resource mailcow: dockerized CPU 1 GHz RAM Minimum 6 GiB + 1 GiB swap (default config) Disk 20 GiB (without emails) System Type x86_64 We recommend using any distribution listed as supported by Docker CE (check https://docs.docker.com/install/ ). We test on CentOS 7, Debian 10/11 and Ubuntu 18.04/20.04. ClamAV and Solr can be greedy with RAM. You may disable them in mailcow.conf by settings SKIP_CLAMD=y and SKIP_SOLR=y . Info : We are aware that a pure MTA can run on 128 MiB RAM. mailcow is a full-grown and ready-to-use groupware with many extras making life easier. mailcow comes with a webserver, webmailer, ActiveSync (MS), antivirus, antispam, indexing (Solr), document scanner (Oletools), SQL (MariaDB), Cache (Redis), MDA, MTA, various web services etc. A single SOGo worker can acquire ~350 MiB RAM before it gets purged. The more ActiveSync connections you plan to use, the more RAM you will need. A default configuration spawns 20 workers. Usage examples \u00b6 A company with 15 phones (EAS enabled) and about 50 concurrent IMAP connections should plan 16 GiB RAM. 6 GiB RAM + 1 GiB swap are fine for most private installations while 8 GiB RAM are recommended for ~5 to 10 users. We can help to correctly plan your setup as part of our support. Firewall & Ports \u00b6 Please check if any of mailcow's standard ports are open and not in use by other applications: ss -tlpn | grep -E -w '25|80|110|143|443|465|587|993|995|4190|5222|5269|5443' # or: netstat -tulpn | grep -E -w '25|80|110|143|443|465|587|993|995|4190|5222|5269|5443' Warning There are several problems with running mailcow on a firewalld/ufw enabled system. You should disable it (if possible) and move your ruleset to the DOCKER-USER chain, which is not cleared by a Docker service restart, instead. See this (blog.donnex.net) or this (unrouted.io) guide for information about how to use iptables-persistent with the DOCKER-USER chain. As mailcow runs dockerized, INPUT rules have no effect on restricting access to mailcow. Use the FORWARD chain instead. If this command returns any results please remove or stop the application running on that port. You may also adjust mailcows ports via the mailcow.conf configuration file. Default Ports \u00b6 If you have a firewall in front of mailcow, please make sure that these ports are open for incoming connections: Service Protocol Port Container Variable Postfix SMTP TCP 25 postfix-mailcow ${SMTP_PORT} Postfix SMTPS TCP 465 postfix-mailcow ${SMTPS_PORT} Postfix Submission TCP 587 postfix-mailcow ${SUBMISSION_PORT} Dovecot IMAP TCP 143 dovecot-mailcow ${IMAP_PORT} Dovecot IMAPS TCP 993 dovecot-mailcow ${IMAPS_PORT} Dovecot POP3 TCP 110 dovecot-mailcow ${POP_PORT} Dovecot POP3S TCP 995 dovecot-mailcow ${POPS_PORT} Dovecot ManageSieve TCP 4190 dovecot-mailcow ${SIEVE_PORT} HTTP(S) TCP 80/443 nginx-mailcow ${HTTP_PORT} / ${HTTPS_PORT} To bind a service to an IP address, you can prepend the IP like this: SMTP_PORT=1.2.3.4:25 Important : You cannot use IP:PORT bindings in HTTP_PORT and HTTPS_PORT. Please use HTTP_PORT=1234 and HTTP_BIND=1.2.3.4 instead. Important for Hetzner firewalls \u00b6 Quoting https://github.com/chermsen via https://github.com/mailcow/mailcow-dockerized/issues/497#issuecomment-469847380 (THANK YOU!): For all who are struggling with the Hetzner firewall: Port 53 unimportant for the firewall configuration in this case. According to the documentation unbound uses the port range 1024-65535 for outgoing requests. 
Since the Hetzner Robot Firewall is a static firewall (each incoming packet is checked isolated) - the following rules must be applied: For TCP SRC-IP: --- DST IP: --- SRC Port: --- DST Port: 1024-65535 Protocol: tcp TCP flags: ack Action: Accept For UDP SRC-IP: --- DST IP: --- SRC Port: --- DST Port: 1024-65535 Protocol: udp Action: Accept If you want to apply a more restrictive port range you have to change the config of unbound first (after installation): {mailcow-dockerized}/data/conf/unbound/unbound.conf: outgoing-port-avoid: 0-32767 Now the firewall rules can be adjusted as follows: [...] DST Port: 32768-65535 [...] Date and Time \u00b6 To ensure that you have the correct date and time setup on your system, please check the output of timedatectl status : $ timedatectl status Local time: Sat 2017-05-06 02:12:33 CEST Universal time: Sat 2017-05-06 00:12:33 UTC RTC time: Sat 2017-05-06 00:12:32 Time zone: Europe/Berlin (CEST, +0200) NTP enabled: yes NTP synchronized: yes RTC in local TZ: no DST active: yes Last DST change: DST began at Sun 2017-03-26 01:59:59 CET Sun 2017-03-26 03:00:00 CEST Next DST change: DST ends (the clock jumps one hour backwards) at Sun 2017-10-29 02:59:59 CEST Sun 2017-10-29 02:00:00 CET The lines NTP enabled: yes and NTP synchronized: yes indicate whether you have NTP enabled and if it's synchronized. To enable NTP you need to run the command timedatectl set-ntp true . You also need to edit your /etc/systemd/timesyncd.conf : # vim /etc/systemd/timesyncd.conf [Time] Servers=0.pool.ntp.org 1.pool.ntp.org 2.pool.ntp.org 3.pool.ntp.org Hetzner Cloud (and probably others) \u00b6 Check /etc/network/interfaces.d/50-cloud-init.cfg and change the IPv6 interface from eth0:0 to eth0: # Wrong: auto eth0:0 iface eth0:0 inet6 static # Right: auto eth0 iface eth0 inet6 static Reboot or restart the interface. You may want to disable cloud-init network changes. MTU \u00b6 Especially relevant for OpenStack users: Check your MTU and set it accordingly in docker-compose.yml. See 4.1 in our installation docs .","title":"Prepare your system"},{"location":"prerequisite-system/#minimum-system-resources","text":"OpenVZ, Virtuozzo and LXC are not supported . Please make sure that your system has at least the following resources: Resource mailcow: dockerized CPU 1 GHz RAM Minimum 6 GiB + 1 GiB swap (default config) Disk 20 GiB (without emails) System Type x86_64 We recommend using any distribution listed as supported by Docker CE (check https://docs.docker.com/install/ ). We test on CentOS 7, Debian 10/11 and Ubuntu 18.04/20.04. ClamAV and Solr can be greedy with RAM. You may disable them in mailcow.conf by settings SKIP_CLAMD=y and SKIP_SOLR=y . Info : We are aware that a pure MTA can run on 128 MiB RAM. mailcow is a full-grown and ready-to-use groupware with many extras making life easier. mailcow comes with a webserver, webmailer, ActiveSync (MS), antivirus, antispam, indexing (Solr), document scanner (Oletools), SQL (MariaDB), Cache (Redis), MDA, MTA, various web services etc. A single SOGo worker can acquire ~350 MiB RAM before it gets purged. The more ActiveSync connections you plan to use, the more RAM you will need. A default configuration spawns 20 workers.","title":"Minimum System Resources"},{"location":"prerequisite-system/#usage-examples","text":"A company with 15 phones (EAS enabled) and about 50 concurrent IMAP connections should plan 16 GiB RAM. 6 GiB RAM + 1 GiB swap are fine for most private installations while 8 GiB RAM are recommended for ~5 to 10 users. 
We can help to correctly plan your setup as part of our support.","title":"Usage examples"},{"location":"prerequisite-system/#firewall-ports","text":"Please check if any of mailcow's standard ports are open and not in use by other applications: ss -tlpn | grep -E -w '25|80|110|143|443|465|587|993|995|4190|5222|5269|5443' # or: netstat -tulpn | grep -E -w '25|80|110|143|443|465|587|993|995|4190|5222|5269|5443' Warning There are several problems with running mailcow on a firewalld/ufw enabled system. You should disable it (if possible) and move your ruleset to the DOCKER-USER chain, which is not cleared by a Docker service restart, instead. See this (blog.donnex.net) or this (unrouted.io) guide for information about how to use iptables-persistent with the DOCKER-USER chain. As mailcow runs dockerized, INPUT rules have no effect on restricting access to mailcow. Use the FORWARD chain instead. If this command returns any results please remove or stop the application running on that port. You may also adjust mailcows ports via the mailcow.conf configuration file.","title":"Firewall & Ports"},{"location":"prerequisite-system/#default-ports","text":"If you have a firewall in front of mailcow, please make sure that these ports are open for incoming connections: Service Protocol Port Container Variable Postfix SMTP TCP 25 postfix-mailcow ${SMTP_PORT} Postfix SMTPS TCP 465 postfix-mailcow ${SMTPS_PORT} Postfix Submission TCP 587 postfix-mailcow ${SUBMISSION_PORT} Dovecot IMAP TCP 143 dovecot-mailcow ${IMAP_PORT} Dovecot IMAPS TCP 993 dovecot-mailcow ${IMAPS_PORT} Dovecot POP3 TCP 110 dovecot-mailcow ${POP_PORT} Dovecot POP3S TCP 995 dovecot-mailcow ${POPS_PORT} Dovecot ManageSieve TCP 4190 dovecot-mailcow ${SIEVE_PORT} HTTP(S) TCP 80/443 nginx-mailcow ${HTTP_PORT} / ${HTTPS_PORT} To bind a service to an IP address, you can prepend the IP like this: SMTP_PORT=1.2.3.4:25 Important : You cannot use IP:PORT bindings in HTTP_PORT and HTTPS_PORT. Please use HTTP_PORT=1234 and HTTP_BIND=1.2.3.4 instead.","title":"Default Ports"},{"location":"prerequisite-system/#important-for-hetzner-firewalls","text":"Quoting https://github.com/chermsen via https://github.com/mailcow/mailcow-dockerized/issues/497#issuecomment-469847380 (THANK YOU!): For all who are struggling with the Hetzner firewall: Port 53 unimportant for the firewall configuration in this case. According to the documentation unbound uses the port range 1024-65535 for outgoing requests. Since the Hetzner Robot Firewall is a static firewall (each incoming packet is checked isolated) - the following rules must be applied: For TCP SRC-IP: --- DST IP: --- SRC Port: --- DST Port: 1024-65535 Protocol: tcp TCP flags: ack Action: Accept For UDP SRC-IP: --- DST IP: --- SRC Port: --- DST Port: 1024-65535 Protocol: udp Action: Accept If you want to apply a more restrictive port range you have to change the config of unbound first (after installation): {mailcow-dockerized}/data/conf/unbound/unbound.conf: outgoing-port-avoid: 0-32767 Now the firewall rules can be adjusted as follows: [...] 
DST Port: 32768-65535 [...]","title":"Important for Hetzner firewalls"},{"location":"prerequisite-system/#date-and-time","text":"To ensure that you have the correct date and time setup on your system, please check the output of timedatectl status : $ timedatectl status Local time: Sat 2017-05-06 02:12:33 CEST Universal time: Sat 2017-05-06 00:12:33 UTC RTC time: Sat 2017-05-06 00:12:32 Time zone: Europe/Berlin (CEST, +0200) NTP enabled: yes NTP synchronized: yes RTC in local TZ: no DST active: yes Last DST change: DST began at Sun 2017-03-26 01:59:59 CET Sun 2017-03-26 03:00:00 CEST Next DST change: DST ends (the clock jumps one hour backwards) at Sun 2017-10-29 02:59:59 CEST Sun 2017-10-29 02:00:00 CET The lines NTP enabled: yes and NTP synchronized: yes indicate whether you have NTP enabled and if it's synchronized. To enable NTP you need to run the command timedatectl set-ntp true . You also need to edit your /etc/systemd/timesyncd.conf : # vim /etc/systemd/timesyncd.conf [Time] Servers=0.pool.ntp.org 1.pool.ntp.org 2.pool.ntp.org 3.pool.ntp.org","title":"Date and Time"},{"location":"prerequisite-system/#hetzner-cloud-and-probably-others","text":"Check /etc/network/interfaces.d/50-cloud-init.cfg and change the IPv6 interface from eth0:0 to eth0: # Wrong: auto eth0:0 iface eth0:0 inet6 static # Right: auto eth0 iface eth0 inet6 static Reboot or restart the interface. You may want to disable cloud-init network changes.","title":"Hetzner Cloud (and probably others)"},{"location":"prerequisite-system/#mtu","text":"Especially relevant for OpenStack users: Check your MTU and set it accordingly in docker-compose.yml. See 4.1 in our installation docs .","title":"MTU"},{"location":"restrictions_ip_accss/","text":"WIP Protocol restrictions and IP access \u00b6 Denied access will be shown to the user as failed login attempts. Protocol restrictions in Dovecot \u00b6 Protocol restrictions work by filtering the passdb query for IMAP and POP3 as well as reading the JSON value for %s_access where %s reflects the protocol seen by Dovecot. In the future we may use virtual colums in SQL to add an index on these values. Protocol restrictions in Postfix \u00b6 Filtering SMTP protocol access works by using a check_sasl_map in the smtpd_recipient_restrictions.","title":"Restrictions ip accss"},{"location":"restrictions_ip_accss/#protocol-restrictions-and-ip-access","text":"Denied access will be shown to the user as failed login attempts.","title":"Protocol restrictions and IP access"},{"location":"restrictions_ip_accss/#protocol-restrictions-in-dovecot","text":"Protocol restrictions work by filtering the passdb query for IMAP and POP3 as well as reading the JSON value for %s_access where %s reflects the protocol seen by Dovecot. In the future we may use virtual colums in SQL to add an index on these values.","title":"Protocol restrictions in Dovecot"},{"location":"restrictions_ip_accss/#protocol-restrictions-in-postfix","text":"Filtering SMTP protocol access works by using a check_sasl_map in the smtpd_recipient_restrictions.","title":"Protocol restrictions in Postfix"},{"location":"third_party-borgmatic/","text":"Borgmatic Backup \u00b6 Introduction \u00b6 Borgmatic is a great way to run backups on your Mailcow setup as it securely encrypts your data and is extremely easy to set up. Due to it's deduplication capabilities you can store a great number of backups without wasting large amounts of disk space. 
This allows you to run backups in very short intervals to ensure minimal data loss when the need arises to recover data from a backup. This document guides you through the process of enabling continuous backups for mailcow with borgmatic. The borgmatic functionality is provided by the borgmatic Docker image by b3vis . Check out the README in that repository to find out about the other options (such as push notifications) that are available. This guide only covers the basics. Setting up borgmatic \u00b6 Create or amend docker-compose.override.yml \u00b6 In the mailcow-dockerized root folder create or edit docker-compose.override.yml and insert the following configuration: version : '2.1' services : borgmatic-mailcow : image : b3vis/borgmatic hostname : mailcow restart : always dns : ${IPV4_NETWORK:-172.22.1}.254 volumes : - vmail-vol-1:/mnt/source/vmail:ro - crypt-vol-1:/mnt/source/crypt:ro - redis-vol-1:/mnt/source/redis:ro,z - rspamd-vol-1:/mnt/source/rspamd:ro,z - postfix-vol-1:/mnt/source/postfix:ro,z - mysql-socket-vol-1:/var/run/mysqld/:z - borg-config-vol-1:/root/.config/borg:Z - borg-cache-vol-1:/root/.cache/borg:Z - ./data/conf/borgmatic/etc:/etc/borgmatic.d:Z - ./data/conf/borgmatic/ssh:/root/.ssh:Z environment : - TZ=${TZ} - BORG_PASSPHRASE=YouBetterPutSomethingRealGoodHere networks : mailcow-network : aliases : - borgmatic volumes : borg-cache-vol-1 : borg-config-vol-1 : Ensure that you change the BORG_PASSPHRASE to a secure passphrase of your choosing. For security reasons we mount the maildir as read-only. If you later want to restore data you will need to remove the ro flag prior to restoring the data. This is described in the section on restoring backups. Create data/conf/borgmatic/etc/config.yaml \u00b6 Next, we need to create the borgmatic configuration. source mailcow.conf cat << EOF > data/conf/borgmatic/etc/config.yaml location: source_directories: - /mnt/source repositories: - user@rsync.net:mailcow exclude_patterns: - '/mnt/source/postfix/public/' - '/mnt/source/postfix/private/' - '/mnt/source/rspamd/rspamd.sock' retention: keep_hourly: 24 keep_daily: 7 keep_weekly: 4 keep_monthly: 6 hooks: mysql_databases: - name: ${DBNAME} username: ${DBUSER} password: ${DBPASS} options: --default-character-set=utf8mb4 EOF Creating the file in this way ensures the correct MySQL credentials are pulled in from mailcow.conf . This file is a minimal example for using borgmatic with an account named user on the cloud storage provider rsync.net for a repository called mailcow (see repositories setting). It will back up both the maildir and the MySQL database, which is all you should need to restore your mailcow setup after an incident. The retention settings will keep one archive for each hour of the past 24 hours, one per day of the week, one per week of the month and one per month of the past half year. Check the borgmatic documentation on how to use other types of repositories or configuration options. If you choose to use a local filesystem as a backup destination, make sure to mount it into the container. The container defines a volume called /mnt/borg-repository for this purpose. Note If you do not use rsync.net you can most likely drop the remote_path element from your config. Create a crontab \u00b6 Create a new text file in data/conf/borgmatic/etc/crontab.txt with the following content: 14 * * * * PATH=$PATH:/usr/bin /usr/bin/borgmatic --stats -v 0 2>&1 This file expects crontab syntax. 
The example shown here will trigger the backup to run every hour at 14 minutes past the hour and log some nice stats at the end. Place SSH keys in folder \u00b6 Place the SSH keys you intend to use for remote repository connections in data/conf/borgmatic/ssh . OpenSSH expects the usual id_rsa , id_ed25519 or similar to be in this directory. Ensure the file is chmod 600 and not world readable or OpenSSH will refuse to use the SSH key. Bring up the container \u00b6 For the next step we need the container to be up and running in a configured state. To do that run: docker-compose up -d Initialize the repository \u00b6 By now your borgmatic container is up and running, but the backups will currently fail due to the repository not being initialized. To initialize the repository run: docker-compose exec borgmatic-mailcow borgmatic init --encryption repokey-blake2 You will be asked you to authenticate the SSH host key of your remote repository server. See if it matches and confirm the prompt by entering yes . The repository will be initialized with the passphrase you set in the BORG_PASSPHRASE environment variable earlier. When using any of the repokey encryption methods the encryption key will be stored in the repository itself and not on the client, so there is no further action required in this regard. If you decide to use a keyfile instead of a repokey make sure you export the key and back it up separately. Check the Exporting Keys section for how to retrieve the key. Restart container \u00b6 Now that we finished configuring and initializing the repository restart the container to ensure it is in a defined state: docker-compose restart borgmatic-mailcow Restoring from a backup \u00b6 Restoring a backup assumes you are starting off with a fresh installation of mailcow, and you currently do not have any custom data in your maildir or your mailcow database. Restore maildir \u00b6 Warning Doing this will overwrite files in your maildir! Do not run this unless you actually intend to recover mail files from a backup. If you use SELinux in Enforcing mode If you are using mailcow on a host with SELinux in Enforcing mode you will have to temporarily disable it during extraction of the archive as the mailcow setup labels the vmail volume as private, belonging to the dovecot container exclusively. SELinux will (rightfully) prevent any other container, such as the borgmatic container, from writing to this volume. Before running a restore you must make the vmail volume writeable in docker-compose.override.yml by removing the ro flag from the volume. Then you can use the following command to restore the maildir from a backup: docker-compose exec borgmatic-mailcow borgmatic extract --path mnt/source --archive latest Alternatively you can specify any archive name from the list of archives (see Listing all available archives ) Restore MySQL \u00b6 Warning Running this command will delete and recreate the mailcow database! Do not run this unless you actually intend to recover the mailcow database from a backup. To restore the MySQL database from the latest archive use this command: docker-compose exec borgmatic-mailcow borgmatic restore --archive latest Alternatively you can specify any archive name from the list of archives (see Listing all available archives ) After restoring \u00b6 After restoring you need to restart mailcow. If you disabled SELinux enforcing mode now would be a good time to re-enable it. 
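If you only switched SELinux to permissive mode for the restore with setenforce 0 (rather than editing /etc/selinux/config), re-enabling it is a single command; a quick sketch:
# Switch back to Enforcing mode and confirm the current state
setenforce 1
getenforce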
To restart mailcow use the follwing command: docker-compose down && docker-compose up -d If you use SELinux this will also trigger the re-labeling of all files in your vmail volume. Be patient, as this may take a while if you have lots of files. Useful commands \u00b6 Manual archiving run (with debugging output) \u00b6 docker-compose exec borgmatic-mailcow borgmatic -v 2 Listing all available archives \u00b6 docker-compose exec borgmatic-mailcow borgmatic list Break lock \u00b6 When borg is interrupted during an archiving run it will leave behind a stale lock that needs to be cleared before any new operations can be performed: docker-compose exec borgmatic-mailcow borg break-lock user@rsync.net:mailcow Where user@rsync.net:mailcow is the URI to your repository. Now would be a good time to do a manual archiving run to ensure it can be successfully performed. Exporting keys \u00b6 When using any of the keyfile methods for encryption you MUST take care of backing up the key files yourself. The key files are generated when you initialize the repository. The repokey methods store the key file within the repository, so a manual backup isn't as essential. Note that in either case you also must have the passphrase to decrypt any archives. To fetch the keyfile run: docker-compose exec borgmatic-mailcow borg key export --paper user@rsync.net:mailcow Where user@rsync.net:mailcow is the URI to your repository.","title":"Borgmatic Backup"},{"location":"third_party-borgmatic/#borgmatic-backup","text":"","title":"Borgmatic Backup"},{"location":"third_party-borgmatic/#introduction","text":"Borgmatic is a great way to run backups on your Mailcow setup as it securely encrypts your data and is extremely easy to set up. Due to it's deduplication capabilities you can store a great number of backups without wasting large amounts of disk space. This allows you to run backups in very short intervals to ensure minimal data loss when the need arises to recover data from a backup. This document guides you through the process to enable continuous backups for mailcow with borgmatic. The borgmatic functionality is provided by the borgmatic Docker image by b3vis . Check out the README in that repository to find out about the other options (such as push notifications) that are available. This guide only covers the basics.","title":"Introduction"},{"location":"third_party-borgmatic/#setting-up-borgmatic","text":"","title":"Setting up borgmatic"},{"location":"third_party-borgmatic/#create-or-amend-docker-composeoverrideyml","text":"In the mailcow-dockerized root folder create or edit docker-compose.override.yml and insert the following configuration: version : '2.1' services : borgmatic-mailcow : image : b3vis/borgmatic hostname : mailcow restart : always dns : ${IPV4_NETWORK:-172.22.1}.254 volumes : - vmail-vol-1:/mnt/source/vmail:ro - crypt-vol-1:/mnt/source/crypt:ro - redis-vol-1:/mnt/source/redis:ro,z - rspamd-vol-1:/mnt/source/rspamd:ro,z - postfix-vol-1:/mnt/source/postfix:ro,z - mysql-socket-vol-1:/var/run/mysqld/:z - borg-config-vol-1:/root/.config/borg:Z - borg-cache-vol-1:/root/.cache/borg:Z - ./data/conf/borgmatic/etc:/etc/borgmatic.d:Z - ./data/conf/borgmatic/ssh:/root/.ssh:Z environment : - TZ=${TZ} - BORG_PASSPHRASE=YouBetterPutSomethingRealGoodHere networks : mailcow-network : aliases : - borgmatic volumes : borg-cache-vol-1 : borg-config-vol-1 : Ensure that you change the BORG_PASSPHRASE to a secure passphrase of your choosing. For security reasons we mount the maildir as read-only. 
If you later want to restore data you will need to remove the ro flag prior to restoring the data. This is described in the section on restoring backups.","title":"Create or amend docker-compose.override.yml"},{"location":"third_party-borgmatic/#create-dataconfborgmaticetcconfigyaml","text":"Next, we need to create the borgmatic configuration. source mailcow.conf cat << EOF > data/conf/borgmatic/etc/config.yaml location: source_directories: - /mnt/source repositories: - user@rsync.net:mailcow exclude_patterns: - '/mnt/source/postfix/public/' - '/mnt/source/postfix/private/' - '/mnt/source/rspamd/rspamd.sock' retention: keep_hourly: 24 keep_daily: 7 keep_weekly: 4 keep_monthly: 6 hooks: mysql_databases: - name: ${DBNAME} username: ${DBUSER} password: ${DBPASS} options: --default-character-set=utf8mb4 EOF Creating the file in this way ensures the correct MySQL credentials are pulled in from mailcow.conf . This file is a minimal example for using borgmatic with an account user on the cloud storage provider rsync.net for a repository called mailcow (see repositories setting). It will back up both the maildir and the MySQL database, which is all you should need to restore your mailcow setup after an incident. The retention settings will keep one archive for each hour of the past 24 hours, one per day of the week, one per week of the month and one per month of the past half year. Check the borgmatic documentation on how to use other types of repositories or configuration options. If you choose to use a local filesystem as a backup destination make sure to mount it into the container. The container defines a volume called /mnt/borg-repository for this purpose. Note If you do not use rsync.net you can most likely drop the remote_path element from your config.","title":"Create data/conf/borgmatic/etc/config.yaml"},{"location":"third_party-borgmatic/#create-a-crontab","text":"Create a new text file in data/conf/borgmatic/etc/crontab.txt with the following content: 14 * * * * PATH=$PATH:/usr/bin /usr/bin/borgmatic --stats -v 0 2>&1 This file expects crontab syntax. The example shown here will trigger the backup to run every hour at 14 minutes past the hour and log some nice stats at the end.","title":"Create a crontab"},{"location":"third_party-borgmatic/#place-ssh-keys-in-folder","text":"Place the SSH keys you intend to use for remote repository connections in data/conf/borgmatic/ssh . OpenSSH expects the usual id_rsa , id_ed25519 or similar to be in this directory. Ensure the file is chmod 600 and not world readable or OpenSSH will refuse to use the SSH key.","title":"Place SSH keys in folder"},{"location":"third_party-borgmatic/#bring-up-the-container","text":"For the next step we need the container to be up and running in a configured state. To do that run: docker-compose up -d","title":"Bring up the container"},{"location":"third_party-borgmatic/#initialize-the-repository","text":"By now your borgmatic container is up and running, but the backups will currently fail due to the repository not being initialized. To initialize the repository run: docker-compose exec borgmatic-mailcow borgmatic init --encryption repokey-blake2 You will be asked to authenticate the SSH host key of your remote repository server. See if it matches and confirm the prompt by entering yes . The repository will be initialized with the passphrase you set in the BORG_PASSPHRASE environment variable earlier.
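To confirm that the freshly initialized repository is reachable and uses the expected encryption mode, you can query it directly; a small sketch, assuming the user@rsync.net:mailcow repository URI used throughout this guide:

```bash
# Show repository metadata such as encryption mode and location
docker-compose exec borgmatic-mailcow borg info user@rsync.net:mailcow

# List the archives in the repository; right after initialization this is empty
docker-compose exec borgmatic-mailcow borgmatic list
```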
When using any of the repokey encryption methods the encryption key will be stored in the repository itself and not on the client, so there is no further action required in this regard. If you decide to use a keyfile instead of a repokey make sure you export the key and back it up separately. Check the Exporting Keys section for how to retrieve the key.","title":"Initialize the repository"},{"location":"third_party-borgmatic/#restart-container","text":"Now that we finished configuring and initializing the repository restart the container to ensure it is in a defined state: docker-compose restart borgmatic-mailcow","title":"Restart container"},{"location":"third_party-borgmatic/#restoring-from-a-backup","text":"Restoring a backup assumes you are starting off with a fresh installation of mailcow, and you currently do not have any custom data in your maildir or your mailcow database.","title":"Restoring from a backup"},{"location":"third_party-borgmatic/#restore-maildir","text":"Warning Doing this will overwrite files in your maildir! Do not run this unless you actually intend to recover mail files from a backup. If you use SELinux in Enforcing mode If you are using mailcow on a host with SELinux in Enforcing mode you will have to temporarily disable it during extraction of the archive as the mailcow setup labels the vmail volume as private, belonging to the dovecot container exclusively. SELinux will (rightfully) prevent any other container, such as the borgmatic container, from writing to this volume. Before running a restore you must make the vmail volume writeable in docker-compose.override.yml by removing the ro flag from the volume. Then you can use the following command to restore the maildir from a backup: docker-compose exec borgmatic-mailcow borgmatic extract --path mnt/source --archive latest Alternatively you can specify any archive name from the list of archives (see Listing all available archives )","title":"Restore maildir"},{"location":"third_party-borgmatic/#restore-mysql","text":"Warning Running this command will delete and recreate the mailcow database! Do not run this unless you actually intend to recover the mailcow database from a backup. To restore the MySQL database from the latest archive use this command: docker-compose exec borgmatic-mailcow borgmatic restore --archive latest Alternatively you can specify any archive name from the list of archives (see Listing all available archives )","title":"Restore MySQL"},{"location":"third_party-borgmatic/#after-restoring","text":"After restoring you need to restart mailcow. If you disabled SELinux enforcing mode now would be a good time to re-enable it. To restart mailcow use the follwing command: docker-compose down && docker-compose up -d If you use SELinux this will also trigger the re-labeling of all files in your vmail volume. 
Be patient, as this may take a while if you have lots of files.","title":"After restoring"},{"location":"third_party-borgmatic/#useful-commands","text":"","title":"Useful commands"},{"location":"third_party-borgmatic/#manual-archiving-run-with-debugging-output","text":"docker-compose exec borgmatic-mailcow borgmatic -v 2","title":"Manual archiving run (with debugging output)"},{"location":"third_party-borgmatic/#listing-all-available-archives","text":"docker-compose exec borgmatic-mailcow borgmatic list","title":"Listing all available archives"},{"location":"third_party-borgmatic/#break-lock","text":"When borg is interrupted during an archiving run it will leave behind a stale lock that needs to be cleared before any new operations can be performed: docker-compose exec borgmatic-mailcow borg break-lock user@rsync.net:mailcow Where user@rsync.net:mailcow is the URI to your repository. Now would be a good time to do a manual archiving run to ensure it can be successfully performed.","title":"Break lock"},{"location":"third_party-borgmatic/#exporting-keys","text":"When using any of the keyfile methods for encryption you MUST take care of backing up the key files yourself. The key files are generated when you initialize the repository. The repokey methods store the key file within the repository, so a manual backup isn't as essential. Note that in either case you also must have the passphrase to decrypt any archives. To fetch the keyfile run: docker-compose exec borgmatic-mailcow borg key export --paper user@rsync.net:mailcow Where user@rsync.net:mailcow is the URI to your repository.","title":"Exporting keys"},{"location":"third_party-exchange_onprem/","text":"Using Microsoft Exchange in a hybrid setup is possible with mailcow. With this setup you can add mailboxes on your mailcow and still use Exchange Online Protection . All mailboxes set up in Exchange will receive their mails as usual , while with the hybrid approach additional mailboxes can be set up in mailcow without any further configuration. This setup comes in very handy if you have enabled the Office 365 security defaults and third party applications can no longer log in to your mailboxes by any of the supported methods . Requirements \u00b6 The mx Record of your domain needs to point at the Exchange mail service. Log into your Admin center and look for the DNS settings of your domain to find your personalized gateway domain. It should look like this: contoso-com.mail.protection.outlook.com . Contact your domain registrant to get further information on how to change the mx record. The domain you want to have additional mailboxes for must be set up as an internal relay domain in Exchange. Log in to your Exchange Admin Center Select the mail flow pane and click on accepted domains Select the domain and switch it from authoritative to internal relay Set up the mailcow \u00b6 Your mailcow needs to relay all mails to your personalized Exchange Host. It is the same host address we already looked up for the mx Record. Add the domain to your mailcow Add your personalized Exchange Host address as relayhost Add your personalized Exchange Host address as forwarding host to unconditionally accept all relayed mails from Exchange. (Admin > Configuration & Details > Configuration Dropdown > Forwarding Hosts) Go to the domain settings and select the newly added host on the Sender-dependent transports dropdown. Enable relaying by ticking the Relay this domain , Relay all recipients and the Relay non-existing mailboxes only
checkboxes Info From now on your mailcow will accept all mails relayed from Exchange. The inbound filtering and so the neural learning of your cow will no longer work . Because all mails are routed through Exchange the filtering process is handled there . Set up Connectors in Exchange \u00b6 All mail traffic now goes through Exchange. At this point the Exchange Online Protection already filters all incoming and outgoing mails. Now we need to set up two connectors to relay incoming mails from our Exchange Service to the mailcow and another one to allow mails relayed from the mailcow to our exchange service. You can follow the official guide from Microsoft . Warning For the connector that handles mails from your mailcow to Exchange Microsoft offers two ways of authenticating it. The recommended way is to use a tls certificate configured with a subject name that matches an accepted domain in Exchange. Otherwise you need to choose authentication with the static ip address of your mailcow. Validating \u00b6 The easiest way to validate the hybrid setup is by sending a mail from the internet to a mailbox that only exists on the mailcow and vice versa. Common Issues \u00b6 The connector validation from Exchange to your mailcow failed with 550 5.1.10 RESOLVER.ADR.RecipientNotFound; Recipient test@contoso.com not found by SMTP address lookup Possible Solution: Your domain is not set up as internal relay . Exchange therefore cannot find the recipient Mails sent from the mailcow to a mailbox in the internet cannot be sent. Non Delivery Report with error 550 5.7.64 TenantAttribution; Relay Access Denied Possible Solution: The authentication method failed. Make sure the certificate subject matches an accepted domain in Exchange. Try authenticating by static ip instead. Microsoft Guide for the connector setup and additional requirements: https://docs.microsoft.com/exchange/mail-flow-best-practices/use-connectors-to-configure-mail-flow/set-up-connectors-to-route-mail#prerequisites-for-your-on-premises-email-environment","title":"Exchange Hybrid Setup"},{"location":"third_party-exchange_onprem/#requirements","text":"The mx Record of your domain needs to point at the Exchange mail service. Log into your Admin center and look out for the dns settings of your domain to find your personalized gateway domain. It should look like this contoso-com.mail.protection.outlook.com . Contact your domain registrant to get further information on how to change mx record. The domain you want to have additional mailboxes for must be setup as internal relay domain in Exchange. Log in to your Exchange Admin Center Select the mail flow pane and click on accepted domains Select the domain and switch it from authorative to internal relay","title":"Requirements"},{"location":"third_party-exchange_onprem/#set-up-the-mailcow","text":"Your mailcow needs to relay all mails to your personalized Exchange Host. It is the same host address we already looked up for the mx Record. Add the domain to your mailcow Add your personalized Exchange Host address as relayhost Add your personalized Exchange Host address as forwarding host to unconditionally accepted all relayed mails from Exchange. (Admin > Configuration & Details > Configuration Dropdown > Forwarding Hosts) Go to the domain settings and select the newly added host on the Sender-dependent transports dropdown. Enable relaying by ticking the Relay this domain , Relay all recipients and the Relay non-existing mailboxes only. 
checkboxes Info From now on your mailcow will accept all mails relayed from Exchange. The inbound filtering and so the neural learning of your cow will no longer work . Because all mails are routed through Exchange the filtering process is handled there .","title":"Set up the mailcow"},{"location":"third_party-exchange_onprem/#set-up-connectors-in-exchange","text":"All mail traffic now goes through Exchange. At this point the Exchange Online Protection already filters all incoming and outgoing mails. Now we need to set up two connectors to relay incoming mails from our Exchange Service to the mailcow and another one to allow mails relayed from the mailcow to our exchange service. You can follow the official guide from Microsoft . Warning For the connector that handles mails from your mailcow to Exchange Microsoft offers two ways of authenticating it. The recommended way is to use a tls certificate configured with a subject name that matches an accepted domain in Exchange. Otherwise you need to choose authentication with the static ip address of your mailcow.","title":"Set up Connectors in Exchange"},{"location":"third_party-exchange_onprem/#validating","text":"The easiest way to validate the hybrid setup is by sending a mail from the internet to a mailbox that only exists on the mailcow and vice versa.","title":"Validating"},{"location":"third_party-exchange_onprem/#common-issues","text":"The connector validation from Exchange to your mailcow failed with 550 5.1.10 RESOLVER.ADR.RecipientNotFound; Recipient test@contoso.com not found by SMTP address lookup Possible Solution: Your domain is not set up as internal relay . Exchange therefore cannot find the recipient Mails sent from the mailcow to a mailbox in the internet cannot be sent. Non Delivery Report with error 550 5.7.64 TenantAttribution; Relay Access Denied Possible Solution: The authentication method failed. Make sure the certificate subject matches an accepted domain in Exchange. Try authenticating by static ip instead. Microsoft Guide for the connector setup and additional requirements: https://docs.microsoft.com/exchange/mail-flow-best-practices/use-connectors-to-configure-mail-flow/set-up-connectors-to-route-mail#prerequisites-for-your-on-premises-email-environment","title":"Common Issues"},{"location":"third_party-gitea/","text":"With Gitea's ability to authenticate over SMTP it is trivial to integrate it with mailcow. A few changes are needed: 1. Open docker-compose.override.yml and add gitea: version: '2.1' services: gitea-mailcow: image: gitea/gitea:1 volumes: - ./data/gitea:/data networks: mailcow-network: aliases: - gitea ports: - \"${GITEA_SSH_PORT:-127.0.0.1:4000}:22\" 2. Create data/conf/nginx/site.gitea.custom , add: location /gitea/ { proxy_pass http://gitea:3000/; } 3. Open mailcow.conf and define the binding you want gitea to use for SSH. Example: GITEA_SSH_PORT=127.0.0.1:4000 5. Run docker-compose up -d to bring up the gitea container and run docker-compose restart nginx-mailcow afterwards. 6. If you forced mailcow to https, execute step 9 and restart gitea with docker-compose restart gitea-mailcow . Go ahead with step 7 (remember to use https instead of http, e.g. https://mx.example.org/gitea/ ). 7. Open http://${MAILCOW_HOSTNAME}/gitea/ , for example http://mx.example.org/gitea/ . For database details set mysql as database host. Use the value of DBNAME found in mailcow.conf as database name, DBUSER as database user and DBPASS as database password. 8.
Once the installation is complete, log in as admin and set \"settings\" -> \"authorization\" -> \"enable SMTP\". SMTP Host should be postfix with port 587 , set Skip TLS Verify as we are using an unlisted SAN (\"postfix\" is most likely not part of your certificate). 9. Create data/gitea/gitea/conf/app.ini and set the following values. You can consult the gitea cheat sheet for their meaning and other possible values. [server] SSH_LISTEN_PORT = 22 # For GITEA_SSH_PORT=127.0.0.1:4000 in mailcow.conf, set: SSH_DOMAIN = 127.0.0.1 SSH_PORT = 4000 # For MAILCOW_HOSTNAME=mx.example.org in mailcow.conf (and default ports for HTTPS), set: ROOT_URL = https://mx.example.org/gitea/ 10. Restart gitea with docker-compose restart gitea-mailcow . Your users should be able to log in with mailcow managed accounts.","title":"Gitea"},{"location":"third_party-gogs/","text":"With Gogs' ability to authenticate over SMTP it is trivial to integrate it with mailcow. A few changes are needed: 1. Open docker-compose.override.yml and add Gogs: version: '2.1' services: gogs-mailcow: image: gogs/gogs volumes: - ./data/gogs:/data networks: mailcow-network: aliases: - gogs ports: - \"${GOGS_SSH_PORT:-127.0.0.1:4000}:22\" 2. Create data/conf/nginx/site.gogs.custom , add: location /gogs/ { proxy_pass http://gogs:3000/; } 3. Open mailcow.conf and define the binding you want Gogs to use for SSH. Example: GOGS_SSH_PORT=127.0.0.1:4000 5. Run docker-compose up -d to bring up the Gogs container and run docker-compose restart nginx-mailcow afterwards. 6. Open http://${MAILCOW_HOSTNAME}/gogs/ , for example http://mx.example.org/gogs/ . For database details set mysql as database host. Use the value of DBNAME found in mailcow.conf as database name, DBUSER as database user and DBPASS as database password. 7. Once the installation is complete, log in as admin and set \"settings\" -> \"authorization\" -> \"enable SMTP\". SMTP Host should be postfix with port 587 , set Skip TLS Verify as we are using an unlisted SAN (\"postfix\" is most likely not part of your certificate). 8. Create data/gogs/gogs/conf/app.ini and set the following values. You can consult the Gogs cheat sheet for their meaning and other possible values. [server] SSH_LISTEN_PORT = 22 # For GOGS_SSH_PORT=127.0.0.1:4000 in mailcow.conf, set: SSH_DOMAIN = 127.0.0.1 SSH_PORT = 4000 # For MAILCOW_HOSTNAME=mx.example.org in mailcow.conf (and default ports for HTTPS), set: ROOT_URL = https://mx.example.org/gogs/ 9. Restart Gogs with docker-compose restart gogs-mailcow . Your users should be able to log in with mailcow managed accounts.","title":"Gogs"},{"location":"third_party-mailman3/","text":"Installing mailcow and Mailman 3 based on dockerized versions \u00b6 Info This guide is a copy from dockerized-mailcow-mailman . Please post issues, questions and improvements in the issue tracker there. Warning mailcow is not responsible for any data loss, hardware damage or broken keyboards. This guide comes without any warranty. Make backups before starting, 'cause: No backup, no pity! Introduction \u00b6 This guide aims to install and configure mailcow-dockerized with docker-mailman and to provide some useful scripts. An essential condition is to preserve mailcow and Mailman in their own installations for independent updates. There are some guides and projects on the internet, but they are not up to date and/or incomplete in documentation or configuration.
This guide is based on the work of: mailcow-mailman3-dockerized by Shadowghost mailman-mailcow-integration After finishing this guide, mailcow-dockerized and docker-mailman will run and Apache as a reverse proxy will serve the web frontends. The operating system used is an Ubuntu 20.04 LTS . Installation \u00b6 This guide is based on different steps: DNS setup Install Apache as a reverse proxy Obtain SSL certificates with Let's Encrypt Install mailcow with Mailman integration Install Mailman \ud83c\udfc3 Run DNS setup \u00b6 Most of the configuration is covered by mailcow's DNS setup . After finishing this setup add another subdomain for Mailman , e.g. lists.example.org that points to the same server: # Name Type Value lists IN A 1.2.3.4 lists IN AAAA dead:beef Install Apache as a reverse proxy \u00b6 Install Apache , e.g. with this guide from Digital Ocean : How To Install the Apache Web Server on Ubuntu 20.04 . Activate certain Apache modules (as root or sudo ): a2enmod rewrite proxy proxy_http headers ssl wsgi proxy_uwsgi http2 Maybe you have to install further packages to get these modules. This PPA by Ond\u0159ej Sur\u00fd may help you. vHost configuration \u00b6 Copy the mailcow.conf and the mailman.conf into the Apache conf folder sites-available (e.g. under /etc/apache2/sites-available ). Change in mailcow.conf : - MAILCOW_HOSTNAME to your MAILCOW_HOSTNAME Change in mailman.conf : - MAILMAN_DOMAIN to your Mailman domain (e.g. lists.example.org ) Don't activate the configuration yet, as the ssl certificates and directories are still missing. Obtain SSL certificates with Let's Encrypt \u00b6 Check if your DNS config is available over the internet and points to the right IP addresses, e.g. with MXToolBox : https://mxtoolbox.com/SuperTool.aspx?action=a%3aMAILCOW_HOSTNAME https://mxtoolbox.com/SuperTool.aspx?action=aaaa%3aMAILCOW_HOSTNAME https://mxtoolbox.com/SuperTool.aspx?action=a%3aMAILMAN_DOMAIN https://mxtoolbox.com/SuperTool.aspx?action=aaaa%3aMAILMAN_DOMAIN Install certbot (as root or sudo ): apt install certbot Get the desired certificates (as root or sudo ): certbot certonly -d mailcow_HOSTNAME certbot certonly -d MAILMAN_DOMAIN Install mailcow with Mailman integration \u00b6 Install mailcow \u00b6 Follow the mailcow installation . Omit step 5 and do not pull and up with docker-compose yet! Configure mailcow \u00b6 This is also Step 4 in the official mailcow installation ( nano mailcow.conf ). So change the following variables to your needs: HTTP_PORT=18080 # don't use 8080 as mailman needs it HTTP_BIND=127.0.0.1 # HTTPS_PORT=18443 # you may use 8443 HTTPS_BIND=127.0.0.1 # SKIP_LETS_ENCRYPT=y # reverse proxy will do the SSL termination SNAT_TO_SOURCE=1.2.3.4 # change this to your IPv4 SNAT6_TO_SOURCE=dead:beef # change this to your global IPv6 Add Mailman integration \u00b6 Create the file /opt/mailcow-dockerized/docker-compose.override.yml (e.g. with nano ) and add the following lines: version: '2.1' services: postfix-mailcow: volumes: - /opt/mailman:/opt/mailman networks: - docker-mailman_mailman networks: docker-mailman_mailman: external: true The additional volume is used by Mailman to generate additional config files for mailcow postfix . The external network is built and used by Mailman . mailcow needs it to deliver incoming list mails to Mailman . Create the file /opt/mailcow-dockerized/data/conf/postfix/extra.cf (e.g.
with nano ) and add the following lines: # mailman recipient_delimiter = + unknown_local_recipient_reject_code = 550 owner_request_special = no local_recipient_maps = regexp:/opt/mailman/core/var/data/postfix_lmtp, proxy:unix:passwd.byname, $alias_maps virtual_mailbox_maps = proxy:mysql:/opt/postfix/conf/sql/mysql_virtual_mailbox_maps.cf, regexp:/opt/mailman/core/var/data/postfix_lmtp transport_maps = pcre:/opt/postfix/conf/custom_transport.pcre, pcre:/opt/postfix/conf/local_transport, proxy:mysql:/opt/postfix/conf/sql/mysql_relay_ne.cf, proxy:mysql:/opt/postfix/conf/sql/mysql_transport_maps.cf, regexp:/opt/mailman/core/var/data/postfix_lmtp relay_domains = proxy:mysql:/opt/postfix/conf/sql/mysql_virtual_relay_domain_maps.cf, regexp:/opt/mailman/core/var/data/postfix_domains relay_recipient_maps = proxy:mysql:/opt/postfix/conf/sql/mysql_relay_recipient_maps.cf, regexp:/opt/mailman/core/var/data/postfix_lmtp As we overwrite the mailcow postfix configuration here, this step may break your normal mail transports. Check the original configuration files to see if anything has changed. SSL certificates \u00b6 As we are proxying mailcow , we need to copy the SSL certificates into the mailcow file structure. The script renew-ssl.sh will do this task for us: Copy the file to /opt/mailcow-dockerized Change mailcow_HOSTNAME to your mailcow hostname Make it executable ( chmod a+x renew-ssl.sh ) Do not run it yet, as we first need Mailman You have to create a cronjob , so that new certificates will be copied. Execute as root or sudo : crontab -e To run the script every day at 5am, add: 0 5 * * * /opt/mailcow-dockerized/renew-ssl.sh Install Mailman \u00b6 Basically follow the instructions at docker-mailman . As there are a lot of them, here is in a nutshell what to do: As root or sudo : cd /opt mkdir -p mailman/core mkdir -p mailman/web git clone https://github.com/maxking/docker-mailman cd docker-mailman Configure Mailman \u00b6 Create a long key for Hyperkitty , e.g. with the Linux command cat /dev/urandom | tr -dc a-zA-Z0-9 | head -c30; echo . Save this key for a moment as HYPERKITTY_KEY. Create a long password for the database, e.g. with the Linux command cat /dev/urandom | tr -dc a-zA-Z0-9 | head -c30; echo . Save this password for a moment as DBPASS. Create a long key for Django , e.g. with the Linux command cat /dev/urandom | tr -dc a-zA-Z0-9 | head -c30; echo . Save this key for a moment as DJANGO_KEY. Create the file /opt/docker-mailman/docker-compose.override.yaml and replace HYPERKITTY_KEY , DBPASS and DJANGO_KEY with the generated values: version: '2' services: mailman-core: environment: - DATABASE_URL=postgres://mailman:DBPASS@database/mailmandb - HYPERKITTY_API_KEY=HYPERKITTY_KEY - TZ=Europe/Berlin - MTA=postfix restart: always networks: - mailman mailman-web: environment: - DATABASE_URL=postgres://mailman:DBPASS@database/mailmandb - HYPERKITTY_API_KEY=HYPERKITTY_KEY - TZ=Europe/Berlin - SECRET_KEY=DJANGO_KEY - SERVE_FROM_DOMAIN=MAILMAN_DOMAIN # e.g. lists.example.org - MAILMAN_ADMIN_USER=admin # the admin user - MAILMAN_ADMIN_EMAIL=admin@example.org # the admin mail address - UWSGI_STATIC_MAP=/static=/opt/mailman-web-data/static restart: always database: environment: - POSTGRES_PASSWORD=DBPASS restart: always At mailman-web fill in correct values for SERVE_FROM_DOMAIN (e.g. lists.example.org ), MAILMAN_ADMIN_USER and MAILMAN_ADMIN_EMAIL . You need the admin credentials to log into the web interface ( Postorius ). For setting the password for the first time use the Forgot password function in the web interface.
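If you prefer, the three secrets above can be generated in one go; this is only an illustrative helper using the same /dev/urandom approach as the guide, and the shell variable names are hypothetical:

```bash
# Illustrative helper: generate HYPERKITTY_KEY, DBPASS and DJANGO_KEY in one go
gen() { tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c30; echo; }
HYPERKITTY_KEY="$(gen)"
DBPASS="$(gen)"
DJANGO_KEY="$(gen)"
printf 'HYPERKITTY_KEY=%s\nDBPASS=%s\nDJANGO_KEY=%s\n' "$HYPERKITTY_KEY" "$DBPASS" "$DJANGO_KEY"
```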
For other configuration options read the Mailman-web and Mailman-core documentation. Configure Mailman core and Mailman web \u00b6 Create the file /opt/mailman/core/mailman-extra.cfg with the following content. mailman@example.org should be pointing to a valid mailbox or redirection. [mailman] default_language: de site_owner: mailman@example.org Create the file /opt/mailman/web/settings_local.py with the following content. mailman@example.org should be pointing to a valid mailbox or redirection. # locale LANGUAGE_CODE = 'de-de' # disable social authentication SOCIALACCOUNT_PROVIDERS = {} # change it DEFAULT_FROM_EMAIL = 'mailman@example.org' DEBUG = False You can change LANGUAGE_CODE and SOCIALACCOUNT_PROVIDERS to your needs. At the moment SOCIALACCOUNT_PROVIDERS has no effect, see issue #2 . \ud83c\udfc3 Run \u00b6 Run (as root or sudo ) a2ensite mailcow.conf a2ensite mailman.conf systemctl restart apache2 cd /opt/docker-mailman docker-compose pull docker-compose up -d cd /opt/mailcow-dockerized/ docker-compose pull ./renew-ssl.sh Wait a few minutes! The containers have to create their databases and config files. This can take a minute or more. Remarks \u00b6 New lists aren't recognized by postfix instantly \u00b6 When you create a new list and try to immediately send an e-mail, postfix responds with User doesn't exist , because postfix won't deliver it to Mailman yet. The configuration at /opt/mailman/core/var/data/postfix_lmtp is not instantly updated. If you need the list instantly, restart postfix manually: cd /opt/mailcow-dockerized docker-compose restart postfix-mailcow Update \u00b6 mailcow has its own update script in /opt/mailcow-dockerized/update.sh , see the docs . For Mailman just fetch the newest version from the GitHub repository . Backup \u00b6 mailcow has its own backup script. Read the docs for further information. Mailman doesn't state backup instructions in its README.md. In the gitbucket of pgollor is a script that may be helpful. ToDo \u00b6 install script \u00b6 Write a script like in mailman-mailcow-integration/mailman-install.sh as many of the steps are automatable. Ask for all the configuration variables and create passwords and keys. Do a (semi-)automatic installation. Have fun!","title":"Mailman 3"},{"location":"third_party-mailman3/#installing-mailcow-and-mailman-3-based-on-dockerized-versions","text":"Info This guide is a copy from dockerized-mailcow-mailman . Please post issues, questions and improvements in the issue tracker there. Warning mailcow is not responsible for any data loss, hardware damage or broken keyboards. This guide comes without any warranty. Make backups before starting, 'cause: No backup, no pity!","title":"Installing mailcow and Mailman 3 based on dockerized versions"},{"location":"third_party-mailman3/#introduction","text":"This guide aims to install and configure mailcow-dockerized with docker-mailman and to provide some useful scripts. An essential condition is to preserve mailcow and Mailman in their own installations for independent updates. There are some guides and projects on the internet, but they are not up to date and/or incomplete in documentation or configuration. This guide is based on the work of: mailcow-mailman3-dockerized by Shadowghost mailman-mailcow-integration After finishing this guide, mailcow-dockerized and docker-mailman will run and Apache as a reverse proxy will serve the web frontends.
The operating system used is an Ubuntu 20.04 LTS .","title":"Introduction"},{"location":"third_party-mailman3/#installation","text":"This guide is based on different steps: DNS setup Install Apache as a reverse proxy Obtain SSL certificates with Let's Encrypt Install mailcow with Mailman integration Install Mailman \ud83c\udfc3 Run","title":"Installation"},{"location":"third_party-mailman3/#dns-setup","text":"Most of the configuration is covered by mailcow s DNS setup . After finishing this setup add another subdomain for Mailman , e.g. lists.example.org that points to the same server: # Name Type Value lists IN A 1.2.3.4 lists IN AAAA dead:beef","title":"DNS setup"},{"location":"third_party-mailman3/#install-apache-as-a-reverse-proxy","text":"Install Apache , e.g. with this guide from Digital Ocean : How To Install the Apache Web Server on Ubuntu 20.04 . Activate certain Apache modules (as root or sudo ): a2enmod rewrite proxy proxy_http headers ssl wsgi proxy_uwsgi http2 Maybe you have to install further packages to get these modules. This PPA by Ond\u0159ej Sur\u00fd may help you.","title":"Install Apache as a reverse proxy"},{"location":"third_party-mailman3/#vhost-configuration","text":"Copy the mailcow.conf and the mailman.conf in the Apache conf folder sites-available (e.g. under /etc/apache2/sites-available ). Change in mailcow.conf : - MAILCOW_HOSTNAME to your MAILCOW_HOSTNAME Change in mailman.conf : - MAILMAN_DOMAIN to your Mailman domain (e.g. lists.example.org ) Don't activate the configuration, as the ssl certificates and directories are missing yet.","title":"vHost configuration"},{"location":"third_party-mailman3/#obtain-ssl-certificates-with-lets-encrypt","text":"Check if your DNS config is available over the internet and points to the right IP addresses, e.g. with MXToolBox : https://mxtoolbox.com/SuperTool.aspx?action=a%3aMAILCOW_HOSTNAME https://mxtoolbox.com/SuperTool.aspx?action=aaaa%3aMAILCOW_HOSTNAME https://mxtoolbox.com/SuperTool.aspx?action=a%3aMAILMAN_DOMAIN https://mxtoolbox.com/SuperTool.aspx?action=aaaa%3aMAILMAN_DOMAIN Install certbot (as root or sudo ): apt install certbot Get the desired certificates (as root or sudo ): certbot certonly -d mailcow_HOSTNAME certbot certonly -d MAILMAN_DOMAIN","title":"Obtain SSL certificates with Let's Encrypt"},{"location":"third_party-mailman3/#install-mailcow-with-mailman-integration","text":"","title":"Install mailcow with Mailman integration"},{"location":"third_party-mailman3/#install-mailcow","text":"Follow the mailcow installation . Omit step 5 and do not pull and up with docker-compose !","title":"Install mailcow"},{"location":"third_party-mailman3/#configure-mailcow","text":"This is also Step 4 in the official mailcow installation ( nano mailcow.conf ). So change to your needs and alter the following variables: HTTP_PORT=18080 # don't use 8080 as mailman needs it HTTP_BIND=127.0.0.1 # HTTPS_PORT=18443 # you may use 8443 HTTPS_BIND=127.0.0.1 # SKIP_LETS_ENCRYPT=y # reverse proxy will do the SSL termination SNAT_TO_SOURCE=1.2.3.4 # change this to your IPv4 SNAT6_TO_SOURCE=dead:beef # change this to your global IPv6","title":"Configure mailcow"},{"location":"third_party-mailman3/#add-mailman-integration","text":"Create the file /opt/mailcow-dockerized/docker-compose.override.yml (e.g. 
with nano ) and add the following lines: version: '2.1' services: postfix-mailcow: volumes: - /opt/mailman:/opt/mailman networks: - docker-mailman_mailman networks: docker-mailman_mailman: external: true The additional volume is used by Mailman to generate additional config files for mailcow postfix . The external network is build and used by Mailman . mailcow needs it to deliver incoming list mails to Mailman . Create the file /opt/mailcow-dockerized/data/conf/postfix/extra.cf (e.g. with nano ) and add the following lines: # mailman recipient_delimiter = + unknown_local_recipient_reject_code = 550 owner_request_special = no local_recipient_maps = regexp:/opt/mailman/core/var/data/postfix_lmtp, proxy:unix:passwd.byname, $alias_maps virtual_mailbox_maps = proxy:mysql:/opt/postfix/conf/sql/mysql_virtual_mailbox_maps.cf, regexp:/opt/mailman/core/var/data/postfix_lmtp transport_maps = pcre:/opt/postfix/conf/custom_transport.pcre, pcre:/opt/postfix/conf/local_transport, proxy:mysql:/opt/postfix/conf/sql/mysql_relay_ne.cf, proxy:mysql:/opt/postfix/conf/sql/mysql_transport_maps.cf, regexp:/opt/mailman/core/var/data/postfix_lmtp relay_domains = proxy:mysql:/opt/postfix/conf/sql/mysql_virtual_relay_domain_maps.cf, regexp:/opt/mailman/core/var/data/postfix_domains relay_recipient_maps = proxy:mysql:/opt/postfix/conf/sql/mysql_relay_recipient_maps.cf, regexp:/opt/mailman/core/var/data/postfix_lmtp As we overwrite mailcow postfix configuration here, this step may break your normal mail transports. Check the original configuration files if anything changed.","title":"Add Mailman integration"},{"location":"third_party-mailman3/#ssl-certificates","text":"As we proxying mailcow , we need to copy the SSL certificates into the mailcow file structure. This task will do the script renew-ssl.sh for us: Copy the file to /opt/mailcow-dockerized Change mailcow_HOSTNAME to your mailcow hostname Make it executable ( chmod a+x renew-ssl.sh ) Do not run it yet, as we first need Mailman You have to create a cronjob , so that new certificates will be copied. Execute as root or sudo : crontab -e To run the script every day at 5am, add: 0 5 * * * /opt/mailcow-dockerized/renew-ssl.sh","title":"SSL certificates"},{"location":"third_party-mailman3/#install-mailman","text":"Basicly follow the instructions at docker-mailman . As they are a lot, here is in a nuthshell what to do: As root or sudo : cd /opt mkdir -p mailman/core mkdir -p mailman/web git clone https://github.com/maxking/docker-mailman cd docker-mailman","title":"Install Mailman"},{"location":"third_party-mailman3/#configure-mailman","text":"Create a long key for Hyperkitty , e.g. with the linux command cat /dev/urandom | tr -dc a-zA-Z0-9 | head -c30; echo . Save this key for a moment as HYPERKITTY_KEY. Create a long password for the database, e.g. with the linux command cat /dev/urandom | tr -dc a-zA-Z0-9 | head -c30; echo . Save this password for a moment as DBPASS. Create a long key for Django , e.g. with the linux command cat /dev/urandom | tr -dc a-zA-Z0-9 | head -c30; echo . Save this key for a moment as DJANGO_KEY. 
Create the file /opt/docker-mailman/docker-compose.override.yaml and replace HYPERKITTY_KEY , DBPASS and DJANGO_KEY with the generated values: version: '2' services: mailman-core: environment: - DATABASE_URL=postgres://mailman:DBPASS@database/mailmandb - HYPERKITTY_API_KEY=HYPERKITTY_KEY - TZ=Europe/Berlin - MTA=postfix restart: always networks: - mailman mailman-web: environment: - DATABASE_URL=postgres://mailman:DBPASS@database/mailmandb - HYPERKITTY_API_KEY=HYPERKITTY_KEY - TZ=Europe/Berlin - SECRET_KEY=DJANGO_KEY - SERVE_FROM_DOMAIN=MAILMAN_DOMAIN # e.g. lists.example.org - MAILMAN_ADMIN_USER=admin # the admin user - MAILMAN_ADMIN_EMAIL=admin@example.org # the admin mail address - UWSGI_STATIC_MAP=/static=/opt/mailman-web-data/static restart: always database: environment: - POSTGRES_PASSWORD=DBPASS restart: always At mailman-web fill in correct values for SERVE_FROM_DOMAIN (e.g. lists.example.org ), MAILMAN_ADMIN_USER and MAILMAN_ADMIN_EMAIL . You need the admin credentials to log into the web interface ( Pistorius ). For setting the password for the first time use the Forgot password function in the web interface. About other configuration options read Mailman-web and Mailman-core documentation.","title":"Configure Mailman"},{"location":"third_party-mailman3/#configure-mailman-core-and-mailman-web","text":"Create the file /opt/mailman/core/mailman-extra.cfg with the following content. mailman@example.org should be pointing to a valid mail box or redirection. [mailman] default_language: de site_owner: mailman@example.org Create the file /opt/mailman/web/settings_local.py with the following content. mailman@example.org should be pointing to a valid mail box or redirection. # locale LANGUAGE_CODE = 'de-de' # disable social authentication SOCIALACCOUNT_PROVIDERS = {} # change it DEFAULT_FROM_EMAIL = 'mailman@example.org' DEBUG = False You can change LANGUAGE_CODE and SOCIALACCOUNT_PROVIDERS to your needs. At the moment SOCIALACCOUNT_PROVIDERS has no effect, see issue #2 .","title":"Configure Mailman core and Mailman web"},{"location":"third_party-mailman3/#run","text":"Run (as root or sudo ) a2ensite mailcow.conf a2ensite mailman.conf systemctl restart apache2 cd /opt/docker-mailman docker-compose pull docker-compose up -d cd /opt/mailcow-dockerized/ docker-compose pull ./renew-ssl.sh Wait a few minutes! The containers have to create there databases and config files. This can last up to 1 minute and more.","title":"\ud83c\udfc3 Run"},{"location":"third_party-mailman3/#remarks","text":"","title":"Remarks"},{"location":"third_party-mailman3/#new-lists-arent-recognized-by-postfix-instantly","text":"When you create a new list and try to immediately send an e-mail, postfix responses with User doesn't exist , because postfix won't deliver it to Mailman yet. The configuration at /opt/mailman/core/var/data/postfix_lmtp is not instantly updated. If you need the list instantly, restart postifx manually: cd /opt/mailcow-dockerized docker-compose restart postfix-mailcow","title":"New lists aren't recognized by postfix instantly"},{"location":"third_party-mailman3/#update","text":"mailcow has it's own update script in `/opt/mailcow-dockerized/update.sh', see the docs . For Mailman just fetch the newest version from the github repository .","title":"Update"},{"location":"third_party-mailman3/#backup","text":"mailcow has an own backup script. Read the docs for further informations. Mailman won't state backup instructions in the README.md. 
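Until you settle on proper tooling, a minimal manual backup could consist of dumping the Mailman database and archiving the data directories; this is only a sketch, assuming the paths and the mailman / mailmandb database credentials used earlier in this guide, with /opt/backup as a hypothetical target directory:

```bash
# Dump the Mailman PostgreSQL database (service "database" from docker-compose.override.yaml);
# depending on the image's auth settings this may prompt for the DBPASS you generated
cd /opt/docker-mailman
docker-compose exec -T database pg_dump -U mailman mailmandb > /opt/backup/mailman-db.sql

# Archive the Mailman core and web data directories
tar czf /opt/backup/mailman-data.tar.gz /opt/mailman
```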
In the gitbucket of pgollor is a script that may be helpful.","title":"Backup"},{"location":"third_party-mailman3/#todo","text":"","title":"ToDo"},{"location":"third_party-mailman3/#install-script","text":"Write a script like in mailman-mailcow-integration/mailman-install.sh as many of the steps are automatable. Ask for all the configuration variables and create passwords and keys. Do a (semi-)automatic installation. Have fun!","title":"install script"},{"location":"third_party-mailpiler_integration/","text":"This is a simple integration of mailcow aliases and the mailbox name into mailpiler when using IMAP authentication. Disclaimer : This is not officially maintained or supported by the mailcow project or its contributors. No warranty or support is being provided, however you're free to open issues on GitHub to file a bug or provide further ideas. The GitHub repo can be found here . Info Support for domain wildcards was implemented in Piler 1.3.10, which was released on 03.01.2021. Prior versions basically do work, but after logging in you won't see emails sent from or to the domain alias. (e.g. when @example.com is an alias for admin@example.com ) The problem to solve \u00b6 mailpiler offers authentication based on IMAP, for example: $config['ENABLE_IMAP_AUTH'] = 1; $config['IMAP_HOST'] = 'mail.example.com'; $config['IMAP_PORT'] = 993; $config['IMAP_SSL'] = true; So when you log in using patrik@example.com , you will only see delivered emails sent from or to this specific email address. When additional aliases are defined in mailcow, like team@example.com , you won't see emails sent to or from this email address even though you're a recipient of mails sent to this alias address. By hooking into the authentication process of mailpiler, we are able to get the required data via the mailcow API during login. This fires API requests to the mailcow API (requiring read-only API access) to read out the aliases your email address participates in and also the \"Name\" specified for the mailbox, to display it on the top-right of mailpiler after login. Permitted email addresses can be seen in the mailpiler settings top-right after logging in. Info This is only pulled once during the authentication process. The authorized aliases and the realname are valid for the whole duration of the user session as mailpiler sets them in the session data. If a user is removed from a specific alias, this will only take effect after the next login. The solution \u00b6 Note: File paths might vary depending on your setup. Requirements \u00b6 A working mailcow instance A working mailpiler instance ( You can find an installation guide here , check supported versions here ) A mailcow API key (read-only works just fine): Configuration & Details - Access - Read-Only Access . Don't forget to allow API access from your mailpiler IP. Warning As mailpiler authenticates against mailcow, our IMAP server, failed logins of users or bots might trigger a block for your mailpiler instance. Therefore you might want to consider whitelisting the IP address of the mailpiler instance within mailcow: Configuration & Details - Configuration - Fail2ban parameters - Whitelisted networks/hosts .
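Before wiring the key into mailpiler it can help to confirm that the API key and the access whitelist actually work; a hedged check, assuming the X-API-Key header and the /api/v1/get/alias/all endpoint of the mailcow API (adjust the hostname and key to your instance):

```bash
# Quick check that the read-only API key is accepted from the mailpiler host
curl -s -H "X-API-Key: YOUR_READONLY_API_KEY" \
  "https://mail.example.com/api/v1/get/alias/all" | head -c 300; echo
```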
Setup \u00b6 Set the custom query function of mailpiler and append this to /usr/local/etc/piler/config-site.php : $config['MAILCOW_API_KEY'] = 'YOUR_READONLY_API_KEY'; $config['MAILCOW_SET_REALNAME'] = true; // when not specified, then default is false $config['CUSTOM_EMAIL_QUERY_FUNCTION'] = 'query_mailcow_for_email_access'; include('auth-mailcow.php'); You can also change the mailcow hostname, if required: $config['MAILCOW_HOST'] = 'mail.domain.tld'; // defaults to $config['IMAP_HOST'] Download the PHP file with the functions from the GitHub repo : curl -o /usr/local/etc/piler/auth-mailcow.php https://raw.githubusercontent.com/patschi/mailpiler-mailcow-integration/master/auth-mailcow.php Done! Make sure to re-login with your IMAP credentials for changes to take effect. If it doesn't work, most likely something's wrong with the API query itself. Consider debugging by sending manual API requests to the API. (Tip: Open https://mail.domain.tld/api on your instance)","title":"Mailpiler Integration"},{"location":"third_party-mailpiler_integration/#the-problem-to-solve","text":"mailpiler offers the authentication based on IMAP, for example: $config['ENABLE_IMAP_AUTH'] = 1; $config['IMAP_HOST'] = 'mail.example.com'; $config['IMAP_PORT'] = 993; $config['IMAP_SSL'] = true; So when you log in using patrik@example.com , you will only see delivered emails sent from or to this specific email address. When additional aliases are defined in mailcow, like team@example.com , you won't see emails sent to or from this email address even the fact you're a recipient of mails sent to this alias address. By hooking into the authentication process of mailpiler, we are able to get required data via the mailcow API during login. This fires API requests to the mailcow API (requiring read-only API access) to read out the aliases your email address participates and also the \"Name\" of the mailbox specified to display it on the top-right of mailpiler after login. Permitted email addresses can be seen in the mailpiler settings top-right after logging in. Info This is only pulled once during the authentication process. The authorized aliases and the realname are valid for the whole duration of the user session as mailpiler sets them in the session data. If user is removed from specific alias, this will only take effect after next login.","title":"The problem to solve"},{"location":"third_party-mailpiler_integration/#the-solution","text":"Note: File paths might vary depending on your setup.","title":"The solution"},{"location":"third_party-mailpiler_integration/#requirements","text":"A working mailcow instance A working mailpiler instance ( You can find an installation guide here , check supported versions here ) An mailcow API key (read-only works just fine): Configuration & Details - Access - Read-Only Access . Don't forget to allow API access from your mailpiler IP. Warning As mailpiler authenticates against mailcow, our IMAP server, failed logins of users or bots might trigger a block for your mailpiler instance. 
Therefore you might want to consider whitelisting the IP address of the mailpiler instance within mailcow: Configuration & Details - Configuration - Fail2ban parameters - Whitelisted networks/hosts .","title":"Requirements"},{"location":"third_party-mailpiler_integration/#setup","text":"Set the custom query function of mailpiler and append this to /usr/local/etc/piler/config-site.php : $config['MAILCOW_API_KEY'] = 'YOUR_READONLY_API_KEY'; $config['MAILCOW_SET_REALNAME'] = true; // when not specified, then default is false $config['CUSTOM_EMAIL_QUERY_FUNCTION'] = 'query_mailcow_for_email_access'; include('auth-mailcow.php'); You can also change the mailcow hostname, if required: $config['MAILCOW_HOST'] = 'mail.domain.tld'; // defaults to $config['IMAP_HOST'] Download the PHP file with the functions from the GitHub repo : curl -o /usr/local/etc/piler/auth-mailcow.php https://raw.githubusercontent.com/patschi/mailpiler-mailcow-integration/master/auth-mailcow.php Done! Make sure to re-login with your IMAP credentials for changes to take effect. If it doesn't work, most likely something's wrong with the API query itself. Consider debugging by sending manual API requests to the API. (Tip: Open https://mail.domain.tld/api on your instance)","title":"Setup"},{"location":"third_party-nextcloud/","text":"Manage Nextcloud using the helper script \u00b6 Nextcloud can be set up (parameter -i ) and removed (parameter -p ) with the helper script included with mailcow. In order to install Nextcloud simply navigate to your mailcow-dockerized root folder and run the helper script as follows: ./helper-scripts/nextcloud.sh -i In case you have forgotten the password (e.g. for admin) and can't request a new one via the password reset link on the login screen calling the helper script with -r as parameter allows you to set a new password. Only use this option if your Nextcloud isn't configured to use mailcow for authentication as described in the next section. Configure Nextcloud to use mailcow for authentication \u00b6 The following describes how set up authentication via mailcow using the OAuth2 protocol. We will only assume that you have already set up Nextcloud at cloud.example.com and that your mailcow is running at mail.example.com . It does not matter if your Nextcloud is running on a different server, you can still use mailcow for authentication. 1. Log into mailcow as administrator. 2. Scroll down to OAuth2 Apps and click the Add button. Specify the redirect URI as https://cloud.example.com/index.php/apps/sociallogin/custom_oauth2/Mailcow and click Add . Save the client ID and secret for later. Info Some installations, including those setup using the helper script of mailcow, need to remove index.php/ from the URL to get a successful redirect: https://cloud.example.com/apps/sociallogin/custom_oauth2/Mailcow 3. Log into Nextcloud as administrator. 4. Click the button in the top right corner and select Apps . Click the search button in the toolbar, search for the Social Login plugin and click Download and enable next to it. 5. Click the button in the top right corner and select Settings . Scroll down to the Administration section on the left and click Social login . 6. Uncheck the following items: \"Disable auto create new users\" \"Allow users to connect social logins with their accounts\" \"Do not prune not available user groups on login\" \"Automatically create groups if they do not exists\" \"Restrict login for users without mapped groups\" 7. 
Check the following items: \"Prevent creating an account if the email address exists in another account\" \"Update user profile every login\" \"Disable notify admins about new users\" Click the Save button. 8. Scroll down to Custom OAuth2 and click the + button. 9. Configure the parameters as follows: Internal name: Mailcow Title: Mailcow API Base URL: https://mail.example.com Authorize URL: https://mail.example.com/oauth/authorize Token URL: https://mail.example.com/oauth/token Profile URL: https://mail.example.com/oauth/profile Logout URL: (leave blank) Client ID: (what you obtained in step 1) Client Secret: (what you obtained in step 1) Scope: profile Click the Save button at the very bottom of the page. If you have previously used Nextcloud with mailcow authentication via user_external/IMAP, you need to perform some additional steps to link your existing user accounts with OAuth2. 1. Click the button in the top right corner and select Apps . Scroll down to the External user authentication app and click Remove next to it. 2. Run the following queries in your Nextcloud database (if you set up Nextcloud using mailcow's script, you can run source mailcow.conf && docker-compose exec mysql-mailcow mysql -u$DBUSER -p$DBPASS $DBNAME ): INSERT INTO nc_users (uid, uid_lower) SELECT DISTINCT uid, LOWER(uid) FROM nc_users_external; INSERT INTO nc_sociallogin_connect (uid, identifier) SELECT DISTINCT uid, CONCAT(\"Mailcow-\", uid) FROM nc_users_external; If you have previously used Nextcloud without mailcow authentication, but with the same usernames as mailcow, you can also link your existing user accounts with OAuth2. 1. Run the following queries in your Nextcloud database (if you set up Nextcloud using mailcow's script, you can run source mailcow.conf && docker-compose exec mysql-mailcow mysql -u$DBUSER -p$DBPASS $DBNAME ): INSERT INTO nc_sociallogin_connect (uid, identifier) SELECT DISTINCT uid, CONCAT(\"Mailcow-\", uid) FROM nc_users; Update \u00b6 The Nextcloud instance can be updated easily with the web update mechanism. In the case of larger updates, there may be further changes to be made after the update. After the Nextcloud instance has been checked, problems are shown. This can be e.g. missing indices in the DB or similar. It shows which commands have to be executed, these have to be placed in the php-fpm-mailcow container. As an an example run the following command to add the missing indices. docker exec -it -u www-data $(docker ps -f name=php-fpm-mailcow -q) bash -c \"php /web/nextcloud/occ db:add-missing-indices\" Debugging & Troubleshooting \u00b6 It may happen that you cannot reach the Nextcloud instance from your network. This may be due to the fact that the entry of your subnet in the array 'trusted_proxies' is missing. You can make changes in the Nextcloud config.php in data/web/nextcloud/config/* . 'trusted_proxies' => array ( 0 => 'fd4d:6169:6c63:6f77::/64', 1 => '172.22.1.0/24', 2 => 'NewSubnet/24', ), After the changes have been made, the nginx container must be restarted. docker-compose restart nginx-mailcow","title":"Nextcloud"},{"location":"third_party-nextcloud/#manage-nextcloud-using-the-helper-script","text":"Nextcloud can be set up (parameter -i ) and removed (parameter -p ) with the helper script included with mailcow. In order to install Nextcloud simply navigate to your mailcow-dockerized root folder and run the helper script as follows: ./helper-scripts/nextcloud.sh -i In case you have forgotten the password (e.g. 
for admin) and can't request a new one via the password reset link on the login screen calling the helper script with -r as parameter allows you to set a new password. Only use this option if your Nextcloud isn't configured to use mailcow for authentication as described in the next section.","title":"Manage Nextcloud using the helper script"},{"location":"third_party-nextcloud/#configure-nextcloud-to-use-mailcow-for-authentication","text":"The following describes how set up authentication via mailcow using the OAuth2 protocol. We will only assume that you have already set up Nextcloud at cloud.example.com and that your mailcow is running at mail.example.com . It does not matter if your Nextcloud is running on a different server, you can still use mailcow for authentication. 1. Log into mailcow as administrator. 2. Scroll down to OAuth2 Apps and click the Add button. Specify the redirect URI as https://cloud.example.com/index.php/apps/sociallogin/custom_oauth2/Mailcow and click Add . Save the client ID and secret for later. Info Some installations, including those setup using the helper script of mailcow, need to remove index.php/ from the URL to get a successful redirect: https://cloud.example.com/apps/sociallogin/custom_oauth2/Mailcow 3. Log into Nextcloud as administrator. 4. Click the button in the top right corner and select Apps . Click the search button in the toolbar, search for the Social Login plugin and click Download and enable next to it. 5. Click the button in the top right corner and select Settings . Scroll down to the Administration section on the left and click Social login . 6. Uncheck the following items: \"Disable auto create new users\" \"Allow users to connect social logins with their accounts\" \"Do not prune not available user groups on login\" \"Automatically create groups if they do not exists\" \"Restrict login for users without mapped groups\" 7. Check the following items: \"Prevent creating an account if the email address exists in another account\" \"Update user profile every login\" \"Disable notify admins about new users\" Click the Save button. 8. Scroll down to Custom OAuth2 and click the + button. 9. Configure the parameters as follows: Internal name: Mailcow Title: Mailcow API Base URL: https://mail.example.com Authorize URL: https://mail.example.com/oauth/authorize Token URL: https://mail.example.com/oauth/token Profile URL: https://mail.example.com/oauth/profile Logout URL: (leave blank) Client ID: (what you obtained in step 1) Client Secret: (what you obtained in step 1) Scope: profile Click the Save button at the very bottom of the page. If you have previously used Nextcloud with mailcow authentication via user_external/IMAP, you need to perform some additional steps to link your existing user accounts with OAuth2. 1. Click the button in the top right corner and select Apps . Scroll down to the External user authentication app and click Remove next to it. 2. Run the following queries in your Nextcloud database (if you set up Nextcloud using mailcow's script, you can run source mailcow.conf && docker-compose exec mysql-mailcow mysql -u$DBUSER -p$DBPASS $DBNAME ): INSERT INTO nc_users (uid, uid_lower) SELECT DISTINCT uid, LOWER(uid) FROM nc_users_external; INSERT INTO nc_sociallogin_connect (uid, identifier) SELECT DISTINCT uid, CONCAT(\"Mailcow-\", uid) FROM nc_users_external; If you have previously used Nextcloud without mailcow authentication, but with the same usernames as mailcow, you can also link your existing user accounts with OAuth2. 1. 
Run the following queries in your Nextcloud database (if you set up Nextcloud using mailcow's script, you can run source mailcow.conf && docker-compose exec mysql-mailcow mysql -u$DBUSER -p$DBPASS $DBNAME ): INSERT INTO nc_sociallogin_connect (uid, identifier) SELECT DISTINCT uid, CONCAT(\"Mailcow-\", uid) FROM nc_users;","title":"Configure Nextcloud to use mailcow for authentication"},{"location":"third_party-nextcloud/#update","text":"The Nextcloud instance can be updated easily with the web update mechanism. In the case of larger updates, there may be further changes to be made after the update. After the Nextcloud instance has been checked, problems are shown. This can be e.g. missing indices in the DB or similar. It shows which commands have to be executed, these have to be executed in the php-fpm-mailcow container. As an example, run the following command to add the missing indices. docker exec -it -u www-data $(docker ps -f name=php-fpm-mailcow -q) bash -c \"php /web/nextcloud/occ db:add-missing-indices\"","title":"Update"},{"location":"third_party-nextcloud/#debugging-troubleshooting","text":"It may happen that you cannot reach the Nextcloud instance from your network. This may be because your subnet is missing from the 'trusted_proxies' array. You can make changes in the Nextcloud config.php in data/web/nextcloud/config/* . 'trusted_proxies' => array ( 0 => 'fd4d:6169:6c63:6f77::/64', 1 => '172.22.1.0/24', 2 => 'NewSubnet/24', ), After the changes have been made, the nginx container must be restarted. docker-compose restart nginx-mailcow","title":"Debugging & Troubleshooting"},{"location":"third_party-portainer/","text":"In order to enable Portainer, the docker-compose.yml and site.conf for Nginx must be modified. 1. Create a new file docker-compose.override.yml in the mailcow-dockerized root folder and insert the following configuration: version: '2.1' services: portainer-mailcow: image: portainer/portainer-ce volumes: - /var/run/docker.sock:/var/run/docker.sock - ./data/conf/portainer:/data restart: always dns: - 172.22.1.254 dns_search: mailcow-network networks: mailcow-network: aliases: - portainer 2a. Create data/conf/nginx/portainer.conf : upstream portainer { server portainer-mailcow:9000; } map $http_upgrade $connection_upgrade { default upgrade; '' close; } 2b. Insert a new location into the default mailcow site by creating the file data/conf/nginx/site.portainer.custom : location /portainer/ { proxy_http_version 1.1; proxy_set_header Host $http_host; # required for docker client's sake proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_read_timeout 900; proxy_set_header Connection \"\"; proxy_buffers 32 4k; proxy_pass http://portainer/; } location /portainer/api/websocket/ { proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_pass http://portainer/api/websocket/; } 3. Apply your changes: docker-compose up -d && docker-compose restart nginx-mailcow Now you can simply navigate to https://${MAILCOW_HOSTNAME}/portainer/ to view your Portainer container monitoring page. You\u2019ll then be prompted to specify a new password for the admin account.
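If that page does not load, you can check from a shell on the host whether nginx serves the new location at all. This is a minimal sketch, assuming it is run from the mailcow-dockerized root so that MAILCOW_HOSTNAME can be sourced from mailcow.conf:

```bash
# Sketch: expect an HTTP status line (200 or a redirect) from the proxied Portainer location
source mailcow.conf
curl -kIs "https://${MAILCOW_HOSTNAME}/portainer/" | head -n 1
```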
After specifying your password, you\u2019ll then be able to connect to the Portainer UI.","title":"Portainer"},{"location":"third_party-roundcube/","text":"Installing Roundcube \u00b6 Download Roundcube 1.5.x to the web htdocs directory and extract it (here rc/ ): # Check for a newer release! cd data/web wget -O - https://github.com/roundcube/roundcubemail/releases/download/1.5.2/roundcubemail-1.5.2-complete.tar.gz | tar xfvz - # Change folder name mv roundcubemail-1.5.2 rc # Change permissions chown -R root: rc/ # Fix Allow remote resources (https://github.com/roundcube/roundcubemail/issues/8170) should not be required in 1.6 sed -i \"s/\\$prefix = '\\.\\/';/\\$prefix = preg_replace\\('\\/\\[\\?\\&]\\.\\*\\$\\/', '', \\$_SERVER\\['REQUEST_URI'] \\?\\? ''\\) \\?: '\\.\\/';/g\" rc/program/include/rcmail.php If you need spell check features, create a file data/hooks/phpfpm/aspell.sh with the following content, then chmod +x data/hooks/phpfpm/aspell.sh . This installs a local spell check engine. Note, most modern web browsers have built in spell check, so you may not want/need this. #!/bin/bash apk update apk add aspell-en # or any other language Create a file data/web/rc/config/config.inc.php with the following content. - Change the des_key parameter to a random value. It is used to temporarily store your IMAP password. - The db_prefix is optional but recommended. - If you didn't install spell check in the above step, remove spellcheck_engine parameter and replace it with $config['enable_spellcheck'] = false; . array('verify_peer' => false, 'verify_peer_name' => false, 'allow_self_signed' => true) ); $config['enable_installer'] = true; $config['smtp_conn_options'] = array( 'ssl' => array('verify_peer' => false, 'verify_peer_name' => false, 'allow_self_signed' => true) ); $config['db_prefix'] = 'mailcow_rc1'; Point your browser to https://myserver/rc/installer and follow the instructions. Initialize the database and leave the installer. Delete the directory data/web/rc/installer after a successful installation! Configure ManageSieve filtering \u00b6 Open data/web/rc/plugins/managesieve/config.inc.php and change the following parameters (or add them at the bottom of that file): $config['managesieve_port'] = 4190; $config['managesieve_host'] = 'tls://dovecot'; $config['managesieve_conn_options'] = array( 'ssl' => array('verify_peer' => false, 'verify_peer_name' => false, 'allow_self_signed' => true) ); // Enables separate management interface for vacation responses (out-of-office) // 0 - no separate section (default), // 1 - add Vacation section, // 2 - add Vacation section, but hide Filters section $config['managesieve_vacation'] = 1; Enable change password function in Roundcube \u00b6 Open data/web/rc/config/config.inc.php and enable the password plugin: ... $config['plugins'] = array( 'archive', 'password', ); ... Open data/web/rc/plugins/password/password.php , search for case 'ssha': and add above: case 'ssha256': $salt = rcube_utils::random_bytes(8); $crypted = base64_encode( hash('sha256', $password . $salt, TRUE ) . 
$salt ); $prefix = '{SSHA256}'; break; Open data/web/rc/plugins/password/config.inc.php and change the following parameters (or add them at the bottom of that file): $config['password_driver'] = 'sql'; $config['password_algorithm'] = 'ssha256'; $config['password_algorithm_prefix'] = '{SSHA256}'; $config['password_query'] = \"UPDATE mailbox SET password = %P WHERE username = %u\"; Integrate CardDAV addressbooks in Roundcube \u00b6 Download the latest release of RCMCardDAV to the Roundcube plugin directory and extract it (here rc/plugins ): cd data/web/rc/plugins wget -O - https://github.com/mstilkerich/rcmcarddav/releases/download/v4.3.0/carddav-v4.3.0.tar.gz | tar xfvz - chown -R root: carddav/ Copy the file config.inc.php.dist to config.inc.php (here in rc/plugins/carddav ) and append the following preset to the end of the file - don't forget to replace mx.example.org with your own hostname: $prefs['SOGo'] = array( 'name' => 'SOGo', 'username' => '%u', 'password' => '%p', 'url' => 'https://mx.example.org/SOGo/dav/%u/', 'carddav_name_only' => true, 'use_categories' => true, 'active' => true, 'readonly' => false, 'refresh_time' => '02:00:00', 'fixed' => array( 'active', 'name', 'username', 'password', 'refresh_time' ), 'hide' => false, ); Please note, that this preset only integrates the default addressbook (the one that's named \"Personal Address Book\" and can't be deleted). Additional addressbooks are currently not automatically detected but can be manually added within the roundecube settings. Enable the plugin by adding carddav to $config['plugins'] in rc/config/config.inc.php . If you want to remove the default addressbooks (stored in the Roundcube database), so that only the CardDAV addressbooks are accessible, append $config['address_book_type'] = ''; to the config file data/web/rc/config/config.inc.php . Optionally, you can add Roundcube's link to the mailcow Apps list. To do this, open or create data/web/inc/vars.local.inc.php and add the following code-block: NOTE: Don't forget to add the 'SOGo', 'link' => '/SOGo/' ), array( 'name' => 'Roundcube', 'link' => '/rc/' ) ); ... Upgrading Roundcube \u00b6 Upgrading Roundcube is rather simple, go to the Github releases page for Roundcube and get the link for the \"complete.tar.gz\" file for the wanted release. Then follow the below commands and change the URL and Roundcube folder name if needed. # Enter a bash session of the mailcow PHP container docker exec -it mailcowdockerized_php-fpm-mailcow_1 bash # Install required upgrade dependency, then upgrade Roundcube to wanted release apk add rsync cd /tmp wget -O - https://github.com/roundcube/roundcubemail/releases/download/1.5.2/roundcubemail-1.5.2-complete.tar.gz | tar xfvz - cd roundcubemail-1.5.2 bin/installto.sh /web/rc # Type 'Y' and press enter to upgrade your install of Roundcube # Remove leftover files cd /tmp rm -rf roundcube* # Fix Allow remote resources (https://github.com/roundcube/roundcubemail/issues/8170) should not be required in 1.6 sed -i \"s/\\$prefix = '\\.\\/';/\\$prefix = preg_replace\\('\\/\\[\\?\\&]\\.\\*\\$\\/', '', \\$_SERVER\\['REQUEST_URI'] \\?\\? ''\\) \\?: '\\.\\/';/g\" /web/rc/program/include/rcmail.php Let admins log into Roundcube without password \u00b6 First, install plugin dovecot_impersonate and add Roundcube as an app (see above). 
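If you are unsure how to install the plugin, the following is only a minimal sketch, assuming the plugin is cloned from the repository linked above (the URL is left as a placeholder here) and enabled like any other Roundcube plugin:

```bash
# Sketch: fetch the dovecot_impersonate plugin into Roundcube's plugin directory
cd data/web/rc/plugins
git clone <dovecot_impersonate repository URL> dovecot_impersonate
# Then add 'dovecot_impersonate' to $config['plugins'] in data/web/rc/config/config.inc.php
```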
Edit mailcow.conf and add the following: # Allow admins to log into Roundcube as email user (without any password) # Roundcube with plugin dovecot_impersonate must be installed first ALLOW_ADMIN_EMAIL_LOGIN_ROUNDCUBE=y Edit docker-compose.override.yml and create/extend the section for php-fpm-mailcow : version: '2.1' services: php-fpm-mailcow: environment: - ALLOW_ADMIN_EMAIL_LOGIN_ROUNDCUBE=${ALLOW_ADMIN_EMAIL_LOGIN_ROUNDCUBE:-n} Edit data/web/js/site/mailbox.js and add the following code after if (ALLOW_ADMIN_EMAIL_LOGIN) { ... } if ( ALLOW_ADMIN_EMAIL_LOGIN_ROUNDCUBE ) { item . action += ' Roundcube' ; } Edit data/web/mailbox.php and add this line to array $template_data : 'allow_admin_email_login_roundcube' => (preg_match(\"/^(yes|y)+$/i\", $_ENV[\"ALLOW_ADMIN_EMAIL_LOGIN_ROUNDCUBE\"])) ? 'true' : 'false', Edit data/web/templates/mailbox.twig and add this code to the bottom of the javascript section : var ALLOW_ADMIN_EMAIL_LOGIN_ROUNDCUBE = {{ allow_admin_email_login_roundcube }}; Copy the contents of the following files from this Snippet : data/web/inc/lib/RoundcubeAutoLogin.php data/web/rc-auth.php Finally, restart mailcow: docker-compose down docker-compose up -d","title":"Roundcube"},{"location":"third_party-roundcube/#installing-roundcube","text":"Download Roundcube 1.5.x to the web htdocs directory and extract it (here rc/ ): # Check for a newer release! cd data/web wget -O - https://github.com/roundcube/roundcubemail/releases/download/1.5.2/roundcubemail-1.5.2-complete.tar.gz | tar xfvz - # Change folder name mv roundcubemail-1.5.2 rc # Change permissions chown -R root: rc/ # Fix Allow remote resources (https://github.com/roundcube/roundcubemail/issues/8170) should not be required in 1.6 sed -i \"s/\\$prefix = '\\.\\/';/\\$prefix = preg_replace\\('\\/\\[\\?\\&]\\.\\*\\$\\/', '', \\$_SERVER\\['REQUEST_URI'] \\?\\? ''\\) \\?: '\\.\\/';/g\" rc/program/include/rcmail.php If you need spell check features, create a file data/hooks/phpfpm/aspell.sh with the following content, then chmod +x data/hooks/phpfpm/aspell.sh . This installs a local spell check engine. Note: most modern web browsers have built-in spell check, so you may not want/need this. #!/bin/bash apk update apk add aspell-en # or any other language Create a file data/web/rc/config/config.inc.php with the following content. - Change the des_key parameter to a random value. It is used to temporarily store your IMAP password. - The db_prefix is optional but recommended. - If you didn't install spell check in the above step, remove the spellcheck_engine parameter and replace it with $config['enable_spellcheck'] = false; . array('verify_peer' => false, 'verify_peer_name' => false, 'allow_self_signed' => true) ); $config['enable_installer'] = true; $config['smtp_conn_options'] = array( 'ssl' => array('verify_peer' => false, 'verify_peer_name' => false, 'allow_self_signed' => true) ); $config['db_prefix'] = 'mailcow_rc1'; Point your browser to https://myserver/rc/installer and follow the instructions. Initialize the database and leave the installer.
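Once the installer reports success, a minimal cleanup sketch (run from the mailcow-dockerized root) looks like this:

```bash
# Sketch: remove the Roundcube installer and disable it again in the config
rm -rf data/web/rc/installer
# Also set $config['enable_installer'] = false; in data/web/rc/config/config.inc.php
```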
Delete the directory data/web/rc/installer after a successful installation!","title":"Installing Roundcube"},{"location":"third_party-roundcube/#configure-managesieve-filtering","text":"Open data/web/rc/plugins/managesieve/config.inc.php and change the following parameters (or add them at the bottom of that file): $config['managesieve_port'] = 4190; $config['managesieve_host'] = 'tls://dovecot'; $config['managesieve_conn_options'] = array( 'ssl' => array('verify_peer' => false, 'verify_peer_name' => false, 'allow_self_signed' => true) ); // Enables separate management interface for vacation responses (out-of-office) // 0 - no separate section (default), // 1 - add Vacation section, // 2 - add Vacation section, but hide Filters section $config['managesieve_vacation'] = 1;","title":"Configure ManageSieve filtering"},{"location":"third_party-roundcube/#enable-change-password-function-in-roundcube","text":"Open data/web/rc/config/config.inc.php and enable the password plugin: ... $config['plugins'] = array( 'archive', 'password', ); ... Open data/web/rc/plugins/password/password.php , search for case 'ssha': and add above: case 'ssha256': $salt = rcube_utils::random_bytes(8); $crypted = base64_encode( hash('sha256', $password . $salt, TRUE ) . $salt ); $prefix = '{SSHA256}'; break; Open data/web/rc/plugins/password/config.inc.php and change the following parameters (or add them at the bottom of that file): $config['password_driver'] = 'sql'; $config['password_algorithm'] = 'ssha256'; $config['password_algorithm_prefix'] = '{SSHA256}'; $config['password_query'] = \"UPDATE mailbox SET password = %P WHERE username = %u\";","title":"Enable change password function in Roundcube"},{"location":"third_party-roundcube/#integrate-carddav-addressbooks-in-roundcube","text":"Download the latest release of RCMCardDAV to the Roundcube plugin directory and extract it (here rc/plugins ): cd data/web/rc/plugins wget -O - https://github.com/mstilkerich/rcmcarddav/releases/download/v4.3.0/carddav-v4.3.0.tar.gz | tar xfvz - chown -R root: carddav/ Copy the file config.inc.php.dist to config.inc.php (here in rc/plugins/carddav ) and append the following preset to the end of the file - don't forget to replace mx.example.org with your own hostname: $prefs['SOGo'] = array( 'name' => 'SOGo', 'username' => '%u', 'password' => '%p', 'url' => 'https://mx.example.org/SOGo/dav/%u/', 'carddav_name_only' => true, 'use_categories' => true, 'active' => true, 'readonly' => false, 'refresh_time' => '02:00:00', 'fixed' => array( 'active', 'name', 'username', 'password', 'refresh_time' ), 'hide' => false, ); Please note, that this preset only integrates the default addressbook (the one that's named \"Personal Address Book\" and can't be deleted). Additional addressbooks are currently not automatically detected but can be manually added within the roundecube settings. Enable the plugin by adding carddav to $config['plugins'] in rc/config/config.inc.php . If you want to remove the default addressbooks (stored in the Roundcube database), so that only the CardDAV addressbooks are accessible, append $config['address_book_type'] = ''; to the config file data/web/rc/config/config.inc.php . Optionally, you can add Roundcube's link to the mailcow Apps list. 
To do this, open or create data/web/inc/vars.local.inc.php and add the following code-block: NOTE: Don't forget to add the 'SOGo', 'link' => '/SOGo/' ), array( 'name' => 'Roundcube', 'link' => '/rc/' ) ); ...","title":"Integrate CardDAV addressbooks in Roundcube"},{"location":"third_party-roundcube/#upgrading-roundcube","text":"Upgrading Roundcube is rather simple, go to the Github releases page for Roundcube and get the link for the \"complete.tar.gz\" file for the wanted release. Then follow the below commands and change the URL and Roundcube folder name if needed. # Enter a bash session of the mailcow PHP container docker exec -it mailcowdockerized_php-fpm-mailcow_1 bash # Install required upgrade dependency, then upgrade Roundcube to wanted release apk add rsync cd /tmp wget -O - https://github.com/roundcube/roundcubemail/releases/download/1.5.2/roundcubemail-1.5.2-complete.tar.gz | tar xfvz - cd roundcubemail-1.5.2 bin/installto.sh /web/rc # Type 'Y' and press enter to upgrade your install of Roundcube # Remove leftover files cd /tmp rm -rf roundcube* # Fix Allow remote resources (https://github.com/roundcube/roundcubemail/issues/8170) should not be required in 1.6 sed -i \"s/\\$prefix = '\\.\\/';/\\$prefix = preg_replace\\('\\/\\[\\?\\&]\\.\\*\\$\\/', '', \\$_SERVER\\['REQUEST_URI'] \\?\\? ''\\) \\?: '\\.\\/';/g\" /web/rc/program/include/rcmail.php","title":"Upgrading Roundcube"},{"location":"third_party-roundcube/#let-admins-log-into-roundcube-without-password","text":"First, install plugin dovecot_impersonate and add Roundcube as an app (see above). Edit mailcow.conf and add the following: # Allow admins to log into Roundcube as email user (without any password) # Roundcube with plugin dovecot_impersonate must be installed first ALLOW_ADMIN_EMAIL_LOGIN_ROUNDCUBE=y Edit docker-compose.override.yml and crate/extend the section for php-fpm-mailcow : version: '2.1' services: php-fpm-mailcow: environment: - ALLOW_ADMIN_EMAIL_LOGIN_ROUNDCUBE=${ALLOW_ADMIN_EMAIL_LOGIN_ROUNDCUBE:-n} Edit data/web/js/site/mailbox.js and the following code after if (ALLOW_ADMIN_EMAIL_LOGIN) { ... } if ( ALLOW_ADMIN_EMAIL_LOGIN_ROUNDCUBE ) { item . action += ' Roundcube' ; } Edit data/web/mailbox.php and add this line to array $template_data : 'allow_admin_email_login_roundcube' => (preg_match(\"/^(yes|y)+$/i\", $_ENV[\"ALLOW_ADMIN_EMAIL_LOGIN_ROUNDCUBE\"])) ? 'true' : 'false', Edit data/web/templates/mailbox.twig and add this code to the bottom of the javascript section : var ALLOW_ADMIN_EMAIL_LOGIN_ROUNDCUBE = {{ allow_admin_email_login_roundcube }}; Copy the contents of the following files from this Snippet : data/web/inc/lib/RoundcubeAutoLogin.php data/web/rc-auth.php Finally, restart mailcow docker-compose down docker-compose up -d","title":"Let admins log into Roundcube without password"},{"location":"u_e-80_to_443/","text":"Since February the 28th 2017 mailcow does come with port 80 and 443 enabled. Do not use the config below for reverse proxy setups , please see our reverse proxy guide for this, which includes a redirect from HTTP to HTTPS. Open mailcow.conf and set HTTP_BIND= - if not already set. 
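If you prefer doing this from the shell, the following minimal sketch (run from the mailcow-dockerized root) shows the current value and then clears it, which is what the redirect setup below expects:

```bash
# Sketch: check and clear HTTP_BIND in mailcow.conf
grep ^HTTP_BIND mailcow.conf
sed -i 's/^HTTP_BIND=.*/HTTP_BIND=/' mailcow.conf
```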
Create a new file data/conf/nginx/redirect.conf and add the following server config to the file: server { root /web; listen 80 default_server; listen [::]:80 default_server; include /etc/nginx/conf.d/server_name.active; if ( $request_uri ~* \"%0A|%0D\" ) { return 403; } location ^~ /.well-known/acme-challenge/ { allow all; default_type \"text/plain\"; } location / { return 301 https://$host$uri$is_args$args; } } In case you changed the HTTP_BIND parameter, recreate the container: docker-compose up -d Otherwise restart Nginx: docker-compose restart nginx-mailcow","title":"Redirect HTTP to HTTPS"},{"location":"u_e-autodiscover_config/","text":"You do not need to change or create this file, autodiscover works out of the box . This guide is only meant for customizations to the autodiscover or autoconfig process. Newer Outlook clients (especially those delivered with O365) will not autodiscover mail profiles. Keep in mind, that ActiveSync should NOT be used with a desktop client . Open/create data/web/inc/vars.local.inc.php and add your changes to the configuration array. Changes will be merged with \"$autodiscover_config\" in data/web/inc/vars.inc.php ): 'activesync', // If autodiscoverType => activesync, also use ActiveSync (EAS) for Outlook desktop clients (>= Outlook 2013 on Windows) // Outlook for Mac does not support ActiveSync 'useEASforOutlook' => 'yes', // Please don't use STARTTLS-enabled service ports in the \"port\" variable. // The autodiscover service will always point to SMTPS and IMAPS (TLS-wrapped services). // The autoconfig service will additionally announce the STARTTLS-enabled ports, specified in the \"tlsport\" variable. 'imap' => array( 'server' => $mailcow_hostname, 'port' => array_pop(explode(':', getenv('IMAPS_PORT'))), 'tlsport' => array_pop(explode(':', getenv('IMAP_PORT'))), ), 'pop3' => array( 'server' => $mailcow_hostname, 'port' => array_pop(explode(':', getenv('POPS_PORT'))), 'tlsport' => array_pop(explode(':', getenv('POP_PORT'))), ), 'smtp' => array( 'server' => $mailcow_hostname, 'port' => array_pop(explode(':', getenv('SMTPS_PORT'))), 'tlsport' => array_pop(explode(':', getenv('SUBMISSION_PORT'))), ), 'activesync' => array( 'url' => 'https://'.$mailcow_hostname.($https_port == 443 ? '' : ':'.$https_port).'/Microsoft-Server-ActiveSync', ), 'caldav' => array( 'server' => $mailcow_hostname, 'port' => $https_port, ), 'carddav' => array( 'server' => $mailcow_hostname, 'port' => $https_port, ), ); To always use IMAP and SMTP instead of EAS, set 'autodiscoverType' => 'imap' . Disable ActiveSync for Outlook desktop clients by setting \"useEASforOutlook\" to \"no\".","title":"Autodiscover / Autoconfig"},{"location":"u_e-backup_restore-maildir/","text":"Backup \u00b6 This line backups the vmail directory to a file backup_vmail.tar.gz in the mailcow root directory: cd /path/to/mailcow-dockerized docker run --rm -i -v $(docker inspect --format '{{ range .Mounts }}{{ if eq .Destination \"/var/vmail\" }}{{ .Name }}{{ end }}{{ end }}' $(docker-compose ps -q dovecot-mailcow)):/vmail -v ${PWD}:/backup debian:stretch-slim tar cvfz /backup/backup_vmail.tar.gz /vmail You can change the path by adjusting ${PWD} (which equals to the current directory) to any path you have write-access to. Set the filename backup_vmail.tar.gz to any custom name, but leave the path as it is. Example: [...] 
tar cvfz /backup/my_own_filename_.tar.gz Restore \u00b6 cd /path/to/mailcow-dockerized docker run --rm -it -v $(docker inspect --format '{{ range .Mounts }}{{ if eq .Destination \"/var/vmail\" }}{{ .Name }}{{ end }}{{ end }}' $(docker-compose ps -q dovecot-mailcow)):/vmail -v ${PWD}:/backup debian:stretch-slim tar xvfz /backup/backup_vmail.tar.gz","title":"Maildir"},{"location":"u_e-backup_restore-maildir/#backup","text":"This line backups the vmail directory to a file backup_vmail.tar.gz in the mailcow root directory: cd /path/to/mailcow-dockerized docker run --rm -i -v $(docker inspect --format '{{ range .Mounts }}{{ if eq .Destination \"/var/vmail\" }}{{ .Name }}{{ end }}{{ end }}' $(docker-compose ps -q dovecot-mailcow)):/vmail -v ${PWD}:/backup debian:stretch-slim tar cvfz /backup/backup_vmail.tar.gz /vmail You can change the path by adjusting ${PWD} (which equals to the current directory) to any path you have write-access to. Set the filename backup_vmail.tar.gz to any custom name, but leave the path as it is. Example: [...] tar cvfz /backup/my_own_filename_.tar.gz","title":"Backup"},{"location":"u_e-backup_restore-maildir/#restore","text":"cd /path/to/mailcow-dockerized docker run --rm -it -v $(docker inspect --format '{{ range .Mounts }}{{ if eq .Destination \"/var/vmail\" }}{{ .Name }}{{ end }}{{ end }}' $(docker-compose ps -q dovecot-mailcow)):/vmail -v ${PWD}:/backup debian:stretch-slim tar xvfz /backup/backup_vmail.tar.gz","title":"Restore"},{"location":"u_e-backup_restore-mysql/","text":"Backup \u00b6 cd /path/to/mailcow-dockerized source mailcow.conf DATE=$(date +\"%Y%m%d_%H%M%S\") docker-compose exec -T mysql-mailcow mysqldump --default-character-set=utf8mb4 -u${DBUSER} -p${DBPASS} ${DBNAME} > backup_${DBNAME}_${DATE}.sql Restore \u00b6 Warning You should redirect the SQL dump without docker-compose to prevent parsing errors. cd /path/to/mailcow-dockerized source mailcow.conf docker exec -i $(docker-compose ps -q mysql-mailcow) mysql -u${DBUSER} -p${DBPASS} ${DBNAME} < backup_file.sql","title":"MySQL (mysqldump)"},{"location":"u_e-backup_restore-mysql/#backup","text":"cd /path/to/mailcow-dockerized source mailcow.conf DATE=$(date +\"%Y%m%d_%H%M%S\") docker-compose exec -T mysql-mailcow mysqldump --default-character-set=utf8mb4 -u${DBUSER} -p${DBPASS} ${DBNAME} > backup_${DBNAME}_${DATE}.sql","title":"Backup"},{"location":"u_e-backup_restore-mysql/#restore","text":"Warning You should redirect the SQL dump without docker-compose to prevent parsing errors. cd /path/to/mailcow-dockerized source mailcow.conf docker exec -i $(docker-compose ps -q mysql-mailcow) mysql -u${DBUSER} -p${DBPASS} ${DBNAME} < backup_file.sql","title":"Restore"},{"location":"u_e-docker-cust_dockerfiles/","text":"You need to copy the override file with corresponding build tags to the mailcow: dockerized root folder (i.e. 
/opt/mailcow-dockerized ): cp helper-scripts/docker-compose.override.yml.d/BUILD_FLAGS/docker-compose.override.yml docker-compose.override.yml Make your changes in data/Dockerfiles/$service and build the image locally: docker build data/Dockerfiles/service -t mailcow/$service Now auto-recreate modified containers: docker-compose up -d","title":"Customize Dockerfiles"},{"location":"u_e-docker-dc_bash_compl/","text":"To get some sexy bash completion inside your containers simply execute the following: curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose version --short)/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose","title":"Docker Compose Bash Completion"},{"location":"u_e-dovecot-any_acl/","text":"On August the 17th, we disabled the possibility to share with \"any\" or \"all authenticated users\" by default. This function can be re-enabled by setting ACL_ANYONE to allow in mailcow.conf: ACL_ANYONE=allow Apply the changes by running docker-compose up -d .","title":"Enable \"any\" ACL settings"},{"location":"u_e-dovecot-catchall_vacation/","text":"The Dovecot parameter sieve_vacation_dont_check_recipient - which was by default set to yes in mailcow configurations pre 21st July - allows for vacation replies even when a mail is sent to non-existent mailboxes like a catch-all addresses. We decided to switch this parameter back to no and allow a user to specify which recipient address triggers a vacation reply. The triggering recipients can also be configured in SOGos autoresponder feature.","title":"Vacation replies for catchall addresses"},{"location":"u_e-dovecot-expunge/","text":"If you want to delete old mails out of the .Junk or .Trash folders or maybe delete all read mails that are older than a certain amount of time you may use dovecot's tool doveadm man doveadm-expunge . The manual way \u00b6 That said, let's dive in: Delete a user's mails inside the junk folder that are read and older than 4 hours docker-compose exec dovecot-mailcow doveadm expunge -u 'mailbox@example.com' mailbox 'Junk' SEEN not SINCE 4h Delete all user's mails in the junk folder that are older than 7 days docker-compose exec dovecot-mailcow doveadm expunge -A mailbox 'Junk' savedbefore 7d Delete all mails (of all users) in all folders that are older than 52 weeks (internal date of the mail, not the date it was saved on the system => before instead of savedbefore ). Useful for deleting very old mails on all users and folders (thus especially useful for GDPR-compliance). docker-compose exec dovecot-mailcow doveadm expunge -A mailbox % before 52w Delete mails inside a custom folder inside a user's inbox that are not flagged and older than 2 weeks docker-compose exec dovecot-mailcow doveadm expunge -u 'mailbox@example.com' mailbox 'INBOX/custom-folder' not FLAGGED not SINCE 2w Info For possible time spans or search keys have a look at man doveadm-search-query Job scheduler \u00b6 via the host system cron \u00b6 If you want to automate such a task you can create a cron job on your host that calls a script like the one below: #!/bin/bash # Path to mailcow-dockerized, e.g. /opt/mailcow-dockerized cd /path/to/your/mailcow-dockerized /usr/local/bin/docker-compose exec -T dovecot-mailcow doveadm expunge -A mailbox 'Junk' savedbefore 2w /usr/local/bin/docker-compose exec -T dovecot-mailcow doveadm expunge -A mailbox 'Junk' SEEN not SINCE 12h [...] 
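Before wiring the script into cron, make sure it is executable and runs cleanly once by hand. A minimal sketch, using the same path as the crontab example below:

```bash
# Sketch: make the expunge script executable and do a manual test run
chmod +x /path/to/your/expunge_mailboxes.sh
/path/to/your/expunge_mailboxes.sh
```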
To create a cron job you may execute crontab -e and insert something like the following to execute a script: # Execute everyday at 04:00 A.M. 0 4 * * * /path/to/your/expunge_mailboxes.sh via Docker job scheduler \u00b6 To archive this with a docker job scheduler use this docker-compose.override.yml with your mailcow: version: '2.1' services: ofelia: image: mcuadros/ofelia:latest restart: always command: daemon --docker volumes: - /var/run/docker.sock:/var/run/docker.sock:ro network_mode: none dovecot-mailcow: labels: - \"ofelia.enabled=true\" - \"ofelia.job-exec.dovecot-expunge-trash.schedule=0 4 * * *\" - \"ofelia.job-exec.dovecot-expunge-trash.command=doveadm expunge -A mailbox 'Junk' savedbefore 2w\" - \"ofelia.job-exec.dovecot-expunge-trash.tty=false\" The job controller just need access to the docker control socket to be able to emulate the behavior of \"exec\". Then we add a few label to our dovecot-container to activate the job scheduler and tell him in a cron compatible scheduling format when to run. If you struggle with that schedule string you can use crontab guru . This docker-compose.override.yml deletes all mails older then 2 weeks from the \"Junk\" folder every day at 4 am. To see if things ran proper, you can not only see in your mailbox but also check Ofelia's docker log if it looks something like this: common.go:124 \u25b6 NOTICE [Job \"dovecot-expunge-trash\" (8759567efa66)] Started - doveadm expunge -A mailbox 'Junk' savedbefore 2w, common.go:124 \u25b6 NOTICE [Job \"dovecot-expunge-trash\" (8759567efa66)] Finished in \"285.032291ms\", failed: false, skipped: false, error: none, If it failed it will say so and give you the output of the doveadm in the log to make it easy on you to debug. In case you want to add more jobs, ensure you change the \"dovecot-expunge-trash\" part after \"ofelia.job-exec.\" to something else, it defines the name of the job. Syntax of the labels you find at mcuadros/ofelia .","title":"Expunge a Users mails"},{"location":"u_e-dovecot-expunge/#the-manual-way","text":"That said, let's dive in: Delete a user's mails inside the junk folder that are read and older than 4 hours docker-compose exec dovecot-mailcow doveadm expunge -u 'mailbox@example.com' mailbox 'Junk' SEEN not SINCE 4h Delete all user's mails in the junk folder that are older than 7 days docker-compose exec dovecot-mailcow doveadm expunge -A mailbox 'Junk' savedbefore 7d Delete all mails (of all users) in all folders that are older than 52 weeks (internal date of the mail, not the date it was saved on the system => before instead of savedbefore ). Useful for deleting very old mails on all users and folders (thus especially useful for GDPR-compliance). docker-compose exec dovecot-mailcow doveadm expunge -A mailbox % before 52w Delete mails inside a custom folder inside a user's inbox that are not flagged and older than 2 weeks docker-compose exec dovecot-mailcow doveadm expunge -u 'mailbox@example.com' mailbox 'INBOX/custom-folder' not FLAGGED not SINCE 2w Info For possible time spans or search keys have a look at man doveadm-search-query","title":"The manual way"},{"location":"u_e-dovecot-expunge/#job-scheduler","text":"","title":"Job scheduler"},{"location":"u_e-dovecot-expunge/#via-the-host-system-cron","text":"If you want to automate such a task you can create a cron job on your host that calls a script like the one below: #!/bin/bash # Path to mailcow-dockerized, e.g. 
/opt/mailcow-dockerized cd /path/to/your/mailcow-dockerized /usr/local/bin/docker-compose exec -T dovecot-mailcow doveadm expunge -A mailbox 'Junk' savedbefore 2w /usr/local/bin/docker-compose exec -T dovecot-mailcow doveadm expunge -A mailbox 'Junk' SEEN not SINCE 12h [...] To create a cron job you may execute crontab -e and insert something like the following to execute a script: # Execute everyday at 04:00 A.M. 0 4 * * * /path/to/your/expunge_mailboxes.sh","title":"via the host system cron"},{"location":"u_e-dovecot-expunge/#via-docker-job-scheduler","text":"To archive this with a docker job scheduler use this docker-compose.override.yml with your mailcow: version: '2.1' services: ofelia: image: mcuadros/ofelia:latest restart: always command: daemon --docker volumes: - /var/run/docker.sock:/var/run/docker.sock:ro network_mode: none dovecot-mailcow: labels: - \"ofelia.enabled=true\" - \"ofelia.job-exec.dovecot-expunge-trash.schedule=0 4 * * *\" - \"ofelia.job-exec.dovecot-expunge-trash.command=doveadm expunge -A mailbox 'Junk' savedbefore 2w\" - \"ofelia.job-exec.dovecot-expunge-trash.tty=false\" The job controller just need access to the docker control socket to be able to emulate the behavior of \"exec\". Then we add a few label to our dovecot-container to activate the job scheduler and tell him in a cron compatible scheduling format when to run. If you struggle with that schedule string you can use crontab guru . This docker-compose.override.yml deletes all mails older then 2 weeks from the \"Junk\" folder every day at 4 am. To see if things ran proper, you can not only see in your mailbox but also check Ofelia's docker log if it looks something like this: common.go:124 \u25b6 NOTICE [Job \"dovecot-expunge-trash\" (8759567efa66)] Started - doveadm expunge -A mailbox 'Junk' savedbefore 2w, common.go:124 \u25b6 NOTICE [Job \"dovecot-expunge-trash\" (8759567efa66)] Finished in \"285.032291ms\", failed: false, skipped: false, error: none, If it failed it will say so and give you the output of the doveadm in the log to make it easy on you to debug. In case you want to add more jobs, ensure you change the \"dovecot-expunge-trash\" part after \"ofelia.job-exec.\" to something else, it defines the name of the job. Syntax of the labels you find at mcuadros/ofelia .","title":"via Docker job scheduler"},{"location":"u_e-dovecot-extra_conf/","text":"Create a file data/conf/dovecot/extra.conf - if missing - and add your additional content here. Restart dovecot-mailcow to apply your changes: docker-compose restart dovecot-mailcow","title":"Customize/Expand dovecot.conf"},{"location":"u_e-dovecot-fts/","text":"Solr is used for setups with memory >= 3.5 GiB to provide full-text search in Dovecot. Please be aware that applications like Solr may need maintenance from time to time. Besides that, Solr will eat a lot of RAM, depending on the usage of your server. Please avoid it on machines with less than 3 GB RAM. The default heap size (1024 M) is defined in mailcow.conf. Since we run in Docker and create our containers with the \"restart: always\" flag, a oom situation will at least only trigger a restart of the container. FTS related Dovecot commands \u00b6 # single user docker-compose exec dovecot-mailcow doveadm fts rescan -u user@domain # all users docker-compose exec dovecot-mailcow doveadm fts rescan -A Dovecot Wiki: \"Scan what mails exist in the full text search index and compare those to what actually exist in mailboxes. 
This removes mails from the index that have already been expunged and makes sure that the next doveadm index will index all the missing mails (if any).\" This does not re-index a mailbox. It basically repairs a given index. If you want to re-index data immediately, you can run the following command, where '*' can also be a mailbox mask like 'Sent'. You do not need to run these commands, but they will speed things up a bit: # single user docker-compose exec dovecot-mailcow doveadm index -u user@domain '*' # all users, but obviously slower and more dangerous docker-compose exec dovecot-mailcow doveadm index -A '*' This will take some time depending on your machine and Solr can run out of memory, so monitor it! Because re-indexing is a very sensitive operation, we did not include it in the mailcow UI. You will need to take care of any errors while re-indexing a mailbox. Delete mailbox data \u00b6 mailcow will purge index data of a user when deleting a mailbox.","title":"FTS (Solr)"},{"location":"u_e-dovecot-fts/#fts-related-dovecot-commands","text":"# single user docker-compose exec dovecot-mailcow doveadm fts rescan -u user@domain # all users docker-compose exec dovecot-mailcow doveadm fts rescan -A Dovecot Wiki: \"Scan what mails exist in the full text search index and compare those to what actually exist in mailboxes. This removes mails from the index that have already been expunged and makes sure that the next doveadm index will index all the missing mails (if any).\" This does not re-index a mailbox. It basically repairs a given index. If you want to re-index data immediately, you can run the following command, where '*' can also be a mailbox mask like 'Sent'. You do not need to run these commands, but they will speed things up a bit: # single user docker-compose exec dovecot-mailcow doveadm index -u user@domain '*' # all users, but obviously slower and more dangerous docker-compose exec dovecot-mailcow doveadm index -A '*' This will take some time depending on your machine and Solr can run out of memory, so monitor it! Because re-indexing is a very sensitive operation, we did not include it in the mailcow UI. You will need to take care of any errors while re-indexing a mailbox.","title":"FTS related Dovecot commands"},{"location":"u_e-dovecot-fts/#delete-mailbox-data","text":"mailcow will purge index data of a user when deleting a mailbox.","title":"Delete mailbox data"},{"location":"u_e-dovecot-idle_interval/","text":"Changing the IMAP IDLE interval \u00b6 What is the IDLE interval? \u00b6 By default, Dovecot sends an \"I'm still here\" notification to every client that has an open connection with Dovecot to get mails as quickly as possible without manually polling it (IMAP PUSH). This notification is controlled by the setting imap_idle_notify_interval , which defaults to 2 minutes. A short interval results in the client getting a lot of messages for this connection, which is bad for mobile devices, because every time the device receives this message, the mailing app has to wake up. This can result in unnecessary battery drain. Edit the value \u00b6 Change configuration \u00b6 Create a new file data/conf/dovecot/extra.conf (or edit it if it already exists). Insert the setting followed by the new value. For example, to set the interval to 5 minutes you could type: imap_idle_notify_interval = 5 mins 29 minutes is the maximum value allowed by the corresponding RFC . Warning This isn't a default setting in mailcow because we don't know how this setting changes the behavior of other clients. Be careful if you change this and monitor different behavior.
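If you prefer doing this from the shell, here is a minimal sketch (run from the mailcow-dockerized root) that appends the setting shown above:

```bash
# Sketch: add the IDLE interval override to Dovecot's extra.conf (the file is created if it does not exist)
echo 'imap_idle_notify_interval = 5 mins' >> data/conf/dovecot/extra.conf
```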
Reload Dovecot \u00b6 Now reload Dovecot: docker-compose exec dovecot-mailcow dovecot reload Info You can check the value of this setting with docker-compose exec dovecot-mailcow dovecot -a | grep \"imap_idle_notify_interval\" If you didn't change it, it should be at 2m. If you did change it, you should see your new value.","title":"IMAP IDLE interval"},{"location":"u_e-dovecot-idle_interval/#changing-the-imap-idle-interval","text":"","title":"Changing the IMAP IDLE interval"},{"location":"u_e-dovecot-idle_interval/#what-is-the-idle-interval","text":"Per default, Dovecot sends a \"I'm still here\" notification to every client that has an open connection with Dovecot to get mails as quickly as possible without manually polling it (IMAP PUSH). This notification is controlled by the setting imap_idle_notify_interval , which defaults to 2 minutes. A short interval results in the client getting a lot of messages for this connection, which is bad for mobile devices, because every time the device receives this message, the mailing app has to wake up. This can result in unnecessary battery drain.","title":"What is the IDLE interval?"},{"location":"u_e-dovecot-idle_interval/#edit-the-value","text":"","title":"Edit the value"},{"location":"u_e-dovecot-idle_interval/#change-configuration","text":"Create a new file data/conf/dovecot/extra.conf (or edit it if it already exists). Insert the setting followed by the new value. For example, to set the interval to 5 minutes you could type: imap_idle_notify_interval = 5 mins 29 minutes is the maximum value allowed by the corresponding RFC . Warning This isn't a default setting in mailcow because we don't know how this setting changes the behavior of other clients. Be careful if you change this and monitor different behavior.","title":"Change configuration"},{"location":"u_e-dovecot-idle_interval/#reload-dovecot","text":"Now reload Dovecot: docker-compose exec dovecot-mailcow dovecot reload Info You can check the value of this setting with docker-compose exec dovecot-mailcow dovecot -a | grep \"imap_idle_notify_interval\" If you didn't change it, it should be at 2m. If you did change it, you should see your new value.","title":"Reload Dovecot"},{"location":"u_e-dovecot-mail-crypt/","text":"Mails are stored compressed (lz4) and encrypted. The key pair can be found in crypt-vol-1. If you want to decode/encode existing maildir files, you can use the following script at your own risk: Enter Dovecot by running docker-compose exec dovecot-mailcow /bin/bash in the mailcow-dockerized location. 
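Before running the scripts below, you can verify from inside the container that the key pair they reference is actually present. A minimal sketch:

```bash
# Sketch: the mail_crypt key pair used by the decrypt/encrypt loops below
ls -l /mail_crypt/ecprivkey.pem /mail_crypt/ecpubkey.pem
```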
# Decrypt /var/vmail find /var/vmail/ -type f -regextype egrep -regex '.*S=.*W=.*' | while read -r file; do if [[ $(head -c7 \"$file\") == \"CRYPTED\" ]]; then doveadm fs get compress lz4:0:crypt:private_key_path=/mail_crypt/ecprivkey.pem:public_key_path=/mail_crypt/ecpubkey.pem:posix:prefix=/ \\ \"$file\" > \"/tmp/$(basename \"$file\")\" if [[ -s \"/tmp/$(basename \"$file\")\" ]]; then chmod 600 \"/tmp/$(basename \"$file\")\" chown 5000:5000 \"/tmp/$(basename \"$file\")\" mv \"/tmp/$(basename \"$file\")\" \"$file\" else rm \"/tmp/$(basename \"$file\")\" fi fi done # Encrypt /var/vmail find /var/vmail/ -type f -regextype egrep -regex '.*S=.*W=.*' | while read -r file; do if [[ $(head -c7 \"$file\") != \"CRYPTED\" ]]; then doveadm fs put crypt private_key_path=/mail_crypt/ecprivkey.pem:public_key_path=/mail_crypt/ecpubkey.pem:posix:prefix=/ \\ \"$file\" \"$file\" chmod 600 \"$file\" chown 5000:5000 \"$file\" fi done","title":"Mail crypt"},{"location":"u_e-dovecot-more/","text":"Here is just an unsorted list of useful doveadm commands that could be useful. doveadm quota \u00b6 The quota get and quota recalc 1 commands are used to display or recalculate the current user's quota usage. The reported values are in kilobytes . To list the current quota status for a user / mailbox, do: doveadm quota get -u 'mailbox@example.org' To list the quota storage value for all users, do: doveadm quota get -A |grep \"STORAGE\" Recalculate a single user's quota usage: doveadm quota recalc -u 'mailbox@example.org' doveadm search \u00b6 The doveadm search 2 command is used to find messages matching your query. It can return the username, mailbox-GUID / -UID and message-GUIDs / -UIDs. To view the number of messages, by user, in their .Trash folder: doveadm search -A mailbox 'Trash' | awk '{print $1}' | sort | uniq -c Show all messages in a user's inbox older then 90 days: doveadm search -u 'mailbox@example.org' mailbox 'INBOX' savedbefore 90d Show all messages in any folder that are older then 30 days for mailbox@example.org : doveadm search -u 'mailbox@example.org' mailbox \"*\" savedbefore 30d https://wiki.dovecot.org/Tools/Doveadm/Quota \u21a9 https://wiki.dovecot.org/Tools/Doveadm/Search \u21a9","title":"More Examples with DOVEADM"},{"location":"u_e-dovecot-more/#doveadm-quota","text":"The quota get and quota recalc 1 commands are used to display or recalculate the current user's quota usage. The reported values are in kilobytes . To list the current quota status for a user / mailbox, do: doveadm quota get -u 'mailbox@example.org' To list the quota storage value for all users, do: doveadm quota get -A |grep \"STORAGE\" Recalculate a single user's quota usage: doveadm quota recalc -u 'mailbox@example.org'","title":"doveadm quota"},{"location":"u_e-dovecot-more/#doveadm-search","text":"The doveadm search 2 command is used to find messages matching your query. It can return the username, mailbox-GUID / -UID and message-GUIDs / -UIDs. 
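One more sketch in the same style (run inside the dovecot-mailcow container; the mailbox name is just an example): counting the returned GUID/UID lines tells you how many messages a search matches:

```bash
# Sketch: count all messages in the Junk folder of one mailbox
doveadm search -u 'mailbox@example.org' mailbox 'Junk' all | wc -l
```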
To view the number of messages, by user, in their .Trash folder: doveadm search -A mailbox 'Trash' | awk '{print $1}' | sort | uniq -c Show all messages in a user's inbox older then 90 days: doveadm search -u 'mailbox@example.org' mailbox 'INBOX' savedbefore 90d Show all messages in any folder that are older then 30 days for mailbox@example.org : doveadm search -u 'mailbox@example.org' mailbox \"*\" savedbefore 30d https://wiki.dovecot.org/Tools/Doveadm/Quota \u21a9 https://wiki.dovecot.org/Tools/Doveadm/Search \u21a9","title":"doveadm search"},{"location":"u_e-dovecot-public_folder/","text":"Create a new public namespace \"Public\" and a mailbox \"Develcow\" inside that namespace: Edit or create data/conf/dovecot/extra.conf , add: namespace { type = public separator = / prefix = Public/ location = maildir:/var/vmail/public:INDEXPVT=~/public subscriptions = yes mailbox \"Develcow\" { auto = subscribe } } :INDEXPVT=~/public can be omitted if per-user seen flags are not wanted. The new mailbox in the public namespace will be auto-subscribed by users. To allow all authenticated users access full to that new mailbox (not the whole namespace), run: docker-compose exec dovecot-mailcow doveadm acl set -A \"Public/Develcow\" \"authenticated\" lookup read write write-seen write-deleted insert post delete expunge create Adjust the command to your needs if you like to assign more granular rights per user (use -u user@domain instead of -A for example). Allow authenticated users access to the whole public namespace \u00b6 To allow all authenticated users access full access to the whole public namespace and its subfolders, create a new dovecot-acl file in the namespace root directory: Open/edit/create /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data/public/dovecot-acl (adjust the path accordingly) to create the global ACL file with the following content: authenticated kxeilprwts kxeilprwts equals to lookup read write write-seen write-deleted insert post delete expunge create . You can use doveadm acl set -u user@domain \"Public/Develcow\" user=user@domain lookup read to limit access for a single user. You may also turn it around to limit access for all users to \"lr\" and grant only some users full access. See Dovecot ACL for further information about ACL.","title":"Public folders"},{"location":"u_e-dovecot-public_folder/#allow-authenticated-users-access-to-the-whole-public-namespace","text":"To allow all authenticated users access full access to the whole public namespace and its subfolders, create a new dovecot-acl file in the namespace root directory: Open/edit/create /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data/public/dovecot-acl (adjust the path accordingly) to create the global ACL file with the following content: authenticated kxeilprwts kxeilprwts equals to lookup read write write-seen write-deleted insert post delete expunge create . You can use doveadm acl set -u user@domain \"Public/Develcow\" user=user@domain lookup read to limit access for a single user. You may also turn it around to limit access for all users to \"lr\" and grant only some users full access. See Dovecot ACL for further information about ACL.","title":"Allow authenticated users access to the whole public namespace"},{"location":"u_e-dovecot-static_master/","text":"Random master usernames and passwords are automatically created on every restart of dovecot-mailcow. That's recommended and should not be changed. If you need the user to be static anyway, please specify two variables in mailcow.conf . 
Both parameters must not be empty! DOVECOT_MASTER_USER=mymasteruser DOVECOT_MASTER_PASS=mysecretpass Run docker-compose up -d to apply your changes. The static master username will be expanded to DOVECOT_MASTER_USER@mailcow.local . To login as test@example.org this would equal to test@example.org*mymasteruser@mailcow.local with the specified password above. A login to SOGo is not possible with this username. A click-to-login function for SOGo is available for admins as described here No master user is required.","title":"Static master user"},{"location":"u_e-dovecot-vmail-volume/","text":"The \"new\" way \u00b6 WARNING : Newer Docker versions seem to complain about existing volumes. You can fix this temporarily by removing the existing volume and start mailcow with the override file. But it seems to be problematic after a reboot (needs to be confirmed). An easy, dirty, yet stable workaround is to stop mailcow ( docker-compose down ), remove /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data and create a new link to your remote filesystem location, for example: mv /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data_backup ln -s /mnt/volume-xy/vmail_data /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data Start mailcow afterwards. The \"old\" way \u00b6 If you want to use another folder for the vmail-volume, you can create a docker-compose.override.yml file and add the following content: version: '2.1' volumes: vmail-vol-1: driver_opts: type: none device: /data/mailcow/vmail o: bind Moving an existing vmail folder: \u00b6 Locate the current vmail folder by its \"Mountpoint\" attribute: docker volume inspect mailcowdockerized_vmail-vol-1 [ { \"CreatedAt\": \"2019-06-16T22:08:34+02:00\", \"Driver\": \"local\", \"Labels\": { \"com.docker.compose.project\": \"mailcowdockerized\", \"com.docker.compose.version\": \"1.23.2\", \"com.docker.compose.volume\": \"vmail-vol-1\" }, \"Mountpoint\": \"/var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data\", \"Name\": \"mailcowdockerized_vmail-vol-1\", \"Options\": null, \"Scope\": \"local\" } ] Copy the content of the Mountpoint folder to the new location (e.g. /data/mailcow/vmail ) using cp -a , rsync -a or a similar non strcuture breaking copy command Stop mailcow by executing docker-compose down from within your mailcow root folder (e.g. /opt/mailcow-dockerized ) Create the file docker-compose.override.yml , edit the device path accordingly Delete the current vmail folder: docker volume rm mailcowdockerized_vmail-vol-1 Start mailcow by executing docker-compose up -d from within your mailcow root folder (e.g. /opt/mailcow-dockerized )","title":"Move Maildir (vmail)"},{"location":"u_e-dovecot-vmail-volume/#the-new-way","text":"WARNING : Newer Docker versions seem to complain about existing volumes. You can fix this temporarily by removing the existing volume and start mailcow with the override file. But it seems to be problematic after a reboot (needs to be confirmed). 
An easy, dirty, yet stable workaround is to stop mailcow ( docker-compose down ), remove /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data and create a new link to your remote filesystem location, for example: mv /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data_backup ln -s /mnt/volume-xy/vmail_data /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data Start mailcow afterwards.","title":"The \"new\" way"},{"location":"u_e-dovecot-vmail-volume/#the-old-way","text":"If you want to use another folder for the vmail-volume, you can create a docker-compose.override.yml file and add the following content: version: '2.1' volumes: vmail-vol-1: driver_opts: type: none device: /data/mailcow/vmail o: bind","title":"The \"old\" way"},{"location":"u_e-dovecot-vmail-volume/#moving-an-existing-vmail-folder","text":"Locate the current vmail folder by its \"Mountpoint\" attribute: docker volume inspect mailcowdockerized_vmail-vol-1 [ { \"CreatedAt\": \"2019-06-16T22:08:34+02:00\", \"Driver\": \"local\", \"Labels\": { \"com.docker.compose.project\": \"mailcowdockerized\", \"com.docker.compose.version\": \"1.23.2\", \"com.docker.compose.volume\": \"vmail-vol-1\" }, \"Mountpoint\": \"/var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data\", \"Name\": \"mailcowdockerized_vmail-vol-1\", \"Options\": null, \"Scope\": \"local\" } ] Copy the content of the Mountpoint folder to the new location (e.g. /data/mailcow/vmail ) using cp -a , rsync -a or a similar non strcuture breaking copy command Stop mailcow by executing docker-compose down from within your mailcow root folder (e.g. /opt/mailcow-dockerized ) Create the file docker-compose.override.yml , edit the device path accordingly Delete the current vmail folder: docker volume rm mailcowdockerized_vmail-vol-1 Start mailcow by executing docker-compose up -d from within your mailcow root folder (e.g. /opt/mailcow-dockerized )","title":"Moving an existing vmail folder:"},{"location":"u_e-fido2/","text":"How is UV handled in mailcow? \u00b6 The UV flag (as in \"user verification\") enforces WebAuthn to verify the user before it allows access to the key (think of a PIN). We don't enforce UV to allow logins via iOS and NFC (YubiKey). Login and key processing \u00b6 mailcow uses client-side key processing . We ask the authenticator (i.e. YubiKey) to save the registration in its memory. A user does not need to enter a username. The available credentials - if any - will be shown to the user when selecting the \"key login\" via mailcow UI login. When calling the login process, the authenticator is not given any credential IDs. This will force it to lookup credentials in its own memory. Who can use WebAuthn to login to mailcow? \u00b6 As of today, only administrators and domain administrators are able to setup WebAuthn/FIDO2. You want to use WebAuthn/Fido as 2FA? Check it out here: Two-Factor Authentication","title":"WebAuthn / FIDO2"},{"location":"u_e-fido2/#how-is-uv-handled-in-mailcow","text":"The UV flag (as in \"user verification\") enforces WebAuthn to verify the user before it allows access to the key (think of a PIN). We don't enforce UV to allow logins via iOS and NFC (YubiKey).","title":"How is UV handled in mailcow?"},{"location":"u_e-fido2/#login-and-key-processing","text":"mailcow uses client-side key processing . We ask the authenticator (i.e. YubiKey) to save the registration in its memory. A user does not need to enter a username. 
The available credentials - if any - will be shown to the user when selecting the \"key login\" via mailcow UI login. When calling the login process, the authenticator is not given any credential IDs. This will force it to lookup credentials in its own memory.","title":"Login and key processing"},{"location":"u_e-fido2/#who-can-use-webauthn-to-login-to-mailcow","text":"As of today, only administrators and domain administrators are able to setup WebAuthn/FIDO2. You want to use WebAuthn/Fido as 2FA? Check it out here: Two-Factor Authentication","title":"Who can use WebAuthn to login to mailcow?"},{"location":"u_e-mailcow_ui-bl_wl/","text":"To add or edit an entry to your domain wide filter table, login to your mailcow UI as (domain) administrator. Info Be aware that a user may override this setting by setting his own black- and whitelist! There is also a global filter table in /admin to configure a server-wide filter for multiple Regex maps (Todo: Screenshots).","title":"Blacklist / Whitelist"},{"location":"u_e-mailcow_ui-config/","text":"Several configuration parameters of the mailcow UI can be changed by creating a file data/web/inc/vars.local.inc.php which overrides defaults settings found in data/web/inc/vars.inc.php . The local configuration file is persistent over updates of mailcow. Try not to change values inside data/web/inc/vars.inc.php , but use them as template for the local override. mailcow UI configuration parameters can be used to... ...change the default language 1 ...change the default bootstrap theme ...set a password complexity regex ...enable DKIM private key visibility ...set a pagination trigger size ...set default mailbox attributes ...change session lifetimes ...create fixed app menus (which cannot be changed in mailcow UI) ...set a default \"To\" field for relayhost tests ...set a timeout for Docker API requests ...toggle IP anonymization To change SOGos default language, you will need to edit data/conf/sogo/sogo.conf and replace \"English\" by your preferred language. \u21a9","title":"Configuration"},{"location":"u_e-mailcow_ui-css/","text":"For custom overrides of specific elements via CSS, use data/web/css/build/0081-custom-mailcow.css . The file is excluded from tracking and persists over updates.","title":"CSS overrides"},{"location":"u_e-mailcow_ui-pushover/","text":"Info Pushover makes it easy to get real-time notifications on your Android, iPhone, iPad, and Desktop You can use Pushover to get a push notification on every mail you receive for each mailbox where you enabled this feature. 1. As admin open your mailbox' settings and scroll down to the Pushover settings 2. Register yourself on Pushover 3. Put your 'User Key' in the 'User/Group Key' field in your mailbox settings 4. Create an Applications to get the API Token/Key which you also need to put in your mailbox settings 5. Optional you can edit the notification title/text and define certain sender email addresses where a push notification is triggered 6. Save everything and then you can verify your credentials If everything is done you can test sending a mail and you will receive a push message on your phone","title":"Pushover"},{"location":"u_e-mailcow_ui-spamalias/","text":"These temporary email aliases are mostly used for places where we need to provide an email address but don't want future correspondence with. They are also called spam alias. 
To create, delete or extend temporary email aliases you need to log in to mailcow's UI as a mailbox user and navigate to the tab Temporary email aliases :","title":"Temporary email aliases"},{"location":"u_e-mailcow_ui-spamfilter/","text":"A mailbox user may adjust the spam filter and black- / whitelist settings for their mailbox individually by navigating to the Spam filter tab in the user's mailcow UI. Info For global adjustments to your spam filter please check our section on Rspamd . For a domain-wide black- and whitelist please check our guide on Black / Whitelist","title":"Spamfilter"},{"location":"u_e-mailcow_ui-tagging/","text":"Mailbox users can tag their mail address as in me+facebook@example.org . They can control the tag handling in the user's mailcow UI panel. Tagging is also known as 'sub-addressing' (RFC 5233) or 'plus addressing' Available Actions \u00b6 1. Move this message to a sub folder \"facebook\" (will be created lower case if not existing) 2. Prepend the tag to the subject: \"[facebook] Subject\" Please note: Uppercase tags are converted to lowercase except for the first letter. If you want to keep the tag as it is, please apply the following diff and restart mailcow: diff --git a/data/conf/dovecot/global_sieve_after b/data/conf/dovecot/global_sieve_after index e047136e..933c4137 100644 --- a/data/conf/dovecot/global_sieve_after +++ b/data/conf/dovecot/global_sieve_after @@ -15,7 +15,7 @@ if allof ( envelope :detail :matches \"to\" \"*\", header :contains \"X-Moo-Tag\" \"YES\" ) { - set :lower :upperfirst \"tag\" \"${1}\"; + set \"tag\" \"${1}\"; if mailboxexists \"INBOX/${1}\" { fileinto \"INBOX/${1}\"; } else {","title":"Tagging"},{"location":"u_e-mailcow_ui-tagging/#available-actions","text":"1. Move this message to a sub folder \"facebook\" (will be created lower case if not existing) 2. Prepend the tag to the subject: \"[facebook] Subject\" Please note: Uppercase tags are converted to lowercase except for the first letter. If you want to keep the tag as it is, please apply the following diff and restart mailcow: diff --git a/data/conf/dovecot/global_sieve_after b/data/conf/dovecot/global_sieve_after index e047136e..933c4137 100644 --- a/data/conf/dovecot/global_sieve_after +++ b/data/conf/dovecot/global_sieve_after @@ -15,7 +15,7 @@ if allof ( envelope :detail :matches \"to\" \"*\", header :contains \"X-Moo-Tag\" \"YES\" ) { - set :lower :upperfirst \"tag\" \"${1}\"; + set \"tag\" \"${1}\"; if mailboxexists \"INBOX/${1}\" { fileinto \"INBOX/${1}\"; } else {","title":"Available Actions"},{"location":"u_e-mailcow_ui-tfa/","text":"So far three methods for Two-Factor Authentication are implemented: WebAuthn (replacing U2F since February 2022), Yubi OTP, and TOTP. For WebAuthn to work, you need an encrypted connection to the server (HTTPS) as well as a FIDO security key. Both WebAuthn and Yubi OTP work well with the fantastic Yubikey . While Yubi OTP needs an active internet connection and an API ID + key, WebAuthn will work with any Fido Security Key out of the box, but can only be used when mailcow is accessed over HTTPS. WebAuthn and Yubi OTP support multiple keys per user. As the third TFA method mailcow uses TOTP: time-based one-time passwords. Those passwords can be generated with apps like \"Google Authenticator\" after initially scanning a QR code or entering the given secret manually. As administrator you are able to temporarily disable a domain administrator's TFA login until they have successfully logged in.
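If you want to sanity-check a TOTP secret outside of an authenticator app, you can generate the current code on any Linux machine with oathtool from the oath-toolkit package (not part of mailcow). A minimal sketch, using a made-up Base32 secret:

```
# oathtool is not shipped with mailcow; install the oath-toolkit package first.
# Replace the Base32 secret below with the one shown next to the QR code.
oathtool --totp -b "JBSWY3DPEHPK3PXP"
```

The printed six-digit code should match what your authenticator app shows for the same secret at the same time.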
The key used to log in will be displayed in green, while other keys remain grey. Information on how to remove 2FA can be found here . Yubi OTP \u00b6 The Yubi API ID and Key will be checked against the Yubico Cloud API. When setting up TFA you will be asked for your personal API account for this key. The API ID, API key and the first 12 characters (your YubiKey's ID in modhex) are stored in the MySQL table as secret. Example setup \u00b6 First of all, the YubiKey must be configured for use as an OTP Generator. To do this, download the YubiKey Manager from the Yubico website: here Next, configure the YubiKey for OTP via the menu item Applications -> OTP and a click on the Configure button. In the following menu select Credential Type -> Yubico OTP and click on Next . Set a checkmark in the Use serial checkbox and generate a Private ID and a Secret key via the buttons. So that the YubiKey can be validated later, the checkmark in the Upload checkbox must also be set; then click on Finish . Now a new browser window will open in which you have to enter an OTP of your YubiKey at the bottom of the form (click on the field and then tap on your YubiKey). Confirm the captcha and upload the information to the Yubico server by clicking 'Upload'. The processing of the data will take a moment. After the generation was successful, you will be shown a Client ID and a Secret key ; make a note of this information in a safe place. Now you can select Yubico OTP authentication from the dropdown menu in the mailcow UI on the start page under Access -> Two-factor authentication . In the dialog that has now opened you can enter a name for this YubiKey and insert the Client ID you noted before as well as the Secret key into the fields provided. Finally, enter your current account password and, after selecting the Touch Yubikey field, touch your YubiKey button. Congratulations! You can now log in to the mailcow UI using your YubiKey! WebAuthn (U2F, replacement) \u00b6 :warning: Since February 2022 Google Chrome has discarded support for U2F and standardized the use of WebAuthn. The WebAuthn (U2F removal) has been part of mailcow since 21st January 2022, so if you want to use the key past February 2022, please consider an update via the update.sh script. To use WebAuthn, the browser must support this standard. The following desktop browsers support this authentication type: Edge (>=18) Firefox (>=60) Chrome (>=67) Safari (>=13) Opera (>=54) The following mobile browsers support this authentication type: Safari on iOS (>=14.5) Android Browser (>=97) Opera Mobile (>=64) Chrome for Android (>=97) Sources: caniuse.com , blog.mozilla.org WebAuthn works without an internet connection. What will happen to my registered Fido Security Key after the Update from U2F to WebAuthn? \u00b6 :warning: With this new U2F replacement (WebAuthn) you have to re-register your Fido Security Key; thankfully, WebAuthn is backwards compatible and supports the U2F protocol. Ideally, the next time you log in (with the key), you should get a text box saying that your Fido Security Key has been removed due to the update to WebAuthn and deleted as a 2-factor authenticator. But don't worry! You can simply re-register your existing key and use it as usual; you probably won't even notice a difference, except that your browser won't show the U2F deactivation message anymore.
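If your installation predates this change, switching to WebAuthn is simply a matter of running the bundled update script from your mailcow root folder; a short sketch using the example root path from this documentation:

```
# run from the mailcow root folder
cd /opt/mailcow-dockerized
./update.sh
```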
Disable unofficial supported Fido Security Keys \u00b6 With WebAuthn it is possible to only allow official Fido Security Keys (from the big brands such as Yubico, Apple, Nitro, Google, Huawei, Microsoft, etc.). This is primarily for security purposes, as it allows administrators to ensure that only official hardware can be used in their environment. To enable this feature, change the value WEBAUTHN_ONLY_TRUSTED_VENDORS in mailcow.conf from n to y and restart the affected containers with docker-compose up -d . mailcow will now use the Vendor Certificates located in your mailcow directory under data/web/inc/lib/WebAuthn/rootCertificates . Example: \u00b6 If you want to limit the official vendor devices to Apple only, you only need the Apple Vendor Certificate inside data/web/inc/lib/WebAuthn/rootCertificates . After you have deleted all other certs, you can only activate WebAuthn 2FA with Apple devices. It works the same way for every vendor, so choose whichever you like (if you want to use this at all). Use own certificates for WebAuthn \u00b6 If you have a valid certificate from the vendor of your key, you can also add it to your mailcow! Just copy the certificate into the data/web/inc/lib/WebAuthn/rootCertificates folder and restart your mailcow. Now you should be able to register this device as well, even though the verification for the vendor certificates is enabled, since you just added the certificate manually. Is it dangerous to keep the Vendor Check disabled? \u00b6 No, it isn't! These vendor certificates are only used to verify original hardware, not to secure the registration process. As you can read in these articles, the deactivation is not software security related: - https://developers.yubico.com/U2F/Attestation_and_Metadata/ - https://medium.com/webauthnworks/webauthn-fido2-demystifying-attestation-and-mds-efc3b3cb3651 - https://medium.com/webauthnworks/sorting-fido-ctap-webauthn-terminology-7d32067c0b01 In the end, however, it is of course your decision to leave this check disabled or enabled. TOTP \u00b6 The best known TFA method, mostly used with a smartphone. To set up the TOTP method, log in to the Admin UI and select Time-based OTP (TOTP) from the list. Now a modal will open in which you have to type in a name for your 2FA \"device\" (example: John Deer's Smartphone) and the password of the affected Admin account (the one you are currently logged in with). You have two separate methods to register TOTP for your account: 1. Scan the QR-Code with your Authenticator App on a Smartphone or Tablet. 2. Use the TOTP Code (under the QR Code) in your TOTP Program or App (if you can't scan a QR Code). After you have registered the QR or TOTP code in the TOTP app/program of your choice, enter the generated TOTP token in the mailcow UI as confirmation to finally activate the TOTP 2FA; otherwise it will not be activated, even though the TOTP token is already generated in your app/program.","title":"Two-Factor Authentication"},{"location":"u_e-mailcow_ui-tfa/#yubi-otp","text":"The Yubi API ID and Key will be checked against the Yubico Cloud API. When setting up TFA you will be asked for your personal API account for this key. The API ID, API key and the first 12 characters (your YubiKey's ID in modhex) are stored in the MySQL table as secret.","title":"Yubi OTP"},{"location":"u_e-mailcow_ui-tfa/#example-setup","text":"First of all, the YubiKey must be configured for use as an OTP Generator.
To do this, download the YubiKey Manager from the Yubico website: here In the following you configure the YubiKey for OTP. Via the menu item Applications -> OTP and a click on the Configure button. In the following menu select Credential Type -> Yubico OTP and click on Next . Set a checkmark in the Use serial checkbox, generate a Private ID and a Secret key via the buttons. So that the YubiKey can be validated later, the checkmark in the Upload checkbox must also be set and then click on Finish . Now a new browser window will open in which you have to enter an OTP of your YubiKey at the bottom of the form (click on the field and then tap on your YubiKey). Confirm the captcha and upload the information to the Yubico server by clicking 'Upload'. The processing of the data will take a moment. After the generation was successful, you will be shown a Client ID and a Secret key , make a note of this information in a safe place. Now you can select Yubico OTP authentication from the dropdown menu in the mailcow UI on the start page under Access -> Two-factor authentication . In the dialog that opened now you can enter a name for this YubiKey and insert the Client ID you noted before as well as the Secret key into the fields provided. Finally, enter your current account password and, after selecting the Touch Yubikey field, touch your YubiKey button. Congratulations! You can now log in to the mailcow UI using your YubiKey!","title":"Example setup"},{"location":"u_e-mailcow_ui-tfa/#webauthn-u2f-replacement","text":":warning: Since February 2022 Google Chrome has discarded support for U2F and standardized the use of WebAuthn. The WebAuthn (U2F removal) is part of mailcow since 21th January 2022, so if you want to use the Key past February 2022 please consider a update with the update.sh script. To use WebAuthn, the browser must support this standard. The following desktop browsers support this authentication type: Edge (>=18) Firefox (>=60) Chrome (>=67) Safari (>=13) Opera (>=54) The following mobile browsers support this authentication type: Safari on iOS (>=14.5) Android Browser (>=97) Opera Mobile (>=64) Chrome for Android (>=97) Sources: caniuse.com , blog.mozilla.org WebAuthn works without an internet connection.","title":"WebAuthn (U2F, replacement)"},{"location":"u_e-mailcow_ui-tfa/#what-will-happen-to-my-registered-fido-security-key-after-the-update-from-u2f-to-webauthn","text":":warning: With this new U2F replacement (WebAuthn) you have to re-register your Fido Security Key, thankfully WebAuthn is backwards compatible and supports the U2F protocol. Ideally, the next time you log in (with the key), you should get a text box saying that your Fido Security Key has been removed due to the update to WebAuthn and deleted as a 2-factor authenticator. But don't worry! You can simply re-register your existing key and use it as usual, you probably won't even notice a difference, except that your browser won't show the U2F deactivation message anymore.","title":"What will happen to my registered Fido Security Key after the Update from U2F to WebAuthn?"},{"location":"u_e-mailcow_ui-tfa/#disable-unofficial-supported-fido-security-keys","text":"With WebAuthn there is the possibility to use only official Fido Security Keys (from the big brands like: Yubico, Apple, Nitro, Google, Huawei, Microsoft, etc.). This is primarily for security purposes, as it allows administrators to ensure that only official hardware can be used in their environment. 
To enable this feature, change the value WEBAUTHN_ONLY_TRUSTED_VENDORS in mailcow.conf from n to y and restart the affected containers with docker-compose up -d . The mailcow will now use the Vendor Certificates located in your mailcow directory under data/web/inc/lib/WebAuthn/rootCertificates .","title":"Disable unofficial supported Fido Security Keys"},{"location":"u_e-mailcow_ui-tfa/#example","text":"If you want to limit the official Vendor devices to Apple only you only need the Apple Vendor Certificate inside the data/web/inc/lib/WebAuthn/rootCertificates . After you deleted all other certs you now only can activate WebAuthn 2FA with Apple devices. That\u00b4s for every vendor the same, so choose what you like (if you want to).","title":"Example:"},{"location":"u_e-mailcow_ui-tfa/#use-own-certificates-for-webauthn","text":"If you have a valid certificate from the vendor of your key you can also add it to your mailcow! Just copy the certificate into the data/web/inc/lib/WebAuthn/rootCertificates folder and restart your mailcow. Now you should be able to register this device as well, even though the verification for the vendor certificates is enabled, since you just added the certificate manually.","title":"Use own certificates for WebAuthn"},{"location":"u_e-mailcow_ui-tfa/#is-it-dangerous-to-keep-the-vendor-check-disabled","text":"No, it isn\u00b4t! These vendor certificates are only used to verify original hardware, not to secure the registration process. As you can read in these articles, the deactivation is not software security related: - https://developers.yubico.com/U2F/Attestation_and_Metadata/ - https://medium.com/webauthnworks/webauthn-fido2-demystifying-attestation-and-mds-efc3b3cb3651 - https://medium.com/webauthnworks/sorting-fido-ctap-webauthn-terminology-7d32067c0b01 In the end, however, it is of course your decision to leave this check disabled or enabled.","title":"Is it dangerous to keep the Vendor Check disabled?"},{"location":"u_e-mailcow_ui-tfa/#totp","text":"The best known TFA method mostly used with a smartphone. To setup the TOTP method login to the Admin UI and select Time-based OTP (TOTP) from the list. Now a modal will open in which you have to type in a name for your 2FA \"device\" (example: John Deer\u00b4s Smartphone) and the password of the affected Admin account (you are currently logged in with). You have two seperate methods to register TOTP to your account: 1. Scan the QR-Code with your Authenticator App on a Smartphone or Tablet. 2. Use the TOTP Code (under the QR Code) in your TOTP Program or App (if you can\u00b4t scan a QR Code). After you have registered the QR or TOTP code in the TOTP app/program of your choice you only need to enter the now generated TOTP token (in the app/program) as confirmation in the mailcow UI to finally activate the TOTP 2FA, otherwise it will not be activated even though the TOTP token is already generated in your app/program.","title":"TOTP"},{"location":"u_e-nginx/","text":"SSL \u00b6 Please see Advanced SSL and explicitly check ADDITIONAL_SERVER_NAMES for SSL configuration. Please do not add ADDITIONAL_SERVER_NAMES when you plan to use a different web root. 
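If you only want the default mailcow vhost to answer for a few extra hostnames (rather than serving a separate web root), a minimal sketch of the relevant mailcow.conf entry could look like this; the hostnames are placeholders and the exact format is documented in the comments of mailcow.conf:

```
# mailcow.conf - example hostnames, comma separated
ADDITIONAL_SERVER_NAMES=webmail.example.org,mail.example.net
```

Apply the change by recreating the affected containers with docker-compose up -d . Note that this only extends nginx's server_name; certificate SANs are handled separately (see the Advanced SSL guide).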
New site \u00b6 To create persistent (over updates) sites hosted by mailcow: dockerized, a new site configuration must be placed inside data/conf/nginx/ : A good template to begin with: nano data/conf/nginx/my_custom_site.conf server { ssl_certificate /etc/ssl/mail/cert.pem; ssl_certificate_key /etc/ssl/mail/key.pem; ssl_protocols TLSv1.2 TLSv1.3; ssl_prefer_server_ciphers on; ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305; ssl_ecdh_curve X25519:X448:secp384r1:secp256k1; ssl_session_cache shared:SSL:50m; ssl_session_timeout 1d; ssl_session_tickets off; index index.php index.html; client_max_body_size 0; # Location: data/web root /web; # Location: data/web/mysite.com #root /web/mysite.com include /etc/nginx/conf.d/listen_plain.active; include /etc/nginx/conf.d/listen_ssl.active; server_name mysite.example.org; server_tokens off; # This allows acme to be validated even with a different web root location ^~ /.well-known/acme-challenge/ { default_type \"text/plain\"; rewrite /.well-known/acme-challenge/(.*) /$1 break; root /web/.well-known/acme-challenge/; } if ($scheme = http) { return 301 https://$server_name$request_uri; } } New site with proxy to a remote location \u00b6 Another example with a reverse proxy configuration: nano data/conf/nginx/my_custom_site.conf server { ssl_certificate /etc/ssl/mail/cert.pem; ssl_certificate_key /etc/ssl/mail/key.pem; ssl_protocols TLSv1.2 TLSv1.3; ssl_prefer_server_ciphers on; ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305; ssl_ecdh_curve X25519:X448:secp384r1:secp256k1; ssl_session_cache shared:SSL:50m; ssl_session_timeout 1d; ssl_session_tickets off; index index.php index.html; client_max_body_size 0; root /web; include /etc/nginx/conf.d/listen_plain.active; include /etc/nginx/conf.d/listen_ssl.active; server_name example.domain.tld; server_tokens off; location ^~ /.well-known/acme-challenge/ { allow all; default_type \"text/plain\"; } if ($scheme = http) { return 301 https://$host$request_uri; } location / { proxy_pass http://service:3000/; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; client_max_body_size 0; } } Config expansion in mailcows Nginx \u00b6 The filename used for a new site is not important, as long as the filename carries a .conf extension. It is also possible to extend the configuration of the default file site.conf file: nano data/conf/nginx/site.my_content.custom This filename does not need to have a \".conf\" extension but follows the pattern site.*.custom , where * is a custom name. If PHP is to be included in a custom site, please use the PHP-FPM listener on phpfpm:9002 or create a new listener in data/conf/phpfpm/php-fpm.d/pools.conf . Restart Nginx (and PHP-FPM, if a new listener was created): docker-compose restart nginx-mailcow docker-compose restart php-fpm-mailcow","title":"Custom sites"},{"location":"u_e-nginx/#ssl","text":"Please see Advanced SSL and explicitly check ADDITIONAL_SERVER_NAMES for SSL configuration. 
Please do not add ADDITIONAL_SERVER_NAMES when you plan to use a different web root.","title":"SSL"},{"location":"u_e-nginx/#new-site","text":"To create persistent (over updates) sites hosted by mailcow: dockerized, a new site configuration must be placed inside data/conf/nginx/ : A good template to begin with: nano data/conf/nginx/my_custom_site.conf server { ssl_certificate /etc/ssl/mail/cert.pem; ssl_certificate_key /etc/ssl/mail/key.pem; ssl_protocols TLSv1.2 TLSv1.3; ssl_prefer_server_ciphers on; ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305; ssl_ecdh_curve X25519:X448:secp384r1:secp256k1; ssl_session_cache shared:SSL:50m; ssl_session_timeout 1d; ssl_session_tickets off; index index.php index.html; client_max_body_size 0; # Location: data/web root /web; # Location: data/web/mysite.com #root /web/mysite.com include /etc/nginx/conf.d/listen_plain.active; include /etc/nginx/conf.d/listen_ssl.active; server_name mysite.example.org; server_tokens off; # This allows acme to be validated even with a different web root location ^~ /.well-known/acme-challenge/ { default_type \"text/plain\"; rewrite /.well-known/acme-challenge/(.*) /$1 break; root /web/.well-known/acme-challenge/; } if ($scheme = http) { return 301 https://$server_name$request_uri; } }","title":"New site"},{"location":"u_e-nginx/#new-site-with-proxy-to-a-remote-location","text":"Another example with a reverse proxy configuration: nano data/conf/nginx/my_custom_site.conf server { ssl_certificate /etc/ssl/mail/cert.pem; ssl_certificate_key /etc/ssl/mail/key.pem; ssl_protocols TLSv1.2 TLSv1.3; ssl_prefer_server_ciphers on; ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305; ssl_ecdh_curve X25519:X448:secp384r1:secp256k1; ssl_session_cache shared:SSL:50m; ssl_session_timeout 1d; ssl_session_tickets off; index index.php index.html; client_max_body_size 0; root /web; include /etc/nginx/conf.d/listen_plain.active; include /etc/nginx/conf.d/listen_ssl.active; server_name example.domain.tld; server_tokens off; location ^~ /.well-known/acme-challenge/ { allow all; default_type \"text/plain\"; } if ($scheme = http) { return 301 https://$host$request_uri; } location / { proxy_pass http://service:3000/; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; client_max_body_size 0; } }","title":"New site with proxy to a remote location"},{"location":"u_e-nginx/#config-expansion-in-mailcows-nginx","text":"The filename used for a new site is not important, as long as the filename carries a .conf extension. It is also possible to extend the configuration of the default file site.conf file: nano data/conf/nginx/site.my_content.custom This filename does not need to have a \".conf\" extension but follows the pattern site.*.custom , where * is a custom name. If PHP is to be included in a custom site, please use the PHP-FPM listener on phpfpm:9002 or create a new listener in data/conf/phpfpm/php-fpm.d/pools.conf . 
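If one of your custom sites needs PHP, a location block along the following lines can be added inside the server { ... } block of your site file (using the my_custom_site.conf file from the examples above). Treat it as a sketch: it assumes the stock fastcgi_params file of the nginx container and the phpfpm:9002 listener mentioned above.

```
# inside server { ... } of data/conf/nginx/my_custom_site.conf
location ~ \.php$ {
    try_files $uri =404;
    include /etc/nginx/fastcgi_params;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass phpfpm:9002;
}
```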
Restart Nginx (and PHP-FPM, if a new listener was created): docker-compose restart nginx-mailcow docker-compose restart php-fpm-mailcow","title":"Config expansion in mailcows Nginx"},{"location":"u_e-postfix-attachment_size/","text":"Open data/conf/postfix/extra.cf and set the message_size_limit accordingly in bytes. See main.cf for the default value. Restart Postfix: docker-compose restart postfix-mailcow","title":"Max. message size (attachment size)"},{"location":"u_e-postfix-custom_transport/","text":"For transport maps other than those to be configured in mailcow UI, please use data/conf/postfix/custom_transport.pcre to prevent existing maps or settings from being overwritten by updates. In most cases using this file is not necessary. Please make sure mailcow UI is not able to route your desired traffic properly before using that file. The file needs valid PCRE content and can break Postfix, if configured incorrectly.","title":"Custom transport maps"},{"location":"u_e-postfix-disable_sender_verification/","text":"New guide \u00b6 Edit a mailbox and select \"Allow to send as *\". For historical reasons we kept the old and deprecated guide below: Deprecated guide (DO NOT USE ON NEWER MAILCOWS!) \u00b6 This option is not best-practice and should only be implemented when there is no other option available to achieve whatever you are trying to do. Simply create a file data/conf/postfix/check_sasl_access and enter the following content. This user must exist in your installation and needs to authenticate before sending mail. user-to-allow-everything@example.com OK Open data/conf/postfix/main.cf and find smtpd_sender_restrictions . Prepend check_sasl_access hash:/opt/postfix/conf/check_sasl_access like this: smtpd_sender_restrictions = check_sasl_access hash:/opt/postfix/conf/check_sasl_access reject_authenticated_sender_login_mismatch [...] Run postmap on check_sasl_access: docker-compose exec postfix-mailcow postmap /opt/postfix/conf/check_sasl_access Restart the Postfix container.","title":"Disable Sender Addresses Verification"},{"location":"u_e-postfix-disable_sender_verification/#new-guide","text":"Edit a mailbox and select \"Allow to send as *\". For historical reasons we kept the old and deprecated guide below:","title":"New guide"},{"location":"u_e-postfix-disable_sender_verification/#deprecated-guide-do-not-use-on-newer-mailcows","text":"This option is not best-practice and should only be implemented when there is no other option available to achieve whatever you are trying to do. Simply create a file data/conf/postfix/check_sasl_access and enter the following content. This user must exist in your installation and needs to authenticate before sending mail. user-to-allow-everything@example.com OK Open data/conf/postfix/main.cf and find smtpd_sender_restrictions . Prepend check_sasl_access hash:/opt/postfix/conf/check_sasl_access like this: smtpd_sender_restrictions = check_sasl_access hash:/opt/postfix/conf/check_sasl_access reject_authenticated_sender_login_mismatch [...] Run postmap on check_sasl_access: docker-compose exec postfix-mailcow postmap /opt/postfix/conf/check_sasl_access Restart the Postfix container.","title":"Deprecated guide (DO NOT USE ON NEWER MAILCOWS!)"},{"location":"u_e-postfix-extra_cf/","text":"Please create a new file data/conf/postfix/extra.cf for overrides or additional content to main.cf . Postfix will complain about duplicate values once after starting postfix-mailcow, this is intended. 
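As an example of such an override, the attachment size limit described above could be raised to roughly 50 MiB by adding a single line to extra.cf (the value is in bytes and is only an illustration):

```
# data/conf/postfix/extra.cf
message_size_limit = 52428800
```

Restart postfix-mailcow afterwards, as described below.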
Syslog-ng was configured to hide those warnings while Postfix is running, to not spam the log files with unnecessary information every time a service is used. Restart postfix-mailcow to apply your changes: docker-compose restart postfix-mailcow","title":"Customize/Expand main.cf"},{"location":"u_e-postfix-pflogsumm/","text":"To use pflogsumm with the default logging driver, we need to query postfix-mailcow via docker logs and pipe the output to pflogsumm: docker logs --since 24h $(docker ps -qf name=postfix-mailcow) | pflogsumm The above log output is limited to the past 24 hours. It's also possible to create a daily pflogsumm report via cron. Create the file /etc/cron.d/pflogsumm with the following content: SHELL=/bin/bash 59 23 * * * root docker logs --since 24h $(docker ps -qf name=postfix-mailcow) | /usr/sbin/pflogsumm -d today | mail -s \"Postfix Report of $(date)\" postmaster@example.net Based on the last 24h postfix logs, this example sends every day at 23:59:00 a pflogsumm report to postmaster@example.net .","title":"Statistics with pflogsumm"},{"location":"u_e-postfix-postscreen_whitelist/","text":"IPs can be removed from Postscreen and therefore also from RBL checks in data/conf/postfix/custom_postscreen_whitelist.cidr . Postscreen does multiple checks to identify malicious senders. In most cases you want to whitelist an IP to exclude it from blacklist lookups. The format of the file is as follows: CIDR ACTION Where CIDR is a single IP address or IP range in CIDR notation, and action is either \"permit\" or \"reject\". Example: # Rules are evaluated in the order as specified. # Blacklist 192.168.* except 192.168.0.1. 192.168.0.1 permit 192.168.0.0/16 reject The file is reloaded on the fly, postfix restart is not required.","title":"Whitelist IP in Postscreen"},{"location":"u_e-postfix-relayhost/","text":"As of September 12, 2018 you can setup relayhosts as admin by using the mailcow UI. This is useful if you want to relay outgoing emails for a specific domain to a third-party spam filter or a service like Mailgun or Sendgrid. This is also known as a smarthost . Add a new relayhost \u00b6 Go to the Routing tab of the Configuration and Details section of the admin UI. Here you will see a list of relayhosts currently setup. Scroll to the Add sender-dependent transport section. Under Host , add the host you want to relay to. Example: if you want to use Mailgun to send emails instead of your server IP, enter smtp.mailgun.org If the relay host requires a username and password to authenticate, enter them in the respective fields. Keep in mind the credentials will be stored in plain text. Test a relayhost \u00b6 To test that connectivity to the host works, click on Test from the list of relayhosts and enter a From: address. Then, run the test. You will then see the results of the SMTP transmission. If all went well, you should see SERVER -> CLIENT: 250 2.0.0 Ok: queued as A093B401D4 as one of the last lines. If not, review the error provided and resolve it. Note: Some hosts, especially those who do not require authentication, will deny connections from servers that have not been added to their system beforehand. Make sure you read the documentation of the relayhost to make sure you've added your domain and/or the server IP to their system. 
Tip: You can change the default test To: address the test uses from null@mailcow.email to any email address you choose by modifying the $RELAY_TO variable on the vars.inc.php file under /opt/mailcow-dockerized/data/web/inc This way you can check that the relay worked by checking the destination mailbox. Set the relayhost for a domain \u00b6 Go to the Domains tab of the Mail setup section of the admin UI. Edit the desired domain. Select the newly added host on the Sender-dependent transports dropdown and save changes. Send an email from a mailbox on that domain and you should see postfix handing the message over to the relayhost in the logs.","title":"Relayhosts"},{"location":"u_e-postfix-relayhost/#add-a-new-relayhost","text":"Go to the Routing tab of the Configuration and Details section of the admin UI. Here you will see a list of relayhosts currently setup. Scroll to the Add sender-dependent transport section. Under Host , add the host you want to relay to. Example: if you want to use Mailgun to send emails instead of your server IP, enter smtp.mailgun.org If the relay host requires a username and password to authenticate, enter them in the respective fields. Keep in mind the credentials will be stored in plain text.","title":"Add a new relayhost"},{"location":"u_e-postfix-relayhost/#test-a-relayhost","text":"To test that connectivity to the host works, click on Test from the list of relayhosts and enter a From: address. Then, run the test. You will then see the results of the SMTP transmission. If all went well, you should see SERVER -> CLIENT: 250 2.0.0 Ok: queued as A093B401D4 as one of the last lines. If not, review the error provided and resolve it. Note: Some hosts, especially those who do not require authentication, will deny connections from servers that have not been added to their system beforehand. Make sure you read the documentation of the relayhost to make sure you've added your domain and/or the server IP to their system. Tip: You can change the default test To: address the test uses from null@mailcow.email to any email address you choose by modifying the $RELAY_TO variable on the vars.inc.php file under /opt/mailcow-dockerized/data/web/inc This way you can check that the relay worked by checking the destination mailbox.","title":"Test a relayhost"},{"location":"u_e-postfix-relayhost/#set-the-relayhost-for-a-domain","text":"Go to the Domains tab of the Mail setup section of the admin UI. Edit the desired domain. Select the newly added host on the Sender-dependent transports dropdown and save changes. Send an email from a mailbox on that domain and you should see postfix handing the message over to the relayhost in the logs.","title":"Set the relayhost for a domain"},{"location":"u_e-postfix-trust_networks/","text":"By default mailcow considers all networks as untrusted excluding its own IPV4_NETWORK and IPV6_NETWORK scopes. Though it is reasonable in most cases, there may be circumstances that you need to loosen this restriction. By default mailcow uses mynetworks_style = subnet to determine internal subnets and leaves mynetworks unconfigured. If you decide to set mynetworks , Postfix ignores the mynetworks_style setting. This means you have to add the IPV4_NETWORK and IPV6_NETWORK scopes as well as loopback subnets manually! Unauthenticated relaying \u00b6 Warning Incorrect setup of mynetworks will allow your server to be used as an open relay. If abused, this will affect your ability to send emails and can take some time to be resolved. 
IPv4 hosts/subnets \u00b6 To add the subnet 192.168.2.0/24 to the trusted networks you may use the following configuration, depending on your IPV4_NETWORK and IPV6_NETWORK scopes: Edit data/conf/postfix/extra.cf : mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 [fe80::]/10 172.22.1.0/24 [fd4d:6169:6c63:6f77::]/64 192.168.2.0/24 Run docker-compose restart postfix-mailcow to apply your new settings. IPv6 hosts/subnets \u00b6 Adding IPv6 hosts is done the same as IPv4, however the subnet needs to be placed in brackets [] with the netmask appended. To add the subnet 2001:db8::/32 to the trusted networks you may use the following configuration, depending on your IPV4_NETWORK and IPV6_NETWORK scopes: Edit data/conf/postfix/extra.cf : mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 [fe80::]/10 172.22.1.0/24 [fd4d:6169:6c63:6f77::]/64 [2001:db8::]/32 Run docker-compose restart postfix-mailcow to apply your new settings. Info More information about mynetworks can be found in the Postfix documentation .","title":"Add trusted networks"},{"location":"u_e-postfix-trust_networks/#unauthenticated-relaying","text":"Warning Incorrect setup of mynetworks will allow your server to be used as an open relay. If abused, this will affect your ability to send emails and can take some time to be resolved.","title":"Unauthenticated relaying"},{"location":"u_e-postfix-trust_networks/#ipv4-hostssubnets","text":"To add the subnet 192.168.2.0/24 to the trusted networks you may use the following configuration, depending on your IPV4_NETWORK and IPV6_NETWORK scopes: Edit data/conf/postfix/extra.cf : mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 [fe80::]/10 172.22.1.0/24 [fd4d:6169:6c63:6f77::]/64 192.168.2.0/24 Run docker-compose restart postfix-mailcow to apply your new settings.","title":"IPv4 hosts/subnets"},{"location":"u_e-postfix-trust_networks/#ipv6-hostssubnets","text":"Adding IPv6 hosts is done the same as IPv4, however the subnet needs to be placed in brackets [] with the netmask appended. To add the subnet 2001:db8::/32 to the trusted networks you may use the following configuration, depending on your IPV4_NETWORK and IPV6_NETWORK scopes: Edit data/conf/postfix/extra.cf : mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 [fe80::]/10 172.22.1.0/24 [fd4d:6169:6c63:6f77::]/64 [2001:db8::]/32 Run docker-compose restart postfix-mailcow to apply your new settings. Info More information about mynetworks can be found in the Postfix documentation .","title":"IPv6 hosts/subnets"},{"location":"u_e-redis/","text":"Redis is used as a key-value store for rspamd's and (some of) mailcow's settings and data. If you are unfamiliar with redis please read the introduction to redis and maybe visit this wonderful guide on how to use it. Client \u00b6 To connect to the redis cli execute: docker-compose exec redis-mailcow redis-cli Debugging \u00b6 Here are some useful commands for the redis-cli for debugging: MONITOR \u00b6 Listens for all requests received by the server in real time: # docker-compose exec redis-mailcow redis-cli 127.0.0.1:6379> monitor OK 1494077286.401963 [0 172.22.1.253:41228] \"SMEMBERS\" \"BAYES_SPAM_keys\" 1494077288.292970 [0 172.22.1.253:41229] \"SMEMBERS\" \"BAYES_SPAM_keys\" [...] 
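Besides MONITOR, a couple of quick, read-only checks can tell you whether the Redis instance holds data at all; these are plain redis-cli commands run from the host:

```
# number of keys in the current database
docker-compose exec redis-mailcow redis-cli DBSIZE
# per-database key statistics
docker-compose exec redis-mailcow redis-cli INFO keyspace
```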
KEYS \u00b6 Get all keys matching your pattern: KEYS * PING \u00b6 Test a connection: 127.0.0.1:6379> PING PONG If you want to know more, here is a cheat sheet .","title":"Redis"},{"location":"u_e-redis/#client","text":"To connect to the redis cli execute: docker-compose exec redis-mailcow redis-cli","title":"Client"},{"location":"u_e-redis/#debugging","text":"Here are some useful commands for the redis-cli for debugging:","title":"Debugging"},{"location":"u_e-redis/#monitor","text":"Listens for all requests received by the server in real time: # docker-compose exec redis-mailcow redis-cli 127.0.0.1:6379> monitor OK 1494077286.401963 [0 172.22.1.253:41228] \"SMEMBERS\" \"BAYES_SPAM_keys\" 1494077288.292970 [0 172.22.1.253:41229] \"SMEMBERS\" \"BAYES_SPAM_keys\" [...]","title":"MONITOR"},{"location":"u_e-redis/#keys","text":"Get all keys matching your pattern: KEYS *","title":"KEYS"},{"location":"u_e-redis/#ping","text":"Test a connection: 127.0.0.1:6379> PING PONG If you want to know more, here is a cheat sheet .","title":"PING"},{"location":"u_e-reeanble-weak-protocols/","text":"On February the 12th 2020 we disabled the deprecated protocols TLS 1.0 and 1.1 in Dovecot (POP3, POP3S, IMAP, IMAPS) and Postfix (SMTPS, SUBMISSION). Unauthenticated mail via SMTP on port 25/tcp does still accept >= TLS 1.0 . It is better to accept a weak encryption than none at all. How to re-enable weak protocols? Edit data/conf/postfix/extra.cf : submission_smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3 smtps_smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3 Edit data/conf/dovecot/extra.conf : ssl_min_protocol = TLSv1 Restart the affected services: docker-compose restart postfix-mailcow dovecot-mailcow Hint: You can enable TLS 1.2 in Windows 7.","title":"Re-enable TLS 1.0 and TLS 1.1"},{"location":"u_e-rspamd/","text":"Rspamd is used for AV handling, DKIM signing and SPAM handling. It's a powerful and fast filter system. For a more in-depth documentation on Rspamd please visit its own documentation . Learn Spam & Ham \u00b6 Rspamd learns mail as spam or ham when you move a message in or out of the junk folder to any mailbox besides trash. This is achieved by using the Sieve plugin \"sieve_imapsieve\" and parser scripts. Rspamd also auto-learns mail when a high or low score is detected (see https://rspamd.com/doc/configuration/statistic.html#autolearning ). We configured the plugin to keep a sane ratio between spam and ham learns. The bayes statistics are written to Redis as keys BAYES_HAM and BAYES_SPAM . Besides bayes, a local fuzzy storage is used to learn recurring patterns in text or images that indicate ham or spam. You can also use Rspamd's web UI to learn ham and / or spam or to adjust certain settings of Rspamd. Learn Spam or Ham from existing directory \u00b6 You can use a one-liner to learn mail in plain-text (uncompressed) format: # Ham for file in /my/folder/cur/* ; do docker exec -i $( docker-compose ps -q rspamd-mailcow ) rspamc learn_ham < $file ; done # Spam for file in /my/folder/.Junk/cur/* ; do docker exec -i $( docker-compose ps -q rspamd-mailcow ) rspamc learn_spam < $file ; done Consider attaching a local folder as new volume to rspamd-mailcow in docker-compose.yml and learn given files inside the container. This can be used as workaround to parse compressed data with zcat. 
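A minimal sketch of such a docker-compose.override.yml addition is shown below; /path/to/old_mail is a placeholder for your folder on the host and is mounted read-only at the /data/old_mail path used in the example that follows. If you already have an override file, merge the services block into it instead of duplicating keys.

```
version: '2.1'
services:
  rspamd-mailcow:
    volumes:
      - /path/to/old_mail:/data/old_mail:ro
```

Recreate the container with docker-compose up -d rspamd-mailcow and run the learn commands from a shell inside the container, e.g. docker-compose exec rspamd-mailcow bash .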
Example: for file in /data/old_mail/.Junk/cur/* ; do zcat $file | rspamc learn_spam ; done Reset learned data (Bayes, Neural) \u00b6 You need to delete keys in Redis to reset learned data, so create a copy of your Redis database now: Backup database # It is better to stop Redis before you copy the file. cp /var/lib/docker/volumes/mailcowdockerized_redis-vol-1/_data/dump.rdb /root/ Reset Bayes data docker-compose exec redis-mailcow sh -c 'redis-cli --scan --pattern BAYES_* | xargs redis-cli del' docker-compose exec redis-mailcow sh -c 'redis-cli --scan --pattern RS* | xargs redis-cli del' Reset Neural data docker-compose exec redis-mailcow sh -c 'redis-cli --scan --pattern rn_* | xargs redis-cli del' Reset Fuzzy data # We need to enter the redis-cli first: docker-compose exec redis-mailcow redis-cli # In redis-cli: 127.0.0.1:6379> EVAL \"for i, name in ipairs(redis.call('KEYS', ARGV[1])) do redis.call('DEL', name); end\" 0 fuzzy* Info If redis-cli complains about... (error) ERR wrong number of arguments for 'del' command ...the key pattern was not found and thus no data is available to delete - it is fine. CLI tools \u00b6 docker-compose exec rspamd-mailcow rspamc --help docker-compose exec rspamd-mailcow rspamadm --help Disable Greylisting \u00b6 Only messages with a higher score will be considered to be greylisted (soft rejected). It is bad practice to disable greylisting. You can disable greylisting server-wide by editing: {mailcow-dir}/data/conf/rspamd/local.d/greylist.conf Add the line: enabled = false ; Save the file and restart \"rspamd-mailcow\": docker-compose restart rspamd-mailcow Spam filter thresholds (global) \u00b6 Each user is able to change their spam rating individually . To define a new server-wide limit, edit data/conf/rspamd/local.d/actions.conf : reject = 15 ; add_header = 8 ; greylist = 7 ; Save the file and restart \"rspamd-mailcow\": docker-compose restart rspamd-mailcow Existing settings of users will not be overwritten! To reset custom-defined thresholds, run: source mailcow.conf docker-compose exec mysql-mailcow mysql -umailcow -p$DBPASS mailcow -e \"delete from filterconf where option = 'highspamlevel' or option = 'lowspamlevel';\" # or: # docker-compose exec mysql-mailcow mysql -umailcow -p$DBPASS mailcow -e \"delete from filterconf where option = 'highspamlevel' or option = 'lowspamlevel' and object = 'only-this-mailbox@example.org';\" Custom reject messages \u00b6 The default spam reject message can be changed by adding a new file data/conf/rspamd/override.d/worker-proxy.custom.inc with the following content: reject_message = \"My custom reject message\"; Save the file and restart Rspamd: docker-compose restart rspamd-mailcow . While the above works for rejected mails with a high spam score, prefilter reject actions will ignore this setting. For these maps, the multimap module in Rspamd needs to be adjusted: Find the prefilter reject symbol whose message you want to change; to do so, run: grep -R \"SYMBOL_YOU_WANT_TO_ADJUST\" /opt/mailcow-dockerized/data/conf/rspamd/ Add your custom message as a new line: GLOBAL_RCPT_BL { type = \"rcpt\"; map = \"${LOCAL_CONFDIR}/custom/global_rcpt_blacklist.map\"; regexp = true; prefilter = true; action = \"reject\"; message = \"Sending mail to this recipient is prohibited by postmaster@your.domain\"; } Save the file and restart Rspamd: docker-compose restart rspamd-mailcow .
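To see which reject text a sending server actually receives, you can watch the SMTP dialogue with a tool such as swaks (not part of mailcow). A hedged sketch, assuming mail.example.org is your mailcow host and the recipient is one you listed in the global_rcpt_blacklist.map from the example above:

```
# swaks prints the full SMTP dialogue, including the reject message
swaks --server mail.example.org --port 25 \
      --from sender@external.example \
      --to blocked-recipient@example.org
```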
Whitelist specific ClamAV signatures \u00b6 You may find that legitimate (clean) mail is being blocked by ClamAV (Rspamd will flag the mail with VIRUS_FOUND ). For instance, interactive PDF form attachments are blocked by default because the embedded Javascript code may be used for nefarious purposes. Confirm by looking at the clamd logs, e.g.: docker-compose logs clamd-mailcow | grep \"FOUND\" A line like this confirms that such a signature was identified: clamd-mailcow_1 | Sat Sep 28 07:43:24 2019 -> instream(local): PUA.Pdf.Trojan.EmbeddedJavaScript-1(e887d2ac324ce90750768b86b63d0749:363325) FOUND To whitelist this particular signature (and allow sending this type of file as an attachment), add it to the ClamAV signature whitelist file: echo 'PUA.Pdf.Trojan.EmbeddedJavaScript-1' >> data/conf/clamav/whitelist.ign2 Then restart the clamd-mailcow service container in the mailcow UI or using docker-compose: docker-compose restart clamd-mailcow Clean up cached ClamAV results in Redis: # docker-compose exec redis-mailcow /bin/sh /data # redis-cli KEYS rs_cl* | xargs redis-cli DEL /data # exit Discard instead of reject \u00b6 If you want to silently drop a message, create or edit the file data/conf/rspamd/override.d/worker-proxy.custom.inc and add the following content: discard_on_reject = true; Restart Rspamd: docker-compose restart rspamd-mailcow Wipe all ratelimit keys \u00b6 If you don't want to use the UI and instead wipe all keys in the Redis database, you can use redis-cli for that task: docker-compose exec redis-mailcow sh # Unlink (available in Redis >=4) will delete in the background redis-cli --scan --pattern RL* | xargs redis-cli unlink Restart Rspamd: docker-compose restart rspamd-mailcow Trigger a resend of quarantine notifications \u00b6 Should be used for debugging only! docker-compose exec dovecot-mailcow bash mysql -umailcow -p$DBPASS mailcow -e \"update quarantine set notified = 0;\" redis-cli -h redis DEL Q_LAST_NOTIFIED quarantine_notify.py Increase history retention \u00b6 By default Rspamd keeps 1000 elements in the history. The history is stored compressed. It is recommended not to use a disproportionately high value here; try something around 5000 or 10000 and see how your server handles it: Edit data/conf/rspamd/local.d/history_redis.conf : nrows = 1000; # change this value Restart Rspamd afterwards: docker-compose restart rspamd-mailcow","title":"Rspamd"},{"location":"u_e-rspamd/#learn-spam-ham","text":"Rspamd learns mail as spam or ham when you move a message in or out of the junk folder to any mailbox besides trash. This is achieved by using the Sieve plugin \"sieve_imapsieve\" and parser scripts. Rspamd also auto-learns mail when a high or low score is detected (see https://rspamd.com/doc/configuration/statistic.html#autolearning ). We configured the plugin to keep a sane ratio between spam and ham learns. The bayes statistics are written to Redis as keys BAYES_HAM and BAYES_SPAM . Besides bayes, a local fuzzy storage is used to learn recurring patterns in text or images that indicate ham or spam.
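To verify that learning actually happens, you can query Rspamd's statistics from the host; the ham and spam counters of the Bayes classifier should grow as messages are moved in and out of the Junk folder:

```
docker-compose exec rspamd-mailcow rspamc stat
```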
You can also use Rspamd's web UI to learn ham and / or spam or to adjust certain settings of Rspamd.","title":"Learn Spam & Ham"},{"location":"u_e-rspamd/#learn-spam-or-ham-from-existing-directory","text":"You can use a one-liner to learn mail in plain-text (uncompressed) format: # Ham for file in /my/folder/cur/* ; do docker exec -i $( docker-compose ps -q rspamd-mailcow ) rspamc learn_ham < $file ; done # Spam for file in /my/folder/.Junk/cur/* ; do docker exec -i $( docker-compose ps -q rspamd-mailcow ) rspamc learn_spam < $file ; done Consider attaching a local folder as new volume to rspamd-mailcow in docker-compose.yml and learn given files inside the container. This can be used as workaround to parse compressed data with zcat. Example: for file in /data/old_mail/.Junk/cur/* ; do rspamc learn_spam < zcat $file ; done","title":"Learn Spam or Ham from existing directory"},{"location":"u_e-rspamd/#reset-learned-data-bayes-neural","text":"You need to delete keys in Redis to reset learned data, so create a copy of your Redis database now: Backup database # It is better to stop Redis before you copy the file. cp /var/lib/docker/volumes/mailcowdockerized_redis-vol-1/_data/dump.rdb /root/ Reset Bayes data docker-compose exec redis-mailcow sh -c 'redis-cli --scan --pattern BAYES_* | xargs redis-cli del' docker-compose exec redis-mailcow sh -c 'redis-cli --scan --pattern RS* | xargs redis-cli del' Reset Neural data docker-compose exec redis-mailcow sh -c 'redis-cli --scan --pattern rn_* | xargs redis-cli del' Reset Fuzzy data # We need to enter the redis-cli first: docker-compose exec redis-mailcow redis-cli # In redis-cli: 127 .0.0.1:6379> EVAL \"for i, name in ipairs(redis.call('KEYS', ARGV[1])) do redis.call('DEL', name); end\" 0 fuzzy* Info If redis-cli complains about... (error) ERR wrong number of arguments for 'del' command ...the key pattern was not found and thus no data is available to delete - it is fine.","title":"Reset learned data (Bayes, Neural)"},{"location":"u_e-rspamd/#cli-tools","text":"docker-compose exec rspamd-mailcow rspamc --help docker-compose exec rspamd-mailcow rspamadm --help","title":"CLI tools"},{"location":"u_e-rspamd/#disable-greylisting","text":"Only messages with a higher score will be considered to be greylisted (soft rejected). It is bad practice to disable greylisting. You can disable greylisting server-wide by editing: {mailcow-dir}/data/conf/rspamd/local.d/greylist.conf Add the line: enabled = false ; Save the file and restart \"rspamd-mailcow\": docker-compose restart rspamd-mailcow","title":"Disable Greylisting"},{"location":"u_e-rspamd/#spam-filter-thresholds-global","text":"Each user is able to change their spam rating individually . To define a new server-wide limit, edit data/conf/rspamd/local.d/actions.conf : reject = 15 ; add_header = 8 ; greylist = 7 ; Save the file and restart \"rspamd-mailcow\": docker-compose restart rspamd-mailcow Existing settings of users will not be overwritten! 
To reset custom defined thresholds, run: source mailcow.conf docker-compose exec mysql-mailcow mysql -umailcow -p$DBPASS mailcow -e \"delete from filterconf where option = 'highspamlevel' or option = 'lowspamlevel';\" # or: # docker-compose exec mysql-mailcow mysql -umailcow -p$DBPASS mailcow -e \"delete from filterconf where option = 'highspamlevel' or option = 'lowspamlevel' and object = 'only-this-mailbox@example.org';\"","title":"Spam filter thresholds (global)"},{"location":"u_e-rspamd/#custom-reject-messages","text":"The default spam reject message can be changed by adding a new file data/conf/rspamd/override.d/worker-proxy.custom.inc with the following content: reject_message = \"My custom reject message\"; Save the file and restart Rspamd: docker-compose restart rspamd-mailcow . While the above works for rejected mails with a high spam score, prefilter reject actions will ignore this setting. For these maps, the multimap module in Rspamd needs to be adjusted: Find prefilet reject symbol for which you want change message, to do it run: grep -R \"SYMBOL_YOU_WANT_TO_ADJUST\" /opt/mailcow-dockerized/data/conf/rspamd/ Add your custom message as new line: GLOBAL_RCPT_BL { type = \"rcpt\"; map = \"${LOCAL_CONFDIR}/custom/global_rcpt_blacklist.map\"; regexp = true; prefilter = true; action = \"reject\"; message = \"Sending mail to this recipient is prohibited by postmaster@your.domain\"; } Save the file and restart Rspamd: docker-compose restart rspamd-mailcow .","title":"Custom reject messages"},{"location":"u_e-rspamd/#whitelist-specific-clamav-signatures","text":"You may find that legitimate (clean) mail is being blocked by ClamAV (Rspamd will flag the mail with VIRUS_FOUND ). For instance, interactive PDF form attachments are blocked by default because the embedded Javascript code may be used for nefarious purposes. Confirm by looking at the clamd logs, e.g.: docker-compose logs clamd-mailcow | grep \"FOUND\" This line confirms that such was identified: clamd-mailcow_1 | Sat Sep 28 07:43:24 2019 -> instream(local): PUA.Pdf.Trojan.EmbeddedJavaScript-1(e887d2ac324ce90750768b86b63d0749:363325) FOUND To whitelist this particular signature (and enable sending this type of file attached), add it to the ClamAV signature whitelist file: echo 'PUA.Pdf.Trojan.EmbeddedJavaScript-1' >> data/conf/clamav/whitelist.ign2 Then restart the clamd-mailcow service container in the mailcow UI or using docker-compose: docker-compose restart clamd-mailcow Cleanup cached ClamAV results in Redis: # docker-compose exec redis-mailcow /bin/sh /data # redis-cli KEYS rs_cl* | xargs redis-cli DEL /data # exit","title":"Whitelist specific ClamAV signatures"},{"location":"u_e-rspamd/#discard-instead-of-reject","text":"If you want to silently drop a message, create or edit the file data/conf/rspamd/override.d/worker-proxy.custom.inc and add the following content: discard_on_reject = true; Restart Rspamd: docker-compose restart rspamd-mailcow","title":"Discard instead of reject"},{"location":"u_e-rspamd/#wipe-all-ratelimit-keys","text":"If you don't want to use the UI and instead wipe all keys in the Redis database, you can use redis-cli for that task: docker-compose exec redis-mailcow sh # Unlink (available in Redis >=4.) will delete in the backgronud redis-cli --scan --pattern RL* | xargs redis-cli unlink Restart Rspamd: docker-compose exec redis-mailcow sh","title":"Wipe all ratelimit keys"},{"location":"u_e-rspamd/#trigger-a-resend-of-quarantine-notifications","text":"Should be used for debugging only! 
docker-compose exec dovecot-mailcow bash mysql -umailcow -p$DBPASS mailcow -e \"update quarantine set notified = 0;\" redis-cli -h redis DEL Q_LAST_NOTIFIED quarantine_notify.py","title":"Trigger a resend of quarantine notifications"},{"location":"u_e-rspamd/#increase-history-retention","text":"By default Rspamd keeps 1000 elements in the history. The history is stored compressed. It is recommended not to use a disproportionately high value here; try something around 5000 or 10000 and see how your server handles it: Edit data/conf/rspamd/local.d/history_redis.conf : nrows = 1000; # change this value Restart Rspamd afterwards: docker-compose restart rspamd-mailcow","title":"Increase history retention"},{"location":"u_e-sogo/","text":"SOGo is used for accessing your mail via a web browser and for adding and sharing your contacts or calendars. For more in-depth documentation on SOGo please visit its own documentation . Apply custom SOGo theme \u00b6 mailcow builds after 28 January 2021 can change SOGo's theme by editing data/conf/sogo/custom-theme.js . Please check the AngularJS Material intro and documentation as well as the material style guideline to learn how this works. You can use the provided custom-theme.js as an example starting point by removing the comments. After you have modified data/conf/sogo/custom-theme.js and made changes to your new SOGo theme, you need to edit data/conf/sogo/sogo.conf and append/set SOGoUIxDebugEnabled = YES; restart SOGo and Memcached containers by executing docker-compose restart memcached-mailcow sogo-mailcow . open SOGo in a browser open the browser developer console (the usual shortcut is F12) only if you use Firefox: manually type allow pasting into the dev console and press Enter paste the following JavaScript snippet into the dev console: copy([].slice.call(document.styleSheets) .map(e => e.ownerNode) .filter(e => e.hasAttribute('md-theme-style')) .map(e => e.textContent) .join('\\n') ) open a text editor and paste the data from the clipboard (Ctrl+V); you should get minified CSS, save it copy the CSS file to your mailcow server as data/conf/sogo/custom-theme.css edit data/conf/sogo/sogo.conf and set SOGoUIxDebugEnabled = NO; append/create docker-compose.override.yml with: version: '2.1' services: sogo-mailcow: volumes: - ./data/conf/sogo/custom-theme.css:/usr/lib/GNUstep/SOGo/WebServerResources/css/theme-default.css:z run docker-compose up -d run docker-compose restart memcached-mailcow Reset to SOGo default theme \u00b6 checkout data/conf/sogo/custom-theme.js by executing git fetch ; git checkout origin/master data/conf/sogo/custom-theme.js data/conf/sogo/custom-theme.js find in data/conf/sogo/custom-theme.js : // Apply new palettes to the default theme, remap some of the hues $mdThemingProvider.theme('default') .primaryPalette('green-cow', { 'default': '400', // background color of top toolbars 'hue-1': '400', 'hue-2': '600', // background color of sidebar toolbar 'hue-3': 'A700' }) .accentPalette('green', { 'default': '600', // background color of fab buttons and login screen 'hue-1': '300', // background color of center list toolbar 'hue-2': '300', // highlight color for selected mail and current day calendar 'hue-3': 'A700' }) .backgroundPalette('frost-grey'); and replace it with: $mdThemingProvider.theme('default'); remove from docker-compose.override.yml volume mount in sogo-mailcow : - ./data/conf/sogo/custom-theme.css:/usr/lib/GNUstep/SOGo/WebServerResources/css/theme-default.css:z run docker-compose up -d run docker-compose restart memcached-mailcow Change favicon \u00b6 mailcow builds after 31 January
2021 can change SOGo's favicon by replacing data/conf/sogo/custom-favicon.ico for SOGo and data/web/favicon.png for the mailcow UI. Note : You can use .png favicons for SOGo by renaming them to custom-favicon.ico . For both SOGo and mailcow UI favicons you need to use one of the standard dimensions: 16x16, 32x32, 64x64, 128x128 and 256x256. After you have replaced said file you need to restart the SOGo and Memcached containers by executing docker-compose restart memcached-mailcow sogo-mailcow . Change logo \u00b6 mailcow builds after 21 December 2018 can change SOGo's logo by replacing or creating (if missing) data/conf/sogo/sogo-full.svg . After you have replaced said file you need to restart the SOGo and Memcached containers by executing docker-compose restart memcached-mailcow sogo-mailcow . Connect domains \u00b6 Domains are usually isolated from each other. You can change that by modifying data/conf/sogo/sogo.conf : Search... // SOGoDomainsVisibility = ( // (domain1.tld, domain5.tld), // (domain3.tld, domain2.tld) // ); ...and replace it with, for example: SOGoDomainsVisibility = ( (example.org, example.com, example.net) ); Restart SOGo: docker-compose restart sogo-mailcow Disable password changing \u00b6 Edit data/conf/sogo/sogo.conf and change SOGoPasswordChangeEnabled to NO . Please do not add a new parameter. Run docker-compose restart memcached-mailcow sogo-mailcow to activate the changes. Reset TOTP / Disable TOTP \u00b6 Run docker-compose exec -u sogo sogo-mailcow sogo-tool user-preferences set defaults user@domain.tld SOGoTOTPEnabled '{\"SOGoTOTPEnabled\":0}' from within the mailcow directory.","title":"SOGo"},{"location":"u_e-sogo/#apply-custom-sogo-theme","text":"mailcow builds after 28 January 2021 can change SOGo's theme by editing data/conf/sogo/custom-theme.js . Please check the AngularJS Material intro and documentation as well as the material style guideline to learn how this works. You can use the provided custom-theme.js as an example starting point by removing the comments. After you have modified data/conf/sogo/custom-theme.js and made changes to your new SOGo theme, you need to edit data/conf/sogo/sogo.conf and append/set SOGoUIxDebugEnabled = YES; restart the SOGo and Memcached containers by executing docker-compose restart memcached-mailcow sogo-mailcow . 
open SOGo in browser open browser developer console, usually shortcut is F12 only if you use Firefox: write by hands in dev console allow pasting and press enter paste java script snipet in dev console: copy([].slice.call(document.styleSheets) .map(e => e.ownerNode) .filter(e => e.hasAttribute('md-theme-style')) .map(e => e.textContent) .join('\\n') ) open text editor and paste data from clipboard (Ctrl+V), you should get minified CSS, save it copy CSS file to mailcow server data/conf/sogo/custom-theme.css edit data/conf/sogo/sogo.conf and set SOGoUIxDebugEnabled = NO; append/create docker-compose.override.yml with: version: '2.1' services: sogo-mailcow: volumes: - ./data/conf/sogo/custom-theme.css:/usr/lib/GNUstep/SOGo/WebServerResources/css/theme-default.css:z run docker-compose up -d run docker-compose restart memcached-mailcow","title":"Apply custom SOGo theme"},{"location":"u_e-sogo/#reset-to-sogo-default-theme","text":"checkout data/conf/sogo/custom-theme.js by executing git fetch ; git checkout origin/master data/conf/sogo/custom-theme.js data/conf/sogo/custom-theme.js find in data/conf/sogo/custom-theme.js : // Apply new palettes to the default theme, remap some of the hues $mdThemingProvider.theme('default') .primaryPalette('green-cow', { 'default': '400', // background color of top toolbars 'hue-1': '400', 'hue-2': '600', // background color of sidebar toolbar 'hue-3': 'A700' }) .accentPalette('green', { 'default': '600', // background color of fab buttons and login screen 'hue-1': '300', // background color of center list toolbar 'hue-2': '300', // highlight color for selected mail and current day calendar 'hue-3': 'A700' }) .backgroundPalette('frost-grey'); and replace it with: $mdThemingProvider.theme('default'); remove from docker-compose.override.yml volume mount in sogo-mailcow : - ./data/conf/sogo/custom-theme.css:/usr/lib/GNUstep/SOGo/WebServerResources/css/theme-default.css:z run docker-compose up -d run docker-compose restart memcached-mailcow","title":"Reset to SOGo default theme"},{"location":"u_e-sogo/#change-favicon","text":"mailcow builds after 31 January 2021 can change SOGo's favicon by replacing data/conf/sogo/custom-favicon.ico for SOGo and data/web/favicon.png for mailcow UI. Note : You can use .png favicons for SOGo by renaming them to custom-favicon.ico . For both SOGo and mailcow UI favicons you need use one of the standard dimensions: 16x16, 32x32, 64x64, 128x128 and 256x256. After you replaced said file you need to restart SOGo and Memcached containers by executing docker-compose restart memcached-mailcow sogo-mailcow .","title":"Change favicon"},{"location":"u_e-sogo/#change-logo","text":"mailcow builds after 21 December 2018 can change SOGo's logo by replacing or creating (if missing) data/conf/sogo/sogo-full.svg . After you replaced said file you need to restart SOGo and Memcached containers by executing docker-compose restart memcached-mailcow sogo-mailcow .","title":"Change logo"},{"location":"u_e-sogo/#connect-domains","text":"Domains are usually isolated from eachother. You can change that by modifying data/conf/sogo/sogo.conf : Search... // SOGoDomainsVisibility = ( // (domain1.tld, domain5.tld), // (domain3.tld, domain2.tld) // ); ...and replace it by - for example: SOGoDomainsVisibility = ( (example.org, example.com, example.net) ); Restart SOGo: docker-compose restart sogo-mailcow","title":"Connect domains"},{"location":"u_e-sogo/#disable-password-changing","text":"Edit data/conf/sogo/sogo.conf and change SOGoPasswordChangeEnabled to NO . 
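If you prefer to make this change non-interactively, a simple sed call can toggle the existing parameter in place; this is only a sketch and assumes the parameter is currently set to YES in data/conf/sogo/sogo.conf:
# flip the existing SOGoPasswordChangeEnabled parameter from YES to NO
sed -i 's/SOGoPasswordChangeEnabled = YES/SOGoPasswordChangeEnabled = NO/' data/conf/sogo/sogo.conf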
Please do not add a new parameter. Run docker-compose restart memcached-mailcow sogo-mailcow to activate the changes.","title":"Disable password changing"},{"location":"u_e-sogo/#reset-totp-disable-totp","text":"Run docker-compose exec -u sogo sogo-mailcow sogo-tool user-preferences set defaults user@domain.tld SOGoTOTPEnabled '{\"SOGoTOTPEnabled\":0}' from within the mailcow directory.","title":"Reset TOTP / Disable TOTP"},{"location":"u_e-unbound-fwd/","text":"If you want or have to use an external DNS service, you can either set a forwarder in Unbound or copy an override file to define external DNS servers: Warning : Please do not use a public resolver like the ones shown in the examples below. Many - if not all - blacklist lookups will fail with public resolvers, because blacklist servers have limits on how many requests can be made from one IP, and public resolvers usually reach these limits. Important : Only DNSSEC-validating DNS services will work. Method A, Unbound \u00b6 Edit data/conf/unbound/unbound.conf and append the following parameters: forward-zone: name: \".\" forward-addr: 8.8.8.8 # DO NOT USE PUBLIC DNS SERVERS - JUST AN EXAMPLE forward-addr: 8.8.4.4 # DO NOT USE PUBLIC DNS SERVERS - JUST AN EXAMPLE Restart Unbound: docker-compose restart unbound-mailcow Method B, Override file \u00b6 cd /opt/mailcow-dockerized cp helper-scripts/docker-compose.override.yml.d/EXTERNAL_DNS/docker-compose.override.yml . Edit docker-compose.override.yml and adjust the IP. Run docker-compose down ; docker-compose up -d .","title":"Using an external DNS service"},{"location":"u_e-unbound-fwd/#method-a-unbound","text":"Edit data/conf/unbound/unbound.conf and append the following parameters: forward-zone: name: \".\" forward-addr: 8.8.8.8 # DO NOT USE PUBLIC DNS SERVERS - JUST AN EXAMPLE forward-addr: 8.8.4.4 # DO NOT USE PUBLIC DNS SERVERS - JUST AN EXAMPLE Restart Unbound: docker-compose restart unbound-mailcow","title":"Method A, Unbound"},{"location":"u_e-unbound-fwd/#method-b-override-file","text":"cd /opt/mailcow-dockerized cp helper-scripts/docker-compose.override.yml.d/EXTERNAL_DNS/docker-compose.override.yml . Edit docker-compose.override.yml and adjust the IP. Run docker-compose down ; docker-compose up -d .","title":"Method B, Override file"},{"location":"u_e-update-hooks/","text":"It is possible to add pre- and post-update-hooks to the update.sh script that upgrades your whole mailcow installation. To do so, just add the corresponding bash script to your mailcow root directory: pre_update_hook.sh for commands that should run before the update post_update_hook.sh for commands that should run after the update is completed Keep in mind that pre_update_hook.sh runs every time you call update.sh and post_update_hook.sh will only run if the update was successful and the script doesn't have to be re-run. The scripts will be run by bash; an interpreter line (e.g. #!/bin/bash ) as well as an execute permission flag (\"+x\") are not required.","title":"Run scripts before and after updates"},{"location":"u_e-watchdog-thresholds/","text":"Watchdog uses default values for all thresholds defined in docker-compose.yml . The default values will work for most setups. 
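To see which threshold values your running watchdog actually uses, you can list its environment; a small sketch, assuming the watchdog service is named watchdog-mailcow as in a standard mailcow setup:
# print every *_THRESHOLD variable the watchdog container was started with
docker-compose exec watchdog-mailcow env | grep THRESHOLD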
Example: - NGINX_THRESHOLD=${NGINX_THRESHOLD:-5} - UNBOUND_THRESHOLD=${UNBOUND_THRESHOLD:-5} - REDIS_THRESHOLD=${REDIS_THRESHOLD:-5} - MYSQL_THRESHOLD=${MYSQL_THRESHOLD:-5} - MYSQL_REPLICATION_THRESHOLD=${MYSQL_REPLICATION_THRESHOLD:-1} - SOGO_THRESHOLD=${SOGO_THRESHOLD:-3} - POSTFIX_THRESHOLD=${POSTFIX_THRESHOLD:-8} - CLAMD_THRESHOLD=${CLAMD_THRESHOLD:-15} - DOVECOT_THRESHOLD=${DOVECOT_THRESHOLD:-12} - DOVECOT_REPL_THRESHOLD=${DOVECOT_REPL_THRESHOLD:-20} - PHPFPM_THRESHOLD=${PHPFPM_THRESHOLD:-5} - RATELIMIT_THRESHOLD=${RATELIMIT_THRESHOLD:-1} - FAIL2BAN_THRESHOLD=${FAIL2BAN_THRESHOLD:-1} - ACME_THRESHOLD=${ACME_THRESHOLD:-1} - RSPAMD_THRESHOLD=${RSPAMD_THRESHOLD:-5} - OLEFY_THRESHOLD=${OLEFY_THRESHOLD:-5} - MAILQ_THRESHOLD=${MAILQ_THRESHOLD:-20} - MAILQ_CRIT=${MAILQ_CRIT:-30} To adjust them, just add the necessary threshold variables (e.g. MAILQ_THRESHOLD=10 ) to mailcow.conf and run docker-compose up -d . Thresholds descriptions \u00b6 NGINX_THRESHOLD \u00b6 Notifies administrators if watchdog cannot establish a connection to Nginx on port 8081 and it will restart the container automatically when issues are found and the threshold has been reached. UNBOUND_THRESHOLD \u00b6 Notifies administrators if Unbound cannot resolve/validate external domains/DNSSEC and it will restart the container automatically when issues are found and the threshold has been reached. REDIS_THRESHOLD \u00b6 Notifies administrators if watchdog cannot establish a connection to Redis on port 6379 and it will restart the container automatically when issues are found and the threshold has been reached. MYSQL_THRESHOLD \u00b6 Notifies administrators if watchdog cannot establish a connection to MySQL or cannot query a table and it will restart the container automatically when issues are found and the threshold has been reached. MYSQL_REPLICATION_THRESHOLD \u00b6 Notifies administrators if the MySQL replication fails. SOGO_THRESHOLD \u00b6 Notifies administrators if watchdog cannot establish a connection to SOGo on port 20000 and it will restart the container automatically when issues are found and the threshold has been reached. POSTFIX_THRESHOLD \u00b6 Notifies administrators if watchdog cannot send a test mail via port 589 and it will restart the container automatically when issues are found and the threshold has been reached. CLAMD_THRESHOLD \u00b6 Notifies administrators if watchdog cannot establish a connection to Clamd and it will restart the container automatically when issues are found and the threshold has been reached. DOVECOT_THRESHOLD \u00b6 Notifies administrators if watchdog fails various tests with the Dovecot container and it will restart the container automatically when issues are found and the threshold has been reached. DOVECOT_REPL_THRESHOLD \u00b6 Notifies administrators if the Dovecot replication fails. PHPFPM_THRESHOLD \u00b6 Notifies administrators if watchdog cannot establish a connection to PHP-FPM on port 9001/9002 and it will restart the container automatically when issues are found and the threshold has been reached. RATELIMIT_THRESHOLD \u00b6 Notifies administrators if a ratelimit got hit. FAIL2BAN_THRESHOLD \u00b6 Notifies administrators if fail2ban banned an IP. ACME_THRESHOLD \u00b6 Notifies administrators if something is wrong with the acme-mailcow container. You may check its logs. 
RSPAMD_THRESHOLD \u00b6 Notifies administrators if watchdog fails with various tests with Rspamd container and it will restart the container automatically when issues were found and the threshold has been reached. OLEFY_THRESHOLD \u00b6 Notifies administrators if watchdog can not establish a connection to olefy on port 10005 and it will restart the container automatically when issues were found and the threshold has been reached. MAILQ_CRIT and MAILQ_THRESHOLD \u00b6 Notifies administrators if number of emails in the postfix queue is greater then MAILQ_CRIT for period of MAILQ_THRESHOLD * (60\u00b130) seconds.","title":"Thresholds"},{"location":"u_e-watchdog-thresholds/#thresholds-descriptions","text":"","title":"Thresholds descriptions"},{"location":"u_e-watchdog-thresholds/#nginx_threshold","text":"Notifies administrators if watchdog can not establish a connection to Nginx on port 8081 and it will restart the container automatically when issues were found and the threshold has been reached.","title":"NGINX_THRESHOLD"},{"location":"u_e-watchdog-thresholds/#unbound_threshold","text":"Notifies administrators if Unbound can not resolve/valide external domains/DNSSEC and it will restart the container automatically when issues were found and the threshold has been reached.","title":"UNBOUND_THRESHOLD"},{"location":"u_e-watchdog-thresholds/#redis_threshold","text":"Notifies administrators if watchdog can not establish a connection to Redis on port 6379 and it will restart the container automatically when issues were found and the threshold has been reached.","title":"REDIS_THRESHOLD"},{"location":"u_e-watchdog-thresholds/#mysql_threshold","text":"Notifies administrators if watchdog can not establish a connection to MySQL or can not query a table and it will restart the container automatically when issues were found and the threshold has been reached.","title":"MYSQL_THRESHOLD"},{"location":"u_e-watchdog-thresholds/#mysql_replication_threshold","text":"Notifies administrators if the MySQL replication fails.","title":"MYSQL_REPLICATION_THRESHOLD"},{"location":"u_e-watchdog-thresholds/#sogo_threshold","text":"Notifies administrators if watchdog can not establish a connection to SOGo on port 20000 and it will restart the container automatically when issues were found and the threshold has been reached.","title":"SOGO_THRESHOLD"},{"location":"u_e-watchdog-thresholds/#postfix_threshold","text":"Notifies administrators if watchdog can not sent a test mail via port 589 and it will restart the container automatically when issues were found and the threshold has been reached.","title":"POSTFIX_THRESHOLD"},{"location":"u_e-watchdog-thresholds/#clamd_threshold","text":"Notifies administrators if watchdog can not establish a connection to Clamd and it will restart the container automatically when issues were found and the threshold has been reached.","title":"CLAMD_THRESHOLD"},{"location":"u_e-watchdog-thresholds/#dovecot_threshold","text":"Notifies administrators if watchdog fails with various tests with Dovecot container and it will restart the container automatically when issues were found and the threshold has been reached.","title":"DOVECOT_THRESHOLD"},{"location":"u_e-watchdog-thresholds/#dovecot_repl_threshold","text":"Notifies administrators if the Dovecot replication fails.","title":"DOVECOT_REPL_THRESHOLD"},{"location":"u_e-watchdog-thresholds/#phpfpm_threshold","text":"Notifies administrators if watchdog can not establish a connection to PHP-FPM on port 9001/9002 and it will restart the container 
automatically when issues are found and the threshold has been reached.","title":"PHPFPM_THRESHOLD"},{"location":"u_e-watchdog-thresholds/#ratelimit_threshold","text":"Notifies administrators if a ratelimit got hit.","title":"RATELIMIT_THRESHOLD"},{"location":"u_e-watchdog-thresholds/#fail2ban_threshold","text":"Notifies administrators if fail2ban banned an IP.","title":"FAIL2BAN_THRESHOLD"},{"location":"u_e-watchdog-thresholds/#acme_threshold","text":"Notifies administrators if something is wrong with the acme-mailcow container. You may check its logs.","title":"ACME_THRESHOLD"},{"location":"u_e-watchdog-thresholds/#rspamd_threshold","text":"Notifies administrators if watchdog fails various tests with the Rspamd container and it will restart the container automatically when issues are found and the threshold has been reached.","title":"RSPAMD_THRESHOLD"},{"location":"u_e-watchdog-thresholds/#olefy_threshold","text":"Notifies administrators if watchdog cannot establish a connection to olefy on port 10005 and it will restart the container automatically when issues are found and the threshold has been reached.","title":"OLEFY_THRESHOLD"},{"location":"u_e-watchdog-thresholds/#mailq_crit-and-mailq_threshold","text":"Notifies administrators if the number of emails in the Postfix queue is greater than MAILQ_CRIT for a period of MAILQ_THRESHOLD * (60\u00b130) seconds.","title":"MAILQ_CRIT and MAILQ_THRESHOLD"},{"location":"u_e-webmail-site/","text":"IMPORTANT : This guide only applies to configurations without SNI enabled. The certificate path needs to be adjusted if SNI is enabled. Something like ssl_certificate /etc/ssl/mail/webmail.example.org/cert.pem; and ssl_certificate_key /etc/ssl/mail/webmail.example.org/key.pem; will do. But : The certificate should be acquired first, and the site config should only be created after the certificate exists. Nginx will fail to start if it cannot find the certificate and key. To create a subdomain webmail.example.org and redirect it to SOGo, you need to create a new Nginx site. Take care to replace \"CHANGE_TO_MAILCOW_HOSTNAME\"! nano data/conf/nginx/webmail.conf server { ssl_certificate /etc/ssl/mail/cert.pem; ssl_certificate_key /etc/ssl/mail/key.pem; index index.php index.html; client_max_body_size 0; root /web; include /etc/nginx/conf.d/listen_plain.active; include /etc/nginx/conf.d/listen_ssl.active; server_name webmail.example.org; server_tokens off; location ^~ /.well-known/acme-challenge/ { allow all; default_type \"text/plain\"; } location / { return 301 https://CHANGE_TO_MAILCOW_HOSTNAME/SOGo; } } Save and restart Nginx: docker-compose restart nginx-mailcow . Now open mailcow.conf and find ADDITIONAL_SAN . Add webmail.example.org to this array, don't use quotes! ADDITIONAL_SAN=webmail.example.org Run docker-compose up -d . See \"acme-mailcow\" and \"nginx-mailcow\" logs if anything fails.","title":"Create subdomain webmail.example.org"},{"location":"u_e-why_unbound/","text":"For DNS blacklist lookups and DNSSEC. Most systems use either a public or a local caching DNS resolver. That's a very bad idea when it comes to filtering spam using DNS-based black hole lists (DNSBL) or similar techniques. Most if not all providers apply a rate limit based on the DNS resolver that is used to query their service. Using a public resolver like Google's 8.8.8.8, OpenDNS or any other shared DNS resolver like your ISP's will hit that limit very soon.","title":"Why unbound?"},{"location":"client/client-android/","text":"Open the Email app. 
If this is your first email account, tap Add Account ; if not, tap More and Settings and then Add account . Select Microsoft Exchange ActiveSync . Enter your email address ( ) and password. Tap Sign in .","title":"Android"},{"location":"client/client-apple/","text":"Method 1 via Mobileconfig \u00b6 Email, contacts and calendars can be configured automatically on Apple devices by installing a profile. To download a profile you must login to the mailcow UI first. Method 1.1: IMAP, SMTP and Cal/CardDAV \u00b6 This method configures IMAP, CardDAV and CalDAV. Download and open the file from https://${MAILCOW_HOSTNAME}/mobileconfig.php mailcow.mobileconfig . Enter the unlock code (iPhone) or computer password (Mac). Enter your email password three times when prompted. Method 1.2: IMAP, SMTP (no DAV) \u00b6 This method configures IMAP and SMTP only. Download and open the file from https://${MAILCOW_HOSTNAME}/mobileconfig.php?only_email mailcow.mobileconfig . Enter the unlock code (iPhone) or computer password (Mac). Enter your email password when prompted. Method 2 (Exchange ActiveSync emulation) \u00b6 On iOS, Exchange ActiveSync is also supported as an alternative to the procedure above. It has the advantage of supporting push email (i.e. you are immediately notified of incoming messages), but has some limitations, e.g. it does not support more than three email addresses per contact in your address book. Follow the steps below if you decide to use Exchange instead. Open the Settings app, tap Mail , tap Accounts , tap Add Acccount , select Exchange . Enter your email address ( ) and tap Next . Enter your password, tap Next again. Finally, tap Save .","title":"Apple macOS / iOS"},{"location":"client/client-apple/#method-1-via-mobileconfig","text":"Email, contacts and calendars can be configured automatically on Apple devices by installing a profile. To download a profile you must login to the mailcow UI first.","title":"Method 1 via Mobileconfig"},{"location":"client/client-apple/#method-11-imap-smtp-and-calcarddav","text":"This method configures IMAP, CardDAV and CalDAV. Download and open the file from https://${MAILCOW_HOSTNAME}/mobileconfig.php mailcow.mobileconfig . Enter the unlock code (iPhone) or computer password (Mac). Enter your email password three times when prompted.","title":"Method 1.1: IMAP, SMTP and Cal/CardDAV"},{"location":"client/client-apple/#method-12-imap-smtp-no-dav","text":"This method configures IMAP and SMTP only. Download and open the file from https://${MAILCOW_HOSTNAME}/mobileconfig.php?only_email mailcow.mobileconfig . Enter the unlock code (iPhone) or computer password (Mac). Enter your email password when prompted.","title":"Method 1.2: IMAP, SMTP (no DAV)"},{"location":"client/client-apple/#method-2-exchange-activesync-emulation","text":"On iOS, Exchange ActiveSync is also supported as an alternative to the procedure above. It has the advantage of supporting push email (i.e. you are immediately notified of incoming messages), but has some limitations, e.g. it does not support more than three email addresses per contact in your address book. Follow the steps below if you decide to use Exchange instead. Open the Settings app, tap Mail , tap Accounts , tap Add Acccount , select Exchange . Enter your email address ( ) and tap Next . Enter your password, tap Next again. Finally, tap Save .","title":"Method 2 (Exchange ActiveSync emulation)"},{"location":"client/client-emclient/","text":"Launch eM Client. 
If this is the first time you launched eM Client, it asks you to set up your account. Proceed to step 4. Go to Menu at the top, select Tools and Accounts . Enter your email address and click Start Now . Enter your password and click Continue . Enter your name and click Next . Click Finish .","title":"eM Client"},{"location":"client/client-kontact/","text":"Launch Kontact. If this is the first time you launched Kontact or KMail, it asks you to set up your account. Proceed to step 4. Go to Mail in the sidebar. Go to the Tools menu and select Account Wizard . Enter your name, email address and your password. Click Next . Click Create Account . If prompted, re-enter your password and click OK . Close the window by clicking Finish . Go to Calendar in the sidebar. Go to the Settings menu and select Configure KOrganizer . Go to the Calendars tab and click the Add button. Choose DAV groupware resource and click OK . Enter your email address and your password. Click Next . Select ScalableOGo from the dropdown menu and click Next . Enter your mailcow hostname into the Host field and click Next . Click Test Connection and then Finish . Finally, click OK twice. Once you have set up Kontact, you can also use KMail, KOrganizer and KAddressBook individually.","title":"KDE Kontact"},{"location":"client/client-manual/","text":"These instructions are valid for unchanged port bindings only! Email \u00b6 Service Encryption Host Port IMAP STARTTLS mailcow hostname 143 IMAPS SSL mailcow hostname 993 POP3 STARTTLS mailcow hostname 110 POP3S SSL mailcow hostname 995 SMTP STARTTLS mailcow hostname 587 SMTPS SSL mailcow hostname 465 Please use \"plain\" as the authentication mechanism. Contrary to what the name suggests, no passwords will be transferred in plain text, as no authentication is allowed to take place without TLS. Contacts and calendars \u00b6 SOGo's default calendar (CalDAV) and contacts (CardDAV) URLs: CalDAV - https://mail.example.com/SOGo/dav/user@example.com/Calendar/personal/ CardDAV - https://mail.example.com/SOGo/dav/user@example.com/Contacts/personal/ Some applications may require you to use https://mail.example.com/SOGo/dav/ or the full path to your calendar, which can be found and copied from within SOGo.","title":"Manual configuration"},{"location":"client/client-manual/#email","text":"Service Encryption Host Port IMAP STARTTLS mailcow hostname 143 IMAPS SSL mailcow hostname 993 POP3 STARTTLS mailcow hostname 110 POP3S SSL mailcow hostname 995 SMTP STARTTLS mailcow hostname 587 SMTPS SSL mailcow hostname 465 Please use \"plain\" as the authentication mechanism. Contrary to what the name suggests, no passwords will be transferred in plain text, as no authentication is allowed to take place without TLS.","title":"Email"},{"location":"client/client-manual/#contacts-and-calendars","text":"SOGo's default calendar (CalDAV) and contacts (CardDAV) URLs: CalDAV - https://mail.example.com/SOGo/dav/user@example.com/Calendar/personal/ CardDAV - https://mail.example.com/SOGo/dav/user@example.com/Contacts/personal/ Some applications may require you to use https://mail.example.com/SOGo/dav/ or the full path to your calendar, which can be found and copied from within SOGo.","title":"Contacts and calendars"},{"location":"client/client-outlook/","text":"Outlook 2016 or higher from Office 365 on Windows \u00b6 This is only applicable if your server administrator has not disabled EAS for Outlook. If it is disabled, please follow the guide for Outlook 2007 instead. Outlook 2016 has an issue with autodiscover . 
Only Outlook from Office 365 is affected. If you installed Outlook from another source, please follow the guide for Outlook 2013 or higher. For EAS you must use the old assistant by launching C:\\Program Files (x86)\\Microsoft Office\\root\\Office16\\OLCFG.EXE . If this application opens, you can go to step 4 of the guide for Outlook 2013 below. If it does not open, you can completely disable the new account creation wizard and follow the guide for Outlook 2013 below. Outlook 2013 or higher on Windows \u00b6 This is only applicable if your server administrator has not disabled EAS for Outlook. If it is disabled, please follow the guide for Outlook 2007 instead. Launch Outlook. If this is the first time you launched Outlook, it asks you to set up your account. Proceed to step 4. Go to the File menu and click Add Account . Enter your name ( ) , email address ( ) and your password. Click Next . When prompted, enter your password again, check Remember my credentials and click OK . Click the Allow button. Click Finish . Outlook 2007 or 2010 on Windows \u00b6 Outlook 2007 or higher on Windows \u00b6 Download and install Outlook CalDav Synchronizer . Launch Outlook. If this is the first time you launched Outlook, it asks you to set up your account. Proceed to step 5. Go to the File menu and click Add Account . Enter your name ( ) , email address ( ) and your password. Click Next . Click Finish . Go to the CalDav Synchronizer ribbon and click Synchronization Profiles . Click the second button at top ( Add multiple profiles ), select Sogo , click Ok . Click the Get IMAP/POP3 account settings button. Click Discover resources and assign to Outlook folders . In the Select Resource window that pops up, select your main calendar (usually Personal Calendar ), click the ... button, assign it to Calendar , and click OK . Go to the Address Books and Tasks tabs and repeat repeat the process accordingly. Do not assign multiple calendars, address books or task lists! Close all windows with the OK buttons. Outlook 2011 or higher on macOS \u00b6 The Mac version of Outlook does not synchronize calendars and contacts and therefore is not supported.","title":"Microsoft Outlook"},{"location":"client/client-outlook/#outlook-2016-or-higher-from-office-365-on-windows","text":"This is only applicable if your server administrator has not disabled EAS for Outlook. If it is disabled, please follow the guide for Outlook 2007 instead. Outlook 2016 has an issue with autodiscover . Only Outlook from Office 365 is affected. If you installed Outlook from another source, please follow the guide for Outlook 2013 or higher. For EAS you must use the old assistant by launching C:\\Program Files (x86)\\Microsoft Office\\root\\Office16\\OLCFG.EXE . If this application opens, you can go to step 4 of the guide for Outlook 2013 below. If it does not open, you can completely disable the new account creation wizard and follow the guide for Outlook 2013 below.","title":"Outlook 2016 or higher from Office 365 on Windows"},{"location":"client/client-outlook/#outlook-2013-or-higher-on-windows","text":"This is only applicable if your server administrator has not disabled EAS for Outlook. If it is disabled, please follow the guide for Outlook 2007 instead. Launch Outlook. If this is the first time you launched Outlook, it asks you to set up your account. Proceed to step 4. Go to the File menu and click Add Account . Enter your name ( ) , email address ( ) and your password. Click Next . 
When prompted, enter your password again, check Remember my credentials and click OK . Click the Allow button. Click Finish .","title":"Outlook 2013 or higher on Windows"},{"location":"client/client-outlook/#outlook-2007-or-2010-on-windows","text":"","title":"Outlook 2007 or 2010 on Windows"},{"location":"client/client-outlook/#outlook-2007-or-higher-on-windows","text":"Download and install Outlook CalDav Synchronizer . Launch Outlook. If this is the first time you launched Outlook, it asks you to set up your account. Proceed to step 5. Go to the File menu and click Add Account . Enter your name ( ) , email address ( ) and your password. Click Next . Click Finish . Go to the CalDav Synchronizer ribbon and click Synchronization Profiles . Click the second button at top ( Add multiple profiles ), select Sogo , click Ok . Click the Get IMAP/POP3 account settings button. Click Discover resources and assign to Outlook folders . In the Select Resource window that pops up, select your main calendar (usually Personal Calendar ), click the ... button, assign it to Calendar , and click OK . Go to the Address Books and Tasks tabs and repeat repeat the process accordingly. Do not assign multiple calendars, address books or task lists! Close all windows with the OK buttons.","title":"Outlook 2007 or higher on Windows"},{"location":"client/client-outlook/#outlook-2011-or-higher-on-macos","text":"The Mac version of Outlook does not synchronize calendars and contacts and therefore is not supported.","title":"Outlook 2011 or higher on macOS"},{"location":"client/client-thunderbird/","text":"Launch Thunderbird. If this is the first time you launched Thunderbird, it asks you whether you would like a new email address. Click Skip this and use my existing email and proceed to step 4. Go to the File menu and select New , Existing Mail Account... . Enter your name ( ) , email address ( ) and your password. Make sure the Remember password checkbox is selected and click Continue . Once the configuration has been automatically detected, make sure IMAP is selected and click Done . To use your contacts from the server, click on the arrow next to \"Address Books\" and click the Connect button on each address book you would like to use. To use your calendars from the server, click on the arrow next to \"Calendars\" and click the Connect button on each calendar you would like to use. Click Finish to close the Account Setup window.","title":"Mozilla Thunderbird"},{"location":"client/client-windows/","text":"Windows 8 and higher support email, contacts and calendar via Exchange ActiveSync. Open the Mail app. If you have not previously used Mail, you can click Add Account in the main window. Proceed to step 4. Click Accounts in the sidebar on the left, then click Add Account on the far right. Select Exchange . Enter your email address ( ) and click Next . Enter your password and click Log in . Once you have set up the Mail app, you can also use the People and Calendar apps.","title":"Windows Mail"},{"location":"client/client-windowsphone/","text":"Open the Settings app. Select email + accounts and tap add an account . Tap Exchange . Enter your email address ( ) and your password. Tap Sign in . Tap done .","title":"Windows Phone"}]}