{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"\ud83d\udc2e + \ud83d\udc0b = \ud83d\udc95","text":""},{"location":"#help-mailcow","title":"Help mailcow","text":"
Please consider a support contract for a small monthly fee at Servercow EN to support further development. We support you while you support us. :)
If you are super awesome and would like to support without a contract, you can get a SAL license that confirms your awesomeness (a flexible one-time payment) at Servercow EN.
"},{"location":"#get-support","title":"Get support","text":"There are two ways to achieve support for your mailcow installation.
"},{"location":"#commercial-support","title":"Commercial support","text":"For professional and prioritized commercial support you can sign a basic support subscription at Servercow EN. For custom inquiries or questions please contact us at info@servercow.de instead.
Furthermore, we also provide a fully featured and managed mailcow here. This way we take care of the technical magic underneath and you can enjoy your whole mail experience in a hassle-free way.
"},{"location":"#community-support-and-chat","title":"Community support and chat","text":"The other alternative is our free community-support on our various channels below. Please notice, that this support is driven by our awesome community around mailcow. This kind of support is best-effort, voluntary and there is no guarantee for anything.
Our mailcow community @ community.mailcow.email
Telegram (Support) @ t.me/mailcow.
Telegram (Off-Topic) @ t.me/mailcowOfftopic.
Twitter @mailcow_email
Telegram desktop clients are available for multiple platforms. You can search the group's history for keywords.
For bug tracking, feature requests and code contributions only:
Since September 2022 we have been providing two separate demo instances:
Use the following credentials to login on both demos:
Success
The demo instances get the latest updates directly after releases from GitHub. Fully automatic, without any downtime!
"},{"location":"#overview","title":"Overview","text":"The integrated mailcow UI allows administrative work on your mail server instance as well as separated domain administrator and mailbox user access:
mailcow: dockerized comes with multiple containers linked in one bridged network. Each container represents a single application.
Warning
Mails are stored compressed and encrypted. The key pair can be found in crypt-vol-1. Be sure to back up this volume!
Docker volumes to keep dynamic data - take care of them!
So you deleted a mailbox and have no backups, eh?
If you noticed your mistake within a few hours, you can probably recover the user's data.
"},{"location":"backup_restore/b_n_r-accidental_deletion/#sogo","title":"SOGo","text":"We automatically create daily backups (24h interval starting from running up -d) in /var/lib/docker/volumes/mailcowdockerized_sogo-userdata-backup-vol-1/_data/
.
Make sure the user you want to restore exists in your mailcow. Re-create them if they are missing.
Copy the file named after the user you want to restore to __MAILCOW_DIRECTORY__/data/conf/sogo
.
1. Copy the backup: cp /var/lib/docker/volumes/mailcowdockerized_sogo-userdata-backup-vol-1/_data/restoreme@example.org __MAILCOW_DIRECTORY__/data/conf/sogo
2. Run docker compose exec -u sogo sogo-mailcow sogo-tool restore -F ALL /etc/sogo restoreme@example.org
Run sogo-tool
without parameters to check for possible restore options.
3. Delete the copied backup by running rm __MAILCOW_DIRECTORY__/data/conf/sogo
4. Restart SOGo and Memcached: docker compose restart sogo-mailcow memcached-mailcow
In case of an accidental deletion of a mailbox, you will be able to recover it for (by default) 5 days. This depends on the MAILDIR_GC_TIME
parameter in mailcow.conf
.
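The value lives in mailcow.conf and is interpreted in minutes; a sketch of the default (verify it against your own file, as the shipped default may differ between releases):
# mailcow.conf: how long deleted maildirs stay in _garbage before final removal (minutes; 7200 min = 5 days)\nMAILDIR_GC_TIME=7200\n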
A deleted mailbox is copied in its encrypted form to /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data/_garbage
.
The folder inside _garbage
follows the structure [timestamp]_[domain_sanitized][user_sanitized]
, for example 1629109708_exampleorgtest
in case of test@example.org deleted on 1629109708.
To restore, make sure you are actually restoring to the same mailcow the mailbox was deleted from, or that you use the same encryption keys in crypt-vol-1
.
Make sure the user you want to restore exists in your mailcow. Re-create them if they are missing.
Copy the folders from /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data/_garbage/[timestamp]_[domain_sanitized][user_sanitized]
back to /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data/[domain]/[user]
and resync the folder and recalc the quota:
docker compose exec dovecot-mailcow doveadm force-resync -u restoreme@example.net '*'\ndocker compose exec dovecot-mailcow doveadm quota recalc -u restoreme@example.net\n
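For the copy step mentioned above, a minimal sketch using the example mailbox test@example.org (timestamp, domain and user are placeholders - adjust them to your case; cp -a preserves ownership and permissions):
# copy the deleted maildir contents back into the re-created mailbox\ncp -a /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data/_garbage/1629109708_exampleorgtest/. /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data/example.org/test/\n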
"},{"location":"backup_restore/b_n_r-backup/","title":"Backup","text":""},{"location":"backup_restore/b_n_r-backup/#backup","title":"Backup","text":""},{"location":"backup_restore/b_n_r-backup/#manual","title":"Manual","text":"You can use the provided script helper-scripts/backup_and_restore.sh
to backup mailcow automatically.
Please do not copy this script to another location.
To run a backup, pass \"backup\" as the first parameter and one or more components to back up as the following parameters. You can also use \"all\" as the second parameter to back up all components. Append --delete-days n
to delete backups older than n days.
# Syntax:\n# ./helper-scripts/backup_and_restore.sh backup (vmail|crypt|redis|rspamd|postfix|mysql|all|--delete-days)\n\n# Backup all, delete backups older than 3 days\n./helper-scripts/backup_and_restore.sh backup all --delete-days 3\n\n# Backup vmail, crypt and mysql data, delete backups older than 30 days\n./helper-scripts/backup_and_restore.sh backup vmail crypt mysql --delete-days 30\n\n# Backup vmail\n./helper-scripts/backup_and_restore.sh backup vmail\n
"},{"location":"backup_restore/b_n_r-backup/#variables-for-backuprestore-script","title":"Variables for backup/restore script","text":""},{"location":"backup_restore/b_n_r-backup/#multithreading","title":"Multithreading","text":"With the 2022-10 update it is possible to run the script with multithreading support. This can be used for backups as well as for restores.
To start the backup/restore with multithreading you have to add THREADS
as an environment variable in front of the command to execute the script.
THREADS=14 /opt/mailcow-dockerized/helper-scripts/backup_and_restore.sh backup all\n
The number after the =
character indicates the number of threads. Keep it at your CPU core count minus 2 to leave enough CPU power for mailcow itself."},{"location":"backup_restore/b_n_r-backup/#backup-path","title":"Backup path","text":"The script will ask you for a backup location. Inside this location it will create folders in the format \"mailcow_DATE\". Do not rename those folders, or the restore process will break.
To run a backup unattended, define MAILCOW_BACKUP_LOCATION as environment variable before starting the script:
MAILCOW_BACKUP_LOCATION=/opt/backup /opt/mailcow-dockerized/helper-scripts/backup_and_restore.sh backup all\n
Tip
Both variables mentioned above can also be combined! Ex:
MAILCOW_BACKUP_LOCATION=/opt/backup THREADS=14 /opt/mailcow-dockerized/helper-scripts/backup_and_restore.sh backup all\n
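If you prefer not to hard-code the thread count, a small sketch that follows the 'core count minus 2' recommendation (assumes the nproc utility from GNU coreutils is available on the host):
THREADS=$(( $(nproc) - 2 )) MAILCOW_BACKUP_LOCATION=/opt/backup /opt/mailcow-dockerized/helper-scripts/backup_and_restore.sh backup all\n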
"},{"location":"backup_restore/b_n_r-backup/#cronjob","title":"Cronjob","text":"You can run the backup script regularly via cronjob. Make sure BACKUP_LOCATION
exists:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\n5 4 * * * cd /opt/mailcow-dockerized/; MAILCOW_BACKUP_LOCATION=/mnt/mailcow_backups /opt/mailcow-dockerized/helper-scripts/backup_and_restore.sh backup mysql crypt redis --delete-days 3\n
By default, cron sends the full result of each backup operation by email. If you want cron to mail only on error (non-zero exit code), you may want to use the following snippet. Paths need to be modified according to your setup (this script is a user contribution).
This following script may be placed in /etc/cron.daily/mailcow-backup
- do not forget to mark it as executable via chmod +x
:
#!/bin/sh\n\n# Backup mailcow data\n# https://mailcow.github.io/mailcow-dockerized-docs/backup_restore/b_n_r-backup/\n\nset -e\n\nOUT=\"$(mktemp)\"\nexport MAILCOW_BACKUP_LOCATION=\"/opt/backup\"\nSCRIPT=\"/opt/mailcow-dockerized/helper-scripts/backup_and_restore.sh\"\nPARAMETERS=\"backup all\"\nOPTIONS=\"--delete-days 30\"\n\n# run command, capturing stdout and stderr so cron only mails on failure\nset +e\n\"${SCRIPT}\" ${PARAMETERS} ${OPTIONS} > \"$OUT\" 2>&1\nRESULT=$?\n\nif [ $RESULT -ne 0 ]\n then\n echo \"${SCRIPT} ${PARAMETERS} ${OPTIONS} encountered an error:\"\n echo \"RESULT=$RESULT\"\n echo \"STDOUT / STDERR:\"\n cat \"$OUT\"\nfi\n
"},{"location":"backup_restore/b_n_r-backup/#backup-strategy-with-rsync-and-mailcow-backup-script","title":"Backup strategy with rsync and mailcow backup script","text":"Create the destination directory for mailcows helper script:
mkdir -p /external_share/backups/backup_script\n
Create cronjobs:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\n25 1 * * * rsync -aH --delete /opt/mailcow-dockerized /external_share/backups/mailcow-dockerized\n40 2 * * * rsync -aH --delete /var/lib/docker/volumes /external_share/backups/var_lib_docker_volumes\n5 4 * * * cd /opt/mailcow-dockerized/; BACKUP_LOCATION=/external_share/backups/backup_script /opt/mailcow-dockerized/helper-scripts/backup_and_restore.sh backup mysql crypt redis --delete-days 3\n# If you want to, use the acl util to backup permissions of some/all folders/files: getfacl -Rn /path\n
On the destination (in this case /external_share/backups
) you may want to have snapshot capabilities (ZFS, Btrfs etc.). Snapshot daily and keep for n days for a consistent backup. Do not rsync to a Samba share, you need to keep the correct permissions!
To restore you'd simply need to run rsync the other way round and restart Docker to re-read the volumes. Run docker compose pull
and docker compose up -d
.
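A sketch of that reverse direction, assuming the backup destinations from the cron jobs above (note the nested directory names the backup jobs create because the sources were given without trailing slashes):
systemctl stop docker.service\nrsync -aH --delete /external_share/backups/mailcow-dockerized/mailcow-dockerized/ /opt/mailcow-dockerized/\nrsync -aH --delete /external_share/backups/var_lib_docker_volumes/volumes/ /var/lib/docker/volumes/\nsystemctl start docker.service\n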
If you are lucky, Redis and MariaDB will fix inconsistent databases automatically. In case of a corrupted database you'd need to use the helper script to restore the inconsistent elements. If a restore fails, try to extract the backups and copy the files back manually. Keep the file permissions!
"},{"location":"backup_restore/b_n_r-backup_restore-maildir/","title":"Maildir","text":""},{"location":"backup_restore/b_n_r-backup_restore-maildir/#backup","title":"Backup","text":"This line backups the vmail directory to a file backup_vmail.tar.gz in the mailcow root directory:
cd /path/to/mailcow-dockerized\ndocker run --rm -i -v $(docker inspect --format '{{ range .Mounts }}{{ if eq .Destination \"/var/vmail\" }}{{ .Name }}{{ end }}{{ end }}' $(docker compose ps -q dovecot-mailcow)):/vmail -v ${PWD}:/backup debian:stretch-slim tar cvfz /backup/backup_vmail.tar.gz /vmail\n
You can change the path by adjusting ${PWD} (which equals the current working directory) to any path you have write-access to. Set the filename backup_vmail.tar.gz
to any custom name, but leave the path as it is. Example: [...] tar cvfz /backup/my_own_filename_.tar.gz
cd /path/to/mailcow-dockerized\ndocker run --rm -it -v $(docker inspect --format '{{ range .Mounts }}{{ if eq .Destination \"/var/vmail\" }}{{ .Name }}{{ end }}{{ end }}' $(docker compose ps -q dovecot-mailcow)):/vmail -v ${PWD}:/backup debian:stretch-slim tar xvfz /backup/backup_vmail.tar.gz\n
"},{"location":"backup_restore/b_n_r-backup_restore-mysql/","title":"MySQL (mysqldump)","text":""},{"location":"backup_restore/b_n_r-backup_restore-mysql/#backup","title":"Backup","text":"cd /path/to/mailcow-dockerized\nsource mailcow.conf\nDATE=$(date +\"%Y%m%d_%H%M%S\")\ndocker compose exec -T mysql-mailcow mysqldump --default-character-set=utf8mb4 -u${DBUSER} -p${DBPASS} ${DBNAME} > backup_${DBNAME}_${DATE}.sql\n
"},{"location":"backup_restore/b_n_r-backup_restore-mysql/#restore","title":"Restore","text":"Warning
You should redirect the SQL dump without docker compose
to prevent parsing errors.
cd /path/to/mailcow-dockerized\nsource mailcow.conf\ndocker exec -i $(docker compose ps -q mysql-mailcow) mysql -u${DBUSER} -p${DBPASS} ${DBNAME} < backup_file.sql\n
"},{"location":"backup_restore/b_n_r-coldstandby/","title":"Cold-standby backup","text":"mailcow offers an easy way to create a consistent copy of itself to be rsync'ed to a remote location without downtime.
This may also be used to transfer your mailcow to a new server.
"},{"location":"backup_restore/b_n_r-coldstandby/#you-should-know","title":"You should know","text":"The provided script will work on default installations.
It may break when you use unsupported volume overrides. We don't support that and we will not include hacks to support that. Please run and maintain a fork if you plan to keep your changes.
The script will use the same paths as your default mailcow installation. That is the mailcow base directory - for most users /opt/mailcow-dockerized
- as well as the mountpoints.
To find the paths of your source volumes we use docker inspect
and read the destination directory of every volume related to your mailcow compose project. This means we will also transfer volumes you may have added in an override file. Local bind mounts may or may not work.
The script uses rsync with the --delete
flag. The destination will be an exact copy of the source.
mariabackup
is used to create a consistent copy of the SQL data directory.
After rsync'ing the data we will run docker compose pull
and remove old image tags from the destination.
Your source will not be changed at any time.
You may want to make sure to use the same /etc/docker/daemon.json
on the remote target.
You should not run disk snapshots (e.g. via ZFS, LVM etc.) on the target at the very same time as this script is run.
Versioning is not part of this script, we rely on the destination (snapshots or backups). You may also want to use any other tool for that.
"},{"location":"backup_restore/b_n_r-coldstandby/#prepare","title":"Prepare","text":"You will need an SSH-enabled destination and a keyfile to connect to said destination. The key should not be protected by a password for the script to work unattended.
In your mailcow base directory, e.g. /opt/mailcow-dockerized
you will find a file create_cold_standby.sh
.
Edit this file and change the exported variables:
export REMOTE_SSH_KEY=/path/to/keyfile\nexport REMOTE_SSH_PORT=22\nexport REMOTE_SSH_HOST=mailcow-backup.host.name\n
The key must be owned and readable by root only.
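To lock the key file down accordingly, for example (path as used in the variables above):
chown root:root /path/to/keyfile\nchmod 600 /path/to/keyfile\n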
Both the source and destination require rsync
>= v3.1.0. The destination must have Docker and docker compose v2 available.
The script will detect errors automatically and exit.
You may want to test the connection by running ssh mailcow-backup.host.name -p22 -i /path/to/keyfile
.
Run the first backup, this may take a while depending on the connection:
bash /opt/mailcow-dockerized/create_cold_standby.sh\n
That was easy, wasn't it?
Updating your cold-standby is just as easy:
bash /opt/mailcow-dockerized/create_cold_standby.sh\n
It's the same command.
"},{"location":"backup_restore/b_n_r-coldstandby/#automated-backups-with-cron","title":"Automated backups with cron","text":"First make sure that the cron
service is enabled and running:
systemctl enable cron.service && systemctl start cron.service\n
To automate the backups to the cold-standby server you can use a cron job. To edit the cron jobs for the root user run:
crontab -e\n
Add the following lines to synchronize the cold standby server daily at 03:00. In this example errors of the last execution are logged into a file.
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\n\n0 3 * * * bash /opt/mailcow-dockerized/create_cold_standby.sh 2> /var/log/mailcow-coldstandby-sync.log\n
If saved correctly, the cron job should be shown by typing:
crontab -l\n
"},{"location":"backup_restore/b_n_r-restore/","title":"Restore","text":""},{"location":"backup_restore/b_n_r-restore/#restore","title":"Restore","text":"Please do not copy this script to another location.
To run a restore, start mailcow and run the script with \"restore\" as the first parameter.
# Syntax:\n# ./helper-scripts/backup_and_restore.sh restore\n
The script will ask you for a backup location containing the mailcow_DATE folders.
"},{"location":"client/client-android/","title":"Android","text":"Email, contacts and calendars can be configured automatically on Apple devices by installing a profile. To download a profile you must login to the mailcow UI first.
"},{"location":"client/client-apple/#method-11-imap-smtp-and-calcarddav","title":"Method 1.1: IMAP, SMTP and Cal/CardDAV","text":"This method configures IMAP, CardDAV and CalDAV.
This method configures IMAP and SMTP only.
On iOS, Exchange ActiveSync is also supported as an alternative to the procedure above. It has the advantage of supporting push email (i.e. you are immediately notified of incoming messages), but has some limitations, e.g. it does not support more than three email addresses per contact in your address book. Follow the steps below if you decide to use Exchange instead.
Once you have set up Kontact, you can also use KMail, KOrganizer and KAddressBook individually.
"},{"location":"client/client-manual/","title":"Manual configuration","text":"These instructions are valid for unchanged port bindings only!
"},{"location":"client/client-manual/#email","title":"Email","text":"Service Encryption Host Port IMAP STARTTLS mailcow hostname 143 IMAPS SSL mailcow hostname 993 POP3 STARTTLS mailcow hostname 110 POP3S SSL mailcow hostname 995 SMTP STARTTLS mailcow hostname 587 SMTPS SSL mailcow hostname 465Please use the \"plain\" password setting as the authentication mechanism. Contrary to what the name implies, the password will not be transferred to the server in plain text as no authentication is allowed to take place without TLS.
"},{"location":"client/client-manual/#contacts-and-calendars","title":"Contacts and calendars","text":"SOGos default calendar (CalDAV) and contacts (CardDAV) URLs:
CalDAV: https://mail.example.com/SOGo/dav/user@example.com/Calendar/personal/
CardDAV: https://mail.example.com/SOGo/dav/user@example.com/Contacts/personal/
Some applications may require you to use https://mail.example.com/SOGo/dav/ or the full path to your calendar, which can be found and copied from within SOGo.
"},{"location":"client/client-outlook/","title":"Microsoft Outlook","text":""},{"location":"client/client-outlook/#outlook-2016-or-higher-from-office-365-on-windows","title":"Outlook 2016 or higher from Office 365 on Windows","text":"This is only applicable if your server administrator has not disabled EAS for Outlook. If it is disabled, please follow the guide for Outlook 2007 instead.
Outlook 2016 has an issue with autodiscover. Only Outlook from Office 365 is affected. If you installed Outlook from another source, please follow the guide for Outlook 2013 or higher.
For EAS you must use the old assistant by launching C:\\Program Files (x86)\\Microsoft Office\\root\\Office16\\OLCFG.EXE
. If this application opens, you can go to step 4 of the guide for Outlook 2013 below.
If it does not open, you can completely disable the new account creation wizard and follow the guide for Outlook 2013 below.
"},{"location":"client/client-outlook/#outlook-2007-or-2010-on-windows","title":"Outlook 2007 or 2010 on Windows","text":""},{"location":"client/client-outlook/#outlook-2007-or-higher-on-windows-calendercontacts-via-caldav-synchronizer","title":"Outlook 2007 or higher on Windows (Calender/Contacts via CalDav Synchronizer)","text":"This is only applicable if your server administrator has not disabled EAS for Outlook. If it is disabled, please follow the guide for Outlook 2007 instead.
The Mac version of Outlook does not synchronize calendars and contacts and therefore is not supported.
"},{"location":"client/client-thunderbird/","title":"Mozilla Thunderbird","text":"Windows 8 and higher support email, contacts and calendar via Exchange ActiveSync.
Once you have set up the Mail app, you can also use the People and Calendar apps.
"},{"location":"client/client/","title":"Overview","text":"mailcow supports a variety of email clients, both on desktop computers and on smartphones. Below, you can find a number of configuration guides that explain how to configure your mailcow account.
Tip
If you access this page by logging into your mailcow server and clicking the \"Show configuration guides for email clients and smartphones\" link, all of the guides will be personalized with your email address and server name.Success
Since you accessed this page after logging into your mailcow server, all of the guides have been personalized with your email address and server name. To remove mailcow: dockerized with all its volumes, images and containers do:
docker compose down -v --rmi all --remove-orphans\n
Info
-v: Remove named volumes declared in the volumes section of the Compose file and anonymous volumes attached to containers.
--rmi all: Remove all images used by any service.
--rmi local: Remove only images that don't have a custom tag set by the image field.
A plain docker compose down only removes currently active containers and networks defined in the docker-compose.yml
.You need Docker (a version >= 20.10.2
is required) and Docker Compose (a version >= 2.0
is required).
Learn how to install Docker and Docker Compose.
Quick installation for most operating systems:
"},{"location":"i_u_m/i_u_m_install/#docker","title":"Docker","text":"curl -sSL https://get.docker.com/ | CHANNEL=stable sh\n# After the installation process is finished, you may need to enable the service and make sure it is started (e.g. CentOS 7)\nsystemctl enable --now docker\n
"},{"location":"i_u_m/i_u_m_install/#docker-compose","title":"docker compose","text":"Danger
mailcow requires the latest version of Docker Compose v2. If Docker was installed using the script above, the Docker Compose plugin is already installed automatically in a version >= 2.0. If your mailcow installation is older or Docker was installed in a different way, the Compose plugin or the standalone version of Docker Compose must be installed manually.
"},{"location":"i_u_m/i_u_m_install/#installation-via-paketmanager-plugin","title":"Installation via Paketmanager (plugin)","text":"Info
This approach with the package sources is only possible if the Docker repository has been included. This can happen either through the instructions above (see Docker) or through a manual integration.
On Debian/Ubuntu systems:
apt update\napt install docker-compose-plugin\n
On CentOS 7 systems:
yum update\nyum install docker-compose-plugin\n
Danger
The Docker Compose command syntax is docker compose
for the plugin variant of Docker Compose!!!
Info
This is the old, familiar installation method. It installs Docker Compose as a standalone program and does not depend on how Docker itself was installed.
curl -L https://github.com/docker/compose/releases/download/v$(curl -Ls https://www.servercow.de/docker-compose/latest.php)/docker-compose-$(uname -s)-$(uname -m) > /usr/local/bin/docker-compose\nchmod +x /usr/local/bin/docker-compose\n
Danger
The Docker Compose command syntax is docker-compose
for the standalone variant of Docker Compose!!!
Please use the latest Docker engine available and do not use the engine that ships with your distro's repositories.
"},{"location":"i_u_m/i_u_m_install/#check-selinux-specifics","title":"Check SELinux specifics","text":"On SELinux enabled systems, e.g. CentOS 7:
rpm -qa | grep container-selinux\n
If the above command returns an empty or no output, you should install it via your package manager.
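For example, on CentOS 7 that could look like this (package name taken from the check above):
yum install -y container-selinux\n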
docker info | grep selinux\n
If the above command returns an empty or no output, create or edit /etc/docker/daemon.json
and add \"selinux-enabled\": true
. Example file content:
{\n \"selinux-enabled\": true\n}\n
Restart the docker daemon and verify SELinux is now enabled.
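A quick way to do both, reusing the check from above:
systemctl restart docker\ndocker info | grep selinux\n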
This step is required to make sure mailcow's volumes are properly labeled as declared in the compose file. If you are interested in how this works, you can check out the readme of https://github.com/containers/container-selinux which links to a lot of useful information on that topic.
"},{"location":"i_u_m/i_u_m_install/#install-mailcow","title":"Install mailcow","text":"Clone the master branch of the repository, make sure your umask equals 0022. Please clone the repository as root user and also control the stack as root. We will modify attributes - if necessary - while bootstrapping the containers automatically and make sure everything is secured. The update.sh script must therefore also be run as root. It might be necessary to change ownership and other attributes of files you will otherwise not have access to. We drop permissions for every exposed application and will not run an exposed service as root! Controlling the Docker daemon as non-root user does not give you additional security. The unprivileged user will spawn the containers as root likewise. The behaviour of the stack is identical.
$ su\n# umask\n0022 # <- Verify it is 0022\n# cd /opt\n# git clone https://github.com/mailcow/mailcow-dockerized\n# cd mailcow-dockerized\n
"},{"location":"i_u_m/i_u_m_install/#initialize-mailcow","title":"Initialize mailcow","text":"Generate a configuration file. Use a FQDN (host.domain.tld
) as hostname when asked.
./generate_config.sh\n
Change configuration if you want or need to.
nano mailcow.conf\n
If you plan to use a reverse proxy, you can, for example, bind HTTPS to 127.0.0.1 on port 8443 and HTTP to 127.0.0.1 on port 8080. You may need to stop an existing pre-installed MTA which blocks port 25/tcp. See this chapter to learn how to reconfigure Postfix to run besides mailcow after a successful installation.
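A sketch of the corresponding mailcow.conf entries for that example (variable names as created by generate_config.sh; verify them against your own file):
HTTP_PORT=8080\nHTTP_BIND=127.0.0.1\nHTTPS_PORT=8443\nHTTPS_BIND=127.0.0.1\n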
Some updates modify mailcow.conf and add new parameters. It is hard to keep track of them in the documentation. Please check their description and, if unsure, ask on the known channels for advice.
"},{"location":"i_u_m/i_u_m_install/#troubleshooting","title":"Troubleshooting","text":""},{"location":"i_u_m/i_u_m_install/#users-with-a-mtu-not-equal-to-1500-eg-openstack","title":"Users with a MTU not equal to 1500 (e.g. OpenStack)","text":"Whenever you run into trouble and strange phenomena, please check your MTU.
Edit docker-compose.yml
and change the network settings according to your MTU. Add the new driver_opts parameter like this:
networks:\n mailcow-network:\n ...\n driver_opts:\n com.docker.network.driver.mtu: 1450\n ...\n
"},{"location":"i_u_m/i_u_m_install/#users-without-an-ipv6-enabled-network-on-their-host-system","title":"Users without an IPv6 enabled network on their host system","text":"Please don't turn off IPv6, even if you don't like it. IPv6 is the future and should not be ignored.
If you do not have an IPv6 enabled network on your host and you don't care for a better internet (hehe), it is recommended to disable IPv6 for the mailcow network to prevent unforeseen issues.
"},{"location":"i_u_m/i_u_m_install/#start-mailcow","title":"Start mailcow","text":"Pull the images and run the compose file. The parameter -d
will start mailcow: dockerized detached:
docker compose pull\ndocker compose up -d\n
Done!
You can now access https://${MAILCOW_HOSTNAME} with the default credentials admin
+ password moohoo
.
Info
If you are not using mailcow behind a reverse proxy, you should redirect all HTTP requests to HTTPS.
The database will be initialized right after a connection to MySQL can be established.
Your data will persist in multiple Docker volumes that are not deleted when you recreate or delete containers. Run docker volume ls
to see a list of all volumes. You can safely run docker compose down
without removing persistent data.
Warning
This guide assumes you intend to migrate an existing mailcow server (source) over to a brand new, empty server (target). It takes no care about preserving any existing data on your target server and will erase anything within /var/lib/docker/volumes
and thus any Docker volumes you may have already set up.
Tip
Alternatively, you can use the ./helper-scripts/backup_and_restore.sh
script to create a full backup on the source machine, then install mailcow on the target machine as usual, copy over your mailcow.conf
and use the same script to restore your backup to the target machine.
1. Follow the installation guide to install Docker and Compose.
2. Stop Docker and assure Docker has stopped:
systemctl stop docker.service\nsystemctl status docker.service\n
3. Run the following commands on the source machine (take care of adding the trailing slashes in the first path parameter as shown below!) - WARNING: This command will erase anything that may already exist under /var/lib/docker/volumes
on the target machine:
rsync -aHhP --numeric-ids --delete /opt/mailcow-dockerized/ root@target-machine.example.com:/opt/mailcow-dockerized\nrsync -aHhP --numeric-ids --delete /var/lib/docker/volumes/ root@target-machine.example.com:/var/lib/docker/volumes\n
4. Shut down mailcow and stop Docker on the source machine.
cd /opt/mailcow-dockerized\ndocker compose down\nsystemctl stop docker.service\n
5. Repeat step 3 with the same commands. This will be much quicker than the first time.
6. Switch over to the target machine and start Docker.
systemctl start docker.service\n
7. Now pull the mailcow Docker images on the target machine.
cd /opt/mailcow-dockerized\ndocker compose pull\n
8. Start the whole mailcow stack and everything should be done!
docker compose up -d\n
9. Finally, change your DNS settings to point to the target server. Also check the SNAT_TO_SOURCE
variable in your mailcow.conf
file if you have changed your public IP address, otherwise SOGo may not work.
An update script in your mailcow-dockerized directory will take care of updates.
But use it with caution! If you think you made a lot of changes to the mailcow code, you should use the manual update guide below.
Run the update script:
./update.sh\n
If it needs to, it will ask you how you wish to proceed. Merge errors will be reported. Some minor conflicts will be auto-corrected (in favour of the mailcow-dockerized repository code).
"},{"location":"i_u_m/i_u_m_update/#options","title":"Options","text":"# Options can be combined\n\n# - Check for updates and show changes\n./update.sh --check\n\n# - Do not start mailcow after applying an update\n./update.sh --skip-start\n\n# - Skip ICMP Check to public DNS resolvers (Use it only if you\u00b4ve blocked any ICMP Connections to your mailcow machine)\n./update.sh --skip-ping-check\n\n# - Switch your mailcow updates to the unstable (nightly) branch.\nFOR TESTING PURPOSES ONLY!!!! NOT READY FOR PRODUCTION!!!\n./update.sh --nightly\n\n# - Switch your mailcow updates to the stable (master) branch. Default unless you changed it with --nightly.\n./update.sh --stable\n\n# - Force update (unattended, but unsupported, use at own risk)\n./update.sh --force\n\n# - Run garbage collector to cleanup old image tags and exit\n./update.sh --gc\n\n# - Update with merge strategy option \"ours\" instead of \"theirs\"\n# This will **solve conflicts** when merging in favor for your local changes and should be avoided. Local changes will always be kept, unless we changed file XY, too.\n./update.sh --ours\n\n# - Don't update, but prefetch images and exit\n./update.sh --prefetch\n
"},{"location":"i_u_m/i_u_m_update/#i-forgot-what-i-changed-before-running-updatesh","title":"I forgot what I changed before running update.sh","text":"See git log --pretty=oneline | grep -i \"before update\"
, you will have an output similar to this:
22cd00b5e28893ef9ddef3c2b5436453cc5223ab Before update on 2020-09-28_19_25_45\ndacd4fb9b51e9e1c8a37d84485b92ffaf6c59353 Before update on 2020-08-07_13_31_31\n
Run git diff 22cd00b5e28893ef9ddef3c2b5436453cc5223ab
to see what changed.
Yes.
See the topic above, instead of a diff, you run checkout:
docker compose down\n# Replace commit ID 22cd00b5e28893ef9ddef3c2b5436453cc5223ab by your ID\ngit checkout 22cd00b5e28893ef9ddef3c2b5436453cc5223ab\ndocker compose pull\ndocker compose up -d\n
"},{"location":"i_u_m/i_u_m_update/#hooks","title":"Hooks","text":"You can hook into the update mechanism by adding scripts called pre_commit_hook.sh
and post_commit_hook.sh
to your mailcow's root directory. See this for more details.
YYYY-MM
(e.g. 2022-05
)2022-05a
, 2022-05b
etc.)stable (stable updates): These updates are suitable for productive usage. They appear in a cycle of at least 1x per month.
nightly (unstable updates): These updates are NOT suitable for production use and are for testing only. The nightly updates are ahead of the stable updates, since in these updates we test newer and more extensive features before they go live for all users.
"},{"location":"i_u_m/i_u_m_update/#new-get-nightly-updates","title":"NEW: Get Nightly Updates","text":""},{"location":"i_u_m/i_u_m_update/#info-about-the-nightly-updates","title":"Info about the Nightly Updates","text":"Since the 2022-08 update there is the possibility to change the update sources. Until now, the master branch on GitHub served as the only (official) update source. With the August 2022 update, however, there is now the Nightly Branch which contains unstable and major changes for testing and feedback.
The Nightly Branch always gets new updates when something is finished on the mailcow project that will be included in the new main version.
Besides the obvious changes that will be included in the next major update anyway, it also contains exclusive features that need a longer testing time (e.g. the UI update to Bootstrap 5).
"},{"location":"i_u_m/i_u_m_update/#how-do-i-get-nightly-updates","title":"How do I get Nightly Updates?","text":"The process is relatively simple. With the 2022-08 update (assuming an update to the version) it is possible to run update.sh
with the parameter --nightly
.
Danger
Please make a backup before or follow the Best Practice Nightly Update section before switching to mailcow nightly builds. We are not responsible for any data loss/corruption, so work with caution!
The script will now change the branch with git checkout nightly
, which means it will ask for the IPv6 settings again. But this is normal.
If everything worked fine (for which we made a backup before) the mailcow UI should now show the current version number and date stamp in the lower right corner:
"},{"location":"i_u_m/i_u_m_update/#best-practice-nightly-update","title":"Best Practice Nightly Update","text":"Info
We recommend using the Nightly Update only if you have another machine or VM and NOT use it productively.
update.sh
script on the new machine with the parameter --nightly
and confirm.Since February the 28th 2017 mailcow does come with port 80 and 443 enabled.
Do not use the config below for reverse proxy setups, please see our reverse proxy guide for this, which includes a redirect from HTTP to HTTPS.
Open mailcow.conf
and set HTTP_BIND=
- if not already set.
Create a new file data/conf/nginx/redirect.conf
and add the following server config to the file:
server {\n root /web;\n listen 80 default_server;\n listen [::]:80 default_server;\n include /etc/nginx/conf.d/server_name.active;\n if ( $request_uri ~* \"%0A|%0D\" ) { return 403; }\n location ^~ /.well-known/acme-challenge/ {\n allow all;\n default_type \"text/plain\";\n }\n location / {\n return 301 https://$host$uri$is_args$args;\n }\n}\n
In case you changed the HTTP_BIND parameter, recreate the container:
docker compose up -d\n
Otherwise restart Nginx:
docker compose restart nginx-mailcow\n
"},{"location":"manual-guides/u_e-autodiscover_config/","title":"Autodiscover / Autoconfig","text":"You do not need to change or create this file, autodiscover works out of the box. This guide is only meant for customizations to the autodiscover or autoconfig process.
Newer Outlook clients (especially those delivered with O365) will not autodiscover mail profiles. Keep in mind, that ActiveSync should NOT be used with a desktop client.
Open/create data/web/inc/vars.local.inc.php
and add your changes to the configuration array.
Changes will be merged with \"$autodiscover_config\" in data/web/inc/vars.inc.php
):
<?php\n$autodiscover_config = array(\n // General autodiscover service type: \"activesync\" or \"imap\"\n // emClient uses autodiscover, but does not support ActiveSync. mailcow excludes emClient from ActiveSync.\n 'autodiscoverType' => 'activesync',\n // If autodiscoverType => activesync, also use ActiveSync (EAS) for Outlook desktop clients (>= Outlook 2013 on Windows)\n // Outlook for Mac does not support ActiveSync\n 'useEASforOutlook' => 'yes',\n // Please don't use STARTTLS-enabled service ports in the \"port\" variable.\n // The autodiscover service will always point to SMTPS and IMAPS (TLS-wrapped services).\n // The autoconfig service will additionally announce the STARTTLS-enabled ports, specified in the \"tlsport\" variable.\n 'imap' => array(\n 'server' => $mailcow_hostname,\n 'port' => array_pop(explode(':', getenv('IMAPS_PORT'))),\n 'tlsport' => array_pop(explode(':', getenv('IMAP_PORT'))),\n ),\n 'pop3' => array(\n 'server' => $mailcow_hostname,\n 'port' => array_pop(explode(':', getenv('POPS_PORT'))),\n 'tlsport' => array_pop(explode(':', getenv('POP_PORT'))),\n ),\n 'smtp' => array(\n 'server' => $mailcow_hostname,\n 'port' => array_pop(explode(':', getenv('SMTPS_PORT'))),\n 'tlsport' => array_pop(explode(':', getenv('SUBMISSION_PORT'))),\n ),\n 'activesync' => array(\n 'url' => 'https://'.$mailcow_hostname.($https_port == 443 ? '' : ':'.$https_port).'/Microsoft-Server-ActiveSync',\n ),\n 'caldav' => array(\n 'server' => $mailcow_hostname,\n 'port' => $https_port,\n ),\n 'carddav' => array(\n 'server' => $mailcow_hostname,\n 'port' => $https_port,\n ),\n);\n
To always use IMAP and SMTP instead of EAS, set 'autodiscoverType' => 'imap'
.
Disable ActiveSync for Outlook desktop clients by setting \"useEASforOutlook\" to \"no\".
"},{"location":"manual-guides/u_e-reeanble-weak-protocols/","title":"Re-enable TLS 1.0 and TLS 1.1","text":"On February the 12th 2020 we disabled the deprecated protocols TLS 1.0 and 1.1 in Dovecot (POP3, POP3S, IMAP, IMAPS) and Postfix (SMTPS, SUBMISSION).
Unauthenticated mail via SMTP on port 25/tcp does still accept >= TLS 1.0 . It is better to accept a weak encryption than none at all.
How to re-enable weak protocols?
Edit data/conf/postfix/extra.cf
:
submission_smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3\nsmtps_smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3\n
Edit data/conf/dovecot/extra.conf
:
ssl_min_protocol = TLSv1\n
Restart the affected services:
docker compose restart postfix-mailcow dovecot-mailcow\n
Hint: You can enable TLS 1.2 in Windows 7.
"},{"location":"manual-guides/u_e-update-hooks/","title":"Run scripts before and after updates","text":"It is possible to add pre- and post-update-hooks to the update.sh
script that upgrades your whole mailcow installation.
To do so, just add the corresponding bash script into your mailcow root directory:
pre_update_hook.sh
for commands that should run before the updatepost_update_hook.sh
for commands that should run after the update is completedKeep in mind that pre_update_hook.sh
runs every time you call update.sh
and post_update_hook.sh
will only run if the update was successful and the script doesn't have to be re-run.
The scripts will be run by bash, an interpreter (e.g. #!/bin/bash
) as well as an execute permission flag (\"+x\") are not required.
For DNS blacklist lookups and DNSSEC.
Most systems use either a public or a local caching DNS resolver. That's a very bad idea when it comes to filter spam using DNS-based black hole lists (DNSBL) or similar technics. Most if not all providers apply a rate limit based on the DNS resolver that is used to query their service. Using a public resolver like Googles 4x8, OpenDNS or any other shared DNS resolver like your ISPs will hit that limit very soon.
"},{"location":"manual-guides/ClamAV/u_e-clamav-additional_dbs/","title":"Additional Databases","text":""},{"location":"manual-guides/ClamAV/u_e-clamav-additional_dbs/#additional-databases-for-clamav","title":"Additional Databases for ClamAV","text":"Default ClamAV databases do not have great detection levels, but it can be enhanced with free or paid signature databases.
"},{"location":"manual-guides/ClamAV/u_e-clamav-additional_dbs/#list-of-known-free-databases-as-of-april-2022","title":"List of known free databases | As of April 2022","text":"your_id
from one of the download links, they are individual for every userAdd to data/conf/clamav/freshclam.conf
with replaced your_id
part:
DatabaseCustomURL https://www.securiteinfo.com/get/signatures/your_id/securiteinfo.hdb\nDatabaseCustomURL https://www.securiteinfo.com/get/signatures/your_id/securiteinfo.ign2\nDatabaseCustomURL https://www.securiteinfo.com/get/signatures/your_id/javascript.ndb\nDatabaseCustomURL https://www.securiteinfo.com/get/signatures/your_id/spam_marketing.ndb\nDatabaseCustomURL https://www.securiteinfo.com/get/signatures/your_id/securiteinfohtml.hdb\nDatabaseCustomURL https://www.securiteinfo.com/get/signatures/your_id/securiteinfoascii.hdb\nDatabaseCustomURL https://www.securiteinfo.com/get/signatures/your_id/securiteinfopdf.hdb\n
For free SecuriteInfo databases, download speed is limited to 300 kB/s. In data/conf/clamav/freshclam.conf
, increase the default ReceiveTimeout 20
value to ReceiveTimeout 90
(time in seconds), otherwise some of the database downloads could fail because of their size.
Adjust data/conf/clamav/clamd.conf
to align with next settings:
DetectPUA yes\nExcludePUA PUA.Win.Packer\nExcludePUA PUA.Win.Trojan.Packed\nExcludePUA PUA.Win.Trojan.Molebox\nExcludePUA PUA.Win.Packer.Upx\nExcludePUA PUA.Doc.Packed\nMaxScanSize 150M\nMaxFileSize 100M\nMaxRecursion 40\nMaxEmbeddedPE 100M\nMaxHTMLNormalize 50M\nMaxScriptNormalize 50M\nMaxZipTypeRcg 50M\n
docker compose restart clamd-mailcow\n
Please note:
ExcludePUA
and IncludePUA
in clamd.conf
simultaneously, so please comment any IncludePUA
if you uncommented them before. message_size_limit
in Postfix you need to adapt MaxSize
settings in ClamAV as well.data/conf/clamav/freshclam.conf
: DatabaseCustomURL http://sigs.interserver.net/interserver256.hdb\nDatabaseCustomURL http://sigs.interserver.net/interservertopline.db\nDatabaseCustomURL http://sigs.interserver.net/shell.ldb\nDatabaseCustomURL http://sigs.interserver.net/whitelist.fp\n
docker compose restart clamd-mailcow\n
You may find that legitimate (clean) mail is being blocked by ClamAV (Rspamd will flag the mail with VIRUS_FOUND
). For instance, interactive PDF form attachments are blocked by default because the embedded Javascript code may be used for nefarious purposes. Confirm by looking at the clamd logs, e.g.:
docker compose logs clamd-mailcow | grep \"FOUND\"\n
This line confirms that such was identified:
clamd-mailcow_1 | Sat Sep 28 07:43:24 2019 -> instream(local): PUA.Pdf.Trojan.EmbeddedJavaScript-1(e887d2ac324ce90750768b86b63d0749:363325) FOUND\n
To whitelist this particular signature (and enable sending this type of file attached), add it to the ClamAV signature whitelist file:
echo 'PUA.Pdf.Trojan.EmbeddedJavaScript-1' >> data/conf/clamav/whitelist.ign2\n
Then restart the clamd-mailcow service container in the mailcow UI or using docker compose:
docker compose restart clamd-mailcow\n
Cleanup cached ClamAV results in Redis:
# docker compose exec redis-mailcow /bin/sh\n/data # redis-cli KEYS rs_cl* | xargs redis-cli DEL\n/data # exit\n
"},{"location":"manual-guides/Docker/u_e-docker-cust_dockerfiles/","title":"Customize Dockerfiles","text":"You need to copy the override file with corresponding build tags to the mailcow: dockerized root folder (i.e. /opt/mailcow-dockerized
):
cp helper-scripts/docker-compose.override.yml.d/BUILD_FLAGS/docker-compose.override.yml docker-compose.override.yml\n
Customize data/Dockerfiles/$service
and build the image locally:
docker build data/Dockerfiles/$service -t mailcow/$service:$tag\n
(without a personalized :$tag docker will use :latest automatically) Now the created image has to be activated in docker-compose.override.yml, e.g.:
$service-mailcow:\n build: ./data/Dockerfiles/$service\n image: mailcow/$service:$tag\n
Now auto-recreate modified containers:
docker compose up -d\n
"},{"location":"manual-guides/Dovecot/u_e-dovecot-any_acl/","title":"Enable \"any\" ACL settings","text":"On August the 17th, we disabled the possibility to share with \"any\" or \"all authenticated users\" by default.
This function can be re-enabled by setting ACL_ANYONE
to allow
in mailcow.conf:
ACL_ANYONE=allow\n
Apply the changes by running docker compose up -d
.
The Dovecot parameter sieve_vacation_dont_check_recipient
- which was by default set to yes
in mailcow configurations pre 21st July 2021 - allows for vacation replies even when a mail is sent to non-existent mailboxes like a catch-all addresses.
We decided to switch this parameter back to no
and allow a user to specify which recipient address triggers a vacation reply. The triggering recipients can also be configured in SOGos autoresponder feature.
If you want to delete old mails out of the .Junk
or .Trash
folders or maybe delete all read mails that are older than a certain amount of time you may use dovecot's tool doveadm man doveadm-expunge.
That said, let's dive in:
Delete a user's mails inside the junk folder that are read and older than 4 hours
docker compose exec dovecot-mailcow doveadm expunge -u 'mailbox@example.com' mailbox 'Junk' SEEN not SINCE 4h\n
Delete all user's mails in the junk folder that are older than 7 days
docker compose exec dovecot-mailcow doveadm expunge -A mailbox 'Junk' savedbefore 7d\n
Delete all mails (of all users) in all folders that are older than 52 weeks (internal date of the mail, not the date it was saved on the system => before
instead of savedbefore
). Useful for deleting very old mails on all users and folders (thus especially useful for GDPR-compliance).
docker compose exec dovecot-mailcow doveadm expunge -A mailbox % before 52w\n
Delete mails inside a custom folder inside a user's inbox that are not flagged and older than 2 weeks
docker compose exec dovecot-mailcow doveadm expunge -u 'mailbox@example.com' mailbox 'INBOX/custom-folder' not FLAGGED not SINCE 2w\n
Info
For possible time spans or search keys have a look at man doveadm-search-query
"},{"location":"manual-guides/Dovecot/u_e-dovecot-expunge/#job-scheduler","title":"Job scheduler","text":""},{"location":"manual-guides/Dovecot/u_e-dovecot-expunge/#via-the-host-system-cron","title":"via the host system cron","text":"If you want to automate such a task you can create a cron job on your host that calls a script like the one below:
#!/bin/bash\n# Path to mailcow-dockerized, e.g. /opt/mailcow-dockerized\ncd /path/to/your/mailcow-dockerized\n\n/usr/local/bin/docker compose exec -T dovecot-mailcow doveadm expunge -A mailbox 'Junk' savedbefore 2w\n/usr/local/bin/docker compose exec -T dovecot-mailcow doveadm expunge -A mailbox 'Junk' SEEN not SINCE 12h\n[...]\n
To create a cron job you may execute crontab -e
and insert something like the following to execute a script:
# Execute everyday at 04:00 A.M.\n0 4 * * * /path/to/your/expunge_mailboxes.sh\n
"},{"location":"manual-guides/Dovecot/u_e-dovecot-expunge/#via-docker-job-scheduler","title":"via Docker job scheduler","text":"To archive this with a docker job scheduler use this docker-compose.override.yml with your mailcow:
version: '2.1'\n\nservices:\n\n ofelia:\n image: mcuadros/ofelia:latest\n restart: always\n command: daemon --docker\n volumes:\n - /var/run/docker.sock:/var/run/docker.sock:ro \n network_mode: none\n\n dovecot-mailcow:\n labels:\n - \"ofelia.enabled=true\"\n - \"ofelia.job-exec.dovecot-expunge-trash.schedule=0 4 * * *\"\n - \"ofelia.job-exec.dovecot-expunge-trash.command=doveadm expunge -A mailbox 'Junk' savedbefore 2w\"\n - \"ofelia.job-exec.dovecot-expunge-trash.tty=false\"\n
The job controller just need access to the docker control socket to be able to emulate the behavior of \"exec\". Then we add a few label to our dovecot-container to activate the job scheduler and tell him in a cron compatible scheduling format when to run. If you struggle with that schedule string you can use crontab guru. This docker-compose.override.yml deletes all mails older then 2 weeks from the \"Junk\" folder every day at 4 am. To see if things ran proper, you can not only see in your mailbox but also check Ofelia's docker log if it looks something like this:
common.go:124 \u25b6 NOTICE [Job \"dovecot-expunge-trash\" (8759567efa66)] Started - doveadm expunge -A mailbox 'Junk' savedbefore 2w,\ncommon.go:124 \u25b6 NOTICE [Job \"dovecot-expunge-trash\" (8759567efa66)] Finished in \"285.032291ms\", failed: false, skipped: false, error: none,\n
If it failed it will say so and give you the output of the doveadm in the log to make it easy on you to debug.
In case you want to add more jobs, ensure you change the \"dovecot-expunge-trash\" part after \"ofelia.job-exec.\" to something else, it defines the name of the job. Syntax of the labels you find at mcuadros/ofelia.
"},{"location":"manual-guides/Dovecot/u_e-dovecot-extra_conf/","title":"Customize/Expand dovecot.conf","text":"Create a file data/conf/dovecot/extra.conf
- if missing - and add your additional content here.
Restart dovecot-mailcow
to apply your changes:
docker compose restart dovecot-mailcow\n
"},{"location":"manual-guides/Dovecot/u_e-dovecot-fts/","title":"FTS (Solr)","text":""},{"location":"manual-guides/Dovecot/u_e-dovecot-fts/#fts-solr","title":"FTS Solr","text":"Solr is used for setups with memory >= 3.5 GiB to provide full-text search in Dovecot.
Please be aware that applications like Solr may need maintenance from time to time.
Besides that, Solr will eat a lot of RAM, depending on the usage of your server. Please avoid it on machines with less than 3 GB RAM.
The default heap size (1024 M) is defined in mailcow.conf.
Since we run in Docker and create our containers with the \"restart: always\" flag, a oom situation will at least only trigger a restart of the container.
"},{"location":"manual-guides/Dovecot/u_e-dovecot-fts/#fts-related-dovecot-commands","title":"FTS related Dovecot commands","text":"# single user\ndocker compose exec dovecot-mailcow doveadm fts rescan -u user@domain\n# all users\ndocker compose exec dovecot-mailcow doveadm fts rescan -A\n
Dovecot Wiki: \"Scan what mails exist in the full text search index and compare those to what actually exist in mailboxes. This removes mails from the index that have already been expunged and makes sure that the next doveadm index will index all the missing mails (if any).\"
This does not re-index a mailbox. It basically repairs a given index.
If you want to re-index data immediately, you can run the followig command, where '*' can also be a mailbox mask like 'Sent'. You do not need to run these commands, but it will speed things up a bit:
# single user\ndocker compose exec dovecot-mailcow doveadm index -u user@domain '*'\n# all users, but obviously slower and more dangerous\ndocker compose exec dovecot-mailcow doveadm index -A '*'\n
This will take some time depending on your machine and Solr can run oom, monitor it!
Because re-indexing is very sensible, we did not include it to mailcow UI. You will need to take care of any errors while re-indexing a mailbox.
"},{"location":"manual-guides/Dovecot/u_e-dovecot-fts/#delete-mailbox-data","title":"Delete mailbox data","text":"mailcow will purge index data of a user when deleting a mailbox.
"},{"location":"manual-guides/Dovecot/u_e-dovecot-idle_interval/","title":"Changing the IMAP IDLE interval","text":""},{"location":"manual-guides/Dovecot/u_e-dovecot-idle_interval/#what-is-the-idle-interval","title":"What is the IDLE interval?","text":"Per default, Dovecot sends a \"I'm still here\" notification to every client that has an open connection with Dovecot to get mails as quickly as possible without manually polling it (IMAP PUSH). This notification is controlled by the setting imap_idle_notify_interval
, which defaults to 2 minutes.
A short interval results in the client getting a lot of messages for this connection, which is bad for mobile devices, because every time the device receives this message, the mailing app has to wake up. This can result in unnecessary battery drain.
"},{"location":"manual-guides/Dovecot/u_e-dovecot-idle_interval/#edit-the-value","title":"Edit the value","text":""},{"location":"manual-guides/Dovecot/u_e-dovecot-idle_interval/#change-configuration","title":"Change configuration","text":"Create a new file data/conf/dovecot/extra.conf
(or edit it if it already exists). Insert the setting followed by the new value. For example, to set the interval to 5 minutes you could type:
imap_idle_notify_interval = 5 mins\n
29 minutes is the maximum value allowed by the corresponding RFC.
Warning
This isn't a default setting in mailcow because we don't know how this setting changes the behavior of other clients. Be careful if you change this and monitor different behavior.
"},{"location":"manual-guides/Dovecot/u_e-dovecot-idle_interval/#reload-dovecot","title":"Reload Dovecot","text":"Now reload Dovecot:
docker compose exec dovecot-mailcow dovecot reload\n
Info
You can check the value of this setting with
docker compose exec dovecot-mailcow dovecot -a | grep \"imap_idle_notify_interval\"\n
If you didn't change it, it should be at 2m. If you did change it, you should see your new value."},{"location":"manual-guides/Dovecot/u_e-dovecot-mail-crypt/","title":"Mail crypt","text":"Warning
Mails are stored compressed (lz4) and encrypted. The key pair can be found in crypt-vol-1.
If you want to decode/encode existing maildir files, you can use the following script at your own risk:
Enter Dovecot by running docker compose exec dovecot-mailcow /bin/bash
in the mailcow-dockerized location.
# Decrypt /var/vmail\nfind /var/vmail/ -type f -regextype egrep -regex '.*S=.*W=.*' | while read -r file; do\nif [[ $(head -c7 \"$file\") == \"CRYPTED\" ]]; then\ndoveadm fs get compress lz4:1:crypt:private_key_path=/mail_crypt/ecprivkey.pem:public_key_path=/mail_crypt/ecpubkey.pem:posix:prefix=/ \\\n \"$file\" > \"/tmp/$(basename \"$file\")\"\n if [[ -s \"/tmp/$(basename \"$file\")\" ]]; then\n chmod 600 \"/tmp/$(basename \"$file\")\"\n chown 5000:5000 \"/tmp/$(basename \"$file\")\"\n mv \"/tmp/$(basename \"$file\")\" \"$file\"\n else\n rm \"/tmp/$(basename \"$file\")\"\n fi\nfi\ndone\n\n# Encrypt /var/vmail\nfind /var/vmail/ -type f -regextype egrep -regex '.*S=.*W=.*' | while read -r file; do\nif [[ $(head -c7 \"$file\") != \"CRYPTED\" ]]; then\ndoveadm fs put crypt private_key_path=/mail_crypt/ecprivkey.pem:public_key_path=/mail_crypt/ecpubkey.pem:posix:prefix=/ \\\n \"$file\" \"$file\"\n chmod 600 \"$file\"\n chown 5000:5000 \"$file\"\nfi\ndone\n
"},{"location":"manual-guides/Dovecot/u_e-dovecot-more/","title":"More Examples with DOVEADM","text":"Here is just an unsorted list of useful doveadm
commands that could be useful.
The quota get
and quota recalc
1 commands are used to display or recalculate the current user's quota usage. The reported values are in kilobytes.
To list the current quota status for a user / mailbox, do:
doveadm quota get -u 'mailbox@example.org'\n
To list the quota storage value for all users, do:
doveadm quota get -A |grep \"STORAGE\"\n
Recalculate a single user's quota usage:
doveadm quota recalc -u 'mailbox@example.org'\n
"},{"location":"manual-guides/Dovecot/u_e-dovecot-more/#doveadm-search","title":"doveadm search","text":"The doveadm search
2 command is used to find messages matching your query. It can return the username, mailbox-GUID / -UID and message-GUIDs / -UIDs.
To view the number of messages, by user, in their .Trash folder:
doveadm search -A mailbox 'Trash' | awk '{print $1}' | sort | uniq -c\n
Show all messages in a user's inbox older then 90 days:
doveadm search -u 'mailbox@example.org' mailbox 'INBOX' savedbefore 90d\n
Show all messages in any folder that are older then 30 days for mailbox@example.org
:
doveadm search -u 'mailbox@example.org' mailbox \"*\" savedbefore 30d\n
https://wiki.dovecot.org/Tools/Doveadm/Quota \u21a9
https://wiki.dovecot.org/Tools/Doveadm/Search \u21a9
Create a new public namespace \"Public\" and a mailbox \"Develcow\" inside that namespace:
Edit or create data/conf/dovecot/extra.conf
, add:
namespace {\n type = public\n separator = /\n prefix = Public/\n location = maildir:/var/vmail/public:INDEXPVT=~/public\n subscriptions = yes\n mailbox \"Develcow\" {\n auto = subscribe\n }\n}\n
:INDEXPVT=~/public
can be omitted if per-user seen flags are not wanted.
The new mailbox in the public namespace will be auto-subscribed by users.
To allow all authenticated users full access to that new mailbox (not the whole namespace), run:
docker compose exec dovecot-mailcow doveadm acl set -A \"Public/Develcow\" \"authenticated\" lookup read write write-seen write-deleted insert post delete expunge create\n
Adjust the command to your needs if you like to assign more granular rights per user (use -u user@domain
instead of -A
for example).
To allow all authenticated users full access to the whole public namespace and its subfolders, create a new dovecot-acl
file in the namespace root directory:
Open/edit/create /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data/public/dovecot-acl
(adjust the path accordingly) to create the global ACL file with the following content:
authenticated kxeilprwts\n
kxeilprwts
is equivalent to lookup read write write-seen write-deleted insert post delete expunge create
.
You can use doveadm acl set -u user@domain \"Public/Develcow\" user=user@domain lookup read
to limit access for a single user. You may also turn it around to limit access for all users to \"lr\" and grant only some users full access.
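To verify which rights are currently applied, doveadm can also print the ACL of a mailbox for a given user. A minimal sketch using the example mailbox from above:
docker compose exec dovecot-mailcow doveadm acl get -u user@domain \"Public/Develcow\"\n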
See Dovecot ACL for further information about ACL.
"},{"location":"manual-guides/Dovecot/u_e-dovecot-static_master/","title":"Static master user","text":"Random master usernames and passwords are automatically created on every restart of dovecot-mailcow.
That's recommended and should not be changed.
If you need the user to be static anyway, please specify two variables in mailcow.conf
.
Both parameters must not be empty!
DOVECOT_MASTER_USER=mymasteruser\nDOVECOT_MASTER_PASS=mysecretpass\n
Run docker compose up -d
to apply your changes.
The static master username will be expanded to DOVECOT_MASTER_USER@mailcow.local
.
To log in as test@example.org
you would use test@example.org*mymasteruser@mailcow.local
together with the password specified above.
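To check that the static master user actually authenticates, you can run an auth test inside the Dovecot container. This is only a sketch based on the example values above; if it does not reflect your setup, test with a regular IMAP client instead:
docker compose exec dovecot-mailcow doveadm auth test 'test@example.org*mymasteruser@mailcow.local' 'mysecretpass'\n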
A login to SOGo is not possible with this username. A click-to-login function for SOGo is available for admins as described here. No master user is required.
"},{"location":"manual-guides/Dovecot/u_e-dovecot-vmail-volume/","title":"Move Maildir (vmail)","text":""},{"location":"manual-guides/Dovecot/u_e-dovecot-vmail-volume/#the-new-way","title":"The \"new\" way","text":"Warning
Newer Docker versions seem to complain about existing volumes. You can fix this temporarily by removing the existing volume and starting mailcow with the override file. However, this seems to be problematic after a reboot (needs to be confirmed).
An easy, dirty, yet stable workaround is to stop mailcow (docker compose down
), remove /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data
and create a new link to your remote filesystem location, for example:
mv /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data_backup\nln -s /mnt/volume-xy/vmail_data /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data\n
Start mailcow afterwards.
"},{"location":"manual-guides/Dovecot/u_e-dovecot-vmail-volume/#the-old-way","title":"The \"old\" way","text":"If you want to use another folder for the vmail-volume, you can create a docker-compose.override.yml
file and add the following content:
version: '2.1'\nvolumes:\n vmail-vol-1:\n driver_opts:\n type: none\n device: /data/mailcow/vmail \n o: bind\n
"},{"location":"manual-guides/Dovecot/u_e-dovecot-vmail-volume/#moving-an-existing-vmail-folder","title":"Moving an existing vmail folder:","text":"docker volume inspect mailcowdockerized_vmail-vol-1
[\n {\n \"CreatedAt\": \"2019-06-16T22:08:34+02:00\",\n \"Driver\": \"local\",\n \"Labels\": {\n \"com.docker.compose.project\": \"mailcowdockerized\",\n \"com.docker.compose.version\": \"1.23.2\",\n \"com.docker.compose.volume\": \"vmail-vol-1\"\n },\n \"Mountpoint\": \"/var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data\",\n \"Name\": \"mailcowdockerized_vmail-vol-1\",\n \"Options\": null,\n \"Scope\": \"local\"\n }\n]\n
Copy the data from the Mountpoint folder shown above to the new location (e.g. /data/mailcow/vmail) using cp -a, rsync -a or a similar non-structure-breaking copy command.
Stop mailcow by running docker compose down from within your mailcow root folder (e.g. /opt/mailcow-dockerized).
Create or adjust docker-compose.override.yml as described above and edit the device path accordingly.
Delete the old volume by running docker volume rm mailcowdockerized_vmail-vol-1.
Start mailcow again by running docker compose up -d from within your mailcow root folder (e.g. /opt/mailcow-dockerized
)Please see Advanced SSL and explicitly check ADDITIONAL_SERVER_NAMES
for SSL configuration.
Please do not add ADDITIONAL_SERVER_NAMES when you plan to use a different web root.
"},{"location":"manual-guides/Nginx/u_e-nginx_custom/#new-site","title":"New site","text":"To create persistent (over updates) sites hosted by mailcow: dockerized, a new site configuration must be placed inside data/conf/nginx/
:
A good template to begin with:
nano data/conf/nginx/my_custom_site.conf\n
server {\n ssl_certificate /etc/ssl/mail/cert.pem;\n ssl_certificate_key /etc/ssl/mail/key.pem;\n ssl_protocols TLSv1.2 TLSv1.3;\n ssl_prefer_server_ciphers on;\n ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;\n ssl_ecdh_curve X25519:X448:secp384r1:secp256k1;\n ssl_session_cache shared:SSL:50m;\n ssl_session_timeout 1d;\n ssl_session_tickets off;\n index index.php index.html;\n client_max_body_size 0;\n # Location: data/web\n root /web;\n # Location: data/web/mysite.com\n #root /web/mysite.com\n include /etc/nginx/conf.d/listen_plain.active;\n include /etc/nginx/conf.d/listen_ssl.active;\n server_name mysite.example.org;\n server_tokens off;\n\n # This allows acme to be validated even with a different web root\n location ^~ /.well-known/acme-challenge/ {\n default_type \"text/plain\";\n rewrite /.well-known/acme-challenge/(.*) /$1 break;\n root /web/.well-known/acme-challenge/;\n }\n\n if ($scheme = http) {\n return 301 https://$server_name$request_uri;\n }\n}\n
"},{"location":"manual-guides/Nginx/u_e-nginx_custom/#new-site-with-proxy-to-a-remote-location","title":"New site with proxy to a remote location","text":"Another example with a reverse proxy configuration:
nano data/conf/nginx/my_custom_site.conf\n
server {\n ssl_certificate /etc/ssl/mail/cert.pem;\n ssl_certificate_key /etc/ssl/mail/key.pem;\n ssl_protocols TLSv1.2 TLSv1.3;\n ssl_prefer_server_ciphers on;\n ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;\n ssl_ecdh_curve X25519:X448:secp384r1:secp256k1;\n ssl_session_cache shared:SSL:50m;\n ssl_session_timeout 1d;\n ssl_session_tickets off;\n index index.php index.html;\n client_max_body_size 0;\n root /web;\n include /etc/nginx/conf.d/listen_plain.active;\n include /etc/nginx/conf.d/listen_ssl.active;\n server_name example.domain.tld;\n server_tokens off;\n\n location ^~ /.well-known/acme-challenge/ {\n allow all;\n default_type \"text/plain\";\n }\n\n if ($scheme = http) {\n return 301 https://$host$request_uri;\n }\n\n location / {\n proxy_pass http://service:3000/;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n client_max_body_size 0;\n }\n}\n
"},{"location":"manual-guides/Nginx/u_e-nginx_custom/#config-expansion-in-mailcows-nginx","title":"Config expansion in mailcows Nginx","text":"The filename used for a new site is not important, as long as the filename carries a .conf extension.
It is also possible to extend the configuration of the default file site.conf
file:
nano data/conf/nginx/site.my_content.custom\n
This filename does not need to have a \".conf\" extension but follows the pattern site.*.custom
, where *
is a custom name.
If PHP is to be included in a custom site, please use the PHP-FPM listener on phpfpm:9002 or create a new listener in data/conf/phpfpm/php-fpm.d/pools.conf
.
Restart Nginx (and PHP-FPM, if a new listener was created):
docker compose restart nginx-mailcow\ndocker compose restart php-fpm-mailcow\n
"},{"location":"manual-guides/Nginx/u_e-nginx_webmail-site/","title":"Create subdomain webmail.example.org","text":"IMPORTANT: This guide only applies to non SNI enabled configurations. The certificate path needs to be adjusted if SNI is enabled. Something like ssl_certificate,key /etc/ssl/mail/webmail.example.org/cert.pem,key.pem;
will do. Note: the certificate should be acquired first and the site config should only be created once the certificate exists, because Nginx will fail to start if it cannot find the certificate and key.
To create a subdomain webmail.example.org
and redirect it to SOGo, you need to create a new Nginx site. Take care of \"CHANGE_TO_MAILCOW_HOSTNAME\"!
nano data/conf/nginx/webmail.conf
server {\n ssl_certificate /etc/ssl/mail/cert.pem;\n ssl_certificate_key /etc/ssl/mail/key.pem;\n index index.php index.html;\n client_max_body_size 0;\n root /web;\n include /etc/nginx/conf.d/listen_plain.active;\n include /etc/nginx/conf.d/listen_ssl.active;\n server_name webmail.example.org;\n server_tokens off;\n location ^~ /.well-known/acme-challenge/ {\n allow all;\n default_type \"text/plain\";\n }\n\n location / {\n return 301 https://CHANGE_TO_MAILCOW_HOSTNAME/SOGo;\n }\n}\n
Save and restart Nginx: docker compose restart nginx-mailcow
.
Now open mailcow.conf
and find ADDITIONAL_SAN
. Add webmail.example.org
to this array, don't use quotes!
ADDITIONAL_SAN=webmail.example.org\n
Run docker compose up -d
. See \"acme-mailcow\" and \"nginx-mailcow\" logs if anything fails.
Open data/conf/postfix/extra.cf
and set the message_size_limit
accordingly in bytes. See main.cf
for the default value.
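For example, to allow messages of up to roughly 50 MiB (a hypothetical value, adjust it to your needs), data/conf/postfix/extra.cf could contain:
message_size_limit = 52428800\n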
Restart Postfix:
docker compose restart postfix-mailcow\n
"},{"location":"manual-guides/Postfix/u_e-postfix-custom_transport/","title":"Custom transport maps","text":"For transport maps other than those to be configured in mailcow UI, please use data/conf/postfix/custom_transport.pcre
to prevent existing maps or settings from being overwritten by updates.
In most cases using this file is not necessary. Please verify that the mailcow UI cannot route your desired traffic before resorting to this file.
The file needs valid PCRE content and can break Postfix, if configured incorrectly.
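If you do need it, a minimal sketch of such a file could look like this (hypothetical recipient domain and relay host; PCRE pattern on the left, Postfix transport on the right):
# Deliver all mail for one recipient domain via a dedicated relay\n/@legacy[.]example[.]org$/ smtp:[relay.example.com]:25\n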
"},{"location":"manual-guides/Postfix/u_e-postfix-disable_sender_verification/","title":"Disable Sender Addresses Verification","text":""},{"location":"manual-guides/Postfix/u_e-postfix-disable_sender_verification/#new-guide","title":"New guide","text":"Edit a mailbox and select \"Allow to send as *\".
For historical reasons we kept the old and deprecated guide below:
"},{"location":"manual-guides/Postfix/u_e-postfix-disable_sender_verification/#deprecated-guide-do-not-use-on-newer-mailcows","title":"Deprecated guide (DO NOT USE ON NEWER MAILCOWS!)","text":"This option is not best-practice and should only be implemented when there is no other option available to achieve whatever you are trying to do.
Simply create a file data/conf/postfix/check_sasl_access
and enter the following content. This user must exist in your installation and needs to authenticate before sending mail.
user-to-allow-everything@example.com OK\n
Open data/conf/postfix/main.cf
and find smtpd_sender_restrictions
. Prepend check_sasl_access hash:/opt/postfix/conf/check_sasl_access
like this:
smtpd_sender_restrictions = check_sasl_access hash:/opt/postfix/conf/check_sasl_access reject_authenticated_sender_login_mismatch [...]\n
Run postmap on check_sasl_access:
docker compose exec postfix-mailcow postmap /opt/postfix/conf/check_sasl_access\n
Restart the Postfix container.
"},{"location":"manual-guides/Postfix/u_e-postfix-extra_cf/","title":"Customize/Expand main.cf","text":"Please create a new file data/conf/postfix/extra.cf
for overrides or additional content to main.cf
.
Postfix will complain about duplicate values once after starting postfix-mailcow, this is intended.
Syslog-ng was configured to hide those warnings while Postfix is running, to not spam the log files with unnecessary information every time a service is used.
Restart postfix-mailcow
to apply your changes:
docker compose restart postfix-mailcow\n
"},{"location":"manual-guides/Postfix/u_e-postfix-pflogsumm/","title":"Statistics with pflogsumm","text":"To use pflogsumm with the default logging driver, we need to query postfix-mailcow via docker logs and direct the output to pflogsumm:
docker logs --since 24h $(docker ps -qf name=postfix-mailcow) | pflogsumm\n
The above log output is limited to the last 24 hours.
It is also possible to create a daily pflogsumm report via cron. Create the /etc/cron.d/pflogsumm file with the following content:
SHELL=/bin/bash\n59 23 * * * root docker logs --since 24h $(docker ps -qf name=postfix-mailcow) | /usr/sbin/pflogsumm -d today | mail -s \"Postfix Report of $(date)\" postmaster@example.net\n
For this to work, a local Postfix that relays to the mailcow Postfix must be installed on the server.
More detailed information can be found in section Post installation tasks -> Local MTA on Dockerhost.
Based on the postfix logs of the last 24 hours, this example then sends a pflogsumm report to postmaster@example.net every day at 23:59:00.
"},{"location":"manual-guides/Postfix/u_e-postfix-postscreen_whitelist/","title":"Whitelist IP in Postscreen","text":"IPs can be removed from Postscreen and therefore also from RBL checks in data/conf/postfix/custom_postscreen_whitelist.cidr
.
Postscreen does multiple checks to identify malicious senders. In most cases you want to whitelist an IP to exclude it from blacklist lookups.
The format of the file is as follows:
CIDR ACTION
Where CIDR is a single IP address or IP range in CIDR notation, and action is either \"permit\" or \"reject\".
Example:
# Rules are evaluated in the order as specified.\n# Blacklist 192.168.* except 192.168.0.1.\n192.168.0.1 permit\n192.168.0.0/16 reject\n
The file is reloaded on the fly, postfix restart is not required.
"},{"location":"manual-guides/Postfix/u_e-postfix-relayhost/","title":"Relayhosts","text":"As of September 12, 2018 you can setup relayhosts as admin by using the mailcow UI.
This is useful if you want to relay outgoing emails for a specific domain to a third-party spam filter or a service like Mailgun or Sendgrid. This is also known as a smarthost.
"},{"location":"manual-guides/Postfix/u_e-postfix-relayhost/#add-a-new-relayhost","title":"Add a new relayhost","text":"Go to the Routing
tab of the Configuration and Details
section of the admin UI. Here you will see a list of relayhosts currently setup.
Scroll to the Add sender-dependent transport
section.
Under Host
, add the host you want to relay to. Example: if you want to use Mailgun to send emails instead of your server IP, enter smtp.mailgun.org
If the relay host requires a username and password to authenticate, enter them in the respective fields. Keep in mind the credentials will be stored in plain text.
"},{"location":"manual-guides/Postfix/u_e-postfix-relayhost/#test-a-relayhost","title":"Test a relayhost","text":"To test that connectivity to the host works, click on Test
from the list of relayhosts and enter a From: address. Then, run the test.
You will then see the results of the SMTP transmission. If all went well, you should see SERVER -> CLIENT: 250 2.0.0 Ok: queued as A093B401D4
as one of the last lines.
If not, review the error provided and resolve it.
Note: Some hosts, especially those who do not require authentication, will deny connections from servers that have not been added to their system beforehand. Make sure you read the documentation of the relayhost to make sure you've added your domain and/or the server IP to their system.
Tip: You can change the default test To: address the test uses from null@mailcow.email to any email address you choose by modifying the $RELAY_TO variable in the vars.inc.php file under /opt/mailcow-dockerized/data/web/inc. This way you can verify that the relay worked by checking the destination mailbox.
"},{"location":"manual-guides/Postfix/u_e-postfix-relayhost/#set-the-relayhost-for-a-domain","title":"Set the relayhost for a domain","text":"Go to the Domains
tab of the Mail setup
section of the admin UI.
Edit the desired domain.
Select the newly added host on the Sender-dependent transports
dropdown and save changes.
Send an email from a mailbox on that domain and you should see postfix handing the message over to the relayhost in the logs.
"},{"location":"manual-guides/Postfix/u_e-postfix-trust_networks/","title":"Add trusted networks","text":"By default mailcow considers all networks as untrusted excluding its own IPV4_NETWORK and IPV6_NETWORK scopes. Though it is reasonable in most cases, there may be circumstances that you need to loosen this restriction.
By default mailcow uses mynetworks_style = subnet
to determine internal subnets and leaves mynetworks
unconfigured.
If you decide to set mynetworks
, Postfix ignores the mynetworks_style setting. This means you have to add the IPV4_NETWORK and IPV6_NETWORK scopes as well as loopback subnets manually!
Warning
Incorrect setup of mynetworks
will allow your server to be used as an open relay. If abused, this will affect your ability to send emails and can take some time to be resolved.
To add the subnet 192.168.2.0/24
to the trusted networks you may use the following configuration, depending on your IPV4_NETWORK and IPV6_NETWORK scopes:
Edit data/conf/postfix/extra.cf
:
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 [fe80::]/10 172.22.1.0/24 [fd4d:6169:6c63:6f77::]/64 192.168.2.0/24\n
Run docker compose restart postfix-mailcow
to apply your new settings.
Adding IPv6 hosts is done the same as IPv4, however the subnet needs to be placed in brackets []
with the netmask appended.
To add the subnet 2001:db8::/32 to the trusted networks you may use the following configuration, depending on your IPV4_NETWORK and IPV6_NETWORK scopes:
Edit data/conf/postfix/extra.cf
:
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 [fe80::]/10 172.22.1.0/24 [fd4d:6169:6c63:6f77::]/64 [2001:db8::]/32\n
Run docker compose restart postfix-mailcow
to apply your new settings.
Info
More information about mynetworks can be found in the Postfix documentation.
"},{"location":"manual-guides/Redis/u_e-redis/","title":"Redis","text":"Redis is used as a key-value store for rspamd's and (some of) mailcow's settings and data. If you are unfamiliar with redis please read the introduction to redis and maybe visit this wonderful guide on how to use it.
"},{"location":"manual-guides/Redis/u_e-redis/#client","title":"Client","text":"To connect to the redis cli execute:
docker compose exec redis-mailcow redis-cli\n
"},{"location":"manual-guides/Redis/u_e-redis/#debugging","title":"Debugging","text":"Here are some useful commands for the redis-cli for debugging:
"},{"location":"manual-guides/Redis/u_e-redis/#monitor","title":"MONITOR","text":"Listens for all requests received by the server in real time:
# docker compose exec redis-mailcow redis-cli\n127.0.0.1:6379> monitor\nOK\n1494077286.401963 [0 172.22.1.253:41228] \"SMEMBERS\" \"BAYES_SPAM_keys\"\n1494077288.292970 [0 172.22.1.253:41229] \"SMEMBERS\" \"BAYES_SPAM_keys\"\n[...]\n
"},{"location":"manual-guides/Redis/u_e-redis/#keys","title":"KEYS","text":"Get all keys matching your pattern:
KEYS *\n
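For example, to list only the ratelimit keys mentioned later in this documentation, narrow the pattern down:
KEYS RL*\n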
"},{"location":"manual-guides/Redis/u_e-redis/#ping","title":"PING","text":"Test a connection:
127.0.0.1:6379> PING\nPONG\n
If you want to know more, here is a cheat sheet.
"},{"location":"manual-guides/Rspamd/u_e-rspamd/","title":"Rspamd","text":"Rspamd is used for AV handling, DKIM signing and SPAM handling. It's a powerful and fast filter system. For a more in-depth documentation on Rspamd please visit its own documentation.
"},{"location":"manual-guides/Rspamd/u_e-rspamd/#learn-spam-ham","title":"Learn Spam & Ham","text":"Rspamd learns mail as spam or ham when you move a message in or out of the junk folder to any mailbox besides trash. This is achieved by using the Sieve plugin \"sieve_imapsieve\" and parser scripts.
Rspamd also auto-learns mail when a high or low score is detected (see https://rspamd.com/doc/configuration/statistic.html#autolearning). We configured the plugin to keep a sane ratio between spam and ham learns.
The bayes statistics are written to Redis as keys BAYES_HAM
and BAYES_SPAM
.
Besides bayes, a local fuzzy storage is used to learn recurring patterns in text or images that indicate ham or spam.
You can also use Rspamd's web UI to learn ham and / or spam or to adjust certain settings of Rspamd.
"},{"location":"manual-guides/Rspamd/u_e-rspamd/#learn-spam-or-ham-from-existing-directory","title":"Learn Spam or Ham from existing directory","text":"You can use a one-liner to learn mail in plain-text (uncompressed) format:
# Ham\nfor file in /my/folder/cur/*; do docker exec -i $(docker compose ps -q rspamd-mailcow) rspamc learn_ham < $file; done\n# Spam\nfor file in /my/folder/.Junk/cur/*; do docker exec -i $(docker compose ps -q rspamd-mailcow) rspamc learn_spam < $file; done\n
Consider attaching a local folder as new volume to rspamd-mailcow
in docker-compose.yml
and learn the given files inside the container. This can be used as a workaround to parse compressed data with zcat. Example:
for file in /data/old_mail/.Junk/cur/*; do zcat $file | rspamc learn_spam; done\n
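To verify that messages were actually learned, you can query Rspamd's statistics afterwards (the exact output depends on your Rspamd version):
docker compose exec rspamd-mailcow rspamc stat\n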
"},{"location":"manual-guides/Rspamd/u_e-rspamd/#reset-learned-data-bayes-neural","title":"Reset learned data (Bayes, Neural)","text":"You need to delete keys in Redis to reset learned data, so create a copy of your Redis database now:
Backup database
# It is better to stop Redis before you copy the file.\ncp /var/lib/docker/volumes/mailcowdockerized_redis-vol-1/_data/dump.rdb /root/\n
Reset Bayes data
docker compose exec redis-mailcow sh -c 'redis-cli --scan --pattern BAYES_* | xargs redis-cli del'\ndocker compose exec redis-mailcow sh -c 'redis-cli --scan --pattern RS* | xargs redis-cli del'\n
Reset Neural data
docker compose exec redis-mailcow sh -c 'redis-cli --scan --pattern rn_* | xargs redis-cli del'\n
Reset Fuzzy data
# We need to enter the redis-cli first:\ndocker compose exec redis-mailcow redis-cli\n# In redis-cli:\n127.0.0.1:6379> EVAL \"for i, name in ipairs(redis.call('KEYS', ARGV[1])) do redis.call('DEL', name); end\" 0 fuzzy*\n
Info
If redis-cli complains about...
(error) ERR wrong number of arguments for 'del' command\n
...the key pattern was not found and thus no data is available to delete - it is fine.
"},{"location":"manual-guides/Rspamd/u_e-rspamd/#cli-tools","title":"CLI tools","text":"docker compose exec rspamd-mailcow rspamc --help\ndocker compose exec rspamd-mailcow rspamadm --help\n
"},{"location":"manual-guides/Rspamd/u_e-rspamd/#disable-greylisting","title":"Disable Greylisting","text":"Only messages with a higher score will be considered to be greylisted (soft rejected). It is bad practice to disable greylisting.
You can disable greylisting server-wide by editing:
{mailcow-dir}/data/conf/rspamd/local.d/greylist.conf
Add the line:
enabled = false;\n
Save the file and restart \"rspamd-mailcow\": docker compose restart rspamd-mailcow
Each user is able to change their spam rating individually. To define a new server-wide limit, edit data/conf/rspamd/local.d/actions.conf
:
reject = 15;\nadd_header = 8;\ngreylist = 7;\n
Save the file and restart \"rspamd-mailcow\": docker compose restart rspamd-mailcow
Existing settings of users will not be overwritten!
To reset custom defined thresholds, run:
source mailcow.conf\ndocker compose exec mysql-mailcow mysql -umailcow -p$DBPASS mailcow -e \"delete from filterconf where option = 'highspamlevel' or option = 'lowspamlevel';\"\n# or:\n# docker compose exec mysql-mailcow mysql -umailcow -p$DBPASS mailcow -e \"delete from filterconf where option = 'highspamlevel' or option = 'lowspamlevel' and object = 'only-this-mailbox@example.org';\"\n
"},{"location":"manual-guides/Rspamd/u_e-rspamd/#custom-reject-messages","title":"Custom reject messages","text":"The default spam reject message can be changed by adding a new file data/conf/rspamd/override.d/worker-proxy.custom.inc
with the following content:
reject_message = \"My custom reject message\";\n
Save the file and restart Rspamd: docker compose restart rspamd-mailcow
.
While the above works for rejected mails with a high spam score, prefilter reject actions will ignore this setting. For these maps, the multimap module in Rspamd needs to be adjusted:
Find the prefilter reject symbol whose message you want to change. To do so, run: grep -R \"SYMBOL_YOU_WANT_TO_ADJUST\" /opt/mailcow-dockerized/data/conf/rspamd/
Add your custom message as a new line:
GLOBAL_RCPT_BL {\n type = \"rcpt\";\n map = \"${LOCAL_CONFDIR}/custom/global_rcpt_blacklist.map\";\n regexp = true;\n prefilter = true;\n action = \"reject\";\n message = \"Sending mail to this recipient is prohibited by postmaster@your.domain\";\n}\n
docker compose restart rspamd-mailcow
.If you want to silently drop a message, create or edit the file data/conf/rspamd/override.d/worker-proxy.custom.inc
and add the following content:
discard_on_reject = true;\n
Restart Rspamd:
docker compose restart rspamd-mailcow\n
"},{"location":"manual-guides/Rspamd/u_e-rspamd/#wipe-all-ratelimit-keys","title":"Wipe all ratelimit keys","text":"If you don't want to use the UI and instead wipe all keys in the Redis database, you can use redis-cli for that task:
docker compose exec redis-mailcow sh\n# Unlink (available in Redis >=4) will delete keys in the background\nredis-cli --scan --pattern RL* | xargs redis-cli unlink\n
Restart Rspamd:
docker compose restart rspamd-mailcow\n
"},{"location":"manual-guides/Rspamd/u_e-rspamd/#trigger-a-resend-of-quarantine-notifications","title":"Trigger a resend of quarantine notifications","text":"Should be used for debugging only!
docker compose exec dovecot-mailcow bash\nmysql -umailcow -p$DBPASS mailcow -e \"update quarantine set notified = 0;\"\nredis-cli -h redis DEL Q_LAST_NOTIFIED\nquarantine_notify.py\n
"},{"location":"manual-guides/Rspamd/u_e-rspamd/#increase-history-retention","title":"Increase history retention","text":"By default Rspamd keeps 1000 elements in the history.
The history is stored compressed.
It is recommended not to use a disproportionately high value here; try something around 5000 or 10000 and see how your server handles it:
Edit data/conf/rspamd/local.d/history_redis.conf
:
nrows = 1000; # change this value\n
Restart Rspamd afterwards: docker compose restart rspamd-mailcow
SOGo is used for accessing your mails via a webbrowser, adding and sharing your contacts or calendars. For a more in-depth documentation on SOGo please visit its own documentation.
"},{"location":"manual-guides/SOGo/u_e-sogo/#apply-custom-sogo-theme","title":"Apply custom SOGo theme","text":"mailcow builds after 28 January 2021 can change SOGo's theme by editing data/conf/sogo/custom-theme.js
. Please check the AngularJS Material intro and documentation as well as the material style guideline to learn how this works.
You can use the provided custom-theme.js
as an example starting point by removing the comments. After you modified data/conf/sogo/custom-theme.js
and made changes to your new SOGo theme you need to
data/conf/sogo/sogo.conf
and append/set SOGoUIxDebugEnabled = YES;
docker compose restart memcached-mailcow sogo-mailcow
.allow pasting
and press entercopy([].slice.call(document.styleSheets)\n .map(e => e.ownerNode)\n .filter(e => e.hasAttribute('md-theme-style'))\n .map(e => e.textContent)\n .join('\\n')\n)\n
data/conf/sogo/custom-theme.css
data/conf/sogo/sogo.conf
and set SOGoUIxDebugEnabled = NO;
docker-compose.override.yml
with: version: '2.1'\n\nservices:\n sogo-mailcow:\n volumes:\n - ./data/conf/sogo/custom-theme.css:/usr/lib/GNUstep/SOGo/WebServerResources/css/theme-default.css:z\n
docker compose up -d
docker compose restart memcached-mailcow
data/conf/sogo/custom-theme.js
by executing git fetch ; git checkout origin/master data/conf/sogo/custom-theme.js data/conf/sogo/custom-theme.js
data/conf/sogo/custom-theme.js
: // Apply new palettes to the default theme, remap some of the hues\n $mdThemingProvider.theme('default')\n .primaryPalette('green-cow', {\n 'default': '400', // background color of top toolbars\n 'hue-1': '400',\n 'hue-2': '600', // background color of sidebar toolbar\n 'hue-3': 'A700'\n })\n .accentPalette('green', {\n 'default': '600', // background color of fab buttons and login screen\n 'hue-1': '300', // background color of center list toolbar\n 'hue-2': '300', // highlight color for selected mail and current day calendar\n 'hue-3': 'A700'\n })\n .backgroundPalette('frost-grey');\n
and replace it with: $mdThemingProvider.theme('default');\n
docker-compose.override.yml
volume mount in sogo-mailcow
: - ./data/conf/sogo/custom-theme.css:/usr/lib/GNUstep/SOGo/WebServerResources/css/theme-default.css:z\n
docker compose up -d
docker compose restart memcached-mailcow
mailcow builds after 31 January 2021 can change SOGo's favicon by replacing data/conf/sogo/custom-favicon.ico
for SOGo and data/web/favicon.png
for mailcow UI. Note: You can use .png
favicons for SOGo by renaming them to custom-favicon.ico
. For both SOGo and mailcow UI favicons you need to use one of the standard dimensions: 16x16, 32x32, 64x64, 128x128 and 256x256. After you have replaced said file you need to restart the SOGo and Memcached containers by executing docker compose restart memcached-mailcow sogo-mailcow
.
mailcow builds after 21 December 2018 can change SOGo's logo by replacing or creating (if missing) data/conf/sogo/sogo-full.svg
. After you replaced said file you need to restart SOGo and Memcached containers by executing docker compose restart memcached-mailcow sogo-mailcow
.
Domains are usually isolated from each other.
You can change that by modifying data/conf/sogo/sogo.conf
:
Search...
// SOGoDomainsVisibility = (\n // (domain1.tld, domain5.tld),\n // (domain3.tld, domain2.tld)\n // );\n
...and replace it by - for example: SOGoDomainsVisibility = (\n (example.org, example.com, example.net)\n );\n
Restart SOGo: docker compose restart sogo-mailcow
Edit data/conf/sogo/sogo.conf
and change SOGoPasswordChangeEnabled
to NO
. Please do not add a new parameter.
Run docker compose restart memcached-mailcow sogo-mailcow
to activate the changes.
Run docker compose exec -u sogo sogo-mailcow sogo-tool user-preferences set defaults user@example.com SOGoTOTPEnabled '{\"SOGoTOTPEnabled\":0}'
from within the mailcow directory.
If you want or have to use an external DNS service, you can either set a forwarder in Unbound or copy an override file to define external DNS servers:
Warning
Please do not use a public resolver like the ones shown in the examples. Many - if not all - blacklist lookups will fail with public resolvers, because blacklist servers limit how many requests can be made from one IP and public resolvers usually reach these limits. Important: Only DNSSEC-validating DNS services will work.
"},{"location":"manual-guides/Unbound/u_e-unbound-fwd/#method-a-unbound","title":"Method A, Unbound","text":"Edit data/conf/unbound/unbound.conf
and append the following parameters:
forward-zone:\n name: \".\"\n forward-addr: 8.8.8.8 # DO NOT USE PUBLIC DNS SERVERS - JUST AN EXAMPLE\n forward-addr: 8.8.4.4 # DO NOT USE PUBLIC DNS SERVERS - JUST AN EXAMPLE\n
Restart Unbound:
docker compose restart unbound-mailcow\n
"},{"location":"manual-guides/Unbound/u_e-unbound-fwd/#method-b-override-file","title":"Method B, Override file","text":"cd /opt/mailcow-dockerized\ncp helper-scripts/docker-compose.override.yml.d/EXTERNAL_DNS/docker-compose.override.yml .\n
Edit docker-compose.override.yml
and adjust the IP.
Run docker compose down ; docker compose up -d
.
Watchdog uses default values for all thresholds defined in docker-compose.yml
.
The default values will work for most setups. Example:
- NGINX_THRESHOLD=${NGINX_THRESHOLD:-5}\n- UNBOUND_THRESHOLD=${UNBOUND_THRESHOLD:-5}\n- REDIS_THRESHOLD=${REDIS_THRESHOLD:-5}\n- MYSQL_THRESHOLD=${MYSQL_THRESHOLD:-5}\n- MYSQL_REPLICATION_THRESHOLD=${MYSQL_REPLICATION_THRESHOLD:-1}\n- SOGO_THRESHOLD=${SOGO_THRESHOLD:-3}\n- POSTFIX_THRESHOLD=${POSTFIX_THRESHOLD:-8}\n- CLAMD_THRESHOLD=${CLAMD_THRESHOLD:-15}\n- DOVECOT_THRESHOLD=${DOVECOT_THRESHOLD:-12}\n- DOVECOT_REPL_THRESHOLD=${DOVECOT_REPL_THRESHOLD:-20}\n- PHPFPM_THRESHOLD=${PHPFPM_THRESHOLD:-5}\n- RATELIMIT_THRESHOLD=${RATELIMIT_THRESHOLD:-1}\n- FAIL2BAN_THRESHOLD=${FAIL2BAN_THRESHOLD:-1}\n- ACME_THRESHOLD=${ACME_THRESHOLD:-1}\n- RSPAMD_THRESHOLD=${RSPAMD_THRESHOLD:-5}\n- OLEFY_THRESHOLD=${OLEFY_THRESHOLD:-5}\n- MAILQ_THRESHOLD=${MAILQ_THRESHOLD:-20}\n- MAILQ_CRIT=${MAILQ_CRIT:-30}\n
To adjust them just add necessary threshold variables (e.g. MAILQ_THRESHOLD=10
) to mailcow.conf
and run docker compose up -d
.
Notifies administrators if watchdog can not establish a connection to Nginx on port 8081 and it will restart the container automatically when issues were found and the threshold has been reached.
"},{"location":"manual-guides/Watchdog/u_e-watchdog-thresholds/#unbound_threshold","title":"UNBOUND_THRESHOLD","text":"Notifies administrators if Unbound can not resolve/valide external domains/DNSSEC and it will restart the container automatically when issues were found and the threshold has been reached.
"},{"location":"manual-guides/Watchdog/u_e-watchdog-thresholds/#redis_threshold","title":"REDIS_THRESHOLD","text":"Notifies administrators if watchdog can not establish a connection to Redis on port 6379 and it will restart the container automatically when issues were found and the threshold has been reached.
"},{"location":"manual-guides/Watchdog/u_e-watchdog-thresholds/#mysql_threshold","title":"MYSQL_THRESHOLD","text":"Notifies administrators if watchdog can not establish a connection to MySQL or can not query a table and it will restart the container automatically when issues were found and the threshold has been reached.
"},{"location":"manual-guides/Watchdog/u_e-watchdog-thresholds/#mysql_replication_threshold","title":"MYSQL_REPLICATION_THRESHOLD","text":"Notifies administrators if the MySQL replication fails.
"},{"location":"manual-guides/Watchdog/u_e-watchdog-thresholds/#sogo_threshold","title":"SOGO_THRESHOLD","text":"Notifies administrators if watchdog can not establish a connection to SOGo on port 20000 and it will restart the container automatically when issues were found and the threshold has been reached.
"},{"location":"manual-guides/Watchdog/u_e-watchdog-thresholds/#postfix_threshold","title":"POSTFIX_THRESHOLD","text":"Notifies administrators if watchdog can not sent a test mail via port 589 and it will restart the container automatically when issues were found and the threshold has been reached.
"},{"location":"manual-guides/Watchdog/u_e-watchdog-thresholds/#clamd_threshold","title":"CLAMD_THRESHOLD","text":"Notifies administrators if watchdog can not establish a connection to Clamd and it will restart the container automatically when issues were found and the threshold has been reached.
"},{"location":"manual-guides/Watchdog/u_e-watchdog-thresholds/#dovecot_threshold","title":"DOVECOT_THRESHOLD","text":"Notifies administrators if watchdog fails with various tests with Dovecot container and it will restart the container automatically when issues were found and the threshold has been reached.
"},{"location":"manual-guides/Watchdog/u_e-watchdog-thresholds/#dovecot_repl_threshold","title":"DOVECOT_REPL_THRESHOLD","text":"Notifies administrators if the Dovecot replication fails.
"},{"location":"manual-guides/Watchdog/u_e-watchdog-thresholds/#phpfpm_threshold","title":"PHPFPM_THRESHOLD","text":"Notifies administrators if watchdog can not establish a connection to PHP-FPM on port 9001/9002 and it will restart the container automatically when issues were found and the threshold has been reached.
"},{"location":"manual-guides/Watchdog/u_e-watchdog-thresholds/#ratelimit_threshold","title":"RATELIMIT_THRESHOLD","text":"Notifies administrators if a ratelimit got hit.
"},{"location":"manual-guides/Watchdog/u_e-watchdog-thresholds/#fail2ban_threshold","title":"FAIL2BAN_THRESHOLD","text":"Notifies administrators if a fail2ban banned an IP.
"},{"location":"manual-guides/Watchdog/u_e-watchdog-thresholds/#acme_threshold","title":"ACME_THRESHOLD","text":"Notifies administrators if something is wrong with the acme-mailcow container. You may check its logs.
"},{"location":"manual-guides/Watchdog/u_e-watchdog-thresholds/#rspamd_threshold","title":"RSPAMD_THRESHOLD","text":"Notifies administrators if watchdog fails with various tests with Rspamd container and it will restart the container automatically when issues were found and the threshold has been reached.
"},{"location":"manual-guides/Watchdog/u_e-watchdog-thresholds/#olefy_threshold","title":"OLEFY_THRESHOLD","text":"Notifies administrators if watchdog can not establish a connection to olefy on port 10005 and it will restart the container automatically when issues were found and the threshold has been reached.
"},{"location":"manual-guides/Watchdog/u_e-watchdog-thresholds/#mailq_crit-and-mailq_threshold","title":"MAILQ_CRIT and MAILQ_THRESHOLD","text":"Notifies administrators if number of emails in the postfix queue is greater then MAILQ_CRIT
for period of MAILQ_THRESHOLD * (60\u00b130)
seconds.
To add or edit an entry to your domain-wide filter table, log in to your mailcow UI as (domain) administrator and go to: Configuration > Email Setup > Domains > Edit Domain > Spam Filter
.
Info
Be aware that a user can override this setting by setting their own blacklist and whitelist!
There is also a global filter table in Configuration > Configuration & Details > Global filter maps
to configure a server wide filter for multiple regex maps (todo: screenshots).
Several configuration parameters of the mailcow UI can be changed by creating a file data/web/inc/vars.local.inc.php
which overrides default settings found in data/web/inc/vars.inc.php
.
The local configuration file is persistent over updates of mailcow. Try not to change values inside data/web/inc/vars.inc.php
, but use them as template for the local override.
mailcow UI configuration parameters can be used to...
To change SOGos default language, you will need to edit data/conf/sogo/sogo.conf
and replace \"English\" by your preferred language.\u00a0\u21a9
For custom overrides of specific elements via CSS, use data/web/css/build/0081-custom-mailcow.css
.
The file is excluded from tracking and persists over updates.
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-fido/","title":"WebAuthn / FIDO2","text":""},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-fido/#how-is-uv-handled-in-mailcow","title":"How is UV handled in mailcow?","text":"The UV flag (as in \"user verification\") enforces WebAuthn to verify the user before it allows access to the key (think of a PIN). We don't enforce UV to allow logins via iOS and NFC (YubiKey).
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-fido/#login-and-key-processing","title":"Login and key processing","text":"mailcow uses client-side key processing. We ask the authenticator (i.e. YubiKey) to save the registration in its memory.
A user does not need to enter a username. The available credentials - if any - will be shown to the user when selecting the \"key login\" via mailcow UI login.
When calling the login process, the authenticator is not given any credential IDs. This will force it to lookup credentials in its own memory.
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-fido/#who-can-use-webauthn-to-login-to-mailcow","title":"Who can use WebAuthn to login to mailcow?","text":"As of today, only administrators and domain administrators are able to setup WebAuthn/FIDO2.
You want to use WebAuthn/Fido as 2FA? Check it out here: Two-Factor Authentication
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-netfilter/","title":"Netfilter","text":""},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-netfilter/#change-netfilter-ban-settings","title":"Change Netfilter Ban Settings","text":"To change the Netfilter settings in general please navigate to: Configuration -> Configuration & Details -> Configuration -> Fail2ban parameters
.
You should now see a familar interface:
Here you can set several options regarding the bans itself. For example the max. Ban time or the max. attempts before a ban is executed.
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-netfilter/#change-netfilter-regex","title":"Change Netfilter Regex","text":"Danger
The following area requires at least basic regex knowledge. If you are not sure what you are doing there, we can only advise you not to attempt a reconfiguration.
In addition to the ban settings, you can also define what exactly should be used from the mailcow container logs to ban a possible attacker.
To do this, you must first expand the regex field, which will look something like this:
There you can now create various new filter rules.
Info
As updates progress, it is possible that new Netfilter regex rules will be added or removed. If this is the case, it is recommended to reset the Netfilter regex rules by clicking on Reset to default
.
Info
Pushover makes it easy to get real-time notifications on your Android, iPhone, iPad, and Desktop
You can use Pushover to get a push notification on every mail you receive for each mailbox where you enabled this feature.
1. As admin open your mailbox' settings and scroll down to the Pushover settings
2. Register yourself on Pushover
3. Put your 'User Key' in the 'User/Group Key' field in your mailbox settings
4. Create an Applications to get the API Token/Key which you also need to put in your mailbox settings
5. Optional you can edit the notification title/text and define certain sender email addresses where a push notification is triggered
6. Save everything and then you can verify your credentials
If everything is done you can test sending a mail and you will receive a push message on your phone
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-spamalias/","title":"Temporary email aliases","text":"These temporary email aliases are mostly used for places where we need to provide an email address but don't want future correspondence with. They are also called spam alias.
To create, delete or extend a temporary email aliases you need to login to mailcow's UI as a mailbox user and navigate to the tab Temporary email aliases:
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-spamfilter/","title":"Spamfilter","text":"A mailbox user may adjust the spam filter and black- / whitelist settings for his mailbox individually by navigating to the Spam filter tab in the users mailcow UI.
Info
For global adjustments on your spam filter please check our section on Rspamd. For a domain wide black- and whitelist please check our guide on Black / Whitelist
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-sub_addressing/","title":"Sub-addressing","text":"Mailbox users can tag their mail address like in me+facebook@example.org
. They can control the tag handling in the users mailcow UI panel under Mailbox > Settings
.
sub-addressing
(RFC 5233) or plus addressing
also known as tagging (do not mix with Tags)
1. Move this message to a sub folder \"facebook\" (will be created lower case if not existing)
2. Prepend the tag to the subject: \"[facebook] Subject\"
Please note: Uppercase tags are converted to lowercase except for the first letter. If you want to keep the tag as it is, please apply the following diff and restart mailcow:
diff --git a/data/conf/dovecot/global_sieve_after b/data/conf/dovecot/global_sieve_after\nindex e047136e..933c4137 100644\n--- a/data/conf/dovecot/global_sieve_after\n+++ b/data/conf/dovecot/global_sieve_after\n@@ -15,7 +15,7 @@ if allof (\n envelope :detail :matches \"to\" \"*\",\n header :contains \"X-Moo-Tag\" \"YES\"\n ) {\n- set :lower :upperfirst \"tag\" \"${1}\";\n+ set \"tag\" \"${1}\";\n if mailboxexists \"INBOX/${1}\" {\n fileinto \"INBOX/${1}\";\n } else {\n
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-tags/","title":"Tags (for Domains and Mailboxes)","text":"Info
You need the mailcow Version 2022-05 at least for this feature. If you don\u00b4t have the Version installed please consider a update. For more informations about a mailcow update please take a look at the Update section here in the docs.
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-tags/#what-are-tags-designed-for","title":"What are Tags designed for?","text":"With the Tags you can easily sort your Domains and Mailboxes by the tags instead of their name.
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-tags/#where-are-the-tags-located","title":"Where are the Tags located?","text":"The Tags are located in the Domain/Mailbox section of the mailcow UI. To view them simply click on the small plus symbol on the left of your Domain/Mailbox (following picture is showing the domain ribbon menu):
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-tags/#how-can-i-addremove-a-tag","title":"How can i add/remove a Tag?","text":"You can simply add/remove a Tag during the creation of a new Domain/Mailbox. You also can add/remove them if you edit your desired Domain/Mailbox.
It looks similar to this (following picture showing the domain edit section):
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-tags/#how-can-i-search-for-a-tag","title":"How can i search for a tag?","text":"Simply type the Tag Name in the search bar in the Domain/Mailbox Section and wait for it to complete.
You can even specify if you want to search for tags only.
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-tfa/","title":"Two-Factor Authentication","text":"So far three methods for Two-Factor Authentication are implemented: WebAuthn (replacing U2F since February 2022), Yubi OTP, and TOTP
As administrator you are able to temporary disable a domain administrators TFA login until they successfully logged in.
The key used to login will be displayed in green, while other keys remain grey.
Information on how to remove 2FA can be found here.
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-tfa/#yubi-otp","title":"Yubi OTP","text":"The Yubi API ID and Key will be checked against the Yubico Cloud API. When setting up TFA you will be asked for your personal API account for this key. The API ID, API key and the first 12 characters (your YubiKeys ID in modhex) are stored in the MySQL table as secret.
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-tfa/#example-setup","title":"Example setup","text":"First of all, the YubiKey must be configured for use as an OTP Generator. To do this, download the YubiKey Manager
from the Yubico website: here
In the following you configure the YubiKey for OTP. Via the menu item Applications
-> OTP
and a click on the Configure
button. In the following menu select Credential Type
-> Yubico OTP
and click on Next
.
Set a checkmark in the Use serial
checkbox, generate a Private ID
and a Secret key
via the buttons. So that the YubiKey can be validated later, the checkmark in the Upload
checkbox must also be set and then click on Finish
.
Now a new browser window will open in which you have to enter an OTP of your YubiKey at the bottom of the form (click on the field and then tap on your YubiKey). Confirm the captcha and upload the information to the Yubico server by clicking 'Upload'. The processing of the data will take a moment.
After the generation was successful, you will be shown a Client ID
and a Secret key
, make a note of this information in a safe place.
Now you can select Yubico OTP authentication
from the dropdown menu in the mailcow UI on the start page under Access
-> Two-factor authentication
. In the dialog that opened now you can enter a name for this YubiKey and insert the Client ID
you noted before as well as the Secret key
into the fields provided. Finally, enter your current account password and, after selecting the Touch Yubikey
field, touch your YubiKey button.
Congratulations! You can now log in to the mailcow UI using your YubiKey!
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-tfa/#webauthn-u2f-replacement","title":"WebAuthn (U2F, replacement)","text":"Warning
Since February 2022 Google Chrome has discarded support for U2F and standardized the use of WebAuthn. The WebAuthn (U2F removal) is part of mailcow since 21th January 2022, so if you want to use the Key past February 2022 please consider a update with the update.sh
To use WebAuthn, the browser must support this standard.
The following desktop browsers support this authentication type:
The following mobile browsers support this authentication type:
Sources: caniuse.com, blog.mozilla.org
WebAuthn works without an internet connection.
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-tfa/#what-will-happen-to-my-registered-fido-security-key-after-the-update-from-u2f-to-webauthn","title":"What will happen to my registered Fido Security Key after the Update from U2F to WebAuthn?","text":"Warning
With this new U2F replacement (WebAuthn) you have to re-register your Fido Security Key, thankfully WebAuthn is backwards compatible and supports the U2F protocol.
Ideally, the next time you log in (with the key), you should get a text box saying that your Fido Security Key has been removed due to the update to WebAuthn and deleted as a 2-factor authenticator.
But don't worry! You can simply re-register your existing key and use it as usual, you probably won't even notice a difference, except that your browser won't show the U2F deactivation message anymore.
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-tfa/#disable-unofficial-supported-fido-security-keys","title":"Disable unofficial supported Fido Security Keys","text":"With WebAuthn there is the possibility to use only official Fido Security Keys (from the big brands like: Yubico, Apple, Nitro, Google, Huawei, Microsoft, etc.).
This is primarily for security purposes, as it allows administrators to ensure that only official hardware can be used in their environment.
To enable this feature, change the value WEBAUTHN_ONLY_TRUSTED_VENDORS
in mailcow.conf from n
to y
and restart the affected containers with docker compose up -d
.
The mailcow will now use the Vendor Certificates located in your mailcow directory under data/web/inc/lib/WebAuthn/rootCertificates
.
If you want to limit the official Vendor devices to Apple only you only need the Apple Vendor Certificate inside the data/web/inc/lib/WebAuthn/rootCertificates
. After you deleted all other certs you now only can activate WebAuthn 2FA with Apple devices.
That\u00b4s for every vendor the same, so choose what you like (if you want to).
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-tfa/#use-own-certificates-for-webauthn","title":"Use own certificates for WebAuthn","text":"If you have a valid certificate from the vendor of your key you can also add it to your mailcow!
Just copy the certificate into the data/web/inc/lib/WebAuthn/rootCertificates
folder and restart your mailcow.
Now you should be able to register this device as well, even though the verification for the vendor certificates is enabled, since you just added the certificate manually.
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-tfa/#is-it-dangerous-to-keep-the-vendor-check-disabled","title":"Is it dangerous to keep the Vendor Check disabled?","text":"No, it isn\u00b4t! These vendor certificates are only used to verify original hardware, not to secure the registration process.
As you can read in these articles, the deactivation is not software security related: - https://developers.yubico.com/U2F/Attestation_and_Metadata/ - https://medium.com/webauthnworks/webauthn-fido2-demystifying-attestation-and-mds-efc3b3cb3651 - https://medium.com/webauthnworks/sorting-fido-ctap-webauthn-terminology-7d32067c0b01
In the end, however, it is of course your decision to leave this check disabled or enabled.
"},{"location":"manual-guides/mailcow-UI/u_e-mailcow_ui-tfa/#totp","title":"TOTP","text":"The best known TFA method mostly used with a smartphone.
To setup the TOTP method login to the Admin UI and select Time-based OTP (TOTP)
from the list.
Now a modal will open in which you have to type in a name for your 2FA \"device\" (example: John Deer\u00b4s Smartphone) and the password of the affected Admin account (you are currently logged in with).
You have two seperate methods to register TOTP to your account: 1. Scan the QR-Code with your Authenticator App on a Smartphone or Tablet. 2. Use the TOTP Code (under the QR Code) in your TOTP Program or App (if you can\u00b4t scan a QR Code).
After you have registered the QR or TOTP code in the TOTP app/program of your choice you only need to enter the now generated TOTP token (in the app/program) as confirmation in the mailcow UI to finally activate the TOTP 2FA, otherwise it will not be activated even though the TOTP token is already generated in your app/program.
"},{"location":"models/model-acl/","title":"ACL","text":"Editing a domain administrator or a mailbox user allows to set restrictions to that account.
Important: For overlapping modules like sync jobs, which both domain administrators and mailbox users can be granted access to, the domain administrators permissions are inherited, when logging in as mailbox user.
Some examples:
1.
2.
3.
The most current mailcow fully supports the following hashing methods. The default hashing method is written in bold:
The methods above can be used in mailcow.conf
as MAILCOW_PASS_SCHEME
value.
The following methods are supported read only. If you plan to use SOGo (as per default), you need a SOGo compatible hashing method. Please see the note at the bottom of this page how to update the view if necessary. With SOGo disabled, all hashing methods below will be able to be read by mailcow and Dovecot.
That means mailcow is able to verify users with a hash like {MD5}1a1dc91c907325c69271ddf0c944bc72
from the database.
The value of MAILCOW_PASS_SCHEME
will always be used to encrypt new passwords.
I changed the password hashes in the \"mailbox\" SQL table and cannot login.
A \"view\" needs to be updated. You can trigger this by restarting sogo-mailcow: docker compose restart sogo-mailcow
When a mailbox is created, a user is allowed to send mail from and receive mail for his own mailbox address.
Mailbox me@example.org is created. example.org is a primary domain.\nNote: a mailbox cannot be created in an alias domain.\n\nme@example.org is only known as me@example.org.\nme@example.org is allowed to send as me@example.org.\n
We can add an alias domain for example.org:
Alias domain alias.com is added and assigned to primary domain example.org.\nme@example.org is now known as me@example.org and me@alias.com.\nme@example.org is now allowed to send as me@example.org and me@alias.com.\n
We can add aliases for a mailbox to receive mail for and to send from this new address.
It is important to know, that you are not able to receive mail for my-alias@my-alias-domain.tld
. You would need to create this particular alias.
me@example.org is assigned the alias alias@example.org\nme@example.org is now known as me@example.org, me@alias.com, alias@example.org\n\nme@example.org is NOT known as alias@alias.com.\n
Please note that this does not apply to catch-all aliases:
Alias domain alias.com is added and assigned to primary domain example.org\nme@example.org is assigned the catch-all alias @example.org\nme@example.org is still just known as me@example.org, which is the only available send-as option\n\nAny email send to alias.com will match the catch-all alias for example.org\n
Administrators and domain administrators can edit mailboxes to allow specific users to send as other mailbox users (\"delegate\" them).
You can choose between mailbox users or completely disable the sender check for domains.
"},{"location":"models/model-sender_rcv/#sogo-mail-from-addresses","title":"SOGo \"mail from\" addresses","text":"Mailbox users can, obviously, select their own mailbox address, as well as all alias addresses and aliases that exist through alias domains.
If you want to select another existing mailbox user as your \"mail from\" address, this user has to delegate you access through SOGo (see SOGo documentation). Moreover a mailcow (domain) administrator needs to grant you access as described above.
"},{"location":"post_installation/firststeps-disable_ipv6/","title":"Disable IPv6","text":"This is ONLY recommended if you do not have an IPv6 enabled network on your host!
If you really need to, you can disable the usage of IPv6 in the compose file. Additionally, you can also disable the startup of container \"ipv6nat-mailcow\", as it's not needed if you won't use IPv6.
Instead of editing docker-compose.yml directly, it is preferable to create an override file for it and implement your changes to the service there. Unfortunately, this right now only seems to work for services, not for network settings.
To disable IPv6 on the mailcow network, open docker-compose.yml with your favourite text editor and search for the network section (it's near the bottom of the file).
1. Modify docker-compose.yml
Change enable_ipv6: true
to enable_ipv6: false
:
networks:\n mailcow-network:\n [...]\n enable_ipv6: true # <<< set to false\n [...]\n
2. Disable ipv6nat-mailcow
To disable the ipv6nat-mailcow container as well, go to your mailcow directory and create a new file called \"docker-compose.override.yml\":
NOTE: If you already have an override file, of course don't recreate it, but merge the lines below into your existing one accordingly!
# cd /opt/mailcow-dockerized\n# touch docker-compose.override.yml\n
Open the file in your favourite text editor and fill in the following:
version: '2.1'\nservices:\n\n ipv6nat-mailcow:\n image: bash:latest\n restart: \"no\"\n entrypoint: [\"echo\", \"ipv6nat disabled in compose.override.yml\"]\n
For these changes to be effective, you need to fully stop and then restart the stack, so containers and networks are recreated:
docker compose down\ndocker compose up -d\n
3. Disable IPv6 in unbound-mailcow
Edit data/conf/unbound/unbound.conf
and set do-ip6
to \"no\":
server:\n [...]\n do-ip6: no\n [...]\n
Restart Unbound:
docker compose restart unbound-mailcow\n
4. Disable IPv6 in postfix-mailcow
Create data/conf/postfix/extra.cf
and set smtp_address_preference
to ipv4
:
smtp_address_preference = ipv4\ninet_protocols = ipv4\n
Restart Postfix:
docker compose restart postfix-mailcow\n
5. If your Docker daemon has IPv6 completely disabled:
Fix the following NGINX, Dovecot and php-fpm config files:
sed -i '/::/d' data/conf/nginx/listen_*\nsed -i '/::/d' data/conf/nginx/templates/listen*\nsed -i '/::/d' data/conf/nginx/dynmaps.conf\nsed -i 's/,\\[::\\]//g' data/conf/dovecot/dovecot.conf\nsed -i 's/\\[::\\]://g' data/conf/phpfpm/php-fpm.d/pools.conf\n
"},{"location":"post_installation/firststeps-dmarc_reporting/","title":"DMARC Reporting","text":"DMARC Reporting done via Rspamd DMARC Module.
Rspamd documentation can be found here: https://rspamd.com/doc/modules/dmarc.html
Important:
Change example.com
, mail.example.com
and Example
to reflect your setup
DMARC reporting requires additional attention, especially over the first few days
All receiving domains hosted on mailcow send from one reporting domain. It is recommended to use the parent domain of your MAILCOW_HOSTNAME
:
If MAILCOW_HOSTNAME is mail.example.com, change the following config to domain = \"example.com\"; and set email accordingly, e.g. email = \"noreply-dmarc@example.com\";
It is optional but recommended to create an email user noreply-dmarc
in mailcow to handle bounces.
Create the file data/conf/rspamd/local.d/dmarc.conf
and set the following content:
reporting {\n enabled = true;\n email = 'noreply-dmarc@example.com';\n domain = 'example.com';\n org_name = 'Example';\n helo = 'rspamd';\n smtp = 'postfix';\n smtp_port = 25;\n from_name = 'Example DMARC Report';\n msgid_from = 'rspamd.mail.example.com';\n max_entries = 2k;\n keys_expire = 2d;\n}\n
Create or modify docker-compose.override.yml
in the mailcow-dockerized base directory:
version: '2.1'\n\nservices:\n rspamd-mailcow:\n environment:\n - MASTER=${MASTER:-y}\n labels:\n ofelia.enabled: \"true\"\n ofelia.job-exec.rspamd_dmarc_reporting_yesterday.schedule: \"@every 24h\"\n ofelia.job-exec.rspamd_dmarc_reporting_yesterday.command: \"/bin/bash -c \\\"[[ $${MASTER} == y ]] && /usr/bin/rspamadm dmarc_report $(date --date yesterday '+%Y%m%d') > /var/lib/rspamd/dmarc_reports_last_log 2>&1 || exit 0\\\"\"\n ofelia-mailcow:\n depends_on:\n - rspamd-mailcow\n
Run docker compose up -d
To receive a hidden copy of reports generated by Rspamd you can set a bcc_addrs
list in the reporting
config section of data/conf/rspamd/local.d/dmarc.conf
:
reporting {\n enabled = true;\n email = 'noreply-dmarc@example.com';\n bcc_addrs = [\"noreply-dmarc@example.com\",\"parsedmarc@example.com\"];\n[...]\n
Rspamd will load changes in real time, so you won't need to restart the container at this point.
This can be useful if you want to keep a copy of the outgoing reports for your own records or feed them into an analysis tool (as in the parsedmarc address shown above).
Check when the report schedule last ran:
docker compose exec rspamd-mailcow date -r /var/lib/rspamd/dmarc_reports_last_log\n
See the latest report output:
docker compose exec rspamd-mailcow cat /var/lib/rspamd/dmarc_reports_last_log\n
Manually trigger a DMARC report:
docker compose exec rspamd-mailcow rspamadm dmarc_report\n
Validate that Rspamd has recorded data in Redis. Change 20220428 to the date you are interested in.
docker compose exec redis-mailcow redis-cli SMEMBERS \"dmarc_idx;20220428\"\n
Take one of the lines from the output you are interested in and query it, for example: docker compose exec redis-mailcow redis-cli ZRANGE \"dmarc_rpt;microsoft.com;mailto:d@rua.agari.com;20220428\" 0 49\n
"},{"location":"post_installation/firststeps-dmarc_reporting/#change-dmarc-reporting-frequency","title":"Change DMARC reporting frequency","text":"In the example above reports are sent once every 24 hours and send reports for yesterday. This will be okay for most setups.
If you have a large mail volume and want to run DMARC reporting more than once a day, you need to create a second schedule and run it with dmarc_report $(date '+%Y%m%d') to process the current day. Make sure that the first run on each day still processes the last report of the previous day: schedule one job with $(date --date yesterday '+%Y%m%d') at 0 5 0 * * * (00:05 AM) and a second job with $(date '+%Y%m%d') at the desired interval.
The Ofelia scheduler uses a Go implementation of cron; the supported syntax is described in the cron documentation.
To change the schedule:
docker-compose.override.yml
:version: '2.1'\n\nservices:\n rspamd-mailcow:\n environment:\n - MASTER=${MASTER:-y}\n labels:\n ofelia.enabled: \"true\"\n ofelia.job-exec.rspamd_dmarc_reporting_yesterday.schedule: \"0 5 0 * * *\"\n ofelia.job-exec.rspamd_dmarc_reporting_yesterday.command: \"/bin/bash -c \\\"[[ $${MASTER} == y ]] && /usr/bin/rspamadm dmarc_report $(date --date yesterday '+%Y%m%d') > /var/lib/rspamd/dmarc_reports_last_log 2>&1 || exit 0\\\"\"\n ofelia.job-exec.rspamd_dmarc_reporting_today.schedule: \"@every 12h\"\n ofelia.job-exec.rspamd_dmarc_reporting_today.command: \"/bin/bash -c \\\"[[ $${MASTER} == y ]] && /usr/bin/rspamadm dmarc_report $(date '+%Y%m%d') > /var/lib/rspamd/dmarc_reports_last_log 2>&1 || exit 0\\\"\"\n ofelia-mailcow:\n depends_on:\n - rspamd-mailcow\n
Run docker compose up -d
Run docker compose restart ofelia-mailcow
To disable reporting:
Set enabled
to false
in data/conf/rspamd/local.d/dmarc.conf
Revert changes done in docker-compose.override.yml
to rspamd-mailcow
and ofelia-mailcow
Run docker compose up -d
Warning
Changing the binding does not affect source NAT. See SNAT for required steps.
"},{"location":"post_installation/firststeps-ip_bindings/#ipv4-binding","title":"IPv4 binding","text":"To adjust one or multiple IPv4 bindings, open mailcow.conf
and edit one, multiple or all variables as per your needs:
# For technical reasons, http bindings are a bit different from other service bindings.\n# You will find the following variables, separated by a bind address and its port:\n# Example: HTTP_BIND=1.2.3.4\n\nHTTP_PORT=80\nHTTP_BIND=\nHTTPS_PORT=443\nHTTPS_BIND=\n\n# Other services are bound by using the following format:\n# SMTP_PORT=1.2.3.4:25 will bind SMTP to the IP 1.2.3.4 on port 25\n# Important! Specifying an IPv4 address will skip all IPv6 bindings since Docker 20.x.\n# doveadm, SQL as well as Solr are bound to local ports only, please do not change that, unless you know what you are doing.\n\nSMTP_PORT=25\nSMTPS_PORT=465\nSUBMISSION_PORT=587\nIMAP_PORT=143\nIMAPS_PORT=993\nPOP_PORT=110\nPOPS_PORT=995\nSIEVE_PORT=4190\nDOVEADM_PORT=127.0.0.1:19991\nSQL_PORT=127.0.0.1:13306\nSOLR_PORT=127.0.0.1:18983\n
To apply your changes, run docker compose down
followed by docker compose up -d
.
Changing IPv6 bindings is different from IPv4. Again, this has a technical background.
A docker-compose.override.yml
file will be used instead of editing the docker-compose.yml
file directly. This is to maintain updatability, as the docker-compose.yml
file gets updated regularly and your changes will most likely be overwritten.
Create or edit the file docker-compose.override.yml with the following content. Its content will be merged with the productive docker-compose.yml file.
An example IPv6 2001:db8:dead:beef::123 is given. The first suffix :PORT1
defines the external port, while the second suffix :PORT2
routes to the corresponding port inside the container and must not be changed.
version: '2.1'\nservices:\n\n dovecot-mailcow:\n ports:\n - '[2001:db8:dead:beef::123]:143:143'\n - '[2001:db8:dead:beef::123]:993:993'\n - '[2001:db8:dead:beef::123]:110:110'\n - '[2001:db8:dead:beef::123]:995:995'\n - '[2001:db8:dead:beef::123]:4190:4190'\n\n postfix-mailcow:\n ports:\n - '[2001:db8:dead:beef::123]:25:25'\n - '[2001:db8:dead:beef::123]:465:465'\n - '[2001:db8:dead:beef::123]:587:587'\n\n nginx-mailcow:\n ports:\n - '[2001:db8:dead:beef::123]:80:80'\n - '[2001:db8:dead:beef::123]:443:443'\n
To apply your changes, run docker compose down
followed by docker compose up -d
.
The easiest option would be to disable the listener on port 25/tcp.
Postfix users disable the listener by commenting the following line (starting with smtp
or 25
) in /etc/postfix/master.cf
:
#smtp inet n - - - - smtpd\n
Furthermore, to relay over a dockerized mailcow, you may want to add 172.22.1.1
as relayhost and remove the Docker interface from \"inet_interfaces\":
postconf -e 'relayhost = 172.22.1.1'\npostconf -e \"mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128\"\npostconf -e \"inet_interfaces = loopback-only\"\npostconf -e \"relay_transport = relay\"\npostconf -e \"default_transport = smtp\"\n
Now it is important to not have the same FQDN in myhostname
as you use for your dockerized mailcow. Check your local (non-Docker) Postfix' main.cf for myhostname
and set it to something different, for example local.my.fqdn.tld
.
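For example (local.my.fqdn.tld is only a placeholder, use any name that differs from your mailcow FQDN):
postconf -e 'myhostname = local.my.fqdn.tld'\n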
\"172.22.1.1\" is the mailcow created network gateway in Docker. Relaying over this interface is necessary (instead of - for example - relaying directly over ${MAILCOW_HOSTNAME}) to relay over a known internal network.
Restart Postfix after applying your changes.
"},{"location":"post_installation/firststeps-logging/","title":"Logging","text":"Logging in mailcow: dockerized consists of multiple stages, but is, after all, much more flexible and easier to integrate into a logging daemon than before.
In Docker the containerized application (PID 1) writes its output to stdout. For real one-application containers this works just fine. Run docker compose logs --help
to learn more.
Some containers log or stream to multiple destinations.
No container will keep persistent logs in it. Containers are transient items!
In the end, every line of logs will reach the Docker daemon - unfiltered.
The default logging driver is \"json-file\".
"},{"location":"post_installation/firststeps-logging/#filtered-logs","title":"Filtered logs","text":"Some logs are filtered and written to Redis keys but also streamed to a Redis channel.
The Redis channel is used to stream logs with failed authentication attempts to be read by netfilter-mailcow.
The Redis keys are persistent and will keep 10000 lines of logs for the web UI.
This mechanism makes it possible to use whatever Docker logging driver you want to, without losing the ability to read logs from the UI or ban suspicious clients with netfilter-mailcow.
Redis keys will only hold logs from applications and filter out system messages (think of cron etc.).
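If you want to peek at these filtered logs directly, a hedged sketch (the key name used here is an assumption and may differ between mailcow versions):
# List log-related Redis keys (names are an assumption and may differ)\ndocker compose exec redis-mailcow redis-cli --scan --pattern '*LOG*'\n# Show the 10 most recent entries of one of the returned keys\ndocker compose exec redis-mailcow redis-cli LRANGE DOVECOT_MAILLOG 0 9\n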
"},{"location":"post_installation/firststeps-logging/#logging-drivers","title":"Logging drivers","text":""},{"location":"post_installation/firststeps-logging/#via-docker-composeoverrideyml","title":"Via docker-compose.override.yml","text":"Here is the good news: Since Docker has some great logging drivers, you can integrate mailcow: dockerized into your existing logging environment with ease.
Create a docker-compose.override.yml
and add, for example, this block to use the \"gelf\" logging plugin for postfix-mailcow
:
version: '2.1'\nservices:\n postfix-mailcow: # or any other\n logging:\n driver: \"gelf\"\n options:\n gelf-address: \"udp://graylog:12201\"\n
Another example for Syslog:
version: '2.1'\nservices:\n\n postfix-mailcow: # or any other\n logging:\n driver: \"syslog\"\n options:\n syslog-address: \"udp://127.0.0.1:514\"\n syslog-facility: \"local3\"\n\n dovecot-mailcow: # or any other\n logging:\n driver: \"syslog\"\n options:\n syslog-address: \"udp://127.0.0.1:514\"\n syslog-facility: \"local3\"\n\n rspamd-mailcow: # or any other\n logging:\n driver: \"syslog\"\n options:\n syslog-address: \"udp://127.0.0.1:514\"\n syslog-facility: \"local3\"\n
"},{"location":"post_installation/firststeps-logging/#for-rsyslog-only","title":"For Rsyslog only:","text":"Make sure the following lines aren't commented out in /etc/rsyslog.conf
:
# provides UDP syslog reception\nmodule(load=\"imudp\")\ninput(type=\"imudp\" port=\"514\")\n
To move local3
input to /var/log/mailcow.log
and stop processing, create a file /etc/rsyslog.d/docker.conf
:
local3.* /var/log/mailcow.log\n& stop\n
Restart rsyslog afterwards.
"},{"location":"post_installation/firststeps-logging/#via-daemonjson-globally","title":"via daemon.json (globally)","text":"If you want to change the logging driver globally, edit Dockers daemon configuration file /etc/docker/daemon.json
and restart the Docker service:
{\n...\n \"log-driver\": \"gelf\",\n \"log-opts\": {\n \"gelf-address\": \"udp://graylog:12201\"\n }\n...\n}\n
For Syslog:
{\n...\n \"log-driver\": \"syslog\",\n \"log-opts\": {\n \"syslog-address\": \"udp://1.2.3.4:514\"\n }\n...\n}\n
Restart the Docker daemon and run docker compose down && docker compose up -d
to recreate the containers with the new logging driver.
As those logs can get quite big, it is a good idea to use logrotate to compress and delete them after a certain time period.
Create /etc/logrotate.d/mailcow
with the following content:
/var/log/mailcow.log {\n rotate 7\n daily\n compress\n delaycompress\n missingok\n notifempty\n create 660 root root\n}\n
With this configuration, logrotate will run daily and keep a maximum of 7 archives.
To rotate the logfile weekly or monthly replace daily
with weekly
or monthly
respectively.
To keep more archives, set rotate to the desired number.
Afterwards, logrotate can be restarted.
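To verify the new configuration without actually rotating anything, you can run logrotate in debug mode, which only prints what it would do:
logrotate -d /etc/logrotate.d/mailcow\n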
"},{"location":"post_installation/firststeps-rp/","title":"Reverse Proxy","text":"You don't need to change the Nginx site that comes with mailcow: dockerized. mailcow: dockerized trusts the default gateway IP 172.22.1.1 as proxy.
1. Make sure you change HTTP_BIND and HTTPS_BIND in mailcow.conf
to a local address and set the ports accordingly, for example:
HTTP_BIND=127.0.0.1\nHTTP_PORT=8080\nHTTPS_BIND=127.0.0.1\nHTTPS_PORT=8443\n
This will also change the bindings inside the Nginx container! This is important, if you decide to use a proxy within Docker.
IMPORTANT: Do not use port 8081, 9081 or 65510!
Recreate affected containers by running docker compose up -d
.
Important information, please read it carefully!
Info
If you plan to use a reverse proxy and want to use another server name that is not MAILCOW_HOSTNAME, you need to read Adding additional server names for mailcow UI at the bottom of this page.
Warning
Make sure you run generate_config.sh
before you enable any site configuration examples below. The script generate_config.sh
copies snake-oil certificates to the correct location, so the services will not fail to start due to missing files.
Warning
If you enable TLS SNI (ENABLE_TLS_SNI
in mailcow.conf), the certificate paths in your reverse proxy must match the correct paths in data/assets/ssl/{hostname}. The certificates will be split into data/assets/ssl/{hostname1,hostname2,etc}
and therefore will not work when you copy the examples from below pointing to data/assets/ssl/cert.pem
etc.
Info
Using the site configs below will forward ACME requests to mailcow and let it handle certificates itself. The downside of using mailcow as an ACME client behind a reverse proxy is that you will need to reload your webserver after acme-mailcow has changed/renewed/created the certificate. You can either reload your webserver daily or write a script to watch the file for changes. On many servers logrotate will reload the webserver daily anyway.
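If you go with the daily reload, a minimal sketch is a root crontab entry like the following (assuming a systemd-managed nginx on the host; adjust the time and service name to your setup):
15 4 * * * systemctl reload nginx\n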
If you want to use a local certbot installation, you will need to change the SSL certificate parameters accordingly. Make sure you run a post-hook script when you decide to use external ACME clients. You will find an example at the bottom of this page.
2. Configure your local webserver as reverse proxy:
"},{"location":"post_installation/firststeps-rp/#apache-24","title":"Apache 2.4","text":"Required modules:
a2enmod rewrite proxy proxy_http headers ssl\n
Let's Encrypt will follow our rewrite, so certificate requests in mailcow will work fine.
Take care of highlighted lines.
<VirtualHost *:80>\n ServerName CHANGE_TO_MAILCOW_HOSTNAME\n ServerAlias autodiscover.*\n ServerAlias autoconfig.*\n RewriteEngine on\n\n RewriteCond %{HTTPS} off\n RewriteRule ^/?(.*) https://%{HTTP_HOST}/$1 [R=301,L]\n\n ProxyPass / http://127.0.0.1:8080/\n ProxyPassReverse / http://127.0.0.1:8080/\n ProxyPreserveHost On\n ProxyAddHeaders On\n RequestHeader set X-Forwarded-Proto \"http\"\n</VirtualHost>\n<VirtualHost *:443>\n ServerName CHANGE_TO_MAILCOW_HOSTNAME\n ServerAlias autodiscover.*\n ServerAlias autoconfig.*\n\n # You should proxy to a plain HTTP session to offload SSL processing\n ProxyPass /Microsoft-Server-ActiveSync http://127.0.0.1:8080/Microsoft-Server-ActiveSync connectiontimeout=4000\n ProxyPassReverse /Microsoft-Server-ActiveSync http://127.0.0.1:8080/Microsoft-Server-ActiveSync\n ProxyPass / http://127.0.0.1:8080/\n ProxyPassReverse / http://127.0.0.1:8080/\n ProxyPreserveHost On\n ProxyAddHeaders On\n RequestHeader set X-Forwarded-Proto \"https\"\n\n SSLCertificateFile MAILCOW_PATH/data/assets/ssl/cert.pem\n SSLCertificateKeyFile MAILCOW_PATH/data/assets/ssl/key.pem\n\n # If you plan to proxy to a HTTPS host:\n #SSLProxyEngine On\n\n # If you plan to proxy to an untrusted HTTPS host:\n #SSLProxyVerify none\n #SSLProxyCheckPeerCN off\n #SSLProxyCheckPeerName off\n #SSLProxyCheckPeerExpire off\n</VirtualHost>\n
"},{"location":"post_installation/firststeps-rp/#nginx","title":"Nginx","text":"Let's Encrypt will follow our rewrite, certificate requests will work fine.
Take care of highlighted lines.
server {\n listen 80 default_server;\n listen [::]:80 default_server;\n server_name CHANGE_TO_MAILCOW_HOSTNAME autodiscover.* autoconfig.*;\n return 301 https://$host$request_uri;\n}\nserver {\n listen 443 ssl http2;\n listen [::]:443 ssl http2;\n server_name CHANGE_TO_MAILCOW_HOSTNAME autodiscover.* autoconfig.*;\n\n ssl_certificate MAILCOW_PATH/data/assets/ssl/cert.pem;\n ssl_certificate_key MAILCOW_PATH/data/assets/ssl/key.pem;\n ssl_session_timeout 1d;\n ssl_session_cache shared:SSL:50m;\n ssl_session_tickets off;\n\n # See https://ssl-config.mozilla.org/#server=nginx for the latest ssl settings recommendations\n # An example config is given below\n ssl_protocols TLSv1.2;\n ssl_ciphers HIGH:!aNULL:!MD5:!SHA1:!kRSA;\n ssl_prefer_server_ciphers off;\n\n location /Microsoft-Server-ActiveSync {\n proxy_pass http://127.0.0.1:8080/Microsoft-Server-ActiveSync;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_connect_timeout 75;\n proxy_send_timeout 3650;\n proxy_read_timeout 3650;\n proxy_buffers 64 512k; # Needed since the 2022-04 Update for SOGo\n client_body_buffer_size 512k;\n client_max_body_size 0;\n }\n\n location / {\n proxy_pass http://127.0.0.1:8080/;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n client_max_body_size 0;\n # The following Proxy Buffers has to be set if you want to use SOGo after the 2022-04 (April 2022) Update\n # Otherwise a Login will fail like this: https://github.com/mailcow/mailcow-dockerized/issues/4537\n proxy_buffer_size 128k;\n proxy_buffers 64 512k;\n proxy_busy_buffers_size 512k;\n }\n}\n
"},{"location":"post_installation/firststeps-rp/#haproxy-community-supported","title":"HAProxy (community supported)","text":"Warning
This is an unsupported community contribution. Feel free to provide fixes.
Important/Fixme: This example only forwards HTTPS traffic and does not use mailcow's built-in ACME client.
frontend https-in\n bind :::443 v4v6 ssl crt mailcow.pem\n default_backend mailcow\n\nbackend mailcow\n option forwardfor\n http-request set-header X-Forwarded-Proto https if { ssl_fc }\n http-request set-header X-Forwarded-Proto http if !{ ssl_fc }\n server mailcow 127.0.0.1:8080 check\n
"},{"location":"post_installation/firststeps-rp/#traefik-v2-community-supported","title":"Traefik v2 (community supported)","text":"Warning
This is an unsupported community contribution. Feel free to provide fixes.
Important: This config only covers reverse proxying of the web panel (nginx-mailcow) using Traefik v2. If you also want to reverse proxy the mail services such as Dovecot or Postfix, you will need to adapt the config below for each container and create an EntryPoint in your traefik.toml or traefik.yml (depending on which config you use) for each port.
For this section we'll assume you have your Traefik 2 [certificatesresolvers] properly configured in your Traefik configuration file and that you are using ACME. The following example uses Let's Encrypt, but feel free to change it to your own cert resolver. You can find a basic Traefik 2 toml config file with all of the above implemented here: traefik.toml - use it if you need one, or as a hint on how to adapt your config.
So, first of all, we are going to disable the acme-mailcow container since we'll use the certs that traefik will provide us. For this we'll have to set SKIP_LETS_ENCRYPT=y
on our mailcow.conf
, and run docker compose up -d
to apply the changes.
Then we'll create a docker-compose.override.yml
file in order to override the main docker-compose.yml
found in your mailcow root folder.
version: '2.1'\n\nservices:\n nginx-mailcow:\n networks:\n # Add Traefik's network\n web:\n labels:\n - traefik.enable=true\n # Creates a router called \"moo\" for the container, and sets up a rule to link the container to certain rule,\n # in this case, a Host rule with our MAILCOW_HOSTNAME var.\n - traefik.http.routers.moo.rule=Host(`${MAILCOW_HOSTNAME}`)\n # Enables tls over the router we created before.\n - traefik.http.routers.moo.tls=true\n # Specifies which kind of cert resolver we'll use, in this case le (Lets Encrypt).\n - traefik.http.routers.moo.tls.certresolver=le\n # Creates a service called \"moo\" for the container, and specifies which internal port of the container\n # should traefik route the incoming data to.\n - traefik.http.services.moo.loadbalancer.server.port=${HTTP_PORT}\n # Specifies which entrypoint (external port) should traefik listen to, for this container.\n # websecure being port 443, check the traefik.toml file liked above.\n - traefik.http.routers.moo.entrypoints=websecure\n # Make sure traefik uses the web network, not the mailcowdockerized_mailcow-network\n - traefik.docker.network=web\n\n certdumper:\n image: humenius/traefik-certs-dumper\n command: --restart-containers ${COMPOSE_PROJECT_NAME}-postfix-mailcow-1,${COMPOSE_PROJECT_NAME}-nginx-mailcow-1,${COMPOSE_PROJECT_NAME}-dovecot-mailcow-1\n network_mode: none\n volumes:\n # Mount the volume which contains Traefik's `acme.json' file\n # Configure the external name in the volume definition\n - acme:/traefik:ro\n # Mount mailcow's SSL folder\n - ./data/assets/ssl/:/output:rw\n # Mount docker socket to restart containers\n - /var/run/docker.sock:/var/run/docker.sock:ro\n restart: always\n environment:\n # only change this, if you're using another domain for mailcow's web frontend compared to the standard config\n - DOMAIN=${MAILCOW_HOSTNAME}\n\nnetworks:\n web:\n external: true\n # Name of the external network\n name: traefik_web\n\nvolumes:\n acme:\n external: true\n # Name of the external docker volume which contains Traefik's `acme.json' file\n name: traefik_acme\n
Start the new containers with docker compose up -d
.
Now, there's only one thing left to do: set up the certs so that the mail services can use them as well. Since Traefik 2 saves ALL the certificates of all our domains in a single acme v2 format file, we need a way to dump the certs. Luckily, there is a tiny container which grabs the acme.json file through a volume and a variable DOMAIN=example.org, and outputs the cert.pem and key.pem files. For this we simply run the traefik-certs-dumper container, binding the /traefik volume to the folder where our acme.json is saved, binding the /output volume to our mailcow data/assets/ssl/ folder, and setting the DOMAIN=example.org variable to the domain we want the certs dumped for.
This container will watch the acme.json file for any changes and regenerate the cert.pem and key.pem files directly in data/assets/ssl/, which is the path bound to the container's /output path.
You can use the command line to run it, or use the docker compose shown here.
After we have the certs dumped, we'll have to reload the configs of our Postfix and Dovecot containers and check the certs; you can see how here.
And that should be it 😊. You can check whether the Traefik router works fine through Traefik's dashboard / Traefik logs / by accessing the configured domain via HTTPS, and/or check HTTPS, SMTP and IMAP using the commands shown on the page linked before.
"},{"location":"post_installation/firststeps-rp/#caddy-v2-supported-by-the-community","title":"Caddy v2 (supported by the community)","text":"Warning
This is an unsupported community contribution. Feel free to provide fixes.
The configuration of Caddy with mailcow is very simple.
In the Caddyfile you just have to create a section for the mail server.
For example
MAILCOW_HOSTNAME autodiscover.MAILCOW_HOSTNAME autoconfig.MAILCOW_HOSTNAME {\n log {\n output file /var/log/caddy/MAILCOW_HOSTNAME.log {\n roll_disabled\n roll_size 512M\n roll_uncompressed\n roll_local_time\n roll_keep 3\n roll_keep_for 48h\n }\n }\n\n reverse_proxy 127.0.0.1:HTTP_PORT\n}\n
This allows Caddy to automatically create the certificates and accept traffic for these mentioned domains and forward them to mailcow.
Important: The ACME client of mailcow must be disabled, otherwise mailcow will fail.
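This is done the same way as described for Traefik above and in the Let's Encrypt section: set SKIP_LETS_ENCRYPT=y in mailcow.conf and recreate the affected containers:
# in mailcow.conf\nSKIP_LETS_ENCRYPT=y\n\n# then recreate the containers\ndocker compose up -d\n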
Since Caddy takes care of the certificates itself, we can use the following script to include the Caddy generated certificates into mailcow:
#!/bin/bash\nMD5SUM_CURRENT_CERT=($(md5sum /opt/mailcow-dockerized/data/assets/ssl/cert.pem))\nMD5SUM_NEW_CERT=($(md5sum /var/lib/caddy/.local/share/caddy/certificates/acme-v02.api.letsencrypt.org-directory/your.domain.tld/your.domain.tld.crt))\n\nif [ $MD5SUM_CURRENT_CERT != $MD5SUM_NEW_CERT ]; then\n cp /var/lib/caddy/.local/share/caddy/certificates/acme-v02.api.letsencrypt.org-directory/your.domain.tld/your.domain.tld.crt /opt/mailcow-dockerized/data/assets/ssl/cert.pem\n cp /var/lib/caddy/.local/share/caddy/certificates/acme-v02.api.letsencrypt.org-directory/your.domain.tld/your.domain.tld.key /opt/mailcow-dockerized/data/assets/ssl/key.pem\n postfix_c=$(docker ps -qaf name=postfix-mailcow)\n dovecot_c=$(docker ps -qaf name=dovecot-mailcow)\n nginx_c=$(docker ps -qaf name=nginx-mailcow)\n docker restart ${postfix_c} ${dovecot_c} ${nginx_c}\n\nelse\n echo \"Certs not copied from Caddy (Not needed)\"\nfi\n
Attention
Caddy's certificate path varies depending on the installation type. In this installation example, Caddy was installed using the Caddy repo (more information here). To find out the Caddy certificate path on your system, just run find / -name \"certificates\"
.
This script could be called as a cronjob every hour:
0 * * * * /bin/bash /path/to/script/deploy-certs.sh >/dev/null 2>&1\n
"},{"location":"post_installation/firststeps-rp/#optional-post-hook-script-for-non-mailcow-acme-clients","title":"Optional: Post-hook script for non-mailcow ACME clients","text":"Using a local certbot (or any other ACME client) requires to restart some containers, you can do this with a post-hook script. Make sure you change the paths accordingly:
#!/bin/bash\ncp /etc/letsencrypt/live/my.domain.tld/fullchain.pem /opt/mailcow-dockerized/data/assets/ssl/cert.pem\ncp /etc/letsencrypt/live/my.domain.tld/privkey.pem /opt/mailcow-dockerized/data/assets/ssl/key.pem\npostfix_c=$(docker ps -qaf name=postfix-mailcow)\ndovecot_c=$(docker ps -qaf name=dovecot-mailcow)\nnginx_c=$(docker ps -qaf name=nginx-mailcow)\ndocker restart ${postfix_c} ${dovecot_c} ${nginx_c}\n
"},{"location":"post_installation/firststeps-rp/#adding-additional-server-names-for-mailcow-ui","title":"Adding additional server names for mailcow UI","text":"If you plan to use a server name that is not MAILCOW_HOSTNAME
in your reverse proxy, make sure to populate that name in mailcow.conf via ADDITIONAL_SERVER_NAMES
first. Names must be separated by commas and must not contain spaces. If you skip this step, mailcow may respond to your reverse proxy with an incorrect site.
ADDITIONAL_SERVER_NAMES=webmail.domain.tld,other.example.tld\n
Run docker compose up -d
to apply.
Rspamd is an easy to use spam filtering tool presently installed with mailcow.
Additional configuration options and documentation can be found here : https://rspamd.com/webui/
"},{"location":"post_installation/firststeps-snat/","title":"SNAT","text":"SNAT is used to change the source address of the packets sent by mailcow. It can be used to change the outgoing IP address on systems with multiple IP addresses.
Open mailcow.conf
, set either or both of the following parameters:
# Use this IPv4 for outgoing connections (SNAT)\nSNAT_TO_SOURCE=1.2.3.4\n\n# Use this IPv6 for outgoing connections (SNAT)\nSNAT6_TO_SOURCE=dead:beef\n
Run docker compose up -d
.
The values are read by netfilter-mailcow. netfilter-mailcow will make sure the post-routing rules are at position 1 in the netfilter table. It automatically deletes and re-creates them if they are found at any position other than 1.
Check the output of docker compose logs --tail=200 netfilter-mailcow
to ensure the SNAT settings have been applied.
The \"acme-mailcow\" container will try to obtain a LE certificate for ${MAILCOW_HOSTNAME}
, autodiscover.ADDED_MAIL_DOMAIN
and autoconfig.ADDED_MAIL_DOMAIN
.
Warning
mailcow must be available on port 80 for the acme-client to work. Our reverse proxy example configurations do cover that. You can also use any external ACME client (certbot for example) to obtain certificates, but you will need to make sure, that they are copied to the correct location and a post-hook reloads affected containers. See more in the Reverse Proxy documentation.
By default, which means 0 domains are added to mailcow, it will try to obtain a certificate for ${MAILCOW_HOSTNAME}
.
For each domain you add, it will try to resolve autodiscover.ADDED_MAIL_DOMAIN
and autoconfig.ADDED_MAIL_DOMAIN
to its IPv6 address or - if IPv6 is not configured in your domain - IPv4 address. If it succeeds, a name will be added as SAN to the certificate request.
Only names that can be validated, will be added as SAN.
For every domain you remove, the certificate will be moved and a new certificate will be requested. It is not possible to keep domains in a certificate when we are not able to validate the challenge for them.
If you want to re-run the ACME client, use docker compose restart acme-mailcow
and monitor its logs with docker compose logs --tail=200 -f acme-mailcow
.
Edit \"mailcow.conf\" and add a parameter ADDITIONAL_SAN
like this:
Do not use quotes (\"
) and do not use spaces between the names!
ADDITIONAL_SAN=smtp.*,cert1.example.com,cert2.example.org,whatever.*\n
Each name will be validated against its IPv6 address or - if IPv6 is not configured in your domain - IPv4 address.
A wildcard name like smtp.*
will try to obtain a smtp.DOMAIN_NAME SAN for each domain added to mailcow.
Run docker compose up -d
to recreate affected containers automatically.
Info
Using names other than MAILCOW_HOSTNAME
to access the mailcow UI may need further configuration.
If you plan to use a server name that is not MAILCOW_HOSTNAME
to access the mailcow UI (for example by adding mail.*
to ADDITIONAL_SAN),
make sure to populate that name in mailcow.conf via ADDITIONAL_SERVER_NAMES
. Names must be separated by commas and must not contain spaces. If you skip this step, mailcow may respond with an incorrect site.
ADDITIONAL_SERVER_NAMES=webmail.domain.tld,other.example.tld\n
Run docker compose up -d
to apply.
To force a renewal, you need to create a file named force_renew
and restart the acme-mailcow
container:
cd /opt/mailcow-dockerized\ntouch data/assets/ssl/force_renew\ndocker compose restart acme-mailcow\n# Now check the logs for a renewal\ndocker compose logs --tail=200 -f acme-mailcow\n
The file will be deleted automatically.
"},{"location":"post_installation/firststeps-ssl/#validation-errors-and-how-to-skip-validation","title":"Validation errors and how to skip validation","text":"You can skip the IP verification by setting SKIP_IP_CHECK=y
in mailcow.conf (no quotes). Be warned that a misconfiguration will get you ratelimited by Let's Encrypt! This is primarily useful for multi-IP setups where the IP check would return the incorrect source IP address. Due to using dynamic IPs for acme-mailcow, source NAT is not consistent over restarts.
If you encounter problems with \"HTTP validation\", but your IP address confirmation succeeds, you are most likely using firewalld, ufw or any other firewall, that disallows connections from br-mailcow
to your external interface. Both firewalld and ufw disallow this by default. It is often not enough to just stop these firewall services. You'd need to stop mailcow (docker compose down
), stop the firewall service, flush the chains and restart Docker.
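A rough sketch of that sequence (service names and the exact chains to flush differ between distributions and firewalls; double-check before flushing rules on a production host):
docker compose down\nsystemctl stop firewalld   # or: systemctl stop ufw\niptables -F\niptables -t nat -F\nsystemctl restart docker\ndocker compose up -d\n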
You can also skip this validation method by setting SKIP_HTTP_VERIFICATION=y
in \"mailcow.conf\". Be warned that this is discouraged. In most cases, the HTTP verification is skipped to workaround unknown NAT reflection issues, which are not resolved by ignoring this specific network misconfiguration. If you encounter problems generating TLSA records in the DNS overview within mailcow, you are most likely having issues with NAT reflection you should fix.
If you changed a SKIP_* parameter, run docker compose up -d
to apply your changes.
Set SKIP_LETS_ENCRYPT=y
in \"mailcow.conf\" and recreate \"acme-mailcow\" by running docker compose up -d
.
Add ONLY_MAILCOW_HOSTNAME=y
to \"mailcow.conf\" and recreate \"acme-mailcow\" by running docker compose up -d
.
Let's Encrypt currently has a limit of 100 Domain Names per Certificate.
By default, \"acme-mailcow\" will create a single SAN certificate for all validated domains (see the first section and Additional domain names). This provides best compatibility but means the Let's Encrypt limit exceeds if you add too many domains to a single mailcow installation.
To solve this, you can configure ENABLE_SSL_SNI
to generate:
MAILCOW_HOSTNAME
and all fully qualified domain names in the ADDITIONAL_SAN
config
ADDITIONAL_SAN
configured in this format (subdomain.*).
ADDITIONAL_SAN=test.example.com
will be added as SAN to the main certificate. A separate certificate/key pair will not be generated for this format.Postfix, Dovecot and Nginx will then serve these certificates with SNI.
Set ENABLE_SSL_SNI=y
in \"mailcow.conf\" and recreate \"acme-mailcow\" by running docker compose up -d
.
Warning
Not all clients support SNI, see Dovecot documentation or Wikipedia. You should make sure these clients use the MAILCOW_HOSTNAME
for secure connections if you enable this feature.
Here is an example:
MAILCOW_HOSTNAME=server.email.tld
ADDITIONAL_SAN=webmail.email.tld,mail.*
The following certificates will be generated:
server.email.tld, webmail.email.tld
-> this is the default certificate, all clients can connect with these domains
mail.domain1.tld, autoconfig.domain1.tld, autodiscover.domain1.tld
-> individual certificate for domain1.tld, cannot be used by clients without SNI support
mail.domain2.tld, autoconfig.domain2.tld, autodiscover.domain2.tld
-> individual certificate for domain2.tld, cannot be used by clients without SNI supportMake sure you disable mailcows internal LE client (see above).
To use your own certificates, just save the combined certificate (containing the certificate and intermediate CA/CA if any) to data/assets/ssl/cert.pem
and the corresponding key to data/assets/ssl/key.pem
.
IMPORTANT: Do not use symbolic links! Make sure you copy the certificates and do not link them to data/assets/ssl
.
Restart affected services afterwards:
docker restart $(docker ps -qaf name=postfix-mailcow)\ndocker restart $(docker ps -qaf name=nginx-mailcow)\ndocker restart $(docker ps -qaf name=dovecot-mailcow)\n
See Post-hook script for non-mailcow ACME clients for a full example script.
"},{"location":"post_installation/firststeps-ssl/#test-against-staging-acme-directory","title":"Test against staging ACME directory","text":"Edit mailcow.conf
and add LE_STAGING=y
.
Run docker compose up -d
to activate your changes.
Edit mailcow.conf
and add the corresponding directory URL to the new variable DIRECTORY_URL
:
DIRECTORY_URL=https://acme-custom-v9000.api.letsencrypt.org/directory\n
You cannot use LE_STAGING
with DIRECTORY_URL
. If both are set, only LE_STAGING
is used.
Run docker compose up -d
to activate your changes.
Run docker compose logs acme-mailcow
to find out why a validation fails.
To check if nginx serves the correct certificate, simply use a browser of your choice and check the displayed certificate.
To check the certificate served by Postfix, Dovecot and Nginx we will use openssl
:
# Connect via SMTP (587)\necho \"Q\" | openssl s_client -starttls smtp -crlf -connect mx.mailcow.email:587\n# Connect via IMAP (143)\necho \"Q\" | openssl s_client -starttls imap -showcerts -connect mx.mailcow.email:143\n# Connect via HTTPS (443)\necho \"Q\" | openssl s_client -connect mx.mailcow.email:443\n
To validate the expiry dates as returned by openssl against MAILCOW_HOSTNAME, you are able to use our helper script:
cd /opt/mailcow-dockerized\nbash helper-scripts/expiry-dates.sh\n
"},{"location":"post_installation/firststeps-sync_jobs_migration/","title":"Sync job migration","text":"Sync jobs are used to copy or move existing emails from an external IMAP server or within mailcow's existing mailboxes.
Info
Depending on your mailbox's ACL you may not have the option to add a sync job. Please contact your domain administrator if so.
"},{"location":"post_installation/firststeps-sync_jobs_migration/#setup-a-sync-job","title":"Setup a Sync Job","text":"In the \"Configuration > Mail Setup\" or \"User Settings\" interface, create a new sync job.
If you are an administrator, select the username of the downstream mailcow mailbox in the \"Username\" dropdown.
Fill in the \"Host\" and \"Port\" fields with their respective correct values from the upstream IMAP server.
In the \"Username\" and \"Password\" fields, supply the correct access credentials from the upstream IMAP server.
Select the \"Encryption Method\". If the upstream IMAP server uses port 143, it is likely that the encryption method is TLS and SSL for port 993. Nevertheless, you can use PLAIN authentication, but it is stongly discouraged.
For all the other fields, you can leave them as is or modify them as desired.
Make sure to tick \"Active\" and click \"Add\".
Info
Once completed, log into the mailbox and check whether all emails were imported correctly. If all goes well, all your mails will end up in your new mailbox. Don't forget to delete or deactivate the sync job after it has served its purpose.
"},{"location":"prerequisite/prerequisite-dns/","title":"DNS setup","text":"Below you can find a list of recommended DNS records. While some are mandatory for a mail server (A, MX), others are recommended to build a good reputation score (TXT/SPF) or used for auto-configuration of mail clients (SRV).
"},{"location":"prerequisite/prerequisite-dns/#references","title":"References","text":"Make sure that the PTR record of your IP address matches the FQDN of your mailcow host: ${MAILCOW_HOSTNAME}
1. This record is usually set at the provider you leased the IP address (server) from.
This example shows you a set of records for one domain managed by mailcow. Each domain that is added to mailcow needs at least this set of records to function correctly.
# Name Type Value\nmail IN A 1.2.3.4\nautodiscover IN CNAME mail.example.org. (your ${MAILCOW_HOSTNAME})\nautoconfig IN CNAME mail.example.org. (your ${MAILCOW_HOSTNAME})\n@ IN MX 10 mail.example.org. (your ${MAILCOW_HOSTNAME})\n
Note: The mail
DNS record which binds the subdomain to the given ip address must only be set for the domain on which mailcow is running and that is used to access the web interface. For every other mailcow managed domain, the MX
record will route the traffic.
In the example DNS zone file snippet below, a simple SPF TXT record is used to only allow THIS server (the MX) to send mail for your domain. Every other server is disallowed but able to (\"~all
\"). Please refer to SPF Project for further reading.
# Name Type Value\n@ IN TXT \"v=spf1 mx a -all\"\n
It is highly recommended to create a DKIM TXT record in your mailcow UI and set the corresponding TXT record in your DNS records. Please refer to OpenDKIM for further reading.
# Name Type Value\ndkim._domainkey IN TXT \"v=DKIM1; k=rsa; t=s; s=email; p=...\"\n
The last step in protecting yourself and others is the implementation of a DMARC TXT record, for example by using the DMARC Assistant (check).
# Name Type Value\n_dmarc IN TXT \"v=DMARC1; p=reject; rua=mailto:mailauth-reports@example.org\"\n
"},{"location":"prerequisite/prerequisite-dns/#the-advanced-dns-configuration","title":"The advanced DNS configuration","text":"SRV records specify the server(s) for a specific protocol on your domain. If you want to explicitly announce a service as not provided, give \".\" as the target address (instead of \"mail.example.org.\"). Please refer to RFC 2782.
# Name Type Priority Weight Port Value\n_autodiscover._tcp IN SRV 0 1 443 mail.example.org. (your ${MAILCOW_HOSTNAME})\n_caldavs._tcp IN SRV 0 1 443 mail.example.org. (your ${MAILCOW_HOSTNAME})\n_caldavs._tcp IN TXT \"path=/SOGo/dav/\"\n_carddavs._tcp IN SRV 0 1 443 mail.example.org. (your ${MAILCOW_HOSTNAME})\n_carddavs._tcp IN TXT \"path=/SOGo/dav/\"\n_imap._tcp IN SRV 0 1 143 mail.example.org. (your ${MAILCOW_HOSTNAME})\n_imaps._tcp IN SRV 0 1 993 mail.example.org. (your ${MAILCOW_HOSTNAME})\n_pop3._tcp IN SRV 0 1 110 mail.example.org. (your ${MAILCOW_HOSTNAME})\n_pop3s._tcp IN SRV 0 1 995 mail.example.org. (your ${MAILCOW_HOSTNAME})\n_sieve._tcp IN SRV 0 1 4190 mail.example.org. (your ${MAILCOW_HOSTNAME})\n_smtps._tcp IN SRV 0 1 465 mail.example.org. (your ${MAILCOW_HOSTNAME})\n_submission._tcp IN SRV 0 1 587 mail.example.org. (your ${MAILCOW_HOSTNAME})\n
"},{"location":"prerequisite/prerequisite-dns/#testing","title":"Testing","text":"Here are some tools you can use to verify your DNS configuration:
If you are interested in statistics, you can additionally register with some of the many below DMARC statistic services - or self-host your own.
Tip
It is worth considering that if you request DMARC statistic reports to your mailcow server and your mailcow server is not configured correctly to receive these reports, you may not get accurate and complete results. Please consider using an alternative email domain for receiving DMARC reports.
It is worth mentioning, that the following suggestions are not a comprehensive list of all services and tools available, but only a small few of the many choices.
Tip
These services may provide you with a TXT record you need to insert into your DNS records as the provider specifies. Please ensure you read the provider's documentation from the service you choose as this process may vary.
"},{"location":"prerequisite/prerequisite-dns/#email-test-for-spf-dkim-and-dmarc","title":"Email test for SPF, DKIM and DMARC:","text":"To run a rudimentary email authentication check, send a mail to check-auth at verifier.port25.com
and wait for a reply. You will find a report similar to the following:
==========================================================\nSummary of Results\n==========================================================\nSPF check: pass\n\"iprev\" check: pass\nDKIM check: pass\nDKIM check: pass\nSpamAssassin check: ham\n\n==========================================================\nDetails:\n==========================================================\n....\n
The full report will contain more technical details.
"},{"location":"prerequisite/prerequisite-dns/#fully-qualified-domain-name-fqdn","title":"Fully Qualified Domain Name (FQDN)","text":"A Fully Qualified Domain Name (FQDN) is the complete (absolute) domain name for a specific computer or host, on the Internet. The FQDN consists of at least three parts divided by a dot: the hostname, the domain name, and the Top Level Domain (TLD for short). In the example of mx.mailcow.email
the hostname would be mx
, the domain name mailcow
and the TLD email
.\u00a0\u21a9
Before you run mailcow: dockerized, there are a few requirements that you should check:
Warning
Do not try to install mailcow on a Synology/QNAP device (any NAS), OpenVZ, LXC or other container platforms. KVM, ESX, Hyper-V and other full virtualization platforms are supported.
Info
Not supported
OpenVZ, Virtuozzo and LXC
Please make sure that your system has at least the following resources:
| Resource | mailcow: dockerized |
| --- | --- |
| CPU | 1 GHz |
| RAM | Minimum 6 GiB + 1 GiB swap (default config) |
| Disk | 20 GiB (without emails) |
| System Type | x86_64 |

ClamAV and Solr can be greedy with RAM. You may disable them in mailcow.conf
by setting SKIP_CLAMD=y
and SKIP_SOLR=y
.
Info
We are aware that a pure MTA can run on 128 MiB RAM. mailcow is a full-grown and ready-to-use groupware with many extras making life easier. mailcow comes with a webserver, webmailer, ActiveSync (MS), antivirus, antispam, indexing (Solr), document scanner (Oletools), SQL (MariaDB), Cache (Redis), MDA, MTA, various web services etc.
A single SOGo worker can acquire ~350 MiB RAM before it gets purged. The more ActiveSync connections you plan to use, the more RAM you will need. A default configuration spawns 20 workers.
"},{"location":"prerequisite/prerequisite-system/#ram-usage-examples","title":"RAM usage examples","text":"A company with 15 phones (EAS enabled) and about 50 concurrent IMAP connections should plan 16 GiB RAM.
6 GiB RAM + 1 GiB swap are fine for most private installations while 8 GiB RAM are recommended for ~5 to 10 users.
We can help to correctly plan your setup as part of our support.
"},{"location":"prerequisite/prerequisite-system/#supported-os","title":"Supported OS","text":"Basically, mailcow can be used on any distribution that is supported by Docker CE (see https://docs.docker.com/install/). However, in some cases there may be incompatibilities between the operating systems and the mailcow components.
The following table contains all operating systems officially supported and tested by us (as of November 2022):
OS Compatibility Alpine 3.16 and older \u26a0\ufe0f Centos 7 \u2705 Debian 10, 11 \u2705 Ubuntu 18.04, 20.04, 22.04 \u2705 Rocky Linux 9 \u2754Legend
\u2705 = Works out of the box using the instructions. \u26a0\ufe0f = Requires some manual adjustments otherwise usable. \u274c = In general NOT Compatible. \u2754 = Pending.
Note: All other operating systems (not mentioned) may also work, but have not been officially tested.
"},{"location":"prerequisite/prerequisite-system/#firewall-ports","title":"Firewall & Ports","text":"Please check if any of mailcow's standard ports are open and not in use by other applications:
ss -tlpn | grep -E -w '25|80|110|143|443|465|587|993|995|4190'\n# or:\nnetstat -tulpn | grep -E -w '25|80|110|143|443|465|587|993|995|4190'\n
Danger
There are several problems with running mailcow on a firewalld/ufw enabled system. You should disable it (if possible) and move your ruleset to the DOCKER-USER chain, which is not cleared by a Docker service restart, instead. See this (blog.donnex.net) or this (unrouted.io) guide for information about how to use iptables-persistent with the DOCKER-USER chain. As mailcow runs dockerized, INPUT rules have no effect on restricting access to mailcow. Use the FORWARD chain instead.
If this command returns any results please remove or stop the application running on that port. You may also adjust mailcows ports via the mailcow.conf
configuration file.
If you have a firewall in front of mailcow, please make sure that these ports are open for incoming connections:
Service Protocol Port Container Variable Postfix SMTP TCP 25 postfix-mailcow${SMTP_PORT}
Postfix SMTPS TCP 465 postfix-mailcow ${SMTPS_PORT}
Postfix Submission TCP 587 postfix-mailcow ${SUBMISSION_PORT}
Dovecot IMAP TCP 143 dovecot-mailcow ${IMAP_PORT}
Dovecot IMAPS TCP 993 dovecot-mailcow ${IMAPS_PORT}
Dovecot POP3 TCP 110 dovecot-mailcow ${POP_PORT}
Dovecot POP3S TCP 995 dovecot-mailcow ${POPS_PORT}
Dovecot ManageSieve TCP 4190 dovecot-mailcow ${SIEVE_PORT}
HTTP(S) TCP 80/443 nginx-mailcow ${HTTP_PORT}
/ ${HTTPS_PORT}
To bind a service to an IP address, you can prepend the IP like this: SMTP_PORT=1.2.3.4:25
Important: You cannot use IP:PORT bindings in HTTP_PORT and HTTPS_PORT. Please use HTTP_PORT=1234
and HTTP_BIND=1.2.3.4
instead.
Quoting https://github.com/chermsen via https://github.com/mailcow/mailcow-dockerized/issues/497#issuecomment-469847380 (THANK YOU!):
For all who are struggling with the Hetzner firewall:
Port 53 unimportant for the firewall configuration in this case. According to the documentation unbound uses the port range 1024-65535 for outgoing requests. Since the Hetzner Robot Firewall is a static firewall (each incoming packet is checked isolated) - the following rules must be applied:
For TCP
SRC-IP: ---\nDST IP: ---\nSRC Port: ---\nDST Port: 1024-65535\nProtocol: tcp\nTCP flags: ack\nAction: Accept\n
For UDP
SRC-IP: ---\nDST IP: ---\nSRC Port: ---\nDST Port: 1024-65535\nProtocol: udp\nAction: Accept\n
If you want to apply a more restrictive port range you have to change the config of unbound first (after installation):
{mailcow-dockerized}/data/conf/unbound/unbound.conf:
outgoing-port-avoid: 0-32767\n
Now the firewall rules can be adjusted as follows:
[...]\nDST Port: 32768-65535\n[...]\n
"},{"location":"prerequisite/prerequisite-system/#date-and-time","title":"Date and Time","text":"To ensure that you have the correct date and time setup on your system, please check the output of timedatectl status
:
$ timedatectl status\n Local time: Sat 2017-05-06 02:12:33 CEST\n Universal time: Sat 2017-05-06 00:12:33 UTC\n RTC time: Sat 2017-05-06 00:12:32\n Time zone: Europe/Berlin (CEST, +0200)\n NTP enabled: yes\nNTP synchronized: yes\n RTC in local TZ: no\n DST active: yes\n Last DST change: DST began at\n Sun 2017-03-26 01:59:59 CET\n Sun 2017-03-26 03:00:00 CEST\n Next DST change: DST ends (the clock jumps one hour backwards) at\n Sun 2017-10-29 02:59:59 CEST\n Sun 2017-10-29 02:00:00 CET\n
The lines NTP enabled: yes
and NTP synchronized: yes
indicate whether you have NTP enabled and if it's synchronized.
To enable NTP you need to run the command timedatectl set-ntp true
. You also need to edit your /etc/systemd/timesyncd.conf
:
# vim /etc/systemd/timesyncd.conf\n[Time]\nNTP=0.pool.ntp.org 1.pool.ntp.org 2.pool.ntp.org 3.pool.ntp.org\n
"},{"location":"prerequisite/prerequisite-system/#hetzner-cloud-and-probably-others","title":"Hetzner Cloud (and probably others)","text":"Check /etc/network/interfaces.d/50-cloud-init.cfg
and change the IPv6 interface from eth0:0 to eth0:
# Wrong:\nauto eth0:0\niface eth0:0 inet6 static\n# Right:\nauto eth0\niface eth0 inet6 static\n
Reboot or restart the interface. You may want to disable cloud-init network changes.
"},{"location":"prerequisite/prerequisite-system/#mtu","title":"MTU","text":"Especially relevant for OpenStack users: Check your MTU and set it accordingly in docker-compose.yml. See Troubleshooting in our Installation guide.
"},{"location":"third_party/borgmatic/third_party-borgmatic/","title":"Borgmatic Backup","text":""},{"location":"third_party/borgmatic/third_party-borgmatic/#introduction","title":"Introduction","text":"Borgmatic is a great way to run backups on your Mailcow setup as it securely encrypts your data and is extremely easy to set up.
Due to it's deduplication capabilities you can store a great number of backups without wasting large amounts of disk space. This allows you to run backups in very short intervals to ensure minimal data loss when the need arises to recover data from a backup.
This document guides you through the process to enable continuous backups for mailcow with borgmatic. The borgmatic functionality is provided by the borgmatic Docker image. Check out the README
in that repository to find out about the other options (such as push notifications) that are available. This guide only covers the basics.
docker-compose.override.yml
","text":"In the mailcow-dockerized root folder create or edit docker-compose.override.yml
and insert the following configuration:
version: '2.1'\n\nservices:\n borgmatic-mailcow:\n image: ghcr.io/borgmatic-collective/borgmatic\n hostname: mailcow\n restart: always\n dns: ${IPV4_NETWORK:-172.22.1}.254\n volumes:\n - vmail-vol-1:/mnt/source/vmail:ro\n - crypt-vol-1:/mnt/source/crypt:ro\n - redis-vol-1:/mnt/source/redis:ro,z\n - rspamd-vol-1:/mnt/source/rspamd:ro,z\n - postfix-vol-1:/mnt/source/postfix:ro,z\n - mysql-socket-vol-1:/var/run/mysqld/:z\n - borg-config-vol-1:/root/.config/borg:Z\n - borg-cache-vol-1:/root/.cache/borg:Z\n - ./data/conf/borgmatic/etc:/etc/borgmatic.d:Z\n - ./data/conf/borgmatic/ssh:/root/.ssh:Z\n environment:\n - TZ=${TZ}\n - BORG_PASSPHRASE=YouBetterPutSomethingRealGoodHere\n networks:\n mailcow-network:\n aliases:\n - borgmatic\n\nvolumes:\n borg-cache-vol-1:\n borg-config-vol-1:\n
Ensure that you change the BORG_PASSPHRASE
to a secure passphrase of your choosing.
For security reasons we mount the maildir as read-only. If you later want to restore data you will need to remove the ro
flag prior to restoring the data. This is described in the section on restoring backups.
data/conf/borgmatic/etc/config.yaml
","text":"Next, we need to create the borgmatic configuration.
source mailcow.conf\ncat <<EOF > data/conf/borgmatic/etc/config.yaml\nlocation:\n source_directories:\n - /mnt/source\n repositories:\n - ssh://user@rsync.net:22/./mailcow\n exclude_patterns:\n - '/mnt/source/postfix/public/'\n - '/mnt/source/postfix/private/'\n - '/mnt/source/rspamd/rspamd.sock'\n\nretention:\n keep_hourly: 24\n keep_daily: 7\n keep_weekly: 4\n keep_monthly: 6\n prefix: \"\"\n\nhooks:\n mysql_databases:\n - name: ${DBNAME}\n username: ${DBUSER}\n password: ${DBPASS}\n options: --default-character-set=utf8mb4\nEOF\n
Creating the file in this way ensures the correct MySQL credentials are pulled in from mailcow.conf
.
This file is a minimal example for using borgmatic with an account user
on the cloud storage provider rsync.net
for a repository called mailcow
(see repositories
setting). It will backup both the maildir and MySQL database, which is all you should need to restore your mailcow setup after an incident. The retention settings will keep one archive for each hour of the past 24 hours, one per day of the week, one per week of the month and one per month of the past half year.
Check the borgmatic documentation on how to use other types of repositories or configuration options. If you choose to use a local filesystem as a backup destination make sure to mount it into the container. The container defines a volume called /mnt/borg-repository
for this purpose.
Note
If you do not use rsync.net you can most likely drop the remote_path
element from your config.
Create a new text file in data/conf/borgmatic/etc/crontab.txt
with the following content:
14 * * * * PATH=$PATH:/usr/local/bin /usr/local/bin/borgmatic --stats -v 0 2>&1\n
This file expects crontab syntax. The example shown here will trigger the backup to run every hour at 14 minutes past the hour and log some nice stats at the end.
"},{"location":"third_party/borgmatic/third_party-borgmatic/#place-ssh-keys-in-folder","title":"Place SSH keys in folder","text":"Place the SSH keys you intend to use for remote repository connections in data/conf/borgmatic/ssh
. OpenSSH expects the usual id_rsa
, id_ed25519
or similar to be in this directory. Ensure the file is chmod 600
and not world readable or OpenSSH will refuse to use the SSH key.
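For example, assuming your key file is named id_ed25519 (adjust the filename to match your key):
chmod 600 data/conf/borgmatic/ssh/id_ed25519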
For the next step we need the container to be up and running in a configured state. To do that run:
docker compose up -d\n
"},{"location":"third_party/borgmatic/third_party-borgmatic/#initialize-the-repository","title":"Initialize the repository","text":"By now your borgmatic container is up and running, but the backups will currently fail due to the repository not being initialized.
To initialize the repository run:
docker compose exec borgmatic-mailcow borgmatic init --encryption repokey-blake2\n
You will be asked to verify the SSH host key of your remote repository server. Check that it matches and confirm the prompt by entering yes
. The repository will be initialized with the passphrase you set in the BORG_PASSPHRASE
environment variable earlier.
When using any of the repokey
encryption methods the encryption key will be stored in the repository itself and not on the client, so there is no further action required in this regard. If you decide to use a keyfile
instead of a repokey
make sure you export the key and back it up separately. Check the Exporting Keys section for how to retrieve the key.
Now that we have finished configuring and initializing the repository, restart the container to ensure it is in a defined state:
docker compose restart borgmatic-mailcow\n
"},{"location":"third_party/borgmatic/third_party-borgmatic/#restoring-from-a-backup","title":"Restoring from a backup","text":"Restoring a backup assumes you are starting off with a fresh installation of mailcow, and you currently do not have any custom data in your maildir or your mailcow database.
"},{"location":"third_party/borgmatic/third_party-borgmatic/#restore-maildir","title":"Restore maildir","text":"Warning
Doing this will overwrite files in your maildir! Do not run this unless you actually intend to recover mail files from a backup.
If you use SELinux in Enforcing mode
If you are using mailcow on a host with SELinux in Enforcing mode you will have to temporarily disable it during extraction of the archive as the mailcow setup labels the vmail volume as private, belonging to the dovecot container exclusively. SELinux will (rightfully) prevent any other container, such as the borgmatic container, from writing to this volume.
Before running a restore you must make the vmail volume writeable in docker-compose.override.yml
by removing the ro
flag from the volume. Then you can use the following command to restore the maildir from a backup:
docker compose exec borgmatic-mailcow borgmatic extract --path mnt/source --archive latest\n
Alternatively you can specify any archive name from the list of archives (see Listing all available archives)
"},{"location":"third_party/borgmatic/third_party-borgmatic/#restore-mysql","title":"Restore MySQL","text":"Warning
Running this command will delete and recreate the mailcow database! Do not run this unless you actually intend to recover the mailcow database from a backup.
To restore the MySQL database from the latest archive use this command:
docker compose exec borgmatic-mailcow borgmatic restore --archive latest\n
Alternatively you can specify any archive name from the list of archives (see Listing all available archives)
"},{"location":"third_party/borgmatic/third_party-borgmatic/#after-restoring","title":"After restoring","text":"After restoring you need to restart mailcow. If you disabled SELinux enforcing mode now would be a good time to re-enable it.
To restart mailcow use the following command:
docker compose down && docker compose up -d\n
If you use SELinux this will also trigger the re-labeling of all files in your vmail volume. Be patient, as this may take a while if you have lots of files.
"},{"location":"third_party/borgmatic/third_party-borgmatic/#useful-commands","title":"Useful commands","text":""},{"location":"third_party/borgmatic/third_party-borgmatic/#manual-archiving-run-with-debugging-output","title":"Manual archiving run (with debugging output)","text":"docker compose exec borgmatic-mailcow borgmatic -v 2\n
"},{"location":"third_party/borgmatic/third_party-borgmatic/#listing-all-available-archives","title":"Listing all available archives","text":"docker compose exec borgmatic-mailcow borgmatic list\n
"},{"location":"third_party/borgmatic/third_party-borgmatic/#break-lock","title":"Break lock","text":"When borg is interrupted during an archiving run it will leave behind a stale lock that needs to be cleared before any new operations can be performed:
docker compose exec borgmatic-mailcow borg break-lock user@rsync.net:mailcow\n
Where user@rsync.net:mailcow
is the URI to your repository.
Now would be a good time to do a manual archiving run to ensure it can be successfully performed.
"},{"location":"third_party/borgmatic/third_party-borgmatic/#exporting-keys","title":"Exporting keys","text":"When using any of the keyfile
methods for encryption you MUST take care of backing up the key files yourself. The key files are generated when you initialize the repository. The repokey
methods store the key file within the repository, so a manual backup isn't as essential.
Note that in either case you also must have the passphrase to decrypt any archives.
To fetch the keyfile run:
docker compose exec borgmatic-mailcow borg key export --paper user@rsync.net:mailcow\n
Where user@rsync.net:mailcow
is the URI to your repository.
Mailcow provides the ability to check for updates using its own update script.
If you want to check for mailcow updates using checkmk, you can create an executable file in the local
directory of the checkmk agent (typically /usr/lib/check_mk_agent/local/
) with the name mailcow_update
and the following content:
#!/bin/bash\ncd /opt/mailcow-dockerized/ && ./update.sh -c >/dev/null\nstatus=$?\nif [ $status -eq 3 ]; then\n echo \"0 \\\"mailcow_update\\\" mailcow_update=0;1;;0;1 No updates available.\"\nelif [ $status -eq 0 ]; then\n echo \"1 \\\"mailcow_update\\\" mailcow_update=1;1;;0;1 Updated code is available.\\nThe changes can be found here: https://github.com/mailcow/mailcow-dockerized/commits/master\"\nelse\n echo \"3 \\\"mailcow_update\\\" - Unknown output from update script ...\"\nfi\nexit\n
If the mailcow installation directory is not /opt/
, adjust this in the 2nd line.
After that, re-inventory the services for your mailcow host in checkmk and a new check named mailcow_update
should be selectable.
This will run the mailcow_update
check every time the checkmk agent is queried. You can cache the result by placing the script in a subfolder named after the number of seconds you wish to cache it for; see the example below. /usr/lib/check_mk_agent/local/3600/
will cache the response for 3600 seconds (1 hour).
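For example, to cache the result for one hour (assuming the default agent path used above):
mkdir -p /usr/lib/check_mk_agent/local/3600
mv /usr/lib/check_mk_agent/local/mailcow_update /usr/lib/check_mk_agent/local/3600/mailcow_update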
If there are no updates available, OK
is displayed.
If updates are available, WARN
is displayed.
If CRIT
is desired instead, replace the 7th line with the following:
echo \"2 \\\"mailcow_update\\\" mailcow_update=1;1;;0;1 Updated code is available.\\nThe changes can be found here: https://github.com/mailcow/mailcow-dockerized/commits/master\"\n
"},{"location":"third_party/checkmk/u_e-checkmk/#detailed-check-output","title":"Detailed check output","text":"Using Microsoft Exchange in a hybrid setup is possible with mailcow. With this setup you can add mailboxes on your mailcow and still use Exchange Online Protection. All mailboxes setup in Exchange will receive their mails as usual, while with the hybrid approach additional Mailboxes can be setup in mailcow without any further configuration.
This setup becomes very handy if you have enabled the Office 365 security defaults and third-party applications can no longer log into your mailboxes by any of the supported methods.
"},{"location":"third_party/exchange_onprem/third_party-exchange_onprem/#requirements","title":"Requirements","text":"contoso-com.mail.protection.outlook.com
. Contact your domain registrar to get further information on how to change the MX record.
Set your domain as an internal relay domain in Exchange: open the mail flow pane, click on accepted domains and change the domain type from authoritative to internal relay.
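As a sketch of the MX record change mentioned above (contoso.com is Microsoft's example domain, the priority value is arbitrary):
# Name Type Value
@ IN MX 0 contoso-com.mail.protection.outlook.com.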
Your mailcow needs to relay all mails to your personalized Exchange host. It is the same host address we already looked up for the MX record.
Sender-dependent transports
dropdown. Enable relaying by ticking the Relay this domain
, Relay all recipients
and the Relay non-existing mailboxes only checkboxes.
Info
From now on your mailcow will accept all mails relayed from Exchange. The inbound filtering, and with it the neural learning of your cow, will no longer work, because all mails are routed through Exchange and the filtering is handled there.
"},{"location":"third_party/exchange_onprem/third_party-exchange_onprem/#set-up-connectors-in-exchange","title":"Set up Connectors in Exchange","text":"All mail traffic now goes through Exchange. At this point the Exchange Online Protection already filters all incoming and outgoing mails. Now we need to set up two connectors to relay incoming mails from our Exchange Service to the mailcow and another one to allow mails relayed from the mailcow to our exchange service. You can follow the official guide from Microsoft.
Warning
For the connector that handles mails from your mailcow to Exchange, Microsoft offers two ways of authenticating it. The recommended way is to use a TLS certificate configured with a subject name that matches an accepted domain in Exchange. Otherwise you need to choose authentication with the static IP address of your mailcow.
"},{"location":"third_party/exchange_onprem/third_party-exchange_onprem/#validating","title":"Validating","text":"The easiest way to validate the hybrid setup is by sending a mail from the internet to a mailbox that only exists on the mailcow and vice versa.
"},{"location":"third_party/exchange_onprem/third_party-exchange_onprem/#common-issues","title":"Common Issues","text":"550 5.1.10 RESOLVER.ADR.RecipientNotFound; Recipient test@contoso.com not found by SMTP address lookup
Possible Solution: Your domain is not set up as internal relay
. Exchange therefore cannot find the recipient550 5.7.64 TenantAttribution; Relay Access Denied
Possible Solution: The authentication method failed. Make sure the certificate subject matches an accepted domain in Exchange. Try authenticating by static IP instead.
Microsoft Guide for the connector setup and additional requirements: https://docs.microsoft.com/exchange/mail-flow-best-practices/use-connectors-to-configure-mail-flow/set-up-connectors-to-route-mail#prerequisites-for-your-on-premises-email-environment
"},{"location":"third_party/gitea/third_party-gitea/","title":"Gitea","text":"With Gitea' ability to authenticate over SMTP it is trivial to integrate it with mailcow. Few changes are needed:
1. Open docker-compose.override.yml
and add gitea:
version: '2.1'\nservices:\n\n gitea-mailcow:\n image: gitea/gitea:1\n volumes:\n - ./data/gitea:/data\n networks:\n mailcow-network:\n aliases:\n - gitea\n ports:\n - \"${GITEA_SSH_PORT:-127.0.0.1:4000}:22\"\n
2. Create data/conf/nginx/site.gitea.custom
, add:
location /gitea/ {\n proxy_pass http://gitea:3000/;\n}\n
3. Open mailcow.conf
and define the binding you want gitea to use for SSH. Example:
GITEA_SSH_PORT=127.0.0.1:4000\n
5. Run docker compose up -d
to bring up the gitea container and run docker compose restart nginx-mailcow
afterwards.
6. If you forced mailcow to https, execute step 9 and restart gitea with docker compose restart gitea-mailcow
. Go ahead with step 7 (remember to use https instead of http, e.g. https://mx.example.org/gitea/).
7. Open http://${MAILCOW_HOSTNAME}/gitea/
, for example http://mx.example.org/gitea/
. For database details set mysql
as database host. Use the value of DBNAME found in mailcow.conf as database name, DBUSER as database user and DBPASS as database password.
8. Once the installation is complete, login as admin and set \"settings\" -> \"authorization\" -> \"enable SMTP\". SMTP Host should be postfix
with port 587
, set Skip TLS Verify
as we are using an unlisted SAN (\"postfix\" is most likely not part of your certificate).
9. Create data/gitea/gitea/conf/app.ini
and set following values. You can consult gitea cheat sheet for their meaning and other possible values.
[server]\nSSH_LISTEN_PORT = 22\n# For GITEA_SSH_PORT=127.0.0.1:4000 in mailcow.conf, set:\nSSH_DOMAIN = 127.0.0.1\nSSH_PORT = 4000\n# For MAILCOW_HOSTNAME=mx.example.org in mailcow.conf (and default ports for HTTPS), set:\nROOT_URL = https://mx.example.org/gitea/\n
10. Restart gitea with docker compose restart gitea-mailcow
. Your users should be able to login with mailcow managed accounts.
With Gogs' ability to authenticate over SMTP, it is trivial to integrate it with mailcow. Only a few changes are needed:
1. Open docker-compose.override.yml
and add Gogs:
version: '2.1'\nservices:\n\n gogs-mailcow:\n image: gogs/gogs\n volumes:\n - ./data/gogs:/data\n networks:\n mailcow-network:\n aliases:\n - gogs\n ports:\n - \"${GOGS_SSH_PORT:-127.0.0.1:4000}:22\"\n
2. Create data/conf/nginx/site.gogs.custom
, add:
location /gogs/ {\n proxy_pass http://gogs:3000/;\n}\n
3. Open mailcow.conf
and define the binding you want Gogs to use for SSH. Example:
GOGS_SSH_PORT=127.0.0.1:4000\n
5. Run docker compose up -d
to bring up the Gogs container and run docker compose restart nginx-mailcow
afterwards.
6. Open http://${MAILCOW_HOSTNAME}/gogs/
, for example http://mx.example.org/gogs/
. For database details set mysql
as database host. Use the value of DBNAME found in mailcow.conf as database name, DBUSER as database user and DBPASS as database password.
7. Once the installation is complete, login as admin and set \"settings\" -> \"authorization\" -> \"enable SMTP\". SMTP Host should be postfix
with port 587
, set Skip TLS Verify
as we are using an unlisted SAN (\"postfix\" is most likely not part of your certificate).
8. Create data/gogs/gogs/conf/app.ini
and set following values. You can consult Gogs cheat sheet for their meaning and other possible values.
[server]\nSSH_LISTEN_PORT = 22\n# For GOGS_SSH_PORT=127.0.0.1:4000 in mailcow.conf, set:\nSSH_DOMAIN = 127.0.0.1\nSSH_PORT = 4000\n# For MAILCOW_HOSTNAME=mx.example.org in mailcow.conf (and default ports for HTTPS), set:\nROOT_URL = https://mx.example.org/gogs/\n
9. Restart Gogs with docker compose restart gogs-mailcow
. Your users should be able to login with mailcow managed accounts.
Info
This guide is a copy from dockerized-mailcow-mailman. Please post issues, questions and improvements in the issue tracker there.
Warning
mailcow is not responsible for any data loss, hardware damage or broken keyboards. This guide comes without any warranty. Make backups before starting, 'coze: No backup no pity!
"},{"location":"third_party/mailman3/third_party-mailman3/#introduction","title":"Introduction","text":"This guide aims to install and configure mailcow-dockerized with docker-mailman and to provide some useful scripts. An essential condition is, to preserve mailcow and Mailman in their own installations for independent updates.
There are some guides and projects on the internet, but they are not up to date and/or incomplete in documentation or configuration. This guide is based on the work of:
After finishing this guide, mailcow-dockerized and docker-mailman will run and Apache as a reverse proxy will serve the web frontends.
The operating system used is an Ubuntu 20.04 LTS.
"},{"location":"third_party/mailman3/third_party-mailman3/#installation","title":"Installation","text":"This guide is based on different steps:
Most of the configuration is covered by mailcows DNS setup. After finishing this setup add another subdomain for Mailman, e.g. lists.example.org
that points to the same server:
# Name Type Value\nlists IN A 1.2.3.4\nlists IN AAAA dead:beef\n
"},{"location":"third_party/mailman3/third_party-mailman3/#install-apache-as-a-reverse-proxy","title":"Install Apache as a reverse proxy","text":"Install Apache, e.g. with this guide from Digital Ocean: How To Install the Apache Web Server on Ubuntu 20.04.
Activate certain Apache modules (as root or sudo):
a2enmod rewrite proxy proxy_http headers ssl wsgi proxy_uwsgi http2\n
Maybe you have to install further packages to get these modules. This PPA by Ond\u0159ej Sur\u00fd may help you.
"},{"location":"third_party/mailman3/third_party-mailman3/#vhost-configuration","title":"vHost configuration","text":"Copy the mailcow.conf and the mailman.conf in the Apache conf folder sites-available
(e.g. under /etc/apache2/sites-available
).
Change in mailcow.conf
: - MAILCOW_HOSTNAME
to your MAILCOW_HOSTNAME
Change in mailman.conf
: - MAILMAN_DOMAIN
to your Mailman domain (e.g. lists.example.org
)
Don't activate the configuration, as the ssl certificates and directories are missing yet.
"},{"location":"third_party/mailman3/third_party-mailman3/#obtain-ssl-certificates-with-lets-encrypt","title":"Obtain SSL certificates with Let's Encrypt","text":"Check if your DNS config is available over the internet and points to the right IP addresses, e.g. with MXToolBox:
Install certbot (as root or sudo):
apt install certbot\n
Get the desired certificates (as root or sudo):
certbot certonly -d MAILCOW_HOSTNAME\ncertbot certonly -d MAILMAN_DOMAIN\n
"},{"location":"third_party/mailman3/third_party-mailman3/#install-mailcow-with-mailman-integration","title":"Install mailcow with Mailman integration","text":""},{"location":"third_party/mailman3/third_party-mailman3/#install-mailcow","title":"Install mailcow","text":"Follow the mailcow installation. Omit step 5 and do not pull and up with docker compose
!
This is also Step 4 in the official mailcow installation (nano mailcow.conf
). So change to your needs and alter the following variables:
HTTP_PORT=18080 # don't use 8080 as mailman needs it\nHTTP_BIND=127.0.0.1 #\nHTTPS_PORT=18443 # you may use 8443\nHTTPS_BIND=127.0.0.1 #\n\nSKIP_LETS_ENCRYPT=y # reverse proxy will do the SSL termination\n\nSNAT_TO_SOURCE=1.2.3.4 # change this to your IPv4\nSNAT6_TO_SOURCE=dead:beef # change this to your global IPv6\n
"},{"location":"third_party/mailman3/third_party-mailman3/#add-mailman-integration","title":"Add Mailman integration","text":"Create the file /opt/mailcow-dockerized/docker-compose.override.yml
(e.g. with nano
) and add the following lines:
version: '2.1'\n\nservices:\n postfix-mailcow:\n volumes:\n - /opt/mailman:/opt/mailman\n networks:\n - docker-mailman_mailman\n\nnetworks:\n docker-mailman_mailman:\n external: true\n
The additional volume is used by Mailman to generate additional config files for mailcow's postfix. The external network is created and used by Mailman; mailcow needs it to deliver incoming list mails to Mailman. Create the file /opt/mailcow-dockerized/data/conf/postfix/extra.cf
(e.g. with nano
) and add the following lines:
# mailman\n\nrecipient_delimiter = +\nunknown_local_recipient_reject_code = 550\nowner_request_special = no\n\nlocal_recipient_maps =\n regexp:/opt/mailman/core/var/data/postfix_lmtp,\n proxy:unix:passwd.byname,\n $alias_maps\nvirtual_mailbox_maps =\n proxy:mysql:/opt/postfix/conf/sql/mysql_virtual_mailbox_maps.cf,\n regexp:/opt/mailman/core/var/data/postfix_lmtp\ntransport_maps =\n pcre:/opt/postfix/conf/custom_transport.pcre,\n pcre:/opt/postfix/conf/local_transport,\n proxy:mysql:/opt/postfix/conf/sql/mysql_relay_ne.cf,\n proxy:mysql:/opt/postfix/conf/sql/mysql_transport_maps.cf,\n regexp:/opt/mailman/core/var/data/postfix_lmtp\nrelay_domains =\n proxy:mysql:/opt/postfix/conf/sql/mysql_virtual_relay_domain_maps.cf,\n regexp:/opt/mailman/core/var/data/postfix_domains\nrelay_recipient_maps =\n proxy:mysql:/opt/postfix/conf/sql/mysql_relay_recipient_maps.cf,\n regexp:/opt/mailman/core/var/data/postfix_lmtp\n
As we overwrite mailcow postfix configuration here, this step may break your normal mail transports. Check the original configuration files if anything changed."},{"location":"third_party/mailman3/third_party-mailman3/#ssl-certificates","title":"SSL certificates","text":"As we proxying mailcow, we need to copy the SSL certificates into the mailcow file structure. This task will do the script renew-ssl.sh for us:
/opt/mailcow-dockerized
chmod a+x renew-ssl.sh
)You have to create a cronjob, so that new certificates will be copied. Execute as root or sudo:
crontab -e\n
To run the script every day at 5am, add:
0 5 * * * /opt/mailcow-dockerized/renew-ssl.sh\n
"},{"location":"third_party/mailman3/third_party-mailman3/#install-mailman","title":"Install Mailman","text":"Basicly follow the instructions at docker-mailman. As they are a lot, here is in a nuthshell what to do:
As root or sudo:
cd /opt\nmkdir -p mailman/core\nmkdir -p mailman/web\ngit clone https://github.com/maxking/docker-mailman\ncd docker-mailman\n
"},{"location":"third_party/mailman3/third_party-mailman3/#configure-mailman","title":"Configure Mailman","text":"Create a long key for Hyperkitty, e.g. with the linux command cat /dev/urandom | tr -dc a-zA-Z0-9 | head -c30; echo
. Save this key for a moment as HYPERKITTY_KEY.
Create a long password for the database, e.g. with the linux command cat /dev/urandom | tr -dc a-zA-Z0-9 | head -c30; echo
. Save this password for a moment as DBPASS.
Create a long key for Django, e.g. with the linux command cat /dev/urandom | tr -dc a-zA-Z0-9 | head -c30; echo
. Save this key for a moment as DJANGO_KEY.
Create the file /opt/docker-mailman/docker-compose.override.yaml
and replace HYPERKITTY_KEY
, DBPASS
and DJANGO_KEY
with the generated values:
version: '2'\n\nservices:\n mailman-core:\n environment:\n - DATABASE_URL=postgres://mailman:DBPASS@database/mailmandb\n - HYPERKITTY_API_KEY=HYPERKITTY_KEY\n - TZ=Europe/Berlin\n - MTA=postfix\n restart: always\n networks:\n - mailman\n\n mailman-web:\n environment:\n - DATABASE_URL=postgres://mailman:DBPASS@database/mailmandb\n - HYPERKITTY_API_KEY=HYPERKITTY_KEY\n - TZ=Europe/Berlin\n - SECRET_KEY=DJANGO_KEY\n - SERVE_FROM_DOMAIN=MAILMAN_DOMAIN # e.g. lists.example.org\n - MAILMAN_ADMIN_USER=admin # the admin user\n - MAILMAN_ADMIN_EMAIL=admin@example.org # the admin mail address\n - UWSGI_STATIC_MAP=/static=/opt/mailman-web-data/static\n restart: always\n\n database:\n environment:\n - POSTGRES_PASSWORD=DBPASS\n restart: always\n
At mailman-web
fill in correct values for SERVE_FROM_DOMAIN
(e.g. lists.example.org
), MAILMAN_ADMIN_USER
and MAILMAN_ADMIN_EMAIL
. You need the admin credentials to log into the web interface (Postorius). For setting the password for the first time use the Forgot password function in the web interface.
About other configuration options read Mailman-web and Mailman-core documentation.
"},{"location":"third_party/mailman3/third_party-mailman3/#configure-mailman-core-and-mailman-web","title":"Configure Mailman core and Mailman web","text":"Create the file /opt/mailman/core/mailman-extra.cfg
with the following content. mailman@example.org
should be pointing to a valid mail box or redirection.
[mailman]\ndefault_language: de\nsite_owner: mailman@example.org\n
Create the file /opt/mailman/web/settings_local.py
with the following content. mailman@example.org
should be pointing to a valid mail box or redirection.
# locale\nLANGUAGE_CODE = 'de-de'\n\n# disable social authentication\nMAILMAN_WEB_SOCIAL_AUTH = []\n\n# change it\nDEFAULT_FROM_EMAIL = 'mailman@example.org'\n\nDEBUG = False\n
You can change LANGUAGE_CODE
and SOCIALACCOUNT_PROVIDERS
to your needs."},{"location":"third_party/mailman3/third_party-mailman3/#run","title":"\ud83c\udfc3 Run","text":"Run (as root or sudo)
a2ensite mailcow.conf\na2ensite mailman.conf\nsystemctl restart apache2\n\ncd /opt/docker-mailman\ndocker compose pull\ndocker compose up -d\n\ncd /opt/mailcow-dockerized/\ndocker compose pull\n./renew-ssl.sh\n
Wait a few minutes! The containers have to create there databases and config files. This can last up to 1 minute and more.
"},{"location":"third_party/mailman3/third_party-mailman3/#remarks","title":"Remarks","text":""},{"location":"third_party/mailman3/third_party-mailman3/#new-lists-arent-recognized-by-postfix-instantly","title":"New lists aren't recognized by postfix instantly","text":"When you create a new list and try to immediately send an e-mail, postfix responses with User doesn't exist
, because postfix won't deliver it to Mailman yet. The configuration at /opt/mailman/core/var/data/postfix_lmtp
is not instantly updated. If you need the list instantly, restart postifx manually:
cd /opt/mailcow-dockerized\ndocker compose restart postfix-mailcow\n
"},{"location":"third_party/mailman3/third_party-mailman3/#update","title":"Update","text":"mailcow has it's own update script in /opt/mailcow-dockerized/update.sh
, see the docs.
For Mailman just fetch the newest version from the github repository.
"},{"location":"third_party/mailman3/third_party-mailman3/#backup","title":"Backup","text":"mailcow has an own backup script. Read the docs for further informations.
Mailman won't state backup instructions in the README.md. In the gitbucket of pgollor is a script that may be helpful.
"},{"location":"third_party/mailman3/third_party-mailman3/#todo","title":"ToDo","text":""},{"location":"third_party/mailman3/third_party-mailman3/#install-script","title":"install script","text":"Write a script like in mailman-mailcow-integration/mailman-install.sh as many of the steps are automatable.
This is a simple integration of mailcow aliases and the mailbox name into mailpiler when using IMAP authentication.
Disclaimer: This is not officially maintained nor supported by the mailcow project nor its contributors. No warranty or support is being provided, however you're free to open issues on GitHub for filing a bug or provide further ideas. GitHub repo can be found here.
Info
Support for domain wildcards were implemented in Piler 1.3.10 which was released on 03.01.2021. Prior versions basically do work, but after logging in you won't see emails sent from or to the domain alias. (e.g. when @example.com is an alias for admin@example.com)
"},{"location":"third_party/mailpiler/third_party-mailpiler_integration/#the-problem-to-solve","title":"The problem to solve","text":"mailpiler offers the authentication based on IMAP, for example:
$config['ENABLE_IMAP_AUTH'] = 1;\n$config['IMAP_HOST'] = 'mail.example.com';\n$config['IMAP_PORT'] = 993;\n$config['IMAP_SSL'] = true;\n
patrik@example.com
, you will only see delivered emails sent from or to this specific email address.team@example.com
, you won't see emails sent to or from this email address even the fact you're a recipient of mails sent to this alias address.By hooking into the authentication process of mailpiler, we are able to get required data via the mailcow API during login. This fires API requests to the mailcow API (requiring read-only API access) to read out the aliases your email address participates and also the \"Name\" of the mailbox specified to display it on the top-right of mailpiler after login.
Permitted email addresses can be seen in the mailpiler settings top-right after logging in.
Info
This is only pulled once during the authentication process. The authorized aliases and the realname are valid for the whole duration of the user session as mailpiler sets them in the session data. If user is removed from specific alias, this will only take effect after next login.
"},{"location":"third_party/mailpiler/third_party-mailpiler_integration/#the-solution","title":"The solution","text":"Note: File paths might vary depending on your setup.
"},{"location":"third_party/mailpiler/third_party-mailpiler_integration/#requirements","title":"Requirements","text":"Configuration & Details - Access - Read-Only Access
. Don't forget to allow API access from your mailpiler IP.Warning
As mailpiler authenticates against mailcow, our IMAP server, failed logins of users or bots might trigger a block for your mailpiler instance. Therefore you might want to consider whitelisting the IP address of the mailpiler instance within mailcow: Configuration & Details - Configuration - Fail2ban parameters - Whitelisted networks/hosts
.
Set the custom query function of mailpiler and append this to /usr/local/etc/piler/config-site.php
:
$config['MAILCOW_API_KEY'] = 'YOUR_READONLY_API_KEY';\n$config['MAILCOW_SET_REALNAME'] = true; // when not specified, then default is false\n$config['CUSTOM_EMAIL_QUERY_FUNCTION'] = 'query_mailcow_for_email_access';\ninclude('auth-mailcow.php');\n
You can also change the mailcow hostname, if required:
$config['MAILCOW_HOST'] = 'mail.domain.tld'; // defaults to $config['IMAP_HOST']\n
Download the PHP file with the functions from the GitHub repo:
curl -o /usr/local/etc/piler/auth-mailcow.php https://raw.githubusercontent.com/patschi/mailpiler-mailcow-integration/master/auth-mailcow.php\n
Done!
Make sure to re-login with your IMAP credentials for changes to take effect.
If it doesn't work, most likely something's wrong with the API query itself. Consider debugging by sending manual API requests to the API. (Tip: Open https://mail.domain.tld/api
on your instance)
Nextcloud can be set up (parameter -i
) and removed (parameter -p
) with the helper script included with mailcow. In order to install Nextcloud simply navigate to your mailcow-dockerized root folder and run the helper script as follows:
./helper-scripts/nextcloud.sh -i
In case you have forgotten the password (e.g. for admin) and can't request a new one via the password reset link on the login screen calling the helper script with -r
as parameter allows you to set a new password. Only use this option if your Nextcloud isn't configured to use mailcow for authentication as described in the next section.
In order for mailcow to generate a certificate for the Nextcloud domain you need to add \"nextcloud.domain.tld\" to ADDITIONAL_SAN in mailcow.conf and run docker compose up -d
to apply. For more information refer to: Advanced SSL.
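For example, with a hypothetical domain nextcloud.example.org the relevant line in mailcow.conf would look like this (multiple names are comma-separated):
ADDITIONAL_SAN=nextcloud.example.org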
To use the recommended setting (cron) to execute the background jobs, the following lines need to be added to the docker-compose.override.yml
:
version: '2.1'\nservices:\n php-fpm-mailcow:\n labels:\n ofelia.enabled: \"true\"\n ofelia.job-exec.nextcloud-cron.schedule: \"@every 5m\"\n ofelia.job-exec.nextcloud-cron.command: \"su www-data -s /bin/bash -c \\\"/usr/local/bin/php -f /web/nextcloud/cron.php\\\"\"\n
After adding these lines the docker compose up -d
command must be executed to update the docker image. The docker scheduler image must also be restarted to pick up the new job definition by executing docker compose restart ofelia-mailcow
. To check if the job was successfully picked up by ofelia
the command docker compose logs ofelia-mailcow
will contain a line similar to New job registered \"nextcloud-cron\" - ...
.
By adding these lines the background jobs will be executed every 5 minutes. The only way to verify that the execution works correctly is to check the basic settings in Nextcloud while logged in as an admin. If everything is correct, the first scheduled execution will change the background jobs processing setting to (X) Cron
and the timestamp after Last job ran
will be updated every 5 minutes.
The following describes how to set up authentication via mailcow using the OAuth2 protocol. We will only assume that you have already set up Nextcloud at cloud.example.com and that your mailcow is running at mail.example.com. It does not matter if your Nextcloud is running on a different server; you can still use mailcow for authentication.
1. Log into mailcow as administrator.
2. Scroll down to OAuth2 Apps and click the Add button. Specify the redirect URI as https://cloud.example.com/index.php/apps/sociallogin/custom_oauth2/Mailcow
and click Add. Save the client ID and secret for later.
Info
Some installations, including those setup using the helper script of mailcow, need to remove index.php/ from the URL to get a successful redirect: https://cloud.example.com/apps/sociallogin/custom_oauth2/Mailcow
3. Log into Nextcloud as administrator.
4. Click the button in the top right corner and select Apps. Click the search button in the toolbar, search for the Social Login plugin and click Download and enable next to it.
5. Click the button in the top right corner and select Settings. Scroll down to the Administration section on the left and click Social login.
6. Uncheck the following items:
7. Check the following items:
Click the Save button.
8. Scroll down to Custom OAuth2 and click the + button. 9. Configure the parameters as follows:
Mailcow
- Internal name: Mailcow
- Title: Mailcow
- API Base URL: https://mail.example.com
- Authorize URL: https://mail.example.com/oauth/authorize
- Token URL: https://mail.example.com/oauth/token
- Profile URL: https://mail.example.com/oauth/profile
- Scope: profile
If you have previously used Nextcloud with mailcow authentication via user_external/IMAP, you need to perform some additional steps to link your existing user accounts with OAuth2.
1. Click the button in the top right corner and select Apps. Scroll down to the External user authentication app and click Remove next to it. 2. Run the following queries in your Nextcloud database (if you set up Nextcloud using mailcow's script, you can run source mailcow.conf && docker compose exec mysql-mailcow mysql -u$DBUSER -p$DBPASS $DBNAME
):
INSERT INTO nc_users (uid, uid_lower) SELECT DISTINCT uid, LOWER(uid) FROM nc_users_external;\nINSERT INTO nc_sociallogin_connect (uid, identifier) SELECT DISTINCT uid, CONCAT(\"Mailcow-\", uid) FROM nc_users_external;\n
If you have previously used Nextcloud without mailcow authentication, but with the same usernames as mailcow, you can also link your existing user accounts with OAuth2.
1. Run the following queries in your Nextcloud database (if you set up Nextcloud using mailcow's script, you can run source mailcow.conf && docker compose exec mysql-mailcow mysql -u$DBUSER -p$DBPASS $DBNAME
):
INSERT INTO nc_sociallogin_connect (uid, identifier) SELECT DISTINCT uid, CONCAT(\"Mailcow-\", uid) FROM nc_users;\n
"},{"location":"third_party/nextcloud/third_party-nextcloud/#update","title":"Update","text":"The Nextcloud instance can be updated easily with the web update mechanism. In the case of larger updates, there may be further changes to be made after the update. After the Nextcloud instance has been checked, problems are shown. This can be e.g. missing indices in the DB or similar. It shows which commands have to be executed, these have to be placed in the php-fpm-mailcow container.
As an an example run the following command to add the missing indices. docker exec -it -u www-data $(docker ps -f name=php-fpm-mailcow -q) bash -c \"php /web/nextcloud/occ db:add-missing-indices\"
It may happen that you cannot reach the Nextcloud instance from your network. This may be due to the fact that the entry of your subnet in the array 'trusted_proxies' is missing. You can make changes in the Nextcloud config.php in data/web/nextcloud/config/*
.
'trusted_proxies' =>\n array (\n 0 => 'fd4d:6169:6c63:6f77::/64',\n 1 => '172.22.1.0/24',\n 2 => 'NewSubnet/24',\n ),\n
After the changes have been made, the nginx container must be restarted. docker compose restart nginx-mailcow
In order to enable Portainer, the docker-compose.yml and site.conf for Nginx must be modified.
1. Create a new file docker-compose.override.yml
in the mailcow-dockerized root folder and insert the following configuration
version: '2.1'\nservices:\n portainer-mailcow:\n image: portainer/portainer-ce\n volumes:\n - /var/run/docker.sock:/var/run/docker.sock\n - ./data/conf/portainer:/data\n restart: always\n dns:\n - 172.22.1.254\n dns_search: mailcow-network\n networks:\n mailcow-network:\n aliases:\n - portainer\n
2a. Create data/conf/nginx/portainer.conf
: upstream portainer {\n server portainer-mailcow:9000;\n}\n\nmap $http_upgrade $connection_upgrade {\n default upgrade;\n '' close;\n}\n
2b. Insert a new location to the default mailcow site by creating the file data/conf/nginx/site.portainer.custom
:
location /portainer/ {\n proxy_http_version 1.1;\n proxy_set_header Host $http_host; # required for docker client's sake\n proxy_set_header X-Real-IP $remote_addr; # pass on real client's IP\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_read_timeout 900;\n\n proxy_set_header Connection \"\";\n proxy_buffers 32 4k;\n proxy_pass http://portainer/;\n }\n\n location /portainer/api/websocket/ {\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection $connection_upgrade;\n proxy_pass http://portainer/api/websocket/;\n }\n
3. Apply your changes:
docker compose up -d && docker compose restart nginx-mailcow\n
Now you can simply navigate to https://${MAILCOW_HOSTNAME}/portainer/ to view your Portainer container monitoring page. You\u2019ll then be prompted to specify a new password for the admin account. After specifying your password, you\u2019ll then be able to connect to the Portainer UI.
"},{"location":"third_party/portainer/third_party-portainer/#reverse-proxy","title":"Reverse Proxy","text":"If you are using a reverse proxy you will have to configure it to properly forward websocket requests.
This needs to be done for the docker console and other components to work.
Here is an example for Apache:
<Location /portainer/api/websocket/>\n RewriteEngine on\n RewriteCond %{HTTP:UPGRADE} ^WebSocket$ [NC]\n RewriteCond %{HTTP:CONNECTION} Upgrade$ [NC]\n RewriteRule /portainer/api/websocket/(.*) ws://127.0.0.1:8080/portainer/api/websocket/$1 [P]\n</Location>\n
"},{"location":"third_party/roundcube/third_party-roundcube/","title":"Roundcube","text":""},{"location":"third_party/roundcube/third_party-roundcube/#installing-roundcube","title":"Installing Roundcube","text":"Download Roundcube 1.6.x to the web htdocs directory and extract it (here rc/
):
# Check for a newer release!\ncd data/web\nwget -O - https://github.com/roundcube/roundcubemail/releases/download/1.6.0/roundcubemail-1.6.0-complete.tar.gz | tar xfvz -\n\n# Change folder name\nmv roundcubemail-1.6.0 rc\n\n# Change permissions\nchown -R root: rc/\n
If you need spell check features, create a file data/hooks/phpfpm/aspell.sh
with the following content, then chmod +x data/hooks/phpfpm/aspell.sh
. This installs a local spell check engine. Note, most modern web browsers have built in spell check, so you may not want/need this.
#!/bin/bash\napk update\napk add aspell-en # or any other language\n
Create a file data/web/rc/config/config.inc.php
with the following content. - Change the des_key
parameter to a random value. It is used to temporarily store your IMAP password. - The db_prefix
is optional but recommended. - If you didn't install spell check in the above step, remove spellcheck_engine
parameter and replace it with $config['enable_spellcheck'] = false;
.
<?php\nerror_reporting(0);\nif (!file_exists('/tmp/mime.types')) {\nfile_put_contents(\"/tmp/mime.types\", fopen(\"http://svn.apache.org/repos/asf/httpd/httpd/trunk/docs/conf/mime.types\", 'r'));\n}\n$config = array();\n$config['db_dsnw'] = 'mysql://' . getenv('DBUSER') . ':' . getenv('DBPASS') . '@mysql/' . getenv('DBNAME');\n$config['imap_host'] = 'tls://dovecot:143';\n$config['smtp_host'] = 'tls://postfix:587';\n$config['smtp_user'] = '%u';\n$config['smtp_pass'] = '%p';\n$config['support_url'] = '';\n$config['product_name'] = 'Roundcube Webmail';\n$config['des_key'] = 'yourrandomstring_changeme';\n$config['log_dir'] = '/dev/null';\n$config['temp_dir'] = '/tmp';\n$config['plugins'] = array(\n 'archive',\n 'managesieve'\n);\n$config['spellcheck_engine'] = 'aspell';\n$config['mime_types'] = '/tmp/mime.types';\n$config['imap_conn_options'] = array(\n 'ssl' => array('verify_peer' => false, 'verify_peer_name' => false, 'allow_self_signed' => true)\n);\n$config['enable_installer'] = true;\n$config['smtp_conn_options'] = array(\n 'ssl' => array('verify_peer' => false, 'verify_peer_name' => false, 'allow_self_signed' => true)\n);\n$config['db_prefix'] = 'mailcow_rc1';\n
Point your browser to https://myserver/rc/installer
and follow the instructions. Initialize the database and leave the installer.
Delete the directory data/web/rc/installer
after a successful installation!
Open data/web/rc/config/config.inc.php
and change the following parameters (or add them at the bottom of that file):
$config['managesieve_host'] = 'tls://dovecot:4190';\n$config['managesieve_conn_options'] = array(\n 'ssl' => array('verify_peer' => false, 'verify_peer_name' => false, 'allow_self_signed' => true)\n);\n// Enables separate management interface for vacation responses (out-of-office)\n// 0 - no separate section (default),\n// 1 - add Vacation section,\n// 2 - add Vacation section, but hide Filters section\n$config['managesieve_vacation'] = 1;\n
"},{"location":"third_party/roundcube/third_party-roundcube/#enable-change-password-function-in-roundcube","title":"Enable change password function in Roundcube","text":"Open data/web/rc/config/config.inc.php
and enable the password plugin:
...\n$config['plugins'] = array(\n 'archive',\n 'password',\n);\n...\n
Open data/web/rc/plugins/password/password.php
, search for case 'ssha':
and add above:
case 'ssha256':\n $salt = rcube_utils::random_bytes(8);\n $crypted = base64_encode( hash('sha256', $password . $salt, TRUE ) . $salt );\n $prefix = '{SSHA256}';\n break;\n
Open data/web/rc/plugins/password/config.inc.php
and change the following parameters (or add them at the bottom of that file):
$config['password_driver'] = 'sql';\n$config['password_algorithm'] = 'ssha256';\n$config['password_algorithm_prefix'] = '{SSHA256}';\n$config['password_query'] = \"UPDATE mailbox SET password = %P WHERE username = %u\";\n
"},{"location":"third_party/roundcube/third_party-roundcube/#integrate-carddav-addressbooks-in-roundcube","title":"Integrate CardDAV addressbooks in Roundcube","text":"Download the latest release of RCMCardDAV to the Roundcube plugin directory and extract it (here rc/plugins
):
cd data/web/rc/plugins\nwget -O - https://github.com/mstilkerich/rcmcarddav/releases/download/v4.4.1/carddav-v4.4.1-roundcube16.tar.gz | tar xfvz -\nchown -R root: carddav/\n
Copy the file config.inc.php.dist
to config.inc.php
(here in rc/plugins/carddav
) and append the following preset to the end of the file - don't forget to replace mx.example.org
with your own hostname:
$prefs['SOGo'] = array(\n 'name' => 'SOGo',\n 'username' => '%u',\n 'password' => '%p',\n 'url' => 'https://mx.example.org/SOGo/dav/%u/',\n 'carddav_name_only' => true,\n 'use_categories' => true,\n 'active' => true,\n 'readonly' => false,\n 'refresh_time' => '02:00:00',\n 'fixed' => array( 'active', 'name', 'username', 'password', 'refresh_time' ),\n 'hide' => false,\n);\n
Please note, that this preset only integrates the default addressbook (the one that's named \"Personal Address Book\" and can't be deleted). Additional addressbooks are currently not automatically detected but can be manually added within the roundecube settings. Enable the plugin by adding carddav
to $config['plugins']
in rc/config/config.inc.php
.
If you want to remove the default addressbooks (stored in the Roundcube database), so that only the CardDAV addressbooks are accessible, append $config['address_book_type'] = '';
to the config file data/web/rc/config/config.inc.php
.
Optionally, you can add Roundcube's link to the mailcow Apps list. To do this, open or create data/web/inc/vars.local.inc.php
and add the following code-block:
NOTE: Don't forget to add the <?php
delimiter on the first line
...\n$MAILCOW_APPS = array(\n array(\n 'name' => 'SOGo',\n 'link' => '/SOGo/'\n ),\n array(\n 'name' => 'Roundcube',\n 'link' => '/rc/'\n )\n);\n...\n
"},{"location":"third_party/roundcube/third_party-roundcube/#upgrading-roundcube","title":"Upgrading Roundcube","text":"Upgrading Roundcube is rather simple, go to the Github releases page for Roundcube and get the link for the \"complete.tar.gz\" file for the wanted release. Then follow the below commands and change the URL and Roundcube folder name if needed.
# Enter a bash session of the mailcow PHP container\ndocker exec -it mailcowdockerized-php-fpm-mailcow-1 bash\n\n# Install required upgrade dependency, then upgrade Roundcube to wanted release\napk add rsync\ncd /tmp\nwget -O - https://github.com/roundcube/roundcubemail/releases/download/1.6.0/roundcubemail-1.6.0-complete.tar.gz | tar xfvz -\ncd roundcubemail-1.6.0\nbin/installto.sh /web/rc\n\n# Type 'Y' and press enter to upgrade your install of Roundcube\n# Type 'N' to \"Do you want me to fix your local configuration\" if prompted\n\n# If you see \"NOTICE: Update dependencies by running php composer.phar update --no-dev\" just download composer.phar and run it:\ncd /web/rc\nwget https://getcomposer.org/download/2.4.2/composer.phar\nphp composer.phar update --no-dev\n# When asked \"Do you trust \"roundcube/plugin-installer\" to execute code and wish to enable it now? (writes \"allow-plugins\" to composer.json) [y,n,d,?] \" hit y and continue.\n\n\n# Remove leftover files\ncd /tmp\nrm -rf roundcube*\n\n# If you're going from 1.5 to 1.6 please run the config file changes below\nsed -i \"s/\\$config\\['default_host'\\].*$/\\$config\\['imap_host'\\]\\ =\\ 'tls:\\/\\/dovecot:143'\\;/\" /web/rc/config/config.inc.php\nsed -i \"/\\$config\\['default_port'\\].*$/d\" /web/rc/config/config.inc.php\nsed -i \"s/\\$config\\['smtp_server'\\].*$/\\$config\\['smtp_host'\\]\\ =\\ 'tls:\\/\\/postfix:587'\\;/\" /web/rc/config/config.inc.php\nsed -i \"/\\$config\\['smtp_port'\\].*$/d\" /web/rc/config/config.inc.php\nsed -i \"s/\\$config\\['managesieve_host'\\].*$/\\$config\\['managesieve_host'\\]\\ =\\ 'tls:\\/\\/dovecot:4190'\\;/\" /web/rc/config/config.inc.php\nsed -i \"/\\$config\\['managesieve_port'\\].*$/d\" /web/rc/config/config.inc.php\n
"},{"location":"third_party/roundcube/third_party-roundcube/#let-admins-log-into-roundcube-without-password","title":"Let admins log into Roundcube without password","text":"First, install plugin dovecot_impersonate and add Roundcube as an app (see above).
Edit mailcow.conf
and add the following:
# Allow admins to log into Roundcube as email user (without any password)\n# Roundcube with plugin dovecot_impersonate must be installed first\n\nALLOW_ADMIN_EMAIL_LOGIN_ROUNDCUBE=y\n
Edit docker-compose.override.yml
and crate/extend the section for php-fpm-mailcow
:
version: '2.1'\nservices:\n php-fpm-mailcow:\n environment:\n - ALLOW_ADMIN_EMAIL_LOGIN_ROUNDCUBE=${ALLOW_ADMIN_EMAIL_LOGIN_ROUNDCUBE:-n}\n
Edit data/web/js/site/mailbox.js
and the following code after if (ALLOW_ADMIN_EMAIL_LOGIN) { ... }
if (ALLOW_ADMIN_EMAIL_LOGIN_ROUNDCUBE) {\n item.action += '<a href=\"/rc-auth.php?login=' + encodeURIComponent(item.username) + '\" class=\"login_as btn btn-xs ' + btnSize + ' btn-primary\" target=\"_blank\"><i class=\"bi bi-envelope-fill\"></i> Roundcube</a>';\n}\n
Edit data/web/mailbox.php
and add this line to array $template_data
:
'allow_admin_email_login_roundcube' => (preg_match(\"/^(yes|y)+$/i\", $_ENV[\"ALLOW_ADMIN_EMAIL_LOGIN_ROUNDCUBE\"])) ? 'true' : 'false',\n
Edit data/web/templates/mailbox.twig
and add this code to the bottom of the javascript section:
var ALLOW_ADMIN_EMAIL_LOGIN_ROUNDCUBE = {{ allow_admin_email_login_roundcube }};\n
Copy the contents of the following files from this Snippet:
data/web/inc/lib/RoundcubeAutoLogin.php
data/web/rc-auth.php
Finally, restart mailcow
docker compose down\ndocker compose up -d\n
"},{"location":"troubleshooting/debug-admin_login_sogo/","title":"Admin login to SOGo","text":"This is an experimental feature that allows admins and domain admins to directly log into SOGo as a mailbox user, without knowing the users password.
For this, an additional link to SOGo is displayed in the mailbox list (mailcow UI).
Multiple concurrent admin-logins to different mailboxes are also possible when using this feature.
"},{"location":"troubleshooting/debug-admin_login_sogo/#enabling-the-feature","title":"Enabling the feature","text":"The feature is disabled by default. It can be enabled in the mailcow.conf
by setting:
ALLOW_ADMIN_EMAIL_LOGIN=y\n
and recreating the affected containers with docker compose up -d\n
"},{"location":"troubleshooting/debug-admin_login_sogo/#drawbacks-when-enabled","title":"Drawbacks when enabled","text":"SOGoTrustProxyAuthentication option is set to YES which makes SOGo trust the x-webobjects-remote-user header.
Dovecot will receive a random master-password which is valid for all mailboxes when used by the SOGo container.
Clicking on the SOGo button in the mailbox list will open sogo-auth.php which checks permissions, sets session variables and redirects to the SOGo mailbox.
Each SOGo, CardDAV, CalDAV and EAS http request will cause an additional, nginx internal auth_request call to sogo-auth.php with the following behavior:
If a basic_auth header is present, the script will validate the credentials in place of SOGo and provide the following headers: x-webobjects-remote-user
, Authorization
and x-webobjects-auth-type
.
If no basic_auth header is present, the script will check for an active mailcow admin session for the requested email user and provide the same headers but with the dovecot master password used in the Authorization
header.
If both fails the headers will be set empty, which makes SOGo use its standard authentication methods.
All of these options / behaviors are disabled if the ALLOW_ADMIN_EMAIL_LOGIN
is not enabled in the config.
To attach a container to your shell you can simply run
docker compose exec $Service_Name /bin/bash\n
"},{"location":"troubleshooting/debug-attach_service/#connecting-to-services","title":"Connecting to Services","text":"If you want to connect to a service / application directly it is always a good idea to source mailcow.conf
to get all relevant variables into your environment.
source mailcow.conf\ndocker compose exec mysql-mailcow mysql -u${DBUSER} -p${DBPASS} ${DBNAME}\n
"},{"location":"troubleshooting/debug-attach_service/#redis","title":"Redis","text":"docker compose exec redis-mailcow redis-cli\n
"},{"location":"troubleshooting/debug-attach_service/#service-descriptions","title":"Service Descriptions","text":"Here is a brief overview of what container / service does what:
Service Name Service Descriptions unbound-mailcow Local (DNSSEC) DNS Resolver mysql-mailcow Stores SOGo's and most of mailcow's settings postfix-mailcow Receives and sends mails dovecot-mailcow User logins and sieve filter redis-mailcow Storage back-end for DKIM keys and Rspamd rspamd-mailcow Mail filtering system. Used for av handling, dkim signing, spam handling clamd-mailcow Scans attachments for viruses olefy-mailcow Scans attached office documents for macro-viruses solr-mailcow Provides full-text search in Dovecot sogo-mailcow Webmail client that handles Microsoft ActiveSync and Cal- / CardDav nginx-mailcow Nginx remote proxy that handles all mailcow related HTTP / HTTPS requests acme-mailcow Automates HTTPS (SSL/TLS) certificate deployment memcached-mailcow Internal caching system for mailcow services watchdog-mailcow Allows the monitoring of docker containers / services php-fpm-mailcow Powers the mailcow web UI netfilter-mailcow Fail2Ban like integration"},{"location":"troubleshooting/debug-common_problems/","title":"Common Problems","text":"Here we list common problems and possible solutions:
"},{"location":"troubleshooting/debug-common_problems/#mail-loops-back-to-myself","title":"Mail loops back to myself","text":"Please check in your mailcow UI if you made the domain a backup MX:
"},{"location":"troubleshooting/debug-common_problems/#i-can-receive-but-not-send-mails","title":"I can receive but not send mails","text":"There are a lot of things that could prevent you from sending mail:
465
or 587
:# telnet 74.125.133.27 465\nTrying 74.125.133.27...\nConnected to 74.125.133.27.\nEscape character is '^]'.\n
"},{"location":"troubleshooting/debug-common_problems/#my-mails-are-identified-as-spam","title":"My mails are identified as Spam","text":"Please read our guide on DNS configuration.
"},{"location":"troubleshooting/debug-common_problems/#docker-compose-throws-weird-errors","title":"docker compose throws weird errors","text":"... like:
ERROR: Invalid interpolation format ...
AttributeError: 'NoneType' object has no attribute 'keys'
.ERROR: In file './docker-compose.yml' service 'version' doesn't have any configuration options
.When you encounter one or similar messages while trying to run mailcow: dockerized please check if you have the latest version of Docker and docker compose
"},{"location":"troubleshooting/debug-common_problems/#container-xy-is-unhealthy","title":"Container XY is unhealthy","text":"This error tries to tell you that one of the (health) conditions for a certain container are not met. Therefore it can't be started. This can have several reasons, the most common one is an updated git clone but old docker image or vice versa.
A wrong configured firewall could also cause such a failure. The containers need to be able to talk to each other over the network 172.22.1.1/24.
It might also be wrongly linked file (i.e. SSL certificate) that prevents a crucial container (nginx) from starting, so always check your logs to get an idea where your problem is coming from.
"},{"location":"troubleshooting/debug-common_problems/#address-already-in-use","title":"Address already in use","text":"If you get an error message like:
ERROR: for postfix-mailcow Cannot start service postfix-mailcow: driver failed programming external connectivity on endpoint mailcowdockerized_postfix-mailcow_1: Error starting userland proxy: listen tcp 0.0.0.0:25: bind: address already in use\n
while trying to start / install mailcow: dockerized, make sure you've followed our section on the prerequisites.
"},{"location":"troubleshooting/debug-common_problems/#xyz-cant-connect-to","title":"XYZ can't connect to ...","text":"Please check your local firewall! Docker and iptables-based firewalls sometimes create conflicting rules, so disable the firewall on your host to determine whether your connection issues are caused by such conflicts. If they are, you need to manually create appropriate rules in your host firewall to permit the necessary connections.
If you experience connection problems from home, please check your ISP router's firewall too, some of them block mail traffic on the SMTP (587) or SMTPS (465) ports. It could also be, that your ISP is blocking the ports for SUBMISSION (25).
While Linux users can chose from a variety of tools1 to check if a port is open, the Windows user has only the PowerShell command Test-NetConnection -ComputerName host -Port port
available by default.
To enable telnet on a Windows after Vista please check this guide or enter the following command in an terminal with administrator privileges:
dism /online /Enable-Feature /FeatureName:TelnetClient\n
"},{"location":"troubleshooting/debug-common_problems/#inotify-instance-limit-for-user-5000-uid-vmail-exceeded-see-453","title":"Inotify instance limit for user 5000 (UID vmail) exceeded (see #453)","text":"Docker containers use the Docker hosts inotify limits. Setting them on your Docker host will pass them to the container.
"},{"location":"troubleshooting/debug-common_problems/#dovecot-keeps-restarting-see-2672","title":"Dovecot keeps restarting (see #2672)","text":"Check that you have at least the following files in data/assets/ssl
:
cert.pem\ndhparams.pem\nkey.pem\n
If dhparams.pem
is missing, you can generate it with
openssl dhparam -out data/assets/ssl/dhparams.pem 4096\n
netcat, nmap, openssl, telnet, etc.\u00a0\u21a9
Warning
This section only applies for Dockers default logging driver (JSON).
To view the logs of all mailcow: dockerized related containers, you can use docker compose logs
inside your mailcow-dockerized folder that contains your mailcow.conf
. This is usually a bit much, but you could trim the output with --tail=100
to the last 100 lines per container, or add a -f
to follow the live output of all your services.
To view the logs of a specific service you can use docker compose logs [options] $service_name
Info
The available options for the command docker compose logs are:
If your server crashed and MariaDB logs an error similar to [ERROR] mysqld: Aria recovery failed. Please run aria_chk -r on all Aria tables (*.MAI) and delete all aria_log.######## files
you may want to try the following to recover the database to a healthy state:
Start the stack and wait until mysql-mailcow begins to report a restarting state. Check by running docker compose ps
.
Now run the following commands:
# Stop the stack, don't run \"down\"\ndocker compose stop\n# Run a bash in the stopped container as user mysql\ndocker compose run --rm --entrypoint '/bin/sh -c \"gosu mysql bash\"' mysql-mailcow\n# cd to the SQL data directory\ncd /var/lib/mysql\n# Run aria_chk\naria_chk --check --force */*.MAI\n# Delete aria log files\nrm aria_log.*\n
Now run docker compose down
followed by docker compose up -d
.
The following step is usually not necessary.
docker compose stop mysql-mailcow watchdog-mailcow\ndocker compose run --rm --entrypoint '/bin/sh -c \"gosu mysql mysqld --skip-grant-tables & sleep 10 && bash && exit 0\"' mysql-mailcow\n
As soon as the SQL shell has spawned, run mysql_upgrade
and exit the container:
mysql_upgrade\nexit\n
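Finally, bring the stack back up and verify that mysql-mailcow stays up, for example by checking its state and the last log lines:
docker compose up -d\ndocker compose ps mysql-mailcow\ndocker compose logs --tail=50 mysql-mailcow\n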
"},{"location":"troubleshooting/debug-reset_pw/","title":"Reset Passwords (incl. SQL)","text":""},{"location":"troubleshooting/debug-reset_pw/#mailcow-admin-account","title":"mailcow Admin Account","text":"Resets the mailcow admin account to a random password. Older mailcow: dockerized installations may find the mailcow-reset-admin.sh
script in their mailcow root directory (mailcow_path).
cd mailcow_path\n./helper-scripts/mailcow-reset-admin.sh\n
"},{"location":"troubleshooting/debug-reset_pw/#reset-mysql-passwords","title":"Reset MySQL Passwords","text":"Stop the stack by running docker compose stop
.
Once the containers have stopped, run this command:
docker compose run --rm --entrypoint '/bin/sh -c \"gosu mysql mysqld --skip-grant-tables & sleep 10 && mysql -hlocalhost -uroot && exit 0\"' mysql-mailcow\n
"},{"location":"troubleshooting/debug-reset_pw/#1-find-database-name","title":"1. Find database name","text":"# source mailcow.conf\n# docker compose exec mysql-mailcow mysql -u${DBUSER} -p${DBPASS} ${DBNAME}\nMariaDB [(none)]> show databases;\n+--------------------+\n| Database |\n+--------------------+\n| information_schema |\n| mailcow_database | <=====\n| mysql |\n| performance_schema |\n+--------------------+\n4 rows in set (0.00 sec)\n
"},{"location":"troubleshooting/debug-reset_pw/#2-reset-one-or-more-users","title":"2. Reset one or more users","text":""},{"location":"troubleshooting/debug-reset_pw/#21-maria-db-104-older-mailcow-installations","title":"2.1 Maria DB < 10.4 (older mailcow installations)","text":"Both \"password\" and \"authentication_string\" exist. Currently \"password\" is used, but better set both.
MariaDB [(none)]> SELECT user FROM mysql.user;\n+--------------+\n| user |\n+--------------+\n| mailcow | <=====\n| root |\n+--------------+\n2 rows in set (0.00 sec)\n\nMariaDB [(none)]> FLUSH PRIVILEGES;\nMariaDB [(none)]> UPDATE mysql.user SET authentication_string = PASSWORD('gotr00t'), password = PASSWORD('gotr00t') WHERE User = 'root';\nMariaDB [(none)]> UPDATE mysql.user SET authentication_string = PASSWORD('mookuh'), password = PASSWORD('mookuh') WHERE User = 'mailcow' AND Host = '%';\nMariaDB [(none)]> FLUSH PRIVILEGES;\n
"},{"location":"troubleshooting/debug-reset_pw/#22-maria-db-104-current-mailcows","title":"2.2 Maria DB >= 10.4 (current mailcows)","text":"MariaDB [(none)]> SELECT user FROM mysql.user;\n+--------------+\n| user |\n+--------------+\n| mailcow | <=====\n| root |\n+--------------+\n2 rows in set (0.00 sec)\n\nMariaDB [(none)]> FLUSH PRIVILEGES;\nMariaDB [(none)]> ALTER USER 'mailcow'@'%' IDENTIFIED BY 'mookuh';\nMariaDB [(none)]> ALTER USER 'root'@'%' IDENTIFIED BY 'gotr00t';\nMariaDB [(none)]> ALTER USER 'root'@'localhost' IDENTIFIED BY 'gotr00t';\nMariaDB [(none)]> FLUSH PRIVILEGES;\n
"},{"location":"troubleshooting/debug-reset_pw/#remove-two-factor-authentication","title":"Remove Two-Factor Authentication","text":""},{"location":"troubleshooting/debug-reset_pw/#for-mailcow-webui","title":"For mailcow WebUI:","text":"This works similar to resetting a MySQL password, now we do it from the host without connecting to the MySQL CLI:
source mailcow.conf\ndocker compose exec mysql-mailcow mysql -u${DBUSER} -p${DBPASS} ${DBNAME} -e \"DELETE FROM tfa WHERE username='YOUR_USERNAME';\"\n
"},{"location":"troubleshooting/debug-reset_pw/#for-sogo","title":"For SOGo:","text":"docker compose exec -u sogo sogo-mailcow sogo-tool user-preferences set defaults user@example.com SOGoGoogleAuthenticatorEnabled '{\"SOGoGoogleAuthenticatorEnabled\":0}'\n
"},{"location":"troubleshooting/debug-reset_tls/","title":"Reset TLS certificates","text":"In case you encounter problems with your certificate, key or Let's Encrypt account, please try to reset the TLS assets:
source mailcow.conf\ndocker compose down\nrm -rf data/assets/ssl\nmkdir data/assets/ssl\nopenssl req -x509 -newkey rsa:4096 -keyout data/assets/ssl-example/key.pem -out data/assets/ssl-example/cert.pem -days 365 -subj \"/C=DE/ST=NRW/L=Willich/O=mailcow/OU=mailcow/CN=${MAILCOW_HOSTNAME}\" -sha256 -nodes\ncp -n -d data/assets/ssl-example/*.pem data/assets/ssl/\ndocker compose up -d\n
This will source the variables we need, stop mailcow, create a self-signed certificate and start mailcow again.
If you use Let's Encrypt you should be careful, as this will create a new account and a new set of certificates; you will run into a rate limit sooner or later.
Please also note that previous TLSA records will be invalid.
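To confirm which certificate is now in place (and, if you use DANE, to derive new TLSA records from it), you can inspect it with openssl:
openssl x509 -in data/assets/ssl/cert.pem -noout -subject -dates -sha256 -fingerprint\n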
"},{"location":"troubleshooting/debug-rm_volumes/","title":"Remove Persistent Data","text":"You may want to remove a set of persistent data to resolve a conflict or to start over.
The volume name prefix mailcowdockerized can vary and depends on your compose project name (if it's unchanged, mailcowdockerized is the correct value). If you are unsure about volume names, run docker volume ls for a full list.
Delete a single volume:
docker volume rm mailcowdockerized_${VOLUME_NAME}\n
mysql-vol-1
to remove all MySQL data.redis-vol-1
to remove all Redis data.vmail-vol-1
to remove all contents of /var/vmail
mounted to dovecot-mailcow
.rspamd-vol-1
to remove all Rspamd data.crypt-vol-1
to remove all crypto data. This will render all mails unreadable.Alternatively, running docker compose down -v
will destroy all mailcow: dockerized volumes and delete any related containers and networks.
A quick guide to deeply analyze a malfunctioning Rspamd.
docker compose exec rspamd-mailcow bash\n\nif ! grep -qi 'apt-stable-asan' /etc/apt/sources.list.d/rspamd.list; then\n sed -i 's/apt-stable/apt-stable-asan/i' /etc/apt/sources.list.d/rspamd.list\nfi\n\napt-get update ; apt-get upgrade rspamd\n\nnano /docker-entrypoint.sh\n\n# Before \"exec \"$@\"\" add the following lines:\n\nexport G_SLICE=always-malloc\nexport ASAN_OPTIONS=new_delete_type_mismatch=0:detect_leaks=1:detect_odr_violation=0:log_path=/tmp/rspamd-asan:quarantine_size_mb=2048:malloc_context_size=8:fast_unwind_on_malloc=0\n
Restart Rspamd: docker compose restart rspamd-mailcow
Your memory consumption will increase a lot and will also grow steadily; this is not related to the possible memory leak you are looking for.
Leave the container running for a few minutes, hours or days (it should match the time you usually wait for the leak to \"happen\") and restart it: docker compose restart rspamd-mailcow
.
Now enter the container by running docker compose exec rspamd-mailcow bash
, change the directory to /tmp and copy the ASan files to your desired location or upload them via termbin.com (cat /tmp/rspamd-asan.* | nc termbin.com 9999
).
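If you prefer to copy the ASan reports to the host instead of uploading them, docker compose cp can be used; the file name below is just a placeholder, so list /tmp inside the container first to get the real names:
docker compose exec rspamd-mailcow ls /tmp\ndocker compose cp rspamd-mailcow:/tmp/rspamd-asan.12345 .\n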
When a problem occurs, it always happens for a reason! What you want to do in such a case is: