Author Archives: Rainer Müller

Setting up dovecot-antispam with SpamAssassin

dovecot-antispam is a plugin for the Dovecot IMAP server that automatically runs a classifier tool to train your spam filter whenever you move a mail into or out of the Junk folder. As it is written with a generic interface, you can configure a command to be run whenever such an event occurs. The command is called with a configurable argument indicating whether the mail should be considered spam or not, and the mail itself is piped to its standard input.

# /etc/dovecot/conf.d/90-plugin.conf
plugin {
  # dovecot-antispam: pipe moved mails to an external classifier
  antispam_backend = pipe
  # possible names of the trash folder
  antispam_trash = trash;Trash;Deleted Items;Deleted Messages
  # name of the spam folder
  antispam_spam = Junk
  # wrapper script and the arguments to pass for spam/ham
  antispam_pipe_program = /usr/local/sbin/sa-learn-pipe
  antispam_pipe_program_spam_arg = --spam
  antispam_pipe_program_notspam_arg = --ham
  antispam_pipe_tmpdir = /tmp
}

Now this should be a sane interface on any Unix system, as pipes are the preferred way of handling input. However, if an error occurs, the log files do not include any helpful output from the failed program, only the fact that it failed. Therefore I wrote a small wrapper around sa-learn(1), the SpamAssassin tool for training the Bayesian classifier.

The example script on the wiki page for dovecot-antispam uses temporary files to pass the mail content as a file. However, sa-learn(1) also accepts the common dash "-" as an argument, which makes it create a temporary file from the contents of stdin internally. Although this feature is undocumented, I looked into the source to confirm it works as expected. If the command fails, the wrapper script below records the full output with logger to the mail.err facility in syslog.

#!/bin/bash

# /usr/local/sbin/sa-learn-pipe
# Wrapper for sa-learn(1): reads the mail from stdin ("-") and logs the
# full output to syslog (mail.err) if sa-learn exits with an error.

out=$(sa-learn "$@" - 2>&1)
ret=$?

if [ "$ret" -ne 0 ]; then
    # log as e.g. "spam: <output>" or "ham: <output>", stripping the dashes from $1
    logger -p mail.err -i -t "${0##*/}" "${1//[^a-z]/}: $out"
fi

exit $ret
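
For reference, the wrapper can be tested by hand in the same way the plugin invokes it, with the configured argument and a message on stdin; the message path here is just a placeholder:

# feed a known spam mail (placeholder path) to the wrapper, exactly as the plugin would
/usr/local/sbin/sa-learn-pipe --spam < /path/to/spam-message.eml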

My previous way of implementing spam learning was to move spam mails into a special directory, where a cronjob would pick them up and pass them to sa-learn. I like the new approach much better, as it integrates nicely with the “Mark as Spam” actions of most IMAP clients. In addition, I expunge old spam mails with a cronjob deleting all mails created more than 30 days ago in the Junk folder, as sketched below.
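
A minimal sketch of such a cronjob using Dovecot's doveadm, assuming Dovecot 2.x and using the save date as an approximation; the script location is only an example:

#!/bin/sh
# /etc/cron.daily/expunge-junk (example location)
# delete mails saved more than 30 days ago from every user's Junk folder
doveadm expunge -A mailbox Junk savedbefore 30d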

A custom clock face for my kitchen clock made with TikZ in LaTeX

A few weeks ago my kitchen clock broke. Actually, it was not the clock itself that broke. Instead, the clock face inside began to dissolve and fell apart into small pieces that kept the hands from moving. I have no idea how that happened. It might have been the location in the kitchen above the window, with its exposure to grease and high humidity. Maybe it was a more general problem and the face simply failed due to its age. This clock must have been in my household for more than nine years now.

Finding the right size from design to printing

I pondered getting a new clock. However, I could not easily find one that matched my expectations in both price and design. The clockwork itself is still functioning as it should, and the rest of the plastic housing is also fine. Therefore, I decided to breathe new life into this one by designing and printing a new clock face.

Continue reading

Die Briefmarke (The Postage Stamp)

What follows is a longer treatise about my attempts to send a letter with Deutsche Post. Or rather, my attempts to acquire a postage stamp for that purpose. Any attempt at being funny here is purely coincidental, because really, this should be perfectly simple, shouldn't it?

It all began in January 2016, when I wanted to cancel a contract. The corresponding form was quickly found on the company's website and filled out. But then it carried a note asking that it please be sent by post or fax. Naturally, I thought I would simply print it out, sign it, scan it again and send the PDF by e-mail. After all, that often works just fine. But no, the accounting department insists that I send the cancellation by post or fax. Fine then, I will send a letter after all. Like some kind of caveman.

I have to admit that it has been a really long time since I last sent a letter by post. Or more precisely, a long time since I last put a stamp on one. The letters I otherwise had to send were always marked “Porto zahlt Empfänger” (postage paid by recipient) or came with a pre-stamped return envelope. But I do not want to send this cancellation with unpaid postage either, since I want to part from the contract with the company concerned on good terms. So I have to buy a postage stamp. The postage for a standard letter is 70 ct. That much you still pick up, even if you never use the service. So where do I get a postage stamp?

Continue reading

Should we distrust Comodo after it issued a rogue SSL certificate for Windows Live?

About a year ago, I wrote an article on why I no longer trust StartSSL. Back then, I said I had switched to a paid certificate issued by Comodo under the PositiveSSL brand instead. A reader now brought to my attention a recent case in which Comodo erroneously issued a certificate for Microsoft’s Windows Live, and asked whether I would still prefer them over StartSSL.

Arno wrote this comment (link):

Do you still trust Commodo to be more trustworthy than StartCom just because they asked for money to handle revocations? Think twice – a guy from Finland managed to get a valid certificate from Commodo for “live.fi”, (Microsoft Live in Finland), just because he was able to register “hostmaster@live.fi” as his e-mail-address:

http://arstechnica.com/security/2015/03/bogus-ssl-certificate-for-windows-live-could-allow-man-in-the-middle-hacks/

I started to type my answer as a comment as well, but I soon realized my explanation was getting too long for a comment, so I turned it into an article of its own.
Continue reading

Backup with duply to Amazon S3: BackendException: No connection to backend

I stumbled across this problem while setting up duplicity backups to an S3 bucket. As it took me quite a while to resolve, I want to document the problem and its solution here. I hope someone else running into the same problem will find this blog post.

I tried to set up duply, a frontend for the backup tool duplicity, to back up to Amazon S3 storage.

The challenge was that I wanted to do this with the versions available in Debian wheezy. The problem described here is probably already fixed in duplicity >= 0.7.0. These are the versions I used:

i  duplicity     wheezy-backports   0.6.24-1~bpo70
i  duply         stable             1.5.5.5-1
i  python-boto   wheezy-backports   2.25.0-1~bpo7

Problem

I added S3 as a target to the duply configuration as documented on various places on the web. However, I always ran into this error message:

$ duply donkey-s3-test status
Start duply v1.5.5.5, time is 2015-03-12 00:03:40.
Using profile '/etc/duply/donkey-s3-test'.
Using installed duplicity version 0.6.24, python 2.7.3, gpg 1.4.12 (Home: ~/.gnupg), awk 'GNU Awk 4.0.1', bash '4.2.37(1)-release (x86_64-pc-linux-gnu)'.
Signing disabled. Not GPG_KEY entries in config.
Test - Encryption with passphrase (OK)
Test - Decryption with passphrase (OK)
Test - Compare (OK)
Cleanup - Delete '/tmp/duply.10622.1426115020_*'(OK)

--- Start running command STATUS at 00:03:40.984 ---
BackendException: No connection to backend
00:03:41.301 Task 'STATUS' failed with exit code '23'.
--- Finished state FAILED 'code 23' at 00:03:41.301 - Runtime 00:00:00.316 ---

Similar occurrences of this bug are also tracked here: https://bugs.launchpad.net/duplicity/+bug/1278529

Solution

The exception above is highly unspecific, and returning such a generic error message is bad style in my opinion. It took me quite a while to find the solution. To make it short: with this snippet in my /etc/duply/donkey-s3-test/conf file, I got it to work:

TARGET='s3://s3-eu-central-1.amazonaws.com/.../'
TARGET_USER='...'
TARGET_PASS='...'
DUPL_PARAMS="$DUPL_PARAMS --s3-use-rrs"
# XXX: workaround for S3 with boto to s3-eu-central-1
export S3_USE_SIGV4="True"

Using a shell export in the configuration file is clearly a hack, but it works. Alternatively, you can export the variable in the environment before running duply, or set it in the configuration file of the boto library. Keeping it in the duply profile, however, means you do not have to change anything about the duply invocation.
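
For illustration, the environment-variable alternative could look like this, using the profile name from the example above:

# set the variable for a single invocation instead of in the duply profile
S3_USE_SIGV4="True" duply donkey-s3-test status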

Why does this solve the problem?

I found out that the problem was not reproducible for some people because it only appears in specific regions. I use Frankfurt, EU (eu-central-1) as my Amazon S3 region. According to the documentation, only the newer Signature Version 4 is supported in this region:

Any new regions after January 30, 2014 will support only Signature Version 4 and therefore all requests to those regions must be made with Signature Version 4.

The region Frankfurt, EU was introduced after this date. This means the new region only accepts requests signed with Signature Version 4 and not with any prior version, while other regions continue to accept requests with older signature versions.

This kind of setup is complete madness to me. Especially for open source projects with developers all around the globe, it means that some developers simply cannot reproduce the problem. Who would assume that the endpoint region matters?

In fact, the duplicity manual page has a whole section on how European endpoints differ from other locations. Unfortunately, the recommended --s3-use-new-style --s3-european-buckets options do not solve this problem; I could not even observe any difference in behavior with these flags.

Apparently, the boto library used by duplicity for access to Amazon S3 supports the new “Signature Version 4” for API requests, but it is not enabled by default. Exporting the environment variable S3_USE_SIGV4=True forces the library to use it.
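
If you would rather configure this in boto itself than through the environment, something like the following in ~/.boto should have the same effect; this is an assumption based on my reading of the boto 2.x documentation, so verify it against your version:

# ~/.boto -- assumed equivalent of exporting S3_USE_SIGV4=True
[s3]
use-sigv4 = True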

The specification of the target protocol for duplicity is another peculiarity. Make sure you use s3:// and always specify the explicit endpoint hostname for your region in the URL; I could not get it to work with s3+http://.

Further Investigations

Unfortunately, the duplicity option --s3-use-rrs, which is supposed to put the files into the cheaper Reduced Redundancy Storage (RRS), does not seem to do anything, and all uploaded files end up with the standard storage class. I will probably have to maintain my own installation of the latest versions of duplicity and boto to get all of the features to work.

Depending on where you are in the world, YMMV.


Edit 2018-05-23: Fixed a typo in DUPL_PARAMS. Thanks to Zedino.