I stumbled across this problem while setting up duplicity backups to an S3 bucket. As it took me quite a while to resolve, I want to document the problem and its solution here. I just hope someone else running into the same problem will find this blog post.
I tried to set up duply, a frontend for the backup tool duplicity, to back up to Amazon S3 storage.
The challenge was that I wanted to do this with the versions available in Debian wheezy. The problem described here is probably already fixed in duplicity >= 0.7.0. These are the versions I used:
i  duplicity    wheezy-backports  0.6.24-1~bpo70
i  duply        stable            1.5.5.5-1
i  python-boto  wheezy-backports  2.25.0-1~bpo7
I added S3 as a target to the duply configuration as documented in various places on the web. However, I always ran into this error message:
$ duply donkey-s3-test status
Start duply v1.5.5.5, time is 2015-03-12 00:03:40.
Using profile '/etc/duply/donkey-s3-test'.
Using installed duplicity version 0.6.24, python 2.7.3, gpg 1.4.12 (Home: ~/.gnupg), awk 'GNU Awk 4.0.1', bash '4.2.37(1)-release (x86_64-pc-linux-gnu)'.
Signing disabled. Not GPG_KEY entries in config.
Test - Encryption with passphrase (OK)
Test - Decryption with passphrase (OK)
Test - Compare (OK)
Cleanup - Delete '/tmp/duply.10622.1426115020_*'(OK)

--- Start running command STATUS at 00:03:40.984 ---
BackendException: No connection to backend
00:03:41.301 Task 'STATUS' failed with exit code '23'.
--- Finished state FAILED 'code 23' at 00:03:41.301 - Runtime 00:00:00.316 ---
Similar occurrences of this bug are also tracked here: https://bugs.launchpad.net/duplicity/+bug/1278529
The exception above is highly unspecific, and returning such a generic error message is bad style in my opinion. It took me quite a while to find the solution. To make it short: with this snippet in my /etc/duply/donkey-s3-test/conf file, I got it to work:
TARGET='s3://s3-eu-central-1.amazonaws.com/.../'
TARGET_USER='...'
TARGET_PASS='...'
DUPL_PARAMS="$DUPL_PARAMS --s3-use-rrs"
# XXX: workaround for S3 with boto to s3-eu-central-1
export S3_USE_SIGV4="True"
Using a shell export in the configuration file is clearly a hack, but it works. Alternatively, you can export the variable in the environment before running duply, or set it in the configuration file of the boto library. However, with the former, you do not have to change anything outside the duply profile.
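To illustrate the boto-config alternative, here is a sketch (not from the original post): boto reads its configuration from ~/.boto (or /etc/boto.cfg), and Signature Version 4 can be enabled there via the use-sigv4 option. Merge by hand if you already have an [s3] section in that file.

```shell
# Sketch: enable Signature Version 4 in boto's own config file instead of
# the duply profile. This appends a new [s3] section to ~/.boto.
cat >> "$HOME/.boto" <<'EOF'
[s3]
use-sigv4 = True
EOF
```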
Why does this solve the problem?
I found out that the problem was not reproducible for some people because it only appears in specific regions. I use Frankfurt, EU (eu-central-1) as my Amazon S3 region. According to the documentation, only the newest API V4 is supported in this region:
Any new regions after January 30, 2014 will support only Signature Version 4 and therefore all requests to those regions must be made with Signature Version 4.
The region Frankfurt, EU was introduced after this date. This means the new region only accepts requests with “Signature Version 4” and not any prior version, while other regions continue to accept the old API requests.
This kind of setup is complete madness to me. Especially for open source projects with developers all around the globe, it means that some developers simply cannot reproduce the problem. Who would assume that your endpoint region matters?
In fact, the duplicity manual page has a whole section on how European endpoints differ from other locations. Unfortunately, the recommended --s3-use-new-style --s3-european-buckets does not solve this problem; I could not even observe any difference in behavior with these flags.
The boto library used by duplicity to access Amazon S3 supports the new “Signature Version 4” for API requests, but it is not enabled by default. Exporting the environment variable S3_USE_SIGV4=True forces the library to use “Signature Version 4”.
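As a minimal sketch, the variable can also be exported in the shell before invoking duply, instead of inside the profile (the profile name is taken from the example above):

```shell
# Force boto to sign S3 API requests with AWS Signature Version 4
export S3_USE_SIGV4="True"
# then run duply as usual, for example:
# duply donkey-s3-test status
```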
The specification of the target protocol for duplicity is another peculiarity. Make sure you use the s3:// scheme and specify the explicit endpoint hostname for your region in the URL; I could not get it to work with s3+http://.
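For illustration, here is a sketch of the two URL styles; the bucket name and path are placeholders:

```shell
# Worked for me: s3:// scheme with the explicit regional endpoint hostname
TARGET='s3://s3-eu-central-1.amazonaws.com/my-bucket/backup/'

# Did not work for me: the s3+http:// scheme without a regional hostname
#TARGET='s3+http://my-bucket/backup/'
```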
Unfortunately, the duplicity option --s3-use-rrs, which is supposed to put the files into the cheaper Reduced Redundancy Storage (RRS), does not seem to do anything: all uploaded files get the standard storage class. I will probably have to maintain my own installation of the latest versions of duplicity and boto to get all the features working.
Depending on where you are in the world, YMMV.
Edit 2018-05-23: Fixed a typo in DUPL_PARAMS. Thanks to Zedino.
Thanks for your nice post. I’m experiencing the same issue and tried to do what is explained above, but I’m receiving an error saying:
Backtrace of previous error: Traceback (innermost last):
File “/usr/lib/python2.7/dist-packages/duplicity/backend.py”, line 365, in inner_retry
return fn(self, *args)
File “/usr/lib/python2.7/dist-packages/duplicity/backend.py”, line 540, in put
File “/usr/lib/python2.7/dist-packages/duplicity/backend.py”, line 526, in __do_put
File “/usr/lib/python2.7/dist-packages/duplicity/backends/_boto_single.py”, line 242, in _put
self.upload(source_path.name, key, headers)
File “/usr/lib/python2.7/dist-packages/duplicity/backends/_boto_single.py”, line 293, in upload
num_cb=(max(2, 8 * globals.volsize / (1024 * 1024)))
File “/usr/local/lib/python2.7/dist-packages/boto/s3/key.py”, line 1362, in set_contents_from_filename
File “/usr/local/lib/python2.7/dist-packages/boto/s3/key.py”, line 1293, in set_contents_from_file
File “/usr/local/lib/python2.7/dist-packages/boto/s3/key.py”, line 750, in send_file
File “/usr/local/lib/python2.7/dist-packages/boto/s3/key.py”, line 951, in _send_file_internal
File “/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py”, line 668, in make_request
File “/usr/local/lib/python2.7/dist-packages/boto/connection.py”, line 1071, in make_request
File “/usr/local/lib/python2.7/dist-packages/boto/connection.py”, line 1030, in _mexe
error: [Errno 104] Connection reset by peer
Attempt 1 failed. error: [Errno 104] Connection reset by peer
Do you have any clue?
My only suggestion would be to check the S3 connection string according to this issue. Did you specify an explicit region?
Thanks dude, it saved my day 🙂
Many thanks !!
Just a question: what is the purpose of DUPLY_PARAMS="$DUPLY_PARAMS --s3-use-rrs"?
It worked for me without this parameter.
By the way, in the duply conf file the params variable is DUPL_PARAMS, not DUPLY_PARAMS, even if both are useless for me.
OK, RRS is for Reduced Redundancy Storage, as you said :). It works for me; maybe it is because of your variable name, DUPLY_PARAMS vs. DUPL_PARAMS.
Thank you again
Hello, this is a script that does the job (without duply):
I fixed the typo in DUPL_PARAMS. Thank you for noticing!