AutoPirate - Funky Penguin's Geek Cookbook

Once the cutting edge of the "internet" (pre-world-wide-web and Mosaic days), Usenet is now a murky, geeky alternative to torrents for file-sharing. It's still cool-geeky though, especially if you're into having a fully automated media platform.


This is a companion discussion topic for the original entry at https://geek-cookbook.funkypenguin.co.nz/recipies/autopirate/

I’ve got some funny business going on here which is just doing my head in!

I got things working once, but I had loads of crap hanging around from before I started following these recipes, so I very stupidly rebuilt my server using Ubuntu to start afresh (I tried Atomic but that was just a total mind-bending experience lol)

I’ve got Portainer, Duplicity etc. all working nicely, so I know that Traefik is set up properly. However, when it comes to AutoPirate, there seems to be some breakdown between Traefik, OAuth2 and the other services :cry:

If I go to any of my service URLs, I get taken through the GitHub OAuth2 process and a shiny token is spat out the end. However, I never get routed back to the actual container. I just get a blank screen and the oauth proxy logs state:

oauthproxy.go:535: 10.1.0.17:54908 ("xxx.xxx.xxx.xxx") Cookie "_oauth2_proxy" not present
github.go:222: got 200 from "https://api.github.com/user/emails?access_token=3b839c99ca337d1152ab2482a59256exxxxxxxxx" [{"email":"[email protected]","primary":true,"verified":true,"visibility":"private"}]
oauthproxy.go:490: 10.1.0.17:54908 ("xxx.xxx.xxx.xxx") authentication complete Session{[email protected] token:true}
reverseproxy.go:275: http: proxy error: dial tcp xxx.xxx.xxx.xxx:6789: i/o timeout
reverseproxy.go:275: http: proxy error: dial tcp xxx.xxx.xxx.xxx:6789: i/o timeout
reverseproxy.go:275: http: proxy error: dial tcp xxx.xxx.xxx.xxx:6789: i/o timeout
reverseproxy.go:275: http: proxy error: dial tcp xxx.xxx.xxx.xxx:6789: i/o timeout

xxx.xxx.xxx.xxx is the external IP for my domain, so I’m guessing that’s where the problem lies? At this point I’ve gone over the files so much that I’m on the verge of starting again lol

In this example, I’m trying to access my NZBGet URL, and port 6789 is definitely correct and taken straight from the recipe. It’s the same for all my AutoPirate URLs though - anything involving the OAuth proxy just goes nowhere.

Here is the NZBGet service taken from my autopirate.yml file:

  nzbget:
    image: linuxserver/nzbget
    env_file: /var/data/config/autopirate/nzbget.env
    volumes:
      - /var/data/autopirate/nzbget:/config
      - /mnt/media:/data
    networks:
      - traefik_public

  nzbget_proxy:
    image: zappi/oauth2_proxy
    env_file: /var/data/config/autopirate/nzbget.env
    networks:
      - internal
      - traefik_public
    deploy:
      labels:
        - traefik.frontend.rule=Host:nzbget.my.domain.co.uk
        - traefik.docker.network=traefik_public
        - traefik.port=4180
    volumes:
      - /var/data/config/autopirate/authenticated-emails.txt:/authenticated-emails.txt
    command: |
      -cookie-secure=false
      -upstream=http://nzbget:6789
      -redirect-url=https://nzbget.my.domain.co.uk
      -http-address=http://0.0.0.0:4180
      -email-domain=my.domain.co.uk
      -provider=github
      -authenticated-emails-file=/authenticated-emails.txt

And here is the env file:

OAUTH2_PROXY_CLIENT_ID=4935081605exxxxxxxxx
OAUTH2_PROXY_CLIENT_SECRET=bcfcc005ca61d63c51d2f2xxxxxxxxxxxxxxxxxx
OAUTH2_PROXY_COOKIE_SECRET=xxxxxxxxx
PUID=4242
PGID=4242

Any ideas?? I’m going round and round in circles. I’m thinking about starting again but focusing just on Traefik and AutoPirate before worrying about NextCloud etc.

Do you perhaps have (a) a wildcard DNS record, and (b) the same DNS suffix on your Ubuntu host as your actual domain (my.domain.co.uk)?

If so, you may be encountering a problem whereby oauth2_proxy would prefer to resolve container names by valid public DNS vs the stack’s own service-discovery DNS mechanism. (Yes, I had this problem myself recently)

Try applying an extra (non-existent) “dns_search” domain to the oauth2 containers, as illustrated here:
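For reference, a minimal sketch of what I mean, based on the nzbget_proxy service in your compose file above (the search domain itself is a made-up placeholder - it just needs to be one that doesn't resolve publicly):

```yaml
  nzbget_proxy:
    image: zappi/oauth2_proxy
    # A deliberately bogus search domain: with this set, "nzbget" won't
    # match your public wildcard DNS record, so the proxy falls back to
    # Docker's internal service-discovery DNS to find the upstream
    dns_search: myservices.invalid
    networks:
      - internal
      - traefik_public
```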

D

Separate topic - I added rtorrent/rutorrent to the stack, in response to requests for “torrent love” :slight_smile:

I decided to start again, before nearly throwing my PC out the window yesterday lol

This time, I started with just Traefik and AutoPirate to try and narrow down any issues. I copied over your recipe including the dns_search var. Rather than blank pages, I kept getting 500 server errors after completing the OAuth verification, and the proxy container logs said it couldn’t look up github.com. Bit weird, considering it had just sent me to GitHub to do the auth in the first place.

I logged on to the container to try out the networking and, sure enough, I couldn’t ping any external websites or IPs. Not sure if the networking had just fallen over or what. Just in case I had made the same mistakes as yesterday, I took out the dns_search bit and restarted everything and now - IT WORKS!!

I’m 99.999999% sure that everything is the same in the autopirate and traefik compose files but I must have buggered something up or else it would have worked yesterday too!

Thanks for your help, as always! :smiley:

Woohoo! The DNS suffix thing is an edge case BTW, it shouldn’t have stopped all DNS working!

I hate to admit it, but I’m really struggling to bring this stack up, given my limited Docker Swarm experience.
Can anyone share their real (non-example) configuration? I could do with some pointers.

I agree it’s definitely the most complex recipe yet. The example in the premix repo is very close to what I’m using in RealLife ™. Are you on telegram/signal? It might be easier to help you debug in realtime :wink:

So, after our Telegram conversation, I added a section to the cookbook re troubleshooting recipes: Troubleshooting |・∀・

Hey, I recently got this all set up. It was definitely a challenge, and it’s taken a few attempts and several days lol. I’m still trying to configure all the services to work together but I’m almost there. Here are some tips (and edits for the chef :stuck_out_tongue:)

I’m doing this on a single dusty old box at home running Ubuntu Server - I tried Atomic, but getting it set up following their bare-metal instructions was a step too far for my skills lol. So these hints might help those working from home:

  1. Make sure to open ports 80 and 443 on your router (plus port 32400 if you plan on adding Plex)
  2. Depending on your hosting/DNS provider, set up a wildcard DNS entry (*.example.com) which points to your preferred IP address. If you are doing this at home on a dynamic IP, you can use something like ddclient to update your IP automatically - I know Namecheap accept wildcard subdomains on their DDNS settings
  3. All the other recipes are great but I had some problems getting them all going at once. Start with Docker Swarm, then Traefik, then AutoPirate. Check that you can access the services, then feel free to add the other components in, probably starting with Duplicity so you can back up a known good config lol
  4. When setting up Traefik, you need to adjust the permissions on the acme.json file, or Traefik will run but won’t be able to generate any LetsEncrypt certs. If you don’t get this right, you might end up hitting LetsEncrypt’s rate limit and having to wait a week before you can get real SSL certs. Yes, that happened :cry: Make sure you do chmod 600 /var/data/config/traefik/acme.json before starting Traefik for the first time. (Chef: could you add this bit to your recipe?)
  5. Careful following the latest AutoPirate instructions - the most recent update has removed some of the indentations from the examples and I found some of the variables were ignored (Chef: can you please check this - every line for each service needs at least 2 spaces, otherwise they aren’t recognised when copied/pasted, some of the volumes/env vars need 3 spaces adding)
  6. If you go with NZBGet over SABnzbd, you need to add the env_file variable from nzbget_proxy to the proper nzbget service or else it won’t run as your chosen UID/GID (Chef: can you please check this out too)
  7. Pay attention to your volume mounts in the AutoPirate yml file. When it comes to getting your services configured, your downloader (NZBGet/SAB/rTorrent) might save files to /downloads/complete/* but if the search tools like Sonarr don’t have access to the exact same path, they won’t be able to find the files. It took me a while to realise that I had given Sonarr access to the root of my media storage folders whereas I had given my downloaders access only to the Download folder buried within. Both services could access the same files but each had a different path to get there so file discovery failed.
  8. Definitely follow the steps to add docker aliases and command completion from the Docker Swarm recipe. It will help you retrieve logs for specific containers etc. without having to constantly list services to remember what they are called. Then follow the Portainer recipe to make it even easier!
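To make tips 6 and 7 concrete, here's roughly how my NZBGet service ended up (the paths are from my own setup - adjust them to match yours):

```yaml
  nzbget:
    image: linuxserver/nzbget
    # Tip 6: the env_file (carrying PUID/PGID) needs to be on the
    # nzbget service itself, not just on nzbget_proxy, or the
    # container won't run as your chosen UID/GID
    env_file: /var/data/config/autopirate/nzbget.env
    volumes:
      - /var/data/autopirate/nzbget:/config
      # Tip 7: mount the same host path at the same container path in
      # Sonarr/Radarr etc., so that a download saved under
      # /data/downloads/complete is visible at that exact path to the
      # tools importing it
      - /mnt/media:/data
```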

I think that covers most of the issues I stumbled across on my most recent attempt. I’ll update this if I remember any more. Seems like you might be sorted now anyway, but I thought I’d post it regardless :smiley:

Good luck!

Thanks Nick, I’ll update the chefy-bits above :wink:

I’ve been thinking about the complexity of the AutoPirate recipe, and would appreciate feedback from you geeks who’ve built it thus far. It’s the most popular recipe (yo, ho ho), but it’s also the most complex.

Here’s my question - would the recipe have been easier to follow if each service was documented on its own page, and the AutoPirate page turned into an index? I.e., one page describing setting up Radarr, another describing NZBGet, another Mylar, etc?

I.e., I imagine a structure like this:

NextCloud
Huginn
Autopirate
   /OAuth Proxies
   /SABNZBD
   /NZBGet
   /Rtorrent
   /Radarr
etc...

Interested to hear your thoughts :slight_smile:
D

I think it would be much better

By the way; a new version (#2) of NZBHydra has been released:

Woo, new toys!

Thanks for the feedback re autopirate, I’ve started splitting each component into a “sub-recipe”, which definitely improves readability, and allows me to add more tool-specific details.

Also, another heads-up: check out Lidarr, it’s a modern version of Headphones.

I noticed that, but the repo has a scary message reading:

Looks and smells like Sonarr but made for music. IN DEVELOPMENT DO NOT TRY TO USE.

Is it actually usable yet?

Hey Nick; I’m currently having exactly the same hair-pulling moment with:

Jan 31 03:42:32 medved dockerd[1347]: time="2018-01-31T03:42:32.665394287+02:00" level=debug msg="[resolver] query api.github.com. (AAAA) from 172.18.0.5:37230, forwarding to udp:192.168.50.1"
Jan 31 03:42:32 medved dockerd[1347]: time="2018-01-31T03:42:32.665415074+02:00" level=debug msg="[resolver] query api.github.com. (A) from 172.18.0.5:56433, forwarding to udp:192.168.50.1"
Jan 31 03:42:32 medved dockerd[1347]: time="2018-01-31T03:42:32.674675362+02:00" level=debug msg="[resolver] received A record \"192.30.253.116\" for \"api.github.com.\" from udp:192.168.50.1"
Jan 31 03:42:32 medved dockerd[1347]: time="2018-01-31T03:42:32.674693526+02:00" level=debug msg="[resolver] received A record \"192.30.253.117\" for \"api.github.com.\" from udp:192.168.50.1"
Jan 31 03:42:32 medved dockerd[1347]: time="2018-01-31T03:42:32.677618343+02:00" level=debug msg="[resolver] external DNS udp:192.168.50.1 did not return any AAAA records for \"api.github.com.\""

However, no such luck after removing the dns_search entry in my autopirate definition. Can you remember anything else you tried?
Kind regards
BC

Hello,
I see that this discussion is not too old yet, so I just want to ask a couple of things.

#1. This is all set up using Docker - is there a way for me to use it with Proxmox LXC containers? I have a Proxmox host set up, with a TurnKey LXC media server on it.
It’s a pre-configured Debian Jessie with the Emby media server.
All is working so far, and it would be nice if I could have a set of scripts/configs to use in case of recovery.

#2. Can I set up the proxy afterwards? My server is not directly exposed to the internet. I have a pfSense firewall/router set up on another dedicated machine, so I guess I need to use that to manage my outside access, right?

#1 - I don’t think it’d work with Proxmox LXC containers, because traefik and the inter-container communications rely on the features of Docker Swarm. You could probably replicate it with some manual traefik config within LXC if you were motivated enough though :slight_smile:

#2 - Well, you need Traefik for seamless access to the stack-deployed services, from the beginning. You don’t have to expose it to the internet though - you can do your LetsEncrypt domain validations using DNS (as opposed to HTTP), so theoretically you never have to expose it to the world.
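As an illustration only - the flag names below are from the Traefik 1.7 docs, and the Cloudflare provider/credentials are just an example (substitute your own DNS provider and double-check the docs for your Traefik version) - the DNS challenge looks something like this in a compose file:

```yaml
  traefik:
    image: traefik:1.7
    command:
      # DNS-01 validation: LetsEncrypt verifies a TXT record in your
      # DNS zone instead of connecting to your server, so Traefik
      # never needs to be reachable from the internet
      - --acme.dnschallenge=true
      - --acme.dnschallenge.provider=cloudflare
    environment:
      # Credentials the provider plugin uses to create the TXT record
      # (example values - set your own)
      - CF_API_EMAIL=you@example.com
      - CF_API_KEY=xxxxxxxxxxxx
```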

I’m running my own stack behind a pfSense FW too :wink:

D