Hardware advice

Hello!
First of all, thank you for your amazing work sharing your knowledge!

I started 8 years ago, buying a two-disk NAS to share common files between my wife and me.
Then I started to add some services (FlexGet, Deluge, etc.). Later I bought 4x3TB disks and repurposed an "old" PC running Ubuntu Server from a USB drive, and it has been working 24x7 for the last few years.
I used ZFS for the storage, with a fifth disk as Deluge's temporary download area (to take load off the main disks).
After a while I started to move some of the services into Docker containers (ownCloud, Emby server, etc.).
Unfortunately the server stopped working recently, and I've decided to go for more "server"-specific hardware. In the meantime I found your blog and started to consider giving VMs a try.
However, I live in a flat, so my "bare metal" server can't live in a rack. After some digging, I'm about to buy a Xeon E3-1245 v6 with 16GB ECC RAM, a motherboard with 8 SATA ports, and an M.2 SSD to migrate my old server to the new hardware.

So, although I've been using VMs the old-fashioned way for 15 years (in my first job I had to use a VMware image to run Windows inside a Linux host), I'm a newbie regarding hypervisors and dedicated VM hosts.
My question is whether this hardware would be enough to support something similar to your configuration, or whether it makes no sense to run all the VMs on the same physical server.
Before I found your blog, I was thinking of just building another Ubuntu server, but with all services inside Docker containers instead of only some of them.

Again thank you very much,
Regards
Gabriel Pulido

Welcome, Gabriel :slight_smile:

Since you're probably not planning on 3 redundant servers in your flat, there's not much advantage to adding a VM layer. I'd agree that what makes the most sense is to run a "single-node" swarm on bare metal (Ubuntu 16.04 works well, among others).

Once your swarm/stacks are built using a common directory structure and .yml files, it becomes very easy to migrate them to a larger swarm, or to new hardware, without having to fluff about with operating system dependencies etc.
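
For example, the day-to-day workflow ends up being roughly this (paths and stack names below are illustrative, not the exact cookbook layout):

docker swarm init                                   # turn a single host into a one-node swarm
ls /var/data/config/                                # one folder of config per stack
docker stack deploy -c /var/data/config/traefik/traefik.yml traefik
docker stack deploy -c /var/data/config/autopirate/autopirate.yml autopirate

Moving to new hardware is then mostly a matter of restoring the config/data folders and re-running the same "docker stack deploy" commands.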

For comparison's sake, my single-node swarm at home (my cluster is in a datacenter) is an Intel(R) Core™ i5-4590 CPU @ 3.30GHz with 32GB RAM, and 4 x 3TB HDDs in RAID1.

It's currently running 52 Docker services (a full AutoPirate stack, Home Assistant, Calibre-Web, Plex, and a TurtleCoin testnet), plus a CentOS 6 VM for monitoring, and a pfSense VM:

[root@kvm ~]# docker service ls | wc -l
52
[root@kvm ~]#

System load is:

[root@kvm ~]# uptime
 08:51:03 up 30 days, 11:30,  1 user,  load average: 2.96, 3.17, 3.22
[root@kvm ~]#

D

Hello David,
I'm definitely not planning to have 3 servers at home; my wife would probably kick me out of the house :slight_smile:

So in the end the configuration is very similar to what I already had, but instead of having a mixture of services running on the OS and in Docker containers (some of them with docker-compose), I'd use Docker Swarm to "orchestrate" all the services.

The motherboard I'm planning to buy is a Supermicro with IPMI, so monitoring can be done through that.
Regarding storage, what I had before was a RAIDZ across the 4 disks, mounted locally, and I moved the Docker configuration folder onto the mounted ZFS pool. Also, whenever I created a container, I used volumes bind-mounted on subfolders of the ZFS mount point.

In the new hardware I'll probably use an M.2 SSD for the system, just to keep it off the SATA ports / USBs.

I was thinking of starting with 18.04 LTS, since it's more recent and will be supported for longer than 16.04 at this point. Any thoughts against using it?

Also, although the RAIDZ protects me against a single disk failure, I want to automate the backups, and I was thinking of using a two-"layer" approach:

  • Add a 6TB disk that is simply mounted, and use rsync or something similar to make a nightly backup of just the important data. This would provide an extra copy in case two disks fail at almost the same time (rough cron sketch below).
  • Of course that doesn't protect against the house burning down, so I'm thinking of adding a second, offsite backup, maybe using your Duplicity recipe. In that case I don't know whether I should back up the entire system (Docker data included) or just the "data".
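
For the first layer, I imagine something as simple as a cron job (the paths below are placeholders, not my real layout):

# /etc/cron.d/nightly-backup
30 3 * * * root rsync -a --delete /tank/important/ /mnt/backup6tb/important/

rsync -a keeps permissions and timestamps, and --delete keeps the copy an exact mirror of the source (which is also why it's no substitute for the offsite layer).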

Again, thanks for the advice.
Gabriel

Hey Gabriel,

Sorry about the delay, I missed the notification (I did receive it, just got buried in the noise).

I'd agree that Ubuntu 18.04 is the way to go; I used it on the hosts I recently rebuilt my Atomic cluster on, after a Ceph failure lost all my data.

And to your comment about Duplicity: what I've learned is that Duplicity is really bad at doing selective restores, so it's best to set up multiple small Duplicity jobs rather than one big one. I'm going to refresh some of my recipes to reflect this.
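
As a rough illustration of what I mean by "multiple small jobs" (the paths, remote host, and key ID below are placeholders):

duplicity --full-if-older-than 1M --encrypt-key ABCD1234 /var/data/nextcloud sftp://backup@offsite.example.com//backups/nextcloud
duplicity --full-if-older-than 1M --encrypt-key ABCD1234 /var/data/gitlab sftp://backup@offsite.example.com//backups/gitlab

That way, restoring one app means digging through one small archive instead of the whole server's backup.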

Take a look at "rsnapshot" for your local backups; it's the simplicity of rsync with the history of hourly, daily, and monthly backups :slight_smile:
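
A minimal rsnapshot setup looks something like this (the source and destination paths are just examples; note that rsnapshot's config file insists on TABs between fields):

# excerpt from /etc/rsnapshot.conf
snapshot_root   /mnt/backup6tb/rsnapshot/
retain  hourly  6
retain  daily   7
retain  weekly  4
backup  /var/data/      localhost/

# excerpt from /etc/cron.d/rsnapshot
0 */4 * * *   root  /usr/bin/rsnapshot hourly
30 3 * * *    root  /usr/bin/rsnapshot daily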

D

Hello David!
Please don't apologize for the delay, it's always a pleasure to read you, and in the meantime I ended up buying my server. The hardware I bought is the following (I'm sharing it just in case it's useful for anybody else):

  • 1 Xeon E3-1245 v6 (4C / 8T) mounted on a Supermicro x11ssh4ln.

  • 32GB ECC RAM in two modules (2x16GB). (I'll decide whether to expand to 64GB in the future.)

  • 1 Samsung 970 EVO (250GB) on the M.2 NVMe port as the main OS disk (I know it's overkill, but it saves me from wasting SATA ports / USBs).

  • 4x3TB 3.5" Western Digital Green (from the old server; I'm planning on replacing them with 6TB NAS drives one by one and using the 3TB disks as backups).

  • 1 315GB 2.5" WD Blue to use as temp space for P2P operations (also inherited from the old server).

  • A be quiet! Straight Power 11 450W (Tier 1 PSU), although the server is going to be attached to a UPS.

  • A Fractal Design R2 Mini case. It's the only microATX case I've found that can hold up to 6 HDDs, and it has a lot of smart disk-placement decisions.

  • A Noctua NH-C14S CPU cooler.

  • Some Noctua fans to replace the Fractal Design ones (they're not bad, but the Noctuas are better and I want to keep the server as quiet as possible).

Key hardware features:

  • The motherboard has IPMI, which allows me to remotely configure and control the server without attaching a keyboard / monitor.
  • The motherboard has 4 Ethernet ports that I will bond using 802.3ad LACP, if the MikroTik router I have configured as a switch allows it; if not, I'll use another kind of link aggregation (rough netplan sketch after this list).
  • Plenty of RAM for ZFS and Docker.
  • Keep it quiet, as it's going to sit in a work room in a flat.
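
On the bonding point, if the MikroTik ends up supporting LACP, my understanding is that a netplan config on 18.04 would look roughly like this (interface names and addresses are placeholders, I haven't tested it yet):

network:
  version: 2
  renderer: networkd
  ethernets:
    eno1: {}
    eno2: {}
    eno3: {}
    eno4: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2, eno3, eno4]
      parameters:
        mode: 802.3ad
        transmit-hash-policy: layer3+4
        mii-monitor-interval: 100
      addresses: [192.168.1.10/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]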

About the software.
I started by installing Ubuntu 18.04.1 LTS on the SSD, with LVM and a 64GB partition for the OS; I'm not sure what to do with the rest yet, but the space is always welcome :slight_smile:
My first step is to recover the data from the old server (I have some backups, but I wanted to do a full backup of some "less important" data that I didn't have backed up). So I connected the 4x3TB disks, did a zpool import, and voilà, my old pool was up and running. Now I'm just doing some garbage cleaning before I do a backup to a 6TB disk. I'll take a look at rsnapshot, although I was already using a "manually triggered" backup system based on rsync that I will automate.
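
For anyone trying the same thing, it was roughly this (the pool name "tank" is just an example, yours may differ):

zpool import                              # scan the attached disks for importable pools
zpool import -o readonly=on tank          # bring the old pool up read-only first, just to check it
zpool status tank                         # verify that all 4 disks are online
zpool export tank && zpool import tank    # re-import read-write once it looks healthy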

Then I'll reinstall the OS, destroy the old pool, create a new one as "permanent storage" for the server and the Docker swarm, and start adding services (using your recipes, of course). However, I'm not sure whether I'll keep the NFS sharing of part of the pool, since the server will also be used as a NAS; moving all data access into services (like Nextcloud) may be overkill. I'm not sure what to do here.
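
In case it's useful, the rough plan for the new pool looks like this (the pool/dataset names and disk IDs are placeholders):

zpool create tank raidz /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
zfs create tank/docker        # dataset for Docker's data-root
zfs create tank/appdata       # dataset for per-service bind-mounted volumes
# then point Docker at the dataset in /etc/docker/daemon.json:
#   { "data-root": "/tank/docker" }
systemctl restart docker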

Thanks for the duplicity advice, I’m looking forward to your new recipes and tips.
Regards
Gabriel

That sounds intense! I haven’t played much with ZFS - I’ve only gone as far as setting up ZFS pools etc under FreeNAS.

The reason I started playing with Docker and friends was because I got tired of having to rebuild my home server stack every time I upgraded or wanted to change distros. It was a pain to get it all “Dockerized”, but now all the configuration is separated from the data, backed up independently, and rebuilding on a new OS is straightforward :slight_smile:

Let us know how you get on - I think the upcoming Swarmprom recipe will interest you too!

D

I haven't played a lot with ZFS either, but I took the risk and used it back when Ubuntu didn't have official support (via ZFS on Linux), and I'm still learning. I think it's a very interesting technology; what really made me use it was avoiding the "write hole" in RAID5 and, of course, playing with a "new" concept.

I'm still looking for a good approach to back up the system / configurations locally, and following your tip about rsnapshot I found this:

https://www.elkarbackup.org/
It looks very interesting, and it's "shiny" :slight_smile: Also, it can be Dockerized :wink: What do you think?

I like the idea of reusing configurations and being able to move things without too much hassle. However, my server started out as a NAS; then I moved to Ubuntu Server 12.04 with ZFS for the storage and NFS for the sharing. Then I started to play with Docker (Emby, Nextcloud), keeping NFS for local sharing. The data backup was done over the NFS share from an Ubuntu desktop using Back In Time.

Now I don't think I can completely get rid of the NFS share; I use it to access my photography collection and to copy photos from the camera. I've also configured the Emby-Kodi plugin to pick up the NFS path, so the Kodi library keeps working even if Emby is down. However, there are a lot of "files" stored through the local share that could be moved into more specific Docker services: development files to GitLab, for example, or, if I find a "better" photography manager, I'll use that instead (maybe the gallery plugin for Nextcloud).

I didn't know about Swarmprom; it looks very interesting.

Hi Gabriel,

Elkarbackup _does_ look interesting, especially since it builds on rsnapshot. I'll take a look for a possible future recipe integration, thanks for the tip!

D

Just some quick feedback after a month of use.
I upgraded the RAM to 64GB, added a 2x5.25"-to-3x3.5" hot-swap cage, changed all the fans to Noctuas, and started playing with Docker.
I've successfully configured almost all of the AutoPirate stack, but for use with torrents rather than Usenet. I also added Lidarr in place of Headphones; it was very easy to configure, following the Sonarr / Radarr recipes.
I use oauth2 in front of almost all of AutoPirate. However, I found that Ombi doesn't like it: with oauth2 in place the web UI only showed the "tab" part of the window, not the main content panels. Once I removed oauth2 it worked properly (very strange).
I also had to run Emby without oauth2 to be able to use the Android app, as I don't know how to tell the app to bypass the oauth proxy…

I added a service for elkarbackup, and it seems to be working properly, though some adjustments are still needed. I can publish the configuration if you want.

The only service I'm having big problems with is mail, as I'm behind a dynamic IP (although I've bought a domain name), and it seems that Gmail (for example) doesn't like receiving mail from dynamic IPs, since they're on a blacklist. One possible solution (I'm still working on it) is to use an external SMTP relay like SparkPost or Mailjet, but I haven't been able to make it work properly yet.
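
What I'm experimenting with, assuming the mail container is Postfix-based, is roughly this (the relay hostname and credentials are placeholders, not a tested config):

postconf -e 'relayhost = [smtp.relay.example.com]:587'
postconf -e 'smtp_sasl_auth_enable = yes'
postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
postconf -e 'smtp_sasl_security_options = noanonymous'
postconf -e 'smtp_tls_security_level = encrypt'
echo '[smtp.relay.example.com]:587 apiuser:apikey' > /etc/postfix/sasl_passwd
postmap /etc/postfix/sasl_passwd
postfix reload

The idea is that outbound mail goes out with the relay's IP reputation instead of my dynamic IP.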