How much does compression impact NAUbackup times on XenServer?

In setting up NAUbackup, all the documentation says that while compression is supported, it’s recommended to leave it off and compress on the storage side later if needed. I decided to take a quick look at the performance impact on a small EDI stack that we run in production, duplicated over to the test environment.
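For reference, turning compression on is a single setting in the VmBackup config file. A minimal sketch, assuming a .cfg along the lines of the samples that ship with NAUbackup (everything here other than compress=true is illustrative):

# weekly-backup.cfg (hypothetical) – only compress=true matters for this post
max_backups=4
backup_dir=/snapshots
compress=true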

Test #1 – The AS/2 front end VM with iSCSI storage on a 1 Gb connection:

  • Uncompressed – 18 minutes, 57 seconds – total size 19.12 GB
  • Compressed (“compress=true”) – 31 minutes, 4 seconds – 7.48 GB

So for roughly a 1.64× increase in run time, we get well under half the file size footprint – a 61% reduction. In purely subjective usage, I didn’t notice any processes or general activity being delayed or slowed in any way. This is a software stack that has essentially zero user input and generally very low load, so your results might differ. Watching XenCenter’s performance reporting during the run, none of our performance warnings were triggered.

That test went well enough that I decided to try it on the whole list of VMs.

Test #2 – The entire EDI stack using local storage, backed up to the same NAS share on a 1 Gb connection:

  • Uncompressed – 2 hours, 40 minutes, 23 seconds – 579.62 GB
  • Compressed (“compress=true”) – 5 hours, 50 minutes, 42 seconds – 310.34 GB

This stack has a combination of Ubuntu and Windows servers running a mix of lightly loaded web servers and the EDI translator databases, and is probably a closer representation of our typical environment and loads. Again, system performance seemed reasonable during the backup and no alarms went off in XenCenter. The numbers here are less favorable: a 2.19× increase in time gets the data down to just over half its original size – a 46% reduction. A follow-up test would be to see which machines come in under that time factor and whether they share any common elements.

In conclusion – should you use compress=true when using the NAUbackup scripts? The answer is, of course, maybe. We have the luxury of enough downtime that no work-hour loads are impacted, and the space savings let us cut our off-site procedure down to a single backup drive instead of having to split the job or go with more expensive backup media. Restoring from the compressed backups does require a slightly different route, although in our DR testing that has not been a big hurdle.

Find the time left on a Synology rebuild or expand job

While the DSM GUI will give you a rough idea of the time needed for a typical rebuild with a percentage graph and a little middle school math, getting the actual estimate takes only a single command from the command line.

First, make sure you can connect to your NAS over SSH. You can read the full procedure in the Synology Knowledge Base, or:

  1. Log in to the web GUI with an admin account
  2. Go to Control Panel > Terminal & SNMP > Terminal.
  3. Tick Enable SSH service.
  4. Specify the desired port for the SSH service. (The default port is 22.)
  5. Click Apply.
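If you’re on a machine with OpenSSH, the connection itself is one line; the account and address below are placeholders, and -p only matters if you changed the default port in step 4:

ssh -p 22 admin@192.168.1.50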

Now for the easy part. Once you’re logged in over SSH (PuTTY or your favorite client), enter this at the command line:

cat /proc/mdstat

You should see a listing of all the md devices – most of which are internal and not helpful for what we’re doing here – but one of them should include something like “recovery = 52% (XXXXXX / XXXXX) finish=2880.4 min”. For our expansion, adding another 4 TB drive to the Synology Hybrid RAID, we had an initial finish estimate of 3250 minutes (roughly 2 days, 6 hours) and used this technique to keep tabs on the progress.
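Rather than re-running the command by hand, a quick loop will log the shrinking estimate over time. grep and sleep are both present on stock DSM; the 300-second interval is just a suggestion:

while true; do date; grep finish /proc/mdstat; sleep 300; done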

Import/Restore gzipped XVA archives from the command line

If you’re like me, you value space over backup run time and are using compression with the NAUbackup XenServer scripts. I’m working on more concrete numbers, but it’s safe to say that in a small-scale test of some lightly used services, we see a 45% reduction in XVA backup size with compression, allowing us to keep additional days in hot storage. The only catch is that out of the box, XenServer doesn’t import the gzipped files directly, either from the command line or from XenCenter. To get around this, we use a single line to unpack the XVA file and pipe it into our vm-import command:

gunzip -c /snapshots/vm_backup.xva.gz | xe vm-import sr-uuid={SR-UUID} filename=/dev/stdin
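As a bonus, xe vm-import prints the UUID of the newly created VM, so if you’re scripting restores you can capture it and rename the result so it doesn’t collide with the original. A sketch, with the name-label purely illustrative:

NEW_UUID=$(gunzip -c /snapshots/vm_backup.xva.gz | xe vm-import sr-uuid={SR-UUID} filename=/dev/stdin)
xe vm-param-set uuid=$NEW_UUID name-label="vm_backup-restored"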

A few things are assumed for any of this to work:

  • That you have a backup to restore.
  • That the XenServer installation wasn’t lost, or has already been reinstalled.
  • That you’ve restored the mount point to where you’re storing the backups.
  • That you know your SR UUID – bonus hint: try xe sr-list (see the example just below this list).
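On that last point, if you only know the SR by its label, xe can look up the UUID directly; the name-label below is hypothetical:

xe sr-list name-label="Local storage" params=uuid --minimal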

In initial testing, I haven’t seen a big enough difference in restoration time to sway me back to uncompressed XVA backup files. And remember, it’s just like they say in the forums – it’s not a backup if you haven’t tried restoring it.

Plex media server installation in FreeNAS 9.3 Jail with media

Installing Plex Media Server in a FreeNAS jail is very simple, but not always clear without a bit of Googling. I started with the same instructions most people follow from the FreeNAS forums – (Tutorial) How To Install Plex in a FreeNAS 9.3 Jail (Updated) – although I ran into a fairly common snag while adding media access to the jail.

Follow all the instructions in the tutorial for creating your jail and adding the Plex Media Server plugin – up to:

service plexmediaserver start
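Before updating anything, it’s worth confirming the service actually came up; the standard rc status verb should work here:

service plexmediaserver status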

The plugin initially installed was version 1.4, and at the time 1.9.3 was available, so I reran the update command to pick up the new version and take care of a few new dependencies:

pkg update && pkg upgrade -y
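To confirm what you ended up with after the upgrade, pkg can report the installed version; the package name below is what the plugin used on my system and may vary:

pkg info plexmediaserver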

At this point you have Plex installed and updated, but there’s not a lot going on without access to your libraries. All that’s left is to add your media, which can be done from the command line or from the GUI; I found the GUI a little quicker. The tutorial simply links to the FreeNAS documentation on the subject – Add Storage. Going through the screens step by step, you can add a “hook” back to your FreeNAS dataset and then add the directories to your Plex libraries.

If you’re like me, you didn’t read the FreeNAS documentation closely enough regarding the source field, and although the storage was added to the jail, it shows “Mounted? No” and isn’t available to Plex. A closer reading of the source description shows what’s going on:

This directory must reside outside of the volume or dataset being used by the jail. This is why it is recommended to create a separate dataset to store jails, so that the dataset holding the jails will always be separate from any datasets used for storage on the FreeNAS® system.

To resolve this, simply point your source down one level. In my case, instead of the whole CIFS share, I created new folders for each library, added them individually, and the storage mounted correctly. I added the new directories to the Plex library and was streaming in no time.
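For what it’s worth, the GUI’s Add Storage is essentially a nullfs mount under the hood. If you’d rather do it from the FreeNAS shell, something like this works, where the dataset and jail paths are hypothetical stand-ins for your own:

mount -t nullfs /mnt/tank/media/movies /mnt/tank/jails/plex_1/media/movies

Keep in mind that a manual mount like this won’t survive a jail restart the way GUI-added storage does, which is one more reason the GUI route won out for me.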