We now recommend using autofs instead of this method. See our new article here.

In this article we provide a ready-made script that automatically checks whether an sshfs mount is properly mounted and, if it is not, forcibly closes and remounts it, in most cases without manual intervention.

We use sshfs mounts to map a remote server path to a local one for backups, which is great when it works. On some occasions, though, the mount disconnects: the backup then writes to the local mount point instead of the remote path, the local disk fills up, and services crash. We tried several variations to prevent this and finally ended up with the script below, which so far has worked fine and only emails us in the event of an error.

This script requires that a password-less SSH keypair is already set up between the two Linux hosts. Create the script as a file on the machine where the sshfs mount is set up, then edit it with the details of the mount: the local and remote mount directories and the remote host login information. Once the script has been tested manually and works, it can be set up as a cron job as desired.
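If the keypair is not in place yet, a one-time setup looks roughly like this. This is a sketch: the key path and key type are examples (the script below uses `/root/.ssh/id_rsa`), and the hostname is the placeholder from the script's configuration.

```shell
#!/bin/bash
# Generate a keypair if one does not already exist (example path and type).
keyfile="$HOME/.ssh/id_ed25519"
mkdir -p "$HOME/.ssh"
[ -f "$keyfile" ] || ssh-keygen -t ed25519 -f "$keyfile" -N ""

# Copy the public key to the remote host (prompts for the password once).
# The hostname is the placeholder used in the script's example config:
# ssh-copy-id -i "$keyfile.pub" root@something.example.com
echo "key ready: $keyfile"
```

After `ssh-copy-id` succeeds, `ssh root@something.example.com true` should return without asking for a password; only then will the sshfs mount in the script work unattended.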

#!/bin/bash
## Author: Michael Ramsey
## Objective: Check and ensure a remote sshfs mount is mounted. This script assumes you already have a passwordless ssh keypair set up for accessing the remote sshfs mount point.

## How to use.
# Set the below variables to match your needs first.

# Then the below script can be run manually or via a cronjob
# sh check_remote_mount.sh

# Example cronjob; the path to the script may need to be updated depending on where it was saved.
# * * * * * /bin/bash /root/check_remote_mount.sh > /dev/null

##### User specific variables to edit
#Set local mount directory
mountdir="/mnt/backup"

#Remote mount directory
remotemountdir="/root/server-backups"

#Set the file name to test for. Create an empty file "is_mounted" in the remote mount directory; the script checks for it via the local mount path so it can tell whether the mount is live.
remotemounttestfile="/mnt/backup/is_mounted"

#Remote hostname or IP
remotehost="something.example.com"

#Remote ssh username
remoteuser="root"

#SSH IdentityFile path. Please note: this is the private key, e.g. "/root/.ssh/id_rsa", not the public key "/root/.ssh/id_rsa.pub"
sshIdentityFile="/root/.ssh/id_rsa"

#### Do not edit below this line

if mountpoint -q "$mountdir" && [ -f "$remotemounttestfile" ]; then
    echo "Mounted"
    RC=0
else
    echo "Not mounted properly"
    #umount gracefully if possible
    umount "$mountdir" > /dev/null 2>&1

    #kill any frozen processes on the mount
    fuser -k "$mountdir" > /dev/null 2>&1
    fusermount -u "$mountdir" > /dev/null 2>&1
    umount -l "$mountdir" > /dev/null 2>&1
    umount "$mountdir" > /dev/null 2>&1

    #kill any hung sshfs processes and mounts
    pkill -9 sshfs && umount "$mountdir" > /dev/null 2>&1

    #clear the local mount point and remount the backup server
    rm -rf "${mountdir:?}/"* && sshfs -o nonempty,allow_other,IdentityFile="$sshIdentityFile" "$remoteuser@$remotehost:$remotemountdir" "$mountdir"

    if mountpoint -q "$mountdir" && [ -f "$remotemounttestfile" ]; then
        echo "Mounted"
        RC=0
    else
        echo "Not mounted properly, needs to be fixed manually"
        RC=1
    fi
fi
exit $RC
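For the "is_mounted" check to succeed, the sentinel file has to exist on the remote side before the script ever runs. A minimal sketch, using the example hostname and paths from the script above; the `/tmp` path at the end is only a local stand-in to demonstrate the file test itself:

```shell
#!/bin/bash
# Create the sentinel file on the remote host over ssh (run once).
# Hostname, key, and paths are the example values from the script:
# ssh -i /root/.ssh/id_rsa root@something.example.com 'touch /root/server-backups/is_mounted'

# Once mounted, the script sees it as an ordinary local file.
# Demonstration of the same [ -f ... ] test with a stand-in path:
testfile="/tmp/is_mounted"
touch "$testfile"
[ -f "$testfile" ] && echo "sentinel present"
```

Checking for a file rather than just the mount point matters: a stale sshfs mount can still register as a mount point while reads hang or fail, whereas the file test only passes when the remote filesystem actually answers.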

We hope you found this script and basic tutorial helpful in keeping your sshfs mounts working automatically.
