
I have a Linux box that I PuTTY into, which is my backup system. It has a ZFS pool built as a linear span (stripe, no redundancy) across three 2 TB drives. Every night I use FreeFileSync (great program) to sync files to this mount, which is mapped as a network drive.

$ zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
san                                3.31T  2.04T  2.87M  /san
san/vault                          3.31T  2.04T   136K  /san/vault
san/vault/falcon                    171G  2.04T   100K  /san/vault/falcon
san/vault/falcon/snapshots          171G  2.04T   171G  /san/vault/falcon/snapshots
san/vault/falcon/version            160K  2.04T    96K  /san/vault/falcon/version
san/vault/gyrfalcon                 564K  2.04T   132K  /san/vault/gyrfalcon
san/vault/gyrfalcon/snapshots       184K  2.04T   120K  /san/vault/gyrfalcon/snapshots
san/vault/gyrfalcon/version         184K  2.04T   120K  /san/vault/gyrfalcon/version
san/vault/osprey                    170G  2.04T   170G  /san/vault/osprey
san/vault/osprey/snapshots         24.2M  2.04T  24.2M  /san/vault/osprey/snapshots
san/vault/osprey/version            120K  2.04T   120K  /san/vault/osprey/version
san/vault/redtail                  2.98T  2.04T  17.2M  /san/vault/redtail
san/vault/redtail/c                 777M  2.04T  72.9M  /san/vault/redtail/c
san/vault/redtail/c/AMD            4.44M  2.04T  4.24M  /san/vault/redtail/c/AMD
san/vault/redtail/c/Users           699M  2.04T   694M  /san/vault/redtail/c/Users
san/vault/redtail/d                1.59T  2.04T   124K  /san/vault/redtail/d
san/vault/redtail/d/UserFiles      1.59T  2.04T  1.59T  /san/vault/redtail/d/UserFiles
san/vault/redtail/d/archive         283M  2.04T   283M  /san/vault/redtail/d/archive
san/vault/redtail/e                1.34T  2.04T   124K  /san/vault/redtail/e
san/vault/redtail/e/PublicArchive  1.34T  2.04T  1.34T  /san/vault/redtail/e/PublicArchive
san/vault/redtail/e/archive         283M  2.04T   283M  /san/vault/redtail/e/archive
san/vault/redtail/snapshots         184K  2.04T   120K  /san/vault/redtail/snapshots
san/vault/redtail/version          44.3G  2.04T  43.9G  /san/vault/redtail/version

When looking at the Linux box via PuTTY, the mounts are there one minute, then a few minutes later they are gone. zfs list always shows the datasets; you have to traverse into them to see whether they have been unmounted, in which case they will be empty, or missing from the parent directory entirely. These are datasets that are losing their mounts.
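
A quick way to catch this without traversing directories is to ask ZFS for the mounted state directly (this should work on any ZFS-on-Linux release from that era, since mounted is a standard read-only property):

$ zfs list -r -o name,mounted,mountpoint san/vault

Any dataset showing "no" in the mounted column has lost its mount, even though it still appears in plain zfs list output.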

san/vault/redtail is empty almost every time when I come back the next morning, or a couple of hours later, just after FreeFileSync starts its sync but before I can start moving files.

I have tried exporting and importing the pool; the same problem still happens. My data is still intact.

This command fixes all of them momentarily (for an unknown duration): zfs mount -a
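
Before remounting, it may be worth capturing why the mounts drop in the first place. A minimal sketch of what could be checked; the log location is an assumption that depends on the distro (journalctl only applies if the box runs systemd):

$ zfs get -r canmount,mountpoint san/vault/redtail
$ dmesg | grep -i zfs
$ grep -i zfs /var/log/syslog        # or: journalctl | grep -i zfs

If canmount is off or noauto anywhere, or a mountpoint is set to legacy or none, zfs mount -a will skip that dataset.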

This all started a couple of weeks ago, after I made quite a few child datasets inside the parent san, where before it was only the san parent dataset (I NEVER had any problems); no child datasets existed when all the data was laid in there. I have since moved the data out, made the datasets, and moved the data back in.
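
Given that shuffle of data out and back in, one thing worth ruling out is leftover files sitting in the parent's directory underneath a child's mountpoint: ZFS on Linux normally refuses to mount onto a non-empty directory, and anything written while a child was unmounted lands on the parent instead. A sketch for checking a single suspect child (san/vault/redtail/c is just the example from above):

$ zfs unmount san/vault/redtail/c
$ ls -la /san/vault/redtail/c        # anything listed here lives on the parent dataset, hidden under the mount
$ zfs mount san/vault/redtail/c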

On the machine that traverses the network to synchronize, the mount gets lost partway through, and my backups have not finished in almost a month now. Before it can even tally the files (say 15 minutes to an hour), the mount is gone again and it hangs. Backups are in limbo.
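
As a stopgap while chasing the real cause, a small watchdog could at least keep the share usable during the nightly sync. This is only a sketch; the script name, path, and 5-minute interval are my own choices, not part of the existing setup:

#!/bin/sh
# /usr/local/sbin/zfs-remount-watchdog.sh  (hypothetical name and location)
# List datasets that report mounted=no, log them, then remount everything.
unmounted=$(zfs list -H -o name,mounted -r san | awk '$2 == "no" {print $1}')
if [ -n "$unmounted" ]; then
    logger -t zfs-watchdog "remounting: $unmounted"
    zfs mount -a
fi

# root crontab entry, every 5 minutes:
# */5 * * * * /usr/local/sbin/zfs-remount-watchdog.sh

The logger call also leaves a timestamped trail in syslog showing exactly when the mounts dropped, which can be lined up against FreeFileSync's schedule.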

I may have messed something up when I created the datasets. I had to delete a few afterwards because I wasn't happy with them; the data wasn't visible, or something like that. But once all was said and done, things looked perfect after several reboots and double checks!

After this, things seemed good, but then I looked into one of the child datasets, san/vault/redtail/c, and it is empty too; I think it came unmounted.

Before I destroy anything, I need to know what's going on. This data is a duplicate backup of a currently healthy system, but this backup is the only one, and is therefore at the mercy of the health of the source drives. So I can't afford for it to be offline at all.
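
Before destroying anything, it also seems worth confirming that the pool and the three drives are healthy, since a flaky drive, cable, or controller is one possible source of intermittent weirdness. A minimal check; the device name is a placeholder for whichever disks back the pool, and smartmontools may need to be installed:

$ zpool status -v san
$ smartctl -a /dev/sdb | grep -E 'Reallocated|Pending|Offline_Uncorrectable|UDMA_CRC'

Non-zero read/write/checksum counters in zpool status, or climbing reallocated/pending sectors in SMART, would point at hardware rather than the dataset layout.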

One more note on SNAPSHOTS: I made a snapshot, for the first time, just prior to the apparent breakage. Could that have caused it? Can I possibly use this snapshot to fix something? Is this coincidence, or symptomatic?
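
For what it's worth, creating a snapshot does not by itself unmount anything, so this is probably coincidence; but the snapshot is still handy for comparing or recovering files while its dataset is mounted. The path below uses san/vault/redtail only as an example; substitute whichever dataset the snapshot was taken on:

$ zfs list -t snapshot -r san/vault
$ ls /san/vault/redtail/.zfs/snapshot/        # each snapshot shows up here as a read-only directory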

See this post, Bulk remove a large directory on a ZFS without traversing it recursively, if you want background on why I created a bunch of datasets with children.

Edit: The question has just been rewritten due to lack of attention on this post; please read it again.

  • This could be an intermittent/load-related hardware issue. Do all the mount points share common hardware? If so, try moving the most frequented dataset to another mountpoint, maybe via a soft link. If it's an intermittent hardware issue and all your mounts share the same hardware, the mount/access issues you're having will move to the previously-good mount point. – Andrew Henle Aug 27 '15 at 09:44
  • It's one Gigabyte mobo with 4 SATA ports: one running the ext3 4-partition system drive, and three 2 TB SATA drives as a linear-span 6 TB ZFS array. I'm about to move it to a 6 TB RAID 5, which will FINALLY have the redundancy I need, but the board only has 4 SATA ports on it. Oh yeah, I have a 5th drive in there or something, and at least at one point one of them was running off a PCI SATA controller card for the 5th port, now that I think of it. I'm going to rip off the side of the case right now to check whether the drive that's losing mounts is on that PCI rail or the mobo rail. – Brian Thomas Aug 28 '15 at 06:42
  • OK, nope, the drive that's connected to the PCI card is not related and not being used. So all 3 ZFS drives are on ports 0, 1, and 3, and the system drive is on port 2 of the mobo. Should I switch the ports around for all drives at once, to see if I can get the symptom to change? I found it's not only redtail; today san/vault/osprey has also gone offline, the one I'm working on a tar/zip in. I looked at zfs mount later just for kicks after everything was idle, and redtail and osprey were not mounted (5 or 6 datasets' worth), but the other two untouched dirs were still mounted. – Brian Thomas Aug 28 '15 at 06:46
  • I totally rewrote the question again, just now, because this thread is not getting any attention. Please read again. – Brian Thomas Sep 03 '15 at 18:55
