Friday 15 July 2016

delete, info, config : GlusterFS Snapshots CLI Part 2

Now that we know how to create GlusterFS snapshots, it will be handy to know how to delete them as well. Right now I have a cluster with two volumes at my disposal. As can be seen below, each volume has one brick.
# gluster volume info

Volume Name: test_vol
Type: Distribute
Volume ID: 74e21265-7060-48c5-9f32-faadaf986d85
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: VM1:/brick/brick-dirs1/brick
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on

Volume Name: test_vol1
Type: Distribute
Volume ID: b6698e0f-748f-4667-8956-ec66dd91bd84
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: VM2:/brick/brick-dirs/brick
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
We are going to take a bunch of snapshots for both these volumes using the create command.
# gluster snapshot create snap1 test_vol no-timestamp
snapshot create: success: Snap snap1 created successfully
# gluster snapshot create snap2 test_vol no-timestamp
snapshot create: success: Snap snap2 created successfully
# gluster snapshot create snap3 test_vol no-timestamp
snapshot create: success: Snap snap3 created successfully
# gluster snapshot create snap4 test_vol1 no-timestamp
snapshot create: success: Snap snap4 created successfully
# gluster snapshot create snap5 test_vol1 no-timestamp
snapshot create: success: Snap snap5 created successfully
# gluster snapshot create snap6 test_vol1 no-timestamp
snapshot create: success: Snap snap6 created successfully
# gluster snapshot list
snap1
snap2
snap3
snap4
snap5
snap6
#
Now we have 3 snapshots for each volume. To delete a snapshot we have to use the delete command along with the snap name.
# gluster snapshot delete snap1
Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
snapshot delete: snap1: snap removed successfully
# gluster snapshot list
snap2
snap3
snap4
snap5
snap6
#
We can also choose to delete all snapshots that belong to a particular volume. Before doing that, let's see what snapshots are present for the volume "test_vol". Apart from snapshot list, there is also the snapshot info command, which provides more elaborate details of snapshots. Like snapshot list, snapshot info can also take a volume name as an option, to show information for only that volume's snapshots.
# gluster snapshot list test_vol
snap2
snap3
# gluster snapshot info volume test_vol
Volume Name               : test_vol
Snaps Taken               : 2
Snaps Available           : 254
    Snapshot                  : snap2
    Snap UUID                 : d17fbfac-1cb1-4276-9b96-0b73b90fb545
    Created                   : 2016-07-15 09:32:07
    Status                    : Stopped

    Snapshot                  : snap3
    Snap UUID                 : 0f319761-eca2-491e-b678-75b56790f3a0
    Created                   : 2016-07-15 09:32:12
    Status                    : Stopped
#
As we can see from both the list and info commands, test_vol has two snapshots: snap2 and snap3. Instead of deleting these snapshots one by one, we can choose to delete all snapshots belonging to a particular volume, in this case test_vol.
# gluster snapshot delete volume test_vol
Volume (test_vol) contains 2 snapshot(s).
Do you still want to continue and delete them?  (y/n) y
snapshot delete: snap2: snap removed successfully
snapshot delete: snap3: snap removed successfully
#
# gluster snapshot list
snap4
snap5
snap6
# gluster snapshot list test_vol
No snapshots present
# gluster snapshot info volume test_vol
Volume Name               : test_vol
Snaps Taken               : 0
Snaps Available           : 256
#
With the above volume option we successfully deleted both snapshots of test_vol with a single command. Now only three snapshots remain, all of which belong to the volume "test_vol1". Before proceeding further, let's create one more snapshot for the volume "test_vol".
# gluster snapshot create snap7 test_vol no-timestamp
snapshot create: success: Snap snap7 created successfully
# gluster snapshot list
snap4
snap5
snap6
snap7
#
With this, we have four snapshots: three belonging to test_vol1, and one to test_vol. Now, with the 'delete all' command, we can delete all snapshots present in the system, irrespective of which volume they belong to.
# gluster snapshot delete all
System contains 4 snapshot(s).
Do you still want to continue and delete them?  (y/n) y
snapshot delete: snap4: snap removed successfully
snapshot delete: snap5: snap removed successfully
snapshot delete: snap6: snap removed successfully
snapshot delete: snap7: snap removed successfully
# gluster snapshot list
No snapshots present
#
So that is how you delete GlusterFS snapshots. There are some configurable options for Gluster snapshots, which can be viewed and modified using the snapshot config option.
# gluster snapshot config

Snapshot System Configuration:
snap-max-hard-limit : 256
snap-max-soft-limit : 90%
auto-delete : disable
activate-on-create : disable

Snapshot Volume Configuration:

Volume : test_vol
snap-max-hard-limit : 256
Effective snap-max-hard-limit : 256
Effective snap-max-soft-limit : 230 (90%)

Volume : test_vol1
snap-max-hard-limit : 256
Effective snap-max-hard-limit : 256
Effective snap-max-soft-limit : 230 (90%)
#
Just running the config command, as shown above, displays the current configuration of the system. What we are looking at are the default configuration values. There are four configurable parameters. Let's go through them one by one.

  • snap-max-hard-limit: Set by default to 256, the snap-max-hard-limit is the maximum number of snapshots that can be present in the system. Once a volume reaches this limit, in terms of the number of snapshots it has, we are not allowed to create any more snapshots, unless we either delete a snapshot or increase this limit.
    # gluster snapshot config test_vol snap-max-hard-limit 2
    Changing snapshot-max-hard-limit will limit the creation of new snapshots if they exceed the new limit.
    Do you want to continue? (y/n) y
    snapshot config: snap-max-hard-limit for test_vol set successfully
    # gluster snapshot config

    Snapshot System Configuration:
    snap-max-hard-limit : 256
    snap-max-soft-limit : 90%
    auto-delete : disable
    activate-on-create : disable

    Snapshot Volume Configuration:

    Volume : test_vol
    snap-max-hard-limit : 2
    Effective snap-max-hard-limit : 2
    Effective snap-max-soft-limit : 1 (90%)

    Volume : test_vol1
    snap-max-hard-limit : 256
    Effective snap-max-hard-limit : 256
    Effective snap-max-soft-limit : 230 (90%)
    #
    #
    # gluster snapshot info volume test_vol
    Volume Name               : test_vol
    Snaps Taken               : 0
    Snaps Available           : 2
    #
    As can be seen with the config option, I have modified the snap-max-hard-limit for the volume test_vol to 2. This means that after taking 2 snapshots, it will not allow me to take any more, till I either delete one of them or increase this value. See how the snapshot info for the volume test_vol now shows 'Snaps Available' as 2.
    # gluster snapshot create snap1 test_vol no-timestamp
    snapshot create: success: Snap snap1 created successfully
    # gluster snapshot create snap2 test_vol no-timestamp
    snapshot create: success: Snap snap2 created successfully
    Warning: Soft-limit of volume (test_vol) is reached. Snapshot creation is not possible once hard-limit is reached.
    #
    #
    # gluster snapshot info volume test_vol
    Volume Name               : test_vol
    Snaps Taken               : 2
    Snaps Available           : 0
        Snapshot                  : snap1
        Snap UUID                 : 2ee5f237-d4d2-47a6-8a0c-53a887b33b26
        Created                   : 2016-07-15 10:12:55
        Status                    : Stopped

        Snapshot                  : snap2
        Snap UUID                 : 2c74925e-4c75-4824-b39e-7e1e22f3b758
        Created                   : 2016-07-15 10:13:02
        Status                    : Stopped

    #
    # gluster snapshot create snap3 test_vol no-timestamp
    snapshot create: failed: The number of existing snaps has reached the effective maximum limit of 2, for the volume (test_vol). Please delete few snapshots before taking further snapshots.
    Snapshot command failed
    #
    What we have done above is create 2 snapshots for the volume test_vol, reaching its snap-max-hard-limit. Notice two things here: first, when we created the second snapshot it gave us a warning that the soft-limit has been reached for this volume (we will come to the soft-limit in a while); and second, 'Snaps Available' in snapshot info has now become 0. As expected, when we try to take the third snapshot it fails, explaining that we have reached the maximum limit, and asking us to delete a few snapshots.
    # gluster snapshot delete snap1
    Deleting snap will erase all the information about the snap. Do you still want to continue? (y/n) y
    snapshot delete: snap1: snap removed successfully
    # gluster snapshot create snap3 test_vol no-timestamp
    snapshot create: success: Snap snap3 created successfully
    Warning: Soft-limit of volume (test_vol) is reached. Snapshot creation is not possible once hard-limit is reached.
    #
    # gluster snapshot config test_vol snap-max-hard-limit 3
    Changing snapshot-max-hard-limit will limit the creation of new snapshots if they exceed the new limit.
    Do you want to continue? (y/n) y
    snapshot config: snap-max-hard-limit for test_vol set successfully
    # gluster snapshot info volume test_vol
    Volume Name               : test_vol
    Snaps Taken               : 2
    Snaps Available           : 1
        Snapshot                  : snap2
        Snap UUID                 : 2c74925e-4c75-4824-b39e-7e1e22f3b758
        Created                   : 2016-07-15 10:13:02
        Status                    : Stopped

        Snapshot                  : snap3
        Snap UUID                 : bfd080f3-848e-490a-83ed-066858bd96fc
        Created                   : 2016-07-15 10:19:17
        Status                    : Stopped

    # gluster snapshot create snap4 test_vol no-timestamp
    snapshot create: success: Snap snap4 created successfully
    Warning: Soft-limit of volume (test_vol) is reached. Snapshot creation is not possible once hard-limit is reached.
    #
    As seen above, once we delete a snapshot, the system allows us to create another one. It also allows us to do so when we increase the snap-max-hard-limit. I am curious to see what happens when we have hit the snap-max-hard-limit, and then go ahead and decrease the limit further. Does the system delete snapshots to bring their number down to the new limit?
    # gluster snapshot config test_vol snap-max-hard-limit 1
    Changing snapshot-max-hard-limit will limit the creation of new snapshots if they exceed the new limit.
    Do you want to continue? (y/n) y
    snapshot config: snap-max-hard-limit for test_vol set successfully
    # gluster snapshot config

    Snapshot System Configuration:
    snap-max-hard-limit : 256
    snap-max-soft-limit : 90%
    auto-delete : disable
    activate-on-create : disable

    Snapshot Volume Configuration:

    Volume : test_vol
    snap-max-hard-limit : 1
    Effective snap-max-hard-limit : 1
    Effective snap-max-soft-limit : 0 (90%)

    Volume : test_vol1
    snap-max-hard-limit : 256
    Effective snap-max-hard-limit : 256
    Effective snap-max-soft-limit : 230 (90%)
    # gluster snapshot info volume test_vol
    Volume Name               : test_vol
    Snaps Taken               : 3
    Snaps Available           : 0
        Snapshot                  : snap2
        Snap UUID                 : 2c74925e-4c75-4824-b39e-7e1e22f3b758
        Created                   : 2016-07-15 10:13:02
        Status                    : Stopped

        Snapshot                  : snap3
        Snap UUID                 : bfd080f3-848e-490a-83ed-066858bd96fc
        Created                   : 2016-07-15 10:19:17
        Status                    : Stopped

        Snapshot                  : snap4
        Snap UUID                 : bd9a5297-0eb5-47d1-b250-9b57f4e57427
        Created                   : 2016-07-15 10:20:08
        Status                    : Stopped

    #
    # gluster snapshot create snap5 test_vol no-timestamp
    snapshot create: failed: The number of existing snaps has reached the effective maximum limit of 1, for the volume (test_vol). Please delete few snapshots before taking further snapshots.
    Snapshot command failed
    #
    So the answer to that question is a big NO. We don't explicitly delete snapshots when you decrease the snap-max-hard-limit to a number below the current number of snapshots, because that would make it very easy to lose important snapshots. What we do instead is not allow you to create any more snapshots, till you... (yeah, you guessed it right) either delete a snapshot or increase the snap-max-hard-limit.

    snap-max-hard-limit is both a system config and a volume config. This means we can set this value for individual volumes, and we can also set a system-wide value.
    # gluster snapshot config snap-max-hard-limit 10
    Changing snapshot-max-hard-limit will limit the creation of new snapshots if they exceed the new limit.
    Do you want to continue? (y/n) y
    snapshot config: snap-max-hard-limit for System set successfully
    # gluster snapshot config

    Snapshot System Configuration:
    snap-max-hard-limit : 10
    snap-max-soft-limit : 90%
    auto-delete : disable
    activate-on-create : disable

    Snapshot Volume Configuration:

    Volume : test_vol
    snap-max-hard-limit : 1
    Effective snap-max-hard-limit : 1
    Effective snap-max-soft-limit : 0 (90%)

    Volume : test_vol1
    snap-max-hard-limit : 256
    Effective snap-max-hard-limit : 10
    Effective snap-max-soft-limit : 9 (90%)
    #
    Notice how not mentioning a volume name in snapshot config sets that particular option for the whole system, instead of for a particular volume. The same is clearly visible in the 'Snapshot System Configuration' section of the snapshot config output. Look at this system option as an umbrella limit for the entire cluster. You are still allowed to configure an individual volume's snap-max-hard-limit: if the individual volume's limit is lower than the system's limit, it will be honored; otherwise the system limit will be honored.

    For example, we can see that the system snap-max-hard-limit is set to 10. Now, in the case of the volume test_vol, the snap-max-hard-limit for the volume is set to 1, which is lower than the system's limit and is hence honored, making the effective snap-max-hard-limit 1. This effective snap-max-hard-limit is the limit taken into consideration during snapshot create, and 'Snaps Available' in snapshot info is computed against it (the effective limit minus the snapshots already taken). Similarly, for the volume test_vol1, the snap-max-hard-limit is 256, which is higher than the system's limit and hence is not honored, making the effective snap-max-hard-limit of that volume 10, i.e. the system's snap-max-hard-limit. Pretty intuitive huh!!!
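    To make the arithmetic explicit, here is a small bash sketch of how the effective limits appear to be derived, reconstructed purely from the config outputs above (my sketch, not Gluster's actual code):
    vol_hard=1; sys_hard=10; soft_pct=90                        # test_vol's values from the config output above
    eff_hard=$(( vol_hard < sys_hard ? vol_hard : sys_hard ))   # the lower of the volume and system limits wins
    eff_soft=$(( eff_hard * soft_pct / 100 ))                   # percentage of the effective hard-limit, rounded down
    echo "hard=$eff_hard soft=$eff_soft"                        # prints: hard=1 soft=0, matching test_vol above
    Plugging in test_vol1's numbers (256 vs 10) gives hard=10 and soft=9, again matching the config output.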
  • snap-max-soft-limit: This option is set as a percentage (of the effective snap-max-hard-limit), and as we have seen in the examples above, on crossing this limit a warning is shown saying the soft-limit has been reached. It serves as a reminder to users that they are nearing the hard-limit, and should do something about it in order to keep taking snapshots. By default the snap-max-soft-limit is set to 90%, and it can be modified using the snapshot config option.
    # gluster snapshot config test_vol snap-max-soft-limit 50
    Soft limit cannot be set to individual volumes.
    Usage: snapshot config [volname] ([snap-max-hard-limit <count>] [snap-max-soft-limit <percent>]) | ([auto-delete <enable|disable>])| ([activate-on-create <enable|disable>])
    #
    So what do we have here... Yes, the snap-max-soft-limit is a system-only option and cannot be set for individual volumes. When the snap-max-soft-limit option is set for the system, it is applied to the effective snap-max-hard-limit of each volume to derive that volume's effective snap-max-soft-limit.
    # gluster snapshot config snap-max-soft-limit 50
    If Auto-delete is enabled, snap-max-soft-limit will trigger deletion of oldest snapshot, on the creation of new snapshot, when the snap-max-soft-limit is reached.
    Do you want to change the snap-max-soft-limit? (y/n) y
    snapshot config: snap-max-soft-limit for System set successfully
    # gluster snapshot config

    Snapshot System Configuration:
    snap-max-hard-limit : 10
    snap-max-soft-limit : 50%
    auto-delete : disable
    activate-on-create : disable

    Snapshot Volume Configuration:

    Volume : test_vol
    snap-max-hard-limit : 1
    Effective snap-max-hard-limit : 1
    Effective snap-max-soft-limit : 0 (50%)

    Volume : test_vol1
    snap-max-hard-limit : 256
    Effective snap-max-hard-limit : 10
    Effective snap-max-soft-limit : 5 (50%)
    #
    As we can see above, setting the option for the system applies it to each volume's effective snap-max-hard-limit (see test_vol1) to derive that particular volume's effective snap-max-soft-limit.

    I am sure the keen-eyed observer in you has noticed the auto-delete warning in the output above, and it's just as well, because auto-delete is our third configurable parameter.
  • auto-delete: This option is tightly tied to the snap-max-soft-limit, or rather the effective snap-max-soft-limit of individual volumes. It is, however, a system option and cannot be set for individual volumes. On enabling this option, once we exceed the effective snap-max-soft-limit of a particular volume, the oldest snapshot of that volume is automatically deleted, making sure the total number of snapshots doesn't exceed the effective snap-max-soft-limit and never reaches the effective snap-max-hard-limit, enabling you to keep taking snapshots without hassle.

    NOTE: Extreme Caution Should Be Exercised When Enabling This Option, As It Automatically Deletes The Oldest Snapshot Of A Volume, When The Number Of Snapshots For That Volume Exceeds The Effective snap-max-soft-limit Of That Volume.
    # gluster snapshot config auto-delete enable
    snapshot config: auto-delete successfully set
    # gluster snapshot config

    Snapshot System Configuration:
    snap-max-hard-limit : 10
    snap-max-soft-limit : 50%
    auto-delete : enable
    activate-on-create : disable

    Snapshot Volume Configuration:

    Volume : test_vol
    snap-max-hard-limit : 1
    Effective snap-max-hard-limit : 1
    Effective snap-max-soft-limit : 0 (50%)

    Volume : test_vol1
    snap-max-hard-limit : 256
    Effective snap-max-hard-limit : 10
    Effective snap-max-soft-limit : 5 (50%)
    #
    # gluster snapshot list
    snap2
    snap3
    snap4
    # gluster snapshot delete all
    System contains 3 snapshot(s).
    Do you still want to continue and delete them?  (y/n) y
    snapshot delete: snap2: snap removed successfully
    snapshot delete: snap3: snap removed successfully
    snapshot delete: snap4: snap removed successfully
    # gluster snapshot create snap1 test_vol1 no-timestamp
    snapshot create: success: Snap snap1 created successfully
    # gluster snapshot create snap2 test_vol1 no-timestamp
    snapshot create: success: Snap snap2 created successfully
    # gluster snapshot create snap3 test_vol1 no-timestamp
    snapshot create: success: Snap snap3 created successfully
    # gluster snapshot create snap4 test_vol1 no-timestamp
    snapshot create: success: Snap snap4 created successfully
    # gluster snapshot create snap5 test_vol1 no-timestamp
    snapshot create: success: Snap snap5 created successfully
    In the above example, we first enable the auto-delete option in snapshot config, then delete all the snapshots currently in the system. Then we create 5 snapshots for test_vol1, whose effective snap-max-soft-limit is 5. On creating one more snapshot we will exceed the limit, and the oldest snapshot will be deleted.
    # gluster snapshot create snap6 test_vol1 no-timestamp
    snapshot create: success: Snap snap6 created successfully
    #
    # gluster snapshot list volume test_vol1
    snap2
    snap3
    snap4
    snap5
    snap6
    #
    As soon as we create snap6, the total number of snapshots becomes 6, exceeding the effective snap-max-soft-limit for test_vol1. The oldest snapshot of test_vol1 (which is snap1) is then deleted in the background, bringing the total number of snapshots back to 5.
  • activate-on-create: As we discussed during snapshot creation, a snapshot is in the deactivated state by default when created, and needs to be activated to be used. On enabling this option in snapshot config, every snapshot created thereafter will be activated on creation. This too is a system option, and cannot be set for individual volumes.
    # gluster snapshot status snap6

    Snap Name : snap6
    Snap UUID : 7fc0a0e7-950d-4c1b-913d-caea6037e633

        Brick Path        :   VM2:/var/run/gluster/snaps/db383315d5a448d6973f71ae3e45573e/brick1/brick
        Volume Group      :   snap_lvgrp
        Brick Running     :   No
        Brick PID         :   N/A
        Data Percentage   :   1.80
        LV Size           :   616.00m

    #
    # gluster snapshot config activate-on-create enable
    snapshot config: activate-on-create successfully set
    # gluster snapshot config

    Snapshot System Configuration:
    snap-max-hard-limit : 10
    snap-max-soft-limit : 50%
    auto-delete : enable
    activate-on-create : enable

    Snapshot Volume Configuration:

    Volume : test_vol
    snap-max-hard-limit : 1
    Effective snap-max-hard-limit : 1
    Effective snap-max-soft-limit : 0 (50%)

    Volume : test_vol1
    snap-max-hard-limit : 256
    Effective snap-max-hard-limit : 10
    Effective snap-max-soft-limit : 5 (50%)
    # gluster snapshot create snap7 test_vol1 no-timestamp
    snapshot create: success: Snap snap7 created successfully
    # gluster snapshot status snap7

    Snap Name : snap7
    Snap UUID : b1864a86-1fa4-4d42-b20a-3d95c2f9e277

        Brick Path        :   VM2:/var/run/gluster/snaps/38b1d9a2f3d24b0eb224f142ae5d33ca/brick1/brick
        Volume Group      :   snap_lvgrp
        Brick Running     :   Yes
        Brick PID         :   6731
        Data Percentage   :   1.80
        LV Size           :   616.00m

    #
    As can be seen, while this option was disabled, snap6 wasn't activated on creation. After enabling the option, snap7 was in the activated state straight after creation. In the next post we will be discussing snapshot restore and snapshot clone.

Sunday 26 June 2016

create, help, list, status, activate, deactivate : GlusterFS Snapshots CLI Part 1

After discussing what GlusterFS snapshots are, what their pre-requisites are, and what goes on behind the creation of a snapshot, it's time we actually created one and familiarized ourselves with it.

To begin with, let's create a volume called test_vol.
# gluster volume create test_vol replica 3 VM1:/brick/brick-dirs/brick VM2:/brick/brick-dirs/brick VM3:/brick/brick-dirs/brick
volume create: test_vol: success: please start the volume to access data
#
# gluster volume start test_vol
volume start: test_vol: success
#
# gluster volume info test_vol

Volume Name: test_vol
Type: Replicate
Volume ID: 09e773c9-e846-4568-a12d-6efb1cecf8cf
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: VM1:/brick/brick-dirs/brick
Brick2: VM2:/brick/brick-dirs/brick
Brick3: VM3:/brick/brick-dirs/brick
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
#
As you can see, we created a 1x3 replica volume and started it. We are now primed to take our first snapshot of this volume. But before we do so, let's add some data to the volume.
# mount -t glusterfs VM1:/test_vol /mnt/test-vol-mnt/
#
# cd /mnt/test-vol-mnt
#
# ls -lrt
total 0
# touch file1
# ls -lrt
total 0
-rw-r--r-- 1 root root 0 Jun 24 13:39 file1
#
So we have successfully mounted our volume and created (touched) a file called file1. Now we will take a snapshot of 'test_vol', and we will call it 'snap1'.
# gluster snapshot create snap1 test_vol
snapshot create: success: Snap snap1_GMT-2016.06.24-08.12.42 created successfully
#
That's weird, isn't it? I asked it to create a snapshot called snap1, and it created a snapshot called snap1_GMT-2016.06.24-08.12.42. What happened is that it actually created a snapshot called snap1, and appended the timestamp of its creation to the snap's name. This is the default naming convention of GlusterFS snapshots, and like everything else, it is so for a couple of reasons:
  • This naming format is essential to support the Volume Shadow Copy Service in GlusterFS volumes.
  • The reason for keeping it as the default naming convention is that it is more informative than just a name. Scrolling through a list of snapshots not only gives you the thoughtful name you chose, but also the time the snapshot was created, which makes it much more relatable, and gives you more clarity when deciding what to do with said snapshot.
But if it still looks icky to you, as it does to a lot of people, you can choose not to have the timestamp appended, by adding the no-timestamp option to the create command.
# gluster snapshot create snap1 test_vol no-timestamp
snapshot create: success: Snap snap1 created successfully
#
So there you go. Congratulations on creating your first GlusterFS snapshot. Now what do you do with it, or rather, what all can you do with it? Let's ask for some help.
# gluster snapshot help
snapshot activate <snapname> [force] - Activate snapshot volume.
snapshot clone <clonename> <snapname> - Snapshot Clone.
snapshot config [volname] ([snap-max-hard-limit <count>] [snap-max-soft-limit <percent>]) | ([auto-delete <enable|disable>])| ([activate-on-create <enable|disable>]) - Snapshot Config.
snapshot create <snapname> <volname> [no-timestamp] [description <description>] [force] - Snapshot Create.
snapshot deactivate <snapname> - Deactivate snapshot volume.
snapshot delete (all | snapname | volume <volname>) - Snapshot Delete.
snapshot help - display help for snapshot commands
snapshot info [(snapname | volume <volname>)] - Snapshot Info.
snapshot list [volname] - Snapshot List.
snapshot restore <snapname> - Snapshot Restore.
snapshot status [(snapname | volume <volname>)] - Snapshot Status.
#
Quite the buffet, isn't it? So let's first see what snapshots we have here. gluster snapshot list will do the trick for us.
# gluster snapshot list
snap1_GMT-2016.06.24-08.12.42
snap1
#

# gluster snapshot list test_vol
snap1_GMT-2016.06.24-08.12.42
snap1
#
The list command displays all the snapshots in the trusted pool. Adding a volume's name to the list command lists only the snapshots of that particular volume. As we have only one volume right now, both show the same result. The volume filter provides more clarity when you have a couple of volumes, each with a number of snapshots.

We have previously discussed that a GlusterFS snapshot is like a GlusterFS volume. Just like a regular volume, you can mount it, delete it, and even see its status. So let's look at the status of our snapshots.
# gluster snapshot status

Snap Name : snap1_GMT-2016.06.24-08.12.42
Snap UUID : 26d1455d-1d58-4c39-9efa-822d9397088a

    Brick Path        :   VM1:/var/run/gluster/snaps/f4b2ae1fbf414c8383c3b198dd42e7d7/brick1/brick
    Volume Group      :   snap_lvgrp
    Brick Running     :   No
    Brick PID         :   N/A
    Data Percentage   :   95.81
    LV Size           :   616.00m


    Brick Path        :   VM2:/var/run/gluster/snaps/f4b2ae1fbf414c8383c3b198dd42e7d7/brick2/brick
    Volume Group      :   snap_lvgrp
    Brick Running     :   No
    Brick PID         :   N/A
    Data Percentage   :   3.45
    LV Size           :   616.00m


    Brick Path        :   VM3:/var/run/gluster/snaps/f4b2ae1fbf414c8383c3b198dd42e7d7/brick3/brick
    Volume Group      :   snap_lvgrp
    Brick Running     :   No
    Brick PID         :   N/A
    Data Percentage   :   3.43
    LV Size           :   616.00m


Snap Name : snap1
Snap UUID : 73489d9b-c370-4687-8be9-fc094ee78d0a

    Brick Path        :   VM1:/var/run/gluster/snaps/d5171e51e1ef407292ee4e24677385cb/brick1/brick
    Volume Group      :   snap_lvgrp
    Brick Running     :   No
    Brick PID         :   N/A
    Data Percentage   :   95.81
    LV Size           :   616.00m


    Brick Path        :   VM2:/var/run/gluster/snaps/d5171e51e1ef407292ee4e24677385cb/brick2/brick
    Volume Group      :   snap_lvgrp
    Brick Running     :   No
    Brick PID         :   N/A
    Data Percentage   :   3.45
    LV Size           :   616.00m


    Brick Path        :   VM3:/var/run/gluster/snaps/d5171e51e1ef407292ee4e24677385cb/brick3/brick
    Volume Group      :   snap_lvgrp
    Brick Running     :   No
    Brick PID         :   N/A
    Data Percentage   :   3.43
    LV Size           :   616.00m
As with the volume status command, the snapshot status command shows the status of all the snapshot bricks of all snapshots. Adding a snap name to the status command displays the status of only that particular snapshot.
# gluster snapshot status snap1

Snap Name : snap1
Snap UUID : 73489d9b-c370-4687-8be9-fc094ee78d0a

    Brick Path        :   VM1:/var/run/gluster/snaps/d5171e51e1ef407292ee4e24677385cb/brick1/brick
    Volume Group      :   snap_lvgrp
    Brick Running     :   No
    Brick PID         :   N/A
    Data Percentage   :   95.81
    LV Size           :   616.00m


    Brick Path        :   VM2:/var/run/gluster/snaps/d5171e51e1ef407292ee4e24677385cb/brick2/brick
    Volume Group      :   snap_lvgrp
    Brick Running     :   No
    Brick PID         :   N/A
    Data Percentage   :   3.45
    LV Size           :   616.00m


    Brick Path        :   VM3:/var/run/gluster/snaps/d5171e51e1ef407292ee4e24677385cb/brick3/brick
    Volume Group      :   snap_lvgrp
    Brick Running     :   No
    Brick PID         :   N/A
    Data Percentage   :   3.43
    LV Size           :   616.00m
Similar to the snapshot list command, adding a volname instead of a snapname to the status command displays the status of all snapshots of that particular volume.
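For example (output omitted; it looks just like the per-snapshot listing above):
# gluster snapshot status volume test_vol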
The status itself gives us a wealth of information about each snapshot brick, like the volume group, the data percentage, and the LV size. It also tells us whether the brick is running or not, and if it is, what the PID of the brick is. Interestingly, we see that none of the bricks are running. This is the default behaviour of GlusterFS snapshots: a newly created snapshot is in the deactivated state (analogous to the Created/Stopped state of a GlusterFS volume), where none of its bricks are running. In order to start the snap brick processes, we will have to activate the snapshot.
# gluster snapshot activate snap1
Snapshot activate: snap1: Snap activated successfully
#
# gluster snapshot status snap1

Snap Name : snap1
Snap UUID : 73489d9b-c370-4687-8be9-fc094ee78d0a

    Brick Path        :   VM1:/var/run/gluster/snaps/d5171e51e1ef407292ee4e24677385cb/brick1/brick
    Volume Group      :   snap_lvgrp
    Brick Running     :   Yes
    Brick PID         :   29250
    Data Percentage   :   95.81
    LV Size           :   616.00m


    Brick Path        :   VM2:/var/run/gluster/snaps/d5171e51e1ef407292ee4e24677385cb/brick2/brick
    Volume Group      :   snap_lvgrp
    Brick Running     :   Yes
    Brick PID         :   12616
    Data Percentage   :   3.45
    LV Size           :   616.00m


    Brick Path        :   VM3:/var/run/gluster/snaps/d5171e51e1ef407292ee4e24677385cb/brick3/brick
    Volume Group      :   snap_lvgrp
    Brick Running     :   Yes
    Brick PID         :   3058
    Data Percentage   :   3.43
    LV Size           :   616.00m
After the snapshot is activated, we can see that the bricks are running, along with their respective PIDs. The snapshot can be deactivated again using the deactivate command.
# gluster snapshot deactivate snap1
Deactivating snap will make its data inaccessible. Do you want to continue? (y/n) y
Snapshot deactivate: snap1: Snap deactivated successfully
#
# gluster snapshot status snap1

Snap Name : snap1
Snap UUID : 73489d9b-c370-4687-8be9-fc094ee78d0a

    Brick Path        :   VM1:/var/run/gluster/snaps/d5171e51e1ef407292ee4e24677385cb/brick1/brick
    Volume Group      :   snap_lvgrp
    Brick Running     :   No
    Brick PID         :   N/A
    Data Percentage   :   95.81
    LV Size           :   616.00m


    Brick Path        :   VM2:/var/run/gluster/snaps/d5171e51e1ef407292ee4e24677385cb/brick2/brick
    Volume Group      :   snap_lvgrp
    Brick Running     :   No
    Brick PID         :   N/A
    Data Percentage   :   3.45
    LV Size           :   616.00m


    Brick Path        :   VM3:/var/run/gluster/snaps/d5171e51e1ef407292ee4e24677385cb/brick3/brick
    Volume Group      :   snap_lvgrp
    Brick Running     :   No
    Brick PID         :   N/A
    Data Percentage   :   3.43
    LV Size           :   616.00m
Up till now we have barely scratched the surface. There's delete, restore, config, and a whole lot more. We will be covering these in future posts.

Monday 13 June 2016

GlusterFS Snapshots And Their Prerequisites

Long time, no see, huh!!! This post has been pending on my part for a while now, partly because I was busy and partly because I am that lazy. But it's a fairly important post, as it talks about snapshotting GlusterFS volumes. So what are these snapshots, and why are they so darn important? Let's find out...

Wikipedia says, 'a snapshot is the state of a system at a particular point in time'. In filesystems specifically, a snapshot is a 'backup' (a read-only copy of the data set frozen at a point in time). Obviously it's not a full backup of the entire dataset, but it's a backup nonetheless, which makes it pretty important. Now moving on to GlusterFS snapshots: GlusterFS snapshots are point-in-time, read-only, crash-consistent copies of GlusterFS volumes. They are also online snapshots, and hence the volume and its data continue to be available to clients while the snapshot is being taken.

GlusterFS snapshots are thinly-provisioned LVM-based snapshots, and hence they have certain pre-requisites. A quick look at the product documentation tells us what those pre-requisites are. For a GlusterFS volume to be able to support snapshots, it needs to meet the following pre-requisites (a minimal brick-setup sketch follows the list):
  • Each brick of the GlusterFS volume should be on an independent, thinly-provisioned LVM.
  • A brick's LVM should not contain any data other than the brick's.
  • None of the bricks should be on a thick LVM.
  • The gluster version should be 3.6 or above (duh!!).
  • The volume should be started.
  • All brick processes must be up and running.
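
For reference, here is a minimal sketch of what preparing one such brick could look like with LVM. The device, volume group, thinpool, and LV names, as well as the sizes, are illustrative assumptions on my part, not something prescribed by this post:
# pvcreate /dev/sdb
# vgcreate snap_lvgrp /dev/sdb
# lvcreate -L 1G --thinpool snap_thinpool snap_lvgrp
# lvcreate -V 616M --thin snap_lvgrp/snap_thinpool -n brick1_lv
# mkfs.xfs -i size=512 /dev/snap_lvgrp/brick1_lv
# mkdir -p /brick/brick-dirs
# mount /dev/snap_lvgrp/brick1_lv /brick/brick-dirs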

Now that I have laid out the rules above, let me give you their origin story as well: that is, how GlusterFS snapshots internally enable you to take a crash-consistent backup using thinly-provisioned LVM in a space-efficient manner. We start by having a look at a GlusterFS volume whose bricks are on independent, thinly-provisioned LVMs.


[Diagram: the GlusterFS volume test_vol, with Brick1 and Brick2 each mounted on an independent thinly-provisioned LV, and a client connected to both bricks]

In the above diagram, we can see that the GlusterFS volume test_vol comprises two bricks, Brick1 and Brick2. Both bricks are mounted on independent, thinly-provisioned LVMs. When the volume is mounted, the client process maintains a connection to both bricks. This is as much of a summary of GlusterFS volumes as is needed for this post. A GlusterFS snapshot is also internally a GlusterFS volume, with the exception that it is a read-only volume, and it is treated differently than a regular volume in certain aspects.

When we take a snapshot (say snap1) of the GlusterFS volume test_vol, the following things happen in the background:
  • We check whether the volume is in the started state, and if so, whether all the brick processes are up and running.
  • At this point we barrier certain fops, in order to make the snapshot crash-consistent. What this means is that even though it is an online snapshot, certain write fops will be barriered for the duration of the snapshot. Fops that are in flight when the barrier is initiated will be allowed to complete, but the acknowledgement to the client will be held back till the snapshot creation is complete. The barriering has a default time-out window of 2 mins; if the snapshot is not complete within it, the fops are unbarriered and we fail that particular snapshot.
  • After successfully barriering fops on all brick processes, we proceed to take an individual copy-on-write LVM snapshot of each brick (a manual illustration follows below). A copy-on-write LVM snapshot ensures a fast, space-efficient backup of the data currently on the brick. These LVM snapshots reside in the same LVM thinpool as the GlusterFS brick LVMs.
  • Once these snapshots are taken, we carve bricks out of them, and create a snapshot volume out of those bricks.
  • Once the snapshot creation is complete, we unbarrier the GlusterFS volume.

[Diagram: the snapshot creation process, showing an LVM snapshot of each brick inside the same thinpool, carved into bricks (Brick1" and Brick2") that form the snapshot volume snap1]

As can be seen in the above diagram, the snapshot creation process has created an LVM snapshot for each brick LVM, and these snapshots lie in the same thinpool as the brick LVMs. We then carve bricks (Brick1" and Brick2") out of these snapshots, and create a snapshot volume called snap1.
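
For intuition, the per-brick step is roughly equivalent to taking a thin LVM snapshot by hand, something like this (a sketch with assumed LV names; Gluster orchestrates this internally):
# lvcreate -s -n brick1_snap snap_lvgrp/brick1_lv
Since the origin is a thin LV, the snapshot needs no pre-allocated size: it shares the thinpool and consumes space only as data changes, which is what makes these snapshots fast and space-efficient.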

This snapshot, snap1, is a read-only snapshot volume which can be:
  • Restored to the original volume test_vol.
  • Mounted as a read-only volume and accessed (a mount sketch follows this list).
  • Cloned to create a writeable snapshot.
  • Accessed via User-Serviceable-Snapshots.
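As a taste of the second item, an activated snapshot can be mounted much like a regular volume; per the Gluster docs the snapshot is addressed as /snaps/<snapname>/<parent-volname> (the mount point below is an assumption):
# mount -t glusterfs VM1:/snaps/snap1/test_vol /mnt/snap1-mnt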
All these functionalities will be discussed in future posts, starting with the command line tools to create, delete and restore GlusterFS snapshots.