What PDL (Permanent Device Loss) looks like

Some time ago, vSphere introduced a differentiation between PDL (Permanent Device Loss) and APD (All Paths Down). In a PDL situation, the host does not expect the device to return; PDL happens, for example, when an administrator un-presents a LUN from a host. APD, on the other hand, is completely unplanned, and by default the host keeps trying to get the device back. Starting with vSphere 6.0, an HA cluster offers settings for the response to PDL and APD. This post gives some information about the different settings, what happens to VMs with more than one VMDK, custom alarm creation, and more.

Device and datastore IDs

On an ESXi host there are a few ways to identify a storage LUN or VMFS volume: the LUN WWN (starting with naa.), assigned by the storage device; an ID assigned by ESXi; and of course the name of the datastore. To show a table of all VMFS volumes and their corresponding IDs, execute the following command on the console (or via any other method that leverages esxcli):

esxcli storage vmfs extent list
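
The output looks similar to this (an illustrative sketch; the datastore name is made up, while the UUID and the NAA ID are the ones that appear in the log examples later in this post):

Volume Name  VMFS UUID                            Extent Number  Device Name                           Partition
-----------  -----------------------------------  -------------  ------------------------------------  ---------
datastore01  57f51dcf-a1dbb293-5d65-ac162d6edcf4              0  naa.60002ac00000000000001f1300014d57          1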

Knowing the different IDs of your datastores is important because they are used in log files such as /var/log/vmkernel.log and /var/log/vobd.log.
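
To quickly check whether a host has already logged PDL-related messages, you can grep both files directly on the ESXi shell (a simple sketch; the patterns match the messages shown in the next section):

grep -i 'permanently' /var/log/vmkernel.log    # NMP/SATP messages for paths in PDL state
grep -i 'permanentloss' /var/log/vobd.log      # VOB events such as esx.problem.scsi.device.state.permanentloss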

How to distinguish between APD and PDL

ESXi recognizes PDL by receiving and reacting to a specific SCSI sense code. When a storage device sends the sense code 0x5 0x25 0x0 (LOGICAL UNIT NOT SUPPORTED) on a path, ESXi declares the PDL state for that path. In vmkernel.log you will see entries like these:

2017-02-02T10:25:44.733Z cpu9:33406)WARNING: NMP: nmp_PathDetermineFailure:2973: Cmd (0x2a) PDL error (0x5/0x25/0x0) - path vmhba3:C0:T2:L15 device naa.60002ac00000000000001f1300014d57 - triggering path failover
2017-02-02T10:25:44.733Z cpu9:33406)NMP: nmp_ThrottleLogForDevice:3298: Cmd 0x2a (0x439dcea80f40, 32798) to dev "naa.60002ac00000000000001f1300014d57" on path "vmhba3:C0:T2:L15" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x25 0x0. Act:FAILOVER
2017-02-02T10:25:45.651Z cpu15:33044)VMW_SATP_ALUA: satp_alua_issueCommandOnPath:670: Path "vmhba3:C0:T2:L15" (PERM LOSS) command 0xa3 failed with status Device is permanently unavailable. H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x25 0x0.

A volume is declared to be in PDL state only when all paths to it have declared PDL; that means each path has to return sense code 0x25 before the device is set to PDL state.
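
Besides the log files, the state of a device and its paths can be checked live with esxcli (a sketch; the device ID is the one from the log example above, and the exact status wording may vary between ESXi versions):

esxcli storage core device list -d naa.60002ac00000000000001f1300014d57 | grep -i 'status'   # may show something like: Status: permanent device loss
esxcli storage core path list -d naa.60002ac00000000000001f1300014d57 | grep -i 'state:'     # paths in PDL typically show State: dead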

Alarm definition

You can create specific alarms that are triggered by new entries in /var/log/vobd.log. In this log file, IDs such as esx.problem.scsi.device.state.permanentloss are used, and you can create an alarm that fires when these IDs are logged. To create such an alarm:

  • Add alarm
    • Monitor: Hosts
    • Monitor for: specific event occurring on this object
    • Add trigger: ID such as esx.problem.scsi.device.state.permanentloss (gets translated to “Device has been removed or is permanently inaccessible”)

Creating such alarms is not limited to PDL! Some time ago William Lam posted a long list of IDs, called VOB IDs, that you can use to create alarms. Here is the link for more information. To list all VOB IDs, check the file /usr/lib/vmware/hostd/extensions/hostdiag/locale/en/event.vmsg on a host.
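
To extract a raw list of all VOB IDs directly from that file, a simple grep on the ESXi shell is enough (a sketch; the pattern may need tuning, since the exact file format can differ between ESXi versions):

grep -o 'esx\.[a-z]*\.[a-zA-Z.]*' /usr/lib/vmware/hostd/extensions/hostdiag/locale/en/event.vmsg | sort -u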

Which VMs are affected?

This trivial-sounding question needs some investigation. Obviously, VMs that use space on the PDL datastore are affected. A VM can allocate space for:

  • Configuration files (vmx, nvram, …)
  • VMDK (OS Disk)
  • VMDK (Data Disk).

Which of these data types makes a VM susceptible to PDL? Answer: ALL of them! No matter which of the three types is stored on a PDL datastore, the VM is affected; only the impact may differ, depending on your choice of PDL response. A quick way to find the affected VMs is sketched below.
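
To find out which VMs have files on a particular datastore, a simple search on the ESXi shell can help, as long as the datastore is still accessible (a sketch; the datastore UUID is the one from the question dialog shown later in this post):

find /vmfs/volumes/57f51dcf-a1dbb293-5d65-ac162d6edcf4/ -name '*.vmx' -o -name '*.vmdk'   # lists config files and virtual disks on this datastore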

PDL behavior

Starting with vSphere 6.0, the response to PDL can be configured at HA cluster level. These settings can also be overridden on a per-VM basis. Both the Web Client and the HTML5 client show the current settings in the overview of each VM. Note that the Web Client in 6.5 may display wrong settings when per-VM overrides are in place; the HTML5 client shows the correct settings for me.

You can set one of these PDL responses at cluster or VM level:

  • Disabled (default)
  • Issue events
  • Power down and restart VM.

Disabled

In this case, the following event is logged for affected VMs:

Event Type Description:
This event is logged when a VM’s files were not accessible due to a storage connectivity failure. vSphere HA will take action if VM Component Protection is enabled for the VM.
Possible Causes:
A datastore was inaccessible due to a storage connectivity loss of All Paths Down or Permanent Device Loss. A VM was affected because it had files on the inaccessible datastore.
Related events:
There are no related events.

When PDL is declared for a datastore, a question has to be answered in the Web Client for each affected VM:

The storage backing for virtual disk ‘/vmfs/volumes/57f51dcf-a1dbb293-5d65-ac162d6edcf4/vm01/vm01.vmdk’ has been permanently lost. You may be able to hot remove this virtual device from the virtual machine and continue after clicking Retry. Click Cancel to terminate this session.

By clicking Retry, the host tries to get the device back, but in a moderate way: there is no flood of messages in vmkernel.log. Notice: if the VM's OS disk is not affected, clicking Retry will not stop the OS; inside the guest, only the affected disks cannot be accessed any more.

When clicking Cancel, the VM is reset. If another host still has access to the datastore, the VM can be started there automatically.

Notice: from the moment the question is asked until it is answered, affected VMs are frozen, even if their OS disks are not affected. However, if the question is not answered interactively, it is answered automatically after 4 minutes (observed). The auto-answer is Retry.
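
The pending question can also be listed and answered on the ESXi shell with vim-cmd (a sketch; <vmid>, <msgid> and the answer index are placeholders that have to be taken from the actual output):

vim-cmd vmsvc/getallvms                  # find the ID of the affected VM
vim-cmd vmsvc/message <vmid>             # show the pending question and its message ID
vim-cmd vmsvc/message <vmid> <msgid> 0   # answer with choice 0 (e.g. Retry)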

Issue events

Same behavior as Disabled, but this event is logged for affected VMs:

Event Type Description:
This event is logged when a VM affected by an inaccessible datastore in a vSphere HA cluster was not terminated.
Possible Causes:
VM Component Protection is configured to not terminate the VM, or vSphere HA host monitoring is disabled, or VM restart priority is disabled, or the VM is an agent VM, or there are not sufficient resources to fail over the VM. For the case of insufficient resources, vSphere HA will attempt to terminate the VM when resources become available.
Action: If vSphere DRS is in manual mode, look for any pending recommendations and approve them so that vSphere HA failover can proceed.
Related events:
There are no related events.

Power down and restart VM

Does what it says: the VM is powered down and restarted on a host that still has access to the datastore. Remember: it does not matter which of the VM's data resides on the datastore.

Device removal

When a device in PDL state is not expected to come back, it should be removed from the hosts. An automatic, clean removal is possible when all affected VMs are powered off (restarting them on another host is not necessary for the removal). As described before, this applies to these cases:

  • Option Power down and restart VM
  • Clicking Cancel.

In all other cases the device is not removed because there are still open handles to it.
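
If a device does not disappear automatically, it can be cleaned up manually with esxcli (a sketch using the detached-device commands; the device ID is the one from the examples above):

esxcli storage core device detached list                                             # devices in detached state
esxcli storage core device detached remove -d naa.60002ac00000000000001f1300014d57   # remove the device from the detached list
esxcli storage core adapter rescan --all                                             # rescan all adapters afterwards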

Notes

  • The behavior of the advanced setting Disk.AutoremoveOnPDL changed with ESXi 6.0. For vSphere Metro Storage Cluster (vMSC) environments it was recommended to disable AutoremoveOnPDL in 5.5; in 6.0 you can leave it enabled in vMSC environments. For more information see the KB article here. The current value can be checked with esxcli, as sketched after this list.
  • To check the path count to your datastores, you can use the PowerCLI function linked here. It counts active, standby, and dead paths.
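
The current value of Disk.AutoremoveOnPDL can be checked and changed per host with esxcli (a sketch; an integer value of 1 means enabled):

esxcli system settings advanced list -o /Disk/AutoremoveOnPDL     # show the current value
esxcli system settings advanced set -o /Disk/AutoremoveOnPDL -i 1 # enable automatic removal on PDL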
