Objective 1.1 - Revision

RAID Levels

Surprise, Wikipedia has a decent overview of the different RAID levels, so there is no point reproducing it here! In practice, levels 0, 1, 5 and 6 are the most commonly used in commercial storage systems. Know these off by heart.
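The parity idea behind RAID 5 is easy to demonstrate: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt by XOR-ing the survivors with the parity. A minimal sketch (the byte values and the three-disk layout are arbitrary illustrations):

```shell
# RAID 5 parity sketch: parity is the XOR of the data blocks.
# Three "disks" each hold one byte of data (values are arbitrary).
d1=170 d2=51 d3=92

# The parity block written to the fourth disk:
parity=$(( d1 ^ d2 ^ d3 ))

# Disk 2 fails; rebuild its byte from the survivors plus parity:
rebuilt=$(( d1 ^ parity ^ d3 ))

echo "parity=$parity rebuilt=$rebuilt"   # rebuilt matches the lost d2 (51)
```

RAID 6 extends this with a second, independently computed parity block, which is why it survives two simultaneous disk failures where RAID 5 survives only one.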

More recently, some vendors have developed technologies that do not use RAID but instead pool large numbers of disks together, allowing the storage subsystem to place data wherever it will get the best performance. This is often referred to as storage virtualization.

Supported HBA Types

The storage compatibility guide (document #1) is updated every few weeks to contain the latest compatible storage devices. The document runs to over 1100 pages but the first few contain everything that is needed. In summary, the following are supported types of storage:

Fibre Channel (FC) – the only type that supports Microsoft clustering

Fibre Channel over Ethernet (FCoE) – these are connected using Converged Network Adapters (CNAs)

Hardware iSCSI – These come in two flavours:

  • Dependent – rely on the networking and iSCSI management interfaces provided by vSphere
  • Independent – have their own networking and iSCSI management interfaces (for example the QLA4052) and may need to be separately licensed

Software iSCSI – Uses functionality built into the vmkernel and connections are made using supported NICs.

NFS – Like software iSCSI, functionality is built into the vmkernel and connections are made using supported NICs
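For the software iSCSI case, the setup can be sketched from the command line with esxcli. This is a sketch under assumptions: the adapter name (vmhba33), vmkernel NIC (vmk1) and target address are placeholders for illustration, not values from the compatibility guide.

```shell
# Enable the software iSCSI initiator (built into the vmkernel)
esxcli iscsi software set --enabled=true

# Bind a vmkernel NIC to the software adapter
# (vmhba33 and vmk1 are assumed names - check your host)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1

# Add a send-target discovery address and rescan for devices
# (target IP is an assumption)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.0.10:3260
esxcli storage core adapter rescan --adapter=vmhba33
```

The same steps can of course be done through the vSphere Client; the CLI form is handy for scripting host builds.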

Virtual Disk Format Types

There are 10 types of virtual disk that VMware's products use. These are taken from VMware KB 1022242.

  • zeroedthick (default) – Space required for the virtual disk is allocated during the creation of the disk file. Any data remaining on the physical device is not erased during creation, but is zeroed out on demand at a later time on first write from the virtual machine. The virtual machine does not read stale data from disk.
  • eagerzeroedthick – Space required for the virtual disk is allocated at creation time. In contrast to zeroedthick format, the data remaining on the physical device is zeroed out during creation. It might take much longer to create disks in this format than to create other types of disks. (Required for Fault Tolerance feature!)
  • thick – Space required for the virtual disk is allocated during creation. This type of formatting does not zero out any old data that might be present on this allocated space. A non-root user cannot create disks of this format.
  • thin – Space required for the virtual disk is not allocated during creation, but is supplied and zeroed out, on demand at a later time.
  • rdm – Virtual compatibility mode for raw disk mapping.
  • rdmp – Physical compatibility mode (pass-through) for raw disk mapping.
  • raw – Raw device.
  • 2gbsparse – A sparse disk with 2GB maximum extent size. You can use disks in this format with other VMware products; however, you cannot power on a sparse disk on an ESX host until you reimport the disk with vmkfstools in a compatible format, such as thick or thin.
  • monosparse – A monolithic sparse disk. You can use disks in this format with other VMware products.
  • monoflat – A monolithic flat disk. You can use disks in this format with other VMware products.
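The first few formats above can be chosen at creation time with the vmkfstools -d flag. A sketch, assuming a hypothetical datastore path:

```shell
# Create a 4GB disk in each of the main VMFS formats
# (the datastore and folder names are assumptions)
vmkfstools -c 4g -d zeroedthick      /vmfs/volumes/datastore1/vm1/zt.vmdk
vmkfstools -c 4g -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/ezt.vmdk
vmkfstools -c 4g -d thin             /vmfs/volumes/datastore1/vm1/thin.vmdk
```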

Conversions between thin and thick disks can be done as part of a Storage vMotion operation; eagerzeroedthick disks are a special case. To determine whether a disk is zeroedthick or eagerzeroedthick, use vmkfstools as described in VMware KB 1011170. There is also a community Perl script that can be used to determine the disk format type (see getRealVMDiskFormat.pl). Converting to eagerzeroedthick can also be done on a powered-off VM using vmkfstools, or when adding a disk to a VM by ticking the "Support clustering features such as Fault Tolerance" checkbox.
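The vmkfstools conversions can be sketched as follows (the disk paths are assumptions; the VM must be powered off for the in-place inflate):

```shell
# Inflate an existing thin disk to eagerzeroedthick in place
vmkfstools --inflatedisk /vmfs/volumes/datastore1/vm1/thin.vmdk

# Clone a disk into a different format (here, thin)
vmkfstools -i /vmfs/volumes/datastore1/vm1/src.vmdk \
           -d thin /vmfs/volumes/datastore1/vm1/dst-thin.vmdk
```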