UPDATE Sept 2015 – there is new functionality in the vSphere Web Client (v6.0u1) that allows you to delete all partitions – good info via William Lam’s website. Similar functionality will arrive in the ESXi Embedded Host Client in a later update.
UPDATE March 2015 – some people are hitting a similar issue when trying to reuse disks previously used by VSAN. The process below may still work but there are a few other things to check, as detailed here by Cormac Hogan.
Over the Christmas break I finally got some time to upgrade my home lab. One of my tasks was to build a new shared storage server, and it was while installing the base ESXi (v5, build 469512) that I ran into an issue. I was unable to add any of the local disks to my ESXi host as VMFS datastores, as I got the error “HostDatastoreSystem.QueryVmfsDatastoreCreateOptions for object ‘ha-datastoresystem’ on ESXi…” as shown below;
I’d used this host and the same disks previously as an ESX4 host, so I knew hardware incompatibility wasn’t an issue. Just in case, I tried VMFS3 (instead of VMFS5), with the same result. I’ve run into a similar issue before with HP DL380 G5s, where the workaround is to use the VI client connected directly to the host rather than via vCenter. I connected directly to the host but got the same result. At this point I resorted to Google, as I had a pretty specific error message to search for. One of the first pages was this helpful blogpost at Eversity.nl (it’s always the Dutch, isn’t it?) which confirmed it was an issue with pre-existing or incompatible information on the hard disks. There are various situations which might leave pre-existing info on a disk;
- Vendor array utilities (HP, Dell etc.) can create extra partitions or leave partition creation unfinished
- GPT partitions created by Mac OS X, ZFS, Windows 2008 R2 x64 etc. Microsoft have a good explanation of GPT.
This made a lot of sense, as I’d previously been trialling this host (with ZFS pools) as a NexentaStor CE storage server.
The suggestion on the Eversity.nl site is to use ‘fdisk’ from an SSH session to delete the extraneous partitions. I gave that a try but it wouldn’t even list the disks, throwing an error about not being able to handle GPT disks;
There are various useful suggestions in the Eversity comments for dealing with GPT partitions, but they are quite long-winded. The quickest solution (as suggested by fdisk itself) was to use partedUtil.
You need to run the following command for each disk that you’re having issues with (this overwrites the partition table with a standard msdos one, which VMware can work with);
NOTE: This will ERASE ALL DATA on the disk in question so be careful to select the right disks!
# partedUtil mklabel /dev/disks/<disk id> msdos
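Before running the destructive mklabel, it’s worth confirming you’ve picked the right disk: partedUtil also has a read-only getptbl option that prints the current label (e.g. gpt) and any partitions. A hedged sketch of the full sequence, using a hypothetical disk ID (substitute one of your own from ls /dev/disks), guarded so it only does anything on a live ESXi host;

```shell
# Hypothetical disk ID -- substitute one of your own from `ls /dev/disks`.
DISK="/dev/disks/naa.5000c5001092762f"

# Only attempt this on an actual ESXi host where /dev/disks exists.
if [ -d /dev/disks ]; then
    # Read-only: prints the current label ("gpt" here) and its partitions,
    # so you can confirm you have the right disk before erasing anything.
    partedUtil getptbl "$DISK"

    # Destructive: overwrites the partition table with an empty msdos label.
    partedUtil mklabel "$DISK" msdos
fi
```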
To get a list of your disks;
/dev/disks # ls
mpx.vmhba32:C0:T0:L0
mpx.vmhba32:C0:T0:L0:1
mpx.vmhba32:C0:T0:L0:5
mpx.vmhba32:C0:T0:L0:6
mpx.vmhba32:C0:T0:L0:7
mpx.vmhba32:C0:T0:L0:8
naa.5000c5001092762f
naa.50010b9000426a94
naa.50010b9000426a94:1
naa.50010b900042f244
naa.50010b900042f244:1
naa.5e83a97eca49476b
t10.ATA_____MAXTOR_STM3500320AS_________________________________9QM8XE4F
t10.ATA_____MAXTOR_STM3500320AS_________________________________9QM8XE4F:1
vml.0000000000766d68626133323a303a30
vml.0000000000766d68626133323a303a30:1
vml.0000000000766d68626133323a303a30:5
vml.0000000000766d68626133323a303a30:6
vml.0000000000766d68626133323a303a30:7
vml.0000000000766d68626133323a303a30:8
vml.010000000020202020202020202020202039514d38584534464d4158544f52
vml.010000000020202020202020202020202039514d38584534464d4158544f52:1
vml.02000000005000c5001092762f4d4158544f52
vml.020000000050010b9000426a94474e41303733
vml.020000000050010b9000426a94474e41303733:1
vml.020000000050010b900042f244474e41303733
vml.020000000050010b900042f244474e41303733:1
vml.02000000005e83a97eca49476b4f435a2d4147
As you can see from the above output, several of my GPT disks had extra partitions (indicated by the :1 on the end of the disk name). Relabelling automatically destroys these extra partitions. The ‘vml.’ entries are simply symbolic links to the actual disks, so you don’t need to run the command against those.
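If the listing is long, you can filter it down to just the whole-disk device names (dropping the vml. symlinks and the partition entries, which end in :1, :5 etc.). This is a sketch using generic grep, with sample names taken from the listing above;

```shell
# Keep only whole-disk entries: drop vml.* symlinks and anything
# ending in a numeric partition suffix such as :1 or :5.
list_whole_disks() {
    grep -v '^vml\.' | grep -Ev ':[0-9]+$'
}

# Example with names from the listing above; on a live ESXi host you
# would run: ls /dev/disks | list_whole_disks
printf '%s\n' \
    "mpx.vmhba32:C0:T0:L0" \
    "mpx.vmhba32:C0:T0:L0:1" \
    "naa.5000c5001092762f" \
    "naa.50010b9000426a94:1" \
    "vml.02000000005000c5001092762f4d4158544f52" |
    list_whole_disks
```

Note that the mpx.* names legitimately contain colons (C0:T0:L0), so the filter only strips suffixes that are purely numeric after the final colon.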
When you’ve finished, you can go back to the VI client and add the disks successfully; no reboots required.