ESX 5.5 – NFS – iSCSI Netapp performance issue?


Hi All.

I visited a site today, installed by someone else, to troubleshoot performance issues. The issues seem to relate to high disk queues and latency on the datastores.

On first inspection the hardware seems great.

80-user site.

2x HP DL360p, 180GB RAM, 10Gbit SFP to switch.
1x NetApp with 24x 500GB SAS disks, connected to the 10Gbit SFP switch.
8 guest servers: Exchange, SQL, RDS, nothing special.

The NetApp SAN appears to be carved up into two separate units (SAN1/SAN2), each with 12 disks, with HA enabled between the two. One side is used for NFS sharing of user data files, mapped to a network drive on each PC.

The other side also has NFS shares, but these are for the virtual machines, and this is where it gets interesting: the VMware hosts are connected to the SAN, but the guest VMDK files reside in an NFS share, not on block iSCSI as I would have expected.

The guests have additional virtual network cards added to talk back to the SAN using the Microsoft iSCSI initiator, connected via a second vSwitch, to mount extra disks, rather than adding the disk in the guest's VM settings as a LUN on the SAN via iSCSI. All the cards are E1000s, so limited to 1Gbit.
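To put that 1Gbit ceiling in perspective, here is a rough sketch of what an E1000 vNIC can realistically carry for in-guest iSCSI. The ~10% protocol-overhead figure is an assumption (TCP/IP plus iSCSI framing), not a measured value from this site:

```python
# Rough throughput ceiling for a 1 Gbit/s virtual NIC carrying iSCSI traffic.
# The 'efficiency' factor is an assumed allowance for TCP/IP and iSCSI
# framing overhead; real results depend on MTU, offload settings, etc.

def link_mb_per_s(gbps: float, efficiency: float = 0.9) -> float:
    """Usable MB/s on a link: 1 Gbit/s = 125 MB/s raw, minus overhead."""
    return gbps * 125.0 * efficiency

e1000_ceiling = link_mb_per_s(1.0)    # ~112.5 MB/s best case
ten_gig = link_mb_per_s(10.0)         # what the hosts' 10Gbit uplinks allow
```

So even before disk latency enters the picture, each in-guest iSCSI disk is capped at roughly 110 MB/s while the hosts themselves have 10Gbit uplinks; moving those disks to VMDKs (or at least VMXNET3 vNICs) would remove that artificial cap.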

In one instance I have an Exchange server with a single disk in VMware sitting on an NFS datastore, and then, inside the guest Windows OS, a Microsoft iSCSI initiator going back to the SAN over the virtual NICs to add an extra disk on which the Exchange database resides. Presumably this was done because Microsoft doesn't support Exchange in an NFS environment, and SQL isn't much better. I know NFS should perform about the same as block-level iSCSI, but I'm not sure whether I should change it all, perhaps mainly because the SAN is only giving 12 spindles to 8 servers, which I suspect is causing my performance issues.
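The 12-spindles-for-8-servers suspicion is easy to sanity-check with rule-of-thumb numbers. The per-disk IOPS figure and the write penalty below are generic assumptions (typical 10k SAS ballpark and a simple RAID parity penalty), not measurements from this NetApp, whose NVRAM/WAFL write handling will soften the penalty in practice:

```python
# Back-of-envelope IOPS budget for a 12-spindle aggregate shared by 8 VMs.
# IOPS_PER_SAS is an assumed rule-of-thumb figure for a 10k RPM SAS drive.

IOPS_PER_SAS = 140  # assumed typical random IOPS per 10k SAS spindle

def aggregate_iops(spindles: int, per_disk: int = IOPS_PER_SAS) -> int:
    """Raw random-read IOPS the spindle set can sustain in aggregate."""
    return spindles * per_disk

def effective_write_iops(raw: int, penalty: int = 2) -> int:
    """Apply a simple parity write penalty (real RAID-DP + NVRAM differs)."""
    return raw // penalty

raw = aggregate_iops(12)            # raw aggregate IOPS for 12 spindles
per_vm = raw // 8                   # fair share per guest, reads
writes = effective_write_iops(raw)  # pessimistic random-write budget
```

Run with these assumptions, 12 spindles give roughly 1,680 raw IOPS, or about 210 IOPS per guest if shared evenly. For a mix that includes Exchange and SQL, that is a plausible explanation for the queueing and latency, independent of the NFS-vs-iSCSI question.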
