k.jacko (Member) – Mar 23, 2012 at 6:09 am – #158190
I’d really appreciate expert advice on this, as my current support provider doesn’t seem to be coming up with the answers.
2 x HP ProLiant ML370 G6 servers (S1 and S2) – each has 4 physical NICs.
All hosts and guests run WS2008 R2.
S1 = 32GB RAM, 2 x E5520 CPUs, 1.8TB RAID 5 SAS 10k disks (BIOS and drivers up-to-date)
VDC (4GB static, 2 vCPUs) – domain controller
VPS (4GB static, 2 vCPUs) – print server
S2 = 68GB RAM, 2 x E5620 CPUs, 2.8TB RAID 5 SAS 10k disks (BIOS and drivers up-to-date)
VTS (25GB static, 4 vCPUs) – currently DYNAMIC disk – RDS (terminal server)
EXCHANGE (20GB static, 4 vCPUs) – currently DYNAMIC disk – Exchange 2007 SP3, RU6
VIRTUE (8GB static, 2 vCPUs) – currently DYNAMIC disk – DMS
VWSUS (4GB static, 2 vCPUs) – currently DYNAMIC disk – MS updates
As each ML370 host has 4 physical NICs, I’ve created a virtual network for each (VL2, VL3, VL4).
On S2, only VIRTUE and VWSUS share one of these (as they are not as important as VTS and EXCHANGE).
The problem is that VTS users (RDS) experience screen freezes and sluggishness while working. The freezes often happen when accessing Outlook 2010, and when I check EXCHANGE the disk I/O is very high.
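Before changing anything, it may be worth putting numbers on that “very high” disk I/O. A rough sketch using the built-in `typeperf` tool, run on the S2 host itself (counter names assume the default English counter set; the output filename is just an example):

```shell
:: Sample host-side disk latency every 5 seconds, 12 samples.
:: Sustained "Avg. Disk sec/Read" or "/Write" above roughly 0.020s
:: suggests the RAID 5 array itself is the bottleneck rather than
:: anything inside the guests.
typeperf "\LogicalDisk(_Total)\Avg. Disk sec/Read" ^
         "\LogicalDisk(_Total)\Avg. Disk sec/Write" ^
         "\LogicalDisk(_Total)\Current Disk Queue Length" ^
         -si 5 -sc 12 -o disk_latency.csv
```

If the host-level numbers are fine but the EXCHANGE guest’s own counters are bad, the problem is more likely the dynamic VHD or Exchange itself than the array.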
No one seems to know why.
Things I’m planning on doing:
Convert VTS and EXCHANGE from ‘dynamic’ to ‘fixed’ disks. The other two can stay as dynamic, unless suggested otherwise.
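For reference, on 2008 R2 there is no built-in Hyper-V PowerShell module, so the conversion is normally done from Hyper-V Manager (Edit Disk → Convert) with the VM shut down. The same operation can be scripted via WMI; the sketch below assumes the v1 `root\virtualization` namespace, and the paths are hypothetical examples, not your actual VHD locations:

```shell
# Run in PowerShell on the host, with the VM shut down first.
$ims = Get-WmiObject -Namespace "root\virtualization" `
                     -Class "Msvm_ImageManagementService"

# Type 2 = fixed VHD (per the Msvm_ImageManagementService docs).
# Conversion copies every block, so the target volume needs free
# space equal to the VHD's full maximum size.
$ims.ConvertVirtualHardDisk("D:\VMs\VTS\vts.vhd",
                            "D:\VMs\VTS\vts-fixed.vhd", 2)
```

The call returns a job object rather than blocking, so check the job’s state (or simply watch the destination file grow) before swapping the new VHD into the VM’s settings and deleting the old one.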
Currently, ALL the VL NICs have “Allow management operating system to share this network adapter” ticked. I’m not sure whether it should be this way or not.
The host OS uses its own physical NIC.
So, does this sound like a reasonable setup, or are there obvious flaws in the config of these two host machines AND the guests on them?
We have 2 physical and 1 virtual DC/GC, and replication across all 3 often fails; I’m not sure if this makes a difference to the rest of the above.
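The failing DC replication is worth chasing on its own, since broken AD replication can cause logon delays and Exchange oddities independently of any disk problem. The built-in tools give a quick overview (run on any DC):

```shell
:: Summarise replication status across all DCs and flag failures.
repadmin /replsummary

:: Show per-partner replication results for this DC.
repadmin /showrepl

:: Run the standard DC health checks, including DNS tests.
dcdiag /v
```

DNS misconfiguration (e.g. a DC pointing at itself first, or at an external DNS server) is a very common cause of intermittent replication failures, so the DNS sections of the `dcdiag` output are a good place to start.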
I know this is a long read, but I could really do with the help. It’s also a decent learning opportunity for me if I can integrate any advice you guys have to offer.
Many thanks in advance.