ranjb (Participant), Nov 20, 2017 at 4:32 am #167307
Hi fellow members,
I have a question about how you address a problem we have started to see since increasing our patching cadence. I am sure someone out there has run into something similar to what we are facing.
Historically we patched our most critical servers manually, which took a long time. Now, with zero-day attacks and ransomware becoming more common, a decision was made to apply WSUS patches to our whole server/client estate in a reasonable timeframe. We do this using WSUS and other third-party patching tools to install updates and reboot servers early on a Monday morning (between 2am and 5am).
Since introducing this we have run into some problems. The main one occurs when servers are rebooted automatically as part of patching: if an application relies on a database and the application server comes back up before the database server, the application doesn't always reconnect successfully. On top of that, services sometimes fail to start after a reboot even though they are set to start automatically, so the application fails for the end user and we have to start those services manually.
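One common mitigation for the "app came up before the database" failure is to make the application's connection logic retry with backoff instead of failing on the first attempt. A minimal sketch, assuming `connect` stands in for whatever your real driver call is (names here are illustrative, not your actual services):

```python
# Sketch: retry a database connection with exponential backoff so an app
# server that boots before its database eventually connects instead of
# failing once and staying broken. `connect` is a placeholder for the
# real driver call (e.g. a pyodbc/JDBC connect) - an assumption here.
import time

def connect_with_retry(connect, attempts=6, base_delay=1.0):
    """Call connect() until it succeeds or attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return connect()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; surface the real error
            time.sleep(base_delay * (2 ** attempt))  # wait 1s, 2s, 4s, ...
```

If the application itself can't be changed, the same idea can live in a wrapper script or service recovery action that retries the service start instead of the connection.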
We use VMware and use vApps to tie together applications, database servers, and other related back-end servers, but vApp start order only applies during planned maintenance of the vSphere farm; it doesn't help when reboots happen as part of Windows/application patching. I know there is a VM monitoring option within VMware, but I'm unsure whether it would help in this situation or whether it could cause more issues.
We need to find a way to control the order in which certain highly critical services start up after a patching reboot; otherwise we will have to go back to manual patching, which is something we really don't want to do.
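The ordering problem above can be sketched as a dependency graph: declare which services each service depends on, compute a safe start order, and have a post-patch task walk that order. The service names and dependency map below are hypothetical examples; on Windows the actual start step might be `sc start <name>` or `Start-Service` run by a scheduled task after reboot:

```python
# Sketch: derive a safe startup order from a service dependency map.
# Service names here are made-up examples, not a real estate.
from graphlib import TopologicalSorter  # Python 3.9+

def start_order(deps):
    """deps maps service -> set of services it must wait for.
    Returns a list in which every dependency precedes its dependents."""
    return list(TopologicalSorter(deps).static_order())

# Example: the app needs the cache, and both need the database.
deps = {
    "AppService": {"SQLServer", "CacheService"},
    "CacheService": {"SQLServer"},
    "SQLServer": set(),
}
```

Note that Windows service dependencies (`sc config <svc> depend= <other>`) solve the same problem natively for services on the *same* host; a script like this is mainly useful when the dependency spans machines.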
Does anyone else have this issue? If so, do you have any ideas about how we could address it?
So far our best option for identifying downed services is to put our NOC in the DMZ and adjust firewall rules so that server can detect services which are down on the internal network, or to invest in a cloud NOC offering and do the monitoring that way.
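Whichever NOC placement wins out, the core check is simple: probe each critical service's TCP port after the patch window and alert on anything unreachable. A minimal sketch (host/port values are examples, not your real estate):

```python
# Sketch: a minimal TCP reachability probe a NOC or monitoring box could
# run after the patch window to catch services that didn't come back.
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("app01.internal", 1433) to check a SQL Server listener
```

A bare port probe only proves the listener is up, not that the application is healthy, so teams often pair it with an application-level check (an HTTP health endpoint or a trivial query).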
Any suggestions would be most appreciated.