Some time after we upgraded our fleet to ESXi 6.5u1, the check began returning a critical status for the disk or disk bay of every installed drive (2 SAS HDDs) on two of our hosts:
```
./check_esxi_hardware-20161013.py -H hostX.fqdn -U user -P password
CRITICAL : Disk or Disk Bay 2 C1 P1I Bay 2: In Failed Array CRITICAL : Disk or Disk Bay 1 C1 P1I Bay 1: In Failed Array - Server: HP ProLiant DL380p Gen8 s/n: XXXXXXXXXXX System BIOS: P70 2015-07-01
```
At the same time it returns all good for a dozen other hosts with identical or similar hardware configuration:
```
./check_esxi_hardware-20161013.py -H hostY.fqdn -U user -P password
OK - Server: HP ProLiant DL380p Gen8 s/n: XXXXXXXXXXYY System BIOS: P70 2015-07-01
```
Several things confuse me:
- The critical status was not triggered immediately after the firmware and ESXi upgrades, but only after a later reboot.
- The check returns all good for a dozen other hosts, most of which have the same or a very similar configuration.
- Firmware versions don't seem to be the culprit at first glance: the two hosts with critical checks have slightly different firmware versions (controller and disks), and the same versions are found on other hosts whose checks do not come back critical.
Is there further information I can provide to help get to the root of this?
Best regards
1. Check the hardware status tab in the vSphere Client / Web UI and compare the output.
2. I don't see you using the "-V hp" switch, although you have a ProLiant server; try with this, too (see the example after this list).
3. Make sure you have updated your CIM Offline Bundle from HP.
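For reference, adding the vendor switch to the invocation from the report would look like this (hostX.fqdn and the credentials are the placeholders used above):

```
./check_esxi_hardware-20161013.py -H hostX.fqdn -U user -P password -V hp
```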
The question is rather why your ESXi server's CIM service reports these failures (a firmware problem? a CIM offline bundle problem? etc.). The plugin does nothing other than parse the CIM output and report any "non-ok" elements.
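To illustrate that last point, here is a minimal sketch of the kind of CIM query involved, assuming pywbem is available; the host, credentials, and the choice of CIM_NumericSensor as the queried class are illustrative placeholders, not the plugin's exact code:

```python
# Minimal sketch: ask an ESXi host's CIM service for sensor health and
# flag anything that is not OK. Host and credentials are placeholders.
import pywbem

conn = pywbem.WBEMConnection(
    'https://hostX.fqdn',
    ('user', 'password'),
    'root/cimv2',
    no_verification=True,  # ESXi hosts commonly use self-signed certificates
)

# In the DMTF CIM schema, HealthState 5 means "OK"; report everything else.
for inst in conn.EnumerateInstances('CIM_NumericSensor'):
    health = inst.get('HealthState')
    if health is not None and health != 5:
        print('NOT OK: %s (HealthState=%s)' % (inst.get('ElementName'), health))
```

If the CIM service itself reports the drives as "In Failed Array", any such client will relay that verdict; the data originates on the host, not in the plugin.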