Hey,
I am having problems running IPMI over the LAN interface on servers that have network bonding enabled.
Platform: CentOS release 5.3 (Final)
Kernel: 2.6.18-92.el5
64bit Dell PowerEdge 1950
Ethernet controller: Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet
I have bonded interfaces eth0 and eth1 in active-backup mode, with eth0 as the active interface. Below is the configuration as reported under /proc/net/bonding:
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth0
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 30
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:22:19:56:b9:cd

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:22:19:56:b9:cf
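For completeness, the bonding itself is set up the standard CentOS 5 way, roughly as sketched below (the mode, miimon and primary values match the /proc output above; the bond0 IP address and netmask are placeholders, not my real values):

# /etc/modprobe.conf
alias bond0 bonding
options bond0 mode=active-backup miimon=30 primary=eth0

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
# placeholder address/netmask
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0
# (ifcfg-eth1 is identical apart from DEVICE=eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none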
My IPMI device, as reported by dmidecode, is as follows:
IPMI Device Information
Interface Type: KCS (Keyboard Control Style)
Specification Version: 2.0
I2C Slave Address: 0x10
NV Storage Device: Not Present
Base Address: 0x0000000000000CA8 (I/O)
Register Spacing: 32-bit Boundaries
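For what it's worth, the BMC's LAN settings (IP, MAC, access mode) can be dumped locally with ipmitool along these lines (LAN channel 1 is an assumption; it may differ on the PE1950):

# dump the BMC LAN configuration for channel 1
ipmitool lan print 1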
I have used both OpenIPMI and FreeIPMI to control the chassis via the IPMI card, but on servers that have bonding enabled the command times out. Below is the full run of the command with debug info:
ipmi_lan_send_cmd:opened=[0], open=[4482848]
IPMI LAN host 70.87.28.115 port 623
Sending IPMI/RMCP presence ping packet
ipmi_lan_send_cmd:opened=[1], open=[4482848]
No response from remote controller
Get Auth Capabilities command failed
ipmi_lan_send_cmd:opened=[1], open=[4482848]
No response from remote controller
Get Auth Capabilities command failed
Error: Unable to establish LAN session
Failed to open LAN interface
Unable to get Chassis Power Status
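For reference, the trace above came from an invocation along these lines (ipmitool syntax; the username and password are placeholders):

ipmitool -v -I lan -H 70.87.28.115 -U root -P xxxx chassis power status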
On the other hand, I configured IPMI on a box with the same specs as above but without bonding, and there IPMI works perfectly.
Has anyone faced this problem with IPMI + bonding?
I would be thankful if someone could help me circumvent this issue.
Muhammed Sameer