How to Check QLogic HBA Firmware Version in Windows


The Dell blade server products are built around their M1000e enclosure, which can hold their server blades, an embedded EqualLogic iSCSI storage area network and I/O modules including Ethernet, Fibre Channel and InfiniBand switches.



M1000e enclosure with selection of G12 server blades

Enclosure


The M1000e fits in a 19-inch rack and is 10 rack units high (44 cm), 17.6" (44.7 cm) wide and 29.7" (75.4 cm) deep. The empty blade enclosure weighs 44.5 kg while a fully loaded system can weigh up to 178.8 kg.[1]

On the front the servers are inserted, while at the back the power supplies, fans and I/O modules are inserted together with the management module(s) (CMC or chassis management controller) and the KVM switch. A blade enclosure offers centralized management for the servers and I/O systems of the blade system. Most servers used in the blade system offer an iDRAC card and one can connect to each server's iDRAC via the M1000e management system. It is also possible to connect a virtual KVM switch to have access to the main console of each installed server.

In June 2013 Dell introduced the PowerEdge VRTX, which is a smaller blade system that shares modules with the M1000e. The blade servers, although following the traditional naming strategy, e.g. M520, M620 (only blades supported), are not interchangeable between the VRTX and the M1000e. The blades differ in firmware and mezzanine connectors.[citation needed]

In 2018 Dell introduced the Dell PE MX7000, a new MX enclosure model and the next generation of Dell enclosures.

The M1000e enclosure has a front side and a back side, and thus all communication between the inserted blades and modules goes via the midplane, which has the same function as a backplane but has connectors at both sides, where the front side is dedicated to server blades and the back to I/O modules.

Midplane




Indication on the back of the chassis showing which midplane was installed in the factory

The midplane is completely passive. The server blades are inserted in the front side of the enclosure while all other components can be reached via the back.[2]


The original midplane 1.0 capabilities are: Fabric A – Ethernet 1Gb; Fabrics B&C – Ethernet 1Gb, 10Gb, 40Gb – Fibre Channel 4Gb, 8Gb – InfiniBand DDR, QDR, FDR10. The enhanced midplane 1.1 capabilities are: Fabric A – Ethernet 1Gb, 10Gb; Fabrics B&C – Ethernet 1Gb, 10Gb, 40Gb – Fibre Channel 4Gb, 8Gb, 16Gb – InfiniBand DDR, QDR, FDR10, FDR. The original M1000e enclosures came with midplane version 1.0, but that midplane did not support the 10GBASE-KR standard on fabric A (the 10GBASE-KR standard is supported on fabrics B&C). To have 10Gb Ethernet on fabric A, or 16Gb Fibre Channel or InfiniBand FDR (and faster) on fabrics B&C, midplane 1.1 is required. Current versions of the enclosure come with midplane 1.1 and it is possible to upgrade the midplane. The factory-installed version can be seen via the markings on the back side of the enclosure, just above the six I/O slots: if an "arrow down" is visible above the six I/O slots, the 1.0 midplane was installed in the factory; if there are 3 or 4 horizontal bars, midplane 1.1 was installed. As it is possible to upgrade the midplane, the outside markings are not decisive: the actually installed version of the midplane is visible via the CMC management interface.[3]
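The installed midplane revision can also be read out remotely instead of checking the physical markings. Below is a minimal Python sketch, assuming the Dell remote racadm utility is installed on the administration workstation and that the CMC firmware reports the midplane/chassis revision in its getsysinfo output (exact field names vary per firmware level); the IP address and credentials are placeholder values:

    # Query the CMC for chassis summary information and filter for the
    # midplane / hardware revision lines (illustrative sketch only).
    import subprocess

    CMC_IP = "192.168.0.120"   # placeholder out-of-band CMC address
    CMC_USER = "root"          # placeholder credentials
    CMC_PASSWORD = "changeme"

    def cmc_getsysinfo() -> str:
        """Run 'racadm getsysinfo' against the CMC and return its raw output."""
        result = subprocess.run(
            ["racadm", "-r", CMC_IP, "-u", CMC_USER, "-p", CMC_PASSWORD, "getsysinfo"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        for line in cmc_getsysinfo().splitlines():
            if "midplane" in line.lower() or "revision" in line.lower():
                print(line)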

Front: Blade servers

Each M1000e enclosure can hold up to 32 quarter-height blades, 16 half-height blades or 8 full-height blades, or combinations (e.g. 1 full-height + 14 half-height). The slots are numbered 1-16, where 1-8 are the upper blades and 9-16 are directly below 1-8. When using full-height blades one uses slot n (where n=1 to 8) and slot n+8. Integrated at the bottom of the front side is a connection option for 2 x USB, meant for a mouse and keyboard, as well as a standard VGA monitor connection (15-pin). Next to this is a power button with power indication.

Next to this is a small LCD screen with navigation buttons which allows one to get system information without the need to access the CMC/management system of the enclosure. Basic status and configuration information is available via this display. To operate the display one can pull it towards oneself and tilt it for optimal viewing and access to the navigation buttons. For quick status checks, an indicator light sits alongside the LCD display and is always visible: a blue LED indicates normal operation and an orange LED indicates a problem of some kind.

This LCD display can also be used for the initial configuration wizard in a newly delivered (unconfigured) system, allowing the operator to configure the CMC IP address.
[2]


Back: power, management and I/O

All other parts and modules are placed at the rear of the M1000e. The rear side is divided in 3 sections: at the top one inserts the three management modules: one or two CMC modules and an optional iKVM module. At the bottom of the enclosure there are 6 bays for power supply units. A standard M1000e operates with three PSUs. The area in between offers 3 x 3 bays for cooling fans (left – center – right) and up to 6 I/O modules: three modules to the left of the middle fans and three to the right. The I/O modules on the left are the I/O modules numbered A1, B1 and C1, while the right-hand side has places for A2, B2 and C2. The A fabric I/O modules connect to the on-board I/O controllers, which in most cases will be a dual 1Gb or 10Gb Ethernet NIC. When the blade has a dual-port on-board 1Gb NIC, the first NIC will connect to the I/O module in fabric A1 and the second NIC will connect to fabric A2 (and the blade slot corresponds with the internal Ethernet interface: e.g. the on-board NIC in slot 5 will connect to interface 5 of fabric A1 and the second on-board NIC goes to interface 5 of fabric A2).

I/O modules in fabric B1/B2 will connect to the (optional) Mezzanine card B or 2 in the server and fabric C to Mezzanine C or 3.
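This fixed wiring between adapter ports and I/O bays can be expressed in a few lines. The sketch below is an illustration only (not a Dell tool), assuming the dual-port LOM and B/C mezzanine layout for half-height blades described above:

    # Map a half-height blade's network adapters to the I/O module bay and the
    # internal switch port they reach via the midplane (illustrative only).
    def iom_port(blade_slot: int, adapter: str, port: int) -> tuple[str, int]:
        """adapter: 'LOM' (fabric A), 'MEZZ_B' (fabric B) or 'MEZZ_C' (fabric C);
        port: 1 or 2 on a dual-port adapter. Returns (I/O bay, internal port)."""
        fabric = {"LOM": "A", "MEZZ_B": "B", "MEZZ_C": "C"}[adapter]
        side = 1 if port == 1 else 2          # first port -> x1 bay, second -> x2 bay
        return f"{fabric}{side}", blade_slot  # internal port number equals blade slot

    # Example: the on-board NICs of the blade in slot 5
    print(iom_port(5, "LOM", 1))     # ('A1', 5)
    print(iom_port(5, "LOM", 2))     # ('A2', 5)
    print(iom_port(5, "MEZZ_B", 1))  # ('B1', 5)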

All modules can be inserted or removed on a running enclosure (hot swapping).[2]

Available server blades

An M1000e holds up to 32 quarter-height blades, 16 half-height blades or 8 full-height blades, or a mix of them (e.g. 2 full-height + 12 half-height). The quarter-height blades require a full-size sleeve to install. The list below covers the currently available 11G blades and the latest generation 12 models. There are also older blades like the M605, M805 and M905 series.

PowerEdge M420

Released in 2012,[4] the PE M420 is a "quarter-size" blade: where most servers are 'half-size', allowing 16 blades per M1000e enclosure, with the new M420 up to 32 blade servers can be installed in a single chassis. Implementing the M420 has some consequences for the system: many people have reserved 16 IP addresses per chassis to support the "automatic IP address assignment" for the iDRAC management card in a blade, but as it is now possible to run 32 blades per chassis people might need to change their management IP assignment for the iDRAC. To support the M420 server one needs to run CMC firmware 4.1 or later,[5] and one needs a full-size "sleeve" that holds up to four M420 blades. It also has consequences for the "normal" I/O NIC assignment: most (half-size) blades have two LOMs (LAN On Motherboard): one connecting to the switch in the A1 fabric, the other to the A2 fabric. And the same applies to the Mezzanine cards B and C. All available I/O modules (except for the PCM6348, MXL and MIOA) have 16 internal ports: one for each half-size blade. As an M420 has two 10 Gb LOM NICs, a fully loaded chassis would require 2 × 32 internal switch ports for LOM and the same for Mezzanine. An M420 server only supports a single Mezzanine card (Mezzanine B OR Mezzanine C depending on its location) whereas all half-height and full-height systems support two Mezzanine cards. To support all on-board NICs one would need to deploy a 32-slot Ethernet switch such as the MXL or Force10 I/O Aggregator. But for the Mezzanine card it is different: the connections from Mezzanine B on the PE M420 are "load-balanced" between the B and C fabrics of the M1000e: the Mezzanine card in "slot A" (top slot in the sleeve) connects to fabric C while "slot B" (the second slot from the top) connects to fabric B, and that is then repeated for the C and D slots in the sleeve.[4]
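The sleeve-position to fabric mapping of the M420 mezzanine card described above can be summarised as follows (a purely illustrative snippet following the alternation given in the text):

    # M420 sleeve slots A-D and the M1000e fabric their single mezzanine card
    # reaches, per the load-balancing scheme described above (illustrative only).
    M420_MEZZANINE_FABRIC = {
        "A": "C",  # top slot in the sleeve -> fabric C
        "B": "B",  # second slot from the top -> fabric B
        "C": "C",  # the pattern repeats for the lower two slots
        "D": "B",
    }

    for sleeve_slot, fabric in M420_MEZZANINE_FABRIC.items():
        print(f"M420 in sleeve slot {sleeve_slot}: mezzanine connects to fabric {fabric}")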

PowerEdge M520

A half-height server with up to 2x 8-core Intel Xeon E5-2400 CPUs, running the Intel C600 chipset and offering up to 384 GB RAM via 12 DIMM slots. Two on-blade disks (2.5-inch PCIe SSD, SATA HDD or SAS HDD) are installable for local storage, with a choice of Intel or Broadcom LOM + 2 Mezzanine slots for I/O.[6]

The M520 can also be used in the PowerEdge VRTX system.

PowerEdge M600

A half-height server with a quad-core Intel Xeon and 8 DIMM slots for up to 64 GB RAM.

PowerEdge M610

A half-height server with a quad-core or six-core Intel Xeon 5500 or 5600 CPU and the Intel 5520 chipset. RAM options via 12 DIMM slots for up to 192 GB DDR3 RAM. A maximum of two on-blade hot-pluggable 2.5-inch hard disks or SSDs and a choice of built-in NICs for Ethernet or converged network adapter (CNA), Fibre Channel or InfiniBand. The server has the Intel 5520 chipset and a Matrox G200 video card.[7]

PowerEdge M610x

A full-height blade server that has the same capabilities as the half-height M610 but offers an expansion module containing x16 PCI Express (PCIe) 2.0 expansion slots that can support up to two standard full-length/full-height PCIe cards.[8]

PowerEdge M620

A half-height server with up to 2x 12-core Intel Xeon E5-2600 or Xeon E5-2600 v2 CPUs, running the Intel C600 chipset and offering up to 768 GB RAM via 24 DIMM slots. Two on-blade disks (2.5" PCIe SSD, SATA HDD or SAS HDD) are installable for local storage with a range of RAID controller options. Two external and one internal USB port and two SD card slots. The blades can come pre-installed with Windows 2008 R2 SP1, Windows 2012 R2, SuSE Linux Enterprise or RHEL. It can also be ordered with Citrix XenServer or VMware vSphere ESXi, or using Hyper-V which comes with W2K8 R2.[9]

According to the vendor all Generation 12 servers are optimized to run as a virtualisation platform.[10]

Out-of-band management is done via iDRAC 7 via the CMC.

PowerEdge M630

A half-height server with up to 2x 22-core Intel Xeon E5-2600 v3/v4 CPUs, running the Intel C610 chipset and offering up to 768 GB RAM via 24 DIMM slots, or 640 GB RAM via 20 DIMM slots when using 145W CPUs. Two on-blade disks (2.5" PCIe SSD, SATA HDD or SAS HDD) are installable for local storage, with a selection of Intel or Broadcom LOM + 2 Mezzanine slots for I/O.[6]

The M630 can also be used in the PowerEdge VRTX system. Amulet HotKey offers a modified M630 server that can be fitted with a GPU or Teradici PCoIP Mezzanine module.

PowerEdge M640

A half-height server with up to 2x 28-core Xeon Scalable CPUs. Supported in both the M1000e and PowerEdge VRTX chassis. The server can support up to 16 DDR4 RDIMM memory slots for up to 1024 GB RAM and 2 drive bays supporting SAS/SATA or NVMe drives (with an adapter). The server uses iDRAC 9.

PowerEdge M710

A full-height server with a quad-core or six-core Intel Xeon 5500 or 5600 CPU and up to 192 GB RAM. A maximum of four on-blade hot-pluggable 2.5″ hard disks or SSDs and a choice of built-in NICs for Ethernet or converged network adapter, Fibre Channel or InfiniBand. The video card is a Matrox G200. The server has the Intel 5520 chipset.[11]

PowerEdge M710HD

A two-socket version of the M710, but now in a half-height blade. The CPU can be two quad-core or six-core Xeon 5500 or 5600 with the Intel 5520 chipset. Via 18 DIMM slots up to 288 GB DDR3 RAM can be put on this blade, with the standard choice of on-board Ethernet NICs based on Broadcom or Intel and one or two Mezzanine cards for Ethernet, Fibre Channel or InfiniBand.[12]


PowerEdge M820

A full-height server with 4x 8-core Intel Xeon E5-4600 CPUs, running the Intel C600 chipset and offering up to 1.5 TB RAM via 48 DIMM slots. Up to four on-blade 2.5″ SAS HDDs/SSDs or two PCIe flash SSDs are installable for local storage. The M820 offers a choice of three different on-board converged Ethernet adapters for 10 Gbit/s Fibre Channel over Ethernet (FCoE) from Broadcom, Brocade or QLogic and up to two additional Mezzanine cards for Ethernet, Fibre Channel or InfiniBand I/O.[13]

PowerEdge M910

A full-height server of the 11th generation with up to 4x 10-core Intel Xeon E7 CPUs or 4 x 8-core Xeon 7500 series or 2 x 8-core Xeon 6500 series, 512 GB or 1 TB DDR3 RAM and two hot-swappable 2.5″ hard drives (spinning or SSD). It uses the Intel E7510 chipset. A choice of built-in NICs for Ethernet, Fibre Channel or InfiniBand.[14]

PowerEdge M915

Also a full-height 11G server, using the AMD Opteron 6100 or 6200 series CPU with the AMD SR5670 and SP5100 chipset. Memory via 32 DDR3 DIMM slots offering up to 512 GB RAM. On-board up to two 2.5-inch HDDs or SSDs. The blade comes with a choice of on-board NICs and up to two mezzanine cards for dual-port 10Gb Ethernet, dual-port FCoE, dual-port 8Gb fibre channel or dual-port Mellanox InfiniBand. Video is via the on-board Matrox G200eW with 8 MB memory.[15]

Mezzanine cards


Each server comes with Ethernet NICs on the motherboard. These 'on board' NICs connect to a switch or pass-through module inserted in the A1 or the A2 bay at the back of the enclosure. To allow more NICs or non-Ethernet I/O each blade[16] has two so-called mezzanine slots: slot B connecting to the switches/modules in bays B1 and B2 and slot C connecting to C1/C2. An M1000e chassis holds up to 6 switches or pass-through modules. For redundancy one would normally install switches in pairs: the switch in bay A2 is normally the same as the A1 switch and connects the blades' on-motherboard NICs to the data or storage network.


(Converged) Ethernet Mezzanine cards


Standard blade servers have one or more built-in NICs that connect to the 'default' switch slot (the A fabric) in the enclosure (often blade servers also offer one or more external NIC interfaces at the front of the blade), but if one wants the server to have more physical (internal) interfaces or to connect to different switch blades in the enclosure one can place extra mezzanine cards on the blade. The same applies to adding a Fibre Channel host bus adapter or a Fibre Channel over Ethernet (FCoE) converged network adapter interface. Dell offers the following (converged) Ethernet mezzanine cards for their PowerEdge blades:[17]

  • Broadcom 57712 dual-port CNA
  • Brocade BR1741M-k CNA
  • Mellanox ConnectX-2 dual 10Gb card
  • Intel dual-port 10Gb Ethernet
  • Intel quad-port Gigabit Ethernet
  • Intel quad-port Gigabit Ethernet with virtualisation technology and iSCSI acceleration features
  • Broadcom NetXtreme II 5709 dual- and quad-port Gigabit Ethernet (dual port with iSCSI offloading features)
  • Broadcom NetXtreme II 5711 dual-port 10Gb Ethernet with iSCSI offloading features

Non-Ethernet cards


Apart from the above, the following mezzanine cards are available:[17]

  • Emulex LightPulse LPe1105-M4 host adapter
  • Mellanox ConnectX IB MDI dual-port InfiniBand mezzanine card
  • QLogic SANblade HBA
  • SANsurfer Pro

Blade storage

In most setups the server blades will use external storage (NAS using iSCSI, FCoE or Fibre Channel) in combination with local server storage on each blade via hard disk drives or SSDs on the blades (or even just an SD card with a boot OS like VMware ESX[18]). It is also possible to use completely diskless blades that boot via PXE or external storage. But regardless of the local and boot storage: the majority of the data used by blades will be stored on a SAN or NAS external to the blade enclosure.

EqualLogic Blade-SAN


Dell offers the EqualLogic PS M4110 models of iSCSI storage arrays[19] that are physically installed in the M1000e chassis: this SAN takes the same space in the enclosure as two half-height blades next to each other. Apart from the form factor (the physical size, getting power from the enclosure system etc.) it is a "normal" iSCSI SAN: the blades in the (same) chassis communicate via Ethernet and the system does require an accepted Ethernet blade switch in the back (or a pass-through module + rack switch): there is no option for direct communication between the server blades in the chassis and the M4110: it simply allows a user to pack a complete mini-datacentre in a single enclosure (19" rack, 10 RU).

Depending on the model and the disks used, the PS M4110 offers a system (raw) storage capacity between 4.5 TB (M4110XV with 14 × 146 GB, 15K SAS HDD) and 14 TB (M4110E with 14 x 1 TB, 7.2K SAS HDD). The M4110XS offers 7.4 TB using 9 HDDs and 5 SSDs.[20]

Each M4110 comes with one or two controllers and two 10-gigabit Ethernet interfaces for iSCSI. The management of the SAN goes via the chassis management interface (CMC). Because the iSCSI uses 10Gb interfaces the SAN should be used in combination with one of the 10G blade switches: the PCM8024-k or the Force10 MXL switch.[20] The enclosure's midplane hardware version should be at least version 1.1 to support 10Gb KR connectivity.[21][22]

PowerConnect switches



Drawing of M1000e enclosure with 2 x FTOS MXL, 2 x M8024-k and 2x FibreChannel 5424

At the rear side of the enclosure one will find the power supplies, fan trays, one or two chassis management modules (the CMCs) and a virtual KVM switch. The rear also offers 6 bays for I/O modules numbered in three pairs: A1/A2, B1/B2 and C1/C2. The A bays connect the on-motherboard NICs to external systems (and/or allow communication between the different blades within one enclosure).

The Dell PowerConnect switches are modular switches for use in the Dell blade server enclosure M1000e. The M6220, M6348, M8024 and M8024-k are all switches in the same family, based on the same fabrics (Broadcom) and running the same firmware version.[23]

All the M-series switches are OSI layer 3 capable: so one can also say that these devices are layer 2 Ethernet switches with built-in router or layer 3 functionality.

The most important difference between the M-series switches and the Dell PowerConnect classic switches (e.g. the 8024 model) is the fact that most interfaces are internal interfaces that connect to the blade servers via the midplane of the enclosure. Also, the M-series can't run outside the enclosure: it will only work when inserted in the enclosure.

PowerConnect M6220


This is a 20-port switch: 16 internal and 4 external Gigabit Ethernet interfaces and the option to extend it with up to four 10Gb external interfaces for uplinks, or two 10Gb uplinks and two stacking ports to stack several PCM6220s into one big logical switch.

PowerConnect M6348


This is a 48-port switch: 32 internal 1Gb interfaces (two per server blade) and 16 external copper (RJ45) gigabit interfaces. There are also two SFP+ slots for 10Gb uplinks and two CX4 slots that can either be used for two extra 10Gb uplinks or to stack several M6348 blades in one logical switch. The M6348 offers four 1Gb interfaces to each blade, which means that one can only use the switch to full capacity when using blades that offer four internal NICs on the A fabric (= the internal/on-motherboard NIC). The M6348 can be stacked with other M6348 switches but also with the PCT7000 series rack switches.

PowerConnect M8024 and M8024-k

The M8024 and M8024-k offer 16 internal autosensing 1 or 10 Gb interfaces and up to eight external ports via one or two I/O modules, each of which can offer: 4 × 10Gb SFP+ slots, 3 x CX4 10Gb (only) copper or 2 x 10GBaseT 1/10 Gb RJ-45 interfaces. The PCM8024 has been 'end of sales' since November 2011 and was replaced with the PCM8024-k.[24]

Since firmware update 4.2 the PCM8024-k partially supports FCoE via FIP (FCoE Initialisation Protocol) and thus converged network adapters, but unlike the PCM8428-k it has no native Fibre Channel interfaces.

Also since firmware 4.2 the PCM8024-k can be stacked using external 10Gb Ethernet interfaces by assigning them as stacking ports. Although this new stacking option was also introduced in the same firmware release for the PCT8024 and PCT8024-f, one can't stack blade (PCM) and rack (PCT) versions in a single stack. The new features are not available on the 'original' PCM8024. Firmware 4.2.x for the PCM8024 only corrected bugs: no new features or new functionality are added to 'end of sale' models.[25][26]

To use the PCM8024-k switches one will need the backplane that supports the KR or IEEE 802.3ap standards.[21][22]

PowerConnect capabilities

All PowerConnect M-series ("PCM") switches are multi-layer switches and thus offer both layer 2 (Ethernet) options as well as layer 3 or IP routing options.
Depending on the model, the switches offer internally 1Gbit/s or 10Gbit/s interfaces towards the blades in the chassis. The PowerConnect M-series switches with "-k" in the model name offer 10Gb internal connections using the 10GBASE-KR standard. The external interfaces are mainly meant to be used as uplinks or stacking interfaces, but can also be used to connect non-blade servers to the network.
On the link level PCM switches support link aggregation: both static LAGs as well as LACP. As all PowerConnect switches, the switches run RSTP as Spanning Tree Protocol, but it is also possible to run MSTP or Multiple Spanning Tree. The internal ports towards the blades are by default set as edge or "portfast" ports. Another feature is link dependency. One can, for example, configure the switch so that all internal ports to the blades are shut down when the switch gets isolated because it loses its uplink to the rest of the network.
All PCM switches can be configured as pure layer 2 switches or they can be configured to do all routing: both routing between the configured VLANs and external routing. Besides static routes the switches also support OSPF and RIP routing. When using the switch as a routing switch one needs to configure VLAN interfaces and assign an IP address to that VLAN interface: it is not possible to assign an IP address directly to a physical interface.[23]

Stacking


All PowerConnect blade switches, except for the original PC-M8024, can be stacked. To stack the new PC-M8024-k switch the switches need to run firmware version 4.2 or higher.[27] In principle one can only stack switches of the same family, thus stacking multiple PCM6220s together or several PCM8024-k switches. The only exception is the capability to stack the blade PCM6348 together with the rack switch PCT7024 or PCT7048. Stacks can contain multiple switches within one M1000e chassis, but one can also stack switches from different chassis to form one logical switch.[28]

Force10 switches



MXL 10/40 Gb switch


At Dell Interop 2012 in Las Vegas, Dell announced the first FTOS-based blade switch: the Force10 MXL 10/40Gbps blade switch, and later a 10/40Gbit/s concentrator. The FTOS MXL 40 Gb was introduced on 19 July 2012.[29]

The MXL provides 32 internal 10Gbit/s links (two ports per blade in the chassis), two QSFP+ 40Gbit/s ports and two empty expansion slots allowing a maximum of four additional QSFP+ 40Gbit/s ports or 8 10Gbit/s ports. Each QSFP+ port can be used for a 40Gbit/s switch-to-switch (stack) uplink or, with a break-out cable, 4 x 10Gbit/s links. Dell offers direct attach cables with on one side the QSFP+ interface and 4 x SFP+ on the other end, or a QSFP+ transceiver on one end and 4 fibre-optic pairs to be connected to SFP+ transceivers on the other side. Up to six MXL blade switches can be stacked into one logical switch.

Besides the above 2×40 QSFP module the MXL also supports a 4x10Gb SFP+ and a 4x10GBaseT module. All Ethernet extension modules for the MXL can also be used for the rack-based N4000 series (fka PowerConnect 8100).


The MXL switches also support Fibre Channel over Ethernet so that server blades with a converged network adapter Mezzanine card can be used for both data and storage using a Fibre Channel storage system. The MXL 10/40 Gbit/s blade switch will run FTOS[30] and because of this will be the first M1000e I/O product without a Web graphical user interface. The MXL can either forward the FCoE traffic to an upstream switch or, using a 4-port 8Gb FC module, perform the FCF function, connecting the MXL to a full FC switch or directly to a FC SAN.


I/O Aggregator


In October 2012 Dell also launched the I/O Aggregator for the M1000e chassis running on FTOS. The I/O Aggregator offers 32 internal 10Gb ports towards the blades, two standard 40 Gbit/s QSFP+ uplinks and two extension slots. Depending on one's requirements one can get extension modules for 40Gb QSFP+ ports, 10Gb SFP+ or 1-10 GBaseT copper interfaces. One can assign up to 16 x 10Gb uplinks to one's distribution or core layer. The I/O Aggregator supports FCoE and DCB (Data center bridging) features.[31]

Cisco switches


Dell also offered some Cisco Catalyst switches for this blade enclosure. Cisco offers a range of switches for blade systems from the main vendors. Besides the Dell M1000e enclosure, Cisco offers similar switches also for HP, FSC and IBM blade enclosures.[32]

For the Dell M1000e there are two model ranges for Ethernet switching (note: Cisco also offers the Catalyst 3030, but this switch is for the old Generation 8 or Gen 9 blade system, not for the current M1000e enclosure[33]).

As of 2017 the only available Cisco I/O device for the M1000e chassis is the Nexus FEX.[34]

Catalyst 3032

The Catalyst 3032: a layer 2 switch with 16 internal and 4 external 1Gb Ethernet interfaces, with an option to extend to 8 external 1Gb interfaces. The built-in external ports are 10/100/1000BaseT copper interfaces with an RJ45 connector and up to 4 additional 1Gb ports can be added using the extension module slots, each offering two SFP slots for fiber-optic or Twinax 1Gb links. The Catalyst 3032 doesn't offer stacking (virtual blade switching).[35]

Catalyst 3130

The 3130 series switches offer 16 internal 1Gb interfaces towards the blade servers. For the uplink or external connections there are two options: the 3130G offers four built-in 10/100/1000BaseT RJ-45 slots and two module bays allowing for up to 4 SFP 1Gb slots using SFP transceivers or SFP Twinax cables.[36] The 3130X also offers the four external 10/100/1000BaseT connections and two modules for X2 10Gb uplinks.[37]

Both 3130 switches offer 'stacking' or 'virtual blade switch'. One can stack up to 8 Catalyst 3130 switches to behave like one single switch. This can simplify the management of the switches and simplify the (spanning tree) topology as the combined switches are simply one switch for spanning tree considerations. It also allows the network manager to aggregate uplinks from physically different switch units into one logical link.[35]

The 3130 switches come standard with IP Base IOS offering all layer 2 and the basic layer 3 or routing capabilities. Users can upgrade this base license to IP Services or IP Advanced Services, adding additional routing capabilities such as the EIGRP, OSPF or BGP4 routing protocols, IPv6 routing and hardware-based unicast and multicast routing. These advanced features are built into the IOS on the switch, but a user has to upgrade to the IP (Advanced) Services license to unlock these options.[38]

Nexus Fabric Extender


Since January 2013 Cisco and Dell offer a Nexus Fabric Extender for the M1000e chassis: the Nexus B22Dell. Such FEXs were already available for HP and Fujitsu blade systems, and now there is also a FEX for the M1000e blade system. The release of the B22Dell came approx. 2.5 years after the initially planned and announced date: a disagreement between Dell and Cisco resulted in Cisco stopping the development of the FEX for the M1000e in 2010.[39] Customers manage a FEX from a core Nexus 5500 series switch.[40]


Other I/O cards


An M1000e enclosure can hold up to 6 switches or other I/O cards. Besides the Ethernet switches such as the PowerConnect M-series, Force10 MXL and Cisco Catalyst 3100 switches mentioned above, the following I/O modules are available or usable in a Dell M1000e enclosure:[1][41]

  • Ethernet pass-through modules bring internal server interfaces to an external interface at the back of the enclosure. There are pass-through modules for 1G, 10G-XAUI[42] and 10G 10GBaseXR.[43] All pass-through modules offer 16 internal interfaces linked to 16 external ports on the module.
  • Emulex 4 or 8 Gb Fibre Channel Passthrough Module[1]
  • Brocade 5424 8Gb FC switch for Fibre Channel based storage area networks
  • Brocade M6505 16Gb FC switch[44]
  • Dell 4 or 8Gb Fibre Channel NPIV Port aggregator
  • Mellanox 2401G and 4001F/Q InfiniBand Dual Data Rate or Quad Data Rate modules for high-performance computing
  • Infiniscale 4: 16-port 40Gb InfiniBand switch[45]
  • Cisco M7000e InfiniBand switch with 8 external DDR ports
  • the PowerConnect 8428-k switch below, with 4 "native" 8Gb Fibre Channel interfaces:

PCM 8428-k Brocade FCoE

Although the PCM8024-k and MXL switch do support Fibre Channel over Ethernet, they are not 'native' FCoE switches: they have no Fibre Channel interfaces. These switches would need to be connected to a "native" FCoE switch such as the PowerConnect B-series 8000e (same as a Brocade 8000 switch) or a Cisco Nexus 5000 series switch with Fibre Channel interfaces (and licenses). The PCM8428 is the only full Fibre Channel over Ethernet capable switch for the M1000e enclosure that offers 16 x enhanced Ethernet 10Gb internal interfaces, 8 x 10Gb (enhanced) Ethernet external ports and also up to four 8Gb Fibre Channel interfaces to connect directly to an FC SAN controller or central Fibre Channel switch.
The switch runs Brocade FC firmware for the fabric and fibre channel switch and Foundry OS for the Ethernet switch configuration.[46] In capabilities it is very comparable to the PowerConnect-B8000; only the form factor and the number of Ethernet and FC interfaces are different.[1][47]


PowerConnect M5424 / Brocade 5424


This is a Brocade full Fibre Channel switch. It uses either the B or C fabric to connect the Fibre Channel mezzanine card in the blades to the FC-based storage infrastructure. The M5424 offers 16 internal ports connecting to the FC Mezzanine cards in the blade servers and 8 external ports. From the factory only the first two external ports (17 and 18) are licensed: additional connections require extra Dynamic Ports On Demand (DPOD) licenses. The switch runs on a PowerPC 440EPX processor at 667 MHz and 512 MB DDR2 RAM system memory. Further it has 4 MB boot flash and 512 MB compact flash memory on board.[48]

Brocade M6505


Similar capabilities as above, but offering 16 x 16Gb FC towards the server mezzanine and 8 external ports. The standard license offers 12 connections, which can be increased by 12 to support all 24 ports. Auto-sensing speed 2, 4, 8 and 16Gb. Total aggregate bandwidth 384 GB.[49]

Brocade 4424


As the 5424, the 4424 is also a Brocade SAN I/O offering 16 internal and 8 external ports. The switch supports speeds up to 4 Gbit/s. When delivered, 12 of the ports are licensed for operation and with additional licenses one can enable all 24 ports. The 4424 runs on a PowerPC 440GP processor at 333 MHz with 256 MB SDRAM system memory, 4 MB boot flash and 256 MB compact flash memory.[50]

InfiniBand

There are several modules available offering InfiniBand connectivity on the M1000e chassis. InfiniBand offers high-bandwidth/low-latency intra-computer connectivity such as required in academic HPC clusters, large enterprise datacenters and cloud applications.[51]

There is the SFS M7000e InfiniBand switch from Cisco. The Cisco SFS offers 16 internal 'autosensing' interfaces for single (10 Gbit/s) (SDR) or double (20 Gbit/s) data rate (DDR) and eight DDR external/uplink ports. The total switching capacity is 960 Gbit/s.[52]

Other options are the Mellanox SwitchX M4001F and M4001Q[53] and the Mellanox M2401G 20Gb InfiniBand switch for the M1000e enclosure.[54]

The M4001 switches offer either 40 Gbit/s (M4001Q) or 56 Gbit/s (M4001F) connectivity and have 16 external interfaces using QSFP ports and 16 internal connections to the InfiniBand Mezzanine card on the blades. As with all other non-Ethernet based switches it can only be installed in the B or C fabric of the M1000e enclosure, as the A fabric connects to the "on motherboard" NICs of the blades and they only come as Ethernet NICs or converged Ethernet.

The 2401G offers 24 ports: 16 internal and 8 external ports. Unlike the M4001 switches, where the external ports use QSFP ports for fibre transceivers, the 2401 has CX4 copper cable interfaces. The switching capacity of the M2401 is 960 Gbit/s.[54]

The 4001, with 16 internal and 16 external ports at either 40 or 56 Gbit/s, offers a switching capacity of 2.56 Tbit/s.

Passthrough modules


In some setups one don’t desire or demand switching capabilities in one’s enclosure. For example: if only a few of the blade-servers do apply fibre-channel storage one don’t need a fully manageble FC switch: i but want to be able to connect the ‘internal’ FC interface of the blade straight to one’s (existing) FC infrastructure. A pass-through module has only very limited management capabilities. Other reasons to choose for pass-through instead of ‘enclosure switches’ could be the wish to have all switching done on a ‘one vendor’ infrastructure; and if that isn’t available equally an M1000e module (thus not one of the switches from Dell Powerconnect, Dell Force10 or Cisco) one could go for laissez passer-through modules:

  • 32-port 10/100/1000 Mbit/s gigabit Ethernet pass-through card: connects 16 internal Ethernet interfaces (1 per blade) to an external RJ45 10/100/1000 Mbit/s copper port[55]
  • 32-port 10 Gb NIC version: supports 16 internal 10Gb ports with 16 external SFP+ slots
  • 32-port 10 Gb CNA version: supports 16 internal 10Gb CNA ports with 16 external CNAs[56]
  • Dell 4 or 8Gb Fibre Channel NPIV Port aggregator
  • Intel/QLogic offer a QDR InfiniBand pass-through module for the Dell M1000e chassis, and a mezzanine version of the QLE7340 QDR IB HCA.

Managing enclosure


An M1000e enclosure offers several ways for management. The M1000e offers 'out of band' management: a dedicated VLAN (or even physical LAN) for management. The CMC modules in the enclosure offer management Ethernet interfaces and do not rely on network connections made via I/O switches in the blade. One would normally connect the Ethernet links on the CMC, avoiding a switch in the enclosure. Often a physically isolated LAN is created for management, allowing management access to all enclosures even when the entire infrastructure is down. Each M1000e chassis can hold two CMC modules.

Each enclosure can have either one or two CMC controllers and by default one can access the CMC webgui via https and SSH for command-line access. It is also possible to access the enclosure management via a serial port for CLI access or using a local keyboard, mouse and monitor via the iKVM switch. It is possible to daisy-chain several M1000e enclosures.

Management interface




Main page of the CMC Webgui

The information below assumes the use of the webgui of the M1000e CMC, although all functions are also available via the text-based CLI access. To access the management system one must open the CMC webgui via https using the out-of-band management IP address of the CMC. When the enclosure is in 'stand alone' mode one will get a general overview of the entire system: the webgui gives one an overview of how the system looks in reality, including the status LEDs etc. By default the Ethernet interface of a CMC card will get an address from a DHCP server, but it is also possible to configure an IPv4 or IPv6 address via the LCD display at the front of the chassis. Once the IP address is set or known, the operator can access the webgui using the default root account that is built in from the factory.

Via the CMC management one can configure chassis-related features: management IP addresses, authentication features (local user list, using a RADIUS or TACACS server), access options (webgui, CLI, serial link, KVM etc.), error logging (syslog server), etc. Via the CMC interface one can configure blades in the system and configure iDRAC access to those servers. Once enabled one can access the iDRAC (and with that the console of the server) via this webgui or by directly opening the webgui of the iDRAC.
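Since the same configuration and status data is reachable over the CMC's SSH/CLI interface, routine read-only checks can be scripted. A minimal sketch, assuming SSH access to the CMC is enabled, the third-party paramiko library is installed, and the hostname and credentials below are replaced with real values (getsysinfo and getmodinfo are standard CMC racadm subcommands, but their output format depends on the firmware version):

    # Run a couple of read-only racadm commands over SSH against the CMC to
    # list the chassis and its modules (illustrative sketch only).
    import paramiko

    def run_cmc_command(host: str, user: str, password: str, command: str) -> str:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab use only
        client.connect(host, username=user, password=password)
        try:
            _stdin, stdout, _stderr = client.exec_command(command)
            return stdout.read().decode()
        finally:
            client.close()

    if __name__ == "__main__":
        cmc = "cmc-chassis01.example.net"  # placeholder CMC hostname
        for cmd in ("racadm getsysinfo", "racadm getmodinfo"):
            print(f"### {cmd}")
            print(run_cmc_command(cmc, "root", "changeme", cmd))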


The same applies to the I/O modules in the rear of the system: via the CMC one can assign an IP address to the I/O module in one of the 6 slots and then surf to the webgui of that module (if there is a web-based gui: unmanaged pass-through modules won't offer a webgui as there is nothing to configure).

LCD screen


On the front side of the chassis there is a small hidden LCD screen with three buttons: one 4-way directional button allowing one to navigate through the menus on the screen and two "on/off" push buttons which work as an "OK" or "Escape" button. The screen can be used to check the status of the enclosure and the modules in it: one can for example check active alarms on the system, get the IP address of the CMC or KVM, check the system names etc. Especially for an environment where there are several enclosures in one datacenter it can be useful to check if one is working on the correct enclosure. Unlike rack or tower servers there is only a very limited set of indicators on individual servers: a blade server has a power LED and (local) disc-activity LEDs but no LCD display offering one any alarms, hostnames etc. Nor are there LEDs for I/O activity: this is all combined in this little screen, giving one information on both the enclosure as well as information on the inserted servers, switches, fans, power supplies etc. The LCD screen can also be used for the initial configuration of an unconfigured chassis. One can use the LCD screen to set the interface language and to set the IP address of the CMC for further CLI or web-based configuration.[2]

During normal operation the display can be "pushed" into the chassis and is mostly hidden. To use it one would need to pull it out and tilt it to read the screen and have access to the buttons.


Blade 17: Local management I/O

A blade system is not really designed for local (on-site) management and nearly all communication with the modules in the enclosure and the enclosure itself is done via the "CMC" card(s) at the back of the enclosure. At the front side of the chassis, directly adjacent to the power button, one can connect a local terminal: a standard VGA monitor connector and two USB connectors. This connection is referred to inside the system as 'blade 17' and allows one a local interface to the CMC management cards.[2]

iDRAC remote access


Apart from normal operational access to one's blade servers (e.g. SSH sessions to a Linux-based OS, RDP to a Windows-based OS etc.) there are roughly two ways to manage one's server blades: via the iDRAC function or via the iKVM switch. Each blade in the enclosure comes with a built-in iDRAC that allows one to access the console over an IP connection. The iDRAC on a blade server works in the same way as an iDRAC card on a rack or tower server: there is a special iDRAC network to get access to the iDRAC function. In rack or tower servers a dedicated iDRAC Ethernet interface connects to a management LAN. On blade servers it works the same: via the CMC one configures the setup of the iDRAC, and access to the iDRAC of a blade is NOT linked to any of the on-board NICs: if all one's server NICs were down (thus all the on-motherboard NICs and also Mezzanine B and C) one can still access the iDRAC.

iKVM: Remote console access


Apart from that, one can also connect a keyboard, mouse and monitor directly to the server: on a rack or tower server one would either connect the I/O devices when needed or have all the servers connected to a KVM switch. The same is possible with servers in a blade enclosure: via the optional iKVM module in an enclosure one can access each of one's 16 blades directly. It is possible to include the iKVM switch in an existing network of digital or analog KVM switches. The iKVM switch in the Dell enclosure is an Avocent switch and one can connect (tier) the iKVM module to other digital KVM switches such as the Dell 2161 and 4161 or Avocent DSR digital switches. Also tiering the iKVM to analog KVM switches such as the Dell 2160AS or 180AS or other Avocent (compatible) KVM switches is possible.[2]

Unlike the CMC, the iKVM switch is not redundant, but as one can always access a server (also) via its iDRAC, any outage of the KVM switch doesn't stop one from accessing the server console.

Flex addresses


The M1000e enclosure offers the option of flex addresses. This feature allows the system administrators to use dedicated or fixed MAC addresses and World Wide Names (WWN) that are linked to the chassis, the position of the blade and the location of the I/O interface. It allows administrators to physically replace a server blade and/or a Mezzanine card while the system will continue to use the same MAC addresses and/or WWN for that blade, without the need to manually change any MAC or WWN addresses, avoiding the risk of introducing duplicate addresses: with flex addresses the system will assign a globally unique MAC/WWN based on the location of that interface in the chassis. The flex addresses are stored on an SD card that is inserted in the CMC module of a chassis and when used they override the burned-in addresses of the interfaces of the blades in the system.[2]
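Conceptually, flex addressing replaces the adapter's burned-in address with one derived from the interface's location in the chassis. The sketch below is a purely hypothetical illustration of that idea, not Dell's actual allocation algorithm:

    # Hypothetical illustration of location-based ("flex") addressing: the MAC a
    # NIC uses is derived from where it sits, not from the hardware, so swapping
    # a blade or mezzanine card keeps the address stable.
    FLEX_POOL_BASE = 0x02_00_00_00_00_00  # example locally administered MAC base

    def flex_mac(chassis_id: int, blade_slot: int, fabric: str, port: int) -> str:
        """Derive a deterministic MAC from the interface's location in the chassis."""
        fabric_index = "ABC".index(fabric)
        offset = ((chassis_id * 16 + (blade_slot - 1)) * 3 + fabric_index) * 2 + (port - 1)
        mac = FLEX_POOL_BASE + offset
        return ":".join(f"{(mac >> shift) & 0xFF:02x}" for shift in range(40, -1, -8))

    # The interface in slot 5, fabric A, port 1 always gets the same MAC,
    # regardless of which physical blade is inserted there.
    print(flex_mac(chassis_id=0, blade_slot=5, fabric="A", port=1))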

Power and cooling


The M1000e enclosure is, as most blade systems, meant for IT infrastructures demanding high availability. (Nearly) everything in the enclosure supports redundant operation: each of the 3 I/O fabrics (A, B and C) supports two switches or pass-through cards and it supports two CMC controllers, even though one can run the chassis with only one CMC. Also power and cooling are redundant: the chassis supports up to six power supplies and nine fan units. All power supplies and fan units are inserted from the back and are all hot-swappable.[2]

The power supplies are located at the bottom of the enclosure while the fan units are located next to and in between the switch or I/O modules. Each power supply is a 2700-watt power supply and uses 208–240 V AC as input voltage. A chassis can run with at least two power supplies (2+0 non-redundant configuration). Depending on the required redundancy one can use a 2+2 or 3+3 setup (input redundancy, where one would connect each group of supplies to two different power sources) or a 3+1, 4+2 or 5+1 setup, which gives protection if one power supply unit would fail – but not for losing an entire AC power group.[1]
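The arithmetic behind these redundancy options is simple to illustrate (illustrative only; real capacity planning should use Dell's power sizing tools, since the actual draw depends on the installed blades):

    # Illustrative view of the M1000e PSU redundancy setups: for an "active +
    # redundant" configuration, the active supplies' capacity remains available
    # after the redundant supplies (or their whole AC feed) are lost.
    PSU_WATTS = 2700  # each M1000e power supply is rated 2700 W

    def guaranteed_watts(active: int) -> int:
        return active * PSU_WATTS

    for active, redundant in [(2, 0), (2, 2), (3, 3), (3, 1), (4, 2), (5, 1)]:
        print(f"{active}+{redundant}: {active + redundant} PSUs installed, "
              f"{guaranteed_watts(active)} W guaranteed by the active supplies")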

References


  1. ^ a b c d e Dell website: Tech specs for the M1000e, visited 10 March 2013
  2. ^ a b c d e f g h Dell support website: M1000e owner's manual, retrieved 26 October 2012
  3. ^ PowerEdge M1000e Installation Guide, Revision A05, pages 47-51. Date: March 2011. Retrieved: 25 January 2013
  4. ^ a b "Details on the Dell PowerEdge M420 Blade Server". BladesMadeSimple.com. May 22, 2012. Retrieved January 29, 2017.
  5. ^ "Dell PowerEdge M420 Blade Server – Dell". Dell.com. Retrieved January 29, 2017.
  6. ^ a b Dell website: PowerEdge M630 technical specifications, visited 29 August 2016
  7. ^ Tech Specs brochure PowerEdge M610, updated 20 December 2011
  8. ^ Technical specs of the Dell PowerEdge M610x, retrieved 20 December 2011
  9. ^ Overview of technical specifications of the PowerEdge M620, visited 12 June 2012
  10. ^ Dell website announcing G12 servers with details on virtualisation, Archived 2012-06-14 at the Wayback Machine, visited 12 June 2012
  11. ^ Tech Specs brochure PowerEdge M710, retrieved 27 June 2011
  12. ^ Tech Specs for the PowerEdge M710HD, retrieved 20 December 2011
  13. ^ Dell website: PowerEdge M820 technical specifications, visited 28 July 2012
  14. ^ Technical specs on the M910, retrieved 20 December 2011
  15. ^ Dell website with technical specification of the M915 blade, retrieved 20 December 2011
  16. ^ Footnote: except for the PE M420, which only supports one Mezzanine card: the PE M420 quarter-height blade server only has a Mezzanine B slot
  17. ^ a b Dell support site with an overview of manuals for the M1000e chassis, visited 27 June 2011
  18. ^ Whitepaper on redundant SD card installation of hypervisors, visited 19 February 2013
  19. ^ Technical specifications of the EqualLogic PS M4110 blade array, visited 27 September 2012
  20. ^ a b Dell datasheet for the PS-M4110, downloaded: 2 March 2013
  21. ^ a b Using M1000e System with an AMCC QT2025 Backplane PHY in a 10GBASE-KR Application, retrieved 12 June 2012
  22. ^ a b How to find the midplane revision of the M1000e, visited 19 September 2012
  23. ^ a b PowerConnect M-series User Guide, firmware 4.x, March 2011, retrieved 26 June 2011
  24. ^ Dell website: available blade switches; PCM8024 not listed as available, 29 December 2011
  25. ^ Dell website PCM8024-k, visited 29 December 2012
  26. ^ Release notes, page 6 and further, included in firmware package PC 4.2.1.3, release date 2 February 2012, downloaded: 16 February 2012
  27. ^ Stacking the PowerConnect 10G switches, December 2011. Visited 10 March 2013
  28. ^ PCM6348 User Configuration Guide, downloaded 10 March 2013
  29. ^ Dell community website: Dell announces F10 MXL switch, 24 April 2012. Visited 18 May 2012
  30. ^ EWeek: Dell unveils 40GbE-enabled networking switch, 24 April 2012. Visited 18 May 2012
  31. ^ Dell website: PowerEdge M I/O Aggregator, August 2012. Visited: 26 October 2012
  32. ^ Cisco website: Comprehensive Blade Server I/O Solutions, visited: 14 April 2012
  33. ^ Catalyst 3032 for Dell, visited: 14 April 2012
  34. ^ Nexus FEX for M1000e, visited 2 July 2017
  35. ^ a b Catalyst for Dell at a glance, retrieved: 14 April 2012
  36. ^ Dell website: Catalyst 3130G, Archived 2011-06-21 at the Wayback Machine, visited 14 April 2012
  37. ^ Dell website on Catalyst 3130X, Archived 2011-06-21 at the Wayback Machine, visited 14 April 2012
  38. ^ Cisco datasheet on the Catalyst 3130, section: 3130 software. Visited: 14 April 2012
  39. ^ The Register website: Cisco cuts Nexus 4001d blade switch, 16 February 2010. Visited: 10 March 2013
  40. ^ Cisco datasheet: Cisco Nexus B22 Blade Fabric Extender Data Sheet, 2013. Downloaded: 10 March 2013
  41. ^ Manuals and Documents for PowerEdge M1000E, visited 9 March 2013
  42. ^ User manual for the 10GbE XAUI passthrough module, 2010, visited: 10 March 2013
  43. ^ User manual for the 10 Gb passthrough-k for M1000e, 2011. Visited: 10 March 2013
  44. ^ Brocade M6505 for M1000e chassis, visited 2 July 2017
  45. ^ User guide for the Infiniscale IV, 2009. Downloaded: 10 March 2013
  46. ^ Dell website: specifications of the M8424 Converged 10GbE switch, visited 12 October 2012
  47. ^ Details on the PC-B-8000 switch, visited 18 March 2012
  48. ^ "Brocade M5424 Blade Server SAN I/O Module Hardware Reference Manual, September 2008" (PDF). Support.Euro.Dell.com. Retrieved 12 October 2012.
  49. ^ M6505 technical overview, visited 2 July 2017
  50. ^ Dell manual: Brocade 4424 Blade Server SAN I/O Module Hardware Reference, November 2007. Downloaded: 12 October 2012
  51. ^ News, NO: IDG
  52. ^ Cisco datasheet on the SFS M7000e InfiniBand switch, March 2008. Visited: 12 October 2012
  53. ^ Mellanox user guide for the SwitchX M4001 InfiniBand switches, November 2011. Retrieved: 12 October 2012
  54. ^ a b Mellanox user guide for the M2401 InfiniBand switch, June 2008. Visited: 12 October 2012
  55. ^ Dell website: Gigabit passthrough module for M-series, Archived 2010-12-18 at the Wayback Machine, visited 26 June 2011
  56. ^ 10Gb Pass Through Specifications, PDF, retrieved 27 June 2011
