VMware vSphere 5.1: 16Gb Fibre Channel SANs with HP ProLiant DL380 Gen8 servers and HP 3PAR Storage

Dec 10, 2013

Deployment Guide

VMware vSphere 5.1: 16Gb Fibre Channel SANs with HP ProLiant DL380 Gen8 servers and HP 3PAR Storage

Create robust, highly available vSphere 5.1 environments with best-of-breed Fibre Channel HP 3PAR Storage and HP ProLiant DL servers





Table of contents

Emulex Solution Implementer’s Series
Executive summary
Introduction
About this guide
Solution components
ESXi 5.1
HP ProLiant Gen8 servers
Deploying the solution components
Pre-installation
Updating firmware
Configuring network connectivity
Planning the network environment for the host
Configuring storage
Using the VMFS-5 filesystem for additional functionality
Configuring Fibre Channel connectivity
Deploying ESXi 5.1
Post-installation
Configuring the HBA
Using NPIV to identify HBA ports
Configuring the storage array
Provisioning virtual LUNs
Performance comparison
Test method
Results
Summary
Appendix A: Configuring BfS
For more information


Solution Implementer’s Series: Deploying 16Gb/s Fibre Channel SANs with HP ProLiant DL380 Gen8 servers and HP 3PAR Storage



Emulex Solution Implementer’s Series

This document is part of the Emulex Solution Implementer’s Series, providing implementers (IT administrators and system architects) with solution and deployment information on popular server and software platforms.

As a leader in I/O adapters for Fibre Channel (FC), Ethernet, iSCSI and Fibre Channel over Ethernet (FCoE), the Emulex technology team is taking a lead in providing guidelines for implementing I/O for these solutions.

Executive summary

With vSphere 5.1, VMware continues to raise the bar for hypervisor products, introducing many new features along with support for more and larger virtual machines (VMs), which can now utilize up to 64 virtual CPUs (vCPUs).

vSphere 5.1, like 5.0, does not include a service console operating system (OS); VMware agents and Common Information Model (CIM) providers run directly on the hypervisor layer (VMkernel). There are three options for communicating with VMkernel: VMware vSphere’s enhanced command-line interface (vCLI), vSphere PowerCLI or the vSphere Management Assistant (vMA) virtual appliance.

vSphere 5.1 provides many new features and enhancements in areas such as storage and networking. Indeed, VMware touts several new features,1 claiming, for example, that vSphere 5.1 can run VMs that are two times as powerful as those supported by earlier versions, and adding support for new VM hardware formats, such as Virtual Hardware Version 9. In storage, there is added support for 16Gb Fibre Channel (16GFC). However, as shown in Table 1, the new, larger VMs will place heavy demands on data center infrastructure.

1 http://www.vmware.com/files/pdf/products/vsphere/vmware-what-is-new-vsphere51.pdf



Table 1. Resources supported by various VMware hypervisors

| Component | ESX 1 | ESX 2 | VMware Infrastructure 3 | VMware vSphere 4 | VMware vSphere 5 | VMware vSphere 5.1 | Scale factor |
| vCPUs | 1 | 2 | 4 | 8 | 32 | 64 | x 64 |
| Memory (GB per VM) | 2 | 3.6 | 64 | 256 | 1,000 | 1,000 | x 500 |
| Network (Gb) | < 0.5 | 0.9 | 9 | 30 | >36 | >36 | x 72 |
| Fibre Channel SAN (MB/s per HBA port) | 1GFC: 100 | 2GFC: 200 | 4GFC: 400 | 8GFC: 800 | 8GFC: 800 | 16GFC: 1,600 | x 16 |
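The SAN bandwidth row in Table 1 follows a simple rule of thumb: each Fibre Channel generation sustains roughly 100 MB/s of usable throughput per “Gb” of its marketing name, so a 16GFC HBA port doubles 8GFC’s 800 MB/s to about 1,600 MB/s. As a quick sanity check, the arithmetic can be sketched as follows (nominal values only; line-rate and encoding differences are folded into the rule of thumb):

```shell
# Nominal usable throughput per Fibre Channel generation: roughly
# 100 MB/s for each "Gb" in the generation's marketing name.
for speed in 1 2 4 8 16; do
  echo "${speed}GFC: $((speed * 100)) MB/s"
done
```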



To help you transition to an infrastructure that is capable of supporting the storage needed by new-generation VMs, Emulex has validated the functionality and performance of vSphere 5.1 in conjunction with 16GFC connectivity. The proof-of-concept (POC) environment included the following components:

• Best-of-breed HP ProLiant DL380 Gen8 server
• Dual-port 16GFC adapter produced for HP by Emulex
• HP 3PAR P10000 V400 Storage
• SANBlaze 16GFC storage emulator (16GFC connectivity end-to-end)

In addition, since 16GFC storage area network (SAN) storage is better suited to the new release of vSphere, this implementer’s guide outlines the process for deploying vSphere 5.1 with 16GFC connectivity and presents the results of performance tests carried out in such an environment. The performance tests were executed by the Emulex Technical Marketing team in its labs to validate and test 16GFC connectivity end-to-end.

The performance testing demonstrated that, with a 16GFC Emulex Fibre Channel Host Bus Adapter (HBA), I/O performance was significantly higher (as much as 99%) than with 8GFC technology, without requiring additional CPU cycles. Note that as VM density increases, migrating from 8GFC to 16GFC shifts more of the load to the storage array, since I/O bandwidth to the array has doubled.

Intended audience: This document is intended for engineers, system administrators and VMware administrators interested in deploying vSphere 5.1 on an HP ProLiant Gen8 server featuring an HP Ethernet 10Gb 2-port 554FLR-SFP+ Adapter (684213-B21) and an HP SN1000E 16Gb 2-port PCIe Fibre Channel Host Bus Adapter (QR559A). Testing performed in August and September 2012 is described.

Introduction

vSphere 5.1 supports an unprecedented number of VMs on a single host; moreover, these VMs can reach unprecedented sizes, depending on the application and workload for each VM. Thus, Emulex expects to see more and more workloads being virtualized, as well as additional resources being assigned to existing VMs in order to meet the needs of particular workloads. As noted by VMware,2 “the VM will only get bigger and faster.”

VMware sees more VMs being deployed than ever before,3 with vSphere 5.1 allowing these VMs to grow as much as two times larger. With this increased density, virtualized environments must be able to provide additional network and storage resources in order to support the increased workload.

About this guide

This implementer’s guide describes how to configure a 16GFC SAN with a DL380 Gen8 server in a vSphere environment. Guidelines and instructions are provided for configuring servers, adapters and storage using technical documentation provided by VMware and HP.

Information is provided on the following topics:

• Solution components
• Networking configuration
• Storage configuration, including boot from SAN (BfS)
• Performance testing to compare 8GFC and 16GFC






2 http://www.vmware.com/files/pdf/products/vsphere/vmware-what-is-new-vsphere51.pdf
3 Based on interim results of VMware customer surveys performed in January 2010 and April 2011



Solution components

Emulex built the POC environment shown in Figure 1 in order to validate 16GFC connectivity with vSphere 5.1.

Figure 1. 16GFC POC







Table 2 outlines the key components deployed in this environment.

Table 2. Test environment

| Component | Device | Comments |
| Tested server | HP ProLiant DL380 Gen8 | Virtualization host running ESXi 5.1; 10 identically-configured VMs |
| Management server | vCenter Server 5.1 VM running on ESXi 5.1 host | vCenter Server 5.1 is a VM running Windows Server 2008 R2 |
| AD & DNS server | Generic Windows Server 2008 system | Support for Microsoft Active Directory (AD) and Domain Name System (DNS) |
| Storage | HP 3PAR P10000 V400 Storage | |
| Fibre Channel connectivity | 16GFC HBA | HP SN1000E 16Gb 2-port PCIe Fibre Channel Host Bus Adapter (HBA) |
| | 16GFC fabric switch | 2 x Brocade 6510 16GFC SAN switch |




If this is your first time installing VMware products on a ProLiant server, it is important for you to have a basic understanding of each of the solution components so that the terminology used in this guide is clear.

vSphere 5.1

VMware’s latest hypervisor, vSphere 5.1, extends the core capabilities of vSphere 4.1 and 5.0 to provide the foundation for a cloud infrastructure, whether public or private. Areas where you can expect to see improvements after deploying vSphere 5.1 include server consolidation, performance, management, provisioning and troubleshooting.

For more information on vSphere 5.1, refer to “What’s New in VMware vSphere 5.1,” available at http://www.vmware.com/files/pdf/products/vsphere/vmware-what-is-new-vsphere51.pdf.



HP ProLiant Gen8 servers

The latest DL380 Gen8 servers are based on the Intel Romley platform. These systems continue to be the servers of choice for many HP shops in the VMware space and are widely used, from small and medium-sized businesses (SMBs) to large data centers, for their high availability, scalability, CPU horsepower and memory capacity. In addition, these 2U rack-mount servers save space and power, making them ideal for large data centers moving to a cloud infrastructure.



Using Fibre Channel for shared storage

When deploying vSphere on ProLiant servers, you should always consider using a SAN so that you can take full advantage of the many hypervisor features and enhancements that require shared storage.

While VMware supports most of the more common protocols, Fibre Channel is the predominant choice for shared SAN storage. Thus, while storage protocols continue to evolve, introducing options such as iSCSI and NFS storage arrays, this guide focuses on Fibre Channel connectivity.

Server sizing

HP has developed an automated tool, the HP Sizing Tool for VMware vSphere, that can help you size and scope a server for a particular vSphere deployment. Based on your responses to a questionnaire, the tool provides a quick, consistent method for identifying the best server for your environment. It also creates a bill of materials.

For more information on this sizer and other HP solutions for VMware, visit www.hp.com/go/vmware.

HP 3PAR Storage

Highly virtualized from the ground up, HP 3PAR Storage can enhance the benefits of a vSphere environment by taking care of the demands that server virtualization places on the storage infrastructure.

HP 3PAR Storage combines highly virtualized, autonomically managed, dynamically tiered storage arrays with advanced internal virtualization capabilities to increase administrative efficiency, system utilization and storage performance. As a result, HP 3PAR Storage can boost the return on a vSphere investment by allowing you to optimize your data center infrastructure, simplify storage administration and maximize virtualization savings.

A key feature of HP 3PAR P10000 V400 storage arrays that makes a significant impact on performance when migrating from 8GFC to 16GFC is the wide-striping architecture. As noted in HP’s documentation, wide striping distributes each vSphere storage volume across all array resources. When you increase bandwidth to the array, you are not disk-bound on the LUN as you are with traditional arrays that narrow-stripe data.

Deploying the solution components

Having introduced the key components of the POC, this implementer’s guide now describes how to configure 16GFC connectivity. Guidelines are provided for the following areas:

• Pre-installation
• Configuring network connectivity
• Configuring storage
• Deploying vSphere 5.1
• Post-installation

Pre-installation

There are several steps to consider before applying power or installing an operating system on a system. First, you need to ensure that sufficient rack space and appropriate power and cooling are available; you should also verify that firmware is at the latest levels and download any necessary patches and drivers using the HP and VMware links provided in this section.

The following pre-installation process offers guidelines for pre-configuring a network to support an ESXi host, as well as suggestions for configuring storage systems and storage area networking.

As a best practice, Emulex recommends verifying with HP technical support that you are running the very latest HP firmware and drivers on components such as HP FlexLOM4 and PCI adapters.



Updating firmware

Before deploying vSphere 5.1 on a ProLiant server, you should determine the latest versions of the following firmware and, if necessary, update:

• ESXi host:
  o System BIOS
  o Flexible-LOMs and PCI adapters
• Storage array and controllers
• SAN switches

You can review the latest firmware levels recommended by HP and VMware at the following sites:

• HP: Visit www.hp.com/go/vmware and refer to the Certified ProLiants and Certified HP Storage links under Tools/Resources.
• VMware: Refer to the VMware Compatibility Guides at http://www.vmware.com/resources/guides.html.

Note
Always contact HP support to identify the latest firmware updates and drivers.





4 Where LOM refers to LAN on motherboard



As always, plan your deployment or upgrade before installing software; read all the documentation provided by VMware and HP before starting. Planning will speed up the process, particularly if you intend to deploy multiple servers.

With pre-installation activities complete, you can now start configuring your network.

Configuring network connectivity

Before installing vSphere 5.1, you need to understand the network requirements for the particular ESXi host and the connectivity supported by the physical server. For example, while many physical servers feature LOM or integrated network interfaces, these ports are typically 1Gb Ethernet (1GbE), though newer models such as the DL380 Gen8 server offer 10GbE ports.

In order to best use 10GbE, you should understand the requirements of the traffic being carried on the network, as outlined in Table 3.


Table 3. Typical network requirements

| Traffic type | Bandwidth usage | Other requirements |
| Management | Low | Highly reliable, secure channel |
| VMware vMotion | High | Isolated |
| VMware Fault Tolerance (FT) | Medium-high | Highly reliable, low-latency channel |
| iSCSI | High | Reliable, high-speed channel |
| VM | Application-dependent | Application-dependent |


In general, combining management port traffic, which is relatively light, with VMware vMotion traffic is acceptable in many deployments that utilize four network interface cards (NICs). Since vMotion traffic is heavier, it is not a good practice to combine it with VM traffic; thus, you should consider separating such traffic on different subnets.

For simplicity, Emulex placed management and vMotion traffic on the same virtual switch in the POC. In such an implementation, it is a good practice to use virtual LANs (VLANs) to enhance security and isolate traffic.

Following VMware’s best practices for performance5 provides a good starting point. Remember that, as time passes, you should regularly revisit your environment to ensure the network configuration is still effective.
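When management and vMotion traffic share a virtual switch, the VLAN isolation described above can be applied per port group from the ESXi shell or vCLI. A minimal sketch for a standard vSwitch; the switch name, port group names and VLAN IDs below are illustrative, not taken from the POC:

```shell
# Create a port group for vMotion on an existing standard switch
# and tag it with VLAN 20 (vSwitch0 and VLAN IDs are examples).
esxcli network vswitch standard portgroup add -v vSwitch0 -p vMotion
esxcli network vswitch standard portgroup set -p vMotion --vlan-id 20

# Put the management port group on its own VLAN.
esxcli network vswitch standard portgroup set -p "Management Network" --vlan-id 10
```

The same commands can be issued remotely from vMA by adding `--server` and credential options.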




5 http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.1.pdf



Planning the network environment for the host

You should plan the network environment for the host in conjunction with your network administrator.

The DL380 Gen8 server utilized in the POC was equipped with an HP Ethernet 10Gb 2-port 554FLR-SFP+ Adapter, which was used to configure the single network interface card (NIC) that was needed.

In vSphere deployments featuring 1GbE network adapters, it is not uncommon to use six or even eight NICs in order to meet VMware’s networking requirements for performance, isolation and redundancy. However, for this 10GbE environment, Emulex was able to utilize just two 10GbE ports; VLANs were used in conjunction with VMware’s network I/O control to manage bandwidth.

Table 4 shows VMware’s best practices for two 10GbE ports.

Table 4. VMware’s best practices, which were used in the POC

| Traffic type | Port group | Teaming option | Active uplink | Standby uplink | NIOC shares |
| VM | PG-A | LBT | dvuplink 1, 2 | None | 20 |
| iSCSI | PG-B | LBT | dvuplink 1, 2 | None | 20 |
| FT | PG-C | LBT | dvuplink 1, 2 | None | 10 |
| Management | PG-D | LBT | dvuplink 1, 2 | None | 5 |
| vMotion | PG-E | LBT | dvuplink 1, 2 | None | 20 |



After setting up network connectivity, you can now configure storage.





Configuring storage

This section provides information on the following topics:

• Using the new VMFS-5 filesystem6 to support additional storage functionality
• Configuring Fibre Channel connectivity
• Implementing fabric zoning
• Configuring the storage array
• Configuring BfS

Using the VMFS-5 filesystem for additional functionality

Introduced with ESXi 5.0, the VMFS-5 filesystem provides support for VMFS volumes up to 64TB in a single extent. With VMFS-3, 32 extents would have been required to achieve 64TB.

Note
The volume creation wizard for VMFS-5 uses GUID Partition Table (GPT) format rather than Master Boot Record (MBR), which allows you to create VMFS volumes that are larger than 2TB. GUID refers to a globally-unique identifier.



With the ability to create large VMFS volumes, you must now manage storage array queue depth as well as LUN queue depth. For example, queue depth for an HP SN1000E 16Gb 2-port PCIe Fibre Channel HBA is set by default to 30 and may be adjusted via Emulex OneCommand Manager, the OneCommand Manager plug-in for vCenter Server, or vMA.
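Besides the OneCommand Manager tools mentioned above, the Emulex driver’s LUN queue depth can also be inspected and changed from the ESXi command line. A hedged sketch: the module name lpfc820 is typical of the ESXi 5.x in-box Emulex driver but varies by driver version, and 64 is only an example value:

```shell
# Identify the loaded Emulex Fibre Channel driver module.
esxcli system module list | grep -i lpfc

# Inspect current parameters, then set the per-LUN queue depth.
esxcli system module parameters list -m lpfc820
esxcli system module parameters set -m lpfc820 -p "lpfc_lun_queue_depth=64"

# The new value takes effect after a host reboot.
```

Queue depth should be tuned against the array vendor’s guidance; raising it blindly can overload array ports.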

Other benefits delivered by the VMFS-5 filesystem include:

• Support for up to 130,000 files rather than 30,000 as before
• Support for 64TB physical-mode RDM LUNs
• Virtual-mode RDMs allow you to create snapshots, which is beneficial when a file exceeds 2TB
• For space efficiency, there can now be up to 30,000 8KB sub-blocks
• There is small-file support for files of 1KB or less; in the past, such files would have occupied entire sub-blocks


As you plan the deployment of shared storage, take into consideration the benefits of VMFS-5. For example, if you are migrating from a hypervisor that is pre-vSphere 5.1, you may also wish to migrate to VMFS-5 to take advantage of the new features and enhancements.
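If you do decide to move existing datastores to VMFS-5, ESXi 5.x can upgrade a mounted VMFS-3 volume in place. A sketch; the datastore label is an example, and the upgrade is one-way, so back up first:

```shell
# Show each mounted datastore and its current VMFS version.
esxcli storage filesystem list

# Upgrade a VMFS-3 datastore to VMFS-5 in place
# ("datastore1" is a placeholder label).
esxcli storage vmfs upgrade -l datastore1
```

Note that an upgraded volume keeps its original block size; newly created VMFS-5 volumes use a unified 1MB block size.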





6 VMFS is a VMware Virtual Machine File System format.



Configuring Fibre Channel connectivity

The Emulex Fibre Channel driver is in-box with vSphere, making it easy to transition to 16GFC connectivity from an earlier platform; there is no need to install the driver during the deployment of vSphere 5.1. As a best practice, however, you should update the Fibre Channel driver after installation, since the in-box driver will be out of date. Thus, the configuration of 16GFC connectivity via HP SN1000E 16Gb 2-port PCIe Fibre Channel HBAs is a simple process in a vSphere 5.1 environment, with just the following stages:

• Implement fabric zoning
• Configure the storage array
• Configure BfS

Before you begin the configuration, it is a best practice to review the appropriate VMware Compatibility Guide to ensure firmware for the storage array is at the latest levels and the array has been certified, as shown in Figure 2.

Figure 2. Firmware levels specified in the VMware Compatibility Guide for HP 3PAR P10000 Storage




IMPORTANT
Whenever you need to update code or firmware, Emulex recommends backing up your data and configuration.





Implementing fabric zoning

Zoning has become a standard component of VMware deployments; indeed, most if not all storage array vendors recommend zoning LUNs that are presented to ESXi hosts. Fabric zones can enhance manageability while providing support for advanced features such as vMotion and Fault Tolerance that require multiple hosts to access the same LUN.

Zones can also enhance security. For example, consider what might happen if you were to connect a new Microsoft Windows server to the same SAN switch as an existing ESXi host. Without zoning or some other security measure, the new server would be able to access the same storage as the existing host and could potentially overwrite the filesystem, obliterating VM data and files.

Thus, since the POC features two 16Gb HBA ports, there should ideally be two or more fabric switches, each configured with a zone that includes one of the ports.





Figure 3 shows the zoning used in the POC.

Figure 3. Zoning in the POC, with four paths to the LUN








The POC utilizes two Brocade 16Gb SAN switches and a total of four zones, as shown in Table 5. The zones are added to an alias zone configuration, which is then activated.

Table 5. Zone configuration for the POC

| HBA | Storage controller | Alias | Zone | Zone configuration | Switch |
| Port 0 | Port A1 | Zone 1 | ZoneSet 1 | ZoneConfig_1 | 1 |
| Port 0 | Port B2 | Zone 2 | ZoneSet 1 | ZoneConfig_1 | 2 |
| Port 1 | Port A2 | Zone 3 | ZoneSet 2 | ZoneConfig_1 | 2 |
| Port 1 | Port B1 | Zone 4 | ZoneSet 2 | ZoneConfig_1 | 1 |



This zone configuration gives the ESXi host four paths to a single LUN. At this stage of the deployment, no LUNs have been created; thus, LUNs cannot yet be bound to WWN ports on the HP SN1000E2P 16Gb HBA.
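On Brocade fabric switches such as the 6510s used here, the alias/zone/configuration hierarchy described above maps directly onto Fabric OS CLI commands. A minimal sketch for one zone on one switch; every WWPN and name below is a placeholder, so substitute the values your HBA and array actually report:

```shell
# Brocade Fabric OS CLI, run on each switch. WWPNs and names are
# placeholders, not values from the tested environment.
alicreate "hba_port0", "10:00:00:00:c9:00:00:01"
alicreate "3par_ctrl_a1", "20:11:00:02:ac:00:00:01"

# Zone the HBA port with one controller port, add the zone to a
# configuration, then save and enable the configuration.
zonecreate "zone1", "hba_port0; 3par_ctrl_a1"
cfgcreate "ZoneConfig_1", "zone1"
cfgsave
cfgenable "ZoneConfig_1"
```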

Configuring the storage array

HP 3PAR Storage is very popular in VMware environments due to its extensive virtualization capabilities and ease of management. HP has documented deployment and tuning best practices for this simple yet robust and highly available array; for more information, refer to “3PAR Utility Storage with VMware vSphere.”

Emulex followed HP’s best practices when configuring HP 3PAR Storage for the POC. The process included the following stages:

• Update storage array controller firmware and management software as needed
• Configure virtual domains
• Configure virtual LUNs
• Assign host mode
• Create hosts
• Present LUNs to the host

If the correct zoning and host mode have been applied, LUNs will be visible to the assigned hosts. There should be four paths to each LUN for optimal performance and redundancy.
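Once LUNs are presented, the four paths can be confirmed from the ESXi shell (or remotely through vCLI/vMA) using the standard esxcli storage namespaces in ESXi 5.x:

```shell
# List every storage path the host sees; each 3PAR LUN should show
# four paths (two HBA ports x two zoned controller ports).
esxcli storage core path list

# Show how the native multipathing plug-in (NMP) claims each device,
# including the path-selection policy in use.
esxcli storage nmp device list
```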





Configuring Boot from SAN (BfS)

Enterprise server manufacturers such as HP continue to offer local disk storage; however, with the growth of virtualization and the increased use of BfS,7 server configurations are evolving. For example, HP offers a diskless server that would allow you to deploy vSphere via a USB flash drive or SD card.

BfS capability is often configured in a vSphere environment, where its benefits include:

• Enhanced manageability
• Faster deployment
• Easier backup8 of the hypervisor
• Enhanced disaster recovery capabilities

The process for configuring BfS via an HP SN1000E2P 16Gb HBA is simple and can be achieved in the following stages:

• Load the latest boot code to the HBA
• Provision the boot LUNs
• Configure the ESXi host
• Specify the desired boot volume
• Place the HBA first in the boot order

This vendor-specific process is described in more detail in Appendix A, “Configuring BfS.”


Note
If you plan to install or upgrade ESXi 5.1 with local storage, Emulex recommends disconnecting the Fibre Channel cables from the SAN to prevent the OS from being accidentally installed on the SAN.




Once storage has been configured, and you have verified that the hardware has been certified by VMware,9 you can deploy ESXi 5.1.






7 ESXi is installed directly on a LUN instead of local storage
8 Since the array owns the LUN, array-based copies can be made without server intervention.
9 Refer to the VMware Compatibility Guides at http://www.vmware.com/resources/guides.html.



Deploying vSphere 5.1

The main points to remember before beginning a vSphere deployment are as follows:

• Check all firmware on host and adapters and make updates as needed
• Check HBA drivers (vSphere 5.1 has in-box drivers for Emulex 16GFC)
• Plan the networking configuration
• Plan the Fibre Channel storage configuration and LUNs
• Decide on local or BfS storage
• Select the most suitable deployment option for your environment

Since vSphere 5.1 has been designed for flexibility, you have a range of options for deploying this hypervisor on a ProLiant server. These options include the following:



• Interactive installation: Suggested for fewer than five hosts
• Scripted installation: Unattended deployment for multiple hosts
• vSphere Auto Deploy installation: Suggested for a large number of ESXi hosts; uses VMware vCenter Server
• Custom installation: vSphere 5 Image Builder command-line interfaces (CLIs) provide custom updates, patches and drivers
stom updates, patches and drivers

For the POC, Emulex elected to use the interactive method, downloading a vSphere 5.1 image from the VMware website to local storage. The deployment process is fairly straightforward and, in many ways, identical to the deployment of earlier versions of ESXi. Since this process is detailed in VMware’s technical documentation, it is not described in this guide. It should not take more than a few minutes to install vSphere 5.1.

Following the installation of vSphere 5.1, you can configure the management network and, if appropriate, enable lockdown mode via vCenter Server or the ESXi direct console user interface (DCUI).

You can then proceed with the post-installation process, which includes configuring the vSphere 5.1 host for 16GFC SAN connectivity.





Post-installation

After vSphere 5.1 has been deployed on the host, you should review the installation and perform any updates that are necessary; for example, a vendor may have recently released new firmware or drivers.

Next, you should configure NIC teaming, which is not configured automatically unless you are using a scripted installation.

If you are using local storage, remember to reconnect the Fibre Channel cables to the SAN and then verify that the host can log in to the fabric and view any LUNs that have been assigned.

You can now configure the vSphere 5.1 host and storage array for 16GFC SAN connectivity, which may involve the following activities:

• Planning the network environment for the host
• Configuring the HBA
• Configuring the storage array with features such as multipathing

Configuring the HBA

If you are migrating to vSphere 5.1, or are installing vSphere 5.1 for the first time and have installed the HP SN1000E 16Gb 2-port PCIe HBA in a full-length PCIe 3.0 slot with the correct small form-factor pluggable (SFP) transceivers, configuration is simple.

Since vSphere 5.1 already provides an in-box VMware driver for Fibre Channel, there is no need to install a driver during the initial setup. After the vSphere 5.1 installation, however, you should verify with VMware or Emulex that the in-box driver is the latest and, if necessary, update it.





After configuring the HBA, review the Configuration tab of vCenter Server; you should see your device listed under Storage Adapters, as shown in Figure 4.

Figure 4. In this example, ESXi has automatically recognized a 16GFC HBA

Alternatively, you can use vMA to remotely send commands to the ESXi host to verify that the driver has been configured correctly.
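As a sketch of such a check (standard esxcli namespaces; the Emulex lpfc module name is an assumption about the driver in your installation):

```shell
# List the storage adapters the host has recognized; the 16GFC HBA
# should appear as a vmhba entry along with its driver name
esxcli storage core adapter list

# Confirm the Fibre Channel driver module is loaded (lpfc for Emulex HBAs)
esxcli system module list | grep -i lpfc
```

These commands can be run locally in the ESXi shell, or issued remotely from vMA using the standard vCLI connection options (--server, --username).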






Using NPIV to identify HBA ports

N-Port ID Virtualization (NPIV)[10] is supported in vSphere 5.1. This capability allows a single Fibre Channel HBA port to register with the fabric using multiple worldwide port names (WWNs), each representing a unique entity.

To learn more, refer to the VMware technical note, "Configuring and Troubleshooting N-Port ID Virtualization," which provides specific information on Emulex adapters.

For additional information, refer to the VMware document, "vSphere Storage Guide," which is available at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.

Configuring the storage array

vSphere 5.1 introduced several new storage features and enhanced others. You should be aware of the following:

• VMFS-5
  o Support for 32 hosts for a single read-only file on a VMFS volume
• vStorage API for Array Integration (VAAI)
• Storage Distributed Resource Scheduler (SDRS)
  o Datastore correlation detector
  o New I/O metric: VMobservedLatency
• Storage I/O Control (SIOC)
  o Automatic setting for threshold latency
• Storage vMotion
  o Four concurrent parallel migrations
• Storage protocol enhancements
  o 16GFC support

While these valuable new features are beyond the scope of this guide, be aware that you may face additional steps after you install vSphere 5.1. For example, after mounting a LUN and formatting it with VMFS-5, you may need to determine if additional, array-specific agents are required to support features such as VAAI or vSphere APIs for Storage Awareness (VASA).
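One quick check of this kind, whether the VAAI primitives are active on a mounted LUN, can be made from the ESXi shell; the naa identifier below is a placeholder:

```shell
# Report VAAI primitive status (ATS, Clone, Zero, Delete) for all devices
esxcli storage core device vaai status get

# Restrict the report to a single LUN (placeholder naa identifier)
esxcli storage core device vaai status get -d naa.60002ac00000000000000000000000a1
```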


For this POC, Emulex configured the HP 3PAR Storage based on information provided in the HP document, "HP 3PAR Storage and VMware vSphere 5 best practices."








[10] Maintained by American National Standards Institute (ANSI), Technical Committee T11




Note
For more information on ESXi 5.1 features that are supported by a particular HP array, see VMware's HCL to determine which specific array features are supported for each model, and also consult HP technical support.





Provisioning virtual LUNs

You can use the HP 3PAR InForm Management Console (shown in Figure 5) to configure and manage HP 3PAR Storage.

Figure 5. View of the HP 3PAR InForm Management Console

For this POC, Emulex started by configuring a single ESXi Common Provisioning Group (CPG) named VMware and used all drives in the array for provisioning virtual LUNs (VLUNs). Using this CPG, Emulex validated FC connectivity by provisioning a single 600GB LUN to the DL380 Gen8 server to provide support for VMs.
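The same provisioning can also be done from the HP 3PAR InForm CLI. The sketch below mirrors the POC layout (a CPG named VMware and a 600GB volume exported to the DL380 Gen8 host); the RAID level, volume name, and host name are assumptions, not values documented in this POC:

```shell
# Create a CPG named VMware (RAID type per your own policy; r5 is illustrative)
createcpg -t r5 VMware

# Create a 600GB virtual volume in that CPG (volume name is an assumption)
createvv VMware esx-vv01 600g

# Export the volume to the ESXi host as a VLUN (host name is an assumption)
createvlun esx-vv01 1 dl380gen8-esx01
```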

The next concern, after deploying vSphere 5.1, is HP's suggested configuration for multipathing.



Configuring multipathing

By default with vSphere 5.1, HP 3PAR Storage uses the Fixed path policy for active/active storage arrays. This policy maximizes bandwidth utilization by designating the preferred path to each LUN through the appropriate storage controller.

According to HP documentation, HP 3PAR Storage also supports the Round Robin path policy, which can improve storage performance by load-balancing I/O requests between active paths, sending a fixed number of requests through each in turn.

Note
The fixed number of I/Os is user-configurable. You should consult HP technical support for their recommendations.
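If you choose Round Robin, the path selection policy can be set per device from the ESXi shell. The device identifier and the IOPS value below are placeholders; confirm the recommended IOPS setting with HP before changing it:

```shell
# Show the current path selection policy for a device (placeholder naa ID)
esxcli storage nmp device list -d naa.60002ac00000000000000000000000a1

# Switch that device to the Round Robin path selection policy
esxcli storage nmp device set -d naa.60002ac00000000000000000000000a1 \
    --psp VMW_PSP_RR

# Send a fixed number of I/Os down each path before rotating (value is illustrative)
esxcli storage nmp psp roundrobin deviceconfig set \
    -d naa.60002ac00000000000000000000000a1 --type iops --iops 100
```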





You might consider enabling the array's Asymmetric Logical Unit Access (ALUA) feature, which can improve storage performance in some environments.

Performance comparison

As VM density increases, the burden placed on storage by applications running on these VMs also increases. As a result, both Emulex and VMware have carried out performance tests designed to demonstrate the ability of 16GFC storage to sustain a significantly higher workload than 8GFC storage without increasing CPU utilization on the host.

















Figure 6 shows how storage was configured in the VMware test environment.

Figure 6. Test environment

Test method

The Iometer tool was used to compare sequential read and write throughput and CPU effectiveness with 16GFC and 8GFC HBAs. Various block sizes were used.

A VM was configured with a single Iometer worker thread. The VM was hosted on a DL380 Gen8 server that was configured with the following:

• Two six-core Intel Xeon E5-2640 processors
• HP SN1000E dual-port 16GFC HBA
• Emulex LPe12002 dual-port 8GFC HBA

A Brocade 6510 16Gb FC switch was also used in the POC.





The VM was configured as follows:

• One virtual CPU
• Guest memory: 4,096MB
• Virtual disk: 10GB OS and 256GB RDM
• SCSI controllers: one LSI Logic and one PVSCSI virtual controller
• VM Virtual Hardware Version 9
• Guest OS: Microsoft Windows Server 2008 R2, 64-bit

The target RDM LUN for the testing was placed on a SANBlaze VirtualLUN 6.3 16GFC appliance, which is used to emulate Fibre Channel drives in order to characterize read/write performance. The SANBlaze device was configured as follows:

• HP DL380 G7
• 256GB RAM
• 16GFC HBA

Results

Sequential read throughput

Figure 7 shows sequential read throughput for a range of block sizes.







Figure 7. Sequential read throughput in MB/s in the test environment


With larger block sizes, the result is what was expected from 16GFC, as we were able to utilize near-line rate.[11] As we doubled the pipeline in size, we were also able to prove that the 16GFC adapter can double the throughput.[12] 16GFC out-performed 8GFC by almost 100% with larger blocks.
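As a quick sanity check, the measured peaks from the footnotes can be compared with nominal usable line rates; the rates of roughly 1,600 MB/s for 16GFC and 800 MB/s for 8GFC are approximations, not figures from this POC:

```shell
# Sanity-check measured peaks against approximate usable line rates (MB/s)
rate16=1600; rate8=800          # approximate usable line rates
meas16=1560; meas8=750          # measured sequential-read peaks (footnotes 11, 12)

util16=$(( meas16 * 100 / rate16 ))   # percent of 16GFC line rate achieved
util8=$((  meas8  * 100 / rate8  ))   # percent of 8GFC line rate achieved
speedup=$(( meas16 * 100 / meas8 ))   # 16GFC throughput as a percent of 8GFC

echo "16GFC: ${util16}% of line rate"
echo "8GFC:  ${util8}% of line rate"
echo "16GFC delivers ${speedup}% of 8GFC throughput"
```

With integer arithmetic this reports 97% and 93% of line rate, and a 208% ratio, consistent with the near-doubling claimed above.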

CPU effectiveness: reads

Figure 8 shows CPU effectiveness (defined as total IOPS divided by CPU utilization) for sequential reads. This metric characterizes how much I/O the processor completes per unit of utilization; thus, a higher number is more desirable because it indicates the processor is less stressed for a given amount of work.





[11] 1,560 MB/s for sequential reads
[12] 750 MB/s for sequential reads





Figure 8. CPU effectiveness for sequential reads at various block sizes

CPU effectiveness was similar with 16GFC and 8GFC, despite the fact that, with 16GFC, the CPU was completing significantly more I/Os.





Sequential write throughput

Figure 9 shows sequential write throughput for a range of block sizes.





Figure 9. Sequential write throughput in MB/s in the test environment

With larger block sizes, the result is what was expected from 16GFC, as we were able to utilize near-line rate. Again, as we doubled the pipeline in size, we were able to prove that the 16GFC adapter can double the throughput. 16GFC out-performed 8GFC by almost 100% with larger blocks.





CPU effectiveness: writes

Figure 10 shows CPU effectiveness for sequential writes.

Figure 10. CPU effectiveness for sequential writes at various block sizes

CPU effectiveness was similar with 16GFC and 8GFC, despite the fact that, with 16GFC, the CPU was completing significantly more I/Os.







Summary

vSphere 5.1 clearly adds a broad range of features to the hypervisor; however, taking full advantage of these features requires newly-supported 16GFC SAN connectivity in your virtualization environment.

Testing carried out by Emulex indicated that I/O performance with a 16GFC implementation with HP DL380 Gen8 servers and HP 3PAR P10000 V400 was significantly higher than with 8GFC, without the need to stress the CPUs. This leaves HP DL380 Gen8 servers with CPU power available to complete other, higher-priority tasks.

Planning the deployment of a vSphere 5.1 HP host, or the migration of an existing host to vSphere 5.1, is a relatively simple process; guidelines are provided in this guide.

Using HP DL380 Gen8 servers and HP 3PAR P10000 with 16GFC connectivity, rather than 8GFC, results in an infrastructure that can meet the demands of the larger VMs you are now able to create with vSphere 5.1, thus allowing you to introduce additional business-critical applications.





Appendix A: Configuring BfS (Boot from SAN)

This appendix outlines the process for configuring BfS:

1. Collaborate with your SAN administrator on provisioning a boot LUN and presenting it to the vSphere 5.1 host.
2. Download the Universal Boot Code firmware for your adapter, available at http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareIndex.jsp?lang=en&cc=us&prodNameId=5219801&prodTypeId=12169&prodSeriesId=5219798&swLang=8&taskId=135&swEnvOID=54. For example, select HP SN1000E 16Gb 2-Port PCIe Fibre Channel Host Bus Adapter, and then Cross operating system (BIOS, Firmware, Diagnostics, etc.).
3. Power on the server and, at the prompt, press Ctrl-E.
4. Install the boot code firmware.
5. Reboot the host and, at the prompt, press Ctrl-E.
6. Specify the adapter port from which the system will boot.
7. Scan the array and select the boot LUN.
8. Save the settings and reboot the host.
9. Insert the vSphere 5.1 CD into the CD drive.
10. Initiate the install.
11. Select the appropriate disk on the SAN when asked where to install the media.








For more information

"Storage I/O Performance on VMware vSphere 5.1 over 16 Gigabit Fibre Channel"
http://www.vmware.com/files/pdf/techpaper/VMware-vSphere-16Gb-StorageIO-Perf.pdf

"What's New in VMware vSphere 5.1 - Performance"
http://www.vmware.com/resources/techresources/10309

"What's New in VMware vSphere 5.1 - Storage"
http://www.vmware.com/resources/techresources/10308

HP virtualization with VMware, including a section on ProLiant servers
www.hp.com/go/vmware

HP storage solutions for VMware
http://h71028.www7.hp.com/enterprise/w1/en/solutions/storage-vmware.html

"3PAR Utility Storage with VMware vSphere"
http://www.vmware.com/files/pdf/techpaper/vmw-vsphere-3par-utility-storage.pdf

To help us improve our documents, please provide feedback at implementerslab@emulex.com.



© Copyright 2012 Emulex Corporation. The information contained herein is subject to change without notice. The only warranties for Emulex products and services are set forth in the express warranty statements accompanying such products and services. Emulex shall not be liable for technical or editorial errors or omissions contained herein. OneCommand is a registered trademark of Emulex Corporation. HP is a registered trademark in the U.S. and other countries. VMware is a registered trademark of VMware Corporation.

World Headquarters 3333 Susan Street, Costa Mesa, California 92626 +1 714 662 5600
Bangalore, India +91 80 40156789 | Beijing, China +86 10 68499547
Dublin, Ireland +35 3 (0)1 652 1700 | Munich, Germany +49 (0) 89 97007 177
Paris, France +33 (0) 158 580 022 | Tokyo, Japan +81 3 5322 1348
Wokingham, United Kingdom +44 (0) 118 977 2929