© 2012 IBM Corporation

October Hardware Announcement Deep Dive

Mark Olson (olsonm@us.ibm.com), WW Power Systems Offering Manager
Patrick O'Rourke (pmorour@us.ibm.com), Power Executive Briefing Center

IBM Power Systems


Agenda

- POWER7+ chips
- POWER7+ 770/780
- Power 795 enhancements
- Elastic CoD for 770/780/795
- Enterprise Power Pool
- Active Memory Expansion for POWER7+
- New Ultra SSD I/O Drawer for 770/780
- New rack doors for 770/780/795
- RDX docking station refresh
- IBM i I/O enhancements
- 900/856 GB 10k rpm disk drive
- Refreshed HMC and Firmware insights
- IBM Networking highlights
- PCIe RoCE adapter enhancements
- SODs, Planning Statements, Product Withdrawals
- Future 2012 announcements
- Bonus topics


Oct 3 Announce Content GA Dates

2012 Oct 3: Announcement

Oct 12: First software GA
- AIX 7.1 TL2, AIX 6.1 TL8, IBM i 7.1 TR5
- Linux RHEL 6.3, SLES 11 SP2
- PowerVM 2.2.2 (POWER7+ only)

Oct 19: General availability of
- POWER7+ 770/780
- eFW 7.6 available for Power 795
- EXP30 Ultra SSD I/O Drawer, RoCE adapter, 900/856 GB HDD, RDX docking stations

Nov 9: Second software GA
- IBM i 6.1.1 support of POWER7+ 770/780

Nov 16: General availability of
- Model upgrades into POWER7+ 770/780
- Power 795 GX++ PCIe Gen2 adapters
- New rack doors
- Elastic CoD

Dec 19:
- Other AIX 7.1 and 6.1 TL levels and VIOS 2.2.1.5 supporting POWER7+ 770/780

2013 planned content:
- SOD for AIX 5.3 support for POWER7+ 770/780
- CHARM on new "D" model 770/780, February 14, 2013



e-config/SPT Staged Support for Oct 3, 2012 Announce

Oct 3 announce items being staged in eConfig:

Overall
- Overall announce support: planned 10/6. Available with the code update that is planned to be released starting late Friday, 10/5 (USA time).

Hardware related
- 9117-MMD & 9179-MHD: support of carry-over indicator features for DDR3 memory on a model upgrade to MMD or MHD. Planned 11/6.
  - EH04: Carry-over indicator for DDR3 CoD Memory Feature 5600
  - EH05: Carry-over indicator for DDR3 CoD Memory Feature 5601
  - EH06: Carry-over indicator for DDR3 CoD Memory Feature 5602
  - EH07: Carry-over indicator for DDR3 CoD Memory Feature 5564
  - Ordered when one of the mentioned DDR3 CoD Memory Features is being carried over from the base server configuration to the new upgraded model configuration.
- 9117-MMD & 9179-MHD: change orderability to MES ONLY for PCI-X adapters (feature codes 5908, 5713, 5736, 5759, 5912, 5706, 5749). Note the #5796 PCI-X I/O drawer has been withdrawn from marketing. Planned 10/23.
  - Changed from being available for both initial order and MES.
  - Not available on initial order because PCI-X I/O drawers are not available on initial order; only supported with model upgrades.
  - From 10/3 to 10/23, these features will reflect "No price found" when included on an initial order.

Essentially the same delays and planned add schedule apply for SPT.

- On Oct 3 (announce date) there is no eConfig or SPT support; support is quickly being added.
- Note: delays from 10/3 are due to late product definition changes.



Power Processor Technology Roadmap

2001: POWER4/4+ (180/130 nm)
- Dual Core
- Chip Multi Processing
- Distributed Switch
- Shared L2
- Dynamic LPARs (32)

2004: POWER5/5+ (130/90 nm)
- Dual Core
- Enhanced Scaling
- SMT
- Distributed Switch +
- Core Parallelism +
- FP Performance +
- Memory Bandwidth +
- Virtualization

2007: POWER6/6+ (65/65 nm)
- Dual Core
- High Frequencies
- Virtualization +
- Memory Subsystem +
- Altivec
- Instruction Retry
- Dynamic Energy Mgmt
- SMT +
- Protection Keys

2010: POWER7/7+ (45/32 nm)
- Eight Cores
- On-Chip eDRAM
- Power-Optimized Cores
- Memory Subsystem ++
- SMT++
- Reliability +
- VSM & VSX
- Protection Keys+

Future

IBM Power Systems

© 2012 IBM Corporation

7

Innovation Drives Performance

(Chart: relative % of improvement by processor generation.)


(Die photos: POWER7 at 45 nm, POWER7 shrunk to 32 nm, and POWER7+ at 32 nm.)




Versus POWER7 at 45 nm, the POWER7+ die at 32 nm adds additional cache and on-chip accelerators.


POWER7+ Design

(Chip diagram: eight cores, each with its own L2, surrounding four L3 cache regions; two memory controllers (MC), an accelerator engine (Acc Eng), a GX bus, SMP fabric links, and the Power Bus.)

Physical design:
- 8 cores with integrated cache, memory controllers, and accelerators
- 3 / 4 / 6 / 8 core options
- 32nm technology

Features:
- 2.5X increase in L3 cache (eDRAM technology)
- Higher frequencies
- Memory Compression Engine: Active Memory Expansion with no processor overhead penalty
- Encryption / cryptography support
- Random number generator
- Enhanced energy / power gating
- 1/20 LPAR core granularity
- 2X SPFP performance


Processor Designs

|              | POWER5       | POWER5+      | POWER6     | POWER7        | POWER7+       |
| Technology   | 130nm        | 90nm         | 65nm       | 45nm          | 32nm          |
| Size         | 389 mm²      | 245 mm²      | 341 mm²    | 567 mm²       | 567 mm²       |
| Transistors  | 276 M        | 276 M        | 790 M      | 1.2 B         | 2.1 B         |
| Cores        | 2            | 2            | 2          | 8             | 8             |
| Frequencies  | 1.65 GHz     | 1.9 GHz      | 4-5 GHz    | 3-4 GHz       | 3.6-4.4+ GHz  |
| L2 Cache     | 1.9MB shared | 1.9MB shared | 4MB / core | 256 KB / core | 256 KB / core |
| L3 Cache     | 36MB         | 36MB         | 32MB       | 4MB / core    | 10MB / core   |
| Memory Cntrl | 1            | 1            | 2 / 1      | 2 / 1         | 2 / 1         |
| Architecture | Out of Order | Out of Order | In Order   | Out of Order  | Out of Order  |
| LPAR         | 10 / core    | 10 / core    | 10 / core  | 10 / core     | 20 / core     |


Benefits of eDRAM for POWER7+

With eDRAM: 2.1B transistors, 567 mm²
Without eDRAM: 5.4B transistors, 950 mm²

IBM's eDRAM benefits:
- Greater density: 1/3 the space of a 6T SRAM implementation
- Lower power requirements: 1/5 the standby power
- Fewer soft errors: soft error rate 250x lower than SRAM
- Better performance


Improved SP Floating Point Performance

POWER7/7+ has four FP pipelines.
- Each pipeline in POWER7/7+ can do either an SP or a DP operation.
- DP width is 2x that of SP.
- POWER7/7+ feeds two SP operations into each DP pipeline.
  - This needed separate area for control and some status bits.
- In POWER7: two DP pipes together can execute a 2-way SIMD DPFP instruction.
- In POWER7+: the same two DP pipes together can now execute a 4-way SIMD SPFP instruction.
- Two 4-way SIMD SPFP instructions can be executed per cycle in POWER7+ (only one in POWER7).
- Counting a multiply-add as two operations, POWER7+ has 16 SP FLOPs per cycle.
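The slide's peak-rate counting can be sanity-checked with a little arithmetic (a sketch; the instruction and lane counts come from the bullets above, and a multiply-add is counted as two operations):

```python
# Peak single-precision FLOPs per core per cycle, using the slide's counting
# convention (a fused multiply-add counts as two floating-point operations).

def sp_flops_per_cycle(simd_instructions_per_cycle, simd_width, ops_per_lane):
    """Peak SP FLOPs/cycle = instructions/cycle x SIMD lanes x ops per lane."""
    return simd_instructions_per_cycle * simd_width * ops_per_lane

# POWER7: one 4-way SIMD SPFP instruction per cycle
power7 = sp_flops_per_cycle(1, 4, 2)    # -> 8
# POWER7+: two 4-way SIMD SPFP instructions per cycle
power7_plus = sp_flops_per_cycle(2, 4, 2)  # -> 16

print(power7, power7_plus)
```

At 4.4 GHz, 16 FLOPs/cycle works out to roughly 70 SP GFLOP/s peak per core; sustained throughput of course depends on keeping the SIMD pipes fed.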


POWER7+ Encryption Accelerator

Client benefits:
- Encryption can be applied to a broader set of data, creating a stronger security ecosystem.
- The POWER7+ core stays focused on application performance.
- Two primary AIX security applications:
  1) Encrypted File System: protecting your data in storage or on backup media
  2) IPsec: protecting your data over the network
- Ensures that high demand for entropy on a heavily loaded system always yields high-quality random numbers.
- RNG offload provides entropy, and security is ensured.
- Processor performance is focused on your applications.

Technology:
- Crypto offload accelerators: provide cryptographic engines to relieve the P7+ processor from the performance-intensive cryptographic algorithms of AES, SHA, and RSA.
- High-quality random numbers: generated with high performance by the RNG offload feature of the P7+ processor.

AIX POWER7+ Crypto Acceleration Support

| Application              | Application Algorithms |
| Encrypted File System    | P7+ AES = CBC / ECB; P7+ RSA* |
| IPsec                    | P7+ AES = CBC / SCB / GCM / GMAC; P7+ Digest = MD5 / SHA1 / SHA256; P7+ HMAC = SHA1 / SHA256; P7+ RSA* |
| PKCS11 User Applications | P7+ AES = CBC / SCB / GCM / GMAC / CTR / XMAC; P7+ Digest = MD5 / SHA1 / SHA256 / SHA512; P7+ HMAC = SHA1 / SHA256 / SHA512; P7+ RSA* |

* 2013 time frame. RSA provides public key encryption, used during initial handshake authentication.
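As an illustration of one algorithm in the table, here is plain HMAC-SHA256 from Python's standard library (the key and message are hypothetical). This only shows the algorithm's interface: hardware acceleration on POWER7+ happens underneath the platform crypto stack, so application code looks the same with or without it.

```python
# HMAC-SHA256: one of the MAC algorithms the slide lists for IPsec and
# PKCS11 workloads. Key and payload below are made-up illustration values.
import hashlib
import hmac

key = b"session-key"          # hypothetical key material
payload = b"packet payload"   # hypothetical message

tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
print(tag)  # 64 hex characters (256-bit tag)

# Verification should use a constant-time comparison:
ok = hmac.compare_digest(tag, hmac.new(key, payload, hashlib.sha256).hexdigest())
print(ok)
```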


P7+ RAS Specific Features

New Power On Reset Engine (PORE)
- Enables a processor to be re-initialized while the system remains up and running.
- Directly used to allow for concurrent firmware updates in cases where a processor initialization register value needs to be changed.

L3 Cache dynamic column repair
- New self-healing capability that complements cache line delete.
- Uses the PORE feature to swap out a failing bit-line for a spare during run-time.

New Fabric Bus Dynamic Lane Repair
- POWER7+ has spare bit lanes that can dynamically be repaired (using PORE).
- For busses that connect CEC drawers.
- Avoids any repair action or outage related to a single bit failure.



POWER7+ 770 / 780


POWER7+ 770

4 socket / 4U; 16-core or 12-core per 4U enclosure.

- POWER7+ frequencies:
  - 4C SCM @ 3.8 GHz; max config: 64 cores
  - 3C SCM @ 4.2 GHz; max config: 48 cores
- Up to 64 cores
- Up to 4 TB of memory
- 6 PCIe Gen2 slots / CEC
- Ethernet ports: dual 10 Gbt & dual 1 Gbt
- Capacity on Demand
- Enhanced RAS:
  - Self-healing capability for L3 cache functions
  - Core re-initialization (running system)
  - Dynamic processor fabric bus repair


POWER7+ 780

4 socket / 4U; 32-core or 16-core per 4U enclosure.

- POWER7+ frequencies:
  - 8C SCM @ 3.7 GHz; max config: 128 cores
  - 4C SCM @ 4.4 GHz; max config: 64 cores
- Up to 128 cores
- Up to 4 TB of memory
- 6 PCIe Gen2 slots / CEC
- Ethernet ports: dual 10 Gbt & dual 1 Gbt
- Enhanced Capacity on Demand options
- Enhanced RAS:
  - Self-healing capability for L3 cache functions
  - Core re-initialization (running system)
  - Dynamic processor fabric bus repair


Power 770 (MMC) / 780 (MHC) Diagram

(Block diagram: CPU card with two P7 sockets and GX++ busses; 8 SN memory DIMMs per socket; inter-node buses; FSP card with PSI links and 2x HMC ports; two P7IOC hubs on the I/O backplane driving six PCIe Gen2 slots, 2x 1Gbt & 2x 10Gbt Ethernet, USB, and serial; DASD backplane with SAS controllers, write cache, RAID batteries, external SAS, SAS expanders, and SATA media; anchor card; TPMD; 12X IB links.)

POWER7+ 770 / 780 Quad Socket Planar

(Block diagram: four P7+ sockets, each with four SN memory buffers fanning out to DRAM DIMMs; GX++ busses carrying 12X IB and PCIe x8 links; two P7IOC hubs driving six PCIe Gen2 slots, 2x 1Gbt & 2x 10Gbt Ethernet, USB, and serial; FSP card with PSI links and 2x HMC ports; DASD backplane with SAS controllers, write cache, RAID batteries, external SAS, SAS expanders, and SATA media; anchor card; TPMD.)

POWER7+ 770 / 780 Top View

(Layout: four POWER7+ sockets (0-3) with their memory, flanked by voltage regulators.)


Dynamic Processor Fabric Bus Repair

- Fabric busses run between CEC drawers and use ECC protection.
- Spare signals run between the drawers.
- If a fault occurs on one of the signals, ECC allows continued operation until we dynamically activate and start using a spare.
- Activation of a spare involves temporarily stopping the clocks using a power savings mode that is new for POWER7+.
- Transparent to applications, since it is a very momentary pause that allows us to switch from one signal to another on the bus.
- At least 2 bad bits can be tolerated:
  - The first bit can be "self-healed" without any need to take a service action.
- Competition: a single bit-line error will cause an entire lane of traffic to be taken out of commission:
  - Major impact to performance
  - Drives the necessity of a service action


Operating Systems

IBM AIX operating system:
- AIX Version 7.1 TL02
- AIX Version 7.1 TL01 SP 6, or later (planned availability December 19, 2012)
- AIX Version 7.1 TL00 SP 8, or later (planned availability December 19, 2012)
- AIX Version 6.1 TL08, or later
- AIX Version 6.1 TL07 SP 6, or later (planned availability December 19, 2012)
- AIX Version 6.1 TL06 SP 10, or later (planned availability December 19, 2012)
- AIX 5.3 TL12: Statement of Direction

IBM i operating system:
- IBM i 7.1 TR5, or later; required if the primary OS is IBM i
- IBM i 6.1 with machine code 6.1.1, or later
  - All I/O must be virtual (I/O provided through either IBM i 7.1 or VIOS)
  - Cannot be ordered as the primary OS with feature number 0566

If installing Linux:
- Red Hat Enterprise Linux 6.3 for POWER, or later
- Red Hat Enterprise Linux 5.7 for POWER, or later
- SUSE Linux Enterprise Server 11 Service Pack 2, or later, with current maintenance updates available from SUSE to enable all planned functionality

If installing VIOS:
- VIOS 2.2.2.0
- VIOS 2.2.1.5 (planned availability December 19, 2012)


rPerf per KWatt

| Generation | System    | Frequency | KWatts |
| POWER4     | p670      | 1.1 GHz   | 6.7 |
| POWER4+    | p670      | 1.5 GHz   | 6.7 |
| POWER5     | p5-570    | 1.65 GHz  | 5.2 |
| POWER5+    | p570      | 1.9 GHz   | 5.2 |
| POWER6     | Power 570 | 4.7 GHz   | 5.6 |
| POWER6+    | Power 570 | 4.2 GHz   | 5.6 |
| POWER7     | Power 780 | 3.8 GHz   | 6.9 |
| POWER7+    | Power 780 | 3.7 GHz   | 7.7 |

- >5X increase in performance per watt over POWER6+
- >10X increase in performance per watt since POWER5+
- >10 years of changing the server landscape

POWER7+ delivers more performance per watt.


Power 770 and 780 Memory for MMC & MHC

- 16 DDR3 DIMM slots per processor enclosure
- DIMMs: 8GB, 16GB, 32GB, and 64GB
- Plugged in quads of DIMMs; 1 feature code = 4 identical DIMMs
- CAN mix different-size DIMM features
- Minimum 1 quad of DIMMs (one feature) per processor enclosure & minimum of 50% of memory capacity activated

| Feature Code | Feature GB |
| #5600        | 32  |
| #5601        | 64  |
| #5602        | 128 |
| #5564        | 256 |

| # Proc Encl. | 1  | 2  | 3  | 4  |
| DIMM slots   | 16 | 32 | 48 | 64 |
| Max TB       | 1  | 2  | 3  | 4  |

One proc card GB memory capacity by DIMM size:

| DIMM size | 1 Quad | 2 Quad | 3 Quad | 4 Quad |
| 8 GB      | 32     | 64     | 96     | 128  |
| 16 GB     | 64     | 128    | 192    | 256  |
| 32 GB     | 128    | 256    | 384    | 512  |
| 64 GB     | 256    | 512    | 768    | 1024 |

3 & 4 quad columns assume no mixing, for simplicity.
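The capacity cells follow mechanically from the plugging rule stated above (one memory feature = a quad of four identical DIMMs); a quick sketch:

```python
# Per-processor-card memory capacity from the slide's plugging rule:
# one memory feature = a quad of four identical DIMMs.
def card_capacity_gb(dimm_gb, quads):
    return dimm_gb * 4 * quads

for dimm in (8, 16, 32, 64):
    row = [card_capacity_gb(dimm, q) for q in (1, 2, 3, 4)]
    print(f"{dimm:2d} GB DIMMs: {row}")

# Fully populated with 64 GB DIMMs: 64 * 4 * 4 = 1024 GB per card, so a
# four-enclosure 770/780 tops out at 4096 GB (4 TB), matching the slide.
print(4 * card_capacity_gb(64, 4))
```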


Power 770 and 780 Memory for MMD & MHD

- 16 DDR3 DIMM slots per processor enclosure
- DIMMs: 8GB, 16GB, 32GB, and 64GB
- Plugged in quads of DIMMs; 1 feature code = 4 identical DIMMs
- CAN mix different-size DIMM features
- Minimum 1 quad of DIMMs (one feature) per processor enclosure & minimum of 50% of memory capacity activated

| Feature Code   | Feature GB |
| #5600* / #EM40 | 32  |
| #5601* / #EM41 | 64  |
| #5602* / #EM42 | 128 |
| #5564* / #EM44 | 256 |

| # Proc Encl. | 1  | 2  | 3  | 4  |
| DIMM slots   | 16 | 32 | 48 | 64 |
| Max TB       | 1  | 2  | 3  | 4  |

One proc card GB memory capacity by DIMM size:

| DIMM size | 1 Quad | 2 Quad | 3 Quad | 4 Quad |
| 8 GB      | 32     | 64     | 96     | 128  |
| 16 GB     | 64     | 128    | 192    | 256  |
| 32 GB     | 128    | 256    | 384    | 512  |
| 64 GB     | 256    | 512    | 768    | 1024 |

3 & 4 quad columns assume no mixing, for simplicity.


POWER7+ System Upgrades

Power 570 & 770 systems can upgrade to POWER7+.

To POWER7+ 770 (9117-MMD):
- POWER6 570 (9117-MMA)
- POWER7 770 (9117-MMB)
- POWER7 770 (9117-MMC)

To POWER7+ 780 (9179-MHD):
- POWER6 570 (9117-MMA)
- POWER7 780 (9179-MHB)
- POWER7 780 (9179-MHC)

A 9406-MMA is converted to 9117-MMA.


I/O Differences Between C and D Models

No difference in system unit I/O … same PCIe Gen2 slots, multifunction card, and integrated SAS controller, BUT:

1. No SCSI disk support on D models (was supported on C). A 770/780 selection consideration for clients planning to use old SCSI 3.5-inch disk on a new server (probably IBM i clients, if any). SCSI disk would be found in the #5786 EXP24 I/O drawer and run by a SCSI PCI-X adapter with cache (see SOD from April 2012). SCSI tape/optical remains supported.

2. New EXP30 Ultra SSD I/O Drawer (#EDR1) announced for 770/780 D models under AIX. #EDR1 has great performance and wonderful footprint density.

3. Native IBM i I/O support using IBM i 7.1, but no native IBM i 6.1 support of I/O on POWER7+. IBM i 6.1 can be a client partition and be provided I/O linkages through either IBM i 7.1 or VIOS. Note communications adapters #2893/2894 and cryptographic adapters can't be virtualized.

4. Max quantity of disk drives is higher for D models vs C models: 3024 vs 1344. Max number of SSDs also increased, but not as dramatically.



795 Hardware Oct Announcements

- 2X max memory: now up to 16 TB
- PCIe Gen2: two new GX++ adapters
  - 16Gb Fibre Channel
  - 10Gb FCoE / CNA
- RoCE support


Power 795 Memory

New 256GB feature.

- 32 DDR3 DIMM slots per processor book (8 memory feature codes)
- DIMMs: 8GB, 16GB, 32GB, and 64GB
- Plugged in "octants" of DIMMs; 1 feature code = 4 identical DIMMs, so always plug two feature codes of memory at a time
- CAN mix pairs of different-size DIMM features on the same processor book
- Per processor book: minimum 2 quads of DIMMs (two of the same memory feature)
- Per system: minimum of 50% of memory capacity activated, or minimum of 32GB memory activated (whichever is larger)

| Feature Code | Feature GB |
| #5600        | 0/32  |
| #5601        | 0/64  |
| #5602        | 0/128 |
| #5564        | 0/256 |

| # Proc books | 1  | 2  | 3  | 4   | 5   | 6   | 7   | 8   |
| DIMM slots   | 32 | 64 | 96 | 128 | 160 | 192 | 224 | 256 |
| Max TB       | 2  | 4  | 6  | 8   | 10  | 12  | 14  | 16  |

Capacity per processor book (assuming same-size DIMMs used):

| DIMM size | 1 Quad | 2 Quad | 3 Quad | 4 Quad | 5 Quad | 6 Quad | 7 Quad | 8 Quad |
| 8 GB      | n/a    | 64     | n/a    | 128    | n/a    | 192    | n/a    | 256  |
| 16 GB     | n/a    | 128    | n/a    | 256    | n/a    | 384    | n/a    | 512  |
| 32 GB     | n/a    | 256    | n/a    | 512    | n/a    | 768    | n/a    | 1024 |
| 64 GB     | n/a    | 512    | n/a    | 1024   | n/a    | 1536   | n/a    | 2048 |


PCIe Gen2 for Power 795 (Nov 2012 GA)

Two PCIe Gen2 GX++ adapters:
- 16 Gb Fibre Channel #EN23
- 10 Gb FCoE/CNA #EN22

- 2 ports of high-performance capability
- Access to the full bandwidth of the GX++ bus (up to 20 GB/s peak)
- Slightly lower latency than an adapter in a #5803/5873 I/O drawer, but probably not noticeable for most client applications
- Innovative, highly compact packaging
  - Combines GX adapter + GX cables + PCIe I/O drawer + PCIe adapter into just one new 2-port GX adapter
  - Can save money, space, energy/cooling, and maintenance


Power 795 & PCIe Gen2 Function

- Plug the GX++ PCIe adapter directly into a GX++ slot on the processor card.
- No cabling except optical fiber for the downstream I/O switch.
- Max 3 GX++ PCIe adapters per processor book.
  - Can be in any of the 4 GX slots.
- Using a GX++ PCIe adapter means an I/O drawer cannot be placed in that GX slot. Can have one or the other; OK to mix.
- No change to the minimum of one #5803 I/O drawer per 795.


GX++ 2-port 16Gb Fibre Channel Adapter (#EN23)

First 16 Gb Fibre Channel for Power Systems.

- Two ports, each 16 Gb Fibre Channel
- Link speeds of 4, 8 and 16 Gbps (auto-negotiation supported)
- N_Port ID Virtualization (NPIV) capability through VIOS
- Attachment to switch is supported; no direct device attachment support
- OS levels required:
  - AIX Version 7.1 with TL 7100-02
  - AIX Version 6.1 with TL 6100-08
  - IBM i: not supported as of 2012
  - Linux: not supported as of 2012
  - VIOS 2.2.2.0
- Firmware 7.6 required
- Standard SR fiber optic cables with LC type connectors:
  - OM4: multimode 50/125 micron fibre, 4700 MHz*km bandwidth
  - OM3: multimode 50/125 micron fibre, 2000 MHz*km bandwidth
  - OM2: multimode 50/125 micron fibre, 500 MHz*km bandwidth
  - OM1: multimode 62.5/125 micron fibre, 200 MHz*km bandwidth
- CCIN = 2B9B

Cable / speed / distance:

| Cable | 4 Gbps      | 8 Gbps      | 16 Gbps     |
| OM4   | 0.5m - 400m | 0.5m - 190m | 0.5m - 125m |
| OM3   | 0.5m - 380m | 0.5m - 150m | 0.5m - 100m |
| OM2   | 0.5m - 150m | 0.5m - 50m  | 0.5m - 35m  |
| OM1   | 0.5m - 70m  | 0.5m - 21m  | 0.5m - 15m  |


GX++ 2-port 10Gb FCoE (CNA) SR Adapter (#EN22)

FCoE: Fibre Channel over Ethernet. CNA: Converged Network Adapter.

- Two ports, each 10 Gb
- Each port runs Ethernet NIC (Network Interface Card) and/or Fibre Channel traffic
- N_Port ID Virtualization (NPIV) capability through VIOS
- Attachment to switch is supported; no direct device attachment support
- OS levels required:
  - AIX Version 7.1 with TL 7100-02
  - AIX Version 6.1 with TL 6100-08
  - IBM i: NIC supported through VIOS with i 6.1.1; FC not supported as of 2012
  - Linux: not supported as of 2012
  - VIOS 2.2.2.0
- Firmware 7.6 required
- Standard SR fiber optic cables with LC type connectors, up to 300m:
  - OM4: multimode 50/125 micron fibre, 4700 MHz*km bandwidth
  - OM3: multimode 50/125 micron fibre, 2000 MHz*km bandwidth
  - OM2: multimode 50/125 micron fibre, 500 MHz*km bandwidth
  - OM1: multimode 62.5/125 micron fibre, 200 MHz*km bandwidth
- CCIN = 2B74


GX++ PCIe Gen2 Adapter Economics 101

New GX++ PCIe Gen2 adapters:
- #EN22 FCoE = $12,000; additional maint = zero
- #EN23 FC = $13,000; additional maint = zero

Versus PCIe adapter + PCIe I/O drawer + 12X & UPIC cables + GX adapter:
- PCIe adapter: #5708 FCoE (10Gb 2-port) = $5,499, or #5735 FC (8Gb 2-port) = $4,631
- PCIe I/O drawer (20 PCIe slots): #5877 = $28,500, plus $214 monthly maint after warranty
- Three-four 12X cables + UPIC cables: ~$3,500
- One #1816 GX adapter: minimum $4,000 (often use two)
- Total: FCoE = $41,500 + maint; FC = $40,600 + maint
- Much higher cost; HOWEVER, the drawer has more PCIe slots which can be used, so for a larger number of ports the cost per adapter port would be better.

Less than 1/3 the cost, assuming available GX slots and a modest number of I/O ports needed. Plus savings on energy and cooling.

Based on USA suggested list prices, which are subject to change. Reseller prices may vary. Prices outside the USA may vary.
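The rounded totals above can be re-derived from the listed component prices (list prices as quoted on the slide; a rough sketch that ignores the drawer's recurring maintenance charge):

```python
# Re-deriving the slide's drawer-path totals from its component list prices.
drawer, cables, gx_adapter = 28_500, 3_500, 4_000

fcoe_drawer_path = 5_499 + drawer + cables + gx_adapter  # #5708 path -> 41,499 (~$41,500)
fc_drawer_path = 4_631 + drawer + cables + gx_adapter    # #5735 path -> 40,631 (~$40,600)

print(fcoe_drawer_path, fc_drawer_path)
# The $12,000 #EN22 GX++ adapter vs the FCoE drawer path: under 1/3 the cost.
print(12_000 / fcoe_drawer_path)
```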


When to Use Power 795 GX++ PCIe Adapters

Use GX++ PCIe adapters when:
- You just need a few high-speed FC or FCoE ports and have available GX slots
- You need 16 Gb Fibre Channel

Use the #5803/5873 I/O drawer when:
- You need lots of PCIe slots
- Min 1 #5803 is still needed on the 795
- You need adapters other than FC or FCoE
- OS/firmware levels don't match your establishment



Enhanced Capacity on Demand (CoD)

Changing the name to "Elastic CoD" instead of "On/Off CoD" to help point out enhancements in flexibility, usability and operation.

However, many feature code names remain "On/Off" and many documents still use "On/Off". Therefore be patient with us as the change is made, and help us switch over.

Capacity on Demand names, FYI:
- Permanent activations: Capacity Upgrade on Demand (CUoD)
- Temporary activations: Temporary CoD (TCoD)
  - Elastic CoD (was On/Off CoD): usage by the day
  - Utility CoD: automated usage by the minute
  - Trial CoD


CoD Enhancement: 90-Day Enablement

90-Day Elastic Temporary Enablement keys:
- For 770/780 "D" models and for the 795 (new or existing)
- One key for processors (#EP9T), one key for memory (#EM9T)
- No-charge to enable; you pay later only if you temporarily activate resources
- MES order only; cannot be ordered with the initial server order
- Firmware 7.6 is a pre-req

For all processor cores or memory not permanently activated, for 90 days:
- Implemented as 90 times the number of processors, or 90 times the GB of memory, that are not permanently activated.
- Earlier models enable a fixed number of processor-days/memory-days (a different number for memory vs processors). If usage is high, the fixed number has the logistical hassle of running low at an inconvenient time and potentially needing an emergency reorder.
- The new 90-day approach is simpler and enables a quantity that will last at least 90 days (and may last a lot longer, up to a calendar year, if Elastic CoD usage is modest).
- You could also choose to order every 90 days (no-charge) to guarantee you never run out, if logistically you prefer.
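The sizing rule works out as follows (a sketch; the installed and activated quantities below are made-up illustration values, not a real configuration):

```python
# 90-day Elastic enablement sizing: the key enables
# 90 x (resources not permanently activated).
installed_cores, activated_cores = 64, 48    # hypothetical 770/780 config
installed_gb, activated_gb = 4096, 2048      # hypothetical memory config

proc_days_enabled = 90 * (installed_cores - activated_cores)  # 90 * 16 = 1,440
mem_days_enabled = 90 * (installed_gb - activated_gb)         # 90 * 2048 = 184,320

print(proc_days_enabled, mem_days_enabled)
```

With all inactive resources in use every day, these quantities are exhausted in exactly 90 days; with lighter temporary usage they last proportionally longer.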




Power 780/795 Elastic CoD: Large Block No-Charge

All new POWER7+ 780 and POWER7 795 systems come "pre-loaded" with Elastic CoD processor and memory days at no charge:
- Fifteen (15) processor days for every active and inactive processor initially ordered
- Two hundred forty (240) GB of Elastic memory days for every processor initially ordered
- Only available at time of initial ship or at initial model upgrade to MHD/FHB
- Elastic CoD processor and memory days will be kept as a credit by IBM
- Credit not transferable to another company

Usage of Elastic CoD resources:
- No restrictions apply to how the client uses the Elastic CoD resources
- A TCoD contract must be signed prior to receiving the TCoD enablement code
- The enablement code is ordered via an MES
- Reports of usage of Elastic CoD resources will be sent to IBM on a monthly basis
- Usage will be debited to the client's account on a quarterly basis, and a report will be sent providing the Elastic CoD account status
- Normal billing occurs when all the client's Elastic processor and memory day credits are exhausted


Value of Large Block Elastic CoD

- 1 processor day = enabled to use 1 processor core for one 24-hour period
- 1 memory day = enabled to use 1 GB of memory for a 24-hour period

No-charge initial block of processor days and memory days:
- Fifteen (15) processor days for every active and inactive processor core initially ordered
- Two hundred forty (240) memory days for every processor core initially ordered

- For EACH 795 #4700 32-core processor book: 15 x 32 = 480 processor days; with 8 books, 3,840 processor days
  - #4709 = $20/day = up to $76,800
- For EACH 795 #4700 32-core processor book: 240 x 32 = 7,680 memory days; with 8 books, 61,440 memory days
  - #7377 = $1/day = up to $61,440
- For EACH 795 #4702 24-core processor book: 15 x 24 = 360 processor days; with 8 books, 2,880 processor days
- For EACH 795 #4702 24-core processor book: 240 x 24 = 5,760 memory days; with 8 books, 46,080 memory days
- For EACH 780 #EPH0 16-core processor card: 15 x 16 = 240 processor days; with 4 cards, 960 processor days
- For EACH 780 #EPH0 16-core processor card: 240 x 16 = 3,840 memory days; with 4 cards, 15,360 memory days
- For EACH 780 #EPH2 32-core processor card: 15 x 32 = 480 processor days; with 4 cards, 1,920 processor days
  - #EPHJ = $20/day = up to $38,400
- For EACH 780 #EPH2 32-core processor card: 240 x 32 = 7,680 memory days; with 4 cards, 30,720 memory days
  - #7377 = $1/day = up to $30,720

Value of up to $138,240* for a 795, or up to $69,120* for a 780. PLUS daily usage at no additional charge of AIX, PowerHA, PowerVM, PowerSC, CSM, and GPFS; software maintenance on the above also at no additional charge. (IBM i would be higher value.)

Value is provided as a credit balance of days in IBM records.

* Value calculated using max processor books/cards and adding the billing feature list price value for processor and memory days. Based on USA suggested list prices, which are subject to change. Reseller prices may vary. Prices outside the USA may vary.
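The starred maximums can be re-derived from the 15 processor-day / 240 memory-day rule and the quoted billing prices ($20 per processor day, $1 per memory day); a quick check:

```python
# Re-deriving the slide's no-charge Elastic CoD credit values.
def credit_value(cores_per_feature, features, proc_day_price=20, mem_day_price=1):
    proc_days = 15 * cores_per_feature * features    # 15 proc days per core
    mem_days = 240 * cores_per_feature * features    # 240 mem (GB) days per core
    return proc_days * proc_day_price + mem_days * mem_day_price

p795 = credit_value(32, 8)  # 3,840 proc days + 61,440 mem days -> $138,240
p780 = credit_value(32, 4)  # 1,920 proc days + 30,720 mem days -> $69,120
print(p795, p780)
```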


Power 780 Elastic CoD


Large Block of No
-
Charge

Ordering the No
-
charge initial block of processor days and memory days


Fifteen (15) processor days for every active and inactive processor core initially ordered


For EACH #EPH0 feature: Qty 10 #EPJ2 features (one #EPJ2 = 24 proc days)


For EACH #EPH2 feature: Qty 10 #EPJ0 features (one #EPJ0 = 48 proc days)


Two hundred & forty (240) GBs of On/Off Memory days for every processor core initially ordered


For EACH #EPH0 feature: Qty 10 #EMJ2 features (one #EMJ2 = 384 mem days)


For EACH #EPH2 feature: Qty 10 #EMJ0 features (one #EMJ0 = 768 mem days)


Qty proc features | 4.42 GHz #EPH0 cores | No-charge processor days (use ten #EPJ2 when ordering) | No-charge memory days (use ten #EMJ2 when ordering) | 3.72 GHz #EPH2 cores | No-charge processor days (use ten #EPJ0 when ordering) | No-charge memory days (use ten #EMJ0 when ordering)
1 | 16 | 1 x 10 x 24 = 240 days | 1 x 10 x 384 = 3,840 days | 32 | 1 x 10 x 48 = 480 days | 1 x 10 x 768 = 7,680 days
2 | 32 | 2 x 10 x 24 = 480 days | 2 x 10 x 384 = 7,680 days | 64 | 2 x 10 x 48 = 960 days | 2 x 10 x 768 = 15,360 days
3 | 48 | 3 x 10 x 24 = 720 days | 3 x 10 x 384 = 11,520 days | 96 | 3 x 10 x 48 = 1,440 days | 3 x 10 x 768 = 23,040 days
4 | 64 | 4 x 10 x 24 = 960 days | 4 x 10 x 384 = 15,360 days | 128 | 4 x 10 x 48 = 1,920 days | 4 x 10 x 768 = 30,720 days

Power 780 Model MHD (not offered for 780 "B"/"C" models or for the Power 770 MMD)



Power 795 Elastic CoD - Large Block of No-Charge

Ordering the no-charge initial block of processor days and memory days


Fifteen (15) processor days for every active and inactive processor core initially ordered


For EACH #4700 feature: Qty 10 #EPJ0 features (one #EPJ0 = 48 proc days)


For EACH #4702 feature: Qty 10 #EPJ1 features (one #EPJ1 = 36 proc days)


Two hundred & forty (240) GBs of On/Off Memory days for every processor core initially ordered


For EACH #4700 feature: Qty 10 #EMJ0 features (one #EMJ0 = 768 mem days)


For EACH #4702 feature: Qty 10 #EMJ1 features (one #EMJ1 = 576 mem days)


Qty proc features | 4.0 GHz #4700 cores | No-charge processor days (ten #EPJ0 when ordering) | No-charge memory days (ten #EMJ0 when ordering) | 3.7 GHz #4702 cores | No-charge processor days (ten #EPJ1 when ordering) | No-charge memory days (ten #EMJ1 when ordering)
1 | 32 | 1 x 10 x 48 = 480 days | 1 x 10 x 768 = 7,680 days | 24 | 1 x 10 x 36 = 360 days | 1 x 10 x 576 = 5,760 days
2 | 64 | 2 x 10 x 48 = 960 days | 2 x 10 x 768 = 15,360 days | 48 | 2 x 10 x 36 = 720 days | 2 x 10 x 576 = 11,520 days
3 | 96 | 3 x 10 x 48 = 1,440 days | 3 x 10 x 768 = 23,040 days | 72 | 3 x 10 x 36 = 1,080 days | 3 x 10 x 576 = 17,280 days
4 | 128 | 4 x 10 x 48 = 1,920 days | 4 x 10 x 768 = 30,720 days | 96 | 4 x 10 x 36 = 1,440 days | 4 x 10 x 576 = 23,040 days
5 | 160 | 5 x 10 x 48 = 2,400 days | 5 x 10 x 768 = 38,400 days | 120 | 5 x 10 x 36 = 1,800 days | 5 x 10 x 576 = 28,800 days
6 | 192 | 6 x 10 x 48 = 2,880 days | 6 x 10 x 768 = 46,080 days | 144 | 6 x 10 x 36 = 2,160 days | 6 x 10 x 576 = 34,560 days
7 | 224 | 7 x 10 x 48 = 3,360 days | 7 x 10 x 768 = 53,760 days | 168 | 7 x 10 x 36 = 2,520 days | 7 x 10 x 576 = 40,320 days
8 | 256 | 8 x 10 x 48 = 3,840 days | 8 x 10 x 768 = 61,440 days | 192 | 8 x 10 x 36 = 2,880 days | 8 x 10 x 576 = 46,080 days

Power 795 Model FHB


Power 770 "D" On Demand Features

Processor | 770 (MMD) 4.2 GHz #EPM0 | 770 (MMD) 3.8 GHz #EPM1
1 processor base activation (no charge) | n/a | n/a
1 processor CUoD (permanent) activation | EPMA | EPMB
90-Days On/Off Temporary Enablement (processor) | EP9T | EP9T
1 On/Off processor day billing (without IBM i) | EPME | EPMG
100 On/Off processor day billing (without IBM i) | EPMN | EPMQ
100 minutes On/Off utility billing (without IBM i) | EPMW | EPMY
1 On/Off processor day billing (with IBM i) | EPMF | EPMH
100 On/Off processor day billing (with IBM i) | EPMP | EPMR
100 minutes On/Off utility billing (with IBM i) | EPMX | EPMZ

Memory | #EPM0 | #EPM1
1GB activation (to server, not to memory DIMMs) | EMA2 | EMA2
100GB memory activation (to server, not to DIMMs) | EMA3 | EMA3
90-Day On/Off Temporary Enablement (memory) | EM9T | EM9T
On/Off 1GB-1Day billing | 7377 | 7377
On/Off 999 GB-Days billing | 4710 | 4710

5250 Enterprise Enablement (5250 OLTP) | #EPM0 | #EPM1
Enablement (1 add'l processor's worth) | 4992 | 4992
Full Enterprise Enablement | 4997 | 4997


Power 780 "D" On Demand Features

Processor | 780 (MHD) 4.42 GHz #EPH0 | 780 (MHD) 3.72 GHz #EPH2
1 processor base activation (no charge) | n/a | n/a
1 processor CUoD (permanent) activation | EPHA | EPHC
90-Days On/Off Temporary Enablement (processor) | EP9T | EP9T
1 On/Off processor day billing (without IBM i) | EPHE | EPHJ
100 On/Off processor day billing (without IBM i) | EPHN | EPHS
100 minutes On/Off utility billing (without IBM i) | EPHU | EPHY
1 On/Off processor day billing (with IBM i) | EPHF | EPHK
100 On/Off processor day billing (with IBM i) | EPHP | EPHT
100 minutes On/Off utility billing (with IBM i) | EPHV | EPHZ
No Charge 24/48 Proc-Days On/Off (initial order) | EPJ2 (24) | EPJ0 (48)

Memory | #EPH0 | #EPH2
1GB activation (to server, not to memory DIMMs) | EMA2 | EMA2
100GB memory activation (to server, not to DIMMs) | EMA3 | EMA3
90-Day On/Off Temporary Enablement (memory) | EM9T | EM9T
On/Off 1GB-1Day billing | 7377 | 7377
On/Off 999 GB-Days billing | 4710 | 4710
No Charge 384/768 GB-Days On/Off (initial order) | EMJ2 (384) | EMJ0 (768)

5250 Enterprise Enablement (5250 OLTP) | #EPH0 | #EPH2
Enablement (1 add'l processor's worth) | 4992 | 4992
Full Enterprise Enablement | 4997 | 4997


Power 795 9119-FHB On Demand Features

Processor | 3.7 GHz* #4702 | 4.0 GHz* #4700
1 processor base activation (no charge) | n/a | n/a
1 processor core CUoD (permanent) activation | 4714 | 4713
64 processor core CUoD (permanent) activation | 4718 | 4717
90-Days On/Off Temporary Enablement (processor) | EP9T | EP9T
On/Off (temporary) enablement | 7971 | 7971
1 On/Off processor day billing (without IBM i) | 4711 | 4709
100 On/Off processor days billing (without IBM i) | EP2Q | 4704
100 minutes On/Off utility billing (without IBM i) | 4707 | 4706
1 On/Off processor day billing (with IBM i) | 4724 | 4721
100 On/Off processor days billing (with IBM i) | EP2R | 4722
100 minutes On/Off utility billing (with IBM i) | 4722 | 4719
No Charge 36/48 Proc-Days On/Off (initial order) | EPJ1 (36) | EPJ0 (48)

Memory | #4702 | #4700
1 memory day - On/Off 1GB billing | 7377 | 7377
999 memory days - On/Off 1GB billing | 4710 | 4710
90-Day On/Off Temporary Enablement (memory) | EM9T | EM9T
Memory enablement feature (for On/Off) | 7973 | 7973
1GB activation (to server, not to memory DIMMs) | 8212 | 8212
256GB memory activation (to server, not to DIMMs) | 8213 | 8213
No Charge 576/768 GB-Days On/Off (initial order) | EMJ1 (576) | EMJ0 (768)

5250 Enterprise Enablement (5250 OLTP) | #4702 | #4700
Enablement (1 add'l processor's worth) | 4995 | 4995
Full Enterprise Enablement | 4996 | 4996

* CBU for DR processor features are #7562 (3.7 GHz) and #7560 (4.0 GHz). The rest of the features are the same.


Introducing Power System Pools for Power 780 and 795

New option to create pools of high end
Power Systems servers that allows sharing
of processor and memory resources in
support of planned maintenance events


Available on Power 795 and on 780 with
POWER7+ based processors


Simplified Elastic CoD enablement delivers utility-like compute capacity on a much larger scale

Easily deliver compute resources to where they are required in order to achieve the
highest levels of flexibility and business resiliency


IBM Power Systems

© 2012 IBM Corporation

68

Power Pools Offering Details

New no-charge Power high-end availability offering

Power Pool can be made of up to 10 Power 780/795s


Each system purchased with 2 or more processor books/nodes


Minimum of 50% of processors in pool must be active


Cannot mix AIX and IBM i Systems in same pool


PowerVM and Electronic Service Agent are prerequisites


All systems in a pool must have equivalent IBM hardware maintenance status


Software licensed by core or S/N and its SWMA must have at least a minimum
license on each machine on which the software will run

On/Off Processor and Memory Days are purchased for the pool, not the individual server, and
are intended to provide utility computing for short time workload balancing, workload spikes or
immediate capacity


Single O/O processor and memory enablement key needs to be entered only
once/year

Supports 8 Planned Maintenance Events per pool via Trial CoD

Single aggregated bill provided to client on quarterly basis for On/Off COD usage


4Q 2012 Active Memory Expansion

POWER7+ AME Hardware Accelerator


Enhanced Power Systems value for AIX


On-chip enhancement


Compared to POWER7, more efficient memory expansion (less processor overhead for the same compression/decompression, or even more equivalent memory for the same processor overhead)

[Figure: stacked true-memory/expanded-memory bars - with the POWER7+ accelerator the same true memory expands up to 125%, versus up to 100% on POWER7, i.e. up to 25% more memory expansion]

Note: the expansion percentage is very workload dependent
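The expansion arithmetic can be sketched as follows (a hedged example, not IBM tooling; the 100% and 125% ceilings are the slide's workload-dependent maxima):

```python
# Hedged sketch: effective memory an AIX LPAR sees under Active Memory
# Expansion for a given expansion percentage. Per the slide, POWER7 supports
# up to 100% expansion and POWER7+ up to 125% (highly workload dependent).

def effective_memory_gb(true_memory_gb, expansion_pct):
    return true_memory_gb * (1 + expansion_pct / 100)

print(effective_memory_gb(64, 100))  # 128.0 -- POWER7 ceiling
print(effective_memory_gb(64, 125))  # 144.0 -- POWER7+ ceiling
```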


Benefit of POWER7+ HW Accelerator

[Chart: Active Memory Expansion CPU time on POWER7 vs POWER7+ - the compress/decompress work done in software on POWER7 is done by the hardware accelerator on POWER7+, leaving only the work of managing pages on the CPU]

Less CPU for the same amount of memory expansion

Can then run more partitions or work per partition

If fewer cores needed, may result in lower software licensing

OR more memory expansion for the same amount of processor

Better able to relieve memory shortages and improve performance

May be able to do more work


Active Memory Expansion - CPU & Performance

POWER7+ uses an on-chip hardware accelerator to do some of the compression/decompression work. There is a knee-of-curve relationship for the CPU resource required for memory expansion:

Even with the POWER7+ hardware accelerator, there is some resource required.

The more memory expansion done, the more CPU resource required.

The knee varies depending on how compressible the memory contents are.

[Chart: % CPU utilization for expansion vs. amount of memory expansion, with the POWER7+ curve below the POWER7 curve]


Active Memory Expansion Product Structure

Permanent Enablement features - chargeable (same feature code and same price for POWER7 or POWER7+):

Power Blades & PureFlex: #4796 Enablement Feature

Power 710/730 & 7R1/7R2: #4795 Enablement Feature

Power 720: #4793 Enablement Feature

Power 740: #4794 Enablement Feature

Power 750: #4792 Enablement Feature

Power 755: N/A

Power 770 & Power 780: #4791 Enablement Feature

Power 795: #4790 Enablement Feature

ONE feature per server, no matter how many partitions choose to use it

One-time, 60-day trial - no charge. Request via Capacity on Demand Web page: www.ibm.com/systems/power/hardware/cod/

Additional Rules/Insights

Permanent enablement available either with a new server order or with an MES order

There is no mechanism to move enablement to a different server

Enablement does not mean the function has to be used. Enablement allows Act Mem Exp to be used on any or all of the AIX partitions selected by the client


Active Memory Expansion - Client Deployment Steps (same for POWER7+ as for POWER7 servers)

1. Planning Tool (AMEPAT)

A. Part of AIX 6.1 TL4

B. Calculates data compressibility & estimates CPU overhead due to Active Memory Expansion

C. Provides initial recommendations

2. 60-Day Trial (no charge)

A. One-time, temporary enablement

B. Config LPAR based on planning tool

C. Use AIX tools to monitor Act Mem Exp environment

D. Tune based on actual results

3. Deploy into Production

A. Permanently enable Active Memory Expansion

B. Deploy workload into production

C. Continue to monitor workload using AIX performance tools

[Charts: estimated results (CPU utilization and application performance vs. memory expansion) and actual results (performance over time)]


New EXP30 Ultra SSD I/O Drawer

1U drawer … Up to 30 SSD

30 x 387 GB drives = up to 11.6 TB

Great performance


Up to 480,000 IOPS (100% read)


Up to 410,000 IOPS (60/40% read/write)


Up to 325,000 IOPS (100% write)


Up to 4.5 GB/s bandwidth

Up to 48 drives & 43 TB downstream HDD

#EDR1

Nov GA 2012:

770/780 "D" models

AIX

IBM i SOD

20% more IOPS over April's Ultra Drawer intro

Ultra performance

Ultra density




Simple SSD Sales Call

The most power … the most space efficient … the best AIX/Linux/VIOS DAS SSD solution IBM has ever introduced: the Ultra SSD Drawer.

And easy for a sales person to summarize in an executive conversation … a SIX-PACK (sound good? (smile))

A 2.3 TB six-pack can provide your POWER7+ 770/780 up to 140,000 IOPS for just $48,000 list price, plus the Ultra Drawer it goes in ($36,000).

Prices shown are IBM USA suggested list prices as of Oct 2012 and are subject to change without notice; reseller prices may vary. Consists of six #ES02. An Ultra Drawer would add about $36,000. Maintenance after warranty not included.
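A quick sketch of the unit economics using the list prices quoted above (illustrative only; a hedged example, not an IBM pricing tool):

```python
# Hedged sketch of the "six-pack" unit economics from the USA list prices
# quoted above: six #ES02 387 GB drives ($48,000) plus the Ultra Drawer
# they go in ($36,000), delivering up to 140,000 IOPS.
six_pack_price = 48_000
drawer_price = 36_000
capacity_gb = 6 * 387          # 2,322 GB (~2.3 TB)
iops = 140_000

total = six_pack_price + drawer_price
print(total)                           # 84000
print(round(total / capacity_gb, 2))   # ~36.18 dollars per GB
print(round(total / iops, 2))          # 0.6 dollars per IOPS
```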


New EXP30 Ultra SSD I/O Drawer

For POWER7+ 770/780 "D" mdl

Requires Firmware 7.6 or later

1U drawer … Up to 30 SSD

30 x 387 GB drives = up to 11.6 TB

Great performance


Up to 480,000 IOPS (100% read)


Up to 4.5 GB/s bandwidth


Slightly faster PowerPC processor in
SAS controller & faster internal
instruction memory

Concurrent maintenance

Up to 48 drives & up to 43 TB
downstream HDD in #5887

#EDR1 .. Oct 2012

#5888 .. Apr 2012

For POWER7 710/720/730/740 "C" mdl

Requires Firmware 7.5 or later

1U drawer … Up to 30 SSD

30 x 387 GB drives = up to 11.6 TB

Great performance


Up to 400,000 IOPS (100% read)


Up to 4.5 GB/s bandwidth


Slightly slower PowerPC processor in
SAS controller & slower internal
instruction memory

Limited concurrent maintenance

No downstream HDD


Front, Rear and Inner Views…

[Photos: front view (grills removed) showing thirty 1.8" SSD bays in only 1U; rear view showing two SAS RAID controllers, SAS connectors/expanders, and dual power supplies]


Ultra Drawer Attachment: GX++ PCIe2 Adapter


The integrated SAS RAID controllers connect the Ultra Drawer to the Power server through a GX++ PCIe2 adapter via PCIe cable.

The GX++ PCIe2 adapter plugs into a GX++ slot.

NOTE: a 12X I/O loop can NOT be connected to that GX++ slot.

Rear of EXP30 Ultra Drawer (#EDR1) - "D" Model 770/780

PCIe cables: #EN05 (1.5m), #EN07 (3m), #EN08 (8m)

Each of the two integrated SAS controllers needs a PCIe cable connection to a GX++ PCIe adapter

Uses zero PCIe slots!

GX++ PCIe2 Adapter #1914 (2 ports)


#EDR1 EXP30 Ultra SSD I/O Drawer

Supported on


“D” model Power 770/780


No other models today (see SOD)

Supported by


AIX 6.1 or later


Linux


IBM i SOD (Not supported today, even via VIOS)



Minimum of 6 SSD per Ultra Drawer

All drives RAID formatted (no JBOD)

Active/Active recommended for top performance

A/A requires at least two arrays, ideally an even number of arrays, evenly balanced


Attaches to one or two GX++ slots


Two integrated RAID SAS controllers


3.1GB write cache, protected through redundancy




#EDR1


Industry leader - 1.8" eMLC

Performance + Endurance + Small size

Excellent performance

Similar to 387GB SFF eMLC

Big improvements over 177GB 1.8" SSD

Used only in EXP30 Ultra Drawer

RAID format (528 byte sectors - no JBOD)

One feature code: #ES02

For AIX/Linux/VIOS

IBM i not supported

CCIN = 58BB

Server model & OS pre-reqs: see #EDR1 or #5888 EXP30 Ultra Drawer

387GB 1.8" SAS SSD with eMLC (#ES02)

Throughput: IOPS 10k-35k; bandwidth 320 MB/s read, 180 MB/s write; latency 0.26 milliseconds

These are drive-specific values. Overall system performance is impacted by many other factors such as controllers, caches, buses, system memory, applications, etc.

The 387GB 1.8" SSD is physically like a fat, really-long credit card


World Class eMLC SSD Performance

SSD | Random Read IOPS | Random Write IOPS | Random Mixed (70% Read / 30% Write) IOPS | Read Throughput | Write Throughput | Single Read Latency
177GB 1.8" SSD in PCIe-based adapter (#1995/1996) | 19 k | 5 k | 12 k | 186 MB/s | 57 MB/s (21-166) | .25 ms
387GB 1.8" SSD in #EDR1 Ultra SSD Drawer | 35 k | 10 k | 21 k | 320 MB/s | 180 MB/s (150-400) | .26 ms
177GB 2.5" SSD in SAS SFF bays | 15 k | 4 k | 11 k | 170 MB/s | 64 MB/s (24-123) | .25 ms
387GB 2.5" SSD in SAS SFF bays | 39 k | 22 k | 24 k | 340 MB/s | 375 MB/s | .20 ms
For grins … 15k rpm HDD | 0.12-0.4 k | 0.12-0.4 k | 0.12-0.4 k | ~175 MB/s | ~175 MB/s | 8.3-2.5 ms

Note these are drive-specific measurements and projections which can vary from what you might experience. The values assume 528-byte sectors running RAID-0 with no protection. Hypothetically, if measured with unsupported 512-byte sectors, values would be higher. The values are highly workload dependent. Factors such as read/write mix, random/non-random data, drive cache hits/misses, data compressibility in the drive controller, large/small block, type of RAID or mirroring protection, etc. will change these values. These values were produced by a server with plenty of processor, memory and controller resources to push this much I/O into the SSD. Most client system applications don't push SSD nearly this hard.
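One way to make the drive-level gap concrete (a hedged sketch using only the figures in the table above; all of the table's caveats about controllers, caches and workload still apply):

```python
# Hedged sketch: how many 15k rpm HDDs (best case ~400 random-read IOPS each,
# per the "for grins" row above) it takes to match one 387 GB #ES02 SSD in the
# #EDR1 Ultra Drawer (~35,000 random-read IOPS). Drive-level arithmetic only.
import math

ssd_read_iops = 35_000
hdd_read_iops = 400
print(math.ceil(ssd_read_iops / hdd_read_iops))  # 88
```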


Ultra Drawer Cabling Examples to the 770/780

[Diagrams: four cabling examples, each showing a 770/780 enclosure's PCIe slots and #1914 GX++ adapter(s)]

One Ultra Drawer with both connections to a single #1914 GX++ adapter

One Ultra Drawer attached to two different #1914 GX++ adapters *

One Ultra Drawer attached to two different #1914 GX++ adapters in different processor enclosures *

Two Ultra Drawers, each attached to two different #1914 GX++ adapters *

* Attaching an Ultra Drawer to two different GX++ PCI adapters provides additional redundancy


Ultra Drawer Attachment: Downstream HDD

Up to 78 SAS bays in 5U; up to 54.8 TB in just 5U:

Up to 11.6 TB SSD (30 x 387GB drives) in the 1U #EDR1 Ultra Drawer

Up to 43.2 TB HDD (48 x 900GB drives) in up to two 2U EXP24S #5887 disk drawers run by the two integrated SAS controllers in the #EDR1 Ultra Drawer

Zero PCIe slots! Direct GX++ connection. With huge IOPS.
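The 5U capacity claim is simple arithmetic; a hedged sketch verifying it:

```python
# Hedged sketch verifying the 5U capacity claim above: one 1U #EDR1 Ultra
# Drawer (30 x 387 GB SSD) plus two 2U #5887 EXP24S drawers (48 x 900 GB HDD).
ssd_tb = 30 * 387 / 1000   # 11.61 TB
hdd_tb = 48 * 900 / 1000   # 43.2 TB
print(round(ssd_tb + hdd_tb, 1))  # 54.8
```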


Ultra Drawer Attachment Downstream HDD


Integrated SAS RAID controllers in Ultra Drawer run the 30 SSD bays in
the Ultra Drawer PLUS run up to 48 HDD SAS bays in up to two #5887
EXP24S Drawers


Each of the two integrated SAS controllers needs a SAS EX cable connection to a #5887 EXP24S drawer.

Note: only HDD, not SSD, are supported in the downstream #5887 EXP24S. The EXP24S must be in mode 1.

Rear of EXP30 Ultra Drawer (#EDR1) - "D" Model 770/780

SAS EX Cables: #5926 (1.5m), #3675 (3m), #3680 (6m)

Uses zero PCIe slots! (2 ports)

Two #5887 EXP24S drawers: mode 1 only, HDD only


#EDR1 EXP30 Ultra Drawer RAS


Redundant power supplies, fans

Redundant SAS controllers, expanders, fans

Redundant SAS paths from GX++ adapter to SAS drive


Concurrent maintenance for


Power supplies, fans


SAS controllers, expanders, fans


CHARM support of GX++ PCI adapter planned 1H2013


Protected/mirrored write cache


No cache batteries to maintain

Ultra Drawer SSD Protection


RAID 0, 10, 5, & 6 (or LVM through OS)


RAID ECC (error correction code)


Hot Spare for RAID 5/6/10


T10 DIF


Ultra SSD Drawer - Performance Detail

"it depends" - The values below are great for marketing and are similar to values posted by other vendors (though perhaps more conservative).

HOWEVER, DO NOT USE THEM FOR SIZING REAL CUSTOMER CONFIGS

Up to 480,000 IOPS (100% read)

Up to 410,000 IOPS (60/40% read/write)

Up to 325,000 IOPS (100% write)

Up to 4.5 GB/s bandwidth

These numbers use "trivial" data/workloads running RAID-0 with no protection. The workload is so atypical that the values were produced with write cache turned off. Typically write cache can be a big help.

Values are highly application workload and configuration dependent. For example, is the job single threaded, or sequentially dependent? Using RAID-0 with no mirroring (no protection)? Using RAID-5/6/10/mirroring can make a big difference in max IOPS with write-sensitive workloads, or have little difference with mostly-read workloads. The IOPS above are simplistic benchmark-type measurements designed to push the I/O component to its limits, not to test system performance. Plenty of CPU cores and memory are needed to drive the I/O hard enough to achieve these values.

#EDR1


EXP30 Ultra SSD I/O Drawer OS Support Levels (for #EDR1)

AIX

Version 7.1 with the 7100-02 Technology Level, or later

Version 7.1 with the 7100-01 Technology Level and Service Pack 6, or later (planned availability December 19, 2012)

Version 7.1 with the 7100-00 Technology Level and Service Pack 8, or later (planned availability December 19, 2012)

Version 6.1 with the 6100-08 Technology Level, or later

Version 6.1 with the 6100-07 Technology Level and Service Pack 6, or later (planned availability December 19, 2012)

Version 6.1 with the 6100-06 Technology Level and Service Pack 10, or later (planned availability December 19, 2012)

Linux

Red Hat Enterprise Linux 6.3 for POWER, or later

Red Hat Enterprise Linux 5.7 for POWER, or later

SUSE Linux Enterprise Server 11 Service Pack 2, or later, with current maintenance updates available from SUSE to enable all planned functionality

VIOS

VIOS V2.2.2.0

VIOS V2.2.1.5 (planned availability December 19, 2012)

No IBM i support - see SOD for native support


SSD Config Options - Oct 2012

SAN-based: DS8000, SVC, V7000, XIV

Power Systems (internal / DAS):

PCIe-based SSD: #2053/54/55 RAID & SSD SAS Adapter

SAS-bay-based SSD: in CEC with integrated SAS controller; #5805 (Gen1 PCIe, 380MB cache); #5913 (Gen2 PCIe, 1800MB cache); #ESA1/A2 (Gen2 PCIe, 0MB cache); #5888 Ultra Drawer (3100MB cache); #EDR1 Ultra Drawer (3100MB cache)

Many DAS SSD config options for Power clients. Options vary in performance, price, physical size, where tested/supported, and function.


Power DAS SSD Options: Oct 2012

Option | #2053/54/55 (PCIe-based) | In CEC w/ int SAS contrlr | #5805 & #5887*** | #5913 & #5887*** | #ESA1/A2 & #5887*** | #5888 Ultra Drawer | #EDR1 Ultra Drawer
Number PCIe slots used | 2 (4 mirror) | 0 | 2 | 2 | 2 | 0 | 0
Number GX slots used | 0 | 0 | 0 | 0 | 0 | 1-2 (710/730=2) | 1
Max SSD attach | 4 | 3-8 (mdl dependent) | 9 | 24 | 24 | 30 | 30
Max 177GB busy SSD reasonably supported | W1 4, W2 3-4 | W1 2-3, W2 1-2 | W1 4-6, W2 3-4 | W1 ~24, W2 ~24 | W1 ~24, W2 ~24 | N/A | N/A
Max 387GB busy SSD reasonably supported | N/A | W1 1-2, W2 1 | W1 2-3, W2 1-2 | W1 ~20, W2 ~14 | W1 ~18, W2 ~14 | W1 ~22, W2 ~14 | W1 ~27, W2 ~14
Max downstream HDD + SSD | 0 | n/a | 0 | 48 | n/a | n/a | 48
Write cache (MB) | 0 | 175 | 380 | 1800 | 0 | 3100 | 3100
GB / SSD | 177 | 177 or 387 | 177 or 387 | 177 or 387 | 177 or 387 | 387 | 387
Servers: newest 710-740 ("C" models) | Y | Y | Y (710/730 limit**) | Y (710/730 limit**) | Y (720/740 limit*) | 710-740 "C" | 770+780 "D"
Servers: more POWER7 710-795 | Y, except 795 | Y, except 795 | Y (not 710/730) | Y (not 710/730) | N | N | N (SOD)
Servers: POWER6 | Y, except 595 | N | Y (177GB SSD) | Y (177GB SSD) | N | N | N
AIX / IBM i / Linux support | Y | Y | Y | Y | Y | AIX / Linux | AIX / Linux
Mix HDD & SSD | N | Y | N | Y | N | N | N (SOD)
Rack space needed | depends | N/A | 2U+ | 2U+ | 2U+ | 1U | 1U
Easy Tier | N | N | N | N | N | N | N (SOD)
Approximate USA list price with zero SSD for Mdl 740 | $3k + 2 PCIe slots | 0 | $4