Dependability benchmarking for transactional and web systems

Henrique Madeira
University of Coimbra, DEI - CISUC
Coimbra, Portugal

Workshop on Dependability Benchmarking, November 8, 2005 - Chicago, IL USA

2. Ingredients of a recipe to “bake” a dependability benchmark

- Measures
- Workload
- Faultload
- Procedure and rules (how to cook the thing)

Dependability benchmark specification:
- Document based only, or
- Document + programs, tools, ...



3. Benchmark properties

- Representativeness
- Portability
- Repeatability
- Scalability
- Non-intrusiveness
- Easy to use
- Easy to understand




4. Benchmark properties (cont.)

A benchmark is always an abstraction of the real world!
It's an imperfect and incomplete view of the world.

What matters in practice:
- Usefulness: improve things
- Agreement



5. The very nature of a benchmark

- Compare components, systems, architectures, configurations, etc.
- Highly specific: applicable/valid for a very well defined domain.
- Contributes to improving computer systems because you can compare alternative solutions.
- A real benchmark represents an agreement.



6. Three examples of dependability benchmarks for transactional systems

1. DBench-OLTP [DSN 2003 + VLDB 2003]
   - Dependability benchmark for OLTP systems (database centric)
   - Provided as a document structured in clauses (like TPC benchmarks)

2. Web-DB [SAFECOMP 2004]
   - Dependability benchmark for web servers
   - Provided as a set of ready-to-run programs and document-based rules

3. Security benchmark (first step) [DSN 2005]
   - Security benchmark for database management systems
   - Set of tests for database security mechanisms



7. The DBench-OLTP Dependability Benchmark

[Architecture diagram: the System Under Benchmarking (SUB) contains the benchmark target, the DBMS running on its operating system; the Benchmark Management System (BMS) hosts the BM, RTE, and FLE components, applies the workload and faultload, and collects control data and results.]

Workload and setup adopted from the TPC-C performance benchmark.



8. Benchmarking procedure

Phase 1: baseline performance measures (TPC-C measures).
Phase 2: performance measures in the presence of the faultload, plus dependability measures. Phase 2 is divided into testing slots (Slot 1, Slot 2, Slot 3, ..., Slot N).

[Timeline of one testing slot, from start to end: steady state condition and steady state time, fault activation (injection time), detection time, keep time, recovery start / recovery time / recovery end, and data integrity testing, all inside the measurement interval.]
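As a rough illustration of how one Phase 2 testing slot could be driven, here is a minimal Python sketch. All helper functions (run_workload, inject_fault, service_is_up, recover_system, run_integrity_checks) are placeholder stubs assumed for the example; they are not part of the DBench-OLTP specification.

```python
import random
import time

# Placeholder stubs standing in for the real benchmark environment.
def run_workload(duration):          # drive the TPC-C-like workload
    time.sleep(duration)

def inject_fault(fault):             # activate one fault from the faultload
    print(f"injecting fault: {fault}")

def service_is_up():                 # probe whether the server still answers
    return random.random() > 0.5

def recover_system():                # e.g. restart the DBMS and recover the database
    time.sleep(1)

def run_integrity_checks():          # count data integrity errors after recovery
    return 0

def run_testing_slot(fault, steady_state_time=5, keep_time=5, slot_length=30):
    """Run one Phase 2 testing slot and record its timing measures (seconds)."""
    slot_start = time.time()
    run_workload(steady_state_time)                   # reach the steady state condition
    injection_time = time.time() - slot_start
    inject_fault(fault)                               # fault activation
    while service_is_up():                            # wait for error detection
        time.sleep(1)
    detection_time = time.time() - slot_start
    time.sleep(keep_time)                             # keep the fault active
    recovery_start = time.time()
    recover_system()
    recovery_time = time.time() - recovery_start
    integrity_errors = run_integrity_checks()         # data integrity testing
    remaining = slot_length - (time.time() - slot_start)
    if remaining > 0:
        run_workload(remaining)                       # finish the measurement interval
    return {"injection_time": injection_time,
            "detection_time": detection_time,
            "recovery_time": recovery_time,
            "integrity_errors": integrity_errors}
```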



9. Measures

Baseline performance measures:
- tpmC: number of transactions executed per minute
- $/tpmC: price per transaction

Performance measures in the presence of the faultload:
- Tf: number of transactions executed per minute (with faults)
- $/Tf: price per transaction (with faults)

Dependability measures:
- AvtS: availability from the server point of view
- AvtC: availability from the clients' point of view
- Ne: number of data integrity errors
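As a toy illustration of how these measures relate to the raw data collected during Phase 2, the sketch below aggregates hypothetical per-slot records. The field names and the system_price input are assumptions made for the example, not the DBench-OLTP definitions.

```python
def summarize_run(slot_results, baseline_tpm, run_minutes, system_price):
    """Aggregate per-slot records into DBench-OLTP style measures."""
    run_seconds = run_minutes * 60
    total_tx = sum(r["committed_transactions"] for r in slot_results)
    tf = total_tx / run_minutes                               # transactions/minute with faults
    avt_s = 100.0 * sum(r["server_up_seconds"] for r in slot_results) / run_seconds
    avt_c = 100.0 * sum(r["client_served_seconds"] for r in slot_results) / run_seconds
    ne = sum(r["integrity_errors"] for r in slot_results)
    return {"tpmC": baseline_tpm,                             # from Phase 1 (no faults)
            "$/tpmC": system_price / baseline_tpm,
            "Tf": tf,
            "$/Tf": system_price / tf,
            "AvtS (%)": avt_s,
            "AvtC (%)": avt_c,
            "Ne": ne}
```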



10. Faultload

- Operator faults: emulate database administrator mistakes
- Software faults: emulate software bugs in the operating system
- High-level hardware failures: emulate hardware component failures
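As an illustration, operator faults can be emulated simply by issuing the commands a careless administrator might run. The sketch below is a hypothetical injector; the two commands (an immediate PostgreSQL shutdown and killing the server processes) are examples only, and the actual DBench-OLTP faultload defines its own set of operator faults for each DBMS.

```python
import subprocess

# Hypothetical operator-fault injector: each fault is emulated by running the
# command a (careless) DBA could have issued. The commands below are examples
# for a PostgreSQL installation and would differ for other DBMS.
OPERATOR_FAULTS = {
    "abrupt_shutdown": ["pg_ctl", "stop", "-m", "immediate"],
    "kill_server_process": ["pkill", "-9", "postgres"],
}

def inject_operator_fault(name):
    """Emulate one database administrator mistake."""
    return subprocess.run(OPERATOR_FAULTS[name], capture_output=True, text=True)
```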



11. Examples of systems benchmarked

Hardware for systems A to I: Intel Pentium III 800 MHz, 256 MB memory, four 20 GB / 7200 rpm hard disks, Fast Ethernet.
Hardware for systems J and K: Intel Pentium IV 2 GHz, 512 MB memory, four 20 GB / 7200 rpm hard disks, Fast Ethernet.

System | Operating System | DBMS              | DBMS Config.
A      | Win2k Prof. SP 3 | Oracle 8i (8.1.7) | Conf. 1
B      | Win2k Prof. SP 3 | Oracle 9i (9.0.2) | Conf. 1
C      | WinXP Prof. SP 1 | Oracle 8i (8.1.7) | Conf. 1
D      | WinXP Prof. SP 1 | Oracle 9i (9.0.2) | Conf. 1
E      | Win2k Prof. SP 3 | Oracle 8i (8.1.7) | Conf. 2
F      | Win2k Prof. SP 3 | Oracle 9i (9.0.2) | Conf. 2
G      | SuSE Linux 7.3   | Oracle 8i (8.1.7) | Conf. 1
H      | SuSE Linux 7.3   | Oracle 9i (9.0.2) | Conf. 1
I      | RedHat Linux 7.3 | PostgreSQL 7.3    | -
J      | Win2k Prof. SP 3 | Oracle 8i (8.1.7) | Conf. 1
K      | Win2k Prof. SP 3 | Oracle 9i (9.0.2) | Conf. 1



12. DBench-OLTP benchmarking results

[Charts for systems A to K: Baseline Performance (tpmC and $/tpmC), Performance With Faults (Tf and $/Tf), and Availability (AvtS from the server side and AvtC from the clients' side, in %). The results are compared along three dimensions: performance, availability, and price.]



15. Using DBench-OLTP to obtain more specific results

[Charts: availability variation (AvtS and AvtC, in %) over the injection slots for Win2k + Oracle 8i vs Win2k + Oracle 9i.]

Availability variation during the benchmark run: this corresponds to about 32 hours of operation, during which the system was subjected to 97 faults. Each fault is injected in a 20-minute injection slot, and the system is rebooted between slots.



16. DBench-OLTP benchmarking effort

Task                                 | # of days
TPC-C benchmark implementation       | 10 (with reuse of code)
DBench-OLTP benchmark implementation | 10 (first implementation)
Benchmarking process execution       | 3 (average per system)



17. The WEB-DB Dependability Benchmark

[Architecture diagram: the System Under Benchmarking (SUB) contains the web server (benchmark target) on its OS; the Benchmark Management System (BMS) runs the SPECWeb client, the fault injector, the benchmark coordinator, and the availability tester, applying the workload and faultload and collecting control data and results.]

Workload and setup adopted from the SPECWeb99 performance benchmark.



18. WEB-DB measures

Performance degradation measures:
- SPECf: main SPEC measure in the presence of the faultload
- THRf: throughput in the presence of the faultload (ops/s)
- RTMf: response time in the presence of the faultload (ms)

Dependability related measures:
- Availability: percentage of time the server provides the expected service
- Autonomy: percentage of times the server recovered without human intervention (an estimator of the self-healing abilities of the server)
- Accuracy: percentage of correct results yielded by the server
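A small sketch of how the dependability-related measures could be computed from per-fault records collected by the availability tester. The record fields (up_seconds, self_recovered, requests, correct_replies) are illustrative assumptions, not the WEB-DB specification.

```python
def web_db_dependability_measures(fault_records, run_seconds):
    """Aggregate per-fault records into Availability, Autonomy, and Accuracy (%)."""
    up_time = sum(r["up_seconds"] for r in fault_records)
    availability = 100.0 * up_time / run_seconds
    autonomy = 100.0 * sum(1 for r in fault_records if r["self_recovered"]) / len(fault_records)
    total_requests = sum(r["requests"] for r in fault_records)
    correct = sum(r["correct_replies"] for r in fault_records)
    accuracy = 100.0 * correct / total_requests
    return {"Availability": availability, "Autonomy": autonomy, "Accuracy": accuracy}
```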



19. WEB-DB faultloads

Network & hardware faults:
- Connection loss (server sockets are closed)
- Network interface failures (disable + enable the interface)

Operator faults:
- Unscheduled system reboot
- Abrupt server termination

Software faults:
- Emulation of common programming errors
- Injected in the operating system (not in the web server)
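As one concrete example, the "network interface failure" fault (disable + enable the interface) could be injected as sketched below. The Linux ip link commands and the interface name are placeholders; the original WEB-DB experiments targeted Windows servers and would use the equivalent OS commands.

```python
import subprocess
import time

def inject_interface_failure(interface="eth0", downtime_s=30):
    """Take the interface down, hold the fault active, then bring it back up."""
    subprocess.run(["ip", "link", "set", interface, "down"], check=True)
    try:
        time.sleep(downtime_s)          # keep the fault active
    finally:
        subprocess.run(["ip", "link", "set", interface, "up"], check=True)
```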



20. WEB-DB procedure

Benchmark procedure: 2 steps.

Step 1:
- Determine baseline performance (SUB + benchmark tools running the workload without faults)
- Tune the workload for a SPEC conformance of 100%

Step 2:
- 3 runs; each run comprises all faults specified in the faultload
- Benchmark results: the average of the 3 runs

[Timeline diagram: a baseline run (BL), then runs 1, 2, and 3, then the results; each run includes the SPECweb ramp-up and ramp-down times, workload and idle periods on the web server (benchmark target), and faults injected into the OS, network, and web server.]
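A minimal sketch of the two-step procedure, assuming hypothetical helpers run_baseline() and run_with_faultload() that each return a dictionary of measures; the numbers in the stubs are dummy values for illustration only.

```python
import statistics

def run_baseline():
    """Placeholder for Step 1: the workload without faults, tuned to 100% SPEC conformance."""
    return {"SPEC": 30.0}

def run_with_faultload():
    """Placeholder for one Step 2 run covering every fault in the faultload (dummy values)."""
    return {"SPECf": 15.0, "THRf": 380.0, "RTMf": 70.0,
            "Availability": 95.0, "Autonomy": 93.0, "Accuracy": 97.0}

def run_web_db_benchmark(n_runs=3):
    baseline = run_baseline()                              # Step 1
    runs = [run_with_faultload() for _ in range(n_runs)]   # Step 2: three runs
    # Benchmark result: the average of the runs, measure by measure.
    result = {key: statistics.mean(r[key] for r in runs) for key in runs[0]}
    return baseline, result
```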



21. Examples of systems benchmarked

Benchmark and compare the dependability of two common web servers:
- Apache web server
- Abyss web server

When running on:
- Windows 2000
- Windows XP
- Windows 2003




22. Dependability results

[Charts comparing Apache and Abyss on Windows 2000, XP, and 2003: Availability, Accuracy, and Autonomy, each plotted on an 89-99% scale.]



23. Performance in the presence of faults

[Charts of SPECf, THRf, and RTMf for Apache and Abyss on Windows 2000, XP, and 2003.]

Baseline performance:
- Apache: 31, 26, 30
- Abyss: 28, 25, 24

Performance degradation (%):
- Apache: 55.4, 30.7, 62.3
- Abyss: 63.2, 45.2, 46.3



24. Security benchmark for database management systems

[Diagram: client applications and web browsers reach the DBMS over the network, either directly or through web and application servers; the DBMS is the key layer for data security.]



25. Security Attacks vs System Vulnerabilities

Security attacks:
- Intentional attempts to access or destroy data

System vulnerabilities:
- Hidden flaws in the system implementation
- Features of the security mechanisms available
- Configuration of the security mechanisms



26. Approach for the evaluation of security in DBMS

Characterization of DBMS security mechanisms. Our approach:

1) Identification of data criticality levels
2) Definition of database security classes
3) Identification of security requirements for each class
4) Definition of security tests for two scenarios:
   - Compare different DBMS
   - Help the DBA assess security in real installations



27. Database Security Classes

DB Security Class | Data Criticality Level | Required Security Mechanisms
Class 0           | None                   | None
Class 1           | Level 1                | User authentication (internal or external)
Class 2           | Level 2                | User authentication; user privileges (system and object privileges)
Class 3           | Level 3                | User authentication; user privileges; encryption in the data communication
Class 4           | Level 4                | User authentication; user privileges; encryption in the data communication; encryption in the data storage
Class 5           | Level 5                | User authentication; user privileges; encryption in the data communication; encryption in the data storage; auditing



28. Requirements for DBMS Security Mechanisms

Internal user authentication (username/password):

Req. 1.1 (weight 10%): The system must provide internal user authentication by using usernames and passwords.
Req. 1.2 (weight 6%): The system must guarantee that, besides the DBA users, no other users can read from or write to the table/file where the usernames and passwords are stored.
Req. 1.3 (weight 6%): The password must be encrypted in the communication between the client and the server during authentication.
Req. 1.4 (weight 4%): The passwords must be encrypted in the table/file where they are stored.




29. Measures and Scenarios

Measures provided:
- Security Class (SCL)
- Security Requirements Fulfillment (SRF)

Potential scenarios:
- Compare different DBMS products
- Help the DBA assess security in real installations
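A sketch of how the two measures could be derived from per-requirement test results. The requirement references and weights follow the style of the previous slide but are only a partial, illustrative set, and the aggregation formulas are plausible readings rather than the benchmark's exact definitions.

```python
# Requirement reference -> (weight in %, security class that introduces it).
# Only the internal-authentication requirements from the previous slide are
# listed; the remaining requirements for classes 2 to 5 would be added the
# same way.
REQUIREMENTS = {
    "1.1": (10, 1),
    "1.2": (6, 1),
    "1.3": (6, 1),
    "1.4": (4, 1),
}

def srf(test_results):
    """Security Requirements Fulfillment: weighted percentage of fulfilled requirements."""
    total_weight = sum(weight for weight, _ in REQUIREMENTS.values())
    met_weight = sum(weight for ref, (weight, _) in REQUIREMENTS.items()
                     if test_results.get(ref, False))
    return 100.0 * met_weight / total_weight

def scl(test_results):
    """Security Class: highest class whose required mechanisms are all fulfilled."""
    highest = 0
    for cls in range(1, 6):
        required = [ref for ref, (_, c) in REQUIREMENTS.items() if c <= cls]
        if required and all(test_results.get(ref, False) for ref in required):
            highest = cls
        else:
            break
    return highest
```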



30. Comparing DBMS Security

Set of tests to verify whether the DBMS fulfills the security requirements.

Development of a database scenario:
- Database model (tables)
- Data criticality levels for each table
- Database accounts corresponding to the several DB user profiles
- System and object privileges for each account

[Diagram: the tests are run over the network against the DBMS under evaluation (Oracle? DB2? PostgreSQL?); tool support is a system under development.]
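For illustration, one such test, in the spirit of requirement 1.2 above (only DBA users may read the table/file holding usernames and passwords), might look as follows. The connection factory and the password-table name are supplied by whoever runs the test; in PostgreSQL, for instance, the password hashes live in pg_shadow.

```python
def test_password_table_protected(connect_as_regular_user, password_table):
    """Return True (requirement fulfilled) if a non-DBA account cannot read the table."""
    conn = connect_as_regular_user()        # any DB-API connection for a regular user
    try:
        cur = conn.cursor()
        cur.execute(f"SELECT * FROM {password_table}")
        cur.fetchall()
        return False                        # the regular user could read the passwords
    except Exception:
        return True                         # access was denied: requirement fulfilled
    finally:
        conn.close()
```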



31. Database scenario

[Database model (ER diagram): CUSTOMER, ORDER, ORDER_LINE, ITEM, ADDRESS, COUNTRY, AUTHOR, USER, and CREDIT_CARD tables with their primary and foreign keys; each table is assigned a data criticality level from Level 1 to Level 5.]



32. Example: Comparative Analysis of Two DBMS

Oracle 9i vs PostgreSQL 7.3

Security Mechanism                   | # Req. | Oracle 9i             | PostgreSQL
Internal user authentication        | ALL    | OK                    | OK
External user authentication        | ALL    | OK                    | OK
User privileges                     | 3.1    | OK                    | Not OK
                                    | 3.2    | OK                    | OK
                                    | 3.3    | OK                    | OK
                                    | 3.4    | OK                    | OK
                                    | 3.5    | OK                    | Not OK
Encryption in the data communication| 4.1    | OK                    | Not OK
                                    | 4.2    | Depends on the method | Not OK
Encryption in the data storage      | 5.1    | OK                    | Not OK
                                    | 5.2    | Not OK                | Not OK
                                    | 5.3    | Not OK                | Not OK
Auditing                            | 6.1    | OK                    | Not OK



33. Results Summary

               | Oracle 9i (encryption RC4, AES, and DES) | Oracle 9i (encryption 3DES) | PostgreSQL 7.3
Security Class | Class 5                                  | Class 5                     | Class 1
SRF metric     | 96%                                      | 92%                         | 66%

- Oracle 9i does not fulfill all encryption requirements
- 400% < performance degradation < 2700%
- PostgreSQL 7.3:
  - Some manual configuration is required to achieve Class 1
  - High SRF for a Class 1 DBMS
  - Fulfills some Class 2 requirements



34. Conclusions

Comparing components, systems, architectures, and configurations is essential to improve computer systems:
- Benchmarks needed!

Comparisons could be misleading:
- Benchmarks must be carefully validated!

Two ways of having real benchmarks:
- Industry agreement
- User community (tacit agreement)