
1

Implementation/Infrastructure
Support for Collaborative
Applications

Prasun Dewan

2

Infrastructure vs. Implementation
Techniques


Implementation techniques are interesting when they are general


Applies to a class of applications


The coding of such an implementation technique is an infrastructure.


Sometimes implementation techniques apply to a very narrow set of applications


Operation transformation for text editors.


These may not qualify as infrastructures


We will study implementation techniques that apply to both small and large application sets.

3

Collaborative Application

Coupling

Coupling

4

Infrastructure-Supported Sharing

Client

Sharing
Infrastructure

Coupling

Coupling

5

Systems: Infrastructures


NLS (Engelbart ’68)


Colab (Stefik ’85)


VConf (Lantz ‘86)


Rapport (Ahuja ’89)


XTV (Abdel-Wahab, Jeffay & Feit '91)


Rendezvous (Patterson ‘90)


Suite (Dewan & Choudhary ‘92)


TeamWorkstation (Ishii ’92)


Weasel (Graham ’95)


Habanero (Chabert et al '98)



JCE (Abdel-Wahab '99)


Disciple (Marsic ‘01)



Post Xerox


Xerox


Stanford


Bell Labs


UNC/ODU


Bellcore



Purdue


Japan


Queens


U. Illinois


ODU


Rutgers

6

Systems: Products


VNC (Li, Stafford-Fraser, Hopper '01)


NetMeeting


Groove


Advanced Reality


LiveMeeting (pay-by-minute service model)


Webex (service model)




ATT Research


Microsoft





Microsoft

7

Issues/Dimensions


Architecture


Session management


Access control


Concurrency control


Firewall traversal


Interoperability


Composability




Colab. Sys. 1
Implementation 1

Colab. Sys. 2
Implementation 3

Architecture Model

Session Management

Concurrency Control

8

Infrastructure-Supported Sharing

Client

Sharing
Infrastructure

Coupling

Coupling

9

Architecture?

Infrastructure/
client (logical)
components

Component
(physical)
distribution

10

Shared Window Logical Architecture

Application


Window


Window

Coupling

Near-WYSIWIS

11

User 1

User 2


X Server

X Client


X Server


Pseudo Server


Pseudo Server

Centralized Physical Architecture

XTV (‘88)
VConf (‘87)
Rapport (‘88)
NetMeeting

Input/Output

12

User 1

User 2


X Server

X Client


X Server

X Client


Pseudo Server


Pseudo Server

Replicated Physical Architecture

Rapport
VConf

Input

13

Relaxing WYSIWIS?

Application


Window


Window

Coupling

Near-WYSIWIS

14

Model-View Logical Architecture

Model

View

View

Window

Window

Sync
Coupling

15

Centralized Physical Model

Model

View

View

Window

Window

Rendezvous
(‘90, ’95)

16

Replicated Physical Model

Model

View

View

Window

Window

Sync ’96,
Groove

Model

Infrastructure

17

Comparing the Architectures

Model

View

View

Window

Window


Window

App


Window

App


Pseudo
Server


Pseudo
Server

Input


Window

App


Window


Pseudo
Server


Pseudo
Server

I/O

Model

View

View

Window

Window

Model

Architecture
Design
Space?

18

Architectural Design Space


Model/View are Application-Specific


Text Editor Model


Character String


Insertion Point


Font


Color


Need to capture these differences in
architecture


19

Single-User Layered Interaction

PC

Increasing
Abstraction

Layer N

Layer N-1


Layer 0

Layer 1

Communication
Layers

Layer N

Layer N-1


Layer 0

Layer 1

I/O Layers

Physical
Devices

20

Single-User Interaction

Layer N

Layer N-1


Layer 0

Layer 1

PC

Increasing
Abstraction

PC

21

Example I/O Layers

Framebuffer

Window


Model

Widget

Increasing
Abstraction

PC

22

Layered Interaction with an Object

{"John Smith", 2234.57}

[screen renderings of "John Smith" at successive interactor layers]

Abstraction

Interactor/
Abstraction

Interactor/
Abstraction

Interactor

Interactor = Abstraction Representation + Syntactic Sugar

23

Single-User Interaction

Layer N

Layer N-1


Layer 0

Layer 1

Increasing
Abstraction

PC

24

Identifying the Shared Layer

Increasing
Abstraction

Layer N

Layer S+1

Shared
Layer

Higher layers will
also be shared

Lower layers may
diverge


Layer 0

Layer S

Program
Component

User-Interface Component

PC

25

Replicating UI Component

Layer N

Layer S+1

Layer N

Layer S+1

Layer N

Layer S+1

PC

PC

PC

26

Centralized Architecture


Layer 0

Layer S

Layer N

Layer S+1

Layer N

Layer S+1

Layer N

Layer S+1

PC

PC

PC

27

Replicated (P2P) Architecture


Layer 0

Layer S


Layer 0

Layer S


Layer 0

Layer S

Layer N

Layer S+1

Layer N

Layer S+1

Layer N

Layer S+1

PC

PC

PC

28

Implementing Centralized Architecture

PC

Layer N

Layer S+1

Layer N

Layer S+1

Layer N

Layer S+1


Layer 0

Layer S

Master Input Relayer
Output Broadcaster

Slave I/O Relayer

Slave I/O Relayer
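The centralized data flow on this slide can be sketched in a few lines. This is a minimal illustration with hypothetical class names (Master, Slave), not any system's actual API: slaves relay raw input to the single master, which processes it and broadcasts output back to every slave.

```python
# Minimal sketch of a centralized architecture (hypothetical names):
# slaves relay input to the single master; the master processes it and
# broadcasts the resulting output to every slave's UI.

class Master:
    def __init__(self):
        self.state = ""          # shared layers 0..S live only here
        self.slaves = []

    def attach(self, slave):
        self.slaves.append(slave)
        slave.master = self

    def receive_input(self, text):
        self.state += text       # process input at the central site
        for s in self.slaves:    # output broadcaster
            s.display(self.state)

class Slave:
    def __init__(self):
        self.master = None
        self.screen = ""

    def user_types(self, text):  # slave I/O relayer: forward input
        self.master.receive_input(text)

    def display(self, output):   # slave I/O relayer: render output
        self.screen = output

master, s1, s2 = Master(), Slave(), Slave()
master.attach(s1); master.attach(s2)
s1.user_types("a")
s2.user_types("b")
# Both users see the same centrally computed state.
```

Because all state lives at one site, consistency is automatic, at the cost of an extra network hop for every remote user's feedback.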

29

Replicated Architecture

PC

Layer N

Layer S+1

Layer N

Layer S+1

Layer N

Layer S+1


Layer 0

Layer S


Layer 0

Layer S


Layer 0

Layer S

Input Broadcaster

Input Broadcaster

Input Broadcaster
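The replicated counterpart can be sketched the same way (again with hypothetical names): every site runs the full layer stack, and an input broadcaster sends each local input to all replicas, which process it independently.

```python
# Minimal sketch of a replicated architecture (hypothetical names):
# each replica runs layers 0..S locally; an input broadcaster sends
# every local input to all replicas, including the originating one.

class Replica:
    def __init__(self, peers):
        self.state = ""
        self.peers = peers       # list shared by all replicas, incl. self

    def user_types(self, text):
        for p in self.peers:     # input broadcaster
            p.process(text)

    def process(self, text):     # each replica computes output locally
        self.state += text

peers = []
r1, r2, r3 = Replica(peers), Replica(peers), Replica(peers)
peers.extend([r1, r2, r3])
r1.user_types("a")
r2.user_types("b")
# All replicas computed the same state from the broadcast inputs --
# provided the inputs arrive everywhere in the same order, which is
# exactly the consistency problem discussed in later slides.
```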

30

Hybrid Architecture

PC

Layer N

Layer S+1

Layer N

Layer S+1

Layer N

Layer S+1


Layer 0

Layer S


Layer 0

Layer S

Slave I/O Relayer

I/O Distributor

Input Broadcaster

31

Order of Execution

[Timeline diagram: U_m, U_j, P_m, U_m', P_m', with the steps below numbered 1-4]

Forward Input

Process Input

Send Output

Process Output

Other orderings also make sense

32

Classifying Previous Work


XTV


NetMeeting App Sharing


NetMeeting Whiteboard


Shared VNC


Habanero


JCE


Suite


Groove


LiveMeeting


Webex



Rep vs.
Central

Shared
Layer

33

Classifying Previous Work


Shared layer


X Windows (XTV)


Microsoft Windows (NetMeeting App Sharing)


VNC Framebuffer (Shared VNC)


AWT Widget (Habanero, JCE)


Model (Suite, Groove, LiveMeeting)


Replicated vs. centralized


Centralized (XTV, Shared VNC, NetMeeting App. Sharing,
Suite, LiveMeeting)


Replicated (VConf, Habanero, JCE, Groove, NetMeeting
Whiteboard)



Rep vs.
Central

Shared
Layer

34

Service vs. Server vs. Local
Communication


Local: User site sends data


VNC, XTV, VConf, NetMeeting Regular


Server: the organization's site, connected by LAN to the user site, sends data


NetMeeting Enterprise, Sync


Service: external sites, connected by WAN to the user site, send data


LiveMeeting, Webex

35

Push vs. Pull of Data


Consumer pulls new data by sending request for it
in response to


notification


MVC


receipt of previous data


VNC


Producer pushes data for consumers


As soon as data are produced


NetMeeting, Real-time Sync


When user requests


Asynchronous Sync
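The two delivery policies above can be sketched with a toy producer/consumer pair (hypothetical minimal classes, not any system's API): under push, data travels with the event; under pull, only a notification travels and the consumer fetches the data itself.

```python
# Sketch of push vs. pull delivery (hypothetical names).

class Producer:
    def __init__(self):
        self.data = None
        self.consumers = []

    def produce(self, value, push):
        self.data = value
        for c in self.consumers:
            if push:
                c.receive(value)   # push: data travels with the event
            else:
                c.notify(self)     # pull: only a notification travels

class Consumer:
    def __init__(self):
        self.seen = None

    def receive(self, value):
        self.seen = value

    def notify(self, producer):
        # MVC-style pull: fetch on notification; a consumer could also
        # defer the fetch until the user asks, as in asynchronous Sync.
        self.seen = producer.data

p, c = Producer(), Consumer()
p.consumers.append(c)
p.produce("v1", push=True)    # pushed directly
p.produce("v2", push=False)   # pulled in response to the notification
```

The pull variant lets the consumer control its own consumption rate, which is the basis of VNC's flow control discussed later.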

36

Dimensions


Shared layer level.


Replicated vs. Centralized.


Local vs. Server vs. Service Broadcast


Push vs. Pull Data




37

Dimensions

Centralized Mapping: Webex, NetMeeting & LiveMeeting App Sharing, Half-life, Halo 2

Replicated Mapping: Webex, LiveMeeting Whiteboard & Presentation Manager, Groove, Age of Empires

Hybrid Mapping: [32], [33]

Screen Sharing: VNC, Webex, NetMeeting & LiveMeeting Desktop Sharing

Window Sharing: Webex, NetMeeting & LiveMeeting App Sharing [2, 3, 34]

Model Sharing: Webex, LiveMeeting Whiteboard & Presentation Manager, Groove, Half-life, Halo 2

Multi Prog. Comp.: Groove

Comm. Server: T.120 [35], LiveMeeting, Webex, MSN Messenger

Push: NetMeeting, LiveMeeting

Pull: VNC, IBM SameTime

Compression Templates: [31]

Scheduling, Protocols: scheduling policies and network protocols of systems have not been published


38

Evaluating design space points


Coupling Flexibility


Automation


Ease of Learning


Reuse


Interoperability


Firewall traversal


Concurrency and
correctness


Security


Performance


Bandwidth usage


Computation load


Scaling


Join/leave time


Response time


Feedback to actor


Local


Remote


Feedthrough to observers


Local


Remote


Task completion time



39

Performance Params

Performance

[Example: users U1, U2, U3; processes P1, P3; response time 162 ms]

Number of
Users

Input and
Output Costs

Processing
Powers

Network
Latencies

Think Time

Architecture

40

Processing Power

[Timeline: U_m, U_j, P_m, U_m', P_m']

time

User enters
input

Forward Input

Process Input

Send Output

Process Output

Processing Power
Decreases

All times increase

Processing time increases

Master

41

Input Processing Cost

[Timeline: U_m, U_j, P_m, U_m', P_m']

Forward Input

Process Input

Send Output

Process Output

time

User enters
input

Input Processing
Cost Increases

Processing time increases

Master


42

Input Size

[Timeline: U_m, U_j, P_m, U_m', P_m']

Forward Input

Process Input

Send Output

Process Output

Input Size
Increases

Forwarding time increases

time

User enters
input


Master

43

Number of Master Computers

[Timeline: U_m, U_j, P_m, U_m', P_m']

Forward Input

Process Input

Send Output

Process Output

Forwarding time increases

Number of masters
increases

time

User enters
input

[Additional masters: U_x/P_x, U_y/P_y]

Master

44

Output Processing Cost

[Timeline: U_m, U_j, P_m, U_m', P_m']

Forward Input

Process Input

Send Output

Process Output

Output processing
cost increases

Output processing time increases

time

User enters
input

Master



45

Output Size

[Timeline: U_m, U_j, P_m, U_m', P_m']

Forward Input

Process Input

Send Output

Process Output

Output Size
Increases

Send Output Time Increases

time

User enters
input

Master



46

Cluster Size

[Timeline: U_m, U_j, P_m, U_m', P_m']

Forward Input

Process Input

Send Output

Process Output

Cluster size
increases

Send Output Time Increases

time

Master

User enters
input

[Additional users: U_y, U_x]

47

Master vs. Slave User

[Timeline: U_m, U_j, P_m, U_m', P_m']

time

Master

Slave

Master vs. Slave
User

[Slave-user timeline: U_m, P_m, U_j, P_m]

Forward Input

Process Input

Send Output

Process Output

Master

User enters
input

User enters
input

48

Master vs. Slave User

[Timeline: U_m, U_j, P_m, U_m', P_m']

Forward
Input

Process
Input

Send
Output

Process
Output

User enters
input

Send
Input

time

Master

Slave

Master vs. Slave
User

[Slave-user timeline: U_m, P_m, U_j, P_m]

49

Network Latencies

[Timeline: U_m, U_j, P_m, U_m', P_m']

Forward
Input

Process
Input

Send
Output

Process
Output

User enters
input

Send
Input

time

Master

Slave

Network Latencies
Increase

Send Times Increase

50

Think Times

[Timeline: U_m, U_j, P_m, U_m', P_m']

Process
Input

Send
Output

Process
Output

time

Master

Slave

Small Think Times

Master not ready to accept input

Send
Input

Think
Time

Process
Output

Next input arrives at
master

Master ready to
process next input

51

Think Times

[Timeline: U_m, U_j, P_m, U_m', P_m']

Process
Input

Send
Output

Process
Output

time

Master

Slave

Large Think Times

Send
Input

Think
Time

Process
Output

Think
Time

Master ready to accept input

Next input arrives at
master

Master ready to
process next input

52

Performance Params

Performance

[Example: users U1, U2, U3; processes P1, P3; response time 162 ms]

Number of
Users

Input and
Output Costs

Processing
Powers

Network
Latencies

Think Time

Architecture

53

Sharing Low-Level vs. High-Level Layer


Sharing a layer nearer the
data


Greater view independence


Bandwidth usage less


For large data, the visualization is sometimes more compact than the data.


Finer-grained access and concurrency control


Shared window systems support floor control.


Replication problems better
solved with more app
semantics


More on this later.




Sharing a layer nearer the
physical device


Have referential
transparency


"Green object" has no meaning if objects are colored differently at each site


Higher chance layer is
standard.


Sync vs. VNC


promotes reusability and
interoperability



Sharing flexibility limited with fixed layer
sharing


Need to support multiple layers.

54

Centralized vs. Replicated: Dist.
Comp. vs. CSCW


CSCW


Input immediately
delivered without
distributed commitment.


Floor control or
operation transformation
for correctness


Distributed
computing:


More reads (output)
favor replicated


More writes (input)
favor centralized


55

Bandwidth Usage in Replicated vs.
Centralized


Remote I/O bandwidth only an issue when
network bandwidth < 4MBps (Nieh et al
‘2000)


DSL link = 1 Mbps


Input in replication less than output


Input produced by humans


Output produced by faster computers

56

Feedback in Replicated vs. Centralized


Replicated: Computation time on local computer


Centralized


Local user


Computation time on local computer


Remote user


Computation time on hosting computer plus roundtrip time


In server/service model an extra LAN/WAN link

57

Influence of communication cost


Window sharing remote feedback


Noticeable in NetMeeting.


Intolerable in LiveMeeting’s service model.


Powerpoint presentation feedback time


not noticeable in Groove & Webex replicated model.


noticeable in NetMeeting for remote user.


Not typically noticeable in Sync with shared model


Depends on amount of communication with remote site


Which depends on shared layer

58

Case Study: Colab. Video Viewing

59

Case Study: Collaborative Video
Viewing
(Cadiz, Balachandran et al. 2000)


Two users collaboratively
executing media player
commands


Centralized NetMeeting
sharing added unacceptable
video latency


Replicated architecture created using T.120 later


Part of the problem: the centralized system shared video through the window layer


60

Influence of Computation Cost


Computation intensive apps


Replicated case: local computer’s computation
power matters.


Central case: central computer’s computation
power matters


Central architecture can give better feedback, especially with fast network [Chung and Dewan '99]


Asymmetric computation power => asymmetric
architecture (server/desktop, desktop/PDA)

61

Feedthrough


Time to show results at remote site.


Replicated:


One-way input communication time to remote site.


Computation time on local replica


Centralized:


One-way input communication time to central host


Computation time on central host


One-way output communication time to remote site.


Server/service model adds latency


Less significant than remote feedback:


Active user not affected.


But must synchronize with audio


“can you see it now?”
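The cost components on this slide can be written down directly. The sketch below uses made-up millisecond values for illustration only, not measurements from any cited study:

```python
# Illustrative feedthrough-time comparison (all numbers are invented).

def feedthrough_replicated(input_comm, local_compute):
    # one-way input communication to the remote site + replica compute
    return input_comm + local_compute

def feedthrough_centralized(input_comm, central_compute, output_comm):
    # input to central host + compute there + one-way output to remote site
    return input_comm + central_compute + output_comm

rep = feedthrough_replicated(input_comm=40, local_compute=10)
cen = feedthrough_centralized(input_comm=40, central_compute=10,
                              output_comm=40)
# The centralized path pays an extra one-way output hop (here 40 ms),
# and a server/service model would add further LAN/WAN links.
```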

62

Task completion time


Depends on


Local feedback


Assuming hosting user inputs


Remote feedback


Assuming non hosting user inputs


Not the case in presentations, where centralized favored


Feedthrough


If interdependencies in task


Not the case in brainstorming, where replicated favored


Sequence of user inputs


Chung and Dewan ’01


Used Mitre log of floor exchanges and assumed interdependent tasks


Task completion time usually smaller in replicated case


Asymmetric centralized architecture good when computing power
asymmetric (or task responsibility asymmetric?).



63

Scalability and Load


Centralized architecture with powerful server more suitable.


Need to separate application execution from


PlaceWare


Webex


Related to firewall traversal. More later.


Many collaborations do not require scaling


2-3 collaborators in joint editing


8-10 collaborators in CAD tools (NetMeeting Usage Data)


Most calls are not conference calls!


Adapt between replicated and centralized based on #
collaborators


PresenceAR goals

64

Display Consistency


Not an issue with floor control systems.


Other systems must ensure that concurrent input appears to all users to be processed in the same (logical) order.


Automatically supported in central architecture.


Not so in replicated architectures, as local input is processed without synchronizing with other replicas.

65

User 1

User 2

Insert e,2

Insert d,1

Insert e,2

Insert d,1

dabc        aebc

deabc       daebc

Insert d,1

Insert e,2


UI

Program

Input
Distributor

Synchronization Problems

abc

abc

Program

Input
Distributor


UI

66

User 1

User 2

Insert e,2

Insert d,1

Insert e,3

Insert d,1

dabc        aebc

daebc       daebc

Insert d,1

Insert e,2


UI

Program

Input
Distributor

Peer-to-peer Merger

abc

abc

Program

Input
Distributor


UI

Merger

Merger

Ellis and Gibbs ‘89,
Groove, …
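The divergence on the previous slide (one site ends with "deabc", the other with "daebc") is what the merger repairs. A minimal sketch of the classic insert-insert transformation in the style of Ellis and Gibbs '89 follows; it is an illustration of the idea, not their full algorithm (equal positions would additionally need a site-id tie-break):

```python
# Positions are 1-based as in the slides: "insert d,1" on "abc" -> "dabc".

def apply_insert(text, op):
    pos, ch = op
    return text[:pos - 1] + ch + text[pos - 1:]

def transform_insert(remote, local):
    # Shift the remote insert right if a concurrent local insert landed
    # at or before its position (equal positions need a tie-break,
    # omitted here).
    rpos, rch = remote
    lpos, _ = local
    return (rpos + 1, rch) if rpos > lpos else (rpos, rch)

op1, op2 = (1, "d"), (2, "e")   # the concurrent inserts from the slides

site1 = apply_insert("abc", op1)                          # user 1 local
site1 = apply_insert(site1, transform_insert(op2, op1))   # remote, transformed

site2 = apply_insert("abc", op2)                          # user 2 local
site2 = apply_insert(site2, transform_insert(op1, op2))
# Both sites converge to "daebc" instead of diverging as "deabc"/"daebc".
```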

67

User 1

User 2

Insert e,2

Insert d,1

Insert e,3

Insert d,1

dabc        aebc

daebc       daebc

Insert d,1

Insert e,2


UI

Program

Input
Distributor

Local and Remote Merger


Curtis et al ’95, LiveMeeting,
Vidot ‘02


Feedthrough via extra WAN
Link


Can recreate state through
central site


abc

abc

Program

Input
Distributor


UI

Merger

Merger

Merger

68

User 1

User 2

Insert e,2

Insert d,1

Insert e,3

Insert d,1

dabc        aebc

daebc       daebc

Insert d,1

Insert e,2


UI

Program

Input
Distributor

Centralized Merger


Munson & Dewan ‘94


Asynchronous and
synchronous


Blocking remote merge


Understands atomic
change set


Flexible remote merge
semantics


Modify or delete can win



abc

abc

Program

Input
Distributor


UI

Merger

69

Merging vs. Concurrency Control


Real-time Merging called Optimistic Concurrency Control


Misnomer because it does not support
serializability.


More on this later.

70

User 1

User 2

read "f"

read "f"

read "f"


UI

Program

Input
Distributor

Reading Centralized Resources

Program

Input
Distributor


UI

Central
bottleneck!

ab

f

Read file
operation
executed
infrequently

71

User 1

User 2

write "f", "c"

write "f", "c"

write "f", "c"


UI

Program

Input
Distributor

Writing Centralized Resources

Program

Input
Distributor


UI

Multiple
writes

ab

f

abcc

72

User 1

User 2

write "f", "c"

write "f", "c"

write "f", "c"


UI

Program

Input
Distributor

Replicating Resources


Groove Shared Space
&Webex replication


Pre-fetching


Incremental replication (diff-based) in Groove


Program

Input
Distributor


UI

abcc

f

f

abcc

73

msg
msg

User 1

User 2

mail joe, msg

mail joe, msg

mail joe, msg


UI

Program

Input
Distributor

Non Idempotent Operations

Program

Input
Distributor


UI
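The problem on this slide: if every replica processes "mail joe, msg", Joe receives the message once per replica. One common guard, sketched here with hypothetical names, is to let only a single designated replica execute externally visible (non-idempotent) operations while all replicas still apply the internal state change:

```python
# In a replicated architecture every replica sees every input, so a
# non-idempotent operation such as sending mail would run once per site.
# Guard: only the designated executing replica performs external effects.

outbox = []                      # stands in for the external mail system

class Replica:
    def __init__(self, site_id, executor_id):
        self.site_id = site_id
        self.executor_id = executor_id
        self.state = []

    def process(self, op, arg):
        self.state.append((op, arg))         # internal effect everywhere
        if op == "mail" and self.site_id == self.executor_id:
            outbox.append(arg)               # external effect exactly once

replicas = [Replica(i, executor_id=0) for i in range(3)]
for r in replicas:                           # input broadcast to all sites
    r.process("mail", "msg")
# All replicas agree on the state, but only one mail was sent.
```

The "separate program component" and Groove-Bot designs on the next slides generalize this: the externality lives in one place even though the rest of the program is replicated.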

74

User 1

User 2

insert d, 1

insert d, 1

mail joe, msg


UI

Program

Input
Distributor

Separate Program Component


Groove Bot: Dedicated machine for
external access


Only some users can invite Bot in shared
space


Only some users can invoke Bot
functionality


Bot data can be given only to some users


Similar idea of special “externality
proxy” in Begole 01

Program

Input
Distributor


UI

Program’

insert d,1

msg

75

User 1

User 2

mail joe, msg


UI

Program

Two-Level Program Component


Dewan & Choudhary ’92, Sync,
LiveMeeting


Extra comm. hop and
centralization


Easier to implement


UI

Program

Program++

mail joe, msg

insert d,1

insert d,1

insert d,1

msg

76

Classifying Previous Work


Shared layer


X Windows (XTV)


Microsoft Windows (NetMeeting App Sharing)


VNC Framebuffer (Shared VNC)


AWT Widget (Habanero, JCE)


Model (Suite, Groove, PlaceWare)


Replicated vs. centralized


Centralized (XTV, Shared VNC, NetMeeting App. Sharing,
Suite, PlaceWare)


Replicated (VConf, Habanero, JCE, Groove, NetMeeting
Whiteboard)



Rep vs.
Central

Shared
Layer

77

Layer-specific


So far, layer-independent discussion.


Now concrete layers to ground discussion


Screen sharing


Window sharing


Toolkit sharing


Model sharing


78

User 2

User 3

Centralized Window Architecture

User 1

Window Client

Win. Server

Win. Server

Win. Server

a ^, w2, x, y

Press a

Output Broadcaster
& I/O Relayer


I/O Relayer


I/O Relayer

a ^, w1, x, y

draw a, w1, x, y

a ^, w, x, y

draw a, w2, x, y

draw a, w3, x, y

draw a, w1, x, y

draw a, w, x, y

79

User 2

User 3

UI Coupling in Centralized
Architecture

User 1

Window Client

Win. Server

Win. Server

Win. Server

Output Broadcaster
& I/O Relayer++


I/O Relayer


I/O Relayer

move w2

move w2

move w1

move w3

move w3

move w


Existing approach


T.120, PlaceWare


UI coupling need not be
supported


XTV

80

User 2

User 3

Distributed Architecture for UI Coupling

User 1

Window Client

Win. Server

Win. Server

Win. Server

Output Broadcaster
& I/O Relayer++


I/O Relayer ++


I/O Relayer++

move w1

move w1

move w1

move w3

move w



Need multicast server at
each LAN



Can be supported by T.120

move w1

81

Two Replication Alternatives


Replicate layer S by


S-1 sending input events to all S instances


S sending events directly to all peers


Direct communication allows partial sharing (e.g. windows)


Harder to implement automatically by infrastructure

[Diagrams: S-1 broadcasting to all S instances vs. S instances sending directly to peers]

82

Semantic Issue


Should window positions be coupled?


Leads to window wars (Stefik et al ’85)


Can uncouple windows


Cannot refer to the “upper left” shared
window


Compromise


Create a virtual desktop for physical desktop
of a particular user

83

UI Coupling and Virtual Desktop

84

User 2

User 3

Raw Input with Virtual Desktop

User 1

Window Client

Win. Server

Win. Server

Win. Server

a ^, w2, x’, y’

Press a

Output Broadcaster I/O
Relayer & VD

VD & I/O Relayer


VD & I/O Relayer

a ^, w1, x, y

draw a, w1, x, y

draw a, w2, x’, y’

draw a, w3, x’, y’

a ^, x’, y’

draw a, w1, x, y

draw a, x’, y’

Knows about
virtual desktop

85

User 2

User 3

Translation without Virtual Desktop

User 1

Window Client

Win. Server

Win. Server

Win. Server

a ^, w2, x, y

Press a

Output Broadcaster, I/O
Relayer & Translator

I/O Relayer

I/O Relayer

a ^, w1, x, y

draw a, w1, x, y

draw a, w2, x, y

draw a, w3, x, y

draw a, w1, x, y

a ^, w1, x, y
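The translation step on this slide can be sketched as a pair of per-user mappings. The window ids and the two helper functions are hypothetical; a real window system translates many more event fields:

```python
# Sketch of id translation in the output broadcaster / I/O relayer:
# the window client knows only its own id w1; each user's server uses a
# local id, so output is translated per user and input is mapped back.

local_ids = {"user1": "w1", "user2": "w2", "user3": "w3"}
client_id = {w: "w1" for w in local_ids.values()}   # reverse map to w1

def broadcast_output(op, x, y):
    # client drew into w1 at (x, y); translate the id for each user
    return {u: (op, w, x, y) for u, w in local_ids.items()}

def relay_input(event, local_window, x, y):
    # a user pressed a key in a local window; map back to the client's id
    return (event, client_id[local_window], x, y)

out = broadcast_output("draw a", 10, 20)
inp = relay_input("a^", "w2", 10, 20)   # user 2's event, relayed as w1
```

With a virtual desktop the relayers additionally translate coordinates (x', y'), since remote windows are embedded inside the virtual-desktop window.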

86

Coupled Expose Events: NetMeeting

87

User 2

User 3

Coupled Exposed Regions

User 1

Window Client

Win. Server

Win. Server

Win. Server

Output Broadcaster I/O
Relayer & VD

draw w

T.120 (Virtual Desktop)

expose w

front w3

expose w

expose w

draw w

draw w

expose w

VD & I/O Relayer


VD & I/O Relayer

draw w

88

Coupled Expose Events: PlaceWare

89

User 2

User 3

Uncoupled Expose Events

User 1

Window Client

Win. Server

Win. Server

Win. Server

Output Broadcaster I/O
Relayer & VD

draw w

draw w

expose w



XTV (no Virtual Desktop)



expose event not broadcast
so remote computers do not
blacken region



Potentially stale data

draw w

draw w

front w3

expose w

VD & I/O Relayer


VD & I/O Relayer

90

Uncoupled Expose Events


A centralized collaboration-transparent app draws to the areas of the last user who sent an expose event.


May only send local expose events


If it redraws the entire window anyway, everyone is coupled.


If it draws only the exposed areas:


Send the draw request only to the inputting user


Would work as long as unexposed but visible regions are not changing.


Assumes a draw request can be associated with an expose event.


To support this accurately, the system needs to be sent the
union of exposed regions received from multiple users

91

Window-based Coupling


Mandatory


Window sizes


Window contents


Optional


Window positions


Window stacking order


Window exposed regions


Optional can be done with or
without virtual desktop


Remote and local windows
could mix, rather than have
remote windows embedded in
Virtual Desktop window.


Can lead to “window wars”
(Stefik et al ’87)




Couplable properties


Size


Contents


Positions


Stacking order


Exposed regions


In shared window
system some must be
coupled and others
may be.

92

Example of Minimal Window Coupling

93

User 2

User 3


UI

Program

Input
Distributor

Replicated Window Architecture

Program

Input
Distributor


UI

User 1


UI

Program

Input
Distributor

a ^, w2, x, y

Press a

a ^, w1, x, y

a ^, w3, x, y

a ^, w1, x, y

a ^, w3, x, y

draw a, w2, x, y

draw a, w2, x, y

draw a, w3, x, y

a ^, w2, x, y

94

User 2

User 3


UI

Program

Input
Broadcaster

Replicated Window Architecture
with UI Coupling

Program

Input
Broadcaster


UI

User 1


UI

Program

Input
Broadcaster

move w

move w

move w

move w

move w

move w

95

User 2

User 3


UI

Program

Input
Distributor

Replicated Window Architecture
with Expose coupling

Program

Input
Distributor


UI

User 1


UI

Program

Input
Distributor

expose w

move w2

expose w

expose w

expose w

expose w

draw w

draw w

draw w

expose w

96

Replicated Window System


Centralized only implemented commercially


NetMeeting


PlaceWare


Webex


Replicated can offer more efficiency and pass
through firewalls limiting large traffic


Must be done carefully to avoid correctness problems


Harder but possible at window layer


Chung and Dewan ’01


Assume floor control as centralized systems do


Also called intelligent app sharing



97

Screen Sharing


Sharing the screen client


Window system (and all applications running
on top of it)


Cannot share windows of subset of apps


Share complete computer state


Lowest layer gives coarsest sharing granularity.


98

Sharing the (VNC) Framebuffer Layer

99

VNC Centralized Frame Buffer
Sharing

Window Client

Win. Server

Framebuffer


I/O Relayer

Output Broadcaster
& I/O Relayer

Framebuffer

Framebuffer


I/O Relayer

draw pixmap rect
(frame diffs)

key events

mouse events
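The frame-diff step above can be sketched minimally: compare the current framebuffer against the last state sent to a client and ship only the changed parts. VNC actually works on rectangles with several encodings; this row-level version is just the idea:

```python
# Minimal frame-diff sketch: rows of characters stand in for pixel rows.

def frame_diffs(old, new):
    # return (row_index, row) pairs for rows that changed since last send
    return [(i, row) for i, (prev, row) in enumerate(zip(old, new))
            if prev != row]

last_sent = ["....", "....", "...."]
current   = ["....", ".XX.", "...."]

updates = frame_diffs(last_sent, current)   # only the changed row ships

client = last_sent[:]                       # client applies the updates
for i, row in updates:
    client[i] = row
```

In the pull model the server keeps such diffs per client, since each client may have pulled at a different time.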

100

Replicated Screen Sharing?


Replication hard if not impossible


Each computer runs a framebuffer server and
shared input


Requires replication of entire computer state


Either all computers are identical and receive same
input from when they were bought


Or at start of sharing session download one
computer’s entire environment


Hence centralization with virtual desktop

101

Sharing pixmaps vs. drawing operations


Potentially larger size


Obtaining pixmap changes difficult


Do framebuffer diffs


Put hooks into window system


Do own translation


Single output operation


Standard operation


No context needed for interpretation


Multiple operations can be coalesced
into single pixmap


Per-user coalescing and compression


Based on network congestion and
computation power of user


Pixmap can be compressed


Smaller size


Obtaining drawing operations easy


Create proxy that traps them


Many output operations


Non-standard operations


Fonts, colormaps etc need to be
replicated


Reliable protocol needed


Possible non standard operations
for distributing state


Session initiation takes longer


Compression but not coalescing
possible


102

T.120 Mixed Model


Send either drawing operation or pixmap.


Pixmap sent when


Remote site does not support operation


Multiple graphic operations need to be combined into
single pixmap because of network congestion or
computation overload


Feedthrough and fidelity of pixmaps only when
required


More complex


mechanisms and policies for
conversion


103

Pixmap compression


Combine pixmap updates to overlapping regions into one
update.


In VNC diffs of framebuffer done.


In T.120, rectangles are computed from updates


When data already exists, send x,y of source (VNC and T.120)


Scrolling and moving windows


Function of pixmap cache size


Diffs with previous rows of pixmap (T.120)


Single color with pixmap subrectangles (VNC)


Background with foreground shapes


JPEG for still data, MPEG for moving data


Larger number of operations conflicts with interoperability.


Reduces statelessness


Efficiency gain vs. loss
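One of the schemes above, encoding runs of a single color within a pixmap row in the spirit of VNC's single-color subrectangles, can be sketched as follows (illustrative only; VNC's actual encodings are rectangle-based and more elaborate):

```python
# Run-length encoding of one pixel row: collapse runs of identical
# pixels into (pixel, count) pairs, and expand them back on the client.

def rle_encode(row):
    runs = []
    for px in row:
        if runs and runs[-1][0] == px:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([px, 1])      # start a new run
    return [(p, n) for p, n in runs]

def rle_decode(runs):
    return "".join(p * n for p, n in runs)

row = "WWWWWWBBWW"                    # mostly-solid rows compress well
runs = rle_encode(row)
```

As the slide notes, such compression trades computation (and sometimes latency) for bandwidth, so it pays off only on constrained links.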


104

T.120 Drawing Operation Compression


Identify operands of previous operations (within some history) rather than sending new values (T.120)


E.g. Graphics context often repeated


Both kinds of compression useless when
bandwidth abundant


But can unduly increase latency.


105

T.120 Pointer Coalescing


Multiple input pointer updates combined into one


Multiple output pointer updates combined into
one.


Reduced user experience


Bandwidth usage of pointer updates small.


Reduce jitter in variable latency situations.


If events are time stamped


Consistent with not sending incremental
movements and resizing of shapes in whiteboards.
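The coalescing idea can be sketched in a few lines (a simplification of what T.120 does; here non-move events act as barriers so their ordering relative to moves is preserved):

```python
# Collapse each run of consecutive pointer-move events into its last one.

def coalesce_pointer_updates(events):
    out = []
    for e in events:
        if e[0] == "move" and out and out[-1][0] == "move":
            out[-1] = e          # keep only the latest position of a run
        else:
            out.append(e)        # clicks, key presses, etc. pass through
    return out

burst = [("move", 1, 1), ("move", 2, 2), ("click", 2, 2),
         ("move", 3, 3), ("move", 4, 4)]
sent = coalesce_pointer_updates(burst)
# Five events shrink to three; intermediate positions are dropped, which
# is the "reduced user experience" the slide mentions.
```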


106

Flow Control Algorithms


T.120 push-based approach


Sender pushes data to group of receivers


Compare end-to-end rate for the slowest receiver by looking at the application queue


Works with overlays (firewalls)


Adapt compression and coalescing based on this


Very slow computers leave collaboration.


VNC pull-based rate


Each client pulls data at consumption rate


Gets diffs since last pull with no intermediate points


Per client diffs must be maintained


Data might be sent along same path multiple times


Could replicate updates at all LANs (federations) [Chung 01]

107

Experimental Data


Pull-based vs. Push-based flow control


Sharing pixmaps vs. drawing operations


Replicated vs. centralized architecture

108

Remote Feedback Experiments


Nieh et al, 2000: Remote single-user access experiments.


VNC


RDP (T.120-based)


Measured


Latency (Remote feedback time)


Data transferred


Give idea of performance seen
by remote user in centralized
architecture, comparing


Sharing of pixmaps vs. drawing operations


Pull-based vs. no flow control


User 1

Window Client

Win. Server

Win. Server

Master I/O Distributor


Slave I/O Distributor

109

High Bandwidth Experiments


Letter A


Latency


VNC (Linux) 60 ms


RDP (Win2K, T.120-based) 200 ms


Data transferred


VNC 0.4KB


RDP 0.3KB


Previewers send text as
bitmaps (Hanrahan)



Red box fill


Latency


VNC (Linux) 100 ms


RDP (Win2K, T.120-based) 220 ms


Data transferred


VNC 1.2KB


RDP 0.5KB


Compression increases
latency reducing data

110

Web Page Experiments


Time to execute a web page
script


Load 54*2 pages (text and
bitmaps)


Scroll down 200 pixels


Common parts: blue left
column, white background, PC
magazine logo


Load time


4-100 Mbps: < 50 seconds


100 Mbps


RDP: 35s


VNC: 24s




Load time


128 Kbps


RDP 297s


VNC 25s


Data transferred


100 Mbps


Web browser 2MB


RDP 12MB


VNC 4MB


128 Kbps


RDP 12 MB


VNC 1MB


Data loss reduces load time




111

Animation Experiments


98 KB Macromedia Flash
315 550x400 frames


FPS


100 Mbps


RDP: 18


VNC: 15


512 kbps


RDP: 8


VNC: 15


128 Kbps


RDP: 2


VNC: 16


Data transferred


100 Mbps


RDP: 3MB


VNC: 2.5MB


512 kbps


RDP: 2MB


VNC: 1.2MB


128 kbps


RDP: 2MB


VNC: 0.3MB


18 fps acceptable, < 8fps intolerable


Data loss increases fps


LAN speed required for tolerable
animations




112

Cyclic Animation Experiments


Wong and Seltzer 1999,
RDP Win NT


Animated 468x60 pixel
GIF banner


0.01 Mbps


Animated scrolling news
ticker


0.01 Mbps



Bezier screen saver


10 bezier curves repeated


0.1 Mbps



GIF banner and scrolling
news ticker simultaneously


1.60 Mbps


Client side cache of pixmaps


Cache not big enough to
accommodate both animations


LRU policy not ideal for cyclic
animations


10 Mbps can accommodate
only 5 users


Load put by other UI
operations?





113

Network Loads of UI Ops


Wong and Seltzer 1999,
RDP Win NT


Typing


75 wpm word typist
generated 6.26 kbps


Mousing


Random, continuous:
2Kbps


Usefulness of mouse
filtering in T.120?




Menu navigation


Depth-first selection from Windows start menu: 1.17 Kbps


Alt right arrow in word:
39.82 Kbps


Office 97 with animation: 48.88 Kbps


Scrolling


Word document, Page Down key held: 60 kbps


114

Relative Occurrence of Operations


Danskin, Hanrahan '94, X


Two 2D drawing programs


Postscript previewer


X11 perf benchmark


5 grad students doing daily
work


Most output responses are
small.


100 bytes


TCP/IP adds 50% overhead


Startup lots of overhead ~ 20s




Bytes used

1. Images
   1. 53 bytes avg size
   2. BW bitmap rectangles
2. Geometry
   1. Half clearing letter rectangles
3. Text
4. Window enter and leave
5. Mouse, Font, Window movement, etc. events negligible


Grad students vs. real people?


115

User Classes vs. Load & Bandwidth
Usage


Terminal services study


Knowledge Worker


Makes own work


Marketing, authoring


Excel, Outlook, IE, Word


Keeps apps open all the time


Structured task worker


Claims processing, accts payable


Outlook, Word


Uses each app for less time,
closing and opening apps


Data Entry worker


Transcription, typists, order entry


SQL, forms



Simulation scripts run to measure how many of each class can be supported before 10% degradation in server response


2x Pentium III Xeon 450 MHz


40 structured task workers


70 knowledge workers


320 Data entry workers


In central architecture, perhaps
separate multicaster


Network utilization


Structured task: 1950 bps


Knowledge worker: 1200 bps


Data entry: 495 bps


Encryption has little effect



116

Regular vs. Bursty Traffic


Droms and Dyksen ’90, X traffic


Regular


8 hour systems programmer usage


236 bps, 1.58 packets per second


Compares well with network file system traffic


Bursts


40,000 bps, 100 pps


Individual apps


Twm and xwd > 100,000 bps, 100 pps


Xdvi, 60,000 bps, 90 pps


Comparable to animation loads


Bandwidth requirements as much as remote file system




117

Bandwidth in Replicated vs.
Centralized


Input in replicated architecture is less data than output in centralized


Several mouse events could be discarded


Output could be buffered.


X Input vs. Output (Ahuja ’90)


Unbuffered: 6 times as many messages sent in centralized


Buffered: 3.6 times as many messages sent


Average input and output message size: 25 bytes


RDP each keystroke message 116 bytes


Letter a, box fill, text scroll: < 1 KB


Bitmap load: 100 KB



118

Generic Shared Layers Considered


Framebuffer


Window


119

Shared Widgets


Layer above window is Toolkit


Abstractions offered


Text


Sliders


Other “Widgets”


120

Sharing the (Swing) Toolkit Layer


Different window sizes


Different looks and feel


Independent scrolling

121

Window Divergence


Independent scrolling


Multiuser scrollbar


Semantic telepointer

122

Shared Toolkit


Unlike window system, toolkit not a network layer


So more difficult to intercept I/O


Input easier by subscribing to events, and hence popular
replicated implementations done for Java AWT & Swing


Abdel-Wahab et al ’94 (JCE), Chabert et al ’98 (NCSA’s
Habanero), Begole ’01


GlassPane can be used in Swing


A frame can be associated with a glass pane whose transparent property
is set to true


Mouse and keyboard events sent to glass pane


Centralized done for Java Swing by intercepting output and
input (Chung ’02)


Modified JComponent constructor to turn debug option on


Graphics object wrapped in DebugGraphics object


DebugGraphics class changed to intercept actions


Cannot modify Graphics as it is an abstract class subclassed by
platform dependent classes
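The output-interception idea above can be sketched as a wrapping decorator, in the spirit of Chung '02's DebugGraphics trick: every drawing call is recorded (where it could be broadcast to remote users) before being forwarded to the real implementation. `GraphicsOps` and `InterceptingGraphics` are hypothetical names for illustration, not the actual Swing API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Stand-in for the drawing calls a toolkit makes on a Graphics-like object.
interface GraphicsOps {
    void drawString(String s, int x, int y);
    void fillRect(int x, int y, int w, int h);
}

// Decorator that intercepts each output action before delegating.
class InterceptingGraphics implements GraphicsOps {
    private final GraphicsOps delegate;   // the real platform-dependent implementation
    private final Consumer<String> tap;   // receives a record of each output action

    InterceptingGraphics(GraphicsOps delegate, Consumer<String> tap) {
        this.delegate = delegate;
        this.tap = tap;
    }
    public void drawString(String s, int x, int y) {
        tap.accept("drawString " + s + " " + x + " " + y); // broadcast point
        delegate.drawString(s, x, y);
    }
    public void fillRect(int x, int y, int w, int h) {
        tap.accept("fillRect " + x + " " + y + " " + w + " " + h);
        delegate.fillRect(x, y, w, h);
    }
}

public class OutputInterceptDemo {
    static final List<String> captured = new ArrayList<>();
    public static void main(String[] args) {
        GraphicsOps real = new GraphicsOps() { // no-op "screen"
            public void drawString(String s, int x, int y) {}
            public void fillRect(int x, int y, int w, int h) {}
        };
        GraphicsOps g = new InterceptingGraphics(real, captured::add);
        g.drawString("abc", 10, 20);
        g.fillRect(0, 0, 5, 5);
        System.out.println(captured);
    }
}
```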

123

Shared Toolkit


Commercial shared toolkits not widely available.


Intermediate point between model and window sharing.


Like model sharing


Independent window sizes and scrolling


Concurrent editing of different widgets


Merging of concurrent changes to replicated text widget


Like window sharing


No new programming model/abstractions


Existing programs



124

User 1

User 2

Insert w,d,1


Toolkit

Program

Input
Distributor

Replicated Widgets

abc

abc

Program

Input
Distributor


Toolkit

Insert w, d,1

Insert w, d,1

adbc

adbc

125

Sharing the Model Layer


The same model can be bound to different
widgets!


Not possible with toolkit sharing

126

Sharing the Model Layer

Increasing
Abstraction

Framebuffer

Window

Model

Toolkit

Program
Component/
Model

User-Interface
Component

127

Sharing the Model Layer

Increasing
Abstraction

Framebuffer

Window

Model

Toolkit

Program
Component/
Model

User-Interface
Component

View

Controller

Cost of
accessing
remote
model

128

Sharing the Model Layer

Increasing
Abstraction

Framebuffer

Window

Model

Toolkit

Program
Component/
Model

User-Interface
Component

View

Controller

Send
changed
model state
in notification

129

Sharing the Model Layer

Increasing
Abstraction

Framebuffer

Window

Model

Toolkit

Program
Component/
Model

User-Interface
Component

View

Controller

No standard
protocol

130

User 2

User 3


UI

Program

Output Broadcaster
& I/O Relayer

Centralized Architecture


UI

User 1


UI


I/O Relayer


I/O Relayer

Output Broadcaster and
relayers cannot be
standard

131

User 2

User 3


UI

Program

Input
Broadcaster

Replicated Architecture

Program

Input
Broadcaster


UI

User 1


UI

Program

Input
Broadcaster

Input broadcaster
cannot be
standard

132

Model Collaboration Approaches


Communication facilities of varying
abstractions for manual implementation.


Define Standard I/O for MVC


Replicated types


Mix these abstractions

133

Unstructured Channel Approach


T.120 and other multicast approaches


Used for data sharing in whiteboard


Provide byte-stream based IPC primitives


Add multicast to session capability


Programmer uses these to create relayers
and broadcasters

134

RPC


Communicate PL types rather than unstructured
byte streams


Synchronous or asynchronous


Use RPC


Many Java based colab platforms use RMI


135

M-RPC


Provide multicast RPC (Greenberg and Marwood
’92, Dewan and Choudhary ’92) to subset of sites
participating in session:


processes of programmer-defined group of users


processes of all users in session


processes of users other than current inputter


current inputter


all processes of specific user


specific process
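The destination sets above can be sketched as a dispatch over the processes in a session; `Dest`, `Session`, and `mrpc` are illustrative names, not an actual infrastructure API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Destination sets for a multicast RPC, a subset of those listed above.
enum Dest { ALL, OTHERS, INPUTTER }

class Session {
    final List<Consumer<String>> processes = new ArrayList<>(); // one per user
    int currentInputter = 0;                                    // who produced the last input

    // Deliver the "call" (here just a string) to the processes named by dest.
    void mrpc(Dest dest, String call) {
        for (int i = 0; i < processes.size(); i++) {
            boolean send;
            if (dest == Dest.ALL) send = true;
            else if (dest == Dest.OTHERS) send = (i != currentInputter);
            else send = (i == currentInputter);
            if (send) processes.get(i).accept(call);
        }
    }
}

public class MRpcDemo {
    static final List<String> log = new ArrayList<>();
    public static void main(String[] args) {
        Session s = new Session();
        for (int u = 0; u < 3; u++) {
            final int id = u;
            s.processes.add(msg -> log.add("user" + id + ":" + msg));
        }
        s.currentInputter = 1;
        s.mrpc(Dest.OTHERS, "insertIdea red"); // delivered to users 0 and 2 only
        System.out.println(log);
    }
}
```

The caller never names an explicit site roster, which is the point of M-RPC: the infrastructure resolves "others" against current session membership.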


136

GroupKit Example

proc insertIdea {idea} {
    insertColouredIdea blue $idea
    gk_toOthers "insertColouredIdea red $idea"
}

137

Model Collaboration Approaches


Communication facilities of varying
abstractions for manual implementation.


Define Standard I/O for MVC


Replicated types


Mix these abstractions

138

Sharing the Model Layer

Increasing
Abstraction

Framebuffer

Window

Toolkit

Program
Component/
Model

User-Interface
Component

View

Controller

Model

Define
standard
protocol

139

Sharing the Model Layer

Increasing
Abstraction

Framebuffer

Window

Model

Toolkit

Program
Component/
Model

User-Interface
Component

View

Define
standard
protocol

140

Standard Model-View Protocol


Can be in terms of model
objects or view elements.


View elements are varied


Bar charts, Pie charts


Model elements can be
defined by standard types


Single-user I/O model


Output: Model sends its
displayed elements to view and
updates to them.


Input: View sends input
updates to displayed model
elements


Dewan & Choudhary ‘90


Model

View

Displayed
element

141

IM Model





/*dmc Editable String, IM_History */

typedef struct { unsigned num; struct String *message_arr; } IM_History;

IM_History im_history;

String message;

Load () {

    Dm_Submit (&im_history, "IM History", "IM_History");

    Dm_Submit (&message, "Message", "String");

    Dm_Callback ("Message", &updateMessage);

    Dm_Engage ("IM History");

    Dm_Engage ("Message");

}

updateMessage (String variable, String new_message) {

    im_history.message_arr[im_history.num++] = new_message;

    Dm_Insert ("IM History", im_history.num, new_message);

}


Create view of element named
“IM History” whose type is
“IM_History” and value is at
address “&im_history”

Show (a la map)
the view of “IM
History”

Whenever “Message”
is changed by user call
updateMessage()

142

Multiuser Model-View Protocol


Multi-user I/O model


Output Broadcast: Output
messages broadcast to all
views.


Input relay: Multiple views
send input messages to
model.


Input coupling: Input
messages can be sent to
other views also


Dewan & Choudhary ’91

Model

View

View

143

IM Model





/*dmc Editable String, IM_History */

typedef struct { unsigned num; struct String *message_arr; } IM_History;

IM_History im_history;

String message;

Load () {

    Dm_Submit (&im_history, "IM History", "IM_History");

    Dm_Submit (&message, "Message", "String");

    Dm_Callback ("Message", &updateMessage);

    Dm_Engage ("IM History");

    Dm_Engage ("Message");

}

updateMessage (String variable, String new_message) {

    im_history.message_arr[im_history.num++] = new_message;

    Dm_Insert ("IM History", im_history.num, new_message);

}


Insert sent to all

Called by
any user

144

Replicated Objects in Central
Architecture


Distributed view
needs to create local
replica of displayed
object.


Can build
replication into
types

Model

View

View

replicas

145

Replicating Popular Types for Central
and Replicated Architectures


Create replicated versions of selected popular types.


Changes in a type instance automatically made in all of its
replicas (in views or models)


No need for explicit I/O


Can select which values in a layer replicated


Architectures


replicated architecture (Greenberg and Marwood ’92, Groove)


semi-centralized (Munson & Dewan ’94, PlaceWare)


View

Model

View

Model

Model

View

View

146

Example Replicated Types


Popular primitive types: String, int, boolean …
(Munson & Dewan ’94, PlaceWare, Groove)


Records of simple types (Munson & Dewan ’94,
Groove)


Dynamic sequences (Munson & Dewan ’94,
Groove, PlaceWare)


Hashtables (Greenberg & Marwood ’92, Munson
& Dewan ’94, Groove)


Combinations of these types/constructors (Munson
& Dewan ’94, PlaceWare, Groove)
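The replicated-type idea above can be sketched as a minimal sequence whose insert is automatically applied to its peers, so the program does no explicit I/O. The name `ReplicatedSequence` follows Munson & Dewan '94, but this body is an illustration only (no concurrency control or merging).

```java
import java.util.ArrayList;
import java.util.List;

// A sequence type whose changes propagate automatically to connected replicas.
class ReplicatedSequence<T> {
    private final List<T> elems = new ArrayList<>();
    private final List<ReplicatedSequence<T>> peers = new ArrayList<>();

    // Wire two replicas together (in views or models).
    void connect(ReplicatedSequence<T> peer) {
        peers.add(peer);
        peer.peers.add(this);
    }

    // Public insert: apply locally, then push the same operation to every peer.
    void insert(int i, T e) {
        applyInsert(i, e);
        for (ReplicatedSequence<T> p : peers) p.applyInsert(i, e); // no echo back
    }
    private void applyInsert(int i, T e) { elems.add(i, e); }

    T get(int i) { return elems.get(i); }
    int size() { return elems.size(); }
}

public class ReplicatedSequenceDemo {
    public static void main(String[] args) {
        ReplicatedSequence<String> model = new ReplicatedSequence<>();
        ReplicatedSequence<String> viewReplica = new ReplicatedSequence<>();
        model.connect(viewReplica);
        model.insert(0, "hello");                // no explicit I/O by the program
        System.out.println(viewReplica.get(0));  // hello
    }
}
```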



147

Kinds of Distributed Objects


By reference (Java and .NET)


reference sent to remote site


remote method invocation site results in
calls at local site


By value (Java and .NET)


deep copy of object sent


remote method invocations results in calls
at remote site


copies diverge


Replicated objects


deep copy of object sent


remote method invocations results in local
and remote calls


either locks or merging used to detect/fix
conflicts



site 1

site 2

site 1

site 2

site 1

site 2
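The contrast above can be sketched in plain Java: a by-value deep copy diverges after local changes, while a replicated object applies each invocation locally and remotely. `Counter` and `ReplicatedCounter` are illustrative names, and conflict detection (locks or merging) is omitted.

```java
import java.util.ArrayList;
import java.util.List;

// "By value": a deep copy is sent; subsequent changes diverge.
class Counter {
    int n;
    Counter copy() { Counter c = new Counter(); c.n = n; return c; }
}

// Replicated object: an invocation results in local and remote calls.
class ReplicatedCounter {
    int n;
    final List<ReplicatedCounter> replicas = new ArrayList<>();
    void link(ReplicatedCounter r) { replicas.add(r); r.replicas.add(this); }
    void increment() {
        n++;
        for (ReplicatedCounter r : replicas) r.n++; // remote replicas updated too
    }
}

public class DistObjectsDemo {
    public static void main(String[] args) {
        Counter site1 = new Counter();
        Counter site2 = site1.copy();
        site1.n++;                                 // site2.n stays 0: copies diverge

        ReplicatedCounter r1 = new ReplicatedCounter();
        ReplicatedCounter r2 = new ReplicatedCounter();
        r1.link(r2);
        r1.increment();                            // both replicas now 1
        System.out.println(site2.n + " " + r2.n);  // 0 1
    }
}
```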

148

Alternative model sharing
approaches

1. Stream-based communication

2. Regular RPC

3. Multicast RPC

4. Replicated Objects (/Generic Model View Protocol)





149

Replicated Objects vs.
Communication Facilities


Higher abstraction


No notion of other sites


Just make change


Cannot use existing types directly


E.g. in Munson & Dewan ’94, ReplicatedSequence


Architecture flexibility


PlaceWare bound to central architecture


Replicas in client and server of different types, e.g. VectorClient &
VectorServer


Abstraction flexibility


Set of types whose replication supported by infrastructure automatically


Programmer-defined types not automatically supported


Sharing flexibility


Who and when coupled burnt into shared value


Use for new apps


150

Replicated Objects vs.
Communication Facilities


PlaceWare has much richer set than WebEx


Ability to include Polling as a slide in a
PowerPoint presentation


Seating arrangement


Not as useful for converting existing apps.


Need to convert standard types to replicated
types


Repartitioning to separate shared and unshared
models


151

Stream based vs. Others


Lowest-level


Serialize and deserialize objects


Multiplex and demultiplex operation invocations into
and from stream


Stream-based communication (wire protocol) is
language independent


No need to learn non-standard syntax and
compilers


May be the right abstraction for converting
existing apps into collaborative ones.

152

Case Study: Collaborative Video Viewing
(Cadiz, Balachandran et al. 2000)


Replicated architecture
created using T.120
multicast later.


Exchanged command
names


Implementer said it
was easy to learn and
use.

153

RPC vs. Others


Intermediate ease of learning, ease of usage,
flexibility


Use when:


Overhead of channel usage < overhead of RPC
learning


Appropriate replicated types


Not available, or


Who and when coupled, architecture burnt into replicated
type


learning overhead > RPC usage overhead

154

M-RPC vs. RPC


Higher-level abstraction


Do not have to know exact site roster


Others, all, current


Can be automatically mapped to stream-based multicast


Use M-RPC when possible

155

Combining Approaches


System combining benefits of multiple
abstractions?


Flexibility of lower-level and automation of
higher-level


Co-existence


Migratory path


New abstractions


156

Coexistence

Support all of these abstractions in one system


RPC and shared objects (Dewan &
Choudhary ’91, Greenberg & Marwood
’92, Munson & Dewan ’94, and
PlaceWare)



157

Migratory Path

Problem of simple co-existence


Low-level abstraction effort not reused.


E.g. RPC used to build a file directory


Allow the use of low-level abstraction to create
higher-level abstraction


Framework allowing RPC to be used to create new
shared objects (Munson & Dewan ’94, PlaceWare).


E.g. shared hash table


Can be difficult to use and learn


Low-level abstraction still needed when controlling who
and when coupled



158

New abstractions: Broadcast
Methods

Stefik et al ’85: Mixes shared
objects and RPC


Declare one or more
methods of arbitrary
class as broadcast


Method invoked on all
corresponding instances
in other processes in
session


Arbitrary abstraction
flexibility



public class Outline {

  String getTitle();

  broadcast void setTitle(String title);

  Section getSection(int i);

  int getSectionCount();

  broadcast void setSection(int i, Section s);

  broadcast void insertSection(int i, Section s);

  broadcast void removeSection(int i);

}
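Since standard Java has no `broadcast` keyword, the semantics above can be sketched with a session registry of corresponding instances: a "broadcast" setter invokes its local body on every associate. All names here are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

class Outline {
    private String title;
    private final List<Outline> associates = new ArrayList<>(); // replicas at other sites

    // Register a corresponding instance in another process of the session.
    void associate(Outline other) {
        associates.add(other);
        other.associates.add(this);
    }

    // Stands in for "broadcast void setTitle": apply locally, then on each associate.
    void setTitle(String t) {
        localSetTitle(t);
        for (Outline o : associates) o.localSetTitle(t); // must NOT call setTitle here,
    }                                                    // or broadcasts would cascade
    private void localSetTitle(String t) { title = t; }

    String getTitle() { return title; }                  // read-only: not broadcast
}

public class BroadcastMethodDemo {
    public static void main(String[] args) {
        Outline u1 = new Outline(), u2 = new Outline();
        u1.associate(u2);
        u1.setTitle("Intro");
        System.out.println(u2.getTitle());  // Intro
    }
}
```

The private `localSetTitle` split illustrates why one broadcast method must not call another: a broadcast calling a broadcast would re-broadcast from every associate.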

159

Association

Broadcast Methods Usage

Model

View

Window

User 1

Model

View

Window

User 2

bm

Broadcast
method

Associates/
Replicas

Associates/
Replicas

lm

lm

lm

lm

lm

160

Problems with Broadcast Methods

public class Outline {

  String getTitle();

  broadcast void setTitle(String title);

  Section getSection(int i);

  int getSectionCount();

  broadcast void setSection(int i, Section s);

  broadcast void insertSection(int i, Section s);

  broadcast void removeSection(int i);

  broadcast void insertAbstract (Section s) {

    insertSection (0, s);

  }

}


Language support needed


C#?


Single multicast group


Cannot do subset of participants


Selecting broadcast methods
required much care


Sharing at method rather than
data level


Broadcast method should not call another broadcast method!

161

Method vs. State based Sharing


Method-based sharing for indirectly sharing state.


Programmer provides mapping between state and
methods that change it.


With infrastructure known mapping, replicated
types automatically implemented.


Mapping of internal state and methods not
sufficient because of host-dependent data
(especially in UI abstractions)


Need mapping of external (logical) state.

162

Property-based Sharing

public class Outline {

  String getTitle();

  void setTitle(String title);

  Section getSection(int i);

  int getSectionCount();

  void setSection(int i, Section s);

  void insertSection(int i, Section s);

  void removeSection(int i);

  void insertAbstract (Section s) {

    insertSection(0, s);

  }

}


Roussev & Dewan ’00


Synchronize external state or
properties


Properties deduced
automatically from
programming patterns


Getter and setter for record
fields


Hashtables and sequences


System keeps properties
consistent


Parameterized coupling model


Patterns can be programmer-defined


163

Programmer-defined conventions

insert = void insert<PropName> (int, <ElemType>)

remove = void remove<PropName> (int)

lookup = <ElemType> elementAt<PropName>(int)

set = void set<PropName> (int, <ElemType>)

count = int get<PropName>Count()

getter = <PropType> get<PropName>()

setter = void set<PropName>(<PropType>)
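Deducing properties from the getter/setter convention above can be sketched with reflection: scan a class for matching `get<PropName>`/`set<PropName>` pairs. A real system in this style (Roussev & Dewan '00) would also handle the sequence and hashtable patterns; this sketch covers only simple properties.

```java
import java.lang.reflect.Method;
import java.util.Set;
import java.util.TreeSet;

public class PatternScanner {
    // Return property names deduced from the get/set programming pattern.
    public static Set<String> properties(Class<?> c) {
        Set<String> getters = new TreeSet<>(), props = new TreeSet<>();
        for (Method m : c.getDeclaredMethods())
            if (m.getName().startsWith("get") && m.getParameterCount() == 0)
                getters.add(m.getName().substring(3));
        for (Method m : c.getDeclaredMethods())
            if (m.getName().startsWith("set") && m.getParameterCount() == 1
                    && getters.contains(m.getName().substring(3)))
                props.add(m.getName().substring(3)); // matching getter+setter = property
        return props;
    }

    // A class following the convention, for demonstration.
    static class Outline {
        private String title;
        public String getTitle() { return title; }
        public void setTitle(String t) { title = t; }
        public int getSectionCount() { return 0; } // no setter: not a settable property
    }

    public static void main(String[] args) {
        System.out.println(properties(Outline.class)); // [Title]
    }
}
```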

164

Multi-Layer Sharing with Shared
Objects

Story so far:


Need separate sharing
implementation for each
layer


Framebuffer: VNC


Window: T.120


Toolkit: GroupKit


Problem with data layer
since no standard protocol


Create shared objects for
this layer




But objects occur at
each layer


Framebuffer


Window


TextArea


Why not use shared
object abstraction for
any of these layers?



165

Sharing Various Layers

Framebuffer

Window

Toolkit

Framebuffer

Window

Toolkit

Parameterized
Coupler

Model

View

Model

View

166

Sharing Various Layers

Framebuffer

Window

Toolkit

Framebuffer

Window

Toolkit

Parameterized
Coupler

Model

View

Model

View

167

Sharing Various Layers

Framebuffer

Window

Toolkit

Framebuffer

Window

Toolkit

Parameterized
Coupler

Model

View

Model

View

168

Experience with Property Based
Sharing


Used for


Model


AWT/Swing Toolkit


Existing Graphics Editor


Requires well-written code


Existing code may not be

169

Multi-layer Sharing



Two ways to implement colab. application


Distribute I/O


Input in Replicated


Output in Centralized


Different implementations (XTV, NetMeeting) distributed
different I/O


Defined replicated objects


A single implementation used for multiple layers


Single implementation in Distribute I/O approach?

170

Translator-based Multi-Layer Support
for I/O Distribution


Chung & Dewan ‘01


Abstract Inter-Layer Communication Protocol


input (object)


output(object)





Translator between specific and abstract protocol


Adaptive Distributor supporting arbitrary, external mappings
between program and UI components


Bridges gap between


window sharing (e.g. T.120 app sharing) and higher-level
sharing (e.g. T.120 whiteboard sharing)


Supports both centralized and replicated architectures and
dynamic transitions between them.
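The abstract protocol above can be sketched as an interface with just `input(object)` and `output(object)`, plus a layer-specific translator that converts its own messages into those two operations, so one distributor serves every layer. Interface and class names here are hypothetical, not Chung & Dewan's actual API.

```java
import java.util.ArrayList;
import java.util.List;

// The two operations of the abstract inter-layer protocol.
interface AbstractLayer {
    void input(Object o);    // events flowing toward the program
    void output(Object o);   // updates flowing toward the user
}

// The distributor sees only the abstract protocol, never window vs. model details.
class Distributor implements AbstractLayer {
    final List<Object> inputs = new ArrayList<>();
    final List<Object> outputs = new ArrayList<>();
    public void input(Object o)  { inputs.add(o);  /* relay to program site(s) */ }
    public void output(Object o) { outputs.add(o); /* broadcast to user sites */ }
}

// A toy translator for a hypothetical window layer: wraps raw window events
// into abstract objects before handing them to the distributor.
class WindowTranslator {
    private final AbstractLayer next;
    WindowTranslator(AbstractLayer next) { this.next = next; }
    void onKeyPress(char c) { next.input("key:" + c); }
    void onDamage(int x, int y, int w, int h) { next.output("damage:" + x + "," + y); }
}

public class TranslatorDemo {
    public static void main(String[] args) {
        Distributor d = new Distributor();
        WindowTranslator t = new WindowTranslator(d);
        t.onKeyPress('a');
        t.onDamage(0, 0, 10, 10);
        System.out.println(d.inputs + " " + d.outputs);
    }
}
```

Writing another translator (say, for a model layer) against the same `AbstractLayer` interface is what lets a single distributor implementation be reused across layers.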

171

I/O Distrib: Multi-Layer Support

PC

Layer N

Layer N-1


Layer 0

Layer S

Layer N

Layer N-1


Layer 0

Layer S

Layer N

Layer N-1


Layer 0

Layer S

Translator


Translator


Translator


Adaptive Distributor

Adaptive Distributor

Adaptive Distributor

172

Translator


Translator


Translator


I/O Distrib: Multi-Layer Support

PC

Layer N

Layer S+1

Layer N

Layer S+1

Layer N

Layer S+1


Layer 0

Layer S


Layer 0

Layer S


Layer 0

Layer S

Adaptive Distributor

Adaptive Distributor

Adaptive Distributor

173

Experience with Translators


VNC


X


Java Swing


User Interface
Generator


Web Services




Requires translator
code, which can be
non-trivial


174

Infrastructure vs. Meta-Infrastructure

Property/Translator-based Distributor/Coupler

Text Editor

Outline Editor

Pattern Editor

X

JavaBeans

Java’s
Swing

VNC

application

application

application

application

application

application

application

application

Infrastructure

Meta-Infrastructure

Checkers

175

The End of Comp 290-063 Material

(Remaining Slides FYI)

176

Using Legacy Code


Issue: how to add collaboration awareness to
single-user layer


Model


Toolkit


Window System





Goal


Want as little coupling as possible between existing
and new code


177

Adding Collaboration Awareness to Layer


Colab. Transp.


Colab. Aware

Extend Colab-Transp. Class

JCE


Colab. Transp.



Colab. Aware

Ad-Hoc

Suite


Colab. Aware


Colab. Transp.

Extend Colab.
Aware Class

Sync


Colab. Aware


Colab. Transp.

Colab. Aware Delegate

Roussev
’00

178

Proxy Delegate

XTV


X Server

X Client


Pseudo Server

COLA


Called Object


Calling Object


Adapter Object

179

Identifying Replicas


Manual connection:


Translators identify peers (Chung and Dewan ’01)


Automatic:


Central downloading:


Central copy linked to downloaded objects (PlaceWare, Suite, Sync)


Identical programs: Stefik et al ’85


Assume each site runs the same program and instantiates programs in the same order


Connect corresponding instances (at same virtual address) automatically.


Identical instantiation order intercepted


Connect Nth instantiated object intercepted by system


E.g. Nth instantiated windows correspond


External descriptions (Groove)


Assume an external description describing models and corresponding views


System instantiates models and automatically connects remote replicas of them.


Gives programmers events to connect models to local objects (views, controllers).



No dynamic control over shared objects.


Semi-manual (Roussev and Dewan ’00)


Replicas with same GID’s automatically connected.


Programmer assigns GIDs to top level objects, system to contained objects




180

Connecting Replicas vs. Layers


Object correspondence
established after containing
layer correspondence.


Only some objects may be
linked


Layer correspondence
established by session
management


E.g. Connecting whiteboards
vs. shapes in NetMeeting


181

Conference 1

App1

App2

Basic Session Management
Operations

User 1

Join/Leave (User 2)

User 2

Create/ Delete


(Conference 1)

Add/Delete
(App3)

App3

List/Query/
Set/ Notify
Properties

182

Basic Firewall


Limit network
communication to and from
protected sites


Do not allow other sites to
initiate connections to
protected sites.


Protected sites initiate
connection through proxies
that can be closed if
problems


can get back results


Bidirectional writes


Call/reply



protected site

communicating site

unprotected proxy

open

send

send

call

reply

183

Protocol-based Firewall


May be restricted to
certain protocols


HTTP


SIP



protected site

communicating site

unprotected proxy

open

open

call

reply

http

sip

sip

184

Firewalls and Service Access


User/client at protected site.


Service at unprotected site.


Communication and
dataflow initiated by
protected client site


Can result in transfer of data
to client and/or server


If no restriction on protocol
use regular RPC


If only HTTP provided,
make RPC over HTTP


Web services/Soap model


protected user

unprotected service

unprotected proxy

open

call

reply

rpc

http-rpc

185

Firewalls and Collaboration


Communicating sites
may all be protected.


How do we allow
opens to protected
user?




protected user

protected user

open

186

Firewalls and collaboration


Session-based forwarder


Protected site opens connection
to forwarder site outside firewall
for session duration


Communicating site also opens
connection to forwarder site.


Forwarder site relays messages to
protected site


Works well if unrestricted access
allowed and used


What if restricted protocol?


unprotected forwarder

open

open

close

close


protected user

protected user

send

send

187

Restricted Protocol


If only restricted protocol then
communication on top of it as in service
solution


Adds overhead.


188

Restricted protocols and data to
protected site


HTTP does not allow data flow to be initiated by
unprotected site


Polling


Semi-synchronous collaboration


Blocked gets (PlaceWare)


Blocked server calls in general in one-way call model


Must refresh after timeouts


SIP for MVC model


Model sends small notifications via SIP


Client makes call to get larger data


RPC over SIP?
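The "blocked get" above can be sketched with a blocking queue: the client's request parks at the server until a notification arrives or a timeout forces a refresh, letting data reach a protected site over a purely client-initiated protocol. `NotificationServer` and `blockedGet` are illustrative names, not PlaceWare's actual API.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

class NotificationServer {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

    // Called at the unprotected site when the model changes.
    void notifyChange(String update) { pending.add(update); }

    // Handler for the client's long-poll: blocks until data or timeout.
    String blockedGet(long timeoutMs) {
        try {
            String u = pending.poll(timeoutMs, TimeUnit.MILLISECONDS);
            return u != null ? u : "timeout"; // client re-issues the get on timeout
        } catch (InterruptedException e) {
            return "timeout";
        }
    }
}

public class BlockedGetDemo {
    public static void main(String[] args) {
        NotificationServer server = new NotificationServer();
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException e) {}
            server.notifyChange("IM History +1");    // model update arrives later
        }).start();
        System.out.println(server.blockedGet(1000)); // IM History +1
    }
}
```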


189

Firewall-unaware clients


Would like to isolate specific apps from worrying protocol choice and
translation.


PlaceWare provides RPC


Can go over HTTP or not


Groove apps do not communicate directly


just use shared objects and
don’t define new ones


Can go either way


Groove and PlaceWare try unrestricted first and then HTTP


UNC system provides standard property-based notifications to
programmers and allows them to be delivered as:


RMI


Web service


SIP


Blocked gets


Protected site polling


190

Forwarder & Latency


Adds latency


Can have multiple forwarders bound to
different areas (Webex)


Adaptive based on firewall detection
(Groove)



try to open directly first



if fails because of firewall, opens
system provided forwarder


asymmetric communication possible


Messages to user go through forwarder


Messages from user go directly


Groove is also a service based model!


PlaceWare always has latency and it
shows

unprotected forwarder


protected user

protected user

191

Forwarder & Congestion Control


Breaks congestion control
algorithms


Congestion on the path between
protected site and forwarder,
controlled by these algorithms,
may differ from end-to-end
congestion


T.120-like end-to-end
congestion control relevant

unprotected forwarder


protected user

protected user

Different congestions

192

Forwarder + Multicaster


Forwarder can multicast to other users
on behalf of sending user


Separation of application processing
and distribution


Supported by PlaceWare, Webex


Reduces messages in link to forwarder


Separate multicaster useful even if no
firewalls


Forwarder can be a much more powerful
machine