A TAO roadmap for 2011 and beyond:
addressing performance, scalability,
security and new features while
preserving flexibility
Patrick.plichart@tudor.lu
Thibaud.latour@tudor.lu
SWEE 3 Workshop, Apr 27-28, Szeged.
Introduction
- Multi-purpose Platform approach
- Versatility and adaptability
- Extendable
- Severable
Scalability
From a user perspective…
- Reliability of high-stakes assessment
- Data collection validity
- User reluctance
From a technical perspective…
- Uncommonly large-scale use
- Large data volumes
- System solicitation is not uniform
- Beyond the capacity limit, performance collapses
How to address Scalability?
- Information to CBA managers
- Optimization/performance
- Adapt software engineering standards
- Scalable infrastructure solutions
Optimizations on a modular layered architecture
- K Representation Layer
- K Applicative Layer
- TAO Applicative Layer
- DBMS
- API
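To make the layering concrete, here is a rough sketch (hypothetical class, method and predicate names, not TAO's actual API) of an applicative-layer call expressed on top of a generic representation-layer triple store; the point is that each layer can be optimized (caching, indexing, compiled queries) without touching the others:

```python
# Minimal sketch of a layered knowledge stack (hypothetical names, not TAO's API).
# The representation layer only knows generic (subject, predicate, object) triples;
# the applicative layer exposes domain operations on top of it.

class RepresentationLayer:
    """Generic triple store: no knowledge of tests, items or subjects."""
    def __init__(self):
        self.triples = []                      # backed by a DBMS in practice

    def add(self, subject, predicate, obj):
        self.triples.append((subject, predicate, obj))

    def objects(self, subject, predicate):
        return [o for s, p, o in self.triples if s == subject and p == predicate]


class ApplicativeLayer:
    """Domain API (tests, items) expressed in terms of the layer below."""
    def __init__(self, store: RepresentationLayer):
        self.store = store

    def items_of_test(self, test_uri):
        return self.store.objects(test_uri, "http://example.org/tao#hasItem")


store = RepresentationLayer()
store.add("test:1", "http://example.org/tao#hasItem", "item:42")
api = ApplicativeLayer(store)
print(api.items_of_test("test:1"))             # ['item:42']
```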
K Representation Layer optimization
NbSubjects: 100 | Test size (nbItem): 4 | Ramp-up period (secs): 300 | eAccelerator: Yes
Average time (msecs):
Simultaneous Subjects | Authentication | Main View | Process Initialization | Get Item | Perform Next
1   | 226 | 88  | 560 | 248 | 547
5   | 212 | 87  | 543 | 249 | 557
10  | 260 | 123 | 604 | 181 | 334
20  | 378 | 124 | 661 | 320 | 732
50  | 377 | 123 | 662 | 319 | 727
100 | 481 | 355 | 797 | 251 | 524
K Representation Layer optimization
NbSubjects: 100 | Test size (nbItem): 4 | Ramp-up period (secs): 30 | eAccelerator: Yes
Average time (msecs):
Simultaneous Subjects | Authentication | Main View | Process Initialization | Get Item | Perform Next
1   | 226   | 88    | 560   | 248  | 547
5   | 212   | 87    | 543   | 249  | 557
10  | 265   | 125   | 604   | 181  | 334
20  | 363   | 216   | 733   | 209  | 375
50  | 3352  | 2009  | 3706  | 2315 | 2573
100 | 15117 | 14056 | 10029 | 8987 | 6551
K Representation Layer optimization
NbSubjects: 100 000 | Test size (nbItem): 4 | Ramp-up period (secs): 300 | eAccelerator: Yes
Average time (msecs):
Simultaneous Subjects | Authentication | Main View | Process Initialization | Get Item | Perform Next
1   | 4269   | 279    | 575   | 235  | 727
5   | 4327   | 309    | 242   | 243  | 793
10  | 4568   | 556    | 615   | 275  | 1011
20  | 4829   | 830    | 603   | 288  | 1233
50  | 41756  | 31400  | 4905  | 1900 | 10560
100 | 112511 | 107300 | 75965 | 8139 | 40706
K Representation Layer optimization
NbSubjects: 100 000 | Test size (nbItem): 4 | Ramp-up period (secs): 30 | eAccelerator: Yes
Average time (msecs):
Simultaneous Subjects | Authentication | Main View | Process Initialization | Get Item | Perform Next
1   | 3824  | 362   | 604   | 283   | 900
5   | 4272  | 596   | 596   | 283   | 1002
10  | 7662  | 3418  | 1180  | 592   | 1889
20  | 20972 | 13175 | 5064  | 1544  | 6903
50  | 58865 | 28312 | 23449 | 3279  | 23325
100 | 77959 | 22839 | 1873  | 53397 | 43114
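To read the ramp-up comparison, a small worked example helps (assuming the ramp-up period spreads subject arrivals uniformly over its duration, which is the usual interpretation of JMeter's ramp-up parameter):

```python
# Worked example: average arrival rate implied by the ramp-up period,
# assuming subjects start uniformly over that period.

def arrival_rate(n_subjects, ramp_up_seconds):
    """Subjects starting per second during the ramp-up."""
    return n_subjects / ramp_up_seconds

for n, ramp in [(100, 300), (100, 30)]:
    print(f"{n} subjects over {ramp}s -> {arrival_rate(n, ramp):.2f} subjects/s")
# 100 subjects over 300s -> 0.33 subjects/s
# 100 subjects over 30s -> 3.33 subjects/s
```

A ten times shorter ramp-up implies roughly ten times more arrivals per second, which is consistent with the collapse of response times above 20 to 50 simultaneous subjects in the 30-second tables.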
K Representation Layer optimization
TAO performance always depends on total database size, irrespective of the kind of data
- DBMS indexes and general DBMS features?
- RDF verbosity
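As a rough illustration of the issue (a minimal sketch using SQLite from Python, with a hypothetical single statements table rather than TAO's actual schema): when every property of every resource is one row in a generic triple table, any lookup works against a table whose size grows with all data in the platform, unless the DBMS can use an index matching the access pattern.

```python
import sqlite3

# Minimal sketch (hypothetical schema, not TAO's actual one): a single generic
# "statements" table holding every RDF triple.  Every lookup touches a table
# whose size depends on ALL data in the system, so an index on the access
# pattern (subject, predicate) is what keeps lookups from scanning everything.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE statements (
        subject   TEXT NOT NULL,
        predicate TEXT NOT NULL,
        object    TEXT
    )
""")
conn.executemany(
    "INSERT INTO statements VALUES (?, ?, ?)",
    [(f"item:{i}", "rdfs:label", f"Item {i}") for i in range(10_000)],
)

# Without this index, the query below is a full scan of the statements table.
conn.execute("CREATE INDEX idx_subj_pred ON statements (subject, predicate)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT object FROM statements "
    "WHERE subject = ? AND predicate = ?",
    ("item:42", "rdfs:label"),
).fetchall()
print(plan)   # shows the index being used instead of a table scan
```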
K Representation Layer optimization

- Consider separately design time (for flexibility) and production time (for efficiency)
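One way to read this split (a sketch of the general idea, with a hypothetical schema, not TAO's actual mechanism): keep the flexible triple representation at design time, then "compile" the properties a delivery actually needs into a flat, fixed-schema table for production-time queries.

```python
import sqlite3

# Sketch of the design-time / production-time split (hypothetical schema).
# Design time keeps the flexible triple form; before delivery, the properties
# actually needed are compiled into a flat table that production queries hit.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE statements (subject TEXT, predicate TEXT, object TEXT)")
conn.executemany("INSERT INTO statements VALUES (?, ?, ?)", [
    ("item:1", "rdfs:label", "Addition item"),
    ("item:1", "tao:maxScore", "1"),
    ("item:2", "rdfs:label", "Fraction item"),
    ("item:2", "tao:maxScore", "2"),
])

# "Compilation" step: pivot the triples of one class into a flat table.
conn.execute("""
    CREATE TABLE compiled_items AS
    SELECT s.subject AS uri,
           MAX(CASE WHEN s.predicate = 'rdfs:label'   THEN s.object END) AS label,
           MAX(CASE WHEN s.predicate = 'tao:maxScore' THEN s.object END) AS max_score
    FROM statements s
    GROUP BY s.subject
""")

# Production-time query: one narrow, indexable table instead of triple lookups.
print(conn.execute("SELECT uri, label, max_score FROM compiled_items").fetchall())
```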
K Representation Layer optimization

- Consider different data scopes/domains
K Representation Layer optimization

- Consider PostgreSQL
K Representation Layer optimization

- Consider PostgreSQL: database clustering
K Representation Layer optimization

- Consider PostgreSQL: the interest of stored procedures, triggers, checks
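As a rough sketch of what moving integrity rules into the DBMS can look like (hypothetical table and rules, shown with psycopg2; not TAO's actual schema): a CHECK constraint and a trigger enforce validity close to the data instead of in application code.

```python
import psycopg2

# Sketch of pushing integrity rules into PostgreSQL (hypothetical table and
# rules, not TAO's actual schema).  Connection parameters are placeholders.
conn = psycopg2.connect("dbname=tao user=tao password=secret host=localhost")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS delivery_results (
        subject_id  TEXT NOT NULL,
        item_id     TEXT NOT NULL,
        score       NUMERIC NOT NULL,
        max_score   NUMERIC NOT NULL,
        recorded_at TIMESTAMPTZ NOT NULL DEFAULT now(),
        CHECK (score >= 0 AND score <= max_score)   -- declarative validity rule
    )
""")

# Trigger: stamp the recording time server-side, so clients cannot forge it.
cur.execute("""
    CREATE OR REPLACE FUNCTION stamp_result() RETURNS trigger AS $$
    BEGIN
        NEW.recorded_at := now();
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql
""")
cur.execute("DROP TRIGGER IF EXISTS stamp_result_trg ON delivery_results")
cur.execute("""
    CREATE TRIGGER stamp_result_trg
        BEFORE INSERT ON delivery_results
        FOR EACH ROW EXECUTE PROCEDURE stamp_result()
""")
conn.commit()
```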
K Applicative Layer optimization
NbSubjects: 100 000 | Test size (nbItem): 4 | Ramp-up period (secs): 300 | eAccelerator: Yes
Average time (msecs):
Simultaneous Subjects | Authentication | Main View | Process Initialization | Get Item | Perform Next
1   | 507   | 162   | 674  | 330  | 702
5   | 497   | 225   | 735  | 369  | 860
10  | 454   | 229   | 720  | 375  | 801
20  | 490   | 316   | 768  | 379  | 913
50  | 1897  | 1488  | 853  | 411  | 1224
100 | 14282 | 16430 | 4282 | 1756 | 7928
K Applicative Layer optimization
NbSubjects: 100 000 | Test size (nbItem): 4 | Ramp-up period (secs): 30 | eAccelerator: Yes
Average time (msecs):
Simultaneous Subjects | Authentication | Main View | Process Initialization | Get Item | Perform Next
1   | 354   | 170   | 734   | 365  | 803
5   | 500   | 213   | 824   | 396  | 844
10  | 1038  | 734   | 995   | 455  | 1143
20  | 2581  | 2727  | 2268  | 1154 | 2754
50  | 9020  | 13690 | 10093 | 5880 | 11984
100 | 16988 | 31989 | 29018 | 9269 | 45049
TAO Applicative Layer Optimization
- Branching rules (a minimal sketch follows below)
- Inference rules
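A minimal sketch of what branching-rule evaluation can look like (hypothetical rule format and field names, not TAO's actual rule engine): the next section of a test is chosen from the outcomes of previously taken items.

```python
# Minimal sketch of branching-rule evaluation (hypothetical rule format).
# Each rule maps a condition on previous outcomes to the next test section.

def next_section(outcomes, rules, default="section-end"):
    """Return the first section whose condition holds for the outcomes."""
    for condition, target in rules:
        if condition(outcomes):
            return target
    return default

# Example rules: branch on the score of item 1.
rules = [
    (lambda o: o.get("item-1", 0) >= 0.8, "section-advanced"),
    (lambda o: o.get("item-1", 0) >= 0.4, "section-standard"),
    (lambda o: True,                      "section-remedial"),
]

print(next_section({"item-1": 0.9}, rules))   # section-advanced
print(next_section({"item-1": 0.5}, rules))   # section-standard
print(next_section({"item-1": 0.1}, rules))   # section-remedial
```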
Methods and tools for Systematic Benchmarking
- JMeter: simulate large-scale solicitation (a result-aggregation sketch follows below)
- Xdebug, Webgrind: profile and optimize code
- Selenium tests: check user experience, test automation
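As a small companion to the JMeter benchmarks above, here is a sketch that computes per-transaction averages of the kind shown in the tables (assuming JMeter was configured to write CSV results with at least the 'label', 'elapsed' and 'success' columns; 'results.jtl' is a placeholder file name):

```python
import csv
from collections import defaultdict

# Sketch: aggregate a JMeter CSV results file into per-transaction averages.
# Assumes the CSV result log contains 'label', 'elapsed' (ms) and 'success'.

totals = defaultdict(lambda: [0, 0, 0])   # label -> [sum_elapsed, count, errors]

with open("results.jtl", newline="") as f:
    for row in csv.DictReader(f):
        entry = totals[row["label"]]
        entry[0] += int(row["elapsed"])
        entry[1] += 1
        entry[2] += row["success"].lower() != "true"

for label, (total, count, errors) in sorted(totals.items()):
    print(f"{label:30s} avg={total / count:8.1f} ms  samples={count}  errors={errors}")
```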
Decision Systems / Information to CBA managers
- Identify CBA-relevant attributes
- Capacity/feasibility
Functionality
Main focus: Advanced Tests and Results
Tests
- Open web tests
- Define bags of items, item selection policies (Random, Erandom, Information Quantity, User-defined) and halt criteria (a minimal sketch follows after this list)
- Embed tests in other tests (sub-process management)
- Design tests easily: authoring tool
- Layout tests
- Time management
- Add web activities
Results
- Rpad
- Feedback
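A minimal sketch of one such policy (random selection without replacement from a bag of items, with a fixed-length halt criterion; hypothetical function names, not TAO's item selection API):

```python
import random

# Minimal sketch of a "random" item selection policy over a bag of items,
# with a simple halt criterion (stop after max_items).  Hypothetical names;
# Erandom, Information Quantity and user-defined policies would plug in here.

def select_items(bag, max_items, rng=random):
    """Draw items without replacement until the halt criterion is met."""
    remaining = list(bag)
    selected = []
    while remaining and len(selected) < max_items:
        item = rng.choice(remaining)
        remaining.remove(item)
        selected.append(item)
    return selected

bag = [f"item:{i}" for i in range(1, 21)]
print(select_items(bag, max_items=4))   # e.g. ['item:7', 'item:13', 'item:2', 'item:19']
```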
Security
Main focus: Security
Challenges in CBA
- Test taker identification
- Psychometric validity (controlled environment)
- Cheating
- Item/test disclosure
- Data privacy
- Reliability
Main focus: Security
Authentication
- Extend authentication possibilities: login/password, LDAP directory, proctoring, fingerprint, smart cards (a minimal sketch of the LDAP case follows below)
- Controlled test delivery: experiment with test delivery including a customized version of a web browser with lock-down features
- Item exposure control
- Analyse logs of behaviour using rule patterns
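As a rough sketch of the LDAP option (using the third-party ldap3 library; the server address and DN layout are placeholder assumptions, not TAO's configuration): authenticating a test taker reduces to attempting an LDAP bind with their credentials.

```python
from ldap3 import Server, Connection

# Sketch of LDAP-backed authentication (ldap3 library; server address and DN
# layout below are placeholders).  A test taker is authenticated if a bind
# with their DN and password succeeds.

def ldap_authenticate(login: str, password: str) -> bool:
    server = Server("ldap://ldap.example.org")
    user_dn = f"uid={login},ou=people,dc=example,dc=org"
    conn = Connection(server, user=user_dn, password=password)
    try:
        return conn.bind()          # True on successful bind, False otherwise
    finally:
        conn.unbind()

print(ldap_authenticate("testtaker42", "secret"))
```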
contact@tao.lu