RESEARCH REPORT

Comparing Microsoft .NET
and IBM WebSphere/J2EE
A Productivity, Performance, Reliability and Manageability Analysis



http://www.MiddlewareRESEARCH.com

David Herst with William Edwards and Steve Wilkes
September 2004

research@middleware-company.com

1 Disclosures
1.1 Research Code of Conduct
The Middleware Company offers the world’s leading knowledge network for middleware
professionals. The Middleware Company operates communities, sells consulting and conducts
research. As a research organization, The Middleware Company is dedicated to producing
independent intelligence about techniques, technologies, products and practices in the
middleware industry. Our goal is to provide practical information to aid technical decision
making.
• Our research is credible. We publish only what we believe and can stand behind.
• Our research is honest. To the greatest extent allowable by law we publish the
parameters, methodology and artifacts of a research endeavor. Where the research
adheres to a specification, we publish that specification. Where the research produces
source code, we publish the code for inspection. Where it produces quantitative results,
we fully explain how they were produced and calculated.
• Our research is community-based. Where possible, we engage the community and
relevant experts for participation, feedback, and validation.

If the research is sponsored, we give the sponsor the opportunity to prevent publication if they
deem that publishing the results would harm them. This policy allows us to preserve our
research integrity, and simultaneously creates incentives for organizations to sponsor creative
experiments as opposed to scenarios they can “win.”
This Code of Conduct applies to all research conducted and authored by The Middleware
Company, and is reproduced in all our research reports. It does not apply to research products
conducted by other organizations that we may publish or mention because we consider them of
interest to the community.
1.2 Disclosure
This study was commissioned by Microsoft.
The Middleware Company has in the past done other business with both Microsoft and IBM.
Moreover, The Middleware Company is an independently operating but wholly owned
subsidiary of VERITAS Software (www.veritas.com, NASDAQ:VRTS). VERITAS and IBM have
a number of business relationships in certain technology areas, and compete directly against
each other in other technology areas.
Microsoft commissioned The Middleware Company to perform this study on the expectation
that we would remain vendor-neutral and therefore unbiased in the outcome. The Middleware
Company stands behind the results of this study and pledges that it conducted the study impartially.
1.3 Why are we doing this study? What is our “agenda”?
We are compelled to answer questions such as this one due to the controversy that sponsored
studies occasionally create.

First, what our agenda is not: It is not to demonstrate that a particular company, product,
technology, or approach is “better” than others.
Simple words such as “better” or “faster” are gross and ultimately useless generalizations. Life,
especially when it involves critical enterprise applications, is more complicated. We do our best
to openly discuss the meaning (or lack of meaning) of our results and go to great lengths to
point out the several cases in which the result cannot and should not be generalized.
Our agenda is to provide useful, reliable, and profitable research and consulting services to our
clients and to the community at large.
To help our clients in the future, we believe we need to be experienced and proficient in a
number of platforms, tools, and technologies. We conduct serious experiments such as this one
because they are great learning experiences, and because we feel that every technology
consulting firm should conduct some learning experiments to provide their clients with the best
value.
If we go one step further and ask technology vendors to sponsor the studies (with both
expertise and expenses), if we involve the community and known experts, and if we document
and disclose what we’re doing, then we can:
• Lower our cost of doing these studies
• Do bigger studies
• Do more studies
• Make sure we don’t do anything silly in these studies and reach the wrong conclusions
• Make the studies learning experiences for the entire community (not just us)
1.4 Does a “sponsored study” always produce results favorable to the sponsor?
No.
Our arrangement with sponsors is that we will write only what we believe, and only what we can
stand behind, but we allow them the option to prevent us from publishing the study if they feel it
would be harmful publicity. We refuse to be influenced by the sponsor in the writing of this
report. Sponsorship fees are not contingent upon the results. We make these constraints clear
to sponsors up front and urge them to consider the constraints carefully before they
commission us to perform a study.

2 TABLE OF CONTENTS
1 DISCLOSURES.................................................................................................2
1.1 Research Code of Conduct.........................................................................2
1.2 Disclosure.....................................................................................................2
1.3 Why are we doing this study? What is our “agenda”?................................2
1.4 Does a “sponsored study” always produce results favorable to the
sponsor?.......................................................................................................3
2 TABLE OF CONTENTS.....................................................................................4
3 EXECUTIVE SUMMARY...................................................................................9
3.1 The Teams...................................................................................................9
3.2 The System..................................................................................................9
3.3 The Implementations...................................................................................9
3.4 Developer Productivity Results....................................................................9
3.5 Configuration and Tuning Results.............................................................10
3.6 Performance Results.................................................................................10
3.7 Reliability and Manageability Results........................................................11
4 INTRODUCTION..............................................................................................12
4.1 How this Report is Organized....................................................................12
4.2 Goals of the Study.....................................................................................13
4.3 The Approach.............................................................................................14
4.4 The ITS System.........................................................................................14
4.5 Development Environments Tested..........................................................16
4.6 Application Platform Technologies Tested................................................17
4.7 Application Code Availability......................................................................17
5 THE EVALUATION METHODOLOGY............................................................18
5.1 The Teams.................................................................................................18
5.1.1 The IBM WebSphere Team.....................................................................19
5.1.2 The Microsoft .NET Team........................................19
5.2 Controlling the Laboratory and Conducting the Analysis..........................20
5.3 The Project Timeline..................................................................................20
5.3.1 Target Schedule.....................................................................................20
5.3.2 Division of Lab Time Between the Teams.................................................21
5.3.3 Detailed Schedule...................................................................................21
5.4 Laboratory Rules and Conditions..............................................................22
5.4.1 Overall Rules..........................................................................................22
5.4.2 Development Phase................................................................................23
5.4.3 Deployment and Tuning Phase................................................................23
5.4.4 Testing Phase........................................................................................24
5.5 The Evaluation Tests.................................................................................24
6 THE ITS PHYSICAL ARCHITECTURE...........................................................25
6.1 Details of the WebSphere Architecture.....................................................27
6.1.1 IBM WebSphere.....................................................................................27
6.1.2 IBM HTTP Server (Apache).....................................................................28
6.1.3 IBM Edge Server....................................................................................28
6.1.4 IBM WebSphere MQ...............................................................................29
6.2 Details of the .NET Architecture................................................................29
6.2.1 Microsoft Internet Information Services (IIS).............................................29
6.2.2 Microsoft Network Load Balancing (NLB).................................................29

6.2.3 Microsoft Message Queue (MSMQ).........................................................29
7 TOOLS CHOSEN............................................................................................30
7.1 Tools Used by the J2EE Team..................................................................30
7.1.1 Development Tools.................................................................................30
7.1.1.1 Rational Rapid Developer Implementation........................................31
7.1.1.2 WebSphere Studio Application Developer Implementation.................31
7.1.2 Analysis, Profiling and Tuning Tools.........................................................32
7.2 Tools Used by the .NET Team..................................................................32
7.2.1 Development Tools.................................................................................32
7.2.2 Analysis, Profiling and Tuning Tools.........................................................32
8 DEVELOPER PRODUCTIVITY RESULTS.....................................................34
8.1 Quantitative Results...................................................................................34
8.1.1 The Basic Data.......................................................................................34
8.1.2 .NET vs. RRD.........................................................................................36
8.1.3 .NET vs. WSAD......................................................................................37
8.2 RRD Development Process.......................................................................37
8.2.1 Architecture Summary.............................................................................37
8.2.1.1 RRD Applications............................................................................37
8.2.1.2 Database Access............................................................................38
8.2.1.3 Overall Shape of the Code...............................................................38
8.2.1.4 Distributed Transactions..................................................................39
8.2.2 What Went Well......................................................................................39
8.2.2.1 Web Interfaces................................................................................39
8.2.2.2 Web Service Integration...................................................................39
8.2.3 Significant Technical Roadblocks.............................................................39
8.2.3.1 Holding Data in Sessions.................................................................39
8.2.3.2 Web Service Integration...................................................................40
8.2.3.3 Configuring and Using WebSphere MQ............................................40
8.2.3.4 Handling Null Strings in Oracle.........................................................40
8.2.3.5 Building the Handheld Module..........................................................40
8.2.3.6 Miscellaneous RRD Headaches.......................................................41
8.3 WSAD Development Process....................................................................42
8.3.1 Architecture Summary.............................................................................42
8.3.1.1 Overall Shape of the Code...............................................................42
8.3.1.2 Distributed Transactions..................................................................43
8.3.1.3 Organization of Applications in WSAD..............................................43
8.3.2 What Went Well......................................................................................44
8.3.2.1 Navigating the IDE..........................................................................44
8.3.2.2 Building for Deployment...................................................................44
8.3.2.3 Testing in WebSphere.....................................................................44
8.3.2.4 Common Logic in JSPs...................................................................44
8.3.3 Significant Technical Roadblocks.............................................45
8.3.3.1 XA Recovery Errors from Server......................................................45
8.3.3.2 Miscellaneous WSAD Headaches....................................................45
8.4 Microsoft .NET Development Process......................................................46
8.4.1 .NET Architecture Summary....................................................................46
8.4.1.1 Organization of .NET Applications....................................................46
8.4.1.2 Database Access............................................................................47
8.4.1.3 Distributed Transactions..................................................................48
8.4.1.4 ASP.NET Session State..................................................................48
8.4.2 What Went Well......................................................................................48
8.4.3 Significant Technical Roadblocks.............................................................48
8.4.3.1 Transactional MSMQ Remote Read.................................................48
8.4.4 Miscellaneous .NET Headaches..............................................................50

8.4.4.1 DataGrid Paging.............................................................................50
8.4.4.2 Web Services Returning DataSets....................................................50
8.4.4.3 The Mobile Application....................................................................51
8.4.4.4 Model Object Class Creation............................................................51
9 CONFIGURATION AND TUNING RESULTS.................................................52
10 WEBSPHERE CONFIGURATION AND TUNING PROCESS SUMMARY....54
10.1 RRD Round: Installing Software................................................................54
10.1.1 Starting Point..........................................................................................54
10.1.2 Installing WebSphere Network Deployment..............................................54
10.1.3 Installing IBM HTTP Server.....................................................................55
10.1.4 Installing IBM Edge Server......................................................................55
10.2 RRD Round: Configuring the System........................................................55
10.2.1 Configuring JNDI....................................................................................55
10.2.2 Configuring the WebSphere Web Server Plugin........................................56
10.3 RRD Round: Resolving Code Bottlenecks................................................56
10.3.1 Rogue Threads.......................................................................................56
10.3.2 Optimizing Database Calls......................................................................56
10.3.3 Optimizing the Web Service....................................................................56
10.3.4 Paging Query Results.............................................................................57
10.3.5 Caching JNDI Objects.............................................................................57
10.3.6 Using DTOs for Work Tickets..................................................................58
10.3.7 Handling Queues in Customer Service Application....................................58
10.4 RRD Round: Tuning the System for Performance....................................58
10.4.1 Tuning Strategy......................................................................................58
10.4.2 Performance Indicators...........................................................................58
10.4.3 Tuning the JVM......................................................................................59
10.4.3.1 Garbage Collection..........................................................................59
10.4.3.2 Heap Size.......................................................................................60
10.4.4 Vertical Scaling.......................................................................................61
10.4.5 Database Tuning....................................................................................61
10.4.6 Tuning JDBC Settings.............................................................................61
10.4.7 Web Container Tuning............................................................................61
10.4.7.1 Web Thread Pool............................................................................61
10.4.7.2 Maximum HTTP Sessions................................................................61
10.4.8 Web Server Tuning.................................................................................62
10.4.9 Session Persistence...............................................................................62
10.5 WSAD Round: Issues................................................................................62
10.5.1 Use of External Libraries and Classloading in WebSphere.........................62
10.5.2 Pooling Objects......................................................................................63
10.5.3 Streamlining the Web Service I/O............................................................63
10.5.4 Optimizing Queries.................................................................................64
10.6 Significant Technical Roadblocks..............................................................64
10.6.1 Switching JVMs with WebSphere.............................................................65
10.6.2 Configuring Linux for Edge Server, Act 1..................................................65
10.6.3 Configuring Linux for Edge Server, Act 2..................................................65
10.6.4 Configuring Linux for Edge Server, Act 3..................................................67
10.6.5 Configuring JNDI for WebSphere ND.......................................................68
10.6.6 Edge Server’s Erratic Behavior................................................................69
10.6.7 Session Persistence...............................................................................70
10.6.7.1 Persisting to a Database..................................................................70
10.6.7.2 In-Memory Replication.....................................................................71
10.6.7.3 Tuning Session Persistence.............................................................72
10.6.8 Hot Deploying Changes to an Application.................................................73
10.6.9 Configuring for Graceful Failover.............................................................74

10.6.9.1 Failover Requirements.....................................................................75
10.6.9.2 Standard Topology..........................................................................75
10.6.9.3 Non-Standard Topology...................................................................76
10.6.9.4 Modified Standard Topology............................................................77
10.6.10 Deploying the WSAD Web Service..................................................78
10.6.11 The Sudden, Bizarre Failure of the Work Order Application...............78
10.6.12 Using Mercury LoadRunner.............................................................79
11 .NET CONFIGURATION AND TUNING PROCESS SUMMARY...................81
11.1 Installing and Configuring Software...........................................................81
11.1.1 Network Load Balancing (NLB)................................................................81
11.1.2 ASP.NET Session State Server...............................................................83
11.2 Resolving Code Bottlenecks......................................................................84
11.3 Base Tuning Process.................................................................................84
11.3.1 Tuning the Database...............................................................................84
11.3.2 Tuning the Web Applications...................................................................84
11.3.3 Tuning the Servers.................................................................................85
11.3.4 Tuning the Session State Server..............................................................85
11.3.5 Code Modifications.................................................................................85
11.3.6 Tuning Data Access Logic.......................................................................85
11.3.7 Tuning Message Processing....................................................................85
11.3.8 Other Changes.......................................................................................85
11.3.9 Changes to Machine.config.....................................................................86
11.3.10 Changes Not Pursued.....................................................................86
11.4 Significant Technical Roadblocks..............................................................86
11.4.1 Performance Dips in Web Service............................................................86
11.4.2 Lost Session Server Connections............................................................86
12 PERFORMANCE TESTING............................................................................88
12.1 Performance Testing Overview.................................................................88
12.2 Performance Test Results.........................................................................88
12.2.1 ITS Customer Service Application............................................................88
12.2.2 ITS Work Order Web Application.............................................................89
12.2.3 Integrated Scenario.................................................................................91
12.2.4 Message Processing...............................................................................92
12.3 Conclusions from Performance Tests.......................................................93
13 MANAGEABILITY TESTING...........................................................................95
13.1 Manageability Testing Overview................................................................95
13.2 Manageability Test Results........................................................................95
13.2.1 Change Request 1: Changing a Database Query......................................95
13.2.2 Change Request 2: Adding a Web Page..................................................97
13.2.3 Change Request 3: Binding a Web Page Field to a Database....................97
13.3 Conclusions from Manageability Tests......................................................98
14 RELIABILITY TESTING...................................................................................99
14.1 Reliability Testing Overview.......................................................................99
14.2 Reliability Test Results...............................................................................99
14.2.1 Controlled Shutdown Test.......................................................................99
14.2.2 Catastrophic Hardware Failure Test.......................................................100
14.2.3 Loosely Coupled Test...........................................................................100
14.2.4 Long Duration Test...............................................................................101

14.3 Conclusions from Reliability Tests...........................................................101
15 OVERALL CONCLUSIONS...........................................................................102
16 APPENDIX: RELATED DOCUMENTS.........................................................105
17 APPENDIX: SOURCES USED......................................................................106
17.1 Sources Used by the IBM WebSphere Team.........................................106
17.2 Sources Used by the Microsoft .NET Team............................................106
18 APPENDIX: SOFTWARE PRICING DATA...................................................107
18.1 IBM Software............................................................................................107
18.2 Microsoft Software...................................................................................108


3 EXECUTIVE SUMMARY
This study compares the productivity, performance, manageability and reliability of an IBM
WebSphere/J2EE system running on Linux to that of a Microsoft .NET system running on
Windows Server 2003.
3.1 The Teams
To conduct the study, The Middleware Company assembled two independent teams, one for
J2EE using IBM WebSphere, the other for Microsoft .NET. Each team consisted of senior
developers similarly skilled on their respective platforms in terms of development, deployment,
configuration, and performance tuning experience.
3.2 The System
Each team received the same specification for a loosely-coupled system to be developed,
deployed, tuned and tested in a controlled laboratory setting. The system consisted of two Web
application subsystems and a handheld device user interface, all integrated via messaging and
Web services.
3.3 The Implementations
The WebSphere team developed two different implementations of the specification, one using
IBM's model-driven tool Rational Rapid Developer (RRD), the other with IBM's code-centric tool
WebSphere Studio Application Developer (WSAD). The .NET team developed its single
implementation using Visual Studio .NET as the primary development tool.
3.4 Developer Productivity Results
In the development phase of the study, the time it took each team to complete the initial
implementation (including installing all necessary development and runtime software) was
carefully measured to determine overall developer productivity. The .NET implementation was
completed significantly faster than the RRD implementation, and also faster than the WSAD
implementation.

Development Productivity

.NET vs. RRD: Significantly better.
• The greatest difference was in product installs; there was less to install and configure on the Windows Server side.
• There were also differences in developer productivity for all subsystems.
• The .NET team had a longer history with Visual Studio than the J2EE team had with RRD.

.NET vs. WSAD: Better; uncertain how much.*

RRD vs. WSAD: Worse; uncertain how much.*

* The team using WSAD had already built the same application in RRD and was therefore familiar with the specification, a productivity advantage not available during the RRD or .NET development.
3.5 Configuration and Tuning Results
After developing its system, each team was measured on how long it took to configure and
tune that system in preparation for a series of performance, manageability, and reliability tests. The .NET
team completed this stage in significantly less time than the WebSphere team did for the RRD
implementation. The .NET team took 16 man-days for configuration and tuning, while the
WebSphere team took 71 man-days (much of it, however, spent addressing software installation
issues and patching the operating system). Later, when they deployed the WSAD
implementation to the existing WebSphere infrastructure, the WebSphere team spent an
additional 24 man-days tuning and configuring.
Tuning Productivity

.NET vs. RRD: Significantly better.
• A huge part of the RRD-round time was spent patching Linux and dealing with Edge Server issues.
• Significant time was also spent re-working RRD-generated code to get better performance.

.NET vs. WSAD: Better; uncertain how much.
• Since the complete runtime platform had already been installed, configured and tuned for the RRD implementation, this stage was completed much more quickly for WSAD.

RRD vs. WSAD: Uncertain.
3.6 Performance Results
In a battery of performance tests, the .NET implementation running on Windows Server 2003
significantly outperformed the RRD implementation running on Linux in four tests. Compared
to the WSAD implementation, the .NET version performed about equally well overall, doing
better on some tests and not as well on others.

Performance

.NET vs. RRD: Significantly better on 3 of 4 tests.
• .NET achieved user throughput 66-123% higher than RRD in 3 tests.
• In the 4th test, .NET achieved 40% higher message processing throughput.

.NET vs. WSAD: About equal.
• .NET achieved higher user throughput in 1 test, slightly higher in 1, and worse in 1.
• In the 4th test, WSAD achieved nearly 3 times the message processing throughput.

RRD vs. WSAD: Significantly worse on all 4 tests.
3.7 Reliability and Manageability Results
Manageability and reliability tests revealed better results for .NET on Windows Server 2003; it
significantly surpassed the two J2EE implementations on Linux in terms of deploying changes
under load, gracefully downing servers, and handling catastrophic failover. In terms of
sustained, long-term operation under normal load, all three implementations performed equally
well.
Manageability

.NET vs. RRD: Significantly better.
• .NET had many fewer errors during deployment.
• .NET slightly faster to deploy.
• .NET preserved sessions more reliably.

.NET vs. WSAD: Significantly better.
• .NET had many fewer errors during deployment.
• .NET slightly faster to deploy.
• .NET preserved sessions more reliably.

RRD vs. WSAD: Better.
• RRD had many fewer errors during deployment.
• RRD preserved sessions more reliably.
• Time to deploy about the same.

Reliability: Handling Failover

.NET vs. RRD: Significantly better.
• RRD implementation: could not add server to cluster after graceful shutdown.
• RRD implementation could not handle catastrophic failover.

.NET vs. WSAD: Significantly better.
• WSAD implementation could not handle catastrophic failover.

RRD vs. WSAD: Worse.
• RRD implementation: could not add server to cluster after graceful shutdown.

Reliability: Sustained Operation Over a 12-Hour Period Under Moderate Load

.NET vs. RRD: Equal.
.NET vs. WSAD: Equal.
RRD vs. WSAD: Equal.

4 INTRODUCTION
Previous studies by The Middleware Company have compared tools or platforms on the basis
of one criterion or another, such as developer productivity, ease of maintenance or application
performance.
This study compares two enterprise application platforms, Microsoft .NET and IBM
WebSphere/J2EE, across a full range of technical criteria: developer productivity, application
performance, application reliability, and application manageability.
Although sponsored by Microsoft, the study was conducted independently, in a strictly
controlled laboratory environment, with no direct vendor involvement by either Microsoft or IBM.
The Middleware Company cannot emphasize enough that Microsoft had no control over the
development, testing, and results of the study, and we firmly stand by those results as accurate
and unbiased.
Toward that end, The Middleware Company has published the methodology used and the source code for both the
.NET and J2EE application implementations for public download and scrutiny. Customers can
review and comment on the methodology, examine the code, and even repeat the tests in their
own testing environment.
4.1 How this Report is Organized
This report covers every aspect of the Microsoft .NET vs. IBM WebSphere/J2EE comparison
study: its purpose and methodology, participants, rules and procedures, schedule and working
conditions, as well as the results, both qualitative and quantitative.
Section 1 discloses the conditions under which The Middleware Company conducted this
study, including our research code of conduct and our policy regarding sponsored studies such
as this one.
Section 3 gives a brief, high-level summary of the study and its results.
Section 4 (this section) introduces the study. It describes:
• The goals we tried to achieve with the study
• The unique overall approach that the study takes
• The software system that the two teams developed and tuned
• The development environments that were tested
• The technologies of the .NET and WebSphere platforms that were tested
• What study artifacts are available and how to obtain them

Section 5 covers the study’s methodology in detail:
• The composition of the two teams
• The independent auditor who controlled the study conditions and conducted the analysis
• The project schedule
• The rules and conditions that governed the two teams in the laboratory
• A summary of the tests conducted

Section 6 details the physical architecture of the system:
• The hardware infrastructure used by both teams
• The software infrastructure that each team installed


Section 7 describes the tools that each team used during the different phases of the study. In
particular, since the J2EE team built two implementations of the system using two different
development tools, this section compares the two IDEs.
Section 8 presents the developer productivity results:
• The quantitative results broken out by core development tasks
• The qualitative experiences of the two teams developing each of the three
implementations, including important technical roadblocks

Sections 9-11 present the deployment, configuration and tuning results:
• Section 9 lays out the quantitative results
• Section 10 describes the WebSphere team’s experience, including significant technical
roadblocks
• Section 11 describes the .NET team’s experience, including significant technical
roadblocks

Section 12 presents the results of the performance tests
Section 13 presents the results of the manageability tests
Section 14 presents the results of the reliability tests
Section 15 presents our conclusions
The final sections, from 16 on, contain various appendices:
• Where to find documents related to this report
• Important sources used by both teams
• Pricing data on the software used in this study
4.2 Goals of the Study
Commentary abounds about the technical merits of both J2EE and Microsoft .NET for
enterprise application development. The Middleware Company in particular has conducted
various studies in the past to compare these two enterprise platforms. Some of these studies,
such as Model Driven Development with IBM Rational Rapid Developer, address developer
productivity. Others, such as J2EE and .NET Application Server and Web Services
Performance Comparison, focus on performance. None, however, has spanned a wider set of
technical criteria that includes not only productivity and performance, but application platform
manageability and reliability as well.
This study is the first of its kind to measure all of these criteria, using a novel evaluation
approach. While we expect the study to spark controversy, we also hope it will fulfill two
important goals:
• Provide valuable insight into the Microsoft .NET and IBM WebSphere/J2EE development
platforms.
• Suggest a controlled, hands-on evaluation approach that organizations can use to structure
their own comparisons and technical evaluations of competing vendor offerings.

4.3 The Approach
This study took the approach of simulating a corporate evaluation scenario. In it, a
development team is tasked with building, deploying and testing a pilot B2B integrated
application in a fixed amount of time, after which we evaluate the results the team was able to
achieve in this time period.
In the study we executed the scenario three times, once using Microsoft .NET 1.1 running on
the Windows 2003 platform, and twice using IBM WebSphere 5.1 running on the Red Hat
Enterprise Linux AS 2.1 platform. (The latter two cases differed in the development tool used;
more on this in Section 4.5.)
We assembled two different teams, one for each platform, each similarly skilled on their
respective platforms. Each team consisted of senior developers experienced in enterprise
application architecture, application development, and/or performance tuning. The rules limited
each team to no more than two members in the lab at any time, but did not require the same
two members for all phases of the exercise.
The IBM WebSphere/J2EE team consisted of three senior developers from The Middleware
Company with 16 years’ combined experience in J2EE. The same two of these developers
built both J2EE implementations, and all three participated at different times in the deployment,
tuning and testing phases. For the installation, deployment and initial tuning of the WebSphere
platform, the J2EE team also used two independent, WebSphere-certified consultants having a
total of 7 years’ experience with the WebSphere platform.
The Microsoft .NET team consisted of three senior developers from Vertigo Software, a
California-based Microsoft Solution Provider, with a combined 10 years' experience building
software on Microsoft .NET.
The Middleware Company took pains to keep the study free of vendor influence:
• We subcontracted CN2 Technology, a third-party testing company, to prepare the
application specification, set up the testing lab, audit the development process for each
team, and independently perform the actual application testing for each platform.
• The teams did not communicate with each other during the study.
• Neither team had knowledge of the other team’s results until after the study was
completed.
• Neither Microsoft nor IBM had any influence over the development teams during the study.

It is important to note that this study represents what the development teams could achieve
using only publicly available technical materials and vendor support channels for their platform.
It does not represent what the vendors themselves might have achieved, nor what each team
might have achieved if given a longer development and tuning schedule or allowed direct
interaction with vendor consultants. Therefore, the resulting applications developed by the two
teams may not fully represent vendor best practices or vendor-approved architectures. Rather,
they reflect what customers themselves might achieve if tasked with independently building
their own custom application using publicly available development patterns, technical guidance
and vendor support channels.
4.4 The ITS System
The comparison at the heart of the study centers on the development and testing of a
loosely coupled system known as ITS. ITS is a facilities management system created for the
fictitious ITS Facilities Management Company (ITS-FMC). The system represents a B2B
integration scenario, allowing corporate customers of ITS-FMC to use a Web-based hosted

application to create and track work order requests for facilities management on their corporate
premises.
The ITS system comprises three core subsystems that operate together in both a loosely
coupled fashion (via messaging) and a tightly coupled fashion (via synchronous Web service
requests); an illustrative sketch of these two interaction styles follows the list below:
• The ITS Customer Service Application. ITS-FMC’s corporate clients use this Web-based
application to create and track work order requests for facilities management at their
premises. The application automatically dispatches work order requests via messaging to
the central ITS system, which operates across the Internet on a separate ITS-FMC internal
network. The ITS Customer Service Application also allows customers to track the status
of their work orders via Web service calls to the ITS central system, as well as view/modify
customer and user information.

• The ITS Central Work Order Processing Application. This application is operated by
ITS-FMC itself on a separate corporate network. The application receives incoming work
order requests (as messages) from the ITS Customer Service Application. It places the
requests into a database for further business processing, including assignment to a specific
on-site technician. The application hosts the Web service that returns work order status
and historical information to the ITS Customer Service Application. Additionally, this
application has a Web user interface that ITS-FMC’s central dispatching clerks can use to
search, track and update work order requests, as well as query customer information and
query/modify technician data.

• The Technician Work Order Mobile Device Application. This application operates on a
handheld device, allowing technicians to retrieve their newly assigned work items and
update work order status as they complete their work orders at the customer premises.
Technicians use this application for dispatching purposes, and to log the time spent
working on an issue so that customer billing can occur.

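To make these two integration styles concrete, the following is a minimal, hypothetical Java sketch (it is not code from either study implementation): one method dispatches a new work order asynchronously over a message queue, and another queries work order status through a synchronous HTTP call to the central system's Web service. The class name, JNDI names and endpoint URL are illustrative assumptions only.

// Hypothetical sketch of the two ITS integration styles (not code from the study).
// Loose coupling: the Customer Service application sends work orders as queue messages.
// Tight coupling: it queries work order status through a synchronous HTTP/Web service call.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class ItsIntegrationSketch {

    /** Dispatches a new work order request to the central processing application (asynchronous). */
    public void dispatchWorkOrder(String workOrderXml) throws Exception {
        InitialContext ctx = new InitialContext();
        // The JNDI names below are assumptions; a real deployment would define its own.
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/ItsConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/WorkOrderRequestQueue");

        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage(workOrderXml);
            producer.send(message); // fire-and-forget: the sender does not wait for processing
        } finally {
            connection.close();
        }
    }

    /** Fetches work order status from the central application's Web service (synchronous). */
    public String fetchWorkOrderStatus(String workOrderId) throws Exception {
        // Placeholder endpoint; the real service would be a SOAP Web service per the specification.
        URL url = new URL("http://its-central.example.com/services/workOrderStatus?id=" + workOrderId);
        HttpURLConnection http = (HttpURLConnection) url.openConnection();
        http.setRequestMethod("GET");
        StringBuilder body = new StringBuilder();
        BufferedReader reader = new BufferedReader(new InputStreamReader(http.getInputStream()));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
        } finally {
            reader.close();
        }
        return body.toString(); // raw response; real code would parse the returned XML
    }
}

The same two patterns map onto WebSphere MQ/JMS and a SOAP Web service on the J2EE side, and onto MSMQ and an ASP.NET Web service on the .NET side, as described in Section 6.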

Details on these tools, how they compare, how they were used, and other development
software used with them can be found in Section 7.
4.6 Application Platform Technologies Tested
The ITS system tests the following functionality of the two application platforms:
• Web application development
• Web application configuration/tuning
• Web application manageability, reliability and performance
• Message-based application development
• Message queue reliability and performance
• Mobile device application development
4.7 Application Code Availability
The application code for both the .NET and J2EE implementations can be downloaded from
http://www.middlewareresearch.com. Customers can download the applications and install
them in their own environments for further testing and confirmation of the results.
The discussion forum for the study is located at http://www.theserverside.com.
Finally, customers and vendors can email The Middleware Company to discuss the report,
propose further testing, or offer comments: research@middlewareresearch.com.

5 THE EVALUATION METHODOLOGY
This study was designed to simulate two enterprise development teams given a fixed amount of
time to build and tune a working pilot application according to a set of business and technical
requirements. One team developed the application using IBM WebSphere running on Linux,
while the other team developed the application using Microsoft .NET running on Windows
2003.
Development took place in a controlled laboratory environment where the time taken to
complete the system was carefully measured. The two teams worked from a common
application specification derived from a set of business and technical requirements. Neither
team had access to the specification until development started in the controlled lab setting.
After developing an implementation, the team then tuned and configured it as part of a
measured deployment phase. Each implementation was then put through a set of basic
performance, manageability and reliability tests while running under load on the production
equipment. Hence this study not only compares the relative productivity achieved by each
development team, but also captures the base performance, manageability and reliability of
each application in a deployed production environment.
It is extremely important to note that the study allocated a fixed amount of time to each phase
of the project, and hence objectively documents what each team was able to achieve in that
fixed amount of time.¹ The report also includes detailed notes documenting the technical
roadblocks encountered by each team and how these were resolved.
undoubtedly spark much debate, but also shed valuable light on each platform based on actual
hands-on development and testing of a pilot business application.
5.1 The Teams
Each team fielded two developers skilled in their respective development platform, and each
team was selected such that their product experience levels and skill sets matched as closely
as possible. As noted in Section 4.3, each team could have only two members in the lab at one
time, but did not have to use the same two throughout the exercise.
Neither team included any representative from either IBM or Microsoft, and neither team was
allowed any direct interaction with vendor technicians from IBM or Microsoft other than the
standard online customer support channels available to any customer. In cases where a team
used a vendor support channel, support technicians were not told they were assisting a
research project conducted by The Middleware Company; so the team received only the
standard treatment afforded any developer on these channels.
To mirror the development process of a typical corporate development team, we allowed the
teams to consult with other members of their organizations outside the lab, to answer technical
questions and provide guidance as required. Such access to external resources was
monitored and logged, and we extended the rule prohibiting direct vendor interactions (other
than with standard customer support channels) to all resources contacted during the
development and testing phases of the project.
Here are details on the makeup and experience of the two teams.

¹ Note that under certain circumstances we allowed a team to go beyond that fixed time period. See Section 5.3.1 for details.

5.1.1 The IBM WebSphere Team
The WebSphere team consisted of three developers from The Middleware Company, described
in the following table. Members A and B developed both the RRD and WSAD implementations,
while all three members participated at different times in the tuning and testing phases.
J2EE Team Members from The Middleware Company

• Member A: 14 years of development experience, 7 years of Java, 4 years of J2EE. Broad experience with development tools and platforms; particular strength in design.
• Member B: 15 years of development experience, 8 years of Java, 6* years of J2EE. Experienced in RRD, modeling and design.
• Member C: 23 years of development experience, 8 years of Java, 6* years of J2EE. Extensive experience in tuning enterprise applications for performance.

* Includes experience with the Java servlet API predating the introduction of J2EE in 1999.
Additionally, the J2EE team used two independent, IBM-certified WebSphere consultants at
different times during the deployment and tuning phase.
• One had three years’ experience as a WebSphere administrator on various Unix platforms,
including Linux.
• The other had over four years’ experience installing, configuring and supporting IBM
WebSphere on multiple platforms, including Linux.
5.1.2 The Microsoft .NET Team
The .NET team consisted of three senior developers from Vertigo Software, a California-based
Microsoft Solution Provider, with the following credentials:
.NET Team Members from Vertigo Software

• Member A: 7 years of development experience, 7 years on the Microsoft platform, 3 years of .NET. Experienced in Web application development and design.
• Member B: 13 years of development experience, 13 years on the Microsoft platform, 4 years of .NET. Experienced in design in the presentation, business, and database tiers.
• Member C: 7 years of development experience, 5 years on the Microsoft platform, 3 years of .NET. Experienced in development and performance tuning.

5.2 Controlling the Laboratory and Conducting the Analysis
The Middleware Company subcontracted a third-party testing organization, CN2 Technology, to
write a specification for the ITS system, set up the lab environment, design the tests, monitor
and control the testing environment, and conduct the actual tests of the J2EE and .NET
implementations. CN2 strictly monitored the time spent by each development team on the
various phases of the project, and controlled the lab environment. CN2 also strictly monitored
Internet access and email access, including logging all such access from within the lab, to
ensure that neither team violated the rules of the lab.
For details on those rules, see Section 5.4.
5.3 The Project Timeline
5.3.1 Target Schedule
This study was designed with the objective that each team would complete its work in 25
workdays (five calendar weeks), distributed as follows:
• Phase 1: Development (10 days)
• Phase 2: Deployment and tuning (10 days)
• Phase 3: Formal evaluation testing (up to 5 days, as needed)

While we felt confident that the teams could complete Phases 1 and 3 in the allotted time, we
were less certain about Phase 2. If, after ten days of deployment and tuning, the
implementation did not perform up to even minimal standards, the results of formal testing in
Phase 3 would have little meaning.
So we added a requirement that each team continue their configuration and performance
tuning until satisfied that their implementation would perform well enough to actually undergo
the tests in the final week. This meant that each team was allowed to go beyond their allotted
ten days if they desired, with the understanding that all time spent would be monitored and
reported.

5.3.2 Division of Lab Time Between the Teams
To keep the two teams from communicating with each other, while at the same time preserving
the continuity of their work, we interleaved their time in the lab in the following sequence:
1. .NET team: .NET development
2. J2EE team: RRD development
3. .NET team: .NET deployment / tuning
4. .NET team: .NET evaluation testing
5. J2EE team: RRD deployment / tuning
6. J2EE team: RRD evaluation testing
7. J2EE team: WSAD development*
8. J2EE team: WSAD deployment / tuning
9. J2EE team: WSAD evaluation testing

* Note that the J2EE team developed the WSAD implementation offsite, not in the controlled lab
environment. This implementation was not in the initial scope of the project, but was added to
ensure that the performance, reliability and manageability tests painted a more complete
picture of J2EE/WebSphere for the community.

5.3.3 Detailed Schedule
The following documents the desired project schedule, including the goals
established for the development, tuning/configuration and testing of each implementation. As
explained in Section 5.3.2, the two teams occupied the lab at different times, so this schedule
was repeated for each implementation.
Desired Development and Testing Schedule
(established prior to start of exercise)

Phase 1: Development

• Day 1: Development team arrives in lab.
• Day 1 (1 hour): Overview of lab rules and hardware environment. The team was introduced to the lab environment for the first time, the lab rules were explained, and a walkthrough of the hardware was conducted.
• Day 1 (2 hours): Development team given the application specification for the first time; two-hour specification overview with Q&A. CN2 Technology provided a detailed walkthrough of the application specification and answered initial questions about it.
• Day 1: Application specification review; development tool and application server setup. The team reviewed the application specification in detail and created a strategy for dividing the work and beginning development.
• Days 1-10: Application development. The team developed the application according to the provided specification. All development time in the lab was carefully tracked for each component of the system. CN2 deemed development complete when the implementation passed a series of functional tests.

Phase 2: Deployment and Tuning

• Day 11: Review of the base performance, manageability and reliability tests and requirements, including the Mercury LoadRunner test scripts and test tool. CN2 reviewed with the team the tests to be performed and the technical requirements/goals for these tests, and provided a walkthrough of the Mercury LoadRunner testing environment and base test scripts so the team could begin configuring and tuning.
• Days 11-20+: Application performance and configuration tuning. Ten 8-hour days were initially allotted for tuning in preparation for the evaluation tests. However, the team was allowed more time if required to ensure they felt ready to conduct the actual tests.

Phase 3: Evaluation Testing

• Days 21-25: Performance, manageability and reliability tests conducted in the lab and results logged.

5.4 Laboratory Rules and Conditions
This section describes the conditions each team faced as they started each phase of the study
and the various rules governing their behavior inside and outside the laboratory environment.
5.4.1 Overall Rules
Several rules applied to the entire exercise:
• Team members could only use the provided machines for development work and Internet
access. Personal laptops were barred from the lab.
• Each day was limited to 8 hours’ working time in the lab, with an additional hour for lunch.
• The team could seek technical support and guidance from other members of their
organization outside the lab as required. They could communicate via telephone or email.

• Neither team members nor their offsite colleagues could have any interaction with vendor
technicians from IBM or Microsoft, other than through standard online customer support
channels.
• If they did use vendor support channels, team members could not reveal that they were
participating in a study involving IBM and Microsoft software; they received only the
standard treatment afforded any developer on these channels.

Note, however, that the WSAD implementation was developed after the RRD implementation,
and was developed offsite, not in the controlled lab environment.
5.4.2 Development Phase
When a team entered the lab for the first time, they were given the following initial environment:
• A development machine for each developer, pre-configured with Windows XP and Internet
access.
• Two machines with the two ITS databases pre-installed and pre-populated with data. The
database server was Microsoft SQL Server for the .NET team, Oracle for the WebSphere
team.
• Four application server machines pre-configured with the base OS installation only
(Windows Server 2003 for the .NET team, Red Hat Enterprise Linux 2.1 for the WebSphere
team).

As for augmenting or modifying this initial environment, both teams were under the same
restrictions:
• They had to install/configure their development environment (tools, source control, etc) as
part of the measured time to complete the application development phase.
• They had to install the application server software separately on each server as part of the
measured development time.
• They could not make changes to the database schemas, other than adding functions,
stored procedures, or indexes.

This rule applied specifically to coding of the RRD and .NET implementations:
• Team members were not allowed to work on code outside the lab. This meant they could
not remove code from or bring code into the lab.

For all implementations (RRD, WSAD and .NET) this rule applied:
• We allowed use of publicly available sample code and publicly available pre-packaged
libraries, since a typical corporate development team would also have access to such code.

5.4.3 Deployment and Tuning Phase
When Phase 2 began, the CN2 auditor introduced the test environment that the team would be
using. This environment consisted of:
• Mercury LoadRunner to simulate load
• A dedicated machine for the LoadRunner controller
• Some 40 additional machines to generate client load
• Base test scripts created by CN2 so that the development team did not have to spend time writing them

5.4.4 Testing Phase
The rules for Phase 3 were the most restrictive, since this phase consisted of the formal
evaluation tests conducted by the CN2 auditor:
• Team members could not modify application code or system configurations except as
needed during a test.
• After a load test was launched, the team would have to leave the lab until the test reached
completion (typically 1-4 hours later).
5.5 The Evaluation Tests
During Phase 3 the CN2 auditor conducted a variety of tests to measure manageability,
reliability and performance of the implementations. Some of these tests required the active
participation of the teams; others did not.
Most of the tests were performed under load. As mentioned above, in this study Mercury
LoadRunner running on 40 client machines was used to simulate load. CN2 provided the
teams with a set of LoadRunner scripts for each implementation.
The three sets of scripts were carefully constructed to perform the same set of actions, ensuring that they tested the exact same functionality for each implementation in a consistent manner.²

Here is a summary of the tests performed; for more details and for test results see Sections 12
to 14.
• Performance capacity (stress test). How many users can the system handle before
response times become unacceptable or errors occur at a significant rate?
• Performance reliability. Given a reasonable load (based on the results of the stress test),
how reliably does the system perform over a sustained period (say, 12 hours)?
• Efficiency of message processing. How quickly can the Work Order module process a
backlog of messages in the queue?
• Ease of implementing change requests. How quickly and easily can a developer
implement a requested change to the specification?
• Ease and reliability of planned maintenance. How easily and seamlessly can system
updates be deployed to the system while under load?
• Graceful failover. How well does the clustered Customer Service module respond when an instance goes down?
• Session sharing under load. If one of the clustered Customer Service instances fails
under load, are the sessions that were handled by the failed instance seamlessly resumed
by the other Customer Service instance?

² CN2 could not provide a single set of scripts for all three implementations because the three differed in certain low-level details, such as the URL of a given page, the names of fields in that page and whether that page was to be invoked with GET or POST.

6 THE ITS PHYSICAL ARCHITECTURE
This section describes the hardware and software infrastructure each team used to run its
implementation of the ITS system.
The specification required that the teams deploy to identical hardware; in fact, they used the
same hardware. On the machines hosting the applications and the message server, each team
had its own removable hard drive that was swapped in. On the machines hosting databases,
the two teams’ DBMSs shared the same drive, but were never run simultaneously. In this way
all three implementations used the very same processors, memory and network hardware.
On the software side, the teams started with the operating systems and database engines
already installed. They were responsible for installing the application server, message server,
load balancing software and handheld device software.

The following table lists the hardware and software used by each team:
ITS Subsystem: Customer Service application (2 servers, identical and load-balanced) and Work Order Processing application (1 server)
• Hardware: Hewlett Packard DL580 with four 1.3 GHz processors, 2 GB of RAM and Gigabit networking
• .NET software: Windows Server 2003; .NET 1.1 development framework and runtime (part of Windows Server 2003)
• J2EE software: Red Hat Enterprise Linux AS 2.1; IBM WebSphere Network Deployment 5.1

ITS Subsystem: Dedicated durable message queue server (1 server)
• Hardware: Hewlett Packard DL580 with four 1.3 GHz processors, 2 GB of RAM and Gigabit networking
• .NET software: Windows Server 2003; Microsoft MSMQ (part of Windows Server 2003)
• J2EE software: Red Hat Enterprise Linux AS 2.1; IBM MQSeries; IBM WebSphere Deployment Manager; IBM Edge Server

ITS Subsystem: Customer Service database (1 server) and Work Order database (1 server)
• Hardware: Hewlett Packard DL760 with eight 900 MHz processors and 4 GB of RAM, attached to a SAN storage array with 500 GB of storage in a RAID 10 configuration
• .NET software: Windows Server 2003; SQL Server 2000 Enterprise Edition
• J2EE software: Windows Server 2003; Oracle 9i Enterprise Edition

ITS Subsystem: Technician Mobile Device application
• Hardware: Hewlett Packard iPAQ 5500 Pocket PC
• .NET software: .NET Compact Framework
• J2EE software: Insignia Jeode JVM; Mobile Information Device Profile (MIDP) 2.0


The following diagram shows the physical deployment of the ITS system to the network,
including all the machines listed above. It also shows the machine hosting the Mercury
LoadRunner controller and the 40 machines providing client load.

Figure 2. ITS Connected System Physical Deployment Diagram
6.1 Details of the WebSphere Architecture
The basic WebSphere infrastructure described below was used with both J2EE
implementations. The J2EE team installed it during the RRD round and did not substantially
change it during the WSAD round.
6.1.1 IBM WebSphere
The team used WebSphere Network Deployment (ND) Edition version 5.1. This version of
WebSphere has the same core functionality as basic WebSphere, but allows for central
administration of multiple WebSphere instances across a network. It also allows instances to
be clustered for the purpose of application deployment, so that, for example, one can deploy
the Customer Service application to two WebSphere instances at once.

Initially the team included three nodes in the WebSphere network: the two Customer Service
machines and the single Work Order machine. Later they included the Message Queue Server
machine as well, so that they could run a WebSphere instance there for sharing session state
in the Customer Service application.
WebSphere ND includes a Deployment Manager, a separate server dedicated to system
administration. This server communicates with node agents on each node to handle remote
deployment and configuration. The team installed the Deployment Manager on the same host
as the MQ server.
In terms of WebSphere instances, the team started with one per node. Along the way they
experimented with multiple instances per node (for example, to run each Work Order module in
a dedicated instance), but found no improvement and returned to the original configuration.
6.1.2 IBM HTTP Server (Apache)
WebSphere has an HTTP transport listening on port 9080 that acts as a Web server. This
transport is adequate for development and for running under very light loads, but cannot handle
the traffic of even moderate loads. For this reason the team needed a separate Web server.
Even though Red Hat Linux includes an Apache Web server distribution, the team chose to
install IBM HTTP Server (IHS) 2.0, IBM’s distribution of the Apache Web server.
Using an external Web server necessitates the use of IBM’s Web Server Plugin, an interface
between the Web server and the WebSphere HTTP transport. The plugin consists of a native
runtime library and an XML configuration file, plugin-cfg.xml. Applying the plugin consists of
these steps:
1. Modify Apache’s httpd.conf file to load the plugin library.
2. Modify Apache’s httpd.conf to point to the plugin configuration file.
3. From within WebSphere’s Deployment Manager, automatically update the plugin
configuration file and copy it to the ND nodes. Normally this process takes only a few
seconds but must be done every time there is a change in the configuration of an
application using the Web (such as the name or location of the Web application).
4. Bounce IHS if the configuration file has changed.
Note that along the way the team found reason to customize the plugin configuration in ways
not possible through the Deployment Manager. That meant they departed from the normal
plugin configuration update process described in Step 3. For details, see Section 10.6.9.4.
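To illustrate the first two steps, they amount to adding two directives to Apache's httpd.conf. The following is a minimal sketch only; the module and file paths shown are assumed locations for a typical IHS 2.0 plugin installation, not the team's actual paths:

```apache
# Load the WebSphere Web Server Plugin native library (assumed install path)
LoadModule was_ap20_module /opt/IBMHttpServer/Plugins/bin/mod_was_ap20_http.so

# Point the plugin at its generated configuration file (assumed path);
# the Deployment Manager regenerates and copies this file in Step 3
WebSpherePluginConfig /opt/IBMHttpServer/Plugins/config/plugin-cfg.xml
```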
6.1.3 IBM Edge Server
The WebSphere team thought carefully about how to handle load balancing to and failover
between the two Customer Service instances. One simple solution is a DNS round-robin arrangement, in which a DNS server resolves a single cluster hostname alternately to the addresses of the two Customer Service machines, spreading requests evenly between them. This solution addresses load balancing, but not failover.
To handle both, the WebSphere team decided to use IBM’s preferred solution, Edge Server.
This component sits in front of the Web servers and balances load among them. But it also
monitors the health of the Web servers and channels traffic away from one that fails.
The team installed Edge Server on the MQ server host, because that machine was guaranteed
not to go down. Then they had to configure that host and the Customer Service hosts at the
operating system level for Edge Server to work properly. These configuration requirements led
to some of the most vexing problems faced by the WebSphere team, as discussed in Section
10.6.

6.1.4 IBM WebSphere MQ
The WebSphere team used IBM’s WebSphere MQ Series for its message server. MQ was
installed on the host designated for that purpose. Using it also required that host to have an
instance of WebSphere, whose JMS server acts as a front end for MQ.
6.2 Details of the .NET Architecture
For the servers, the .NET team required no software other than Windows Server 2003. With a
base installation of Windows Server 2003, enabling Application Server mode installs and
configures the .NET Framework, Internet Information Services, MSMQ, and all other
components that the .NET team needed to build their implementation. The .NET Framework
has built-in support for Web services and message queuing, which enabled the team to provide
integration between the Customer Service and Work Order applications.
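As a rough illustration of that built-in support, an ASP.NET Web service in .NET 1.1 is simply a class exposed through an .asmx endpoint in IIS, with methods marked [WebMethod] becoming SOAP operations. The service and method names below are hypothetical and do not reflect the actual ITS contract:

```csharp
using System.Web.Services;

// Hypothetical sketch of a ticket-search Web service hosted by the Work Order
// application; the real operation names and types come from the ITS specification.
[WebService(Namespace = "http://example.org/its/workorder")]
public class TicketSearchService : WebService
{
    // Called over HTTP/SOAP by the Customer Service application.
    [WebMethod]
    public string[] FindTickets(string customerId)
    {
        // In the real implementation this would query the Work Order database.
        return new string[0];
    }
}
```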
6.2.1 Microsoft Internet Information Services (IIS)
Microsoft Windows Server 2003 comes with Internet Information Services (IIS) version 6.0. IIS plays the same role on Windows Server 2003 that Apache plays on Linux: a widely used production Web server. ASP.NET, the Web application engine for .NET applications, is integrated directly with IIS 6.0.
In addition, Visual Studio enables developers to deploy applications to production servers or
staging servers directly from their development machines, a feature that the .NET development
team utilized during development.
6.2.2 Microsoft Network Load Balancing (NLB)
One of the requirements for the ITS Customer Service application was to support load
balancing and failover. Microsoft Network Load Balancing (NLB) Service is designed to provide
this functionality. Built into Windows Server 2003, this service balances the load among
multiple Web servers (in this case, two) and monitors their health, providing sub-second failover
from a server that fails. The .NET team had to configure NLB on the two servers that hosted the
Customer Service application using the graphical configuration tools built into Windows Server
2003.
The details of how the .NET team configured NLB for the ITS system are found in Section 11.1.1.
6.2.3 Microsoft Message Queue (MSMQ)
The ITS specification also required loosely coupled integration between the Customer Service
and Work Order applications via a message-driven architecture. The .NET team used
Microsoft Message Queue (MSMQ) to satisfy this requirement.
Like IIS and NLB, MSMQ comes built into Microsoft Windows Server 2003. The .NET team had to enable MSMQ and create and configure the queues for the application. The .NET Framework provides classes for accessing and manipulating the queues. As per the specification, a separate,
dedicated queue server was used for message queuing, with the Customer Service application
writing to the remote queue on this server, and the Work Order application reading messages
from this remote queue for processing.
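As a minimal sketch of what such code looks like, the System.Messaging classes let one application send to a remote private queue while another receives from it. The host name, queue name and message format below are assumptions, not the team's actual implementation:

```csharp
using System;
using System.Messaging;

// Minimal sketch, not the team's actual code. "mqserver" and "workorders"
// are assumed names for the dedicated queue host and its private queue.
public class WorkOrderQueueSketch
{
    const string QueuePath = @"FormatName:DIRECT=OS:mqserver\private$\workorders";

    // Customer Service side: write a work-order message to the remote queue.
    public static void SendWorkOrder(string workOrderXml)
    {
        using (MessageQueue queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new Type[] { typeof(string) });
            queue.Send(workOrderXml, "New work order");
        }
    }

    // Work Order side: block until a message arrives, then return its body.
    public static string ReceiveWorkOrder()
    {
        using (MessageQueue queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new Type[] { typeof(string) });
            Message message = queue.Receive(TimeSpan.FromSeconds(30));
            return (string)message.Body;
        }
    }
}
```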

7 TOOLS CHOSEN
Each team had the freedom to choose any development, analysis, profiling and support tools
they wished to complete their work for their platform. This section describes the various tools
they chose.
7.1 Tools Used by the J2EE Team
7.1.1 Development Tools
The J2EE team had a broad choice of development environments. To give a better overview of
the tradeoffs between different types of tools, two implementations for WebSphere were built:
one using IBM’s Rational Rapid Developer (RRD), the second using IBM WebSphere Studio
Application Developer (WSAD).
RRD is a model-driven, visual tool that provides O/R mapping and data-binding technology
and generates J2EE code from visual constructs. WSAD is a more mainstream J2EE
development tool dedicated to WebSphere.
These two IDEs have important differences that pertain to this study:
• RRD’s approach emphasizes developer productivity. But the code it generates is not
optimized for performance and does not lend itself to manual tuning.
• WSAD’s approach requires the developer to write much more code manually, but gives the
developer complete freedom to optimize that code.
• While both tools work well with WebSphere, WSAD integrates more tightly and provides a
lightweight version of WebSphere for development testing.

This table compares the two IDEs in greater detail:
Comparing Rational Rapid Developer (RRD) and WebSphere Studio Application Developer (WSAD) as Development Tools

Approach to J2EE development
• RRD: Takes a model-driven approach that removes you, the developer, from the J2EE platform by several degrees. Has you model your classes, pages, components, messages and business logic in its own format; it then generates Java and JSP code for you. That generated code becomes just another product of the tool which, like generated deployment descriptors, you would normally not touch, much less edit.
• WSAD: Takes a "conventional" approach in that you must write and manage all the Java and JSP code directly. WSAD may offer templates or wizards to get you started on a particular coding path, but you must still handle the resulting code.

Approach to page development
• RRD: Has you place controls in a page design space, then bind them to data objects from your class model. Each page is served by its own subset of classes and attributes from the model.
• WSAD: Again, more "conventional": you write business logic code to be used in standard JSPs, then write the JSPs themselves. If desired you can use Struts.

Deployment platform for development
• RRD: Supports a number of popular platforms, including WebSphere, WebLogic and Apache Tomcat. For development purposes IBM recommends deploying to the much lighter-weight Tomcat platform, then at the end regenerating for and deploying to WebSphere.
• WSAD: Dedicated to WebSphere. You can deploy your application directly to a WebSphere instance. WSAD also includes a WebSphere test environment (a lightweight version of WebSphere) that speeds development.

Configuring WebSphere
• RRD: Has platform settings for WebSphere that let you specify JDBC datasources, JMS message queues and other critical resources. But these settings affect the application only, not the target platform. You must still configure WebSphere directly.
• WSAD: Lets you configure your target platform, whether the WebSphere test environment or a real WebSphere instance, through the IDE. Conversely, you can also configure WSAD's test environment through a standard WebSphere admin console just as you would the "real" WebSphere.
7.1.1.1 Rational Rapid Developer Implementation
For their first implementation the J2EE team used RRD for most, but not all, development work:
• They used RRD to build the two Web applications (Customer Service and Work Order), the
Work Order message consumption module and the Work Order Web service, which
answers work ticket queries from the Customer Service application.
• RRD was not suited for developing the handheld module, however. For that piece the
team used Sun One Studio, Mobile Edition.
• During the tuning phase they developed a small library of custom classes to solve some
performance bottlenecks. They used TextPad to write the classes and the Java
Development Kit (JDK) to compile and package the library.

For source control of the RRD code, the team used Microsoft Visual Source Safe, which
integrates nicely with RRD.
7.1.1.2 WebSphere Studio Application Developer Implementation
For their second implementation the J2EE team used WSAD for all development, except the
handheld module, which they did not redevelop in the second implementation.

Although WSAD works with source control systems such as CVS and Rational ClearCase, the team used neither for this implementation. Instead they simply divided the work carefully and copied changed source files between their two development machines.
7.1.2 Analysis, Profiling and Tuning Tools
To profile the application, identify bottlenecks within the code and analyze system performance,
the team used these tools at various times:
• WebSphere’s tracing service. A crude runtime monitor built into WebSphere. From the
admin console you can select all the different activities you want to monitor in WebSphere;
the list covers everything the server does. You choose the categories, restart the server,
and see the output in a log file.
• IBM Tivoli Performance Viewer (TPV). A profiler that integrates easily with WebSphere.
It displays a wide range of performance information. TPV also has a performance advisor
that recommends changes for better performance.
• VERITAS Indepth for J2EE. This is a sophisticated profiler that lets you measure the
performance of code to almost any desired granularity.
• Borland Optimizeit. This is another profiling tool that gave the team important information
about thread usage which Indepth could not provide.
• Oracle Enterprise Manager. The team used this tool to manage and tune the database,
for example to adjust the size of Oracle’s buffer cache. But Enterprise Manager also has a
suite of analysis tools that the team used from time to time. By far the most useful was Top
SQL, which gives valuable statistics on the SQL statements executed against the
database.
• top and Windows Performance Monitor. The team used these simple tools to monitor
CPU usage on the Linux and Windows machines respectively.
7.2 Tools Used by the .NET Team
7.2.1 Development Tools
For development, the Microsoft .NET team chose Microsoft® Visual Studio® .NET™ Enterprise
Architect Edition 2003, coupled with Visual SourceSafe® 6.0d for source control. They used
Visual Studio to lay out ASP.NET Web Forms graphically, but coded the back-end business
and data logic manually for all applications using C#.
For Web development, Visual Studio includes a feature that makes deployment fairly easy.
The Copy Project mechanism allows a developer to deploy a Web application to any machine
with IIS installed.
The .NET team also used Visual Studio to develop the handheld application since they chose
to target a Microsoft Windows Mobile 2003-based Pocket PC, which includes the .NET
Compact Framework. To develop the application, the team used Visual Studio’s Pocket PC
emulator; for testing and deployment, they used the real device. With Visual Studio, deploying
to a real device was straightforward.
7.2.2 Analysis, Profiling and Tuning Tools
The primary tool the .NET team used for analysis was Windows Performance Monitor. Given
the broad range of performance counters available in Windows, this tool provides fine-grained visibility into the resource utilization of the applications under investigation.

To help them analyze database activity, the team used these Microsoft SQL Server 2000 tools:
• Query Analyzer (Index Tuning Wizard)
• Enterprise Manager
• Profiler

8 DEVELOPER PRODUCTIVITY RESULTS
The focus during the development phase of the project was on developer productivity: how
quickly and easily can a team of two developers build the ITS system to specification?
Section 8.1 presents the quantitative productivity results of the development phase. The rest of
Section 8 details the experiences of the two development teams: the architecture they chose
for their implementations, what went well for them during the development phase, and the
major roadblocks they encountered.
8.1 Quantitative Results
The .NET and RRD implementations were built in the controlled environment of the lab. During
their development, the team members carefully tracked and the auditor recorded the time spent
building each of the core elements of the ITS system. Since the Oracle 9i and SQL Server
2000 databases were fully installed and configured in advance, neither team had to spend time
creating database schemas.
The WSAD implementation, on the other hand, was built later under special circumstances:
• The J2EE team had already built the application once.
• The team did not work in the lab, so the auditor could not monitor their time. Instead they
carefully tracked their own times.
• The team did not reinstall WebSphere on the development or production machines.
• The team did not redevelop the handheld application.

For these reasons the auditor’s report provides productivity results only for the .NET and RRD
implementations, while issuing a disclaimer regarding the WSAD results.
8.1.1 The Basic Data
The study tracked these core development tasks:
• Installing Products. This included time to install software on both development and
server machines. All equipment used by the two teams initially had only a core OS installation, except for the two database machines, on which the Customer Service and Work Order databases were already installed and pre-loaded with an initial data set.
• Building the Customer Service Web Application. This included constructing the Web UI
and backend processing for the Customer Service application according to the provided
specification, as well as the functionality to send messages. It also included creating the
Web service request that provides the ticket search functionality in the Customer Service
application, and ensuring the application could be deployed to a cluster of two load-
balanced servers with centralized server-side state management for failover.
• Building the Work Order Processing Application. This included building the Web UI
and backend processing for the Work Order application according to the provided
specification, as well as the message handling functionality. It also included creating the
Web service for handling ticket search requests from the Customer Service application.
• Building the Technician Mobile Device (Handheld) Application. This development task
included building the complete mobile device application according to the provided
specification.

• System-Wide Development Tasks. This category included working out general design
issues, writing shared code and general deployment and testing.
The following table shows the actual time spent building the .NET and RRD implementations, in
developer hours. The data come from the auditor’s report:
Time Spent Developing the ITS System, by Development Task
(in developer hours)

Development Task / ITS System Component        .NET / VS.NET    J2EE / RRD
Customer Service Application                         40              69
Work Order Processing Application                    41              59
System-Wide Development Tasks                         2              29
Subtotal                                             83             157
Product Installs                                      4              22
Technician Mobile Device Application                  7              16
Overall total                                        94             196

The WSAD implementation was created later by the same team that had previously created the RRD implementation. It was also created outside of the controlled lab setting. Hence, productivity data for this implementation cannot be directly compared with the other two, since the team benefited from having already built the same application once. In addition, the team did not reinstall the WebSphere software or redevelop the handheld application for the WSAD implementation.

Nevertheless, the following table shows the relative time spent developing the WSAD implementation of the ITS system. The data come from the developers' logs.

Time Spent Developing the ITS System, by Development Task
(in developer hours)

Development Task / ITS System Component        J2EE / WSAD
Customer Service Application                        13
Work Order Processing Application                   46
System-Wide Development Tasks                       33
Subtotal                                            92
Product Installs                                    n/a
Technician Mobile Device Application                n/a
Overall total                                       n/a

Given how easily two developers working closely together can move among several tasks, one should not read too much precision into the per-task breakdown of these numbers. Nevertheless, some interesting conclusions emerge:
8.1.2 .NET vs. RRD
The .NET team developed the entire system about twice as fast as the J2EE team did using RRD. This speed advantage applied across all components.
One of the greatest differences was for product installation. This is not surprising, since several
key server-side .NET components were already present as part of the base installation of
Windows Server 2003:
• Internet Information Services (IIS), the Web server
• Network Load Balancing (NLB)
• Microsoft Message Queue (MSMQ), the message server

The corresponding components on the WebSphere side – IBM HTTP Server³, Edge Server and WebSphere MQ Server – had to be installed separately. So, of course, did the WebSphere Application Server itself, on both the development and production machines.
Another significant difference was in developing the Mobile Device piece, where the J2EE team
ran into some roadblocks. (See Section 8.2.3.5 for details.)

³ As noted elsewhere, the base Linux installation included the Apache Web server, but the team chose to use IBM's version instead.

Even within the core development (the Customer Service and Work Order applications), the