Future Grid Report
April 16, 2012

Geoffrey Fox


Introduction

This report is the sixty-fifth for the project and continues with the status of each team/committee and the collaborating sites. USC funding authority has been reached and, unfortunately, they need to stop their work on FutureGrid.

Summary

Operations and Change Management Committee

Operations Committee meeting on Tue Apr 10th (notes below). Partner invoice processing.


Software Team (includes Performance and Systems Management Teams)

Three new Inca tests were added for the FutureGrid image repository and generator tools on India. Module files were created for the performance tools on the Redhat 6 test nodes in preparation for deployment to Bravo. The IUKB integration portlet is completed and will be deployed 4/13. The UF team worked on improving the ViNe management message subsystem and solved data exchange problems observed in the previous version. We were able to get KVM in-guest performance counters working so that PAPI works inside of a guest and can return aggregate counter totals for workloads. The image management team provided a considerable amount of documentation that we worked on over the last month. Additional projects were moved to GitHub and their documentation was started. We verified that we can do SSO via Crowd to the JIRA service from our LDAP directory. Nimbus is preparing the release of cloud-client 021.

Hardware and Network Team


• GPU Cloud Cluster, delta, in initial testing; all nodes have OS installs. Integrated into India scheduler.

• RHEL 6 or CentOS being widely tested at IU, TACC, UC, and SDSC for deployment. Sierra will be upgraded first in preparation for an upgrade of India.

• Nimbus VMs have been upgraded to CentOS 6 at TACC and Nimbus downgraded to version 2.9. This has resolved the MAC address conflict problems that were occurring.

• IU's connection to FG in Chicago was successfully relocated on April 3rd.

• OS system updates widely performed on clusters during maintenance.

Training, Education and Outreach Team (includes user support)

TEOS team activities have focused on wrapping up and analysis of the FutureGrid user survey, improvements in the FG portal, new user manuals and hardware pages, tutorial enhancements, and outreach through social media and news. Members from the TEOS team at UC and UF collaborated on a tutorial proposal sent to XSEDE'12.


Knowledgebase Team

Continued progress described below

Site Reports

University of Virginia

Substantial use of FutureGrid in the Cross Campus Grid (XCG), which is based on, and testing, XSEDE technology.


University of Southern California Information Sciences

USC-ISI reported no progress due to lack of NSF funding for their FutureGrid work.


University of Texas at Austin/Texas Advanced Computing Center

TACC appears to have resolved the ongoing problems with Nimbus on Alamo.


University of Chicago/Argonne National Labs

Our activities in these two weeks were balanced across the support, development, and outreach thrusts described below.


University of Florida

UF collaborated with UC on a tutorial proposal submitted to the XSEDE'12 conference. UF worked on updating the message exchange subsystem of ViNe management. Fortes chaired the operations committee, and Figueiredo chaired the TEOS team and participated as a deputy in the EOT track of the XSEDE'12 conference.


San Diego Supercomputer Center at University of California San Diego

UCSD replaced a failed disk in one of Sierra’s storage servers, wrote three new Inca tests for the
FutureGrid image repository and generator tools, and created module files for the performance
tools on the Redhat 6 test nodes.


University of Tennessee Knoxville

Using the latest versions of the relevant software components, we were able to get KVM in-guest performance counters working so that PAPI works inside of a guest and can return aggregate counter totals for workloads.


Detailed Descriptions

Operations and Change Management Committee

Operations Committee Chair: Jose Fortes

Change Control Board Chair: Gary Miksik, Project Manager



• The Operations Committee met on Tue Apr 10th. It was an Open Forum meeting. One major discussion point was the current policy, when vetting a new FutureGrid user, of requiring a valid "institutional" email address and not accepting "gmail" addresses. The consensus was to keep the policy as is and not accept gmail addresses. The process does involve contacting the new user and requesting that he or she provide an "institutional" address (e.g., *.edu for educational institutions). If none is available, we then ask the new user to provide the name and email address of a member of the specific institution who can then be contacted to verify the new user.



• XSEDE quarterly report for the January-March 2012 timeframe is in progress.



• Financials.


Note: NSF spending authority is thru December 31, 2011.

Note: USC (and IU) spending authority has been reached.


Partner Invoice Processing to Date:

         IU PO #   Spend Auth    Y1 PTD    Y2 PTD    Y3 PTD  Total PTD  Remaining   Thru
UC        760159      831,845   213,235   343,809   152,531    709,576    122,269  Mar-12
UCSD      750476      598,541   192,295   183,855    95,566    471,716    126,825  Feb-12
UF        750484      434,055    83,298   110,891    51,931    246,119    187,936  Feb-12
USC       740614      450,000   154,771   115,407   171,096    441,274      8,726  Mar-12
UT        734307      920,269   547,509   159,987    85,784    793,281    126,988  Feb-12
UV        740593      257,598    53,488   103,878    72,709    230,075     27,523  Feb-12
UTK      1014379      177,084       N/A    28,271    46,402     74,673    102,411  Feb-12
Total               3,669,392                                 2,966,714    702,678


Software Team

Lead: Gregor von Laszewski

SUMMARY (FG-1423 - All)

Three new Inca tests were added for the FutureGrid image repository and generator tools on India. Module files were created for the performance tools on the Redhat 6 test nodes in preparation for deployment to Bravo. The IUKB integration portlet is completed and will be deployed 4/13. The UF team worked on improving the ViNe management message subsystem and solved data exchange problems observed in the previous version. We were able to get KVM in-guest performance counters working so that PAPI works inside of a guest and can return aggregate counter totals for workloads. The image management team provided a considerable amount of documentation that we worked on over the last month. Additional projects were moved to GitHub and their documentation was started. We verified that we can do SSO via Crowd to the JIRA service from our LDAP directory. Nimbus is preparing the release of cloud-client 021.

ADMINISTRATION (FG-907 - IU Gregor von Laszewski)

We devised a process that documents what to do when a person is leaving the project (more details).

IU has added an allowhtml="on" parameter to the fgjira mediawiki function. This now also allows us to use HTML in our reports to NSF. By default, all issues that we report to NSF in our report template have this set to on.

Defining Activities (FG-1223 - Gregor von Laszewski)

Over the last 14 days the following activities took place in JIRA:

Updated: 97 issues (includes closed and resolved tasks)

Closed: 1 issue

Resolved: 22 issues

Created: 6 issues

Improving development infrastructure (FG-1204 - Gregor von Laszewski)

FG-1204 - Activity: Redeployment of the management services


In order to relieve the core systems team from duties, we have transitioned the setup of an LDAP test server to Allan Streib.

Gregor von Laszewski has in the meantime verified that the task of setting up an LDAP server and using POSIX groups to authenticate via SSO through the Crowd server actually works. Thus there are no obstacles anymore with regard to the logic of how to use Crowd. In fact, we have tested with a standard LDAP server that comes with a modern Linux OS and requires little setup. This was an important test, as it demonstrates that a clear pathway for doing SSO was identified. We are now waiting for Allan Streib to complete this task for our production LDAP server. This should be completed in less than two days. We assume that Allan has proper access rights to the production server.
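As a rough illustration of the kind of check involved, the following python-ldap sketch confirms that a portal account's uid appears in a POSIX group that Crowd can then trust; the server URL, base DN, credentials, and group name are all hypothetical, not taken from the FutureGrid deployment:

    import ldap

    # Connect to the (hypothetical) LDAP server and bind with a read-only account.
    conn = ldap.initialize("ldap://ldap.example.futuregrid.org")
    conn.simple_bind_s("cn=reader,dc=futuregrid,dc=org", "secret")

    # Look up the POSIX group that Crowd is configured to delegate to and
    # check whether a given portal uid is listed as a member.
    results = conn.search_s(
        "ou=groups,dc=futuregrid,dc=org",
        ldap.SCOPE_SUBTREE,
        "(&(objectClass=posixGroup)(cn=fg-users))",
        ["memberUid"])

    members = results[0][1].get("memberUid", []) if results else []
    # Note: newer python-ldap versions return attribute values as bytes.
    print("jdoe" in [m.decode() if isinstance(m, bytes) else m for m in members])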

HPC SERVICES

ScaleMP Service (FG-934 - IU Andrew Younge)

No update reported.

Unicore (FG-927) (Vanamala Venkataswamy UVA)

See site report from UVA.

Vanamala Venkataswamy (UVA) has reported that nothing was updated with respect to the Unicore service.

Genesis (FG-933) (Vanamala Venkataswamy UVA)

See site report from UVA.

Vanamala Venkataswamy (UVA) has reported that nothing was updated with respect to the Genesis service.

Virtualized Globus (FG-1158 - IU Andrew Younge)

The tutorial developed for this work has been assigned to Gary Miksik and his team for review.

HPC Globus (FG-1235 - TACC Warren Smith)

Please see site report from TACC.

EXPERIMENT MANAGEMENT

Experiment Management (FG-518 - Warren Smith, Mats Rynge)

Mats Rynge reported no progress due to lack of NSF funding for this activity. Mats Rynge was introduced to using JIRA for the NSF reporting. He explicitly expressed the desire that JIRA be used for all ISI reports to FutureGrid and that we use the automatic site report generation that has been in use by others.

Experiment management with support of Experiment Harness (FG-906 - TACC Warren Smith)

Please see site report from TACC.

Image Management (FG-899 - Creation, Repository, Provisioning - IU Javier Diaz)

We have significantly improved the documentation by using reStructuredText and Sphinx. This technology allows us to create the documentation using plain text, which is automatically converted and formatted into HTML. This way of documenting has now also been deployed by Gregor von Laszewski for our cloud-metrics and virtual cluster code. In addition, Gregor developed a framework that allows the distribution of the documentation directly from GitHub. This has the advantage that updates and new code releases are accompanied by a straightforward process to update the documentation.
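For context, the Sphinx toolchain mentioned here is driven by a small Python configuration file; a minimal conf.py along the following lines (the project name and settings are illustrative, not copied from the FutureGrid repositories) is enough to turn a directory of .rst files into HTML:

    # conf.py -- minimal Sphinx configuration (illustrative values only)
    project = "FutureGrid Image Management"   # hypothetical project name
    copyright = "2012, FutureGrid"
    version = release = "0.1"

    master_doc = "index"       # top-level document, i.e. index.rst
    source_suffix = ".rst"     # sources are plain reStructuredText
    extensions = []            # e.g. "sphinx.ext.autodoc" to pull docs from code
    html_theme = "default"

Running sphinx-build -b html <sourcedir> <builddir> then converts the plain-text sources into HTML, which is what makes publishing rendered pages directly from a GitHub-hosted repository straightforward.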

ACCOUNTING

Accounting for HPC (FG-1081 - IU Allan Streib)

Moab has dropped support for Gold. We will evaluate what this means for us. The integration of different cloud frameworks, and accounting for them in a unified framework, may be problematic; Gold was so far our solution.

Accounting for Clouds (FG-1301 - IU Hyungro Lee)

Gregor von Laszewski developed a mechanism to use Sphinx as the framework for interfacing with the metric results produced by FG for Eucalyptus. This framework is based on the command shell developed by Gregor von Laszewski in order to easily analyze various aspects of the 300 million data records we mine very efficiently. The generation and publication of all results presented takes less than 2 minutes; individual results can be generated in less than 0.2 seconds.

A screenshot follows:


Account Metrics (FG-1377 - Gregor von Laszewski IU, Hyungro Lee, John Bresnahan, Shava Smallen)

We are in contact with Ti Leggett, who has developed a framework for displaying HPC account metric data.

FG SUPPORT SOFTWARE AND FG CLOUD SERVICES

Nimbus (FG-842 - John Bresnahan)

John Bresnahan (UC) conducted the following activities:


* added the ability to report VMM location for users on sierra.

* prepared the release of cloud-client 021, which has two new features directly requested by FG users.

* worked with Warren to debug alamo's problems and stress test it.

* began investigating a bug in termination that is causing services to hang for some users.

* debugged problems that hotel was experiencing and cleaned up its tmpfiles and logs.

* continued with scalability experiments of the N preserving service.

* added support to Nimbus on hotel to allow users of the EC2 REST interfaces to select different networks.

Eucalyptus (FG-1429 - IU Sharif Islam)

Version 2.0

• Sierra eucalyptus 2.0.3 has been backed up and taken offline for 3.0 testing.

Version 3.0 (FG-1202)

• Euca3 has been installed in a small test cluster successfully. Currently, sierra eucalyptus 2.0.3 has been taken offline and 3.0 is being installed. This should be available for testing in the next few days.

OpenStack (FG-1203 - IU Sharif Islam)

Koji Tanaka installed the new version, Essex, on two nodes in a mini cluster. The plan is to build it on the test gravel cluster next week.

Inca (FG-877 - Shava Smallen, UCSD)

In the last few weeks, three new Inca tests were added to test the FutureGrid image repository and generator tools on India. The first tests that the image repository is available by issuing a simple query command and verifying the response is valid. The second verifies basic functionality of the repository by uploading an image, verifying it is available, and then removing the image. The third executes fg-generate with arguments from the RAIN quick start guide and verifies a file is returned. The first test runs every hour, the second every four hours, and the third test once a day. All new tests are available on the Cloud page at http://inca.futuregrid.org:8080/inca/HTML/rest/Cloud/FG_CLOUD under the futuregrid section as 'repo-list', 'repo-upload', and 'generate'.
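As a rough sketch of what the second (round-trip) check does, the following hypothetical reporter uploads an image, confirms it is listed, and removes it again; the fg-repo command names and flags below are placeholders for illustration, not the actual FutureGrid repository CLI:

    import subprocess, sys, uuid

    def run(cmd):
        """Run a shell command, returning (exit code, combined output)."""
        p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT)
        out, _ = p.communicate()
        return p.returncode, out.decode()

    name = "inca-test-" + uuid.uuid4().hex[:8]   # unique name for this run

    # 1. upload a small test image (hypothetical command and flags)
    rc, out = run("fg-repo put --name %s test.img" % name)
    if rc != 0:
        sys.exit("FAIL: upload returned %d: %s" % (rc, out))

    # 2. verify the uploaded image now shows up in a repository listing
    rc, out = run("fg-repo list")
    if rc != 0 or name not in out:
        sys.exit("FAIL: uploaded image not visible in listing")

    # 3. remove it so repeated runs do not accumulate test images
    rc, out = run("fg-repo remove --name %s" % name)
    if rc != 0:
        sys.exit("FAIL: could not remove test image")

    print("PASS: repository upload/list/remove round trip succeeded")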

ViNe (FG-140 - UF Renato F., Mauricio T., Jose Fortes)

Comment from Mauricio Tsugawa (UFL):

The UF team worked on updating the message exchange subsystem of the ViNe management. In particular, problems related to serialization/recovery of data structures transmitted over the network have been resolved. This has been accomplished by defining a new message structure that only carries primary data types and delegating the recovery of data structures to the ViNe message subsystem instead of fully relying on Java serialization (which has difficulties serializing objects across different versions of JVMs). Moreover, the new messaging system has been designed to be easily extensible, allowing quick additions of new features as needed. This communication subsystem enables the ViNe Management server to issue remote commands to running ViNe router instances in order to dynamically reconfigure the operating parameters of ViNe overlays.
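To illustrate the design choice: messages carry only primitive fields in a fixed layout, and the receiving side rebuilds its own data structures rather than trusting a language-level object serializer. A small sketch of that pattern (in Python for brevity; the actual ViNe code is Java, and the field layout below is invented for illustration):

    import struct

    # Hypothetical wire format: message type, overlay id, and an IPv4 address,
    # all primitive values packed in network byte order.
    HEADER = "!BII"   # uint8 msg_type, uint32 overlay_id, uint32 ipv4

    def pack_reconfigure(overlay_id, ipv4_str):
        """Build a 'reconfigure' message from primitive values only."""
        a, b, c, d = (int(x) for x in ipv4_str.split("."))
        return struct.pack(HEADER, 1, overlay_id,
                           (a << 24) | (b << 16) | (c << 8) | d)

    def unpack(message):
        """Recover the primitive fields; higher-level structures are rebuilt
        here by the message subsystem, not by a generic deserializer."""
        msg_type, overlay_id, ipv4 = struct.unpack(HEADER, message)
        ip = ".".join(str((ipv4 >> s) & 0xFF) for s in (24, 16, 8, 0))
        return {"type": msg_type, "overlay": overlay_id, "router_ip": ip}

    msg = pack_reconfigure(42, "10.0.3.7")
    print(unpack(msg))   # {'type': 1, 'overlay': 42, 'router_ip': '10.0.3.7'}

Because only primitives cross the wire, the two endpoints do not need to run the same JVM (or even the same language) to exchange messages.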

WEB SITE AND SUPPORT

Portal and Web Site (Matthew Hanlon (TACC), Gregor von Laszewski)

FG-1311



Matthew Hanlon reports: IUKB integration portlet is completed and will be deployed 4/13. Cleaned up
several modules that were no longer in use. Continued work on projects revamp/redesign.

PERFORMANCE (UCSD Shava Smallen)

Vampir (FG-955 - Thomas Williams)

Module files for the OpenMPI 1.5.4 (Vampir 5.8.4) and PAPI 4.2.0 installations on the Redhat 6 test nodes were created and were tested successfully; more in-depth tests are planned before deploying to the Bravo nodes.

PAPI (FG-957 - Piotr Luszczek (UTK))

The recent Linux 3.3 kernel release includes support for in-guest performance counters in KVM. For this to work you also need a very recent version of the qemu/kvm tool (we used a current git snapshot). With this combination, we were able to get KVM in-guest perf counters working.

With this support, PAPI works inside of a guest and can return aggregate counter totals for workloads, with no changes needed at all to PAPI or the programs being measured. Currently overflow handling and sampled profiling do not work, as KVM does not implement performance counter interrupts.

We have run this on both Core2 and Sandybridge machines and are analyzing the results. On simple test cases the values gathered on real hardware and in the VM are similar, although we are still investigating why the "virtual" time reported to the application is higher inside of the VM in an unexpected way.
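As a quick way to confirm that the counters are exposed inside the guest at all (independent of PAPI), one can run a trivial workload under the Linux perf tool in the VM; a minimal sketch, assuming perf is installed in the guest:

    import subprocess

    # Run a short workload under 'perf stat' and print the aggregate
    # cycle/instruction counts; if KVM is not exposing the counters,
    # perf reports the events as "<not supported>".
    proc = subprocess.Popen(
        ["perf", "stat", "-e", "cycles,instructions", "sleep", "1"],
        stderr=subprocess.PIPE)   # perf stat writes its summary to stderr
    _, err = proc.communicate()
    print(err.decode())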

Performance: Interface monitoring data to the Experiment Harness (FG-1098 - Shava Smallen, SDSC)

Work continues to integrate Inca and GLUE data into the messaging system, and we are preparing to submit this work for publication to the XSEDE'12 conference.

FG-1094 - Performance: Help coordinate setup of perfSONAR


We are waiting on user feedback to finish pricing out 10G capabilities for the perfSONAR machines. The
measurement host for IU is scheduled to arrive in 10 working days.


Hardware and Network Team

Lead: David Hancock

Networking



• All FutureGrid network milestones are complete and networking is in a fully operational state.

• IU's connection to FG in Chicago was successfully relocated on April 3rd.

Compute & Storage Systems



• IU iDataPlex (india)
  o RHEL6 testing on 4 nodes is progressing well with no user issues yet.
  o Openstack Diablo test cluster installed.
  o RHEL updates performed during maintenance.
  o System operational for production users.

• IU Cray (xray)
  o Xray experienced an outage due to hardware failure after maintenance.
  o New software release available (danub, SLES 11 based).
  o Two newest IU admins scheduled to take Cray administration course this summer.
  o System operational for production HPC users.

• IU HP (bravo)
  o Swift test implementation is installed on bravo; nodes in use for NID testing currently.
  o 50% of the system being used for testing with network impairment device, HDFS available on the remaining systems.
  o RHEL updates performed during maintenance.
  o System operational for production users.

• IU GPU System (delta)
  o 1 node available for testing, remainder have OS installs, cloud tools are being installed.
  o System management switches will be delivered the first week of April.
  o System integrated into India scheduler.
  o System operational for early users.

• SDSC iDataPlex (sierra)
  o Upgrade of two nodes to RHEL6 has been done to prep for upgrade of Sierra & India.
  o RHEL updates performed during maintenance.
  o System operational for production Eucalyptus, Nimbus, and HPC users.

• UC iDataPlex (hotel)
  o Deployment plan for Genesis II in progress, waiting on feedback from the Genesis team at UVA.
  o RHEL updates performed during maintenance.
  o Nimbus was unavailable on April 11th due to a resource leak in the service; Nimbus was taken down for cleanup and restarted.
  o System operational for production Nimbus and HPC users.

• UF iDataPlex (foxtrot)
  o No issues during this period.
  o System operational for production Nimbus users.

• Dell system at TACC (alamo)
  o Planning for CentOS upgrade. Upgrade should be ready prior to May maintenance day; remaining issue is upgrading the persistent daemon node for Genesis II.
  o Nimbus VMs have been upgraded to CentOS 6 now and Nimbus downgraded to version 2.9. This has resolved the MAC address conflict problems that were occurring.
  o 5 nodes are provisioned for XSEDE TIS testing with SGE & Torque using the same headnode.
  o System operational for production Nimbus and HPC users.

All system outages are posted at https://portal.futuregrid.org/outages_all

Training, Education and Outreach Team (includes user support)

Lead: Renato Figueiredo


FG user survey: A final reminder email was sent out during this reporting period, and the survey closed on 4/13. A total of 118 responses have been logged (the total number of completed surveys is 108). Team members are beginning the review of survey results.


Social Media: Continued to cultivate our social media. Brainstormed with colleagues about strategies, including streamlining the use of social media as a vehicle to disseminate news.


FG Portal: Continuing work on content related to hardware, user manuals, revamping of tutorials, and other pages. Anand, Sonali, Koji, KB staff, and the TEOS team were involved. A new user manual document for Delta has been added to the FG Portal; new hardware pages have been created and are under review. The TEOS team approved a tutorial enhancement process and Sonali is implementing improvements to various existing tutorials, including formatting and providing clear information on prerequisites.


Networking information: Made plans to feature networking information more clearly, and have received the documentation needed to move forward; planned work to incorporate it into the site next week.


Project pages redesign is underway. It will include Featured Projects for user success stories; better ability for project members to share updates, reports, publications, etc.; and highlighting of how the projects are utilizing FutureGrid's capabilities.


Knowledgebase Team

Lead: Jonathan Bolte, Chuck Aikman


Active document repository increase = 11

Documents modified = 6

Portal document edits = 2

Current total 'live' FG KB documents: 116

Target as of 7/1/2012: 175

Difference between current and target: 59

Number of KB documents that must be completed per week between now and 7/1/2012 to hit target: (59/12) ≈ 5

Current number of documents in draft status within KB workflow: 37


Tickets

Lead:
Sharif Islam and Koji Tanaka


39 tickets created

13 tickets resolved


Currently:

59 tickets total

23 new tickets

33 open tickets

3 stalled tickets

17 unowned tickets


Site Reports

University of Virginia

Lead: Andrew Grimshaw


The last month has seen a marked increase in XCG activity. The first two graphs show XCG overall jobs and number of hours delivered. The next six graphs show the jobs and hours for India, Sierra, and Alamo via the XCG. These graphs do not show XSEDE testgrid activity, most of which in the last month has been restricted to X-Ray, nor do they show activity from ... Most of these hours were delivered to an economics application; the rest were divided between a CS neural network simulation and a biology (blastp) application.


You'll note a sharp drop-off in the use of FutureGrid resources in the last two weeks. This has been caused by a combination of factors.

1) The firewall rules were changed in the second week of April at UVA centralized services, blocking external access to the GFFS data store located there and preventing jobs run on FutureGrid resources from copying their results back. While this was being sorted out, we reduced the number of jobs sent to FutureGrid resources.

2) The first Tuesday of every month is maintenance day at FutureGrid. The machines are down for the whole day. In our experience it can take a few days to re-stabilize after maintenance. (Sierra is still not stable.)

3) This month, until the 23rd, India has 32 nodes dedicated to another project, reducing the time on the most stable resource.

Note that the XCG is using the same meta-scheduler that XSEDE uses in the new EMS (Execution Management Services) configuration item.

Note that these statistics are jobs/hours for the week that starts on the indicated date, e.g., the week starting February 13.


[Graphs of XCG jobs and hours for Sierra (198.202.120.85), India (149.165.146.134), and Alamo (129.114.32.10) appear here.]

University of Southern California Information Sciences

Lead: Ewa Deelman

USC-ISI reported no progress due to lack of NSF funding for this activity.


University of Texas at Austin/Texas Advanced Computing Center

Lead: Warren Smith

Dell cluster:

• Kernel panic was occurring on Nimbus nodes after upgrade to Nimbus 2.9

• Upgraded Nimbus nodes to CentOS 6.2
  o So that these nodes have much more recent versions of QEMU, KVM, libvirt

• No problems with Nimbus virtual machines in the few days since the upgrade

• Updated OFED and OpenMPI

• Replaced 2 disks that failed

Experiment harness:

• Continued to modify the TeraGrid/XSEDE glue2 software for use on FutureGrid
  o Adding support so that this software can provide resource information in JSON (a rough sketch of such a document is given below)
FutureGrid user portal:

• Added two new categories to the ticketing interface

• Added a button to pause/play the sliding display on the front page

• Added RSS icons to the news page

Outreach:

• Worked with UVa to fix problems running Genesis II jobs on Alamo.


University of Chicago/Argonne National Labs

Lead: Kate Keahey

Our activities in these two weeks were balanced across support, development and outreach thrusts. In the outreach thrust, we worked with colleagues at UFL to propose a FG tutorial on cloud computing for science and education to be held at the XSEDE conference. Multiple support activities came up to support various end-user and administration issues listed below. In addition, we also managed to make some progress on the development of various Nimbus features and released cloud-client 21, integrating some of them as well as some of the earlier contributions.




• Submitted a proposal for a "mini cloud computing summer school" as an XSEDE tutorial

• Continued design and development of a multi-cloud infrastructure

• Added support to IaaS to allow the ec2 interfaces to select a private network

• Released cloud-client 21

• Investigated termination bug


FG Nimbus support:

• Debugged various Nimbus issues on hotel

• Assisted with debugging configuration problems on alamo

• Tested alamo after Warren's fixes

• Enabled VMM location reporting on sierra


University of Florida

Lead: Jose Fortes


UF collaborated with Kate Keahey and John Bresnahan in the submission of a tutorial proposal using FutureGrid for the XSEDE'12 conference. If the tutorial is approved for presentation at the conference, UF will participate with a segment on virtual appliances and use of FutureGrid/cloud environments in education. UF has continued to participate in the organization of the EOT track of XSEDE'12.


ViNe activities: The UF team worked on updating the message exchange subsystem of the ViNe management. In particular, problems related to serialization/recovery of data structures transmitted over the network have been resolved. This has been accomplished by defining a new message structure that only carries primary data types and delegating the recovery of data structures to the ViNe message subsystem instead of fully relying on Java serialization (which has difficulties serializing objects across different versions of JVMs). Moreover, the new messaging system has been designed to be easily extensible, allowing quick additions of new features as needed. This communication subsystem enables the ViNe Management server to issue remote commands to running ViNe router instances in order to dynamically reconfigure the operating parameters of ViNe overlays.


Decisions on how to update ViNe routers are made based on information stored in an information subsystem (database). The database is currently being redesigned in order to hold additional data that are necessary to cover a wider range of overlay scenarios. The information in the database will also be used to implement end-user interfaces so that FG users can interact with the ViNe management system. Currently, FG users can connect VMs to a ViNe overlay, but they are not able to deploy new overlays by themselves.


San Diego Supercomputer Center at University of California San Diego

Lead: Shava Smallen

In the past few weeks, UCSD replaced a failed disk in one of Sierra's storage servers. We also wrote and deployed three new Inca tests to verify the FutureGrid image repository and generator tools on India. With help from our Dresden partner, we created module files for the performance tools installed on the Redhat 6 test nodes in preparation for deployment to Bravo. UCSD continues work with GRNOC and our networking user Martin Swany to price out adding 10G capabilities to the perfSONAR machines. All activities are described further in the software section of this report. UCSD continues to lead the performance group activities and attended the Software, TEOS, and Operations calls.


University of Tennessee Knoxville

Lead: Jack Dongarra

The recent Linux 3.3 kernel release includes support for in-guest performance counters in KVM. For this to work you also need a very recent version of the qemu/kvm tool (we used a current git snapshot). With this combination, we were able to get KVM in-guest perf counters working.

With this support, PAPI works inside of a guest and can return aggregate counter totals for workloads, with no changes needed at all to PAPI or the programs being measured. Currently overflow handling and sampled profiling do not work, as KVM does not implement performance counter interrupts.

We have run this on both Core2 and Sandybridge machines and are analyzing the results. On simple test cases the values gathered on real hardware and in the VM are similar, although we are still investigating why the "virtual" time reported to the application is higher inside of the VM in an unexpected way.