Papers of Possible Interest to Astronomical Software Users


Dec 1, 2013



General articles, not about specific codes


Making Access to Astronomical Software More Efficient

Astronomical Software Wants To Be Free: A Manifesto

Publish your computer code: it is good enough

Talking Amongst Ourselves: Communication in the Astronomical Software Community

Computational science: ...Error ...why scientific programming does not compute

A Journal for the Astronomical Computing Community?

Where's the Real Bottleneck in Scientific Computing?

The CRAPL: An academic-strength open source license

Scientific Software Production: Incentives and Collaboration

Linking to Data - Effect on Citation Rates in Astronomy

The case for open computer programs

Astroinformatics: A 21st Century Approach to Astronomy

Publish or be damned? An alternative impact manifesto for research software

Best Practices for Scientific Computing

Making Access to Astronomical Software More Efficient

Abstract: Access to astronomical data through archives and VO is essential but does not solve all problems. Availability of appropriate software for analyzing the data is often equally important for the efficiency with which a researcher can publish results. A number of legacy systems (e.g. IRAF, MIDAS, Starlink, AIPS, Gipsy), as well as others now coming online, are available but have very different user interfaces and may no longer be fully supported. Users may need multiple systems or stand-alone packages to complete the full analysis, which introduces significant overhead. The OPTICON Network on 'Future Astronomical Software Environments' and the USVAO have discussed these issues and have outlined a general architectural concept that solves many of the current problems in accessing software packages. It foresees a layered structure with clear separation of astronomical code and IT infrastructure. By relying on modern IT concepts for messaging and distributed execution, it provides full scalability from desktops to clusters of computers. A generic parameter-passing mechanism and common interfaces will offer easy access to a wide range of astronomical software, including legacy packages, through a single scripting language such as Python. A prototype based upon a proposed standard architecture is being developed as a proof of concept. It will be followed by definition of standard interfaces as well as a reference implementation which can be evaluated by the user community. For the long-term success of such an environment, stable interface specifications and adoption by major astronomical institutions, as well as a reasonable level of support for the infrastructure, are mandatory. Development and maintenance of astronomical packages would follow an open-source, Internet concept.

Authors: P. Grosbol, D. Tody

Astronomical Software Wants To Be Free: A Manifesto

Abstract: Astronomical software is now a fact of daily life for all hands-on members of our community. Purpose-built software for data reduction and modeling tasks becomes ever more critical as we handle larger amounts of data and simulations. However, the writing of astronomical software is unglamorous, the rewards are not always clear, and there are structural disincentives to releasing software publicly and to embedding it in the scientific literature, which can lead to significant duplication of effort and an incomplete scientific record. We identify some of these structural disincentives and suggest a variety of approaches to address them, with the goals of raising the quality of astronomical software, improving the lot of scientist-authors, and providing benefits to the entire community, analogous to the benefits provided by open access to large survey and simulation datasets. Our aim is to open a conversation on how to move forward. We advocate that: (1) the astronomical community consider software as an integral and fundable part of facility construction and science programs; (2) that software release be considered as integral to the open and reproducible scientific process as are publication and data release; (3) that we adopt technologies and repositories for releasing and collaborating on software that have worked for open-source software; (4) that we seek structural incentives to make the release of software and related publications easier for scientist-authors; (5) that we consider new ways of funding the development of grass-roots software; (6) and that we rethink our values to acknowledge that astronomical software development is not just a technical endeavor, but a fundamental part of our scientific practice.

Authors: Benjamin J. Weiner, Michael R. Blanton, Alison L. Coil, Michael C. Cooper, Romeel Davé, David W. Hogg, Bradford P. Holden, Patrik Jonsson, Susan A. Kassin, Jennifer M. Lotz, John Moustakas, Jeffrey A. Newman, J.X. Prochaska, Peter J. Teuben, Christy A. Tremonti, Christopher N.A. Willmer

Publish your computer code: it is good enough

by Nick Barnes

... openness improved both the code used by the scientists and the ability of the public to engage with their work. This is to be expected. Other scientific methods improve through peer review. The open-source movement has led to rapid improvements within the software industry. But science source code, not exposed to scrutiny, cannot benefit in this way.

Talking Amongst Ourselves: Communication in the Astronomical Software Community

Abstract: Meetings such as ADASS demonstrate that there is an enthusiasm for communication within the astronomical software community. However, the amount of information and experience that can flow around in the course of one, relatively short, meeting is really quite limited. Ideally, these meetings should be just a part of a much greater, continuous exchange of knowledge. In practice, with some notable - often short-lived - exceptions, we generally fall short of that ideal. Keeping track of what is being used, where, and how successfully, can be a challenge. A variety of new technologies such as those roughly classed as 'Web 2.0' are now available, and getting information to flow ought to be getting simpler, but somehow it seems harder to find the time to keep that information current. This paper looks at some of the ways we communicate, used to communicate, have failed to communicate, no longer communicate, and perhaps could communicate better. It is presented in the hope of stimulating additional discussion - possibly even a little action - aimed at improving the current situation.

Author: Keith Shortridge

Computational science: ...Error ...why scientific programming does not compute

by Zeeya Merali

A quarter of a century ago, most of the computing work done by scientists was relatively straightforward. But as computers and programming tools have grown more complex, scientists have hit a "steep learning curve", says James Hack, director of the US National Center for Computational Sciences at Oak Ridge National Laboratory in Tennessee. "The level of effort and skills needed to keep up aren't in the wheelhouse of the average scientist."

As a general rule, researchers do not test or document their programs rigorously, and they rarely release their codes, making it almost impossible to reproduce and verify published results generated by scientific software, say computer scientists. At best, poorly written programs cause researchers such as Harry to waste valuable time and energy. But the coding problems can sometimes cause substantial harm, and have forced some scientists to retract papers.

As recognition of these issues has grown, software experts and scientists have started exploring ways to
improve the codes used in science.

A Journal for the Astronomical Computing Community?

Abstract: One of the Birds of a Feather (BoF) discussion sessions at ADASS XX considered whether a new journal is needed to serve the astronomical computing community. In this paper we discuss the nature and requirements of that community, outline the analysis that led us to propose this as a topic for a BoF, and review the discussion from the BoF session itself. We also present the results from a survey designed to assess the suitability of astronomical computing papers of different kinds for publication in a range of existing astronomical and scientific computing journals. The discussion in the BoF session was somewhat inconclusive, and it seems likely that this topic will be debated again at a future ADASS or in a similar forum.

Authors: Norman Gray, Robert G Mann

Where's the Real Bottleneck in Scientific Computing?

Scientists would do well to pick up some tools widely used in the software industry

By Greg Wilson

Most scientists had simply never been shown how to program efficiently. After a generic freshman programming course in C or Java, and possibly a course on statistics or numerical methods in their junior or senior year, they were expected to discover or reinvent everything else themselves, which is about as reasonable as showing someone how to differentiate polynomials and then telling them to go and do some tensor calculus.

Yes, the relevant information was all on the Web, but it was, and is, scattered across hundreds of different sites. More important, people would have to invest months or years acquiring background knowledge before they could make sense of it all. As another physicist (somewhat older and more cynical than my friend) said to me when I suggested that he take a couple of weeks and learn some Perl, "Sure, just as soon as you take a couple of weeks and learn some quantum chromodynamics so that you can ..."



The CRAPL: An academic-strength open source license

By Matt Might

Academics rarely release code, but I hope a license can encourage them.

Generally, academic software is stapled together on a tight deadline; an expert user has to coerce it into running; and it's not pretty code. Academic code is about "proof of concept." These rough edges make academics reluctant to release their software. But, that doesn't mean they shouldn't.

Most open source licenses (1) require source and modifications to be shared with binaries,
and (2) absolve authors of legal liability.

An open source license for academics has additional needs: (1) it should require that source and modifications used to validate scientific claims be released with those claims; and (2) more importantly, it should absolve authors of shame, embarrassment and ridicule for ugly code.

Openness should also hinge on publication: once a paper is accepted, the license should force the release of modifications. During peer review, the license should enable the confidential disclosure of modifications to peer reviewers. If the paper is rejected, the modifications should remain closed to protect the authors' right to priority.

Toward these ends, I've drafted the CRAPL - the Community Research and Academic Programming License. The CRAPL is an open source "license" for academics that encourages code sharing, regardless of how much Red Bull and coffee went into its production. (The text of the CRAPL is in the article body.)

Scientific Software Production: Incentives and Collaboration, CSCW 2011

Abstract: Software plays an increasingly critical role in science, including data analysis, simulations, and managing workflows. Unlike other technologies supporting science, software can be copied and distributed at essentially no cost, potentially opening the door to unprecedented levels of sharing and collaborative innovation. Yet we do not have a clear picture of how software development for science fits into the day-to-day practice of science, or how well the methods and incentives of its production facilitate realization of this potential. We report the results of a multiple-case study of software development in three fields: high energy physics, structural biology, and microbiology. In each case, we identify a typical publication, and use qualitative methods to explore the production of the software used in the science represented by the publication. We identify several different production systems, characterized primarily by differences in incentive structures. We identify ways in which incentives are matched and mismatched with the needs of the science fields, especially with respect to collaboration.

Authors: James Howison and Jim Herbsleb

Linking to Data - Effect on Citation Rates in Astronomy

Abstract: Is there a difference in citation rates between articles that were published with links to data and articles that were not? Besides being interesting from a purely academic point of view, this question is also highly relevant for the process of furthering science. Data sharing not only helps the process of verification of claims, but also the discovery of new findings in archival data. However, linking to data still is a far cry away from being a "practice", especially where it comes to authors providing these links during the writing and submission process. You need to have both a willingness and a publication mechanism in order to create such a practice. Showing that articles with links to data get higher citation rates might increase the willingness of scientists to take the extra steps of linking data sources to their publications. In this presentation we will show this is indeed the case: articles with links to data result in higher citation rates than articles without such links. The ADS is funded by NASA Grant NNX09AB39G.

Authors: Edwin A. Henneken, Alberto Accomazzi

The case for open computer programs

by Darrel C. Ince, Leslie Hatton & John Graham-Cumming

We examine the problem of reproducibility (for an early attempt at solving it, see ref. ) in the context of openly available computer programs, or code. Our view is that we have reached the point that, with some exceptions, anything less than release of actual source code is an indefensible approach for any scientific results that depend on computation, because not releasing such code raises needless, and needlessly confusing, roadblocks to reproducibility.

Astroinformatics: A 21st Century Approach to Astronomy


Abstract: Data volumes from multiple sky surveys have grown from gigabytes into terabytes during the past decade, and will grow from terabytes into tens (or hundreds) of petabytes in the next decade. This exponential growth of new data both enables and challenges effective astronomical research, requiring new approaches. Thus far, astronomy has tended to address these challenges in an informal and ad hoc manner, with the necessary special expertise being assigned to e-Science or survey science. However, we see an even wider scope and therefore promote a broader vision of this data-driven revolution in astronomical research. For astronomy to effectively cope with and reap the maximum scientific return from existing and future large sky surveys, facilities, and data-producing projects, we need our own information science specialists. We therefore recommend the formal creation, recognition, and support of a major new discipline, which we call Astroinformatics. Astroinformatics includes a set of naturally related specialties including data organization, data description, astronomical classification taxonomies, astronomical concept ontologies, data mining, machine learning, visualization, and astrostatistics. By virtue of its new stature, we propose that astronomy now needs to integrate Astroinformatics as a formal discipline within agency funding plans, university departments, research programs, graduate training, and undergraduate education. Now is the time for the recognition of Astroinformatics as an essential methodology of astronomical research. The future of astronomy depends on it.

Author: Kirk D. Borne

Publish or be damned? An alternative impact manifesto for research software

By Neil Chue Hong

The Research Software Impact Manifesto

... we subscribe to the following principles:

Communality: software is considered as the collective creation of all who have contributed

Openness: the ability of others to reuse, extend and repurpose our software should be ...

One of Many: we recognise that software is an intrinsic part of research, and should not be divorced from other research outputs

Pride: we shouldn't be embarrassed by publishing code which is imperfect, nor should other people embarrass us

Explanation: we will provide sufficient associated data and metadata to allow the significant characteristics of the software to be defined

Recognition: if we use a piece of software for our research we will acknowledge its use and let its authors know

Availability: when a version of software is "released" we commit to making it available for an extended length of time

Tools: the methods of identification and description of software objects must lend themselves to the simple use of multiple tools for tracking impact

Equality: credit is due to both the producer and consumer in equal measure, and due to all who have contributed, whether they are academics or not

Best Practices for Scientific Computing

Abstract: Scientists spend an increasing amount of time building and using software. However, most scientists are never taught how to do this efficiently. As a result, many are unaware of tools and practices that would allow them to write more reliable and maintainable code with less effort. We describe a set of best practices for scientific software development that have solid foundations in research and experience, and that improve scientists' productivity and the reliability of their software.

Authors: D. A. Aruliah, C. Titus Brown, Neil P. Chue Hong, Matt Davis, Richard T. Guy, Steven H. D. Haddock, Katy Huff, Ian Mitchell, Mark Plumbley, Ben Waugh, Ethan P. White, Greg Wilson, Paul Wilson


Articles about specific codes and resources


How well do STARLAB and NBODY4 compare? I: Simple models

Monte Carlo simulation of the electron transport through thin slabs: A comparative study of PENELOPE, GEANT3, GEANT4, EGSnrc and MCNPX

Computational AstroStatistics: Fast and Efficient Tools for Analysing Huge Astronomical Data Sources

Astrocomp: a web service for the use of high performance computers in Astrophysics

Group Identification in N-Body Simulations: SKID and DENMAX Versus Friends-of-Friends

Comparing Numerical Methods for Isothermal Magnetized Supersonic Turbulence

Haloes gone MAD: The Halo-Finder Comparison Project

GEMS: Galaxy fitting catalogues and testing parametric galaxy fitting codes

A Comparison of Cosmological Codes (TVD, ENZO, and GADGET)

Running your first SPH simulation

A Guide to Comparisons of Star Formation Simulations with Observations

Galaxies going MAD: The Galaxy-Finder Comparison Project

How well do STARLAB and NBODY4 compare? I: Simple models

Abstract: N-body simulations are widely used to simulate the dynamical evolution of a variety of systems, among them star clusters. Much of our understanding of their evolution rests on the results of such direct N-body simulations. They provide insight into the structural evolution of star clusters, as well as into the occurrence of stellar exotica. Although the major pure N-body codes STARLAB/KIRA and NBODY4 are widely used for a range of applications, there is no thorough comparison study yet. Here we thoroughly compare basic quantities as derived from simulations performed either with STARLAB/KIRA or NBODY4.

We construct a large number of star cluster models for various stellar mass function settings (but without stellar/binary evolution, primordial binaries, external tidal fields, etc.), evolve them in parallel with STARLAB/KIRA and NBODY4, analyse them in a consistent way and compare the averaged results quantitatively. For this quantitative comparison we develop a bootstrap algorithm for functional ...

We find an overall excellent agreement between the codes, both for the clusters' structural and energy parameters as well as for the properties of the dynamically created binaries. However, we identify small differences, like in the energy conservation before core collapse and the energies of escaping stars, which deserve further studies. Our results reassure the comparability and the possibility to combine results from these two major N-body codes, at least for the purely dynamical models (i.e. without stellar/binary evolution) we performed. (abridged)

Authors: P. Anders, H. Baumgardt, N. Bissantz, S. Portegies Zwart

Monte Carlo simulation of the electron transport through thin slabs: A comparative study of PENELOPE, GEANT3, GEANT4, EGSnrc and MCNPX


Abstract: The Monte Carlo simulation of the electron transport through thin slabs is studied with five general purpose codes: PENELOPE, GEANT3, GEANT4, EGSnrc and MCNPX. The different material foils analyzed in the old experiments of Kulchitsky and Latyshev [Phys. Rev. 61 (1942) 254-266] and Hanson et al. [Phys. Rev. 84 (1951) 634-637] are used to perform the comparison between the Monte Carlo codes. Non-negligible differences are observed in the angular distributions of the transmitted electrons obtained with some of the codes. The experimental data are reasonably well described by EGSnrc, PENELOPE (v. 2005) and GEANT4. A general good agreement is found for EGSnrc and GEANT4 in all the cases analyzed.

Authors: M. Vilches, S. Garcia-Pareja, R. Guerrero, M. Anguiano, A.M. Lallena

Computational AstroStatistics: Fast and Efficient Tools for Analysing Huge Astronomical Data Sources

Abstract: I present here a review of past and present multi-disciplinary research of the Pittsburgh Computational AstroStatistics (PiCA) group. This group is dedicated to developing fast and efficient statistical algorithms for analysing huge astronomical data sources. I begin with a short review of multi-resolutional kd-trees, which are the building blocks for many of our algorithms, for example, quick range queries and fast n-point correlation functions. I will present new results from the use of Mixture Models (Connolly et al. 2000) in density estimation of multi-color data from the Sloan Digital Sky Survey (SDSS), specifically, the selection of quasars and the automated identification of X-ray sources. I will also present a brief overview of the False Discovery Rate (FDR) procedure (Miller et al. 2001a) and show how it has been used in the detection of "Baryon Wiggles" in the local galaxy power spectrum and source identification in radio data. Finally, I will look forward to new research on an automated Bayes Network anomaly detector and the possible use of the Locally Linear Embedding algorithm (LLE; Roweis & Saul 2000) for spectral classification of SDSS spectra.

Authors: R. C. Nichol, S. Chong, A. J. Connolly, S. Davies, C. Genovese, A. M. Hopkins, C. J. Miller, A. W. Moore, D. Pelleg, G. T. Richards, J. Schneider, I. Szapudi, L. Wasserman
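The kd-trees the abstract describes as "building blocks" are worth a concrete sketch. The following is a minimal, self-contained illustration of the idea (it is not the PiCA group's code, and all names are invented for the example): a kd-tree built by recursive median splits, plus a box range query that prunes any subtree the query box cannot reach.

```python
import random

def build_kdtree(points, depth=0):
    """Recursively build a k-d tree; each level splits on the next axis."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def range_query(node, lo, hi, found=None):
    """Collect all points p with lo[i] <= p[i] <= hi[i], pruning subtrees
    whose half-space lies entirely outside the query box."""
    if found is None:
        found = []
    if node is None:
        return found
    p, axis = node["point"], node["axis"]
    if all(lo[i] <= p[i] <= hi[i] for i in range(len(p))):
        found.append(p)
    if lo[axis] <= p[axis]:   # box overlaps the left half-space
        range_query(node["left"], lo, hi, found)
    if hi[axis] >= p[axis]:   # box overlaps the right half-space
        range_query(node["right"], lo, hi, found)
    return found

# Mock 2-D "sky positions", then compare tree query against brute force.
random.seed(1)
pts = [(random.random(), random.random()) for _ in range(2000)]
tree = build_kdtree(pts)
lo, hi = (0.4, 0.4), (0.6, 0.6)
fast = range_query(tree, lo, hi)
slow = [p for p in pts if all(lo[i] <= p[i] <= hi[i] for i in range(2))]
assert sorted(fast) == sorted(slow)
```

The pruning step is the whole point: each visited node cuts away, on average, half the remaining points, which is what makes range queries and n-point correlation counts fast on large catalogues.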

Astrocomp: a web service for the use of high performance computers in Astrophysics

Abstract: Astrocomp is a joint project, developed by the INAF - Astrophysical Observatory of Catania, University of Roma La Sapienza and ENEA. The project has the goal of providing the scientific community with a web-based user-friendly interface which allows running parallel codes on a set of high-performance computing (HPC) resources, without any need for specific knowledge about parallel programming and Operating Systems commands. Astrocomp also provides computing time on a set of parallel computing systems, available to the authorized user. At present, the portal makes a few codes available, among which: FLY, a cosmological code for studying three-dimensional collisionless self-gravitating systems with periodic boundary conditions; ATD, a parallel tree-code for the simulation of the dynamics of gas-free collisional and collisionless self-gravitating systems; and MARA, a code for stellar light curve analysis. Other codes are going to be added to the portal.

Authors: U. Becciani, R. Capuzzo Dolcetta, A. Costa, P. Di Matteo, P. Miocchi, V. Rosato

Group Identification in N-Body Simulations: SKID and DENMAX Versus Friends-of-Friends

Abstract: Three popular algorithms (FOF, DENMAX, and SKID) to identify halos in cosmological N-body simulations are compared with each other and with the predicted mass function from Press-Schechter theory. It is shown that the resulting distribution of halo masses strongly depends upon the choice of free parameters in the three algorithms, and therefore much care in their choice is needed. For many parameter values, DENMAX and SKID have the tendency to include in the halos particles at large distances from the halo center with low peculiar velocities. FOF does not suffer from this problem, and its mass distribution furthermore is reproduced well by the prediction from Press-Schechter theory.

Authors: M. Goetz, J. P. Huchra, R. H. Brandenberger
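The friends-of-friends idea compared above is simple enough to sketch in full: two particles are "friends" if they lie within a linking length of each other, and a group is the transitive closure of friendship. A minimal illustration (not any of the production codes discussed; the brute-force pair search is O(N^2), whereas real halo finders use trees or grids, and the linking length is usually quoted as a fraction of the mean interparticle spacing):

```python
def friends_of_friends(points, linking_length):
    """Group particles so that any pair closer than the linking length
    ends up, directly or via intermediaries, in the same group."""
    n = len(points)
    parent = list(range(n))

    def find(i):  # union-find root lookup with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    b2 = linking_length ** 2
    for i in range(n):
        for j in range(i + 1, n):  # brute force; fine for a demo
            if sum((a - b) ** 2 for a, b in zip(points[i], points[j])) < b2:
                parent[find(i)] = find(j)  # merge the two groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two tight mock "haloes" plus one isolated field particle.
halo_a = [(0.0, 0.0), (0.01, 0.0), (0.0, 0.015)]
halo_b = [(1.0, 1.0), (1.01, 1.005)]
field = [(5.0, 5.0)]
groups = friends_of_friends(halo_a + halo_b + field, linking_length=0.05)
assert sorted(len(g) for g in groups) == [1, 2, 3]
```

The paper's point about parameter sensitivity is visible here: shrink or grow `linking_length` and the same particle set fragments into more groups or merges into fewer.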

Comparing Numerical Methods for Isothermal Magnetized Supersonic Turbulence

Abstract: We employ simulations of supersonic super-Alfvénic turbulence decay as a benchmark test problem to assess and compare the performance of nine astrophysical MHD methods actively used to model star formation. The set of nine codes includes: ENZO, FLASH, KT-MHD, LL-MHD, PLUTO, PPML, RAMSES, STAGGER, and ZEUS. We present a comprehensive set of statistical measures designed to quantify the effects of numerical dissipation in these MHD solvers. We compare power spectra for basic fields to determine the effective spectral bandwidth of the methods and rank them based on their relative effective Reynolds numbers. We also compare numerical dissipation for solenoidal and dilatational velocity components to check for possible impacts of the numerics on small-scale density statistics. Finally, we discuss convergence of various characteristics for the turbulence decay test and impacts of various components of numerical schemes on the accuracy of solutions. We show that the best performing codes employ a consistently high order of accuracy for spatial reconstruction of the evolved fields, transverse gradient interpolation, conservation law update step, and Lorentz force computation. The best results are achieved with divergence-free evolution of the magnetic field using the constrained transport method, and using little to no explicit artificial viscosity. Codes which fall short in one or more of these areas are still useful, but they must compensate higher numerical dissipation with higher numerical resolution. This paper is the largest, most comprehensive MHD code comparison on an application-like test problem to date. We hope this work will help developers improve their numerical algorithms while helping users to make informed choices in picking optimal applications for their specific astrophysical problems.

Authors: Alexei G. Kritsuk, Aake Nordlund, David Collins, Paolo Padoan, Michael L. Norman, Tom Abel, Robi Banerjee, Christoph Federrath, Mario Flock, Dongwook Lee, Pak Shing Li, Wolf-Christian Mueller, Romain Teyssier, Sergey D. Ustyugov, Christian Vogel, Hao Xu
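The power-spectrum comparisons described above reduce, at their core, to Fourier-transforming a field and recording |f_k|^2 per wavenumber. As a toy illustration of that idea only (a direct 1-D DFT on a synthetic field, nothing like the 3-D FFT pipelines the paper's codes actually use):

```python
import cmath
import math

def power_spectrum(field):
    """Power |f_k|^2 of a 1-D periodic field via a direct DFT.
    O(N^2), fine for a demo; production analysis would use an FFT."""
    n = len(field)
    spectrum = []
    for k in range(n // 2 + 1):
        fk = sum(field[j] * cmath.exp(-2j * math.pi * k * j / n)
                 for j in range(n))
        spectrum.append(abs(fk) ** 2 / n ** 2)
    return spectrum

# A velocity-like field with all its power at wavenumber k = 4.
n = 64
field = [math.sin(2 * math.pi * 4 * j / n) for j in range(n)]
spec = power_spectrum(field)

# The spectrum peaks at k = 4, as expected for a pure sine mode.
assert max(range(len(spec)), key=spec.__getitem__) == 4
```

Comparing such spectra between codes reveals where each method's numerical dissipation starts damping power, which is exactly the "effective spectral bandwidth" ranking the abstract mentions.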

Haloes gone MAD: The Halo-Finder Comparison Project

Abstract: We present a detailed comparison of fundamental dark matter halo properties retrieved by a substantial number of different halo finders. These codes span a wide range of techniques including friends-of-friends (FOF), spherical-overdensity (SO) and phase-space based algorithms. We further introduce a robust (and publicly available) suite of test scenarios that allows halo finder developers to compare the performance of their codes against those presented here. This set includes mock haloes containing various levels and distributions of substructure at a range of resolutions as well as a cosmological simulation of the large-scale structure of the universe. All the halo finding codes tested could successfully recover the spatial location of our mock haloes. They further returned lists of particles (potentially) belonging to the object that led to coinciding values for the maximum of the circular velocity profile and the radius where it is reached. All the finders based in configuration space struggled to recover substructure that was located close to the centre of the host halo, and the radial dependence of the mass recovered varies from finder to finder. Those finders based in phase space could resolve central substructure although they found difficulties in accurately recovering its properties. Via a resolution study we found that most of the finders could not reliably recover substructure containing fewer than 30 particles. However, also here the phase space finders excelled by resolving substructure down to 10 particles. By comparing the halo finders using a high-resolution cosmological volume we found that they agree remarkably well on fundamental properties of astrophysical significance (e.g. mass, position, velocity, and peak of the rotation curve).

Authors: Alexander Knebe, Steffen R. Knollmann, Stuart I. Muldrew, Frazer R. Pearce, Miguel Angel Aragon-Calvo, Yago Ascasibar, Peter S. Behroozi, Daniel Ceverino, Stephane Colombi, Juerg Diemand, Klaus Dolag, Bridget L. Falck, Patricia Fasel, Jeff Gardner, Stefan Gottloeber, Chung-Hsing Hsu, Francesca Iannuzzi, Anatoly Klypin, Zarija Lukic, Michal Maciejewski, Cameron McBride, Mark C. Neyrinck, Susana Planelles, Doug Potter, Vicent Quilis, Yann Rasera, Justin I. Read, Paul M. Ricker, Fabrice Roy, Volker Springel, Joachim Stadel, Greg Stinson, P. M. Sutter, Victor Turchaninov, Dylan Tweed, Gustavo Yepes, Marcel Zemp


Comments: 27 interesting pages, 20 beautiful figures, and 4 informative tables; accepted for publication in MNRAS. The high-resolution version of the paper as well as all the test cases and analysis can be found at this web site.

GEMS: Galaxy fitting catalogues and testing parametric galaxy fitting codes

Abstract: In the context of measuring structure and morphology of intermediate-redshift galaxies with recent HST/ACS surveys, we tune, test, and compare two widely used fitting codes (GALFIT and GIM2D) for fitting single-component Sersic models to the light profiles of both simulated and real galaxy data. We find that fitting accuracy depends sensitively on galaxy profile shape. Exponential disks are well fit with Sersic models and have small measurement errors, whereas fits to de Vaucouleurs profiles show larger uncertainties owing to the large amount of light at large radii. We find that both codes provide reliable fits and little systematic error when the effective surface brightness is above that of the sky. Moreover, both codes return errors that significantly underestimate the true fitting uncertainties, which are best estimated with simulations. We find that GIM2D suffers significant systematic errors for spheroids with close companions owing to the difficulty of effectively masking out neighboring galaxy light; there appears to be no workaround to this important systematic in GIM2D's current implementation. While this crowding error affects only a small fraction of galaxies in GEMS, it must be accounted for in the analysis of deeper cosmological images or of more crowded fields with GIM2D. In contrast, GALFIT results are robust to the presence of neighbors because it can simultaneously fit the profiles of multiple companions, thereby deblending their effect on the fit to the galaxy of interest. We find GALFIT's robustness to nearby companions and factor of >~20 faster runtime speed are important advantages over GIM2D for analyzing large HST/ACS datasets. Finally we include our final catalog of fit results for all 41,495 objects detected in GEMS.

Authors: Boris Häußler, Daniel H. McIntosh, Marco Barden, Eric F. Bell, Hans-Walter Rix, Andrea Borch, Steven V. W. Beckwith, John A. R. Caldwell, Catherine Heymans, Knud Jahnke, Shardha Jogee, Sergey E. Koposov, Klaus Meisenheimer, Sebastian F. Sánchez, Rachel S. Somerville, Lutz Wisotzki, Christian Wolf
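The Sersic profile at the heart of these fits is easy to state explicitly, and evaluating it shows why de Vaucouleurs (n = 4) profiles put so much light at large radii. A small sketch (none of this is GALFIT or GIM2D code; the b_n formula below is a widely used linear approximation, adequate for illustration at moderate n):

```python
import math

def sersic(r, n, r_e, I_e):
    """Sersic surface brightness I(r) = I_e * exp(-b_n * ((r/r_e)**(1/n) - 1)).
    b_n is chosen so that r_e encloses half the total light; the common
    approximation b_n ~ 1.9992*n - 0.3271 is used here for simplicity."""
    b_n = 1.9992 * n - 0.3271
    return I_e * math.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

# By construction, I(r_e) = I_e for any n.
assert abs(sersic(1.0, n=4.0, r_e=1.0, I_e=3.5) - 3.5) < 1e-9

# n = 1 is an exponential disk; n = 4 is a de Vaucouleurs spheroid.
# At five effective radii the n = 4 profile retains far more light,
# which is the source of the larger fit uncertainties the paper reports.
assert sersic(5.0, n=4.0, r_e=1.0, I_e=1.0) > sersic(5.0, n=1.0, r_e=1.0, I_e=1.0)
```

In a real fit, codes like GALFIT convolve a 2-D version of this model with the instrument PSF and minimize the residual against the image; the 1-D form above is just the radial shape being fitted.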

A Comparison of Cosmological Codes: Properties of Thermal Gas and Shock Waves in Large Scale Structures

Abstract: We present results for the statistics of thermal gas and the shock wave properties for a large volume simulated with three different cosmological numerical codes: the Eulerian total variations diminishing code TVD, the Eulerian piecewise parabolic method-based code ENZO, and the Lagrangian smoothed particle hydrodynamics code GADGET. Starting from a shared set of initial conditions, we present convergence tests for a cosmological volume of side-length 100 Mpc/h, studying in detail the morphological and statistical properties of the thermal gas as a function of mass and spatial resolution in all codes. By applying shock finding methods to each code, we measure the statistics of shock waves and the related cosmic ray acceleration efficiencies within the sample of simulations, and compare the results of the different approaches. We discuss the regimes of uncertainties and disagreement among codes, with a particular focus on the results at the scale of galaxy clusters. We report that, even if the bulk of thermal and shock properties are reasonably in agreement among the three codes, yet some differences exist (especially between Eulerian methods and smoothed particle hydrodynamics) and are mostly associated with a different reconstruction of shock heating and entropy production in the accretion regions at the outskirts of galaxy clusters.

: F. Vazza, K. Dolag, D. Ryu, G. Brunetti, C. Gheller, H. Kang, C. Pfrommer
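Grid-based shock finders of the kind the paper applies commonly estimate shock strength by inverting a Rankine-Hugoniot jump condition across neighboring cells. Below is a minimal sketch of that inversion for the temperature jump; it is a generic textbook version, not the specific finders used in the paper:

```python
import numpy as np
from scipy.optimize import brentq

def temperature_jump(mach, gamma=5.0 / 3.0):
    """Rankine-Hugoniot post/pre-shock temperature ratio T2/T1 for a given Mach number."""
    g = gamma
    return ((2.0 * g * mach**2 - (g - 1.0)) * ((g - 1.0) * mach**2 + 2.0)) \
        / ((g + 1.0) ** 2 * mach**2)

def mach_from_temperature(t_ratio, gamma=5.0 / 3.0):
    """Numerically invert the jump condition; t_ratio must be >= 1 (a shock heats the gas)."""
    return brentq(lambda m: temperature_jump(m, gamma) - t_ratio, 1.0, 1.0e4)

# A measured factor-of-4 temperature jump corresponds to a moderate-strength shock:
print(mach_from_temperature(4.0))
```

In practice a finder must also decide which cell pairs actually straddle a shock (e.g. by requiring converging flow), which is where much of the code-to-code disagreement enters.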

Running your first SPH simulation

By Nathan Goldbaum ... imulation/

Today’s astrobite will be a sequel to a post I wrote a few months ago on using the
smoothed particle hydrodynamics (SPH) code Gadget-2. In the first post, I went over how
to install Gadget and showed how to run one of the test cases included in the Gadget
distribution. Today, I’d like to show how to set up, run, and analyze a simple
hydrodynamics test problem of your own.
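For readers new to the method itself: SPH estimates fluid quantities as kernel-weighted sums over neighboring particles. The sketch below shows the core density estimate with the standard cubic spline kernel. It is a brute-force O(N^2) toy with a fixed smoothing length, whereas Gadget-2 itself uses tree-based neighbor finding and adaptive smoothing lengths:

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 3D cubic spline SPH kernel W(r, h), with compact support 2h."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """SPH density at every particle: rho_i = sum_j m_j W(|r_i - r_j|, h)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)

# Uniform random box with total mass 1: interior particles should recover a
# density near the mean of 1 (particles near the non-periodic edges read low).
rng = np.random.default_rng(0)
n = 500
pos = rng.random((n, 3))
m = np.full(n, 1.0 / n)
rho = sph_density(pos, m, h=0.15)
print(rho.mean(), rho.min(), rho.max())
```

The smoothing length h is the key resolution parameter: too small and the sum is dominated by shot noise, too large and real density structure is smoothed away.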

A Guide to Comparisons of Star Formation Simulations with Observations

: We review an approach to observation-theory comparisons we call "Taste-Testing." In this
approach, synthetic observations are made of numerical simulations, and then both real and synthetic
observations are "tasted" (compared) using a variety of statistical tests. We first lay out arguments for
bringing theory to observational space rather than observations to theory space. Next, we explain that
generating synthetic observations is only a step along the way to the quantitative, statistical taste tests
that offer the most insight. We offer a set of examples focused on polarimetry, scattering and emission by
dust, and spectral-line mapping in star-forming regions. We conclude with a discussion of the connection
between statistical tests used to date and the physics we seek to understand. In particular, we suggest that
the "lognormal" nature of molecular clouds can be created by the interaction of many random processes,
as can the lognormal nature of the IMF, so that the fact that both the "Clump Mass Function" (CMF) and
the IMF appear lognormal does not necessarily imply a direct relationship between them.

: Alyssa A. Goodman
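The closing argument, that lognormals arise naturally when many random processes interact, is the central limit theorem applied in log space: a product of many independent positive factors has an approximately Gaussian logarithm. A quick numerical illustration (the uniform factor distribution here is arbitrary, chosen only for the demonstration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Each sample is the product of many independent positive random "events"
# (think successive compressions, rarefactions, fragmentation steps, ...).
n_samples, n_factors = 20000, 50
factors = rng.uniform(0.5, 1.5, size=(n_samples, n_factors))
product = factors.prod(axis=1)

# The product itself is strongly skewed, but its logarithm is nearly Gaussian,
# i.e. the product is approximately lognormal.
print(f"skewness of product:      {stats.skew(product):.2f}")
print(f"skewness of log(product): {stats.skew(np.log(product)):.3f}")
```

This is exactly why an observed lognormal shape, by itself, cannot single out one physical mechanism: many different stacks of multiplicative processes produce the same functional form.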

Galaxies going MAD: The Galaxy-Finder Comparison Project

: With the ever increasing size and complexity of fully self-consistent simulations of galaxy
formation within the framework of the cosmic web, the demands upon object finders for these simulations
have simultaneously grown. To this extent we initiated the Halo Finder Comparison Project that gathered
together all the experts in the field and has so far led to two comparison papers, one for dark matter field
haloes (Knebe et al. 2011) and one for dark matter subhaloes (Onions et al. 2012). However, as
state-of-the-art simulation codes are perfectly capable of not only following the formation and evolution
of dark matter but also accounting for baryonic physics (e.g. hydrodynamics, star formation, feedback),
object finders should also be capable of taking these additional processes into consideration. Here we
report on a comparison of halo finder codes as applied to the Constrained Local UniversE Simulation
(CLUES) of the formation of the Local Group, which incorporates much of the physics relevant for galaxy
formation. We compare both the properties of the three main galaxies in the simulation (representing the
MW, M31, and M33) as well as their satellite populations for a variety of halo finders ranging from
phase-space to velocity-space to spherical-overdensity based codes, including also a mere baryonic object
finder. We obtain agreement between the codes comparable to (if not better than) our previous
comparisons, at least for the total, dark, and stellar components of the objects. However, the diffuse gas
content of the haloes shows great disparity, especially for low-mass satellite galaxies. This is primarily
due to differences in the treatment of the thermal energy during the unbinding procedure. We
acknowledge that the handling of gas in halo finders is something that needs to be dealt with carefully,
and the precise treatment may depend sensitively upon the scientific problem being studied.

: Alexander Knebe, Noam I. Libeskind, Frazer Pearce, Peter Behroozi, Javier Casado, Klaus
Dolag, Rosa Dominguez-Tenreiro, Pascal Elahi, Hanni Lux, Stuart I. Muldrew, Julian Onions