The MIT Laboratory for Nuclear Science (LNS)
Report to the
U.S. Department of Energy
Task A
Particle Physics Collaboration
(PPC)
High Energy Physics Program
Task A ______________________________________________________ 1
High Energy Physics Program _________________________________ 1
Overview ____________________________________________________ 3
1. CDF Experiment ___________________________________________ 4
1.1. Introduction ___________________________________________ 4
1.2. MIT Physics Interests in the CDF Experiment Run II _____ 4
1.2.1. Heavy-Quark Physics Program at CDF ___________________ 4
1.2.2. High pT Program at CDF _______________________________ 12
1.2.3. Monte Carlo Production Facility at MIT _______________ 24
1.3. CDF Summary ____________________________________________ 24
2. Phobos Experiment ________________________________________ 25
2.1 Introduction ____________________________________________ 25
2.2 Role of MIT and, in particular, PPC personnel in Phobos _ 26
2.3 Phobos Results and Physics ______________________________ 26
3. Compact Muon Solenoid Collaboration ______________________ 31
3.1. Introduction ___________________________________________ 31
3.2. CMS Physics Program ____________________________________ 31
3.2.1. Physics with pp collisions ___________________________ 31
3.2.2. Heavy-ion Physics at CMS _____________________________ 35
3.2.3. Tracking of charged particles in heavy-ion events ____ 37
3.2.4. Collaboration between High Energy and Heavy-Ion Physicists _ 38
3.3. MIT activities on CMS __________________________________ 41
3.3.1. The CMS Trigger/DAQ system ___________________________ 41
3.3.2. Storage Manager ______________________________________ 43
3.3.3. CMS Data Operations __________________________________ 44
3.3.4. The CMS Tracker Integration and Commissioning ________ 47
3.3.5. The CMS Tier-2 Facility at MIT _______________________ 51
3.4. Summary of MIT's Role in CMS ___________________________ 53
4. Budget Discussion ________________________________________ 55
5. Summary __________________________________________________ 58
6. The MIT Personnel on PPC in FY08 _________________________ 59
7. The MIT Personnel on PPC in FY09 _________________________ 60
8. The MIT Personnel on PPC in FY10 _________________________ 61
9. Publications since 2004 __________________________________ 62
9.1. CDF Publications _______________________________________ 62
9.2. Phobos Publications ____________________________________ 66
Overview
The PPC group is a collaboration of high-energy and heavy-ion physicists in the Laboratory for
Nuclear Science (LNS) sharing resources and laboratory space. DOE HEP supports the high-energy
program and a small part of the heavy-ion program; the majority of the latter is supported by
DOE Nuclear funds. The high-energy group works on the Collider Detector at Fermilab (CDF)
experiment, and the heavy-ion group works on the Phobos experiment at RHIC. In addition, during
the past year both groups have dramatically increased their efforts in the Compact Muon
Solenoid (CMS) collaboration. This transition has driven most of the changes to the group since
the last update.
In the CDF experiment over the last year, we have made tremendous progress in heavy-flavor
physics, most significantly the first measurement of the Bs oscillation frequency, in an effort
led by Christoph Paus. In addition, the comprehensive scrutiny of high-pT physics at the
electroweak scale, led by Bruce Knuteson, has made progress in incorporating additional data
from CDF. In terms of operation, the average peak luminosity is currently well over
2x10^32 cm^-2 s^-1, a factor of two increase over the previous year. Last year, the MIT group,
under the leadership of Bruce Knuteson, implemented a 0.5 M$ upgrade to the Level 3 trigger PC
farm. This complemented the work done by the group in 2005, upgrading the CDF Event Builder and
Level 3 trigger systems and enabling efficient data collection at the increased luminosity.
Since the start of CDF in 2001, the group has also supported these two systems 24/7.
Under the leadership of MIT (Wit Busza as spokesperson), Phobos has now participated in five
RHIC runs, collecting data on Au+Au, d+Au, p+p and Cu+Cu collisions. With these data, Phobos is
extracting important information on the mechanism of particle production and on the properties
of the hot, dense partonic matter, as yet not fully understood, that is created in nuclear
collisions at ultra-relativistic velocities. Phobos data taking is now complete, and the
collaboration is focusing its effort on the analysis.
The major stimulus for change in the PPC group over the last year is the onset of the LHC era.
Christoph Paus, recently promoted to a tenured professorship at MIT, is the MIT Team Leader in
CMS, and Boleslaw Wyslouch continues his leadership of the CMS heavy-ion program both at CERN
and within the US. In addition, Christoph Paus, Steve Nahn (a new faculty member since
September 05), Principal Research Scientist Gerry Bauer and Research Scientist Markus Klute,
promoted from being a postdoc on CDF, have relocated to CERN to focus on CMS. Principal
Research Scientist Konstanty Sumorok continues to play a vital role in the CMS DAQ program and
is relocating to CERN this summer. Research Scientist Ilya Kravchenko is the main manager of
the MIT Tier-2 center and is fully supported by the Tier-2 project. Finally, new members have
been added to the group: Guillelmo Gomez-Ceballos, a former Fulbright Fellow from Cantabria
with MIT at CDF, rejoined the group as a Research Scientist at CERN to replace Roberta
Arcidiacono, who left to take a position in Torino last September; postdoctoral fellow Kristian
Hahn joined the group in June '06 and is stationed at CERN to bolster the Tracker Integration
initiative; and Visiting Research Scientist Jonatan Piedra joined the DAQ effort at CERN at the
end of July '06. The CMS heavy-ion program is expected to start a year later than the pp
program, and Gunther Roland will be fully active on CMS by FY07. Bruce Knuteson will maintain
the CDF program and intends to move towards CMS in the coming years.
Graduate students Phil Harris and Matt Rudolph relocated to CERN in January and June of this
year, respectively. New graduate students Kevin Sung, Pieter Everaerts and Stephen Jaditz will
spend the summer at CERN. Kevin and Pieter will relocate to CERN in January 2008, and Stephen
is expected to relocate in June 2008. Graduate student Si Xie has joined the CDF program, and
David Furse is currently considering alternatives.
In the CMS collaboration, we continue our major contribution to the Data to Surface (D2S)
readout system, and integration of the D2S system is well underway at the experiment. The group
has taken on two new responsibilities. Firstly, Paus is directing CMS data operations and will
be responsible for both data production and Monte Carlo production. Secondly, within the DAQ
program, Markus Klute is taking the lead in defining and implementing the Storage Manager
project for recording the data. In addition, MIT has joined a new effort, led by Nahn, to
complete and commission the CMS Tracker, starting at the Tracker Integration Facility at CERN
and continuing at the detector site once the Tracker is installed. The MIT Tier-2 computing
facility, an NSF-funded project created in September 2005, continues to evolve, with more
resources being provided by MIT.
1. CDF Experiment
1.1. Introduction
Until the LHC comes online, the Tevatron collider at Fermilab is at the energy frontier,
producing the highest energy particle collisions. The MIT group in CDF is continuing to supply
leadership in the analysis of the copiously produced b hadrons and searches for new phenomena
at the highest energies. The latest physics results reported by CDF are based on a data sample
collected up until the end of last year, which is approximately one order of magnitude larger than
the Run I data sample.
We have played a major role in exploiting this increasing Tevatron luminosity by upgrading the
Event Builder and Level-3 computing farm and by providing additional computing capacity for CDF.
The MIT CDF group is solely responsible for the CDF II Event Builder and Level-3 computing
farm. In addition, we provide a CDF Monte Carlo production facility, located at MIT, composed
of PCs that have been retired from the Level-3 processing in CDF because they are out of
warranty. This cluster is now operated as part of the Tier-2 facility.
The MIT role in CDF has been appropriately rescaled to allow the transition to CMS. The MIT
team now working on CDF comprises Knuteson, postdoctoral associate Conor Henderson and graduate
students Jeff Miles, Alberto Belloni, Khaldoun Makhoul, Georgios Choudalakis and Si Xie. In the
last year, Nuno Leonardo, Arkadiy Bolshov and Boris Iyutin have graduated. We expect Alberto
Belloni and Jeff Miles to graduate by the end of the calendar year.
1.2. MIT Physics Interests in the CDF Experiment Run II
1.2.1. Heavy-Quark Physics Program at CDF
The heavy-quark physics program within the MIT group is coordinated by Christoph Paus. A number
of people have been working on this program, and the analyses performed have followed two basic
directions: physics with Bs0 mesons and a smaller effort in spectroscopy with charmonia. The
Bs0 physics analyses have gravitated around electroweak aspects, with measurements of the
quantities Δms and ΔΓs.
Since the last report, our seven-year-long efforts at CDF have culminated in the first
observation and measurement of Bs0 mixing. This is one of the most important results from the
Tevatron, and essentially marks the completion of one of the flagship analyses of Run II. The
significance of our result exceeds five σ, and brings to a close a two-decade quest to
ascertain Δms. It complements the efforts of the B factories for a precise measurement of the
CKM matrix elements. In particular, our sub-percent precision on Δms contributes an uncertainty
on |Vtd/Vts| that is substantially smaller than that from either Δmd measurements or lattice
theory, and as such largely forecloses interest in further measurements of Δms for some time to
come. This measurement had interesting potential for finding a disagreement with Standard Model
expectations, because the oscillation frequency is very sensitive to new physics entering the
box diagram of the mixing process. Unfortunately, our result is in good agreement with Standard
Model predictions; on the other hand, it does exclude a good fraction of the parameter space of
new physics models.
This result has received much positive feedback from the community in terms of press articles,
invitations for speakers, and awards---it was a highlight of 2006 conferences. The MIT group
has played the key role in making this measurement and turning a new page in heavy-flavor
physics. The timing of this result also happened to be well matched for the heavy-flavor
contingent of our group to shift focus from CDF to CMS.
Bs0 Oscillation Frequency

Neutral mesons can, via a box diagram, spontaneously change into their own antiparticles, a
process referred to as flavor mixing or oscillation. The frequency of this process is
determined by the Cabibbo-Kobayashi-Maskawa (CKM) matrix, which is described by four
fundamental parameters in the Standard Model. The Bs0 oscillation frequency is very large,
about 3 trillion times a second, which makes its observation extremely challenging. The
measurement requires three ingredients: the b flavor at production, the b flavor at decay, and
the proper time of the decay. Knowing the initial and final flavors of the b quark tells one
whether the Bs0 decayed in its original state (unmixed) or had become its own antiparticle
(mixed), and the proper decay time relates to the frequency of this process. The B meson flavor
at production is determined using flavor-tagging algorithms, while the reconstruction of a
flavor-specific final state automatically provides the decay flavor and the decay time. The
oscillations of the Bs0 meson are more than 34 times faster than those of the Bd0 meson, which
requires superb resolution for the proper-time measurement.
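The scale of this challenge can be made concrete with a few lines of arithmetic. The sketch
below (Python; the measured frequency is plugged in for illustration, and the two-flavor
formula deliberately neglects decay, detector resolution and tagging dilution) shows why the
proper-time resolution requirement is so severe:

```python
import math

DELTA_MS = 17.77  # ps^-1, the measured Bs0 oscillation frequency

def prob_mixed(t_ps):
    """Two-flavor mixing probability at proper time t: the chance that
    the meson has turned into its antiparticle (decay and tagging
    dilution ignored in this idealized sketch)."""
    return 0.5 * (1.0 - math.cos(DELTA_MS * t_ps))

# One oscillation period is 2*pi / DELTA_MS, roughly 0.35 ps -- several
# times shorter than the ~1.5 ps Bs0 lifetime, so resolving the
# oscillation demands proper-time resolution well below 0.1 ps.
period_ps = 2.0 * math.pi / DELTA_MS
```

This is the quantitative sense in which the Bs0 oscillates "more than 34 times faster" than the
Bd0, whose period is long compared with its lifetime.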

The CDF collaboration presented its first complete Bs0 mixing analysis in early 2005; it was
then significantly updated for the fall of 2005. Both analyses resulted in competitive lower
limits on the oscillation frequency. After the inclusion of about three times more data and the
new same-side kaon flavor-tagging technique, the first precise measurement of the Bs0
oscillation frequency (3 σ significance) was performed and first publicly presented in April
2006 by Ivan Furic (published in Phys.Rev.Lett.97:062003,2006). Then, after numerous
improvements in the analysis techniques, but with the same 1 fb^-1 data set, CDF formally
announced the observation of Bs0 oscillations (exceeding 5 σ) in a Fermilab seminar given by
Ch. Paus in September 2006.
This analysis was very complex, and a large number of people from various institutions have
been involved in this effort: Cantabria, Chicago, CMU and the University of Pennsylvania, to
name the most prominent. Nevertheless, it is fair to say that the MIT group took the key role
in making this effort a success and bringing it to a rapid conclusion.

The MIT group pursued two complementary analyses. For values of the mixing frequency close to
the then-existing lower limit (about 14 ps^-1), semileptonic decays were used, because they are
collected in large quantities due to their large branching fraction. For these samples the
event yield was further enhanced by a dedicated trigger, which used a lepton, either muon or
electron. For larger values of the mixing frequency, the unreconstructed neutrino in the
semileptonic decay deteriorates the sensitivity of those channels, and fully reconstructed
hadronic decays like Bs0 → Ds-π+ are more sensitive probes of the mixing frequency, despite
much lower statistics. An important improvement in achieving the final observation was
expanding the sample (Jeff Miles) to include partially reconstructed hadronic decays, where a
photon or π0 was missing, as these have almost as good a spatial resolution as the fully
reconstructed modes. A total of 61 thousand semileptonic events, 5.6 thousand fully
reconstructed hadronic events, and 3.1 thousand partially reconstructed hadronic events were
used.

The analysis was further complicated by the need to measure the flavor-tagger performance (i.e.
dilution) independently of the signal. This was originally necessary in order to set limits,
but even after the signal was visible, a priori knowledge of the dilution was still vital in
establishing the credibility of the observed signal. The first flavor taggers employed utilized
the b hadron opposite to the reconstructed B meson, and for these "away-side" taggers B+ and
Bd0 meson decays can be used to measure tagger performance. However, these taggers have rather
low performance (about 1.5%). An additional "same-side" tagging method can also be used, in
which the charges of particles created in the fragmentation process in the vicinity of the
emerging B meson are correlated with the B meson flavor. Such a method was developed by the MIT
group in Run I for tagging B+ and Bd0 mesons, and used to measure sin(2β). In that case, data
directly provided a calibration of the performance of the Bd0-π correlation. The situation for
the Bs0 is significantly more challenging. Firstly, the tagger calibration cannot be directly
obtained from Bs0 samples until after oscillations have been established, and due to the
differing fragmentation processes involved, the performance measured in B+ and Bd0 samples does
not apply to the Bs0-K correlations we seek to exploit. Secondly, the fact that the correlation
sought involves charged kaons naturally calls for particle identification.
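For reference, tagger "performance" figures such as the 1.5% and 4% quoted in this section are
values of the effective tagging power εD², where ε is the fraction of events the tagger can
tag and D = 1 - 2w is the dilution for a mistag rate w. A minimal sketch, with purely
hypothetical ε and w chosen only to land near the quoted numbers:

```python
def tagging_power(efficiency, mistag_rate):
    """Effective tagging power eps*D^2 with dilution D = 1 - 2w.
    A tagged sample of N events carries the statistical weight of
    eps*D^2 * N perfectly tagged events, which is why eps*D^2 is the
    standard figure of merit for mixing analyses."""
    dilution = 1.0 - 2.0 * mistag_rate  # D = 1 - 2w
    return efficiency * dilution ** 2

# Hypothetical efficiency/mistag pairs, chosen to land near the
# performance figures quoted in the text:
away_side = tagging_power(efficiency=0.90, mistag_rate=0.435)  # ~1.5%
same_side = tagging_power(efficiency=0.50, mistag_rate=0.36)   # ~4%
```

Because the statistical weight scales linearly with εD², raising the tagging power from ~1.5%
to ~4% acts like enlarging the dataset, which is the "factor of almost three" noted below.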


Same-side kaon tagging was a critical component in our exceeding 5 σ significance, and it was
developed and implemented by the MIT group (Stephanie Menzemer, Nuno Leonardo, Alberto
Belloni)---done for the first time at a hadron collider. The first stage was the development of
the same-side kaon tagger itself. To identify kaons in the interesting momentum range, the
Time-of-Flight detector, which is one of MIT's important contributions to the construction of
the CDF experiment, is critical. In addition, particle-specific energy loss in the drift
chamber is employed to identify kaons. This was in itself a non-trivial process, in that one
must contend with the incidental production of kaons in the high-multiplicity environment of
the Tevatron, as well as the fact that the most powerful kaon tags are at higher momentum,
where kaon-pion separation is decreasing. Tagger performance for Bs0 mesons was necessarily
calibrated using Monte Carlo simulation. We carefully tuned the Pythia event generator to the
fragmentation properties we could determine from B+, Bd0 and Bs0 data samples, and checked the
veracity of the Monte Carlo predictions for tagging on B+ and Bd0 samples. We found that the
same-side kaon tagging performance was close to 4%---which effectively increased the existing
dataset by a factor of almost three and was the critical contribution that made the observation
and measurement possible.

Figure 1: Amplitude scan of the full dataset with combined flavor tagging.

In our Bs0 mixing analysis we applied a combined same-side and opposite-side tagger to the full
CDF Run II dataset of 1 fb^-1. To first identify a signal, it is easiest to perform an
amplitude scan (a Fourier-like analysis), which consists of an unbinned likelihood fit to the
data in which, for a fixed oscillation frequency, the fully corrected amplitude of the cosine
oscillation is determined. If the data contain a signal at the given frequency, the amplitude
should come out to be one, while if there is no signal it should be zero within the
uncertainties. The amplitude scan of the combined data is shown in Figure 1, and it exhibits an
amplitude value of one at an oscillation frequency of about 17.8 ps^-1.
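The idea behind the amplitude scan can be illustrated with a toy version. This is a sketch
only, not the CDF unbinned likelihood fit: it uses a simple projection estimator on generated
toy data, and the dilution, lifetime and frequency values are illustrative:

```python
import math
import random

random.seed(1)
TRUE_DM = 17.8   # ps^-1, signal oscillation frequency (illustrative)
DILUTION = 0.28  # net tagger dilution D = 1 - 2w (hypothetical)
TAU = 1.5        # ps, roughly the Bs0 lifetime

# Toy sample: exponential proper decay times with diluted unmixed(+1)/mixed(-1) tags.
events = []
for _ in range(100_000):
    t = random.expovariate(1.0 / TAU)
    p_unmixed = 0.5 * (1.0 + DILUTION * math.cos(TRUE_DM * t))
    events.append((t, 1 if random.random() < p_unmixed else -1))

def amplitude(omega):
    """Cosine amplitude at a trial frequency omega.  Since
    E[tag | t] = D*cos(TRUE_DM * t), projecting the tags onto
    cos(omega*t) and normalizing by D*sum(cos^2) gives ~1 when omega
    hits the true frequency and ~0 far away from it."""
    num = sum(tag * math.cos(omega * t) for t, tag in events)
    den = DILUTION * sum(math.cos(omega * t) ** 2 for t, _ in events)
    return num / den
```

Scanning `amplitude` over a grid of trial frequencies reproduces the qualitative shape of
Figure 1: a value consistent with zero everywhere except a peak near the true frequency.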
The value of the amplitude at about 17.8 ps^-1 differs from zero (no signal) by a little over 6
standard deviations. A more precise characterization is the probability to see a fake signal of
this or even higher significance; in statistics this is called the p-value. It was determined
by applying randomly generated flavor tags to our sample. To quantify the significance of the
signal we use the logarithm of the ratio of the likelihoods assuming the amplitude to be one
(signal) or zero (no signal). This quantity is shown in Figure 2, and takes a value of -17.26
for our signal. By generating large sets of random tagging decisions on our data, we found a
p-value of 5.7x10^-7. Also note that the significance of the oscillations is almost entirely
obtained from the hadronic samples.
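The random-tag procedure can be sketched in a few lines. This is a toy analogue only: a simple
projection statistic stands in for the full likelihood-ratio scan, and all sample parameters
are hypothetical; the point is the logic of estimating a p-value from randomized tags.

```python
import math
import random

random.seed(7)

def projection(tags, times, omega=17.8):
    """Toy test statistic: projection of the tag sequence onto the
    oscillation hypothesis at the probe frequency."""
    return sum(s * math.cos(omega * t) for s, t in zip(tags, times))

# Hypothetical 'observed' sample with an oscillation imprinted on the tags.
times = [random.expovariate(1.0 / 1.5) for _ in range(3000)]
tags = [1 if random.random() < 0.5 * (1 + 0.3 * math.cos(17.8 * t)) else -1
        for t in times]
t_obs = projection(tags, times)

# p-value: fraction of pseudo-experiments with randomly assigned tags
# (i.e. no oscillation information) whose statistic fluctuates at least
# as high as the observed value.
n_trials = 500
n_exceed = sum(
    1 for _ in range(n_trials)
    if projection([random.choice((1, -1)) for _ in times], times) >= t_obs
)
p_value = n_exceed / n_trials
```

In the real analysis the statistic is the log-likelihood ratio of Figure 2, and enough
pseudo-experiments were generated to resolve a p-value as small as 5.7x10^-7.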
Figure 2: Difference of the logarithms of the likelihoods of signal and no signal versus the
oscillation frequency Δms.

Figure 3: Unitarity triangle before (left) and after (right) inclusion of the new Bs0 mixing
result.
A likelihood fit of the data returns an oscillation frequency of Δms = 17.77 +/- 0.10 (stat.)
+/- 0.07 (syst.) ps^-1. This measurement, together with the Bd0 mixing frequency measurement,
is converted into the world's most precise determination of the ratio of CKM matrix elements
|Vtd/Vts| = 0.2060 +/- 0.0007 (exp.) +0.0081/-0.0060 (theo.). It is important to point out that
the experimental uncertainty is more than an order of magnitude smaller than the theoretical
uncertainty, which is due to the uncertainties of the QCD corrections calculated on the
lattice. Until new developments in unquenched lattice calculations substantially improve the
state of the art, further experimental improvements on our result for Δms will be superfluous.
Correspondingly, improvements in the theoretical uncertainty will directly translate into
stronger constraints on the CKM matrix and increase the impact of this measurement. In Figure 3
the constraints on the unitarity triangle are shown before (left) and after (right) including
this measurement. The red ellipse, which determines the position of the apex of the triangle,
has shrunk significantly along the longer axis.
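The conversion from the two mixing frequencies to |Vtd/Vts| follows the Standard Model ratio
relation. The sketch below reproduces the central value with era-appropriate inputs; the
meson masses, the Bd0 frequency and the SU(3)-breaking parameter ξ are quoted here only for
illustration, and ξ carries the theoretical uncertainty discussed above:

```python
import math

# Illustrative inputs (see text for the uncertainties):
dm_s = 17.77    # ps^-1, this measurement
dm_d = 0.507    # ps^-1, world-average Bd0 mixing frequency
m_Bs = 5.3667   # GeV, Bs0 mass
m_Bd = 5.2794   # GeV, Bd0 mass
xi   = 1.21     # SU(3)-breaking ratio from lattice QCD; its error
                # dominates the (theo.) uncertainty quoted in the text

# In the Standard Model dm_q is proportional to |Vtq|^2 * m_Bq * f_Bq^2 * B_Bq,
# so the ratio of the two frequencies yields the CKM ratio directly:
vtd_over_vts = xi * math.sqrt((dm_d / dm_s) * (m_Bs / m_Bd))
# reproduces the central value |Vtd/Vts| ~ 0.206 quoted above
```

This makes explicit why better lattice determinations of ξ translate one-to-one into a smaller
theoretical error on |Vtd/Vts|.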
Three postdocs (Guillelmo Gomez-Ceballos, MIT 2002-2004; Stephanie Menzemer, MIT 2003-2005; and
Ivan Furic, MIT PhD 2004) received the Tollestrup Award for their innovative contributions to
this important analysis, which was published in PRL (Phys.Rev.Lett.97:242003,2006).
Measurement of ΔΓs/Γs

The width difference ΔΓs of the heavy and light Bs0 meson states is an interesting quantity in
itself, as it is predicted in the Standard Model, although theoretical uncertainties are
present. Predictions for ΔΓs/Γs range from 5% to 20%. In the Standard Model, the Bs0 → J/ψφ
decay mode is especially interesting because the final state is a superposition of CP
eigenstates, allowing the use of angular correlations to separate the heavy and light
lifetimes, subsequently providing another test of the Standard Model.

Starting from his previous analysis of lifetimes, Konstantin Anikeev has finished his thesis on
the first direct measurement of ΔΓs/Γs, performing a full angular analysis of the vector-vector
decay Bs0 → J/ψφ. This decay is predominantly parity-even but has parity-odd contributions,
which are separated out by using their different decay angular distributions. The analysis
procedure was benchmarked on the high-statistics mode Bd0 → J/ψK*0, and we find
ΔΓs/Γs = 0.65 +0.25/-0.33 +/- 0.01. This analysis was performed in collaboration with Yale
University and has been published in Physical Review Letters (Phys.Rev.Lett.94:101803,2005).
Konstantin's thesis received the URA thesis award in June 2005.

Khaldoun Makhoul, our last graduate student in heavy-flavor physics at CDF, has picked up this
measurement and is extending it to the > 1 fb^-1 dataset, at least a factor of four increase in
statistics over Anikeev's thesis. The increase in statistics will provide a much more
insightful result, but it is also experimentally more challenging, as the increased statistics
demand greater precision in modeling and fitting the data. The analysis is well underway, with
a goal of presenting preliminary results this spring. If enough statistics are available, he
will also perform a measurement of the CP angle equivalent to sin(2β) in the Bs0 system.
An alternative constraint on the lifetime difference may be obtained from a measurement of the
branching ratio of the decay Bs0 → Ds+Ds-, the focus of Boris Iyutin's thesis. As noted by
Voloshin and Shifman, the mere measurement of this branching fraction allows for a
determination of the Bs0 lifetime difference ΔΓs. This decay had not been observed before, the
principal obstacle being the low efficiency for such a complex decay chain. Using
reconstructions via Ds → φπ, Ds → K*0K, and Ds → πππ, a signal is observed in 355 pb^-1. The
reconstructed double-charm decays are depicted in Figure 4. A branching ratio of 9.4 +4.4/-4.2
x10^-3 is obtained, which translates into a lower limit ΔΓsCP/Γs > 1.2x10^-2 at 95% C.L. This
work will soon be submitted to Physical Review Letters for publication.



Branching Ratio Measurements

To measure the Bs0 mixing at such a high frequency it was critical to reconstruct the hadronic
Bs0 decay modes, because they provide the best-resolution sample. Aside from CDF results from
Run II, no exclusive hadronic decays of the Bs0 had been reconstructed apart from two
charmonium modes.

One of the most prominent modes is Bs0 → Ds-π+ (Ds → φπ). Ivan Furic first observed such events
in CDF for his thesis. Because absolute efficiencies are difficult to measure at CDF, Ivan
measured the ratio of branching fractions. The normalization mode used was Bd0 → D-π+
(D → Kππ), kinematically almost identical but much more copiously produced. In the ratio most
systematics cancel. Ivan's thesis has been published in PRL (Phys.Rev.Lett.96:191801,2006). The
measurement is interesting in its own right because it allows tests of soft factorization
theories.
Figure 4: Mass distributions for Bd0 decays to Ds+D- and Bs0 decays to Ds+Ds-.
Another important decay of the Bs0 is Bs0 → Ds-π+π+π- (Ds → φπ). Arkadiy Bolshov has observed
these decay modes and measured branching ratios relative to Bd0 → D-π+π+π- (D → Kππ). The
3-pion modes are more challenging because of the increased combinatorial background arising
from a larger number of daughters. The mix of non-resonant versus resonant Bs0 → Ds-a1+ decays
is another experimental complication to this analysis. This result has been submitted for
publication and, as a by-product, also includes an improved measurement of the Bs0 → Ds-π+
(Ds → φπ) branching ratio (a factor of two reduction in statistical uncertainty). Aside from
their "engineering" value, these measurements are useful input for understanding decay
mechanisms. A paper for this analysis has been accepted for publication by PRL.
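The ratio-of-branching-fractions technique used throughout these analyses can be sketched as
follows. The yields and efficiencies below are purely hypothetical; the point is that only the
relative efficiency (taken from simulation) enters, which is why the method sidesteps CDF's
poorly known absolute efficiencies:

```python
def relative_br(n_signal, n_norm, eff_signal, eff_norm):
    """Ratio of branching fractions from raw event yields.  Absolute
    efficiencies cancel in the ratio, and only the relative efficiency
    of the two selections remains."""
    return (n_signal / n_norm) * (eff_norm / eff_signal)

# Hypothetical yields/efficiencies, purely to illustrate the method:
ratio = relative_br(n_signal=500, n_norm=4000, eff_signal=0.010, eff_norm=0.012)
# multiplying by the well-measured branching ratio of the normalization
# mode (e.g. Bd0 -> D-pi+) then gives an absolute BR for the signal mode
```

Because the signal and normalization modes are kinematically almost identical, most remaining
selection and trigger systematics also cancel in the ratio.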


Measurement of B Masses

As part of the effort to calibrate the tracking of the CDF detector, Andreas Korn measured the
masses of the most commonly produced b hadrons in exclusive decays involving a J/ψ. The
measurements have uncertainties in the one-MeV range and, in particular for the Bs0 and the Λb,
improve on the PDG averages by factors of up to 8. His thesis was approved in 2004 and the
associated paper, the first of Run II, was published in PRL (Phys.Rev.Lett.96:202001,2006).



Study of the X(3872) at CDF

After Belle's discovery of a new narrow state, the X(3872), in August 2003, the MIT group, with
Gerry Bauer and Konstanty Sumorok, was able to confirm this signal and publish in Physical
Review Letters (Phys.Rev.Lett.93:072001,2004) within less than four months. The narrow X(3872)
resonance, which is found in its decay to J/ψ π+π-, does not exactly fit into the pattern of
expected charmonium states. Its mass is close to the kinematic threshold for DD*, which has
invited speculation of, among other things, a molecular bound-meson state. As we approach the
fourth anniversary of its discovery, the interpretation remains controversial.

In 2004 Gerry Bauer completed the analysis in which he confirmed that the X(3872) is mostly
promptly produced in proton-antiproton collisions, complementing the Belle and BaBar sightings
in B decays. In Figure 5 the decay lengths of the ψ(2S) and the X(3872) are compared. In both
cases, the majority of the signal is prompt, with b-decay fractions of 28.3 +/- 1.2% for the
ψ(2S) and 16.1 +/- 5.3% for the X(3872). These results, along with the sizable signal observed
in CDF, suggest X(3872) production is characteristic of charmonium unless the
X(3872) → J/ψ π+π- rate is fairly large, i.e. tens of percent (Int.J.Mod.Phys. A21:959,2006).


Figure 5: Uncorrected proper-time distributions for the ψ(2S) and X(3872) decays.

The next step in studying the X(3872) was to analyze the dipion mass distribution. Alexandre
Rakitine (thesis, summer 2005) measured the dipion mass spectrum, which is shown in Figure 6
along with the dipion spectrum for the ψ(2S) reference mode. The data are compared to various
models, and support the conclusion that the dipion system is in fact a rho meson. This is
surprising, in that it implies there is large isospin breaking in the decay. This result does
not determine whether the X(3872) is a charmonium state or not, but it excludes all
odd-C-parity charmonia. The analysis is published in PRL (Phys. Rev. Lett. 96:102002,2006).

Figure 6: Dipion mass distributions for the ψ(2S) and X(3872).
Direct Search for Magnetic Monopoles

Magnetic monopoles are an attractive possibility because they symmetrize Maxwell's
electromagnetic theory without contradicting the present picture of the Standard Model, and by
Dirac's argument they would explain the quantization of electric charge. They have been
searched for in many ways, but so far without success. In proton-antiproton collisions Dirac
magnetic monopoles would be created in pairs. They would be highly ionizing, and if produced at
the energies available at the Tevatron they would produce very large pulses in the
scintillators of the time-of-flight detector, but would tend to range out in the magnet coil
before entering the calorimeters. Therefore, they would not fire existing triggers. Under the
coordination of Christoph Paus, the MIT group started a magnetic monopole program and designed,
built (with the University of Florida), and commissioned a Level-1 trigger board that utilizes
the time-of-flight scintillators. The trigger board has been documented in an article published
in NIM.

Many monopole searches suffer from extremely low efficiency, a few percent. Our approach, on
the other hand, is highly efficient, around 50%, making excellent use of the available
luminosity. The feasibility study by the MIT group in 2002 estimated that the CDF Run II data
would be sensitive to magnetic monopoles up to roughly 500 GeV. Part of that feasibility study
was also a full-blown extension to GEANT to properly track monopoles through material. This
monopole tracking software has been published as a NIM article (Nucl. Inst. Meth. A43614
(2005)), and we have received a number of contacts from people desiring to use our GEANT
implementation for searches at the LHC.
Michael Mulhearn's thesis analysis, which is based on just one month of data taking, has
meanwhile been completed but, unfortunately, does not find a signal. The corresponding lower
mass limit, which is mildly model-dependent, is 375 GeV. This analysis has been published in
PRL (Phys. Rev. Lett. 96:201801,2006).
1.2.2. High pT Program at CDF
Event Builder and Level 3 Trigger Farm
The CDF Event Builder is the part of the CDF data acquisition system that collects event
fragments after the Level 2 trigger decision and routes them to a single Level 3 trigger node,
on which a global event decision can be made. In July 2004 the Event Builder, originally
constructed in the mid-1990s with Tevatron Run Ib networking technology, imposed a rate
bottleneck of 300 Hz and a throughput bottleneck of 75 MB/second on CDF's DAQ. The desired
trigger table for the anticipated high luminosities throughout the rest of Tevatron Run II
required the Event Builder to pass events at a rate of 1 kHz and with a throughput of 500
MB/second, roughly a factor of six larger throughput than achieved with the original system.
To overcome this bottleneck, Knuteson and Klute implemented an upgrade to the CDF Event
Builder, drawing on the expert knowledge of Steve Tether (formerly MIT, now SLAC) and
Ron Rechenmacher (FNAL). This was one of the primary upgrades planned for CDF Run IIb,
with a budget of 0.5 M$, and represented a complete replacement of the existing Event Builder
system. The upgrade consisted of a network replacement using gigabit Ethernet, and a
replacement to the boards reading out the buffers holding events after a Level 2 trigger accept.
The change in networking structure required a complete rewrite of the event and message
passing code, both to accommodate the new hardware and in the interest of future
maintainability.
This project was carried out with implementation largely by Klute in 2004 and 2005. The new
system was inserted into the CDF data acquisition system chain in August 2005, on time and
within budget. Our Run IIb Event Builder worked sufficiently well in its initial installation that
it was not removed, and for the past 1.5 years it has been the default system with which CDF
takes data. At this point most of the luminosity integrated by CDF in Run II has been taken
using the upgraded Event Builder, which allows for an expanded high-pT trigger table at high
luminosities.
The MIT CDF group, including Knuteson, Henderson, Choudalakis, and Makhoul, together with
continued consulting help from Klute at CERN, provides 24/7 support for this online system.
In Spring 2006, Knuteson, Henderson, Klute, Choudalakis, and Makhoul implemented a 0.5 M$
upgrade to the Level 3 trigger PC farm. This upgrade, including an addition of over 200
computing nodes, doubles the computing power of CDF's Level 3 trigger farm, providing
sufficient strength to handle luminosity projections throughout Tevatron Run IIb. Associated
with this hardware upgrade and necessary networking changes was a port of the Level 3 code to
Scientific Linux, and consolidation of the Level 3 infrastructure to ensure maintainability
throughout the remainder of Tevatron Run II. This upgrade was completed on time and within
budget by early summer 2006, finalizing the configuration for Event Builder and Level 3 running
throughout the rest of Tevatron Run II.
With most of the PPC group resources having shifted to CMS, substantial effort has been exerted
by Knuteson and Henderson to streamline the Event Builder and Level 3 systems to allow
continued maintenance and 24/7 support by personnel reduced by a factor of three from two
years ago. Providing prompt and expert assistance, including the debugging of problems that the
improved Event Builder and Level 3 systems are now able to diagnose in systems upstream,
continues to take substantial effort. MIT (Knuteson, Henderson, Choudalakis, and Makhoul)
continues to retain sole responsibility for these crucial online systems.
CDF High-pT Global Analysis: Past
The goal of the MIT CDF high-pT effort is to search all high-pT data for the first hint of physics
beyond the Standard Model. This effort is particularly timely, with the luminosity integrated at
CDF having now reached 1.5 fb^-1, and with a factor of up to five yet to be obtained before the
LHC experiments realize their full potential. It is incumbent upon us to maximize the discovery
potential of the significant U.S. investment in Tevatron Run II.
With the physics beyond the Standard Model not well predicted, it is clearly important to search
as much of the Tevatron's high-pT data as possible to minimize the possibility that something has
been overlooked. The Tevatron has the potential for leaving a negative legacy: if the LHC turns
on and it becomes clear that something could have (and should have) been seen in Tevatron Run
II, this will represent one of the most expensive missed opportunities in our nation's science
program.
Recognizing the importance of the world's high-pT data to possible interpretation of a hint seen
in Tevatron Run II, and lacking resources to start a global analysis effort in CDF until June 2005,
Knuteson used 2004 and the first half of 2005 to consolidate data and expert collaboration
knowledge within H1, Aleph, and L3, while overseeing the successful completion of the CDF
Run II Event Builder upgrade. HERA-I and LEP 2 data are retained as reconstructed object 4-vectors for individual events of both data and Monte Carlo, and in histograms of over 20,000
kinematic distributions comparing data to Standard Model prediction in 40 final states at H1 in
HERA-I, and in over 500 exclusive final states in ALEPH and L3 in LEP 2. Although the
disbanding of collaborations has made publication of this understanding problematic, H1 has
approved the publication of Quaero@H1 by Knuteson and Caron (Freiburg) after one year of
thorough review. Quaero@H1 provides an automatic interface to the H1 HERA-I data, allowing
automatic testing of specific new physics hypotheses against H1 HERA-I data and D0 Tevatron
Run I data combined. This represents the first combined analysis tool using 4-vector-level
information at two of the world's frontier energy colliders. The Quaero@H1 article (hep-ph/0612201), which has been submitted to the European Physical Journal C, includes three analyses
testing new physics scenarios against included data. At this point the understanding of Aleph
and L3 data accumulated by Knuteson and collaborators Cranmer (BNL) and Holzner (CERN)
perhaps represents the field's most encompassing preservation of the LEP 2 data. These data
may yet have an important role to play in the interpretation of new physics seen at the Tevatron
or at the LHC, noting that the multi-billion dollar International Linear Collider currently under
consideration is only a factor of 2.4 higher in energy than LEP 2.
A global view of all CDF high-pT data has been achieved in the period between June 2005 and
July 2006 in an analysis effort led by Knuteson, Henderson, and Choudalakis, with invaluable
assistance from Culbertson (FNAL). A PRL and a 70-page PRD describing the analysis of 1
fb^-1 of CDF Tevatron Run II data have been prepared. Recognizing the novelty and potential
impact of this analysis, the CDF Spokespersons arranged for a special Review Committee to
review the analysis in Summer 2006. The Review Committee was chaired by Goshaw (Duke).
After three months of intense review, the committee affirmed the technical soundness of the
global approach in general and of the analysis of these data in particular. The analysis includes
both a model-independent (Vista) and a quasi-model-independent (Sleuth) search for new
physics in the Tevatron data, and represents the first application to an entire hadron-hadron
collider high-pT data sample of techniques earlier applied in the analysis of D0 Run I data by
Knuteson [Phys. Rev. Lett. 86, 3712 (2001); Phys. Rev. D 64, 012004 (2001); Phys. Rev. D 62,
092004 (2000)] and in the analysis of H1 HERA-I data by Caron and others, with Knuteson
consulting [Phys. Lett. B 602:14-30 (2004)]. A vista of 340 exclusive Tevatron final states is
shown in Figure 7.
Figure 7. Numbers of events observed and predicted in 344 exclusive final states at CDF Run II.
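The bookkeeping at the heart of Vista — partitioning events into exclusive final states by their reconstructed objects and comparing observed to predicted counts — can be sketched in a few lines. The labeling scheme and event representation here are invented for illustration, not the actual CDF code:

```python
from collections import Counter

def final_state_label(objects):
    """Exclusive final-state label, e.g. ['e+', 'j', 'j'] -> '1e+2j'."""
    counts = Counter(objects)
    return "".join(f"{n}{kind}" for kind, n in sorted(counts.items()))

# Toy events, each a list of reconstructed objects
events = [["e+", "j", "j"], ["mu+", "mu-"], ["e+", "j", "j"]]
observed = Counter(final_state_label(ev) for ev in events)
print(dict(observed))  # → {'1e+2j': 2, '1mu+1mu-': 1}
```

In the real analysis each such label indexes both an observed count and a Standard Model prediction, and the discrepancy between the two is what Figure 7 displays final state by final state.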
The Vista and Sleuth analysis has since undergone review by physics groups and individuals
within CDF. After ten months of review, during which time the analysis has not changed in any
noticeable way, the final steps are now being taken toward final collaboration approval and
publication. Over the past two years, Knuteson, Henderson, and Choudalakis have delivered
over fifty presentations within CDF describing various elements of this encompassing analysis.
Over one dozen colleagues -- including Conveners, Godparents, Review Committee members,
and others -- have been officially responsible for the oversight and review of this analysis.
A number of studies have been performed to understand the sensitivity of this analysis to new
physics. Figure 8 shows the Sleuth Wbbjj final state with top quark pair production
subtracted. Scaling back the integrated luminosity of the sample to Run I, it is seen that Sleuth
would have found the top quark in Tevatron Run I data, even if its existence were not already
known. This conclusion has been blessed by CDF, was shown by Choudalakis at DPF 2006, and
will be described in detail in the forthcoming publication describing this analysis.
Figure 8. The Sleuth Wbbjj final state with top quark pair production subtracted from the
Standard Model prediction.
The sensitivity of this analysis is characterized in an appropriately model-independent way by
parametrizing new physics by an overall coupling g and mass scale m in each exclusive Sleuth
final state. The Sleuth Wbbjj final state is shown in Figure 9a, with all overall couplings g and
mass scales m that would be triggered by Sleuth shown as shaded. Placing an appropriate prior
on the mass scale m allows the fraction of new physics parameter space probed in this analysis to
be quantified for each final state. Assuming a prior in which new physics is equally likely to appear
in any final state respecting basic conservation laws allows the construction of Figure 9b, in
which the grayscale in each box shows the fraction of g vs. m space covered by Sleuth in
Tevatron Run IIa in each of these final states. The expectation that this analysis would have
discovered new physics is on the order of 20%, roughly three orders of magnitude greater than a
targeted search in a single final state, such as a search for a Z' decaying to electrons. This global
analysis covers roughly 1000 times the discovery potential of any individual targeted search.
Figure 9 (a) The region (shaded) of new physics coupling (g) and mass scale (m) covered by
Sleuth in Tevatron Run IIa. (b) The shaded fraction (shown as a grayscale level) in many
specific final states, where the final state corresponding to each box is obtained by adding the
objects labeling the corresponding row to the objects labeling the corresponding column.
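The quantification just described — averaging Sleuth's trigger indicator over a grid of couplings and a prior on the mass scale — might be sketched as follows. The grid, the sensitivity boundary, and the 1/m prior below are all assumptions for illustration, not the analysis's actual choices:

```python
import numpy as np

def coverage_fraction(g_grid, m_grid, flagged, m_prior):
    """Fraction of (g, m) new-physics parameter space covered by Sleuth.

    g_grid, m_grid : 1-D arrays of coupling and mass-scale points
    flagged        : 2-D boolean array, flagged[i, j] True if Sleuth would
                     have triggered on new physics at (g_grid[i], m_grid[j])
    m_prior        : callable, prior density on the mass scale m
    """
    # Weight each mass-scale point by the prior (normalized over the grid);
    # couplings are taken uniform over the grid for simplicity.
    w = np.array([m_prior(m) for m in m_grid])
    w = w / w.sum()
    # Average the flagged indicator over g, then over the m prior.
    return float((flagged.mean(axis=0) * w).sum())

# Toy example: Sleuth flags points with large coupling and low mass scale.
g = np.linspace(0.1, 2.0, 50)
m = np.linspace(100.0, 1000.0, 50)
G, M = np.meshgrid(g, m, indexing="ij")
flagged = G / (M / 100.0) > 0.5           # hypothetical sensitivity boundary
frac = coverage_fraction(g, m, flagged, m_prior=lambda x: 1.0 / x)
print(round(frac, 2))
```

The grayscale of Figure 9b corresponds to one such fraction computed per exclusive final state.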
Final collaboration approval of the analysis is expected within roughly one month of this writing,
with articles expected to be submitted to Physical Review D and Physical Review Letters
roughly two months later, after additional CDF collaboration review. A library of roughly
17,000 kinematic distributions comparing data to Standard Model prediction in over 300
exclusive Vista final states has also been prepared, and will be discussed within the collaboration
with an eye toward the possibility of making this library of distributions publicly available. This
discussion has currently been tabled so as not to destructively interfere with the ongoing
discussion of the publication of the global Vista and Sleuth searches of 1 fb^-1 of CDF data.
Among the many interesting features seen in these 17,000 kinematic distributions is the
3j:deltaR(j2,j3) discrepancy of Figure 10, shown at several conferences by Henderson, Rick
Field (University of Florida), Dan Green (FNAL), and others, demonstrating an inadequacy in
our current understanding of some combination of showering, the underlying event, pileup, and
initial state radiation in hadron-hadron collisions. This discrepancy is one of several first
identified by Vista two years ago, since reproduced by many, yet remaining unresolved.
Figure 10. Angular separation ΔR (in η–φ space) between the second and third leading jets in Vista’s low-pT 3j
final state. A clear discrepancy is observed between CDF data and event generator prediction.
Vista has also enabled a Monte Carlo based modeling of fakes not achieved by any other
Tevatron analysis, reflecting a detailed understanding of the underlying physical mechanism by
which a quark ends up being misreconstructed as an electron, muon, tau, or photon. This
understanding has been gained from basic fragmentation physics combined with single particle
gun studies and detailed investigation of data, representing a suite of useful techniques developed
by Knuteson and Choudalakis. The underlying problem of how often a 50 GeV quark fragments
into a pion carrying 48 GeV or more of energy (for various values of 50 and 48) represents one
of the most important outstanding theoretical problems affecting our ability to do meaningful
physics at the LHC, as has been pointed out by Knuteson.
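As a rough illustration of the quantity in question, one can integrate an assumed fragmentation-function shape over its hard tail; the D(z) ∝ (1 − z)² form below is purely illustrative and not a measured fragmentation function:

```python
from scipy.integrate import quad

def hard_tail_fraction(z_min, n=2.0):
    """Probability that the pion carries a fraction z >= z_min of the
    quark's energy, for an assumed D(z) ~ (1 - z)^n shape."""
    norm, _ = quad(lambda z: (1 - z) ** n, 0.0, 1.0)
    tail, _ = quad(lambda z: (1 - z) ** n, z_min, 1.0)
    return tail / norm

# e.g. a 50 GeV quark yielding a pion with >= 48 GeV (z >= 0.96)
print(hard_tail_fraction(48.0 / 50.0))  # analytically (1 - 0.96)^3 = 6.4e-5
```

Under this assumed shape the tail probability is (1 − z_min)^(n+1); the physics question raised above is precisely how trustworthy such shapes are at large z.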
In addition to solving the pressing experimental problem of how to analyze the Tevatron high-pT data in a manifestly self-consistent way, the MIT CDF High-pT group has also, together with
Mrenna (FNAL), provided the first solution to the pressing theoretical problem of how to
interpret any hint of new physics seen at the Tevatron or LHC in terms of the underlying
physical theory. In February 2006, Knuteson and Mrenna published Bard (hep-ph/0602101),
describing how to systematically draw all conceivable Feynman diagrams, introducing new
diagrams and interactions as necessary, to explain an observed Sleuth discrepancy. To test the
many explanations (“stories”) that Bard constructs to explain a particular discrepancy, it is
crucial to be able to test these hypotheses as quickly and robustly as possible. This is of course
the purpose for which Quaero (http://mit.fnal.gov/Quaero) has been designed. Subsequently
Knuteson, Mrenna, and a team of theoretical colleagues including Schuster, Toro, and Arkani-Hamed (Harvard), Thaler (Berkeley), and Wang (Princeton), have developed MARMOSET
(Mass and Rate Matching using On-Shell Effective Theories) to correlate individual Sleuth
discrepancies and construct a self-consistent and appropriately model-independent description
that can in turn be used to motivate the construction of an underlying Lagrangian (hep-ph/0703088). This team is now incorporating MARMOSET into the global Vista comparison, so
we are able to quickly see which (if any) discrepancies suggest compelling new physics
interpretations.
A continued and insistent push by Knuteson and Culbertson (FNAL), starting in May 2006 and
continuing into the present, has started a fledgling Vista@D0 effort within D0, carried out by
Naimuddin and Atramentov (FNAL), Piper (MSU), Protopopescu (BNL), and Meyer (Aachen),
with Knuteson providing technical assistance with the Vista and Sleuth analysis infrastructure
and physics consulting. It has been emphasized by Knuteson (hep-ph/0608025) that the expected
accumulated luminosity by the Tevatron is not the often quoted 8 fb^-1, but rather 16 fb^-1,
upon combination of CDF and D0 data. Knuteson is not advocating the combination of results,
but rather the addition of histograms, where black filled circles represent the union of both CDF
and D0 data. No such histogram has ever been produced in the history of the Tevatron. With the
Tevatron's integrated luminosity doubling roughly every 1.5 years, the doubling of data that can
be achieved by combining data collected at CDF and D0 represents a value of roughly 400 M$.
Vista@D0 is proceeding significantly more slowly at D0 than hoped, primarily because
Knuteson has been disallowed by D0 management from actively engaging in the debugging of
D0 discrepancies himself.
The global analysis of Tevatron Run II data at both CDF and D0, carried out largely by the MIT
CDF group together with invaluable collaborators, represents one of the greatest hopes for
surprising discovery in the United States before the frontier energy collider analysis center of
gravity moves to CERN's LHC.
CDF and CMS High-pT Global Analysis: Future
With continually increasing instantaneous luminosity, the prospects for discovery at the Tevatron
are good. The MIT CDF high-pT group aims to maximize its discovery potential until the end of
Tevatron Run II. Choudalakis will complete his Ph.D. thesis in Summer 2008 on the first 2.5
fb^-1 of CDF Run II data. Choudalakis's thesis and associated publications (expected to include
two PRDs and two PRLs) will represent the first systematic search of over 300 exclusive final
states in high transverse momentum hadron-hadron collisions at the energy frontier. Xie, an MIT
graduate student who is finishing his coursework and exams in his first year, will be moving to
Fermilab in Summer 2007. Xie's thesis will be the analysis of all Tevatron Run IIb data,
including Tevatron data accumulated up through the end of 2009. Xie's thesis represents terrific
discovery potential. The plan for publication throughout the rest of Tevatron Run II is a PRD
and PRL describing the global analysis of 1 fb^-1 of Tevatron Run II data in Summer 2007, a PRD
and PRL describing the global analysis of 2.5 fb^-1 of Tevatron Run II data at the end of 2008,
and a PRD and PRL describing the global analysis of the entire Tevatron Run IIb data sample at
the end of 2009, in the event no new hint of new physics is seen. If a hint of new physics is seen
and a discovery case can be made, this will of course be our primary and exclusive focus. Our
group will also do everything we can in order to aid a similar effort at D0, with an eye to
eventual combination of data sets. In this endgame of the CDF experiment, the global approach
being carried out by our group has an important advantage: whereas other groups have students
and postdocs leaving, without a continuous analysis thread binding the group effort, the Vista
analysis infrastructure and continuity of technical expertise provided by Knuteson and
Henderson ensures a smooth transition from Choudalakis to Xie, and a strong finish to this
analysis program. Maximizing the chance for a Tevatron discovery will be the primary focus of
Knuteson, Henderson, Choudalakis, and Xie over the next 2-3 years.
In parallel, Knuteson and Xie are putting in place the infrastructure to proceed with this global
analysis program at CMS. The idea of performing a global analysis of LHC data is not one
being actively pursued by many other university or laboratory groups around the world, and
represents an area in which our group can make significant impact. It is becoming recognized
that this global analysis view -- which has so far been achieved only as an endgame at D0 Run I,
H1 HERA I, Aleph and L3 at LEP 2, and CDF and eventually D0 in Tevatron Run II -- has
substantial potential as a commissioning tool during LHC startup and CMS shakedown. Many
debugging tools already exist, and such tools will be required at all levels, from the monitoring
of individual electronics channels to the monitoring of higher-level signals. Perhaps remarkably,
there remains a niche for a debugging tool that has really not been adequately filled in collider
experiments over the past twenty years. Vista@CMS will provide a 4-vector level,
encompassing comparison of data and Standard Model prediction, which then motivates specific
lower-level debugging efforts. The list of discrepancies noted by Vista@CMS, in a format
similar to Figure 1, represents a to-do list ordered by decreasing discrepancy that a sufficiently
critical mass of physicists should be able to efficiently debug. Getting to a sufficient
understanding of data, Standard Model prediction, and CMS detector response that Vista and
Sleuth can be used in the endgame clearly motivates using Vista and Sleuth to concentrate on
understanding differences between data and Standard Model prediction in the opening game,
rather than conducting random SUSY searches.
Projecting a list of the leading discrepancies at CMS against the inside wall of CERN's building
40 would almost certainly serve to focus attention on understanding these discrepancies. The
systematic understanding of discrepancies in this way has not been the primary focus of the CDF
and D0 experiments. In addition to the total numbers of events observed and predicted in each
exclusive final state, Vista also considers a large number of potentially relevant kinematic
distributions, ranking these according to decreasing shape discrepancy, and providing significant
insight into the nature of the underlying effects that require resolution. We expect Vista's
automated debugging insight to also prove useful at CMS.
The standard candles often talked about to ensure proper understanding of LHC detector
response and reconstruction software, such as W boson, Z boson, and top quark production, are
seen to be one of thousands of candles automatically considered by Vista, which systematically
finds discrepancies in both final states and shapes of relevant distributions. Individual standard
candles only need to be considered as brought to our attention by Vista. In particular, it is noted
that the dielectron and dimuon mass distributions in the Vista 1e+1e- and 1mu+1mu- final states,
and the electron and missing energy and muon and missing energy transverse mass distributions
in the Vista 1e+1pmiss and 1mu+1pmiss final states are just four of the tens of thousands of
distributions automatically considered by Vista. As anyone who has commissioned a hadron
collider in the past twenty years knows, one of the first standard candles is not the Z peak, but is
instead the phi distribution of the missing transverse energy, a sensitive probe of detector
response that is never right at the beginning, and which can be studied already in the 900 GeV
LHC pilot run scheduled for later this year.
To jump-start Vista@CMS, Xie has already determined the Vista@CMS offline trigger.
Comparing data to Standard Model prediction must be performed on a subset of high-pT data
containing all desired control samples and including the few million most interesting events that
have been collected to date. Xie has been able to use our initial knowledge of LHC physics and
CMS detector response to determine a suite of offline triggers selecting the 5 million most
interesting events as a function of accumulated integrated luminosity. A note has been written
describing this set of offline triggers, and is being prepared for posting to the arXiv. The
infrastructure set up by Knuteson and Xie to determine these offline triggers is quite flexible, and
can be easily adjusted to reflect continuous improvement of our understanding of LHC physics
and CMS detector performance over the next three years. An example of the number of events
selected as a function of electron pT for a single electron trigger is shown in Figure 11a; turning
this around to determine where a pT cut must be placed in order to select the 0.5 million events
passing such a trigger is shown in Figure 11b.
Figure 11 (a) Number of events (vertical axis) passing a single electron offline trigger with pT
cut on the electron (horizontal axis) in 1 fb^-1 of LHC data, as an integral plot from infinity with
prescales placed at points marked with arrows on the horizontal axis. (b) The pT cut required
(vertical axis) for this single electron offline trigger to accept the appropriate number of events
for Vista@CMS, together with lower-pT prescaled samples, as a function of time.
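The logic of Figure 11b — reading the pT threshold off the integral spectrum so that a fixed number of events survive — can be sketched as follows. The function name and the toy spectrum are assumptions, not the actual Vista@CMS trigger code:

```python
import numpy as np

def pt_cut_for_target(pt_values, n_target):
    """pT threshold above which roughly n_target events survive,
    read off the integral spectrum 'from infinity' as in Figure 11."""
    pts = np.sort(np.asarray(pt_values))[::-1]   # descending in pT
    if n_target >= pts.size:
        return float(pts[-1])                    # keep everything
    return float(pts[n_target - 1])              # n_target-th hardest event

# Toy steeply falling spectrum (tail roughly ~ pT^-4 above 20 GeV)
rng = np.random.default_rng(0)
pt = 20.0 * rng.pareto(3.0, size=1_000_000) + 20.0
cut = pt_cut_for_target(pt, n_target=500_000)
print(cut, int((pt >= cut).sum()))
```

Rerunning this as the accumulated luminosity grows traces out exactly the kind of cut-versus-time curve shown in Figure 11b.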
A related problem is developing a complete Standard Model background estimate that can be
applied to the events selected by the Vista@CMS offline trigger and that can be iteratively
improved with increasing sophistication, swapping individual processes in and out as desired or
required. In addition to providing a global Vista@CMS commissioning tool, we intend also to
provide the Standard Model to which CMS data will be compared. Our group's experience in
developing a similarly complete Standard Model prediction at Tevatron energies will be used to
construct a Standard Model prediction at LHC energies, in collaboration with Mrenna (FNAL)
and others.
Little event generator tuning has been done over the past ten years. Tuning for Tevatron Run II
has not occurred, largely because event generators were adequately tuned at similar energies
using data from LEP 1, HERA-I, and Tevatron Run I in the 1990s. The 3j:deltaR(j2,j3)
discrepancy shown in Figure 10 has pointed out the need for a significantly more sophisticated
event generator-tuning infrastructure than currently exists. One of the requirements for having a
general event generator tuning infrastructure is having a well understood, manifestly consistent,
and global view of all high-pT data. Our group intends to provide this global view, both at CMS
and at the lower energy colliders that have run over the past decade, including HERA-I, LEP 2,
and Tevatron Run II. Our group is working toward the development of the necessary
encompassing Standard Model prediction. It is crucial also to have a detector simulation with
time cost not significantly greater than the time cost of the employed event generator. The full
CMS detector simulation currently has a time cost of 100 seconds per event, making the
generation of 1 million events at each of one hundred event generator parameter points to
compare against data a near impossibility. Even the CMS fast detector simulation (FAMOS),
operating at speeds of roughly 1 second per event, is too slow to handle the fast turnaround that
will be required in order to quickly commission the event generators at LHC energies. In
response to this problem, Knuteson and Makhoul have developed an algorithm (TurboSim) that
uses fully simulated events to construct a gigantic lookup table that can then be used to simulate
subsequent events. This gigantic lookup table stores knowledge of the detector response
encapsulated in the detailed detector simulation. The gain in speed using TurboSim is that one
does not need to re-simulate an electron headed into a particular region of the detector if one
already has a number of examples of what happens when an electron heads into a certain region
of the detector. The TurboSim lookup table is based on the 3-vectors of reconstructed objects
(pT's, η's, and φ's), and includes lines in the lookup table mapping one or two neighboring
parton-level objects to zero or more reconstructed-level objects, enabling TurboSim to learn
from the detailed simulation what happens when two objects are close together in the detector,
and how to handle isolation effects. TurboSim has already been commissioned at H1, Aleph,
 and is used within Quaero as a fast, standalone detector simulation for these
 and L3,
experiments. TurboSim can be used as a fast, standalone, detector-specific simulation both
inside and outside the Tevatron and LHC collaborations.
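The lookup-table idea can be caricatured in a few lines of Python. This is a deliberately minimal sketch with invented binning and object representations; the real TurboSim table is far finer-grained and handles merged and missing objects:

```python
import random
from collections import defaultdict

def cell(obj, pt_bins=(0, 20, 50, 100, 250, 1000)):
    """Coarse lookup-table cell for a parton-level object (type, pT, eta)."""
    kind, pt, eta, _phi = obj
    ipt = sum(pt >= edge for edge in pt_bins) - 1
    ieta = int(abs(eta))                  # unit-width |eta| bins
    return (kind, ipt, ieta)

class TurboSimSketch:
    """Toy lookup-table detector simulation in the spirit of TurboSim."""

    def __init__(self):
        self.table = defaultdict(list)    # cell -> list of reco outcomes

    def train(self, parton_obj, reco_objs):
        # One fully simulated example: this parton-level object was
        # reconstructed as this (possibly empty) list of reco objects.
        self.table[cell(parton_obj)].append(reco_objs)

    def simulate(self, parton_obj):
        outcomes = self.table.get(cell(parton_obj))
        if not outcomes:                  # no example in this cell
            return [parton_obj]           # fall back to a perfect detector
        return random.choice(outcomes)    # sample an observed outcome

sim = TurboSimSketch()
# "Full simulation" examples: a ~60 GeV quark reconstructed as a jet,
# occasionally faking an electron.
sim.train(("q", 60.0, 0.5, 1.0), [("j", 57.2, 0.5, 1.0)])
sim.train(("q", 62.0, 0.4, 2.0), [("e", 59.8, 0.4, 2.0)])
print(sim.simulate(("q", 55.0, 0.3, 0.2)))
```

The speed gain comes from replacing a full shower-and-digitization pass with a dictionary lookup: once a cell has enough stored examples, simulating a new object in that cell costs essentially nothing.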
Knuteson and Xie will shift their intellectual weight from CDF to CMS gradually in 2008 and
2009, making sure not to miss an opportunity for a discovery at the Tevatron, while attempting to
maximize the impact of this global analysis approach as an initial physics commissioning tool at
CMS. Setting up this commissioning effort at CMS will utilize resources at the Fermilab LHC
Physics Center, which taps into an express and calibration stream within CMS data flow. This
effort will be carried out with a fraction of Knuteson's and Xie's time, with whatever strength
remains after ensuring the adequate analysis of Tevatron data.
W Helicity in t → Wb decays at CDF
Before transitioning to CMS, Markus Klute, in collaboration with the CDF top quark group,
performed a measurement of the helicity of W bosons in top quark decays. The short lifetime of
the top quark implies that, to a very good approximation, it decays as a free quark rather than in a
bound state, which results in the propagation of the spin state of the quark to its decay
components, free from large perturbations due to QCD effects.
The top quark decays almost exclusively into a W boson and bottom quark; thus, measuring the
W helicity in top quark decays directly probes the V-A nature of the t→Wb coupling. In the
Standard Model, the fractions of W bosons in the longitudinal (f0), left-handed (f-), and right-handed (f+) polarization states are dictated by the masses of the decay products. The fraction f0
dominates at 70.3 ± 0.7% for a top mass of 175 GeV/c2 due to the large coupling of the top quark
to the Goldstone mode of the Higgs field, while f- is expected to make up the remaining 30%,
and f+ is suppressed to the 10^-4 level.
The measurement of f0 and f+ is performed using an analysis of the angular distribution of the W
decay products, which is sensitive to the different polarization states. From a data sample of one
fb^-1, 220 top quark pair events in which one W decays leptonically (to an electron or muon) and the
other hadronically are selected and reconstructed using a kinematic fitting technique. The background,
mostly coming from events with a W and multiple jets, is suppressed to the level of 20 events
with requirements on the total transverse momentum and the presence of a secondary vertex
from one of the b quark decays. The angle of the lepton in the rest frame of the W with respect
to the W direction in the rest frame of its parent top quark defines the angle θ*, which is sensitive
to the different polarization fractions. A maximum likelihood fit is applied to the data using
templates for the different polarization states, corrected for acceptance and reconstruction effects
which modify the theoretical distributions in cos θ*. From the likelihood fit the different
fractions are extracted.
For the first time, the statistics of the data sample has allowed a simultaneous fit to the f0 and f+
fractions, in addition to the traditional method of fixing one fraction to the Standard Model
expectation and fitting for the other. The resulting longitudinal and right-handed fractions are
f0 = 0.74 ± 0.25 ± 0.06
f+ = -0.06 ± 0.10 ± 0.03
with the first error statistical and second systematic. The dominant systematic error comes from
the possible variation in the cos* distribution of the background processes. Figure 12 shows the
data distribution of cos* with the fit results superimposed as well as the contour plot in (f0, f+)
space with the levels of constant log likelihood indicated. If we fix one fraction to its Standard
Model expectation and fit for the other, we find
f0 = 0.61 ± 0.12 ± 0.06
f+ = -0.06 ± 0.06 ± 0.03
and set an upper limit of f+ < 0.11 at the 95% confidence level. These results are in agreement
with the Standard Model predictions and have been published in Phys. Rev. D73, 111103 (2006).
Given that the statistical error dominates, the analysis will certainly be repeated by CDF with a
larger data sample.
Figure 12. On the left, the distribution in cos θ* for data with fit results for different polarization
states and background superimposed. On the right, the contour plots of constant log-likelihood
in the (f0, f+) plane, with the Standard Model point and the measured values.
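The fit described above can be illustrated with a toy unbinned likelihood fit using the standard leading-order cos θ* shapes for the three helicity states. Acceptance corrections, backgrounds, and the actual CDF template machinery are omitted, and the sample is invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def pdf(c, f0, fp):
    """Lepton cos(theta*) density for W-helicity fractions (f0, f-, f+)."""
    fm = 1.0 - f0 - fp
    return (f0 * 0.75 * (1 - c**2)
            + fm * 0.375 * (1 - c)**2
            + fp * 0.375 * (1 + c)**2)

def nll(params, data):
    f0, fp = params
    fm = 1.0 - f0 - fp
    if min(f0, fp, fm) < 0:              # stay inside the physical region
        return 1e9
    return -np.sum(np.log(pdf(data, f0, fp)))

# Toy "data": Standard-Model-like sample (f0 = 0.70, f+ = 0) generated
# by accept-reject; detector acceptance effects are ignored here.
rng = np.random.default_rng(1)
c = rng.uniform(-1, 1, 200_000)
keep = rng.uniform(0, 1.6, c.size) < pdf(c, 0.70, 0.0)
data = c[keep][:5000]

res = minimize(nll, x0=[0.5, 0.1], args=(data,), method="Nelder-Mead")
f0_hat, fp_hat = res.x
print(round(f0_hat, 2), round(fp_hat, 2))
```

With acceptance folded in and background templates added, the same likelihood structure yields the simultaneous (f0, f+) fit quoted above.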
1.2.3. Monte Carlo Production Facility at MIT
Since 2004 the MIT group, under the leadership of Paus, has designed, built, and maintained a Monte
Carlo production facility at MIT. The facility consists of PCs that were retired from the
Level-3 PC farm at CDF when their warranties expired. This year the computing farm, which has
served the needs of CDF, has been merged with the CMS Tier-2 computing center to form a
larger pool of resources for both collaborations. This activity will continue until the end of our
participation in CDF or until the computing resources become insignificant.
1.3. CDF Summary
We have been members of the CDF collaboration since 1991. During the past year we have
scaled down our activities while still successfully completing the upgrade of the Level-3 farm.
The Run IIb Event Builder was upgraded in 2005 to deal with the increasing luminosity of the
Tevatron. We continue to maintain the Run IIb Event Builder as well as the L3 trigger farm. We
request an additional hire at the postdoctoral level to facilitate this effort and the physics
programs of MIT interest.
In Run I we had a very productive physics program and in Run II we have continued to
concentrate on heavy-flavor physics led by Christoph Paus. Bruce Knuteson is leading a search
for new phenomena in high-pT physics at the electroweak scale. In the last year we have
graduated three students and we expect two more to graduate by the end of this year, while the
remaining three, all resident at Fermilab, will finish in 2008 and 2009. We are in a transition
phase from the CDF experiment to the CMS experiment and we are planning to have the whole
group transferred to CMS by the end of 2008.
2. Phobos Experiment
2.1 Introduction
Since the late 1960s, members of the Particle Physics Collaboration have carried out, and in
most cases originated and led, a variety of high-energy experiments in which a nucleus was used
both as the target and as the analyzer of the reaction products. Studies of the A-dependence of the
coherent photoproduction of vector mesons showed that the size of the rho meson is of the
order of that of a single pion, rather than that of two. Extensive studies at Fermilab of the
A-dependence of multiparticle production and of the spectra of leading particles gave, for the
first time, detailed information on the cascading of high-energy hadrons in nuclear matter and on
the energy loss of hadrons as they pass through nuclear matter. The former showed that
high-energy hadrons take significant time to form and led to the notion of the “inside-outside cascade”
in particle production. The latter gave rise to the first data-based estimate of the baryon densities
that may be achieved in relativistic heavy-ion collisions. Studies of the A-dependence of deep
inelastic electron and muon scattering shed light on the phenomenon of shadowing and on the
energy lost by a quark as it propagates through nuclear matter.
Theoretical studies of quantum chromodynamics (QCD) show that the “condensed matter of
QCD” has a very rich phase structure; see Figure 13a. Colliding heavy ions at higher and higher
energies is the only known way of producing and studying hadronic matter at various densities
and temperatures and thus mapping out this phase structure. In 1991 one of us, Wit Busza,
formed a collaboration of particle and nuclear physicists to study high-density QCD using the
Relativistic Heavy Ion Collider (RHIC), then planned for construction at Brookhaven National
Laboratory. The Phobos detector was completed in time for the first RHIC collisions in June
2000. The performance of the Phobos detector, shown in Figure 13b, met or exceeded all design
specifications.
Figure 13. a) Phase diagram of the “condensed matter of QCD”. b) Layout of the Phobos detector. The
beams collide at a point located just to the right of the double-dipole magnet, the top of which is not
shown.
Phobos participated in the RHIC research program for five data-taking runs, during
which Au+Au collisions were studied at √sNN = 19.6, 56, 63, 130 and 200 GeV,
Cu+Cu at √sNN = 22.4, 62.4 and 200 GeV, p+p collisions at √sNN = 200 and 410 GeV,
and d+Au at 200 GeV. In total more than one billion events were collected
and analyzed, yielding many fascinating and unexpected results. They have led to 33
publications in refereed journals, including the first physics publication from each of the
first five RHIC runs during which Phobos took data. These results, in particular the most
recent, are discussed below.
At present we are winding down the Phobos project. Having played the leading role in it,
we are now focusing on studies of even higher density QCD using the CMS detector at the LHC. This
program is discussed in Section 3.
2.2 Role of MIT, and in particular of PPC personnel, in Phobos
The Phobos collaboration consists of some 50 physicists from Argonne National
Laboratory, Brookhaven National Laboratory, the Institute of Nuclear Physics in Krakow
(Poland), MIT, National Central University in Taiwan, the University of Illinois at Chicago,
the University of Maryland and the University of Rochester. Unlike the other RHIC collaborations,
Phobos is 50% high-energy and 50% nuclear physicists, which we believe is partially responsible
for its success. MIT originated the project and has led it from the beginning. At MIT both the
PPC and the Relativistic Heavy Ion Group, supported by Nuclear Physics funds, are
deeply involved in the high-density QCD studies, Phobos and CMS. A much lengthier
description of these programs, including the roles of our colleagues at LNS supported by
Nuclear Physics funds and the heavy-ion physics programs that preceded Phobos, may be
found in the Relativistic Heavy Ion Group proposal being submitted concurrently to the
Nuclear Physics Program.
Wit Busza has been the spokesperson for Phobos from its inception. Bolek Wyslouch was
the project manager during design, construction and commissioning of the Phobos
detector. Currently he is leading the Heavy Ion Program within the whole CMS
collaboration, and is spokesperson for the US institutions involved in this program.
2.3 Phobos Results and Physics
The main contribution of Phobos to the overall RHIC research program is the extensive
and unique systematic study, as a function of collision system and energy, of charged-particle
production over the full range of rapidity and azimuth, and at mid-rapidity over
the broadest range of transverse momenta. An example of the quality and extent of such
studies is shown in Figure 14.
Broad discussions within the RHIC community concerning the interpretation of the data
collected during the early runs culminated in a series of meetings during the summer of
2004. Recognizing that a consensus was developing on the overall conclusions, all four
experiments agreed to summarize their results in a set of White Papers. The PHOBOS
White Paper presented the collaboration’s perspective on the most important discoveries
including the nature of the medium being created and the presence of many surprisingly
simple scaling laws in the particle production. This work is published in Nuclear Physics
A and as a BNL Formal Report. The conclusions of the paper are quoted below.
Figure 14. Examples of the extensive and unique systematic studies made by Phobos.
a) Pseudorapidity distribution of charged particles produced in Au+Au collisions at three
energies and six values of the impact parameter.
b) Transverse momentum distribution near midrapidity of pions, kaons and protons for
central Au+Au collisions at an energy of 200 GeV.
PHOBOS data and results from the other RHIC experiments, combined with very general
arguments, which are either model independent or depend on fairly simple model
assumptions, lead to a number of significant conclusions. In central Au+Au collisions at
RHIC energies, a very high energy density medium is formed. Conservative estimates of
the energy density at the time of first thermalization yield a number in excess of
3 GeV/fm^3, and the actual density could be significantly larger. This is far greater than
hadronic densities and so it is inappropriate to describe such a medium in terms of simple
hadronic degrees of freedom. Unlike the weakly interacting QGP expected by a large part
of the community before RHIC turn-on, the constituents of the produced medium were
found to experience a significant level of interactions. If this medium is a new form of
QCD matter, as one would expect from lattice gauge calculations for such a high energy
density system, the transition to the new state does not appear to produce any signs of
discontinuities in any of the observables that have been studied. To the precision of the
measurements, all quantities evolve smoothly with energy, centrality, and rapidity.
Although it does not provide strong evidence against other possibilities, this feature of the
data is consistent with the results of recent lattice QCD calculations, which suggest that
the transition from this novel high energy density medium to a hadronic gas is a
crossover. An equally interesting result was the discovery that much of the data can be
expressed in terms of simple scaling behaviors. In particular, the data clearly demonstrate
that proportionality to the number of participating nucleons, Npart, is a key concept, which
describes much of the phenomenology. Further, the total particle yields per participant
from different systems are close to identical when compared at the same available
energy; the longitudinal velocity dependences of elliptic flow and particle yield are
energy independent over a very broad range, when effectively viewed in the rest frame of
one of the colliding nuclei (see, for example, Figure 15); and many characteristics of the
produced particles factorize to a surprising degree into separate dependences on centrality
and beam energy.
Figure 15. Examples of the observation of limiting fragmentation, or extended
longitudinal scaling. All data are for charged particles, plotted in the rest frame of one of
the colliding systems.
a) Pseudorapidity distributions in Au+Au collisions.
b) Comparison of pseudorapidity distributions in d+Au collisions measured by Phobos
with those in p+Pb collisions measured by our group in the 1970s.
c) Azimuthal asymmetry, as measured by the “elliptic flow” parameter v2, for the 40%
most central Au+Au collisions.
All of these observations point to the importance of the geometry of the initial state and
the very early evolution of the colliding system in determining many of the properties of
the final observables. Future data at RHIC, most especially collisions of lighter nuclei, as
well as higher energy nucleus-nucleus data from the LHC, will help to further evaluate
the range of validity of these scaling behaviors. It is possible that models which describe
the initial state in terms of parton saturation will play a role in explaining some or all of
these scaling properties, but such an identification is not yet clear. What is clear is that
these simple scaling features will constitute an integral component or essential test of
models which attempt to describe the heavy-ion collision data at ultrarelativistic
energies. These unifying features may, in fact, provide some of the most significant
inputs to aid the understanding of QCD matter in the region of the phase diagram where a
very high energy density medium is created.
The physics program described in the PHOBOS White Paper was continued and
expanded in the following years. One of the primary goals of the physics program was a
continuation of the systematic study of global features of particle production, an area in
which PHOBOS has excelled. In this area, effort concentrated on the dependence of
observables on energy and system size, using Au+Au data at √sNN = 62.4 GeV and
Cu+Cu data at √sNN = 22.4, 62.4, and 200 GeV. The previously noted scaling behaviors
were all confirmed, and the discovery of complete factorization of the energy and centrality
dependences for a number of observables was both confirmed and expanded. One
particularly striking result of the spectra work has been the discovery that the
factorization of energy and centrality dependences observed in the charged particle
multiplicity is also found to hold for the shape of the transverse momentum spectra, as
shown in Figure 16.
Figure 16) Two versions of nuclear modification factors for charged particle
transverse momentum spectra emitted in Au+Au collisions at two beam energies. In
the top row, spectra from Au+Au interactions at the indicated centrality were divided
by spectra from p+p multiplied by the number of participants in the Au+Au
collisions. In the bottom row, the spectra were divided by those found for the most
central collisions, again normalized by the number of participants. The dashed line
indicates the expectation from scaling with the number of collisions at the lower
energy. Note that the evolution of the shape with centrality, shown in the bottom
row, is independent of energy.
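The construction described in the caption can be sketched as follows. This is a toy calculation with purely hypothetical binned yields, assuming the conventional per-participant-pair normalization (Npart/2):

```python
def r_aa(yield_aa, yield_pp, npart):
    """Nuclear modification factor, bin by bin in pT: the A+A yield
    divided by the p+p yield scaled by the number of participant pairs."""
    return [a / (0.5 * npart * p) for a, p in zip(yield_aa, yield_pp)]

# Hypothetical binned dN/dpT values (arbitrary units, rising pT bins):
# suppression at high pT appears as the ratio falling below unity.
aa = [200.0, 40.0, 6.0, 0.8]
pp = [1.2, 0.25, 0.05, 0.01]
print(r_aa(aa, pp, 300))  # values drift downward across the pT bins
```

The same machinery, with the denominator replaced by the central-collision spectrum, gives the bottom-row ratios of Figure 16.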
Another important result from PHOBOS was the discovery of extended longitudinal
scaling in the elliptic flow of charged particles for Au+Au interactions at √sNN = 200
GeV (see Figure 15c). This addition to the previously discovered scaling behaviors for
particle multiplicities and transverse momentum spectra reinforced the emerging view
that the initial geometry of the system and the very early distribution of the particles in
longitudinal momentum determine many of the properties of the final particle
observables.
An important extension of the physics topics covered by PHOBOS was the first analysis
of the spectra of identified particles, using data from Au+Au collisions at √sNN = 62.4
GeV. These first identified-particle spectra at this intermediate energy, which bridges the
gap between the SPS data and previous RHIC results, continued a long series of
PHOBOS publication “firsts” at RHIC. Combined with low transverse momentum data,
spectra spanning almost two orders of magnitude in pT (see Figure 14b) were found to be
well characterized by a blast wave parameterization with no evidence for any large
enhancement at low pT, a measurement that could only be performed by PHOBOS. At
large pT, invariant yields of pions and protons become comparable but the pT at which
this occurs was found to fit into a smooth trend as a function of center-of-mass energy as
shown in Figure 17a. In addition, the surprising result was found that the net baryon yield at
midrapidity is closely proportional to the number of participant nucleons in the collision.
Figure 17. a) The transverse momentum at which the yields of pions and protons are equal
in central heavy-ion collisions, plotted as a function of the nucleon-nucleon center-of-mass
energy. The new PHOBOS data point (square) confirms the smooth dependence of this
crossing point on energy from the AGS up to the full RHIC energy. b)
Elliptic flow, normalized by the participant eccentricity, plotted versus the normalized
charged-particle density.
PHOBOS studies of elliptic flow recently yielded two enormously important
contributions to our understanding of RHIC collisions.
A conventional analysis of the results from Cu+Cu led to the conclusion that, for comparable
initial conditions (particle densities and shape of the interaction region), Cu+Cu collisions
produce significantly more flow than Au+Au. Detailed studies by PHOBOS suggest that
this bizarre result can be explained by taking into account event-by-event fluctuations in
the locations where nucleons collide. Figure 17b shows a comparison of many colliding
systems incorporating the effects of these fluctuations through the use of the so-called
participant eccentricity (ε_part). This work was later expanded even further through studies
of flow, event-by-event. Here it should be mentioned that it is the flow studies of
Phobos and the other RHIC experiments that have led to the conclusion that the medium
created at RHIC appears to behave like a liquid with extremely low viscosity, a surprising
result indeed.
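The participant eccentricity is computed event by event from the second moments of the participant positions, ε_part = sqrt((σy² − σx²)² + 4σxy²)/(σx² + σy²). The toy below (a hypothetical Gaussian source with a fixed number of "participants", not a Glauber calculation) illustrates the key point: even for a source that is round on average, ε_part stays sizable because of fluctuations:

```python
import math
import random

def eccentricities(xs, ys):
    """Standard and participant eccentricities of a set of participant
    positions, computed about their center of mass."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    e_std = (syy - sxx) / (syy + sxx)
    e_part = math.sqrt((syy - sxx) ** 2 + 4.0 * sxy ** 2) / (syy + sxx)
    return e_std, e_part

# Toy source: 30 "participants" per event drawn from a symmetric Gaussian,
# so the *average* shape is round, yet e_part is sizable event by event.
random.seed(1)
mean_epart = sum(
    eccentricities([random.gauss(0, 3) for _ in range(30)],
                   [random.gauss(0, 3) for _ in range(30)])[1]
    for _ in range(500)) / 500
print(mean_epart > 0.05)  # True: fluctuations alone generate eccentricity
```

Because ε_part is always at least as large as the magnitude of the standard eccentricity, normalizing the measured v2 by it removes much of the apparent Cu+Cu versus Au+Au discrepancy.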
3. Compact Muon Solenoid Collaboration
3.1. Introduction
Since March 1994, members of the MIT CDF group have also been members of the
Compact Muon Solenoid (CMS) collaboration. CMS, one of the two general-purpose
experiments that will operate at the Large Hadron Collider (LHC) at CERN, presents a
natural extension of the CDF physics program.
The LHC machine is designed to collide protons at a center-of-mass energy of
√s = 14 TeV every 25 ns, with an instantaneous luminosity of 10^34 cm^-2 s^-1. First
collisions during an engineering run are expected in 2007, starting at a lower luminosity
and at √s = 0.9 TeV. The first physics run at √s = 14 TeV will start in the spring of
2008. The LHC will address, among a plethora of physics subjects, one of the key
questions in particle physics: the origin of symmetry breaking in the
electroweak sector of the Standard Model.
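These machine parameters fix the pile-up conditions the experiments must handle. As a back-of-the-envelope check, assuming an inelastic pp cross section of about 80 mb (a value not quoted in this report):

```python
# Mean number of inelastic pp interactions per bunch crossing:
#   mu = L * sigma_inel / f_crossing
L = 1.0e34                 # cm^-2 s^-1, design luminosity
sigma_inel = 80.0e-27      # cm^2 (~80 mb inelastic cross section, assumed)
f_crossing = 1.0 / 25e-9   # one crossing every 25 ns -> 40 MHz
mu = L * sigma_inel / f_crossing
print(round(mu))  # 20 interactions per crossing on average
```

This simple estimate (which ignores the fact that not every 25 ns bucket is filled) reproduces the roughly 20 overlapping collisions per crossing that the detector design must cope with at full luminosity.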
In addition, the CMS apparatus will be used to study collisions of heavy ions. Each
year there will be a one-month run, initially with Pb+Pb collisions at an energy of 5.5 TeV
per nucleon pair and later with p+Pb or with lighter ions. The high-energy nuclear
collisions will allow studies of QCD in hot and dense matter at energies far exceeding
those of the RHIC regime. New hard probes such as the Upsilon, fully formed jets and the Z will
become available as tools to study the nuclear medium. The effects of very low-x partons, whose density is
expected to saturate in nuclei, will dominate the production of particles.
The activities of the PPC group in CMS include the development, implementation and
commissioning of the DAQ system, installation and commissioning of the all-silicon
CMS Tracker, and development of reconstruction and physics analysis tools and
techniques. The members of the PPC group, together with the MIT heavy-ion group, will
work on the preparation of the High-Level Trigger algorithms and on the commissioning
of the online farms. We are pooling our expertise and resources, working together on the
development and construction of the Tier-2 center at MIT.
3.2. CMS Physics Program
3.2.1. Physics with pp collisions
Higgs Boson Physics
The main interest of the PPC group at CMS lies in probing the last experimentally
unverified cornerstone of the conventional Standard Model: the nature of
electroweak symmetry breaking via the Higgs mechanism. Independent of the unitarity
condition, which requires the Higgs mass to be well below 1 TeV/c^2, in the Standard
Model framework the electroweak precision data predict that the Higgs is lighter than
about 300 GeV/c^2, while the direct searches at LEP exclude Higgs masses lower than
about 115 GeV/c^2. For this reason the PPC group strategy is to pursue a luminosity-driven
search for the Higgs boson, starting with the channels most readily observable at the
expected initial luminosities and expanding the focus as the growing luminosity allows
for observation in other channels. Beyond discovering the Higgs and measuring
its mass, the group is working to determine the spin and the
couplings of the Higgs, in order to establish its signature characteristics: a scalar
which couples in proportion to mass.
Search for the Higgs Boson
The PPC strategy for the Higgs boson search is fairly straightforward: first
understand the CMS detector from a physics standpoint by calibrating with real data on
Standard Model processes, then look for the Higgs boson within a Higgs mass window
that grows as the increasing luminosity opens up sensitivity to a broader spectrum of
Higgs masses. The first channels we have chosen to work on are the Higgs boson decay to a pair
of W bosons, which has the best sensitivity in the Higgs mass range 150-180 GeV/c^2,
requiring only a few inverse femtobarns for a 5σ discovery, and the Higgs boson decay to
a pair of τ leptons, which has good discovery potential below the diboson thresholds and
in addition provides insight into the Higgs coupling to fermions. To this list we expect to
add the decay to Z pairs and the two-photon decay as more manpower becomes available to
develop these analyses.
MIT is steadily increasing its involvement in the CMS Higgs analysis group,
and has taken on responsibilities in the group's goal of demonstrating the physics capabilities
of CMS in the two channels mentioned above before the end of 2007. For example, the
tools required to combine the results from several different decay channels were
introduced to the Higgs group by PPC members, accepted as the default by the group,
and are being implemented and supported by Guillelmo Gomez-Ceballos and Markus
Klute. Guillelmo is in addition contributing to the generation of simulation samples for
Higgs analysis, in conjunction with his roles in CMS data operations. In general, as the
PPC group completes its transition from the Tevatron to CMS and establishes our roles in
detector and data operations, we expect the PPC contribution to CMS physics analysis, in
particular the Higgs program, to continue to grow with the additional involvement of
more students, researchers, and faculty. The sections that follow are examples of the
steps taken within the last year.
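A minimal sketch of such a multi-channel combination, assuming simple counting channels tied together by a common signal-strength parameter, is shown below. The channel inputs are purely hypothetical, and the actual CMS combination machinery is far more elaborate (systematics, shapes, nuisance parameters):

```python
import math

def nll(mu, channels):
    """-2 ln L for a common signal-strength mu across independent counting
    channels, each given as (expected signal, expected background, observed
    count). Mu-independent constants are dropped."""
    total = 0.0
    for s, b, n in channels:
        lam = mu * s + b
        total += 2.0 * (lam - n * math.log(lam))
    return total

# Hypothetical channel inputs (e.g. a WW-like and a tautau-like channel):
channels = [(10.0, 5.0, 17), (4.0, 2.0, 5)]
grid = [i * 0.01 for i in range(1, 300)]
best = min(grid, key=lambda m: nll(m, channels))
print(round(best, 1))  # combined best-fit signal strength, ~1.1
```

The key design point is that the likelihoods multiply: each channel constrains the same signal strength, so the combined minimum is narrower than either channel's alone.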
Calibration on Standard Model processes
The key to understanding any of the Higgs channels is a firm understanding of the
detector in terms of its ability to trigger on the interesting physics, distinguish signal from
background, and measure the relevant quantities with sufficient resolution to allow
definitive conclusions. While much can be done with simulation as a guide, it is
typically the case that the simulation is too optimistic, and the final determination of
the physics potential must be made with actual data. The production of W and Z bosons,
with cross sections of 160 and 50 nb respectively, provides decay products with the
appropriate kinematics to act as “standard candles” for studies of trigger efficiencies,
reconstruction performance, and selection algorithms.
To study the known Standard Model bosons, undergraduate student Phil Ilten and
graduate student Phil Harris used simulation to develop a measurement of the cross section
for the process pp → Z + X → μμ + X, building the machinery for understanding the behavior
of the detector as well as familiarizing themselves with the rapidly changing software
package used to perform analysis within CMS. The tracking performance in the lower
transverse momentum region is under study by student Matt Rudolph in a similar
analysis using J/ψ → μμ decays. These studies include exploration of track efficiencies
and track reconstruction algorithms, understanding of the acceptance, and development of
measures of the track parameter resolutions. The efficiencies and resolutions continue to
improve with the evolving software, so no definitive statements are currently available
on what the “final” resolutions will be, but the machinery is ready to make such
determinations. Figure 18 shows comparisons between the generated and reconstructed
invariant mass of the Z boson for the center-of-mass energy of 900 GeV expected during the
engineering run of 2007 and the ultimate value of 14 TeV.
Figure 18. Comparison of invariant mass distributions for generated and reconstructed
pp → Z + X → μμ + X events, for center-of-mass energies of 0.9 and 14 TeV.
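The reconstruction behind Figure 18 rests on the dilepton invariant mass; a minimal sketch, using hypothetical muon kinematics in GeV chosen to land near the Z mass:

```python
import math

def four_vector(pt, eta, phi, mass):
    """(E, px, py, pz) from collider kinematics (pt, eta, phi, mass)."""
    px, py = pt * math.cos(phi), pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = math.sqrt(px * px + py * py + pz * pz + mass * mass)
    return e, px, py, pz

def invariant_mass(particles):
    """Invariant mass of the summed four-vector."""
    e, px, py, pz = (sum(c) for c in zip(*particles))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

# Hypothetical back-to-back muon pair; the pair mass lands near m_Z.
mu1 = four_vector(43.6, 0.3, 0.1, 0.1057)
mu2 = four_vector(43.6, -0.3, 0.1 + math.pi, 0.1057)
print(round(invariant_mass([mu1, mu2]), 1))  # 91.2
```

In the real analysis the same quantity is formed from reconstructed muon candidates, and the width of the resulting peak, compared with the generated one, measures the momentum resolution of the tracker.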
Similar studies along these lines are starting, including topics for undergraduate students.
One example is the use of the copiously produced Z and W bosons to determine the
trigger efficiencies and to optimize reconstruction algorithms for final states involving τ
leptons. Not only will this provide the necessary “engineering numbers” required for any
analysis using τ leptons, but it will also produce an evaluation of the background from
Standard Model processes for Higgs decays with τ leptons in the final state. Another
project along these lines is the measurement of the cross section for associated WZ
production using leptons. This process, with an expected cross section of roughly 50 pb,
is also a source of three-lepton final states and missing energy, common signatures for
Higgs and SUSY searches. In addition to being an interesting number in its own right,
this channel provides an ideal source for the study of electron and muon trigger and
reconstruction efficiencies, the determination of WZ selection algorithms, and a handle on the
background from these sources for other searches. These are just two examples of the
strategy the MIT group is following to completely characterize the performance of
the experiment before launching into the discovery physics stage of CMS.
H → WW
The Higgs decay to a pair of W bosons, with both Ws decaying to an electron or muon
and the associated neutrino, is a very promising channel in the mass region between 160 and
185 GeV/c^2, and as such is the top physics priority of the CMS PPC group. Guillelmo
Gomez-Ceballos is leading the effort in this channel, focusing on establishing a clear
signal-to-background separation, which is complicated by the lack of a mass peak due to
the presence of the neutrinos. The signature is characterized by two highly energetic
leptons, missing energy from the undetected neutrinos, and low central jet
activity. There are two important backgrounds which give quite similar signatures:
W-pair and top-pair production. Other kinematic properties can be used to reduce these
backgrounds, for example the opening angle between the two leptons, which is expected to
be smaller for the signal than for the backgrounds. The top-pair contribution is also heavily
reduced by requiring small jet activity in the central region of the detector. Guillelmo is
working on optimizing the current CMS selections while simultaneously developing a
more general selection, which could give a higher efficiency over a larger Higgs mass
range in comparison with previous studies, while keeping the background contribution at
similar levels.
H → ττ
The other channel we are studying is the ττ decay channel, with the Higgs produced by
the “vector boson fusion” process pp → qqH, followed by H → ττ with at least one τ
decaying leptonically. This process is particularly interesting as it produces two additional
jets in the very forward region, which distinguishes it from background, but it requires
very good τ identification and forward-jet tagging as well as the ability to veto jet activity
resulting from event pile-up, which can spoil the otherwise clean environment in the
central rapidity regions. Markus Klute has taken on the tasks of optimizing the
suppression of fake τ jets from quarks and of revisiting the selection algorithms for
the H → ττ analysis, to be completed by the end of this year. An additional advantage of
working with τ final states is that they are particularly
sensitive to physics from extensions of the Standard Model, so understanding
them in the Higgs context immediately allows one to explore other possibilities that may
well arise at the LHC.
Establishing the properties of the Higgs Boson
In the scenario where a “Higgs-like” excess is discovered in the early LHC years and the
mass of this resonance is determined, the next natural question is: does this
resonance fit the description of the Standard Model Higgs boson? While the Standard
Model does not predict the mass, the spin and the couplings are dictated by the
requirements of electroweak symmetry breaking: the Standard Model Higgs is a spin-0
boson which couples with strength proportional to the mass of its decay products. To
establish the couplings quantitatively, the ratio of branching ratios of two
different decay channels must be measured, which is a natural consequence of
performing the Higgs search in multiple channels as spelled out above. For the spin
determination, one can use an angular analysis to extract information on the spin of the
resonance, in a manner similar to the angular analysis of Bs meson decays. For example,
Figure 19 shows the dependence of the differential cross section on φ, the relative angle between
the two decay planes of the Higgs decay products, for the decays H → ττ,
H → ZZ → 4ℓ, and H → WW → ℓνℓν, contrasting a pseudoscalar Higgs with a scalar Higgs.
However, Figure 19 is purely theoretical; these distributions need to be studied in an
experimental setting to understand how well the effects can be measured with the CMS
detector.
Figure 19: Differential cross sections versus the daughter decay-plane angle for scalar (H)
and pseudoscalar (A) Higgs decays to tau pairs, with the taus decaying to pions, and to
vector boson pairs decaying to leptons. From A. Djouadi, hep-ph/0503172v2.
3.2.2. Heavy-ion Physics at CMS
The Particle Physics Collaboration, together with the Relativistic Heavy Ion
Group, intends to pursue the physics of strong interactions using heavy-ion collisions.
We are presently leading the program to use the CMS apparatus to study heavy-ion
collisions at the LHC.
The CMS physics program is expected to start with a low-energy p+p commissioning run
in late 2007. First p+p collisions at the full design energy of 14 TeV are expected in 2008,
with plans for a short Pb+Pb commissioning run at √sNN = 5.5 TeV in late 2008. Future
Pb+Pb runs are expected yearly, for one month at the end of each p+p running period.
Data collected by the four experiments at the Relativistic Heavy Ion Collider (RHIC)
suggest that in heavy-ion collisions at √sNN = 200 GeV an equilibrated non-hadronic
system is formed. There is strong evidence that this dense medium is highly interactive,
perhaps best described as a quark-gluon liquid, and is almost opaque to fast partons. In
addition, many surprisingly simple empirical relationships describing the global
characteristics of particle production have been found. The LHC will collide lead ions at
√sNN = 5.5 TeV, the biggest step in collision energy in the history of
relativistic heavy-ion physics. The resulting energy densities of the thermalized matter are
predicted to be more than an order of magnitude higher than at RHIC, implying a
doubling of the initial temperature. The higher densities of the produced partons result in
more rapid thermalization and a large increase in the time spent in the quark-gluon
plasma phase. Studies of matter under these conditions will either confirm and extend the
theoretical picture emerging from RHIC or challenge and redirect our understanding of
strongly interacting matter at extreme densities. Progress at the LHC will come not only
from the increased initial energy density, but also through a greatly expanded mass and
pT range of hard probes. An extrapolation to LHC energies suggests that the heavy-ion
program has significant potential for major discoveries.
The discoveries at RHIC have not only transformed our picture of nuclear matter at
extreme densities, but have also shifted the emphasis in the observables best suited for
extracting the properties of the initial high-density QCD system. Examples of these
observables include differential studies of elliptic flow, very high-pT jets, and open and
hidden heavy flavors. The importance of hard probes implies the need for detectors with
large acceptance, high rate capability and high resolution, leading to a convergence of
experimental techniques between heavy ion and particle physics. Using CMS for heavy
ion collisions takes this development to its logical conclusion, leveraging the extensive
resources that have already gone into the development and construction of the apparatus.
CMS as a heavy-ion detector in particular excels in the following areas:
Rate capability: The CMS DAQ and trigger system is designed to handle p+p
collision rates of up to 40 MHz. At Pb+Pb design luminosity, the available
bandwidth and CPU power will allow a detailed inspection of every heavy-ion
event in the High Level Trigger (HLT) farm. This makes a nearly complete
selection and archiving of events containing rare probes, such as extremely high-pT
jets or high-mass dileptons, possible.
High resolution and granularity: At the full p+p luminosity there will be, on
average, 20 collisions per bunch crossing. To disentangle very high momentum
observables in this environment, the resolution and granularity of all detector
components have been pushed to the extreme. Our group has developed suitable
tracking algorithms that exploit the capabilities of the CMS silicon tracking
system in heavy-ion collisions.
Large acceptance tracking and calorimetry: CMS includes high-resolution
tracking and calorimetry over 2π in azimuth and a uniquely large range in
rapidity. The Zero Degree Calorimeters (|η| >8.0) and the CASTOR detector
(5.2<|η|<6.6) will allow measurements of low-x phenomena and particle and
energy flow at very forward rapidities.
CMS requires only minimal hardware modifications to function as a heavy ion detector.
The CMS heavy ion group plans to install additional small calorimeters in the forward
region. The addition of heavy-ion motivated CASTOR and the Zero-Degree-Calorimeter
(ZDC) will widen the detector coverage and allow access to important forward
physics in both p+p and AA collisions.
High density QCD using Heavy-Ion Collisions in CMS
The reconfiguration of the experiment to function during heavy-ion collisions requires
significant development of the online and offline software, in particular the
software running on the High-Level-Trigger computer farms. The recent work done by
the MIT group validates the suitability of the CMS DAQ system for heavy-ion physics.
The unique CMS trigger architecture employs only two trigger levels: the Level-1
trigger is implemented using custom electronics and inspects events at the full bunch-crossing rate. All further online selection is performed in the High-Level Trigger (HLT)
using a large cluster of commodity workstations (the “filter farm”) running “offline”
reconstruction algorithms on fully assembled event information. The trigger system was
designed to deal with rates in CMS p+p running of up to 40 MHz. At LHC Pb+Pb design
luminosity, the initial true collision rate at the beginning of a store is expected to be close
to 8 kHz. Assuming collisions at three detectors, the collision rate will drop sharply over
time, with an average of about 3 kHz over the lifetime of a store. The HLT input
bandwidth, as determined by the p+p requirements, is sufficient to send all Pb+Pb events
to the HLT, even at maximum collision rate.
This suggests a trigger strategy in Pb+Pb running that can be summarized as follows:
Every Pb+Pb collision identified by the Level-1 trigger will be sent to the HLT filter
farm. At the HLT, the full event information will be available for each event. All
rejection of Pb+Pb collisions will be based on the outcome of HLT trigger algorithms
that are identical to the corresponding offline algorithms or optimized versions of the
offline algorithms. Therefore, algorithms like the offline jet finder will be run on each
Pb+Pb event in the CMS interaction region, optimizing the CMS physics reach. This
strategy relies on the fact that the HLT in its final configuration will provide sufficient
input bandwidth to accept all collision events and sufficient computing power to run full
offline algorithms on all events. With an expected rate to tape of 10-100 Hz, depending
on the produced multiplicity and trigger conditions, a rejection rate of 97% to 99.7% in
the HLT is required.
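The quoted rejection range follows from simple arithmetic; the short sketch below (illustrative only, with rates taken from the text) reproduces it.

```python
# Required HLT rejection, using the rates quoted in the text: an average
# Pb+Pb collision rate of ~3 kHz into the HLT over the lifetime of a store,
# and a rate to tape of 10-100 Hz depending on multiplicity and trigger.

def required_rejection(input_rate_hz: float, tape_rate_hz: float) -> float:
    """Fraction of events the HLT must reject to reach the target tape rate."""
    return 1.0 - tape_rate_hz / input_rate_hz

avg_input_rate = 3000.0  # Hz
for tape_rate in (100.0, 10.0):
    rejection = required_rejection(avg_input_rate, tape_rate)
    print(f"{tape_rate:5.0f} Hz to tape -> {rejection:.1%} rejection")
```

For the 3 kHz average input rate this gives roughly 96.7% to 99.7% rejection, matching the range stated above.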
To quantify this strategy, the members of the MIT group have performed detailed
simulations of the HLT performance for studies of dimuons and jets in two luminosity
scenarios. We measured the timing of three key algorithms: the jet finding algorithm, the
stand-alone muon finder using muon chamber information and the full muon finder
including the silicon tracker information. The timing studies indicate that the jet finding
and stand-alone muon finding algorithms in their present form use up to 25% of the HLT
CPU budget, depending on luminosity, while the full muon finder will require some
optimization to fit into the available time budget. The expected cross sections for the
physics processes as well as the tape writing rates for minimum bias and triggered events
are shown in Figure 20.
Figure 20: Left: production cross-sections per nucleon-nucleon collision for various
processes considered in the trigger simulations. Right: the corresponding rates to
tape for jet and dimuon channels, comparing min-bias running (dashed lines) to a
triggered event sample (solid lines).
3.2.3. Tracking of charged particles in heavy-ion events
The high acceptance, high granularity CMS calorimeters are used to reconstruct jet
energy and jet direction in the presence of background from low p T particles present in
heavy ion collisions. However, the inclusion of information about individual particles in
the jet allows for more detailed and conclusive studies of hadron suppression phenomena.
Before the arrival of the MIT group in CMS, the tracker was expected to be used in an
important but limited way, only to improve the resolution of muon momentum measurements.
The studies of tracker capabilities conducted by Christof Roland have proven that the
silicon tracker and the pixel detector can be used in the heavy-ion environment in a much
expanded role. The track reconstruction software was adjusted and tuned to function in a
high-multiplicity environment. The track finding parameters were optimized to achieve
reasonable efficiency while maintaining a low level of misidentified tracks or “fakes”. The
track finding algorithms are now being extended by our Hungarian colleagues to include
detailed knowledge about the CMS pixel detector. The inclusion of pixels will make it
possible to reconstruct tracks down to pT as low as 100 MeV/c in p+p and 300-400
MeV/c in heavy ion events. The combined tracking algorithm will be used for both p+p
and heavy ion collision studies. Our work has shown that CMS is likely to have the best
tracker for heavy ion collisions at the LHC.
Figure 21: Left: track finding efficiency in the CMS silicon tracker in high-multiplicity
heavy-ion events with dNch/dη ~ 3500. Right: the momentum resolution for
charged tracks reconstructed in these events.
3.2.4. Collaboration between High Energy and Heavy-Ion Physicists
The idea of using the same experimental apparatus for traditionally distinct branches of
Particle and Nuclear Physics brings tremendous advantages to the CMS experiment as a
whole, to the US part of the CMS collaboration and to the Particle Physics Collaboration
in particular. The interpretation of heavy-ion collisions relies on a good understanding of
the physics of minimum bias p+p collisions; this opens a joint area of physics interest. The
detector elements and the data analysis software are very similar and can be
used interchangeably. The physicists working on different aspects of the physics program
can share service and detector development responsibilities. The joint computing
initiatives, the sharing of computing infrastructure all bring unprecedented efficiency and
spur creativity. The movement of MIT physicists from Phobos to CMS will bring the
needed transfer of expertise from RHIC to LHC.
Within CMS, the heavy-ion group is included as one of several physics groups. The
heavy-ion physicists will not only share the detector and online event selection hardware
with the high-energy physics groups, but also the offline computing model, software
infrastructure, databases and a multitude of other resources. It is therefore clear that both
the PPC and the heavy-ion group will benefit from an active collaboration in their work
on CMS. A first example of this collaboration is the CMS HEP Tier-2 center under
construction at MIT. The physicists working on the parallel effort in heavy-ion physics
are already contributing their expertise to the running of the center and also profit from
its existence in particular by the availability of a well-maintained common software
framework. The recent supplemental addition to the heavy-ion group’s funds, together with
additional funding from the MIT Physics Department, is being used to purchase
additional compute nodes. By coordinating our presence at CERN and our CMS
activities, such as the hardware contributions in the DAQ and online event selection
areas, we expect to maximize our contribution to the experiment and our visibility within
the collaboration.
In addition to working on the joint hardware and software projects, the members of the
heavy-ion group will be active in the studies of physics in p+p collisions. The motivation
for this is two-fold: the experience at RHIC has clearly shown the importance of p+p and
p+A reference data in the interpretation of the results from heavy-ion collisions. Equally
important, participating in the first p+p measurements will ensure the readiness of our group
for the first Pb+Pb run later in 2008, in terms of data handling, software preparation and
integration into the CMS analysis and detector groups. We are planning a two-pronged
approach: The first set of reference data will be obtained from the initial low-luminosity
p+p running at LHC startup at the full p+p energy. This early data set will be particularly
important for the heavy-ion physics, as the lower collision rate will allow a cleaner
determination of the properties of minimum bias p+p data than will be possible for the
subsequent high-luminosity p+p runs, due to event pile-up. Reference information for
heavy-ion data will be extracted by interpolation of the full energy data with lower
energy p+p(bar). As we will be far from any relevant thresholds, such an interpolation
should provide a good starting point for the interpretation of Pb+Pb data. An equally
important consideration dictating the full participation of the heavy-ion physicists in the
first data taking is the need to get fully involved in testing, debugging, and optimizing the
detector hardware and software configuration. Some members of the MIT heavy-ion
group have also taken responsibility for implementing HLT algorithms for the p+p pilot
run. Making these preparations well in advance is essential for leading the corresponding effort
for the first Pb+Pb running. For the later dedicated p+p and p+A running at the Pb+Pb
energy, we will be able to react to concrete needs for improved precision in the reference
data, as they emerge from the analysis of the first Pb+Pb data.
As stated above, our physics interests in the first minimum bias p+p runs in 2007 and
2008 are guided by our experience in Phobos and present work in CMS and are also
directly related to our plans for Pb+Pb analyses. They include multiplicity distributions
over a large range of rapidity, inclusive charged hadron pT spectra and jet cross-sections
over a large range in pT and measurements of two-particle correlations in rapidity and
azimuth. Our discussions with the QCD and B-physics groups in CMS suggest that our
contribution will greatly enhance the available manpower for this physics. Finally it is
important to note that these measurements will not only serve as reference for Pb+Pb
running, they will also potentially lead to important discoveries in their own right,
following the ideas on parton saturation developed in the context of HERA and RHIC
data.
The MIT heavy ion group and the members of the PPC will work together on
the preparation of the High-Level Trigger algorithms and on the setup of the online farms.
We will also play a major role in the preparation of the reconstruction and data analysis
software. The activities of the MIT groups are part of the larger effort of several US
universities to participate in the CMS Heavy Ion program. Prof. Wyslouch is a
spokesperson of this group. The group is negotiating with the DOE Office of Nuclear
Physics the exact scope and level of its participation in the context of the overall US
program in heavy ion physics. In addition, Prof. Bolek Wyslouch is presently the
co-coordinator of the worldwide CMS Heavy Ion Program.
3.3. MIT activities on CMS
3.3.1. The CMS Trigger/DAQ system
The CMS Level-1 trigger system will reduce the 40 MHz proton-proton interaction rate to less
than 100 kHz. With event sizes expected to be of the order of 1 Mbyte, a readout capability of
100 Gbytes/sec is required. Figure 22 shows the basic structure of the CMS Data Acquisition
(DAQ) system.
Figure 22: Design of the CMS DAQ System. Left: eight segments, each carrying a 64x64 Readout
Builder, are fed data by the front-ends; there are eight segments in
the full, final system. Right: the other projection of the system, displaying a single
segment (i.e. a Readout Builder).
Data from the detector front ends are sent to 650 Front-End Drivers (FEDs) which are read out
by the Data to Surface (D2S) subsystem and then through a FED Builder network to the Readout
Units (RUs), where the data are assembled into super fragments and buffered in deep memories
(RUMs). The event data are then sent through a large switching fabric to a Builder Unit (BU)
that collects the data into a single buffer – this is the “event building” process. Once an event is
built inside a BU, it is sent to the next available Filter Unit (FU) for a processing decision on
whether to record the event on mass storage.
The Event Builder is a two-stage network made of a number of smaller switch fabrics. The
first stage is the FED Builder, which comprises 64 FED Builder switches, each with 8
inputs and 8 outputs. Each output is connected to a Readout Unit (RU), which buffers the data
from the Front End Driver (FED). This is followed by the Readout Builder, which comprises 8
segments of 64 x 64 port switches that read the data from the Readout Units and transfer it to
the Builder Units (BU). This DAQ system is shown in Figure 22.
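As a cross-check of the numbers above, a back-of-the-envelope sketch (a simplification that uses only the port counts, rate, and event size quoted in the text and ignores details such as link overheads):

```python
# Two-stage Event Builder bookkeeping, using only figures quoted in the text.

L1A_RATE_HZ = 100e3       # maximum Level-1 accept rate
EVENT_SIZE_BYTES = 1e6    # event size of order 1 Mbyte

# Stage 1: 64 FED Builder switches, each with 8 inputs and 8 outputs.
fed_builder_outputs = 64 * 8
# Stage 2: 8 Readout Builder segments of 64x64 port switches.
readout_builder_ports = 8 * 64

# Aggregate readout capability needed for event building.
throughput_gbytes_s = L1A_RATE_HZ * EVENT_SIZE_BYTES / 1e9

print(fed_builder_outputs, readout_builder_ports, throughput_gbytes_s)
```

The 512 first-stage outputs match the 512 RU-side ports of the second stage, and the product of rate and event size reproduces the 100 Gbytes/sec readout requirement stated above.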
MIT is responsible, together with the CERN group, for the D2S subsystem and the FED Builder
network. Sham Sumorok leads this effort for MIT. Each input link of the D2S system is
comprised of a Compact Mezzanine Card (CMC), a SLINK64 cable, a Front-End Readout Link
(FRL) card and a Myrinet Network Interface Card (NIC) for input to the FED Builder. The peak
data flow on this path is 400 MB/s and the sustained rate to the FED Builder is 200 MB/s. In
2005, long-term data transmission tests (“Stage 3”) were performed on all the link components
by Sham Sumorok and Roberta Arcidiacono. Roberta left the group to take up a position at
Torino University last September.
Figure 23: Left: some of the FRL crates; right: Myrinet switch crates deployed in
USC55. Data are read from the Front End Drivers on the white SLINK64 cables and subsequently
sent to the Myrinet switch on the orange optical patch cords.
Over the last year we deployed the D2S subsystem components in the underground service
cavern (USC55). Figure 23 (left) shows some of the 44 FRL crates and (right) the Myrinet
switch crates deployed. Data are read from the Front End Drivers via the SLINK64 cables and
subsequently sent via the optical patch cords to the Myrinet switch. The data are then
transmitted to a similar set of Myrinet switches deployed in the surface building SCX5 via 200 m
optical links and then via another set of optical patch cords to the Readout Unit PCs. All the
SLINK64 cables and FRL crates were tested in place for data transmission integrity. Jonatan
Piedra performed this test, called Stage 4 testing. A further MIT activity, associated with the
D2S readout testing, managed by Jonatan, has been a portable FRL crate Test Stand to test the
FED readout crates of the various subsystems.
MIT handled the purchasing of the Myrinet switch equipment for the full FED Builder D2S
system at a cost of over 2M$. Steve Pavlon, who was responsible for the procurement of the
Myrinet equipment, negotiated an additional 20% discount from the company resulting in a
considerable saving for the project. Last summer, one quarter of this equipment was assembled
in the final configuration and used as a test bed for the DAQ software developers. Subsequently
it was used in the CMS Magnet Test and Cosmic Challenge (MTCC). During the MTCC, the
CMS superconducting-solenoid magnet was cooled down and fully powered. Triggers on cosmic
rays passing through the magnet and various detector systems were then read out and recorded
using the system.
The 200 m optical links between downstairs (USC55) and upstairs (SCX5) and the optical patch
cords in SCX5 from the Myrinet switches to the RU PCs have now been installed. Plans for this
summer are to install and commission the 640 RU PCs and the subsequent RU builder switch.
The RU Builder switch, based on Gigabit Ethernet, is the responsibility of our US CMS
colleagues at San Diego, who have been assisting us with the FED Builder installation. The
full readout system is to be ready for August this year, in time for the first pp collisions at the
end of the year.
For commissioning all the equipment underground in USC55 we have also installed a small RU
builder called a Mini DAQ system, which will read out the FED builder crates to build full
events through the Myrinet switches underground. This system is installed in a rack beside the
Myrinet switches shown in Figure 12. Gerry Bauer is leading this effort on the Mini DAQ
system. It will also be possible to input data into the Myrinet switch from the Mini DAQ PCs
which will allow all optical links between the Myrinet switches in USC55 and SCX5 to be
verified.
3.3.2. Storage Manager
The MIT group has started to take a new major responsibility in the CMS data acquisition
project. As part of the high level trigger project the Storage Manager system is designed to store
raw data on a storage medium close to the experiment, give real-time access to a fraction of the
collected data for monitoring purposes, and collect non-event data. The design specifications for
the system are a total storage capacity of 250 TB and a throughput of 1 GB/s. The data stored
with the Storage Manager system will be transferred to the Tier-0 computing center for
processing. The communication protocol between the Storage Manager and the Tier-0 is the
interface and entry point for the data operations project.
A first test system was deployed by Christoph Paus, Markus Klute (MIT) and Emilio Meschi
(CERN) during a cosmic ray data taking period September 2006. An improved version of the
system provided a bandwidth of 40 MB/s in October 2006.
Markus Klute (MIT) is responsible for the hardware configuration, the purchase of the hardware,
and the development and deployment of the software. The total budget of the system is $250,000,
and the purchase is staged in 3 phases. A first system will provide 1 GB/s bandwidth and 22 TB of
storage capacity in Summer 2007. For the first high energy collisions in Summer 2008 the storage
capacity will be increased to 100 TB, and the final system will have a storage capacity of 250 TB
in 2009.
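A rough estimate of the buffer depth these staged capacities provide, assuming sustained writing at the 1 GB/s design throughput (an idealization; the real duty cycle will be lower):

```python
# How long each staged Storage Manager capacity can absorb data at the
# 1 GB/s design throughput, assuming idealized continuous writing.

DESIGN_THROUGHPUT_GB_PER_S = 1.0

stages = {"Summer 2007": 22, "Summer 2008": 100, "2009": 250}  # capacity in TB
for stage, capacity_tb in stages.items():
    fill_time_h = capacity_tb * 1000 / DESIGN_THROUGHPUT_GB_PER_S / 3600
    print(f"{stage}: {capacity_tb} TB fills in about {fill_time_h:.1f} hours")
```

Even the first 22 TB stage thus buffers several hours of data taking at full throughput, which is what gives the setup its flexibility for commissioning.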
The purchase of the hardware has been made, and the system will be available for the first
commissioning runs of the LHC and CMS. Thanks to the available bandwidth, the Storage
Manager setup offers great flexibility for the commissioning of the CMS detector, the Level 1 and
high level trigger systems, as well as the data acquisition system.
The software development effort for the Storage Manager system is coordinated by Harry Chang
(Fermilab) and the entire high level trigger program is organized by Emilio Meschi (CERN).
After the commissioning of the new hardware and the deployment of the first production version
of the software by Markus Klute (MIT), MIT's involvement in this project will be limited to
support and maintenance. A future PhD student will take responsibility for this project.
3.3.3. CMS Data Operations
The MIT group has started to take a new major responsibility in the CMS computing project.
Since the beginning of the year the computing project in CMS has been subdivided into four
sub-projects:
1. Computing Commissioning,
2. Facilities and Infrastructure Operations,
3. User Support,
4. Data Operations.
At the end of February, 2007 Christoph Paus was approved by the CMS Collaboration Board to
coordinate the Data Operations project. A search for a co-coordinator is underway.
It is the responsibility of the Data Operations project to receive the data from the online system,
specifically the Storage Manager, and perform all necessary operations so that any CMS user in
the world can access the data in any officially supported format and location to perform a
meaningful analysis. This also includes the full Monte Carlo production. After review and
discussions the project was subdivided into five separate tasks:
1. Host Laboratory Processing, coordinators: Christoph Paus (MIT, interim), TBD
2. Data Re-Processing, coordinators: Guillelmo Ceballos (MIT), TBD
3. Monte Carlo Processing, coordinators: P. Kreuzer (Aachen), A. Khomitch (Aachen)
4. Data Transfers and Integrity, coordinators: J. Rehn (CERN), TBD
5. Data Certification for Physics, coordinators: Markus Klute (MIT), TBD
From now until the data taking and production becomes reasonably stable, the Data Operations
project is going to be organized in a mostly centralized fashion where only two teams perform all
data operations tasks. It is the plan that one team is going to be working at CERN and an
equivalent team at Fermilab. Although it would be desirable to avoid a 24/7 data operations
model, it has to be planned for from the beginning. The two teams, working in time zones
separated by 7 hours, will ideally facilitate such an operational model.
permanently connected through video conferencing tools. Each site will assume operational
responsibility as it comes into work each morning. Having well defined responsibilities with all
members of the team at a given time in one room will substantially reduce the communication
delay introduced by a distributed team. Site-specific issues can ultimately only be cured by the
corresponding site personnel and a close connection to the sites is crucial.
It should be pointed out that as soon as the operations have settled down, it is likely that a large
fraction of the tasks can be outsourced to centers that want to take on the responsibility and
have the necessary resources.
The five tasks are briefly described as follows.
Host Laboratory Processing
The host laboratory is CERN where the CMS detector is located. The processing at the host
laboratory is performed on the so-called Tier-0 computing center, which is the first part of the
CMS offline computing system. The CERN IT division maintains the Tier-0, which is accessible to
CMS to perform the full set of steps needed to process the entirety of the data delivered by the CMS
detector. The data processing is complex and comprises a number of steps, which have
to be carefully orchestrated to ensure a continuous data taking process.
The data are received as three different streams from the detector. The express stream is
treated as quickly as possible to allow a fast turnaround on the most pressing physics and
monitoring issues. Then there is the bulk of the data, which will be split into all physics datasets,
and finally there is the calibration and alignment stream, which has a specialized compact format
optimized to perform a fast first alignment of the data before they get processed.
The processing has three main threads. First, the physics data streams (express and bulk) have to
be repacked before they can be processed, to ensure that all events from a given luminosity
section are contained in only one file. Secondly, the calibration and alignment stream has to be
processed to produce a fast alignment, which is necessary to process the corresponding physics
datasets. Thirdly, there is the processing of the physics data streams (express and bulk). In that
step the bulk data get split into the various predefined datasets. All production data are stored to
tape at CERN and then transferred to predefined Tier-1 sites, where another copy of the data is
stored to tape.
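The repacking step described above can be illustrated with a toy sketch (this is not CMS software; the event identifiers and file layout are invented for illustration):

```python
# Toy illustration of repacking: events arrive interleaved from the DAQ and
# are regrouped so that all events of a given luminosity section share one file.

from collections import defaultdict

# Hypothetical (luminosity_section, event_id) pairs in arrival order.
incoming = [(1, "evt-001"), (2, "evt-002"), (1, "evt-003"),
            (3, "evt-004"), (2, "evt-005")]

files = defaultdict(list)
for lumi_section, event_id in incoming:
    files[lumi_section].append(event_id)

for lumi_section in sorted(files):
    print(f"lumi section {lumi_section}: {files[lumi_section]}")
```

Grouping by luminosity section in this way is what guarantees that each section's events end up in exactly one output file.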
Data Re-Processing
The CMS computing model foresees that each dataset in the raw and processed data format is
copied from the Tier-0 (CERN) to exactly one Tier-1 center in the world. The exact distribution
depends on the resources available at each site and still has to be determined. In the beginning,
the detector calibration, alignment and software will have to be reviewed and constantly updated.
It will therefore be necessary to re-process the data at frequent intervals in a coherent and
effective fashion. This is the task of the re-processing.
Re-processing is going to be most effective at the Tier-1 centers where the raw
and reconstructed data are hosted, that is, kept on the mass storage system. In the
original processing as well as in the re-processing step, a compact high-level data format,
the so-called AOD, is created as well and needs to be written to tape and distributed
around the world.
Monte Carlo Processing
The production of Monte Carlo datasets is the last general large-scale processing task that
falls under the responsibility of the Data Operations project. Monte Carlo production, though
usually under less time pressure than the production of data itself, is a huge and complex task.
Given the vast range of physics topics addressed in the various physics groups, it involves a
huge number of different samples and normally requires significantly more computing power than
the data production. Apart from the large statistics required to avoid unnecessary statistical
uncertainties from the Monte Carlo samples in the analyses, the simulation takes about two times
more CPU than the reconstruction task.
The Monte Carlo production is foreseen to run primarily on the Tier-2 centers but can also run at
all other centers as resources become available. The Monte Carlo datasets, like the data, are stored
on mass storage systems at the Tier-1 centers, and thus the proper transfer mechanisms have to be
set up. Monte Carlo samples in general also have to be reprocessed once new software
versions with improved algorithms become available.
Data Transfers and Integrity
Due to the distributed nature of the CMS computing model, data transfers and the integrity of
those transfers assume an importance unmatched by any previous high energy physics
experiment. It is for this reason that a separate task has been assigned to this issue.
The following data transfers will have to be organized and monitored:
1. Tier-0
1.1. Transfer of raw and reconstructed data: RAW and RECO to the appropriate Tier-1
centers
1.2. Transfer of the high level compact data: AOD to all Tier-1 centers
All Tier-0 transfers are special because the Tier-0 is not the ultimate storage of the data,
although a write-only copy will be kept. Data can only be considered safe when they have
reached their hosting locations, which are the Tier-1 centers. Therefore these transfers can
almost be considered a part of the data acquisition system: if transfers to the
Tier-1 centers are interrupted at some point, all disk buffers at the Tier-0 will fill up and
no more data can be received.
2. Tier-1
2.1. Transfer of the original and re-processed AOD to all regional Tier-2 centers that
subscribe to them
2.2. Transfer of any RAW/RECO datasets, including the re-processed ones, to any Tier-2
center that subscribes to them
2.3. Transfer of any Monte Carlo SIM/DIGI/RECO/AOD datasets as requested by any
Tier-2 center
3. Tier-2
3.1. Transfer of all Monte Carlo simulation output to the regional Tier-1 centers
Apart from the pure transfer mechanics, it is essential that the integrity of the data is checked at
each step of the transfers without causing delays. While most of the data integrity tools are
widely used and available, a policy of what checks are necessary and which ones are really
feasible has to be developed and put in place.
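As a concrete illustration of such an integrity check, the sketch below compares a checksum computed at the source with one recomputed at the destination. This is one common approach, not necessarily the specific tooling that will be adopted for CMS transfers.

```python
# Verify a file transfer by comparing source and destination checksums.
# Illustrative only; the concrete checks to be deployed are still to be decided.

import hashlib

def file_checksum(path: str, algo: str = "md5", chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large files never reside fully in memory."""
    digest = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def transfer_ok(source_path: str, dest_path: str) -> bool:
    """Accept a transfer only if both copies hash to the same value."""
    return file_checksum(source_path) == file_checksum(dest_path)
```

Streaming the file in fixed-size chunks keeps the memory footprint constant, which matters for the multi-gigabyte files typical of these datasets.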
Data Certification for Physics
The last task in the Data Operations project is the certification of the data for physics. As the data
are published for usage, physicists need to know whether they can use the particular data for their
analyses.
The certification starts with the fact that, with each new software version and each change to the
database of alignment and calibration, the data output of the various processings has to be
validated. The largest part of this falls under the responsibility of the CMS Validation project,
but the initial production of test samples and the constant monitoring of the quality of the data
produced day by day have to be provided by the operations team. The distribution and
organization of good-run information also have to be provided by the data operations team.
3.3.4. The CMS Tracker Integration and Commissioning
CMS Tracker
The tracking system at CMS relies entirely on a silicon-based detector, as opposed to more
traditional approaches combining drift chamber technology and relying on silicon only for
vertexing. The result of this decision is the world’s largest silicon strip detector, with roughly
220 m2 of silicon wafers and 10 million channels, which read out at the nominal L1A rate of
50 kHz. MIT has joined the USCMS tracker subgroup and is playing a leadership role in making
the Tracker operational and prepared for physics quality data.
With the final construction winding down, we now face the task of connecting the subsystems,
testing interfaces between them, installing in the collision area, and understanding the detector as
a single scientific instrument. As the start up of LHC looms, there is constantly increasing
tension between the time crunch to be ready for first beams and the necessity of ensuring the
safety and maintaining the high quality of the detector “as built” all the way through installation
to operation. This tension is amplified in the context of the CMS tracker, for several reasons.
First and perhaps foremost is certainly the sheer size and complexity of this detector.
Second, because of the beam pipe, the tracker is necessarily on the critical path for LHC startup.
Finally, once installed, maintenance is not feasible, resulting in the requirement of absolute
confidence in functionality before installation. Drawing on their experience from the Tevatron,
the MIT tracker group, composed of Steve Nahn and Kristian Hahn together with students Phil
Harris, Matt Rudolph, Pieter Everaerts, and Kevin Sung, has worked with the USCMS tracker
group and the CMS tracker community in general to make the transition from construction to
successful operation of a fully qualified detector.
In the future, MIT plans to carry the work and experience gained in the installation and
commissioning into the operations of the CMS tracker. The plan from USCMS is to have
expertise in operations of various different aspects of tracker operations, including Data
Acquisition, Slow Control operation and monitoring, Data Quality Monitoring etc. MIT students
and researchers will participate in this effort, in particular in the area of Data Acquisition. In
addition, we are exploring avenues of incorporating a trigger capability into an upgraded CMS
tracker for SLHC.
Tracker Integration Facility
The CMS tracking community has made an investment to hedge against the installation crunch
by constructing the Tracking Integration Facility (TIF) at CERN, where the detector has been
assembled and can be operated in up to 25% segments for commissioning purposes. The overall
goal of this center is to deliver a quality-assured operational tracker to the experimental hall, to
avoid extra strain in the installation phase at the experiment. Over the last year, pieces of the
tracker have arrived at the TIF and in Spring of 2007 the installation of the last end-cap marked
the completion of the construction phase.
The flagship project for MIT at the TIF is the commissioning of the tracker at sustained high
trigger rates. The data acquisition at the TIF only goes as far as the Front End Driver (FED)
readout modules, which limits the data taking rates to a maximum of a few tens of hertz, several
orders of magnitude below the expected operating point. In order to allow the investigations of
noise versus various grounding schemes or other variations in operation at high rate, we have
added a small-scale version of a D2S system to the TIF. Such an extension to the DAQ allows
the data to flow at high rate through FRLs and a switch to a small computing farm, where it is
then prescaled and written to disk. In addition, this is the only system that requires
implementation of the trigger system and trigger throttling mechanism in order to run at high
rates without overflowing the front-end pipelines. Thus, this system provides essentially the
only opportunity to test a significant fraction of the tracker at a realistic operating point with the
complete set of final hardware before the installation, and therefore is a crucial milestone in the
commissioning process.
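The interplay described above, where triggers fill a front-end pipeline, throttling vetoes new triggers when the pipeline nears full, and the farm prescales before writing to disk, can be illustrated with a small simulation. This is a hedged, illustrative sketch only: the buffer depth, throttle threshold, and prescale factor below are invented for the example and are not the actual CMS tracker parameters.

```python
import random

# Illustrative parameters (assumptions for this sketch, not real CMS values)
PIPELINE_DEPTH = 192   # front-end pipeline capacity in events
THROTTLE_MARK = 160    # high-water mark at which new triggers are vetoed
PRESCALE = 100         # farm keeps 1 of every 100 accepted events

def run_daq(n_triggers, drain_prob=0.5, seed=1):
    """Simulate the high-rate chain: each step one trigger arrives; the
    readout drains one event from the pipeline with probability drain_prob.
    The throttle vetoes triggers when the pipeline nears full, so the
    front end never overflows; surviving events are prescaled to disk."""
    rng = random.Random(seed)
    fill = accepted = written = throttled = 0
    for _ in range(n_triggers):
        if fill >= THROTTLE_MARK:
            throttled += 1              # throttle asserted: trigger vetoed
        else:
            fill += 1
            accepted += 1
            if accepted % PRESCALE == 0:
                written += 1            # prescaled event written to disk
        if fill > 0 and rng.random() < drain_prob:
            fill -= 1                   # FED readout drains one event
        assert fill <= PIPELINE_DEPTH   # pipeline must never overflow
    return accepted, written, throttled
```

Running at a trigger rate faster than the drain rate, the pipeline fills until the throttle engages, after which the system settles into a steady state just below the high-water mark, which is precisely the regime the TIF extension was built to exercise.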
This high-rate system has been operated on small sections of the Tracker Outer Barrel (TOB) and
Tracker End Cap (TEC) detectors at the TIF, as well as on smaller silicon test pieces. Subjecting
the tracker to rates beyond a few tens of hertz has revealed some potential problems, which are
under investigation. On both TOB and TEC modules, large spikes in the occupancy, correlated
with high-ADC-count noise clusters, appear once the acquisition rate exceeds roughly 25 kHz.
Figure 24 demonstrates the effect: the occupancy in certain channels of the front-end ASICs
grows as a function of trigger rate. The effect is currently under study, with the hope of solving
the issue through improved grounding schemes and/or better calibration of the sparsification
thresholds, but the exact cure will not be known until the cause is better understood. It is worth
pointing out that without the enhancement to the TIF DAQ, this effect would not have been seen
until after installation.
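A comparison like the one in Figure 24 amounts to computing per-channel occupancies at two trigger rates and flagging channels whose occupancy grows anomalously. The sketch below is illustrative only: the ratio cut and noise floor are hypothetical analysis choices, not the actual thresholds used in the study.

```python
def occupancy(hit_counts, n_events):
    """Per-channel occupancy: fraction of triggered events with a hit."""
    return [h / n_events for h in hit_counts]

def flag_rate_dependent(occ_low, occ_high, factor=5.0, floor=1e-4):
    """Flag channels whose occupancy at the high trigger rate exceeds the
    low-rate occupancy by more than `factor`, ignoring channels below a
    noise floor at both rates.  Returns suspicious channel indices.
    Both `factor` and `floor` are assumed values for this sketch."""
    flagged = []
    for ch, (lo, hi) in enumerate(zip(occ_low, occ_high)):
        if hi > floor and hi > factor * max(lo, floor):
            flagged.append(ch)
    return flagged
```

Applied to the 128 channels of a readout chip at, say, 100 Hz and 25 kHz, such a comparison isolates exactly the channels whose occupancy spikes with rate, as seen in the figure.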
This high-rate project has several side benefits that may also prove valuable to the tracker
commissioning and operations effort. First, by installing the Magnet Test Tracker (which is
separate from the real Tracker) at the TIF, we can use that system as a “data source” for the
development of control operations and monitoring, especially during the months when the
Tracker itself is being installed. Since development will certainly continue during the
installation downtime, having a “tracker proxy” to validate functionality and improvements is
essential for efficient development. Second, and perhaps more important, once the Tracker is
installed it is extraordinarily difficult to carry out detailed diagnostic studies in situ when
problems arise. The “tracker proxy” can serve as a laboratory in which to understand and either
cure, or at least treat, the systematic problems that will almost certainly arise during post-installation commissioning and operations.
Figure 24: Comparison of hit occupancy for two 128-channel tracker readout chips. Left:
comparison between 100 Hz (red) and 25 kHz (blue), where the effect starts to appear. Right:
comparison between the same 100 Hz (red) and 100 kHz (blue), where the occupancy growth is
quite apparent (note the change in scale).
One outgrowth of the high-rate studies is the development of MIT expertise and participation in
the CMS tracker DAQ group. First, in constructing the high-rate test system and operating it on
various subdetectors, the MIT tracker group developed a fundamental understanding of the
intricacies of the DAQ, on par with only a handful of other true DAQ “gurus”. The CMS tracker
project manager regards this spread of detailed knowledge and competency as exemplary, and
critical to success in the initial phases of tracker operation at the experiment. In addition to
cultivating DAQ expertise, we enhanced the development of the DAQ software architecture.
During the initial phases of the testing, the control software was under strenuous development by
several authors simultaneously, resulting in considerable instability that plagued the execution of
the high-rate tests. As a solution, which coincidentally fulfilled a requirement for installation of
the DAQ at the experiment, Kristian Hahn reworked the data acquisition software architecture in
a Herculean effort, implementing a release-based system and unified compilation scheme
modeled on Linux software distribution systems. This greatly increased the operational stability
while continuing to allow the multi-author development required as experience is gained
operating large pieces of the tracker. The maintenance of this system and the production of new
software releases are now an MIT responsibility.
A second activity at the TIF has been the organization and coordination of “Tracker Operations”
during the first half of 2007. As the detector pieces have come together, there has been
considerable activity in many arenas: testing the various detectors as thoroughly as possible after
installation; developing services such as cooling, interlocks, data acquisition, the power supply
system, calibration, and remote monitoring procedures; and a concerted effort to collect
cosmic-ray data with the detector at both warm and cold operating temperatures. Of course, at
this stage interferences are almost unavoidable, and many tasks require the same resources to
perform their work. This problem is exacerbated by a prevailing attitude regarding ownership of
particular resources that was valid during the construction phase but is no longer appropriate. To
help facilitate this transition, Steve Nahn accepted the role of “Operations Coordinator” at the
TIF for the spring of 2007. This position entails being available 24/7 as the single point of
contact coordinating the daily activities at the TIF in accordance with general weekly priorities,
resolving resource conflicts, handling incidents such as rack trips or loss of
cooling, etc., and establishing standard operating procedures that allow the tracker community
beyond just the experts to operate the detector at the TIF. In performing these duties Nahn
benefits greatly from his experience as leader of the CDF Silicon Operations group,
understanding the interplay between the detector and services almost instinctively, as well as
from a clear absence of loyalty to any one subdetector project, which helps establish impartiality
when allocating resources to different efforts.
Beyond just attempting to steer the efforts, we are also identifying, and in some cases
developing, the tools required to operate and monitor the detector safely. For example, MIT took
responsibility for establishing and maintaining a website for tracker operations, including an
electronic logbook, documentation on the various subsystems, standard operating procedures,
on-call lists, and even a photographic library of the people working at the TIF, to help the
subdetector communities match faces to names and become a tracker community. In addition,
student Matt Rudolph is developing monitoring and display software to broadcast the TIF run
status in real time over the web (how many triggers have been taken, how many events collected,
the status of the event processing, etc.), so that one can monitor the runs from anywhere in the
world. A new project to import a problem-tracking software package used successfully at CDF
will become the responsibility of new students Everaerts and Sung. We expect to continue to
spawn small projects based on Tevatron experience in order to facilitate operations at the TIF,
which will eventually carry over to the real experiment.
As a corollary to the role of Operations Coordinator, Nahn has been especially attentive to the
inclusion of the US HEP members in TIF operations, both locally at CERN and through efforts
to establish remote operations at Fermilab. First, he serves as the contact person for the USCMS
tracker members, giving weekly updates on TIF operations to the USCMS monitoring group,
coordinating members coming to CERN for three week periods to take shifts at the TIF,
establishing direct contact with the Fermilab Remote Control Room during Cosmic Ray data
taking, and generally making sure that USCMS members can stay “in the loop”. In addition, he
has brought further Tevatron expertise to the LHC by promoting the use of the
Fermilab-constructed “Web Based Monitoring” software (http://cmsmon.cern.ch/cmsdb) for
providing run summaries of TIF data taking, monitoring the power supplies, and in general
connecting the USCMS developers with the users at the TIF. While not absolutely critical to the
operation of the tracker at the TIF, maximizing US involvement is critical to the strength of the
field in the US, and MIT’s presence at CERN enables us to serve as a gateway to physicists
resident in the US to contribute to the collaboration.
Tracker Installation at Point 5
In addition to the projects at the TIF, there is an enormous amount of work to do to install the
Tracker at the experiment site (“Point 5”). A considerable amount of the work for MIT consists
of providing sheer manpower in the form of students on cabling and detector checkout shifts.
Not only does this help provide resources where they are scarce, it is in our opinion a
fundamental part of the education of a student to take part in the actual assembly of the detector.
In addition, we have been involved in the development of the tools needed for the installation. In
particular, student Phil Harris has not only compiled the lists of cabling and cooling connections
for the tracker (in this case including the pixel detector), but has also developed a database and
web-based interface to allow efficient accounting, visualization of the connection
job, and consistency checks. An example of the interface is shown in Figure 25. While not a
particularly high-profile task, the cabling and connection of the detector is perhaps the crucial
stepping stone in getting the detector ready for data taking at the experiment; a tool to check the
validity of the cable lengths and routings, guide the installation, and query the locations of
connections and hardware during checkout is therefore very important for the overall CMS
tracker installation and commissioning.
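The kinds of consistency checks such a connection database supports can be sketched in a few lines. This is a hypothetical illustration: the record layout, the endpoint names, and the routing-length limit below are all invented for the example and do not reflect the actual schema of the CMS cabling database.

```python
def check_connections(connections, endpoints, max_length_m=60.0):
    """Basic consistency checks over a cabling list: both endpoints of each
    cable must be known, no endpoint may be used twice, and cable lengths
    must be within the routing limit (max_length_m is an assumed value).
    Each record is (cable_id, from_endpoint, to_endpoint, length_m).
    Returns a list of human-readable problem descriptions."""
    problems, used = [], set()
    for cid, src, dst, length in connections:
        for end in (src, dst):
            if end not in endpoints:
                problems.append(f"{cid}: unknown endpoint {end}")
            elif end in used:
                problems.append(f"{cid}: endpoint {end} already connected")
            else:
                used.add(end)
        if length > max_length_m:
            problems.append(f"{cid}: length {length} m exceeds routing limit")
    return problems
```

Run over the full connection list, a check of this shape catches typos in endpoint names, double bookings of patch-panel slots, and cables whose recorded length is inconsistent with the planned routing, before anyone climbs into the detector.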
Figure 25. Example of the CMS Tracker Cabling database interface, showing connections in this
case for the Tracker End-Cap (TEC) at the detector and in “Patch Panel 1”.
Tracker Operations Future
The MIT tracker group plans to continue to develop and maintain the operation of the tracker
once it is installed, during the commissioning and initial running phase. Indeed, all the
experience from the TIF will be brought to bear on the actual installation, and it is expected that
MIT personnel will make up part of the operations team, providing 24/7 on-call support for
tracker operations, developing control and analysis algorithms and in general ensuring the safe
and efficient operation of the detector during the first years of running at the LHC. In addition,
we are starting to consider roles in the tracker upgrade for the SLHC, although admittedly our
first and primary concern is getting the current detector up and running. The novel innovation
for the SLHC is the introduction of track-based triggering at the Level 1 stage, which can be
fundamental to enhancing the experimental potential, as evidenced by the CDF results based on
the track trigger reported in the Heavy Flavor section of this document. In CDF, adding
triggering to the specification for the tracker drove the requirements of virtually the entire
design: the mechanical alignment, the front-end readout chip, and the data acquisition system
were all tailored to the need to function as a trigger. We expect a similar situation to arise at
CMS, and we are interested in applying our Tevatron experience in this area as well.
3.3.5. The CMS Tier-2 Facility at MIT
The LHC accelerator and its experiments will constitute one of the largest sources of data among
basic research activities. The need for large data storage, fast networks and CPUs to process the
data will be enormous. The LHC collaborations plan to use a worldwide network of computer
centers arranged in a hierarchical system to analyze the data, create simulations and perform
calibrations. For CMS, the raw data collected at CERN will be partially processed at
CERN and then shipped to the Tier-1 centers, including Fermilab. In the US, the Fermilab Tier-1
center will be supported by several Tier-2 centers distributed across the country that will conduct
specialized data processing or simulations.
The PPC group proposed to host a Tier-2 center at MIT. The internal CMS committee evaluated
and approved the proposal, which provides funds both to purchase the hardware and to support
personnel.
The initial setup of the MIT center started in the fall of 2005 and by early 2006 we were part of
the worldwide grid of computers contributing to the overall CMS and LHC computing effort. We
are part of a larger group of CMS Tier-2 centers working together with the CMS Tier-1 center at
Fermilab. We are participating in a variety of tests and performance challenges, and in CMS
Monte Carlo production. In particular, the MIT Tier-2 center was part of the CSA06 exercise,
which demonstrated the network and processing performance. We are now preparing for the
larger CSA07, which will exercise the tools and procedures to be used for CMS Data Operations.
We are continuously meeting increasing performance goals and passing the project milestones.
While the funding for the center comes from the NSF, the center's construction and operation are
very closely connected to the preparations for the CMS physics program and to the PPC group's
activities. From the very beginning the center was designed to exploit the synergy of activities at
LNS. The center is a collaborative effort between the PPC and Heavy Ion groups at MIT, both of
which participate in the CMS experiment. In addition, the center is a continuation of the
construction of the CDF analysis farm. The computers dedicated to the CMS HEP, CMS HI and
CDF physics programs are closely networked and managed together. As of April 2007 we have
about 600 CPUs on machines purchased using CMS Tier-2 funds and DOE Nuclear Physics
funds, together with retired CDF machines and machines retired from the RHIC computing
facility at BNL. The CDF CAF farm that had been operating at MIT for the last few years was
incorporated into the Tier-2 center. Operating this heterogeneous center allows us to learn how
to run such a facility while at the same time providing resources to the ongoing and planned
physics activities.
The large number of computers in our center requires significant power and cooling resources.
Before April 2007 the center was housed in the computer rooms of the LNS computing facility.
Our center exhausted the capacity of these rooms, and we could not continue purchasing
additional hardware. In April 2007 we moved the complete center to a new location on the MIT
campus, in the central co-location facility belonging to MIT Information Services. The new
location will allow us to continue expanding the center's capacity and to reach full size by 2008.
The center is expected to grow in the years leading up to the start of the LHC, reaching a CPU
power of about 10^6 SpecInt2000 and over 500 TB of disk. Expansion beyond 2009 and 2010
will require a still larger location, and we are working with the MIT administration to create such
space. One possibility is for MIT to renovate our research space and convert it to a computer
room; another is to build a High Performance center at Bates Laboratory. In the latter case we
would share power and cooling with other research groups at MIT.
The Tier-2 centers are required to have good network connectivity. MIT is presently upgrading
its external network to reach peak speeds of 10 Gbps. This network will connect almost directly
to the LHC network operated by CMS and ATLAS.
While most of the computers are used for official CMS and CDF production and data processing,
the proximity of the center has already significantly enhanced access to the physics for locally
stationed PPC physicists. For example, data from the CMS Cosmic Challenge in the summer of
2006 was already flowing from CERN to MIT. In addition, the physicists working on the parallel
effort in heavy-ion physics are contributing their expertise to the running of the center and profit
from its existence, in particular through the availability of a well-maintained common software
framework.
The physicists and technical personnel in both the PPC and heavy-ion groups are extremely well
qualified to run such a project. Both groups played major roles in the computing efforts of their
respective experiments, CDF and Phobos: both run large computer farms and are responsible for
large-scale data analysis at the Tevatron and RHIC. The heavy-ion group has received several
NSF awards to develop software for large-scale interactive data processing, which is of direct
relevance to the LHC program. The successful operation of the center to date is largely due to
the qualities of the participants.
The PPC group is leading the effort. Christoph Paus and Boleslaw Wyslouch are the co-PIs of
the Tier-2 project at MIT. The center's day-to-day operation and hardware expansion are
managed by Ilya Kravchenko, Steve Pavlon, Dale Ross and Maarten Ballintijn (all on non-PPC
funds), with significant help from first-year PPC and HIG graduate students. In addition, the
management of user accounts, the network and some other functions is provided by the LNS
computer group.
We expect the center to grow rapidly in the months before the LHC startup. The deep
involvement of the large MIT group in Tier-2 operations is an excellent addition to the spectrum
of our activities: it is both a resource and a training ground for our work in Data Operations and
Physics. We provide valuable services to both the CDF and CMS collaborations. Our students
get first-hand experience with CMS hardware and software immediately after arriving at MIT.
As the LHC physics program expands we expect to collaborate with the neighboring CMS
universities (Brown, Boston University, Northeastern etc.) by providing local access to LHC
computing resources while sharing maintenance and service tasks.
3.4. Summary of MIT’s Role in CMS
Paus and Nahn have been at CERN full time for the last year. Steve Nahn, profiting from
research leave opportunities offered to junior faculty, will stay until September 2007 whereas the
physics department agreed to award Christoph Paus a special research leave to spend another
year at CERN. They were joined at CERN in 2006 by Gerry Bauer in May, Kristian Hahn in
June, Jonatan Piedra in July, Markus Klute in September and Guillelmo Gomez-Ceballos in
October. Konstanty Sumorok has been spending more that 50 percent of his time at CERN over
the last two years and he will move there in summer of 2007. Ilya Kravchenko will continue his
role in the Tier-2 computing center at MIT. The significant presence of MIT faculty at CERN
has facilitated a smooth transition from CDF to CMS at a crucial time for LHC commissioning.
For the DAQ system a major milestone was achieved last summer with the powering of the CMS
magnet and the readout of elements of the CMS detector with the D2S system; this was the
Magnet Test and Cosmic Challenge (MTCC). Last year we deployed the D2S subsystem
components and the Mini DAQ system in the underground service cavern (USC55). This year
we will complete the D2S deployment in the surface building SCX5 and fully commission the
FED Builder system. The full readout system is to be ready by August this year, in time for the
first pp collisions at the end of the year.
A new activity for the MIT group in the DAQ program has been the Storage Manager project.
We have taken the lead in both the definition and the implementation of the system that records
the data read out by the DAQ system. First tests of the main ideas were made during the MTCC
last year.
During the last year the CMS tracker community has been assembling the silicon tracker at the
Tracker Integration Facility (TIF). The DAQ project overlaps nicely with the tracker integration
activities, and MIT's project at the TIF has been the commissioning of the tracker at sustained
high trigger rates. A second activity at the TIF has been the organization and coordination of
“Tracker Operations” during the first half of 2007.
Another major new responsibility of the MIT group is Christoph Paus's coordination of CMS
Data Operations, which will include both data production and Monte Carlo production.
The MIT Tier-2 center was started in the fall of 2005 and by early 2006 we were part of the
worldwide grid of computers contributing to the overall CMS and LHC computing effort. NSF
through US CMS at UCLA is now funding this effort. MIT is currently providing additional
infrastructure in terms of a new computer room.
Second year graduate student Phil Harris moved to CERN last January and Matt Rudolph will
move this June. Four new graduate students started on the CMS p+p program last September.
Two of the new students have passed all their exams and will relocate to CERN in January ’08.
A third student is expected to relocate in June ’08. The fourth of those students is changing his
research direction and we expect to take on another student this September to replace him. In
addition this summer we will have three undergraduate students at CERN working with Paus,
Nahn and Wyslouch. The students are funded under MIT’s Undergraduate Research
Opportunities Program (UROP). We plan to continue working with MIT undergraduates at
CERN.
4. Budget Discussion
During this grant period the PPC group will be finalizing the transition from the CDF and
Phobos programs to CMS at the LHC. Most of the faculty, research staff and students are already
full time on CMS. While we prepare for LHC beams, we continue to profit from excellent CDF
data and are putting the final touches on Phobos analyses. While the Phobos experiment has
finished taking data, we plan to stay involved in CDF until FY09, when Bruce Knuteson turns
his attention fully to the LHC.
The number of Faculty and Research Scientists supported by this grant has remained constant for
the last two years. We actively participate in DAQ, Tracker and Tier-2 projects within the CMS
Research Program and some of the group’s personnel are supported by these funds.
The MIT Physics Department conducts open searches in the general area of particle and nuclear
physics, typically every year. We expect that the attractiveness of our program will convince one
of the newly appointed faculty members to join our group; for planning purposes we assume that
this will occur in FY08.
The biggest changes are occurring among the junior members of the group. Many of our CDF
students have graduated or are about to graduate, and we have an unusually large number of
first- and second-year graduate students who want to work on CMS at the LHC.
We plan to continue our group’s tradition to maintain a strong presence at the experiment.
Almost all of the research scientists and senior graduate students will be stationed at CERN or at
FNAL while faculty travel to CERN, spend research leaves there and use remote
videoconferencing.
The research staff members stationed at CERN receive a “CERN site differential” salary
supplement to account for differences in the cost of living between the US and Geneva.
Similarly, graduate students receive a per diem to account for increased expenses. These costs
are included partially in this budget request, either explicitly (the CERN site differential) or as
part of the travel budget (the per diem).
The research staff that are involved full time in the hardware and software projects receive cost
of living adjustments through the CMS Research Program (DAQ, Tracker and Offline
Computing).
The proposed budget of the PPC group in FY08-FY10 includes new rates for the allocation of
the administrative and computing costs and changes in tuition, benefits rate and vacation accrual.
Our budget request is 1380 k$ in FY08, 1450 k$ in FY09 and 1502 k$ in FY10. The details of
planning that went into calculation of these costs are explained below.
Recent Personnel Changes
Faculty: The group consists of Christoph Paus (Associate Professor) and Steve Nahn (Assistant
Professor), who work on CMS; Wit Busza (Professor) and Boleslaw Wyslouch (Professor), who
work on the Phobos and CMS heavy-ion programs; and Bruce Knuteson, who works on CDF.
Christoph Paus was recently promoted to tenure, so the group consists of three senior and two
junior professors. In FY08 or FY09 we hope to attract a new junior faculty member. The faculty
plan a significant presence at CERN (Paus in FY08, Wyslouch in FY09, Knuteson in FY10).
Part of the funding increase between FY08 and FY09 is to accommodate the summer salary of
the second
year of the new assistant professor (we assume that the FY08 summer salary will be covered
from University startup funds).
Research Scientists: Last year Roberta Arcidiacono left our group and Guillelmo Ceballos
replaced her. Also, Markus Klute was promoted from a postdoc on CDF to a Research Scientist
on CMS. In the case of Ceballos and Klute, the Research Scientist positions are intended to be of
limited duration: we expect both of them to stay with our group for approximately three years,
allowing them to obtain physics results from the first significant LHC physics runs. Ilya
Kravchenko is
the main manager of MIT Tier-2 center and he is fully supported by the Tier-2 project. Principal
Research Scientist Gerry Bauer is permanently at CERN and Konstanty Sumorok will move to
CERN this summer. Our group also includes Visiting Scientist Jonatan Piedra supported by
Spain.
Postdoctoral fellows: Connor Henderson joined the group in summer of 2005. He is stationed at
Fermilab and working on CDF under the leadership of Bruce Knuteson. He plays a major role in
maintenance and operation of the CDF Data Acquisition. Kristian Hahn joined the group in June
2006; he is stationed at CERN and works with the CMS Tracker group. We expect that in FY08
50% of his salary will be covered by Steve Nahn's startup funds, and he will be fully supported
by the PPC budget starting in FY09.
Graduate Students: Our group of graduate students is undergoing a major transition. In FY08 we
will have only three students on CDF: Choudalakis, Makhoul and Xie; we expect the first two to
graduate by the end of FY08. The remaining students are working on CMS. Phil Harris and Matt
Rudolph will finish their coursework by the end of FY07 and will be stationed permanently at
CERN. Pieter Everaerts, who has already completed his coursework, will accompany them.
Stephen Jaditz and Kevin Sung will complete their coursework in FY08 and move to CERN in
the second half of FY08 or at the beginning of FY09. In FY08 we expect Stephen Jaditz to teach
for one semester to relieve financial pressure on the group while we wait for the CDF students to
complete their theses. We would like to be able to accept 1-2 students each year, but the large
influx of students last year makes such a plan extremely difficult. At this moment we cannot add
any more Research Assistants during this three-year period. We will try to use all available
institutional resources to provide opportunities for new students; in particular, we plan to take
advantage of MIT first-year fellowships and teaching opportunities to allow one additional
student to join our group in FY09 and FY10 at no extra cost to this grant. Recently Choudalakis
won a named MIT fellowship that will largely cover his stipend and tuition during his last year
before graduation. The funding increases between FY08 and FY09, as well as between FY09 and
FY10, are partially due to graduate students advancing toward full-time research after their
first-year fellowships and teaching opportunities expire.
We believe that the period of this grant is the best time for students to get involved in CMS, both
in terms of getting to commission the detector and being present at the potentially very exciting
first years of the LHC. In the spirit of the American Competitiveness Initiative we will
accommodate as many as we can financially, considering probable graduation of current students
and available grant support.
Technical Personnel: Steve Pavlon will spend 50% of his time on CMS DAQ project and 50%
on CMS Tier-2 NSF-funded project.
Undergraduate students: We have been very successful in attracting undergraduate students to
the CMS program. So far five summer students have worked with us on CMS. Last year we had two
students on the DAQ program and one working on the heavy-ion program; this year we have one
on each program. The stipends for all of them came from direct MIT UROP funding; we used
our research funds to cover their airfare and housing at CERN.
Travel Budget
The majority of the members of our group are working on CMS. This has resulted in a
redistribution of travel expenses as well as an increase in the site-differential costs for the
personnel at CERN. We now have Bauer, Klute, Hahn and Ceballos at CERN. Christoph Paus
will stay at CERN until the end of FY08. Steve Nahn will return to MIT after his FY07 research
leave and is expected to stay engaged in the commissioning of the CMS Tracker, with frequent
trips to Geneva. We expect Wyslouch to participate actively in CMS activities at CERN, with
extended stays in the summer of 2007 as well as a sabbatical in FY08/FY09. We expect the
undergraduate and graduate students to participate in the preparations for the start of the LHC at
CERN.
The activities at CDF will concentrate on the exploitation of the large volume of data collected at
the Tevatron. Knuteson will spend extended periods of time at Fermilab and participate in the
CDF activities while teaching at MIT. Paus will continue to supervise the students and
participate in the meetings at Fermilab.
We estimate that the cost of travel will remain similar to that of previous years, since a single
trip to CERN costs more than a single trip to Fermilab but the total number of trips is lower. We
are actively pursuing various housing options at CERN to minimize expenses, and we continue
to increase our reliance on videoconferencing for day-to-day research activities.
Materials and Services
M&S funds cover office supplies, telephones, shipping and software licenses at MIT, Fermilab
and CERN. In addition, we continue to upgrade the video facilities available to the PPC to help
keep in touch with CMS at CERN. Due to the heavy involvement of MIT personnel in the data
analysis and data acquisition of CDF and CMS, we maintain and replace a number of
workstations and peripherals. We expect to upgrade some of our computer equipment to account
for the increasing luminosity at the Tevatron and the increasing involvement in CMS. We will
need to make minor additions and replacements of servers for the group's analysis efforts at
CERN, MIT and Fermilab. The group's participation in the construction of the Tier-2 facility at
MIT is already providing excellent access to CMS data and analysis tools. While the Tier-2
funding brings significant computing power to MIT, we need to provide and maintain our
group's own hardware for individual access of group members to this resource.
Computing
The group's overall budget includes additional service fees toward the activities of the LNS
computing group. The PPC group profits from the availability of system managers, network
managers and shared computing space. The Laboratory has been hosting some of the Tier-2
machines and has provided network services to the analysis machines for CMS and CDF. We
expect to rely on the services of the group in FY08 and beyond.
5. Summary
The physicists forming the Particle Physics Collaboration are contributing in a major way to the
leading programs in high-energy particle and nuclear physics. Over the last year we have
contributed to the understanding of the Standard Model via significant measurements of the
properties of B mesons with CDF and measurements of the behavior of nuclear matter at high
temperature with Phobos. We have advanced our preparations for the exploration of the high-pT
CDF data in search of new phenomena. We are involved in the preparation of the CMS DAQ,
the CMS Tracker systems and CMS data operations management in the last year before beams,
and we are advancing the heavy-ion program with CMS at CERN.
In FY08 we plan to continue to exploit the data collected with CDF and Phobos and, at the same
time, continue to increase our contribution to CMS. The MIT physics department has enabled
both Christoph Paus and Steve Nahn to be at CERN this last year. The presence of Gerry Bauer,
Kristian Hahn, Jonatan Piedra, Markus Klute, Guillelmo Gomez-Ceballos, Konstanty Sumorok
and our graduate and undergraduate students at CERN has allowed us to contribute to CMS
commissioning and preparations for physics. This significant participation in the detector
activities will allow us to continue our traditional leadership roles at the frontier of high-energy
physics.
We plan to continue to strengthen our educational mission. The increase of the importance of
postdoctoral associates, the graduations of award winning students, the increased involvement of
graduate and undergraduate students in the CMS program under the supervision of outstanding
faculty and research scientists will guarantee the vitality of the program. This year in particular
we expect to make an investment in the future by significantly increasing the number of eager
students working with us on CMS. We expect to contribute to the advancement of physics and to
the discoveries expected in the large volume of data from existing accelerators and higher
energies expected at LHC.
We continue to improve our organization and efficiency. We seek additional resources and
funds. We continue to receive excellent support from the LNS and Physics Department at MIT.
6. The MIT Personnel on PPC in FY08
TASK A, PPC Group

Name            CDF     Phobos  CMS     #months funded by DOE/(Other)
W. Busza        0%      50%     50%     2
B. Knuteson     100%    0%      0%      2
S. Nahn         0%      0%      100%    2
C. Paus         10%     0%      90%     2
L. Rosenson     20%     0%      0%      0
B. Wyslouch     0%      10%     90%     1
C. Henderson    100%    0%      0%      12
K. Hahn         0%      0%      100%    6(6)
G. Bauer        0%      0%      100%    12
K. Sumorok      0%      0%      100%    12
G. Ceballos     0%      0%      100%    12
I. Kravchenko   0%      0%      100%    0
M. Klute        0%      0%      100%    12
J. Piedra       0%      0%      100%    0
S. Fowler       30%     10%     60%     6(1)
S. Pavlon       0%      0%      100%    0(12)

Graduate Students  Faculty Advisor  CDF     Phobos  CMS     #months
K. Makhoul         Paus             100%    0%      0%      12
G. Choudalakis     Knuteson         100%    0%      0%      3
S. Xie             Knuteson         100%    0%      0%      12
P. Harris          Nahn             0%      0%      100%    12
M. Rudolph         Nahn             0%      0%      100%    12
S. Jaditz          Paus             0%      0%      100%    9
K. Sung            Paus             0%      0%      100%    12
P. Everaerts       Paus             0%      0%      100%    12

Positions: Faculty, Postdoc, Princ. Res. Sci., Res. Sci., Visiting Sci., Admin/Sec, Engineer, Grad Student.
Comments: Nahn Startup; Tier-2; Spain; Part time help; CMS DAQ 50%; Tier-2 50%; Named Fellowship; Teaching Assistant.
7. The MIT Personnel on PPC in FY09
TASK A, PPC Group

Name            CDF     Phobos  CMS     #months funded by DOE/(Other)
W. Busza        0%      20%     80%     2
B. Knuteson     50%     0%      50%     2
S. Nahn         0%      0%      100%    2
C. Paus         0%      0%      100%    2
L. Rosenson     20%     0%      0%      0
B. Wyslouch     0%      0%      100%    1
New Faculty     100%    0%      0%      2
C. Henderson    100%    0%      0%      12
K. Hahn         0%      0%      100%    12
G. Bauer        0%      0%      100%    12
K. Sumorok      0%      0%      100%    12
G. Ceballos     0%      0%      100%    12
I. Kravchenko   0%      0%      100%    0
M. Klute        0%      0%      100%    12
J. Piedra       0%      0%      100%    0
S. Fowler       10%     10%     80%     6(1)
S. Pavlon       0%      0%      100%    0(12)

Graduate Students  Faculty Advisor  CDF     Phobos  CMS     #months
S. Xie             Knuteson         100%    0%      0%      12
P. Harris          Nahn             0%      0%      100%    12
M. Rudolph         Nahn             0%      0%      100%    12
S. Jaditz          Paus             0%      0%      100%    12
K. Sung            Paus             0%      0%      100%    12
P. Everaerts       Paus             0%      0%      100%    12
New Student                         0%      0%      100%    0

Positions: Faculty, Postdoc, Princ. Res. Sci., Res. Sci., Visiting Sci., Admin/Sec, Engineer, Grad Student.
Comments: Tier-2; Spain; Part time help; CMS DAQ 50%; Tier-2 50%; Fellowship, TA.
8. The MIT Personnel on PPC in FY10
TASK A, PPC Group

Name            CDF     Phobos  CMS     #months funded by DOE/(Other)
W. Busza        0%      0%      100%    2
B. Knuteson     20%     0%      80%     2
S. Nahn         0%      0%      100%    2
C. Paus         0%      0%      100%    2
L. Rosenson     20%     0%      0%      0
B. Wyslouch     0%      0%      100%    1
New Faculty     0%      0%      100%    2
C. Henderson    100%    0%      0%      12
K. Hahn         0%      0%      100%    12
G. Bauer        0%      0%      100%    12
K. Sumorok      0%      0%      100%    12
G. Ceballos     0%      0%      100%    12
I. Kravchenko   0%      0%      100%    0
M. Klute        0%      0%      100%    12
J. Piedra       0%      0%      100%    0
S. Fowler       10%     0%      90%     6(1)
S. Pavlon       0%      0%      100%    0(12)

Graduate Students  Faculty Advisor  CDF     Phobos  CMS     #months
S. Xie             Knuteson         100%    0%      0%      12
P. Harris          Nahn             0%      0%      100%    12
M. Rudolph         Nahn             0%      0%      100%    12
S. Jaditz          Paus             0%      0%      100%    12
K. Sung            Paus             0%      0%      100%    12
P. Everaerts       Paus             0%      0%      100%    12
New Student                         0%      0%      100%    0

Positions: Faculty, Postdoc, Princ. Res. Sci., Res. Sci., Visiting Sci., Admin/Sec, Engineer, Grad Student.
Comments: Tier-2; Spain; Part time help; CMS DAQ 50%; Tier-2 50%; TA.
9. Publications since 2004
9.1. CDF Publications:
1. "Measurement of the Average Time-Integrated Mixing Probability of b-Flavored Hadrons Produced at the Tevatron", D. Acosta et al., Phys. Rev. D69, 012002 (2004).
2. "Search for Kaluza-Klein Graviton Emission in pp̄ Collisions at √s = 1.8 TeV Using the Missing Energy Signature", D. Acosta et al., Phys. Rev. Lett. 92, 121802 (2004).
3. "Inclusive Double Pomeron Exchange at the Fermilab Tevatron pp̄ Collider", D. Acosta et al., Phys. Rev. Lett. 93, 141601 (2004).
4. "Measurement of the Polar-Angle Distribution of Leptons from W Boson Decay as a Function of the W Transverse Momentum in pp̄ Collisions at √s = 1.8 TeV", D. Acosta et al., Phys. Rev. D70, 032004 (2004).
5. "Combination of CDF and D0 Results on W Boson Mass and Width", V. M. Abazov et al., Phys. Rev. D70, 092008 (2004).
6. "Optimized Search for Single-Top-Quark Production at the Tevatron", D. Acosta et al., Phys. Rev. D69, 052003 (2004).
7. "Heavy Flavor Properties of Jets Produced in pp̄ Interactions at √s = 1.8 TeV", D. Acosta et al., Phys. Rev. D69, 072004 (2004).
8. "Observation of the Narrow State X(3872) → J/ψ π+π− in pp̄ Collisions at √s = 1.96 TeV", D. Acosta et al., Phys. Rev. Lett. 93, 072001 (2004).
9. "Inclusive Search for Anomalous Production of High-pT Like-Sign Lepton Pairs in pp̄ Collisions at √s = 1.8 TeV", D. Acosta et al., Phys. Rev. Lett. 93, 061802 (2004).
10. "A Time-of-Flight Detector in CDF-II", D. Acosta et al., Nucl. Instrum. Meth. A518, 605 (2004).
11. "Search for Bs0 → μ+μ− and Bd0 → μ+μ− Decays in pp̄ Collisions at √s = 1.96 TeV", D. Acosta et al., Phys. Rev. Lett. 93, 032001 (2004).
12. "Search for Doubly-Charged Higgs Bosons Decaying to Dileptons in pp̄ Collisions at √s = 1.96 TeV", D. Acosta et al., Phys. Rev. Lett. 93, 221802 (2004).
13. "The Underlying Event in Hard Interactions at the Tevatron pp̄ Collider", D. Acosta et al., Phys. Rev. D70, 072002 (2004).
14. "Direct Photon Cross Section with Conversions at CDF", D. Acosta et al., Phys. Rev. D70, 074008 (2004).
15. "Measurement of the tt̄ Production Cross Section in pp̄ Collisions at √s = 1.96 TeV Using Dilepton Events", D. Acosta et al., Phys. Rev. Lett. 93, 142001 (2004).
16. "Measurements of bb̄ Azimuthal Production Correlations in pp̄ Collisions at √s = 1.8 TeV", D. Acosta et al., Phys. Rev. D71, 092001 (2005).
17. "Measurement of the W Boson Polarization in Top Decay at CDF at √s = 1.8 TeV", D. Acosta et al., Phys. Rev. D71, 031101(R) (2005).
18. "Measurement of Charged Particle Multiplicities in Gluon and Quark Jets in pp̄ Collisions at √s = 1.8 TeV", D. Acosta et al., Phys. Rev. Lett. 94, 171802 (2005).
19. "Measurement of Partial Widths and Search for Direct CP Violation in D0 Meson Decays to K−K+ and π−π+", D. Acosta et al., Phys. Rev. Lett. 94, 122001 (2005).
20. "Comparison of Three-jet Events in pp̄ Collisions at √s = 1.8 TeV to Predictions from a Next-to-leading Order QCD Calculation", D. Acosta et al., Phys. Rev. D71, 032002 (2005).
21. "Measurement of the tt̄ Production Cross Section in pp̄ Collisions at √s = 1.96 TeV Using Kinematic Fitting of b-tagged Lepton + Jet Events", D. Acosta et al., Phys. Rev. D71, 072005 (2005).
22. "Measurement of Wγ and Zγ Production in pp̄ Collisions at √s = 1.96 TeV", D. Acosta et al., Phys. Rev. Lett. 94, 041803 (2005).
23. "Search for Excited and Exotic Electrons in the eγ Decay Channel in pp̄ Collisions at √s = 1.96 TeV", D. Acosta et al., Phys. Rev. Lett. 94, 101802 (2005).
24. "Search for Anomalous Production of Diphoton Events with Missing Transverse Energy at CDF and Limits on Gauge-Mediated Supersymmetry-Breaking Models", D. Acosta et al., Phys. Rev. D71, 031104 (2005).
25. "Measurement of the tt̄ Production Cross Section in pp̄ Collisions at √s = 1.96 TeV Using Lepton + Jets Events with Secondary Vertex b-tagging", D. Acosta et al., Phys. Rev. D71, 052003 (2005).
26. "Search for Scalar Leptoquark Pairs Decaying to νν̄qq̄ in pp̄ Collisions at √s = 1.96 TeV", D. Acosta et al., Phys. Rev. D71, 119901 (2005).
27. "Measurement of the Forward-Backward Charge Asymmetry of Electron-Positron Pairs in pp̄ Collisions at √s = 1.96 TeV", D. Acosta et al., Phys. Rev. D71, 052002 (2005).
28. "First Measurements of Inclusive W and Z Cross Sections from Run II of the Tevatron Collider", D. Acosta et al., Phys. Rev. Lett. 94, 091803 (2005).
29. "Analysis of the Decay-Time Dependence of Angular Distributions in Bd0 → J/ψ K*0 and Bs0 → J/ψ φ Decays and Measurement of the Lifetime Difference between Bs0 Mass Eigenstates", D. Acosta et al., Phys. Rev. Lett. 94, 101803 (2005).
30. "Search for Electroweak Single Top Quark Production in pp̄ Collisions at √s = 1.96 TeV", D. Acosta et al., Phys. Rev. D71, 012005 (2005).
31. "Measurement of the J/ψ Meson and b-Hadron Production Cross Sections in pp̄ Collisions at √s = 1960 GeV", D. Acosta et al., Phys. Rev. D71, 032001 (2005).
32. "Measurement of the Moments of the Hadronic Invariant Mass Distribution in Semileptonic B Decays", D. Acosta et al., Phys. Rev. D71, 051103 (2005).
33. "Measurement of the Forward-Backward Charge Asymmetry from W → eν Production in pp̄ Collisions at √s = 1.96 TeV", D. Acosta et al., Phys. Rev. D71, 051104 (2005).
34. "Measurement of the Cross Section for Prompt Diphoton Production in pp̄ Collisions at √s = 1.96 TeV", D. Acosta et al., Phys. Rev. Lett. 95, 022003 (2005).
35. "Search for Anomalous Kinematics in tt̄ Dilepton Events at CDF II", D. Acosta et al., Phys. Rev. Lett. 95, 022001 (2005).
36. "Search for ZZ and ZW Production in pp̄ Collisions at √s = 1.96 TeV", D. Acosta et al., Phys. Rev. D71, 091105 (2005).
37. "Measurement of the W+W− Production Cross Section in pp̄ Collisions at √s = 1.96 TeV Using Dilepton Events", D. Acosta et al., Phys. Rev. Lett. 94, 211801 (2005).
38. "First Evidence for Bs0 → φφ Decay and Measurements of Branching Ratio and ACP for B+ → φK+", D. Acosta et al., Phys. Rev. Lett. 95, 031801 (2005).
39. "Search for Long-Lived Doubly-Charged Higgs Bosons in pp̄ Collisions at √s = 1.96 TeV", D. Acosta et al., Phys. Rev. Lett. 95, 071801 (2005).
40. "Search for Higgs Bosons Decaying into bb̄ and Produced in Association with a Vector Boson in pp̄ Collisions at √s = 1.8 TeV", D. Acosta et al., Phys. Rev. Lett. 95, 051801 (2005).
41. "Measurement of the tt̄ Production Cross Section in pp̄ Collisions at √s = 1.96 TeV Using Lepton Plus Jets Events with Semileptonic B Decays to Muons", D. Acosta et al., Phys. Rev. D72, 032002 (2005).
42. "Measurement of the Lifetime Difference Between Bs0 Mass Eigenstates", D. Acosta et al., Phys. Rev. Lett. 94, 101803 (2005).
43. "Study of Jet Shapes in Inclusive Jet Production in pp̄ Collisions at √s = 1.96 TeV", D. Acosta et al., Phys. Rev. D71, 112002 (2005).
44. "Measurement of B(t → Wb)/B(t → Wq) at the Collider Detector at Fermilab", D. Acosta et al., Phys. Rev. Lett. 95, 102002 (2005).
45. "Measurement of the Cross Section for tt̄ Production in pp̄ Collisions Using the Kinematics of Lepton + Jets Events", D. Acosta et al., Phys. Rev. D72, 052003 (2005).
46. "Search for Λb0 → pπ− and Λb0 → pK− Decays in pp̄ Collisions at √s = 1.96 TeV", D. Acosta et al., Phys. Rev. D72, 051104(R) (2005).
47. "Search for New Physics Using High Mass Tau Pairs from 1.96 TeV pp̄ Collisions", D. Acosta et al., Phys. Rev. Lett. 95, 131801 (2005).
48. "Search for Bs0 → μ+μ− and Bd0 → μ+μ− Decays in pp̄ Collisions with CDF II", A. Abulencia et al., Phys. Rev. Lett. 95, 221805 (2005).
49. "Precision Top Quark Mass Measurement in the Lepton + Jets Topology in pp̄ Collisions at √s = 1.96 TeV", A. Abulencia et al., Phys. Rev. Lett. 96, 022004 (2006).
50. "Top Quark Mass Measurement Using the Template Method in the Lepton + Jets Channel at CDF II", A. Abulencia et al., Phys. Rev. D73, 032003 (2006).
51. "Evidence for the Exclusive Decay Bc± → J/ψ π± and Measurement of the Mass of the Bc Meson", D. Acosta et al., Phys. Rev. Lett. 96, 082002 (2006).
52. "A Search for Supersymmetric Higgs Bosons in the Di-τ Decay Mode in pp̄ Collisions at √s = 1.8 TeV", D. Acosta et al., Phys. Rev. D72, 072004 (2005).
53. "K0s and Λ0 Production Studies in pp̄ Collisions at √s = 1800 and 630 GeV", D. Acosta et al., Phys. Rev. D72, 052001 (2005).
54. "Search for Anomalous Decay of Heavy Flavor Hadrons Produced in Association with a W Boson at CDF II", A. Abulencia et al., Phys. Rev. D73, 051101 (2006).
55. "Measurement of the Top Quark Mass with the Dynamical Likelihood Method Using Lepton plus Jets Events with b-tags in pp̄ Collisions at √s = 1.96 TeV", A. Abulencia et al., Phys. Rev. D73, 092002 (2006).
56. "Search for Neutral MSSM Higgs Bosons Decaying to Tau Pairs in pp̄ Collisions at √s = 1.96 TeV", A. Abulencia et al., submitted to Phys. Rev. Lett., hep-ex/0508051.
57. "Search for New High Mass Particles Decaying to Lepton Pairs in pp̄ Collisions at √s = 1.96 TeV", A. Abulencia et al., Phys. Rev. Lett. 95, 252001 (2005).
58. "Search for First-Generation Scalar Leptoquarks in pp̄ Collisions at √s = 1.96 TeV", D. Acosta et al., Phys. Rev. D72, 051107 (2005).
59. "Search for H → bb̄ Produced in Association with W Bosons in pp̄ Collisions at √s = 1.96 TeV", A. Abulencia et al., submitted to Phys. Rev. Lett., hep-ex/0512051.
60. "Measurement of the Inclusive Jet Cross Section in pp̄ Interactions at √s = 1.96 TeV Using a Cone-based Jet Algorithm", A. Abulencia et al., FERMILAB-PUB-05-559-E, hep-ex/0512020, submitted to Phys. Rev. Lett.
61. "Search for Charged Higgs Bosons from Top Quark Decays in pp̄ Collisions at √s = 1.96 TeV", A. Abulencia et al., Phys. Rev. Lett. 96, 042003 (2006).
62. "Measurement of the Ratios of Branching Fractions B(Bs0 → Ds−π+)/B(B0 → D−π+) and B(B+ → D̄0π+)/B(B0 → D−π+)", A. Abulencia et al., Phys. Rev. Lett. 96, 191801 (2006).
63. "Measurement of the Inclusive Jet Cross Section Using the kT Algorithm in pp̄ Collisions at √s = 1.96 TeV", A. Abulencia et al., Phys. Rev. Lett. 96, 122001 (2006).
64. "Measurement of the Dipion Mass Spectrum in X(3872) → J/ψ π+π− Decays", A. Abulencia et al., Phys. Rev. Lett. 96, 102002 (2006).
65. "Measurement of Mass and Width of the Excited Charmed Meson States D10 and D2*0", A. Abulencia et al., Phys. Rev. D73, 051104 (2006).
66. "Direct Search for Dirac Magnetic Monopoles in pp̄ Collisions at √s = 1.96 TeV", A. Abulencia et al., Phys. Rev. Lett. 96, 201801 (2006).
67. "First Measurements of Inclusive W and Z Cross Sections from Run II of the Tevatron Collider", A. Abulencia et al., submitted to Phys. Rev. D.
68. "Measurement of the Bottom-Quark Hadron Masses in Exclusive J/ψ Decays with the CDF Detector", D. Acosta et al., Phys. Rev. Lett. 96, 202001 (2006).
69. "Measurement of the Helicity of W Bosons in Top-Quark Decays", A. Abulencia et al., Phys. Rev. D73, 111103 (2006).
70. "A Search for t → τν q in tt̄ Production", A. Abulencia et al., Phys. Lett. B639, 172 (2006).
71. "Top Quark Mass Measurement from Dilepton Events at CDF II", A. Abulencia et al., Phys. Rev. Lett. 96, 152002 (2006).
72. "A Search for Scalar Bottom Quarks from Gluino Decays in pp̄ Collisions at √s = 1.96 TeV", A. Abulencia et al., Phys. Rev. Lett. 96, 171802 (2006).
73. "Search for Second-Generation Scalar Leptoquarks in pp̄ Collisions at √s = 1.96 TeV", A. Abulencia et al., Phys. Rev. D73, 051102 (2006).
74. "Observation of Bs0 → K+K− and Measurements of Branching Fractions of Charmless Two-body Decays of Bd0 and Bs0 Mesons in pp̄ Collisions at √s = 1.96 TeV", A. Abulencia et al., Phys. Rev. Lett. 97, 211802 (2006).
75. "Top Quark Mass Measurement from Dilepton Events at CDF II with the Matrix-Element Method", A. Abulencia et al., Phys. Rev. D74, 032008 (2006).
76. "Measurement of the b Jet Cross Section in Events with a Z Boson in pp̄ Collisions at √s = 1.96 TeV", A. Abulencia et al., submitted to Phys. Rev. Lett.
77. "Search for New Physics in Lepton + Photon + X Events with 305 pb−1 of pp̄ Collisions at √s = 1.96 TeV", A. Abulencia et al., submitted to Phys. Rev. Lett.
78. "Measurement of the Ratio of Branching Fractions B(D0 → K+π−)/B(D0 → K−π+) Using the CDF II Detector", A. Abulencia et al., Phys. Rev. D74, 031109 (2006).
79. "Search for Large Extra Dimensions in the Production of Jets and Missing Transverse Energy in pp̄ Collisions at √s = 1.96 TeV", A. Abulencia et al., Phys. Rev. Lett. 97, 171802 (2006).
80. "Measurement of the tt̄ Production Cross Section in pp̄ Collisions at √s = 1.96 TeV Using Missing ET + Jets Events with Secondary Vertex b-Tagging", A. Abulencia et al., Phys. Rev. Lett. 96, 202002 (2006).
81. "Measurement of the Bc+ Meson Lifetime Using Bc+ → J/ψ e+ νe", A. Abulencia et al., Phys. Rev. Lett. 97, 012002 (2006).
82. "Search for High Mass eμ Decays", A. Abulencia et al., submitted to Phys. Rev. D.
83. "Search for Z′ → e+e− Using Dielectron Mass and Angular Distribution", A. Abulencia et al., Phys. Rev. Lett. 96, 211801 (2006).
84. "Observation of Bs0 → ψ(2S)φ and Measurement of the Ratio of Branching Fractions B(Bs0 → ψ(2S)φ)/B(Bs0 → J/ψ φ)", A. Abulencia et al., Phys. Rev. Lett. 96, 231801 (2006).
85. "Measurement of the Top Quark Mass Using Template Methods on Dilepton Events in pp̄ Collisions at √s = 1.96 TeV", A. Abulencia et al., Phys. Rev. D73, 112006 (2006).
86. "Measurement of σ(Λb0)/σ(B0) × B(Λb0 → Λc+π−)/B(B0 → D+π−) in pp̄ Collisions at √s = 1.96 TeV", A. Abulencia et al., submitted to Phys. Rev. Lett., hep-ex/0601003.
9.2. Phobos Publications:
2007:
Back, B. B. et al. (PHOBOS Collab.), "Identified Hadron Transverse Momentum Spectra in Au+Au Collisions at √sNN = 62.4 GeV", Phys. Rev. C75, 024910.
Alver, B. et al. (PHOBOS Collab.), "System Size, Energy, Pseudorapidity, and Centrality Dependence of Elliptic Flow", submitted to Phys. Rev. Lett.
Alver, B. et al. (PHOBOS Collab.), "Elliptic Flow Fluctuations in Au+Au Collisions at √sNN = 200 GeV", submitted to Phys. Rev. Lett.
2006:
Alt, C. et al. (NA49 Collab.), "Energy and centrality dependence of anti-p and p production and the anti-Lambda/anti-p ratio in Pb+Pb collisions between 20A GeV and 158A GeV", Phys. Rev. C73, 044910.
Alt, C. et al. (NA49 Collab.), "Inclusive production of charged pions in p+p collisions at 158 GeV/c beam momentum", Eur. Phys. J. C45, 343.
Alt, C. et al. (NA49 Collab.), "Upper limit of D0 production in central Pb+Pb collisions at 158A GeV", Phys. Rev. C73, 034910.
Alver, B. et al. (PHOBOS Collab.), "System size and centrality dependence of charged hadron transverse momentum spectra in Au+Au and Cu+Cu collisions at √sNN = 62.4 and 200 GeV", Phys. Rev. Lett. 96, 212301.
Back, B. B. et al. (PHOBOS Collab.), "Centrality and Energy Dependence of Charged-Particle Multiplicities in Heavy Ion Collisions in the Context of Elementary Reactions", Phys. Rev. C74, 021902(R).
Back, B. B. et al. (PHOBOS Collab.), "Forward-Backward Multiplicity Correlations in √sNN = 200 GeV Au+Au Collisions", Phys. Rev. C74, 011901(R).
Back, B. B. et al. (PHOBOS Collab.), "Energy dependence of directed flow over a wide range of pseudorapidity in Au+Au collisions at RHIC", Phys. Rev. Lett. 97, 012301.
Back, B. B. et al. (PHOBOS Collab.), "Charged Particle Pseudorapidity Distributions in Au+Au Collisions at √sNN = 62.4 GeV", Phys. Rev. C74, 021901(R).
Back, B. B. et al. (PHOBOS Collab.), "Transverse Momentum and Rapidity Dependence of HBT Correlations in Au+Au Collisions at √sNN = 62.4 and 200 GeV", Phys. Rev. C73, 031901(R).
Wozniak, K. et al. (PHOBOS Collab.), "Vertex reconstruction algorithms in the PHOBOS experiment at RHIC", Nucl. Instrum. Meth. A566, 185.
2005:
Alt, C. et al. (NA49 Collab.), "System Size and Centrality Dependence of the Balance Function in A+A Collisions at √sNN = 17.3 GeV", Phys. Rev. C71, 034903.
Alt, C. et al. (NA49 Collab.), "System Size Dependence of Strangeness Production in Nucleus-Nucleus Collisions at √sNN = 17.3 GeV", Phys. Rev. Lett. 94, 052301.
Alt, C. et al. (NA49 Collab.), "Omega− and anti-Omega+ production in central Pb+Pb collisions at 40A GeV and 158A GeV", Phys. Rev. Lett. 94, 192301.
Back, B. B. et al. (PHOBOS Collab.), "The PHOBOS Perspective on Discoveries at RHIC", Nucl. Phys. A757, 28.
Back, B. B. et al. (PHOBOS Collab.), "Scaling of Charged Particle Production in d+Au Collisions at √sNN = 200 GeV", Phys. Rev. C72, 031901(R).
Back, B. B. et al. (PHOBOS Collab.), "Charged Antiparticle to Particle Ratios Near Midrapidity in p+p Collisions at √sNN = 200 GeV", Phys. Rev. C71, 021901(R).
Back, B. B. et al. (PHOBOS Collab.), "Centrality and Pseudorapidity Dependence of Elliptic Flow for Charged Hadrons in Au+Au Collisions at √sNN = 200 GeV", Phys. Rev. C72, 051901(R).
Back, B. B. et al. (PHOBOS Collab.), "Energy Dependence of Elliptic Flow over a Large Pseudorapidity Range in Au+Au Collisions at RHIC", Phys. Rev. Lett. 94, 122303.
Back, B. B. et al. (PHOBOS Collab.), "Centrality Dependence of Charged Hadron Transverse Momentum Spectra in Au+Au Collisions from √sNN = 62.4 to 200 GeV", Phys. Rev. Lett. 94, 082304.
Busza, W., "Structure and Fine Structure in Multiparticle Production Data at High Energies", Acta Phys. Polon. B35, 2873.
2004:
Accardi, A. et al. (CERN Yellow Report), "Hard Probes in Heavy Ion Collisions at the LHC: Jet Physics", arXiv:hep-ph/0310274.
Alt, C. et al. (NA49 Collab.), "Electric Charge Fluctuations in Central Pb+Pb Collisions at 20A GeV, 30A GeV, 40A GeV and 158A GeV", Phys. Rev. C70, 064903.
Alt, C. et al. (NA49 Collab.), "Energy and centrality dependence of deuteron and proton production in Pb+Pb collisions at relativistic energies", Phys. Rev. C69, 024902.
Alt, C. et al. (NA49 Collab.), "Observation of an exotic S = −2, Q = −2 baryon resonance in proton-proton collisions at the CERN SPS", Phys. Rev. Lett. 92, 042003.
Anticic, T. et al. (NA49 Collab.), "Lambda and Anti-Lambda Production in Central Pb+Pb Collisions at 40A GeV, 80A GeV and 158A GeV", Phys. Rev. Lett. 93, 022302.
Anticic, T. et al. (NA49 Collab.), "Transverse Momentum Fluctuations in Nuclear Collisions at 158A GeV", Phys. Rev. C70, 034902.
Back, B. B. et al. (PHOBOS Collab.), "Centrality Dependence of Charged Antiparticle to Particle Ratios near Mid-rapidity in d+Au Collisions at √sNN = 200 GeV", Phys. Rev. C70, 011901(R).
Back, B. B. et al. (PHOBOS Collab.), "Charged Hadron Transverse Momentum Distributions in Au+Au Collisions at √sNN = 200 GeV", Phys. Lett. B578, 297.
Back, B. B. et al. (PHOBOS Collab.), "Collision Geometry Scaling of Au+Au Pseudorapidity Density from √sNN = 19.6 to 200 GeV", Phys. Rev. C70, 021902(R).
Back, B. B. et al. (PHOBOS Collab.), "Pseudorapidity Dependence of Charged Hadron Transverse Momentum Spectra in d+Au Collisions at √sNN = 200 GeV", Phys. Rev. C70, 061901(R).
Back, B. B. et al. (E917 Collab.), "Production of φ Mesons in Au+Au Collisions at 11.7A GeV/c", Phys. Rev. C69, 054901.
Back, B. B. et al. (PHOBOS Collab.), "Particle Production at Very Low Transverse Momenta in Au+Au Collisions at √sNN = 200 GeV", Phys. Rev. Lett. 94, 051901(R).
Back, B. B. et al. (PHOBOS Collab.), "Pseudorapidity Distribution of Charged Particles in d+Au Collisions at √sNN = 200 GeV", Phys. Rev. Lett. 93, 082301.
CURRICULUM VITAE
GERRY BAUER
EDUCATION
University of Wisconsin-Madison
Madison, WI
Ph.D. (Physics) 1986
B.S. 1978
PROFESSIONAL EXPERIENCE
Massachusetts Institute of Technology
Cambridge, MA
Principal Research Scientist 1999 - present
Research Scientist 1991- 1999
Boston University
Boston, MA
Assistant Research Professor 1989 - 1990
Harvard University
Cambridge, MA
Research Associate 1985 - 1988
PUBLICATIONS
o "Experimental Observation of Isolated Large Transverse Energy Electrons with Associated Missing Energy at √s = 540 GeV", G. Arnison et al., Phys. Lett. 122B, 103 (1983).
o "Search for B0–B̄0 Oscillations at the CERN pp̄ Collider", C. Albajar et al., Phys. Lett. 186B, 247 (1987).
o "Observation of Top Quark Production in pp̄ Collisions", F. Abe et al., Phys. Rev. Lett. 74, 2626 (1995).
o "Measurement of the CP-Violation Parameter sin 2β in Bd0/B̄d0 → J/ψ K0s Decays", F. Abe et al., Phys. Rev. Lett. 81, 5513 (1998).
o "Observation of the Narrow State X(3872) → J/ψ π+π− in pp̄ Collisions at √s = 1.96 TeV", D. Acosta et al., submitted to Phys. Rev. Lett., hep-ex/0312021.
CURRICULUM VITAE
WIT BUSZA
EDUCATION
University College London
London, England
Ph.D. 1964
B.Sc. 1960
PROFESSIONAL EXPERIENCE
Massachusetts Institute of Technology
Cambridge, MA
Professor of Physics 1979 - Present
Associate Professor 1973 - 1979
Assistant Professor 1969 - 1973
Stanford Linear Accelerator Center
Stanford, CA
Research Associate 1966 - 1969
University College London
London, England
Research Associate 1963 - 1966
SELECTED PROFESSIONAL ACTIVITIES
o Institute of Nuclear Research, Warszawa, Poland: Visiting Scientist 1979
o CERN, Geneva, Switzerland: Visiting Scientist 1983
o Originator and/or spokesperson of the E178, E451 and E565 research programs at Fermilab (study of hadron-nucleus multiparticle production)
o Co-originator of the E665 research program at Fermilab (study of muon-nucleus multiparticle production)
o Originator and spokesperson of the Phobos research program at RHIC
o Manager of the warm iron calorimeter (WIC) of SLD at SLAC
o Chair/Member of various program advisory committees at Fermilab, SLAC and BNL
o National Institute for Nuclear Theory Advisory Committee 1994-1996
o NSF/DOE Nuclear Science Advisory Committee 1999-2002
o Chair of the Gordon Conference in Nuclear Physics, "QCD in Extreme Conditions: High Temperature, High Density and Small-x", Newport, RI, July 22-27, 2001
o Lawrence Berkeley National Laboratory Nuclear Science Division Directors Review Committee 2003-Present (Chair 2006)
HONORS
o Member, Polish Academy of Arts and Sciences
o Fellow, American Physical Society
o Francis L. Friedman Chair in Physics (MIT)
o Buechner Prize for Outstanding Contributions to Education in the Department of Physics (MIT)
o School of Science Prize for Excellence in Undergraduate Teaching (MIT)
o Appointed a Margaret MacVicar Faculty Fellow for Outstanding Contribution to Undergraduate Education (MIT)
PUBLICATIONS
o "An Experimental Study of Multiparticle Production in Hadron-Nucleus Interactions at High-Energy" (J. E. Elias et al.), Phys. Rev. D22, 13 (1980).
o "Experimental Study of the A-dependence of Inclusive Hadron Fragmentation" (D. Barton et al.), Phys. Rev. D27, 2580 (1983).
o "Nuclear Stopping Power" (with Alfred S. Goldhaber), Phys. Lett. 139B, 235 (1984).
o "Energy Deposition in High-Energy Proton-Nucleus Collisions" (with R. Ledoux), Ann. Rev. Nucl. Part. Sci. 38, 119 (1988).
o "Saturation of Shadowing at Very Low x_Bj" (M. R. Adams et al.), Phys. Rev. Lett. 68, 3266 (1992).
o "First Measurement of the Left-Right Cross-section Asymmetry in Z0 Boson Production by e+e− Collisions" (K. Abe et al.), Phys. Rev. Lett. 70, 2515 (1993).
o "Scaled Energy (z) Distributions of Charged Hadrons Observed in Deep Inelastic Muon Scattering at 490 GeV from Xenon and Deuterium Targets" (M. R. Adams et al.), Phys. Rev. D50, 1836 (1994).
o "Charged Particle Multiplicity near Mid-rapidity in Central Au+Au Collisions at √sNN = 56 and 130 GeV" (B. Back et al., Phobos Collaboration), Phys. Rev. Lett. 85, 3100 (2000).
o "Ratios of Charged Particles to Antiparticles near Mid-rapidity in Au+Au Collisions at √sNN = 130 GeV" (B. Back et al.), Phys. Rev. Lett. 87, 102301 (2001).
o "Pseudorapidity and Centrality Dependence of the Collective Flow of Charged Particles in Au+Au Collisions at √sNN = 130 GeV" (B. Back et al.), Phys. Rev. Lett. 89, 222301 (2002).
o "Collision Geometry Scaling of Au+Au Pseudorapidity Density from √sNN = 19.6 to 200 GeV", Phys. Rev. C70, 021902(R) (2004).
o "Particle Production at Very Low Transverse Momenta in Au+Au Collisions at √sNN = 200 GeV", Phys. Rev. Lett. 94, 051901(R) (2004).
o "Structure and Fine Structure in Multiparticle Production Data at High Energies", Acta Phys. Polon. B35, 2873 (2005).
o "The PHOBOS Perspective on Discoveries at RHIC", Nucl. Phys. A757, 28 (2005).
o "Energy dependence of directed flow over a wide range of pseudorapidity in Au+Au collisions at RHIC", Phys. Rev. Lett. 97, 012301 (2006).
o "Centrality and Energy Dependence of Charged-Particle Multiplicities in Heavy Ion Collisions in the Context of Elementary Reactions", Phys. Rev. C74, 021902(R) (2006).
CURRICULUM VITAE
BRUCE KNUTESON
EDUCATION
University of California at Berkeley
Berkeley, California
Ph.D. (Physics) 2000
Rice University
Houston, Texas
B.A. (Physics and Mathematics) 1997
summa cum laude
PROFESSIONAL EXPERIENCE
Massachusetts Institute of Technology
Cambridge, MA
Assistant Professor 2003 - present
University of Chicago
Chicago, IL
Fermi/McCormick Fellow 2002-2003
Fermi/McCormick Fellow 2001
CERN
Geneva, Switzerland
NSF International Fellow 2001
PUBLICATIONS
o "Statistical Challenges with Massive Data Sets in Particle Physics", B. Knuteson, P. Padley, Journal of Computational and Graphical Statistics, special Dec. issue 2003, hep-ex/0305064.
o "PDE: A New Multivariate Technique for Parameter Estimation", Computer Physics Communications 145 (3), 351-356 (2002), physics/0108002.
o "Search for New Physics Using QUAERO: A General Interface to D0 Event Data", D0 Collaboration, Phys. Rev. Lett. 87, 231801 (2001), hep-ex/0106039.
o "A Quasi-Model-Independent Search for New High pT Physics at D0", D0 Collaboration, Phys. Rev. Lett. 86, 3712 (2001), hep-ex/0011071.
o "A Quasi-Model-Independent Search for New Physics at High Transverse Momentum", D0 Collaboration, Phys. Rev. D64, 012004 (2001), hep-ex/0011067.
o "Search for New Physics in eμX Data at D0 Using SLEUTH: A Quasi-Model-Independent Search Strategy for New Physics", D0 Collaboration, Phys. Rev. D62, 092004 (2000), hep-ex/0006011.
CURRICULUM VITAE
STEVE NAHN
EDUCATION
Massachusetts Institute of Technology
Cambridge, Massachusetts
Ph.D. (Physics) 1998
University of Wisconsin
Madison, Wisconsin
B.S. (Physics and Mathematics) 1992, with honors
PROFESSIONAL EXPERIENCE
Massachusetts Institute of Technology
Cambridge, Massachusetts
Assistant Professor 2005 - present
Yale University
New Haven, Connecticut
Associate Research Scientist 1998 - 2005
PUBLICATIONS
o Measurement of the B0(s) - anti-B0(s) Oscillation Frequency, A. Abulencia et al. [CDF Run II Collaboration], Phys. Rev. Lett. 97, 062003 (2006)
o Measurement of the Lifetime Difference Between B(s) Mass Eigenstates, D. Acosta et al. [CDF Run II Collaboration], Phys. Rev. Lett. 94, 101803 (2005)
o Wire-Bond Failures Induced by Resonant Vibrations in the CDF Silicon Detector, G. Bolla et al., Nucl. Instrum. Meth. A518, 277-280 (2004)
o Status of the CDF Run II Silicon Detector, S. Nahn, Nucl. Instrum. Meth. A511, 20-23 (2003)
o Measurements of Mass, Width and Gauge Couplings of the W Boson at LEP, M. Acciarri et al. [L3 Collaboration], Phys. Lett. B413, 176-190 (1997)
o Measurement of W Pair Cross-Sections in e+e- Interactions at √s = 172 GeV and W Decay Branching Fractions, M. Acciarri et al. [L3 Collaboration], Phys. Lett. B407, 419-431 (1997)
o Pair Production of W Bosons in e+e- Interactions at √s = 161 GeV, M. Acciarri et al. [L3 Collaboration], Phys. Lett. B398, 223-238 (1997)
o The Forward Muon Detector of L3, A. Adam et al. [L3 F/B Muon Group], Nucl. Instrum. Meth. A383, 342-366 (1996)
CURRICULUM VITAE
CHRISTOPH MARIA ERNST PAUS
EDUCATION
Rheinisch Westfaelische Technische Hochschule Aachen
Aachen, Germany
Dissertation - Ph.D (Physics) 1996
Diplom - M.A. (Mechanical Engineering) 1992
Vordiplom - B.A. (Mathematics) 1990
Vordiplom - B.A. (Physics) 1990
Vordiplom - B.A (Mechanical Engineering) 1989
PROFESSIONAL EXPERIENCE
Massachusetts Institute of Technology
Cambridge, MA
Associate Professor 2004 - Present
Assistant Professor 1999 - 2004
CERN, European Organization for Nuclear Research
Geneva, Switzerland
CERN Fellow 1997-1998
PUBLICATIONS
o Measurement of Hadron and Lepton Pair Production at 130 GeV < √s < 140 GeV at LEP, M. Acciarri et al. [L3 Collaboration], Phys. Lett. B407, 361-376 (1997)
o Search for New Physics Phenomena in Fermion Pair Production at LEP, M. Acciarri et al. [L3 Collaboration], Phys. Lett. B433, 163-175 (1998)
o Measurements of Cross-Sections and Forward-Backward Asymmetries at the Z Resonance and Determination of Electroweak Parameters, M. Acciarri et al. [L3 Collaboration], Eur. Phys. J. C16, 1-40 (2000)
o Measurement of the Mass Difference m(Ds+) - m(D+) at CDF II, D. Acosta et al. [CDF Collaboration], Phys. Rev. D68, 072004 (2003)
o Observation of the Narrow State X(3872) → J/ψ π+π- in p̄p Collisions at √s = 1.96 TeV, D. Acosta et al., submitted to Phys. Rev. Lett.; hep-ex/0312021
CURRICULUM VITAE
LAWRENCE ROSENSON
EDUCATION
University of Chicago
Ph.D. (Physics) 1956
Chicago, IL
M.S. 1953
A.B. 1950
PROFESSIONAL EXPERIENCE
Massachusetts Institute of Technology
Professor of Physics, Emeritus 2001 - present
Cambridge, MA
Professor of Physics 1967 - 2001
Associate Professor 1964 – 1967
Assistant Professor 1961 – 1964
Instructor 1958 - 1961
Enrico Fermi Institute, University of Chicago
Chicago, IL
Research Associate 1956 - 1958
PUBLICATIONS
o Evidence for Top Quark Production in p̄p Collisions at √s = 1.8 TeV, F. Abe et al., Phys. Rev. Lett. 73, No. 2, July 11 (1994)
o J/ψ and ψ(2S) Production in p̄p Collisions at √s = 1.8 TeV, F. Abe et al., Phys. Rev. Lett. 79, 572 (1997)
o First Observation of the All-Hadronic Decay of tt̄ Pairs, F. Abe et al., Phys. Rev. Lett. 79, 1992 (1997)
o Measurement of the CP-Violation Parameter sin(2β) in B0d/B̄0d → J/ψ K0s Decays, F. Abe et al., Phys. Rev. Lett. 81, 5513 (1998)
o Measurement of the B0d - B̄0d Flavor Oscillation Frequency and Study of Same-Side Flavor Tagging of B Mesons in p̄p Collisions, F. Abe et al., Phys. Rev. D59, 032001 (1999)
o Measurement of J/ψ and ψ(2S) Polarization in p̄p Collisions at √s = 1.8 TeV, T. Affolder et al., Phys. Rev. Lett. 85, 2886 (2000)
CURRICULUM VITAE
KONSTANTY CHARLES SUMOROK
EDUCATION
University of Birmingham
Ph.D. (Physics) 1974
B.S. 1969
United Kingdom
PROFESSIONAL EXPERIENCE
Massachusetts Institute of Technology
Principal Research Scientist 1988 - present
Cambridge, MA
Harvard University
Research Associate 1986 – 1988
Cambridge, MA
CERN
Staff Member 1983 - 1985
Geneva, Switzerland
University of Birmingham
Science Research Council Advanced Fellow 1977 – 1982
United Kingdom
Rutherford Laboratory
Research Associate 1972 - 1977
United Kingdom
University of Birmingham
Research Student 1969 - 1972
United Kingdom
PUBLICATIONS
o Experimental Observation of Isolated Large Transverse Energy Electrons with Associated Missing Energy at √s = 540 GeV, G. Arnison et al., Phys. Lett. 122B (1983) 103
o Observation of Jets in High Transverse Energy Events at the CERN Proton-Antiproton Collider, G. Arnison et al., Phys. Lett. 123B (1983) 115
o Possibilities of Discovering a Heavy Top Quark in the Lepton-Multijet Channel, J. M. Benlloch, K. Sumorok, W. T. Giele, Nucl. Phys. B425 (1994) 3
o Measurement of J/ψ and ψ(2S) Polarization in p̄p Collisions at √s = 1.8 TeV, T. Affolder et al., Phys. Rev. Lett. 85, 2886 (2000)
o Observation of the Narrow State X(3872) → J/ψ π+π- in p̄p Collisions at √s = 1.96 TeV, D. Acosta et al., submitted to Phys. Rev. Lett.; hep-ex/0312021
CURRICULUM VITAE
BOLESLAW WYSLOUCH
EDUCATION
Massachusetts Institute of Technology
Ph.D. (High Energy Physics) 1987
Cambridge, MA
University of Warsaw
B.A. (Physics) 1981
Warsaw, Poland
PROFESSIONAL EXPERIENCE
Massachusetts Institute of Technology
Professor of Physics 2002-Present
Cambridge, MA
Associate Professor with tenure 1998-2002
Associate Professor without tenure 1997-1998
Assistant Professor 1991-1997
Postdoctoral Associate 1989-1990
Postdoctoral Associate 1987
CERN
Postdoctoral Associate 1988-1989
Geneva, Switzerland
SELECTED PROFESSIONAL ACTIVITIES
o CMS Heavy Ion program coordinator 2003 - present
o Project Manager, Phobos experiment 1993 - 2000
o Leader and Project Manager for Silicon Pad Multiplicity Detector in WA98 experiment at CERN 1994 - 1996
o Muon chamber test station manager in L3 experiment at CERN 1986 - 1989
o Member of the American Physical Society
PUBLICATIONS
o Heavy Ion Physics with the CMS Experiment at the LHC, B. Wyslouch [CMS Collaboration], prepared for the 31st International Conference on High Energy Physics (ICHEP 2002), Amsterdam, The Netherlands, 24-31 Jul 2002; published in *Amsterdam 2002, ICHEP* 59-61
o Charged Particle Multiplicity Near Mid-Rapidity in Central Au+Au Collisions at √s = 56 and 130 AGeV, B. Back et al. [Phobos Collaboration], Phys. Rev. Lett. 85, 3100 (2000)
o Search for Disoriented Chiral Condensates in 158 AGeV Pb+Pb Collisions, M. Aggarwal et al. [WA98 Collaboration], Phys. Lett. B420, 169-179 (1998)
o Silicon Pad Multiplicity Detector for the WA98 Experiment, W. T. Lin et al. [WA98 Experiment], Nucl. Instrum. Methods A389, 415-420 (1997)
o A High Resolution Muon Detector, B. Adeva et al., Nucl. Instrum. Methods A323, 109-124 (1992)
o Measurement of Γ(bb̄)/Γ(had) from Hadronic Decays of the Z, O. Adriani et al. [L3 Collaboration], Phys. Lett. B307, 237-246 (1993)
o Search for Anomalous Production of Single Photon Events in e+e- Annihilations at the Z Resonance, O. Adriani et al. [L3 Collaboration], Phys. Lett. B297, 469-476 (1992)
o High Mass Photon Pairs in l+l-γγ Events at LEP, [L3 Collaboration], Phys. Lett. B295, 337-346 (1992)
o Determination of the Number of Light Neutrino Species, [L3 Collaboration], Phys. Lett. B292, 463-471 (1992)
o A Measurement of B0-B̄0 Mixing in Z0 Decays, [L3 Collaboration], Phys. Lett. B252, 703 (1990)
o A Measurement of the Z0 → bb̄ Forward-Backward Asymmetry, [L3 Collaboration], Phys. Lett. B252, 713 (1990)
o Study of Hadron and Inclusive Muon Production from e+e- Annihilations at 39.79 < √s < 46.78 GeV, [Mark-J Collaboration], Phys. Rev. D34, 681-691 (1986)
o The L3 High-Resolution Muon Drift Chambers: Systematic Errors in Track Position Measurements, [L3 Collaboration], Nucl. Instr. Meth. A252, 304-310 (1986)