First Direct Measurement of FL using ISR
Events in Deep Inelastic Scattering
at HERA
Jonathan Paul Scott
University of Bristol
Department of Physics
October 2000
Thesis submitted to the University of Bristol
in accordance with the requirements of the degree of
Doctor of Philosophy in the Faculty of Science.
27,000 Words
Abstract
Collisions between positrons and protons with hard photon radiation from the initial
state positron have been used to measure the proton structure functions F2 and FL. Using
an integrated luminosity of 3.78 pb^-1 recorded in 1996 by the ZEUS detector, F2 has
been measured over the kinematic range 0.3 GeV2 < Q2 < 40 GeV2 and 8 × 10^-6 < x <
1.8 × 10^-1. These data cover the region between previous F2 measurements at HERA
and those from fixed target experiments.
Also, the structure function FL has been measured for the first time at Q2 = 5.5 GeV2
and x = 4.4 × 10^-4. Using data recorded by ZEUS during 1996 and 1997 with an
integrated luminosity of 35.9 pb^-1, the measured value of FL is
0.29 +0.24/-0.25 (stat) ± 1.52 (sys), which is found to be consistent with perturbative QCD
calculations.
To Mum and Dad
Acknowledgements
I would first like to thank my supervisor, Greg Heath, for providing much needed
direction and assistance. Thanks also to PPARC for providing the funding for the
research in this thesis.
For the Field Bus work, special mention goes to Steve Nash, without whom there would
have been no new circuit boards to play with. Thanks also go to Alex Mass and Dave
Newbold for introducing me to the world of digital design.
From my time at DESY, special mention goes to Adi Bornheim for passing on his
extensive knowledge of ISR and for doing much of the dirty work. Brian
Foster, Stefan Schlenstedt and especially Ken Long also provided suggestions about
where to go when things looked bleak.
Thanks also to Dave Bailey, Rod Walker, Alex Tapper, Chris Cormack, Jo Cole and
Mark Hayes for their words of wisdom on a wide range of subjects, although none of
them warned about the dangers posed by candles in pubs!
Finally, thank you to Nick Brook for reading my thesis and providing lots of helpful
comments, as well as lots of less helpful comments on subjects ranging from football
to flying!
“Credit must be given to observation rather than theories,
and to theories only insofar as they are confirmed by the
observed facts” – Aristotle.
“Facts are meaningless. Facts can be used to prove
anything that’s even remotely true.” – Homer Simpson.
AUTHOR’S DECLARATION
I declare that the work in this dissertation was carried out in accordance with the
regulations of the University of Bristol. The work is original except where indicated by
special reference in the text and no part of the dissertation has been submitted for any
other degree.
Any views expressed in the dissertation are those of the author and in no way represent
those of the University of Bristol.
The dissertation has not been presented to any other University for examination either
in the United Kingdom or overseas.
SIGNED:
DATE:
Contents

Contents
List of Figures
List of Tables
1. Introduction
2. Theoretical Overview
2.1 Introduction
2.2 Quantum Electrodynamics
2.3 Elastic e+p → e+p scattering
2.4 Inelastic e+p → e+X scattering
2.5 Quantum Chromodynamics
2.6 Parton Evolution
2.7 DGLAP Evolution
2.8 The Longitudinal Structure function, FL
3. The Large Hadron Collider
3.1 Introduction
3.2 The Compact Muon Solenoid Experiment
3.3 The Calorimeter
3.4 The Trigger
3.4.1 The Calorimeter Trigger
3.4.2 The Sort ASIC
3.5 The Test system
3.5.1 The Protocol
3.6 The Field Bus Node
3.6.1 The XILINX Field Programmable Gate Array
3.6.2 The Interface
3.6.3 The Node Decoder
3.7 Simulation of the Design
4. HERA and the ZEUS detector
4.1 HERA
4.2 The ZEUS Detector
4.2.1 The ZEUS Coordinate System
4.2.2 The Central Tracking Detector
4.2.3 The Small Angle Rear Tracking Detector
4.2.4 Magnets
4.2.5 The Calorimeter
4.2.6 The Presampler
4.2.7 HES
4.2.8 The Muon Chambers
4.2.9 Beam Pipe Calorimeter and Beam Pipe Tracker
4.2.10 The VETO Wall
4.2.11 The Collimator, C5
4.2.12 The Luminosity Detector
4.3 Trigger and Data Acquisition
4.3.1 The First Level Trigger
4.3.2 The Second Level Trigger
4.3.3 The Third Level Trigger
4.3.4 Reconstruction
4.3.5 DIS Event Selection
4.4 Measurement of kinematic variables
4.4.1 Event Vertex
4.4.2 Positron finding
4.4.3 Positron energy corrections
4.4.4 Positron position measurement
4.4.5 Reconstruction of the Hadronic Final State
4.5 Kinematic Reconstruction
4.5.1 Electron Method
4.5.2 Double Angle Method
4.5.3 Jacquet-Blondel Method
4.5.4 Sigma Method
5. Monte Carlo
5.1 QED Compton Monte Carlo
5.2 ISR Monte Carlo
5.3 Detector Simulation
5.3.1 Simulation of the LUMI-γ energy response
6. QED Comptons
6.1 Introduction
6.2 Event Selection
6.2.1 QED Compton Trigger
6.2.3 Cuts
6.3 Measuring the Inelastic Contribution
6.4 Luminosity Measurements
6.4.1 Calculation of the 1996 and 1997 Luminosity
6.4.2 Systematic Errors
6.5 Conclusion
7. Measuring the Proton Structure Function, F2
7.1 Introduction
7.2 Corrections to the Kinematics due to ISR
7.3 Event Selection
7.3.1 The Trigger for radiative events
7.3.2 The FLT
7.3.3 The TLT
7.3.4 Cuts
7.4 Background
7.4.1 Background normalisation
7.4.2 QED Compton Rejection
7.4.3 Cosmic Ray Rejection
7.4.4 Positron and proton beam-gas background
7.5 Corrections
7.5.1 Acceptance of the LUMI-γ calorimeter
7.5.2 Vertex weighting
7.5.3 Beam pipe correction
7.5.4 Structure Function weighting
7.6 Data and Monte Carlo Distributions
7.7 Measuring F2
7.7.1 Resolution of Q2 and y
7.7.2 Bin Selection
7.7.3 Unfolding
7.7.4 Systematic Errors
7.8 Comparison with 96/97 F2 measurement
8. Measuring the Proton Structure function, FL
8.1 Introduction
8.2 Event Selection
8.2.1 Trigger
8.2.2 Cuts
8.2.3 Energy scale of the LUMI-γ calorimeter
8.2.4 The FL Bin
8.3 Measuring FL
8.3.1 y scaling factor
8.3.2 Extraction of R
8.3.3 Systematic Errors
8.4 Results
9. Conclusions
F2 Measurement
Bibliography
List of Figures

2.1 Feynman diagram illustrating the process e+μ− → e+μ−
2.2 The Feynman diagram for elastic e+p scattering
2.3 The Feynman diagram for inelastic e+p scattering
2.4 F2 measured at ZEUS as a function of Q2 in bins of x. The rise with Q2 at small x and the fall at high x is clearly visible. Results from fixed target experiments are shown for comparison
2.5 Ladder diagram showing several gluons being radiated from the parton that interacts with the proton
2.6 ε and (1-ε)FL plotted as functions of y. The effect of non-zero R on the cross section is only significant at high values of y
2.7 The emission of an ISR photon in a DIS event. The hadronic final state is denoted by X
2.8 The longitudinal structure function measured by H1 as a function of x in bins of Q2 along with charged lepton-nucleon fixed target experiments. The error bands are due to experimental (inner) and model (outer) uncertainty for the calculation of FL using a NLO QCD fit to the H1 data
3.1 The LHC collider showing the relative locations of the four experiments, CMS, ALICE, ATLAS & LHC-B
3.2 Section of the CMS experiment showing the relative positions of the major components
3.3 The CMS trigger and DAQ
3.4 The positions of the trigger towers. Each trigger tower is served by its own trigger processor crate
3.5 Schematic of the sort algorithm
3.6 The layout of the Sort ASIC Field Bus test system
3.7 The Field-Bus Protocol
3.8 The overall node design
3.9 The Node Decoder
3.10 The Input/Output module
3.11 Simulation of Node Decoder through a Get Node Name command followed by a write and read data cycle
3.12 The prototype interface board. The FPGA is shown in the centre with the socket for connection to the PC shown at top. The TTL output sockets for the field bus are to the right. Other resources include 4 seven segment displays and 11 LEDs as well as global and local reset buttons
3.13 The prototype memory module. Field bus connections are at bottom right with the FPGA upper centre. Provision for 4 DPR is made although only one is in place here. This board is designed to be placed in a VMEbus crate. The resources also include 4 multifunction displays
4.1 The HERA layout, showing the locations of the ZEUS and H1 experiments along with the preaccelerators, H Linac, e Linac, DESY and PETRA
4.2 The integrated luminosity delivered by HERA for the years 1992-1999
4.3 The kinematic range explored at HERA on the x-Q2 plane. Shown are results from the standard low and high Q2 analyses together with results from shifted vertex running and very low Q2 data obtained using the BPC and BPT. This is shown in comparison with results from fixed target experiments, including NMC, BCDMS, CCFR, E665 and SLAC
4.4 The ZEUS detector. The positions of the major components are indicated. In this representation, the protons enter from the right and the positrons from the left
4.5 The ZEUS co-ordinate system
4.6 The arrangement of wires in the CTD
4.7 The arrangement of the two layers of scintillator strips in the SRTD
4.8 The area covered by the Forward and Rear Presamplers (shaded) superimposed on the FCAL and RCAL cells
4.9 Overview of the ZEUS LUMI detector. The locations of the four main components along with the magnets upstream of the main detector are shown
4.10 The LUMI calorimeter
4.11 The ZEUS trigger layout showing the flow of data from the detector front-end electronics, through the triggers, to final storage onto either tape or disk
4.12 A reconstructed neutral current event
5.1 The two leading order contributions to QED Compton scattering
5.2 The difference between the measured and true photon energies for 1996 and 1997
6.1 The two leading order contributions to QED Compton scattering
6.2 Elastic and inelastic QED Compton MC distributions for the 1996 data set (upper plots) and 1997 data set (lower plots)
6.3 Data v Elastic + Inelastic MC comparison for 1996
6.4 Data v Elastic + Inelastic MC comparison for 1997
6.5 Effect of systematic checks on the measured 1996 luminosity. The fractional error on each systematic check is shown with the dashed line indicating no systematic check
6.6 Effect of systematic checks on the measured 1997 luminosity. The fractional error on each systematic check is shown with the dashed line indicating no systematic check
7.1 The raw LUMI-e and LUMI-γ energies for 1996 (left) and 1997 (right). The structure in the plots arises due to the acceptance of the LUMI-e and LUMI-γ detectors
7.2 Total E-Pz distribution for data with the normalised background contribution. The background is normalised to the data in the hatched region where the total E-Pz > 62 GeV
7.3 The aperture of the LUMI-γ calorimeter
7.4 The x and y beam tilts for 1996, top, and 1997, bottom, plotted against relative run number
7.5 Acceptance of the LUMI-γ calorimeter for different x and y beam tilt positions. The tilt ranges from -0.25 mrad to 0.06 mrad in the x direction and -0.1 mrad to 0.1 mrad in the y direction
7.6 Comparison of data and MC + BGD for the electron energy and theta, which are used for the kinematic reconstruction, as well as the photon energy and z vertex position
7.7 Comparison of data and MC for hadronic-based quantities along with E-Pz distributions. The total E-Pz plot indicates the cut on the signal region (two vertical lines) and the area where the background is normalised to the data (hatched area)
7.8 Comparison of data and MC + BGD for the Q2 and yHERA distributions, used for the measurement of F2
7.9 Resolution of Q2 measured using the electron method and y measured using the sigma method
7.10 The acceptance and purity for each bin. The shading of the bins indicates the purity whereas the value for the acceptance is given in each bin as a percentage
7.11 The bins used for the F2 measurement together with previous results from ZEUS and fixed target experiments. The number assigned to each bin is also indicated
7.12 The fractional error for each systematic check shown as a function of bin number
7.13 Data v MC comparison for the 96/97 analysis. The relative contributions of the diffractive and photoproduction Monte Carlo are indicated by the green and blue histograms
7.14 Comparison of F2 measured with this analysis and preliminary 96/97 results
7.15 F2 plotted from the ISR analysis (circles) with the comparison from the 96/97 analysis (triangles). Error bars for the ISR results show both statistical (inner bars) and systematic errors added in quadrature. The 96/97 analysis shows statistical errors only. Also shown are the results from the ZEUS BPC/BPT and shifted vertex (SVX) analyses
8.1 The effect of weighting the y spectrum to three different values of R. As R increases the curve of the ratio becomes steeper
8.2 Kinematic peak study for 1996
8.3 Kinematic peak study for 1997
8.4 The maximum values of y obtainable for different minimum values of the scattered positron energy
8.5 The effect of the upper and lower cut of yHERA on the accessible region of y
8.6 The bin used for the FL measurement shown on the x, Q2 plane with the F2 bins superimposed for comparison
8.7 Data v MC + BGD comparison for the 1996 FL measurement. The y distribution, used for the fit, is shown at the bottom
8.8 Data v MC + BGD comparison for the 1997 FL measurement. The y distribution, used for the fit, is shown at the bottom
8.9 The migration and resolution of y in the range used for the extraction of R
8.10 The effect caused by fitting over different ranges of y (left). The plot shows the MC distribution for MC weighted to R=1.4. Two fits have been made, the black curve covering the range up to y = 0.6 and the red curve up to y = 0.45 (corresponding approximately to the accessible range in this analysis). As can be seen, the shapes of the curves are different, giving different fitted values of R. The plot to the right shows the fit made with Sy set to 1.149. This fit yields the weighted value of R=1.4
8.11 Fit of ratio of data and MC y distributions for combined 1996 and 1997 data
8.12 The effect of the systematic checks on the value of R. Shown for each test is the fitted value of R together with the error on the fit. The dashed lines indicate the nominal value together with its error
8.13 The fractional effect of the systematic checks on the value of F2
8.14 The measured value of FL plotted at Q2 = 5.5 GeV2 and x = 4.4 × 10^-4
8.15 FL measured by H1 over a similar range in x to the ISR measurement. The yellow band shows the expectation from pQCD with the dashed lines giving the limits of FL = 0 and FL = F2. The points are given for different values of Q2 with the FL = F2 also covering a range in Q2 explaining the fall with decreasing x
8.16 The hadronic E-Pz distributions using positron energy cut > 5 GeV (left) and > 8 GeV (right). As can be seen there is a deficit of Monte Carlo between 5 and 15 GeV for the lower energy cut. This may be due to more background or problems with the reconstruction
List of Tables

2.1 The known quarks and leptons
2.2 The properties of the gauge bosons
3.1 Addressing the field bus nodes
5.1 The LUMI-γ calibration constants for 1996 and 1997
6.1 The kinematic variables used for the COMPTON2.0 program
7.1 Summary of the properties of the FLT trigger bit FLT30, giving the values of the cuts used
7.2 Comparison between data, DIS01, and background, DIS02, triggers used for the ISR F2 measurement
8.1 Comparison between the triggers used for the ISR analyses. DIS01 and DIS02 are used for the F2 measurement while DIS10 and DIS02 are used for the FL measurement
Chapter 1
Introduction
The concept of matter being composed of building blocks called ‘atoms’ was first
introduced by the ancient Greek scholars Leucippus and Democritus in the 5th century
BC. Based on philosophical considerations, they suggested that matter is composed of
small, indivisible particles moving at random and colliding with each other. It was
however over 2000 years before experiments indicated the validity of this argument
with John Dalton using empirical chemical laws to develop atomic theory. As physics
progressed at the turn of the 19th and 20th centuries it became increasingly evident that
these building blocks of nature were themselves complex objects.
J.J. Thomson discovered the electron in 1897 and Rutherford demonstrated in 1911, by
firing alpha particles at gold, that the atom is mostly empty space
with a tiny, dense, positively charged nucleus surrounded by electrons. The nucleus was
found to consist of a number of positively charged particles known as protons.
Chadwick, in 1932, demonstrated the existence of a neutral partner to the proton called
the neutron, therefore explaining the stability of higher mass elements. Together the
proton and neutron are called nucleons.
The model of atoms consisting of a nucleus of protons and neutrons surrounded by
electrons so small they can be considered point-like was further elaborated in the late
1960s by experiments at the Stanford Linear Accelerator Centre (SLAC). These
demonstrated that nucleons themselves contain point-like particles, known as partons.
Today, Deep Inelastic Scattering (DIS) experiments reveal ever-finer structure of
nucleons. The nucleon is now known to be a complex object containing many
interacting partons called quarks and gluons.
The HERA accelerator at DESY in Hamburg, Germany, hosts such DIS experiments. It
collides a high-energy beam of protons with either an electron or a positron beam, with
several detectors recording the results of these collisions. Amongst other things, this
allows the accurate measurement of the density of partons within the proton. These
measurements have served to drastically increase our understanding of Quantum
Chromodynamics (QCD), the theory of the strong interaction.
In collisions at HERA, the electron can sometimes radiate a photon before
interacting with the proton. This is known as an Initial State Radiative event and is
similar to the bremsstrahlung process. Such events provide a way to extend the
accessible kinematic region at HERA to areas hitherto unexplored. Not only do such
events allow us to bridge the gap between previous measurements, they also allow us
to make a direct measurement of the structure function known as FL for the first time
at HERA.
The thesis begins by outlining the theoretical basis of the analysis, starting in chapter 2
with an introduction to Deep Inelastic Scattering, one of the important processes at
HERA. This is followed by a description of structure functions and their importance in
the development of QCD. Finally, Initial State Radiative events and their use in
structure function measurements are discussed.
The task of filtering out the useful physics events, known as triggering, is becoming
ever more challenging due to the high rates and large amounts of data found in modern
particle physics detectors. For the ZEUS detector at HERA, high-speed electronics and farms of computer processors perform the triggering. Future
experiments however, will require solutions that are far more sophisticated. For
example, the Large Hadron Collider (LHC) is a proton-proton collider which, when
completed, will be used to probe physics at extremely high energies. The type of
collision and high energies used will generate conditions that will prove particularly
challenging for triggering. One of the detectors being developed for the LHC will be the
Compact Muon Solenoid (CMS). Chapter 3 describes the development of a system to
test one of the high-speed digital electronic components intended for the CMS trigger.
Chapter 4 describes the ZEUS experiment and its major components along with how the
data from these components can be turned into useful variables for physics analyses.
Chapter 5 presents details of the Monte Carlo simulations used in this thesis. These
simulations are important tools in modern particle physics and three Monte Carlo
generators are described.
Chapter 6 introduces QED Compton events. These are radiative events found in a
slightly different kinematic regime to Initial State Radiative events and can also be a
source of background for the measurement of the latter. As the cross section can be
calculated precisely and the event topology is relatively clean, QED Comptons can also
be used as a crosscheck for the measurement of luminosity, which is an indication of the
number of particle interactions.
Chapters 7 and 8 present the use of Initial State Radiative events to measure the proton
structure functions F2 and FL. The process used to select such events is described in
chapter 7 in making the measurement of F2. This is extended in chapter 8 for the more
difficult measurement of FL.
Chapter 2
Theoretical Overview
2.1 Introduction
One of the great triumphs of modern physics is the Standard Model of particle physics.
This is one of the most successful models in science and describes with great accuracy
all current measurements in particle physics. The Standard Model is not without its
problems, though: there are many parameters within the model that are not predicted
theoretically and can therefore only be introduced ‘by hand’. Also, many other
questions remain unanswered, so eventually a new framework for the description of
elementary particles will be required.
The standard model of particle physics describes nature in terms of elementary
particles, which are acted upon by four forces: electromagnetic, strong, weak and
gravitational.
There are three distinct types of particle: quarks, leptons and gauge bosons. The
electron is the most familiar example of a lepton. Quarks on the other hand, bind
together to form hadrons. An example is the proton, which contains three quarks.
Each quark and lepton is also accompanied by its antiparticle, which carries equal but
opposite signed charge, while its mass remains the same.
Quarks and leptons exist in three generations. Particles across the generations have
similar properties but different masses, the first generation of leptons for example,
consist of the electron and electron neutrino, together with their antiparticles, the
positron and electron antineutrino. Similarly, there are two types, or flavours, of quarks
in each generation. In the first generation, these flavours are named up and down with
charm/strange and top/bottom forming the 2nd and 3rd generations respectively.
The elementary particles that are known to exist are shown in Table 2.1:
                    1st Generation              2nd Generation            3rd Generation
Leptons (mass)      e (electron)                μ (muon)                  τ (tau)
                    (0.51 MeV)                  (105.6 MeV)               (1784 MeV)
                    νe (electron neutrino)      νμ (muon neutrino)        ντ (tau neutrino)
                    (< 3 eV)                    (< 0.19 MeV)              (< 18.2 MeV)
Quarks (mass)       u (up)                      c (charm)                 t (top)
                    (8 MeV)                     (1.3 GeV)                 (175 GeV)
                    d (down)                    s (strange)               b (bottom)
                    (15 MeV)                    (200 MeV)                 (4.8 GeV)
Table 2.1: The known quarks and leptons.
While leptons can exist freely, quarks can only exist in bound states. This is explained
by introducing an extra property of quarks, called colour charge. Each quark carries one
of three ‘colours’, red, green or blue, and each antiquark the corresponding anticolour.
A stable particle must have a total colour charge of zero. Therefore, the only allowed
combinations are quark-antiquark states with, say, red-antired (mesons) or three-quark
states with, for example, red-green-blue.
The third class of elementary particles are known as gauge bosons. These mediate the
four forces. The electromagnetic interaction is mediated by the photon (γ) and is
described by the theory of quantum electrodynamics (QED) while the weak force is
carried by the charged W and neutral Z bosons. Electroweak theory is described in
terms of 4 bosons, one positively charged, one negatively charged and two neutral. At
high energies the 4 bosons are similar. However, at lower energies, symmetry breaking
gives masses to 3 of the bosons, which are identified as the W and Z while the other
boson is the photon. This is described by the GWS [1] theory of electroweak
interactions. The strong force, on the other hand, carries colour charge and is described
by quantum chromodynamics (QCD). The mediating particles in this case are an octet
of massless gluons.
It is postulated that gravity is mediated by a spin-2 boson known as a graviton. As
gravity is very much weaker than the other forces, it is neglected in typical particle
physics experiments.
The properties of the gauge bosons are shown in Table 2.2.
Gauge Boson      Mass (GeV)    Charge (e)    Spin
photon, γ        0             0             1
gluons, g        0             0             1
weak, W±         80            ±1            1
weak, Z          91            0             1
graviton         0             0             2
Table 2.2: The properties of the gauge bosons.
2.2 Quantum Electrodynamics
Quantum field theory (QFT) attempts to explain the dynamics of elementary particles
in a manner consistent with the postulates of quantum mechanics and the special theory
of relativity.
Quantum electrodynamics is formulated by quantising the electromagnetic field.
Feynman noted that each component of an interaction, for example an incoming
particle, contributes a rule to the calculation of that interaction. By drawing a QED
process as a diagram, the cross section can be calculated by applying the rules to every
component. The cross section, σ, for a process is a measure of the probability that that
particular process will occur. It is defined as an area, with the naïve interpretation being
the particles must pass through this area before the process can take place.
A simple example of a cross section calculation is e+μ− scattering, as shown below in
figure 2.1.
Figure 2.1: Feynman diagram illustrating the process e+μ− → e+μ−.
In this case Pa, Pb, Pc and Pd are defined as the four-vectors corresponding to the
incoming and outgoing positron and muon and q is the four-vector of the photon.
The 2 → 2 differential cross section in the centre of mass frame is, for a
particle passing through the solid angle dΩ,

\frac{d\sigma}{d\Omega}\bigg|_{cm} = \frac{|M_{fi}|^2}{64\pi^2 s}                    (2.1)
where Mfi is a matrix element found by applying the Feynman rules to the diagram and
s is the square of the centre of mass energy. For the process shown in figure 2.1, |M fi|2
can be written, in the high energy limit, in terms of the Mandelstam variables:
\left|M_{fi}\right|^2 = 2e^4\left(\frac{s^2 + u^2}{t^2}\right)                    (2.2)
where the Mandelstam variables are defined as
s = (P_a + P_c)^2 = (P_b + P_d)^2
t = (P_a - P_b)^2 = (P_c - P_d)^2
u = (P_a - P_d)^2 = (P_b - P_c)^2                    (2.3)
and e is the electron charge.
Substituting (2.2) into (2.1) gives

\frac{d\sigma}{d\Omega}\bigg|_{cm} = \frac{e^4}{32\pi^2 s}\,\frac{s^2 + u^2}{t^2} = \frac{\alpha^2}{2s}\,\frac{s^2 + u^2}{t^2}                    (2.4)

where \alpha = e^2/4\pi is the electromagnetic coupling constant.
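As a concrete check of equation (2.4), the short Python sketch below (not part of the original text) evaluates the differential cross section numerically, assuming massless particles so that t = -s(1 - cos θ)/2 and u = -s(1 + cos θ)/2 in the centre of mass frame; the value of α and the GeV^-2 to millibarn conversion factor are standard constants rather than quantities taken from the thesis.

import math

ALPHA = 1.0 / 137.036  # fine-structure constant

def dsigma_domega(s, cos_theta):
    """LO e+mu- -> e+mu- differential cross section, eq. (2.4),
    in natural units (GeV^-2 per steradian), massless limit."""
    # Mandelstam t and u for massless 2 -> 2 scattering in the CM frame
    t = -0.5 * s * (1.0 - cos_theta)
    u = -0.5 * s * (1.0 + cos_theta)
    return ALPHA**2 / (2.0 * s) * (s**2 + u**2) / t**2

# Example: sqrt(s) = 10 GeV, scattering angle of 90 degrees
s = 10.0**2
xsec = dsigma_domega(s, math.cos(math.radians(90.0)))
GEV2_TO_MB = 0.3894  # 1 GeV^-2 = 0.3894 mb
print(f"dsigma/dOmega = {xsec:.3e} GeV^-2 = {xsec * GEV2_TO_MB:.3e} mb/sr")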
The next two sections deal with the scattering of a positron off a proton. At HERA
electrons are also scattered off protons; the process is, however, similar to the positron
case.
2.3 Elastic e+p → e+p scattering
In elastic e+p scattering the proton remains intact, while for the inelastic case, the
collision is energetic enough to fragment the proton.
The diagram for e+p scattering is shown in figure 2.2. The four vectors of the particles
are indicated in brackets as well as q, the four vector of the virtual photon.
Figure 2.2: The Feynman diagram for elastic e+p scattering.
By considering the proton to be a point-like particle with mass M, the cross section for
elastic e+p scattering can be obtained using the same method as the positron-muon case.
This leads to the Mott Scattering cross section formula:
\frac{d\sigma}{d\Omega} = \frac{\alpha^2}{4E^2\sin^4(\theta/2)}\,\frac{E'}{E}\left(\cos^2\frac{\theta}{2} - \frac{q^2}{2M^2}\sin^2\frac{\theta}{2}\right)                    (2.5)

where the factor E'/E is given by

\frac{E'}{E} = \frac{1}{1 + \frac{2E}{M}\sin^2(\theta/2)}                    (2.6)
and arises from the recoil of the proton.
Next, in order to take into account the fact that the proton is not a point-like object, two
form factors, G1 and G2 must be introduced.
\frac{d\sigma}{d\Omega} = \frac{\alpha^2}{4E^2\sin^4(\theta/2)}\,\frac{E'}{E}\left(G_1(q^2)\cos^2\frac{\theta}{2} - \frac{q^2}{2M^2}\,G_2(q^2)\sin^2\frac{\theta}{2}\right)                    (2.7)
G1 and G2 are related to the electric form factor, GE, associated with the charge density
and the magnetic form factor, GM, associated with the magnetic moment distribution by:
G_1(q^2) = \frac{G_E^2 + \tau G_M^2}{1+\tau}                    (2.8)

G_2(q^2) = G_M^2                    (2.9)

where \tau = -q^2/2M^2.
GE and GM are normalised such that GE(0) = 1 and GM(0) = μp, where μp is the proton
magnetic moment.
The values of GE and GM have been measured in electron and muon scattering
experiments.
2.4 Inelastic e+p → e+X scattering
The generalised process for inelastic positron-proton scattering is shown in Figure 2.3.
In this case, the proton fragments into many particles, X, with total invariant mass W.
The four-vectors are the same as before, with the exception that the four-vector of the
outgoing proton is replaced by W.
Figure 2.3: The Feynman diagram for inelastic e+p scattering.
Inelastic e+p scattering events are interesting as they can be used as a probe of the
internal structure of the proton. This is the case if the momentum transfer, q,
corresponds to photon wavelengths that are very small with respect to the size of the
proton.
In order to describe the event, a basic set of variables can be defined. These so called
kinematic variables are used in many of the arguments given in this thesis and are
defined below:
The centre of mass energy squared, s, of the event is given by
s = (P + k)^2 \approx 4E_e E_p                    (2.10)

where Ee and Ep are the energies of the incident electron and proton
respectively and the approximation is for the case where the masses are small
compared to the energies.
The negative four-momentum transfer is defined as

Q^2 = -q^2 = -(k - k')^2                    (2.11)
This ranges from 0 to s. As Q2 increases, the size that can be resolved by the photon is
reduced and the structure of the proton is probed at ever-smaller scales.
It is also convenient to introduce two dimensionless quantities, x and y, whose values
range from 0 to 1. The Bjorken scaling variable, x, is defined as
x = \frac{Q^2}{2P\cdot q}                    (2.12)

and y is defined by

y = \frac{P\cdot q}{P\cdot k}                    (2.13)

which is a measure of the energy transferred from the electron (or to the proton) in the
rest frame of the initial proton.
Again, assuming masses are small, the variables Q2, x and y are related by
Q^2 = sxy                    (2.14)
Of these, for fixed s, only two are independent.
Finally, the invariant mass of the final hadronic system (γ*p), where γ* is the exchanged
photon, is

W^2 = (P + q)^2 = \left(\frac{1-x}{x}\right)Q^2 + m_p^2                    (2.15)

where mp is the mass of the proton.
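The kinematic variables defined above can be illustrated with a small Python sketch (not from the thesis). It uses the standard electron-method relations for Q2 and y, which are only introduced later, in chapter 4, and assumes HERA-like beam energies purely for the numerical example; the polar angle is measured from the proton direction.

import math

M_P = 0.938  # proton mass in GeV (standard value, not quoted in this chapter)

def dis_kinematics(E_e, E_p, E_eprime, theta_eprime):
    """Reconstruct the DIS variables of section 2.4 from the scattered
    positron energy and polar angle (electron method), neglecting lepton masses.
    theta_eprime is measured with respect to the proton beam direction."""
    s = 4.0 * E_e * E_p                                          # eq. (2.10)
    Q2 = 2.0 * E_e * E_eprime * (1.0 + math.cos(theta_eprime))   # -(k - k')^2
    y = 1.0 - (E_eprime / (2.0 * E_e)) * (1.0 - math.cos(theta_eprime))
    x = Q2 / (s * y)                                             # from eq. (2.14)
    W2 = (1.0 - x) / x * Q2 + M_P**2                             # eq. (2.15)
    return s, Q2, x, y, W2

# Illustration with assumed 27.5 GeV positrons on 820 GeV protons and a
# typical scattered positron of 26 GeV at 170 degrees
print(dis_kinematics(27.5, 820.0, 26.0, math.radians(170.0)))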
The double differential cross-section for positron-proton scattering at high energy,
mediated by neutral current is [2]:
\frac{d^2\sigma}{dx\,dQ^2} = \frac{2\pi\alpha^2}{Q^4 x}\left[2xy^2F_1 + 2(1-y)F_2 - \left(1-(1-y)^2\right)xF_3\right]                    (2.16)
In a similar argument to section 2.3, the factors F1 and F2 are introduced as
parameterisations of the structure of the proton and a new factor, F3, is also included.
The form factors F1 and F2 are related to the cross sections for longitudinally (σL) and
transversely (σT) polarised photons by

F_1 = \frac{Q^2}{8\pi^2\alpha x}\,\sigma_T                    (2.17)

F_2 = \frac{Q^2}{4\pi^2\alpha}\,(\sigma_T + \sigma_L)                    (2.18)

where the total cross section is

\sigma_{tot}^{\gamma^* p} = \sigma_L + \sigma_T                    (2.19)

xF3 is a parity violating term arising from Z0 exchange. For Q2 << M_Z^2 this is
negligible and the cross section depends purely on γ* exchange. In the following, xF3
will be neglected in all calculations.
In the late 1960s experiments at SLAC [3] showed that for fixed x, the measured values
of F1 and F2 become approximately independent of Q2 at high Q2 and are only
dependent on x, i.e.
F_{1,2}(x, Q^2) \approx F_{1,2}(x)                    (2.20)
x is interpreted as the fractional momentum of the proton carried by the struck
constituent so, in other words, the scattering has become point-like. This is known as
Bjorken scaling [4]. The quark parton model (QPM) was introduced by Feynman [5] to
explain this.
The quark parton model introduces into the nucleon point-like, non-interacting
scattering centres known as partons. Scaling is caused by photons scattering off a fixed
number of point-like particles within the proton.
F2 is now defined as the sum of the momentum distributions xf_i(x) over all flavours:

F_2(x) = \sum_i e_i^2\, x f_i(x)                    (2.21)
where ei is the electric charge of the parton.
F1 depends on the spin of the partons. For spin-½ particles, the Callan-Gross [6] relation
states that:
2xF_1(x, Q^2) = F_2(x, Q^2)                    (2.22)

which is confirmed by experiment. From equations (2.17) and (2.18) this implies that
σL = 0.
Experimentally, the proton was found to contain 3 quarks, two up and one down, which
are known as the valence quarks.
By summing over the momenta of all the partons in the proton the momentum sum rule
states that

\sum_i \int_0^1 dx\; x f_i(x) = 1                    (2.23)
In the QPM, this would be the case if all the momentum in the proton were carried by
the valence quarks. Experimental evidence however, points to a value of ~0.5 and
implies that electrically neutral partons carry the remaining momentum. Such partons
were discovered at DESY in 1979 [7]. These are called gluons and their presence acts
to modify the quark-parton model in a way described below.
2.5 Quantum Chromodynamics
Quantum chromodynamics (QCD) is a gauge field theory, based on SU(3) symmetry,
describing the strong interaction. As stated earlier, quarks carry a colour ‘charge’. QCD
introduces gluons as the mediators of the strong interaction by transmitting the colour
between quarks.
In QED, as Q2 increases, the electromagnetic coupling constant, α, also increases. This
is a result of the probing particle ‘seeing’ less of the screening charge caused by the
presence of electron/positron pairs. QCD is different to QED however, in that the
gluons carry colour charge themselves and can interact with each other as well as with
the quarks, in contrast with photons, which carry no electric charge. This leads to a
different behaviour for the strong coupling constant, αs, which is given to leading order
in equation 2.24.

\alpha_s(Q^2) = \frac{4\pi}{\left(11 - \frac{2}{3}n_f\right)\ln\left(Q^2/\Lambda^2\right)}                    (2.24)

where nf is the number of quark flavours and Λ is a constant of integration. This
constant represents the energy scale at which the coupling constant becomes large. This
scale parameter is often quoted at values of ~200 MeV.
The coupling αs(Q2) falls logarithmically towards 0 as Q2 → ∞. This behaviour is
known as asymptotic freedom.
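A minimal numerical sketch of equation (2.24) is given below (an illustration, not taken from the thesis); the values chosen for nf and Λ are typical assumptions rather than fitted numbers.

import math

def alpha_s(Q2, n_f=4, lambda_qcd=0.2):
    """Leading-order running strong coupling, eq. (2.24).
    lambda_qcd is the scale Lambda in GeV, taken here as 0.2 GeV, the
    ballpark value quoted in the text; n_f is the number of active flavours."""
    return 4.0 * math.pi / ((11.0 - 2.0 * n_f / 3.0) * math.log(Q2 / lambda_qcd**2))

for Q in (2.0, 10.0, 91.2):  # GeV
    print(f"alpha_s(Q = {Q:5.1f} GeV) = {alpha_s(Q**2):.3f}")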
Asymptotic freedom is explained by the presence of higher order loops containing only
gluons. The quark loops, similar to the electron/positron loops in QED, screen the
colour charge, i.e. as smaller distances are probed less of the screening charge is seen
and the coupling constant should increase. The gluon loops however, have an
antiscreening effect. This has a greater effect than the screening contribution and the net
result is to weaken the interaction at shorter distances.
Gluons are radiated both from quarks and from gluons themselves. These may then
either couple to further gluons or split into quark-antiquark pairs. As this radiation
depends on the strong coupling constant, which is itself a function of Q2, Bjorken
scaling is only an approximation, particularly at low Q2 and low x. This is evident in
variations of the structure functions
with Q2. Such scaling violations have been measured and recent results from ZEUS [8]
are shown in figure 2.4.
In addition to the valence quarks the proton must now be considered to include a ‘sea’
of quark-antiquark pairs formed by gluon splitting. The overall flavour content is
unchanged, since gluon splitting produces quarks and antiquarks in pairs:

x q_i^{sea}(x) = x \bar{q}_i^{sea}(x)                    (2.25)

where xq_i^{sea} and x\bar{q}_i^{sea} are the distributions of the sea quarks and antiquarks.
In this new model, known as the Improved Quark Parton model, F2 must now include
contributions from the sea quarks. It is also a function of Q2 as well as x in order to take
into account the scaling violations. Equation 2.21 can now be rewritten as:
F_2(x, Q^2) = \sum_i e_i^2\left(x q_i(x, Q^2) + x\bar{q}_i(x, Q^2)\right)
            = \frac{4}{9}\,x\left(u(x,Q^2) + \bar{u}(x,Q^2) + c(x,Q^2) + \bar{c}(x,Q^2)\right)
            + \frac{1}{9}\,x\left(d(x,Q^2) + \bar{d}(x,Q^2) + s(x,Q^2) + \bar{s}(x,Q^2)\right)                    (2.26)
The contributions from the bottom and top quarks have been neglected as the energies
reached in this analysis are not great enough.
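As a small illustration of how equation (2.26) is used, the sketch below (not from the thesis) encodes only the charge weighting; the parton densities themselves would come from a parameterisation such as those discussed in section 2.7, represented here by a hypothetical callable xq.

def f2(x, Q2, xq):
    """Charge-weighted sum of eq. (2.26). `xq` is any callable
    xq(flavour, x, Q2) returning the momentum density x*q(x, Q2);
    flavours: 'u', 'ubar', 'd', 'dbar', 's', 'sbar', 'c', 'cbar'."""
    up_type = sum(xq(f, x, Q2) for f in ('u', 'ubar', 'c', 'cbar'))
    down_type = sum(xq(f, x, Q2) for f in ('d', 'dbar', 's', 'sbar'))
    return (4.0 / 9.0) * up_type + (1.0 / 9.0) * down_type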
Therefore, according to the Quark-Parton Model, F2 can be interpreted in terms of the
distribution of quarks and antiquarks in the proton. Unfortunately, the distribution
cannot be calculated from first principles. The QCD factorisation theorem [9] however,
allows the evolution of the parton distributions with Q2 to be calculated using
perturbative QCD. The accurate measurement of F2 provides constraints for fits to these
QCD calculations, therefore allowing the parton distributions to be determined.
2.6 Parton Evolution
The experimentally observed variation of the structure function with Q2 implies an
evolution of the parton distributions. The processes that lead to this are the radiation of
hard gluons by quarks and the splitting of gluons into quark-antiquark and gluon-gluon
pairs.
The probability that a quark contains another quark of lower fractional momentum z is
defined as Pqq(z), that it contains a gluon is Pgq(z), that a gluon contains a quark is Pqg(z)
and that a gluon contains a gluon is Pgg(z). These probabilities are known as the
Altarelli-Parisi splitting functions [10] and, at leading order, are given by equations
(2.27)-(2.30).
4 1  z 2 
p ( z)  

3  1 z 
0
qq

(2.27)

0
p qg
( z) 
1 2
2
z  1  z 
2
0
p gq
( z) 
4 1  (1  z ) 2 


3
z

(2.29)
1 z
 z

0
p gg
( z )  6

 z 1  z 
z
1  z

(2.30)
17
(2.28)
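The leading order splitting functions translate directly into code; the sketch below (for illustration only, not part of the thesis) is valid for 0 < z < 1 away from the endpoints, where the 1/(1-z) and 1/z poles are regulated by virtual terms and plus-prescriptions not shown in the text.

def p_qq(z):  # quark -> quark, eq. (2.27)
    return 4.0 / 3.0 * (1.0 + z**2) / (1.0 - z)

def p_qg(z):  # gluon -> quark, eq. (2.28)
    return 0.5 * (z**2 + (1.0 - z)**2)

def p_gq(z):  # quark -> gluon, eq. (2.29)
    return 4.0 / 3.0 * (1.0 + (1.0 - z)**2) / z

def p_gg(z):  # gluon -> gluon, eq. (2.30)
    return 6.0 * ((1.0 - z) / z + z / (1.0 - z) + z * (1.0 - z))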
Figure 2.4: F2 as a function of Q2 in bins of x. The rise with Q2 at small x and the fall at
high x is clearly visible. Results from fixed target experiments are shown for
comparison.
2.7 DGLAP Evolution
The splitting functions can be used within the DGLAP equations [10-14] to describe the
evolution of quark and gluon densities with Q2. As stated previously, the quark
distribution evolves via gluons being radiated from quarks and by gluons splitting into a
qq̄ pair and is given by:
\frac{dq_i(x,Q^2)}{d\ln Q^2} = \frac{\alpha_s(Q^2)}{2\pi}\int_x^1\frac{dy}{y}\left[\sum_j q_j(y,Q^2)\,P_{q_iq_j}\!\left(\frac{x}{y}\right) + g(y,Q^2)\,P_{q_ig}\!\left(\frac{x}{y}\right)\right]                    (2.31)

with the evolution of the gluon distribution given by

\frac{dg(x,Q^2)}{d\ln Q^2} = \frac{\alpha_s(Q^2)}{2\pi}\int_x^1\frac{dy}{y}\left[\sum_j q_j(y,Q^2)\,P_{gq_j}\!\left(\frac{x}{y}\right) + g(y,Q^2)\,P_{gg}\!\left(\frac{x}{y}\right)\right]                    (2.32)
A diagram with several such emissions is shown in figure 2.5. This has the effect of
increasing the density of low-x partons.
Parton density functions (pdfs) were first introduced in equation 2.21. The solutions of
the DGLAP equations are used to evolve the parton distributions to any Q2 scale. The
x dependence at some input scale, Q0^2, must be known beforehand.
This is theoretically incalculable and must be derived from fits to structure function data
obtained from experiments.
Pdfs have been published by several groups, with the majority being implemented in the
PDFLIB package [15]. This allows their convenient use in Monte Carlo simulations.
Two such pdfs used in this thesis are the MRS(a) [16] and CTEQ4D [17]
parameterisations.
Figure 2.5: Ladder diagram showing the effect of several gluons being radiated from the
parton that interacts with the virtual photon.
2.8 The Longitudinal Structure function, FL
In the naïve quark-parton model the valence quarks carry no transverse momentum. The
emission of a hard gluon from the quark however, introduces a small component of
transverse momentum due to momentum conservation rules. The measurement of FL
can therefore give access to the gluon distribution within the proton.
In order to parameterise this, the structure function FL is introduced, where

F_L = F_2 - 2xF_1                    (2.33)
Substituting the above into equation (2.16), the cross section for e+p scattering can be
written in the form:
\frac{d^2\sigma}{dx\,dQ^2} = \frac{2\pi\alpha^2}{Q^4 x}\left[Y_+ F_2(x,Q^2) - y^2 F_L(x,Q^2) - Y_- xF_3(x,Q^2)\right]                    (2.34)

where

Y_\pm = 1 \pm (1-y)^2                    (2.35)
Again, the xF3 term is neglected and equation (2.34) can be written as:
\frac{d^2\sigma}{dx\,dQ^2} = \frac{2\pi\alpha^2}{xQ^4}\,Y_+\left(F_2 - (1-\epsilon)F_L\right)                    (2.36)
where
\epsilon = \frac{2(1-y)}{1+(1-y)^2}                    (2.37)
The quantity, R, is defined to be

R = \frac{\sigma_L}{\sigma_T} = \frac{F_L}{F_2 - F_L}                    (2.38)

and is introduced into equation (2.36) to give:
\frac{d^2\sigma}{dx\,dQ^2} = \frac{2\pi\alpha^2}{xQ^4}\,Y_+\left(\frac{1+\epsilon R}{1+R}\right)F_2                    (2.39)
As equation 2.39 shows, the relative contribution of F2 and FL to the cross section
depends on ε and hence on y. Also, as the structure functions themselves are functions of
x and Q2, the extraction of FL requires a measurement of the cross section at fixed x and
Q2 while varying y, in effect varying the (1-ε) factor in the FL term. As can be seen from
equation (2.14), such a variation in y can only be achieved by changing the centre of
mass energy squared of the event, i.e. changing the beam energies.
Also, as the plot of ε as a function of y in figure 2.6 shows, ε only deviates appreciably
from unity at large values of y. With the term in the cross section involving R
rewritten as:

\left(\frac{(1-\epsilon)R}{1+R}\right)F_2 = (1-\epsilon)F_L                    (2.40)
this can also be plotted as a function of y and is shown in figure 2.6. Here, R is set to
0.3, a value resulting from the prediction of perturbative QCD. It is clear that the effect
on the cross section from non-zero values of R (FL) will only be visible at large values
of y.
Figure 2.6: ε and (1-ε)FL plotted as functions of y. The effect of non-zero R on the cross
section is only significant at high values of y.
The resulting requirement of making measurements at different centre of mass energies
and high y makes the measurement of FL extremely difficult.
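The sensitivity argument above can be made concrete with a short sketch (not part of the thesis) that evaluates ε from equation (2.37) and the factor (1+εR)/(1+R) from equation (2.39) for the pQCD-motivated value R = 0.3 used in figure 2.6.

def epsilon(y):
    """Photon polarisation parameter, eq. (2.37)."""
    return 2.0 * (1.0 - y) / (1.0 + (1.0 - y)**2)

def cross_section_factor(y, R):
    """Factor (1 + eps*R)/(1 + R) multiplying F2 in eq. (2.39);
    its deviation from 1 is the fractional effect of F_L."""
    eps = epsilon(y)
    return (1.0 + eps * R) / (1.0 + R)

R = 0.3  # the pQCD-motivated value used in figure 2.6
for y in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"y = {y:.1f}: eps = {epsilon(y):.3f}, "
          f"suppression of the F2 term = {1.0 - cross_section_factor(y, R):.3%}")

The suppression is at the per-mille level for y around 0.1 but grows to tens of per cent near y = 0.9, which is the point made in the text.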
Many fixed target experiments can run beams at different energies and have published
results on FL [23-32]. However, changing the beam energies at an e+p collider, such as
HERA, presents enormous challenges, ranging from understanding the response of the
detectors under different conditions to the engineering problems involved with changing
the beam energies. Such a move would also have a large impact on other analyses.
Although Monte Carlo studies of the effect of changing the beam energies at HERA
have been made [18,19], there are no plans for changing the beam energies.
A potential way of achieving the necessary variation in centre of mass energy is to
make use of the emission of hard photons by the incoming positron, as illustrated in
figure 2.7. These events are known as Initial State Radiative Events and serve to lower
the energy of the positron and therefore change the kinematics. The variable, z, is
defined as the fraction of the initial electron’s energy remaining after the emission of
the photon, where:
z
E e  E ISR
(2.41)
Ee
Thus, the incident energy of the electron becomes zEe and the centre of mass energy
squared of the event becomes:
\tilde{s} = zs                    (2.42)
Equation 2.14 can therefore be rewritten as:
y = \frac{Q^2}{\tilde{s}\,x} = \frac{Q^2}{zsx}                    (2.43)
A change in y can thus be achieved by changing z, i.e. by selecting photons of different
energies. Monte Carlo studies of the potential of this method have also been made
[20,21]. These show that using ISR events to measure FL from their effect on the
absolute cross section would require much larger data sets than are currently available.
However, the effect of R on the shape of the y distribution should be observable with
the currently available data, and will be discussed further in chapter 8.
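The following sketch (an illustration, not from the thesis) shows how an ISR photon raises the effective y at fixed x and Q2 via equation (2.43); the beam energies are assumed HERA-like values (27.5 GeV positrons on 820 GeV protons), which are not quoted in this chapter, and the x, Q2 point is the FL bin quoted in the abstract.

E_e = 27.5   # assumed lepton beam energy in GeV
E_p = 820.0  # assumed proton beam energy in GeV
s = 4.0 * E_e * E_p

def y_isr(Q2, x, E_gamma):
    """Effective y for an ISR event with photon energy E_gamma,
    using z = (E_e - E_gamma)/E_e and eq. (2.43): y = Q^2/(z*s*x)."""
    z = (E_e - E_gamma) / E_e
    return Q2 / (z * s * x)

Q2, x = 5.5, 4.4e-4   # the F_L bin quoted in the abstract
for E_gamma in (0.0, 5.0, 10.0, 15.0):
    print(f"E_gamma = {E_gamma:4.1f} GeV -> y = {y_isr(Q2, x, E_gamma):.3f}")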
Figure 2.7: The emission of an ISR photon in a DIS event. The hadronic final state is
denoted by X.
The H1 collaboration has made use of another approach to determine FL that does not
require changing the centre of mass energy of the interaction [22]. This method first
utilises data at low y, where there is a negligible contribution from FL to the cross
section. The resulting measurement of F2 is evolved to high y using the NLO
Altarelli-Parisi equations. In this region, where there is believed to be a sizeable
contribution from FL, the evolved value of F2 is subtracted from the measured cross
section, leaving the extracted value of FL. The resulting values of FL are shown in
figure 2.8.
Figure 2.8: The Longitudinal structure function obtained by H1 as a function of x in
bins of Q2 along with charged lepton-nucleon fixed target experiments. The error bands
are due to the experimental (inner) and model (outer) uncertainty of the calculation of
FL using a NLO QCD fit to the H1 data.
Chapter 3
The Large Hadron Collider
3.1 Introduction
The Large Hadron Collider (LHC), currently under construction at CERN in Geneva, is
a proton-proton machine that will be the most important colliding beam facility in the
world for many years after it is commissioned in 2005. The LHC will probe conditions
at much higher centre of mass (CM) energies than current experiments. This will allow
it to probe a kinematic regime which, it is believed, will provide direct experimental
evidence for extremely important processes. For example, the Standard Model explains
the origin of mass via the Higgs mechanism [33,34]. The discovery of the Higgs particle
is therefore fundamental in our understanding of particle physics. There is also potential
to discover physics beyond the Standard model, possibly in the shape of supersymmetry
[35]. A schematic of the LHC is shown in figure 3.1.
The LHC will have a CM energy of 14 TeV, operating at extremely high luminosities in
the region of 10^34 cm^-2 s^-1. It will be built inside the existing LEP tunnel, with new
halls being built for the experiments. The experiments being built for the LHC include
LHC-B, a B-physics experiment for the investigation of CP violating processes; ALICE,
a heavy ion detector for nucleus-nucleus interactions; and two general-purpose detectors,
ATLAS and CMS.
Figure 3.1: The LHC Collider showing the relative locations of the four experiments,
CMS, ALICE, ATLAS & LHC-B
3.2 The Compact Muon Solenoid Experiment
The Compact Muon Solenoid (CMS) Experiment is designed to record general physics
events, in particular those events that are the signature of new physics. Furthermore, this
has to be done under the high luminosity and background conditions at the LHC. Figure 3.2
is a section of the CMS experiment showing the relative locations of the components.
To achieve these aims, a symmetrical detector has been designed. Innermost will be
silicon trackers that will reconstruct charged tracks and provide important vertex
information. Surrounding these will be high-resolution electromagnetic and hadronic
calorimeters that will measure the energy of particles to a high degree of accuracy. The
solenoid will generate an extremely high magnetic field of 4 Tesla and will enable the
identification and momentum measurement of charged particles.
Finally, outermost will be muon chambers that will allow the detection of muons,
which pass directly through the calorimeter.
Figure 3.2: Section of the CMS experiment showing the relative positions of the
major components.
The rest of this chapter will concentrate on some of the electronic components
associated with the calorimeter.
3.3 The Calorimeter
The calorimeter consists of an electromagnetic calorimeter (ECAL) surrounded by a
hadron calorimeter (HCAL) that work in conjunction to measure the energies and
positions of particles produced in the interaction as well as providing hermetic coverage
for the measurement of missing transverse energy.
The purpose of the ECAL is to measure, with high resolution, the energies and positions
of the electrons and photons produced in each event.
The Higgs decaying to two photons channel [36] is used as the benchmark process to
assess the calorimeter performance at the LHC. Although the cross section for this
process is very low, the background is very small and it will be the best way of
detecting the Higgs if it has a mass ≲ 130 GeV. As such, the performance of the ECAL
is determined by the di-photon mass resolution. This depends on the energy resolutions
of the two photons, σ_E1,2/E_1,2, as well as the angular resolution on their separation
angle θ, σ_θ/tan(θ/2).
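The dependence can be made explicit with the standard approximation for the di-photon mass resolution, in which the three contributions add in quadrature; this formula is implied rather than written out in the text, and the sketch below with its example numbers is purely illustrative.

import math

def diphoton_mass_resolution(sigE1_over_E1, sigE2_over_E2, sigma_theta, theta):
    """Fractional di-photon mass resolution:
    sigma_M/M ~ 0.5 * sqrt((sigE1/E1)^2 + (sigE2/E2)^2 + (sig_theta/tan(theta/2))^2)."""
    angle_term = sigma_theta / math.tan(theta / 2.0)
    return 0.5 * math.sqrt(sigE1_over_E1**2 + sigE2_over_E2**2 + angle_term**2)

# Invented numbers: 1% energy resolutions, 1 mrad angular resolution,
# 30 degree opening angle
print(diphoton_mass_resolution(0.01, 0.01, 1e-3, math.radians(30.0)))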
The design chosen is an active ECAL array consisting of scintillating lead tungstate
(PbWO4) crystals [37,38]. In an active design, the crystal acts as both the scintillator
and the absorber. Lead tungstate was chosen because of its properties as a good
scintillator with short radiation length as well as good radiation hardness, important in
the extreme conditions at the LHC.
Light from the crystals is detected using silicon avalanche photodiodes in the barrel, and
vacuum phototriodes in the endcaps, whose signal is digitised by the front-end
electronics.
On a larger scale, the ECAL consists of a barrel calorimeter covering the pseudorapidity range |η| < 1.48 and two endcap calorimeters covering 1.48 < |η| < 3.00, where η = -ln(tan(θ/2)). In order to reduce systematic errors, the design is arranged such that no
crack between two crystals is aligned with the interaction point.
The HCAL measures the jets produced in p-p collisions; it must be as hermetic as
possible as well as having sufficient depth in order to contain the hadronic showers. The
size of the HCAL is limited by the super-conducting coil that surrounds it. The choice
of design uses copper absorber interleaved with plastic scintillator tiles. Copper was
chosen for the absorber as it has a short interaction length and is non-magnetic.
The HCAL also covers the region |η| < 3.0 in a barrel and two endcaps. The calorimeter coverage is extended to |η| < 5.0 by two forward calorimeters with copper absorber and
quartz fibres for readout.
3.4 The Trigger
At the design luminosity of 10^34 cm^-2 s^-1 there will be an average of 20 interactions per bunch crossing, every 25 ns, leading to an input rate of 10^9 interactions per second. The maximum rate at which data can be stored on tape for analysis is ~100 Hz; therefore a reduction in rate by a factor of at least 10^7 is required. The trigger and Data Acquisition
(DAQ) System are shown in figure 3.3.
The level-1 trigger consists of front-end electronics operating on a subset of the data.
These generate trigger primitives and perform level-1 trigger processing. Information is
then passed to the Global Trigger Processor (GTP), which, on a positive decision causes
the data to be read from pipelines into front-end buffers.
The higher level triggers are implemented in a processor farm that receives inputs at a
maximum rate of 100kHz. Several levels of filter will be performed to reduce the rate to
the final output rate of ~100Hz.
Phenomena have been identified which may be indicative of signatures of new physics
processes. A successful trigger needs to select these events with a very high efficiency.
For example:
• Muons and electrons from inclusive W bosons.
• Muons with high transverse momentum, pt.
• Jets at high pt.
• High pt photons and electrons.
• Missing Et.
Figure 3.3: The CMS trigger and DAQ.
The only two components used in the level-1 trigger are the calorimeter and muon
chambers. The muon trigger identifies muons and measures their transverse momentum,
pt, before passing information to the global first level trigger. The calorimeter trigger is
presented in detail in the following sections.
3.4.1 The Calorimeter Trigger
The calorimeter trigger has to identify and select electrons, photons and jets as well as
measure the missing ET. The calorimeter is divided into 4176 trigger towers,
corresponding to the HCAL tower structure. In the barrel, a trigger tower corresponds to
0.087 in η and 0.0873 (5°) in φ, corresponding to 25 ECAL crystals. Endcap trigger towers cover a larger range in η. Trigger primitives generated by the front-end
electronics are processed in regional crates. For this purpose, the calorimeter is split into
18 regions, each served by its own crate as shown in figure 3.4. A crate can process 256
ECAL-HCAL trigger tower pairs.
Figure 3.4: The positions of the trigger regions. Each trigger region is served by its own
trigger processor crate.
The electron/photon trigger selects individual energy deposits determined using a
sliding window algorithm [39]. This takes groups of 3x3 trigger towers and performs
cuts on the ECAL energy in each tower as well as the ratio of energy of the
corresponding HCAL towers to the ECAL. Candidates are then sorted and the top four
in each crate are passed to the next stage. Jets are found by summing jet Et over 4x4
trigger towers as well as searching for isolated hadrons. Following this, another sort is
performed and the four top electron, jet and isolated hadron candidates are passed to the
global calorimeter trigger.
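As an illustration, the selection logic can be sketched in C++ as below. The 3x3 window, the ECAL energy cut, the HCAL/ECAL ratio cut and the final sort down to four candidates follow the description above; the threshold values, the summing of the full 3x3 window, the data layout and the function names are simplifying assumptions made only for this sketch.

#include <algorithm>
#include <vector>

struct Tower { double ecal; double hcal; };            // one ECAL-HCAL trigger tower pair
struct Candidate { double et; int ieta; int iphi; };

// Slide a 3x3 window over the trigger towers. The central tower must pass an
// ECAL energy cut and an HCAL/ECAL ratio cut (threshold values illustrative).
std::vector<Candidate> electronPhotonCandidates(
    const std::vector<std::vector<Tower>>& towers,
    double ecalCut = 5.0, double hadOverEmCut = 0.05)
{
  std::vector<Candidate> cands;
  const int nEta = static_cast<int>(towers.size());
  const int nPhi = towers.empty() ? 0 : static_cast<int>(towers[0].size());
  for (int ie = 1; ie + 1 < nEta; ++ie) {
    for (int ip = 1; ip + 1 < nPhi; ++ip) {
      const Tower& c = towers[ie][ip];
      if (c.ecal < ecalCut) continue;                  // ECAL energy cut
      if (c.hcal > hadOverEmCut * c.ecal) continue;    // HCAL/ECAL ratio cut
      double etSum = 0.0;                              // energy in the 3x3 window
      for (int de = -1; de <= 1; ++de)
        for (int dp = -1; dp <= 1; ++dp)
          etSum += towers[ie + de][ip + dp].ecal;
      cands.push_back({etSum, ie, ip});
    }
  }
  // Keep only the four highest-ranked candidates, as in the regional crates.
  std::sort(cands.begin(), cands.end(),
            [](const Candidate& a, const Candidate& b) { return a.et > b.et; });
  if (cands.size() > 4) cands.resize(4);
  return cands;
}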
The Missing Et trigger combines the transverse energy in 0.35 × 0.35 (η, φ) regions,
taking into account the tower angular co-ordinates, over the entire calorimeter. This
provides an estimate for the missing Et.
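A minimal sketch of such a missing-Et calculation is given below; the vector sum over regions and the use of the region azimuth follow the description above, while the data layout and function name are assumptions.

#include <cmath>
#include <vector>

struct Region { double et; double phi; };  // transverse energy and azimuth of one 0.35 x 0.35 region

// Vector-sum the regional transverse energies over the whole calorimeter; the
// missing Et estimate is the magnitude of the resulting (negative) vector.
double missingEt(const std::vector<Region>& regions)
{
  double sumEx = 0.0, sumEy = 0.0;
  for (const Region& r : regions) {
    sumEx += r.et * std::cos(r.phi);
    sumEy += r.et * std::sin(r.phi);
  }
  return std::sqrt(sumEx * sumEx + sumEy * sumEy);
}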
3.4.2 The Sort ASIC
As can be seen from the above, a fast sort algorithm is important for the calorimeter first
level trigger. The sort process will be implemented using 2-stage sort trees. The first
stage uses 3 sort ASICs each receiving 24 objects over 4 clock cycles and outputting 4
objects while the second stage takes the resulting 12 objects and outputs 4 objects. The
requirement of the Sort ASIC is therefore to output the four highest ranked objects from
a set of input trigger objects at a clock speed of 160MHz. The design for the
implementation of this algorithm is shown in figure 3.5.
Before the main sort stage, a pre-sort is performed and objects are arranged into input
groups. The objects in each input group are arranged into rank order.
The first stage of the sort algorithm places the input groups in order of the highest
ranked object in each group, with the highest ranked group labelled A(0,1,2,..), followed
by B(0,1,2,..) etc. As the groups have been pre-sorted, the highest ranked object in group A
must therefore be the highest ranked overall. Furthermore, if for example, there are four
input groups each containing four objects the four highest objects overall must come
from the set {A0, A1, A2, A3, B0, B1, B2, C0, C1, D0}. As the highest ranked object
overall is A0 because of the pre-sort, the second ranked object must therefore be either
A1 or B0. This is repeated for the lower ranks.
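The argument can be made concrete with a short software model. Assuming four pre-sorted input groups of four objects each (the real first-stage ASICs handle 24 objects delivered over four clock cycles), the overall top four can only come from the ten candidates listed above; the C++ below is an illustration of the principle only, with the group layout and ranking values chosen arbitrarily.

#include <algorithm>
#include <array>
#include <cstdio>

// Each group is assumed to be pre-sorted in descending rank order.
using Group = std::array<int, 4>;

std::array<int, 4> topFour(std::array<Group, 4> groups)
{
  // First stage: order the groups by their highest-ranked (first) object, so
  // that group A holds the highest-ranked object overall.
  std::sort(groups.begin(), groups.end(),
            [](const Group& a, const Group& b) { return a[0] > b[0]; });

  // Because of the pre-sort, the overall top four must come from the set
  // {A0..A3, B0..B2, C0..C1, D0}; collect these ten candidates and sort them.
  std::array<int, 10> cand = {
    groups[0][0], groups[0][1], groups[0][2], groups[0][3],
    groups[1][0], groups[1][1], groups[1][2],
    groups[2][0], groups[2][1],
    groups[3][0]
  };
  std::sort(cand.begin(), cand.end(), [](int a, int b) { return a > b; });
  return {cand[0], cand[1], cand[2], cand[3]};
}

int main()
{
  std::array<Group, 4> groups = {{ {9, 7, 2, 1}, {8, 6, 5, 0}, {4, 3, 2, 1}, {10, 3, 1, 0} }};
  for (int r : topFour(groups)) std::printf("%d ", r);   // prints: 10 9 8 7
  return 0;
}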
Figure 3.5: Schematic of the sort algorithm.
As part of the early development of the system, the sort algorithm was implemented on
an ASIC. The ability of the design to work under the required conditions is however
unknown. Therefore several such prototype sort ASICs have been built and require full
testing.
Due to the rapid developments in integrated circuit technology however, this approach
has been rendered obsolete. The test system described below therefore remains a
development exercise.
3.5 The Test System
The requirement of the test system is to send simulated trigger signals at the LHC clock
speed to the sort ASIC and to monitor its output for errors. The sort ASIC requires 32 × 8-bit words input at a speed of 160 MHz, but a testing device should be able to raise the input to 200 MHz. This requirement is too high for current commercial chip-testing
platforms to test it to its full capacity. Therefore, a specialist chip-testing device has
been designed. Ideally, the design of a test system should be modular where
components can be removed and replaced at any time and can have more than one
application. For example, it is envisaged that further components of the CMS trigger
will be tested on the same platform.
For this reason a field bus design was chosen. The principle of the design is to store the
high-speed output from the device under test onto a series of memories located on a bus.
The data can then be read and analysed at a slower clock speed using currently available
technology. By placing several such memories on the bus a single computer can analyse
a large amount of data. An overview of the field bus is shown in figure 3.6.
The output of the device under test is read out at high speed onto memory modules on
the field bus. Several such modules can be located on the bus at any one time. The
contents of these memories can then be read at the slower speed of 40MHz onto the
controlling computer, in this case a PC, before being analysed and checked for errors.
The input to the ASIC is provided by four pseudorandom number generators giving the
required bit rate with the output sent through a demultiplexer onto a 16-bit bus running
at 40MHz.
The following description however, is independent of the device under test.
Figure 3.6: The Layout of the Sort ASIC Field Bus Test System.
The 16-bit bus then interfaces with memory modules, each holding 4 Dual-Port RAM
(DPR) memory chips. Modules on the field bus are linked to the other modules with
an 8-bit loop. This loop can include more memory modules where necessary, as well as
other devices. For example, the Clock and Control module provides clock distribution
and triggering facilities. The nodes have a common design and decode the field bus as
well as providing the control of each module.
Dual Port RAM (DPR) was chosen as it contains two sets of address and data lines,
labelled left (L) and right (R), as compared to conventional memory with just one set.
Both sets of lines in DPR access the same memory locations on the chip and as such
allow simultaneous reading and writing to the memory.
Finally, the field bus requires a driver. Signals are generated and analysed by a remote
computer terminal that can send control signals and data packets to the field bus.
Furthermore, the computer can read data from the field bus and analyse the output from
the device under test as well as check for errors.
The field bus driver module simply converts the PC signals to the field bus, which uses
either copper cabling or fibre-optics.
3.5.1 The Protocol
Messages and commands sent around the ring by the field bus driver are received by all
nodes, which then decode them. The message content should then determine the action,
if any, that the node performs. In order to achieve this any protocol has to include
commands for the full set of actions available to the nodes. In addition, addresses for
selecting nodes individually as well as for selecting memory locations on the DPR, a
data payload and finally error checking facilities should also be included. As the field
bus design implements an 8-bit bus, the protocol consists of a set of 8-bit words in the
order shown in figure 3.7.
Each node of the system is designed to be 16-bit. Therefore, every two words in the
field bus protocol correspond to one word used by the node. The aim of this is to allow
the major functions of the node to run at half the clock speed of the field bus, thereby
allowing for easier implementation using current technology.
Token | Node Address | Start Address (32 bit) | Data Length | Error Check Word | Data Payload | Error Check Word
Figure 3.7: The Field-Bus Protocol.
One complete cycle of the bus begins with a header consisting of ten 8-bit words.
The header starts with the token, which defines the operation to be performed, followed
by the address of the node that is intended to carry out this command. If the command
involved reading or writing to the DPR the starting address and data length follow,
otherwise these are set to zero. Two error check words follow.
Next, comes the data payload, the length of which is defined in the header up to a
maximum length of 2^15 words. This length was determined by software constraints.
Finally, come another two Error Check words.
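For reference, the header can be summarised as a simple data structure. The field widths below follow the word counts given above, taking the start address as 32 bits and the data length and error check as one 16-bit word each so that the header totals ten 8-bit words; the struct and field names themselves are illustrative assumptions.

#include <cstdint>

// Header of one complete field bus cycle; it is followed by a data payload of
// up to 2^15 words and a trailing 16-bit error-check word.
struct FieldBusHeader {
  std::uint8_t  token;         // operation to be performed
  std::uint8_t  nodeAddress;   // node intended to carry out the command
  std::uint32_t startAddress;  // DPR start address (zero if not a memory access)
  std::uint16_t dataLength;    // number of payload words, up to 2^15
  std::uint16_t errorCheck;    // header error-check word
};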
The bit-naming convention used has the least significant bit (LSB) in the top left-hand corner of a 16-bit word and the most significant bit (MSB) in the bottom right-hand corner.
Six tokens control the general actions of the nodes. They are:
• Bus Reset
• Get Node Name
• Write Data Word
• Read Data Word
• Write Data Block
• Read Data Block
The Bus Reset and Get Node Name tokens are global and act on every node on the
system. The Bus Reset token, as its name implies, resets all the signals and registers of
the nodes on the network.
The Get Node Name token loads the first node in the network with an 8-bit address, a,
specified by the user in the Node address word. This address is then incremented by 1
and the protocol, otherwise unchanged, passes to the next node and so on. In this way,
all the nodes are given addresses as shown in table 3.1.

Node 1 | Node 2 | Node 3 | Node 4
Address | Address+1 | Address+2 | Address+3

Table 3.1: Addressing the Field bus nodes
3.6 The Field Bus Node
The aim of the node design is to make the field bus nodes modular. Therefore, all
nodes, regardless of the function of the board on which they are situated, have the same
general design. This overall design is shown in figure 3.8 and is described in the
following sections. This is implemented on a Field Programmable Gate Array
(FPGA), which is the core component for each board.
Figure 3.8: The overall node design: interface, node decoder, memory management and interface to external RAM.
3.6.1 The XILINX Field Programmable Gate Array
The XILINX FPGA is an integrated circuit consisting of an array of configurable logic
blocks connected by reroutable links. This gives the device a large amount of flexibility.
A huge range of digital circuits can be designed and then downloaded to the FPGA.
Furthermore, the FPGA can be reset allowing different designs to be implemented.
When the final design has been completed, an Erasable Programmable Read-Only Memory (EPROM) can be used to store it and reload the FPGA whenever it is reset or when the
power is switched on.
In general, the size and complexity of modern digital circuits is such that it is
impossible to design them by building circuits at the schematic level. It is evident
therefore that a way of synthesising these from a set of standard instructions is required.
VHDL (Very High Speed Integrated Circuits Hardware Description Language) [41,42]
is a programming language that allows the design of such large and complex integrated
circuits.
VHDL uses objects called ‘entities’ which are descriptions of some logical process with
the input and output to the process also defined.
An entity can be as simple as a description for a simple logic operation, for example
C <= A and B   (3.1)
defines a logical ‘and’ operation with inputs A & B and output C.
Another entity could be a state machine with many different states and input signals, the
transition from state to state depending on a complex interaction between these signals.
Entities are joined in a hierarchical structure to form the overall design. The advantage
in this case is that neither a top-down nor a bottom-up design methodology is favoured.
One could just as easily start with the top-level design as start from the individual
components.
After the VHDL has been written it must be converted into a form that can be
downloaded to the FPGA. This is called implementation.
The first stage of the implementation of the design after the code is written is known as
Elaboration. Here, the entities are placed in the correct hierarchical structure. This is followed by the architectural synthesis stage, in which the required logic components are synthesised.
It is after this stage that the physical pin locations on the chip of the inputs and outputs
are assigned. Of these pins, some are reserved for power supplies, resets and loading
the design. The others however, can be assigned to whatever purpose is required.
Finally, the Logic is optimised for either speed or area and the circuit diagrams are
created. These diagrams must then be converted into a netlist, or list of the connections
used within the FPGA, which is then used in the final stage to create a bit-file that can
be downloaded to the chip.
3.6.2 The Interface
The interface is the only part of the design dependent on the hardware being used. Its
function is to provide the necessary connection to external resources that may vary from
board to board. For example, the memory module board will require address, data and
control lines for the DPR. There may also be the requirement for control lines to any
displays or LEDs mounted on the board. Also, the interface to the field bus itself may
vary from board to board, i.e. from coaxial cable to fibre optic inputs.
The design should allow the greatest amount of flexibility in order to make changes as
simple as possible.
3.6.3 The Node Decoder
The Node Decoder monitors and, if necessary, acts upon the field bus protocol. It has
four main components, the Input/Output (IO) module, a clock divider, a Header decoder
and internal bus control. The layout of the Node Decoder is shown in figure 3.9.
Figure 3.9: The Node Decoder.
The clock divider provides a control signal with double the period of the field bus clock.
CLK2 = high is used throughout the design to synchronise the 16 bit words. The signal
is generated using a 2-bit counter clocked with the field bus clock.
The IO Module converts the 8-bit field bus to internal 16-bit words used by the decoder.
These 16-bit words are then passed into the rest of the system. The data however also
passes through a shift register that delays it by the same number of clock cycles that are
required by the Node Decoder. In this way, the header always passes through unaffected
by any other process. An illustration of the IO module is shown in figure 3.10. Data is
only written to the field bus if the output enable signal is set high. The latches in the
shift register are clocked with the field bus clock while the input latch is clocked with
the internal CLK2 signal. On CLK2 = high the first 8 bits of the 16 bit internal words
are filled while on CLK2 = low the last 8 bits of the 16 bit words are filled.
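In software the byte-to-word assembly can be modelled as below; which half of the internal word is filled on which CLK2 phase follows the description above, while the function name and the exact bit ordering inside the word are assumptions.

#include <cstdint>

// Assemble one internal 16-bit word from two consecutive field bus bytes: the
// byte arriving while CLK2 is high fills the first 8 bits and the byte arriving
// while CLK2 is low fills the last 8 bits.
std::uint16_t assembleWord(std::uint8_t byteClk2High, std::uint8_t byteClk2Low)
{
  return static_cast<std::uint16_t>(byteClk2High | (byteClk2Low << 8));
}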
Figure 3.10: The Input/Output module.
The Header Decoder decodes the protocol. The main component is a state machine that
waits for the arrival of a valid token and, for those tokens that require it, a node address.
Then, on each subsequent CLK2 = high signal it cycles through the states:
• Load Address 1
• Load Address 2
• Load Length
• Header Error Check
• IO Data
• Data Error Check
• Error Word
• End State
• Idle
If however, the token is the Bus Reset token a bus reset signal is set high for 2 Clock
cycles, performing a ‘soft’ reset, as the state machine will continue to its end state.
During the Load states, appropriate registers are loaded with the current data word. The
Address and IO Data Counters are loaded with the contents of these registers during the
Header Error Check state. The Address counter counts up starting from the loaded
address while the IO data counter counts down from the load length to 0. On reaching 1,
a terminating signal is sent to the State Machine.
If the token is Read/Write Data, the state machine will remain in the IO Data state until
the terminating signal is received.
Error checking is done by performing an XNOR on each word with the previous one.
The result is compared with the check word. If the two are equal, no error is returned.
This is equivalent to an odd parity check.
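A plausible software model of this check is sketched below, under the assumption that the check word is a running bitwise XNOR accumulated over the 16-bit words of a block; the exact grouping of words is not specified here, so this illustrates the principle rather than the implemented logic.

#include <cstdint>
#include <vector>

// Running bitwise XNOR over a block of 16-bit words. The receiver repeats the
// calculation and compares the result with the transmitted check word; if the
// two are equal, no error is returned.
std::uint16_t xnorCheck(const std::vector<std::uint16_t>& words)
{
  std::uint16_t acc = 0xFFFF;                        // XNOR identity (all ones)
  for (std::uint16_t w : words)
    acc = static_cast<std::uint16_t>(~(acc ^ w));    // bitwise XNOR
  return acc;
}

bool blockOk(const std::vector<std::uint16_t>& words, std::uint16_t checkWord)
{
  return xnorCheck(words) == checkWord;
}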
The Get Node Name Token is implemented by a separate module. If this token is
received, the state machine sets a get name valid signal and a node address register is
then loaded with the word in the data payload. An address-loaded signal is also set. The
data word is then incremented by one and sent to the IO module to be sent on to the
next node on the field bus.
Data will only be sent to the field bus if the right conditions are met. Firstly, for the Get
Node Name token the signal Data to Bus enable is set during the Header Error check
state. Secondly, if the token is Read (word/data), Data to Bus enable is only set on the
correct state and if an output enable signal is set. This only occurs when the node is
being addressed.
The memory manager generates the address and read/write enable signals necessary for reading and writing data to memory.
A node also possesses an internal memory comprising a set of 8 × 16-bit registers that can be used for control and testing purposes:
• 0 – Control register (Read/Write)
• 1 – Node address register (Read only, except on Get Node Name)
• 2 – Type and Version Number (Read only)
• 3 – Status Register (Read only)
• 4 – Reserved for future use (Read only)
• 5 – General Purpose/resources (Read/Write)
• 6 – General Purpose (Read/Write)
• 7 – General Purpose (Read/Write)
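In code the register map might be captured as the following enumeration; the indices follow the list above, while the enumerator names are illustrative.

// Internal 16-bit register map of a field bus node (names are illustrative).
enum NodeRegister {
  kControl        = 0,  // read/write
  kNodeAddress    = 1,  // read only, except on Get Node Name
  kTypeAndVersion = 2,  // read only
  kStatus         = 3,  // read only
  kReserved       = 4,  // read only, reserved for future use
  kGeneral0       = 5,  // read/write, general purpose/resources
  kGeneral1       = 6,  // read/write, general purpose
  kGeneral2       = 7   // read/write, general purpose
};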
3.7 Simulation of the Design
In order to simulate the design a test bench was defined. The test bench simulates the
field bus inputs and clock. It is applied after the architectural synthesis stage. The
current state of any signal within the design can then be monitored throughout the
duration of the simulation.
Figure 3.11 shows the signals monitored during the running of a typical testbench on a
simulation of a field bus node. The simulation runs three tokens. First, a Get Node
Name command loads the node address with the address 1. This is followed by a write
data command that writes six 8-bit (three 16-bit) words to the node with address 1
starting at the address 0101. Finally, a read data command is again sent to the node with
address 1. This reads three words from the starting address 0101.
Points 1,2 and 3 show the starting points of each data packet. The inputs are clocked on
the rising edge of the field bus clock. However, the 16 bit signals of the node decoder
have to be synchronised with the rising edge of the CLK2 signal. Points A show that the actual 'CLK2' signal used in the node can be either equal to, or the inverse of, the nominal CLK2 signal.
Figure 3.11: Simulation of Node Decoder through a Get Node Name command,
followed by a write and read data cycle.
This ensures that the node decoder is always synchronised regardless of which point in
the CLK2 cycle that the field bus data bunch arrives. Point B shows the output-enable
signal set to high; this allows data from the internal registers or memory to be written to
the field bus. At all other times the signals pass through the input-output module with
appropriate delay. As can be seen, the output to the field bus is delayed by 10 clock
cycles.
The prototype consisted of a PC running a Windows based Graphical User Interface,
coded in C++, with a physical interface to the field bus provided by a dedicated circuit
board. This was connected to a board via a 40-way cable. For testing, the PC also
provided the clock. The node on the board is linked to the field bus via ECL-TTL chips
that drive the 8-bit bus. The clock is also delivered around the field bus. A picture of
this board is shown in figure 3.12.
A prototype memory module was also included on the field bus. This was designed to
be placed in a VMEbus crate and was run successfully in tandem with the first node.
The memory module is illustrated in figure 3.13.
Figure 3.12: The prototype interface board. The FPGA is shown in the centre with the
socket for connection to the PC shown at top. The TTL output sockets for the field bus
are to the right. Other resources include 4 seven segment displays and 11 LEDs as well
as global and local reset buttons.
Figure 3.13: The prototype memory module. Field bus connections are at bottom left with
the FPGA upper centre. Provision for 4 DPR chips is made although only one is in
place here. This board is designed to be placed in a VMEbus crate although it only
draws power from the crate. The resources also include 4 multifunction displays.
The field bus was successfully tested with two nodes, on separate cards. The node
addressing and ability to read and write data to the internal registers on the nodes was
demonstrated.
The next stage involved reading and writing operations to the DPR. This meant overcoming synchronisation problems caused by the distribution of the clock around the field bus, and hence required the introduction of the clock distribution card. The entire design also needed to run at much higher clock speeds than that provided by the PC.
The progress of technology, though, finally ended the field bus design. Due to a large
increase in speed and reduction in cost of FPGAs, it is now possible to implement the
sort algorithm on one of these rather than a specialist chip. This therefore enables the
FPGA simulation to be used directly for testing the operation of the chip.
Chapter 4
HERA and the ZEUS detector
4.1 HERA
The Hadron Electron Ring Accelerator, HERA [43], is an electron-proton collider
located at the DESY Laboratory in Hamburg, Germany. Built by an international
collaboration it was commissioned in 1991 with the first collisions observed in May
1992.
The two main storage rings are each 6.3 km in circumference; one contains a beam with
820 GeV protons, the other a counter-rotating beam of 27.5 GeV positrons. This gives a
centre of mass energy of 300 GeV. (For the 1998 running period onwards, 920 GeV
protons were used.) The design luminosity is 1.5 × 10^31 cm^-2 s^-1. Figure 4.1 shows the
layout of HERA along with the locations of the main experimental areas. The North and
South Halls house the H1 and ZEUS experiments respectively with the HERMES fixed
target experiment in the East Hall and the HERA-b experiment located within the
PETRA ring.
Both the protons and electrons are collected into 220 bunches, crossing at each
interaction point every 96 ns. Some of the bunches are left unfilled and are used for
background studies. In order to study all possible beam-related backgrounds there are
unpaired electron bunches (no protons), unpaired proton bunches and completely empty
bunches.
The protons are initially accelerated to 7.5 GeV in DESY III before being transferred to
PETRA, where they are accelerated to 40 GeV and injected into HERA. When all of the
bunches are filled the beam is then accelerated to 820 GeV. The high energy of the
protons requires a bending field of approximately 4.7T, requiring the use of
superconducting magnets.
Figure 4.1: The HERA Layout, showing the locations of the ZEUS and H1 experiments
along with the preaccelerators, H Linac, e Linac, DESY and PETRA.
The positron beam, however, only needs a field of 0.164T and conventional magnets are
used. The positrons are initially accelerated in a 500 MeV linac and accumulated in a
single, 60 mA bunch before being injected at 7 GeV into PETRA from DESY II. Once
there are 70 such bunches in PETRA they are accelerated to 14 GeV and injected into
HERA. Once the required bunch structure has been reached the positrons are then
ramped to 27.5 GeV.
Figure 4.2 shows the accumulated luminosity delivered by HERA for the years of
running up to 1999. For the 1996 and 1997 running periods, ZEUS collected
approximately 10.7pb-1 and 27pb-1 of data respectively.
Figure 4.2: The integrated luminosity delivered by HERA for the years 1992-1999. The
early 1999 running period used electrons before being switched to positrons for the
remainder of the year.
The HERA accelerator allows the study of lepton-proton scattering in previously
unreachable kinematic regimes. For example, to achieve the same centre of mass energy
a fixed target experiment would require an electron(lepton) beam with energy of several
TeV. Furthermore, a wide variety of physics can be studied, from accurate
measurements of the proton structure functions to photoproduction and searches for
physics beyond the Standard Model.
Figure 4.3 shows the kinematic regime reached by HERA DIS data taken using the
ZEUS experiment, alongside regions probed by fixed-target experiments. The ZEUS
data include results from shifted vertex running in 1995 [44] where the nominal
interaction point was moved in order to reach lower Q2. The Beam Pipe Calorimeter
(BPC) and Beam Pipe Tracker (BPT) are two detectors placed close to the beam pipe in
order to measure scattered positrons at very small angles and low Q2. The data shown
were taken over a short period in 1997 [45]. The fixed target experiments included are
NMC [46], BCDMS [47], CCFR [48], E665 [49] and SLAC [50].
As can be seen, very low values of x as well as very high values of Q2 can be probed at
HERA. There is also potential for overlap with fixed-target data. Also, low Q2 and low
x regions can be reached, probing the transition region between perturbative and non-perturbative QCD.
4.2 The ZEUS Detector
The ZEUS detector [51,52] is a general-purpose detector designed to record all the
physics processes at HERA. The imbalance in the beam energies results in a boost in
the direction of the proton. The detector is therefore asymmetric and is split into three
distinct regions, forward, barrel and rear. The forward region contains more particles
with higher energies and therefore has the most instrumentation. The general layout of
the ZEUS detector is shown in Figure 4.4.
Figure 4.3: The Kinematic Range explored by the ZEUS experiment at HERA shown
on the x-Q2 plane. Shown are results from the standard low and high Q2 analysis
together with results from shifted vertex running and very low Q2 data obtained using
the BPC and BPT. This is shown in comparison with results from fixed target
experiments, including NMC, BCDMS, CCFR, E665 and SLAC.
The interaction point is surrounded by a central tracking detector (CTD) and forward
and rear tracking detectors. All of these detectors are drift chambers. In the forward
direction is the Forward Tracking Detector (FTD), consisting of three modules with the
Transition Radiation Detector (TRD) sandwiched in between. The TRD provides
electron identification in the momentum range 1 to 30 GeV. In the rear direction is the
Rear Tracking Detector (RTD), which is of the same form as a single FTD module.
Surrounding the drift chambers is a superconducting solenoid providing a field of 1.43T
that allows the measurement of track momenta.
Enclosing the drift chambers and solenoid is a uranium-scintillator calorimeter. It is
split into three sections, the forward calorimeter (FCAL), barrel calorimeter (BCAL)
and rear calorimeter (RCAL) and is used for accurate position and energy measurement
of particles in jets as well as the scattered positron.
The backing calorimeter (BAC) serves to measure leakage from high-energy jets that
are not fully contained within the calorimeter. It also acts as the return yoke for the
superconducting magnet.
Finally, the outermost components are the muon drift chambers, FMUON, BMUON
and RMUON. These allow measurement of muons, which pass directly through the
calorimeter.
Figure 4.4: The ZEUS detector. The positions of the major components are indicated. In
this representation, the protons enter from the right and the positrons from the left.
4.2.1 The ZEUS co-ordinate system
The ZEUS co-ordinate system [53] is defined with the +z direction aligned with the
direction of the proton beam, the +x direction pointing toward the centre of the HERA
ring and the +y direction pointing vertically upwards.
Angles are defined around the interaction point. The polar angle, θ, is measured relative to the z-axis and the azimuthal angle, φ, is measured relative to the x-axis in the xy
plane. This is shown graphically in figure 4.5.
Figure 4.5: The ZEUS co-ordinate system.
The components used for the analyses in this thesis are described, from innermost outwards,
in the following sections.
4.2.2 The Central Tracking Detector
The Central Tracking Detector (CTD) [54] is the innermost detector of the ZEUS
experiment. It is a cylindrical drift chamber, 2m long, with nine radial superlayers each
divided into cells containing 8 sense wires. The arrangement of these wires in part of
the chamber is shown in figure 4.6. The total number of anode (sense) wires is 4608.
The wires in the odd-numbered superlayers run parallel to the beam direction while
those in the even-numbered superlayers have a small stereo angle to allow
determination of the z co-ordinate of the hits.
Figure 4.6: The arrangement of the wires in the CTD.
The angle (~5°) was chosen such that the polar and azimuthal angular resolutions are
approximately equal. An accuracy of 1.4mm in the z measurement can be obtained.
The chamber operates in a high magnetic field in order to obtain information about
particle momenta. The electrostatics and choice of gas combined with the magnetic
field cause the electron drift trajectories to rotate through a Lorentz angle of 45° relative to the electric field direction. Furthermore, the cells are aligned at 45° relative to the
radius vector therefore ensuring the drift direction is perpendicular to high momentum
tracks. The resulting configuration brings advantages with track finding, in particular
with the left-right ambiguity, i.e. with two field wires either side of a sense wire, it is
not immediately obvious which side of the sense wire the track passes through. Also,
the maximum drift distance is ~2.5cm. This helps to resolve close tracks by reducing
the likelihood that they pass through the same cell.
The gas used is an 85:8:7 Argon: CO2: Ethane gas mix with a trace amount of ethanol
(0.84%) added to prevent whisker growth on the chamber wires [55]. This choice was
made to provide a high enough drift velocity (50 μm ns^-1) to allow a fast readout time
and enable the CTD to be used in the triggering.
Signals from the CTD are amplified before being digitised by 104MHz Flash Analogue
to Digital Converters (FADCs). The digitised signal is stored in a digital pipeline until a
decision to keep or reject it is received from the trigger. Events that pass the trigger are
processed to remove pedestals.
An additional measurement of z is provided by measuring the difference in arrival time
from pulses at each end of the wires. All wires in superlayer 1 and alternate wires in
superlayers 3 and 5 are instrumented for this. The resolution for this time difference is
~300ps which implies a resolution of ~4.4cm in the z position. The system can resolve
multiple hits with a minimum time separation of 48 ns. Although the z-by-timing resolution is worse than that provided by the stereo superlayers, the speed with which the information
is obtained allows it to be used in the trigger.
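These numbers are consistent with a simple estimate. Assuming the pulses propagate along the wire at a speed v close to the speed of light (an assumption of this estimate), the hit position and its resolution are

z \simeq \frac{v}{2}\,(t_{+z} - t_{-z}), \qquad \sigma_z \simeq \frac{v}{2}\,\sigma_{\Delta t} \approx \frac{30\ \mathrm{cm\,ns^{-1}}}{2} \times 0.3\ \mathrm{ns} \approx 4.5\ \mathrm{cm},

in rough agreement with the ~4.4 cm quoted above.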
4.2.3 The Small Angle Rear Tracking Detector
The Small Angle Rear Tracking detector (SRTD) [56] is mounted on the face of the
RCAL (z = -146cm) and covers the hole surrounding the beam pipe. It is built from an
array of scintillator strips, arranged into two layers, one with horizontal strips, the other
vertical. The SRTD covers the angular range 162° to 176° and has an area of 68 × 68 cm^2
with the exception of a 20cm x 20cm hole in the centre for the beam pipe. The layout of
the scintillator strips is shown in figure 4.7.
Figure 4.7: The arrangement of the two layers of scintillator strips in the SRTD.
Due to the fine segmentation of the scintillator fingers, the SRTD gives a position
measurement with a better resolution than the calorimeter. The SRTD position
resolution is ~3mm.
The SRTD is also used as a presampler to correct the energy of positrons for the energy
loss caused by the crossing of the dead material between the interaction point and
calorimeter. This is related to the resulting shower multiplicity and therefore to the
energy deposit in the SRTD. The correction is done on an event-by-event basis.
These two considerations allow improved reconstruction of positrons at high angles,
therefore improving the ability to do accurate low Q2 analyses.
4.2.4 Magnets
In order to measure the momentum of charged particles in the inner region, a
superconducting solenoid is located between the CTD and the calorimeter. The magnet
provides a 1.43 T field. The flux return consists of the iron in the yoke and in the
Backing Calorimeter (BAC). The field in the central region is calibrated to a precision
of  1%. The effect of this magnetic field on the beams is compensated by another
superconducting solenoid, called the compensator, located in the rear endcap of the iron
yoke.
4.2.5 The Calorimeter
The calorimeter is designed to measure accurately the energy and position of isolated
positrons and jets [57]. The design is such that the response should be equal for both
hadronic and electromagnetic particles. Furthermore, it has to cover as much solid angle
as possible.
The coverage obtained is 99.5% in the forward hemisphere and 98.8% in the rear
hemisphere with the calorimeter divided into 3 separate components, the forward
calorimeter (FCAL), barrel calorimeter (BCAL) and rear calorimeter (RCAL). The
energy of incident particles is much higher in the forward regions due to the relative
energy of the proton and positron. The depth of the calorimeter therefore varies, with
the FCAL being deepest in order to contain the highest energy jets. The overall
absorption lengths of the sections are 7λ for the FCAL, 5λ for the BCAL and 4λ for the
RCAL.
The calorimeter is made from layers of depleted uranium (238U) and plastic scintillator
arranged into modules. Each module is further segmented into EMC and HAC sections,
with the EMC sections on the inner face of the module and the HAC sections behind
them. For the FCAL and BCAL there are two HAC sections, HAC1 and HAC2.
Wavelength-shifter bars carry light pulses from each cell to photomultiplier tubes
(PMTs) at the back of each module. There are two PMTs per cell. These provide both
redundancy and an indication of faulty PMTs by examining the difference in readout.
The calorimeter is arranged so that no cracks between modules point directly at the
interaction point. This reduces the chance of energy leaving the calorimeter undetected.
The choice of uranium arises from the desire for equal hadronic and electromagnetic
response. While electromagnetic showers reach the scintillator with high efficiency,
hadronic showers tend to be partially absorbed in nuclear binding energy in the
absorber. Depleted uranium, 238U, was chosen as the absorber as it has the property of
releasing extra neutrons as it absorbs energy. These neutrons can be absorbed in the
scintillator to produce extra scintillation light. This process is known as compensation.
The relative thickness of the absorber and scintillator was adjusted to achieve a required
ratio of electromagnetic to hadronic response of 1 ± 0.05. In test beams an electromagnetic energy resolution of 18%/√E and a hadronic energy resolution of 32%/√E was achieved. A further advantage in the use of 238U is that, due to its high
atomic number, Z, it has a shorter radiation length, allowing the calorimeter to be more
compact.
Several methods are employed for calibration of the calorimeter. These include utilising
the natural radioactivity as a calibration signal, which provides a steady reference
calibration [58]. Also, a cobalt source scan, charge injection [59] and laser calibration
pulses [60] are used.
In order to improve energy resolution and particle identification, two components, the
presampler and Hadron Electron Separator (HES) are placed in front of the calorimeter.
4.2.6 The Presampler
The presampler [61] is a segmented scintillator array placed immediately in front of the
calorimeter. It consists of a layer of scintillator tiles, connected by wavelength shifting
fibres to photomultiplier tubes. The tiles have a 20cm segmentation that maps directly
onto the hadronic cells. The area covered by the rear and forward presamplers in
relation to the RCAL and FCAL is shown in figure 4.8.
61
From the beginning of 1999 running, the Barrel Presampler (BPRES) was introduced.
This consists of 32 cassettes each containing 13 scintillator tiles orientated along the z-axis. These cassettes are installed directly in front of the BCAL modules.
The presampler is used to correct for the loss in energy due to the showering of
positrons in dead material in the same way as the SRTD, on an event by event basis.
Figure 4.8: The area covered by the Forward and Rear presamplers (shaded)
superimposed on the FCAL and RCAL cells.
4.2.7 HES
The HES is a plane of 3 cm × 3 cm silicon diodes located 3 radiation lengths inside the
electromagnetic calorimeter both in the forward (FHES) and rear (RHES) directions.
This position corresponds to the electron shower maximum and therefore
electromagnetic showers provide higher deposits in the HES. The segmentation is
significantly better than the F/RCAL and allows the HES to improve the detection of
electrons and π0s.
4.2.8 The Muon Chambers
The muon chambers are divided into three components, the Forward Muon (FMUON),
Barrel Muon (BMUON) and Rear Muon (RMUON) located in the forward, barrel and
rear directions respectively. The chambers are drift chambers and measure tracks
created by the passage of muons through them.
In addition to the main detector components, there are several forward and rear
components that aid the detection of very low angle particles or provide a control of the
background.
4.2.9 Beam Pipe Calorimeter and Beam Pipe Tracker
The Beam Pipe Calorimeter (BPC) is a small tungsten-scintillator calorimeter located
close to the beam pipe to the rear of the RCAL, 3m from the interaction point. It is
designed to detect electrons scattered through very small angles and therefore at very
low Q2. Using the BPC, positrons scattered through angles in the range 18 mrad to 32 mrad can be detected, thereby extending the accessible Q2 range down to 0.1 GeV2.
In 1997, the Beam Pipe Tracker (BPT) was installed in front of the BPC. It consisted of
two silicon micro-strip detectors providing extra information on the track of the
electron, therefore allowing better control of the background and systematic effects
associated with the measurements made by the BPC.
4.2.10 The VETO Wall
The VETO Wall is an 87cm thick iron wall located 7.5m from the interaction point in
the rear direction near the tunnel exit. Its primary purpose is to protect the detector
against particles from the beam halo accompanying the proton bunches. However,
scintillator detectors are placed on both sides of the wall and provide a veto on beam
gas background events.
4.2.11 The Collimator, C5
The collimator C5 is connected to the support of the compensator coil 3.1m from the
interaction point in the rear direction. It protects the central detector from radiation
reflected by absorbers which are designed to absorb the direct synchrotron radiation
produced both up and downstream of the interaction point. Connected to the collimator
are four scintillation counters that provide an accurate timing signal of the background.
This makes it possible to distinguish background coming from the positron or proton side and to veto the showers produced by the proton beam halo scattering off C5 itself.
4.2.12 The Luminosity Detector
Measurement of luminosity is achieved using the bremsstrahlung process
e⁺p → e⁺pγ   (4.1)
where the final-state photon is emitted at very small angles with respect to the initial
positron direction. This process has a clean experimental signature and a large cross
section, well known from QED. The bremsstrahlung differential cross section is given,
in the Born approximation, by the Bethe-Heitler formula:
\frac{d\sigma_{BH}}{dE_\gamma} = 4\alpha r_e^2\,\frac{E'_e}{E_\gamma E_e}\left(\frac{E_e}{E'_e} + \frac{E'_e}{E_e} - \frac{2}{3}\right)\left(\ln\frac{4 E_p E_e E'_e}{M m E_\gamma} - \frac{1}{2}\right)   (4.2)
where E is the photon energy, Ee , E e are the incident and scattered electron energies,
E p is the proton mass, M and m are the proton and electron masses,  is the fine
structure constant and re is the classical electron radius.
For photon energies E_γ > 10 GeV this results in a cross section of 37.08 mb and a rate of 10^6 events per second for the HERA luminosity of 10^31 cm^-2 s^-1.
As the luminosity, L, is simply given by

L = \frac{N}{\sigma}   (4.3)

where N is the measured number of events and σ is the cross section, bremsstrahlung
events can therefore be used for fast and accurate luminosity monitoring. In this case,
the luminosity measurement requires the detection of the bremsstrahlung photon [62].
The luminosity monitor (LUMI) consists of a photon detector and positron detector.
The position of these components is shown in figure 4.9. The positron detector is
located 35m from the interaction point and is used to measure the energy of positrons in
coincidence with the bremsstrahlung photon.
Figure 4.9: Overview of the ZEUS LUMI detector. The locations of the four main
components along with the magnets upstream of the main detector are shown.
The photon detector, LUMI-γ, is located 107 m from the interaction point in the
direction of the positron beam. It consists of a 24X0 depth lead-scintillator calorimeter
with two radiation-lengths of absorber placed in front for the 1996 data taking period
and 3 radiation lengths of absorber from 1997 onwards. The change in absorber length
was made in order to prevent radiation damage to the calorimeter. The energy resolution
was 23%/√E for the 2X0 absorber and 32%/√E for the 3X0 absorber. The absorber is
required to shield against the large flux of direct synchrotron radiation. Inside the
calorimeter at a depth of 3X0 are two crossed layers of scintillator fingers that act as a
position detector. The design of the LUMI-γ calorimeter is shown in figure 4.10, which shows the positions of the filter, the layers of scintillator and lead and the location of
the position detector (POS-DET).
Figure 4.10: The LUMI Calorimeter.
In addition, there are also taggers used to identify electrons from photoproduction.
These consist of further positron detectors found at 8m and 44m from the interaction
point. They are not however used directly for the luminosity measurement. The 44m
tagger is a 24X0 deep tungsten-scintillator calorimeter located 44m from the interaction
point. It is designed to detect electrons in the energy range 22-26 GeV.
4.3 Trigger and Data Acquisition
The large amount of data from the detector components, coupled with the short bunch crossing interval, means there is an enormous amount of data to handle. As each event produces approximately 100 kbytes of data, it is impossible to store all the data produced.
Most events however are caused by background processes and the rates from interesting
physics processes are much lower. An efficient trigger must therefore select the
interesting events with a high efficiency while rejecting as much of the background as
possible.
At ZEUS, a three level trigger is used. An overview of the trigger and data acquisition
system is shown in figure 4.11.
4.3.1 The First Level Trigger
The aim of the first level trigger (FLT) is to reduce the input rate from the bunch
crossing rate of 10.4 MHz to an output rate of 1 kHz.
Individual FLT component processors process the data of their component then make a
decision after 1.0-2.5 μs before sending a summary of the trigger information to the
Global First Level Trigger (GFLT). This combines all the information and decides
whether to reject the data or not. The GFLT trigger calculations take 46 bunch crossings
(4.42 μs), during which time data from the components is pipelined, ensuring the
process is deadtime free. If it accepts the event, the GFLT sends a Trigger_1_Accept
signal, which includes the bunch crossing number, to each component in order to read
out the data. On receipt of this, each component keeps a Busy_Bit set for the duration of
the readout. All components must reset their Busy_Bit before the GFLT can send
another Trigger_1_Accept.
Figure 4.11: The ZEUS Trigger Layout showing the flow of data from the detector
front-end electronics, through the triggers, to final storage onto either tape or disk.
The Fast Clear (FCLR) can reject events between the FLT and SLT. It uses calorimeter
information to identify signatures for background processes and can send an abort to the
component buffers. The FCLR was active during the 1995 to 1997 running period but
was never used in the trigger as the FLT rates were never high enough to make it
worthwhile.
4.3.2 The Second Level Trigger
The Second Level Trigger (SLT) has more precise and complete data available to it than
the FLT. After a GFLT accept is received the data is digitised and processed by second
level component triggers using a network of transputers operating in parallel. The
Global Second Level Trigger (GSLT) combines the results from the components and
sends the decision to the Event Builder (EVB). The requirement of the GSLT is to
reduce the rate from 1kHz to 100Hz.
The Event Builder stores the data from the components until the Third Level Trigger is
ready to process it and combines all the component data into one coherent event. The
EVB can build up to 75 events in parallel.
4.3.3 The Third Level Trigger
The Third Level Trigger (TLT) consists of a computer farm running a software trigger.
The software is a simplified form of the offline reconstruction. Tracking reconstruction
is performed by the VCTRAK software package [63] and other electron finding and jet
finding routines are used to tag physics events. These events are then allocated a
physics category depending on which physics filter is used. The input rate from the
SLT of ~100Hz is reduced in the TLT to an output rate of 3-4Hz. Events that pass this
level are written to tape at the DESY main site over a dedicated connection (FLINK).
4.3.4 Reconstruction
In order to reconstruct the event fully, the raw data is processed using the reconstruction
software package ZEPHYR [52] some time (usually a few days) after the data was
taken. The reconstruction code uses the raw data from the detector to determine the
variables that can be used for physics analyses. This includes running the processor
intensive code, not otherwise used at the TLT, as well as including calibration constants
that are not available online. For each event, a set of tables containing relevant
information from all the components of the ZEUS detector is then filled.
Finally, similar physics events are selected using a filter and allocated a code, or DST
bit. This is in effect a 4th level trigger as events with a common DST bit can be easily
selected therefore saving computer time.
An example of a reconstructed event shown using the LAZE event display package is
shown in figure 4.12. The event shown is a relatively high Q2 neutral current event. The
tracks, clusters in the calorimeter and electron candidate can be seen.
4.3.5 DIS Event selection
As the dominant process at HERA is photoproduction, various criteria must be used in
order to select DIS events. The most significant cut is applied to the E-Pz of the event
where E is the total energy measured and Pz is the z component of the momentum.
Photoproduction events are characterised by very low Q2, i.e. Q2 ≈ 0 GeV2, and the positron
travels down the beam pipe. It is therefore not detected in the calorimeter and E-Pz is
low, with most of the detected energy of the event being in the direction of the proton.
Selecting events with minimum E-Pz therefore serves to eliminate most of the
photoproduction.
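The power of this cut follows from energy and momentum conservation. Summing over all detected final-state objects,

E - P_z = \sum_i (E_i - p_{z,i}) \approx 2 E_e = 55\ \mathrm{GeV}

for a fully contained neutral current event, since the incoming proton contributes E - p_z \approx 0 while the incoming positron, travelling in the -z direction, contributes 2E_e. Particles lost down the forward beam pipe have E \approx p_z and change this value very little, whereas a photoproduction positron escaping down the rear beam pipe removes roughly twice its energy from the sum, pushing E - P_z to low values.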
4.4 Measurement of kinematic variables
Following reconstruction, the data must then be used to generate the kinematic variables
required for physics analyses. This process is described in the following sections.
Figure 4.12: A reconstructed neutral current event.
4.4.1 Event vertex
The reconstruction of the position of the event in z, the z vertex, is required for the
measurement of angles used for the determination of kinematic variables and also has
an effect on both the detector and trigger acceptance. For example, at a fixed low Q2
positrons scattered from events closer to the RCAL have a higher probability of passing
down the beam pipe and remain undetected.
The vertex is reconstructed using VCTRAK. In this case, only information from the
CTD is used. If no vertex is found, the vertex is set to the nominal value of z = 0 cm.
4.4.2 Positron finding
The signature of a neutral current event is the presence of an isolated positron in the
final state. Event selection therefore relies on the efficient detection of this particle.
In the analysis reported in this thesis, positron candidates are identified using the
SINISTRA95 electron finder [64]. SINISTRA95 uses a neural-net based algorithm to
identify the scattered electron using information from the calorimeter. First, calorimeter
objects, which may belong to the shower of a single particle, are clustered to form cell
islands. Each cell is considered and, if it has enough energy, becomes a candidate to be
connected to its neighbouring cells. The process is repeated with cells being joined with
their highest energy neighbours to form islands.
SINISTRA95 then processes all islands in the electromagnetic section of the
calorimeter and returns a probability, P, for each island. This ranges from 0, where the island is of hadronic origin, to 1, where the island is consistent with a positron.
If there is more than one positron candidate in an event the routine FINDIS95 selects
the candidate with the highest probability.
4.4.3 Positron Energy corrections
The precise measurement of the energy of the scattered positron is vitally important for
the kinematic reconstruction. The energy measured in the calorimeter cells
corresponding to the positron is subject to various detector effects which must be
corrected for. In the RCAL region, where most of the scattered positrons are found,
three different corrections are applied.
First, a global correction to the energy of the calorimeter is applied. This is done on a
cell-by-cell basis with the aim of matching the measured positron energy in data and
Monte Carlo simulation. This offset is caused by inadequate simulation of the dead
material within the detector. Details of the simulation of events at ZEUS using Monte
Carlo techniques are given in Chapter 5. The energy scale of the data is corrected
upwards to improve agreement with Monte Carlo.
The correction factors are as follows:
• FCAL – no correction to data.
• BCAL – data corrected upwards by 5%.
• RCAL – data corrected by a factor varying from 2% downwards to 5% upwards in order to account for variations in the response of the different RCAL cells.
Following this, the energy of the scattered electron in the RCAL is corrected using the
SRTD and presampler. If there is SRTD information present, the preshowering of the
electron is measured in terms of the number of MIPS (Minimum Ionising Particles) in
the SRTD. This is proportional to the energy loss and the calorimeter energy is
corrected accordingly. If there is no SRTD information, the presampler is used to
correct the energy. Again, the approach is based on the number of MIPS in the
presampler.
Finally, a further correction is made to the scattered positron energy to take into account
the non-uniformity of the RCAL. This results in energy loss due to part of the
electromagnetic shower passing through gaps and cracks between modules, towers and
cells. A correction is determined by comparing the measured energy at a certain
position with the energy predicted by measuring the kinematics using the double angle
method, described in section 4.5.2. The double angle method uses only the angles of the
positron and hadronic system for kinematic reconstruction and is therefore independent
of the calorimeter energy scale. The correction is determined separately for data and
Monte Carlo due to inaccuracies in the simulation of the cracks in the Monte Carlo.
4.4.4 Positron position measurement
The precise measurement of the scattered angle of the positron is also important for
kinematic reconstruction. The scattering angle is defined as the angle between the z-axis
and the line joining the z vertex to the position of the positron on a well-defined plane
perpendicular to the z-axis. The measurement of this position makes use of the best
available detector component, i.e. if there is a signal in the SRTD associated with the
positron, the SRTD position is used.
4.4.5 Reconstruction of the Hadronic Final State
For the hadronic reconstruction, all energy in the calorimeter except the scattered
positron is considered. Therefore, an important aspect of the energy measurement is
calorimeter noise suppression. The sources of the noise in the calorimeter range from
sparks in the PMTs, electronic noise in the DAQ system to general background noise
from the radioactivity of the uranium. This is controlled by setting the energy of these
cells to zero.
All EMC cells with E < 80 MeV and HAC cells with E < 110 MeV are removed, while for isolated cells, i.e. those without a neighbouring cell with energy above some threshold value, the cuts are raised to 100 MeV (EMC) and 140 MeV (HAC). Furthermore, a cut on the imbalance
between the two PMTs in a cell helps to remove sparking PMTs as generally in this
case only one of a pair sparks. Finally, hot cells, which have on average much higher
activity than other cells over certain run ranges, are removed ‘by hand’ from calorimeter
tables used for the reconstruction.
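A simplified sketch of the threshold part of this noise suppression is given below; the cut values are those quoted above, while the cell representation and the neighbour flag are illustrative assumptions (the neighbour search itself is not shown).

#include <vector>

struct CalCell {
  double energyGeV;                   // measured cell energy
  bool   isEMC;                       // true for EMC cells, false for HAC cells
  bool   hasNeighbourAboveThreshold;  // assumed to be filled by a separate neighbour search
};

// Set noisy cells to zero energy, applying the tighter cuts to isolated cells.
void suppressNoise(std::vector<CalCell>& cells)
{
  for (CalCell& cell : cells) {
    const bool isolated = !cell.hasNeighbourAboveThreshold;
    const double cut = cell.isEMC ? (isolated ? 0.100 : 0.080)    // GeV
                                  : (isolated ? 0.140 : 0.110);
    if (cell.energyGeV < cut) cell.energyGeV = 0.0;
  }
}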
In order to reconstruct the hadronic kinematics in the event the simplest method would
be to use the calorimeter cell information only. There are possible sources of inaccuracy
with this approach however. For example, the energy measurement can be affected by
so-called backsplash; i.e. scattering or showering in the material in front of the
calorimeter can send some energy back into the detector. This is particularly significant
at low y where the hadronic energy is concentrated in the forward region. Any
extraneous energy in the RCAL can therefore strongly bias the E-Pz measurement.
A more sophisticated method therefore also includes information from tracking, which at low energies has better resolution than the calorimeter. An algorithm that uses this
approach is the hadronic energy flow algorithm, which combines calorimeter and
tracking information into objects called ZUFOs (ZEUS Unidentified Flow Objects).
The first step of this algorithm involves applying the calorimeter noise and energy
corrections before creating cell islands. Following this, cone islands are created, where
islands belonging to either a single particle or a jet of particles are collected together.
Here, cell islands are taken and a clustering in θ-φ space is performed. Cell islands from
different calorimeter sections are matched, starting from the outermost HAC2 cells and
moving inwards. After the clustering has been completed, tracks are matched to these
islands. This leaves three types of object: islands with a matched track, islands with no track, and tracks with no matching island. Finally, the algorithm decides for each object which
information to use, employing various selection criteria. For example, where there is a
calorimeter object with no track it is considered a neutral particle and the calorimeter
information is used.
The backsplash correction uses a cut on ZUFOs found at large polar angles relative to the angle of the current jet, γ_h.
4.5 Kinematic Reconstruction
In deep inelastic scattering the interaction of interest is electron-quark scattering. This is
a two-body process with two degrees of freedom and as such only two measured
quantities are required to completely reconstruct the event kinematics. As ZEUS is, to a
good approximation, a hermetic detector, both the scattered positron and the jets are
detected thus allowing several reconstruction methods to be used. The choice of which
method to use depends largely on the kinematic region being probed.
The two kinematic variables calculated in each of the following sections are Q2 and y,
as defined in equations 2.11 and 2.13 respectively.
4.5.1 Electron Method
The electron method reconstructs the kinematics purely using information from the
scattered electron. In this case, the energy of the scattered electron, E_e', and the angle through which it is scattered, θ_e, are used. Using the electron method, Q2 and y are given by:

Q_e^2 = 2 E_e E_e' (1 + \cos\theta_e)                (4.4)

y_e = 1 - \frac{E_e'}{2 E_e}(1 - \cos\theta_e)       (4.5)
At high y and moderate Q2 the electron method provides a good reconstruction of the
event kinematics. However the resolution at low y is poor.
4.5.2 Double Angle Method
The double angle method makes use of the fact that angles are often measured more
accurately than energies. Only the angles of the scattered positron, θ, and of the hadronic system, γ, are used for the reconstruction. This method yields Q2 and y of

Q_{DA}^2 = 4 E_e^2 \, \frac{\sin\gamma \,(1 + \cos\theta)}{\sin\gamma + \sin\theta - \sin(\theta + \gamma)}      (4.6)

y_{DA} = \frac{\sin\theta \,(1 - \cos\gamma)}{\sin\gamma + \sin\theta - \sin(\theta + \gamma)}                   (4.7)
In the analyses described in this thesis however, the angle of the scattered positron is
usually too low to make this method practicable.
4.5.3 Jacquet-Blondel Method
In principle, as all the particles are detected, the hadronic angle and energy can be used
for reconstruction of kinematics as well as the electron variables. This however, is
complicated by the fact that the struck quark, as well as not being completely
independent from the proton remnant, also fragments into a number of particles. The
Jacquet-Blondel method therefore uses all the detectable final state particles, including
the proton remnant, with Q2 and y given by:
Q_{JB}^2 = \frac{(p_T^h)^2}{1 - y_{JB}}             (4.8)

y_{JB} = \frac{\delta}{2 E_e}                        (4.9)

where

\delta = \sum_i (E_i - p_{z,i})                      (4.10)

(p_T^h)^2 = \Big(\sum_i p_{x,i}\Big)^2 + \Big(\sum_i p_{y,i}\Big)^2      (4.11)
and the summations exclude the scattered positron.
These variables are chosen as they are not significantly affected by particles lost down the beam pipe, nor by final state fragmentation. However, at high y the Jacquet-Blondel method has very poor resolution for the measurement of both Q2 and y.
4.5.4 Sigma Method
In an attempt to combine the advantages of the electron and Jacquet-Blondel methods, the sigma method was introduced [65]. This method uses hadronic as well as scattered electron quantities in order to exploit the benefits of both in the high and low y regions. In the sigma method, Q2 and y are given by:

Q_\Sigma^2 = \frac{E_e'^2 \sin^2\theta_e}{1 - y_\Sigma}                  (4.12)

y_\Sigma = \frac{\Sigma}{\Sigma + E_e'(1 - \cos\theta_e)}                (4.13)

where \Sigma is the hadronic E - p_z sum, as defined in equation (4.10).
In the kinematic region covered by the following analysis, i.e. low Q2 and high y, the
sigma method provides the best reconstruction of y while the electron method will be
used for the reconstruction of Q2.
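As an illustration of how these formulae are applied, a short sketch is given below. It is not the ZEUS reconstruction code; the beam energy and the form of the inputs (the hadronic E - p_z sum and transverse momentum) are simply the quantities appearing in equations (4.4)-(4.13).

import math

# A minimal sketch of the electron, Jacquet-Blondel and sigma methods,
# using the nominal positron beam energy E_e = 27.5 GeV.
E_BEAM = 27.5  # GeV

def electron_method(e_prime, theta_e):
    q2 = 2.0 * E_BEAM * e_prime * (1.0 + math.cos(theta_e))           # eq. (4.4)
    y = 1.0 - (e_prime / (2.0 * E_BEAM)) * (1.0 - math.cos(theta_e))  # eq. (4.5)
    return q2, y

def jacquet_blondel(delta_h, pt_h):
    """delta_h = sum(E - p_z) over hadrons, pt_h = hadronic transverse momentum."""
    y = delta_h / (2.0 * E_BEAM)                                      # eq. (4.9)
    q2 = pt_h ** 2 / (1.0 - y)                                        # eq. (4.8)
    return q2, y

def sigma_method(e_prime, theta_e, delta_h):
    y = delta_h / (delta_h + e_prime * (1.0 - math.cos(theta_e)))     # eq. (4.13)
    q2 = (e_prime * math.sin(theta_e)) ** 2 / (1.0 - y)               # eq. (4.12)
    return q2, y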
Chapter 5
Monte Carlo
Monte Carlo simulations are extremely important tools in modern particle physics. They
are useful both for determining the acceptance and resolution of detectors as well as for
checking the validity of many theories. By simulating well-understood interactions and
comparing with the data an understanding of the detector can be built up. Conversely,
comparing a simulated process with data corrected for the response and behaviour of
the detector can shed light on areas of physics that are not yet understood. Monte Carlo
simulations therefore provide an invaluable handle for comprehending the complex
interactions that take place in a modern particle physics detector.
This chapter describes the Monte Carlo event generators used for the analyses described
in this thesis, beginning with the generator used for simulating QED Compton events and
moving onto the two generators used for simulating ISR events. Finally, the ZEUS
detector simulation is described, whereby the response of the detector for each
simulated event is determined.
5.1 QED Compton Monte Carlo
For the simulation of QED Compton events, the reaction considered is ep → eγX and
corresponds to the diagrams shown in figure 5.1.
Figure 5.1: The two leading order contributions to QED Compton scattering.
These diagrams also describe Bremsstrahlung and both initial and final state radiative
events. The QED Compton case, however, corresponds to the case where q₁² is finite.
Here, the scattered positron and photon are observed and the hadronic system travels
down the beam pipe.
The program used to generate QED Compton Monte Carlo events is COMPTON 2.00
[66]. This generates events in two steps. In the first step, ep → eγX events are generated
according to an approximation of the cross section. For elastic events, similar to the
interaction described in section 2.3, the conventional expressions for the proton’s
electromagnetic form factors are used, while, for inelastic events, which are similar to
the interaction described in section 2.4, a structure function is used. In the second step,
these events are weighted by comparing the value of the cross section from the first step
to the exact cross section. However, as one moves into the radiative regime where q₂² > q₁², these weights become large. Therefore, in order to keep the weights small, a cut on the acollinearity, Δφ < 45°, is made, where Δφ is the deviation of the azimuthal angle between the scattered electron and photon from 180°. The acollinearity is a measure of the transverse momentum balance between the two particles.
The cross section is also affected by radiative corrections, in particular, Initial State
Radiation. This is taken into account by reducing the incident positron’s energy
according to a probability law,
d (k )    k  1 (1  k  k 2 / 2)dk
80
(5.1)
where

 2E 1 
ln 
 ,
  me 2 
2
k
E
E
(5.2)
E is the energy of the beam electron, E the energy of the ISR photon and me is the mass
of the electron.
Again, the events are given weights according to this correction. So, in order to limit the size of the weights, the hard photon tail of the γ spectrum is removed by a lower limit on the e-γ visible energy of 10 GeV.
Chapter 6 describes analyses of QED Compton events taken in 1996 and 1997 with
different detector and trigger configurations. Therefore, for the 1996 analysis, 60000
elastic and 20000 inelastic events were generated, corresponding to 23.4pb-1 and
23.5pb-1 respectively.
For the 1997 analysis, 120000 elastic events (47pb-1) and 40000 inelastic events
(47pb-1) were generated.
5.2 ISR Monte Carlo
Simulated ISR events are generated using the Monte Carlo program DJANGO 6.24
[67]. The DJANGO 6.24 program contains four separate components. Initially the
primary electron-quark interaction is simulated using the program HERACLES 4.5.2
[68,69]. At this stage, deep inelastic e+p collisions via both neutral and charged current
interactions are simulated at the parton level to order α². Input parameters to the
simulation therefore include the parton distribution functions that parameterise the
interaction at the quark vertex. The functions are taken from PDFLIB 7.06 [15] using
the parameterisation from MRS set (A) [16]. The longitudinal structure function, FL, is
set to zero. This stage also includes QED radiative corrections to the cross section at the
electron vertex.
ARIADNE 4.08 [70-72] describes the development of the parton shower. In this case,
the colour dipole method is used to perform the gluon radiation. The conversion of the
partonic final state into the observed hadrons is performed by the JETSET 7.409 [73,74]
program using the Lund string model.
There is also a class of events observed at HERA, known as Diffractive events, that are
not described by DJANGO.
Diffractive events are defined by a large rapidity gap in the hadronic final state. In other
words there is nearly no energy filling the angular region between the hadrons coming
from the struck quark and those originating from the proton remnant. Such events could
be thought of as involving a colourless object, known as a pomeron, in the interaction.
Diffractive events are generated using RAPGAP 2.06/26 [75]. In order to take into
account QED radiative effects, RAPGAP is interfaced to HERACLES, which simulates the radiative corrections at the electron vertex. Furthermore, in order to take into account initial state QCD radiative
corrections, the simulation of parton showers is performed by interfacing to PYTHIA
[76] and LEPTO [77]. The final state partonic showers however, are again simulated by
JETSET.
As the initial kinematic distributions for the diffractive and non-diffractive events are
similar, the two samples can be mixed in order to obtain the correct final distribution.
This mixing ratio is found by optimising the agreement between data and MC in the
distribution of η_max [45], with the fraction of diffractive events found to be 15%1.
Four samples of non-diffractive MC were generated, Q2 > 0.1 GeV2, Q2 > 0.2 GeV2, Q2
> 0.5 GeV2 and Q2 > 2 GeV2 while for the diffractive MC, three samples of Q2 > 0.1,
0.2 and 0.5 GeV2 were generated. The samples include a cut of > 3 GeV on the energy
of both the photon and scattered positron, plus a cut on the polar angle, θ, of the hadronic final state of less than 11 mrad.
1 η_max refers to the pseudorapidity, η = -ln(tan(θ/2)), of the most forward energy deposit or track in the event.
Finally, in order to increase the MC statistics for the FL measurement, further non-diffractive samples at Q2 > 0.75 GeV2 were generated. In addition, these samples also
include a cut of y > 0.04 in order to increase the luminosity corresponding to a given
number of events generated. This is acceptable as there is a high minimum y cut in the
FL analysis. In total, 51.74pb-1 of non diffractive and 4.6pb-1 of diffractive Monte Carlo
were generated.
5.3 Detector simulation
The output of the Monte Carlo event generators consists of a list of the 4-vectors of all
the final state particles. These are entered into a simulation of the detector that describes
the response of all the components and the efficiency of the triggers. This simulation of
the detector is performed by the MOZART [52] package, which is based on GEANT
[79]. The simulation of the trigger is done by the ZGANA package [80] with the same
trigger logic simulated as that used online. After simulating the response of the detector
to the passage of the individual particles, the reconstruction code used is identical to the
online code. Hence, the output is given in an identical format to the data format,
allowing a direct comparison between Monte Carlo and data.
5.3.1 Simulation of the LUMI-γ energy response
Although most of the detector is well simulated by MOZART, the energy response of
the LUMI-γ calorimeter does not adequately take into account the energy loss due to the
filter. Also, a new filter was introduced for the 1997 running period, further changing
the response. It is therefore necessary to perform the simulation of this component
separately.
For the 1996 data-taking period, the energy response of the calorimeter is described by
introducing an energy smearing function and then correcting for the energy loss in the
filter. This is given by the following expression:
E_\gamma^{measured} = (E_{True} - offset) \cdot (1.0 + nonlin \cdot (E_e - E_{True})) + res        (5.3)
where:

• E_True is the true energy of the photon.

• offset is a constant that shifts the endpoints of the photon energy spectrum to smaller values in order to take into account the energy loss in the absorber in front of the calorimeter.

• nonlin is another constant that takes into account the nonlinearity in the energy measurement, caused by effects in the detector electronics.

• E_e is the energy of the positron beam.

• res is the resolution term of the calorimeter; it is calculated on an event-by-event basis by smearing the true photon energy with a gaussian of width σ_E.
The extra filter used for the 1997 data-taking period changed the form of the energy
response. The new spectrum is described well by the gamma function, with two new parameters, Add and α, introduced in order to take into account the asymmetric response
of the calorimeter, i.e. the smearing for low photon energies is relatively wider.
P_\Gamma(x, \alpha) = \frac{1}{\Gamma(\alpha)} \int_0^x t^{\alpha - 1} e^{-t} \, dt                (5.4)

with α = 19.5, or, in the limit:

P_\Gamma(t < x < t + dt, \alpha) = \frac{1}{\Gamma(\alpha)} \, t^{\alpha - 1} e^{-t} \, dt          (5.5)
So, the energy response becomes:

E_\gamma^{measured} = \left(E_{True} - offset - E_\Gamma \, \frac{\alpha - 1.0}{\beta}\right) \cdot \left(1.0 + nonlin \cdot \left(E_e - E_{True} - E_\Gamma \, \frac{\alpha - 1.0}{\beta}\right)\right)        (5.6)

where

\beta = (E_{True} - offset) \cdot \sigma_E^2 + Add          (5.7)

E_\Gamma \sim P_\Gamma(\alpha)                              (5.8)
The calibration constants [81] are summarised in table 5.1.
Calibration Constant      1996        1997
offset                    0.2 GeV     0.38 GeV
σ_E                       0.23        0.323
nonlin                    0.0011      -0.0005
α                         -           19.5
Add                       -           0.3

Table 5.1: The LUMI-γ calibration constants for 1996 and 1997.
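As an illustration, a minimal sketch of the 1996 parameterisation of equation (5.3), using the constants of Table 5.1, is given below. It is not the thesis code; in particular, the event-by-event resolution term res is assumed here to be gaussian with width σ_E·√E_true, which is only one possible reading of the width convention.

import numpy as np

# Sketch of the 1996 LUMI-gamma energy response of equation (5.3).
OFFSET_96 = 0.2     # GeV, from Table 5.1
SIGMA_E_96 = 0.23   # from Table 5.1
NONLIN_96 = 0.0011  # from Table 5.1
E_BEAM = 27.5       # GeV, positron beam energy

def smear_lumi_gamma_1996(e_true, rng=None):
    """Return a simulated measured LUMI-gamma energy for a true photon energy (GeV)."""
    rng = rng or np.random.default_rng()
    res = rng.normal(0.0, SIGMA_E_96 * np.sqrt(e_true))   # assumed width convention
    return (e_true - OFFSET_96) * (1.0 + NONLIN_96 * (E_BEAM - e_true)) + res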
The simulation of the photon energy compared to the true energy for the two years is
shown in figure 5.2. As can be seen, the extra filter serves to widen the distribution as
well as shift the simulated energy to slightly lower values. Finally, only simulated
photons that pass through the LUMI-γ calorimeter are considered by performing a cut
on the physical aperture of the calorimeter. This is defined in section 7.5.1.
Figure 5.2: The difference between the measured and true photon energies for 1996 and
1997.
Chapter 6
QED Comptons
6.1 Introduction
QED Compton events, radiative events and bremsstrahlung all have the same final state and may therefore be difficult to
distinguish. In this thesis, QED Comptons are a possible source of background to initial
state radiative events. A good understanding of QED Comptons would therefore enable
them to be efficiently tagged and removed.
This chapter describes the use of QED Compton events as a cross check for the standard
luminosity measurement. This measurement is both useful in its own right and also
requires a good understanding of the process. The event selection can therefore be used
to efficiently remove QED Comptons from the ISR analysis.
The leading order diagrams for radiative events were shown in figure 5.1 and are given
again here in figure 6.1. QED Compton events are in principle distinguished from bremsstrahlung and from initial and final state radiative events by the values of the 4-momentum transfers q₁² and q₂². The bremsstrahlung process is characterised by q₂² ≈ 0 and q₁² ≈ 0. Here both the scattered photon and positron travel in the direction of the electron beam. Initial state radiative events have a large value of q₂² and q₁² ≈ 0. In this case the photon travels in the direction of the electron beam while the positron can be scattered at large angles. QED Compton events, however, have q₂² ≈ 0 but q₁² non-zero. In this case both photon and positron can be scattered at large angles and measured in the main detector. The 4-momentum transfer between the proton and the positron is very small in this case and as such the positron-photon pair and the hadronic final state have transverse momenta close to zero.
Furthermore, the QED Compton process can be defined as either elastic or inelastic. In
the elastic case the proton remains intact and passes undetected down the beam pipe. As
the elastic form factors of the proton are well known the cross-section for this can be
precisely calculated.
In the inelastic case the proton fragments and as such the cross-section calculation is not as well known. As the final state has little transverse momentum, most of the hadronic
activity also passes undetected down the beam pipe. This can become a significant
background when making measurements using elastic Comptons.
The experimental signature of QED Compton events is therefore a positron-photon pair,
balanced in pt with little or no detected hadronic activity.
Fig 6.1: The two leading order contributions to QED Compton scattering.
QED Compton Monte Carlo events are generated using the COMPTON 2.00 program
described in section 5.1. The kinematic cuts used for generating the events are given in
Table 6.1
Parameter                                   Min     Max
Positron Beam Energy (GeV)                  27.5    27.5
Proton Beam Energy (GeV)                    820     820
Scattered Positron Energy (GeV)             3       -
Photon Energy (GeV)                         3       -
Energy Sum (e+ and γ) (GeV)                 10      -
Invariant Mass of eγ (GeV/c2)               1       300
Acollinearity (°)                           -       45
Angle between outgoing e+ and γ (°)         3       177
Total transverse momentum (GeV)             -       20

Table 6.1: The kinematic variables used for the COMPTON 2.00 program.
6.2 Event selection
6.2.1 QED Compton Trigger
The trigger used for the selection of QED Comptons requires that events pass the third
level trigger bit DIS11. At the FLT this requires that global first level trigger bit 62 is
set. For 1996 running, GFLT62 requires an isolated cluster of calorimeter cells and a
veto from the C5 and VETO wall. These vetos remove so called ‘beam gas’ events
where a proton interacts with a gas molecule upstream of the detector. In addition, for
1997 running there are further requirements on the presence of good SRTD and CTD
tracks.
At the second and third trigger levels, there is a cut on the presence of hadronic and
electromagnetic islands in the calorimeter. Particles entering the calorimeter will
shower and deposit their energy over several adjacent cells. An algorithm runs over
clusters of such cells and those whose neighbouring cells pass energy and probability
cuts are added to the island. The trigger requires that there are no hadronic islands
present. Furthermore, one electromagnetic island must be present with energy greater
than 2 GeV accompanied by a second electromagnetic island with energy greater than 4
GeV and the total energy in the FCAL inner ring must be less than 50 GeV. The FCAL
inner ring takes into account only the FCAL modules nearest the beam pipe and acts to
suppress the background from the proton remnant. Finally, there is a requirement that
the total E-Pz of the event is greater than 30 GeV and the acollinearity is less than 90°, or π/2 radians.
As an initial selection, events that fire the QED Compton trigger bit are taken. In
addition at least one of the electron candidates, identified using the SINISTRA95
electron finder must have a probability > 0.5. The corrections outlined in section 4.4 are
then applied to the data before a more detailed set of cuts are made.
6.2.2 Cuts
After the preselection described above, 247782 events remain for the 1996 data set and
567968 events for the 1997 data set.
The following cuts are then applied:

• Two electromagnetic clusters, “electron candidates”, are found using SINISTRA95, both with probabilities greater than 0.9.

• The energy of both candidates, measured by the calorimeter or SRTD, is greater than 8 GeV, i.e. E_e > 8 GeV and E_γ > 8 GeV.

• No more than one cluster has an associated track.

• Only RCAL clusters outside a region of 13cm x 13cm around the beam pipe are accepted. This ‘boxcut’ prevents leakage of the electron shower into the beam pipe and ensures the complete reconstruction of the electron and photon candidates. It also acts to suppress the initial state radiative background.

• The z position of the event vertex must be within 50cm of the nominal interaction point.

• The total E-Pz of the event must lie in the range 35 GeV – 65 GeV.

• The total hadronic component of the calorimeter energy, i.e. not including the electron or photon, is not greater than 2 GeV. This cut is made to reduce the inelastic QED Compton contribution.

Finally, two further cuts were applied to reduce the ISR and DIS backgrounds:

• To remove the ISR background, the acollinearity must satisfy Δφ < 5°.

• The removal of the DIS background requires |θ_e - θ_γ| < 85°, where θ_e and θ_γ are the polar angles of the electron and photon respectively.
After the above cuts 6183 events remained for 1996 and 14540 events for 1997.
Figure 6.2 shows some resulting distributions for elastic and inelastic Monte Carlo
normalised to the same luminosity as each other. The plots show the acollinearity, the
total transverse momentum, P_T, the invariant mass of the electron-photon system, M_eγ, and the difference in polar angle, θ_e - θ_γ. As 1996 and 1997 data will be treated
separately later, the distributions for 1996 and 1997 are given separately. The difference
between the elastic and inelastic distributions can clearly be seen. The largest difference
is in the acollinearity and PT distributions where the elastic QED Comptons have
relatively more events with lower total transverse momentum.
Figure 6.2: Elastic and Inelastic QED Compton MC Distributions for 1996 data set
(upper plots) and 1997 data set (lower plots).
6.3 Measuring the inelastic contribution
In the data, the sample of elastic QED Compton candidates is contaminated by inelastic
QED Comptons. As the Q2 of such events is generally low, in many cases it is likely
that the proton remnant passes down the beam pipe and remains undetected therefore
faking an elastic candidate. In order to correct for this contamination inelastic MC is
added to the elastic MC.
The elastic MC are scaled to the data luminosity while the inelastic fraction is
calculated by varying the relative proportions of elastic and inelastic Monte Carlo to find the minimum χ² of a fit to the data. The fraction of inelastic events found from the 1996 Monte Carlo is 14.7% ± 2.2% while for 1997 the percentage is 12.2% ± 2.7%.
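A minimal sketch of such a determination is given below. It is not the thesis code: the histograms, the scan range and the simple Poisson errors are assumptions made for the example; it only illustrates the idea of scanning the inelastic fraction and choosing the value with the smallest χ² against the data.

import numpy as np

# Sketch: scan the elastic/inelastic mixture and pick the fraction that
# minimises a simple chi-squared against the data histogram.
def best_inelastic_fraction(data, elastic_mc, inelastic_mc,
                            fractions=np.linspace(0.0, 0.5, 101)):
    data = np.asarray(data, dtype=float)
    elastic_mc = np.asarray(elastic_mc, dtype=float)
    inelastic_mc = np.asarray(inelastic_mc, dtype=float)
    variance = np.maximum(data, 1.0)                 # simple Poisson errors per bin
    chi2 = []
    for f in fractions:
        model = (1.0 - f) * elastic_mc + f * inelastic_mc
        model *= data.sum() / model.sum()            # keep the total normalised to the data
        chi2.append(np.sum((data - model) ** 2 / variance))
    return fractions[int(np.argmin(chi2))]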
Comparisons between the data and Monte Carlo for 1996 and 1997 are shown in figures
6.3 and 6.4 respectively.
Figure 6.3: Data v Elastic + Inelastic MC comparison for 1996
Figure 6.4: Data v Elastic + Inelastic MC comparison for 1997.
In general, there is a good agreement between data and Monte Carlo. However, for the
acollinearity plots there is a deficit of MC at values of Δφ higher than 2.5°. This is due to
the poor description of the inelastic contribution. The probable cause of this is the
inadequate simulation of the hadronic final state in the COMPTON2.0 Monte Carlo.
The simulation of the parton shower and hadronic final state is not performed by the
generator and therefore any hadrons produced in inelastic events cannot be tagged in the
Monte Carlo. The use of DJANGO to produce QED Compton events would overcome
this problem as the hadronic components are produced in full by ARIADNE and
JETSET and would be seen in the final event.
6.4 Luminosity Measurements
An upgrade to the luminosity at HERA is being planned for the second half of 2000 and
early 2001. This will involve both improvements to the accelerator in the form of
magnet upgrades, changes to the interaction regions, and improvements to the ZEUS
detector in the form of new tracking detectors. The integrated luminosity that has been
provided by the HERA accelerator for the years 1994 - 1999 has now reached over
80pb-1. After the upgrade the luminosity that will be available will increase to an
expected 150 pb-1 per year.
The increased event rate however will have profound implications for the measurement
of luminosity using the current luminosity monitor. Indeed, even with improvements,
the required accuracy of ±1% may not be achieved with this monitor. Therefore two
new detectors will be installed in order to provide complementary luminosity
measurements while the current LUMI detector will be upgraded in order to act as a
cross check.
There are three different aspects to the upgrade in Luminosity measurements at ZEUS.
First, the current LUMI-γ calorimeter will have the current filter replaced by an active
aerogel filter [82]. This gives the advantage of increasing the radiation length of the
filter, reducing the damage to the calorimeter from synchrotron radiation while using
the Cherenkov radiation emitted by converted photons to maintain the bremsstrahlung
signal.
Also, at 92m from the interaction point, a photon spectrometer will be installed [83].
Bremsstrahlung photons convert with an efficiency of ~11% in the beam pipe exit
window into e+e- pairs. These in turn will be split by a dipole magnet and detected in the
current Beam Pipe calorimeters, moved into the new position. Finally, a new electron
tagger will be introduced at 6m [84] to detect electrons in the range 6-9 GeV (5-8 GeV
positrons). This will reduce the contribution of the acceptance & energy scale of the
photon detectors to the overall systematic error of the luminosity measurement.
Like the bremsstrahlung events used for the current luminosity measurement, elastic QED Comptons have a precisely calculable cross section and are therefore suited to
luminosity calculations. As they are detected in the calorimeter however, they are not
affected by the above problems caused by the Luminosity upgrade and are a potential
way of cross checking the luminosity measurement.
6.4.1 Calculation of the 1996 and 1997 luminosity
The integrated luminosity is calculated using elastic QED Compton candidates via the
equation
L = N_Candidates / (σ A)                (6.1)

N_Candidates is the total number of candidates, corrected for the fraction of inelastic candidates, σ is the cross section for the elastic process and A is the acceptance, i.e. the fraction of all events that are detected. The luminosity is calculated by taking the scaled number of elastic and inelastic MC candidates, using the method described in [85]:

N_Candidates = N_Measured × (1 - (# scaled inelastic MC candidates) / (# scaled inelastic MC + # scaled elastic MC candidates))

A = Acceptance = (# elastic MC candidates after cuts) / (# elastic MC candidates generated)        (6.2)

σ = cross section for the generated elastic MC Comptons (in this case, 2556.6 nb)
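A minimal sketch of this extraction is given below. It is not the thesis code; the function name and arguments are assumptions, with only the elastic cross section of the generated sample taken from the text above.

# Sketch of the luminosity extraction of equations (6.1) and (6.2).
SIGMA_ELASTIC_NB = 2556.6          # nb, cross section of the generated elastic sample

def qedc_luminosity(n_measured, n_inel_mc_scaled, n_el_mc_scaled,
                    n_el_mc_after_cuts, n_el_mc_generated):
    """Return the integrated luminosity in pb^-1 from elastic QED Compton candidates."""
    # Correct the measured candidates for the inelastic contamination.
    n_candidates = n_measured * (1.0 - n_inel_mc_scaled /
                                 (n_inel_mc_scaled + n_el_mc_scaled))
    # Acceptance from the elastic Monte Carlo.
    acceptance = n_el_mc_after_cuts / n_el_mc_generated
    sigma_pb = SIGMA_ELASTIC_NB * 1000.0      # 1 nb = 1000 pb
    return n_candidates / (sigma_pb * acceptance)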
For the 1996 luminosity calculation the result is:

L = 10.77 ± 0.19 (stat) pb-1

The 1997 luminosity is also calculated using equation (6.1), with

L = 28.05 ± 0.40 (stat) pb-1
6.4.2 Systematic errors
It is important to assess the uncertainty in the measurement caused by the choice of cuts
and the extraction method used. In addition to knowing the statistical limitation of the
measurement, this indicates the extent to which the precision of the measurement is
limited by detector effects.
Systematic errors are estimated by independently varying the cuts by the error on the
measured value and measuring the difference of this result with the nominal result. The
individual shifts are then added in quadrature to give the total systematic error.
The changes to the cuts are as follows:
1. Increase the Boxcut to 14cm.
2. Decrease the energy cut for the electron/photon by 1 GeV.
3. Increase the energy cut for the electron/photon by 1 GeV.
4. Change the cut on the z vertex to ±60cm.
5. Decrease the |θ_e - θ_γ| cut to 84°.
6. Increase the |θ_e - θ_γ| cut to 86°.
7. Change the Total E-Pz cut to 25 – 65 GeV.
8. Change the Total E-Pz cut to 35 – 75 GeV.
9. Change the assumed fraction of inelastic Comptons to 12.5% for 1996 and 9.5% for 1997.
10. Change the assumed fraction of inelastic Comptons to 16.9% for 1996 and 14.9% for 1997.
11. Change the cut on hadronic energy to < 3 GeV.
12. Increase the Δφ cut by 1°.
13. Decrease the Δφ cut by 1°.
The effects of these systematic checks on the 1996 and 1997 luminosities are shown in
figures 6.5 and 6.6 respectively.
Figure 6.5: Effect of systematic checks on measured 1996 luminosity. The fractional
change for each systematic check is shown, with the dashed line indicating no
systematic shift.
Figure 6.6: Effect of systematic checks on measured 1997 luminosity. The fractional
change for each systematic check is shown, with the dashed line indicating no
systematic shift.
The largest contributions to the systematic error are from the calculation of the inelastic
contribution and the acollinearity cut. The uncertainty in both these systematics could
be reduced by using Monte Carlo that describes the final hadronic components. This,
unfortunately, is not performed by the COMPTON 2.0 Monte Carlo. Also, the use of
forward tagging detectors could provide more information about the proton remnant.
The resulting value for the 1996 luminosity is
L_1996 = 10.77 ± 0.19 (stat) +0.38/-0.33 (sys) pb-1        (10.71 ± 0.12 pb-1)

While the value for 1997 is

L_1997 = 28.05 ± 0.40 (stat) +1.22/-1.06 (sys) pb-1        (27.87 ± 0.42 pb-1)
In both cases, the value given by the standard luminosity measurement is given in
brackets.
6.5 Conclusion
The integrated luminosity obtained for 1996 using the luminosity monitor is
10.705 ± 0.12 pb-1. A comparison with the result from the 1996 QED Compton analysis, L_1996 = 10.77 ± 0.19 (stat) +0.38/-0.33 (sys) pb-1, shows that they are in good agreement within experimental error.
Also, the value obtained from the luminosity monitor in 1997 of 27.87 ± 0.42 pb-1 compares well with the value measured using elastic QED Comptons of L_1997 = 28.05 ± 0.40 (stat) +1.22/-1.06 (sys) pb-1.
With the use of improved Monte Carlo with full simulation of the hadronic final state,
the dominant systematic errors from the measurement of the acollinearity and inelastic
contamination could be significantly reduced. Also, as the 1997 measurement shows,
with sufficient luminosity the statistical error for the QED Compton measurement is
comparable to the error from the standard luminosity measurement. Therefore, with
better control of the systematic errors this method may become competitive.
However, even if the use of QED Comptons proves unable to measure the luminosity to the required accuracy of ±1%, it would provide a useful independent cross check of the proposed new detectors.
Chapter 7
Measuring the structure function F2
7.1 Introduction
Initial State Radiative events lower the energy of the incident positron and hence the
centre of mass energy of the event. This results in a shift of the kinematic regime
accessible by such events towards lower Q2 and x. This new regime is interesting as it
allows us to bridge the gap between the low Q2 events measured with the BPC and BPT
and the region covered in the standard F2 analysis. The range accessible in x also results
in some overlap with results from fixed target experiments.
7.2 Corrections to the Kinematics due to ISR
As the ISR photon takes with it some of the energy of the incident electron the
kinematics of the event are changed accordingly. The quantity, z, was defined in chapter
2 as the fraction of the electron beam energy available for the interaction after the
emission of the ISR photon, i.e. restating equation (2.40):
z = \frac{E_e - E_\gamma}{E_e}                (7.1)
In order to take the ISR correction into account this factor is now applied to the
kinematic variables defined in section 4.5 [86].
As the electron method uses the scattered electron for reconstruction the kinematics are
changed by scaling the initial electron energy by z, i.e. E_e → z E_e. Q2 and y therefore become:

Q_{el}^2(corrected) = z \, Q_{el}^2                            (7.2)

y_{el}(corrected) = \frac{y_{el} + z - 1}{z}                   (7.3)

This also applies to the Jacquet-Blondel method, where the scaling due to ISR results in:

Q_{jb}^2(corrected) = z \, Q_{jb}^2 \, \frac{1 - y_{jb}}{z - y_{jb}}      (7.4)

y_{jb}(corrected) = \frac{y_{jb}}{z}                           (7.5)
In principle, the sigma method requires no correction due to ISR. This can be seen by
rewriting (4.13) in terms of yel and yjb:
y_\Sigma = \frac{y_{jb}}{1 - y_{el} + y_{jb}}                 (7.6)
It can be shown that the correction to the top and bottom of (7.6) cancel.
The above result however, gives y at the reduced centre of mass energy caused by the
emission of the ISR photon. Therefore, in order to achieve consistency with previous F2
analyses which are performed at the nominal centre of mass energy, a new variable,
yHERA, is introduced such that:
y_{HERA} = y_\Sigma \cdot z                                   (7.7)
The true y of the event, including the reduced centre of mass energy is given by:
y_{true} = \frac{y_{HERA}}{z} = y_\Sigma                      (7.8)
This ensures that F2 values measured using yHERA can be directly compared with F2
measured using standard analyses.
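As an illustration of how these corrections are applied together, a short sketch is given below. It is not the analysis code; the inputs are assumed to be the kinematic variables reconstructed with the nominal beam energy, rescaled once the ISR photon energy is known.

# Sketch of the ISR corrections of equations (7.1)-(7.7).
E_BEAM = 27.5  # GeV, nominal positron beam energy

def isr_corrected_kinematics(q2_el, y_el, q2_jb, y_jb, e_gamma):
    z = (E_BEAM - e_gamma) / E_BEAM                    # eq. (7.1)
    q2_el_c = z * q2_el                                # eq. (7.2)
    y_el_c = (y_el + z - 1.0) / z                      # eq. (7.3)
    q2_jb_c = z * q2_jb * (1.0 - y_jb) / (z - y_jb)    # eq. (7.4)
    y_jb_c = y_jb / z                                  # eq. (7.5)
    y_sigma = y_jb / (1.0 - y_el + y_jb)               # eq. (7.6), unchanged by the ISR correction
    y_hera = y_sigma * z                               # eq. (7.7)
    return q2_el_c, y_el_c, q2_jb_c, y_jb_c, y_sigma, y_hera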
7.3 Event Selection
7.3.1 The Trigger for radiative events
In order to identify Initial State radiative events from 1996 and 1997 data taking,
purpose-built trigger configurations were used. Two separate triggers are used in the
analysis, one was active for part of the 1996 data taking period and is used for the F2
analysis while the other ran over the whole 96/97 period and is used for the FL analysis.
The trigger used is based on the neutral current trigger, which is well understood and
simulated.
7.3.2 The FLT
At the first level, trigger bits are set using calorimeter information, essentially triggering
on isolated electromagnetic clusters. The F2 trigger uses the first level trigger bit FLT
30.
FLT 30 is the inclusive-DIS trigger and is used for the measurement of F2 using non-radiative events. After vetos from the C5 counter and veto wall are applied, the trigger
bit is described as follows2:
E_{REMC}^{ISOE} \lor E_{REMC} \lor E_{REMC}^{1st\,ring} \lor (E_{CAL} \land SRTD)                (7.9)

where ISOE means the energy was isolated in one trigger tower, E_REMC refers to energy measured by the CFLT in the rear electromagnetic calorimeter, 1st ring means the energy in the innermost ring around the beam pipe and SRTD implies the GOOD SRTD TRACK bit is fired. Each term requires the corresponding energy to be above the threshold given in table 7.1.

2 The notation introduced here is set notation, where A ∧ B is a logical ‘and’ of A and B and A ∨ B is a logical ‘or’ of A and B.
A summary of the cuts on trigger bit FLT30 is shown in table 7.1:
FLT Variable                       FLT30
E_ISOE [GeV]                       2.08
E_REMC [GeV]                       2.032
E_REMC (1st inner ring) [GeV]      3.75
E_CAL [GeV]                        0.464
SRTD                               YES
LUMI-γ [GeV]                       -

Table 7.1: Summary of the properties of the FLT trigger bit FLT30, giving the values of the cuts used.
As well as FLT30, which is used for 14.8pb-1 of data in 1996 and 1997, FLT46 is used
for 19.01pb-1 of data in 1997. This bit requires that there is an energy deposit in the LUMI-γ calorimeter of > 0.8 GeV in addition to the standard energy cuts.
7.3.3 The TLT
At the TLT, higher-level calculations are performed using most of the detector
information. In this case the trigger uses 4 electron finders with an energy cut of E_e > 4
GeV. If any finder finds a candidate electron a further cut on its position is applied. A
boxcut is applied for |x| > 12cm, |y| > 6cm cutting electron candidates that are too close
to the beam pipe and cannot be reconstructed well. E-Pz is calculated at this stage and a
cut implemented on its value.
A further trigger, DIS02, is used to measure background events. This is a Neutral
current trigger with a relaxed E-Pz cut and passes more events at the lower E-Pz range,
which can be characteristic of photoproduction events. The reduction of the cuts for this
trigger result in an increase of the trigger accept rate. Therefore, in order to restrict the
number of events to a manageable size, the trigger is prescaled i.e. only a fraction of the
events passing the trigger are stored. Table 7.2 shows the differences between the two
triggers for the F2 measurement.
TLT Variable       DIS01     DIS02
FLT BIT            30        30
E-Pz SLT [GeV]     -         19
E_e [GeV]          4         4
Box Cut [cm]       12 x 6    12 x 6
E-Pz [GeV]         30        20
LUMI-γ [GeV]       -         -
Prescale           1         40 – 100

Table 7.2: Comparison between the data (DIS01) and background (DIS02) triggers used for the ISR F2 measurement.
7.3.4 Cuts
An initial sample is selected by requiring, in the run range where the DIS01 trigger was
turned on in 1996, that there is at least one electron found using the SINISTRA95
electron finder with probability greater than 0.5 and energy > 3 GeV. Also, a photon
must be detected with the LUMI-γ calorimeter with energy > 3 GeV. This gives an
initial sample of 341551 data events.
The energy corrections and reconstruction described in section 4.4 are performed. In
addition, the LUMI-γ energy is scaled upwards by 0.36% in order to overcome an
additional calibration offset introduced by the LUMI group [87]. The scaling factor for
1997 photons is 1%.
The following cuts are then applied:

• The energy of the electron found using SINISTRA95 after all corrections is > 8 GeV.

• The probability given for this electron by SINISTRA95 is > 0.9.

• The hadronic energy in a cone drawn from the interaction point through the electron cluster is < 5 GeV.

• The energy of the ISR photon measured by the LUMI-γ calorimeter is > 6 GeV.

• The total E-Pz of the event measured by the calorimeter alone is > 20 GeV. This serves to remove the photoproduction background.

• The total E-Pz of the event (i.e. E-Pz + 2E_γ) lies in the range 48 – 60 GeV.

• The so-called ‘H-boxcut’ is applied [88]. This cut is required to reduce differences in data and Monte Carlo caused by the incorrect modelling of copper cooling pipes in the rear direction.

• The z position of the event vertex must be within 50cm of the nominal interaction point.

• In order to remove bremsstrahlung background, the energy in the 35m Tagger is < 2 GeV.

• The energy in the 44m Tagger is less than 60 ADC counts.

• The value of y_jb, corrected for ISR, is > 0.001.

• In order to remove photoproduction background, y, measured using the electron method and corrected for ISR, is < 0.95.

• The total hadronic energy of the event is > 2 GeV.

• Events with sparking cells, muons from cosmic rays, and QED Comptons are rejected.
7.4 Background
The major sources of background in this analysis include:

• Non-radiative DIS events with an overlay photon in the LUMI-γ calorimeter from bremsstrahlung.

• Photoproduction fake electrons with an overlay bremsstrahlung photon.

• ISR-photoproduction events. These events occur mainly at low E-Pz and are mostly rejected by applying the lower E-Pz cut.
A sample of events is taken from the same runs as those used by the analysis by
selecting the TLT-bit DIS02. As noted previously, this is characterised by a lower E-PZ
cut, therefore allowing relatively more photoproduction events to pass. The overlay of a
bremsstrahlung photon can cause these events to be shifted up into the signal region so,
in addition to this, LUMI-e and LUMI-γ energies from bremsstrahlung events are added
randomly to each DIS02 event, creating an artificial overlay sample. Bremsstrahlung
events from 1996 are selected using a random trigger while those from 1997 are
selected using triggered data. The electron energy distribution for the two years is
shown in figure 7.1. As can be seen, the 1997 events include a cut on the LUMI-e
energy of < 2 GeV which is not present in 1996. Performing the same cut on the 1996
LUMI-e energy yields LUMI-γ distributions that are almost identical. Differences in these can be explained by the change in the LUMI-γ filter for 1997. The fine structure in the
LUMI-e energy for 1997 is caused by a stuck bit in the DAQ and is also present in the
1996 distribution, although not visible on this scale.
Figure 7.1: The raw bremsstrahlung LUMI-e and LUMI-γ energies for 1996 (left) and 1997 (right). The structure in the plots arises due to the acceptance of the LUMI-e and LUMI-γ detectors.
7.4.1 Background normalisation
In order to determine the number of background events and subtract them from the data,
the artificial overlay sample has to be normalised to the data.
The Total E-PZ distribution above 62 GeV contains very few ISR-DIS events. Most of
the events in this region are non-radiative DIS processes measured in the calorimeter
together with an overlay bremsstrahlung photon.
If the artificial sample is normalised to the data above total E-Pz > 62 GeV, the data is
well described in this region, as shown in figure 7.2. The assumption is therefore made
that the artificial background sample at lower total E-Pz is also a good description of the
background. The normalised background is then added to the Monte Carlo sample for
all kinematic distributions in order to fully describe the data.
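A minimal sketch of this normalisation is given below. It is not the analysis code; the event arrays and the function name are assumptions, and only the 62 GeV threshold is taken from the text.

import numpy as np

# Sketch: scale the artificial overlay sample to the data in the
# region total E-Pz > 62 GeV, which contains very few ISR-DIS events.
def normalise_overlay(data_epz, overlay_epz, threshold=62.0):
    n_data = np.sum(np.asarray(data_epz) > threshold)
    n_overlay = np.sum(np.asarray(overlay_epz) > threshold)
    return n_data / n_overlay          # weight applied to every overlay event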
Figure 7.2: Total E-Pz distribution for data with the normalised background
contribution. The background is normalised to the data in the hatched region where the
total E-Pz > 62 GeV.
In addition to the major sources of background described above, other, minor sources of
background are also considered.
7.4.2 QED Compton Rejection
QED Comptons are rejected by applying the set of cuts used in Chapter 6. Over the run
range used here, 383 QED Comptons are rejected, roughly 3% of the number of events
that pass the cuts.
7.4.3 Cosmic Ray Rejection
Cosmic rays provide a continuous background signal. Such events generally involve the
passage of muons through the detector. These can look like a single track passing
though the BMUON on two opposite sides, and the CTD. Cosmic events are rejected if
two BMUON tracks are found on opposite sides of the detector pointing to each other.
7.4.4 Positron and proton beam-gas background
Particles from the unpaired positron and proton bunches, which are used for beam-related background studies, can hit residual gas in the beam pipe and the resulting interaction can be measured in the detector. This effect can be measured by examining the bunch crossing number for the event. The background from beam gas in this analysis, however, is negligible and is ignored.
7.5 Corrections
7.5.1 Acceptance of the LUMI-γ calorimeter
A certain fraction of the ISR photons remain undetected by the LUMI-γ calorimeter. The ratio of the number of photons detected to the total number of photons defines the acceptance of the LUMI-γ calorimeter. The data must be corrected by this fraction and so an accurate determination of the acceptance is important.
As the probability for the detection of photons in the sensitive range of the calorimeter is very high, it can be approximated to 1. Therefore the acceptance depends mainly on the geometrical acceptance, i.e. whether the photon actually hits the LUMI-γ calorimeter. This, in turn, depends on the physical size of the aperture of the LUMI-γ calorimeter. The dimensions of the aperture are shown in figure 7.3.
Several parameters can affect the incidence of ISR photons on the LUMI detector. The
vertex position of the event as well as the direction of the positron beam at the
interaction point can both change the direction of the outgoing photon to such an extent
that it misses the LUMI-γ calorimeter entirely. The latter can be defined by beam tilt
and beam divergence. Furthermore, these can change run by run. Changes over the
duration of a run are smaller and therefore the average over a run is used. The beam tilt
is measured by fitting a gaussian curve to the measured position of the LUMI-γ photons
from a given run. This position is measured with respect to the nominal position of x=0,
y=0 for the LUMI calorimeter. The run-by-run values of the beam tilt are shown in
figure 7.4.
Figure 7.3: The aperture of the LUMI-γ calorimeter.
The geometric acceptance is calculated using the Q2 > 0.5 non-diffractive Monte Carlo
sample as described in chapter 5 [92]. The true photon angle is first randomly smeared
according to the beam divergence and LUMI-γ resolution, and a correction is applied to take into account a small twist of the beam. Given that the LUMI-γ detector is located 107m from the interaction point, this angle is easily converted into a position and a cut is applied according to the physical aperture of the LUMI-γ calorimeter.
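A minimal sketch of this procedure is given below. It is not the analysis code: the rectangular aperture, its half-widths and the divergence value are placeholders rather than the real geometry of figure 7.3; only the 107 m lever arm is taken from the text.

import numpy as np

# Sketch: smear and tilt the true photon angles, project to the LUMI-gamma
# face and count the fraction falling inside an assumed aperture.
Z_LUMI = 107.0e3                       # mm, distance from the interaction point
APERTURE_X, APERTURE_Y = 90.0, 45.0    # mm, assumed half-widths (placeholders)

def lumi_gamma_acceptance(theta_x, theta_y, tilt_x, tilt_y,
                          divergence=0.09e-3, rng=None):
    """theta_x, theta_y: arrays of true photon angles (rad); tilts in rad."""
    rng = rng or np.random.default_rng()
    tx = np.asarray(theta_x) + tilt_x + rng.normal(0.0, divergence, size=np.size(theta_x))
    ty = np.asarray(theta_y) + tilt_y + rng.normal(0.0, divergence, size=np.size(theta_y))
    x, y = tx * Z_LUMI, ty * Z_LUMI                        # position at the calorimeter
    inside = (np.abs(x) < APERTURE_X) & (np.abs(y) < APERTURE_Y)
    return float(np.mean(inside))                          # fraction of photons accepted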
Figure 7.4: The x and y Beam Tilts for 1996, top, and 1997, bottom, plotted against
relative run number.
This process is repeated for 187 different values of beam tilt, 17 in the x direction, from
–0.26 mrad to 0.06 mrad, and 11 in the y direction, from –0.1 mrad to 0.1 mrad. The
acceptance for these values is shown in figure 7.5 and is on average 70% over the range
of tilts with drops in acceptance at the extreme tilts as expected.
Figure 7.5: Acceptance of the LUMI-γ calorimeter for different x and y beam tilt positions.
The tilt ranges from –0.25 mrad to 0.06 mrad in the x direction and –0.1 mrad to 0.1
mrad in the y direction
7.5.2 Vertex weighting
The correct position of the z vertex of the event is of vital importance to the MC
simulation. However, due to trigger effects and biases from reconstruction the simulated
vertex generally does not agree with the measured one and therefore needs to be
weighted. By selecting a minimum bias sample, the underlying vertex distribution can
be determined and used to weight the MC.
A minimum bias sample is selected by introducing the following cuts:

• No vertex cut.

• θ_electron < 150°.

• 45° < γ_hadronic < 135°.
This is repeated using MC. Then 5 gaussians are fitted to the primary vertex distribution
and satellite bunches. The MC is weighted using these fits.
7.5.3 Beam pipe correction
An additional correction is made because the forward cooling pipes around the beam
pipe are not simulated in the MC. The simulation of these pipes has been performed on
a small set of Monte Carlo therefore allowing its effect to be parameterised. A
correction routine is then used to weight the existing MC accordingly [89].
7.5.4 Structure Function weighting
Finally, for the measurement of F2, the parameterisation from ALLM97 [78] is used.
ALLM97 uses a Regge motivated approach based on all available F2 measurements,
including those made in the very low Q2 region using data from the BPC. It is extended
into the high Q2 region in a way compatible with QCD expectations. In addition data
from measurements of the total photoproduction cross section are included. Due to this,
it can be used over the Q2 and x range covering 3  10-6 < x < 0.85 and 0 GeV2 < Q2 <
5000 GeV2 and gives a good description of all data within this region.
The Monte Carlo, however, was generated using the MRS(A) parameterisation. Each event is therefore weighted by the factor

\frac{F_2^{ALLM}(x_{true}, Q_{true}^2)}{F_2^{MRSA}(x_{true}, Q_{true}^2)}.
7.6 Data and Monte Carlo distributions
After the cuts and corrections, distributions of data are compared with summed
distributions of Monte Carlo and background. Comparisons for the main kinematic
variables are shown in figures 7.6, 7.7 and 7.8.
Figure 7.6: Comparison of Data and MC + BGD for the Electron energy and theta,
which are used for the kinematic reconstruction, as well as the photon energy and z
vertex position.
Figure 7.7: Comparison of data and MC for hadronic-based quantities along with E-Pz
distributions. The Total E-Pz plot indicates the cut on the signal region (two vertical
lines) and the area where the background is normalised to the data (hatched area).
Figure 7.8: Comparison of data and MC + BGD for the Q2 and yHERA distributions, used
for the measurement of F2.
As can be seen, there is in general a good agreement between data and Monte Carlo.
Overall, though, there is a 1.6% normalisation offset. The error on the luminosity
measurement for 1996 is 1.1%. There is also a slight excess in the lower region of the
total E-Pz plot. This could indicate that there is still some photoproduction background
contributing to the signal after the background subtraction.
7.7 Measuring F2
7.7.1 Resolution of Q2 and y
In order to check the reconstruction of the kinematics of the event, the Monte Carlo
sample is used to estimate the resolution and migration of the kinematic variables used.
The migration is a measure of the mean difference between the reconstructed and true
values while the resolution is defined as the spread on the difference between these
values.
The migration and resolution are determined by fitting a gaussian to the fractional
difference between the measured and true values, (measured - true)/true. The mean of this fit gives the average kinematic migration and the width gives the resolution of the reconstructed variable.
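A minimal sketch of such an estimate is given below. It is not the analysis code; for brevity the gaussian parameters are obtained from a simple maximum-likelihood fit to all entries rather than a binned fit to the core of the distribution.

import numpy as np
from scipy.stats import norm

# Sketch: migration and resolution from the fractional difference
# (measured - true)/true of a reconstructed kinematic variable.
def migration_and_resolution(measured, true):
    frac = (np.asarray(measured) - np.asarray(true)) / np.asarray(true)
    mu, sigma = norm.fit(frac)      # gaussian mean and width
    return mu, sigma                # mu = migration, sigma = resolution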
Figure 7.9 shows the resolution and migration of the two kinematic variables used for
the measurement of F2. In this case, Q2 is measured with the electron method and y is
measured with the sigma method. As can be seen Q2 is well measured over the whole
kinematic region of interest. The migration is much less than 5% for Q2 values ranging
from ~3 GeV2 to 100 GeV2 with only a slight increase at the high and low Q2 ends. The
resolution is less than 20% over the whole range.
The measurement of y is subject to greater uncertainties with the resolution being
between 20 and 40% over the whole y range. Migration to lower values of y is also
evident, being of order 10% over most of the y range with some improvement towards
lower y. Other reconstruction methods for y show more migration and worse resolution
however. For the measurement of F2 therefore, the electron method and sigma method
are used for Q2 and y reconstruction respectively.
7.7.2 Bin Selection
The bins used for the measurement of F2 are selected according to the criteria that as
well as the presence of enough statistics, the purity and acceptance in each bin should
be sufficiently high. The acceptance and purity are defined such that:
acceptance(i) = (# of events generated and reconstructed in bin i) / (# of events generated in bin i)            (7.10)

purity(i) = (# of events generated and reconstructed in bin i) / (# of events reconstructed in bin i)            (7.11)
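A minimal sketch of these definitions is given below. It is not the analysis code, and "generated and reconstructed in bin i" is interpreted here as generated in bin i and also reconstructed in bin i, which is an assumption about the exact counting convention.

import numpy as np

# Sketch of the acceptance and purity of equations (7.10) and (7.11);
# gen_bin/rec_bin hold the bin index per Monte Carlo event (-1 if not reconstructed).
def acceptance_and_purity(gen_bin, rec_bin, n_bins):
    gen_bin, rec_bin = np.asarray(gen_bin), np.asarray(rec_bin)
    acc, pur = np.zeros(n_bins), np.zeros(n_bins)
    for i in range(n_bins):
        gen_and_rec = np.sum((gen_bin == i) & (rec_bin == i))
        acc[i] = gen_and_rec / max(np.sum(gen_bin == i), 1)
        pur[i] = gen_and_rec / max(np.sum(rec_bin == i), 1)
    return acc, pur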
Figure 7.9: Resolution of Q2 measured using the electron method and y measured using
the sigma method.
The acceptance and purity in each bin used are shown in figure 7.10. For consistency
with the nominal 96/97 F2 measurement, the criteria for the use of a bin is that the purity
> 30% and the acceptance > 20%. As can be seen all the bins defined meet the selection
criteria.
Figure 7.10: The acceptance and purity for each bin. The shading of the bins indicates
the purity whereas the value for the acceptance is given in each bin as a percentage.
Figure 7.11 shows the bins used, with the regions covered by previous analyses and
experiments shown for comparison.
As has been previously stated, this measurement spans the region between the low Q2
BPT analysis and medium to high Q2 analysis, as well as some overlap with the fixed
target results.
Figure 7.11: The bins used for the F2 measurement together with previous results from ZEUS and fixed target experiments. The number assigned to each bin is also indicated.
7.7.3 Unfolding
The measured number of events in a bin gives the raw cross section but, as illustrated in
figure 7.10, this is subject to inefficiencies in the detector performance and
reconstruction method, which together result in a probability of detection of somewhat
less than 100%. In order to correct for the effects of smearing, migration and acceptance
a procedure known as bin-by-bin unfolding is used.
The unfolding uses the Monte Carlo simulation to correct the data. Here, the unfolded
number of data events, N_i, is calculated by multiplying the measured number of events, N_i,meas, by a correction factor c_i:

N_i = c_i N_{i,meas}                (7.12)

where

c_i = (# of events generated in bin i) / (# of events measured in bin i)                (7.13)
This method requires that the Monte Carlo simulation describe the data well over the
whole kinematic region used for the measurement. This is valid however, as in general
there is good agreement between data and Monte Carlo in the distributions shown in
figures 7.6, 7.7 and 7.8.
Restating equation (2.34), the differential cross section is:
\frac{d^2\sigma}{dx\,dQ^2} = \frac{2\pi\alpha^2}{x Q^4} \left[ Y_+ F_2(x, Q^2) - y^2 F_L(x, Q^2) - Y_- x F_3(x, Q^2) \right]                (7.14)

where Y_± = 1 ± (1 - y)². Here FL is set to zero as it has a negligible effect on the absolute
cross section and F3 is ignored as it only arises from Z0 exchange and is not relevant in
this kinematic regime.
The unfolded number of data events in each bin, i, Ni is related to the cross section by:
N_i = \mathcal{L} \int \frac{d^2\sigma}{dx\,dQ^2} \, dx_i \, dQ_i^2                (7.15)
As previous ZEUS and H1 results are used in the structure function parameterisation,
the values of the structure functions of data and Monte Carlo are related to the ratio of
the number of unfolded events in the bin
\frac{N_i}{N_i^{GenMC}} = \frac{F_2^{Data}(x, Q^2)}{F_2^{MC}(x, Q^2)}                (7.16)
It follows that the ratio of observed events is related to the ratio of unfolded events
\frac{N_i}{N_i^{GenMC}} = \frac{N_i^{Data\,Obs}}{N_i^{MC}}                (7.17)
where N_i^{MC} = N_i^{MC\,Obs} + N_i^{Bgd\,obs}. The second term here refers to the artificial overlay
sample described in section 7.4.
The structure function F2Data is now given by,
F_2^{Data} = F_2^{MC} \, \frac{N_i^{Data\,Obs}}{N_i^{MC}}                (7.18)
F2 can therefore be measured using only the measured number of data and MC events in
the bin and the known MC structure function.
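A minimal sketch of this bin-by-bin extraction is given below. It is not the analysis code; the per-bin arrays and the function name are assumptions made for the example.

# Sketch of equation (7.18): F2 per bin from data, Monte Carlo and
# normalised overlay-background counts, scaled by the MC structure function.
def extract_f2(n_data_obs, n_mc_obs, n_bgd_obs, f2_mc):
    """All arguments are per-bin lists of equal length."""
    f2_data = []
    for nd, nmc, nbg, f2 in zip(n_data_obs, n_mc_obs, n_bgd_obs, f2_mc):
        f2_data.append(f2 * nd / (nmc + nbg))      # N_MC = N_MC_obs + N_bgd_obs
    return f2_data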
7.7.4 Systematic Errors
As in section 6.4.2, the systematic errors are determined by varying each cut by the
error on the quantity used for the cut. In this case, the following checks are made:
1. Vary the electron energy cut by ±1 GeV.
2. Vary the photon energy cut by ±0.5 GeV.
3. Vary the Total E-Pz cut from 46 – 62 GeV to 49 – 59 GeV.
4. Change the Cal E-Pz cut to 22 GeV.
5. Vary the Boxcut by ±0.5cm.
6. Change the cut on the Z vertex to ±60cm.
7. Normalise the Background in the E-Pz region > 64 GeV.
8. Change the diffractive fraction to 11% and 19%.
9. Worsen the LUMI-γ resolution from 23% to 25%.
10. Remove the veto on the 35m Tagger.
11. Remove the veto on the 44m Tagger.
12. Instead of the ‘offline’ trigger, use the TLT for data and no trigger for MC.
13. Scale the hadronic energy by ±3%.
14. Vary the SRTD or presampler correction by 10%.
15. Remove the backsplash correction.
The effects on each bin for the above checks are shown in figure 7.12.
As can be seen, the dominant systematics over all the bins are the simulation of the LUMI-γ energy and the electron energy and E-Pz cuts. These all have the greatest effect on the high y/low x bins. These bins also have the worst statistics for each Q2 bin. The other
systematic effects are well controlled.
7.8 Comparison with 96/97 F2 measurement
ZEUS has recently published [8] a preliminary F2 analysis using data from 1996 and
1997. The 1996 low Q2 results use the same trigger (DIS01) as the ISR analysis. These
F2 results are in good agreement with previous results [90] and theoretical predictions.
In order to check the ISR F2 measurement the 96/97 F2 analysis is repeated using
events from the same run range as that covered by the DIS01 trigger i.e. the same data
set as the ISR measurement.
Figure 7.12: The fractional error for each systematic check shown as a function of bin
number.
The initial selection takes all events that pass the DST bits 9 and 11. These bits select all
events in which a positron is found with energy > 4 GeV and with total E-Pz > 30 GeV.
This leads to an initial sample of 939496 events. The cuts then applied are:

• An electron is found with the Sinistra95 electron finder with a probability rising from > 0.96 at 8 GeV to > 0.99 at 20 GeV, falling back to > 0.9 at 30 GeV and above. This cut is intended to remove photoproduction background, which is more evident at lower probabilities.

• The energy of this scattered electron is > 8 GeV.

• The hadronic energy in a cone drawn from the interaction point through the electron cluster is < 5 GeV.

• 38 GeV < E-Pz < 65 GeV.

• y_e < 0.95.

• The ‘H’ shaped boxcut is used.

• Outside a radius of 80cm, a track is required to the electron. This must have momentum > 5 GeV and a distance of closest approach to the electron candidate < 10cm.

• The total hadronic activity in the event must be > 2 GeV.

• The P_T balance, P_T^h/P_T^e > 0.3. For low values of P_T^h/P_T^e there is a large deficit of Monte Carlo compared to data. This cut therefore selects events with a well reconstructed hadronic final state.

• The z position of the event vertex must be within ±50cm of the nominal interaction point.

• Events with sparks, muons from cosmic rays, and QED Compton events are rejected.
The Monte Carlo in this case again consists of diffractive and non-diffractive MC
generated with the CTEQ4D parameterisation. In addition, Monte Carlo is also used to
simulate the photoproduction background. Two samples of photoproduction Monte
Carlo are generated, direct and resolved.
In direct photoproduction the photon has a point-like interaction with the proton and
does not appear to have any substructure. For resolved photoproduction, however, the
photon appears to have substructure, which can be described using a photon structure function.
The photoproduction MC is generated at low Q2 (< 1 GeV2) and high y (> 0.36). Lower
y events are not generated since the scattered positron would carry so much energy out
of the detector that the event is rejected by the other cuts.
The data and Monte Carlo comparison for these events is shown below in figure 7.13.
As with the ISR distributions, there is good agreement between the data and MC with
the exception of a small excess of MC at high y.
Again, a bin-by-bin extraction of F2 is performed. This time however, the CTEQ4D
parameterisation is used.
A comparison of F2 results from this analysis and the preliminary 96/97 F2 figures [8] is
shown in figure 7.14. The Q2 range shown is roughly the same as the range covered by
the ISR events. As can be seen there is good agreement and therefore a direct
comparison with the ISR results can be made.
Following this, F2 is extracted in the same bins as the ISR analysis using the ALLM97
parameterisation. This is shown in figure 7.15 along with the results from the ISR
analysis.
Figure 7.13: Data v MC comparison for the 96/97 analysis. The relative contributions of
the diffractive and photoproduction Monte Carlo are indicated by the green and blue
histograms.
Figure 7.14: Comparison of F2 measured with this analysis and preliminary 96/97
results.
Figure 7.15: F2 plotted from the ISR analysis (circles) with the comparison from the
96/97 analysis (triangles). Error bars for the ISR results show both statistical (inner
bars) and systematic errors added in quadrature. The 96/97 analysis shows statistical
errors only. Also shown are results from the ZEUS BPC/BPT and shifted vertex (SVX)
analyses.
In general, there is good agreement, within errors, between the ISR F2 values, the 96/97
F2 values and the ALLM97 parameterisation. In the Q2 = 10 GeV2 and Q2 = 20 GeV2 bins,
however, the ISR F2 values are generally higher.
The results in the kinematic region covered by the Q2 = 0.6 GeV2, Q2 = 1.3 GeV2 and Q2 =
2.5 GeV2 bins, where the gap between the ZEUS medium and low Q2 results lies, show
good agreement with the ALLM97 curve.
The errors for the ISR measurement are much larger than those for the 96/97 result. This is in
part due to the reduced statistics. Improving the statistics, especially at low x, where the
systematic errors are also worst, could be achieved by lowering the electron energy cut
to 5 GeV. This would also benefit the measurement of FL, as will be discussed in the next chapter.
Chapter 8
Measuring the Proton Structure
Function, FL
8.1 Introduction
As stated in section 2.8, the emission of a gluon from the quark introduces a small
component of transverse momentum. This can be parameterised by the structure
function, FL, which can give a determination of the gluon distribution within the proton.
Indeed, a measurement of FL at a given x is almost a direct measure of the gluon
distribution at x' ≈ 2.5x [91]. The measurement of FL requires varying y at constant x and Q2, in
other words, varying the centre-of-mass energy squared, s. Varying s can be achieved by
reducing the beam energy [18,19]. This, however, has the drawback that the detector is
not run under optimal conditions. Also, a large amount of luminosity is lost for high-energy physics. Finally, there is a significant source of systematic error from the
relative normalisation of the data sets taken under the different beam conditions, since the
luminosities differ, for example through an increase in the proton beam
divergence at lower energies.
Instead of physically changing the beam energy, the use of Initial State Radiative events
has been proposed [20] as a means to effectively vary the centre of mass energy without
physically changing the beam conditions.
As has been shown in Chapter 2, the differential cross section for deep inelastic e+p
scattering can be written as:
d^2σ/dxdQ^2 = (2πα^2 / xQ^4) Y_+ [(1 + εR) / (1 + R)] F2                  (8.1)
where R = σ_L/σ_T = FL / (F2 - FL), and ε, the polarisation parameter, is

ε = 2(1 - y) / (1 + (1 - y)^2)                  (8.2)
In the above equation, y is the true kinematic y as defined in chapter 7, i.e. y = y_HERA / z,
where y is measured using the sigma method. Since ε depends on y, which is related to x and Q2
through y = Q^2 / (xs), changing s implies changing ε.
One method of extracting R using ISR events is to plot the cross section as a function of
ε and perform a linear fit to the points [20]. This method is independent of F2 but
requires a large integrated luminosity of ~200pb-1 in order to gain sufficient statistics. A
second approach, however, uses the knowledge of F2 from previous measurements at
HERA. This exploits the influence of R on the shape of the ε_FL distribution [21, 92],
where ε_FL was defined in equation 2.40 as

ε_FL = (1 + εR) / (1 + R)                  (8.3)
Since the ISR Monte Carlo is generated with FL=0, the measured y spectrum for data
can be compared with that for the Monte Carlo. This gives the difference in the cross
section which, as shown in figure 2.6, becomes larger with increasing y. In order to
illustrate this, the ratio ε_MC is defined as:

ε_MC = [# events in MC (FL ≠ 0)] / [# events in MC (FL = 0)]                  (8.4)
As shown in figure 8.1, ε_MC has a different shape for different values of R. It is clear
that as the value of R increases, the ratio curves downwards more steeply at higher
values of y.
Figure 8.1: The effect of weighting the y spectrum to three different values of R. As R
increases the curve of the ratio becomes steeper.
The comparison shows unweighted Monte Carlo and Monte Carlo weighted to values of
R of 0.2, 1.4 and 10. As can be seen, this shows the same fall at high y as figure 2.6.
With increasing y the overall effect on the cross section increases. For the highest values
of y this can be as much as a 15% effect for R = 0.3 but, for small values of y, FL has a
very small effect. This renders the measurement of FL using the absolute cross section
difficult, if not impossible.
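A minimal sketch of how such a weighting can be carried out is given below. Each event, generated with FL = 0, receives the weight (1 + ε(y)R)/(1 + R) at its true y; the uniformly generated toy sample and the binning are placeholders and not the ISR Monte Carlo.

import numpy as np

def epsilon(y):
    return 2.0 * (1.0 - y) / (1.0 + (1.0 - y) ** 2)

def weight_to_r(y, r):
    """Weight taking an FL = 0 event to a cross section proportional to (1 + eps*R)/(1 + R)."""
    return (1.0 + epsilon(y) * r) / (1.0 + r)

rng = np.random.default_rng(1)
y_true = rng.uniform(0.10, 0.45, size=10000)     # toy 'true y' values, for illustration only

edges = np.linspace(0.10, 0.45, 6)               # five bins in y, as used later for the fit
n_fl0, _ = np.histogram(y_true, bins=edges)
n_r14, _ = np.histogram(y_true, bins=edges, weights=weight_to_r(y_true, 1.4))

print(n_r14 / n_fl0)                             # ratio of equation 8.4: falls towards high y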
In order to measure R, the ratio ε_R is defined such that

ε_R = [# events in DATA] / [# events in MC (FL = 0)]                  (8.5)
This is measured as a function of y and is equivalent to ε_FL. Therefore, instead of
measuring the absolute cross section, R can be extracted by fitting the measured ε_R to a
function of the form

ε_R(y) = N_Fit ε_FL(y)                  (8.6)

where N_Fit is a factor introduced to take into account the relative normalisation.
The method is possible because the structure function, F2, has been measured very
precisely at HERA and there is a good understanding of deep inelastic processes with
initial state radiation.
8.2 Event selection
8.2.1 Trigger
Unlike the F2 analysis, the trigger DIS10 is used for the FL analysis. At the first and
second level triggers, this is identical to DIS01. However, at the TLT, DIS10 includes
an ISR-specific cut on the energy of the photon in the LUMI-γ calorimeter of Eγ > 4
GeV. Also, a cut on y measured using the electron method is included. The DIS10
trigger corresponds to 35.9pb-1 of data.
As before, the DIS02 trigger is used for the background determination.
Table 8.1 shows the properties of the DIS10 trigger compared with the DIS01 and
DIS02 triggers described in the previous chapter.
TLT Variable      DIS01    DIS02     DIS10
FLT BIT           30       30        30/46
E-Pz SLT [GeV]    -        19        -
Ee [GeV]          4        4         4
Box Cut [cm]      12 x 6   12 x 6    12 x 6
E-Pz [GeV]        30       20        30
ye                -        -         0.1
LUMI-γ [GeV]      -        -         4
Prescale          1        40 – 100  1

Table 8.1: Comparison between the triggers used for ISR analyses. DIS01 and DIS02
are used for the F2 measurement while DIS10 and DIS02 are used for the FL
measurement.
8.2.2 Cuts
Initial event selection was the same as the F2 measurement with the events being taken
from the DIS10 run range. This yielded 861 860 events for 1996 and 1 331 496 events
for 1997.
The cuts are the same as the F2 analysis with the following exceptions:
• Calorimeter E-PZ cut raised to 22 GeV.
• Total E-PZ for 1997 changed to 46 GeV – 60 GeV. (The cut for 1996 is unchanged.)
• An outer boxcut of 30cm x 30cm is added. This is to restrict events to those within the SRTD acceptance.
• The trigger bit selected is DIS10.
The same corrections, reconstruction and background subtraction as used for the F2
measurement were performed.
8.2.3 Energy scale of the LUMI-γ calorimeter
The correct simulation and measurement of the ISR photons is vitally important for the
measurement of FL.
The simulation of the LUMI-γ calorimeter is therefore checked by performing a
kinematic peak study on the data and MC for both 1996 and 1997 events. The kinematic
peak arises at low y and low Q2 where the energy of the scattered positron becomes
approximately independent of the kinematics of the event and is roughly equal to the
beam energy. In the case of ISR, the sum of positron and photon energies becomes
~27.5 GeV.
Kinematic peak events are selected for DIS01 events by performing the boxcut and
electron probability cut along with a cut of 0.005 < yJB < 0.04. Histograms of electron
energy + photon energy are made in bins of electron energy with gaussians fitted to the
kinematic peak.
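A sketch of such a Gaussian fit is shown below, using scipy and a toy sample in place of the real E'e + Eγ distribution; the sample, binning and starting values are all invented for illustration.

import numpy as np
from scipy.optimize import curve_fit

def gauss(x, norm, mean, sigma):
    return norm * np.exp(-0.5 * ((x - mean) / sigma) ** 2)

# Toy stand-in for E_e' + E_gamma in one bin of electron energy,
# peaked near the positron beam energy of ~27.5 GeV.
rng = np.random.default_rng(0)
sample = rng.normal(27.5, 0.6, size=5000)

counts, edges = np.histogram(sample, bins=40, range=(24.0, 31.0))
centres = 0.5 * (edges[:-1] + edges[1:])

popt, _ = curve_fit(gauss, centres, counts, p0=(counts.max(), 27.5, 0.5))
print("fitted peak = %.2f GeV, width = %.2f GeV" % (popt[1], abs(popt[2])))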
The results of the gaussian fits for 1996 are shown in figure 8.2.
Figure 8.2: Kinematic peak study for 1996.
The left-hand plot shows the fitted energy of the peak. The dashed lines indicate a ± 1%
error on the data, which corresponds to the error on the electron energy measurement.
As can be seen, there is good agreement between data and Monte Carlo.
For 1997 the rate for the DIS01 trigger became too high and it was prescaled to 100.
Therefore, in order to gain enough statistics for this study, a new trigger, DIS29, was
introduced. This trigger is the same as the DIS10 trigger with the y cut removed. The
results for 1997 are shown in figure 8.3.
Figure 8.3: Kinematic peak study for 1997.
Again, the energy and resolution are well simulated by the Monte Carlo. This indicates
the validity of the offline simulation of the LUMI-γ calorimeter described in Chapter 5.
8.2.4 The FL Bin
Due to limited statistics, the measurement of FL is only performed in one bin. A major
aim of the bin selection is to maximise the accessible y range. The available range
depends on the measured positron and photon energies as well as on the cut on yHERA
defining the bin boundaries.
Considering first the scattered positron energy, this is related to y by:
y  1
E e
1  cosθ e 
2E e
(8.7)
The maximum value of y obtainable for several values of E'_e is shown in figure 8.4.
The accessible values of y lie below the lines. It is clear that in order to maximise the y
range, the scattered electron should be measured down to as low an energy as possible.
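Equation 8.7 can be used for a quick numerical check of this statement. The sketch below assumes the low-Q2 limit cos θe ≈ -1 (positron scattered into the rear direction) and, for the ISR case, replaces the beam energy by Ee - Eγ; both are standard approximations used here for illustration rather than values taken from the analysis.

E_BEAM = 27.5   # positron beam energy in GeV

def y_electron(e_scattered, cos_theta, e_beam=E_BEAM):
    """Equation 8.7: y = 1 - (E_e'/2E_e)(1 - cos(theta_e))."""
    return 1.0 - (e_scattered / (2.0 * e_beam)) * (1.0 - cos_theta)

def y_max(e_cut, e_gamma=0.0):
    """Largest accessible y for a given scattered-positron energy cut,
    taking cos(theta_e) = -1 and an incoming energy reduced by the ISR photon."""
    return y_electron(e_cut, -1.0, e_beam=E_BEAM - e_gamma)

for e_cut in (5.0, 8.0, 12.0):
    print("E_e' > %4.1f GeV: y_max = %.2f (no ISR), %.2f (E_gamma = 6 GeV)"
          % (e_cut, y_max(e_cut), y_max(e_cut, e_gamma=6.0)))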
Figure 8.4: The maximum values of y obtainable for different minimum values of the
scattered positron energy.
As the bin is measured in terms of yHERA, the true kinematic y range is a function of z,
i.e. the photon energy. The effect of the yHERA bin boundaries is shown in Figure 8.5.
Overlaid on the previous plot are lines showing the range in y available due to the
choice of bin, 0.1 < yHERA < 0.23. The upper cut is placed to limit the amount of
background, which is predominantly at high yHERA. This can be seen for the F2 case in
figure 7.8. The lower cut on yHERA is limited by the cut on y introduced in the trigger.
Only the y range between these two lines is accessible for the measurement.
The vertical line indicates the cut on the ISR photon of 6 GeV. The shaded region
indicates where measurements of y can be made. Therefore, the maximum range in y
that can be measured is approximately 0.11 < y < 0.42, although statistics are
limited at the extremities.
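These boundaries can be checked roughly with the short sketch below, which converts the yHERA bin edges into true y via y = yHERA/z. The relation z = (Ee - Eγ)/Ee for the energy fraction left after ISR and the photon energies chosen are assumptions made only for illustration, so the numbers reproduce the quoted 0.11 < y < 0.42 window only approximately.

E_BEAM = 27.5   # positron beam energy in GeV

def y_true(y_hera, e_gamma, e_beam=E_BEAM):
    """True kinematic y for an ISR event: y = y_HERA / z with z = (E_e - E_gamma) / E_e."""
    z = (e_beam - e_gamma) / e_beam
    return y_hera / z

# Lower edge: lower y_HERA cut evaluated at the minimum photon energy of 6 GeV.
print("y(y_HERA = 0.10, E_gamma =  6 GeV) =", round(y_true(0.10, 6.0), 2))
# Upper edge: upper y_HERA cut evaluated at a harder photon (13 GeV chosen for illustration).
print("y(y_HERA = 0.23, E_gamma = 13 GeV) =", round(y_true(0.23, 13.0), 2))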
Figure 8.5: The effect of the upper and lower cut of yHERA on the accessible region of y.
The Q2 range of the bin was chosen to be 1 GeV2 < Q2 < 30 GeV2 in order to achieve as
high statistics as possible. The final bin chosen is shown in figure 8.6 and is defined as:
1 GeV2 < Q2 < 30 GeV2
0.1 < yHERA < 0.23
Figure 8.6: The bin used for the FL measurement shown on the x, Q2 plane with the F2
bins superimposed for comparison.
As with the previous measurements a comparison of data and Monte Carlo is shown in
Figure 8.7 (1996) and Figure 8.8 (1997). In both cases, the plot of the y distribution,
used for the fit, is highlighted. In order to gain sufficient statistics for the fit, especially
at high and low y, the y distribution is limited to 5 bins. The distributions show only the
events found within the y and Q2 bin.
Figure 8.7: Data v MC + BGD comparison for the 1996 FL measurement. The y
distribution, used for the fit, is shown at the bottom.
Figure 8.8: Data v MC + BGD comparison for the 1997 FL measurement. The
y distribution, used for the fit, is shown at the bottom.
The agreement between data and Monte Carlo for these distributions is reasonable.
For the extraction of R, the 1996 and 1997 y distributions are combined.
8.3 Measuring FL
8.3.1 y scaling factor
The extraction of F2 uses the bin-by-bin unfolding method, which uses Monte Carlo to
take into account the effect of migration and acceptance. The extraction of R, described
in section 8.1, however, requires a ‘raw’ measurement of the kinematic variable y and
therefore any resolution and migration effects on this must be taken into account before
a fit can be made to the shape of the y distribution.
The resolution of y in the region covered in this analysis is shown in figure 8.9. As with
the corresponding plot for the F2 measurement, shown in the previous chapter, there is a
migration of approximately 10% towards lower values of y over most of the kinematic
region covered by this measurement. Only at the lowest values of y does this effect
disappear.
Figure 8.9: The migration and resolution of y in the range used for the extraction of R.
The use of other reconstruction methods in this kinematic range gives worse resolution
with a stronger y-dependence. Therefore, in order to take this effect into account, a
scaling factor, Sy, is introduced into the formula for ε, resulting in:
ε_s = 2(1 - S_y y) / (1 + (1 - S_y y)^2)                  (8.8)
One way of determining Sy would be to make a straight-line fit to the points in figure
8.9; this would result in a value of ~1.10. This, however, corresponds to the whole region
in y, whereas the fit is affected by the range in y over which it is performed, i.e. by
the size of the bin. This is shown in figure 8.10. As can be seen, changing the range in y
covered by the bin would, for the same Sy, result in a different fitted value of R.
Figure 8.10: The effect caused by fitting different ranges of y (left). The plot shows the
MC distribution for MC weighted to R = 1.4. Two fits have been made, one curve
covering the range up to y = 0.6 and the other up to y = 0.45 (corresponding
approximately to the accessible range in this analysis). As can be seen, the shapes of the
curves are different, giving different fitted values of R. The plot to the right shows the
fit made with Sy set to 1.149. This fit yields the weighted value of R=1.4.
The value of Sy is therefore determined within the bin used for this analysis. This is
achieved by weighting the Monte Carlo to R = 1.4 and varying Sy to find the minimum χ2.
The value R = 1.4 was chosen for consistency with previous attempts to determine Sy
[87]. This results in a value of Sy of 1.149 ± 0.03. Also shown in figure 8.10 is the result
of a fit made using this value of Sy. As can be seen, the fit yields the correct result for R.
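A schematic version of this determination is sketched below: for a scan of trial Sy values, the weighted/unweighted Monte Carlo ratio is fitted with the form of equations 8.8 and 8.9, and the Sy giving the smallest χ2 is kept. The five ratio values are generated from a toy assumption (true y equal to 1.15 times the reconstructed y) rather than from the real Monte Carlo, so only the mechanics of the scan are meaningful here.

import numpy as np
from scipy.optimize import curve_fit

def eps(y):
    return 2.0 * (1.0 - y) / (1.0 + (1.0 - y) ** 2)

def ratio_model(y_rec, norm, r, s_y):
    """Equation 8.9 with the scaled epsilon of equation 8.8."""
    return norm * (1.0 + eps(s_y * y_rec) * r) / (1.0 + r)

# Toy stand-in for the weighted/unweighted MC ratio in five reconstructed-y bins:
# the 'true' y is taken as 1.15 times the reconstructed y and the weight corresponds to R = 1.4.
y_rec = np.array([0.14, 0.20, 0.26, 0.32, 0.38])
ratio = (1.0 + eps(1.15 * y_rec) * 1.4) / (1.0 + 1.4)
err = np.full_like(ratio, 0.002)

best = None
for s_y in np.arange(1.00, 1.31, 0.01):          # scan of the scaling factor
    popt, _ = curve_fit(lambda y, n, r: ratio_model(y, n, r, s_y),
                        y_rec, ratio, p0=(1.0, 1.0), sigma=err, absolute_sigma=True)
    chi2 = float(np.sum(((ratio - ratio_model(y_rec, *popt, s_y)) / err) ** 2))
    if best is None or chi2 < best[0]:
        best = (chi2, s_y, popt[1])

print("best S_y = %.2f with fitted R = %.2f" % (best[1], best[2]))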
8.3.2 Extraction of R
R can now be extracted by using the corrected form of equation (8.6):

ε_R = N_Fit (1 + ε_s R) / (1 + R)                  (8.9)
This fit is performed for the ‘96 and ‘97 data with the result shown in figure 8.11.
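For completeness, a minimal sketch of such a fit is given below, using scipy's curve_fit; the five ratio values and their errors are invented placeholders standing in for the measured data/MC ratio, so the printed numbers are not those of the analysis.

import numpy as np
from scipy.optimize import curve_fit

S_Y = 1.149     # scaling factor determined in section 8.3.1

def eps_s(y):
    """Scaled polarisation parameter of equation 8.8."""
    return 2.0 * (1.0 - S_Y * y) / (1.0 + (1.0 - S_Y * y) ** 2)

def eps_r(y, n_fit, r):
    """Equation 8.9: the data / MC(FL = 0) ratio as a function of y."""
    return n_fit * (1.0 + eps_s(y) * r) / (1.0 + r)

# Placeholder ratio values and errors in the five y bins used for the fit.
y_bins = np.array([0.14, 0.20, 0.26, 0.32, 0.38])
ratio  = np.array([1.02, 1.01, 1.00, 0.98, 0.95])
err    = np.array([0.02, 0.02, 0.02, 0.03, 0.05])

popt, pcov = curve_fit(eps_r, y_bins, ratio, p0=(1.0, 0.5), sigma=err, absolute_sigma=True)
print("R = %.2f +/- %.2f, N_fit = %.2f" % (popt[1], np.sqrt(pcov[1, 1]), popt[0]))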
Figure 8.11: Fit of ratio of data and MC y distributions for combined 1996 and 1997
data.
The value of R obtained from this fit is:

R = 0.45 +0.88/-0.41 (stat)
F2 is measured in the FL bin using the method outlined in chapter 7 and is found to be

F2 = 0.933 ± 0.010 (stat)
The ALLM97 value of F2 in this bin is 0.912, which is ~2% lower than the measured
value. This is consistent with the normalisation offset found in the measurement of F2
described in chapter 7.
8.3.3 Systematic Errors
The estimate of systematic uncertainties includes many of the same variations in
quantities as the F2 measurement. These are:
1. Decrease the electron energy cut by 1 GeV.
2. Increase the electron energy cut by 1 GeV.
3. Decrease the photon energy cut by 0.5 GeV.
4. Increase the photon energy cut by 0.5 GeV.
5. Change the Total E-PZ cut to 46 – 61 GeV.
6. Change the Total E-PZ cut to 47 – 59 GeV.
7. Raise the calorimeter E-Pz cut by 1 GeV.
8. Reduce the calorimeter E-Pz cut by 1 GeV.
9. Increase the Boxcut by 0.5cm.
10. Decrease the Boxcut by 0.5cm.
11. Change the cut on the Z vertex to ± 60cm.
12. Remove the veto on the 44m Tagger.
In addition to varying the cuts, changes to the reconstruction are also performed.
13. Normalise the Background in the E-PZ region > 64GeV.
14. In the LUMI-γ simulation, increase the energy scale of the LUMI-γ calorimeter by
0.1 GeV and the resolution by 1%.
15. Change the diffractive fraction to 11%.
16. Change the diffractive fraction to 19%.
17. Scale the hadronic energy by 1.03.
18. Scale the hadronic energy by 0.97.
19. Scale the measured electron energy by 1.005.
20. Scale the measured electron energy by 0.995.
21. Use the ‘offline’ cut for the trigger.
22. Remove the backsplash correction.
23. Scale the SRTD and presampler corrections by 10%.
Finally, the size of the bin is varied in order to take into account the uncertainty in the
scaling parameter, Sy.
24. Decrease the y bin boundaries to 0.11 – 0.22.
25. Increase the y bin boundaries to 0.09 – 0.24.
The effect of these checks on the measured value of R is shown in figure 8.12.
Removing the backsplash correction (22) affects the ratio for the low y bins, causing a
reduction in the overall fitted R.
The reduction of the bin size (24) also affects the fit at low y and leads to a negative
value for R. Another large downward shift is caused by the reduction of the electron
energy cut (1). This is probably due to the change in the relative SINISTRA95
efficiency for finding scattered positrons in data and Monte Carlo.
The largest positive shift is caused by decreasing the photon energy cut (3). There is also a
large positive effect caused by changing the simulation (14), demonstrating the
sensitivity of this measurement to an accurate simulation of the LUMI-γ calorimeter.
Figure 8.12: The effect of the systematic checks on the value of R. Shown for each test
is the fitted value of R together with the errors on the fit. The dashed lines indicate the
nominal value together with its error.
In the calculation of the error, correlations between systematic errors have not been
taken into account and the total systematic error is therefore an overestimate.
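The combination implied here, adding the positive and negative shifts separately in quadrature with no correlation terms, can be sketched as follows; the list of shifts is invented for illustration and is not the set of values behind figure 8.12.

import math

def combine_shifts(shifts):
    """Add positive and negative systematic shifts separately in quadrature,
    ignoring any correlations between the individual checks."""
    up = math.sqrt(sum(s * s for s in shifts if s > 0))
    down = math.sqrt(sum(s * s for s in shifts if s < 0))
    return up, down

# Invented shifts of the fitted R, one entry per systematic check.
shifts = [+0.31, -0.25, +0.52, -0.60, +0.40, -0.55, +0.28, -0.31]
up, down = combine_shifts(shifts)
print("systematic error on R: +%.2f -%.2f" % (up, down))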
8.4 Results
Including systematic errors, the measurement of R is
0.88
R  0.45 00..88
41 ( stat ) 1.07 ( sys )
The above systematic checks are also applied to the F2 measurement and the effect on
the measurement is shown in figure 8.13. In this case, decreasing the photon energy cut
(3) and changing the energy scale of the LUMI-γ simulation (14) again cause the
biggest effects.
Figure 8.13: The fractional effect of the systematic checks on the value of F2.
Including systematic errors, the measured value of F2 is
0.038
F2  0.93300..010
010 ( stat )  0.017 ( sys )
Given R and F2, FL can be extracted using:
R = FL / (F2 - FL)                  (8.10)
The measured value of FL is found to be
0.24
FL  0.29 00..24
25 ( stat ) 1.52 ( sys )
This is measured at Q2 = 5.5 GeV2 and x = 4.4 × 10-4 and is shown in figure 8.14. Also
included on the figure are the limits on the value of FL of FL = 0 and FL = F2. As can be
seen, the measured value of FL falls within these limits, although the errors are large. The
large asymmetry in the systematic error in particular is the result of converting R to FL
using equation 8.10.
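The origin of the asymmetry can be seen with the short sketch below, which inverts equation 8.10 to FL = R F2/(1 + R) and simply re-evaluates it at the shifted values of R; with the statistical shifts in R quoted above, this reproduces the size of the quoted statistical errors on FL. This one-parameter propagation is a simplification for illustration and the small uncertainty on F2 is ignored here.

def f_l(r, f2):
    """Invert equation 8.10: R = FL / (F2 - FL)  =>  FL = R * F2 / (1 + R)."""
    return r * f2 / (1.0 + r)

r_central, f2_central = 0.45, 0.933          # central values quoted above
fl_central = f_l(r_central, f2_central)
print("FL central value: %.2f" % fl_central)

# Re-evaluate at the statistical shifts of R; the uncertainty on F2 is ignored here.
for dr in (+0.88, -0.41):
    shift = f_l(r_central + dr, f2_central) - fl_central
    print("R shifted by %+.2f  ->  FL shifted by %+.2f" % (dr, shift))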
Figure 8.14: The measured value of FL plotted at Q2 = 5.5 GeV2 and x = 4.4 × 10-4.
This result can be compared with the values obtained by the H1 experiment [22], which
are shown in Figure 8.15. The scale of the x-axis in both plots is the same although for
the H1 plot the points are given at different values of Q2.
Although this result is consistent with the H1 measurement, the large error here
prevents a statement on the implications of this measurement. Indeed, given the size of
the error, the result is also consistent with a value of zero. Taking into account only the
statistical error, FL = 0 can be rejected at the 10% significance level. This analysis,
however, demonstrates the viability of using Initial State Radiative events for future,
more accurate measurements of FL.
In order to improve this measurement it is clear that a large improvement in statistics is
required. The above result uses ~35pb-1 of data from the 1996 and 1997 running
periods. Up to the shutdown for the HERA upgrade in September 2000 ZEUS collected
a further 68pb-1 of e+p data and 16.7pb-1 of e-p data. Even if the e-p data is not taken into
account this represents a doubling of the available statistics.
Figure 8.15: FL measured by H1 over a similar range in x to the ISR measurement. The
yellow band shows the expectation from pQCD with the dashed lines giving the limits
of FL = 0 and FL = F2. The points are given for different values of Q2 with the FL = F2
line also covering a range in Q2. This explains the fall with decreasing x.
A further increase in statistics could be obtained by lowering the cut on the scattered
positron energy to 5 GeV. More importantly, however, this would extend the range in y
available for the fit, as demonstrated in figure 8.5. The statistics in the y bins in the
high y region would be improved. This is also the region where the ratio ε_R shows the
greatest deviation from 1 for non-zero values of FL, and it would therefore be of benefit
to gain as much data here as possible.
However, reducing this cut would introduce new uncertainties into the measurement, as
shown by the corresponding systematic check in figure 8.12. The electron finders are tuned to find
positrons at energies greater than 8 GeV and lose efficiency dramatically as the energy
is lowered below this value. Furthermore, the reduction in efficiency is different for
data and Monte Carlo, necessitating the introduction of a new correction factor. Also,
the hadronic reconstruction routines exclude the positron by taking the list of
positron cells directly from the electron finder, so any problems with the electron finder
may also affect the hadronic reconstruction. This is illustrated in figure 8.16. Finally,
cutting on lower energy positrons may also allow more photoproduction background
into the sample.
It is clear that the treatment of the background plays a significant role in improving the
measurement of both F2 and FL. The next step should be to use photoproduction Monte
Carlo to estimate the background, rather than the DIS02 trigger with overlaid
bremsstrahlung events. This may also aid the investigation of the problems involved with
lowering the positron energy cut.
Figure 8.16: The hadronic E-Pz distributions using positron energy cut > 8 GeV (left)
and > 5 GeV (right). As can be seen there is a deficit of Monte Carlo between 5 and 15
GeV for the lower energy cut. This may be due to more background or problems with
the reconstruction.
Chapter 9
Conclusions
In this analysis, Initial State Radiative events have been used to measure the F2 and FL
structure functions of the proton. These measurements require a very good
understanding of both the main ZEUS detector and the ZEUS luminosity monitor, as
ISR events are subject to contamination from both bremsstrahlung and photoproduction
backgrounds.
Using a subset of data from 1996, taken with a low Q2 trigger and having an integrated
luminosity of 3.78pb-1, the proton structure function F2 has been measured as a function
of x and Q2. The kinematic region used covers the Q2 range of 0.3 GeV2 < Q2 < 40
GeV2 and the range in x of 7.6 × 10-6 < x < 4.8 × 10-2. In general, there is good
agreement with the standard ZEUS F2 measurement. There is also agreement with a
Regge based parameterisation, which uses previous measurements of F2 including very
low Q2 ZEUS results.
These data cover a region on the kinematic plane that spans the gap between the very
low Q2 ZEUS results and the medium and high Q2 ZEUS results. They also overlap
with fixed target data at higher values of x. Measurements of F2 at ZEUS now cover
continuously the Q2 range 0.045 GeV2 < Q2 < 30000 GeV2 and values in x ranging from
6 × 10-7 at very low Q2 to 0.65 at high Q2.
Also, using a special trigger for radiative events applied during 1996 and 1997 data
taking, the longitudinal proton structure function, FL, has been measured. These data, with
an integrated luminosity of 35.9pb-1, cover the kinematic region 1 GeV2 < Q2 < 30
GeV2 and 2.6 × 10-4 < x < 6.1 × 10-4. Due to low statistics, FL is measured in one bin
with central values of Q2 and x of 5.5 GeV2 and 4.4 × 10-4 respectively.
The ratio, R, of the cross sections for longitudinally and transversely polarised photons has
been found to be R = 0.45 +0.88/-0.41 (stat) +0.88/-1.07 (sys), while F2 is measured as
F2 = 0.933 ± 0.010 (stat) +0.036/-0.017 (sys). These two quantities are related to FL by
R = FL / (F2 - FL) and give FL = 0.29 +0.24/-0.25 (stat) +0.24/-1.52 (sys).
This is the first direct measurement of FL at HERA and is consistent with the prediction
from perturbative QCD and with indirect QCD based extrapolations.
The measurement of FL should be further improved by using data taken up to the HERA
shutdown in September 2000.
Appendix A
F2 Measurement
Bin   Q2 (GeV2)   x          # Data events   # BGD events   F2 ± stat ± sys
1     0.3         7.483E-6   11.17           3.30           0.252 ± 0.133 ± 0.358
2     0.3         1.363E-5   32.17           4.92           0.203 ± 0.053 ± 0.610
3     0.3         2.217E-5   60.16           0.82           0.337 ± 0.065 ± 0.217
4     0.3         3.88E-5    134.7           2.45           0.266 ± 0.033 ± 0.080
5     0.3         7.76E-5    122.0           0.81           0.259 ± 0.034 ± 0.137
6     0.3         3.954E-4   177.1           0.83           0.194 ± 0.021 ± 0.085
7     0.6         1.602E-5   92.42           72.31          0.240 ± 0.120 ± 0.641
8     0.6         2.915E-5   301.7           70.67          0.472 ± 0.047 ± 0.128
9     0.6         4.731E-5   243.6           18.06          0.441 ± 0.042 ± 0.071
10    0.6         8.315E-5   459.8           14.02          0.440 ± 0.030 ± 0.068
11    0.6         1.663E-4   383.0           4.90           0.413 ± 0.031 ± 0.088
12    0.6         8.837E-4   588.5           3.28           0.270 ± 0.016 ± 0.065
13    1.3         3.204E-5   186.7           96.16          1.247 ± 0.252 ± 0.313
14    1.3         5.83E-5    469.2           134.2          0.703 ± 0.060 ± 0.085
15    1.3         9.463E-5   374.1           68.47          0.498 ± 0.041 ± 0.103
16    1.3         1.663E-4   701.4           47.74          0.552 ± 0.031 ± 0.055
17    1.3         3.326E-4   583.1           15.57          0.494 ± 0.029 ± 0.082
18    1.3         1.767E-3   1051            28.66          0.382 ± 0.017 ± 0.045
19    2.5         6.408E-5   232.6           143.81         1.029 ± 0.209 ± 0.259
20    2.5         1.166E-4   616.3           186.31         0.939 ± 0.072 ± 0.121
21    2.5         1.892E-4   405.0           58.33          0.698 ± 0.054 ± 0.074
22    2.5         3.326E-4   736.9           28.75          0.740 ± 0.040 ± 0.034
23    2.5         6.652E-4   615.4           13.14          0.655 ± 0.039 ± 0.057
24    2.5         3.534E-3   922.2           13.17          0.451 ± 0.021 ± 0.048
25    5           1.282E-4   196.4           117.74         1.096 ± 0.230 ± 0.142
26    5           2.332E-4   502.6           164.61         0.988 ± 0.084 ± 0.139
27    5           3.785E-4   376.5           59.92          0.920 ± 0.077 ± 0.068
28    5           6.652E-4   627.7           31.28          0.865 ± 0.052 ± 0.052
29    5           1.33E-3    494.6           9.80           0.740 ± 0.049 ± 0.040
30    5           7.069E-3   758.0           12.31          0.524 ± 0.028 ± 0.046
31    10          2.563E-4   138.8           69.20          1.679 ± 0.366 ± 0.219
32    10          4.664E-4   318.4           80.53          1.403 ± 0.147 ± 0.158
33    10          7.57E-4    211.9           19.81          1.073 ± 0.114 ± 0.104
34    10          1.33E-3    395.4           21.35          1.036 ± 0.081 ± 0.046
35    10          2.661E-3   295.5           6.57           0.793 ± 0.068 ± 0.049
36    10          1.414E-2   426.0           9.80           0.534 ± 0.038 ± 0.035
37    20          5.127E-4   64.54           26.47          1.963 ± 0.565 ± 0.323
38    20          9.328E-4   172.3           16.47          1.693 ± 0.218 ± 0.054
39    20          1.514E-3   144.3           9.14           1.460 ± 0.199 ± 0.074
40    20          2.661E-3   192.3           9.06           1.024 ± 0.113 ± 0.047
41    20          5.321E-3   161.4           1.63           0.843 ± 0.098 ± 0.065
42    20          2.827E-2   240.4           4.08           0.513 ± 0.048 ± 0.048
43    40          1.025E-3   27.95           8.20           2.186 ± 0.860 ± 0.458
44    40          1.866E-3   77.03           10.76          1.588 ± 0.313 ± 0.051
45    40          3.028E-3   58.82           5.80           1.090 ± 0.223 ± 0.157
46    40          5.322E-3   68.73           2.48           0.722 ± 0.122 ± 0.143
47    40          1.064E-2   67.36           2.45           0.669 ± 0.118 ± 0.130
48    40          5.655E-2   161.3           0.82           0.592 ± 0.071 ± 0.054

Table A.1: Values of the F2 measurements and bins, with the numbers of data and background events found and the statistical and systematic errors.
Bibliography
[1] S.Weinberg, Phys. Rev. Lett. 19 (1967) 1264
A.Salam, “Elementary Particle Theory”, Ed. N. Svartholm, Almquist & Wiksells,
Stockholm (1969) 367
S.L.Glashow, J.Iliopoulos, L.Maiani, Phys. Rev. D2 (1970) 1285
[2] R.G. Roberts, “The Structure of the Proton”, CUP (1990)
[3] E.D. Bloom et al., Proceedings of the XVth International conference on High
Energy Physics, Kiev (1970)
[4] J.D. Bjorken, Phys. Rev. 179 (1969) 1547
[5] R.P. Feynman, Phys. Rev. Lett. 23 (1969) 1415
[6] C.G. Callan & D. Gross, Phys. Rev. Lett. 22 (1969) 156
[7] TASSO Collaboration, Phys. Lett. B86 (1979) 243
[8] ZEUS Collaboration, “Measurement of the Proton Structure Function F2 in e+p
collisions at HERA”, Abstract 1048, ICHEP00, Osaka
[9] J.C.Collins, Nucl. Phys. B261 (1985) 104
[10] G. Altarelli & G. Parisi, Nucl. Phys. B126 (1977) 298
[11] G. Altarelli, Nucl. Phys. B91 (1981) 1
[12] V.N. Gribov & L.N. Lipatov, Sov. J. Nucl. Phys. 15 (1972) 438
[13] L.N. Lipatov, Sov. J. Nucl. Phys. 20 (1975) 96
[14] Y.L. Dokshitzer, Sov. Phys. JETP 46 (1977) 641
[15] H.Plotow-Besch, Comp.Phys.Commun. 75 (1993) 396, Version 7.05, CERN-PPE
1996.11.06
[16] A.D.Martin, W.J.Stirling, R.G.Roberts, Phys. Rev. D51 (1995) 4756
[17] CTEQ Collaboration, “Improved Parton Distributions from Global Analysis of
Recent Deep Inelastic Scattering and Inclusive Jet Data”, hep-ph/9606399, Phys. Rev.
D55 (1997) 1280
[18] A.M. Cooper-Sarkar et al., Proc. of the HERA workshop, Hamburg 1987, ed. R.D.
Peccei, Vol.1, p.231
[19] A.M. Cooper-Sarkar et al., Proc. of the HERA workshop, Hamburg 1991, eds. W.
Buchmuller and G. Ingelman, Vol.1, p.155
[20] M.W Krasny et al., Z. Phys. C53 (1992) 687
[21] L. Favart et al., Z. Phys. C72 (1996) 425
[22] H1 Collaboration, Phys. Lett. B393 (1997) 452
[23] BCDMS Collaboration, A.C. Benvenuti et al., Phys. Lett. B195 (1987) 91
[24] BCDMS Collaboration, A.C. Benvenuti et al., Phys. Lett. B233 (1989) 485
[25] BCDMS Collaboration, A.C. Benvenuti et al., Phys. Lett. B237 (1990) 592
[26] CDHSW Collaboration, P. Berge et al., Z. Phys. C49 (1991) 187
[27] E143 Collaboration, K.Abe et al., Report SLAC-PUB-7927, SLAC (1998), hep-ex/9808028
[28] E140 Collaboration, S.Dasu et al., Phys. Rev. Lett. 61 (1988) 1061
[29] E140 Collaboration, S.Dasu et al., Phys. Rev. D49 (1994) 5641
[30] L.W.Whitlow et al., Phys. Lett. B250 (1990) 193
[31] NMC Collaboration, M.Arneodo et al., Nucl. Phys. B483 (1997) 3
[32] CCFR/NuTeV Collaboration, U.K.Yang et al., “Measurements of the Longitudinal
Structure Function and |Vcs| in the CCFR experiment”, Proc. 6th International
Workshop on Deep Inelastic Scattering and QCD (DIS98), ed.
G.Coremans,R.Roosen,Singapore, World Scientific (1998) 131
[33] P.Higgs, Phys. Rev. Lett. 12 (1964) 132
[34] P.Higgs, Phys. Rev. 145 (1966) 1156
[35] P.G.O.Freund, “Introduction to Supersymmetry” ; CUP (1986)
[36] C.Seez & T.S. Virdee, “Detection of an intermediate mass Higgs Boson at LHC
via its two photon decay mode”, CMS TN/92-56 (1992)
[37] W. van Loo, Phys. Stat. Sol. 28 (1975) 227
[38] P.Lecoq et al., “Lead Tungstate Scintillators for LHC e.m. Calorimetry”, CMS
TN/94-308 (1994)
[39] S.Dasu et al., “The Level-1 Calorimeter Trigger Performance Studies”, CMS
TN/94-285 (1994)
[40] U Schäfer, Private Communication
[41] P.J. Ashenden, “The Designer’s Guide to VHDL”, Morgen Kaufmann (1996)
[42] A. Rushton, “VHDL for Logic Synthesis”, Second Edition, John Wiley & Sons
(1998)
[43] G.A. Voss & B.H. Wiik ; “The Electron-Proton Collider HERA” ; Ann. Rev.
Nucl. Part. Sci. 44 (1994) 413
[44] ZEUS Collaboration, Eur. Phys. J. C7 (1999) 609
[45] ZEUS Collaboration, Phys. Lett. B487 (2000) 53
[46] P.A. Mauchiz et al., Phys. Lett. B295 (1992) 159
[47] A.C. Benvenuti et al., Phys. Lett. B237 (1990) 592
[48] S.R.Mishra & F.Sciulli, Ann. Rev. Nucl. Part. Sci. 39 (1989) 259
[49] Phys. Rev. D54 (1996) 3006
[50] L.W. Whitlow et al., Phys. Lett. B250 (1990) 193
[51] ZEUS Collaboration, “A detector for HERA”, PRC 87-02
[52] ZEUS Collaboration, “The ZEUS Detector”, Status Report (1993)
[53] E.Hilger, “ZEUS coordinate System”, ZEUS Note-86-17
[54] B.Foster et al., NIM A315 (1992) 397-403
B.Foster et al., NIM A338 (1994) 254-283
[55] B.Foster, “Whisker Growth in Test Cells”, Preprint 29054 (1986)
[56] A.Bamberger et al., “The Small Angle Rear Tracking Detector at ZEUS”, NIM
A401 (1997), 63-80
[57] A.Caldwell et al., NIM A321 (1992) 356
[58] H.Brueckmann, “A Precision Calibration Method (DU) for the ZEUS Hadron
Calorimeter”, ZN-86-36
[59] J.F.Zhou & D.Krakauer, “Calorimeter calibration triggers in the ZEUS luminosity
run”, ZN-94-130
[60] J.Michell & D.Hanna, “Status of the FCAL/BCAL Laser Calibration”, ZN-90-104
[61] A.Bamberger et al., “The Presampler for the Forward and Rear Calorimeter in the
ZEUS Detector”, DESY-96-139, Proc. ICCHEP96 Frascati, Italy 6 209-218
[62] K.Piotrzkowski & M.Zachara, “Determination of the ZEUS Luminosity in 1994”,
ZEUS Note-95-138
[63] G.Hartner et al., “VCTRAK (3.07/04) Offline Output Information”, ZEUS Note 97-064
[64] H.Abramowicz, A.Caldwell and R.Sinkus, “Neural Network Based Electron
Identification in the ZEUS Calorimeter”, N.I.M. A365 (1995) 508-517
[65] U. Bassler & G. Bernardi, DESY H1-03/93-274 (1993)
[66] A. Courau et al., “Quasi-Real QED Compton Monte Carlo for HERA”,
Proceedings of the Workshop Physics at HERA 2 (1991) 902
[67] H.Spiesberger, “DJANGO6 version 2.4 – A Monte Carlo generator for Deep
Inelastic lepton proton scattering including QED and QCD radiative effects”, 1996.
[68] A. Kwiatkowski, H Spiesberger, H.J.Mohring, Comp. Phys. Commun 69 (1992),
155
[69] H.Spiesberger, “HERACLES. An event generator for ep interactions at HERA
including radiative processes (Version 4.6)”, 1996
[70] L.Lonnblad, Nucl. Phys. B306 (1988) 746
[71] L.Lonnblad, Z.Phys. C43 (1989) 625
[72] L.Lonnblad, Nucl. Phys. B339 (1990) 393
[73] T. Sjöstrand, Comp. Phys. Commun. 39 (1986) 347
[74] T.Sjöstrand, Comp. Phys. Commun. 82 (1994) 74
[75] H. Jung, “RAPGAP. The RAPGAP Monte Carlo for Deep Inelastic Scattering”,
1997
[76] H.U.Bengtsson, T.Sjöstrand ; Comp.Phys.Commun 46 (1987) 43
[77] G. Ingelman, A. Edin, J. Rathsman, DESY 96-057, DESY (1996), hep-ph/9605286
[78] H. Abramowicz, A. Levy, DESY 97-251, DESY (1997), hep-ph/9712415
[79] CERN, CH-1211 Geneva 23, Switzerland, “GEANT Detector Description and
Simulation Tool”, October 1994
[80] E. De Wolf et al., ZGANA, ZEUS Trigger Simulation Library
[81] K. Olkiewicz & A. Eskreys, “Off-line Luminosity Calculation in the ZEUS
Experiment in 1997, 1998 and 1999”, ZEUS Note 99-044 (1999)
[82] J. Adamczyk et al.,“Proposal for upgraded Lumi monitor for ZEUS Experiment”
ZEUS Note 99-078.
[83] A. Caldwell et al., “A Luminosity Spectrometer for ZEUS”, HERA-ZEUS
Document 22/10/99
[84] R. Graciani, “A New Electron Tagger for ZEUS”, ZEUS Note 99-067
[85] D.Roff, “Luminosity Measurements from Virtual Compton Scattering”, ZEUS
Note 97-026
[86] N. Wulff, “Radiative Corrections for ep-interactions in Leading Log
Approximation”, ZEUS Note 95-143
[87] A. Bornheim, Private Communication
[88] R.Deffner, “Measurement of the Proton Structure Function F2 at HERA using the
1996 and 1997 ZEUS Data”, Thesis, Bonn (1999)
[89] N. Tuning, Private Communication
[90] ZEUS Collaboration, Zeit. f. Phys. C72 (1996) 399
[91] A.M. Cooper-Sarkar, R.C.E. Devenish, A. De Roeck, “Structure Functions of the
Nucleon and their Interpretation”, DESY 97-226, DESY (1997), hep-ph/9712301
[92] A. Bornheim, “Messung der Protonstrukturfunktionen F2 und FL in radiativer ep-Streuung mit dem ZEUS-Detektor”, Thesis, Bonn (1999)