STUDENT RESEARCH PAPERS, SUMMER 2011, VOLUME 22
REU Director: Umesh Garg, Ph.D.
University of Notre Dame
0.5 GeV/c. After this, the 100-bin equal-curvature function from 2.0 GeV/c to 300 GeV/c is called for the last 33 bins (there are 33 bins between 6 and 300 GeV/c in that binning, which gives an estimate of how finely sliced the low-pT bins were before the linear step sizes). The final bin runs from 300 GeV/c to -300 GeV/c: the very high momentum tracks all move straight through the detector, so tracing continuously through φ space is equivalent to tracing through infinite momentum and coming back from negative infinite momentum. As such, the middle bin contains both very positive and very negative momenta. The bins are symmetric: there are 48 bins on each side and 1 in the middle, making 97 bins in total.

Figure 7: Center and boundary values for pT bins
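The equal-curvature scheme described above can be sketched as follows. The function name and the exact endpoint handling are our assumptions; the key idea from the text is that bin edges are uniform in curvature (1/pT), so bins widen as pT grows:

```python
# Sketch of the equal-curvature pT binning described above. Names and endpoint
# handling are assumptions; only the uniform-in-1/pT spacing is from the text.
import numpy as np

def curvature_bin_edges(pt_min=2.0, pt_max=300.0, n_bins=100):
    """pT bin edges (GeV/c) equally spaced in curvature k = 1/pT."""
    k_edges = np.linspace(1.0 / pt_min, 1.0 / pt_max, n_bins + 1)
    return 1.0 / k_edges          # increasing from pt_min to pt_max

edges = curvature_bin_edges()
# Count the bins of this scheme that reach above 6 GeV/c; this reproduces the
# "33 bins between 6 and 300 GeV/c" quoted in the text.
n_high = int(np.sum(edges[1:] > 6.0))
```

Because the spacing is uniform in 1/pT, the first bins near 2 GeV/c are only a few hundred MeV/c wide while the last bins span tens of GeV/c, which is why only the 33 highest-momentum bins of this scheme are kept.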
At first we used a slop of 1.2 to get a reasonable efficiency plot, but we then realized that we were using a magnet strength of 4.0 T when we should have been using 3.8 T. This caused errors in the proper pT binning. After the correction we were able to decrease the slop to 0.8 without a significant negative impact on the efficiency plot. One final upgrade to our algorithm was to remove all stubs that report less than 2 GeV/c for their momentum. The current binning scheme is shown in Figure 7.
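As an illustration (not the collaboration's actual code), the two cuts just described, the 2 GeV/c stub veto and a slop-widened φ window, might look like the sketch below. The interpretation of the slop as a multiple of the φ bin width is our assumption:

```python
# Hypothetical sketch of the two stub cuts described above. Treating "slop"
# as a multiple of the phi bin width is an assumption on our part, and phi
# wraparound is ignored for brevity.
def stub_passes(stub_pt, stub_phi, expected_phi, phi_bin_width, slop=0.8):
    """Keep a stub only if it reports pT >= 2 GeV/c and lies within the
    expected phi window widened by the slop tolerance."""
    if stub_pt < 2.0:                       # veto stubs reporting < 2 GeV/c
        return False
    return abs(stub_phi - expected_phi) <= slop * phi_bin_width
```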
6 Efficiencies and Fake Rates

To test the efficiency of our tracking algorithms, we first ran them on a spectrum of single-muon tracks. These tracks should be the easiest to find: muons are among the most penetrating particles in the detector material, as evidenced by the enormous volume of the detector devoted solely to detecting them.
Muons also have only a small chance of interacting and being deflected, and their large mass means they radiate less, so their tracks are very close to perfect circles and they do not produce additional tracks through radiation. The muon efficiency plot from 2 GeV/c to 1 TeV is seen in Figure 8. Note that the very low initial values at 2 GeV/c and 3 GeV/c are due to cutting stubs that reported below 2 GeV/c: the uncertainty on the stub measurement, even at this large angle of curvature, ensures that many of these stubs get cut, hurting the track efficiencies. The effect abates above 3 GeV/c because the uncertainty range does not reach that low for higher-momentum tracks. Between 6 GeV/c and 10 GeV/c there is a small dip due to the change in style of momentum measurement. Above 10 GeV/c the efficiency remains flat save for two dips, one at about 200 GeV/c and one at about 400 GeV/c. These features of the efficiency plot are not well understood, but are likely due to how particles at these energies are deflected and where the bin cuts occur in curvature. See Figure 9 for a detail of the lower-pT range of the same plot.
As a test of how this would affect a trigger, "MinBias" events were analyzed. A minimum-bias event is what we expect to constitute a pileup event. To estimate how often this algorithm would find a nonexistent track at high pT, it was run on 200 thousand individual minimum-bias events, and the highest-momentum fake for each event (if there were any) was placed in a histogram. The overplotted histograms in Figure 10 compare the result without vetoing stubs below 2 GeV/c to the result with the veto.
Unfortunately, these fake rates do not scale linearly with detector occupancy, but this is a good starting estimate of algorithmic performance. Assuming it is possible to look at only 1 MinBias event in the detector, that these events occur with a cross section of 71 mb, and that the LHC is operating at a luminosity of 10^34 cm^-2 s^-1, then one fake on this plot corresponds to a readout rate of about 3.6 kHz. This estimate needs to be taken with a grain of salt, not only because of the nonlinearity of the effect, but also because only a few counts are obtained above 10 GeV, so the uncertainty on the count rate is quite high.

Figure 9: Efficiency of finding muon tracks in the range 2 GeV to 25 GeV
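The 3.6 kHz figure follows directly from the numbers quoted above; this short back-of-the-envelope calculation reproduces it, one fake per 200 thousand events at the stated cross section and luminosity:

```python
# Back-of-the-envelope check of the quoted readout rate. All inputs are the
# values stated in the text; nothing here is a measured CMS number.
sigma_cm2 = 71e-3 * 1e-24            # 71 mb in cm^2 (1 b = 1e-24 cm^2)
luminosity = 1e34                    # cm^-2 s^-1
event_rate = sigma_cm2 * luminosity  # minimum-bias interactions per second
n_events = 200_000                   # events analyzed
rate_per_fake = event_rate / n_events
print(round(rate_per_fake))          # 3550 Hz, i.e. about 3.6 kHz
```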
7 Future Investigation

The algorithm developed for the new geometry has performed well on single-track events without pileup. Tuning the algorithm presented a unique challenge: selecting the best type of φ bins, the proper number of φ bins, and the slop in φ each took time to compile and analyze. Because an improper magnetic field constant was initially used in our algorithm, the slop was significantly overestimated at high momentum, resulting initially in a very high fake rate. Further study is needed to optimize the algorithm, investigating different slop values, tuning the number of φ bins, and deciding how to bin in φ. The algorithm should also account for high-η tracks and incorporate the forward barrels. In addition, once the optimal algorithm has been determined, single-track events with pileup as well as realistic events with pileup will need to be studied. If the algorithm proves to operate well under high amounts of pileup, with high efficiency for finding real tracks and a low fake rate, it should be moved to simulated FPGA code.
Upon successful implementation in hardware code, the algorithm should move toward actual production in hardware, to be tested for integration into the new silicon detector for CMS.
8 Acknowledgements

We would first like to thank our advisors, Professor Michael Hildreth and Professor Kevin Lannon, for all of their guidance and help. They both provided an excellent working environment through their experience and expertise, while maintaining a light-hearted attitude conducive to a healthy atmosphere.
We would also like to thank the graduate students, Jamie Antonelli and Sean Lynch, who were very helpful in answering questions related to NDCMS, the Condor computer cluster, concentric circles, and ROOT. They were great sources of entertainment when the computers were down.
Lastly, we would like to thank the University of Notre Dame, the REU program, and the Notre Dame Physics Department for their support of this research, and the NSF and COS-SURF donors for funding it over the summer.
A recoil mass separator, St. George, was proposed to study, in inverse kinematics, the nuclear reactions that control energy production and nucleosynthesis in stellar and explosive helium burning. Dipole magnets will be used to separate the charge states of the reaction products and filter out background radiation. It is not possible to measure the full magnetic field within these dipole magnets without interfering with the ion beam. For this reason, a Hall probe was used to measure the magnetic field outside the strongest-field region of the dipole magnet. Data comparing the magnetic field reading on the Hall probe to that on a nuclear magnetic resonance probe in the center of the magnet show that the Hall probe is sufficient to determine the magnetic field inside the dipole magnet.
Complementing the St. George mass separator is the Georgina project, which will be used to study stellar burning by efficiently detecting low-energy gamma rays from in-beam experiments. For the germanium detectors to operate correctly, they must be cooled to liquid nitrogen temperature. This is done by filling the detectors with liquid nitrogen and refilling them every six hours. To allow continuous use of the Georgina project, LabVIEW code was deployed on a CompactRIO system to fill the germanium detectors at specified offset times.
Introduction

The Nuclear Science Lab at Notre Dame focuses on nuclear astrophysics, the field that links nuclear physics to astrophysics by studying the nuclear processes that occur in stars and supernovae. The Nuclear Science Lab is specifically interested in stellar nucleosynthesis, the process in which most elements heavier than hydrogen are formed. The goal of the lab is to determine the origins of the elements and to further understand the universe.
The Strong Gradient Electromagnetic Online Recoil separator for capture Gamma Ray Experiments, or St. George, was designed to study stellar helium burning. St. George is a recoil mass separator located in the Nuclear Science Lab at Notre Dame. It will be used to study low-energy reactions in inverse kinematics for beam masses as high as 40. Alpha capture reactions are important in stellar helium burning and affect the formation of heavy nuclei through nucleosynthesis. Low-energy proton or alpha-particle beams are traditionally used to study these reactions; St. George instead provides the possibility of using heavy-ion beams to bombard a 4He gas-jet target, with the heavy-ion recoil nuclei produced by alpha capture reactions in the gas jet separated from the intense primary beam by the St. George separator. St. George is made up of six dipole magnets, eleven quadrupole magnets, a Wien filter, and a detection system, and is broken into three sections. The first section selects a particular charge state from the heavy-ion recoil beam: once the heavy-ion beam collides with the helium gas target, the small fraction of heavy-ion recoil products emerges in many different charge states, and the first section of St. George allows only the selected charge state to pass.
This is done using dipole magnets, which bend the other charge states too little or too much. Section two separates the heavy-ion recoil from the original heavy-ion beam by selecting only particles of a certain velocity. By conservation of momentum, the heavy-ion recoil and the original heavy-ion beam have the same momentum after the initial collision with the 4He gas-jet target; the recoil product, however, has a higher mass and therefore a lower velocity than the original ion beam. This makes it possible to separate the two beams using perpendicular electric and magnetic fields. The final section of St. George is the detection stage, which uses two detectors to further filter the heavy-ion recoil from the original heavy-ion beam. The first detector starts a timer when the total beam strikes it, and the second detector, a known distance away from the first, measures the energy of the total beam. The amount of heavy recoil can then be
determined using the time of flight over the known distance together with the measured beam energy.
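The velocity-selection and time-of-flight logic described above can be illustrated with a short sketch. The numbers and function names below are ours, not St. George design parameters; only the underlying relations (qE = qvB for the Wien filter, v = p/m, and E = mv²/2 for the time-of-flight stage) come from the physics in the text:

```python
# Illustration of the separation principles described above. Values and names
# are hypothetical; only the relations (qE = qvB, v = p/m, E = mv^2/2) are real.

def wien_pass_velocity(e_field, b_field):
    """Velocity transmitted undeflected by crossed E and B fields: v = E/B."""
    return e_field / b_field

def velocity_from_momentum(p, m):
    """Nonrelativistic v = p/m: at equal momentum, the heavier recoil is slower."""
    return p / m

def mass_from_tof(kinetic_energy, distance, flight_time):
    """Invert E = m v^2 / 2 with v = d/t to identify the particle's mass."""
    v = distance / flight_time
    return 2.0 * kinetic_energy / v ** 2

# At the same momentum, a heavier recoil moves more slowly than the unreacted
# beam, so a Wien filter tuned to the recoil velocity rejects the beam.
p = 100.0                                   # arbitrary shared momentum
assert velocity_from_momentum(p, 40.0) < velocity_from_momentum(p, 36.0)
```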
Georgina, which stands for Ge-detector Online aRray for Gamma ray spectroscopy In Nuclear Astrophysics, was designed to efficiently detect gamma rays from in-beam experiments.
Georgina will also be part of the Nuclear Science Lab at Notre Dame. The Georgina project will supply additional data on low-energy stellar burning experiments that St. George cannot: Georgina can be used in alpha capture reactions alongside St. George, and it can also collect data on proton capture experiments, on which St. George cannot supply information.
Georgina consists of five germanium detectors which can be arranged in different configurations to reduce background interference and detect gamma rays more efficiently. Georgina and St. George will broaden the experimental techniques at the Nuclear Science Lab and help further the understanding of fundamental concepts of the universe.
Georgina Code

The Georgina computer code is written to start experiments, run them continuously, and be accessible anywhere on the Notre Dame network. The code is written in LabVIEW, using a while loop to ensure the program runs continuously. It also consists of a series of nested while loops to start filling tanks with liquid nitrogen, set an offset time between fills, indicate whether a failure has occurred, and reset the program. The program fills the five germanium detectors one at a time, then waits a set amount of time before refilling them. LEDs are used as sensors to tell the program when the tanks are full of liquid nitrogen: when liquid nitrogen cools an LED sensor, the resistance across the LED decreases, which raises the voltage read by the program. Once the voltage is above 2.2 volts, the program reads the tank as full. To ensure that the LEDs do not falsely indicate a full tank, the program requires each tank to fill for a minimum amount of time before it can be considered full; the LED sensor must trigger and the specified minimum time must elapse before the next tank can fill.
Additionally, if a tank takes too long to fill, indicating a problem with the filling, the program will skip any tanks left unfilled and indicate that a failure has occurred.
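Since the actual program is graphical LabVIEW code, the fill logic just described can only be paraphrased here; the sketch below expresses it in Python. The 2.2 V threshold comes from the text, while the minimum and maximum fill times, function names, and polling structure are illustrative assumptions:

```python
# Text-based paraphrase of the LabVIEW fill logic described above. The 2.2 V
# threshold is from the text; the time limits and names are illustrative.
import time

FULL_VOLTAGE = 2.2   # LED sensor voltage rises above this when the tank is full
MIN_FILL_S = 60      # must fill at least this long before "full" counts (assumed)
MAX_FILL_S = 600     # longer than this flags a filling failure (assumed)

def fill_detector(read_led_voltage, open_valve, close_valve, now=time.monotonic):
    """Fill one detector; return True on success, False on a timeout failure."""
    open_valve()
    start = now()
    try:
        while True:
            elapsed = now() - start
            if elapsed > MAX_FILL_S:
                return False                 # took too long: report a failure
            # A high LED voltage only counts as "full" after the minimum fill
            # time, guarding against a falsely triggered sensor.
            if elapsed >= MIN_FILL_S and read_led_voltage() >= FULL_VOLTAGE:
                return True
            time.sleep(0)                    # placeholder poll interval
    finally:
        close_valve()

def fill_all(detectors):
    """Fill detectors one at a time; skip the rest if one fails, as described."""
    for read_fn, open_fn, close_fn in detectors:
        if not fill_detector(read_fn, open_fn, close_fn):
            return False                     # failure: remaining tanks skipped
    return True
```

Passing the clock in as `now` keeps the sketch testable; the real CompactRIO code would of course read its hardware channels directly.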
Start and Reset

The start and reset button was added to allow manual changes to the program during the filling process. Upon starting the program, the start/reset button must be pressed to begin filling the tanks; no offset time occurs between start and the first filling. After the first filling begins, the start button becomes a reset button. Pressing the reset button after a failure restarts the program at the beginning of the offset time, meaning the program will wait the specified offset time before filling tanks again. Pressing the reset button during the offset time starts the first tank filling immediately, so pressing the reset button twice after a failure causes the tanks to start filling again. Pressing the reset button while the tanks are filling stops the filling and sends the program back to the beginning of the offset time.
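The button behavior just described amounts to a small state machine. The sketch below models it; the state names are ours, not identifiers from the LabVIEW code:

```python
# Hypothetical state-machine model of the start/reset button behavior
# described above. State names are our own, not from the LabVIEW program.
IDLE, OFFSET_WAIT, FILLING, FAILED = "idle", "offset_wait", "filling", "failed"

def press_button(state):
    """Return the next state when the start/reset button is pressed."""
    if state == IDLE:
        return FILLING        # first press: start filling, no offset wait
    if state == FAILED:
        return OFFSET_WAIT    # after a failure: wait out the full offset time
    if state == OFFSET_WAIT:
        return FILLING        # pressed during the offset: fill immediately
    if state == FILLING:
        return OFFSET_WAIT    # pressed mid-fill: stop, return to offset wait
    return state
```

Note that pressing twice from the FAILED state lands in FILLING, matching the observation that two presses after a failure restart the tanks filling.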