REU Student Research Papers – Summer 2011, Volume 22
University of Notre Dame
REU Director: Umesh Garg, Ph.D.
muon passes through the detector must be completely ignored, and that amount of dead time would contribute significantly to the measurement of the neutrino rate. I don't see why that should necessarily be called dead time if we make our cuts correctly. In fact, with the cuts that are most likely to find neutrinos, including that constraint only reduces the number of neutrinos by 0.1%. In the time I've been here there has been no solid consensus on what criteria count as dead time, so it is a topic I have not been able to work on much; it is very complex, and there are many parts of the system that I am simply not educated in.
Another important thing I found, which was brought to the research group's attention, was that the trigger rate has increased by nearly 5% in 3 months of running. This has been explained as a slight temperature increase in the area of the detector, which raises the noise in the system. My professor doesn't believe this is a completely adequate answer, because the rate continues to rise all the time. I suppose the only way we'll know is to see whether the rate goes back down as we approach winter. This is my only original contribution to the experiment that I know of.
Another thing I worked on was rates based on certain cuts. A group at MIT developed a list of cuts on the data that they believed would be beneficial to the data analysis, and my job was to double-check their results. In the end everything matched, but I haven't heard of any further plans for those cuts.
The last thing I have been working on is developing a program that sorts through the data in TTrees (a data class of ROOT) full of possible neutrinos from all the runs taken so far. It is designed to look for the coincidence events that I have discussed in detail, so it cuts away a good amount of the data. It reads in 1428 files and creates a single file that is very close to the same size as any one of those 1428 files. I was eventually able to create a graph showing probable neutrinos.
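The coincidence skim can be illustrated in plain Python. The actual program works on ROOT TTrees; the event times and the 100 µs window below are illustrative assumptions, not the experiment's real cuts.

```python
WINDOW_US = 100.0  # assumed coincidence window in microseconds (illustrative)

def find_coincidences(prompt_times, delayed_times, window=WINDOW_US):
    """Return (prompt, delayed) time pairs with 0 < dt <= window."""
    return [(tp, td)
            for tp in prompt_times
            for td in delayed_times
            if 0.0 < td - tp <= window]

# Toy data: two true prompt/delayed pairs and one isolated prompt event.
prompts = [10.0, 500.0, 2000.0]   # annihilation-like event times (us)
delays  = [35.0, 560.0, 5000.0]   # neutron-capture-like event times (us)
print(find_coincidences(prompts, delays))  # two candidate pairs survive
```

Most events fail the pairing requirement, which is why the 1428 input files reduce to a single output file of comparable size.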
4 Conclusions

With the rough estimate of the number of neutrinos that we have, we can make some premature conclusions. The way I reach these conclusions is through comparison with a simulation which simulates a year's worth of neutrino interactions in the detector. I first look at the number of neutrinos from the simulation that fall within my cuts; then I compare the ratio of the live run time to the simulation run time with the ratio of the number of neutrinos from the real run to the number of neutrinos from the simulation.
If the two ratios agree, then the mixing angle matches the one used in the simulation. Another way to compare the real data with the simulation is by looking at the shape of the neutrino spectrum. The neutrino mixing angle will have an effect on the energy distribution that we are able to see, so by comparing the shapes we are able to get a value for the mixing angle.
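The ratio test just described can be put in numbers. All values below are placeholders, not the experiment's measured results:

```python
# If the simulation's mixing angle is right, the live/simulated exposure-time
# ratio should match the ratio of selected neutrino candidates.
# All numbers here are placeholders, not measured values.
live_days, sim_days = 60.0, 365.0     # assumed exposure times
n_data, n_sim = 1450.0, 9000.0        # assumed candidate counts after cuts

time_ratio = live_days / sim_days
count_ratio = n_data / n_sim
agreement = count_ratio / time_ratio  # ~1.0 if data match the simulation
print(round(time_ratio, 3), round(count_ratio, 3), round(agreement, 3))
```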
The following images compare real data to simulation data with identical cuts. The red dotted lines are the real data and the black lines are simulation data. In Figure 2 we start by comparing the gadolinium capture peaks. The benefits of these peaks as far as background is concerned are discussed elsewhere, but for the sake of data analysis, they are the starting point for making sure the two sets of data are comparable. This is the data set on which I matched up the peaks and scaled the simulation data so that the area under each curve is the same as for the real data. The reason we have to do so much comparison to begin with is that neither the simulation nor the real data is calibrated. On the x-axis of Figure 2 and Figure 4 the measure is, to a close approximation, MeV, but this is obtained by dividing the total charge that comes out of our PMTs by a function. This function can be called the gain, and the gain for the simulation and the real data is different; it is not even constant. The gain appears to vary with the charge
variable within the program; in its name, the ct stands for the data processing program and the q stands for charge. I attempted to come as close as possible to a result where the two peaks matched up, but as can be seen, the shape of the gadolinium peaks still differs between the real and simulation data. One possible reason for this is that there is background in our data and no background in the simulation. That would mean that my scaling factor is off, because by attempting to match the area under the curve I am assuming that my real data are all neutrinos, when they can only be called neutrino candidates. The simulation, on the other hand, is limited to showing pure neutrino interactions.

Figure 3: A comparison of the time between an annihilation event and a neutron capture event.
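The area scaling applied to the simulation amounts to one normalization factor. The bin contents below are made up for illustration:

```python
# Scale a simulated histogram so its area matches the real-data histogram,
# as done for the Gd-capture peaks. Bin contents are made-up placeholders.
data_hist = [5.0, 20.0, 80.0, 90.0, 30.0, 10.0]  # real-data counts per bin
sim_hist  = [2.0, 15.0, 60.0, 70.0, 20.0, 8.0]   # simulation counts per bin

scale = sum(data_hist) / sum(sim_hist)
scaled_sim = [scale * c for c in sim_hist]

print(round(scale, 3))
print(round(sum(scaled_sim), 1), round(sum(data_hist), 1))  # areas now equal
```

Note that this normalization silently treats every real-data count as a neutrino, which is exactly the caveat about background contamination.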
Figure 3 is the spectrum of neutron capture times on gadolinium. It should be an exponential decay: with roughly 36.8% of the maximum remaining at one time constant, 13.5% of max at two time constants, etc., it closely resembles an exponential. But closer examination shows that it does not approach zero very well. Looking at the real data it can be seen that the shape does not match perfectly. At low times there are fewer counts, but the area under the two curves is still the same, because this plot is based on the same cuts as Figure 2.
This likely reflects the capabilities of our detector and the strength of our cuts.
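The 13.5%-at-two-time-constants figure quoted above is just the exponential law e^(−n) evaluated at whole numbers of time constants; a quick check:

```python
import math

# For an exponential decay N(t) = N0 * exp(-t/tau), the fraction of the
# maximum remaining after n time constants is e^(-n).
for n in (1, 2, 3):
    frac = math.exp(-n)
    print(n, f"{100 * frac:.1f}%")   # 36.8%, 13.5%, 5.0%
```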
positron, 0.511 MeV, to get the energy of the neutrino. The energy spectrum of the neutrinos is related to the mixing angle and squared mass difference, so from this last graph we can extract values for both parameters.
I did not have the time to learn and put into practice the statistical tools which would allow me to make the comparisons between the graphs. Also, for the sake of a non-biased result, the scientists making these cuts shouldn't know the mixing angle put into the simulation. Another important piece of data to include is that the inverse from equation 1 is 0.163 for the ratio of times and 0.145 for the ratio of suspected neutrinos.
The reactor does not run at full power all the time; the simulation, on the other hand, assumes full power all the time. In addition, the simulation data that I have access to most likely does not take into account fuel evolution from mostly uranium to mostly plutonium. With all these errors to take into account, I don't believe there can be a statistically significant comparison between the data and the simulation. Nevertheless, the experiment will continue to run for years to come, and with more data comes more statistical significance. The experiment will continue to be refined, and hopefully we can shed more light on these questions.
It can be found in deep-sea crust and could be a signature of possible recent nearby supernova activity. If the half-life of 60Fe is accurately measured, we can assess how far from the Earth a supernova occurred and precisely date how long ago it transpired. The half-life of the radioactive isotope 60Fe has an accepted value of 2.62 x 10^6 yr. This new value, measured at the Technical University of Munich, is in contradiction to the previously accepted value of 1.49 x 10^6 yr. Our new experiment is to re-measure the half-life of 60Fe through Accelerator Mass Spectrometry (AMS) and a low level counting station, to eliminate some of the background radiation and other error in the accepted value. In this paper I will discuss the data analysis for the activity of 60Fe.
A major key to understanding the development of our universe is looking at radioactive nuclei that are produced by astrophysical processes. Specifically, 60Fe is produced naturally only in supernovae. The half-life of 60Fe plays a major role in multiple stellar investigations. The half-life is defined as the time for the radioactivity of a certain isotope to fall to half its original value. The new measurement for the half-life of 60Fe, 2.62 x 10^6 yr, still has a significant amount of error in it. To measure a precise value for the half-life of 60Fe, two different approaches are used: a low background activity measurement and accelerator mass spectrometry (AMS). A direct measurement of the half-life of 60Fe cannot be done because the half-life is so long. So the goal is to look at the decay scheme of 60Co to 60Ni, as shown in Figure 1, because of the gamma rays emitted after the beta-minus decay. It is much easier to measure the activity from 60Co to 60Ni because the gamma-ray emission can be measured by our detector. The β-minus decay of 60Fe to 60Co cannot be measured directly. A low level counting station can be used to measure the activity of the gamma-ray emission.
counting station. The activity of 60Co is looked at rather than that of 60Fe because the decay of 60Fe is such a long process that it would take too long to measure. Since 60Co is the daughter nucleus of 60Fe and has a known half-life of 5.27 years, it can be used to determine the activity of 60Fe using equation 1.
In this equation, A(t) is the activity of 60Co, t is time, λ is the decay constant of 60Co, and A is the 60Fe activity, which is our unknown. The decay constant of cobalt, λCo, is found through the use of equation 2, where t1/2 is the half-life of 60Co.
With the decay constant of 60Co, the activity of 60Fe can be calculated using equation 1. To determine the decay constant of 60Fe, the radioactive decay law (equation 3) is used to derive a formula for activity, equation 4.
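As a numerical sketch (assuming equation 1 has the standard in-growth form A_Co(t) = A_Fe(1 − e^(−λ_Co t)), which is consistent with the description but not shown in this excerpt):

```python
import math

T_HALF_CO = 5.27                  # half-life of 60Co in years (equation 2 input)
lam_co = math.log(2) / T_HALF_CO  # decay constant of 60Co, per year

# Assuming equation 1 is the standard in-growth law
#   A_Co(t) = A_Fe * (1 - exp(-lam_co * t)),
# a measured 60Co activity can be inverted for the 60Fe activity A_Fe.
def fe_activity(a_co_measured, t_years):
    return a_co_measured / (1.0 - math.exp(-lam_co * t_years))

print(round(lam_co, 4))           # ~0.1315 per year
```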
and λ is the decay constant. In equation 4, which is used to find the decay constant λ of 60Fe, A is the activity of 60Fe and N is the number of 60Fe nuclei present. The number of 60Fe atoms present is found through a process called AMS, a technique similar to mass spectrometry in which a particle accelerator is used to dissociate molecules and separate the isotope of interest. With these two values, from the AMS technique and the low level counting, the decay constant of 60Fe can be calculated (equation 4), and the half-life then follows from equation 2; however, instead of the 60Co values, the 60Fe values are placed in the formula.
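Numerically, the two measured quantities combine as follows. The A and N values below are placeholders, not the experiment's results:

```python
import math

# lambda = A / N (equation 4 rearranged), then t_1/2 = ln(2) / lambda
# (equation 2 with the 60Fe values). A and N are placeholder numbers.
SEC_PER_YEAR = 3.156e7

A = 0.012    # assumed 60Fe activity in decays per second
N = 1.4e12   # assumed number of 60Fe atoms from AMS

lam_fe = A / N                                # decay constant, per second
t_half = math.log(2) / lam_fe / SEC_PER_YEAR  # half-life in years
print(f"{t_half:.2e} years")                  # order of 10^6 years
```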
Our low level counting station consists of a detector and a lead castle. A germanium γ-ray detector is a solid-state detector, which uses a crystalline semiconductor material rather than an ionization chamber to detect and measure radiation. Germanium detectors must be cooled to liquid nitrogen temperatures to produce spectroscopic data. At higher temperatures, the electrons can easily cross the band gap in the crystal and reach the conduction band, where they are free to respond to the electric field. The system would then produce too much electrical noise to be useful as a spectrometer. Cooling to liquid nitrogen temperatures reduces the thermal excitation of valence electrons so that only a gamma-ray interaction can give an electron the energy needed to cross the band gap and reach the conduction band.
Since the detector cannot eliminate external or background radiation from the surroundings, a lead housing is built around the detector for improved results.
The lead castle was created in the summer of 2010. From then to the summer of 2011 there have been multiple runs; I have analyzed runs 5 and 6. Each run takes about 14 days: there are two background runs, each taking 24 hours, then four 60Co standards, each taking 30 minutes, and then a 12-day run on a 60Fe sample. The 60Co standards are used for calibration and efficiency. A High-Purity Germanium (HPGe) detector measures the amount of activity in the 60Co sample. The detector runs on software that can analyze the data and the background radiation in the running room.

My data analysis focused on run 5, which occurred April 21, 2011, and run 6, which occurred May 28, 2011. The difference between run 5 and run 6 is that a different detector was used: in run 5 the efficiency of the detector was 1%, and in run 6 it was 3%. In each run I was looking at 6 different peaks. Peak 1 was 59Fe with an energy of 1099 keV, peak 2 was 60Co with an energy of 1173 keV, peak 3 was 59Fe at a different energy of about 1291 keV, peak 4 was 60Co at a different energy, 1332 keV, peak 5 was 40K with an energy of 1465 keV, and peak 6 was 208Tl with an energy of 2615 keV. My main concern was the net count of the 59Fe and 60Co peaks, because this was the activity that needed to be measured. The isotopes 40K and 208Tl appear in the data analysis from background radiation; they are present because of the bricks and concrete of the building. To make this background radiation as small as possible, a lead brick house was built around the detector. Lead bricks are used to block radiation because they are high in both density and atomic number.
The graphs above represent the number of counts per day, for run 6, at both energies of 59Fe. Figure 2 shows 59Fe at 1099 keV over a period of 12 days, and Figure 3 shows 59Fe at 1291 keV over the same 12 days.
Figure 4, as shown above, shows a linear fit for the 1099 keV peak of 59Fe for run 6. The values on the y-axis of this graph are the natural log of the net counts: a straight line is easier to fit than an exponential curve, and its slope gives the decay constant, a value which will always be the same no matter how much material is present. Figure 5 is an equivalent fit for the peak at 1291 keV. The theoretical value will always be higher than the value actually obtained from the experiment because of the geometry of the detector. To take this into account, the efficiency of the detector was calculated, and it came out to be about 3%.
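The log-linear fit used for Figures 4 and 5 can be sketched in a few lines. The daily counts below are synthetic, generated from an assumed decay constant; the real analysis fit the measured net counts:

```python
import math

# Fit ln(net counts) vs. day with a least-squares straight line; the slope
# estimates -lambda, independent of how much material is present.
# The counts below are synthetic, not measured data.
lam_true = 0.0156                    # assumed decay constant per day
days = list(range(12))               # 12-day run, one point per day
log_counts = [math.log(1000.0 * math.exp(-lam_true * d)) for d in days]

n = len(days)
sx = sum(days)
sy = sum(log_counts)
sxy = sum(d * y for d, y in zip(days, log_counts))
sxx = sum(d * d for d in days)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)

print(round(-slope, 4))              # recovers the assumed 0.0156 per day
```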