Nuclear Physics Laboratory Annual Report, University of Washington, Seattle, Washington (1998)



    INTRODUCTION

    The Nuclear Physics Laboratory at the University of Washington in Seattle pursues a broad program of nuclear physics. These activities are conducted locally and at remote sites. The current programs include 'in-house' research using the local tandem Van de Graaff and superconducting linac accelerators; non-accelerator research in solar neutrino physics at the Sudbury Neutrino Observatory in Canada and at SAGE in Russia, and in gravitation; as well as user-mode research at large accelerators and reactor facilities around the world.

    Significant progress has been made in the test of CVC and second class currents in the Mass-8 system. Data taking for the Mass-8 β-decay experiment has been finished, after accumulating almost as much data from 8B beta decay as from 8Li. The final data analysis is in progress. In the 4He(α,γ)8Be radiative capture measurement, we have made a precise γ-ray spectrum shape measurement using the long gas cell. In a shakedown run we obtained preliminary high-quality excitation function data with all three NaI spectrometers using a new short gas cell. In addition, the response of the three spectrometers in the new high-energy γ-ray setup has been measured to ±3% at Eγ = 15.1 MeV.

    Our experimental measurements of Giant Dipole Resonance decay in hot Sn compound nuclei formed in 18O + 100Mo collisions have been completed, and data analysis is underway, including effects of pre-equilibrium losses and bremsstrahlung. The important role of K-state equilibration in the calculation of the statistical-model fission width has been noted. Results of sample calculations are compared to experimental pre-scission neutron multiplicities. Both fusion-fission and fusion-evaporation have been explored for the 19F + 181Ta -> 200Pb system from Elab = 121 to 195 MeV. The results for light charged particles measured in coincidence with evaporation residues provide insight into the Fermi-gas level density parameter.
The residue and fission cross sections will be used to explore the importance of K-states in the competition between residue formation and fission. Preliminary investigations into the possible dependence of fission fragment anisotropies on the shape of target nuclei have been performed. An experiment to test the suggestion that anomalous fission fragment anisotropies for high-Z compound nuclei near the fusion barrier are due to quasifission has been performed: the evaporation residue and fission cross sections were measured at energies near the fusion barrier, and no evidence for suppression of the evaporation residue yields, as would be expected in a quasifission interpretation, was found. We have previously suggested an alternative explanation for the anomalous anisotropies in terms of slow equilibration of the projection of the angular momentum on the symmetry axis of deformed nuclei. Small but statistically significant differences have been found in the stopping power of small carbon clusters as compared to single carbon atoms at the same velocity. The design and construction of the target chamber and related apparatus for our planned 7Be(p,γ)8B experiment are roughly 80% complete. In collaboration with TRIUMF, our target development project has made 9Be targets which have achieved 60% metallic Be purity. This fabrication process is the same as will be used to make the necessary 7Be targets. The Russian-American Gallium Experiment (SAGE) has submitted an archive paper on the 51Cr neutrino source experiment. A large amount of solar neutrino data has been analyzed and is being prepared for publication. Construction of the Sudbury Neutrino Observatory (SNO) is complete and commissioning of the detector is underway. Water fill has commenced. The UW SNO group is currently heavily involved with the installation of the data acquisition and monitoring systems and with the overall detector commissioning process.
Work continues, both at UW and on site, on refinements to the UW-produced SNO data acquisition and monitoring systems. The Neutral-Current Detector (NCD) project has begun production of the counters which comprise the array. All parts and electronics are being assembled, and counters are being shipped to site. The remotely-operated vehicle to be used during installation of the counters into the acrylic vessel is being tested. With the steadily improving precision of data from currently operating solar neutrino experiments, we have updated our model-independent analysis, in which we originally showed that, if the experimental uncertainties are correct, a solar neutrino problem exists at the 95% confidence level that cannot be resolved even by scaling the individual neutrino fluxes arbitrarily. New SuperKamiokande data have reduced uncertainties, but also a reduced central value, and our current analysis yields approximately the previous conclusion. Our precision measurement of the electron-neutrino correlation in the 0+ → 0+ beta decay of 32Ar ran at ISOLDE in late summer 1997. Our instrument gave the highest resolution delayed-proton data ever achieved. The results set tight constraints on exotic beta decay processes that could arise from multiple Higgs doublets or leptoquarks. Final analysis of our results is nearly completed.


    A measurement of the parity-violating rotation of neutron spins in liquid helium was run at the NIST reactor. Although the statistics were not good enough to resolve the effect, our device produced the most precise measurement of a neutron spin rotation and demonstrated the power of the experimental design. A second-generation experiment will be mounted that should provide, in conjunction with existing parity-violating p + α data, the dominant weak isovector and isoscalar meson-nucleon-nucleon coupling constants. The emiT experiment had its first data run at the NIST Center for Neutron Research. Data were collected during five six-week reactor cycles starting in January and continuing into September of 1997. In general, the emiT detector performed well, and a total of roughly 15 million coincidence events were recorded. Analysis of these data is ongoing. The detector is undergoing hardware upgrades at NPL in preparation for a second run. The new Monte Carlo algorithm for simulating ultrarelativistic heavy-ion collisions, including high-order Bose-Einstein and Coulomb correlations, is now being used to produce a library of STAR-type events with various source sizes, multiplicities, and temperatures. A new HBT analysis program written in C++ is being developed for STAR. Analysis of NA49 interferometry results shows a surprisingly high phase-space occupancy in Pb+Pb systems. During the past year the URHI group has completed a major upgrade of an event-by-event analysis system based on scaled topological measures, which searches for deviations from equilibrium behavior in heavy-ion collisions possibly due to formation of a quark-gluon plasma. This system has been applied in a pilot program to 400k events of NA49 data and has resulted in the identification of one or more anomalous event classes at the few-permil level.
Work is underway to understand the nature of the observed anomalous behavior in the transverse-mass spectrum, and a full-scale analysis of more than 1.5M central Pb-Pb events is about to begin. With RHIC turn-on a little over a year away, efforts are intensifying to prepare the STAR solenoidal tracker experiment for first data. The URHI group has contributed to final cosmic-ray testing of the STAR TPC prior to its shipment to RHIC last November, and is playing a leading role in the design and implementation of the STAR offline physics analysis system within the RHIC computing facility. As always, we encourage outside applications for the use of our facilities. As a convenient reference for potential users, the table on the following page lists the vital statistics of our accelerators. For further information, please write or telephone Professor Derek W. Storm, Executive Director, Nuclear Physics Laboratory, University of Washington, Seattle, Washington, USA, 98195; (206) 543-4085 (e-mail: STORM@NPL.WASHINGTON.EDU). We close this introduction with a reminder that the articles in this report describe work in progress and are not to be regarded as publications or quoted without permission of the authors. In each article the names of the investigators have been listed alphabetically, with the primary author, to whom inquiries should be addressed, underlined. We thank Richard J. Seymour and Karin M. Hendrickson for their help in producing this report. Steve Elliott, Editor, sre@u.washington.edu, (206) 543-9522. Barbara Fulton, Assistant Editor.


    TANDEM VAN DE GRAAFF ACCELERATOR

    A High Voltage Engineering Corporation Model FN purchased in 1966 with NSF funds; operation funded primarily by the U.S. Department of Energy. See W.G. Weitkamp and F.H. Schmidt, "The University of Washington Three Stage Van de Graaff Accelerator," Nucl. Instrum. Meth. 122, 65 (1974).

    Some Available Energy Analyzed Beams

    Ion          Max. Current   Max. Energy   Ion Source
                 (particle µA)  (MeV)
    1H or 2H      50             18           DEIS or 860
    3He or 4He     2             27           Double Charge-Exchange Source
    3He or 4He    30              7.5         Tandem Terminal Source
    6Li or 7Li     1             36           860
    11B            5             54           860
    12C or 13C    10             63           860
    14N*           1             63           DEIS or 860
    16O or 18O    10             72           DEIS or 860
    F             10             72           DEIS or 860
    Ca*            0.5           99           860
    Ni             0.2           99           860
    I              0.01         108           860

    * Negative ion is the hydride, dihydride, or trihydride.

    Additional ion species available include the following: Mg, Al, Si, P, S, Cl, Fe, Cu, Ge, Se, Br and Ag. Less common isotopes are generated from enriched material.


    BOOSTER ACCELERATOR

    See "Status of and Operating Experience with the University of Washington Superconducting Booster Linac," D.W. Storm et al., Nucl. Instrum. Meth. A 287, 247 (1990). We give in the following table maximum beam energies and expected intensities for several representative ions.

    Available Energy Analyzed Beams

    Ion     Max. Current   Max. Practical
            (pµA)          Energy (MeV)
    p       >1              35
    d       >1              37
    He       0.5            65
    Li       0.3            94
    C        0.6           170
    N        0.03          198
    O        0.1           220
    Si       0.1           300
    35Cl     0.02          358
    40Ca     0.001         310
    Ni       0.001         395


    University of Washington, April 1998. Sponsored in part by the United States Department of Energy under Grant #DE-FG03-97ER41020/A000.

    This report was prepared as an account of work sponsored in part by the United States Government. Neither the United States nor the United States Department of Energy, nor any of their employees, makes any warranty, expressed or implied, or assumes any legal liability or responsibility for the accuracy, completeness or usefulness of any information, apparatus, product or process disclosed, or represents that its use would not infringe on privately-owned rights.

    Introduction
    Accelerator beams available
    (Much of this material is in PDF format. Download Adobe Acrobat Reader.)
    Detailed Table of Contents with links to each chapter

    Chapters:
    1. Fundamental Symmetries, Weak Interactions and Nuclear Astrophysics (322kB)
    2. Neutrino Physics (99kB)
    3. Nucleus-Nucleus Reactions (100kB)
    4. Ultra-Relativistic Heavy Ions (401kB)
    5. Atomic and Molecular Clusters (57kB)
    6. Electronics, Computing and Detector Infrastructure (60kB)
    7. Van de Graaff, Superconducting Booster and Ion Sources (53kB)
    8. Nuclear Physics Laboratory Personnel
    9. Degrees Granted Academic Year 1997-1998
    10. List of Publications from 1997-1998



    1.0 FUNDAMENTAL SYMMETRIES, WEAK INTERACTIONS AND NUCLEAR ASTROPHYSICS

    1.1 Beta-delayed alpha spectra from 8Li and 8B decays and the neutrino spectrum in 8B decay

    E.G. Adelberger, J.-M. Casandjian, H.E. Swanson and K.B. Swartz*

    SNO will measure the energy spectrum of 8B solar neutrinos reaching the earth. If neutrino oscillations occur, the spectrum will be distorted from its original shape, and the deviation will contain information on the neutrino mixing parameters. Bahcall et al.1 have pointed out that our ability to predict the undistorted shape of the 8B neutrinos is limited by our knowledge of the final-state continuum fed in 8B decay. Two kinds of data are useful here: the beta spectrum and the beta-delayed alpha spectrum in 8B decay. Bahcall et al. showed that existing delayed-alpha data were inconsistent and chose to use a single measurement of the beta spectrum in obtaining a 'standard' 8B spectrum. Because of the difficulty of making accurate beta spectrum shape measurements, we have chosen to make a careful remeasurement of the delayed alpha spectra in 8Li and 8B decays, paying special attention to the absolute calibration of the energy scale and to understanding the energy response of the detectors. The 8Li data were taken during the period covered in the previous annual report.2 During the last year we used the 'Mass 8' rotating target and movable catcher foil apparatus to implant 8B [produced by 6Li(3He,n)] into 10 µg/cm2 C catcher foils. The foils were viewed on opposite sides by a pair of Si telescopes consisting of 75 µm thick E counters followed by 500 µm thick veto detectors. The telescopes had small solid angles (∆Ω/4π = 2.2 × 10-3) to minimize summing of alphas with the associated beta particle. We could, without breaking vacuum, insert thin 148Gd and 241Am sources in front of the detectors. In addition, we could place thin Al sheets in front of the telescopes to eliminate the alphas and see only betas.
Finally, we could measure the thickness of the catcher foil by measuring the energy loss of 244Cm alphas passing through it, all without breaking vacuum. The detector telescopes were cooled to 0°C, and the electronics (except for the preamps) were mounted in a special temperature-controlled rack. In subsequent measurements, we used special jigs to move the sources in arcs centered on the Si detectors. By measuring the energy loss as a function of sec θ, the detector dead layer was determined. The energy loss in the α sources was found by rotating the sources about their centers so that energy losses in the detectors were constant while the effective source thickness varied as sec θ. Analysis of the 8B data is in progress.

* Physics Department, Yale University, New Haven, CT.
1 J.N. Bahcall et al., Phys. Rev. C 54, 411 (1996).
2 Nuclear Physics Laboratory Annual Report, University of Washington (1997) p. 1.
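The sec θ technique described above can be summarized in a worked form. Treating the dead layer as a thin uniform slab, the path length of an α entering at angle θ to the detector normal, and hence its energy loss, grows as sec θ. This is a sketch of the geometry assumed in the text; the symbols t (dead-layer thickness) and (dE/dx) are introduced here for illustration and are not values quoted in the report:

```latex
% Energy lost in a dead layer of thickness t at incidence angle theta:
\Delta E(\theta) = \left(\frac{dE}{dx}\right)_{\!\mathrm{dead}} t \sec\theta .
% Comparing the alpha peak position E(theta) at two angles then gives
t = \frac{E(\theta_1) - E(\theta_2)}
       {\left(\frac{dE}{dx}\right)_{\!\mathrm{dead}}\left(\sec\theta_2 - \sec\theta_1\right)} .
```

In practice a straight-line fit of peak position against sec θ yields the slope, and hence the dead-layer thickness, directly.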


    1.2 Positron-neutrino correlation in the 0+ → 0+ decay of 32Ar

    E.G. Adelberger, M. Beck, H. Bichsel, M.J.G. Borge,* A. García,† I. Martel-Bravo,# C. Ortiz,† H.E. Swanson, O. Tengblad* and the ISOLDE collaboration#

    We searched for possible scalar weak interactions by measuring precisely the e-ν correlation in the 0+ → 0+ β+ decay of 32Ar. In such processes, the decay rate has the form

    \[ \frac{d^2\omega}{d\Omega_e\, d\Omega_\nu} \propto 1 + a\,\frac{p}{E}\cos\theta_{e\nu} + b\,\frac{m}{E} , \tag{1} \]

    where E, p and m are the total energy, momentum and mass of the β particle. We assume that the Standard Model provides an exact description of the W exchange process and use our result to probe scalar interactions that could arise from scalar boson or leptoquark exchange. Then the e-ν correlation coefficient is

    \[ a = \frac{2 - |\tilde C_S|^2 - |\tilde C'_S|^2 + 2\,(Z\alpha m/p)\,\mathrm{Im}(\tilde C_S + \tilde C'_S)}{2 + |\tilde C_S|^2 + |\tilde C'_S|^2} , \tag{2} \]

    and the Fierz interference coefficient is

    \[ b = -2\sqrt{1 - (Z\alpha)^2}\;\frac{\mathrm{Re}[\tilde C_S + \tilde C'_S]}{2 + |\tilde C_S|^2 + |\tilde C'_S|^2} , \tag{3} \]

    which are functions of \(\tilde C_S\) and \(\tilde C'_S\); these are related to the Jackson, Treiman and Wyld coefficients by \(\tilde C_S = C_S/C_V\) and \(\tilde C'_S = C'_S/C_V\). The e-ν correlation must be inferred from the recoil momentum of the low-energy daughter nucleus. We extracted a from the lepton broadening of a narrow (Γ ≈ 15 eV) delayed proton group that follows the superallowed decay of 32Ar. In an experiment conducted last summer we implanted 32Ar and 33Ar beams from ISOLDE into a 22.7 µg/cm2 carbon foil inclined at 45° to the beam axis and detected the delayed protons in a pair of 9 mm × 9 mm PIN diode detectors. Beta summing effects were eliminated by placing the detection apparatus inside a 3.7 T superconducting solenoid, which prevented the betas from reaching the proton detectors but had little effect on the protons. The system, developed in Seattle, gave excellent resolution; the pulser peaks had full-widths at half-maximum of 2.98 and 3.27 keV.
Data were taken under 12 different conditions: with the normal to the stopper foil at 45°, 135°, 225° and 315° with respect to the beam axis, and for two different beam tunes. We continually alternated between 2 h long 32Ar runs and 5-15 min long 33Ar runs that provided energy calibrations for the 32Ar data. We computed intrinsic proton shapes for a = +1, b = 0 and a = -1, b = 0 using Monte Carlo routines that simulated the decays using the value QEC = 6086.9 ± 3.3 keV extracted from the masses of all 5 members of the A = 32 isospin quintet. The routines took into account the Breit-Wigner shape of the daughter state, and the mean energy losses of the delayed protons in the stopper foil and in the detector dead layer. The intrinsic shapes were then folded with a detector response function whose functional form reproduced 'first-principles' calculations of the response to protons as well as the measured response to 148Gd α's. We fitted our 12 pairs of delayed proton spectra by adjusting the normalizations of the a = +1 and a = -1 intrinsic shapes, the peak position, and the response function parameters. This procedure yielded 12 independent measurements of ã ≡ a/(1 + b⟨m/E⟩). The results were combined to yield a preliminary value, ã = 1.0027 ± 0.0050. The dependencies of ã on the exact values of QEC and Qp, on the energy calibration, and on the analytic form of the response function give an additional systematic error of about 0.005. The constraints on scalar couplings from our result are substantial improvements over previous work.

* Instituto de Estructura de la Materia, CSIC, Madrid, Spain.
† Department of Physics, University of Notre Dame, Notre Dame, IN.
# PPE Division, CERN, Geneva, Switzerland.
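As a consistency check on the quantity quoted above: the fit determines a combination of the coefficients in Eqs. (2) and (3), and in the Standard Model limit of vanishing scalar couplings it reduces to unity:

```latex
\tilde a \equiv \frac{a}{1 + b\,\langle m/E \rangle},
\qquad
\tilde C_S = \tilde C'_S = 0
\;\Longrightarrow\;
a = \frac{2}{2} = 1,\quad b = 0,\quad \tilde a = 1 .
```

The preliminary result ã = 1.0027 ± 0.0050 is thus consistent with a pure vector interaction.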


    Fig. 1.2-1. Fit of the 0+ → 0+ delayed proton peak from the spectrum taken at a stopper foil angle of 315 degrees. The pulser peak (divided by a factor of 50) shows the electronic resolution.


    1.3 Ionization spectra of 3 MeV protons and α particles in silicon

    E.G. Adelberger, M. Beck, H. Bichsel and H.E. Swanson

    We have continued our calculation of the response function for α particles and protons in silicon surface-barrier detectors for the 32Ar delayed-proton measurement of the e-ν correlation (see Sec. 1.2). The Monte Carlo model used is described in last year's annual report.1 We found a mistake in the nuclear straggling calculation of the response function, which invalidates last year's result. New measurements of the source and dead-layer thicknesses, partly with a new setup in which the source was rotated in an arc around the detector, yielded more accurate values for the Gd2O3 and Si thicknesses: 31 nm and 96 nm, respectively. Since the source distribution in the stopper foil is already included in the model of the 32Ar experiment, only an average straggling spectrum is used for the protons from the foil incident on the detector. The experimental lineshape for 3.182 MeV α particles is compared to the calculated response function in Fig. 1.3-1. Obviously, the calculation is not a good description of the measurement, presumably due to a non-uniform thickness of the α source and other effects (see below). Fig. 1.3-2 compares the calculated response function for protons with the lineshape deduced from a fit of the 32Ar IAS peak (see Sec. 1.2). Despite some minor deviations, the calculation and the measurement yield consistent lineshapes. Both calculated response functions underestimate the long tail, more so for α particles than for protons. We learn from this that there are contributions to the long tail that are not included in the model. These could be, e.g., small-angle scattering, backscattering or incomplete charge collection in the detector. An investigation of this is in progress. The calculation for α particles significantly underestimates the short tail.
This could be due to, besides the already mentioned uncertainty in the shape of the α source, a difference in the ratio r of nuclear stopping power to electronic stopping power for protons (r < 1) and α particles (r > 1) at small energies (of order 1 keV), leading to the breakdown of some approximations made in the Monte Carlo program for α particles at small energies. In order to investigate some of these problems, an analytic calculation of the nuclear straggling is in progress.

Fig. 1.3-1. Calculated and measured lineshape for 3.182 MeV α particles.
Fig. 1.3-2. Calculated (solid line) and experimental (dotted line, see Sec. 1.2) lineshape for 3.35 MeV protons.

1 Nuclear Physics Laboratory Annual Report, University of Washington (1997) p. 4.


    1.4 Beta decay of 40Ti and the efficiency of the ICARUS 40Ar neutrino detector

    E.G. Adelberger, R. Anne,* M. Bhattacharya,† C. Donzaud,# A. García,† S. Grévy,# D. Guillemaud-Mueller,# N.I. Kaloskamis,† M. Lewitowicz,* A.C. Mueller,# F. Pougheon,# M.G. Saint-Laurent,* O. Sorlin,# H.E. Swanson and W. Trinder*

    The large-volume liquid-argon detector ICARUS1 will have several advantages over existing solar neutrino detectors because it will detect neutral- and charged-current neutrino interactions in a very symmetrical way. Neutral-current interactions will be characterized by single-track e(ν,ν)e events, while charged-current 40Ar(νe,e-)40K interactions will produce multiple tracks because the Jπ = 1+ states fed in allowed neutrino capture emit several γ rays as they decay to the 40K Jπ = 4- ground state. Therefore, the multiplicity and angular distribution of the event will signal its neutral- or charged-current nature. The neutral-current efficiency of ICARUS can be accurately calculated using electro-weak theory. However, the charged-current efficiency depends on the matrix elements for neutrino-capture transitions on 40Ar to excited states of 40K. A recent shell-model calculation2 predicts a capture rate of 6.7 ± 2.5 SNU [1 SNU = 10-36 captures per target atom per second], where 2.2 SNU arises from the model-independent Fermi cross section and 4.5 SNU is expected from the model-dependent Gamow-Teller transition strengths, B(GT). An empirical calibration of the 40Ar(νe,e-) transition strengths is therefore essential. We made such a calibration by studying the β+ decay of 40Ti and used isospin symmetry to relate the 40Ar → 40K transitions to the strengths of the mirror 40Ti → 40Sc transitions. We produced 40Ti at GANIL by fragmenting an 82.6 MeV/u 50Cr beam on a 272.4 mg/cm2 nickel target. The 50Cr beam was produced in an ECR ion source using isotopically enriched feed material.
The momentum analysis of the reaction products was performed using the ALPHA3 spectrometer with a momentum acceptance of 0.6% around Bρ = 715.6 MeV/ec. Fragments of interest were then selected using the LISE3 spectrometer. A brief account of this work has been published.4 We found that the ICARUS 40Ar detector has an effective (total energy threshold Wβ = 5 MeV) absorption cross section for 8B solar neutrinos of 14.5(4) × 10-43 cm2; 73% of the total cross section arises from Gamow-Teller transitions that were neglected in early estimates of the ICARUS efficiency. A more refined analysis of the data, centered at the University of Notre Dame, is in progress and will be reported in a second publication.

* GANIL, BP 50-27, 14021 Caen Cedex, France.
† University of Notre Dame, Notre Dame, IN.
# Institut de Physique Nucléaire, 91406 Orsay Cedex, France.
1 ICARUS Collaboration, Proposal, ICARUS II, A second-generation proton decay experiment and neutrino observatory at Gran Sasso.
2 W.E. Ormand, P.M. Pizzochero, P.F. Bortignon and R.A. Broglia, Phys. Lett. B 345, 343 (1995).
3 M. Lewitowicz et al., Phys. Lett. B 332, 20 (1994).
4 W. Trinder et al., Phys. Lett. B 415, 211 (1997).


    1.5 Lineshape and efficiency determination for the 3 large NaI spectrometers

    J.F. Amsbaugh, M.P. Kelly, K.A. Snover, D.W. Storm and J.P.S. van Schagen

    Last year we reported on the construction of a new high-energy γ-ray detector setup,1 consisting of three large-volume NaI spectrometers. We are using this setup in a precision measurement (see Sec. 1.6) of the 4He(α,γ)8Be radiative capture reaction in order to extract the radiative isovector M1 and E2 decay widths to test CVC and search for second class currents in the Mass-8 system.2 Since the (α,γ) reaction emits γ rays with energies of typically 14 MeV, it is of the utmost importance to have a good knowledge both of the response function of each NaI spectrometer to γ rays in this energy range and of the product of the detector efficiency and solid angle, η∆Ω/4π. An elegant way of measuring these quantities simultaneously is with the 10B(3He,pγ)12C reaction at E(3He) = 4.1 MeV.3,4 By detecting the protons associated with the population of the 15.1 MeV excited state in 12C, γ rays emitted in the deexcitation of this state can be tagged. Since the 15.1 MeV state has a branching ratio of 88.2% to the ground state, most γ rays will be of this energy. By placing the NaI spectrometers at the zeroes of P2(cos θ) and measuring the protons at 0°, complications due to the p-γ angular correlation can be bypassed. This requires the 3He to be stopped before it reaches the proton detector, which was achieved by placing a stack consisting of a 7.1 mg/cm2 Ni foil and a 3.6 mg/cm2 Al foil in front of the Si counter. The protons were also measured in singles mode. The ratio of the proton-γ coincidence yield to the proton singles yield, corrected for the branching ratio and dead times, provides a direct measurement of η∆Ω/4π. We have measured the lineshape and the efficiency of each of the NaI spectrometers to ±3% using the above reaction, as well as with the 12C(p,γ)13N reaction,4 with the same accuracy.
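The tagged-γ efficiency measurement described above amounts to a simple ratio. Writing Np for the proton singles yield, Npγ for the proton-γ coincidence yield, B = 0.882 for the ground-state branching ratio of the 15.1 MeV state, and fp, fpγ for the respective live-time fractions (this notation is ours, introduced for illustration), one has:

```latex
\eta\,\frac{\Delta\Omega}{4\pi}
  = \frac{N_{p\gamma}/f_{p\gamma}}{B\,\left(N_{p}/f_{p}\right)} .
```

Because every tagged proton corresponds to a 12C nucleus in the 15.1 MeV state, the efficiency-solid-angle product follows directly, without knowledge of the reaction cross section.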
Currently work is underway to improve the precision of the 10B(3He,pγ)12C measurement by reducing the continuous background and by improving the resolution, in order to reduce the interference from a nearby proton singles group from the 12C(3He,p)14Ng.s. reaction. We plan to repeat this measurement when the final short gas cell measurements are carried out.

1 Nuclear Physics Laboratory Annual Report, University of Washington (1997) pp. 57-58.
2 L. De Braeckeleer et al., Phys. Rev. C 51, 2778 (1995).
3 R.E. Marrs et al., Phys. Rev. C 16, 61 (1977).
4 E.G. Adelberger et al., Phys. Rev. C 15, 484 (1975).


    1.6 First measurements of the 4He(α,γ)8Be reaction with the new NaI setup

    J.F. Amsbaugh, M.P. Kelly, K.A. Snover, D.W. Storm and J.P.S. van Schagen

    We have begun our Mark II high-precision measurements of the 4He(α,γ)8Be radiative capture reaction, with the purpose of determining the isovector M1 and E2 decay widths for a precision test of CVC and second class currents in the Mass-8 system (see De Braeckeleer et al.1 for our Mark I measurements). A new scattering chamber for monitoring the beam energy has been designed and built, and is installed in the beamline downstream of the last analyzing magnet. It contains two Si counters located at ±25° and a C scattering foil which can be remotely inserted into (and removed from) the beam in order to measure elastic scattering. A 228Th source is used periodically to monitor the detector energy calibration. In order to determine precisely the spectral distribution of γ rays from the decay of the isospin-mixed 16.6 MeV - 16.9 MeV doublet, we have made a measurement of the γ-ray spectrum using a newly designed long gas cell2 and the Seattle detector at 90°. This cell and its shielding were designed to suppress background due to beam scattering from the entrance and exit windows and from the gas. In order to correct properly for residual background due to the exit foil, runs were alternated with helium gas at 1.00 atm and hydrogen gas at 0.91 atm; this hydrogen pressure assures a similar beam energy loss in the gas. A clean spectrum was obtained for Eγ > 9 MeV. Fits using R-matrix calculations are underway to extract the final-state strength distribution. We have also made new excitation-curve measurements over the 16 MeV doublet using a short (3.5'' diameter) gas cell and all 3 NaI spectrometers, thus obtaining simultaneous excitation functions at 3 different angles.
In these measurements, we used anode cable clipping and short QDC integration gates to minimize pileup of γ rays produced in the cell windows, which we had identified in an earlier run as a major source of background. The resulting excitation curves are shown in Fig. 1.6-1 for the Seattle, Illinois3 and OSU4 detectors. Analysis of these results is currently underway. An optimized final series of measurements is planned in the coming months.

Fig. 1.6-1. Excitation curves (yield in arbitrary units versus Eα over 33.0-35.0 MeV) for the 4He(α,γ)8Be reaction measured with the Seattle, Illinois and OSU detectors. The magnitudes of the yields differ due to the detector efficiencies and the reaction angular distribution.

1 L.D. De Braeckeleer et al., Phys. Rev. C 51, 2778 (1995).
2 Nuclear Physics Laboratory Annual Report, University of Washington (1996) p. 56.
3 On long-term loan from the University of Illinois, Urbana, IL.
4 On long-term loan from the Ohio State University, Columbus, OH.


    1.7 The β-α angular correlations in the decays of 8Li and 8B

    M. Beck, E. Mohrmann, D.W. Storm, H.E. Swanson, J.P.S. van Schagen and D.I. Will

    The β-α angular correlation in the β-delayed α decays of 8B and 8Li can be used to test the Conservation of the Vector Current (CVC) and to search for Second Class Currents (SCC) (see Sec. 1.6). In the following, the β-α angular correlation is parameterized as

    \[ W(E_\beta, \theta_{\beta\alpha}) = a_0(E_\beta)\left[ 1 + a_1(E_\beta)\cos\theta_{\beta\alpha} + a_2(E_\beta)\cos^2\theta_{\beta\alpha} \right], \]

    with E_β the β energy and θ_βα the angle between the β and α particles. The angular correlation coefficients are the normalization a_0 and the kinematic coefficients a_1 and a_2; a_2 is the relevant coefficient for the CVC test and the SCC search. For details of the experiment see Refs. 1 and 2. We noticed in 1996 that pulse heights differed between detectors. On more careful monitoring of the pulse heights we saw that they decreased with time (most pronounced for the downstream detector; the detectors had a serial gas delivery system). Also, the detectors showed different behavior when the gas flow was varied. For these reasons we changed the gas-supply plumbing of the three counters from serial to parallel in January 1997. Since then the detectors have shown more stable pulse heights and less sensitivity to gas flow rate variation. We increased our statistics on the two angular correlations from 1.16 × 108 to 1.4 × 108 accepted events for 8Li with a one-week 8Li run in February, and from 1.6 × 107 to 8.1 × 107 for 8B with two four-week runs of 8B in March and November. This concludes our main data taking. The a-coefficients describing the β-α angular correlations from 8Li and 8B derived from all of our measurements from November 1995 to November 1997 are shown in Fig. 1.7-1. a_2 can be approximately described by a straight line, a_2(E_tot) = m × (E_tot - 0.511 MeV), in the main energy range 5-13 MeV.
The deviations from the straight line at small and large energies are most likely caused by the response function of the β counters, which is not yet included in the analysis of the angular correlation. The preliminary analysis yields m- = (3.24 ± 0.11) GeV-1 for 8Li and m+ = (-4.32 ± 0.12) GeV-1 for 8B, leading to δ/E = m- - m+ = (7.56 ± 0.17) GeV-1 and δMn/E = 7.09 ± 0.16 (all uncertainties are statistical only), compared to the previous measurements,3,4 which gave 7.0 ± 0.5 and 6.5 ± 0.2, respectively. The statistical uncertainty of our result is now smaller than those of the other experiments. The investigation of the systematic uncertainties is in progress.

Fig. 1.7-1. β-α angular correlations from 8Li and 8B.

1 Nuclear Physics Laboratory Annual Reports, University of Washington, 1993-1997.
2 M. Beck et al., The Mass-8 Experiment - Measuring the β-α angular correlations, Proceedings of the 6th Conference on the Intersections of Particle and Nuclear Physics, Big Sky, MT, USA, May 1997, ed. T.W. Donnelly, AIP Conference Proceedings 412, 416.
3 R.E. Tribble and G.T. Garvey, Phys. Rev. C 12, 967 (1975).
4 R.E. Mckeown, G.T. Garvey and C.A. Gagliardi, Phys. Rev. C 22, 738 (1980).
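The straight-line description of a_2 quoted above can be checked numerically. Writing the slopes for 8Li and 8B as m- and m+, the difference of slopes and its dimensionless form, scaled by the nucleon mass Mn ≈ 0.9396 GeV (value ours, for the arithmetic check), are:

```latex
m_{-} - m_{+} = 3.24 - (-4.32) = 7.56\ \mathrm{GeV}^{-1},
\qquad
(m_{-} - m_{+})\,M_n \approx 7.56 \times 0.9396 \approx 7.10 ,
```

consistent within rounding with the quoted dimensionless value 7.09 ± 0.16.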


1.8 Time reversal in neutron beta decay: The first run of emiT

M.C. Browne, H.P. Mumm, A.W. Myers, R.G.H. Robertson, T.D. Steiger, T.D. Van Wechel, J.F. Wilkerson and D.I. Will

The emiT experiment is a search for a violation of time-reversal (T) invariance in the beta decay of free neutrons. The experiment utilizes a beam of cold (<5 meV), polarized neutrons from the Center for Neutron Research at the National Institute of Standards and Technology (NIST) in Gaithersburg, MD. A sizable team of scientists from Los Alamos National Laboratory, NIST, the University of California at Berkeley/Lawrence Berkeley National Laboratory, the University of Michigan, the University of Notre Dame, and the University of Washington's Nuclear Physics Laboratory (NPL) has been assembled to perform this experiment.

emiT probes the T-odd triple correlation (between the neutron spin and the momenta of the neutrino and electron decay products) in the neutron beta-decay distribution. The coefficient of this correlation, D, is measured by detecting decay electrons in coincidence with recoil protons while controlling the neutron polarization. Technological advances in neutron polarization and an improved detector geometry should allow emiT to eventually attain a sensitivity to D of 3 × 10^-4. This level of sensitivity represents a factor-of-five improvement over previous neutron T tests, and may permit restrictions to be placed on several extensions to the Standard Model that allow values of D near 10^-3.

emiT is the first neutron T test to make use of a 'supermirror' neutron polarizer. As a result, emiT achieves a polarization of 95 ± 2%, as opposed to the 65-85% polarizations typical of previous experiments. The emiT detector consists of four plastic scintillator paddles for electron detection and four arrays of large-area PIN diodes to detect the protons.
The eight detector segments are arranged in an alternating octagonal array about the neutron beam so that each segment of one type lies at an angle of 135° relative to two segments of the other type. This geometry takes advantage of the fact that the electron-proton angular distribution is strongly peaked due to the disparate masses of the decay products. Compared to the 90° geometry used in previous experiments, this octagonal geometry results in an increase in signal rate equivalent to roughly a factor-of-three increase in neutron beam flux.

The emiT experiment was installed on the NG-6 beamline at NIST from November of 1996 until September of 1997. Roughly two months were spent carefully characterizing the neutron beam and performing the initial shakedown of the detector. Data were then collected during five six-week reactor cycles starting in January and continuing into September of 1997. In general the emiT detector performed well; a total of roughly 15 million coincidence events were recorded, and the maximum sustained coincidence rate observed was ~7 Hz. Analysis of these data is ongoing.

With the data collected thus far, emiT should have a statistical sensitivity to D of ~1 × 10^-3, roughly a factor of 2 better than the previous best measurement. However, the final error bar on this limit is likely to be dominated by systematic uncertainties. The emiT detector has been shipped to NPL, and solutions to problems which occurred during the first run are being aggressively sought. After hardware upgrades it is expected that a second run of emiT will occur during 1999.
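The T-odd observable described above is the standard triple-correlation term D σ_n · (p_e × p_ν)/(E_e E_ν) in the beta-decay distribution. A minimal numerical sketch of its kinematic factor (all vectors are illustrative choices, not data; electron and neutrino energies are approximated by |p|):

```python
import numpy as np

def triple_correlation(sigma_n, p_e, p_nu):
    """Kinematic factor of the D term: sigma_n . (p_e x p_nu) / (E_e * E_nu).

    Momenta are in arbitrary units; energies are approximated here by |p|
    (exact for the neutrino, an approximation for the electron).
    """
    E_e = np.linalg.norm(p_e)
    E_nu = np.linalg.norm(p_nu)
    return float(np.dot(sigma_n, np.cross(p_e, p_nu)) / (E_e * E_nu))

# Spin along z, electron along x, neutrino along y: maximal correlation, +1.
val = triple_correlation(np.array([0., 0., 1.]),
                         np.array([1., 0., 0.]),
                         np.array([0., 1., 0.]))
print(val)  # 1.0
```

The sign flips under reversal of the neutron spin, which is why the experiment compares coincidence rates for the two polarization states; in practice the unobserved neutrino momentum is inferred from the electron-proton coincidence kinematics.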


1.9 Target development for the planned 7Be(p,γ)8B experiment

E.G. Adelberger, J.-M. Casandjian, K.A. Snover, T.D. Steiger, H.E. Swanson and the TRIUMF collaboration*

We have recently completed a development program in which metallic 9Be targets (with trace 7Be) were fabricated at TRIUMF using the same procedure planned for the eventual fabrication of the high-activity 7Be targets to be used in the 7Be(p,γ)8B cross-section measurements planned in this laboratory (see Sec. 1.10). In our technique, Be is initially dissolved in HCl, followed by separation and purification chemistry. The last stages involve a two-step vacuum reduction/distillation process in which Be metal is deposited onto a solid Mo backing.

The development program has concentrated on producing pure and uniform Be depositions on suitable metallic backings. Diagnosis of impurities and nonuniformities in the Be deposition has been made primarily by 9Be(p,γ) narrow-resonance studies at NPL, and also by germanium-detector gamma spectroscopy during MeV proton bombardment at NPL and by low-energy heavy-ion sputtering spectroscopy in Vancouver, Canada.

The result is shown in Fig. 1.9-1, which displays the 9Be(p,γ1) resonance profile for a typical chemically-prepared target (right panel) compared to the profile measured for a pure metallic evaporated target (the evaporation technique is not feasible for a high-activity radioactive target). Also shown is the renormalized 9Be(p,γ0) yield (open boxes), which serves as a measure of the non-resonant background. A comparison of the resonance profiles demonstrates that the chemical target is highly uniform and has a contaminant/Be ratio of about 35%. This is a factor of 100 better than the target used by Filippone.1

Fig. 1.9-1. 9Be(p,γ) resonance profiles. Left panel: pure evaporated target; right panel: chemical target.

* A. Zyuzin, N. Bateman, L. Buchmann et al., TRIUMF, University of British Columbia, Vancouver, Canada.
1 B.W. Filippone et al., Phys. Rev. Lett. 50, 412 (1983); Phys. Rev. C 28, 2222 (1983).


1.10 Progress in development of apparatus and techniques for the 7Be(p,γ) experiment

E.G. Adelberger, J.-M. Casandjian, K.A. Snover, T.D. Steiger and H.E. Swanson

A dedicated experiment chamber is being designed and built for the upcoming measurement of the 7Be(p,γ)8B cross section. The pertinent features of the design are outlined below.

The target will be mounted on a flipper arm that will allow it to be transferred in a fraction of a second from an irradiation position in the beam to a counting position in front of a silicon alpha detector. The flipper arm features a low-mass construction, yet still allows the removable target to be water cooled to prevent evaporative losses of target material. A computer-controlled stepper motor will be used to drive the flipper arm.

The 7Be target will be a 10-40 mCi source of 478-keV γ rays, so radiation safety was a major concern in the design of the chamber. First, the pumping system is completely enclosed to prevent the accidental venting of any radioactive material. Sorption pumps will be used for roughing, and a cryopump will provide high vacuum. A liquid-nitrogen cold trap near the target will improve the local vacuum, prevent carbon build-up on the target, and collect any material which evaporates or sputters from the target. Second, a retractable tungsten-alloy shield is incorporated into the chamber. This shield will be remotely operated and will allow the target to be enclosed in shielding without breaking vacuum. It will provide roughly 6 cm of tungsten shielding, and should reduce the exposure rate to less than 2 mR/hr on contact. This will allow people to work safely around the chamber when the target is not in use. Finally, the connection between the chamber and the beamline was designed to be as small as possible (1" dia.).
This, combined with an interlocked pneumatic valve directly upstream of the chamber, will serve to contain any loose radioactive material in the event of a vacuum accident in the chamber.

The design and construction of the chamber are roughly 80% complete. It is expected that the system will be online and taking preliminary data during the summer of 1998.


1.11 Measurement of the PNC spin-rotation of cold neutrons in a liquid helium target

E.G. Adelberger, B.R. Heckel, D.M. Markoff,* S.D. Penn† and H.E. Swanson

The strength of the weak neutron-nucleus interaction is poorly known because of the absence of definitive experiments in odd-neutron systems. The n + α system is ideally suited for determining the strength of the PNC neutron-nucleus interaction.1 We are especially interested in the strength of the isovector coupling, f_π, which is sensitive to the neutral-current contribution to the weak NN interaction. Recent atomic PNC measurements of the anapole moment in 133Cs indicate2 a value for f_π that is larger than the constraints set by γ-ray circular-polarization measurements in 18F.

We measured the parity non-conserving spin-rotation of cold neutrons through a 46 cm liquid helium target and obtained

φ_PNC(n,α) = (8.0 ± 14 (stat) ± 2.2 (syst)) × 10^-7 radians/m   (1)

where the first error is statistical and the second error is systematic. This is the most sensitive spin-rotation measurement to date. Dmitriev et al. calculated the dependence of the neutron spin-rotation in the n + α system on the weak meson-exchange coupling constants and obtained3

φ_PNC(n,α) [rad/m] = −0.97 f_π − 0.32 h_ρ^0 + 0.11 h_ρ^1 − 0.22 (h_ω^0 − h_ω^1)   (2)

In addition, Lang et al. measured the longitudinal analyzing power of polarized protons on a helium target and determined the dependence of this observable on the meson-exchange coefficients.4 Our measurement provides the first opportunity to compare parity violation in two mirror isospin systems. We combined the results for the n + α system with those from the analog p + α system to obtain an expression for f_π that depends only weakly on the remaining coupling constants.
f_π = −[0.51 φ_PNC(n,α) + 1.47 A_L(p,α)] + 0.04 h_ρ^0 + 0.13 h_ρ^1 − 0.03 h_ω^0 + 0.21 h_ω^1   (3)

We substituted our value for φ_PNC, given in Equation 1, the measured analyzing power, A_L(p,α) = −(3.3 ± 0.9) × 10^-7, and the theoretical5 'best values' for the coupling coefficients h_ρ^0, h_ρ^1, h_ω^0, h_ω^1 to obtain

f_π = (−1.75 ± 10.5) × 10^-7   (4)

For one standard deviation, we predict the acceptable range to be

−34 < f_π < 23 in units of 3.8 × 10^-8   (5)

For comparison, the theoretical range is 0 < f_π < 30 in units of the sum-rule value 3.8 × 10^-8.

We have shown that the spin-rotation measurement in the n + α system can be used to extract information on f_π. Our current measurement, however, was limited by statistics, with a sensitivity of only 2.6 × 10^-6 radians per day of data accumulation. With improvements to the cold neutron beam line at the NIST reactor in Gaithersburg, an increase in the neutron transmission through the apparatus, and a decrease of the data-acquisition dead time, we expect to achieve a sensitivity of about 6 × 10^-7 radians per day. In a second round of measurements, we hope to reach an overall sensitivity of 10^-7 radians and put new constraints on the value of f_π.

* TUNL, North Carolina State University, Durham, NC.
† Syracuse University, Syracuse, NY.
1 B. Desplanques, Physics Reports (1998).
2 V.V. Flambaum and D.W. Murray, Phys. Rev. C 56, 1641 (1997).
3 V.F. Dmitriev et al., Phys. Lett. B 125, 1 (1983).
4 J. Lang et al., Phys. Rev. C 34, 1545 (1986).
5 B. Desplanques, J.F. Donoghue, and B.R. Holstein, Ann. Phys. (NY) 172, 100 (1986).


1.12 Development of a new rotating torsion balance

E.G. Adelberger, J.H. Gundlach, B.R. Heckel, B.P. Henry and H.E. Swanson

We are building a new rotating torsion balance to improve our sensitivity to possible equivalence-principle-violating interactions with ranges from 1 m to infinity. The attractor masses include the local topography, the entire earth, the sun, the galaxy (including the galactic dark matter) and the entire universe (defined by the rest frame of the cosmic microwave background). We plan to obtain sufficient precision to test for cosmological scalar fields, which should violate the Equivalence Principle at the level of 10^-3 or more. In common with our previous equivalence-principle tests,1 the instrument consists of a torsion balance mounted on a uniformly rotating turntable. The apparatus is located close to the west wall of the old cyclotron cave of our laboratory. A first test version of the apparatus is completed and in operation. It has been used to map out local gravity gradients, to test and improve the turntable, and to study the noise susceptibility and sensitivity of the setup.

We used the torsion balance with q21 and q31 pendulums to measure the most important gravity gradients at the site of the instrument. The measured Q21 gradient agreed within 20% with predictions based on local mass distributions. The gradients at the location of the pendulum are nulled by large gravitational-field-shaping masses. The appropriate Q21 gradient-compensator masses (approx. 1.0 ton) were manufactured from a hard Pb alloy and are installed on a sturdy frame that allows us to rotate them, translate them vertically, and center them horizontally. We will now determine the residual Q21 field to fine-tune the compensator masses, and then null the Q21 and Q31 gradients.

The turntable rotation rate is controlled by a feedback loop that compares the output of a 36000-line shaft encoder to a constant frequency source.
The feedback signal is generated by a digital signal processor that digitizes the sinusoidal shaft-encoder signals. The phase difference between the encoders and a precision oscillator is used in a digital PID loop that controls the turntable's direct-drive eddy-current motor. The encoder phase deviation with respect to the clock is 1 nrad/√Hz. The dominant rotation-rate variations are due to non-linearities in the shaft encoder. The most disturbing non-linearity, introduced by eccentricity of the shaft encoder, would cause velocity changes at the signal frequency. These effects were effectively reduced by adding the signals from a second readhead mounted diametrically opposite the first readhead. We have used the torsion balance system to map out the shaft encoder and produce a harmonic correction function that linearizes the angle readout to < 10 nrad. This was done by rotating the turntable faster than the free torsional resonance of the pendulum: the pendulum remains inertial at the average speed of the turntable, and the shaft-encoder non-linearities were read by the autocollimator. A special pendulum that is immune to gravitational torques was developed for this purpose.

The torsion balance vacuum chamber is suspended from the turntable by a two-axis gimbal that uses flexures and provides a tilt-isolation factor of about 200. The vacuum chamber is pumped by a 20 l/s ion pump to 10^-7 torr. One µ-metal shield inside the vacuum chamber and one attached to the outside of the vacuum chamber are installed; a third, cylindrically symmetric, stationary µ-metal shield surrounds the balance. Constant-temperature water is circulated in a double-walled cylindrical temperature shield. All electrical signals, as well as the high voltage for the ion pump, are brought out through a servoed slip-ring assembly. The electronics rack and the data-taking station are located 10 m away from the apparatus.
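The two-readhead arrangement works because encoder eccentricity produces equal and opposite once-per-revolution angle errors at diametrically opposite points, so averaging the two readings cancels the first harmonic. A toy model of this cancellation (the error amplitude is an invented illustrative number):

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)  # true shaft angle
ecc = 1e-4   # once-per-rev eccentricity error amplitude (radians, illustrative)

# Each readhead sees the true angle plus a once-per-revolution error;
# the second head, mounted 180 degrees away, sees that error with opposite sign.
read1 = theta + ecc * np.sin(theta)
read2 = theta + ecc * np.sin(theta + np.pi)

avg = 0.5 * (read1 + read2)
residual = float(np.max(np.abs(avg - theta)))
print(residual)  # essentially zero: the first-harmonic error cancels
```

Higher harmonics of the encoder error are not cancelled by this trick, which is why the harmonic correction function mapped out with the inertial pendulum is still needed to reach the < 10 nrad linearity quoted above.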
The statistical uncertainty is currently < 5 nrad for one day of data taking. The first version of a highly symmetric 8-test-body aluminum pendulum was fabricated. A set of Al and Ti test bodies, held by small Ti screws in conical seats on the pendulum, was manufactured.

1 Y. Su et al., Phys. Rev. D 50, 3614 (1994).


1.13 Progress with the measurement of Newton's constant G

E.G. Adelberger, J.H. Gundlach, B.R. Heckel and H.E. Swanson

Challenged by the controversial situation of widely disagreeing recent measurements of the gravitational constant, we have invented a measurement technique that eliminates the leading systematic errors of previous torsion-balance measurements. The method is based on a rotating torsion balance with a vertical flat pendulum operated in acceleration feedback. Details are outlined in previous annual reports and in Phys. Rev. D 54, R1256 (1996). We have been awarded funding from NIST to begin construction of the apparatus.

The vacuum chamber and the autocollimator are currently being manufactured. The optical readout scheme uses 4 reflections off the front and back sides of the pendulum plate to amplify the angle optically. The scheme is insensitive to small tilts of the pendulum plate, so corner reflectors are not necessary. A small intentional tilt of one stationary mirror prevents returned light from re-entering the diode laser.

The design of the attractor-mass turntable is nearing completion. For the first round of measurements we will use eight 12.5-cm-diameter brass spheres, while for the final measurement we will use tungsten spheres. The radial distance from the torsion fiber to the center of the attractor spheres is 16.5 cm. The torsion pendulum will be mounted on a custom air-bearing turntable, for which we are currently negotiating with commercial companies.

The apparatus will be located on top of the old cyclotron magnet in the center of the circular cyclotron cave. This location maximizes the distance to masses that could introduce gravitational noise.


1.14 Progress and results of the Rot-Wash torsion balance

E.G. Adelberger, S. Conner, J.H. Gundlach, B.R. Heckel, C.D. Hoyle, G.L. Smith and H.E. Swanson

Since our last report and publication in Physical Review Letters,1 we have made several improvements to our rotating-attractor torsion balance.

We have improved the angle-detection system. Our most recent autocollimator angle-detection system utilizes a polarizing beamsplitter and a quarter waveplate to optically isolate the laser-diode light source and return as much light as possible to the position-sensitive photodetector. The system was designed with the assumption that the gold mirrors were perfect conductors, which is not a good assumption at the frequency of the laser's light: in the original design about 30% of the light was returned to the laser. We were able to rotate the beamsplitter and quarter waveplate about the beam axis and successfully isolate the laser diode. This improved the noise performance of the system from 0.6 nrad/day to 0.4 nrad/day.

One of the strengths of this apparatus has been our ability to measure the gravitational imperfections of the pendulum and attractor. We measure the imperfections of the pendulum by rotating sections of the attractor by 180 degrees about the turntable axis to generate large gravity-gradient fields. After these measurements we would then reassemble the attractor into its 'normal' configuration. However, the stray gravity fields of the attractor were often changed during this operation. While the attractor was never changed during an equivalence-principle test, our ability to make the attractor more gravitationally perfect was hindered by these changes. We installed a belt/capstan system that rotates the 1.5-ton load more smoothly and reproducibly. Additionally, we have developed an improved air-temperature control system.
This new system keeps all fan motors out of the temperature-controlled air, and allows for better coupling between the water bath (the reference temperature) and the air circulating about the apparatus.

We are continuing to collect equivalence-principle data and expect to publish a complete description of our experiment with new results in the coming year. Our current preliminary results yield a differential acceleration of Be-Cu of (2.1 ± 3.5) × 10^-13 cm/s^2. For comparison, it would take 9 days at this acceleration for an object's speed to match that of the continental drift in the east Pacific (8.8 cm/yr).

1 J.H. Gundlach et al., Phys. Rev. Lett. 78, 2523 (1997).
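The continental-drift comparison is simple kinematics, v = a·t. Taking the 1σ acceleration uncertainty of 3.5 × 10^-13 cm/s^2 reproduces the quoted ~9 days (a back-of-envelope check, not a calculation from the paper):

```python
# Speed of east-Pacific continental drift, converted to cm/s
drift_cm_per_s = 8.8 / (365.25 * 24 * 3600)   # 8.8 cm/yr

a = 3.5e-13                                    # cm/s^2, the 1-sigma uncertainty
t_days = drift_cm_per_s / a / 86400            # time for v = a*t to reach drift speed
print(round(t_days, 1))                        # ~9 days
```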


1.15 Test of the Strong Equivalence Principle: does gravitational binding energy gravitate?

E.G. Adelberger, S. Baessler, J.H. Gundlach, B.R. Heckel, B.P. Henry, C.D. Hoyle, R. O'Neill, A. Sharp, G.L. Smith and H.E. Swanson

Einstein's Strong Equivalence Principle (SEP) requires that the gravitational and inertial mass of all bodies be identical, including the amount of mass due to gravitational self-interaction. We have tested the principle for the contributions of the strong, electromagnetic and weak interactions,1 but the gravitational binding energy (GBE) of lab-sized test bodies is so small (m_GBE/m ~ 10^-42) that their gravitational properties cannot be tested in the lab. Nordtvedt2 proposed a sensitive way to test the gravitational properties of GBE. Lunar Laser Ranging (LLR) determines the orbit of the moon (m_GBE/m ~ 2 × 10^-11) around the earth (m_GBE/m ~ 4 × 10^-10) with high precision. A violation of the SEP would lead to a differential acceleration Δa/a of the earth and of the moon toward the sun, which would polarize the moon's orbit. Recently, Williams et al.3 completed such a study; their result was Δa/a = (3.2 ± 4.6) × 10^-13.

Since the earth and the moon differ in composition as well as in the contribution of GBE, a violation of the equivalence principle due to composition could mask the effect due to GBE. This can be excluded by an experiment that is being performed with the Eöt-Wash apparatus at the NPL. In effect, the apparatus consists of a torsion pendulum containing small models of the earth's core and the moon. The 'earth core' test bodies consist of a demagnetized stainless-steel alloy; the 'moon' bodies consist of Mg + SiO2. The test-body elemental compositions closely match those of the earth's core and of the moon or earth's mantle, respectively. The pendulum, its suspension, and an autocollimator that measures the pendulum twist rotate continuously on a turntable.
If the SEP were violated, the sun would exert a differential acceleration on the test bodies along the direction towards the sun; hence the equilibrium position of the pendulum would change periodically during its rotation. The Eöt-Wash apparatus was used in the past for similar investigations. The latest experiment with a comparable pendulum1 had a precision of Δa/a = 10^-11 per day.

We have improved the Eöt-Wash Mark II apparatus used by Su et al.1 in order to obtain a result that is sensitive enough to take full advantage of the precision of the LLR data. We used an ion pump to reduce the gas pressure around the pendulum from ~1 mbar to 10^-6 mbar, which decreases the torsional noise. A new turntable controller holds the turntable speed more constant. Our statistical accuracy is now Δa/a = 3 × 10^-12 per day. Systematic effects are negligible, because they are lab-fixed while our signal is correlated with the sun. We are confident of reaching our goal in spring 1998. Six months later this test will be repeated to exclude effects of the orientation of the solar system in the galaxy.

One of us (S.B.) thanks the Alexander v. Humboldt-Stiftung for financial support.

1 Y. Su, B.R. Heckel, J.H. Gundlach, M. Harris, G.L. Smith and H.E. Swanson, Phys. Rev. D 50, 3614 (1994).
2 K. Nordtvedt, Phys. Rev. D 37, 1070 (1988).
3 J.G. Williams, X.X. Newhall and J.O. Dickey, Phys. Rev. D 53, 6730 (1996).
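Under the usual Nordtvedt parameterization, a pure-GBE violation would give Δa/a = η · Δ(m_GBE/m), so the GBE fractions and the LLR result quoted above translate directly into a bound on the Nordtvedt parameter η (a sketch; this simple ratio deliberately ignores composition effects, which is exactly what the laboratory test described above is designed to disentangle):

```python
# GBE mass fractions quoted in the text
frac_earth = 4e-10     # m_GBE/m for the earth
frac_moon = 2e-11      # m_GBE/m for the moon

# LLR result of Williams et al.: delta_a/a = (3.2 +/- 4.6) x 10^-13
da_over_a, err = 3.2e-13, 4.6e-13

delta_frac = frac_earth - frac_moon
eta = da_over_a / delta_frac
eta_err = err / delta_frac
print(f"eta = {eta:.1e} +/- {eta_err:.1e}")   # eta = 8.4e-04 +/- 1.2e-03
```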


2.0 NEUTRINO PHYSICS

2.1 The neutral current detector project at SNO

M.C. Browne, T.H. Burritt, P.J. Doe, C.A. Duba, S.R. Elliott, J.E. Franklin, K.M. Heeger, A.W. Myers, A.W.P. Poon, R.G.H. Robertson, M.W.E. Smith, T.D. Steiger, T.D. Van Wechel and J.F. Wilkerson

SNO will detect Cerenkov light emitted from electrons or positrons produced by charged-current neutrino interactions. This reaction will provide a measure of the flux of electron neutrinos from the sun. Neutrinos of any flavor can produce free neutrons in the heavy water by neutral-current interactions; thus a measurement of the neutron production is a measurement of the total flux of neutrinos from the sun. Since solar burning produces only electron neutrinos, a comparison of the total neutrino flux to the electron neutrino flux could provide strong evidence for neutrino oscillations and therefore neutrino mass.

The neutral current detectors (NCD's) are He-3 filled proportional counters designed to detect such neutrons. These NCD's are made by chemical vapor deposition (CVD) on a mandrel to form Ni tubing and endcap components. A quartz tube forms the high-voltage and signal feedthrough to a Cu anode wire. We have received about 400 2-meter CVD Ni tubes from Mirotech in Toronto, Canada; the remaining 135 tubes should arrive by summer 1998.

Many of these tubes contained fragments of Al which originated from the deposition mandrel. As Al typically has a high contamination of U and Th, these tubes had an unacceptably high level of radioactivity. However, studies of the depth profile of this Al indicated that it resided within approximately 13 microns of the surface, so by etching away 26 microns of material the radioactivity could be removed. It was found, however, that a simple etch in nitric acid resulted in radioactive ions originating from the Al contamination plating out on the surface of the tubes.
An electropolishing technique, which keeps the radioactivity in solution, has been implemented in tandem with a final acid etch to produce tubes of sufficient cleanliness.

The various parts needed to assemble endcaps are being fabricated, and most are completed. As they are produced, they are shipped to IJ Research in Santa Ana, California, for assembly. As of February 1998, we have received 300 finished endcaps from IJ Research. The quartz tube has broken in approximately 5% of these parts. As a result, we commissioned a finite-element analysis of the stresses from Silverado Software and Consulting in Huntington Beach, California. The conclusion was that the stresses were well within the limits of the quartz strength. We have begun an investigation of the possibility that stress corrosion, a process not simulated in the finite-element analysis, is the cause. Long-term, high-pressure underwater testing of the wet-end connector of the cable indicated the seals were not sufficient; we are presently testing a redesign of the connectors.

The NCD's must have very little radioactivity, as they will reside in the sensitive inner region of the SNO detector. All parts which comprise the NCD's are being radioassayed to verify their cleanliness. We have assayed samples of almost all materials to be used in the array and all small fabricated parts. This radioassay program is nearing completion. The results indicate that the added photodisintegration background in SNO due to the NCD's will be less than that due to the impurity of the heavy water.

Tests of the remotely operated vehicle to be used in the installation of the counters into SNO are beginning. The initial plans for the deployment hardware are in place and we are ready to begin the engineering design. The design, prototyping and testing of the anchor assemblies for the counter strings are complete, and fabrication of the parts is beginning.
The electronics to test counters as they arrive underground at SNO have been installed and tested in the electronics corridor. Although we have had delays while researching how to reduce the surface activity of the tubes, we have successfully built and operated 25 detectors. These have included spare counters for the array, counters to verify the radiopurity, and counters for underwater pressure testing. Our present production schedule will have all detectors underground in Sudbury by late 1998, ready for deployment after a period of cooldown during which cosmogenic 56Co decays away.


2.2 The SNO data acquisition system

Q.R. Ahmad, J.C. Beck,* Y. Chan,† C.A. Duba, P. Harvey,# K.M. Heeger, P.T. Keener,$ J.R. Klein,$ M.A. Howe, P. Green,% F. McGirt,& C. Okada,† R. Meijer Drees,‡ A.W. Myers, P. Thornewell, T.D. Van Wechel, P. Wittich$ and J.F. Wilkerson

Our group is responsible for providing the data acquisition (DAQ) system which reads out the signals from the SNO detector. This DAQ system is up and running and is being used in the detector commissioning process that is currently underway. The electronics and DAQ system are designed to handle modest background rates in excess of 1 kHz and burst rates in excess of 1 MHz, and to have essentially no deadtime in case of a galactic supernova. The DAQ system is written in an object-oriented programming language (C++), supports the VME bus, and utilizes a VME embedded processor for continuous readout of the detector. Since SNO will be required to acquire data continuously, a strong emphasis has been placed on developing code that is as reliable and robust as possible.

The DAQ system supports the readout of the 9557 individual channels via 19 custom 9U 'SNO Crates', each containing 16 32-channel front-end cards (FECs) which are coupled to PMT interface cards that are in turn connected to the individual PMT cables. In addition, each SNO Crate contains a trigger card and a custom controller card which connects the SNO Crate backplane to a 6U VME interface card located in a standard VME crate. The VME interface crate contains all SNO translator/controller cards and a custom master trigger card (MTC), as well as a VME Motorola MVME 167 embedded processor (eCPU). Commercial VME bus controllers are used to allow control and data transfer from the VME bus to the data acquisition computers. The user interface and primary control of the VME crate are achieved using a C++ acquisition program called SNO Hardware Acquisition Real-time Control, or SHaRC. This program runs on a PPC-based platform under Mac OS 8.0.
The VME eCPU, running C-based code, is used for continuous readout of all the front-end cards and the master trigger card. Event information is shipped by the eCPU to dual-port memory shared between one of the VME controllers and a SUN Ultra Sparc computer. A C-based Builder program on the SUN computer handles event building, while data recording to disk and tape, and shipment of the data stream to the surface, is controlled by a FORTRAN-based Recorder code that supports the required ZEBRA-based final output format. Overall coordination of these eCPU and SUN processes is controlled via the SHaRC master program. The SHaRC program also interfaces to a calibration computer which allows the insertion and control of calibration sources. The primary data stream and several ancillary SHaRC-based data streams are also sent to a general dispatcher program that runs on a surface DAQ computer. A variety of monitoring programs running on multiple computers and processes subscribe to this dispatched data, allowing near-time display of the data stream.

The UW group is playing the major role in providing the SNO data acquisition system. Since October 1997 we have provided a continuous and substantial DAQ presence on site in Sudbury to support both the shakedown of the final electronics system and the commissioning of the detector. We continue to add refinements to the system as we approach putting the detector into routine operation.

* Comforce, Redmond, WA.
† Lawrence Berkeley National Laboratory, Berkeley, CA.
# Queen's University, Kingston, Ontario, Canada.
$ University of Pennsylvania, Philadelphia, PA.
% University of Alberta, Edmonton, Alberta, Canada.
& Los Alamos National Laboratory, Los Alamos, NM.
‡ 8140 Lakefield Drive, Burnaby, BC, Canada.
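The channel count follows from the crate geometry described above. A quick consistency check of those numbers, together with a hypothetical (crate, card, channel) flat-index mapping of the kind such readout systems typically use (the actual SNO addressing scheme may differ):

```python
CRATES, CARDS, CHANNELS = 19, 16, 32   # 19 SNO Crates x 16 FECs x 32 channels

def flat_index(crate, card, channel):
    """Map (crate, card, channel) to a unique flat index (hypothetical scheme)."""
    assert 0 <= crate < CRATES and 0 <= card < CARDS and 0 <= channel < CHANNELS
    return (crate * CARDS + card) * CHANNELS + channel

capacity = CRATES * CARDS * CHANNELS
print(capacity)                  # 9728 slots: enough for 9557 instrumented channels
print(flat_index(18, 15, 31))    # highest address = 9727
```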


2.3 Overview and status of the SNO DAQ SHaRC software

Q.R. Ahmad, Y. Chan,* P. Harvey,† M.A. Howe, F. McGirt,# R. Meijer Drees,$ P. Thornewell and J.F. Wilkerson

The SNO Hardware Acquisition Real-time Control code (SHaRC) runs on Macintosh and PPC computers using C++ compiled with MetroWerks CodeWarrior. The object-oriented nature of C++ allows data acquisition hardware, crate controllers, and other hardware objects to be fully described and encapsulated into software objects which contain a complete interface to that particular piece of hardware. A large number of VME, CAMAC, NuBus and PCI based hardware modules are supported. The SHaRC code is based on a generic form developed by Frank McGirt and John Wilkerson; however, the current version has undergone extensive modifications here at NPL. It now has a highly intuitive user interface and is much more extensible.

In the original program, the job of collecting, processing, and storing data was done with task objects which were written specifically for each experiment to control the interactions between multiple hardware modules. In some cases these tasks also had self-contained plotting packages. As a result, these task objects were quite complex and were written by people who were intimately familiar with both the hardware and the underlying framework of the code. In an effort to simplify the writing of such tasks, the original program was converted to a data-flow model with a new user interface that allows all objects to be visually represented on the screen as icons with input/output pads. In this model, each object is as simple and self-contained as possible and can either produce, modify, display, or store data packet objects. The flow of data is set up by drawing lines between input/output pads using the mouse. In this way, data analysis chains (i.e. tasks) can be built or modified in seconds simply by adding or removing objects and drawing or moving connection lines.
Data processing objects are simple and reusable. For example, an object that does histogramming or plotting only has to be written once and can then be connected into any analysis chain. The power of this code model is being fully exploited in the current SHaRC code. For example, the control of the 19 SNO crates, master trigger card, eCPU, and SUN-based processes is facilitated using a custom SNO run control module that allows one to display information for the entire detector, for a selected crate, or for a selected card. Furthermore, in any view users can perform standard functions that apply to the entire system, crate, or card, depending on the view. The code also supports automatic scheduling of tasks; the Hardware Wizard, a powerful tool that allows one to program a highly selectable set of hardware; and Run Master, a tool to start and stop runs in a well defined manner. SHaRC is now in routine and continuous use in the SNO experiment. * Lawrence Berkeley National Laboratory, Berkeley, CA. † Queen’s University, Kingston, Ontario, Canada. # Los Alamos National Laboratory, Los Alamos, NM. $ 8140 Lakefield Drive, Burnaby, BC, Canada.
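The data-flow model described above, in which simple producer, modifier, and consumer objects are wired together through input/output pads, can be illustrated with a minimal sketch. The class names (Node, Scale, Histogram) and the connect/emit interface are hypothetical; SHaRC itself is a C++ application whose connections are drawn with the mouse rather than written in code.

```python
class Node:
    """Minimal data-flow object with an output pad (illustrative only)."""
    def __init__(self):
        self.outputs = []
    def connect(self, other):
        self.outputs.append(other)   # like drawing a line between pads
        return other                 # returning it lets chains read left-to-right
    def emit(self, packet):
        for node in self.outputs:
            node.receive(packet)
    def receive(self, packet):       # default behavior: pass packets through
        self.emit(packet)

class Scale(Node):
    """A 'modify' object: rescales each packet and forwards it."""
    def __init__(self, factor):
        super().__init__()
        self.factor = factor
    def receive(self, packet):
        self.emit(packet * self.factor)

class Histogram(Node):
    """A 'store/display' object: accumulates counts per value."""
    def __init__(self):
        super().__init__()
        self.counts = {}
    def receive(self, packet):
        self.counts[packet] = self.counts.get(packet, 0) + 1

# Build an analysis chain: source -> scale-by-2 -> histogram.
source = Node()
hist = Histogram()
source.connect(Scale(2)).connect(hist)
for x in (1, 2, 2):
    source.emit(x)
```

Because every object shares the same pad interface, a chain can be rearranged by reconnecting objects without touching their internals, which is the property the text credits for making tasks buildable "in seconds."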


2.4 Monitoring the SNO DAQ detector data stream Q.R. Ahmad, Y. Chan,* C.A. Duba, M.A. Howe, P. Green,† C. Okada,* R. Meijer Drees,# P. Thornewell and J.F. Wilkerson A variety of monitoring tools have been developed to track the performance of the SNO detector. The core process is based on shipping ‘live’ data to a UNIX workstation running a TCP/IP socket manager called a ‘Dispatcher,’ from where it can be further distributed to analysis and visualization programs running on a number of different machines. The Dispatcher program, developed and in use at CERN, has been installed at both Sudbury and NPL and has been extended for use with Macintosh and PPC computers. Several monitoring, analysis and visualization programs have been written which can receive events from the Dispatcher and show updating plots as the data arrive. These programs include a UNIX-based HISTOSCOPE program known as SNOStream, an HTML interface for automatic updating of a Web page, a variety of 2-d and 3-d event displays, an electronics calibration program, and SNOMON, an extended version of the CERN analysis program PAW tailored for SNO. We have also implemented code to enable the SNO offline analysis package (SNOMAN) to connect to the Dispatcher. These tools allow one to look at information for all individual channels within a crate, on a crate-by-crate basis, or for the entire detector. They also allow one to monitor electronics settings, and the SHaRC-based tools additionally allow one to feed information back to the data acquisition program, allowing automated adjustment of electronics variables. The monitoring tools have been used extensively during the electronics shakedown phase and the detector commissioning process and are an integral part of the data acquisition system. Refinement of the tools is continuing as SNO approaches its continuous data collection phase. * Lawrence Berkeley National Laboratory, Berkeley, CA.
† University of Alberta, Edmonton, Alberta, Canada. # 8140 Lakefield Drive, Burnaby, BC, Canada.
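The Dispatcher's role, fanning a single event stream out to any number of subscribed monitoring programs, amounts to a publish/subscribe hub. Below is a minimal in-process sketch with an invented callback interface and toy event records; the real Dispatcher is a CERN-developed TCP/IP socket manager serving processes on separate machines.

```python
class Dispatcher:
    """Toy publish/subscribe hub: every dispatched event reaches every subscriber."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, callback):
        self.subscribers.append(callback)
    def dispatch(self, event):
        for callback in self.subscribers:
            callback(event)

hub = Dispatcher()
seen_by_display = []   # stand-in for an event display
nhit_values = []       # stand-in for a rate/NHIT monitor

hub.subscribe(seen_by_display.append)
hub.subscribe(lambda ev: nhit_values.append(ev['nhit']))

# Two toy events flow through the hub to both subscribers:
hub.dispatch({'id': 1, 'nhit': 12})
hub.dispatch({'id': 2, 'nhit': 30})
```

The key design point the text describes is that subscribers are decoupled: a new monitoring program can attach to the stream without any change to the data acquisition side.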


2.5 The SNO electronics production testing system Q.R. Ahmad, J.C. Beck,* M.A. Howe, D. McDonald,† R. Meijer Drees# and J.F. Wilkerson An automated, four-stage test system was developed and used in the production acceptance testing of the 350 SNO electronics front-end cards. A variation of the system was also developed to test the 350 PMT interface cards that both connect the PMT cables to the front-end cards and distribute the high-voltage bias to each PMT. The testing system allowed all components to be checked for proper functionality before being accepted for use in the detector. The test stand performed basic analog and digital tests, produced a log of all test results, and decided whether the card was working properly. If a card passed the initial ensemble of tests, four qualified daughter cards were inserted onto the front-end card and another automated routine was used to adjust the voltage settings and timing of the electronics to their final operational values. If a card failed, it was sent to another testing station along with the testing log for repairs. All hardware tests were performed using modules developed within the SNO Hardware Acquisition Real-time Control (SHaRC) code. Existing modules for low-level control of the front-end cards and the master trigger cards were used along with a specially developed test stand module written to address specific calibration and qualification requirements. The UserLand Frontier scripting language was used to control the flow of the testing and to generate the log files with test results. This allowed the test system to be run by non-experts, which facilitated the testing process. The code runs on either 680xx Macintosh computers or PPC systems. The system was used both at TRIUMF and at the SNO site in Sudbury to successfully test and qualify all front-end and PMT interface cards. It continues to be used for testing front-end cards as they are populated with their final daughter cards. * Comforce, Redmond, WA. † University of Pennsylvania, Philadelphia, PA. # 8140 Lakefield Drive, Burnaby, BC, Canada.
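The flow of the test stand, running an ordered battery of checks, logging every result, and passing or failing the card, can be sketched as below. The check names, card fields, and thresholds are invented for illustration; the real system drives SHaRC hardware modules under UserLand Frontier scripts.

```python
def run_acceptance(card, tests):
    """Run an ordered battery of tests on a card record.

    Returns (passed, log). A failing card stops the sequence immediately,
    since in the text failed cards go to a repair station with their log.
    """
    log = []
    for name, check in tests:
        ok = check(card)
        log.append((name, 'PASS' if ok else 'FAIL'))
        if not ok:
            return False, log
    return True, log

# Hypothetical checks against a fake card record:
tests = [
    ('power',    lambda c: c['vcc'] > 4.5),          # supply voltage sane
    ('pedestal', lambda c: abs(c['pedestal']) < 10), # ADC pedestal in range
]

ok, log = run_acceptance({'vcc': 5.0, 'pedestal': 3}, tests)
bad, bad_log = run_acceptance({'vcc': 4.0, 'pedestal': 3}, tests)
```

Encoding the sequence and pass/fail logic in data rather than in an operator's head is what let non-experts run the production testing.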


2.6 Status of the acrylic vessel for the Sudbury Neutrino Observatory P.J. Doe and the SNO collaborators The 40' diameter acrylic vessel which will hold the 1000 tonnes of D2O in the SNO detector is now complete. On November 21, 1998, the bond holding the 80" diameter south pole plug in the vessel was inspected and declared to be a success. Thus ended a remarkable 30-month construction effort. The vessel is a structure without precedent in the acrylic industry. It was constructed under extremely difficult conditions and is a testament to the dedication and cooperation of the teams from industry and the scientists, engineers and students of SNO. Detailed information on the dimensions of the sphere and photographic records of all bonds were collected during construction to ensure that the vessel meets specification. Final proof of its integrity was provided by a pneumatic test which was completed February 12, marking the last construction milestone for the vessel. The first stage of this test involved subjecting the vessel to a vacuum of 56 inches of water (10 percent higher than the maximum operating load the sphere is expected to experience at the intersection of the chimney and the shell). This subjects the vessel to buckling forces and is a measure of the sphericity of the vessel. The second stage of the test involved subjecting the vessel to an internal pressure of 28 inches of water (10 percent higher than the maximum internal pressure at the south pole under one possible operating condition). Internal pressure tests the bonds by putting them under tension. After the tests were complete, key bonds were re-examined by theodolite to confirm there were no visible changes. All 96 attachments to which the Neutral Current Detector strings will be anchored have been bonded to the inside of the vessel and surveyed to confirm their exact location.
The last act was to thoroughly clean the vessel and confirm that the dust levels were well below our limit of 0.1 units set to control radioactive backgrounds. The water fill of the cavity began on 14 April.


2.7 A compact 20 MeV gamma-ray source for energy calibration at SNO M.C. Browne, R.J. Komar,* N.P. Kherani,† H.B. Mak,± A.W.P. Poon, R.G.H. Robertson and C.E. Waltham* We have developed a compact 20-MeV gamma-ray source for energy calibration at the Sudbury Neutrino Observatory (SNO). The gamma rays are produced in the 3H(p,γ)4He (“pT”) radiative capture reaction. The design and the operational characteristics of the source can be found in our previous reports.1 Over the past year, we have performed extensive tests on this pT source. Using the source, we made a measurement of the gamma-ray angular distribution in the pT reaction at an unprecedentedly low beam energy of 29 keV. In this experiment, three 14.5-cm diameter by 17.5-cm long cylindrical barium fluoride (BaF2) crystals were used as the gamma-ray detectors. These detectors were placed at 45°, 90° and 135° to the beam direction. The 90° detector was placed at a distance of 35.6 cm from the target, and the source-detector separation was 25.4 cm for the other two detectors. Extensive Monte Carlo simulations were performed to calculate the response function of the BaF2 detectors and the attenuation of gamma rays by the pT source hardware. The gamma-ray count rate in each detector was extracted by fitting the data spectra to a combination of the measured cosmic background shape and the simulated response functions. The extracted count rate in each detector was then normalized to the rate in the 90° detector. This normalized count rate was subsequently fitted to the functional form W(θ) = A + B sin²θ. Our results are consistent with a picture of the pT reaction proceeding through E1 capture of p-wave protons at this energy, as evidenced by the predominantly sin²θ angular distribution. The ratio A/B is less than 0.35 at the 90% confidence level. * Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, Canada.
† Ontario Hydro Technologies, 800 Kipling Avenue, Toronto, Ontario, Canada M8Z 5S4. ± Department of Physics, Queen's University, Kingston, ON, Canada K7L 3N6. 1 Nuclear Physics Laboratory Annual Report, University of Washington (1995) p. 10; (1996) p. 22; (1997) p. 25.
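The fit described above, W(θ) = A + B sin²θ to normalized count rates at three angles, reduces to a two-parameter linear least-squares problem in the variable s = sin²θ. A minimal sketch with a hypothetical `fit_w` helper and synthetic rates (the numbers are illustrative, not the measured pT data):

```python
import math

def fit_w(thetas_deg, rates):
    """Least-squares fit of W(theta) = A + B*sin^2(theta).

    Linear regression of rate against s = sin^2(theta), done in closed form.
    """
    s = [math.sin(math.radians(t)) ** 2 for t in thetas_deg]
    n = len(s)
    s_mean = sum(s) / n
    y_mean = sum(rates) / n
    num = sum((si - s_mean) * (yi - y_mean) for si, yi in zip(s, rates))
    den = sum((si - s_mean) ** 2 for si in s)
    B = num / den
    A = y_mean - B * s_mean
    return A, B

# Synthetic normalized rates generated from A = 0.1, B = 1.0:
A, B = fit_w([45, 90, 135], [0.6, 1.1, 0.6])
```

For these synthetic rates the fit recovers A/B = 0.1, which would sit comfortably below the 0.35 bound quoted in the text; the real analysis of course propagates counting uncertainties as well.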


2.8 SAGE: The Russian-American Gallium Experiment S.R. Elliott and J.F. Wilkerson The Russian-American Gallium Experiment (SAGE) is a radiochemical solar neutrino flux measurement based on the inverse beta decay reaction 71Ga(ν,e−)71Ge. The threshold for this reaction is 233 keV, which permits sensitivity to the p-p neutrinos that comprise the dominant contribution to the solar neutrino flux. The target for the reaction is 55 tonnes of liquid gallium metal stored deep underground at the Baksan Neutrino Observatory in the Caucasus Mountains in Russia. About once a month, the neutrino-induced Ge is extracted from the Ga. 71Ge is unstable with respect to electron capture (t1/2 = 11.43 days) and, therefore, the amount of extracted Ge can be determined from its activity as measured in small proportional counters. The experiment has measured the solar neutrino flux in extractions between January 1990 and March 1997 with the result 70 ± 8 (statistical) ± 4 (systematic) SNU, which was reported at the solar neutrino conference at Santa Barbara, California in December 1997. This is well below the standard solar model expectation of 138 SNU. Additional extractions are being analyzed. The collaboration has used a 517-kCi 51Cr neutrino source to test the experimental operation. The energy of these neutrinos is similar to that of the solar 7Be neutrinos, which makes this an ideal check on the experimental procedure. The extractions for the Cr experiment took place in January and February of 1995 and the counting of the samples lasted until fall. We published this result this past year. The result, expressed as the ratio of the measured production rate to the expected production rate, is 0.95 ± 0.11.1 This indicates that the discrepancy between the solar model predictions and the SAGE flux measurement cannot be an experimental artifact.
In collaboration with the Institute for Nuclear Research, we submitted a grant request to CRDF.2 This two-year grant request was funded in 1997 and the funds are being used to support Russian scientists employed to continue solar neutrino observations. We will maintain a modest involvement during 1998. SAGE is a mature experiment whose operation has become routine. The University of Washington plays a major role in the statistical analysis of the data and in the determination of systematic uncertainties. We are active in the remaining analysis of the solar neutrino data. With the publication of the Cr data, the focus is now on the writing of archive papers summarizing the experimental procedure and its solar neutrino results. 1 J.N. Abdurashitov et al., Phys. Rev. Lett. 77, 4708 (1996). 2 Civilian Research and Development Foundation for the Independent States of the Former Soviet Union, Award #RP2-159, Proposal 3126.
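The step from a measured decay rate back to the number of extracted 71Ge atoms uses the decay law N = A/λ with λ = ln 2 / t1/2 and the 11.43-day half-life quoted above. A minimal sketch (the input rate is an illustrative number, not SAGE counting data):

```python
import math

T_HALF_DAYS = 11.43   # 71Ge electron-capture half-life, from the text

def atoms_from_activity(decays_per_day):
    """Number of 71Ge atoms implied by a measured decay rate: N = A / lambda."""
    decay_const = math.log(2) / T_HALF_DAYS   # lambda, per day
    return decays_per_day / decay_const

# An illustrative counter rate of 1 decay/day corresponds to roughly 16 atoms:
n_atoms = atoms_from_activity(1.0)
```

The tiny numbers this yields are the point: a radiochemical experiment counts a handful of atoms per extraction, which is why the proportional counting and its statistical analysis dominate the effort.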


2.9 Model-independent approach to the solar neutrino problem K.M. Heeger and R.G.H. Robertson With the steadily improving precision of data from currently operating solar neutrino experiments, we have updated our model-independent analysis,1 in which we originally showed that, if the experimental uncertainties are correct, then at the 95% confidence level a solar neutrino problem exists that cannot be resolved even by scaling the individual neutrino fluxes arbitrarily. New SuperKamiokande data have reduced uncertainties, but also a reduced central value, and our current analysis yields approximately the same conclusion as before. The data from the five independent experiments thus continue to suggest that there is a departure from the simple picture of massless, unmixed neutrinos given by the minimal standard model of particle physics. Currently, the Cl-Ar experiment gives 2.54 ± 0.14 ± 0.14 SNU, SAGE 69 +8.0/−7.7 +3.9/−4.1 SNU, Gallex 76.4 ± 6.3 +4.5/−4.9 SNU, and Super-Kamiokande (2.37 +0.06/−0.05 +0.09/−0.05) × 10⁶ 8B neutrinos cm⁻² s⁻¹. In our ansatz the experimental capture rates for the Cl-Ar and Ga-Ge experiments and the experimental 8B flux from Kamiokande are described as the sums of the products of the nuclear cross-sections and the pp, 7Be+CNO, and 8B neutrino fluxes from the Sun. The 7Be and CNO fluxes play a qualitatively interchangeable role in the existing experiments: the Cl-Ar and Ga experiments are sensitive to both and Kamiokande to neither. As a result, the condition that the sum of those fluxes should be non-negative is testable. The measured total solar luminosity is an additional constraint on the neutrino fluxes, and one that is model-dependent only at a primitive level. Propagating the uncertainties in the cross-sections and solving this set of equations, one finds that the 7Be+CNO flux is negative, and thus unphysical. In luminosity-constrained and unconstrained fits, that flux is negative at the 97% and 85% confidence levels, respectively.
Luminosity-constrained fits of the three types of experiment in pairs show that the anomaly of the 7Be+CNO flux being negative emerges from all combinations of pairs of experiments. The present situation does not appear to reflect a (single) experimental result being outside its estimated uncertainty. At least at this significance level even non-standard solar models cannot resolve the inconsistency of the current experimental data. The data can be well fit with neutrino oscillation solutions. Interestingly, since Ga and Cl-Ar have no neutral-current sensitivity but still yield a non-physical solution, a non-standard 8B spectrum shape such as might result from MSW small-angle enhancement is somewhat favored (with more strength at high energies and less at low). 1 K.M. Heeger and R.G.H. Robertson, Phys. Rev. Lett. 77, 3720 (1996).
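The mechanics of the ansatz, writing each measured rate as a cross-section-weighted sum of the pp, 7Be+CNO and 8B fluxes and then solving, can be illustrated in the simplest case: once the water-Cherenkov measurement fixes the 8B flux, the Cl-Ar rate equation can be inverted for the 7Be+CNO flux. All numbers and cross-section symbols below are invented placeholders chosen only to exhibit how a negative (unphysical) flux can emerge from the algebra; they are not the published rates or cross-sections.

```python
def be_cno_flux(r_cl, phi_b, sig_cl_b, sig_cl_becno):
    """Invert the Cl-Ar rate equation for the 7Be+CNO flux.

    Model:  R_Cl = sig_cl_becno * phi_becno + sig_cl_b * phi_b,
    with phi_b already fixed by the water-Cherenkov 8B measurement.
    All inputs are illustrative placeholders in arbitrary units.
    """
    return (r_cl - sig_cl_b * phi_b) / sig_cl_becno

# If the hypothetical 8B contribution alone already exceeds the total
# Cl-Ar rate, the inferred 7Be+CNO flux comes out negative, i.e. unphysical:
phi_becno = be_cno_flux(r_cl=2.5, phi_b=1.0, sig_cl_b=3.0, sig_cl_becno=0.5)
```

The full analysis in the text solves the over-determined system for all three fluxes with propagated cross-section uncertainties and an optional luminosity constraint; the toy inversion above only shows where the sign of the anomaly comes from.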


    3.0 NUCLEUS-NUCLEUS REACTIONS 3.1 The giant-dipole resonance in hot Sn nuclei M.P. Kelly, M. Kicinska-Habior,* J.P. Lestone,† J.F. Liang,± K.A. Snover, A.A. Sonzogni,# Z. Trznadel,* and J.P.S. van Schagen We have recently completed new measurements of 18O + 100Mo reactions, from E(18O) = 125 to 217 MeV bombarding energy, in order to better determine the width evolution of the hot GDR versus excitation energy in near Sn compound nuclei. In making these measurements, we used our new setup with three large NaI spectrometers described in the 1997 Annual Report.1 At these relatively high bombarding energies preequilibrium effects become increasingly important. To address this concern we first measured light charged particle emission and deduced the effect of preequilibrium energy and mass loss prior to compound nucleus decay. We find that approximately 20% of the full fusion excitation energy and several mass units are lost due to preequilibrium emission for bombarding energies as low as 11 MeV/nucleon.2 The heavy residues resulting from fusionlike (complete + incomplete) events were also measured in order to help determine the initial compound nucleus formation cross section. This quantity is necessary to extract the giant-dipole strengths. Using our array of three large NaI spectrometers along with a γ-ray multiplicity array, we have measured the γ-ray strength functions and angular distributions at five bombarding energies. The angular distributions permit a direct separation of the statistical GDR component from the bremsstrahlung due to the different rest frames for γ-ray emission. We therefore determine the bremsstrahlung yield underlying the GDR component without the uncertainty introduced by a bremsstrahlung extrapolation from higher energies. 
To analyze measured GDR data we perform a simultaneous fit of statistical emission summed with bremsstrahlung to both the measured γ-ray strength function and the a1(Eγ) coefficient determined from the angular distributions. In our analysis we account for the important dynamical effects of preequilibrium and bremsstrahlung in an effort to determine reliably the evolution of the GDR parameters and in particular the giant-dipole width versus E*. * Warsaw University, Warsaw, Poland. † Present address: Los Alamos National Laboratory, Los Alamos, NM. ± Oak Ridge National Laboratory, Oak Ridge, TN. # Argonne National Laboratory, Argonne, IL. 1 Nuclear Physics Laboratory Annual Report, University of Washington (1997) pp. 57-58. 2 M.P. Kelly et al., Phys. Rev. C 56, 3201 (1997).


    3.2 Anomalous fission fragment anisotropies: quasifission or slow K-equilibration? A.L. Caraley, J.P. Lestone,* A.A. Sonzogni† and R. Vandenbosch There has been a persistent puzzle about the behavior of experimental fission fragment anisotropies in heavy ion induced reactions near the Coulomb barrier. As the bombarding energy decreases the anisotropy starts to rise rather than to continue to decrease with decreasing initial angular momentum as expected from a transition state statistical model. Recently it has been proposed that the origin of this discrepancy is the competition between quasifission and fusion-fission for collisions with the tips of prolate deformed nuclei.3,4 A consequence of this suggestion is that nucleon emission leading to evaporation residues should be suppressed when quasifission is important. To test this idea we have measured the yield of the 4n evaporation residue for the 12C + 236U reaction at near-barrier energies where the anisotropy changes from normal to anomalous. We have measured the evaporation residue yield of 20-minute 244Cf by an activation technique. A thin Al foil is placed downstream to catch the recoiling residues. After a bombardment of about 40 minutes the catcher foil is rotated to a position in front of a surface barrier detector and the alpha activity is followed for several half-lives. The excitation function we have obtained is shown in Fig. 3.2-1. The full curve shows an excitation function calculated with the statistical model code PACE2,5 with a normalization based on scaling of the liquid drop model fission barrier to approximately reproduce the evaporation residue yield at the higher energies where the quasifission contribution is expected to be small. It is seen that the experimental evaporation yield at lower energies, where the anisotropy becomes anomalous, is consistent with fusion fission. 
Also shown is a curve making the assumption that all collisions corresponding to an angle between the beam axis and the target nucleus symmetry axis of less than 30 degrees lead to quasifission. At low beam energies most of the collisions are with the tips, due to the lower Coulomb barrier for such orientations. Hinde et al.3,4 suggested that for the 16O + 238U reaction the critical angle was 35 degrees. This assumption is inconsistent with our observed evaporation residue yields at low energies. The observation of the expected amount of evaporation residues for fusion reactions is consistent with formation of a compound nucleus with most of its degrees of freedom equilibrated, but with a lifetime too short for full equilibration of the K (projection of angular momentum on the nuclear symmetry axis) degree of freedom. Fig. 3.2-1. The ratio of the 4n channel evaporation residue yield to the fission cross section as a function of bombarding energy in the center of mass. The circles represent the experimental data and the full curve represents a standard statistical model calculation. The dashed curve represents the result expected when interactions with the tips of the nucleus result in quasifission. * Present address: Los Alamos National Laboratory, Los Alamos, NM. † Argonne National Laboratory, Argonne, IL. 3 D.J. Hinde et al., Phys. Rev. Lett. 74, 1295 (1995). 4 D.J. Hinde et al., Phys. Rev. C 53, 1290 (1996). 5 A. Gavron, Phys. Rev. C 21, 230 (1980).
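The 30-degree cut behind the dashed curve corresponds to a definite fraction of randomly oriented targets: for an isotropically oriented symmetry axis, the probability that it lies within an angle θc of the beam axis is 1 − cos θc (the solid-angle fraction of the corresponding cones). A small worked check of that geometric factor, with the function name invented here:

```python
import math

def tip_fraction(theta_c_deg):
    """Fraction of randomly oriented symmetry axes lying within theta_c
    of the beam axis: f = 1 - cos(theta_c)."""
    return 1.0 - math.cos(math.radians(theta_c_deg))

# The 30-degree assumption removes about 13% of all orientations:
f30 = tip_fraction(30.0)
```

This is the scale of the evaporation-residue suppression the quasifission hypothesis would predict at low energies, where tip collisions dominate, and it is this suppression that the measured 244Cf yields do not show.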


3.3 Why the standard methods of calculating fission rates are flawed at high spin A.L. Caraley, J.P. Lestone* and R. Vandenbosch The methods presently used to calculate fission rates fail to take correctly into account the rotational degrees of freedom of compound nuclei rotating in three dimensions. The equations used by others do not describe the full fission decay width, but rather the fission decay width for a system with fixed spin, K, about the symmetry/fission axis. The fission barrier heights, Bf, and the potential curvatures, ωeq and ωsp, should all be considered functions of K, and the fact that K is not a constant of the motion of the system needs to be taken into account before a correct expression for the fission decay width can be determined. This problem is easily overcome by labeling states by their orientation in space in addition to their shape and collective momentum/kinetic energy. By assuming axially symmetric shapes, the sum over all possible orientations in space can be obtained by summing over all possible K from K = −J to J, where J is the total spin and K is the projection of J onto the symmetry axis of the system. The Bohr-Wheeler fission decay width then becomes

ΓfBW = [ Σ_K P(K) ΓfBW(K) ] / [ Σ_K P(K) ],   (1)

where ΓfBW(K) is the Bohr-Wheeler decay width as a function of K,

ΓfBW(K) = (ħωeq(K)/2π) exp(−Bf(K)/T),   (2)

and P(K) is the probability that the system is in a given K state,

P(K) = (T/ħωeq(K)) exp(−Veq(K)/T).   (3)

Veq is the sum of the Coulomb, nuclear and rotational energies at the equilibrium position as a function of K. The latest version of the statistical model code, JOANNE4, calculates fission decay widths using Eq. (1). In Fig. 3.3-1 JOANNE4 calculations (solid lines) are compared to measured pre-scission neutron multiplicities, νpre. A parameter which controls the temperature dependence of the potential energy surfaces was adjusted such that the measured evaporation residue and fission cross sections are reproduced.
The JOANNE4 model calculations give a reasonable reproduction of the νpre data for the three O-induced reactions considered here. The dashed lines show ‘standard model’ calculations,6,7 which fail to reproduce the νpre data. From the JOANNE4 calculations presented here it is concluded that in O-induced fusion-fission reactions, with initial excitation energies <~ 80 MeV, the νpre data are consistent with the fission of fully equilibrated systems and that the collective motion in the fission degree of freedom is not necessarily strongly overdamped, in contradiction with the conclusions drawn by others. Many previously deduced properties of the viscosity of nuclear matter should be viewed with caution. The large volume of heavy-ion induced fission data measured over the past decade, with the aim of deducing the properties of nuclear viscosity, needs to be reanalyzed using the concepts discussed here. * Present address: Los Alamos National Laboratory, Los Alamos, NM. 6 D.J. Hinde et al., Nucl. Phys. A 452, 550 (1986). 7 D.J. Hofman et al., Phys. Rev. C 51, 2597 (1995). 3 H. Rossner et al., Phys. Rev. C 45, 719 (1992).
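The K-averaging of Eqs. (1)-(3) can be made concrete with a small numerical sketch. The barrier Bf(K) and equilibrium energy Veq(K) below are invented toy functions (rising quadratically with K), and ħωeq is set to a constant 1 MeV; JOANNE4's actual inputs are not reproduced here. The point is only the mechanics: each fixed-K Bohr-Wheeler width is weighted by the K-state probability P(K).

```python
import math

HBAR_OMEGA_EQ = 1.0   # MeV; toy curvature, not a fitted JOANNE4 value

def bf(K):
    """Toy K-dependent fission barrier (MeV): lowest at K = 0."""
    return 5.0 + 0.02 * K * K

def veq(K):
    """Toy K-dependent equilibrium energy Veq (MeV)."""
    return 0.01 * K * K

def gamma_bw(J, T):
    """Eq. (1): weighted average over K = -J..J of the fixed-K widths,
    with widths from Eq. (2) and weights P(K) from Eq. (3)."""
    num = den = 0.0
    for K in range(-J, J + 1):
        p = (T / HBAR_OMEGA_EQ) * math.exp(-veq(K) / T)              # Eq. (3)
        g = (HBAR_OMEGA_EQ / (2 * math.pi)) * math.exp(-bf(K) / T)   # Eq. (2)
        num += p * g
        den += p
    return num / den

width = gamma_bw(J=20, T=1.5)
width_k0 = (HBAR_OMEGA_EQ / (2 * math.pi)) * math.exp(-bf(0) / 1.5)
```

Because the toy barrier rises with |K|, the K-averaged width comes out smaller than the fixed-K width evaluated at K = 0, which is the qualitative effect K-state equilibration has on the statistical fission rate.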


Fig. 3.3-1. Pre-scission neutron multiplicities, νpre, as a function of the projectile energy for three O-induced reactions. The triangles and circles show the data of Ref. 1 and Ref. 3, respectively. The dashed lines show ‘standard model’ calculations.1,2 The solid lines show calculations of νpre obtained using JOANNE4 with no dynamical fission delay time.


3.4 Light-charged particles from fusion-evaporation in the 19F + 181Ta system A.L. Caraley, B.P. Henry and J.P. Lestone* Lately, our investigations of large-scale nuclear shape changes have focused on the 19F + 181Ta → 200Pb system. In this past year we have conducted a systematic exploration of both the fusion-fission and the fusion-evaporation channels. Light-charged particles (p’s, d’s and α particles) in coincidence with evaporation residues were measured at Elab = 121, 154 and 195 MeV with the aim of determining the Fermi-gas level density parameter. As described in an earlier report,8 we are investigating the conclusion made by Fabris et al.9 (based on α-particle results from the same system) that the level density parameter decreases dramatically from A/8.3 MeV⁻¹ at a thermal excitation energy of U = 20 MeV to A/12 MeV⁻¹ at U = 100 MeV. In our current experiment light-charged particles (LCP) were detected at θlab = 120° and 160°. Although the analysis is still in progress, the resulting proton and α-particle center-of-mass spectra appear identical to those that we have measured previously.1 More extensive simulations, to account for the efficiency of the deflector plate setup, will be necessary to determine the LCP multiplicities. The LCP angular distribution information will be used as an additional check of the statistical model simulations and as an aid in the evaluation of any pre-equilibrium particle emission contribution to the data. The greatly improved statistics of our current results enable detailed comparisons with various statistical model predictions. Preliminary calculations with JOANNE10 reproduce, with slight reductions in the optical model emission barrier heights, the α-particle spectra at Elab = 154 and 195 MeV using constant level density parameters of A/11 MeV⁻¹ and A/12 MeV⁻¹, respectively.
As suggested by recent theoretical discussions,11,12 an equally good description of such particle emission spectra can be made using a level density parameter that varies smoothly with excitation energy from ~A/8 MeV⁻¹ at U = 0 MeV to some minimum value, amin ~ A/(9-15???) MeV⁻¹, at the highest compound nucleus excitation energy. In such calculations, the range over which the continuously varying level density parameter needs to vary is much less than when fixed, excitation-energy-independent values are used at each bombarding energy. In fact, our previous results at Elab = 150 and 190 MeV, while reproduced with constant level density parameters similar to those needed at Elab = 154 and 195 MeV, were also consistent with calculations using a level density parameter with a modest dependence on excitation energy. Specifically, a linear decrease from A/8.1 MeV⁻¹ at U = 0 MeV to A/9.2 MeV⁻¹ at U = 100 MeV was sufficient to describe the observed spectra. In any case, the need for the strong excitation energy dependence of the Fermi-gas level density parameter as proposed by Fabris et al. is not indicated by any of our results. * Present address: Los Alamos National Laboratory, Los Alamos, NM. 8 Nuclear Physics Laboratory Annual Report, University of Washington (1996) p. 30. 9 D. Fabris et al., Phys. Rev. C 50, R1261 (1994). 10 J.P. Lestone, Nucl. Phys. A 559, 277 (1993). 11 S. Shlomo and J.B. Natowitz, Phys. Rev. C 44, 2878 (1991). 12 J.P. Lestone, Phys. Rev. C 52, 1118 (1995).
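The modest excitation-energy dependence quoted above, a fall from A/8.1 MeV⁻¹ at U = 0 to A/9.2 MeV⁻¹ at U = 100 MeV, is simple to write down. A minimal sketch, reading "linear decrease" as linear interpolation in a itself between those two endpoints (the function name is invented):

```python
def level_density_a(A, U):
    """Fermi-gas level-density parameter a(U) in MeV^-1, decreasing
    linearly from A/8.1 at U = 0 MeV to A/9.2 at U = 100 MeV."""
    a_low, a_high = A / 8.1, A / 9.2
    return a_low + (a_high - a_low) * U / 100.0

# For a 200Pb-like compound nucleus (A = 200) at mid-range excitation:
a_mid = level_density_a(200, 50.0)
```

The total spread over 0-100 MeV is only about 12%, which is the contrast the text draws against the much stronger A/8.3 to A/12 variation proposed by Fabris et al.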


3.5 19F + 181Ta: Fission fragment and evaporation residue measurements A.L. Caraley and J.P. Lestone* Angular distributions of evaporation residues and of fission fragments, along with fission fragment folding angle distributions, have been measured at Elab = 121, 135, 150, 164, 180, 188 and 195 MeV. As well as extending earlier such measurements by Hinde et al.,13 the fission and residue results provide supplementary information for the statistical model calculations used in the level density investigation. At all beam energies, the fission fragment angular distributions and the folding angle distributions are compatible with fission following complete fusion. At 121 MeV the fission cross section is in agreement with that measured previously by Hinde et al. In addition, the cross sections at the higher beam energies are consistent with expectations that the fission channel should comprise the majority of the fusion cross section in this energy regime for this system. The measured evaporation residue angular distributions are well reproduced by statistical-model-based simulations. The shapes of the distributions are dominated by target effects and are not particularly sensitive to the choice of level density parameter used in the simulations. A nominal value of A/11 MeV⁻¹ reproduces the measured residue angular distributions at all seven beam energies. The evaporation residue cross section measured at 121 MeV is also in agreement with that measured by Hinde et al. However, standard statistical model calculations (i.e., JOANNE,14 CASCADE15) fail to reproduce the yields at the higher beam energies. The measured cross sections remain at ~400 mb from 121 to 195 MeV, while the calculated values drop by approximately a factor of 2 over the same energy range. However, the overall shapes of the residue velocity distributions are consistent with statistical model predictions made assuming complete fusion.
Even at Elab = 195 MeV, the amount of any incomplete fusion contamination is estimated to be less than 5%. Thus, incomplete fusion is not considered to be responsible for the observed “excess” residue cross sections at any of the beam energies. Excess residue cross sections have been observed in only a few other systems: 16O + 208Pb, 32S + 184W, and 58Ni + 112Sn.16,17,18 Interpretations of these results have focused primarily on the role of nuclear viscosity in the decay of the compound nuclei formed in these reactions and have led to estimates of the magnitude of the nuclear viscosity coefficient, γ.19,20 However, the K-state model presented by Lestone [see Sec. 3.3] successfully reproduces the measured residue cross sections, as well as the pre-scission neutron multiplicities, for the 16O + 208Pb system without the need for any viscosity. A preliminary calculation, using this K-state model, for the 19F + 181Ta system is also successful in reproducing the observed residue cross sections. In the future, it would be interesting to conduct a systematic measurement of residue cross sections formed by many different reactions to explore further the role of K-states in the competition between residue formation and fission. * Present address: Los Alamos National Laboratory, Los Alamos, NM. 13 D.J. Hinde et al., Nucl. Phys. A 385, 109 (1982). 14 J.P. Lestone, Nucl. Phys. A 559, 277 (1993). 15 F. Pühlhofer, Nucl. Phys. A 280, 267 (1977). 16 K.-T. Brinkmann et al., Phys. Rev. C 50, 309 (1994). 17 B.B. Back et al., “Studies of Fission Hindrance in Hot Nuclei,” International Workshop on Physics with Recoil Separators and Detector Arrays, New Delhi, India, Jan. 1995. 18 A.L. Caraley, SUNY Stony Brook Ph.D. thesis, unpublished (1997). 19 D.J. Hofman et al., Phys. Rev. C 51, 2597 (1995). 20 A.L. Caraley et al., in preparation.


3.6 Angular distributions of fission fragments from 40Ca + 192Os, natIr, 194Pt and 197Au A.L. Caraley and J.P. Lestone*

We have measured the angular distributions of fission fragments from the 40Ca + 192Os, natIr, 194Pt and 197Au reactions. A considerable change in the nuclear shape occurs in these reactions when the projectile plus target system changes into two nearly equally sized fission fragments. The details of this shape change might differ among the four reactions and influence the angular distribution of fission fragments. The aim of the experiment was to look for a dependence of the fission fragment anisotropy on the shape of the ground state target nucleus. 192Os is prolate with β2 = 0.165; 194Pt and 197Au are oblate with β2 values of −0.143 and −0.10, respectively. The shape of Ir nuclei is unclear, but they are expected to have an absolute deformation significantly smaller than the other three target nuclei in the study.

The measured fission fragment anisotropies from three reactions, 40Ca + Ir, Pt and Au, are in agreement with each other when compared at the same center-of-mass energy relative to the fusion barrier. However, the 40Ca + (prolate) 192Os reaction has a significantly higher fission fragment anisotropy, relative to the other three reactions, at sub-barrier energies. Further work needs to be done to confirm this interesting possible dependence of fission fragment anisotropies on the shape of the target nuclei. All four reactions need to be restudied with longer (higher statistics) experimental runs. In addition, reactions involving more highly deformed prolate targets like W (β2 ~ 0.23) and Hf (β2 ~ 0.28) need to be explored.

* Present address: Los Alamos National Laboratory, Los Alamos, NM.


4.0 ULTRA-RELATIVISTIC HEAVY IONS

4.1 A scale-local approach to percolation theory L.D. Carr and T.A. Trainor

Percolation is used as a paradigm to model critical phenomena in a broad spectrum of physical contexts. In the study of ultrarelativistic heavy ion collisions percolation has been used to model phenomena ranging from QCD effects in the transition from hadronic matter to color-deconfined quark matter, on the one hand, to nonlinear effects in the high-density tracking of final-state charged particles on the other. Bond percolation on a square lattice is an elementary example. The central problem of percolation theory is the topological description of the population of connected bonds, or clusters, on the lattice as a function of bond probability or mean bond density. Near the critical density large bond clusters form, with a largest or ‘infinite’ cluster emerging at and above the critical density. The fraction of the lattice occupied by this infinite cluster is the ‘order parameter’ of the system.

We have been concerned with finite-size effects in percolation theory. Conventional analysis of critical phenomena invokes the thermodynamic limit, in which large-scale measures, such as the order parameter and/or its derivatives, exhibit nonanalyticity; that is, they become discontinuous. The thermodynamic approach is inappropriate for finite systems: one must deal consistently with finite-size effects. Color deconfinement in a nuclear collision can be viewed as a phase transition on a finite system. Thus, it is important for us to understand the implications of finite system size for the QCD phase transition using a scale-local approach.
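The bond-percolation example above can be illustrated with a minimal sketch (our own Python union-find implementation, not the Laboratory's analysis code): open each nearest-neighbor bond on an L × L lattice with probability p and report the largest-cluster fraction, the finite-size analog of the order parameter discussed in the text.

```python
import random

def percolate(L, p, seed=0):
    """Bond percolation on an L x L square lattice: each nearest-neighbor
    bond is open with probability p. Returns the fraction of sites in the
    largest connected cluster (finite-size proxy for the order parameter)."""
    rng = random.Random(seed)
    parent = list(range(L * L))

    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for x in range(L):
        for y in range(L):
            i = x * L + y
            if x + 1 < L and rng.random() < p:   # bond to site (x+1, y)
                union(i, i + L)
            if y + 1 < L and rng.random() < p:   # bond to site (x, y+1)
                union(i, i + 1)

    sizes = {}
    for i in range(L * L):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / (L * L)

# Below the critical bond probability p_c = 0.5 the largest cluster stays
# small; above it a spanning cluster occupies most of the lattice.
print(percolate(64, 0.3), percolate(64, 0.7))
```

Repeating this for a range of lattice sizes L shows directly how the sharp transition of the thermodynamic limit is smoothed in a finite system.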
Fig. 4.1-1. Two-point connection functions (bond probabilities) as functions of lattice scale for two fractal cluster dimensions, df = 4/3 and 9/5. The critical bond probability is 0.5.

Fig. 4.1-2. Space filling by recursion of Cantor sets. This models the problem of percolation clusters of fractal dimension filling a finite fraction of an integer-dimensioned space.


4.2 Ultra-relativistic heavy ion collision simulators: visualization and analysis L.D. Carr, D.J. Prindle and T.A. Trainor

The non-perturbative regime of QCD is currently not well understood theoretically. However, this regime is extremely interesting from the point of view of color deconfinement and chiral symmetry restoration in ultrarelativistic heavy ion collisions. It is particularly important to achieve a better understanding of the overall dynamics of such collisions, including excursion to a state of color-deconfined matter: the quark-gluon plasma.

Because of its complexity the time evolution of an ultrarelativistic collision can best be studied via a Monte Carlo approach using so-called Event Generators (EGs). Currently available EGs are phenomenological in nature, and often stress one aspect of the collision dynamics at the expense of others. In some cases, in order to make the computation task manageable, basic kinematic issues are skirted or ignored, as for example in cascade codes that are not Lorentz covariant. And, because of the complexity of the algorithms and their implementations, bugs are seemingly unavoidable at some level. Recently there has been a major effort in the theory community to achieve improved simulations, both through better models and algorithms, and through establishment of a general quality assurance program for event generators.

In support of this program, and to provide a better understanding of the functioning of EGs and their constituent physical models, we have developed two new techniques for understanding event generators. The first is a much improved visualization of generator output. The second is characterization of the time evolution of a collision by topological measures which supplement conventional dynamical variables. These methods have already proven useful for quality assurance and feedback to theorists. They are also useful as a test bed for experimentalists studying event-by-event physics.
The current version of the event visualization program QCDisplay1 allows one to view HI collisions in perspective with mouse-driven zoom and rotation capability, to employ any algebraic combination of simulator output variables as axes, and to trace particle ID and parentage by color coding and linear connections. Points and lines representing individual particles and parentage can be expanded to reveal associated data structures, and one can navigate through these data structures conveniently. Individual frames of visual data extracted from the EG at a number of intermediate time steps can be concatenated into a movie with this visualizer. Movies of collisions simulated with the event generators RQMD and VNI are available for viewing at http://www.npl.washington.edu/duncan.

In the process of developing this event visualization package we produced a general data-format conversion tool. Since there are at least five major event generators, several heavy-ion experiments with different data formats, and many private simulation efforts, this tool may be quite useful in the future.

Our goal in the next year is to apply two-dimensional scaled correlation analysis (see Sec. 4.7) to intermediate states of VENUS, VNI, and other simulators. A given event may be partitioned into partonic, hadronic, and final detection stages, the time evolution of which needs to be understood through simulations, correlation analysis and visualization. Predicted symmetry variations, as for example formation of a Disoriented Chiral Condensate (DCC) and subsequent attenuation of the correlation by hadronic rescattering, can be studied by following the time dependence of the correlation state of the system. We will use SCA to study the development and observability of such phenomena.

1 Nuclear Physics Laboratory Annual Report, University of Washington (1995) p. 45.


4.3 STAR computing requirements task force D.J. Prindle, T.A. Trainor and D.D. Weerasundara

The computing requirements for experiments at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) will be qualitatively larger than has been previously encountered in nuclear physics. The STAR experiment, for instance, will acquire about 200 Tbytes (2 × 10^14 bytes) of raw data each year. These data must be processed in two major steps: event reconstruction, in which raw data from each event are analyzed to extract particle trajectories and properties, and offline analysis, in which momentum spectra of observed particles are analyzed to study relativistic nuclear collision dynamics, and specifically to look for evidence of a transition to color-deconfined matter.

Much of this data processing will occur at the RHIC Computing Facility (RCF) at BNL. This facility will serve the four experiments currently under construction: STAR, Phenix, Phobos and Brahms. The RCF is undergoing a very rapid expansion in preparation for RHIC turn-on in June, 1999. As part of its design and procurement program, RCF management has asked each experiment to prepare a detailed reassessment of its computing requirements. This has in turn required an extensive internal exercise in describing the details of a complex physics analysis program. This planning study for STAR has been carried out by the STAR Computing Requirements Task Force from August, 1997 to the present. The final report of this study has just been released. The Seattle group (task-force member Trainor with supplementary contributions by Weerasundara and Prindle) has made a major contribution to the study, especially in the area of event-by-event physics.

The study considers STAR computing needs for the RHIC Central Reconstruction Server (CRS) and Central Analysis Server (CAS). Each ‘server’ will be a large processor farm, with over 1000 processors in each case divided among the four experiments.
In addition to these CPU facilities there will be a Mass Data Store (MDS) with up to 1 Pbyte of tape robot volume and up to 40 Tbyte of cache disk volume. STAR will acquire data for about 10^7 central (zero impact parameter) full-energy Au-Au events per year, or the data-volume equivalent of p-p, p-A and medium impact parameter events, and lighter or lower energy A-A events.

A critical area of the study involves detailed description of so-called ‘data mining’ and analysis activities. With seven major physics working groups identified within STAR it is important to understand how each will load the analysis resources, and what degree of coordination can be achieved in order to reduce the burden on I/O channels. Data mining involves the definition of event classes and extraction of corresponding events from the MDS to cache disk. Analysis involves inclusive analysis of momentum spectra, histogramming and other visual presentation activities using the CAS processor farm.

The results of the study include detailed estimates of CPU, storage volume and I/O bandwidth needs for the computation tasks of STAR as they are distributed over the various physics working group programs and the event reconstruction activity. The basic functioning and interaction of these groups are also sketched out.
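The quoted volumes imply a useful back-of-envelope figure (our arithmetic, not a number taken from the task-force report):

```python
# Back-of-envelope check of the STAR data volumes quoted above.
raw_volume = 2e14             # bytes per year (200 Tbytes)
events = 1e7                  # central Au-Au events per year
bytes_per_event = raw_volume / events
print(bytes_per_event / 1e6)  # -> 20.0 (implied average MB per raw event)
```

Numbers of this scale set the I/O-bandwidth constraints that make coordinated data mining, rather than uncoordinated access by each working group, essential.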


4.4 STAR mock data challenge J.C. Prosser, T.A. Trainor and D.D. Weerasundara

In response to a recommendation by the RCF Technical Advisory Committee (TAC) this past June, a Mock Data Challenge (MDC) has been scheduled for the RCF and the four RHIC experiments. The MDC will take place in two waves, MDC1 in August, 1998 and MDC2 in January, 1999. The purpose of the MDC is to exercise all aspects of data acquisition and processing at some reduced scale to test RCF and experiment software and hardware infrastructure. MDCs have been applied to other large experimental programs to good effect. The MDC is intended to be a 10% model of the final RCF configuration.

Preparations for the STAR MDC include generating a number of simulated events to be used in place of real data. Full simulation of about 1M events is being carried out as a distributed project over various computation centers around the country. Event generation, with particle lists as output, is being carried out on the currently implemented CAS processor farm at the RCF and on a CRAY T3E located at NERSC at LBNL. Event generators, which model the collision dynamics, include RQMD, VNI/HIJET, HIJING and FRITIOF. Computer-intensive detector simulation of generated events, with event-generator particle lists as input and using GEANT and a slow simulator to model detector response, is being carried out on the NERSC T3E. The detector simulation program is also planned to be expanded to the Pittsburgh Supercomputing Center T3E shortly.

Two major aspects of the MDC relevant to STAR are initial operation of the CRS and CAS. The Central Reconstruction Server or CRS must handle about 10^7 events per year for STAR. The output is a Data Summary Tape (DST) volume, containing kinematic particle properties and tracking information. The MDC1 version of this facility for STAR will be 30-50 processors (Pentium II) operating in batch mode. About 20 Tbyte of MDS will also be available.
Simulated raw data for up to 1M events will be produced, as described above. The second aspect of the MDC is preliminary operation of the Central Analysis Server or CAS. While the CRS operation is quite complex, and sustained operation of several hundred processors in batch mode for high-volume events will take considerable effort and ingenuity to achieve, there is nevertheless considerable experience already in the operation of such a facility, especially in NA49 at CERN. So, the questions likely to arise in the CRS component of MDC1 will be matters of scalability and reliability. On the other hand, the coordinated usage of the CAS by up to 100 physicists in a very heterogeneous analysis program with major novel elements is a significant design task requiring innovative approaches.

Central to this program is event-by-event physics analysis, a major emphasis of the UW group within STAR. Event-by-event physics has both an event selection aspect and an event analysis aspect. Drawing from our NA49 experience we intend to examine each of 10^7 events per year to extract information content, examine events on this basis for unusual behavior, and sort the event population on the basis of any such behavior. Different event classes, having been identified and recorded in a database, must be retrieved in an efficient way from the MDS and made available for more conventional inclusive analysis, and for any alternative analysis required to understand the physics behind the nature of the event category. The UW group is actively involved in planning the details of the MDC, especially the innovative techniques required for successful CAS operation.


4.5 STAR event-by-event physics L.D. Carr, D.J. Prindle, J.C. Prosser, J.G. Reid, T.A. Trainor and D.D. Weerasundara

Event-by-event analysis looks for differential manifestations of symmetry reduction or increased correlation with respect to a nominally thermalized reference system. Event-by-event physics has been an active field of research for almost two decades, emphasizing mainly flow and jet studies. Jet production stems from partonic degrees of freedom interacting at small space-time scales. Jet phenomena therefore reveal aspects of perturbative QCD. Flow relates to large-scale correlations between momentum and configuration space in the hadronic regime at later times. Jets and flow are ‘large-amplitude’ phenomena, accessible even with low-multiplicity collision systems.

With the advent of √s = 200 GeV/nucleon colliding gold beams at RHIC we expect substantially increased energy densities and resulting event multiplicities which can provide sensitivity to smaller-amplitude symmetry reductions over a broad scale interval. This additional sensitivity and scale range should allow us to explore more deeply so-called soft or nonperturbative QCD phenomena, in particular color deconfinement and chiral symmetry restoration -- the transition to a quark-gluon plasma.

An event-by-event (EbyE) physics working group has been formed in STAR, co-convened by Iwona Sakrejda (LBNL) and Trainor (UW). An immediate task of this group is to provide computational and theoretical infrastructure for event-by-event analysis within STAR suitable for year-one operation, beginning in June, 1999. This is presently a major emphasis of the UW group. The first hurdle in this program is preparation for the RHIC Mock Data Challenge or MDC (described elsewhere in this report), a 10% exercise to be held in August, 1998 (MDC1) and January, 1999 (MDC2). Technical preparations for the EbyE component of the MDC are being directed by Weerasundara.
These include porting EbyE analysis software developed for the NA49 experiment to a STAR-compatible computing environment running on RCF platforms, setting up a pilot operation on a small number of RCF machines within the CAS facility, extending this pilot facility to a 10%-scale production facility under batch control, and integrating the EbyE system with other general-purpose infrastructure such as the RCF mass data store (MDS) and the high-performance storage system (HPSS) interface to the MDS.

Event-by-event analysis will be carried out as a production operation on every full-energy Au-Au event (10^7 per year) and a substantial fraction of lighter A-A and p-A events at full and reduced energies. This program will require a major computation facility at the RCF in its own right, about 1/10 of the Central Reconstruction Server (CRS) used for tracking and DST production. The output of this production analysis will be an event parameterization or ‘event spectrum’ which can in turn be analyzed to sort events into event classes based on comparison with control events from a reference system. Some event classes may correspond to anomalous behavior. Samples from these event classes are then analyzed with more traditional inclusive procedures to understand the nature of any anomaly. This program has already been pursued within NA49 with good results. A possible outcome is isolation of event classes manifesting the effects of color deconfinement and/or chiral symmetry restoration.


4.6 STAR TPC cosmic ray tests G.W. Harper, T.A. Trainor and D.D. Weerasundara

During the period August-October, 1997 the STAR TPC underwent cosmic ray testing at Lawrence Berkeley National Laboratory prior to shipment to the RHIC accelerator at Brookhaven National Laboratory. The University of Washington group contributed to this exercise in two ways: 1) the cathode high voltage system was installed and tested with laser and cosmic-ray tracks, and 2) the cluster residuals on cosmic ray and laser tracks were analyzed and compared to a fast simulator.

The cathode high voltage system determines the electron drift speed in the TPC, and thereby the z-axis length scale. This scale must be controlled to a few parts in 10^5 to meet the required spatial accuracy. This is achieved in principle by servoing the high voltage to maintain a fixed time-of-flight of laser-produced photoelectrons drifting 2.2 m from the cathode plane to the readout wire chambers. The servo system was put into operation early in the testing period and found to perform according to specifications.

The University of Washington also performed an analysis of tracking residuals due to diffusion and track-angle effects. Observed residuals were found to be excessive. It was found that cross talk in the front-end electronics was producing systematic shifts in cluster centroids. We determined that an empirical correction derived from cluster moments was sufficient to remove this cross-talk effect. The cross talk has since been reduced.
Fig. 4.6-1. Upper left plot shows residuals vs pad number before correction. Upper right plot shows empirical correction formed with cluster moments (skew*sigma**1.5/9). Lower left plot shows residuals after correction. Lower right plot is projected residuals distribution.

Fig. 4.6-2. Upper left plot shows correlation between bend-plane residuals containing electronics cross-talk effect and empirical combination of cluster moments. Upper right plot shows the result of removing the cross-talk contribution.
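A cluster-moment correction of this kind can be sketched as follows. The skew*sigma**1.5/9 functional form is read off the axis labels of Fig. 4.6-2; the pad positions and charges below are hypothetical illustrative values, not STAR data, and the moment definitions are the standard charge-weighted ones (our assumption about the actual analysis).

```python
import math

def cluster_moments(positions, charges):
    """Charge-weighted centroid, width (sigma) and skewness of a pad cluster."""
    q = sum(charges)
    mean = sum(x * c for x, c in zip(positions, charges)) / q
    var = sum((x - mean) ** 2 * c for x, c in zip(positions, charges)) / q
    sigma = math.sqrt(var)
    skew = sum((x - mean) ** 3 * c
               for x, c in zip(positions, charges)) / (q * sigma ** 3)
    return mean, sigma, skew

def crosstalk_correction(sigma, skew):
    """Empirical centroid correction of the form skew*sigma**1.5/9 suggested
    by the figure labels (assumed form; units follow the pad coordinates)."""
    return skew * sigma ** 1.5 / 9.0

pads = [0.0, 0.67, 1.34, 2.01]   # hypothetical pad centers (cm)
adc = [10.0, 40.0, 30.0, 5.0]    # hypothetical pad charges
mean, sigma, skew = cluster_moments(pads, adc)
print(mean - crosstalk_correction(sigma, skew))  # corrected centroid
```

For a symmetric cluster the skewness, and hence the correction, vanishes; cross talk shows up precisely as a skew-dependent shift of the centroid.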


4.7 NA49 event-by-event physics program L.D. Carr, D.J. Prindle, J.C. Prosser, J.G. Reid, T.A. Trainor, D.D. Weerasundara and the NA49 Collaboration*

The NA49 event-by-event physics program searches for dynamical fluctuations in single and multi-particle distributions which may be attributed to color deconfinement or chiral symmetry restoration phenomena associated with the formation of a Quark-Gluon Plasma (QGP). Lattice-gauge theory predicts such fluctuations to occur with some observable frequency at CERN SPS and higher energies for the heaviest collision systems. In the NA49 event-by-event program we search for evidence in the hadronic sector for such rare phenomena by looking for a) dynamical fluctuations in and correlations among global thermodynamic variables and b) departures from thermal symmetry over a range of scale in momentum space.

In 1997, the NA49 experiment completed a DST production of 300k events of central Pb+Pb collisions. Event-by-event analyses have been performed on these 300k events and the results were presented by D.D. Weerasundara at the Quark Matter '97 international conference in Tsukuba, Japan in December 1997. The fluctuations observed in the analysis of the conventional global thermodynamic variables for primary vertex particles did not depart significantly from conventional statistics. The dynamical variations observed in strangeness production in Pb+Pb collisions are found to be much smaller than the strangeness enhancement observed in nucleus+nucleus collisions as compared to nucleon+nucleon collisions. We also find evidence for strong attenuation of event-wise pt dynamical correlations in Pb+Pb collisions compared to those in p+p collisions.
Scaled Correlation Analysis (SCA),1 a model-independent differential correlation measure designed to look for dynamical fluctuations over a range of scale, searches for significant deviations in the correlation content of individual event distributions compared to a reference distribution.2 In the 1997 annual report we reported that in the application of SCA to the NA49 Main-TPC data we were able to identify, at the 1 per mil level, a set of anomalous events in the NA49 data which exhibit excess yield in the transverse mass range 0.6 < mT < 1 GeV/c^2 compared to normal events from real data as well as Poisson events. Because of the quality of the reconstructed MTPC tracks the physics interpretation of these anomalous events remained ambiguous.

Since then NA49 has produced DSTs with reconstructed global tracks that combine information from all four TPCs. After performing SCA on these events, we continue to identify anomalous events at the 1 per mil level. We find that the anomalous events contain a significantly higher yield of V0 candidates, although the pair kinematics are generally inconsistent with Λ or K0s decay. An extensive study of Monte Carlo events revealed that the tracks with 0.6 < mT < 1 GeV/c^2 in these anomalous events exhibit characteristics similar to charged particles produced by pair conversions of γ rays coming from π0 decays. We also find that the secondary particle multiplicity in these anomalous events is anti-correlated with the primary vertex track multiplicity, in contrast to the normal events. This may perhaps be an indication that the extra yield of particles in the anomalous events is not produced by the interaction of spectator nucleon(s) with the TPC gas. We continue to study these anomalous events in greater detail and hope to identify the source of the extra particle production. Nevertheless SCA has proved to be a powerful tool to search for rare phenomena in high-energy heavy-ion collisions.

* CERN, Geneva, Switzerland.
1 Nuclear Physics Laboratory Annual Report, University of Washington (1996) p. 36.
2 Nuclear Physics Laboratory Annual Report, University of Washington (1997) p. 41.


4.8 NA49 event-by-event physics: event characterization D.J. Prindle, T.A. Trainor and D.D. Weerasundara

The class of anomalous events found by SCA (see Sec. 4.7) is characterized by excess particles in the Main TPCs near φ = 0° and φ = 180°. These particles appear to originate downstream of the primary vertex. This suggests particle production with small transverse momentum: the magnetic field will sweep these particles to relatively large |x| while |y| remains small.

One possible mechanism to produce secondaries with small transverse momentum is production of e+e- pairs from γ-ray conversion. The maximum e+e- opening angle is me c^2/Eγ, so an e+e- pair from a 10 GeV γ ray has a negligible opening angle. Thus we can characterize the conversion of a γ ray that originates at the primary vertex by five numbers: the x, y, z position of the conversion point and the e+ and e- momenta. The e+ and e- directions are initially along the vector from the target to the assumed conversion point. The Main TPCs are outside the magnetic field, so they measure straight lines, which are characterized by four parameters: the x and y intercepts at a reference z plane, and the x and y slopes. Given a vertex and the e+ and e- momenta we can swim the particles through the magnetic field and calculate the track intercepts and slopes.

We search for γ-ray conversions by taking pairs of tracks, one in the left TPC and one in the right TPC, and varying the γ-ray conversion parameters to minimize the deviation of the calculated slopes and intercepts from the measured slopes and intercepts. With eight measurements and five unknowns this is a three-constraint fit. After requiring χ2 < 5 there is a large combinatorial background. This is studied using GEANT events and is primarily due to tracks that are not electrons. Plotting the momenta determined in this fit versus the truncated mean dE/dx we get a clear separation of electrons from hadrons.
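The kinematic estimate and fit bookkeeping above can be checked numerically (a simple illustration of the quoted formula and constraint counting, not part of the analysis code):

```python
# Maximum e+e- opening angle from pair conversion: theta_max = me*c^2 / E_gamma.
me_c2 = 0.511e-3           # electron rest energy in GeV
E_gamma = 10.0             # GeV, the example photon energy from the text
theta_max = me_c2 / E_gamma
print(theta_max)           # about 5e-5 rad: the pair is effectively collinear

# Fit bookkeeping: two straight Main-TPC tracks, each giving x and y
# intercepts plus x and y slopes, fitted with a five-parameter model
# (conversion point x, y, z and the two electron momentum magnitudes).
measurements = 2 * 4       # = 8
unknowns = 3 + 2           # = 5
constraints = measurements - unknowns
print(constraints)         # -> 3, the three-constraint fit quoted above
```

The collinearity of the pair is what lets the conversion point and the target define the initial electron directions, reducing the unknowns to five.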
Selecting candidate vertices with e+ and e- identified in this way reduces the background enough that we can observe a clear γ-ray signal amounting to about one reconstructed γ ray every two events. We examined the anomalous event sample, a reference ‘vanilla’ event sample and a GEANT event sample. All samples had similar numbers of reconstructed γ rays. Using dE/dx to select for hadrons, we find the anomalous event sample has a large excess of vertices compared to the ‘vanilla’ event sample, while the ‘vanilla’ event sample is very similar to the GEANT sample for this cut. The anomalous vertices have a rather narrow distribution in x centered near zero but an asymmetric distribution in y with a pronounced tail to negative values. This is consistent with an interaction of a nucleus traveling along the beam but later in time than the central collision that triggered data taking.

If we are seeing a high energy hadronic interaction we expect multiple secondary particles arising from the same vertex. The magnetic field has a small effect on the y slope. Taking all pairs of Main TPC tracks for a given event and plotting the y-z intercept, we find a large concentration near the target position due to tracks from the primary vertex. Selecting the subset of tracks that are ‘wrong’ side (i.e., pz is positive but they were measured at negative x), individual events from the anomalous sample generally show one concentration of track crossings downstream from the primary vertex. The y and z positions vary from event to event; generally the y position is negative. This analysis strongly suggests that about once in a thousand central Pb-Pb events there is another nucleus or nucleon in the beam that interacts with a gas nucleus in (or near) the second Vertex TPC within 20 µs of the triggering collision and produces roughly twenty particles.
The distribution of these ‘pile-up’ vertices should be determined by the distribution of matter, so there should be a substantial number of events in which the pile-up vertex is in the target. In some cases these may be distinguishable from the central collision, but this needs to be determined.


4.9 NA49 DST production monitoring M.E. Reitz and D.D. Weerasundara

In January 1998, the NA49 experiment completed the DST production of 400k central Pb+Pb collisions. This represents the full data set collected by NA49 during the 1995 heavy-ion run at the CERN SPS. NA49 DST production, carried out at the SHIFT computer facility at CERN, is a fully automated process managed by a job control script written in the POSIX shell script language. During the DST production for each run period (typically 10k events), a set of variables monitoring detector hardware status and reconstruction software quality are extracted, histogrammed on the fly, and saved onto disk files. A typical set of variables that monitor detector hardware quality per event is: 1) drift velocity, 2) TPC gas gain, 3) TPC gas temperature and 4) TPC gas pressure. Variables that monitor reconstruction software quality per event are: 1) total charge in each TPC, 2) total reconstructed space points in each TPC, 3) total reconstructed tracks in each TPC, 4) total reconstructed global tracks that span two or more TPCs, 5) ratio of local to global tracks, and 6) total number of vertices in an event.

We currently have 0.125 FTE working on manually scanning the monitoring histograms, looking for potential failures in detector hardware or reconstruction software in the DST production. It is very important in general to any physics analysis, and in particular to the event-by-event physics program, to isolate and understand intermittent failures in detector hardware and reconstruction software.


4.10 Analysis of the Henon map using scaled correlation analysis J.G. Reid and T.A. Trainor

Scaled correlation analysis extends the basic topological concepts of entropy, dimension and volume to functions of scale (scale-local measures). We require that analyses based on this extended measure system agree with conventional measures in limiting cases (e.g., ‘zero’ scale) in order to demonstrate a consistent procedure. This motivated us to examine scale-local rank 0 and 1 dimensions (corresponding to ‘covering’ and ‘information’ dimensions) of the strange attractor of the Henon map. Standard dimension measures (in the limit of zero scale) have been extensively applied to this system, and thus provide a benchmark to which we can compare our analysis. The Henon map itself is generated recursively by the equations:

x_{j+1} = a + b*y_j - x_j^2
y_{j+1} = x_j

With a = 1.4 and b = 0.3 this system converges rapidly to a strange attractor, for which the limiting dimension values are given as d0 = 1.28 and d1 = 1.258.1

We applied an updated form of 2-D scaled correlation analysis over as large a scale range as reasonable computation time would permit, from 10^-4 up to 10^1. This is not the ‘scale-zero limit,’ but our results are consistent with the existing literature, as can be seen in Fig. 4.10-1, where the solid line is our result for the covering dimension as a function of scale and the dashed line corresponds to our information dimension result.

We have also used these data to investigate recent theoretical developments (see Sec. 4.12) in the relationship between the dimensionalities of the marginal distributions for a joint distribution and the dimensionality of the joint distribution itself. This is a particularly interesting case since y_{j+1} = x_j ensures that the x and y marginal distributions are identical up to a single point (the initial value).
This example has facilitated further theoretical developments, and we are still working on interpreting and extending our analysis of the marginal distributions and their relationship to the joint distribution. However, the extensive run-time for this 2-D analysis has motivated a code upgrade (see Sec. 4.11) which will be completed soon, and which will allow us to investigate more accurately and quickly the dimensionality of the Henon map and other complex distributions.

[Fig. 4.10-1. Scaled dimension of the Henon attractor (dimension vs. log(e/L); Henon attractor, 300k points).]

¹ Nonlinearity 9, 845 (1996).


4.11 New algorithms for the generalization and optimization of scaled, dithered binning analysis J.G. Reid

With application of the SCA system to a broader set of problems, our existing software required a major overhaul. Along with the changes needed to support the extra functionality provided by new theory developments, I also undertook the task of generalizing the analysis package so that we could distribute this code to other collaborators and institutions. With this in mind, several new algorithms were developed with an emphasis on generality and run-time optimization. The language of choice is C++ because of the extensible nature of well-constructed object-oriented code. To exploit fully the benefits of data abstraction I was forced to rewrite all of the code from scratch. Because I was starting from a tabula rasa there were many minor issues and adjustments which I will not detail here. In this description I concentrate on the major problems of data-structure construction and rebinning, and the algorithms I wrote to deal with them.

The first problem one encounters in writing generalized SCA code is reading the data to be analyzed and constructing a sorted data structure that summarizes it. We have found it absolutely essential to store the data in a sorted format to cut down on run time; it is obviously faster to traverse and process a sorted data structure than an unsorted one. The difficulty is that we want the code to be general enough to process data distributions embedded in an arbitrary n-dimensional space. If the data are given to us as a point set, they must first be binned as part of the analysis. If the data have been prebinned (e.g., by a detector), then the embedding-space dimensionality is based on the detector geometry, and our job has already been done for us.
It is easy enough to determine the default dimensionality of an embedding space from the data-file format, so reading and binning the point-set data is also easily done. Now we can sort the data and construct a data structure for the analysis. The analysis itself consists of traversing this data structure, which is a sorted list of occupied bins. We want to pass through this structure and rebin the data at a larger scale, and we need to do this in a general way, so that we can bin 13-dimensional distributions with the same code as 2-dimensional data. There are two major aspects to solving this problem: first one must design a general data structure, then find an algorithm for traversing it efficiently. To maintain the generality of the code I found it necessary to treat each axis independently, so I let that issue drive both the data-structure design and the analysis algorithm.

The clearest way to describe the data structure is to describe its construction from the unhashed data. Consider the first axis. We determine a list of the unique bin indices for data elements on this axis, then build a sorted, linked list of those indices. Each element in this linked list acts as the head of another linked list, which is a sorted list of the unique bin indices on the next axis for data elements sharing the 'head' element's bin index on the previous axis. Again, each element in this list serves as the head of a linked list of bin indices for the next axis. Thus we can build a data structure for data of any dimensionality, with each axis treated independently. The recursion terminates when we run out of axes: instead of acting as the head of another index list, each element of the last axis' linked list points to the data to which it corresponds.

To analyze the data we have to rebin it (the 'analysis' binning) at scales larger than the binning with which the data structure was built.
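Before turning to the rebinning traversal, the nested per-axis index structure described above can be sketched compactly. Here std::map (a sorted associative container) stands in for the hand-built sorted linked lists of the report; the node layout and names are illustrative assumptions, not the actual package code.

```cpp
#include <map>
#include <vector>

// One node per axis level: 'next' is the sorted list of unique bin
// indices on the following axis; the terminal node carries the data
// payload (here just an occupancy count for illustration).
struct AxisNode {
  std::map<int, AxisNode> next;  // sorted unique bin indices, next axis
  int count = 0;                 // payload at the last axis
};

// Insert one n-dimensional bin index, one axis at a time. Works for
// any dimensionality: the depth of the structure equals the number of
// axes in the supplied index.
void insert(AxisNode& root, const std::vector<int>& binIndex) {
  AxisNode* node = &root;
  for (int idx : binIndex) node = &node->next[idx];  // sorted insertion
  node->count += 1;
}
```

Traversing such a structure axis by axis, in sorted order, is what makes the analysis pass over the occupied bins fast.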
This means we need to traverse the data structure and determine which data elements belong in which analysis bins. To do this we take the projection of the current analysis bin onto the current axis and find which elements fall within the analysis bin along this axis. From there we traverse the linked lists corresponding to these elements and find which of the elements on those lists fall inside the analysis bin along the next axis. The procedure continues until we run out of axes to traverse, at which point we have identified which data elements fall within the current analysis bin. This is repeated until the entire data structure has been traversed and the whole system has been rebinned. Of course, this process is repeated many times at different scales over the desired scale interval.

With these optimized and more general algorithms our code runs much faster, and with an object-oriented design we have been able to implement many other features easily by inserting new routines into existing code. We expect many more code developments soon, and hope to have a distributable software package available in about eight months.


4.12 Further development of scale-local analysis J.G. Reid and T.A. Trainor

The system of scale-local topological measures has been considerably expanded and generalized. This system should now be widely applicable in the analysis of complex phase spaces and in pattern-recognition problems. Here we describe generalized information and volume concepts.

The correlation integral C_q^α(E, E_ref, ε) measures the correlation content of a set E at scale ε in terms of its q-tuples or q-point density functions. It is the basis for other scale-local measures. It has been generalized to the case of an arbitrary reference distribution E_ref and arbitrary partition. In terms of the correlation integral one can define the set capacity Λ_q^α as

    Λ_q^α(E, E_ref, ε) = [C_q^α(E, E_ref, ε)]^{1/(1−q)} = exp(−I_q^α) · M_1(E_ref, ε).    (1)

The capacity is the q-tuple-weighted effective partition-element number for the set. This expression also defines the information I_q^α in terms of an exponential reduction factor applied to M_1(E_ref, ε), the support (number of occupied elements) of the reference set. This illustrates information as a measure of capacity reduction or ratio. The information can be written explicitly as

    I_q^α(E, E_ref, ε) = 1/(q−1) · log[ Σ_{i=1}^{M_1} b_i (q_i/b_i)^α (p_i/q_i)^q / Σ_{j=1}^{M_1} b_j (q_j/b_j)^α ],    (2)

where b represents the partition (when applied to a uniform distribution), p represents the object distribution E, and q represents the reference distribution E_ref. The index q is the rank of the Rényi entropy, and the continuous index α is newly introduced to determine the role of the reference distribution as a weight in the correlation integral. A scale-local rank-q generalized volume V_q(E, ε) can be defined as a product of the set capacity and a specific bin volume expressed in terms of the rank-q dimension d_q(E, ε).
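The information of Eq. (2) is straightforward to evaluate numerically for a finite partition. The following sketch (function and variable names are illustrative assumptions) computes the α-weighted log-ratio of the two partition sums for rank q ≠ 1:

```cpp
#include <cmath>
#include <vector>

// Numeric sketch of the rank-q information of Eq. (2): p = object
// distribution, qref = reference distribution, b = uniform-partition
// weights, alpha = reference-weight index, q = Renyi rank (q != 1).
double information(const std::vector<double>& p,
                   const std::vector<double>& qref,
                   const std::vector<double>& b,
                   double q, double alpha) {
  double num = 0.0, den = 0.0;
  for (size_t i = 0; i < p.size(); ++i) {
    double w = b[i] * std::pow(qref[i] / b[i], alpha);  // reference weight
    den += w;
    if (p[i] > 0.0) num += w * std::pow(p[i] / qref[i], q);
  }
  return std::log(num / den) / (q - 1.0);
}
```

Two limiting cases check the capacity-reduction interpretation: when the object distribution equals the reference the information vanishes (no reduction of the effective element number), and when the object is concentrated in a single element of an M-bin uniform reference it equals log M.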
An application of this concept can be illustrated by a comparison of the rank-1 volume with Boltzmann's H functional expressed as a Riemann integral:

    log[V_1(E, ε)] = log[Λ_1(ε) · ε^{d_1(ε)}] = −Σ_i p_i(ε) log[p_i(ε)] + d_1(ε) · log(ε),    (3)

    H(E) = ∫ f(x) log[f(x)] d^n x = −lim_{ε→0} { −Σ_i p_i(ε) log[p_i(ε)] + n · log(ε) }.    (4)

This comparison makes it clear that the H functional is (within a sign) the zero-scale limit of the log of the rank-1 volume, assuming topological dimension n. For complex dynamical systems and nonequilibrium systems the assumption of integer topological dimension and the restriction to q = 1 and zero scale are inappropriate.
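The binned quantity on the right-hand side of Eq. (4) can be checked numerically in a simple case. For the uniform density f = 1 on the unit n-cube, H(E) = 0, and the combination −Σ p log p + n·log ε vanishes exactly at every bin scale, not just in the limit. A small sketch (names are illustrative assumptions):

```cpp
#include <cmath>

// Evaluate -sum_i p_i log p_i + n*log(eps) for the uniform density on
// [0,1]^n binned at scale eps = 1/bins. Per Eq. (4) this should equal
// -H(E) = 0 for the uniform case.
double binnedH(int bins, int n) {
  double eps = 1.0 / bins;
  double p = std::pow(eps, n);   // occupancy of each n-dim bin
  long total = 1;
  for (int k = 0; k < n; ++k) total *= bins;  // number of bins, bins^n
  double sum = -static_cast<double>(total) * p * std::log(p);
  return sum + n * std::log(eps);
}
```

For nonuniform densities the two terms no longer cancel at finite scale, which is precisely the scale dependence the rank-1 volume of Eq. (3) is designed to capture.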
