#+LATEX_CLASS: book-noparts
# LOL: The [a4paper] did not apply!
#+LATEX_CLASS_OPTIONS: [a4paper]
#+LATEX_HEADER: \usepackage[english,ngerman]{babel}
#+LATEX_HEADER: \usepackage{shellesc}
#+LaTeX_HEADER: \usepackage{pdfpages}
# #+LaTeX_HEADER: \usepackage{tikz}
# Include the setup file that configures the majority of LaTeX export settings
# NOTE: for the *PRINT* version, use emacs style color scheme for code
# ALSO need to adjust the default text color, which we change in
# ~default_latex_header.org~ to a grey for the monokai dark
# background. So use the ~/phd/latex_header_print_version.org file
# as the SETUPFILE!
# For PDF:
# #+SETUPFILE: ~/phd/latex_header_pdf_version.org
# For Print:
#+SETUPFILE: ~/phd/latex_header_print_version.org
# 'externalize' all TikZ plots, i.e. cache them
# #+LaTeX_HEADER: \usepackage{pgfplots}
# #+LaTeX_HEADER: \usepgfplotslibrary{external}
# #+LaTeX_HEADER: \tikzexternalize[prefix=cache/]
#+LATEX_HEADER: \usepackage{tikz}
#+LATEX_HEADER: \usetikzlibrary{external}
#+LATEX_HEADER: \tikzexternalize[prefix=cache/] % activate!
# got an error suddenly with the 'externalize' section above
# https://tex.stackexchange.com/questions/365777/cannot-run-tikz-externalize-with-lualatex-but-it-used-to-work
# for mini table of contents for each chapter
#+LATEX_HEADER: \usepackage{minitoc}
#+LATEX_HEADER: \usepackage{slashed}
# UPDATE <2023-12-03 Sun 18:56>:
# -> In here I simply forgot to set the fallback fonts. It *is*
#    possible to do correctly! I even use it in
#    `~/.config/latexdsl/lualatex.tex`!
#    by using:
#    \setmainfont{STIXTwoText}[RawFeature={fallback=FallbackFonts}]
# UPDATE <2024-01-14 Sun 12:21>
# -> We now use this finally. With the `combofont` approach below, the
#    italics are broken!
#+LATEX_HEADER: \usepackage{fontspec}
#+LATEX_HEADER:
#+LATEX_HEADER: \directlua{
#+LATEX_HEADER: luaotfload.add_fallback(
#+LATEX_HEADER:   "FallbackFonts",
#+LATEX_HEADER:   {
#+LATEX_HEADER:     "DejaVu Serif:mode=harf;",
#+LATEX_HEADER:     "DejaVu Sans Mono:mode=harf;",
#+LATEX_HEADER:     % we could add many more fonts here optionally!
#+LATEX_HEADER:   }
#+LATEX_HEADER: )
#+LATEX_HEADER: }
#+LATEX_HEADER:
#+LATEX_HEADER: \setmainfont{STIXTwoText}[RawFeature={fallback=FallbackFonts}]
#+LATEX_HEADER: \setmathfont{STIXTwoMath-Regular}[RawFeature={fallback=FallbackFonts}]
#+LATEX_HEADER: \setmonofont{Inconsolata}[RawFeature={fallback=FallbackFonts}]
# Bibliography related:
#+LATEX_HEADER: \usepackage[backend=biber]{biblatex}
#+LATEX_HEADER: \addbibresource{references.bib}
# Epigraphs
#+LATEX_HEADER: \usepackage{epigraph}
#+LATEX_HEADER: \newcommand{\rmst}{$\text{RMS}_T$~}
#+LATEX_HEADER: \newcommand{\goldArea}{$\SI[parse-numbers=false]{5 \times 5}{mm²}$~}
#+LATEX_HEADER: \newcommand{\lnL}{$\ln\mathcal{L}$~}
# With DejaVu Serif a linespacing of 1.2 is too tight. 1.5 looks nice,
# maybe 1.4 is optimal?
# The default *I think* is 1.2
# #+LATEX_HEADER: \linespread{1.4} % change line spacing to be a bit larger. TODO: find good value!
# HTML Export
#+OPTIONS: html-style:nil
#+OPTIONS: toc:nil # turn off Table of Contents here and place it elsewhere
#+LATEX: \dominitoc % initialize the package
# Disable evaluation of Org babel source code blocks on export. For
# reasons I don't understand, we don't see any org code block
# evaluation regardless of any settings that I have.
#+PROPERTY: header-args :eval no-export
# XXX: Set the koma stuff so that it only appears in the *export*. It
# breaks the TeX previews!
#+LATEX_HEADER: \newcommand*{\thesisrefereeonetext}{1.\ Gutachter}
#+LATEX_HEADER: \newcommand*{\thesisrefereeone}{Prof.\ Dr.\ Klaus Desch}
#+LATEX_HEADER: \newcommand*{\thesisrefereetwotext}{2.\ Gutachter}
#+LATEX_HEADER: \newcommand*{\thesisrefereetwo}{Prof.\ Dr.\ Igor García Irastorza}
#+LATEX_HEADER: \newcommand{\thesistitle}{Search for solar axions using a 7-GridPix IAXO prototype detector at CAST}
#+LATEX_HEADER: \newcommand{\thesisauthor}{Sebastian Michael Schmidt}
#+LATEX_HEADER: \newcommand{\thesistown}{Solingen}
#+LATEX_HEADER: \newcommand{\thesisyear}{2024}
#+LATEX_HEADER: % Font and layout for figure and table captions
#+LATEX_HEADER: \setkomafont{caption}{\normalfont\small}
#+LATEX_HEADER: \setcapindent{0pt}
#+LATEX_HEADER: \setkomafont{title}{\normalfont\bfseries\huge}
#+LATEX_HEADER: \setkomafont{subtitle}{\normalfont\Large}
# Overwrite our default line spread?
#+LATEX_HEADER: \linespread{1.4} % change line spacing to be a bit larger. TODO: find good value!
#+LATEX_HEADER: \KOMAoptions{fontsize=11pt, paper=a4, twoside=true, DIV=14, BCOR=5mm}
# Change the spacing in the table of contents
#+LATEX_HEADER: \RedeclareSectionCommand[
#+LATEX_HEADER:   tocnumwidth=2.5em % Adjust the width as needed
#+LATEX_HEADER: ]{section}
#+LATEX_HEADER: \RedeclareSectionCommand[
#+LATEX_HEADER:   tocnumwidth=3.5em % Adjust the width as needed
#+LATEX_HEADER: ]{subsection}
#+LATEX_HEADER: \RedeclareSectionCommand[
#+LATEX_HEADER:   tocnumwidth=4em % Adjust the width as needed
#+LATEX_HEADER: ]{subsubsection}
# Include the submission front page
#+LATEX: \input{/home/basti/phd/PhD_Submit_title.tex}
#+TOC: headlines 2
#+OPTIONS: H:4

# Part 0: Introduction
* Published thesis                                                :noexport:

The published thesis can be found at: https://doi.org/10.48565/bonndoc-303

* Compile                                                         :noexport:
:PROPERTIES:
:CUSTOM_ID: sec:compile_thesis
:END:

Before you can compile the TeX file or produce the HTML version of
this thesis, you will need my Emacs configuration:
https://github.com/Vindaar/emacs.d
It defines a variety of Org related settings, in particular related to
LaTeX and HTML exports.

*Note*: Depending on whether you intend to produce the regular or
extended thesis, you need to adjust the ~org-export-exclude-tags~
variable. See [[src:latex:setup]] for the default setup without
~extended~ sections.

** LaTeX

For LaTeX, aside from my Emacs config, you need to evaluate the elisp
snippet below, [[src:latex:setup]]. For the print version also
[[src:latex:setup_printed]].

Produce the TeX file using ~C-c C-e r l~ (~M-x org-export-dispatch~)
and choose ~org-ref~ ~to LaTeX~.

Let =latexmk= take care of the actual compilation of the produced TeX
file:
#+begin_src sh
latexmk -pvc -pdf -view=none -shell-escape -output-directory=texout -pdflatex=lualatex thesis.tex
#+end_src
It watches the file and automatically recompiles whenever the file
changes on disk.

Alternatively, compile manually:
#+begin_src sh
lualatex --shell-escape thesis.tex
biber thesis
lualatex --shell-escape thesis.tex
#+end_src

** HTML

For the HTML version, aside from my Emacs config, you don't need to do
anything in particular for the first export.

Produce the single HTML file using ~C-c C-e r h~ (~M-x
org-export-dispatch~) and choose ~org-ref~ ~to html~. This produces
one ginormous HTML file.

Note: it _also_ automatically converts all PDFs it finds to SVGs or
PNGs (depending on the size of the PDF) and copies them to a ~./figs~
directory, reusing the internal paths of the location of the original
figures, i.e. a ~/home/user/foo/bar/plot.pdf~ will become
~./figs/home/user/foo/bar/plot.svg~.

With the single HTML file present, we can produce the multiple HTML
pages:
#+begin_src sh
cd ~/phd # or wherever you have the PhD thesis located
nim r code/split_thesis_html.nim
#+end_src
which will produce all HTMLs in [[file:html/]].
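The PDF to SVG path mapping described above can be sketched in plain
shell. This only reproduces the mapping itself — the actual conversion
is performed by the Emacs export hooks; the ~pdftocairo~ call in the
comment is merely a suggestion for doing it by hand:

#+begin_src sh
# Sketch of the path mapping used for converted figures:
# /home/user/foo/bar/plot.pdf -> ./figs/home/user/foo/bar/plot.svg
src=/home/user/foo/bar/plot.pdf
dst=./figs${src%.pdf}.svg
echo "$dst"   # -> ./figs/home/user/foo/bar/plot.svg
# To actually convert by hand, poppler's pdftocairo could be used:
# mkdir -p "$(dirname "$dst")" && pdftocairo -svg "$src" "$dst"
#+end_src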
* Emacs and Org settings                                          :noexport:
:PROPERTIES:
:CUSTOM_ID: sec:latex_book_class
:END:

#+NAME: src:latex:setup
#+begin_src emacs-lisp
(add-to-list 'org-latex-classes
             '("book-noparts"
               "\\documentclass{scrbook}"
               ("\\chapter{%s}" . "\\chapter*{%s}")
               ("\\section{%s}" . "\\section*{%s}")
               ("\\subsection{%s}" . "\\subsection*{%s}")
               ("\\subsubsection{%s}" . "\\subsubsection*{%s}")
               ("\\paragraph{%s}" . "\\paragraph*{%s}")
               ("\\subparagraph{%s}" . "\\subparagraph*{%s}")))
(setq bibtex-completion-bibliography "~/phd/references.bib")
(add-to-list 'org-latex-packages-alist '("outputdir=texout" "minted"))
(defun my-custom-org-exclude-tags (backend)
  "Set tags to exclude depending on BACKEND."
  (cond ((eq backend 'latex)
         (setq-local org-export-exclude-tags '("noexport" "extended" "html")))
        ;; `extended` is always exported
        ((eq backend 'html)
         (setq-local org-export-exclude-tags '("noexport" "latex")))))
(add-hook 'org-export-before-processing-hook 'my-custom-org-exclude-tags)
;; Finally, use our custom IDs, if present. Otherwise we cannot link
;; to a figure from the subfigure DSL!
(setq org-latex-prefer-user-labels t)
#+end_src

#+RESULTS: src:latex:setup
: t

For the printed version, the color scheme for code should use a
bright background (we remove our background setting):
#+NAME: src:latex:setup_printed
#+begin_src emacs-lisp
(setq org-latex-minted-options
      '(("frame" "lines")
        ("linenos=true")
        ("fontsize=\\footnotesize")))
#+end_src

#+RESULTS: src:latex:setup_printed
| frame                  | lines |
| linenos=true           |       |
| fontsize=\footnotesize |       |

And place in the header of this file:
#+LATEX_HEADER: \usemintedstyle{emacs}

#+begin_src emacs-lisp
(add-to-list 'org-latex-classes
             '("book-noparts"
               "\\documentclass{scrbook}"
               ("\\chapter{%s}" . "\\chapter*{%s}")
               ("\\section{%s}" . "\\section*{%s}")
               ("\\subsection{%s}" . "\\subsection*{%s}")
               ("\\subsubsection{%s}" . "\\subsubsection*{%s}")
               ("\\paragraph{%s}" . "\\paragraph*{%s}")
               ("\\subparagraph{%s}" .
                "\\subparagraph*{%s}")))
#+end_src

#+RESULTS:
| book-noparts | \documentclass{scrbook}       | (\chapter{%s} . \chapter*{%s}) | (\section{%s} . \section*{%s})       | (\subsection{%s} . \subsection*{%s}) | (\subsubsection{%s} . \subsubsection*{%s}) | (\paragraph{%s} . \paragraph*{%s})         | (\subparagraph{%s} . \subparagraph*{%s}) |
| article      | \documentclass[11pt]{article} | (\section{%s} . \section*{%s}) | (\subsection{%s} . \subsection*{%s}) | (\subsubsection{%s} . \subsubsection*{%s}) | (\paragraph{%s} . \paragraph*{%s})   | (\subparagraph{%s} . \subparagraph*{%s})   |                                          |
| report       | \documentclass[11pt]{report}  | (\part{%s} . \part*{%s})       | (\chapter{%s} . \chapter*{%s})       | (\section{%s} . \section*{%s})       | (\subsection{%s} . \subsection*{%s})       | (\subsubsection{%s} . \subsubsection*{%s}) |                                          |
| book         | \documentclass[11pt]{book}    | (\part{%s} . \part*{%s})       | (\chapter{%s} . \chapter*{%s})       | (\section{%s} . \section*{%s})       | (\subsection{%s} . \subsection*{%s})       | (\subsubsection{%s} . \subsubsection*{%s}) |                                          |

Bibliography for org-ref:
#+begin_src emacs-lisp
(setq bibtex-completion-bibliography "~/phd/references.bib")
#+end_src

#+RESULTS:
: ~/phd/references.bib

#+begin_src emacs-lisp
(add-to-list 'org-latex-packages-alist '("outputdir=texout" "minted"))
#+end_src

#+RESULTS:
| outputdir=texout   | minted       |     |
| labelformat=simple | subcaption   |     |
| margin=2.5cm       | geometry     |     |
|                    | mhchem       |     |
|                    | amsmath      |     |
|                    | unicode-math |     |
|                    | fontspec     |     |
|                    | siunitx      |     |
|                    | pdfpages     |     |
|                    | longtable    |     |
|                    | booktabs     |     |
|                    | minted       |     |
|                    | minted       | t   |
|                    | booktabs     | nil |

In addition make sure you have an environment variable referring to
the ~TimepixAnalysis~ repository:
#+begin_src sh
# A short helper to reference the TPA directory
export TPA=~/CastData/ExternCode/TimepixAnalysis
#+end_src

** Customize exclude tags per backend

#+begin_src emacs-lisp
(defun my-custom-org-exclude-tags (backend)
  "Set tags to exclude depending on BACKEND."
  (cond ((eq backend 'latex)
         (setq-local org-export-exclude-tags '("noexport" "html")))
        ((eq backend 'html)
         (setq-local org-export-exclude-tags '("noexport" "extended" "latex")))))
(add-hook 'org-export-before-processing-hook 'my-custom-org-exclude-tags)
#+end_src

#+RESULTS:
| my-custom-org-exclude-tags | my/org-process-subfigure-dsl | my/org-html-export-figs | org-blackfriday--reset-org-blackfriday--code-block-num-backticks |

* Publication                                                     :noexport:

The thesis will be published digitally via the ULB, in particular
using 'bonndoc': https://bonndoc.ulb.uni-bonn.de/xmlui/page/about

References about the how-to:
- https://www.ulb.uni-bonn.de/de/forschen-lehren-publizieren/dissertation-publizieren
- https://www.ulb.uni-bonn.de/de/forschen-lehren-publizieren/dissertation-publizieren/online-publikation-auf-bonndoc

Form I have to fill:
https://merry.ulb.uni-bonn.de/anmeldungen/bin/dissmeld2.php

* Errata                                                          :extended:
:PROPERTIES:
:CUSTOM_ID: sec:errata
:END:

For the list of errata, see the main website overview,
[[file:website/index.org::#sec:errata]] (link to Org source) or
[[../index.html]] (link to the hosted website).

* Introduction                                                        :Intro:
:PROPERTIES:
:CUSTOM_ID: sec:introduction
:END:

#+begin_export latex
\epigraph{
Life before death.
\\Strength before weakness.
\\Journey before destination.
}{
\textit{The Stormlight Archive \\ by Brandon Sanderson}
}
#+end_export

Mathematics has long served as a guiding tool in theoretical
physics. Symmetries, mathematical 'beauty' and the notion of
'naturalness' have repeatedly been used to predict new phenomena in
physics, to be verified experimentally later. Although we may never
know whether the universe actually cares about our funny intuitions,
in the absence of empirical evidence to the contrary we tend to stick
to such approaches. Arguably, given the infinite space of mathematical
avenues in which physics /could/ express itself, this may be
considered an application of Occam's razor to physics.
In a sense it underlines the intersection between philosophy,
mathematics and physics, and indicates that the term 'philosophy of
nature' is not actually wholly inapplicable to modern physics.

And so we arrive at one of the current representations of this idea in
the form of an angle $θ$ in quantum chromodynamics. Measurements seem
to indicate that this angle is essentially zero. However, we tend to
reject the idea that our universe simply /is/ such that $θ$ happens to
be close to zero. Instead we look for the simplest explanations as to
why this might be the case. And who can blame us, when accepting
$θ = 0$ as a brute fact of nature would lead us precisely nowhere?
Finding an explanation spawns a new hypothetical friend in our zoo of
particles, the /axion/.

The point of this thesis is to continue the search for this zoo member
by way of staring into the core of the Sun. With the help of a very
large number of /virtual/ photons we will attempt to entice some
axions to become real X-rays, directly detectable by us. And if we
fail in this quest, we can put our philosopher's hat back on and muse
about how little our axion friends want to dance with our virtual
photons.

# The one place where we want a little bit of space to go from
# a little bit less serious to a more serious language :)
#+begin_export latex
\vspace{1cm}
#+end_export

After a short side note about this thesis as a document in chapter
[[#sec:about_thesis]], we introduce the theoretical foundation of
axion physics in chapter [[#sec:theory]]: from the historical reasons
why axions were proposed in the first place, to the avenues of
detection and, related to those, the expected (model-dependent) solar
axion flux. We will see that the Sun acts as a strong source of axions
in the soft X-ray energy range.

This leads to chapter [[#sec:helioscopes]], which fully introduces the
concept of an axion helioscope as a way to potentially detect axions
of solar origin.
A large magnet is used as a solar telescope in an attempt to reconvert
solar axions into photons, X-rays. In particular, the chapter
introduces the CERN Axion Solar Telescope (CAST) as the experiment at
the center of this thesis. Its successor, the International AXion
Observatory (IAXO), will also be introduced.

With an understanding of possible detection mechanisms for axions, we
next focus on the hardware required to actually measure axions
indirectly, that is, via gaseous detectors for X-ray detection, made
possible by the energy spectrum of axions from the Sun. In chapter
[[#sec:theory_detector]] we cover the relevant physics of X-ray
interactions with matter and of gaseous detectors.

Next we introduce the detector used in this thesis, the 'Septemboard'
7-GridPix detector, in chapter [[#sec:septemboard]]. Here we first
introduce the concept of a 'Micromegas detector', of which our GridPix
detectors are a variant. We will motivate the different detector
features this 7-GridPix detector has over the single GridPix detector
used previously. From the hardware of the detector the reader may
optionally move on to the data acquisition software and the monitoring
tools used during the CAST data taking campaign, in appendix
[[#sec:daq]].

With a fully operational detector in mind, we then introduce the
software to reconstruct and analyze data taken with this detector in
chapter [[#sec:reconstruction]]. We discuss cluster finding in the
GridPix data, the calculation of geometric properties of clusters and
the reconstruction of FADC spectra.

Then it is finally time to present the installation of the Septemboard
detector at the CAST experiment in chapter [[#sec:cast]], discuss
potential issues encountered which affect data quality and summarize
the total data taken, which will later be used for a limit
calculation.

Data taken at CAST still needs to be processed and further calibrated
to be useful, which is done in chapter [[#sec:calibration]].
This includes mitigating the effects of slight detector instabilities
and calibrating the data in energy.

At that point we have everything ready to try and filter the entire
dataset down to the most X-ray like clusters. When applied to the
background dataset, this yields our irreducible background rate of
events that are either real X-rays due to non-axion sources or other
types of X-ray like data. Applied to the axion-sensitive solar
tracking dataset, the same techniques yield a set of axion
candidates. The different classification techniques, and how all
detector features are used for this purpose, are explained in chapter
[[#sec:background]].

This finally brings us to chapter [[#sec:limit]], in which we
introduce our method to evaluate our axion-sensitive solar tracking
dataset against our background dataset. This is done using a Bayesian
extended likelihood approach. We will mainly compute a limit on the
axion-electron coupling constant $g_{ae}$. But secondarily, we will
also consider the axion-photon coupling constant $g_{aγ}$ and the
chameleon-photon coupling $β_γ$, of a separate hypothetical
particle. We will compare each obtained limit with the current best
limits.

As a concluding outlook we will discuss potential improvements, on
different levels, possible for future detectors and physics searches
in chapter [[#sec:outlook]]. The lessons learned in this thesis will
be summarized to give ideas about which aspects should be emphasized
more in future data taking campaigns and which techniques might be
worthwhile to investigate for possible improvements to the background
rate and similar. This will be placed into the context of a potential
Timepix3 based detector in the future.

We finally summarize the results and conclude in chapter
[[#sec:summary]].
# Textwidth: \the\textwidth
# \\
# Linewidth: \the\linewidth
# \\
# Textheight: \the\textheight

** Line and text heights                                          :noexport:

Textwidth: 458.29268pt
Linewidth: 458.29268pt
Textheight: 677.3971pt
Ratio: 1.47808841285

** TODOs for this section [/]                                     :noexport:

- [ ] *CHECK CHUCK'S COMMENT* about my usage of /entice/ above!

The below was an early version of the section above. I think the
statistics part at the beginning could still be a decent idea.
#+begin_quote
...
- [ ] *There is still the idea to structure it like this, i.e. move
  basic limit method to beginning*
As we finally wish to compute a limit on different axion coupling
constants, an understanding of basic statistics for limit calculations
is required. This we will cover in chapter
[[Statistics & limit calculations]].
...
This leads to a discussion about the type of calibrations necessary to
bring the Septemboard detector into operation, chapter
[[#sec:operation_calibration]]. Stopping at the software running the
detector, we will then transfer over to the software suite built to
analyze the data in chapter [[Software]].

A further chapter about the analysis principles follows, which
explains the ideas of how the data reconstruction works, what kind of
calibrations are applied to the data and how it all fits together to
compute a background rate and limit. This is chapter
[[Chapter about analysis principle]].

Raytracing chapter will be moved somehow.

What follows in chapter
[[Detector preparation / study / characterization etc.]] is the
explanation of all characterization measurements, an introduction to
the \cefe calibration measurements and how the energy calibration
works.

From there we go to the actual deployment of the detector at CAST in
chapter [[Detector installation at CAST & data taking]], in which we
describe the physical setup and give an overview over the different
data taking periods.
With the description of the data periods out of the way, we can make
use of the data to compute the background rate of the detector for
different cases in chapter [[Background rate computation]]. These
background rates are combined with the expected signals from chapter
*TODO which one Raytracing + theory?* and the measured candidates
during tracking time to compute a limit in chapter
[[Limit calculation]].
#+end_quote

** Choice of quote                                                :noexport:

Why did I choose the quote from The Stormlight Archive? If you haven't
read The Stormlight Archive, this won't mean anything to you and/or it
will spoil you somewhat. So read at your own discretion.

'The First Ideal' quoted for the Knights Radiant is a sort of metaphor
for many aspects of doing a PhD in our real world. Joining a PhD
program is a bit like joining one of the orders of the Knights
Radiant. I suppose many PhD students can tell their own stories about
the struggles encountered during their PhD work. While different for
each of us, a common theme is struggling with "why am I even doing
this?". To many, the notion of _having a PhD_ ends up becoming the
dominant reason not to just quit. So we forget "Journey before
Destination". The point of a PhD is not the title we get at the end,
but being part of academia and pushing science forward together.

Kaladin's mental struggles and his coming close to breaking his oaths
speak to me regarding my PhD. Questions of self worth, whether my
approach to my PhD and my general intelligence are enough to achieve
what I want, my mental health and whether anyone even cares about the
research I do, at many times made me wonder whether doing a PhD is
worth it. In that sense I was close to breaking my own "oaths" of
doing a PhD and doing it for the right reasons. While I don't think I
ever really considered quitting, I certainly wondered many times about
the importance of my work. And mentally I frequently visited the
chasms, I can say that much.
To this day I'm very torn about the concept of PhDs in our modern
society. I think [[https://www.quantamagazine.org/a-math-puzzle-worthy-of-freeman-dyson-20140326/][Freeman Dyson's thoughts]] deserve consideration.

I surely could have picked many other quotes (or none at all) to
include at the beginning of the thesis. I like this one in particular
though, because its meaning is a bit ambiguous. For most readers (who
will likely _not_ have read Cosmere novels), it will likely be a
mysterious, but very serious sounding quote that leaves much room for
personal interpretation. I hope that the quite lighthearted
introduction afterwards is a nice uplifting contrast for those readers
who interpret the quote in an ominous or dark way.

And well, duhh. It's only fitting that this thesis, whose raw Org
document is >2MB, quotes someone who writes even more than me! lol

(Also, it's kind of fitting. My girlfriend uses the nick 'windspren'
on Matrix. So unbeknownst to her (because she still has to finish all
of Stormlight!), it's only fitting that I quote something related to
Kaladin. Someone who is forever changed, mentally a bit scarred and
has his personal windspren, Syl, at his side to support him :))

* About this thesis                                                   :Intro:
:PROPERTIES:
:CUSTOM_ID: sec:about_thesis
:END:

If you are reading this as a printed thesis, it is likely a shortened
version of the full document it is part of. To conform to the
expectations of a PhD thesis, many parts that are irrelevant for the
basic presentation of the work done during the thesis have been
removed. However, a fellow researcher who wishes to understand all the
details, in particular in terms of reproducibility of the results,
should read the full document instead. If possible, a PhD thesis
should be a tome of knowledge about its topic, allowing the interested
reader to absorb as much of the author's knowledge as possible to help
with the continuation of the research.
Crucially, this includes details about how results were obtained and
access to results in numerical formats, both of which are commonly
lacking in regular PhD theses. For this reason the idea of an extended
thesis was born, found at [fn:when_find] https://phd.vindaar.de. There
you will not only find a PDF of the thesis you are likely reading
right now, but also:
- a PDF version of the extended document,
- an HTML version of the extended document,
- the raw Org document of the thesis [fn:org],
- a link to the repository of the thesis,
- a link to a repository of the entire raw datasets used in the thesis
  (plus reconstructed data) following the FAIR
  [[cite:&wilkinson16_fair]] guiding principles,
- a large amount of additional documents of notes taken during
  research and development.

The extended document contains many additional subsections tagged
'extended', similar to the 'Intro' tag (see the top right of the page)
of this chapter. These either contain personal thoughts about why
certain decisions were taken, or additional research that either led
nowhere or simply is not very relevant for the thesis. Maybe most
importantly, you will find a subsection for *every single plot and
table* included in this thesis (that was created by me) with the
commands or code snippets to produce them! In addition, the linked git
repository of the thesis not only contains all the figures included in
the thesis as vector graphics, but also a *CSV file for every
plot*. [fn:csv_files] Use them freely under the only condition to
reference this thesis when you do so.

/In essence the idea is to provide a fully reproducible thesis/.

#+LATEX: \vspace{0.5cm}

Furthermore, every thesis, especially those relying on large pieces of
software, contains mistakes, bugs, wrong assumptions and more. Most of
these are not known to the researcher, in the case of wrong
assumptions possibly due to a lack of knowledge in a specific
topic. Other times shortcomings _are_ known, but left out for
convenience.
In this thesis I try to be /transparent/ about known
shortcomings. There are almost certainly bugs in the referenced code
that I'm not aware of, bad assumptions about certain things, etc.
Where I /am/ aware of sketchy choices and mistakes, I will highlight
them honestly.

During the work of my PhD I have written and contributed to a large
number of free software / open source projects. The TimepixAnalysis
[fn:TPA] project -- the code base at the heart of all data
reconstruction and analysis for this thesis -- is just one more such
project. What this means is that I have no intention of abandoning it
in the future, even if I leave academia. As long as there is demand
for it (for example for a GridPix3 detector used at BabyIAXO), I'll be
happy to maintain and extend it as needed. Future reader, don't let
"last updated N years ago" scare you! Just open an issue on Github
[fn:or_wherever].

Note that I'm always available for questions about my work or this
thesis, either via Matrix (@vindaar:matrix.org), on other channels
(Twitter / Discord / Github @vindaar) or plainly by email:
phd@vindaar.de. Please do not hesitate to contact me, even if it has
been several years since this thesis was initially released.

[fn:when_find] The actual thesis will be found there once it has been
published. Links to data, figures and other resources are available
before that, though.

[fn:org] The thesis is not written directly in LaTeX, but in Emacs Org
mode. From there it is exported to different targets.

[fn:TPA] https://github.com/Vindaar/TimepixAnalysis

[fn:csv_files] CSV is not a great format (it's not even standardized,
after all), but it is convenient for such a purpose.

[fn:or_wherever] Or whatever is the go-to in the year you currently
reside. I'll be sure to update the link on my website if we move host.
** Extended notes for the extended thesis [/]                     :extended:

The extended version includes many source code blocks that can be
extracted either directly using Emacs by calling ~org-babel-tangle~ in
the Org document (~M-x org-babel-tangle~ or ~C-c C-v t~) or
alternatively using [[https://github.com/OrgTangle/ntangle][ntangle]]:
#+begin_src sh
ntangle thesis.org
#+end_src

- [ ] *REPRODUCE ALL RESULTS* Ideally, all results can be reproduced
  with a single:
  #+begin_src sh
  ./generateResults.sh
  #+end_src
  call (we'll see how that will work out).

The package versions for the code used in the extended version will be
frozen at a specific time. The list of version numbers will be found
below.

** Why Org mode                                                   :extended:

Well, if you read this, you probably understand why I wrote it in Org
mode!

** TODOs for this section [/]                                     :noexport:

*TODO:* (maybe) insert the essay written on my phone one night here
(into the full version of the thesis at least)

Explanation about the thesis structure and introduction of the "full
thesis document".

*TODO*: insert link, probably to GitHub as well as some other source

*TODO*: Add version numbers of all packages used for final plots.

*TODO*: Have specific marking in (sub)sections if they contain more
information in the extended version?

*TODO*: It would be sick if we could do something like
#+begin_src sh
curl -s <backblaze link> | sh foo.sh
#+end_src
to download and generate everything in one go. Seems a bit insane
though. But who knows.

*TODO*: In =noexport= sections, possibly have a "Skip this section
if:" introduction? So that readers know exactly why a certain section
might be of interest to them.

Old paragraph:
#+begin_quote
The main differences between the regular thesis document and the
extended version are the following. The extended version contains:
- either inline code or links to the code that produces *every plot*
  (that is created by me); inline code is used if the required code to
  generate the plot is less than a certain amount of lines.
  For non-inlined code the used code is referenced (link to the code +
  correct git commit)
- access to *all* raw and reconstructed data to reproduce the results
- additional chapters that were not relevant enough / polished enough
  for inclusion in the thesis. This includes additional plots,
  investigations of detector behavior etc., theoretical calculations
  and more.
#+end_quote

Old paragraph:
#+begin_quote
Further, this thesis does not attempt to cover *every* aspect of the
theoretical foundation required to understand every part. For example
we will not introduce the Standard Model or explain certain detector
features if they are not of importance for the understanding of our
data. Good references, if available, will however be given for an
interested reader / a reader attempting to fill in gaps in knowledge.
#+end_quote

** Notes for future PhD students and IAXO analyses                :extended:

Note that even if it's 2037 right now and you are a PhD student trying
to understand my analysis because you are working on (Baby)IAXO data,
that doesn't mean you cannot reach me to ask questions. Unless I died
(let's hope not!) I'll still be reachable via phd@vindaar.de no matter
when. But then again, at that point you probably already told your
personal AGI to just reconstruct all I did, so well. Let's see how
this ages, shall we?

* TODO List of todos [0/9]                                  :Intro:noexport:
:PROPERTIES:
:CUSTOM_ID: sec:todos
:END:

** TODO Use Git LFS for all the CSV files of plots

Github supports up to 2GB files for LFS. All the CSV files are much
smaller than that (but maybe up to O(500MB) in rare cases). Should
work well enough.
-> Uhh, never mind, Github has a quota of 1 GB for LFS in terms of
bandwidth!
https://docs.github.com/en/billing/managing-billing-for-git-large-file-storage/upgrading-git-large-file-storage
50GB cost 5$ per month!

** DONE Check correct title for Igor!!!

I don't know if he is a Prof etc.

** DONE Implement fallback fonts for LuaLaTeX!
Using STIX Two we don't have all unicode in text!!
- [X] We did that now, but some things are still not there. Maybe STIX Two
  pretends it has the characters? -> Done
** STARTED Current important TODOs until thesis draft is done
- [X] Finish introduction
- [X] Finish outlook
- [X] Write summary
- [X] Write / check introduction of each chapter
- [X] Rewrite parts of CAST chapter, move interlock etc. related stuff to
  Appendix, shorten!
- [ ] Move FADC veto discussion about signal shape etc. somewhere
  else. Maybe introduce a "future thoughts about vetoes etc." section
- [X] Fix calculation of gas parameters used in FADC veto section based on
  [[~/CastData/ExternCode/TimepixAnalysis/Tools/septemboardCastGasNimboltz/septemboardGasCastNimBoltz.nim]]
  to use NimBoltz instead.
  - [X] Added a version of the code using NimBoltz
  - [ ] Rerun using NimBoltz version with 2e9 interactions! We use 1e7 as
    base, so a factor of 200.
  - [ ] *REPLACE NUMBERS / PLOTS IN THESIS*
- [X] Update all plots to have good text sizes
- [X] Write / finish raytracing appendix. Needs to include introduction,
  screenshots of CAST setup, comparisons of HPD for PANTER, result of axion
  image
** DONE Define final font sizes for different types of plots
We need to define a common plot size, font size and margins to use for
different types of plots.
1. single plots
2. side-by-side plots
3. without a legend
4. with a legend
5. with a possibly long title
Based on the 55Fe run used in sec. [[#sec:calib:energy_gen_example_cefe]] we
started a small script to test some plot sizes.
- [ ] *FOR SINGLE PLOT WITH / WITHOUT LEGENDS, USE BACKGROUND RATE*
The total ~linewidth~ and ~textwidth~ of the thesis are:
#+begin_src latex
\the\textwidth
\the\linewidth
#+end_src
455.24411pt 455.24411pt
That will serve as the basis for the deduction of correct sizes of all plots.
So, what does this tell us? E.g.
if we use font size 12 on a TeX generated plot that is inserted at
~1\textwidth~, it will match text in the regular document at font size 12.
*Q*: Is height or width the relevant dimension? Very flat plot of full text
width vs. very tall, but narrow plot?
-> Neither! Only our given font size matters of course, scaled to the size of
the final image in the plot. If we insert at e.g. ~0.5\linewidth~, an ~11pt~
font will effectively be ~11pt · 0.5~ in size.
Thus, we need to define:
- How wide are our plots supposed to be? (each of the above types) The ratio
  of ~\textwidth~ to our plot width must be used to scale all font sizes!
- So ideally: Define a set of inputs:
  - fraction width of plot in page, e.g. 90% for single plot, 50% for
    side-by-side
  - desired ratio? Or width, height? If we use width + ratio or width + height
    we can then keep the quantities used for the margins the same? Do we want
    that? If we then have a 1 cm margin at the top it will be ~half the size
    in a side-by-side plot!
I'm still not 100% clear on how to handle margins on the plot though. But
fixing the text sizes is essentially independent in the way we implement now.
- [X] *DECIDE TO USE STIXTwo*? At this point it sounds like a good idea.
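The scaling rule can be sketched numerically (the 11 pt document font size and
the 0.5 width fraction are example values, not final choices):

#+begin_src sh
# If a plot is inserted at `frac · \textwidth`, all text inside it is scaled
# by `frac`. To end up at the document font size, the plot itself must use
# target / frac as its font size.
awk 'BEGIN {
  target = 11.0  # desired font size in the document, in pt (example value)
  frac   = 0.5   # plot inserted at 0.5\textwidth, i.e. side-by-side (example)
  printf "font size to use in the plot: %.1f pt\n", target / frac
}'
#+end_src

So a side-by-side plot at half width needs double the font size inside the
plot to match the surrounding text.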
*** Side-by-side plots First parse the data: #+begin_src sh raw_data_manipulation \ -p /mnt/4TB/CAST/Data/2018/CalibrationRuns/Run_149_180219-17-25.tar.gz \ -r calib \ -o /tmp/run_149.h5 #+end_src #+begin_src sh raw_data_manipulation \ -p /mnt/1TB/CAST/2018/CalibrationRuns/Run_149_180219-17-25 \ -r calib \ -o /tmp/run_149.h5 #+end_src Now reconstruct the data file and _save the CSV_: #+begin_src sh :results none WRITE_PLOT_CSV=true PLOT_OUTPATH=~/phd/playground/Figs/ reconstruction -i /tmp/run_149.h5 --out /tmp/reco_149.h5 #+end_src storing the figures (which we don't care about here) _and the CSV_ file of the data in [[~/phd/playground/Figs/run_149_2023-10-31_18-44-29]], in particular the CSV file of the 55Fe plot [[file:playground/Figs/run_149_2023-10-31_18-44-29/fe_spec_run_149_chip_3_charge.pdf.csv]]. We'll use that one to define a test for font sizes: #+begin_src nim :tangle code/determine_font_sizes_plots.nim import ggplotnim import std / [strutils, strformat] ## Notes: ## Font: STIXTwoText ## Text size: 11pt ## Figure caption size: 10pt ## Subcaption subfigure caption size: 9pt ## A4 paper: 210 × 297 mm² ## So: Total width in LaTeX pt: ## width: 8.26771653543 inch ## 597.507874016 pt (using 72.27 DPI) ## ## Using KOMAoption DIV 14 and BCOR=5mm yields 458.29268pt 458.29268pt ## To get that with a fixed margin on both sides needs: ## 139.215194016pt in the margin. ## That is 1.92632065886 inches and thus ## 4.8928544735 cm. ## Ergo: 2.44642723675cm on each side. 
proc escapeLatex(s: string): string = result = s.multiReplace([("e^-", r"$e^-$"), ("\n", r"\\"), ("%", "\\%")]) #func side_by_side_theme(): Theme = # result = Theme(titleFont: some(font(16.0)), # labelFont: some(font(12.0)), # tickLabelFont: some(font(12.0)), # tickLength: some(7.5), # tickWidth: some(1.5), # legendFont: some(font(12.0)), # legendTitleFont: some(font(12.0, bold = true)), # facetHeaderFont: some(font(12.0, alignKind = taCenter)), # baseScale: some(1.0)) ## This was the sizes used by adjusting by hand #func side_by_side_theme(): Theme = # result = Theme(titleFont: some(font(20.0)), # labelFont: some(font(16.0)), # tickLabelFont: some(font(16.0)), # tickLength: some(10.0), # tickWidth: some(1.5), # legendFont: some(font(16.0)), # legendTitleFont: some(font(16.0, bold = true)), # facetHeaderFont: some(font(16.0, alignKind = taCenter)), # baseLabelMargin: some(0.5), # baseScale: some(1.0)) #func theme_scale(scale: float, family = ""): Theme = # ## Returns a theme that scales all fonts, tick sizes etc. by the given factor compared # ## to the default values. # ## # ## If `family` given will overwrite the font family of all fonts to this. 
# result = default_scale() # proc `*`(x: Option[float], s: float): Option[float] = # doAssert x.isSome # result = some(x.get * s) # proc `*`(x: Option[Font], s: float): Option[Font] = # doAssert x.isSome # let f = x.get # let fam = if family.len > 0: family else: f.family # result = some(font(f.size * s, bold = f.bold, family = fam, alignKind = f.alignKind)) # result.titleFont = result.titleFont * scale # result.labelFont = result.labelFont * scale # result.tickLabelFont = result.tickLabelFont * scale # result.tickLength = result.tickLength * scale # result.tickWidth = result.tickWidth * scale # result.legendFont = result.legendFont * scale # result.legendTitleFont = result.legendTitleFont * scale # result.facetHeaderFont = result.facetHeaderFont * scale # result.baseScale = result.baseScale * scale # in ggplotnim.nim # func default_scale*(): Theme = # result = Theme(titleFont: some(font(16.0)), # labelFont: some(font(12.0)), # tickLabelFont: some(font(8.0)), # tickLength: some(5.0), # tickWidth: some(1.0), # legendFont: some(font(12.0)), # legendTitleFont: some(font(12.0, bold = true)), # facetHeaderFont: some(font(8.0, alignKind = taCenter)), # baseLabelMargin: some(0.3), # baseScale: some(1.0)) #func side_by_side_theme(): Theme = # result = Theme(titleFont: some(font(10.0)), # labelFont: some(font(10.0)), # tickLabelFont: some(font(8.0)), # tickLength: some(5.0), # tickWidth: some(1.0), # gridLineWidth: some(1.0), # legendFont: some(font(10.0)), # legendTitleFont: some(font(10.0, bold = true)), # facetHeaderFont: some(font(8.0, alignKind = taCenter)), # baseLabelMargin: some(0.4), # annotationFont: some(font(8.0, family = "monospace")), # baseScale: some(1.0)) const xLabel = r"Charge [$\SI{1e3}{e^-}$]" yLabel = "Counts" runNumber = 149 chipNumber = 3 suffix = "_charge" titleSuffix = "" useTeX = true #let df = readCsv("~/phd/playground/Figs/run_149_2023-10-31_18-44-29/fe_spec_run_149_chip_3_charge.pdf.csv") let df = 
  readCsv("~/phd/playground/Figs/run_149_2023-12-03_18-26-39/fe_spec_run_149_chip_3_charge.pdf.csv")
let texts = @["μ = 917.1e3 e^-",
              "6.4 eV / 1000 e^-",
              "σ = 9.07 %",
              "χ²/dof = 0.65"]
let annot = if not useTeX: texts.join("\n")
            else: texts.join("\n").strip.escapeLatex()
echo annot
const textWidth = 455.24411 / 72.27 * 72.0
ggplot(df, aes("bins")) +
  geom_histogram(aes(y = "hist"), stat = "identity", hdKind = hdOutline) +
  geom_line(aes("bins", y = "fit"), color = some(parseHex("FF00FF"))) +
  xlab(xLabel) + ylab(yLabel) +
  annotate(annot, left = 0.02, bottom = 0.4,
           font = font(16.0, family = "monospace")) +
  ggtitle(&"Fe spectrum for run: {runNumber}{titleSuffix}") +
  themeLatex(fWidth = 0.5, width = 600, sideBySide) +
  ggsave(&"~/phd/playground/Figs/run_149_2023-10-31_18-44-29/fe_spec_run_{runNumber}_chip_{chipNumber}{suffix}.pdf",
         #width = width, height = height,
         useTeX = useTeX, standalone = useTeX)
#+end_src
If we copy this file to =~/phd/Figs/energyCalibration/run_149/= we see the
result in the thesis in the section of the 55Fe example after recompilation.
The numbers used above are a good starting point.
*NOTE*: <2023-12-04 Mon 12:13> The above served as an extremely useful tool to
work out the correct sizes. We have moved the core logic (~sideBySide~ and
~themeLatex~) into ~ggplotnim~ now.
** WONTFIX Have reference to Firmware used at CAST in each run
-> We mention them in the extended thesis, but it's not really very important,
because people won't reproduce the experiment :)
** TODO Linking to thesis & plots
When finally creating the links to the thesis, the figures, the data etc. we
should additionally create links using ~vindaar.de/phd_related_link~. That way
we can later still change the hosting location without making the links in the
written thesis outdated.
** DONE Run list (appendix)
** DONE Include exact results from Geometer measurements
- [X] Find in EDH and include, even if we don't use it.
Referenced in X-ray finger measurements.
** TODO fix up schematic of V6 Septemboard connections
** TODO fix up schematic of MM working principle
The existing schematic is not very clear. Change the drift gap behavior and
amplification gap one by reversing their drawing style. Add some alpha to
different regions to highlight amount of electrons drifting. Add a text label
with O(magnitude)
** WONTFIX insert the first LaTeX + Vega-lite based plot
This gives us an idea of how this will work. For a start I'd say base the
Vega-lite plots on Github gists. That allows for easy replacement for the
time being.
- Using Vega-Lite at least in the version that I hand in is out. Too much
  additional work for no real benefit. It would be neat, but well. In the
  future once I've implemented something to automate it maybe.
** DONE generating plots
Currently most plots we insert are either placeholders or generated by
hand. For plots that are already conveniently generated by running TPA on
data, we should probably do the following:
- add an option to ~config.toml~ to activate "pretty" (of some kind) plots,
  meaning TikZ + Vega-Lite backend for _all plots_ placed into the default
  output path when running TPA
- change the output path for all plots into a thesis local
  ~Figs/TPA_generated~ directory of sorts
** TODO implement nothing ⇒ background rate as reproducible build
This one will be a bit ambitious, but maybe it's a day of work. *If* we get
this working we're at a point where generating other plots is just a simple
shell command (call script X with args Y), as we will have all
=Calibration/DataRunsX_Y.h5= files ready somewhere.
Steps:
*** Setup Nim + all packages of fixed versions (take versions from a TOML file)
*** Have config file storing paths of raw data + output paths
*** run raw data, reco, ...
*** generate CDL datasets
*** compute logL files
*** plot background
** STARTED Upload data
We will host all the data on Zenodo, mainly.
Can also just use Zenodo, https://zenodo.org. Backblaze B2 maybe as an
alternative? We could start with simple Backblaze B2 hosting. Pricing is
competitive:
Hosting: 0.005 $/Month/GB
Download: 0.01 $/GB
https://www.backblaze.com/b2/cloud-storage-pricing.html
Which is 1.5$ per month for 300 GB and 3$ to download it. Certainly cheap
enough to try!
Data to store
*** All 2017/18 data runs (raw & reconstructed)
**** Run 2
**** Run 3
*** X-ray finger runs
*** Development runs
**** Septem with sparking
**** FADC testing runs of different settings (50 & 100 ns)
*** Outer chip 55Fe runs
I don't have these on my computers at home. They are still only at uni, I
guess... But of course I don't have access anymore <3.
*** Detector calibration files for Run 2, Run 3
*** FADC pedestal run
*** All our notes + thesis
*** CAST log files
*** Nim code? :pending:
*** Trained MLP snapshots
*** Zenodo archives
Let's list all the archives we upload to Zenodo here, including the README /
repository description (and obv. link). The annoying thing about Zenodo is
that directories are not supported. So we'll create a TAR ball for the
directories we want to upload. This only has one downside: we cannot create
the archive directly from the existing directories on our hard drive, because
they contain all sorts of other files. So, copy over the files with
directories to a temporary directory, tar it, upload. Then we can delete it.
*UPDATE*: <2024-02-19 Mon 17:51> In the end the reconstructed and misc
archives each were larger than 50GB. I asked whether the two archives could be
increased in size, but as it turned out Zenodo only allows /a single/ archive
of one user to exceed 50GB, for which they raise the size limit to 200GB. So
in the end I asked them to add everything to the raw data archive. Funnily
enough, their data limits are in real GigaBytes and not GibiBytes, as
commonly reported by things like ~ls~ or ~du~. So in the end the Zenodo
support person actually increased my limit to 210 GB, whoopsie!
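To make the GB vs. GiB point concrete: the 210 GB limit in decimal gigabytes
corresponds to only about 195.6 GiB in the binary units that ~du~ and friends
report. A quick check:

#+begin_src sh
# Convert 210 decimal gigabytes (210e9 bytes) to binary gibibytes (2^30 bytes).
awk 'BEGIN { printf "%.1f GiB\n", 210e9 / (1024 * 1024 * 1024) }'
#+end_src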
Zenodo ID: https://zenodo.org/records/10521887
**** Uploading
To upload the data, we'll use the bash script from:
https://github.com/jhpoelen/zenodo-upload
It's simple enough. Uploading via the browser failed on the first try (what a
surprise). We need to set our Zenodo token to the ~ZENODO_TOKEN~ environment
variable, before running ~zenodo_upload.sh~,
[[file:~/src/zenodo-upload/zenodo_upload.sh]].
***** Raw data upload :FAILED:
The token can be found in our password database (make sure to set it before
running!):
#+begin_src sh
./zenodo_upload.sh 10521887 ~/CastData/data/Zenodo/RawDataArchive/raw_data_gridpix_CAST_2017_18.tar --verbose
#+end_src
In progress <2024-01-17 Wed 12:28>.
-> This did not work. Ended up uploading the data to Hetzner and making it
available via a sub account.
**** Raw data archive
This archive will contain:
- 2017/18 calibration & background runs
- CDL 2019 data
- 2017 development files, specifically runs 241 and 268 (old run numbers!)
  from 2017 where we debugged the Septemboard sparking issues.
- FADC 50ns / 100ns development files?
  - [X] *FIND THEM* -> Were on my laptop.
- [ ] Outer 55Fe data. But these are still only on the lab computer at uni
  and I don't have access anymore. So we might upload them later. We never
  really used them anyway.
We will use the directory [[file:~/CastData/data/Zenodo/RawDataArchive/]] as
our temporary data archive. We'll copy every ~.tar.gz~ run file to the correct
folder structure and then tar everything at the end.
Copy over the 2017 files (all ~.tar.gz~) using rsync:
#+begin_src sh
cd ~/CastData/data/Zenodo/RawDataArchive/
rsync --dry-run -hvrPt -m --include="*/" --include="*.tar.gz" --exclude="*" ~/CastData/data/2017 .
#+end_src
where we use ~-h~ (human readable), ~-v~ (verbose), ~-r~ (recursive), ~-P~
(partial progress), ~-t~ (preserve file times) and ~-m~ (prune empty
directories). Our include / exclude logic is needed to only copy those files
that are ~.tar.gz~.
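To make the include / exclude behavior concrete, here is a toy demonstration
(hypothetical temporary paths, requires ~rsync~): only ~.tar.gz~ files are
copied, the directory structure is preserved, and thanks to ~-m~ directories
left empty by the filter are pruned.

#+begin_src sh
demo=$(mktemp -d)
mkdir -p "$demo/src/Run_1" "$demo/src/Run_2" "$demo/dst"
touch "$demo/src/Run_1/data.tar.gz" "$demo/src/Run_1/notes.txt" "$demo/src/Run_2/log.txt"
# same filter as above: descend into all dirs, take *.tar.gz, skip the rest
rsync -rt -m --include="*/" --include="*.tar.gz" --exclude="*" "$demo/src/" "$demo/dst/"
find "$demo/dst" -type f   # only Run_1/data.tar.gz survives; Run_2 is pruned
rm -rf "$demo"
#+end_src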
Adapted from [[https://stackoverflow.com/questions/11111562/rsync-copy-over-only-certain-types-of-files-using-include-option][this]] stack overflow answer. Remove the ~--dry-run~ in
practice of course.
And for 2018, 2018_2 and CDL_2019:
#+begin_src sh
cd ~/CastData/data/Zenodo/RawDataArchive/
for dir in 2018 2018_2 CDL_2019; do
    rsync --dry-run -hvrPt -m --include="*/" --include="*.tar.gz" --exclude="*" "$HOME/CastData/data/$dir" .
done
#+end_src
(Note ~$HOME~ instead of ~\~~ here: tilde does not expand inside quotes.)
These files together as a gzipped tarball are ~34 GB in size. So 16GB to spare
for the Zenodo limit, in principle. Maybe that's fine for this archive.
Create the archive:
#+begin_src sh
cd ~/CastData/data/Zenodo/RawDataArchive/
tar cf raw_data_gridpix_CAST_2017_18.tar *
#+end_src
No need to compress it again! Time to upload...
The README for the archive (will be uploaded as an Org file).
Upload to Hetzner:
#+begin_src sh
#+end_src
***** Raw data archive for 7-GridPix 'Septemboard' CAST detector
This data archive contains all raw data taken with the 'Septemboard'
detector. Raw data means it is the data produced by the
[[https://github.com/Vindaar/TOS][Timepix Operating Software (TOS)]]. This
archive assumes familiarity with the operation of the Septemboard detector at
CAST and the PhD thesis it was used in. Once the thesis is published, I will
update the Zenodo meta data to include a link to the thesis. Otherwise look at
https://phd.vindaar.de
The archive is a single TAR ball, which contains multiple directories. They
are split by the date on which they were taken and their purpose. The
directory structure is as follows:
#+begin_src
├── 2017
│   ├── CalibrationRuns
│   ├── DataRuns
│   ├── XrayFingerRuns
│   └── development
├── 2018
│   ├── CalibrationRuns
│   ├── DataRuns
│   ├── FADC_100ns_50ns_comparisons
│   │   ├── 100ns
│   │   └── 50ns
│   └── XrayFingerRuns
├── 2018_2
│   ├── BadRuns
│   ├── CalibrationRuns
│   └── DataRuns
└── CDL_2019
#+end_src
- 2017 :: Contains 'Run-2' data taken in 2017.
- CalibrationRuns: 55Fe runs from CAST - DataRuns: Background runs from CAST (contains solar tracking data) - XrayFingerRuns: Single X-ray finger run from before data taking, not directly useful. - development: Contains runs from development in 2017, in particular the two runs showing excessive sparking from before the water cooling was installed. - 2018 :: Contains 'Run-2' data taken in 2018 (up to Apr 2018). - CalibrationRuns: 55Fe runs from CAST - DataRuns: Background runs from CAST (contains solar tracking data) - XrayFingerRuns: Single X-ray finger run, taken after Run-2 data taking. Useful. - FADC: Contains laboratory runs with the detector mounted pointing towards the zenith. Multiple runs with an FADC integration time of 50ns and multiple with 100ns. - 2018_2 :: Contains all 'Run-3' data taken in 2018. - CalibrationRuns: 55Fe runs from CAST - DataRuns: Background runs from CAST (contains solar tracking data) - BadRuns: A single run to be ignored. Faulty. - CDL_2019 :: Data taken in the CAST detector lab (CDL) behind an X-ray tube. **** Reconstructed data archive - [X] Data/CalibrationRuns_Raw/Reco.h5 - [X] CDL_2019_Raw/Reco.h5 - [X] calibration-cdl-2018.h5 - [X] FakeData <- the MLP training data [[file:~/CastData/data/FakeData/]] - [X] output HDF5 files? -> At least for likelihood combinations, and limits? ~likelihood~ output files: - ~/org/resources/lhood_lnL_17_11_23_septem_fixed/ - ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/ - ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k_tracking/ - [X] Limit output files: - ~/org/resources/lhood_limits_21_11_23/ All the reconstructed files (including CDL): #+begin_src sh cd ~/CastData/data/Zenodo/RecoDataArchive/ for typ in Raw Reco; do for year in 2017 2018; do cp -v "${HOME}/CastData/data/CalibrationRuns${year}_${typ}.h5" . cp -v "${HOME}/CastData/data/DataRuns${year}_${typ}.h5" . 
    cp -v "${HOME}/CastData/data/CDL_2019/CDL_2019_${typ}.h5" CDL_2019/
  done
done
cp -v ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 CDL_2019/
cp -v ~/CastData/data/FakeData/*.h5 FakeData/
#+end_src
The ~likelihood~ output files:
#+begin_src sh
cp -v -r ~/org/resources/lhood_lnL_17_11_23_septem_fixed ~/CastData/data/Zenodo/RecoDataArchive/lhoodOutput/
cp -v -r ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k ~/CastData/data/Zenodo/RecoDataArchive/lhoodOutput/
cp -v -r ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k_tracking ~/CastData/data/Zenodo/RecoDataArchive/lhoodOutput/
#+end_src
The limit outputs (axion-electron):
#+begin_src sh
cp -v -r ~/org/resources/lhood_limits_21_11_23/ ~/CastData/data/Zenodo/RecoDataArchive/limitOutput/
cp -v -r ~/org/resources/lhood_limits_axion_photon_11_01_24/ ~/CastData/data/Zenodo/RecoDataArchive/limitOutput/
cp -v -r ~/org/resources/lhood_limits_chameleon_12_01_24/ ~/CastData/data/Zenodo/RecoDataArchive/limitOutput/
#+end_src
Create the archive:
#+begin_src sh
cd ~/CastData/data/Zenodo/RecoDataArchive/
tar cf reco_data_gridpix_CAST_2017_18.tar *
#+end_src
***** Reconstructed data archive for 7-GridPix 'Septemboard' CAST detector
This data archive contains all reconstructed data of the dataset taken with
the 'Septemboard' detector at CAST in 2017/18. That means it is the data
produced from the dataset: https://doi.org/10.5281/zenodo.10521887
The reconstruction of the data is done via the tools that are part of
[[https://github.com/Vindaar/TimepixAnalysis][TimepixAnalysis]]. This archive
assumes familiarity with the operation of the Septemboard detector at CAST
and the PhD thesis it was used in. Once the thesis is published, I will
update the Zenodo meta data to include a link to the thesis. Otherwise look
at https://phd.vindaar.de
The archive is a single TAR ball, which contains multiple directories. They
are split mainly by the type of data.
The main data files are those named ~Calibration/DataRuns_2017/8_Raw/Reco.h5~ as well as the similarly named ~CDL~ files. The naming follows the naming of the aforementioned Zenodo archive. See below the directory structure for more details. The directory structure is as follows: #+begin_src . ├── CDL_2019 │ ├── CDL_2019_Raw.h5 │ ├── CDL_2019_Reco.h5 │ └── calibration-cdl-2018.h5 ├── CalibrationRuns2017_Raw.h5 ├── CalibrationRuns2017_Reco.h5 ├── CalibrationRuns2018_Raw.h5 ├── CalibrationRuns2018_Reco.h5 ├── DataRuns2017_Raw.h5 ├── DataRuns2017_Reco.h5 ├── DataRuns2018_Raw.h5 ├── DataRuns2018_Reco.h5 ├── FakeData │ ├── fakeData_500k_0_to_3keV_decrease.h5 │ └── fakeData_500k_uniform_energy_0_10_keV.h5 ├── lhoodOutput │ ├── lhood_lnL_17_11_23_septem_fixed │ │ ├── lhood_c18_R2_crAll_sEff_0.7_lnL.h5 │ │ ├── lhood_c18_R2_crAll_sEff_0.7_lnL.log │ │ ├── .... similar other files │ └── lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k │ ├── lhood_c18_R2_crAll_sEff_0.85_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 │ ├── lhood_c18_R2_crAll_sEff_0.85_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.log │ ├── .... similar other files ├── limitOutput │ ├── lhood_limits_21_11_23 │ │ ├── lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99 │ │ │ ├── mc_limit_lkMCMC_skInterpBackground_nmc_15000_uncertainty_ukUncertain_σs_0.0281_σb_0.0028_posUncertain_puUncertain_σp_0.0500.csv │ │ │ ├── .... similar other files │ │ ├── lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99 │ │ │ ├── mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0281_σb_0.0028_posUncertain_puUncertain_σp_0.0500.csv │ │ │ ├── .... 
similar other files │ │ ├── lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_septem_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99 │ │ │ ├── .... more files │ │ ├── lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99 │ │ │ ├── .... more files │ │ ├── lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99 │ │ │ ├── .... more files │ │ ├── lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99 │ │ │ ├── .... more files │ │ ├── lhood_c18_R2_crAll_sEff_0.98_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99 │ │ │ ├── .... more files │ │ ├── lhood_c18_R2_crAll_sEff_0.98_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99 │ │ │ ├── .... more files │ │ ├── sEff_0.85_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662 │ │ │ ├── .... more files │ │ ├── sEff_0.95_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662 │ │ │ ├── .... more files │ │ ├── sEff_0.98_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662 │ │ │ ├── .... more files │ │ └── sEff_0.9_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662 │ │ │ ├── .... more files │ ├── lhood_limits_axion_photon_11_01_24 │ │ │ ├── .... more files │ └── lhood_limits_chameleon_12_01_24 │ │ │ ├── .... more files 28 directories, 249 files #+end_src - Root of the archive :: - ~CalibrationRuns2017/18_Raw~: Raw data files of the 55Fe calibration runs taken during the CAST data taking. - ~CalibrationRuns2017/18_Reco~: Fully reconstructed data of the same. 
- ~Data*~: Same schema for the actual CAST data, containing both background
  and solar tracking data.
- ~FakeData~ :: A directory of two HDF5 files containing synthetic X-ray data
  used for the training of MLPs as classifiers.
- ~lhoodOutput~ :: A directory containing the output files of the
  ~likelihood~ program, part of ~TimepixAnalysis~. That is, files containing
  clusters passing cuts of different setups of classifiers and vetoes. These
  are the files needed as inputs for background rate and limit calculations.
- ~limitOutput~ :: A directory of output results from limit calculations.
**** Miscellaneous files
- calibration -> What calibration files?
- Solar flux files? (differential and emission?) -> Not strictly speaking
  needed, how to compute them is explained.
- trained MLP snapshots? -> Yes. -> ~/org/resources/nn_devel_mixing The
  entire directory contains all tests. Needs ~journal.org~ of course
- ?
Trained MLPs:
#+begin_src sh
cp -v -r ~/org/resources/nn_devel_mix/* ~/CastData/data/Zenodo/MiscArchive/MLP_snapshots/
#+end_src
And the entire ~resources~ directory. Note that this needs to be cleaned to
not contain data we shouldn't publish.
#+begin_src sh
cp -v -r ~/org/resources ~/CastData/data/Zenodo/MiscArchive/
#+end_src
-> Well, if we want to do that, it won't fit into the 50 GB limit!
- [X] Removed all personal or unrelated stuff
Now the ~phd/resources~ directory:
#+begin_src sh
cp -v -r ~/phd/resources ~/CastData/data/Zenodo/MiscArchive/phdResources
#+end_src
***** Miscellaneous files for the 7-GridPix 'Septemboard' CAST detector
This data archive contains a large amount of miscellaneous data related to
the 2017/18 data taking campaign of the 7-GridPix 'Septemboard' detector at
CAST. That is, data related to the analysis of the following raw dataset:
https://doi.org/10.5281/zenodo.10521887
This archive assumes familiarity with the operation of the Septemboard
detector at CAST and the PhD thesis it was used in.
Once the thesis is published, I will update the Zenodo meta data to include a
link to the thesis. Otherwise look at https://phd.vindaar.de
In particular though, to understand the context of these files it is
mandatory to read the extended version of the PhD thesis as well as the
additional notes (~StatusAndProgress~ as well as the ~journal~ linked under
the URL above).
Note that the vast majority of these files are likely not of significant
interest, unless someone wishes to understand certain studies that were
done. If, however, someone reads one of these files and wishes to look into
any of the referenced data files, I prefer to make them available.
One particular set of interesting data is contained in the ~MLP_snapshots~
directory. It contains all snapshots of every MLP I ever trained during the
work on my thesis. This includes the best performing MLP I eventually used
for the results in my thesis.
The other two directories contained are ~phdResources~ and ~orgResources~.
They are named this way because they represent the ~resources~ directories of
my ~phd~ git repository and of my ~org~ git repository (the latter is a
repository for miscellaneous notes and things).
See the ~list_of_files.txt~ file for everything contained in this archive.
**** Log files
CAST log files, allowed?
-> <2024-02-07 Wed 11:45> Still no answer from Theodoros
**** Notes
All personal notes?
-> These don't need to be on Zenodo? Or should they?
**** Combined README of data archive
***** Data archive for 7-GridPix 'Septemboard' CAST detector
This data archive contains datasets related to the 7-GridPix 'Septemboard'
detector used at the CERN Axion Solar Telescope (CAST) experiment in 2017/18.
This archive assumes familiarity with the operation of the Septemboard
detector at CAST and the PhD thesis it was used in. Once the thesis is
published, I will update the Zenodo meta data to include a link to the
thesis. For the time being see https://phd.vindaar.de
The archive is split into three different files.
For each file an explanation follows below. - raw_data_gridpix_CAST_2017_18.tar :: A single TAR ball of the entire raw data recorded at CAST (and related). - reco_data_gridpix_CAST_2017_18.tar :: A single TAR ball of the entire reconstructed data computed from the raw data. - miscResourcesArchive.tar.gz :: A single gzipped TAR ball of a large number of miscellaneous files. As the latter two archives contain a large number of files, a ~*_list_of_files.txt~ file is provided, which contains a ~tree~ view of the entire TAR ball. ****** raw_data_gridpix_CAST_2017_18.tar - Raw data archive This file contains all raw data recorded with the aforementioned detector. Raw data means it is the data produced by the [[https://github.com/Vindaar/TOS][Timepix Operating Software (TOS)]]. The archive is a single TAR ball, which contains multiple directories. They are split by the date in which they were taken and their purpose. The directory structure is as follows: #+begin_src ├── 2017 │ ├── CalibrationRuns │ ├── DataRuns │ ├── XrayFingerRuns │ └── development ├── 2018 │ ├── CalibrationRuns │ ├── DataRuns │ ├── FADC_100ns_50ns_comparisons │ │ ├── 100ns │ │ └── 50ns │ └── XrayFingerRuns ├── 2018_2 │ ├── BadRuns │ ├── CalibrationRuns │ └── DataRuns └── CDL_2019 #+end_src - 2017 :: Contains 'Run-2' data taken in 2017. - CalibrationRuns: 55Fe runs from CAST - DataRuns: Background runs from CAST (contains solar tracking data) - XrayFingerRuns: Single X-ray finger run from before data taking, not directly useful. - development: Contains runs from development in 2017, in particular the two runs showing excessive sparking from before the water cooling was installed. - 2018 :: Contains 'Run-2' data taken in 2018 (up to Apr 2018). - CalibrationRuns: 55Fe runs from CAST - DataRuns: Background runs from CAST (contains solar tracking data) - XrayFingerRuns: Single X-ray finger run, taken after Run-2 data taking. Useful. 
- FADC: Contains laboratory runs with the detector mounted pointing towards the zenith. Multiple runs with an FADC integration time of 50ns and multiple with 100ns. - 2018_2 :: Contains all 'Run-3' data taken in 2018. - CalibrationRuns: 55Fe runs from CAST - DataRuns: Background runs from CAST (contains solar tracking data) - BadRuns: A single run to be ignored. Faulty. - CDL_2019 :: Data taken in the CAST detector lab (CDL) behind an X-ray tube. ****** reco_data_gridpix_CAST_2017_18.tar - Reconstructed data archive This data archive contains all reconstructed data of the dataset taken with the 'Septemboard' detector at CAST in 2017/18. The reconstruction of the data is done via the tools part of [[https://github.com/Vindaar/TimepixAnalysis][TimepixAnalysis]]. It is a single TAR ball, which contains multiple directories. They are split by the type of data mainly. The main data files are those named ~Calibration/DataRuns_2017/8_Raw/Reco.h5~ as well as the similarly named ~CDL~ files. The naming follows that of the raw data archive. See below the directory structure for more details. The directory structure is as follows: #+begin_src . ├── CDL_2019 │ ├── CDL_2019_Raw.h5 │ ├── CDL_2019_Reco.h5 │ └── calibration-cdl-2018.h5 ├── CalibrationRuns2017_Raw.h5 ├── CalibrationRuns2017_Reco.h5 ├── CalibrationRuns2018_Raw.h5 ├── CalibrationRuns2018_Reco.h5 ├── DataRuns2017_Raw.h5 ├── DataRuns2017_Reco.h5 ├── DataRuns2018_Raw.h5 ├── DataRuns2018_Reco.h5 ├── FakeData │ ├── fakeData_500k_0_to_3keV_decrease.h5 │ └── fakeData_500k_uniform_energy_0_10_keV.h5 ├── lhoodOutput │ ├── lhood_lnL_17_11_23_septem_fixed │ │ ├── lhood_c18_R2_crAll_sEff_0.7_lnL.h5 │ │ ├── lhood_c18_R2_crAll_sEff_0.7_lnL.log │ │ ├── .... 
similar other files │ └── lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k │ ├── lhood_c18_R2_crAll_sEff_0.85_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 │ ├── lhood_c18_R2_crAll_sEff_0.85_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.log │ ├── .... similar other files ├── limitOutput │ ├── lhood_limits_21_11_23 │ │ ├── lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99 │ │ │ ├── mc_limit_lkMCMC_skInterpBackground_nmc_15000_uncertainty_ukUncertain_σs_0.0281_σb_0.0028_posUncertain_puUncertain_σp_0.0500.csv │ │ │ ├── .... similar other files │ │ ├── lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99 │ │ │ ├── mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0281_σb_0.0028_posUncertain_puUncertain_σp_0.0500.csv │ │ │ ├── .... similar other files │ │ ├── lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_septem_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99 │ │ │ ├── .... more files │ │ ├── Similar directories │ ├── lhood_limits_axion_photon_11_01_24 │ │ │ ├── .... more files │ └── lhood_limits_chameleon_12_01_24 │ │ │ ├── .... more files 28 directories, 249 files #+end_src - Root of the archive :: - ~CalibrationRuns2017/18_Raw~: Raw data files of the 55Fe calibration runs taken during the CAST data taking. - ~CalibrationRuns2017/18_Reco~: Fully reconstructed data of the same. - ~Data*~: Same schema for the actual CAST data, containing both background and solar tracking data. - ~FakeData~ :: A directory of two HDF5 files containing synthetic X-ray data used for the training of MLPs as classifiers. - ~lhoodOutput~ :: A directory containing a large number of files containing the output files of the ~likelihood~ program part of ~TimepixAnalysis~. 
That is, files containing the clusters passing the cuts of the different setups of classifiers and vetoes. These are the files needed as inputs for background rate and limit calculations.
- ~limitOutput~ :: A directory of output results from limit calculations.
See the file ~reco_data_gridpix_CAST_2017_18_list_of_files.txt~ for a list of all files contained.
****** miscResourcesArchive.tar.gz - Miscellaneous files
This data archive contains a large amount of miscellaneous data related to the 2017/18 data taking campaign of the 7-GridPix 'Septemboard' detector at CAST. In particular, to understand the context of the files stored in this TAR ball, it is mandatory to read the extended version of the PhD thesis as well as the additional notes (~StatusAndProgress~ as well as the ~journal~, available at the URL linked at the top). Note that the vast majority of these files are likely not of significant interest, unless someone wishes to understand certain studies that were done. If, however, someone reads one of these files and wishes to look into any of the referenced data files, I prefer to make them available.
One particularly interesting set of data is contained in the ~MLP_snapshots~ directory. It contains all snapshots of every MLP I ever trained during the work on my thesis. This includes the best-performing MLP I eventually used for the results in my thesis.
The other two directories contained are ~phdResources~ and ~orgResources~. They are named this way as they represent the ~resources~ directories of my ~phd~ git repository and my ~org~ git repository (the latter is a repository for miscellaneous notes and things).
See the file ~miscDataArchive_list_of_files.txt~ for a list of all files contained.
** TODO use some package for abbreviations
** TODO update all links to code
- [ ] There are especially many references (and they are needed) in the reconstruction and calibration chapters, obviously. Currently we use some links in footnotes to code on GitHub. 1.
replace the links to the master branch with permalinks to a git tag for my thesis 2. add citations (*OR* find a way to add a "secondary" bibliography only for code references?) Apparently this is possible, either using biblatex directly or using a package called =multibib= https://www.overleaf.com/learn/latex/Questions/Creating_multiple_bibliographies_in_the_same_document
** TODO Adjust spacing in itemize etc. environments
Can be done using the ~enumitem~ package like here: https://tex.stackexchange.com/questions/10684/vertical-space-in-lists
** TODO Use something like ~isodate~ to format dates
** TODO Define environment variables?
We could define a variable like ~TPA~, set to the path of the TimepixAnalysis directory, for convenience in the shell snippets that appear throughout and run code. At the very least this should not really be needed for most things! Instead what we should do is to make sure we:
a) before starting any work on the whole data analysis pipeline, first compile every program we will use. Check the code of this thesis for shell code snippets that contain ~nim c~ commands!
b) add all binaries we use to the TPA ~bin~ directory to have them in our PATH.
*Note*: we can even make use of ~set/getEnv~ in some of our tools themselves to use the environment variables. This could be useful to put into ~projectDefs.nim~. Having a CT variable is great, but adding some helpers to get runtime versions from env variables could be very useful and simplify things.
** TODO Think about signal-like or X-ray-like
We currently prefer "signal-like" in the background rate chapter. But we should think about whether we actually prefer that over the whole course of the thesis or not.
** TODO Language about likelihood method, probability density
Make sure our language is consistent about the relationship between the individual probability densities of each geometric property and the full likelihood distribution. Do not call an individual property distribution a likelihood distribution!
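The distinction can be made explicit in one line: the likelihood combines the individual probability densities, but no single one of them is "the likelihood". A sketch (which concrete geometric properties $x_i$ enter the product is defined where the method is introduced):
#+begin_src latex
% The likelihood combines the individual probability densities P_i of
% each geometric property x_i; only their product is the likelihood.
\begin{align*}
  \mathcal{L}(\vec{x})    &= \prod_i P_i(x_i) \\
  \ln\mathcal{L}(\vec{x}) &= \sum_i \ln P_i(x_i)
\end{align*}
#+end_src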
** DONE About gas gain variation and ingrid properties
Note: now that we have some understanding about where the changes in the gas gain come from (and proof that it _is_ a real gas gain change and not electronics, via FADC amplitudes), a very important and related concept is how such changes affect the properties of all clusters. In the CDL data we have those ridgeline plots (appendix [[#sec:appendix:fit_by_run_justification]]) comparing the properties of different CDL runs with very different gas gains, showing very little difference. It *is worth a thought* whether we might want to have a very short section in an earlier part (where we talk about variation and its causes in the first place, for example) where we provide some short "proof" that gas gain variations leave the properties _mostly_ unchanged.
** Points of contention
At the moment <2021-07-31 Sat 11:36> my biggest point of uncertainty is the whole detector calibration part + how this plays into a software framework. Difficult to come up with a good layout at this point. Will be easier once more notes are added in each part, I think.
** DONE Structure
I'm very lost <2022-08-22 Mon 13:53> about how to structure the thesis at this point. :( Maybe it's easier to think of the ingredients for the limit calculation as tips of strands. Follow each back to its introduction. But: should each of these simply be its own chapter? E.g. axion image:
- raytracing
- solar axion flux
- axion models
So therefore have a chapter "Deriving the expected axion image" that starts from:
- pick a solar model & axion model
- compute the expected flux
- use raytracing (and explain it) to compute the image?
- but: needs the average absorption depth to know at what point to even compute something!
Well, this _can_ work, as long as the setup & detector are explained _before_ this chapter. However: none of that makes any sense without the context of the limit calculation! Why else would one need to compute such an image etc.?
Instead, we could also start part 2 (or whatever) of the thesis as "limit calculation" and have this be a huge part that first introduces the math of what & how to compute a limit, and *then* introduces how one ends up at the necessary inputs. If I do it this way, then the first part of the thesis is purely:
- axion theory generically
- axion helioscopes
- gaseous detector physics & micromegas
- the septemboard detector
- deployment at CAST
- data analysis to an extent? to what extent though?
Part 2: limit calculation
- how to compute a limit, method
- ingredients, show them.
- then: each ingredient, how to derive it
- finally:
  - put all ingredients together, short overview
  - compute
** Current thoughts about structure
<2022-11-03 Thu 10:32>: So, as I'm currently finishing up the chapter about TOS and the Timepix calibrations, I'm unclear about how to structure the next steps:
- real detector calibrations used, Septemboard FSRs, thresholds etc. When performed, and link to the appendix containing all of them.
- scintillator calibrations
- FADC pedestal runs
- Data reconstruction must have an introduction that motivates why we even compute geometric properties and so on. Comparison of events etc.
- reconstruction before deployment at CAST?
** DONE Use ~booktabs~ everywhere
Simply done by adding ~:booktabs t~ to the ~#+ATTR_LATEX:~ above a table! https://orgmode.org/manual/Tables-in-LaTeX-export.html (Note that it isn't really clear from the documentation that it needs an argument, as it is otherwise interpreted as receiving ~nil~, I presume.)
** STARTED HTML export of the thesis
Examples of websites with great HTML layout that I might want to copy:
- https://mpv.io/manual/master
- a typical mdbook / nimibook
- ar5iv (the HTML5 access to arXiv)
- https://news.ycombinator.com/item?id=34050835 There are some interesting links here, e.g.
the Peter Scholze page about the computer math proof website
- This is also a great looking site: https://cpu.land/how-to-run-a-program
- Another simple but nice looking page: https://jaylittle.com/
- Create a 'title page', maybe similar to the style common for ML releases of papers? A picture, an abstract, some short and simple overview / results? Like this https://nihalsid.github.io/mesh-gpt/ based on https://nerfies.github.io/ Or https://qtransformer.github.io/ from https://jonbarron.info/ Or https://voyager.minedojo.org/
- CAST, detector, reconstruction, background, MLP, training synthetic, vetoes, limit calculation, results
Idea: In the HTML export version of the thesis, it might be a good idea to have all :noexport: sections folded by default. So if one goes to the page of a chapter (or section), all sections are by default not folded _except_ the :noexport: ones. That keeps things clean by default, especially given all the code sections (which should probably be in an _extra_ fold by default).
Another note: For the HTML version, having direct inline links for certain references is of course a plus. No need to hide them behind a citation, for example. E.g. when linking to the relevant parts of the code, we can just make some piece of text clickable!
The HTML export also requires better handling of subfigures that are side by side.
In general we made good progress on the HTML export today thanks to GPT4 helping us to auto convert all PDFs to SVGs on export!
- [ ] Extend the HTML PDF->SVG conversion such that any PDF larger than some cutoff will be converted to PNG instead (i.e. for those 19 MB SVGs that freeze Brave)
- [ ] Fix the PDF->SVG conversion logic to handle ~file:~ type links!
In terms of the table of contents: It would be nice to have the ToC for the entire thesis on the left, but something like this: https://agraphicsguynotes.com/posts/fiber_in_cpp_understanding_the_basics/ on the right for the minitoc of each chapter!
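The PDF->SVG size-cutoff TODO above could be sketched roughly like this (a sketch only, not the actual export hook: the use of ~pdftocairo~ from poppler and the 10 MiB cutoff are assumptions):
#+begin_src sh
#!/usr/bin/env bash
# Sketch: convert PDF figures to SVG, but fall back to PNG for PDFs
# above a size cutoff (very large SVGs freeze the browser). Cutoff
# value and converter (pdftocairo, from poppler) are assumptions.
set -euo pipefail
cutoff=$((10 * 1024 * 1024))  # 10 MiB, tune as needed
for f in "$@"; do
  size=$(stat -c%s "$f")
  if [ "$size" -gt "$cutoff" ]; then
    pdftocairo -png -singlefile "$f" "${f%.pdf}"  # writes ${f%.pdf}.png
  else
    pdftocairo -svg "$f" "${f%.pdf}.svg"
  fi
done
#+end_src
Called with a list of PDF paths; with no arguments it is a no-op.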
*** DONE Attempt at automatic folding of ~:extended:~ sections
The way we handle automatic folding of sections is by relying on how Org mode exports tags to HTML. It looks something like the following:
#+begin_src html
<h5 id="org4ff57a5"><span class="section-number-5">10.2.2.1.</span> Table of fit lines   <span class="tag"><span class="extended">extended</span></span></h5>
<div...>
#+end_src
With that in mind, I've (with GPT4o) written some JS that marks sections with extended tags as foldable and thus hides them. This works surprisingly well. See the added code in [[file:~/.emacs.d/myinit.org]] related to the added HTML preface script. It also needs the following two CSS fields:
#+begin_src css
.folded-content { display: none; }
.foldable-header { cursor: pointer; }
#+end_src
**** Full working example of folding sections :noexport:
#+begin_src html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
<style>
.folded-content { display: none; }
.foldable-header { cursor: pointer; }
</style>
</head>
<body>
<h5 id="org4ff57a5"><span class="section-number-5">10.2.2.1.</span> Table of fit lines   <span class="tag"><span class="extended">extended</span></span></h5>
<div>Content to be folded
<p>
what is going on
</p>
</div>
<h5 id="org47a6"><span class="section-number-5">10.2.2.2.</span> Another heading</h5>
<div>More content</div>
<script>
document.addEventListener('DOMContentLoaded', function () {
  var headers = document.querySelectorAll('h1 .extended, h2 .extended, h3 .extended, h4 .extended, h5 .extended, h6 .extended');
  headers.forEach(function (header) {
    var foldableHeader = header.closest('h1, h2, h3, h4, h5, h6');
    if (foldableHeader) {
      foldableHeader.classList.add('foldable-header');
      var nextElement = foldableHeader.nextElementSibling;
      var contentToFold = [];
      while (nextElement && !nextElement.matches('h1, h2, h3, h4, h5, h6')) {
        contentToFold.push(nextElement);
nextElement = nextElement.nextElementSibling; } contentToFold.forEach(function (element) { element.classList.add('folded-content'); }); foldableHeader.addEventListener('click', function () { contentToFold.forEach(function (element) { if (element.style.display === 'none' || element.style.display == '') { element.style.display = 'block'; } else { element.style.display = 'none'; } }); }); } }); }); </script> </body> </html> #+end_src **** Attempts we didn't end up using :noexport: *NOT USED*: #+begin_src emacs-lisp (defun my/org-html-headline-with-folding (headline contents info) "Custom function to add folding functionality to headlines with :extended: tag." (if (member "extended" (org-element-property :tags headline)) (let* ((text (org-export-data (org-element-property :title headline) info)) (level (org-element-property :level headline)) (contents (or contents ""))) (format "<details><summary>%s</summary>%s</details>" text contents)) (org-html--headline headline contents info))) (with-eval-after-load 'ox-html (advice-add 'org-html-headline :override #'my/org-html-headline-with-folding)) (advice-remove 'org-html-headline #'my/org-html-headline-with-folding) ;;(advice-add 'org-html-headline :override #'my/org-html-headline-with-folding) #+end_src #+RESULTS: *NOT USED*: #+begin_src emacs-lisp (require 'ox-html) (defun my/org-html-headline (headline contents info) "Custom function to handle headlines with :folded: tag for HTML export." (let ((tags (org-element-property :tags headline))) (if (member "folded" tags) (format "<details><summary>%s</summary>%s</details>" (org-export-data (org-element-property :title headline) info) (or contents "")) (org-html--headline headline contents info)))) (org-export-define-derived-backend 'my-html 'html :translate-alist '((headline . my/org-html-headline))) #+end_src *NOT USED*: #+begin_src emacs-lisp (defun my/org-folded-preprocess (backend) "Preprocess org file to convert :extended: tag into HTML <details> elements. 
This function will only be called during HTML export."
  (when (org-export-derived-backend-p backend 'html)
    (goto-char (point-min))
    (while (re-search-forward "^\\*+ \\(.*\\) :extended:$" nil t)
      (let ((level (org-current-level)))
        (replace-match (format "<details><summary>\\1</summary>\n")))
      (org-end-of-subtree t t)
      (insert "\n</details>"))))
(add-hook 'org-export-before-processing-hook #'my/org-folded-preprocess)
(remove-hook 'org-export-before-processing-hook #'my/org-folded-preprocess)
#+end_src
#+RESULTS:
| my/org-folded-preprocess | my/org-process-subfigure-dsl | my/org-html-export-figs |
*** DONE Bibliography for HTML
We should use ~org-ref~ to do our citations and cross links in this document: https://github.com/jkitchin/org-ref It handles exporting a bibliography to HTML too! See for example: https://emacs.stackexchange.com/questions/62236/org-ref-exporting-org-file-to-html-with-its-style-exactly-same-as-a-specific-sc and https://github.com/jkitchin/org-ref/issues/319 and the manual: https://raw.githubusercontent.com/jkitchin/org-ref/master/org-ref.org
*IMPORTANT*: When exporting to HTML to get the bibliography, we need to use the ~org-ref~ export explicitly! That is ~C-c C-e r h~ instead of 'Export to html' via ~h~!
*** DONE Multiple files from thesis
For the HTML export we'll likely want to have all sections on separate pages. In the end we decided to produce multiple HTML files from a single HTML file using a small Nim program. This now handles all of the annoying little things that I couldn't get to work in a reasonable time. The program: [[file:code/split_thesis_html.nim]] is simply run, pointing to the *single exported HTML* file produced by ~C-c C-e r h~ (export to HTML via ~org-ref~). The latter is important for the bibliography and citations, of course. However, because of our subfigure DSL we have to hack around _quite a bit_.
We use ~advice-add~ with ~:around~ as seen in the below section to first perform our DSL replacement, _then_ let ~org-ref~ do its thing and finally walk the document again to put all ~<figure>~ tags into a ~#+BEGIN_EXPORT HTML~ block. This is required, because if we emit the expanded DSL into such a block, ~org-ref~ just ignores it. We still have to do some postprocessing in the above-mentioned Nim script, however!
**** DONE Making org-ref citations work in subfigures
:PROPERTIES:
:CUSTOM_ID: sec:todos:html_export:org_ref_subfigure_dsl
:END:
*EXPERIMENT*: We will try to use ~advice-add~ to run our subfigure DSL before the org-ref export functions are run. When calling ~org-export-dispatch~ (C-c C-e) and selecting any org-ref function, it calls e.g. ~org-ref-export-to-html~. This looks like: [[file:~/.emacs.d/elpa/org-ref-20230921.1327/org-ref-export.el]]
#+begin_src emacs-lisp
(defun org-ref-export-to-html (&optional async subtreep visible-only body-only info)
  "Export the buffer to HTML and open.
See `org-export-as' for the meaning of ASYNC SUBTREEP VISIBLE-ONLY BODY-ONLY and INFO."
  (org-ref-export-to 'html async subtreep visible-only body-only info))
#+end_src
Inside of ~org-ref-export-to~ it calls
#+begin_src emacs-lisp
(org-ref-process-buffer backend subtreep)
#+end_src
which performs the actual org-ref replacements etc. Only at the end does it call:
#+begin_src emacs-lisp
(org-open-file (org-export-to-file backend export-name async subtreep visible-only body-only info) 'system)
#+end_src
where, *importantly*, ~org-export-to-file~ is the function in which the functions registered via
#+begin_src emacs-lisp
; (add-hook 'org-export-before-processing-hook 'my/org-process-subfigure-dsl)
#+end_src
on the ~org-export-before-processing-hook~ are being run! This implies that our subfigure DSL is being evaluated after org-ref is through, and so we cannot turn our subfigure DSL ~cite~ commands into the ~org-ref~ syntax. What do we need to do? We need to make sure that our subfigure logic is run *before*.
And we'll use ~advice-add~ to do so. #+begin_src emacs-lisp (defun replace-html-tags-in-region (start stop) "Replace @@html:( some text ) @@ with some text in the region from START to STOP. Return the total number of characters removed. NOTE: The space after `)` and before `@@` MIGHT EXIST in the file!" (interactive "r") ;; Make the function interactive to select the region (save-excursion (goto-char start) (let ((pattern "@@html:\\([^@]+\\)@@") ;; Define the regex pattern (total-removed 0)) ;; Initialize the counter (while (re-search-forward pattern stop t) (let ((match-length (- (match-end 0) (match-beginning 0))) (replacement-length (length (match-string 1)))) (replace-match (match-string 1) nil nil) (setq total-removed (+ total-removed (- match-length replacement-length))))) (message "Total characters removed: %d" total-removed) total-removed))) (defun wrap-html-figures-in-org () "Wrap <figure> HTML blocks inside `#+BEGIN_EXPORT HTML ... #+END_EXPORT` in the current Org buffer." (interactive) (save-excursion (goto-char (point-min)) (condition-case err (while (re-search-forward "<figure class=\"figure-wrapper\"" nil t) (let ((start (match-beginning 0)) (figure-count 1)) ;; Move forward to find the matching closing </figure> tag (while (and (> figure-count 0) (re-search-forward "</?figure" nil t)) (if (string= (match-string 0) "<figure") (setq figure-count (1+ figure-count)) (setq figure-count (1- figure-count)))) (if (zerop figure-count) (let ((end (match-end 0))) ;; First remove any possible @@html:( ... ) @@ org-ref statements and leave ... 
(setq end (- end (replace-html-tags-in-region start end))) ;; Insert the export block delimiters (goto-char (+ end 1)) ;; We match `</figure`, so need to insert after closing `>` (insert "\n#+END_EXPORT") (goto-char start) (insert "#+BEGIN_EXPORT HTML\n") ;; Move point to end to avoid reprocessing the same block (goto-char (+ end (length "\n#+END_EXPORT")))) (error "Unmatched <figure> tags")))) (error (let ((error-position (point))) (switch-to-buffer (current-buffer)) (goto-char error-position) (message "Error: %s" (error-message-string err))))))) (defun org-html-subfigure-dsl-process (fn backend &optional subtreep) "Wraps around the `org-ref-process-buffer`. The idea is that we: - before org-ref does its thing, apply the regular subfigure DSL processing (but without inserting #+BEGIN_EXPORT statements for HTML) - perform the org-ref processing so that the cite:&foo statements are handled correctly - then, wrap the resulting code of the <figure> ... </figure> statements in a #+BEGIN_EXPORT HTML .. #+END_EXPORT environment " ;; (switch-to-buffer (current-buffer)) ;; open buffer to investigate (my/org-process-subfigure-dsl backend subtreep) ;; process subfigure DSL ;;(debug) (apply fn backend subtreep) ;; call the org-ref processing ;;(debug) ;;(switch-to-buffer (current-buffer)) ;; open buffer again to investigate (wrap-html-figures-in-org) ;;(debug) ) ;; wrap the figure envrionments up #+end_src #+begin_src emacs-lisp (advice-add 'org-ref-process-buffer :around #'org-html-subfigure-dsl-process) #+end_src #+begin_src emacs-lisp ;; (advice-remove 'org-ref-process-buffer 'my/org-process-subfigure-dsl) #+end_src This makes it so that when the org-ref function is called it *first* performs the subfigure DSL replacement and *then* the org-ref code. That should give us precisely what we want. *UPDATE*: <2024-06-18 Tue 19:47> The above *DOES WORK* now! 
:partying_face: **** Old ideas about multiple files from single Org I found the following code here: https://stackoverflow.com/a/65428989 which is slightly modified to kill the correct buffer (thanks GPT) #+begin_src emacs-lisp (defun my-org-export-each-level-1-headline-to-html (&optional scope) (interactive) (org-map-entries (lambda () (let* ((title (car (last (org-get-outline-path t)))) (dir (file-name-directory buffer-file-name)) (filename (concat dir title ".html")) (current-buffer (current-buffer))) (org-narrow-to-subtree) (org-html-export-as-html) (write-file filename) ;; switch to current export buffer and kill it (switch-to-buffer (other-buffer current-buffer 1)) (kill-current-buffer) (switch-to-buffer current-buffer) (widen))) "LEVEL=1" scope)) #+end_src While this works (aside from having issues with all the figures if we just copy this file elsewhere to test), it leaves open the issue about links to other sections that thus end up in other documents! How should we handle this? By hand by just parsing the HTML and replacing all ~[BROKEN LINK: section]~ by an ~href~ ? GPT4 has the following to say about it: and it proposed the following code to automatically perform the replacement: #+begin_src emacs-lisp (defun my-org-export-each-level-1-headline-to-html (&optional scope) (interactive) ;; 1. Build a mapping of CUSTOM_ID to filenames (let ((id-to-filename (mapcar (lambda (headline) (let ((custom-id (org-entry-get (point) "CUSTOM_ID"))) (when custom-id (cons custom-id (concat (file-name-directory buffer-file-name) (car (last (org-get-outline-path t headline))) ".html"))))) (org-map-entries (lambda () (org-heading-components)) "LEVEL=1" scope)))) (org-map-entries (lambda () (let* ((title (car (last (org-get-outline-path t)))) (dir (file-name-directory buffer-file-name)) (filename (concat dir title ".html")) (current-buffer (current-buffer))) (org-narrow-to-subtree) ;; 2. 
Replace internal links with corresponding filenames (goto-char (point-min)) (while (re-search-forward org-link-bracket-re nil t) (let* ((desc (match-string 4)) (path (match-string 2)) (new-filename (cdr (assoc path id-to-filename)))) (when new-filename (replace-match (format "[[%s][%s]]" new-filename (or desc path)))))) (org-html-export-as-html) (write-file filename) (switch-to-buffer (other-buffer current-buffer 1)) (kill-current-buffer) (switch-to-buffer current-buffer) (widen))) "LEVEL=1" scope))) #+end_src Alternatively it proposed to write a custom Org export backend that derives from the HTML backend. That looks pretty elegant actually. #+begin_src emacs-lisp ;;(org-export-define-derived-backend 'custom-html 'html ;; :translate-alist '((link . custom-html-link-transcoder))) (defun custom-html-link-transcoder (link contents info) "Transcode a LINK from Org to custom HTML." (let ((type (org-element-property :type link)) (path (org-element-property :path link)) (raw-path (org-element-property :raw-link link))) (cond ;; For CUSTOM_ID links, replace them with corresponding filenames. ((and (string= type "id") (assoc path (org-export-get-id-to-filename-alist info))) (format "<a href=\"%s.html\">%s</a>" (cdr (assoc path (org-export-get-id-to-filename-alist info))) contents)) ;; Default handling for other links (t (org-html-link link contents info))))) (defun custom-html-export-to-separate-files () "Export all level 1 headings in the current buffer to separate HTML files." (interactive) ;; Build a mapping of CUSTOM_IDs to filenames based on level 1 headlines. 
(let* ((base-dir (file-name-directory (buffer-file-name))) (id-to-filename (mapcar (lambda (headline) (let ((id (org-element-property :CUSTOM_ID headline))) (when id (cons id (concat base-dir (org-element-property :raw-value headline) ".html"))))) (org-element-map (org-element-parse-buffer) 'headline (lambda (hl) hl) nil nil 'headline t)))) (org-map-entries (lambda () (let ((title (nth 4 (org-heading-components)))) (org-narrow-to-subtree) (org-export-to-file 'custom-html (concat base-dir title ".html")) (widen))) "LEVEL=1"))) (org-export-define-derived-backend 'custom-html 'html :menu-entry '(?X "Export to Custom HTML" ((?h "to html" custom-html-export-to-separate-files)))) (defun custom-html-export-menu-entry () "Menu entry for the custom HTML export." (interactive) (org-export-backend-dispatch)) ;;(defun custom-html-export-menu-entry () ;; "Menu entry for the custom HTML export." ;; (interactive) ;; (org-export-dispatch)) ;; Ensure the key binding is correctly defined (define-key org-mode-map (kbd "C-c e H") 'custom-html-export-menu-entry) #+end_src #+begin_src emacs-lisp (defun custom-html-link-transcoder (link contents info) "Transcode a LINK from Org to custom HTML." (let ((type (org-element-property :type link)) (path (org-element-property :path link)) (raw-path (org-element-property :raw-link link))) (cond ;; For CUSTOM_ID links, replace them with corresponding filenames. ((and (string= type "id") (assoc path (org-export-get-id-to-filename-alist info))) (format "<a href=\"%s.html\">%s</a>" (cdr (assoc path (org-export-get-id-to-filename-alist info))) contents)) ;; Default handling for other links (t (org-html-link link contents info))))) (defun custom-html-export-to-separate-files (&optional async subtreep visible-only body-only ext-plist) ;;(plist filename pub-dir extra) "Export all level 1 headings in the current buffer to separate HTML files. PLIST is the property list for the export. FILENAME is the name of the output file. 
PUB-DIR is the publishing directory." (let* ((base-dir (file-name-directory (buffer-file-name))) (id-to-filename (mapcar (lambda (headline) (let ((id (org-element-property :CUSTOM_ID headline))) (when id (cons id (concat base-dir (org-element-property :raw-value headline) ".html"))))) (org-element-map (org-element-parse-buffer) 'headline (lambda (hl) hl) nil nil 'headline t)))) (org-map-entries (lambda () (let ((title (nth 4 (org-heading-components)))) (org-narrow-to-subtree) (org-export-to-file 'custom-html (concat base-dir title ".html") nil nil nil plist pub-dir) (widen))) "LEVEL=1"))) (org-export-define-derived-backend 'custom-html 'html :menu-entry '(?X "Export to Custom HTML" ((?h "to html" custom-html-export-to-separate-files)))) (defun custom-html-export-menu-entry () "Menu entry for the custom HTML export." (interactive) (org-export-dispatch)) ;; Ensure the key binding is correctly defined (define-key org-mode-map (kbd "C-c e H") 'custom-html-export-menu-entry) #+end_src *NOTE*: This is currently broken: Adding it to the Org export dispatch window doesn't work yet, but more importantly: - [ ] The links are not actually working right now. So we'll need to fix it! But it's a good start and gives us the idea on how to handle this! - [ ] We must use the CUSTOM_ID as a file name instead of the title of the section! The title is useless, as it contains ~/~ etc that cause trouble producing files! - [ ] The produced HTML files do not have a table of content on the left hand side! Anyway we want a table of content for the full thesis on the left! 
Here is the Org manual about adding a custom backend: https://orgmode.org/manual/Adding-Export-Back_002dends.html See also the syntax for referencing something in a separate file: https://www.gnu.org/software/emacs/manual/html_node/org/Search-Options.html *Example*: [[file:~/org/Doc/StatusAndProgress.org::#sec:list_different_uncertainties]] *** Org mode subfigures for LaTeX I found this here (https://www.mail-archive.com/emacs-orgmode@gnu.org/msg140190.html): #+begin_src #+name: fig:fig #+caption: plots of.... #+begin_figure #+name: fig:sfig1 #+attr_latex: :caption \subcaption{1a} #+attr_latex: :options {0.5\textwidth} #+begin_subfigure #+attr_latex: :width 0.8\linewidth [[~/s/test/mip.png]] #+end_subfigure #+name: fig:sfig2 #+attr_latex: :options {0.5\textwidth} #+attr_latex: :caption \subcaption{1b} #+begin_subfigure #+attr_latex: :width 0.8\linewidth [[~/s/test/mip.png]] #+end_subfigure #+end_figure #+end_src Does it work to produce a subfigure? If so, what happens on HTML export? Also relevant: https://kitchingroup.cheme.cmu.edu/blog/2016/01/17/Side-by-side-figures-in-org-mode-for-different-export-outputs/ And another one which looks very simple: https://list.orgmode.org/87mty1an66.fsf@posteo.net/#t #+begin_quote Hi, I have come up with a way to export subfigures to LaTeX (with the subfigure package) by defining a new link type. The 'subcaption' of the subfigure would be the description of the link. If we want to add parameters such as width, scale, etc., we can put them next between the marks '>( ... 
)' The code: #+begin_src emacs-lisp (org-link-set-parameters "subfig" :follow (lambda (file) (find-file file)) :face '(:foreground "chocolate" :weight bold :underline t) :display 'full :export (lambda (file desc backend) (when (eq backend 'latex) (if (string-match ">(\\(.+\\))" desc) (concat "\\subfigure[" (replace-regexp-in-string "\s+>(.+)" "" desc) "]" "{\\includegraphics" "[" (match-string 1 desc) "]" "{" file "}}") (format "\\subfigure[%s]{\\includegraphics{%s}}" desc file))))) #+end_src Example: #+begin_src org ,#+CAPTION: Lorem impsum dolor ,#+ATTR_LaTeX: :options \centering ,#+begin_figure [[subfig:img1.jpg][Caption of img1 >(width=.3\textwidth)]] [[subfig:img2.jpg][Caption of img2 >(width=.3\textwidth)]] [[subfig:img3.jpg][Caption of img3 >(width=.6\textwidth)]] ,#+end_figure #+end_src Results: #+begin_src latex \begin{figure}\centering \subfigure[Caption of img1]{\includegraphics[width=.3\textwidth]{img1.jpg}} \subfigure[Caption of img2]{\includegraphics[width=.3\textwidth]{img2.jpg}} \subfigure[Caption of img3]{\includegraphics[width=.6\textwidth]{img3.jpg}} \caption{Lorem impsum dolor} \end{figure} #+end_src If we want to export to HTML it would be something more tricky. In this case, the export function could be like this (a width parameter would be enclosed between >{ ... 
}):

#+begin_src emacs-lisp
(lambda (file desc backend)
  (cond ((eq backend 'latex)
         (if (string-match ">(\\(.+\\))" desc)
             (concat "\\subfigure["
                     (replace-regexp-in-string "\s*>.+" "" desc)
                     "]"
                     "{\\includegraphics"
                     "[" (match-string 1 desc) "]"
                     "{" file "}}")
           (format "\\subfigure[%s]{\\includegraphics{%s}}"
                   (replace-regexp-in-string "\s*>.+" "" desc)
                   file)))
        ((eq backend 'html)
         (if (string-match ">{\\(.+\\)}" desc)
             (concat "<td><img src=\"" file "\" alt=\"" file "\""
                     " style=\"width:" (match-string 1 desc) "\""
                     "/><br>"
                     (replace-regexp-in-string "\s*>.+" "" desc)
                     "</td>")
           (format "<td><img src=\"%s\" alt=\"%s\"/><br>%s</td>"
                   file file
                   (replace-regexp-in-string "\s*>.+" "" desc))))))
#+end_src

Example:

#+begin_src org
,#+CAPTION: Lorem impsum dolor
,#+ATTR_LaTeX: :options \centering
,#+begin_figure
@@html:<div class="org-center"><table style="margin-left:auto;margin-right:auto;"><tr>@@
[[subfig:img1.jpg][Caption of img1 >(width=.3\textwidth) >{300px}]]
[[subfig:img2.jpg][Caption of img2 >(width=.3\textwidth) >{300px}]]
@@html:</tr></table><p> </p><table style="margin-left:auto;margin-right:auto;"><tr>@@
[[subfig:img3.jpg][Caption of img3 >(width=.6\textwidth) >{600px}]]
@@html:</tr></table><br>Lorem ipsum dolor</div>@@
,#+end_figure
#+end_src

As you can see, it is not the panacea, and you have to apply some direct
format...

Happy holidays

Juan Manuel
#+end_quote
**** Our implementation
*UPDATE*: The below is now essentially finished. We've finalized our
subfigure DSL and it produces the correct code for both backends. With some
CSS and JS we have also implemented being able to click on subfigures in
HTML to resize them. We use ~org-ref~ to have custom references ~sref~ and
~ssubref~ to reference figures / subfigures within the generated code.
We started an implementation of the former (the DSL) here:
[[file:~/org/Misc/side_by_side_subfigure_elisp_dsl.org]]
Once I started working on the HTML version, I realized that it will likely
be tricky to get the figure counters working. They are hardcoded by the HTML
export logic of Org here
[[file:/usr/share/emacs/29.1/lisp/org/ob-C.el.gz::3417]]
I just learned about CSS counters though:
https://tympanus.net/codrops/2013/05/02/automatic-figure-numbering-with-css-counters/
which might be perfect. Maybe we need to replace the current Org logic by
something custom (i.e. replace the ~"<span class=\"figure-number\">"~ by
something that uses ~<figure>~).

*UPDATE* <2023-09-15 Fri 22:12>: *Ohhhh!* There already *is* support for
HTML5 ~<figure>~ environments! That should take care of one aspect. We can
activate it (Ref:
https://emacs.stackexchange.com/questions/27691/org-mode-export-images-to-html-as-figures-not-img)
using
#+begin_src emacs-lisp
(setq org-html-html5-fancy t
      org-html-doctype "html5")
#+end_src
or alternatively set it only for a single file using
#+begin_src
:html-doctype "html5" :html-html5-fancy t
#+end_src
*However*, this still inserts the hardcoded ~Figure %d~ into the code. So
rebinding the function ourselves is still necessary.

For *referencing* our custom IDs, it should be fine to set
~org-html-prefer-user-labels~!
#+begin_src emacs-lisp
(setq org-html-prefer-user-labels t)
#+end_src
which then leaves the labels we assign untouched! Maybe we can disable the
counting for the ~figure-number~ span classes though!
https://emacs.stackexchange.com/a/17625
#+begin_src css
.figure-number {
    display: none;
}
#+end_src
And to get our custom counting logic, we use CSS counters:
#+begin_src css
/* Initialize the counter */
body {
  counter-reset: fig-counter;
}
figure {
  /* Increment the counter for every figure */
  counter-increment: fig-counter;
}
figcaption::before {
  /* Display the counter value before each figcaption */
  content: "Figure " counter(fig-counter) ": ";
}
#+end_src
Let's try it out by merging it into
[[file:~/org/Doc/SolarAxionConversionPoint/nimdoc.css]]
*IT WORKS PERFECTLY!!!!!* :partying_face:

Ok, the HTML export now also more or less works! Had some small issues, but
all good. It is to be noted that we need to emit ~#+begin_export html~
blocks. Also we need to be careful about the order in which we add our two
HTML export hooks, i.e. this one and the one that converts the PDFs to SVG!

Also: we probably want a separate CSS counter for the subfigures internally,
so that we can generate the correct counters *within* each subcaption, using
letters instead. So outer counter + a, b etc.

In particular, the final problem (outside of potentially having trouble
aligning the figures?) is the conversion of PDFs to SVGs. Ahhh! We can just
call the conversion and copy function from *this* hook! -> YUP, that works!

Now all that is left:
- [ ] Resize the images to the desired size (currently no size is given, nor
  does the CSS set one!)
- [ ] Improve the subfigure counter logic and text.

Ref: https://orgmode.org/manual/HTML-doctypes.html
*** Inline images
Found this interesting article. He inserts all SVG images directly as base64
data into the HTML file! That's pretty neat. It relies on ~cl-letf~ to
rebind the function ~org-html--format-image~ to do that.
https://niklasfasching.de/posts/org-html-export-inline-images/
*** TODO Make heading sections clickable
Currently all the headings are not clickable.
Of course the ToC exists, but ideally we could directly click on each
heading.
*** TODO Add buttons to increase / decrease font size
That could be useful for people reading.
*** TODO Finalize theme
- [ ] Maybe changing the font size to 14px instead of 16px and increasing
  the line height from 1.5 to 1.8 is prettier.
** DONE TikZ backend / useTeX
When the legend is too large to fit onto the regular plot, TikZ, or rather
LaTeX, will extend the figure. But the rectangle for the background is then
too small! We should apply the background color of the background rectangle
to the entire document! That way this cannot happen.
-> Fixed.
** TODO In context of determining gas diffusion [0/1]
- [ ] As mentioned in the relevant section, don't forget to clear up the
  distinction between:
  - σ_T, the actual transverse diffusion coefficient
  - D_T, the diffusion constant
  - transverse RMS, our actual value computed from the cluster, which is
    related to D_T(z). However, D_T(z) is the sigma of the deviation after a
    drift distance z of the *full population*, whereas the transverse RMS
    data is the sigma of a *small sample of the population*!
  -> Explaining it like this in the text and coming up with good terminology
  for the transverse RMS should help.
** TODO Fixup pygmentize support of unicode for Nim [/]
Nim allows unicode characters. The Pygmentize lexer however highlights those
as errors in the code. There is some stackoverflow question about something
similar for a different language (maybe even Python?)
- [ ] Need to fix that.
- [ ] Also change the font colors for certain cases, e.g. ~sh~ -> ~sh~ uses
  black text. For the monokai colors that's super broken. Text should be
  white!
- [ ] Google how to change the default colors, if possible.
** TODO Create a Docker image for the software? [/]
At least for all CPU related stuff (i.e. *not* training the MLP) we could
create a docker image that we can distribute.
That way there is at least *one guaranteed* way for others to run the
software without running into struggles building it on certain platforms.
https://stackoverflow.com/questions/36808396/how-to-create-new-docker-image-based-on-existing-image
My idea would be:
- download a bare docker image we like
- install the entire software stack in the image
- save the image as a new image
At least that seems simpler than creating a full VM image.
https://askubuntu.com/questions/308897/convert-ubuntu-physical-machine-to-virtual-machine
https://askubuntu.com/questions/34802/convert-my-physical-operating-system-to-a-virtualbox-disk
** TODO Cross check our usage of differentials, e.g. dx
Define a ~\dd~ operator like:
https://tex.stackexchange.com/questions/14821/whats-the-proper-way-to-typeset-a-differential-operator/637613#637613
and then make sure we use that everywhere instead of our manual ~\mathrm~
usage!
- [X] Defined
- [ ] Check its usage!
** TODO Copy other documents to hoster (for now Backblaze B2)
*** Index file for PhD /docs
- [ ] !
*** Axion mass - buffer gas calculation
Note: here we simply don't copy the whole directory, as there are so many
files in the directory that don't really need to be on the remote.
#+begin_src sh
cd ~/org/Code/CAST/babyIaxoAxionMassRange/
rclone copy axionMass.org b2:vindaarNotes/phd/docs/bufferGasIAXO/v1/
rclone copy axionMass.nim b2:vindaarNotes/phd/docs/bufferGasIAXO/v1/
rclone copy axionMass.pdf b2:vindaarNotes/phd/docs/bufferGasIAXO/v1/
rclone copy axionMass.html b2:vindaarNotes/phd/docs/bufferGasIAXO/v1/
rclone copy figs b2:vindaarNotes/phd/docs/bufferGasIAXO/v1/figs/
rclone copy nimdoc.css b2:vindaarNotes/phd/docs/bufferGasIAXO/v1/
rclone copy mass_attenuation_nist_data.txt b2:vindaarNotes/phd/docs/bufferGasIAXO/v1/
rclone copy polypropylene_window_10micron.txt b2:vindaarNotes/phd/docs/bufferGasIAXO/v1/
rclone copy index.html b2:vindaarNotes/phd/docs/bufferGasIAXO/v1/
#+end_src
*** SolarAxionConversionPoint
#+begin_src sh
cd ~/org/Doc/SolarAxionConversionPoint/
rclone copy SolarAxionConversionPoint/ b2:vindaarNotes/phd/docs/SolarAxionConversionPoint/
#+end_src
** DONE Fix alignment of the table of contents overlapping section numbers
Ref:
- https://tex.stackexchange.com/questions/296545/overlapping-numbers-and-titles-in-toc
- https://www.reddit.com/r/LaTeX/comments/24ao1g/section_numbers_overlapping_titles_in_table_of/
-> See the ~RedeclareSectionCommand~ KOMA options added to the TeX header.
** STARTED Print the thesis :noexport:
Printing the thesis in: https://maps.app.goo.gl/2ZmgT2hZBq9ub1TVA
436 pages, 193 colored, only about 75€ for 100g paper in a soft cover!
* Theory of axions :Theory:
:PROPERTIES:
:CUSTOM_ID: sec:theory
:END:
#+LATEX: \minitoc
A deep understanding of the axion requires knowledge of many different
aspects of the theoretical foundations of modern physics. The Standard Model
of particle physics (SM), quantum field theory (QFT) and quantum
chromodynamics (QCD). The \cpt invariance of the SM and related \cp
violation [fn:cp_violation] of the weak force. The common algebraic
structure of the weak and strong force via $\mathrm{SU}(2)$ and
$\mathrm{SU}(3)$.
Anomalies in QFT, in particular the Adler-Bell-Jackiw anomaly. The structure
of the QCD vacuum and the related $\mathrm{U}(1)$ problem, which is solved
by instantons and the concepts of Goldstone's theorem. To do justice to all
these aspects in the context of a non-theory PhD thesis is not possible
[fn:for_me]. As such, the theory part of this thesis will be kept short and
instead a focus is placed on referencing useful material for the interested
reader, in particular see sec. [[#sec:theory:useful_reading_material]].

At low energies the Standard Model can be described by a combination of
three different forces: the electromagnetic, the weak and the strong
force. These can be represented mathematically by an internal group
structure of $\mathrm{U}(1) \times \mathrm{SU}(2) \times \mathrm{SU}(3)$,
respectively. [fn:groups] The weak force, represented by $\mathrm{SU}(2)$,
has long been known to exhibit \cp violation [fn:cp_violation]. Due to the
similar group structure of the weak and the strong force ($\mathrm{SU}(2)$
vs. $\mathrm{SU}(3)$), many parallels exist between the mathematical
descriptions of the two forces in the Standard Model. In particular, the
following term, [fn:theta_parameter]
\[
\mathcal{L}_θ = θ \frac{g_s²}{32π²} G^{μν}_a \tilde{G}_{aμν},
\]
is allowed under the requirements for a valid Standard Model term (i.e.
gauge invariant, Lorentz invariant and so on). Here, $θ$ is an angle, $g_s$
is the strong coupling constant and $G^{μν}_a$ the gluon field strength
tensor. As a matter of fact, this term is even required if the instanton
solution to the $\mathrm{U}(1)$ problem is considered
cite:hooft1986instantons,Peccei2008 and is a result of the complex vacuum
structure of QCD cite:tHooftU1. This term violates $\mathrm{P}$ and
$\mathrm{T}$ transformations and, as a result of \cpt symmetry, also
violates \cp. Peculiarly, any effect expected from this \cp violation has
still not been observed.
One such effect is an expected electric dipole moment of the neutron
cite:CREWTHER_NEDM,CREWTHER_NEDM_ERRATA,Baluni_NEDM. Such a dipole moment
may naively be expected plainly from the fact that the constituent quarks of
a neutron are charged after all. However, very stringent limits place an
extremely low upper bound on it at cite:NEDM_Limit,Revised_NEDM_Limit
\[
d_{\text{NEDM}} \leq \SI{3e-26}{\elementarycharge \cm},
\]
where $e$ is the elementary charge. Nature's deviation from our expectation
in this context is known as the _strong \cp problem_ of particle physics.

One possible solution to the strong \cp problem would be a massless up or
down quark, in which case the QCD Lagrangian would feature a global
$\mathrm{U}(1)$ axial shift symmetry, which could be used to shift
$θ ↦ 0$. Even in the late 1980's this was not entirely ruled out, despite
the already understood mass ratio $m_u / m_d = 5/9$ cite:Weinberg1977_mass,
due to $2^{\text{nd}}$ order chiral effects
cite:PhysRevLett.56.2004. Nowadays, however, this has been ruled out based
on lattice QCD calculations
cite:AndrewG_Cohen_1999,PhysRevD.92.054004,10.1093/ptep/ptac097.

While it is possible that our universe is simply one in which the effect of
the strong \cp violation is suppressed (or even exactly zero) "by chance",
Helen Quinn and Roberto Peccei realized in 1977
cite:PecceiQuinn1977_1,PecceiQuinn1977_2 that this behavior can be explained
in the presence of an additional scalar field. Shortly after, both Weinberg
and Wilczek cite:AxionWeinberg,AxionWilczek realized the implication of such
an additional field: a pseudo Nambu-Goldstone boson. Wilczek named it the
_axion_, after a laundry detergent, as it "washes the Standard Model clean"
of the strong \cp problem. The most straightforward axion model based on the
work by Wilczek and Weinberg yields a coupling of the axion to matter that
is already excluded, because it associates the spontaneous symmetry breaking
with the electroweak scale.
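To get a rough feeling for the scales involved: both the axion mass and its
couplings scale as $1/f_a$. The following is a small numerical sketch (for
illustration only; it assumes the commonly quoted QCD axion relation
$m_a \approx \SI{5.7}{µeV} \cdot (\SI{e12}{GeV} / f_a)$ from the literature)
comparing an electroweak-scale $f_a$ with an 'invisible axion' scale:

#+begin_src python
# Rough sketch: axion mass vs. symmetry breaking scale f_a.
# Assumes the commonly quoted QCD axion relation
#   m_a ≈ 5.7 µeV × (10^12 GeV / f_a),
# used here only for an order-of-magnitude comparison.

def axion_mass_eV(f_a_GeV: float) -> float:
    """Axion mass in eV for a given breaking scale f_a in GeV."""
    return 5.7e-6 * (1e12 / f_a_GeV)

# 'Standard axion': f_a tied to the electroweak scale v_F ≈ 250 GeV
m_standard = axion_mass_eV(250.0)   # tens of keV -> heavy, strongly coupled
# 'Invisible axion': f_a ≫ v_F, e.g. 10^12 GeV
m_invisible = axion_mass_eV(1e12)   # µeV range -> light, feebly coupled

print(f"standard axion:  {m_standard:.3g} eV")
print(f"invisible axion: {m_invisible:.3g} eV")
#+end_src

An electroweak-scale breaking thus yields a keV-scale axion with
correspondingly strong couplings, which is experimentally excluded; pushing
$f_a$ up decouples the axion.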
Models for an 'invisible axion' manage to unify the solution to the strong
\cp problem with the current lack of experimental evidence for an axion-like
particle. There are two main models for the invisible axion, the KSVZ
(Kim-Shifman-Vainshtein-Zakharov) cite:Kim_KSVZ,SHIFMAN_KSVZ and the DFSZ
(Dine-Fischler-Srednicki-Zhitnitskii) cite:DINE_DFSZ,Zhitnitskii_DFSZ
models. For the most comprehensive modern overview of the theory of axions,
different models, best bounds on different axion couplings and a general
axion reference, make sure to look into the aptly named "landscape of QCD
axion models" cite:DILUZIO20201!

From here we will first say a few more words about the two invisible axion
models in sec. [[#sec:theory:invisible_axion_models]]. Then we will look at
the implications an effective axion Lagrangian has on axion interactions. We
will sketch the derivation of the axion-photon conversion probability in
sec. [[#sec:theory:axion_interactions]]. This conversion probability leads
to a discussion of the expected solar axion fluxes in
sec. [[#sec:theory:solar_axion_flux]]. At this point a very succinct
interlude will introduce the chameleon particle,
sec. [[#sec:theory:chameleon]]. Finally, we will briefly go over the current
relevant bounds on different axion couplings in
sec. [[#sec:theory:current_bounds]].

[fn:for_me] At least for me.

[fn:groups] $\mathrm{U}(1)$ refers to the "circle group", i.e. the group
that describes rotations on a unit circle (consider a phase shift in the
complex plane). The group operation as such can be considered as
multiplication by a complex phase. $\mathrm{SU}(n)$ is the special unitary
group, i.e. the group of unitary $n \times n$ matrices with determinant 1,
where the group operation is matrix multiplication of these matrices (for
$\mathrm{SU}(2)$ the Pauli matrices multiplied by $\frac{i}{2}$ are a
possible set of infinitesimal generators, for example).
[fn:theta_parameter] The parameter $θ$ as written in the equation is
actually a compound of a pure $θ$ from the QCD $θ$ vacuum and an electroweak
contribution. A chiral transformation is required to diagonalize the quark
mass matrix and go to the physical mass eigenstates. This adds a term to
$θ$, $\overline{θ} = θ + \arg \det M$. We simply drop the bar over
$\overline{θ}$.

[fn:cp_violation] $C$ refers to the discrete transformation of charge
conjugation and $P$ to the parity transformation. Both refer to the idea of
studying a physical system with either (or both) of these transformations
applied. A \cp conserving theory (or system) would behave exactly the same
under the combined transformation. The Standard Model is mathematically \cpt
invariant ($T$ being time reversal). As such, if a system exhibits different
behavior under time reversal, this implies a violation of \cp in order to
preserve the combined \cpt invariance.

[fn:axion_overview] To be brutally honest, as a combination of the
significant growth of the axion community (both experimentally and
theoretically) and my lack of theory work in the last years, I cannot do an
overview of axion theory justice. Fortunately, there are a huge number of
amazing reviews of the current axion landscape out there! In particular the
"landscape of QCD axion models".
** TODOs for this section [/] :noexport:
- [ ] *CITE REFERENCE WEAK CP PROBLEM -> PDG chapters*
- [X] *ADD SENTENCE ABOUT CPT INVARIANCE OF SM* -> Done I think.
- [X] *SHOW WEAK CP VIOLATING TERM*
- [X] *GET BEST CURRENT LIMIT ON NEDM*
- [X] *WRITE THETA TERM HERE TO REFERENCE IT IN QUINN PECCEI*
*REFERENCE REVIEW BY PECCEI, REVIEW BY SIKIVIE*
*FIND REFERENCE TO ORIGINAL AXION BEING VISIBLE*
*FIND REFERENCE TO INVISIBLE AXION*
*ADD REMARKS ABOUT*:
- theta term
- axion mass -> Mass is relevant for exclusion plot, if we make one!
- axion coupling constants -> It's what we search for / set limits on
From a historical standpoint including the strong CP problem, we go over to
the Peccei-Quinn solution. Another way to look at it is from a modern
standpoint, asking why the neutron does not have a dipole moment.
*Take a look at lectures for Axion School*
*QCD AXION LANDSCAPE PAPER*
*PAPER RINGWALD LIKED SO MUCH*
*NOTE*: I think after introducing the axion the way I've done up there now,
maybe it's a good idea to just review the relevant parts that will show up
in the thesis? Mass, coupling, etc?
** Useful reading material :optional:
:PROPERTIES:
:CUSTOM_ID: sec:theory:useful_reading_material
:END:
The following is a list of materials I would recommend, if someone wishes to
understand the theory underlying axions better. The focus here is on things
not already mentioned in the rest of the text. Your mileage may vary, of
course.

First of all, at least have a short look at the original papers by Roberto
Peccei and Helen Quinn [[cite:&PecceiQuinn1977_1;&PecceiQuinn1977_2]], as
well as the 'responses' by Wilczek and Weinberg
[[cite:&AxionWilczek;&AxionWeinberg]].

If your interest is in understanding the reasoning behind the origin of the
strong CP problem via the QCD vacuum structure and the $\mathrm{U}(1)$
problem, it may be worthwhile to look at (some of) the original papers. See
Weinberg's paper introducing it [[cite:&PhysRevD.11.3583]] and 't Hooft's
paper on symmetry breaking via the Adler-Bell-Jackiw anomaly
[[cite:&tHooftU1]] as the next logical step. In
[[cite:&hooft1986instantons]] 't Hooft then shows how instantons solve the
$\mathrm{U}(1)$ problem. In 1999 't Hooft also wrote a 'historical review'
[[cite:&hooft1999glorious]] about renormalization of gauge theories, which
touches on the $\mathrm{U}(1)$ problem as well. It may be easier to follow
(as these things so often are in hindsight).
In the vein of review papers, Roberto Peccei wrote a nice review paper in
2008 [[cite:&Peccei_2008]] covering the connection between the
$\mathrm{U}(1)$ problem, the QCD vacuum structure, the strong CP problem and
axions. Similarly, Jihn Kim (the /K/ in the KSVZ model) wrote another very
nice review in 2010 [[cite:&kim2010axions]]. I would probably recommend
reading these two for a good overview. They give enough references to dive
deeper, if desired.

On the side of axion detection, you should of course look into the two
papers by Pierre Sikivie, introducing all our modern axion detection
experiment techniques [[cite:&PhysRevLett.51.1415;&PhysRevD.32.2988]].
Sikivie wrote another review in 2020 [[cite:&Sikivie2020]] covering search
methods for invisible axions.

Another great overview, covering both the theory and experimental aspects of
the axion, is given by Igor Irastorza's notes:
[[cite:&10.21468/SciPostPhysLectNotes.45]]. And there is also the text book
'Axions' [[cite:&kuster2007axions]], which may be worth a look as a summary
of different aspects of axion physics.

If you only want to look into a single document covering all aspects of
axions, from theory to experimental searches, look no further than the
'landscape of QCD axion models' [[cite:&DILUZIO20201]], also mentioned in
the main text.

Finally, in my master thesis [[cite:&SchmidtMaster]] I attempted to
introduce the relevant physics for the axion (and chameleon) to a level
satisfactory to me at the time. Whether I succeeded or not, you can be the
judge. This may (or may not) be a good introduction if you are a student for
whom courses in QFT are still fresh in mind.
*** TODOs for this section :noexport:
This will become a written section, maybe with bullet points, to be shown in
the extended version, about the reading material I think is valuable.
- [X] Original papers by Peccei & Quinn
- [X] Original paper by Wilczek and Weinberg
- [X] 't Hooft paper about instantons as solution to U(1) problem
- [X] 't Hooft review about renormalization etc
- Landscape of QCD and Igor's paper for current intro and overview of axions
- [ ] (? which do I mean?) NEDM review paper for QCD vacuum
- My master thesis for an attempt to pull together all relevant aspects for
  students at the MSc level
- [X] Axions and the strong CP problem by Kim
- [X] Review by Peccei
- ...? Check my MSc for other references
- Review by Sikivie
- Redondo paper about axion-electron flux
** Illustration of the \cpt symmetry [/] :extended:
:PROPERTIES:
:CUSTOM_ID: sec:theory:illustration_cpt
:END:
Fig. [[fig:theory:cpt_symmetry_schematic]] is an illustration of the three
discrete transformations that make up the \cpt symmetry, in a 1-dimensional
spacetime. The three transformations are:
- $C$ - charge conjugation: replaces each particle by its antiparticle, thus
  reversing the charge $q ↦ -q$
- $P$ - parity transformation: mirrors all positions at some origin,
  $\vec{x} ↦ -\vec{x}$.
- $T$ - time reversal: reverses the arrow of time, $t ↦ -t$
The \cpt symmetry of the Standard Model states that a hypothetical mirror
universe to ours - obtained by applying the three transformations together -
follows the exact same physical laws and thus 'evolves' identically (due to
the time reversal, 'evolution' in quotation marks).
#+CAPTION: Schematic showcasing the three transformations $C, P$ and $T$ in a 1 dimensional
#+CAPTION: spacetime. Each of the three are discrete transformations essentially reversing
#+CAPTION: the value along its axis. The combination of all three operations yields the
#+CAPTION: schematic on the right.
#+NAME: fig:theory:cpt_symmetry_schematic
[[~/org/Figs/CPT_explanation/cpt_explanation_extended_linear.pdf]]
*** TODOs for this section :noexport:
- [X] *Change* the structure of the extended schematic to follow what
  Cristina proposed here:
  [[file:~/org/Figs/CPT_explanation/idea_cristina_better_structure.png]].
  Also, this section is likely not going to remain in the thesis, only in
  the extended version. -> Done.
** Historical origins :noexport:
- [X] DONE IN INTRODUCTION
From electroweak theory we know about CP violation. The Standard Model for
the strong force is just SU(3) vs. SU(2) for the electroweak force. The
Lagrangian allows mostly the same terms for both forces. This implies there
should be a strong CP violation. This isn't observed and even today the
neutron electric dipole moment is restricted to values smaller than
$d_N \leq 1e-26 \text{? some units}$.
Merge the next section into this one and change title?
** Strong CP problem :noexport:
- [X] DONE IN INTRODUCTION
More info here? Use the schematic I created for Hendrik's presentation?
** Peccei-Quinn solution :noexport:
- [X] DONE IN INTRODUCTION
Main Peccei-Quinn paper citation. Solution by introducing another global
U(1) symmetry that is spontaneously broken below some energy scale.
** The axion :noexport:
- [X] DONE IN INTRODUCTION
Leads to a pseudo Nambu-Goldstone boson that Wilczek named the axion (ref a
pic of Axion detergent), as it washes the Standard Model clean of an ugly
stain.
** Invisible axion models and axion couplings
:PROPERTIES:
:CUSTOM_ID: sec:theory:invisible_axion_models
:END:
- Kim-Shifman-Vainshtein-Zakharov model :: The so-called KSVZ model
  cite:Kim_KSVZ,SHIFMAN_KSVZ is the simplest invisible axion model. It adds
  a scalar field $σ$ and a superheavy quark $Q$, which $σ$ couples to via a
  Yukawa coupling. The main problem with the standard axion is that its
  energy scale is the electroweak scale $v_F \approx \SI{250}{GeV}$,
  resulting in too strong interactions.
  The KSVZ model effectively achieves a symmetry breaking scale
  $f_a \gg v_F$ and thus results in an 'invisible axion'. It contains a
  tree-level axion-photon coupling $g_{aγ}$, but no axion-electron
  coupling. The latter can be found at one-loop level cite:SREDNICKI1985689.
- Dine-Fischler-Srednicki-Zhitnitskii model :: The DFSZ axion model
  cite:DINE_DFSZ,Zhitnitskii_DFSZ is another axion model, in which the
  scalar field $σ$ couples to two Higgs doublet fields, $H_u$ and $H_d$. It
  does not require an extra superheavy quark; in this case the coupling of
  the scalar to the doublets achieves the decoupling of the axion symmetry
  breaking scale $f_a$ from the electroweak scale $v_F$. The end result is
  the same, an 'invisible axion' which is very light and has only very
  feeble interactions with other matter. However, in contrast to the KSVZ
  model, axion-lepton couplings appear at tree level!
- Generalizations :: Further generalizations of axion models beyond the KSVZ
  and DFSZ models are possible, resulting in more flexible couplings.

From a practical standpoint of an axion experiment it is usually better to
consider axion interactions via effective couplings, as the limiting factor
is detecting _something_ rather than determining its properties. In
particular because the two models above yield very small expected coupling
constants in regions of axion masses easily accessible via laboratory
experiments. Therefore, an effective Lagrangian like the following,
#+NAME: eq:theory:axion:general_axion_couplings
\begin{equation}
\mathcal{L}_{a,\text{eff}} = \frac{1}{2} \partial^{μ} a \partial_{μ} a -
\frac{1}{2} m_a^2 a^2 - \frac{g_{aγ}}{4} \widetilde{F}^{μν} F_{μν} a -
g_{ae} \frac{\partial_{μ} a}{2 m_e} \overline{ψ}_e γ^5 γ^{μ} ψ_e,
\end{equation}
which contains an axion-photon coupling $g_{aγ}$ and an axion-electron
coupling $g_{ae}$, is useful for experimental searches.
Limits set on one of these effective parameters can always be converted to
the specific couplings of one of the existing models if needed. In principle
other couplings exist, for example the axion-nucleon coupling $g_N$. For
brevity we ignore these, as they are not relevant in the context of this
thesis. Future helioscopes like IAXO may be sensitive to at least $g_N$
though cite:di2022probing.
*** TODOs for this section :noexport:
Old footnote:
#+begin_quote
[fn:other_couplings] This Lagrangian can of course be extended by other
couplings, like $g_{aN}$ an axion-nucleon coupling and others. We restrict
ourselves here to those that are considered in the remainder of the thesis.
#+end_quote
*** Notes on KSVZ, DFSZ :extended:
- KSVZ :: The Lagrangian for this model can be written as cite:SHIFMAN_KSVZ
  \begin{equation}
  \mathcal{L}_{\text{KSVZ}} = \overline{Q}\slashed{D}Q - h \left( σ
  \overline{Q}_R Q_L + σ^{\dag} \overline{Q}_L Q_R \right) + \partial^{μ}
  σ^{\dag} \partial_{μ} σ + m^2 σ^{\dag} σ - λ \left( σ^{\dag} σ \right)^2,
  \label{eq:theory:axion:KSZV_lagrangian}
  \end{equation}
  with the dimensionless Yukawa coupling $h$. The vacuum expectation value
  of the field $σ$ calculates to
  \begin{equation}
  f_a = ⟨σ⟩ \equiv σ_0 = \frac{m}{\sqrt{2 λ}}.
  \end{equation}
  The mass of this axion turns out to be exactly as for the standard axion,
  eq. \ref{eq:theory:axion:effective_axion_mass}, with the replacement
  $\nu_{\text{EW}} \rightarrow ⟨σ⟩$. This means the KSVZ model can be seen
  as the simplest extension of the standard axion, which allows for an
  arbitrary symmetry breaking scale $f_a$.
- DFSZ :: While the DFSZ model does not need an additional superheavy quark,
  it relies on two scalar Higgs doublet fields. $\Phi_u$ has hypercharge
  $-1$ and couples to $u$ type right-handed quarks, whereas $\Phi_d$ has
  hypercharge $+1$ and couples to $d$ type right-handed quarks as well as
  leptons.
  The different Higgs fields have different vacuum expectation values, with
  the requirement
  \begin{equation}
  \nu^2_{\text{EW}} = \nu^2_u + \nu^2_d,
  \end{equation}
  while the additional degree of freedom allows for
  $\sqrt{2} ⟨σ⟩ \equiv f_{σ} \gg \nu_{\text{EW}}$. The mass term is again
  similar to the standard axion and the KSVZ axion,
  \begin{equation}
  m_a = m_{a0} / N_g,
  \end{equation}
  where $N_g$ is the number of quark generations of the theory. Again,
  $\nu_{\text{EW}}$ is replaced by $⟨σ⟩$. One interesting property of the
  DFSZ axion is its direct coupling to electrons (and other leptons
  [[cite:kim2010axions,kim2010axions_erratum]]), given by
  cite:Redondo_2013,Peccei2008:
  \begin{equation}
  \mathcal{L}_{al} = -i \frac{\nu_d^2}{\nu^2_{\text{EW}}} \frac{m_l}{f_a} a
  \overline{l} \gamma^5 l,
  \end{equation}
  where $l$ denotes a lepton. This type of coupling may allow for easier
  detection of axions than models only including axion-photon couplings at
  tree level.
** Implications for axion interactions - conversion probability
:PROPERTIES:
:CUSTOM_ID: sec:theory:axion_interactions
:END:
We start with the Lagrangian in
eq. [[eq:theory:axion:general_axion_couplings]] and extend it by the
Lagrangian for a free photon $A_ν$,
\[
\mathcal{L} = \mathcal{L}_{a,\text{eff}} + \mathcal{L}_γ =
\mathcal{L}_{a,\text{eff}} - \frac{1}{4} F_{μν} F^{μν},
\]
where $F_{μν} = ∂_μ A_ν - ∂_ν A_μ$ is the electromagnetic field strength
tensor. Noting that the axion interaction term of
eq. [[eq:theory:axion:general_axion_couplings]] can be rewritten as
\[
\mathcal{L}_{aγ} = \frac{1}{4}g_{aγ} \tilde{F}^{μν} F_{μν} a = -g_{aγ} a
\vec{E}·\vec{B},
\]
we can apply the Euler-Lagrange equations to both the axion $a$ and the
photon $A_ν$ to derive a modified Klein-Gordon equation for the axion,
\[
\left(\Box + m_a²\right) a = \frac{1}{4}g_{aγ} F_{μν} \tilde{F}^{μν},
\]
which has a photon source term.
Similarly, for the photon equation of motion we derive the inhomogeneous
Maxwell equations with an axion source term,
\[
∂_μ F^{μν} = g_{aγ} (∂_μ a) \tilde{F}^{μν}.
\]
Without going into too much detail, let us briefly sketch how one derives
the axion-photon conversion probability from here. By choosing a suitable
gauge and specifying the directions of the electric and magnetic fields in a
suitable coordinate system, we can derive the mixing between photon and
axion states. For example, if the propagation of particles is along the $z$
axis and we fix the two degrees of freedom of $A_ν$ by the Lorenz gauge
($∂_μ A^μ = 0$) and the Coulomb gauge ($\vec{\nabla}·\vec{A} = 0$), we can
derive a single equation of motion for the axion and the $A_ν$ field by
starting with a plane wave ansatz. We obtain
\[
\left[ (ω² + ∂²_z) \mathbf{1} - \mathbf{M} \right]
\vektor{A_{\perp}(z) \\ A_{\parallel}(z) \\ a(z)} = 0,
\]
with the parallel and orthogonal polarization of the photon,
$A_{\parallel}$ and $A_{\perp}$ respectively, and the matrix $\mathbf{M}$:
\[
\mathbf{M} = \mtrix{
m²_γ & 0 & 0 \\
0 & m²_γ & -ω g_{aγ} B_T \\
0 & -ω g_{aγ} B_T & m²_a \\
} \text{ where } m²_γ = ω²_p.
\]
Here, effects of quantum electrodynamic (QED) vacuum polarization and other
polarization effects are ignored. $ω_p$ refers to the plasma frequency,
$ω²_p = 4πα n_e / m_e$, with $n_e$ the electron density and $m_e$ the
electron mass. The mass $m_γ$ refers to an effective photon mass that can
appear in media, $ω$ is the frequency and $B_T$ the transverse magnetic
field. The constant magnetic field $B_T$ appears because we assume for our
purpose that the magnetic field is constant along $z$, the propagation
direction of the photon. By recognizing that the orthogonal component
$A_{\perp}$ is decoupled from the other two, the problem reduces to a
2-dimensional equation.
Note that a side effect of this decoupling is that photons produced from an incoming axion in a magnetic field are always linearly polarized in the direction of the external magnetic field! Further, this equation can be linearized in the ultra-relativistic limit $m_γ² \ll ω²$ to \[ \left[ \left( ω + i∂_z \right) \mathbf{1} - \frac{\mathbf{M}}{2ω} \right] \vektor{ A_{\parallel}(z) \\ a(z) } = 0. \] As the mass matrix $\mathbf{M}$ is non-diagonal, the fields $A_{\parallel}$ and $a$ are interaction eigenstates and not propagation eigenstates. However, to compute the axion to photon conversion probability, we need the propagation eigenstates. Transforming from one to the other is done by a regular rotation matrix $\mathbf{R}$, which diagonalizes $\mathbf{M}/2ω$. In the basis of the propagation eigenstates the fields $A'_{\parallel}$ and $a'$ are decoupled and can easily be solved by a plane wave ansatz. The fields we can _measure_ in an experiment are of course those of the interaction eigenstates. [fn:eigenstates] The interaction eigenstates after a distance $z$ can therefore be expressed by \[ \vektor{ A_{\parallel}(z) \\ a(z) } = \mathbf{R}^{-1} \mathbf{M_{\text{diag}}} \mathbf{R} \vektor{ A_{\parallel}(0) \\ a(0) }, \] where $\mathbf{M_{\text{diag}}}$ is the diagonalized propagation matrix, \[ \mtrix{ e^{-i λ_+ z} & 0 \\ 0 & e^{-i λ_- z} }, \] with $λ_{+,-}$ its eigenvalues, i.e. the coefficients in the exponentials of the plane wave solutions of the propagation eigenstate fields $A'_{\parallel}$ and $a'$, given by \[ λ_{+,-} = \pm \frac{1}{4ω} \sqrt{ \left(ω²_p - m²_a\right)² + \left(2 ω g_{aγ} B_T\right)²}. \] Finally, one can compute the conversion probability by starting from initial conditions where no electromagnetic field is present, $A_{\parallel}(0) = 0, a(0) = 1$.
Computing the resulting $A_{\parallel}(z)$ with these conditions and taking the modulus squared yields the probability to measure a photon at distance $z$ when starting purely from axions in an external, transverse magnetic field $B_T$:

#+NAME: eq:theory:axion_interaction:conversion_probability
\begin{equation} P_{a↦γ}(z) = |{A_{\parallel}(z)}|² = \left( \frac{g_{aγ} B_T z}{2} \right)² \left(\frac{\sin\left(\frac{q z}{2}\right)}{\frac{q z}{2}}\right)², \end{equation}

where $q = \frac{m²_γ - m²_a}{2ω}$ and we dropped additional terms $∝ g_{aγ} B_T$ as arguments of $\sinc(x) = \sin(x)/x$, because they are extremely small compared to $q z$ for reasonable axion masses, magnetic fields and coupling constants. If the coherence condition $qL < π$ is satisfied strongly, i.e. $qL \ll π$, the $\sinc$ term approaches 1 and the relevant conversion probability finally becomes:

#+NAME: eq:theory:conversion_prob
\begin{equation} P_{a↦γ, \text{vacuum}} = \left(\frac{g_{aγ} B L}{2} \right)^2, \end{equation}

where we dropped the $T$ suffix for the transverse magnetic field and replaced $z$ by the more apt $L$ for the length of a magnet. This is the case for long magnets and/or low axion masses, which is generally applicable in this thesis. The axion mass at which this condition ceases to strictly hold is where helioscope experiments start to lose sensitivity. Note that the above conversion probability is given in natural units, specifically Lorentz-Heaviside units, $c = \hbar = ε_0 = 1$ (meaning $α = e²/4π \approx 1/137$). The arguments ($B$, $L$) either need to be converted to natural units as well, or the missing factors of the fundamental constants need to be restored. The same equation in SI units is given by: \[ P_{a↦γ, \text{vacuum}} = ε_0 \hbar c^3 \left( \frac{g_{aγ} B L}{2} \right)^2. \] A detailed derivation for the above can be found in cite:masaki2017photon.
[fn:biljana_kreso_doc] An initial derivation for the first axion helioscope prototype is found in cite:vanBibber1989, based on cite:raffelt1988mixing. Sikivie gives expected rates in his groundbreaking papers on axion experiments, cite:PhysRevLett.51.1415,PhysRevD.32.2988, but is extremely short on details. Another source is cite:raffelt1996stars, in which G. Raffelt covers a very large number of topics relevant to axion searches.

[fn:biljana_kreso_doc] There is a more detailed derivation written by Biljana Lakić and Krešimir Jakovčić available internally in the IAXO collaboration. If you don't have access to it, reach out!

[fn:eigenstates] For all practical purposes the terms interaction eigenstates and propagation eigenstates (the latter often also called mass eigenstates) are a convenient tool to work with. A field $X$ in the interaction eigenstate corresponds to the field we can actually measure in an experiment. However, if fields interact, say with another field $Y$, then along their space and time evolution they may mix. Therefore, it is useful (and convenient) to introduce a propagation eigenstate $X'$, a new field (different from the physical field $X$!) that propagates without any interaction. The nature of the field interactions has been absorbed into the time and space evolution of the field itself -- it is a superposition of the $X$ and $Y$ interaction eigenstates. In the simplest case a field in the propagation eigenstate may correspond to the physical field oscillating between $X$ and $Y$, for example.

*** TODOs for this section [1/2] :noexport:
- [ ] *FIND OUT IF WE CAN LINK AXION DOCUMENT OF BILJANA & KRESO!*
- [X] *Mention $A_ν$ related to F*

*** Effects of a buffer gas
:PROPERTIES:
:CUSTOM_ID: sec:theory:buffer_gas
:END:

As seen in the conversion probability above, there is a term for an effective photon mass $m_γ$ as part of $q$. And indeed, $q$ becomes zero if $m_γ = m_a$, which means the suppression effect of the $\sinc$ term disappears.
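To get a feeling for the numbers involved, here is a small standalone Python sketch (the thesis tooling uses Nim with ~unchained~; the electron density below is an arbitrary placeholder and the magnet length is CAST-like) computing the effective photon mass $m_γ = ω_p$ and the resulting $\sinc²(qL/2)$ suppression:

```python
import math

ALPHA = 1.0 / 137.036   # fine structure constant
M_E   = 511e3           # electron mass [eV]
M     = 5.0677e6        # 1 m in eV⁻¹ (Lorentz-Heaviside natural units)

def plasma_frequency(n_e_cm3):
    """Effective photon mass m_γ = ω_p = sqrt(4πα n_e / m_e) in eV."""
    n_e = n_e_cm3 * 1e6 / M**3   # cm⁻³ → m⁻³ → eV³
    return math.sqrt(4 * math.pi * ALPHA * n_e / M_E)

def suppression(m_a, m_g, omega, L):
    """sinc²(qL/2) with q = |m_γ² − m_a²| / 2ω; masses/energies in eV, L in eV⁻¹."""
    x = abs(m_g**2 - m_a**2) / (2 * omega) * L / 2
    return 1.0 if x == 0.0 else (math.sin(x) / x)**2

L   = 9.26 * M                  # CAST-like magnet length in eV⁻¹
m_g = plasma_frequency(1e20)    # placeholder electron density
print(f"m_γ = {m_g:.3f} eV")
print("vacuum :", suppression(m_g, 0.0, 3e3, L))  # heavy axion: strongly suppressed
print("matched:", suppression(m_g, m_g, 3e3, L))  # q = 0: full coherence restored
```

With a medium whose plasma frequency matches the axion mass, $q$ vanishes and the full coherence is restored, which is exactly the idea behind a buffer gas filling.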
This is something that can be used to increase the conversion probability inside of a magnet, by filling it with a buffer gas (for example helium), as introduced in cite:raffelt1988mixing,vanBibber1989. However, one also needs to account for the attenuation effect of the gas on the produced X-rays. As such, the derivation above needs to include this as part of the evolution of the field $\vec{A}$ [fn:refractive_index]. By doing this and following the rest of the derivation, the conversion probability in the presence of a buffer gas comes out to:

#+NAME: eq:theory:full_conversion_prob
\begin{equation} P_{a\rightarrow\gamma} = \left(\frac{g_{a\gamma} B}{2}\right)^2 \frac{1}{q^2 + \Gamma^2 / 4} \left[ 1 + e^{-\Gamma L} - 2e^{-\frac{\Gamma L}{2}} \cos(qL)\right], \end{equation}

where $\Gamma$ is the inverse absorption length (attenuation coefficient) for photons, $B$ the transverse magnetic field, $L$ the length of the magnetic field and $q$ the axion-photon momentum transfer, given by: \[ q = \left|\frac{m_{\gamma}^2 - m_a^2}{2E_a}\right|. \] One can easily see that this reduces to the vacuum case we discussed before by setting the attenuation $Γ$ and the effective photon mass $m_γ$ to zero. Filling the magnet with a buffer gas to induce an effective photon mass and thus minimize $q$ has been done at CAST with $\ce{He4}$ and $\ce{He3}$ fillings, as we will mention in chapter [[#sec:helioscopes:cast]].

#+begin_quote
Note: for a potential buffer gas run in BabyIAXO I did some calculations about the required gas pressure steps and the effect of different possible filling configurations. These are not really relevant for this thesis, but can be found at [fn:axion_mass_link], both as a PDF and as the original Org file plus the tangled source code.
#+end_quote

[fn:refractive_index] This can be done easily by treating the buffer gas as a refractive medium with complex refractive index $n_γ = 1 - m²_γ / (2ω²) - iΓ/2ω$, which produces the attenuation via the $Γ$ term. This is useful to know as it relates to the X-ray properties as discussed later in sec. [[#sec:theory:xray_matter_gas]] and sec. [[#sec:theory:xray_reflectivity]].

[fn:axion_mass_link] http://phd.vindaar.de/docs/bufferGasIAXO/v1/index.html contains the ~axionMass.pdf~, ~axionMass.nim~ and finally ~axionMass.org~, the document from which both other files are generated.

**** TODOs for this section [/] :noexport:
- [X] *VAN BIBBER cite:vanBibber1989 IS SOURCE OF CONVERSION WITH ATTENUATION* -> Based on cite:raffelt1988mixing !
- [ ] *INSERT ~axionMass.org~ CALCULATIONS SOMEWHERE* -> We need to link to the
- [ ] *INSERT OUR DOCUMENT ABOUT AXION MASS BUFFER GAS HERE* -> maybe in the end it might be a good idea to mention the buffer gas in the actual thesis in a short paragraph with a couple of words on how it works, but mention that this is not immediately relevant. Then we can link to the appendix of the extended version. Due to the length of this part it makes more sense to have it in the appendix than here where it does affect the flow otherwise. -> Put them into a ~docs~ directory?
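As a numerical cross-check of eq. [[eq:theory:full_conversion_prob]] and its vacuum limit, a standalone Python sketch (CAST-like placeholder values for $B$, $L$ and $g_{aγ}$; the actual thesis calculations use Nim with ~unchained~):

```python
import math

T_NAT = 195.353    # 1 T in eV² (Lorentz-Heaviside natural units)
M_NAT = 5.0677e6   # 1 m in eV⁻¹

def p_buffer_gas(g_GeV, B_T, L_m, m_a, m_g=0.0, Gamma_per_m=0.0, E_a=3e3):
    """Axion-photon conversion probability with buffer gas; masses/energies in eV."""
    g, B, L = g_GeV * 1e-9, B_T * T_NAT, L_m * M_NAT   # convert to natural units
    G = Gamma_per_m / M_NAT                            # attenuation in eV
    q = abs(m_g**2 - m_a**2) / (2 * E_a)
    return (g * B / 2)**2 / (q**2 + G**2 / 4) * \
        (1 + math.exp(-G * L) - 2 * math.exp(-G * L / 2) * math.cos(q * L))

# vacuum, fully coherent regime: must reproduce (g B L / 2)²
p_vac = (1e-19 * 9 * T_NAT * 9.26 * M_NAT / 2)**2
print(p_vac)                                     # ≈ 1.7e-17
print(p_buffer_gas(1e-10, 9.0, 9.26, m_a=1e-3))  # agrees for a 1 meV axion
```

For $Γ = 0$, $m_γ = 0$ and a light axion the bracket reduces to $2 - 2\cos(qL) \approx (qL)²$, recovering the coherent vacuum result.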
**** Simplification :extended:

The conversion probability simplifies to: \begin{align} P_{a\rightarrow\gamma, \text{vacuum}} &= \left(\frac{g_{a\gamma} B}{2}\right)^2 \frac{1}{q^2} \left[ 1 + 1 - 2 \cos(qL) \right] \\ P_{a\rightarrow\gamma, \text{vacuum}} &= \left(\frac{g_{a\gamma} B L}{2} \right)^2 \left(\frac{\sin\left(\frac{qL}{2}\right)}{ \left( \frac{qL}{2} \right)}\right)^2 \end{align}

*** TODOs for the section :noexport:
- [X] *Mention* that A field is decoupled from background field B
- [X] Interaction eigenstates vs propagation eigenstates
- [X] Compute result for propagation eigenstates easily, because matrix becomes diagonal
- [X] In laboratory we measure interaction eigenstates, so the result of the propagation eigenstates must be rotated back
- [X] calculate transition amplitude from axion to photon by simply taking $A_{\parallel}²$ of rotated interaction eigenstate. As it has axion coupled into it, gives result if we start with initial conditions A|| = 0, a = 1.
- [X] How to compute version with gas attenuation of photons? Start not with plane wave solution of photon, but plane wave + decay? I suppose.
- [X] Explain origin of the conversion probability by starting with modified Maxwell equations and then solving the Klein-Gordon equation. The resulting mixing of $A_{\parallel}$ and $a$ leads to it.

*** Dropping an additional $Δ$ term :extended:

Notation: \[ q = \frac{m²_γ - m²_a}{2ω} \] and \[ Δ = \frac{-g_{aγ} B_T}{2} \] In principle the conversion probability contains \[ \sinc²\left(\frac{πz}{L_{\text{osc}}}\right) \] with the oscillation length \[ L_{\text{osc}} = \frac{π}{\sqrt{ \left( \frac{q}{2} \right)² + Δ² } } \] I assume we can safely drop the term $Δ²$, as it is likely orders of magnitude smaller than the first term.
Let's check that:

#+begin_src nim
import unchained
defUnit(GeV⁻¹)

proc q(m_a: meV, ω: keV): keV =
  result = (m_a*m_a / (2 * ω)).to(keV)

proc Δ(g_aγ: GeV⁻¹, B: Tesla): keV =
  result = (-g_aγ * B.toNaturalUnit() / 2.0).to(keV)

echo "q² = ", q(1.meV, 3.keV)^2
echo "Δ² = ", Δ(1e-12.GeV⁻¹, 9.T)^2
#+end_src

#+RESULTS:
| q² | = | 2.77778e-26 keV² |
| Δ² | = | 7.72795e-43 keV² |

Yeah, as expected, $Δ²$ is utterly negligible.

*** Deriving the missing constants in the conversion probability [/] :extended:
- [ ] *Move this into an appendix?*

The conversion probability is given in natural units. In order to plug in SI units directly, without the need for a conversion to natural units for the magnetic field and length, we need to reconstruct the missing constants. The relevant constants in natural units are: \begin{align*} ε_0 &= \SI{8.8541878128e-12}{A.s.V^{-1}.m^{-1}} \\ c &= \SI{299792458}{m.s^{-1}} \\ \hbar &= \frac{\SI{6.62607015e-34}{J.s}}{2π} \end{align*} which are each set to 1. If we plug in the definition of a volt, we get for $ε_0$ units of: \[ \left[ ε_0 \right] = \frac{\si{A^2.s^4}}{\si{kg.m^3}} \] The conversion probability naively in natural units has units of: \[ \left[ P_{aγ, \text{natural}} \right] = \frac{\si{T^2.m^2}}{J^2} = \frac{1}{\si{A^2.m^2}} \] where we use the fact that $g_{aγ}$ has units of $\si{GeV^{-1}}$, which is equivalent to _units_ of $\si{J^{-1}}$ (care has to be taken with the rest of the conversion factors of course!), and tesla in SI units: \[ \left[ B \right] = \si{T} = \frac{\si{kg}}{\si{s^2.A}} \] From the appearance of $\si{A^2}$ in the units of $P_{aγ, \text{natural}}$ we know a factor of $ε_0$ is missing. This leaves the question of the correct powers of $\hbar$ and $c$, which come out to: \begin{align*} \left[ ε_0 \hbar c^3 \right] &= \frac{\si{A^2.s^4}}{\si{kg.m^3}} \frac{\si{kg.m^2}}{\si{s}} \frac{\si{m^3}}{\si{s^3}} \\ &= \si{A^2.m^2}.
\end{align*}

So the correct expression in SI units is: \[ P_{aγ} = ε_0 \hbar c^3 \left( \frac{g_{aγ} B L}{2} \right)^2 \] where now only $g_{aγ}$ needs to be expressed in units of $\si{J^{-1}}$ for a correct result using tesla and meter.

*** Further notes on units of conversion probability :extended:

The conversion probability follows the derivation by Biljana and Kreso: [[file:~/org/Papers/Axion-photon_conversion_report_biljana.pdf]] Fortunately, they state explicitly which units they use; to quote:
#+begin_quote
We stress that the equations above are written in terms of natural, rationalized electromagnetic units (natural Lorentz-Heaviside units) where hbar = c = 1 and the fine-structure constant is given as α = e² / 4π ≈ 1/137.
#+end_quote
This is very important, as it means we did not miss some 4π or √(4π) factor. The fine structure constant is defined by \[ α = \frac{e²}{4π ε_0 \hbar c} \] in SI units. This means $ε_0 = 1$ as well (but explicitly *not* something like $4π ε_0 = 1$, which one also finds, e.g. here http://ilan.schnell-web.net/physics/natural.pdf). A PDF about natural unit systems I personally like a lot: https://www.seas.upenn.edu/~amyers/NaturalUnits.pdf It describes natural units in the same way as we use them here, i.e. 'Lorentz-Heaviside'. These are also implemented as the (currently only) natural units in ~unchained~. In particular, in these units the conversion factors for tesla and meter are as follows:

#+begin_src nim :results drawer
import unchained
echo 1.T, " in natural units = ", 1.T.toNaturalUnit()
echo 1.m, " in natural units = ", 1.m.toNaturalUnit()
#+end_src

#+RESULTS:
:results:
1 T in natural units = 195.353 eV²
1 m in natural units = 5.06773e+06 eV⁻¹
:end:

which are the relevant conversion factors, if one wishes to work with the natural unit version (instead of relying on a library with an option to convert units for you). See the table of the aforementioned ~NaturalUnits.pdf~ to deduce these conversion factors yourself.
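These two conversion factors, and the SI form of the conversion probability, can be re-derived from CODATA constants alone. A short Python cross-check (the CAST-like values for $g_{aγ}$, $B$ and $L$ are placeholders):

```python
import math

HBAR = 1.054571817e-34    # J s
C    = 2.99792458e8       # m / s
E_EV = 1.602176634e-19    # J per eV
MU0  = 1.25663706212e-6   # vacuum permeability [N / A²]
EPS0 = 8.8541878128e-12   # vacuum permittivity [F / m]

# 1 m in eV⁻¹: invert ħc expressed in eV·m
m_nat = 1.0 / (HBAR * C / E_EV)
# 1 T in eV²: match the energy density B²/(2μ₀) [J/m³] to B_nat²/2 [eV⁴]
T_nat = math.sqrt(2 * (1.0 / (2 * MU0)) / E_EV / m_nat**3)
print(f"1 m = {m_nat:.5e} eV⁻¹")   # ≈ 5.06773e6
print(f"1 T = {T_nat:.3f} eV²")    # ≈ 195.353

# conversion probability both ways for g_aγ = 1e-10 GeV⁻¹, B = 9 T, L = 9.26 m
p_nat = (1e-10 * 1e-9 * 9 * T_nat * 9.26 * m_nat / 2)**2
g_J   = 1e-10 / (1e9 * E_EV)       # GeV⁻¹ → J⁻¹
p_si  = EPS0 * HBAR * C**3 * (g_J * 9 * 9.26 / 2)**2
print(p_nat, p_si)                 # identical, ≈ 1.7e-17
```

Both routes agree, confirming that $ε_0 \hbar c³$ is indeed the full set of missing constants.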
Also, there exists a natural unit conversion map for [[https://www.gnu.org/software/units/][GNU Units]]: https://github.com/misho104/natural_units which also explicitly mentions it uses Lorentz-Heaviside units. Therefore, feel free to use that to help with your conversions.

*** Full simplification of conversion probability in vacuum :extended:

The full simplification for the vacuum case from the buffer gas conversion probability is as follows: \begin{align*} P_{a\rightarrow\gamma} &= \left(\frac{g_{a\gamma} B}{2}\right)^2 \frac{1}{q^2 + \Gamma^2 / 4} \left[ 1 + e^{-\Gamma L} - 2e^{-\frac{\Gamma L}{2}} \cos(qL)\right] \\ \text{for vacuum } Γ &= 0, m_γ = 0 \text{ and thus} \\ P_{a\rightarrow\gamma, \text{vacuum}} &= \left(\frac{g_{a\gamma} B}{2}\right)^2 \frac{1}{q^2} \left[ 1 + 1 - 2 \cos(qL) \right] \\ P_{a\rightarrow\gamma, \text{vacuum}} &= \left(\frac{g_{a\gamma} B}{2}\right)^2 \frac{2}{q^2} \left[ 1 - \cos(qL) \right] \\ P_{a\rightarrow\gamma, \text{vacuum}} &= \left(\frac{g_{a\gamma} B}{2}\right)^2 \frac{2}{q^2} \left[ 2 \sin^2\left(\frac{qL}{2}\right) \right] \\ P_{a\rightarrow\gamma, \text{vacuum}} &= \left(g_{a\gamma} B\right)^2 \frac{1}{q^2} \sin^2\left(\frac{qL}{2}\right) \\ P_{a\rightarrow\gamma, \text{vacuum}} &= \left(\frac{g_{a\gamma} B L}{2} \right)^2 \left(\frac{\sin\left(\frac{qL}{2}\right)}{ \left( \frac{qL}{2} \right)}\right)^2 \\ P_{a\rightarrow\gamma, \text{vacuum}} &= \left(\frac{g_{a\gamma} B L}{2} \right)^2 \left(\frac{\sin\left(\delta\right)}{\delta}\right)^2 \\ \end{align*}

*** Details on the polarization of axion induced photons :extended:

These notes refer to the notes by Biljana and Kreso [[file:~/org/Papers/Axion-photon_conversion_report_biljana.pdf]] *UPDATE*: <2023-09-08 Fri 20:14> I just noticed the following sentence in Biljana's notes:
#+begin_quote
The Faraday rotation term $Δ_R$, which depends on the energy and the longitudinal component of the external magnetic field, couples the photon polarization states $A_{\perp}$ and $A_{\parallel}$.
While Faraday rotation is important when analyzing polarized sources of photons, it plays no role in the problem at hand.
#+end_quote
which refers to off-diagonal elements $Δ_R$ that were dropped in their calculation! While we don't consider polarized *sources*, do we still need to consider this? See eq. (3.4) (reproduced here) \[ a(z, t) = a(z) e^{-i ω t} \:\: , \:\: \vec{A}(z, t) = \vektor{A_{\perp}(z) \\ A_{\parallel}(z) \\ 0} e^{-iωt} \] for the definition of the axion and photon in this context. The equation of motion is eq. (3.8) \[ \left[ (ω² + ∂²_z) \mathbf{1} - \mathbf{M} \right] \vektor{A_{\perp}(z) \\ A_{\parallel}(z) \\ a(z)} = 0 \] and with the definition of eq. (3.14) \[ \mathbf{M} = \mtrix{ m²_γ & 0 & 0 \\ 0 & m²_γ & -ω g_{aγ} B_T \\ 0 & -ω g_{aγ} B_T & m²_a \\ } \text{ where } m²_γ = ω²_p \] we can (as mentioned in the text) see that indeed the orthogonal $\vec{A}$ component is independent of the axion field (the first row of $\mathbf{M}$ only has an entry in the first column, i.e. the product only yields an equation for $A_{\perp}$ alone, without the axion field). Due to the difference between the interaction ($A_{\parallel}, a$) and propagation eigenstates ($A'_{\parallel}, a'$) (connected via a rotation matrix), the initially separate eigenstates $A_{\parallel}$ and $a$ end up mixing in the propagation eigenstates (see page 9). Eq. 3.34 \[ i∂_z \vektor{A'_{\parallel}(z) \\ a'(z)} = \mathbf{H_D} \vektor{A'_{\parallel}(z) \\ a'(z)} \] is then the equation of motion for the propagation eigenstates. The solutions are then eq. 3.38 \begin{align*} A_{\parallel}(z) &= A_{\parallel}(0) \left( \cos² θ e^{-iλ_+z} + \sin² θ e^{-iλ_-z}\right) + a(0) \frac{\sin 2θ}{2} \left( e^{-iλ_+z} - e^{-iλ_-z}\right) \\ a(z) &= A_{\parallel}(0) \frac{\sin 2θ}{2} \left( e^{-iλ_+z} - e^{-iλ_-z}\right) + a(0) \left( \sin² θ e^{-iλ_+z} + \cos² θ e^{-iλ_-z}\right) \\ \end{align*} which - with the assumption for our experiment \[ A_{\parallel}(0) = 0 \text{ and } a(0) = 1, \] i.e.
we start with purely axions and no photons before the magnet - can then be simplified to 3.39 and 3.40 \begin{align*} A_{\parallel}(z) &= a(0) \frac{\sin 2θ}{2} \left( e^{-iλ_+z} - e^{-iλ_-z}\right) \\ a(z) &= \sin² θ e^{-iλ_+z} + \cos² θ e^{-iλ_-z} \\ \end{align*} What this implies is that the photon contribution after mixing, which can end up as a detected physical photon, is only of $A_{\parallel}$ type. This (again, if my rusty understanding is not failing me) means that the produced photons all share the same polarization, parallel to the constant $\vec{B}$ field (compare with fig. 1).

** Solar axion flux
:PROPERTIES:
:CUSTOM_ID: sec:theory:solar_axion_flux
:END:

The effective Lagrangian as shown in eq. [[eq:theory:axion:general_axion_couplings]] allows for multiple different axion interactions, which result in the production of axions in the Sun. For KSVZ-like axion models (models with only $g_{aγ}$) the only interaction allowing for axion production in the Sun is the Primakoff effect [fn:primakoff_effect] for axions. For DFSZ models with an axion-electron coupling $g_{ae}$, multiple other production channels are viable. One of the first papers to look at the implications of the axion in terms of astrophysical phenomena is cite:PhysRevD.18.3605 by K. Mikaelian. G. Raffelt expanded on this later in cite:raffelt1986astrophysical with calculations of the Compton and Bremsstrahlung production rates for DFSZ axion models. In cite:raffelt1988plasmon he further calculates the production rate for the Primakoff effect (and later reviews the physics in cite:raffelt1996stars). J. Redondo combined all production processes in cite:Redondo_2013 to compute a full solar axion flux based on the axion-electron coupling $g_{ae}$, using numerical calculations of the expected metallicity contents at different points in the Sun.
This is done by making use of the opacities for different elements at different pressures and temperatures as tabulated by the 'Opacity Project' cite:team1995opacity,hummer1988equation,seaton1987atomic,seaton1994opacities,badnell2005updated,seaton2005mnras, based on the values provided by the numerical AGSS09 [[cite:&agss09_chemical]] solar model. The considered axion production channels are:
- ($P$) Primakoff production via $g_{aγ}$, in both KSVZ and DFSZ axion models \[ γ + γ ↦ a \]
- ($\text{ff}$) electron-ion bremsstrahlung (the low energy photon analogue is known as free-free radiation in radio astronomy) \[ e + Z \longrightarrow e + Z + a \]
- ($ee$) electron-electron bremsstrahlung cite:raffelt1986astrophysical, \[ e + e \longrightarrow e + e + a \]
- ($\text{fb}$) electron capture (also called recombination or free-bound electron transitions) \[ e + Z \longrightarrow a + Z^- \]
- ($C$) Compton scattering cite:raffelt1986astrophysical \[ e + \gamma \longrightarrow e + a \]
- ($\text{bb}$) de-excitation (bound-bound electron transitions) via an axion \[ Z^* \longrightarrow Z + a \]
See fig. [[fig:theory:axion:axion_couplings]] for the corresponding Feynman diagrams.

#+CAPTION: Feynman diagrams of all contributing axion production
#+CAPTION: channels in the Sun for non-hadronic models. In hadronic models only
#+CAPTION: the Primakoff interaction has meaningful contributions, because axion-electron
#+CAPTION: couplings only arise at loop level. Taken from cite:Redondo_2013.
#+NAME: fig:theory:axion:axion_couplings
[[~/phd/Figs/axion_prod_channels_javi.pdf]]

With these production channels we can write down the expected axion flux on Earth, based on the production rate per volume in the Sun, as the integral \begin{equation} \frac{d Φ_a}{dω} = \frac{1}{4π R²_{\text{Earth}}} \int_{\text{Sun}} \mathrm{d}V\, \frac{4π ω²}{ (2π)³ } Γ_a(ω),
\end{equation} where \begin{equation} Γ_a(\omega) = Γ^{\text{ff}}_a + Γ^{\text{fb}}_a + Γ^{\text{bb}}_a + Γ^C_a + Γ^{ee}_a + Γ^P_a \end{equation} is the sum of all contributing axion production channels. The superscripts correspond to the bullet points above. Most importantly, without going into the details of how each $Γ$ can be expressed, they each scale with the square of the respective coupling constant: $g²_{aγ}$ for $Γ^P_a$ and $g²_{ae}$ for the others. In cite:Redondo_2013 these production rates are expressed by relating them to the corresponding photon production rates for these processes, which are well known. For a detailed look, see cite:Redondo_2013 and the master's thesis of Johanna von Oy cite:vonOy_MSc, in which she -- among other things -- reproduced the calculations done by Redondo. Her work is used as part of this thesis to compute the axion production as required for the expected axion flux in the limit calculation (and provides the data for the plots in this section). The code responsible for computing the emission rates for different axion models is cite:JvO_axionElectron, in particular the ~readOpacityFile~ program. It also uses the Opacity Project as the basis to compute the opacities for different elements. Fig. sref:fig:theory:solar_axion_flux:differential_flux shows the differential axion flux arriving on Earth based on the different contributing interactions, in this case using $g_{ae} = \num{1e-13}, g_{aγ} = \SI{1e-12}{GeV⁻¹}$. We see that the total flux (blue line) peaks at roughly $\SI{1}{keV}$, with small spikes from atomic interactions visible. The Primakoff flux has been amplified by a factor of $100$ here to make it more visible, as for this choice of coupling constants its contribution is negligible.
As a result of the expected extremely low mass of the axion, the expected solar axion spectrum for Primakoff-only models is essentially a blackbody spectrum corresponding to the temperatures near the core of the Sun, $\mathcal{O}(\SI{15e6}{K})$, see fig. sref:fig:theory:solar_axion_flux:blackbody and compare it to the Primakoff flux on the left [fn:blackbody_differences]. Partially for this reason, in many cases analytical expressions are given to describe the Primakoff axion flux, which are fits obtained for specific solar models. One such recent result is from cite:weighingSolarAxion,

#+NAME: eq:theory:solar_axion_flux:primakoff
\begin{equation} \frac{\dd Φ_a}{\dd E_a} = Φ_{P10} \left( \frac{g_{aγ}}{\SI{1e-10}{GeV⁻¹}} \right)² \frac{E^{2.481}_a}{e^{E_a / \SI{1.205}{keV}}} \end{equation}

where $Φ_{P10} = \SI{6.02e10}{keV⁻¹.cm⁻².s⁻¹}$ and $E_a$ is the energy of the axion in $\si{keV}$. cite:weighingSolarAxion also contains analytic expressions for the Compton and Bremsstrahlung components. For the atomic processes this is not possible.

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "Differential axion flux")
  (label "fig:theory:solar_axion_flux:differential_flux")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/axions/differential_solar_axion_flux_by_type.pdf"))
 (subfigure (linewidth 0.5)
  (caption ($ (SI "15e6" "K")) " blackbody spectrum")
  (label "fig:theory:solar_axion_flux:blackbody")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/blackbody_spectrum_solar_core.pdf"))
 (caption (subref "fig:theory:solar_axion_flux:differential_flux") "Differential solar axion flux based on the different interaction types using " ($ "g_{ae} = " (num 1e-13) ", g_{aγ} = " (SI 1e-12 "GeV^{-1}")) ". The Primakoff contribution was scaled up by a factor 100 to make it visible, as at these coupling constants the $g_{ae}$ contributions dominate.
" (subref "fig:theory:solar_axion_flux:blackbody") " shows a blackbody spectrum corresponding to " ($ (SI "15e6" "K")) ", roughly the temperature at the solar core. Up to a scaling factor it is essentially the Primakoff flux.")
 (label "fig:theory:solar_axion_flux:flux_blackbody"))
#+end_src

Fig. sref:fig:theory:solar_axion_flux:flux_vs_energy_and_radius shows how the solar axion flux (for DFSZ models) depends both on the energy and the relative radius in the Sun. We can see clearly that the major part of the axion flux comes from a region between $\SIrange{7.5}{17.5}{\%}$ of the solar radius. The reason is the cubic scaling of the associated volumes per radius on the lower end and dropping temperatures and densities at the upper end. Interesting substructure due to the details of the axion-electron coupling is visible. The radial component alone, comparing KSVZ to DFSZ models, is seen in fig. sref:fig:theory:solar_axion_flux:radial_dependence_ksvz_dfsz, where we can see that the DFSZ flux drops off significantly at a specific radius, resulting in the net flux originating from slightly smaller radii.

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "Flux")
  (label "fig:theory:solar_axion_flux:flux_vs_energy_and_radius")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/axions/flux_by_energy_vs_radius_axion_electron.pdf"))
 (subfigure (linewidth 0.5)
  (caption "Radial emission")
  (label "fig:theory:solar_axion_flux:radial_dependence_ksvz_dfsz")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/axions/solar_axion_radial_emission.pdf"))
 (caption (subref "fig:theory:solar_axion_flux:flux_vs_energy_and_radius") "Solar axion flux for DFSZ models showing the flux dependence on both the energy and relative solar radius. Dominant contribution below " ($ "0.3 R_{\\odot}") ". " (subref "fig:theory:solar_axion_flux:radial_dependence_ksvz_dfsz") "Difference in the radial contributions of axion flux for KSVZ models against DFSZ models.
DFSZ model production is constrained to slightly smaller radii.")
 (label "fig:theory:solar_axion_flux:flux_radial"))
#+end_src

The used solar model is an important part of the input for the calculation of the emission rate and thus the differential flux. cite:Hoof_2021 conclude -- based on a similar code cite:Hoof_SolarAxionFlux_2021 -- that the statistical uncertainty of the solar models is $\mathcal{O}(\SI{1}{\%})$, while the systematic uncertainty can reach up to $\mathcal{O}(\SI{5}{\%})$. This is an important consideration for the systematic uncertainties later on.

[fn:primakoff_effect] The Primakoff effect -- named after Henry Primakoff -- is the production of neutral pions by photons in the Coulomb field of an atomic nucleus. Due to the pseudoscalar nature and the coupling to photons of both neutral pions and axions, the equivalent process is allowed for axions.

[fn:blackbody_differences] The difference in the peak position between the blackbody spectrum and the Primakoff flux is due to the axion production originating dominantly from $\SIrange{7.5}{17.5}{\%}$ of the solar radius (compare fig. sref:fig:theory:solar_axion_flux:flux_radial), where temperatures on average are closer to $\sim\SI{12e6}{K}$.

# [fn:sikivie_effect] The Primakoff effect for axions is sometimes also called the Sikivie
# effect, after Pierre Sikivie who realized

*** TODOs for this section [/] :noexport:
- [X] *MENTION THAT Γ ARE PROPORTIONAL TO g²*
- [ ] *REORDER THIS. THIS NEEDS TO BE FURTHER UP? DOESNT MAKE SENSE AFTER EQUATION ABOVE* -> Referring to starting from "One of the first papers to look at the implications of the axion in"
- [ ] Mention Sikivie effect?
- [X] Mention axion-nucleon coupling cite:di2022probing -> Mentioned in the context of the effective Lagrangian
- [X] Axion flux directly follows from couplings. Show the Feynman diagrams from Redondo
- [ ] Show the basic gist from Redondo, i.e. Γ of the different contributions -> Not sure!
- [X] Primakoff is effectively just Blackbody spectrum *!!!
WRITE ME* - [ ] *INSERT PLOTS!!!!!!!!!* For Primakoff relate to blackbody? Show axion-electron + primakoff left, then pure blackbody at solar core on right? - [X] Primakoff has a analytical expressions for the entire flux - [ ] can of course also be computed numerically from a solar model - [X] Explain that Γ axion is being related to Γ photon - [X] Raffelt derived a whole bunch of those Γ for axion electron coupling!!! -> cite:raffelt1986astrophysical There is also cite:PhysRevD.18.3605 from earlier! *KEEP IN MIND:* [[file:~/org/Papers/CAST/cast_phase_I_results_andriamonje2007.pdf]] Mention it as a very succinct derivation / explanation of origin solar axion flux etc. Important for us? How do we detect them. Interaction tells us conversion is proportional to B and L. Where are strong Bs for long Ls? Solar core. Take modern solar model to plot the density profile & especially temperature. Density + temperature allows us to compute: - number of photons - at various photon energies By wrapping blackbody radiation (ref, 3 sentences about it) present in solar core with Primakoff coupling, we get an effective axion flux equivalent to: $dΦ/dE ∝ g_{aγ}² · \text{black body radiation}$ *CHECK CAST PHASE I RESULT PAPER FOR OVERVIEW* (contains physics + integration over solar model!) Refer to that paper in particular to answer the question: "do axions escape from the sun?" *BIBBER* cite:vanBibber1989 contains derivation of axion flux based on black body radiation. First CAST paper bases their flux on this, with a modification from some other paper & a newer solar model from 2001 ("reference" 15 in that CAST paper). This reference *also* contains a derivation of axion equations of motion etc. via KG equation. \begin{equation} \frac{d N_a}{dV\, dt} = \int \frac{\mathrm{d}^3 \mathbf{k}}{(2\pi)^3} \Gamma^P_a(\omega) = \int^{\infty}_0 \frac{\omega^2 \mathrm{d}\omega}{2\pi^2} \Gamma^P_a(\omega), \end{equation} *** Primakoff flux :extended: Including analytical equation for flux... 
:) #+begin_src nim :tangle /tmp/solar_axion_flux.nim :results silent import unchained, ggplotnim, math, chroma, ginger defUnit(keV⁻¹•m⁻²•yr⁻¹) defUnit(keV⁻¹•cm⁻²•s⁻¹) defUnit(GeV⁻¹) proc axionFluxPrimakoff(E_a: keV, g_aγ: GeV⁻¹): keV⁻¹•cm⁻²•s⁻¹ = ## dΦ_a/dE taken from paper about first CAST results cite:PhysRevLett.94.121301 let g₁₀ = g_aγ / 1e-10.GeV⁻¹ # * 10e10.GeV¹ # result = g₁₀^2 * 3.821e10.cm⁻²•s⁻¹•keV⁻¹ * (E_a / 1.keV)^3 / (exp(E_a / (1.103.keV)) - 1) proc axFluxPerYear(E_a: keV, g_aγ: GeV⁻¹): keV⁻¹•m⁻²•yr⁻¹ = result = axionFluxPrimakoff(E_a, g_aγ).to(keV⁻¹•m⁻²•yr⁻¹) proc axionFluxPrimakoffMasterThesis(ω: keV, g_ay: GeV⁻¹): keV⁻¹•m⁻²•yr⁻¹ = # axion flux produced by the Primakoff effect # in units of m^(-2) year^(-1) keV^(-1) # From the CAST 2013 paper on axion electron coupling, eq 3.1 result = 2.0 * 1e18.keV⁻¹•m⁻²•yr⁻¹ * (g_ay / 1e-12.GeV⁻¹)^2 * pow(ω / 1.keV, 2.450) * exp(-0.829 * ω / 1.keV) let E = linspace(1e-3, 14.0, 1000) let df = seqsToDf(E) .mutate(f{float: "Flux" ~ axionFluxPrimakoff(`E`.keV, 1e-11.GeV⁻¹).float}) .mutate(f{float: "FluxYr" ~ axFluxPerYear(`E`.keV, 1e-11.GeV⁻¹).float}) .mutate(f{float: "FluxMSc" ~ axionFluxPrimakoffMasterThesis(`E`.keV, 1e-11.GeV⁻¹).float}) ggplot(df, aes("E", "Flux")) + geom_line() + #geom_line(aes = aes(y = "FluxMSc"), color = some(parseHex("0000FF"))) + ggtitle("Solar axion flux due to Primakoff production, g_aγ = 10⁻¹¹·GeV⁻¹") + xlab("Energy [keV]") + #ylab("Axion flux [keV⁻¹·cm⁻²·s⁻¹]") + ylab("Axion flux [keV⁻¹·m⁻²·yr⁻¹]") + ggsave("/tmp/primakoff_axion_flux.pdf") ggplot(df.mutate(f{"Flux" ~ `Flux` / 1e8}), aes("E", "Flux")) + geom_line() + #xlab("Energy [keV]", tickFont = font(12.0), margin = 1.5) + xlab(r"\fontfamily{lmss}\selectfont Energy [$\si{\keV}$]", margin = 2.0, font = font(16.0), tickFont = font(16.0)) + xlim(0, 14) + #ylab("Axion flux [10¹⁰ keV⁻¹·cm⁻²·s⁻¹]", margin = 1.5) + ylab(r"\fontfamily{lmss}\selectfont Axion flux [\SI[print-unity-mantissa=false]{1e11}{\keV^{-1} \cm^{-2} \second^{-1}}]", margin 
= 2.0, font = font(16.0)) + # tickFont = font(12.0)) + #ggtitle(r"Expected solar axion flux, g_aγ = 10⁻¹⁰ GeV⁻¹", titleFont = font(12.0)) + annotate(r"\fontfamily{lmss}\selectfont Expected solar axion flux" & r"\\$g_{aγ} = \SI[print-unity-mantissa=false]{1e-11}{\GeV^{-1}}$", #10⁻¹⁰ GeV⁻¹", x = 6.2, y = 6.2, font = font(16.0), backgroundColor = transparent) + #ggtitle(r"Expected solar axion flux, $g_{aγ} = \SI{1e-11}{\GeV^{-1}}$", titleFont = font(12.0)) + #ggsave("/tmp/cristina_primakoff_axion_flux.pdf", width = 400, height = 300) #, useTeX = true, standalone = true) ggsave("/tmp/cristina_primakoff_axion_flux.pdf", useTeX = true, standalone = true) defUnit(m⁻²•yr⁻¹) echo 1.cm⁻²•s⁻¹.to(m⁻²•yr⁻¹) #+end_src There are different analytical expressions for the solar axion flux for Primakoff production. These stem from the fact that a solar model is used to model the internal density, temperature, etc. in the Sun to compute the photon distribution (essentially the blackbody radiation) near the core. From it (after converting via the Primakoff effect) we get the axion flux. Different solar models result in different expressions for the flux. The first one uses an older model, while the latter ones use newer models. *** Axion-electron flux :noexport: *citations*: Redondo 2013, maybe (Johanna + Sebastian Hoof something?) *Keep in mind errors in Redondo 2013*! *possibly write a mail to Sebastian Hoof* Expected axion flux combined. Reference to file storing the results for specific coupling constants. Much more complicated. ABC components. B and C can be expressed analytically. A cannot, needs opacity project. Show plot of differential axion flux. For a derivation of this, consider section about ray tracing. Custom computation of A done by Johanna in code developed by her & me in *LINK*. 
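The two analytical Primakoff parametrizations given above should describe nearly the same flux. As a quick numerical cross-check, here is a short Python sketch (illustrative only, not part of the thesis tooling; the unit conversion factors are standard values I insert here) comparing them at $g_{aγ} = \SI{1e-11}{GeV^{-1}}$; they agree to within roughly $\SI{10}{\percent}$ over the relevant energy range:

```python
import math

# Unit conversion factors (assumed standard values)
CM2_PER_M2 = 1e4        # 1 cm⁻² = 1e4 m⁻²
S_PER_YR   = 3.1536e7   # seconds per year

def flux_cast2005(E_keV, g_agamma):
    """Primakoff flux of the first CAST results paper, in keV⁻¹·cm⁻²·s⁻¹."""
    g10 = g_agamma / 1e-10
    return g10**2 * 3.821e10 * E_keV**3 / (math.exp(E_keV / 1.103) - 1.0)

def flux_cast2013(E_keV, g_agamma):
    """Primakoff flux of the CAST 2013 paper (eq. 3.1), in keV⁻¹·m⁻²·yr⁻¹."""
    return 2.0e18 * (g_agamma / 1e-12)**2 * E_keV**2.450 * math.exp(-0.829 * E_keV)

g = 1e-11  # GeV⁻¹
ratios = []
for E in (2.0, 4.0, 6.0):  # keV
    f1 = flux_cast2005(E, g) * CM2_PER_M2 * S_PER_YR  # convert to keV⁻¹·m⁻²·yr⁻¹
    f2 = flux_cast2013(E, g)
    ratios.append(f1 / f2)
print(ratios)  # all ratios close to 1
```

The residual differences at the few-percent level reflect the different solar models underlying the two fits.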
*** Generate figures for Primakoff and axion flux from ~readOpacityFile~ output [4/5] :extended: :PROPERTIES: :CUSTOM_ID: sec:theory:gen_solar_axion_flux_plots :END: - Use logic we use in ~TrAXer~ to compute differential fluxes as a function of radii. - [X] Do it for each type of flux independently - [X] Compute a smooth version of the *radial distribution* comparing axion-electron to Primakoff! - [X] Black body spectrum -> Produced below in a separate plot! - [X] Radial emission vs energy heatmap!! - [X] *ADD COMMAND TO PRODUCE CSV FILE!* First we produce the differential flux CSV file. For more details, see sec. [[#sec:appendix:raytracing:generate_axion_image]]. #+begin_src sh :dir ~/CastData/ExternCode/AxionElectronLimit/src ./readOpacityFile \ --suffix "_0.989AU" \ --distanceSunEarth 0.9891144450781392.AU \ --fluxKind fkAxionElectronPhoton \ --plotPath ~/phd/Figs/readOpacityFile/ \ --outpath ~/phd/resources/readOpacityFile/ #+end_src Then the differential flux by type: #+begin_src nim :results drawer :flags -d:experimentalSDL2 -d:QuietTikZ=true import ggplotnim const fluxPath = "~/phd/resources/readOpacityFile/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv" let fCol = "Flux / keV⁻¹ m⁻² yr⁻¹" let df = readCsv(fluxPath) .filter(f{string -> bool: `type` notin ["LP Flux", "TP Flux", "57Fe Flux"]}) .mutate(f{float: "diffFlux" ~ (if (idx("type", string) == "Primakoff Flux"): idx(fCol) * 100 else: idx(fCol))}) .mutate(f{string: "type" ~ (if (`type` == "Primakoff Flux"): "Primakoff·100" else: `type`)}) ggplot(df, aes("Energy [keV]",f{`diffFlux` / 1e20}, color = "type")) + geom_line() + xlab(r"Energy [$\si{keV}$]") + ylab(r"Flux [$\SI{1e20}{keV^{-1}.m^{-2}.yr^{-1}}$]") + xlim(0, 15) + ggtitle("Differential axion flux arriving on Earth") + themeLatex(fWidth = 0.5, width = 600, baseTheme = sideBySide) + # width 360? 
# ggshow(800, 480) ggsave("~/phd/Figs/axions/differential_solar_axion_flux_by_type.pdf", useTeX = true, standalone = true, width = 600, height = 360) #+end_src #+RESULTS: :results: [INFO]: No plot ratio given, using golden ratio. [INFO] TeXDaemon ready for input. shellCmd: command -v lualatex shellCmd: lualatex -output-directory /home/basti/phd/Figs/axions /home/basti/phd/Figs/axions/differential_solar_axion_flux_by_type.tex Generated: /home/basti/phd/Figs/axions/differential_solar_axion_flux_by_type.pdf :end: Now the radial distribution of the flux by coupling constant. #+begin_src nim :flags -d:danger :results none import std / [sequtils, algorithm] import ggplotnim, unchained import ggplotnim/ggplot_sdl2 type FluxData* = object fRCdf*: seq[float] diffFluxR*: seq[seq[float]] fluxesDf*: DataFrame radii*: seq[float] energyMin*: float energyMax*: float const alpha = 1.0 / 137.0 g_ae = 1e-13 # Redondo 2013: 0.511e-10 gagamma = 1e-12 #the latter for DFSZ #1e-9 #5e-10 # ganuclei = 1e-15 #1.475e-8 * m_a #KSVZ model #no units #1e-7 m_a = 0.0853 #eV m_e_keV = 510.998 #keV e_charge = sqrt(4.0 * PI * alpha)#1.0 kB = 1.380649e-23 r_sun = 696_342_000_000.0 # .km.to(mm).float # SOHO mission 2003 & 2006 hbar = 6.582119514e-25 # in GeV * s keV2cm = 1.97327e-8 # cm per keV^-1 amu = 1.6605e-24 #grams r_sunearth = 150_000_000_000_000.0 const factor = pow(r_sun * 0.1 / (keV2cm), 3.0) / (pow(0.1 * r_sunearth, 2.0) * (1.0e6 * hbar)) / (3.1709791983765E-8 * 1.0e-4) # for units of 1/(keV y m²) import ggplotnim proc getFluxRadiusCDF*(path: string): FluxData = var emRatesDf = readCsv(path) # get all radii and energies from DF so that we don't need to compute them manually (risking to # messing something up!) 
# sort both just to make sure they really *are* in ascending order let radii = emRatesDf["Radius"] .unique() .toTensor(float) .toSeq1D .sorted(SortOrder.Ascending) let energies = emRatesDf["Energy [keV]"] .unique() .toTensor(float) .toSeq1D .sorted(SortOrder.Ascending) var emRates = newSeq[seq[float]]() ## group the "solar model" DF by the radius & append the emission rates for all energies ## to the `emRates` for tup, subDf in groups(emRatesDf.group_by("Radius")): let radius = tup[0][1].toFloat #doAssert subDf["Energy [keV]", float].toSeq1D.mapIt(it.keV) == energies #doAssert radius == radii[k], "Input DF not sorted correctly!" emRates.add subDf["emRates", float].toSeq1D var fluxRadiusCumSum: seq[float] = newSeq[float](radii.len) diffRadiusSum = 0.0 template toCdf(x: untyped): untyped = let integral = x[^1] x.mapIt( it / integral ) var fluxesDf = newDataFrame() var diffFluxR = newSeq[seq[float]](radii.len) var rLast = 0.0 for iRad, radius in radii: # emRates is seq of radii of energies let emRate = emRates[iRad] var diffSum = 0.0 var diffFlux = newSeq[float](energies.len) for iEnergy, energy in energies: let dFlux = emRate[iEnergy] * (energy.float*energy.float) * radius*radius * (radius - rLast) * factor diffFlux[iEnergy] = dFlux diffSum += dFlux fluxesDf.add toDf({"Energy" : energies.mapIt(it.float), "Flux" : diffFlux, "Radius" : radius}) diffRadiusSum += diffSum fluxRadiusCumSum[iRad] = diffRadiusSum diffFluxR[iRad] = diffFlux rLast = radius result = FluxData(fRCdf: fluxRadiusCumSum.toCdf(), fluxesDf: fluxesDf, diffFluxR: diffFluxR, radii: radii, energyMin: energies.min, energyMax: energies.max) proc fluxToDf(data: FluxData, typ: string): DataFrame = result = toDf({ "Radius" : data.radii, "fluxPerRadius" : data.diffFluxR.mapIt(it.sum), "Type" : typ }) .mutate(f{"fluxPerRadius" ~ `fluxPerRadius` / col("fluxPerRadius").max}) proc main = const ksvzPath = "~/CastData/ExternCode/AxionElectronLimit/resources/solar_model_dataframe_fluxKind_fkAxionPhoton_0.989AU.csv" 
const dfszPath = "~/CastData/ExternCode/AxionElectronLimit/resources/solar_model_dataframe_fluxKind_fkAxionElectronPhoton_0.989AU.csv" let ksvz = getFluxRadiusCdf(ksvzPath) let dfsz = getFluxRadiusCdf(dfszPath) var df = newDataFrame() df.add ksvz.fluxToDf("KSVZ"); df.add dfsz.fluxToDf("DFSZ") ggplot(df, aes("Radius", "fluxPerRadius", color = "Type")) + geom_line() + xlab("Relative solar radius") + ylab("Relative emission") + ggtitle("Radial emission: KSVZ ($g_{aγ}$), DFSZ ($g_{aγ}, g_{ae}$)", titleFont = font(11.0)) + themeLatex(fWidth = 0.5, width = 600, baseTheme = sideBySide) + # golden ratio or height = 360, ? #ggshow() ggsave("/home/basti/phd/Figs/axions/solar_axion_radial_emission.pdf", useTeX = true, standalone = true) # , width = 800, height = 480) echo dfsz.fluxesDf let dfFlux = dfsz.fluxesDf .filter(f{`Energy` <= 10.0 and `Radius` <= 0.5}) ggplot(dfFlux, aes("Energy", "Radius", fill = "Flux")) + geom_raster() + scale_fill_continuous(scale = (0.0, percentile(dfFlux["Flux", float], 99))) + ggtitle("Flux by energy and fraction of solar radius") + xlab(r"Energy [$\si{keV}$]") + ylab(r"Relative solar radius") + themeLatex(fWidth = 0.5, width = 600, baseTheme = sideBySide) + # golden ratio or height = 360, ? xlim(0, 10) + ylim(0, 0.5) + legendPosition(0.72, 0.0) + ggsave("/home/basti/phd/Figs/axions/flux_by_energy_vs_radius_axion_electron.pdf", useTeX = true, standalone = true) #ggshow() # 640, 480) main() #+end_src Hmm, this looks a bit bizarre. But comparing with - [[~/org/Figs/statusAndProgress/axionProduction/sampled_radii_axion_electron.pdf]] - [[~/org/Figs/statusAndProgress/axionProduction/sampled_radii_primakoff.pdf]] the weird structure of the axion-electron emission is actually visible a bit (in the form of sharp drops, in particular near 0.2). The question is really "why", but well. If it was some kind of sampling issue, I would assume it comes out of the solar model. 
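The radius-sampling CDF constructed in the code above boils down to a few steps: weight the emission rate of each radial shell by $E²$, $r²$ and the shell thickness, accumulate over radii, and normalize by the integral. A minimal Python sketch with random stand-in emission rates (the real inputs come from the solar model CSV produced by ~readOpacityFile~):

```python
import numpy as np

# Toy emission rates on a (radius, energy) grid; stand-in for the solar model data
rng = np.random.default_rng(0)
radii    = np.linspace(0.005, 1.0, 200)   # fraction of solar radius
energies = np.linspace(0.1, 15.0, 100)    # keV
em_rates = rng.random((radii.size, energies.size))

# dΦ ∝ emission rate · E² · r² · dr, summed over energies for each radial shell
dr = np.diff(radii, prepend=0.0)
diff_flux_r = (em_rates * energies**2).sum(axis=1) * radii**2 * dr

# cumulative distribution over radius, normalized by its last entry
cdf = np.cumsum(diff_flux_r)
cdf /= cdf[-1]
assert cdf[-1] == 1.0 and np.all(np.diff(cdf) >= 0.0)
```

Sampling a radius then amounts to drawing a uniform number and inverting this CDF, which mirrors the `toCdf` template of the Nim code.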
*** Black body radiation in solar core :extended: Let's compute the black body radiation for the solar core and see if it matches the energy spectrum we expect for axions. Planck's law is defined as *CITE SOMETHING*: \[ B_ν(ν, T) = \frac{2hν³}{c²} \frac{1}{e^{hν/kT} - 1} \] where $ν$ is the frequency of the photon and $T$ the temperature in Kelvin. $k$ is of course the Boltzmann constant and $h$ the Planck constant. Let's see what this looks like for $T = \SI{15}{\mega\kelvin}$. [[~/phd/Figs/blackbody_spectrum_solar_core.pdf]] #+begin_src nim :tangle /home/basti/phd/code/black_body_sun_core.nim :results drawer :flags -d:QuietTikZ=true import ggplotnim, unchained, sequtils import ggplotnim / ggplot_sdl2 #defUnit(s⁻¹) #defUnit(μs⁻¹) defUnit(Watt•Steradian⁻¹•Meter⁻²•NanoMeter⁻¹) defUnit(Joule•Meter⁻²•Steradian⁻¹) let T_sun = 15.MegaKelvin.to(Kelvin) proc blackBody(ν: s⁻¹, T: Kelvin): Joule•Meter⁻²•Steradian⁻¹ = result = (2 * hp * ν^3 / c^2 / (exp(hp * ν / (k_B * T)) - 1)).to(Joule•Meter⁻²•Steradian⁻¹) proc xrayEnergyToFreq(E: keV): s⁻¹ = ## converts the input energy in keV to a correct frequency result = E.to(Joule) / hp echo 1.keV.xrayEnergyToFreq echo "Solar core temperature ", T_Sun, " in keV : ", T_Sun.toNaturalUnit().to(keV) echo blackBody(1.μHz.to(Hz), T_sun) echo blackBody(1.keV.xrayEnergyToFreq, T_sun) let energies = linspace(0.01, 15.0, 1000) let radiance = energies.mapIt(blackBody(it.keV.xrayEnergyToFreq, T_sun).float) let df = seqsToDf(energies, radiance) ggplot(df, aes("energies", "radiance")) + geom_line() + ggtitle(r"Black body radiation @ $T = \SI{15e6}{K}$") + xlab(r"Energy [$\si{keV}$]") + ylab(r"Radiance [$\si{J.m^{-2}.sr^{-1}}$]") + xlim(0, 15) + themeLatex(fWidth = 0.5, width = 600, baseTheme = sideBySide) + # golden ratio or height = 360, ? 
#ggshow() ggsave("/home/basti/phd/Figs/blackbody_spectrum_solar_core.pdf", useTeX = true, standalone = true, width = 600, height = 360)
#+end_src

#+RESULTS:
:results:
2.41799e+17 Hz
Solar core temperature 1.5e+07 K in keV : 1.2926 keV
inf J•sr⁻¹•m⁻²
178.526 J•sr⁻¹•m⁻²
[INFO]: No plot ratio given, using golden ratio.
[INFO] TeXDaemon ready for input.
shellCmd: command -v lualatex
shellCmd: lualatex -output-directory /home/basti/phd/Figs /home/basti/phd/Figs/blackbody_spectrum_solar_core.tex
Generated: /home/basti/phd/Figs/blackbody_spectrum_solar_core.pdf
:end:

** Chameleons
:PROPERTIES:
:CUSTOM_ID: sec:theory:chameleon
:END:
Chameleons are a different type of hypothetical particle: a scalar particle arising from extensions to general relativity, which acts as a "fifth force" and can be used to model dark energy. We will not go into detail about the underlying theory here. Refer to [[cite:&waterhouse2006chameleons]] for an in-depth introduction to chameleon gravity and [[cite:&brax15_distinguish]] on how chameleons differ from other modified gravity models. Suffice it to say, chameleon theory also yields a coupling to photons, $β_γ$, which can be utilized for conversion into X-rays in transverse magnetic fields. Similarly, chameleons can therefore be produced in the Sun. However, the chameleon field has the peculiar property that its mass depends on the density of the surrounding medium. This means chameleons cannot be produced in the solar core and escape. A suitable location to produce chameleons is the solar tachocline region: a region of differential rotation at the boundary between the inner radiative zone and the outer convective zone, at around $0.7 · R_{\odot}$. The differential rotation leads to strong magnetic fields at low enough densities. Due to the much lower temperatures in the solar tachocline compared to the solar core, the peak of the solar chameleon spectrum is below $\SI{1}{keV}$.
See [[cite:&brax12_chameleons]] and [[cite:&chameleons_sdd_cast]] for details about the chameleon production in the tachocline region and the resulting spectrum. Their possible detection with CAST was proposed in 2012 [[cite:&brax12_chameleons]], with actual searches being performed initially with a Silicon Drift Detector (SDD) [[cite:&chameleons_sdd_cast]] and later with a GridPix detector using data taken in 2014/15 [[cite:&krieger2018search;&krieger_chameleon_jcap]]. In addition, the KWISP (Kinetic WISP) detector [[cite:&kwisp_first_results;&justin_phd]] was deployed at CAST, attempting to search for chameleons via the chameleon-matter coupling $β_m$, which should lead to chameleons interacting with a force sensor (without conversion to X-rays in the CAST magnet). As the chameleon flux is highly dependent on details of solar physics that are understood to a much lesser extent than the temperature and composition in the solar core (required for axions), the uncertainty on chameleon results is much larger. See also [[cite:&zanzi15_chamel_field_solar_physic]] for a discussion of chameleon fields in the context of solar physics.

The conversion probability for back conversion into X-rays in a magnetic field, assuming coherent conversion, is given by cite:brax12_chameleons:
#+NAME: eq:theory:chameleon_conversion_prob
\begin{equation}
P_{c↦γ} = \left( \frac{β_γ B L}{2 m_{\text{pl}}} \right)².
\end{equation}
Here, $m_{\text{pl}}$ is the /reduced/ Planck mass,
\[
m_{\text{pl}} = \frac{M_{\text{pl}}}{\sqrt{8 π}} = \sqrt{ \frac{ \hbar c }{ 8 π G } },
\]
i.e. using natural units with $G = \frac{1}{8π}$ instead of $G = 1$, used in cosmology because it removes the $8π$ term from the Einstein field equations. Similar to the axion-photon conversion probability, the expression is in natural units, so $B$ and $L$ must be converted accordingly or the missing constants reinserted.
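To make the required unit conversions concrete, here is an illustrative Python evaluation of the conversion probability for CAST-like values ($B = \SI{8.8}{T}$, $L = \SI{9.26}{m}$) at the chameleon-photon coupling of the later-quoted Krieger limit; the natural-unit conversion factors and the reduced Planck mass value are standard numbers inserted by me, not taken from the thesis code:

```python
import math

# Natural-unit conversion factors (assumed standard values)
TESLA_TO_GEV2 = 1.9535e-16   # 1 T ≈ 195.35 eV² = 1.9535e-16 GeV²
M_TO_INV_GEV  = 5.0677e15    # 1 m ≈ 5.0677e15 GeV⁻¹  (from ħc ≈ 197.33 MeV·fm)
M_PL_REDUCED  = 2.435e18     # reduced Planck mass in GeV

def p_chameleon_to_photon(beta_gamma, B_T, L_m):
    """Coherent chameleon→photon conversion probability (β_γ B L / 2 m_pl)²."""
    B = B_T * TESLA_TO_GEV2   # magnetic field in GeV²
    L = L_m * M_TO_INV_GEV    # magnet length in GeV⁻¹
    return (beta_gamma * B * L / (2.0 * M_PL_REDUCED))**2

# CAST-like values with β_γ at the Krieger limit
P = p_chameleon_to_photon(5.74e10, 8.8, 9.26)
print(P)  # ≈ 9e-13
```

Equivalently, $g_{βγ} = β_γ / m_{\text{pl}} ≈ \SI{2.4e-8}{GeV^{-1}}$ here, so the number can also be read off from the axion-photon form of the conversion probability.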
The equation is valid in the chameleon-matter coupling range $1 \leq β_m \leq \num{1e6}$, which corresponds to non-resonant production in the Sun and avoids significant chameleon interaction with materials on the path to the CAST magnet. Note that in many cases $M_γ = \frac{m_{\text{pl}}}{β_γ}$ is introduced. If, however, one defines its inverse, $g_{βγ} = \frac{β_γ}{m_{\text{pl}}}$, eq. [[eq:theory:chameleon_conversion_prob]] takes the form of eq. [[eq:theory:conversion_prob]] for the axion, including an effective coupling constant with units of $\si{GeV⁻¹}$.

*** TODOs for this section [/] :noexport:
- [X] ?? will depend on whether we do a chameleon limit (which we should, as our detector is much better here!) Should be easy after all, as everything is the same as for axions, except different flux, raytracing and thus limit calc (from a number perspective; concept is the same).

** Current bounds on coupling constants
:PROPERTIES:
:CUSTOM_ID: sec:theory:current_bounds
:END:
The field of axion searches has been expanding rapidly in recent years, especially regarding haloscope experiments. A thorough overview of all the different possible ways to constrain axion couplings and the best limits in each of them is beyond the scope of this thesis. We will give a succinct overview of the general ideas and reference the best current limits on the relevant coupling constants in the regions of interest for this thesis. A great, frequently updated overview of the current best axion limits is maintained by Ciaran O'Hare cite:ciaran_o_hare_2020_3932430.

Generally, axion couplings can be probed via three main avenues:
- Pure, indirect astrophysical constraints :: Different astrophysical phenomena can be used to study and constrain axion couplings. One example is the cooling rate of stars. If axions were produced inside of stars and generally managed to leave the star without interaction, they would carry energy away.
Similar to neutrinos, they would therefore contribute to stellar cooling. From observed cooling rates and knowledge of solar models, constraints can be set on potential axion contributions. Many other astrophysical sources can be probed in similar ways. In all cases these constraints are indirect in nature. Which coupling can be constrained depends on the physical processes considered.
- Direct astrophysical constraints :: Certain types of laboratory experiments attempt to measure axions directly and produce constraints in case of non-detection. Solar helioscopes attempt to directly measure axions produced in the Sun; more on these in chapter [[#sec:helioscopes]]. Haloscope experiments utilize microwave cavities in an attempt to tune to the frequency resonant with the mass of cold axions that are part of the dark matter halo. While relying on astrophysically produced axions, the intent is direct detection. Recent interest in axions also means data from WIMP experiments like those of the XENON collaboration [[cite:&aprile17_xenon_dark_matter_exper]] is being analyzed for axion signatures. Haloscopes and helioscopes depend on the axion-photon coupling $g_{aγ}$, while WIMP experiments may consider the axion-electron $g_{ae}$ or axion-nucleon $g_{aN}$ couplings. The astrophysical production mechanism adds a dependency on the coupling producing the axion source, which may mean that only constraints on products of different couplings can be given.
- Direct production constraints :: The final approach is fully laboratory-based experiments, which first attempt to _produce_ axions and then to _detect_ them. This idea is commonly realized in so-called 'light shining through a wall' (LSW) experiments like the ALPS experiment at DESY [[cite:&baehre13_any_light_partic_searc_ii]]. Here, a laser cavity in a magnetic field is intended as an axion production facility. These produced axions would leave the cavities, propagate through some kind of wall (e.g.
lead) and enter a second set of equivalent cavities, just without an active laser. Produced axions could convert back into photons in the second set of cavities. The disadvantage is that one deals with the $g_{aγ}$ coupling both in production and in reconversion. [fn:amount_axions_lsw]

The bounds of interest for this thesis are the axion-photon coupling $g_{aγ}$ for masses below around $\SI{100}{meV}$ [fn:mass_expl] and the product of the axion-photon and axion-electron couplings $g_{ae}·g_{aγ}$ in similar mass ranges. The latter applies to the case of dominant axion-electron production via $g_{ae}$ in the Sun and detection via $g_{aγ}$ in the CAST magnet. The current best limit on the axion-photon coupling from laboratory experiments is from the CAST Nature paper in 2017 cite:cast_nature, providing a bound of
\[
g_{aγ, \text{Nature}} < \SI{6.6e-11}{GeV^{−1}} \text{ at } \SI{95}{\%} \text{ CL}.
\]
Similarly, for the product of the axion-electron and axion-photon couplings, the best limit from a helioscope experiment is also from CAST, from 2013 [[cite:Barth_2013]]. This limit is
\[
g_{ae}·g_{aγ} \lesssim \SI{8.1e-23}{GeV^{-1}} \text{ for } m_a \leq \SI{10}{meV}
\]
and acts as the main comparison to the results presented in this thesis.

From astrophysical processes, the brightness of stars at the tip of the red-giant branch (TRGB) is the most stringent way to restrict the axion-electron coupling $g_{ae}$ alone. This is because axion production would induce additional cooling, which would lead to a larger core mass at helium burning ignition, resulting in brighter TRGB stars. [[cite:&capozzi20_axion_neutr_bound_improv_with]] calculate a limit of $g_{ae} < \num{1.3e-13} \text{ at } \SI{95}{\%} \text{ CL}$. A similar limit is obtained in cite:&straniero20_rgb_tip_galac_globul_clust. cite:&bertolami14_revis_axion_bound_from_galac compute a comparable limit from the White Dwarf luminosity function.
However, for purely astrophysical coupling constraints, the strong assumptions that need to be made about the underlying physical processes imply that these limits are by themselves not sufficient. In fact, there is reason to believe at least some of the current astrophysical bounds are overestimated. In cite:&dennis2023tip the authors use a machine learning (ML) model to predict the brightness of TRGB stars, allowing for much faster simulations of the parameter space relevant for such bounds. Using Markov Chain Monte Carlo models based on the ML output, they show that values up to $g_{ae} = \num{5e-13}$ are not actually excluded if the full uncertainty of the stellar parameters is included. As their calculations only went up to such values, even larger couplings may still be allowed.

Finally, observations from X-ray telescopes can also be used to set limits on the product $g_{ae}·g_{aγ}$. This has been done in cite:&PhysRevLett.123.061104 based on data from the Suzaku mission and was followed up on by the same authors in cite:&dessert22_no_eviden_axion_from_obser using data from the Chandra mission. In the latter they compute an exceptionally strong limit of
\[
g_{ae} · g_{aγ} < \SI{1.3e-25}{GeV^{-1}} \text{ at } \SI{95}{\%} \text{ CL},
\]
valid for axion masses below $m_a \lesssim \SI{5e-6}{eV}$.

- Chameleons :: The current best bound on the chameleon-photon coupling was obtained using a previous GridPix detector with data taken at CAST in 2014/15. The observed limit obtained by C. Krieger in [[cite:&krieger2018search;&krieger_chameleon_jcap]] is
  \[
  β_{γ, \text{Krieger}} = \num{5.74e10} \text{ at } \SI{95}{\%} \text{ CL}.
  \]

[fn:amount_axions_lsw] While astrophysical sources of course also introduce an additional $g²$ from their production (resulting in all experiments effectively depending on $g⁴$), the advantage is that in absolute terms an astrophysical source produces orders of magnitude more axion flux than an LSW experiment.
In that sense LSW experiments suffer a squared suppression compared to those relying on astrophysical sources.

[fn:mass_expl] Above roughly $\SI{100}{meV}$ the CAST experiment is no longer in the fully coherent regime of the $\sinc$ term of the conversion probability, resulting in significant sensitivity loss.

*** Potential solar axion hints
For completeness, a few words about previous results which do not actually provide a limit, but rather show small hints of possible axion signals. One of the first credible hints of such a solar axion signal comes from the XMM-Newton telescope, where a seasonal variation of the X-ray flux at a level of $11σ$ is observed. The explanation provided in [[cite:&fraser14_poten_solar_axion_signat_x]] is axion reconversion into X-rays in Earth's magnetic field. While criticism exists [[cite:&roncadelli15_no_axion_from_sun]], there has nevertheless been recent interest in this signal [[cite:&ge22_x_ray_annual_modul_obser]], this time in the context of axion quark nuggets (AQN). An AQN explanation would produce a signal up to $\SI{100}{keV}$, outside the XMM-Newton sensitive range. The authors propose to check archival data of the Nuclear Spectroscopic Telescope Array (NuSTAR) or the Gamma-Ray Burst Monitor of the Fermi telescope for such seasonal variations.

Further, while the CAST Nature result [[cite:&cast_nature]] provides the current best limit on the axion-photon coupling, its axion candidate dataset actually shows a signal excess at \SI{3}{keV} at a $3σ$ level. A statistical effect, or a not perfectly accounted for systematic variation resulting in slightly more argon fluorescence during tracking data, is the more likely explanation, however.

In 2020 the XENON collaboration announced having seen an excess in their electron recoil counts at energies compatible with a solar axion signal [[cite:aprile20_exces_elect_recoil_event_xenon]]. A possible explanation not requiring solar axions was given as trace amounts of tritium below their sensitivity threshold.
This garnered a lot of attention because, while only of $3.4σ$ significance, it was the first hint of a potential solar axion signal published as such by a large collaboration. However, combining the resulting axion coupling with astrophysical results indicates a more likely non-axion origin for the signal [[cite:&athron21_global_fits_axion_like_partic;&luzio20_solar_axion_cannot_explain_xenon_exces]]. With the release of the first XENONnT results on new physics in 2022 [[cite:&aprile22_searc_new_physic_elect_recoil]], in which no excess is visible, the old result is ruled out. In addition, the LUX-ZEPLIN collaboration, a similar xenon-filled experiment, recently published its results on new physics. Although not as sensitive as XENONnT (but more sensitive than XENON1T), no excess was observed there either [[cite:&lux_zeppelin_2023]].

*** A few more words on haloscopes :extended:
A haloscope is a type of axion experiment consisting of a (typically microwave) cavity placed in a magnetic field. It intends to detect axions of the dark matter halo of our galaxy. Axions that are part of the dark matter component are necessarily of very low energy, as they decoupled long ago and have undergone cooling ever since. Thus, their energies are in the microwave range. If a cavity has a resonance frequency matching the axion mass (the kinetic energy is negligible, so the majority of the energy is in the mass), the conversion probability is enhanced by the quality factor $Q$ of the cavity (effectively the number of reflections in the cavity). The upside of such experiments is the strong enhancement possible, which allows reaching very low coupling constants. However, a cavity has a single resonance frequency, limiting the accessible axion masses to a very narrow range. Most experiments use cavities that can be tuned to expand the mass range. At each tuned frequency, data is taken for a fixed amount of time to reach a certain coupling constant.
As such, a tunable cavity experiment can scan a narrow band of axion masses over the course of its data taking campaign. Due to the simplicity of the setup, these types of experiments are very popular nowadays.

*** TODO about this section [/] :noexport:
- [ ] Not sure what I had in mind here:
  #+begin_quote
  Note that the below mentioned coupling constants technically are squared, due to the conversion probability being the squared conversion amplitude.
  #+end_quote
Astronomical axion bounds. Cavity bounds. Helioscope bounds.
- [X] Mention CAST Nature excess
- [X] Mention XMM Newton 2014 signal, mention 2022 update
- [X] Mention that TIP paper somewhat brings in doubt direct rejections of axion + astrophysical rejections of XENON result
*Talk about:*
- [X] cite:dennis2023tip *!!!*
*CITE*: X-Ray Signatures of Axion Conversion in Magnetic White Dwarf Stars cite:PhysRevLett.123.061104 with limit g_ae·g_aγ = 2e-24! -> They also show a plot of g_ae g_aγ exclusion
- [X] *TODO*: include newest Chandra results for coupling constant if they exist? +I don't think so.+ Yes, it does cite:dessert22_no_eviden_axion_from_obser.
- [X] *TODO*: include newest Chandra results for coupling constant -> Mostly for ultra light <1e-12 eV masses! https://iopscience.iop.org/article/10.3847/1538-4357/ab6a0c/meta https://academic.oup.com/mnras/article/510/1/1264/6448485 But this: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.128.071102 goes to around m_a < 5e-6 and g_aγ < 5.5e-13. -> Not these, but the above is included and more relevant.
- [X] *TODO*: include Xenon-1T results
- [X] *TODO*: include the new result I sent to Cristina of that other XenonNT like experiment showing limit of g_ae ~ 4e-12 or whatever it was. -> Can also find it by looking for tweet of Will cosmology guy commenting something like "suprise they didn't find anything" -> Found it: https://arxiv.org/2307.15753 !!
- [X] Mention Ciaran O'Hare's github page about axion limits!
* Axion helioscopes :Theory:
:PROPERTIES:
:CUSTOM_ID: sec:helioscopes
:END:
#+LATEX: \minitoc
As discussed in the previous chapter in section [[#sec:theory:solar_axion_flux]], stars are strong axion factories. In 1983 Pierre Sikivie proposed cite:PhysRevLett.51.1415,PhysRevD.32.2988 multiple methods to potentially detect axions, one of these making use of this solar axion production. The fundamental realization is that a transverse magnetic field can act as a means to reconvert axions back into photons, as discussed in sec. [[#sec:theory:axion_interactions]]. This allows one to design a kind of telescope, consisting of a magnet that tracks the Sun. One expects a small fraction of the axions produced in the Sun to reconvert into photons inside the magnetic field via the inverse Primakoff effect. These photons carry the energy of the original particles that produced the axions, namely the energy corresponding to the temperature in the solar core. Some kind of X-ray detector is installed behind the magnet. Ideally, an X-ray telescope is added to focus any potential X-rays from the full magnet volume onto a smaller area of the detector, massively increasing the signal-to-noise ratio.

The first implementation of the helioscope idea was the Rochester-Brookhaven-Florida experiment cite:vanBibber1989,PhysRevLett.69.2333. It was followed by the SUMICO experiment in Tokyo cite:MORIYAMA1998147,INOUE200218,INOUE200893. The third and latest helioscope is the CERN Axion Solar Telescope (CAST), which we will present in more detail in section [[#sec:helioscopes:cast]]. In the final section we will introduce the next generation of axion helioscopes, the International AXion Observatory (IAXO), in section [[#sec:helioscopes:iaxo]].
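To get a feeling for the numbers involved, the axion-photon conversion probability of eq. [[eq:theory:conversion_prob]] can be evaluated for CAST-like parameters. The following Python sketch assumes the standard vacuum expression $P_{aγ} = \left(\frac{g_{aγ} B L}{2}\right)² \operatorname{sinc}²(qL/2)$ with $q = m_a²/(2E_a)$; the unit conversion factors are standard values inserted by me:

```python
import math

# Natural-unit conversion factors (assumed standard values)
TESLA_TO_GEV2 = 1.9535e-16   # 1 T in GeV²
M_TO_INV_GEV  = 5.0677e15    # 1 m in GeV⁻¹
M_TO_INV_EV   = 5.0677e6     # 1 m in eV⁻¹

def p_conversion(g_agamma, B_T, L_m, m_a_eV=0.0, E_keV=4.2):
    """Vacuum axion→photon conversion probability incl. the coherence (sinc) term."""
    # fully coherent amplitude g·B·L/2 (dimensionless in natural units)
    amp = g_agamma * (B_T * TESLA_TO_GEV2) * (L_m * M_TO_INV_GEV) / 2.0
    q = m_a_eV**2 / (2.0 * E_keV * 1e3)   # momentum transfer in eV
    x = q * (L_m * M_TO_INV_EV) / 2.0     # qL/2, dimensionless
    sinc = 1.0 if x == 0.0 else math.sin(x) / x
    return amp**2 * sinc**2

# CAST-like parameters: B = 8.8 T, L = 9.26 m, g_aγ = 1e-10 GeV⁻¹
P0 = p_conversion(1e-10, 8.8, 9.26)              # massless axion: fully coherent
P1 = p_conversion(1e-10, 8.8, 9.26, m_a_eV=0.1)  # 100 meV: coherence lost
print(P0, P1 / P0)
```

The massless case comes out at order $\num{1e-17}$, while at $m_a = \SI{100}{meV}$ the sinc term suppresses the probability by several orders of magnitude, illustrating why CAST's vacuum sensitivity degrades above such masses.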
*** TODOs for this section [/] :noexport:
*EXPLAIN WHY TRANSVERSE MAGNETIC FIELD IN THEORY*
- [X] Explained in theory section, derivation via KG eq
*PUT PRIMAKOFF FEYNMAN DIAGRAM*
- [X] Done
*POSSIBLY MOVE TO THEORY ITSELF AND REFERENCE*
- [ ] Not everything referenced
*DIFFERENTIATE BETWEEN PRIMAKOFF AND INVERSE PRIMAKOFF*
*INSERT FIG BLACKBODY HERE OR IN SOLAR AXION FLUX SECTION*
Essentially black body radiation of $\sim\mathcal{O}(\SI{15}{\mega\kelvin})$. This means the reconverted photons are mostly in the soft X-ray range between \SIrange{1}{7}{\keV}.
- [X] In theory about solar axion flux
- [ ] This paragraph can go, redundant.

From the theory on axions (ref. section [[#sec:theory:axion_interactions]]) we know there is an effective coupling to the photon $g_{aγ}$. This coupling is analogous to the Primakoff effect, which describes a resonant production of mesons via a fermion loop in strong electromagnetic fields when interacting with a nucleus. In the Primakoff effect two photons are present: an incoming real photon and a virtual photon of the electromagnetic interaction of the nucleus. Axions can take the place of the physical photon, either in the initial state or in the final state. In the former case we have an axion to photon conversion and in the latter a photon to axion conversion. As it turns out, the relevant aspect for the Primakoff effect is not the presence of a nucleus, but simply the fact that the nucleus provides an electromagnetic field. This means the nucleus can also be replaced by - for example - a transverse, constant magnetic field.

** TODO small section about other kinds of experiment? :noexport:

** CERN Axion Solar Telescope (CAST)
:PROPERTIES:
:CUSTOM_ID: sec:helioscopes:cast
:END:
The CERN Axion Solar Telescope (CAST) was proposed in 1999 cite:ZIOUTAS1999480 and started data taking in 2003 cite:PhysRevLett.94.121301. Fig.
[[fig:helioscopes:cast:cast]] shows a panorama view of the experiment in its final year of data taking during some maintenance work.

#+CAPTION: Panorama view of the CAST experiment during some maintenance work.
#+NAME: fig:helioscopes:cast:cast
[[~/phd/Figs/CAST_panorama_mine.jpg]]

Using a $\SI{9.26}{m}$ long Large Hadron Collider (LHC) prototype dipole magnet that was available from the developments for the LHC, CAST features an $\SI{8.8}{T}$ [fn:magnetic_field] strong transverse magnetic field for axion-photon conversion, produced by a current of $\SI{13}{kA}$ in the superconducting $\ce{Nb Ti}$ wires cite:bona1992design at $\SI{1.8}{K}$ cite:bona1994performance. It is placed on a movable platform that allows for solar tracking both during sunrise as well as sunset. The vertical range of movement is about $\pm\SI{8}{°}$, but practically within $\sim\SIrange{-7}{7.7}{°}$ [fn:angles_origin]. This range of motion allows for solar tracking of approximately $\SI{90}{\minute}$ each during sunrise and sunset per day, the exact duration depending on the time of year. Due to the incredibly feeble interactions of axions, solar tracking can already start before sunrise and stop after sunset, as they easily traverse large distances of Earth's mantle. An LHC dipole magnet has two bores for the two proton beams running in opposite directions. Being a prototype magnet it is *not* bent to the curvature required by the LHC. These two bores have a diameter of $\SI{4.3}{cm}$. [fn:confusion_bore_diameter] In total then, two bores on each side allow for 4 experiments to be installed at CAST, two for data taking during sunrise and two during sunset. The first data taking period (often referred to as 'phase I') took place in 2003 for 6 months between May and November and was a pure vacuum run with 3 different detectors. On the side observing during sunset was a Time Projection Chamber (TPC) that covered both bores.
On the 'sunrise' side a Micromegas detector (Micromesh Gaseous Detector) and a Charge Coupled Device (CCD) detector were installed. The CCD was additionally placed behind an X-ray telescope, still in place today, originally designed as a spare for the ABRIXAS X-ray space telescope cite:ABRIXAS_0a,ABRIXAS_0b. cite:PhysRevLett.94.121301 The full phase I data taking period comprises data taken in 2003 and 2004 and achieved a best limit of $g_{aγ} < \SI{8.8e-11}{\GeV^{-1}}$ cite:Andriamonje_2007. In what is typically referred to as 'phase II' of the CAST data taking, the magnet was filled with helium as a buffer gas to increase the sensitivity to higher axion masses by inducing an effective photon mass (as mentioned in sec. [[#sec:theory:buffer_gas]]). This first happened between late 2005 and early 2007 with $^4\text{He}$. In March 2008 a run with $^3\text{He}$ was started, which lasted until 2011 cite:Arik_2009,PhysRevD.92.021101. $\num{160}$ steps of different gas pressures were used, balancing sensitivity against the time spent per step. In 2012 another $^4\text{He}$ data run took place cite:PhysRevD.92.021101. From 2013 onward the CAST experiment only ran in vacuum configuration cite:&cast_nature. Further, the physics scope has been extended to include searches for chameleons cite:krieger2018search,krieger_chameleon_jcap,chameleons_sdd_cast,justin_phd,kwisp_first_results and axions in the galactic halo via cavity experiments cite:sergio_phd,rades_2021,marios_phd,cast_capp_nature. A video showing the CAST magnet during a typical solar tracking can be found under the link in this [fn:cast_video] footnote.

[fn:confusion_bore_diameter] There is some confusion about the diameter and length of the magnet. The original CAST proposal cite:ZIOUTAS1999480 talks about the prototype dipole magnets as having a bore diameter of $\SI{42.5}{mm}$ and a length of $\SI{9.25}{m}$. However, every CAST publication afterwards uses the numbers $\SI{43}{mm}$ and $\SI{9.26}{m}$.
Digging into references about the prototype dipole magnets is inconclusive. For better compatibility with all other CAST related publications, we will use the same $\SI{43}{mm}$ and $\SI{9.26}{m}$ values in this thesis. Furthermore, measurements were done indicating values around $\SI{43}{mm}$.

[fn:magnetic_field] The magnetic field of $\SI{8.8}{T}$ corresponds to the actual field at which the magnet was operated at CAST, as taken from the magnet slow control data.

[fn:angles_origin] These numbers are from CAST's slow control logs. See the extended thesis.

[fn:cast_video] [[https://www.youtube.com/watch?v=XY2lFDXz8aQ]]

*** TODOs about section :noexport:

- [ ] Mention mini timeline in 2 sentences about GridPix 1 and Septemboard?
- [X] *CROSS SECTION OF LHC DIPOLE MAGNET*
- [X] *NAME SUPERCONDUCTING MATERIAL OF THESE MAGNETS*
- [X] *CITE PAPER ABOUT LHC PROTOTYPE MAGNET*
- [ ] Since about 2019 the movement of the magnet was slightly restricted due to the belt issues iirc. We can check the angles ourselves by looking at the slow control data!
- [ ] A cross section can be seen in fig. *INSERT ME*. -> Do we want such a picture? Not really needed, no?
- [ ] *2 ANNOTATED PICTURES OF CAST W/ HIGHLIGHT OF SUNRISE, SUNSET, AIRPORT, JURA* *INTRODUCE THESE IN TEXT* -> Neither is really needed, no? I'm not sure if we ever mention "sunrise", "airport" etc. in the thesis.
- [ ] In addition, with the MicroMegas dataset taken in *CHECK EXACT* phase I a limit on the axion electron coupling was computed *CITE 2013*....
- [X] *160 STEPS WERE PERFORMED WITH BUFFER GAS* cite:Arik_2009
*BETTER SEPARATE X-ray OPTICS*
- [X] *MENTION COHERENCE CONDITION* (here or in theory?) -> THEORY!
*CAST PROPOSAL MENTIONS 9.25m and 42.5mm DIAMETER!! CHECK*
Data taking periods.
- [X] *INSERT VIDEO IN FOOTNOTE*
- [ ] *CAN WE REFERENCE THE MEASUREMENTS THEODOROS SENT US?*

*** Generate a plot of the CAST magnet's magnetic field :extended:

#+begin_src sh
cd ~/CastData/ExternCode/TimepixAnalysis/LogReader/
./cast_log_reader sc -p ../resources/LogFiles/SCLogs/2017_18 -s Version.idx
#+end_src

- [ ] *SAVE A PLOT*
- [ ] GET TABLE FROM JAIME's THESIS
- [ ] FIND REFERENCE OF JAIME's TABLE

*** Calculate the maximum tilting angles of the magnet :extended:

The tilting angles of the CAST magnet are stored (among others, the tracking files also contain angle data, as well as the "angles" file {that existed at some point at least?}) in the slow control log files. We'll use our log file parser to parse all the log files between 2016 and

#+begin_src sh
cd ~/CastData/ExternCode/TimepixAnalysis/LogReader/
./cast_log_reader sc -p ../resources/LogFiles/SCLogs -s Version.idx
#+end_src

This yields fig. [[fig:cast:vertical_angles_data_taking]] where we can see that the real angles are between $\sim\SIrange{-7}{7.7}{°}$.

#+CAPTION: Vertical angle of the CAST magnet during the data taking discussed in this
#+CAPTION: thesis. The magnet moves in a range of $\sim\SIrange{-7}{7.7}{°}$ during
#+CAPTION: trackings.
#+NAME: fig:cast:vertical_angles_data_taking
[[~/phd/Figs/CAST_magnet_vertical_angle.pdf]]

We also have _all_ of the slow control logs in [[file:~/CastData/ExternCode/TimepixAnalysis/resources/LogFiles/AllLogs/logfiles/]]. Feel free to run the command on this directory. But don't expect amazing performance given the amount of data! A PNG of the full data is shown in fig. [[fig:cast:vertical_angle_all_data]], covering the same range as the figure only showing the data taking period discussed in this thesis.

#+CAPTION: Vertical angle of the CAST magnet during (almost) the entire data taking period from 2005
#+CAPTION: to end of 2021. Angles go from roughly $\SI{-7}{°}$ to about $\SI{7.7}{°}$.
#+NAME: fig:cast:vertical_angle_all_data
[[~/org/Figs/statusAndProgress/cast_magnet_vertical_angles_2005_2021.png]]

*** CAST X-ray optics
:PROPERTIES:
:CUSTOM_ID: sec:helioscopes:cast:xray_optics
:END:

The first X-ray telescope used at CAST as a focusing optic for the expected axion induced X-ray flux was a Wolter I type X-ray telescope cite:wolter_1_type originally built for a proposed German space based X-ray telescope mission, ABRIXAS cite:ABRIXAS_0a,ABRIXAS_0b. The telescope consists of 27 gold coated parabolic and hyperbolic shells and has a focal length of $\SI{1.6}{m}$. Due to the small size of the dipole magnet's bores of only $\SI{43}{mm}$, only a single section of the telescope can be exposed. The telescope is thus placed off-axis from the magnet bore to expose a single mirror section. An image of the mirror system with a rough indication of the exposed section is shown in fig. sref:fig:cast:abrixas_mirrors. The telescope is owned by the 'Max Planck Institut für extraterrestrische Physik' in Garching. For that reason it will often be referred to as the 'MPE telescope' in the context of CAST. The efficiency of the telescope reaches its peak of about $\SI{48}{\%}$ at around $\SI{1.5}{keV}$, drops sharply at around $\SI{2.3}{keV}$ to only about $\SI{30}{\%}$, which it roughly maintains up to about $\SI{7}{keV}$. From there it continues to drop, down to about $\SI{5}{\%}$ efficiency at $\SI{10}{keV}$. The efficiency is shown in a comparison with another telescope in the next section [[#sec:helioscopes:llnl_telescope]] in fig. [[fig:cast:telescope_efficiency_comparison_mpe_llnl]]. A picture of the telescope installed at CAST behind the magnet on the 'sunrise' side is shown in fig. sref:fig:cast:abrixas_installed. [fn:terminology] This telescope was used for the data taking campaign in 2014 and 2015 using a GridPix based detector discussed in cite:krieger2018search and serves as a comparison for certain aspects in this thesis.
#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.56)
  (caption "Side view")
  (label "fig:cast:abrixas_installed")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/org/Figs/thesis/CAST/cast_abrixas_telescope_image_clear.png"))
 (subfigure (linewidth 0.44)
  (caption "Mirrors")
  (label "fig:cast:abrixas_mirrors")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/org/Figs/thesis/CAST/abrixas_cast_telescope_system.png"))
 (caption
  (subref "fig:cast:abrixas_installed")
  ": Image of the ABRIXAS telescope installed at CAST on the 'sunrise' side. The image is taken from "
  (cite "CAST_telescope_ccd")
  " as it provides a relatively clear image of the telescope, which is hard to take nowadays. "
  (subref "fig:cast:abrixas_mirrors")
  ": Image of the ABRIXAS telescope mirror system. The different shells of the Wolter I type telescope system are visible. One section is exposed to the magnet bore, the white line indicating roughly the extent of the bore. The spoke like structure is the support for the mirror shells. Image taken from "
  (cite "CAST_telescope_ccd")
  ".")
 (label "fig:cast:abrixas"))
#+end_src

[fn:terminology] See appendix section [[#sec:appendix:cast_operations:terminology]] for a schematic of the common terminology like 'sunrise' used at CAST.

*** Lawrence Livermore National Laboratory (LLNL) telescope
:PROPERTIES:
:CUSTOM_ID: sec:helioscopes:llnl_telescope
:END:

Up to 2014 there was only a single X-ray telescope in use at CAST. In August 2014 a second X-ray optic was installed on the second bore next to the ABRIXAS telescope. This telescope, using technologies originally developed for the space based NuSTAR telescope by NASA cite:Harrison_2013,Harrison2006,nustar_design_performance,nustar_fabrication,nustar_overview_status, was purpose built for axion searches and in particular the CAST experiment.
Contrary to the ABRIXAS telescope, only a single telescope section of $\SI{30}{°}$ wide mirrors was built, as the small bore cannot expose more area anyway. It uses a cone approximation to a Wolter I optic, meaning the hyperbolic and parabolic mirrors are replaced by cone sections cite:Petre:85. A side view and rear view of the open telescope after decommissioning at CAST are seen in fig. sref:fig:cast:llnl. It consists of 13 platinum / carbon coated glass shells in each telescope section, for a total of 26 mirrors. Each mirror uses one of four different depth graded multilayer (see sec. [[#sec:theory:xray_reflectivity]]) coating recipes to improve reflectivity over a larger energy and angle range. Further, the focal length was shortened to $\SI{1.5}{m}$. The telescope section is rotated such that the focal point points away from the other magnet bore to make more space for two detectors side by side. [fn:focal_point_llnl] This can be seen in the render of the 2017/18 detector setup in fig. [[llnl_telescope_setup_2017_render]], seen from the top. The development process of the telescope is documented in the PhD thesis by A. Jakobsen cite:anders_phd. cite:llnl_telescope_first_cast_results gives an overview and shows preliminary results from CAST. The telescope was characterized and calibrated at the PANTER X-ray test facility in Munich.

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.75)
  (caption "Side view")
  (label "fig:cast:llnl_side_view")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/phd/Figs/LLNL/llnl_side_view_cropped.jpg"))
 (subfigure (linewidth 0.25)
  (caption "Rear view")
  (label "fig:cast:llnl_back_view")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/phd/Figs/LLNL/llnl_open_back_rotated.jpg"))
 (caption
  (subref "fig:cast:llnl_side_view")
  ": Image of the LLNL telescope outside its housing after decommissioning at CAST. Entrance side towards the magnet on the right with the largest shell radii.
Both sets of mirror shells are clearly separated in the middle."
  (subref "fig:cast:llnl_back_view")
  ": View of the telescope shells as seen from the detector side. Installed at CAST the telescope was rotated by about "
  ($ (SI 76 "°"))
  " counter clockwise from this view. Both images courtesy of Cristina Margalejo Blasco.")
 (label "fig:cast:llnl"))
#+end_src

#+CAPTION: Render of the setup of the GridPix septemboard detector in 2017/18 showing the
#+CAPTION: LLNL telescope on the left side. The deviation away from the extension of the
#+CAPTION: bore is visible, to leave more space for detector installation. The
#+CAPTION: lead shielding and veto scintillator are not shown in the render. Render created by Tobias Schiffer.
#+NAME: llnl_telescope_setup_2017_render
[[~/phd/Figs/llnl_cast_gridpix_render_small_annotated.png]]

This telescope achieves a significantly higher effective area than the ABRIXAS telescope in the energy range between $\SIrange{2}{3}{keV}$, see fig. [[fig:cast:telescope_efficiency_comparison_mpe_llnl]] (relevant for axion searches, compare with fig. sref:fig:theory:solar_axion_flux:differential_flux). But outside this range the efficiency is comparable or lower. The precise understanding of the position, size and shape of the focal point as well as the effective area is essential for the limit calculation later in the thesis. In appendix [[#sec:appendix:raytracing]] we discuss a raytracing simulation of this telescope. The aim is to simulate the expected distribution of axions from the Sun.

#+CAPTION: Comparison of the efficiency between the two telescopes, the MPE (ABRIXAS) as the
#+CAPTION: original CAST telescope and the LLNL telescope purpose built for axion searches.
#+CAPTION: The LLNL telescope has superior efficiency in the energy range where the axion
#+CAPTION: flux is assumed to dominate, but falls off sharper at high energies.
#+CAPTION: The data for the LLNL telescope is extracted from fig.
#+CAPTION: 3 in cite:llnl_telescope_first_cast_results,
#+CAPTION: whereas for the ABRIXAS telescope it is extracted from the red line in fig. 4
#+CAPTION: of cite:CAST_telescope_ccd.
#+NAME: fig:cast:telescope_efficiency_comparison_mpe_llnl
[[~/phd/Figs/telescopes/effective_area_mpe_llnl.pdf]]

[fn:focal_point_llnl] Because this telescope only consists of one section, its focal point is 'parallel' to the magnet bore. In a full Wolter I like optic the focal point is exactly behind the center of the optic. With only a portion of a telescope this implies an offset. Further, the telescope is not actually rotated exactly $\SI{90}{°}$ to move the focal spot perfectly parallel to the two bores, but only about $\SI{76}{°}$.

**** TODOs for this section [/] :noexport:

- [ ] *REVISE THIS SECTION ABOUT EFFECTIVE AREA ONCE TALKED TO JAIME AGAIN*
- [ ] https://doi.org/10.1007/s10686-006-9068-8 <-- Possible citation for PANTER?
- [ ] *Insert an image of the LLNL telescope internals*
- [ ] Refer to later section. Raytracing? About way more infos about the telescope.
- [ ] In the section where we talk about our raytracing of this telescope include:
  - table of parameters of the telescope
  - numbers for telescope design 'as built'
  - Wolter equation with correct radius to use
  - the multilayer recipes used
*BETTER INTRODUCE 2 LENGTH WISE SECTION THING OF WOLTER TELESCOPES*

#+begin_comment
Note: Refer to DTU thesis [[/home/basti/org/Papers/CAST_IAXO_telescopes/llnl_telescope_optimizations_phdthesis_for_DTU_orbit.pdf]] around page 65 (and shortly before for effective area definition; and another eff area def on page 7).
#+end_comment

**** Generate effective area plot for LLNL and MPE telescopes [/] :extended:

We'll now create the comparison plot of the effective areas for the two X-ray telescopes used at CAST. For the MPE telescope: As mentioned in the caption of the figure above, the data for the MPE telescope is extracted from fig.
4 of cite:CAST_telescope_ccd, also found here [[~/phd/resources/MPE/mpe_xray_telescope_cast_effective_area.pdf]]. This is done using [[file:~/phd/code/other/extractDataFromPlot.nim]] using a simplified version of fig. 4 as an input (i.e. crop the plot to exactly the area of the actual plot and remove any lines that are not the data to be extracted). This plot looks like file:~/phd/resources/MPE/mpe_xray_telescope_cast_effective_area_cropped_no_axes.png. Then run

#+begin_src sh
./extractDataFromPlot \
    -f ~/phd/resources/MPE/mpe_xray_telescope_cast_effective_area_cropped_no_axes.png \
    --xLow 0.305 --xHigh 10.0 \
    --yLow 0.0 --yHigh 8.0
#+end_src

which produces a plot from the extracted data and writes a CSV file, which

- [X] Make sure to add the effective area files to a ~resources~ directory of the thesis repo!
- [ ] Ask Jaime if this really looks sensible, because to me it does not! That's barely better than the MPE telescope...

#+begin_src nim :flags -d:QuietTikZ=true :tangle code/plot_mpe_llnl_effective_areas.nim
import ggplotnim, ggplotnim/ggplot_sdl2
import unchained

const mpe = "~/phd/resources/MPE/mpe_xray_telescope_cast_effective_area.csv"
const llnl = "~/phd/resources/LLNL/EffectiveArea.txt"

let dfMpe = readCsv(mpe)
let dfLLNL = readCsv(llnl, sep = ' ')
  .rename(f{"Energy[keV]" <- "E(keV)"},
          f{"EffectiveArea[cm²]" <- "Area(cm^2)"})
let df = bind_rows([("MPE", dfMpe), ("LLNL", dfLLNL)], "Telescope")

const areaBore = (4.3.cm / 2.0)^2 * π ## Area of the CAST bore in cm²
ggplot(df, aes("Energy[keV]", "EffectiveArea[cm²]", color = "Telescope")) +
  geom_line() +
  xlab(r"Energy [\si{keV}]") + ylab(r"EffectiveArea [\si{cm^2}]") +
  scale_y_continuous(secAxis = secAxis(f{1.0 / areaBore.float}, name = r"Transmission [\si{\%}]")) +
  legendPosition(0.83, -0.2) +
  themeLatex(fWidth = 0.9, width = 600, baseTheme = singlePlot) +
  ggshow("~/phd/Figs/telescopes/effective_area_mpe_llnl.pdf", useTeX = true, standalone = true)
#+end_src

#+RESULTS:
: [INFO] TeXDaemon ready for input.
: Generated: /home/basti/phd/Figs/telescopes/effective_area_mpe_llnl.pdf

*** Best limits set by CAST

In the many years of data taking and countless detectors taking data at the CAST experiment, it has put the most stringent limits on different coupling constants over the years. Specifically, CAST sets the current best helioscope limits on the:
- Axion-photon coupling $g_{aγ}$
- Axion-electron coupling $g_{ae}$
- Chameleon-photon coupling $β_γ$

For the axion-photon coupling the best limit is from cite:cast_nature in 2017, based on the full Micromegas dataset including the data behind the LLNL telescope, and constrains the coupling to $g_{aγ} < \SI{6.6e-11}{\GeV^{-1}}$. For the axion-electron coupling the best limit is still from 2013 in cite:Barth_2013, using the theoretical calculations for the expected solar axion flux done by J. Redondo in cite:Redondo_2013, resulting in a limit on the product of the axion-electron and axion-photon coupling of $g_{ae} · g_{aγ} < \SI{8.1e-23}{\GeV^{-1}}$. The limit calculation was based on data taken in CAST phase I in 2003 - 2005 with a pn-CCD detector behind the MPE telescope. CAST was also used to set a limit on the hypothetical chameleon particle. The best current limit on the chameleon-photon coupling $β_γ$ is based on a single GridPix based detector with data taken in 2014 and 2015 by C. Krieger in cite:krieger2018search,krieger_chameleon_jcap, limiting the coupling to $β_γ < \num{5.74e10}$, which is the first limit below the solar luminosity bound.
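Because the expected signal counts scale as $g⁴$ (production and reconversion each contribute $g²$), published limits can be translated into equivalent count sensitivities. A small Python check (my own illustrative comparison, not taken from the CAST publications) of how the 2017 vacuum limit compares to the phase I limit in those terms:

#+begin_src python
# Expected axion-induced counts scale as N ∝ g⁴, since both production in
# the Sun and reconversion in the magnet contribute a factor g².
g_phase1 = 8.8e-11  # GeV⁻¹, CAST phase I vacuum limit (2007)
g_2017   = 6.6e-11  # GeV⁻¹, CAST vacuum limit (2017)

# Ratio of equivalent signal counts the two limits correspond to
improvement = (g_phase1 / g_2017)**4
print(round(improvement, 2))  # ~3.16
#+end_src

So the modest-looking improvement in the coupling corresponds to roughly a factor 3 in equivalent signal counts.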
**** TODOs about this section [/] :noexport:

- [ ] *Mention the limit method with foreshadowing to statistics chapter that we will use the same?* -> No, I don't think so
- [ ] Subsection about gaseous phase, affecting conversion
  Extract parts of the axionMass.org file and place it here. Essentially the:
  - conversion probability in gas
  - how to compute that
  - one step showing conversion prob outside coherent condition
  -> We've discussed the buffer gas stuff in the axion theory and mentioned our calculations. I think that's enough.

** International AXion Observatory (IAXO)
:PROPERTIES:
:CUSTOM_ID: sec:helioscopes:iaxo
:END:

Barring a revolution in detector development or a lucky find of a non-QCD axion, the CAST experiment was unlikely to detect any signals. A fourth generation axion helioscope to possibly reach towards the QCD band in the mass-coupling constant phase space is a natural idea. The first proposal for a next generation axion helioscope was published in 2011 cite:Irastorza_2011, with the name International AXion Observatory (IAXO) first appearing in 2013 cite:vogel2013iaxo. A conceptual design report (CDR) was further published in 2014 cite:Armengaud_2014. The proposed experiment is intended to have a total magnet length of $\SI{25}{m}$ with eight $\SI{60}{cm}$ bores and an average transverse magnetic field of $\SI{2.5}{T}$ [fn:magnetic_field_iaxo]. With a cryostat and magnet design specifically built for the experiment, much larger tilting angles of the magnet of about $\pm\SI{25}{°}$ are proposed to allow for solar tracking for $\SI{12}{\hour}$ per day for a 1:1 data split between tracking and background data.
cite:Armengaud_2014 In cite:Irastorza_2011 a 'figure of merit' (FOM) is introduced to quantify the improvements possible by IAXO over CAST, defined by
\[ f = f_M f_{\text{DO}} f_T = (B² L² A)_M \left(\frac{ε_d ε_o}{\sqrt{b a}}\right)_{\text{DO}} \left(\sqrt{ε_t t}\right)_T \]
which is split into individual FOMs for the magnet $f_M$, the detector and optics $f_{\text{DO}}$ and the total tracking time $f_T$ ($B$: magnetic field, $L$: magnet length, $A$: total bore area of all bores, $ε_d$: detector efficiency, $ε_o$: X-ray optic efficiency, $b$: background in counts per area and time, $a$: area of the X-ray optic focal spot, $ε_t$: data taking efficiency, $t$: total solar tracking time). The biggest improvements would come from the magnet FOM (MFOM), due to the much larger magnet volume. Compared to the CAST MFOM $f_{M,\text{CAST}} = \SI{19.3}{T^2.m^4}$, IAXO would achieve a relative improvement of $f_{M,\text{IAXO}} / f_{M,\text{CAST}} \approx 300$ with its $f_{M,\text{IAXO}} = \SI{5654.9}{T^2.m^4}$. As the number of expected counts in a helioscope scales with $g^4$ [fn:coupling_notation], the figure of merit directly relates to the possible limits of a helioscope in case of a non-detection. The aspirational target for IAXO would be a full $f_{\text{IAXO}} / f_{\text{CAST}} > \num{10000}$ for a possible improvement on $g$ bounds by an order of magnitude. A schematic of the proposed design can be seen in fig. sref:fig:helioscopes:iaxo. Given the comparatively large budget requirements for such an experiment, a compromise was envisioned to prove the required technologies, in particular the magnet design. This intermediate experiment, called BabyIAXO, will be discussed in the next section, [[#sec:helioscopes:baby_iaxo]].
#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "IAXO")
  (label "fig:helioscopes:iaxo")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/phd/Figs/IAXO/iaxo_annotated_schematic.png"))
 (subfigure (linewidth 0.5)
  (caption "BabyIAXO")
  (label "fig:helioscopes:baby_iaxo")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/phd/Figs/IAXO/babyIAXO_schematic_annotated_ringwald23.png"))
 (caption
  (subref "fig:helioscopes:iaxo")
  ": An annotated schematic of a potential IAXO design with 8 magnet bores. Taken from the IAXO CDR "
  (cite "Armengaud_2014")
  ". "
  (subref "fig:helioscopes:baby_iaxo")
  ": An annotated schematic of the current BabyIAXO design, showing the two bores. One bore intended for an XMM-Newton optic of "
  (SI 7.5 "m")
  " focal length, and a shorter "
  (SI 5 "m")
  " focal length custom optic behind the other bore. Taken from "
  (cite "ringwald23_axions_hamburg")
  ".")
 (label "fig:helioscopes:iaxo_babyiaxo_schematics"))
#+end_src

[fn:magnetic_field_iaxo] Due to the envisioned toroidal magnet design and much larger diameter, the magnetic field inside such a bore would not be as homogeneous as in the LHC dipole magnets. The peak field would be about $\SI{5.4}{T}$. This also means a correct $f_M$ calculation for IAXO must take into account the actual magnetic field map.

[fn:coupling_notation] $g⁴$ is a placeholder for $g⁴_{aγ}$ or $g²_{aγ}·g²_{ae}$. The flux scales with this product as explained in sec. [[#sec:theory:current_bounds]], due to axion production and reconversion each happening via $g²$.

*** Calculate magnet figure of merit :extended:

For more details on the figure of merit calculation, see sec. 3.1 in cite:Irastorza_2011. For completeness, the figure of merit is:
\[ f = f_M f_{\text{DO}} f_T = (B² L² A)_M \left(\frac{ε_d ε_o}{\sqrt{b a}}\right)_{\text{DO}} \left(\sqrt{ε_t t}\right)_T \]
The full relationship with the axion-photon coupling is:
\begin{align*}
N_γ ∝ N_a · g⁴ &= f · g⁴ \\
N_b &= b a ε_t t
\end{align*}
So a factor of $\num{10000}$ in $f'/f$ for IAXO over CAST, i.e. $\num{10000}$ times more potentially produced photons, would imply the bounds on $g$ could be improved by a factor of 10, one order of magnitude. Let's calculate the figure of merit for the CAST magnet as well as the IAXO / BabyIAXO magnets:

#+begin_src nim
import unchained
defUnit(T²•m⁴)

proc fMagnet(B: Tesla, L: Meter, d: CentiMeter): T²•m⁴ =
  B^2 * L^2 * (d / 2.0)^2 * π

let fCAST = fMagnet(8.8.T, 9.26.m, 4.3.cm)
echo "CAST magnet, single bore f_M = ", fCAST, " at 8.8 T"
echo "CAST, both bores f_M = ", fCAST * 2
let fIAXO = fMagnet(2.5.T, 20.m, 60.cm)
echo "IAXO magnet, single bore f_M = ", fIAXO, " at 2.5 T *average* field, 20 m length."
echo "IAXO, all bores f_M = ", fIAXO * 8
echo "IAXO FOM over CAST: ", fIAXO * 8 / (fCAST * 2)
#+end_src

#+RESULTS:
: CAST magnet, single bore f_M = 9.64304 T²•m⁴ at 8.8 T
: CAST, both bores f_M = 19.2861 T²•m⁴
: IAXO magnet, single bore f_M = 706.858 T²•m⁴ at 2.5 T *average* field, 20 m length.
: IAXO, all bores f_M = 5654.87 T²•m⁴
: IAXO FOM over CAST: 293.21 UnitLess

Time scaling:

#+begin_src nim
import unchained, math

let g = 8e-11
let g4 = g^4
let t = 1
# g⁴ · √t = N    <- some number of photons: we observe 0
# g⁴' · √t' = N' <- new number of photons: we still observe 0
# So
# g⁴ · √t = g⁴' · √t'
# should hold, meaning
# g⁴ · √1 = g⁴' · √2
# g⁴' = g⁴ / √2
let g4p = g4 / sqrt(2.0)
echo "New g for twice the time: ", pow(g4p, 0.25)
#+end_src

#+RESULTS:
: New g for twice the time: 7.336032345637369e-11

*** TODOs for this section :noexport:

- Make use of PRC (?) mainly for data, citation both that and first proposal.
- [X] *MAYBE PICTURE OF IAXO LEFT, BABYIAXO RIGHT*
- [ ] *FIGURE OF MERIT* -> Including definition.
- [ ] *EXPECTED LIMIT*

*** BabyIAXO
:PROPERTIES:
:CUSTOM_ID: sec:helioscopes:baby_iaxo
:END:

The major difference between full grown IAXO and BabyIAXO is restricting the setup to 2 bores instead of 8, with a magnet length of only $\SI{10}{m}$, to prove the magnet design works before building a larger version of said design. Since the first conceptual design of IAXO cite:Armengaud_2014, the bore diameter for the two bores of BabyIAXO has increased from $\SI{60}{cm}$ to $\SI{70}{cm}$ cite:abeln2021conceptual. The BabyIAXO design was approved by the 'Deutsches Elektronen-Synchrotron' (DESY) for construction onsite. The project has suffered multiple delays, most notably due to the magnet construction. First COVID-19, due to its severe effects on supply chains, and then in 2022 the horrific Russian invasion of Ukraine have caused multi-year delays. The latter was particularly problematic, as the only two companies able to supply the type of superconducting cable needed for the magnet are from Russia. The magnet situation is still in flux as of writing this thesis. For the two bores two different X-ray telescopes are planned to be operated. One bore will be used with a flight spare of the XMM-Newton X-ray satellite mission with a focal length of $\SI{7.5}{m}$. The second bore would receive a custom built X-ray optic based on a hybrid design: a NuSTAR-like optic for the inner part -- similar to the LLNL telescope introduced in sec. [[#sec:helioscopes:llnl_telescope]] (just as a full telescope) -- and a cold slumped glass design following cite:civitani16_cold_hot_slump_glass_optic for the outer part. This optic would have a focal length of only $\SI{5}{m}$. An annotated schematic of the BabyIAXO design can be seen in fig. sref:fig:helioscopes:baby_iaxo.
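The simple $f_M = B² L² A$ expression from the IAXO figure of merit can also be applied naively to the BabyIAXO geometry. Note that the $\SI{2}{T}$ average field used below is an assumed illustrative value; the published numbers are based on the actual field map:

#+begin_src python
import math

def f_magnet(B_T: float, L_m: float, d_m: float, n_bores: int) -> float:
    """Naive magnet figure of merit f_M = B²L²A in T²·m⁴,
    summed over all bores of diameter d_m."""
    area = math.pi * (d_m / 2.0)**2  # single bore area in m²
    return B_T**2 * L_m**2 * area * n_bores

# BabyIAXO: two 70 cm diameter bores, 10 m long magnet,
# with an *assumed* average field of 2 T (illustrative only!)
print(round(f_magnet(2.0, 10.0, 0.70, 2), 1))  # ~307.9
#+end_src

This naive number lands in the same ballpark as the published field-map based values, indicating the simple formula is a reasonable first estimate.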
The magnet figure of merit for BabyIAXO is aimed at being at least a factor 10 higher than for CAST, with cite:abeln2021conceptual listing values between $\SIrange{232}{326}{T^2.m^4}$ depending on how it is calculated.

**** TODOs for this section :noexport:

Any? Note on our language of Russian invasion?

* X-rays, cosmic muons and gaseous detectors :Theory:
:PROPERTIES:
:CUSTOM_ID: sec:theory_detector
:END:

#+LATEX: \minitoc

As we have seen in the previous two chapters, solar axions should produce photons in the soft X-ray energy range. In experiments like CAST, gaseous detectors -- like the one used in this thesis -- are therefore a common and suitable choice for the detection of the X-rays focused by the X-ray optics. In order to give a guiding reference for the relevant physics associated with the detection of particles at CAST, we will now cover each aspect that we make use of in explanations or calculations. The focus will be on aspects either not commonly discussed (for example depth graded multilayers for X-ray telescopes) or essential for explanation (gas diffusion). We start with a section on particle interactions with matter, sec. [[#sec:theory:particle_int]], where we discuss X-ray interactions as well as charged particles in gases. Next, in sec. [[#sec:theory:cosmic_radiation]] follows a short section about cosmic radiation, in particular the expected muon flux, as it serves as a dominant source of background in experiments like CAST. Finally, in sec. [[#sec:theory:gas_fundamentals]] we cover concepts of gaseous detector physics that we depend on.

** TODOs for this section [/] :noexport:

Gaseous detectors, keep a bit short. Before writing properly read Lucian. Best if read Lucian and then write a couple of weeks later.

This chapter will be kept reasonably short. Instead of introducing all physics relevant for gaseous detectors, we will focus on the things that are relevant for the understanding in the context of the thesis.
For a better general overview of the physics of gaseous detectors, read some of the following references: *Lucian, Markus MSc; Lupberger, Krieger PhD, Elisa PhD, PDG, some book?...* *Highlight which reference for what*

The theory sections covered in the following parts all have in common that their understanding is required to make certain assumptions in the data analysis or *???* It should be noted though that no part will be thorough enough to stand on its own. Further reading is required in many places.

This theory section is supposed to serve as a reference for the later parts of the thesis. Of particular interest are all sections that give the theoretical foundation for the different kinds of background we might measure and for the understanding of our calibration data.
** Particle interactions with matter
:PROPERTIES:
:CUSTOM_ID: sec:theory:particle_int
:END:
On the one hand, we will discuss how X-rays interact with matter, both in solids and in gases, with a focus on attenuation. This is required to describe signal attenuation due to -- for example -- the detector window of a gaseous detector, or the absorption of X-rays in the detector gas. In addition, X-ray reflectivity will be discussed, as it is of interest for the behavior of X-ray telescopes. On the other hand, the interaction of highly energetic charged particles with matter will be discussed, along with its relation to cosmic radiation as a source of background. Finally, X-ray fluorescence will be covered as well. As a source of background in an axion helioscope experiment it is indistinguishable [fn:vetoes] from axion-induced X-rays.

For a detailed overview of the interaction of X-rays with matter, see the X-ray data booklet cite:williams2001x.

[fn:vetoes] Outside of vetoes to tag muons that cause fluorescence, for example.
*** X-rays in solid matter & gases
:PROPERTIES:
:CUSTOM_ID: sec:theory:xray_matter_gas
:END:
The Lambert-Beer law cite:bouguer1729essai,lambert1760photometria,beer1852bestimmung
#+NAME: eq:theory:beer_lambert_law
\begin{equation}
I(z) = I_0 e^{-μz},
\end{equation}
gives the intensity of radiation $I(z)$ after traversing a medium of length $z$ with constant attenuation coefficient $μ$, given a starting intensity of $I_0$. Directly related is of course the absorption length $l_{\text{abs}} = 1/μ$ (or mean free path), which is a useful property when considering typical absorption depths. This law is of vital importance for the behavior of X-rays traversing matter, which is needed to compute the efficiency of a gaseous detector with an entrance window. In addition, it is also related to the mean free path of X-rays in a gas, an important parameter in gaseous detectors to understand the absorption efficiency of X-rays at different energies and the resulting expected diffusion.

In the context of X-rays the factor $μ$ is typically rewritten via the 'mass attenuation coefficient' $μ_m = μ / ρ$, with $ρ$ the density of the material, commonly in $\si{g.cm^{-3}}$. $μ_m$ is then defined by
\[ μ_m = \frac{N_A}{M} σ_A, \]
where $N_A$ is Avogadro's number, $M$ the molar mass of the medium in units of $\si{g\per\mol}$ and $σ_A$ the photoabsorption cross section in units of $\si{cm^2}$. Thus, the mass attenuation coefficient is usually given in $\si{cm^2.g^{-1}}$, such that $μ = μ_m · ρ$ is of inverse length as expected. This directly yields the definition of the absorption length,
\[ l_{\text{abs}} = \frac{1}{μ}. \]
Further, the photoabsorption cross section can be described via the atomic scattering factor $f₂$
\[ σ_A = 2 r_e λ f₂, \]
where $r_e$ is the classical electron radius and $λ$ the wavelength of the X-ray.
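To make the chain of definitions above concrete, the following is a minimal Python sketch (independent of the =xrayAttenuation= implementation used in this thesis) that evaluates $σ_A = 2 r_e λ f₂$, the mass attenuation coefficient and the Lambert-Beer transmission. The $f₂$ value passed in is a placeholder; in practice it comes from tabulated scattering factors.

```python
import math

# Physical constants
R_E = 2.8179403262e-13     # classical electron radius [cm]
N_A = 6.02214076e23        # Avogadro's number [mol⁻¹]
HC_KEV_CM = 1.239841984e-7 # h·c [keV·cm]

def absorption_length_cm(energy_kev, f2, molar_mass, density):
    """Absorption length l_abs = 1/μ from the atomic scattering factor f₂.

    σ_A = 2 r_e λ f₂, μ_m = (N_A/M)·σ_A, μ = μ_m·ρ.
    """
    lam = HC_KEV_CM / energy_kev       # wavelength λ [cm]
    sigma_a = 2.0 * R_E * lam * f2     # photoabsorption cross section [cm²]
    mu_m = N_A / molar_mass * sigma_a  # mass attenuation coefficient [cm²/g]
    mu = mu_m * density                # attenuation coefficient [cm⁻¹]
    return 1.0 / mu

def transmission(energy_kev, f2, molar_mass, density, thickness_cm):
    """Lambert-Beer law: I/I₀ = exp(-μ z)."""
    l_abs = absorption_length_cm(energy_kev, f2, molar_mass, density)
    return math.exp(-thickness_cm / l_abs)
```

Note that the transmission through two successive slabs factorizes, $e^{-μ(z_1+z_2)} = e^{-μz_1}e^{-μz_2}$, which is the property exploited when stacking window and gas contributions.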
$f₂$ is the imaginary part of the forward scattering factor $f$
\[ f = f₁ - i f₂, \]
which itself is a simplification of the general atomic scattering factor that describes the atom-specific part of the scattering cross section. This way of expressing it has the nice property of relying on the well tabulated parameter $f₂$. Together with $f₁$, these tabulated values can be used to compute everything from the refractive index of a compound at a specific X-ray energy to the attenuation coefficient and even the reflectivity of a multilayer substrate. The expression generalizes easily from a single element to compounds via
\[ μ_m = \frac{N_A}{M_c} \sum_i n_i σ_{A,i}, \]
with $M_c$ the molar weight of the compound and $n_i$ the number of atoms of kind $i$. X-ray absorption and transmission properties can be calculated from this, requiring only the atomic scattering factors, which are tabulated for the different elements, for example by [[https://www.nist.gov/pml/x-ray-form-factor-attenuation-and-scattering-tables][NIST]] and [[https://henke.lbl.gov/optical_constants/asf.html][Henke]]. An online calculator for X-ray transmission can be found at [fn:henke_gov] cite:henke1993x, and a library implementation developed during the course of this thesis for this purpose is available at [fn:scinim_xrayAttenuation] cite:Schmidt_xrayAttenuation_2022. Fig. [[fig:theory:transmission_examples]] shows an example of X-ray transmission through a $\SI{300}{nm}$ thick layer of \ccsini as well as through $\SI{3}{cm}$ of argon at normal temperature and pressure (NTP), $\SI{1}{atm}$, $\SI{20}{°C}$. All information about the absorption lines and the general transmission is encoded in $f₂$.
#+CAPTION: X-ray transmission through a \SI{300}{nm} thick layer of \ccsini
#+CAPTION: and \SI{3}{cm} of argon calculated with cite:Schmidt_xrayAttenuation_2022.
#+CAPTION: Calculation of the transmission based on tabulated scattering form factors.
#+NAME: fig:theory:transmission_examples
[[~/phd/Figs/theory/transmission_example.pdf]]

[fn:henke_gov] https://henke.lbl.gov/optical_constants/
[fn:scinim_xrayAttenuation] https://github.com/SciNim/xrayAttenuation
**** Absorption length of Argon :extended:
Fig. [[fig:theory:absorption_length_argon]] shows the corresponding absorption length of argon at NTP.
#+CAPTION: Absorption length of argon at NTP, calculated with cite:Schmidt_xrayAttenuation_2022.
#+CAPTION: Calculation based on tabulated scattering form factors.
#+NAME: fig:theory:absorption_length_argon
[[~/phd/Figs/theory/absorption_length_example.pdf]]
**** TODOs for this section :noexport:
- [ ] *RADIATION?* Or different word for intensity in Lambert law?
  -> It is explicitly in the section about *X-rays* though!
- [ ] *FIND REFERENCE TO MODERN LAW IN SOMETHING LIKE DEMTRÖDER*
  -> Is this needed? It seems like the Demtröder doesn't mention it. Nor does the PDG.
- [X] *INSERT ABSORPTION LENGTH!!!*
**** Generation of \ccsini transmission figure :extended:
:PROPERTIES:
:CUSTOM_ID: sec:theory:generate_transmission_plot
:END:
Let's now compute an example transmission plot using the Lambert-Beer law as presented above, based on =xrayAttenuation=, on the one hand for \ccsini and on the other for argon (a common detector gas).

*TODO*: update ginger to use =-output-directory= to put the plot in the right path & turn it into a TikZ plot.
#+begin_src nim :tangle /home/basti/phd/code/transmission_example.nim :flags -d:QuietTikZ=true
import std / strutils
import xrayAttenuation, ggplotnim
# generate a compound of silicon and nitrogen with correct number of atoms
let Si₃N₄ = compound((Si, 3), (N, 4))
# instantiate an Argon instance
let ar = Argon.init()
# compute the density using ideal gas law at 1 atm
let ρ_Ar = density(1013.25.mbar.to(Pascal), 293.15.K, ar.molarMass)
# define energies in which to compute the transmission
# (we don't start at 0, as at 0 energy the parameters are not well defined)
let energies = linspace(1e-2, 10.0, 1000)
proc compTrans[T: AnyCompound](el: T, ρ: g•cm⁻³, length: Meter): DataFrame =
  result = toDf({ "Energy [keV]" : energies })
    .mutate(f{float: "μ" ~ el.attenuationCoefficient(idx("Energy [keV]").keV).float},
            f{float: "Trans" ~ transmission(`μ`.cm²•g⁻¹, ρ, length).float},
            f{float: "l_abs" ~ absorptionLength(el, ρ, idx("Energy [keV]").keV).to(cm).float},
            f{"Compound" <- el.name})
var df = newDataFrame()
# compute transmission for Si₃N₄ (known density and desired length)
df.add Si₃N₄.compTrans(3.44.g•cm⁻³, 300.nm.to(Meter))
# and for argon
df.add ar.compTrans(ρ_Ar, 3.cm.to(Meter))
# create a plot for the transmissions
echo df
let dS = pretty(300.nm, 3, short = true)
let dA = pretty(3.cm, 1, short = true)
let si = r"$\mathrm{Si}₃\mathrm{N}₄$"
ggplot(df, aes("Energy [keV]", "Trans", color = "Compound")) +
  geom_line() +
  xlab(r"Energy [$\si{keV}$]") + ylab("Transmission") +
  ggtitle("Transmission examples of $# $# and $# Argon at NTP" % [dS, si, dA]) +
  themeLatex(fWidth = 0.9, width = 600, baseTheme = singlePlot) +
  ggsave("/home/basti/phd/Figs/theory/transmission_example.pdf",
         useTex = true, standalone = true, width = 600, height = 360)
let dff = df.filter(f{float -> bool: classify(`l_abs`) != fcInf},
                    f{`Compound` == "Argon"})
echo dff
ggplot(dff, aes("Energy [keV]", "l_abs")) +
  geom_line() +
  xlab(r"Energy [$\si{keV}$]") + ylab(r"Absorption length [$\si{cm}$]") +
  ggtitle("Absorption length of $# Argon at NTP" % [dA]) +
  themeLatex(fWidth = 0.9, width = 600, baseTheme = singlePlot) +
  ggsave("/home/basti/phd/Figs/theory/absorption_length_example.pdf",
         useTex = true, standalone = true, width = 600, height = 360)
#+end_src

#+RESULTS:
| DataFrame | with | 5 | columns | and | 2000 | rows: |
| Idx | Energy [keV] | μ | Trans | l_abs | Compound |
| dtype: | float | float | float | float | string |
| 0 | 0.01 | 0 | 1 | inf | Si3N4 |
| 1 | 0.02 | 0 | 1 | inf | Si3N4 |
| 2 | 0.03 | 186110.0 | 4.5571e-09 | 1.562e-06 | Si3N4 |
| 3 | 0.04 | 112870.0 | 8.7311e-06 | 2.5754e-06 | Si3N4 |
| 4 | 0.05 | 75370.0 | 0.0004187 | 3.8569e-06 | Si3N4 |
| 5 | 0.06 | 56300.0 | 0.002997 | 5.1635e-06 | Si3N4 |
| 6 | 0.07 | 43720.0 | 0.01098 | 6.6497e-06 | Si3N4 |
| 7 | 0.08 | 32570.0 | 0.03471 | 8.9263e-06 | Si3N4 |
| 8 | 0.09 | 26100.0 | 0.06765 | 1.114e-05 | Si3N4 |
| 9 | 0.1 | 36590.0 | 0.02292 | 7.945e-06 | Si3N4 |
| 10 | 0.11 | 56150.0 | 0.003044 | 5.1773e-06 | Si3N4 |
| 11 | 0.12 | 72810.0 | 0.0005456 | 3.9928e-06 | Si3N4 |
| 12 | 0.13 | 75120.0 | 0.0004299 | 3.87e-06 | Si3N4 |
| 13 | 0.14 | 66950.0 | 0.000999 | 4.3423e-06 | Si3N4 |
| 14 | 0.15 | 65280.0 | 0.001186 | 4.453e-06 | Si3N4 |
| 15 | 0.16 | 62970.0 | 0.001505 | 4.6163e-06 | Si3N4 |
| 16 | 0.17 | 63080.0 | 0.001488 | 4.6083e-06 | Si3N4 |
| 17 | 0.18 | 52040.0 | 0.00465 | 5.5858e-06 | Si3N4 |
| 18 | 0.19 | 47710.0 | 0.00727 | 6.0926e-06 | Si3N4 |
| 19 | 0.2 | 43810.0 | 0.01088 | 6.6362e-06 | Si3N4 |

[INFO]: No plot ratio given, using golden ratio.
[INFO] TeXDaemon ready for input.
shellCmd: command -v lualatex
shellCmd: lualatex -output-directory /home/basti/phd/Figs/theory /home/basti/phd/Figs/theory/transmission_example.tex
Generated: /home/basti/phd/Figs/theory/transmission_example.pdf

| DataFrame | with | 5 | columns | and | 998 | rows: |
| Idx | Energy [keV] | μ | Trans | l_abs | Compound |
| dtype: | float | float | float | float | string |
| 0 | 0.03 | 411760.0 | 0 | 0.001462 | Argon |
| 1 | 0.04 | 70620.0 | 1.5681e-153 | 0.008526 | Argon |
| 2 | 0.05 | 13280.0 | 1.8069e-29 | 0.04533 | Argon |
| 3 | 0.06 | 16540.0 | 1.6261e-36 | 0.0364 | Argon |
| 4 | 0.07 | 20750.0 | 1.2457e-45 | 0.02901 | Argon |
| 5 | 0.08 | 21890.0 | 4.2599e-48 | 0.0275 | Argon |
| 6 | 0.09 | 21030.0 | 3.1193e-46 | 0.02863 | Argon |
| 7 | 0.1 | 19910.0 | 8.2029e-44 | 0.03024 | Argon |
| 8 | 0.11 | 18220.0 | 3.8752e-40 | 0.03306 | Argon |
| 9 | 0.12 | 16670.0 | 8.6334e-37 | 0.03613 | Argon |
| 10 | 0.13 | 14570.0 | 3.046e-32 | 0.04134 | Argon |
| 11 | 0.14 | 12910.0 | 1.1568e-28 | 0.04664 | Argon |
| 12 | 0.15 | 11750.0 | 3.7574e-26 | 0.05124 | Argon |
| 13 | 0.16 | 10760.0 | 5.2432e-24 | 0.05596 | Argon |
| 14 | 0.17 | 9905 | 3.7061e-22 | 0.06079 | Argon |
| 15 | 0.18 | 9079 | 2.2701e-20 | 0.06632 | Argon |
| 16 | 0.19 | 8285 | 1.1851e-18 | 0.07268 | Argon |
| 17 | 0.2 | 7411 | 9.2091e-17 | 0.08125 | Argon |
| 18 | 0.21 | 6628 | 4.5608e-15 | 0.09085 | Argon |
| 19 | 0.22 | 6004 | 1.0212e-13 | 0.1003 | Argon |

[INFO]: No plot ratio given, using golden ratio.
[INFO] TeXDaemon ready for input.
shellCmd: command -v lualatex
shellCmd: lualatex -output-directory /home/basti/phd/Figs/theory /home/basti/phd/Figs/theory/absorption_length_example.tex
Generated: /home/basti/phd/Figs/theory/absorption_length_example.pdf

The absorption length of pure argon at NTP is shown in fig. [[fig:theory:absorption_length_argon]].
#+CAPTION: Absorption length of argon at NTP. The absorption edge and the long absorption
#+CAPTION: length around the Kα line are clearly visible, showing the reason for the escape
#+CAPTION: peak in the \cefe spectrum.
#+NAME: fig:theory:absorption_length_argon
[[~/phd/Figs/theory/absorption_length_example.pdf]]
*** X-ray reflectivity & scattering
:PROPERTIES:
:CUSTOM_ID: sec:theory:xray_reflectivity
:END:
The same atomic scattering factors $f₁$ and $f₂$ introduced in section [[#sec:theory:xray_matter_gas]] for the attenuation can also be used to compute the reflectivity of X-rays at shallow (grazing) angles. A great overview of the relevant physics of X-ray reflectivity is found in cite:windt98_imd, which introduces the ~IMD~ program for the simulation of multilayer coatings for X-rays. Defining the combined scattering factor
\[ f(E) = f₁(E) + i f₂(E) \]
at energy $E$, the refractive index $n$ of a medium can be computed using
\[ n(E) = 1 - r_e \frac{λ²}{2π} \sum_i n_{ai} f_i(E), \]
where $n_{ai}$ is the number density of the $i\text{-th}$ constituent of the medium. By expressing the refractive index for X-rays in this fashion, the reflectivity can be expressed using the Fresnel equations, just like for visible light.
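As a small illustration of the refractive index expression, the following Python sketch (independent of =xrayAttenuation=) computes the refractive index decrement $δ = 1 - n$ for a single-element medium and the resulting critical grazing angle for total external reflection, $θ_c ≈ \sqrt{2δ}$. It uses the approximation $f₁ ≈ Z$, which holds far from absorption edges; for accurate values the tabulated $f₁$ must be used.

```python
import math

R_E = 2.8179403262e-13     # classical electron radius [cm]
N_A = 6.02214076e23        # Avogadro's number [mol⁻¹]
HC_KEV_CM = 1.239841984e-7 # h·c [keV·cm]

def delta(energy_kev, f1, molar_mass, density):
    """Refractive index decrement δ, with n = 1 - δ (absorption part ignored).

    δ = r_e λ²/(2π) · n_a · f₁, with n_a the atomic number density.
    """
    lam = HC_KEV_CM / energy_kev          # wavelength λ [cm]
    n_a = N_A * density / molar_mass      # atoms per cm³
    return R_E * lam**2 / (2.0 * math.pi) * n_a * f1

def critical_angle_deg(energy_kev, f1, molar_mass, density):
    """Critical grazing angle θ_c ≈ sqrt(2δ) in degrees."""
    return math.degrees(math.sqrt(2.0 * delta(energy_kev, f1, molar_mass, density)))
```

With $f₁ ≈ Z = 79$ for gold ($M = \SI{196.97}{g.mol^{-1}}$, $ρ = \SI{19.3}{g.cm^{-3}}$) at $\SI{4}{keV}$ this yields a critical angle of roughly a degree, illustrating why X-ray telescopes must operate at grazing incidence.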
The reflectivity for s-polarization is calculated via
#+NAME: eq:theory:fresnell_reflectance_s
\begin{equation}
r^s_{ik} = \frac{n_i · \cos(θ_i) - n_k · \cos(θ_k)}{n_i · \cos(θ_i) + n_k · \cos(θ_k)},
\end{equation}
while for p-polarization it is
#+NAME: eq:theory:fresnell_reflectance_p
\begin{equation}
r^p_{ik} = \frac{n_i · \cos(θ_k) - n_k · \cos(θ_i)}{n_i · \cos(θ_k) + n_k · \cos(θ_i)}.
\end{equation}
Here $θ_i$, $θ_k$ are the incident and refracted angles and $n_i$, $n_k$ the refractive indices on the incident and outgoing sides $i$ and $k$, respectively. The total reflected energy, the reflectance $R$, is expressed as
\[ R = \frac{1}{2}\left( \left|r^s\right|² + \left|r^p\right|²\right) \]
for unpolarized light. This can be generalized to multiple layers of material on a substrate, including a surface roughness. Combined, these provide the essence of a realistic computation of the efficiency of an X-ray telescope mirror shell. This is also implemented in [fn:scinim_xrayAttenuation] cite:Schmidt_xrayAttenuation_2022, and [fn:henke_gov] cite:henke1993x also provides an online calculator for such reflectivities.
- Depth graded multilayers :: One particular kind of surface, the depth-graded multilayer, is used in certain kinds of modern X-ray telescopes, for example the LLNL telescope at CAST, which follows the NuSTAR design. In such a multilayer, repeating layer pairs of a low atomic number $Z$ material and a high $Z$ material are stacked at decreasing thicknesses. A depth-graded multilayer is described by the equation
  \[ d_i = \frac{a}{(b + i)^c}, \]
  where $d_i$ is the thickness of layer $i$ (out of $N$ layers), with
  \[ a = d_{\text{min}} (b + N)^c, \]
  \[ b = \frac{1 - N k}{k - 1} \]
  and
  \[ k = \left(\frac{d_{\text{min}}}{d_{\text{max}}}\right)^{\frac{1}{c}}, \]
  where $d_{\text{min}}$ and $d_{\text{max}}$ are the thicknesses of the bottom and top layers, respectively.
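To make the layer recipe concrete, here is a small Python sketch computing the thicknesses $d_i$ directly from the equations above. The numbers in the usage note are purely illustrative, not the actual LLNL or NuSTAR design parameters; the function itself is unit-agnostic.

```python
def multilayer_thicknesses(d_min, d_max, n_layers, c):
    """Thicknesses d_i = a/(b+i)^c of a depth-graded multilayer stack.

    Power-law recipe: a = d_min (b+N)^c, b = (1 - N k)/(k - 1),
    k = (d_min/d_max)^(1/c). Layer 1 is the top (thickest) layer,
    layer N the bottom (thinnest) one.
    """
    k = (d_min / d_max) ** (1.0 / c)
    b = (1.0 - n_layers * k) / (k - 1.0)
    a = d_min * (b + n_layers) ** c
    return [a / (b + i) ** c for i in range(1, n_layers + 1)]
```

For instance, =multilayer_thicknesses(3.0, 10.0, 10, 0.25)= yields a stack whose first (top) layer has thickness 10.0 and whose last (bottom) layer has thickness 3.0, decreasing monotonically in between, confirming the boundary conditions built into $a$ and $b$.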
For example, the LLNL telescope uses a Pt/C depth-graded multilayer, in which a top layer of carbon is stacked on top of a platinum layer. Between 2 and 5 repetitions of decreasing thickness are stacked, with a ratio of $\SIrange{40}{45}{\%}$ carbon to platinum in thickness. More details on this will be discussed in appendix [[#sec:appendix:raytracing]], as it is of vital importance to correctly calculate the axion image required for the limit calculation.

The reflectivity of a depth-graded multilayer is computed recursively from the bottom of the stack to the top layer using
\[ r_i = \frac{r_{ik} + r_k \exp(2 i β_i)}{1 + r_{ik} r_k \exp(2 i β_i)}, \]
with
\[ β_i = 2π · d_i · \frac{\cos(θ_i)}{λ}, \]
where $θ_i$ is the angle as seen from the normal axis, $λ$ the wavelength of the incoming X-rays and $r_k$ the net reflectivity of the stack below the current interface. The $r_{ik}$ values are computed following equations [[eq:theory:fresnell_reflectance_s]] and [[eq:theory:fresnell_reflectance_p]]. Such multilayers work by summing the reflected contributions from the different layer transitions. The different layer thicknesses mean that X-rays of different energies and angles are best reflected by different layers. Thus, a much improved overall reflectivity over a wider energy and angle range can be achieved compared to a normal single layer on a substrate (e.g. a gold coating as used for the XMM-Newton or ABRIXAS optics).

[fn:henke_gov] https://henke.lbl.gov/optical_constants/
[fn:scinim_xrayAttenuation] https://github.com/SciNim/xrayAttenuation
**** TODOs for this section :noexport:
Think about this footnote again (after "respectively") before removing / keeping it. Should be easy to test by including the relevant part in xrayAttenuation and seeing if the result works.
#+begin_quote
[fn:practical_calculation] In practice care must be taken to compute these. Instead of attempting to explicitly compute refracted and reflected angles, one should work with the complex $\sin$ or $\cos$ expressions.
*CHECK ME AGAIN, I DONT THINK THIS IS REALLY NEEDED!!*
#+end_quote
- [ ] *FIX UP REFERENCE TO APPENDIX WHEN WRITTEN TO LINK EXPLICIT LLNL SECTION ABOVE!*
- [ ] *IN APPENDIX ABOUT MULTILAYERS* explain how the reflectance for depth graded multilayer is calculated! *TODO: FIX THIS UP LIKELY TO INCLUDE SURFACE ROUGHNESS. ALSO LOOK AT XRAY DATA BOOKLET FOR IT AGAIN*
- [ ] *REWRITE THIS IN KNOWLEDGE OF MULTILAYER CODE* I.e. Fresnel equations & complex refractive indices!!!
**** Other notes :extended:
\[ R = \left| \frac{k_m - k_p}{k_m + k_p} \right|², \]
where $k_m$ and $k_p$ are
\[ k_m = \sqrt{k² - (k \cos{θ})²} \]
and
\[ k_p = \sqrt{ k² n² - (k \cos{θ})² }, \]
defined via the wave number $k$, which itself is computed via
\[ k = 2π \sin{θ} / λ. \]
*** Bethe-Bloch equation
:PROPERTIES:
:CUSTOM_ID: sec:theory:bethe_bloch
:END:
Another relevant aspect for gaseous detectors is the energy deposition of charged particles. In particular for experiments near Earth's surface, a major source of background is cosmic radiation, with cosmic muons making up more than $\SI{95}{\%}$ cite:Zyla:2020zbs of the radiation (aside from neutrinos) at the surface, see sec. [[#sec:theory:cosmic_radiation]]. The energy loss of such muons can be calculated with the Bethe-Bloch equation, which describes the average energy loss per unit distance of a particle with charge $z$ in a homogeneous medium with charge carriers $Z$.
cite:Zyla:2020zbs [fn:bethe_equation_form]
#+NAME: eq:theory:bethe_bloch_eq
\begin{equation}
\left⟨ -\frac{\mathrm{d}E}{\mathrm{d}x} \right⟩ = K z² \frac{Z}{A} \frac{1}{β²} \left[ \frac{1}{2} \ln\frac{2m_e c² β² γ² W_{\text{max}}}{I²} - β² - \frac{δ(βγ)}{2} \right],
\end{equation}
where the different variables are as follows:
- $K = 4π N_A r_e² m_e c² = \SI{0.307075}{MeV.mol^{-1}.cm^2}$
- $W_{\text{max}}$: maximum possible energy transfer to an electron in a single interaction
- $I$: mean excitation energy of the absorber material in \si{\eV}
- $δ(βγ)$: density-effect correction to the energy loss
- and $r_e = \frac{e²}{4π ε_0 m_e c²}$ the classical electron radius, $N_A$ Avogadro's number, $m_e$ the electron mass, $c$ the speed of light in vacuum, $z$ the charge number of the incident particle, $Z$ the atomic number of the absorber material, $A$ the atomic mass of the absorber material, $β = v/c$ the speed of the incident particle and $γ$ the Lorentz factor.

This interaction behavior of muons leads to a specific, expected energy loss per distance. Commonly, they are 'minimum ionizing particles' (MIPs), as their energies lie between $\SIrange{0.1}{100}{GeV}$, the large 'valley' in which the Bethe equation is applicable [fn:reasoning_energy]. For argon gas at normal conditions this is shown in fig. [[fig:theory:muon_argon_3cm_bethe_loss]]. As the Bethe formula was derived from quantum mechanical perturbation theory, higher order corrections can be computed. For our purposes here the leading order is enough.

It is important to keep in mind that the Bethe-Bloch equation gives the /mean energy loss/ per distance. When considering short distances, as typically encountered in particle detectors, this mean is skewed by rare interactions that deposit large amounts of energy (towards $W_{\text{max}}$). The energy deposition along short distances is typically described by a Landau-Vavilov distribution (similar to, but different from, a normal Landau distribution) cite:Zyla:2020zbs,BICHSEL2006154.
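For orientation, the mean loss can be evaluated directly. The following is a minimal Python sketch of the Bethe-Bloch formula above (independent of the Nim implementation used for the figures in this thesis), omitting the density correction $δ(βγ)$ and using the mean excitation energy $I = \SI{188}{eV}$ for argon. For a muon it reproduces the characteristic minimum of roughly $\SI{1.5}{MeV.cm^2.g^{-1}}$ in argon at a kinetic energy of a few hundred $\si{MeV}$.

```python
import math

K = 0.307075        # 4π N_A r_e² m_e c² [MeV mol⁻¹ cm²]
M_E = 0.51099895    # electron mass [MeV]
M_MU = 105.6583755  # muon mass [MeV]

def bethe_bloch(ekin_mev, z=1, Z=18, A=39.948, I_ev=188.0, mass=M_MU):
    """Mean energy loss ⟨-dE/dx⟩ in MeV cm² g⁻¹ (density correction δ omitted)."""
    gamma = ekin_mev / mass + 1.0
    beta2 = 1.0 - 1.0 / gamma**2
    # maximum energy transfer to a single electron
    w_max = 2.0 * M_E * beta2 * gamma**2 / (
        1.0 + 2.0 * gamma * M_E / mass + (M_E / mass) ** 2)
    I = I_ev * 1e-6  # mean excitation energy [MeV]
    ln_arg = 2.0 * M_E * beta2 * gamma**2 * w_max / I**2
    return K * z**2 * Z / A * (0.5 * math.log(ln_arg) - beta2) / beta2
```

Multiplying the result by the gas density and the traversed distance gives the mean deposited energy; as stressed above, for a thin gas layer this mean is not the typical deposition.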
The most probable energy loss is often a more appropriate number to consider. It can be expressed as
#+NAME: eq:theory:most_probable_loss
\begin{equation}
Δ_p = ξ \left[ \ln{ \frac{2 m_e c² β² γ²}{I}} + \ln{\frac{ξ}{I}} + j - β² - δ(βγ) \right],
\end{equation}
where $ξ$ is
\[ ξ = \frac{1}{2} K z² \left⟨ \frac{Z}{A} \right⟩ \frac{x}{β²} \, \si{MeV}, \]
with $x = ρ · d$ the material column the particle travels through, the density $ρ$ times the distance $d$, in units of $\si{g.cm^{-2}}$. $j = \num{0.200}$ is an empirical constant cite:Zyla:2020zbs,bichsel1988straggling. Further, $⟨Z / A⟩$ is simply the average $Z/A$ of a material compound, $⟨Z/A⟩ = \sum_i w_i Z_i / A_i$.

The large difference typically encountered between the most probable and the mean energy loss in particle detectors makes studying the expected signals a complicated topic. For a detailed description relevant for thin gaseous detectors, see especially cite:BICHSEL2006154. Fig. [[fig:theory:muon_argon_3cm_bethe_loss]] shows the comparison of the most probable energy loss via equation [[eq:theory:most_probable_loss]] and the mean energy loss via the Bethe-Bloch equation [[eq:theory:bethe_bloch_eq]] for muons of different energies traversing $\SI{3}{cm}$ of argon gas.
#+CAPTION: Mean energy loss via the Bethe-Bloch equation (purple) of muons in \SI{3}{\cm} of argon at the
#+CAPTION: conditions in use in the GridPix detector at CAST: \SI{1050}{mbar} of chamber pressure at room
#+CAPTION: temperature. Note that the mean is skewed by events that transfer a large amount of energy,
#+CAPTION: but are very rare! As such, care must be taken interpreting the numbers. Green shows the most
#+CAPTION: probable energy loss, based on the peak of the Landau-Vavilov distribution underlying the
#+CAPTION: Bethe-Bloch mean value.
#+NAME: fig:theory:muon_argon_3cm_bethe_loss
[[~/phd/Figs/muonStudies/ar_energy_loss_cast.pdf]]

[fn:bethe_equation_form] Note that there are several common parametrizations of the Bethe-Bloch equation.
[fn:reasoning_energy] Energies of muons detected near Earth's surface are in this range precisely because they are MIPs. Below and above this range, the much stronger interactions result either in stopping and decay, or in energy loss until they reach the MIP range.
**** TODOs for this section [/] :noexport:
No more interactive vega lite plots...
#+begin_quote
Interactive Vega-Lite version available at cite:vega_fig:theory:muon_argon_3cm_bethe_loss.
#+end_quote
- [ ] *UNITS IN BETHE BLOCH PLOT* writing dE/dx is a bit funny, as it is for a fixed distance of 3cm!
- [ ] Maybe restructure bethe bloch parameter explanation?
- [X] *GIVE ENERGY RANGE FOR MUONS, SAY THEY ARE MINIMALLY IONIZING PARTICLES*
- [X] *TODO: ADD MOST PROBABLE LOSS TO PLOT BELOW!*
- [ ] Either replace referencing figure from bibliography and simply put mini description into reference or something like this: https://chat.openai.com/share/cdec7de9-40d3-4dfa-a0ff-0f5e43263ec3
  Landau distribution! Also check out this $f$ function that is mentioned here: https://doi.org/10.1016/j.nima.2006.03.009 as a better way to compute the actual energy loss per distance?
- [ ] Also: read again PDG part about PDG and later in chapter the average energy loss. Of course cannot take the mean of the Landau distribution due to the long tail. We don't really do that in our muon simulation though.
- [ ] *Mention* higher order corrections and names?
  -> The next corrections proportional to $Z³$ and $Z⁴$ are called /??/ and /shell correction/ respectively. At higher energies also the density correction by Fermi *CITE* needs to be accounted for. These higher order corrections are mainly relevant for very low energies. *SEE PDG "Energy loss at low energies" section* *SHOW WITH OR WITHOUT EQUATION, BUT DROP IN CALCS*
  -> From cite:sauli2014gaseous on page 30:
#+begin_quote
The expression shows that the differential energy loss depends only on the particle's velocity β and not on its mass; the additional term C/Z represents the so-called inner shell corrections, that take into account a reduced ionization efficiency on the deepest electronic layers due to screening effects, and δ/2 is a density effect correction arising from a collective interaction between the medium and the Coulomb field of the particle at highly relativistic velocities; its contribution is small for non-condensed media. It should be noted, however, that in thin absorbers electrons produced with high momentum transfer might escape from the layer, thus reducing the effective yield.
#+end_quote
So:
- C/Z term is the "inner shell correction" -> reduced ionization efficiency at deepest layers due to screening
- δ/2 density effect correction from collective interaction medium and Coulomb field at highly relativistic speeds
- [ ] *Add C/Z term to Bethe-Bloch equation!*
- [ ] *Explain δ/2 term!*
***** Muons
The below is not really needed here, right? We have an entire section about muons after all! *REPHRASE* instead focus on fact that they lose > 2 GeV instead of talking about typical muon energies.

Muons arriving at the surface have energies typically above \SI{100}{\MeV}. For that reason the higher order corrections are not of importance for the study of muons in gaseous detectors. At each point the formula gives the *expectation value* for the energy loss after a distance large enough to include many interactions. In each interaction the particle loses energy according to a Landau distribution *CITE WHAT*, shown in fig. *LANDAU PLOT*.
*EXPLANATION NOT QUITE CORRECT*

*MOVE FOLLOWING TO SEPARATE SECTION LATER (noexport about muon studies?)*

By taking into account the Bethe formula and a Landau distribution at each point, we can compute an expectation for the energy loss of muons under the typical conditions met in a gaseous detector.
**** Parameters Bethe-Bloch long :extended:
- $K = 4π N_A r_e² m_e c² = \SI{0.307075}{MeV.mol^{-1}.cm^2}$
- $r_e = \frac{e²}{4π ε_0 m_e c²} = \SI{2.817 940 3227(19)}{fm}$: classical electron radius
- $N_A = \SI{6.022 140 857(74)e23}{\mol^{-1}}$: Avogadro's number
- $m_e = \SI{9.1093837015(28)e-31}{\kg}$: electron mass
- $c = \SI{299792458}{\meter\per\second}$: speed of light in vacuum
- $z$: charge number of incident particle
- $Z$: atomic number of absorber material
- $A$: atomic mass of absorber material
- $β = \frac{v}{c}$: speed of incident particle
- $γ = \frac{1}{\sqrt{1 - β²}}$: Lorentz factor
- $W_{\text{max}}$: maximum possible energy transfer to an electron in a single interaction
- $I$: mean excitation energy of the absorber material in \si{\eV}
- $δ(βγ)$: density-effect correction to energy loss
**** Bethe equation for muons traversing $\SI{3}{\cm}$ of argon gas :extended:
We will now compute the energy loss of muons traversing the \SI{3}{\cm} of argon gas seen by a muon moving orthogonally to the readout plane (i.e. such that it may look like a photon).
#+begin_src nim :results silent :tangle /home/basti/phd/code/bethe_bloch.nim
import math, macros, unchained, ggplotnim, sequtils, strformat, strutils
import thesisHelpers
import ggplotnim / ggplot_vegatex

let K = 4 * π * N_A * r_e^2 * m_e * c^2 # usually in: [MeV mol⁻¹ cm²]

defUnit(cm³•g⁻¹)
defUnit(J•m⁻¹)
defUnit(cm⁻³)
defUnit(g•mol⁻¹)
defUnit(MeV•g⁻¹•cm²)
defUnit(mol⁻¹)
defUnit(keV•cm⁻¹)
defUnit(g•cm⁻³)
defUnit(g•cm⁻²)

proc I[T](z: float): T =
  ## use Bloch approximation for all but Argon (better use tabulated values!)
  result = if z == 18.0: 188.0.eV.to(T)
           else: (10.eV * z).to(T)

proc calcβ(γ: UnitLess): UnitLess =
  result = sqrt(1.0 - 1.0 / (γ^2))

proc betheBloch(z, Z: UnitLess, A: g•mol⁻¹, γ: UnitLess, M: kg): MeV•g⁻¹•cm² =
  ## result in MeV cm² g⁻¹ (normalized by density)
  ## z: charge of particle
  ## Z: charge of particles making up medium
  ## A: atomic mass of particles making up medium
  ## γ: Lorentz factor of particle
  ## M: mass of particle in MeV (or same mass as `m_e` defined as)
  let β = calcβ(γ)
  let W_max = 2 * m_e * c^2 * β^2 * γ^2 / (1 + 2 * γ * m_e / M + (m_e / M)^2)
  let lnArg = 2 * m_e * c^2 * β^2 * γ^2 * W_max / (I[Joule](Z)^2)
  result = (K * z^2 * Z / A * 1.0 / (β^2) * (
    0.5 * ln(lnArg) - β^2
  )).to(MeV•g⁻¹•cm²)

proc mostProbableLoss(z, Z: UnitLess, A: g•mol⁻¹, γ: UnitLess, x: g•cm⁻²): keV =
  ## Computes the most probable value, corresponding to the peak of the Landau
  ## distribution, that gives rise to the Bethe-Bloch formula.
  ##
  ## Taken from PDG chapter 'Passage of particles through matter' equation
  ## `34.12` in 'Fluctuations in energy loss', version 2020).
  ##
  ## `x` is the "thickness". Density times length, `x = ρ * d`. The other parameters
  ## are as in `betheBloch` above.
  let β = calcβ(γ)
  let ξ = K / 2.0 * Z / A * z*z * (x / (β*β))
  const j = 0.200
  let I = I[Joule](Z)
  result = (ξ * ( ln((2 * m_e * c^2 * β^2 * γ^2).to(Joule) / I) +
                  ln(ξ.to(Joule) / I) + j - β^2)).to(keV) # - δ*(β*γ)

proc density(p: mbar, M: g•mol⁻¹, temp: Kelvin): g•cm⁻³ =
  ## returns the density of the gas for the given pressure.
  ## The pressure is assumed in `mbar` and the temperature (in `K`).
  ## Returns the density in `g / cm^3`
  let gasConstant = 8.314.J•K⁻¹•mol⁻¹ # joule K^-1 mol^-1
  let pressure = p.to(Pa) # pressure in Pa
  result = (pressure * M / (gasConstant * temp)).to(g•cm⁻³)

proc E_to_γ(E: GeV): UnitLess =
  result = E.to(Joule) / (m_μ * c^2) + 1

type
  Element = object
    name: string
    Z: UnitLess
    M: g•mol⁻¹
    A: UnitLess # numerically same as `M`
    ρ: g•cm⁻³

proc initElement(name: string, Z: UnitLess, M: g•mol⁻¹, ρ: g•cm⁻³): Element =
  Element(name: name, Z: Z, M: M, A: M.UnitLess, ρ: ρ)

let M_Ar = 39.95.g•mol⁻¹ # molar mass. Numerically same as relative atomic mass
#let ρAr = density(1050.mbar, M_Ar, temp = 293.15.K)
let ρAr = density(1013.mbar, M_Ar, temp = 293.15.K)
let Argon = initElement("ar", 18.0.UnitLess, 39.95.g•mol⁻¹, ρAr)

proc intBethe(e: Element, d_total: cm, E0: eV, dx = 1.μm): eV =
  ## integrated energy loss of bethe formula after `d` cm of matter
  ## and returns the energy remaining
  var γ: UnitLess = E_to_γ(E0.to(GeV))
  var d: cm
  result = E0
  var totalLoss = 0.eV
  while d < d_total and result > 0.eV:
    let E_loss: MeV = betheBloch(-1, e.Z, e.M, γ, m_μ) * e.ρ * dx
    result = result - E_loss.to(eV)
    γ = E_to_γ(result.to(GeV))
    d = d + dx.to(cm)
    totalLoss = totalLoss + E_loss.to(eV)
  result = max(0.float, result.float).eV

func argonLabel(): string =
  "fig:theory:muon_argon_3cm_bethe_loss"

## TODO: add in the most probable value calc!
func argonCaption(): string =
  result = r"Mean energy loss via Bethe-Bloch (purple) equation of muons in \SI{3}{\cm} of argon at " &
    r"conditions in use in GridPix detector at CAST. \SI{1050}{mbar} of chamber pressure at room " &
    r"temperature. Note that the mean is skewed by events that transfer a large amount of energy, " &
    r"but are very rare! As such care must be taken interpreting the numbers. Green shows the most " &
    r"probable energy loss, based on the peak of the Landau-Vavilov distribution underlying the " &
    r"Bethe-Bloch mean value." &
    interactiveVega(argonLabel())

proc plotDetectorAbsorption(element: Element) =
  let E_float = logspace(-2, 2, 1000)
  let energies = E_float.mapIt(it.GeV)
  let E_loss = energies.mapIt((it.to(eV) - intBethe(element, 3.cm, it.to(eV))).to(keV).float)
  let E_lossMP = energies.mapIt(mostProbableLoss(-1, element.Z, element.M, E_to_γ(it),
                                                 ρ_Ar * 3.cm).float)
  let df = seqsToDf({E_float, "Bethe-Bloch (BB)" : E_loss, "Most probable (MP)" : E_lossMP})
    .gather(["Bethe-Bloch (BB)", "Most probable (MP)"], "Type", "Value")
  ggplot(df, aes("E_float", "Value", color = "Type")) +
    geom_line() +
    #xlab(r"μ Energy [\si{\GeV}]") + ylab(r"$-\left\langle \frac{\mathrm{d}E}{\mathrm{d}x}\right\rangle$ [\si{\keV}]") +
    xlab(r"μ Energy [\si{\GeV}]") +
    ylab(r"$-\left\langle \frac{\mathrm{d}E}{\mathrm{d}x}\right\rangle$ (BB), $Δ_p$ (MP) [\si{\keV}]") +
    scale_x_log10() + scale_y_log10() +
    themeLatex(fWidth = 0.9, width = 600, baseTheme = singlePlot) +
    margin(right = 6) +
    ggtitle(r"Energy loss of Muons in \SI{3}{\cm} " & &"{element.name.capitalizeAscii} at CAST conditions") +
    ggsave(&"/home/basti/phd/Figs/muonStudies/{element.name}_energy_loss_cast.pdf",
           useTeX = true, standalone = true)
  #ggvegatex(&"/home/basti/phd/Figs/muonStudies/{element.name}_energy_loss_cast",
  #          onlyTikZ = false,
  #          caption = argonCaption(),
  #          label = argonLabel(),
  #          width = 600, height = 360)
plotDetectorAbsorption(Argon)

proc plotMostProbable(e: Element) =
  let E_float = logspace(-1.5, 2, 1000)
  let energies = E_float.mapIt(it.GeV)
  let E_loss = energies.mapIt(mostProbableLoss(-1, e.Z, e.M, E_to_γ(it), ρ_Ar * 3.cm))
  let df = toDf({"E_loss" : E_loss.mapIt(it.float), E_float})
  ggplot(df, aes("E_float", "E_loss")) +
    geom_line() +
    scale_x_log10() +
    xlab("Energy [GeV]") + ylab("Most probable loss [keV]") +
    ggsave("/tmp/most_probable_loss.pdf")
plotMostProbable(Argon)
#+end_src
*** X-ray fluorescence
:PROPERTIES:
:CUSTOM_ID: sec:theory:xray_fluorescence
:END:
Cosmic muons in their interactions with matter can ionize atoms, leading to the
possible emission of X-rays if the removed electron is part of an inner shell,
mostly the K (and to a lesser extent the L) shell. The resulting background
consists of real X-rays and is therefore impossible to distinguish from an
axion signal, unless external scintillator based vetoes are used. Of course,
to be relevant as detector background the fluorescing material must be close
to the detector, as the X-rays are otherwise absorbed. This makes the detector
material, the gas itself and all material in the field of view of the detector
a candidate for X-ray fluorescence background.

Table [[tab:theory:xray_fluorescence]] lists the X-ray fluorescence lines of
elements commonly encountered in the context of gaseous detectors for a
helioscope experiment. Table [[tab:theory:binding_energies]] lists the atomic
binding energies of the same elements. Together they serve as a reference to
identify possible lines in background data, to select potential targets for an
X-ray tube producing reference X-ray data, and to understand the absorption
edges of materials. The difference between the binding energy and the energies
of the most likely fluorescence lines explains why a material is typically
more transparent to its own fluorescence X-rays than to energies slightly
above the corresponding absorption edge. This (among other effects) explains
the 'escape peak' seen in many types of gaseous detectors. See
fig. [[fig:theory:transmission_examples]] for the argon transmission as an
example. The argon $K 1s$ binding energy of $\SI{3.2}{keV}$ is visible as a
sudden drop in transmission (which is directly related to the absorption
length). At the same time, an X-ray emitted via the Kα line after ionization
of a $K 1s$ electron only carries $\SI{2.95}{keV}$, for which argon gas is
extremely transparent (this is the cause of so called 'escape photons', see
sec. [[#sec:theory:escape_peaks_55fe]]).
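The escape-photon mechanism can be made concrete with a small numeric sketch
(not part of the original text; the energies are the Mn Kα1 and Ar Kα1 values
from the fluorescence table below, and the variable names are illustrative):

```python
# Sketch: position of the argon escape peak for a 55Fe calibration source.
# Energies in keV, taken from the X-ray fluorescence table (Mn Kα1, Ar Kα1).
E_photon = 5.89875     # Mn Kα1 photon emitted by a 55Fe source
E_ar_kalpha = 2.95770  # Ar Kα1 fluorescence line

# If photoabsorption happens on the Ar K shell and the subsequent Kα
# fluorescence photon escapes the gas volume, the detector only records
# the difference between the two energies:
E_escape = E_photon - E_ar_kalpha
print(f"escape peak at about {E_escape:.2f} keV")  # ~2.94 keV
```

This reproduces the well-known argon escape peak of a ⁵⁵Fe spectrum at about
$\SI{2.9}{keV}$, just below the Ar Kα line itself.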
#+NAME: tab:theory:xray_fluorescence
#+CAPTION: Excerpt of X-ray fluorescence energies of K, L and M emission lines for different elements
#+CAPTION: mostly relevant for the context of this thesis in \si{eV}.
#+CAPTION: Taken from the X-ray data book cite:williams2001x, specifically https://xdb.lbl.gov/Section1/Table_1-2.pdf.
#+ATTR_LATEX: :booktabs t :font \footnotesize :float sidewaystable
|----+---------+-----------+----------+----------+----------+----------+----------+----------+----------+---------|
|  Z | Element | Kα1       | Kα2      | Kβ1      | Lα1      | Lα2      | Lβ1      | Lβ2      | Lγ1      | Mα1     |
|----+---------+-----------+----------+----------+----------+----------+----------+----------+----------+---------|
|  6 | C       | 277       |          |          |          |          |          |          |          |         |
|  7 | N       | 392.4     |          |          |          |          |          |          |          |         |
|  8 | O       | 524.9     |          |          |          |          |          |          |          |         |
| 13 | Al      | 1,486.70  | 1,486.27 | 1,557.45 |          |          |          |          |          |         |
| 14 | Si      | 1,739.98  | 1,739.38 | 1,835.94 |          |          |          |          |          |         |
| 18 | Ar      | 2,957.70  | 2,955.63 | 3,190.5  |          |          |          |          |          |         |
| 22 | Ti      | 4,510.84  | 4,504.86 | 4,931.81 | 452.2    | 452.2    | 458.4    |          |          |         |
| 25 | Mn      | 5,898.75  | 5,887.65 | 6,490.45 | 637.4    | 637.4    | 648.8    |          |          |         |
| 26 | Fe      | 6,403.84  | 6,390.84 | 7,057.98 | 705.0    | 705.0    | 718.5    |          |          |         |
| 28 | Ni      | 7,478.15  | 7,460.89 | 8,264.66 | 851.5    | 851.5    | 868.8    |          |          |         |
| 29 | Cu      | 8,047.78  | 8,027.83 | 8,905.29 | 929.7    | 929.7    | 949.8    |          |          |         |
| 47 | Ag      | 22,162.92 | 21,990.3 | 24,942.4 | 2,984.31 | 2,978.21 | 3,150.94 | 3,347.81 | 3,519.59 |         |
| 54 | Xe      | 29,779    | 29,458   | 33,624   | 4,109.9  | —        | —        | —        | —        |         |
| 78 | Pt      | 66,832    | 65,112   | 75,748   | 9,442.3  | 9,361.8  | 11,070.7 | 11,250.5 | 12,942.0 | 2,050.5 |
| 79 | Au      | 68,803.7  | 66,989.5 | 77,984   | 9,713.3  | 9,628.0  | 11,442.3 | 11,584.7 | 13,381.7 | 2,122.9 |
| 82 | Pb      | 74,969.4  | 72,804.2 | 84,936   | 10,551.5 | 10,449.5 | 12,613.7 | 12,622.6 | 14,764.4 | 2,345.5 |
|----+---------+-----------+----------+----------+----------+----------+----------+----------+----------+---------|

#+CAPTION: Excerpt of electron binding energies of elements mostly relevant for the context of
#+CAPTION: this thesis in \si{eV}. Taken from the X-ray data book cite:williams2001x,
#+CAPTION: specifically https://xdb.lbl.gov/Section1/Table_1-1.pdf.
#+NAME: tab:theory:binding_energies
#+ATTR_LATEX: :booktabs t :font \footnotesize :float sidewaystable
|----+---------+--------+----------+----------+----------+----------+----------+----------+----------+----------+----------|
|  Z | Element | K 1s   | L1 2s    | L2 2p1/2 | L3 2p3/2 | M1 3s    | M2 3p1/2 | M3 3p3/2 | M4 3d3/2 | M5 3d5/2 |          |
|----+---------+--------+----------+----------+----------+----------+----------+----------+----------+----------+----------|
|  6 | C       | 284.2  |          |          |          |          |          |          |          |          |          |
|  7 | N       | 409.9  | 37.3     |          |          |          |          |          |          |          |          |
|  8 | O       | 543.1  | 41.6     |          |          |          |          |          |          |          |          |
| 13 | Al      | 1559.6 | 117.8    | 72.95    | 72.55    |          |          |          |          |          |          |
| 14 | Si      | 1839   | 149.7    | 99.82    | 99.42    |          |          |          |          |          |          |
| 18 | Ar      | 3205.9 | 326.3    | 250.6    | 248.4    | 29.3     | 15.9     | 15.7     |          |          |          |
| 22 | Ti      | 4966   | 560.9    | 460.2    | 453.8    | 58.7     | 32.6     | 32.6     |          |          |          |
| 25 | Mn      | 6539   | 769.1    | 649.9    | 638.7    | 82.3     | 47.2     | 47.2     |          |          |          |
| 26 | Fe      | 7112   | 844.6    | 719.9    | 706.8    | 91.3     | 52.7     | 52.7     |          |          |          |
| 28 | Ni      | 8333   | 1008.6   | 870.0    | 852.7    | 110.8    | 68.0     | 66.2     |          |          |          |
| 29 | Cu      | 8979   | 1096.7   | 952.3    | 932.7    | 122.5    | 77.3     | 75.1     |          |          |          |
| 47 | Ag      | 25514  | 3806     | 3524     | 3351     | 719.0    | 603.8    | 573.0    | 374.0    | 368.3    |          |
| 54 | Xe      | 34561  | 5453     | 5107     | 4786     | 1148.7   | 1002.1   | 940.6    | 689.0    | 676.4    |          |
| 78 | Pt      | 78395  | 13880    | 13273    | 11564    | 3296     | 3027     | 2645     | 2202     | 2122     |          |
| 79 | Au      | 80725  | 14353    | 13734    | 11919    | 3425     | 3148     | 2743     | 2291     | 2206     |          |
| 82 | Pb      | 88005  | 15861    | 15200    | 13035    | 3851     | 3554     | 3066     | 2586     | 2484     |          |
|----+---------+--------+----------+----------+----------+----------+----------+----------+----------+----------+----------|
|  Z | Element | N1 4s  | N2 4p1/2 | N3 4p3/2 | N4 4d3/2 | N5 4d5/2 | N6 4f5/2 | N7 4f7/2 | O1 5s    | O2 5p1/2 | O3 5p3/2 |
|----+---------+--------+----------+----------+----------+----------+----------+----------+----------+----------+----------|
| 47 | Ag      | 97.0   | 63.7     | 58.3     |          |          |          |          |          |          |          |
| 54 | Xe      | 213.2  | 146.7    | 145.5    | 69.5     | 67.5     | ---      | ---      | 23.3     | 13.4     | 12.1     |
| 78 | Pt      | 725.4  | 609.1    | 519.4    | 331.6    | 314.6    | 74.5     | 71.2     | 101.7    | 65.3     | 51.7     |
| 79 | Au      | 762.1  | 642.7    | 546.3    | 353.2    | 335.1    | 87.6     | 84.0     | 107.2    | 74.2     | 57.2     |
| 82 | Pb      | 891.8  | 761.9    | 643.5    | 434.3    | 412.2    | 141.7    | 136.9    | 147      | 106.4    | 83.3     |
|----+---------+--------+----------+----------+----------+----------+----------+----------+----------+----------+----------|

**** TODOs for this section [/] :noexport:

- [ ] *HOW DOES THIS CORRESPOND TO AUGER ELECTRONS?*
- [X] *TOO MUCH DETAIL HERE?* *CHECK THE SHELL STUFF, GIVE A MINI TABLE OF
  IMPORTANT ATOMIC LINES!* Important for our 3 keV argon line + 8 keV copper
  line mainly.
- [X] Tab. *TABLE INSERT* contains the different lines of plausible materials
  used for detector construction / etc. *...* *ASK TOBI IF TO ADD SOME MATERIAL*
- [X] *ADD RELEVANT TABLE FOR BINDING ENERGY AS WELL!*
- [X] *TODO: REMOVE UNNECESSARY LINES*
- [ ] *REPHRASE THE BELOW IN PARTICULAR REFERENCE TO ABSORPTION EDGE*

**** Full tables for X-ray fluorescence lines and binding energies :extended:

#+NAME: tab_all_xray_fluorescence
#+CAPTION: Photon energies of K, L and M emission lines for different elements in \si{eV}.
#+CAPTION: Taken from cite:williams2001x, specifically https://xdb.lbl.gov/Section1/Table_1-2.pdf.
|----+---------+-----------+-----------+----------+----------+-----------+----------+----------+----------+---------| | Z | Element | Kα1 | Kα2 | Kβ1 | Lα1 | Lα2 | Lβ1 | Lβ2 | Lγ1 | Mα1 | |----+---------+-----------+-----------+----------+----------+-----------+----------+----------+----------+---------| | 3 | Li | 54.3 | | | | | | | | | | 4 | Be | 108.5 | | | | | | | | | | 5 | B | 183.3 | | | | | | | | | | 6 | C | 277 | | | | | | | | | | 7 | N | 392.4 | | | | | | | | | | 8 | O | 524.9 | | | | | | | | | | 9 | F | 676.8 | | | | | | | | | | 10 | Ne | 848.6 | 848.6 | | | | | | | | | 11 | Na | 1,040.98 | 1,040.98 | 1,071.1 | | | | | | | | 12 | Mg | 1,253.60 | 1,253.60 | 1,302.2 | | | | | | | | 13 | Al | 1,486.70 | 1,486.27 | 1,557.45 | | | | | | | | 14 | Si | 1,739.98 | 1,739.38 | 1,835.94 | | | | | | | | 15 | P | 2,013.7 | 2,012.7 | 2,139.1 | | | | | | | | 16 | S | 2,307.84 | 2,306.64 | 2,464.04 | | | | | | | | 17 | Cl | 2,622.39 | 2,620.78 | 2,815.6 | | | | | | | | 18 | Ar | 2,957.70 | 2,955.63 | 3,190.5 | | | | | | | | 19 | K | 3,313.8 | 3,311.1 | 3,589.6 | | | | | | | | 20 | Ca | 3,691.68 | 3,688.09 | 4,012.7 | 341.3 | 341.3 | 344.9 | | | | | 21 | Sc | 4,090.6 | 4,086.1 | 4,460.5 | 395.4 | 395.4 | 399.6 | | | | |----+---------+-----------+-----------+----------+----------+-----------+----------+----------+----------+---------| | Z | Element | Kα1 | Kα2 | Kβ1 | Lα1 | Lα2 | Lβ1 | Lβ2 | Lγ1 | Mα1 | |----+---------+-----------+-----------+----------+----------+-----------+----------+----------+----------+---------| | 22 | Ti | 4,510.84 | 4,504.86 | 4,931.81 | 452.2 | 452.2 | 458.4 | | | | | 23 | V | 4,952.20 | 4,944.64 | 5,427.29 | 511.3 | 511.3 | 519.2 | | | | | 24 | Cr | 5,414.72 | 5,405.509 | 5,946.71 | 572.8 | 572.8 | 582.8 | | | | | 25 | Mn | 5,898.75 | 5,887.65 | 6,490.45 | 637.4 | 637.4 | 648.8 | | | | | 26 | Fe | 6,403.84 | 6,390.84 | 7,057.98 | 705.0 | 705.0 | 718.5 | | | | | 27 | Co | 6,930.32 | 6,915.30 | 7,649.43 | 776.2 | 776.2 | 791.4 | | | | | 28 | Ni | 
7,478.15 | 7,460.89 | 8,264.66 | 851.5 | 851.5 | 868.8 | | | | | 29 | Cu | 8,047.78 | 8,027.83 | 8,905.29 | 929.7 | 929.7 | 949.8 | | | | | 30 | Zn | 8,638.86 | 8,615.78 | 9,572.0 | 1,011.7 | 1,011.7 | 1,034.7 | | | | | 31 | Ga | 9,251.74 | 9,224.82 | 10,264.2 | 1,097.92 | 1,097.92 | 1,124.8 | | | | | 32 | Ge | 9,886.42 | 9,855.32 | 10,982.1 | 1,188.00 | 1,188.00 | 1,218.5 | | | | | 33 | As | 10,543.72 | 10,507.99 | 11,726.2 | 1,282.0 | 1,282.0 | 1,317.0 | | | | | 34 | Se | 11,222.4 | 11,181.4 | 12,495.9 | 1,379.10 | 1,379.10 | 1,419.23 | | | | | 35 | Br | 11,924.2 | 11,877.6 | 13,291.4 | 1,480.43 | 1,480.43 | 1,525.90 | | | | | 36 | Kr | 12,649 | 12,598 | 14,112 | 1,586.0 | 1,586.0 | 1,636.6 | | | | | 37 | Rb | 13,395.3 | 13,335.8 | 14,961.3 | 1,694.13 | 1,692.56 | 1,752.17 | | | | | 38 | Sr | 14,165 | 14,097.9 | 15,835.7 | 1,806.56 | 1,804.74 | 1,871.72 | | | | | 39 | Y | 14,958.4 | 14,882.9 | 16,737.8 | 1,922.56 | 1,920.47 | 1,995.84 | | | | | 40 | Zr | 15,775.1 | 15,690.9 | 17,667.8 | 2,042.36 | 2,039.9 | 2,124.4 | 2,219.4 | 2,302.7 | | | 41 | Nb | 16,615.1 | 16,521.0 | 18,622.5 | 2,165.89 | 2,163.0 | 2,257.4 | 2,367.0 | 2,461.8 | | | 42 | Mo | 17,479.34 | 17,374.3 | 19,608.3 | 2,293.16 | 2,289.85 | 2,394.81 | 2,518.3 | 2,623.5 | | | 43 | Tc | 18,367.1 | 18,250.8 | 20,619 | 2,424 | 2,420 | 2,538 | 2,674 | 2,792 | | | 44 | Ru | 19,279.2 | 19,150.4 | 21,656.8 | 2,558.55 | 2,554.31 | 2,683.23 | 2,836.0 | 2,964.5 | | | 45 | Rh | 20,216.1 | 20,073.7 | 22,723.6 | 2,696.74 | 2,692.05 | 2,834.41 | 3,001.3 | 3,143.8 | | | 46 | Pd | 21,177.1 | 21,020.1 | 23,818.7 | 2,838.61 | 2,833.29 | 2,990.22 | 3,171.79 | 3,328.7 | | | 47 | Ag | 22,162.92 | 21,990.3 | 24,942.4 | 2,984.31 | 2,978.21 | 3,150.94 | 3,347.81 | 3,519.59 | | | 48 | Cd | 23,173.6 | 22,984.1 | 26,095.5 | 3,133.73 | 3,126.91 | 3,316.57 | 3,528.12 | 3,716.86 | | | 49 | In | 24,209.7 | 24,002.0 | 27,275.9 | 3,286.94 | 3,279.29 | 3,487.21 | 3,713.81 | 3,920.81 | | | 50 | Sn | 25,271.3 | 25,044.0 | 28,486.0 | 
3,443.98 | 3,435.42 | 3,662.80 | 3,904.86 | 4,131.12 | | | 51 | Sb | 26,359.1 | 26,110.8 | 29,725.6 | 3,604.72 | 3,595.32 | 3,843.57 | 4,100.78 | 4,347.79 | | | 52 | Te | 27,472.3 | 27,201.7 | 30,995.7 | 3,769.33 | 3,758.8 | 4,029.58 | 4,301.7 | 4,570.9 | | | 53 | I | 28,612.0 | 28,317.2 | 32,294.7 | 3,937.65 | 3,926.04 | 4,220.72 | 4,507.5 | 4,800.9 | | | 54 | Xe | 29,779 | 29,458 | 33,624 | 4,109.9 | — | — | — | — | | | 55 | Cs | 30,972.8 | 30,625.1 | 34,986.9 | 4,286.5 | 4,272.2 | 4,619.8 | 4,935.9 | 5,280.4 | | | 56 | Ba | 32,193.6 | 31,817.1 | 36,378.2 | 4,466.26 | 4,450.90 | 4,827.53 | 5,156.5 | 5,531.1 | | | 57 | La | 33,441.8 | 33,034.1 | 37,801.0 | 4,650.97 | 4,634.23 | 5,042.1 | 5,383.5 | 5,788.5 | 833 | | 58 | Ce | 34,719.7 | 34,278.9 | 39,257.3 | 4,840.2 | 4,823.0 | 5,262.2 | 5,613.4 | 6,052 | 883 | | 59 | Pr | 36,026.3 | 35,550.2 | 40,748.2 | 5,033.7 | 5,013.5 | 5,488.9 | 5,850 | 6,322.1 | 929 | | 60 | Nd | 37,361.0 | 36,847.4 | 42,271.3 | 5,230.4 | 5,207.7 | 5,721.6 | 6,089.4 | 6,602.1 | 978 | | 61 | Pm | 38,724.7 | 38,171.2 | 43,826 | 5,432.5 | 5,407.8 | 5,961 | 6,339 | 6,892 | — | | 62 | Sm | 40,118.1 | 39,522.4 | 45,413 | 5,636.1 | 5,609.0 | 6,205.1 | 6,586 | 7,178 | 1,081 | |----+---------+-----------+-----------+----------+----------+-----------+----------+----------+----------+---------| | Z | Element | Kα1 | Kα2 | Kβ1 | Lα1 | Lα2 | Lβ1 | Lβ2 | Lγ1 | Mα1 | |----+---------+-----------+-----------+----------+----------+-----------+----------+----------+----------+---------| | 63 | Eu | 41,542.2 | 40,901.9 | 47,037.9 | 5,845.7 | 5,816.6 | 6,456.4 | 6,843.2 | 7,480.3 | 1,131 | | 64 | Gd | 42,996.2 | 42,308.9 | 48,697 | 6,057.2 | 6,025.0 | 6,713.2 | 7,102.8 | 7,785.8 | 1,185 | | 65 | Tb | 44,481.6 | 43,744.1 | 50,382 | 6,272.8 | 6,238.0 | 6,978 | 7,366.7 | 8,102 | 1,240 | | 66 | Dy | 45,998.4 | 45,207.8 | 52,119 | 6,495.2 | 6,457.7 | 7,247.7 | 7,635.7 | 8,418.8 | 1,293 | | 67 | Ho | 47,546.7 | 46,699.7 | 53,877 | 6,719.8 | 6,679.5 | 7,525.3 | 7,911 | 
8,747 | 1,348 | | 68 | Er | 49,127.7 | 48,221.1 | 55,681 | 6,948.7 | 6,905.0 | 7,810.9 | 8,189.0 | 9,089 | 1,406 | | 69 | Tm | 50,741.6 | 49,772.6 | 57,517 | 7,179.9 | 7,133.1 | 8,101 | 8,468 | 9,426 | 1,462 | | 70 | Yb | 52,388.9 | 51,354.0 | 59,370 | 7,415.6 | 7,367.3 | 8,401.8 | 8,758.8 | 9,780.1 | 1,521.4 | | 71 | Lu | 54,069.8 | 52,965.0 | 61,283 | 7,655.5 | 7,604.9 | 8,709.0 | 9,048.9 | 10,143.4 | 1,581.3 | | 72 | Hf | 55,790.2 | 54,611.4 | 63,234 | 7,899.0 | 7,844.6 | 9,022.7 | 9,347.3 | 10,515.8 | 1,644.6 | | 73 | Ta | 57,532 | 56,277 | 65,223 | 8,146.1 | 8,087.9 | 9,343.1 | 9,651.8 | 10,895.2 | 1,710 | | 74 | W | 59,318.24 | 57,981.7 | 67,244.3 | 8,397.6 | 8,335.2 | 9,672.35 | 9,961.5 | 11,285.9 | 1,775.4 | | 75 | Re | 61,140.3 | 59,717.9 | 69,310 | 8,652.5 | 8,586.2 | 10,010.0 | 10,275.2 | 11,685.4 | 1,842.5 | | 76 | Os | 63,000.5 | 61,486.7 | 71,413 | 8,911.7 | 8,841.0 | 10,355.3 | 10,598.5 | 12,095.3 | 1,910.2 | | 77 | Ir | 64,895.6 | 63,286.7 | 73,560.8 | 9,175.1 | 9,099.5 | 10,708.3 | 10,920.3 | 12,512.6 | 1,979.9 | | 78 | Pt | 66,832 | 65,112 | 75,748 | 9,442.3 | 9,361.8 | 11,070.7 | 11,250.5 | 12,942.0 | 2,050.5 | | 79 | Au | 68,803.7 | 66,989.5 | 77,984 | 9,713.3 | 9,628.0 | 11,442.3 | 11,584.7 | 13,381.7 | 2,122.9 | | 80 | Hg | 70,819 | 68,895 | 80,253 | 9,988.8 | 9,897.6 | 11,822.6 | 11,924.1 | 13,830.1 | 2,195.3 | | 81 | Tl | 72,871.5 | 70,831.9 | 82,576 | 10,268.5 | 10,172.8 | 12,213.3 | 12,271.5 | 14,291.5 | 2,270.6 | | 82 | Pb | 74,969.4 | 72,804.2 | 84,936 | 10,551.5 | 10,449.5 | 12,613.7 | 12,622.6 | 14,764.4 | 2,345.5 | | 83 | Bi | 77,107.9 | 74,814.8 | 87,343 | 10,838.8 | 10,730.91 | 13,023.5 | 12,979.9 | 15,247.7 | 2,422.6 | | 84 | Po | 79,290 | 76,862 | 89,800 | 11,130.8 | 11,015.8 | 13,447 | 13,340.4 | 15,744 | — | | 85 | At | 81,520 | 78,950 | 92,300 | 11,426.8 | 11,304.8 | 13,876 | — | 16,251 | — | | 86 | Rn | 83,780 | 81,070 | 94,870 | 11,727.0 | 11,597.9 | 14,316 | — | 16,770 | — | | 87 | Fr | 86,100 | 83,230 | 97,470 | 12,031.3 | 
11,895.0 | 14,770 | 14,450 | 17,303 | — |
| 88 | Ra | 88,470 | 85,430 | 100,130 | 12,339.7 | 12,196.2 | 15,235.8 | 14,841.4 | 17,849 | — |
| 89 | Ac | 90,884 | 87,670 | 102,850 | 12,652.0 | 12,500.8 | 15,713 | — | 18,408 | — |
| 90 | Th | 93,350 | 89,953 | 105,609 | 12,968.7 | 12,809.6 | 16,202.2 | 15,623.7 | 18,982.5 | 2,996.1 |
| 91 | Pa | 95,868 | 92,287 | 108,427 | 13,290.7 | 13,122.2 | 16,702 | 16,024 | 19,568 | 3,082.3 |
| 92 | U | 98,439 | 94,665 | 111,300 | 13,614.7 | 13,438.8 | 17,220.0 | 16,428.3 | 20,167.1 | 3,170.8 |
| 93 | Np | — | — | — | 13,944.1 | 13,759.7 | 17,750.2 | 16,840.0 | 20,784.8 | — |
| 94 | Pu | — | — | — | 14,278.6 | 14,084.2 | 18,293.7 | 17,255.3 | 21,417.3 | — |
| 95 | Am | — | — | — | 14,617.2 | 14,411.9 | 18,852.0 | 17,676.5 | 22,065.2 | — |
|----+---------+-----------+-----------+----------+----------+-----------+----------+----------+----------+---------|

#+CAPTION: Electron binding energies of all elements up to uranium in \si{eV}.
#+CAPTION: Taken from the X-ray data book cite:williams2001x,
#+CAPTION: specifically https://xdb.lbl.gov/Section1/Table_1-1.pdf.
#+NAME: tab_all_atomic_binding_energies |----+---------+----------+----------+----------+----------+---------+----------+----------+----------+----------+--------+----------+----------| | Z | Element | K 1s | L1 2s | L2 2p1/2 | L3 2p3/2 | M1 3s | M2 3p1/2 | M3 3p3/2 | M4 3d3/2 | M5 3d5/2 | N1 4s | N2 4p1/2 | N3 4p3/2 | |----+---------+----------+----------+----------+----------+---------+----------+----------+----------+----------+--------+----------+----------| | 1 | H | 13.6 | | | | | | | | | | | | | 2 | He | 24.6* | | | | | | | | | | | | | 3 | Li | 54.7* | | | | | | | | | | | | | 4 | Be | 111.5* | | | | | | | | | | | | | 5 | B | 188* | | | | | | | | | | | | | 6 | C | 284.2* | | | | | | | | | | | | | 7 | N | 409.9* | 37.3* | | | | | | | | | | | | 8 | O | 543.1* | 41.6* | | | | | | | | | | | | 9 | F | 696.7* | | | | | | | | | | | | | 10 | Ne | 870.2* | 48.5* | 21.7* | 21.6* | | | | | | | | | | 11 | Na | 1070.8† | 63.5† | 30.65 | 30.81 | | | | | | | | | | 12 | Mg | 1303.0† | 88.7 | 49.78 | 49.50 | | | | | | | | | | 13 | Al | 1559.6 | 117.8 | 72.95 | 72.55 | | | | | | | | | | 14 | Si | 1839 | 149.7*b | 99.82 | 99.42 | | | | | | | | | | 15 | P | 2145.5 | 189* | 136* | 135* | | | | | | | | | | 16 | S | 2472 | 230.9 | 163.6* | 162.5* | | | | | | | | | | 17 | Cl | 2822.4 | 270* | 202* | 200* | | | | | | | | | | 18 | Ar | 3205.9* | 326.3* | 250.6† | 248.4* | 29.3* | 15.9* | 15.7* | | | | | | | 19 | K | 3608.4* | 378.6* | 297.3* | 294.6* | 34.8* | 18.3* | 18.3* | | | | | | | 20 | Ca | 4038.5* | 438.4† | 349.7† | 346.2† | 44.3 | † | 25.4† | 25.4† | | | | | | 21 | Sc | 4492 | 498.0* | 403.6* | 398.7* | 51.1* | 28.3* | 28.3* | | | | | | | 22 | Ti | 4966 | 560.9† | 460.2† | 453.8† | 58.7† | 32.6† | 32.6† | | | | | | | 23 | V | 5465 | 626.7† | 519.8† | 512.1† | 66.3† | 37.2† | 37.2† | | | | | | | 24 | Cr | 5989 | 696.0† | 583.8† | 574.1† | 74.1† | 42.2† | 42.2† | | | | | | | 25 | Mn | 6539 | 769.1† | 649.9† | 638.7† | 82.3† | 47.2† | 47.2† | | | | | | | 26 | Fe | 7112 | 844.6† 
| 719.9† | 706.8† | 91.3† | 52.7† | 52.7† | | | | | | | 27 | Co | 7709 | 925.1† | 793.2† | 778.1† | 101.0† | 58.9† | 59.9† | | | | | | | 28 | Ni | 8333 | 1008.6† | 870.0† | 852.7† | 110.8† | 68.0† | 66.2† | | | | | | | 29 | Cu | 8979 | 1096.7† | 952.3† | 932.7 | 122.5† | 77.3† | 75.1† | | | | | | | 30 | Zn | 9659 | 1196.2* | 1044.9* | 1021.8* | 139.8* | 91.4* | 88.6* | 10.2* | 10.1* | | | | | 31 | Ga | 10367 | 1299.0*b | 1143.2† | 1116.4† | 159.5† | 103.5† | 100.0† | 18.7† | 18.7† | | | | | 32 | Ge | 11103 | 1414.6*b | 1248.1*b | 1217.0*b | 180.1* | 124.9* | 120.8* | 29.8 | 29.2 | | | | | 33 | As | 11867 | 1527.0*b | 1359.1*b | 1323.6*b | 204.7* | 146.2* | 141.2* | 41.7* | 41.7* | | | | | 34 | Se | 12658 | 1652.0*b | 1474.3*b | 1433.9*b | 229.6* | 166.5* | 160.7* | 55.5* | 54.6* | | | | | 35 | Br | 13474 | 1782* | 1596* | 1550* | 257* | 189* | 182* | 70* | 69* | | | | | 36 | Kr | 14326 | 1921 | 1730.9* | 1678.4* | 292.8* | 222.2* | 214.4 | 95.0* | 93.8* | 27.5* | 14.1* | 14.1* | | 37 | Rb | 15200 | 2065 | 1864 | 1804 | 326.7* | 248.7* | 239.1* | 113.0* | 112* | 30.5* | 16.3* | 15.3* | | 38 | Sr | 16105 | 2216 | 2007 | 1940 | 358.7† | 280.3† | 270.0† | 136.0† | 134.2† | 38.9† | 21.3 | 20.1† | | 39 | Y | 17038 | 2373 | 2156 | 2080 | 392.0*b | 310.6* | 298.8* | 157.7† | 155.8† | 43.8* | 24.4* | 23.1* | | 40 | Zr | 17998 | 2532 | 2307 | 2223 | 430.3† | 343.5† | 329.8† | 181.1† | 178.8† | 50.6† | 28.5† | 27.1† | | 41 | Nb | 18986 | 2698 | 2465 | 2371 | 466.6† | 376.1† | 360.6† | 205.0† | 202.3† | 56.4† | 32.6† | 30.8† | | 42 | Mo | 20000 | 2866 | 2625 | 2520 | 506.3† | 411.6† | 394.0† | 231.1† | 227.9† | 63.2† | 37.6† | 35.5† | | 43 | Tc | 21044 | 3043 | 2793 | 2677 | 544* | 447.6 | 417.7 | 257.6 | 253.9* | 69.5* | 42.3* | 39.9* | | 44 | Ru | 22117 | 3224 | 2967 | 2838 | 586.1* | 483.5† | 461.4† | 284.2† | 280.0† | 75.0† | 46.3† | 43.2† | | 45 | Rh | 23220 | 3412 | 3146 | 3004 | 628.1† | 521.3† | 496.5† | 311.9† | 307.2† | 81.4*b | 50.5† | 47.3† | | 46 | Pd | 24350 | 
3604 | 3330 | 3173 | 671.6† | 559.9† | 532.3† | 340.5† | 335.2† | 87.1*b | 55.7†a | 50.9† | | 47 | Ag | 25514 | 3806 | 3524 | 3351 | 719.0† | 603.8† | 573.0† | 374.0† | 368.3 | 97.0† | 63.7† | 58.3† | |----+---------+----------+----------+----------+----------+---------+----------+----------+----------+----------+--------+----------+----------| | Z | Element | K 1s | L1 2s | L2 2p1/2 | L3 2p3/2 | M1 3s | M2 3p1/2 | M3 3p3/2 | M4 3d3/2 | M5 3d5/2 | N 4s | N2 4p1/2 | N3 4p3/2 | |----+---------+----------+----------+----------+----------+---------+----------+----------+----------+----------+--------+----------+----------| | 48 | Cd | 26711 | 4018 | 3727 | 3538 | 772.0† | 652.6† | 618.4† | 411.9† | 405.2† | 109.8† | 63.9†a | 63.9†a | | 49 | In | 27940 | 4238 | 3938 | 3730 | 827.2† | 703.2† | 665.3† | 451.4† | 443.9† | 122.9† | 73.5†a | 73.5†a | | 50 | Sn | 29200 | 4465 | 4156 | 3929 | 884.7† | 756.5† | 714.6† | 493.2† | 484.9† | 137.1† | 83.6†a | 83.6†a | | 51 | Sb | 30491 | 4698 | 4380 | 4132 | 946† | 812.7† | 766.4† | 537.5† | 528.2† | 153.2† | 95.6†a | 95.6†a | | 52 | Te | 31814 | 4939 | 4612 | 4341 | 1006† | 870.8† | 820.0† | 583.4† | 573.0† | 169.4† | 103.3†a | 103.3†a | | 53 | I | 33169 | 5188 | 4852 | 4557 | 1072* | 931* | 875* | 630.8 | 619.3 | 186* | 123* | 123* | | 54 | Xe | 34561 | 5453 | 5107 | 4786 | 1148.7* | 1002.1* | 940.6* | 689.0* | 676.4* | 213.2* | 146.7 | 145.5* | | 55 | Cs | 35985 | 5714 | 5359 | 5012 | 1211*b | 1071* | 1003* | 740.5* | 726.6* | 232.3* | 172.4* | 161.3* | | 56 | Ba | 37441 | 5989 | 5624 | 5247 | 1293*b | 1137*b | 1063*b | 795.7† | 780.5* | 253.5† | 192 | 178.6† | | 57 | La | 38925 | 6266 | 5891 | 5483 | 1362*b | 1209*b | 1128*b | 853* | 836* | 274.7* | 205.8 | 196.0* | | 58 | Ce | 40443 | 6549 | 6164 | 5723 | 1436*b | 1274*b | 1187*b | 902.4* | 883.8* | 291.0* | 223.2 | 206.5* | | 59 | Pr | 41991 | 6835 | 6440 | 5964 | 1511 | 1337 | 1242 | 948.3* | 928.8* | 304.5 | 236.3 | 217.6 | | 60 | Nd | 43569 | 7126 | 6722 | 6208 | 1575 | 
1403 | 1297 | 1003.3* | 980.4* | 319.2* | 243.3 | 224.6 | | 61 | Pm | 45184 | 7428 | 7013 | 6459 | --- | 1471 | 1357 | 1052 | 1027 | --- | 242 | 242 | | 62 | Sm | 46834 | 7737 | 7312 | 6716 | 1723 | 1541 | 1420 | 1110.9* | 1083.4* | 347.2* | 265.6 | 247.4 | | 63 | Eu | 48519 | 8052 | 7617 | 6977 | 1800 | 1614 | 1481 | 1158.6* | 1127.5* | 360 | 284 | 257 | | 64 | Gd | 50239 | 8376 | 7930 | 7243 | 1881 | 1688 | 1544 | 1221.9* | 1189.6* | 378.6* | 286 | 271 | | 65 | Tb | 51996 | 8708 | 8252 | 7514 | 1968 | 1768 | 1611 | 1276.9* | 1241.1* | 396.0* | 322.4* | 284.1* | | 66 | Dy | 53789 | 9046 | 8581 | 7790 | 2047 | 1842 | 1676 | 1333 | 1292.6* | 414.2* | 333.5* | 293.2* | | 67 | Ho | 55618 | 9394 | 8918 | 8071 | 2128 | 1923 | 1741 | 1392 | 1351 | 432.4* | 343.5 | 308.2* | | 68 | Er | 57486 | 9751 | 9264 | 8358 | 2207 | 2006 | 1812 | 1453 | 1409 | 449.8* | 366.2 | 320.2* | | 69 | Tm | 59390 | 10116 | 9617 | 8648 | 2307 | 2090 | 1885 | 1515 | 1468 | 470.9* | 385.9* | 332.6* | | 70 | Yb | 61332 | 10486 | 9978 | 8944 | 2398 | 2173 | 1950 | 1576 | 1528 | 480.5* | 388.7* | 339.7* | |----+---------+----------+----------+----------+----------+---------+----------+----------+----------+----------+--------+----------+----------| | Z | Element | N4 4d3/2 | N5 4d5/2 | N6 4f5/2 | N7 4f7/2 | O1 5s | O2 5p1/2 | O3 5p3/2 | O4 5d3/2 | O5 5d5/2 | P1 6s | P2 6p1/2 | P3 6p3/2 | |----+---------+----------+----------+----------+----------+---------+----------+----------+----------+----------+--------+----------+----------| | 48 | Cd | 11.7† | l0.7† | | | | | | | | | | | | 49 | In | 17.7† | 16.9† | | | | | | | | | | | | 50 | Sn | 24.9† | 23.9† | | | | | | | | | | | | 51 | Sb | 33.3† | 32.1† | | | | | | | | | | | | 52 | Te | 41.9† | 40.4† | | | | | | | | | | | | 53 | I | 50.6 | 48.9 | | | | | | | | | | | | 54 | Xe | 69.5* | 67.5* | --- | --- | 23.3* | 13.4* | 12.1* | | | | | | | 55 | Cs | 79.8* | 77.5* | --- | --- | 22.7 | 14.2* | 12.1* | | | | | | | 56 | Ba | 92.6† | 89.9† | --- | --- | 30.3† 
| 17.0† | 14.8† | | | | | | | 57 | La | 105.3* | 102.5* | --- | --- | 34.3* | 19.3* | 16.8* | | | | | | | 58 | Ce | 109* | --- | 0.1 | 0.1 | 37.8 | 19.8* | 17.0* | | | | | | | 59 | Pr | 115.1* | 115.1* | 2.0 | 2.0 | 37.4 | 22.3 | 22.3 | | | | | | | 60 | Nd | 120.5* | 120.5* | 1.5 | 1.5 | 37.5 | 21.1 | 21.1 | | | | | | | 61 | Pm | 120 | 120 | --- | --- | --- | --- | --- | | | | | | | 62 | Sm | 129 | 129 | 5.2 | 5.2 | 37.4 | 21.3 | 21.3 | | | | | | | 63 | Eu | 133 | 127.7* | 0 | 0 | 32 | 22 | 22 | | | | | | | 64 | Gd | --- | 142.6* | 8.6* | 8.6* | 36 | 28 | 21 | | | | | | | 65 | Tb | 150.5* | 150.5* | 7.7* | 2.4* | 45.6* | 28.7* | 22.6* | | | | | | | 66 | Dy | 153.6* | 153.6* | 8.0* | 4.3* | 49.9* | 26.3 | 26.3 | | | | | | | 67 | Ho | 160* | 160* | 8.6* | 5.2* | 49.3* | 30.8* | 24.1* | | | | | | | 68 | Er | 167.6* | 167.6* | --- | 4.7* | 50.6* | 31.4* | 24.7* | | | | | | | 69 | Tm | 175.5* | 175.5* | --- | 4.6 | 54.7* | 31.8* | 25.0* | | | | | | | 70 | Yb | 191.2* | 182.4* | 2.5* | 1.3* | 52.0* | 30.3* | 24.1* | | | | | | |----+---------+----------+----------+----------+----------+---------+----------+----------+----------+----------+--------+----------+----------| | Z | Element | K 1s | L1 2s | L2 2p1/2 | L3 2p3/2 | M1 3s | M2 3p1/2 | M3 3p3/2 | M4 3d3/2 | M5 3d5/2 | N1 4s | N2 4p1/2 | N3 4p3/2 | |----+---------+----------+----------+----------+----------+---------+----------+----------+----------+----------+--------+----------+----------| | 71 | Lu | 63314 | 10870 | 10349 | 9244 | 2491 | 2264 | 2024 | 1639 | 1589 | 506.8* | 412.4* | 359.2* | | 72 | Hf | 65351 | 11271 | 10739 | 9561 | 2601 | 2365 | 2108 | 1716 | 1662 | 538* | 438.2† | 380.7† | | 73 | Ta | 67416 | 11682 | 11136 | 9881 | 2708 | 2469 | 2194 | 1793 | 1735 | 563.4† | 463.4† | 400.9† | | 74 | W | 69525 | 12100 | 11544 | 10207 | 2820 | 2575 | 2281 | 1872 | 1809 | 594.1† | 490.4† | 423.6† | | 75 | Re | 71676 | 12527 | 11959 | 10535 | 2932 | 2682 | 2367 | 1949 | 1883 | 625.4† | 518.7† | 446.8† | | 76 | Os | 
73871 | 12968 | 12385 | 10871 | 3049 | 2792 | 2457 | 2031 | 1960 | 658.2† | 549.1† | 470.7† | | 77 | Ir | 76111 | 13419 | 12824 | 11215 | 3174 | 2909 | 2551 | 2116 | 2040 | 691.1† | 577.8† | 495.8† | | 78 | Pt | 78395 | 13880 | 13273 | 11564 | 3296 | 3027 | 2645 | 2202 | 2122 | 725.4† | 609.1† | 519.4† | | 79 | Au | 80725 | 14353 | 13734 | 11919 | 3425 | 3148 | 2743 | 2291 | 2206 | 762.1† | 642.7† | 546.3† | | 80 | Hg | 83102 | 14839 | 14209 | 12284 | 3562 | 3279 | 2847 | 2385 | 2295 | 802.2† | 680.2† | 576.6† | | 81 | Tl | 85530 | 15347 | 14698 | 12658 | 3704 | 3416 | 2957 | 2485 | 2389 | 846.2† | 720.5† | 609.5† | | 82 | Pb | 88005 | 15861 | 15200 | 13035 | 3851 | 3554 | 3066 | 2586 | 2484 | 891.8† | 761.9† | 643.5† | | 83 | Bi | 90524 | 16388 | 15711 | 13419 | 3999 | 3696 | 3177 | 2688 | 2580 | 939† | 805.2† | 678.8† | | 84 | Po | 93105 | 16939 | 16244 | 13814 | 4149 | 3854 | 3302 | 2798 | 2683 | 995* | 851* | 705* | | 85 | At | 95730 | 17493 | 16785 | 14214 | 4317 | 4008 | 3426 | 2909 | 2787 | 1042* | 886* | 740* | | 86 | Rn | 98404 | 18049 | 17337 | 14619 | 4482 | 4159 | 3538 | 3022 | 2892 | 1097* | 929* | 768* | | 87 | Fr | 101137 | 18639 | 17907 | 15031 | 4652 | 4327 | 3663 | 3136 | 3000 | 1153* | 980* | 810* | | 88 | Ra | 103922 | 19237 | 18484 | 15444 | 4822 | 4490 | 3792 | 3248 | 3105 | 1208* | 1058 | 879* | | 89 | Ac | 106755 | 19840 | 19083 | 15871 | 5002 | 4656 | 3909 | 3370 | 3219 | 1269* | 1080* | 890* | | 90 | Th | 109651 | 20472 | 19693 | 16300 | 5182 | 4830 | 4046 | 3491 | 3332 | 1330* | 1168* | 966.4† | | 91 | Pa | 112601 | 21105 | 20314 | 16733 | 5367 | 5001 | 4174 | 3611 | 3442 | 1387* | 1224* | 1007* | | 92 | U | 115606 | 21757 | 20948 | 17166 | 5548 | 5182 | 4303 | 3728 | 3552 | 1439*b | 1271*b | 1043† | |----+---------+----------+----------+----------+----------+---------+----------+----------+----------+----------+--------+----------+----------| | Z | Element | N4 4d3/2 | N5 4d5/2 | N6 4f5/2 | N7 4f7/2 | O1 5s | O2 5p1/2 | O3 5p3/2 | O4 
5d3/2 | O5 5d5/2 | P1 6s | P2 6p1/2 | P3 6p3/2 | |----+---------+----------+----------+----------+----------+---------+----------+----------+----------+----------+--------+----------+----------| | 71 | Lu | 206.1* | 196.3* | 8.9* | 7.5* | 57.3* | 33.6* | 26.7* | | | | | | | 72 | Hf | 220.0† | 211.5† | 15.9† | 14.2† | 64.2† | 38* | 29.9† | | | | | | | 73 | Ta | 237.9† | 226.4† | 23.5† | 21.6† | 69.7† | 42.2* | 32.7† | | | | | | | 74 | W | 255.9† | 243.5† | 33.6* | 31.4† | 75.6† | 45.3*b | 36.8† | | | | | | | 75 | Re | 273.9† | 260.5† | 42.9* | 40.5* | 83† | 45.6* | 34.6*b | | | | | | | 76 | Os | 293.1† | 278.5† | 53.4† | 50.7† | 84* | 58* | 44.5† | | | | | | | 77 | Ir | 311.9† | 296.3† | 63.8† | 60.8† | 95.2*b | 63.0*b | 48.0† | | | | | | | 78 | Pt | 331.6† | 314.6† | 74.5† | 71.2† | 101.7*b | 65.3*b | 51.7† | | | | | | | 79 | Au | 353.2† | 335.1† | 87.6† | 84.0 | 107.2*b | 74.2† | 57.2† | | | | | | | 80 | Hg | 378.2† | 358.8† | 104.0† | 99.9† | 127† | 83.1† | 64.5† | 9.6† | 7.8† | | | | | 81 | Tl | 405.7† | 385.0† | 122.2† | 117.8† | 136.0*b | 94.6† | 73.5† | 14.7† | 12.5† | | | | | 82 | Pb | 434.3† | 412.2† | 141.7† | 136.9† | 147*b | 106.4† | 83.3† | 20.7† | 18.1† | | | | | 83 | Bi | 464.0† | 440.1† | 162.3† | 157.0† | 159.3*b | 119.0† | 92.6† | 26.9† | 23.8† | | | | | 84 | Po | 500* | 473* | 184* | 184* | 177* | 132* | 104* | 31* | 31* | | | | | 85 | At | 533* | 507 | 210* | 210* | 195* | 148* | 115* | 40* | 40* | | | | | 86 | Rn | 567* | 541* | 238* | 238* | 214* | 164* | 127* | 48* | 48* | 26 | | | | 87 | Fr | 603* | 577* | 268* | 268* | 234* | 182* | 140* | 58* | 58* | 34 | 15 | 15 | | 88 | Ra | 636* | 603* | 299* | 299* | 254* | 200* | 153* | 68* | 68* | 44 | 19 | 19 | | 89 | Ac | 675* | 639* | 319* | 319* | 272* | 215* | 167* | 80* | 80* | --- | --- | --- | | 90 | Th | 712.1† | 675.2† | 342.4† | 333.1† | 290*a | 229*a | 182*a | 92.5† | 85.4† | 41.4† | 24.5† | 16.6† | | 91 | Pa | 743* | 708* | 371* | 360* | 310* | 232* | 232* | 94* | 94* | --- | --- | --- | 
| 92 | U | 778.3† | 736.2† | 388.2* | 377.4† | 321*ab | 257*ab | 192*ab | 102.8† | 94.2† | 43.9† | 26.8† | 16.8† |
|----+---------+----------+----------+----------+----------+---------+----------+----------+----------+----------+--------+----------+----------|

*** Bremsstrahlung :noexport:

Talk about Bremsstrahlung as a requirement for the CDL data?

Well yes, but that is really not of any importance for my work on the
thesis. In particular the X-ray fluorescence chapter is pretty much
that already.

** Cosmic rays
:PROPERTIES:
:CUSTOM_ID: sec:theory:cosmic_radiation
:END:

Cosmic rays, or cosmic radiation, refer to two aspects of a related
phenomenon. Primary cosmic radiation is the radiation arriving at
Earth from the Sun, galactic and extragalactic sources. The main
contribution comes from highly energetic protons, but other long-lived
elementary particles and nuclei also contribute to a lesser
extent. Cosmic rays are isotropic at most energies, because of the
influence of galactic magnetic fields. Their energies range from
$\SI{1e9}{eV}$ up to $\SI{1e21}{eV}$. It is generally assumed that
particles below $\SI{1e18}{eV}$ are of mainly galactic origin, whereas
the region above is dominated by extragalactic sources. The flux of
the primary cosmic rays generally follows a power law
distribution. Different contributions follow a generally similar power
law (also see cite:Zyla:2020zbs, chapter on cosmic rays).

When cosmic rays interact with the molecules of Earth's atmosphere,
mesons are produced, mainly pions. Neutral pions generate showers of
photons and electron-positron pairs. Charged pions on the other hand
decay into muons and muon (anti-)neutrinos. The decay into muons
rather than electrons is strongly preferred due to helicity
suppression: in the decay of the spin-0 pion, angular momentum
conservation forces the charged lepton into the helicity state
opposite to the chirality the weak interaction couples to. The
amplitude for this 'forbidden' configuration scales with the lepton
mass, favoring the more massive muon. Muons are produced at an
altitude of roughly $\SI{15}{km}$.
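A quick back-of-the-envelope check illustrates why relativistic time
dilation is essential for these muons to reach the ground (a Python
sketch, independent of the Nim code below; only the production
altitude of $\SI{15}{km}$ and the muon mean lifetime $τ ≈
\SI{2.197}{μs}$ go in, the choice $γ = 20$ is an illustrative
assumption):

```python
import math

c = 299_792_458.0    # speed of light [m/s]
tau = 2.1969811e-6   # muon mean lifetime [s]
h = 15_000.0         # production altitude [m]

ctau = c * tau       # naive decay length, about 659 m
naive_survival = math.exp(-h / ctau)             # ignoring time dilation
gamma = 20.0         # assumed: a muon of roughly 2.1 GeV total energy
dilated_survival = math.exp(-h / (gamma * ctau))

print(f"cτ = {ctau:.0f} m")
print(f"survival without dilation: {naive_survival:.1e}")
print(f"survival at γ = 20:        {dilated_survival:.2f}")
```

Without time dilation essentially no muon would survive the
$\SI{15}{km}$ (a suppression of roughly $(1/e)^{23}$), whereas a $γ ≈
20$ muon already reaches the surface with about $\SI{30}{\percent}$
probability.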
A large fraction of them reaches the surface as they are highly
relativistic. Their spectrum is described by a convolution of the
production energy, their energy loss due to ionization in the
atmosphere and possible decay.

Muons are of interest in the context of helioscope experiments, as
they present a dominant source of background, especially in gaseous
detectors (directly and indirectly due to fluorescence). Because
current helioscope experiments are built near Earth's surface, the
muon flux reaches them with little attenuation. Therefore, a good
understanding of the expected muon flux is required.

Above $\SI{100}{GeV}$ muon decay is negligible. At those energies the
muon flux at the surface strictly follows the same power law as the
primary cosmic ray flux. Following cite:gaisser2016cosmic, in this
regime it can be described by
#+NAME: eq:theory:muon_flux_gaisser
\begin{equation}
\frac{\mathrm{d}N_μ}{\mathrm{d}E_μ \mathrm{d}Ω} \approx \frac{0.14 E_μ^{-2.7}}{\si{\centi\meter\squared \second \steradian \giga\electronvolt}} \times \left[ \frac{1}{1 + \frac{1.1 E_μ \cos ϑ}{\SI{115}{GeV}}} + \frac{0.054}{1 + \frac{1.1 E_μ \cos ϑ}{\SI{850}{GeV}}} \right]
\end{equation}
where the first term in brackets is the pion contribution and the
second the kaon contribution. For lower energies, the authors of
cite:doi:10.1142/S0217751X18501750 provide a set of fitted functions
based on [[eq:theory:muon_flux_gaisser]] with a single power law
\[
I(E, θ) = I_0 N (E_0 + E)^{-n} \left(1 + \frac{E}{ε}\right)^{-1} D(θ)^{-(n - 1)},
\]
where $I(E, θ)$ is the intensity under zenith angle $θ$, $I_0$ its
overall normalization, and $ε$ is a fit parameter replacing the
separate meson contributions of
eq. [[eq:theory:muon_flux_gaisser]]. $D(θ)$ is the path length through
the atmosphere under an angle $θ$ from the zenith. $N$ is a
normalization constant given by
\[
N = (n - 1) (E_0 + E_c)^{n-1},
\]
where $n$ corresponds to the effective power of the cosine behavior
and is the final fit parameter.
$E_0$ accounts for the energy loss due to interactions in the
atmosphere and $E_c$ is the lowest energy given in a data set. If the
Earth is assumed flat, it is $D(θ) = 1/\cos θ$ (which is often assumed
for simplicity and is a reasonable approximation as long as only
angles close to $θ = 0$ are considered). To describe a trajectory
through Earth's curved atmosphere however, $D(θ)$ can be modified to:
\[
D(θ) = \sqrt{ \frac{R^2}{d^2} \cos^2 θ + 2\frac{R}{d} + 1 } - \frac{R}{d}\cos θ
\]
where $R$ is the Earth radius, $d$ the vertical path length (i.e. the
height at which the muon is created) and $θ$ the zenith angle. The
difference matters at large angles: for $R/d = 174$ and $θ =
\SI{88}{°}$ the curved expression yields $D ≈ 13.6$, whereas the flat
approximation gives $1/\cos θ ≈ 28.7$.

While this parametrization is very useful to describe the few specific
datasets shown in cite:doi:10.1142/S0217751X18501750 and provides a
way to fit any measured muon flux at a specific location, it is
limited in applicability to arbitrary locations, altitudes and
angles. For that an approach that does not require a fit to a dataset
is preferable, namely utilizing a combination of the approximation by
Gaisser, eq. [[eq:theory:muon_flux_gaisser]], and the interaction of
muons with the atmosphere. As such, we modify the equation for the
intensity $I$ to the following:
\[
I(E, θ) = I_0 (n-1) (E_θ(E, θ) + E_c)^{n-1} (E_θ(E, θ) + E)^{-n} \left[ \frac{1}{1 + \frac{1.1 E \cos θ}{\SI{115}{GeV}}} + \frac{0.054}{1 + \frac{1.1 E \cos θ}{\SI{850}{GeV}}} \right] D(θ)^{-(n - 1)},
\]
where we take $n = 3$ exactly. One could put in the best fit for the
general cosine behavior under zenith angles, $n = n_{\text{fit}} + 1$,
but for simplicity we just use $3$ here. Here $E_θ(E, θ)$ is the
energy left of a muon of initial energy $E$ at generation in the upper
atmosphere, after transporting it through the atmosphere under the
angle $θ$. The transport must take into account the density change
using the barometric height formula of the atmosphere. Transport is
done using the Bethe-Bloch equation as introduced in sec.
[[#sec:theory:bethe_bloch]] assuming an atmosphere made up of
nitrogen, oxygen and argon. As such we remove all parameters except an
initial intensity $I_0$, which can be set to the best fit of the
integrated muon flux at the zenith angle at sea level. In the
following figures we simply use $I_0 = \SI{90}{\per\meter\squared \per\steradian \per\second}$.

Figure [[sref:fig:theory:muon_flux_surface]] shows the expected
differential muon flux using these parameters for different angles at
sea level. In sref:fig:theory:muon_flux_surface:initial the initial
energy of the muons is shown before transporting through the
atmosphere. For each angle the lines cut off below the energy at which
a muon would likely either be stopped by the atmosphere, according to
its energy loss per distance, or reach the surface with less than
$\SI{300}{MeV}$. On the right we see the final energy of the same
muons at the surface. The lines in
sref:fig:theory:muon_flux_surface:final are ragged, because muon decay
is simulated using Monte Carlo. [fn:note_on_flux]

These numbers match reasonably well with different datasets for
different locations under different angles, but they should /not/ be
considered as more than a starting point for a general
expectation. However, they are still useful as a reference to consider
when evaluating muon fluxes under different angles at CAST.

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "Initial energy")
  (label "fig:theory:muon_flux_surface:initial")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/muons/initial_energy_vs_flux_and_angle_cosmic_muons.pdf"))
 (subfigure (linewidth 0.5)
  (caption "Final energy")
  (label "fig:theory:muon_flux_surface:final")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/muons/final_energy_vs_flux_and_angle_cosmic_muons.pdf"))
 (caption "Differential muon flux at sea level for different zenith angles.
" (subref "fig:theory:muon_flux_surface:initial") " shows the initial
energy of the muon. The cutoff corresponds to the lowest energy
transported through the atmosphere; muons still arrive at the surface
without decay or stopping. " (subref
"fig:theory:muon_flux_surface:final") " shows the final muon energy at
the surface, with the lowest muons arriving with " ($ (SI 300 "MeV"))
" at the surface. ")
 (label "fig:theory:muon_flux_surface"))
#+end_src

[fn:note_on_flux] The calculation is a "hybrid" Monte Carlo
approach. We don't sample a large number of muons and transport them
through the atmosphere. We just compute the loss through the
atmosphere and allow each muon to decay randomly, in which case the
corresponding data point is dropped. This is for simplicity on the one
hand and due to the fact that there is a very sharp transition from
'most muons traverse' to 'essentially no muons traverse' the
atmosphere at a given energy. For that reason the flux in
sref:fig:theory:muon_flux_surface:final at low energies is to be taken
with a grain of salt. The fluxes are literally mappings from data
points in sref:fig:theory:muon_flux_surface:initial to the
corresponding energy left at the surface (i.e. assuming no muons
decay). Furthermore, the $\SI{300}{MeV}$ energy left is already well
inside the energy range where a large number of muons would already
decay.

*** TODOs for this section [/] :noexport:

- [ ] *where we take n = 3*
  -> WHERE do we actually use n = 3 ???
- [ ] *FLAT EARTH*
  -> Not say Earth assumed flat? Lucian mentions jokingly the sentence
  is dangerous.
- [ ] *THINK ABOUT PUTTING CALCULATION INTO APPENDIX*

Because of their large energies, muons behave as minimum ionizing
particles (MIPs), which means their mean energy loss is more or less
independent of the muon's energy. They are in the trough of the
Bethe-Bloch equation, see sec. [[#sec:theory:bethe_bloch]]. This means
the exact energy of each muon is irrelevant and for background studies
only the actual rate of muons is important.
*PLOT OF PRIMARY RADIATION*
*SECONDARY MUON PRODUCTION*
- [X] *MUON FLUX AT ZENITH*, *MUON FLUX UNDER ANGLE* (cos²)
- [X] *REFERENCE TO MUON CHIRALITY OVER ELECTRON PROD*
  -> Would need a literate rewrite, but maybe at least plots?

*** Calculation of muon angular / energy dependence at surface :extended:
:PROPERTIES:
:CUSTOM_ID: sec:theory:calc_muon_angular_flux
:END:

The code here is directly based on code written in my notes
[[file:~/org/Doc/StatusAndProgress.org::#sec:muons:expected_muon_flux]]
being tangled into a file =/tmp/muon_flux.nim=.

Aside from the figures included in the main thesis, this also produces
fig. [[fig:theory:flux_at_88deg_CAST]].

#+CAPTION: Expected muon flux under an angle of $\SI{88}{°}$ at CAST.
#+NAME: fig:theory:flux_at_88deg_CAST
[[~/phd/Figs/muons/flux_at_cast_88_deg.pdf]]

#+begin_src nim :results silent :tangle /home/basti/phd/code/muons.nim
import math, macros, unchained
import seqmath, ggplotnim, sequtils, strformat

let K = 4 * π * N_A * r_e^2 * m_e * c^2 # usually in: [MeV mol⁻¹ cm²]

defUnit(cm³•g⁻¹)
defUnit(J•m⁻¹)
defUnit(cm⁻³)
defUnit(g•mol⁻¹)
defUnit(MeV•g⁻¹•cm²)
defUnit(mol⁻¹)
defUnit(keV•cm⁻¹)
defUnit(g•cm⁻³)
defUnit(g•cm⁻²)

proc electronDensity(ρ: g•cm⁻³, Z, A: UnitLess): cm⁻³ =
  result = N_A * Z * ρ / (A * M_u.to(g•mol⁻¹))

proc I[T](z: float): T =
  ## use Bloch approximation for all but Argon (better use tabulated values!)
  # 188.0 eV from NIST table
  result = if z == 18.0: 188.0.eV.to(T)
           else: (10.eV * z).to(T)

proc calcβ(γ: UnitLess): UnitLess =
  result = sqrt(1.0 - 1.0 / (γ^2))

proc betheBloch(z, Z: UnitLess, A: g•mol⁻¹, γ: UnitLess, M: kg): MeV•g⁻¹•cm² =
  ## result in MeV cm² g⁻¹ (normalized by density)
  ## z: charge of particle
  ## Z: charge of particles making up medium
  ## A: atomic mass of particles making up medium
  ## γ: Lorentz factor of particle
  ## M: mass of particle in MeV (or same mass as `m_e` defined as)
  let β = calcβ(γ)
  let W_max = 2 * m_e * c^2 * β^2 * γ^2 / (1 + 2 * γ * m_e / M + (m_e / M)^2)
  let lnArg = 2 * m_e * c^2 * β^2 * γ^2 * W_max / (I[Joule](Z)^2)
  result = (K * z^2 * Z / A * 1.0 / (β^2) * (
    0.5 * ln(lnArg) - β^2
  )).to(MeV•g⁻¹•cm²)

proc mostProbableLoss(z, Z: UnitLess, A: g•mol⁻¹, γ: UnitLess, x: g•cm⁻²): keV =
  ## Computes the most probable value, corresponding to the peak of the Landau
  ## distribution, that gives rise to the Bethe-Bloch formula.
  ##
  ## Taken from PDG chapter 'Passage of particles through matter' (equation
  ## `34.12` in 'Fluctuations in energy loss', version 2020).
  ##
  ## `x` is the "thickness". Density times length, `x = ρ * d`. The other parameters
  ## are as in `betheBloch` above.
  let β = calcβ(γ)
  let ξ = K / 2.0 * Z / A * z*z * (x / (β*β))
  const j = 0.200
  let I = I[Joule](Z)
  result = (ξ * (ln((2 * m_e * c^2 * β^2 * γ^2).to(Joule) / I) +
                 ln(ξ.to(Joule) / I) +
                 j - β^2)).to(keV) # - δ*(β*γ)

proc density(p: mbar, M: g•mol⁻¹, temp: Kelvin): g•cm⁻³ =
  ## returns the density of the gas for the given pressure.
  ## The pressure is assumed in `mbar` and the temperature (in `K`).
  ## The default temperature corresponds to BabyIAXO aim.
  ## Returns the density in `g / cm^3`
  let gasConstant = 8.314.J•K⁻¹•mol⁻¹ # joule K^-1 mol^-1
  let pressure = p.to(Pa) # pressure in Pa
  # factor 1000 for conversion of M in g / mol to kg / mol
  result = (pressure * M / (gasConstant * temp)).to(g•cm⁻³)

proc E_to_γ(E: GeV): UnitLess =
  result = E.to(Joule) / (m_μ * c^2) + 1

proc γ_to_E(γ: UnitLess): GeV =
  result = ((γ - 1) * m_μ * c^2).to(GeV)

type Element = object
  Z: UnitLess
  M: g•mol⁻¹
  A: UnitLess # numerically same as `M`
  ρ: g•cm⁻³

proc initElement(Z: UnitLess, M: g•mol⁻¹, ρ: g•cm⁻³): Element =
  Element(Z: Z, M: M, A: M.UnitLess, ρ: ρ)

# molar mass. Numerically same as relative atomic mass
let M_Ar = 39.95.g•mol⁻¹
let ρAr = density(1050.mbar, M_Ar, temp = 293.15.K)
let Argon = initElement(18.0.UnitLess, 39.95.g•mol⁻¹, ρAr)

proc intBethe(e: Element, d_total: cm, E0: eV, dx = 1.μm): eV =
  ## integrated energy loss of bethe formula after `d` cm of matter
  ## and returns the energy remaining
  var γ: UnitLess = E_to_γ(E0.to(GeV))
  var d: cm
  result = E0
  var totalLoss = 0.eV
  while d < d_total and result > 0.eV:
    let E_loss: MeV = betheBloch(-1, e.Z, e.M, γ, m_μ) * e.ρ * dx
    result = result - E_loss.to(eV)
    γ = E_to_γ(result.to(GeV))
    d = d + dx.to(cm)
    totalLoss = totalLoss + E_loss.to(eV)
  result = max(0.float, result.float).eV

proc plotDetectorAbsorption() =
  let E_float = logspace(-2, 2, 1000)
  let energies = E_float.mapIt(it.GeV)
  let E_loss = energies.mapIt(
    (it.to(eV) - intBethe(Argon, 3.cm, it.to(eV))).to(keV).float
  )
  let df = toDf(E_float, E_loss)
  ggplot(df, aes("E_float", "E_loss")) +
    geom_line() +
    xlab("μ Energy [GeV]") + ylab("ΔE [keV]") +
    scale_x_log10() + scale_y_log10() +
    ggtitle("Energy loss of Muons in 3 cm Ar at CAST conditions") +
    ggsave("/home/basti/phd/Figs/muons/ar_energy_loss_cast.pdf")
plotDetectorAbsorption()

let Atmosphere = @[(0.78084, initElement(7.0.UnitLess, 14.006.g•mol⁻¹,
                                         1.2506.g•dm⁻³.to(g•cm⁻³))), # N2
                   (0.20964, initElement(8.0.UnitLess, 15.999.g•mol⁻¹,
                                         1.429.g•dm⁻³.to(g•cm⁻³))),  # O2
                   (0.00934,
                    initElement(18.0.UnitLess, 39.95.g•mol⁻¹,
                                1.784.g•dm⁻³.to(g•cm⁻³)))] # Ar

proc plotMuonBethe() =
  let E_float = logspace(-2, 2, 1000)
  let energies = E_float.mapIt(it.GeV)
  var dEdxs = newSeq[float]()
  for e in energies:
    var dEdx = 0.0.MeV•g⁻¹•cm²
    for elTup in Atmosphere:
      let (w, element) = elTup
      let γ = E_to_γ(e)
      dEdx += w * betheBloch(-1, element.Z, element.M, γ, m_μ)
    dEdxs.add dEdx.float
  let df = toDf(E_float, dEdxs)
  ggplot(df, aes("E_float", "dEdxs")) +
    geom_line() +
    xlab("μ Energy [GeV]") + ylab("dE/dx [MeV•g⁻¹•cm²]") +
    scale_x_log10() + scale_y_log10() +
    ggtitle("Energy loss of Muons in atmosphere") +
    ggsave("/home/basti/phd/Figs/muons/energy_loss_muons_atmosphere.pdf")
plotMuonBethe()

#if true: quit()

import math, unchained, ggplotnim, sequtils
const R_Earth = 6371.km
func distanceAtmosphere(θ: Radian, d: KiloMeter = 36.6149.km): UnitLess =
  ## NOTE: The default value for `d` is not to be understood as a proper height. It's an
  ## approximation based on a fit to get `R_Earth / d = 174`!
  result = sqrt((R_Earth / d * cos(θ))^2 + 2 * R_Earth / d + 1) - R_Earth / d * cos(θ)

defUnit(cm⁻²•s⁻¹•sr⁻¹)
defUnit(m⁻²•s⁻¹•sr⁻¹)
proc muonFlux(E: GeV, θ: Radian, E₀, E_c: GeV, I₀: m⁻²•s⁻¹•sr⁻¹, ε: GeV): m⁻²•s⁻¹•sr⁻¹ =
  const n = 3.0
  let N = (n - 1) * pow((E₀ + E_c).float, n - 1)
  result = I₀ * N * pow((E₀ + E).float, -n) *
           #pow((1 + E / ε).float, -1) *
           ( (  1.0 / (1 + 1.1 * E * cos(θ) / 115.GeV).float) +
             (0.054 / (1 + 1.1 * E * cos(θ) / 850.GeV).float) ) *
           pow(distanceAtmosphere(θ), -(n - 1))

from numericalnim/integrate import simpson
proc plotE_vs_flux(θ: Radian, E₀, E_c: GeV, I₀: m⁻²•s⁻¹•sr⁻¹, ε: GeV,
                   suffix = "") =
  let energies = linspace(E_c.float, 100.0, 1000)
  let E = energies.mapIt(it.GeV)
  let flux = E.mapIt(muonFlux(it, θ, E₀, E_c, I₀, ε).float) # .to(cm⁻²•s⁻¹•sr⁻¹)
  let df = toDf(energies, flux)
  echo "Integrated flux: ", simpson(flux, energies)
  ggplot(df, aes("energies", "flux")) +
    geom_line() +
    xlab("Energy [GeV]") + ylab("Flux [m⁻²•s⁻¹•sr⁻¹]") +
    scale_x_log10() + scale_y_log10() +
    ggtitle(&"Flux dependency on the energy of muons at θ = {θ.to(°)}{suffix}") +
    ggsave(&"/home/basti/phd/Figs/muons/energy_vs_flux_cosmic_muons{suffix}.pdf")
plotE_vs_flux(0.Radian,
              2.5.GeV, #4.29.GeV,
              0.5.GeV,
              70.7.m⁻²•s⁻¹•sr⁻¹,
              854.GeV)

let E₀ = 25.0.GeV
let I₀ = 90.0.m⁻²•s⁻¹•sr⁻¹
let E_c = 1.GeV
let ε = 2000.GeV
proc plotFlux_at_CAST() =
  let energies = linspace(0.5, 100.0, 1000)
  let E = energies.mapIt(it.GeV)
  let flux = E.mapIt(muonFlux(it, 88.0.degToRad.Radian, E₀, E_c, I₀, ε).float)
  let df = toDf(energies, flux)
  ggplot(df, aes("energies", "flux")) +
    geom_line() +
    xlab("Energy [GeV]") + ylab("Flux [m⁻²•s⁻¹•sr⁻¹]") +
    scale_x_log10() + scale_y_log10() +
    ggtitle("Flux dependency on the energy at θ = 88° at CAST altitude") +
    ggsave("/home/basti/phd/Figs/muons/flux_at_cast_88_deg.pdf")
plotFlux_at_CAST()

proc computeMeanEnergyLoss() =
  let energies = linspace(0.5, 100.0, 1000)
  let E = energies.mapIt(it.GeV)
  let flux = E.mapIt(muonFlux(
    it, 88.0.degToRad.Radian, E₀, E_c, I₀, ε).float
  )
  let E_loss = E.mapIt(
    (it.to(eV) - intBethe(Argon, 3.cm, it.to(eV))).to(keV).float
  )
  let fluxSum = flux.sum
  let df = toDf(energies, E_loss, flux)
    .mutate(f{"flux" ~ `flux` / fluxSum},
            f{"AdjFlux" ~ `E_loss` * `flux`})
  echo "Mean energy loss: ", df["AdjFlux", float].sum
computeMeanEnergyLoss()

proc computeHeight(S: Meter, θ: Radian): KiloMeter =
  ## For a given remaining distance along the path of a muon
  ## `S` (see fig. 1 in 1606.06907) computes the remaining height above
  ## ground. Formula is the result of inverting eq. 7 to `d` using the
  ## quadratic formula. We take the positive root; the negative one is
  ## unphysical.
  result = (-1.0 * R_Earth + sqrt(R_Earth^2 + S^2 + 2 * S * R_Earth * cos(θ)).m).to(km)

import algorithm
defUnit(K•m⁻¹)
proc barometricFormula(h: KiloMeter): g•cm⁻³ =
  let hs = @[0.0.km, 11.0.km]
  let ρs = @[1.225.kg•m⁻³, 0.36391.kg•m⁻³]
  let Ts = @[288.15.K, 216.65.K]
  let Ls = @[-1.0 * 0.0065.K•m⁻¹, 0.0.K•m⁻¹]
  let M_air = 0.0289644.kg•mol⁻¹
  let R = 8.3144598.N•m•mol⁻¹•K⁻¹
  let g_0 = 9.80665.m•s⁻²
  let idx = hs.mapIt(it.float).lowerBound(h.float) - 1
  case idx
  of 0: # in Troposphere, using regular barometric formula for densities
    let expArg = g_0 * M_air / (R * Ls[idx])
    result = (ρs[idx] * pow(Ts[idx] / (Ts[idx] + Ls[idx] * (h - hs[idx])), expArg)).to(g•cm⁻³)
  of 1: # in Tropopause, use equation valid for L_b = 0
    result = (ρs[idx] * exp(-1.0 * g_0 * M_air * (h - hs[idx]) / (R * Ts[idx]))).to(g•cm⁻³)
  else: doAssert false, "Invalid height! Outside of range!"

import random
randomize(430)
proc intBetheAtmosphere(E: GeV, θ: Radian, dx = 10.cm): eV =
  ## integrated energy loss using Bethe formula for muons generated at
  ## `15.km` under an angle of `θ` to the observer for a muon of energy
  ## `E`.
  # Main contributions in Earth's atmosphere
  const τ = 2.1969811.μs # muon mean lifetime
  let elements = Atmosphere
  var γ: UnitLess = E_to_γ(E.to(GeV))
  result = E.to(eV)
  var totalLoss = 0.eV
  let h_muon = 15.km # assume creation happens in `15.km`
  let S = h_muon.to(m) * distanceAtmosphere(θ.rad, d = h_muon)
  var S_prime = S
  while S_prime > 0.m and result > 0.eV:
    let h = computeHeight(S_prime, θ)
    let ρ_at_h = barometricFormula(h)
    var E_loss = 0.0.MeV
    for eTup in elements:
      # compute the weighted contribution of the element fraction
      let (w, e) = eTup
      E_loss += w * betheBloch(-1, e.Z, e.M, γ, m_μ) * ρ_at_h * dx
    ## Add step for radioactive decay of muon.
    ## - given `dx` compute likelihood of decay
    ## - eigen time of muon: dx / v = dt. dτ = dt / γ
    ## - muon decay is λ = 1 / 2.2e-6s
    let β = calcβ(γ)
    # compute effective time in lab frame
    let δt = dx / (β * c)
    # compute eigen time
    let δτ = δt / γ
    # probability of survival over this time frame
    let p = pow(1 / math.E, δτ / τ)
    # hence decay with likelihood `1 - p`
    #echo "γ = ", γ, " yields ", p, " in δτ ", δτ, " for energy ", E
    if rand(1.0) < (1.0 - p):
      echo "Particle decayed after: ", S_prime
      return 0.eV
    result = result - E_loss.to(eV)
    S_prime = S_prime - dx
    γ = E_to_γ(result.to(GeV))
    totalLoss = totalLoss + E_loss.to(eV)
  echo "total Loss ", totalLoss.to(GeV)
  result = max(0.float, result.float).eV

block MuonLimits:
  let τ_μ = 2.1969811.μs
  # naively this means given some distance `s` the muon can
  # traverse `s = c • τ_μ` (approximating its speed by `c`) before
  # it has decayed with a 1/e chance
  # due to special relativity this is extended by γ
  let s = c * τ_μ
  echo s
  # given production in 15 km, means
  let h = 15.km
  echo h / s # so a reduction of (1/e)^22. So 0.
  # now it's not 15 km but under an angle `θ = 88°`.
  let R_over_d = 174.UnitLess
  let n = 3.0
  let E₀ = 25.0.GeV
  let I₀ = 90.0.m⁻²•s⁻¹•sr⁻¹
  let E_c = 1.GeV
  let ε = 2000.GeV
  # distanceAtmosphere gives S / d, where `d` corresponds to our `h` up there
  let S = h * distanceAtmosphere(88.0.degToRad.rad) # so about 203 km
  # so let's say 5 * mean distance is ok, means we need
  let S_max = S / 5.0
  # so need a `γ` such that `s` is stretched to `S_max`
  let γ = S_max / s
  echo γ # ouch. Something has to be wrong. γ of 61?
  # corresponds to an energy loss of what?
  let Nitrogen = initElement(7.0.UnitLess, 14.006.g•mol⁻¹, 1.2506.g•dm⁻³.to(g•cm⁻³))
  echo "================================================================================"
  echo "Energy left: ", intBethe(Nitrogen, S.to(cm), 6.GeV.to(eV), dx = 1.m.to(μm)).to(GeV)

  proc print(E: GeV, θ: Radian) =
    let left = intBetheAtmosphere(E, θ = θ).to(GeV)
    echo "E = ", E, ", θ = ", θ, ", Bethe = ", E - left
  print(6.GeV, 0.Radian)
  #print(200.GeV, 0.Radian)
  #print(200.GeV, 88.°.to(Radian))
  #print(200.GeV, 75.°.to(Radian))

  let E_loss75 = 100.GeV - intBetheAtmosphere(100.GeV, 75.°.to(Radian)).to(GeV)
  plotE_vs_flux(75.°.to(Radian),
                E_loss75, #23.78.GeV, #25.GeV, #E_loss75,
                1.0.GeV,
                90.m⁻²•s⁻¹•sr⁻¹, #65.2.m⁻²•s⁻¹•sr⁻¹,
                2000.GeV, # 854.GeV,
                "_at_75deg")
  echo "S@75° = ", h * distanceAtmosphere(75.0.degToRad.rad, d = 15.0.km)
  echo "================================================================================"
  echo E_to_γ(4.GeV)
  echo E_to_γ(0.GeV)

proc plotE_vs_flux_and_angles(E_c: GeV, I₀: m⁻²•s⁻¹•sr⁻¹, ε: GeV,
                              suffix = "") =
  ## Generates a plot of the muon flux vs energy for a fixed set of different
  ## angles.
  ##
  ## The energy loss is computed using a fixed
  let energies = logspace(log10(E_c.float), 2.float, 1000)
  let angles = linspace(0.0, 80.0, 9)
  block CalcLossEachMuon:
    var df = newDataFrame()
    for angle in angles:
      let E = energies.mapIt(it.GeV)
      let θ = angle.°.to(Radian)
      var flux = newSeq[float]()
      var E_initials = newSeq[float]()
      var E_lefts = newSeq[float]()
      var lastDropped = 0.GeV
      for e in E:
        let E_left = intBetheAtmosphere(e, θ).to(GeV)
        if E_left <= 0.0.GeV:
          echo "Skipping energy : ", e, " as muon was lost in atmosphere"
          continue
        elif E_left <= E_c:
          echo "Skipping energy : ", e, " as muon has less than E_c = ", E_c, " energy left"
          lastDropped = e
          continue
        let E₀ = e - E_left
        flux.add muonFlux(e, θ, E₀, E_c, I₀, ε).float
        E_initials.add e.float
        E_lefts.add E_left.float
      let dfLoc = toDf({E_initials, E_lefts, flux, "angle [°]" : angle})
        # .filter(f{`E_initials` >= lastDropped.float})
      df.add dfLoc
    var theme = themeLatex(fWidth = 0.5, width = 600, baseTheme = sideBySide)
    theme.tickWidth = some(theme.tickWidth.get / 2.0)
    theme.tickLength = some(theme.tickLength.get / 2.0)
    theme.gridLineWidth = some(theme.gridLineWidth.get / 2.0)
    ggplot(df, aes("E_initials", "flux", color = factor("angle [°]"))) +
      geom_line() +
      xlab(r"Initial energy [\si{GeV}]") +
      ylab(r"Flux [\si{m^{-2}.s^{-1}.sr^{-1}}]", margin = 2.5) +
      scale_x_log10() + scale_y_log10() +
      margin(right = 3.0) +
      ggtitle(&"Muon flux at different angles{suffix}") +
      theme +
      ggsave(&"/home/basti/phd/Figs/muons/initial_energy_vs_flux_and_angle_cosmic_muons{suffix}.pdf",
             useTeX = true, standalone = true, width = 600, height = 450)
    ggplot(df, aes("E_lefts", "flux", color = factor("angle [°]"))) +
      geom_line() +
      xlab(r"Energy at surface [\si{GeV}]") +
      ylab(r"Flux [\si{m^{-2}.s^{-1}.sr^{-1}}]", margin = 2.5) +
      scale_x_log10() + scale_y_log10() +
      margin(right = 3.0) +
      theme +
      ggtitle(&"Muon flux at different angles{suffix}") +
      ggsave(&"/home/basti/phd/Figs/muons/final_energy_vs_flux_and_angle_cosmic_muons{suffix}.pdf",
             useTeX = true,
             standalone = true, width = 600, height = 450)

  block StaticLoss:
    var df = newDataFrame()
    for angle in angles:
      let E = energies.mapIt(it.GeV)
      let θ = angle.°.to(Radian)
      let E₀ = 100.GeV - intBetheAtmosphere(100.GeV, 0.0.Radian).to(GeV)
      let flux = E.mapIt(muonFlux(it, θ, E₀, E_c, I₀, ε).float)
      let dfLoc = toDf({energies, flux, "angle [°]" : angle})
      df.add dfLoc
    ggplot(df, aes("energies", "flux", color = factor("angle [°]"))) +
      geom_line() +
      xlab("Energy [GeV]") + ylab("Flux [m⁻²•s⁻¹•sr⁻¹]") +
      scale_x_log10() + scale_y_log10() +
      ggtitle(&"Differential muon flux dependency at different angles{suffix}") +
      ggsave(&"/home/basti/phd/Figs/muons/energy_vs_flux_and_angle_cosmic_muons{suffix}.pdf")

#proc plotE_vs_flux_and_angles(E_c: GeV, I₀: m⁻²•s⁻¹•sr⁻¹, ε: GeV,
#                              suffix = "") =
#  ## Generates a plot of the integrated muon flux vs angles for a fixed set of different
#  ## energies.
#  let angles = linspace(0.0, 90.0, 100)
#  var df = newDataFrame()
#  let energies = linspace(E_c.float, 100.0, 1000)
#  let E = energies.mapIt(it.GeV)
#  for angle in angles:
#    let θ = angle.°.to(Radian)
#    let E₀ = 100.GeV - intBetheAtmosphere(100.GeV, θ).to(GeV)
#    let flux = E.mapIt(muonFlux(it, θ, E₀, E_c, I₀, ε).float)
#    let dfLoc = toDf({energies, flux, "angle [°]" : angle})
#    df.add dfLoc
#  ggplot(df, aes("energies", "flux", color = factor("angle [°]"))) +
#    geom_line() +
#    xlab("Energy [GeV]") + ylab("Flux [m⁻²•s⁻¹•sr⁻¹]") +
#    scale_x_log10() + scale_y_log10() +
#    ggtitle(&"Differential muon flux dependency at different angles{suffix}") +
#    ggsave(&"/home/basti/phd/Figs/muons/energy_vs_flux_and_angle_cosmic_muons{suffix}.pdf")

# different angles!
block MuonBehavior:
  plotE_vs_flux_and_angles(0.3.GeV,
                           90.m⁻²•s⁻¹•sr⁻¹,
                           854.GeV)

proc unbinnedCdf(x: seq[float]): (seq[float], seq[float]) =
  ## Computes the CDF of unbinned data
  var cdf = newSeq[float](x.len)
  for i in 0 ..< x.len:
    cdf[i] = i.float / x.len.float
  result = (x.sorted, cdf)

import random, algorithm
proc sampleFlux(samples = 1_000_000): DataFrame =
  randomize(1337)
  let energies = linspace(0.1, 100.0, 100_000)
  #let energies = logspace(0, 2, 1000)
  let E = energies.mapIt(it.GeV)
  let flux = E.mapIt(muonFlux(it, 88.0.degToRad.Radian, E₀, E_c, I₀, ε).float)
  # given flux compute CDF
  let fluxCS = flux.cumSum()
  let fluxCS_sorted = flux.sorted.cumSum()
  let fluxCDF = fluxCS.mapIt(it / fluxCS[^1])
  let fluxCDF_sorted = fluxCS_sorted.mapIt(it / fluxCS_sorted[^1])
  let (data, cdf) = unbinnedCdf(flux)
  let dfX = toDf(energies, fluxCS, fluxCS_sorted, fluxCDF, fluxCDF_sorted)
  ggplot(dfX, aes("energies", "fluxCS")) +
    geom_line() + ggsave("/t/cumsum_test.pdf")
  ggplot(dfX, aes("energies", "fluxCDF")) +
    geom_line() + ggsave("/t/cdf_test.pdf")
  ggplot(dfX, aes("energies", "fluxCS_sorted")) +
    geom_line() + ggsave("/t/cumsum_sorted_test.pdf")
  ggplot(dfX, aes("energies", "fluxCDF_sorted")) +
    geom_line() + ggsave("/t/cdf_sorted_test.pdf")
  ggplot(toDf(data, cdf), aes("data", "cdf")) +
    geom_line() + ggsave("/t/unbinned_cdf.pdf")
  #if true: quit()
  var lossesBB = newSeq[float]()
  var lossesMP = newSeq[float]()
  var energySamples = newSeq[float]()
  let dedxmin = 1.519.MeV•cm²•g⁻¹
  echo "Loss = ", (dedxmin * Argon.ρ * 3.cm).to(keV)
  for i in 0 ..< samples:
    # given the fluxCDF sample different energies, which correspond to the
    # distribution expected at CAST
    let idx = fluxCdf.lowerBound(rand(1.0))
    let E_element = E[idx]
    # given this energy `E` compute the loss
    let lossBB = (E_element.to(eV) - intBethe(Argon, 3.cm, E_element.to(eV), dx = 50.μm)).to(keV).float
    lossesBB.add lossBB
    let lossMP = mostProbableLoss(-1, Argon.Z, Argon.M, E_element.E_to_γ(), Argon.ρ * 3.cm)
    lossesMP.add lossMP.float
    #echo "Index ", i, " yields energy ", E_element, " and loss ", loss
    energySamples.add E_element.float
  let df = toDf(energySamples, lossesBB, lossesMP)
    .gather(["lossesBB", "lossesMP"], "Type", "Value")
  ggplot(df, aes("Value", fill = "Type")) +
    geom_histogram(bins = 300, hdKind = hdOutline, alpha = 0.5, position = "identity") +
    margin(top = 2) +
    xlim(5, 15) +
    ggtitle(&"Energy loss of muon flux at CAST based on MC sampling with {samples} samples") +
    ggsave("/home/basti/phd/Figs/muons/sampled_energy_loss.pdf")
  ggplot(df, aes("energySamples")) +
    geom_histogram(bins = 300) +
    margin(top = 2) +
    ggtitle(&"Sampled energies for energy loss of muon flux at CAST") +
    ggsave("/home/basti/phd/Figs/muons/sampled_energy_for_energy_loss.pdf")
  let (samples, bins) = histogram(energySamples, bins = 300)
  let dfH = toDf({"bins" : bins[0 ..< ^1], samples})
    .filter(f{`bins` > 0.0 and `samples`.float > 0.0})
  ggplot(dfH, aes("bins", "samples")) +
    geom_line() +
    scale_x_log10() +
    margin(top = 2) +
    ggtitle(&"Sampled energies for energy loss of muon flux at CAST") +
    ggsave("/home/basti/phd/Figs/muons/sampled_energy_for_energy_loss_manual.pdf")
  ggplot(toDf(energies, flux), aes("energies", "flux")) +
    geom_line() +
    scale_x_log10() +
    ggsave("/tmp/starting_data_e_flux.pdf")
  ggplot(toDf(energies, flux), aes("energies", "flux")) +
    geom_line() +
    ggsave("/tmp/linear_starting_data_e_flux.pdf")
discard sampleFlux(samples = 1_000_000)
#+end_src

#+begin_src sh
ntangle thesis.org && nim c -d:release code/muons
code/muons
#+end_src

** Gaseous detector fundamentals
:PROPERTIES:
:CUSTOM_ID: sec:theory:gas_fundamentals
:END:

Gaseous detectors consist of a volume filled with gas, usually a noble
gas with a small amount of a molecular gas. For low-rate, low-energy
experiments an entrance window allows the particles to be detected to
enter the volume.
An electric field is applied over the gas volume, strong enough to
cause electron-ion pairs created by ionization from the incoming
particles to drift to opposite ends of the volume (magnetic fields may
be utilized in addition). At least on the side of the anode (where the
electrons arrive), a readout of some form is installed to measure the
time of arrival, the amount of collected charge and / or the position
of the electrons. Depending on the details, this in principle allows
for a 3D reconstruction of the initial event in the gas volume.

The choice of detector gas, the applied electric fields, the gas
volume dimensions and the type of readout have a very large impact on
the applications a detector is useful for. In the following we will
focus on the physics concepts required for gaseous detectors with few
$\si{cm}$ long drift volumes and high spatial resolution readouts.

Note: this section covers the basic fundamentals that will later be
referenced in the thesis. For a much more in-depth treatment of these
concepts, see references cite:sauli2014gaseous and
cite:kolanoski2020particle and to some extent the PDG
cite:Zyla:2020zbs (in particular the chapters on particle detectors
and passage of particles through matter; chapter numbers vary by
year).

*** TODOs for this section :noexport:

Write a few general things about gaseous detectors here. I.e. contain
usually mainly a noble gas, with some quencher for rotational and
vibrational modes. These 'cool' the electrons down so that they are in
the Townsend minimum, which effectively increases the drift
velocity. Electric fields should be strong enough to let electrons and
ions drift in opposite directions and have a fast enough drift
velocity, but low enough to not cause further ionization.

*** Gas mixtures and Dalton's law
:PROPERTIES:
:CUSTOM_ID: sec:theory:daltons_law
:END:

Of common interest when dealing with gas mixtures is the notion of
partial pressures.
In ideal gas theory, a mixture of gases at a pressure $P$ can be considered to be the sum of the 'partial pressures' of each gas species
\[
P = \sum_i p_i,
\]
as initially noted by John Dalton in 1802 cite:dalton1802essay. The contribution of each gas depends only on the species' mole fraction. Typically, when considering gas mixtures, the fraction of each gas species is given as a percentage, which already refers to the mole fraction of that species. As such, the partial pressure of a single species can be expressed as
\[
p_i = n_i P
\]
where $n_i$ is the mole fraction of species $i$. This is an extremely valuable property when computing interactions of particles with gas mixtures, for example the absorption of X-rays after propagating through a certain distance of a gas mixture: one can compute the absorption for each partial pressure independently and combine the contributions afterwards (whether they are added, multiplied or combined otherwise of course depends on the considered process).

**** TODOs for this section :noexport:
- [X] *FIND GOOD REFERENCE FOR DALTON'S LAW*
  -> Reference to original work of John Dalton

*** Ionization energy and average energy per electron-ion pair

In order to understand the signals detected by a gaseous detector, the average number of electrons liberated by an incident particle should be known. It can be calculated given the mean energy loss in a gas required to produce a single electron-ion pair, called the $W\text{-value}$,
\[
W = \frac{I}{⟨N⟩}.
\]
Here, $I$ is the mean ionization energy of the gas and $⟨N⟩$ the average number of electron-ion pairs generated per deposited energy $I$. $⟨N⟩$ is usually smaller than one, $\numrange{0.6}{0.7}$ for noble gases and even below $\num{0.5}$ for molecular gases, as not all energy of the incoming particle is deposited in the form of ionization (for example the generation of a photoelectron); other forms of energy loss are possible.
In molecular gases vibrational and rotational modes offer even more possibilities, resulting in the smaller values.

The mean excitation energy $I$ of an element is the weighted combination of the most likely energy levels the element is ionized from. The exact values depend on the specific element and tabulated values, for example from NIST cite:hubbell1996nist, exist. Above some $Z$ (roughly argon, $Z = 18$) the rough approximation $I = \SI{10}{eV} · Z$ can be used [[cite:&Zyla:2020zbs]], developed by Bloch cite:bloch1933bremsvermogen.

The precise number for $W$ strongly depends on the gas mixture used and typically requires experimental measurements to determine. Monte Carlo based simulations can be used as a rough guide, but care must be taken interpreting the results, as significant uncertainty can remain. Tools for such Monte Carlo simulations include GEANT4 cite:GEANT4:2002zbu [fn:geant4] and MCNelectron cite:doi:10.1080/00223131.2014.974710 [fn:mcn_electron]. These are based on atomic excitation cross sections, which are well tabulated in projects like ENDF cite:brown2018endf (citation for the latest data release) [fn:endf_website] and LXCat cite:pancheshnyi2012lxcat,pitchford2017lxcat,carbone2021data [fn:lxcat_website].

# Note: there is also MCNP6 https://mcnp.lanl.gov/ which I will
# conveniently *not* mention, as it is under ridiculous US export
# regulations and therefore the source code is not visible for non US
# citizens. Screw that.

[fn:geant4] [[https://geant4.web.cern.ch]]

[fn:mcn_electron] http://web.vu.lt/ff/a.poskus/mcnelectron/

[fn:endf_website] https://www.nndc.bnl.gov/endf-b8.0/index.html

[fn:lxcat_website] [[https://lxcat.net]]

[fn:w_value_fano_factor] Interestingly, there is an empirical, roughly linear relationship between the W-value as shown here and the Fano factor cite:bronic1992relation. *NOTE*: Given how Fano discovered / described Fano noise, it's maybe not quite surprising that this is related!
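

Given a W-value, the expected mean number of liberated primary electrons follows directly from $⟨N_e⟩ = E / W$ for a deposited energy $E$. The following is a minimal Python sketch of this relation; the value $W ≈ \SI{26}{eV}$ used here is only a typical number for argon based mixtures and, as stated above, the precise value depends on the exact gas mixture:

#+begin_src python
# Mean number of primary electrons <N_e> = E / W for a deposited energy E.
# W = 26 eV is a typical value for argon based gas mixtures, used here
# purely as an illustration (the real value must be measured).
def n_primary_electrons(energy_eV, w_value_eV = 26.0):
    return energy_eV / w_value_eV

# A 5.9 keV X-ray (e.g. from a 55Fe source) then liberates roughly
# 227 primary electrons on average.
print(round(n_primary_electrons(5900.0)))
#+end_src
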
**** TODOs for this section :noexport:
*FIND GOOD REFERENCES* beyond these two. Better would be a primary like reference.
cite:bronic1992relation,doi:10.1080/00223131.2014.974710

Why expect ~26 eV for argon
Talk about ionization energy vs. the actual mean energy loss for a single ionization. Argon ionization energy is only like 15 eV or so, but effective one is 26. Leads to our 226 or so primary electrons for our \cefe spectra.

*APPROXIMATION FOR 10eV * Z*:
For the approximation see fig. 34.5 in [[cite:&Zyla:2020zbs]]. As that is I / Z where the curve becomes flat is when the relationship holds! The dashed line with the blue background is the approximation.

W-value and Fano factor: Sauli [[cite:&sauli2014gaseous]] mentions relationship between Fano and W-value on page 62!

*** Mean free path

While we have already discussed the mean free path for X-rays in sec. [[#sec:theory:xray_matter_gas]], we should revisit it briefly for electrons in a gas. It is a necessary concept for understanding other aspects of gaseous detector physics, in particular the gas amplification. [fn:treatment]

The mean free path of an electron in a gas can be described by
#+NAME: eq:theory:mean_free_path_e_def
\begin{equation}
λ = \frac{1}{nσ}
\end{equation}
where $n = N/V$ is the number density (atoms or molecules per unit volume) of the gas particles and $σ$ the cross section of electrons interacting with them. Such cross sections are tabulated for different elements, for example again see LXCat cite:pancheshnyi2012lxcat,pitchford2017lxcat,carbone2021data. Based on the ideal gas law,
\[
p V = N k_B T ⇔ p = n k_B T,
\]
with the Boltzmann constant $k_B$, we can express the mean free path as a function of pressure $p$ and temperature $T$. Inserting this via $n$ into eq. [[eq:theory:mean_free_path_e_def]] yields
#+NAME: eq:theory:mean_free_path_e
\begin{equation}
λ = \frac{k_B T}{p σ}.
\end{equation}

[fn:treatment] Its usefulness depends on the angle from which different aspects are discussed of course. We mainly use it to understand the gas amplification later.

**** TODOs for this section :noexport:
The old text:
*NOTE*: This is already explained for X-rays in the previous section. Do we need this concept in some more detail? Otherwise we can also remove it later. See photons above.

Of major importance for the detection of particles in a gaseous detector is the mean free path. This is the mean length a particle traverses in a medium before interactions. For charged particles it yields the mean distance between individual interaction points, whereas for a photon it gives the mean length the photon traverses through the detector before interaction. Especially for photons this value is of crucial importance, as it tells us at what distance photons will likely convert depending on their energy. This is described by the absorption length as mentioned in section [[#sec:theory:xray_matter_gas]]. This is important as it directly affects the possible drift distance and thus diffusion available to the generated electrons.

*TOO MUCH DETAIL, AS WE HAVE NOT INTRODUCED DETECTORS YET!!*

*** Diffusion
:PROPERTIES:
:CUSTOM_ID: sec:theory:gas_diffusion
:END:

Diffusion in the context of gaseous detectors is the random walk that electrons exhibit, in the directions longitudinal and transverse to the electric field, while drifting towards the readout. Depending on the specific detector technology, some transverse diffusion may be a desired property. For long drift distances, magnetic fields can be used to significantly reduce the transverse diffusion. For an overview of the mathematics of diffusion as a random walk process, see cite:berg1993random. In the context of gaseous detectors see for example any of cite:sauli2014gaseous,kolanoski2020particle,Hilke2020. In general, for precise numbers, measurements must be taken.
Monte Carlo simulations using tools like Magboltz cite:biagi1995magboltz or PyBoltz cite:pyboltz can be used as a general reference if no measurements are available.

The mean distance $σ_T$ from a starting point after a certain amount of time $t$ is given by
\[
σ_T = \sqrt{ 2 N D t }
\]
for a random walk in $N$ dimensions, where $D$ is a diffusion coefficient specifying the movement in distance squared per time. For example, for a point-like source of multiple electrons, a 2-dimensional gaussian distribution with standard deviation $σ_T$ is expected after diffusion over some distance. By relating the time to the drift velocity $v$ along an axis and introducing the diffusion constant $D_T$
\[
D_T = \sqrt{ \frac{2 N D}{v} }
\]
we can further express $σ_T$ as
#+NAME: eq:gas_physics:diffusion_after_drift
\begin{equation}
σ_T = D_T · \sqrt{x},
\end{equation}
which is often practically useful to estimate the expected diffusion after a certain drift length. Note that the terminology of "diffusion constant", "diffusion coefficient" and similar is often used ambiguously as to whether it refers to $D$ or $D_T$ (or sometimes even $σ_T$). Nor is the considered dimensionality $N$ always clearly indicated. Keeping $N = 1$ and handling multiple dimensions as independent random walks is a practical approach to take (as long as it is valid in the application). [fn:diffusion_and_simulation]

[fn:diffusion_and_simulation] Later in chapter [[#sec:background:mlp:event_generation]] we will discuss Monte Carlo event generation, which uses the diffusion coefficient as an input to generate clusters after drift. Distances in $x$ and $y$ are each sampled individually according to eq. [[eq:gas_physics:diffusion_after_drift]] and combined into a radial distance from the cluster center.

**** TODOs for this section :noexport:
- [X] *CITE MAGBOLTZ, PYBOLTZ*
- [X] *HILKE2020*
*EXPLAIN WHERE COMES FROM, COMES DOWN TO $D_T$ AND PARAMETER FROM MAGBOLTZ* -> ?
- [ ] *CLARIFY WHEN USING IN LATER SECTION* [[#sec:background:mlp:determine_gas_diffusion]] -> **** Additional thoughts on diffusion :noexport: *this is important as it relates to the 1.5 σ_transverse cut we do for data cleaning!* - [ ] *ADD HOW LONGITUDINAL AND TRANSVERSE DIFFUSION RELATE* -> This is important in the context of using the FADC as a veto. We can measure the transverse diffusion by looking at the event sizes, but we cannot do the same for the longitudinal diff (except by looking at the FADC events, but the idea being to define conservative cuts on FADC using theory) - [ ] Explain better distinction between diffusion coefficient and its standard deviation! -> Normal distribution describes position! See also: [[file:~/org/Papers/gas_physics/randomwalkBerg_diffusion.pdf]] <x²> = 2 D t ( 1 dimension ) <x²> = 4 D t ( 2 dimensions ) <x²> = 6 D t ( 3 dimensions ) Also look into Sauli book again (p. 82 eq. (4.5) and eq. (4.6)). Also: [[file:~/org/Papers/Hilke-Riegler2020_Chapter_GaseousDetectors.pdf]] page 15 sec. 4.2.2.2 The latter mentions on page 15 that there is a distinction between: D = diffusion coefficient for which σ = √(2 D t) (1 dim) is valid and D* = diffusion constant for which: σ = D* √(z) is valid! From PyBoltz source code in ~Boltz.pyx~ #+begin_src python self.TransverseDiffusion1 = sqrt(2.0 * self.TransverseDiffusion / self.VelocityZ) * 10000.0 #+end_src which proves the distinction in the paper: √(2 D t) = D* √x ⇔ D* = √(2 D t) / √x = √(2 D t / x) = √(2 D / v) (with x = v t) Describe diffusion based on gas. Needed to get expected photon size based on conversion at specific height. What effects affect diffusion? Random walk + a force acting on particles. - [X] *COMPUTE USING PYBOLTZ: https://github.com/UTA-REST/PyBoltz* -> DONE *** Drift velocity The drift velocity is the average speed at which electrons move towards the anode in a gaseous detector under the influence of an electric field. 
It is required to understand how recorded time information relates to the longitudinal shape of recorded data. Based on the so called 'friction force model' an analytical expression for the drift velocity in an electromagnetic field can be written as:
\[
\vec{v} = \frac{e}{m_e} \frac{τ}{1 + ω²τ²}\left( \vec{E} + \frac{ωτ}{B} (\vec{E} × \vec{B}) + \frac{ω²τ²}{B²}(\vec{E} · \vec{B}) \vec{B} \right)
\]
with the electron charge $e$ and mass $m_e$, in an electric field $\vec{E}$ and magnetic field $\vec{B}$, given the Larmor frequency $ω = eB / m_e$ and the mean collision time $τ$. cite:Zyla:2020zbs For detectors without a magnetic field ($B = 0$ and thus $ω = 0$) and a constant, homogeneous electric field $E$, this reduces to the Townsend expression:
\[
v = \frac{e E τ}{m_e}.
\]
If measurements are not available, these can also be computed by Magboltz cite:biagi1995magboltz or PyBoltz cite:pyboltz, which solve the underlying transport equation, the Boltzmann equation.

**** TODOs for this section :noexport:
Boltzmann equation:
\begin{align*}
\frac{∂f}{∂t} + \vec{v} \frac{∂}{∂\vec{r}}f + \frac{∂}{∂\vec{v}}\vec{g} &= Q(t) \\
\vec{g} &= \left(\frac{e\vec{E}}{m} + \vec{ω} × \vec{v}\right) f
\end{align*}

*MENTION MOLECULAR GASES HAVE HIGHER DRIFT VELOCITY*

Talk about drift velocity of electrons for a given electric field. Required to know time scales associated with e.g. muons + FADC, time it takes for X-rays to drift (for random coincidences in long frames etc.)
- [ ] *USE FORMULA PDG*
The detector used in this thesis does not make use of magnetic fields. Thus, all terms but the first are zero. Further, the electric field is constant, leading to the following simplification: *FIX THIS* -> ?
*BOLTZMANN EQUATION: https://en.wikipedia.org/wiki/Boltzmann_equation*
- [X] *COMPUTE USING PYBOLTZ: https://github.com/UTA-REST/PyBoltz* -> DONE

*** Gas amplification
:PROPERTIES:
:CUSTOM_ID: sec:theory:gas_gain_polya
:END:

In order to turn the individual electrons into a measurable signal, gaseous detectors use some form of gas amplification. Details vary, but it is usually a region in the gas volume close to the readout with a very strong electric field (multiple $\si{kV.cm^{-1}}$) such that each electron causes many secondary ionizations, leading to an avalanche effect. In case of the detectors described in this thesis, amplification factors between $\numrange{2000}{4500}$ are desired.

An electron in a strong electric field $\vec{E}$ will gain enough energy to ionize an atom of the gas after a distance
\[
l \geq \frac{I}{|\vec{E}|}
\]
where $I$ is the ionization potential of the gas. This needs to be put into relation with the mean free path, eq. [[eq:theory:mean_free_path_e]]. If $l \ll λ$ no secondary ionization takes place and if $l \gg λ$ every interaction leads to ionization, likely resulting in a breakdown causing an arc (see also Paschen's law cite:paschens_law, not covered here). In the intermediate range some interactions cause secondary ionization, some do not. We can make the statistical argument that
\[
\mathcal{N} = e^{-l/λ}
\]
is the relative number of collisions with $l > λ$. This allows us to define the probability of finding a number of ionizations per unit length as
\[
P(l) = \frac{1}{λ} e^{-l / λ} = α,
\]
where we introduce the definition of the 'first Townsend coefficient', $α$. We can insert the definition of the mean free path, eq. [[eq:theory:mean_free_path_e]], and $l$ into this equation to obtain
#+NAME: eq:theory:townsend_coefficient
\begin{equation}
α(T) = \frac{pσ}{kT} \exp\left( - \frac{I}{|\vec{E}|}\frac{pσ}{kT}\right).
\end{equation}
With this we have an expression for the temperature and pressure dependency of the first Townsend coefficient. [fn:usefulness] This derivation followed [[cite:&lucianMsc]] based on [[cite:&engel65_gases]], but see [[cite:&aoyama85_gas_gain]] for a more general treatment. Similarly to diffusion and drift parameters, the first Townsend coefficient can be computed numerically using tools like Magboltz. Note that [[cite:&sauli2014gaseous]] introduces the Townsend coefficient as $α = \frac{1}{λ}$, with $λ$ and $σ$ there referring specifically to the /ionization/ mean free path and /ionizing/ cross section. This can be misleading, as it makes it seem as if the Townsend coefficient were inversely proportional to the temperature. This is of course only the case in the regime where each interaction actually causes another ionization.

The first Townsend coefficient can be used to express the multiplication factor -- or gas amplification -- used in a detector,
\[
G = e^{α x}
\]
as the number of electrons increases by a fraction $α \, \mathrm{d}x$ over each step $\mathrm{d}x$. The statistical distribution describing the number of electrons after gas amplification is the Pólya distribution
#+NAME: eq:theory:polya_distribution
\begin{equation}
p(x) = \frac{N}{G} \frac{(1 + θ)^{1 + θ}}{Γ(1 + θ)} \left(\frac{x}{G}\right)^θ \exp\left[- \frac{(1 + θ) x}{G}\right]
\end{equation}
where $N$ is a normalization constant, $θ$ is a shape parameter and $G$ is the effective gas gain. $Γ$ refers to the gamma function. Note that the term "Pólya distribution" in this context differs from its use elsewhere in mathematics, where it usually refers to a negative binomial distribution. The above definition goes back to Alkhazov cite:alkhazov1970statistics and in slight variations is commonly used.
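As a numerical sanity check of eq. [[eq:theory:polya_distribution]], the following Python sketch evaluates the distribution with $N = 1$ and verifies by crude numerical integration that it is then normalized to one and has mean $G$. The values $G = 3000$ and $θ = 1$ are arbitrary illustrative choices, not fit results:

#+begin_src python
import math

def polya(x, G, theta):
    # Pólya distribution of the gas amplification, normalization N = 1.
    a = 1.0 + theta
    return (1.0 / G) * a**a / math.gamma(a) * (x / G)**theta * math.exp(-a * x / G)

G, theta = 3000.0, 1.0  # arbitrary example values for gain and shape
dx = 1.0
xs = [i * dx for i in range(1, 40000)]
norm = sum(polya(x, G, theta) for x in xs) * dx     # close to 1: normalized
mean = sum(x * polya(x, G, theta) for x in xs) * dx  # close to G: mean is the gain
print(norm, mean)
#+end_src

For $θ = 0$ the expression reduces to a plain exponential distribution; larger $θ$ concentrates the avalanche sizes around the gain $G$.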
Due to the complexity of this definition, care needs to be taken when performing numerical fits of this function to data (using bounded parameters and preferring more general non-linear optimization routines over a plain Levenberg-Marquardt non-linear least squares approach).

Based on eq. [[eq:theory:townsend_coefficient]], the largest impacts on the expected gas amplification come from the electric field, the choice of gas and the temperature of the gas. While the former two parameters are comparatively easy to control, the temperature in the amplification region may vary and is hard to measure. As such, depending on the detector details and application, gas gain variations are expected and corrections based on a running gas gain value may be necessary.

As the large number of interactions in the amplification region can excite many atoms of the (typically) noble gas, UV photons can be produced. Their mean free path is comparatively long relative to the size of the amplification region. They can start further avalanches, potentially away from the location of the initial avalanche, lowering the spatial resolution and increasing the apparent primary electron count. Molecular gases are added -- often only in small fractions -- to the gas mixture to provide rotational and vibrational modes which dissipate this energy without emitting UV photons. In this context the molecular additions are called 'quencher gases'.

[fn:usefulness] This will be a useful reference later when discussing possible temperature effects of the detector operated at CAST.

**** TODOs for this section :noexport:
- [X] *REVISIT THE BELOW TWO PARAGRAPHS AFTER INSERTING MORE DETAILS ABOVE!*
- [ ] *FIX TYPING OF POLYA DISTRIBUTION* -> ?
- [X] If it fits here, Polya distribution to describe avalanche effect.
- [X] What gas properties affect the gas gain? Temperature, density etc.
- [X] Gas gain.
- [X] *MENTION UV PHOTONS AND HENCE MOLECULAR GASES CALLED QUENCHER GAS*
- [ ] *ADD PLOT OF FUNCTION?* -> No, would be too much. Will be seen shortly anyway for real detector.

*** TODO Mobility in a gas and mean free path in amplification region :noexport:
- [X] Delete this subsection? Yes
- [X] This is now covered in the gas gain chapter!
- [ ] Not sure if I want to include this. For now I guess I don't. Will depend on when I reread the detector behavior part.

*NOTE*: The first Townsend coefficient is α = 1/λ, where λ is the mean free path!
-> In the end we're not talking about mobility, because of the better understanding of Townsend coefficient etc.!

- [ ] This is related to the GridPix variability in the gas gain & peak position. From PDG it reads "mobility" in gas is inversely proportional to density!
- [ ] Find references (Bichsel or other gaseous detector books surely has some!).
- [ ] maybe mini table of some mobilities
- [ ] mobility vs mean free path?
- [ ] expression how that relates to temperature?

*** Energy resolution

Because of the statistical processes associated with the full detection method used in gaseous detectors, even a perfectly delta-like signal in energy will lead to a natural spread of the measured signal. Variations in ionization yield, potential losses and gas gain fluctuations all contribute to such a spread. As such, a typical measure of interest for gaseous detectors is the energy resolution, commonly defined by
\[
ΔE = \frac{σ}{E}
\]
where $σ$ is the standard deviation of the normal distribution associated with the resolved peak in the detector's data, assuming a -- for practical purposes -- delta peak as the input spectrum. Sometimes definitions using the full width at half maximum (FWHM) are used in place of $σ$. Typical values for the energy resolution defined like this are smaller than $\SI{15}{\%}$.
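As a small numerical illustration of this definition (the fit values below are hypothetical, not measured ones):

#+begin_src python
import math

def energy_resolution(sigma, energy):
    # ΔE = σ / E, with σ and E in the same unit.
    return sigma / energy

# Hypothetical gaussian fit to a 5.9 keV photopeak with σ = 0.45 keV:
dE = energy_resolution(0.45, 5.9)
# An FWHM based definition differs by a factor 2·sqrt(2 ln 2) ≈ 2.355
# for a gaussian peak and thus yields larger numbers.
dE_fwhm = 2.0 * math.sqrt(2.0 * math.log(2.0)) * 0.45 / 5.9
print(f"{dE:.1%}, {dE_fwhm:.1%}")
#+end_src
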
If the absolute magnitude of $σ$ at a given energy is constant, which is at least partially reasonable as the width is not fully due to energy dependent effects, the energy resolution is proportional to $1/E$. The Fano factor cite:fano63, defined as the variance over the mean of a distribution, $F = σ² / μ$ (typically within some time window), improves the energy resolution over the naive expectation from Poisson statistics. It arises because the associated statistical processes only have a finite number of possible interaction states, making the fluctuations in the number of electron-ion pairs smaller than Poissonian. For X-rays an additional aspect is that the total energy transferable to the gas is bounded by the X-ray's energy. In practice though, the energy resolution of gaseous detectors is usually limited by other effects.

**** TODOs for this section :noexport:
- [ ] *CHECK IF SENTENCE ABOUT FANO FACTOR CORRECT!*
-> I just checked the Fano paper. It's super long and I'm not sure if I understand what the Fano factor in there really is!

About Fano factor from cite:bronic1992relation:
#+begin_quote
The Fano factor F represents the ratio of the observed variance of the distribution of the number of ion pairs to the variance of the Poisson distribution. At high incident energies the value of the Fano factor is constant, around 0.17 in noble gases, and between 0.2 and 0.4 in molecular gases. The value of F increases toward unity as the initial energy of an incident particle decreases toward the ionization potential of a gas because at low electron energies the non-ionizing collisions become more numerous.
#+end_quote

- [X] *THINK ABOUT REPHRASING THIS / GIVING A VALUE FOR IDEAL RESOLUTION?* (See Alkhazov paper maybe?)
-> Rephrased the part, but did not consider Alkhazov
- [X] *REWRITE TO USE σ/μ AS OUR 15 PERCENT MATCHES FOR THAT NOT FOR FWHM. ONLY BEST IN CLASS REACH BELOW 15 IN THE LATTER CASE*
- [X] What is energy resolution, definition. Why important for our detector.
- [X] *WRITE SOMETHING FANO?* See Sauli [[cite:&sauli2014gaseous]] about it.
Section 7.5 on energy resolution and section 3.6 about photo ionization of X-rays. Fano factors are related to this! Theory and observation disagree. Fano factor fixes this by doing a scaling. Related to the fact that theory assumes a perfectly statistical process, but reality has fixed number of possible interactions, hence not really perfect statistics.

**** More notes on Fano factor :extended:
The Fano factor is related to the average energy needed to produce an electron-ion pair in each interaction. [[cite:&kolanoski2020particle]] derives the Fano factor on page 783 in sec. 17.10.2. The relationship to the W-value is studied in [[cite:&bronic1992relation]] and mentioned in [[cite:&sauli2014gaseous]] on page 62.

*** Escape photons and peaks | 55Fe as a typical calibration source
:PROPERTIES:
:CUSTOM_ID: sec:theory:escape_peaks_55fe
:END:

Finally, gaseous detectors need to be calibrated, monitored and tested. This is commonly done with a \cefe source. \cefe is a radioactive isotope of iron, which decays via electron capture to $\ce{^{55}Mn}$. Due to the required restructuring of the electronic shells, the manganese is in an excited state. While the emission of an Auger electron with $\SI{5.19}{keV}$ dominates with a probability of $\SI{67.2}{\%}$ [[cite:&krause1979atomic;&SCHOONJANS2011776;&BRUNETTI20041725]], as an X-ray source the Kα₁ and Kα₂ lines, both at about $\SI{5.9}{keV}$, are of note.

When using such a \cefe source as a calibration source for an argon filled gaseous detector, the $\SI{5.9}{keV}$ photons will produce a photoelectron in argon. Mostly an inner shell electron will be liberated, producing a photoelectron of around $\SI{2.7}{keV}$.
The excited argon atom can now emit an Auger electron in about $\SI{88.2}{\%}$ of cases [[cite:&krause1979atomic;&SCHOONJANS2011776;&BRUNETTI20041725]], which will fully deposit its remaining energy into the surrounding gas via further ionizations, resulting in the 'photopeak' at around $\SI{5.9}{keV}$. If however another photon is produced with an energy below the $K 1s$ energy of argon ($\SI{3.2}{keV}$) -- for example via Kα₁ or Kα₂ fluorescence of argon, both at about $\SI{2.95}{keV}$ -- such a photon has a very long absorption length in the gas volume, about $l_{\text{abs}} = \SI{3.5}{cm}$ (cf. fig. [[fig:theory:transmission_examples]]). This can cause it to easily escape the active detection region, especially if the sensitive region of the detector is comparatively small. The result is a measured signal of $E_i - E_k = \SI{5.9}{keV} - \SI{2.95}{keV} \approx \SI{2.9}{keV}$, called the 'escape peak'. cite:sauli2014gaseous,kolanoski2020particle,hubbell1996nist

Such an additional escape peak is useful as a calibration tool, as it gives two distinct peaks in a \cefe calibration spectrum, which can be utilized for an energy calibration. One important consideration, however, is that this escape peak at about $\SI{3}{keV}$ is not equivalent to a real $\SI{3}{keV}$ X-ray. As the original X-ray is still the $\SI{5.9}{keV}$ photon with its distinct absorption length, the geometric properties of ensembles of escape peak events and real $\SI{3}{keV}$ photons are different. This will be important later in sec. [[#sec:background:mlp:effective_efficiency]].

**** TODOs for this section :noexport:
- [ ] *LUCIAN MENTIONS* in this thesis cite:lucianMsc that the Kα only produces 5.75 keV. This is what we use for the *pixel* spectrum, but not the charge spectrum.
- [X] *ADD CITATIONS*. For Sauli / Wermes & NIST for numbers.
Explain escape photons, escape peaks, how that gives us an escape peak in the 55Fe spectra as well as a line at 3 keV in our background data. Explain Fe ↦ Mn excited ↦ Mn + γ and what spectrum looks like

*INSERT EXAMPLE FIGURE OF SPECTRUM*

*** Note on origin of fluorescence yields / fraction of Auger electrons :extended:
The fluorescence yield is the fraction with which an excited atom decays via radiative emission of an X-ray instead of emission of an Auger electron. So the relevant terms are either 'fluorescence yield' or 'Auger yield', depending on the context.

The main resource still often compared to is this paper by Krause [[cite:&krause1979atomic]]. The paper contains a very useful table of the fluorescence yields of the different elements.

There is a C library for such calculations, [[cite:&SCHOONJANS2011776;&BRUNETTI20041725]]. They have produced a PDF with similar tables for different elements, which can be found here: http://ftp.esrf.fr/pub/scisoft/xraylib/xraylib_tables_v2.3.pdf

One might encounter the term 'jump factor' in this context: It refers to the ratio by which the absorption changes from directly below to directly above the absorption edge.

So for Argon see page 31 in this PDF (or page 9 in Krause's paper) to find $ω_K = \num{0.118}$, where
\[
ω_K + a_K = 1
\]
i.e. the fluorescence yield $ω_K$ and the Auger yield $a_K$ must sum to one.

*NOTE*: Technically $ω_K$, $a_K$ is only for the K shell, so just an approximation. You will find $ω_1, ω_2$, ... etc for L shells.

For Manganese (because Manganese is the excited atom after \cefe decay) you'll find it on page 47 (or page 9 in Krause's paper). They sum to $ω_{K+L} = 0.345$.

* Septemboard detector :Detector:
:PROPERTIES:
:CUSTOM_ID: sec:septemboard
:END:

#+LATEX: \minitoc

With the theoretical aspects of gaseous detectors out of the way, in sec. [[#sec:detector:micromegas]] we will introduce the 'Micromegas' type of gaseous detector. Micromegas can be read out in different ways. One option is the Timepix ASIC, sec.
[[#sec:detector:timepix]], by way of the 'GridPix', sec. [[#sec:detector:gridpix]]. The GridPix detector in use in the 2014 / 2015 CAST data taking campaign, to be presented in section [[#sec:detector:detector_2014_15]], had a few significant drawbacks for more sensitive searches, in particular searches at low energies $\lesssim\SI{2}{\keV}$ and searches requiring low backgrounds over larger areas on the chip (for example the chameleon search done in cite:krieger2018search). For this reason a new detector was built in an attempt to improve on each of these shortcomings.

We introduce the 'Septemboard' detector with a basic overview in section [[#sec:detector:detector_overview]]. From there we look at each of the new detector features and the motivation for its addition. All detector upgrades were done to alleviate one or more of the old detector's drawbacks; for each new feature we will highlight the aspects it is intended to improve on. Section [[#sec:detector:scintillators]] introduces two new scintillators as vetoes. These require the addition of an external shutter for the Timepix, which is realized using a flash ADC (FADC), see section [[#sec:detector:fadc]]. Further, an independent but extremely important addition is the replacement of the Mylar window by a silicon nitride window, section [[#sec:detector:sin_window]]. Another aspect is the addition of 6 GridPixes around the central GridPix, the 'Septemboard' introduced in section [[#sec:detector:septemboard]]. Due to the additional heat produced by 6 more GridPixes, a water cooling system is used, sec. [[#sec:detector:water_cooling]]. Lastly, in sec. [[#sec:septem:efficiency]] we will also discuss the combined detection efficiency of this detector.

** TODOs for this section :noexport:
- [ ] *REWRITE PARAGRAPHS*
- [ ] *HAVE TO MENTION FPGA AND TOF BRIEFLY FOR SCINTILLATORS AND FADC!*

Why do we build such a 'complicated' detector?
Background increases to edges, esp. corners.
Background rate has known peaks. 3 keV for the Argon escape peak. Can't do anything about that in current iteration. Peak at 8-9 keV. A mix of a copper peak and orthogonal muons, which are expected to emit about 8 keV through 3 cm. More on this later in [[Background rate]]. Window doesn't transmit at < 2 keV

The first section introduces the reasoning behind building a more complicated detector. Existing detector has multiple downsides, seen in background rate & detector efficiency. In the further sections we discuss each additional detector feature in detail and explain why it was added / what drawback of the previous detector should be improved on. After we have introduced the detector as a whole we talk about the calibrations that are necessary to perform sensible measurements.

*IN EACH SUBSECTION START WITH WHAT IT'S SUPPOSED TO HELP WITH?*

** Micromegas working principle
:PROPERTIES:
:CUSTOM_ID: sec:detector:micromegas
:END:

\textbf{Micro} \textbf{Me}sh \textbf{Ga}seous \textbf{S}tructures (Micromegas) are a kind of \textbf{M}icro\textbf{p}attern \textbf{G}aseous \textbf{D}etectors (MPGDs) first introduced in 1996 cite:GIOMATARIS199629,GIOMATARIS1998239. A modern incarnation of the Micromegas is the Microbulk Micromegas cite:Andriamonje_2010. Interestingly, the name Micromegas is based on the novella Micromégas by Voltaire published in 1752 cite:voltaire1752micromegas, an early example of a science fiction story. cite:GIOMATARIS199629

These detectors are -- as the name implies -- gaseous detectors containing a 'micro mesh'. In the most basic form they are made of a closed detector volume that is filled with a suitable gas, allowing for ionization (often argon based gas mixtures are used; xenon based detectors for axion helioscopes are in the prototype phase). The volume is split into two different sections, a large drift volume typically $\mathcal{O}(\text{few }\si{cm})$ and an amplification region, sized $\mathcal{O}(\SIrange{50}{100}{μm})$.
At the top of the volume is a cathode to apply an electric field. Below the mesh, at the bottom of the volume, is the readout area. In standard Micromegas detectors, strips or pads are used as a readout. The electric field in the drift region is strong enough to avoid recombination of the created electron-ion pairs and to provide reasonably fast drift velocities $\mathcal{O}(\si{cm.μs⁻¹})$. The amplification gap, on the other hand, serves to multiply the primary electrons via an avalanche effect. For this, the electric field reaches values of $\mathcal{O}(\SI{50}{kV.cm⁻¹})$. These drift and amplification volumes are achieved by an electric field between the cathode and the mesh as well as between the mesh and the readout area. Fig. [[micromegas_schematic]] shows a schematic of the general working principle of such detectors.

#+CAPTION: Working principle of a general Micromegas detector. Specific distances and gas mixture
#+CAPTION: are exemplary. An ionizing photon enters through the
#+CAPTION: detector window into the gas-filled detector body. After a certain distance it produces
#+CAPTION: a photoelectron, which ionizes further gas molecules, yielding a number of primary
#+CAPTION: electrons that depends on the incoming photon's energy and on the gas mixture. The primary electrons
#+CAPTION: drift towards the micromesh due to the drift voltage, thereby experiencing diffusion.
#+CAPTION: In the much stronger field of the amplification gap an avalanche of electrons is produced,
#+CAPTION: enough to trigger the readout electronics (strips or pads).
#+NAME: micromegas_schematic
[[~/org/Figs/thesis/detectors/micromegas_schematic.pdf]]

** Timepix ASIC
:PROPERTIES:
:CUSTOM_ID: sec:detector:timepix
:END:

The Timepix ASIC (Application Specific Integrated Circuit) is a $256 × 256$ pixel ASIC with each pixel $\num{55} \times \SI{55}{μm^2}$ in size. It is based on the Medipix ASIC, developed for medical imaging applications by the Medipix Collaboration cite:medipix.
The pixels are distributed over an active area of $\num{1.41}\times\SI{1.41}{\cm^2}$. Each pixel contains a charge sensitive amplifier, a single threshold discriminator and a $\SI{14}{bit}$ pseudo random counting logic. The chip requires an external clock in the range of $\SIrange{10}{150}{MHz}$, with $\SI{40}{MHz}$ being the clock frequency used for the applications in this thesis. cite:LlopartCudie_1056683,LLOPART2007485_timepix,timepix_manual A good overview of the Timepix is also given in cite:lupberger2016pixel. A picture of a Timepix ASIC is shown in fig. sref:fig:detector:timepix_asic.

The Timepix uses a shutter based readout, either with a fixed shutter time or using an external trigger to close a frame. After the shutter is closed, the readout is performed, during which the detector is insensitive. Each pixel can further operate in four different modes:
- hit counting mode / single hit mode :: simply counts the number of times the threshold of a pixel was crossed (or whether a pixel was activated at least once in single hit mode).
- \textbf{T}ime \textbf{o}ver \textbf{T}hreshold (ToT) :: In the ToT mode the counter of a pixel counts the number of clock cycles during which the charge on the pixel exceeds the threshold (set by an $\SI{8}{bit}$ \textbf{D}igital to \textbf{A}nalog \textbf{C}onverter (DAC)) while the shutter is open. The ToT value is thus a measure of the charge collected by each pixel.
- \textbf{T}ime \textbf{o}f \textbf{A}rrival (ToA) :: The ToA mode records the number of clock cycles from the first time the pixel's threshold is exceeded to the end of the shutter window. Thus, it allows one to calculate the time at which the pixel first crossed the threshold.
In the context of this thesis only the ToT mode was used.

*** TODOs for this section :noexport:

- [ ] *CITE LUPBERGER AS REFERENCE FOR THOROUGH TIMEPIX OVERVIEW*
- [ ] *TIMEPIX MANUAL [[file:~/org/Papers/detectors/Timepix_Manual_v1.0.pdf]]* states the size too!!!
cite:timepix_manual #+begin_quote The chip dimensions are 16120x14111 μm2; it has a matrix formed by 256x256 pixels of 55x55 μm2 with an active area of 1.982 cm2 #+end_quote - [ ] *THINK ABOUT* setting timepix asic image side by side with either micromegas schematic or InGrid. Better InGrid, because then the timepix is visible in it! Check Lucian's master thesis for information about Timepix & gaseous detector physics. :) *TALK HERE ABOUT DIFFERENT TOS CALIBRATIONS?* *ADD ALL USED DETECTOR CALIBRATIONS IN FULL TO APPENDIX. INCLUDE THE PLOTS FOR TOT. TABLE OF TOT FIT PARAMETERS ETC* - [X] *INSERT TIMEPIX FIGURE?* - [X] *CITE TIMEPIX MANUAL FOR SIZES ETC* *DOES TOA IN TPX1 START FROM SHUTTER OPEN OR START FROM PIXEL HIT?* - [X] *LUPBERGER THESIS EXPLAINS IT*, page 30: #+begin_quote Time of arrival (TOA) mode: In the Medipix chip family, this mode is unique for the Timepix chip. It was particularly requested by the EUDET collaboration to measure the arrival time of charge in order to reconstruct the drift time of charge in a TPC. The FCLOCK cycles are counted from the first rising edge of the discriminator signal until the end of the shutter window. This way, the arrival time can be calculated, when the timing of the closing shutter is know. If there are several rising edges of the discriminator signal within one shutter window, only the first one will have an effect. #+end_quote - [ ] *PIXELS COUNT TO MAX 11810 AS WELL OF COURSE. THAT'S THE TIME LIMIT. AT 40 MHZ THIS IS ABOUT 295 μs TIME* *** Timepix3 :PROPERTIES: :CUSTOM_ID: sec:detector:timepix3 :END: The Timepix3 is the successor of the Timepix. cite:Poikela_2014_timepix3,timepix3_manual It is generally similar to the Timepix, but would provide 3 important advantages if used in a gaseous detector for the applications in this thesis: - clock rates of up to \SI{300}{MHz} for higher possible data rates (less interesting for data taking in an axion helioscope) - a stream based data readout. 
This means no fixed shutter times and no dead time during readout. Instead, data is sent out in parallel as it is recorded.
- each pixel can record ToT /and/ ToA information at the same time. This makes it possible to record the charge collected by a pixel as well as the time it was activated, yielding 3D event reconstruction with precise charge information.
An open source readout was developed by the University of Bonn and is available at cite:tpx3-daq [fn:detector_tpx3_daq]. A gaseous detector based on this is currently in the prototyping phase, see the upcoming cite:schiffer_phd.

[fn:detector_tpx3_daq] [[https://github.com/SiLab-Bonn/tpx3-daq]]

**** TODOs for this section :noexport:

Introduce as something for which readout etc. is currently in development. Mention improvements so that we can refer back to them in our conclusion that having time information would be great. Further out, the Timepix4 is also already finalized. No work on a readout for these applications has been started yet.

** GridPix
:PROPERTIES:
:CUSTOM_ID: sec:detector:gridpix
:END:

First experiments combining a Micromegas with a Timepix readout were performed in 2005 cite:campbell2005detection, using classical approaches to place a micromesh on top of the Timepix; at the time the combination was still called /TimePixGrid/. While this worked in principle, it showed a Moiré pattern due to slight misalignments between the holes of the micromesh and the pixels of the Timepix. Shortly after, an approach based on photolithographic post-processing was developed to align each Timepix pixel perfectly with a hole of the micromesh cite:CHEFDEVILLE2006490, called the /InGrid/ (integrated grid). The commonly used name for a gaseous detector using an InGrid is GridPix. For an overview of the modern InGrid production process, see cite:lucianMsc. The InGrid consists of a $\SI{1}{μm}$ thick aluminum grid, resting on small pillars $\SI{50}{μm}$ above the Timepix.
A silicon-nitride $\ce{Si_x N_y}$ [fn:sin_notation] layer protects the Timepix from direct exposure to the amplification processes. The main advantage of the GridPix over previous Micromegas technologies is its ability to detect single electrons. As long as the diffusion distance is long enough to avoid multiple electrons entering a single hole of the InGrid, each primary electron produced during the initial ionization event is recorded. Fig. sref:fig:detector:ingrid_explanation shows an image of such an InGrid.

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.44)
  (caption "Timepix ASIC")
  (label "fig:detector:timepix_asic")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/phd/Figs/timepix_gold.png"))
 (subfigure (linewidth 0.56)
  (caption "InGrid")
  (label "fig:detector:ingrid_explanation")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/phd/Figs/ingrid_principle.pdf"))
 (caption (subref "fig:detector:timepix_asic")
  " Picture of a bare Timepix ASIC. "
  (subref "fig:detector:ingrid_explanation")
  " Image of an InGrid, which was partially cut for inspection under an electron microscope. The pillars support the micromesh and have a height of "
  ($ (SI 50 "μm"))
  ". Each hole is perfectly aligned with a pixel of the Timepix below. Typical voltages applied between the grid and the Timepix are shown.")
 (label "fig:detector:timepix_and_ingrid"))
#+end_src

[fn:sin_notation] The $x$, $y$ notation is sometimes encountered when the exact material composition is not known. Silicon-nitride is a term used for a range of silicon to nitrogen ratios, most commonly $\ce{Si_3 N_4}$.

*** TODOs for this section :noexport:

- [ ] Talk about potential caveats? E.g. things like charge up effects etc. discussed in other theses.
- [X] Production nowadays in Berlin IZM. Show sketch of production process? Imo should be enough to refer to Lucian's thesis for production process. Is there a paper about IZM process? Ask Yevgen & Lucian.
-> Just referencing to Lucian MSc

\SI{50}{\micro\meter} pillars (amplification gap). Typical gas gains of 2000-5000. Polya plot. Important: single electron detection efficiency.

** 2014 / 2015 GridPix detector for CAST
:PROPERTIES:
:CUSTOM_ID: sec:detector:detector_2014_15
:END:

In the course of cite:krieger2018search a first GridPix based detector for use at an axion helioscope, CAST, was developed. While the main result was on the coupling constant of the chameleon particle, an axion-electron coupling result was computed in cite:SchmidtMaster. The detector consists of a single GridPix in a $\SI{78}{mm}$ diameter gas volume with a drift distance of $\SI{3}{cm}$. The detector has a $\SI{2}{μm}$ thick Mylar ($\ce{C10 H8 O4}$) entrance window for X-rays. This detector serves as the foundation on which the detector used in the course of this thesis was built. See fig. [[fig:detector:exploded_schematic]] for an exploded schematic of the detector. Further, fig. [[fig:detector:background_rate_2014]] shows the background rate achieved by this detector in the center $\num{5} \times \SI{5}{mm^2}$ region of the chip. The background rate shows the copper Kα line near $\SI{8}{keV}$, possibly overlaid with a muon contribution, as well as the expected argon Kα lines at $\SI{3}{keV}$. Below $\SI{2}{keV}$ the background rate rises with decreasing energy, likely because background- and signal-like events become less geometrically distinct at low energies (fewer pixels). The average background rate in the range from $\SIrange{0}{8}{keV}$ is $\sim\SI{2.9e-05}{keV^{-1}.cm^{-2}.s^{-1}}$.

#+CAPTION: Exploded view of the GridPix detector used during the 2014/15 data taking campaign
#+CAPTION: at CAST. It consists of a \SI{3}{cm} drift volume with a \SI{78}{mm} inner diameter
#+CAPTION: and a single GridPix at the center.
#+NAME: fig:detector:exploded_schematic [[~/org/Figs/ingrid_detector_exploded_krieger_thesis.png]] #+CAPTION: Background rate in the center $\num{5} \times \SI{5}{mm^2}$ for the GridPix used in #+CAPTION: 2014/15 at CAST. It corresponds to a background rate of $\sim\SI{2.9e-05}{keV^{-1}.cm^{-2}.s^{-1}}$ #+CAPTION: in the range from $\SIrange{0}{8}{keV}$. #+NAME: fig:detector:background_rate_2014 [[~/phd/Figs/background_rate_2014_gold.pdf]] **** TODOs for this section :noexport: - [X] *WHICH COPPER LINE? AND WHICH ARGON?* - [X] *ADD AVERAGE BACKGROUND RATE IN TEXT*. As an example and a reference shown here. The foundation of what is done in this thesis. Not sure if this section is the right place. But: Could add background rate achieved by that detector here? *YES* - background rate - background over chip (latter comes later??) *** Create background rate plot for 2014 data :extended: We simply generate the code with our background rate plotting script, as the 2014/15 dataset background rate is stored in our resources of the TPA repository. It also outputs the integrated background rates. #+begin_src sh :dir ~/CastData/ExternCode/TimepixAnalysis/Plotting/plotBackgroundRate :results drawer plotBackgroundRate --show2014 --energyMax 10.0 \ --title "Background rate in center 5·5 mm² for GridPix 2014/15 CAST data" \ --useTeX \ --outpath ~/phd/Figs/ \ --outfile background_rate_2014_gold.pdf #+end_src #+RESULTS: :results: [INFO]:Dataset: 2014/15 [INFO]: Integrated background rate in range: 0.0 .. 12.0: 3.0372e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 12.0: 2.5310e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: 2014/15 [INFO]: Integrated background rate in range: 0.5 .. 2.5: 8.4056e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 4.2028e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: 2014/15 [INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.4269e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 
5.0: 3.1708e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: 2014/15 [INFO]: Integrated background rate in range: 0.0 .. 2.5: 1.2016e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 2.5: 4.8065e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: 2014/15 [INFO]: Integrated background rate in range: 4.0 .. 8.0: 5.7818e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 1.4454e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: 2014/15 [INFO]: Integrated background rate in range: 0.0 .. 8.0: 2.3034e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 2.8793e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: 2014/15 [INFO]: Integrated background rate in range: 2.0 .. 8.0: 1.1610e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.9350e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:DataFrame with 5 columns and 50 rows: Idx Rate yMax yMin Energy Dataset dtype: float float float float constant 0 11.8422 13.12774 10.67764 0.2 2014/15 1 7.5465 8.59779 6.617637 0.4 2014/15 2 9.1719 10.3181 8.14745 0.6 2014/15 3 7.7787 8.84413 6.835589 0.8 2014/15 4 8.3592 9.45909 7.381376 1 2014/15 5 3.2508 3.984673 2.643112 1.2 2014/15 6 4.644 5.495964 3.916433 1.4 2014/15 7 4.7601 5.621061 4.023424 1.6 2014/15 8 4.4118 5.245432 3.702803 1.8 2014/15 9 2.5542 3.219597 2.016388 2 2014/15 10 1.0449 1.519657 0.704645 2.2 2014/15 11 1.2771 1.787224 0.899653 2.4 2014/15 12 2.7864 3.475531 2.224329 2.6 2014/15 13 6.1533 7.114793 5.315002 2.8 2014/15 14 4.7601 5.621061 4.023424 3 2014/15 15 5.1084 5.995723 4.345048 3.2 2014/15 16 3.7152 4.490788 3.065089 3.4 2014/15 17 1.2771 1.787224 0.899653 3.6 2014/15 18 1.161 1.653855 0.801667 3.8 2014/15 19 1.161 1.653855 0.801667 4 2014/15 20 0.1161 0.381798 0.0202424 4.2 2014/15 21 1.5093 2.051874 1.098013 4.4 2014/15 22 0.3483 0.685427 0.159398 4.6 2014/15 23 1.161 1.653855 0.801667 4.8 2014/15 24 1.5093 2.051874 1.098013 5 2014/15 25 0.9288 1.384499 0.60876 5.2 2014/15 26 1.161 1.653855 0.801667 5.4 2014/15 27 2.0898 2.704354 1.604131 5.6 2014/15 28 
1.161 1.653855 0.801667 5.8 2014/15 29 1.0449 1.519657 0.704645 6 2014/15 30 1.6254 2.183307 1.198198 6.2 2014/15 31 1.2771 1.787224 0.899653 6.4 2014/15 32 0.8127 1.24821 0.514242 6.6 2014/15 33 0.9288 1.384499 0.60876 6.8 2014/15 34 0.6966 1.110561 0.421413 7 2014/15 35 0.8127 1.24821 0.514242 7.2 2014/15 36 2.4381 3.091237 1.912838 7.4 2014/15 37 2.6703 3.347689 2.120225 7.6 2014/15 38 3.2508 3.984673 2.643112 7.8 2014/15 39 5.5728 6.493931 4.775268 8 2014/15 40 6.2694 7.238737 5.423184 8.2 2014/15 41 6.3855 7.362609 5.53144 8.4 2014/15 42 6.3855 7.362609 5.53144 8.6 2014/15 43 5.4567 6.369513 4.667574 8.8 2014/15 44 3.483 4.238071 2.853743 9 2014/15 45 2.5542 3.219597 2.016388 9.2 2014/15 46 1.5093 2.051874 1.098013 9.4 2014/15 47 1.0449 1.519657 0.704645 9.6 2014/15 48 0.8127 1.24821 0.514242 9.8 2014/15 49 0 0 0 10 2014/15 [INFO]:INFO: storing plot in /home/basti/phd/Figs/background_rate_2014_gold.pdf [INFO] TeXDaemon ready for input. shellCmd: command -v xelatex shellCmd: xelatex -output-directory /home/basti/phd/Figs /home/basti/phd/Figs/background_rate_2014_gold.tex Generated: /home/basti/phd/Figs/background_rate_2014_gold.pdf :end: ** Septemboard detector overview :PROPERTIES: :CUSTOM_ID: sec:detector:detector_overview :END: Generally, the detector follows the same design as the old detector shown in sec. [[#sec:detector:detector_2014_15]], mainly so that mounting it inside of the lead shielding and to the vacuum pipes at CAST is possible without significant changes. An exploded view of the full detector can be seen in fig. [[fig:detector:full_septemboard_exploded]]. At the center of the new detector is the 'septemboard', 7 GridPixes replace the single GridPix on the carrier board, sec. [[#sec:detector:septemboard]]. Analogue signals induced by the amplified charges on the center GridPix are now read out using a flash ADC (FADC), sec. [[#sec:detector:fadc]]. 
The housing with an inner diameter of $\SI{78}{mm}$ is again made of acrylic glass, same as in the old detector. The detector entrance window is replaced by a $\SI{300}{nm}$ \ccsini window (sec. [[#sec:detector:sin_window]]), which also acts as part of the detector cathode. The copper anode slots in right above the septemboard. The carrier board sits on the intermediate board. Below the intermediate board is a bespoke water cooling system made of oxygen-free copper to dissipate the heat emitted by the additional 6 GridPixes, sec. [[#sec:detector:water_cooling]]. On the underside of the intermediate board is a new, small silicon photomultiplier (SiPM), sec. [[#sec:detector:scintillators]]. Finally, a large veto scintillator is installed above the detector setup at CAST, also sec. [[#sec:detector:scintillators]].

During development multiple septemboards were built and tested. The septemboard used in the final detector is septemboard 'H'. The GridPixes of the final board are listed in tab. [[tab:detector:septem_h_chips]].

#+CAPTION: Overview of the different chips on septemboard H. The first part of the name corresponds
#+CAPTION: to the position on the wafer and =W69= is Timepix wafer number \num{69}.
#+NAME: tab:detector:septem_h_chips
#+ATTR_LATEX: :booktabs t
|---------+--------|
| Chip    | Number |
|---------+--------|
| E 6 W69 | 0      |
| K 6 W69 | 1      |
| H 9 W69 | 2      |
| H10 W69 | 3      |
| G10 W69 | 4      |
| D 9 W69 | 5      |
| L 8 W69 | 6      |
|---------+--------|

#+CAPTION: Exploded view of the main GridPix septemboard detector. The FADC and
#+CAPTION: large veto scintillator paddle are not shown for obvious reasons. At the
#+CAPTION: center of the detector is the 'septemboard', 7 GridPixes on a carrier board.
#+CAPTION: The housing is made of acrylic glass, same as in the old detector. The
#+CAPTION: top shows the \SI{300}{nm} \ce{Si_3 N_4} window. Below the intermediate board
#+CAPTION: is the water cooling made of pure copper. At the bottom, the SiPM veto
#+CAPTION: scintillator can be seen.
#+NAME: fig:detector:full_septemboard_exploded
#+ATTR_LATEX: :height 0.5\textheight
[[~/phd/Figs/detector/detector-mk4c-assembly-exploded-whitebg-no-cables-cropped.jpg]]

*** TODOs for this section [/] :noexport:

- [ ] *TALK ABOUT SIZES OF FULL DETECTOR*
- [ ] *SEPTEMBOARD CAN BE NAMED HERE INSTEAD OF ABOVE POSSIBLY*
- [ ] *REARRANGE THE TABLE TO ALIGN LEFT COLUMN*

*AS LAST SECTION?*

- [ ] *REPLACE IMAGE BY A PROPER RAYTRACED RENDER*

** Detector readout system

The detector is operated by a Xilinx Virtex-6 \textbf{F}ield \textbf{P}rogrammable \textbf{G}ate \textbf{A}rray (FPGA) in the form of a Virtex-6 ML605 evaluation board. [fn:ml605_weblink] It is connected to the intermediate board via two \textbf{H}igh-\textbf{D}efinition \textbf{M}ultimedia \textbf{I}nterface (HDMI) cables. The Virtex-6 contains the firmware controlling the Timepix ASICs and correlating the scintillator and FADC signals (see appendix sec. [[#sec:daq:tof]]), the Timepix Operating Firmware (TOF). The high voltage (HV) supplies both for the septemboard and for the scintillators sit inside a VME crate, which also houses the FADC. A USB connection is used to read out and control the FADC and HV supply via the computer running the data acquisition and control software (see sec. [[#sec:daq:tos]]), the Timepix Operating Software (TOS). A schematic of this setup is shown in fig. [[fig:detector:flowchart_setup]], which leaves out the SiPM and temperature readout.
#+CAPTION: Flowchart of the whole detector and readout system.
#+NAME: fig:detector:flowchart_setup
[[file:~/org/Doc/Detector/figs/2016_detector_setup_schematic.pdf]]

[fn:ml605_weblink] https://www.xilinx.com/products/boards-and-kits/ek-v6-ml605-g.html (visited 2022/10/17)

*** TODOs for this section :noexport:

Note however, that in this plot the SiPM is not illustrated, as it is connected to the bottom of the intermediate board and only provides an offline flag to be used in the analysis.

- [ ] *CLARIFY SIPM. PROBABLY ADD TO PLOT*

** Scintillator vetoes
:PROPERTIES:
:CUSTOM_ID: sec:detector:scintillators
:END:

The first general improvement is the addition of two scintillators for veto purposes. While both have slightly different goals, each is there to help with the removal of muon signals or muon induced events (for example X-ray fluorescence) in the detector. Given that cosmic muons (ref. section [[#sec:theory:cosmic_radiation]]) dominate the background by flux, statistically there is a high chance of muons creating X-ray like signatures in the detector. By tagging muons before they interact near the detector, these can be correlated with events seen on the GridPix and thus possibly be vetoed, if precise time information is available.

The first scintillator is a large 'paddle' installed above the detector and beamline, aiming to tag a large fraction of cosmic muons traversing the area around the detector. It has a Canberra 2007 base and the photomultiplier tube (PMT) is a Bicron Corp. 31.49x15.74M2BC408/2-X (first two numbers: dimensions in inches). The full outer dimensions of the scintillator paddle are closer to $\SI{42}{cm} \times \SI{82}{cm}$. It is the same scintillator paddle used during the Micromegas data taking behind the LLNL telescope before and after the data taking campaign with the detector described in this thesis. For this scintillator, muons which traverse both it and the gaseous detector volume are not the main use case.
They can be easily identified by the geometric properties of the induced tracks (their zenith angles are relatively small, resulting in track like signatures, as the GridPix readout is orthogonal to the zenith angle). There is, however, a small chance that a muon ionizes an atom of the detector material, which may emit an X-ray upon recombination. One particular source of background can be attributed to the presence of copper, whose Kα lines are at $\sim\SI{8.04}{\keV}$, as well as fluorescence of the argon gas with its Kα lines at $\sim\SI{2.95}{keV}$ (see table [[tab:theory:xray_fluorescence]] in sec. [[#sec:theory:xray_fluorescence]]).

Fig. sref:fig:detector:fadc_veto_paddle_expl shows a schematic side view of the detector chamber with the scintillator paddle on top. When a muon traverses the scintillator, a counter $t_{\text{Veto}}$ starts on the FPGA. Two different cases are shown: in the extremes, a muon may traverse close to the cathode or close to the anode / readout plane. This changes the total drift time and therefore the time difference between the trigger time $t_{\text{Veto}}$ and the readout time; at readout this difference is precisely the value of the counter on the FPGA. The drift velocity of $\sim\SI{2}{cm.μs⁻¹}$ and the height of the detector chamber ($\SI{3}{cm}$) therefore allow setting an upper limit on the maximum time between a veto paddle trigger and the GridPix readout of about $\SI{1.5}{μs}$. At a clock speed of $\SI{40}{MHz}$ this corresponds to $\SI{60}{clock\;cycles}$, a number we will later try to see in the data (see sec. [[#sec:background:scinti_veto]]). As the location at which a muon traverses the detector is random and homogeneously distributed throughout the detection volume, we expect to see a flat distribution up to the maximum possible time, followed by a sharp drop (equivalent to muons at the cathode).
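The veto paddle timing window can be reproduced with a few lines of code. This is a standalone sketch using only the numbers quoted in the text (drift velocity, chamber height, clock frequency); the variable names are my own and it is not part of the detector analysis software:

```python
# Upper limit on the time between a veto paddle trigger and the GridPix
# readout, from the numbers quoted in the text. Illustrative sketch only.

DRIFT_VELOCITY_CM_PER_US = 2.0  # ~2 cm/µs drift velocity in the gas
CHAMBER_HEIGHT_CM = 3.0         # height of the detector chamber
CLOCK_MHZ = 40.0                # Timepix clock frequency

# A muon passing right at the cathode yields the longest drift time.
max_drift_time_us = CHAMBER_HEIGHT_CM / DRIFT_VELOCITY_CM_PER_US
max_clock_cycles = max_drift_time_us * CLOCK_MHZ

print(max_drift_time_us)  # 1.5 (µs)
print(max_clock_cycles)   # 60.0 (clock cycles)
```

A muon at the anode contributes essentially zero drift time, so the expected distribution of FPGA counter values is flat between 0 and this maximum of 60 clock cycles.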
The second scintillator is a small silicon photomultiplier (SiPM) mounted on the underside of the PCB carrying the septemboard. This scintillator was calibrated and set up as part of cite:JannesBSc. We are interested in tagging precisely those muons that enter the detector orthogonally to the readout plane. This implies zenith angles of almost $\SI{90}{°}$, such that the elongation in the transverse direction of the muon track is small enough to result in a small eccentricity. From the Bethe equation, the mean energy loss of muons in the gas mixture used is about $\SI{8}{\keV}$ along the $\SI{3}{\cm}$ of drift volume in the detector (see fig. [[fig:theory:muon_argon_3cm_bethe_loss]] for the energy loss). This coincides with the copper Kα lines and should lead to another source of background in this energy range, although the muon background will have a much wider energy distribution than the copper lines, whose width is dominated by the energy resolution of the detector.

In a similar manner to the veto paddle, we can estimate the typical time scales from the scintillator trigger to the GridPix detection. A muon that traverses orthogonally through the detector can be taken to leave an instant ionization track and trigger the SiPM at the same time ($\mathcal{O}(\SI{100}{ps}) \ll \SI{25}{ns}$ for one clock cycle). As such, the relevant time scale is the drift time until enough charge has drifted towards the grid to pass the activation threshold. Ionization by a muon is a statistical process, as indicated in fig. sref:fig:detector:fadc_sipm_expl. Depending on the density of the charge cloud of a muon orthogonal to the readout plane, the time to accumulate enough charge to trigger the FADC differs. With an average energy deposition of a muon in argon gas of $\sim\SI{2.67}{keV.cm^{-1}}$ and a drift velocity again of $\SI{2}{cm.μs⁻¹}$, the accumulation time can be estimated.
For example, assuming an FADC activation threshold of $\SI{1.5}{keV}$, the necessary charge is deposited along $\SI{0.56}{cm}$ of track, which takes about $\SI{280}{ns} \approx \SI{11}{clock.cycles}$ to accumulate. Different tracks will have deposited different amounts of energy. Therefore, we expect a peak at relatively low clock cycles with a tail up to the same $\SI{60}{clock.cycles}$ (in case the full $\SI{3}{cm}$ track needs to be accumulated to activate the FADC).

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "Veto paddle")
  (label "fig:detector:fadc_veto_paddle_expl")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/phd/Figs/detector/fadc_scintillators/FADC_veto_explanation_StixTwo.pdf"))
 (subfigure (linewidth 0.5)
  (caption "SiPM")
  (label "fig:detector:fadc_sipm_expl")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/phd/Figs/detector/fadc_scintillators/FADC_sipm_explanation_StixTwo.pdf"))
 (caption (subref "fig:detector:fadc_veto_paddle_expl")
  "Schematic of the expected signals for different muons from the zenith passing through the scintillator paddle and the detector. "
  ($ "t_{\\text{Veto}}")
  " marks the start of a counter. Where the muon traverses the detector changes the drift time and thus the time difference between the two times. "
  (subref "fig:detector:fadc_sipm_expl")
  "Ionization by a muon is a statistical process. Depending on the density of the charge cloud of a muon orthogonal to the readout plane, the time to accumulate enough charge to trigger the FADC differs.")
 (label "fig:detector:fadc_scintillators_explanation"))
#+end_src

*** TODOs for this section [9/14] :noexport:

- [X] *Larger font*
- [ ] *MOVE EXACT NUMBERS TO ANALYSIS PART OF BACKGROUND W SCINTIS?*
  -> Refers to the numbers of the drift velocity,
  -> Maybe. Decide that once reviewed those sections!
- [ ] *REFERENCE SECTION ON MUON LOSSES*
  -> Partially done. Referencing the bethe bloch & most probable loss plots.
- [X] *ADD NOEXPORT OF CALCULATIONS OF MUON ANGLES POSSIBLE*
- [X] *EXPLAIN FIG* for schematic
- [X] *PROVIDE EXPECTED TIMES FOR EACH OF THESE TWO CASES*
  -> Done.
- [X] *REPHRASE* -> Positive HV is very specific here!
- [X] *REFERENCE JANNES BSC THESIS FOR SIPM*
- [ ] *GIVE NAME OR WHATEVER OF SIPM*
- [X] *GIVE NAME OR WHATEVER OF BIG PADDLE*
- [X] *CONSIDER REPHRASING SENTENCE ABOUT BETHE EQ GIVING US 8 KEV*
- [X] *SCHEMATIC OF MUON IONIZATION*
- [X] *NOTE: MAYBE INSTEAD START WITH SEPTEMBOARD? THEN IN OTHER FEATURES CAN MENTION THAT THINGS ARE ONLY FOR CENTER CHIP EG*
  -> We should maybe start with the exploded view of the Septemboard, because that allows us to introduce the name 'septemboard' and makes explaining things much easier.
- [X] *SHOW THE EXPLANATION PLOTS FOR WHAT HAPPENS WITH MUONS HERE?*

*** Discussion of orthogonal muons on the FADC :extended:

While writing the thesis, at some point I had the following thoughts:
#+begin_quote
- [ ] *QUESTION*: This is *VERY IMPORTANT* for our interpretation. Given the 'slow' drift velocity of about 2cm/μs, this means orthogonal muons of course take about 1.5μs to traverse the detector. The FADC trigger window is 2560 ns = 2.56 μs. So: Does the FADC even *TRIGGER* if the charge accumulates that slowly??? *UHHHHHH* I mean from this we expect to have signals that are extremely long, no? 1.5/2.5 = 0.58 of the whole trigger window! In *RISING EDGE* Or rather a sort of flat thing, as the charge is removed 'quickly' on those time scales. Very confusing thought.... I mean there is a chance the FADC *DID NOT* trigger in those events, which could explain why the FADC + SIPM aren't that effective!
  -> We *could* study this a bit, if we look into the FADC dataset in which we placed the detector towards the zenith. Question: what does the data look like, in which the *FADC DID NOT TRIGGER*? If there is significant contribution of spherical events of ~8 keV then DUHHH.
#+end_quote

To investigate this we can use ~plotData~ from ~TimepixAnalysis~ to make a bunch of plots comparing properties like the energy of clusters with and without FADC, both for the raw data as well as for the results of the ~likelihood~ application (i.e. after all cuts). We will use the entire Run-3 dataset for both cases, because the scintillators were fully working in those runs. First the plots of the raw data without FADC:
#+begin_src sh :dir ~/org/Figs/statusAndProgress/FADC/orthogonalMuonsOnFadc/ :results drawer
plotData \
    --h5file ~/CastData/data/DataRuns2018_Reco.h5 \
    --runType=rtBackground \
    --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \
    --ingrid --fadc \
    --cuts '("../fadcReadout", -0.1, 0.5)' \
    --applyAllCuts \
    --chips 3 \
    --region crGold
#+end_src

#+RESULTS:
:results:
figs/DataRuns2018_Reco_2023-10-03_22-41-58
:end:

where we only plot chip 3 in the center 5·5 mm² and apply the cut to the ~fadcReadout~ flag (such that we avoid any floating point issues). ~fadcReadout == 0~ means no FADC readout and ~1~ means an FADC readout took place. Let's wrap them all up in one PDF:
#+begin_src sh :dir ~/org/Figs/statusAndProgress/FADC/orthogonalMuonsOnFadc/ :results drawer
dir=figs/DataRuns2018_Reco_2023-10-03_22-41-58
pdfunite $dir/*.pdf run3_no_fadc_histograms.pdf
#+end_src

#+RESULTS:
:results:
:end:

[[~/org/Figs/statusAndProgress/FADC/orthogonalMuonsOnFadc/run3_no_fadc_histograms.pdf]]

Now the same _with_ the FADC. Note: for this plot, change the rise time in ~config.toml~ to an upper value of 2500 and 500 bins!
#+begin_src sh :dir ~/org/Figs/statusAndProgress/FADC/orthogonalMuonsOnFadc/ :results drawer plotData \ --h5file ~/CastData/data/DataRuns2018_Reco.h5 \ --runType=rtBackground \ --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \ --ingrid --fadc \ --cuts '("../fadcReadout", 0.5, 1.1)' \ --applyAllCuts \ --chips 3 \ --region crGold \ --quiet #+end_src #+RESULTS: :results: figs/DataRuns2018_Reco_2023-10-03_23-45-37 :end: Let's wrap them all up in one PDF: #+begin_src sh :dir ~/org/Figs/statusAndProgress/FADC/orthogonalMuonsOnFadc/ :results drawer dir=figs/DataRuns2018_Reco_2023-10-03_23-45-37 pdfunite $dir/*.pdf run3_with_fadc_histograms.pdf #+end_src #+RESULTS: :results: :end: [[~/org/Figs/statusAndProgress/FADC/orthogonalMuonsOnFadc/run3_with_fadc_histograms.pdf]] And now for the result of the data with all cuts applied: #+begin_src sh :dir ~/org/Figs/statusAndProgress/FADC/orthogonalMuonsOnFadc/ :results drawer plotData \ --h5file ~/org/resources/lhood_lnL_04_07_23/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99_default_cluster.h5 \ --runType=rtBackground \ --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \ --ingrid --fadc \ --cuts '("../fadcReadout", -0.1, 0.5)' \ --applyAllCuts \ --chips 3 \ --region crGold #+end_src #+RESULTS: :results: figs/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99_default_cluster_2023-10-03_22-46-10 :end: #+begin_src sh :dir ~/org/Figs/statusAndProgress/FADC/orthogonalMuonsOnFadc/ :results drawer dir=figs/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99_default_cluster_2023-10-03_22-46-10 pdfunite $dir/*.pdf run3_lhood_no_fadc_histograms.pdf #+end_src #+RESULTS: :results: :end: [[~/org/Figs/statusAndProgress/FADC/orthogonalMuonsOnFadc/run3_lhood_no_fadc_histograms.pdf]] #+begin_src sh :dir ~/org/Figs/statusAndProgress/FADC/orthogonalMuonsOnFadc/ :results drawer plotData \ --h5file 
~/org/resources/lhood_lnL_04_07_23/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99_default_cluster.h5 \ --runType=rtBackground \ --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \ --ingrid --fadc \ --cuts '("../fadcReadout", 0.5, 1.1)' \ --applyAllCuts \ --chips 3 \ --region crGold #+end_src #+RESULTS: :results: figs/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99_default_cluster_2023-10-03_22-46-44 :end: #+begin_src sh :dir ~/org/Figs/statusAndProgress/FADC/orthogonalMuonsOnFadc/ :results drawer dir=figs/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99_default_cluster_2023-10-03_22-46-44 pdfunite $dir/*.pdf run3_lhood_with_fadc_histograms.pdf #+end_src #+RESULTS: :results: :end: [[~/org/Figs/statusAndProgress/FADC/orthogonalMuonsOnFadc/run3_lhood_with_fadc_histograms.pdf]] To summarize the results: - [[~/org/Figs/statusAndProgress/FADC/orthogonalMuonsOnFadc/run3_no_fadc_histograms.pdf]] -> This file shows that there are barely any events near the $\SI{8}{keV}$ range for events without an FADC trigger. This already seems to indicate that orthogonal muons _should_ be triggering the FADC. - [[~/org/Figs/statusAndProgress/FADC/orthogonalMuonsOnFadc/run3_with_fadc_histograms.pdf]] -> Shows much higher statistics in the same range. Rise time shows a very long tail, but no visible peaks at higher values. - [[~/org/Figs/statusAndProgress/FADC/orthogonalMuonsOnFadc/run3_lhood_no_fadc_histograms.pdf]] -> There are literally *no* events in the $\SI{8}{keV}$ range after the likelihood cuts without an FADC trigger! - [[~/org/Figs/statusAndProgress/FADC/orthogonalMuonsOnFadc/run3_lhood_with_fadc_histograms.pdf]] -> There are plenty of events in the $\SI{8}{keV}$ range after the likelihood cuts *with* the FADC. And all their rise times are between 45 and 70 clock cycles! 
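Before drawing conclusions, these numbers can be put into perspective with a quick back-of-the-envelope calculation. This is a standalone Python sketch (not part of ~TimepixAnalysis~); the drift velocity and FADC window are the values quoted in the discussion above, and one clock cycle corresponds to 1 ns at 1 GHz sampling.

#+begin_src python
# Compare the charge arrival time of an orthogonal muon with the FADC
# window and with the observed rise times of the surviving events.
drift_velocity_cm_per_us = 2.0   # ~drift velocity from the discussion above
detector_height_cm = 3.0         # drift distance an orthogonal muon spans
fadc_window_ns = 2560            # 2560 samples at 1 GHz = 2560 ns
clock_ns = 1.0                   # one clock cycle at 1 GHz sampling

traversal_ns = detector_height_cm / drift_velocity_cm_per_us * 1000
print(f"charge arrival spread: {traversal_ns:.0f} ns")                  # 1500 ns
print(f"fraction of FADC window: {traversal_ns / fadc_window_ns:.2f}")  # 0.59

# The events passing all cuts show rise times of 45 to 70 clock cycles,
# i.e. 45-70 ns: more than a factor of 20 shorter than the ~1.5 us an
# orthogonal muon's charge arrival would span.
rise_lo_ns, rise_hi_ns = 45 * clock_ns, 70 * clock_ns
print(f"observed rise times: {rise_lo_ns:.0f}-{rise_hi_ns:.0f} ns")
#+end_src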
This means either there are no orthogonal muons in this dataset, or for some reason their rise time is also exceptionally short, which would be surprising. At this point we could investigate further and see what the correlation of rise times actually looks like more generally. At least for _all_ data the rise time looks unsuspicious; it is just a long, exponential-like tail.
*** Maximum allowed angle before being vetoed :extended:
The section below explains the reasoning behind why an angle of $\SI{88}{°}$ was chosen when computing the muon flux at CAST under shallow angles in sec. [[#sec:theory:calc_muon_angular_flux]] and why this is the smallest angle at which a muon is likely going to pass the normal likelihood cut and thus requires the SiPM.

The reason ϑ = 88° was chosen is the restriction on the maximum allowed eccentricity for a cluster to still end up as a possible cluster in our 8-10 keV hump. See the =eccentricity= subplot in fig. [[8_10_keV_properties]].

#+CAPTION: See the eccentricity subplot for an upper limit on the allowed eccentricity
#+CAPTION: for events in the 8-10 keV hump. Values should not be above ε = 1.3.
#+NAME: 8_10_keV_properties
[[~/org/Figs/statusAndProgress/muonStudies/lhood_facet_remaining_8_10_keV.pdf]]

From this we can deduce that the eccentricity should be smaller than ε = 1.3. What does this imply for the largest possible angles allowed in our detector? And how does the opening of the "lead pipe window" correspond to this? Let's compute it by modeling a muon track as a cylinder. Reading off the mean =width= from the above figure as w = 5 mm and taking into account the detector height of 3 cm, we can compute the relation between different angles and the corresponding eccentricities. In addition we will compute the largest possible angle a muon (from the front of the detector of course) can enter, under which it does not see the lead shielding.
#+begin_src nim :flags -d:QuietTikZ=true :results drawer :tangle ~/phd/code/max_muon_angle_est.nim import unchained, ggplotnim, strformat, sequtils let w = 5.mm # mean width of a track in 8-10keV hump let h = 3.cm # detector height proc computeLength(α: UnitLess): mm = ## todo: add degrees? ## α: float # Incidence angle var w_prime = w / cos(α) # projected width taking incidence # angle into account let L_prime = tan(α) * h # projected `'length'` of track # from center to center let L_full = L_prime + w_prime # full `'length'` is bottom to top, thus # + w_prime result = L_full.to(mm) proc computeEccentricity(L_full, w: mm, α: UnitLess): UnitLess = let w_prime = w / cos(α) result = L_full / w_prime let αs = linspace(0.0, degToRad(25.0), 1000) let εs = αs.mapIt(it.computeLength.computeEccentricity(w, it).float) let αsDeg = αs.mapIt(it.radToDeg) let df = toDf(αsDeg, εs) # maximum eccentricity for text annotation let max_εs = max(εs) let max_αs = max(αsDeg) # compute the maximum angle under which `no` lead is seen let d_open = 28.cm # assume 28 cm from readout to end of lead shielding let h_open = 5.cm # assume open height is 10 cm, so 5 cm from center let α_limit = arctan(h_open / d_open).radToDeg # data for the limit of 8-10 keV eccentricity let ε_max_hump = 1.3 # 1.2 is more reasonable, but 1.3 is the # absolute upper limit echo df.head(1) echo α_limit echo ε_max_hump echo max_εs echo max_αs ggplot(df, aes("αsDeg", "εs")) + geom_line() + geom_linerange(aes = aes(x = α_limit, yMin = 1.0, yMax = max_εs), color = color(1.0, 0.0, 1.0)) + geom_linerange(aes = aes(y = ε_max_hump, xMin = 0, xMax = max_αs), color = color(0.0, 1.0, 1.0)) + geom_text(aes = aes(x = α_limit, y = max_εs + 0.1, text = "Maximum angle no lead traversed")) + geom_text(aes = aes(x = 17.5, y = ε_max_hump + 0.1, text = r"Largest $ε$ in $\SIrange{8}{10}{keV}$ hump")) + xlab(r"$α$: Incidence angle [°]") + ylab(r"$ε$: Eccentricity") + ylim(1.0, 4.0) + ggtitle(&"Expected eccentricity for tracks of mean 
width {w}") +
    ggsave("~/phd/Figs/muonStudies/exp_eccentricity_given_incidence_angle.pdf",
           useTeX = true, standalone = true, width = 600, height = 360)
#+end_src

#+RESULTS:
:results:
DataFrame with 2 columns and 1 rows:
     Idx               αsDeg                  εs
  dtype:               float               float
       0                   0                   1
10.12467165539782
1.3
3.535709570444218
25.00000000000023
[INFO] TeXDaemon ready for input.
shellCmd: command -v xelatex
shellCmd: xelatex -output-directory /home/basti/phd/Figs/muonStudies /home/basti/phd/Figs/muonStudies/exp_eccentricity_given_incidence_angle.tex
Generated: /home/basti/phd/Figs/muonStudies/exp_eccentricity_given_incidence_angle.pdf
:end:

Resulting in fig. [[exp_eccentricity_given_incidence_angle]].

#+CAPTION: Relationship between the incidence angle of muons of a width of 5 mm and
#+CAPTION: their expected mean eccentricity. Drawn as well are the maximum angle
#+CAPTION: under which no lead is seen (from the front) as well as the largest ε
#+CAPTION: seen in the data.
#+NAME: exp_eccentricity_given_incidence_angle
[[~/phd/Figs/muonStudies/exp_eccentricity_given_incidence_angle.pdf]]

This leads to an upper bound of ~3° from the horizontal. Hence the (somewhat arbitrary) choice of 88° for the ϑ angle above.
** FADC
:PROPERTIES:
:CUSTOM_ID: sec:detector:fadc
:END:

As the Timepix is read out in a shutter based fashion and typical shutter lengths for low rate experiments are long compared to the rate of cosmic muons, the scintillators introduced in the previous section require an external trigger to close the Timepix shutter early if a signal is measured on the Timepix. This is one of the main purposes of the \textbf{f}lash \textbf{a}nalog to \textbf{d}igital \textbf{c}onverter (FADC) that is part of the detector. This is done by decoupling the induced analogue signals from the grid of the center GridPix. The specific FADC used for the detector is a CAEN V1729a.
It runs with an internal $\SI{50}{MHz}$ or $\SI{100}{MHz}$ clock and utilizes virtual frequency multiplication to achieve sampling rates of $\SI{1}{GHz}$ or $\SI{2}{GHz}$, respectively. It has 4 channels, each with a cyclic register of $\num{2560}$ cells. At a sampling rate of $\SI{1}{GHz}$ this means each channel covers the last $\sim\SI{2.5}{\micro\second}$ at any time cite:fadc_manual. The raw signal decoupled from the grid is first fed into an Ortec 142 B pre-amplifier and then into an Ortec 474 shaping amplifier, which integrates, shapes and amplifies the signal. For a detailed introduction to this FADC system, see the thesis of A. Deisting cite:Deisting and cite:SchmidtMaster for further work integrating it into this detector. In addition, see the FADC manual cite:fadc_manual [fn:fadc_manual] for an in-depth explanation of the working principle of this FADC.

The analogue signal of the center grid is decoupled via a small capacitor of $C_{\text{dec}} = \SI{10}{nF}$ in parallel to the high voltage line. For a schematic of the circuit see fig. [[fig:detector:fadc_circuit]]. When a primary electron passes through a hole in the grid and is amplified, the backflowing ions induce a small voltage spike on top of the constant high voltage applied to the grid. The capacitor filters out the constant high voltage and only transmits the time varying induced signals. Such signals -- the envelope of possibly many primary electrons -- are measured by the FADC.

#+CAPTION: Schematic of the setup to decouple signals induced on the grid of the
#+CAPTION: InGrid. The signal is decoupled in the sense that the capacitor essentially
#+CAPTION: acts as a high pass filter, thus removing the constant HV. Only the
#+CAPTION: high frequency components of the induced signals on top of the HV pass
#+CAPTION: into the branch leading to the FADC. In the detector of this thesis,
#+CAPTION: a capacitance of \SI{10}{nF} was used instead. The decoupling is implemented
#+CAPTION: on the intermediate board. Schematic taken from cite:Deisting.
#+NAME: fig:detector:fadc_circuit
[[~/phd/Figs/decouple_fadc.pdf]]

This signal can be used for three distinct things:
1. It may be used as a trigger to close the shutter of the ongoing event. Ideally, we want to measure only a single physical event within one shutter window, but a long shutter time can statistically result in multiple events. The FADC trigger helps to alleviate this. It further acts as a trigger for the scintillators, which in turn means possible muon induced X-ray fluorescence can be vetoed.
2. By nature of the signal production and the drift properties of the primary electrons before they reach the grid, the signal shape can theoretically be used to determine a rough longitudinal shape of the event. The length of the FADC event should be proportional to the extent of the primary electron cloud along the 'vertical' detector axis. This potentially allows one to differentiate between a muon traversing orthogonally through the readout plane and an X-ray, due to their longitudinal shape difference, see sec. [[#sec:background:fadc_veto]].
3. Finally, it provides an independent measure of the collected charge on the center chip. This will prove useful in understanding the detector behavior over time later in sec. [[#sec:calib:causes_variability]].

The working principle of how the FADC and the scintillators can be used together to remove certain types of background, by correlating events in the scintillators, the FADC and the GridPix, is shown in fig. [[fig:detector:scintillator_fadc_shutter_close]].

#+CAPTION: Schematic showing how the FADC and scintillators are used together
#+CAPTION: to tag possible coincidence events and close the shutter early to
#+CAPTION: reduce the likelihood of multi-hit events. If the scintillator
#+CAPTION: triggers while the shutter is open, a clock starts counting up to
#+CAPTION: 4096 clock cycles. On every new trigger this clock is reset. If
#+CAPTION: the FADC triggers, the scintillator clock values are read out and
#+CAPTION: can be used to correlate events in the scintillator with FADC and
#+CAPTION: GridPix information. Further, the FADC trigger is used to close the
#+CAPTION: Timepix shutter $\SI{5}{μs}$ after the trigger.
#+NAME: fig:detector:scintillator_fadc_shutter_close
[[~/phd/Figs/scintillator_fadc_shutter_close.pdf]]

[fn:fadc_manual] A PDF version is available at: https://archive.org/details/manualzilla-id-5646050/

*** TODOs for this section :noexport:
- [ ] *UPDATE SCHEMATIC TO SAY e.g. 1.5 μs*!
- [X] *HAVE PRE AMPLIFIER BEFORE FADC. ORTEC*
- [ ] *MERGE BOTH SCHEMATICS?* -> Hmm, maybe, but the captions of both of them are already pretty long.
- [ ] *SHOW IMAGE OF FADC AND SHAPING AMPLIFIER*
- [ ] *PICTURE OF FADC* -> A picture would just take more space and it's not particularly interesting.
- [ ] *COPY OVER SECTION ABOUT MUONS / TOA INFO FROM IAXO TDR TEXT?* -> ?
*** Update schematic :extended:
Need to type out μs because we don't use ~unicode-math~ by default in LaTeXDSL.
#+begin_src nim
import latexdsl
let body = r"If $Δt \lesssim \mathcal{O}(\SI{2}{\micro\second})$ events are considered correlated, flag them."
compile("/tmp/text_fadc_scintillator.tex", body)
#+end_src
** SiN window
:PROPERTIES:
:CUSTOM_ID: sec:detector:sin_window
:END:

A major limitation of the previous detector was its low combined detection efficiency below $\SI{2}{keV}$, due to its $\SI{2}{μm}$ Mylar window. Therefore, the next improvement for the new detector is an ultra-thin silicon nitride $\ce{Si_3 N_4}$ window of $\SI{300}{nm}$ thickness and $\SI{14}{mm}$ diameter, developed by Norcada [fn:norcada_website].
A strongback support structure, consisting of 4 lines of $\SI{200}{μm}$ thick and $\SI{500}{μm}$ wide $\ce{Si_3 N_4}$, helps the window withstand a pressure difference of up to $\SI{1.5}{bar}$. The outer side is coated with a $\SI{20}{nm}$ thin layer of aluminum, allowing the window to act as part of the detector cathode.

The strongback occludes about $\SI{17}{\%}$ of the full window area. In reality it is slightly more, as the strongback lines become somewhat wider towards the edges. In the centermost region they are straight, and in the center $\num{5} \times \SI{5}{mm²}$ area they occlude $\SI{22.2}{\%}$. Fig. sref:fig:detector:strongback_structure_mc shows the idealized strongback structure without the widening towards the edges of the window. Fig. sref:fig:detector:window_image shows an image of one such window under testing conditions in the laboratory, as it withstands a pressure difference of $\SI{1.5}{bar}$.

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "Window strongback schematic")
  (label "fig:detector:strongback_structure_mc")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/SiN_window_occlusion.pdf"))
 (subfigure (linewidth 0.5)
  (caption "Image")
  (label "fig:detector:window_image")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/300nm_SiN_holds.jpg"))
 (caption (subref "fig:detector:strongback_structure_mc")
          " shows an idealized schematic of the window strongback based on a simple MC simulation. "
          ($ (SI 22.2 "\\percent"))
          " of the area inside the inner "
          ($ "\\num{5} \\times \\SI{5}{mm^2}")
          " area (black square) is occluded. "
          (subref "fig:detector:window_image")
          " shows an image of one such window while testing in the laboratory whether it holds "
          ($ (SI 1.5 "bar"))
          ". "
          "Image courtesy of Christoph Krieger.")
 (label "fig:detector:window_image_and_strongback"))
#+end_src

As the main purpose is the increase of transmission at low energies, fig.
[[fig:detector:window_efficiency_comparison]] shows the transmission of the Mylar window of the old detector and the new $\ce{Si_3 N_4}$ window in the energy range below $\SI{3}{keV}$. The $\ce{Si_3 N_4}$ window shows a significant increase in transmission below $\SI{2}{keV}$, which is very important for the sensitivity of solar axion-electron and chameleon searches, whose solar fluxes both peak near $\SI{1}{keV}$. The window alone significantly increases the signal-to-noise ratio of these physics searches.

#+CAPTION: Comparison of the transmission of a $\SI{2}{μm}$ Mylar window and a
#+CAPTION: $\SI{300}{nm}$ $\ce{Si_3 N_4}$ window. The efficiency gains become
#+CAPTION: more and more pronounced the lower the energy, aside from the
#+CAPTION: absorption edge of carbon at around $\SI{250}{eV}$ and above about
#+CAPTION: $\SI{1.75}{keV}$. In the interesting range around $\SI{1}{keV}$ significant
#+CAPTION: transmission gains are achieved.
#+NAME: fig:detector:window_efficiency_comparison
[[~/phd/Figs/detector/window_transmisson_comparison.pdf]]

[fn:norcada_website] https://www.norcada.com/

*** TODOs for this section :noexport:
- [X] *IMAGE WAS TAKEN BY CHRISTOPH I THINK. CHECK AND MENTION!* -> Yes, it was. Credited.
- [X] *FIX REFERENCE TO INLINE LATEX LABELS!*
- [X] *POSSIBLY UPDATE IMAGE OF WINDOW*
- [X] *UPDATE IMAGE OF TRANSMISSION*
*** Calculation of strongback window structure plot :extended:
#+begin_src nim :tangle code/window_strongback.nim :flags -d:release
## Super dumb MC sampling over the entrance window using Johanna's code from `raytracer2018.nim`
## to check the coverage of the strongback of the 2018 window.
##
## Of course one could just color areas based on the analytical description of where the
## strongbacks are, but this is more interesting and looks fun. The good thing is it also
## allows us to easily compute the fraction of pixels within and outside the strongbacks.
import ggplotnim, random, chroma proc colorMe(y: float): bool = const stripDistWindow = 2.3 #mm stripWidthWindow = 0.5 #mm if abs(y) > stripDistWindow / 2.0 and abs(y) < stripDistWindow / 2.0 + stripWidthWindow or abs(y) > 1.5 * stripDistWindow + stripWidthWindow and abs(y) < 1.5 * stripDistWindow + 2.0 * stripWidthWindow: result = true else: result = false proc sample() = randomize(423) const nmc = 5_000_000 let black = color(0.0, 0.0, 0.0) var dataX = newSeqOfCap[float](nmc) var dataY = newSeqOfCap[float](nmc) var strongback = newSeqOfCap[bool](nmc) for idx in 0 ..< nmc: let x = rand(-7.0 .. 7.0) let y = rand(-7.0 .. 7.0) if x*x + y*y < 7.0 * 7.0: dataX.add x dataY.add y strongback.add colorMe(y) let df = toDf(dataX, dataY, strongback) echo "A fraction of ", df.filter(f{`strongback` == true}).len / df.len, " is occluded by the strongback" let dfGold = df.filter(f{abs(idx(`dataX`, float)) <= 2.25 and abs(idx(`dataY`, float)) <= 2.25}) echo "Gold region: A fraction of ", dfGold.filter(f{`strongback` == true}).len / dfGold.len, " is occluded by the strongback" ggplot(df, aes("dataX", "dataY", fill = "strongback")) + geom_point(size = 1.0) + # draw the gold region as a black rectangle geom_linerange(aes = aes(y = 0, x = 2.25, yMin = -2.25, yMax = 2.25), color = "black") + geom_linerange(aes = aes(y = 0, x = -2.25, yMin = -2.25, yMax = 2.25), color = "black") + geom_linerange(aes = aes(x = 0, y = 2.25, xMin = -2.25, xMax = 2.25), color = "black") + geom_linerange(aes = aes(x = 0, y = -2.25, xMin = -2.25, xMax = 2.25), color = "black") + xlab("x [mm]") + ylab("y [mm]") + ggtitle("Idealized layout, strongback in purple") + themeLatex(fWidth = 0.5, width = 640, height = 480, baseTheme = sideBySide) + ggsave("/home/basti/phd/Figs/SiN_window_occlusion.pdf", useTeX = true, standalone = true, dataAsBitmap = true)#width = 1150, height = 1000) sample() #+end_src #+RESULTS: | A | fraction | of | 0.1621256615410631 | is | occluded | by | the | strongback | | | | Gold | region: | 
A | fraction | of | 0.2229255019812377 | is | occluded | by | the | strongback | | [INFO] | TeXDaemon | ready | for | input. | | | | | | | | shellCmd: | command | -v | lualatex | | | | | | | | | shellCmd: | lualatex | -output-directory | /home/basti/phd/Figs | /home/basti/phd/Figs/SiN_window_occlusion.tex | | | | | | | | Generated: | /home/basti/phd/Figs/SiN_window_occlusion.pdf | | | | | | | | | | *** Calculation of transmission efficiency [0/0] :noexport: Let's calculate the transmission for =Si₃N₄= and Mylar windows using [[https://github.com/SciNim/xrayAttenuation][=xrayTransmission=]]. #+begin_src nim :tangle /home/basti/phd/code/window_transmission_comparison.nim import std / strutils import xrayAttenuation, ggplotnim # generate a compound of silicon and nitrogen with correct number of atoms let Si₃N₄ = compound((Si, 3), (N, 4)) #Si₃N₄.plotTransmission(3.44.g•cm⁻³, 300.nm.to(Meter)) # instantiate Mylar let mylar = compound((C, 10), (H, 8), (O, 4)) # mylar.plotTransmission(1.4.g•cm⁻³, 2.μm.to(Meter), energyMax = 3.0) echo mylar.name() echo Si₃N₄.name() # define energies in which to compute the transmission # (we don't start at 0, as at 0 energy the parameters are not well defined) let energies = linspace(1e-2, 3.0, 1000) proc compTrans[T: AnyCompound](el: T, ρ: g•cm⁻³, length: Meter): DataFrame = result = toDf({ "Energy [keV]" : energies }) .mutate(f{float: "μ" ~ el.attenuationCoefficient(idx("Energy [keV]").keV).float}, f{float: "Trans" ~ transmission(`μ`.cm²•g⁻¹, ρ, length).float}, f{"Compound" <- el.name()}) var df = newDataFrame() # compute transmission for Si₃N₄ (known density and desired length) df.add Si₃N₄.compTrans(3.44.g•cm⁻³, 300.nm.to(Meter)) # and for 2μm of mylar df.add mylar.compTrans(1.4.g•cm⁻³, 2.μm.to(Meter)) # create a plot for the transmissions echo df let dS = r"$\SI{300}{nm}$" #pretty(300.nm, 3, short = true) let dM = r"$\SI{2}{\micro\meter}$" #pretty(2.μm, 1, short = true) let si = r"$\mathrm{Si}₃\mathrm{N}₄$" ggplot(df, aes("Energy 
[keV]", "Trans", color = "Compound")) + geom_line() + xlab(r"Energy [$\si{keV}$]") + ylab("Transmission") + xlim(0.0, 3.0) + ggtitle(r"Transmission of $# $# and $# Mylar ($#)" % [dS, si, dM, mylar.name()]) + themeLatex(fWidth = 0.9, width = 600, height = 360, baseTheme = singlePlot) + ggsave("/home/basti/phd/Figs/detector/window_transmisson_comparison.pdf", width = 600, height = 360, useTex = true, standalone = true) #+end_src #+RESULTS: | C10H8O4 | | | | | | | | Si3N4 | | | | | | | | DataFrame | with | 4 | columns | and | 2000 | rows: | | Idx | Energy | [keV] | μ | Trans | Compound | | | dtype: | float | float | float | string | | | | 0 | 0.01 | 0 | 1 | Si3N4 | | | | 1 | 0.01299 | 0 | 1 | Si3N4 | | | | 2 | 0.01599 | 0 | 1 | Si3N4 | | | | 3 | 0.01898 | 0 | 1 | Si3N4 | | | | 4 | 0.02197 | 0 | 1 | Si3N4 | | | | 5 | 0.02496 | 0 | 1 | Si3N4 | | | | 6 | 0.02796 | 0 | 1 | Si3N4 | | | | 7 | 0.03095 | 175270.0 | 1.3954e-08 | Si3N4 | | | | 8 | 0.03394 | 147940.0 | 2.342e-07 | Si3N4 | | | | 9 | 0.03694 | 127640.0 | 1.9031e-06 | Si3N4 | | | | 10 | 0.03993 | 113220.0 | 8.4214e-06 | Si3N4 | | | | 11 | 0.04292 | 99660.0 | 3.416e-05 | Si3N4 | | | | 12 | 0.04592 | 88510.0 | 0.0001079 | Si3N4 | | | | 13 | 0.04891 | 78580.0 | 0.0003006 | Si3N4 | | | | 14 | 0.0519 | 70240.0 | 0.0007111 | Si3N4 | | | | 15 | 0.05489 | 65060.0 | 0.001214 | Si3N4 | | | | 16 | 0.05789 | 59660.0 | 0.002119 | Si3N4 | | | | 17 | 0.06088 | 55030.0 | 0.003417 | Si3N4 | | | | 18 | 0.06387 | 51000.0 | 0.005178 | Si3N4 | | | | 19 | 0.06687 | 47310.0 | 0.007578 | Si3N4 | | | | | | | | | | | | [INFO] | TeXDaemon | ready | for | input. 
: shellCmd: command -v lualatex
: shellCmd: lualatex -output-directory /home/basti/phd/Figs/detector /home/basti/phd/Figs/detector/window_transmisson_comparison.tex
: Generated: /home/basti/phd/Figs/detector/window_transmisson_comparison.pdf

** Septemboard - 6 GridPixes around a center one
:PROPERTIES:
:CUSTOM_ID: sec:detector:septemboard
:END:

The main motivation for extending the readout area from a single chip to a 7 chip readout is to reduce the background towards the outer sides of the chip, in particular in the corners. Perhaps counterintuitively, it also plays a role for events whose cluster centers are near the center of the readout. This is because diffusion can produce quite large clusters even at low energies. In particular in lower energy events, tracks may have gaps in them large enough to avoid being detected as a single cluster for standard radii in cluster searching algorithms. This is of particular interest as different searches produce an 'image' at different positions and of different sizes on the detector. While the center chip is large enough to fully cover the image for essentially all models, the image may not lie in the regions of lowest background. Hence, improvements over larger areas are needed.

The septemboard is laid out in such a way as to minimize the loss of active area due to bonding requirements and general manufacturing realities. As the Timepix ASIC is a $\SI{16.1}{mm}$ by $\SI{14.1}{mm}$ large chip (the bonding area adding $\SI{2}{mm}$ on one side), the upper two rows are installed such that they are inverted relative to one another. The bonding area is above the upper row and below the center row. The bottom row again has its bonding area below. This way the top two rows are as close together as realistically possible, with a gap on the order of $\SI{2}{mm}$ between the middle and bottom row.
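The row arrangement described above can be sketched numerically. This is an illustrative Python snippet (not board design data), assuming an idealized stack with zero mechanical spacing between dies, the 14.1 mm active height and the 2 mm bonding strip from the text:

#+begin_src python
# Idealized vertical stack of the three septemboard rows (top to bottom).
# The middle row is flipped so that its bonding strip faces the bottom row.
active = 14.1  # mm, active height of one Timepix
bond = 2.0     # mm, bonding strip on one side of the die

stack = [
    ("top bonding", bond),
    ("top active", active),
    ("middle active", active),
    ("middle bonding", bond),
    ("bottom active", active),
    ("bottom bonding", bond),
]
y = 0.0
edges = {}
for name, height in stack:
    edges[name] = (y, y + height)  # (upper edge, lower edge) in mm
    y += height

gap_top_middle = edges["middle active"][0] - edges["top active"][1]
gap_middle_bottom = edges["bottom active"][0] - edges["middle active"][1]
print(f"gap top/middle: {gap_top_middle:.1f} mm")        # 0.0 mm, rows touch
print(f"gap middle/bottom: {gap_middle_bottom:.1f} mm")  # 2.0 mm, set by bonding
#+end_src

In reality the rows cannot touch exactly, but the sketch reproduces the asymmetry: the only sizable gap, on the order of the bonding strip width, sits between the middle and bottom row.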
Any gap is potentially problematic, as it implies a loss of signal in that area, complicating the possible reconstruction methods. The layout can be seen in fig. [[fig:detector:occupancy_sparking_run_241]] in the next section.

All 7 GridPix are connected in a daisy chain. This means that, in particular for data readout, all chips are read out in serial order. The dead time for readouts is therefore approximately 7 times the readout time of a single Timepix. A single Timepix has a readout time of $\sim\SI{25}{ms}$ at a clock frequency of $\SI{40}{MHz}$ (the frequency used for this detector). This leads to an expected readout time of the full septemboard of $\SI{175}{ms}$. [fn:detector_readout_time] Such a long readout time strongly restricts the possible applications for such a detector. Fortunately, for the use case of a very low rate experiment such as CAST, long shutter times are possible, mitigating the effect on the fractional dead time to a large extent.

Fig. [[fig:detector:cluster_centers_likelihood]] shows a heatmap of all cluster centers during roughly $\SI{2000}{h}$ of background data after passing these clusters through a likelihood based cut method aiming to filter out non X-ray like clusters (details of this follow later in sec. [[#sec:background:likelihood_method]]). It is clearly visible that the further a cluster center is towards the chip edges, and especially the corners, the more likely it is to be considered an X-ray like cluster. This has a simple geometric explanation. Consider a perfect track traversing the whole chip. In this case it is very eccentric. Move the same track such that its center is in one of the corners and rotate it by $\SI{45}{°}$, and suddenly the majority of the track is no longer detected on the chip. Instead, something roughly circular remains visible, 'fooling' the likelihood method. For a schematic illustrating this, see fig. [[fig:detector:gridpix_ring_veto_idea]].
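This geometric argument can be made quantitative with a small toy Monte Carlo. The following Python sketch (toy numbers only, unrelated to the actual reconstruction code) generates a straight track with some transverse diffusion, clips it to a 14.1 mm chip and compares the eccentricity of a track through the chip center with the remnant of a track centered on a corner and rotated by 45°:

#+begin_src python
import numpy as np

rng = np.random.default_rng(0)
chip = 14.1  # mm, active chip size

def eccentricity(pts):
    # sqrt of the ratio of the principal-axis variances of the cluster
    evals = np.linalg.eigvalsh(np.cov(pts.T))
    return float(np.sqrt(evals[1] / evals[0]))

def clipped_track(center, angle_deg, n=20000):
    # straight track with transverse Gaussian spread, clipped to the chip
    t = rng.uniform(-10.0, 10.0, n)  # position along the track [mm]
    w = rng.normal(0.0, 0.8, n)      # transverse diffusion [mm]
    a = np.deg2rad(angle_deg)
    d = np.array([np.cos(a), np.sin(a)])  # track direction
    m = np.array([-d[1], d[0]])           # transverse direction
    pts = center + np.outer(t, d) + np.outer(w, m)
    on_chip = np.all((pts >= 0.0) & (pts <= chip), axis=1)
    return pts[on_chip]

full = clipped_track(np.array([chip / 2, chip / 2]), 0.0)  # across the chip
corner = clipped_track(np.array([0.0, 0.0]), 135.0)        # centered on a corner
print(f"track through center: ecc = {eccentricity(full):.2f}")    # clearly track-like
print(f"remnant in corner:    ecc = {eccentricity(corner):.2f}")  # roughly round
#+end_src

The clipped corner remnant ends up with an eccentricity close to 1, i.e. well below typical track values, which is exactly why such events can pass an eccentricity based X-ray cut.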
The septemboard is therefore expected to significantly reduce the background over the whole center chip, with the biggest effect in the regions with the largest amount of background.

#+CAPTION: Cluster centers left after the likelihood cut applied to about $\SI{2000}{h}$ of
#+CAPTION: background data. The background increases dramatically towards the edges and
#+CAPTION: corners.
#+NAME: fig:detector:cluster_centers_likelihood
#+ATTR_LATEX: :width 0.8\textwidth
[[~/phd/Figs/backgroundClusters/background_cluster_centers.pdf]]

#+CAPTION: Illustration of the basic idea behind the GridPix veto ring. If a cluster on the center
#+CAPTION: chip is X-ray like and near the corners, checking the outer chips close to the corner
#+CAPTION: for a track containing the center cluster can overrule the X-ray like classification
#+CAPTION: based on the center chip only.
#+NAME: fig:detector:gridpix_ring_veto_idea
[[~/org/Figs/InGridSeptemExplanation/septem_explanation_lnL_monokai_StixTwo.pdf]]

[fn:detector_readout_time] The ideal readout time for one chip is $t = \SI{917504}{bits} · \SI{25}{ns} = \SI{22.9376}{ms}$ cite:&lupberger2016pixel, but this does not take into account overhead from the FPGA, sending data to the computer and processing in TOS. We will later see that the practical readout time of the final detector is closer to almost $\SI{500}{ms}$ under high rate conditions (e.g. \cefe calibration runs) and $\sim\SI{200}{ms}$ for low rate background conditions.

*** TODOs for this section :noexport:
- [ ] *First paragraph and later paragraph talk about the same thing!!!* *REFERENCE PAPER ABOUT TRACKS IN TPCS. IONIZATION STATISTICAL AND SO ON* *THE LATTER NEEDS MORE WORDING ELSEWHERE / CLUSTERING ALGORITHM EXPL / SEPTEM VETO*
- [X] *REFERENCE* that schematic of how everything is connected is explained in the detector @ CAST? Or explain it here, then refer back? -> Already done further up.
- [X] *USE EXACT MEASURES OF THE TIMEPIX BASED ON TIMEPIX MANUAL* 16.1 times 14.1 seems fine.
- [X] *TODO: NEED CAPTION AND LABEL FOR BACKGROUND CLUSTERS*
- [X] *SIDE BY SIDE INCLUDING A SEPTEM EVENT SHOWING TRACK CUT LEADS TO CIRCLE* -> But that makes it exceptionally small! -> Added an additional illustration based on what we used back in ~2016.
- [X] *MENTION SEPTEMBOARDS ARE NAMED BY LETTERS, WHICH ONE USED IN DETECTOR* -> Not here! Already done in septemboard introduction
- [X] *SHOW SCHEMATIC OF LAYOUT* -> Indirectly in the temperature part!
- [X] *FIND OUT WHERE 25 MS READOUT FOR SINGLE TIMEPIX COMES FROM* -> Lupberger's thesis, as mentioned in the footnote now.
- [X] *READOUT TIME:*
  #+begin_quote
  The main driver for the readout speed is the time to readout the complete matrix (one frame) from the chip and this value is fixed for a given FCLOCK frequency. A frame consists of 917504 bits, which have to be packed to the data stream. As the data is sampled with FCLOCK, the same amount of clock cycles is needed, what defines the readout time
  #+end_quote
  page 83 of Lupberger's thesis. 917504 · 25 ns (one clock cycle at 40 MHz) = 22.9376 ms per frame under ideal conditions.
*** GridPix veto ring illustration :extended:
The event used in the illustration is event 23707 of run 242. The full event display:
[[~/org/Figs/statusAndProgress/exampleEvents/example_track_corner_gridpix_ring_run242_event23707.pdf]]
and the SVG of the illustration:
[[~/org/Figs/InGridSeptemExplanation/septem_explanation_lnL_monokai.svg]]
*** Compute the cluster backgrounds :extended:
To compute these cluster backgrounds, we need the following ingredients:
- the fully reconstructed data files =DataRuns201*_Reco.h5=
- the prepared CDL data from the 2019 dataset =calibration-cdl_2019.h5= and the X-ray reference datasets that define the X-ray like properties
- apply the likelihood method to all background events in one of the files (gives enough statistics) to get a resulting file containing only passed clusters *over the whole chip*.
With the resulting file we can then use
[[file:~/CastData/ExternCode/TimepixAnalysis/Plotting/plotBackgroundClusters/plotBackgroundClusters.nim]]
to plot these cluster centers.

Assuming the reconstructed data files are found in *...* and the CDL data files in *...*, let's generate the data after applying the likelihood method (this takes about 2 minutes, so better run it in a terminal instead of via C-c C-c):
#+begin_src sh
likelihood \
    -f ~/CastData/data/DataRuns2017_Reco.h5 \
    --h5out /tmp/lhood_2017_full_chip.h5 \
    --cdlYear=2018 \
    --region=crAll \
    --cdlFile=/home/basti/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --lnL
#+end_src

- [ ] *WHY DOES THIS* produce background suppression numbers below 1 towards the corners?

Compile the plotting tool if not done yet:
#+begin_src sh :dir ~/CastData/ExternCode/TimepixAnalysis/Plotting/plotBackgroundClusters
nim c -d:danger --threads:on plotBackgroundClusters.nim
#+end_src

Now we can create the plot. Note that we use ~fWidth = 0.8~ so that (as of right now <2023-12-06 Wed 12:11>) the heatmap fits on one page together with the septem veto illustration.
#+begin_src sh
plotBackgroundClusters \
    -f ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL.h5 \
    --title "2000 h background data, lnL cut applied" \
    --outpath ~/phd/Figs/backgroundClusters/ \
    --energyMin 0.2 \
    --energyMax 12.0 \
    --zMax 15.0 \
    --singlePlot \
    --fWidth 0.8 \
    --useTikZ
#+end_src

#+RESULTS:
#+begin_example
reading: /home/basti/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL.h5
DataFrame with 3 columns and 20155 rows: x, y, count
[INFO]: Saving plot to /home/basti/phd/Figs/backgroundClusters/background_cluster_centers.pdf
#+end_example
** Water cooling and temperature readout for the septemboard
:PROPERTIES:
:CUSTOM_ID: sec:detector:water_cooling
:END:

During development of the septemboard one particular set of problems manifested. While testing a prototype board with 5 active GridPix in a gaseous detector, the readout was plagued by excessive noise. The detector exhibited a large number of frames with more than $\num{4096}$ active pixels (the limit for a zero suppressed readout) and common pixel values of $\num{11810}$, indicating overrun ToT counters. An occupancy map (the sum of all active pixels over a run) of the individual chips makes it quite visible that the data is not due to cosmic-ray background. Fig. [[fig:detector:occupancy_sparking_run_241]] shows such an occupancy with the color scale topping out at the $80^{\text{th}}$ percentile of the counts for each chip individually. The chip in the bottom left shows a large number of sparks (overlapping half ellipses pointing downwards) at the top end. The center chip in the top row in particular shows highly structured activity, in contrast to the homogeneous occupancy expected for a normal background run. In addition, all chips show some level of general noise on certain pixels (some being clearly more active than others, resulting in a scatter of 'points').
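The occupancy map and the percentile clamp of its color scale can be sketched in a few lines. The following is only an illustration on synthetic hit data, not the code used to produce the actual figures (those come from the ~TimepixAnalysis~ plotting tools):

```nim
import std / [algorithm, sequtils]

# Build an occupancy map (number of times each pixel fired over a run)
# and find the value at which to clamp the color scale: the 80th
# percentile of the counts of all active pixels.
const NPix = 256

proc occupancy(hits: seq[(int, int)]): seq[seq[int]] =
  result = newSeqWith(NPix, newSeq[int](NPix))
  for (x, y) in hits:
    inc result[y][x]

proc percentile(counts: seq[int], p: float): int =
  let cs = counts.sorted()
  cs[min(cs.high, (p * cs.len.float).int)]

# synthetic hits: three pixels firing 3, 2 and 1 times
let hits = @[(10, 10), (10, 10), (10, 10), (20, 30), (20, 30), (40, 50)]
let occ = occupancy(hits)
let active = occ.foldl(a & b).filterIt(it > 0) # drop empty pixels
echo percentile(active, 0.8) # clamp value for the color scale
```

Clamping at a percentile of the *active* pixels keeps a few extremely hot pixels from washing out the color scale for the rest of the chip.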
#+CAPTION: Occupancy of a testing background run with $\mathcal{O}(\SI{1}{s})$ long frames
#+CAPTION: using septemboard F during development without any kind of cooling. This also shows
#+CAPTION: the layout of the full septemboard with realistic spacing.
#+NAME: fig:detector:occupancy_sparking_run_241
[[~/phd/Figs/detector/sparking/sparking_occupancy_80_quantile_run_241.pdf]]

The intermediate board and carrier board used during these tests were the first boards equipped with two PT1000 temperature sensors, one on the bottom side of the carrier board and another on the intermediate board. Each is read out by a =MAX31685= micro controller. Both of these are accessed via a single =MCP2210= USB-to-SPI micro controller over a single USB port on the intermediate board. The =MCP2210= communicates with both temperature sensors via the Serial Peripheral Interface (SPI) (see sec. [[#sec:daq:temperature_readout]] for more information about the temperature logging and readout).

In the run shown in fig. [[fig:detector:occupancy_sparking_run_241]] the temperature sensors were not functional yet, as the readout software had not been written. Motivated by this noise activity, the required logic was added to the Timepix Operating Software (TOS), the readout software of the detector, to monitor the temperature before and during a data taking period. The sensor on the carrier board indicated temperatures of $\sim\SI{75}{\celsius}$ in background runs similar to the one of fig. [[fig:detector:occupancy_sparking_run_241]].

One way to quantify the noise-like activity seen on the detector is the rate of active pixels over time. With values well above those expected from background alone, excess temperature seemed a plausible cause of the issues. As no proper cooling mechanism was available, a regular desk fan was placed pointing at the detector when it was run without any kind of shielding.
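As an aside, the PT1000 sensors themselves only provide a resistance; turning a reading into a temperature is a one-liner using the common linear approximation of the platinum RTD characteristic. A minimal sketch — the coefficient is the standard IEC 60751 value, and whether the TOS readout uses this exact parametrization is an assumption here:

```nim
# Convert a PT1000 resistance reading to °C via the linear approximation
# R(T) = R0 * (1 + α·T), accurate to well below a degree in this range.
const
  R0 = 1000.0      # Ω at 0 °C for a PT1000
  Alpha = 3.851e-3 # 1/°C, standard IEC 60751 coefficient

proc pt1000ToCelsius(r: float): float =
  (r / R0 - 1.0) / Alpha

echo pt1000ToCelsius(1000.0) # 0 °C
echo pt1000ToCelsius(1289.0) # ≈ 75 °C, the level seen on the carrier board
```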
Placing the fan saw the temperature under the carrier board drop from $\SI{76}{\celsius}$ down to $\SI{68}{\celsius}$. As a result, the majority of the noise disappeared, as can be seen in fig. sref:fig:detector:sparking_run_with_fan_mean_hits, with the temperature curve during the full run shown in fig. sref:fig:detector:sparking_run_with_fan_temps. [fn:detector_sparking_run_268] [fn:detector_troubleshooting] The features visible in the occupancy plots are thus likely a set of different artifacts of excessive temperature: a mixture of real sparks (bottom left chip in fig. [[fig:detector:occupancy_sparking_run_241]]) and instabilities that possibly affect the voltages supplied to the pixels (and thus shift the threshold of each pixel). As the temperature is measured on the bottom side of the carrier board, temperatures in the amplification region are likely higher.

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "Temperature")
  (label "fig:detector:sparking_run_with_fan_temps")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/detector/sparking/temperature_sparking_run_268.pdf"))
 (subfigure (linewidth 0.5)
  (caption "Mean hit rate")
  (label "fig:detector:sparking_run_with_fan_mean_hits")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/detector/sparking/mean_hit_rate_sparking_run_268.pdf"))
 (caption (subref "fig:detector:sparking_run_with_fan_temps")
          " shows the temperature on the bottom side of the carrier board ('septem') and intermediate board ('IMB') during the background run. The point at which the desk fan is placed next to the detector is clearly visible by the "
          ($ (SI 8 "\\celsius"))
          " drop in temperature from about "
          ($ (SI 76 "\\celsius"))
          " to "
          ($ (SI 68 "\\celsius"))
          ". "
          (subref "fig:detector:sparking_run_with_fan_mean_hits")
          " shows the mean hit rate of each of the 5 chips installed on the carrier board at the time during the same run. The placement of the desk fan is easily visible as a reduction in mean rate on all chips.")
 (label "fig:detector:sparking_run_with_fan"))
#+end_src

Following this, a bespoke water cooling system was designed by T. Schiffer, made from oxygen-free copper with $\SI{3}{mm}$ channels for water to circulate through the copper body cite:schiffer_phd. The body has the same diameter as the intermediate board and is installed right below it. The water circulation is handled by an off-the-shelf pump and radiator from Alphacool [fn:detector_alphacool], intended for water cooling setups of desktop computers. The pump manages a water flow rate of about $\SI{0.3}{\liter\per\minute}$ through the $\SI{3}{mm}$ channels in the copper. In common operation the temperatures on the carrier board are between $\SIrange{45}{50}{\celsius}$ and noise free operation is possible.

[fn:detector_troubleshooting] The realization that the issues were purely due to temperature effects came only after several months of eliminating many other options, on both the software and the hardware side. In particular, power supply instabilities were long considered a source of problems. While they possibly also had an impact, better power supplies with larger capacitors were built to deal with large variations in required power.

[fn:detector_alphacool] https://www.alphacool.com/

[fn:detector_sparking_run_268] See the full thesis version for the occupancy of the run with temperature readout in the subsection after this one, if interested.

*** TODOs for this section [3/5] :noexport:

- [ ] *TAKE OUT DETAILS* about the MCP2210 etc. logic to read out sensors etc. That is going to be explained later on anyway.
- [X] *REF TOBI THESIS & UPCOMING PAPER*
- [ ] *UPCOMING PAPER CITE*
- [X] See ~/org/Papers/tobias_schiffer_septemboard_cooling_chapter.pdf~ for his chapter about the cooling device! Good for the size of holes etc. and check if our description is correct!
  -> 3 mm channels!
- [X] *REFERENCE CODE TO TEMP READOUT?*
  -> Will be referenced in the section about the readout itself.
- [ ] *REWRITE SENTENCE*: Further, the gas gain is proportional to the temperature, it is possible slight height differences of the InGrid cause local amplification events, similar to a photomultiplier tube.
  -> This was after the "As the temperature is measured" ... sentence.

*** Sparking behavior :extended:
:PROPERTIES:
:CUSTOM_ID: sec:detector:sparking_behavior
:END:

See the mails containing "Septem F" (among other things) for the information about the sparking behavior. From those we can also deduce the run numbers of the noisy runs (run 241 is one of them); just keep in mind that the run numbers overlap with some CAST run numbers, as for CAST we started counting again at 0.

Specific run path of the noisy run used in the occupancy plot above: =Run_241_170216-13-49=, so a run from February 2017.

Let's plot the temperature during the sparking run in which we installed the fan. This is essentially a reproducible version of the following plot:
[[file:~/org/Figs/temps_plot_septemF_76_68deg_1s.pdf]]

#+begin_src nim :tangle /home/basti/phd/code/sparking_temperature.nim
import ggplotnim, times

# Laptop:
#const path = "/mnt/1TB/CAST/2017/development/Run_268_170418-05-43/temp_log.txt"
# Desktop:
const path = "~/CastData/data/2017/development/Run_268_170418-05-43/temp_log.txt"

proc p(x: string): DateTime =
  result = x.parse("YYYY-MM-dd'.'HH:mm:ss", local())

let df = readCsv(path, sep = '\t', skipLines = 2, colNames = @["IMB", "Septem", "DateTime"])
  .filter(f{string -> bool: p(`DateTime`) < initDateTime(19, mApr, 2017, 0, 0, 0, 0, local())})
  .gather(@["IMB", "Septem"], "Type", "Temperature")
  .mutate(f{"Timestamp" ~ p(`DateTime`).toTime().toUnix()})
## XXX: fix plotting of string columns as date scales, due to discrete / continuous
## mismatch and lacking `dataScale` field
ggplot(df, aes("Timestamp", "Temperature", color = "Type")) +
  geom_line() +
  # scale_x_continuous() +
  ggtitle("Temperature on 2017/04/18 with fan") +
  xlab("Time of day", margin = 3.25, rotate = -45.0, alignTo = "right") +
  ylab("Temperature [°C]") +
  margin(bottom = 4.0, right = 4.0) +
  scale_x_date(isTimestamp = true,
               formatString = "HH:mm:ss",
               dateSpacing = initDuration(hours = 2),
               dateAlgo = dtaAddDuration, timeZone = local()) +
  themeLatex(fWidth = 0.5, width = 600, height = 420, baseTheme = sideBySide) +
  ggsave("/home/basti/phd/Figs/detector/sparking/temperature_sparking_run_268.pdf",
         width = 600, height = 360, useTeX = true, standalone = true)
df.writeCsv("/home/basti/phd/resources/temperature_sparking_run_268.csv")
#+end_src

#+RESULTS:
#+begin_example
shellCmd: lualatex -output-directory /home/basti/phd/Figs/detector/sparking /home/basti/phd/Figs/detector/sparking/temperature_sparking_run_268.tex
Generated: /home/basti/phd/Figs/detector/sparking/temperature_sparking_run_268.pdf
#+end_example

Next up we need to compute the mean hit rate of the four most active chips and plot it against time. How will we go about doing that?
Read and reconstruct the run, then manually extract hits per time, bin by time and that's it?

#+begin_src sh
# cd /mnt/1TB/CAST/2017/development/
cd ~/CastData/data/2017/development/
raw_data_manipulation -p Run_268_170418-05-43 --runType background --out raw_268_sparking.h5
reconstruction -i raw_268_sparking.h5 --out reco_268_sparking.h5
#+end_src

#+RESULTS:

With the resulting file, we can now generate the plot of the hits over time. This is a reproducible version of the following plot:
[[file:~/org/Figs/hitrate_per_time_septemF_76_68deg_1s.pdf]]

#+begin_src nim :tangle /home/basti/phd/code/sparking_hit_rate_over_time.nim
import std / [options, sequtils, times]
import ggplotnim, nimhdf5, unchained
defUnit(Second⁻¹)
import ingrid / tos_helpers

# Laptop
# const path = "/mnt/1TB/CAST/2017/development/reco_268_sparking.h5"
# Desktop
const path = "~/CastData/data/2017/development/reco_268_sparking.h5"

let h5f = H5open(path, "r")

var df = newDataFrame()
var dfR = newDataFrame()
for chip in 0 ..< 5:
  let dsets = @["hits"]
  let dfC = h5f.readRunDsets(
      268,
      chipDsets = some((chip: chip, dsets: dsets)),
      commonDsets = @["timestamp"]
    )
    .mutate(f{"chip" <- chip})
    .arrange("timestamp")
  df.add dfC
  # and directly compute the hit frequency
  let hits = dfC["hits", int]
  let time = dfC["timestamp", int]
  let ts = time.map_inline((x - time[0]).s)
  const Interval = 30.min
  var i = 0
  var rate = newSeq[Second⁻¹]()
  var rateTime = newSeq[float]()
  while i < time.len:
    var h = 0
    var Δt = 0.s
    let t0 = time[i]
    echo "Starting at t0 = ", t0
    while Δt < Interval and i < time.len:
      h += hits[i]
      if i > 0:
        Δt += ts[i] - ts[i-1]
      inc i
    rate.add (h.float / Δt)
    echo "To ", time[i-1]
    rateTime.add((time[i-1] + t0) / 2.0)
    h = 0
  dfR.add toDf({"rate" : rate.mapIt(it.float), rateTime, "chip" : chip})
echo df
echo dfR
dfR = dfR.filter(f{int -> bool: fromUnix(`rateTime`).inZone(local()) < initDateTime(19, mApr, 2017, 0, 0, 0, 0, local())})
ggplot(dfR, aes("rateTime", "rate", color = factor("chip"))) +
  geom_point() +
  scale_y_log10() +
  scale_x_date(isTimestamp = true,
               formatString = "HH:mm:ss",
               dateSpacing = initDuration(hours = 2),
               dateAlgo = dtaAddDuration, timeZone = local()) +
  themeLatex(fWidth = 0.5, width = 600, height = 420, baseTheme = sideBySide) +
  margin(bottom = 4.0, right = 4.0) +
  xlab("Time of day", margin = 3.25, rotate = -45.0, alignTo = "right") +
  ylab(r"Rate [$\si{pixel.s^{-1}}$]") +
  ggsave("/home/basti/phd/Figs/detector/sparking/mean_hit_rate_sparking_run_268.pdf",
         width = 600, height = 360, useTex = true, standalone = true)
dfR.writeCsv("/home/basti/phd/resources/mean_hit_rate_sparking_run_268.csv", precision = 10)
#+end_src

#+RESULTS:
t0 | = | 1492535908 | | | | To | 1492537705 | | | | | | | Starting | at | t0 | = | 1492537717 | | | | To | 1492539515 | | | | | | | Starting | at | t0 | = | 1492539555 | | | | To | 1492541338 | | | | | | | Starting | at | t0 | = | 1492541341 | | | | To | 1492543148 | | | | | | | Starting | at | t0 | = | 1492543154 | | | | To | 1492544957 | | | | | | | Starting | at | t0 | = | 1492544997 | | | | To | 1492546782 | | | | | | | Starting | at | t0 | = | 1492546804 | | | | To | 1492548584 | | | | | | | Starting | at | t0 | = | 1492548587 | | | | To | 1492550410 | | | | | | | Starting | at | t0 | = | 1492550415 | | | | To | 1492552280 | | | | | | | Starting | at | t0 | = | 1492552291 | | | | To | 1492554150 | | | | | | | Starting | at | t0 | = | 1492554183 | | | | To | 1492555952 | | | | | | | Starting | at | t0 | = | 1492555952 | | | | To | 1492557755 | | | | | | | Starting | at | t0 | = | 1492557763 | | | | To | 1492559563 | | | | | | | Starting | at | t0 | = | 1492559591 | | | | To | 1492561363 | | | | | | | Starting | at | t0 | = | 1492561364 | | | | To | 1492563163 | | | | | | | Starting | at | t0 | = | 1492563164 | | | | To | 1492564964 | | | | | | | Starting | at | t0 | = | 1492564969 | | | | To | 1492566765 | | | | | | | Starting | at | t0 | = | 1492566769 | | | | To | 1492568568 | | | | | | | Starting | at | t0 | = | 1492568579 | | | | To | 1492570371 | | | | | | | Starting | at | t0 | = | 1492570375 | | | | To | 1492572211 | | | | | | | Starting | at | t0 | = | 1492572215 | | | | To | 1492574038 | | | | | | | Starting | at | t0 | = | 1492574058 | | | | To | 1492575841 | | | | | | | Starting | at | t0 | = | 1492575863 | | | | To | 1492577645 | | | | | | | Starting | at | t0 | = | 1492577665 | | | | To | 1492579450 | | | | | | | Starting | at | t0 | = | 1492579456 | | | | To | 1492581259 | | | | | | | Starting | at | t0 | = | 1492581293 | | | | To | 1492583062 | | | | | | | Starting | at | t0 | = | 1492583074 | | | | To | 1492584873 | | | | | | | Starting | at | 
t0 | = | 1492584876 | | | | To | 1492586674 | | | | | | | Starting | at | t0 | = | 1492586700 | | | | To | 1492588489 | | | | | | | Starting | at | t0 | = | 1492588493 | | | | To | 1492590348 | | | | | | | Starting | at | t0 | = | 1492590358 | | | | To | 1492592176 | | | | | | | Starting | at | t0 | = | 1492592214 | | | | To | 1492594021 | | | | | | | Starting | at | t0 | = | 1492594032 | | | | To | 1492595851 | | | | | | | Starting | at | t0 | = | 1492595854 | | | | To | 1492597655 | | | | | | | Starting | at | t0 | = | 1492597669 | | | | To | 1492599487 | | | | | | | Starting | at | t0 | = | 1492599503 | | | | To | 1492601296 | | | | | | | Starting | at | t0 | = | 1492601307 | | | | To | 1492603141 | | | | | | | Starting | at | t0 | = | 1492603148 | | | | To | 1492604947 | | | | | | | Starting | at | t0 | = | 1492605002 | | | | To | 1492606809 | | | | | | | Starting | at | t0 | = | 1492606869 | | | | To | 1492608617 | | | | | | | Starting | at | t0 | = | 1492608624 | | | | To | 1492610423 | | | | | | | Starting | at | t0 | = | 1492610468 | | | | To | 1492610636 | | | | | | | Starting | at | t0 | = | 1492487192 | | | | To | 1492488999 | | | | | | | Starting | at | t0 | = | 1492489376 | | | | To | 1492490836 | | | | | | | Starting | at | t0 | = | 1492490852 | | | | To | 1492492659 | | | | | | | Starting | at | t0 | = | 1492492694 | | | | To | 1492494482 | | | | | | | Starting | at | t0 | = | 1492494489 | | | | To | 1492496300 | | | | | | | Starting | at | t0 | = | 1492496369 | | | | To | 1492498150 | | | | | | | Starting | at | t0 | = | 1492498172 | | | | To | 1492499951 | | | | | | | Starting | at | t0 | = | 1492499952 | | | | To | 1492501765 | | | | | | | Starting | at | t0 | = | 1492501769 | | | | To | 1492503624 | | | | | | | Starting | at | t0 | = | 1492503632 | | | | To | 1492505429 | | | | | | | Starting | at | t0 | = | 1492505523 | | | | To | 1492507238 | | | | | | | Starting | at | t0 | = | 1492507242 | | | | To | 1492509042 | | | | | | | Starting | at | 
t0 | = | 1492509042 | | | | To | 1492510848 | | | | | | | Starting | at | t0 | = | 1492510849 | | | | To | 1492512801 | | | | | | | Starting | at | t0 | = | 1492512816 | | | | To | 1492514692 | | | | | | | Starting | at | t0 | = | 1492514781 | | | | To | 1492516611 | | | | | | | Starting | at | t0 | = | 1492516906 | | | | To | 1492518424 | | | | | | | Starting | at | t0 | = | 1492518650 | | | | To | 1492520236 | | | | | | | Starting | at | t0 | = | 1492520247 | | | | To | 1492522072 | | | | | | | Starting | at | t0 | = | 1492522138 | | | | To | 1492524021 | | | | | | | Starting | at | t0 | = | 1492524228 | | | | To | 1492525840 | | | | | | | Starting | at | t0 | = | 1492526100 | | | | To | 1492527782 | | | | | | | Starting | at | t0 | = | 1492527819 | | | | To | 1492529604 | | | | | | | Starting | at | t0 | = | 1492529621 | | | | To | 1492531473 | | | | | | | Starting | at | t0 | = | 1492531562 | | | | To | 1492533341 | | | | | | | Starting | at | t0 | = | 1492533471 | | | | To | 1492535185 | | | | | | | Starting | at | t0 | = | 1492535283 | | | | To | 1492537019 | | | | | | | Starting | at | t0 | = | 1492537096 | | | | To | 1492539167 | | | | | | | Starting | at | t0 | = | 1492539189 | | | | To | 1492540983 | | | | | | | Starting | at | t0 | = | 1492541106 | | | | To | 1492542801 | | | | | | | Starting | at | t0 | = | 1492542837 | | | | To | 1492544626 | | | | | | | Starting | at | t0 | = | 1492544796 | | | | To | 1492546537 | | | | | | | Starting | at | t0 | = | 1492546561 | | | | To | 1492548405 | | | | | | | Starting | at | t0 | = | 1492548451 | | | | To | 1492550402 | | | | | | | Starting | at | t0 | = | 1492550431 | | | | To | 1492552372 | | | | | | | Starting | at | t0 | = | 1492552439 | | | | To | 1492554312 | | | | | | | Starting | at | t0 | = | 1492554350 | | | | To | 1492556138 | | | | | | | Starting | at | t0 | = | 1492556150 | | | | To | 1492558045 | | | | | | | Starting | at | t0 | = | 1492558215 | | | | To | 1492560041 | | | | | | | Starting | at | 
t0 | = | 1492560136 | | | | To | 1492561849 | | | | | | | Starting | at | t0 | = | 1492561874 | | | | To | 1492563668 | | | | | | | Starting | at | t0 | = | 1492563674 | | | | To | 1492565640 | | | | | | | Starting | at | t0 | = | 1492565703 | | | | To | 1492567651 | | | | | | | Starting | at | t0 | = | 1492567753 | | | | To | 1492569580 | | | | | | | Starting | at | t0 | = | 1492569590 | | | | To | 1492571475 | | | | | | | Starting | at | t0 | = | 1492571580 | | | | To | 1492573307 | | | | | | | Starting | at | t0 | = | 1492573431 | | | | To | 1492575250 | | | | | | | Starting | at | t0 | = | 1492575474 | | | | To | 1492577074 | | | | | | | Starting | at | t0 | = | 1492577129 | | | | To | 1492579262 | | | | | | | Starting | at | t0 | = | 1492579395 | | | | To | 1492581079 | | | | | | | Starting | at | t0 | = | 1492581320 | | | | To | 1492583335 | | | | | | | Starting | at | t0 | = | 1492583503 | | | | To | 1492585296 | | | | | | | Starting | at | t0 | = | 1492585350 | | | | To | 1492587141 | | | | | | | Starting | at | t0 | = | 1492587153 | | | | To | 1492589033 | | | | | | | Starting | at | t0 | = | 1492589121 | | | | To | 1492590859 | | | | | | | Starting | at | t0 | = | 1492590906 | | | | To | 1492593118 | | | | | | | Starting | at | t0 | = | 1492593169 | | | | To | 1492594983 | | | | | | | Starting | at | t0 | = | 1492595162 | | | | To | 1492596925 | | | | | | | Starting | at | t0 | = | 1492597007 | | | | To | 1492599149 | | | | | | | Starting | at | t0 | = | 1492599189 | | | | To | 1492601150 | | | | | | | Starting | at | t0 | = | 1492601693 | | | | To | 1492603051 | | | | | | | Starting | at | t0 | = | 1492603118 | | | | To | 1492604865 | | | | | | | Starting | at | t0 | = | 1492605128 | | | | To | 1492606809 | | | | | | | Starting | at | t0 | = | 1492606888 | | | | To | 1492608611 | | | | | | | Starting | at | t0 | = | 1492608627 | | | | To | 1492610512 | | | | | | | Starting | at | t0 | = | 1492610624 | | | | To | 1492610624 | | | | | | | DataFrame | with 
| 6 | columns | and | 376056 | rows: | | Idx | hits | Idx | eventNumber | timestamp | runNumber | chip | | dtype: | int | float | float | int | int | int | | 0 | 4 | 0 | 3 | 1492487045 | 268 | 0 | | 1 | 3 | 0 | 8 | 1492487050 | 268 | 0 | | 2 | 4 | 0 | 10 | 1492487053 | 268 | 0 | | 3 | 3 | 0 | 11 | 1492487054 | 268 | 0 | | 4 | 3 | 0 | 15 | 1492487058 | 268 | 0 | | 5 | 7 | 0 | 19 | 1492487062 | 268 | 0 | | 6 | 4 | 0 | 20 | 1492487064 | 268 | 0 | | 7 | 3 | 0 | 20 | 1492487064 | 268 | 0 | | 8 | 4 | 0 | 21 | 1492487065 | 268 | 0 | | 9 | 4 | 0 | 22 | 1492487066 | 268 | 0 | | 10 | 4 | 0 | 29 | 1492487073 | 268 | 0 | | 11 | 11 | 0 | 30 | 1492487075 | 268 | 0 | | 12 | 12 | 0 | 31 | 1492487076 | 268 | 0 | | 13 | 4 | 0 | 32 | 1492487077 | 268 | 0 | | 14 | 3 | 0 | 35 | 1492487080 | 268 | 0 | | 15 | 10 | 0 | 36 | 1492487081 | 268 | 0 | | 16 | 5 | 0 | 38 | 1492487083 | 268 | 0 | | 17 | 3 | 0 | 40 | 1492487086 | 268 | 0 | | 18 | 4 | 0 | 42 | 1492487088 | 268 | 0 | | 19 | 3 | 0 | 44 | 1492487090 | 268 | 0 | | | | | | | | | | DataFrame | with | 3 | columns | and | 342 | rows: | | Idx | rate | rateTime | chip | | | | | dtype: | float | float | int | | | | | 0 | 56.07 | 1492500000.0 | 0 | | | | | 1 | 64.87 | 1492500000.0 | 0 | | | | | 2 | 74.52 | 1492500000.0 | 0 | | | | | 3 | 76.52 | 1492500000.0 | 0 | | | | | 4 | 76.96 | 1492500000.0 | 0 | | | | | 5 | 65.47 | 1492500000.0 | 0 | | | | | 6 | 63.05 | 1492500000.0 | 0 | | | | | 7 | 56.47 | 1492500000.0 | 0 | | | | | 8 | 62.74 | 1492500000.0 | 0 | | | | | 9 | 55.99 | 1492500000.0 | 0 | | | | | 10 | 64.03 | 1492500000.0 | 0 | | | | | 11 | 66.9 | 1492500000.0 | 0 | | | | | 12 | 51.74 | 1492500000.0 | 0 | | | | | 13 | 70.42 | 1492500000.0 | 0 | | | | | 14 | 46.61 | 1492500000.0 | 0 | | | | | 15 | 16.59 | 1492500000.0 | 0 | | | | | 16 | 13.37 | 1492500000.0 | 0 | | | | | 17 | 13.16 | 1492500000.0 | 0 | | | | | 18 | 15.16 | 1492500000.0 | 0 | | | | | 19 | 19.65 | 1492500000.0 | 0 | | | | | | | | | | | | | [INFO] | TeXDaemon | ready | for | 
input. | | | | shellCmd: | command | -v | lualatex | | | | | shellCmd: | lualatex | -output-directory | /home/basti/phd/Figs/detector/sparking | /home/basti/phd/Figs/detector/sparking/mean_hit_rate_sparking_run_268.tex | | | | Generated: | /home/basti/phd/Figs/detector/sparking/mean_hit_rate_sparking_run_268.pdf | | | | | | Finally, combine both and plot together: #+begin_src nim :tangle /home/basti/phd/code/temperature_and_sparking.nim import std / times import ggplotnim const path = "/home/basti/phd/resources/" let df = readCsv(path & "temperature_sparking_run_268.csv") let dfR = readCsv(path & "mean_hit_rate_sparking_run_268.csv") .group_by("chip") .mutate(f{"rateNorm" ~ `rate` / max(`rate`) * 80.0}) .rename(f{"Timestamp" <- "rateTime"}) let sa = secAxis(name = "Hit rate [a.u.]", trans = f{1.0 / 80.0}) #invTransFn = f{`rateNorm` * 80.0}) ggplot(df, aes("Timestamp", "Temperature", color = "Type")) + geom_line() + geom_point(data = dfR, aes = aes("Timestamp", "rateNorm", color = factor("chip"))) + # ggtitle("Temperature during run on 2017/04/18 in which fan was placed next to detector") + xlab("Time of day") + ylab("Temperature [°C]") + margin(top = 2.0) + scale_y_continuous(secAxis = sa) + scale_x_date(isTimestamp = true, formatString = "HH:mm:ss", dateSpacing = initDuration(hours = 2), dateAlgo = dtaAddDuration, timeZone = local()) + legendPosition(0.835, 0.1) + yMargin(0.05) + ggsave("/home/basti/phd/Figs/detector/sparking/temperature_and_sparking_run_268.pdf") #+end_src #+RESULTS: And finally, let's also recreate the occupancy plot [[file:~/phd/Figs/detector/sparking/occupancy_sparking_septem5chips_300V.pdf]] of run 241 during development to showcase the sparking behavior. 
In order to do that, we first need to reconstruct the run containing
the data:
#+begin_src sh
cd /mnt/1TB/CAST/2017/development/
raw_data_manipulation -p Run_241_170216-13-49 --runType background --out raw_241_sparking.h5
reconstruction raw_241_sparking.h5 --out reco_241_sparking.h5
#+end_src

With the reconstructed data file at hand, we can first of all generate
a large number of plots for each chip:
#+begin_src sh
plotData --h5file reco_241_sparking.h5 \
    --runType rtBackground \
    --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \
    --ingrid --occupancy
#+end_src
which can of course be adjusted according to the user's preferences.
(For this plot in particular it is important not to use the =ToT=
cutting feature of =raw_data_manipulation=, via =rmToTLow= and
=rmToTHigh= in the =config.toml= file.)

With the file in place, let's now create the plot of the occupancies
for each chip, embedded in the layout of the septemboard (at least for
the 5 chips that were mounted on this septemboard F).

- [ ] *REPLACE BELOW PLACEMENT BY ~geometry.nim~ IMPLEMENTATION*

#+begin_src nim :tangle /home/basti/phd/code/occupancy_sparking_septem_layout.nim
import std / os except FileInfo
import std / strutils
import ingrid / [tos_helpers, ingrid_types]
import nimhdf5, ggplotnim, ginger

## The Septemboard layout code is a port of the code used in the python based event
## display for TOS.
const
  Width = 14.1
  Height = 14.1
  BondHeight = 2.0
  FullHeight = Height + BondHeight
  NumChips = 7
  NPix = 256 # number of pixels per dimension, needed already in `calcOccupancy`
  # If this is set to `true` the final plot will only contain the actual raster image.
  # No legend or axes
  OnlyRaster = true
  Run = 241

type
  SeptemRow = object
    left: float
    right: float
    wspace: float
    top: float
    bottom: float

proc initSeptemRow(nChips: int, x_size, y_size, x_dist, x_offset,
                   y_t_offset, y_b_offset, dist_to_row_below: float): SeptemRow =
  # this proc implements a single row of chips of the septem board
  # nChips: number of chips in row
  # x_dist: distance in x direction between each chip
  # x_offset: offset of left edge of first chip in row from
  #           left side of center row
  # calculate width and height of row, based on chips and dist
  let width = nChips.float * Width + (nChips - 1).float * x_dist
  let height_active = Height
  let height_full = FullHeight + dist_to_row_below
  # using calc gridspec one calculates the coordinates of the row on
  # the figure in relative canvas coordinates
  # include padding by adding or subtracting from left, right, top, bottom
  result.left = x_offset / x_size
  result.right = result.left + width / x_size
  result.wspace = x_dist / x_size
  result.top = 1.0 - y_t_offset / y_size
  result.bottom = result.top - height_active / y_size

proc initSeptemBoard(padding, fig_x_size, fig_y_size, scaling_factor: float): seq[SeptemRow] =
  # implements the septem board, being built from 3 septem rows
  proc initRows(y_size, scaled_x_size, scaled_y_size,
                y_row1_row2, y_row2_row3, row2_x_dist: float): seq[SeptemRow] =
    # this proc creates the row objects for the septem board
    # calculation of row 1 top and bottom (in abs. coords.):
    let
      # (top need to add padding to top of row 1)
      row1_y_top = y_size - BondHeight - Height
      # bottom in abs. coords.
      row1_y_bottom = 2 * FullHeight + y_row1_row2 + y_row2_row3 - Height
      # offset of left side from septem in abs. coords.
      row1_x_offset = 6.95
    # now create the first row with all absolute coordinates
    result.add initSeptemRow(2, scaled_x_size, scaled_y_size, 0.85, row1_x_offset,
                             row1_y_top, row1_y_bottom, y_row1_row2)
    # calculation of row 2 top and bottom (top & bottom of row2 not affected by padding):
    let
      row2_y_top = y_size - FullHeight - y_row1_row2 - Height
      row2_y_bottom = FullHeight + y_row2_row3 + BondHeight - Height
      # no offset for row2, defines our left most position in abs. coords.
      row2_x_offset = 0.0 #padding * x_size
    result.add initSeptemRow(3, scaled_x_size, scaled_y_size, row2_x_dist, row2_x_offset,
                             row2_y_top, row2_y_bottom, y_row2_row3)
    # calculation of row 3 top and bottom (add padding to bottom):
    let
      row3_y_top = y_size - 2 * FullHeight - y_row1_row2 - y_row2_row3 - Height
      row3_y_bottom = BondHeight - Height
      row3_x_offset = 7.22
    result.add initSeptemRow(2, scaled_x_size, scaled_y_size, 0.35, row3_x_offset,
                             row3_y_top, row3_y_bottom, 0)

  # include a padding all around the septem event display of 'padding'
  # use size of figure to scale septem accordingly to have it always properly
  # scaled for the given figure
  # take the inverse of the scaling factor (want 1/2 as input to scale to half size)
  let scaling_factor = 1.0 / scaling_factor
  # first calculate the ratio of the figure
  let fig_ratio = float(fig_x_size) / float(fig_y_size)
  # distances between different rows in absolute coordinates
  let
    y_row1_row2 = 0.38
    y_row2_row3 = 3.1
    # size in y direction of whole septem board in absolute coordinates
    y_size = 3 * FullHeight + y_row1_row2 + y_row2_row3
    # already define row2_x_dist here (in absolute coordinates) to calculate x_size
    row2_x_dist = 0.35
    # 3 chips * width + 2 * distance between chips (in absolute coordinates)
    x_size = 3 * Width + (3 - 1) * row2_x_dist
  # calculate the ratio of the septem board
  var ratio = float(x_size) / float(y_size)
  # now calculate the needed ratio to get the correct scaling of the septem on any
  # figure scale: fig_ratio / own ratio
  ratio = fig_ratio / ratio
  let
    # scaled x and y sizes
    scaled_x_size = x_size * ratio * scaling_factor
    scaled_y_size = y_size * scaling_factor
  # and now create the row objects
  result = initRows(y_size, scaled_x_size, scaled_y_size,
                    y_row1_row2, y_row2_row3, row2_x_dist)

proc readVlen(h5f: H5File, fileInfo: FileInfo, runNumber: int,
              dsetName: string, chipNumber = 0,
              dtype: typedesc = float): seq[seq[dtype]] =
  ## reads variable length data `dsetName` and returns it
  ## In contrast to `read` this proc does *not* convert the data.
  let vlenDtype = special_type(dtype)
  let dset = h5f[(fileInfo.dataPath(runNumber, chipNumber).string / dsetName).dset_str]
  result = dset[vlenDType, dtype]

proc calcOccupancy[T](x, y: seq[seq[T]], z: seq[seq[uint16]] = @[]): Tensor[float] =
  ## calculates the occupancy of the given x and y datasets
  ## Either for a `seq[seq[T: SomeInteger]]` in which case we're calculating
  ## the occupancy of raw clusters or `seq[T: SomeFloat]` in which case
  ## we're dealing with center positions of clusters
  result = newTensor[float]([NPix, NPix])
  # iterate over events
  for i in 0 .. x.high:
    let
      xEv = x[i]
      yEv = y[i]
    var zEv: seq[uint16]
    if z.len > 0:
      zEv = z[i]
    ## continue if full event.
    ## TODO: replace by solution that also works for clusters!!
    #if xEv.len >= 4095: continue
    for j in 0 .. xEv.high:
      if zEv.len > 0:
        result[xEv[j].int, yEv[j].int] += zEv[j].float
      else:
        result[xEv[j].int, yEv[j].int] += 1.0

proc occForChip(h5f: H5File, chip: int, fileInfo: FileInfo): (Tensor[int], Tensor[int], Tensor[float]) =
  let
    xD = h5f.readVlen(fileInfo, Run, "x", chip, dtype = uint8)
    yD = h5f.readVlen(fileInfo, Run, "y", chip, dtype = uint8)
    zD = h5f.readVlen(fileInfo, Run, "ToT", chip, dtype = uint16)
  let occ = calcOccupancy(xD, yD) # , zD)
  var
    x = newTensorUninit[int](NPix * NPix)
    y = newTensorUninit[int](NPix * NPix)
    z = newTensorUninit[float](NPix * NPix)
  var i = 0
  for idx, val in occ:
    x[i] = idx[0]
    y[i] = idx[1]
    z[i] = val
    inc i
  result = (x, y, z)

proc handleOccupancy(h5f: H5File, chip: int, fileInfo: FileInfo, quant: float = 0.0): PlotView =
  # get x and y datasets, stack and get occupancies
  let (x, y, z) = h5f.occForChip(chip, fileInfo)
  let df = toDf(x, y, z)
  var quant = quant
  if quant == 0.0:
    quant = percentile(z, 80)
  result = ggcreate(
    block:
      var plt = ggplot(df, aes("x", "y", fill = "z"), backend = bkCairo) +
        geom_raster() +
        scale_fill_continuous(scale = (low: 0.0, high: quant)) + #high: 1600.0)) +
        xlim(0, NPix) + ylim(0, NPix)
      if OnlyRaster:
        plt = plt + theme_void() + hideLegend()
      plt
  )
  #ggplot(df, aes("x", "y", fill = "z"), backend = bkCairo) +
  #  geom_raster() +
  #  scale_fill_continuous(scale = (low: 0.0, high: 1600.0)) +
  #  xlim(0, NPix) + ylim(0, NPix) +
  #  ggsave("/t/test_occ_0_.pdf")

proc drawBounds(v: Viewport) =
  v.drawBoundary(writeName = true)
  for ch in mitems(v.children):
    ch.drawBounds()

proc calcQuantileChip3(h5f: H5File, fileInfo: FileInfo): float =
  let (x, y, z) = h5f.occForChip(3, fileInfo)
  result = percentile(z, 80)

proc addRow(view: Viewport, h5f: H5File, septem: seq[SeptemRow], fileInfo: FileInfo,
            i, num, chipStart: int, showEmpty = false) =
  let width = septem[i].right - septem[i].left
  let height = septem[i].top - septem[i].bottom
  var row = view.addViewport(left = septem[i].left,
                             bottom = septem[i].bottom,
                             width = width,
                             height = height)
  row.layout(num, 1) #, margin = quant(septem[i].wspace, ukRelative))
  #let quant = calcQuantileChip3()
  for j in 0 ..< num:
    if not showEmpty:
      let plt = handleOccupancy(h5f, chipStart + j, fileInfo) #, quant)
      let v = if OnlyRaster: plt.view[4] else: plt.view
      var pltView = v.relativeTo(row[j])
      row.embedAt(j, pltView)
    row[j].drawBoundary()
  view.children.add row

## Read the data from the reconstructed H5 file of run 241
const path = "/mnt/1TB/CAST/2017/development/reco_$#_sparking.h5" % $Run
let h5f = H5open(path, "r")
let fileInfo = getFileInfo(h5f)
let
  fig_x_size = 10.0
  fig_y_size = 12.04186
  ratio = fig_x_size / fig_y_size
let septem = initSeptemBoard(0.0, fig_x_size, fig_y_size, 1.0)
let ImageSize = fig_x_size * DPI
let view = initViewport(c(0.0, 0.0),
                        quant(fig_x_size, ukInch),
                        quant(fig_y_size, ukInch),
                        backend = bkCairo,
                        wImg = ImageSize,
                        hImg = ImageSize / ratio)
view.addRow(h5f, septem, fileInfo, 0, 2, 5, showEmpty = true)
view.addRow(h5f, septem, fileInfo, 1, 3, 2)
view.addRow(h5f, septem, fileInfo, 2, 2, 0)
view.draw("/home/basti/phd/Figs/detector/sparking/sparking_occupancy_80_quantile_run_$#.pdf" % $Run)
#+end_src

Running the above code for run 268 (the run in which we installed the
fan and had temperature readout) yields
fig. [[fig:detector:occupancy_sparking_run_268]].

#+CAPTION: Occupancy of a testing background run with $\mathcal{O}(\SI{1}{s})$ long frames
#+CAPTION: using septemboard F during development without any kind of cooling and temperature
#+CAPTION: logging. Temperatures on the underside of the carrier board reached \SI{76}{\celsius}
#+CAPTION: before the fan was placed next to it.
#+NAME: fig:detector:occupancy_sparking_run_268
[[~/phd/Figs/detector/sparking/sparking_occupancy_80_quantile_run_268.pdf]]

** Detector efficiency
:PROPERTIES:
:CUSTOM_ID: sec:septem:efficiency
:END:

For the applications at CAST, the detector is filled with
$\ce{Ar}$ / $\ce{iC_4 H_{10}}$ : $\SI{97.7}{\%} / \SI{2.3}{\%}$ gas.
Combined with the $\SI{300}{nm}$ $\ce{Si_3 N_4}$ window and the
$\SI{20}{nm}$ $\ce{Al}$ coating of the detector cathode, the total
detection efficiency can be computed as the product of the individual
efficiencies. For the window and the coating, the relevant efficiency
is the transmission $t_i$ of X-rays through each material at a given
energy. For the gas, the absorption probability $a_i$ is needed. As
such,
\[ ε_{\text{tot}} = t_{\ce{Si_3 N_4}} · t_{\ce{Al}} · a_{\ce{Ar} / \ce{iC_4H_{10}}} \]
describes the full detector efficiency for those parts of the detector
that are not obstructed by the window strongbacks. For a statistical
measure of the detection efficiency, the occlusion of the window by
the strongbacks needs to be taken into account. Because it is position
(and thus area) dependent, the need to include it is decided on a case
by case basis.

For the absorption of a gas mixture, we can use Dalton's law: the
individual gases contribute according to their mole fractions (the
percentages quoted for the gas mixture), so the absorption is computed
for each gas at its partial pressure,
\[ a_i = \text{Absorption}(P_{\text{total}} · f_i) \]
where $P_{\text{total}}$ is the total pressure of the gas mixture (in
this case $\SI{1050}{mbar}$) and $f_i$ is the fraction of gas $i$.
'Absorption' simply refers to the generic function computing the
absorption for a gas at a given pressure (see
sec. [[#sec:theory:daltons_law]] and
sec. [[#sec:theory:xray_matter_gas]]).

The full combined efficiency is shown in
fig. [[fig:detector:combined_efficiency]]. Different aspects dominate
the combined efficiency (purple line) in different energy ranges. At
energies above $\SI{5}{keV}$ the probability for an X-ray not to
generate a photoelectron within the $\SI{3}{cm}$ of drift distance
becomes the major factor for the loss in efficiency. As a result, the
combined efficiency at $\SI{10}{keV}$ is slightly below $\SI{30}{\%}$.
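To make the two relations above concrete, here is a minimal numerical
sketch. It is written in Python purely for illustration (the actual
computation in this thesis uses Nim with the =xrayAttenuation=
library, shown in the extended section below), and all transmission
and absorption values are made-up placeholders, not real material
data:

```python
# Minimal sketch of the efficiency bookkeeping. The numbers used for the
# window / coating transmissions and the gas absorption are PLACEHOLDERS,
# not real material data.

def partial_pressure(p_total_mbar: float, mole_fraction: float) -> float:
    """Dalton's law: each component contributes its mole fraction of the
    total pressure, P_i = P_total * f_i."""
    return p_total_mbar * mole_fraction

def combined_efficiency(t_window: float, t_coating: float, a_gas: float) -> float:
    """eps_tot = t_Si3N4 * t_Al * a_gas (away from the window strongbacks)."""
    return t_window * t_coating * a_gas

# 1050 mbar of 97.7% Ar / 2.3% isobutane
p_ar  = partial_pressure(1050.0, 0.977)   # -> 1025.85 mbar
p_iso = partial_pressure(1050.0, 0.023)   # -> 24.15 mbar
print(f"P_Ar = {p_ar:.2f} mbar, P_iso = {p_iso:.2f} mbar")

# placeholder transmissions / absorption at some fixed energy
eps = combined_efficiency(t_window = 0.92, t_coating = 0.99, a_gas = 0.85)
print(f"eps_tot = {eps:.4f}")
```

Each gas is then fed into the absorption function at its partial
pressure, and the three factors are multiplied per energy bin.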
The best combined efficiency of about $\SI{95}{\%}$ is reached at
about $\SI{3.75}{keV}$, where absorption in the gas is likely and the
energy is high enough for good transmission through the window. The
argon $K 1s$ absorption edge is clearly visible at around
$\SI{3.2}{keV}$. At energies just below it the mean free path of
X-rays becomes significantly longer, as the $K 1s$ shell no longer
contributes to the possible generation of a photoelectron. The window
leads to a similar, but inverse, effect due to the $K 1s$ line of
$\ce{Si}$ at around $\SI{1.84}{keV}$. Because transmission through the
window material is desired, the efficiency /increases/ once we go
below that energy. Finally, the nitrogen $K 1s$ line also contributes
to an increase in efficiency below about $\SI{400}{eV}$.

The average efficiencies in the energy ranges $\SIrange{0}{3}{keV}$
and $\SIrange{0}{10}{keV}$ are $\SI{73.42}{\%}$ and $\SI{67.84}{\%}$,
respectively. The improvement in efficiency at energies below
$\SI{3}{keV}$ compared to the Mylar window used in the 2014/15
detector (see sec. [[#sec:detector:sin_window]]) leads to a
significant improvement in possible signal detection at those
energies. This is especially important for searches with peak fluxes
around $\SIrange{1}{2}{keV}$, as is the case for the axion-electron
coupling or a possible chameleon coupling.

#+CAPTION: Combined detection efficiency for the full detector, taking into account
#+CAPTION: the gas filling of $\SI{1050}{mbar}$ $\ce{Ar}$ / $\ce{iC_4 H_{10}}$, the $\SI{300}{nm}$
#+CAPTION: $\ce{Si_3 N_4}$ window and its $\SI{20}{nm}$ $\ce{Al}$ coating.
#+NAME: fig:detector:combined_efficiency
[[~/phd/Figs/detector/detector_efficiency.pdf]]

*** TODOs for this section [0/2] :noexport:

Outside of that, the general background rate expected from the
detector should match and exceed the previous detector, due to the
additional detector features.
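The quoted range averages are simply the mean of the sampled
efficiency curve over all energies below the upper limit of the
range. A small sketch of that averaging (again Python for
illustration; =toy_eff= is a made-up stand-in for the real curve):

```python
# Sketch of the range averaging: sample the efficiency on an energy grid
# and average all points below the range limit. `toy_eff` is a made-up
# curve, NOT the real detector efficiency.
import numpy as np

def mean_efficiency(energies: np.ndarray, eff: np.ndarray, e_max: float) -> float:
    # mean of all sampled efficiency values with energy below `e_max`
    mask = energies < e_max
    return float(eff[mask].mean())

energies = np.linspace(0.03, 10.0, 1000)  # same grid as the calculation below
toy_eff = np.exp(-energies / 8.0)         # made-up, monotonically falling

print(mean_efficiency(energies, toy_eff, 3.0))   # average over 0-3 keV
print(mean_efficiency(energies, toy_eff, 10.0))  # average over the full range
```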
- [ ] *ADD NUMBERS FOR AVERAGE EFFICIENCY IN RANGES, WHERE X AND Y*
- [ ] *REPLACE BY NATIVE TIKZ PLOT + VEGA*

*** Calculation of full detection efficiency :extended:
:PROPERTIES:
:CUSTOM_ID: sec:detector:efficiency
:END:

*Note*: We also have
[[file:~/CastData/ExternCode/TimepixAnalysis/Tools/septemboardDetectionEff/septemboardDetectionEff.nim]]
which includes the LLNL effective area (designed for the limit
calculation) now! See also
sec. [[#sec:limit:ingredients:gen_detection_eff]].

#+begin_src nim :tangle /home/basti/phd/code/detector_efficiency.nim
import std / strutils
import xrayAttenuation, ggplotnim
# generate a compound of silicon and nitrogen with correct number of atoms
let Si₃N₄ = compound((Si, 3), (N, 4))
let al = Aluminium.init()
# define energies in which to compute the transmission
# (we don't start at 0, as at 0 energy the parameters are not well defined)
let energies = linspace(0.03, 10.0, 1000)
# instantiate an Argon instance
let ar = Argon.init()
# and isobutane
let iso = compound((C, 4), (H, 10))

proc compTrans[T: AnyCompound](el: T, ρ: g•cm⁻³, length: Meter): Column =
  let df = toDf({ "Energy [keV]" : energies })
    .mutate(f{float: "μ" ~ el.attenuationCoefficient(idx("Energy [keV]").keV).float},
            f{float: "Trans" ~ transmission(`μ`.cm²•g⁻¹, ρ, length).float},
            f{"Compound" <- el.name()})
  result = df["Trans"]

var df = toDf({ "Energy [keV]" : energies })
# compute transmission for Si₃N₄ (known density and desired length)
df[Si₃N₄.name()] = Si₃N₄.compTrans(3.44.g•cm⁻³, 300.nm.to(Meter))
# and aluminum coating
df[al.name()] = al.compTrans(2.7.g•cm⁻³, 20.nm.to(Meter))
# and now for the gas mixture.
# first compute partial pressures
const fracAr = 0.977
const fracIso = 0.023
# using it we can compute the density of each by partial pressure theorem (Dalton's law)
let ρ_Ar = density(1050.mbar.to(Pascal) * fracAr, 293.K, ar.molarMass)
let ρ_Iso = density(1050.mbar.to(Pascal) * fracIso, 293.K, iso.molarWeight)
# now add transmission of argon and iso
df[ar.name()] = ar.compTrans(ρ_Ar, 3.cm.to(Meter))
df[iso.name()] = iso.compTrans(ρ_Iso, 3.cm.to(Meter))
let nSiN = r"$\SI{300}{nm}$ $\ce{Si_3 N_4}$"
let nAl = r"$\SI{20}{nm}$ $\ce{Al}$"
let nAr = r"$\SI{3}{cm}$ $\ce{Ar}$ Absorption"
let nIso = r"$\SI{3}{cm}$ $\ce{iC_4 H_{10}}$ Absorption"
let nArIso = r"$\SI{3}{cm}$ $\SI{97.7}{\percent} \ce{Ar} / \SI{2.3}{\percent} \ce{iC_4 H_{10}}$"
# finally just need to combine all of them in useful ways
# - argon + iso
df = df.mutate(f{"Trans_ArIso" ~ `Argon` * `C4H10`},
               f{"Abs ArIso" ~ 1.0 - `Trans_ArIso`},
               f{"Abs Ar" ~ 1.0 - `Argon`},
               f{"Abs Iso" ~ 1.0 - `C4H10`},
               f{"Efficiency" ~ idx("Abs ArIso") * `Si3N4` * `Aluminium`})
  .rename(f{nSiN <- "Si3N4"}, f{nAl <- "Aluminium"}, f{nAr <- "Abs Ar"},
          f{nIso <- "Abs Iso"}, f{nArIso <- "Abs ArIso"}) # ,
  .gather([nSiN, nAl, nAr, nIso, nArIso, "Efficiency"], "Material", "Efficiency")
echo "Mean efficiency 0-3 keV = ", df.filter(f{idx("Energy [keV]") < 3.0})["Efficiency", float].mean
echo "Mean efficiency 0-5 keV = ", df.filter(f{idx("Energy [keV]") < 5.0})["Efficiency", float].mean
echo "Mean efficiency 0-10 keV = ", df.filter(f{idx("Energy [keV]") < 10.0})["Efficiency", float].mean
ggplot(df, aes("Energy [keV]", "Efficiency", color = "Material")) +
  geom_line() +
  xlab("Energy [keV]") + ylab("Efficiency") +
  xlim(0.0, 10.0) +
  ggtitle(r"Transmission (absorption for gases) of relevant detector materials and combined \\" &
          "detection efficiency of the Septemboard detector") +
  margin(top = 1.5, right = 2.0) +
  titlePosition(0.0, 0.8) +
  legendPosition(0.42, 0.15) +
  themeLatex(fWidth = 0.9, width = 600, height = 400, baseTheme = singlePlot) +
ggsave("/home/basti/phd/Figs/detector/detector_efficiency.pdf",
       width = 600, height = 400,
       #width = 800, height = 600,
       useTex = true, standalone = true)
#+end_src

#+RESULTS:
: Mean efficiency 0-3 keV  = 0.7342084765204602
: Mean efficiency 0-5 keV  = 0.7544999372201439
: Mean efficiency 0-10 keV = 0.6783959312693081

** Data acquisition and detector monitoring
:PROPERTIES:
:CUSTOM_ID: sec:detector:daq
:END:

The data acquisition software used for the Septemboard detector, the Timepix Operating Software (TOS), is not of direct importance to this thesis, but a longer section about it can be found in appendix [[#sec:daq]]. It includes discussions of how the software works internally, how it is used, what the produced data format looks like and its configuration files. Further, it goes over the temperature readout functionality and finally presents the software used for detector monitoring, in the form of an event display used at CAST.

* Detector calibration for operation :Detector:
:PROPERTIES:
:CUSTOM_ID: sec:operation_calibration
:END:

#+LATEX: \minitoc

This chapter introduces the calibrations required to get the Septemboard detector into a usable state for data taking at an experiment, in particular those needed to interpret the data taken with it, sec. [[#sec:operation_calibration:timepix]]. Those calibrations purely related to the Timepix ASIC itself -- required for noise-free operation at the lowest possible thresholds -- can be found in appendix [[#sec:appendix:calibration:timepix]].
The correct working parameters for the FADC are also discussed, in sec. [[#sec:operation_calibration:fadc]]. Finally, the scintillators need to be set to their correct discriminator thresholds, see sec. [[#sec:operation_calibration:scintillators]].

** TODOs for this section :noexport:

Old paragraph:
#+begin_quote
There will be a later chapter about the type of calibrations that are done based on real data to -- for example -- calibrate the charge or energy, chapter [[#sec:calibration]].
#+end_quote

** Timepix calibrations
:PROPERTIES:
:CUSTOM_ID: sec:operation_calibration:timepix
:END:

As alluded to above, here we will focus on interpreting data taken with a Timepix ASIC. This means introducing the Time over Threshold (ToT) calibration method used to interpret ToT values as recorded charges, sec. [[#sec:operation_calibration:tot_calibration]]. Further, the gas gain can be determined based on the recorded charge. This is discussed in sec. [[#sec:daq:polya_distribution]].

Important references for the Timepix in general and for the calibration procedures explained below and in the appendix are cite:LLOPART2007485_timepix,LlopartCudie_1056683,timepix_manual,lupberger2016pixel.

*** TODOs for this section [/] :noexport:

What kind of calibrations exist. How do they work, what do they do etc. From a purely detector standpoint.

Polya distribution goes here somewhere. Related to gaseous detector physics & our detector in particular.

- [ ] *LEAVE THE IMPORTANT REFERENCE PART IN? ALREADY MENTIONED WHEN TIMEPIX INTRODUCED*
- [X] *ELSEWHERE AS WELL* -> Its own chapter for these things.
- [ ] *REPHRASE, SCURVE NOT TO SET A DAC. SCURVE USED TO ESTIMATE THE THRESHOLD IN ELECTRONS*

These calibrations are mainly explained to give context for the detector calibrations used at CAST. All calibrations can be found in appendix *APPEND CALIBRATIONS*.
*** ToT calibration
:PROPERTIES:
:CUSTOM_ID: sec:operation_calibration:tot_calibration
:END:

The purpose of the ~ToT~ (\textbf{T}ime \textbf{o}ver \textbf{T}hreshold) calibration is not to achieve stable operation of a Timepix based detector, but rather to interpret the data it records. It is needed to convert the =ToT= values recorded by each pixel into a charge, i.e. a number of recorded electrons. This is done by injecting known charges onto the individual pixels -- 'test pulses'. Each pixel contains a capacitor through which precisely defined voltage pulses can be injected. In the case of the Timepix 1, each pixel uses a capacitance of $\SI{8}{fF}$ cite:timepix_manual. Knowing the capacitance and the voltage induced on it, the number of injected electrons is easily calculated from
\[
Q = C U.
\]
By varying the injected charge and recording the resulting ToT values of the pixels, a relation between test pulse height and ToT values is determined:
\[
f(p) = a p + b - \frac{c}{p - t}
\]
where $a, b, c$ and $t$ are parameters to be determined via a numerical fit and $p$ is the test pulse height in $\si{mV}$. Inverting this relation yields the charge for a given =ToT= value:
\[
f(\mathtt{ToT}) = \frac{α}{2 a} \left( \mathtt{ToT} - (b - a t) + \sqrt{ \left( \mathtt{ToT} - (b - a t) \right)^2 + 4 (a b t + a c - a t \mathtt{ToT} ) } \right)
\]
where $\mathtt{ToT}$ is the time over threshold value recorded for a pixel, the constants $a, b, c, t$ are the fit parameters determined above and $α$ is the conversion factor from pulse height to number of electrons. From $Q = C U$ with $C = \SI{8}{fF}$, a pulse height of $\SI{1}{mV}$ corresponds to about $\num{50}$ electrons, hence $α = 50\,\mathrm{e^-}/\si{mV}$. An example of a ToT calibration of one chip is shown in fig. [[fig:septem:tot_calibration_example]].

#+NAME: fig:septem:tot_calibration_example
#+CAPTION: Example of a ToT calibration measurement for the chip H10 W69, the center
#+CAPTION: chip of the Septemboard, as it was done for the CAST data taking period 2.
[[~/phd/Figs/detector/calibration/tot_calib_Run2_chip_3.pdf]]

**** TODOs for this section [4/4] :noexport:

- [X] add =--outpath= argument to =plotCalibration= and use it to place it where we read it from
- [X] Required understanding for our charge calibration.
- [X] Show function that is being fitted to it.
- [X] *NOTE: AS FAR AS I CAN TELL, THE TOT CALIBRATION ALREADY REQUIRES A CORRECT =THL= DAC VALUE!*

**** Generate plots for the ToT calibration :extended:
:PROPERTIES:
:CUSTOM_ID: sec:operation_calibration:gen_tot_calib_plot
:END:

The plots for all calibrations (of this sort) are produced using the ~plotCalibration~ tool, [[file:~/CastData/ExternCode/TimepixAnalysis/Plotting/plotCalibration/plotCalibration.nim]]. The single plot used in the thesis above can be produced like this:

#+begin_src sh :exports code
# To generate fig:septem:tot_calibration_example
plotCalibration --tot --runPeriod=Run2 --useTeX \
    --file ~/CastData/ExternCode/TimepixAnalysis/resources/ChipCalibrations/Run2/chip3/TOTCalib3.txt \
    --outpath ~/phd/Figs/detector/calibration/
#+end_src

#+RESULTS:
: *** testlinfit status = 1
:     χ²     = 1.7516 (10 DOF)
:     χ²/dof = 0.17516
:     NPAR = 4, NFREE = 4, NPEGGED = 0
:     NITER = 8, NFEV = 37
:     P[0] =  0.352852 +/- 0.0420413
:     P[1] =  50.7553  +/- 17.2347
:     P[2] =  2386.45  +/- 1878.16
:     P[3] = -21.2788  +/- 21.0224
: Generated: /home/basti/phd/Figs/detector/calibration/tot_calib_Run2_chip_3.pdf

Note that we hand the calibration data file directly and do not use the InGrid database. We could do either (if we pass a chip number and a run period instead of the ToT calibration text file, it reads from the database instead).

See appendix section [[#sec:appendix:calibration:gen_tot_calibration]] for the ToT calibrations of all Septemboard chips.

The following is the doc string of the ToT calibration function used in ~TimepixAnalysis~:

#+begin_src nim
## calculates the charge in electrons from the TOT value, based on the TOT calibration
## from MarlinTPC:
## measured and fitted is ToT[clock cycles] in dependency of TestPulseHeight [mV]
## fit function is:
##   ToT[clock cycles] = a*TestPulseHeight [mV] + b - ( c / (TestPulseHeight [mV] - t) )
## isolating TestPulseHeight gives:
##   TestPulseHeight [mV] = 1/(2*a) * (ToT[clock cycles] - (b - a*t) +
##                          SQRT( (ToT[clock cycles] - (b - a*t))^2 +
##                                4*(a*b*t + a*c - a*t*ToT[clock cycles]) ) )
## conversion from charge to electrons:
##   electrons = 50 * testpulse[mV]
## so we have:
##   Charge[electrons] = 50 / (2*a) * (ToT[clock cycles] - (b - a*t) +
##                       SQRT( (ToT[clock cycles] - (b - a*t))^2 +
##                             4*(a*b*t + a*c - a*t*ToT[clock cycles]) ) )
#+end_src

*** Pólya distribution & gas gain
:PROPERTIES:
:CUSTOM_ID: sec:daq:polya_distribution
:END:

In sec. [[#sec:theory:gas_gain_polya]] we introduced the Pólya distribution to describe the statistical distribution of the gas amplification stage. In the practical context of the Septemboard detector and the ToT calibration, this is the histogram of all charge values recorded by the detector (and, relatedly, of all ToT values).
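The per-pixel charges entering this histogram come from the inverted ToT calibration function of the previous section. The following is a small standalone sketch of that conversion, using the example fit parameters of the center chip from the Run-2 calibration above ($P[0]$ through $P[3]$) and $α = 50\,\mathrm{e^-}/\si{mV}$. The proc name ~totToCharge~ is chosen for illustration; this is a simplified stand-in for the actual implementation in ~TimepixAnalysis~.

#+begin_src nim
import std / math

# Example ToT calibration fit parameters of the center chip (Run-2, see above)
const
  a = 0.352852   # slope [clock cycles / mV]
  b = 50.7553    # offset [clock cycles]
  c = 2386.45    # curvature parameter
  t = -21.2788   # threshold parameter [mV]
  α = 50.0       # electrons per mV of test pulse height (Q = C·U with C = 8 fF)

proc totToCharge(tot: float): float =
  ## Inverts `ToT = a·p + b - c / (p - t)` for the pulse height `p` [mV]
  ## and scales by `α` to obtain the charge in electrons.
  let k = tot - (b - a * t)
  result = α / (2 * a) * (k + sqrt(k * k + 4 * (a * b * t + a * c - a * t * tot)))

when isMainModule:
  echo totToCharge(100.0)  # charge in electrons for a ToT of 100 clock cycles
#+end_src

For these parameters a ToT of 100 clock cycles corresponds to several thousand electrons, i.e. a typical single-pixel charge after gas amplification.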
As the ToT calibration function is non-linear, the histogram of the Pólya distribution has equal bin widths in ToT space, but increasing bin widths in charge space. Such a histogram can be seen in fig. [[fig:daq:polya_example_chip3]], for a $\SI{90}{min}$ slice of background data of the center chip of the Septemboard. The reasoning behind looking at a fixed time interval for the Pólya will be explained in section [[#sec:calib:gas_gain_time_binning]]. The pink line represents the fit of the Pólya distribution following eq. [[eq:theory:polya_distribution]] to the data. The dashed part of the line was not used for the fit and is only an extension using the final fit parameters. At the lower end of charge values, a cutoff due to the chip's activation threshold is clearly visible. Note also how the bin widths increase from low to high charge values.

The fit determines a gas gain of $\num{3604.3}$, while the mean of the data yields $\num{3171.0}$. Following [[cite:&krieger2018search]], any number for the gas gain used in the remainder of the thesis refers to the /mean of the data/ and not to the fit parameter. We use the fit mainly as a verification of the general behavior.

#+CAPTION: An example of a Pólya distribution of chip 3 using the calibration
#+CAPTION: of July 2018 based on \SI{90}{min} of background data.
#+CAPTION: A cutoff at low charges is visible. The pink line represents the
#+CAPTION: fit of the Pólya distribution to the data. In the dashed region the
#+CAPTION: line was extended using the final fit parameters.
#+NAME: fig:daq:polya_example_chip3
[[~/phd/Figs/gas_gain_run_267_chip_3_2_90_min_1541650002.pdf]]

A secondary use case for the Pólya distribution is the determination of the activation threshold via the observed cutoff. More on this in appendix [[#sec:appendix:calibration:polya_distribution_threshold]].
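Why the mean of the charge data is a sensible gain estimator can be illustrated numerically. The sketch below uses one common parametrization of the Pólya distribution (a gamma distribution in the charge $q$; the exact form used for the fits is the one of eq. [[eq:theory:polya_distribution]]); the width parameter $θ = 1.5$ is an arbitrary illustrative choice. For an ideal, uncut distribution the mean charge reproduces the gas gain $G$ exactly:

#+begin_src nim
import std / math

proc polya(q, G, θ: float): float =
  ## One common parametrization of the Pólya distribution over charges `q`,
  ## with the gas gain `G` as its mean and width parameter `θ`
  ## (`θ = 0` recovers a pure exponential).
  let k = 1.0 + θ
  result = k / G * pow(k, θ) / gamma(k) * pow(q / G, θ) * exp(-k * q / G)

when isMainModule:
  # numerically check normalization and that the mean equals G, using the
  # 'gas gain from data' value quoted above; θ chosen arbitrarily
  let (G, θ) = (3171.0, 1.5)
  var norm = 0.0
  var meanQ = 0.0
  var q = 0.5
  while q < 50_000.0:  # midpoint rule with Δq = 1
    let p = polya(q, G, θ)
    norm += p
    meanQ += q * p
    q += 1.0
  echo "∫ p(q) dq   ≈ ", norm          # ≈ 1
  echo "mean charge ≈ ", meanQ / norm  # ≈ G
#+end_src

In real data the activation threshold cuts off the low-charge part of the distribution, which is one reason the mean of the data and the fitted gain parameter need not agree.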
**** TODOs for this section [/] :noexport: - [ ] *THIS SECTION NEEDS TO BE COMPLETELY REVAMPED I THINK!!!* -> Giving information about the minimum charge is useful. But it does not require very much text. -> We _have_ to introduce somewhere well how the gas gain is deduced. We have the gas gain time binning section, but the question is if we should only introduce how we do it there? - [ ] *Probably move this to* [[#sec:calibration]]! -> *OR* to reconstruction chapter? - [ ] *THE REFERENCE TO fig. [[fig:daq:thl_calibration_example]] IS CONTAINED IN SCURVE SECTION!* - [X] *TALK ABOUT GAS GAIN* - [ ] *MAYBE MOVE EXPLANATION ABOUT HOW TO EXTRACT GAS GAIN FROM POLYA TO EXPLANATION IN THEORY?* - [ ] *IF IT STAYS HERE, INTRODUCE WHAT "MEAN OF HISTOGRAM" EVEN MEANS!* - [ ] *USE IT TO DETERMINE THRESHOLD* - [X] *COMPUTE THRESHOLD IN EXAMPLE FROM THL CALIB AND COMPARE* - [X] *INTRODUCE IT IS USED AS THE PRACTICAL WAY TO COMPUTE GAS GAIN* - [ ] *MENTION THAT WE BIN BY TIME OF 90min AND THAT WE WILL EXPLAIN IN LATER CHAPTER HOW TIME INTERVAL WAS CHOSEN* - [ ] *GENERATE A NEW PLOT OF POLYA FOR THESIS* - [ ] *EXPLAIN HOW WE CHOOSE THE FITTING RANGE FOR THE POLYA* -> see the ~statusAndProgress~ section ~Determine polya fit range dynamically~! Explain how all charge values combined as a histogram generate a ~Pólya distribution from which we can deduce the gas gain. **** Generate Polya plot for chip 3, run period 3 [0/1] :extended: We will produce a Pólya plot by performing the reconstruction of a single run. In principle during the entire reconstruction all gas gain plots are produced anyway, but that means we'd just copy over a single file. Better to showcase how they are produced in a standalone fashion. 
Say we use run 267 and it is in some directory: #+begin_src sh :results none cp ~/CastData/data/2018_2/Run_267_181108-02-05_rtBackground.tar.gz /tmp/Run_267_181108-02-05.tar.gz cd /tmp raw_data_manipulation -p Run_267_181108-02-05.tar.gz --runType background --out raw_267.h5 reconstruction -i raw_267.h5 --out reco_267.h5 reconstruction -i reco_267.h5 --only_charge reconstruction -i reco_267.h5 --only_gas_gain --useTeX #+end_src where we first copy the raw data to ~/tmp~, parse it, reconstruct it, perform charge calibration and finally fit the gas gain data. That produces plots in [[file:/tmp/out/raw_267_2023-11-30_15-01-39/]]. For the thesis we use: ~gas_gain_run_267_chip_3_2_90_min_1541650002.pdf~ The gas gain of the fit and from the data are both in the title of the plot. Fit: 3604.3 Data: 3171.0 *** Final Septemboard calibrations :PROPERTIES: :CUSTOM_ID: sec:operation_calibration:final_septemboard_calibration :END: The detector was calibrated according to the descriptions of the previous sections and appendix [[#sec:appendix:calibration]] prior to both major data taking campaigns at CAST (see sec. [[#sec:cast:data_taking_campaigns]] for a detailed overview of both campaigns), once in October 2017 and then again in July 2018. For an overview of all calibration parameters of interest, see the appendix [[#sec:appendix:septemboard_calibrations]]. Tables [[tab:daq:thl_ths_calibration_run2]] and [[tab:daq:thl_ths_calibration_run3]] show the =THL= and =THS= DAC [fn:dacs] values used for the Septemboard detector at CAST during the data taking campaign from October 2017 to March 2018 (Run-2) and October 2018 to December 2018 (Run-3), respectively. The other DACs were all set to the same values for all chips in both data taking campaigns with the detector, shown in tab. [[tab:daq:common_dac_values]]. 
#+CAPTION: The =THL= and =THS= DAC values for each of the chips of the #+CAPTION: Septemboard (board H) detector used at CAST for the data taking #+CAPTION: campaign from October 2017 to March 2018 (Run-2). #+NAME: tab:daq:thl_ths_calibration_run2 #+ATTR_LATEX: :booktabs t |-----+--------+--------+--------+---------+---------+--------+--------| | DAC | E6 W69 | K6 W69 | H9 W69 | H10 W69 | G10 W69 | D9 W69 | L8 W69 | |-----+--------+--------+--------+---------+---------+--------+--------| | THL | 435 | 435 | 405 | 450 | 450 | 400 | 470 | | THS | 66 | 69 | 66 | 64 | 66 | 65 | 66 | |-----+--------+--------+--------+---------+---------+--------+--------| #+CAPTION: The =THL= and =THS= DAC values for each of the chips of the #+CAPTION: Septemboard (board H) detector used at CAST for the data taking #+CAPTION: campaign from October 2018 to December 2018 (Run-3). #+NAME: tab:daq:thl_ths_calibration_run3 #+ATTR_LATEX: :booktabs t |-----+--------+--------+--------+---------+---------+--------+--------| | DAC | E6 W69 | K6 W69 | H9 W69 | H10 W69 | G10 W69 | D9 W69 | L8 W69 | |-----+--------+--------+--------+---------+---------+--------+--------| | THL | 419 | 386 | 369 | 434 | 439 | 402 | 462 | | THS | 68 | 66 | 66 | 65 | 69 | 65 | 64 | |-----+--------+--------+--------+---------+---------+--------+--------| #+CAPTION: DAC values and settings that are common between data taking periods and #+CAPTION: all chips of the Septemboard for CAST, from the ~fsr~ configuration file. #+NAME: tab:daq:common_dac_values #+ATTR_LATEX: :booktabs t |-------------+------------| | DAC | Value | |-------------+------------| | IKrum | 20 | | Hist | 0 | | GND | 80 | | Coarse | 7 | | CTPR | 4294967295 | | BiasLVDS | 128 | | SenseDAC | 1 | | DACCode | 6 | | RefLVDS | 128 | | Vcas | 130 | | ExtDAC | 0 | | Disc | 127 | | Preamp | 255 | | FBK | 128 | | BuffAnalogA | 127 | | BuffAnalogB | 127 | |-------------+------------| [fn:dacs] The ~THL~ DAC is the global threshold DAC of all pixels. 
The ~THS~ DAC is responsible for the range in which each pixel can be adjusted around the global value. See appendix [[#sec:appendix:calibration:timepix]] for more information. **** TODOs for this section [1/6] :noexport: - [ ] *MAYBE MOVE THIS TO APPENDIX TOO?* - [ ] *MAKE SURE TO ADD CALIBRATIONS TO APPENDIX!* - [ ] *TURN PLOT INTO TIKZ* - [X] *WHAT ELSE*? - [ ] *DISTRIBUTIONS OF 4-BIT DAC?* Only appendix - [ ] *THEORETICAL THRESHOLDS COMPUTED AS IN SECTION ABOVE. RESULT ADDED TO TABLE?* **** Generate the FSR tables for all chips and each run period :extended: :PROPERTIES: :CUSTOM_ID: sec:calib:generate_fsr_table :END: Let's generate the tables containing the table for the FSR DAC values for all chips for each of the different run periods. We will use the =ChipCalibrations= directory that is part of the =TimepixAnalysis= repository in the =resources= directory. Further, we will create a plot of all pixels showing the noise peak in THL values based on the optimized equalization. #+begin_src nim :tangle code/generate_fsr_table.nim :results raw import std / [sequtils, strutils, os, tables, algorithm] const path = "/home/basti/CastData/ExternCode/TimepixAnalysis/resources/ChipCalibrations/" const periods = ["Run2", "Run3"] const chipInfo = "chipInfo.txt" const thrMean = "thresholdMeans$#.txt" const chips = toSeq(0 .. 
6) import ggplotnim proc readThresholdMeans(path: string, chip: int): DataFrame = echo path / thrMean result = readCsv(path / (thrMean % $chip), sep = '\t', colNames = @["x", "y", "min", "max", "bit", "opt"]) .select("opt") .rename(f{"THL" <- "opt"}) .mutate(f{"chip" <- chip}) # parse the names of the chips from the run info file var df = newDataFrame() for period in periods: var header = @["DAC"] var tab = initTable[int, seq[(string, int)]]() var dfPeriod = newDataFrame() for chip in chips: proc toTuple(s: seq[seq[string]]): seq[(string, int)] = for x in s: doAssert x.len == 2 result.add (x[0], x[1].parseInt) let chipPath = path / period / "chip" & $chip let data = readFile(chipPath / "fsr" & $chip & ".txt") .splitLines .filterIt(it.len > 0) .mapIt(it.split) .toTuple() tab[chip] = data # read chip name and add to header proc readChipName(chip: int): string = result = readFile(chipPath / chipInfo) .splitLines()[0] result.removePrefix("chipName: ") header.add readChipName(chip) dfPeriod.add readThresholdMeans(chipPath, chip) dfPeriod["Run"] = period df.add dfPeriod proc invertTable(tab: Table[int, seq[(string, int)]]): Table[string, seq[(int, int)]] = result = initTable[string, seq[(int, int)]]() for chip, data in pairs(tab): for (dac, value) in data: if dac notin result: result[dac] = newSeq[(int, int)]() result[dac].add (chip, value) proc wrap(s: string): string = "|" & s & "|\n" proc toOrgTable(s: seq[seq[string]], header: seq[string]): string = let tabLine = wrap toSeq(0 ..< header.len).mapIt("------").join("|") result = tabLine result.add wrap(header.join("|")) result.add tabLine for x in s: doAssert x.len == header.len result.add wrap(x.join("|")) result.add tabLine proc toOrgTable(tab: Table[string, seq[(int, int)]], header: seq[string]): string = var commonDacs = newSeq[seq[string]]() var diffDacs = newSeq[seq[string]]() for (dac, row) in pairs(tab): var fullRow = @[dac] let toAdd = row.sortedByIt(it[0]).mapIt($it[1]) if toAdd.deduplicate.len > 1: 
fullRow.add toAdd diffDacs.add fullRow else: commonDacs.add @[dac, toAdd.deduplicate[0]] result.add toOrgTable(diffDacs, header) result.add "\n\n" result.add toOrgTable(commonDacs, @["DAC", "Value"]) # now add common dacs table echo "Run: ", period echo tab.invertTable.toOrgTable(header) echo df["THL", float].min ggplot(df.filter(f{`THL` > 100}), aes("THL", fill = factor("chip"))) + facet_wrap("Run") + geom_histogram(binWidth = 1.0, position = "identity", alpha = 0.7, hdKind = hdOutline) + ggtitle("Optimized THL distribution of the noise peak for each chip") + ylab(r"\# pixels", margin = 2.0) + facetHeaderText(font = font(12.0, alignKind = taCenter)) + themeLatex(fWidth = 0.9, width = 600, baseTheme = singlePlot) + scale_x_continuous(breaks = 8) + margin(left = 3.0, right = 3.5) + ggsave("/home/basti/phd/Figs/detector/calibration/septemboard_all_thl_optimized.pdf", useTeX = true, standalone = true, width = 1000, height = 600) #+end_src #+RESULTS: Run: Run2 |-----+--------+--------+--------+---------+---------+--------+--------| | DAC | E6 W69 | K6 W69 | H9 W69 | H10 W69 | G10 W69 | D9 W69 | L8 W69 | |-----+--------+--------+--------+---------+---------+--------+--------| | THL | 435 | 435 | 405 | 450 | 450 | 400 | 470 | | THS | 66 | 69 | 66 | 64 | 66 | 65 | 66 | |-----+--------+--------+--------+---------+---------+--------+--------| |-------------+------------| | DAC | Value | |-------------+------------| | IKrum | 20 | | Hist | 0 | | GND | 80 | | Coarse | 7 | | CTPR | 4294967295 | | BiasLVDS | 128 | | SenseDAC | 1 | | DACCode | 6 | | RefLVDS | 128 | | Vcas | 130 | | ExtDAC | 0 | | Disc | 127 | | Preamp | 255 | | FBK | 128 | | BuffAnalogA | 127 | | BuffAnalogB | 127 | |-------------+------------| Run: Run3 |-----+--------+--------+--------+---------+---------+--------+--------| | DAC | E6 W69 | K6 W69 | H9 W69 | H10 W69 | G10 W69 | D9 W69 | L8 W69 | |-----+--------+--------+--------+---------+---------+--------+--------| | THL | 419 | 386 | 369 | 434 | 439 | 402 | 
462 | | THS | 68 | 66 | 66 | 65 | 69 | 65 | 64 | |-----+--------+--------+--------+---------+---------+--------+--------| |-------------+------------| | DAC | Value | |-------------+------------| | IKrum | 20 | | Hist | 0 | | GND | 80 | | Coarse | 7 | | CTPR | 4294967295 | | BiasLVDS | 128 | | SenseDAC | 1 | | DACCode | 6 | | RefLVDS | 128 | | Vcas | 130 | | ExtDAC | 0 | | Disc | 127 | | Preamp | 255 | | FBK | 128 | | BuffAnalogA | 127 | | BuffAnalogB | 127 | |-------------+------------| *** High voltage :PROPERTIES: :CUSTOM_ID: sec:operation_calibration:high_voltage :END: The high voltage (HV) settings used for the Septemboard detector are shown in tab. [[tab:operation_calibration:hv_settings]]. The target is a drift field on the order of $\SI{500}{V.cm⁻¹}$ and an amplification field of about $\SI{60}{kV.cm⁻¹}$. The main voltages to choose are the grid voltage (to determine the amplification field) and the cathode voltage (to determine the drift field). The other voltages are computed based on a constant field gradient. Entries ring 1 and ring 29 are the voltages applied to the field shaping ring running around the detector volume to achieve a more homogeneous field. The HV for the Septemboard is controlled via an iseg [fn:iseg] HV module, while the veto scintillator (requiring positive high voltage) is controlled via a CAEN N470 [fn:caen]. #+CAPTION: Table of high voltages in use for the InGrid Mk. IV. #+CAPTION: Note that the veto scintillator is not controlled via #+CAPTION: the iseg module, but by a CAEN N470. 
#+NAME: tab:operation_calibration:hv_settings
#+ATTR_LATEX: :booktabs t
|-------------+---------+-------------+------------------|
| Description | Channel | Voltage / V | TripCurrent / mA |
|-------------+---------+-------------+------------------|
| grid        |       0 |        -300 |            0.050 |
| anode       |       1 |        -375 |            0.050 |
| cathode     |       2 |       -1875 |            0.050 |
| ring 1      |       3 |        -415 |            0.100 |
| ring 29     |       4 |       -1830 |            0.100 |
| veto scinti |       5 |       +1200 |                2 |
| SiPM        |       6 |       -65.6 |             0.05 |
|-------------+---------+-------------+------------------|

[fn:iseg] https://iseg-hv.com/
[fn:caen] https://caen.it/

**** TODOs for this section [3/3] :noexport:

- [X] *INTRODUCE RING 1 AND RING 29*!!!
- [X] *MENTION HV SETTINGS USED* Table plus short description.
- [X] *SHOWN IN CAST CHAPTER. MOVE HERE?* -> Moved here, but still shown in CAST chapter too.

** FADC calibration
:PROPERTIES:
:CUSTOM_ID: sec:operation_calibration:fadc
:END:

The FADC requires attention in multiple respects. First, settings need to be chosen that configure the operating characteristics, the data readout and the trigger threshold. Next, the Ortec 474 shaping amplifier has multiple settings of its own. Finally, in order to interpret the signals received from the FADC, a so-called "pedestal run" should be recorded.

- FADC settings :: The FADC settings -- more details in the configuration file explanation of appendix section [[#sec:daq:tos_config_file]] -- configure the FADC to run at a frequency of $\SI{1}{GHz}$, so as to cover a running time interval of $\SI{2.56}{μs}$ in each channel. While it could run at up to $\SI{2}{GHz}$, $\SI{1}{ns}$ per time bin is accurate enough given the time scales associated with the amplifier (see below) and a longer interval is more useful. Further, an external trigger is sent out to TOS if the threshold is exceeded. The threshold itself is set to $\SI{-40}{mV}$ [fn:trigger_threshold]. The value was chosen by trial and error to avoid triggers from baseline noise.
Given the range of up to $\SI{-1}{V}$, the relative threshold is quite low. Finally, the operation mode and data readout are set, the input channel is chosen and the pedestal run parameters are configured (see below).
- Amplifier settings :: The Ortec 474 shaping amplifier has three settings of note: the absolute gain as a multiplier, the signal integration time and the differentiation time. The initial settings were an amplification of =6x=, an integration time of $\SI{50}{ns}$ and a differentiation time of $\SI{50}{ns}$. However, these were changed during the data taking campaign, see more on this in section [[#sec:calibration:fadc_noise]].
- Pedestals :: The $4 · \num{2560}$ registers of the FADC form 4 separate cyclic registers. Due to hardware implementation details, the absolute recorded value of each register is arbitrary. In a pedestal run, multiple measurements, $\mathcal{O}(10)$ of a certain length ($\SI{100}{ms}$ in our case), are performed and the recorded values averaged. The resulting values represent the typical baseline value of each register, hence the name 'pedestal'. To interpret a real measured signal, these pedestals are subtracted from the recorded signal. Each of the 4 FADC channels may have very different pedestal values, but within a single channel they usually agree to within $\lesssim\num{50}$ ADC values. Fig. [[fig:daq:fadc_pedestal_run]] shows the pedestals of all 4 FADC channels as they were recorded before CAST data taking started. The pedestals drift over time, but the associated time scales are long. Alternatively, the pedestals can be computed from real data via a truncated mean in each register, which we discuss later in sec. [[#sec:reco:fadc_pedestal_calc]].

More details on each of these will be given later where they are of importance.

#+CAPTION: The FADC pedestal run used for CAST data initially, split by each of the 4 FADC channels.
#+CAPTION: Each channel's pedestals vary by about $\mathcal{O}(30)$ ADC values. The first few registers
#+CAPTION: in each channel are not shown, as they are outlying by $\sim\num{100}$.
#+NAME: fig:daq:fadc_pedestal_run
#+ATTR_LATEX: :width 1\textwidth
[[~/phd/Figs/detector/calibration/fadc_pedestal_split_by_channel.pdf]]

[fn:fadc_manual] https://archive.org/details/manualzilla-id-5646050/

[fn:fadc_pedestals_manual] The FADC pedestals can also be computed from real data by computing a truncated mean of all FADC files in a data taking run. This yields a usable pedestal to use for that run. See the extended version of this document for a section about this.

[fn:trigger_threshold] The trigger threshold DAC is a 12-bit DAC. Its values correspond to $\SI{-1}{V}$ at ~000~ and $\SI{1}{V}$ at ~FFF~, i.e. a DAC value $d$ maps to $U(d) = \SI{-1}{V} + \frac{d}{4095} · \SI{2}{V}$. Hence the value $\num{1966}$ seen in the configuration file corresponds to $U(1966) \approx \SI{-40}{mV}$.

*** TODOs for this section [/] :noexport:

- [X] pedestals can be obtained from a pedestal run as explained here _or_ computed from real data due to signals making up small fraction of data and having lots of statistics in one run. Explain biased truncated mean used.
- [X] *MOVE PEDESTAL DISCUSSION TO* [[#sec:calibration]]! -> We will just mention here that they are "random" values and we'll use real data later to correct for them. -> *OR* data reconstruction chapter!!! -> Moved to [[#sec:reco:fadc_pedestal_calc]]. Calibration is "higher level" than this.
- [X] *MOVE THE PEDESTAL* calibration to the later chapter on our general calibration stuff? *OR* data reconstruction!!!
- [X] *EQUATION* To convert FADC value of 1966 to 40 mV!!!
- [X] *THRESHOLD. 40 mV.
TO AVOID NOISE TRIGGERS* - [X] *AMPLIFIER SETTINGS* (CAST data taking notes) - [X] *PEDESTAL* - [X] *REWRITE TAKING INTO ACCOUNT PEDESTAL CALCULATION FROM DATA* - [ ] *LINK TO PEDESTAL CALCULATIONS FROM DATA CODE* *** Generate plot of pedestals :extended: #+begin_src nim :tangle code/fadc_pedestals_plot.nim import std / [strutils, os, sequtils, algorithm] import ggplotnim const path = "/home/basti/CastData/ExternCode/TimepixAnalysis/resources/pedestalRuns/" const file = "pedestalRun000042_1_182143774.txt-fadc" # read the FADC files using our CSV parser. Everything `#` is header # aside from the last 3 lines. Skip those using `maxLines` var df = readCsv(path / file, header = "#", maxLines = 10240) .rename(f{"val" <- "nb of channels: 0"}) # generate indices 0, 0, 0, 0, 1, 1, 1, 1, ..., 2559, 2559, 2559, 2559 df["Register"] = toSeq(0 ..< 2560).repeat(4).concat.sorted df["Channel"] = @[1, 2, 3, 4].repeat(2560).concat when false: df["Channel"] = @[1, 2, 3, 4].repeat(2560) df = df.mutate(f{int -> bool: "even?" 
~ `Channel` mod 2 == 0}) echo df echo df.tail(20) ggplot(df, aes("Channel", "val", color = "even?")) + geom_point(size = 1.0) + ggtitle("FADC channel values of pedestal run") + ggsave("/home/basti/phd/Figs/detector/calibration/fadc_pedestal_run.pdf") # useTeX = true, standalone = true) ggplot(df.group_by("even?").filter(f{float -> bool: `val` < percentile(col("val"), 95)}), aes("Channel", "val", color = "even?")) + facet_wrap("even?", scales = "free") + facetMargin(0.5) + margin(bottom = 1.0, right = 3.0) + geom_point(size = 1.0) + legendPosition(0.91, 0.0) + ggtitle("FADC channel values of pedestal run, split by even and odd channel numbers") + ggsave("/home/basti/phd/Figs/detector/calibration/fadc_pedestal_split_even_odd.pdf", width = 1200, height = 600)# # useTeX = true, standalone = true) else: ggplot(df.group_by("Channel").filter(f{float -> bool: `val` < percentile(col("val"), 99)}), aes("Register", "val", color = "Channel")) + facet_wrap("Channel", scales = "free") + geom_point(size = 1.0) + facetMargin(0.7) + margin(bottom = 1.0, right = 2.0) + scale_y_continuous(breaks = 5) + legendPosition(0.87, 0.0) + ylab("Register") + ggtitle("FADC register pedestal values, split by channels") + xlab("Channel", margin = 0.0) + themeLatex(fWidth = 1.0, width = 600, baseTheme = singlePlot) + ggsave("/home/basti/phd/Figs/detector/calibration/fadc_pedestal_split_by_channel.pdf", useTeX = true, standalone = true) #+end_src

#+RESULTS:
#+begin_example
[INFO]: No plot ratio given, using golden ratio.
INFO: The integer column `Register` has been automatically determined to be continuous. To overwrite this behavior add a `+ scale_x/y_discrete()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to. Or apply a `factor` to the column name in the `aes` call, i.e. `aes(..., factor("Register"), ...)`.
INFO: The integer column `Channel` has been automatically determined to be discrete. To overwrite this behavior add a `+ scale_x/y_continuous()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to.
[INFO] TeXDaemon ready for input.
shellCmd: command -v lualatex
shellCmd: lualatex -output-directory /home/basti/phd/Figs/detector/calibration /home/basti/phd/Figs/detector/calibration/fadc_pedestal_split_by_channel.tex
Generated: /home/basti/phd/Figs/detector/calibration/fadc_pedestal_split_by_channel.pdf
#+end_example

*** Calculate pedestals from real FADC data [/] :extended:

We will now see what happens if we compute the FADC pedestals from the raw data by computing a truncated mean of all FADC files in a single run and comparing to the real pedestal run we normally use as a reference. *UPDATE*: <2022-12-26 Mon 00:48> This was a big success. We will use this in our real data from now on, move this to ~statusAndProgress~ and rewrite the section above with that in mind, i.e. explain how to calc pedestals.
- [ ] *REWRITE MAIN TEXT*
- [ ] *TEST PEDESTAL CALC FOR BACKGROUND DATA*
- [ ] look at c-blake's ideas using `MovingStat`, a logarithmic histogram etc. to avoid the multiple passes over the data as we do using `sort`! See my journal.org notes about this!
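Before diving into the implementation, the core idea is simple: per register, sort all samples of a run, drop the lowest and highest quantiles and average the rest. A minimal single-register sketch in Python (quantile indices rounded the way the Nim ~percIdx~ does; purely illustrative, the real code operates on full runs and all registers at once):

```python
def trunc_mean(samples, q_low, q_high):
    # biased truncated mean: drop everything below the q_low quantile and
    # above the q_high quantile, then average the remaining samples
    s = sorted(samples)
    lo = max(0, round(len(s) * q_low))
    hi = min(len(s), round(len(s) * q_high))
    return sum(s[lo:hi]) / (hi - lo)

# a register whose samples sit at the pedestal value 1, plus one large signal
print(trunc_mean([1000] + [1] * 99, 0.0, 0.95))  # -> 1.0
```

The truncation makes the estimate robust against the (comparatively rare) samples that contain actual signals, at the cost of a small bias.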
#+begin_src nim :tangle code/attempt_fadc_pedestals_from_data.nim import std / [strutils, os, sequtils, sugar, algorithm] import ggplotnim # to read from H5 input import nimhdf5 import ingrid / tos_helpers import ingrid / ingrid_types proc readFadc(f: string): DataFrame = # read the FADC files using our CSV parser. Everything `#` is header # aside from the last 3 lines. Skip those using `maxLines` result = readCsv(f, header = "#", maxLines = 10240) .rename(f{"val" <- "nb of channels: 0"}) #result["Channel"] = toSeq(0 ..< result.len) result["Register"] = toSeq(0 ..< 2560).repeat(4).concat.sorted result["Channel"] = @[1, 2, 3, 4].repeat(2560).concat const Size = 5000 proc getFadcDset(h5f: H5File, runNumber: int): H5DataSet = let fadcGroup = fadcRawPath(runNumber) doAssert fadcGroup in h5f let group = h5f[fadcGroup.grp_str] result = h5f[(group.name / "raw_fadc").dset_str] proc readChannel(h5f: H5File, dset: H5DataSet, start: int): seq[uint16] = let toRead = min(Size, dset.shape[0] - start) result = read_hyperslab(dset, uint16, offset = @[start, 0], count = @[toRead, dset.shape[1]]) import weave proc percIdx(q: float, len: int): int = (len.float * q).round.int proc biasedTruncMean*[T](x: Tensor[T], axis: int, qLow, qHigh: float): Tensor[float] = ## Computes the *biased* truncated mean of `x` by removing the quantiles `qLow` on the ## bottom end and `qHigh` on the upper end of the data. Both are given as fractions ## of events to remove on the respective end. E.g. `qLow = 0.05, qHigh = 0.99` removes ## anything below the 5-th percentile and above the 99-th. ## ## Note: uses `weave` internally to multithread along the desired axis!
doAssert x.rank == 2 result = newTensorUninit[float](x.shape[axis]) init(Weave) let xBuf = x.toUnsafeView() let resBuf = result.toUnsafeView() let notAxis = if axis == 0: 1 else: 0 let numH = x.shape[notAxis] # assuming row column major, 0 is # rows, 1 is # cols let numW = x.shape[axis] parallelFor i in 0 ..< numW: captures: {xBuf, resBuf, numH, numW, axis, qLow, qHigh} let xT = xBuf.fromBuffer(numH, numW) # get a sorted slice for index `i` let subSorted = xT.atAxisIndex(axis, i).squeeze.sorted let plow = percIdx(qLow, numH) let phih = percIdx(qHigh, numH) var resT = resBuf.fromBuffer(numW) ## compute the biased truncated mean by slicing sorted data to lower and upper ## percentile index var red = 0.0 for j in max(0, plow) ..< min(numH, phih): # loop manually as data is `uint16` to convert red += subSorted[j].float resT[i] = red / (phih - plow).float syncRoot(Weave) exit(Weave) defColumn(uint16, uint8) proc readFadcH5(f: string, runNumber: int): DataFrame = #seq[DataTable[colType(uint16, uint8)]] = let h5f = H5open(f, "r") let registers = toSeq(0 ..< 2560).repeat(4).concat.sorted let channels = @[1, 2, 3, 4].repeat(2560).concat let idxs = arange(3, 2560, 4) ## XXX: maybe just read the hyperslab that corresponds to a single channel over ## the whole run? That's the whole point of those after all. ## -> That is way too slow unfortunately ## XXX: better replace logic by going row wise N elements instead of column wise. ## Has the advantage that our memory requirements are constant and not dependent ## on the number of elements in the run. If we then average over the resulting N ## pedestals, it should be fine. let dset = getFadcDset(h5f, runNumber) var val = newTensor[float](2560 * 4) when true: var slices = 0 for i in arange(0, dset.shape[0], Size): # read let data = readChannel(h5f, dset, i) let toRead = min(Size, dset.shape[0] - i) echo "Reading..." 
let dT = data.toTensor.reshape([toRead, data.len div toRead]) echo "Read ", i, " to read up to : ", toRead, " now processing..." inc slices val += biasedTruncMean(dT, axis = 1, qLow = 0.2, qHigh = 0.98) for i in 0 ..< 2560 * 4: val[i] /= slices.float result = toDf({"Channel" : channels, val, "Register" : registers}) #for fadc in h5f.iterFadcFromH5(runNumber): # let datCh3 = fadc.data[idxs] # .mapIt(it.int) # var df = toDf({"val" : dat, "Register" : registers, "Channel" : channels}) # result.add df ## Main function to avoid bug of closure capturing old variable proc readFadcData(path: string, runNumber: int): DataFrame = var df = newDataFrame() if path.endsWith(".h5"): doAssert runNumber > 0 df = readFadcH5(path, runNumber) else: var dfs = newSeq[DataFrame]() var i = 0 for f in walkFiles(path / "*.txt-fadc"): echo "Parsing ", f dfs.add readFadc(f) inc i if i > 5000: break let dfP = assignStack(dfs) var reg = newSeq[int]() var val = newSeq[float]() var chs = newSeq[int]() for (tup, subDf) in groups(dfP.group_by(["Channel", "Register"])): echo "At ", tup let p20 = percentile(subDf["val", float], 20) let p80 = percentile(subDf["val", float], 80) reg.add tup[1][1].toInt chs.add tup[0][1].toInt let dfF = if p80 - p20 > 0: subDf.filter(dfFn(subDf, f{float: `val` >= p20 and `val` <= p80})) else: subDf val.add dfF["val", float].mean df = toDf({"Channel" : chs, val, "Register" : reg}) df.writeCsv("/t/pedestal.csv") echo df.pretty(-1) result = df proc main(path: string, outfile: string = "/t/empirical_fadc_pedestal_diff.pdf", plotVoltage = false, runNumber = -1, onlyCsv = false ) = let pData = readFadcData(path, runNumber) if onlyCsv: return const path = "/home/basti/CastData/ExternCode/TimepixAnalysis/resources/pedestalRuns/" const file = "pedestalRun000042_1_182143774.txt-fadc" let pReal = readFadc(path / file) echo "DATA= ", pData echo "REAL= ", pReal var df = bind_rows([("Data", pData), ("Real", pReal)], "ID") .spread("ID", "val") .mutate(f{"Diff" ~ abs(`Data` - `Real`)}) 
# alternatively compute the voltage corresponding to the FADC register values, # assuming 12 bit working mode (sampling_mode == 0) .mutate(f{"DiffVolt" ~ `Diff` / 2048.0}) var plt: GgPlot if plotVoltage: plt = ggplot(df, aes("Register", "DiffVolt", color = "Channel")) + ylim(0, 100.0 / 2048.0) else: plt = ggplot(df, aes("Register", "Diff", color = "Channel")) + ylim(0, 100) plt + geom_point(size = 1.5, alpha = 0.75) + ylab("Difference between mean and actual pedestal [ADC]") + ggtitle("Attempt at computing pedestal values based on truncated mean of data") + margin(top = 2) + xlim(0, 2560) + ggsave(outfile) when isMainModule: import cligen dispatch main #+end_src

We can call it on different runs with FADC data:

#+begin_src sh
# for run 77 instead use: -p /mnt/1TB/CAST/2017/DataRuns/Run_77_171102-05-24
# and: --outfile ~/phd/Figs/detector/calibration/pedestal_from_data_compare_real_run77.pdf
code/attempt_fadc_pedestals_from_data \
    -p ~/CastData/data/2017/Run_96_171123-10-42 \
    --outfile ~/phd/Figs/detector/calibration/pedestal_from_data_compare_real_run96.pdf
#+end_src

#+RESULTS:

for an early 2017 run

#+begin_src sh
code/attempt_fadc_pedestals_from_data \
    -p /mnt/1TB/CAST/2018_2/DataRuns/Run_303_181217-16-52 \
    --outfile ~/phd/Figs/detector/calibration/pedestal_from_data_compare_real_run303.pdf
#+end_src

#+RESULTS:

for a late 2018 run. These yield fig.
sref:fig:daq:fadc_pedestals_from_data_compare_real #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Run 77") (label "fig:daq:fadc_pedestals_from_data_compare_real_run77") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/detector/calibration/pedestal_from_data_compare_real_run77.pdf")) (subfigure (linewidth 0.5) (caption "Run 303") (label "fig:reco:fadc_pedestals_from_data_compare_real_run303") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/detector/calibration/pedestal_from_data_compare_real_run303.pdf")) (caption "Calculation of the FADC pedestals from data by averaging all channels over all FADC data files of a single run using a truncated mean from the 20-th to 80-th percentile of the distribution. Both data runs show a comparatively small variance. Arguably it may make sense to \\textit{always} compute it based on each run instead of relying on a pedestal run though.") (label "fig:daq:fadc_pedestals_from_data_compare_real") ) #+end_src

Surprisingly, the deviation for the late 2018 run is lower than for the 2017 run, despite the 2017 run being closer in time to the real pedestal run. Keep in mind that in our data taking we used the 12 bit readout mode. This means the register values divided by \num{2048} correspond to the voltages recorded by the register. As such the absolute values of the deviations are quite small, $\lesssim \SI{48}{mV}$, which is little given the absolute range of $±\SI{1}{V}$ of the FADC.
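The conversion just mentioned can be written down directly. A small sketch, assuming (as stated above) that in 12 bit mode a register difference divided by 2048 gives the voltage difference:

```python
def adc_to_volt(register_counts, bits=12):
    # in 12 bit mode a register difference of N counts corresponds to
    # N / 2**(bits - 1) = N / 2048 volts
    return register_counts / 2 ** (bits - 1)

# the ~100 ADC maximum deviation visible in the comparison plots
print(adc_to_volt(100) * 1e3)  # -> 48.828125 (mV)
```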
#+begin_src nim :tangle code/fadc_apply_data_based_pedestal.nim import nimhdf5, ggplotnim import std / [strutils, os, sequtils] import ingrid / [tos_helpers, fadc_helpers, ingrid_types, fadc_analysis] proc stripPrefix(s, p: string): string = result = s result.removePrefix(p) proc plotIdx(df: DataFrame, fadcData: Tensor[float], idx: int) = let xmin = df["argMinval", int][idx] let xminY = df["minvals", float][idx] let xminlineX = @[xmin, xmin] # one point for x of min, max let fData = fadcData[idx, _].squeeze let xminlineY = linspace(fData.min, fData.max, 2) let riseStart = df["riseStart", int][idx] let fallStop = df["fallStop", int][idx] let riseStartX = @[riseStart, riseStart] let fallStopX = @[fallStop, fallStop] let baseline = df["baseline", float][idx] let baselineY = @[baseline, baseline] let df = toDf({ "x" : toSeq(0 ..< 2560), "baseline" : baseline, "data" : fData, "xminX" : xminlineX, "xminY" : xminlineY, "riseStart" : riseStartX, "fallStop" : fallStopX }) # Comparison has to be done by hand unfortunately let path = "/t/fadc_spectrum_baseline.pdf" ggplot(df, aes("x", "data")) + geom_line() + geom_point(color = color(0.1, 0.1, 0.1, 0.1)) + geom_line(data = df.head(2), aes = aes("x", "baseline"), color = "blue") + geom_line(data = df.head(2), aes = aes("xminX", "xminY"), color = "red") + geom_line(data = df.head(2), aes = aes("riseStart", "xminY"), color = "green") + geom_line(data = df.head(2), aes = aes("fallStop", "xminY"), color = "pink") + ggsave(path) proc plotCompare(data, real: Tensor[float]) = let path = "/t/fadc_spectrum_compare.pdf" for idx in 0 ..< data.shape[0]: let df = toDf({ "x" : toSeq(0 ..< 2560), "data" : data[idx, _].squeeze, "real" : real[idx, _].squeeze }) .gather(["data", "real"], "type", "vals") ggplot(df, aes("x", "vals", color = "type")) + geom_line() + #geom_point(color = color(0.1, 0.1, 0.1, 0.1)) + ggsave(path) sleep(1000) proc getFadcData(fadcRun: ProcessedFadcRun, pedestal: seq[uint16]): Tensor[float] = let ch0 = getCh0Indices() 
let fadc_ch0_indices = getCh0Indices() # we demand at least 4 dips, before we can consider an event as noisy n_dips = 4 # the percentile considered for the calculation of the minimum min_percentile = 0.95 numFiles = fadcRun.eventNumber.len var fData = ReconstructedFadcRun( fadc_data: newTensorUninit[float]([numFiles, 2560]), eventNumber: fadcRun.eventNumber, noisy: newSeq[int](numFiles), minVals: newSeq[float](numFiles) ) for i in 0 ..< fadcRun.eventNumber.len: let slice = fadcRun.rawFadcData[i, _].squeeze let data = slice.fadcFileToFadcData( pedestal, fadcRun.trigRecs[i], fadcRun.settings.postTrig, fadcRun.settings.bitMode14, fadc_ch0_indices).data fData.fadc_data[i, _] = data.unsqueeze(axis = 0) fData.noisy[i] = data.isFadcFileNoisy(n_dips) fData.minVals[i] = data.calcMinOfPulse(min_percentile) when false: let data = fData.fadcData.toSeq2D let (baseline, xMin, riseStart, fallStop, riseTime, fallTime) = calcRiseAndFallTime( data, false ) let df = toDf({ "argMinval" : xMin.mapIt(it.float), "baseline" : baseline.mapIt(it.float), "riseStart" : riseStart.mapIt(it.float), "fallStop" : fallStop.mapIt(it.float), "riseTime" : riseTime.mapIt(it.float), "fallTime" : fallTime.mapIt(it.float), "minvals" : fData.minvals }) for idx in 0 ..< df.len: plotIdx(df, fData.fadc_data, idx) sleep(1000) result = fData.fadcData proc readFadc(f: string): DataFrame = # read the FADC files using our CSV parser. Everything `#` is header # aside from the last 3 lines. 
Skip those using `maxLines` result = readCsv(f, header = "#", maxLines = 10240) .rename(f{"val" <- "nb of channels: 0"}) #result["Channel"] = toSeq(0 ..< result.len) result["Register"] = toSeq(0 ..< 2560).repeat(4).concat.sorted result["Channel"] = @[1, 2, 3, 4].repeat(2560).concat proc main(fname: string, runNumber: int) = var h5f = H5open(fname, "r") let pedestal = readCsv("/t/pedestal.csv") # created from above .arrange(["Register", "Channel"]) echo pedestal const path = "/home/basti/CastData/ExternCode/TimepixAnalysis/resources/pedestalRuns/" const file = "pedestalRun000042_1_182143774.txt-fadc" let pReal = readFadc(path / file) let fileInfo = h5f.getFileInfo() for run in fileInfo.runs: if run == runNumber: let fadcRun = h5f.readFadcFromH5(run) let fromData = fadcRun.getFadcData(pedestal["val", uint16].toSeq1D) let fromReal = fadcRun.getFadcData(pReal["val", uint16].toSeq1D) plotCompare(fromData, fromReal) when isMainModule: import cligen dispatch main #+end_src

- [ ] *SHOW PLOT OF THE ABOVE (~plotCompare~) AS EXAMPLE OF BAD PEDESTALS VS GOOD PEDESTALS?* -> Would be nice for extended thesis!

** Scintillator calibration
:PROPERTIES:
:CUSTOM_ID: sec:operation_calibration:scintillators
:END:

The final pieces of the detector requiring calibration are the two scintillators. As both are only used as digital triggers and no analogue signal information is recorded, a suitable discriminator threshold voltage has to be set.

*** TODOs for this section :noexport:

- [ ] *MENTION THAT WE PREFER TOO HIGH THRESHOLD OVER TOO LOW? BETTER FEWER SIGNALS THAN LOTS OF NOISE?*

*** Large scintillator paddle

The large veto scintillator paddle was calibrated at the RD51 laboratory at CERN prior to the data taking campaign in March 2017. Using two smaller, calibrated scintillators to create a three-fold coincidence setup, measurements were taken at different thresholds. Each measurement was $\SI{10}{\minute}$ long.
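Each such measurement yields a number of counts over a fixed time window, which translates into a rate with a Poisson $\sqrt{N}$ uncertainty (the error bars shown later are of this kind). A quick sketch:

```python
import math

def rate_with_error(counts, t_seconds):
    # counts in a fixed time window -> rate and its Poisson (sqrt(N)) error
    return counts / t_seconds, math.sqrt(counts) / t_seconds

# e.g. 27012 raw counts in one 10 minute measurement
r, dr = rate_with_error(27012, 600.0)
print(round(r, 2), round(dr, 3))  # -> 45.02 0.274
```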
An amplifier was installed after the PMT to increase the signal. The Canberra 2007 base and Bicron Corp. 31.49x15.74M2BC408/2-X PMT require a positive high voltage. It was supplied with $\SI{1200}{V}$. Table [[tab-daq-scintillator_coincidence_measurements]] shows the recorded measurements. Based on these, a threshold of $\SI{-110}{mV}$ was chosen for the CAST data taking. Fig. [[fig:daq:veto_scintillator_coincidence]] also shows the data from the table. While the coincidence counts at $\SI{-110}{mV}$ are lower than the visible plateau starting at $\SI{-100}{mV}$, this threshold was chosen because the raw counts were still considered too high compared to the expectation based on the cosmic muon rate and the size of the scintillator. [fn:threshold_choice]

#+CAPTION: Measurements for the calibration of the large veto scintillator taken
#+CAPTION: at RD51 at CERN with two smaller, calibrated scintillators in a coincidence.
#+CAPTION: Each measurement was \SI{10}{min}. The thresholds set on the discriminator for
#+CAPTION: the veto scintillator were originally measured
#+CAPTION: with a 10x scaling and have been rescaled here to their correct values.
#+NAME: tab-daq-scintillator_coincidence_measurements
#+ATTR_LATEX: :booktabs t
| Threshold / mV | Counts Szinti | Counts Coincidence |
|----------------+---------------+--------------------|
|          -59.8 |         31221 |                634 |
|          -70.0 |         30132 |                674 |
|          -80.4 |         28893 |                635 |
|          -90.3 |         28076 |                644 |
|         -100.5 |         27012 |                684 |
|         -110.3 |         25259 |                566 |
|         -120.0 |         22483 |                495 |
|         -130.3 |         19314 |                437 |
|         -140.3 |         16392 |                356 |
|         -150.5 |         13677 |                312 |
|         -160.0 |         11866 |                267 |
|         -170.1 |         10008 |                243 |

#+CAPTION: Calibration measurements for the veto scintillator printed in table
#+CAPTION: [[tab-daq-scintillator_coincidence_measurements]]. The line is an
#+CAPTION: interconnection of all data points. The errors represent Poisson-like
#+CAPTION: $\sqrt{N}$ uncertainties.
#+NAME: fig:daq:veto_scintillator_coincidence [[~/phd/Figs/detector/calibration/veto_scintillator_calibration_coinc_rd51.pdf]] [fn:threshold_choice] While it is unclear to me now given it's been over 5 years, I believe at the time of the calibration we wrongly assumed a muon rate of $\SI{100}{Hz.m⁻².sr⁻¹}$ instead of about $\SI{1}{cm⁻².min⁻¹}$. The former number only works out if one integrates it over the $\cos^2(θ)$ dependence, _but only along $θ$_ and not $φ$! Either way, the number seems problematic. However, it did misguide us in likely choosing a too low threshold, as using the former number yields an expected number of counts of $\sim\num{32000}$ compared to only $\sim\num{20000}$ in our naive approach. **** TODOs for this section [/] :noexport: - [ ] *MUON RATE MENTIONED IN FOOTNOTE. Hz/min ? WHAT IS THAT. MISSING cm²?* -> *CHECK THIS!!!* - [X] Part that now lives in appendix. -> ??? I think referring to notes of scintillator calibration -> Referenced the appendix. **** Generate the plots of the scintillator calibration data :extended: #+begin_src nim :var tbl=tab-daq-scintillator_coincidence_measurements import ggplotnim, sequtils let df = toDf({ "Thr" : tbl["Threshold / mV"], "Szinti" : tbl["Counts Szinti"], "Coinc" : tbl["Counts Coincidence"] }) .mutate(f{"SzintiErr" ~ sqrt(`Szinti`)}, f{"CoincErr" ~ sqrt(`Coinc`)}) ## XXX: `ebLinesT` is broken?! 
ggplot(df, aes("Thr", "Szinti")) + geom_point() + geom_line() + geom_errorbar(aes = aes(yMin = f{`Szinti` - `SzintiErr`}, yMax = f{`Szinti` + `SzintiErr`}), errorBarKind = ebLines, color = parseHex("FF00FF")) + xlab(r"Threshold [\si{mV}]") + ylab(r"Counts [\#]") + ggtitle(r"Calibration measurements of \SI{10}{min} each") + ggsave("/home/basti/phd/Figs/detector/calibration/veto_scintillator_calibration_rd51.pdf", useTeX = true, standalone = true) ggplot(df, aes("Thr", "Coinc")) + geom_point() + geom_line() + geom_errorbar(aes = aes(yMin = f{`Coinc` - `CoincErr`}, yMax = f{`Coinc` + `CoincErr`}), errorBarKind = ebLines) + xlab(r"Threshold [\si{mV}]") + ylab(r"Counts [\#]") + ggtitle(r"Calibration measurements of \SI{10}{min} each in 3-way coincidence") + themeLatex(fWidth = 0.9, width = 600, baseTheme = singlePlot) + ggsave("/home/basti/phd/Figs/detector/calibration/veto_scintillator_calibration_coinc_rd51.pdf", useTeX = true, standalone = true) #+end_src

#+RESULTS:
#+begin_example
[INFO] TeXDaemon ready for input.
shellCmd: command -v lualatex
shellCmd: lualatex -output-directory /home/basti/phd/Figs/detector/calibration /home/basti/phd/Figs/detector/calibration/veto_scintillator_calibration_rd51.tex
Generated: /home/basti/phd/Figs/detector/calibration/veto_scintillator_calibration_rd51.pdf
[INFO]: No plot ratio given, using golden ratio.
[INFO] TeXDaemon ready for input.
shellCmd: command -v lualatex
shellCmd: lualatex -output-directory /home/basti/phd/Figs/detector/calibration /home/basti/phd/Figs/detector/calibration/veto_scintillator_calibration_coinc_rd51.tex
Generated: /home/basti/phd/Figs/detector/calibration/veto_scintillator_calibration_coinc_rd51.pdf
#+end_example

**** Notes taken of calibration before CAST data taking :extended:

See the appendix [[#sec:appendix:scintillator_calibration_notes]] for a reproduction of the notes taken during the calibration of the veto paddle scintillator.

**** Raw scintillator data :extended:

#+CAPTION: Calibration measurements for the veto scintillator printed in table
#+CAPTION: [[tab-daq-scintillator_coincidence_measurements]]. In this case the
#+CAPTION: raw data is shown instead of the coincidence. The line is simply an
#+CAPTION: interconnection of all data points. The error bars are colored so that
#+CAPTION: they are visible at all.
#+NAME: fig:daq:veto_scintillator_raw_counts
[[~/phd/Figs/detector/calibration/veto_scintillator_calibration_rd51.pdf]]

In particular looking at the raw counts, in hindsight I would now probably choose a threshold closer to \SI{-90}{mV} or \SI{-95}{mV}. But well.

**** Calculate expected rate :extended:

Let's compute the expected rate based on a mean cosmic muon rate at the surface and the area of the scintillator.
- [X] *NOTE: <2023-03-14 Tue 11:49>* The talk about the veto system of the SDD detector at the IAXO collaboration meeting March 2023 had the following number for the sea level muon rate: 0.017 Hz•cm⁻² -> Ah, this is close to ~1 cm⁻²•min⁻¹!

#+begin_src nim
import unchained
let rate = 0.017.Hz•cm⁻²
echo "Rate in ", rate.toDef(min⁻¹•cm⁻²)
#+end_src

#+RESULTS:
: Rate in 1.02 cm⁻²•min⁻¹

The scintillator has a size of 31.49x15.74 inches and we roughly have a mean cosmic muon rate of 1 cm⁻²•min⁻¹. Measurement time was 600 s.
#+begin_src nim
import unchained
let rate = 1.cm⁻²•min⁻¹
let area = 31.49.inch * 15.74.inch
let time = 600.s
echo "Expected rate: ", time * rate * area
#+end_src

#+RESULTS:
: Expected rate: 32617.1 UnitLess

So about 32,000 counts in the 10 min time frame. It's a bit frustrating that for some reason during that calibration we assumed a muon rate of 100 Hz m⁻² sr⁻¹, so that we only got an expected number of counts of about 20,000. If we assume the 100 Hz m⁻² sr⁻¹ number and integrate only over $θ$ (not $φ$ as we should also!) using the $\cos² θ$ dependence we get a more comparable number:

#+begin_src nim
import unchained
let integral = 1.5708.sr # ∫_{-π/2}^{π/2} cos²(θ) dθ = 1.5708
let rate = 100.Hz•m⁻²•sr⁻¹
let angleInt = 2*π
let time = 600.s
let area = 31.49.inch * 15.74.inch
echo rate * time * area * integral
#+end_src

#+RESULTS:
: 30138.2 UnitLess

In any case, it seems like our assumption of 20000 as seen in appendix [[#sec:appendix:scintillator_calibration_notes]] is clearly flawed and led to a possibly too large threshold for the discriminator. In addition: why the hell did I not take note of the size of the scintillators that Theodoros gave us? That would be a much better cross check for the expected rate. It is certainly possible that our choice of $\SI{-110}{mV}$ was actually due to an expected coincidence rate matching at that threshold given the sizes of the calibrated scintillators, but I can't verify that anymore.

*** SiPM

The SiPM was calibrated during the bachelor thesis of Jannes Schmitz in 2016 cite:JannesBSc based on a coincidence measurement with calibrated scintillators.

**** TODOs for this section [/] :noexport:

The exact threshold used *FIND A VALUE OR REPHRASE*.
- [ ] *FIND VALUE*
- [ ] *WELL* If I understand Jannes correctly there's not much in terms of a "value" to be had. Pretty much lost. One could try to measure that nowadays of course, but there's not really a point in doing so.
* Data reconstruction :Reconstruction:
:PROPERTIES:
:CUSTOM_ID: sec:reconstruction
:END:

#+LATEX: \minitoc

We will now go through the general data reconstruction for data taken with the Septemboard detector. We start with a short introduction to the data analysis framework ~TimepixAnalysis~ developed for this thesis, sec. [[#sec:reco:tpa]]. Then we cover the parsing of the ASCII based data format produced by TOS (see appendix [[#sec:daq:tos_output_format]] for a description of the data format) in sec. [[#sec:reco:tos_data_parsing]]. At this point we briefly discuss our expectation of the data properties recorded with the detector, sec. [[#sec:reco:event_shape]], as it motivates the kind of reconstruction that is performed. From here the explanation of the data reconstruction starts, sec. [[#sec:reco:data_reconstruction]], including cluster finding (sec. [[#sec:reco:cluster_finding]]) and the calculation of geometric properties, sec. [[#sec:reco:cluster_geometry]]. The FADC data reconstruction follows in section [[#sec:reco:fadc_data]]. Finally, the scintillators are covered in section [[#sec:reco:scintillator_data]]. An additional long section in appendix [[#sec:appendix:software]] goes through the software used for the data reconstruction, intended for people using these tools in the future. Appendix [[#sec:appendix:full_data_reconstruction]] shows how the full data reconstruction for all CAST data presented in this thesis is performed.

** TODOs for this section [/] :noexport:

About gridpix reco:
#+begin_quote
and its properties and shortcomings define the need for the other detector components.
#+end_quote
- [X] *WRITE CHAPTER SUMMARY* - explain the whole data reconstruction pipeline?
What we do with ingrid data, clustering, reconstruction - everything up to energy calibration
- [ ] wherever we finally show the real calibrations of the detector for each run, show:
  - [X] the full FSR (for one chip, then differences for each, particularly THS & THL) -> Done in operation_calibration
  - [X] show plot of THL calibration from SCurves, with the THL value as a "rectangle" line (from y axis to fit, down to x axis) -> Done in operation_calibration
  - [ ] in Polya plot show the theoretical threshold in electrons as a vertical line (computed from the used THL DAC value and THL calibration) -> Partially done in operation_calibration, but needs rework!!!
- [X] *SHORT INTRO TO TIMEPIX ANALYSIS AND NIM HERE. LIKE 2 PARAGRAPHS.* -> Below.

** ~TimepixAnalysis~ and Nim
:PROPERTIES:
:CUSTOM_ID: sec:reco:tpa
:END:

The data reconstruction software handling the processes mentioned in the remainder of this chapter is the ~TimepixAnalysis~ cite:TPA [fn:tpa_github] framework. It is a "framework" only in a loose sense, as it is a set of programs to parse, process and analyze data from different Timepix (usually GridPix) DAQ software packages. In addition, it contains a large number of tools to visualize that data, as well as to analyze auxiliary data like lists of data taking runs, CAST log files and more. The entire code base is written in the Nim programming language cite:nim [fn:nim_lang]. Nim is a statically typed, compiled language with a Python-like whitespace sensitive syntax, taking inspiration from Pascal, Modula and, in particular, ideas regarding type safety from Ada. Further, it has a strong metaprogramming focus with full access to its abstract syntax tree (AST) at compile time, offering Lisp-like macro functionality. This allows the construction of powerful and concise domain specific languages (DSLs).
Nim compiles its code by default first to C code, which can then utilize the decades of compiler optimization techniques available via GCC cite:gcc or Clang cite:LLVM:CGO04, while making it possible to target every platform supported by C (which effectively means almost all). As such it achieves performance on par with C, while providing high-level features normally associated with languages like Python. Nim was selected as the language of choice for ~TimepixAnalysis~ due to its combination of concise and clean syntax, high-level features for productivity, high performance thanks to native code generation, easy interfacing with existing C and C++ code bases and its strong metaprogramming features, which help reduce boilerplate code to a minimum. Appendix [[#sec:appendix:timepix_analysis]] contains a detailed overview of the important tools that are part of ~TimepixAnalysis~ and how they are used in the context of the data analysis presented in this thesis. Read it if you wish to understand how to recreate the results presented in this thesis.

[fn:tpa_github] https://github.com/Vindaar/TimepixAnalysis
[fn:nim_lang] [[https://nim-lang.org]]

** TOS data parsing
:PROPERTIES:
:CUSTOM_ID: sec:reco:tos_data_parsing
:END:

The first part of the GridPix data reconstruction is the parsing of the raw ASCII data files, presented in [[#sec:daq:tos_output_format]]. This is implemented in the ~raw_data_manipulation~ program [fn:reco_raw_data_manip], part of ~TimepixAnalysis~ cite:TPA. Its main purpose is the conversion of the inefficient ASCII data format into the more appropriate and easier to work with HDF5 [fn:hdf5] cite:hdf5 format, a binary file format intended for scientific datasets. While this data conversion is the main purpose, pixels with ~ToT~ or ~ToA~ values outside a user defined range can also be filtered out at this stage [fn:toa_tot_filter_in_raw].
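As an illustration of such a range cut, a short sketch in Python, with pixels represented as hypothetical ~(x, y, ToT)~ tuples (the actual implementation works on the parsed TOS events in Nim):

```python
def filter_tot(pixels, tot_min, tot_max):
    # keep only pixels whose ToT value lies inside the user defined range;
    # anything outside is dropped during this preprocessing stage
    return [p for p in pixels if tot_min <= p[2] <= tot_max]

hits = [(12, 100, 5), (13, 100, 240), (200, 7, 11810)]
print(filter_tot(hits, 10, 1000))  # -> [(13, 100, 240)]
```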
Each data taking run is processed separately and is represented by one group (similar to a directory on a file system) in the output HDF5 file. See appendix section [[#sec:appendix:tos:raw_data_layout]] for an explanation of the produced data layout. As the data is being processed anyway, we already compute an occupancy map for each chip in the run at this stage. This allows for a quick glance at the (otherwise unprocessed) data.

[fn:reco_raw_data_manip] https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/raw_data_manipulation.nim
[fn:hdf5] HDF5 cite:hdf5 is the Hierarchical Data Format, version 5. It is a binary data format intended for scientific datasets, which uses an in-file layout similar to a virtual file system. Datasets (equivalent to files) are stored in groups (equivalent to directories). Metadata can be attached to either, and linking between datasets, even across files, is supported. It supports a large number of compression filters to reduce the file size of the stored data.
[fn:toa_tot_filter_in_raw] These types of cuts are applied at this stage of the processing, because for certain use cases or certain detectors specific ~ToT~ or ~ToA~ ranges are of no interest or contain junk data (because of a faulty chip, for example). In such cases it is useful to remove this data in the preprocessing stage to lighten the workload for everything downstream.

*** Example of running ~raw_data_manipulation~ [0/1] :extended:

- [ ] GIVE EXAMPLE -> Need an actual run to work with. Use ~TPAResources~?
#+begin_src sh
raw_data_manipulation -p <path> --runType rtBackground --h5out /tmp/run_foo.h5
#+end_src

** Expectation of event shapes
:PROPERTIES:
:CUSTOM_ID: sec:reco:event_shape
:END:

Based on the theoretical aspects of a gaseous detector as explained in chapter [[#sec:theory_detector]] and the expected kinds of signal sources at an experiment like CAST (see chapter [[#sec:cast]]), we have a good expectation of the kinds of signals a GridPix detector records for different types of events. The signals we are interested in at an axion helioscope are soft X-rays, below $\SI{10}{keV}$. The main goal in later determining a background rate and computing a physics result from data is to filter out these X-rays from the rest of the data the detector records. The dominant source of background in any gaseous detector at surface level is cosmic muons. Fortunately, muons and X-rays behave very differently in the detector. X-rays generally produce a single photoelectron, which creates further primary electrons in a local region. These drift under transverse diffusion to the readout plane, which effectively gives them a roughly circular shape. Muons on the other hand produce electrons (each of which produces further local primaries) along their entire path through the gas volume. Under most angles this implies that their shape is very eccentric, i.e. 'track-like'. Two example events, one of a $\sim\SI{5.9}{keV}$ \cefe X-ray and the other of a typical muon, are shown in fig. [[fig:reco:example_signal_background_event]].

#+CAPTION: Two example events one might see in the detector: left, a common background event
#+CAPTION: of a (likely) muon track, which enters the readout plane (hence the slightly
#+CAPTION: triangular shape); right, a classical $\SI{5.9}{keV}$ X-ray from a \cefe
#+CAPTION: calibration source.
#+NAME: fig:reco:example_signal_background_event [[~/phd/Figs/reco/gridpix_example_events.pdf]] Given the distinct geometric properties of these different types of events and the fact that a GridPix provides extremely high spatial resolution and single electron efficiency, the data reconstruction fully embraces this. Most of the computed properties, which we will introduce in the next sections, are directly related to geometric properties of the events. *** TODOs for this section [0/2] :noexport: - [X] introduce what signal and background events actually look like -> motivates the kind of steps we then perform. So an "intro" section, then description of the processes. - [X] data parsing? - [X] clustering (our custom and DBSCAN cite:ester1996density) - [X] in particular the geometric reconstruction. Split up our explanation schematic made in inkscape and somehow present that? -> Used later. - [X] *THIS WAS: * ~***~ Expectation of event shapes [0/0] -> And is again. - [ ] *SHOULD WE MENTION ALPHAS, NEUTRONS ETC HERE?* - [ ] *MOTIVATED BY PHYSICAL STUFF SHOWN IN THEORY, DESCRIBE EXPECTATION OF WHAT EVENTS SHOULD LOOK LIKE* *** Generate example events for known events :extended: While we have two pretty nice events to plot as examples, in principle, they are generated by =karaPlot= (an extensive plotting tool part of TPA) and thus not quite suited to a thesis. As we know the run number and event number, we can just generate them quickly here. The two events (and plots) are: - [[~/org/Figs/statusAndProgress/exampleEvents/background_event_run267_chip3_event1456_region_crAll_hits_200.0_250.0_centerX_4.5_9.5_centerY_4.5_9.5_applyAll_true_numIdxs_100.pdf]] - [[~/org/Figs/statusAndProgress/exampleEvents/calibration_event_run266_chip3_event5791_region_crAll_hits_200.0_250.0_centerX_4.5_9.5_centerY_4.5_9.5_applyAll_true_numIdxs_100.pdf]] so run 267 and 266, events 1456 and 5791. Ah, I had forgotten that these are not the event numbers, but their indices of the clusters. 
Therefore, we'll just search for pretty events in the same runs.
#+begin_src nim :tangle code/generate_example_gridpix_event.nim
# Laptop
#const calib = "/mnt/1TB/CAST/2018_2/CalibrationRuns/Run_266_181107-22-14"
#const back = "/mnt/1TB/CAST/2018_2/DataRuns/Run_267_181108-02-05"
# Desktop
# All raw files found in `/mnt/4TB/CAST`. The two runs needed here copied to
from std/os import expandTilde
const calib = "~/CastData/data/2018_2/Run_266_181107-22-14" # calibration
const back = "~/CastData/data/2018_2/Run_267_181108-02-05" # data
const cEv = 5898
const bEv = 1829 # this event is nice
import ingrid / tos_helpers
import std / [strformat, os, strutils, sequtils]
import ggplotnim

proc toFile(i: int, path: string): string =
  let z = align($i, 6, '0')
  path / &"data{z}.txt"

proc drawPlot() =
  let protoFiles = readMemFilesIntoBuffer(@[toFile(cEv, calib.expandTilde),
                                            toFile(bEv, back.expandTilde)])
  var df = newDataFrame()
  var names = @["X-ray", "Background"]
  for (pf, name) in zip(protoFiles, names):
    let ev = processEventWithScanf(pf)
    let pix = ev.chips[3].pixels
    if pix.len == 0: return
    let dfL = toDf({ "x" : pix.mapIt(it.x.int),
                     "y" : pix.mapIt(it.y.int),
                     "ToT" : pix.mapIt(it.ch.int),
                     "type" : name })
    df.add dfL
  echo df
  ggplot(df, aes("x", "y", color = "ToT")) +
    facet_wrap("type") +
    geom_point() +
    xlim(0, 256) + ylim(0, 256) +
    #theme_font_scale(2.0) +
    #margin(left = 3, bottom = 3, right = 5) +
    margin(right = 3.5) +
    #facetHeaderText(font = font(12.0, alignKind = taCenter)) +
    xlab("x [Pixel]", margin = 1.5) + ylab("y [Pixel]", margin = 2) +
    legendPosition(0.88, 0.0) +
    themeLatex(fWidth = 0.9, width = 800, height = 400, baseTheme = singlePlot) +
    ggsave("/home/basti/phd/Figs/reco/gridpix_example_events.pdf",
           width = 800, height = 400, useTeX = true, standalone = true)
drawPlot()
#+end_src
** Data reconstruction
:PROPERTIES:
:CUSTOM_ID: sec:reco:data_reconstruction
:END:

With the data stored in an HDF5 file after processing the raw data with
~raw_data_manipulation~, the actual data and event reconstruction can begin.
This is handled by the ~reconstruction~ [fn:reco_reconstruction] program. It
continues from the HDF5 file created before and proceeds to reconstruct all
runs in the given input file. For each run, each GridPix chip is processed
sequentially, while all events for that chip are then processed in parallel
using multithreading.
For each event, the data processing is essentially a two-step process:
1. perform cluster finding, see section [[#sec:reco:cluster_finding]].
2. compute geometric properties for each found cluster, see section
   [[#sec:reco:cluster_geometry]].

[fn:reco_reconstruction] https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/reconstruction.nim

*** TODOs for this section [/] :noexport:
- [ ] *REFER TO ~reconstructSingleChip~ HERE*
Basic reconstruction:
- [X] per run, per chip
- [X] clustering (2 cluster methods)
- [X] geometric properties
- [X] find rotated coordinate system
- [X] compute parameters for long & short axis
- [X] resulting parameters
- [X] table of parameters (we have that somewhere already)
- [X] use a description to describe all parameters like - X :: description

*** Cluster finding
:PROPERTIES:
:CUSTOM_ID: sec:reco:cluster_finding
:END:

The cluster finding algorithm splits a single event into possibly multiple
clusters. Clusters are defined based on a certain notion of distance (the
details depend on the clustering algorithm used). The multiple clusters from a
single event are then treated fully equally for the rest of the analysis. The
fact that they originate from the same event has no further relevance (with a
slight exception for one veto technique, which utilizes clustering over
multiple chips; more on that in section [[#sec:background:septem_veto]]).

There are two different cluster finding algorithms implemented for use in
~TimepixAnalysis~. The default one is strictly used for the general cluster
finding as part of the reconstruction; the other is intended to be used for
one of the vetoes (again, sec. [[#sec:background:septem_veto]]). The choice is
user-configurable, however. [fn:clustering]

- Default :: The default one is the same clustering algorithm introduced for
  the data reconstruction of the 2014/15 GridPix detector in
  cite:krieger2018search.
  It defines a cluster as all pixels within the squares of side length $N$
  centered around each pixel. It is best thought of as a recursive square
  neighbor search around each pixel: for each neighbor in the search square,
  start another search square. Once no neighbor finds any neighbors not
  already part of the cluster, the cluster is complete.
- DBSCAN :: The secondary clustering algorithm is the
  \textbf{D}ensity-\textbf{b}ased \textbf{s}patial \textbf{c}lustering of
  \textbf{a}pplications with \textbf{n}oise (DBSCAN) cite:ester1996density
  algorithm. In contrast to the default algorithm it is -- as the name implies
  -- a density-based algorithm. This means it distinguishes points which have
  many neighbors (high density) from those with few neighbors (low density).
  The algorithm has a parameter ~minSamples~, which defines the density
  threshold. If a point has at least ~minSamples~ neighbors within a
  (euclidean) distance of $ε$ (the second parameter), it is considered a
  "core point". All core points build a cluster together with all other
  points in their reach. Points in reach of a core point that do not
  themselves have ~minSamples~ neighbors are still part of the cluster. Any
  point _not_ in reach of a core point is a "noise point".

The main advantage of this algorithm over many other, more classical
algorithms is its ability to separate clusters close to one another, which are
not separable by a linear cut. This results in a clustering that is more
"intuitive" to humans. DBSCAN is one of the most widely used clustering
algorithms in many scientific fields and even in 2017 was still considered
highly relevant cite:10.1145/3068335.

Another clustering algorithm (currently not implemented) is CLASSIX
cite:&CLASSIX, which promises fast clustering based on sorting along the first
principal component axis. Based on its properties as presented in its paper,
it could be an extremely useful algorithm for our application and should be
investigated in the future.
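To make the DBSCAN description above concrete, the following is a minimal
illustrative sketch in Python (the actual TPA implementation is in Nim; the
function name ~dbscan~ and the convention that a point counts itself among its
neighbors are assumptions of this sketch, not a description of the production
code):

```python
from collections import deque

def dbscan(points, eps, min_samples):
    """Label each (x, y) point with a cluster id; -1 marks noise points."""
    def neighbors(i):
        # all points within euclidean distance eps of point i (includes i itself)
        px, py = points[i]
        return [j for j, (qx, qy) in enumerate(points)
                if (px - qx) ** 2 + (py - qy) ** 2 <= eps * eps]

    labels = [None] * len(points)
    cluster_id = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neigh = neighbors(i)
        if len(neigh) < min_samples:
            labels[i] = -1  # noise, unless later reached by a core point
            continue
        # i is a core point: grow a new cluster from it
        labels[i] = cluster_id
        queue = deque(neigh)
        while queue:
            j = queue.popleft()
            if labels[j] == -1:      # border point: in reach of a core point
                labels[j] = cluster_id
            if labels[j] is not None:
                continue
            labels[j] = cluster_id
            jn = neighbors(j)
            if len(jn) >= min_samples:  # j is itself a core point: expand
                queue.extend(jn)
        cluster_id += 1
    return labels
```

Two well-separated pixel blobs then receive distinct cluster ids, while an
isolated pixel far from both is labeled as noise (~-1~).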
[fn:clustering] The clustering logic of TPA is found here:
https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/private/clustering.nim

[fn:default_algo_core] The heart of the algorithm is the following pixel search:
https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/private/clustering.nim#L120-L148

**** TODOs for this section [/] :noexport:
- [ ] *THINK ABOUT EXAMPLE OF CLUSTERING THAT DBSCAN FINDS AND DEFAULT WOULDN'T* NON LINEAR CLUSTERABLE

**** CLASSIX clustering algorithm [/] :extended:
- [ ] *OR SHOULD THIS GO INTO MAIN SECTION BEFORE?* OR AS :optional:?

There is one further clustering algorithm, which is extremely exciting and
seems like a great candidate for a clustering algorithm for ~TimepixAnalysis~:
the CLASSIX algorithm, introduced as "a fast and explainable clustering
method" cite:CLASSIX. See the GitHub page with many examples here:
https://github.com/nla-group/classix
It is an algorithm which first sorts the data along its first principal
component.

**** Clustering bug in MarlinTPC for 2014/15 data :extended:
One of the initial goals of ~TimepixAnalysis~ was the reproduction of the
background rate computed for the 2014/15 data with the MarlinTPC framework.
While that whole ordeal wasted a lot of time trying to achieve the exact same
results from both frameworks to satisfy other people, among other things it
led to the discovery of a clustering bug in MarlinTPC (which was finally the
point that let me drop this pursuit).
- [ ] *INSERT DISCUSSION FROM STATUSANDPROGRESS ABOUT MARLIN CLUSTER BUG*
See section ~sec:marlin_vs_tpa_output~ in TPA for the Marlin clustering bug.

*** Calculation of geometric properties
:PROPERTIES:
:CUSTOM_ID: sec:reco:cluster_geometry
:END:

For each individual cluster, the geometric event reconstruction is up next.
As the basic differentiator between X-rays and common background events is
their circularity, most properties are in some sense related to how eccentric
clusters are. Therefore, the first thing to be computed for each cluster is
the rotation angle [fn:rotation_angle]. The rotation angle is found via a
non-linear optimization of
\begin{align*}
x'_i &= \cos(θ) \left( x_i - \bar{x} \right) · P - \sin(θ) \left( y_i - \bar{y} \right) · P \\
y'_i &= \sin(θ) \left( x_i - \bar{x} \right) · P + \cos(θ) \left( y_i - \bar{y} \right) · P
\end{align*}
where $θ$ is the rotation angle (in the context of the optimization the
parameter to be fitted), $x_i, y_i$ the coordinates of the $i\text{-th}$ pixel
in the cluster, and $\bar{x}, \bar{y}$ the center coordinates of the cluster.
$P = \SI{55}{μm}$ is the pixel pitch of a Timepix. The resulting variables
$x'_i, y'_i$ define a new, rotated coordinate system. From these coordinates,
the RMS [fn:rms_terminology] of each of the new axes is computed via
\begin{align*}
x_{\text{RMS}} &= \sqrt{ \frac{1}{N} \left( \sum_i x^{\prime2}_i \right) - \frac{1}{N²} \left( \sum_i x'_i \right)² }\\
y_{\text{RMS}} &= \sqrt{ \frac{1}{N} \left( \sum_i y^{\prime2}_i \right) - \frac{1}{N²} \left( \sum_i y'_i \right)² }.
\end{align*}
Based on these we then simply define
\begin{align*}
σ_{\text{transverse}} &= \text{min}(x_{\text{RMS}}, y_{\text{RMS}}) \\
σ_{\text{longitudinal}} &= \text{max}(x_{\text{RMS}}, y_{\text{RMS}}),
\end{align*}
which define the eccentricity $ε$ as (see also fig.
[[sref:fig:reco:prop_expl_ecc]])
\[
ε = \frac{σ_{\text{longitudinal}}}{σ_{\text{transverse}}},
\]
guaranteeing $ε \geq 1$. During the non-linear optimization, the algorithm
attempts to maximize the eccentricity. In a track-like cluster, the maximum
eccentricity is found at the rotation angle $θ$ which points along the longest
axis of the cluster. The resulting rotated coordinate system after the fit has
converged is illustrated in fig.
sref:fig:reco:prop_expl_axes.

Once the rotation angle and therefore the rotated coordinate system of a
cluster is defined, most other properties follow in a straightforward fashion.
In the rotated coordinate system, the axis along the long axis of the cluster
is called "longitudinal" and the short axis "transverse" in the following. The
higher moments skewness and kurtosis for each axis are computed, as well as
the length and width of the cluster based on the biggest spread of pixels
along each axis. In addition to the geometric properties, a few other
properties such as the number of pixels are also computed. Three of the most
important variables are illustrated in fig.
sref:fig:reco:property_explanations. These enter the likelihood cut definition
as we will see in sec. [[#sec:background:likelihood_method]].

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "Rotated axes")
  (label "fig:reco:prop_expl_axes")
  (includegraphics (list (cons 'width (linewidth 1.0)))
                   "~/org/Figs/InGridPropExplanation/long_short_axis.pdf"))
 (subfigure (linewidth 0.5)
  (caption "Eccentricity")
  (label "fig:reco:prop_expl_ecc")
  (includegraphics (list (cons 'width (linewidth 1.0)))
                   "~/org/Figs/InGridPropExplanation/eccentricity.pdf")
  (spacing "")) ;; avoid spacing to break line
 (subfigure (linewidth 0.5)
  (caption "Fraction in transverse RMS")
  (label "fig:reco:prop_expl_fracRms")
  (includegraphics (list (cons 'width (linewidth 1.0)))
                   "~/org/Figs/InGridPropExplanation/frac_in_trans_rms.pdf"))
 (subfigure (linewidth 0.5)
  (caption "Length divided by transverse RMS")
  (label "fig:reco:prop_expl_ldiv")
  (includegraphics (list (cons 'width (linewidth 1.0)))
                   "~/org/Figs/InGridPropExplanation/length_div_rms_trans.pdf"))
 (caption "Schematic explanation of the basic cluster reconstruction and the three most important geometric properties. " (subref "fig:reco:prop_expl_axes") " defines the rotated coordinate system found by non-linear optimization of the long and short cluster axis.
" "Along the long and short axes, " (subref "fig:reco:prop_expl_ecc") ", the transverse standard deviation " ($ "σ_{\\text{transverse}}") " is computed, which then defines the eccentricity by this ratio. " (subref "fig:reco:prop_expl_fracRms") " shows the definition of a less obvious variable: the fraction of pixels within a circle of one " ($ "σ_{\\text{transverse}}") " radius around the cluster center. Similarly " (subref "fig:reco:prop_expl_ldiv") " shows the full cluster length defined by the furthest active pixels in the cluster divided by " ($ "σ_{\\text{transverse}}") " as another variable. These three variables enter the likelihood cut used for background suppression." ) (label "fig:reco:property_explanations") ) #+end_src #+begin_comment Probably won't end up using this table, instead use the descriptions below. #+CAPTION: Table of all the (mostly) geometric properties of a single cluster computed during the #+CAPTION: =reconstruction= tool. All but the likelihood, charge and energy properties are computed #+CAPTION: during the first pass of the tool. 
#+NAME: tab:geometric_properties
#+ATTR_LATEX: :booktabs t
| Property                  | Meaning                                                          |
|---------------------------+------------------------------------------------------------------|
| igCenterX                 | =x= position of cluster center                                   |
| igCenterY                 | =y= position of cluster center                                   |
| igHits                    | number of pixels in cluster                                      |
| igEventNumber             | event number cluster is from                                     |
| igEccentricity            | eccentricity of the cluster                                      |
| igSkewnessLongitudinal    | skewness along long axis                                         |
| igSkewnessTransverse      | skewness along short axis                                        |
| igKurtosisLongitudinal    | kurtosis along long axis                                         |
| igKurtosisTransverse      | kurtosis along short axis                                        |
| igLength                  | size along long axis                                             |
| igWidth                   | size along short axis                                            |
| igRmsLongitudinal         | RMS along long axis                                              |
| igRmsTransverse           | RMS along short axis                                             |
| igLengthDivRmsTrans       | length divided by transverse RMS                                 |
| igRotationAngle           | rotation angle of long axis over chip coordinate system          |
| igEnergyFromCharge        | energy of cluster computed from its charge                       |
| igLikelihood              | likelihood value for cluster                                     |
| igFractionInTransverseRms | fraction of pixels within radius of transverse RMS around center |
| igTotalCharge             | integrated charge of total cluster in electrons                  |
| igNumClusters             |                                                                  |
| igFractionInHalfRadius    | fraction of pixels in half radius around center                  |
| igRadiusDivRmsTrans       | radius divided by transverse RMS                                 |
| igRadius                  | radius of cluster                                                |
| igLengthDivRadius         | length divided by radius                                         |
#+end_comment

The following is a list of all properties of a single cluster computed by the
~reconstruction~ tool. The ~ig~ prefix is due to the internal naming
convention. All but the likelihood, charge and energy properties are computed
during the first pass of the tool, namely in the context discussed above.
[fn:geometry_calc]
- igEventNumber :: The event number the cluster is part of (multiple clusters
  may share the same event number).
- igHits :: The number of pixels in the cluster.
- igCenterX / igCenterY :: The center position of the cluster along the ~x~ /
  ~y~ axis of the detector.
- igRotationAngle :: The rotation angle of the long axis of the cluster over
  the chip coordinate system.
- igLength :: The length of the cluster along the long axis in the rotated
  coordinate system, defined by the furthest pixel at each end in that
  direction.
- igWidth :: The equivalent of *igLength* for the short axis.
# - igRadius :: The radius of the cluster, defined by *IM NOT DEFINED
#   YET, DEFINE ME* <-- Does not exist in TPA
- igRmsLongitudinal :: The root mean square (RMS) along the long axis.
- igRmsTransverse :: The RMS along the short axis.
- igSkewnessLongitudinal / igKurtosisLongitudinal :: The skewness / kurtosis
  along the long axis.
- igSkewnessTransverse / igKurtosisTransverse :: The skewness / kurtosis along
  the short axis.
- igEccentricity :: The eccentricity of the cluster, defined by the ratio of
  the longitudinal RMS over the transverse RMS.
- igLengthDivRmsTrans :: The length of the cluster divided by the transverse
  RMS (see fig. sref:fig:reco:prop_expl_ldiv).
- igFractionInTransverseRms :: The fraction of all pixels within a radius of
  the transverse RMS around the center (see fig.
  sref:fig:reco:prop_expl_fracRms).
# - igRadiusDivRmsTrans :: The radius over the transverse RMS.
# - igFractionInHalfRadius :: Equivalent to *igFractionInTransverseRms*
#   but for a radius of half the cluster radius.
# - igLengthDivRadius :: The length divided by the cluster radius.
- igTotalCharge :: The sum of the ~ToT~-calibrated charges of all pixels in
  the cluster (see sec. [[#sec:reco:data_calibration]]).
- igEnergyFromCharge :: The calibrated energy of the cluster in $\si{keV}$
  (see sec. [[#sec:calibration:energy]]).
- igLikelihood :: The likelihood value of the cluster for the likelihood cut
  method, explained in detail in section [[#sec:background:likelihood_method]].
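As a numerical illustration of the rotation, RMS and eccentricity definitions
above, the following Python sketch scans rotation angles by brute force
(standing in for the non-linear optimizer used in the actual Nim
reconstruction; the names ~rms~ and ~eccentricity~ are inventions of this
sketch) and returns the maximum eccentricity together with the corresponding
angle:

```python
import math

PITCH = 0.055  # Timepix pixel pitch P in mm (55 μm)

def rms(vals):
    """sqrt(1/N * sum(v^2) - (1/N * sum(v))^2), matching the RMS formula."""
    n = len(vals)
    return math.sqrt(sum(v * v for v in vals) / n - (sum(vals) / n) ** 2)

def eccentricity(pixels):
    """Return (max eccentricity, rotation angle) for a cluster of (x, y) pixels."""
    n = len(pixels)
    xb = sum(x for x, _ in pixels) / n  # cluster center
    yb = sum(y for _, y in pixels) / n
    best_ecc, best_theta = 1.0, 0.0
    for deg in range(180):  # the axis definition is periodic in 180 degrees
        th = math.radians(deg)
        c, s = math.cos(th), math.sin(th)
        # rotated coordinates x'_i, y'_i in mm
        xs = [(c * (x - xb) - s * (y - yb)) * PITCH for x, y in pixels]
        ys = [(s * (x - xb) + c * (y - yb)) * PITCH for x, y in pixels]
        rx, ry = rms(xs), rms(ys)
        if min(rx, ry) == 0.0:
            continue  # degenerate, perfectly thin cluster at this angle
        ecc = max(rx, ry) / min(rx, ry)  # sigma_longitudinal / sigma_transverse
        if ecc > best_ecc:
            best_ecc, best_theta = ecc, th
    return best_ecc, best_theta
```

For a straight, two-pixel-wide horizontal track the maximum is found at
$θ = 0$, with the pixel pitch cancelling in the ratio.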
After the calculation of all geometric properties for all events and chips,
the data is written to an output HDF5 file (similar in format to the output of
~raw_data_manipulation~) for each run. This concludes the first pass of
~reconstruction~ over the data. See appendix section
[[#sec:appendix:tos:reco_data_layout]] for an explanation of the produced data
layout.

[fn:rms_terminology] The term 'root mean square' is used although we actually
refer to the standard deviation of the sample. We follow
[[cite:&krieger2018search]], but unfortunately this ambiguity is often
encountered.

[fn:rotation_angle] Note that the absolute value of the rotation angle is of
secondary importance. For X-rays the rotation angle is going to be random, as
the definition of a long and short axis in a (theoretically perfect) circle
depends on the statistical distribution of the pixels. For pure muons,
however, it allows mapping the rotation angle to the incidence angle.

[fn:calc_eccentricity] https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/private/geometry.nim#L296-L329

[fn:geometry_calc] In particular, all these properties are computed here:
https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/private/geometry.nim#L331-L391

**** TODOs for this section [/] :noexport:
- [ ] *THINK ABOUT WHETHER BETTER TO CLARIFY WHAT X
- [ ] *REWRITE THIS SECTION*
- [ ] *PROPERLY LINK THE CODE LATER* -> Still need to decide how we will do
  this, but for sure needs a specific git tag! The properties are computed
  here:
  https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/private/geometry.nim#L308-L366
  and here:
  https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/private/geometry.nim#L517-L569
- [X] *EXPLANATION OF THE 3 MAIN PROPERTIES USED FOR LIKELIHOOD*
- [X] *LOOK INTO \subref RENDERING. SHOULDN'T JUST SHOW a) HERE*
  -> Adjusted in our LaTeX defaults now. Option to ~subcaption~ package.
- [ ] *PLACE THE LINKS AFTER FIX UP SOMEWHERE THEY BELONG*
- [ ] *CONSIDER MOVING LIST OF ALL PROPERTIES TO FURTHER DOWN ONCE WE HAVE
  DISCUSSED CHARGE CALIB?*

*** Example of data reconstruction [/] :extended:
- [ ] GIVE EXAMPLE
#+begin_src sh
reconstruction -i <h5file> --out /tmp/reco_foo.h5
#+end_src

*** Data calibration
:PROPERTIES:
:CUSTOM_ID: sec:reco:data_calibration
:END:

The next step of the reconstruction is the data calibration. This is a
separate pass over the data, as it is optional on the one hand and, on the
other, requires further inputs beyond the raw data for each GridPix in use
(different calibration files). There are different calibrations to be
performed:
1. the charge calibration via the application of the ~ToT~ calibration as
   introduced in section [[#sec:operation_calibration:tot_calibration]].
2. the calculation of the gas gain, introduced previously in section
   [[#sec:daq:polya_distribution]] and expanded on in sec.
   [[#sec:calib:gas_gain_time_binning]].
3. the energy calibration (see sec. [[#sec:calibration:energy]] and
   [[#sec:calib:final_energy_calibration]]).

The ~ToT~ calibration is in principle performed simply by converting each
~ToT~ value to an equivalent charge in electrons using the calibration as
presented in section [[#sec:operation_calibration:tot_calibration]]. For each
GridPix used in a detector, a ~ToT~ calibration must be available.
~TimepixAnalysis~ comes with a library and helper program, which manages a
simple database of the different GridPixes, their calibrations and their
validity (in time and the runs they apply to). The user needs to add the chips
for which they wish to perform a ~ToT~ calibration to the database before the
calibration can be performed. See appendix
[[#sec:appendix:software:ingrid_database]] for a detailed overview. For any
chip that is part of the database, the ~ToT~ calibration is a single pass over
the ~ToT~ values of all runs.
This generates a calibrated charge for every pixel of every cluster and a
combined property, ~totalCharge~, the summed charge of each full cluster.

Gas gain values are computed in $\SI{90}{min}$ time intervals for each chip.
This strikes a good balance between sufficient statistics and reduced
sensitivity to variations in gas gain due to external effects. As this
deserves its own discussion, more on it follows in sec.
[[#sec:calib:gas_gain_time_binning]].

Finally, while the energy calibration is also handled by ~reconstruction~, we
will cover it in section [[#sec:calibration]] due to its more complex nature.

**** TODOs for this section :noexport:
- [ ] *GIVE THIS SECTION A BETTER NAME!*
  -> We currently also call the chapter after CAST 'data calibration'...
  -> Well, but we _do_ talk about related concepts here after all.
  -> [[#sec:calibration]] really is the extension of this chapter for the more
  complicated aspects, i.e. energy and gas gain. And this is where the ugly
  mess of needing X or Y starts :( at least for energy calibration, we will
  refer to section later
- [ ] *REPHRASE PART OF "ONLY MENTION TOT" BUT THEN MENTION GAS GAIN & ENERGY
  LATER*
- [X] *MENTION GAS GAIN PROPERLY*
  -> Well, properly is not correct, but we mention it now.
  -> Not ideal though.
  -> We reference back to the initial gas gain part. Maybe properly explain
  more details in that original section, then here just mention it needs to be
  done and in [[#sec:calibration]] talk about the tricky aspects of the
  binning etc? Seems sensible.
- [ ] *SHOW TOT CALIBRATION AGAIN?*
  -> Not really needed. Just a dumb function.
- [X] *MENTION INGRID DATABASE STORING TOT CALIB DATA*

**** Example of data calibration [/] :extended:
The following three commands perform the three calibration steps mentioned
above:
- ToT calibration :: ~--only_charge~
- Gas gain calc :: ~--only_gas_gain~
- Energy calibration :: ~--only_energy_from_e~
#+begin_src sh
reconstruction -i <h5file> --only_charge
reconstruction -i <h5file> --only_gas_gain
reconstruction -i <h5file> --only_energy_from_e
#+end_src

*** Event duration
:PROPERTIES:
:CUSTOM_ID: sec:reco:event_duration
:END:

During the reconstruction of the data, another important parameter is
computed, namely the duration of each individual event. In principle each
event has a fixed length, because the Timepix uses a shutter-based readout
with a predefined shutter length. However, the FADC is used as an external
trigger to close the shutter early if it records a signal, so all events with
an FADC trigger have a shorter duration.

For the fixed-length events, the duration is computed from the shutter length
as indicated by TOS. In appendix [[#sec:daq:tos_output_format]], listing
[[code:daq:zero_suppressed_readout_run_header]], the ~shutterTime~ and
~shutterMode~ fields are listed. These define the absolute length of the
shutter opening in (effectively) a number of clock cycles. The ~shutterMode~
acts as a modifier to the number of clock cycles:
\[
t_{\text{clocks}}(\mathtt{mode}, t) = 256^{\mathtt{mode}} · t
\]
where $t$ is the ~shutterTime~ and $\mathtt{mode}$ corresponds to the
~shutterMode~. The available modes are:
- ~short~: \num{0}
- ~long~: \num{1}
- ~verylong~: \num{2}
In case the FADC triggers, the number of clock cycles recorded from the
opening of the shutter up to the trigger is also reported in the data files,
see appendix sec. [[#sec:daq:tos_output_format]], listing
[[code:daq:zero_suppressed_readout_event_header]].
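The two pieces of arithmetic in this section, the shutter clock-cycle count
$t_{\text{clocks}} = 256^{\mathtt{mode}} · t$ and the conversion of clock
cycles to a duration in seconds, $d = t_{\text{clocks}} · 46 / (40 · 10^6)$,
can be sketched as follows (an illustrative Python version; the TPA code is in
Nim and the function names here are made up):

```python
def shutter_clocks(shutter_time: int, shutter_mode: int) -> int:
    """Absolute shutter opening in clock cycles: 256^mode * shutterTime."""
    return 256 ** shutter_mode * shutter_time

def event_duration(t_clocks: int) -> float:
    """Convert a number of clock cycles into an event duration in seconds."""
    return t_clocks * 46 / (40 * 1_000_000)
```

For example, a ~shutterTime~ of 2 in ~long~ mode (mode 1) corresponds to
$256 · 2 = 512$ clock cycles.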
With the number of clock cycles the shutter was open, the total event duration can then be computed in either case via: \[ d(t_{\text{clocks}}) = \frac{t_{\text{clocks}} · 46}{40 · \num{1000000}}. \] [fn:event_duration_calc] https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/private/pure.nim#L266-L297 **** TODOs for this section [/] :noexport: - [ ] *LINK CORRECT CODE* - [X] How to compute length of events, given - shutter mode & length - FADC trigger - [ ] *HERE OR IN CAST SECTION: WHAT SHUTTER LENGTHS WERE ACTUALLY USED?* - [ ] *MOVE THIS TO EARLIER?* ** FADC reconstruction :PROPERTIES: :CUSTOM_ID: sec:reco:fadc_data :END: The data files created from the FADC data sent upon a trigger are essentially memory snapshots of the circular register of the FADC. We will go through the necessary steps to convert that raw data into usable signals, given the FADC settings we use and the data TOS generates from it. See appendix sec. [[#sec:daq:fadc_data_files]] for an overview of the raw FADC data files. For a detailed overview of the FADC readout process see the FADC manual cite:fadc_manual [fn:reco_fadc_manual]. In ~TimepixAnalysis~ FADC data is automatically parsed from the ASCII data files into HDF5 files as part of ~raw_data_manipulation~ if FADC files are present. The spectrum reconstruction is done automatically as part of the ~reconstruction~ program, but calculation of the baseline, rise and fall time is an optional step. *** TODOs for this section [/] :noexport: - [ ] *ADD SOME KIND OF MOTIVATON FOR WHAT WE MIGHT WANT TO DO WITH THE SIGNALS?* - [X] *ADD EXPLANATION HOW IT FITS INTO ~raw_data_manipulation~ AND ~reconstruction~* -> Question whether this needs to be part of this section here. Or just part of an :extended: section? - [ ] *ADD MENTION OF SAVITZKY GOLAY!* *** FADC pedestal calculation :PROPERTIES: :CUSTOM_ID: sec:reco:fadc_pedestal_calc :END: As alluded to in sec. 
[[#sec:operation_calibration:fadc]] the pedestal values can not only be taken from a pedestal run recorded before data taking, but can also be extracted from real data, provided that a decent fraction of the FADC registers in a single event is on the baseline and the values are normally distributed between events. The idea is to look at the ensemble of values each register records over many events and, for each register, to remove those events in which it was part of a real signal. Due to the cyclic nature of the FADC registers, different registers capture the signal in each event. For typical signals recorded with a GridPix the signal length is $\mathcal{O}(\SI{10}{\percent})$ of the window length, leaving plenty of registers free to recover pedestal information. Regular noise affects this approach, but it is partially taken care of by the truncation and partially cancels out, as real noise is normally distributed around the actual pedestal.

This latter approach is the one used in the data analysis, by calculating:
#+NAME: fig:reco:fadc_pedestals_trunc_mean
\begin{equation}
p_i(r) = \text{mean}_{20^{\text{th}}}^{98^{\text{th}}}(\{r_i\})
\end{equation}
where $p_i(r)$ is the pedestal in register $r_i$ and the mean is taken over all data $\{r_i\}$ in that register within the 20-th and 98-th percentile. All data refers to a full data run of $\sim\SI{1}{day}$. The strongly asymmetric truncation is chosen because the real signals are negative. Removing the smallest $\SI{20}{\percent}$ of the data guarantees that in the vast majority of events the full physical signal is excluded, given the typical signal lengths involved. A small truncation at the upper end excludes possible significant outliers at the top. While such a biased estimator does not yield the real mean (and thus, for signal and noise free input data, the real pedestals), a slight bias is irrelevant, as the baseline is still calculated for each reconstructed signal and used to correct any global offset.
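The truncated mean above can be sketched numerically as follows (an illustrative NumPy snippet, not the actual ~TimepixAnalysis~ implementation; the function names are hypothetical):

#+begin_src python
import numpy as np

def truncated_mean(vals, lower = 20.0, upper = 98.0):
    # Mean of all values within the [lower, upper] percentile range.
    vals = np.asarray(vals, dtype = float)
    lo, hi = np.percentile(vals, [lower, upper])
    return vals[(vals >= lo) & (vals <= hi)].mean()

def pedestals(data):
    # `data` is a (n_events, n_registers) array of raw FADC values.
    # Returns one pedestal per register: the truncated mean over all
    # events, dropping everything below the 20-th and above the 98-th
    # percentile (real signals are negative, hence the strong lower
    # truncation).
    return np.apply_along_axis(truncated_mean, 0, data)
#+end_src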
*** FADC spectrum reconstruction
The first step to reconstruct the FADC signals is to perform the pedestal correction. This is simply done by subtracting the pedestals register by register from the raw data
\[
N_{i, \text{corr}} = N_{i, \text{raw}} - N_{i, \text{pedestal}}
\]
with the raw data $N_{i, \text{raw}}$ and the pedestals $N_{i, \text{pedestal}}$ in register $i$ (as computed according to eq. [[fig:reco:fadc_pedestals_trunc_mean]]). With the pedestals removed, the next step is the temporal correction, which unfolds the data into the correct order. This needs to be performed on each of the $\num{2560}$ registers for each channel separately. The temporal rotation is performed by shifting all registers by
\[
n_\text{rot} = (\mathtt{TRIG\_REC} - \mathtt{POSTTRIG}) · 20
\]
places to the left. The constants $\mathtt{TRIG\_REC}$ and $\mathtt{POSTTRIG}$ are written in the header of each data file, see appendix section [[#sec:daq:fadc_data_files]]. The final step is to convert the ADC values of each register into voltages in $\si{V}$. Given that the ADC covers the range of $\SIrange{-1}{1}{V}$ as the ADC values 0 to 4095 (16383) with 12 (14) bit [fn:bit_mode], the conversion from ADC to volts is simply
\[
U_i = \frac{N_{i, \text{corr}} - 2048}{2048}
\]
for each register when using the 12 bit operating mode. With these corrections applied, the recorded FADC spectrum is recovered, centered around the trigger position.

[fn:bit_mode] The FADC can be operated in a 12 or 14-bit mode. We run in the 12-bit mode. Also see [[#sec:daq:fadc_data_files]].

[fn:reco_fadc_manual] A PDF of the FADC manual is available here: https://archive.org/details/manualzilla-id-5646050/

[fn:fadc_code] Data parsing and the mentioned reconstruction code is found at *CITE*

**** TODOs for this section [0/2] :noexport:
- [ ] *REWRITE TAKING INTO ACCOUNT THAT PEDESTALS ARE COMPUTED BASED ON DATA NOW!*
  -> Partially done by referencing pedestals from truncated mean.
  -> Now that part of the expl is one sec above!
- [ ] *ADD REFERENCE TO CODE* https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/fadc_helpers.nim
*** Signal baseline, rise time and fall time
:PROPERTIES:
:CUSTOM_ID: sec:fadc:definition_baseline_rise_fall_time
:END:
Assuming a singular event is recorded with the FADC, the main properties of interest of the resulting signal pulse are the signal baseline and, based on that, the rise and fall time. Computing the position of the baseline is generally a non-trivial problem, as a priori the position, width and number of signals in the spectrum are unknown. A reasonable expectation though is that the majority of points should lie close to the baseline, as the fraction of the FADC window covered by a signal is typically less than a quarter. As such, a somewhat empirical way to compute the baseline $B$ was chosen, using a biased truncated mean
\[
B = \text{mean}_{30^{\text{th}}}^{95^{\text{th}}}(S)
\]
between the $30^{\text{th}}$ and $95^{\text{th}}$ percentile of the data. The bias is intended to remove the impact of the negative signal amplitude and to remove the worst positive outliers. An optimal solution would perform a rigorous peak finding for a signal pulse, remove those points and compute the mean of the remaining points. [fn:baseline_ideas]

Once the baseline is defined it can be used to determine both the rise and the fall time. These are computed based on the number of registers between the minimum of the signal and a threshold slightly below the baseline (compare with fig. [[fig:reco:fadc_reco_example]]), in order to reduce the effect of local noise variations. While configurable, the default value of the threshold $c_B$ ($B$ for baseline) is
\[
c_B = B - 0.1 · \left| B - \text{min}(S)\right|,
\]
i.e. $\SI{10}{\percent}$ of the difference between the baseline $B$ and the minimum of the spectrum $S$, below the baseline.
For the end of the rise time / beginning of the fall time a similar offset is used. In this case the threshold value $c_P$ ($P$ for peak) is defined as
\[
c_P = \text{min}(S) + 0.025 · \left| B - \text{min}(S)\right|,
\]
so $\SI{2.5}{\percent}$ of the amplitude above the minimum of the signal $S$. The register position at which either threshold is crossed is determined from where the _simple moving average_ (of window size 5) crosses $c_B$ and $c_P$. The number of registers between the crossings of $c_B$ and $c_P$ defines the rise time (left of the peak) and the fall time (right of the peak). [fn:naming_rise_fall] At the used clock frequency of $\SI{1}{GHz}$ each register corresponds to $\SI{1}{ns}$ in time.

A reconstructed FADC spectrum including indications for baseline, rise and fall time as well as the minimum is shown in fig. [[fig:reco:fadc_reco_example]] [fn:text_sizes], together with the corresponding event on the Septemboard.

#+CAPTION: Example of a fully reconstructed InGrid event and FADC spectrum from a \SI{5.9}{keV} X-ray
#+CAPTION: recorded with the Septemboard detector during a calibration run. On the left of each
#+CAPTION: plot are all properties computed for the data. In the FADC plot the blue line indicates
#+CAPTION: the baseline. Green vertical: rise time from full to dashed line. Red vertical: point of
#+CAPTION: spectrum minimum. Light red: fall time from dashed to full line. Rise / fall time stops
#+CAPTION: $\SI{2.5}{\%}$ before baseline is reached.
#+ATTR_LATEX: :width 1\linewidth
#+NAME: fig:reco:fadc_reco_example
[[~/phd/Figs/CalibrationRuns2018_Reco_2023-10-15_22-49-53/septemEvents/septem_fadc_run_239_event_1068_region_crAll.pdf]]

[fn:naming_rise_fall] The naming of the rise and fall time in the context of a negative pulse is slightly confusing. Rise time refers to the _negative rise_ towards the minimum of the pulse and the fall time to the time to return to the baseline.
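The threshold and crossing logic described above can be sketched as follows (an illustrative NumPy implementation under the stated definitions of $c_B$ and $c_P$; it is not the actual ~TimepixAnalysis~ code and the names are hypothetical):

#+begin_src python
import numpy as np

def moving_average(x, w = 5):
    # Simple centered moving average of window size w.
    return np.convolve(x, np.ones(w) / w, mode = "same")

def rise_fall_time(spectrum, baseline):
    # Rise and fall time in registers (1 register = 1 ns at 1 GHz),
    # based on where the moving average crosses the thresholds
    # c_B = B - 0.1 |B - min(S)| and c_P = min(S) + 0.025 |B - min(S)|.
    s = moving_average(np.asarray(spectrum, dtype = float))
    iMin = int(np.argmin(s))
    amp = abs(baseline - s[iMin])
    cB = baseline - 0.1 * amp
    cP = s[iMin] + 0.025 * amp
    inPeak = np.nonzero(s <= cP)[0]                     # registers near the minimum
    riseStart = np.nonzero(s[:iMin] >= cB)[0][-1]       # last c_B crossing left of peak
    fallStop = iMin + np.nonzero(s[iMin:] >= cB)[0][0]  # first c_B crossing right of peak
    return int(inPeak[0] - riseStart), int(fallStop - inPeak[-1])
#+end_src

For a clean negative pulse this returns the number of registers between the $c_B$ and $c_P$ crossings on the left (rise time) and right (fall time) of the minimum.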
[fn:baseline_ideas] Aside from performing peak fitting (which is difficult and requires understanding of the expected signal shapes), another approach might be a local linear smoothing (e.g. a Savitzky-Golay filter with a polynomial of order 1) in a suitable window range. The result would be a much more stable spectrum. This could then be used to compute the numerical derivative, from which all those intervals with a slope smaller than some epsilon provide the dataset from which to compute the mean. The tricky aspects would be the choice of window size and the behavior in very noisy events.

[fn:text_sizes] Excuse the small text for the annotations. They are not important, but may be interesting for some readers!

*** TODOs for this section [/] :noexport:
- [ ] *REPLACE THE PLOT!*
  -> The plot we use now is quite nice as it allows us to explain multiple aspects at the same time. Just need to decide what the plot should look like and prettify it (probably don't want the properties and InGrid event next to it!)
  -> Yes we do. It is very illustrative.
- [ ] *ADD CITATION FOR* https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/fadc_analysis.nim
- [X] *CHANGE FADC CODE TO NOT USE MIN, BUT MEAN OF N POINTS AROUND MIN? -> ALREADY USES MEAN OF N-th PERCENTILE AROUND ARGMIN*
  -> but only for the minimum *amplitude* and not the minimum value that is used to start the search around it! In the vast majority of cases this shouldn't matter, but for some weird spectrum shapes it might.
  -> This has since been implemented.
- [X] *REWRITE EXPLANATION BASED ON NOW USING:*
  - instead of median + 0.1 · max: truncated mean of 30-th to 95-th percentile
  - instead of times to exact baseline, go to baseline - 2.5%
  - do not compute threshold based on individual value, but on a moving average of window size 5
- [ ] Also: use all registers and do not set first two registers to 0!
  -> Need to check if this is still mentioned somewhere!
*** Generate the FADC baseline plot [/] :extended:
- [ ] *GENERATE THE PLOT CURRENTLY USED IN THE ABOVE BODY HERE*
  -> The current version comes from the TPA test suite!
#+begin_src nim :tangle code/plot_fadc_events.nim
import nimhdf5, ggplotnim
import std / [strutils, os, sequtils]
import ingrid / [tos_helpers, fadc_helpers, ingrid_types, fadc_analysis]

proc stripPrefix(s, p: string): string =
  result = s
  result.removePrefix(p)

proc plotIdx(df: DataFrame, fadcData: Tensor[float], runNumber, idx: int) =
  let xmin = df["xmin", int][idx]
  let xminY = df["minvals", float][idx]
  let xminlineX = @[xmin, xmin] # one point for x of min, max
  let fData = fadcData[idx, _].squeeze
  let xminlineY = linspace(fData.min, fData.max, 2)
  let riseStart = df["riseStart", int][idx]
  let fallStop = df["fallStop", int][idx]
  let riseStartX = @[riseStart, riseStart]
  let fallStopX = @[fallStop, fallStop]
  let baseline = df["baseline", float][idx]
  let baselineY = @[baseline, baseline]
  let dfLoc = toDf({ "x" : toSeq(0 ..< 2560),
                     "baseline" : baseline,
                     "data" : fData,
                     "xminX" : xminlineX,
                     "xminY" : xminlineY,
                     "riseStart" : riseStartX,
                     "fallStop" : fallStopX })
  # Comparison has to be done by hand unfortunately
  let path = "/t/fadc_spectrum_baseline_$#.pdf" % $idx
  ggplot(dfLoc, aes("x", "data")) +
    geom_line() +
    geom_point(color = color(0.1, 0.1, 0.1, 0.1)) +
    geom_line(aes = aes("x", "baseline"), color = "blue") +
    geom_line(data = dfLoc.head(2), aes = aes("xminX", "xminY"), color = "red") +
    geom_line(data = dfLoc.head(2), aes = aes("riseStart", "xminY"), color = "green") +
    geom_line(data = dfLoc.head(2), aes = aes("fallStop", "xminY"), color = "pink") +
    ggtitle("FADC spectrum of run $# and index $#" % [$runNumber, $idx]) +
    xlab("FADC Register") +
    ylab("FADC signal voltage U [V]") +
    ggsave(path)
  copyFile(path, "/t/fadc_spectrum_baseline.pdf")

proc toDf[U: object](x: U): DataFrame =
  result = newDataFrame()
  for field, val in fieldPairs(x):
    type T = typeof(val[0])
    when T isnot int and T is SomeInteger:
      result[field] = val.asType(int)
    elif T isnot float and T is SomeFloat:
      result[field] = val.asType(float)
    else:
      result[field] = val

proc plotFadc(h5f: H5File, runNumber, sleep: int) =
  var run = h5f.readRecoFadcRun(runNumber)
  var data = h5f.readRecoFadc(runNumber)
  var df = data.toDf()
  df["minvals"] = run.minvals
  for idx in 0 ..< df.len:
    plotIdx(df, run.fadc_data, runNumber, idx)
    sleep(sleep)

proc main(fname: string, runNumber: int, sleep = 1000) =
  var h5f = H5open(fname, "r")
  let fileInfo = h5f.getFileInfo()
  for run in fileInfo.runs:
    if run == runNumber:
      plotFadc(h5f, run, sleep)

when isMainModule:
  import cligen
  dispatch main
#+end_src
- [ ] *MAKE PLOT PRETTY AND RERUN FOR THIS:* run 281 and index 1533
*** Generate plot of InGrid and FADC event :extended:
The command to produce the plot as seen in the thesis is:
#+begin_src sh
W1=825 W2=675 G_LEFT=0.65 F_LEFT=0.3 L_MARGIN=10 R_MARGIN=4 USE_TEX=true SCALE=1.3 plotData \
    --h5file ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --runType=rtCalibration \
    --eventDisplay --septemboard \
    --events 1068 --runs 239
#+end_src
The parameters used here are now the default, apart from ~USE_TEX=true~ and ~SCALE=1.3~. The resulting plot needs to be copied over to the ~phd~ repository manually, as ~plotData~ currently does not support outputting a plot to a specific file name (it is intended to produce many plots automatically and act more as a fast way to visualize data under different cuts).
*** Noise sensitive :extended:
Because of the small amplitude of the signals induced on the grid, electromagnetic interference is a serious issue with this FADC setup. Ideally, the detector should be installed in a Faraday cage and a short, shielded LEMO cable should be used to connect it to the pre-amplifier and amplifier.
**** TODOs for this section :noexport:
- [ ] *WHAT TO DO WITH THIS? WHERE SHOULD THIS GO? MAYBE MENTION IN FADC INTRODUCTION ALREADY? OR HERE BECAUSE AFTER ALL?
IT SHOWS UP IN ANALYSIS AND WE HAVE SIMPLE NOISE DETECTION IN CODE*
  -> Go to analysis / characterization / whatever part!
- [ ] This section does not really belong here. Yes, it is noise sensitive, but we'll mention it in the context of data taking woes and then later when looking at the different data sets.
*** FADC amplifier settings :extended:
The FADC does not require a proper calibration in the same sense as the Timepix needs to be calibrated. However, the amplifier settings have a large impact on the resulting signals. Better measurements of the effect of the different integration / differentiation settings of the amplifier would have been very valuable, but were never performed for lack of time. The same holds for different amplification settings, in order to properly understand the data ranges the FADC can record and how the trigger threshold relates to an equivalent energy in $\si{keV}$ on the center GridPix. This would have allowed smarter decisions about the settings (e.g. to optimize for the lowest possible activation threshold, making the FADC available as a trigger for lower energies). Only a single set of measurements exists, comparing the FADC integration times of $\SI{50}{ns}$ and $\SI{100}{ns}$. It is only partially useful, in particular because the other parameters (differentiation time, amplification etc.) were not properly recorded for these measurements. The underlying problem is that TOS does not record these parameters at all, as they are set via physical rotary knobs on the amplifier. We will return to this in sec. [[#sec:calib:fadc]] when discussing the impact of noise on the FADC data taking and the parameter changes made to mitigate it to an extent.
**** TODOs for this section [/] :noexport:
- [X] *FINISH*
- [ ] *MENTION TALKED ABOUT FURTHER DOWN IN FADC ANALYSIS AS WELL AS CAST SECTION MENTIONING CHANGES DUE TO NOISE*
*** Example to reconstruct FADC data :extended:
As mentioned in the beginning of sec.
[[#sec:reco:fadc_data]] the FADC data is parsed as part of ~raw_data_manipulation~:
#+begin_src sh
raw_data_manipulation -p <run path> --runType rtBackground --h5out /tmp/run_foo.h5
#+end_src
The spectrum reconstruction is then performed in ~reconstruction~:
#+begin_src sh
reconstruction -i <h5file> --only_fadc
#+end_src
** Scintillator data
:PROPERTIES:
:CUSTOM_ID: sec:reco:scintillator_data
:END:
For the scintillator signals we only record a trigger flag and the number of clock cycles between the last scintillator trigger and the moment the FADC triggered. These two pieces of information are part of the header of the Septemboard data files. [fn:reco_scinti_data_bug_1] [fn:reco_scinti_data_bug_2]

[fn:reco_scinti_data_bug_1] Important note for people potentially investigating the raw data from 2017: there was a small bug in the readout software during the beginning of the 2017 data taking period, which wrote the scintillator trigger clock cycle values into subsequent output files even if no FADC trigger was received (and thus no scintillator trigger was actually read out). However, there is a flag for an FADC trigger. To correctly read the first data runs it is therefore required to not only look at the scintillator trigger clock cycles, but also at whether the FADC actually triggered. This is handled in the analysis framework.

[fn:reco_scinti_data_bug_2] In addition to the above bug, there was unfortunately a more serious bug, which rendered the scintillator counts useless during the end of 2017 / beginning of 2018 data taking period. The polarity of the signals was inverted in the detector firmware, resulting in useless "trigger" information.

*** TODOs for this section [/] :noexport:
- [ ] *WAS POLARITY REALLY THE ISSUE? I THINK SO, BUT NOT SURE*
- [ ] *SHOULD SCINTI BUG NO TRIGGERS IN RUN 2 BUG BE MENTIONED HERE OR IN CAST DATA TAKING CHAPTER? I THINK LATTER*
  Investigate the scinti bug again that caused data loss in the first place. What was wrong?
* Detector installation & data taking at CAST :CAST:
:PROPERTIES:
:CUSTOM_ID: sec:cast
:END:
#+LATEX: \minitoc
In this chapter we will cover the data taking with the Septemboard detector at the CAST experiment. We will begin with a timeline of the important events and the different data taking periods to give some reference and put certain things into perspective, sec. [[#sec:cast:timeline]]. We continue with the detector alignment in sec. [[#sec:cast:alignment]], as this is important for the position uncertainty in the limit calculation. Then we discuss the detector setup behind the LLNL telescope, sec. [[#sec:cast:detector_setup]]. Two sections follow focusing on where things did not go according to plan: a window accident in sec. [[#sec:cast:window_accident]] and the general issues encountered in each run period in sec. [[#sec:cast:data_taking_woes]]. We conclude with an overview of the total data taken at CAST, sec. [[#sec:cast:data_taking_campaigns]].

For an overview of the technical aspects of the CAST setup and operation see the appendix [[#sec:appendix:cast_operations]]. It contains details about the operating procedures with respect to the gas supply and vacuum system, interlocks and more. As these details are not particularly relevant after the shutdown of the experiment, they are not discussed here.
** TODOs for this section [0/4] :noexport:
- [ ] *NOTE ABOUT THIS CHAPTER*: In my meeting with Klaus on <2023-10-16 Mon> Klaus mentioned that he thinks the CAST chapter can probably be cut down to 2-3 pages. We can get there if we really only have a rewritten version of the Timeline in the thesis that then goes straight over to the data overview that we already have at the end of the chapter!
- [ ] *REWRITE ABOVE WITH FINAL STRUCTURE!!*!!
- [ ] *INTRODUCE NAMING OF RUN-2 AND RUN-3! RELATED TO DETECTOR CALIBRATIONS AND WORKING FEATURES*
- [ ] *INTRODUCE NAMING FOR AIRPORT, JURA, SUNSET, SUNRISE*
  -> Not really needed imo.
bla bla bla, the setup includes a \cefe source mounted to a pneumatic manipulator, see sec. [[#sec:cast:55fe_manipulator]], ... - [X] *HAVE A SORT OF TIMELINE APPROACH?* Or instead first start with a factual representation of what the setup actually looked like and then a "retrospective" of the different times? Including the problems that were encountered? -> Yup, written. - [X] alignment photo of laser alignment - [X] geometer measurements - [X] X-ray finger for alignment - [X] section about the 55Fe source and manipulator, with noexport section about the software used to control it. - [X] compute real dead time as done in `run_statistics.txt` files available for 2016 data taking campaign in December (grepping through tpc19, I finally found the code: [[file:~/CastData/Code/scripts/PyS_timeOfRunFolder.py]] -> Contained - [ ] *MENTION 2016 COSMIC ALIGNMENT MEASUREMENT? Maybe as a footnote "for completeness sake"* -> Maybe as footnote? - [ ] *AT VERY LEAST FIND 2016 DATA AND LINK IT IN EXTENDED VERSION!!!!!!* -> Might be on laptop or tpc19 or potentially even tpc00. -> Definitely not on voidRipper. -> Not on Laptop either! *** X-ray finger [/] - [ ] include funky reconstruction info / expected rate of X-ray finger and what we actually got? ** Timeline :PROPERTIES: :CUSTOM_ID: sec:cast:timeline :END: The Septemboard detector was prepared for data taking at the CAST experiment in July 2017 for preliminary alignment and fit tests. The detector beamline was prepared behind the LLNL telescope and aligned with a laser from the opposite side of the magnet using an acrylic glass target on <2017-07-07> (see fig. sref:fig:cast:laser_alignment1). Vacuum leak tests were performed and the detector installed on <2017-07-10> (see fig. [[sref:fig:cast:detector_installed]]). In addition, geometer measurements were done for final alignment and as a reference measurement the day after. 
An Amptek COOL-X X-ray generator [fn:amptek] ('X-ray finger') was installed on the opposite side of the magnet. A calibration measurement with the X-ray finger ran from <2017-07-13> overnight. The aim of an X-ray finger run is to roughly verify the focal spot of the X-ray telescope. After this initial test the detector was dismounted to make space for the KWISP experiment.

Two months later the detector was remounted between <2017-09-11 Mon> and <2017-09-14 Thu>, with another geometer measurement on the last day. During an attempt to clean the detector water cooling system on <2017-09-19>, the window of the detector was destroyed (see section [[#sec:cast:window_accident]]). This required a detector dismount and transport to Bonn for repairs, as the detector was electronically dead after the incident.

Near the end of October the remount of the detector started on <2017-10-23> and was finished by <2017-10-26>, in time for another geometer measurement and alignment. The next day the veto paddle scintillator was calibrated using a 3-way coincidence in the RD51 laboratory (see sec. [[#sec:operation_calibration:scintillators]]), followed by the installation of the lead shielding and the scintillators a day later. With everything ready, the first data taking period with the Septemboard detector started on <2017-10-30>. During the period until <2017-12-22> a few minor issues were encountered, see sec. [[#sec:cast:data_taking_woes_2017]]. As CERN is typically closed over Christmas and well into January, data taking was paused until <2018-02-17> (further time is necessary to prepare the magnet for data taking again). The second part of the first data taking period then continued until <2018-04-17>, with a few more small problems encountered, see [[#sec:cast:data_taking_woes_2018]]. After data taking concluded, dismounting of the detector began the next day with the removal of the veto scintillator and the lead shielding.
On <2018-04-20> another X-ray finger run was performed to get a sense of the placement of the detector as it was mounted during the first data taking period. Afterwards, the detector was fully removed by <2018-04-26> to bring it back to Bonn to fix a few problems.

Data taking was initially intended to resume by the summer of 2018. The fully repaired detector was installed between <2018-07-16> and <2018-07-19>, with a few minor delays due to a change in the mounting of the lead shielding support to accommodate parallel data taking with KWISP. For alignment, another geometer measurement was performed on <2018-07-23>. Unfortunately, external delays pushed the beginning of the data taking campaign back into late October. On <2018-10-20> data taking finally began, after a power supply issue had been fixed the day before. The issues encountered during this data taking period, which lasted until <2018-12-20>, are mentioned in sec. [[#sec:cast:data_taking_woes_2018_2]].

With the end of 2018 the data taking campaign of the Septemboard detector came to an end. The detector was moved from CAST to the CAST Detector Lab (CDL) on <2019-02-14> for a measurement campaign behind an X-ray tube for calibration purposes. Data was taken until <2019-02-21> with a variety of targets and filters (covered in sec. [[#sec:cdl]] later). Afterwards the detector was dismounted and taken back to Bonn.

For the results of the different alignments, see section [[#sec:cast:alignment]].
#+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Laser alignment") (label "fig:cast:laser_alignment1") (includegraphics (list (cons 'width (linewidth 0.95))) "~/phd/Figs/CAST_Alignment/laser_alignment_IMG_20170707_121738.jpg")) (subfigure (linewidth 0.5) (caption "Detector installed after alignment") (label "fig:cast:detector_installed") (includegraphics (list (cons 'width (linewidth 0.95))) "~/org/Figs/CAST_Alignment/detector_installed_after_laser_alignment_IMG_20170710_185009.jpg")) (caption (subref "fig:cast:laser_alignment1") " Alignment of the telescope side pipes using an acrylic glass flange with a centered grid and a laser aligned to the magnet bore. The central laser spot is the point on the vertical line extending out from the center. The other points towards the lower right are further refractions. This was better visible by eye. " (subref "fig:cast:detector_installed") " Detector installed on the beamline behind the LLNL telescope on <2017-07-10>.") (label "fig:cast:alignment_detector")) #+end_src [fn:amptek] https://www.amptek.com/internal-products/obsolete-products/cool-x-pyroelectric-x-ray-generator *** TODOs for this section [/] :noexport: - [ ] *MENTION 55Fe for outer chips here?* - [ ] *MERGE TWO FIGURES TO SUBFIG* *** Detailed technical timeline [/] :extended: - Initial installation 2017 :: - ref: https://espace.cern.ch/cast-share/elog/Lists/Posts/Post.aspx?ID=3420 and =~/org/Documents/InGrid_calibration_installation_2017_elog.pdf= - June/July detector brought to CERN - before <2017-07-07 Fri> alignment of LLNL telescope by Jaime - <2017-07-07 Fri> laser alignment (see [[file:~/org/Figs/CAST_Alignment/laser_alignment_IMG_20170707_121738.jpg]]) - <2017-07-10 Mon> vacuum leak tests & installation of detector (see: [[file:~/org/Figs/CAST_Alignment/detector_installed_after_laser_alignment_IMG_20170710_185009.jpg]]) - after <2017-07-10 Mon> installation of lead shielding - <2017-07-11 Tue> Geometer measurement of InGrid alignment 
for X-ray finger run - <2017-07-13 Thu> - <2017-07-14 Fri>: first X-ray finger run (not useful to determine position of detector, due to dismount after) - after: dismounted to make space for KWISP - Installation for data taking start :: - Remount in September 2017 <2017-09-11 Mon> - <2017-09-14 Thu> - installation from <2017-09-11 Mon> to <2017-09-15 Fri> - <2017-09-14> Alignment with geometers for data taking, magnet warm and under vacuum. - Window explosion cleaning accident :: - weekend: (ref: [[file:~/org/Talks/CCM_2017_Sep/CCM_2017_Sep.org]]) - calibration (but all wrong) - water cooling stopped working - next week: try fix water cooling - quick couplings: rubber disintegrating causing cooling flow to go to zero - attempt to clean via compressed air - final cleaning <2017-09-19 Tue>: wrong tube, compressed detector... - detector window exploded... - show image of window and inside detector - detector investigation in CAST CDL <2017-09-19 Tue> see [[file:~/org/Figs/CAST_detector_exploded/broken_window_close_IMG_20170919_152130.jpg]] images & timestamps of images - study of contamination & end of Sep CCM - detector back to Bonn, fixed - Reinstallation for data taking start (Run 2) :: - detector installation before first data taking - reinstall in October for start of data taking in 30th Oct 2017 - remount start <2017-10-23 Mon> - <2017-10-26 Thu> Alignment with Geometers (after removal & remounting due to window accident) for data taking. Magnet *cold* and under vacuum. - <2017-10-27 Fri> calibration of scintillator veto paddle in RD51 lab - remount installation finished incl. 
lead shielding <2017-10-28 Sat> (mail "InGrid status update" to Satan Forum on <2017-11-09 Thu>) - <data taking period from <2017-10-30 Mon> to <2017-12-22 Fri> in 2017> - between runs 85 & 86: fix of ~src/waitconditions.cpp~ TOS bug, which caused scinti triggers to be written in all files up to next FADC trigger - run 101 <2017-11-29 Wed 6:40> was the first with FADC noise significant enough to make me change settings: - Diff: 50 ns -> 20 ns (one to left) - Coarse gain: 6x -> 10x (one to right) - run 109: <2017-12-04 Mon> crazy amounts of noise on FADC - run 111: stopped early. tried to debug noise and blew a fuse in gas interlock box by connecting NIM crate to wrong power cable - run 112: change FADC settings again due to noise: - integration: 50 ns -> 100 ns This was done at around <2017-12-07 Thu 8:00> - integration: 100 ns -> 50 ns again at around <2017-12-08 Fri 17:50>. - run 121: Jochen set the FADC main amplifier integration time from 50 -> 100 ns again, around <2017-12-15 Fri 10:20> - <data taking period from <2018-02-17 Sat> to <2018-04-17 Tue> beginning 2018> - start of 2018 period: temperature sensor broken! - <2018-02-15 Thu> to <2018-02-17 Sat> issues with moving THL values & weird detector behavior. Changed THL values temporarily as an attempted fix, but in the end didn't help, problem got worse. <2018-02-17 Sat> (ref: gmail "Update 17/02" and [[file:~/org/Mails/cast_power_supply_problem_thlshift/power_supply_problem.org]]) issue with power supply causing severe drop in gain / increase in THL (unclear, #hits in 55Fe dropped massively ; background eventually only saw random active pixels). Fixed by replugging all power cables and improving the grounding situation. iirc: this was later identified to be an issue with the grounding between the water cooling system and the detector. - by <2018-02-17 Sat 20:41> everything was fixed and detector was running correctly again. - 2 runs: 1. <2018-02-15 Thu 7:01> <2018-02-15 Thu 8:33> 2. 
<2018-02-16 Fri 7:00> <2018-02-16 Fri 8:31> were missed because of this. - <2018-04-18 Wed> removal of veto scintillator and lead shielding - X-ray finger run 2 on <2018-04-20 Fri>. This run is actually useful to determine the position of the detector. - <2018-04-24 Tue> Geometer measurement after warming up magnet and not under vacuum. Serves as reference for difference between vacuum & cold on <2017-10-26 Thu>! - <2018-04-26 Thu> detector fully removed and taken back to Bonn - Reinstallation for data taking in Oct 2018 (Run 3) :: - installation started <2018-07-16>. Mounting due to lead shielding support was more complicated than intended (see mails "ingrid installation" including Damien Bedat) - shielding fixed by <2018-07-19> and detector installed the next couple of days - <2018-07-23 Mon> Alignment with Geometers for data taking. Magnet warm and not under vacuum. - data taking was supposed to start end of September, but delayed. - detector had issue w/ power supply, finally fixed on <2018-10-19 Fri>. Issue was a bad soldering joint on the Phoenix connector on the intermediate board. *Note*: See chain of mails titled "Unser Detektor..." starting on <2018-10-03 Wed> for more information. Detector behavior was weird from beginning Oct. Weird behavior seen on the voltages of the detector. Initial worry: power supply dead or supercaps on it. Replaced power supply (Phips brought it a few days after), but no change. - data taking starts <2018-10-20 Sat> - run 297, 298 showed lots of noise again, disabled FADC on <2018-12-13 Thu 18:40> (went to CERN next day) - data taking ends <2018-12-20 Thu> - runs that were missed: 1. <2018-10-19 Fri 6:21> <2018-10-19 Fri 7:51> 2. <2018-10-28 Sun 5:32> <2018-10-28 Sun 7:05> 3. <2018-11-24 Sat 7:08> <2018-11-24 Sat 7:30> The last one was not a full run. 
- [ ] *CHECK THE ELOG FOR WHAT THE LAST RUN WAS ABOUT*
- CAST Detector Lab measurements ::
  - detector mounted in CAST Detector Lab <2019-02-14 Thu>
  - data taking from <2019-02-15 Fri> to <2019-02-21 Thu>.
  - detector dismounted and taken back to Bonn
- Outer chip 55Fe calibrations ::
  - ref: [[file:~/org/outerRingNotes.org]]
  - calibration measurements of the outer chips with a 55Fe source using a custom anode & window
  - between <2021-05-20 Thu> and <2021-05-31 Mon 09:54> calibrations of each outer chip using the Run 2 and Run 3 detector calibrations
  - <2021-08-31 Tue> start of a new detector calibration
  - another set of measurements between <2021-10-12 Tue 18:00> and <2021-10-16 Sat 19:55> with a new set of calibrations
**** TODOs for this section :noexport:
- [X] *COPY BACK TO STATUS AND PROGRESS WITH UPDATES!* -> Done <2023-10-18 Wed 21:46>.
** Alignment
:PROPERTIES:
:CUSTOM_ID: sec:cast:alignment
:END:
Alignment of the detector with the X-ray telescope, the magnet and, by extension, the solar core during solar tracking is crucial for a helioscope to achieve a good physics result. The alignment procedure used for the Septemboard detector is three-fold:
1. alignment of the piping up to the detector using an acrylic glass target with a millimeter-spaced cross, as seen in fig. sref:fig:cast:laser_alignment1. This target is mounted to the vacuum pipes in the same way the detector is mounted. A laser is installed on the opposite side of the magnet. With the magnet bores fully open, the laser is aligned such that it propagates through the full bore and is reflected by the X-ray telescope into the focal spot. This alignment guarantees that the focal spot is located near the center of the detector. Uncertainty is introduced by the need to remove the acrylic glass target and install the detector, as the mounting screws allow for small movements. In addition, the vacuum pipes are not perfectly fixed.
2. alignment of the fully installed detector using an X-ray finger.
The 'X-ray finger' is a small electric X-ray generator (in particular an Amptek COOL-X), which is installed in the magnet bore at the opposite end of the magnet. The generated X-rays must traverse the magnet and telescope, thereby being focused by the telescope into the focal spot. As the X-ray finger does not emit parallel light, the resulting distribution of the X-rays on the detector is not a perfect focal spot, even if the telescope were perfect and the detector placed exactly in the focus. The close distance also implies that the effective focal length differs slightly from that of a source at infinity. The mean position of the recorded data can nevertheless be used to determine the likely focal spot position. See below, fig. [[sref:fig:cast:xray_finger_centers]], for an example and the resulting position from one of the X-ray finger runs.
3. alignment by the geometer group at CERN. A theodolite is installed in the CAST hall and the locations of many targets on the magnet, telescope, vacuum pipes and the detector itself are measured to a precision of $\SI{0.5}{mm}$ at the $1σ$ level. See fig. sref:fig:cast:alignment_targets for a picture of such a target. The initial geometer measurement from <2017-07-11> mainly serves as a baseline reference. As the first two alignment procedures provide a good alignment, a geometer measurement of the resulting position relative to the telescope can later be used to re-align the detector to this baseline after it has been removed. This ensures the detector can be remounted and placed in the right location without the need for an additional laser alignment.

The X-ray finger run taken in April 2018 can be used as a reference for the alignment during the first data taking period. The center position of each cluster can be shown in a heatmap, colored by the number of cluster centers each pixel received. Computing the mean position of all those clusters yields the most likely center position of the focal spot. See fig.
[[sref:fig:cast:xray_finger_centers]] for an example of this. The position of the center based on the mean of all cluster centers is about $\SI{0.4}{mm}$ away from the chip center in both axes.
#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "X-ray finger clusters")
  (label "fig:cast:xray_finger_centers")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/CAST_Alignment/xray_finger_centers_run_189.pdf"))
 (subfigure (linewidth 0.5)
  (caption "Geometer target")
  (label "fig:cast:alignment_targets")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/CAST_Alignment/detector_targets_for_alignment_small_crop.png"))
 (caption (subref "fig:cast:xray_finger_centers") ": Cluster center positions of the X-ray finger run 189 from April 2018. The red cross marks the center of all cluster centers, which is the most likely position of the focal spot. It is " ($ "\\sim\\SI{0.4}{mm}") " away from the chip center in both axes. The two parallel lines with fewer clusters are the window strongback. The orthogonal line is a graphite spacer in the center of the LLNL telescope. " (subref "fig:cast:alignment_targets") ": Image showing the targets on the detector. The acrylic glass cylinders are the fiducial marks used to hold the actual survey target. The survey target is a mirror to reflect the laser of the theodolite. Image from " (cite "geometer_1") ".")
 (label "fig:cast:xray_finger_alignment"))
#+end_src

With this setup, a geometer measurement was performed after each remounting to align the detector back to the initial laser alignment. As the second mounting of the detector in September 2017 was not used for any data taking, the associated geometer measurement is irrelevant. Tab. [[tab:cast:geometer_alignments]] summarizes the geometer alignment results for each of the measurements, using the CenterR and CenterF positions defined based on the initial geometer measurement in July 2017.
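Since the table lists the shifts per axis, it can be convenient to condense them into a single displacement. A short sketch (Python for illustration only; the example numbers are the CenterR row of the 23.07.2018 measurement):

```python
import math

def total_shift(dx, dy, dz):
    # Euclidean magnitude of the per-axis alignment shifts (all in mm)
    return math.sqrt(dx**2 + dy**2 + dz**2)

# CenterR row of the 23.07.2018 measurement:
print(round(total_shift(1.1, 0.5, 0.6), 2))  # → 1.35
```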
In each case the shifts in X, Y and Z direction are usually well below $\SI{1}{mm}$.
#+CAPTION: Overview of the results of the different geometer alignment measurements. The
#+CAPTION: first measurement serves as the baseline to define 2 points (CenterR and CenterF)
#+CAPTION: relative to which later alignments are done. The initial alignment is done both
#+CAPTION: by laser and X-ray finger. The second geometer measurement is not useful, as
#+CAPTION: no data was taken with it, due to the window rupture accident.
#+NAME: tab:cast:geometer_alignments
#+ATTR_LATEX: :booktabs t
|-------------+---------+---------+---------+---------+--------|
| Measurement | Target  | ΔX [mm] | ΔY [mm] | ΔZ [mm] | Useful |
|-------------+---------+---------+---------+---------+--------|
| 11.07.2017  |         |         |         |         | yes    |
|-------------+---------+---------+---------+---------+--------|
| 14.09.2017  | CenterR |    -0.1 |     0.3 |    -0.8 | no     |
|             | CenterF |    -0.1 |     0.3 |    -0.9 |        |
|-------------+---------+---------+---------+---------+--------|
| 26.10.2017  | CenterR |     0.2 |     0.6 |     0.2 | yes    |
|             | CenterF |     0.1 |     0.6 |    -0.1 |        |
|-------------+---------+---------+---------+---------+--------|
| 24.04.2018  | CenterR |     0.5 |     0.5 |     0.0 | yes    |
|             | CenterF |     0.4 |     0.5 |    -0.3 |        |
|-------------+---------+---------+---------+---------+--------|
| 23.07.2018  | CenterR |     1.1 |     0.5 |     0.6 | yes    |
|             | CenterF |     1.0 |     0.5 |     0.3 |        |
|-------------+---------+---------+---------+---------+--------|
For a detailed overview of the geometer measurements see the public EDMS links under cite:geometer_1,geometer_2,geometer_3,geometer_4,geometer_5 containing the PDF reports for each measurement.
*** TODOs for this section [/] :noexport:
- [X] *REPHRASE ABOVE W.R.T. CORRECT RUN 189 AND WHAT IS WINDOW ETC* Might change significantly. Ideally we can cross-reference the X-ray finger center position to the geometer measurement, i.e.
use it for the next data taking campaign where we know the deltas to the geometer associated with /this/ X-ray finger run. - [ ] *REPHRASE ABOVE TO NOT READ LIKE EXTENSION OF POINT NUMBER 2 ABOVE* -> Refers to the X-ray finger section after the 1, 2, 3 part. - [ ] *ADD NOTE HOW IN PRINCIPLE ONE COULD ARGUE THE POSITION IS ARTIFICIALLY MOVED UP DUE TO CHIP CUT OFF AT BOTTOM!* -> The data is partially cut off at the bottom potentially meaning the mean is slightly biased. - [X] *ROTATE THE PLOT* -> And rotate our background & candidates I fear... -> Yep, that still has to be done! :) -> But rotation for the X-ray finger does not really need to be done here. Or well, better rotate by 90° so it is more evident what the telescope graphite spacer is. - [ ] split these things up into something that is not a list? We can still add subsections here. - [ ] MAYBE rename to Detector setup & alignment - [ ] maybe explain X-ray finger measurements here fully instead of a separate section? -> Do we even have a separate section? - [X] X-ray finger not performed for 2018 data taking for logistical reasons and knowledge that geometer measurements give us precise information relative to "known good" alignment in 2017. - [X] have own section for alignment? Problem is that alignment takes place over the whole period, as the detector was removed multiple times etc. - [X] Alignment section which goes over the seen deviations based on geometers & X-ray finger run? - [X] Geometer measurement 1 not relevant directly. But relevant as it was used later to align against! - [X] Geometer measurement 2 irrelevant, as detector was mounted & dismounted without any data taking due to window rupture - [X] Geometer measurement 3 relevant for data taking Run 2 - [X] Geometer measurement 4 relevant for data taking Run 3 - [ ] image showing the targets from 2017/07/11 - [ ] Note: from Oct 2017 report on there's also numbers relative to *magnet* fiducials. 
We care only about telescope fiducials as that's what we align to. Maybe ask Johanna again about this...
*About Geometer table:*
- [ ] *CLARIFY WHAT THE SIGN MEANS. IS MINUS UP OR DOWN, LEFT OR RIGHT?*
- [ ] *LETS HOPE WHEN THINKING MORE DEEPLY ABOUT NUMBERS IT DOESN'T MAKE 1mm IN ΔX APPEAR TOO MUCH!*
  -> This is about comparing the two X-ray finger runs and the associated geometer measurements.
  -> Not sure thinking about it more. On the one hand we need to keep in mind that the coordinates X & Y should be flipped. Ok, next the center Y position (so real X) in the X-ray finger cluster center plot for the 2017 July plot should in theory be further at the bottom as the chip cuts off data and therefore artificially moves the position slightly up. Now the issue is that this is movement *in addition to* the already existing ~1 mm that are offset based on the difference calculation of the X-ray finger plots, but the X difference is only 0.2mm while we are not insensitive to changes in X! It's very confusing. Maybe the real takeaway is just that this is not super reliable... If we can't figure this out later, the correct thing to do will be:
  - Explain the above in a paragraph, numbers don't fully match. Therefore both should be taken with a grain of salt & the systematic uncertainty is simply forced to 0.5mm as a rough 1σ behavior.
  *Well*: One explanation could simply be that the movement is non-linear. A shift in geometer X/Y might not be the same shift in our X-ray finger X/Y!
- [ ] *SEE TIMELINE WHICH ONES ARE UNDER VACUUM AND MAGNET COLD / WARM MAYBE MENTION*
*** Some extra info about the geometer alignment :extended:
The following is the snippet from the <2017-09-14> PDF report about the definition of the CenterR and CenterF positions.
#+begin_quote
Goal of the operation has been to align the InGRID detector with respect to the LLNL telescope as on 11.07.2017 after the alignment of the setup with respect to the LASER installed.
Coordinates of the measurement on 11.07.2017 are given below. In order to compare InGRID position on 11.07 and 14.09, two points close to the detector axis CenterR and CenterF have been defined on 11.07. Afterwards their coordinates for the measurement on 14.09 have been calculated. #+end_quote *** Generate X-ray heatmap [/] :extended: :PROPERTIES: :CUSTOM_ID: sec:cast:alignment:xray_finger_plots :END: - [X] *FIND XRAY FINGER RUN 2, RUN 189!* [[file:~/CastData/data/XrayFingerRuns/]] - [X] *RECREATE BELOW FOR THE OTHER XRAY FINGER RUN!* -> Both are created and listed below. - [ ] *RECREATE PLOTS AS TIKZ + VEGA* -> We create them on the Cairo backend using DejaVu Serif, same as in the document of the thesis now. This is because the TikZ produced vector graphic ends up much larger than the Cairo one. Vega is on hold for now. - [X] *VERIFY THAT WE (LIKELY) HAVE TO ROTATE THE DATA BY 90 DEGREES AS ONE OF THE VERTICAL LINES IS THE TELESCOPE AXIS WHICH SHOULD BE HORIZONTAL TO THE GROUND* -> Yes, we do. 
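Two small ingredients go into the plot generated below: the cluster centers are binned onto the $256 \times 256$ pixel grid of the $\SI{14}{mm}$ chip to obtain a count per pixel, and the mean of all cluster centers gives the most likely focal spot position. Both can be sketched in Python (the Nim code below implements the same logic; the inputs here are hypothetical):

```python
def to_idx(pos_mm, chip_size_mm = 14.0, n_pixels = 256):
    # map a position in mm on the chip to a pixel index, clamped to the chip
    idx = round(pos_mm / chip_size_mm * n_pixels)
    return max(0, min(n_pixels - 1, idx))

def mean_center(centers):
    # mean (x, y) position of a list of (x, y) cluster centers in mm
    n = len(centers)
    return (sum(x for x, _ in centers) / n,
            sum(y for _, y in centers) / n)

print(to_idx(0.0), to_idx(7.0), to_idx(14.0))  # → 0 128 255
```

Note that Python's ~round~ rounds half to even while Nim's ~round~ rounds half away from zero; for this binning the difference only matters exactly at bin boundaries.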
First let's reconstruct the X-ray finger run: #+begin_src nim :tangle code/xray_finger_data_parsing.nim import shell, strutils proc main(path: string, run: int) = # parse data let outfile = "/t/xray_finger_$#.h5" % $run let recoOut = "/t/reco_xray_finger_$#.h5" % $run shell: raw_data_manipulation -p ($path) "--runType xray --out " ($outfile) shell: reconstruction -i ($outfile) "--out " ($recoOut) when isMainModule: import cligen dispatch main #+end_src And now we simply create a heatmap of the cluster centers: #+begin_src nim :tangle code/xray_finger_center_plot.nim import nimhdf5, ggplotnim, options import ingrid / tos_helpers import std / [strutils, tables] proc main(run: int, switchAxes: bool = false, useTeX = false) = let file = "/t/reco_xray_finger_$#.h5" % $run #proc readClusters(h5f: H5File): (seq[float], seq[float]) = var h5f = H5open(file, "r") # compute counts based on number of each pixel hit proc toIdx(x: float): int = (x / 14.0 * 256.0).round.int.clamp(0, 255) var ctab = initCountTable[(int, int)]() var df = readRunDsets(h5f, run = run, chipDsets = some(( chip: 3, dsets: @["centerX", "centerY"]))) .mutate(f{"xidx" ~ toIdx(idx("centerX"))}, f{"yidx" ~ toIdx(idx("centerY"))}) let xidx = df["xidx", int] let yidx = df["yidx", int] forEach x in xidx, y in yidx: inc cTab, (x, y) df = df.mutate(f{int: "count" ~ cTab[(`xidx`, `yidx`)]}) let centerX = df["centerX", float].mean let centerY = df["centerY", float].mean discard h5f.close() echo "Center position of the cluster is at: (x, y) = (", centerX, ", ", centerY, ")" ## NOTE: Exchanging the axes for X and Y is equivalent to a 90° clockwise rotation for our data ## because the centerX values are inverted `(256 - x), applyPitchConversion`. ## The real rotation of the Septemboard detector at CAST seen from the telescope onto the ## detector is precisely 90° clockwise. 
let x = if switchAxes: "centerY" else: "centerX" let y = if switchAxes: "centerX" else: "centerY" let cX = if switchAxes: centerY else: centerX let cY = if switchAxes: centerX else: centerY ggplot(df, aes(x, y, color = "count")) + geom_point(size = 0.75) + geom_point(data = newDataFrame(), aes = aes(x = cX, y = cY), color = "red", marker = mkRotCross) + scale_color_continuous() + ggtitle("X-ray finger clusters of run $#" % $run) + xlab(r"x [mm]") + ylab(r"y [mm]") + xlim(0.0, 14.0) + ylim(0.0, 14.0) + margin(right = 3.5) + #theme_scale(1.0, family = "serif") + coord_fixed(1.0) + themeLatex(fWidth = 0.5, width = 600, baseTheme = sideBySide) + legendPosition(0.83, 0.0) + ggsave("/home/basti/phd/Figs/CAST_Alignment/xray_finger_centers_run_$#.pdf" % $run, useTeX = useTeX, standalone = useTeX, dataAsBitmap = true) #useTeX = true, standalone = true) when isMainModule: import cligen dispatch main #+end_src #+RESULTS: # Center position of the cluster is at: (x, y) = (7.210714052855218, 5.669514297250704) First perform the data reconstruction: #+begin_src sh ./code/xray_finger_data_parsing -p ~/CastData/data/XrayFingerRuns/Run_21_170713-11-03 --run 21 ./code/xray_finger_data_parsing -p ~/CastData/data/XrayFingerRuns/Run_189_180420-09-53 --run 189 #+end_src And now create the plots: #+begin_src sh ./code/xray_finger_center_plot -r 21 --switchAxes ./code/xray_finger_center_plot -r 189 --switchAxes #+end_src For run 21: Center position of the cluster is at: (x, y) = (7.210714052855218,5.669514297250704) For run 189: Center position of the cluster is at: (x, y) = (7.428075467697270,6.594113570730057) First the plot for the (unused) X-ray finger run taken at the first installation before any data taking (detector removed afterwards): [[file:Figs/CAST_Alignment/xray_finger_centers_run_21.pdf]] And second the plot of the 2018 X-ray finger run taken _before_ the detector was removed in Apr 2018. This is the baseline for our idea where the focal spot is going to be. 
file:Figs/CAST_Alignment/xray_finger_centers_run_189.pdf

*NOTE*: For a longer explanation about the reasoning behind the comment for ~--switchAxes~ in the code, see sec. [[#sec:limit:candidates:septemboard_layout_transformations]].
*** Generate spectrum of X-ray finger run :extended:
Let's also look at the spectrum of the X-ray finger run (at least 189). Given the reconstructed H5 file of run 189:
#+begin_src sh
plotBackgroundRate \
    /t/reco_xray_finger_189.h5 \
    --names "X-ray finger" \
    --title "X-ray finger run 189 spectrum" \
    --centerChip 3 \
    --region crGold \
    --energyDset energyFromCharge \
    --outfile xray_finger_spectrum_189.pdf \
    --outpath ~/phd/Figs/XrayFinger/ \
    --useTeX \
    --quiet
#+end_src
Which yields the following figure:
#+CAPTION: Spectrum of the X-ray finger run. *IMPORTANT*: Care needs to be taken interpreting it, because
#+CAPTION: the spectrum passes _through_ the telescope. Given that the telescope has very
#+CAPTION: low efficiency above $\SI{4}{keV}$, the rate of X-rays at higher energies is
#+CAPTION: extremely suppressed.
#+NAME: fig:cast:xray_finger_189_spectrum
[[~/phd/Figs/XrayFinger/xray_finger_spectrum_189.pdf]]
*** Systematic uncertainty from graphite spacer rotation [/] :extended:
- [X] Determine the rotation angle of the graphite spacer from the X-ray finger data -> do now.
  X-ray finger run: [[~/phd/Figs/CAST_Alignment/xray_finger_centers_run_189.pdf]]
  -> [[~/org/Figs/statusAndProgress/xray_finger_graphite_spacer_angle_run189.png]]
  -> It comes out to 14.17°! But for run 21 (between which the detector was dismounted, of course): [[~/org/Figs/statusAndProgress/xray_finger_graphite_spacer_angle_run21.png]]
  -> Only 11.36°! That's a huge uncertainty of 3° given that the detector was only dismounted and remounted!
- [ ] rotation of telescope!
- [ ] Effect on systematic uncertainty!
** Detector setup at CAST
:PROPERTIES:
:CUSTOM_ID: sec:cast:detector_setup
:END:
The setup of the full beamline from the magnet end cap to the detector is shown in a render in fig.
[[fig:cast:render_beamline_setup]]. The piping shows a clear kink, introduced using a flexible bellow. This setup is used to move the detector mount further away from the other beamline to provide more space for two setups side-by-side. At the same time it is an artifact of the LLNL telescope being only a $\SI{30}{°}$ portion of a full telescope, which results in the focal plane not being centered in front of the telescope. Not shown in the image are the lead shielding installed around the detector as well as the veto scintillator, which covers the majority of the beamline area. The lead shielding is a $\SIrange{5}{15}{cm}$ thick 'castle' of lead around the detector ($\SI{10}{cm}$ on top and behind, $\SI{15}{cm}$ in front and $\SI{5}{cm}$ and $\SI{10}{cm}$ on each side). An annotated image of the real setup is seen in fig. [[fig:cast:annotated_setup]], which shows the lead shielding, the veto scintillator, the \cefe source manipulator and the LLNL X-ray telescope. The setup is behind the VT3 gate valve of the CAST magnet.
#+CAPTION: Render of the detector setup up to the magnet end cap as seen
#+CAPTION: from above. The beamline kinks away from the other beamline
#+CAPTION: ("below" in this image) to provide more space for two detectors
#+CAPTION: at the same time.
#+CAPTION: Image courtesy of Tobias Schiffer.
#+NAME: fig:cast:render_beamline_setup
[[~/phd/Figs/llnl_cast_gridpix_render_small_annotated.png]]

#+CAPTION: Annotated setup as installed in October 2017 for the first data taking campaign.
#+CAPTION: The detector is seen in its lead shielding, with the veto scintillator covering
#+CAPTION: a large angular portion above the detector. The \cefe source manipulator
#+CAPTION: is seen head-on here. On the right towards the magnet we see the housing of the
#+CAPTION: LLNL X-ray telescope.
#+NAME: fig:cast:annotated_setup
[[~/phd/Figs/CAST_Nov2017Aufbau_annotated_small.png]]
*** TODOs for this section [/] :noexport:
- [ ] *POSSIBLY CHANGE TO SOMETHING W/O RENDER AND SHOW IMAGES OF FULL SETUP WITH VETO SCINTI AS WELL*
  -> Generally the annotated render is already shown earlier in the thesis. So I don't think it's really needed here. We can reference it though.
- [ ] *MENTION ~VT3~ AS ITS IMPORTANT*
- [X] *CHECK THICKNESS OF LEAD SHIELDING*
- [X] *SHOW REAL (ANNOTATED?) IMAGE OF SETUP*
*** \cefe source and manipulator
:PROPERTIES:
:CUSTOM_ID: sec:cast:55fe_manipulator
:END:
As seen in the previous section, the setup includes a \cefe source. Its purpose is twofold: it monitors the detector behavior and it serves as a way to calibrate the energy of events (as mentioned in theory section [[#sec:theory:escape_peaks_55fe]]). More details on the usage and importance for data analysis will be given in chapter [[#sec:calibration]]. The source is installed on a pneumatic manipulator. Using a compressed air line with about $\SI{6}{bar}$ of pressure, the manipulator can be moved up and down. Under the vacuum conditions of the setup, the manipulator is inserted unless the compressed air is used to push it out. A Raspberry Pi [fn:raspi] is installed close to the manipulator and connects to the two Festo [fn:festo] control sensors at the top and bottom end of the manipulator using its general purpose input/output (GPIO) pins. Two pins are used to read the status of the sensors, one pin per sensor. Five more pins connect to a $\SI{24}{V}$ relay, which is used to control the controllers for the compressed air line. The relay is controlled by pulse width modulation (PWM). The software controlling the GPIO pins of the Raspberry Pi is written in Python. A client program is running on a computer in the CAST control room and communicates with the Raspberry Pi over a network connection, on which a server process is running.
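To give an idea of how such a client works: the server script (shown in the extended section) listens on port 42000, reads newline-terminated commands such as ~insert~, ~remove~ or ~out?~, and answers with a newline-terminated JSON message. A minimal client sketch in Python (the host address is whatever the Raspberry Pi is reachable under; this is an illustration, not the actual control-room client):

```python
import json
import socket

def query_manipulator(command, host, port = 42000):
    # Send a single command to the manipulator server and return the
    # decoded JSON reply (a dict with 'username' and 'message' keys).
    with socket.create_connection((host, port)) as s:
        s.sendall((command + "\n").encode())  # server reads line-wise
        reply = s.makefile().readline()       # replies are terminated by '\n'
    return json.loads(reply)

# e.g. query_manipulator("out?", host = "10.42.0.91")
```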
It can receive connections through a socket, allowing for remote and programmatic control of the manipulator via a set of simple string based messages. Further, it provides a REPL (read-evaluate-print loop) to control it interactively. For more details about the software see the extended version of this thesis. [fn:raspi] https://www.raspberrypi.org/about/ [fn:festo] https://www.festo.com/ **** TODOs for this section [/] :noexport: - [ ] *REFERENCE / LINK TO FESTO* - [ ] *MAYBE SHORTEN INFO ABOUT PWM?* - [ ] *LINK TO SOFTWARE HERE* **** Manipulator software and notes [0/1] :extended: - [ ] *MOVE MANIPULATOR CODE TO TPA TOOLS AND LINK TO IT?* -> Code can definitely go to TPA repository. The notes I think are enough if they are simply added as an Org file into the repository as well. They are too specific in some sense? The source code of the python script running on the Raspberry Pi to control the manipulator is the following script: #+begin_src python #!/usr/bin/env python3.6 import sys import pigpio import readline import logging import argparse import time import socket import threading import json import asyncio import functools import weakref # the program needs to do the following # # - on RPi 7 pins used (5 controlled via software): # - relay: # - GOOD - input, Pin 14 # - OUT - input, Pin 15 # - RC IN - output (via PWM), via Pin 14 # - VRC - const voltage, using 5V via PIN 2, not done in software # - GND - ground, pin 6, not done in software # - sensors 2 pins: # - input, read sensor output # # # program always listens to GOOD and OUT # using PWM we activate the manipulator. done by waiting for # - command line input? # - reading some file, s.t. this program runs as a daemon and we use # some external tool to write file via usb to Pi # - finally be able to execute from TOS. 
easiest via script to call
# reading sensor inputs done in connection with usage of PWM
#
# in order to control the source via network, the basic usage is something like
# the following:
# s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# s.connect(('localhost', 42000))
# s.send("insert".encode())
# depending on whether the call is from the local machine or not.
# "insert" and "remove" are supported at the moment
# sends back a byte string containing the bool of the
# insertion / removal

#class client(asyncio.Protocol):

# connect to pi at IP address
p = pigpio.pi('10.42.0.91')

# define dict of pins
d = {"GOOD" : 14, "OUT" : 15, "RC_IN" : 18, "S_OPEN" : 20, "S_CLOSE" : 21}

class server(threading.Thread):
    # this is a simple server class, which receives the necessary
    # parameters to control the raspberry pi and a socket, from
    # which it listens to commands
    # inherits from threading.Thread to run in a separate thread
    def __init__(self, socket, p, d, pwm):
        # init the object
        self.socket = socket
        self.p = p
        self.d = d
        self.pwm = pwm
        self._stop = False
        # now call the Thread init
        threading.Thread.__init__(self)
        # and set it as a daemon, so that it cannot
        # stop the main program from quitting
        self.daemon = True  # note: setDaemon(True) is deprecated

    async def process_client(self, reader, writer):
        client = writer.get_extra_info('peername')
        print("New client connected: {}".format(client))
        while self._stop == False:
            #data = socket.recv(1024).decode()
            data = (await reader.readline()).decode()
            if data:
                success = self.parse_message(data, client)
                message = self.create_message(client[0], success)
                writer.write(message)
                await writer.drain()
            else:
                writer.close()
                break

    # hacky get loop...
    def get_loop_and_server(self):
        return (self.loop, self.socketserver)

    def run(self):
        # using run we start the thread
        # need a new event loop, in which asyncio works
        self.loop = asyncio.new_event_loop()
        asyncio.set_event_loop(self.loop)
        # start server on specific port open on all interfaces
        self.server_future = asyncio.start_server(self.process_client, host = "0.0.0.0", port = 42000)
        # the returned future is handed to the event loop
        self.socketserver = self.loop.run_until_complete(
            asyncio.ensure_future(
                self.server_future, loop = self.loop))
        print(self.socketserver.sockets)
        # run
        self.loop.run_forever()

    def stop_server(self):
        # in case stop_server is called, the stop flag is
        # set, such that the while loop, which waits for
        # data from the socket stops
        self._stop = True
        while self.loop.is_running() == True:
            print("current sockets still connected {}".format(self.socketserver.sockets))
            print("loop is still running: {}".format(self.loop.is_running()))
            #self.socketserver.wait_closed()
            self.loop.stop()
        self.server_future.close()
        self.socketserver.close()
        # next line raises an exception, loop still running....
        # TODO: fix problem that we cannot stop the running event loop :(
        self.loop.close(self.loop.run_until_complete(self.socketserver.wait_closed()))
        time.sleep(0.2)
        self.loop.close()

    def parse_message(self, data, address):
        # this function parses the data. If there is a function
        # call in the data, we call the appropriate function
        ip, port = address
        result = False
        if 'insert' in data:
            result = insert_source(self.p, self.d, self.pwm)
            #logging.info("source inserted via network socket {}:{}".format(ip, port))
        elif 'remove' in data:
            result = remove_source(self.p, self.d, self.pwm)
            #logging.info("source removed via network socket {}:{}".format(ip, port))
        elif 'out?' in data:
            result = read_out(self.p, self.d)
            #logging.info("out? status requested via network socket {}:{}".format(ip, port))
        elif 'good?' in data:
            # read GOOD and print
            result = read_good(self.p, self.d)
            #logging.info("good? status requested via network socket {}:{}".format(ip, port))
        elif 's_open?' in data:
            # read sensor 1 and print
            result = read_sensor_open(self.p, self.d)
            #logging.info("s_open? status requested via network socket {}:{}".format(ip, port))
        elif 's_close?' in data:
            # read sensor 2 and print
            result = read_sensor_close(self.p, self.d)
            #logging.info("s_close? status requested via network socket {}:{}".format(ip, port))
        else:
            result = "Unknown command"
        return result

    def create_message(self, client, data):
        # this function creates a JSON message containing the returned value
        # of the RPi call and a clientname
        # using dictionary, we create a json dump and return the encoded
        # string
        message = {"username" : client, "message" : data}
        # add a trailing '\n' to indicate end of data stream
        json_data = json.dumps(message) + '\n'
        return json_data.encode()

# set relay_sleep time (time to wait for activation of relay): 50ms
relay_sleep = 50e-3
# set manipulator_sleep time: 1s
manip_sleep = 1

def print_help():
    help_string = """
    The following commands are available:\n
    insert : insert source into bore
    remove : remove source from bore
    out? : print current value of relay OUT
    good? : print current value of relay GOOD
    d? : parameters used for relay (pin layout etc.)
    pwm? : parameters used for PWM (frequency, duty cycle, ...)
help : prints this help """ print(help_string) return def read_good(p, d): # simple function which returns the value of the # GPIO pin for the GOOD output of the relay good = bool(p.read(d["GOOD"])) return good def read_out(p, d): # simple function which returns the value of the # GPIO pin for the OUT output of the relay out = bool(p.read(d["OUT"])) return out def read_sensor_open(p, d): # simple function which returns bool corresponding to # GPIO pin of sensor for OPEN val = bool(p.read(d["S_OPEN"])) return val def read_sensor_close(p, d): # simple function which returns bool corresponding to # GPIO pin of sensor for CLOSED val = bool(p.read(d["S_CLOSE"])) return val def configure_pins(p, d): # function to export all pins and set to correct modes # relay control / reading p.set_mode(d["GOOD"], pigpio.INPUT) p.set_mode(d["OUT"], pigpio.INPUT) p.set_mode(d["RC_IN"], pigpio.OUTPUT) # sensor reading p.set_mode(d["S_OPEN"], pigpio.INPUT) p.set_mode(d["S_CLOSE"], pigpio.INPUT) return def pwm_control(p, d, freq, duty_cycle): # function to control the pwm of the RC IN pin p.hardware_PWM(d["RC_IN"], freq, duty_cycle) return def insert_source(p, d, pwm): # inserts the source into the bore by activating the relay # wrapper for source_control success = source_control(p, d, pwm, "on") return success def remove_source(p, d, pwm): # removes source from bore by disabling the relay # wrapping source control success = source_control(p, d, pwm, "off") return success def source_control(p, d, pwm, direction): # this function provides a generalized interface to control the source # inputs: # p: the Pi object # d: the dict. containing the parameters # pwm: the dict. 
containing pwm parameters
    # direction: a string describing the direction to move the source
    #            "on" : insert source
    #            "off" : remove source
    pwm_control(p, d, pwm["f"], pwm[direction])
    # relay was triggered: means relay should now read
    # insert:
    #   GOOD == True &
    #   OUT == True
    # remove:
    #   GOOD == True &
    #   OUT == False
    time.sleep(relay_sleep)
    good = read_good(p, d)
    out = read_out(p, d)
    success = False
    # set expected values based on insertion / removal
    if direction == "on":
        good_exp = True
        out_exp = True
    elif direction == "off":
        good_exp = True
        out_exp = False
    else:
        raise NotImplementedError("only 'on' and 'off' implemented to control source.")
    s1 = None
    s2 = None
    if good == good_exp and out == out_exp:
        # if good is True and out False, everything fine
        #logging.debug("pwm set to {}, relay reports: (good : {}), (out : {})".format(direction, good ,out))
        # after setting of relay, wait again and check sensors
        print('pwm switched, waiting for manipulator to be moved')
        time.sleep(manip_sleep)
        # check sensors
        s1 = read_sensor_open(p, d)
        s2 = read_sensor_close(p, d)
        #logging.debug('sensors report: s1 = {}, s2 = {}'.format(s1, s2))
        # TODO: implement logic, which deals with sensors of manipulators
        if direction == "on":
            # after insertion the sensors should read:
            # s1 (sensor open) == True
            # s2 (sensor close) == False
            if s1 == True and s2 == False:
                success = True
            else:
                success = False
        else:
            if s1 == False and s2 == True:
                # after removal the sensors should read:
                # s1 (sensor open) == False
                # s2 (sensor close) == True
                success = True
            else:
                success = False
        if success == False:
            #logging.warning("""WARNING: direction was {}, but sensors read (open): {} (close): {}.
#Relay switched correctly.""".format(direction, s1, s2))
            pass
    elif good == good_exp and out != out_exp:
        # something is wrong, seems like relay did not change, both still report True
        #logging.warning("pwm set to {}, relay good, but OUT still reports True: {}, {}".format(direction, good, out))
        pass
    elif good == False:
        #logging.warning("relay reports bad signal: {}".format(good))
        pass
    else:
        #logging.warning("should not happen. Contact developer.")
        pass
    if direction == "on":
        print("Insertion returned {}".format(success))
        if success == False:
            print("WARNING: insertion may have failed, but sensors read (open): {} (close): {}".format(s1, s2))
            print("However, relay was activated correctly.")
    elif direction == "off":
        print("Removal returned {}".format(success))
        if success == False:
            print("WARNING: removal may have failed, but sensors read (open): {} (close): {}".format(s1, s2))
            print("However, relay was activated correctly.")
    # the following lines are here to make sure there is a new prompt
    # even in case a network call was made before
    sys.stdout.write('> ')
    sys.stdout.flush()
    return success

def control_loop(p, d, pwm):
    # this function defines the main control loop of the manipulator
    # control
    print('Starting command prompt')
    print('\t insert : inserts source into bore')
    print('\t remove : removes source out of bore')
    print('\t quit   : stop the program')
    # TODO: still need to implement the checks for
    #       - sensor positions
    #         output warning to console and log file in case sensors
    #         don't report what was commanded
    #       - output warning in case signal not good
    while True:
        # the sys calls are used to make sure the line is empty before we
        # write to it via input.
Don't want two > > to appear (depending
        # on network calls this might happen)
        sys.stdout.write('\r')
        sys.stdout.flush()
        line = input('> ')
        if 'insert' in line:
            insert_source(p, d, pwm)
            #logging.info("source inserted")
        elif 'remove' in line:
            remove_source(p, d, pwm)
            #logging.info("source removed")
        elif 'out?' in line:
            # read OUT and print
            print(read_out(p, d))
        elif 'good?' in line:
            # read GOOD and print
            print(read_good(p, d))
        elif 's_open?' in line:
            # read sensor 1 and print
            print(read_sensor_open(p, d))
        elif 's_close?' in line:
            # read sensor 2 and print
            print(read_sensor_close(p, d))
        elif 'd?' in line:
            # print dictionary
            print(d)
        elif 'pwm?' in line:
            # print pwm dictionary
            print(pwm)
        elif line in ['help', 'h', 'help?']:
            print_help()
        elif line in ['quit', 'q', 'stop']:
            break
        elif line != "":
            print('not a valid command.')
        else:
            continue
        # perform some logging of input, exit
        #logging.debug("command: {}".format(line))
    # after loop perform final logging?
    #logging.info('stopping program.')
    return

def create_message(client, data):
    # this function creates a JSON message containing the returned value
    # of the RPi call and a clientname
    # using dictionary, we create a json dump and return the encoded
    # string
    message = {"username" : client,
               "message"  : data}
    # add trailing \n to indicate end of data stream
    json_data = json.dumps(message) + '\n'
    return json_data.encode()

def main(args):
    # setup arg parser
    parser = argparse.ArgumentParser(description = 'parse log level')
    parser.add_argument('--log', default="DEBUG", type=str)
    parsed_args = parser.parse_args()
    loglevel = parsed_args.log
    # setup logger
    numeric_level = getattr(logging, loglevel.upper(), None)
    if not isinstance(numeric_level, int):
        raise ValueError('Invalid log level: {}'.format(loglevel))
    # add an additional handler for the asyncio logger so that it also
    # writes the errors and exceptions to console
    console = logging.StreamHandler()
    logging.getLogger("asyncio").addHandler(console)
    LOG_FILENAME =
'log/manipulator.log'
    logging.basicConfig(filename = LOG_FILENAME,
                        #stream = sys.stdout,
                        format = '%(levelname)s %(asctime)s: %(message)s',
                        datefmt='%d/%m/%Y %H:%M:%S',
                        level = numeric_level)
    # now configure all pins
    configure_pins(p, d)
    # define PWM settings
    pwm = {"f"   : 200,
           "off" : 200000,
           "on"  : 400000}
    # set pwm for RC IN pin
    pwm_control(p, d, pwm["f"], pwm["off"])
    # configure readline
    readline.parse_and_bind('tab: complete')
    readline.set_auto_history(True)
    # create the socket for the server
    # instantiate the server object
    # thr = server(serversocket, p, d, pwm)
    thr = server(None, p, d, pwm)
    # and start
    thr.start()
    # now that everything is configured, start the control loop
    control_loop(p, d, pwm)
    #try:
    #thr.stop_server()
    #except:
    # after control loop has finished, shut down the server thread
    loop, socketserver = thr.get_loop_and_server()
    # the following is an ugly hack to close the program without getting any
    # exceptions, thrown because the event loop in the server class is
    # not being shut down. Trying, but doesn't work, so this will have
    # to do for now
    try:
        thr.stop_server()
    except:
        socketserver.close()
    #loop.close()

if __name__=="__main__":
    import sys
    main(sys.argv[1:])
#+end_src

The following are my notes taken during development of the hardware & software that describe the specific hardware in use.

***** DONE Manipulator [3/3]
****** DONE test for leaks
****** DONE test using compressed air, reading sensors

Regarding sensors, setup and hardware:
Hardware:
- sensors: Festo 150 857
  accept between 12 and 30 V DC
  max. output amperage: 500 mA
  switch on time: 0.5 ms
  switch off time: 0.03 ms
- cable : Festo NEBU-M8G3-K5-LE3 (541 334)
- cable (power): Festo NEBV-Z4WA2L-R-E-5-N-LE2-S1

Thus, supply the sensors with 24 V DC as well. Build the setup such that valve and sensors receive the same 24 V. The sensor outputs need to go on RPi GPIO pins. These accept a maximum of 3.3 V (!).
Using a voltage divider, something like the following seems reasonable
#+BEGIN_LaTeX
$\frac{U_{\text{Pi, in}}}{U_{\text{sensor, out}}} = \frac{R_2}{R_1 + R_2}$
#+END_LaTeX
with
#+BEGIN_LaTeX
$U_{\text{Pi, in}} < 3.3\,\text{V}$
$U_{\text{sensor, out}} = 24\,\text{V}$
#+END_LaTeX
Thus, we'd get:
#+BEGIN_SRC python
R2 = 1e3
R1 = 8.2e3
U_sensor_out = 24
U_pi_in = U_sensor_out * R2 / (R1 + R2)
return U_pi_in
#+END_SRC

#+RESULTS:
: 2.60869565217

Build a simple board using these resistors (first check that the output current of the sensor does not exceed the 0.5 mA max of the RPi!) to feed the sensor values into the RPi. Should be simple?

Tested basic setup today (<2017-08-29 Di 18:47>).
- 24V power supply prepared
- RPi connected to relay
- tpc20 used to run PyS_manipController.py
- relay connected as:
  - power supply 24V+: relay COM
  - power supply GND: valve GND
  - valve +: relay NO
is all there is to do. :)

****** DONE finalize software

The software to readout and control the manipulator needs to be finished. The [[file:~/CastData/ManipulatorController/PyS_manipController.py][Python script to control manipulator]] currently creates a server, which listens for connections from a client connecting to it. Commands are not final yet (use only "insert" and "remove" so far).

Still need to:
1. DONE separate server and client into two actually separate threads
2. DONE try using nim client of chat app as the client. allows me to use nim, yay.

Note <2017-09-07 Do>: took me the last two days to figure out why the server application was buggy. See mails to Lucian and Fabian for an explanation titled 'Python asyncio'. Having a logger enabled causes asyncio to redirect all error output from the asyncio code parts to land in the log file.

CLOSED: <2017-09-09 Sa 01:51> Python server is finished, allows multiple incoming connections at the same time, thanks to asyncio (what a PITA...). Final version is [[file:~/CastData/ManipulatorController/PyS_manipController.py][PyS_manipController.py]].
Nim client works well as a client to control the server. See [[file:~/CastData/ManipulatorController/nim/client.nim][client.nim]] for the code currently in use.

*** Lead shielding layout                                          :extended:

The full lead shielding layout can be found here (created by Christoph Krieger):
[[file:resources/lead_shielding_assembly_ingrid_2017.pdf]]

** Window accident
:PROPERTIES:
:CUSTOM_ID: sec:cast:window_accident
:END:

During the preparations of the detector for data taking, it became clear that the rubber seals of the quick connectors used for the water cooling system had started to disintegrate. The connectors were replaced by Swagelok connectors, but the water cooling system still contained rubber pieces blocking the flow. Due to the small diameter and twisted layout of the cooling ducts in the copper body, the only way at hand to clean them was a compressed air line, normally used for operation of the \cefe manipulator (see sec. [[#sec:cast:55fe_manipulator]]). This cleaning process worked very well. Multiple cleaning & water pumping cycles were needed, because after each cleaning with compressed air, the next round of water pumping moved some remaining pieces, which blocked the system again. After multiple cycles in which no further clogging occurred upon water pumping, a final cycle was intended. As the gas supply and the water cooling system now used not only the same tubing, but -- after the replacement of the quick connectors -- also the same connectors, I mistakenly connected the compressed air line to the gas supply instead of the water cooling line. The windows -- tested up to $\SI{1.5}{bar}$ pressure -- could not withstand the sudden pressure of the compressed air line of about $\SI{6}{bar}$. A sudden and catastrophic window failure broke the vacuum and shot window pieces as well as possible contamination into the vacuum pipes towards the X-ray optics.
Because the LLNL telescope is an experimental optic, there was worry about potential oil contamination from the dirty air of the compressed air line. A conservative estimate was computed, based on an upper bound on the oil contamination of the air, the volume of the vacuum pipes and the telescope area. Assuming compressed air flowed for $\SI{5}{s}$, an ISO 8573-1:2010 class 4 compressed air contamination of $\text{ppmv}_{\text{oil}} = \SI{10}{\milli\gram\per\meter\cubed}$ and all oil in the air sticking to the telescope shells would lead to a contamination of $c_{\text{oil}} = \SI{41.7}{\nano\gram\per\cm\squared}$. More realistic is about $\SI{1}{\percent}$ of that, due to the telescope being less than $\frac{1}{10}$ of the full system area and the primary membrane pump likely removing the majority ($>\SI{90}{\percent}$) of the oil in the first place. This puts an upper limit of $c_{\text{oil}} = \SI{0.417}{\nano\gram\per\cm\squared}$, which is well below anything considered problematic for further data taking. Further, the \cefe source manipulator likely caught most of the debris, as it was fully inserted due to the necessary removal of the compressed air line from it, which is normally needed to keep the manipulator extruded when the system is under vacuum. For this reason it is unlikely any window debris could have caused significant scratches in the telescope layers.

After the incident the detector was dismounted and taken to the CAST detector lab. Fig. sref:fig:cast:window_accident:broken_window shows the detector from above with the small remaining pieces of the window. Fig. sref:fig:cast:window_accident:broken_window_inside shows the inside of the detector after opening it. A bulge is visible where the gas inlet is and the compressed air entered. As the detector was electronically dead after the incident, the decision was made to move it back to Bonn for repairs. It turned out that the Septemboard had become loose from the connector.
#+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Broken window from the inside") (label "fig:cast:window_accident:broken_window") (includegraphics (list (cons 'width (linewidth 0.95))) "~/phd/Figs/CAST_detector_exploded/broken_window_close_IMG_20170919_152130.jpg")) (subfigure (linewidth 0.5) (caption "View into the detector after accident") (label "fig:cast:window_accident:broken_window_inside") (includegraphics (list (cons 'width (linewidth 0.95))) "~/phd/Figs/CAST_detector_exploded/detector_broken_window_open_top_IMG_20170919_152130.jpg")) (caption (subref "fig:cast:window_accident:broken_window") " shows the cathode of the detector from the inside with the broken window. Essentially the full window directly exposed to vacuum is gone. " (subref "fig:cast:window_accident:broken_window_inside") " is the view into the detector without the cathode. A bulge of the field cage is visible where the compressed air entered.") (label "fig:cast:window_accident:broken_window_subfig")) #+end_src *** TODOs for this section [/] :noexport: - [X] LIKELY NOT LIST EXACT NUMBERS HERE, BUT IF SO REFERENCE ISO CLASS -> As this will be moved to the appendix, the numbers can remain. - [X] contamination calculation - [X] pictures of broken window & detector - [ ] *REWRITE CODE TO USE UNCHAINED* *** Calculations of contamination [0/1] :extended: - [ ] *REWRITE TO USE UNCHAINED!!* Check the appendix [[#sec:appendix:vacuum_contamination]] for the document written that contains my thoughts about the calculations below. Here are the calculations done to estimate the contamination. 
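Before the full Nim implementation below, the headline numbers can be cross-checked with a short Python sketch. This is only a sketch mirroring the code below: the function and variable names are mine, the tube geometry, pressure difference and the quoted viscosity value are the ones used in the Nim code, and the $\SI{10}{\milli\gram\per\meter\cubed}$ oil content is the ISO class 4 assumption from the main text.

#+begin_src python
import math

def poiseuille_flow(d, dp, mu, x):
    # laminar (Hagen-Poiseuille) volumetric flow rate in m^3/s through a
    # tube of diameter d [m] and length x [m], for a pressure difference
    # dp [Pa] and dynamic viscosity mu [Pa s]
    return math.pi * d**4 * dp / (128 * mu * x)

# values as in the Nim code below
flow = poiseuille_flow(3e-3, 6.0e5, 1.8369247e-4, 2.0)
t_open = 5.0              # s, assumed duration of air flow
V_air = flow * t_open     # m^3 of compressed air that entered the system
m_oil = V_air * 10.0      # mg, using the 10 mg/m^3 class 4 upper bound
# dividing m_oil by the telescope shell area then yields the c_oil
# upper bound quoted in the text
print("air volume: {:.1f} l, oil mass: {:.0f} ug".format(V_air * 1e3, m_oil * 1e3))
#+end_src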
First a file containing the tubing sizes of the vacuum system: #+begin_src nim :tangle code/vacuum_contamination/tubing.nim import tables type # defines the TubesMap datatype, which is a combined object to # store the different parts of the tubing each sequences of tuples TubesMap* = object static_tubes* : seq[tuple[diameter: float, length: float]] flexible_tubes* : seq[tuple[diameter: float, length: float]] t_pieces* : seq[tuple[diameter: float, length_long: float, length_short: float]] crosses* : seq[tuple[diameter: float, length: float]] proc getVacuumTubing*(): TubesMap = # this function returns the data (originally written in calc_vacuum_volume.org # as a set of hash maps as a "TubesMap" datatype let st_tubing = @[(63.0, 10.0), (63.0, 51.0), (63.0, 21.5), (25.0, 33.7), (63.0, 20.0), (63.0, 50.0), (40.0, 15.5), (16.0, 13.0), (40.0, 10.0)] let fl_tubing = @[(16.0, 25.0), (16.0, 25.0 ), (16.0, 25.0 ), (16.0, 25.0 ), (16.0, 40.0 ), (25.0, 90.0 ), (25.0, 80.0 ), (40.0, 50.0 ), (16.0, 150.0 ), (40.0, 80.0 ), (40.0, 80.0)] let t_pieces = @[(40.0, 18.0, 21.0), (16.0, 7.0, 4.5), (40.0, 10.0, 10.0)] let crosses = @[(16.0, 10.0), (40.0, 14.0), (40.0, 14.0), (40.0, 14.0)] let t = TubesMap(static_tubes: st_tubing, flexible_tubes: fl_tubing, t_pieces: t_pieces, crosses: crosses) echo "Vacuum tubing is as follows:" echo t return t #+end_src And the actual code using the tubing to calculate possible contamination: #+begin_src nim :tangle code/vacuum_contamination/vacuum_contamination.nim import math import tubing import sequtils, future import typeinfo # This script contains a calculation for the total volume of the # currently in use vacuum system at CAST (behind and including LLNL # telescope) proc cylinder_volume(diameter, length: float): float = # this proc calculates the volume of a cylinder, given a # diameter and a length both in cm result = PI * pow(diameter / 2.0, 2) * length proc t_piece_volume(diameter, length_long, length_short: float): float = # this proc 
calculates the volume of a T shaped vacuum piece, using # the cylinder volume proc # inputs: # diameter: diameter of the tubing in cm # length_long: length of the long axis of the tubing # length_short: length of the short axis of the tubing result = cylinder_volume(diameter, length_long) + cylinder_volume(diameter, length_short - diameter) proc cross_piece_volume(diameter, length: float): float = # this proc calculates the volume of a cross shaped vacuum piece, using # the cylinder volume proc # inputs: # diameter: diameter of the tubing in cm # length: length of one axis of the tubing result = 2 * cylinder_volume(diameter, length) - pow(diameter, 3) proc calcTotalVacuumVolume(t: TubesMap): float = # function which calculates the total vacuum volume, using # the rough measurements of the length and diameters of all the # piping # the TubesMap consists of: # static_tubes : seq[tuple[diameter: float, length: float]] # flexible_tubes : seq[tuple[diameter: float, length: float]] # t_pieces : seq[tuple[diameter: float, length_long: float, length_short: float]] # crosses : seq[tuple[diameter: float, length: float]] # define variables to store static volume etc # calc volume of static tubing let static_vol = sum(map( t.static_tubes, (b: tuple[diameter, length: float]) -> float => cylinder_volume(b.diameter / 10, b.length))) let flexible_vol = sum(map( t.flexible_tubes, (b: tuple[diameter, length: float]) -> float => cylinder_volume(b.diameter / 10, b.length))) let t_vol = sum(map( t.t_pieces, (b: tuple[diameter, length_long, length_short: float]) -> float => t_piece_volume(b.diameter / 10, b.length_long, b.length_short))) let crosses_vol = sum(map( t.crosses, (b: tuple[diameter, length: float]) -> float => cross_piece_volume(b.diameter / 10, b.length))) result = static_vol + flexible_vol + t_vol + crosses_vol proc calcFlowRate(d, p, mu, x: float): float = # this function calculates the flow rate following the Poiseuille Equation # for a non-ideal gas under laminar flow. 
# inputs:
  #   d: diameter of the tube in m
  #   p: pressure difference between both ends of the tube in Pa
  #   mu: dynamic viscosity of the medium
  #   x: length of the tube
  # note: get viscosity e.g. from https://www.lmnoeng.com/Flow/GasViscosity.php
  # returns the flow rate in m^3 / s
  result = PI * pow(d, 4) * p / (128 * mu * x)

proc calcGasAmount(p, V, T: float): float =
  # this function calculates the amount of gas in moles following
  # the ideal gas equation p V = n R T for a given pressure, volume
  # and temperature
  let R = 8.31446
  result = p * V / (R * T)

proc calcVolumeFromMol(p, n, T: float): float =
  # this function calculates the volume in m^3 following
  # the ideal gas equation p V = n R T for a given pressure, amount in mol
  # and temperature
  let R = 8.31446
  result = n * R * T / p

proc main() =
  # TODO: check whether diameter of 63mm for telescope is a reasonable
  # number!
  let t = getVacuumTubing()
  # first of all we need to calculate the total volume of the vacuum
  let volume = calcTotalVacuumVolume(t)
  echo volume
  # now calculate flow rate through pipe
  let
    # 3 mm diameter
    d = 3e-3
    # 6 bar pressure diff
    p = 6.0e5
    # viscosity of air
    mu = 1.8369247e-4
    # ~2m of tubing
    x = 2.0
    flow = calcFlowRate(d, p, mu, x)
  echo(flow * 1e3, " l / s")
  # given the flow in liter, calc total gas inserted into the system
  let flow_l = flow * 1e3
  # detector volume in m^3
  let det_vol = cylinder_volume(12.0, 3.0) * 1e-6
  echo("Detector volume is : ", det_vol)
  # initial gas volume inside detector (1 bar is argon!), thus
  # only .5 bar
  let n_initial = calcGasAmount(0.5e5, det_vol, 293.15)
  # gas which came in after window ruptured
  let valve_open = 5.0
  # total volume in m^3
  let flow_vol = flow_l * 1e-3 * valve_open
  # since the flown volume is given for normal pressure and temp, calc
  # amount of gas
  let n_flow = calcGasAmount(1.0e5, flow_vol, 293.15)
  echo("Initial gas is : ", n_initial, " mol")
  echo("Gas from flow is : ", n_flow, " mol")
  let n_total = n_initial + n_flow
  echo("Total compressed air, which
entered system : ", n_total)
  # calc volume corresponding to normal pressure
  let tot_vol_atm = calcVolumeFromMol(1e5, n_total, 293.15)
  echo("Total volume of air at normal pressure : ", tot_vol_atm * 1e3, " l")

when isMainModule:
  main()
#+end_src

** Data taking woes
:PROPERTIES:
:CUSTOM_ID: sec:cast:data_taking_woes
:END:

In this section we will cover the smaller issues encountered during the data taking. These are worth naming, as they have an impact on the quality of the data as well as on certain aspects of the data analysis. In case someone wishes to analyze the data, they should be aware of them. We will cover each of the effectively three data taking periods one after another.

*** TODOs for this section [/]                                     :noexport:

- [ ] *LATER HAVE SECTION ON FADC NOISE, NOISE DETECTION AND WHAT EVENTS LOOK LIKE IN EACH CASE?*
- [ ] *THIS DOES NOT TALK ABOUT DRIFT OF PEAK IN DATA NOR GAIN VARIATION! TOPIC FOR ANOTHER SECTION*

*** 2017 Oct - Dec
:PROPERTIES:
:CUSTOM_ID: sec:cast:data_taking_woes_2017
:END:

The first data taking period from <2017-10-30 Mon> to <2017-12-22 Fri> initially had a bug in the data acquisition software, which failed to reset the veto scintillator values from one event to the next, if the next one did not have an FADC trigger. In that case in principle the veto scintillators should not have any values other than ~0~. However, as there is a flag in the data readout for whether the FADC triggered at all, this is nowadays handled neatly in the software by only checking the triggers if there was an FADC trigger in the first place. Unfortunately, it was later found that the scintillator triggers were nonsensical in this data taking period due to a firmware bug anyway.

Starting from the solar tracking run on <2017-11-29 Wed> the analogue FADC signals showed significant signs of noise activity. This led to an extremely high effective dead time of the detector, because the FADC triggered almost immediately after the Timepix shutter was opened.
As I was on shift during this tracking, I changed the FADC settings to values which suppressed the noise enough to continue normal data taking. The following changes were made:
- differentiation time reduced from $\SI{50}{ns}$ to $\SI{20}{ns}$
- coarse gain of the main amplifier increased from ~6x~ to ~10x~
Evidently this has a direct effect on the shape of the FADC signals, to be discussed in sec. [[#sec:calibration:fadc_noise]].

On <2017-12-05 Tue>, while trying to investigate the noise problem that had resurfaced the day before despite the different settings, a fuse blew in the gas interlock box. This caused the loss of a solar tracking the next day. The still present FADC noise led me to change the amplification settings more drastically on <2017-12-07 Thu 8:00> during the shift:
- integration time from $\SI{50}{ns}$ to $\SI{100}{ns}$
The same day in the evening the magnet quenched, causing the next day's shift to be missed. In the evening of <2017-12-08 Fri> the integration time was turned down to $\SI{50}{ns}$ again, as the noise issue had subsided. A week later the integration time was finally changed again to $\SI{100}{ns}$. By this time it was clear that there would be no easy fix to the problem and that it was strongly correlated with the magnet activity during a shift. For that reason the setting was kept for the remaining data taking periods.

*** 2018 Feb - Apr
:PROPERTIES:
:CUSTOM_ID: sec:cast:data_taking_woes_2018
:END:

Two days before the data taking period was supposed to start again in 2018, there were issues with the detector behavior with respect to the thresholds and the gain of the GridPixes. During one calibration run with the \cefe source the effective gain dropped further and further, such that instead of $\sim\num{220}$ electrons less than $\sim\num{100}$ were recorded. This turned out to be a grounding issue of the detector relative to the water cooling system. Further, the temperature readout of the detector did not work anymore.
It is unclear what happened exactly, but the female micro USB connector on the detector had a bad solder joint, as was found out after the data taking campaign. It is possible that replugging cables to fix the above mentioned issue caused an already weak connector to fully break. The second data taking period finally started on <2018-02-17 Sat> and ran until <2018-04-17 Tue>. This data taking campaign still ran without functioning scintillators, due to a lack of time and of alternative hardware in Bonn to debug the underlying issue and develop a solution.

*** 2018 Oct - Dec
:PROPERTIES:
:CUSTOM_ID: sec:cast:data_taking_woes_2018_2
:END:

Between the spring and the final data taking campaign the temperature readout as well as the firmware were fixed to get the scintillator triggers working correctly, with the installation done at the end of July 2018. By the time of the start of the actual solar tracking data taking campaign at the end of October, however, a powering issue had appeared. This time the Phoenix connector on the intermediate board had a bad solder joint, which was finally fixed on <2018-10-19 Fri>. Data taking started the day after.

Two runs in mid December showed strong noise on the FADC again. This time no amount of changing amplifier settings had any effect, which is why two runs were taken without the FADC. See runs 298 and 299 in the appendix, tab. [[#sec:appendix:cast_run_list]]. For the last runs it was activated again and no more noise issues appeared.

*** Concluding thoughts about issues

The FADC noise issue was in many ways the most disruptive issue the detector was plagued by. In hindsight, the standard LEMO cable used should have been a properly shielded cable, and someone with more knowledge about RF interference should have assisted in the installation. In a later section, [[#sec:calibration:fadc_noise]], the typical signals recorded by the FADC under noise will be shown, as well as mitigation strategies on the software side.
How the signals and the FADC activation threshold changed due to the new settings will also be presented.

**** TODOs for this section [1/1]                                  :noexport:

- [X] *FIX REFERENCE TO FADC NOISE LATER*
  -> The section exists but is not written as of <2023-10-19 Thu 14:09>.

** X-ray finger runs [/]                                           :noexport:

- [X] should this go to the alignment part? I suppose we could mention this as part of the alignment with a heatmap and the center location _before_ (?) geometer alignment and movement? For context: we can then later refer to this again in the context of determining the systematic uncertainty on the position.
  -> Yes, this is described enough in the alignment part. Note though that we will also need to reference it again for the systematics later, but that's fine.
- [ ] *WHEN WE* rewrite the actual CAST section for the final thesis (not the appendix part) we can rethink whether to have an explicit X-ray finger subsection with a few words about the uncertainty. 2 X-ray finger runs were made. One near beginning, one near end. Plot of X-ray finger centers. Mention how this plays into the analysis side, that it means we need to adjust the ray tracing.
- [X] *ONE RUN NOW PART OF ALIGNMENT, OTHER IS MISSING ON MY LAPTOP*
  -> Both found in [[file:~/CastData/data/XrayFingerRuns/]]

** Summary of CAST data taking
:PROPERTIES:
:CUSTOM_ID: sec:cast:data_taking_campaigns
:END:

In summary, the data taken at CAST with the Septemboard detector can be split into two periods: the first from October 2017 to April 2018 and the second from October 2018 to December 2018. The former will from here on be called "Run-2" and the latter "Run-3". Run-1 refers to the data taking campaign with the single GridPix detector in 2014 and 2015.
The distinction of run periods is mainly based on the fact that the detector was dismounted between Run-2 and Run-3 and additionally a full detector recalibration was performed, meaning the datasets require slightly different parameters for calibration related aspects. During Run-2 the scintillator vetoes were not working correctly and the FADC was partially noisy. In Run-3 all detector features were working as intended. The feature list is summarized in tab. [[tab:cast:features_by_run]].

#+CAPTION: Overview of working (\green{o}), mostly working (\orange{m}), not
#+CAPTION: working (\red{x}) features in each run. FADC was partially noisy
#+CAPTION: in Run-2.
#+NAME: tab:cast:features_by_run
#+ATTR_LATEX: :booktabs t
|-------------+------------+-----------|
| Feature     | Run 2      | Run 3     |
|-------------+------------+-----------|
| Septemboard | \green{o}  | \green{o} |
| FADC        | \orange{m} | \green{o} |
| Veto scinti | \red{x}    | \green{o} |
| SiPM        | \red{x}    | \green{o} |
|-------------+------------+-----------|

Run-2 ran with a Timepix shutter time of ~2/32~ (ref. sec. [[#sec:reco:event_duration]]) resulting in about $\SI{2.4}{s}$ long frames. This was changed at the start of 2018 (still in Run-2) to ~2/30~ ($\sim\SI{2.2}{s}$). In total, 115 of the 120 solar trackings that took place during Run-2 and Run-3 were recorded. Four of the 120 were missed for detector related reasons and one was aborted after 30 minutes of tracking time. This amounts to about $\SI{180}{\hour}$ of tracking data. Further, $\SI{3526}{\hour}$ of background data and $\SI{194}{\hour}$ of \cefe calibration data were recorded. The total active fraction of these times is about $\SI{90}{\percent}$ in both run periods. See tab. [[tab:cast:total_data_time]] for the precise times and fractions of active data taking. Two X-ray finger runs were done for alignment purposes (out of which only one is directly useful).
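The frame lengths quoted above can be reproduced from the shutter settings. The following is a minimal sketch, assuming the usual Timepix/TOS shutter convention (a setting ~n/m~ corresponds to $256^n \cdot m$ units of 46 clock cycles at the $\SI{40}{MHz}$ Timepix clock); this convention is my assumption, not spelled out here, but it reproduces both quoted frame lengths.

#+begin_src python
def shutter_time(n, m, f_clk = 40e6):
    # open shutter length in seconds for a Timepix shutter setting "n/m",
    # assuming 256^n * m counts of 46 clock cycles each at f_clk
    return 256**n * m * 46 / f_clk

print(shutter_time(2, 32))  # 2/32 used in 2017
print(shutter_time(2, 30))  # 2/30 used from 2018 on
#+end_src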
#+CAPTION: Overview of the total data taken with the Septemboard detector at CAST
#+CAPTION: in the time between October 2017 and December 2018. 'Active s.' and
#+CAPTION: 'Active b.' refer to the total solar tracking and background time excluding
#+CAPTION: the dead time due to readout of the Septemboard.
#+CAPTION: See the table below [[tab:cast:data_stats_overview]] for more details.
#+NAME: tab:cast:total_data_time
#+ATTR_LATEX: :align lrrrrr :booktabs t
|       | Solar tracking [h] | Active s. [h] | Background [h] | Active b. [h] | Active [%] |
|-------+--------------------+---------------+----------------+---------------+------------|
| Run-2 |            106.006 |       93.3689 |        2391.16 |       2144.12 |      89.65 |
| Run-3 |            74.2981 |       67.0066 |        1124.93 |       1012.68 |      90.02 |
| Total |           180.3041 |      160.3755 |        3516.09 |       3157.35 |      89.52 |

Aside from the issues mentioned in the previous section [[#sec:cast:data_taking_woes]], the detector generally ran very stably. Certain detector behaviors will be discussed later, which do not affect data quality as they can be calibrated out. Table [[tab:cast:data_stats_overview]] provides a comprehensive overview of different statistics of each data taking period, split by calibration and background / solar tracking data. The appendix [[#sec:appendix:cast_run_list]] lists the full run list with additional information about each run. Further, appendix [[#sec:appendix:occupancy]] shows occupancy maps of the Septemboard for Run-2 and Run-3, showing a mostly homogeneous activity, as one would expect for background data taking. Fig. [[fig:appendix:background_rates:all_cast_data]] in appendix [[#sec:appendix:background_rates]] shows the raw rate of activity on the center chip over the entire CAST data taking period.

#+CAPTION: Overview of the data taken in each of the runs split by calibration data
#+CAPTION: ("calib") and background ("back").
First information about the total recorded #+CAPTION: time and trackings and then event information regarding general activity and #+CAPTION: activity split by chips, FADC and scintillators. Note that the scintillator #+CAPTION: information for Run-2 is not useful, as the signals recorded were not actually #+CAPTION: real signals. #+NAME: tab:cast:data_stats_overview #+ATTR_LATEX: :booktabs t | Field | calib Run-2 | calib Run-3 | back Run-2 | back Run-3 | |--------------------------------+--------------+--------------+---------------+---------------| | total duration | 107.42 h | 87.06 h | 2497.16 h | 1199.22 h | | active duration | 2.6 h | 3.53 h | 2238.78 h | 1079.6 h | | active fraction | 2.422 % | 4.049 % | 89.65 % | 90.02 % | | # trackings | \num{0} | \num{0} | \num{68} | \num{47} | | non tracking time | 107.42 h | 87.06 h | 2391.15 h | 1124.93 h | | active non tracking time | 2.6 h | 3.53 h | 2144.11 h | 1012.67 h | | tracking time | 0 h | 0 h | 106.01 h | 74.3 h | | active tracking time | 0 h | 0 h | 93.36 h | 67 h | |--------------------------------+--------------+--------------+---------------+---------------| | Events | | | | | |--------------------------------+--------------+--------------+---------------+---------------| | total # events | \num{532020} | \num{415927} | \num{3758960} | \num{1837330} | | only center chip | \num{472048} | \num{361244} | \num{21684} | \num{10342} | | only any outer chip | \num{5} | \num{5} | \num{1558546} | \num{744722} | | center + outer | \num{59554} | \num{53499} | \num{1014651} | \num{486478} | | center chip | \num{531602} | \num{414743} | \num{1036335} | \num{496820} | | any chip | \num{531607} | \num{414748} | \num{2594881} | \num{1241542} | | fraction with center | 99.92 % | 99.72 % | 27.57 % | 27.04 % | | fraction with any | 99.92 % | 99.72 % | 69.03 % | 67.57 % | | with fadc readouts | \num{531529} | \num{413853} | \num{542233} | \num{211683} | | fraction with FADC | 99.91 % | 99.50 % | 14.43 % | 11.52 % | | with 
SiPM trigger <4095 | \num{1656} | \num{20} | \num{8585} | \num{4304} | | with veto scinti trigger <4095 | \num{0} | \num{2888} | \num{0} | \num{70016} | | with any SiPM trigger | \num{531528} | \num{1312} | \num{825460} | \num{34969} | | with any veto scinti trigger | \num{0} | \num{216170} | \num{0} | \num{206025} | | fraction with any SiPM | 99.91 % | 0.3154 % | 21.96 % | 1.903 % | | fraction with any veto scinti | 0.000 % | 51.97 % | 0.000 % | 11.21 % | *** TODOs for this section [/] :noexport: - [ ] *PRESENT AN ENERGY SPECTRUM WITHOUT ANY CUTS WHATSOEVER* *ALSO MAYBE OCCUPANCY OR AT LEAST REFERENCE APPENDIX OF OCCUPANCIES!* - [X] *VERIFY EXACT SHUTTER TIMES* - [ ] *UNIFORMLY DECIDE TO EITHER USE ~Run-2~ or ~Run 2~ STYLE* - [ ] table for number of shifts in each run, column for number of trackings - [X] table of temperatures recovered from the shift logs! - [X] number of X-ray finger runs - [X] table of total time taking in each run - [X] link to appendix containing total run list! - [X] table of working detector features in each run - [ ] table of recorded temperatures (or as a plot? Table takes a lot of space! ~Tools/mapSeptemTempToFePeak.nim~) -> Done later when talking about detector activity & time behavior. - [X] numbers of the total activity, number of events on the center & outside chips, # of FADC triggers, event durations etc. - [ ] *THINK ABOUT WHETHER TO INCLUDE FADC NOISE HERE IN STATISTICS* - [X] *MENTION SHUTTER TIMES USED, 2017 2/32, 2018 2/30?* - [ ] plot of event durations already created! -> Not super interesting as a plot, imo. - [ ] *FIX TABLE OF TIME! CODE IN NEXT NOEXPORT SECTION GIVES DIFFERENT NUMBERS THAN THE FIRST TABLE HERE! WHY?* -> Difference (among others) is different number of trackings known. In [[file:~/CastData/ExternCode/TimepixAnalysis/Tools/writeRunList/writeRunList.nim]] the generated run list that's commited to TPA is 115 trackings, whereas here (on voidRipper) only 109 trackings are known. Why? -> *Investigate*! 
- [X] *FIX UP THE TABLE TO NOT USE FIELDS AS ROW NAMES, BUT RATHER TEXT* - [X] *SEE NOEXPORT SECTION BELOW TO THINK ABOUT ADDING OTHER THINGS* - [X] *UPDATE TABLE!!!* -> The numbers here are still the ones before the tracking length * active ratio fix! -> Table from ~writeRunList~ -> Done <2023-11-24 Fri 10:46>. *** Extended table about total time :extended: This, tab. [[tab:cast:total_data_time_extended]], is an extended version (wider + total times) of the table presented in the section above. #+CAPTION: Overview of the total data taken with the Septemboard detector at CAST #+CAPTION: in the time between October 2017 and December 2018. See the table below #+CAPTION: [[tab:cast:data_stats_overview]] for more precise numbers including the #+CAPTION: time the detector was active (shutter open). #+NAME: tab:cast:total_data_time_extended #+ATTR_LATEX: :align lrrrrrrrr :booktabs t | | Solar tracking [h] | Background [h] | Active tracking [h] | Active tracking (eventDuration) [h] | Active background [h] | Total time [h] | Active time [h] | Active [%] | |-------+--------------------+----------------+---------------------+-------------------------------------+-----------------------+----------------+-----------------+------------| | Run-2 | 106.006 | 2391.16 | 93.3689 | 93.3689 | 2144.12 | 2497.16 | 2238.78 | 0.89653046 | | Run-3 | 74.2981 | 1124.93 | 67.0066 | 67.0066 | 1012.68 | 1199.23 | 1079.6 | 0.90024432 | | Total | 180.3041 | 3516.09 | 160.3755 | 160.3755 | 3157.35 | 3706.66 | 3318.38 | 0.89524801 | #+TBLFM: $9=$8/$7 *** Code to compute statistics [10/17] :extended: :PROPERTIES: :CUSTOM_ID: sec:cast:compute_statistics :END: - [ ] code to compute the total run duration of the different parts - [ ] code to produce the outer chip & central chip activity - [ ] produce number of events with FADC trigger - [ ] number of events with scintillator trigger + non trivial triggers (i.e. 
not maximum)

Tools we have for this and related:
- [[~/CastData/ExternCode/TimepixAnalysis/Tools/extractScintiRandomRate.nim]]
  -> works by single run
- [[~/CastData/ExternCode/TimepixAnalysis/Tools/outerChipActivity/outerChipActivity.nim]]
  -> works on full files
- [[~/CastData/ExternCode/TimepixAnalysis/Tools/countNonEmptyFrames/countNonEmptyFrames.nim]]
  -> works on full files
  -> only looks at center chip
- [[~/CastData/ExternCode/TimepixAnalysis/Tools/extractAlphasInBackground/extractAlphasInBackground.nim]]
  -> works on full files
  -> extracts all events with energy > 1 MeV
- [[~/CastData/ExternCode/TimepixAnalysis/Tools/extractScintiTriggers/extractScintiTriggers.nim]]
  -> (super old, still uses plotly)
  -> works on full files
  -> reads scinti trigger values and plots all != 0 & 4095 & and plots all < 300
- [[~/CastData/ExternCode/TimepixAnalysis/Tools/extractSparks/extractSparks.nim]]
  -> works on full files
  -> counts events with more than MinPix hits in a region. Written for Lucian iirc
- [[~/CastData/ExternCode/TimepixAnalysis/Tools/mapSeptemTempToFePeak.nim]]
  -> works on full files (all calibration)
  -> maps peak position by fit parameter to temperature from ~/resources~ directory
- [[~/CastData/ExternCode/TimepixAnalysis/Tools/writeRunList/writeRunList.nim]]
  -> works on full files
  -> uses the ~ExtendedRunInfo~ type. Also outputs tracking & non tracking duration already.
- [ ] *CREATE PLOT OF DURATIONS, HIGHLIGHT 2017 USED 2/32 WHILE 2018 2/30*

Combined, the above gives us more than we need. The one thing missing (outside
of details, like computing rates instead of numbers etc.) is _maybe_ FADC
related, i.e. the number of noisy events. But we haven't even talked about
noisy events, so I'm not sure if this is the right place for that anyway.
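As a standalone sanity check of the derived columns in the statistics tables
above, the numbers can be recomputed from the primary ones. This is a small
illustrative Python sketch (not part of the TPA tooling), using the values of
the "back Run-2" column:

```python
# Recompute derived columns of the statistics table from the primary numbers,
# here for the "back Run-2" column (values copied from the table above).
total_h = 2497.16     # total duration
active_h = 2238.78    # active duration (shutter open)
tracking_h = 106.006  # solar tracking time

non_tracking_h = total_h - tracking_h          # total minus tracking time
active_fraction = active_h / total_h * 100.0   # in %

print(f"non tracking time: {non_tracking_h:.2f} h")   # -> 2391.15 h, as in the table
print(f"active fraction:   {active_fraction:.2f} %")  # -> 89.65 %, as in the table
```

The active fraction matches the ~89.65 % quoted for Run-2, equivalent to the
0.89653046 computed by the ~#+TBLFM~ formula of the extended table.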
So the information we want: - [X] total duration (sum of all runs, background + calibration) - [X] total active duration (event durations) - [X] total # of trackings - [X] total tracking time - [X] total active tracking time - [ ] total # of events for each chip (non empty ones, & split between center & outside) - [X] fraction of events with only center / only outer chips - [X] total # of FADC triggers - [X] total # of scintillator triggers (run 3, by scintillator) - [X] total # of non trivial scintillator triggers - [ ] total # of non trivial center chip events w/o FADC trigger (some runs don't have FADC though) ? - ? Better to just write a new piece of code that extracts exactly what we need using ~ExtendedRunInfo~. #+begin_src nim :tangle code/cast_run_information.nim # 1. open file # 2. get file info # 3. for each run get extended run info # ? import std / [times, strformat, strutils] import nimhdf5, unchained import ingrid / tos_helpers type CastInformation = object totalDuration: Second activeDuration: Second activeFraction: float # ratio of active / total numTrackings: int nonTrackingDuration: Second activeNonTrackingTime: Second trackingTime: Second activeTrackingTime: Second totalEvents: int # total number of recorded events onlyCenter: int # events with activity only on center chip (> 3 hits) onlyOuter: int # events with activity only on outer, but not center chip centerAndOuter: int # events with activity on center & any outer chip center: int # events with activity on center (irrespective any other) anyActive: int # events with any active chip fractionWithCenter: float # fraction of events that have center chip activity fractionWithAny: float # fraction of events that have any activity # ... add mean of event durations? 
fadcReadouts: int fractionFadc: float # fraction of events having FADC readout scinti1NonTrivial: int # number of non trivial scinti triggers 0 < x < 4095 scinti2NonTrivial: int # number of non trivial scinti triggers 0 < x < 4095 scinti1Triggers: int # number of any scinti triggers != 0 scinti2Triggers: int # number of any scinti triggers != 0 fractionScinti1: float # fraction of events with any scinti 1 activity fractionScinti2: float # fraction of events with any scinti 2 activity proc fieldToStr(s: string): string = case s of "totalDuration": result = "total duration" of "activeDuration": result = "active duration" of "activeFraction": result = "active fraction" of "numTrackings": result = "# trackings" of "nonTrackingDuration": result = "non tracking time" of "activeNonTrackingTime": result = "active non tracking time" of "trackingTime": result = "tracking time" of "activeTrackingTime": result = "active tracking time" of "totalEvents": result = "total # events" of "center": result = "center chip" of "onlyCenter": result = "only center chip" of "onlyOuter": result = "only any outer chip" of "centerAndOuter": result = "center + outer" of "anyActive": result = "any chip" of "fractionWithCenter": result = "fraction with center" of "fractionWithAny": result = "fraction with any" of "fadcReadouts": result = "with fadc readouts" of "fractionFadc": result = "fraction with FADC" of "scinti1NonTrivial": result = "with SiPM trigger <4095" of "scinti2NonTrivial": result = "with veto scinti trigger <4095" of "scinti1Triggers": result = "with any SiPM trigger" of "scinti2Triggers": result = "with any veto scinti trigger" of "fractionScinti1": result = "fraction with any SiPM" of "fractionScinti2": result = "fraction with any veto scinti" proc `$`(castInfo: CastInformation): string = result.add &"Total duration: {pretty(castInfo.totalDuration.to(Hour), 4, true)}\n" result.add &"Active duration: {pretty(castInfo.activeDuration.to(Hour), 4, true)}\n" result.add &"Active 
fraction: {castInfo.activeFraction}\n" result.add &"Number of trackings: {castInfo.numTrackings}\n" result.add &"Non-tracking time: {pretty(castInfo.nonTrackingDuration.to(Hour), 4, true)}\n" result.add &"Active non-tracking time: {pretty(castInfo.activeNonTrackingTime.to(Hour), 4, true)}\n" result.add &"Tracking time: {pretty(castInfo.trackingTime.to(Hour), 4, true)}\n" result.add &"Active tracking time: {pretty(castInfo.activeTrackingTime.to(Hour), 4, true)}\n" result.add &"Number of total events: {castInfo.totalEvents}\n" result.add &"Number of events without center: {castInfo.onlyOuter}\n" result.add &"\t| {(castInfo.onlyOuter.float / castInfo.totalEvents.float) * 100.0} %\n" result.add &"Number of events only center: {castInfo.onlyCenter}\n" result.add &"\t| {(castInfo.onlyCenter.float / castInfo.totalEvents.float) * 100.0} %\n" result.add &"Number of events with center activity and outer: {castInfo.centerAndOuter}\n" result.add &"\t| {(castInfo.centerAndOuter.float / castInfo.totalEvents.float) * 100.0} %\n" result.add &"Number of events any hit events: {castInfo.anyActive}\n" result.add &"\t| {(castInfo.anyActive.float / castInfo.totalEvents.float) * 100.0} %\n" proc countEvents(df: DataFrame): int = for (tup, subdf) in groups(df.group_by("runNumber")): inc result, subDf["eventNumber", int].max proc contains[T](t: Tensor[T], x: T): bool = for i in 0 ..< t.size: if x == t[i]: return true proc countChipActivity(castInfo: var CastInformation, df: DataFrame) = for (tup, subDf) in groups(df.group_by(["eventNumber", "runNumber"])): let chips = subDf["chip"].unique.toTensor(int) if 3 in chips: inc castInfo.center # start new if if 3 notin chips: inc castInfo.onlyOuter elif [3].toTensor == chips: inc castInfo.onlyCenter elif 3 in chips and chips.len > 1: inc castInfo.centerAndOuter inc castInfo.anyActive proc processFile(fname: string): CastInformation = # extend to both calib & both background let h5f = H5open(fname, "r") let fileInfo = getFileInfo(h5f) var 
castInfo: CastInformation for run in fileInfo.runs: let runInfo = getExtendedRunInfo(h5f, run, fileInfo.runType) castInfo.totalDuration += runInfo.timeInfo.t_length.inSeconds().Second castInfo.activeDuration += runInfo.activeTime.inSeconds.Second castInfo.nonTrackingDuration += runInfo.nonTrackingDuration.inSeconds.Second castInfo.activeNonTrackingTime += runInfo.activeNonTrackingTime.inSeconds.Second castInfo.numTrackings += runInfo.trackings.len castInfo.trackingTime += runInfo.trackingDuration.inSeconds.Second castInfo.activeTrackingTime += runInfo.activeTrackingTime.inSeconds.Second # read the data of all chips & FADC const names = ["eventNumber", "fadcReadout", "szint1ClockInt", "szint2ClockInt"] let dfNoChips = h5f.readRunDsets(run, commonDsets = names) let dfChips = h5f.readRunDsetsAllChips(run, fileInfo.chips, dsets = @[]) # don't need additional dsets castInfo.totalEvents += dfNoChips.countEvents() castInfo.countChipActivity(dfChips) castInfo.fadcReadouts += dfNoChips.filter(f{`fadcReadout` == 1}).len castInfo.scinti1Triggers += dfNoChips.filter(f{`szint1ClockInt` != 0}).len castInfo.scinti2Triggers += dfNoChips.filter(f{`szint2ClockInt` != 0}).len castInfo.scinti1NonTrivial += dfNoChips.filter(f{`szint1ClockInt` != 0 and `szint1ClockInt` < 4095}).len castInfo.scinti2NonTrivial += dfNoChips.filter(f{`szint2ClockInt` != 0 and `szint2ClockInt` < 4095}).len # compute at the end as we need total information about fraction of total / active template fraction(arg, by: untyped): untyped = (castInfo.arg / castInfo.by) * 100.0 castInfo.activeFraction = fraction(activeDuration, totalDuration) # fractions castInfo.fractionWithCenter = fraction(center , totalEvents) castInfo.fractionWithAny = fraction(anyActive , totalEvents) castInfo.fractionFadc = fraction(fadcReadouts , totalEvents) castInfo.fractionScinti1 = fraction(scinti1Triggers, totalEvents) castInfo.fractionScinti2 = fraction(scinti2Triggers, totalEvents) echo castInfo result = castInfo proc 
toTable(castInfos: Table[(string,string), CastInformation]): string = ## Turns the input into an Org table # | Field | Back Run-2 | Back Run-3 | Calib Run-2 | Calib Run-3 | # |- # ... # turn the input into a DF, then `toOrgTable` it proc toColName(tup: (string, string)): string = result = tup[1] & " " if "2017" in tup[0]: result.add "Run-2" else: result.add "Run-3" var df = newDataFrame() for k, v in pairs(castInfos): var fields = newSeq[string]() var vals = newSeq[string]() for field, val in fieldPairs(v): fields.add field.fieldToStr() when typeof(val) is Second: vals.add pretty(val.to(Hour), precision = 2, short = true, format = ffDecimal) elif typeof(val) is float: vals.add $(val.formatFloat(precision = 4)) & " %" else: vals.add "\\num{" & $val & "}" let colName = k.toColName() let dfLoc = toDf({"Field" : fields, colName : vals}) if df.len == 0: df = dfLoc else: df[colName] = dfLoc[colName] df = df.select(["Field", "calib Run-2", "calib Run-3", "back Run-2", "back Run-3"]) echo df.toOrgTable(emphStrNumber = false) proc main(background: seq[string], calibration: seq[string]) = var tab = initTable[(string, string), CastInformation]() for b in background: echo "--------------- Processing: ", b, " ---------------" tab[(b, "back")] = processFile(b) for c in calibration: echo "--------------- Processing: ", c, " ---------------" tab[(c, "calib")] = processFile(c) echo tab echo tab.toTable() when isMainModule: import cligen dispatch main #+end_src | Field | calib Run-2 | calib Run-3 | back Run-2 | back Run-3 | |--------------------------------+--------------+--------------+---------------+---------------| | total duration | 107.42 h | 87.06 h | 2507.43 h | 1199.22 h | | active duration | 2.6 h | 3.53 h | 2238.78 h | 1079.6 h | | active fraction | 2.422 % | 4.049 % | 89.29 % | 90.02 % | | # trackings | \num{0} | \num{0} | \num{68} | \num{47} | | tracking time | 0 h | 0 h | 106.01 h | 74.3 h | | active tracking time | 0 h | 0 h | 94.65 h | 66.89 h | | total # events | 
\num{532020} | \num{415927} | \num{3758960} | \num{1837330} | | only center chip | \num{480232} | \num{366917} | \num{23820} | \num{9462} | | only any outer chip | \num{7} | \num{5} | \num{1557934} | \num{741199} | | center + outer | \num{51368} | \num{47825} | \num{960499} | \num{460726} | | center chip | \num{531600} | \num{414742} | \num{984319} | \num{470188} | | any chip | \num{531607} | \num{414747} | \num{2542253} | \num{1211387} | | fraction with center | 99.92 % | 99.72 % | 26.19 % | 25.59 % | | fraction with any | 99.92 % | 99.72 % | 67.63 % | 65.93 % | | with fadc readouts | \num{531529} | \num{413853} | \num{542233} | \num{211683} | | fraction with FADC | 99.91 % | 99.50 % | 14.43 % | 11.52 % | | with SiPM trigger <4095 | \num{1656} | \num{20} | \num{8585} | \num{4304} | | with veto scinti trigger <4095 | \num{0} | \num{2888} | \num{0} | \num{70016} | | with any SiPM trigger | \num{531528} | \num{1312} | \num{825460} | \num{34969} | | with any veto scinti trigger | \num{0} | \num{216170} | \num{0} | \num{206025} | | fraction with any SiPM | 99.91 % | 0.3154 % | 21.96 % | 1.903 % | | fraction with any veto scinti | 0.000 % | 51.97 % | 0.000 % | 11.21 % | - [X] generate that table... then onto calibration finally! 
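The chip occupancy classification done in ~countChipActivity~ above can be
sketched compactly in Python. The event data below is hypothetical (chip 3 is
the center chip of the Septemboard); only the branching logic mirrors the Nim
code:

```python
def classify(events):
    """Count events by chip occupancy. `events` maps an event number to the
    set of chips (0..6) with activity; chip 3 is the center chip."""
    counts = {"center": 0, "onlyCenter": 0, "onlyOuter": 0,
              "centerAndOuter": 0, "any": 0}
    for chips in events.values():
        if 3 in chips:                    # any center chip activity
            counts["center"] += 1
        if 3 not in chips:                # only outer chips hit
            counts["onlyOuter"] += 1
        elif chips == {3}:                # exclusively the center chip
            counts["onlyCenter"] += 1
        else:                             # center plus at least one outer chip
            counts["centerAndOuter"] += 1
        counts["any"] += 1
    return counts

# hypothetical events: 0 -> only center, 1 -> only outer, 2 -> center + outer
print(classify({0: {3}, 1: {1, 5}, 2: {2, 3}}))
```

Note that "center", "onlyCenter" and "centerAndOuter" overlap by construction,
which is why the table rows "center chip" and "only center chip" differ.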
#+begin_src sh code/cast_run_information \ ~/CastData/data/DataRuns2017_Reco.h5 \ ~/CastData/data/DataRuns2018_Reco.h5 \ -c ~/CastData/data/CalibrationRuns2017_Reco.h5 \ -c ~/CastData/data/CalibrationRuns2018_Reco.h5 #+end_src | Field | calib Run-2 | calib Run-3 | back Run-2 | back Run-3 | |--------------------------------+--------------+--------------+---------------+---------------| | total duration | 107.42 h | 87.06 h | 2497.16 h | 1199.22 h | | active duration | 2.6 h | 3.53 h | 2238.78 h | 1079.6 h | | active fraction | 2.422 % | 4.049 % | 89.65 % | 90.02 % | | # trackings | \num{0} | \num{0} | \num{68} | \num{47} | | non tracking time | 107.42 h | 87.06 h | 2391.15 h | 1124.93 h | | active non tracking time | 2.6 h | 3.53 h | 2144.11 h | 1012.67 h | | tracking time | 0 h | 0 h | 106.01 h | 74.3 h | | active tracking time | 0 h | 0 h | 93.36 h | 67 h | | total # events | \num{532020} | \num{415927} | \num{3758960} | \num{1837330} | | only center chip | \num{472048} | \num{361244} | \num{21684} | \num{10342} | | only any outer chip | \num{5} | \num{5} | \num{1558546} | \num{744722} | | center + outer | \num{59554} | \num{53499} | \num{1014651} | \num{486478} | | center chip | \num{531602} | \num{414743} | \num{1036335} | \num{496820} | | any chip | \num{531607} | \num{414748} | \num{2594881} | \num{1241542} | | fraction with center | 99.92 % | 99.72 % | 27.57 % | 27.04 % | | fraction with any | 99.92 % | 99.72 % | 69.03 % | 67.57 % | | with fadc readouts | \num{531529} | \num{413853} | \num{542233} | \num{211683} | | fraction with FADC | 99.91 % | 99.50 % | 14.43 % | 11.52 % | | with SiPM trigger <4095 | \num{1656} | \num{20} | \num{8585} | \num{4304} | | with veto scinti trigger <4095 | \num{0} | \num{2888} | \num{0} | \num{70016} | | with any SiPM trigger | \num{531528} | \num{1312} | \num{825460} | \num{34969} | | with any veto scinti trigger | \num{0} | \num{216170} | \num{0} | \num{206025} | | fraction with any SiPM | 99.91 % | 0.3154 % | 21.96 % | 1.903 
% | | fraction with any veto scinti | 0.000 % | 51.97 % | 0.000 % | 11.21 % | **** Cross check with ~writeRunList~ and regenerate short table :PROPERTIES: :CUSTOM_ID: sec:cast:data_taking_campaigns:gen_total_time_table :END: Cross check with logic in ~writeRunList~: -> *Yes, they match*, <2023-11-23 Thu>. The above commands produce this table (after creating the active column and total sum row): | | Solar tracking [h] | Background [h] | Active tracking [h] | Active tracking (eventDuration) [h] | Active background [h] | Total time [h] | Active time [h] | Active [%] | |-------+--------------------+----------------+---------------------+-------------------------------------+-----------------------+----------------+-----------------+------------| | Run-2 | 106.006 | 2391.16 | 93.3689 | 93.3689 | 2144.12 | 2497.16 | 2238.78 | 0.89653046 | | Run-3 | 74.2981 | 1124.93 | 67.0066 | 67.0066 | 1012.68 | 1199.23 | 1079.6 | 0.90024432 | | Total | 180.3041 | 3516.09 | 160.3755 | 160.3755 | 3157.35 | 3706.66 | 3318.38 | 0.89524801 | #+TBLFM: $9=$8/$7 #+begin_src sh ./writeRunList -b ~/CastData/data/DataRuns2017_Reco.h5 -c ~/CastData/data/CalibrationRuns2017_Reco.h5 --runList /tmp/runList_2017.org #+end_src total duration: 14 weeks, 6 days, 1 hour, 9 minutes, 53 seconds, 97 milliseconds, 615 microseconds, and 921 nanoseconds In hours: 2497.1647493375 active duration: 2238.783333333333 trackingDuration: 4 days, 10 hours, and 20 seconds In hours: 106.0055555555556 active tracking duration: 93.36890683888889 active tracking duration from event durations: 93.36890683861111 nonTrackingDuration: 14 weeks, 1 day, 15 hours, 9 minutes, 33 seconds, 97 milliseconds, 615 microseconds, and 921 nanoseconds In hours: 2391.159193781944 active background duration: 2144.122404722222 | Solar tracking [h] | Background [h] | Active tracking [h] | Active tracking (eventDuration) [h] | Active background [h] | Total time [h] | Active time [h] | 
|--------------------+----------------+---------------------+-------------------------------------+-----------------------+----------------+-----------------| | 106.006 | 2391.16 | 93.3689 | 93.3689 | 2144.12 | 2497.16 | 2238.78 | total duration: 4 days, 11 hours, 25 minutes, 20 seconds, 453 milliseconds, 596 microseconds, and 104 nanoseconds In hours: 107.4223482211111 active duration: 2.601388888888889 trackingDuration: 0 nanoseconds In hours: 0.0 active tracking duration: 0.0 active tracking duration from event durations: 0.0 nonTrackingDuration: 4 days, 11 hours, 25 minutes, 20 seconds, 453 milliseconds, 596 microseconds, and 104 nanoseconds In hours: 107.4223482211111 active background duration: 2.601391883888889 | Solar tracking [h] | Background [h] | Active tracking [h] | Active tracking (eventDuration) [h] | Active background [h] | Total time [h] | Active time [h] | |--------------------+----------------+---------------------+-------------------------------------+-----------------------+----------------+-----------------| | 0 | 107.422 | 0 | 0 | 2.60139 | 107.422 | 2.60139 | #+begin_src sh ./writeRunList -b ~/CastData/data/DataRuns2018_Reco.h5 -c ~/CastData/data/CalibrationRuns2018_Reco.h5 --runList /tmp/runList_2018.org #+end_src total duration: 7 weeks, 23 hours, 13 minutes, 35 seconds, 698 milliseconds, 399 microseconds, and 775 nanoseconds In hours: 1199.226582888611 active duration: 1079.598333333333 trackingDuration: 3 days, 2 hours, 17 minutes, and 53 seconds In hours: 74.29805555555555 active tracking duration: 67.00656808194445 active tracking duration from event durations: 67.00656808222222 nonTrackingDuration: 6 weeks, 4 days, 20 hours, 55 minutes, 42 seconds, 698 milliseconds, 399 microseconds, and 775 nanoseconds In hours: 1124.928527333056 active background duration: 1012.677445774444 | Solar tracking [h] | Background [h] | Active tracking [h] | Active tracking (eventDuration) [h] | Active background [h] | Total time [h] | Active time [h] | 
|--------------------+----------------+---------------------+-------------------------------------+-----------------------+----------------+-----------------|
| 74.2981 | 1124.93 | 67.0066 | 67.0066 | 1012.68 | 1199.23 | 1079.6 |

total duration: 3 days, 15 hours, 3 minutes, 47 seconds, 557 milliseconds, 131 microseconds, and 279 nanoseconds
In hours: 87.06321031416667
active duration: 3.525555555555556
trackingDuration: 0 nanoseconds
In hours: 0.0
active tracking duration: 0.0
active tracking duration from event durations: 0.0
nonTrackingDuration: 3 days, 15 hours, 3 minutes, 47 seconds, 557 milliseconds, 131 microseconds, and 279 nanoseconds
In hours: 87.06321031416667
active background duration: 3.525561761944445

| Solar tracking [h] | Background [h] | Active tracking [h] | Active tracking (eventDuration) [h] | Active background [h] | Total time [h] | Active time [h] |
|--------------------+----------------+---------------------+-------------------------------------+-----------------------+----------------+-----------------|
| 0 | 87.0632 | 0 | 0 | 3.52556 | 87.0632 | 3.52556 |

* Data calibration :Calibration:
:PROPERTIES:
:CUSTOM_ID: sec:calibration
:END:

#+LATEX: \minitoc

With the roughly $\SI{3500}{h}$ of data recorded at CAST, it is time to discuss
the final calibrations [fn:calibration_term] necessary for the calculation of a
physics result. On the side of the Septemboard detector this means the 'energy
calibration', sec. [[#sec:calibration:energy]]: the calculation of the energy
of each recorded event. This necessarily includes a discussion of detector
variability, both due to external factors and due to differing detector
calibrations and setups, sec. [[#sec:calib:detector_behavior_over_time]]. Here
we provide the reasoning for the choices leading to the final energy
calibration, presented in sec. [[#sec:calib:final_energy_calibration]].
Similarly, for the FADC the impact of the noise seen during data taking and the
resulting different amplifier settings will be discussed in sec.
[[#sec:calib:fadc]].

[fn:calibration_term] Note that 'calibration' is a heavily loaded term,
implying very different things depending on context. This can at times be
confusing. I try to be explicit by fully specifying which calibration is meant
when it might be ambiguous.

** Thoughts and TODOs [/] :noexport:

This was our initial introduction for this chapter, but it seems outdated.
#+begin_quote
There are two different kinds of calibrations used for the data taking campaign
at CAST. One is a data taking campaign behind an X-ray tube at the CAST
Detector Lab (CDL) to characterize the geometric properties of X-rays at
different energies (as the foundation for discrimination methods), discussed in
section [[#sec:cdl]]. The second are measurements using a \cefe source
installed on a pneumatic manipulator at CAST to perform regular calibration
runs to monitor the detector behavior and serve as a basis for the energy
calibration of events, see section [[#sec:preparation:55fe]].

*NOTE*: Should we therefore maybe split up the section rather by type of
calibration? Certainly clearer that way. Only question is how to best present
the other calibrations then (FADC, scintillators).
#+end_quote

- [ ] *INSERT REFERENCES TO SECTIONS!*
- [X] *RENAME* this to something like "data calibration"? -> Yup.
- [X] *MAYBE THIS CHAPTER CAN NOT ONLY CONTAIN STRICT CALIBRATIONS, BUT ALSO
  THINGS LIKE GAS GAIN TIME BINNING?* -> Yup.

On the one hand:
- [ ] talk about 55 fe calibration -> energy calibration
- [ ] talk about CDL -> definition of likelihood
On the other hand:
- [ ] information about FADC data (i.e. spectrum etc)
- [ ] detector behavior over time
- [ ] ...

How do these two things go together? In some sense the CDL data is "less
important" or "later staged data", because it is _only relevant_ for the
determination of the background rate.
All the other mentioned things are already relevant for the understanding of
the data itself, i.e. how does the detector behave, what do we see in the
calibration / background, etc. I would propose the following for now: We start
writing about the general data related parts, i.e. anything but CDL. The CDL
can be mentioned afterwards as the motivation for "how do we even get a
background rate from all this?"

So then, where do we start? FADC first or GridPix?
-> GridPix first.

** Energy calibration - in principle
:PROPERTIES:
:CUSTOM_ID: sec:calibration:energy
:END:

The reconstructed data from the GridPixes, as described in chapter
[[#sec:reco:data_reconstruction]] (cluster finding, cluster reconstruction and
charge calibration), still needs to be calibrated in energy. The charge
calibration [[#sec:operation_calibration:tot_calibration]] computes the number
of electrons recorded on each GridPix pixel in an event from the ~ToT~ counts.
In order to calculate an equivalent energy based on a certain amount of charge
-- which depends on the gas gain -- the data recorded using the \cefe
calibration source at CAST is used. As the \cefe spectrum (see sec.
[[#sec:theory:escape_peaks_55fe]]) has a photopeak at $\SI{5.9}{keV}$ and an
escape peak at $\SI{2.9}{keV}$, it provides two different lines relating
charges to energies for calibration. While the charge calibration for each
pixel from ~ToT~ to electrons is non-linear, the relation between energy and
recorded charge is linear. The position of the two peaks in the \cefe spectrum
needs to be determined precisely, which is done using a double gaussian fit
#+NAME: eq:calib:fe55_charge_fit_function
\begin{equation}
f(N_e, μ_e, σ_e, N_p, μ_p, σ_p) = G^{\text{esc}}_{\text{K}_{α}}(N_e,μ_e,σ_e) + G_{\text{K}_{α}}(N_p,μ_p,σ_p),
\end{equation}
where $G$ is a regular gaussian, one for the escape peak $G^{\text{esc}}$ and
one for the photopeak $G$. An example spectrum with such a fit can be seen in
fig.
[[sref:fig:calib:fe55_example_fit_spectrum]].

# "~/phd/Figs/energyCalibration/run_149/fe_spec_run_149_chip_3_charge.pdf"))
# "~/phd/playground/Figs/run_149_2023-10-31_18-44-29/fe_spec_run_149_chip_3_charge.pdf"))
#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "\\cefe spectrum")
  (label "fig:calib:fe55_example_fit_spectrum")
  (includegraphics (list (cons 'width (linewidth 1.0)))
   "~/phd/Figs/energyCalibration/run_149/fe_spec_run_149_chip_3_charge.pdf"))
 (subfigure (linewidth 0.5)
  (caption "Energy calibration")
  (label "fig:calib:fe55_example_energy_calib")
  (includegraphics (list (cons 'width (linewidth 1.0)))
   "~/phd/Figs/energyCalibration/run_149/energy_calib_run_149_charge.pdf"))
 (caption (subref "fig:calib:fe55_example_fit_spectrum")
  ": Fit to a \\cefe calibration run from the CAST data (run 149) using a double gaussian fit. "
  (subref "fig:calib:fe55_example_energy_calib")
  ": Linear fit to the escape and photopeak energies to relate charges in electrons to energies in "
  ($ "\\si{keV}") ".")
 (label "fig:calib:fe55_calibration"))
#+end_src

Then, a linear function without y-offset
\[ Q(E) = m_c · E \]
is fitted to the peak positions found in the charge spectra, relating the
charge $Q$ of each peak to its known energy $E$ in the \cefe spectrum. An
example of this fit is shown in fig. sref:fig:calib:fe55_example_energy_calib.
This yields the calibration factor, $a = m_c⁻¹$, which can be used to calibrate
all events recorded at _the same_ gas gain. Over the time of data taking at
CAST the gas gain varies by a significant margin, requiring a more complex
calibration routine, as the calibration factor would otherwise produce too
imprecise energy values (for example, if each \cefe calibration run were used
to deduce one calibration factor $a = m_c⁻¹$ to be applied to the closest
background data in time). Fortunately, the gas gain can be computed from raw
data without evaluating any physical events, which allows computing it also for
raw background data.
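The linear calibration described above has a closed-form least-squares
solution, since $Q(E) = m_c · E$ has no offset. A small illustrative Python
sketch; the peak charges below are made-up placeholder values (chosen exactly
proportional to the energies so the result is round), whereas in the real
analysis they come from the double gaussian fits to the spectra:

```python
# Least-squares fit of Q(E) = m_c * E (no y-offset) to the two 55Fe peaks.
peaks_keV = [2.9, 5.9]         # escape peak and photopeak energies
peaks_charge = [290e3, 590e3]  # fitted peak positions in electrons (hypothetical)

# closed-form least squares for a line through the origin: m = sum(Q*E)/sum(E^2)
m_c = sum(q * e for q, e in zip(peaks_charge, peaks_keV)) / sum(e * e for e in peaks_keV)
a = 1.0 / m_c                  # calibration factor in keV per electron

# energy of a cluster with total charge Q, valid at the same gas gain
Q = 400e3
print(f"E = {a * Q:.2f} keV")  # -> E = 4.00 keV
```

With the placeholder numbers $m_c = \num{100000}$ electrons per keV, so a
cluster of $\num{400000}$ electrons maps to $\SI{4}{keV}$.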
This motivates the idea of mapping each gas gain to the calibration factor
needed to calibrate the energy of events recorded at that gas gain. Taking a
certain time interval in which the detector gas gain is assumed constant, the
gas gain of all time slices of this length is computed for background and
calibration data. For all time slices in the calibration data the procedure
above -- fitting the \cefe spectrum and calculating the energy calibration --
is performed. A higher gas gain leads to linearly more recorded electrons in
the \cefe spectra. Therefore, all energy calibration factors determined from
different time intervals should lie on a line as a function of the gas gain. As
such, a final fit
#+NAME: eq:gas_gain_vs_calib_factor
\begin{equation}
a(G) = m_g · G + b
\end{equation}
is performed over all time intervals of all calibration runs. This yields the
energy calibration factor $a$ valid for a given gas gain $G$. Then, in order to
calibrate the energy of a given cluster in the background data, the same time
slicing is performed and one gas gain is calculated for each slice. The gas
gain is inserted into the fit and the resulting calibration factor is used to
calibrate the energy of every cluster in that time slice. We will come back to
this fit in sec. [[#sec:calib:final_energy_calibration]] to finalize the energy
calibration. The remaining question is the stability of the gas gain over time,
which we will look at next in the context of the general detector behavior over
time. This allows us to find a suitable time interval to use for all data and
hence perform a temporally stable energy calibration.

# In sec. [[#sec:calib:final_energy_calibration]] we will see the final
# calibrations based on the chosen time interval.

*** TODOs for this section [/] :noexport:
- [ ] Mention something about section [[#sec:large_events_few_pixels_tot]] in
  =StatusAndProgress=, i.e. events that convert very close to the grid and thus
  have few pixels, but a lot of energy as determined by the ~ToT~.
This is an argument in favor of using ~ToT~ over #pix. - [X] *TALK ABOUT DETECTOR BEHAVIOR OVER TIME, REFER TO SUBSECTION* - [ ] *INSERT REFERENCES TO THE FITS IN CODE* - [ ] *CLARIFY WHAT IS BEING FIT / UNITS OF FIT FUNCTION / CLARIFY PIXEL VS CHARGE FITS / CHECK RELATION a to m IS CORRECT* -> maybe add pixel fit only as a footnote? - [ ] *INSERT PLOT OF ENERGY CALIBRATION FIT?* -> Maybe as a side-by-side? - [X] *INSERT A 55 FE FIT W/ FIT PARAMETERS SHOWN* - [ ] *UPDATE PLOT* *** Generate example plot for \cefe spectrum :extended: :PROPERTIES: :CUSTOM_ID: sec:calib:energy_gen_example_cefe :END: We use run number 149 (for no important reason) as an example calibration run. Desktop: #+begin_src sh raw_data_manipulation \ -p /mnt/4TB/CAST/Data/2018/CalibrationRuns/Run_149_180219-17-25.tar.gz \ -r calib \ -o /tmp/run_149.h5 #+end_src Laptop: #+begin_src sh raw_data_manipulation \ -p /mnt/1TB/CAST/2018/CalibrationRuns/Run_149_180219-17-25.tar.gz \ -r calib \ -o /tmp/run_149.h5 #+end_src We overwrite the default to use TikZ output via an environment variable here just to make sure it is set independent of the ~config.toml~ file. #+begin_src sh reconstruction -i /tmp/run_149.h5 --out /tmp/reco_149.h5 --useTeX=true --plotOutPath ~/phd/Figs/energyCalibration/run_149/ #+end_src This produces the following plots: - [[~/phd/Figs/energyCalibration/run_149/energy_calib_run_149.pdf]] - [[~/phd/Figs/energyCalibration/run_149/energy_calib_run_149_charge.pdf]] - [[~/phd/Figs/energyCalibration/run_149/fe_spec_run_149_chip_3.pdf]] - [[~/phd/Figs/energyCalibration/run_149/fe_spec_run_149_chip_3_charge.pdf]] *** On ~ToT~ vs. ~ToA~ for a Timepix1 detector :extended: This is a good point to comment on the choice of using all pixels in the CAST data taking to record ~ToT~ values. 
One might argue that due to the single electron efficiency of GridPix
detectors it would have been a good idea to either record only ~ToA~ values
for all pixels so as to have access to time information (yielding
longitudinal information about events) or at least use a checkerboard pattern
with half the pixels recording ~ToT~ and half ~ToA~ values. There are two
major issues with that (outside of the fact that at the time of making these
choices I was not in a position to make an educated choice anyway):
1. The ~ToA~ counter, as far as I'm aware, is too short on the Timepix1 for
   CAST-like shutter times. See ref. [[cite:&lupberger2016pixel]] page 30,
   but the gist is that Timepix1 pixels can count to 11810. At a clock
   frequency of $\SI{40}{MHz}$ this only yields a time window of
   $\SI{295}{μs}$ for ~ToA~ values. For shutter lengths on the order of
   seconds such short ~ToA~ counters would overflow pretty much always.
2. Ignoring the practical limitation of 1, which may or may not be possible
   to circumvent in some way or another, there is a separate problem: single
   electron efficiency is an idealized approximation of reality. Either for
   higher energies or in rare cases -- which are extremely important for low
   rate experiments like CAST, where "rare" is precisely what matters for the
   selection of candidates! -- conversion of photons can happen very close to
   the grid. In those cases _many_ primary electrons will enter single holes,
   resulting in events with very few pixels but very high charges. See
   sec. [[#sec:large_events_few_pixels_tot]] below.
Fortunately, we do have the FADC signal to get at least some time information
regardless of the choice. At the same time, in the future with a Timepix3
based GridPix detector all these points will become moot: it records both
~ToT~ and ~ToA~ simultaneously and with high time resolution.
This _also_ means using an FADC will become irrelevant, avoiding the difficulties of dealing with analogue signals and associated EMI issues. **** (While generating fake data) Events with large energy, but few pixels :PROPERTIES: :CUSTOM_ID: sec:large_events_few_pixels_tot :END: #+begin_quote This section is taken out of my regular notes. It was written while trying to understand certain behaviors while trying to generate fake event data from existing data by removal of pixels. That approach is the easiest way to generate lower energy 'simulated' data from existing data without having to simulate full events (which we ended up doing later anyway). #+end_quote While developing some fake data using existing events in the photo peak & filtering out pixels to end up at ~3 keV, I noticed the prevalence of events with <150 pixels & ~6 keV energy. Code produced by splicing in the following code into the body of =generateFakeData=. #+begin_src nim for i in 0 ..< xs.len: if xs[i].len < 150 and energyInput[i] > 5.5: # recompute from data let pp = toSeq(0 ..< xs[i].len).mapIt((x: xs[i][it], y: ys[i][it], ch: ts[i][it])) let newEnergy = h5f.computeEnergy(pp, group, a, b, c, t, bL, mL) echo "Length ", xs[i].len , " w/ energy ", energyInput[i], " recomp ", newEnergy let df = toDf({"x" : pp.mapIt(it.x.int), "y" : pp.mapIt(it.y.int), "ch" : pp.mapIt(it.ch.int)}) ggplot(df, aes("x", "y", color = "ch")) + geom_point() + ggtitle("funny its real") + ggsave("/tmp/fake_event.pdf") sleep(200) if true: quit() #+end_src This gives about 100 events that fit the criteria out of a total of O(20000). A ratio of 1/200 seems probably reasonable for absorption of X-rays at 5.9 keV. While plotting them I noticed that they all share that they are incredibly dense, like: [[file:~/org/Figs/statusAndProgress/exampleEvents/event_few_pixels_large_energy.pdf]] These events must be events where the X-ray to photoelectron conversion happens very close to the grid! 
This is one argument "in favor" of using ToT instead of ToA on the Timepix1
and more importantly a good reason to keep using the ToT values instead of
pure pixel counting for at least some events!

- [ ] We should look at number of pixels vs. energy as a scatter plot to see
  what this gives us.

**** Plotting low count / high energy events with ~plotData~

As an alternative to the above, we can also just use ~plotData~ to create
event displays for such events. We can utilize the ~--cuts~ argument to
create event displays only for events with fewer than a certain number of
pixels and more than some amount of energy. Let's say < 100 pixels and
> 5 keV for example:
#+begin_src sh
plotData \
    --h5file ~/CastData/data/DataRuns2018_Reco.h5 \
    --runType=rtBackground \
    --eventDisplay --septemboard \
    --cuts '("hits", 0, 100)' \
    --cuts '("energyFromCharge", 5.0, Inf)' \
    --cuts '("centerX", 3.0, 11.0)' \
    --cuts '("centerY", 3.0, 11.0)' \
    --applyAllCuts
#+end_src

Or we can produce a scatter plot of how the number of hits relates to the
energy if we make some similar cuts (producing the plot for all background
data obviously drowns it in uninteresting events). We do this by utilizing
the custom ~--x~ and ~--y~ arguments:
#+begin_src sh
plotData \
    --h5file ~/CastData/data/DataRuns2018_Reco.h5 \
    --runType=rtBackground \
    --x energyFromCharge --y hits --z length \
    --cuts '("hits", 0, 150)' \
    --cuts '("energyFromCharge", 4.0, Inf)' \
    --cuts '("centerX", 3.0, 11.0)' \
    --cuts '("centerY", 3.0, 11.0)' \
    --applyAllCuts
#+end_src

In addition we color each point by the length of the cluster to see if these
clusters are commonly small. This yields the following plot,
fig. [[fig:calibration:large_energy_few_hits_scatter]].

#+CAPTION: Scatter plot of the energy of clusters against the number of hits
#+CAPTION: for clusters not at the edges of the chips and filtered to < 150 hits
#+CAPTION: and more than 4 keV. The color code is the length of the clusters in millimeter.
#+NAME: fig:calibration:large_energy_few_hits_scatter
[[~/phd/Figs/eventProperties/events_few_hits_large_energy_scatter.pdf]]

** Detector behavior over time
:PROPERTIES:
:CUSTOM_ID: sec:calib:detector_behavior_over_time
:END:

Outside the detector related issues discussed in section
[[#sec:cast:data_taking_woes]] the detector generally ran very stably during
Run-2 and Run-3 at CAST. This both allows and requires assessing the data
quality in more nuanced ways. Specifically, the stability of the recorded
signals over time is of interest, which is one of the main purposes of the
\cefe calibration runs. A fixed spectrum makes it easy to verify stable
operation. Of particular interest for the energy calibration of the data are
the detected charge and the gas gain of the detector. As both can be computed
purely from individual pixel data without any physical interpretation, they
serve as a great reference over time.

Longer time scale variations of the gas gain were already evident from the
calibration runs during data taking and partially expected due to the power
supply and grounding problems encountered, as well as different sets of
calibrations between Run-2 and Run-3. By binning the data into short
intervals of order one hour, significant fluctuations can be observed even on
such time scales. Fig. [[fig:calib:total_charge_over_time]] shows the median
of the total charge in events for all CAST data, normalized within each
dataset (background and calibration). Each data point represents a
$\SI{90}{min}$ time slice. Some data is removed prior to calculation of the
median as mentioned in the caption. The important takeaway of the figure is
the extreme variability of the median charge (up to $\SI{30}{\%}$!).
Fortunately though, the background and calibration data behave the same, as
is evident from the strong correlation (purple background, green
calibration). While the causes for the variability are not entirely certain
(see sec.
[[#sec:calib:causes_variability]]), it allows us to take action and calibrate
the data accordingly.

#+CAPTION: The plot shows the median charge within $\SI{90}{min}$ time windows of both
#+CAPTION: background and calibration data. Some data is removed (only clusters with
#+CAPTION: less than 500 pixels active, to remove the worst sparks and extremely large
#+CAPTION: events) and only events within the inner $\SI{4.5}{mm}$ radius are considered.
#+CAPTION: Each data type (calibration and background) is normalized to 1, as the median
#+CAPTION: charge is very different between the datasets. The median is used instead of the mean
#+CAPTION: to further remove the effect of very rare but extreme outliers. Each panel of
#+CAPTION: the plot shows a portion of the data taking, with significant gaps without data
#+CAPTION: between them.
#+NAME: fig:calib:total_charge_over_time
#+ATTR_LATEX: :width 1.0\textwidth
[[~/phd/Figs/behavior_over_time/plotTotalChargeOverTime/background_median_charge_binned_90.0_min_filtered_crSilver.pdf]]
[fn:calib_amount_of_calibration]

As is evident from the top left panel of
fig. [[fig:calib:total_charge_over_time]], covering the first data taking
campaign in 2017, the amount of calibration data is initially rather
limited. The reason for this is plainly that too much was going on at the
time, leading to a neglect of regular calibration runs. (:

*** TODOs for this section [/] :noexport:

- [ ] *LOTS OF REPETITION BETWEEN NEXT PARAGRAPH AND PARTS OF PREVIOUS SECTION!*
- [X] *CHECK NOTES W/ KLAUS AND STATUSANDPROGRESS FOR HOW WE WENT THROUGH GAS GAIN SLICING INVESTIGATION*
- [ ] this should be in the detector energy calibration section. It only
  matters (and can be computed) once we discuss the energy calibration.
- [X] plot of total charge over time as *main motivation*
- [X] Plot of the peak position of the 55Fe calibration runs.
  -> Will be shown next.
- [X] Regarding 55Fe peak positions & temperature correlation: _Maybe_ the relevant temperature to correlate to isn't actually the septemboard temperature, but the ambient temperature in the CAST hall? Unlikely, but can easily check by extracting temps from log files and plotting against position! -> lol @ <2023-10-21 Sat 16:57>: What a fun idea right there. That is indeed the relevant parameter to look at, as we discuss in the next section anyway. - [ ] SPLIT THIS BY: EXPLANATION TO A *WHY* SUCH VARIATION AND *WHAT IS DONE* AS A RESULT. - [X] *FIGURE TOTAL CHARGE OVER TIME* what binning? 90min - [ ] *INSERT TABLE OF THE MEAN POSITIONS OF THE 55FE PEAKS WITH* column charge and pixel mean of esc and photo. similar to that cdl table we have! -> AND OR have a plot of the positions? - [X] *EXPLAIN HOW WE ENDED UP AT 90 MIN BINNING! AND HOW SLICING WORKS ETC* -> Section later. *** Generate plot for median of charge over time :extended: Let's generate the plot for the median charge within 90 minutes, filtered to only clusters with less than 500 hits, also showing the calibration data, filtered to the silver region & each data type (calibration & background) normalized to 1, as a facet plot. - [X] We hand ~StartHue=285~ manually here for now, but we should change that to become a thesis wide setting for everything we compile. -> Done. For a note on why the median and not the mean, see the whole section on "Detector behavior over time" in the ~statusAndProgress~ and in particular the 'Addendum' there (extreme outliers in some cases is the tl/dr). 
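The binning and normalization logic behind the plot can be sketched as follows. This is a hedged Python illustration, not the actual Nim implementation in ~plotTotalChargeOverTime~; in particular, normalizing by the mean of the binned medians is an assumption made here for illustration:

```python
from statistics import median

INTERVAL_S = 90 * 60  # 90 min time slices, as used for the plot

def binned_medians(times, charges):
    """Median total charge per 90 min window, normalized to ~1.

    `times` are UNIX timestamps in seconds, `charges` the total charge
    per event. Returns {window index: normalized median charge}.
    """
    bins = {}
    for t, q in zip(times, charges):
        bins.setdefault(int(t) // INTERVAL_S, []).append(q)
    meds = {b: median(qs) for b, qs in bins.items()}
    # Assumption: normalize by the mean of the binned medians.
    norm = sum(meds.values()) / len(meds)
    return {b: m / norm for b, m in meds.items()}
```

The median per window is robust against the rare extreme outliers mentioned above, which would skew a mean significantly.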
#+begin_src sh nim c -d:danger -d:StartHue=285 plotTotalChargeOverTime && \ LEFT=4.0 FACET_MARGIN=0.75 ROT_ANGLE=0.0 USE_TEX=true \ plotTotalChargeOverTime \ ~/CastData/data/DataRuns2017_Reco.h5 \ ~/CastData/data/DataRuns2018_Reco.h5 \ --interval 90 \ --cutoffCharge 0 \ --cutoffHits 500 \ --calibFiles ~/CastData/data/CalibrationRuns2017_Reco.h5 \ --calibFiles ~/CastData/data/CalibrationRuns2018_Reco.h5 \ --applyRegionCut \ --timeSeries \ --useMedian \ --normalizeMedian \ --outpath ~/phd/Figs/behavior_over_time/plotTotalChargeOverTime/ \ --titleSuff " " #+end_src We use a ~titleSuff~ suffix of a single space, because a) it's not empty but b) we don't want an actual suffix (about the cut region etc.). yielding [[~/phd/Figs/behavior_over_time/plotTotalChargeOverTime/background_median_charge_binned_90.0_min_filtered_crSilver.pdf]] among other things, with many more related plots to be found in: [[file:Figs/behavior_over_time/plotTotalChargeOverTime/]] *** Potential causes for the variability :PROPERTIES: :CUSTOM_ID: sec:calib:causes_variability :END: # As mentioned in the previous section, variability of the data was # already apparent during the data taking based on the position of the # peaks in the \cefe spectra. The change in the median of the charge (and related the # gas gain) is an easier data point to investigate possible reasons for # such changes. One possible cause for the variability seen in the previous section is the electronics of the detector readout. Either a floating ground or unstable power supply can result in the activation thresholds of the pixels moving -- as was indeed observed -- as mentioned in section [[#sec:cast:data_taking_woes]]. Lesser versions of the problems discussed in that section could theoretically explain the variations. Specifically, in regards to the \cefe spectra showing variation, the number of pixels and the amount of charge are directly correlated. The number of pixels is plainly a clamped version of the charge information. 
If electronics caused threshold variations, it would change both the
effective ~ToT~ value and the number of pixels activated in the first
place. Fortunately, the center chip also contains the FADC, which allows for
an independent measurement of the effective charge generated below the grid
and thus another indirect measurement of the gas gain. By comparing how the
mean position of the \cefe spectra behaves in the FADC data compared to the
GridPix data, we can deduce whether the GridPix behavior is likely due to
real gas gain changes or due to electronics.

Fig. [[fig:calib:fe55_peak_pos_charge_pixel_fadc]] shows the (normalized)
position of the \cefe photopeak based on a fit to the pixel, charge and FADC
spectrum (the latter based on the amplitudes of the FADC signals). Aside from
the variations in the FADC data in the 2017 data (left) due to the changed
FADC settings (more on that in sec. [[#sec:calib:fadc]]), the 'temporally
local' changes in all three datasets are almost perfectly correlated. This
implies a /real physical origin/ of the observed variation and not an
electronic or power supply origin.

#+CAPTION: Normalized photopeak positions in the Run-2 data based on the charge (purple), pixel
#+CAPTION: (green) and FADC (orange) spectra. The empty range in the middle is the period between
#+CAPTION: Dec 2017 and Feb 2018. The strong changes in the FADC
#+CAPTION: on the left are due to the different FADC settings. Beyond that the three sets of
#+CAPTION: data are fully correlated, implying a physical origin of the variation. Compare how
#+CAPTION: local (in time) features appear identical in each dataset.
#+NAME: fig:calib:fe55_peak_pos_charge_pixel_fadc
[[~/phd/Figs/behavior_over_time/mapSeptemTempToFePeak/time_vs_peak_pos.pdf]]

A physical change in the gas gain can be caused by a change in high voltage
in the amplification region, a change in gas composition or a change in gas
properties (assuming no change in the physical size of the amplification
gap, which is reasonable at least within Run-2 and Run-3 as the detector was
not touched). Firstly, the high voltage, while not logged to a file
[fn:hv_logging], was visually inspected regularly and was always kept at the
desired voltages by the Iseg HV module within the operating window. It is a
very unlikely source of the variability. [fn:hv_variability_lab_course]
Secondly, there is no reason to believe the gas composition to be at fault,
as a) the detector is used in an open loop at a constant gas flow and b) it
would then, if anything, show up as a sudden change in detector properties
upon a gas bottle change and not a continuous change during operation. This
finally leaves the properties of the gas itself, for which three variables
are (partially) known:
1. the gas flow
2. the chamber pressure via the pressure controller on the outlet side
3. the temperature

The gas flow was relatively constant at $\SI{2}{\liter\per\hour}$. The
absolute value should not be too relevant: the flow is small in absolute
terms and thus should have no direct effect on the gas properties in the
chamber (via flow related effects causing turbulence or similar). Its only
secondary impact is on the absolute gas pressure, which is regulated with
fine granularity by the pressure controller. While no log files were written
for the chamber pressure either, it was likewise inspected visually on a
regular basis. The pressure was at a constant $\SI{1050}{mbar}$, at most
varying by $\SI{1}{mbar}$ in rare cases, but certainly not in a way
correlating with the gas gain variations.
This leaves the temperature inside the chamber and in the amplification
region as the final plausible source of the variations. As the temperature
log files for the Septemboard were lost due to a software bug (more on that
in appendix sec. [[#sec:daq:temperature_readout]]), there are two other
sources of temperature information. First, the shift log of each morning
shift contains one temperature reading of the Septemboard, which yields one
value for every solar tracking. Second, the CAST slow control log files
contain multiple different temperature readings in one second intervals. Most
notable is the ambient temperature in the CAST hall, which up to an offset
(and some variation due to detector load and cooling efficiency) should be
equivalent to the gas temperature.

Fig. [[fig:calib:correlation_ambient_temperature_gasgain_and_spectra]] shows
the normalized temperature sensors in the CAST hall (excluding the exterior
temperature) during the Run-3 data taking period together with the normalized
peak position of the \cefe spectra in pixels (black points), the temperature
from the shift logs (blue points) and the gas gain values of each chip
(smaller points using the color scale, based on $\SI{90}{min}$ intervals per
point). The blue points of the temperature of the Septemboard recorded during
each solar tracking nicely follow the temperature trend of the ambient
temperature (~T_amb~) in the hall, as expected. Comparing the \cefe spectra
mean positions with the shift log temperatures does not allow us to draw
meaningful conclusions about possible correlations, due to a lack of
statistics. But the gas gains of each chip compared to the temperature lines
do imply an (imperfect) /inverse/ correlation between the temperature and the
gas gain. As discussed in theory sec.
[[#sec:theory:gas_gain_polya]] the expectation for the gas gain given constant pressure is $G ∝ e^α$ where the first Townsend coefficient $α$ scales with temperature by #+NAME: eq:calib:townsend_scaling_prop \begin{equation} α ∝ \frac{1}{T} \exp\left(-\frac{1}{T}\right). \end{equation} The combination of the inverse relation to $T$ and its negative exponential is a monotonically increasing sublinear function (and not decreasing as $1/T$ would imply alone) in the relevant parameter ranges. This should imply an increase in gas gain instead of the apparent decrease we see for increasing temperatures. The kind of scaling according to eq. [[eq:calib:townsend_scaling_prop]] was also already experimentally measured for GridPix detectors by L. Scharenberg in [[cite:&lucianMsc]]. The implications seem to be that the assumptions going into the $α$ scaling must have been violated. The septemboard detector in its -- essentially open -- gas system is a non-trivial thermodynamic system due to the significant heating of the Timepix ASICs and very small amplification region of $\SI{50}{μm}$ height enclosing a gas mass, where gas flow is potentially inhibited. This is not meant as a definitive statement about the origins of the gas gain variations in the Septemboard detector data. /However/, it clearly motivates the need for an even more in depth study of the behavior of these detectors for different gas temperatures at constant pressures (continuing the work of [[cite:&lucianMsc]]). More precise logging of temperatures and pressures in future detectors is highly encouraged. Further, a significantly improved cooling setup (to more closely approach a region where temperature changes have a smaller relative impact), or theoretically even a temperature controlled setup (to avoid temperature changes in the first place) with known inlet gas temperatures might be useful. 
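The monotonic increase of eq. [[eq:calib:townsend_scaling_prop]] in the lab-relevant temperature range can be checked numerically. The following is a hedged sketch with assumed, illustrative parameters: the effective cross section, the field strength across the $\SI{50}{μm}$ gap and the round chamber pressure are plugged-in ballpark numbers, not fitted values from the thesis:

```python
import math

K_B   = 1.380649e-23   # J/K, Boltzmann constant
P     = 105000.0       # Pa, ~1050 mbar chamber pressure
SIGMA = 5e-20          # m^2, assumed effective electron-gas cross section
V_I   = 15.7           # V, ionization potential (as quoted in the text)
E     = 300.0 / 50e-6  # V/m, assumed ~300 V across the 50 um amplification gap

def townsend(T):
    """alpha(T) = (p*sigma/(k*T)) * exp(-(V_i/|E|) * p*sigma/(k*T))."""
    u = P * SIGMA / (K_B * T)  # inverse mean free path, 1/lambda
    return u * math.exp(-(V_I / E) * u)

# With these parameters alpha grows with temperature around room temperature,
# i.e. the naive expectation is a *higher* gas gain at higher temperature --
# the opposite of the inverse correlation observed in the CAST data.
for T in (290.0, 300.0, 310.0):
    print(T, townsend(T))
```

Note that the sign of the trend depends on whether the exponent $(V_i/|\vec{E}|)\,pσ/(kT)$ exceeds one, so different assumed cross sections or fields can flip it; this fragility is exactly the point made in the extended notes below.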
This behavior is one of the most problematic aspects from a data analysis
point of view and thus it should be taken seriously for future endeavors.

See appendix [[#sec:appendix:detector_time_behavior]] for plots similar to
fig. [[fig:calib:correlation_ambient_temperature_gasgain_and_spectra]] for
the other periods of CAST data taking and a scatter plot of the center chip
gas gains against the ambient temperature directly.

#+CAPTION: Normalized data for Run-3 of the temperature sensors from the CAST slow control log
#+CAPTION: files compared to the behavior of the mean peak position in the \cefe pixel spectra
#+CAPTION: (black points), the recovered temperature values recorded during each solar tracking
#+CAPTION: (blue points) and the gas gain values computed based on \SI{90}{min} of data for each
#+CAPTION: chip (smaller points using the Viridis color scale). The shift log temperatures nicely
#+CAPTION: follow the trend of the general temperatures. Gas gains and temperatures seem to be
#+CAPTION: inversely correlated, providing a possible explanation for the detector behavior.
#+NAME: fig:calib:correlation_ambient_temperature_gasgain_and_spectra
#+ATTR_LATEX: :float sideways
[[~/phd/Figs/behavior_over_time/correlation_fePixel_all_chips_gasgain_period_2018-10-19.pdf]]

[fn:hv_logging] Once again, in hindsight writing a log file of the high
voltage values would have been valuable, especially as it could have been
done straight from TOS. However, similar to what led to the loss of the
temperature log files, this was simply not prioritized at the time. The same
holds for the gas pressure in the chamber, which should have been logged
using the ~FlowView~ software of the Bronkhorst pressure controller used to
control it.

[fn:hv_variability_lab_course] From other Iseg HV modules used in lab course
experiments we know that when they _are_ faulty it is very evident. We have
never experienced a module that reads correct values, but actually supplies
the wrong voltage.
In each faulty case the desired target voltage was simply not correctly held and this was visible in the read out voltage. **** TODOs about this section [/] :noexport: - [X] *COME BACK TO THIS ONCE WE a) UNDERSTAND [[#sec:calib:behavior_over_time:thoughts_townsend]] OR TALKED TO LUCIAN* -> With our newfound understanding thanks to the talk with Lucian, we should now do the following: - [ ] Merge explanation about gas gain, Townsend coefficient etc. back into Polya / gas gain section of theory - [ ] Refer to theory in the above section saying "this is our expectation" - [ ] Finish / extend section above with adjusted explanation / interpretation of our data. Text section about GridPix 1: #+begin_quote - [ ] *TAKE OUT THIS PART?* This behavior was likely less relevant in single GridPix detectors as the heat emission of a single GridPix is much lower and thus the absolute temperatures are lower. For a fixed change in ambient temperature this means a single GridPix detector undergoes temperature changes in the amplification region at lower absolute temperatures. As the behavior is highly non-linear the effect is likely less evident for a single GridPix. In addition the single GridPix detector was mostly built from acrylic glass, which is a good insulator potentially leading to a more stable temperature. - [ ] *MAYBE REWRITE ABOVE* #+end_quote Previous text about interpretation: #+begin_quote While an inverse correlation between temperature and gas gain may appear counter intuitive, it is sensible when considering the amplification region of the GridPix as an ideal gas. The pressure controller keeps the pressure inside the gas chamber constant independent of the temperature. This implies that an increase in temperature results in a decrease in density in the amplification region. 
The mobility of the charge carriers in the region is inversely proportional to the density *CITE CITE CITE* (for now PDG chapter detectors!, *REF THEORY SEC*) and a higher mobility implies a longer mean free path. This in turn leads to _less_ additional ionization events for higher temperatures, reducing the full amplification. - [ ] *FIND GOOD REFERENCE FOR MOBILITY IN GAS INVERSELY PROP TO DENSITY!* -> Generally better understand this, add section to gaseous detector theory about this, show equation for mean free path based on mobility! Once section in theory there, reference theory instead! #+end_quote About \cefe spectrum peak plot: - [ ] *REWRITE TEXT ABOVE PLOT BELOW IS REPLACED BY FACET PLOT* - [ ] *UPDATE THE PLOT WITH ALL DATA, SPLIT BY RUN PERIOD AS FACET, AND UPDATE COLORS (!!!) IN THE CAPTION!* -> Not sure if we really want a different plot. I think this one is now quite nice actually. - [ ] *(RE)MOVE THE PARAGRAPH BELOW?* -> Refers to paragraph about "what if we had temp log files" - [ ] *WE COULD CREATE A PLOT SIMILAR TO ABOVE, BUT ACTUALLY PLOTTING TEMPERATURE AGAINST THE GAS GAIN DIRECTLY* -> Refers to fig. fig:calib:correlation_ambient_temperature_gasgain_and_spectra -> we'd just have to assign a temperature to each gas gain value (compute a mean of all temps in a gas gain slice interval?) and see what happens! - [X] Reasons for peak position moving: - [X] electronic causing the effective threshold to change - [X] change in gas temperature (seems uncorrelated going by plots!. *Maybe* correlated to ambient temperature?) - [X] change in pressure (unlikely, had pressure sensor!) - [X] change in gas flow (related to above 2) (unlikely, was constant at ~2 L/h. If changes smaller than what's commonly visible on the flow meter was responsible it would be Jochen's fault for not being aware of that!) 
- [X] change in grid voltage (real or effective due to charge up) -> either Iseg module broken (we *know* they are crappy), but in this case the module *showing* the right voltage, but maybe not really *applying* that voltage? -> a case of bad grounding causing the module to applying 300 V, but in reality not relative to the pixel layer, but rather to something else so that it was floating around that? - [X] *WAS GAS PRESSURE CONTROLLER ON INLET OR OUTLET SIDE??? CHECK AND ADJUST ABOVE* -> Outlet - [X] Plot of the (known) temperature at PCB from shift forms. - [ ] *MENTION VARYING COOLING POWER OVER TIME, POSSIBLY CLOGGING* - [X] *CHECK NAME OF BRONKHORST ? OR WHATEVER PRESSURE CONTROLLER AND ADJUST IN FOOTNOTE!* -> Bronkhorst. **** Extended thoughts on missing temperature log data :extended: - [ ] *THINK ABOUT WHETHER TO PUT INTO MAIN AGAIN / REMOVE EXTENDED* Note that even if the temperature logs were still available, it is not obvious how they could lead to a correction that goes beyond the gas gain binning in time that was eventually settled on. The variations lead to gain and loss of information that cannot easily be corrected for without introducing potential biases, especially because the temperature sensor on the bottom side of the Septemboard does not yield an absolute temperature inside the amplification region anyway. While theoretically a fit correlating temperature to energy calibration factors is thinkable it is not clear it would improve the calibration over using gas gains binned in time, as the gas gain is the physical result of temperature changes. The only interesting aspect of it would be potentially higher time resolution than the time binning required to have good statistics for a gas gain. Further, temperature changes are not expected to usually occur on time scales much shorter than of the order of one hour, if they are due to ambient temperature changes. 
Still, it could be an interesting avenue to explore by experimenting with the available slow control log information on the ambient temperature as a proxy for the temperature in the amplification region (same as the Septemboard temperature sensors, but just with a larger offset and lack of detail regarding local temperature changes due to water cooling related variations). **** Further thoughts about variability :extended: What if (/put on my crackpot helmet/): At lower temperatures gas diffusion is less efficient. This might lead to stronger effects of "over pressure" / less gas cycling between below and above the grid. This could increase the pressure below the grid as it is further away from an open system in a thermodynamic sense. The higher the temperature the more flow via diffusion exchanges gas below and above the grid, bringing the detector closer to desired 1050 mbar operating window. Yeah right lol. **** Thoughts about Townsend coefficient & gas gain temperature dependence :extended: :PROPERTIES: :CUSTOM_ID: sec:calib:behavior_over_time:thoughts_townsend :END: *NOTE*: This section was me trying to better understand the origin of the Townsend coefficient and its temperature dependence. The results have since been merged back into the theory part (about mean free path and gas gain) and the main section above. Some further discussions of the fact that our temperature vs. gain data in fig. [[fig:calib:correlation_ambient_temperature_gasgain_and_spectra]] seems to imply an inverse proportionality between temperature and gas gain from our data at CAST. So let's go back to our theoretical expectation here and see what we learn. The number of electrons after a distance $x$ should be \[ n = n_0 e^{αx} \] where $α$ is the first Townsend coefficient, cite:&sauli2014gaseous (eq. 5.2 p. 146). The gas gain is just this divided by the initial number $n_0$. Sauli on the definition of the first Townsend coefficient: [[cite:&sauli2014gaseous]] page 145 eq. 
5.1: #+begin_quote The mean free path for ionization λ is defined as the average distance an electron has to travel before having an ionizing collision; its inverse, α = λ⁻¹, is the ionization or first Townsend coefficient, and represents the number of ion pairs produced per unit length of drift; it relates to the ionization cross section through the expression: α = N σ_i (eq 5.1) where N is the number of molecules per unit volume. As for other quantities in gaseous electronics, the Townsend coefficient is proportional to the gas density and therefore to the pressure P; the ratio α/P is a sole function of the reduced field E/P, as shown in Figure 5.19 for noble gases (Druyvesteyn and Penning, 1940). #+end_quote Also from Sauli [[cite:&sauli2014gaseous]] p. 151 is fig. [[fig:sauli_gas_gain_T_over_P]]. The plot (data at least) is taken from cite:&altunbas03_gas_gain and shows a linear (or very shallow exponential) behavior of the gain vs $T/P$ with experimental data from GEMs for COMPASS and Magboltz simulations as well. #+CAPTION: Figure from cite:&sauli2014gaseous (page 151) showing the gas gain #+CAPTION: dependence on the ratio of temperature and pressure. #+NAME: fig:sauli_gas_gain_T_over_P [[~/phd/Figs/gas_physics/fig_5_25_sauli_p151_gas_gain_vs_T_P.png]] Further papers of interest: - cite:&aoyama85_gas_gain -> Contains a mathematical derivation for a generalized first Townsend coefficient relationship with S = E/N (where N is the density and E the electric field). - [[cite:&Davydov_2006]] contains a discussion about the first Townsend coefficient for very low densities and thus also discussions about the math etc. Now, if we just go by our intuition from ideal gas physics we would expect the following: Assuming $α = 1 / λ$ where $λ$ is the mean free path. If the temperature increases in a gas, the density decreases for constant pressure $p$ via \[ p = ρ R_s T \] with the specific gas constant $R_s$. 
A lower density necessarily implies fewer particles per unit volume and thus typically a longer path between interactions. This means $λ$ increases and, due to the inverse relationship with $α$, the first Townsend coefficient -- and by extension the gas gain -- decreases. This is even explicitly mentioned in the quote from Sauli above, literally in the sentence
#+begin_quote
As for other quantities in gaseous electronics, the Townsend coefficient is proportional to the gas density [...]
#+end_quote
However, this is in stark contrast to
- the screenshot of the fig. above, [[fig:sauli_gas_gain_T_over_P]]
- the fact that Jochen kept going on about the gas gain being essentially $G ∝ e^{T/P}$
  -> This is clearly wrong, see both below and generally the fact that neither Magboltz nor Lucian's MSc measurements indicate anything like an exponential increase with temperature.
- and my Magboltz simulations, sec. [[#sec:calib:behavior_over_time:magboltz_sim]]

After a discussion with Lucian today <2023-10-23 Mon>, I'm a little bit more illuminated. In his MSc thesis [[cite:&lucianMsc]] he goes through a derivation based on [[cite:&engel65_gases]] for the temperature dependence of the first Townsend coefficient. It starts from the argument above about $α = 1/λ$ and then continues with the requirement to accumulate enough energy to produce secondary ionization events, $e |\vec{E}| l \geq e V_i$, with the ionization potential $V_i$ for the gas mixture and $l$ the forward distance of an electron under the electric field $|\vec{E}|$. This distance
\[ l = \frac{V_i}{|\vec{E}|} \]
can be compared to the mean free path $λ$ of the electron via
\[ \mathcal{N} = e^{-l/λ} \]
where $\mathcal{N}$ is the relative number of electrons whose free path is longer than $l$. This allows us to define the probability of an ionizing collision per unit distance as
\[ P(l) = \frac{1}{λ} e^{-l / λ} = α \]
which is precisely the definition of the first Townsend coefficient, $α$.
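To make the limiting behavior of this definition explicit, here is a small Python sketch (the mean free path is a placeholder value, not a measured detector parameter): for increasing field strength the required distance $l = V_i/|\vec{E}|$ shrinks, and $α$ saturates towards $1/λ$.

#+begin_src python
import math

lam = 5e-4   # cm, assumed mean free path (placeholder value)
V_i = 15.7   # V, ionization potential used for the gas mixture

def alpha(E):
    """First Townsend coefficient [1/cm]: alpha = (1/lam) * exp(-l/lam),
    with l = V_i / E the distance needed to pick up the ionization energy.
    E is the electric field in V/cm."""
    l = V_i / E
    return (1.0 / lam) * math.exp(-l / lam)

# alpha grows with the field and saturates towards 1/lam:
print(alpha(20e3), alpha(60e3), 1.0 / lam)
#+end_src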
The mean free path $λ$ can be related to the pressure $p$, temperature $T$ and cross section of the electron in the gas, $σ$:
\[ λ = \frac{kT}{pσ}. \]
Inserting this into the above definition of $α$ yields:
\[ α(T) = \frac{pσ}{kT} \exp\left( - \frac{V_i}{|\vec{E}|}\frac{pσ}{kT}\right) \]
which allows us to analytically compute the temperature dependence of the first Townsend coefficient, which we'll do in sec. [[#sec:calib:behavior_over_time:townsend_coefficient_temp_scaling]]. This expression is actually similar to eq. (5.4) in [[cite:&sauli2014gaseous]]. It seems to roughly match the Magboltz simulations. Note though that this dependence is 'fragile', as it is a higher order dependence on $T$ that assumes idealized constant parameters for $p$, $σ$ and the gas composition. In reality it is quite conceivable that gas contamination and slight variations in pressure change the outcome compared to this idealized result.

***** Applying Lucian's formula (eq. 5.17) and plotting it
:PROPERTIES:
:CUSTOM_ID: sec:calib:behavior_over_time:townsend_coefficient_temp_scaling
:END:

Lucian gives the following formula for the temperature dependence of the first Townsend coefficient:
\[ α(T) = \frac{pσ}{kT} \exp\left( - \frac{V_i}{|\vec{E}|}\frac{pσ}{kT}\right) \]
where $p$ is the gas pressure, $σ$ the cross section of electrons with the gas at the relevant energies, $V_i$ the ionization potential for the gas and $|\vec{E}|$ the electric field strength.
#+begin_src nim :tangle code/townsend_coefficient_temp_scaling.nim
import unchained, math, ggplotnim, sequtils
const V_i = 15.7.V # Lucian gives this ionization potential next to fig.
5.4
defUnit(kV•cm⁻¹)
defUnit(cm⁻¹)
proc townsend[P: Pressure; A: Area](p: P, σ: A, T: Kelvin, E: kV•cm⁻¹): cm⁻¹ =
  let arg = (V_i * p * σ) / (E * k_B * T)
  echo arg
  result = (p * σ / (k_B * T) * exp( -arg )).to(cm⁻¹)
echo townsend(1013.25.mbar, 500.MegaBarn, 273.15.K, 60.kV•cm⁻¹)
let temps = linspace(0.0, 100.0, 1000) # 0 to 100 °C
#let temps = linspace(-273.15, 10000.0, 1000) # all of da range!
var αs = temps.mapIt(townsend(1013.25.mbar, 500.MegaBarn, (273.15 + it).K, 60.kV•cm⁻¹).float)
let df = toDf(temps, αs)
ggplot(df, aes("temps", "αs")) +
  geom_line() +
  xlab("Gas temperature [°C]") + ylab("Townsend coefficient [cm⁻¹]") +
  theme_font_scale(1.0, family = "serif") +
  ggsave("~/phd/Figs/gas_physics/townsend_coefficient_temperature_scaling_lucian.pdf")
#+end_src

#+RESULTS:

***** Simulations with Magboltz
:PROPERTIES:
:CUSTOM_ID: sec:calib:behavior_over_time:magboltz_sim
:END:

I wrote a simple interfacing library with Magboltz for Nim:
https://github.com/SciNim/NimBoltz
[[file:~/CastData/ExternCode/NimBoltz/nimboltz.nim]]
and I ran simulations at different temperatures, but at the same pressure, extracting the first Townsend coefficient. This is based on the steady state (SST) simulation, which should be the correct one for high fields:
#+begin_quote
The simulation of avalanche gain detectors at high field requires the use of SST Townsend parameters.
#+end_quote
from https://magboltz.web.cern.ch/magboltz/usage.html and line 256 in [[file:~/src/Magboltz/magboltz-11.17.f]]. These simulations seem to indicate that the coefficient should increase with temperature. *Why?* -> See above!

**** TODO Note about variability in GridPix 1 :extended:

- [ ] *CREATE TEMPERATURE PLOT OF TEMP IN CAST HALL DURING GRIDPIX1 DATA!*
  -> for noexport this is very useful info.

Christoph *did* see variations in his gas gain as well!! See fig. 9.7 of his thesis, where he *even notes it is likely due to temperature effects in the hall*! The big difference is just that the absolute variations were quite a bit smaller.
Why this has never been on the mind of people like Jochen I will never understand... Further: in fig. 7.26 he even sees significant differences in the gas gain for different targets of the CDL data. But he concludes (by taking a cut) that this is due to multiple electrons in the same hole instead of real changes. Likely it is a combination of both, I assume.

**** Generate plot of \cefe peak position :extended:

From my zsh history:
#+begin_src
: 1672709064:0;./mapSeptemTempToFePeak ~/CastData/data/CalibrationRuns2017_Reco.h5 --inputs fePixel --inputs feCharge --inputs feFadc
: 1672709066:0;evince /t/time_vs_peak_pos.pdf
: 1672709148:0;cp /t/time_vs_peak_pos.pdf ~/phd/Figs/time_vs_55fe_peak_pos_2017.pdf
#+end_src

First of all, we need to make sure our calibration HDF5 file not only has the reconstructed \cefe spectra including their fits, but also the fits for the FADC spectra. If that is not the case:
1. Make sure the FADC data is fully reconstructed:
   #+begin_src sh
   reconstruction -i ~/CastData/data/CalibrationRuns2017_Reco.h5 --only_fadc
   #+end_src
2. Now redo the \cefe fits:
   #+begin_src sh
   reconstruction -i ~/CastData/data/CalibrationRuns2017_Reco.h5 --only_fe_spec
   #+end_src

With that done, we can create a plot of all normalized \cefe peak positions and compare it to the temperatures recovered from the CAST shift forms of the septemboard.
#+begin_src sh :dir ~/CastData/ExternCode/TimepixAnalysis/Tools :results drawer
# alternative input: ~/CastData/data/CalibrationRuns2017_Reco.h5
WRITE_PLOT_CSV=true USE_TEX=true ./mapSeptemTempToFePeak \
    /mnt/1TB/CAST/2017/CalibrationRuns2017_Reco.h5 \
    --inputs fePixel --inputs feCharge --inputs feFadc \
    --outpath ~/phd/Figs/behavior_over_time/mapSeptemTempToFePeak/
#+end_src

#+RESULTS:
:results:
[INFO] TeXDaemon ready for input.
shellCmd: command -v lualatex shellCmd: lualatex -output-directory /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak//time_vs_peak_pos_fePixel.tex Generated: /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak//time_vs_peak_pos_fePixel.pdf [INFO] Writing CSV file: /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak//time_vs_peak_pos_fePixel.pdf.csv [INFO] TeXDaemon ready for input. shellCmd: command -v lualatex shellCmd: lualatex -output-directory /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak//time_vs_peak_pos_feCharge.tex Generated: /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak//time_vs_peak_pos_feCharge.pdf [INFO] Writing CSV file: /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak//time_vs_peak_pos_feCharge.pdf.csv [INFO] TeXDaemon ready for input. shellCmd: command -v lualatex shellCmd: lualatex -output-directory /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak//time_vs_peak_pos_feFadc.tex Generated: /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak//time_vs_peak_pos_feFadc.pdf [INFO] Writing CSV file: /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak//time_vs_peak_pos_feFadc.pdf.csv [INFO] TeXDaemon ready for input. 
shellCmd: command -v lualatex
shellCmd: lualatex -output-directory /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak//time_vs_peak_pos.tex
Generated: /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak//time_vs_peak_pos.pdf
[INFO] Writing CSV file: /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak//time_vs_peak_pos.pdf.csv
(raw per-run debug output elided: one line of the form "tot diff <Δt in s> <t_end> <t_start> <t_run> Temp <T> peak <μ> norm <value>" for every calibration run, repeated for the fePixel, feCharge and feFadc peak definitions)
INFO: The integer column `Timestamp` has been automatically determined to be continuous.
To overwrite this behavior add a `+ scale_x/y_discrete()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to. Or apply a `factor` to the column name in the `aes` call, i.e. `aes(..., factor("Timestamp"), ...)`.
[INFO] TeXDaemon ready for input.
shellCmd: command -v lualatex
shellCmd: lualatex -output-directory /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak//time_vs_peak_norm_by_temp.tex
Generated: /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak//time_vs_peak_norm_by_temp.pdf
[INFO] Writing CSV file: /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak//time_vs_peak_norm_by_temp.pdf.csv
[INFO] TeXDaemon ready for input.
shellCmd: command -v lualatex
shellCmd: lualatex -output-directory /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak//time_vs_peak_temp_normed_comparison.tex
Generated: /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak//time_vs_peak_temp_normed_comparison.pdf
[INFO] Writing CSV file: /home/basti/phd/Figs/behavior_over_time/mapSeptemTempToFePeak//time_vs_peak_temp_normed_comparison.pdf.csv
:end:

Note that of the following plots created here:
- [[~/phd/Figs/behavior_over_time/mapSeptemTempToFePeak/time_vs_peak_pos.pdf]]
- [[~/phd/Figs/behavior_over_time/mapSeptemTempToFePeak/time_vs_peak_pos_feFadc.pdf]]
- [[~/phd/Figs/behavior_over_time/mapSeptemTempToFePeak/time_vs_peak_norm_by_temp.pdf]]
- [[~/phd/Figs/behavior_over_time/mapSeptemTempToFePeak/time_vs_peak_pos_fePixel.pdf]]
- [[~/phd/Figs/behavior_over_time/mapSeptemTempToFePeak/time_vs_peak_temp_normed_comparison.pdf]]
- [[~/phd/Figs/behavior_over_time/mapSeptemTempToFePeak/time_vs_peak_pos_feCharge.pdf]]
the one we include in the thesis is actually one of those that does not include the septemboard temperatures from the shift forms.
That's because of the much more in-depth plot below, of course!

**** Generate plot of ambient CAST temp against 55Fe peaks :extended:

First we run the CAST log reader to get the temperature data as a simple CSV file (by default just written to ~/tmp/temperatures_cast.csv~):
#+begin_src sh
cd $TPA/LogReader # <- directory of TimepixAnalysis
./cast_log_reader sc -p ../resources/LogFiles/SClogs -s Version.idx
#+end_src
Note that this requires the slow control files for the relevant times to be present in the ~SClogs~ directory!

- [ ] *MOVE CODE OVER TO TPA, DEDUCTION TO STATUS, MAYBE KEEP HERE AS WELL?*
  -> well, the interesting stuff will go straight into the thesis, so there is less of a need for that.

#+begin_src nim :tangle code/correlation_ambient_temps_fe55_peaks.nim
import std / [strutils, sequtils, times, stats, strformat]
import os except FileInfo
import ggplotnim, nimhdf5
import ingrid / tos_helpers
import ingrid / ingrid_types

type
  FeFileKind = enum
    fePixel, feCharge, feFadc

let UseTex = getEnv("USE_TEX", "false").parseBool
let Width = getEnv("WIDTH", "1000").parseFloat
let Height = getEnv("HEIGHT", "600").parseFloat

const Peak = "μ"
let PeakNorm = if UseTex: r"$μ/μ_{\text{max}}$" else: "μ/μ_max"
const TempPeak = "(μ/T) / max"
let T_amb = if UseTex: r"$T_{\text{amb}}$" else: "T_amb"

proc readFePeaks(files: seq[string], feKind: FeFileKind = fePixel): DataFrame =
  const kalphaPix = 10
  const kalphaCharge = 4
  const parPrefix = "p"
  const dateStr = "yyyy-MM-dd'.'HH:mm:ss" # example: 2017-12-04.13:39:45
  var dset: string
  var kalphaIdx: int
  case feKind
  of fePixel:
    kalphaIdx = kalphaPix
    dset = "FeSpectrum"
  of feCharge:
    kalphaIdx = kalphaCharge
    dset = "FeSpectrumCharge"
  of feFadc:
    kalphaIdx = kalphaCharge
    dset = "FeSpectrumFadcPlot" # raw dataset is `minvals` instead of `FeSpectrumFadc`

  var h5files = files.mapIt(H5open(it, "r"))
  var fileInfos = newSeq[FileInfo]()
  for h5f in mitems(h5files):
    let fi = h5f.getFileInfo()
    fileInfos.add fi
  var
    peakSeq = newSeq[float]()
    dateSeq = newSeq[float]()
  for (h5f, fi) in zip(h5files, fileInfos):
    for r in fi.runs:
      let group = h5f[(recoBase() & $r).grp_str]
      let chpGrpName = if feKind in {fePixel, feCharge}: group.name / "chip_3"
                       else: group.name / "fadc"
      peakSeq.add h5f[(chpGrpName / dset).dset_str].attrs[
        parPrefix & $kalphaIdx, float
      ]
      dateSeq.add parseTime(group.attrs["dateTime", string], dateStr, utc()).toUnix.float
  result = toDf({ Peak : peakSeq, "Timestamp" : dateSeq })
    .arrange("Timestamp", SortOrder.Ascending)
    .mutate(f{float: PeakNorm ~ idx(Peak) / max(col(Peak))},
            f{"Type" <- $feKind})

proc toDf[T: object](data: seq[T]): DataFrame =
  ## Converts a seq of objects (that may only contain scalar fields) to a DF
  result = newDataFrame()
  for i, d in data:
    for field, val in fieldPairs(d):
      if field notin result:
        result[field] = newColumn(toColKind(type(val)), data.len)
      result[field, i] = val

proc readGasGainSliceData(files: seq[string]): DataFrame =
  result = newDataFrame()
  for f in files:
    let h5f = H5file(f, "r")
    let fInfo = h5f.getFileInfo()
    for r in fInfo.runs:
      for c in fInfo.chips:
        let group = recoDataChipBase(r) & $c
        var gainSlicesDf = h5f[group & "/gasGainSlices90", GasGainIntervalResult].toDf
        gainSlicesDf["Chip"] = c
        gainSlicesDf["Run"] = r
        gainSlicesDf["File"] = f
        result.add gainSlicesDf
    discard h5f.close()

const periods = [("2017-10-30", "2017-12-23"),
                 ("2018-02-15", "2018-04-22"),
                 ("2018-10-19", "2018-12-21")]

proc toPeriod(x: int): string =
  let date = x.fromUnix()
  for p in periods:
    let t0 = p[0].parseTime("yyyy-MM-dd", utc())
    let t1 = p[1].parseTime("yyyy-MM-dd", utc())
    if date >= t0 and date <= t1:
      return p[0]

proc mapToPeriod(df: DataFrame, timeCol: string): DataFrame =
  result = df.mutate(f{int -> string: "RunPeriod" ~ toPeriod(idx(timeCol))})
    .filter(f{string -> bool: `RunPeriod`.len > 0})

proc readSeptemTemps(): DataFrame =
  const TempFile = "/home/basti/CastData/ExternCode/TimepixAnalysis/resources/cast_2017_2018_temperatures.csv"
  const OrgFormat = "'<'yyyy-MM-dd ddd H:mm'>'"
  result =
toDf(readCsv(TempFile)) .filter(f{c"Temp / °" != "-"}) result["Timestamp"] = result["Date"].toTensor(string).map_inline(parseTime(x, OrgFormat, utc()).toUnix) proc readCastTemps(): DataFrame = result = readCsv("/tmp/temperatures_cast.csv") #.filter(f{float: `Time` >= t0 and `Time` <= t1}) .group_by("Temperature") .mutate(f{"TempNorm" ~ `TempVal` / max(col("TempVal"))}) .filter(f{`Temperature` != "T_ext"}) var newKeys = newSeq[(string, string)]() if UseTex: result = result.mutate(f{string -> string: "Temperature" ~ ( let suff = `Temperature`.split("_")[1] r"$T_{\text{" & suff & "}}$") }) echo "Resulting DF: ", result proc toPeriod(v: float): string = result = v.int.fromUnix.format("dd/MM/YYYY") proc keepEvery(df: DataFrame, num: int): DataFrame = ## Keeps only every `num` row of the data frame result = df result["idxMod"] = toSeq(0 ..< df.len) result = result.filter(f{int -> bool: `idxMod` mod num == 0}) proc plotCorrelationPerPeriod(df: DataFrame, kind: FeFileKind, gainDf, dfCastTemp, dfTemp: DataFrame, period, outpath = "/tmp") = let t0 = df["Timestamp", float].min let t1 = df["Timestamp", float].max let dfCastTemp = dfCastTemp .keepEvery(50) .filter(f{float: `Time` >= t0 and `Time` <= t1}) let dfTemp = dfTemp .filter(f{float: `Timestamp` >= t0 and `Timestamp` <= t1}) var gainDf = gainDf .filter(f{float: `tStart` >= t0 and `tStart` <= t1}) .mutate(f{float: "gainNorm" ~ `G` / max(col("G"))}) echo gainDf ## XXX: combine point like data for legend? 
# let dfC = bind_rows([("Fe55", df), ("SeptemTemp", dfTemp)], "Type") var plt = ggplot(df, aes("Timestamp", PeakNorm)) + geom_line(data = dfCastTemp, aes = aes("Time", "TempNorm", color = "Temperature")) + geom_point() + scale_x_continuous(labels = toPeriod) if dfTemp.len > 0: # only if septemboard data available in this period plt = plt + geom_point(data = dfTemp, aes = aes("Timestamp", f{idx("Temp / °") / max(col("Temp / °"))}), color = "blue") block AllChips: plt + geom_point(data = gainDf, aes = aes("tStart", "gainNorm", color = gradient("Chip")), alpha = 0.7, size = 1.5) + ggtitle("Correlation between temperatures (Septem = blue points) \\& 55Fe position " & $kind & " (black) and gas gains by chip", titleFont = font(11.0)) + themeLatex(fWidth = 0.9, textWidth = 677.3971, # the `\textheight`, want to insert in landscape width = Width, height = Height, baseTheme = singlePlot) + margin(bottom = 2.5) + ggsave(&"{outpath}/correlation_{kind}_all_chips_gasgain_period_{period}.pdf", width = 1000, height = 600, useTeX = UseTeX, standalone = UseTeX) block CenterChip: gainDf = gainDf.filter(f{`Chip` == 3}) plt + geom_point(data = gainDf, aes = aes("tStart", "gainNorm"), color = "purple", alpha = 0.7, size = 1.5) + ggtitle("Correlation between temperatures (Septem = blue points) \\& 55Fe position " & $kind & " (black) and gas gains (chip3) in purple", titleFont = font(11.0)) + themeLatex(fWidth = 0.9, textWidth = 677.3971, # the `\textheight`, want to insert in landscape width = Width, height = Height, baseTheme = singlePlot) + ggsave(&"{outpath}/correlation_{kind}_period_{period}.pdf", width = 1000, height = 600, useTeX = UseTeX, standalone = UseTeX) proc plotCorrelation(files: seq[string], kind: FeFileKind, gainDf, dfCastTemp, dfTemp: DataFrame, outpath = "/tmp") = let df = readFePeaks(files, feCharge) .mapToPeriod("Timestamp") for (tup, subDf) in groups(df.group_by("RunPeriod")): plotCorrelationPerPeriod(subDf, kind, gainDf, dfCastTemp, dfTemp, tup[0][1].toStr, 
outpath) proc plotTempVsGain(dfCastTemp, gainDf: DataFrame, outpath: string) = ## Now let's plot the actual gas gain against the temperature in each slice. ## Only for the center chip. ## 1. compute mean temperature within time associated with each gain value # dfCastTemp # gainDf ## NOTE: We do not compute the mean temperature associated with the proc mapGainToTemp(gainDf, dfCastTemp: DataFrame, period: string): DataFrame = let t0G = gainDf["tStart", int].min let t1G = gainDf["tStop", int].max # filter temperature data to relevant range echo dfCastTemp.isNil echo dfCastTemp let dfF = dfCastTemp .filter(f{int: `Time` >= t0G and `Time` <= t1G}, f{string -> bool: `Temperature` == T_amb}) var cT: RunningStat let ambT = dfF["TempVal", float] let time = dfF["Time", int] var j = 0 let gDf = gainDf.filter(f{int -> bool: `Chip` == 3}) var temps = newSeq[float](gDf.len) ## we now walk all temperatures and accumulate them in a `RunningStat` to compute ## the mean within `tStart` and `tStop` (by `tStart` of the next slice). ## First and last are just copied from ambient temperature values. 
temps[0] = ambT[0] for i in 1 ..< gDf.high: while time[j] < gDf["tStart", int][i]: cT.push ambT[j] inc j temps[i] = cT.mean cT.clear() temps[gDf.high] = ambT[ambT.len - 1] let gains = gDf["G", float] result = toDf(temps, gains, period) var dfGT = newDataFrame() for (tup, subDf) in groups(gainDf.groupBy("RunPeriod")): dfGT.add mapGainToTemp(subDf, dfCastTemp, tup[0][1].toStr) echo dfGT echo dfGT.tail(100) ggplot(dfGT.filter(f{`temps` > 0.0}), aes("temps", "gains", color = "period")) + geom_point() + ggtitle("Gas gain (90 min slices) vs ambient T at CAST (center chip)") + xlab("Temperature [°C]") + ylab("Gas gain") + themeLatex(fWidth = 0.9, width = 600, baseTheme = singlePlot) + ggsave(&"{outpath}/gain_vs_temp_center_chip.pdf", width = 600, height = 360, useTeX = UseTeX, standalone = UseTeX) proc main(calibFiles: seq[string], dataFiles: seq[string] = @[], outpath = "/tmp/") = ## NOTE: this file needs the CSV file containing the temperature data from the slow control ## CAST log files, which is written running the `cast_log_reader` on the slow control log ## directory! 
var gainDf = newDataFrame() if dataFiles.len > 0: gainDf = readGasGainSliceData(dataFiles) .mapToPeriod("tStart") ## Make a plot of the raw gas gains of all chips ggplot(gainDf, aes("tStart", "G", color = "Chip")) + geom_point(size = 2.0) + ggtitle("Raw gas gain values in 90 min bins for all chips") + themeLatex(fWidth = 0.9, width = Width, height = Height, baseTheme = singlePlot) + ggsave(&"{outpath}/raw_gas_gain.pdf", width = 600, height = 360, useTeX = UseTeX, standalone = UseTeX) let dfCastTemp = readCastTemps() let dfTemp = readSeptemTemps() plotTempVsGain(dfCastTemp, gainDf, outpath) plotCorrelation(calibFiles, fePixel, gainDf, dfCastTemp, dfTemp, outpath) plotCorrelation(calibFiles, feCharge, gainDf, dfCastTemp, dfTemp, outpath) when isMainModule: import cligen dispatch main #+end_src Running the above as: #+begin_src sh USE_TEX=true WRITE_PLOT_CSV=true code/correlation_ambient_temps_fe55_peaks \ ~/CastData/data/CalibrationRuns2017_Reco.h5 \ ~/CastData/data/CalibrationRuns2018_Reco.h5 \ -d ~/CastData/data/DataRuns2017_Reco.h5 \ -d ~/CastData/data/DataRuns2018_Reco.h5 \ --outpath ~/phd/Figs/behavior_over_time/ #+end_src which generates the following plots: - [[~/phd/Figs/behavior_over_time/raw_gas_gain.pdf]] - [[~/phd/Figs/behavior_over_time/gain_vs_temp_center_chip.pdf]] - [[~/phd/Figs/behavior_over_time/correlation_fePixel_period_2018-10-19.pdf]] - [[~/phd/Figs/behavior_over_time/correlation_fePixel_period_2018-02-15.pdf]] - [[~/phd/Figs/behavior_over_time/correlation_fePixel_period_2017-10-30.pdf]] - [[~/phd/Figs/behavior_over_time/correlation_fePixel_all_chips_gasgain_period_2018-10-19.pdf]] - [[~/phd/Figs/behavior_over_time/correlation_fePixel_all_chips_gasgain_period_2018-02-15.pdf]] - [[~/phd/Figs/behavior_over_time/correlation_fePixel_all_chips_gasgain_period_2017-10-30.pdf]] - [[~/phd/Figs/behavior_over_time/correlation_feCharge_period_2018-10-19.pdf]] - [[~/phd/Figs/behavior_over_time/correlation_feCharge_period_2018-02-15.pdf]] - 
[[~/phd/Figs/behavior_over_time/correlation_feCharge_period_2017-10-30.pdf]]
- [[~/phd/Figs/behavior_over_time/correlation_feCharge_all_chips_gasgain_period_2018-10-19.pdf]]
- [[~/phd/Figs/behavior_over_time/correlation_feCharge_all_chips_gasgain_period_2018-02-15.pdf]]
- [[~/phd/Figs/behavior_over_time/correlation_feCharge_all_chips_gasgain_period_2017-10-30.pdf]]

of which we insert only one (Run-3), the correlation of gas gains and temperature over time. That is mainly because in that period there was no worry about power supply effects anymore. It should be noted that the apparent inverse correlation is not visible in the Run-2 data of 2017. Generally the water cooling was working better at those times, which may be relevant. I don't want to introduce even more speculation into the main section, and as the scatter plot of gas gain and temperature clearly shows an inverse correlation for a large chunk of the data, the existing text is justified. Also, we chose to include the ~fePixel~ version and not the ~feCharge~ version, as the link between the gas gain of the center chip and the \cefe charge spectrum is much more direct, so the charge version offers less additional information.

***** Initial interpretation upon seeing the correlation plot

Note: this text was written right after I created the first version of the above plot. The first thing that jumps out is that the (normalized) temperature recovered from the shift forms of the Septemboard sensor is strongly correlated with the ambient CAST temperature (~T_amb~). This is interesting and reassuring, as it partially explains why the temperatures were higher on the Septemboard during Run-3 than Run-2: it was hotter in the hall in Run-3 (not shown in this plot, see the full version of all data).

*Next paragraph was written before gas gain information was in the plot*

However, the peak position of the 55Fe data is either uncorrelated *or* actually inversely proportional to the temperatures.
When the temperatures are lower, the peak position is higher and vice versa. The data is, in my opinion, not good enough to make final statements about this, but _something_ might be going on there. This is something that one might want to investigate in the future!

*UPDATE*: Having added the gas gain slice information to the plot now, it seems pretty evident that there *is* an inverse correlation between the gas gain and the temperature!
- ideal gas, temp + constant pressure, lower density, higher mobility
- less visible in old detector, as absolute temperatures under grid much lower, therefore on a "less steep" part of the exponential that makes up the gas gain temperature dependence!

PDG 2016, page 467 (detectors at accelerators chapter), says:
#+begin_quote
For different temperatures and pressures, the mobility can be scaled inversely with the density assuming an ideal gas law
#+end_quote
This *should* imply:
- A higher temperature in the CAST hall, while keeping the same pressure in the detector, means a lower gas density according to the ideal gas law: p·V = nRT ⇒ n₁T₁ = n₂T₂ ⇔ T₁/T₂ = n₂/n₁, so T₁ > T₂ ⇒ n₁ < n₂, with n ∝ ρ.
- A lower density according to the quote then implies a _higher_ mobility.
- The 'mobility' should be proportional to the mean free path.
- [ ] *CHECK THIS*
- *Assuming* the mean free path is _long enough_ at 'both' temperatures for collisions to typically have enough kinetic energy to cause an ionization, *then* a _higher mobility_ means *less* gas gain, as there will be *fewer* collisions! *However*, if the mean free path would lead to typical collisions that do *not* have enough energy to cause an ionization, then the gas gain would be *lower* for a *lower* mobility, as the gas would then act as a dampener. But the former should always be true in the amplification region, I guess.

This explanation is not meant as a definitive statement about the origins of the gas gain variations in the Septemboard detector data.
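To get a feeling for the size of this effect, here is a minimal Python sketch (illustrative numbers only, not CAST measurements) of the density scaling at constant pressure implied by the ideal gas law, ρ ∝ p/T:

```python
# Hedged sketch, illustrative numbers only (not CAST data): relative density
# change of an ideal gas at constant pressure, rho ∝ p / T.

def relative_density_change(t1_celsius, t2_celsius):
    """(rho2 - rho1) / rho1 at constant pressure, temperatures in °C."""
    t1 = t1_celsius + 273.15
    t2 = t2_celsius + 273.15
    # at constant p: rho ∝ 1/T, hence rho2 / rho1 = T1 / T2
    return t1 / t2 - 1.0

# a hall warming from 20 °C to 25 °C lowers the density by about 1.7 %
print(f"{relative_density_change(20.0, 25.0) * 100:.2f} %")
```

So a few Kelvin of ambient temperature change already moves the gas density at the per-cent level, which is the right order of magnitude to leave a visible trace in the gain.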
*However*, it clearly motivates the need for an in-depth study of the behavior of these detectors for different gas temperatures at constant pressures, and more precise logging of temperatures in future detectors. Further, a significantly improved cooling setup (to more closely approach a region where temperature changes have a smaller relative impact), or theoretically even a temperature-controlled setup with known inlet gas temperatures, might be useful. This behavior is one of the most problematic from a data analysis point of view and thus it should be taken seriously for future endeavors!

- [X] *INSERT THE PIXEL TEMP GASGAIN PLOT INTO THESIS ROTATED FULL PAGE?*
- [X] *ADD VERSION OF PLOTS THAT SHOW FULL DATA WITHOUT CUT TO RUN-3*
- [X] *ADD A SIMILAR PLOT, BUT NOT USING 55FE POSITIONS, BUT GAS GAIN SLICES* -> done by *adding* Gas gain data as well for all chips!

*** TODO Section covering our plots of logL variables over time? :noexport:

We have those plots in ~statusAndProgress~! Maybe add them here? Section: ~Time behavior of logL variables~

*** Gas gain binning
:PROPERTIES:
:CUSTOM_ID: sec:calib:gas_gain_time_binning
:END:

Motivated by the strong variation seen over timescales much shorter than the typical length of a background run, the gas gain needs to be computed in time slices of a fixed length. This is naturally a trade-off between assigning accurate gas gains to a time slice and acquiring enough statistics to compute said gas gain correctly. To determine a suitable time window, the gas gain was computed for a fixed set of different time intervals and figures similar to fig. [[fig:calib:total_charge_over_time]] were considered, not only for the median charge, but also for different geometric cluster distributions. Further, by applying the energy calibration based on each different set of time intervals to the background data (as will be explained in sec.
[[#sec:calib:final_energy_calibration]]), the histograms of the median cluster energy in the background data were studied. The ideal time interval is one in which the resulting median energy distribution has low variance and is unimodal, approaching a normal distribution (the background in all slices is equivalent over large enough times), while at the same time providing enough statistics in the \cefe spectrum of each slice to perform a good fit. Unimodality can be quantitatively checked using different goodness of fit tests (Anderson-Darling, Cramér-von Mises, Kolmogorov-Smirnov). See appendix [[#sec:appendix:choice_gas_gain_binning]] for a comparison and further plots comparing the intervals. The goodness of fit tests tend to favor shorter intervals, in particular $\SI{45}{min}$. However, looking at fig. [[fig:calib:median_energy_ridgeline_30_10_2017]] shows that the variance grows significantly below $\SI{90}{min}$.

#+CAPTION: Ridgeline plot of a kernel density estimation (bandwidth based on Silverman's rule of thumb)
#+CAPTION: of the median cluster energies split by the used time intervals. The underlying data is the background data
#+CAPTION: from Oct 2017 to Dec 2017. The overlap of the individual ridges is for
#+CAPTION: easier visual comparison and a KDE was selected over a histogram due to strong
#+CAPTION: binning dependence of the resulting histograms. For the dataset and binning the $\SI{90}{min}$
#+CAPTION: interval (olive) strikes an acceptable balance between unimodality and
#+CAPTION: variance.
#+NAME: fig:calib:median_energy_ridgeline_30_10_2017
[[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_kde_ridges_30_10_2017.pdf]]

As the different ways to look at the data are not entirely conclusive, we opted for an interval length that is not too long, while still providing enough statistics for the \cefe spectra. As such $\SI{90}{min}$ was selected as the final interval time.
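Since no run is an exact multiple of the chosen interval, the slicing also needs a rule for the remainder at the end of a run. A minimal Python sketch (hypothetical ~gas_gain_slices~ helper, not the TPA implementation; the defaults mirror the ~gasGainInterval~ of 90 min and ~minimumGasGainInterval~ of 25 min):

```python
# Hedged sketch (hypothetical helper, not the TPA implementation): defaults
# mirror the gasGainInterval = 90 and minimumGasGainInterval = 25 settings.

def gas_gain_slices(run_minutes, interval=90, minimum=25):
    """Split a run into gas gain slices, handling the remainder at the end."""
    full, rest = divmod(run_minutes, interval)
    slices = [interval] * full
    if rest == 0:
        pass                      # run divides evenly, nothing to do
    elif rest < minimum and slices:
        slices[-1] += rest        # absorb a short remainder into the last slice
    else:
        slices.append(rest)       # keep a long-enough remainder as its own slice
    return slices

print(gas_gain_slices(200))  # 2*90 + 20 -> remainder absorbed: [90, 110]
print(gas_gain_slices(210))  # 2*90 + 30 -> kept separate: [90, 90, 30]
```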
Of course no data taking run is a perfect multiple of $\SI{90}{min}$. The last slice, which is shorter than the time interval, is either absorbed into the second-to-last slice (making that one longer than $\SI{90}{min}$) if it is shorter than some fixed minimum, or kept as a single shorter interval. This minimum is controlled by an additional parameter that is set to $\SI{25}{min}$ by default. [fn:configuration_slicing]

[fn:configuration_slicing] Both the gas gain time slicing and the minimum length for the last slice in a run to be kept as a shorter slice can be configured from the TPA configuration file, via the ~gasGainInterval~ and ~minimumGasGainInterval~ fields, respectively.

[fn:optimize_gas_gain_window] The code for this optimization can be found here *GITHUB/TPA/Tools/optimizeGasGainSliceTime.nim*. Further see the ~statusAndProgress~ notes on this!

**** TODOs for this section [/] :noexport:

- [X] *REWORD ABOVE DUE TO CHANGE IN PREVIOUS SECTION* -> And in general after a full read probably rephrase why the selection was chosen.
- [X] *INSERT THE PLOT SHOWING THE DIFFERENT DISTRIBUTIONS?*
- [X] *INSERT FIG OF MEDIAN ENERGY OF BACKGROUND DATA USING 90 MIN* -> Must be in the section that actually explains the energy calibration!
- [X] *INSERT FIG OF RIDGELINE FOR MEDIAN ENERGY HISTO FOR DIFFERENT TIMES* -> to appendix
- [X] *NOTE: The ~optimizeGasGainSliceTime.nim~ tool writes the CSV files to the ~TPA/Tools/out/~ directory!* So we needn't rerun the whole thing and can just generate the plot we want to show here! -> Also part of ~resources~ in this repo now for reference.
- [X] *EXPLAIN ~minimumGasGainInterval~*: how we deal with last pieces in a run!
#+begin_src toml
# Minimum length in minutes a gas gain interval must have. This comes into play
# at the end of a run `(tRun mod gasGainInterval) = lastSliceLength`. If the
# remaining time `lastSliceLength` is less than `minimumGasGainInterval`, the
# slice will be absorbed into the second to last, making that longer.
minimumGasGainInterval = 25
#+end_src

**** Ridgeline plot of the median energies :extended:

Finally, let's recreate the plot of the histograms of the median energies in the time slices as a ridgeline plot to better explain why we chose 90 min instead of anything else that we tested.

First, if one is not happy with using the provided CSV files that contain the precomputed medians of the cluster energy in the ~phd/resources/optimize_gas_gain_length~ directory, run the ~optimizeGasGainSliceTime~ tool:
#+begin_src sh
cd $TPA/Tools
nim c -d:release -d:StartHue=285 optimizeGasGainSliceTime.nim
./optimizeGasGainSliceTime --path <PathToDataRunsH5> --genCsv
#+end_src
Note that this takes some time, as the fitting of all gas gains in each time slice is somewhat time-consuming (for the 90 min case it takes maybe 15 minutes for all data; more for shorter time scales, less for longer). (It may be necessary to modify the code, as I may have forgotten to change the input data file paths; they are hardcoded as of right now.)

- [ ] *CHANGE CODE TO NOT USE HARDCODED PATHS, THEN ADJUST SCRIPT ABOVE* -> Change code to not point to hardcoded config file!

This generates CSV files in an ~out~ directory relative to wherever you ran the code. To produce the plots as used in the thesis (and many more), just run:
#+begin_src sh :dir ~/CastData/ExternCode/TimepixAnalysis/Tools/optimizeGasGainSliceTime/
USE_TEX=true WIDTH=600 HEIGHT=380 FACET_MARGIN=1.1 LINE_WIDTH=1.0 WRITE_PLOT_CSV=true \
    ./optimizeGasGainSliceTime \
    --path out \
    --plot \
    --outpath ~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/
#+end_src
Among others this generates a plot of the scores for different goodness of fit tests for each interval setting and period of data taking. They all use the mean and variance of the full data (that's clearly not ideal, but it should be passable to have something comparable).
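To make the goodness-of-fit scoring concrete, here is a stdlib-only Python sketch of the Kolmogorov-Smirnov variant, using fixed sample moments as described above. The toy data is made up and only illustrates that a bimodal sample scores a much larger distance than a unimodal one (the actual tool also computes Anderson-Darling and Cramér-von Mises scores):

```python
import math

# Hedged, stdlib-only sketch with toy data (not the actual median energies):
# a KS distance against a normal with fixed sample mean and (biased) std dev.

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sample, mu, sigma):
    """Kolmogorov-Smirnov distance between empirical CDF and N(mu, sigma)."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = normal_cdf(x, mu, sigma)
        # compare against the empirical CDF just before and just after x
        d = max(d, abs(cdf - i / n), abs(cdf - (i + 1) / n))
    return d

# toy samples: roughly uniform (unimodal-ish) vs clearly bimodal
uni = [2.9 + 0.0002 * ((i * 19) % 100 - 50) for i in range(100)]
bi = [2.5] * 50 + [3.3] * 50
for name, s in [("unimodal", uni), ("bimodal", bi)]:
    mu = sum(s) / len(s)
    sigma = (sum((x - mu) ** 2 for x in s) / len(s)) ** 0.5
    print(name, round(ks_statistic(s, mu, sigma), 3))
```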
The resulting plot is [[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/gofs_for_different_binnings.pdf]]

At least for the 2017 data set the 45 minute interval seems to be the clear winner. However, in the beginning of 2018 it is one of the worst, at least for Anderson-Darling and Cramér-von Mises (which are probably the most interesting tests to look at). The 90 min result sits mostly in the middle. Its big advantage though is that it definitely captures enough statistics, which is extremely important for the \cefe spectrum, as the data rate there is very low. As much statistics as possible is needed to get a nice fit. At the same time, comparing the ridgeline plots / histograms, it is also evident that the variance itself is quite a bit smaller in the 90 min case, which is another important aspect.

In addition all these plots for the distribution of properties / the energy are created:
- [[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_intervals.pdf]]
- [[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_vs_time_30.pdf]]
- [[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_vs_time_45.pdf]]
- [[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_vs_time_60.pdf]]
- [[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_vs_time_90.pdf]]
- [[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_vs_time_120.pdf]]
- [[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_vs_time_180.pdf]]
- [[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_vs_time_300.pdf]]
- [[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_kde_intervals.pdf]]
- [[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_ridges_17_02_2018.pdf]]
- [[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_ridges_21_10_2018.pdf]]
- 
[[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_ridges_30_10_2017.pdf]]
- [[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_intervals_17_02_2018.pdf]]
- [[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_intervals_21_10_2018.pdf]]
- [[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_intervals_30_10_2017.pdf]]
- [[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_kde_ridges_17_02_2018.pdf]]
- [[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_kde_ridges_21_10_2018.pdf]]
- [[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_kde_ridges_30_10_2017.pdf]]

** Energy calibration dependence on the gas gain
:PROPERTIES:
:CUSTOM_ID: sec:calib:final_energy_calibration
:END:

With the final choice of time interval for the gas gain binning in place, the actual calibration used for further analysis can be presented. Fig. sref:fig:calib:gasgain_vs_energy_calib_comparison shows the fits according to the linear relation as explained in sec. [[#sec:calibration:energy]], eq. [[eq:gas_gain_vs_calib_factor]], for the two data taking campaigns, Run-2 in sref:fig:calib:gasgain_vs_energy_calib_2017 and Run-3 in sref:fig:calib:gasgain_vs_energy_calib_2018. Each point represents one $\SI{90}{min}$ slice of calibration data for which a \cefe spectrum was fitted and then the linear energy calibration performed. The resulting energy calibration factor is then plotted against the gas gain computed for this time slice. The uncertainty of each point is the one extracted from the fit parameter of the calibration factor after error propagation. Calibrations need to be performed separately for each data taking campaign, as the detector behavior changes due to the different detector calibrations in use. These have an impact on the ~ToT~ calibration as well as the activation threshold.
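Conceptually, each of these fits is a straight line mapping the gas gain of a slice to an energy calibration factor, which is then applied as energy = charge · a(G). A minimal Python sketch with made-up numbers (hypothetical helper names; the actual fit in TPA includes per-point uncertainties, which this plain least squares omits):

```python
# Hedged sketch (made-up numbers, hypothetical helper names): fit a line
# a(G) = m*G + b to (gas gain, calibration factor) pairs from the 90 min
# slices, then convert charge to energy via the gain at the event time.

def fit_line(xs, ys):
    """Ordinary least squares for y = m*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

# synthetic (gas gain, calibration factor) pairs, one per 90 min slice
gains   = [2800.0, 3000.0, 3200.0, 3400.0]
factors = [1.05e-6, 1.00e-6, 0.95e-6, 0.90e-6]  # invented "keV per electron"
m, b = fit_line(gains, factors)

def energy_keV(charge_electrons, gain):
    # calibration factor appropriate for the gas gain at the event time
    return charge_electrons * (m * gain + b)

print(energy_keV(6.0e6, 3100.0))  # hypothetical cluster charge at gain 3100
```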
Note that the reduced $χ²$ values shown in the figure imply that this calibration does not perfectly calibrate out the systematic effects of the variability.

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "Run-2")
  (label "fig:calib:gasgain_vs_energy_calib_2017")
  (includegraphics (list (cons 'width (linewidth 1.0)))
   "~/phd/Figs/energyCalibration/Run_2/gasgain_vs_energy_calibration_factors_4316675118229057340.pdf"))
 (subfigure (linewidth 0.5)
  (caption "Run-3")
  (label "fig:calib:gasgain_vs_energy_calib_2018")
  (includegraphics (list (cons 'width (linewidth 1.0)))
   "~/phd/Figs/energyCalibration/Run_3/gasgain_vs_energy_calibration_factors_-4542617296427170283.pdf"))
 (caption "Fit of the gas gain values vs the calculated energy calibration factors for all calibration runs in Run-2 "
  (subref "fig:calib:gasgain_vs_energy_calib_2017")
  " and Run-3 "
  (subref "fig:calib:gasgain_vs_energy_calib_2018")
  ". Each run was further sliced into "
  ($ (SI 90 "min"))
  " parts for the gas gain determination and \\cefe fits.")
 (label "fig:calib:gasgain_vs_energy_calib_comparison"))
#+end_src

To compare the energy calibration using single gas gain values for each full run against the method of time slicing them into $\SI{90}{min}$ chunks, we will look at the median cluster energy in each time slice for background and calibration data. This is the same idea as behind fig. [[fig:calib:total_charge_over_time]] previously, just for the energy instead of the charge. This yields fig. [[fig:calib:median_energy_binned_vs_unbinned]]. The points represent background data and the crosses calibration data. Green is the unbinned (full run) approach and purple the binned approach using $\SI{90}{min}$ slices. The effect is a slight but visible reduction in variance. It represents an important aspect of increasing data reliability and lowering associated systematic uncertainties. Note that the variability looks much smaller than in fig.
[[fig:calib:total_charge_over_time]] due to not being normalized. However, here we wish to emphasize that the absolute energy calibration yields a flat result and matches our expectation. As such the final energy calibration works by first deducing the gas gain at the time of an event, computing the calibration factor required for this gas gain and finally using that factor to convert the charge into energy.

#+CAPTION: Median of the cluster energy after calibration using two different approaches.
#+CAPTION: Green corresponds to calculating the energy based on a single gas gain for each
#+CAPTION: run and purple implies calculation based on $\SI{90}{min}$ time intervals for the
#+CAPTION: gas gain. Both cases use the same $\SI{90}{min}$ intervals to compute a local, temporal
#+CAPTION: median of all clusters. Each subplot corresponds to a data taking period, with
#+CAPTION: significant time between them, for clarity. The energies from unbinned gas gains have a
#+CAPTION: wider distribution than the binned data. The latter approaches a flat distribution
#+CAPTION: of the background energies (points) better than the former. The impact for the
#+CAPTION: calibration data (crosses) is much smaller, as those runs are not much longer than the
#+CAPTION: $\SI{90}{min}$ binning anyway.
#+NAME: fig:calib:median_energy_binned_vs_unbinned
#+ATTR_LATEX: :width 1\textwidth
[[~/phd/Figs/behavior_over_time/median_energy_binned_vs_unbinned.pdf]]

*** TODOs for this section [/] :noexport:

- [X] *REWRITE ME WITH NEW STRUCTURE IN MIND!*
- [ ] *ADD TABLE OF ALL PARAMETERS & GAS GAIN SLICE DATA WITH ERRORS TO APPENDIX!*
- [ ] *HAVE TABLE OF FIT PARAMETERS SOMEWHERE FOR RUN-2 AND RUN-3!* -> Appendix.
- [X] *INSERT PLOT OF GAS GAIN VS CALIB?* - [X] *REPHRASE WORD 'UNDERSTANDING' IN SECTION ABOVE* - [X] *MOVE EXPLANATION OF THIS FIT TO AFTER DISCUSSION OF GAS GAIN VARIANCE?* - [ ] *HERE (and related sections) IMPORTANT TO CHECK THAT OUR EXPLANATION OF HOW GAS GAIN CALC'D IS GOOD* - [X] *Instead of single plot as now, have both plots in a subfig for each of the two run periods!* - [X] *THINK ABOUT INSTEAD OF EXPLAINING HOW THE CALIBRATION IS DONE* in as much detail as done above, instead just start by saying how the calibration is gas gain dependent and then say that is varying. therefore first study gas gain behavior & discover slicing - [X] *MENTION IF GAS GAIN VS ENERGY CALIB FITS USE INDIVIDUAL SLICES OR MEAN VALUE!* -> Note that this is somewhat of an internal implementation detail to some extent, but practically the ~gcMean~ approach is not mentioned at all in the thesis. We can't and don't need to explain every possibility of the software after all. *** Generate plot for the gas gain vs. energy calibration factors :extended: In principle the plots shown in the section above are produced during the regular data reconstruction and calibration, in particular by the ~reconstruction~ program using the ~--only_gain_fit~ argument. Let's recreate them in the style we want for the thesis. We use a slightly taller height to have a bit more space (for y label and annotation). 
#+begin_src sh
USE_TEX=true WIDTH=600 HEIGHT=420 reconstruction \
    -i ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --only_gain_fit \
    --plotOutPath ~/phd/Figs/energyCalibration/Run_2/ \
    --overwrite
USE_TEX=true WIDTH=600 HEIGHT=420 reconstruction \
    -i ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --only_gain_fit \
    --plotOutPath ~/phd/Figs/energyCalibration/Run_3/ \
    --overwrite
#+end_src
which produces the plots in:
- [[~/phd/Figs/energyCalibration/Run_2]]
- [[~/phd/Figs/energyCalibration/Run_3]]

*** Generate plot of median cluster energy :extended:

Let's now generate the plots of the median cluster energy. *Note*: As the gas gain calculation is time-consuming: If the gas gain values were computed at some point in the past and the ~gasGainSlices*~ datasets are still present, the calculation of the gas gain bins isn't needed. One can simply change the current ~gasGainInterval~ in the ~config.toml~ and recompute the gas gain vs energy calibration fit (~--only_gain_fit~) and the energy again to re-generate the plots.

First, here are two sections which cover how to compute the relevant gas gain slices and calibrate the energy accordingly. They also show how to use the tool to plot the median energy over the time bins. When running them, they generate CSV files that we will use further down to generate a plot that combines the data from the full run and 90 min binned gas gain data into one plot.

**** Full run gas gain

We cover first the case of computing the energy from the full-run gas gain, with the median over 90 min of data calculated for the plot, and second the 90 minute binned gas gain version, with the same bins used for the plot. To start, we first need to recompute the gas gain based on the full runs.
Set the ~fullRunGasGain~ value in the ~config.toml~ file to ~true~ and ~gasGainInterval~ to ~0~ (the latter because ~fullRunGasGain~ appends a ~0~ suffix to the generated dataset & the ~--only_gain_fit~ step reads from the dataset with the suffix of ~gasGainInterval~), then run:
#+begin_src sh
cd $TPA/Analysis/ingrid
./runAnalysisChain -i ~/CastData/data --outpath ~/CastData/data \
    --years 2017 --years 2018 \
    --back --calib \
    --only_gas_gain
#+end_src
Then check that the ~gasGainInterval~ datasets in the H5 files now actually contain a single slice in the dataset without a numbered suffix (indicating the minutes used for the slice). With that done, again run the ~only_gain_fit~ argument on the calibration files:
#+begin_src sh
cd $TPA/Analysis/ingrid
./runAnalysisChain -i ~/CastData/data --outpath ~/CastData/data \
    --years 2017 --years 2018 \
    --calib \
    --only_gain_fit
#+end_src
Side note: to check if the fit was done correctly on the full run slices, check the output directory, e.g. something like ~out/CalibrationRuns2018_Raw_2020-04-28_15-06-54~ in case of the Run-3 data, and compare the ~gasgain_vs_energy_calibration_factors_*~ files present there. The latest (full run) version should have fewer data points than the 90 min version that should already be present from the initial reconstruction & calibration of the data. Then recompute the energy for all data:
#+begin_src sh
cd $TPA/Analysis/ingrid
./runAnalysisChain -i ~/CastData/data --outpath ~/CastData/data \
    --years 2017 --years 2018 \
    --back --calib \
    --only_energy_from_e
#+end_src
*Note*: The ~plotTotalChargeOverTime~ tool should be compiled using
#+begin_src sh
nim c -d:danger -d:StartHue=285 plotTotalChargeOverTime
#+end_src
as was done in the previous section where we generated the median charge plot.
And again, generate the plot using our tool:
#+begin_src sh
./plotTotalChargeOverTime ~/CastData/data/DataRuns2017_Reco.h5 \
    ~/CastData/data/DataRuns2018_Reco.h5 \
    --interval 90 \
    --cutoffCharge 0 --cutoffHits 500 \
    --calibFiles ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --calibFiles ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --applyRegionCut --timeSeries
#+end_src
which yields the output file ~out/background_median_energyFromCharge_90.0_min_filtered_crSilver.pdf~, stored as: [[~/phd/Figs/behavior_over_time/median_energy_full_run_gasgain_binned_90min_crSilver.pdf]]
Finally, it generates the following CSV file from the used data frame: ~out/data_90.0_min_filtered_crSilver.csv~, which we store in: [[~/phd/resources/behavior_over_time/data_full_run_gasgain_90.0_min_filtered_crSilver.csv]]

**** 90 min binning gas gains

Now to generate the 90 minute version, set the ~fullRunGasGain~ back to ~false~ and make sure the gas gain time slice interval is set to 90 min in the ~config.toml~. Then again run:
#+begin_src sh
cd $TPA/Analysis/ingrid
./runAnalysisChain -i ~/CastData/data --outpath ~/CastData/data \
    --years 2017 --years 2018 \
    --back --calib \
    --only_gas_gain
#+end_src
and again check that the gas gain slice dataset without a suffix has _multiple_ slices, each about 90 min in length.
With that done, again run the analysis chain with the ~--only_gain_fit~ argument on the calibration files:
#+begin_src sh
cd $TPA/Analysis/ingrid
./runAnalysisChain -i ~/CastData/data --outpath ~/CastData/data \
    --years 2017 --years 2018 \
    --calib \
    --only_gain_fit
#+end_src
and then recompute the energy for all data:
#+begin_src sh
cd $TPA/Analysis/ingrid
./runAnalysisChain -i ~/CastData/data --outpath ~/CastData/data \
    --years 2017 --years 2018 \
    --back --calib \
    --only_energy_from_e
#+end_src
And again, generate the plot using our tool:
#+begin_src sh
./plotTotalChargeOverTime ~/CastData/data/DataRuns2017_Reco.h5 \
    ~/CastData/data/DataRuns2018_Reco.h5 \
    --interval 90 \
    --cutoffCharge 0 --cutoffHits 500 \
    --calibFiles ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --calibFiles ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --applyRegionCut --timeSeries
#+end_src
which, among others, generates the output file ~out/background_median_energyFromCharge_90.0_min_filtered_crSilver.pdf~, which for our purposes we store in: [[~/phd/Figs/behavior_over_time/median_energy_binned_90min_crSilver.pdf]] And again, it generates the CSV file ~out/data_90.0_min_filtered_crSilver.csv~, which we store in: [[~/phd/resources/behavior_over_time/data_gas_gains_binned_90.0_min_filtered_crSilver.csv]]

**** Comparison of the files

If you ran the code in the order above, your ~config.toml~ file should again be in the 90 min & binned gas gain setting. Otherwise change it back and rerun the 90 min binning, fitting and energy calculation commands from above, to make sure we don't mess up the plots that are generated further below!
- ~~/phd/Figs/behavior_over_time/median_energy_full_run_gasgain_binned_90min_crSilver.pdf~
- ~~/phd/Figs/behavior_over_time/median_energy_binned_90min_crSilver.pdf~
Comparing the two median energy files, both binned by the same intervals (the ones used for the 90 min gas gain calculations), it is evident that they _more or less_ agree.
However, in some areas significant spikes can be seen in the version using the full run gas gain values, which is precisely as expected: these are runs in which the temperature varied significantly within a single run, changing the gas gain as a result. During such runs the run-averaged gas gain does not match the gas gain locally within the run. Now we use the CSV files from ~phd/resources/behavior_over_time~ to generate the same plot as the individual ones here, but showing the binned and unbinned data with different shapes / colors.
#+begin_src nim :tangle code/median_energy_binned_vs_unbinned.nim
import std / [times, strformat]
import ggplotnim
let df1 = readCsv("/home/basti/phd/resources/behavior_over_time/data_full_run_gasgain_90.0_min_filtered_crSilver.csv")
let df2 = readCsv("/home/basti/phd/resources/behavior_over_time/data_gas_gains_binned_90.0_min_filtered_crSilver.csv")
let df = bind_rows([("Unbinned", df1), ("Binned", df2)], "Data")
proc th(): Theme =
  result = singlePlot()
  result.tickLabelFont = some(font(7.0))
let name = "energyFromChargeMedian"
ggplot(df, aes("timestamp", name, shape = "runType", color = "Data")) +
  facet_wrap("runPeriods", scales = "free") +
  facetMargin(0.75, ukCentimeter) +
  scale_x_date(name = "Date", isTimestamp = true,
               dateSpacing = initDuration(weeks = 2),
               formatString = "dd/MM/YYYY", dateAlgo = dtaAddDuration) +
  geom_point(alpha = some(0.8), size = 2.0) +
  ylim(2.0, 6.5) +
  margin(top = 1.5, left = 4.0, bottom = 1.0, right = 2.0) +
  legendPosition(0.5, 0.175) +
  xlab("Date", margin = 0.0) +
  ylab("Energy [keV]", margin = 3.0) +
  themeLatex(fWidth = 1.0, width = 1200, height = 800, baseTheme = th) +
  ggtitle(&"Median of cluster energy, binned vs. unbinned.
90 min intervals.") +
  ggsave(&"Figs/behavior_over_time/median_energy_binned_vs_unbinned.pdf",
         width = 1200, height = 800,
         useTeX = true, standalone = true)
#+end_src

#+RESULTS:
: INFO: The integer column `timestamp` has been automatically determined to be continuous. To overwrite this behavior add a `+ scale_x/y_discrete()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to. Or apply a `factor` to the column name in the `aes` call, i.e. `aes(..., factor("timestamp"), ...)`.
: [INFO] TeXDaemon ready for input.
: shellCmd: command -v lualatex
: shellCmd: lualatex -output-directory Figs/behavior_over_time Figs/behavior_over_time/median_energy_binned_vs_unbinned.tex
: Generated: Figs/behavior_over_time/median_energy_binned_vs_unbinned.pdf

which results in [[~/phd/Figs/behavior_over_time/median_energy_binned_vs_unbinned.pdf]]

** FADC
:PROPERTIES:
:CUSTOM_ID: sec:calib:fadc
:END:

As touched on multiple times previously, in particular in sec. [[#sec:cast:data_taking_woes]], the FADC suffered from noise. This required multiple changes to the amplifier settings to mitigate it. We will now go over what the FADC noise looks like and explain the noise filter used to decide whether an FADC event is noisy (so it can be ignored), in sec. [[#sec:calibration:fadc_noise]].
Then we check the influence of the different amplification settings on the FADC data and discuss the impact of the FADC data quality on the rest of the detector data in sec. [[#sec:calib:fadc:amplifier_settings]].

*** TODOs for this section [/] :noexport:

Initial text for this section:
#+begin_quote
As described over the course of previous chapters, the FADC serves essentially a 4-way purpose. First of all, it is used as a trigger to start the readout process of the GridPixes, if the center chip records a large enough signal. This brings up the question of the activation threshold of the FADC, sec. [[#sec:fadc:activation_threshold]]. Secondly, thanks to the high temporal resolution the recorded signal can be used to gain information about the longitudinal shape of recorded events. This theoretically allows it to act as an additional background suppression mechanism, sec. [[#sec:fadc:fadc_as_veto]]. Finally, its trigger clock acts as the reference time and readout flag for the scintillators, see sec. [[#sec:fadc:scintillator_trigger_reference]] (*I THINK THIS MAY NOT BE NEEDED, AS THERE IS ENOUGH STUFF ABOUT THIS BEFORE*). Furthermore, in principle the FADC provides an independent measurement about the total charge collected on the central GridPix. Thus, it serves as a reference as to whether changes in the computed gas gain on the GridPix as well as possible changes in the recorded number of pixels are due to physical changes in the gas gain or due to changes in the readout electronics. Such a secondary insight is valuable in the context of studying the GridPix behavior over time, sec. [[#sec:calib:detector_behavior_over_time]].
#+end_quote
- trigger
- longitudinal shape information -> acting as a veto
- reference clock for scintillators
- [ ] -> independent information of total collected charge -> allows to check whether seen changes in gas gain on gridPix are really changes in gas gain or changes in the readout electronics of a timepix!!
This is actually an important insight as it allows us to see what happened in the cases where pixel information was lost / peak position changed drastically. Did gas gain change based on FADC information? -> Used in time dependence of gas gain. problems:
- noise
- different thresholds due to gain changes
- different shapes due to 50ns 100ns signal integration, 50ns 20ns differentiation
- [ ] *REGARDING 50ns vs 100ns ~fadc_analysis.nim~ CONTAINS CODE TO CHECK NOISE COMPARED TO 50 vs 100!!*

*** FADC noise example and detection
:PROPERTIES:
:CUSTOM_ID: sec:calibration:fadc_noise
:END:

An example of the most common type of noise event seen in the FADC data is shown in fig. [[fig:calib:fadc_noise_example]]. As the FADC registers effectively correspond to $\SI{1}{ns}$ time resolution, the periodicity of these noise events is about $\SI{150}{ns}$, corresponding to a frequency of roughly $\SI{6.6}{MHz}$. A less common type are noise events with a frequency of about $\SI{1.5}{MHz}$. [fn:frequencies] A final type are events in which the FADC input sits entirely at a low voltage (in the tens of $\si{mV}$ range), but contains no real 'activity'. In these cases the values are nonetheless below the trigger threshold, which is what triggers the FADC.

#+CAPTION: Example of the most common type of noise event. The noise has a
#+CAPTION: periodicity of about $\sim\SI{150}{ns}$, or about $\SI{6.6}{MHz}$.
#+NAME: fig:calib:fadc_noise_example
[[~/phd/Figs/FADC/fadcNoise/fadc_event_run109_event19157_region_crAll_fadc_noisy_0.5_1.5_applyAll_true.pdf]]

For data analysis purposes, in particular when the FADC data is used in conjunction with the GridPix data, it is important not to accidentally use an FADC event that contains noise. While noise events are generally unlikely to coincide with physical ionization events on the center GridPix, it is better to err on the conservative side. The noise analysis is kept very simple [fn:implementation].
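In outline, anticipating the description that follows, the filter counts signal minima that undercut the baseline by more than one standard deviation. The following is a pure Python sketch; the slice width and peak count are those from the text, while taking the baseline as the median and skipping the re-centering of slices around each minimum are simplifications of the actual Nim implementation in ~fadc_analysis.nim~:
#+begin_src python
import numpy as np

def is_noisy(signal, slice_width = 150, min_peaks = 4):
    """Flag an FADC signal as noisy if at least `min_peaks` slices of
    `slice_width` registers contain a minimum below baseline - sigma."""
    baseline = np.median(signal)  # assumption: baseline taken as the median
    sigma = np.std(signal)        # std of the full signal, peaks included
    peaks = 0
    for start in range(0, len(signal), slice_width):
        window = signal[start:start + slice_width]
        if window.size > 0 and window.min() < baseline - sigma:
            peaks += 1
    return peaks >= min_peaks

# a 150 ns period oscillation (~6.6 MHz at 1 ns per register) is flagged:
t = np.arange(2560)
print(is_noisy(0.1 * np.sin(2 * np.pi * t / 150.0)))  # -> True
print(is_noisy(np.zeros(2560)))                       # -> False
#+end_src
Applied to real data this would run per FADC event; here a synthetic oscillation of the common noise frequency stands in for an actual noise event.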
The FADC spectrum, consisting of $\num{2560}$ registers, is sliced into intervals $\num{150}$ registers wide. In each interval we look for the minimum of the signal, $m_s$. The slice is adjusted around the found minimum to check whether the minimum is contained fully in the slice (if not, it is counted as part of the next slice). If the minimum satisfies $m_s < B - σ$, where $B$ is the signal baseline and $σ$ the standard deviation of the full FADC signal (including the peaks!), it is counted as a peak. A signal is then classified as noisy if at least $\num{4}$ such peaks are found in its $\SI{150}{ns}$ slices.

[fn:frequencies] The frequencies are at the low end of common radio communication frequencies. The leading assumption has always been that the source is likely noise produced by e.g. the motors moving the CAST magnet and similar machinery.

[fn:implementation] I started with a simple implementation, intending to replace it later. But it worked well enough that I simply kept it so far.

**** TODOs for this section [/] :noexport:

- [X] *EXAMPLE OF NOISE*
- [ ] Add examples of other types of noise too?

**** Notes on FADC noise analysis :extended:

The ~fadc_analysis.nim~ program in ~TimepixAnalysis~ contains the code to detect noisy events and is used as a library for that purpose. It can however also be used to perform a standalone FADC noise analysis if compiled on its own.

To determine if an event is noisy:
- check for dips in the signal of width 150 ns
- if more than 4 dips in one event -> noisy
~fadc_helpers.nim~ -> ~isFadcFileNoisy~ using ~findPeaks~ from ~NimUtil/helpers/utils.nim~.

Note: the implementation is rather simple. Instead of slicing the FADC data into chunks of the desired width, it would be smarter to work with a running version of the data and check whether the running mean crosses some lower threshold. The difficulty in that is detecting separate peaks.
One would need to track the 'last return to baseline' and only count more dips if the next minimum had a baseline return (or maybe 50% of baseline return) in it. **** Find a good noisy FADC event :extended: Types of noise: - O(4) periods in 2560 ns - O(16) periods in 2560 ns - signal at negative voltage (with no periodicity) over entire range -> appendix one example of each? Then again, we also don't show examples of all sorts of fun GridPix events. Then again again, those don't cause data loss because they are extremely infrequent :) Run 109 (based on our notes taken during the CAST data taking) was a run with serious amounts of noise. We'll find a good FADC event using ~plotData~ by filtering to ~noisy == 1~ and producing FADC event displays. First let's generate some events: #+begin_src sh F_LEFT=0.8 \ plotData \ --h5file ~/CastData/data/DataRuns2017_Reco.h5 \ --runType=rtBackground \ --cuts '("fadc/noisy", 0.5, 1.5)' \ --applyAllCuts \ --fadc \ --eventDisplay \ --runs 109 \ --head 50 #+end_src Good examples for the three main types of noise we had are: High frequency event: event #: 19157 Full on negative value event: event #: 4147 Low frequency event: event #: 9497 For each of these let's generate a prettier version: #+begin_src sh :dir ~/phd/Figs/FADC/fadcNoise F_LEFT=-0.8 L_MARGIN=2.5 B_MARGIN=2.0 T_MARGIN=1.0 USE_TEX=true WIDTH=600 HEIGHT=420 \ plotData \ --h5file ~/CastData/data/DataRuns2017_Reco.h5 \ --runType=rtBackground \ --cuts '("fadc/noisy", 0.5, 1.5)' \ --applyAllCuts \ --fadc \ --eventDisplay \ --runs 109 \ --events 19157 --events 4147 --events 9497 \ --plotPath ~/phd/Figs/FADC/fadcNoise/ #+end_src - [[~/phd/Figs/FADC/fadcNoise/fadc_event_run109_event19157_region_crAll_fadc_noisy_0.5_1.5_applyAll_true.pdf]] - [[~/phd/Figs/FADC/fadcNoise/fadc_event_run109_event4147_region_crAll_fadc_noisy_0.5_1.5_applyAll_true.pdf]] - [[~/phd/Figs/FADC/fadcNoise/fadc_event_run109_event9497_region_crAll_fadc_noisy_0.5_1.5_applyAll_true.pdf]] *** Amplifier settings 
impact and activation threshold
:PROPERTIES:
:CUSTOM_ID: sec:calib:fadc:amplifier_settings
:END:

Now let's look at the impact of the different amplifier settings on the FADC data properties. This includes differences in the rise and fall times, but because changing the integration and differentiation times on the amplifier has a direct impact on the absolute amplification of the signal, we also need to consider the resulting change in the activation threshold. As a short reminder, the FADC settings were changed twice during the Run-2 period in 2017. Starting from 2018, the settings were left unchanged from the end of 2017. An overview of the setting changes is shown in tab. [[tab:calib:fadc_amplifier_settings]]. Note that the Ortec amplifier has a coarse and a fine gain; only the coarse gain was changed. [fn:fine_gain] The gain changes were performed to counteract the amplification changes resulting from the integration and differentiation setting changes (documented as a side effect in the FADC manual [[cite:&fadc_manual]]).

#+CAPTION: Overview of the different FADC amplifier settings and the associated run
#+CAPTION: numbers.
#+NAME: tab:calib:fadc_amplifier_settings
#+ATTR_LATEX: :booktabs t
| Runs      | Integration [$\si{ns}$] | Differentiation [$\si{ns}$] | Coarse gain |
|-----------+-------------------------+-----------------------------+-------------|
|  76 - 100 |                      50 |                          50 | ~6x~        |
| 101 - 120 |                      50 |                          20 | ~10x~       |
| 121 - 306 |                     100 |                          20 | ~10x~       |

The \cefe calibration spectra come in handy for the FADC data as well, as they provide a known reference to compare against for this type of data. To get an idea of the rise and fall times of the FADC for the different settings, we can compute a truncated mean of all rise and fall times in each calibration run. [fn:trunc_mean] This is done in fig. [[fig:fadc:mean_rise_times_55fe_fadc_run2]], which shows the mean rise time for each run in 2017 with the fall time color coded in each point.
The shaded regions indicate the FADC amplifier settings. Changing the differentiation time from $\SI{50}{ns}$ down to $\SI{20}{ns}$ decreased the rise time by about $\SI{10}{ns}$. The change in the fall time is much more pronounced. The change of the integration time from $\SI{50}{ns}$ to $\SI{100}{ns}$ then brings the rise time back up by about $\SI{5}{ns}$, while drastically extending the fall time from the mid $\SI{200}{ns}$ range to over $\SI{400}{ns}$. Clearly, the fall time is affected much more strongly by the amplifier settings.

#+CAPTION: The mean rise time of the FADC signals recorded in the \cefe calibration
#+CAPTION: runs during Run-2. The different FADC amplifier settings are visible
#+CAPTION: as expected.
#+CAPTION: '∫': integration time, '∂': differentiation time, 'G': coarse gain.
#+NAME: fig:fadc:mean_rise_times_55fe_fadc_run2
[[~/phd/Figs/FADC/fadc_mean_riseTime_run2.pdf]]

A direct scatter plot of the rise times against the fall times is shown in fig. [[fig:fadc:riseTime_vs_fallTime_55fe_fadc_run2]], where the drastic changes to the fall time are even more pronounced. Each point once again represents one \cefe calibration run. The different settings manifest as separate 'islands' in this space.

#+CAPTION: The mean rise time of the FADC signals recorded during the \cefe data
#+CAPTION: against the fall time during Run-2 at CAST. One point for each calibration
#+CAPTION: run.
#+CAPTION: The different settings create three distinct blobs.
#+CAPTION: '∫': integration time, '∂': differentiation time, 'G': coarse gain.
#+NAME: fig:fadc:riseTime_vs_fallTime_55fe_fadc_run2
[[~/phd/Figs/FADC/fadc_mean_riseTime_vs_fallTime_run2.pdf]]

The FADC pulses contain a measure of the total charge induced on the grid and therefore an indirect measure of the charge seen on the center GridPix. The \cefe calibration runs could be used to fully calibrate the FADC signals in terms of charge if desired.
Ideally, one would fully (numerically) integrate the FADC signal of each event to compute an effective charge. As we only use the FADC signals in the context of this thesis for their sensitivity to longitudinal shape information, this is not implemented. For \cefe calibration data the amplitude of the FADC pulse is a direct proxy for the charge anyway, because the signal shape is (generally) the same for X-rays. [fn:shape]

For the determination of whether the gas gain variations discussed in sec. [[#sec:calib:detector_behavior_over_time]] have a physical origin in a changing gas gain or are caused by electronic effects, we already included the FADC data in fig. [[fig:calib:fe55_peak_pos_charge_pixel_fadc]] of sec. [[#sec:calib:causes_variability]]. Computing the histogram of all amplitudes of the FADC signals in a \cefe calibration run yields a \cefe spectrum just as for the center GridPix. The fitted position of the photopeak in these spectra is then a direct counterpart to those computed for the GridPix. Being independent and sensitive only to the induced charge, it acts as a good validator. In the context of the FADC amplifier settings it is interesting to see how the photopeak position, computed like this, changes between runs. This is shown in fig. [[fig:fadc:peak_position_55fe_run2_settings]]. We can see that the initial change in differentiation time resulted in a larger gain decrease than the attempted compensation from ~6x~ to ~10x~ on the coarse gain could make up for. The final increase of the integration time then caused another drop in signal amplitudes, implying an even lower absolute gain. In addition, the gas gain variation within a single setting is very visible (as discussed in sec. [[#sec:calib:causes_variability]]).

#+CAPTION: The peak position in \si{V} of the photopeak in the \cefe calibration
#+CAPTION: runs during Run-2 as seen on the FADC. The different FADC amplifier settings are clearly visible.
#+CAPTION: '∫': integration time, '∂': differentiation time, 'G': coarse gain.
#+NAME: fig:fadc:peak_position_55fe_run2_settings
[[~/phd/Figs/FADC/peak_positions_fadc_run2.pdf]]

Finally, we can look at the activation threshold of the FADC. The easiest way to do this is the following: we read the energies of all events on the center GridPix and map them to their FADC events. Although not common in calibration data, some events may not trigger the FADC. By then computing -- for example -- the first percentile of the energy data (the absolute lowest value may be some outlier), we automatically get the lowest equivalent GridPix energy that triggers the FADC. Doing this leads to yet another plot similar to the previous ones, fig. [[fig:fadc:activation_threshold_gridpix_55fe_run2_settings]]. With the first FADC settings the activation threshold was at a low $\sim\SI{1.1}{keV}$. Unfortunately, both amplifier setting changes moved the threshold further up, to about $\SI{2.2}{keV}$ with the final settings. In hindsight it likely would have been a better idea to try to run with a lower activation threshold, so that the FADC trigger would be available for more events at low energies. However, at the time of the data taking campaign not all information was available for an educated assessment, nor was there enough time to test and implement other ideas, especially as other settings might well have run into noise problems again.

#+CAPTION: The activation threshold of the FADC for each calibration run in 2017.
#+CAPTION: Computed by the first percentile of the corresponding energies recorded
#+CAPTION: by the GridPix.
#+CAPTION: '∫': integration time, '∂': differentiation time, 'G': coarse gain.
#+NAME: fig:fadc:activation_threshold_gridpix_55fe_run2_settings
[[~/phd/Figs/FADC/activation_threshold_gridpix_energy_fadc_run2.pdf]]

[fn:trunc_mean] We use a truncated mean of all data within the $5^{\text{th}}$ and $95^{\text{th}}$ percentile of the rise and fall time values. This is just to make the numbers less susceptible to extreme outliers. Alternatively, we could of course also look at the median, for example.

[fn:fine_gain] At least to my memory and notes, which would otherwise mention it.

[fn:shape] As with everything, this is only an approximation and completely neglects possible nonlinearities in amplitude vs. integral and so on. But it works well for its purpose here.

**** TODOs for this section [/] :noexport:

- [ ] *MAKE SURE TO ADD 50/100 ns DATASETS TO ALL RAW DATA!*
What is it that we actually want from this? Aside from providing an *overview* the FADC is barely used for anything. Things that are important:
- [ ] *barely used not true anymore!* Important veto.
- [ ] *trigger threshold* deduced from calibration data / real data
- [ ] define how to compute & compute for every run, plot trigger threshold vs run (correlate with settings)
- [ ] show rise time / fall time histograms! Required for our cuts we actually perform on that in background calc later! This is required because otherwise we cannot explain why we do / do not use the FADC as a veto and why it's not super helpful.
- [ ] *MOST LIKELY THESE RISE/FALL TIME PLOTS CHANGE SIGNIFICANTLY NOW AFTER HAVING CHANGED FADC BASELINE & THRESHOLD CALCS!!!*
- [X] show spectra (already one shown in previous chapter!)
- [ ] show 55Fe spectrum of FADC data
- [X] create plot showing position of 55 Fe photopeak in FADC data, highlighting when amplifier settings were changed
- [X] create plot of mean rise times in FADC
- [ ] show how FADC events differ in 50ns and 100ns data (*we have those datasets*) -> partially done in plots comparing rise time between different runs!
- [ ] show detector pointing to Zenith data for muon related stuff - [ ] plot histograms of 50ns runs vs 100ns runs at CAST to see difference in background -> code is there! - [ ] plot histograms of the different properties (rise & fall time in particular) for the different FADC settings used at CAST -> partially done! Not the histograms shown above, but the mean value of those histograms! -> For the actual histograms we could split the data by the datasets with different FADC settings (similar to how done in ~fadc_analysis.nim~!) and then plot them by 3 different colors. - [X] finish trying to deduce the pedestals from real data. Didn't work last time only due to our confusion of using all 4 channels. Try again focusing on the single channel we actually use! -> Now that it's done correctly it works extremely well! - [ ] create a plot comparing 55Fe data from GridPix with FADC (by peak position over time. Same correlation or not?) -> also see detector behavior over time / FADC as proxy for charge - [ ] *REGARDING 50ns vs 100ns ~fadc_analysis.nim~ CONTAINS CODE TO CHECK NOISE COMPARED TO 50 vs 100!!* - [ ] noise analysis - [ ] apply Savitzky Golay filter to FADC spectra & describe -> Still necessary after correct pedestals in use w/ trunc mean? Maybe much more helpful now? Try it! - [ ] train NN on FADC data and see what we can gain <- this goes towards analysis chapter of course! Best use a CNN with 1D input data & kernel! **** Further thoughts on understanding impact of $\SI{50}{ns}$ vs $\SI{100}{ns}$ :extended: We did actually take a measurement of the FADC in the laboratory at some point comparing $\SI{50}{ns}$ integration time with $\SI{100}{ns}$ integration time. This was during the course of the master thesis of Hendrik Schmick [[cite:&SchmickMaster]]. Unfortunately, the exact values of the amplifier gain and differentiation times were not recorded (dummy me!). 
These two datasets may still be valuable (and are part of the raw data hosted at Zenodo), but I haven't attempted to use them for a deeper understanding in the last years (we looked into them back in 2019 though). **** Generate plots of FADC fall & rise times and FADC \cefe spectrum [/] :extended: :PROPERTIES: :CUSTOM_ID: sec:reco:fadc_rise_fall_plots :END: - [ ] *REPLACE THE ORIGIN OF THIS PLOT* - [ ] *CREATE A NEW FADC \cefe SPECTRUM* -> Follow sec. [[#sec:calib:energy_gen_example_cefe]] -> Use the same run too! #+begin_src sh :results drawer raw_data_manipulation -p ~/CastData/data/2018_2/Run_240_181021-14-54/ \ --out /t/raw_240.h5 \ --runType rtBackground #+end_src #+begin_src sh :results drawer reconstruction /t/raw_240.h5 --out /t/reco_240.h5 #+end_src #+RESULTS: :results: {"<HDF5file>": /t/raw_240.h5, "--only_gain_fit": false, "--create_fe_spec": false, "--version": false, "--runNumber": nil, "--only_charge": false, "--only_energy_from_e": false, "--only_energy": nil, "--out": /t/reco_240.h5, "--config": nil, "--only_gas_gain": false, "--only_fadc": false, "--help": false, "--only_fe_spec": false} ... 
INFO Writing data to datasets INFO Writing of FADC data took 0.9212641716003418 seconds INFO Reconstruction of all runs in /t/raw_240.h5 with flags: {rfReadAllRuns} took 5.32500171661377 seconds INFO Performed reconstruction of the following runs: INFO {240} INFO while iterating over the following: INFO {240} :end: #+begin_src sh :results drawer reconstruction /t/reco_240.h5 --only_fadc #+end_src #+RESULTS: :results: {"<HDF5file>": /t/reco_240.h5, "--only_gain_fit": false, "--create_fe_spec": false, "--version": false, "--runNumber": nil, "--only_charge": false, "--only_energy_from_e": false, "--only_energy": nil, "--out": nil, "--config": nil, "--only_gas_gain": false, "--only_fadc": true, "--help": false, "--only_fe_spec": false} INFO Reading config file: /home/basti/CastData/ExternCode/TimepixAnalysis/Analysis/ingrid/config.toml Start fadc calc FADC minima calculations took: 0.5971462726593018 INFO Reconstruction of all runs in /t/reco_240.h5 with flags: {rfOnlyFadc, rfReadAllRuns} took 1.252563714981079 seconds INFO Performed reconstruction of the following runs: INFO {240} INFO while iterating over the following: INFO {240} :end: #+begin_src sh :results drawer raw_data_manipulation -p ~/CastData/data/2017/Run_96_171123-10-42 \ --out /t/raw_96.h5 \ --runType rtCalibration #+end_src #+RESULTS: :results: Flags are is {} ... INFO Closing h5file with code 0 INFO Processing all given runs took 0.3456798712412516 minutes :end: #+begin_src sh :results drawer reconstruction /t/raw_96.h5 --out /t/reco_96.h5 #+end_src #+RESULTS: :results: {"<HDF5file>": /t/raw_96.h5, "--only_gain_fit": false, "--create_fe_spec": false, "--version": false, "--runNumber": nil, "--only_charge": false, "--only_energy_from_e": false, "--only_energy": nil, "--out": /t/reco_96.h5, "--config": nil, "--only_gas_gain": false, "--only_fadc": false, "--help": false, "--only_fe_spec": false} ... 
INFO Writing data to datasets INFO Writing of FADC data took 6.967457056045532 seconds INFO Reconstruction of all runs in /t/raw_96.h5 with flags: {rfReadAllRuns} took 15.26722311973572 seconds INFO Performed reconstruction of the following runs: INFO {96} INFO while iterating over the following: INFO {96} :end: #+begin_src sh :results drawer reconstruction /t/reco_96.h5 --only_fadc #+end_src #+RESULTS: :results: {"<HDF5file>": /t/reco_96.h5, "--only_gain_fit": false, "--create_fe_spec": false, "--version": false, "--runNumber": nil, "--only_charge": false, "--only_energy_from_e": false, "--only_energy": nil, "--out": nil, "--config": nil, "--only_gas_gain": false, "--only_fadc": true, "--help": false, "--only_fe_spec": false} INFO Reading config file: /home/basti/CastData/ExternCode/TimepixAnalysis/Analysis/ingrid/config.toml Start fadc calc FADC minima calculations took: 4.997849464416504 INFO Reconstruction of all runs in /t/reco_96.h5 with flags: {rfOnlyFadc, rfReadAllRuns} took 10.23849511146545 seconds INFO Performed reconstruction of the following runs: INFO {96} INFO while iterating over the following: INFO {96} :end: #+begin_src nim :results drawer :tangle code/fadc_rise_fall_different_settings.nim import nimhdf5, ggplotnim import std / [strutils, os, sequtils, strformat] import ingrid / [tos_helpers] import ingrid / calibration / [calib_fitting, calib_plotting] import ingrid / calibration proc stripPrefix(s, p: string): string = result = s result.removePrefix(p) let useTeX = getEnv("USE_TEX", "false").parseBool let Width = getEnv("WIDTH", "600").parseFloat let Height = getEnv("HEIGHT", "450").parseFloat from ginger import transparent const settings = @["∫: 50 ns, ∂: 50 ns, G: 6x", "∫: 50 ns, ∂: 20 ns, G: 10x", "∫: 100 ns, ∂: 20 ns, G: 10x"] const runs = @[80, 101, 121] const riseTimeS = "riseTime [ns]" const fallTimeS = "fallTime [ns]" proc fadcSettings(plt: GgPlot, allRuns: seq[int], hideText: bool, minVal, maxVal, margin: float): GgPlot = ## This is a 
bit of a mess, but:
  ## It handles drawing the colored rectangles for the different FADC settings and
  ## adjusting the margin if any given via the R_MARGIN environment variable.
  ## The rectangle drawing is a bit ugly to look at, because we use the numbers initially
  ## intended for the peak position plot, but rescale them to map the completely different
  ## values for the other plots using min/max value and a potential margin.
  let mRight = getEnv("R_MARGIN", "6.0").parseFloat
  let widths = @[101 - 80, 121 - 101, allRuns.max - 121 + 1]
  let Δ = (maxVal - minVal)
  let min = minVal - Δ * margin
  let ys = @[min, min, min]
  let heights = @[0.25, 0.25, 0.25].mapIt(it / 0.25 * (Δ * (1 + 2 * margin)))
  let textYs = @[0.325, 0.27, 0.22].mapIt((it - 0.1) / (0.35 - 0.1) * Δ + minVal)
  let dfRects = toDf(settings, ys, textYs, runs, heights, widths)
  echo dfRects
  result = plt +
    geom_tile(data = dfRects,
              aes = aes(x = "runs", y = "ys", height = "heights", width = "widths",
                        fill = "settings"),
              alpha = 0.3) +
    xlim(80, 200) +
    margin(right = mRight) +
    themeLatex(fWidth = 0.9, width = Width, height = Height, baseTheme = singlePlot)
  if not hideText:
    result = result +
      geom_text(data = dfRects,
                aes = aes(x = f{`runs` + 2}, y = "textYs", text = "settings"),
                alignKind = taLeft)

proc getSetting(run: int): string =
  result = settings[lowerBound(runs, run) - 1]

proc plotFallTimeRiseTime(df: DataFrame, suffix: string, allRuns: seq[int], hideText: bool) =
  ## Given a full run of FADC data, create the rise and fall time plots.
  ## Note: it may be sensible to compute a truncated mean instead
  let dfG = df.group_by("runNumber")
    .summarize(f{float: riseTimeS << truncMean(col("riseTime").toSeq1D, 0.05)},
               f{float: fallTimeS << truncMean(col("fallTime").toSeq1D, 0.05)})
    .mutate(f{int -> string: "settings" ~ getSetting(`runNumber`)})
  let width = getEnv("WIDTH_RT", "600").parseFloat
  let height = getEnv("HEIGHT_RT", "450").parseFloat
  let mRight = getEnv("R_MARGIN", "4.0").parseFloat
  let fontScale = getEnv("FONT_SCALE", "1.0").parseFloat
  let
(rMin, rMax) = (dfG[riseTimeS, float].min, dfG[riseTimeS, float].max)
  let perc = 0.025
  let Δr = (rMax - rMin) * perc
  var plt = ggplot(dfG, aes(runNumber, riseTimeS)) +
    ggtitle("FADC signal rise times in ⁵⁵Fe data for all runs in $#" % suffix) +
    margin(right = mRight) +
    #theme_font_scale(fontScale) +
    themeLatex(fWidth = 0.9, width = width, height = height, baseTheme = singlePlot) +
    ylim(rMin - Δr, rMax + Δr)
  plt = plt.fadcSettings(allRuns, hideText, rMin, rMax, perc)
  plt + geom_point(aes = aes(color = fallTimeS)) +
    ggsave("Figs/FADC/fadc_mean_riseTime_$#.pdf" % suffix,
           width = width, height = height,
           useTeX = useTeX, standalone = useTeX)
  let (fMin, fMax) = (dfG[fallTimeS, float].min, dfG[fallTimeS, float].max)
  let Δf = (fMax - fMin) * 1.025
  var plt2 = ggplot(dfG, aes(runNumber, fallTimeS)) +
    margin(right = mRight) +
    ylim(fMin - Δf, fMax + Δf) +
    #theme_font_scale(fontScale) +
    ggtitle("FADC signal fall times in ⁵⁵Fe data for all runs in $#" % suffix)
  plt2 = plt2.fadcSettings(allRuns, hideText, fMin, fMax, perc)
  plt2 + geom_point(aes = aes(color = riseTimeS)) +
    ggsave("Figs/FADC/fadc_mean_fallTime_$#.pdf" % suffix,
           width = width, height = height,
           useTeX = useTeX, standalone = useTeX)
  ggplot(dfG, aes(riseTimeS, fallTimeS, color = "settings")) +
    geom_point() +
    ggtitle("FADC signal rise vs fall times for ⁵⁵Fe data in $#" % suffix) +
    margin(right = mRight) +
    #theme_font_scale(fontScale) +
    themeLatex(fWidth = 0.9, width = width, height = Height, baseTheme = singlePlot) +
    ggsave("Figs/FADC/fadc_mean_riseTime_vs_fallTime_$#.pdf" % suffix,
           width = width, height = height,
           useTeX = useTeX, standalone = useTeX)

proc fit(fname: string, year: int): (DataFrame, DataFrame) =
  var h5f = H5open(fname, "r")
  let fileInfo = h5f.getFileInfo()
  let is2017 = year == 2017
  let is2018 = year == 2018
  if not is2017 and not is2018:
    raise newException(IOError, "The input file is neither clearly a 2017 nor 2018 calibration file!")
  var peakPos = newSeq[float]()
  var actThr = newSeq[float]()
  var dfProp =
newDataFrame()
  for run in fileInfo.runs:
    var df = h5f.readRunDsets(
      run,
      commonDsets = @["fadc/eventNumber", "fadc/baseline", "fadc/riseStart",
                      "fadc/riseTime", "fadc/fallStop", "fadc/fallTime",
                      "fadc/minVal", "fadc/argMinval"]
    )
    df = df.rename(df.getKeys.mapIt(f{it.stripPrefix("fadc/") <- it}))
    df["runNumber"] = run
    let dset = h5f[(recoBase() & $run / "fadc/fadc_data").dset_str]
    let fadcData = dset[float].toTensor.reshape(dset.shape)
    let feSpec = fitFeSpectrumFadc(df["minVal", float].toSeq1D)
    let ecData = fitEnergyCalib(feSpec, isPixel = false)
    let texts = buildTextForFeSpec(feSpec, ecData)
    plotFeSpectrum(feSpec, run, 3,
                   texts = texts,
                   pathPrefix = "Figs/FADC/fe55_fits/",
                   useTeX = false)
    # add fit to peak positions
    peakPos.add feSpec.pRes[feSpec.idx_kalpha]
    ggplot(df, aes("minVal")) +
      geom_histogram(bins = 300) +
      ggsave("/t/fadc_run_$#_minima.pdf" % $run)
    # Now get the activation threshold as a function of gridpix energy on center
    # chip. Get GridPix data on center chip...
    var dfGP = h5f.readRunDsets(
      run,
      chipDsets = some((chip: 3, dsets: @["energyFromCharge", "eventNumber"]))
    )
    # ...sum all clusters for each event (for multiple clusters, the FADC sees all)...
    dfGP = dfGP.group_by("eventNumber").summarize(f{float -> float: "energyFromCharge" << sum(col("energyFromCharge"))})
    # ... join with FADC DF to only have events left with FADC trigger...
df = innerJoin(dfGP, df.clone(), "eventNumber")
    # ...compute activation threshold as 1st percentile of data
    actThr.add percentile(df["energyFromCharge", float], 1)
    dfProp.add df
  doAssert h5f.close() >= 0
  let df = toDf({ "runs" : fileInfo.runs, "peaks" : peakPos, "actThr" : actThr })
  result = (df, dfProp)

proc main(path: string, year: int, fit = false, hideText = false) =
  ## - run 101 <2017-11-29 Wed 6:40> was the first with FADC noise
  ##   significant enough to make me change settings:
  ##   - Diff: 50 ns -> 20 ns (one to left)
  ##   - Coarse gain: 6x -> 10x (one to right)
  ## - run 112: change FADC settings again due to noise:
  ##   - integration: 50 ns -> 100 ns
  ##   This was done at around <2017-12-07 Thu 8:00>
  ##   - integration: 100 ns -> 50 ns again at around
  ##     <2017-12-08 Fri 17:50>.
  ## - run 121: Jochen set the FADC main amplifier
  ##   integration time from 50 -> 100 ns again, around
  ##   <2017-12-15 Fri 10:20>
  let is2017 = year == 2017
  let yearToRun = if is2017: 2 else: 3
  let suffix = "run$#" % $yearToRun
  var dfProp = newDataFrame()
  var df = newDataFrame()
  var peakPos: seq[float]
  if fit:
    (df, dfProp) = fit(path, year)
    dfProp.writeCsv(&"resources/properties_fadc_{suffix}.csv")
    df.writeCsv(&"resources/peak_positions_fadc_{suffix}.csv")
  else:
    dfProp = readCsv(&"{path}/properties_fadc_{suffix}.csv")
    df = readCsv(&"{path}/peak_positions_fadc_{suffix}.csv")
  let allRuns = df["runs", int].toSeq1D
  plotFallTimeRiseTime(dfProp, suffix, allRuns, hideText)
  block Fe55PeakPos:
    let outname = "Figs/FADC/peak_positions_fadc_$#.pdf" % $suffix
    var plt = ggplot(df, aes("runs", "peaks"))
    if is2017:
      plt = plt.fadcSettings(allRuns, hideText, 0.1, 0.35, 0.0)
    plt + geom_point() +
      ylim(0.1, 0.35) +
      ylab("⁵⁵Fe peak position [V]") + xlab("Run number") +
      ggtitle("Peak position of the ⁵⁵Fe runs in the FADC data") +
      ggsave(outname, width = Width, height = Height, useTeX = useTeX, standalone = useTeX)
  block ActivationThreshold:
    let outname = "Figs/FADC/activation_threshold_gridpix_energy_fadc_$#.pdf" %
$suffix
    var plt = ggplot(df, aes("runs", "actThr"))
    if is2017:
      plt = plt.fadcSettings(allRuns, hideText, 0.9, 2.4, 0.0)
    plt + geom_point() +
      ylim(0.9, 2.4) +
      ylab("Activation threshold [keV]") + xlab("Run number") +
      ggtitle("Activation threshold based on center GridPix energy") +
      ggsave(outname, width = Width, height = Height, useTeX = useTeX, standalone = useTeX)

when isMainModule:
  import cligen
  dispatch main
#+end_src

- [X] *REGENERATE FADC DATA IN H5 FILES ON ~voidRipper~*
- [X] *GENERATE HISTOGRAMS OF MINVALS FOR ALL CALIBRATION RUNS*
- [X] *GENERATE PLOTS OF ALL FITS TO MINVALS HISTOS TO FIND PEAK FOR ALL CALIBRATION RUNS*
  -> start by just computing the maximum of the above histogram for each run as a basis
- [ ] *ALL THESE PLOTS SHOULD REALLY BE GENERATED WHEN RUNNING ~reconstruction --only_fadc~! Replace that!*
  -> Well.. Not today.
- [X] *GENERATE PLOT OF FADC & GridPix PEAK POSITIONS AGAINST ALL RUNS*
  -> Done and in previous section (at least for relevant runs)

Run the code for the 2017 calibration data to generate the plot of the FADC settings (and generate the CSV containing the peak positions by run):
#+begin_src sh :dir ~/phd
./code/fadc_rise_fall_different_settings.nim -p ~/CastData/data/CalibrationRuns2017_Reco.h5 --fit --year 2017
#+end_src
and now for 2018 to generate the CSV for the peak positions of the run 3 data:
#+begin_src sh
./code/fadc_rise_fall_different_settings.nim -p ~/CastData/data/CalibrationRuns2018_Reco.h5 --fit --year 2018
#+end_src
To generate the final plots we use the generated CSV files (in order to more quickly change parameters such as the size of the plots):
#+begin_src sh :dir ~/phd/
USE_TEX=true FONT_SCALE=1.2 R_MARGIN=7.5 RT_MARGIN=3.0 WIDTH=600 HEIGHT=360 HEIGHT_RT=420 \
    code/fadc_rise_fall_different_settings \
    -p resources \
    --year 2017 \
    --hideText
USE_TEX=true FONT_SCALE=1.2 R_MARGIN=7.5 RT_MARGIN=3.0 WIDTH=600 HEIGHT=360 HEIGHT_RT=420 \
    code/fadc_rise_fall_different_settings \
    -p resources \
    --year 2018 \
    --hideText
#+end_src
The CSV
files are found in:
- [[~/phd/resources/peak_positions_fadc_run2.csv]]
- [[~/phd/resources/peak_positions_fadc_run3.csv]]
- [[~/phd/resources/properties_fadc_run2.csv]]
- [[~/phd/resources/properties_fadc_run3.csv]]
and the plots we generated are all in [[file:Figs/FADC/]], with the \cefe fits in [[file:Figs/FADC/fe55_fits/]].

**** Initial study of activation threshold :extended:

This section contains the ideas and code that I initially wrote when I first thought about including something about the activation threshold of the FADC. At that point I had never actually tried to quantify what the threshold was (though I obviously had a pretty good idea based on other aspects).

- How to compute? :: The fits we perform for the \cefe data in one of our scripts ideally should be done for each run again. That way we could compute the energy similarly to what we do for GridPix data. Ideally we would compute it using an integral approach though, as that gives us a better proxy for the amount of charge. Or at least use peak finding to detect multiple signals within one FADC event and sum the energies of both. As an easier approach we can of course compute a lower percentile (not the total minimum, but maybe the 1st percentile of each run) and plot that.
#+begin_src nim :tangle code/fadc_compute_activation_threshold.nim
import nimhdf5, ggplotnim
import std / [strutils, os, sequtils]
import ingrid / [tos_helpers, fadc_helpers, ingrid_types, fadc_analysis]

proc fadcSettingRuns(): seq[int] =
  result = @[0, 101, 121]

proc stripPrefix(s, p: string): string =
  result = s
  result.removePrefix(p)

proc minimum(h5f: H5File, runNumber: int, percentile: int): (float, float) =
  var df = h5f.readRunDsets(
    runNumber,
    chipDsets = some((chip: 3, dsets: @["energyFromCharge", "eventNumber"]))
  )
  # sum all energies of all same events to get a combined energy of all
  # clusters on the center chip in each event (to correlate w/ FADC)
  df = df.group_by("eventNumber").summarize(f{float -> float: "energyFromCharge" << sum(col("energyFromCharge"))})
  var run = h5f.readRecoFadc(runNumber)
  let fEvs = h5f.readRunDsets(runNumber, fadcDsets = @["eventNumber"])
  let minVals = run.minVal.toSeq1D
  let dfFadc = toDf({ "eventNumber" : fEvs["eventNumber", int], "minVals" : minVals })
  # join both by `eventNumber` (dropping center chip events w/ no FADC)
  df = innerJoin(df, dfFadc, "eventNumber")
  # percentile based on minvals & gridpix energy
  result = (percentile(minVals, 100 - percentile),
            percentile(df["energyFromCharge", float], percentile))

proc main(fname: string, percentile: int) =
  var h5f = H5open(fname, "r")
  let fileInfo = h5f.getFileInfo()
  echo fileInfo
  var minimaFadc = newSeq[float]()
  var minimaGP = newSeq[float]()
  var idxs = newSeq[int]()
  for run in fileInfo.runs:
    let idx = lowerBound(fadcSettingRuns(), run)
    echo "idx ", idx, " for run ", run
    let (minFadc, minEnergy) = minimum(h5f, run, percentile)
    minimaFadc.add minFadc
    minimaGP.add minEnergy
    idxs.add idx
  let df = toDf(minimaFadc, minimaGP, idxs)
  ggplot(df, aes("minimaFadc", fill = "idxs")) +
    geom_histogram(position = "identity", alpha = 0.5, hdKind = hdOutline) +
    xlab("Pulse amplitude [V]") + ylab("Counts") +
    ggtitle("Activation threshold by smallest pulses triggering FADC") +
    themeLatex(fWidth = 0.9,
width = 600, baseTheme = singlePlot) +
    margin(right = 2.5) +
    ggsave("~/phd/Figs/FADC/fadc_minima_histo_activation_threshold_mV.pdf", useTeX = true, standalone = true)
  ggplot(df, aes("minimaGP", fill = "idxs")) +
    geom_histogram(position = "identity", alpha = 0.5, hdKind = hdOutline) +
    xlab("Energy on GridPix [keV]") + ylab("Counts") +
    ggtitle("Activation threshold by energy recorded on center GridPix") +
    themeLatex(fWidth = 0.9, width = 600, baseTheme = singlePlot) +
    margin(right = 2.5) +
    ggsave("~/phd/Figs/FADC/fadc_minima_histo_gridpix_energy.pdf", useTeX = true, standalone = true)

when isMainModule:
  import cligen
  dispatch main
#+end_src

#+begin_src sh
./code/fadc_compute_activation_threshold -f ~/CastData/data/CalibrationRuns2017_Reco.h5 --percentile 1
#+end_src

The produced fig. [[fig:fadc:activation_threshold_histo_mV_histo_run2]], showing the smallest pulse amplitudes that still trigger the FADC, gives us the actual activation thresholds of the FADC in volts. We can see that the threshold generally remained constant in terms of millivolts. That's good to know.

#+CAPTION: Different activation thresholds of the Run-2 data due to different FADC settings,
#+CAPTION: determined based on the 1st percentile of the data in the \cefe calibration runs,
#+CAPTION: by using the FADC pulse amplitude.
#+NAME: fig:fadc:activation_threshold_histo_mV_histo_run2
[[~/phd/Figs/FADC/fadc_minima_histo_activation_threshold_mV.pdf]]

The second plot, fig. [[fig:fadc:activation_threshold_histo_run2]], shows the activation in terms of the sum of all cluster energies (per event number, to take into account multiple clusters) based on the GridPix energy. Here we see the 'real' activation energy in keV and can see that unfortunately for the later settings the threshold was very high.
:(

#+CAPTION: Different activation thresholds of the Run-2 data due to different FADC settings,
#+CAPTION: determined based on the 1st percentile of the data in the \cefe calibration runs,
#+CAPTION: by using the energy of the center GridPix.
#+NAME: fig:fadc:activation_threshold_histo_run2
[[~/phd/Figs/FADC/fadc_minima_histo_gridpix_energy.pdf]]

Of course this by itself does not imply a difference in activation threshold at the equivalent physical energy!
- [ ] *COMPUTE IN ENERGY INSTEAD OF VOLTAGE*

*NOTE*: From the FADC plots here (pure FADC & GridPix energy correlated) as well as the raw FADC spectra it is clearly evident that the actual gain of the FADC went _down_ instead of up after changing the diff from 50 ns to 20 ns and the coarse gain from 6x to 10x, and further when going from 50 ns to 100 ns integration time. To some extent this may make sense: according to the FADC manual, the "differentiation" setting adjusts the RC differentiation time constant:
#+begin_quote
DIFFERENTIATE Front panel 6-position switch selects a differentiation time constant to control the decay time of the pulse; settings select Out (equivalent to 150 µs), 20, 50, 100, 200, or 500 ns.

INTEGRATE Front panel 6-position switch selects an integration time constant to control the risetime of the pulse; settings select 20, 50, 100,
#+end_quote
And further:
#+begin_quote
Generally speaking, the Integrate time constant can be selected so that the rise time of the output pulses is normalized at a rate that is slower than the rise times of the input pulses. This function is of greatest value when the pulses originate in a large detector so that they generate a wide variety of rise times and are difficult to observe for timing measurements. The Differentiate time constant is also selectable and determines the total interval before the pulse returns to the baseline and allows a new pulse to be observed.
The combination of integration and differentiation time constants also contributes to the amount of electronic noise that is seen in the system, so the resulting waveforms should be considered from each of these points of view and adjusted for optimum results. When the shaping time constants impose considerable changes in the input waveform, the nominal gain, which is the product of the Coarse and Fine control settings, may be degraded somewhat. This is not normally a problem, since the gain is constant even though it may be less than the nominal settings indicate.
#+end_quote
I.e. this means the differentiation time is responsible for getting the signal back to baseline. And it is expected that it has an effect on the amplitude of the signal!
- [ ] *ADD ABOVE AS EXPLANATION FOR THE SEEN BEHAVIOR*
  -> maybe 4 fold plot:
  - histogram as above for lower percentile energies of min vals & gridpix energies together with three spectra, one for each setting combined in one plot?

All this is effectively good news. This also explains _why_ the changes to the integration / differentiation had such an effect on the noise! They simply reduced the effective gain, ergo made the FADC less sensitive to the same noise!

Ref: https://www.ortec-online.com/-/media/ametekortec/manuals/4/474-mnl.pdf?la=en&revision=07c47ecb-5c63-48ff-a393-ba39e45be57b
from here: https://www.ortec-online.com/products/electronics/amplifiers/474

*** How to compute an effective charge based on FADC signals? [0/1] :extended:
- [X] The ideas of this section have been merged into the general FADC section

This section should cover our ideas about how we compute an effective charge (in arbitrary units) based on the FADC signals as a measure of the effective charge recorded in a signal. To cross correlate changes in FADC "charge" with GridPix charge.
- [ ] -> is detector behavior over time visible in FADC data?
-> Looking at [[file:Figs/time_vs_55fe_peak_pos_2017.pdf]] generated via
#+begin_src sh
./mapSeptemTempToFePeak ~/CastData/data/CalibrationRuns2017_Reco.h5 --inputs fePixel --inputs feCharge --inputs feFadc
#+end_src
shows a _very_ strong correlation between all three kinds of calibration. Note that the strong drop in the FADC data in the "left hump" of points is due to the change of the integration / differentiation time during the 2017 data taking (up to Dec 2017). But looking closely, even there a strong correlation is visible inside each "block". This puts to rest at least _most_ theories that the change might _not_ be a change in the gas gain, but some other effect like electronics!
- [ ] *EXPAND ON THE ABOVE, EXPLAIN THAT IN DETECTOR BEHAVIOR OVER TIME!*
  -> given that temperature is also not properly correlated, this leaves charge-up effects (changing the effective voltage) and gas flow (unlikely, as the flow is constant & the pressure always stable).

** Scintillators [/] :noexport:
- [X] *DO THE SCINTILLATORS DESERVE A SECTION HERE TO EXPLAIN THEIR USAGE?*
  -> probably not, as this really should be about the stuff we do to calibrate and use data. But none of this is needed for scintillators.
  -> As a matter of fact, the explanation for the FADC above about being of 4-fold use might actually be better placed somewhere else?
  -> No, they don't.
- [X] Well, at the very least we should present the information about the number of triggers, clock cycle distributions etc. somewhere. Either do it here somewhere or do it before we perform the cuts. Both can work.
  -> In principle this chapter here could really be about the detector energy etc. while other calculations could be presented before? Or we have another short chapter/section with a "data overview"?
  -> We have an overview of the CAST data. Later in the background rate chapter we then present the scintillators in more detail when discussing the cuts.
* Chapter about analysis principle [/] :noexport:Software: :PROPERTIES: :CUSTOM_ID: sec:analysis_principle :END: #+LATEX: \minitoc - [ ] *This chapter does not serve a proper purpose anymore I think. All aspects are already explained in other chapters.* *IDEA*: Maybe this chapter should only be about everything from raw data to clustering, charge & energy calibration, gas gain, computation of geometric properties? Well, two of these are already mentioned in the previous chapter. *NOTE:* We need to better understand how to: - explain the theoretical foundations of what we do, e.g. cluster finding algorithms. Certain things, e.g. cluster finding algos could also just go to the appendix. Interesting, but technically just a detail. - introduce the software stack we use - introduce the physics for the calibration (e.g. ~ToT~ calib, ...) - explain the algorithms used in the software Can we disentangle this from the purely detector focused things? I'm not so sure. After introducing detector specific calibrations etc. we can go on to what the steps are that are required to turn a calibrated detector (one that is sensitive to N electrons essentially) into something that can do physics. Need some chapter that talks about the detector specific details that explain how a limit / physics result is obtained. - [ ] turn the ingrid reconstruction schematic into a generalized flow chart? ** Take data. Output data is ASCII files - [X] In [[#sec:reco:tos_data_parsing]] Parsing of data in format. Present format. #+begin_quote Generic header. Data. #+end_quote Store data in HDF5 files. Not much going on here aside from making it fast. ** Reconstruct & calibrate data Read data from HDF5 files. What does reconstruction mean? Multiple things. *** 1. perform cluster finding - [X] In [[#sec:reco:data_reconstruction]] (subsection) Present our current two clustering algorithms. 
- dumb search in radius around each pixel (add foot note that the implementation in MarlinTPC had a bug), based on *rectangular* search, not circular - optional: DBSCAN, short introduction give full reference to implementation. **** Investigation of buggy clustering in MarlinTPC :noexport: *** 2. for each cluster, compute geometric properties - [X] In [[#sec:reco:data_reconstruction]] (subsection) Table of the computed properties. As they are geometric easy to explain. Highlight the ones used for likelihood. Done here? Show our sketch explaining what each property means from one of the talks. Maybe need to fix the radius variable? *** 3. (optional / required for analysis) charge calibration - [X] In [[#sec:operation_calibration:tot_calibration]] Use Timepix ~ToT~ calibration (ref theory section before where we explain how it works). Given ~ToT~ calibration apply function to get number of electrons (given that we ran in ~ToT~ mode). *** 4. (optional / required for analysis) compute gas gain - [X] [[#sec:daq:polya_distribution]] Computing gas gain. Polya fit. Explain not fit parameter used, but mean of data. Heavy gas gain variation over time. Explain that thus behavior chosen that minimizes effect by binning in time. 90 minutes. Show plot with old way (full runs) vs. new length. Results in stable operation. This section probably belongs somewhere else? **** Study for optimal gas gain time length :noexport: *** 5. (optional / required for analysis) energy computation - [X] [[#sec:calibration:energy]] Requires: charge calibration, gas gain Very easy in theory. In practice complicated. Theoretically, two ways: 1. pixel counting. Due to single electron detection efficiency (or a slight correction for under/overcounting) can just multiply hits by eV per hit 2. charge calibration plus reference spectrum of 55Fe runs. Both cases are very simple *iff* the detector is stable over time. 
Then just take closest 55Fe run and compute correction factor for hits / conversion factor from peak in charge values. But: instability means we need to average over more data. Compute for all runs. Fit. Apply fit. *** 6. (optional) FADC reconstruction - [X] In [[#sec:calib:fadc]] Apply pedestals. Determine lowest point. Determine rising / falling times in ns. Compute other properties. ** Compute reference spectra Possibly explain in chapter about CDL? Or talk there only about the *data* we took there, but not in detail about *what* this data is *for*? If so explain here. ** Log file reader to get tracking (maybe no export) Talk about log file reader (full section definitely :noexport:), used to mark times in runs that correspond to tracking. ** Likelihood method Explain likelihood method in theory (maybe do in section before). Explain our methods for linear interpolation between the reference spectra. Apply reference data for limit at specific custom software efficiency. Started at 80% reference and then tweaked for optimal ε = S/√B (or something). Likelihood method gives us everything we need for background rate. Whatever comes out gives us left over clusters. *** Septem veto :PROPERTIES: :CUSTOM_ID: sec:septem_veto :END: Talk about the septem veto we finally use. In the septem veto the main idea is to go back to the raw data for any event, which contains a cluster on the central chip, which is signal-like based on the likelihood method presented above. For these, a so called 'septem event' is built based on the raw pixel data of all chips. This is simply an 'event' in the same notion as understood by the =reconstruction= tool (c/f [[sec:reconstruction]]), i.e. a two dimensional array of the ~ToT~ values. Except in this case it is not a $256 \times 256$ array, but rather a $3 \cdot 256 \times 3 \cdot 256$ array, where the full septemboard detector is merged into a single event without any spacing between the chips (more on that below). 
These septem events are then pushed through the whole reconstruction and calibration pipeline. Because the event now includes information that was previously not taken into account (pixel activity outside the center chip), the cluster finding algorithm can detect larger clusters than before. This can change the shape of the cluster that was previously considered signal-like. In case this cluster now looks more like background, it will be vetoed by this technique.

The decision to merge the different chips into a single event *without* any spacing between the chips is made to ensure good cluster finding. The spacing between chips is of course a dead zone where no activity can be measured. For a cluster finding algorithm this may cause a cutoff that should not happen, as the information is simply not *available*. Ideally, one could imagine an algorithm that interpolates data between the chips based on the information on the two neighboring chips. But this is too experimental. If there is data on the neighboring chip, it is extremely likely there was ionization between the chips as well, meaning the merging of the chips "only" makes the event less eccentric than the physical event. If there is no data on the neighboring chip, the merging has no effect; the cluster will simply look the same as in the original single chip event. This neglects two possibilities:
1. a physical event may have no active pixels between the chips, despite having active pixels on both chip borders. This is extremely unlikely if the clusters are sufficiently close to the border. The main case of this would be two real X-rays (extremely low probability) or either of the clusters being a track parallel to the border of the chip.
2. more likely, a loss of information for events without information on the neighboring chips, despite the physical event being more eccentric than the recorded one. This is a real limitation that cannot be worked around.
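The coordinate transformation underlying the merging can be sketched as follows. This is a deliberately simplified illustration assuming a plain 3×3 row-major grid of $256 \times 256$ chips, ignoring the actual Septemboard layout (7 chips, with rotated chips in the outer rows); ~toSeptem~ and ~SeptemPixel~ are hypothetical names, not the actual analysis code.

#+begin_src nim
# Sketch: map a pixel (x, y) on chip `chip` (0..8) into a single
# 768x768 "septem event" without inter-chip spacing.
# NOTE: assumes a plain 3x3 row-major layout; the real Septemboard
# has 7 chips with rotations, so this is only illustrative.
type SeptemPixel = tuple[x, y: int]

proc toSeptem(chip, x, y: int): SeptemPixel =
  let row = chip div 3           # which row of chips
  let col = chip mod 3           # which column of chips
  (x: col * 256 + x, y: row * 256 + y)

when isMainModule:
  doAssert toSeptem(0, 0, 0).x == 0
  doAssert toSeptem(4, 128, 128).x == 384  # center chip, center pixel
  doAssert toSeptem(8, 255, 255).y == 767
#+end_src

The point is only that the per-chip pixel coordinates become global coordinates without any gap, so a cluster finder sees neighboring-chip activity as directly adjacent.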
**** Explain why no spacing between chips **** Hough transformation experiments :noexport: *** Scintillator veto Talk about scintillator veto. Main ideas of course. *** FADC veto Explain the veto we use based on FADC. *** Comparison with other attempts :noexport: Show the other attempts we did about the different ways to interpolate. ** Compute limit Limit computation done. Needs to be after ray tracing introduction I'd say. Input for theory is required. Perform signal / background rejection. Get background rate + signals in tracking. Get expected flux from theory. Get detector efficiencies. Combine efficiencies & theory flux in raytracing simulation. Compute 'real' expected signal. Use suitable limit calculation method to compute a limit. We can split this into two pieces?: ** How to compute background rate ** How to compute a limit For =mclimit= use the notes in StatusAndProgress about how limit calculation works. Might be good in general, because a lot of it applies elsewhere anyway. Describe unbinned likelihood method from Nature paper adapted to our work. We can plot some funny plots explaining how it works. * Finding signal and defining background | Background rate computation :Analysis: :PROPERTIES: :CUSTOM_ID: sec:background :END: #+LATEX: \minitoc With the CAST data fully reconstructed and energy calibrated, it is time to define the methods used to extract axion candidates and derive a background rate from the data. We first introduce the likelihood cut method in sec. [[#sec:background:likelihood_method]] to motivate the need for reference X-ray data from an X-ray tube. Such data, taken at the CAST Detector Lab (CDL) will be discussed in depth in sec. [[#sec:cdl]]. We then see how the reference data is used in the likelihood method to act as an event classifier in sec. [[#sec:background:likelihood_cut]]. As an alternative to the likelihood cut, we will introduce another classifier in the form of a simple artificial neural network in sec. 
[[#sec:background:mlp]]. This is another in-depth discussion, as the selection of the training data and its verification is non-trivial. With both classifiers discussed, it is time to include all other Septemboard detector features as additional vetoes in sec. [[#sec:background:additional_vetoes]]. At the very end we will look at background rates for different cases, sec. [[#sec:background:all_vetoes_combined]], motivating the approach of our limit calculation in the next chapter.

#+begin_quote
Note: The methods discussed in this chapter are generally classifiers that predict how 'signal-like' a cluster is. Based on this prediction we will usually define a cut value to decide whether to keep a cluster as a potential signal. This means that if we apply a method to background data (that is, CAST data taken outside of solar trackings) we recover the 'background rate': the irreducible amount of background left (at a certain efficiency) which is signal-like. If we apply the same methods to CAST solar tracking data, we instead get a set of 'axion induced X-ray candidates'. [fn:candidates_contain_background] In the context of this chapter we commonly talk about "background rates", but the equivalent meaning in terms of tracking data and candidates should be kept in mind.
#+end_quote

[fn:candidates_contain_background] Of course the set of candidates contains background itself. The terminology 'candidate' intends to communicate that each candidate may be a background event or potentially a signal due to axions. But that is part of chapter [[#sec:limit]].

*** TODOs for this section :noexport:
- [X] *REFER TO CAST DATA SUMMARY TABLE AND PRESENT AN ENERGY SPECTRUM WITHOUT ANY CUTS WHATSOEVER*
  -> Raw data plots into CAST summary chapter!
Likelihood method demands info about X-ray properties, hence:
** Likelihood method
:PROPERTIES:
:CUSTOM_ID: sec:background:likelihood_method
:END:

The detection principle of the GridPix detector implies that physically different kinds of events have different geometric shapes. An example can be seen in fig. sref:fig:background:eccentricity_signal_background, comparing the cluster eccentricity of \cefe calibration events with background data. This motivates the use of a geometry-based approach to classify clusters as likely signal- or background-like. The method to distinguish the two types of events is a likelihood cut, based on the one in cite:krieger2018search. It effectively assigns a single value to each cluster for the likelihood that it is a signal-like event. Specifically, this likelihood method is based on three different geometric properties (also see sec. [[#sec:reco:cluster_geometry]]):
1. the eccentricity $ε$ of the cluster, determined by computing the long and short axis of the two dimensional cluster and then computing the ratio of the RMS of the projected positions of all active pixels within the cluster along each axis,
2. the fraction of all pixels within a circle of the radius of one transverse RMS around the cluster center, $f$,
3. the length of the cluster (full extension along the long axis) divided by the transverse RMS, $l$.
These variables are obviously highly correlated, but still provide a very good separation between the typical shapes of X-rays and background events. They mainly characterize the "spherical-ness" as well as the density near the center of the cluster, which is precisely the intuitive sense in which these types of events differ.
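As an illustration of property 1, the eccentricity can be computed from the pixel positions via the principal axes of their covariance matrix. This is a minimal sketch under that definition, not the actual reconstruction code (which rotates clusters into their coordinate system and computes all properties in one pass); ~eccentricity~ here is a hypothetical helper.

#+begin_src nim
import std / math

# Sketch: eccentricity as the ratio of the RMS of pixel positions
# along the long vs. short principal axis of a cluster. The axes are
# the eigenvectors of the 2x2 covariance matrix of the positions.
proc eccentricity(xs, ys: seq[float]): float =
  let n = xs.len.float
  let mx = xs.sum / n
  let my = ys.sum / n
  var sxx, syy, sxy = 0.0
  for i in 0 ..< xs.len:
    sxx += (xs[i] - mx) ^ 2
    syy += (ys[i] - my) ^ 2
    sxy += (xs[i] - mx) * (ys[i] - my)
  sxx /= n; syy /= n; sxy /= n
  # eigenvalues of the covariance matrix = variances along the
  # long / short principal axes
  let tr = sxx + syy
  let det = sxx * syy - sxy * sxy
  let disc = sqrt(max(0.0, tr * tr / 4.0 - det))
  let l1 = tr / 2.0 + disc   # long axis variance
  let l2 = tr / 2.0 - disc   # short axis variance
  result = sqrt(l1 / l2)     # ratio of the two RMS values

when isMainModule:
  # a symmetric 4-pixel cluster has eccentricity 1
  doAssert abs(eccentricity(@[0.0, 1.0, 0.0, 1.0], @[0.0, 0.0, 1.0, 1.0]) - 1.0) < 1e-9
#+end_src

A perfectly round cluster yields $ε = 1$; elongated tracks yield $ε \gg 1$, matching the separation visible in the eccentricity comparison above.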
For each of these properties we define a probability density function $\mathcal{P}_i$, which can then be used to define the likelihood of a cluster with properties $(ε, f, l)$ to be signal-like:
#+NAME: eq:background:likelihood_def
\begin{equation}
\mathcal{L}(ε, f, l) = \mathcal{P}_{ε}(ε) \cdot \mathcal{P}_{f}(f) \cdot \mathcal{P}_l(l)
\end{equation}
where the subscript denotes the individual probability density and the argument corresponds to the value of each property. This raises the important question of what defines each individual probability density $\mathcal{P}_i$. In principle it can be defined by computing a normalized density distribution of a known dataset that contains representative signal-like data. The \cefe calibration data from CAST contains such representative data, if not for one problem: the properties used in the likelihood method are energy dependent, as seen in fig. sref:fig:background:eccentricity_photo_escape, a comparison of the eccentricity of X-rays from the photopeak of the \cefe calibration source with those from the escape peak. The CAST calibration data can only characterize two different energies, but the expected axion signal is a (model dependent) continuous spectrum. For this reason data was taken using an X-ray tube with 8 different target / filter combinations, providing the data needed to compute likelihood distributions for X-rays over a range of energies. The details will be discussed in the next section, [[#sec:cdl]].
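Evaluating eq. [[eq:background:likelihood_def]] then amounts to three lookups in the normalized reference distributions plus a product (in practice one works with \lnL, turning the product into a sum). The following is a minimal sketch with hypothetical binned PDFs; ~Pdf~, ~eval~ and ~likelihood~ are illustrative names, not the actual implementation.

#+begin_src nim
# Sketch: evaluate the likelihood from three binned, normalized
# reference distributions. Purely illustrative types and names.
type Pdf = object
  low, high: float     # value range covered by the histogram
  density: seq[float]  # normalized bin contents

proc eval(p: Pdf, x: float): float =
  ## Return the density of the bin containing `x` (0 outside range).
  if x < p.low or x >= p.high: return 0.0
  let idx = int((x - p.low) / (p.high - p.low) * p.density.len.float)
  p.density[idx]

proc likelihood(pE, pF, pL: Pdf, ecc, frac, len: float): float =
  pE.eval(ecc) * pF.eval(frac) * pL.eval(len)

when isMainModule:
  # toy distributions with 2 bins each on [0, 2)
  let pE = Pdf(low: 0.0, high: 2.0, density: @[0.8, 0.2])
  let pF = Pdf(low: 0.0, high: 2.0, density: @[0.3, 0.7])
  let pL = Pdf(low: 0.0, high: 2.0, density: @[0.5, 0.5])
  doAssert abs(likelihood(pE, pF, pL, 0.5, 1.5, 0.5) - 0.8 * 0.7 * 0.5) < 1e-12
#+end_src

A cut on this value (or on \lnL) then separates signal-like from background-like clusters at a chosen signal efficiency.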
#+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Calibration \\& background") (label "fig:background:eccentricity_signal_background") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/background/eccentricity_calibration_background.pdf")) (subfigure (linewidth 0.5) (caption "Photopeak \\& escape peak") (label "fig:background:eccentricity_photo_escape") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/background/eccentricity_photo_escape_peak.pdf")) (caption (subref "fig:background:eccentricity_signal_background") "Comparison of the eccentricity of clusters from a calibration and a background run. Background events are clearly much more eccentric on average. " (subref "fig:background:eccentricity_photo_escape") "The X-ray properties are energy dependent as can be seen in the comparison between the eccentricity of X-rays from the photopeak compared to those from the escape peak. A kernel density estimation is used in both figures.") (label "fig:background:properties_energy_type_dependent")) #+end_src *** TODOs for this section [/] :noexport: - [X] *REPHRASE SUCH THAT THIS XRAY DATA BECOMES A REQUIREMENT* -> Relates to second paragraph with numbers - [X] *REPLACE EVENT FIGURES BY EXAMPLE HISTOGRAM OF DIFFERENT PROPERTIES BACKGROUND / CALIBRATION* - [X] *ADD NOTE THAT THERE IS _ALSO_ AN ENERGY DEPENDENCE ON SIGNAL CLUSTER SHAPES* *** Generate plot of eccentricity signal vs. background :extended: We will now generate a plot comparing the eccentricity of signal events from calibration data to background events. In addition another comparison will be made between photopeak photons and escape peak photons to show that events at different energies have different shapes, motivating the need for the X-ray tube data at different energies. 
#+begin_src nim :tangle code/eccentricity_background_signal.nim import ggplotnim, nimhdf5 import ingrid / [tos_helpers, ingrid_types] proc read(f: string, run: int): DataFrame = withH5(f, "r"): result = h5f.readRunDsets( run = run, chipDsets = some((chip: 3, dsets: @["eccentricity", "centerX", "centerY", "energyFromCharge"])) ) proc main(calib, back: string) = # read data from each file, one fixed run with good statistics # cut on silver region let dfC = read(calib, 128) let dfB = read(back, 124) var df = bind_rows([("Calibration", dfC), ("Background", dfB)], "Type") .filter(f{float -> bool: inRegion(`centerX`, `centerY`, crSilver)}, f{float: `eccentricity` < 10.0}) ggplot(df, aes("eccentricity", fill = "Type")) + #geom_histogram(bins = 100, # hdKind = hdOutline, # position = "identity", # alpha = 0.5, # density = true) + xlab("Eccentricity") + ylab("Density") + geom_density(color = "black", size = 1.0, alpha = 0.7, normalize = true) + ggtitle("Eccentricity of calibration and background data") + themeLatex(fWidth = 0.5, width = 600, baseTheme = sideBySide) + margin(left = 2.2, right = 5.2) + ggsave("Figs/background/eccentricity_calibration_background.pdf", useTeX = true, standalone = true) proc splitPeaks(x: float): string = if x >= 2.75 and x <= 3.25: "Escapepeak" elif x >= 5.55 and x <= 6.25: "Photopeak" else: "Unclear" let dfP = dfC .mutate(f{float: "Peak" ~ splitPeaks(`energyFromCharge`)}) .filter(f{string: `Peak` != "Unclear"}, f{`eccentricity` <= 2.0}) ggplot(dfP, aes("eccentricity", fill = "Peak")) + #geom_histogram(bins = 50, # hdKind = hdOutline, # position = "identity", # alpha = 0.5, # density = true) + xlab("Eccentricity") + ylab("Density") + geom_density(color = "black", size = 1.0, alpha = 0.7, normalize = true) + ggtitle(r"$^{55}\text{Fe}$ photopeak (5.9 keV) and escapepeak (3 keV)") + themeLatex(fWidth = 0.5, width = 600, baseTheme = sideBySide) + margin(left = 2.2, right = 5.2) + ggsave("Figs/background/eccentricity_photo_escape_peak.pdf", useTeX = 
true, standalone = true)

when isMainModule:
  import cligen
  dispatch main
#+end_src

yielding [[~/phd/Figs/background/eccentricity_calibration_background.pdf]] and [[~/phd/Figs/background/eccentricity_photo_escape_peak.pdf]]

** CAST Detector Lab
:PROPERTIES:
:CUSTOM_ID: sec:cdl
:END:
In this section we will cover the X-ray tube data taken at the CAST Detector Lab (CDL) at CERN. First, we will show the setup in sec. [[#sec:cdl:setup]]. Next, the different target / filter combinations that were used will be discussed and the measurements presented in sec. [[#sec:cdl:measurements]]. These measurements are then used to define the probability densities for the likelihood method, see sec. [[#sec:cdl:derive_probability_density]]. Further, in sec. [[#sec:cdl:cdl_morphing]], we cover a few more details on the linear interpolation we perform to compute a likelihood distribution at an arbitrary energy. And finally, the same measurements can be used to determine the energy resolution of the detector at different energies, sec. [[#sec:cdl:energy_resolution]].

The data presented here was also part of the master thesis of Hendrik Schmick cite:SchmickMaster. Further, the selection of lines and the general approach follow the ideas used for the single GridPix detector in cite:krieger2018search, with some notable differences. For the reasoning behind one particular difference in the data treatment, see appendix [[#sec:appendix:fit_by_run_justification]].
#+begin_quote
Note: in the course of the CDL related sections the term target/filter combination (implicitly including the used high voltage setting) is used interchangeably with the main fluorescence line (or just 'the fluorescence line') targeted in a specific measurement. Be careful while reading about applied high voltages in $\si{kV}$ and energies in $\si{keV}$. The produced fluorescence lines typically have about 'half' the energy in $\si{keV}$ as the applied voltage in $\si{kV}$. See tab. [[tab:cdl:run_overview_tab]] in sec.
[[#sec:cdl:measurements]] for the precise relation.
#+end_quote
*** TODOs for this section [/] :noexport:
- [ ] *(RE)WRITE THE FIT BY RUN JUSTIFICATION SECTION AND REFERENCE HERE*
- [X] *FIND OUT WHAT CHIP CALIBRATION USED FOR CDL* -> It was the Run-3 calibration. Checked by comparing the ~fsr~ files in the CDL TOS run directories with those of the ~ChipCalibration~ directory of the TPA resources. The ones from Run-3 match and the ones from Run-2 don't.
- [X] *REFERENCE HENDRIK MSC FOR CDL*
- [X] *CREATE PLOT SIMILAR TO*: ~/org/Figs/statusAndProgress/cdl_vs_background/eccentricity_ridgeline_XrayReferenceFile2018.h5_2018.pdf~ but using our custom colors and shown as a KDE similar to the median energy cluster KDE plot!
- [ ] *INCLUDE ALL CDL PLOTS GENERATED BY ~cdl_spectrum_creation~ WE DON'T PUT INTO MAIN THESIS INTO EXTENDED!*
- [ ] *reintroduce something along these lines?*
  #+begin_comment
  The distributions which the previous background rate plots were based on were obtained in 2014 with the Run-1 detector at the CAST Detector Lab (CDL). Using a different detector for this extremely sensitive part of the analysis chain will obviously introduce systematic errors. Thus, new calibration data was taken with the current Run-2 and Run-3 detector from 15-19 Feb 2019.
  #+end_comment
*** CDL setup
:PROPERTIES:
:CUSTOM_ID: sec:cdl:setup
:END:
The CAST detector lab provides a vacuum test stand, which contains an X-ray tube. A Micromegas-like detector can easily be mounted to the rear end of the test stand. An X-ray tube uses a filament to produce free electrons, which are then accelerated by a high voltage of the order of multiple $\si{kV}$. The main part of the setup is a rotatable wheel inside the vacuum chamber, which contains a set of 18 positions with 8 different target materials, as seen in tab. [[tab:cdl:targets]].
The highly energetic electrons interacting with the target material generate a continuous Bremsstrahlung spectrum with characteristic lines depending on the target. A second rotatable wheel contains a set of 11 different filters, see tab. [[tab:cdl:filters]]. These can be used to filter out undesired parts of the generated spectrum by choosing a filter that is opaque in those energy ranges.

As mentioned previously in sec. [[#sec:cast:timeline]], the detector was dismounted from CAST in Feb. 2019 and installed in the CAST detector lab on <2019-02-14 Thu>. During the week from <2019-02-15 Fri> to <2019-02-21 Thu>, X-ray tube data was taken in the CDL. Fig. sref:fig:cdl:cdl_setup shows the whole vacuum test stand, which contains the X-ray tube on the front left end and the Septemboard detector installed on the rear right, visible by the red HV cables and yellow HDMI cable. In fig. sref:fig:cdl:detector_setup we see the whole detector part from above, with the Septemboard detector installed to the vacuum test stand on the left side. The water cooling system is seen in the bottom right, with the power supply above it. The copper shielded box slightly right of the center is the Ortec pre-amplifier of the FADC, which is connected via a heavily shielded LEMO cable to the Septemboard detector. This cable was available in the CAST detector lab and proved invaluable for the measurements, as it essentially removed any noise visible in the FADC signals (compared to the significant noise issues encountered at CAST, sec. [[#sec:cast:data_taking_woes_2017]]). Given the activation threshold of the FADC with CAST amplifier settings (see sec. [[#sec:calib:fadc:amplifier_settings]]) at around $\SI{2}{keV}$, the amplification needed to be adjusted in the CDL on a per-target basis. The shielded cable allowed the FADC to act as a trigger even for $\SI{277}{eV}$ $\ce{C}_{Kα}$ X-rays without any noise problems.
[fn:shielded_lemo_cable]
#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "Full setup")
  (label "fig:cdl:cdl_setup")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/phd/Figs/CDL/IMG_20190214_155459.jpg"))
 (subfigure (linewidth 0.5)
  (caption "Detector setup")
  (label "fig:cdl:detector_setup")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/phd/Figs/CDL/IMG_20190214_191752.jpg"))
 (caption (subref "fig:cdl:cdl_setup")
  " shows the full vacuum test stand containing the X-ray tube with the Septemboard detector installed at the rear, visible by the red HV and yellow HDMI cables. "
  (subref "fig:cdl:detector_setup")
  " is a view of the detector setup from above. On the left hand side is the detector mounted to the vacuum setup. The water cooling is seen in the bottom right, connected via the blue tubes. The gas supply is in red tubing and the power supply is visible on the right above the water cooling (with a green Phoenix connector). The copper shielded cable is a LEMO cable for the FADC signal going to the pre-amplifier (with the University of Bonn sticker).")
 (label "fig:cdl:cdl"))
#+end_src

#+CAPTION: Table of all available target materials in the CAST detector lab and their respective
#+CAPTION: position on a rotatable wheel.
#+NAME: tab:cdl:targets
#+ATTR_LATEX: :booktabs t
| Target Material | Position |
|-----------------+----------|
| Ti              | 1        |
| Ag              | 2        |
| Mn              | 3        |
| C               | 4        |
| BN              | 5        |
| Au              | 6        |
| Al              | 7 - 11   |
| Cu              | 12 - 18  |

#+CAPTION: Table of all available filters in the CAST detector lab and their respective
#+CAPTION: position on a rotatable wheel.
#+NAME: tab:cdl:filters #+ATTR_LATEX: :booktabs t | Filter material | Position | |---------------------------------------------+----------| | Ni 0.1 mm | 1 | | Al 5 µm | 2 | | PP G12 (EPIC) | 3 | | PP G12 (EPIC) | 4 | | Cu 10 µm | 5 | | Ag 5 µm (99.97%) AG000130/24 | 6 | | Fe 50 µm | 7 | | Mn foil 50 µm (98.7% + permanent polyester) | 8 | | Co 50 µm (99.99%) CO0000200 | 9 | | Cr 40 µm (99.99% + permanent polyester) | 10 | | Ti 50 µm (99.97%) TI000315 | 11 | [fn:shielded_lemo_cable] In hindsight it seems obvious that a similarly shielded LEMO cable should have been used for the measurements at CAST. Unfortunately, due to lack of experience with electromagnetic interference this was not considered before. **** TODOs for this section [/] :noexport: - [ ] *SOMEWHERE HERE ADD REFERENCE TO WHAT EPIC REFERS TO!* -> See the extra info subsection below. Link to a website and we need a link to some XMM Newton paper!!! - [ ] *CHECK XRAY TUBE EXPLANATION FOR CORRECTNESS!* - [ ] *SAY SOMETHING ABOUT TURBO AND SPUTTER ION PUMPS?* - [ ] *CONSIDER ADDING PICTURE OF TARGET AND FILTER WHEEL?* See Hendrik's thesis. He has pictures from Tobi. - [X] 2 pictures of setup, side by side - [X] one paragraph about how X-ray tube works. Bremsstrahlung via HV, hit target & then filter - [X] table of available targets & filters - [X] measurement dates (which week) - ? - [ ] *CONSIDER MOVING THESE TABLES TO AN APPENDIX!* *** CDL measurements :PROPERTIES: :CUSTOM_ID: sec:cdl:measurements :END: The measurements were performed with 8 different target and filter combinations, with a higher density towards lower energies due to the nature of the expected solar axion flux. For each target and filter at least two runs were measured, one /with/ and one /without/ using the FADC as a trigger. The latter was taken to collect more statistics in a shorter amount of time, as the FADC readout slows down the data taking speed due to increased dead time. 
Table [[tab:cdl:run_overview_tab]] provides an overview of all data taking runs, the target, filter and HV setting, the main X-ray fluorescence line targeted and its energy. Finally, the mean position of the main fluorescence line in the charge spectrum and its width, as determined by a fit, are shown. As can be seen, in some cases the position moves significantly between different runs, for example for the $\ce{Cu}-\ce{Ni}$ measurements. Also within a single run stronger variation can be seen, evident from the much larger line width in, for example, run $\num{320}$ compared to $\num{319}$. See appendix sec. [[#sec:appendix:cdl:all_spectra_fits_by_run]] for all measured spectra (pixel and charge spectra), where each plot contains all runs for a target / filter combination. For example, fig. [[sref:fig:appendix:cdl_charge_Cu-Ni-15kV_by_run]] shows runs 319, 320 and 345 of the $\ce{Cu}-\ce{Ni}$ measurements at $\SI{15}{kV}$, with the strong variation in the charge position of the peak visible between the runs.

The variability both between runs for the same target and filter as well as within a run shows that the detector was undergoing gas gain changes similar to the variations at CAST (sec. [[#sec:calib:detector_behavior_over_time]]). This can be explained by the correlation of the gas gain with temperature: the small laboratory underwent significant temperature changes due to the presence of 3 people, all running machines and the windows being freely opened and closed, in particular because of very warm temperatures relative to a typical February in Geneva. [fn:weather_at_cdl] Fig. [[fig:cdl:gas_gain_by_cdl_run]] shows the calculated gas gains (based on $\SI{90}{min}$ intervals) for each run, colored by the target/filter combination. The gas gain undergoes a strong, almost exponential change over the course of the data taking campaign.
Because of this variation between runs, each run is treated fully separately, in contrast to [[cite:&krieger2018search]], where all runs for one target / filter combination were combined.

#+CAPTION: Gas gain of each gas gain slice of $\SI{90}{min}$ in all CDL runs.
#+CAPTION: A significant decrease over the runs (equivalent to the week of data taking)
#+CAPTION: is visible. Multiple points for the same run correspond to multiple
#+CAPTION: gas gain time slices as explained in sec. [[#sec:calib:gas_gain_time_binning]].
#+NAME: fig:cdl:gas_gain_by_cdl_run
[[~/phd/Figs/CDL/gas_gain_by_run_and_tfkind.pdf]]

Fortunately, this significant change of the gas gain does not have an impact on the distributions of the cluster properties. See appendix [[#sec:appendix:fit_by_run:gas_gain_var_cluster_prop]] for comparisons of the cluster properties of the different runs (and thus different gas gains) for each target and filter combination.

As the main purpose is to use the CDL data to generate reference distributions for certain cluster properties, the relevant clusters that correspond to known X-ray energies must be extracted from the data. This is done in two different ways:
1. A set of fixed cuts (one set for each target/filter combination) is applied to each run, as presented in tab. [[tab:cdl:cdl_cleaning_cuts]]. This is the same set as used in cite:krieger2018search. Its main purpose is to remove events with multiple clusters and potential background contributions.
2. By cutting around the main fluorescence line in the charge spectrum in a $3σ$ region, for which the spectrum needs to be fitted with the expected lines, see sec. [[#sec:cdl:fits_to_spectra]]. This is done on a run-by-run basis.
The remaining data after both sets of cuts can then be combined for each target/filter combination to make up the distributions for the cluster properties as needed for the likelihood method.
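The $3σ$ selection can be sketched as follows, assuming the fit to the spectrum has already produced the mean $μ$ and width $σ$ of the main fluorescence line for one run. This is a minimal Python illustration; the numbers are purely illustrative, of the order of the $\ce{Cu}$ $\text{K}_{\alpha}$ values in tab. [[tab:cdl:run_overview_tab]].

#+begin_src python
import numpy as np

# Illustrative fit result for the main fluorescence line of one run
# (of the order of run 319, Cu Kα); values in electrons
mu, sigma = 9.5e5, 7.8e4

# Synthetic cluster charges: the line itself plus two background-like outliers
rng = np.random.default_rng(0)
charges = np.append(rng.normal(mu, sigma, 5_000), [2.0e5, 2.5e6])

# Keep only clusters within the 3σ region around the fitted line
mask = np.abs(charges - mu) < 3 * sigma
selected = charges[mask]
print(f"kept {mask.sum()} of {len(charges)} clusters")
#+end_src

The two artificial outliers are rejected, while essentially all clusters belonging to the line (about $\SI{99.7}{\percent}$ for a Gaussian line) survive.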
For a reference of the X-ray fluorescence lines (for more exact values and the $\alpha_1$, $\alpha_2$ components etc.) see tab. [[tab:theory:xray_fluorescence]]. The EPIC filter used refers to a filter developed for the EPIC camera of the XMM-Newton telescope. It is a bilayer of $\SI{1600}{\angstrom}$ polyimide and $\SI{800}{\angstrom}$ aluminium. [fn:epic_filter] For more information about the EPIC filters see references cite:struder2001xmm_pnccd,turner2001xmm_mos,barbera2003monitoring,barbera2016thin, in particular cite:barbera2016thin for an overview of the materials and production.

#+CAPTION: Overview of all runs taken behind the X-ray tube, whether they ran with or without FADC,
#+CAPTION: their targets, filters and high voltage setting and information about the major fluorescence
#+CAPTION: line and its energy. Finally, the mean $μ$ and width $σ$ of the main fluorescence line as
#+CAPTION: determined by the fit plus the resulting energy resolution $σ/μ$ is shown.
#+NAME: tab:cdl:run_overview_tab
#+ATTR_LATEX: :float sideways :booktabs t
|-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------|
| Run | FADC?
| Target | Filter | HV [kV] | Line | Energy [keV] | μ [e⁻] | σ [e⁻] | σ/μ | |-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------| | 319 | y | Cu | Ni | 15 | $\ce{Cu}$ $\text{K}_{\alpha}$ | 8.04 | $\num{9.509(21)e+05}$ | $\num{7.82(18)e+04}$ | $\num{8.22(19)e-02}$ | | 320 | n | Cu | Ni | 15 | | | $\num{9.102(22)e+05}$ | $\num{1.010(19)e+05}$ | $\num{1.110(21)e-01}$ | | 345 | y | Cu | Ni | 15 | | | $\num{6.680(12)e+05}$ | $\num{7.15(11)e+04}$ | $\num{1.070(16)e-01}$ | |-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------| | 315 | y | Mn | Cr | 12 | $\ce{Mn}$ $\text{K}_{\alpha}$ | 5.89 | $\num{6.321(29)e+05}$ | $\num{9.44(26)e+04}$ | $\num{1.494(41)e-01}$ | | 323 | n | Mn | Cr | 12 | | | $\num{6.328(11)e+05}$ | $\num{7.225(89)e+04}$ | $\num{1.142(14)e-01}$ | | 347 | y | Mn | Cr | 12 | | | $\num{4.956(10)e+05}$ | $\num{6.211(82)e+04}$ | $\num{1.253(17)e-01}$ | |-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------| | 325 | y | Ti | Ti | 9 | $\ce{Ti}$ $\text{K}_{\alpha}$ | 4.51 | $\num{4.83(31)e+05}$ | $\num{4.87(83)e+04}$ | $\num{1.01(18)e-01}$ | | 326 | n | Ti | Ti | 9 | | | $\num{4.615(87)e+05}$ | $\num{4.93(25)e+04}$ | $\num{1.068(57)e-01}$ | | 349 | y | Ti | Ti | 9 | | | $\num{3.90(23)e+05}$ | $\num{4.57(57)e+04}$ | $\num{1.17(16)e-01}$ | |-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------| | 328 | y | Ag | Ag | 6 | $\ce{Ag}$ $\text{L}_{\alpha}$ | 2.98 | $\num{3.0682(97)e+05}$ | $\num{3.935(79)e+04}$ | $\num{1.283(26)e-01}$ | | 329 | n | Ag | Ag | 6 | | | $\num{3.0349(51)e+05}$ | $\num{4.004(40)e+04}$ | $\num{1.319(13)e-01}$ | | 351 | 
y | Ag | Ag | 6 | | | $\num{2.5432(63)e+05}$ | $\num{3.545(49)e+04}$ | $\num{1.394(20)e-01}$ | |-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------| | 332 | y | Al | Al | 4 | $\ce{Al}$ $\text{K}_{\alpha}$ | 1.49 | $\num{1.4868(50)e+05}$ | $\num{2.027(38)e+04}$ | $\num{1.364(26)e-01}$ | | 333 | n | Al | Al | 4 | | | $\num{1.3544(30)e+05}$ | $\num{2.539(24)e+04}$ | $\num{1.875(18)e-01}$ | |-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------| | 335 | y | Cu | EPIC | 2 | $\ce{Cu}$ $\text{L}_{\alpha}$ | 0.930 | $\num{8.885(99)e+04}$ | $\num{1.71(11)e+04}$ | $\num{1.93(13)e-01}$ | | 336 | n | Cu | EPIC | 2 | | | $\num{7.777(94)e+04}$ | $\num{2.39(14)e+04}$ | $\num{3.08(19)e-01}$ | | 337 | n | Cu | EPIC | 2 | | | $\num{7.86(15)e+04}$ | $\num{2.47(11)e+04}$ | $\num{3.14(15)e-01}$ | |-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------| | 339 | y | Cu | EPIC | 0.9 | $\ce{O }$ $\text{K}_{\alpha}$ | 0.525 | $\num{5.77(11)e+04}$ | $\num{1.38(22)e+04}$ | $\num{2.39(39)e-01}$ | | 340 | n | Cu | EPIC | 0.9 | | | $\num{4.778(31)e+04}$ | $\num{1.230(50)e+04}$ | $\num{2.58(11)e-01}$ | |-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------| | 342 | y | C | EPIC | 0.6 | $\ce{C }$ $\text{K}_{\alpha}$ | 0.277 | $\num{4.346(36)e+04}$ | $\num{1.223(29)e+04}$ | $\num{2.814(70)e-01}$ | | 343 | n | C | EPIC | 0.6 | | | $\num{3.952(20)e+04}$ | $\num{1.335(14)e+04}$ | $\num{3.379(40)e-01}$ | |-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------| #+CAPTION: 
Cuts applied to the CDL datasets in order to roughly clean them of potential
#+CAPTION: background events and double hits (reconstructed as a single cluster).
#+CAPTION: RMS refers to the transverse RMS of the clusters.
#+NAME: tab:cdl:cdl_cleaning_cuts
#+ATTR_LATEX: :align lllrrrrr
|--------+--------+-------------------------------+-----+--------+-----------+-----------+--------------|
| Target | Filter | Line                          |  HV | length | RMS_T_min | RMS_T_max | eccentricity |
|--------+--------+-------------------------------+-----+--------+-----------+-----------+--------------|
| Cu     | Ni     | $\ce{Cu}$ $\text{K}_{\alpha}$ |  15 |        | 0.1       | 1.0       | 1.3          |
| Mn     | Cr     | $\ce{Mn}$ $\text{K}_{\alpha}$ |  12 |        | 0.1       | 1.0       | 1.3          |
| Ti     | Ti     | $\ce{Ti}$ $\text{K}_{\alpha}$ |   9 |        | 0.1       | 1.0       | 1.3          |
| Ag     | Ag     | $\ce{Ag}$ $\text{L}_{\alpha}$ |   6 | 6.0    | 0.1       | 1.0       | 1.4          |
| Al     | Al     | $\ce{Al}$ $\text{K}_{\alpha}$ |   4 |        | 0.1       | 1.1       | 2.0          |
| Cu     | EPIC   | $\ce{Cu}$ $\text{L}_{\alpha}$ |   2 |        | 0.1       | 1.1       | 2.0          |
| Cu     | EPIC   | $\ce{O }$ $\text{K}_{\alpha}$ | 0.9 |        | 0.1       | 1.1       | 2.0          |
| C      | EPIC   | $\ce{C }$ $\text{K}_{\alpha}$ | 0.6 | 6.0    | 0.1       | 1.1       |              |
|--------+--------+-------------------------------+-----+--------+-----------+-----------+--------------|

[fn:weather_at_cdl] The measurement campaign at the CDL took place in February. However, the weather was nice and sunny most of the week, if I remember correctly. The laboratory has windows towards the south-east. With reasonably cold outside temperatures and sunshine warming the lab in addition to machines and people, opening and closing the windows changed the temperature on short time scales, likely contributing significantly to the detector instability. In the extended version of this thesis you'll find a section below this one containing the weather data at the time of the CDL data taking.

[fn:epic_filter] The filter in the CDL was indicated as 'G12', which based on the references should be the 'medium filter'. Hence the numbers used.
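**** Energy resolution from the fit parameters :extended:
The energy resolution $σ/μ$ and its uncertainty quoted in tab. [[tab:cdl:run_overview_tab]] follow from the fitted line position and width by standard Gaussian error propagation. A short sketch in Python, using the run 319 ($\ce{Cu}$ $\text{K}_{\alpha}$) parameters from the table:

#+begin_src python
from math import sqrt

# Fit parameters of run 319 (Cu Kα) from tab. run_overview_tab
mu,    dmu    = 9.509e5, 0.021e5   # line position [e⁻] and its uncertainty
sigma, dsigma = 7.82e4,  0.18e4    # line width [e⁻] and its uncertainty

res  = sigma / mu                  # relative line width = energy resolution
dres = res * sqrt((dsigma / sigma)**2 + (dmu / mu)**2)
print(f"σ/μ = {res:.4f} ± {dres:.4f}")  # σ/μ = 0.0822 ± 0.0019
#+end_src

matching the $\num{8.22(19)e-2}$ quoted in the table.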
**** TODOs from the section above [9/15] :noexport: - [ ] *REWRITE SENTENCE IN MAIN TEXT ABOUT TEMPERATURE?* -> ROBERTO COMMENT This section contains the more generic TODOs that are longer / done and have no specific relevance to an existing paragraph / fig / tab - [X] *THIS STRONGER VARIATION IS NOT REALLY VISIBLE IN THE WAY WE LOOK AT THE DATA NOW, I.E. VIA FITS TO FULL SPECTRA!* - [X] *HAVE EXPLICIT SUBSECTION IN ABOVE APPENDIX ONLY ABOUT PROPERTIES PER ENERGY* -> Note: this is too important as a general concept to not show up in the thesis at all! So not only for extended version, but at least in appendix! - [ ] *JUST NEED TO CAPTION ALL PLOTS IN APPENDIX* - [ ] *CONSIDER TAKING EXPONENTIAL PART 10⁵ OF CHARGES AND PUT INTO HEADER!* - [X] *ADD SCHMICK THESIS REFERENCE* - [X] *PLOT OF ALL SPECTRA AS SINGLE W/ CALIBRATED ENERGY* - [X] *FOR APPENDIX/EXTENDED: HISTOGRAMS OF ALL SPECTRA BY RUN* -> what we hacked into cdl_spectrum_creation! - [X] *REMOVE PIXEL MEAN* -> replace by some measure of charge position? More importantly, we need to replace the table anyway due to our new fit by run approach. Better to show the mean charges and then the σ of the line width? The variance of that over the different runs is a good enough approximator for "run internal" variation. - [X] *LOOK INTO THE CDL TEMPERATURES THAT WE HAVE. CREATE A PLOT OF ALL TEMP DATA NOW THAT IT'S PART OF RAW_DATA*. -> possibly just add them to the gas gain plot here. - [ ] once we have that maybe decide if it's worth including them. For sure include those in the extended version! - [ ] *CREATE PLOT LIKE RIDGE LINE PLOT BUT NOT COMPARING BETWEEN RUNS, BUT BETWEEN ENERGIES!* -> As in combine all Mn-Cr data and all Cu-Ni data and create same ridges. If my understanding is correct there _should_ be a difference there, despite there not being a difference in the cases of different gas gains! However, I'm not even sure how different the change was in the first place between Mn-Cr and Cu-Ni. 
Need to check that of course (well I guess that plot _would_ be such a comparison). - [ ] GENERAL OUTLINE OF MEASUREMENT / IDEAS SECTION: - [X] explain idea behind it: different targets & filters for different energies. Same ones as krieger - data is handled on a per run basis, where we have 2-3 runs per target & filter. Always at least one with and one without FADC to collect more data (FADC readout slows out readout a bit) - clean the data slightly, depending on target/filter, cleaning cuts - perform fit according to expected lines on cleaned data _per run_ - main fluorescence line is the important one, as that one defines a further cut to extract clusters matching that energy - optional: each run is energy calibrated based on the mean position of the main peak in charge and its known charge position, *different from regular energy calibration!* - cleaned data + individual fits extract that data which is used to define reference probability distributions for each energy! - combine PDFs to get likelihood, done. - [ ] *MAYBE TEMPERATURE REFERENCE ONLY IN FOOTNOTE?* - [X] *HOW VARIANCE IN TABLE DETERMINED?* -> I can't figure out anymore how the variance was calculated in the table. For that reason I suppose the better idea is to make a plot of: gas gain in our 90 min bins with colors based on target/filter and shape based on run? For run we need to replace real run numbers by run count (1, 2, 3, ...) to all target/filters start at same shape. That's easier to understand & clearer to have all data and doesn't rely on pixel information. Ah, we can have discrete X-axis actually or run number as x! then we don't even need different shapes? -> Not used anymore now, so doesn't matter. - [X] *UPDATE PEAK POSITIONS IN THE TABLE!* - [X] *AT LEAST THE Cu EPIC 2 KV ROW FIT FUNCTION IS WRONG!! NOT ONLY ONE AND NOT Kα!* -> Also: Cu EPIC 2kV is Cu Lα and Lβ, but their energies: α = 929.7, 929.7, β = 949.8 That's a single pixel difference! 
Our plot for Cu EPIC 2 kV shows 2 very clear peaks though. Those *cannot* be two lines, but must be detector changing its behavior again! CHeck runs above. Also explains why the charge spectrum of this only shows a single one! -> *NOTE*: When plotting the hits information by run for the Cu EPIC 2kV we see that the 2 peak structure is only visible in *2* of 3 runs. 335 has one, 336 and 337 have two. And the latter two are the runs *without* the FADC! So likely a double hit thing? The thing is though the main peak present in the FADC run is the *larger* one of the two present in 336, 337. If it was double hits we'd expect the smaller one to be that. -> Looking at event displays of 336: There's a *large* number of multi hit events and some that at least _look_ like higher energetic X-rays. Quite possibly though those could also be higher energy events. The run with FADC looks much better in this regard. It's still unclear though, why the peaks in the runs w/o FADC are at even lower pixel counts... -> how to proceed? -> CDL program cleaned up. Almost everything looks but, but Ti-Ti 9kV charge is a bit off in NLOPT fit. - [X] Investigate. Fixed. - [X] Also: the error bars on feCharge are huge in the energy resolution plot now. Fixed. - [ ] *NOTE: ALSO OTHER FUNCTIONS ARE NOT QUITE CORRECT. THEY CURRENTLY CONTAIN THE STUFF THAT IS CONTAINED IN THE SPECTRUM, BUT NOT WHAT WE FIT! ALSO THE FIXED PARAMETERS ARE NOT SHOWN!* - [ ] *APPENDIX WITH THE EXACT FIT FUNCTIONS AND THE FIXED PARAMETERS!* (well, this is what the table is for, no?) - [X] *MAKE NOTES ABOUT VARIABILITY, TEMPERATURE CHANGES & SLIGHTLY HIGHER PRESSURE IN DETECTOR THAN NORMAL* -> in hindsight shouldn't have opened / closed windows etc. - [X] *CHANGE THE USED CHARGES AGAIN IN ~cdl_cuts.nim~ AFTER OUR NEW FITS!* -> We now do everything on a per run basis and write the charge bounds to the H5 file and use those! 
- [ ] *NOTE*: We do have some information about FADC gains in CDL run in ~CDL_measurements.org~ in the text portion of the notes for each target / filter! - [X] *LOOK UP LOCAL WEATHER IN GENEVA DURING WEEK OF CDL DATA TAKING AND CREATE A PLOT!* - [ ] *NOTE THAT GAS PRESSURES DURING CDL WERE:* - <2019-02-15 Fri 15:06>: 1052 mbar - <2019-02-16 Sat 15:01>: 1053 mbar - <2019-02-17 Sun 10:25>: 1052 mbar - <2019-02-19 Tue 16:17>: 1052 mbar **** Table of fit lines :extended: This is the full table. #+CAPTION: The fun table! #+NAME: test #+ATTR_LATEX: :float sideways | cuNi15 | mnCr12 | tiTi9 | agAg6 | alAl4 | cuEpic2 | cuEpic0_9 | cEpic0_6 | |-----------------------------------+----------------------------------+------------------------------------------------+----------------------------------------------+----------------------+--------------------------------------------------+-----------------------------------------+--------------------------------------------| | $\text{EG}(\ce{Cu}_{Kα})$ | $\text{EG}(\ce{Mn}_{Kα})$ | $\text{EG}(\ce{Ti}_{Kα})$ | $\text{EG}(\ce{Ag}_{Lα})$ | $\text{EG}(Al_{Kα})$ | $\text{G}(\ce{Cu}_{Lα})$ | $\text{G}(O_{Kα})$ | $\text{G}(\ce{C}_{Kα})$ | | $\text{EG}(\ce{Cu}^{\text{esc}})$ | $\text{G}(\ce{Mn}^{\text{esc}})$ | $\text{G}(\ce{Ti}^{\text{esc}}_{Kα})$ | G: | | G: | #G: | G: | | | | #name = $\ce{Ti}^{\text{esc}}_{Kα}$ | name = $\ce{Ag}_{Lβ}$ | | name = $\ce{Cu}_{Lβ}$ | # name = $\ce{C}_{Kα}$ | name = $\ce{O}_{Kα}$ | | | | # $μ = eμ(\ce{Ti}_{Kα}) · \frac{1.537}{4.511}$ | $N = eN(\ce{Ag}_{Lα}) · 0.1$ | | $N = N(\ce{Cu}_{Lα}) / 5.0$ | # $μ = μ(O_{Kα}) · (0.277/0.525)$ | $μ = μ(\ce{C}_{Kα}) · \frac{0.525}{0.277}$ | | | | G: | $μ = eμ(\ce{Ag}_{Lα}) · \frac{3.151}{2.984}$ | | $μ = μ(\ce{Cu}_{Lα}) · \frac{0.9498}{0.9297}$ | # $σ = σ(O_{Kα})$ | $σ = σ(\ce{C}_{Kα})$ | | | | name = $\ce{Ti}^{\text{esc}}_{Kβ}$ | $σ = eσ(\ce{Ag}_{Lα})$ | | $σ = σ(\ce{Cu}_{Lα})$ | #G: | | | | | $μ = eμ(\ce{Ti}_{Kα}) · \frac{1.959}{4.511}$ | | | G: | # name = 
$\ce{Fe}_{Lα}β$ | | | | | $σ = σ(\ce{Ti}^{\text{esc}}_{Kα})$ | | | name = $\ce{O}_{Kα}$ | # $μ = μ(O_{Kα}) · \frac{0.71}{0.525}$ | | | | | G: | | | $N = N(\ce{Cu}_{Lα}) / 3.5$ | # $σ = σ(O_{Kα})$ | | | | | name = $\ce{Ti}_{Kβ}$ | | | $μ = μ(\ce{Cu}_{Lα}) · \frac{0.5249}{0.9297}$ | #G: | | | | | $μ = eμ(\ce{Ti}_{Kα}) · \frac{4.932}{4.511}$ | | | $σ = σ(\ce{Cu}_{Lα}) / 2.0$ | # name = $\ce{Ni}_{Lα}β$ | | | | | $σ = eσ(\ce{Ti}_{Kα})$ | | | | # $μ = μ(O_{Kα}) · \frac{0.86}{0.525}$ | | |-----------------------------------+----------------------------------+------------------------------------------------+----------------------------------------------+----------------------+--------------------------------------------------+-----------------------------------------+--------------------------------------------| | cuNi15 Q | mnCr12 Q | tiTi9 Q | agAg6 Q | alAl4 Q | cuEpic2 Q | cuEpic0_9 Q | cEpic0_6 Q | |-----------------------------------+----------------------------------+------------------------------------------------+----------------------------------------------+----------------------+--------------------------------------------------+-----------------------------------------+--------------------------------------------| | $\text{G}(\ce{Cu}_{Kα})$ | $\text{G}(\ce{Mn}_{Kα})$ | $\text{G}(\ce{Ti}_{Kα})$ | $\text{G}(\ce{Ag}_{Lα})$ | $\text{G}(Al_{Kα})$ | $\text{G}(\ce{Cu}_{Lα})$ | $\text{G}(O_{Kα})$ | $\text{G}(\ce{C}_{Kα})$ | | $\text{G}(\ce{Cu}^{\text{esc}})$ | $\text{G}(\ce{Mn}^{\text{esc}})$ | $\text{G}(\ce{Ti}^{\text{esc}}_{Kα})$ | G: | | G: | G: | G: | | | | #name = $\ce{Ti}^{\text{esc}}_{Kα}$ | name = $\ce{Ag}_{Lβ}$ | | name = $\ce{Cu}_{Lβ}$ | name = $\ce{C}_{Kα}$ | name = $\ce{O}_{Kα}$ | | | | # $μ = eμ(\ce{Ti}_{Kα}) · \frac{1.537}{4.511}$ | $N = N(\ce{Ag}_{Lα}) · 0.1$ | | $N = N(\ce{Cu}_{Lα}) / 5.0$ | $N = N(O_{Kα}) / 10.0$ | $μ = μ(\ce{C}_{Kα}) · \frac{0.525}{0.277}$ | | | | G: | $μ = μ(\ce{Ag}_{Lα}) · \frac{3.151}{2.984}$ | | $μ = μ(\ce{Cu}_{Lα}) · 
\frac{0.9498}{0.9297}$ | $μ = μ(O_{Kα}) · \frac{277.0}{524.9}$ | $σ = σ(\ce{C}_{Kα})$ | | | | name = $\ce{Ti}^{\text{esc}}_{Kβ}$ | $σ = σ(\ce{Ag}_{Lα})$ | | $σ = σ(\ce{Cu}_{Lα})$ | $σ = σ(O_{Kα})$ | | | | | $μ = μ(\ce{Ti}_{Kα}) · \frac{1.959}{4.511}$ | | | # $\text{G}: | | | | | | $σ = σ(\ce{Ti}^{\text{esc}}_{Kα})$ | | | # name = $\ce{O}_{Kα}$ | | | | | | G: | | | # $N = N(\ce{Cu}_{Lα}) / 4.0$ | | | | | | name = $\ce{Ti}_{Kβ}$ | | | # $μ = μ(\ce{Cu}_{Lα}) · \frac{0.5249}{0.9297}$ | | | | | | $μ = μ(\ce{Ti}_{Kα}) · \frac{4.932}{4.511}$ | | | # $σ = σ(\ce{Cu}_{Lα}) / 2.0$ | | | | | | $σ = σ(\ce{Ti}_{Kα})$ | | | | | |
**** Extra info on target materials :extended:
- [X] *EXTEND THIS TO INCLUDE CITATIONS*
- [ ] *ADD OUR OWN POLYIMIDE + ALUMINUM TRANSMISSION PLOT*
  -> 1600 Å polyimide + 800 Å Al
  -> Q: What do we use for the polyimide? I guess the text below can tell us the atomic fractions in principle.
EPIC filters: https://www.cosmos.esa.int/web/xmm-newton/technical-details-epic
section 6 about filters contains:
#+begin_quote
There are four filters in each EPIC camera. Two are thin filters made of 1600 Å of poly-imide film with 400 Å of aluminium evaporated on to one side; one is the medium filter made of the same material but with 800 Å of aluminium deposited on it; and one is the thick filter. This is made of 3300 Å thick Polypropylene with 1100 Å of aluminium and 450 Å of tin evaporated on the film.
#+end_quote
i.e. the EPIC filters contain aluminum. That could explain why the Cu-EPIC 2 kV data contains something that might be either aluminum fluorescence or at least a continuous part of the spectrum that is not filtered, due to the absorption edge of aluminum there!
Relevant references: cite:struder2001xmm_pnccd,turner2001xmm_mos,barbera2003monitoring,barbera2016thin

In particular cite:barbera2016thin contains many more details about the EPIC filters and their actual composition:
#+begin_quote
Filter manufacturing process
The EPIC Thin and Medium filters manufactured by MOXTEX consist of a thin film of polyimide, with nominal thickness of 160 nm, coated with a single layer of aluminum whose nominal thickness is 40 nm for the Thin and 80 nm for the Medium filters, respectively. The polyimide thin films are produced by spin-coating of a polyamic acid (PAA) solution obtained by dissolving two precursor monomers (an anhydride and an amine) in an organic polar solvent. For the EPIC Thin and Medium filters the two precursors are the Biphenyldianhydride (BPDA) and the p-Phenyldiamine (PDA) (Dupont PI-2610), and the solvent is N-methyl-2-pyrrolidone (NMP) and Propylene Glycol Monomethyl Ether (Dupont T9040 thinner). To convert the PAA into polyimide, the solution is heated up to remove the NMP and to induce the imidization through the evaporation of water molecules. The film thickness is controlled by spin coating parameters, PAA viscosity, and curing temperature [19]. The polyimide thin membrane is attached with epoxy onto a transfer ring and the aluminum is evaporated in a few runs, distributed over 2–3 days, each one depositing a metal layer of about 20 nm thickness. The EPIC Thin and Medium flight qualified filters have been manufactured during a period of 1 year, from January’96 to January’97. Table 1 lists the full set of flight-qualified filters (Flight Model and Flight Spare) delivered to the EPIC consortium, together with their most relevant parameters.
Along with the production of the flight qualified filters, the prototypes and the qualification filters (not included in this list) have been manufactured and tested for the construction of the filter transmission model and to assess the stability in time of the Optical/UV transparency (opacity). Among these qualification filters are T4, G12, G18, and G19 that have been previously mentioned.
#+end_quote

Further it states that 'G12' refers to the *medium filter*:
#+begin_quote
UV/Vis transmission measurements in the range 190–1000 nm have been performed between May 1997 and July 2002 on one Thin (T4) and one medium (G12) EPIC on-ground qualification filters to monitor their time stability [16].
#+end_quote

PP G12 is the name written in the CDL documentation! Mystery solved.
**** Reconstruct all CDL data :extended:
Reconstructing all CDL data is done either by using ~runAnalysisChain~ on the directory (currently not tested) or by manually running ~raw_data_manipulation~ and ~reconstruction~ as follows. Note that you may of course change the paths; the paths chosen here are those in use during the writing process of the thesis.
#+begin_src sh
cd ~/CastData/data/CDL_2019
raw_data_manipulation -p . -r Xray -o ~/CastData/data/CDL_2019/CDL_2019_Raw.h5
#+end_src
And now for the reconstruction:
#+begin_src sh
cd ~/CastData/data/CDL_2019
reconstruction -i ~/CastData/data/CDL_2019/CDL_2019_Raw.h5 -o ~/CastData/data/CDL_2019/CDL_2019_Reco.h5
reconstruction -i ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 --only_charge
reconstruction -i ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 --only_fadc
reconstruction -i ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 --only_gas_gain
reconstruction -i ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 --only_energy_from_e
#+end_src
At this point the reconstructed CDL H5 file is generally done and ready to be used in the next section.
**** Generate plots and tables for this section :extended:
:PROPERTIES:
:CUSTOM_ID: sec:background:gen_plots_cdl_data
:END:
To generate the plots of the above section (and much more) as well as the table summarizing all the runs and their fits, we continue with the ~cdl_spectrum_creation~ tool as follows. Make sure the config file uses ~fitByRun~ to reproduce the same plots! ~ESCAPE_LATEX~ performs replacement of characters like ~&~ used in titles.
#+begin_src sh
F_WIDTH=0.9 ESCAPE_LATEX=true USE_TEX=true \
    cdl_spectrum_creation \
    -i ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
    --cutcdl --dumpAccurate --hideNloptFit \
    --plotPath ~/phd/Figs/CDL/fWidth0.9/
#+end_src
Note: This generates lots of plots, all placed in the output directory. The ~fWidth0.9~ subdirectory is used because here we wish to produce the plots with slightly smaller fonts, as the single charge spectrum we use in the next section is inserted standalone. Among them are:
- plots of the target / filter kinds with all fits and the raw / cut data as histogram
- plots of histograms of the raw data split *by run* -> useful to see the detector variability!
- energy resolution plot
- peak position of hits / charge vs energy (to see if indeed linear)
- calibrated, normalized histogram / KDE plot of the target/filter combinations in energy (calibrated using the main line that was fitted)
- ridgeline plots of KDEs of all the geometric cluster properties split by target/filter kind and run
- gas gain of each gas gain slice in the CDL data, split by run & target/filter kind
- temperature data of those CDL runs that contain it
- finally it generates the (almost complete) tables as shown in the above section for the data overview including μ, σ and μ/σ

Note about temperature plots:
- [[~/phd/Figs/CDL/septem_temperature_cdl.pdf]]
- [[~/phd/Figs/CDL/septem_imb_temperature_facet_cdl.pdf]]
  -> This plot including the IMB temperature shows us precisely what we expect: the temperatures of the IMB and the Septemboard are directly related, just offset from one another. This is very valuable information to have as a reference.
- [[~/phd/Figs/CDL/septem_temperature_facet_cdl.pdf]]
- [[~/phd/Figs/CDL/septem_temperature_facet_cdl_time_since_start.pdf]]

As we ran the command from ~/tmp/~ the output plots will be in a ~/tmp/out/CDL*~ directory. We copy the generated files, including ~calibrated_cdl_energy_histos.pdf~ and ~calibrated_cdl_energy_kde.pdf~, over to ~/phd/Figs/CDL/~. Finally, running the code snippet as mentioned above also produces table [[tab:cdl:run_overview_tab]] as well as the equivalent for the pixel spectra and writes them to stdout at the end!
***** TODOs for this section :noexport:
- [X] *DISCUSS HOW ~CDL_2019_RECO.h5~ IS GENERATED*
- [X] *GENERATE A PLOT OF THE GAS GAINS ENCOUNTERED DURING THE CDL MEASUREMENTS*
  -> As a simple point plot where the points can be colored by the target/filter kind
- [X] *ADD THE ANNOTATIONS FOR THE FIT PARAMETERS STILL AND NO NLOPT VERSIONS!*
- [X] *INSERT RIDGELINE PLOT OF DIFFERENT PROPERTIES PER RUN, SHOWING NO SIGNIFICANT CHANGE DESPITE DIFFERENT GAS GAINS*
**** Generate the ~calibration-cdl-2018.h5~ file :extended:
This is also done using ~cdl_spectrum_creation~. Assuming the reconstructed CDL H5 file exists, it is as simple as:
#+begin_src sh
cdl_spectrum_creation -i ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 --genCdlFile \
    --outfile ~/CastData/data/CDL_2019/calibration-cdl
#+end_src
which generates ~calibration-cdl-2018.h5~ for us. Make sure the config file uses ~fitByRun~ to reproduce the same results!
**** Get the table of fit lines from code :extended:
Our code ~cdl_spectrum_creation~ can be used to output the final fitting functions in a format like the table inserted in the main text, thanks to a compile-time (CT) declarative definition of the fit functions.
We do this by running: #+begin_src sh :results drawer cdl_spectrum_creation --printFunctions #+end_src #+RESULTS: :results: Charge functions: | Target | Filter | HV [kV] | Fit functions | | Cu | Ni | 15 | $G^{\ce{Cu}}_{Kα} + G^{\ce{Cu}, \text{esc}}_{Kα}$ | | Mn | Cr | 12 | $G^{\ce{Mn}}_{Kα} + G^{\ce{Mn}, \text{esc}}_{Kα}$ | | Ti | Ti | 9 | $G^{\ce{Ti}}_{Kα} + G^{\ce{Ti}, \text{esc}}_{Kα} + G^{\ce{Ti}}_{Kβ}\left( μ^{\ce{Ti}}_{Kα}·(\frac{4.932}{4.511}), σ^{\ce{Ti}}_{Kα} \right) + G^{\ce{Ti}, \text{esc}}_{Kβ}\left( μ^{\ce{Ti}}_{Kα}·(\frac{1.959}{4.511}), σ^{\ce{Ti}, \text{esc}}_{Kα} \right)$ | | Ag | Ag | 6 | $G^{\ce{Ag}}_{Lα} + G^{\ce{Ag}}_{Lβ}\left( N^{\ce{Ag}}_{Lα}·0.56, μ^{\ce{Ag}}_{Lα}·(\frac{3.151}{2.984}), σ^{\ce{Ag}}_{Lα} \right)$ | | Al | Al | 4 | $G^{\ce{Al}}_{Kα}$ | | Cu | EPIC | 2 | $G^{\ce{Cu}}_{Lα} + G^{\ce{Cu}}_{Lβ}\left( N^{\ce{Cu}}_{Lα}·(\frac{0.65}{1.11}), μ^{\ce{Cu}}_{Lα}·(\frac{0.9498}{0.9297}), σ^{\ce{Cu}}_{Lα} \right) + G^{\ce{O}}_{Kα}\left( \frac{N^{\ce{Cu}}_{Lα}}{3.5}, μ^{\ce{Cu}}_{Lα}·(\frac{0.5249}{0.9297}), \frac{σ^{\ce{Cu}}_{Lα}}{2.0} \right) + G_{\text{unknown}}$ | | Cu | EPIC | 0.9 | $G^{\ce{O}}_{Kα} + G^{\ce{C}}_{Kα}\left( \frac{N^{\ce{O}}_{Kα}}{10.0}, μ^{\ce{O}}_{Kα}·(\frac{277.0}{524.9}), σ^{\ce{O}}_{Kα} \right) + G_{\text{unknown}}$ | | C | EPIC | 0.6 | $G^{\ce{C}}_{Kα} + G^{\ce{O}}_{Kα}\left( μ^{\ce{C}}_{Kα}·(\frac{0.525}{0.277}), σ^{\ce{C}}_{Kα} \right)$ | Pixel functions: | Target | Filter | HV [kV] | Fit functions | | Cu | Ni | 15 | $EG^{\ce{Cu}}_{Kα} + EG^{\ce{Cu}, \text{esc}}_{Kα}$ | | Mn | Cr | 12 | $EG^{\ce{Mn}}_{Kα} + G^{\ce{Mn}, \text{esc}}_{Kα}$ | | Ti | Ti | 9 | $EG^{\ce{Ti}}_{Kα} + G^{\ce{Ti}, \text{esc}}_{Kα} + G^{\ce{Ti}}_{Kβ}\left( eμ^{\ce{Ti}}_{Kα}·(\frac{4.932}{4.511}), eσ^{\ce{Ti}}_{Kα} \right) + G^{\ce{Ti}, \text{esc}}_{Kβ}\left( eμ^{\ce{Ti}}_{Kα}·(\frac{1.959}{4.511}), σ^{\ce{Ti}, \text{esc}}_{Kα} \right)$ | | Ag | Ag | 6 | $EG^{\ce{Ag}}_{Lα} + G^{\ce{Ag}}_{Lβ}\left( eN^{\ce{Ag}}_{Lα}·0.56, 
eμ^{\ce{Ag}}_{Lα}·(\frac{3.151}{2.984}), eσ^{\ce{Ag}}_{Lα} \right)$ | | Al | Al | 4 | $EG^{\ce{Al}}_{Kα}$ | | Cu | EPIC | 2 | $G^{\ce{Cu}}_{Lα} + G^{\ce{Cu}}_{Lβ}\left( N^{\ce{Cu}}_{Lα}·(\frac{0.65}{1.11}), μ^{\ce{Cu}}_{Lα}·(\frac{0.9498}{0.9297}), σ^{\ce{Cu}}_{Lα} \right) + G^{\ce{O}}_{Kα}\left( \frac{N^{\ce{Cu}}_{Lα}}{3.5}, μ^{\ce{Cu}}_{Lα}·(\frac{0.5249}{0.9297}), \frac{σ^{\ce{Cu}}_{Lα}}{2.0} \right) + G_{\text{unknown}}$ | | Cu | EPIC | 0.9 | $G^{\ce{O}}_{Kα} + G_{\text{unknown}}$ | | C | EPIC | 0.6 | $G^{\ce{C}}_{Kα} + G^{\ce{O}}_{Kα}\left( μ^{\ce{C}}_{Kα}·(\frac{0.525}{0.277}), σ^{\ce{C}}_{Kα} \right)$ | :end: **** Historical weather data for Geneva during CDL data taking :extended: More or less location of CDL (side of building 17 at Meyrin CERN site): #+begin_src 46.22965, 6.04984 #+end_src (https://www.openstreetmap.org/search?whereami=1&query=46.22965%2C6.04984#map=19/46.22965/6.04984) Here we can see historic weather data plots from Meyrin from the relevant time range: https://meteostat.net/en/place/ch/meyrin?s=06700&t=2019-02-15/2019-02-21 This at least proves the weather was indeed very nice outside, sunny and over 10°C peak temperatures during the day! 
I exported the data and it's available here: [[~/phd/resources/weather_data_meyrin_cdl_data_taking.csv]]

Legend of the columns:
|----+--------+------------------------|
|  # | Column | Description            |
|----+--------+------------------------|
|  1 | time   | Time                   |
|  2 | temp   | Temperature            |
|  3 | dwpt   | Dew Point              |
|  4 | rhum   | Relative Humidity      |
|  5 | prcp   | Total Precipitation    |
|  6 | snow   | Snow Depth             |
|  7 | wdir   | Wind Direction         |
|  8 | wspd   | Wind Speed             |
|  9 | wpgt   | Peak Gust              |
| 10 | pres   | Air Pressure           |
| 11 | tsun   | Sunshine Duration      |
| 12 | coco   | Weather Condition Code |

#+begin_src nim
import ggplotnim, times
var df = readCsv("/home/basti/phd/resources/weather_data_meyrin_cdl_data_taking.csv")
  .mutate(f{string -> int: "timestamp" ~ parseTime(`time`, "yyyy-MM-dd HH:mm:ss", local()).toUnix()})
  .rename(f{"Temperature [°C]" <- "temp"}, f{"Pressure [mbar]" <- "pres"})
df = df.gather(["Temperature [°C]", "Pressure [mbar]"], "Data", "Value")
echo df
ggplot(df, aes("timestamp", "Value", color = "Data")) +
  facet_wrap("Data", scales = "free") +
  facetMargin(0.5) +
  geom_line() +
  xlab("Date", rotate = -45.0, alignTo = "right", margin = 2.0) +
  margin(bottom = 2.5, right = 4.75) +
  legendPosition(0.84, 0.0) +
  scale_x_date(isTimestamp = true,
               dateSpacing = initDuration(hours = 12),
               formatString = "yyyy-MM-dd HH:mm", timeZone = local()) +
  ggtitle("Meyrin, Geneva weather during CDL data taking campaign") +
  ggsave("/home/basti/phd/Figs/CDL/weather_meyrin_during_cdl_data_taking.pdf",
         width = 1000, height = 600)
#+end_src

#+RESULTS:
: DataFrame with 13 columns and 336 rows. Columns: Data, Value, snow, dwpt,
: wdir, coco, prcp, time, timestamp, rhum, wspd, tsun, wpgt.
: (full table output omitted)
The weather during the data taking campaign is shown in fig. [[fig:cdl:weather_meyrin_during_cdl_data_taking]]. It's good to see my memory served me right.

#+CAPTION: Weather during the data taking campaign at the CDL in Meyrin, Geneva.
#+CAPTION: The weather was sunny and warm for a February during the day. As a result
#+CAPTION: the small laboratory heated up significantly and opening / closing windows
#+CAPTION: led to significant temperature changes.
#+NAME: fig:cdl:weather_meyrin_during_cdl_data_taking
[[~/phd/Figs/CDL/weather_meyrin_during_cdl_data_taking.pdf]]

*** Charge spectra of the CDL data
:PROPERTIES:
:CUSTOM_ID: sec:cdl:fits_to_spectra
:END:

A mixture of Gaussian functions is fitted to the charge spectrum of each run. [fn:pixel_spectra_fitting] Specifically, the Gaussian expressed as
\begin{equation}
G(E; \mu, \sigma, N) = \frac{N}{\sqrt{2 \pi}} \exp\left(-\frac{(E - \mu)^2}{2\sigma^2}\right),
\end{equation}
is used and will be referenced as $G$ with possible arguments from here on. Note that while the physical X-ray transition lines are Lorentzian shaped [[cite:&weisskopf97_lorentzian]], the lines as detected by a gaseous detector are entirely dominated by detector resolution, resulting in Gaussian lines. For other types of detectors used in X-ray fluorescence (XRF) analysis, convolutions of Lorentzian and Gaussian distributions are used cite:huang86_profile_xrf,heckel87_low_peak_distor, or approximations thereof, so-called pseudo-Voigt functions cite:roberts75_lorentz,gunnink77_algo_lorentz. The functions fitted to the different spectra then depend on which fluorescence lines are visible. The full list of all combinations is shown in tab.
[[tab:cdl:fit_func_charge]]. Typically each line that is expected from the choice of target, filter and chosen voltage is fitted, if it can be visually identified in the data. [fn:choice_of_params] If no 'argument' is given to $G$ in the table, each parameter ($N, μ, σ$) is fitted. Any specific argument given implies that parameter is _fixed_ relative to another parameter. For example $μ^{\ce{Ag}}_{Lα}·\left(\frac{3.151}{2.984}\right)$ fixes the $Lβ$ line of silver to the fit parameter $μ^{\ce{Ag}}_{Lα}$ of the $Lα$ line with a multiplicative shift based on the relative energies of $Lα$ to $Lβ$. In some cases the amplitude is fixed between different lines where relative amplitudes cannot be easily predicted or determined, e.g. in one of the $\ce{Cu}-\text{EPIC}$ runs, the $\ce{C}_{Kα}$ line is fixed to a tenth of the $\ce{O}_{Kα}$ line. This is done to get a good fit based on trial and error. Finally, in each of the two $\ce{Cu}-\text{EPIC}$ spectra an 'unknown' Gaussian is added to cover the behavior of the data at higher charges.

The physical origin of this additional contribution is not entirely clear. The used EPIC filter contains an aluminum coating cite:barbera2016thin. As such it has the aluminum absorption edge at about $\SI{1.5}{keV}$, possibly matching the additional contribution for the $\SI{2}{kV}$ dataset. Whether it is from a continuous part of the spectrum or a form of aluminum fluorescence is not clear, however. This explanation does not work in the $\SI{0.9}{kV}$ case, which is why the line is deemed 'unknown'. It may also be a contribution of the specific polyimide used in the EPIC filter cite:barbera2016thin. Another possibility is multi-cluster events, which are too close together to be split, but whose properties are close enough to a single X-ray so as not to be removed by the cleaning cuts (which becomes more likely the lower the energy is).
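To make the tied-parameter convention concrete, the following is a minimal Nim sketch (the proc names are hypothetical, this is *not* the actual ~TimepixAnalysis~ implementation) of the silver model: only the $Lα$ parameters are free, while the $Lβ$ line reuses them with the fixed factors from the table (energy ratio $3.151/2.984$ and amplitude factor $0.56$):

#+begin_src nim
import math

proc G(E, mu, sigma, N: float): float =
  ## Gaussian as defined in the text; `N` is a free amplitude parameter.
  let arg = (E - mu)^2 / (2 * sigma^2)
  result = N / sqrt(2 * PI) * exp(-arg)

proc agModel(E, N, mu, sigma: float): float =
  ## Ag Lα + Lβ model: only the Lα parameters (N, μ, σ) are free fit
  ## parameters. The Lβ mean is tied via the ratio of the line energies,
  ## its amplitude via the fixed factor 0.56 from the table.
  result = G(E, mu, sigma, N)                          # Ag Lα, free
  result += G(E, mu * 3.151 / 2.984, sigma, N * 0.56)  # Ag Lβ, tied to Lα
#+end_src

A fit then only ever sees the three free $Lα$ parameters; the tied $Lβ$ line moves along automatically.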
The full set of all fits (including the pixel spectra) is shown in appendix [[#sec:appendix:cdl:all_spectra_fits_by_run]]. Fig. [[fig:cdl:ti_ti_charge_spectrum_run_326]] shows the charge spectrum of the $\ce{Ti}$ target and $\ce{Ti}$ filter at $\SI{9}{kV}$ for one of the runs. These plots show the raw data in the green histogram and the data left after application of the cleaning cuts (tab. [[tab:cdl:cdl_cleaning_cuts]]) in the purple histogram. The black line is the result of the fit as described in tab. [[tab:cdl:fit_func_charge]] with the resulting parameters shown in the box (parameters that were fixed are not shown). The black straight lines with grey error bands represent the $3σ$ region around the main fluorescence line, which is used to extract those clusters likely from the fluorescence line and therefore of known energy.

#+CAPTION: Charge spectrum of the $\ce{Ti}-\ce{Ti}$ spectrum at $\SI{9}{kV}$ from run
#+CAPTION: 326. The green histogram shows the raw data of this run and the purple
#+CAPTION: histogram the data left after the cleaning cuts are applied.
#+CAPTION: The purple line indicates the result of the fit as described in tab. [[tab:cdl:fit_func_charge]]
#+CAPTION: with the resulting parameters shown in the box. The black lines represent the
#+CAPTION: $3σ$ region around the main fluorescence line (with grey error bands),
#+CAPTION: which is later used to extract those clusters likely from the
#+CAPTION: fluorescence line and therefore of known energy.
#+NAME: fig:cdl:ti_ti_charge_spectrum_run_326
[[~/phd/Figs/CDL/fWidth0.9/Ti-Ti-9kVCharge-2019_run_326.pdf]]

\footnotesize
#+CAPTION: All fit functions for the charge spectra used for each target / filter combination. Typically each
#+CAPTION: line that is expected and visible in the data is fitted. $G$ is a normal Gaussian. No 'argument'
#+CAPTION: to $G$ means each parameter ($N, μ, σ$) is fitted. Specific arguments imply this
#+CAPTION: parameter is _fixed_ relative to another parameter, e.g.
$μ^{\ce{Ag}}_{Lα}·\left(\frac{3.151}{2.984}\right)$ fixes
#+CAPTION: the $Lβ$ line of silver to the fit parameter $μ^{\ce{Ag}}_{Lα}$ of $Lα$ with a multiplicative
#+CAPTION: shift based on the relative energies of $Lα$ to $Lβ$.
#+CAPTION: In some cases the amplitude is fixed between different lines, e.g. in one of the $\ce{Cu}-\text{EPIC}$
#+CAPTION: runs the $\ce{C}_{Kα}$ line is fixed to a tenth of the
#+CAPTION: $\ce{O}_{Kα}$ line. In each of the two $\ce{Cu}-\text{EPIC}$
#+CAPTION: spectra an 'unknown' Gaussian is added to cover the behavior of the data at higher charges. It is unclear
#+CAPTION: what the real cause is, in particular in the lower energy case.
#+NAME: tab:cdl:fit_func_charge
#+ATTR_LATEX: :environment longtable :width \textwidth :spread
|--------+--------+---------+---------------|
| Target | Filter | HV [kV] | Fit functions |
|--------+--------+---------+---------------|
| Cu | Ni | 15 | $G^{\ce{Cu}}_{Kα} + G^{\ce{Cu}, \text{esc}}_{Kα}$ |
| Mn | Cr | 12 | $G^{\ce{Mn}}_{Kα} + G^{\ce{Mn}, \text{esc}}_{Kα}$ |
| Ti | Ti | 9 | $G^{\ce{Ti}}_{Kα} + G^{\ce{Ti}, \text{esc}}_{Kα} + G^{\ce{Ti}}_{Kβ}\left( μ^{\ce{Ti}}_{Kα}·\left(\frac{4.932}{4.511}\right), σ^{\ce{Ti}}_{Kα} \right) + G^{\ce{Ti}, \text{esc}}_{Kβ}\left( μ^{\ce{Ti}}_{Kα}·\left(\frac{1.959}{4.511}\right), σ^{\ce{Ti}, \text{esc}}_{Kα} \right)$ |
| Ag | Ag | 6 | $G^{\ce{Ag}}_{Lα} + G^{\ce{Ag}}_{Lβ}\left( N^{\ce{Ag}}_{Lα}·0.56, μ^{\ce{Ag}}_{Lα}·\left(\frac{3.151}{2.984}\right), σ^{\ce{Ag}}_{Lα} \right)$ |
| Al | Al | 4 | $G^{\ce{Al}}_{Kα}$ |
| Cu | EPIC | 2 | $G^{\ce{Cu}}_{Lα} + G^{\ce{Cu}}_{Lβ}\left( N^{\ce{Cu}}_{Lα}·\left(\frac{0.65}{1.11}\right), μ^{\ce{Cu}}_{Lα}·\left(\frac{0.9498}{0.9297}\right), σ^{\ce{Cu}}_{Lα} \right) +
G^{\ce{O}}_{Kα}\left( \frac{N^{\ce{Cu}}_{Lα}}{3.5}, μ^{\ce{Cu}}_{Lα}·\left(\frac{0.5249}{0.9297}\right), \frac{σ^{\ce{Cu}}_{Lα}}{2.0} \right) + G_{\text{unknown}}$ | | Cu | EPIC | 0.9 | $G^{\ce{O}}_{Kα} + G^{\ce{C}}_{Kα}\left( \frac{N^{\ce{O}}_{Kα}}{10.0}, μ^{\ce{O}}_{Kα}·\left(\frac{277.0}{524.9}\right), σ^{\ce{O}}_{Kα} \right) + G_{\text{unknown}}$ | | C | EPIC | 0.6 | $G^{\ce{C}}_{Kα} + G^{\ce{O}}_{Kα}\left( μ^{\ce{C}}_{Kα}·\left(\frac{0.525}{0.277}\right), σ^{\ce{C}}_{Kα} \right)$ | |--------+--------+---------+-------------------------------------------------------------------------------------------------------------------------------------------------------------| \normalsize [fn:pixel_spectra_fitting] Note that similar fits can be performed for the pixel spectra as well. However, as these are not needed for anything further they are only presented in the extended version of the thesis. [fn:choice_of_params] The restriction to what is 'visually identifiable' is to avoid the need to fit lines lost in detector resolution and signal to noise ratio. In addition, it avoids overfitting to large numbers of parameters (which brings its own challenges and problems). **** TODOs of the above section :noexport: - [X] *REWRITE BELOW, FIND CITATION FOR LORENTZ SHAPE OF X-ray transitions* -> And also this could maybe go to where we first show. Done. - [X] *REWRITE ABOVE PART ABOUT UNKNOWN LINE ACCORDING TO*: #+begin_quote The physical origin of this additional contribution modeled by a Gaussian is not entirely clear. The used EPIC filter contains an aluminum coating. As such it has the aluminum absorption edge at about $\SI{1.5}{keV}$, matching the additional contribution. Whether it is from a continuous part of the spectrum or a form of aluminum fluorescence is not clear however. #+end_quote -> Done. 
- [X] *HAVE [[file:~/CastData/ExternCode/TimepixAnalysis/Tools/calcCdlCutsFromFitParams.nim]] TO COMPUTE CHARGE CUT VALUES FROM DUMPED FITS!* -> We now handle this in code, write them to CDL H5 file. The above could still be useful to generate a table of the final used values! -> We generate the table from the code now. The script above might still serve as some reference though. - [X] Only mention pixel spectra in passing? We don't use them for anything. So there's no need for why they should really appear here! -> Yes, this is better. See section below now. - [X] *SHOW EXAMPLE FIT WITH RAW+CUTS & FIT PARAMETERS PLOT* - [X] *PROBABLY REPLACE PURPLE LINE FOR PLOT WITHOUT NLOPT & START BY BLACK LINE!* - [X] *THINK THIS TABLE THROUGH AGAIN. POSSIBLY REPLACE IT BY AN ENUMERATION WITH ONE ITEM BY TARGET / FILTER* -> I think that's better, yes. (hence it's now here) #+CAPTION: The fun table! #+CAPTION: ~(*)~: The ratio used in these amplitudes is not physical, but motivated by the very rough ratios seen in the data. #+CAPTION: ~(**)~: The physical origin of this additional contribution modeled by a Gaussian is not entirely clear. The used #+CAPTION: EPIC filter contains an aluminum coating. As such it has the aluminum absorption edge at about $\SI{1.5}{keV}$, matching the additional #+CAPTION: contribution. Whether it is from a continuous part of the spectrum or a form of aluminum fluorescence is not clear however. 
#+NAME: test #+ATTR_LATEX: :float sideways :booktabs t | $\ce{CuNi} \SI{15}{kV}$ | $\ce{MnCr} \SI{12}{kV}$ | tiTi9 | agAg6 | alAl4 | cuEpic2 | cuEpic0.9 | cEpic0.6 | |----------------------------------------+---------------------------------------+------------------------------------------------+------------------------------------------------+----------------------+-------------------------------------------------+-----------------------------------------+----------------------------------------------| | $\text{EG}(\ce{Cu}_{Kα})$ | $\text{EG}(\ce{Mn}_{Kα})$ | $\text{EG}(\ce{Ti}_{Kα})$ | $\text{EG}(\ce{Ag}_{Lα})$ | $\text{EG}(Al_{Kα})$ | $\text{G}(\ce{Cu}_{Lα})$ | $\text{G}(O_{Kα})$ | $\text{G}(\ce{C}_{Kα})$ | | $\text{EG}(\ce{Cu}^{\text{esc}}_{Kα})$ | $\text{G}(\ce{Mn}^{\text{esc}}_{Kα})$ | $\text{G}(\ce{Ti}^{\text{esc}}_{Kα})$ | G($\ce{Ag}_{Lβ}$) | | G($\ce{Cu}_{Lβ}$): | | G($\ce{O}_{Kα}$): | | | | G($\ce{Ti}^{\text{esc}}_{Kβ}$): | $\ N = eN(\ce{Ag}_{Lα}) · 0.56$ | | $\ N = N(\ce{Cu}_{Lα}) · \frac{0.65}{1.11}$ | | $\ μ = μ(\ce{C}_{Kα}) · \frac{0.525}{0.277}$ | | | | $\ μ = eμ(\ce{Ti}_{Kα}) · \frac{1.959}{4.511}$ | $\ μ = eμ(\ce{Ag}_{Lα}) · \frac{3.151}{2.984}$ | | $\ μ = μ(\ce{Cu}_{Lα}) · \frac{0.9498}{0.9297}$ | | $\ σ = σ(\ce{C}_{Kα})$ | | | | $\ σ = σ(\ce{Ti}^{\text{esc}}_{Kα})$ | $\ σ = eσ(\ce{Ag}_{Lα})$ | | $\ σ = σ(\ce{Cu}_{Lα})$ | | | | | | G($\ce{Ti}_{Kβ}$): | | | G($\ce{O}_{Kα}$): | | | | | | $\ μ = eμ(\ce{Ti}_{Kα}) · \frac{4.932}{4.511}$ | | | $\ N = N(\ce{Cu}_{Lα}) / 3.5$ ~(*)~ | | | | | | $\ σ = eσ(\ce{Ti}_{Kα})$ | | | $\ μ = μ(\ce{Cu}_{Lα}) · \frac{0.5249}{0.9297}$ | | | | | | | | | $\ σ = σ(\ce{Cu}_{Lα}) / 2.0$ | | | | | | | | | G($\ce{Al}_K$) ~(**)~ | | | 
|----------------------------------------+---------------------------------------+------------------------------------------------+------------------------------------------------+----------------------+-------------------------------------------------+-----------------------------------------+----------------------------------------------| | cuNi15 Q | mnCr12 Q | tiTi9 Q | agAg6 Q | alAl4 Q | cuEpic2 Q | cuEpic0.9 Q | cEpic0.6 Q | |----------------------------------------+---------------------------------------+------------------------------------------------+------------------------------------------------+----------------------+-------------------------------------------------+-----------------------------------------+----------------------------------------------| | $\text{G}(\ce{Cu}_{Kα})$ | $\text{G}(\ce{Mn}_{Kα})$ | $\text{G}(\ce{Ti}_{Kα})$ | $\text{G}(\ce{Ag}_{Lα})$ | $\text{G}(Al_{Kα})$ | $\text{G}(\ce{Cu}_{Lα})$ | $\text{G}(O_{Kα})$ | $\text{G}(\ce{C}_{Kα})$ | | $\text{G}(\ce{Cu}^{\text{esc}}_{Kα})$ | $\text{G}(\ce{Mn}^{\text{esc}}_{Kα})$ | $\text{G}(\ce{Ti}^{\text{esc}}_{Kα})$ | G($\ce{Ag}_{Lβ}$): | | G($\ce{Cu}_{Lβ}$): | G($\ce{C}_{Kα}$): | G($\ce{O}_{Kα}$): | | | | G($\ce{Ti}^{\text{esc}}_{Kβ}$): | $\ N = N(\ce{Ag}_{Lα}) · 0.56$ | | $\ N = N(\ce{Cu}_{Lα}) · \frac{0.65}{1.11}$ | $\ N = N(O_{Kα}) / 10.0$ ~(*)~ | $\ μ = μ(\ce{C}_{Kα}) · \frac{0.525}{0.277}$ | | | | $\ μ = μ(\ce{Ti}_{Kα}) · \frac{1.959}{4.511}$ | $\ μ = μ(\ce{Ag}_{Lα}) · \frac{3.151}{2.984}$ | | $\ μ = μ(\ce{Cu}_{Lα}) · \frac{0.9498}{0.9297}$ | $\ μ = μ(O_{Kα}) · \frac{277.0}{524.9}$ | $\ σ = σ(\ce{C}_{Kα})$ | | | | $\ σ = σ(\ce{Ti}^{\text{esc}}_{Kα})$ | $\ σ = σ(\ce{Ag}_{Lα})$ | | $\ σ = σ(\ce{Cu}_{Lα})$ | $\ σ = σ(O_{Kα})$ | | | | | G($\ce{Ti}_{Kβ}$): | | | | | | | | | $\ μ = μ(\ce{Ti}_{Kα}) · \frac{4.932}{4.511}$ | | | | | | | | | $\ σ = σ(\ce{Ti}_{Kα})$ | | | | | | **** Notes on implementation details of fit functions :extended: - [ ] *REWRITE THE BELOW* (much of that is irrelevant for 
the full thesis) -> Place into :noexport: section!

The exact implementation of the Gaussian in use:
- Gauss: https://github.com/Vindaar/seqmath/blob/master/src/seqmath/smath.nim#L997-L1009

The fitting was performed with [[https://www.physics.wisc.edu/~craigm/idl/cmpfit.html][MPFit]] (a Levenberg-Marquardt C
implementation) for comparison, but mainly using [[https://nlopt.readthedocs.io/en/latest/][NLopt]] (via
[[https://github.com/Vindaar/nimnlopt]]). Specifically, the gradient based
[[http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.146.5196]["Method of Moving Asymptotes"]] algorithm was used (NLopt provides a large
number of different minimization / maximization algorithms to choose from) to
perform a maximum likelihood estimation, written in the form of a Poisson
distributed log likelihood $\chi^2$:
#+BEGIN_EXPORT latex
\begin{equation}
\chi^2_{\lambda, P} = 2 \sum_i \left[ y_i - n_i + n_i \ln\left(\frac{n_i}{y_i}\right) \right],
\end{equation}
#+END_EXPORT
where $n_i$ is the number of events in bin $i$ and $y_i$ the model prediction
of events in bin $i$. The required gradient was calculated simply using the
[[https://en.wikipedia.org/wiki/Symmetric_derivative][symmetric derivative]]. Other algorithms and minimization functions were tried,
but this proved to be the most reliable. See the implementation:
https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/calibration.nim#L131-L162

**** Pixel spectra :extended:

In case of the pixel spectra the fit functions are generally very similar, but
in some cases the regular Gaussian is replaced by an 'exponential Gaussian'
defined as follows:
#+BEGIN_EXPORT latex
\begin{equation}
EG(E; \mu, \sigma, N, a, b) =
\begin{cases}
N \exp\left(-\frac{(E-\mu)^2}{2\sigma^2}\right) & \text{for } E \geq c\\
\exp(aE + b) & \text{for } E < c \\
\end{cases}
\end{equation}
#+END_EXPORT
where the constant $c$ is chosen such that the resulting function is
continuous.
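The continuity requirement pins down $c$ explicitly: equating the two branches at $E = c$ and taking the logarithm yields a quadratic equation in $c$, which can be solved in closed form. A minimal sketch in Python (illustrative only, not the Nim implementation used in TimepixAnalysis; the function names are made up):

```python
import math

def eg_transition(N, mu, sigma, a, b):
    """Transition point c of the 'exponential Gaussian': solve
    N*exp(-(c-mu)^2/(2*sigma^2)) = exp(a*c + b) for c.
    Taking logs gives the quadratic
      c^2 + 2*(sigma^2*a - mu)*c + mu^2 + 2*sigma^2*(b - ln N) = 0;
    we pick the root below mu, where the low-energy tail attaches."""
    disc = sigma**2 * a**2 - 2 * a * mu + 2 * (math.log(N) - b)
    if disc < 0:
        raise ValueError("no continuous transition point for these parameters")
    return mu - sigma**2 * a - sigma * math.sqrt(disc)

def exp_gauss(E, N, mu, sigma, a, b):
    """Exponential Gaussian: Gaussian above c, exponential tail below."""
    c = eg_transition(N, mu, sigma, a, b)
    if E >= c:
        return N * math.exp(-(E - mu)**2 / (2 * sigma**2))
    return math.exp(a * E + b)
```

By construction the two branches agree at $c$, so the piecewise function is continuous there (though in general not differentiable).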
The idea being that the pixel spectra can have a longer exponential tail on
the left side due to threshold effects and multiple electrons entering a
single grid hole. The implementation of the exponential Gaussian is found
here:
https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/calibration.nim#L182-L194

- [ ] *REPLACE LINKS SUCH AS THESE BY TAGGED VERSION AND NOT DIRECT INLINE LINKS*

The full list of all fit combinations for the pixel spectra is shown in
tab. [[tab:cdl:fit_funcs_pixel]]. The fitting otherwise works the same way, using a
non-linear least squares fit, both implemented by hand using MMA as well as
via a standard Levenberg-Marquardt fit.

#+CAPTION: All fit functions for the pixel spectra for the different combinations. If a function
#+CAPTION: is missing a parameter (out of $(N, μ, σ)$ for the Gaussian and $(a, b, N, μ, σ)$ for the exponential
#+CAPTION: Gaussian), that parameter has been fixed relative to another.
#+NAME: tab:cdl:fit_funcs_pixel
|--------+--------+---------+------------------------------------------------------------|
| Target | Filter | HV [kV] | Fit functions |
|--------+--------+---------+------------------------------------------------------------|
| Cu | Ni | 15 | $EG^{\ce{Cu}}_{Kα} + EG^{\ce{Cu}, \text{esc}}_{Kα}$ |
| Mn | Cr | 12 | $EG^{\ce{Mn}}_{Kα} + G^{\ce{Mn}, \text{esc}}_{Kα}$ |
| Ti | Ti | 9 | $EG^{\ce{Ti}}_{Kα} + G^{\ce{Ti}, \text{esc}}_{Kα} + G^{\ce{Ti}, \text{esc}}_{Kβ}\left( eμ^{\ce{Ti}}_{Kα}·(\frac{1.959}{4.511}), σ^{\ce{Ti}, \text{esc}}_{Kα} \right) + G^{\ce{Ti}}_{Kβ}\left( eμ^{\ce{Ti}}_{Kα}·(\frac{4.932}{4.511}), eσ^{\ce{Ti}}_{Kα} \right)$ |
| Ag | Ag | 6 | $EG^{\ce{Ag}}_{Lα} + G^{\ce{Ag}}_{Lβ}\left( eN^{\ce{Ag}}_{Lα}·0.56, eμ^{\ce{Ag}}_{Lα}·(\frac{3.151}{2.984}), eσ^{\ce{Ag}}_{Lα} \right)$ |
| Al | Al | 4 | $EG^{\ce{Al}}_{Kα}$ |
| Cu | EPIC | 2 | $G^{\ce{Cu}}_{Lα} + G^{\ce{Cu}}_{Lβ}\left( N^{\ce{Cu}}_{Lα}·(\frac{0.65}{1.11}), μ^{\ce{Cu}}_{Lα}·(\frac{0.9498}{0.9297}), σ^{\ce{Cu}}_{Lα} \right) + G^{\ce{O}}_{Kα}\left( \frac{N^{\ce{Cu}}_{Lα}}{3.5}, μ^{\ce{Cu}}_{Lα}·(\frac{0.5249}{0.9297}), \frac{σ^{\ce{Cu}}_{Lα}}{2.0} \right) + G_{\text{unknown}}$ |
| Cu | EPIC | 0.9 | $G^{\ce{O}}_{Kα} + G_{\text{unknown}}$ |
| C | EPIC | 0.6 | $G^{\ce{C}}_{Kα} + G^{\ce{O}}_{Kα}\left( μ^{\ce{C}}_{Kα}·(\frac{0.525}{0.277}), σ^{\ce{C}}_{Kα} \right)$ |
|--------+--------+---------+------------------------------------------------------------|

**** Generate plot of charge spectrum :extended:

See sec. [[#sec:background:gen_plots_cdl_data]] for the commands used to generate
all the plots for the pixel and charge spectra, including the one used in the
above section.

*** Overview of CDL data in energy

With the fits to the charge spectra performed on a run-by-run basis, they can
be utilized to calibrate the energy of each cluster in the
data. [fn:using_regular_energy_calib] This is done by using the linear
relationship between charge and energy, based on the charge of the main
fluorescence line as computed from the fit. Each run is therefore
self-calibrated (in contrast to our normal energy calibration approach
[[#sec:calibration:energy]]). Fig.
[[fig:cdl:calibrated_energy_histos]] shows normalized histograms of all CDL data
after applying basic cuts and performing said energy calibrations.

#+CAPTION: Normalized histograms of all CDL data after applying basic cuts
#+CAPTION: and calibrating the data in energy using the charge of the main fitted line
#+CAPTION: and its known energy as a baseline. Some targets show a wider distribution, due
#+CAPTION: to detector variability which results in different gas gains and thus different
#+CAPTION: charges in different runs.
#+NAME: fig:cdl:calibrated_energy_histos
[[~/phd/Figs/CDL/calibrated_cdl_energy_histos.pdf]]

[fn:using_regular_energy_calib] We could of course apply the regular energy
calibration based on the multiple fits to the CAST calibration data, as
explained in sec. [[#sec:calib:final_energy_calibration]]. However, due to the
very different gas gains observed in the CDL, its applicability is not
ideal. Moreover, since we know the energy of the clusters in the main
fluorescence peak for certain, there is simply no need for it either.

*** Definition of the reference distributions
:PROPERTIES:
:CUSTOM_ID: sec:cdl:derive_probability_density
:END:

Having performed the fits to all charge spectra of each run and determined the
position and width of each main fluorescence line, the reference distributions
for the cluster properties entering the likelihood can be computed. The
dataset is selected by taking all clusters within the $3σ$ bounds around the
main fluorescence line of the data used for the fit. This guarantees that
mainly X-rays of the targeted energy remain in the dataset for each line. As
the fit is performed for each run separately, the $3σ$ charge cut is applied
run by run and then all data is combined for each fluorescence line (target,
filter & HV setting). The desired reference distributions then are simply the
normalized histograms of the clusters in each of the properties.
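As a concrete illustration of this selection and normalization (a stdlib-Python sketch with made-up numbers and names, not the actual analysis code): clusters are kept if their total charge lies within $μ ± 3σ$ of the fitted main line, and the chosen cluster property is then histogrammed and normalized to unit area.

```python
def reference_pdf(charges, props, mu_fit, sigma_fit, bin_edges):
    """Select clusters within mu ± 3*sigma of the fitted main line (in
    charge), histogram the chosen cluster property and normalize such
    that the histogram integrates to one (a probability density).
    Values on or above the last bin edge are simply dropped here."""
    selected = [p for q, p in zip(charges, props)
                if abs(q - mu_fit) < 3 * sigma_fit]
    counts = [0] * (len(bin_edges) - 1)
    for p in selected:
        for i in range(len(counts)):
            if bin_edges[i] <= p < bin_edges[i + 1]:
                counts[i] += 1
                break
    total = sum(counts)
    # normalize to a density: sum_i pdf_i * width_i == 1
    return [c / (total * (bin_edges[i + 1] - bin_edges[i])) if total else 0.0
            for i, c in enumerate(counts)]
```

The real pipeline does this per run and per property before combining the runs of each fluorescence line, but the normalization step is the same.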
With 8 targeted fluorescence lines and 3 properties this yields a total of 24
reference distributions. Each histogram is then interpreted as a probability
density function (PDF) for clusters 'matching' (more on this in
sec. [[#sec:cdl:cdl_morphing]]) the energy of its fluorescence line.

An overview of all the reference distributions is shown in
fig. [[fig:cdl:reference_distributions_overview_ridgeline]]. We can see that all
distributions tend to get wider towards lower energies (towards the top of the
plot). This is expected, as smaller clusters have fewer primary electrons, so
statistical variations in the geometric properties play a more important
role. [fn:diffusion] The binning shown in the figure is the exact binning used
to define the PDF. In case of the fraction of pixels within a transverse RMS
radius, bins with significantly higher counts are observed at low
energies. This is _not_ a binning artifact, but a result of the definition of
the variable. The property computes the fraction of pixels that lie within a
circle around the cluster center with a radius corresponding to the transverse
RMS of the cluster (see fig. sref:fig:reco:property_explanations). At energies
with few pixels in total, the integer nature of $N$ or $N+1$ primary electrons
(active pixels) inside the radius becomes apparent.

The binning of the histograms is chosen by hand to be as fine as possible
without leading to significant statistical fluctuations, as those would have a
direct effect on the PDFs, leading to unphysical effects on the
probabilities. Ideally, either an automatic bin selection algorithm or
something like a kernel density estimation should be used. However, the latter
is slightly problematic due to the integer effects at low energies in the
fraction in transverse RMS variable.

The summarized 'recipe' of the approach is therefore:
1. apply cleaning cuts according to tab. [[tab:cdl:cdl_cleaning_cuts]],
2. perform fits according to tab. [[tab:cdl:fit_func_charge]],
3. cut to the $3σ$ region around the main fluorescence line of the performed
   fit (i.e. the first term in tab. [[tab:cdl:fit_func_charge]]),
4. combine all remaining clusters for the same fluorescence line from each
   run,
5. compute a histogram for each desired cluster property and each fluorescence
   line,
6. normalize the histogram to define the reference distribution, $\mathcal{P}_i$.

#+CAPTION: Overview of all reference distributions for each target/filter combination and property.
#+CAPTION: The binning is the same as used to compute the probabilities. Towards lower energies
#+CAPTION: (towards the top) the distributions all become wider, as the clusters have fewer electrons
#+CAPTION: and statistical fluctuations play a larger role. The 'fraction in transverse RMS' property
#+CAPTION: becomes partially discrete, which is _not_ a binning effect, but due to integer counting effects
#+CAPTION: of the number of electrons within the transverse RMS radius.
#+NAME: fig:cdl:reference_distributions_overview_ridgeline
[[~/phd/Figs/CDL/ridgeline_all_properties_side_by_side.pdf]]

[fn:diffusion] Keep in mind that clusters of all energies undergo roughly the
same amount of diffusion (aside from the energy dependent absorption length to
a lesser extent), so lower energy events are more sparse.

**** TODOs for the above section [/] :noexport:

See appendix [[#sec:appendix:cdl_reference_distributions]] for an overview of the
distributions with each histogram in a separate plot.

- [ ] *Do we want this?*!!!
- [ ] *ADD THAT APPENDIX* -> Appendix about all the individual histograms
- [ ] mention what kind of binning we use
- [X] present plot of reference distributions. How?
  -> ridge line of all targets & then show one property?
  -> ridge line of all properties
  -> side-by-side facet of all properties of one line?
  -> Finally: side by side of 3 ridge line plots!
- [X] *REGARDING CORRELATION BETWEEN THE THREE VARIABLES!* -> Create a point plot of the eccentricity, lengthDivRmsTrans and fracRms data (using one as color scale). Should give a nice correlation likely! (we could also *compute* the correlation, but well) -> Done here, sec. [[#sec:background:correlation_lnL_variables]] [[~/phd/Figs/background/correlation_ecc_ldiv_frac.pdf]] where we create plot of the logL interpolation based on CDL data. This one shows the CDL data after cleaning cut! Clearly strong correlation visible. The 'inverse' plot [[~/phd/Figs/background/correlation_ecc_frac_ldiv.pdf]] looks a bit more funny. I think because of some ldiv values being _much_ larger the color scale is not very useful. The line at the bottom is the diagonal outliers in the first plot. - [X] *ALSO DO SAME PLOT BUT LDIV vs FRAC with ECC AS COLOR!* Done here: [[~/phd/Figs/background/correlation_ldiv_frac_ecc.pdf]] And as a bonus the same plots, but cut to $ε < 2.5$: - [[~/phd/Figs/background/correlation_ecc_ldiv_frac_ecc_smaller_2_5.pdf]] - [[~/phd/Figs/background/correlation_ecc_frac_ldiv_ecc_smaller_2_5.pdf]] - [[~/phd/Figs/background/correlation_ldiv_frac_ecc_ecc_smaller_2_5.pdf]] In some ways the correlation becomes nicer, in others it gets harder to see. Interesting! **** Generate ridgeline plot of all reference distributions :extended: The ridgeline plot used in the above section is produced using ~TimepixAnalysis/Plotting/plotCdl~. See sec. [[#sec:background:gen_plots_likelihood]] for the exact commands as more plots are produced used in other sections. **** Generate plots for interpolated likelihood distribution and logL variable correlations [/] :extended: :PROPERTIES: :CUSTOM_ID: sec:background:correlation_lnL_variables :END: In the main text I mentioned that the $\ln\mathcal{L}$ variables are all very correlated. Let's create a few plots to look at how correlated they actually are. 
We'll generate scatter plots of the three variables against another, using a color scale for the third variable. - [ ] *WRITE A FEW WORDS ABOUT THE PLOTS!* - [X] Create a plot of the likelihood distributions interpolated over all energies! Well, it doesn't work in the way I thought, because obviously we cannot just compute the likelihood distributions directly! We can compute *a* likelihood value for a cluster at an arbitrary energy, but to get a likelihood distribution we'd need actual X-ray data at all energies! The closest we have to that is the general CDL data (not just the main peak & using correct energies for each cluster) -> Attempt to use the CDL data with only the cleaning cuts -> Question: Which tool do we add this to? Or separate here using CDL reconstructed data? How much statistics do we even have if we end up splitting everything over 1000 energies? Like nothing... Done in: [[~/phd/Figs/background/logL_of_CDL_vs_energy.pdf]] now where we compare the no morphing vs. the linear morphing case. Much more interesting than I would have thought, because one can clearly see the effect of the morphing on the logL values that are computed! This is a pretty nice result to showcase the morphing is actually useful. Will be put into appendix and linked. - [X] *ADD LINE SHOWING WHERE THE CUT VALUES ARE TO THE PLOT* -> The hard corners in the interpolated data is because the reference distributions are already pre-binned of course! So if by just changing the bins slightly the ε cut value still lies in the same bin, of course we see a 'straight' line in energy. Hence a non smooth curve. We could replace this all: - either by a smooth KDE and interpolate based on that as an alternative. I should really try this, the only questionable issue is the fracRms distribution and its discrete features. *However* we don't actually guarantee in any way that in current approach the bins _actually_ correspond to any fixed integer values. 
It is quite likely that the bins that show smaller/larger values are too
wide/small!
- or by keeping everything as is, but then performing a spline interpolation
  on the *distinct* (logL, energy) pairs such that the result is a smooth logL
  value. "Better bang for buck" than doing full KDE and avoids the issue of
  discreteness in the fracRms distribution. Even though the interpolated
  values probably correspond to something as if the fracRms distribution _had_
  been smooth.

#+begin_src nim :tangle code/generate_interp_likelihood.nim
import std / [os, strutils]
import ingrid / ingrid_types
import ingrid / private / [cdl_utils, cdl_cuts, hdf5_utils, likelihood_utils]
import pkg / [ggplotnim, nimhdf5]

const TpxDir = "/home/basti/CastData/ExternCode/TimepixAnalysis"
const cdl_runs_file = TpxDir / "resources/cdl_runs_2019.org"
const fname = "/home/basti/CastData/data/CDL_2019/CDL_2019_Reco.h5"
const cdlFile = "/home/basti/CastData/data/CDL_2019/calibration-cdl-2018.h5"
const dsets = @["totalCharge", "eccentricity", "lengthDivRmsTrans", "fractionInTransverseRms"]

proc calcEnergyFromFits(df: DataFrame, fit_μ: float, tfKind: TargetFilterKind): DataFrame =
  ## Given the fit result of this data type & target/filter combination compute the energy
  ## of each cluster by using the mean position of the main peak and its known energy
  result = df
  result["Target"] = $tfKind
  let invTab = getInverseXrayRefTable()
  let energies = getXrayFluorescenceLines()
  let lineEnergy = energies[invTab[$tfKind]]
  result = result.mutate(f{float: "energy" ~ `totalCharge` / fit_μ * lineEnergy})

let h5f = H5open(fname, "r")
var df = newDataFrame()
for tfKind in TargetFilterKind:
  for (run, grp) in tfRuns(h5f, tfKind, cdl_runs_file):
    var dfLoc = newDataFrame()
    for dset in dsets:
      if dfLoc.len == 0:
        dfLoc = toDf({ dset : h5f.readCutCDL(run, 3, dset, tfKind, float64) })
      else:
        dfLoc[dset] = h5f.readCutCDL(run, 3, dset, tfKind, float64)
    dfLoc["runNumber"] = run
    dfLoc["tfKind"] = $tfKind
    # calculate energy from fit
    let fit_μ = grp.attrs["fit_μ", float]
    dfLoc = dfLoc.calcEnergyFromFits(fit_μ, tfKind)
    df.add dfLoc

proc calcInterp(ctx: LikelihoodContext, df: DataFrame): DataFrame =
  # walk all rows, feed ecc, ldiv, frac into logL and return a DF with the result
  result = df.mutate(f{float: "logL" ~ ctx.calcLikelihoodForEvent(`energy`, `eccentricity`,
                                                                  `lengthDivRmsTrans`,
                                                                  `fractionInTransverseRms`) })

# first make plots of 3 logL variables to see their correlations
ggplot(df, aes("eccentricity", "lengthDivRmsTrans", color = "fractionInTransverseRms")) +
  geom_point(size = 1.0) +
  ggtitle("lnL variables of all (cleaned) CDL data for correlations") +
  ggsave("~/phd/Figs/background/correlation_ecc_ldiv_frac.pdf", dataAsBitmap = true)
ggplot(df, aes("eccentricity", "fractionInTransverseRms", color = "lengthDivRmsTrans")) +
  geom_point(size = 1.0) +
  ggtitle("lnL variables of all (cleaned) CDL data for correlations") +
  ggsave("~/phd/Figs/background/correlation_ecc_frac_ldiv.pdf", dataAsBitmap = true)
ggplot(df, aes("lengthDivRmsTrans", "fractionInTransverseRms", color = "eccentricity")) +
  geom_point(size = 1.0) +
  ggtitle("lnL variables of all (cleaned) CDL data for correlations") +
  ggsave("~/phd/Figs/background/correlation_ldiv_frac_ecc.pdf", dataAsBitmap = true)

df = df.filter(f{`eccentricity` < 2.5})
ggplot(df, aes("eccentricity", "lengthDivRmsTrans", color = "fractionInTransverseRms")) +
  geom_point(size = 1.0) +
  ggtitle("lnL variables of all (cleaned) CDL data for correlations (ε < 2.5)") +
  ggsave("~/phd/Figs/background/correlation_ecc_ldiv_frac_ecc_smaller_2_5.pdf", dataAsBitmap = true)
ggplot(df, aes("eccentricity", "fractionInTransverseRms", color = "lengthDivRmsTrans")) +
  geom_point(size = 1.0) +
  ggtitle("lnL variables of all (cleaned) CDL data for correlations (ε < 2.5)") +
  ggsave("~/phd/Figs/background/correlation_ecc_frac_ldiv_ecc_smaller_2_5.pdf", dataAsBitmap = true)
ggplot(df, aes("lengthDivRmsTrans", "fractionInTransverseRms", color = "eccentricity")) +
  geom_point(size = 1.0) +
  ggtitle("lnL variables of all (cleaned) CDL data for correlations (ε < 2.5)") +
  ggsave("~/phd/Figs/background/correlation_ldiv_frac_ecc_ecc_smaller_2_5.pdf", dataAsBitmap = true)

from std/sequtils import concat
# now generate the plot of the logL values for all cleaned CDL data. We will compare the
# case of no morphing with the linear morphing case
proc getLogL(df: DataFrame, mk: MorphingKind): (DataFrame, DataFrame) =
  let ctx = initLikelihoodContext(cdlFile, yr2018, crGold, igEnergyFromCharge, Timepix1, mk,
                                  useLnLCut = true)
  var dfMorph = ctx.calcInterp(df)
  dfMorph["Morphing?"] = $mk
  let cutVals = ctx.calcCutValueTab()
  case cutVals.morphKind
  of mkNone:
    let lineEnergies = getEnergyBinning()
    let tab = getInverseXrayRefTable()
    var cuts = newSeq[float]()
    var energies = @[0.0]
    var lastCut = Inf
    var lastE = Inf
    for k, v in tab:
      let cut = cutVals[k]
      if classify(lastCut) != fcInf:
        cuts.add lastCut
        energies.add lastE
      cuts.add cut
      lastCut = cut
      let E = lineEnergies[v]
      energies.add E
      lastE = E
    cuts.add cuts[^1] # add last value again to draw line up
    echo energies.len, " vs ", cuts.len
    let dfCuts = toDf({energies, cuts, "Morphing?" : $cutVals.morphKind})
    result = (dfCuts, dfMorph)
  of mkLinear:
    let energies = concat(@[0.0], cutVals.lnLCutEnergies, @[20.0])
    let cutsSeq = cutVals.lnLCutValues.toSeq1D
    let cuts = concat(@[cutVals.lnLCutValues[0]], cutsSeq, @[cutsSeq[^1]])
    let dfCuts = toDf({"energies" : energies, "cuts" : cuts, "Morphing?" : $cutVals.morphKind})
    result = (dfCuts, dfMorph)

var dfMorph = newDataFrame()
let (dfCutsNone, dfNone) = getLogL(df, mkNone)
let (dfCutsLinear, dfLinear) = getLogL(df, mkLinear)
dfMorph.add dfNone
dfMorph.add dfLinear
var dfCuts = newDataFrame()
dfCuts.add dfCutsNone
dfCuts.add dfCutsLinear
#dfCuts.showBrowser()
echo dfMorph
ggplot(dfMorph, aes("logL", "energy", color = factor("Target"))) +
  facet_wrap("Morphing?") +
  geom_point(size = 1.0) +
  geom_line(data = dfCuts, aes = aes("cuts", "energies")) + # , color = "Morphing?")) +
  ggtitle(r"$\ln\mathcal{L}$ values of all (cleaned) CDL data against energy") +
  themeLatex(fWidth = 0.9, width = 600, baseTheme = singlePlot) +
  ylab("Energy [keV]") + xlab(r"$\ln\mathcal{L}$") +
  ggsave("~/phd/Figs/background/logL_of_CDL_vs_energy.pdf", width = 1000, height = 600,
         dataAsBitmap = true)
#+end_src

*** Definition of the likelihood distribution
:PROPERTIES:
:CUSTOM_ID: sec:cdl:cdl_morphing
:END:

With our reference distributions defined it is time to look back at the
equation for the definition of the likelihood,
eq. [[eq:background:likelihood_def]].

#+begin_quote
Note: To avoid numerical issues dealing with very small probabilities, the
actual relation evaluated numerically is the negative log of the likelihood:
\[
-\ln \mathcal{L}(ε, f, l) = - \ln \mathcal{P}_ε(ε) - \ln \mathcal{P}_f(f) - \ln \mathcal{P}_l(l).
\]
#+end_quote

By considering a single fluorescence line, the three reference distributions
$\mathcal{P}_i(i)$ make up the likelihood $\mathcal{L}(ε, l, f)$ /function/
for that energy; a function of the variables $ε, f, l$. In order to use the
likelihood function as a classifier for X-ray-like clusters we need a one
dimensional expression. This is where the likelihood /distribution/
$\mathfrak{L}$ comes in. We reuse all the clusters used to define the
reference distributions $\mathcal{P}_{ε,f,l}$ and compute their likelihood
values $\{ \mathcal{L}_j \}$, where $j$ is the index of the $j\text{-th}$
cluster.
By then computing the histogram of the set of all these likelihood values, we
obtain the likelihood distribution
\[
\mathfrak{L} = \text{histogram}\left( \{ \mathcal{L}_j \} \right).
\]
All these distributions are shown in fig. [[fig:cdl:likelihood_distributions]]
as negative log likelihood distributions. [fn:logL_instead_of_L] We see that
the distributions change slightly in shape and move towards larger
$-\ln \mathcal{L}$ values for the lower energy target/filter combinations. The
shape change is mostly due to the significant integer nature of the 'fraction
in transverse RMS' $f$ variable, as seen in
fig. [[fig:cdl:reference_distributions_overview_ridgeline]]. The shift to larger
values expresses that the reference distributions $\mathcal{P}_i$ become wider
and thus each bin has a lower probability.

#+CAPTION: $-\ln\mathcal{L}$ distributions for each of the targeted fluorescence lines
#+CAPTION: and thus target/filter combinations.
#+NAME: fig:cdl:likelihood_distributions
[[~/phd/Figs/CDL/logL_ridgeline.pdf]]

To finally classify events as signal or background using the likelihood
distribution, one sets a desired "software efficiency" $ε_{\text{eff}}$, which
is defined as:
#+NAME: eq:background:lnL:cut_condition
\begin{equation}
ε_{\text{eff}} = \frac{∫_0^{\mathcal{L'}} \mathfrak{L}(\mathcal{L}) \, \mathrm{d}\mathcal{L}}{∫_0^{∞}\mathfrak{L}(\mathcal{L}) \, \mathrm{d} \mathcal{L}}.
\end{equation}
The likelihood /value/ $\mathcal{L}'$ is the value corresponding to the
$ε_{\text{eff}}^{\text{th}}$ percentile of the likelihood /distribution/
$\mathfrak{L}$. In practical terms one computes the normalized cumulative sum
of the log likelihood distribution and searches for the point at which the
desired $ε_{\text{eff}}$ is reached. The typical software efficiency we aim
for is $\SI{80}{\percent}$. A cluster is then classified as X-ray-like if its
$-\ln\mathcal{L}$ value is smaller than the cut value $-\ln\mathcal{L}'$.
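In code, this amounts to summing negative log probabilities and placing the cut at the desired percentile of the resulting distribution. A stdlib-Python sketch (illustrative names only, not the TimepixAnalysis implementation):

```python
import math

def neg_log_likelihood(pdfs, values):
    """-ln L = -sum_i ln P_i(x_i). A zero probability yields +inf,
    mirroring the infinities seen in the raw -ln L data."""
    total = 0.0
    for pdf, x in zip(pdfs, values):
        p = pdf(x)
        if p <= 0.0:
            return math.inf
        total -= math.log(p)
    return total

def cut_value(neg_lnl_values, eff = 0.8):
    """Return the -ln L cut corresponding to a software efficiency
    `eff`: the eff-th percentile of the (finite) reference values."""
    finite = sorted(v for v in neg_lnl_values if math.isfinite(v))
    idx = min(int(eff * len(finite)), len(finite) - 1)
    return finite[idx]
```

A cluster is then accepted as X-ray-like if its $-\ln\mathcal{L}$ value lies below the value returned by `cut_value`.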
Note that this value $\mathcal{L}'$ has to be determined for _each_ likelihood
distribution.

To summarize the derivation of the likelihood distribution $\mathfrak{L}$ and
its usage as a classifier as a 'recipe':
1. compute the reference distributions $\mathcal{P}_i$ as described in
   sec. [[#sec:cdl:derive_probability_density]],
2. take the raw cluster data (unbinned data!) of those clusters that define
   the $\mathcal{P}_i$ and feed each of these into
   eq. [[eq:background:likelihood_def]] for a single likelihood /value/
   $\mathcal{L}_i$ each,
3. compute the /histogram/ of the set of all these likelihood values
   $\{ \mathcal{L}_i \}$ to define the likelihood /distribution/ $\mathfrak{L}$,
4. define a desired 'software efficiency' $ε_{\text{eff}}$ and compute the
   corresponding likelihood /value/ $\mathcal{L}_c$ using
   eq. [[eq:background:lnL:cut_condition]],
5. any cluster with $\mathcal{L}_i \leq \mathcal{L}_c$ is considered
   X-ray-like with efficiency $ε_{\text{eff}}$.

Note that due to the usage of negative log likelihoods the raw data often
contains infinities, which are just a side effect of picking up a zero
probability from one of the reference distributions for a particular
cluster. In reality the reference distributions should be continuous
distributions that are nowhere exactly zero. However, due to limited
statistics there is only a small range of non-zero probabilities (most bins
outside the main range are empty). For all practical purposes this does not
matter, but it does explain the rather hard cutoff from 'sensible' likelihood
values to infinities in the raw data.

[fn:logL_instead_of_L] The $-\ln\mathcal{L}$ values in the distributions give
a good idea of why. Roughly speaking the values go from 5 to 20, meaning the
actual likelihood values are in the range from $e^{-5} \approx \num{7e-3}$ to
$e^{-20} \approx \num{2e-9}$! While 64-bit floating point numbers nowadays in
principle provide enough precision for these numbers, working with the
logarithm also improves human readability.
32-bit floats, on the other hand, would already accrue serious floating point
errors, as they only provide about 7 significant decimal digits. And even with
64-bit floats, slight changes to the likelihood definition might run into
trouble as well.

**** TODOs for this section [/] :noexport:

- [ ] *REMOVE?* -> About last paragraph with uncertainties and infinities.
- [ ] *PUT NOTE INTO FOOTNOTE?*
- [ ] *TAKE OUT 'NOTE ABOUT INFINITIES' IN DATA*
  -> I don't think it is particularly enlightening in the context of the
  section. Probably it will just become an extended note about the raw
  values. Also the text does not mention that it is partially also due to
  64-bit float limits.
- [ ] *REWRITE FOOTNOTE ABOUT 7 DECIMAL DIGITS FOR 32 BIT FLOATS*
- [X] *Overthink this notation*:
  -> Distinguish between the *likelihood distribution* over which we integrate
  and the *likelihood value* that is its argument!
- [X] *Rephrase the above with a clearer mathematical model*
  \[ \mathfrak{L} = \text{histogram}\left( \{ \mathcal{L}_i \} \right) \]
  where $\mathfrak{L}$ is the combined distribution of all the individual
  $\mathcal{L}$ likelihood values. But clearly there must be better ways to
  talk about this?
  -> Well, at least GPT4 told me something that's pretty similar to what I
  wrote up there. So I guess it stays. Old text:
  #+begin_quote
  However, the reference distributions do not strictly speaking define the
  likelihood distribution. They only represent a PDF to look up a
  probability-like value for a given cluster with properties $(ε, l, f)$ for a
  single likelihood value $\mathcal{L}$.
  #+end_quote
- [ ] *REWRITE TEXT AROUND SO THAT THIS MAKES SENSE HERE*
  Maybe we could create a "box" environment that is a side-note of sorts? I
  think that could work well. Something like this
  https://tex.stackexchange.com/questions/179197/framed-or-colored-box-with-text-and-margin-notes
  possibly. -> related to lnL discussion.
- [ ] *THINK ABOUT OPTIMIZING CUT EFFICIENCY AGAIN!*
  -> Well, we calc multiple different ones, but optimizing fully is not worth
  it (would maybe be for MLP though)
- [ ] *INCLUDE THE CUT VALUE INTO THE RIDGELINE PLOT. VERTICAL LINE WHERE IT
  IS FOR EACH TFKIND AND THEN A GEOM_TEXT NEXT WITH THE VALUE!*
- [ ] *BELOW MUST GO TO PART WHERE WE FINISH HOW ALL THIS WORKS WITH CDL DATA,
  INCL LINEAR INTERPOLATION. ALTERNATIVELY, FINISH THAT EXPLANATION ABOVE
  BEFORE TALKING ABOUT CDL DETAILS?*

**** Generate plots of the likelihood and reference distributions :extended:
:PROPERTIES:
:CUSTOM_ID: sec:background:gen_plots_likelihood
:END:

In order to generate plots of the reference distributions as well as the
likelihood distributions, we use the ~TimepixAnalysis/Plotting/plotCdl~ tool
(adjust the path to the ~calibration-cdl-2018.h5~ according to your system):
#+begin_src sh :dir ~/CastData/ExternCode/TimepixAnalysis/Plotting/plotCdl
ESCAPE_LATEX=true USE_TEX=true \
    ./plotCdl \
    -c ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --outpath ~/phd/Figs/CDL
#+end_src
which generates a range of figures in the local ~out~ directory. Among them:
- ridgeline plots for each property, as well as facet plots
- a combined plot of all reference distributions side by side, as a ridgeline
  each (used in the above section)
- plots of the CDL energies using ~energyFromCharge~
- plots of the likelihood distributions, both as an outline and as a ridgeline
  plot.
Note that this tool can also create comparisons between a background dataset
and its properties and the reference data! For now we use:
- ~out/ridgeline_all_properties_side_by_side.pdf~
- ~out/eccentricity_facet_calibration-cdl-2018.h5.pdf~
- ~out/fractionInTransverseRms_facet_calibration-cdl-2018.h5.pdf~
- ~out/lengthDivRmsTrans_facet_calibration-cdl-2018.h5.pdf~
- ~out/logL_ridgeline.pdf~
which also all live in ~phd/Figs/CDL~.
*** Energy interpolation of likelihood distributions
:PROPERTIES:
:CUSTOM_ID: sec:cdl:cdl_morphing_energy_logL
:END:

For the GridPix detector used in 2014/15, similar X-ray tube data was taken
and each of the 8 X-ray tube energies was assigned an energy interval. The
likelihood distribution to use for each cluster was chosen based on which
energy interval the cluster's energy falls into. As the cluster properties
depend on energy (as seen in
fig. [[sref:fig:background:eccentricity_photo_escape]]) and this dependence is of
course continuous rather than discrete, such a discrete choice leads to
discontinuities of the properties at the interval boundaries. These can then
lead to jumps in the efficiency of the background suppression method and thus
in the achieved background rate.

It seems a safe assumption that the reference distributions undergo a
continuous change for changing energies of the X-rays. Therefore, to avoid
discontinuities, we perform a linear interpolation for each cluster with
energy $E_β$ between the closest two neighboring X-ray tube energies $E_α$ and
$E_γ$ in each probability density $\mathcal{P}_i$ at the cluster's
properties. With $ΔE = |E_α - E_γ|$ the difference in energy between the
closest two X-ray tube energies, each probability density is then interpolated
to:
\[
\mathcal{P}_i(E_β, x_i) = \left(1 - \frac{|E_β - E_α|}{ΔE}\right) · \mathcal{P}_i(E_α, x_i) + \left( 1 - \frac{|E_γ - E_β|}{ΔE} \right) · \mathcal{P}_i(E_γ, x_i) .
\]
Each probability density of the closest neighbors is evaluated at the
cluster's property $x_i$ and the linear interpolation, weighted by the
distance to each energy, is computed. The choice of a linear interpolation was
made after different ideas were tried.
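The formula above can be sketched binwise in a few lines, assuming both neighboring reference distributions share the same binning (Python for illustration only; the names are not those of the actual Nim implementation):

```python
def morph_pdf(E_beta, E_alpha, E_gamma, pdf_alpha, pdf_gamma):
    """Binwise linear interpolation of the two neighboring reference
    distributions at energies E_alpha and E_gamma to the cluster
    energy E_beta (with E_alpha <= E_beta <= E_gamma)."""
    dE = abs(E_alpha - E_gamma)
    w_alpha = 1.0 - abs(E_beta - E_alpha) / dE
    w_gamma = 1.0 - abs(E_gamma - E_beta) / dE
    return [w_alpha * pa + w_gamma * pg
            for pa, pg in zip(pdf_alpha, pdf_gamma)]
```

For $E_α ≤ E_β ≤ E_γ$ the two weights sum to one, so the interpolated histogram stays normalized and no bin can become negative.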
Most importantly, a linear interpolation does not yield unphysical results (for example negative bin counts in interpolated data, which can happen in a spline interpolation) and yields very good results in the cases that can be tested, namely reconstructing a known likelihood distribution $B$ by doing a linear interpolation between its two outer neighbors $A$ and $C$. sref:fig:cdl:cdl_morphing_frac_known_lines shows this idea using the fraction in transverse RMS variable. It is shown here as it exhibits the most obvious shape differences from line to line. The green histogram corresponds to interpolated bins based on the difference in energy between the line above and below the target (hence it is not done for the outer lines, as there is no 'partner' above / below). Despite the interpolation covering an energy range almost twice as large as used in practice, over- or underestimation of the interpolation (green) is minimal. To illustrate the result of the interpolation, fig. sref:fig:cdl:cdl_morphing_frac_all_energies shows a heatmap of the same variable, demonstrating how the interpolation describes the energy and fraction in transverse RMS space continuously. #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Known lines") (label "fig:cdl:cdl_morphing_frac_known_lines") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/CDL/cdlMorphing/fractionInTransverseRms_ridgeline_morph_mtLinear_calibration-cdl-2018.h5_2018.pdf")) (subfigure (linewidth 0.5) (caption "All energies") (label "fig:cdl:cdl_morphing_frac_all_energies") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/CDL/cdlMorphing/cdl_as_raster_interpolated_morph_mtNone_fractionInTransverseRms.pdf")) (caption (subref "fig:cdl:cdl_morphing_frac_known_lines") " shows recovering the known lines (aside from the outer ones) using a binwise linear interpolation from the neighbors.
While the interpolation (green) sometimes over- or undershoots, it should be kept in mind that it covers almost twice the energy range needed in practice. " (subref "fig:cdl:cdl_morphing_frac_all_energies") " is a heatmap of the full energy vs. fraction in transverse RMS space interpolated using this binwise linear interpolation. The energies of the fluorescence lines on which the interpolation is based are indicated.") (label "fig:cdl:cdl_morphing_examples")) #+end_src That such an interpolation works as well as it does at recovering a known line implies that a linear interpolation over 'half' [fn:why_not_half_energy] the energy interval in practice should yield reasonably realistic distributions, certainly much better than allowing a discrete jump at specific boundaries. See appendix [[#sec:appendix:morphing_cdl_spectra]] for much more information about the ideas considered and comparisons to the approach without interpolation. In particular fig. [[fig:appendix:cdl_morphing_logL_vs_energy]], which computes the $\ln\mathcal{L}$ values for all of the (cleaned, cuts of tab. [[tab:cdl:cdl_cleaning_cuts]] applied) CDL data, comparing the cases with and without interpolation and showing a clear improvement in the smoothness of the point cloud. [fn:why_not_half_energy] It is not exactly half the energy interval, as the lines are not evenly spaced in energy. **** TODOs for the above sections :noexport: - [ ] *REDO THESE TWO PLOTS!* -> Partially done. - [X] *FIX REFERENCE TO COLORS AFTER FIXING PLOTS* -> Done, but plots are still not final. Need to decide on final sizes first. - [ ] *MAYBE MOVE NEXT PARAGRAPH BACK TO INTRODUCTION OF METHOD* Due to the energy dependence eq. [[eq:background:likelihood_def]] needs to be understood to be valid in the energy range that defines the used likelihood distributions. - [X] *REWRITE THIS!!!* -> Referring to introduction of section. -> I'm still not 100% happy with the explanation, but well. Seems a bit redundant in its content.
- [X] *EXPAND ON THIS, GIVE AN EQUATION DEFINING WHAT WE MEAN INCLUDING THE BINNING!* - [ ] *WE STILL HAVEN'T TALKED ABOUT BINNING ETC OF ALL THIS!* - [X] *ADD PLOT OF LINEAR INTERPOLATION 'RECOVERING' MIDDLE DISTRIBUTION* - [X] *IN APPENDIX PUT PLOTS THAT SHOW HEATMAP OF KDE(?)/INTERPOLATION OF FULL ECCENTRICITY/... BEHAVIOR* -> Partially done, notes of CDL morphing study added in appendix as :noexport: - [X] What about 2D heatmap showing energy ranges? - [X] explain why Inf is a common value - [ ] explain why if interpreted as probabilities in reference distributions it's not to be thought of as "is 0.2% probability to be an X-ray", but rather is a generalized concept of probability, which when combined into a likelihood is just a distribution where all we care about is the cumulative fraction below a certain cutoff (our software efficiency) - [ ] potentially rephrase the footnote, in particular after actually generating the likelihood distributions as non-log plot! - [X] *REFERENCE OF 1e-5 to 1e-20 IS NOT TRUE, AS WE USE LN INSTEAD OF LOG10!* -> Fixed. - [ ] Note: an interesting plot could be to compute the likelihood distributions for each CDL _run_ separately and produce such a ridgeline plot, with all runs in each target/filter row. Similar to the ridgeline plots we have comparing the properties. Given the similarity of the properties there won't be a significant difference, but that alone would be valuable insight! - [ ] *IMPORTANT* <2023-02-04 Sat 14:31> If I'm not too confused right now: We do *TWO* different morphings. One for the reference distributions which are used to compute the LogL *values* for a new cluster and another based on the existing logL *distributions* in order to determine _the cut values_ in the morphed case! This means we *CAN* create an interpolation heatmap of the data using that. It still makes me wonder about whether we can get by with an unbinned approach somehow.
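To make the cut value determination mentioned above concrete: for a given software efficiency, the cut value is simply the corresponding quantile of the (interpolated) $\ln\mathcal{L}$ distribution of the reference data. A minimal Python sketch using a nearest-rank quantile (names are hypothetical, this is not the actual TimepixAnalysis code):

#+begin_src python
# Determine the lnL cut value for a desired software efficiency.
# Clusters with lnL <= cut pass, so the cut is the efficiency-quantile
# of the reference lnL values (simple nearest-rank definition).
def logl_cut_value(logl_values, software_efficiency):
    s = sorted(logl_values)
    idx = min(int(software_efficiency * len(s)), len(s) - 1)
    return s[idx]
#+end_src

For example, with reference values $0, 1, \dots, 99$ and an efficiency of $0.8$ the cut comes out at $80$, below which $\SI{80}{\%}$ of the reference clusters lie.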
**** Practical note about interpolation :extended: The need to define a cut value $\mathcal{L}'$ for each likelihood distribution, to have a variable to cut on, is one reason why the practical implementation does not interpolate the reference distributions on the fly for every individual cluster energy, but instead uses a pre-calculated high resolution mesh of $\num{1000}$ interpolated distributions. This allows the cut values as well as the distributions to be computed before starting to classify clusters, saving significant computation time. With a mesh of $\num{1000}$ energies the largest error on the energy is $<\SI{5}{eV}$ anyhow. **** Generate plots for morphing / interpolation :extended: :PROPERTIES: :CUSTOM_ID: sec:background:generated_morphing_plots :END: The plots are created with ~TimepixAnalysis/Tools/cdlMorphing~: #+begin_src sh :dir ~/CastData/ExternCode/TimepixAnalysis/Tools/cdlMorphing/ WRITE_PLOT_CSV=true USE_TEX=true ./cdlMorphing \ --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \ --outpath ~/phd/Figs/CDL/cdlMorphing/ #+end_src *Note*: The script produces the ~cdl_as_raster_interpolated_*~ plots. These use the terminology ~mtNone~ for "no" morphing, but that is only because we perform the morphing 'manually' in the code using the same function as in ~likelihood_utils.nim~. The reason is that the morphing performed by the script itself uses an offset of 2 (to skip the center line!). *** Study of interpolation of CDL distributions [/] :extended: The full study and notes about how we ended up using a linear interpolation can be found in appendix [[#sec:appendix:morphing_cdl_spectra]]. - [X] This is now in the appendix. Good enough? Link Put here all our studies of how and why we ended up at linear interpolation! The actual linear interpolation results (i.e. reproducing the "middle" one) will be shown in the actual thesis.
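Coming back to the practical note above, the pre-calculated mesh idea can be sketched as follows. The energy range used here is only an assumption for illustration, but with $\num{1000}$ points over roughly $\SIrange{0.2}{10}{keV}$ the nearest-neighbor lookup error is indeed below $\SI{5}{eV}$:

#+begin_src python
# Sketch of the pre-calculated interpolation mesh: instead of interpolating
# the reference distributions anew for every cluster, build N distributions
# on a fixed energy grid once and look up the nearest one per cluster.
# The energy range below is an assumption for illustration only.
E_LO, E_HI, N = 0.2, 10.0, 1000   # keV bounds, number of mesh points
STEP = (E_HI - E_LO) / (N - 1)    # ~9.8 eV spacing -> max error < 5 eV

mesh_energies = [E_LO + i * STEP for i in range(N)]

def nearest_mesh_index(energy):
    # round to the nearest grid point, clamped to the valid range
    idx = round((energy - E_LO) / STEP)
    return max(0, min(N - 1, idx))
#+end_src

The maximum error on the energy is half the mesh spacing, i.e. about $\SI{4.9}{eV}$ under these assumptions.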
*** Energy resolution :PROPERTIES: :CUSTOM_ID: sec:cdl:energy_resolution :END: On a slight tangent, with the main fluorescence lines fitted in all the charge spectra, the position and line width can be used to compute the energy resolution of the detector: \[ ΔE = \frac{σ}{μ} \] where $σ$ is a measure of the line width and $μ$ the position (in this case in total cluster charge). Note that in some cases the full width at half maximum (FWHM) is used and in others the standard deviation (for a normal distribution the FWHM is about $2.35 σ$). The energy resolutions obtained by our detector in the CDL dataset are shown in fig. [[fig:cdl:energy_resolution]]. At the lowest energies below $\SI{1}{keV}$ the resolution is about $ΔE \approx \SI{30}{\%}$ and generally between $\SIrange{10}{15}{\%}$ from $\SI{2}{keV}$ on. #+CAPTION: Energy resolutions depending on the energy of the fluorescence lines based on the #+CAPTION: charge and pixel spectra. As expected the behavior of the energy resolution is #+CAPTION: more or less $1/E$. The uncertainty for each point is the error propagated uncertainty #+CAPTION: based on the fit parameter uncertainties for the mean position and the line width. #+CAPTION: There are multiple data points for each energy owing to the fact that each run #+CAPTION: is fit separately. #+NAME: fig:cdl:energy_resolution [[~/phd/Figs/CDL/energyresoplot-2019.pdf]] **** TODOs for this section :noexport: - [ ] *COMPUTE ENERGY RESOLUTION FOR EVERY 55FE SPECTRUM AND ADD NOTE ABOUT IT HERE!* -> Would be neat, but then again not really important. - [ ] this should be a small aside, as it is not extremely important for us. Only serves as an input to our systematics. I guess it is important in that sense after all, heh.
*NOTE AS TO HOW FAR IT ACTUALLY GOES INTO SYSTEMATICS!* - [ ] *MAYBE ADD REFERENCE TO SYSTEMATICS HERE* **** Note on energy resolutions :extended: Our energy resolutions, if converted to FWHM, seem rather poor compared to best-in-class gaseous detectors, which have achieved values almost as low as 10%. **** Generate plot of energy resolution :extended: This plot is generated as part of the ~cdl_spectrum_creation.nim~ program. See sec. [[#sec:background:gen_plots_cdl_data]] for the commands. ** Application of likelihood cut for background rate :PROPERTIES: :CUSTOM_ID: sec:background:likelihood_cut :END: By applying the likelihood cut method introduced in the first part of this chapter to the background data of CAST, we can extract all clusters that are X-ray-like and therefore describe the irreducible background rate. Unless otherwise stated, the following plots use a software efficiency of $\SI{80}{\%}$. Fig. [[sref:fig:background:cluster_centers_no_vetoes_2017_18]] shows the cluster centers and their distribution over the whole center GridPix, which highlights the extremely uneven distribution of background. The increase towards the edges and in particular the corners is due to events being cut off. Statistically, cutting off a piece of a track-like event likely makes the resulting cluster more spherical than before, in particular in a corner, where potentially two sides are cut off (see also sec. [[#sec:detector:septemboard]]). This is an aspect the detector vetoes help with, see sec. [[#sec:background:septem_veto]]. In addition the plot shows some smaller regions, a few pixels in diameter, that show more activity due to minor noise. With about $\num{74000}$ clusters left on the center chip, the likelihood cut at $\SI{80}{\percent}$ software efficiency represents a background suppression of about a factor $\num{20}$ (compare tab. [[tab:cast:data_stats_overview]], $\sim\num{1.5e6}$ events on the center chip over the entire CAST data taking).
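The quoted suppression factor follows directly from the two cluster counts; as a quick check (numbers as quoted in the text):

#+begin_src python
# Quick check of the quoted background suppression factor:
raw_clusters = 1.5e6       # raw events on the center chip over the full dataset
passing_clusters = 74_000  # X-ray like clusters left after the lnL cut at 80 %

suppression = raw_clusters / passing_clusters  # about a factor 20
#+end_src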
In the regions towards the center of the chip, the suppression is of course much higher. Fig. [[sref:fig:background:background_suppression_tiles_no_vetoes_2017_18]] shows what the background suppression looks like locally when comparing the number of clusters in a small region of the chip to the total number of raw clusters that were detected. Note that this is based on the assumption that the raw data is homogeneously distributed. See appendix [[#sec:appendix:occupancy]] for occupancy maps of the raw data. #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Cluster centers $\\ln\\mathcal{L}$") (label "fig:background:cluster_centers_no_vetoes_2017_18") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/backgroundClusters/background_cluster_centers_lnL80_no_vetoes.pdf")) (subfigure (linewidth 0.5) (caption "Background suppression") (label "fig:background:background_suppression_tiles_no_vetoes_2017_18") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/backgroundClusters/background_suppression_tile_map_lnL80_no_vetoes.pdf")) (caption (subref "fig:background:cluster_centers_no_vetoes_2017_18") " Cluster centers of all X-ray like clusters in the 2017/18 CAST background data. The number of these clusters increases drastically towards the edges and in particular the corners, due to geometric effects. Some regions with minor sparking at the edges are visible as small yellow points of a few pixels in size. The red outline is the center \\goldArea region in which we quote the background rate. " (subref "fig:background:background_suppression_tiles_no_vetoes_2017_18") " shows the local background suppression over the total number of raw clusters detected.
It assumes a homogeneous background distribution in the raw data.") (label "fig:background:background_no_vetoes_clusters")) #+end_src The distribution of the X-ray like clusters in the background data motivates, on the one hand, considering local background rates for a physics analysis and, on the other, the selection of a specific region in which a benchmark background rate can be defined. For this purpose cite:krieger2018search defines different detector regions in which the background rate is computed and treated as constant. One of these, termed the 'gold region', is a square around the center of $\SI{5}{mm}$ side length (visible as the red square in fig. sref:fig:background:cluster_centers_no_vetoes_2017_18 and a bit less than 3x3 tiles around the center in fig. sref:fig:background:background_suppression_tiles_no_vetoes_2017_18). Unless otherwise specified, all background rate plots in the remainder of the thesis refer to this region of low background. Using the $\ln\mathcal{L}$ approach for the GridPix data taken at CAST in 2017/18 at a software efficiency of $ε_{\text{eff}} = \SI{80}{\%}$ then yields the background rate shown in fig. [[fig:background_rate_eff80_only_center]]. The average background rate between $\SIrange{0}{8}{keV}$ in this case is $\SI{2.12423(9666)e-05}{keV⁻¹.cm⁻².s⁻¹}$. #+begin_comment [INFO]:Dataset: 2017/18 [INFO]: Integrated background rate in range: 0.0 .. 8.0: 1.69938(7732)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 2.12423(9666)e-05 keV⁻¹·cm⁻²·s⁻¹ #+end_comment #+CAPTION: Background rate achieved based on the $\ln\mathcal{L}$ method at #+CAPTION: $ε_{\text{eff}} = \SI{80}{\percent}$ using the CAST 2017/18 data #+CAPTION: without the application of any vetoes. #+NAME: fig:background_rate_eff80_only_center [[~/phd/Figs/background/background_rate_crGold_no_vetoes.pdf]] This is comparable to the background rate presented in cite:krieger2018search, and reproduced here in fig.
[[fig:detector:background_rate_2014]] for the 2014/15 CAST GridPix detector. For this result no additional detector features over those available for the 2014/15 detector are used and the classification technique is essentially the same. [fn:differences] In order to improve on this background rate we will first look at a different classification technique based on artificial neural networks. Then afterwards we will go through the different detector features and discuss how they can further improve the background rate. [fn:differences] The only conceptual difference in the techniques is our inclusion of the interpolation between reference distributions in energy. *** TODOs for above :noexport: - [X] *UPDATE TEXT ABOUT NUMBER OF CLUSTERS, NO SPARKY* -> Done, I *THINK* everywhere. - The following is now part of the introduction of this chapter. #+begin_quote Note that by applying the exact same method to the CAST data taken during the solar tracking time, the dataset containing candidates for the solar axion search is extracted. #+end_quote This is not needed anymore. #+begin_quote With the software efficiency defined, the $-\ln\mathcal{L}$ of each cluster is simply computed using the linear interpolation of the log likelihood distribution determined based on the cluster energy. A cluster is considered a signal if its $L$ is smaller than the $L$ computed for the desired $ε_{\text{eff}}$ at the energy of the cluster. #+end_quote - [ ] *redo the plots* - [ ] *INCLUDE GOLD REGION IN CLUSTER PLOT!* - [X] *SLIGHTLY LARGER THAN TILES: ACTUALLY CORRECT?* -> Yes. Pretty much mid of the next tile. - [ ] *THINK ABOUT IF EVERYTHING IS CORRECT HERE* -> Referring to comparison to Christoph I think - [ ] *REWRITE BELOW TO MAKE A BIT MORE SENSE IN CONTEXT OF ABOVE* - [ ] *EXPLAIN THAT THIS USES GOLD REGION!!!* -> cluster distribution motivates that! - [ ] *ADD OCCUPANCIES TO APPENDIX!* - [X] *FIX TITLE OF SUPPRESSION PLOT* - [ ] This *COULD* become its own chapter after all. 
That way make the CDL part a likelihood method & stuff chapter and this an apply method for background rate chapter. - [X] most of it moved to likelihood method section - [ ] *REWRITE ME* - [X] *BETTER CLARIFY DISTINCTION BETWEEN BACKGROUND AND TRACKING* 1. show all clusters left over after likelihood cut? Introduces discussion of different chip regions. For a pure "background rate" thus taking a center region is a good idea and also introduces the talking point again that vetoes can help. 2. show that background rate in center. - [ ] What to do with raw data vs. applying rate? - [ ] how to treat different chip regions? - [ ] Try to replace the binning of reference distributions and therefore logL distribution by a KDE. Note on background rate plot above: - [ ] *THINK ABOUT THIS* -> The slight 'cut' at 3 keV is stronger than I would have thought and in particular different from the background rate in [[~/org/Figs/statusAndProgress/IAXO_TDR/background_rate_2017_2018_no_vetoes.pdf]] which is much more "smooth" than the new background. My first intuition was that this was due to some bug related to the morphing kind. I.e. we weren't actually using the linear morphing. However, I've computed the equivalent crGold, no vetoes case [[~/org/Figs/statusAndProgress/backgroundRates/background_rate_crGold_no_morphing_no_vetoes.pdf]] and there _is_ a difference. So it's definitely related to the morphing. The other big difference from the old plot is the revamp of the calculation of the reference distributions (no use of the X-ray reference H5 file anymore) and obviously the fitting by run + adapted charge cuts for the reference data. I *ASSUME* that this is the reason we get the new results. *However* we should somehow maybe perform some check that we're actually doing everything correctly?
*** Generate background rate and cluster plot [0/2] :extended: - [ ] *NOTE: THE PLOT WE CURRENTLY GENERATE: DOES IT USE TRACKING INFO OR NOT?* - [ ] *EXPLAIN AND SHOW HOW TO INSERT TRACKING INFO* #+begin_src sh ./cast_log_reader tracking \ -p ../resources/LogFiles/tracking-logs \ --startTime 2018/05/01 \ --endTime 2018/12/31 \ --h5out ~/CastData/data/DataRuns2018_Reco.h5 \ --dryRun #+end_src With the ~dryRun~ option you are only presented with what would be written. Run without to actually add the data. To generate the background rate plot we need: 1. the reconstructed data files for the background data at CAST - (optional for this part): the slow control and tracking logs of CAST - the tracking information added to the reconstructed background data files to compute the background rate only for tracking or only for signal candidate data, done by running the ~cast_log_reader~ with the reconstructed data files as input 2. the reconstructed CDL data and the ~calibration-cdl-2018.h5~ file generated from it for the reference and likelihood distributions To compute the background rate using these inputs we use the ~likelihood~ tool first in order to apply the likelihood cut method according to a desired software efficiency defined in the ~config.toml~ file. Afterwards we can use another tool to generate the plot of the background rate (which takes care of scaling the data to a rate etc.). 
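The 'scaling to a rate' the plotting tool performs essentially amounts to normalizing the cluster counts by live time, active area, and energy range. A minimal sketch of that normalization with Poisson uncertainties (this is not the actual ~plotBackgroundRate~ code; the $\SI{0.25}{cm²}$ gold region area follows from its $\SI{5}{mm}$ side length):

#+begin_src python
import math

# Normalize a cluster count to a differential background rate in
# keV⁻¹·cm⁻²·s⁻¹, scaling the Poisson uncertainty sqrt(N) the same way.
def background_rate(n_clusters, live_time_s, area_cm2, e_range_keV):
    norm = live_time_s * area_cm2 * e_range_keV
    return n_clusters / norm, math.sqrt(n_clusters) / norm

# e.g. for the gold region: area_cm2 = 0.5 * 0.5 = 0.25
#+end_src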
The relevant section of the ~config.toml~ file should look like this: #+begin_src toml [Likelihood] # the signal efficiency to be used for the logL cut (percentage of X-rays of the # reference distributions that will be recovered with the corresponding cut value) signalEfficiency = 0.8 # the CDL morphing technique to be used (see `MorphingKind` enum), none or linear morphingKind = "Linear" # clustering algorithm for septem veto clusterAlgo = "dbscan" # choose from {"default", "dbscan"} # the search radius for the cluster finding algorithm in pixel searchRadius = 50 # for default clustering algorithm epsilon = 65 # for DBSCAN algorithm [CDL] # whether to fit the CDL spectra by run or by target/filter combination. # If `true` the resulting `calibration-cdl*.h5` file will contain sub groups # for each run in each target/filter combination group! fitByRun = true #+end_src (linear morphing, 80% efficiency, and CDL based on fits per run) In total we'll want the following files: - for years 2017 & 2018: - chip region crAll & crGold: - no vetoes - scinti veto - FADC veto - septem veto - line veto - [X] *USE ~createAllLikelihoodCombinations~ for it after short rewrite* In order to generate all likelihood output files (after having computed the ~likelihood~ datasets in the data files and added the tracking information!) 
we will use the ~createAllLikelihoodCombinations~ tool (rename please) - [ ] *INSERT TRACKING INFORMATION AND ADD THE ~--tracking~ FLAG TO ~LIKELIHOOD~* - [ ] *RERUN THE BELOW AND CHANGE PATHS TO DIRECTLY IN PHD DIRECTORY!* - *UPDATE*: The current files we will likely use are generated by: All standard variants including the different FADC percentiles: #+begin_src sh ./createAllLikelihoodCombinations \ --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \ --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \ --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \ --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \ --regions crGold --regions crAll \ --vetoSets "{fkScinti, fkFadc, fkSeptem, fkLineVeto, fkExclusiveLineVeto}" \ --fadcVetoPercentiles 0.9 --fadcVetoPercentiles 0.95 --fadcVetoPercentiles 0.99 \ --out /t/lhood_outputs_adaptive_fadc \ --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \ --multiprocessing #+end_src The septem + line variants _without_ the FADC: #+begin_src sh ./createAllLikelihoodCombinations \ --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \ --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \ --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \ --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \ --regions crGold --regions crAll \ --vetoSets "{+fkScinti, fkSeptem, fkLineVeto, fkExclusiveLineVeto}" \ --out /t/lhood_outputs_adaptive_fadc \ --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \ --multiprocessing #+end_src and the lnL cut efficiency variants for 70 and 90% for the septem + line + FADC@90% variants: #+begin_src sh ./createAllLikelihoodCombinations \ --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \ --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \ --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \ --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \ --regions crGold --regions crAll \ --signalEfficiency 0.7 --signalEfficiency 0.9 \ --vetoSets "{+fkScinti, +fkFadc, +fkSeptem, fkLineVeto}" \ --fadcVetoPercentile 0.9 \ --out 
/t/lhood_outputs_adaptive_fadc \ --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \ --multiprocessing \ --dryRun #+end_src which for the time being are here: [[file:~/org/resources/lhood_limits_automation_correct_duration/]] #+begin_src sh :var TPA='/home/basti/CastData/ExternCode/TimepixAnalysis' :results drawer cd $TPA/Analysis ./createAllLikelihoodCombinations --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \ --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \ --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \ --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \ --regions crGold \ --regions crAll \ --vetoes fkNoVeto \ --vetoes fkScinti \ --vetoes fkFadc \ --vetoes fkSeptem \ --vetoes fkLineVeto \ --vetoes fkExclusiveLineVeto \ --out ~/phd/resources/background/autoGen \ --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \ --multiprocessing \ --dryRun #+end_src #+RESULTS: :results: Command: (fname: "/home/basti/CastData/data/DataRuns2017_Reco.h5", year: 2017, region: crGold, vetoes: {fkNoVeto}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run2_crGold.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2017_Reco.h5", year: 2017, region: crGold, vetoes: {fkNoVeto, fkScinti}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run2_crGold_scinti.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2017_Reco.h5", year: 2017, region: crGold, vetoes: {fkNoVeto, fkScinti, fkFadc}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run2_crGold_scinti_fadc.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2017_Reco.h5", year: 2017, region: crGold, vetoes: {fkNoVeto, fkScinti, fkFadc, fkSeptem}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run2_crGold_scinti_fadc_septem.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2017_Reco.h5", year: 2017, region: crGold, vetoes: {fkNoVeto, fkScinti, fkFadc, fkSeptem, fkLineVeto}) As filename: 
/home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run2_crGold_scinti_fadc_septem_line.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2017_Reco.h5", year: 2017, region: crGold, vetoes: {fkNoVeto, fkScinti, fkFadc, fkExclusiveLineVeto}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run2_crGold_scinti_fadc_line.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2017_Reco.h5", year: 2017, region: crAll, vetoes: {fkNoVeto}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run2_crAll.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2017_Reco.h5", year: 2017, region: crAll, vetoes: {fkNoVeto, fkScinti}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run2_crAll_scinti.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2017_Reco.h5", year: 2017, region: crAll, vetoes: {fkNoVeto, fkScinti, fkFadc}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run2_crAll_scinti_fadc.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2017_Reco.h5", year: 2017, region: crAll, vetoes: {fkNoVeto, fkScinti, fkFadc, fkSeptem}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run2_crAll_scinti_fadc_septem.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2017_Reco.h5", year: 2017, region: crAll, vetoes: {fkNoVeto, fkScinti, fkFadc, fkSeptem, fkLineVeto}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run2_crAll_scinti_fadc_septem_line.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2017_Reco.h5", year: 2017, region: crAll, vetoes: {fkNoVeto, fkScinti, fkFadc, fkExclusiveLineVeto}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run2_crAll_scinti_fadc_line.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2018_Reco.h5", year: 2018, region: crGold, vetoes: {fkNoVeto}) As filename: 
/home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run3_crGold.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2018_Reco.h5", year: 2018, region: crGold, vetoes: {fkNoVeto, fkScinti}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run3_crGold_scinti.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2018_Reco.h5", year: 2018, region: crGold, vetoes: {fkNoVeto, fkScinti, fkFadc}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run3_crGold_scinti_fadc.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2018_Reco.h5", year: 2018, region: crGold, vetoes: {fkNoVeto, fkScinti, fkFadc, fkSeptem}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run3_crGold_scinti_fadc_septem.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2018_Reco.h5", year: 2018, region: crGold, vetoes: {fkNoVeto, fkScinti, fkFadc, fkSeptem, fkLineVeto}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run3_crGold_scinti_fadc_septem_line.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2018_Reco.h5", year: 2018, region: crGold, vetoes: {fkNoVeto, fkScinti, fkFadc, fkExclusiveLineVeto}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run3_crGold_scinti_fadc_line.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2018_Reco.h5", year: 2018, region: crAll, vetoes: {fkNoVeto}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run3_crAll.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2018_Reco.h5", year: 2018, region: crAll, vetoes: {fkNoVeto, fkScinti}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run3_crAll_scinti.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2018_Reco.h5", year: 2018, region: crAll, vetoes: {fkNoVeto, fkScinti, fkFadc}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run3_crAll_scinti_fadc.h5 Command: 
(fname: "/home/basti/CastData/data/DataRuns2018_Reco.h5", year: 2018, region: crAll, vetoes: {fkNoVeto, fkScinti, fkFadc, fkSeptem}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run3_crAll_scinti_fadc_septem.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2018_Reco.h5", year: 2018, region: crAll, vetoes: {fkNoVeto, fkScinti, fkFadc, fkSeptem, fkLineVeto}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run3_crAll_scinti_fadc_septem_line.h5 Command: (fname: "/home/basti/CastData/data/DataRuns2018_Reco.h5", year: 2018, region: crAll, vetoes: {fkNoVeto, fkScinti, fkFadc, fkExclusiveLineVeto}) As filename: /home/basti/phd/resources/background/autoGen/likelihood_cdl2018_Run3_crAll_scinti_fadc_line.h5 :end: using the ~--dryRun~ option first to see the generated commands that will be run. We will run this over night <2023-02-13 Mon 00:37> now using the new default eccentricity cutoff for the line veto of 1.5 (motivated by the ratio of fraction passing real over fake!) as well as the ~lvRegularNoHLC~ line veto kind (although this should not matter, as we won't use the line veto without the septem veto!). The files are now in [[file:resources/background/autoGen/]]. - [ ] *WARNING: THE GENERATED FILES HERE STILL USE OUR HACKED IN CHANGED NUMBER OF BINS FOR THE REFERENCE DISTRIBUTIONS!* -> TWICE AS MANY BINS! - [ ] *THESE ARE OUTDATED. 
WE USE OUR TOOL* First let's apply the likelihood tool to the 2017 data for the gold region: #+begin_src sh likelihood -f ~/CastData/data/DataRuns2017_Reco.h5 --computeLogL \ --region crGold \ --cdlYear 2018 \ --h5out /home/basti/phd/resources/background/lhood_2017_crGold_80eff_no_vetoes.h5 \ --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 #+end_src and the same for the end of 2018 data: #+begin_src sh likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 --computeLogL \ --region crGold \ --cdlYear 2018 \ --h5out /home/basti/phd/resources/background/lhood_2018_crGold_80eff_no_vetoes.h5 \ --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 #+end_src and now the same for the whole chip: #+begin_src sh likelihood -f ~/CastData/data/DataRuns2017_Reco.h5 --computeLogL \ --region crAll \ --cdlYear 2018 \ --h5out /home/basti/phd/resources/background/lhood_2017_crAll_80eff_no_vetoes.h5 \ --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 #+end_src and the same for the end of 2018 data: #+begin_src sh likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 --computeLogL \ --region crAll \ --cdlYear 2018 \ --h5out /home/basti/phd/resources/background/lhood_2018_crAll_80eff_no_vetoes.h5 \ --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 #+end_src With this we can first create the plot of the cluster centers from the all chip files using ~plotClusterCenters~: #+begin_src sh :results drawer plotBackgroundClusters \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL.h5 \ --zMax 30 \ --title "X-ray like clusters of CAST data" \ --outpath ~/phd/Figs/backgroundClusters/ \ --showGoldRegion \ --backgroundSuppression \ --energyMin 0.2 --energyMax 12.0 \ --suffix "_lnL80_no_vetoes" \ --filterNoisyPixels \ --useTikZ #+end_src #+RESULTS: :results: reading: /home/basti/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL.h5 
reading: /home/basti/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL.h5 @["/home/basti/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL.h5", "/home/basti/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL.h5"] DataFrame with 3 columns and 24527 rows: Idx x y count dtype: int int int 0 2 1 2 1 2 247 3 2 3 247 5 3 5 247 1 4 6 1 2 5 6 166 8 6 9 33 1 7 9 88 1 8 9 122 1 9 9 128 1 10 9 205 1 11 9 224 1 12 10 7 1 13 10 57 1 14 10 106 1 15 10 107 1 16 10 146 1 17 10 147 1 18 10 165 2 19 10 166 1 [INFO]: Saving plot to /home/basti/phd/Figs/backgroundClusters//background_cluster_centers_lnL80_no_vetoes.pdf INFO: The integer column `x` has been automatically determined to be continuous. To overwrite this behavior add a `+ scale_x/y_discrete()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to. Or apply a `factor` to the column name in the `aes` call, i.e. `aes(..., factor("x"), ...)`. INFO: The integer column `xs` has been automatically determined to be continuous. To overwrite this behavior add a `+ scale_x/y_discrete()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to. Or apply a `factor` to the column name in the `aes` call, i.e. `aes(..., factor("xs"), ...)`. INFO: The integer column `y` has been automatically determined to be continuous. To overwrite this behavior add a `+ scale_x/y_discrete()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to. Or apply a `factor` to the column name in the `aes` call, i.e. `aes(..., factor("y"), ...)`. INFO: The integer column `ys` has been automatically determined to be continuous. To overwrite this behavior add a `+ scale_x/y_discrete()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to. Or apply a `factor` to the column name in the `aes` call, i.e. `aes(..., factor("ys"), ...)`. 
DataFrame with 4 columns and 49 rows: Idx xI yI cI sI dtype: int int int float 0 0 0 4821 6.882 1 0 36 1952 17 2 0 73 364 91.15 3 0 109 618 53.69 4 0 146 405 81.93 5 0 182 2686 12.35 6 0 219 2257 14.7 7 36 0 3970 8.358 8 36 36 3665 9.053 9 36 73 1251 26.52 10 36 109 1488 22.3 11 36 146 1393 23.82 12 36 182 4895 6.778 13 36 219 2604 12.74 14 73 0 509 65.19 15 73 36 578 57.4 16 73 73 195 170.2 17 73 109 175 189.6 18 73 146 196 169.3 19 73 182 845 39.27 [INFO] TeXDaemon ready for input. shellCmd: command -v lualatex shellCmd: lualatex -output-directory /home/basti/phd/Figs/backgroundClusters /home/basti/phd/Figs/backgroundClusters/background_suppression_tile_map_lnL80_no_vetoes.tex Generated: /home/basti/phd/Figs/backgroundClusters/background_suppression_tile_map_lnL80_no_vetoes.pdf :end: From the logL output files created by ~createAllLikelihoodCombinations~, let's now create the 'classical' background rate in the gold region without any vetoes: #+begin_src sh :results drawer plotBackgroundRate \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL.h5 \ --combName 2017/18 \ --combYear 2017 \ --centerChip 3 \ --region crGold \ --title "Background rate from CAST data" \ --showNumClusters \ --showTotalTime \ --topMargin 1.5 \ --energyDset energyFromCharge \ --outfile background_rate_crGold_no_vetoes.pdf \ --outpath ~/phd/Figs/background/ \ --useTeX \ --quiet #+end_src #+RESULTS: :results: [INFO]:Total time: 7723494.509174726 of file: lhood_c18_R2_crAll_sEff_0.8_lnL.h5 [INFO]:Total time: 3645329.367165649 of file: lhood_c18_R3_crAll_sEff_0.8_lnL.h5 [INFO]:Total total time: 11368823.87634037 Manual rate = 1.97910(7618)e-05 [INFO]:Dataset: 2017/18 [INFO]: Integrated background rate in range: 0.0 .. 12.0: 2.37492(9141)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 
12.0: 1.97910(7618)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.5008(2482)e-05 [INFO]:Dataset: 2017/18 [INFO]: Integrated background rate in range: 0.5 .. 2.5: 7.0016(4963)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 3.5008(2482)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.7053(1454)e-05 [INFO]:Dataset: 2017/18 [INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.21736(6545)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.7053(1454)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.8421(2325)e-05 [INFO]:Dataset: 2017/18 [INFO]: Integrated background rate in range: 0.0 .. 2.5: 9.6052(5813)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 2.5: 3.8421(2325)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 6.7729(7718)e-06 [INFO]:Dataset: 2017/18 [INFO]: Integrated background rate in range: 4.0 .. 8.0: 2.7092(3087)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 6.7729(7718)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.37217(8970)e-05 [INFO]:Dataset: 2017/18 [INFO]: Integrated background rate in range: 2.0 .. 8.0: 8.2330(5382)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.37217(8970)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.12423(9666)e-05 [INFO]:Dataset: 2017/18 [INFO]: Integrated background rate in range: 0.0 .. 8.0: 1.69938(7732)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 2.12423(9666)e-05 keV⁻¹·cm⁻²·s⁻¹ | Classifier | ε_eff | Scinti | FADC | Septem | Line | ε_total | Rate | | LnL | 0.800 | false | false | false | false | 0.800 | 2.12423(9666)e-05 | [INFO]:INFO: storing plot in /home/basti/phd/Figs/background/background_rate_crGold_no_vetoes.pdf [INFO] TeXDaemon ready for input. 
shellCmd: command -v lualatex
shellCmd: lualatex -output-directory /home/basti/phd/Figs/background /home/basti/phd/Figs/background/background_rate_crGold_no_vetoes.tex
Generated: /home/basti/phd/Figs/background/background_rate_crGold_no_vetoes.pdf
:end:

which generates [[file:~/phd/Figs/background/background_rate_crGold_no_vetoes.pdf]] from the TikZ ~.tex~ file of the same name, with the preceding output about the background rates.

- [ ] *FINISH THIS*

*** Background suppression [/] :extended:

In tab. [[tab:cast:data_stats_overview]] we see that the total CAST data contains $\num{984319} + \num{470188} = \num{1454507}$ events. Compared to our $\num{94625}$ clusters left on the whole chip, this represents a:
#+begin_src nim
let c17 = 984319'f64
let c18 = 470188'f64
let cCut = 94625'f64
echo (c17 + c18) / cCut
#+end_src

#+RESULTS:
: 15.37127608982827

background suppression of about 15.4.

- [X] We could make a plot showing the background suppression? A tile map with text on taking X times Y squares and printing the suppression? Would be a nice way to give a better understanding of how the vetoes help to improve things over the chip.

Running ~plotBackgroundClusters~ now also produces a tile map of the background suppression over the total raw number of clusters on the chip, ~plots/background_suppression_tile_map.pdf~, which is in [[file:Figs/backgroundClusters/background_suppression_tiles_no_vetoes.pdf]].

*** Verification of software efficiency using calibration data [/] :extended:
- [ ] *WRITE ME*
  -> Important because systematics.
- [ ] *MAYBE INSTEAD OF EXTENDED, APPENDIX*
- [ ] *SHOW OUR VERIFICATION USING 55FE DATA*
  -> Check and if not possibly do: Using fake energies (by leaving out pixel info) to generate arbitrary energies, check the efficiency. I think this is what we did in that context. Right, that was it: [[file:~/CastData/ExternCode/TimepixAnalysis/Tools/determineEffectiveEfficiency.nim]]
  -> This is "outdated" in the sense that we have good fake generation now.
Probably good enough to just discuss the efficiency that we use for the systematics!
- [ ] *CREATE PLOT OF THE LOGL CUT VALUES TO ACHIEVE THE USED EFFICIENCY!*
  -> essentially take our morphed likelihood distributions and for each of those calculate the logL value for the cut value. Then create a plot of energy vs. cut value and show that. It should be a continuous function through the logL values that fixes the desired efficiency.
- [ ] *THINK ABOUT OPTIMIZING EFFICIENCY*
  -> We can think about writing a short script to optimize the efficiency.
- [ ] *AFAIR WE STILL COMPUTE LOGL CUT VALUE BASED ON BINNED HISTOGRAM. CHANGE THAT TO UNBINNED DATA*

** Artificial neural networks as cluster classifiers
:PROPERTIES:
:CUSTOM_ID: sec:background:mlp
:END:

The likelihood cut based method presented in section [[#sec:limit:method_likelihood]] works well, but does not use the full potential of the data. It mainly uses length- and eccentricity-related properties, which are hand-picked, and ignores the possible separation power of other properties. Multiple ways to use the full separation power exist. One promising approach is the use of artificial neural networks (ANNs). A multi-layer perceptron (MLP) cite:&amari67_mlp [fn:citations] is a simple supervised ANN model, which consists of an input and an output layer plus one or more fully connected hidden layers. By training such a network on the already computed geometric properties of the clusters, the computational requirements remain relatively moderate compared to approaches using -- for example -- the raw data as inputs.

As each neuron on a given layer of an MLP is fully connected to all neurons on the previous layer, the output of neuron $k$ on layer $i$ is described by
#+NAME: eq:mlp:neuron_output
\begin{equation}
y_{k,i} = φ \left( \sum_{j = 0}^m w_{kj} y_{j,i-1} \right)
\end{equation}
where $w_{kj}$ is the weight between neuron $k$ on layer $i$ and neuron $j$ of the $m$ neurons on layer $i-1$.
$φ$ is a (typically non-linear) activation function, which is applied to saturate a neuron's output. If $y_{j,i-1}$ is considered a vector for all $j$, then $w_{kj}$ can be considered a weight matrix and eq. [[eq:mlp:neuron_output]] is simply a matrix product. Each layer is computed iteratively, starting from the input layer and moving towards the output layer.

Given that an MLP is a supervised learning algorithm, the desired target output for a given input during training is known. A loss function is defined to evaluate the accuracy of the network. Many different loss functions are used in practice, but in many cases the mean squared error (MSE; the squared L2 norm scaled by $1/N$) is used as the loss
\[ l(\mathbf{y}, \mathbf{\hat{y}}) = \frac{1}{N} \sum_{i = 1}^N \left( y_i - \hat{y}_i \right)² \]
where $\mathbf{y}$ is a vector $∈ \mathbb{R}^N$ of the network outputs and $\mathbf{\hat{y}}$ the target outputs. The sum runs over all $N$ output neurons. [fn:loss_of_minibatch]

In order to train a neural network, the initially random weights must be modified. This is done by computing the gradients of the loss (given an input) with respect to all the weights of the network, $\frac{∂ l(\mathbf{y})}{∂ w_{ij}}$. Effectively, the chain rule is used to express these partial derivatives using the intermediate steps of the calculation. This leads to an iterative equation for the weights further up the network (towards the input layer). Each weight is updated from iteration $n$ to $n+1$ according to
\[ w^{n+1}_{ij} = w^n_{ij} - η \frac{∂ l(\mathbf{y})}{∂ w^n_{ij}} \]
where $η$ is the learning rate. This approach to updating the weights during training is referred to as backpropagation. cite:&rumelhart86_backprop [fn:reverse_mode_autograd]

[fn:citations] Due to the rich and long history of artificial neural networks, picking "a" citation or only a few is tricky.
Amari's work [[cite:&amari67_mlp]] was, as far as I'm aware, the first to combine a perceptron with non-linear activation functions and using gradient descent for training. See Schmidhuber's recent overview for a detailed history leading up to modern deep learning [[cite:&schmidhuber22_history]]. [fn:loss_of_minibatch] For performance reasons to utilize the parallel nature of GPUs, training and inference of NNs is done in 'mini-batches'. As still only a single loss value is needed for training, the loss is computed as the mean of all losses for each mini-batch element. [fn:reverse_mode_autograd] The gradients in a neural network are usually computed using 'automatic differentiation' (or 'autograd', 'autodiff'). There are two forms of automatic differentiation: forward mode and reverse mode. These differ by the practical evaluation order of the chain rule. Forward mode computes the chain rule from left-to-right (input to output), while reverse mode computes it from right-to-left (output to input). Computationally these differ in their complexity in terms of the required number of evaluations given N inputs and M outputs. Forward mode computes all M output derivatives for a single input, whereas reverse mode computes all input derivatives for a single output. Thus, forward mode is efficient for cases with few inputs and many outputs, while the opposite is true for reverse mode. Neural networks are classical cases of many inputs to few outputs (scalar loss function!). As such, reverse mode autograd is the standard way to compute the gradients during NN training. In the context it is effectively synonymous with 'backpropagation' (due to its output-to-input evaluation of the chain rule). 
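
To make the above concrete, the forward pass of eq. [[eq:mlp:neuron_output]], the MSE loss and a single backpropagation update of the output layer can be sketched in a few lines. The following is a minimal, illustrative NumPy example, not the training code used in this thesis; the network size, inputs and learning rate are made up:
#+begin_src python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # activation function φ, applied elementwise
    return 1.0 / (1.0 + np.exp(-z))

# Tiny MLP: 3 inputs -> 4 hidden neurons -> 2 outputs. Each layer is
# y_i = φ(W_i y_{i-1}), i.e. the neuron output equation as a matrix product.
W1 = rng.normal(scale=0.5, size=(4, 3))
W2 = rng.normal(scale=0.5, size=(2, 4))

def forward(x):
    h = sigmoid(W1 @ x)          # hidden layer
    return h, sigmoid(W2 @ h)    # output layer

def mse(y, t):
    # l(y, ŷ) = 1/N Σ (y_i - ŷ_i)²
    return np.mean((y - t) ** 2)

x = np.array([0.5, -1.0, 2.0])   # some made-up cluster 'properties'
t = np.array([1.0, 0.0])         # target output

h, y = forward(x)
loss_before = mse(y, t)

# One backpropagation step for the output layer weights:
# ∂l/∂W2 = (2/N) (y - t) φ'(z) ⊗ h, with φ' = y (1 - y) for the sigmoid.
eta = 0.1                        # learning rate η
delta = (2.0 / y.size) * (y - t) * y * (1.0 - y)
W2 -= eta * np.outer(delta, h)

_, y_after = forward(x)
loss_after = mse(y_after, t)     # reduced loss
#+end_src
The update for the hidden layer weights follows the same pattern, with the chain rule extended one layer further towards the input.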
*** TODOs for this section [/] :noexport:
*THIS WILL STILL BE A DECENTLY LARGE CHAPTER*
- super short introduction to ANNs and MLPs in particular
- including math of an MLP, effectively just simple matrix math
- introduce backpropagation, very simply, mention SGD
- show the MLP layout we use, refer to appendix / extended version for study of different activation functions, layouts etc.
- Highlight that one of the main points was to use a network that remains relatively small
- describe training input data!
  - background is fine
  - X-ray training data: simulated events!
- [ ] *THIS IS FINALLY BECOMING A PROPER SECTION!*
  -> See further above after likelihood cut method
Good references are tricky...
- [X] *CITE REFERENCE FOR MLP*
  -> History of MLPs is tricky, because it spans so many decades from original perceptrons (1958 by Frank Rosenblatt) etc.
  -> The closest reference is maybe Amari, Shun'ichi (1967). "A theory of adaptive pattern classifier". IEEE Transactions. EC (16): 279–307.
- [X] *CITE REFERENCE FOR SGD*
- [X] *CITE REFERENCE FOR BACKPROPAGATION*
- [ ] *CITE REFERENCE FOR AUTOGRAD*?
  -> Hmm, maybe?

*** MLP for CAST data
:PROPERTIES:
:CUSTOM_ID: sec:background:mlp_for_cast
:END:

The simplest approach to using a neural network for the classification of CAST-like data is an MLP that uses the pre-computed geometric properties of each cluster as an input. A choice remains between a single output neuron or two. The more classical approach, which we use, is to treat signal (X-rays) and background events as two different classes that the classifier learns to predict. By our convention the target outputs for signal and background events are
\begin{align*}
\mathbf{\hat{y}}_{\text{signal}} &= \vektor{ 1 \\ 0 } \\
\mathbf{\hat{y}}_{\text{background}} &= \vektor{ 0 \\ 1 }
\end{align*}
where the first entry of each $\mathbf{\hat{y}}$ corresponds to output neuron 1 and the second to output neuron 2.
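As a brief illustration of this convention (with a made-up network output $\mathbf{y}$, purely for demonstration), the MSE loss favors the matching target and the larger output neuron decides the predicted class:
#+begin_src python
import numpy as np

# Target outputs following the convention above.
t_signal     = np.array([1.0, 0.0])
t_background = np.array([0.0, 1.0])

# A hypothetical MLP output for one cluster (made up).
y = np.array([0.8, 0.3])

def mse(y, t):
    return np.mean((y - t) ** 2)

# Training pulls the output towards the matching target ...
loss_sig = mse(y, t_signal)      # 0.065
loss_bkg = mse(y, t_background)  # 0.565
# ... and at inference time the larger output neuron decides the class.
prediction = "signal" if y[0] > y[1] else "background"
#+end_src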
For the network to generalize to all real CAST data, the training dataset must be representative of the wide variety of the real data. For background-like clusters it can be sourced from the extensive non-tracking dataset recorded at CAST. For the signal-like X-ray data this is more problematic. The only source of X-rays with enough statistics is the $\cefe$ calibration data from CAST, but it only covers X-rays of two energies. The CAST detector lab data from the X-ray tube are both limited in statistics and suffer from systematic differences to the CAST data due to different gas gains. [fn:problem_mlp_not_lnL] We will now describe how we generate X-ray clusters for MLP training by simulating them from a /target energy/, a specific /transverse diffusion/ and a desired /gas gain/.

[fn:problem_mlp_not_lnL] The astute reader may wonder why we care less about this for the $\ln\mathcal{L}$ method. The reason is simply that the effect of the gas gain on the three properties used there is comparatively small. But if (almost) all properties are to be used, that is less true.

*** Generation of simulated X-rays as MLP training input
:PROPERTIES:
:CUSTOM_ID: sec:background:mlp:event_generation
:END:

To generate simulated events we wish to use the smallest number of inputs that still yields events representing the recorded data as well as possible. In particular, the systematic variations between different times and datasets should be reproducible based on these parameters. The idea is to generate events using the underlying gaseous detector physics (see sec. [[#sec:theory_detector]]) and to make as few heuristic modifications as possible to better match the observed (imperfect) data. This led to an algorithm which only uses three parameters: a target energy for the event (within the typical energy resolution). A gas gain to encode the gain variation seen.
A gas diffusion coefficient to encode variations in the diffusion properties (also possibly a result of changing temperatures).

In contrast to approaches that simulate the interactions at the particle level, our simulation is based entirely on the emergent properties that result from those interactions. The [[https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/fake_generator.nim][basic Monte Carlo algorithm]] will now be described as a series of steps.
1. (optional) sample from the different fluorescence lines of an element, given their relative intensities, to define a target energy $E$.
2. sample from the exponential distribution of the absorption length (see sec. [[#sec:theory:xray_matter_gas]]) for the used gas mixture and target photon energy to get the conversion point of the X-ray in the gas (note: we only sample X-rays that convert; those that would traverse the whole chamber without conversion are ignored).
3. sample a target charge for the cluster from a normal distribution centered on the charge corresponding to the target energy (the target energy inverted to a charge), with a width roughly matching the detector energy resolution.
4. sample the center position of the cluster uniformly within a radius of $\SI{4.5}{mm}$ around the chip center.
5. begin the sampling of each electron in the cluster.
6. sample a charge (in number of electrons after amplification) for the electron from a Pólya distribution of the input gas gain (and matching normalization and $θ$ constant). Reject the electron if it does not cross the activation threshold (based on real data).
7. sample a radial distance from the cluster center based on the gas diffusion constant $D_T$ and the diffusion after the remaining drift distance to the readout plane $z_{\text{drift}}$ using eq. [[eq:gas_physics:diffusion_after_drift]]. Sample twice from the 1D version using a normal distribution $\mathcal{N}(μ = 0, σ = D_T · \sqrt{z_{\text{drift}}})$, once for each dimension. Combine into a radius from the center, $r = \sqrt{x² + y²}$.
8. sample a random angle, uniform in $(0, 2π]$.
9. convert radius and angle to an $(x, y)$ position of the electron and add it to the cluster.
10. based on a linear approximation, activate between 0 and 4 neighboring pixels with a slightly reduced gas gain. We never activate any neighbors at a charge of $\SI{1000}{e⁻}$ and always activate at least one at $\SI{10000}{e⁻}$. The number of activated pixels depends on a uniform random number being below one of 4 different thresholds:
    \[ N_{\text{neighbor}} = \text{rand}(0, 1) · N < \text{threshold} \]
    where the thresholds are determined by the linear function described by the mentioned condition and $\text{rand}(0, 1)$ is a uniform random number in $(0, 1)$.
11. continue sampling electrons until the total charge adds up to the target charge. We stop at the value closest to the target (before or after adding a final electron that crosses the target charge) to avoid biasing the result towards values always larger than the target.
12. the final cluster is reconstructed just like any real cluster, using the same calibration functions as the real chip (depending on which dataset it should correspond to).

Note: Neighboring pixels are added to achieve matching eccentricity distributions between real and simulated data. Activating neighboring pixels randomly increases the local pixel density, which effectively increases the weight of some pixels, leading to a slight increase of the eccentricity of a cluster. The approach of activating up to 4 neighbor pixels and giving them slightly lower charges stems from empirically matching the simulated data to real data. From a physical perspective the most likely cause of neighboring pixels is UV photons, which are emitted in the amplification region, travel towards the grid and produce a new electron, which starts an avalanche from there.
From that point of view neighboring pixels should see the full gas amplification, and neighbors other than the direct { up, down, left, right } neighbors can be activated (following an exponential distribution due to the possible absorption of the UV photons in the gas). See Markus Gruber's master thesis cite:&markusMsc for a related study on this for GridPix detectors.

Based on the above algorithm, fake events can be generated that either match the gas gain and gas diffusion of an existing data taking run (background or calibration) or any arbitrary combination of parameters. The former is important for the verification of the validity of the generated events as well as to check the MLP cut efficiency (more on that in sec. [[#sec:background:mlp:mlp_cut_value]]). The latter is used to generate a wide variety of MLP training data. Event production is a very fast process [fn:how_fast], allowing us to produce large amounts of statistics for MLP training in a reasonable time frame.

For other applications that require higher-fidelity simulations, Degrad [[cite:&biagi1995Degrad]], a sister program to Magboltz [[cite:&biagi1995magboltz]], can be used, which simulates the particle interactions directly. This is utilized in code by M. Gruber cite:gruber_xray_sim, which uses Degrad and Garfield++ cite:garfieldpp to perform realistic event simulation for GridPix detectors (its data can also be analyzed by the software presented in sec. [[#sec:reco:tpa]] and appendix [[#sec:appendix:software]]).

[fn:how_fast] Generating $\num{100000}$ fake events of the \cefe photopeak takes around $\SI{35}{s}$ on a single thread of an AMD Ryzen 9 5950X. The code is not exactly optimized either.

**** TODOs for this section [/] :noexport:
- [ ] *CITE LINK TO IMPLEMENTATION OF FAKE GENERATION*
- [ ] *MENTION MARKUS GRIDPIX SIMULATION*
  -> slower, but more accurate.
**** Note on this section and theory :extended: The explanation above and the implementation of the fake event generation is the main motivator for a good chunk of the included gaseous detector theory fundamentals: - the X-ray matter interaction section is needed to calculate the absorption length (implemented in [[cite:&Schmidt_xrayAttenuation_2022]]). Understanding the theory is needed both to understand how to sample in the first place and how to calculate the absorption length for the gas mixture (and pressure, temperature etc) of the used detector gas - Dalton's law is crucial to correctly take into account the behavior of the gas mixtures and how the percentages given for gas mixtures are applied to the calculation - Diffusion, how it relates to a random walk, its dimensionality dependence and interpretation as sampling from a normal distribution for the resulting diffusion after a certain drift is required for the sampling of each electron. - the discussion of the \cefe spectrum is required to understand that an escape photon is not equivalent to a real $\SI{3}{keV}$ X-ray (more on that in the next sections) - gas gain understanding is (here also) needed to sample the gain for each electron **** Generate some example and comparison plots [/] :extended: :PROPERTIES: :CUSTOM_ID: sec:background:gen_plots_fake_event_comparisons :END: - [X] Comparison of the different properties as a ridgeline plot? -> similar to plots in appendix [[#sec:appendix:fit_by_run:gas_gain_var_cluster_prop]] like [[~/phd/Figs/CDL/C-EPIC-0.6kV_ridgeline_kde_by_run.pdf]] Let's generate plots comparing the properties used by the MLP of generated events with those of different runs. The idea is to generate fake data based on the gas properties of each run, i.e. gas gain, gas diffusion and the target energy of the X-ray fluorescence line we want (i.e. the Kα line for $\ce{Mn}$ in case of the \cefe source or each target in the X-ray tube data). 
We'll generate a ridgeline plot of all these properties, normalized to $x/\text{max}(\{x\})$ where $\{x\}$ is the set of all values in the run, to get values between 0 and 1 for all properties such that they can be compared in a single plot. Each ridge will use a KDE-based density to best highlight any differences in the underlying distribution.

We will first use a tool to generate HDF5 files of simulated events following a real run. For one run, e.g. CAST calibration run 241, we do this by:
#+begin_src sh
fake_event_generator \
    like \
    -p ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --run 241 \
    --outpath /tmp/test_fakegen_run241.h5 \
    --outRun 241 \
    --tfKind Mn-Cr-12kV \
    --nmc 50000
#+end_src
where we specify that we want X-rays like those of the 55Fe source (via ~tfKind~) and want to simulate 50k X-rays. Then we can compare the properties using ~plotDatasetGgplot~, which can be found in [[file:~/CastData/ExternCode/TimepixAnalysis/Plotting/plotDsetGgplot/plotDatasetGgplot.nim]]:
#+begin_src sh
plotDatasetGgplot \
    -f ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    -f /tmp/test_fakegen_run241.h5 \
    --run 241 \
    --plotPath ~/phd/Figs/fakeEventSimulation/runComparisons
#+end_src
Let's automate this quickly for all \cefe calibration runs.
#+begin_src nim :tangle code/generate_all_run_fake_data_plots.nim import shell, strutils, sequtils import nimhdf5 import ingrid / [ingrid_types, tos_helpers] const filePath = "~/CastData/data/CalibrationRuns$#_Reco.h5" const genData = """ fake_event_generator \ like \ -p $file \ --run $run \ --outpath $fakeFile \ --outRun $run \ --tfKind Mn-Cr-12kV \ --nmc 50000 """ const plotData = """ plotDatasetGgplot \ -f $file \ -f $fakeFile \ --names 55Fe --names Simulation \ --run $run \ --plotPath /home/basti/phd/Figs/fakeEventSimulation/runComparisons/ \ --prefix ingrid_properties_run_$run \ --suffix ", run $run" """ proc main(generate = false, plot = false, fakeFile = "~/org/resources/fake_events_for_runs.h5") = const years = [2017, 2018] for year in years: #if year == "2017": continue ## skip for now, already done let file = filePath % [$year] var runs = newSeq[int]() withH5(file, "r"): let fileInfo = getFileInfo(h5f) runs = fileInfo.runs for run in runs: echo "Working on run: ", run if generate: let genCmd = genData % ["file", file, "run", $run, "fakeFile", fakeFile] shell: ($genCmd) if plot: let plotCmd = plotData % ["file", file, "fakeFile", fakeFile, "run", $run, "run", $run, "run", $run] shell: ($plotCmd) when isMainModule: import cligen dispatch main #+end_src which yields all the plots in: [[file:Figs/fakeEventSimulation/runComparisons/]] - [ ] *RERUN WITH CSV OUTPUT!* First we generate the HDF5 file containing all the fake data: #+begin_src sh :dir ~/phd/ ntangle thesis.org && nim c code/generate_all_run_fake_data_plots code/generate_all_run_fake_data_plots --generate #+end_src and then we can create all the plots. In the section below we will include the plot for run 241. #+begin_src sh :dir ~/phd/ code/generate_all_run_fake_data_plots --plot #+end_src *Note*: The ~plotDatasetGgplot~ program already uses ~themeLatex~ to get a pretty version of the plot. For that reason no further modification is needed to get pretty plots. 
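The per-run normalization $x/\text{max}(\{x\})$ used for the ridgeline comparison above can be sketched as follows (the property names and values here are made up, purely for illustration):
#+begin_src python
import numpy as np

# Hypothetical property values for one run (illustrative only).
props = {
    "eccentricity":  np.array([1.10, 1.30, 2.00, 1.05]),
    "rmsTransverse": np.array([0.80, 1.10, 1.00, 0.90]),
}
# Normalize each property to x / max({x}) so all ridges share (0, 1].
normalized = {k: v / v.max() for k, v in props.items()}
#+end_src
This makes distributions of otherwise incompatible scales comparable within a single ridgeline plot.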
*** Determination of gas diffusion
:PROPERTIES:
:CUSTOM_ID: sec:background:mlp:determine_gas_diffusion
:END:

To generate X-ray events with physical properties similar to those seen in a particular data taking run, we need the three required inputs: gas gain, gas diffusion and target energy. The determination of the gas gain is a well defined procedure, explained in sections [[#sec:daq:polya_distribution]] and [[#sec:calib:gas_gain_time_binning]]. The target energy depends on the purpose the generated data serves. The gas diffusion, however, is more complicated and requires explanation.

In theory the gas diffusion is a fixed parameter for a specific gas mixture at fixed temperature, pressure and electromagnetic fields, and can be computed using Monte Carlo tools as mentioned in sec. [[#sec:theory:gas_diffusion]]. In practice, however, MC tools suffer from significant uncertainty (in particular in the form of run-by-run RNG variation), especially at common numbers of MC samples. More importantly, real detectors do not have perfectly stable and known temperatures, pressures and electromagnetic fields (for example, the inhomogeneity of the drift field in the 7-GridPix detector is not perfectly known), leading to significant differences and variations from theoretical numbers. Fortunately, similar to the gas gain, effective numbers for the gas diffusion can be extracted from real data.

The gas diffusion constant $D_T$ (introduced in sec. [[#sec:theory:gas_diffusion]]) describes the standard deviation of the transverse distance from the initial center after drifting a distance $z$ along a homogeneous electric field. This parameter corresponds to one of our geometric properties, the transverse RMS '\rmst' (which is actually the transverse standard deviation of the electron positions [fn:rms_symbol]), and thus it can be used to determine the gas diffusion coefficient.
[fn:longitudinal_rms] As it is a statistical process and $D_T$ is the standard deviation of the population, a large ensemble of events is needed. This is possible both for calibration data as well as background data. In both cases it is important to apply minor cuts to remove the majority of events in which \rmst contains contributions that are not due to gas diffusion (e.g. a double X-ray event which was not separated by the cluster finder). For calibration data the cuts should filter to single X-rays, whereas for background events clean muon tracks are the target.

In theory one could determine the coefficient by way of a fit to the \rmst distribution of all clusters in a data taking run, as was done in cite:krieger2018energy. This approach is problematic in practice due to the large variation in possible X-ray conversion points -- and therefore drift distances -- as a function of energy. Different energies lead to different convolutions of exponential distributions with the gas diffusion. This means a single fit does not describe the data well in general, and the upper limit is _not_ a good estimator for the real gas diffusion coefficient (because it is due to those X-rays which convert directly behind the detector window by chance and/or undergo a statistically large amount of diffusion).

Instead of performing an analytical convolution of the exponential distribution and the gas diffusion distributions to determine the correct fit, a Monte Carlo approach is used here as well. The idea is to use a reasonable [fn:reasonable] gas diffusion constant to generate (simplified [fn:simplified_events]) fake events, compute the \rmst distribution of this fake data and compare it to the real \rmst distribution of the target run via a test statistic. Next, one computes the derivative of the test statistic as a form of loss function and adjusts the diffusion coefficient accordingly.
Over a number of iterations the distribution of the simulated \rmst values will converge to the real \rmst distribution, if the choice of test statistic is suitable. In other words, we use gradient descent to find the diffusion constant $D_T$ which best reproduces the \rmst distribution of our target run. One important point with regard to computing it for background data: when generating fake events for the background dataset, the equivalent 'conversion point' for muons is a uniform distribution over all distances from the detector window to the readout. For X-rays it is given by the energy dependent attenuation length in the detector gas.

The test statistic chosen in practice for this purpose is the Cramér-von-Mises (CvM) criterion, defined by cite:cramer28_gof,mises36_gof:
\[ ω² = ∫_{-∞}^∞ \left[ F_n(x) - F^*(x) \right]² \, \mathrm{d}F^*(x) \]
where $F_n(x)$ is the empirical distribution function to test and $F^*(x)$ a cumulative distribution function to compare with. For the two-sample case it can be computed as cite:&anderson62_cvm_gof:
\[ T = \frac{N M}{N + M} ω² = \frac{U}{N M (N + M)} - \frac{4 M N - 1}{6(M + N)} \]
with
\[ U = N \sum_{i=1}^N \left( r_i - i \right)² + M \sum_{j = 1}^M \left( s_j - j \right)² \]
where $r_i$ and $s_j$ are the ranks of the two samples (of sizes $N$ and $M$) in the pooled, sorted sample. In contrast to -- for example -- the Kolmogorov-Smirnov test, it incorporates the entire (E)CDF into the statistic instead of just the largest deviation, a useful property to protect against outliers.

The iterative optimization process is therefore
\[ D_{T,i+1} = D_{T,i} - η \frac{∂ f(D_T)}{∂ D_T} \]
where $f(D_T)$ is the Monte Carlo algorithm, which computes the test statistic for a given $D_T$, and $η$ is the step size along the gradient (a 'learning rate'). The derivative is computed using finite differences. [fn:derivatives_optimization]

Fig. [[fig:background:mlp:gas_diffusion_values]] shows the values of the transverse gas diffusion constant $D_T$ as determined for all runs.
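For reference, the two-sample statistic $T$ can be computed directly from the ranks of the two samples in the pooled, sorted data. A small NumPy sketch (assuming samples without ties; illustrative, not the implementation used here):
#+begin_src python
import numpy as np

def cvm_two_sample(x, y):
    """Two-sample Cramér-von-Mises statistic T following Anderson (1962)."""
    N, M = len(x), len(y)
    pooled = np.concatenate([x, y])
    ranks = np.empty(N + M, dtype=int)
    ranks[np.argsort(pooled)] = np.arange(1, N + M + 1)
    r = np.sort(ranks[:N])   # ranks r_i of the first sample
    s = np.sort(ranks[N:])   # ranks s_j of the second sample
    U = N * np.sum((r - np.arange(1, N + 1)) ** 2) \
      + M * np.sum((s - np.arange(1, M + 1)) ** 2)
    return U / (N * M * (N + M)) - (4 * M * N - 1) / (6 * (M + N))

rng = np.random.default_rng(42)
a = rng.normal(0.0, 1.0, 500)
b = rng.normal(0.0, 1.0, 500)   # same distribution as `a`: small T
c = rng.normal(1.0, 1.0, 500)   # shifted distribution: large T

t_same    = cvm_two_sample(a, b)
t_shifted = cvm_two_sample(a, c)
#+end_src
Identically distributed samples yield a small $T$, while a shifted sample yields a much larger value; this sensitivity across the entire ECDF is exactly what the gradient descent on $D_T$ exploits.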
The color scale is the value of the Cramér-von-Mises test criterion and indicates a measure of the uncertainty of the obtained parameter. We can see that for the CDL datasets (runs above 305) the uncertainty is larger. This is because reproducing the \rmst distribution for these datasets is more problematic than for the CAST \cefe datasets (due to more data impurity caused by X-ray backgrounds of energies other than the target fluorescence line). Generally we can see variation in the range from about $\SIrange{620}{660}{μm.cm^{-1/2}}$ for the CAST data and larger variation for the CDL data. The theoretical value we expect is about $\SI{670}{μm.cm^{-1/2}}$, implying the approach yields reasonable values. #+CAPTION: Transverse gas diffusion constant $D_T$ as determined based on an #+CAPTION: iterative method attempting to reproduce the transverse RMS distribution using simulated X-ray events, #+CAPTION: the Cramér-von-Mises (CvM) test criterion and gradient descent. All runs above run 305 #+CAPTION: are CDL calibration runs. The color scale is the CvM test score. #+CAPTION: Background runs are marked by crosses. #+NAME: fig:background:mlp:gas_diffusion_values [[~/phd/Figs/determineDiffusion/σT_per_run.pdf]] Because this iterative calculation is not computationally negligible, the resulting $D_T$ parameters for each run are cached in an HDF5 file. [fn:determine_D_T_time] [fn:rms_symbol] $σ_T$ would be more appropriate following eq. [[eq:gas_physics:diffusion_after_drift]] in sec. [[#sec:theory:gas_diffusion]]. [fn:reasonable] For example a number provided by Magboltz for the used gas mixture at normal temperature and used chamber pressure. For the CAST gas mixture and settings we start with $D_T = \SI{660}{μm.cm^{-1/2}}$. [fn:longitudinal_rms] Note that it is important to consider the _transverse_ standard deviation and not the longitudinal one. 
Even for an X-ray there will be a short track-like signature during the production of the primary electrons from the initial photoelectron. This can define the long axis of the cluster in certain cases (depending on the gas mixture). Similarly, for background events of cosmic muons the long axis corresponds to the direction of travel, while the transverse direction is purely due to the diffusion process.

[fn:simplified_events] In principle the event generation is the same algorithm as explained previously, but it does not sample from a Pólya for the gas gain, nor compute any cluster properties. Only the conversion point, the number of electrons based on the target energy and their positions are simulated. From these electron positions the long and short axes are determined and $σ_T$ returned.

[fn:derivatives_optimization] Initially I tried to compute the gradient using automatic differentiation with dual numbers for the $D_T$ input and the MC code, but the calculation of the test statistic is effectively independent of the input numbers due to the calculation of the ECDF. As such the derivative information is lost. I did not manage to find a way to compute Cramér-von-Mises based directly on the actual input numbers in a timely manner.

[fn:determine_D_T_time] One optimization takes on the order of $<\SI{30}{s}$ for most runs. But the diffusion parameters are used many times.

**** TODOs for this section [/] :noexport:
- [X] *REWRITE. WE ACTUALLY RETURN $σ_T$ AND NOT $D_T$!*
  -> No. We *do* return $D_T$. We just call the field of ~FakeDesc~ ~σ_T~, which is a bit confusing in its own right. But that parameter is the one going into the gaussian call with ~fakeDesc.σT * sqrt(zDist)~!
- [ ] *RENAME σT FIELD OF FAKEDESC*
- [X] *GENERALLY CLARIFY BETTER DISTINCTION BETWEEN PARAMETER $σ_T$ AND OUR 'transverse RMS' NUMBER*
  -> Done that now by introducing an \rmst macro that expands to $\text{RMS}_T$
- [ ] *INTRODUCE PLOT* showing the simulated vs real data after CvM optimization?
  -> For the main thesis I don't think this is important enough. We should mention that all the plots for the 'best cases' are generated during the optimization process.
- [ ] *INSERT PLOT OF RUN 169 INTO APPENDIX*
- [ ] *IN EXTENDED SECTION SHOW THAT THE DERIVED VALUE FROM THE FIT DOES NOT MATCH THE ITERATIVE VALUE*
**** Cramér-von-Mises references [/] :noexport:
- [X] *CITE CRAMER-VON-MISES*
  - https://doi.org/10.1080%2F03461238.1928.10416862 Cramér, H. (1928). "On the Composition of Elementary Errors". Scandinavian Actuarial Journal. 1928 (1): 13–74
  - von Mises, R. E. (1928). Wahrscheinlichkeit, Statistik und Wahrheit. Julius Springer.
- [X] *CITE TWO SAMPLE FORM*
  - https://doi.org/10.1214%2Faoms%2F1177704477 Anderson, T. W. (1962). "On the Distribution of the Two-Sample Cramer–von Mises Criterion". Annals of Mathematical Statistics. Institute of Mathematical Statistics. 33 (3): 1148–1159.
**** Note on handling background runs :extended:
When determining the diffusion constant for background runs, the value we actually determine is about $\SI{40}{μm.cm^{-1/2}}$ too large when compared to \cefe calibration runs of the same time period (and thus likely similar parameters). For this reason we simply subtract this empirical difference as a constant offset. This is already done in the plot of fig. [[fig:background:mlp:gas_diffusion_values]]. The reason is almost certainly that the background data still contains events that are not well suited for extracting the transverse RMS. But because it is a stable offset, we simply subtract it globally for all background-deduced values.
This guarantees that the cut values we actually produce for the MLP match those that would be valid for equivalent real X-ray data. If we did not do this, we would likely get completely wrong cut values. The idea is only to have a reference for the diffusion in the background runs, which seems to work correctly for its purpose. Fig. [[fig:background:diffusion_per_run_raw_background]] shows the diffusion without the offset (this plot is from development). As one can see, the offset is very stable and thus removing it manually should be fine.
#+CAPTION: Development figure of the diffusion constant $D_T$ (disregard the usage of $σ_T$ in the figure)
#+CAPTION: which shows the raw background values.
#+NAME: fig:background:diffusion_per_run_raw_background
[[~/phd/Figs/gasGainAndNeighbors/σT_per_run_more_precise.pdf]]
**** Generate plot of diffusion constants for all runs [/] :extended:
The ~TimepixAnalysis/Tools/determineDiffusion~ module can be compiled as a standalone program and then run on an input HDF5 file. It will perform the gradient descent optimization for every run in the file and then create the figure of the section above.
#+begin_src sh :dir ~/CastData/ExternCode/TimepixAnalysis/Tools/determineDiffusion/
WRITE_PLOT_CSV=true ./determineDiffusion \
    ~/CastData/data/DataRuns2017_Reco.h5 \
    ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    ~/CastData/data/DataRuns2018_Reco.h5 \
    ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
    --plotPath ~/phd/Figs/determineDiffusion/ \
    --histoPlotPath ~/phd/Figs/determineDiffusion/histograms \
    --useTeX
#+end_src
- [X] *RERUN THIS WITH TEX AND CSV FILES!*
- [ ] *RERUN WITH TEX AND CSV FOR SIMULATION ITSELF* i.e. delete the cache tables before rerunning with ~WRITE_PLOT_CSV=true~
- [ ] *CHECK Ag-Ag TARGET* Run 351 and 329
All the histograms created during the optimization process are stored in the ~histogram~ subdirectory.
It is one histogram with the \rmst distribution and a fit similar to [[cite:&krieger2018energy]] and (possibly multiple) comparison plots of the real and simulated data. See also sec. [[#sec:background:mlp:gen_effective_efficiency_plots]].
*** Comparison of simulated events and real data
Fig. [[fig:background:mlp:ridgeline_simulated_vs_real_run241]] shows the comparison of all cluster properties of simulated and real data for one \cefe calibration run, number 241. The data is cut to the photopeak and normalized to fit into a single plot. Each ridge shows a kernel density estimation (KDE) of the data. Gas gain and gas diffusion are extracted from the data as explained in the previous sections. All distributions except the number of hits agree extremely well. The number of hits is not expected to match perfectly. Only the total charge, and not the number of hits, is used as an input to the MLP, and it matches well on its own. The reason is the neighboring activation logic in the MC algorithm, which is empirically modified such that it produces correct _geometric_ behavior at the cost of slightly over- or underestimating the number of hits. The Pólya sampling and general neighboring logic is too simplistic to properly reproduce both aspects at the same time. Other runs show similar agreement. Plots like this for all runs can be found in the extended version of the thesis.
#+CAPTION: Comparison of all geometric properties of simulated and real data of the
#+CAPTION: \cefe photopeak clusters of run 241 in a ridgeline plot. Each ridge shows
#+CAPTION: a KDE of the data. The gas gain and gas diffusion were
#+CAPTION: first extracted from the real run data to simulate events of the photopeak.
#+CAPTION: The two datasets agree very well with the exception of the number of hits (expected).
#+NAME: fig:background:mlp:ridgeline_simulated_vs_real_run241 [[~/phd/Figs/fakeEventSimulation/runComparisons/ingrid_properties_run_241_ridgeline_kde_by_run.pdf]] **** TODOs for this section :noexport: - [X] Show ridgeline plot of comparison of simulated and real for a given run - [ ] Note that in both cases we filter to the main peak, photopeak in \cefe ! - [X] *DECIDE WHERE TO SHOW ALL PLOTS. APPENDIX IS TOO LONG!* **** Plots comparing simulated and real data :extended: The figure shown in the main body and all other plots for all runs are produced in sec. [[#sec:background:gen_plots_fake_event_comparisons]]. As it would also blow up the length of the extended thesis unnecessarily, all the figures are simply found in: [[file:Figs/fakeEventSimulation/runComparisons/]] *** Overview of the best performing MLP As a reference [fn:reference] an MLP was trained on a mix of simulated X-rays and a subset of the CAST background data. The implementation was done using [[https://github.com/scinim/flambeau][Flambeau]] [[cite:&flambeau]], a wrapper for [[https://pytorch.org/][libtorch]] [[cite:&Paszke_PyTorch_An_Imperative_2019]] [fn:libtorch], using the parameters and inputs as shown in tab. [[tab:background:best_mlp_overview]]. The network layout consists of $\num{14}$ input neurons, two hidden layers of only $\num{30}$ neurons [fn:neurons] each and $\num{2}$ neurons on the output layer. We use the Adam cite:&kingma14_adam optimizer and $\tanh$ as an activation function on both hidden layers, with $\text{sigmoid}$ used on the output layer. The mean squared error between output and target vectors is used as the loss function. #+CAPTION: Overview of all MLP parameters and training information. 
#+NAME: tab:background:best_mlp_overview
#+ATTR_LATEX: :booktabs t
| Property                         | Value                                                |
|----------------------------------+------------------------------------------------------|
| Input neurons                    | 14 (12 geometric, 2 non-geometric)                   |
| Hidden layers                    | 2                                                    |
| Neurons on hidden layer          | 30                                                   |
| Output neurons                   | 2                                                    |
| Activation function              | $\tanh$                                              |
| Output layer activation function | $\text{sigmoid}$                                     |
| Loss function                    | Mean Squared Error (MSE)                             |
| Optimizer                        | Adam [[cite:&kingma14_adam]]                         |
| Learning rate                    | $\num{7e-4}$                                         |
| Batch size                       | 8192                                                 |
| Training data                    | $\num{250 000}$ simulated X-rays and $\num{288 000}$ |
|                                  | real background events selected from the             |
|                                  | *outer chips only*, same number for validation       |
| Training epochs                  | $\num{82 000}$                                       |

The $\num{14}$ input neurons were fed with all geometric properties that neither scale directly with the energy (hence no number of active pixels or the energy itself) nor depend on the position on the chip (as the training data is skewed towards the center), with the total charge as the one exception. For the background data we avoid 'contaminating' the training dataset with CAST clusters that will end up as part of our background rate for the limit calculation, by only selecting clusters from chips other than the center chip. This way the MLP is trained entirely on data independent of the CAST data of interest. $\num{250 000}$ synthetic X-rays and $\num{288 000}$ background clusters are used for training and the same numbers for validation [fn:number_of_events]. The first $\num{25000}$ epochs are trained on a synthetic X-ray dataset with a strong bias towards low energy X-rays (clusters in the range from $\SIrange{0}{3}{keV}$ with the frequency dropping linearly with increasing energy), because these are more difficult to separate from background. Afterwards we switch to a separate set of synthetic X-rays uniformly distributed in energy.
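The linearly dropping spectrum of the first training stage can be generated by inverse transform sampling. As a short Python sketch (the function name and interface are hypothetical; the pdf is assumed to be $p(E) \propto 1 - E/E_{\text{max}}$ on $[0, 3]\,\si{keV}$, matching 'dropping linearly', though the exact normalization used by the real generator is not specified here):

#+begin_src python
import numpy as np

def sample_biased_energies(n, e_max=3.0, rng=None):
    """Sample energies in [0, e_max] keV with probability density
    dropping linearly to zero at e_max: p(E) ∝ (1 - E/e_max).
    CDF: F(E) = 1 - (1 - E/e_max)², inverted to E = e_max (1 - sqrt(1 - u))."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(0.0, 1.0, n)
    return e_max * (1.0 - np.sqrt(1.0 - u))
#+end_src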
Without this approach the network can drift towards a local minimum, in which low energy X-rays are classified as background, while still achieving good accuracy. At the cost of low energy X-rays, low energy background clusters are almost perfectly rejected. This is of course undesirable. During the training process of $\num{82 000}$ epochs, every $\num{1000}$ epochs the accuracy and loss are evaluated based on the test sample and a model checkpoint is stored. The evolution of the loss function for this network is shown in fig. sref:fig:background:mlp:training_loss. We can see that performance for the test dataset is mostly stagnant from around epoch $\num{10 000}$. The network develops minor overtraining, but this is not important, because no training data is ever used in practice. Fig. sref:fig:background:mlp:validation_output shows the output of neuron 0 (neuron 1 is a mirror image) for the test sample, with signal like data towards 1 and background like data towards 0. The separation is almost perfect, but a small amount of each type can be seen over the entire range. This is expected due to the presence of real X-rays in the background dataset, low energy events generally being similar and statistical outliers in the X-ray dataset. #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Training loss") (label "fig:background:mlp:training_loss") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/neuralNetworks/17_11_23_adam_tanh30_sigmoid_mse_3keV/loss.pdf")) (subfigure (linewidth 0.5) (caption "MLP prediction") (label "fig:background:mlp:validation_output") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/neuralNetworks/17_11_23_adam_tanh30_sigmoid_mse_3keV/test_validation_log10.pdf")) (caption (subref "fig:background:mlp:training_loss") " Loss over the training progress, evaluated every " ($ "\\num{1000}") " epochs. Test data loss mostly stagnant after " ($ "\\num{10000}") " epochs. 
" (subref "fig:background:mlp:validation_output") " Output of the validation data sample for neuron 0 as a " ($ "\\log") " plot.") (label "fig:background:mlp:training_plots")) #+end_src Beyond the MLP prediction to indicate a measure of classification efficiency, we can look at the receiver operating characteristic (ROC) curves for different X-ray reference datasets of the CDL data. For this we read all real center GridPix background data and all CDL data. Both of these are split into the corresponding target/filter combinations based on the energy of each cluster. We can then define the background rejection $b_{\text{rej}}$ for different cut values $c_i$ as \[ b_{\text{rej}} = \frac{\text{card}(\{ b < c_i\})}{\text{card}(\{ b \})}, \] where $b$ are all the predictions of the MLP for background data and $\text{card}$ is the cardinality of the set of numbers, i.e. the number of entries. $i$ refers to the $i^{\text{th}}$ cut value (as this is computed with discrete bins). In addition we define the signal efficiency by \[ s_{\text{eff}} = \frac{\text{card}(\{ s \geq c_i\})}{\text{card}(\{ s \})}, \] where similarly $s$ are the MLP predictions for X-ray clusters. If we plot the pairs of signal efficiency and background rejection values (one for each cut value), we produce a ROC curve. Done for each CDL target/filter combination and similarly for the likelihood cut by replacing the MLP prediction with the likelihood value of each cluster, we obtain fig. [[fig:roc_curves_logl_mlp]]. The line style indicates the classifier method ($\ln\mathcal{L}$ or MLP) and the color corresponds to the different targets and thus different fluorescence lines. The improvement in background rejection at a fixed signal efficiency exceeds $\SI{10}{\percent}$ at multiple energies. The only region where the MLP is worse than the likelihood cut is for the Cu-EPIC $\SI{0.9}{kV}$ target/filter combination at signal efficiencies above about $\SI{92}{\%}$. 
In particular, the shape of the ROC curve for the MLP at high signal efficiencies implies that it should produce background rates comparable to the likelihood cut at higher signal efficiencies.
#+CAPTION: ROC curves for the comparison of the likelihood cut method (solid lines) to the
#+CAPTION: MLP predictions (dashed lines), colored by the different targets used to generate the
#+CAPTION: reference datasets. The background data used for each target corresponds to background
#+CAPTION: clusters in an energy range around the fluorescence line.
#+CAPTION: The MLP visibly outperforms the likelihood cut method at all energies. At the same
#+CAPTION: signal efficiency (x axis) a significantly higher background rejection is achieved.
#+NAME: fig:roc_curves_logl_mlp
[[~/phd/Figs/neuralNetworks/17_11_23_adam_tanh30_sigmoid_mse_3keV/roc_curve_combined_split_by_target.pdf]]

[fn:reference] This network is considered a "reference" implementation, as it is essentially the simplest ANN layout possible.

[fn:libtorch] ~libtorch~ is the C++ library PyTorch is built on.

[fn:neurons] Yes, only $\num{30}$ neurons. When using SGD as an optimizer more neurons are useful. With Adam anything more leads to excessive overtraining. Model performance and training time are better with Adam anyway. Indeed, even a $\num{10}$ neuron model performs almost as well. The latter is fun, because it allows one to write the weight matrices on a piece of paper and almost do a forward pass by hand.

[fn:number_of_events] For the training X-rays a total of $\num{500 000}$ events are simulated. For the background, $\num{1 000}$ events are randomly selected from each outer chip for each background run (so $\num{6 000}$ clusters per background run), totaling $\num{576 000}$ events. Training and validation are split 50-50.
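The two definitions above translate directly into code. A simple Python sketch of how the ROC pairs are obtained (the linear grid of cut values is an assumption for illustration; the thesis code computes this in the ~targetSpecificRocCurve~ procedure):

#+begin_src python
import numpy as np

def roc_pairs(signal_pred, background_pred, n_cuts=101):
    """Signal efficiency and background rejection for a linear grid of
    cut values on the classifier output (signal-like towards 1)."""
    s = np.asarray(signal_pred)
    b = np.asarray(background_pred)
    cuts = np.linspace(0.0, 1.0, n_cuts)
    s_eff = np.array([np.count_nonzero(s >= c) / s.size for c in cuts])
    b_rej = np.array([np.count_nonzero(b < c) / b.size for c in cuts])
    return s_eff, b_rej  # plotting b_rej against s_eff traces out the ROC curve
#+end_src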
**** TODOs for this section [/] :noexport: - [ ] CNN as an alternative idea, add footnote and show github link to our work, say this has been looked into already during MSc thesis - [X] *Verify NUMBER OF USED INPUTS* - [X] *VERIFY NUMBER OF TRAINING EVENTS* - [X] *REDO PREDICTION PLOT AS A LOG Y PLOT!* - [X] References for SGD and ReLU -> We don't actually use ReLU in the final network. -> We use Adam & tanh Parameters of old SGD model: #+begin_quote - Gradient algorithm: Stochastic Gradient Descent (SGD) - Momentum: 0.2 #+end_quote Old loss explanation for SGD model: #+begin_quote As desired the loss decreases during the training, but spikes somewhere between $\num{485 000}$ and $\num{490 000}$ epochs. It does recover, but does not reach the minima found earlier. The accuracy also decreases slightly, but not to the extent potentially expected based on the loss. Regardless, the checkpoint after $\num{485 000}$ epochs is used as the final MLP model. #+end_quote Old paragraph about MLP replacement for LnL: #+begin_quote The usage of such neural networks is plainly a replacement for the likelihood cut method used to detect the initial cluster candidates and during classification of the septem/line veto. It does not replace any of the vetoes, as these include information outside the center chip, which is not available for the neural network (training and validation becomes much more complicated if all chips were to be used, due to the random coincidence problem mentioned in a previous section). #+end_quote **** Training code :extended: The code performing the training is found [[https://github.com/Vindaar/TimepixAnalysis/blob/master/Tools/NN_playground/train_ingrid.nim][here]]. **** Thoughts on number of background training events :noexport: The total number of background events is almost a third of all background clusters on the center chip. Hmm. We should probably select 1000 from each chip instead of 6000 from the center chip only. 
Even better: use _only_ outer chip data. -> This is what we ended up doing in the end!
**** More thoughts on different types of MLPs :extended:
As mentioned in the footnote of the main section above, we trained different networks. Any network trained with stochastic gradient descent (SGD) tends to take much longer to train and generally struggles more to find the best parameters. Initially I focused on SGD networks, due to the extreme overtraining found with Adam networks. It was only when I tried ridiculously small networks that Adam ended up being very useful after all. The main reason I prefer the 30 neuron network over the mentioned 10 neuron network is that it tends to generalize a little better between different runs (i.e. gas gain and diffusion parameters) of X-rays of similar energies. What this means is that the effective efficiency is a bit more stable, in particular for the CDL data (where we know the data differs more between runs and compared to our synthetic data). Next, we use a sigmoid function on the output layer and then simply the mean squared error as the loss function. The advantage of the sigmoid on the output is that the neuron predictions are effectively clamped to $(0, 1)$. This is useful, because otherwise there is always a chance the network will produce different 'distributions' in the output prediction, such that inputs with differing parameters produce strictly different outputs in different ranges. This is not necessarily a problem (it can even be useful), but it can become one if the synthetic dataset from which a cut value is computed ends up (partially) in a different output distribution than the target data (e.g. \cefe calibration data). It tends to exacerbate potential differences between synthetic and real data, leading to larger differences between the target and real efficiencies. Also, a nice aspect of the models being so tiny is that it is much easier to look into the model.
In particular the mentioned 10 neuron model (2 layers of 10 neurons each) can easily be represented as a set of 2 matrices and 2 bias vectors, small enough to look at by eye. With the 30 neuron model we mostly use in the end, this starts to be on the annoying and opaque side, of course. Still much better than a 300x300 matrix!
**** Network validation :extended:
In order to validate the usability and performance of any MLP we train, we look at the following things (besides loss, accuracy and train / test predictions):
1. the direct MLP prediction values for all the different CDL runs. This is useful to check that all data, independent of energy and gas properties, is classified as X-rays (as mentioned in the text, low energy X-rays can end up in the background class).
2. the effective efficiencies for each calibration and CDL run, as explained in sec. [[#sec:background:mlp:mlp_cut_value]] and [[#sec:background:mlp:effective_efficiency]]. The real efficiency should be as close to the target efficiency as possible. Otherwise the network does not generalize from synthetic data to the real data as it should.
3. the ROC curve split by each CDL fluorescence line. Ideally it should be higher than the likelihood cut ROC curve for all lines and in all relevant signal efficiency regions. If this is not the case, the network is likely not going to outperform the likelihood cut method.
All the plots for the MLP we use can be found in XXXXXXXXX
**** Generate plots of the loss, accuracy and output [/] :extended:
The plots we care about in the context of the thesis are:
- validation output as log10 plot <- ~train_ingrid~
- loss <- ~train_ingrid~
- ROC curve <- ~train_ingrid~
The ROC curve is generated by ~train_ingrid~ when passing the ~--predict~ argument. The first two are either generated during training or when using the ~--skipTraining~ argument without ~--predict~ (due to different data requirements).
#+begin_src sh :dir ~/CastData/ExternCode/TimepixAnalysis/Tools/NN_playground/ WRITE_PLOT_CSV=true USE_TEX=true ./train_ingrid \ ~/CastData/data/CalibrationRuns2017_Reco.h5 \ ~/CastData/data/CalibrationRuns2018_Reco.h5 \ --back ~/CastData/data/DataRuns2017_Reco.h5 \ --back ~/CastData/data/DataRuns2018_Reco.h5 \ --simFiles ~/CastData/data/FakeData/fakeData_500k_uniform_energy_0_10_keV.h5 \ --model ~/org/resources/nn_devel_mixing/17_11_23_adam_tanh30_sigmoid_mse_3keV/mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.pt \ --plotPath ~/phd/Figs/neuralNetworks/17_11_23_adam_tanh30_sigmoid_mse_3keV/ \ --datasets eccentricity \ --datasets skewnessLongitudinal \ --datasets skewnessTransverse \ --datasets kurtosisLongitudinal \ --datasets kurtosisTransverse \ --datasets length \ --datasets width \ --datasets rmsLongitudinal \ --datasets rmsTransverse \ --datasets lengthDivRmsTrans \ --datasets rotationAngle \ --datasets fractionInTransverseRms \ --datasets totalCharge \ --datasets σT \ --numHidden 30 \ --numHidden 30 \ --activation tanh \ --outputActivation sigmoid \ --lossFunction MSE \ --optimizer Adam \ --learningRate 7e-4 \ --simulatedData \ --backgroundRegion crAll \ --nFake 250_000 \ --backgroundChips 0 \ --backgroundChips 1 \ --backgroundChips 2 \ --backgroundChips 4 \ --backgroundChips 5 \ --backgroundChips 6 \ --clamp 50 \ --predict #+end_src #+begin_src sh :dir ~/CastData/ExternCode/TimepixAnalysis/Tools/NN_playground/ WRITE_PLOT_CSV=true USE_TEX=true ./train_ingrid \ ~/CastData/data/CalibrationRuns2017_Reco.h5 \ ~/CastData/data/CalibrationRuns2018_Reco.h5 \ --back ~/CastData/data/DataRuns2017_Reco.h5 \ --back ~/CastData/data/DataRuns2018_Reco.h5 \ --simFiles ~/CastData/data/FakeData/fakeData_500k_uniform_energy_0_10_keV.h5 \ --model ~/org/resources/nn_devel_mixing/17_11_23_adam_tanh30_sigmoid_mse_3keV/mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.pt \ --plotPath 
~/phd/Figs/neuralNetworks/17_11_23_adam_tanh30_sigmoid_mse_3keV/ \ --datasets eccentricity \ --datasets skewnessLongitudinal \ --datasets skewnessTransverse \ --datasets kurtosisLongitudinal \ --datasets kurtosisTransverse \ --datasets length \ --datasets width \ --datasets rmsLongitudinal \ --datasets rmsTransverse \ --datasets lengthDivRmsTrans \ --datasets rotationAngle \ --datasets fractionInTransverseRms \ --datasets totalCharge \ --datasets σT \ --numHidden 30 \ --numHidden 30 \ --activation tanh \ --outputActivation sigmoid \ --lossFunction MSE \ --optimizer Adam \ --learningRate 7e-4 \ --simulatedData \ --backgroundRegion crAll \ --nFake 250_000 \ --backgroundChips 0 \ --backgroundChips 1 \ --backgroundChips 2 \ --backgroundChips 4 \ --backgroundChips 5 \ --backgroundChips 6 \ --clamp 50 \ --skipTraining #+end_src [[file:Figs/neuralNetworks/17_11_23_adam_tanh30_mse/]] which are a direct copy of the initial plots generated from the call above. - [ ] We will likely want slightly nicer versions of these plots of course! The H5 file [[~/org/resources/nn_devel_mixing/10_05_23_sgd_gauss_diffusion_tanh300_mse_loss/mlp_desc_v2.h5]] contains all the required losses etc for the plots! The model file we use is: [[~/org/resources/nn_devel_mixing/10_05_23_sgd_gauss_diffusion_tanh300_mse_loss/mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933.pt]] - [ ] *CREATE COMMAND TO GENERATE THE PLOTS FROM THE MODEL!* **** Other networks trained :extended: In the course of developing the MLP presented in the thesis, we trained a large number of different networks. We varied: - learning rates - optimizers (SGD, Adam, AdaGrad) - optimizer settings (momentum etc) - activation functions ($\tanh$, ReLU, GeLU, softmax, eLU) - loss functions (MSE, L1, ...) 
- linear output layer or non linear output layer - number of hidden layers - number of neurons on hidden layer - input neurons (including / excluding individual variables) As usual when training machine learning models, it's all a bit hit and miss. :) I may upload all the models, plots (in addition to the notes) to a repository / Zenodo, but there is not _that_ much of interest there I imagine. - [ ] *UPLOAD ALL MODELS* **** Generate plot of the ROC curve :extended: The ROC curve plot is generated by ~train_ingrid~ if the ~--predict~ argument is given, specifically via the ~targetSpecificRocCurve~ proc. Generate the ROC curve for the MLP trained on the outer chips: #+begin_src sh ./train_ingrid \ ~/CastData/data/CalibrationRuns2017_Reco.h5 \ ~/CastData/data/CalibrationRuns2018_Reco.h5 \ --back ~/CastData/data/DataRuns2017_Reco.h5 \ --back ~/CastData/data/DataRuns2018_Reco.h5 \ --model ~/org/resources/nn_devel_mixing/30_10_23_sgd_tanh300_mse_loss_outer_chips/mlp_tanh_sigmoid_MSE_SGD_300_2.pt \ --plotPath ~/phd/Figs/neuralNetworks/30_10_23_sgd_tanh300_mse_outer_chips/ \ --numHidden 300 \ --numHidden 300 \ --activation tanh \ --outputActivation sigmoid \ --lossFunction MSE \ --optimizer SGD \ --learningRate 7e-4 \ --simulatedData \ --backgroundRegion crAll \ --nFake 250_000 \ --backgroundChips 0 \ --backgroundChips 1 \ --backgroundChips 2 \ --backgroundChips 4 \ --backgroundChips 5 \ --backgroundChips 6 \ --predict #+end_src **** Training call for the best performing MLP :extended: All MLPs we trained, including the one we mainly use in the end, use the following datasets as inputs: - eccentricity - skewnessLongitudinal - skewnessTransverse - kurtosisLongitudinal - kurtosisTransverse - length - width - rmsLongitudinal - rmsTransverse - lengthDivRmsTrans - rotationAngle - fractionInTransverseRms - totalCharge - σT (<- this is actually $D_T$) This section (anything below) is extracted from [[file:~/org/journal.org::#sec:journal:17_11_23:train_adam_tanh30]]. 
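For reference, the network these inputs feed into (14 inputs, two hidden layers of 30 neurons with $\tanh$, 2 sigmoid outputs) amounts to a very small forward pass. A NumPy sketch with random weights for illustration only; the real weights live in the libtorch checkpoint referenced above:

#+begin_src python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Layer shapes of the MLP: 14 -> 30 -> 30 -> 2 (random weights, zero biases)
W1, b1 = rng.normal(size=(14, 30)), np.zeros(30)
W2, b2 = rng.normal(size=(30, 30)), np.zeros(30)
W3, b3 = rng.normal(size=(30, 2)), np.zeros(2)

def forward(x):
    h1 = np.tanh(x @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    return sigmoid(h2 @ W3 + b3)  # two outputs, each in (0, 1)
#+end_src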
Start training at <2023-11-17 Fri 19:42>: #+begin_src sh ./train_ingrid \ ~/CastData/data/CalibrationRuns2017_Reco.h5 \ ~/CastData/data/CalibrationRuns2018_Reco.h5 \ --back ~/CastData/data/DataRuns2017_Reco.h5 \ --back ~/CastData/data/DataRuns2018_Reco.h5 \ --simFiles ~/CastData/data/FakeData/fakeData_500k_0_to_3keV_decrease.h5 \ --modelOutpath ~/org/resources/nn_devel_mixing/17_11_23_adam_tanh30_sigmoid_mse_3keV/ \ --plotPath ~/Sync/17_11_23_adam_tanh30_sigmoid_mse_3keV/ \ --datasets eccentricity \ --datasets skewnessLongitudinal \ --datasets skewnessTransverse \ --datasets kurtosisLongitudinal \ --datasets kurtosisTransverse \ --datasets length \ --datasets width \ --datasets rmsLongitudinal \ --datasets rmsTransverse \ --datasets lengthDivRmsTrans \ --datasets rotationAngle \ --datasets fractionInTransverseRms \ --datasets totalCharge \ --datasets σT \ --numHidden 30 \ --numHidden 30 \ --activation tanh \ --outputActivation sigmoid \ --lossFunction MSE \ --optimizer Adam \ --learningRate 7e-4 \ --simulatedData \ --backgroundRegion crAll \ --nFake 250_000 \ --backgroundChips 0 \ --backgroundChips 1 \ --backgroundChips 2 \ --backgroundChips 4 \ --backgroundChips 5 \ --backgroundChips 6 \ --clamp 5000 \ --plotEvery 1000 #+end_src Stopped after 25k <2023-11-17 Fri 20:09> Now continue with uniform data: #+begin_src sh ./train_ingrid \ ~/CastData/data/CalibrationRuns2017_Reco.h5 \ ~/CastData/data/CalibrationRuns2018_Reco.h5 \ --back ~/CastData/data/DataRuns2017_Reco.h5 \ --back ~/CastData/data/DataRuns2018_Reco.h5 \ --simFiles ~/CastData/data/FakeData/fakeData_500k_uniform_energy_0_10_keV.h5 \ --model ~/org/resources/nn_devel_mixing/17_11_23_adam_tanh30_sigmoid_mse_3keV/mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_25000_loss_0.0253_acc_0.9657.pt \ --plotPath ~/Sync/17_11_23_adam_tanh30_sigmoid_mse_3keV/ \ --datasets eccentricity \ --datasets skewnessLongitudinal \ --datasets skewnessTransverse \ --datasets kurtosisLongitudinal \ --datasets kurtosisTransverse \ 
--datasets length \
--datasets width \
--datasets rmsLongitudinal \
--datasets rmsTransverse \
--datasets lengthDivRmsTrans \
--datasets rotationAngle \
--datasets fractionInTransverseRms \
--datasets totalCharge \
--datasets σT \
--numHidden 30 \
--numHidden 30 \
--activation tanh \
--outputActivation sigmoid \
--lossFunction MSE \
--optimizer Adam \
--learningRate 7e-4 \
--simulatedData \
--backgroundRegion crAll \
--nFake 250_000 \
--backgroundChips 0 \
--backgroundChips 1 \
--backgroundChips 2 \
--backgroundChips 4 \
--backgroundChips 5 \
--backgroundChips 6 \
--clamp 5000 \
--plotEvery 1000
#+end_src

Stopped after 82k epochs <2023-11-17 Fri 21:14>.

***** ~nn_predict~

Prediction:
#+begin_src sh
./nn_predict \
--model ~/org/resources/nn_devel_mixing/17_11_23_adam_tanh30_sigmoid_mse_3keV/mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.pt \
--signalEff 0.9 \
--plotPath ~/Sync/17_11_23_adam_tanh30_sigmoid_mse_3keV
#+end_src

All positive fortunately.

***** ~effective_eff_55fe~

#+begin_src sh
./effective_eff_55fe \
~/CastData/data/CalibrationRuns2017_Reco.h5 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
--model ~/org/resources/nn_devel_mixing/17_11_23_adam_tanh30_sigmoid_mse_3keV/mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.pt \
--ε 0.95 \
--cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
--evaluateFit --plotDatasets \
--plotPath ~/Sync/run2_run3_17_11_23_adam_tanh30_sigmoid_mse_82k/
#+end_src

Finished <2023-11-17 Fri 21:29>.

=~/Sync/run2_run3_17_11_23_adam_tanh30_sigmoid_mse_82k/efficiency_based_on_fake_data_per_run_cut_val.pdf=

[[./Figs/neuralNetworks/run2_run3_17_11_23_adam_tanh30_sigmoid_mse_82k/efficiency_based_on_fake_data_per_run_cut_val.pdf]]

-> Somewhat similar to the 10 neuron network *BUT* the spread is much
smaller and the CDL data is better predicted! This might be our winner.
90% <2023-11-18 Sat 08:59> #+begin_src sh WRITE_PLOT_CSV=true USE_TEX=true ./effective_eff_55fe \ ~/CastData/data/CalibrationRuns2017_Reco.h5 ~/CastData/data/CalibrationRuns2018_Reco.h5 \ --model ~/org/resources/nn_devel_mixing/17_11_23_adam_tanh30_sigmoid_mse_3keV/mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.pt \ --ε 0.90 \ --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \ --evaluateFit --plotDatasets \ --generatePlots --generateRunPlots \ --plotPath ~/Sync/run2_run3_17_11_23_adam_tanh30_sigmoid_mse_mlp90_82k/ #+end_src ***** ~train_ingrid --predict~ <2023-11-17 Fri 21:30> #+begin_src sh ./train_ingrid \ ~/CastData/data/CalibrationRuns2017_Reco.h5 \ ~/CastData/data/CalibrationRuns2018_Reco.h5 \ --back ~/CastData/data/DataRuns2017_Reco.h5 \ --back ~/CastData/data/DataRuns2018_Reco.h5 \ --simFiles ~/CastData/data/FakeData/fakeData_500k_uniform_energy_0_10_keV.h5 \ --model ~/org/resources/nn_devel_mixing/17_11_23_adam_tanh30_sigmoid_mse_3keV/mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.pt \ --plotPath ~/Sync/17_11_23_adam_tanh30_sigmoid_mse_3keV/ \ --datasets eccentricity \ --datasets skewnessLongitudinal \ --datasets skewnessTransverse \ --datasets kurtosisLongitudinal \ --datasets kurtosisTransverse \ --datasets length \ --datasets width \ --datasets rmsLongitudinal \ --datasets rmsTransverse \ --datasets lengthDivRmsTrans \ --datasets rotationAngle \ --datasets fractionInTransverseRms \ --datasets totalCharge \ --datasets σT \ --numHidden 30 \ --numHidden 30 \ --activation tanh \ --outputActivation sigmoid \ --lossFunction MSE \ --optimizer Adam \ --learningRate 7e-4 \ --simulatedData \ --backgroundRegion crAll \ --nFake 250_000 \ --backgroundChips 0 \ --backgroundChips 1 \ --backgroundChips 2 \ --backgroundChips 4 \ --backgroundChips 5 \ --backgroundChips 6 \ --clamp 5000 \ --predict #+end_src Finished <2023-11-17 Fri 21:33> Compared to the ROC curve of tanh10, this ROC curve looks a little bit 
worse. But the effective efficiency predictions are quite a bit
better. So we will choose based on what happens with the background
and other efficiencies.

***** ~createAllLikelihoodCombinations~

Starting <2023-11-18 Sat 00:03>:
#+begin_src sh
./createAllLikelihoodCombinations \
--f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
--f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
--c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
--c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
--regions crAll \
--vetoSets "{fkMLP, +fkFadc, +fkScinti, +fkSeptem, fkLineVeto, fkExclusiveLineVeto}" \
--mlpPath ~/org/resources/nn_devel_mixing/17_11_23_adam_tanh30_sigmoid_mse_3keV/mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.pt \
--fadcVetoPercentile 0.99 \
--signalEfficiency 0.85 --signalEfficiency 0.9 --signalEfficiency 0.95 --signalEfficiency 0.98 \
--out ~/org/resources/lhood_mlp_17_11_23_adam_tanh10_sigmoid_mse_82k/ \
--cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
--multiprocessing \
--jobs 4 \
--dryRun
#+end_src

#+begin_src
Running all likelihood combinations took 11730.39050412178 s
#+end_src

Despite running only 2 jobs in the multithreaded case below (and
excluding the fkMLP-only combination!), it was still more than twice
as slow. So single-threaded is the better way.
Multithreaded took: #+begin_src Running all likelihood combinations took 31379.83979129791 s #+end_src ***** ~plotBackgroundClusters~, ~plotBackgroundRate~ ****** No vetoes Background clusters, 95%: #+begin_src sh plotBackgroundClusters \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ --zMax 30 --title "X-ray like clusters of CAST data MLP@95%" \ --outpath /tmp/ --filterNoisyPixels \ --energyMin 0.2 --energyMax 12.0 --suffix "_mlp_95_adam_tanh30_sigmoid_mse_82k" \ --backgroundSuppression #+end_src 90% #+begin_src sh plotBackgroundClusters \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.9_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.9_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ --zMax 30 --title "X-ray like clusters of CAST data MLP@90%" \ --outpath /tmp/ --filterNoisyPixels \ --energyMin 0.2 --energyMax 12.0 --suffix "_mlp_90_adam_tanh30_sigmoid_mse_82k" \ --backgroundSuppression #+end_src Background rate, comparison: #+begin_src sh plotBackgroundRate \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.98_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.98_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ 
~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.9_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.9_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.85_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.85_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ --names "98" --names "98" \ --names "95" --names "95" \ --names "90" --names "90" \ --names "85" --names "85" \ --centerChip 3 --title "Background rate from CAST" \ --showNumClusters --showTotalTime --topMargin 1.5 --energyDset energyFromCharge \ --outfile background_rate_adam_tanh30_mse_sigmoid_82k.pdf \ --outpath /tmp \ --region crGold \ --energyMin 0.2 #+end_src ****** Scinti+FADC+Line Background clusters, 95%: #+begin_src sh plotBackgroundClusters \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --zMax 30 --title "X-ray like clusters of CAST data MLP@95%+Scinti+FADC+Line" 
\ --outpath /tmp/ --filterNoisyPixels \ --energyMin 0.2 --energyMax 12.0 --suffix "_mlp_95_scinti_fadc_line_adam_tanh30_sigmoid_mse_82k" \ --backgroundSuppression #+end_src 85% #+begin_src sh plotBackgroundClusters \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.85_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --zMax 30 --title "X-ray like clusters of CAST data MLP@85%+Scinti+FADC+Line" \ --outpath /tmp/ --filterNoisyPixels \ --energyMin 0.2 --energyMax 12.0 --suffix "_mlp_85_scinti_fadc_line_adam_tanh30_sigmoid_mse_82k" \ --backgroundSuppression #+end_src #+begin_src sh plotBackgroundRate \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.98_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.98_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.9_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ 
~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.9_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.85_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --names "98" --names "98" \ --names "95" --names "95" \ --names "90" --names "90" \ --names "85" --names "85" \ --centerChip 3 --title "Background rate from CAST" \ --showNumClusters --showTotalTime --topMargin 1.5 --energyDset energyFromCharge \ --outfile background_rate_adam_tanh30_scinti_fadc_line_mse_sigmoid_82k.pdf \ --outpath /tmp \ --region crGold \ --energyMin 0.2 #+end_src ****** Scinti+FADC+Septem+Line Background clusters, 95%: #+begin_src sh plotBackgroundClusters \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --zMax 30 --title "X-ray like clusters of CAST data MLP@95%+Scinti+FADC+Septem+Line" \ --outpath /tmp/ --filterNoisyPixels \ --energyMin 0.2 --energyMax 12.0 --suffix "_mlp_95_scinti_fadc_septem_line_adam_tanh30_sigmoid_mse_82k" \ --backgroundSuppression #+end_src 85% #+begin_src sh plotBackgroundClusters \ 
~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.85_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --zMax 30 --title "X-ray like clusters of CAST data MLP@85%+Scinti+FADC+Septem+Line" \ --outpath /tmp/ --filterNoisyPixels \ --energyMin 0.2 --energyMax 12.0 --suffix "_mlp_85_scinti_fadc_septem_line_adam_tanh30_sigmoid_mse_82k" \ --backgroundSuppression #+end_src #+begin_src sh plotBackgroundRate \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.98_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.98_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.9_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.9_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ 
~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \
~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.85_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \
--names "98" --names "98" \
--names "95" --names "95" \
--names "90" --names "90" \
--names "85" --names "85" \
--centerChip 3 --title "Background rate from CAST" \
--showNumClusters --showTotalTime --topMargin 1.5 --energyDset energyFromCharge \
--outfile background_rate_adam_tanh30_scinti_fadc_septem_line_mse_sigmoid_82k.pdf \
--outpath /tmp \
--region crGold \
--energyMin 0.2
#+end_src

**** Combination of NN w/ vetoes :noexport:
- [ ] *I THINK THIS CAN BE DELETED*
[[file:~/CastData/ExternCode/TimepixAnalysis/Tools/NN_playground/predict_event.nim]]
#+begin_src sh
./predict_event ~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/lhood_2017_crGold_septemveto_lineveto_dbscan.h5 \
~/CastData/ExternCode/TimepixAnalysis/resources/LikelihoodFiles/lhood_2018_crGold_septemveto_lineveto_dbscan.h5 \
--lhood --cutVal 3.2 --totalTime 3400
#+end_src
Within 0 - 8 keV: 6.413398692810459e-6 keV⁻¹•cm⁻²•s⁻¹
Within 0 - 2.8 & 4.0 - 8.0 keV: 4.133025759323338e-6 keV⁻¹•cm⁻²•s⁻¹
yields
[[~/org/Figs/statusAndProgress/neuralNetworks/background_rate_after_logl_septemveto_lineveto_and_mlp.pdf]]

*** Determination of MLP cut value
:PROPERTIES:
:CUSTOM_ID: sec:background:mlp:mlp_cut_value
:END:

Based on its output distributions,
fig. sref:fig:background:mlp:validation_output, the MLP can act as a
cluster discriminator in the same way as the likelihood cut method,
following eq. [[eq:background:lnL:cut_condition]]. The likelihood
distribution is replaced by the MLP prediction via either of the two
output neurons.
For example, as seen for one output neuron in
fig. sref:fig:background:mlp:validation_output, the cut value is
determined based on the 'prediction' value along the x-axis, chosen
such that the desired fraction of signal-like clusters, corresponding
to the target software efficiency, lies above it. Any cluster below
would be removed. In practice, the empirical distribution function of
the signal-like output data is again computed and the quantile
corresponding to the target software efficiency $ε_{\text{eff}}$ is
determined. Similarly to the likelihood cut method, this is done for
$\num{8}$ different energy ranges corresponding to the CDL
fluorescence lines. However, no interpolation is performed, because
only the single output distribution is known. [fn:interpolation]

Due to the significant differences in gas gain between the CDL dataset
and the CAST \cefe calibration data, we do not use a single cut value
per energy for all CAST data taking runs. Instead, we use the X-ray
cluster simulation machinery to provide run-specific X-ray clusters
from which to deduce cut values. For each run we wish to apply the MLP
to, we start by computing the mean gas gain based on all gas gain
intervals. Then we determine the gas diffusion coefficient as
explained in sec. [[#sec:background:mlp:determine_gas_diffusion]]. With
two of the three required parameters for X-ray generation in hand, we
then simulate X-rays following each X-ray fluorescence line measured
in the CDL dataset, yielding 8 different simulated datasets for each
run. From each of these we compute one cut value on the MLP output.
The cut value deduced in this way is applied as the MLP cut to that
run in the valid energy range. The same approach is used for
calibration runs as well as for background runs. This means /both/ the
MLP training as well as the determination of cut values are /entirely/
based on synthetic data, only using aggregate parameters of the real
data.
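The quantile-based determination of the cut value amounts to a few lines of code. The following is a minimal Python sketch with toy scores; the function and array names are hypothetical and the actual implementation is part of the Nim analysis tools:

```python
import numpy as np

def mlp_cut_value(signal_scores, eff):
    # Cut value such that a fraction `eff` of the (simulated) signal
    # clusters lies *above* the cut, i.e. the (1 - eff) quantile of
    # the empirical distribution of MLP outputs.
    return float(np.quantile(signal_scores, 1.0 - eff))

# Toy example: uniform scores in [0, 1] stand in for the MLP outputs
# of run-specific simulated X-ray clusters.
rng = np.random.default_rng(42)
scores = rng.uniform(0.0, 1.0, 100_000)
cut = mlp_cut_value(scores, 0.95)
kept = float(np.mean(scores > cut))  # fraction kept, close to 0.95
```

In the scheme described above, one such cut value would be computed per run and per fluorescence line from the corresponding simulated dataset.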
[fn:interpolation] One could of course interpolate between the cut
values themselves, but that is less well motivated and harder to
cross-check. Predicting a cut value from two known, neighboring
energies would likely not work very well.

*** Verification of software efficiency using calibration data
:PROPERTIES:
:CUSTOM_ID: sec:background:mlp:effective_efficiency
:END:

As the simulated X-ray clusters certainly differ from real X-rays,
verification of the software efficiency using calibration data is
required. For all \cefe calibration runs as well as all CDL data we
produce cut values as explained in the previous section. Then we apply
these to the real clusters of those runs and compute the effective
efficiency as
\[ ε_{\text{effective}} = \frac{N_{\text{cut}}}{N_{\text{total}}} \]
where $N_{\text{cut}}$ is the number of clusters remaining after
applying the MLP cut and $N_{\text{total}}$ is the total number of
clusters after application of the standard cleaning cuts applied for
the \cefe spectrum fit and the CDL fluorescence line fits. The
resulting efficiency is the effective efficiency the MLP produces. A
close match with the target software efficiency implies that the
simulated X-ray data matches the real data well and that the network
learned to identify X-rays based on physical properties (and not due
to overtraining).

In the limit calculation later, the mean of all these effective
efficiencies of the \cefe calibration runs is used in place of the
target software efficiency as a realistic estimator for the signal
efficiency. The standard deviation of all these effective efficiencies
is one of the systematics used there.

Fig. [[fig:background:mlp:effective_efficiencies]] shows the effective
efficiencies obtained for all \cefe calibration and CDL runs using the
MLP introduced earlier. The marker symbol represents the energy of the
target fluorescence line.
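As a minimal numerical sketch of this check (Python with toy data; all names and numbers are purely illustrative, not the actual analysis values):

```python
import numpy as np

def effective_efficiency(scores, cut):
    # ε_effective = N_cut / N_total applied to real calibration
    # clusters, with a simple binomial uncertainty estimate.
    n_total = scores.size
    n_cut = int(np.sum(scores > cut))
    eff = n_cut / n_total
    err = float(np.sqrt(eff * (1.0 - eff) / n_total))
    return eff, err

# Toy 'real' MLP outputs of calibration clusters and a toy cut value.
rng = np.random.default_rng(0)
real_scores = rng.normal(loc=0.8, scale=0.1, size=10_000)
eff, err = effective_efficiency(real_scores, cut=0.65)
```

If the simulation describes the real data well, ~eff~ lands close to the target software efficiency.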
Note that the \cefe calibration escape peak near $\SI{3}{keV}$ is a
separate data point ~"3.0"~ compared to the silver fluorescence line
given as ~"2.98"~. That is because the two are fundamentally different
things. The escape event is a $\SI{5.9}{keV}$ photopeak X-ray entering
the detector, which ends up 'losing' about $\SI{3}{keV}$ due to the
excitation and escape of an argon fluorescence Kα X-ray. This means
the absorption length is that of a $\SI{5.9}{keV}$ photon, whereas the
silver fluorescence line corresponds to a real $\SI{2.98}{keV}$ photon
and thus the corresponding absorption length. As such, the geometric
properties on average are different.

The target efficiency in the plot was $\SI{95}{\%}$ with an achieved
mean effective efficiency close to the target. Values are slightly
lower in Run-2 (run numbers < 200) and slightly above the target in
Run-3. Variation is significantly larger in the CDL runs (run numbers
above 305), but often towards larger efficiencies. Only one CDL run is
at a significantly lower efficiency ($\SI{83}{\%}$), with a few other
low energy runs around $\SI{91}{\%}$. The quality of the CDL data is
lower and it shows larger variation of the common properties (gas
gain, diffusion) compared to the CAST dataset.

#+CAPTION: Effective efficiencies obtained using the MLP for a target $ε = \SI{95}{\%}$
#+CAPTION: software efficiency. Runs above run 305 are CDL calibration runs. Different
#+CAPTION: symbols represent different target fluorescence lines. The \cefe escape peak
#+CAPTION: is indicated at $\SI{3}{keV}$ in contrast to the $\SI{2.98}{keV}$ silver
#+CAPTION: line, due to different physical properties. The photopeak corresponds to ~5.9~ and
#+CAPTION: the equivalent CDL Mn target to ~5.89~.
#+NAME: fig:background:mlp:effective_efficiencies [[~/phd/Figs/neuralNetworks/17_11_23_adam_tanh30_sigmoid_mse_3keV/effectiveEff0.95/efficiency_based_on_fake_data_per_run_cut_val.pdf]] **** TODOs for this section :noexport: - [ ] *UPDATE PLOT* to have larger margin at top **** Generate effective efficiency plot :extended: :PROPERTIES: :CUSTOM_ID: sec:background:mlp:gen_effective_efficiency_plots :END: Before we can compute the effective efficiencies we need the cache table. In principle we have it already for the data and calibration runs, but we want a plot of all runs as well: #+begin_src sh ./determineDiffusion \ ~/CastData/data/DataRuns2017_Reco.h5 \ ~/CastData/data/CalibrationRuns2017_Reco.h5 \ ~/CastData/data/DataRuns2018_Reco.h5 \ ~/CastData/data/CalibrationRuns2018_Reco.h5 \ ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 #+end_src This currently produces the plot =~/Sync/σT_per_run.pdf=, which we copied here: [[~/phd/Figs/σT_per_run.pdf]] Now we compute the effective efficiencies: #+begin_src sh WRITE_PLOT_CSV=true USE_TEX=true ./effective_eff_55fe \ ~/CastData/data/CalibrationRuns2017_Reco.h5 ~/CastData/data/CalibrationRuns2018_Reco.h5 \ --model ~/org/resources/nn_devel_mixing/17_11_23_adam_tanh30_sigmoid_mse_3keV/mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.pt \ --ε 0.95 \ --cdlFile ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \ --evaluateFit --plotDatasets \ --generatePlots --generateRunPlots \ --plotPath ~/phd/Figs/neuralNetworks/17_11_23_adam_tanh30_sigmoid_mse_3keV/effectiveEff0.95/ #+end_src ***** TODOs for this section :noexport: - [ ] *CHANGE THE SIZE OF THE PLOT TO FIT GAIN COMPLETELY* - [X] *IT SEEMS TO ME LIKE THE ~DataType~ FIELD SHOULD GIVE DIFFERENT NAMES* -> In the effective efficiency code the ~DataType~ column of the DF we plot for the effective efficiency contains the data information. But to me it seems that the name should be different than in the plot. -> Fixed it. 
For some reason the ~$E~ variant here caused the
~8.03999999~. Using ~strformat~ now.
#+begin_src nim
import ingrid / [ingrid_types, tos_helpers]
import std / strformat
for tfKind in TargetFilterKind:
  let E = toXrayLineEnergy(tfKind)
  echo E
  echo $E
  echo &"{E:g}"
#+end_src

**** Note on effective efficiency for CDL data :extended:

Of course it is not ideal that the CDL data sees larger variation than
the CAST data. However, on the one hand I'm reasonably certain that
the variation would be quite a bit lower if the CDL data quality was
higher and more similar to the CAST data. On the other hand,
unfortunately we simply do not have the data to cross-check the
efficiency for each run. The parameters going into the MLP that depend
on the detector behavior (total charge and diffusion) are vitally
important for obtaining good cut values. If we had more data for
different energies, we could calculate effective efficiencies better,
maybe individually for each target. But given this dataset, I would
not fully trust those numbers. And we can see that there are enough
runs of low energy X-rays in the CDL data with higher signal
efficiency. This gives us confidence that the synthetic data doesn't
just give us cut values that are generally too strict (i.e. always
underestimating the effective efficiency for low energy X-rays).

*** Comparing the MLP to the $\ln\mathcal{L}$ cut :noexport:
- [ ] *THIS HAS BEEN MOVED BACK UP AFTER ALL*

Beyond the verification of the target software efficiencies and the
general separation between signal and background datasets during
training, we can consider the
- [ ] VALIDATION

As a comparison to the likelihood method, the receiver operating
characteristic (ROC) curves for the different X-ray reference datasets
are used. The signal and background data used for each case
(likelihood and MLP) is the test dataset used to evaluate the MLP
performance after training. Fig.
[[fig:roc_curves_logl_mlp]] shows these ROC curves with the line style
indicating the method and the color corresponding to the different
targets and thus different fluorescence lines. The improvement in
background rejection at a fixed signal efficiency exceeds
$\SI{10}{\percent}$ at multiple energies.

#+CAPTION: ROC curves for the comparison of the likelihood cut method (solid lines) to the
#+CAPTION: MLP predictions (dashed lines), colored by the different targets used to generate the
#+CAPTION: reference datasets. The background data used for each target corresponds to background
#+CAPTION: clusters in an energy range around the fluorescence line.
#+CAPTION: The MLP visibly outperforms the likelihood cut method at all energies. At the same
#+CAPTION: signal efficiency (x axis) a significantly higher background rejection is achieved.
#+NAME: fig:roc_curves_logl_mlp
[[~/phd/Figs/neuralNetworks/17_11_23_adam_tanh30_sigmoid_mse_3keV/roc_curve_combined_split_by_target.pdf]]

*** Background rate using MLP
:PROPERTIES:
:CUSTOM_ID: sec:background:mlp:background_rate
:END:

Applying the MLP cut as explained leads to background rates as
presented in fig. [[fig:background:mlp:background_rate]], where the
MLP is compared to the $\ln\mathcal{L}$ cut ($ε = \SI{80}{\%}$) at a
similar software efficiency $ε = \SI{84.7}{\%}$ as well as at a
significantly higher efficiency of $ε = \SI{94.4}{\%}$. Note that the
efficiencies are effective efficiencies as explained in
sec. [[#sec:background:mlp:effective_efficiency]]. The background
suppression is significantly improved in particular at lower energies,
implying that other cluster properties provide better separation at
those energies than the three inputs used for the likelihood cut.
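A background rate as quoted in this section is simply the number of remaining clusters normalized by the total background time, the considered area and the width of the energy range. A small Python sketch of that conversion (the cluster count and total time below are made-up numbers for illustration, not the actual analysis values):

```python
def background_rate(n_clusters, time_s, area_cm2, e_min_kev, e_max_kev):
    # Mean background rate in keV⁻¹·cm⁻²·s⁻¹.
    return n_clusters / (time_s * area_cm2 * (e_max_kev - e_min_kev))

# 5 x 5 mm² center ('gold') region and a hypothetical ~3150 h of
# background data with 500 remaining clusters between 0.2 and 8 keV.
area = 0.5 * 0.5          # cm²
time = 3150 * 3600.0      # s
rate = background_rate(500, time, area, 0.2, 8.0)
```

With these toy inputs the result lands in the $\num{e-5}$ range, i.e. the same order of magnitude as the rates quoted below.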
The mean background rates between $\SIrange{0.2}{8}{keV}$ for each of
these are [fn:diff_ln80]:
\begin{align*}
b_{\ln\mathcal{L} @ \SI{80}{\%}} &= \SI{2.06142(9643)e-05}{keV^{-1}.cm^{-2}.s^{-1}} \\
b_{\text{MLP} @ \SI{85}{\%}} &= \SI{1.56523(8403)e-05}{keV^{-1}.cm^{-2}.s^{-1}} \\
b_{\text{MLP} @ \SI{95}{\%}} &= \SI{2.01631(9537)e-05}{keV^{-1}.cm^{-2}.s^{-1}}
\end{align*}
#+begin_comment
[INFO]:Dataset: LnL@0.8
[INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.60791(7521)e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 2.06142(9643)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 1.56523(8403)e-05
[INFO]:Dataset: MLP@0.85
[INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.22088(6554)e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 1.56523(8403)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 2.01631(9537)e-05
[INFO]:Dataset: MLP@0.95
[INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.57272(7439)e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 2.01631(9537)e-05 keV⁻¹·cm⁻²·s⁻¹
#+end_comment

The most important aspect is that the MLP allows us to achieve very
comparable background rates at significantly higher
efficiencies. Accepting lower signal efficiencies does not come with a
noticeable improvement, implying that for the remaining clusters the
distinction between background and X-ray properties is very small (and
some remaining clusters surely are of X-ray origin).

#+CAPTION: Comparison of the background rate in the center region of the MLP at different
#+CAPTION: efficiencies and the standard $\ln\mathcal{L}$. The MLP improves most at
#+CAPTION: low energies and achieves comparable or better rates at higher efficiencies.
#+NAME: fig:background:mlp:background_rate
[[~/phd/Figs/background/background_rate_gold_mlp_0.95_0.8_lnL.pdf]]

[fn:diff_ln80] The number quoted here for \lnL at $\SI{80}{\%}$ differs slightly from sec.
[[#sec:background:likelihood_cut]], because the lower bound of the
energy range is $\SI{0.2}{keV}$ here.

**** Generate background rate plot for MLP [/] :extended:
:PROPERTIES:
:CUSTOM_ID: sec:background:gen_bck_rate_mlp
:END:

Let's generate the background rate plot for the MLP. We'll want to
show 85% (84.7% effective) as a direct comparison to LnL, 95% (94.4%
effective) as a reference because we use it later, and compare both to
the 80% LnL.
#+begin_src sh :results drawer
plotBackgroundRate \
~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \
~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \
~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.85_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \
~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.85_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \
~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL.h5 \
~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL.h5 \
--centerChip 3 \
--names "MLP@0.95" --names "MLP@0.95" \
--names "MLP@0.85" --names "MLP@0.85" \
--names "LnL@0.8" --names "LnL@0.8" \
--title "Background rate in center 5·5 mm², MLP at different ε" \
--showNumClusters \
--region crGold \
--showTotalTime \
--topMargin 1.5 \
--energyDset energyFromCharge \
--energyMin 0.2 \
--outfile background_rate_gold_mlp_0.95_0.8_lnL.pdf \
--outpath ~/phd/Figs/background/ \
--useTeX \
--quiet
#+end_src

#+RESULTS:
:results:
Manual rate = 1.93512(7596)e-05
[INFO]:Dataset: LnL@0.8
[INFO]: Integrated background rate in range: 0.2 ..
12.0: 2.28344(8963)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 12.0: 1.93512(7596)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.58328(6871)e-05 [INFO]:Dataset: MLP@0.85 [INFO]: Integrated background rate in range: 0.2 .. 12.0: 1.86827(8108)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 12.0: 1.58328(6871)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.04544(7810)e-05 [INFO]:Dataset: MLP@0.95 [INFO]: Integrated background rate in range: 0.2 .. 12.0: 2.41362(9215)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 12.0: 2.04544(7810)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.5008(2482)e-05 [INFO]:Dataset: LnL@0.8 [INFO]: Integrated background rate in range: 0.5 .. 2.5: 7.0016(4963)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 3.5008(2482)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.1083(1396)e-05 [INFO]:Dataset: MLP@0.85 [INFO]: Integrated background rate in range: 0.5 .. 2.5: 2.2166(2793)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 1.1083(1396)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.9175(1837)e-05 [INFO]:Dataset: MLP@0.95 [INFO]: Integrated background rate in range: 0.5 .. 2.5: 3.8350(3673)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 1.9175(1837)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.7053(1454)e-05 [INFO]:Dataset: LnL@0.8 [INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.21736(6545)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.7053(1454)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.7748(1178)e-05 [INFO]:Dataset: MLP@0.85 [INFO]: Integrated background rate in range: 0.5 .. 5.0: 7.9868(5301)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 1.7748(1178)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.3456(1354)e-05 [INFO]:Dataset: MLP@0.95 [INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.05552(6094)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 
5.0: 2.3456(1354)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.7784(2404)e-05 [INFO]:Dataset: LnL@0.8 [INFO]: Integrated background rate in range: 0.2 .. 2.5: 8.6904(5530)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 2.5: 3.7784(2404)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.6062(1568)e-05 [INFO]:Dataset: MLP@0.85 [INFO]: Integrated background rate in range: 0.2 .. 2.5: 3.6943(3605)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 2.5: 1.6062(1568)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.5088(1959)e-05 [INFO]:Dataset: MLP@0.95 [INFO]: Integrated background rate in range: 0.2 .. 2.5: 5.7702(4506)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 2.5: 2.5088(1959)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 6.7729(7718)e-06 [INFO]:Dataset: LnL@0.8 [INFO]: Integrated background rate in range: 4.0 .. 8.0: 2.7092(3087)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 6.7729(7718)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 8.6201(8708)e-06 [INFO]:Dataset: MLP@0.85 [INFO]: Integrated background rate in range: 4.0 .. 8.0: 3.4480(3483)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 8.6201(8708)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.02033(9474)e-05 [INFO]:Dataset: MLP@0.95 [INFO]: Integrated background rate in range: 4.0 .. 8.0: 4.0813(3789)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 1.02033(9474)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.37217(8970)e-05 [INFO]:Dataset: LnL@0.8 [INFO]: Integrated background rate in range: 2.0 .. 8.0: 8.2330(5382)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.37217(8970)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.51291(9419)e-05 [INFO]:Dataset: MLP@0.85 [INFO]: Integrated background rate in range: 2.0 .. 8.0: 9.0775(5651)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 2.0 .. 
8.0: 1.51291(9419)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 1.7827(1022)e-05
[INFO]:Dataset: MLP@0.95
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 1.06959(6135)e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.7827(1022)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 2.06142(9643)e-05
[INFO]:Dataset: LnL@0.8
[INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.60791(7521)e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 2.06142(9643)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 1.56523(8403)e-05
[INFO]:Dataset: MLP@0.85
[INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.22088(6554)e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 1.56523(8403)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 2.01631(9537)e-05
[INFO]:Dataset: MLP@0.95
[INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.57272(7439)e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 2.01631(9537)e-05 keV⁻¹·cm⁻²·s⁻¹
| Classifier | ε_eff | Scinti | FADC  | Septem | Line  | ε_total | Rate              |
| LnL        | 0.800 | false  | false | false  | false | 0.800   | 2.06142(9643)e-05 |
| MLP        | 0.865 | false  | false | false  | false | 0.865   | 1.56523(8403)e-05 |
| MLP        | 0.957 | false  | false | false  | false | 0.957   | 2.01631(9537)e-05 |
[INFO]:INFO: storing plot in /home/basti/phd/Figs/background/background_rate_gold_mlp_0.95_0.8_lnL.pdf
[WARNING]: Printing total background time currently only supported for single datasets.
[INFO] TeXDaemon ready for input.
shellCmd: command -v lualatex
shellCmd: lualatex -output-directory /home/basti/phd/Figs/background /home/basti/phd/Figs/background/background_rate_gold_mlp_0.95_0.8_lnL.tex
Generated: /home/basti/phd/Figs/background/background_rate_gold_mlp_0.95_0.8_lnL.pdf
:end:

** Additional detector features as vetoes
:PROPERTIES:
:CUSTOM_ID: sec:background:additional_vetoes
:END:
Next we will cover how the additional detector features can be used as vetoes to suppress even more background.
We start by looking at the scintillators in sec. [[#sec:background:scinti_veto]]. Then we consider the FADC as a veto based on the time evolution of the events in sec. [[#sec:background:fadc_veto]]. Finally, we consider the outer GridPix ring as a further, geometric class of vetoes in sec. [[#sec:background:septem_veto]] and sec. [[#sec:background:line_veto]], in the form of the 'septem veto' and the 'line veto'. Note that in the context of these detector vetoes, we will generally apply them on top of the $ε = \SI{80}{\%}$ $\ln\mathcal{L}$ method. We come back to the MLP later in sec. [[#sec:background:all_vetoes_combined]].
*** TODOs for this section [/] :noexport:
- [ ] *ADD INTRODUCTION TO SECTION ABOUT VETOES* -> Make sure to properly get a 'split' from the likelihood cut method here. Better to have a chapter / section on the different methods to suppress background. So use it.
*** Scintillators as vetoes
:PROPERTIES:
:CUSTOM_ID: sec:background:scinti_veto
:END:
As introduced in theory section [[#sec:theory:xray_fluorescence]], one problematic source of background is X-ray fluorescence, created by the abundant muons interacting with material of -- or close to -- the detector. The resulting X-rays, if detected, are indistinguishable from X-rays originating from an axion (or other particle of study). Given the required background levels, cosmic-ray induced X-ray fluorescence plays a significant role in the datasets. Scintillators (ideally covering $4π$ around the whole detector setup) can be very helpful to reduce the influence of such background events by 'tagging' certain events. For a fully encapsulated detector, any muon-induced X-ray (at the least) would be preceded by a signal in one (or multiple) scintillators. As such, if the time $t_s$ between the scintillator trigger and the time of activity recorded with the GridPix is small enough, the two are likely to be in real coincidence.
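In code, this coincidence condition amounts to a simple window cut on the number of clock cycles between the scintillator trigger and the GridPix readout. The following is a minimal sketch of that logic, not the actual TimepixAnalysis implementation: the function names are hypothetical, and the $\SI{25}{ns}$ clock period follows from $\SI{100}{clock\;cycles} = \SI{2.5}{μs}$.

```python
# Minimal sketch of the scintillator coincidence veto (illustrative only).
# Assumption: the FPGA counts clock cycles between the scintillator trigger
# and the GridPix readout at 25 ns per cycle (i.e. a 40 MHz clock).

CLOCK_PERIOD_NS = 25.0  # assumed clock period in nanoseconds

def cycles_to_us(cycles: int) -> float:
    """Convert clock cycles to microseconds."""
    return cycles * CLOCK_PERIOD_NS / 1e3

def is_tagged(scinti_clock_cycles: int, veto_window_cycles: int = 150) -> bool:
    """Return True if an event is in likely coincidence with a scintillator
    trigger, i.e. should be vetoed as background.

    A value of 0 means the scintillator did not trigger at all; a value of
    4095 means the counter overflowed (no recent trigger).
    """
    return 0 < scinti_clock_cycles < veto_window_cycles

# A trigger 100 cycles before readout corresponds to 2.5 µs.
assert cycles_to_us(100) == 2.5
assert is_tagged(42)        # likely real coincidence -> veto
assert not is_tagged(0)     # no scintillator trigger
assert not is_tagged(4095)  # counter overflow
```

With a window of $\num{150}$ cycles this amounts to vetoing any cluster whose GridPix activity falls within $\SI{3.75}{μs}$ of a scintillator trigger.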
In the setup used at CAST with a $\SI{42}{cm} \times \SI{82}{cm}$ scintillator paddle (see sec. [[#sec:detector:scintillators]]), roughly $\SI{30}{cm}$ above the detector center -- and a $\cos²(θ)$ distribution for muons -- a significant fraction of muons should be tagged. Similarly, the second scintillator on the Septemboard detector, the small SiPM behind the readout area, should trigger precisely in those cases where a muon traverses the detector from the front or back, such that the muon track is orthogonal to the readout plane.

The term 'non-trivial trigger' in the following indicates events in which a scintillator triggered and the number of clock cycles to the GridPix readout was larger than $\num{0}$ (the scintillator had a trigger in the first place) and smaller than $\num{300}$. The latter cut is somewhat arbitrary and serves two purposes:
1. exclude clock cycle values of $\num{4095}$ (which indicate the counter ran over) and
2. restrict to real coincidences, as the physics involved typically takes place on time scales shorter than $\SI{100}{clock\;cycles} = \SI{2.5}{μs}$ (see sec. [[#sec:detector:scintillators]]). Anything above should just be a random coincidence.
$\num{300}$ is therefore chosen to leave a buffer above the range of physically plausible clock cycles.

During the Run-3 data taking period, a total of $\num{69243}$ non-trivial veto paddle triggers were recorded. [fn:estimation_of_tagged_muons] The distribution of the clock cycles after which the GridPix readout happened is shown in fig. [[fig:background:scintillator_clock_cycles]] on the right. The narrow peak at $\SI{255}{clock\;cycles}$ seems to be some kind of artifact, potentially a firmware bug. The corresponding GridPix data was investigated and there is nothing unusual about it (neither cluster properties nor individual events).
The source therefore remains unclear, but a physical origin is extremely unlikely: the peak is exactly one clock cycle wide (unrealistic for a physical process in this context) and sits coincidentally at exactly $\num{255}$ (~0xFF~ in hexadecimal), hinting towards a counting issue in the firmware. The rest of the distribution looks as expected, being more or less flat, corresponding to muons traversing at different distances from the readout. The real distribution does not start at exactly $\SI{0}{clock\;cycles}$ due to inherent processing delays and because even close tracks require some time to drift to the readout and activate the FADC. Further, geometric effects play a role: very close to the grid, only perfectly parallel tracks can achieve low clock cycles, while further away, tracks at different angles contribute to the same times.

The SiPM recorded $\num{4298}$ non-trivial triggers in the same time, which are shown in fig. [[fig:background:scintillator_clock_cycles]] on the left. [fn:estimation_of_tagged_muons_sipm] This distribution also looks more or less as expected, showing a peak towards low clock cycles (typical ionization and therefore similar times to accumulate enough charge to trigger) with a tail for less and less ionizing tracks. The same physical cutoff as in the veto paddle distribution is visible, corresponding to the drift time over the full $\SI{3}{cm}$ of the drift volume.

Both of these distributions and the physically motivated cutoffs in clock cycles motivate a scintillator veto threshold somewhere above $\SIrange{60}{70}{clock\;cycles}$. To be on the conservative side, and because random coincidences on the time scale of a few additional clock cycles are very unlikely (picking a slightly larger value implies a negligible dead time), a scintillator veto cut of $\SI{150}{clock\;cycles}$ was chosen. The resulting improvement of the background rate is shown in fig.
[[fig:background:background_rate_scinti_veto]], albeit only for the end of 2018 (Run-3) data, as the scintillator trigger was not working correctly in Run-2 (as mentioned in sec. [[#sec:cast:data_taking_woes]]). The biggest improvements can be seen in the $\SI{3}{keV}$ and $\SI{8}{keV}$ peaks, both of which are likely X-ray fluorescence ($\ce{Ar}_{Kα}$ at $\SI{3}{keV}$ & $\ce{Cu}_{Kα}$ at $\SI{8}{keV}$) and orthogonal muons ($>\SI{8}{keV}$). Arguably the improvements could be bigger, but the efficiency of the scintillator was not ideal, leaving some likely muon-induced X-rays untagged. Lastly, the scintillator coverage leaves large angular ranges in which events cannot be tagged at all. For a future detector, an almost $4π$ scintillator setup with correctly calibrated and tested scintillators would be an extremely valuable upgrade.

#+CAPTION: Clock cycle distributions of both scintillators for the end of 2018 data.
#+CAPTION: The data is filtered to all non-trivial triggers (non-zero and
#+CAPTION: less than \num{300}); individual random coincidences appear at
#+CAPTION: values up to \num{4095}, the value at which all triggers whose counter overran accumulate.
#+CAPTION: The origin of the peak at \SI{255}{clocks} in the data of the veto paddle is
#+CAPTION: unclear.
#+NAME: fig:background:scintillator_clock_cycles
[[~/phd/Figs/scintillators/scintillators_facet_main_run-1.pdf]]

#+CAPTION: Background rate based on the Run-3 CAST data, achieved by the addition of a scintillator veto cut
#+CAPTION: of $\SI{3.75}{μs}$ ($\num{150}$ clock cycles) for any cluster that initially passes the log likelihood
#+CAPTION: cut. The biggest improvements can be seen in the $\SI{3}{keV}$ and $\SI{8}{keV}$
#+CAPTION: peaks, both of which are likely X-ray fluorescence (Ar & Cu at these energies) and orthogonal
#+CAPTION: muons ($>\SI{8}{keV}$).
#+NAME: fig:background:background_rate_scinti_veto
[[~/phd/Figs/background/background_rate_crGold_scinti_run3.pdf]]

#+begin_comment
[INFO]:Dataset: Scinti
[INFO]: Integrated background rate in range: 0.0 .. 8.0: 1.5419e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 1.9274e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: Scinti
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 7.2827e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.2138e-05 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: Scinti
[INFO]: Integrated background rate in range: 4.0 .. 8.0: 2.3606e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 5.9015e-06 keV⁻¹·cm⁻²·s⁻¹
#+end_comment

[fn:estimation_of_tagged_muons] A ballpark estimate yields a coverage of about $\SI{35}{°}$ around the zenith. Assuming a $\cos²(θ)$ muon distribution, it covers about $\SI{68}{\%}$ of muons along that axis. With a rate of $\sim\SI{1}{cm⁻².min⁻¹}$, $\SI{1125}{h}$ of data with scintillator and $\SI{4.2}{cm²}$ of active area in front of the center GridPix, roughly $\num{194000}$ muons are expected. About $\num{70000}$ non-trivial triggers were recorded, corresponding to an efficiency of $\sim\SI{35}{\%}$.

[fn:estimation_of_tagged_muons_sipm] Estimating the number of expected muons for the SiPM is much more difficult, because there is little literature on the actual muon rate at angles close to $\SI{90}{°}$. The extended version of this thesis contains some calculations, which attempt to estimate the energy distribution and rate at these angles in the detector by numerically transporting muons from the upper atmosphere to the surface, taking into account relativistic effects, the changing atmospheric pressure and the related energy loss per distance.

**** TODOs for this section :noexport:
- [ ] *ADD SECTION ABOUT EXPECTATIONS OF THE VETOS. USING THE PLOTS HENDRIK ALREADY SHOWED IN HIS THESIS ABOUT IONIZATION* -> They are in the introduction of the detector chapter, sec. [[#sec:detector:scintillators]] in fig.
[[fig:detector:fadc_scintillators_explanation]] - [ ] *UPDATE PLOT OF SCINTILLATOR VETO* *TODO*: rewrite this paragraph in context of full chapter (first paragraph) -> in particular references to GridPix1 & language that is more theoretical than actually referring to the setup of the Septemboard detector. - [ ] *IMPORTANT*: These ideas *ARE* already explained in the detector chapter where the features are introduced! I think such explanations really should better be there, no? And here just explain what is done and why? -> take some of the explanations and language from here to sec. [[#sec:detector:scintillators]]! - [X] Show the clock cycle histograms here? Or before in a section? -> here. Done. - [ ] *PUT SCINTI CUT OFF INTO CONFIG.TOML* - [ ] *CHECK THE BACKGROUND RATE USING 60, 100, 150, 300 CLOCK CYCLES* -> if everything from 100 looks the same, choose 100. - [X] *INVESTIGATE THE >250 PEAK IN VETO PADDLE DATA BY CHECKING EVENTS THAT HAVE SUCH VALUES. DISPLAY AND PROPERTIES!* -> Done, see the extra section about it. - [X] *WHY IS THE FADC POST TRIGGER NOT VISIBLE IN THE DATA?* -> because post trig is unrelated to when the *trigger is sent* **** Generate background rate plot with scintillator vetoes :extended: Let's generate the plot, first for all data (effect of scinti only visible in 2018 though!): #+begin_src sh :results drawer plotBackgroundRate \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti.h5 \ --names "No vetoes" --names "No vetoes" --names "Scinti" --names "Scinti" \ --centerChip 3 \ --region crGold \ --title "Background rate from CAST data, incl. 
scinti veto" \ --showNumClusters \ --showTotalTime \ --topMargin 1.5 \ --energyDset energyFromCharge \ --outfile background_rate_crGold_scinti.pdf \ --outpath ~/phd/Figs/background/ \ --useTeX \ --quiet #+end_src #+RESULTS: :results: [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 0.0 .. 12.0: 2.3355e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 12.0: 1.9462e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 0.0 .. 12.0: 2.1413e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 12.0: 1.7844e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 0.5 .. 2.5: 6.1610e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 3.0805e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 0.5 .. 2.5: 5.9935e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 2.9968e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.1669e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.5931e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.0798e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.3997e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 0.0 .. 2.5: 8.9066e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 2.5: 3.5626e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 0.0 .. 2.5: 8.6722e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 2.5: 3.4689e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 4.0 .. 8.0: 2.6285e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 4.0 .. 
8.0: 6.5711e-06 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 4.0 .. 8.0: 2.3606e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 5.9015e-06 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 0.0 .. 8.0: 1.6591e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 2.0739e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 0.0 .. 8.0: 1.5419e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 1.9274e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 2.0 .. 8.0: 8.3039e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.3840e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 2.0 .. 8.0: 7.2827e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.2138e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:DataFrame with 7 columns and 120 rows: Idx Energy Rate totalTime RateErr Dataset yMin yMax dtype: float float constant float string float float 0 0 4.688 3318 0.8859 No vetoes 3.802 5.574 1 0.2 2.511 3318 0.6484 No vetoes 1.863 3.16 2 0.4 5.692 3318 0.9762 No vetoes 4.716 6.668 3 0.6 6.362 3318 1.032 No vetoes 5.33 7.394 4 0.8 5.86 3318 0.9905 No vetoes 4.869 6.85 5 1 5.86 3318 0.9905 No vetoes 4.869 6.85 6 1.2 3.014 3318 0.7103 No vetoes 2.303 3.724 7 1.4 2.846 3318 0.6903 No vetoes 2.156 3.536 8 1.6 2.846 3318 0.6903 No vetoes 2.156 3.536 9 1.8 3.348 3318 0.7487 No vetoes 2.6 4.097 10 2 1.507 3318 0.5023 No vetoes 1.005 2.009 11 2.2 1.842 3318 0.5553 No vetoes 1.286 2.397 12 2.4 1.005 3318 0.4101 No vetoes 0.5944 1.415 13 2.6 1.674 3318 0.5294 No vetoes 1.145 2.204 14 2.8 2.176 3318 0.6036 No vetoes 1.573 2.78 15 3 5.525 3318 0.9617 No vetoes 4.563 6.487 16 3.2 5.19 3318 0.9321 No vetoes 4.258 6.122 17 3.4 4.018 3318 0.8202 No vetoes 3.198 4.838 18 3.6 3.348 3318 0.7487 No vetoes 2.6 4.097 19 3.8 2.511 
3318 0.6484 No vetoes 1.863 3.16 [INFO]:INFO: storing plot in /home/basti/phd/Figs/background/background_rate_crGold_scinti.pdf [WARNING]: Printing total background time currently only supported for single datasets. shellCmd: command -v xelatex shell 7151> /home/basti/texlive/2022/bin/x86_64-linux/xelatex shellCmd: xelatex -output-directory /home/basti/phd/Figs/background /home/basti/phd/Figs/background/background_rate_crGold_scinti.tex :end: (old files: ~/phd/resources/background/autoGen/likelihood_cdl2018_Run2_crGold.h5 \ ~/phd/resources/background/autoGen/likelihood_cdl2018_Run3_crGold.h5 \ ~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run2_crGold_scinti.h5 \ ~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run3_crGold_scinti.h5 \ ) And now only for Run-3: #+begin_src sh :results drawer plotBackgroundRate \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti.h5 \ --names "No vetoes" --names "Scinti" \ --centerChip 3 \ --region crGold \ --title "Background rate from CAST data in Run-3, incl. scinti veto" \ --showNumClusters \ --showTotalTime \ --topMargin 1.5 \ --energyDset energyFromCharge \ --outfile background_rate_crGold_scinti_run3.pdf \ --outpath ~/phd/Figs/background/ \ --useTeX \ --quiet #+end_src #+RESULTS: :results: Manual rate = 2.0209(1359)e-05 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 0.0 .. 12.0: 2.4250(1631)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 12.0: 2.0209(1359)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.5271(1182)e-05 [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 0.0 .. 12.0: 1.8325(1418)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 
12.0: 1.5271(1182)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.0176(4069)e-05 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 0.5 .. 2.5: 6.0351(8138)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 3.0176(4069)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.7981(3918)e-05 [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 0.5 .. 2.5: 5.5962(7836)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 2.7981(3918)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.7067(2569)e-05 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.2180(1156)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.7067(2569)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.0727(2248)e-05 [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 0.5 .. 5.0: 9.327(1012)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.0727(2248)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.4236(3876)e-05 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 0.0 .. 2.5: 8.5589(9691)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 2.5: 3.4236(3876)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.2041(3750)e-05 [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 0.0 .. 2.5: 8.0103(9375)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 2.5: 3.2041(3750)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 6.858(1372)e-06 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 4.0 .. 8.0: 2.7432(5486)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 6.858(1372)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 4.938(1164)e-06 [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 4.0 .. 8.0: 1.9751(4655)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 4.938(1164)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.5179(1666)e-05 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 2.0 .. 
8.0: 9.1075(9997)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.5179(1666)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 9.510(1319)e-06
[INFO]:Dataset: Scinti
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 5.7059(7913)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 9.510(1319)e-06 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 2.1123(1702)e-05
[INFO]:Dataset: No vetoes
[INFO]: Integrated background rate in range: 0.0 .. 8.0: 1.6898(1362)e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 2.1123(1702)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 1.6597(1509)e-05
[INFO]:Dataset: Scinti
[INFO]: Integrated background rate in range: 0.0 .. 8.0: 1.3277(1207)e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 1.6597(1509)e-05 keV⁻¹·cm⁻²·s⁻¹
| Classifier | ε_eff | Scinti | FADC  | Septem | Line  | ε_total | Rate             |
| LnL        | 0.800 | false  | false | false  | false | 0.800   | 2.1123(1702)e-05 |
| LnL        | 0.800 | true   | false | false  | false | 0.800   | 1.6597(1509)e-05 |
[INFO]:DataFrame with 17 columns and 122 rows: Idx Energy Counts CountErr Rate totalTime RateErr Dataset yMin yMax File ε_total ε_eff Classifier Scinti FADC Septem Line dtype: float int float float constant float string float float string constant constant constant bool constant constant constant 0 0 8 2.8284271 4.3891781 1012.5915 1.5518088 No vetoes 2.8373693 5.940987 No vetoes 0.8 0.8 LnL false false false false 1 0.2 2 1.4142136 1.0972945 1012.5915 0.77590441 No vetoes 0.32139013 1.8731989 No vetoes 0.8 0.8 LnL false false false false 2 0.4 13 3.6055513 7.1324145 1012.5915 1.9781759 No vetoes 5.1542386 9.1105903 No vetoes 0.8 0.8 LnL false false false false 3 0.6 8 2.8284271 4.3891781 1012.5915 1.5518088 No vetoes 2.8373693 5.940987 No vetoes 0.8 0.8 LnL false false false false 4 0.8 11 3.3166248 6.03512 1012.5915 1.8196571 No vetoes 4.2154628 7.8547771 No vetoes 0.8 0.8 LnL false false false false 5 1 9 3 4.9378254 1012.5915 1.6459418 No vetoes
3.2918836 6.5837672 No vetoes 0.8 0.8 LnL false false false false 6 1.2 5 2.236068 2.7432363 1012.5915 1.2268126 No vetoes 1.5164238 3.9700489 No vetoes 0.8 0.8 LnL false false false false 7 1.4 7 2.6457513 3.8405309 1012.5915 1.4515842 No vetoes 2.3889466 5.2921151 No vetoes 0.8 0.8 LnL false false false false 8 1.6 4 2 2.1945891 1012.5915 1.0972945 No vetoes 1.0972945 3.2918836 No vetoes 0.8 0.8 LnL false false false false 9 1.8 4 2 2.1945891 1012.5915 1.0972945 No vetoes 1.0972945 3.2918836 No vetoes 0.8 0.8 LnL false false false false 10 2 2 1.4142136 1.0972945 1012.5915 0.77590441 No vetoes 0.32139013 1.8731989 No vetoes 0.8 0.8 LnL false false false false 11 2.2 3 1.7320508 1.6459418 1012.5915 0.95028494 No vetoes 0.69565686 2.5962267 No vetoes 0.8 0.8 LnL false false false false 12 2.4 2 1.4142136 1.0972945 1012.5915 0.77590441 No vetoes 0.32139013 1.8731989 No vetoes 0.8 0.8 LnL false false false false 13 2.6 4 2 2.1945891 1012.5915 1.0972945 No vetoes 1.0972945 3.2918836 No vetoes 0.8 0.8 LnL false false false false 14 2.8 7 2.6457513 3.8405309 1012.5915 1.4515842 No vetoes 2.3889466 5.2921151 No vetoes 0.8 0.8 LnL false false false false 15 3 13 3.6055513 7.1324145 1012.5915 1.9781759 No vetoes 5.1542386 9.1105903 No vetoes 0.8 0.8 LnL false false false false 16 3.2 10 3.1622777 5.4864727 1012.5915 1.734975 No vetoes 3.7514977 7.2214477 No vetoes 0.8 0.8 LnL false false false false 17 3.4 9 3 4.9378254 1012.5915 1.6459418 No vetoes 3.2918836 6.5837672 No vetoes 0.8 0.8 LnL false false false false 18 3.6 6 2.4494897 3.2918836 1012.5915 1.3439059 No vetoes 1.9479778 4.6357895 No vetoes 0.8 0.8 LnL false false false false 19 3.8 2 1.4142136 1.0972945 1012.5915 0.77590441 No vetoes 0.32139013 1.8731989 No vetoes 0.8 0.8 LnL false false false false 20 4 1 1 0.54864727 1012.5915 0.54864727 No vetoes 0 1.0972945 No vetoes 0.8 0.8 LnL false false false false 21 4.2 2 1.4142136 1.0972945 1012.5915 0.77590441 No vetoes 0.32139013 1.8731989 No vetoes 0.8 0.8 LnL 
false false false false 22 4.4 0 0 0 1012.5915 0 No vetoes 0 0 No vetoes 0.8 0.8 LnL false false false false 23 4.6 0 0 0 1012.5915 0 No vetoes 0 0 No vetoes 0.8 0.8 LnL false false false false 24 4.8 2 1.4142136 1.0972945 1012.5915 0.77590441 No vetoes 0.32139013 1.8731989 No vetoes 0.8 0.8 LnL false false false false 25 5 0 0 0 1012.5915 0 No vetoes 0 0 No vetoes 0.8 0.8 LnL false false false false 26 5.2 1 1 0.54864727 1012.5915 0.54864727 No vetoes 0 1.0972945 No vetoes 0.8 0.8 LnL false false false false 27 5.4 2 1.4142136 1.0972945 1012.5915 0.77590441 No vetoes 0.32139013 1.8731989 No vetoes 0.8 0.8 LnL false false false false 28 5.6 0 0 0 1012.5915 0 No vetoes 0 0 No vetoes 0.8 0.8 LnL false false false false 29 5.8 3 1.7320508 1.6459418 1012.5915 0.95028494 No vetoes 0.69565686 2.5962267 No vetoes 0.8 0.8 LnL false false false false 30 6 2 1.4142136 1.0972945 1012.5915 0.77590441 No vetoes 0.32139013 1.8731989 No vetoes 0.8 0.8 LnL false false false false 31 6.2 1 1 0.54864727 1012.5915 0.54864727 No vetoes 0 1.0972945 No vetoes 0.8 0.8 LnL false false false false 32 6.4 1 1 0.54864727 1012.5915 0.54864727 No vetoes 0 1.0972945 No vetoes 0.8 0.8 LnL false false false false 33 6.6 2 1.4142136 1.0972945 1012.5915 0.77590441 No vetoes 0.32139013 1.8731989 No vetoes 0.8 0.8 LnL false false false false 34 6.8 1 1 0.54864727 1012.5915 0.54864727 No vetoes 0 1.0972945 No vetoes 0.8 0.8 LnL false false false false 35 7 1 1 0.54864727 1012.5915 0.54864727 No vetoes 0 1.0972945 No vetoes 0.8 0.8 LnL false false false false 36 7.2 1 1 0.54864727 1012.5915 0.54864727 No vetoes 0 1.0972945 No vetoes 0.8 0.8 LnL false false false false 37 7.4 1 1 0.54864727 1012.5915 0.54864727 No vetoes 0 1.0972945 No vetoes 0.8 0.8 LnL false false false false 38 7.6 1 1 0.54864727 1012.5915 0.54864727 No vetoes 0 1.0972945 No vetoes 0.8 0.8 LnL false false false false 39 7.8 3 1.7320508 1.6459418 1012.5915 0.95028494 No vetoes 0.69565686 2.5962267 No vetoes 0.8 0.8 LnL false false 
(… raw per-bin output of the background rate tool elided: one row per 0.2 keV bin
with counts, √N uncertainties and normalized rates for the "No vetoes" and
"Scinti" cases …)
[INFO]: storing plot in /home/basti/phd/Figs/background/background_rate_crGold_scinti_run3.pdf
[WARNING]: Printing total background time currently only supported for single datasets.
Generated: /home/basti/phd/Figs/background/background_rate_crGold_scinti_run3.pdf
:end:

(old files: ~/phd/resources/background/autoGen/likelihood_cdl2018_Run3_crGold.h5 \
 ~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run3_crGold_scinti.h5 \
)

**** Angle estimate covered by veto paddle :extended:

Given a veto paddle size of $\SI{42}{cm}$ by $\SI{82}{cm}$ we can
calculate the rough arc angle covered by the paddle for muons hitting
detector material. The exact calculation would of course need to take
into account the extent of the detector, the fact that there is copper
in the vacuum tube in front of the detector, etc. For now let's just
look at the center of the detector volume and see what is covered by
the paddle.
The lead shielding is $\SI{10}{cm}$ above the detector. There are maybe a bit
less than another $\SI{10}{cm}$ above the lead shielding to the scintillator,
and the detector center is also a bit less than $\SI{10}{cm}$ away from the
lead shielding above ($\SI{78}{mm}$ gas volume diameter, with the rest of the
housing and a couple of centimeters spacing to the lead shielding). Assuming
$\SI{30}{cm}$ from the center to the veto scintillator, the angle to the sides
is:
\[
α = \arctan\left(\frac{\SI{21}{cm}}{\SI{30}{cm}}\right) = \SI{35}{°}
\]
#+begin_src nim
import math
echo arctan(21.0/30.0).radToDeg
#+end_src

#+RESULTS:
: 34.99202019855866

so about $\SI{35}{°}$ of coverage to either side. This implies $\SI{70}{°}$
coverage above the detector. In the long direction it is even more, due to the
length of the paddle. Given a roughly $\cos²(θ)$ distribution of the muon
background, this should cover

#+begin_src nim
import numericalnim, math
proc cos2(θ: float, ctx: NumContext[float, float]): float =
  cos(θ)*cos(θ)
proc integrate(start, stop: float): float =
  result = simpson(cos2, start, stop)
echo "Integrate from -π/2 to π/2 = ", integrate(-PI/2.0, PI/2.0), " = π/2"
echo "Integrate from -35° to 35° = ", integrate(-35.0.degToRad, 35.0.degToRad)
echo "Fraction of total flux *in this dimension* = ", integrate(-35.0.degToRad, 35.0.degToRad) / (PI/2.0)
#+end_src

#+RESULTS:
| Integrate | from | -π/2  | to   | π/2  | =    | 1.570796326794897 | = | π/2 |
| Integrate | from | -35°  | to   | 35°  | =    | 1.080711548592458 |   |     |
| Fraction  | of   | total | flux | *in  | this | dimension*        | = | 0.688002340059947 |

So in this 2D projection the paddle already covers about $\SI{68}{\%}$ of the
expected muon flux, assuming a perfect trigger efficiency of the scintillator.

**** Estimate expected number of muons in data taking :extended:

The detection area of relevance for the center chip can roughly be taken as
$\SI{4.2}{cm²}$ ($\SI{1.4}{cm}$ wide chip, $\SI{3}{cm}$ height orthogonal to
the sky).
Assuming a muon rate of 1 cm⁻²·min⁻¹ yields for $\SI{1125}{h}$ of background
time:

#+begin_src nim
import unchained
let rate = 1.cm⁻²•min⁻¹
let time = 1125.h
let area = 4.2.cm²
echo "Expected number of muon counts = ", rate * time * area
#+end_src

#+RESULTS:
: Expected number of muon counts = 283500 UnitLess

So about \num{283500} expected muons in front of the detector. Assuming I can
trust the output of the ~scintiInfo~ tool (see next section):

#+begin_quote
Main triggers: 69243
FADC triggers 211683
Relevant triggers 142440
Open shutter time in s: 13.510434
Scinti1 rate: 0.3050257995269533 s^-1
Scinti2 rate: 57.21503839180888 s^-1
#+end_quote

we get about 69k veto paddle triggers. Accounting for the fact that the paddle
only sees about $\SI{68}{\%}$ of all muons, that puts its efficiency at
$\sim\SI{35.8}{\%}$. Not great, but also not horrible. Given that we know our
threshold was likely a bit high, this seems a plausible ballpark estimate.

**** Generate plots of scintillator data, study of 255 peak :extended:

The tool we use to plot the scintillator data is ~Tools/scintiInfo~. It
generates its output in the ~out/<plot path>/~ directory:

#+begin_src sh :dir ~/CastData/ExternCode/TimepixAnalysis/Tools/scintiInfo/
WRITE_PLOT_CSV=true USE_TEX=true ./scintiInfo -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --plotPath ~/phd/Figs/scintillators/
#+end_src

which generates 3 plots for each scintillator: all data, all non-trivial data,
and all hits > 0 and < 300. Now let's investigate what the events of the veto
paddle look like in which we have a peak at >250 clock cycles.
#+begin_src nim :tangle code/scintillator_data_plots.nim
import std / [sequtils, strformat]
import ingrid / [tos_helpers, ingrid_types]
import pkg / [ggplotnim, nimhdf5, datamancer, ginger]

proc allData(h5f: H5File): DataFrame =
  result = h5f.readDsets(chipDsets = some((chip: 3, dsets: TpaIngridDsetKinds.mapIt(it.toDset()))),
                         commonDsets = @["fadcReadout", "szint1ClockInt", "szint2ClockInt"])

proc plotEvents(df: DataFrame, run: int, numEvents: int, plotCount: var int,
                outpath: string, fadcRun: ReconstructedFadcRun) =
  let showFadc = fadcRun.eventNumber.len > 0
  for (tup, subDf) in groups(df.group_by("eventNumber")):
    if numEvents > 0 and plotCount > numEvents: break
    let dfEv = subDf.dfToSeptemEvent()
    let eventNumber = tup[0][1].toInt
    let pltSeptem = ggplot(dfEv, aes(x, y, color = "charge"), backend = bkCairo) +
      geom_point(size = 1.0) +
      xlim(0, 768) + ylim(0, 768) +
      scale_x_continuous() + scale_y_continuous() +
      # draw the outline of the Septemboard chip layout
      geom_linerange(aes = aes(y = 0, xMin = 128, xMax = 640)) +
      geom_linerange(aes = aes(y = 256, xMin = 0, xMax = 768)) +
      geom_linerange(aes = aes(y = 512, xMin = 0, xMax = 768)) +
      geom_linerange(aes = aes(y = 768, xMin = 128, xMax = 640)) +
      geom_linerange(aes = aes(x = 0, yMin = 256, yMax = 512)) +
      geom_linerange(aes = aes(x = 256, yMin = 256, yMax = 512)) +
      geom_linerange(aes = aes(x = 512, yMin = 256, yMax = 512)) +
      geom_linerange(aes = aes(x = 768, yMin = 256, yMax = 512)) +
      geom_linerange(aes = aes(x = 128, yMin = 0, yMax = 256)) +
      geom_linerange(aes = aes(x = 384, yMin = 0, yMax = 256)) +
      geom_linerange(aes = aes(x = 640, yMin = 0, yMax = 256)) +
      geom_linerange(aes = aes(x = 128, yMin = 512, yMax = 768)) +
      geom_linerange(aes = aes(x = 384, yMin = 512, yMax = 768)) +
      geom_linerange(aes = aes(x = 640, yMin = 512, yMax = 768)) +
      margin(top = 1.5) +
      ggtitle(&"Septem event of event {eventNumber} and run {run}.")
    if not showFadc:
      pltSeptem + ggsave(&"{outpath}/septemEvents/septemEvent_run_{run}_event_{eventNumber}.pdf")
    else:
      # prepare FADC plot, create canvas and place both next to one another
      let eventIdx = fadcRun.eventNumber.find(eventNumber)
      let dfFadc = toDf({ "x" : toSeq(0 ..< 2560),
                          "data" : fadcRun.fadcData[eventIdx, _].squeeze })
      let pltFadc = ggplot(dfFadc, aes("x", "data"), backend = bkCairo) +
        geom_line() +
        geom_point(color = "black", alpha = 0.1) +
        ggtitle(&"Fadc signal of event {eventNumber} and run {run}")
      var img = initViewport(wImg = 1200, hImg = 600, backend = bkCairo)
      img.layout(2, rows = 1)
      img.embedAsRelative(0, ggcreate(pltSeptem).view)
      img.embedAsRelative(1, ggcreate(pltFadc).view)
      var area = img.addViewport()
      let title = &"Septemboard event and FADC signal for event {eventNumber}"
      let text = area.initText(c(0.5, 0.05, ukRelative), title, goText, taCenter,
                               font = some(font(16.0)))
      area.addObj text
      img.children.add area
      img.draw(&"{outpath}/septemEvents/septem_fadc_run_{run}_event_{eventNumber}.pdf")
    inc plotCount

proc plotSzinti(h5f: H5File, df: DataFrame, cutFn: FormulaNode,
                title: string, outpath: string, fname: string,
                numEventPlots: int, plotEvents: bool,
                showFadc: bool = false) =
  let toGather = df.getKeys().filterIt(it notin ["runNumber", "eventNumber"])
  let dfF = df.filter(cutFn)
    .filter(f{`eccentricity` < 10.0})
    .gather(toGather, "key", "value")
  echo dfF
  ggplot(dfF, aes("value", fill = "key")) +
    facet_wrap("key", scales = "free") +
    geom_histogram(position = "identity", binBy = "subset") +
    legendPosition(0.90, 0.0) +
    ggtitle(title) +
    ggsave(&"{outpath}/{fname}", width = 2000, height = 2000)
  if plotEvents:
    var plotCount = 0
    var fadcRun: ReconstructedFadcRun
    for (tup, subDf) in dfF.group_by(@["runNumber"]).groups:
      if numEventPlots > 0 and plotCount > numEventPlots: break
      let run = tup[0][1].toInt
      if showFadc:
        fadcRun = h5f.readRecoFadcRun(run)
      echo "Run ", run
      let events = subDf["eventNumber", int].toSeq1D
      let dfS = getSeptemDataFrame(h5f, run)
        .filter(f{int: `eventNumber` in events})
      echo dfS
      plotEvents(dfS, run, numEventPlots, plotCount, outpath, fadcRun)

proc main(fname: string,
          peakAt255 = false,
          vetoPaddle = false,
          sipm = false,
          sipmXrayLike = false,
          plotEvents = true) =
  let h5f = H5open(fname, "r")
  #let fileInfo = h5f.getFileInfo()
  let df = h5f.allData()
  # first the veto paddle around the 255 peak (plot all events)
  if peakAt255:
    h5f.plotSzinti(df, f{int: `szint2ClockInt` > 250 and `szint2ClockInt` < 265},
                   "Cluster properties of all events with veto paddle trigger clock cycles = 255",
                   "Figs/scintillators/peakAt255",
                   "cluster_properties_peak_at_255.pdf",
                   -1, plotEvents)
  # now the veto paddle generally
  if vetoPaddle:
    h5f.plotSzinti(df, f{int: `szint2ClockInt` > 0 and `szint2ClockInt` < 200},
                   "Cluster properties of all events with veto paddle > 0 && < 200",
                   "Figs/scintillators/veto_paddle/",
                   "cluster_properties_veto_paddle_less200.pdf",
                   200, plotEvents)
  # finally the SiPM
  if sipm:
    h5f.plotSzinti(df, f{int: `szint1ClockInt` > 0 and `szint1ClockInt` < 200},
                   "Cluster properties of all events with SiPM > 0 && < 200",
                   "Figs/scintillators/sipm/",
                   "cluster_properties_sipm_less200.pdf",
                   200, plotEvents)
  if sipmXrayLike:
    h5f.plotSzinti(df, f{float: `szint1ClockInt` > 0 and `szint1ClockInt` < 200 and
                          `energyFromCharge` > 7.0 and `energyFromCharge` < 9.0 and
                          `length` < 7.0},
                   "Cluster properties of all events with SiPM > 0 && < 200, 7 keV < energy < 9 keV, length < 7mm",
                   "Figs/scintillators/sipmXrayLike/",
                   "cluster_properties_sipm_less200_7_energy_9_length_7.pdf",
                   -1, plotEvents, showFadc = true)
  discard h5f.close()

when isMainModule:
  import cligen
  dispatch main
#+end_src

-> Conclusion from all this: I don't see any real cause for these kinds of
events. They are all relatively busy background events of mostly standard
tracks. Maybe these are the events that follow those in which the FPGA has a
hiccup and doesn't take the FADC trigger? Or it may be something completely
different.
At $255 · \SI{25}{ns} = \SI{6.375}{μs}$ I don't see any physical source. More
important are two things:
1. a physical source is *extremely unlikely* to be so narrow that it _always_
   gives 255 clock cycles.
2. 255 is $2^8 - 1$ (i.e. it's ~1111 1111~), which implies there is a high
   chance it is a bug in the counting logic, where maybe the counter stopped
   after 8 bits instead of the full 12 bits ($2^{12} - 1 = 4095$) or something
   along those lines.
A physical event that always appears at exactly 255 clock cycles is rather
unlikely! See fig. [[fig:scintillators:peak_at_255_cluster_properties]] for the
cluster properties and the ~Figs/scintillators/peakAt255/septemEvents~
directory for all the event displays of these events.

#+CAPTION: Overview of the cluster properties of all the events with a trigger at
#+CAPTION: $\SI{255}{clock\;cycles}$. Nothing out of the ordinary for background data here.
#+NAME: fig:scintillators:peak_at_255_cluster_properties
[[~/phd/Figs/scintillators/peakAt255/cluster_properties.pdf]]

-> Conclusion regarding regular veto paddle plots and events: The plots look
pretty much like what we expect, namely mainly $\SI{14}{mm}$ long tracks in
the histogram, and looking at the events the story continues in the same way.
It's essentially all just tracks that pass through horizontally. It's still
useful though, as sometimes the statistical process of ionization leaves blobs
that could be mistaken for X-rays, in particular in corner-cutting tracks.
-> As such this is also a good dataset of candidates for corner-cutting cases,
as we know the data consists of very pure "good" tracks.

#+CAPTION: Overview of the cluster properties of all the events with a trigger at
#+CAPTION: $>\SI{0}{clock\;cycles}$ and $<\SI{200}{clock\;cycles}$ for the veto paddle.
#+CAPTION: As one might expect it's dominated by full-chip long tracks (peak at $\SI{14}{mm}$).
#+NAME: fig:scintillators:veto_paddle_cluster_properties
[[~/phd/Figs/scintillators/veto_paddle/cluster_properties_veto_paddle_less200.pdf]]

- [ ] *SIMILARLY DO THE SAME FOR SIPM*
  -> Extend the code above such that we also do plots for the SiPM. So
  properties & events. In addition to that, add the FADC events! Read the FADC
  data and place it next to the SiPM event displays. What do we see?
  -> In particular this is at the same time an extremely valuable "dataset" of
  events where we expect the FADC to have different shapes. Then when comparing
  the distribution of those events with 55Fe events, do we see a difference?

-> Conclusions regarding the SiPM: The data is a bit surprising in some ways,
but explained by looking at it. There are more 14 mm-like tracks in the data
than I would have assumed. But looking at the individual events there is a
common trend of tracks that are steep enough to pass through the SiPM (it's
much bigger than one chip of $\SI{14}{mm}$ after all!), but shallow enough to
still cover more or less the full chip! Some events *do* have clusters that are
*very* dense and likely orthogonal muons though. In addition there is a large
number of extremely busy events in these plots. As there are few events that
are really X-ray like in nature, looking at the FADC data for *most* of them is
likely not very interesting. But we should cut on the energy
($\SIrange{7}{9}{keV}$), on having triggered, and on a length of maybe
$\SI{7}{mm}$ or so. What remains is a target for event displays including the
FADC.

Also, looking at the event displays with the FADC signal of those events that
are not very track-like and with energies between $\SI{7}{keV}$ and
$\SI{9}{keV}$ shows that there are indeed quite a few whose rise and fall
times are *significantly longer* than the typical $\mathcal{O}(<100)$ clock
cycles. This implies there really is a 'long time' accumulation going on. Our
expectation was about $\SI{1.5}{μs}$ until the full track is accumulated, so
this might actually be _an_ explanation.
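As a sanity check on that $\sim\SI{1.5}{μs}$ number: a muon ionizing the gas
along the full detector height needs the entire drift time of the volume
before the last electrons arrive at the grid. A minimal sketch (Python purely
for illustration; the drift velocity is the value quoted in the FADC veto
section below):

```python
# One-line estimate of the 'full track accumulation' time: a muon ionizing
# the gas along the entire detector height needs the full drift time of the
# volume before the last electrons reach the grid.
v_drift = 2.28   # drift velocity [cm/us], value quoted in the FADC veto section
height = 3.0     # gas volume height [cm]

t_full_us = height / v_drift
print(f"full-volume drift time ~ {t_full_us:.2f} us")   # ~1.32 us
```

which is of the same order as the quoted $\sim\SI{1.5}{μs}$ expectation.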
#+CAPTION: Overview of the cluster properties of all the events with a trigger at
#+CAPTION: $>\SI{0}{clock\;cycles}$ and $<\SI{200}{clock\;cycles}$ of the SiPM.
#+CAPTION: Surprisingly the data also contains significant amounts of $\SI{14}{mm}$ events.
#+NAME: fig:scintillators:sipm_cluster_properties
[[~/phd/Figs/scintillators/sipm/cluster_properties_sipm_less200.pdf]]

**** Muon calculations for expected SiPM rates :extended:

- [ ] Place our calcs here as no export?
  -> Refers to calculations of muon rates etc. that are partially mentioned in
  the theory section!
  -> Yes, put here!

*** FADC veto
:PROPERTIES:
:CUSTOM_ID: sec:background:fadc_veto
:END:

As previously mentioned in [[#sec:detector:fadc]], the FADC serves not only as a
trigger for the readout and as a reference time for the scintillator triggers.
Because of its high temporal resolution it can in principle also act as a veto
of its own, by providing insight into the longitudinal cluster shape. A
cluster drifting towards the readout and finally through the grid induces a
voltage measured by the FADC. As such, the length of the FADC signal is a
function of the time it takes the cluster to drift 'through' the grid. The
kind of orthogonal muon events that should be triggered by the SiPM, as
explained in the previous section [[#sec:background:scinti_veto]], should for
example also be detectable by the FADC in the form of signal rise times longer
than typical for an X-ray.

From gaseous detector physics theory we can estimate the typical cluster
sizes, and therefore the expected signal rise times for an X-ray, if we know
the gas mixture of our detector.
For the $\SI{1050}{mbar}$, $97.7/2.3\,\%$ $\ce{Ar}/\ce{iC4H10}$ mixture used
with a $\SI{500}{V.cm⁻¹}$ drift field in the Septemboard detector at CAST, the
relevant parameters are [fn:gas_properties_pyboltz]
- drift velocity $v = \SI{2.28}{cm.μs⁻¹}$
- transverse diffusion $D_T = \SI{670}{μm.cm^{-1/2}}$
- longitudinal diffusion $D_L = \SI{270}{μm.cm^{-1/2}}$
As the detector has a height of $\SI{3}{cm}$, we expect a typical X-ray
interacting close to the cathode to form a cluster of
$\sqrt{\SI{3}{cm}}·\SI{670}{μm.cm^{-1/2}} \approx \SI{1160.5}{μm}$ transverse
and $\sqrt{\SI{3}{cm}}·\SI{270}{μm.cm^{-1/2}} \approx \SI{467.5}{μm}$
longitudinal extent, where this corresponds to a $1σ$ environment. To get a
measure for the full cluster size, a rough estimate of an upper limit is a
$3σ$ distance away from the center in each direction. For the transverse size
this leads to about $\SI{7}{mm}$, and in the longitudinal direction to about
$\SI{2.8}{mm}$. From the CAST \cefe data we see a peak at around $\SI{6}{mm}$
of transverse cluster size along the longer axis, which matches our
expectation well (see appendix
[[#sec:appendix:fadc_veto_empirical_cluster_length]] for the length data).

From the drift velocity and the upper bound on the longitudinal cluster size
we can therefore also compute an equivalent effective drift time _seen by the
FADC_. This comes out to be
\[
t = \SI{2.8}{mm} / \SI{22.8}{mm.μs⁻¹} = \SI{0.123}{μs}
\]
or about $\SI{123}{ns}$, equivalent to $\SI{123}{clock\;cycles}$ of the FADC
clock. Alternatively, we can compute the most likely rise time based on the
known peak of the cluster length, $\SI{6}{mm}$, and the ratio of the
transverse and longitudinal diffusion, $D_T / D_L \approx 2.5$, to end up at a
time of $\frac{\SI{6}{mm}}{2.5} / \SI{22.8}{mm.μs⁻¹} = \SI{105}{ns}$.

Fig. [[fig:background:fadc_rise_time]] shows the measured rise time for the
\cefe calibration data compared to the background data. The peak of the
calibration rise time is at about $\SI{55}{ns}$.
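The chain of estimates above can be reproduced numerically. A minimal sketch
(Python purely for illustration; the tooling of this thesis is written in
Nim), using the gas parameters listed above:

```python
import math

# Gas parameters as quoted above (values taken from the text)
v_drift = 2.28    # drift velocity [cm/us]
D_T = 670.0       # transverse diffusion [um / sqrt(cm)]
D_L = 270.0       # longitudinal diffusion [um / sqrt(cm)]
height = 3.0      # drift distance from the cathode [cm]

# 1 sigma diffusion widths after drifting the full detector height
sigma_T = D_T * math.sqrt(height)    # [um]
sigma_L = D_L * math.sqrt(height)    # [um]

# +-3 sigma as a rough upper bound on the full cluster extent
size_T = 6 * sigma_T / 1000.0        # [mm]
size_L = 6 * sigma_L / 1000.0        # [mm]

# equivalent drift time 'seen' by the FADC (1 GHz clock: 1 ns per cycle)
t_rise_ns = size_L / (v_drift * 10.0) * 1000.0   # v_drift converted to mm/us

print(f"transverse cluster size   ~ {size_T:.1f} mm")     # ~7.0 mm
print(f"longitudinal cluster size ~ {size_L:.1f} mm")     # ~2.8 mm
print(f"rise time upper estimate  ~ {t_rise_ns:.0f} ns")  # ~123 ns
```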
While this is almost a factor of 2 smaller than the theoretical values
mentioned before, this is expected: first, those values are an upper bound,
and second, the "rise time" here is not the full drift time, because our
definition starts $\SI{10}{\%}$ below the baseline and stops $\SI{2.5}{\%}$
before the peak, shortening the measured time (ref. sec.
[[#sec:fadc:definition_baseline_rise_fall_time]]). At the same time we see that
the background data has a much longer tail towards higher clock cycle counts,
as one expects. This implies that we can perform a cut on the rise time and
utilize it as an additional veto. The \cefe data allows us to define a cut
based on a known and desired signal efficiency. This is done by identifying a
desired percentile on both ends of the distribution of a cleaned X-ray
dataset. It is important to note that the cut values need to be determined
separately for each of the FADC amplifier settings used. This is done
automatically based on the known runs corresponding to each setting.

#+CAPTION: KDE of the rise time of the FADC signals in the \cefe and background data
#+CAPTION: of the CAST Run-3 dataset. The X-ray data is a single peak with a mean of
#+CAPTION: about $\SI{55}{ns}$ while the background distribution is extremely wide,
#+CAPTION: motivating a veto based on this data.
#+NAME: fig:background:fadc_rise_time
[[~/phd/Figs/FADC/fadc_riseTime_kde_signal_vs_background_run3.pdf]]

For the signal decay time we apply a simpler cut, with less of a physical
motivation. As the decay time is highly dependent on the resistance and
capacitance properties of the grid, the high voltage and the FADC amplifier
settings, we simply introduce a conservative cut in such a range as to avoid
cutting away any X-rays. Any background removed is just a bonus.

Finally, the FADC is only used as a veto if the individual spectrum is not
considered noisy (see sec. [[#sec:calibration:fadc_noise]]).
A spectrum is deemed noisy if a peak finding algorithm detects 4 or more
dominant local peaks and, in addition, the skewness of the total signal is
larger than $\num{-0.4}$. The skewness threshold is an empirical cutoff, as
good FADC signals typically lie at larger negative skewness values (due to
their signal extending towards negative values). See appendix
[[#sec:appendix:background:fadc]] for a scatter plot of FADC rise times and
skewness values. A very low percentage of false positives (good signals
appearing 'noisy') is accepted for the certainty of never using truly noisy
FADC events as a veto (as they would represent a random coincidence with the
event on the center chip).

Applying the FADC veto with a target percentile of 1 from both the lower and
upper end ($1^{\text{st}}$ and $99^{\text{th}}$ percentiles; thus a
$\SI{98}{\%}$ signal efficiency), in addition to the scintillator veto
presented in the previous section, results in the background rate seen in fig.
[[fig:background:background_rate_fadc_veto]]. Different percentiles (and
associated efficiencies) were tested for their impact on the expected axion
limit. Percentiles lower than 99 did not yield a significant improvement in
the background suppression over the resulting signal efficiency loss. This is
visible from the sharp edge of the rise time for X-rays (green) in fig.
[[fig:background:fadc_rise_time]] when comparing its overlap with the
background (purple).

The combination of these methods leads to a background rate of
$\SI{9.0305(7277)e-06}{keV⁻¹.cm⁻².s⁻¹}$ in the energy range
$\SIrange{2}{8}{keV}$, and between $\SIrange{4}{8}{keV}$ even
$\SI{4.5739(6343)e-06}{keV⁻¹.cm⁻².s⁻¹}$. The veto brings improvements across
the whole energy range in which the FADC triggers; interestingly, in some
cases even below that range. The events removed in the first two bins are
events with a big spark on the upper right GridPix, which induced an X-ray
like low energy cluster on the central chip and triggered the FADC.
In the range around $\SI{8}{keV}$, likely candidates for removed events are
orthogonal muons that did not trigger the SiPM (but have a rise time
incompatible with X-rays).

#+begin_comment
[INFO]:Dataset: FADC
[INFO]: Integrated background rate in range: 4.0 .. 8.0: 1.8296(2537)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 4.5739(6343)e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: FADC
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 5.4183(4366)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 9.0305(7277)e-06 keV⁻¹·cm⁻²·s⁻¹
#+end_comment

#+CAPTION: Comparison of pure $\ln\mathcal{L}$ cuts, additional scintillator veto and
#+CAPTION: finally also FADC veto at an efficiency of $\SI{98}{\%}$. In the range
#+CAPTION: $\SIrange{2}{8}{keV}$ a background rate of $\SI{9.0305(7277)e-06}{keV⁻¹.cm⁻².s⁻¹}$ is
#+CAPTION: achieved and between $\SIrange{4}{8}{keV}$ even $\SI{4.5739(6343)e-06}{keV⁻¹.cm⁻².s⁻¹}$.
#+NAME: fig:background:background_rate_fadc_veto
[[~/phd/Figs/background/background_rate_crGold_scinti_fadc.pdf]]

Generally, appendix [[#sec:appendix:background:fadc]] contains more figures of
the FADC data: rise times, fall times, how the different FADC settings affect
these, and an efficiency curve of a one-sided cut on the rise time for the
signal efficiency and background suppression.

**** Thoughts on the FADC as a ToA tool for orthogonal muons :extended:

- [ ] *THIS STUFF HAS ALREADY BEEN MENTIONED IN CONTEXT OF SCINTILLATORS!*
  -> Rewrite it! However, the scinti part does not explain it as well and
  leaves out some pieces (transverse distribution along Z).
  -> But the explanation is better in [[#sec:detector:scintillators]].
  -> Most of this below does not belong here. It belongs into the motivation,
  which was mentioned before. _Some_ of it may be useful in an interpretation
  of the application of the FADC veto.
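As an aside, the percentile-based determination of the rise-time cut described
in the section above can be sketched in a few lines. This is Python purely for
illustration, with *synthetic* rise-time data (a Gaussian around
$\SI{55}{ns}$ is an assumption here, not the real CAST distribution):

```python
import numpy as np

# Synthetic stand-in for a cleaned 55Fe rise-time distribution [ns].
# Shape and parameters are assumptions for illustration only.
rng = np.random.default_rng(42)
rise_xray = rng.normal(loc=55.0, scale=8.0, size=10_000)

# 1st and 99th percentiles -> cut window keeping 98 % of the signal
lo, hi = np.percentile(rise_xray, [1.0, 99.0])

def fadc_veto(rise_time_ns: float) -> bool:
    """True if an event is rejected by the rise-time veto."""
    return not (lo <= rise_time_ns <= hi)

# signal efficiency of the cut, evaluated on the calibration data itself
eff = np.mean((rise_xray >= lo) & (rise_xray <= hi))
print(f"cut window [{lo:.1f}, {hi:.1f}] ns, signal efficiency {eff:.3f}")
```

On real data, two such percentile values per FADC amplifier setting define the
veto window.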
While the FADC serves as a very useful trigger to read out events, decreases
the likelihood of multiple independent clusters on the center chip and
provides a reference time for triggers of the scintillators, its time
information can theoretically provide an additional veto. Even for an almost
full 4π coverage of scintillators around the detector setup, there is exactly
one part that is never covered: the window of the detector towards the magnet
bore. If a muon traverses along this path it is unlikely to be shielded.
[fn:shielding_magnet_other_end] In particular during solar trackings of a
helioscope experiment this should contribute a more significant fraction of
background, due to the distribution of muons at surface level following a
roughly $\cos²(θ)$ distribution ($θ$ being the zenith angle).

Muons entering in such a way traverse the gas volume orthogonally to the
readout plane. Their projection on the readout is thus roughly circular (the
same as an X-ray). The speed of the muon compared to the time scale of gas
diffusion and the clock speed of the readout electronics means that a muon
ionizes the gas along its path effectively instantly. Two distinctions can be
made between X-rays and such muons:
1. The effectively instant ionization implies that such a cluster has a
   'duration' equivalent to the gas volume height divided by the drift
   velocity. For a typical GridPix setup of height $\SI{3}{cm}$ with an
   argon/isobutane gas mixture, this implies times of
   $\mathcal{O}(\SI{2}{μs})$. Compared to that, the duration of an X-ray is
   roughly the equivalent of its geometrical size in the readout plane, which
   for the same conditions is $\mathcal{O}(\SI{3}{mm})$, expressed as a time
   via the drift velocity. Thus, if the duration of a cluster is reasonably
   well known, this can easily separate the two types of clusters.
2. Each ionized electron undergoes diffusion along its path to the readout
   plane.
For these tracks, those electrons produced far away from the readout acquire
large diffusion, whereas those produced close to it very little. Assuming a
constant ionization per distance, this implies the muon track is shaped like a
cone: the electrons arriving at the readout first lie almost exactly on the
muon path, while later ones show more and more diffusion.

This latter effect is invisible in a setup using the kind of FADC employed in
this detector combined with the Timepix1-based GridPix, because the FADC
records no spatial information. However, in a detector based on GridPixes
using a Timepix3, which allows for ToA readout at the same time as ~ToT~, this
effect may be visible; for this reason it is important to keep in mind for the
future.

For an argon/isobutane mixture at $\SI{1050}{mbar}$ the energy deposition of
most muons will be in the range of $\SIrange{8}{10}{keV}$, which in any case
is outside the main region of interest for axion searches.

Note: for a detector with an almost perfect 4π coverage, the scintillators
*behind* the detector would of course trigger for such muons. Indeed, it is
likely that this information alone would already be enough to remove such
events. Still, a discrimination based on time information yields a more
certain result than a large scintillator might, which (even if to a small
degree) does introduce random coincidences.

[fn:percentiles] Lower percentiles than 99 did not yield a significant
improvement in the background suppression over the resulting signal efficiency
loss. This is visible due to the sharp edge of the rise time for X-rays
(green) in fig. [[fig:background:fadc_rise_time]] when comparing its overlap
with the background (purple).

[fn:shielding_magnet_other_end] Of course a properly calibrated scintillator
behind the magnet should detect any muons traversing orthogonally. However, it
is likely a $4π$ veto system would be used in a coincidence setup between
opposite scintillators. A detector with good time resolution can also detect
such events in principle.
Despite this, it might be a decent idea to include some lead shielding on the opposite end of future helioscopes to provide some shielding against muons coming from that direction. In particular a future experiment that tracks the Sun to much higher altitudes will see significantly more of such muon background ($\cos²(θ)$!).

[fn:gas_properties_pyboltz] The properties were calculated with ~Magboltz~ using ~Nimboltz~, a Nim interface to ~Magboltz~. The code can be found here: https://github.com/Vindaar/TimepixAnalysis/Tools/septemboardCastGasNimboltz Further, the numbers used here are for a temperature of $\SI{26}{°C}$, slightly above room temperature due to the heating of the gas by the Septemboard itself.

**** TODOs for above section [/] :noexport:

- [ ] *PUT THE IDEAS FROM THIS SECTION INTO AN 'OUTLOOK FOR BACKGROUND IMPROVEMENTS' SECTION?*
  -> This could be a useful section to extend the ideas and provide motivation for
- [ ] create an appendix for FADC veto information? Could put stuff like
  - rise time vs skewness
  - rise time KDE, fall time KDE
  - explanation of noise stuff?
- [ ] *REALLY MAKE SURE ALL THE POINTS MENTIONED HERE ARE ACTUALLY EXPLAINED PROPERLY BEFORE FOR EXAMPLE*.
- [X] *MAKE SURE Chosen 99-th percentile AS VETO IS MENTIONED*
  -> Mention others were also tried, but improvements too small vs signal loss.
- [X] *FINISH EXPLANATION AND WHAT WE DO WITH FADC REGARDING VETO WHEN SEEN 2018 DATA*
- [ ] *FIX LINK IN FOOTNOTE!!*
  -> More importantly, fix the code to not use PyBoltz anymore!
- [X] *MENTION SOMETHING ABOUT 3 KEV COPPER OR ORTHOGONAL MUONS?*
  -> In terms of plot.
- [X] *FOUND A HUGE BUG IN THE FADC VETO CODE*
  -> The veto did not exclude anything *ABOVE* a cut... The comparison was wrong. In particular in that light look at the veto properly again. I put in some ~reasonable numbers for the cuts somewhere between 1-th (99-th) and 5-th (95-th) percentile of the data.
- [X] big question is still energy dependence of rise time, fall time
  -> There is not much. Lower energy events *going by escape peak clusters* are _slightly_ wider, but that is quite possibly just a side effect of having worse data.
- [X] Investigate FADC rise and fall time now that it is fixed based on:
  - [ ] FADC settings in Run-2
    -> There seems to be a pretty small effect generally as the difference is not very big between the two datasets. However, we haven't actually looked at the data split by setting.
  - [X] energy of X-rays (calibration mainly, escape & photo)
  - [ ] Look into CDL data!
- [X] *INSERT SKEWNESS PLOT INTO APPENDIX*
  -> In ~statusAndProgress~ sec. [[#sec:fadc:noisy_events_and_fadc_veto]]
- [X] *INSERT FADC RISE TIME EXAMPLE*
  - [ ] *UPDATE THAT PLOT*
- [ ] *GET THE CORRECT NUMBERS FROM SIMULATION*
  -> In particular provide a few more words about how it compares to the detected lengths of the calibration clusters, which drops above 6 significantly. Once the final number is in, this will be easier to write.
- [ ] *INSERT APPENDIX WITH LENGTH PLOTS*
- [ ] *FOR MY OWN CURIOSITY*:
  -> What happens if we allow application of only the FADC as veto? What kind of background rate can we achieve then?

**** About FADC rise time from theory and reality [/] :extended:

The theoretical numbers of about 100 ns no longer match our experimental numbers very well after the recent changes. However, this is mainly due to three aspects:
1. The offsets from the baseline and to the peak shorten the rise time by at least (10 + 2.5) %, so our peak at ~55 ns corresponds to ~61.875 ns.
2. Realistically, the estimate also does not take into account that the bottom of the signal is likely still part of the actual signal. Once the trailing electrons of the cloud traverse the grid, the bulk has already been deposited on the pixels and the discharge of the grid is underway. The signal is a convolution of the induction of the still incoming electrons and the discharge caused by the previous ones.
3.
Our calculations use a rather arbitrary $3σ$ size of the clusters (compared to the RMS). This is of course an extremely rough estimate for the cluster size, which will mainly be correct for the upper outliers.

All three aspects combined likely explain the difference quite well. It would be interesting to add to our simulation of the ~rmsTransverse~ distributions an additional step that attempts to simulate the ionization & discharge convolution, to see what the actual physical signals might look like! For that it might be enough to assume that each electron, once it passes through a grid hole, is "instantaneously" amplified by the gas gain, leading to a fixed amount of induction. Would be interesting.
- [ ] Do this!

**** Note on events being removed below FADC trigger :extended:

See sec. [[file:~/org/Doc/StatusAndProgress.org::#sec:fadc:removed_events_lower_than_fadc_threshold]] in ~statusAndProgress~. There we look at precisely those events that are removed despite, in theory, having no reason to be removed.

**** Further notes about FADC :extended:

For more detailed notes about the FADC discussed above, see my development notes about the FADC here [[file:~/org/Doc/StatusAndProgress.org::#sec:fadc]].
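The order-of-magnitude numbers from the discussion above (muon cluster 'duration' versus X-ray cluster size, and the offset-corrected rise time peak) can be cross-checked with a few lines. This is only a rough sketch: the drift velocity of ~1.5 cm/μs is an *assumed* typical value for an argon/isobutane mixture here, not the ~Magboltz~ result actually used in this work.

```python
# Rough cross-check of the cluster-duration argument and the rise time
# correction. The drift velocity is an ASSUMED typical value for an
# argon/isobutane mixture; the numbers in the text come from Magboltz.
V_DRIFT_MM_PER_US = 15.0  # assumed ~1.5 cm/µs

def cluster_duration_us(extent_mm: float) -> float:
    """Time span over which a cluster's electrons arrive at the grid."""
    return extent_mm / V_DRIFT_MM_PER_US

muon_us = cluster_duration_us(30.0)  # muon track spans the full 3 cm drift gap
xray_us = cluster_duration_us(3.0)   # ~3 mm geometric X-ray cluster size

# Undo the (10 + 2.5) % = 12.5 % baseline/peak offsets of the ~55 ns peak:
corrected_rise_ns = 55.0 * 1.125

print(muon_us, xray_us, corrected_rise_ns)  # -> 2.0 0.2 61.875
```

With these assumptions the muon cluster is an order of magnitude 'longer' in time than an X-ray cluster, which is the basis of the separation argued for above.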
**** Generate plot for background rate including FADC :extended: Let's generate the plot: #+begin_src sh :results drawer plotBackgroundRate \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_vQ_0.99.h5 \ --names "No vetoes" --names "No vetoes" \ --names "Scinti" --names "Scinti" \ --names "FADC" --names "FADC" \ --centerChip 3 \ --region crGold \ --title "Background rate from CAST data, incl. scinti and FADC veto" \ --showNumClusters \ --showTotalTime \ --topMargin 1.5 \ --energyDset energyFromCharge \ --outfile background_rate_crGold_scinti_fadc.pdf \ --outpath ~/phd/Figs/background/ \ --useTeX \ --quiet #+end_src #+RESULTS: :results: Manual rate = 1.50411(6641)e-05 [INFO]:Dataset: FADC [INFO]: Integrated background rate in range: 0.0 .. 12.0: 1.80494(7969)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 12.0: 1.50411(6641)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.97910(7618)e-05 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 0.0 .. 12.0: 2.37492(9141)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 12.0: 1.97910(7618)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.81490(7295)e-05 [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 0.0 .. 12.0: 2.17789(8754)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 12.0: 1.81490(7295)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.8675(2246)e-05 [INFO]:Dataset: FADC [INFO]: Integrated background rate in range: 0.5 .. 
2.5: 5.7350(4492)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 2.8675(2246)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.5008(2482)e-05 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 0.5 .. 2.5: 7.0016(4963)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 3.5008(2482)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.4304(2457)e-05 [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 0.5 .. 2.5: 6.8609(4913)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 3.4304(2457)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.0328(1261)e-05 [INFO]:Dataset: FADC [INFO]: Integrated background rate in range: 0.5 .. 5.0: 9.1478(5673)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.0328(1261)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.7053(1454)e-05 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.21736(6545)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.7053(1454)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.5020(1399)e-05 [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.12589(6294)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.5020(1399)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.0821(2083)e-05 [INFO]:Dataset: FADC [INFO]: Integrated background rate in range: 0.0 .. 2.5: 7.7053(5207)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 2.5: 3.0821(2083)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.8421(2325)e-05 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 0.0 .. 2.5: 9.6052(5813)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 2.5: 3.8421(2325)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.7717(2304)e-05 [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 0.0 .. 2.5: 9.4293(5760)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 
2.5: 3.7717(2304)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 4.5739(6343)e-06 [INFO]:Dataset: FADC [INFO]: Integrated background rate in range: 4.0 .. 8.0: 1.8296(2537)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 4.5739(6343)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 6.7729(7718)e-06 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 4.0 .. 8.0: 2.7092(3087)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 6.7729(7718)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 6.1572(7359)e-06 [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 4.0 .. 8.0: 2.4629(2944)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 6.1572(7359)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 9.0305(7277)e-06 [INFO]:Dataset: FADC [INFO]: Integrated background rate in range: 2.0 .. 8.0: 5.4183(4366)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 9.0305(7277)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.37217(8970)e-05 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 2.0 .. 8.0: 8.2330(5382)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.37217(8970)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.19039(8355)e-05 [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 2.0 .. 8.0: 7.1423(5013)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.19039(8355)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.57888(8333)e-05 [INFO]:Dataset: FADC [INFO]: Integrated background rate in range: 0.0 .. 8.0: 1.26310(6666)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 1.57888(8333)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.12423(9666)e-05 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 0.0 .. 8.0: 1.69938(7732)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 2.12423(9666)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.97910(9330)e-05 [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 0.0 .. 
8.0: 1.58328(7464)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 1.97910(9330)e-05 keV⁻¹·cm⁻²·s⁻¹ | Classifier | ε_eff | Scinti | FADC | Septem | Line | ε_total | Rate | | LnL | 0.800 | true | true | false | false | 0.784 | 1.57888(8333)e-05 | | LnL | 0.800 | false | false | false | false | 0.800 | 2.12423(9666)e-05 | | LnL | 0.800 | true | false | false | false | 0.800 | 1.97910(9330)e-05 | [INFO]:DataFrame with 17 columns and 183 rows: Idx Energy Counts CountErr Rate totalTime RateErr Dataset yMin yMax File ε_total ε_eff Classifier Scinti FADC Septem Line dtype: float int float float constant float string float float string float constant constant bool bool constant constant 0 0 14 3.7416574 2.4628757 3158.0066 0.65823122 FADC 1.8046445 3.1211069 FADC 0.784 0.8 LnL true true false false 1 0.2 8 2.8284271 1.4073575 3158.0066 0.49757603 FADC 0.90978151 1.9049336 FADC 0.784 0.8 LnL true true false false 2 0.4 34 5.8309519 5.9812695 3158.0066 1.0257793 FADC 4.9554903 7.0070488 FADC 0.784 0.8 LnL true true false false 3 0.6 35 5.9160798 6.1571892 3158.0066 1.0407549 FADC 5.1164343 7.1979442 FADC 0.784 0.8 LnL true true false false 4 0.8 32 5.6568542 5.6294302 3158.0066 0.99515206 FADC 4.6342781 6.6245822 FADC 0.784 0.8 LnL true true false false 5 1 28 5.2915026 4.9257514 3158.0066 0.93087951 FADC 3.9948719 5.8566309 FADC 0.784 0.8 LnL true true false false 6 1.2 14 3.7416574 2.4628757 3158.0066 0.65823122 FADC 1.8046445 3.1211069 FADC 0.784 0.8 LnL true true false false 7 1.4 14 3.7416574 2.4628757 3158.0066 0.65823122 FADC 1.8046445 3.1211069 FADC 0.784 0.8 LnL true true false false 8 1.6 11 3.3166248 1.9351166 3158.0066 0.58345961 FADC 1.351657 2.5185762 FADC 0.784 0.8 LnL true true false false 9 1.8 15 3.8729833 2.6387954 3158.0066 0.68133404 FADC 1.9574613 3.3201294 FADC 0.784 0.8 LnL true true false false 10 2 6 2.4494897 1.0555182 3158.0066 0.43091348 FADC 0.62460467 1.4864316 FADC 0.784 0.8 LnL true true false false 11 2.2 6 
2.4494897 1.0555182 3158.0066 0.43091348 FADC 0.62460467 1.4864316 FADC 0.784 0.8 LnL true true false false 12 2.4 2 1.4142136 0.35183938 3158.0066 0.24878801 FADC 0.10305137 0.6006274 FADC 0.784 0.8 LnL true true false false 13 2.6 7 2.6457513 1.2314378 3158.0066 0.46543976 FADC 0.76599809 1.6968776 FADC 0.784 0.8 LnL true true false false 14 2.8 4 2 0.70367877 3158.0066 0.35183938 FADC 0.35183938 1.0555182 FADC 0.784 0.8 LnL true true false false 15 3 25 5 4.3979923 3158.0066 0.87959846 FADC 3.5183938 5.2775908 FADC 0.784 0.8 LnL true true false false 16 3.2 16 4 2.8147151 3158.0066 0.70367877 FADC 2.1110363 3.5183938 FADC 0.784 0.8 LnL true true false false 17 3.4 16 4 2.8147151 3158.0066 0.70367877 FADC 2.1110363 3.5183938 FADC 0.784 0.8 LnL true true false false 18 3.6 11 3.3166248 1.9351166 3158.0066 0.58345961 FADC 1.351657 2.5185762 FADC 0.784 0.8 LnL true true false false 19 3.8 9 3 1.5832772 3158.0066 0.52775908 FADC 1.0555182 2.1110363 FADC 0.784 0.8 LnL true true false false 20 4 2 1.4142136 0.35183938 3158.0066 0.24878801 FADC 0.10305137 0.6006274 FADC 0.784 0.8 LnL true true false false 21 4.2 3 1.7320508 0.52775908 3158.0066 0.30470185 FADC 0.22305723 0.83246092 FADC 0.784 0.8 LnL true true false false 22 4.4 0 0 0 3158.0066 0 FADC 0 0 FADC 0.784 0.8 LnL true true false false 23 4.6 0 0 0 3158.0066 0 FADC 0 0 FADC 0.784 0.8 LnL true true false false 24 4.8 3 1.7320508 0.52775908 3158.0066 0.30470185 FADC 0.22305723 0.83246092 FADC 0.784 0.8 LnL true true false false 25 5 1 1 0.17591969 3158.0066 0.17591969 FADC 0 0.35183938 FADC 0.784 0.8 LnL true true false false 26 5.2 1 1 0.17591969 3158.0066 0.17591969 FADC 0 0.35183938 FADC 0.784 0.8 LnL true true false false 27 5.4 2 1.4142136 0.35183938 3158.0066 0.24878801 FADC 0.10305137 0.6006274 FADC 0.784 0.8 LnL true true false false 28 5.6 3 1.7320508 0.52775908 3158.0066 0.30470185 FADC 0.22305723 0.83246092 FADC 0.784 0.8 LnL true true false false 29 5.8 8 2.8284271 1.4073575 3158.0066 0.49757603 FADC 
0.90978151 1.9049336 FADC 0.784 0.8 LnL true true false false 30 6 2 1.4142136 0.35183938 3158.0066 0.24878801 FADC 0.10305137 0.6006274 FADC 0.784 0.8 LnL true true false false 31 6.2 5 2.236068 0.87959846 3158.0066 0.39336839 FADC 0.48623007 1.2729669 FADC 0.784 0.8 LnL true true false false 32 6.4 3 1.7320508 0.52775908 3158.0066 0.30470185 FADC 0.22305723 0.83246092 FADC 0.784 0.8 LnL true true false false 33 6.6 3 1.7320508 0.52775908 3158.0066 0.30470185 FADC 0.22305723 0.83246092 FADC 0.784 0.8 LnL true true false false 34 6.8 1 1 0.17591969 3158.0066 0.17591969 FADC 0 0.35183938 FADC 0.784 0.8 LnL true true false false 35 7 3 1.7320508 0.52775908 3158.0066 0.30470185 FADC 0.22305723 0.83246092 FADC 0.784 0.8 LnL true true false false 36 7.2 2 1.4142136 0.35183938 3158.0066 0.24878801 FADC 0.10305137 0.6006274 FADC 0.784 0.8 LnL true true false false 37 7.4 2 1.4142136 0.35183938 3158.0066 0.24878801 FADC 0.10305137 0.6006274 FADC 0.784 0.8 LnL true true false false 38 7.6 1 1 0.17591969 3158.0066 0.17591969 FADC 0 0.35183938 FADC 0.784 0.8 LnL true true false false 39 7.8 4 2 0.70367877 3158.0066 0.35183938 FADC 0.35183938 1.0555182 FADC 0.784 0.8 LnL true true false false 40 8 3 1.7320508 0.52775908 3158.0066 0.30470185 FADC 0.22305723 0.83246092 FADC 0.784 0.8 LnL true true false false 41 8.2 8 2.8284271 1.4073575 3158.0066 0.49757603 FADC 0.90978151 1.9049336 FADC 0.784 0.8 LnL true true false false 42 8.4 8 2.8284271 1.4073575 3158.0066 0.49757603 FADC 0.90978151 1.9049336 FADC 0.784 0.8 LnL true true false false 43 8.6 12 3.4641016 2.1110363 3158.0066 0.60940369 FADC 1.5016326 2.72044 FADC 0.784 0.8 LnL true true false false 44 8.8 16 4 2.8147151 3158.0066 0.70367877 FADC 2.1110363 3.5183938 FADC 0.784 0.8 LnL true true false false 45 9 20 4.472136 3.5183938 3158.0066 0.78673678 FADC 2.7316571 4.3051306 FADC 0.784 0.8 LnL true true false false 46 9.2 17 4.1231056 2.9906348 3158.0066 0.72533547 FADC 2.2652993 3.7159702 FADC 0.784 0.8 LnL true true false 
false 47 9.4 15 3.8729833 2.6387954 3158.0066 0.68133404 FADC 1.9574613 3.3201294 FADC 0.784 0.8 LnL true true false false 48 9.6 13 3.6055513 2.286956 3158.0066 0.63428747 FADC 1.6526685 2.9212435 FADC 0.784 0.8 LnL true true false false 49 9.8 14 3.7416574 2.4628757 3158.0066 0.65823122 FADC 1.8046445 3.1211069 FADC 0.784 0.8 LnL true true false false 50 10 7 2.6457513 1.2314378 3158.0066 0.46543976 FADC 0.76599809 1.6968776 FADC 0.784 0.8 LnL true true false false 51 10.2 5 2.236068 0.87959846 3158.0066 0.39336839 FADC 0.48623007 1.2729669 FADC 0.784 0.8 LnL true true false false 52 10.4 3 1.7320508 0.52775908 3158.0066 0.30470185 FADC 0.22305723 0.83246092 FADC 0.784 0.8 LnL true true false false 53 10.6 5 2.236068 0.87959846 3158.0066 0.39336839 FADC 0.48623007 1.2729669 FADC 0.784 0.8 LnL true true false false 54 10.8 2 1.4142136 0.35183938 3158.0066 0.24878801 FADC 0.10305137 0.6006274 FADC 0.784 0.8 LnL true true false false 55 11 4 2 0.70367877 3158.0066 0.35183938 FADC 0.35183938 1.0555182 FADC 0.784 0.8 LnL true true false false 56 11.2 3 1.7320508 0.52775908 3158.0066 0.30470185 FADC 0.22305723 0.83246092 FADC 0.784 0.8 LnL true true false false 57 11.4 1 1 0.17591969 3158.0066 0.17591969 FADC 0 0.35183938 FADC 0.784 0.8 LnL true true false false 58 11.6 0 0 0 3158.0066 0 FADC 0 0 FADC 0.784 0.8 LnL true true false false 59 11.8 1 1 0.17591969 3158.0066 0.17591969 FADC 0 0.35183938 FADC 0.784 0.8 LnL true true false false 60 12 0 0 0 3158.0066 0 FADC 0 0 FADC 0.784 0.8 LnL true true false false 61 0 26 5.0990195 4.573912 3158.0066 0.89701794 No vetoes 3.6768941 5.4709299 No vetoes 0.8 0.8 LnL false false false false 62 0.2 14 3.7416574 2.4628757 3158.0066 0.65823122 No vetoes 1.8046445 3.1211069 No vetoes 0.8 0.8 LnL false false false false 63 0.4 34 5.8309519 5.9812695 3158.0066 1.0257793 No vetoes 4.9554903 7.0070488 No vetoes 0.8 0.8 LnL false false false false 64 0.6 37 6.0827625 6.5090286 3158.0066 1.0700777 No vetoes 5.4389509 7.5791063 No vetoes 
0.8 0.8 LnL false false false false 65 0.8 33 5.7445626 5.8053499 3158.0066 1.0105817 No vetoes 4.7947682 6.8159315 No vetoes 0.8 0.8 LnL false false false false 66 1 33 5.7445626 5.8053499 3158.0066 1.0105817 No vetoes 4.7947682 6.8159315 No vetoes 0.8 0.8 LnL false false false false 67 1.2 19 4.3588989 3.3424742 3158.0066 0.76681616 No vetoes 2.575658 4.1092903 No vetoes 0.8 0.8 LnL false false false false 68 1.4 17 4.1231056 2.9906348 3158.0066 0.72533547 No vetoes 2.2652993 3.7159702 No vetoes 0.8 0.8 LnL false false false false 69 1.6 17 4.1231056 2.9906348 3158.0066 0.72533547 No vetoes 2.2652993 3.7159702 No vetoes 0.8 0.8 LnL false false false false 70 1.8 19 4.3588989 3.3424742 3158.0066 0.76681616 No vetoes 2.575658 4.1092903 No vetoes 0.8 0.8 LnL false false false false 71 2 9 3 1.5832772 3158.0066 0.52775908 No vetoes 1.0555182 2.1110363 No vetoes 0.8 0.8 LnL false false false false 72 2.2 10 3.1622777 1.7591969 3158.0066 0.55630691 No vetoes 1.20289 2.3155038 No vetoes 0.8 0.8 LnL false false false false 73 2.4 5 2.236068 0.87959846 3158.0066 0.39336839 No vetoes 0.48623007 1.2729669 No vetoes 0.8 0.8 LnL false false false false 74 2.6 10 3.1622777 1.7591969 3158.0066 0.55630691 No vetoes 1.20289 2.3155038 No vetoes 0.8 0.8 LnL false false false false 75 2.8 13 3.6055513 2.286956 3158.0066 0.63428747 No vetoes 1.6526685 2.9212435 No vetoes 0.8 0.8 LnL false false false false 76 3 31 5.5677644 5.4535105 3158.0066 0.97947939 No vetoes 4.4740311 6.4329899 No vetoes 0.8 0.8 LnL false false false false 77 3.2 26 5.0990195 4.573912 3158.0066 0.89701794 No vetoes 3.6768941 5.4709299 No vetoes 0.8 0.8 LnL false false false false 78 3.4 25 5 4.3979923 3158.0066 0.87959846 No vetoes 3.5183938 5.2775908 No vetoes 0.8 0.8 LnL false false false false 79 3.6 17 4.1231056 2.9906348 3158.0066 0.72533547 No vetoes 2.2652993 3.7159702 No vetoes 0.8 0.8 LnL false false false false 80 3.8 11 3.3166248 1.9351166 3158.0066 0.58345961 No vetoes 1.351657 2.5185762 No vetoes 
0.8 0.8 LnL false false false false 81 4 3 1.7320508 0.52775908 3158.0066 0.30470185 No vetoes 0.22305723 0.83246092 No vetoes 0.8 0.8 LnL false false false false 82 4.2 5 2.236068 0.87959846 3158.0066 0.39336839 No vetoes 0.48623007 1.2729669 No vetoes 0.8 0.8 LnL false false false false 83 4.4 2 1.4142136 0.35183938 3158.0066 0.24878801 No vetoes 0.10305137 0.6006274 No vetoes 0.8 0.8 LnL false false false false 84 4.6 0 0 0 3158.0066 0 No vetoes 0 0 No vetoes 0.8 0.8 LnL false false false false 85 4.8 3 1.7320508 0.52775908 3158.0066 0.30470185 No vetoes 0.22305723 0.83246092 No vetoes 0.8 0.8 LnL false false false false 86 5 1 1 0.17591969 3158.0066 0.17591969 No vetoes 0 0.35183938 No vetoes 0.8 0.8 LnL false false false false 87 5.2 2 1.4142136 0.35183938 3158.0066 0.24878801 No vetoes 0.10305137 0.6006274 No vetoes 0.8 0.8 LnL false false false false 88 5.4 5 2.236068 0.87959846 3158.0066 0.39336839 No vetoes 0.48623007 1.2729669 No vetoes 0.8 0.8 LnL false false false false 89 5.6 4 2 0.70367877 3158.0066 0.35183938 No vetoes 0.35183938 1.0555182 No vetoes 0.8 0.8 LnL false false false false 90 5.8 10 3.1622777 1.7591969 3158.0066 0.55630691 No vetoes 1.20289 2.3155038 No vetoes 0.8 0.8 LnL false false false false 91 6 3 1.7320508 0.52775908 3158.0066 0.30470185 No vetoes 0.22305723 0.83246092 No vetoes 0.8 0.8 LnL false false false false 92 6.2 5 2.236068 0.87959846 3158.0066 0.39336839 No vetoes 0.48623007 1.2729669 No vetoes 0.8 0.8 LnL false false false false 93 6.4 4 2 0.70367877 3158.0066 0.35183938 No vetoes 0.35183938 1.0555182 No vetoes 0.8 0.8 LnL false false false false 94 6.6 5 2.236068 0.87959846 3158.0066 0.39336839 No vetoes 0.48623007 1.2729669 No vetoes 0.8 0.8 LnL false false false false 95 6.8 4 2 0.70367877 3158.0066 0.35183938 No vetoes 0.35183938 1.0555182 No vetoes 0.8 0.8 LnL false false false false 96 7 3 1.7320508 0.52775908 3158.0066 0.30470185 No vetoes 0.22305723 0.83246092 No vetoes 0.8 0.8 LnL false false false false 97 7.2 3 
1.7320508 0.52775908 3158.0066 0.30470185 No vetoes 0.22305723 0.83246092 No vetoes 0.8 0.8 LnL false false false false 98 7.4 4 2 0.70367877 3158.0066 0.35183938 No vetoes 0.35183938 1.0555182 No vetoes 0.8 0.8 LnL false false false false 99 7.6 2 1.4142136 0.35183938 3158.0066 0.24878801 No vetoes 0.10305137 0.6006274 No vetoes 0.8 0.8 LnL false false false false 100 7.8 5 2.236068 0.87959846 3158.0066 0.39336839 No vetoes 0.48623007 1.2729669 No vetoes 0.8 0.8 LnL false false false false 101 8 4 2 0.70367877 3158.0066 0.35183938 No vetoes 0.35183938 1.0555182 No vetoes 0.8 0.8 LnL false false false false 102 8.2 15 3.8729833 2.6387954 3158.0066 0.68133404 No vetoes 1.9574613 3.3201294 No vetoes 0.8 0.8 LnL false false false false 103 8.4 12 3.4641016 2.1110363 3158.0066 0.60940369 No vetoes 1.5016326 2.72044 No vetoes 0.8 0.8 LnL false false false false 104 8.6 13 3.6055513 2.286956 3158.0066 0.63428747 No vetoes 1.6526685 2.9212435 No vetoes 0.8 0.8 LnL false false false false 105 8.8 19 4.3588989 3.3424742 3158.0066 0.76681616 No vetoes 2.575658 4.1092903 No vetoes 0.8 0.8 LnL false false false false 106 9 21 4.5825757 3.6943135 3158.0066 0.80616531 No vetoes 2.8881482 4.5004788 No vetoes 0.8 0.8 LnL false false false false 107 9.2 24 4.8989795 4.2220726 3158.0066 0.86182696 No vetoes 3.3602457 5.0838996 No vetoes 0.8 0.8 LnL false false false false 108 9.4 17 4.1231056 2.9906348 3158.0066 0.72533547 No vetoes 2.2652993 3.7159702 No vetoes 0.8 0.8 LnL false false false false 109 9.6 13 3.6055513 2.286956 3158.0066 0.63428747 No vetoes 1.6526685 2.9212435 No vetoes 0.8 0.8 LnL false false false false 110 9.8 15 3.8729833 2.6387954 3158.0066 0.68133404 No vetoes 1.9574613 3.3201294 No vetoes 0.8 0.8 LnL false false false false 111 10 9 3 1.5832772 3158.0066 0.52775908 No vetoes 1.0555182 2.1110363 No vetoes 0.8 0.8 LnL false false false false 112 10.2 7 2.6457513 1.2314378 3158.0066 0.46543976 No vetoes 0.76599809 1.6968776 No vetoes 0.8 0.8 LnL false false 
false false 113 10.4 4 2 0.70367877 3158.0066 0.35183938 No vetoes 0.35183938 1.0555182 No vetoes 0.8 0.8 LnL false false false false 114 10.6 6 2.4494897 1.0555182 3158.0066 0.43091348 No vetoes 0.62460467 1.4864316 No vetoes 0.8 0.8 LnL false false false false 115 10.8 3 1.7320508 0.52775908 3158.0066 0.30470185 No vetoes 0.22305723 0.83246092 No vetoes 0.8 0.8 LnL false false false false 116 11 6 2.4494897 1.0555182 3158.0066 0.43091348 No vetoes 0.62460467 1.4864316 No vetoes 0.8 0.8 LnL false false false false 117 11.2 3 1.7320508 0.52775908 3158.0066 0.30470185 No vetoes 0.22305723 0.83246092 No vetoes 0.8 0.8 LnL false false false false 118 11.4 1 1 0.17591969 3158.0066 0.17591969 No vetoes 0 0.35183938 No vetoes 0.8 0.8 LnL false false false false 119 11.6 3 1.7320508 0.52775908 3158.0066 0.30470185 No vetoes 0.22305723 0.83246092 No vetoes 0.8 0.8 LnL false false false false 120 11.8 1 1 0.17591969 3158.0066 0.17591969 No vetoes 0 0.35183938 No vetoes 0.8 0.8 LnL false false false false 121 12 0 0 0 3158.0066 0 No vetoes 0 0 No vetoes 0.8 0.8 LnL false false false false 122 0 26 5.0990195 4.573912 3158.0066 0.89701794 Scinti 3.6768941 5.4709299 Scinti 0.8 0.8 LnL true false false false 123 0.2 13 3.6055513 2.286956 3158.0066 0.63428747 Scinti 1.6526685 2.9212435 Scinti 0.8 0.8 LnL true false false false 124 0.4 34 5.8309519 5.9812695 3158.0066 1.0257793 Scinti 4.9554903 7.0070488 Scinti 0.8 0.8 LnL true false false false 125 0.6 37 6.0827625 6.5090286 3158.0066 1.0700777 Scinti 5.4389509 7.5791063 Scinti 0.8 0.8 LnL true false false false 126 0.8 33 5.7445626 5.8053499 3158.0066 1.0105817 Scinti 4.7947682 6.8159315 Scinti 0.8 0.8 LnL true false false false 127 1 33 5.7445626 5.8053499 3158.0066 1.0105817 Scinti 4.7947682 6.8159315 Scinti 0.8 0.8 LnL true false false false 128 1.2 19 4.3588989 3.3424742 3158.0066 0.76681616 Scinti 2.575658 4.1092903 Scinti 0.8 0.8 LnL true false false false 129 1.4 16 4 2.8147151 3158.0066 0.70367877 Scinti 2.1110363 
3.5183938 Scinti 0.8 0.8 LnL true false false false 130 1.6 17 4.1231056 2.9906348 3158.0066 0.72533547 Scinti 2.2652993 3.7159702 Scinti 0.8 0.8 LnL true false false false 131 1.8 19 4.3588989 3.3424742 3158.0066 0.76681616 Scinti 2.575658 4.1092903 Scinti 0.8 0.8 LnL true false false false 132 2 8 2.8284271 1.4073575 3158.0066 0.49757603 Scinti 0.90978151 1.9049336 Scinti 0.8 0.8 LnL true false false false 133 2.2 9 3 1.5832772 3158.0066 0.52775908 Scinti 1.0555182 2.1110363 Scinti 0.8 0.8 LnL true false false false 134 2.4 4 2 0.70367877 3158.0066 0.35183938 Scinti 0.35183938 1.0555182 Scinti 0.8 0.8 LnL true false false false 135 2.6 10 3.1622777 1.7591969 3158.0066 0.55630691 Scinti 1.20289 2.3155038 Scinti 0.8 0.8 LnL true false false false 136 2.8 8 2.8284271 1.4073575 3158.0066 0.49757603 Scinti 0.90978151 1.9049336 Scinti 0.8 0.8 LnL true false false false 137 3 27 5.1961524 4.7498317 3158.0066 0.91410554 Scinti 3.8357262 5.6639372 Scinti 0.8 0.8 LnL true false false false 138 3.2 23 4.7958315 4.0461529 3158.0066 0.84368121 Scinti 3.2024717 4.8898341 Scinti 0.8 0.8 LnL true false false false 139 3.4 22 4.6904158 3.8702332 3158.0066 0.8251365 Scinti 3.0450967 4.6953697 Scinti 0.8 0.8 LnL true false false false 140 3.6 12 3.4641016 2.1110363 3158.0066 0.60940369 Scinti 1.5016326 2.72044 Scinti 0.8 0.8 LnL true false false false 141 3.8 10 3.1622777 1.7591969 3158.0066 0.55630691 Scinti 1.20289 2.3155038 Scinti 0.8 0.8 LnL true false false false 142 4 3 1.7320508 0.52775908 3158.0066 0.30470185 Scinti 0.22305723 0.83246092 Scinti 0.8 0.8 LnL true false false false 143 4.2 4 2 0.70367877 3158.0066 0.35183938 Scinti 0.35183938 1.0555182 Scinti 0.8 0.8 LnL true false false false 144 4.4 2 1.4142136 0.35183938 3158.0066 0.24878801 Scinti 0.10305137 0.6006274 Scinti 0.8 0.8 LnL true false false false 145 4.6 0 0 0 3158.0066 0 Scinti 0 0 Scinti 0.8 0.8 LnL true false false false 146 4.8 3 1.7320508 0.52775908 3158.0066 0.30470185 Scinti 0.22305723 0.83246092 Scinti 
0.8 0.8 LnL true false false false 147 5 1 1 0.17591969 3158.0066 0.17591969 Scinti 0 0.35183938 Scinti 0.8 0.8 LnL true false false false 148 5.2 2 1.4142136 0.35183938 3158.0066 0.24878801 Scinti 0.10305137 0.6006274 Scinti 0.8 0.8 LnL true false false false 149 5.4 3 1.7320508 0.52775908 3158.0066 0.30470185 Scinti 0.22305723 0.83246092 Scinti 0.8 0.8 LnL true false false false 150 5.6 4 2 0.70367877 3158.0066 0.35183938 Scinti 0.35183938 1.0555182 Scinti 0.8 0.8 LnL true false false false 151 5.8 9 3 1.5832772 3158.0066 0.52775908 Scinti 1.0555182 2.1110363 Scinti 0.8 0.8 LnL true false false false 152 6 2 1.4142136 0.35183938 3158.0066 0.24878801 Scinti 0.10305137 0.6006274 Scinti 0.8 0.8 LnL true false false false 153 6.2 5 2.236068 0.87959846 3158.0066 0.39336839 Scinti 0.48623007 1.2729669 Scinti 0.8 0.8 LnL true false false false 154 6.4 4 2 0.70367877 3158.0066 0.35183938 Scinti 0.35183938 1.0555182 Scinti 0.8 0.8 LnL true false false false 155 6.6 5 2.236068 0.87959846 3158.0066 0.39336839 Scinti 0.48623007 1.2729669 Scinti 0.8 0.8 LnL true false false false 156 6.8 3 1.7320508 0.52775908 3158.0066 0.30470185 Scinti 0.22305723 0.83246092 Scinti 0.8 0.8 LnL true false false false 157 7 3 1.7320508 0.52775908 3158.0066 0.30470185 Scinti 0.22305723 0.83246092 Scinti 0.8 0.8 LnL true false false false 158 7.2 3 1.7320508 0.52775908 3158.0066 0.30470185 Scinti 0.22305723 0.83246092 Scinti 0.8 0.8 LnL true false false false 159 7.4 4 2 0.70367877 3158.0066 0.35183938 Scinti 0.35183938 1.0555182 Scinti 0.8 0.8 LnL true false false false 160 7.6 2 1.4142136 0.35183938 3158.0066 0.24878801 Scinti 0.10305137 0.6006274 Scinti 0.8 0.8 LnL true false false false 161 7.8 4 2 0.70367877 3158.0066 0.35183938 Scinti 0.35183938 1.0555182 Scinti 0.8 0.8 LnL true false false false 162 8 4 2 0.70367877 3158.0066 0.35183938 Scinti 0.35183938 1.0555182 Scinti 0.8 0.8 LnL true false false false 163 8.2 11 3.3166248 1.9351166 3158.0066 0.58345961 Scinti 1.351657 2.5185762 Scinti 
[... flattened table of background rate data points truncated ...]
[INFO]:INFO: storing plot in /home/basti/phd/Figs/background/background_rate_crGold_scinti_fadc.pdf
[WARNING]: Printing total background time currently only supported for single datasets.
[INFO] TeXDaemon ready for input.
shellCmd: command -v lualatex
shellCmd: lualatex -output-directory /home/basti/phd/Figs/background /home/basti/phd/Figs/background/background_rate_crGold_scinti_fadc.tex
Generated: /home/basti/phd/Figs/background/background_rate_crGold_scinti_fadc.pdf
:end:

(old files:
~/phd/resources/background/autoGen/likelihood_cdl2018_Run2_crGold.h5 \
~/phd/resources/background/autoGen/likelihood_cdl2018_Run3_crGold.h5 \
~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run2_crGold_scinti.h5 \
~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run3_crGold_scinti.h5 \
~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run2_crGold_scinti_vetoPercentile_0.99_fadc_vetoPercentile_0.99.h5 \
~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run3_crGold_scinti_vetoPercentile_0.99_fadc_vetoPercentile_0.99.h5 \
)

**** Generate plots of rise/fall time for CDL data :extended:
:PROPERTIES:
:CUSTOM_ID: sec:background:fadc_veto:gen_signal_back_fadc_plots
:END:

This code snippet is the original code we used for the first of these plots, while it was still part of [[file:~/org/Doc/StatusAndProgress.org]]. It has since been moved to [[file:~/CastData/ExternCode/TimepixAnalysis/Plotting/plotFadc/plotFadc.nim]].
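The core of the script is simple: restrict the calibration data to ⁵⁵Fe-like events, then print percentile summaries of the FADC rise and fall time distributions before plotting them. A minimal Python sketch of that summary logic (the sample values below are hypothetical, not real detector data):

```python
import numpy as np

def summarize(values, label):
    # Percentile summary in the same spirit as the `plotDset` helper
    # in the Nim code below.
    print(f"===== {label} =====")
    for q in (1, 5, 50, 95, 99):
        print(f"\t{q:2d}-th: {np.percentile(values, q):.1f}")
    print(f"\t mean: {np.mean(values):.1f}")

# hypothetical rise times in ns for a handful of ⁵⁵Fe events
rise_times = [44, 46, 52, 55, 55, 57, 59, 64, 71]
summarize(rise_times, "riseTime")
```

The narrow spread of such a summary for X-rays (compared to the much wider background distribution) is what motivates the FADC rise time veto.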
#+begin_src nim :results drawer :tangle /tmp/fadc_rise_fall_signal_vs_background.nim
import nimhdf5, ggplotnim
import std / [strutils, os, sequtils, sets, strformat]
import ingrid / [tos_helpers, ingrid_types]
import ingrid / calibration / [calib_fitting, calib_plotting]
import ingrid / calibration

proc plotFallTimeRiseTime(df: DataFrame, suffix: string, riseTimeHigh: float) =
  ## Given a full run of FADC data, create the
  ## Note: it may be sensible to compute a truncated mean instead
  # local copy filtered to maximum allowed rise time
  let df = df.filter(f{`riseTime` <= riseTimeHigh})

  proc plotDset(dset: string) =
    let dfCalib = df.filter(f{`Type` == "⁵⁵Fe"})
    echo "============================== ", dset, " =============================="
    echo "Percentiles:"
    echo "\t 1-th: ", dfCalib[dset, float].percentile(1)
    echo "\t 5-th: ", dfCalib[dset, float].percentile(5)
    echo "\t50-th: ", dfCalib[dset, float].percentile(50)
    echo "\t mean: ", dfCalib[dset, float].mean
    echo "\t95-th: ", dfCalib[dset, float].percentile(95)
    echo "\t99-th: ", dfCalib[dset, float].percentile(99)
    ggplot(df, aes(dset, fill = "Type")) +
      geom_histogram(position = "identity", bins = 100, hdKind = hdOutline, alpha = 0.7) +
      ggtitle(&"FADC signal {dset} in ⁵⁵Fe vs background data in $#" % suffix) +
      xlab(dset & " [ns]") +
      ggsave(&"Figs/statusAndProgress/FADC/fadc_{dset}_signal_vs_background_$#.pdf" % suffix)
    ggplot(df, aes(dset, fill = "Type")) +
      geom_density(normalize = true, alpha = 0.7, adjust = 2.0) +
      ggtitle(&"FADC signal {dset} in ⁵⁵Fe vs background data in $#" % suffix) +
      xlab(dset & " [ns]") +
      ggsave(&"Figs/statusAndProgress/FADC/fadc_{dset}_kde_signal_vs_background_$#.pdf" % suffix)

  plotDset("fallTime")
  plotDset("riseTime")

proc read(fname, typ: string, eLow, eHigh: float): DataFrame =
  var h5f = H5open(fname, "r")
  let fileInfo = h5f.getFileInfo()
  var peakPos = newSeq[float]()
  result = newDataFrame()
  for run in fileInfo.runs:
    if recoBase() & $run / "fadc" notin h5f:
      continue # skip runs that were without FADC
    var df = h5f.readRunDsets(
      run,
      #chipDsets = some((chip: 3, dsets: @["eventNumber"])), # XXX: causes problems?? Removes some FADC data, but not due to events!
      fadcDsets = @["eventNumber", "baseline", "riseStart", "riseTime",
                    "fallStop", "fallTime", "minvals", "argMinval"]
    )
    # in calibration case filter to
    if typ == "⁵⁵Fe":
      let xrayRefCuts = getXrayCleaningCuts()
      let cut = xrayRefCuts["Mn-Cr-12kV"]
      let grp = h5f[(recoBase() & $run / "chip_3").grp_str]
      let passIdx = cutOnProperties(
        h5f, grp,
        crSilver, # try cutting to silver
        (toDset(igRmsTransverse), cut.minRms, cut.maxRms),
        (toDset(igRmsTransverse), 0.0, cut.maxEccentricity),
        (toDset(igLength), 0.0, cut.maxLength),
        (toDset(igHits), cut.minPix, Inf),
        (toDset(igEnergyFromCharge), eLow, eHigh)
      )
      let dfChip = h5f.readRunDsets(run, chipDsets = some((chip: 3, dsets: @["eventNumber"])))
      let allEvNums = dfChip["eventNumber", int]
      let evNums = passIdx.mapIt(allEvNums[it]).toSet
      df = df.filter(f{int: `eventNumber` in evNums})
    df["runNumber"] = run
    result.add df
  result["Type"] = typ
  echo result

proc main(back, calib: string, year: int,
          energyLow = 0.0, energyHigh = Inf,
          riseTimeHigh = Inf) =
  var df = newDataFrame()
  df.add read(back, "Background", energyLow, energyHigh)
  df.add read(calib, "⁵⁵Fe", energyLow, energyHigh)
  let is2017 = year == 2017
  let is2018 = year == 2018
  if not is2017 and not is2018:
    raise newException(IOError, "The input file is neither clearly a 2017 nor 2018 calibration file!")
  let yearToRun = if is2017: 2 else: 3
  let suffix = "Run-$#" % $yearToRun
  plotFallTimeRiseTime(df, suffix, riseTimeHigh)

when isMainModule:
  import cligen
  dispatch main
#+end_src

Run-2:
#+begin_src sh :results none
WRITE_PLOT_CSV=true ESCAPE_LATEX=true USE_TEX=true plotFadc \
    -c ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    -b ~/CastData/data/DataRuns2017_Reco.h5 \
    --year 2017 \
    --outpath ~/phd/Figs/FADC/ \
    --riseTimeHigh 500
#+end_src

Run-3:
#+begin_src sh
WRITE_PLOT_CSV=true ESCAPE_LATEX=true USE_TEX=true plotFadc \
    -c ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    -b ~/CastData/data/DataRuns2018_Reco.h5 \
    --year 2018 \
    --outpath ~/phd/Figs/FADC/ \
    --riseTimeHigh 500
#+end_src

#+RESULTS:
INFO: Run /reconstruction/run_239/fadc does not have any data for dataset minvals
[... the same INFO message, repeated for every further run without `minvals` FADC data, omitted ...]
DataFrame with 11 columns and 123934 rows (Type: ⁵⁵Fe, preview omitted)
[... further INFO messages about runs without `minvals` FADC data omitted ...]
DataFrame with 11 columns and 211718 rows (Type: back, preview omitted)
#+begin_example
============================== fallTime ==============================
Type: back
Percentiles:
	 1-th: 17.0
	 5-th: 309.0
	50-th: 458.0
	 mean: 447.5550088992432
	80-th: 493.0
	95-th: 560.0
	99-th: 747.0
============================== fallTime ==============================
Type: ⁵⁵Fe
Percentiles:
	 1-th: 367.0
	 5-th: 415.0
	50-th: 468.0
	 mean: 465.2696155956492
	80-th: 488.0
	95-th: 505.0
	99-th: 522.0
============================== riseTime ==============================
Type: back
Percentiles:
	 1-th: 12.0
	 5-th: 47.0
	50-th: 99.0
	 mean: 131.5238977238196
	80-th: 191.0
	95-th: 339.0
	99-th: 450.0
============================== riseTime ==============================
Type: ⁵⁵Fe
Percentiles:
	 1-th: 44.0
	 5-th: 46.0
	50-th: 55.0
	 mean: 55.4189313494497
	80-th: 59.0
	95-th: 64.0
	99-th: 71.0
#+end_example

**** Generate plots comparing rise/fall time of X-rays and background :extended:

*UPDATE*: <2023-12-14 Thu 18:50>: The below is also outdated! See ~plotFadc~ above.
- [ ] *MOVE THE CODE CURRENTLY IN STATUS TO TPA!*

Use our FADC plotting tool to generate the plots for the rise and fall time with a custom upper end:
#+begin_src sh
cd /tmp
ntangle ~/org/Doc/StatusAndProgress.org && nim c -d:danger /t/fadc_rise_fall_signal_vs_background
./fadc_rise_fall_signal_vs_background \
    -c ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    -b ~/CastData/data/DataRuns2017_Reco.h5 \
    --year 2017 \
    --riseTimeHigh 600
./fadc_rise_fall_signal_vs_background \
    -c ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    -b ~/CastData/data/DataRuns2018_Reco.h5 \
    --year 2018 \
    --riseTimeHigh 600
#+end_src

- [ ] *RE GENERATE PLOT FOR 2018 WHEN SEPTEM+LINE VETO APPLICATION DONE!*
- [X] fixed the issue causing the flat offset / background

This is somewhat of a continuation of sec. [[#sec:reco:fadc_rise_fall_plots]].

- [ ] *REVISIT THIS WITH 2018 DATA*

#+begin_src nim :results drawer :tangle code/fadc_rise_fall_signal_vs_background.nim
import nimhdf5, ggplotnim
import std / [strutils, os, sequtils, sets, strformat]
import ingrid / [tos_helpers, ingrid_types]
import ingrid / calibration / [calib_fitting, calib_plotting]
import ingrid / calibration

proc stripPrefix(s, p: string): string =
  result = s
  result.removePrefix(p)

proc plotFallTimeRiseTime(df: DataFrame, suffix: string) =
  ## Given a full run of FADC data, create the comparison plots of the
  ## rise and fall times between ⁵⁵Fe and background data.
  ## Note: it may be sensible to compute a truncated mean instead
  proc plotDset(dset: string) =
    let dfCalib = df.filter(f{`Type` == "calib"})
    echo "============================== ", dset, " =============================="
    echo "Percentiles:"
    echo "\t 1-th: ", dfCalib[dset, float].percentile(1)
    echo "\t 5-th: ", dfCalib[dset, float].percentile(5)
    echo "\t95-th: ", dfCalib[dset, float].percentile(95)
    echo "\t99-th: ", dfCalib[dset, float].percentile(99)
    ggplot(df, aes(dset, fill = "Type")) +
      geom_histogram(position = "identity", bins = 100, hdKind = hdOutline, alpha = 0.7) +
      ggtitle(&"Comparison of FADC signal {dset} in ⁵⁵Fe vs background data in $#" % suffix) +
      ggsave(&"Figs/statusAndProgress/FADC/fadc_{dset}_signal_vs_background_$#.pdf" % suffix)
    ggplot(df, aes(dset, fill = "Type")) +
      geom_density(normalize = true, alpha = 0.7, adjust = 2.0) +
      ggtitle(&"Comparison of FADC signal {dset} in ⁵⁵Fe vs background data in $#" % suffix) +
      ggsave(&"Figs/statusAndProgress/FADC/fadc_{dset}_kde_signal_vs_background_$#.pdf" % suffix)
  plotDset("fallTime")
  plotDset("riseTime")

  when false:
    let dfG = df.group_by("runNumber").summarize(
      f{float: "riseTime" << truncMean(col("riseTime").toSeq1D, 0.05)},
      f{float: "fallTime" << truncMean(col("fallTime").toSeq1D, 0.05)})
    ggplot(dfG, aes(runNumber, riseTime, color = fallTime)) +
      geom_point() +
      ggtitle("Comparison of FADC signal rise times in ⁵⁵Fe data for all runs in $#" % suffix) +
      ggsave("Figs/FADC/fadc_mean_riseTime_$#.pdf" % suffix)
    ggplot(dfG, aes(runNumber, fallTime, color = riseTime)) +
      geom_point() +
      ggtitle("Comparison of FADC signal fall times in ⁵⁵Fe data for all runs in $#" % suffix) +
      ggsave("Figs/FADC/fadc_mean_fallTime_$#.pdf" % suffix)

proc read(fname, typ: string): DataFrame =
  var h5f = H5open(fname, "r")
  let fileInfo = h5f.getFileInfo()
  var peakPos = newSeq[float]()
  result = newDataFrame()
  for run in fileInfo.runs:
    if recoBase() & $run / "fadc" notin h5f: continue # skip runs that were without FADC
    var df = h5f.readRunDsets(
      run,
      #chipDsets = some((chip: 3, dsets: @["eventNumber"])), # XXX: causes problems?? Removes some FADC data
      #                                                      # but not due to events!
      fadcDsets = @["eventNumber", "baseline", "riseStart", "riseTime",
                    "fallStop", "fallTime", "minvals", "argMinval"]
    )
    # in the calibration case, filter to the X-ray cleaning cuts (silver region)
    if typ == "calib":
      let xrayRefCuts = getXrayCleaningCuts()
      let cut = xrayRefCuts["Mn-Cr-12kV"]
      let grp = h5f[(recoBase() & $run / "chip_3").grp_str]
      let passIdx = cutOnProperties(
        h5f,
        grp,
        crSilver, # try cutting to silver
        (toDset(igRmsTransverse), cut.minRms, cut.maxRms),
        (toDset(igRmsTransverse), 0.0, cut.maxEccentricity),
        (toDset(igLength), 0.0, cut.maxLength),
        (toDset(igHits), cut.minPix, Inf),
        #(toDset(igEnergyFromCharge), 2.5, 3.5)
      )
      let dfChip = h5f.readRunDsets(run, chipDsets = some((chip: 3, dsets: @["eventNumber"])))
      let allEvNums = dfChip["eventNumber", int]
      let evNums = passIdx.mapIt(allEvNums[it]).toSet
      df = df.filter(f{int: `eventNumber` in evNums})
    df["runNumber"] = run
    result.add df
  result["Type"] = typ
  echo result

proc main(back, calib: string, year: int) =
  var df = newDataFrame()
  df.add read(back, "back")
  df.add read(calib, "calib")
  let is2017 = year == 2017
  let is2018 = year == 2018
  if not is2017 and not is2018:
    raise newException(IOError, "The input file is neither clearly a 2017 nor 2018 calibration file!")
  let yearToRun = if is2017: 2 else: 3
  let suffix = "run$#" % $yearToRun
  plotFallTimeRiseTime(df, suffix)

when isMainModule:
  import cligen
  dispatch main
#+end_src

- *EXPLANATION FOR FLAT BACKGROUND IN RISE / FALL TIME:*
  The "dead" register causes our fall / rise time calculation to break! This leads to a 'background' of homogeneous rise / fall times.
  -> THIS NEEDS TO BE FIXED FIRST!!
  -> Has been fixed since.

**** Calculate expected cluster sizes and rise times [/] :extended:

- [ ] Once happy with the text from ~statusAndProgress~, move all the related parts, concluding in the explicit code using the gas property CSV to compute the expected values explicitly.

We estimate the rise times in sec. [[file:~/org/Doc/StatusAndProgress.org::#sec:fadc:estimate_rise_times]].
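The run-wise summaries in the FADC plotting code above use a truncated mean (~truncMean(..., 0.05)~). As a minimal illustration (in Python rather than the Nim used here, and assuming the fraction is trimmed symmetrically from both tails; the actual ~truncMean~ implementation may differ in edge cases):

```python
def trunc_mean(xs, frac=0.05):
    """Mean after dropping the lowest and highest `frac` fraction of values.

    Hypothetical stand-in for the `truncMean` used above."""
    xs = sorted(xs)
    k = int(len(xs) * frac)                  # entries to drop on each side
    kept = xs[k:len(xs) - k] if k > 0 else xs
    return sum(kept) / len(kept)

vals = [50, 51, 52, 53, 54, 55, 56, 57, 58, 500]  # one large outlier
print(trunc_mean(vals, 0.1))  # 54.5 -- robust against the outlier (plain mean: 98.6)
```

This makes the per-run mean rise/fall times insensitive to the few pathological events in each run.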
*** Outer GridPix as veto - 'septem veto'
:PROPERTIES:
:CUSTOM_ID: sec:background:septem_veto
:END:

The final hardware feature that is used to improve the background rate is the outer ring of GridPixes. The size of large clusters is a significant fraction of a single GridPix. This means that depending on the cluster center position, parts of the cluster may very well be outside of the chip. While the most important area of the chip is the center area (due to the X-ray optics focusing the axion induced X-rays), misalignment and the extended nature of the 'axion image' mean that a significant portion of the chip should be as low in background as possible. Fig. sref:fig:background:cluster_centers_no_vetoes_2017_18, which we saw in sec. [[#sec:background:likelihood_cut]], shows a significant increase in cluster counts towards the edges and corners of the center GridPix. The GridPix ring can help us reduce this.

Normally the individual chips are treated separately in the analysis chain. The 'septem veto' is the name for an additional veto step, which can optionally be applied to the center chip. With it, each cluster that is considered signal-like based on the likelihood cut (or MLP) is analyzed in a second step. The full event is reconstructed again from the beginning, but this time as an event covering /all 7/ chips. This allows the cluster finding algorithm to detect clusters beyond the center chip boundaries. After finding all clusters, the normal cluster reconstruction to compute all properties is done again. Finally, the likelihood method or MLP is applied to each cluster in the event. If now all clusters whose center is on the central chip are considered background-like, the event is vetoed, because the initial signal-like cluster turned out to be part of a larger cluster covering more than one chip.

There is a slight complication here. The layout of the septemboard includes spacing (see for example fig.
[[fig:detector:occupancy_sparking_run_241]] or [[fig:reco:fadc_reco_example]] for the layout), which is required to mount the chips. This spacing, in addition to the generally lower efficiency towards the edges of a GridPix, means significant information is lost. When reconstructing the full 'septemboard events' this spacing would break the cluster finding algorithms, as the spacing might extend the distance over the cutoff criterion [fn:alternative_fill]. For this reason the cluster finding algorithm is actually performed on a 'tight layout' in which the spacing has been reduced to zero. For the calculation of the geometric properties however, the clusters are transformed into the real septemboard coordinates including spacing. [fn:spacing_impact_rms]

In contrast to the regular single chip analysis performed before, which uses a simple, bespoke clustering algorithm (see sec. [[#sec:reco:cluster_finding]], 'Default'), the full 'septemboard reconstruction' uses the DBSCAN clustering algorithm. The equivalent search radius is extended a bit over the normal default (65 pixels compared to 50, unless changed in the configuration). DBSCAN produces better results in terms of likely related clusters (over chip boundaries). [fn:downside]

An example that shows two clusters on the center chip, one of which was initially interpreted as a signal-like event before being vetoed by the 'septem veto', is shown in fig. [[fig:example_clusters_septem_veto]]. The colors indicate the clusters each pixel belongs to according to the cluster finding algorithm. As the chips are treated separately initially, there are two clusters found on the center chip, visible as the green and purple cluster "portions". The cyan part passes the likelihood cut initially, which triggers the 'septem veto' (X-rays at such low energies are much less spherical on average; same diffusion, but fewer electrons). A full 7 GridPix event reconstruction shows the additional parts of the two clusters.
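The veto decision just described can be sketched as follows — in Python pseudocode rather than the actual Nim implementation, with hypothetical helpers ~chip_of_center~ and ~is_signal_like~ standing in for the real coordinate transform and the lnL/MLP classifier:

```python
CENTER_CHIP = 3  # chip index of the center GridPix on the septemboard

def septem_veto(clusters, chip_of_center, is_signal_like):
    """Decide the septem veto after full-board reclustering.

    `clusters` are the clusters found on the reassembled 7-chip event
    (tight layout). The event is vetoed if no cluster whose center lies
    on the center chip is still considered signal-like."""
    center_clusters = [c for c in clusters if chip_of_center(c) == CENTER_CHIP]
    # If the original cluster merged into a larger one whose center moved off
    # the center chip, nothing signal-like remains on chip 3 -> vetoed.
    return not any(is_signal_like(c) for c in center_clusters)

# Toy usage: two clusters, neither signal-like after full reconstruction
clusters = [{"chip": 3, "signal": False}, {"chip": 1, "signal": False}]
print(septem_veto(clusters, lambda c: c["chip"], lambda c: c["signal"]))  # True
```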
The cyan cluster is finally rejected. It is a good example, as it shows a cluster that is still relatively close to the center, and yet still 'connects' to another chip.

#+CAPTION: An example event showing all 7 GridPix of the CAST GridPix1 detector.
#+CAPTION: The outlines are the boundaries of each chip
#+CAPTION: and the color of each point indicates the cluster which it is part of
#+CAPTION: according to the cluster finder.
#+CAPTION: Initially the cyan cluster (center chip portion) passes the log likelihood cut
#+CAPTION: (i.e. is signal like), but is vetoed by the 'septem veto'
#+CAPTION: as there are more pixels outside the center chip that are part of this cluster.
#+CAPTION: The green cluster in the bottom left of the center chip is in addition a good example of
#+CAPTION: how in particular cutting a track in the corners leads to a much more spherical
#+CAPTION: cluster.
#+NAME: fig:example_clusters_septem_veto
[[~/phd/Figs/background/exampleEvents/septemEvent_run_162_event_80301.pdf]]

The background rate with the septem veto is shown in fig. [[fig:background:background_rate_septem_veto]], where we see that most of the improvement is in the lower energy range $< \SI{2}{keV}$. This is the /most important region for the solar axion flux for the axion-electron coupling/. Looking at the mean background rate in intervals of interest, between $\SIrange{2}{8}{keV}$ it is $\SI{8.0337(6864)e-06}{keV⁻¹.cm⁻².s⁻¹}$ and between $\SIrange{4}{8}{keV}$ it is $\SI{4.0462(5966)e-06}{keV⁻¹.cm⁻².s⁻¹}$. And even in the full range down to $\SI{0}{keV}$ the mean is in the $10⁻⁶$ range, at $\SI{9.5436(6479)e-06}{keV⁻¹.cm⁻².s⁻¹}$.

#+begin_comment
[INFO]:Dataset: Septem
[INFO]: Integrated background rate in range: 0.0 .. 8.0: 7.6349(5183)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 9.5436(6479)e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: Septem
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 4.8202(4118)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 8.0337(6864)e-06 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 4.0462(5966)e-06
[INFO]:Dataset: Septem
[INFO]: Integrated background rate in range: 4.0 .. 8.0: 1.6185(2386)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 4.0462(5966)e-06 keV⁻¹·cm⁻²·s⁻¹
#+end_comment

#+CAPTION: Background rate achieved based on the 'septem veto' (in addition to the
#+CAPTION: scintillator cut and FADC veto) for the full 2017/18 dataset within the center
#+CAPTION: $\SI[parse-numbers=false]{5 \times 5}{mm²}$. Significant improvement in the
#+CAPTION: $< \SI{2}{keV}$ range, which is most important for the axion-electron coupling solar
#+CAPTION: flux.
#+CAPTION: The background rate between $\SIrange{2}{8}{keV}$ is $\SI{8.0337(6864)e-06}{keV⁻¹.cm⁻².s⁻¹}$ and
#+CAPTION: between $\SIrange{4}{8}{keV}$ it is $\SI{4.0462(5966)e-06}{keV⁻¹.cm⁻².s⁻¹}$.
#+NAME: fig:background:background_rate_septem_veto
[[~/phd/Figs/background/background_rate_crGold_scinti_fadc_septem.pdf]]

[fn:alternative_fill] An alternative could be to attempt to fill in the area between the chips with simulated data based on what is seen at the chip edges. But that is a complex problem. A fun (but possibly questionable) idea would be to train a diffusion model (generative AI) to predict missing data between chips.

[fn:spacing_impact_rms] In principle this has a small impact on the calculation of the RMS of the cluster, if there is now a large gap between two parts of the 'same' cluster. However, as we don't really rely on the properties too much it is of no real concern. Technically we still calculate the $\ln\mathcal{L}$ of each cluster, but any cluster passing the $\ln\mathcal{L}$ cut before being added to a larger cluster almost certainly won't pass it afterwards. The slight bias in the RMS won't change that.
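As a quick cross-check of the numbers quoted above: the rate per keV is simply the integrated rate divided by the width of the energy interval. For the septem veto in the full $\SIrange{0}{8}{keV}$ range (central values only, ignoring uncertainties):

```python
# Integrated rate over 0-8 keV with the septem veto (central value from above)
integrated = 7.6349e-05           # cm⁻²·s⁻¹
e_low, e_high = 0.0, 8.0          # keV
rate_per_keV = integrated / (e_high - e_low)
print(f"{rate_per_keV:.4e}")      # 9.5436e-06 keV⁻¹·cm⁻²·s⁻¹
```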
[fn:downside] The downside is that DBSCAN is _significantly_ slower, which is part of the reason it is not utilized for the initial reconstruction.

**** TODOs for this section [/] :noexport:

- [ ] *POSSIBLY REPLACE THIS EVENT, CERTAINLY REPLACE PLOT*
  -> Show an event that uses real septem layout!
  - [X] Layout is changed
  - [ ] Need to finalize the text settings for the thesis before producing final version.

This was previous text.
#+begin_quote
shows the cluster centers based on the background data taken at CAST in 2017 and 2018 remaining after the likelihood cut. Evidently, the cluster density is significantly lower in the center area than towards the edges and in particular the corners. The closer the cluster center is to the edges, the higher the chance that parts of it are cut off. In particular, cutting off part of a track-like cluster most likely *reduces* the length and thus makes the cluster *more* spherical.

- [ ] *REFER TO THE IMAGE OF CLUSTER CENTERS FIRST INTRODUCED IN LNL STUFF*

#+CAPTION: Each point represents the cluster center of a single cluster that passes
#+CAPTION: the likelihood cut. The color scale in addition represents whether multiple
#+CAPTION: cluster centers were on the same pixel. The data is the full 2017/18 background
#+CAPTION: data. It is very evident that the gold region (marked as a square) has the
#+CAPTION: lowest background. From there towards the edges and in particular the corners
#+CAPTION: the background increases significantly. This is mostly a geometric effect,
#+CAPTION: for which an example can be seen in fig. [[fig:example_clusters_septem_veto]].
#+CAPTION: *TODO ADD THE GOLD REGION OUTLINE*
#+NAME: fig:background_clusters_no_septem_veto
#+end_quote

**** Generate plot for background rate including septem veto :extended:

Let's generate the plot:
#+begin_src sh :results drawer
plotBackgroundRate \
    ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL.h5 \
    ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL.h5 \
    ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti.h5 \
    ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti.h5 \
    ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_vQ_0.99.h5 \
    ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_vQ_0.99.h5 \
    ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_vQ_0.99.h5 \
    ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_vQ_0.99.h5 \
    --names "No vetoes" --names "No vetoes" \
    --names "Scinti" --names "Scinti" \
    --names "FADC" --names "FADC" \
    --names "Septem" --names "Septem" \
    --centerChip 3 \
    --region crGold \
    --title "Background rate from CAST data, incl. scinti, FADC, septem veto" \
    --showNumClusters \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --outfile background_rate_crGold_scinti_fadc_septem.pdf \
    --outpath ~/phd/Figs/background/ \
    --useTeX \
    --quiet
#+end_src

#+RESULTS:
:results:
Manual rate = 1.50411(6641)e-05
[INFO]:Dataset: FADC
[INFO]: Integrated background rate in range: 0.0 .. 12.0: 1.80494(7969)e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 12.0: 1.50411(6641)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 1.97910(7618)e-05
[INFO]:Dataset: No vetoes
[INFO]: Integrated background rate in range: 0.0 .. 12.0: 2.37492(9141)e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 12.0: 1.97910(7618)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 1.81490(7295)e-05
[INFO]:Dataset: Scinti
[INFO]: Integrated background rate in range: 0.0 .. 12.0: 2.17789(8754)e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 12.0: 1.81490(7295)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 1.07604(5617)e-05
[INFO]:Dataset: Septem
[INFO]: Integrated background rate in range: 0.0 .. 12.0: 1.29125(6740)e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 12.0: 1.07604(5617)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 2.8675(2246)e-05
[INFO]:Dataset: FADC
[INFO]: Integrated background rate in range: 0.5 .. 2.5: 5.7350(4492)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 2.8675(2246)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 3.5008(2482)e-05
[INFO]:Dataset: No vetoes
[INFO]: Integrated background rate in range: 0.5 .. 2.5: 7.0016(4963)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 3.5008(2482)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 3.4304(2457)e-05
[INFO]:Dataset: Scinti
[INFO]: Integrated background rate in range: 0.5 .. 2.5: 6.8609(4913)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 3.4304(2457)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 1.0555(1363)e-05
[INFO]:Dataset: Septem
[INFO]: Integrated background rate in range: 0.5 .. 2.5: 2.1110(2725)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 1.0555(1363)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 2.0328(1261)e-05
[INFO]:Dataset: FADC
[INFO]: Integrated background rate in range: 0.5 .. 5.0: 9.1478(5673)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.0328(1261)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 2.7053(1454)e-05
[INFO]:Dataset: No vetoes
[INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.21736(6545)e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.7053(1454)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 2.5020(1399)e-05
[INFO]:Dataset: Scinti
[INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.12589(6294)e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.5020(1399)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 1.14934(9480)e-05
[INFO]:Dataset: Septem
[INFO]: Integrated background rate in range: 0.5 .. 5.0: 5.1720(4266)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 1.14934(9480)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 3.0821(2083)e-05
[INFO]:Dataset: FADC
[INFO]: Integrated background rate in range: 0.0 .. 2.5: 7.7053(5207)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 2.5: 3.0821(2083)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 3.8421(2325)e-05
[INFO]:Dataset: No vetoes
[INFO]: Integrated background rate in range: 0.0 .. 2.5: 9.6052(5813)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 2.5: 3.8421(2325)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 3.7717(2304)e-05
[INFO]:Dataset: Scinti
[INFO]: Integrated background rate in range: 0.0 .. 2.5: 9.4293(5760)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 2.5: 3.7717(2304)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 1.2807(1343)e-05
[INFO]:Dataset: Septem
[INFO]: Integrated background rate in range: 0.0 .. 2.5: 3.2017(3356)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 2.5: 1.2807(1343)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 4.5739(6343)e-06
[INFO]:Dataset: FADC
[INFO]: Integrated background rate in range: 4.0 .. 8.0: 1.8296(2537)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 4.5739(6343)e-06 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 6.7729(7718)e-06
[INFO]:Dataset: No vetoes
[INFO]: Integrated background rate in range: 4.0 .. 8.0: 2.7092(3087)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 6.7729(7718)e-06 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 6.1572(7359)e-06
[INFO]:Dataset: Scinti
[INFO]: Integrated background rate in range: 4.0 .. 8.0: 2.4629(2944)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 6.1572(7359)e-06 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 4.0462(5966)e-06
[INFO]:Dataset: Septem
[INFO]: Integrated background rate in range: 4.0 .. 8.0: 1.6185(2386)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 4.0462(5966)e-06 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 9.0305(7277)e-06
[INFO]:Dataset: FADC
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 5.4183(4366)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 9.0305(7277)e-06 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 1.37217(8970)e-05
[INFO]:Dataset: No vetoes
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 8.2330(5382)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.37217(8970)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 1.19039(8355)e-05
[INFO]:Dataset: Scinti
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 7.1423(5013)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.19039(8355)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 8.0337(6864)e-06
[INFO]:Dataset: Septem
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 4.8202(4118)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 8.0337(6864)e-06 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 1.57888(8333)e-05
[INFO]:Dataset: FADC
[INFO]: Integrated background rate in range: 0.0 .. 8.0: 1.26310(6666)e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 1.57888(8333)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 2.12423(9666)e-05
[INFO]:Dataset: No vetoes
[INFO]: Integrated background rate in range: 0.0 .. 8.0: 1.69938(7732)e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 2.12423(9666)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 1.97910(9330)e-05
[INFO]:Dataset: Scinti
[INFO]: Integrated background rate in range: 0.0 .. 8.0: 1.58328(7464)e-04 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 1.97910(9330)e-05 keV⁻¹·cm⁻²·s⁻¹
Manual rate = 9.5436(6479)e-06
[INFO]:Dataset: Septem
[INFO]: Integrated background rate in range: 0.0 .. 8.0: 7.6349(5183)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 9.5436(6479)e-06 keV⁻¹·cm⁻²·s⁻¹
| Classifier | ε_eff | Scinti | FADC  | Septem | Line  | ε_total | Rate              |
| LnL        | 0.800 | true   | true  | false  | false | 0.784   | 1.57888(8333)e-05 |
| LnL        | 0.800 | false  | false | false  | false | 0.800   | 2.12423(9666)e-05 |
| LnL        | 0.800 | true   | false | false  | false | 0.800   | 1.97910(9330)e-05 |
| LnL        | 0.800 | true   | true  | true   | false | 0.615   | 9.5436(6479)e-06  |
[INFO]:INFO: storing plot in /home/basti/phd/Figs/background/background_rate_crGold_scinti_fadc_septem.pdf
[WARNING]: Printing total background time currently only supported for single datasets.
[INFO] TeXDaemon ready for input.
shellCmd: command -v lualatex
shellCmd: lualatex -output-directory /home/basti/phd/Figs/background /home/basti/phd/Figs/background/background_rate_crGold_scinti_fadc_septem.tex
Generated: /home/basti/phd/Figs/background/background_rate_crGold_scinti_fadc_septem.pdf
:end:

(old files:
~/phd/resources/background/autoGen/likelihood_cdl2018_Run2_crGold.h5 \
~/phd/resources/background/autoGen/likelihood_cdl2018_Run3_crGold.h5 \
~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run2_crGold_scinti.h5 \
~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run3_crGold_scinti.h5 \
~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run2_crGold_scinti_vetoPercentile_0.99_fadc_vetoPercentile_0.99.h5 \
~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run3_crGold_scinti_vetoPercentile_0.99_fadc_vetoPercentile_0.99.h5 \
~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run2_crGold_scinti_vetoPercentile_0.99_fadc_vetoPercentile_0.99_septem_vetoPercentile_0.99.h5 \
~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run3_crGold_scinti_vetoPercentile_0.99_fadc_vetoPercentile_0.99_septem_vetoPercentile_0.99.h5 \
)

**** Generate example plot of vetoed septemboard event :extended:

The septemboard event is generated as a byproduct of the ~likelihood~ program, if it is being run with ~--plotSeptem~ as a command line argument.
#+begin_src sh
PLOT_SEPTEM_E_CUTOFF=1.0 PLOT_SEPTEM_EVENT_NUMBER=80301 \
    likelihood \
    -f ~/CastData/data/DataRuns2017_Reco.h5 \
    --run 162 \
    --h5out /tmp/dummy.h5 \
    --region crAll --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --lnL \
    --signalEfficiency 0.8 \
    --septemveto \
    --plotSeptem \
    --plotPath ~/phd/Figs/background/exampleEvents/ \
    --useTeX
#+end_src
-> Works (just produces too many plots!)
-> Add an "event index" filter for the plotSeptem. Done.

*** 'Line veto'
:PROPERTIES:
:CUSTOM_ID: sec:background:line_veto
:END:

There is one further optional veto, dubbed the 'line veto'. It checks whether there are clusters on the outer chips whose long axis "points at" clusters on the center chip (which pass the likelihood or MLP cut). The idea is that there is a high chance that such clusters are correlated, especially because ionization is an inherently statistical process. It can be used in addition to the 'septem veto' or standalone. It follows the same general approach as the septem veto by again reconstructing the full events as 'septemboard events'. The difference to the septem veto is that it does not rely on the cluster finding search radius. The center cluster that initially passed the likelihood or MLP cut will be rejected if the long axis of the outer chip cluster cuts within $3·\left(\frac{\text{RMS}_T + \text{RMS}_L}{2}\right)$ of the cluster center. In other words, within $3 σ$ of the mean standard deviation of the pixel distribution along the long and short axes [fn:roughly_radius].
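Geometrically this is a point-to-line distance check. A minimal Python illustration under the stated $3·(\text{RMS}_T + \text{RMS}_L)/2$ radius — not the actual Nim implementation, which works in septemboard pixel coordinates:

```python
import math

def line_veto_hit(outer_center, outer_angle, center_pos, rms_t, rms_l):
    """Does the long axis of an outer cluster (center `outer_center`, rotation
    angle `outer_angle`), extended to an infinite line, pass within
    3*(RMS_T + RMS_L)/2 of the candidate cluster at `center_pos`?"""
    radius = 3.0 * (rms_t + rms_l) / 2.0
    dx = center_pos[0] - outer_center[0]
    dy = center_pos[1] - outer_center[1]
    # perpendicular distance of center_pos from the line through outer_center
    # with direction (cos(angle), sin(angle))
    dist = abs(dx * math.sin(outer_angle) - dy * math.cos(outer_angle))
    return dist <= radius

# Toy usage: line along x through the origin; cluster 1 unit off the line,
# veto radius 3 -> the line cuts the circle, so the cluster would be vetoed.
print(line_veto_hit((0, 0), 0.0, (10, 1), rms_t=1.0, rms_l=1.0))  # True
```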
If desired, not every outer chip cluster is considered for the 'line veto'. An $ε_{\text{cut}}$ parameter can be adjusted such that only clusters with an eccentricity larger than $ε_{\text{cut}}$ contribute. The value of this cutoff impacts the efficiency, but also the expected random coincidence rate of the veto. More on that in sec. [[#sec:background:estimate_veto_efficiency]].

An example of an event being vetoed by the 'line veto' is shown in fig. [[fig:background:example_clusters_line_veto]]. The black circle around the center chip cluster indicates the radius in which the orange line (the extension of the long axis of the top, green cluster) needs to cut the center cluster.

#+CAPTION: Example event, which highlights the use case of the 'line veto'. The
#+CAPTION: green cluster in the upper chip is eccentric and its long axis 'points'
#+CAPTION: towards the purple center cluster (which initially passes the $\ln\mathcal{L}$ cut).
#+CAPTION: The black circle is a measure for the radius of the center cluster. If the
#+CAPTION: line of the eccentric cluster cuts the circle, the cluster is vetoed.
#+NAME: fig:background:example_clusters_line_veto
[[~/phd/Figs/background/exampleEvents/septemEvent_run_261_event_44109.pdf]]

The achieved background rate, see fig. [[fig:background:background_rate_line_veto]], goes well into the $10⁻⁶$ range with this veto over the whole energy range, as the influence is largest at low energies. The rate comes out to $\SI{7.9164(5901)e-06}{keV⁻¹.cm⁻².s⁻¹}$ over the whole range up to $\SI{8}{keV}$, to $\SI{7.5645(6660)e-06}{keV⁻¹.cm⁻².s⁻¹}$ if starting from $\SI{2}{keV}$ and $\SI{3.7823(5768)e-06}{keV⁻¹.cm⁻².s⁻¹}$ if starting from $\SI{4}{keV}$.

#+begin_comment
[INFO]:Dataset: Line
[INFO]: Integrated background rate in range: 0.0 .. 8.0: 6.3331(4720)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 7.9164(5901)e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: Line
[INFO]: Integrated background rate in range: 2.0 .. 8.0: 4.5387(3996)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 7.5645(6660)e-06 keV⁻¹·cm⁻²·s⁻¹
[INFO]:Dataset: Line
[INFO]: Integrated background rate in range: 4.0 .. 8.0: 1.5129(2307)e-05 cm⁻²·s⁻¹
[INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 3.7823(5768)e-06 keV⁻¹·cm⁻²·s⁻¹
#+end_comment

#+CAPTION: Background rate of all CAST 2017/18 data in the center
#+CAPTION: $\SI[parse-numbers=false]{5 \times 5}{mm²}$ using all vetoes including the
#+CAPTION: line veto. Improvements are seen especially at low energies. The mean
#+CAPTION: background rate in the full range below $\SI{8}{keV}$ is
#+CAPTION: $\SI{7.9164(5901)e-06}{keV⁻¹.cm⁻².s⁻¹}$.
#+NAME: fig:background:background_rate_line_veto
[[~/phd/Figs/background/background_rate_crGold_scinti_fadc_septem_line.pdf]]

[fn:roughly_radius] This is roughly the radius of the cluster, going by the most extended active pixels of the cluster.

**** TODOs for section above [/] :noexport:

- [ ] *EXTEND THIS ON THE NOTION OF THE ECCENTRICITY LINE CUTOFF!*
- [ ] *MOVE MOST OF THIS DOWN TO sec. [[#sec:background:all_vetoes_combined]]*
- [X] Not really a TODO, but interestingly the # of background clusters on the whole chip with all vetoes increased from 9500 (ref. IAXO TDR plots) to ~10400 or so. But the number in the gold region dropped significantly from ~500 to ~350! The changes are many that were made in the meantime. All aspects of the method underwent changes and fixes, so this is not really a surprise.
  -> *UPDATE*: <2023-03-10 Fri 09:41> Ohhh, I think I understand what's going on and why we have more total clusters... It's the fact that in this case we're running without the tracking information in the file, ergo we use all of the data and therefore have more clusters in total!
**** Possible eccentricity cutoff [/] :extended: :PROPERTIES: :CUSTOM_ID: sec:background:line_veto:eccentricity_cutoff :END: - [ ] *DISCUSS THE ECCENTRICITY CUTOFF IN A LITTLE MORE DETAIL, SHOW THE EFFICIENCIES AS DESCRIBED IN SUBSECTION BELOW SOMEWHERE!* - [ ] This should be *after* we talk about random coincidence and efficiency etc. -> The plot shows that the efficiency scaling is much steeper for real data than for fake data (purple vs green). This implies that we gain more in efficiency for real data than we lose in signal efficiency due to random coincidence. For the final usage in the thesis we ended up allowing all clusters to participate in the line veto. Fig. [[fig:background:fraction_passing_line_veto_ecc_cut]] shows the behavior of the line veto efficiency for real data and fake bootstrapped data, depending on the eccentricity a cluster needs in order to participate as an active cluster in the line veto. The ideal choice is somewhere in the middle. - [ ] *DECIDE ON A FINAL VALUE WE USE AND GIVE ARGUMENTS!* -> Also see ratio plot! But to decide we need to think through what we actually care about most! A direct ratio may not be optimal. #+CAPTION: Fraction of events in Run-3 data (green), which pass (i.e. not rejected) the line #+CAPTION: veto depending on the eccentricity cut used, which decides how eccentric a #+CAPTION: cluster needs to be in order to be used for the veto. The purple points are #+CAPTION: using fake bootstrapped data from real clusters passing the $\ln\mathcal{L}$ cut #+CAPTION: together with real outer GridPix data from *other* events. The fraction of events #+CAPTION: being vetoed in the latter is a measure for the random coincidence rate. #+CAPTION: (See ratio plot to see that 1.4 - 1.5 is likely best?)
#+NAME: fig:background:fraction_passing_line_veto_ecc_cut [[~/phd/Figs/background/estimateSeptemVetoRandomCoinc/fraction_passing_line_veto_ecc_cut.pdf]] **** Generate example plot of 'line veto' event :extended: The septemboard event for the line veto is generated as a byproduct of the ~likelihood~ program, if it is being run with ~--plotSeptem~ as a command line argument. #+begin_src sh PLOT_SEPTEM_E_CUTOFF=1.0 PLOT_SEPTEM_EVENT_NUMBER=44109 \ likelihood \ -f ~/CastData/data/DataRuns2018_Reco.h5 \ --run 261 \ --h5out /tmp/dummy.h5 \ --region crAll --cdlYear 2018 \ --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \ --lnL \ --signalEfficiency 0.8 \ --lineveto \ --plotSeptem \ --plotPath ~/phd/Figs/background/exampleEvents/ \ --useTeX #+end_src -> Works. -> To produce thesis plot, run command in ~/phd/Figs/<subdirectory>~ **** Generate plot for background rate with all vetoes :extended: :PROPERTIES: :CUSTOM_ID: sec:background:gen_bck_all_vetoes_compared :END: Let's generate the plot: #+begin_src sh :results drawer plotBackgroundRate \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ 
~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ --names "No vetoes" --names "No vetoes" \ --names "Scinti" --names "Scinti" \ --names "FADC" --names "FADC" \ --names "Septem" --names "Septem" \ --names "Line" --names "Line" \ --centerChip 3 \ --region crGold \ --title "Background rate from CAST data, incl. scinti, FADC, septem, line veto" \ --showNumClusters \ --showTotalTime \ --topMargin 1.5 \ --energyDset energyFromCharge \ --outfile background_rate_crGold_scinti_fadc_septem_line.pdf \ --outpath ~/phd/Figs/background/ \ --useTeX \ --quiet #+end_src #+RESULTS: :results: Manual rate = 1.50411(6641)e-05 [INFO]:Dataset: FADC [INFO]: Integrated background rate in range: 0.0 .. 12.0: 1.80494(7969)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 12.0: 1.50411(6641)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 9.4997(5278)e-06 [INFO]:Dataset: Line [INFO]: Integrated background rate in range: 0.0 .. 12.0: 1.13996(6333)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 12.0: 9.4997(5278)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.97910(7618)e-05 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 0.0 .. 12.0: 2.37492(9141)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 12.0: 1.97910(7618)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.81490(7295)e-05 [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 0.0 .. 12.0: 2.17789(8754)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 12.0: 1.81490(7295)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.07604(5617)e-05 [INFO]:Dataset: Septem [INFO]: Integrated background rate in range: 0.0 .. 12.0: 1.29125(6740)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 12.0: 1.07604(5617)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.8675(2246)e-05 [INFO]:Dataset: FADC [INFO]: Integrated background rate in range: 0.5 .. 
2.5: 5.7350(4492)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 2.8675(2246)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 6.685(1084)e-06 [INFO]:Dataset: Line [INFO]: Integrated background rate in range: 0.5 .. 2.5: 1.3370(2169)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 6.685(1084)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.5008(2482)e-05 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 0.5 .. 2.5: 7.0016(4963)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 3.5008(2482)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.4304(2457)e-05 [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 0.5 .. 2.5: 6.8609(4913)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 3.4304(2457)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.0555(1363)e-05 [INFO]:Dataset: Septem [INFO]: Integrated background rate in range: 0.5 .. 2.5: 2.1110(2725)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 1.0555(1363)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.0328(1261)e-05 [INFO]:Dataset: FADC [INFO]: Integrated background rate in range: 0.5 .. 5.0: 9.1478(5673)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.0328(1261)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 9.4606(8601)e-06 [INFO]:Dataset: Line [INFO]: Integrated background rate in range: 0.5 .. 5.0: 4.2573(3870)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 9.4606(8601)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.7053(1454)e-05 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.21736(6545)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.7053(1454)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.5020(1399)e-05 [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.12589(6294)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 
5.0: 2.5020(1399)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.14934(9480)e-05 [INFO]:Dataset: Septem [INFO]: Integrated background rate in range: 0.5 .. 5.0: 5.1720(4266)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 1.14934(9480)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.0821(2083)e-05 [INFO]:Dataset: FADC [INFO]: Integrated background rate in range: 0.0 .. 2.5: 7.7053(5207)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 2.5: 3.0821(2083)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 8.585(1099)e-06 [INFO]:Dataset: Line [INFO]: Integrated background rate in range: 0.0 .. 2.5: 2.1462(2748)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 2.5: 8.585(1099)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.8421(2325)e-05 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 0.0 .. 2.5: 9.6052(5813)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 2.5: 3.8421(2325)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.7717(2304)e-05 [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 0.0 .. 2.5: 9.4293(5760)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 2.5: 3.7717(2304)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.2807(1343)e-05 [INFO]:Dataset: Septem [INFO]: Integrated background rate in range: 0.0 .. 2.5: 3.2017(3356)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 2.5: 1.2807(1343)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 4.5739(6343)e-06 [INFO]:Dataset: FADC [INFO]: Integrated background rate in range: 4.0 .. 8.0: 1.8296(2537)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 4.5739(6343)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.7823(5768)e-06 [INFO]:Dataset: Line [INFO]: Integrated background rate in range: 4.0 .. 8.0: 1.5129(2307)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 3.7823(5768)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 6.7729(7718)e-06 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 4.0 .. 
8.0: 2.7092(3087)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 6.7729(7718)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 6.1572(7359)e-06 [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 4.0 .. 8.0: 2.4629(2944)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 6.1572(7359)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 4.0462(5966)e-06 [INFO]:Dataset: Septem [INFO]: Integrated background rate in range: 4.0 .. 8.0: 1.6185(2386)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 4.0462(5966)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 9.0305(7277)e-06 [INFO]:Dataset: FADC [INFO]: Integrated background rate in range: 2.0 .. 8.0: 5.4183(4366)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 9.0305(7277)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 7.5645(6660)e-06 [INFO]:Dataset: Line [INFO]: Integrated background rate in range: 2.0 .. 8.0: 4.5387(3996)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 7.5645(6660)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.37217(8970)e-05 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 2.0 .. 8.0: 8.2330(5382)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.37217(8970)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.19039(8355)e-05 [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 2.0 .. 8.0: 7.1423(5013)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 1.19039(8355)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 8.0337(6864)e-06 [INFO]:Dataset: Septem [INFO]: Integrated background rate in range: 2.0 .. 8.0: 4.8202(4118)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 8.0337(6864)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.57888(8333)e-05 [INFO]:Dataset: FADC [INFO]: Integrated background rate in range: 0.0 .. 8.0: 1.26310(6666)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 
8.0: 1.57888(8333)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 7.9164(5901)e-06 [INFO]:Dataset: Line [INFO]: Integrated background rate in range: 0.0 .. 8.0: 6.3331(4720)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 7.9164(5901)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.12423(9666)e-05 [INFO]:Dataset: No vetoes [INFO]: Integrated background rate in range: 0.0 .. 8.0: 1.69938(7732)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 2.12423(9666)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.97910(9330)e-05 [INFO]:Dataset: Scinti [INFO]: Integrated background rate in range: 0.0 .. 8.0: 1.58328(7464)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 1.97910(9330)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 9.5436(6479)e-06 [INFO]:Dataset: Septem [INFO]: Integrated background rate in range: 0.0 .. 8.0: 7.6349(5183)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.0 .. 8.0: 9.5436(6479)e-06 keV⁻¹·cm⁻²·s⁻¹ | Classifier | ε_eff | Scinti | FADC | Septem | Line | ε_total | Rate | | LnL | 0.800 | true | true | false | false | 0.784 | 1.57888(8333)e-05 | | LnL | 0.800 | true | true | true | true | 0.574 | 7.9164(5901)e-06 | | LnL | 0.800 | false | false | false | false | 0.800 | 2.12423(9666)e-05 | | LnL | 0.800 | true | false | false | false | 0.800 | 1.97910(9330)e-05 | | LnL | 0.800 | true | true | true | false | 0.615 | 9.5436(6479)e-06 | [INFO]:INFO: storing plot in /home/basti/phd/Figs/background/background_rate_crGold_scinti_fadc_septem_line.pdf [WARNING]: Printing total background time currently only supported for single datasets. [INFO] TeXDaemon ready for input. 
shellCmd: command -v lualatex shellCmd: lualatex -output-directory /home/basti/phd/Figs/background /home/basti/phd/Figs/background/background_rate_crGold_scinti_fadc_septem_line.tex Generated: /home/basti/phd/Figs/background/background_rate_crGold_scinti_fadc_septem_line.pdf :end: (old files: ~/phd/resources/background/autoGen/likelihood_cdl2018_Run2_crGold.h5 \ ~/phd/resources/background/autoGen/likelihood_cdl2018_Run3_crGold.h5 \ ~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run2_crGold_scinti.h5 \ ~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run3_crGold_scinti.h5 \ ~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run2_crGold_scinti_vetoPercentile_0.99_fadc_vetoPercentile_0.99.h5 \ ~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run3_crGold_scinti_vetoPercentile_0.99_fadc_vetoPercentile_0.99.h5 \ ~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run2_crGold_scinti_vetoPercentile_0.99_fadc_vetoPercentile_0.99_septem_vetoPercentile_0.99.h5 \ ~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run3_crGold_scinti_vetoPercentile_0.99_fadc_vetoPercentile_0.99_septem_vetoPercentile_0.99.h5 \ ~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run2_crGold_scinti_vetoPercentile_0.99_fadc_vetoPercentile_0.99_septem_vetoPercentile_0.99_line_vetoPercentile_0.99.h5 \ ~/org/resources/lhood_limits_automation_preliminary/lhood_outputs_adaptive_fadc/likelihood_cdl2018_Run3_crGold_scinti_vetoPercentile_0.99_fadc_vetoPercentile_0.99_septem_vetoPercentile_0.99_line_vetoPercentile_0.99.h5 \ ) *** Estimating the random coincidence rate of the septem & line veto :PROPERTIES: :CUSTOM_ID: sec:background:estimate_veto_efficiency :END: One 
potential issue with the septem and line veto is that the shutter times we ran with at CAST are very long ($> \SI{2}{s}$), but only the center chip is triggered by the FADC. This means that the outer chips can record cluster data uncorrelated with what the center chip sees. When applying one of these two vetoes, the chance of random coincidence might therefore be non-negligible. This random coincidence rate needs to be treated as a dead time for the detector, reducing the effective solar tracking time available. In order to estimate it, we bootstrap fake events with guaranteed random coincidence. For each data taking run, we take events with clusters on the center chip that pass the likelihood or MLP cuts and combine these with data from the outer chips of different events. The outer GridPix ring data is sampled from /all/ events in that run, including empty events, as we would otherwise bias the coincidence rate. The vetoes (only the septem veto, only the line veto, or both) are then applied to this bootstrapped dataset. The fraction of events rejected by the veto is the random coincidence rate, because there is no physical correlation between the center cluster and the outer GridPix ring clusters. By default $\num{2000}$ such events are generated for each run. The random coincidence rate is the mean of all obtained values. Further, this approach can be used to study the effectiveness of the eccentricity cutoff $ε_{\text{cut}}$ mentioned for the line veto. A stricter eccentricity cut is expected to yield fewer random coincidences. We can then compare the obtained random coincidence rates to the real fractions of events removed by the same veto setup. The ratio of real effectiveness to random coincidence rate is a measure of the potency of the veto setup. Fig. [[fig:background:fraction_passing_line_veto_ecc_cut]] shows how the fraction of passing events changes for the line veto as a function of the cluster eccentricity cutoff.
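The bootstrapping procedure described above can be sketched in a few lines. This is a simplified, hypothetical Python illustration (the real implementation is part of ~likelihood~ in TimepixAnalysis and written in Nim); ~veto~ stands for any of the veto setups and all names are made up:

#+begin_src python
import random

def random_coincidence_rate(center_clusters, all_outer_events, veto, n=2000):
    """Estimate the random coincidence rate of a veto setup by pairing
    center clusters that passed the classifier with outer-ring data drawn
    from *all* events of the run (including empty ones, to avoid bias).
    `veto(center, outer)` returns True if the combined event is rejected."""
    rejected = 0
    for _ in range(n):
        center = random.choice(center_clusters)
        outer = random.choice(all_outer_events)  # uncorrelated by construction
        if veto(center, outer):
            rejected += 1
    # with no physical correlation, every rejection is a random coincidence
    return rejected / n
#+end_src

Per run this yields one estimate; the quoted random coincidence rate is then the mean over all runs.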
The cases of the line veto only, for fake and real data, as well as the septem veto plus line veto for the same, are shown. The fraction that passes the veto setups (y axis) drops the further we go towards a low eccentricity cut (x axis). For the real data (~Real~ suffix in the legend) the drop is _faster_ than for fake bootstrapped data (~Fake~ suffix in the legend), which means that we can use the lowest eccentricity cut possible (effectively disabling the cut at $ε_\text{cut} = 1.0$). In any case, the difference is very minor independent of the cutoff value. The septem veto without line veto is also shown in the plot (~SeptemFake~ and ~SeptemReal~) with only a single point at $ε_{\text{cut}} = 1.0$ for completeness. Table [[tab:background:veto_random_coinc_rate]] shows the percentages of clusters left over after the line veto (at $ε_\text{cut} = 1.0$) and septem veto setups, both for real data (column 'Real') and bootstrapped fake data (column 'Fake'). For fake data this corresponds to the random coincidence rate of the veto setup (strictly speaking $1 - x$ of course). For real data it is the actual percentage of clusters left, meaning the vetoes achieve a reduction to roughly $\frac{1}{10}$ of the background, depending on the setup. The fractions of fake events passing range from $\SI{78.6}{\%}$ when both outer GridPix vetoes are used, over $\SI{83.1}{\%}$ for the septem veto alone, to $\SI{85.4}{\%}$ for the line veto only. The line veto has a lower random coincidence rate, but by itself is also less efficient. This effect on the signal efficiency is one of the major reasons the choice of vetoes is not finalized outside the context of a limit calculation. #+CAPTION: Fraction of events in Run-3 data, which pass (i.e. not rejected) the line #+CAPTION: veto depending on the eccentricity cut used, which decides how eccentric a #+CAPTION: cluster needs to be in order to be used for the veto.
'Real' and 'Fake' suffixes #+CAPTION: refer to the application of the vetoes to real data or to the bootstrapped #+CAPTION: fake data discussed in the text. For fake data the percentage corresponds to the random coincidence rate. #+CAPTION: Septem veto without line veto shown as single points (~SeptemFake/Real~). #+CAPTION: #+NAME: fig:background:fraction_passing_line_veto_ecc_cut [[~/phd/Figs/background/estimateSeptemVetoRandomCoinc/fraction_passing_line_veto_ecc_cut_only_relevant.pdf]] #+CAPTION: Overview of the percentages of clusters left after the line veto and #+CAPTION: septem veto setup combinations (y = activated, n = not activated) for #+CAPTION: bootstrapped ('Fake') data and real background data. The percentages for #+CAPTION: fake data correspond to the random coincidence rate. #+NAME: tab:background:veto_random_coinc_rate #+ATTR_LATEX: :booktabs t | Septem veto | Line veto | Real [%] | Fake [%] | |-------------+-----------+---------------+---------------| | y | n | $\num{14.12}$ | $\num{83.11}$ | | n | y | $\num{25.32}$ | $\num{85.39}$ | | y | y | $\num{9.17}$ | $\num{78.63}$ | **** TODOs for this section [/] :noexport: Old text about the eccentricity cutoff fraction plot: #+begin_quote The exact choice between the purple / red pair (line veto including all clusters, even the one containing the original cluster) and the turquoise / blue pair (septem veto + line veto with only those clusters that do not contain the original; those are covered by the septem veto) is not entirely clear. Both will be investigated for their effect on the expected limit. The important point is that the fake data allows us to estimate the random coincidence rate, which needs to be treated as an additional dead time during background _and_ solar tracking time. A lower background may or may not be beneficial, compared to a higher dead time. #+end_quote Old text starting the last paragraph, when structure was different.
#+begin_quote Combining different options of the line veto and the eccentricity cut for the line veto, as well as applying both the septem and the line veto for real data as well as fake bootstrapped data we can make an informed decision about the settings to use. At the same time get an understanding for the real dead time we introduce. #+end_quote - [ ] *RECREATE THE ABOVE PLOT* -> Can be recreated with code further down using existing CSV file! -> Remove the different line veto kinds. Only leave line veto regular fake + line veto regular real, + septem veto, line + septem, eccentricity cutoff - [ ] *NEED to explain that eccentricity line veto cutoff is not used, but tested. Also NEED to obviously give the numbers for both setups.* - [ ] *NAME THE ABSOLUTE EFFICIENCIES OF EACH SETUP* - [X] *REWRITE FOR UPDATED VALUES* - [ ] *IMPORTANT:* The random coincidence we calculate here changes not only the dead time for the tracking time, but also for the background rate! As such we need to regulate both! - [ ] *REWRITE THIS!* -> Important parts are that background rates are only interesting if one understands the associated efficiencies. So need to explain that. This part should become :noexport:, but a shortened simpler version of this should remain. Old table before septem veto fixes: #+begin_quote | Septem veto | Line veto | Fake [%] | Real [%] | |-------------+-----------+---------------+---------------| | y | n | $\num{78.41}$ | $\num{23.15}$ | | n | y | $\num{86.02}$ | $\num{22.03}$ | | y | y | $\num{73.25}$ | $\num{13.68}$ | #+end_quote **** Calculate random coincidence rates and real efficiencies of veto setups :extended: :PROPERTIES: :CUSTOM_ID: sec:background:calculate_random_coincidences :END: The following code snippet runs the ~TimepixAnalysis/Analysis/ingrid/likelihood.nim~ program with different arguments in order to compute the real efficiency and random coincidence rate for different veto setups. 
For the line veto, the different eccentricity cutoffs are studied. #+begin_src nim :tangle code/analyze_random_coinc_and_efficiency_vetoes.nim import shell, strutils, strformat, os, sequtils, times # for multiprocessing import cligen / [procpool, mslice, osUt] type Veto = enum Septem, Line RealOrFake = enum Real, Fake ## Ecc = 1.0 is not in `eccs` because by default we run `ε = 1.0`. const eccStudy = @[1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0] ## Line veto kind is not needed anymore. We select the correct kind in `likelihood` ## -> lvRegular for line veto without septem ## -> lvRegularNoHLC for line veto with septem ## UseRealLayout is also set accordingly. const cmd = """ ECC_LINE_VETO_CUT=$# \ likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \ --h5out $#/lhood_2018_crAll_80eff_septem_line_ecc_cutoff_$#_$#.h5 \ --region crAll --cdlYear 2018 --readOnly --lnL --signalEfficiency 0.8 \ --septemLineVetoEfficiencyFile $# \ --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 $# """ proc toName(rf: RealOrFake): string = if rf == Real: "" else: "fake_events" proc toName(vetoes: set[Veto], rf: RealOrFake): string = (toSeq(vetoes).mapIt($it).join("_") & "_" & toName(rf)).strip(chars = {'_'}) proc toCommand(rf: RealOrFake): string = if rf == Real: "" else: "--estimateRandomCoinc" proc toCommand(veto: Veto): string = case veto of Septem: "--septemveto" of Line: "--lineveto" proc toCommand(vetoes: set[Veto], rf: RealOrFake): string = (toSeq(vetoes).mapIt(toCommand it).join(" ") & " " & toCommand(rf)).strip proc toEffFile(vetoes: set[Veto], rf: RealOrFake, ecc: float, outpath: string): string = result = &"{outpath}/septem_veto_before_after_septem_line_ecc_cutoff_{ecc}_{toName(vetoes, rf)}.txt" ## `flatBuffers` to copy objects containing string to `procpool` import flatBuffers type Command = object cmd: string outputFile: string proc runCommand(r, w: cint) = let o = open(w, fmWrite) for cmdBuf in getLenPfx[int](r.open): let cmdObj = flatTo[Command](fromString(cmdBuf)) let
cmdStr = cmdObj.cmd let (res, err) = shellVerbose: one: cd /tmp ($cmdStr) # write the program output as a logfile writeFile(cmdObj.outputFile, res) o.urite "Processing done for: " & $cmdObj proc main(septem = false, line = false, septemLine = false, eccs = false, dryRun = false, outpath = "/tmp/", jobs = 6) = var vetoSetups: seq[set[Veto]] if septem: vetoSetups.add {Septem} if line: vetoSetups.add {Line} if septemLine: vetoSetups.add {Septem, Line} # First run individual at `ε = 1.0` var cmds = newSeq[Command]() var eccVals = @[1.0] if eccs: eccVals.add eccStudy for rf in RealOrFake: for ecc in eccVals: for vetoes in vetoSetups: if Line in vetoes or ecc == 1.0: ## If only septem veto do not perform eccentricity study! let final = cmd % [ $ecc, outpath, $ecc, toName(vetoes, rf), toEffFile(vetoes, rf, ecc, outpath), toCommand(vetoes, rf) ] let outputFile = &"{outpath}/logL_output_septem_line_ecc_cutoff_$#_$#.txt" % [$ecc, toName(vetoes, rf)] cmds.add Command(cmd: final, outputFile: outputFile) echo "Commands to run:" for cmd in cmds: echo "\tCommand: ", cmd.cmd echo "\t\tOutput: ", cmd.outputFile # now run if desired if not dryRun: # 1. fill the channel completely (so workers simply work until channel empty, then stop let t0 = epochTime() let cmdBufs = cmds.mapIt(asFlat(it).toString()) var pp = initProcPool(runCommand, framesLenPfx, jobs) var readRes = proc(s: MSlice) = echo $s pp.evalLenPfx cmdBufs, readRes echo "Running all commands took: ", epochTime() - t0, " s" when isMainModule: import cligen dispatch main #+end_src #+RESULTS: | Commands | to | run: | | | | | Running | all | commands | took: | 0.0008711814880371094 | s | To run it: #+begin_src sh :dir ~/phd/ code/analyze_random_coinc_and_efficiency_vetoes \ --septem --line --septemLine --eccs \ --outpath ~/phd/resources/estimateRandomCoinc/ \ --jobs 16 \ --dryRun #+end_src (remove the ~--dryRun~ to actually run it! 
This way it just prints the commands it would run) Note that it uses much less RAM than regular ~likelihood~. So we can get away with 16 jobs. <2023-11-20 Mon 11:29>: I'll rerun the code now with all eccentricities. I remove the eccentricity 1.0 values, because they were produced before we fixed the septem veto geometry stuff! -> Rerun finished. Now we need to parse the data and compute the results from the output text files. Let's calculate the fraction in all cases: #+begin_src nim :results drawer :tangle code/analyze_random_coincidence_results.nim import strutils proc parseFile(fname: string): float = var lines = fname.readFile.strip.splitLines() var line = 0 var numRuns = 0 var outputs = 0 # if file has more than 68 lines, remove everything before, as that means # those were from a previous run if lines.len > 68: lines = lines[^68 .. ^1] doAssert lines.len == 68 while line < lines.len: if lines[line].len == 0: break # parse input # `Septem events before: 1069 (L,F) = (false, false)` let input = lines[line].split(':')[1].strip.split()[0].parseInt # parse output # `Septem events after fake cut: 137` inc line let output = lines[line].split(':')[1].strip.parseInt result += output.float / input.float outputs += output inc numRuns inc line echo "\tMean output = ", outputs.float / numRuns.float result = result / numRuns.float # now all files in our eccentricity cut run directory const path = "/home/basti/phd/resources/estimateRandomCoinc/" import std / [os, parseutils] import ggplotnim import strscans proc parseEccentricityCutoff(f: string): float = let (success, _, ecc) = scanTuple(f, "$+ecc_cutoff_$f_") result = ecc proc determineType(f: string): string = ## I'm sorry for this. 
:) if "Septem_Line" in f: result.add "SeptemLine" elif "Septem" in f: result.add "Septem" elif "Line" in f: result.add "Line" if "_fake_events.txt" in f: result.add "Fake" else: result.add "Real" proc hasSeptem(f: string): bool = "Septem" in f proc hasLine(f: string): bool = "Line" in f proc isFake(f: string): string = if "fake_events" in f: "Fake" else: "Real" var df = newDataFrame() # walk all files and determine the type for f in walkFiles(path / "septem_veto_before_after*.txt"): echo "File: ", f let frac = parseFile(f) let eccCut = parseEccentricityCutoff(f) echo "\tFraction of events left = ", frac let typ = determineType(f) echo "\tFraction of events left = ", frac df.add toDf({"Type" : typ, "Septem" : hasSeptem(f), "Line" : hasLine(f), "Fake" : isFake(f), "ε_cut" : eccCut, "FractionPass" : frac}) # Now write the table we want to use in the thesis for the efficiencies & random coinc # rate import std / strformat proc convert(x: float): string = let s = &"{x * 100.0:.2f}" result = r"$\num{" & s & "}$" echo df.filter(f{`ε_cut` == 1.0}) .mutate(f{float -> string: "FractionPass" ~ convert(idx("FractionPass"))}) .drop("Type", "ε_cut") .spread("Fake", "FractionPass").toOrgTable() # And finally create the plots and output CSV file if true: df.writeCsv("/home/basti/phd/resources/septem_line_random_coincidences_ecc_cut.csv", precision = 8) block PlotFromCsv: block OldPlot: let oldFile = "/home/basti/org/resources/septem_line_random_coincidences_ecc_cut.csv" if fileExists(oldFile): let df = readCsv(oldFile) .filter(f{`Type` notin ["LinelvRegularNoHLCReal", "LinelvRegularNoHLCFake"]}) .mutate(f{string: "Type" ~ `Type`.replace("lvRegular", "").replace("NoHLC", "")}) ggplot(df, aes("ε_cut", "FractionPass", color = "Type")) + geom_point() + ggtitle("Fraction of events passing line veto based on ε cutoff") + #margin(right = 9) + themeLatex(fWidth = 0.9, width = 600, baseTheme = singlePlot) + 
ggsave("Figs/background/estimateSeptemVetoRandomCoinc/fraction_passing_line_veto_ecc_cut_only_relevant.pdf", width = 600, height = 420, useTeX = true, standalone = true) #ggsave("/tmp/fraction_passing_line_veto_ecc_cut.pdf", width = 800, height = 480) block NewPlot: ggplot(df, aes("ε_cut", "FractionPass", color = "Type")) + geom_point() + ggtitle("Fraction of events passing line veto based on ε cutoff") + #margin(right = 9) + margin(right = 5.5) + xlab("Eccentricity cut 'ε_cut'") + ylab("Fraction passing [%]") + themeLatex(fWidth = 0.9, width = 600, baseTheme = singlePlot) + ggsave("Figs/background/estimateSeptemVetoRandomCoinc/fraction_passing_line_veto_ecc_cut_only_relevant.pdf", width = 600, height = 420, useTeX = true, standalone = true) ## XXX: we probably don't need the following plot for the real data, as the eccentricity ## cut does not cause anything to get worse at lower values. Real improvement better than ## fake coincidence rate. #df = df.spread("Type", "FractionPass").mutate(f{float: "Ratio" ~ `Real` / `Fake`}) #ggplot(df, aes("ε_cut", "Ratio")) + # geom_point() + # ggtitle("Ratio of fraction of events passing line veto real/fake based on ε cutoff") + # #ggsave("Figs/background/estimateSeptemVetoRandomCoinc/ratio_real_fake_fraction_passing_line_veto_ecc_cut.pdf") # ggsave("/tmp/ratio_real_fake_fraction_passing_line_veto_ecc_cut.pdf") #+end_src #+RESULTS: :results: File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.0_Line.txt Mean output = 206.4705882352941 Fraction of events left = 0.2532210933821111 Fraction of events left = 0.2532210933821111 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.0_Line_fake_events.txt Mean output = 1707.823529411765 Fraction of events left = 0.8539117647058824 Fraction of events left = 0.8539117647058824 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.0_Septem.txt 
Mean output = 116.6176470588235 Fraction of events left = 0.1411615828924652 Fraction of events left = 0.1411615828924652 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.0_Septem_Line.txt Mean output = 75.47058823529412 Fraction of events left = 0.09174300323817278 Fraction of events left = 0.09174300323817278 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.0_Septem_Line_fake_events.txt Mean output = 1572.558823529412 Fraction of events left = 0.7862794117647059 Fraction of events left = 0.7862794117647059 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.0_Septem_fake_events.txt Mean output = 1662.117647058823 Fraction of events left = 0.8310588235294116 Fraction of events left = 0.8310588235294116 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.1_Line.txt Mean output = 207.2058823529412 Fraction of events left = 0.2539920546337319 Fraction of events left = 0.2539920546337319 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.1_Line_fake_events.txt Mean output = 1708.911764705882 Fraction of events left = 0.854455882352941 Fraction of events left = 0.854455882352941 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.1_Septem_Line.txt Mean output = 75.55882352941177 Fraction of events left = 0.09183984521917411 Fraction of events left = 0.09183984521917411 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.1_Septem_Line_fake_events.txt Mean output = 1573.411764705882 Fraction of events left = 0.7867058823529411 Fraction of events left = 0.7867058823529411 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.2_Line.txt Mean output = 208.5294117647059 
Fraction of events left = 0.2554137342850585 Fraction of events left = 0.2554137342850585 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.2_Line_fake_events.txt Mean output = 1711.088235294118 Fraction of events left = 0.8555441176470588 Fraction of events left = 0.8555441176470588 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.2_Septem_Line.txt Mean output = 75.64705882352941 Fraction of events left = 0.09202286831585582 Fraction of events left = 0.09202286831585582 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.2_Septem_Line_fake_events.txt Mean output = 1574.85294117647 Fraction of events left = 0.7874264705882352 Fraction of events left = 0.7874264705882352 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.3_Line.txt Mean output = 210.2647058823529 Fraction of events left = 0.2574040555019554 Fraction of events left = 0.2574040555019554 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.3_Line_fake_events.txt Mean output = 1713.676470588235 Fraction of events left = 0.8568382352941176 Fraction of events left = 0.8568382352941176 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.3_Septem_Line.txt Mean output = 75.91176470588235 Fraction of events left = 0.09236293067907482 Fraction of events left = 0.09236293067907482 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.3_Septem_Line_fake_events.txt Mean output = 1576.411764705882 Fraction of events left = 0.7882058823529414 Fraction of events left = 0.7882058823529414 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.4_Line.txt Mean output = 212.8823529411765 Fraction of events left = 
0.2605995866710243 Fraction of events left = 0.2605995866710243 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.4_Line_fake_events.txt Mean output = 1717.323529411765 Fraction of events left = 0.8586617647058823 Fraction of events left = 0.8586617647058823 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.4_Septem_Line.txt Mean output = 76.32352941176471 Fraction of events left = 0.09278632207926356 Fraction of events left = 0.09278632207926356 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.4_Septem_Line_fake_events.txt Mean output = 1578.294117647059 Fraction of events left = 0.7891470588235293 Fraction of events left = 0.7891470588235293 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.5_Line.txt Mean output = 215.7647058823529 Fraction of events left = 0.2638551947156842 Fraction of events left = 0.2638551947156842 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.5_Line_fake_events.txt Mean output = 1721.029411764706 Fraction of events left = 0.8605147058823531 Fraction of events left = 0.8605147058823531 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.5_Septem_Line.txt Mean output = 76.76470588235294 Fraction of events left = 0.09315280077085082 Fraction of events left = 0.09315280077085082 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.5_Septem_Line_fake_events.txt Mean output = 1580.205882352941 Fraction of events left = 0.7901029411764706 Fraction of events left = 0.7901029411764706 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.6_Line.txt Mean output = 219.3529411764706 Fraction of events left = 0.2682870402202033 Fraction of events 
left = 0.2682870402202033 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.6_Line_fake_events.txt Mean output = 1724.617647058823 Fraction of events left = 0.8623088235294117 Fraction of events left = 0.8623088235294117 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.6_Septem_Line.txt Mean output = 77.02941176470588 Fraction of events left = 0.09352258111195 Fraction of events left = 0.09352258111195 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.6_Septem_Line_fake_events.txt Mean output = 1581.970588235294 Fraction of events left = 0.7909852941176471 Fraction of events left = 0.7909852941176471 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.7_Line.txt Mean output = 222.7941176470588 Fraction of events left = 0.2724287318429282 Fraction of events left = 0.2724287318429282 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.7_Line_fake_events.txt Mean output = 1729.294117647059 Fraction of events left = 0.864647058823529 Fraction of events left = 0.864647058823529 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.7_Septem_Line.txt Mean output = 77.5 Fraction of events left = 0.0941890725792015 Fraction of events left = 0.0941890725792015 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.7_Septem_Line_fake_events.txt Mean output = 1583.823529411765 Fraction of events left = 0.7919117647058823 Fraction of events left = 0.7919117647058823 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.8_Line.txt Mean output = 226.2941176470588 Fraction of events left = 0.2767013938351873 Fraction of events left = 0.2767013938351873 File: 
/home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.8_Line_fake_events.txt Mean output = 1733.823529411765 Fraction of events left = 0.8669117647058823 Fraction of events left = 0.8669117647058823 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.8_Septem_Line.txt Mean output = 78.05882352941177 Fraction of events left = 0.09518435513087023 Fraction of events left = 0.09518435513087023 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.8_Septem_Line_fake_events.txt Mean output = 1586.235294117647 Fraction of events left = 0.7931176470588235 Fraction of events left = 0.7931176470588235 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.9_Line.txt Mean output = 230.6176470588235 Fraction of events left = 0.2822650687339636 Fraction of events left = 0.2822650687339636 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.9_Line_fake_events.txt Mean output = 1738.264705882353 Fraction of events left = 0.8691323529411764 Fraction of events left = 0.8691323529411764 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.9_Septem_Line.txt Mean output = 78.70588235294117 Fraction of events left = 0.09597833815174873 Fraction of events left = 0.09597833815174873 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_1.9_Septem_Line_fake_events.txt Mean output = 1588.176470588235 Fraction of events left = 0.7940882352941175 Fraction of events left = 0.7940882352941175 File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_2.0_Line.txt Mean output = 235.3235294117647 Fraction of events left = 0.288884279607195 Fraction of events left = 0.288884279607195 File: 
/home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_2.0_Line_fake_events.txt
Mean output = 1742.85294117647
Fraction of events left = 0.8714264705882353
Fraction of events left = 0.8714264705882353
File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_2.0_Septem_Line.txt
Mean output = 79.58823529411765
Fraction of events left = 0.09703178862057435
Fraction of events left = 0.09703178862057435
File: /home/basti/phd/resources/estimateRandomCoinc/septem_veto_before_after_septem_line_ecc_cutoff_2.0_Septem_Line_fake_events.txt
Mean output = 1590.0
Fraction of events left = 0.7949999999999999
Fraction of events left = 0.7949999999999999
| Septem | Line  | Real          | Fake          |
|--------|-------|---------------|---------------|
| false  | true  | $\num{25.32}$ | $\num{85.39}$ |
| true   | false | $\num{14.12}$ | $\num{83.11}$ |
| true   | true  | $\num{9.17}$  | $\num{78.63}$ |
:end:

#+begin_src sh :dir ~/phd
ESCAPE_LATEX=true code/analyze_random_coincidence_results
#+end_src

Old values used in ~mcmc_limit_calculation~:
#+begin_src nim
septemVetoRandomCoinc = 0.7841029411764704, # only septem veto random coinc based on bootstrapped fake data
lineVetoRandomCoinc = 0.8601764705882353, # lvRegular based on bootstrapped fake data
septemLineVetoRandomCoinc = 0.732514705882353, # lvRegularNoHLC based on bootstrapped fake data
#+end_src
New values:
#+begin_src nim
septemVetoRandomCoinc = 0.8311, # only septem veto random coinc based on bootstrapped fake data
lineVetoRandomCoinc = 0.8539, # lvRegular based on bootstrapped fake data
septemLineVetoRandomCoinc = 0.7863, # lvRegularNoHLC based on bootstrapped fake data
#+end_src

**** Initial thoughts and explanations :extended:

This way we bootstrap a larger number of events than otherwise available, knowing that the geometric data cannot be correlated. Any vetoing in these cases therefore *must* be a random coincidence.
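The pairing logic behind this bootstrapping can be sketched in a few lines. This is a minimal illustration, not the actual ~likelihood~ implementation; all names (~FakeEvent~, ~bootstrapFakeEvents~) are hypothetical, and the event pools are just integer indices:

```nim
import std / [random, sequtils]

type
  FakeEvent = object
    centerCluster: int   # event supplying the cluster on the center chip
    outerEvent: int      # *different* event supplying the outer-chip data

proc bootstrapFakeEvents(centerEvents, outerEvents: seq[int],
                         nFake: int, rnd: var Rand): seq[FakeEvent] =
  ## Pair each sampled center cluster with outer-chip data from a different
  ## event. As the two cannot be physically correlated, any veto that still
  ## triggers on such a fake event is by construction a random coincidence.
  for _ in 0 ..< nFake:
    let c = rnd.sample(centerEvents)
    var o = rnd.sample(outerEvents)
    while o == c:               # enforce different source events
      o = rnd.sample(outerEvents)
    result.add FakeEvent(centerCluster: c, outerEvent: o)

var rnd = initRand(42)
let fakes = bootstrapFakeEvents(toSeq(0 ..< 100), toSeq(0 ..< 100), 2000, rnd)
doAssert fakes.len == 2000
doAssert fakes.allIt(it.centerCluster != it.outerEvent)
```

The fraction of such fake events surviving the vetoes then directly estimates the random coincidence rate.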
As the ~likelihood~ tool effectively already uses an index to map the cluster indices of each chip to their respective event number, we implemented this there (~--estimateRandomCoinc~) by rewriting that index. It is a good idea to also run it together with the ~--plotseptem~ option, to actually look at some events and verify with your own eyes that the events are indeed "correct" (i.e. not the original ones). You will note that many events "clearly" look as if the bootstrapping is not working correctly, because they appear "obviously correlated". To convince yourself that this is indeed just coincidence, you can run the tool with the ~--estFixedEvents~ option, which bootstraps events using a fixed cluster in the center for each run. Looking through those event displays demonstrates that, unfortunately, random coincidences can look convincing even to our own eyes.
#+begin_src sh
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out /tmp/lhood_2018_crAll_80eff_septem_fake.h5 \
    --region crAll --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --septemveto --estimateRandomCoinc
#+end_src
which writes the file ~/tmp/septem_fake_veto.txt~, which for this case is found in
[[file:~/org/resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_septem_old.txt]]
(note: the updated numbers from the latest state of the code are in the same file without the ~_old~ suffix).

Mean value and fraction (from the script in the next section):
File: /home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_septem.txt
Mean output = 1674.705882352941
Fraction of events left = 0.8373529411764704

From this file the method typically removes a bit more than 300 out of 2000 bootstrapped fake events. This implies a random coincidence rate of about 16% (or effectively a further 16% reduction in efficiency / 16% increase in dead time).
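The quoted fraction and coincidence rate follow directly from the mean number of surviving events and the 2000 bootstrapped fake events per run; a quick sanity check:

```nim
# Numbers taken from the output above; 2000 bootstrapped fake events per run.
let meanLeft = 1674.705882352941       # mean events left after the septem veto
let fracLeft = meanLeft / 2000.0
doAssert abs(fracLeft - 0.8373529411764704) < 1e-12
let coincRate = 1.0 - fracLeft         # random coincidence rate of the veto
doAssert coincRate > 0.16 and coincRate < 0.17
```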
Of course this does not even include the line veto, which will drop it further. Before we combine the two, let's run the line veto _alone_:
#+begin_src sh
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out /tmp/lhood_2018_crAll_80eff_line_fake.h5 \
    --region crAll --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --lineveto --estimateRandomCoinc
#+end_src
This results in:
[[~/org/resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_line.txt]]
Mean value:
File: /home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_line.txt
Mean output = 1708.382352941177
Fraction of events left = 0.8541911764705882

And finally both together:
#+begin_src sh
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out /tmp/lhood_2018_crAll_80eff_septem_line_fake.h5 \
    --region crAll --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --septemveto --lineveto --estimateRandomCoinc
#+end_src
which generated the following output:
[[file:~/org/resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_septem_line.txt]]
Mean value:
File: /home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_septem_line.txt
Mean output = 1573.676470588235
Fraction of events left = 0.7868382352941178

This comes out to a fraction of 78.68% of events left after running both vetoes on our bootstrapped fake events. Combined with a software efficiency of ε = 80%, the total efficiency would be $ε_\text{total} = 0.8 · 0.7868 = 0.629$, so about 63%.

Finally, let's prepare some event displays for the cases of using different center clusters and of using the same one. We run the ~likelihood~ tool with the ~--plotSeptem~ option and stop the program once we have enough plots.
In this context note the energy cut range for the ~--plotseptem~ option (by default set to 5 keV), adjustable via the ~PLOT_SEPTEM_E_CUTOFF~ environment variable.

First with different center clusters:
#+begin_src sh
PLOT_SEPTEM_E_CUTOFF=10.0 likelihood \
    -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out /tmp/dummy.h5 \
    --region crAll --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --septemveto --lineveto --estimateRandomCoinc --plotseptem
#+end_src
which are wrapped up using ~pdfunite~ and stored in:
[[file:Figs/background/estimateSeptemVetoRandomCoinc/fake_events_septem_line_veto_all_outer_events.pdf]]

and now with fixed clusters:
#+begin_src sh
PLOT_SEPTEM_E_CUTOFF=10.0 likelihood \
    -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out /tmp/dummy.h5 \
    --region crAll --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --septemveto --lineveto --estimateRandomCoinc --estFixedEvent --plotseptem
#+end_src
(Note that the chosen cluster can be changed to a different index via ~SEPTEM_FAKE_FIXED_CLUSTER~; by default it uses ~5~.) These events are here:
[[file:Figs/background/estimateSeptemVetoRandomCoinc/fake_events_fixed_cluster_septem_line_veto_all_outer_events.pdf]]

**** TODO Rewrite the whole estimation to a proper program [/] :extended:

*IMPORTANT* That program should call ~likelihood~ on its own, and ~likelihood~ needs to be rewritten such that it outputs the septem random coincidence (or real removal) into the H5 output file. Maybe just add a type that stores the information, which we serialize. With the serialized info about the veto settings we can then reconstruct in code what is what. Or, possibly better, write the output to a separate file so that we don't store all the cluster data. Then rewrite the code snippet in the section below that prints the information about the random coincidence rates and creates the plot.
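Such a serializable type could look roughly as follows. This is a sketch only; the names (~VetoSettings~, ~VetoResult~, ~fractionPass~) are hypothetical, not the actual ~likelihood~ code:

```nim
type
  VetoSettings = object            # which vetoes were active and how
    septemVeto, lineVeto: bool
    estimateRandomCoinc: bool      # bootstrapped fake events or real data?
    lineVetoKind: string           # e.g. "lvRegular" or "lvRegularNoHLC"
    eccLineVetoCut: float          # ε cutoff for tracks in the line veto
  VetoResult = object
    settings: VetoSettings
    eventsBefore, eventsAfter: int # counts before / after the veto cut

proc fractionPass(r: VetoResult): float =
  ## fraction of events surviving the vetoes
  r.eventsAfter / r.eventsBefore

let res = VetoResult(settings: VetoSettings(septemVeto: true, lineVeto: true,
                                            estimateRandomCoinc: true,
                                            lineVetoKind: "lvRegularNoHLC",
                                            eccLineVetoCut: 1.0),
                     eventsBefore: 2000, eventsAfter: 1573)
doAssert abs(res.fractionPass - 0.7865) < 1e-9
```

Writing one such record per veto configuration (instead of the appended text file) would make the downstream analysis script a simple deserialization.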
**** Run a whole bunch more cases :extended:

The below is running now <2023-02-10 Fri 01:43>. Still running as of <2023-02-10 Fri 11:55>, damn this is slow.
- [X] *INVESTIGATE PERFORMANCE AFTER IT'S DONE*
- [ ] We should be able to run ~4 (depending on choice even more) in parallel, no?

#+begin_src nim :tangle code/analyze_line_veto_different_ecc_cutoff.nim
import shell, strutils, os
#let vals = @[1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0]
#let vals = @[1.0, 1.1]
let vals = @[1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0]
#let vetoes = @["--lineveto", "--lineveto --estimateRandomCoinc"]
let vetoes = @["--septemveto --lineveto", "--septemveto --lineveto --estimateRandomCoinc"]
## XXX: ADD CODE DIFFERENTIATING SEPTEM + LINE & LINE ONLY IN NAMES AS WELL!
#const lineVeto = "lvRegular"
const lineVeto = "lvRegularNoHLC"
let cmd = """
LINE_VETO_KIND=$# \
ECC_LINE_VETO_CUT=$# \
USE_REAL_LAYOUT=true \
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out /t/lhood_2018_crAll_80eff_septem_line_ecc_cutoff_$#_$#_real_layout$#.h5 \
    --region crAll --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 $#
"""
proc toName(veto: string): string =
  (if "estimateRandomCoinc" in veto: "_fake_events" else: "")
for val in vals:
  for veto in vetoes:
    let final = cmd % [ lineVeto, $val, $val, lineVeto, toName(veto), $veto ]
    let (res, err) = shellVerbose:
      one:
        cd /tmp
        ($final)
    writeFile("/tmp/logL_output_septem_line_ecc_cutoff_$#_$#_real_layout$#.txt" % [$val, lineVeto, toName(veto)], res)
    let outpath = "/home/basti/org/resources/septem_veto_random_coincidences/autoGen/"
    let outfile = "septem_veto_before_after_septem_line_ecc_cutoff_$#_$#_real_layout$#.txt" % [$val, lineVeto, toName(veto)]
    copyFile("/tmp/septem_veto_before_after.txt", outpath / outfile)
    removeFile("/tmp/septem_veto_before_after.txt") # remove file to not append more and more to file
#+end_src

It has finally finished some time before <2023-02-10 Fri 20:02>. Holy moly how slow.
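Regarding the parallelism TODO above: since the loop just shells out, ~execProcesses~ from ~std/osproc~ could run the command list with a bounded number of workers. A minimal sketch, where the ~echo~ merely stands in for the real ~likelihood~ invocations:

```nim
import std / [osproc, strutils]

let vals = @[1.0, 1.1, 1.2]   # subset for illustration
var cmds: seq[string]
for val in vals:
  # hypothetical placeholder; the real command sets the env vars / flags as above
  cmds.add "echo running ecc_cutoff $#" % [$val]
# run up to 4 commands concurrently via the shell; returns the highest exit code
let maxErr = execProcesses(cmds, options = {poEvalCommand, poStdErrToStdOut}, n = 4)
doAssert maxErr == 0
```

One caveat: all parallel runs would need distinct output files, as they currently all append to the same ~/tmp/septem_veto_before_after.txt~.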
We will keep the generated ~lhood_*~ and ~logL_output_*~ files in
[[file:~/org/resources/septem_veto_random_coincidences/autoGen/]]
together with the ~septem_veto_before_after_*~ files. See the code in one of the next sections for the 'analysis' of this dataset.

***** TODOs for this section [/] :noexport:

- [X] *RERUN THE ABOVE AFTER LINE VETO BUGFIX & PERF IMPROVEMENTS*
- [ ] Rerun everything in check for thesis final.

**** Number of events removed in real usage :extended:

- [ ] *MAYBE EXTEND CODE SNIPPET ABOVE TO ALLOW CHOOSING BETWEEN ε_cut ANALYSIS AND REAL FRACTIONS*

As a reference, let's quickly run the code also for the normal use case where we don't do any bootstrapping:
#+begin_src sh
likelihood \
    -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out /tmp/dummy_real.h5 \
    --region crAll --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --septemveto
#+end_src
which results in
[[file:~/org/resources/septem_veto_random_coincidences/septem_veto_before_after_only_septem.txt]]

Next the line veto alone:
#+begin_src sh
likelihood \
    -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out /tmp/dummy_real.h5 \
    --region crAll --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --lineveto
#+end_src
which results in:
[[file:~/org/resources/septem_veto_random_coincidences/septem_veto_before_after_only_line.txt]]

And finally both together:
#+begin_src sh
likelihood \
    -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out /tmp/dummy_real_2.h5 \
    --region crAll --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --septemveto --lineveto
#+end_src
and this finally yields:
[[file:~/org/resources/septem_veto_random_coincidences/septem_veto_before_after_septem_line.txt]]

And further for reference, let's compute the fake rate when only using the septem veto (as we have no eccentricity dependence, hence a single value):
#+begin_src sh
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out
/tmp/lhood_2018_crAll_80eff_septem_real_layout.h5 \ --region crAll \ --cdlYear 2018 \ --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \ --septemveto \ --estimateRandomCoinc #+end_src Run the line veto with new features: - real septemboard layout - eccentricity cut off for tracks participating (ecc > 1.6) #+begin_src sh LINE_VETO_KIND=lvRegularNoHLC \ ECC_LINE_VETO_CUT=1.6 \ USE_REAL_LAYOUT=true \ likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \ --h5out /tmp/lhood_2018_crAll_80eff_line_ecc_cutof_1.6_real_layout.h5 \ --region crAll \ --cdlYear 2018 \ --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \ --lineveto #+end_src - [ ] *WE SHOULD REALLY LOOK INTO RUNNING THE LINE VETO ONLY USING DIFFERENT ε CUTOFFS!* -> Then compare the real application with the fake bootstrap application and see if there is a sweet spot in terms of S/N. Let's calculate the fraction in all cases: #+begin_src nim :results drawer import strutils let files = @["/home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_only_septem.txt", "/home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_only_line.txt", "/home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_septem_line.txt", "/home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_septem.txt", "/home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_line.txt", "/home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_septem_line.txt"] proc parseFile(fname: string): float = var lines = fname.readFile.strip.splitLines() var line = 0 var numRuns = 0 var outputs = 0 # if file has more than 68 lines, remove everything before, as that means # those were from a previous run if lines.len > 68: lines = lines[^68 .. 
^1] doAssert lines.len == 68 while line < lines.len: if lines[line].len == 0: break # parse input # `Septem events before: 1069 (L,F) = (false, false)` let input = lines[line].split(':')[1].strip.split()[0].parseInt # parse output # `Septem events after fake cut: 137` inc line let output = lines[line].split(':')[1].strip.parseInt result += output.float / input.float outputs += output inc numRuns inc line echo "\tMean output = ", outputs.float / numRuns.float result = result / numRuns.float # first the predefined files: for f in files: echo "File: ", f echo "\tFraction of events left = ", parseFile(f) # now all files in our eccentricity cut run directory const path = "/home/basti/org/resources/septem_veto_random_coincidences/autoGen/" import std / [os, parseutils, strutils] import ggplotnim proc parseEccentricityCutoff(f: string): float = let str = "ecc_cutoff_" let startIdx = find(f, str) + str.len var res = "" let stopIdx = parseUntil(f, res, until = "_", start = startIdx) echo res result = parseFloat(res) proc determineType(f: string): string = ## I'm sorry for this. :) if "only_line_ecc" in f: result.add "Line" elif "septem_line_ecc" in f: result.add "SeptemLine" else: doAssert false, "What? " & $f if "lvRegularNoHLC" in f: result.add "lvRegularNoHLC" elif "lvRegular" in f: result.add "lvRegular" else: # also lvRegularNoHLC, could use else above, but clearer this way. 
Files result.add "lvRegularNoHLC" # without veto kind are older, therefore no HLC if "_fake_events.txt" in f: result.add "Fake" else: result.add "Real" var df = newDataFrame() # walk all files and determine the type for f in walkFiles(path / "septem_veto_before_after*.txt"): echo "File: ", f let frac = parseFile(f) let eccCut = parseEccentricityCutoff(f) let typ = determineType(f) echo "\tFraction of events left = ", frac df.add toDf({"Type" : typ, "ε_cut" : eccCut, "FractionPass" : frac}) df.writeCsv("/home/basti/org/resources/septem_line_random_coincidences_ecc_cut.csv", precision = 8) ggplot(df, aes("ε_cut", "FractionPass", color = "Type")) + geom_point() + ggtitle("Fraction of events passing line veto based on ε cutoff") + margin(right = 9) + ggsave("Figs/background/estimateSeptemVetoRandomCoinc/fraction_passing_line_veto_ecc_cut.pdf", width = 800, height = 480) #ggsave("/tmp/fraction_passing_line_veto_ecc_cut.pdf", width = 800, height = 480) ## XXX: we probably don't need the following plot for the real data, as the eccentricity ## cut does not cause anything to get worse at lower values. Real improvement better than ## fake coincidence rate. 
#df = df.spread("Type", "FractionPass").mutate(f{float: "Ratio" ~ `Real` / `Fake`}) #ggplot(df, aes("ε_cut", "Ratio")) + # geom_point() + # ggtitle("Ratio of fraction of events passing line veto real/fake based on ε cutoff") + # #ggsave("Figs/background/estimateSeptemVetoRandomCoinc/ratio_real_fake_fraction_passing_line_veto_ecc_cut.pdf") # ggsave("/tmp/ratio_real_fake_fraction_passing_line_veto_ecc_cut.pdf") #+end_src #+RESULTS: :results: File: /home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_only_septem.txt Mean output = 129.6176470588235 Fraction of events left = 0.1482137110344671 File: /home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_only_line.txt Mean output = 279.9117647058824 Fraction of events left = 0.3226213387764036 File: /home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_septem_line.txt Mean output = 86.79411764705883 Fraction of events left = 0.09919758241761836 File: /home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_septem.txt Mean output = 1568.205882352941 Fraction of events left = 0.7841029411764704 File: /home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_line.txt Mean output = 1708.382352941177 Fraction of events left = 0.8541911764705882 File: /home/basti/org/resources/septem_veto_random_coincidences/septem_veto_before_after_fake_events_septem_line.txt Mean output = 1573.676470588235 Fraction of events left = 0.7868382352941178 File: /home/basti/org/resources/septem_veto_random_coincidences/autoGen/septem_veto_before_after_only_line_ecc_cutoff_1.0_lvRegular_real_layout.txt Mean output = 193.3235294117647 1.0 Fraction of events left = 0.2213710741366144 File: /home/basti/org/resources/septem_veto_random_coincidences/autoGen/septem_veto_before_after_only_line_ecc_cutoff_1.0_lvRegular_real_layout_fake_events.txt Mean output = 1720.35294117647 1.0 Fraction of 
events left = 0.8601764705882353 File: /home/basti/org/resources/septem_veto_random_coincidences/autoGen/septem_veto_before_after_only_line_ecc_cutoff_1.0_real_layout.txt Mean output = 610.3529411764706 1.0 Fraction of events left = 0.690861976619262 File: /home/basti/org/resources/septem_veto_random_coincidences/autoGen/septem_veto_before_after_only_line_ecc_cutoff_1.0_real_layout_fake_events.txt Mean output = 1834.0 1.0 Fraction of events left = 0.9170000000000001 File: /home/basti/org/resources/septem_veto_random_coincidences/autoGen/septem_veto_before_after_only_line_ecc_cutoff_1.1_lvRegular_real_layout.txt Mean output = 232.0294117647059 1.1 Fraction of events left = 0.2658086845496733 File: /home/basti/org/resources/septem_veto_random_coincidences/autoGen/septem_veto_before_after_only_line_ecc_cutoff_1.1_lvRegular_real_layout_fake_events.txt Mean output = 1740.617647058823 1.1 Fraction of events left = 0.8703088235294119 File: /home/basti/org/resources/septem_veto_random_coincidences/autoGen/septem_veto_before_after_only_line_ecc_cutoff_1.1_real_layout.txt Mean output = 625.7058823529412 1.1 Fraction of events left = 0.7079137296253094 File: /home/basti/org/resources/septem_veto_random_coincidences/autoGen/septem_veto_before_after_only_line_ecc_cutoff_1.1_real_layout_fake_events.txt Mean output = 1848.705882352941 1.1 Fraction of events left = 0.9243529411764705 File: /home/basti/org/resources/septem_veto_random_coincidences/autoGen/septem_veto_before_after_only_line_ecc_cutoff_1.2_lvRegular_real_layout.txt Mean output = 267.1176470588235 1.2 Fraction of events left = 0.3053849927113212 File: /home/basti/org/resources/septem_veto_random_coincidences/autoGen/septem_veto_before_after_only_line_ecc_cutoff_1.2_lvRegular_real_layout_fake_events.txt Mean output = 1758.85294117647 1.2 Fraction of events left = 0.8794264705882355 File: /home/basti/org/resources/septem_veto_random_coincidences/autoGen/septem_veto_before_after_only_line_ecc_cutoff_1.2_real_layout.txt 
Fractions of events left after the veto (mean number of remaining events in parentheses), extracted from the ~septem_veto_before_after_*_ecc_cutoff_*.txt~ files in ~/home/basti/org/resources/septem_veto_random_coincidences/autoGen/~:

Line veto only (~only_line~ files; 'real' = real events, 'fake' = bootstrapped fake events):
| ε_cut | lvRegular, real | lvRegular, fake | default kind, real | default kind, fake |
|-------+-----------------+-----------------+--------------------+--------------------|
|   1.2 |                 |                 | 0.7226 (639.1)     | 0.9307 (1861.4)    |
|   1.3 | 0.3454 (301.7)  | 0.8874 (1774.7) | 0.7387 (653.3)     | 0.9361 (1872.3)    |
|   1.4 | 0.3820 (334.2)  | 0.8946 (1789.3) | 0.7536 (666.1)     | 0.9409 (1881.7)    |
|   1.5 | 0.4154 (364.2)  | 0.9010 (1802.1) | 0.7674 (677.5)     | 0.9452 (1890.5)    |
|   1.6 | 0.4466 (392.3)  | 0.9076 (1815.2) | 0.7796 (688.5)     | 0.9493 (1898.7)    |
|   1.7 | 0.4752 (417.6)  | 0.9136 (1827.3) | 0.7910 (698.7)     | 0.9529 (1905.8)    |
|   1.8 | 0.5016 (441.6)  | 0.9193 (1838.6) | 0.8020 (709.0)     | 0.9562 (1912.4)    |
|   1.9 | 0.5285 (464.5)  | 0.9242 (1848.5) | 0.8134 (718.7)     | 0.9587 (1917.4)    |
|   2.0 | 0.5531 (487.0)  | 0.9286 (1857.2) | 0.8236 (727.7)     | 0.9615 (1922.9)    |

Septem + line veto (~septem_line~ files, ~lvRegularNoHLC~):
| ε_cut | real            | fake            |
|-------+-----------------+-----------------|
|   1.0 | 0.1372 (121.5)  | 0.7325 (1465.0) |
|   1.1 | 0.1432 (126.9)  | 0.7373 (1474.6) |
|   1.2 | 0.1494 (132.6)  | 0.7415 (1483.0) |
|   1.3 | 0.1552 (137.5)  | 0.7449 (1489.9) |
|   1.4 | 0.1605 (142.1)  | 0.7482 (1496.3) |
|   1.5 | 0.1649 (146.2)  | 0.7512 (1502.5) |
|   1.6 | 0.1691 (150.2)  | 0.7540 (1508.0) |
|   1.7 | 0.1734 (153.9)  | 0.7564 (1512.7) |
|   1.8 | 0.1770 (157.0)  | 0.7585 (1517.1) |
|   1.9 | 0.1814 (160.5)  | 0.7602 (1520.3) |
|   2.0 | 0.1847 (163.3)  | 0.7617 (1523.5) |
:end:

(About the first set of files:) So about 14.8% in the septem-only case and 9.9% in the septem + line veto case.

- [ ] *MOVE BELOW TO PROPER THESIS PART!*

(about the ε cut)
#+CAPTION: Fraction of events in Run-3 data (green) which pass the line
#+CAPTION: veto (i.e. are not rejected), depending on the eccentricity cut used, which decides how
#+CAPTION: eccentric a cluster needs to be in order to be used for the veto. The purple points
#+CAPTION: use fake bootstrapped data from real clusters passing the $\ln\mathcal{L}$ cut
#+CAPTION: together with real outer GridPix data from *other* events. The fraction of events
#+CAPTION: being vetoed in the latter is a measure of the random coincidence rate.
#+NAME: fig:background:fraction_passing_line_veto_ecc_cut
[[~/phd/Figs/background/estimateSeptemVetoRandomCoinc/fraction_passing_line_veto_ecc_cut.pdf]]

#+CAPTION: Ratio of the events passing in the real line veto application to the fake data application
#+CAPTION: for different $ε_{\text{cut}}$ cutoff values. The optimum seems to be in the range of
#+CAPTION: 1.4 to 1.5.
[[~/phd/Figs/background/estimateSeptemVetoRandomCoinc/ratio_real_fake_fraction_passing_line_veto_ecc_cut.pdf]]

***** Investigate significantly lower fake event fraction passing

*UPDATE*: <2023-02-13 Mon 16:50> The numbers visible in the plot are *MUCH LOWER* than what we had previously after implementing the line veto alone! Let's run with the equivalent of the old parameters:
#+begin_src sh
LINE_VETO_KIND=lvRegular \
ECC_LINE_VETO_CUT=1.0 \
USE_REAL_LAYOUT=false \
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out /t/lhood_2018_crAll_80eff_line_ecc_cutof_1.0_tight_layout_lvRegular.h5 \
    --region crAll --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --lineveto --estimateRandomCoinc
#+end_src
-> As it turns out, this was a bug in our logic that decides which cluster is of interest to the line veto. We accidentally always deemed it interesting if the original cluster was on its own... Fixed now.

*** On the line veto without septem veto :extended:

When dealing with the line veto without the septem veto, multiple questions come up. First of all, which cluster is the 'line' actually targeting? The original cluster (OC) that passed the lnL cut, or a hypothetical larger cluster (HLC) that was found during the septem event reconstruction? Assuming the former, the next question is whether we want to allow an HLC to veto our OC.
In a naive implementation this is precisely what happens. In the regular use case of septem veto + line veto the question never mattered, as an HLC would almost certainly be vetoed by the septem veto anyway. But without the septem veto this decision is fully up to the line veto and the question becomes relevant. (We will implement a switch, maybe based on an environment variable or flag.)

In the latter case the tricky part is mainly identifying the correct cluster to test against in order to find its center. However, this needs to be implemented to avoid the HLC in the above mentioned case. With that done, we have three different ways to do the line veto:
1. 'regular' line veto: *every* cluster checks the line to the center cluster. Without the septem veto this includes the HLC checking the OC.
2. 'regular without HLC' line veto: lines check the OC, but the HLC is explicitly *not* considered.
3. 'checking the HLC' line veto: in this case *all* clusters check the center of the HLC.

Thoughts on LvCheckHLC:
- The radii around the new HLC become so large that in practice this won't be a very good idea, I think!
- The ~lineVetoRejected~ part of the title seems to be "true" in too many cases. What's going on here? See [[file:~/org/Figs/statusAndProgress/estimateSeptemVetoRandomCoinc/septem_events_only_line_veto_check_hlc_not_final.pdf]], for example "2882 and run 297" on page 31. My first guess is that the distance calculation is off somehow? Similar on page 33 and probably many more. Even worse is page 34: "event 30 and run 297"!
  -> As it turns out, the problem was just that our ~inRegionOfInterest~ check had become outdated due to our change of
- [ ] Select example events for each of the 'line veto kinds' to demonstrate their differences.
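To make the three kinds concrete, the decision logic can be sketched in a few lines of Python. This is *not* the actual TimepixAnalysis implementation; the ~Cluster~ fields and function names here are invented for illustration. A cluster "points at" the OC if the infinite line along its long axis passes within the OC's radius:

```python
import math
from dataclasses import dataclass

@dataclass
class Cluster:
    cx: float              # cluster center x
    cy: float              # cluster center y
    ecc: float             # eccentricity of the cluster
    theta: float           # rotation angle of the long axis (radians)
    contains_oc: bool = False  # is this the HLC containing the OC?

def points_at(c: Cluster, tx: float, ty: float, radius: float) -> bool:
    """Does the infinite line along c's long axis pass within `radius` of (tx, ty)?"""
    dx, dy = math.cos(c.theta), math.sin(c.theta)
    # perpendicular distance of the target from the line, via the 2D cross product
    d = abs(dx * (ty - c.cy) - dy * (tx - c.cx))
    return d < radius

def line_veto(oc_x, oc_y, oc_radius, clusters, kind="lvRegularNoHLC", ecc_cut=1.5):
    for c in clusters:
        if c.ecc < ecc_cut:
            continue  # eccentricity cutoff: round clusters do not participate
        if kind == "lvRegularNoHLC" and c.contains_oc:
            continue  # the HLC may not veto its own OC
        if points_at(c, oc_x, oc_y, oc_radius):
            return True  # event vetoed
    return False
```

For ~lvCheckHLC~ one would instead pass the HLC's center and its (much larger) radius as the target, which is exactly what makes that variant problematic.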
OC: Original Cluster (passing the lnL cut on the center chip)
HLC: Hypothetical Larger Cluster (new cluster that the OC is part of after the septemboard reconstruction)

Regular:
[[~/org/Figs/statusAndProgress/estimateSeptemVetoRandomCoinc/example_event_line_veto_regular.pdf]]
is an example event showing the "regular" line veto without the septem veto. Things to note:
- the black circle shows the 'radius' of the OC, *not* the HLC
- the OC is actually part of an HLC
- because of this, and because the HLC is a nice track, the event is *vetoed*, not by the green track, but by the HLC itself! This wouldn't be a problem if we also used the septem veto, as the event would then already be removed by the septem veto.
(More plots: [[file:~/org/Figs/statusAndProgress/estimateSeptemVetoRandomCoinc/septem_events_only_line_veto_regular_fixed_check.pdf]])

Regular no HLC:
[[~/org/Figs/statusAndProgress/estimateSeptemVetoRandomCoinc/example_event_line_veto_regular_noHLC.pdf]]
The reference cluster to check against is still the regular OC with the same radius, and again the OC is part of an HLC. However, in contrast to the 'regular' case, this event is not vetoed: the green and purple clusters simply don't point at the black circle and the HLC itself is *not considered*. This defines the 'regular no HLC' veto.
[[~/org/Figs/statusAndProgress/estimateSeptemVetoRandomCoinc/example_event_line_veto_regular_noHLC_close_hit.pdf]]
is an example of an event showing the method works, and a nice example of a cluster _barely_ hitting the radius of the OC. On the other hand, it is also a good example of why we should have an eccentricity cut on the clusters used to check for lines: the green cluster in this second event is not even remotely eccentric enough and is actually part of the orange track!
(More plots: [[file:~/org/Figs/statusAndProgress/estimateSeptemVetoRandomCoinc/septem_events_only_line_veto_regular_noHCL_fixed_check.pdf]])

Check HLC cluster:
[[~/org/Figs/statusAndProgress/estimateSeptemVetoRandomCoinc/example_event_line_veto_check_hlc_is_problematic.pdf]]
is an example event showing how ridiculous the "check HLC" veto kind can become. There is a very large cluster that the OC is actually part of (in red). But because of that the radius is *SO LARGE* that it even encapsulates a whole other cluster (one that should technically be part of the 'lower' of the two tracks!). For this reason I don't think this method is particularly useful. In other events it looks more reasonable, but there probably isn't a good way to make this work reliably. In any case, clusters of such significant size would almost certainly never pass any lnL cut anyhow.
(More plots: [[file:~/org/Figs/statusAndProgress/estimateSeptemVetoRandomCoinc/septem_events_only_line_veto_check_hlc_fixed_check.pdf]])

The following is a broken event. The purple cluster is not used for the line veto. Why?
/t/problem_event_12435_run_297.pdf

- [X] Implement a cutoff for the eccentricity that a cluster must have in order to partake in the line veto. Currently this can only be set via an environment variable (~ECC_LINE_VETO_CUT~). A good value is around the 1.4 to 1.6 range, I think (anything that rules out most X-ray like clusters!).

**** Note on real septemboard spacing being important :extended:

[[~/org/Figs/statusAndProgress/exampleEvents/example_line_veto_needs_chip_spacing.pdf]]
is an example event showing that we need to introduce the correct chip spacing *for the line veto*. For the septem veto it is not very important, because the distance matters far more than the angle at which clusters line up. But for the line veto it is essential, as can be seen in that example (note that it uses ~lvRegularNoHLC~ and no septem veto, i.e.
that's why the veto is false, despite the purple HLC of course "hitting" the original cluster).
-> This has been implemented now. It is activated (for now) via the environment variable ~USE_REAL_LAYOUT~.

An example event for the spacing & the eccentricity cutoff is:
file:~/org/Figs/statusAndProgress/exampleEvents/example_event_with_line_spacing_and_ecc_cutoff.pdf
which was generated using:
#+begin_src sh
LINE_VETO_KIND=lvRegularNoHLC \
ECC_LINE_VETO_CUT=1.6 \
USE_REAL_LAYOUT=true \
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out /tmp/lhood_2018_crAll_80eff_line.h5 \
    --region crAll --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --lineveto --plotseptem
#+end_src
and then simply extracting it from the ~/plots/septemEvents~ directory. Note how the environment variables are set in front of the command!

**** Outdated: Estimation using subset of outer ring events :extended:

The text here was written when we were still bootstrapping events only from the subset of *event numbers* that actually have a cluster passing the lnL cut on the center chip. This subset is of course biased, even on the outer chips: assuming that center clusters often come with activity on the outer chips, there are fewer events representing those cases where there isn't even any activity in the center. This overrepresents activity on the outer chips.

#+begin_src sh
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out /tmp/lhood_2018_crAll_80eff_septem_fake.h5 \
    --region crAll --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --septemveto --estimateRandomCoinc
#+end_src
writes the file ~/tmp/septem_fake_veto.txt~, which for this case is found at
[[file:~/org/resources/septem_veto_random_coincidences/estimates_septem_veto_random_coincidences.txt]].
Mean value: 1522.61764706. From this file the method seems to remove typically a bit less than 500 out of 2000 bootstrapped fake events.
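The bootstrapping logic itself can be sketched as follows (a minimal Python sketch with invented names; the real logic lives behind the ~--estimateRandomCoinc~ option of ~likelihood~): pair a real center cluster that passed the lnL cut with the outer-chip data of a random *other* event. Any veto that fires on such a composite event is by construction a random coincidence.

```python
import random

def estimate_random_coincidence(center_clusters, outer_ring_events, veto, n_fake=2000):
    """Bootstrap fake events from a passing center cluster plus the outer
    GridPix data of an unrelated event, and return the fraction of these
    fake events that survive the given veto."""
    vetoed = 0
    for _ in range(n_fake):
        center = random.choice(center_clusters)      # real cluster passing lnL
        outer = random.choice(outer_ring_events)     # data from a *different* event
        if veto(center, outer):
            vetoed += 1
    return 1 - vetoed / n_fake  # fraction of fake events left
```

Since center and outer data stem from uncorrelated events, the fraction removed directly estimates the signal efficiency loss due to random coincidences.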
This seems to imply a random coincidence rate of almost 25% (effectively a further 25% reduction in efficiency, or a 25% increase in dead time). Pretty scary stuff. Of course this does not even include the line veto, which will drop it further. Let's run that:
#+begin_src sh
likelihood -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --h5out /tmp/lhood_2018_crAll_80eff_septem_line_fake.h5 \
    --region crAll --cdlYear 2018 \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --septemveto --lineveto --estimateRandomCoinc
#+end_src
which generated the following output:
[[file:~/org/resources/septem_veto_random_coincidences/estimates_septem_line_veto_random_coincidences.txt]]
Mean value: 1373.70588235. This comes out to a fraction of 68.68% of the events left after running the vetoes on our bootstrapped fake events. Combined with a software efficiency of ε = 80%, the total combined efficiency would then be $ε_\text{total} = 0.8 · 0.6868 = 0.5494$, so about 55%.

*** Veto setups of interest [/] :extended:

- [ ] *TABLE OF FEATURE, EFFICIENCY, BACKGROUND RATE*
  -> That table essentially motivates looking at different veto setups.
- [ ] Write a section about a) the motivation behind looking at different setups to begin with and b) the kind of setups we will be looking at later in the limit part.

As a reference, the fractions of bootstrapped fake events left (i.e. one minus the random coincidence rate) are:
- pure septem veto = 0.7841029411764704
- pure line veto = 0.8601764705882353
- septem + line veto = 0.732514705882353

Let's compute the total efficiencies of all the setups we look at.
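The combination is a simple product of the independent factors: lnL software efficiency, FADC veto efficiency and the random-coincidence survival fraction of the septem/line vetoes. A small sketch (function name invented; the fractions are the ones quoted above):

```python
# Fractions of bootstrapped fake events surviving each veto combination,
# i.e. one minus the random coincidence rate (values from the text above).
FRAC_LEFT = {
    "septem":      0.7841029411764704,
    "line":        0.8601764705882353,
    "septem+line": 0.732514705882353,
}

def total_efficiency(eps_lnl, eps_fadc=1.0, veto=None):
    """Total signal efficiency: lnL cut x FADC veto x random-coincidence
    survival fraction of the chosen septem/line veto combination."""
    eps = eps_lnl * eps_fadc
    if veto is not None:
        eps *= FRAC_LEFT[veto]
    return eps

# e.g. lnL at 80%, FADC veto at 98%, septem + line veto:
print(round(total_efficiency(0.8, 0.98, "septem+line"), 2))  # -> 0.57
```

This reproduces the efficiency column of the table below, e.g. 0.8 · 0.98 · 0.7325 ≈ 0.57 for the full septem + line setup.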
| limit      | name                                                                                                     |
|------------+----------------------------------------------------------------------------------------------------------|
| 8.9258e-23 |                                                                                                          |
| 8.8516e-23 | _scinti                                                                                                  |
| 8.5385e-23 | _scinti_vetoPercentile_0.95_fadc_vetoPercentile_0.95                                                     |
| 7.2889e-23 | _scinti_vetoPercentile_0.95_fadc_vetoPercentile_0.95_line_vetoPercentile_0.95                            |
| 7.538e-23  | _scinti_vetoPercentile_0.95_fadc_vetoPercentile_0.95_septem_vetoPercentile_0.95                          |
| 7.2365e-23 | _scinti_vetoPercentile_0.95_fadc_vetoPercentile_0.95_septem_vetoPercentile_0.95_line_vetoPercentile_0.95 |
| 8.6007e-23 | _scinti_vetoPercentile_0.99_fadc_vetoPercentile_0.99                                                     |
| 7.3555e-23 | _scinti_vetoPercentile_0.99_fadc_vetoPercentile_0.99_line_vetoPercentile_0.99                            |
| 7.5671e-23 | _scinti_vetoPercentile_0.99_fadc_vetoPercentile_0.99_septem_vetoPercentile_0.99                          |
| 7.3249e-23 | _scinti_vetoPercentile_0.99_fadc_vetoPercentile_0.99_septem_vetoPercentile_0.99_line_vetoPercentile_0.99 |
| 8.4108e-23 | _scinti_vetoPercentile_0.9_fadc_vetoPercentile_0.9                                                       |
| 7.2315e-23 | _scinti_vetoPercentile_0.9_fadc_vetoPercentile_0.9_line_vetoPercentile_0.9                               |
| 7.4109e-23 | _scinti_vetoPercentile_0.9_fadc_vetoPercentile_0.9_septem_vetoPercentile_0.9                             |
| 7.1508e-23 | _scinti_vetoPercentile_0.9_fadc_vetoPercentile_0.9_septem_vetoPercentile_0.9_line_vetoPercentile_0.9     |

| $ε_{\ln\mathcal{L}, \text{eff}}$ | Scinti | FADC | $ε_{\text{FADC, eff}}$ | Septem | Line | Efficiency | Expected limit (nmc=1000) |
|----------------------------------+--------+------+------------------------+--------+------+------------+---------------------------|
| 0.8                              | x      | x    | -                      | x      | x    | 0.8        | 8.9258e-23                |
| 0.8                              | o      | x    | -                      | x      | x    | 0.8        | 8.8516e-23                |
|----------------------------------+--------+------+------------------------+--------+------+------------+---------------------------|
| 0.8                              | o      | o    | 0.98                   | x      | x    | 0.784      | 8.6007e-23                |
| 0.8                              | o      | o    | 0.90                   | x      | x    | 0.72       | 8.5385e-23                |
| 0.8                              | o      | o    | 0.80                   | x      | x    | 0.64       | 8.4108e-23                |
|----------------------------------+--------+------+------------------------+--------+------+------------+---------------------------|
| 0.8                              | o      | o    | 0.98                   | o      | x    | 0.61       | 7.5671e-23                |
| 0.8                              | o      | o    | 0.90                   | o      | x    | 0.56       | 7.538e-23                 |
| 0.8                              | o      | o    | 0.80                   | o      | x    | 0.50       | 7.4109e-23                |
|----------------------------------+--------+------+------------------------+--------+------+------------+---------------------------|
| 0.8                              | o      | o    | 0.98                   | x      | o    | 0.67       | 7.3555e-23                |
| 0.8                              | o      | o    | 0.90                   | x      | o    | 0.62       | 7.2889e-23                |
| 0.8                              | o      | o    | 0.80                   | x      | o    | 0.55       | 7.2315e-23                |
|----------------------------------+--------+------+------------------------+--------+------+------------+---------------------------|
| 0.8                              | o      | o    | 0.98                   | o      | o    | 0.57       | 7.3249e-23                |
| 0.8                              | o      | o    | 0.90                   | o      | o    | 0.52       | 7.2365e-23                |
| 0.8                              | o      | o    | 0.80                   | o      | o    | 0.47       | 7.1508e-23                |

- [ ] *ADD THE VERSIONS WE NOW LOOK AT*
  - [ ] without FADC
  - [ ] different lnL cut (70% & 90%)
  - [ ] Maybe introduce a couple of versions with lower lnL software efficiency?
- [ ] Introduce all setups we care about for the limit calculation; the final decision will be made based on which yields the best _expected_ limit.
- [ ] Have a table with the setups, their background rates (full range) and their efficiencies!

** Background rates of combined vetoes and efficiencies
:PROPERTIES:
:CUSTOM_ID: sec:background:all_vetoes_combined
:END:

Having discussed the two main classifiers (likelihood and MLP) as well as all the different possible vetoes, let us now recap. We will look at background rates obtained for different veto setups and corresponding signal efficiencies. In addition, we will compare with fig.
sref:fig:background:background_no_vetoes_clusters -- the background clusters left over the entire chip and the local background suppression -- when using all vetoes.

For a comparison of the background rates achievable in the center \goldArea region using the \lnL method and the MLP, see fig. [[fig:background:final_compare_LnL_mlp]]. As already expected from the MLP performance in sec. [[#sec:background:mlp:background_rate]], it achieves comparable performance at significantly higher signal efficiency. This will be a key feature in the limit calculation to increase the total signal efficiency. The center region does not tell the whole story though, as the background is not homogeneous.

#+CAPTION: Comparison of the \lnL at $ε_{\text{eff}} = \SI{80}{\%}$ and the MLP at
#+CAPTION: $ε_{\text{eff}} = \SI{95}{\%}$ signal efficiency with all added vetoes.
#+CAPTION: For the total combined efficiencies see tab. [[tab:background:background_rate_eff_comparisons]].
#+NAME: fig:background:final_compare_LnL_mlp
[[~/phd/Figs/background/background_rate_crGold_scinti_fadc_septem_line_lnL80_mlp95.pdf]]

Utilizing all vetoes in addition to the \lnL or MLP classifier alone significantly improves the background rejection over the entire chip. See fig. sref:fig:background:cluster_center_comparison, which compares all remaining X-ray like background cluster centers without any vetoes and with all vetoes, using the \lnL method at $ε_{\text{eff}} = \SI{80}{\%}$. While the center background improves, the largest improvements are seen towards the edges and corners. From a highly non-uniform background rate in fig. sref:fig:background:cluster_centers_lnL80_without, the vetoes produce an almost homogeneous background in sref:fig:background:cluster_centers_lnL80_all. In total they yield a $\sim\num{13.5}$-fold reduction in background. Fig.
sref:fig:background:background_suppression_comparison visualizes the improvements by showing the local background suppression, as the factor of the clusters left in each tile over the total number of clusters in the raw data. On closer inspection one can see that the background improvements are slightly smaller near the bottom edge of the chip. This is because the spacing to the bottom row of the Septemboard is larger, decreasing the likelihood of detecting a larger cluster.

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "No vetoes")
  (label "fig:background:cluster_centers_lnL80_without")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/backgroundClusters/background_cluster_centers_lnL80_no_vetoes.pdf"))
 (subfigure (linewidth 0.5)
  (caption "All vetoes")
  (label "fig:background:cluster_centers_lnL80_all")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/backgroundClusters//background_cluster_centers_lnL80_all_vetoes.pdf"))
 (caption "Cluster centers of all X-ray like clusters in the 2017/18 CAST background data. "
          (subref "fig:background:cluster_centers_lnL80_without")
          " shows the remaining data for the \\lnL method without any vetoes, "
          (subref "fig:background:cluster_centers_lnL80_all")
          " includes all vetoes. The vetoes lead to a dramatic reduction in background especially outside the center \\goldArea.")
 (label "fig:background:cluster_center_comparison"))
#+end_src

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "No vetoes")
  (label "fig:background:suppression_lnL80_without")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/backgroundClusters/background_suppression_tile_map_lnL80_no_vetoes_zMaxS_1000.pdf"))
 (subfigure (linewidth 0.5)
  (caption "All vetoes")
  (label "fig:background:suppression_lnL80_all")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/backgroundClusters/background_suppression_tile_map_lnL80_all_vetoes_zMaxS_1000.pdf"))
 (caption "Local background suppression of the \\lnL method compared to the raw 2017/18 CAST data. "
          (subref "fig:background:suppression_lnL80_without")
          " is the suppression without vetoes, "
          (subref "fig:background:suppression_lnL80_all")
          " includes all vetoes.")
 (label "fig:background:background_suppression_comparison"))
#+end_src

Tab. [[tab:background:background_rate_eff_comparisons]] shows a comparison of many different possible combinations of classifier, signal efficiency $ε_{\text{eff}}$, choice of vetoes and total efficiency $ε_{\text{total}}$. The 'Rate' column is the mean background rate in the range $\SIrange{0.2}{8}{keV}$ in the center \goldArea region. The $\ln\mathcal{L}$ produces the lowest background rates, at the cost of low total efficiencies. Generally, many different setups reach below the $\SI{1e-5}{keV^{-1}.cm^{-2}.s^{-1}}$ level in the center region, while even large sacrifices in signal efficiency do not yield significantly lower rates. This implies the detector is fundamentally limited at this point in its design. The likely causes are radioactive impurities in the detector material, imperfect cosmic vetoes and the lack of better time information in the events. See appendix [[#sec:appendix:background_rates]], in particular tab.
[[tab:appendix:background_rates:mean_background_rates]] for the same table with background rates over the entire chip.

The choice of the optimal setup purely based on background rate and total efficiency is difficult, especially because reducing the background to a single number is flawed [fn:flawed]. For this reason we will compute expected limits for each of these setups. The setup yielding the best expected limit will then be used for the calculation of the solar tracking candidates and the observed limit.

#+CAPTION: Overview of different classifier, veto and software efficiency $ε_{\text{eff}}$ setups
#+CAPTION: yielding different total efficiencies $ε_{\text{total}}$. The background rate shown
#+CAPTION: in the 'Rate' column is for the range $\SIrange{0.2}{8}{keV}$ in the center \goldArea region.
#+NAME: tab:background:background_rate_eff_comparisons
#+ATTR_LATEX: :booktabs t
|------------+-------+--------+-------+--------+-------+---------+---------------------------------------|
| Classifier | ε_eff | Scinti | FADC  | Septem | Line  | ε_total | Rate [$\si{keV^{-1}.cm^{-2}.s^{-1}}$] |
|------------+-------+--------+-------+--------+-------+---------+---------------------------------------|
| LnL        | 0.700 | true   | true  | true   | true  | 0.503   | $\num{ 6.9015(5580)e-06}$             |
| LnL        | 0.700 | true   | true  | false  | true  | 0.590   | $\num{ 7.3074(5741)e-06}$             |
| LnL        | 0.800 | true   | true  | true   | true  | 0.574   | $\num{ 7.6683(5881)e-06}$             |
| LnL        | 0.800 | true   | true  | false  | true  | 0.674   | $\num{ 8.0743(6035)e-06}$             |
| MLP        | 0.865 | true   | true  | true   | true  | 0.621   | $\num{ 8.2096(6085)e-06}$             |
| MLP        | 0.865 | true   | true  | false  | true  | 0.729   | $\num{ 8.8411(6315)e-06}$             |
| MLP        | 0.912 | true   | true  | true   | true  | 0.655   | $\num{ 9.1117(6411)e-06}$             |
| LnL        | 0.800 | true   | true  | true   | false | 0.615   | $\num{ 9.3373(6490)e-06}$             |
| LnL        | 0.900 | true   | true  | true   | true  | 0.646   | $\num{ 9.3824(6506)e-06}$             |
| MLP        | 0.912 | true   | true  | false  | true  | 0.769   | $\num{ 9.6981(6614)e-06}$             |
| LnL        | 0.900 | true   | true  | false  | true  | 0.759   | $\num{ 9.9688(6706)e-06}$             |
| MLP        | 0.957 | true   | true  | true   | true  | 0.687   | $\num{1.00139(6721)e-05}$             |
| MLP        | 0.957 | true   | true  | false  | true  | 0.807   | $\num{1.05552(6900)e-05}$             |
| MLP        | 0.983 | true   | true  | true   | true  | 0.706   | $\num{1.10063(7046)e-05}$             |
| MLP        | 0.983 | true   | true  | false  | true  | 0.829   | $\num{1.16378(7245)e-05}$             |
| LnL        | 0.800 | true   | true  | false  | false | 0.784   | $\num{1.55621(8378)e-05}$             |
| MLP        | 0.865 | false  | false | false  | false | 0.865   | $\num{1.56523(8403)e-05}$             |
| LnL        | 0.700 | false  | false | false  | false | 0.700   | $\num{1.65545(8641)e-05}$             |
| MLP        | 0.912 | false  | false | false  | false | 0.912   | $\num{1.74566(8874)e-05}$             |
| LnL        | 0.800 | true   | false | false  | false | 0.800   | $\num{1.91256(9288)e-05}$             |
| MLP        | 0.957 | false  | false | false  | false | 0.957   | $\num{2.01631(9537)e-05}$             |
| LnL        | 0.800 | false  | false | false  | false | 0.800   | $\num{2.06142(9643)e-05}$             |
| MLP        | 0.983 | false  | false | false  | false | 0.983   | $\num{ 2.3862(1037)e-05}$             |
| LnL        | 0.900 | false  | false | false  | false | 0.900   | $\num{ 2.7561(1115)e-05}$             |
|------------+-------+--------+-------+--------+-------+---------+---------------------------------------|

[fn:flawed] Flawed, because a) the background is not homogeneous over the entire chip and the effect of vetoes outside the center region is not visible, and b) because the most important range for axion searches is between $\SIrange{0.5}{5}{keV}$ (depending on the model).

*** TODOs for this section [/] :noexport:

- [ ] A key takeaway: the MLP is not actually better at producing a low background rate in the limit of low signal efficiency. This implies that either a) there is not enough information left in the cluster properties of those left at e.g. 80% efficiency to separate them, or b) they simply are real X-rays. The main feature of the MLP is the significantly higher software efficiency that can be achieved. This behavior is technically already visible in the MLP prediction fig. [[ ...
]], where we see the extremely steep decay of the distributions (i.e. the cut values for slightly different percentages are extremely close to one another!) -> *SHOW DIFFERENT MLP*? No, already shown. - [X] Show background clusters including the septem veto only in appendix, but mention this will likely be a very important veto for the GridPix3 detector! -> Or show the 'best possible case' instead after all? -> This is now outdated, because the efficiency has gone up between the two! - [ ] Highlight that Scinti + FADC vetoes are obviously going to be used, no question, due to their negligible impact on efficiency - [X] Normalized background rate -> We implemented this, but it does not tell the whole story. Leaving it out. - [X] Best case of MLP + all vetoes, LnL + all vetoes -> Something else? - [X] Plot of all clusters over entire chip - [X] Plot of background suppression (side-by-side) - [X] (Maybe) Efficiency adjusted background rates? Multiply total time by combined efficiency to take dead time into account? -> No, it's a bit confusing. - [X] Table of mean background rates of different setups, including combined efficiency -> This motivates the "different setups to consider" - [ ] *ADD MLP in addition!* - [X] How to generate table of all background rates, all plots? -> Plots by hand. -> Generate table of rates programmatically? I.e. 
hand all files to ~plotBackgroundRate~ that we want numbers from, have a ~--noPlot~ option to avoid the actual plotting, read efficiency & then print the efficiencies to a table? -> Implemented. Argument ~rateTable~ | Classifier | ε_signal | FADC | Scinti | Septem | Line | ε | 0-8 [cm⁻²·s⁻¹] | 2-8 [cm⁻²·s⁻¹] | 4-8 [cm⁻²·s⁻¹] | |------------+---------+------+--------+--------+------+------+---------------+----------------+---------------| | LnL | 0.8 | y | y | y | y | 0.64 | 5.0841e-05 | 5.0841e-05 | 5.0841e-05 | | MLP | 0.95 | y | y | y | y | 0.74 | 5.0841e-05 | 5.0841e-05 | 5.0841e-05 | Old plot: #+CAPTION: The equivalent figure to fig. [[fig:background_clusters_no_septem_veto]] but including #+CAPTION: the 'septem veto'. The main improvement happens towards the corners. In total the #+CAPTION: number of background clusters drops by a factor of 4, from $\sim\num{43000}$ to #+CAPTION: $\sim\num{9600}$. #+CAPTION: *TODO THIS INCLUDES THE LINE VETO. COMPARISON SHOULD BE SHOWN LATER!!!!* #+NAME: fig:background:background_clusters_septem_line_veto [[~/phd/Figs/backgroundClusters/background_cluster_centers_all_vetoes.pdf]] Old paragraph: #+begin_quote In total all vetoes together have achieved a background reduction of a factor of about 10 over the regular $\ln\mathcal{L}$ method. But also the centermost region (a square of the center \SI{25}{mm²}) sees an improvement, especially at low energies. Fig. [[fig:background:background_suppression_all_vetoes]] highlights the massive improvement over the pure $\ln\mathcal{L}$ method when compared to fig. sref:fig:background:background_suppression_tiles_no_vetoes_2017_18. In the corners the improvements reach up to a factor of 30 (top left), but even in the center improvements of about a factor of 2 are achieved. #+end_quote Old plot: - [ ] *SHOULD THIS BE SIDE BY SIDE WITH OTHER PLOT?* -> I think this should be in the "fun combine all vetoes" subsection! 
#+CAPTION: Local background suppression using all vetoes in the 2017/18 CAST #+CAPTION: data. Compared to fig. \ref{fig:background:background_suppression_tiles_no_vetoes_2017_18} massive #+CAPTION: improvements, up to a factor of 30 are visible. Even the center region #+CAPTION: improves by a factor of 2. #+NAME: fig:background:background_suppression_all_vetoes [[~/phd/Figs/backgroundClusters/background_suppression_tile_map_all_vetoes.pdf]] *** Generate background rate plots :extended: What plots do we want exactly? #+begin_src sh :results drawer ESCAPE_LATEX=true plotBackgroundRate \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --names "LnL@80+V" --names "LnL@80+V" \ --names "MLP@95+V" --names "MLP@95+V" \ --centerChip 3 \ --title "Background rate from CAST data, LnL@80% and MLP@95%, all vetoes" \ --showNumClusters \ --showTotalTime \ --topMargin 1.5 \ --energyDset energyFromCharge \ --outfile background_rate_crGold_scinti_fadc_septem_line_lnL80_mlp95.pdf \ --outpath ~/phd/Figs/background/ \ --useTeX \ --region crGold \ --energyMin 0.2 \ --quiet # --applyEfficiencyNormalization \ #+end_src #+RESULTS: :results: [INFO]:Dataset: LnL@80+V [INFO]: Integrated background rate in range: 0.2 .. 12.0: 1.0960e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 
12.0: 9.2880e-06 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: MLP@95+V [INFO]: Integrated background rate in range: 0.2 .. 12.0: 1.3423e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 12.0: 1.1375e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: LnL@80+V [INFO]: Integrated background rate in range: 0.5 .. 2.5: 1.2138e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 6.0692e-06 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: MLP@95+V [INFO]: Integrated background rate in range: 0.5 .. 2.5: 1.1963e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 5.9813e-06 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: LnL@80+V [INFO]: Integrated background rate in range: 0.5 .. 5.0: 4.1517e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 9.2260e-06 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: MLP@95+V [INFO]: Integrated background rate in range: 0.5 .. 5.0: 5.3128e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 1.1806e-05 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: LnL@80+V [INFO]: Integrated background rate in range: 0.2 .. 2.5: 1.6888e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 2.5: 7.3427e-06 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: MLP@95+V [INFO]: Integrated background rate in range: 0.2 .. 2.5: 1.8647e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 2.5: 8.1076e-06 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: LnL@80+V [INFO]: Integrated background rate in range: 4.0 .. 8.0: 1.4249e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 3.5624e-06 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: MLP@95+V [INFO]: Integrated background rate in range: 4.0 .. 8.0: 1.9527e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 4.8818e-06 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: LnL@80+V [INFO]: Integrated background rate in range: 2.0 .. 8.0: 4.3980e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 2.0 .. 
8.0: 7.3300e-06 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: MLP@95+V [INFO]: Integrated background rate in range: 2.0 .. 8.0: 5.9461e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 9.9101e-06 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: LnL@80+V [INFO]: Integrated background rate in range: 0.2 .. 8.0: 5.8405e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 7.4879e-06 keV⁻¹·cm⁻²·s⁻¹ [INFO]:Dataset: MLP@95+V [INFO]: Integrated background rate in range: 0.2 .. 8.0: 7.5821e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 9.7207e-06 keV⁻¹·cm⁻²·s⁻¹ | Classifier | ε_eff | Scinti | FADC | Septem | Line | ε_total | Rate | | LnL | 0.800 | true | true | true | true | 0.574 | 7.488e-06 | | MLP | 0.957 | true | true | true | true | 0.687 | 9.721e-06 | [INFO]:DataFrame with 15 columns and 122 rows: Idx Energy Rate totalTime RateErr Dataset yMin yMax File ε_total ε_eff Classifier Scinti FADC Septem Line dtype: float float constant float string float float string float float string constant constant constant constant 0 0 0 3158.0066 0 LnL@80+V 0 0 LnL@80+V 0.57429153 0.8 LnL true true true true 1 0.2 0.87959846 3158.0066 0.39336839 LnL@80+V 0.48623007 1.2729669 LnL@80+V 0.57429153 0.8 LnL true true true true 2 0.4 1.4073575 3158.0066 0.49757603 LnL@80+V 0.90978151 1.9049336 LnL@80+V 0.57429153 0.8 LnL true true true true 3 0.6 1.0555182 3158.0066 0.43091348 LnL@80+V 0.62460467 1.4864316 LnL@80+V 0.57429153 0.8 LnL true true true true 4 0.8 0.87959846 3158.0066 0.39336839 LnL@80+V 0.48623007 1.2729669 LnL@80+V 0.57429153 0.8 LnL true true true true 5 1 0.17591969 3158.0066 0.17591969 LnL@80+V 0 0.35183938 LnL@80+V 0.57429153 0.8 LnL true true true true 6 1.2 0.52775908 3158.0066 0.30470185 LnL@80+V 0.22305723 0.83246092 LnL@80+V 0.57429153 0.8 LnL true true true true 7 1.4 0.35183938 3158.0066 0.24878801 LnL@80+V 0.10305137 0.6006274 LnL@80+V 0.57429153 0.8 LnL true true true true 8 1.6 0.52775908 3158.0066 0.30470185 LnL@80+V 
0.22305723 0.83246092 LnL@80+V 0.57429153 0.8 LnL true true true true 9 1.8 1.4073575 3158.0066 0.49757603 LnL@80+V 0.90978151 1.9049336 LnL@80+V 0.57429153 0.8 LnL true true true true 10 2 0.87959846 3158.0066 0.39336839 LnL@80+V 0.48623007 1.2729669 LnL@80+V 0.57429153 0.8 LnL true true true true 11 2.2 0.70367877 3158.0066 0.35183938 LnL@80+V 0.35183938 1.0555182 LnL@80+V 0.57429153 0.8 LnL true true true true 12 2.4 0.17591969 3158.0066 0.17591969 LnL@80+V 0 0.35183938 LnL@80+V 0.57429153 0.8 LnL true true true true 13 2.6 0.87959846 3158.0066 0.39336839 LnL@80+V 0.48623007 1.2729669 LnL@80+V 0.57429153 0.8 LnL true true true true 14 2.8 0.35183938 3158.0066 0.24878801 LnL@80+V 0.10305137 0.6006274 LnL@80+V 0.57429153 0.8 LnL true true true true 15 3 3.8702332 3158.0066 0.8251365 LnL@80+V 3.0450967 4.6953697 LnL@80+V 0.57429153 0.8 LnL true true true true 16 3.2 2.8147151 3158.0066 0.70367877 LnL@80+V 2.1110363 3.5183938 LnL@80+V 0.57429153 0.8 LnL true true true true 17 3.4 2.286956 3158.0066 0.63428747 LnL@80+V 1.6526685 2.9212435 LnL@80+V 0.57429153 0.8 LnL true true true true 18 3.6 1.5832772 3158.0066 0.52775908 LnL@80+V 1.0555182 2.1110363 LnL@80+V 0.57429153 0.8 LnL true true true true 19 3.8 1.5832772 3158.0066 0.52775908 LnL@80+V 1.0555182 2.1110363 LnL@80+V 0.57429153 0.8 LnL true true true true 20 4 0.35183938 3158.0066 0.24878801 LnL@80+V 0.10305137 0.6006274 LnL@80+V 0.57429153 0.8 LnL true true true true 21 4.2 0.35183938 3158.0066 0.24878801 LnL@80+V 0.10305137 0.6006274 LnL@80+V 0.57429153 0.8 LnL true true true true 22 4.4 0 3158.0066 0 LnL@80+V 0 0 LnL@80+V 0.57429153 0.8 LnL true true true true 23 4.6 0 3158.0066 0 LnL@80+V 0 0 LnL@80+V 0.57429153 0.8 LnL true true true true 24 4.8 0.52775908 3158.0066 0.30470185 LnL@80+V 0.22305723 0.83246092 LnL@80+V 0.57429153 0.8 LnL true true true true 25 5 0 3158.0066 0 LnL@80+V 0 0 LnL@80+V 0.57429153 0.8 LnL true true true true 26 5.2 0.17591969 3158.0066 0.17591969 LnL@80+V 0 0.35183938 LnL@80+V 
0.57429153 0.8 LnL true true true true 27 5.4 0.35183938 3158.0066 0.24878801 LnL@80+V 0.10305137 0.6006274 LnL@80+V 0.57429153 0.8 LnL true true true true 28 5.6 0.35183938 3158.0066 0.24878801 LnL@80+V 0.10305137 0.6006274 LnL@80+V 0.57429153 0.8 LnL true true true true 29 5.8 0.70367877 3158.0066 0.35183938 LnL@80+V 0.35183938 1.0555182 LnL@80+V 0.57429153 0.8 LnL true true true true 30 6 0.35183938 3158.0066 0.24878801 LnL@80+V 0.10305137 0.6006274 LnL@80+V 0.57429153 0.8 LnL true true true true 31 6.2 0.87959846 3158.0066 0.39336839 LnL@80+V 0.48623007 1.2729669 LnL@80+V 0.57429153 0.8 LnL true true true true 32 6.4 0.52775908 3158.0066 0.30470185 LnL@80+V 0.22305723 0.83246092 LnL@80+V 0.57429153 0.8 LnL true true true true 33 6.6 0.52775908 3158.0066 0.30470185 LnL@80+V 0.22305723 0.83246092 LnL@80+V 0.57429153 0.8 LnL true true true true 34 6.8 0 3158.0066 0 LnL@80+V 0 0 LnL@80+V 0.57429153 0.8 LnL true true true true 35 7 0.52775908 3158.0066 0.30470185 LnL@80+V 0.22305723 0.83246092 LnL@80+V 0.57429153 0.8 LnL true true true true 36 7.2 0.35183938 3158.0066 0.24878801 LnL@80+V 0.10305137 0.6006274 LnL@80+V 0.57429153 0.8 LnL true true true true 37 7.4 0.35183938 3158.0066 0.24878801 LnL@80+V 0.10305137 0.6006274 LnL@80+V 0.57429153 0.8 LnL true true true true 38 7.6 0.17591969 3158.0066 0.17591969 LnL@80+V 0 0.35183938 LnL@80+V 0.57429153 0.8 LnL true true true true 39 7.8 0.52775908 3158.0066 0.30470185 LnL@80+V 0.22305723 0.83246092 LnL@80+V 0.57429153 0.8 LnL true true true true 40 8 0.52775908 3158.0066 0.30470185 LnL@80+V 0.22305723 0.83246092 LnL@80+V 0.57429153 0.8 LnL true true true true 41 8.2 1.4073575 3158.0066 0.49757603 LnL@80+V 0.90978151 1.9049336 LnL@80+V 0.57429153 0.8 LnL true true true true 42 8.4 1.2314378 3158.0066 0.46543976 LnL@80+V 0.76599809 1.6968776 LnL@80+V 0.57429153 0.8 LnL true true true true 43 8.6 1.9351166 3158.0066 0.58345961 LnL@80+V 1.351657 2.5185762 LnL@80+V 0.57429153 0.8 LnL true true true true 44 8.8 2.4628757 
3158.0066 0.65823122 LnL@80+V 1.8046445 3.1211069 LnL@80+V 0.57429153 0.8 LnL true true true true 45 9 3.1665545 3158.0066 0.74636404 LnL@80+V 2.4201904 3.9129185 LnL@80+V 0.57429153 0.8 LnL true true true true 46 9.2 2.9906348 3158.0066 0.72533547 LnL@80+V 2.2652993 3.7159702 LnL@80+V 0.57429153 0.8 LnL true true true true 47 9.4 2.6387954 3158.0066 0.68133404 LnL@80+V 1.9574613 3.3201294 LnL@80+V 0.57429153 0.8 LnL true true true true 48 9.6 2.1110363 3158.0066 0.60940369 LnL@80+V 1.5016326 2.72044 LnL@80+V 0.57429153 0.8 LnL true true true true 49 9.8 2.1110363 3158.0066 0.60940369 LnL@80+V 1.5016326 2.72044 LnL@80+V 0.57429153 0.8 LnL true true true true 50 10 1.2314378 3158.0066 0.46543976 LnL@80+V 0.76599809 1.6968776 LnL@80+V 0.57429153 0.8 LnL true true true true 51 10.2 0.87959846 3158.0066 0.39336839 LnL@80+V 0.48623007 1.2729669 LnL@80+V 0.57429153 0.8 LnL true true true true 52 10.4 0.52775908 3158.0066 0.30470185 LnL@80+V 0.22305723 0.83246092 LnL@80+V 0.57429153 0.8 LnL true true true true 53 10.6 0.70367877 3158.0066 0.35183938 LnL@80+V 0.35183938 1.0555182 LnL@80+V 0.57429153 0.8 LnL true true true true 54 10.8 0.35183938 3158.0066 0.24878801 LnL@80+V 0.10305137 0.6006274 LnL@80+V 0.57429153 0.8 LnL true true true true 55 11 0.70367877 3158.0066 0.35183938 LnL@80+V 0.35183938 1.0555182 LnL@80+V 0.57429153 0.8 LnL true true true true 56 11.2 0.52775908 3158.0066 0.30470185 LnL@80+V 0.22305723 0.83246092 LnL@80+V 0.57429153 0.8 LnL true true true true 57 11.4 0.17591969 3158.0066 0.17591969 LnL@80+V 0 0.35183938 LnL@80+V 0.57429153 0.8 LnL true true true true 58 11.6 0 3158.0066 0 LnL@80+V 0 0 LnL@80+V 0.57429153 0.8 LnL true true true true 59 11.8 0.17591969 3158.0066 0.17591969 LnL@80+V 0 0.35183938 LnL@80+V 0.57429153 0.8 LnL true true true true 60 12 0 3158.0066 0 LnL@80+V 0 0 LnL@80+V 0.57429153 0.8 LnL true true true true 61 0 0 3158.0066 0 MLP@95+V 0 0 MLP@95+V 0.68685906 0.9568089 MLP true true true true 62 0.2 1.2314378 3158.0066 0.46543976 
MLP@95+V 0.76599809 1.6968776 MLP@95+V 0.68685906 0.9568089 MLP true true true true 63 0.4 1.9351166 3158.0066 0.58345961 MLP@95+V 1.351657 2.5185762 MLP@95+V 0.68685906 0.9568089 MLP true true true true 64 0.6 1.5832772 3158.0066 0.52775908 MLP@95+V 1.0555182 2.1110363 MLP@95+V 0.68685906 0.9568089 MLP true true true true 65 0.8 0.52775908 3158.0066 0.30470185 MLP@95+V 0.22305723 0.83246092 MLP@95+V 0.68685906 0.9568089 MLP true true true true 66 1 0.17591969 3158.0066 0.17591969 MLP@95+V 0 0.35183938 MLP@95+V 0.68685906 0.9568089 MLP true true true true 67 1.2 0.35183938 3158.0066 0.24878801 MLP@95+V 0.10305137 0.6006274 MLP@95+V 0.68685906 0.9568089 MLP true true true true 68 1.4 0.35183938 3158.0066 0.24878801 MLP@95+V 0.10305137 0.6006274 MLP@95+V 0.68685906 0.9568089 MLP true true true true 69 1.6 0.70367877 3158.0066 0.35183938 MLP@95+V 0.35183938 1.0555182 MLP@95+V 0.68685906 0.9568089 MLP true true true true 70 1.8 1.4073575 3158.0066 0.49757603 MLP@95+V 0.90978151 1.9049336 MLP@95+V 0.68685906 0.9568089 MLP true true true true 71 2 1.0555182 3158.0066 0.43091348 MLP@95+V 0.62460467 1.4864316 MLP@95+V 0.68685906 0.9568089 MLP true true true true 72 2.2 0.52775908 3158.0066 0.30470185 MLP@95+V 0.22305723 0.83246092 MLP@95+V 0.68685906 0.9568089 MLP true true true true 73 2.4 0.17591969 3158.0066 0.17591969 MLP@95+V 0 0.35183938 MLP@95+V 0.68685906 0.9568089 MLP true true true true 74 2.6 1.4073575 3158.0066 0.49757603 MLP@95+V 0.90978151 1.9049336 MLP@95+V 0.68685906 0.9568089 MLP true true true true 75 2.8 0.87959846 3158.0066 0.39336839 MLP@95+V 0.48623007 1.2729669 MLP@95+V 0.68685906 0.9568089 MLP true true true true 76 3 5.4535105 3158.0066 0.97947939 MLP@95+V 4.4740311 6.4329899 MLP@95+V 0.68685906 0.9568089 MLP true true true true 77 3.2 3.8702332 3158.0066 0.8251365 MLP@95+V 3.0450967 4.6953697 MLP@95+V 0.68685906 0.9568089 MLP true true true true 78 3.4 2.6387954 3158.0066 0.68133404 MLP@95+V 1.9574613 3.3201294 MLP@95+V 0.68685906 0.9568089 MLP 
true true true true 79 3.6 1.9351166 3158.0066 0.58345961 MLP@95+V 1.351657 2.5185762 MLP@95+V 0.68685906 0.9568089 MLP true true true true 80 3.8 2.1110363 3158.0066 0.60940369 MLP@95+V 1.5016326 2.72044 MLP@95+V 0.68685906 0.9568089 MLP true true true true 81 4 0.87959846 3158.0066 0.39336839 MLP@95+V 0.48623007 1.2729669 MLP@95+V 0.68685906 0.9568089 MLP true true true true 82 4.2 0.17591969 3158.0066 0.17591969 MLP@95+V 0 0.35183938 MLP@95+V 0.68685906 0.9568089 MLP true true true true 83 4.4 0 3158.0066 0 MLP@95+V 0 0 MLP@95+V 0.68685906 0.9568089 MLP true true true true 84 4.6 0.52775908 3158.0066 0.30470185 MLP@95+V 0.22305723 0.83246092 MLP@95+V 0.68685906 0.9568089 MLP true true true true 85 4.8 0.52775908 3158.0066 0.30470185 MLP@95+V 0.22305723 0.83246092 MLP@95+V 0.68685906 0.9568089 MLP true true true true 86 5 0.17591969 3158.0066 0.17591969 MLP@95+V 0 0.35183938 MLP@95+V 0.68685906 0.9568089 MLP true true true true 87 5.2 0.52775908 3158.0066 0.30470185 MLP@95+V 0.22305723 0.83246092 MLP@95+V 0.68685906 0.9568089 MLP true true true true 88 5.4 0.70367877 3158.0066 0.35183938 MLP@95+V 0.35183938 1.0555182 MLP@95+V 0.68685906 0.9568089 MLP true true true true 89 5.6 0.52775908 3158.0066 0.30470185 MLP@95+V 0.22305723 0.83246092 MLP@95+V 0.68685906 0.9568089 MLP true true true true 90 5.8 0.87959846 3158.0066 0.39336839 MLP@95+V 0.48623007 1.2729669 MLP@95+V 0.68685906 0.9568089 MLP true true true true 91 6 0.35183938 3158.0066 0.24878801 MLP@95+V 0.10305137 0.6006274 MLP@95+V 0.68685906 0.9568089 MLP true true true true 92 6.2 0.87959846 3158.0066 0.39336839 MLP@95+V 0.48623007 1.2729669 MLP@95+V 0.68685906 0.9568089 MLP true true true true 93 6.4 0.52775908 3158.0066 0.30470185 MLP@95+V 0.22305723 0.83246092 MLP@95+V 0.68685906 0.9568089 MLP true true true true 94 6.6 0.52775908 3158.0066 0.30470185 MLP@95+V 0.22305723 0.83246092 MLP@95+V 0.68685906 0.9568089 MLP true true true true 95 6.8 0.17591969 3158.0066 0.17591969 MLP@95+V 0 0.35183938 MLP@95+V 
0.68685906 0.9568089 MLP true true true true 96 7 0.52775908 3158.0066 0.30470185 MLP@95+V 0.22305723 0.83246092 MLP@95+V 0.68685906 0.9568089 MLP true true true true 97 7.2 0.52775908 3158.0066 0.30470185 MLP@95+V 0.22305723 0.83246092 MLP@95+V 0.68685906 0.9568089 MLP true true true true 98 7.4 0.35183938 3158.0066 0.24878801 MLP@95+V 0.10305137 0.6006274 MLP@95+V 0.68685906 0.9568089 MLP true true true true 99 7.6 0.35183938 3158.0066 0.24878801 MLP@95+V 0.10305137 0.6006274 MLP@95+V 0.68685906 0.9568089 MLP true true true true 100 7.8 0.52775908 3158.0066 0.30470185 MLP@95+V 0.22305723 0.83246092 MLP@95+V 0.68685906 0.9568089 MLP true true true true 101 8 1.0555182 3158.0066 0.43091348 MLP@95+V 0.62460467 1.4864316 MLP@95+V 0.68685906 0.9568089 MLP true true true true 102 8.2 1.9351166 3158.0066 0.58345961 MLP@95+V 1.351657 2.5185762 MLP@95+V 0.68685906 0.9568089 MLP true true true true 103 8.4 1.5832772 3158.0066 0.52775908 MLP@95+V 1.0555182 2.1110363 MLP@95+V 0.68685906 0.9568089 MLP true true true true 104 8.6 2.286956 3158.0066 0.63428747 MLP@95+V 1.6526685 2.9212435 MLP@95+V 0.68685906 0.9568089 MLP true true true true 105 8.8 3.1665545 3158.0066 0.74636404 MLP@95+V 2.4201904 3.9129185 MLP@95+V 0.68685906 0.9568089 MLP true true true true 106 9 3.8702332 3158.0066 0.8251365 MLP@95+V 3.0450967 4.6953697 MLP@95+V 0.68685906 0.9568089 MLP true true true true 107 9.2 2.9906348 3158.0066 0.72533547 MLP@95+V 2.2652993 3.7159702 MLP@95+V 0.68685906 0.9568089 MLP true true true true 108 9.4 2.8147151 3158.0066 0.70367877 MLP@95+V 2.1110363 3.5183938 MLP@95+V 0.68685906 0.9568089 MLP true true true true 109 9.6 2.4628757 3158.0066 0.65823122 MLP@95+V 1.8046445 3.1211069 MLP@95+V 0.68685906 0.9568089 MLP true true true true 110 9.8 2.4628757 3158.0066 0.65823122 MLP@95+V 1.8046445 3.1211069 MLP@95+V 0.68685906 0.9568089 MLP true true true true 111 10 1.2314378 3158.0066 0.46543976 MLP@95+V 0.76599809 1.6968776 MLP@95+V 0.68685906 0.9568089 MLP true true true true 
112 10.2 1.0555182 3158.0066 0.43091348 MLP@95+V 0.62460467 1.4864316 MLP@95+V 0.68685906 0.9568089 MLP true true true true 113 10.4 0.52775908 3158.0066 0.30470185 MLP@95+V 0.22305723 0.83246092 MLP@95+V 0.68685906 0.9568089 MLP true true true true 114 10.6 0.87959846 3158.0066 0.39336839 MLP@95+V 0.48623007 1.2729669 MLP@95+V 0.68685906 0.9568089 MLP true true true true 115 10.8 0.17591969 3158.0066 0.17591969 MLP@95+V 0 0.35183938 MLP@95+V 0.68685906 0.9568089 MLP true true true true 116 11 0.17591969 3158.0066 0.17591969 MLP@95+V 0 0.35183938 MLP@95+V 0.68685906 0.9568089 MLP true true true true 117 11.2 0.70367877 3158.0066 0.35183938 MLP@95+V 0.35183938 1.0555182 MLP@95+V 0.68685906 0.9568089 MLP true true true true 118 11.4 0.17591969 3158.0066 0.17591969 MLP@95+V 0 0.35183938 MLP@95+V 0.68685906 0.9568089 MLP true true true true 119 11.6 0 3158.0066 0 MLP@95+V 0 0 MLP@95+V 0.68685906 0.9568089 MLP true true true true 120 11.8 0.17591969 3158.0066 0.17591969 MLP@95+V 0 0.35183938 MLP@95+V 0.68685906 0.9568089 MLP true true true true 121 12 0 3158.0066 0 MLP@95+V 0 0 MLP@95+V 0.68685906 0.9568089 MLP true true true true [INFO]:INFO: storing plot in /home/basti/phd/Figs/background/background_rate_crGold_scinti_fadc_septem_line_lnL80_mlp95.pdf [WARNING]: Printing total background time currently only supported for single datasets. [INFO] TeXDaemon ready for input. 
shellCmd: command -v lualatex shellCmd: lualatex -output-directory /home/basti/phd/Figs/background /home/basti/phd/Figs/background/background_rate_crGold_scinti_fadc_septem_line_lnL80_mlp95.tex Generated: /home/basti/phd/Figs/background/background_rate_crGold_scinti_fadc_septem_line_lnL80_mlp95.pdf :end: #+begin_src sh plotBackgroundRate \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R2_crAll_sEff_0.85_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R3_crAll_sEff_0.85_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R3_crAll_sEff_0.85_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R2_crAll_sEff_0.95_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R3_crAll_sEff_0.95_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2_vQ_0.99.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run2_crAll_lnL.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run3_crAll_lnL.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run2_crAll_lnL_scinti_fadc_line_vetoPercentile_0.99.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run3_crAll_lnL_scinti_fadc_line_vetoPercentile_0.99.h5 \ --names "MLP@80" 
--names "MLP@80" \ --names "MLP8+Sc+F+L" --names "MLP8+Sc+F+L" \ --names "MLP@91" --names "MLP@91" \ --names "MLP9+Sc+F+L" --names "MLP9+Sc+F+L" \ --names "LnL@80" --names "LnL@80" \ --names "LnL+Sc+F+L" --names "LnL+Sc+F+L" \ --centerChip 3 \ --title "Background rate from CAST data" \ --showNumClusters \ --showTotalTime \ --topMargin 1.5 \ --energyDset energyFromCharge \ --outfile background_rate_crGold_scinti_fadc_septem_line.pdf \ --outpath /tmp \ --useTeX \ --region crGold \ --energyMin 0.2 \ --hideErrors \ --hidePoints \ --quiet # --applyEfficiencyNormalization \ #+end_src And now a plot with the data 'normalized' by the total signal efficiency: if we have e.g. 80% total efficiency, we multiply the total time by that factor, reducing the 'live' time. This is a decent way to get an idea of which veto settings are useful and which are not (i.e. which lose too much signal efficiency for too little gain in background suppression). #+begin_src sh plotBackgroundRate \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run2_crAll_lnL.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run3_crAll_lnL.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run2_crAll_lnL_scinti.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run3_crAll_lnL_scinti.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run2_crAll_lnL_scinti_fadc_vetoPercentile_0.99.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run3_crAll_lnL_scinti_fadc_vetoPercentile_0.99.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run2_crAll_lnL_scinti_fadc_septem_vetoPercentile_0.99.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run3_crAll_lnL_scinti_fadc_septem_vetoPercentile_0.99.h5 \ 
~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run2_crAll_lnL_scinti_fadc_line_vetoPercentile_0.99.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run3_crAll_lnL_scinti_fadc_line_vetoPercentile_0.99.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run2_crAll_lnL_scinti_fadc_septem_line_vetoPercentile_0.99.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run3_crAll_lnL_scinti_fadc_septem_line_vetoPercentile_0.99.h5 \ --names "No vetoes" --names "No vetoes" \ --names "Scinti" --names "Scinti" \ --names "FADC" --names "FADC" \ --names "Septem" --names "Septem" \ --names "noSeptem" --names "noSeptem" \ --names "Line" --names "Line" \ --centerChip 3 \ --title "Background rate from CAST data, incl. scinti, FADC, septem, line veto" \ --showNumClusters \ --showTotalTime \ --topMargin 1.5 \ --energyDset energyFromCharge \ --outfile background_rate_crGold_scinti_fadc_septem_line.pdf \ --outpath /tmp \ --useTeX \ --applyEfficiencyNormalization \ --region crGold \ --quiet #+end_src -> Ok, the above works; the files all have the ~logCtx~ group. **** *OUTDATED* Generate background rate plots :noexport: What plots do we want exactly? 
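As a reference for the numbers these ~plotBackgroundRate~ invocations produce: both the plain rate normalization and the ~--applyEfficiencyNormalization~ variant described above boil down to a few lines. A minimal sketch (the helper names and the numbers are illustrative assumptions of mine, not code or values from the tool):

#+begin_src python
# Sketch of the background rate normalizations discussed above.
# Helper names and numbers are illustrative only, not tool internals.

def background_rate(n_clusters, time_s, area_cm2, e_low_kev, e_high_kev):
    """Normalize a raw cluster count to a rate in keV⁻¹·cm⁻²·s⁻¹."""
    return n_clusters / (time_s * area_cm2 * (e_high_kev - e_low_kev))

def efficiency_normalized_rate(n_clusters, time_s, area_cm2,
                               e_low_kev, e_high_kev, eps_total):
    """Rate with the live time scaled down by the total signal efficiency
    ε_total, mimicking the idea behind --applyEfficiencyNormalization."""
    return background_rate(n_clusters, eps_total * time_s, area_cm2,
                           e_low_kev, e_high_kev)

# Illustrative example: 180 clusters in ~3158 h over the 0.25 cm² gold
# region between 0.2 and 8 keV, at a total efficiency of 57.4 %.
time_s = 3158.0 * 3600
rate     = background_rate(180, time_s, 0.25, 0.2, 8.0)   # ≈ 8.1e-6
rate_eff = efficiency_normalized_rate(180, time_s, 0.25, 0.2, 8.0, 0.574)
#+end_src

Scaling the live time by ε_total in this way means a setup with low total efficiency shows a correspondingly larger effective rate, which is what allows the rough comparison across veto settings mentioned above.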
#+begin_src sh plotBackgroundRate \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R2_crAll_sEff_0.85_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R3_crAll_sEff_0.85_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R3_crAll_sEff_0.85_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R2_crAll_sEff_0.95_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R3_crAll_sEff_0.95_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2_vQ_0.99.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run2_crAll_lnL.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run3_crAll_lnL.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run2_crAll_lnL_scinti_fadc_septem_line_vetoPercentile_0.99.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run3_crAll_lnL_scinti_fadc_septem_line_vetoPercentile_0.99.h5 \ --names "MLP@80" --names "MLP@80" \ --names "MLP@80+V" --names "MLP@80+V" \ --names "MLP@91" --names "MLP@91" \ --names "MLP@91+V" --names "MLP@91+V" \ --names "LnL@80" --names "LnL@80" \ --names "LnL@80+V" --names "LnL@80+V" \ --centerChip 3 \ --title "Background rate from CAST data" \ 
--showNumClusters \ --showTotalTime \ --topMargin 1.5 \ --energyDset energyFromCharge \ --outfile background_rate_crGold_scinti_fadc_septem_line.pdf \ --outpath /tmp \ --useTeX \ --region crGold \ --energyMin 0.2 \ --hideErrors \ --hidePoints \ --quiet # --applyEfficiencyNormalization \ #+end_src #+begin_src sh plotBackgroundRate \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R2_crAll_sEff_0.85_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R3_crAll_sEff_0.85_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R3_crAll_sEff_0.85_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R2_crAll_sEff_0.95_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R3_crAll_sEff_0.95_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_tanh300_outer_chip_training_30_10_23/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_SGD_300_2_vQ_0.99.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run2_crAll_lnL.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run3_crAll_lnL.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run2_crAll_lnL_scinti_fadc_line_vetoPercentile_0.99.h5 \ ~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run3_crAll_lnL_scinti_fadc_line_vetoPercentile_0.99.h5 \ --names "MLP@80" --names 
"MLP@80" \
--names "MLP8+Sc+F+L" --names "MLP8+Sc+F+L" \
--names "MLP@91" --names "MLP@91" \
--names "MLP9+Sc+F+L" --names "MLP9+Sc+F+L" \
--names "LnL@80" --names "LnL@80" \
--names "LnL+Sc+F+L" --names "LnL+Sc+F+L" \
--centerChip 3 \
--title "Background rate from CAST data" \
--showNumClusters \
--showTotalTime \
--topMargin 1.5 \
--energyDset energyFromCharge \
--outfile background_rate_crGold_scinti_fadc_septem_line.pdf \
--outpath /tmp \
--useTeX \
--region crGold \
--energyMin 0.2 \
--hideErrors \
--hidePoints \
--quiet
# --applyEfficiencyNormalization \
#+end_src

And now a plot with data 'normalized' by the total signal efficiency. Meaning that if we have e.g. 80% total efficiency, we multiply the total time by that factor, reducing the 'live' time. This is a decent way to get an idea of which veto settings are useful and which are not (i.e. which lose too much efficiency for too little gain in background reduction).

#+begin_src sh
plotBackgroundRate \
~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run2_crAll_lnL.h5 \
~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run3_crAll_lnL.h5 \
~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run2_crAll_lnL_scinti.h5 \
~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run3_crAll_lnL_scinti.h5 \
~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run2_crAll_lnL_scinti_fadc_vetoPercentile_0.99.h5 \
~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run3_crAll_lnL_scinti_fadc_vetoPercentile_0.99.h5 \
~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run2_crAll_lnL_scinti_fadc_septem_vetoPercentile_0.99.h5 \
~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run3_crAll_lnL_scinti_fadc_septem_vetoPercentile_0.99.h5 \
~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run2_crAll_lnL_scinti_fadc_line_vetoPercentile_0.99.h5 \
~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run3_crAll_lnL_scinti_fadc_line_vetoPercentile_0.99.h5 \
~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run2_crAll_lnL_scinti_fadc_septem_line_vetoPercentile_0.99.h5 \
~/org/resources/lhood_limits_automation_with_nn_support/lhood/likelihood_cdl2018_Run3_crAll_lnL_scinti_fadc_septem_line_vetoPercentile_0.99.h5 \
--names "No vetoes" --names "No vetoes" \
--names "Scinti" --names "Scinti" \
--names "FADC" --names "FADC" \
--names "Septem" --names "Septem" \
--names "noSeptem" --names "noSeptem" \
--names "Line" --names "Line" \
--centerChip 3 \
--title "Background rate from CAST data, incl. scinti, FADC, septem, line veto" \
--showNumClusters \
--showTotalTime \
--topMargin 1.5 \
--energyDset energyFromCharge \
--outfile background_rate_crGold_scinti_fadc_septem_line.pdf \
--outpath /tmp \
--useTeX \
--applyEfficiencyNormalization \
--region crGold \
--quiet
#+end_src

-> Ok, the above works; the files all have the ~logCtx~ group.

*** Generate plot of background clusters left over with all vetoes [/] :extended:
:PROPERTIES:
:CUSTOM_ID: sec:background:gen_bck_cluster_plots_all_vetoes
:END:

- [ ] *MAYBE MAKE TEXT FONT WHITE AGAIN?*

LnL @ 80% and no vetoes, but using the same zMax as for the all veto case below.
#+begin_src sh :results drawer ESCAPE_LATEX=true plotBackgroundClusters \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL.h5 \ --zMax 5 \ --zMaxSuppression 1000 \ --title "CAST data, LnL@80%, no vetoes" \ --outpath ~/phd/Figs/backgroundClusters/ \ --filterNoisyPixels \ --energyMin 0.2 --energyMax 12.0 \ --showGoldRegion \ --backgroundSuppression \ --suffix "_lnL80_no_vetoes_zMaxS_1000" \ --useTikZ #+end_src LnL @ 80% and all vetoes: #+begin_src sh :results drawer ESCAPE_LATEX=true plotBackgroundClusters \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ --zMax 5 \ --zMaxSuppression 1000 \ --title "CAST data, LnL@80%, all vetoes" \ --outpath ~/phd/Figs/backgroundClusters/ \ --filterNoisyPixels \ --energyMin 0.2 --energyMax 12.0 \ --showGoldRegion \ --backgroundSuppression \ --suffix "_lnL80_all_vetoes_zMaxS_1000" \ --useTikZ #+end_src #+RESULTS: :results: reading: /home/basti/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 reading: /home/basti/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 @["/home/basti/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5", "/home/basti/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5"] DataFrame with 3 columns and 4740 rows: Idx x y count dtype: int int int 0 3 247 3 1 6 1 1 2 6 166 6 3 9 33 1 4 9 122 1 5 10 7 1 6 10 146 1 7 10 165 1 8 10 166 1 9 10 224 1 10 10 235 1 11 11 102 1 12 11 105 1 13 11 200 1 14 11 229 1 15 12 10 1 16 12 11 1 17 12 13 1 18 12 38 1 19 12 92 1 [INFO]: Saving 
plot to /home/basti/phd/Figs/backgroundClusters//background_cluster_centers_lnL80_all_vetoes.pdf INFO: The integer column `x` has been automatically determined to be continuous. To overwrite this behavior add a `+ scale_x/y_discrete()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to. Or apply a `factor` to the column name in the `aes` call, i.e. `aes(..., factor("x"), ...)`. INFO: The integer column `xs` has been automatically determined to be continuous. To overwrite this behavior add a `+ scale_x/y_discrete()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to. Or apply a `factor` to the column name in the `aes` call, i.e. `aes(..., factor("xs"), ...)`. INFO: The integer column `y` has been automatically determined to be continuous. To overwrite this behavior add a `+ scale_x/y_discrete()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to. Or apply a `factor` to the column name in the `aes` call, i.e. `aes(..., factor("y"), ...)`. INFO: The integer column `ys` has been automatically determined to be continuous. To overwrite this behavior add a `+ scale_x/y_discrete()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to. Or apply a `factor` to the column name in the `aes` call, i.e. `aes(..., factor("ys"), ...)`. DataFrame with 4 columns and 49 rows: Idx xI yI cI sI dtype: int int int float 0 0 0 209 158.8 1 0 36 76 436.6 2 0 73 42 790 3 0 109 30 1106 4 0 146 59 562.4 5 0 182 82 404.6 6 0 219 88 377 7 36 0 387 85.74 8 36 36 255 130.1 9 36 73 80 414.8 10 36 109 78 425.4 11 36 146 75 442.4 12 36 182 179 185.4 13 36 219 136 244 14 73 0 170 195.2 15 73 36 104 319 16 73 73 44 754.1 17 73 109 52 638.1 18 73 146 47 706 19 73 182 110 301.6 [INFO] TeXDaemon ready for input. 
shellCmd: command -v lualatex shellCmd: lualatex -output-directory /home/basti/phd/Figs/backgroundClusters /home/basti/phd/Figs/backgroundClusters/background_suppression_tile_map_lnL80_all_vetoes.tex Generated: /home/basti/phd/Figs/backgroundClusters/background_suppression_tile_map_lnL80_all_vetoes.pdf :end: Background clusters, 95%: #+begin_src sh :results drawer ESCAPE_LATEX=true plotBackgroundClusters \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --zMax 5 \ --title "Cluster centers CAST data, MLP@95%, all vetoes" \ --outpath ~/phd/Figs/backgroundClusters/ \ --filterNoisyPixels \ --showGoldRegion \ --energyMin 0.2 --energyMax 12.0 \ --suffix "_mlp_95_all_vetoes_adam_tanh30_sigmoid_mse_82k" \ --backgroundSuppression \ --useTikZ #+end_src #+RESULTS: :results: reading: /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 reading: /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 @["/home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5", 
"/home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5"] DataFrame with 3 columns and 5606 rows: Idx x y count dtype: int int int 0 3 1 1 1 3 247 2 2 4 1 9 3 5 1 15 4 6 1 9 5 6 166 1 6 12 37 1 7 15 42 1 8 16 12 1 9 16 99 1 10 16 224 1 11 17 166 1 12 18 64 1 13 18 168 1 14 18 173 1 15 18 201 1 16 19 35 1 17 19 37 2 18 19 41 1 19 19 98 1 [INFO]: Saving plot to /home/basti/phd/Figs/backgroundClusters//background_cluster_centers_mlp_95_all_vetoes_adam_tanh30_sigmoid_mse_82k.pdf INFO: The integer column `x` has been automatically determined to be continuous. To overwrite this behavior add a `+ scale_x/y_discrete()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to. Or apply a `factor` to the column name in the `aes` call, i.e. `aes(..., factor("x"), ...)`. INFO: The integer column `xs` has been automatically determined to be continuous. To overwrite this behavior add a `+ scale_x/y_discrete()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to. Or apply a `factor` to the column name in the `aes` call, i.e. `aes(..., factor("xs"), ...)`. INFO: The integer column `y` has been automatically determined to be continuous. To overwrite this behavior add a `+ scale_x/y_discrete()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to. Or apply a `factor` to the column name in the `aes` call, i.e. `aes(..., factor("y"), ...)`. INFO: The integer column `ys` has been automatically determined to be continuous. To overwrite this behavior add a `+ scale_x/y_discrete()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to. Or apply a `factor` to the column name in the `aes` call, i.e. `aes(..., factor("ys"), ...)`. 
DataFrame with 4 columns and 49 rows:
    Idx     xI     yI     cI      sI
 dtype:    int    int    int   float
      0      0      0    153   216.9
      1      0     36    120   276.5
      2      0     73     30    1106
      3      0    109     34   975.9
      4      0    146     31    1070
      5      0    182    109   304.4
      6      0    219     28    1185
      7     36      0    478   69.41
      8     36     36    405   81.93
      9     36     73     92   360.7
     10     36    109    100   331.8
     11     36    146    101   328.5
     12     36    182    315   105.3
     13     36    219    165   201.1
     14     73      0    271   122.4
     15     73     36    151   219.7
     16     73     73     59   562.4
     17     73    109     67   495.2
     18     73    146     55   603.3
     19     73    182    124   267.6
[INFO] TeXDaemon ready for input.
shellCmd: command -v lualatex
shellCmd: lualatex -output-directory /home/basti/phd/Figs/backgroundClusters /home/basti/phd/Figs/backgroundClusters/background_suppression_tile_map_mlp_95_all_vetoes_adam_tanh30_sigmoid_mse_82k.tex
Generated: /home/basti/phd/Figs/backgroundClusters/background_suppression_tile_map_mlp_95_all_vetoes_adam_tanh30_sigmoid_mse_82k.pdf
:end:

*** Generate background rate table :extended:
:PROPERTIES:
:CUSTOM_ID: sec:background:generate_background_rate_table
:END:

Uhhh, excuse this massive wall. I could have written a script that reads the files from two directories, but well...
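Such a script would not need to be long. The following is a hypothetical sketch (the helper name ~emit_setup~ and its arguments are inventions; the file naming scheme is inferred from the paths listed below) of how one veto setup could be expanded into the Run-2/Run-3 file pair plus the matching ~--names~ flags:

```shell
# Hypothetical sketch: emit the Run2/Run3 likelihood file pair and the
# matching "--names" flags for one veto setup, so the plotBackgroundRate
# call below could be assembled in a loop instead of by hand.
emit_setup() {
  dir=$1     # directory holding the likelihood output files
  label=$2   # name shown in the plot legend, e.g. "LnL80"
  suffix=$3  # file name suffix encoding classifier & vetoes
  for run in R2 R3; do
    # one file per run period and veto combination (assumed naming scheme)
    printf '%s \\\n' "$dir/lhood_c18_${run}_crAll_${suffix}.h5"
  done
  printf -- '--names "%s" --names "%s" \\\n' "$label" "$label"
}

# Example: the LnL@80% case without and with all vetoes
emit_setup ~/org/resources/lhood_lnL_17_11_23_septem_fixed "LnL80" "sEff_0.8_lnL"
emit_setup ~/org/resources/lhood_lnL_17_11_23_septem_fixed "LnL80+SL" \
  "sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99"
```

Each call prints two file arguments and one ~--names~ pair, ready to be spliced into the command line below.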
#+begin_src sh :results drawer plotBackgroundRate \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_line_vQ_0.99.h5 \ --names "LnL80" --names "LnL80" \ --names "LnL80+Sc" --names "LnL80+Sc" \ --names "LnL80+F" --names "LnL80+F" \ --names "LnL80+S" --names "LnL80+S" \ --names "LnL80+SL" --names "LnL80+SL" \ --names "LnL80+L" --names "LnL80+L" \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.7_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.7_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.7_lnL_scinti_fadc_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.7_lnL_scinti_fadc_line_vQ_0.99.h5 \ 
~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.7_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.7_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ --names "LnL70" --names "LnL70" \ --names "LnL70+L" --names "LnL70+L" \ --names "LnL70+S" --names "LnL70+S" \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.9_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.9_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.9_lnL_scinti_fadc_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.9_lnL_scinti_fadc_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.9_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.9_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ --names "LnL90" --names "LnL90" \ --names "LnL90+L" --names "LnL90+L" \ --names "LnL90+S" --names "LnL90+S" \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.85_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.85_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.85_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ 
~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.85_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --names "MLP@85" --names "MLP@85" \ --names "MLP@85+L" --names "MLP@85+L" \ --names "MLP@85+S" --names "MLP@85+S" \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.9_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.9_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.9_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.9_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.9_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.9_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --names "MLP@9" --names "MLP@9" \ --names "MLP@9+L" --names "MLP@9+L" \ --names "MLP@9+S" --names "MLP@9+S" \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ 
~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --names "MLP@95" --names "MLP@95" \ --names "MLP@95+L" --names "MLP@95+L" \ --names "MLP@95+S" --names "MLP@95+S" \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.98_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.98_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.98_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.98_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ 
~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.98_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.98_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --names "MLP@98" --names "MLP@98" \ --names "MLP@98+L" --names "MLP@98+L" \ --names "MLP@98+S" --names "MLP@98+S" \ --centerChip 3 \ --title "Background rate from CAST data, incl. scinti, FADC, septem, line veto" \ --showNumClusters \ --showTotalTime \ --topMargin 1.5 \ --energyDset energyFromCharge \ --outfile background_rate_crGold_scinti_fadc_septem_line.pdf \ --outpath /tmp/ \ --region crGold \ --energyMin 0.2 \ --rateTable ~/phd/resources/background_rate_comparisons.org \ --noPlot \ --quiet #+end_src #+RESULTS: :results: Manual rate = 1.65545(8641)e-05 [INFO]:Dataset: LnL70 [INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.29125(6740)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 1.65545(8641)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 7.3074(5741)e-06 [INFO]:Dataset: LnL70+L [INFO]: Integrated background rate in range: 0.2 .. 8.0: 5.6998(4478)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 7.3074(5741)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 6.9015(5580)e-06 [INFO]:Dataset: LnL70+S [INFO]: Integrated background rate in range: 0.2 .. 8.0: 5.3831(4352)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 6.9015(5580)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.06142(9643)e-05 [INFO]:Dataset: LnL80 [INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.60791(7521)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 2.06142(9643)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.55621(8378)e-05 [INFO]:Dataset: LnL80+F [INFO]: Integrated background rate in range: 0.2 .. 
8.0: 1.21385(6535)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 1.55621(8378)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 8.0743(6035)e-06 [INFO]:Dataset: LnL80+L [INFO]: Integrated background rate in range: 0.2 .. 8.0: 6.2979(4707)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 8.0743(6035)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 9.3373(6490)e-06 [INFO]:Dataset: LnL80+S [INFO]: Integrated background rate in range: 0.2 .. 8.0: 7.2831(5062)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 9.3373(6490)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 7.6683(5881)e-06 [INFO]:Dataset: LnL80+SL [INFO]: Integrated background rate in range: 0.2 .. 8.0: 5.9813(4587)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 7.6683(5881)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.91256(9288)e-05 [INFO]:Dataset: LnL80+Sc [INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.49180(7245)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 1.91256(9288)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.7561(1115)e-05 [INFO]:Dataset: LnL90 [INFO]: Integrated background rate in range: 0.2 .. 8.0: 2.14974(8697)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 2.7561(1115)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 9.9688(6706)e-06 [INFO]:Dataset: LnL90+L [INFO]: Integrated background rate in range: 0.2 .. 8.0: 7.7757(5230)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 9.9688(6706)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 9.3824(6506)e-06 [INFO]:Dataset: LnL90+S [INFO]: Integrated background rate in range: 0.2 .. 8.0: 7.3183(5074)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 9.3824(6506)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.56523(8403)e-05 [INFO]:Dataset: MLP@85 [INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.22088(6554)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 
8.0: 1.56523(8403)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 8.8411(6315)e-06 [INFO]:Dataset: MLP@85+L [INFO]: Integrated background rate in range: 0.2 .. 8.0: 6.8961(4926)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 8.8411(6315)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 8.2096(6085)e-06 [INFO]:Dataset: MLP@85+S [INFO]: Integrated background rate in range: 0.2 .. 8.0: 6.4035(4747)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 8.2096(6085)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.74566(8874)e-05 [INFO]:Dataset: MLP@9 [INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.36162(6921)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 1.74566(8874)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 9.6981(6614)e-06 [INFO]:Dataset: MLP@9+L [INFO]: Integrated background rate in range: 0.2 .. 8.0: 7.5645(5159)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 9.6981(6614)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 9.1117(6411)e-06 [INFO]:Dataset: MLP@9+S [INFO]: Integrated background rate in range: 0.2 .. 8.0: 7.1072(5001)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 9.1117(6411)e-06 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.01631(9537)e-05 [INFO]:Dataset: MLP@95 [INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.57272(7439)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 2.01631(9537)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.05552(6900)e-05 [INFO]:Dataset: MLP@95+L [INFO]: Integrated background rate in range: 0.2 .. 8.0: 8.2330(5382)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 1.05552(6900)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.00139(6721)e-05 [INFO]:Dataset: MLP@95+S [INFO]: Integrated background rate in range: 0.2 .. 8.0: 7.8108(5242)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 
8.0: 1.00139(6721)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.3862(1037)e-05 [INFO]:Dataset: MLP@98 [INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.86123(8092)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 2.3862(1037)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.16378(7245)e-05 [INFO]:Dataset: MLP@98+L [INFO]: Integrated background rate in range: 0.2 .. 8.0: 9.0775(5651)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 1.16378(7245)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.10063(7046)e-05 [INFO]:Dataset: MLP@98+S [INFO]: Integrated background rate in range: 0.2 .. 8.0: 8.5849(5496)e-05 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 1.10063(7046)e-05 keV⁻¹·cm⁻²·s⁻¹ | Classifier | ε_eff | Scinti | FADC | Septem | Line | ε_total | Rate | | LnL | 0.700 | false | false | false | false | 0.700 | 1.65545(8641)e-05 | | LnL | 0.700 | true | true | false | true | 0.590 | 7.3074(5741)e-06 | | LnL | 0.700 | true | true | true | true | 0.503 | 6.9015(5580)e-06 | | LnL | 0.800 | false | false | false | false | 0.800 | 2.06142(9643)e-05 | | LnL | 0.800 | true | true | false | false | 0.784 | 1.55621(8378)e-05 | | LnL | 0.800 | true | true | false | true | 0.674 | 8.0743(6035)e-06 | | LnL | 0.800 | true | true | true | false | 0.615 | 9.3373(6490)e-06 | | LnL | 0.800 | true | true | true | true | 0.574 | 7.6683(5881)e-06 | | LnL | 0.800 | true | false | false | false | 0.800 | 1.91256(9288)e-05 | | LnL | 0.900 | false | false | false | false | 0.900 | 2.7561(1115)e-05 | | LnL | 0.900 | true | true | false | true | 0.759 | 9.9688(6706)e-06 | | LnL | 0.900 | true | true | true | true | 0.646 | 9.3824(6506)e-06 | | MLP | 0.865 | false | false | false | false | 0.865 | 1.56523(8403)e-05 | | MLP | 0.865 | true | true | false | true | 0.729 | 8.8411(6315)e-06 | | MLP | 0.865 | true | true | true | true | 0.621 | 8.2096(6085)e-06 | | MLP | 0.912 | false | false | false | false | 0.912 | 1.74566(8874)e-05 | | MLP | 0.912 | 
true | true | false | true | 0.769 | 9.6981(6614)e-06 |
| MLP | 0.912 | true | true | true | true | 0.655 | 9.1117(6411)e-06 |
| MLP | 0.957 | false | false | false | false | 0.957 | 2.01631(9537)e-05 |
| MLP | 0.957 | true | true | false | true | 0.807 | 1.05552(6900)e-05 |
| MLP | 0.957 | true | true | true | true | 0.687 | 1.00139(6721)e-05 |
| MLP | 0.983 | false | false | false | false | 0.983 | 2.3862(1037)e-05 |
| MLP | 0.983 | true | true | false | true | 0.829 | 1.16378(7245)e-05 |
| MLP | 0.983 | true | true | true | true | 0.706 | 1.10063(7046)e-05 |
:end:

*** Old text from IAXO TDR text :noexport:

All the vetoes discussed above yield a very good improvement of the background rate, shown in fig. [[fig:background_rate_all_vetoes]], which compares all vetoes discussed above. Each veto builds on the previous ones. The background rate between $\SIrange{0}{8}{keV}$ ends up at $<\SI{1.1e-5}{keV⁻¹ cm⁻² s⁻¹}$.

While the vetoes allow for a good reduction over the initial background rate (in particular over the whole chip, as seen in fig. [[fig:background_clusters_septem_veto]]), the limitations of the achieved background need to be discussed in the context of a GridPix3 based detector with 7 GridPix.

First of all, the main features visible in the background rate are of course the argon fluorescence line at around $\SI{3}{keV}$ and the copper fluorescence lines at $\sim\SIrange{8}{9}{keV}$. There are two main ways to excite these lines:
1. via cosmics
2. via radioactive impurities of the detector material
These also split into two groups:
1. the exciting particle induces the fluorescence *within* the sensitive area of the readout (only possible for the excitation of the argon line or the copper of the anode)
2. the exciting particle induces the fluorescence *outside* the sensitive area of the readout

In the first case the GridPix1 detector runs into a severe limitation due to its readout. The readout of the Timepix1 is shutter based.
The combination of 7 such GridPix1 leads to a very long readout time, which is why a shutter length of about $\SI{2.4}{s}$ was chosen for the data taking at CAST. As we only had an FADC to trigger, and thus close the shutter prematurely, for the center chip, the long time scales mean that the chance of random coincidences of events on the outer chips is significant. For example, if a muon traverses the outer chips but fulfills neither the septem nor the line veto, there is no way to be certain whether the cluster seen on the center chip is correlated with it or not. While an aggressive "no activity on the outer chips allowed" veto is possible, it severely increases the dead time due to the long shutter times.

This particular case is perfectly resolved by the use of the GridPix3, as that version not only allows for a data driven readout, resolving the problem of long shutter times, but also allows for *simultaneous* readout of ~ToT~ and ~ToA~. So with a GridPix3 detector such random coincidences are reduced to 'real' random coincidences, which are on the time scale of the physical processes we are interested in.

The second case will remain an issue even with a GridPix3 based detector. *However*, passive mitigation is possible by using a different gas mixture (e.g. xenon based) to avoid the argon fluorescence line altogether.

*TODO*: once merged, replace/add radiopure talk with reference to corresponding section

Further, the GridPix3 detector will be built using radiopure materials. This should significantly reduce the induced fluorescence to those parts that are cosmic induced. And finally, the cosmic induced events will also be further reduced by the use of a fully covering scintillator veto system. All of these improvements combined should lead to a significant reduction of the background rate.

#+CAPTION: Background rate in the center \goldArea region using
#+CAPTION: the full 2017/18 GridPix1 dataset from CAST.
#+CAPTION: Each successive veto, applied in the order
#+CAPTION: in which they are explained above, is shown cumulatively. The 'line veto' contains all
#+CAPTION: discussed vetoes. It yields a background rate in the region between $\SIrange{0}{8}{keV}$ of
#+CAPTION: $<\SI{1.1e-5}{keV⁻¹ cm⁻² s⁻¹}$.
#+NAME: fig:background_rate_all_vetoes
[[~/org/Figs/statusAndProgress/IAXO_TDR/background_rate_2017_2018_scinti_veto_septem_veto_line_veto.pdf]]

- [ ] *INCLUDE MLP*

*** Example using GridPix3 data :noexport:

-> No, not in this thesis. Maybe separate, maybe merged into the other sections.

** Understanding background rate :noexport:

- [ ] this is already partially handled in the summary section of background with all vetoes. 3 keV is easy to understand. 8-9 keV is more difficult.

** Muon calculations :noexport:

Probably not going to make it into the final thesis? We'll see, take from StatusAndProgress. Maybe a shortened version will make it.

- [ ] *INSERT THESE!* They are now referenced in a footnote in the scintillator section!
- [ ] *IF NOT ALREADY DONE, ESTIMATE THE EXPECTED RATE OF MUONS UNDER THOSE ANGLES NUMERICALLY*
  -> so that we have a total number of muons we should have detected e.g. orthogonally in the CAST data, e.g. to compare with SiPM trigger rates etc.

** Hough transformation as a cluster finding helper :noexport:

At some point we considered whether a Hough transformation could be a useful tool in the application of the outer GridPix ring as a veto. The notes about this are here for completeness, as they showcase an interesting idea that led to a dead end. We include them to avoid others investing time in it. Or rather, if you have plenty of experience with Hough transformations and know what to make of this, then feel free to use it as a starting inspiration!

- [X] Add our notes about attempts to use Hough transformations.
*** Hough transformation for cluster finding :noexport:
I started a Hough trafo playground in
[[file:~/CastData/ExternCode/TimepixAnalysis/Tools/houghTrafoPlayground/houghTrafoPlayground.nim]].

Reading up on Hough transformations is a bit confusing, but what we are doing for the moment:
- compute connecting lines between *each* point pair in a septem event (so for $N$ hits, that's up to $N²$ lines)
- for each line compute the slope and intersect

From this information we can look at different things:
1. the plots of all lines. Very messy, but gives an idea if the lines are correct.
2. a histogram of all found slopes
3. a histogram of all found intersects
4. a scatter plot of slopes vs. intersects

The "algorithm" to compute the Hough transformation is pretty dumb at the moment:
#+begin_src nim
var xs = newSeqOfCap[int](x.len * x.len)
var ys = newSeqOfCap[int](x.len * x.len)
var ids = newSeqOfCap[string](x.len * x.len)
var slopes = newSeqOfCap[float](x.len * x.len)
var intersects = newSeqOfCap[float](x.len * x.len)
echo x
for i in 0 ..< x.len:
  for j in 0 ..< x.len:
    if i != j: # don't look at same point
      xs.add x[j]
      ys.add y[j]
      xs.add x[i]
      ys.add y[i]
      ids.add $i & "/" & $j
      ids.add $i & "/" & $j
      if xs[^1] - xs[^2] != 0: # if same x, slope is inf
        let slope = (ys[^1] - ys[^2]).float / (xs[^1] - xs[^2]).float
        slopes.add slope
        # make sure both points yield same intercept
        doAssert abs( (y[j].float - slope * x[j].float) -
                      (y[i].float - slope * x[i].float) ) < 1e-4
        intersects.add (y[j].float - slope * x[j].float)
#+end_src

Let's look at a couple of examples:
**** Example 0
=septemEvent_run_272_event_95288.csv=

#+begin_center
#+CAPTION: The corresponding septem event.
#+NAME: fig_hough_septem_example_0
[[~/org/Figs/statusAndProgress/houghTrafo/plot_septem_run_272_event_95288.pdf]]
#+end_center

#+begin_center
#+CAPTION: All connecting lines between the pixel pairs.
#+NAME: fig_hough_lines_example_0 [[~/org/Figs/statusAndProgress/houghTrafo/lines_run_272_event_95288.png]] #+end_center #+begin_center #+CAPTION: Histogram of all slopes #+NAME: fig_hough_histo_slopes_example_0 [[~/org/Figs/statusAndProgress/houghTrafo/histo_slopes_run_272_event_95288.pdf]] #+end_center #+begin_center #+CAPTION: Histogram of all intersects #+NAME: fig_hough_histo_intersects_example_0 [[~/org/Figs/statusAndProgress/houghTrafo/histo_intersects_run_272_event_95288.pdf]] #+end_center #+begin_center #+CAPTION: Scatter plot of slopes vs intersects. #+NAME: fig_hough_slope_vs_intersects_example_0 [[~/org/Figs/statusAndProgress/houghTrafo/slope_vs_intersects_run_272_event_95288.png]] #+end_center **** Example 1 =septemEvent_run_265_event_1662.csv= #+begin_center #+CAPTION: The corresponding septem event. #+NAME: fig_hough_septem_example_1 [[~/org/Figs/statusAndProgress/houghTrafo/plot_septem_run_265_event_1662.pdf]] #+end_center #+begin_center #+CAPTION: All connecting lines between the pixel pairs. #+NAME: fig_hough_lines_example_1 [[~/org/Figs/statusAndProgress/houghTrafo/lines_run_265_event_1662.png]] #+end_center #+begin_center #+CAPTION: Histogram of all slopes #+NAME: fig_hough_histo_slopes_example_1 [[~/org/Figs/statusAndProgress/houghTrafo/histo_slopes_run_265_event_1662.pdf]] #+end_center #+begin_center #+CAPTION: Histogram of all intersects #+NAME: fig_hough_histo_intersects_example_1 [[~/org/Figs/statusAndProgress/houghTrafo/histo_intersects_run_265_event_1662.pdf]] #+end_center #+begin_center #+CAPTION: Scatter plot of slopes vs intersects. #+NAME: fig_hough_slope_vs_intersects_example_1 [[~/org/Figs/statusAndProgress/houghTrafo/slope_vs_intersects_run_265_event_1662.png]] #+end_center **** Example 2 =septemEvent_run_261_event_809.csv= #+begin_center #+CAPTION: The corresponding septem event. 
#+NAME: fig_hough_septem_example_2 [[~/org/Figs/statusAndProgress/houghTrafo/plot_septem_run_261_event_809.pdf]] #+end_center #+begin_center #+CAPTION: All connecting lines between the pixel pairs. #+NAME: fig_hough_lines_example_2 [[~/org/Figs/statusAndProgress/houghTrafo/lines_run_261_event_809.png]] #+end_center #+begin_center #+CAPTION: Histogram of all slopes #+NAME: fig_hough_histo_slopes_example_2 [[~/org/Figs/statusAndProgress/houghTrafo/histo_slopes_run_261_event_809.pdf]] #+end_center #+begin_center #+CAPTION: Histogram of all intersects #+NAME: fig_hough_histo_intersects_example_2 [[~/org/Figs/statusAndProgress/houghTrafo/histo_intersects_run_261_event_809.pdf]] #+end_center #+begin_center #+CAPTION: Scatter plot of slopes vs intersects. #+NAME: fig_hough_slope_vs_intersects_example_2 [[~/org/Figs/statusAndProgress/houghTrafo/slope_vs_intersects_run_261_event_809.png]] #+end_center **** Example 3 =septemEvent_run_291_event_31480.csv= #+begin_center #+CAPTION: The corresponding septem event. #+NAME: fig_hough_septem_example_3 [[~/org/Figs/statusAndProgress/houghTrafo/plot_septem_run_291_event_31480.pdf]] #+end_center #+begin_center #+CAPTION: All connecting lines between the pixel pairs. #+NAME: fig_hough_lines_example_3 [[~/org/Figs/statusAndProgress/houghTrafo/lines_run_291_event_31480.png]] #+end_center #+begin_center #+CAPTION: Histogram of all slopes #+NAME: fig_hough_histo_slopes_example_3 [[~/org/Figs/statusAndProgress/houghTrafo/histo_slopes_run_291_event_31480.pdf]] #+end_center #+begin_center #+CAPTION: Histogram of all intersects #+NAME: fig_hough_histo_intersects_example_3 [[~/org/Figs/statusAndProgress/houghTrafo/histo_intersects_run_291_event_31480.pdf]] #+end_center #+begin_center #+CAPTION: Scatter plot of slopes vs intersects. 
#+NAME: fig_hough_slope_vs_intersects_example_3
[[~/org/Figs/statusAndProgress/houghTrafo/slope_vs_intersects_run_291_event_31480.png]]
#+end_center

**** Example 4
=septemEvent_run_306_event_4340.csv=

#+begin_center
#+CAPTION: The corresponding septem event.
#+NAME: fig_hough_septem_example_4
[[~/org/Figs/statusAndProgress/houghTrafo/plot_septem_run_306_event_4340.pdf]]
#+end_center

#+begin_center
#+CAPTION: All connecting lines between the pixel pairs.
#+NAME: fig_hough_lines_example_4
[[~/org/Figs/statusAndProgress/houghTrafo/lines_run_306_event_4340.png]]
#+end_center

#+begin_center
#+CAPTION: Histogram of all slopes
#+NAME: fig_hough_histo_slopes_example_4
[[~/org/Figs/statusAndProgress/houghTrafo/histo_slopes_run_306_event_4340.pdf]]
#+end_center

#+begin_center
#+CAPTION: Histogram of all intersects
#+NAME: fig_hough_histo_intersects_example_4
[[~/org/Figs/statusAndProgress/houghTrafo/histo_intersects_run_306_event_4340.pdf]]
#+end_center

#+begin_center
#+CAPTION: Scatter plot of slopes vs intersects.
#+NAME: fig_hough_slope_vs_intersects_example_4
[[~/org/Figs/statusAndProgress/houghTrafo/slope_vs_intersects_run_306_event_4340.png]]
#+end_center

**** Conclusion
The Hough transformation produces a large amount of data that is hard to interpret in the context of our goal. It does not actually help us much here, so we drop this line of pursuit.

* Limit calculation :Limit:
:PROPERTIES:
:CUSTOM_ID: sec:limit
:END:
#+LATEX: \minitoc

In this chapter we will introduce a generic limit calculation method, which can be used to compute limits on different axion or ALP coupling constants. The first half of this chapter focuses on the more theoretical and conceptual aspects of our limit calculation method. The second half discusses our inputs in detail and shows our expected and observed limits.

We will start with an introduction of the method itself, a Bayesian extended likelihood approach, sec. [[#sec:limit:method_introduction]].
Step by step we will introduce the likelihood function we use (sec. [[#sec:limit:method_likelihood]]), what the individual pieces are and how likelihood values are computed (sec. [[#sec:limit:method_computing_L]]), and how a limit is computed from such a likelihood function, sec. [[#sec:limit:method_computing_a_limit]]. Then we introduce our approach to compute an expected limit by sampling toy candidates [fn:toys], sec. [[#sec:limit:method_expected_limit]]. After this we will extend our approach in sec. [[#sec:limit:method_systematics]] to include systematic uncertainties. Due to the added complexity of evaluating the resulting likelihood function, we discuss our Markov Chain Monte Carlo (MCMC) approach to evaluate it in sec. [[#sec:limit:method_mcmc]]. This concludes the first half of the chapter. Please look into Lista's book [[cite:&lista23_statistics]] if you would like more details about Bayesian limit calculations involving nuisance parameters. Barlow [[cite:&barlow1993statistics]] and Cowan [[cite:&cowan1998statistical]] are also recommended for general statistics concepts.

From here we introduce all ingredients entering the likelihood function in detail, sec. [[#sec:limit:ingredients]]. Next we discuss our systematics, sec. [[#sec:limit:systematics]], after which we explain our MCMC approach in more detail (number of chains, parameter bounds, starting parameters etc.), sec. [[#sec:limit:mcmc_calc_limit]]. At this point we can finally tie everything together and discuss the expected limits obtained for a variety of different classifier and veto choices, sec. [[#sec:limit:expected_limits]]. For the best performing setup -- the one yielding the best expected limit -- we present our axion candidates, sec. [[#sec:limit:candidates]]. Based on these we present our observed limit in sec. [[#sec:limit:observed_limit]]. Finally, in sec.
[[#sec:limit:other_couplings]] we briefly consider two other coupling constants, the axion-photon coupling $g_{aγ}$ and the chameleon coupling $β_γ$. [fn:toys] 'Toy' is a common terminology for randomly sampled cases in Monte Carlo calculations. In our case, sampling representative candidates from the background distribution yields a set of 'toy candidates'. ** TODOs for this section [/] :noexport: - [ ] Need to rethink how to structure it first... - [X] *TALK ABOUT ASIMOV DATASET IN CONTEXT OF EXPECTED LIMITS* and how it likely won't help us, as we cannot 'compute' the Asimov dataset. The closest we could do is to take the exact numbers of candidates predicted by the ~expCounts~ we use to sample background events from. Problem: what is 0.1 candidates? Well, we could take our *derivation of the unbinned approach literally* and simply take the power to $c_i = 0.1$ or similar! -> That might _actually_ work. -> We mention them now. #+begin_comment If we decide to present different limits, we should have one section (maybe in theory) where we present our methodology and then compute the limits based on that method for each of the different cases: - chameleon - axion photon - axion electron Using background rate & methods to determine it. *Need* to show the log L phase space according to Igor. Well, that seems useful anyway. *TODO* somewhere explain what an expected vs. a real limit is. #+end_comment ** Limit method (from the paper) :noexport: To compute a limit on the axion-electron coupling constant we use a Bayesian approach based on finding the $95^{\text{th}}$ percentile of the marginal posterior likelihood. 
Our initial likelihood function is derived from a ratio of two Poisson distributions, the signal plus background hypothesis over the pure background hypothesis:
\[ \mathcal{L} = \prod_i \frac{P_{\text{pois}}(c_i; s_i + b_i)}{P_{\text{pois}}(c_i; b_i)} \]
where the product runs over all channels $i$, $c_i$ is the number of candidates in each bin, and $s_i$, $b_i$ are the signal and background, respectively. The signal $s_i$ is the expected amount of signal in bin $i$ based on the solar axion flux with all detection efficiencies included. The background is given by a background model constructed from the entire background dataset at CAST during non-tracking times.

This likelihood is turned into an unbinned likelihood by choosing bins in time such that each bin contains either 0 or 1 candidates. The likelihood function simplifies to
\[ \mathcal{L} = e^{-s_{\text{tot}}} \prod_i \left(1 + \frac{s_i}{b_i}\right) \]
in this case. Here $s_{\text{tot}}$ is the total expected signal over the entire signal sensitive data taking period -- the total number of expected axion-induced X-rays recorded by our detector.

Further, systematics are taken into account by multiplying with one normal distribution for each nuisance parameter, which is normalized such that $\mathcal{N}(θ = 0, σ) = 1$. The signal and background parameters are scaled by $θ$, such that a positive $θ$ increases the parameter and a negative one decreases it. At the same time the normal distribution acts as a penalty term. To compute the limit the explicit $θ$ dependencies must be removed, which is done by marginalization, i.e.
integrating them out
\[ \mathcal{L}_{M} = \iiiint_{-∞}^∞ \exp(-s'_{\text{tot}}) \cdot \prod_i \left(1 + \frac{s_i''}{b_i'}\right) \cdot \exp\left[-\frac{θ_b²}{2 σ_b²} - \frac{θ_s²}{2 σ_s²} - \frac{θ_x²}{2 σ_{xy}²} - \frac{θ_y²}{2 σ_{xy}²} \right] \, \mathrm{d}\,θ_b \mathrm{d}\,θ_s \mathrm{d}\,θ_x \mathrm{d}\,θ_y \]
# \begin{align*}
# \mathcal{L}_M &= \iiiint_{-∞}^∞ \left(\prod_i \frac{P_{\text{pois}}(n_i; s_i'' + b_i')}{P_{\text{pois}}(n_i; b_i')}\right) \cdot \mathcal{N}(θ_s, σ_s)
# \cdot \mathcal{N}(θ_b, σ_b) \cdot \mathcal{N}(θ_x, σ_x) \cdot \mathcal{N}(θ_y, σ_y) \\
# \mathcal{L}'(g, θ_s, θ_b, θ_x, θ_y) &= e^{-s'_\text{tot}} \prod_i (1 + \frac{s_i''}{b_i'}) ·
# \exp\left[-\frac{1}{2} \left(\frac{θ_s}{σ_s}\right)²
# -\frac{1}{2} \left(\frac{θ_b}{σ_b}\right)²
# -\frac{1}{2} \left(\frac{θ_x}{σ_x}\right)²
# -\frac{1}{2} \left(\frac{θ_y}{σ_y}\right)² \right]
# \end{align*}
with $a' = a ( 1 + θ_a )$ and $a'' = a ( 1 + θ_a ) ( 1 + θ_x ) ( 1 + θ_y )$. This keeps the variance due to systematics, including the penalization, embedded in the marginal likelihood, but restores a single-variable likelihood function with a well-defined value for the $95^{\text{th}}$ percentile. The limit $g'_{ae}$ is then defined by
\[ 0.95 = \frac{∫_{-∞}^{g_{ae}'} \mathcal{L}(g_{ae}) π(g_{ae}) \, \mathrm{d}g_{ae}}{∫_{-∞}^∞ \mathcal{L}(g_{ae}) π(g_{ae}) \, \mathrm{d}g_{ae}} \]
which is computed from an empirical cumulative distribution function. Evaluating such a four-fold integral with a computationally expensive integrand is demanding due to the curse of dimensionality. As such, the Metropolis-Hastings Markov Chain Monte Carlo algorithm is used to evaluate the integrand efficiently, only in those regions of the parameter space where it contributes to the integral.
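The Metropolis-Hastings idea can be illustrated with a short, self-contained toy. The following is a Python sketch only, not the thesis implementation (which lives in ~mcmc_limit_calculation.nim~); the product of four unit Gaussians stands in for the four nuisance-parameter penalty terms:

```python
import numpy as np

def metropolis_hastings(log_f, x0, n_samples, step=0.5, seed=42):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step·I) and
    accept with probability min(1, f(x')/f(x)); otherwise keep x."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lf = log_f(x)
    chain = np.empty((n_samples, x.size))
    for i in range(n_samples):
        prop = x + rng.normal(scale=step, size=x.size)
        lf_prop = log_f(prop)
        if np.log(rng.uniform()) < lf_prop - lf:  # accept/reject step
            x, lf = prop, lf_prop
        chain[i] = x  # on rejection the old point is recorded again
    return chain

# Toy target: four independent unit Gaussians, a stand-in for the four
# nuisance parameters (θ_s, θ_b, θ_x, θ_y) of the marginalization above.
log_target = lambda th: -0.5 * np.sum(th ** 2)
chain = metropolis_hastings(log_target, np.zeros(4), 20_000)
print(chain[5_000:].mean(axis=0))  # each component ≈ 0
```

The chain spends its time where the integrand is large, which is exactly why the marginalization integral can be estimated from the samples without scanning the full four-dimensional space.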
** Limit method - introduction
:PROPERTIES:
:CUSTOM_ID: sec:limit:method_introduction
:END:

We start with a few words on the terminology we use and what we have in mind when we talk about 'limits'.
- Context and terminology :: An experiment tries to detect a new phenomenon of the kind where you expect very little signal compared to background sources. We have a dataset in which the experiment is /sensitive/ to the phenomenon, another dataset in which it is /not sensitive/ and finally a theoretical model of our /expected signal/. Any data entry (after cleaning) in the sensitive dataset is a /candidate/. Each candidate is drawn from a distribution of the present background plus the expected signal contribution ($c = s + b$). Any entry in the non sensitive dataset is /background/ only.
- Goal :: Compute the value of a parameter (coupling constant) such that there is $\SI{95}{\%}$ confidence that the combined hypothesis of signal and background sources is compatible with the background only hypothesis.
- Condition :: Our experiment should be such that the data in some "channels" of our choice can be modeled by a Poisson distribution \[ P_{\text{Pois}}(k; λ) = \frac{λ^k e^{-λ}}{k!}. \] Each such channel with an expectation value of $λ$ counts has probability $P_{\text{Pois}}(k; λ)$ to measure $k$ counts. Because the Poisson distribution (as written here) is a normalized probability distribution, multiple different channels can be combined into a "likelihood" for an experiment outcome by taking the product of each channel's Poisson probability \[ \mathcal{L}(\{λ_i\}) = \prod_i P_{\text{Pois}}(k_i; λ_i) = \prod_i \frac{λ_i^{k_i} e^{-λ_i}}{k_i!}. \] Given a set of $k_i$ recorded counts for all different channels $i$ with expectation values $λ_i$, the "likelihood" gives us the literal likelihood to record exactly that experimental outcome. Note that the parameters of the likelihood function are the means $λ_i$ and not the recorded data $k_i$!
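The product over channels can be made concrete in a few lines. Here is a small Python sketch with made-up counts $k_i$ and expectations $λ_i$ (illustrative only; the analysis code in this thesis is written in Nim):

```python
from math import exp, factorial, prod

def pois(k, lam):
    """Poisson probability P(k; lam) = lam^k e^{-lam} / k!."""
    return lam**k * exp(-lam) / factorial(k)

def likelihood(ks, lams):
    """Product of per-channel Poisson probabilities: the likelihood of
    recording counts ks given channel expectations lams."""
    return prod(pois(k, lam) for k, lam in zip(ks, lams))

# three hypothetical channels: observed counts k_i and expectations lam_i
ks = [0, 2, 1]
lams = [0.5, 1.5, 1.0]
print(likelihood(ks, lams))  # ≈ 0.056
```

Maximizing this product over the $λ_i$ (or over a parameter they depend on) is the basis of every likelihood method used in the remainder of the chapter.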
The likelihood function describes the likelihood for a *fixed set of data* (our real measured counts) for different parameters (our signal & background models, where the background model is constant as well). In addition, the method described in the next section is valid under the assumption that our experiment did not have a statistically significant detection in the signal sensitive dataset compared to the background dataset.

# - [ ] Why do we use the likelihood ratio that we use?

** Limit method - likelihood function $\mathcal{L}$
:PROPERTIES:
:CUSTOM_ID: sec:limit:method_likelihood
:END:

The likelihood function as described in the previous section is not well suited to computing a limit using the different datasets described before. [fn:we_could] For that case we want some kind of "test statistic" that relates the sensitive dataset, with its observed candidates, to the background dataset. For practical purposes we prefer to define a statistic which is monotonically increasing in the number of candidates (see for example cite:junk99_mclimit). There are different choices possible, but the one we use is
\[ \mathcal{L}(s, b) = \prod_i \frac{P_{\text{pois}}(c_i; s_i + b_i)}{P_{\text{pois}}(c_i; b_i)}, \]
the ratio of the signal plus background over the pure background hypothesis. The number $c_i$ is the real number of measured *candidates*. The numerator thus gives the probability to measure $c_i$ counts in each channel $i$ given the signal plus background hypothesis, while the denominator gives the probability to measure $c_i$ counts in each channel $i$ assuming only the background hypothesis.

#+begin_quote
Note: For each channel $i$ the ratio of probabilities itself is not strictly speaking a probability density function, because the integral
\[ \int_{-∞}^{∞} Q\, \mathrm{d}x = N \neq 1, \]
with $Q$ an arbitrary such distribution. $N$ can be interpreted as a hypothetical total number of counts measured in the experiment.
A PDF requires this integral to be 1. As a result the full construct $\mathcal{L}$, the product of these ratios, is technically not a likelihood function either. It is usually referred to as an "extended likelihood function". For all practical purposes though we will continue to treat it as a likelihood function and call it $\mathcal{L}$ as usual.
#+end_quote

Note the important fact that $\mathcal{L}$ really is only a function of our signal hypothesis $s$ and our background model $b$. Each experimental outcome *has its own* $\mathcal{L}$. This is precisely why the likelihood function describes everything about an experimental outcome (at least if the signal and background models are reasonably understood) and thus different experiments can be combined in "likelihood space" (multiplying their $\mathcal{L}$ or adding $\ln \mathcal{L}$ values) to obtain a combined likelihood from which to compute a limit.

- Deriving a practical version of $\mathcal{L}$ :: The version of $\mathcal{L}$ presented above is still quite impractical to use, but the ratio of Poisson distributions can be simplified significantly:
\begin{align*}
\mathcal{L} &= \prod_i \frac{P(c_i, s_i + b_i)}{P(c_i, b_i)} = \prod_i \frac{ \frac{(s_i + b_i)^{c_i}}{c_i!} e^{-(s_i + b_i)} }{ \frac{b_i^{c_i}}{c_i!} e^{-b_i}} \\
&= \prod_i \frac{e^{-s_i} (s_i + b_i)^{c_i}}{b_i^{c_i}} = e^{-s_\text{tot}} \prod_i \frac{(s_i + b_i)^{c_i}}{b_i^{c_i}} \\
&= e^{-s_\text{tot}} \prod_i \left(1 + \frac{s_i}{b_i} \right)^{c_i}
\end{align*}
This really is the heart of computing a limit with $s_{\text{tot}}$ expected events from the signal hypothesis (depending on the parameter to be studied, the coupling constant), $c_i$ measured counts in each channel, and $s_i$ expected signal events and $b_i$ expected background events in that channel.

As mentioned previously though, the choice of what constitutes a channel is completely up to us! One such choice might be binning the candidates in energy.
However, there is one choice that is particularly simple and is often referred to as the "unbinned likelihood". Namely, we create channels in _time_ such that each "time bin" is so short as to either have 0 entries (most channels) or 1 entry. This means we have a large number of channels, but because of our definition of $\mathcal{L}$ this does not matter. All channels with 0 candidates do not contribute to $\mathcal{L}$ (they are $\left(1 + \frac{s_i}{b_i}\right)^0 = 1$). As a result our expression for $\mathcal{L}$ simplifies further to:
\[ \mathcal{L} = e^{-s_\text{tot}} \prod_i \left(1 + \frac{s_i}{b_i}\right) \]
where $i$ now runs over all channels that contain a candidate ($c_i = 1$).

[fn:we_could] We could use $P_i(k_i; λ_i = s_i + b_i)$, but among other things a ratio is numerically more stable.

*** TODOs for this section :noexport:
- [ ] *PUT NOTE ON EXTENDED* below the 'Note the important fact' part? I.e. directly above 'Deriving a practical version of L'?

*** Notes on more explanations :extended:
For more explanations on this, I really recommend reading Thomas Junk's 1999 paper about ~mclimit~, cite:junk99_mclimit. While it only covers binned limit approaches, it is nevertheless very clear in its explanations.

In general I recommend the following resources on statistics and limit calculations, roughly in the order in which I would recommend them.
- Luca Lista's book on statistics, [[cite:&lista23_statistics]]
  -> If you log in to a CERN account, you can just download it directly from Springer (I think that's the reason it works for me)
- Luca Lista also uploaded a 'shortened version', if you will, to the arXiv cite:lista16_arxiv.
- Barlow's book on statistics is still a good book, but from 1989, [[cite:&barlow1993statistics]]
- Barlow also wrote a paper for the arXiv recently [[cite:&barlow19_arxiv_statistics]]
- Cowan's book on statistics, [[cite:&cowan1998statistical]]
Cowan's and Barlow's books are both worth checking.
They mostly cover the same topics, but reading in each can be helpful. Luca Lista is my personal preference though, because it seems clearer to me. It is also more up to date with modern methods.

For the topic here in particular, maybe also see my own notes from a few years back trying to better understand the maths behind CL_s and CL_s+b: [[file:~/org/Doc/StatusAndProgress.org::#sec:math_behind_cls_plus_b]]

** Limit method - computing $\mathcal{L}$
:PROPERTIES:
:CUSTOM_ID: sec:limit:method_computing_L
:END:

Our simplified version of $\mathcal{L}$ using very short time bins now allows us to explicitly compute the likelihood for a set of parameters. Let's now look at each of the constituents $s_{\text{tot}}$, $s_i$ and $b_i$ and discuss how they are computed. We will focus on the explicit case of an X-ray detector behind a telescope at CAST. Here it is important to note that the signal hypothesis depends on the coupling constant we wish to compute a limit for; we will simply call it $g$ in the remainder (it may be $g_{aγ}$ or $g_{ae}$ or any other coupling constant). This is the actual parameter of $\mathcal{L}$.

First of all, consider the signal contribution in each channel, $s_i$. It is effectively the number of counts one would expect within the time window of channel $i$. While this seems tricky given that we have not explicitly defined such a window, we can:
- either assume our time interval to be infinitesimally small and give a _signal rate_ (i.e. per second)
- or make use of the neat property that our expression only contains *the ratio* of $s_i$ and $b_i$. What this means is that we can choose our units ourselves, _as long as we use the same units for $s_i$ as for $b_i$_!
We will use the second option and scale each candidate's signal and background contribution to the total tracking time (the length of the signal sensitive dataset). Each parameter with a subscript $i$ is the corresponding value of the candidate we are currently looking at (e.g.
$E_i$ is the energy of the recorded candidate $i$ used to compute the expected signal).
#+NAME: eq:limit_method_signal_si
\begin{equation}
s_i(g²) = f(g², E_i) · A · t · P_{a \rightarrow γ}(g²_{aγ}) · ε(E_i) · r(x_i, y_i)
\end{equation}
where:
- $f(g², E_i)$ is the axion flux at energy $E_i$ in units of $\si{keV^{-1}.cm^{-2}.s^{-1}}$ as a function of $g²$, sec. [[#sec:limit:ingredients:solar_axion_flux]],
- $A$ is the area of the magnet bore in $\si{cm²}$, sec. [[#sec:limit:ingredients:magnet_tracking]],
- $t$ is the tracking time in $\si{s}$, also sec. [[#sec:limit:ingredients:magnet_tracking]],
- $P_{a \rightarrow γ}$ is the probability for the axion to convert into a photon, computed via \[ P_{a \rightarrow γ}(g²_{aγ}) = \left( \frac{g_{aγ} B L}{2} \right)² \] written in /natural units/ (meaning we need to convert $B$ and $L$ into values expressed in powers of $\si{eV}$, as discussed in sec. [[#sec:theory:axion_interactions]]), sec. [[#sec:limit:ingredients:conversion_probability]],
- $ε(E_i)$ is the combined detection efficiency, i.e. the combination of X-ray telescope effective area, the transparency of the detector window and the absorption probability of an X-ray in the gas, sec. [[#sec:limit:ingredients:detection_eff]],
- $r(x_i, y_i)$ is the expected amount of flux from solar axions after it is focused by the X-ray telescope onto the readout plane of the detector, evaluated at the candidate's position $(x_i, y_i)$ (this requires a raytracing model). It should be expressed as a fractional value in units of $\si{cm^{-2}}$. See sec. [[#sec:limit:ingredients:raytracing]].
As a result the units of $s_i$ are given in $\si{keV^{-1}.cm^{-2}}$ with the tracking time integrated out.

If one computes a limit for $g_{aγ}$, then $f$ and $P$ both depend on the coupling of interest, making $s_i$ a function of $g⁴_{aγ}$. In the case of e.g. an axion-electron coupling limit $g_{ae}$, the conversion probability can be treated as constant (with a fixed $g_{aγ}$).
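To make the natural-unit conversion in $P_{a \rightarrow γ}$ concrete, here is a small Python sketch (illustrative only; the thesis code performs these conversions via ~unchained~'s ~toNaturalUnit~). The conversion factors $\SI{1}{T} \approx 195.35\,\text{eV}^2$ and $\SI{1}{m} \approx 5.0677 \times 10^{6}\,\text{eV}^{-1}$ are the standard $\hbar = c = 1$ values:

```python
# Standard natural-unit conversion factors (hbar = c = 1):
T_TO_EV2 = 195.35       # 1 T  ≈ 195.35 eV²
M_TO_INV_EV = 5.0677e6  # 1 m  ≈ 5.0677e6 eV⁻¹

def conversion_probability(g_agamma_inv_GeV, B_tesla=9.0, L_m=9.26):
    """Vacuum conversion probability P = (g_aγ B L / 2)² in natural units.
    The defaults B = 9 T and L = 9.26 m correspond to the CAST magnet."""
    g = g_agamma_inv_GeV * 1e-9   # GeV⁻¹ → eV⁻¹
    B = B_tesla * T_TO_EV2        # T → eV²
    L = L_m * M_TO_INV_EV         # m → eV⁻¹
    return (g * B * L / 2.0) ** 2 # all eV powers cancel: dimensionless

print(conversion_probability(1e-12))  # ≈ 1.7e-21
```

Since $P \propto g²_{aγ}$ and the flux $f \propto g²_{aγ}$ as well, this is exactly where the $g⁴_{aγ}$ dependence of $s_i$ in an axion-photon limit comes from.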
Secondly, the background hypothesis $b_i$ for each channel. Its value depends on whether we assume a constant background model, an energy dependent one, or even an energy plus position dependent model. In either case the main point is to evaluate that background model at the position $(x_i, y_i)$ and energy $E_i$ of the candidate. The value should then be scaled to the same units as $s_i$, namely $\si{keV^{-1}.cm^{-2}}$. Depending on how the model is defined this might just be a multiplication by the total tracking time in seconds. We discuss this in detail in sec. [[#sec:limit:ingredients:background]].

The final piece is the total signal $s_{\text{tot}}$, corresponding to the total number of counts expected from our signal hypothesis for the given dataset. This is nothing other than the integration of $s_i$ over the entire energy range and detection area. However, because $s_i$ denotes the signal for candidate $i$, we write $s(E, x, y)$ to mean the equivalent signal as if we had a candidate at $(E, x, y)$
\[ s_{\text{tot}} = ∫_0^{E_{\text{max}}} ∫_A s(E, x, y)\, \mathrm{d}E\, \mathrm{d}x\, \mathrm{d}y \]
where $A$ denotes integration over the full area in which $(x, y)$ is defined. The axion flux is bounded within a region much smaller than the active detection area and hence all contributions outside of it are 0.

** Limit method - computing a limit
:PROPERTIES:
:CUSTOM_ID: sec:limit:method_computing_a_limit
:END:

With the above we are now able to evaluate $\mathcal{L}$ for a set of candidates $\{c_i(E_i, x_i, y_i)\}$. As mentioned before it is important to realize that $\mathcal{L}$ is a function of the coupling constant $g$, $\mathcal{L}(g)$, with all other parameters effectively constant in the context of "one experiment". $g$ is a placeholder for the parameter in which $\mathcal{L}$ is linear, i.e. $g²_{ae}$ for axion-electron or $g⁴_{aγ}$ and $β⁴_γ$ for axion-photon and chameleon, respectively.
With this in mind the 'limit' is defined as the 95-th percentile of $\mathcal{L}(g)$ /within the physical region of $g$/. The region $g < 0$ is explicitly ignored, as a coupling constant cannot be negative! This can be rigorously justified in Bayesian statistics by saying the prior $π(g)$ is 0 for $g < 0$. We can define the limit implicitly as [fn:note_α]
#+NAME: eq:limit_method:limit_def
\begin{equation}
0.95 = \frac{∫_0^{g'} \mathcal{L}(g)\, \mathrm{d}g}{∫_0^∞ \mathcal{L}(g)\, \mathrm{d}g}
\end{equation}
In practice the integral cannot be evaluated up to infinity. Fortunately, our choice of $\mathcal{L}$ in the first place means that the function converges to $0$ quickly for large values of $g$. Therefore, we only need to compute values up to a "large enough" value of $g$ to capture the shape of $\mathcal{L}(g)$. From there we can use any numerical approach (via an empirical cumulative distribution function for example) to determine the coupling constant $g'$ that corresponds to the 95-th percentile of $\mathcal{L}(g)$.

In an intuitive sense the limit means the following: $\SI{95}{\percent}$ of all coupling constants that could reproduce the data we measured -- given our signal and background hypotheses -- are smaller than $g'$. Fig. [[fig:limit_method:example_limit_95th_perc]] shows an example of a likelihood function of some coupling constant. The blue area is the lower $\SI{95}{\%}$ of the parameter space and the red area the upper $\SI{5}{\%}$. The limit for this particular set of toy candidates is therefore at the boundary between the two colored regions.

#+CAPTION: Example likelihood function for a set of toy candidates.
#+CAPTION: Blue is the lower 95-th percentile of the integral over the likelihood
#+CAPTION: function and red the upper 5-th. The limit is at the intersection.
#+NAME: fig:limit_method:example_limit_95th_perc
[[~/phd/Figs/limit/simple_likelihood_limit_example.pdf]]

[fn:note_α] This equation essentially computes the confidence level at $\text{CL} = \SI{95}{\%} \equiv 0.95 = 1 - 0.05 = 1 - α$. In the equation we already removed the prior and therefore adjusted the integration range.

*** TODOs for this section [/] :noexport:
- [X] *REPLACE THIS PLOT* by an analytical likelihood function!
  -> Use the binned example limit code to produce it. -> Done.

#+begin_quote
In practice one might even work with $g²$ or $g⁴$, but the same idea holds.
#+end_quote

*** Implementing a basic limit calculation method [/] :extended:
:PROPERTIES:
:CUSTOM_ID: sec:limit:implement_basic_method
:END:

The following are two examples of a basic limit calculation in code. This is to showcase the basic idea without getting lost in too many details. In terms of the main thesis, we use the first example to produce a plot to illustrate how the limit is computed via the 95-th percentile. The real code we use for the limit is found here:
https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/mcmc_limit_calculation.nim

Simplest implementation:
- single channel
- no detection efficiencies etc., just a flux that scales with $g²$
- constant background (due to single channel)
- no telescope, i.e. area for signal flux is the same as for background (due to no focusing)

#+begin_src nim :results drawer :exports both :tangle /tmp/simple_limit_example.nim :flags -d:EscapeLatex=true
import unchained, math
## Assumptions:
const totalTime = 100.0.h # 100 h of "tracking time"
const totalArea = 10.cm² # assume 10 cm² area (magnet bore and chip! This case has no telescope)
defUnit(cm⁻²•s⁻¹)
proc flux(g²: float): cm⁻²•s⁻¹ =
  ## Dummy flux. Just the coupling constant squared · 1e-6
  result = 1e-6 * (g²).cm⁻²•s⁻¹

proc totalFlux(g²: float): float =
  ## Flux integrated to total time and area
  result = flux(g²) * totalTime.to(Second) * totalArea

## Assume signal and background in counts of the single channel!
## (Yes, `signal` is the same as `totalFlux` in this case)
proc signal(g²: float): float = flux(g²) * totalTime * totalArea ## Signal only depends on coupling in this simple model
proc background(): float = 1e-6.cm⁻²•s⁻¹ * totalTime * totalArea ## Single channel, i.e. constant background

proc likelihood(g²: float, cs: int): float =
  ## `cs` = number of candidates in the single channel
  result = exp(-totalFlux(g²)) # `e^{-s_tot}`
  result *= pow(1 + signal(g²) / background(), cs.float)

proc poisson(k: int, λ: float): float = λ^k * exp(-λ) / (fac(k))

echo "Background counts = ", background(), ". Probability to measure 4 counts given background: ", poisson(4, background())
echo "equal to signal counts at g = 1: ", signal(1.0)
echo "Likelihood at g = 1 for 4 candidates = ", likelihood(1.0, 4)

## Let's plot it from 0 to 3 assuming 4 candidates
import ggplotnim
let xs = linspace(0.0, 3.0, 100)
let ys = xs.map_inline(likelihood(x, 4))
## Compute limit, CDF@95%
import algorithm
let yCumSum = ys.cumSum()                    # cumulative sum
let yMax = yCumSum.max                       # maximum of the cumulative sum
let yCdf = yCumSum.map_inline(x / yMax)      # normalize to get (empirical) CDF
let limitIdx = yCdf.toSeq1D.lowerBound(0.95) # limit at 95% of the CDF
echo "Limit at : ", xs[limitIdx]
let L_atLimit = ys[limitIdx]
let df = toDf(xs, ys)
let dfLimit = df.filter(f{float: `xs` >= xs[limitIdx]})
echo dfLimit
ggplot(df, aes("xs", "ys")) +
  xlab("Coupling constant") + ylab("Likelihood") +
  geom_line(fillColor = "blue", alpha = 0.4) +
  geom_line(data = dfLimit, fillColor = "red") +
  #geom_linerange(aes = aes(x = xs[limitIdx], yMin = 0.0, yMax = L_atLimit), ) +
  annotate(x = xs[limitIdx], y = L_atLimit + 0.1, text = "Limit at 95% area") +
  ggtitle("Example likelihood and limit") +
  themeLatex(fWidth = 0.9, width = 600, baseTheme = singlePlot, useTeX = true) +
  ggsave("/home/basti/phd/Figs/limit/simple_likelihood_limit_example.pdf", width = 600, height = 380)
#+end_src

#+RESULTS:
:results:
Background counts = 3.6. Probability to measure 4 counts given background: 0.1912223391751322
equal to signal counts at g = 1: 3.6
Likelihood at g = 1 for 4 candidates = 0.4371795591566811
Limit at : 1.666666666666665
DataFrame with 2 columns and 45 rows:
     Idx        xs        ys
  dtype:     float     float
       0     1.667    0.1253
       1     1.697    0.1176
       2     1.727    0.1103
       3     1.758    0.1033
       4     1.788   0.09679
       5     1.818   0.09062
       6     1.848   0.08481
       7     1.879   0.07933
       8     1.909   0.07417
       9     1.939   0.06932
      10      1.97   0.06476
      11         2   0.06047
      12      2.03   0.05645
      13     2.061   0.05267
      14     2.091   0.04912
      15     2.121    0.0458
      16     2.152   0.04268
      17     2.182   0.03977
      18     2.212   0.03703
      19     2.242   0.03448
[INFO]: No plot ratio given, using golden ratio.
[INFO] TeXDaemon ready for input.
shellCmd: command -v lualatex
shellCmd: lualatex -output-directory /home/basti/phd/Figs/limit /home/basti/phd/Figs/limit/simple_likelihood_limit_example.tex
Generated: /home/basti/phd/Figs/limit/simple_likelihood_limit_example.pdf
:end:

More realistic implementation, above plus:
- real solar axion flux
- TODO: (detection efficiency) (could just use a fixed efficiency)
- X-ray telescope without usage of local flux information
- multiple channels in energy

#+begin_src nim :results drawer :exports both :tangle /tmp/energy_bins_limit_example.nim
import unchained, math, seqmath, sequtils, algorithm
## Assumptions:
const totalTime = 100.0.h # 100 h of "tracking time"
const areaBore = π * (2.15 * 2.15).cm²
const chipArea = 5.mm * 5.mm # assume all flux is focused into an area of 5x5 mm²
                             # on the detector. Relevant area for background!
defUnit(GeV⁻¹)
defUnit(cm⁻²•s⁻¹)
defUnit(keV⁻¹)
defUnit(keV⁻¹•cm⁻²•s⁻¹)

## Constants defining the channels and background info
const
  Energies = @[0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5, 9.5].mapIt(it.keV)
  Background = @[0.5e-5, 2.5e-5, 4.5e-5, 4.0e-5, 1.0e-5, 0.75e-5, 0.8e-5, 3e-5, 3.5e-5, 2.0e-5]
    .mapIt(it.keV⁻¹•cm⁻²•s⁻¹) # convert to a rate
  ## A possible set of candidates from `Background · chipArea · totalTime · 1 keV`
  ## (1e-5 · 5x5mm² · 100h = 0.9 counts)
  Candidates = @[0, 2, 7, 3, 1, 0, 1, 4, 3, 2]

proc solarAxionFlux(ω: keV, g_aγ: GeV⁻¹): keV⁻¹•cm⁻²•s⁻¹ =
  # axion flux produced by the Primakoff effect in solar core
  # in units of keV⁻¹•m⁻²•yr⁻¹
  let flux = 2.0 * 1e18.keV⁻¹•m⁻²•yr⁻¹ * (g_aγ / 1e-12.GeV⁻¹)^2 *
             pow(ω / 1.keV, 2.450) * exp(-0.829 * ω / 1.keV)
  # convert flux to correct units
  result = flux.to(keV⁻¹•cm⁻²•s⁻¹)

func conversionProbability(g_aγ: GeV⁻¹): UnitLess =
  ## the conversion probability in the CAST magnet (depends on g_aγ)
  ## simplified vacuum conversion prob. for small masses
  let B = 9.0.T
  let L = 9.26.m
  result = pow( (g_aγ * B.toNaturalUnit * L.toNaturalUnit / 2.0), 2.0 )

from numericalnim import simpson # simpson numerical integration routine
proc totalFlux(g_aγ: GeV⁻¹): float =
  ## Flux integrated to total time, energy and area
  # 1. integrate the solar flux
  ## NOTE: in practice this integration must not be done in this proc! Only perform once!
  let xs = linspace(0.0, 10.0, 100)
  let fl = xs.mapIt(solarAxionFlux(it.keV, g_aγ))
  let integral = simpson(fl.mapIt(it.float), # convert units to float for compatibility
                         xs).cm⁻²•s⁻¹ # convert back to units (integrated out `keV⁻¹`!)
  # 2. compute final flux by "integrating" out the time and area
  result = integral * totalTime * areaBore * conversionProbability(g_aγ)

## NOTE: only important that signal and background have the same units!
proc signal(E: keV, g_aγ: GeV⁻¹): keV⁻¹ = ## Returns the axion flux based on `g` and energy `E` result = solarAxionFlux(E, g_aγ) * totalTime.to(Second) * areaBore * conversionProbability(g_aγ) proc background(E: keV): keV⁻¹ = ## Compute an interpolation of energies and background ## NOTE: For simplicity we only evaluate at the channel energies anyway. In practice ## one likely wants interpolation to handle all energies in the allowed range correctly! let idx = Energies.lowerBound(E) # get idx of this energy ## Note: area of interest is the region on the chip, in which the signal is focused! ## This also allows us to see that the "closer" we cut to the expected axion signal on the ## detector, the less background we have compared to the *fixed* signal flux! result = (Background[idx] * totalTime * chipArea).to(keV⁻¹) proc likelihood(g_aγ: GeV⁻¹, energies: seq[keV], cs: seq[int]): float = ## `energies` = energies corresponding to each channel ## `cs` = each element is number of counts in that energy channel result = exp(-totalFlux(g_aγ)) # `e^{-s_tot}` for i in 0 ..< cs.len: let c = cs[i] # number of candidates in this channel let E = energies[i] # energy of this channel let s = signal(E, g_aγ) let b = background(E) result *= pow(1 + s / b, c.float) ## Let's plot the likelihood over the coupling constant given our candidates import ggplotnim # define coupling constants let xs = logspace(-13, -10, 300).mapIt(it.GeV⁻¹) # logspace 1e-13 GeV⁻¹ to 1e-10 GeV⁻¹ let ys = xs.mapIt(likelihood(it, Energies, Candidates)) let df = toDf({"xs" : xs.mapIt(it.float), ys}) ggplot(df, aes("xs", "ys")) + geom_line() + ggsave("/tmp/energy_bins_likelihood.pdf") ## Compute limit, CDF@95% import algorithm # limit needs non logspace x & y data!
(at least if computed in this simple way) let xLin = linspace(0.0, 1e-10, 1000).mapIt(it.GeV⁻¹) let yLin = xLin.mapIt(likelihood(it, Energies, Candidates)) let yCumSum = yLin.cumSum() # cumulative sum let yMax = yCumSum.max # maximum of the cumulative sum let yCdf = yCumSum.mapIt(it / yMax) # normalize to get (empirical) CDF let limitIdx = yCdf.lowerBound(0.95) # limit at 95% of the CDF echo "Limit at : ", xLin[limitIdx] # Code outputs: # Limit at : 6.44645e-11 GeV⁻¹ #+end_src #+RESULTS: :results: Limit at : 6.44645e-11 GeV⁻¹ :end: ** Limit method - toy candidate sets and expected limits :PROPERTIES: :CUSTOM_ID: sec:limit:method_expected_limit :END: Assuming a constant background over some chip area with only an energy dependence, the background hypothesis can be used to draw toy candidates, which take the place of the real candidates when computing limits. In this situation the background hypothesis can be modeled as follows: \[ B = \{ P_{\text{Pois}}(k; λ = b_i) \: | \: \text{for all energy bins } E_i \}, \] that is, the background is the set of all energy bins $E_i$, where each bin content is described by a Poisson distribution with expectation value $λ = b_i$ counts. To compute a set of toy candidates, we then simply iterate over all energy bins and draw a number from each Poisson distribution. This is the number of candidates in that bin for the toy. Given that we assumed a constant background over the chip area, we finally need to draw the $(x_i, y_i)$ positions for each toy candidate from a uniform distribution. [fn:if_pos_dep] These sets of toy candidates can be used to compute an "expected limit". The term expected limit is usually understood to mean the median of many limits computed based on representative toy candidate sets. 
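The toy sampling described above can be sketched in a few lines of Nim. The following is a minimal, self-contained sketch (not the actual implementation used later): it draws Poisson numbers via Knuth's algorithm and uses the per-bin expected background counts from the example above ($b_i = \texttt{Background}_i · \texttt{chipArea} · \texttt{totalTime} · \SI{1}{keV}$); the limit calculation for each toy set is left out.

#+begin_src nim
import std / [random, math, sequtils]

proc poissonSample(rnd: var Rand, λ: float): int =
  ## Draws a Poisson distributed number via Knuth's algorithm.
  ## Sufficient for the small per-bin means used here.
  if λ <= 0.0: return 0
  let limit = exp(-λ)
  var p = 1.0
  while p > limit:
    p *= rnd.rand(1.0)
    inc result
  dec result

proc drawToyCandidates(rnd: var Rand, bins: seq[float]): seq[int] =
  ## One toy candidate set: an independent Poisson draw in each energy bin.
  bins.mapIt(poissonSample(rnd, it))

when isMainModule:
  var rnd = initRand(42)
  # expected background counts per energy bin of the example above
  let bCounts = @[0.45, 2.25, 4.05, 3.6, 0.9, 0.675, 0.72, 2.7, 3.15, 1.8]
  for _ in 0 ..< 3: # three example toy candidate sets
    echo drawToyCandidates(rnd, bCounts)
  # the expected limit is then the median of the limits of many such toys
#+end_src

For a position dependent background each of these counts would additionally receive uniformly drawn $(x_i, y_i)$ positions.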
If $L_{t_i}$ is the limit of the toy candidate set $t_i$, the expected limit $⟨L⟩$ is defined as \[ ⟨L⟩ = \mathrm{median}( \{ L_{t_i} \} ) \] If the number of toy candidate sets is large enough, the expected limit is an accurate estimate. The real limit then lies below or above it with $\SI{50}{\%}$ probability each. [fn:if_pos_dep] If one considers a position-independent likelihood function, there is no need to sample positions of course. ** Limit method - extending $\mathcal{L}$ for systematics :PROPERTIES: :CUSTOM_ID: sec:limit:method_systematics :END: The aforementioned likelihood ratio assumes perfect knowledge of the inputs for the signal and background hypotheses. In practice, however, neither of these is known perfectly. Each input typically has an associated small systematic uncertainty (e.g. the width of the detector window is only known up to N nanometers, the pressure in the chamber only up to M millibar, the magnet length only up to C centimeters, etc.). These all affect the "real" numbers one should actually calculate with. So how does one treat these uncertainties? The basic starting point is realizing that the values we use are our "best guess" of the real value. _Usually_ it is a reasonable approximation that the real value follows a normal distribution around that best guess, with some standard deviation. Further, it is a good idea to identify all systematic uncertainties and classify them by which aspect of $s_i$, $b_i$ or $(x_i, y_i)$ they affect (amount of signal, background or the position [fn:other_likelihood] ). Another reasonable assumption is to combine different uncertainties of the same type by \[ Δx = \sqrt{ \sum_{i=1}^N Δx_i² }, \] i.e. computing the Euclidean radius in N dimensions, for N uncertainties of the same type. The above explanation can be followed to encode these uncertainties into the limit calculation. For a value corresponding to our "best guess" we want to recover the likelihood function $\mathcal{L}$ from before. 
The further we get away from our "best guess", the more the likelihood function should be "penalized", meaning the actual likelihood of that parameter given our data should be *lower*. The initial likelihood $\mathcal{L}$ will be modified by multiplying with additional normal distributions, one for each uncertainty (4 in total in our case, signal, background, and two position uncertainties). Each adds an additional parameter, a 'nuisance parameter'. To illustrate the details, let's look at the case of adding a single nuisance parameter. In particular we'll look at the nuisance parameter for the signal as it requires more care. The idea is to express our uncertainty of a parameter -- in this case the signal -- by introducing an additional parameter $s_i'$. In contrast to $s_i$ it describes a possible _other_ value of $s_i$ due to our systematic uncertainty. For simplicity we rewrite our likelihood $\mathcal{L}$ as $\mathcal{L}'(s_i, s_i', b_i)$: \[ \mathcal{L}' = e^{-s'_\text{tot}} \prod_i \left(1 + \frac{s_i'}{b_i}\right) · \exp\left[-\frac{1}{2} \left(\frac{s_i' - s_i}{σ_s'}\right)² \right] \] where $s_i'$ takes the place of the $s_i$. The added gaussian then provides a penalty for any deviation from $s_i$. The standard deviation of the gaussian $σ_s'$ is the actual systematic uncertainty on our parameter $s_i$ in units of $s_i$. This form of adding a secondary parameter $s_i'$ of the same units as $s_i$ is not the most practical, but maybe provides the best explanation as to how the name 'penalty term' arises for the added gaussian. If $s_i = s_i'$ then the exponential term is $1$ meaning the likelihood remains unchanged. For any other value the exponential is $< 1$, _decreasing_ the likelihood $\mathcal{L}'$. By a change of variables we can replace the "unitful" parameter $s_i'$ by a unitless number $ϑ_s$. We would like the exponential to be $\exp(-ϑ_s²/(2 σ_s²))$ to only express deviation from our best guess $s_i$. 
$ϑ_s = 0$ means no deviation, and $|ϑ_s| = 1$ implies a deviation by the full signal, $s_i' = 0$ or $s_i' = 2 s_i$. Note that the standard deviation of this is now $σ_s$ and *not* $σ_s'$ as seen in the expression above. This $σ_s$ corresponds to our systematic uncertainty on the signal as a percentage. To arrive at this expression: \begin{align*} \frac{s_i' - s_i}{σ_s'} &= \frac{ϑ_s}{σ_s} \\ \Rightarrow s_i' &= \frac{σ_s'}{σ_s} ϑ_s + s_i \\ \text{with } s_i &= \frac{σ_s'}{σ_s} \\ s_i' &= s_i + s_i ϑ_s \\ \Rightarrow s_i' &= s_i (1 + ϑ_s) \\ \end{align*} where we made use of the fact that the two standard deviations are related by the signal $s_i$ (which can be seen by defining $ϑ_s$ as the normalized difference $ϑ_s = \frac{s'_i - s_i}{s_i}$). This results in the following final (single nuisance parameter) likelihood $\mathcal{L}'$: \[ \mathcal{L}' = e^{-s'_\text{tot}} \prod_i \left(1 + \frac{s_i'}{b_i}\right) · \exp\left[-\frac{1}{2} \left(\frac{ϑ_s}{σ_s}\right)² \right] \] where $s_i' = s_i (1 + ϑ_s)$ and similarly $s_{\text{tot}}' = s_{\text{tot}} ( 1 + ϑ_s )$ (the latter just follows because $1 + ϑ_s$ is a constant across the different channels $i$). The same approach is used to encode the background systematic uncertainty. The position uncertainty is generally handled the same way, but the $x$ and $y$ coordinates are treated separately. As shown in eq. [[eq:limit_method_signal_si]] the signal $s_i$ actually depends on the positions $(x_i, y_i)$ of each candidate via the raytracing image $r$. With this we can introduce the nuisance parameters by replacing $r$ by an $r'$ such that \[ r' ↦ r(x_i - x'_i, y_i - y'_i) \] which effectively moves the center position by $(x'_i, y'_i)$. 
In addition we need to add penalty terms for each of these introduced parameters: \[ \mathcal{L}' = \exp[-s] \cdot \prod_i \left(1 + \frac{s'_i}{b_i}\right) \cdot \exp\left[-\left(\frac{x_i - x'_i}{\sqrt{2}σ} \right)² \right] \cdot \exp\left[-\left(\frac{y_i - y'_i}{\sqrt{2}σ} \right)² \right] \] where $s'_i$ is now the modification from above using $r'$ instead of $r$. Now we perform the same substitution as we do for $ϑ_b$ and $ϑ_s$ to arrive at: \[ \mathcal{L}' = \exp[-s] \cdot \prod_i \left(1 + \frac{s'_i}{b_i}\right) \cdot \exp\left[-\left(\frac{ϑ_x}{\sqrt{2}σ_x} \right)² \right] \cdot \exp\left[-\left(\frac{ϑ_y}{\sqrt{2}σ_y} \right)² \right] \] The substitution for $r'$ means the following for the parameters: \[ r' = r\left(x (1 + ϑ_x), y (1 + ϑ_y)\right) \] where essentially a deviation of $|ϑ| = 1$ means we move the center of the axion image to the edge of the chip. Putting all these four nuisance parameters together we have #+NAME: eq:limit_method:likelihood_function_def \begin{align} \mathcal{L}' &= \left(\prod_i \frac{P_{\text{pois}}(n_i; s_i + b_i)}{P_{\text{pois}}(n_i; b_i)}\right) \cdot \mathcal{N}(ϑ_s, σ_s) \cdot \mathcal{N}(ϑ_b, σ_b) \cdot \mathcal{N}(ϑ_x, σ_x) \cdot \mathcal{N}(ϑ_y, σ_y) \\ \mathcal{L}'(g, ϑ_s, ϑ_b, ϑ_x, ϑ_y) &= e^{-s'_\text{tot}} \prod_i \left(1 + \frac{s_i''}{b_i'} \right) · \exp\left[-\frac{1}{2} \left(\frac{ϑ_s}{σ_s}\right)² -\frac{1}{2} \left(\frac{ϑ_b}{σ_b}\right)² -\frac{1}{2} \left(\frac{ϑ_x}{σ_x}\right)² -\frac{1}{2} \left(\frac{ϑ_y}{σ_y}\right)² \right] \end{align} where here the doubly primed $s_i''$ refers to modification for the signal nuisance parameter _as well as_ for the position uncertainty via $r'$. An example of the impact of the nuisance parameters on the likelihood space as well as on the parameters ($s, b, x, y$) is shown in fig. sref:fig:limit:method_systematics:example. First, fig. 
sref:fig:limit:method_systematics:example_theta_0.6 shows how the axion image moves when $ϑ_{x,y}$ changes; in this example $ϑ_{x,y} = 0.6$ moves the image center to the bottom left ($ϑ_{x,y} = 1$ would move the center into the corner). Note that the window strongback is not tied to the axion image, but remains fixed (the cut-out diagonal lines). Figs. sref:fig:limit:method_systematics:example_sigma_0.05 and sref:fig:limit:method_systematics:example_sigma_0.25 show the impact of the nuisance parameters on the likelihood space. The larger the standard deviation $σ_{x,y}$ is, the more of the $ϑ_{x,y}$ space contributes meaningfully to $\mathcal{L}_M$. In the former example -- a realistic uncertainty -- only small regions around the center are allowed to contribute. Regions further out receive too large a penalty. However, at large uncertainties significant regions of the parameter space play an important role. Given that each point on the figures sref:fig:limit:method_systematics:example_sigma_0.05 and sref:fig:limit:method_systematics:example_sigma_0.25 describes one axion image like sref:fig:limit:method_systematics:example_theta_0.6, brighter regions imply positions where the axion image is moved to parts that provide a larger $s/b$ in the center portion of the axion image, while still incurring a small enough penalty. For the realistic uncertainty, $σ = 0.05$, roughly the inner $-0.1 < ϑ < 0.1$ space contributes. This corresponds to a range of $\SI{-0.7}{mm} < x < \SI{0.7}{mm}$ around the center in fig. sref:fig:limit:method_systematics:example_theta_0.6. 
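To make the construction above concrete, the following is a minimal sketch of a single-channel likelihood extended by the $ϑ_s$ and $ϑ_b$ nuisance parameters and their penalty terms (the position parameters are omitted, as they require the raytracing image $r$). The signal and background counts and the $σ$ values are placeholders for illustration, not our real systematics.

#+begin_src nim
import std / math

const
  sTot = 3.6   # expected signal counts at some fixed coupling (placeholder)
  b    = 3.6   # expected background counts (placeholder)
  σ_s  = 0.05  # assumed relative signal uncertainty (placeholder)
  σ_b  = 0.05  # assumed relative background uncertainty (placeholder)

proc likelihoodNuisance(ϑ_s, ϑ_b: float, cs: int): float =
  ## L'(ϑ_s, ϑ_b) with s' = s·(1 + ϑ_s) and b' = b·(1 + ϑ_b), where any
  ## deviation of ϑ from 0 is penalized by a gaussian term.
  let sP = sTot * (1 + ϑ_s)
  let bP = b * (1 + ϑ_b)
  result = exp(-sP) * pow(1 + sP / bP, cs.float) # likelihood ratio part
  result *= exp(-0.5 * (ϑ_s / σ_s)^2)           # signal penalty term
  result *= exp(-0.5 * (ϑ_b / σ_b)^2)           # background penalty term

when isMainModule:
  # at ϑ = 0 we recover the unmodified likelihood, any deviation is penalized
  echo likelihoodNuisance(0.0, 0.0, 4)
  echo likelihoodNuisance(0.1, 0.0, 4)
#+end_src

Marginalizing such a function over its nuisance parameters yields the quantity actually used for the limit, which is the topic of the section on evaluating $\mathcal{L}$ with nuisance parameters below.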
# "~/org/Figs/statusAndProgress/limitSanityChecks/axion_image_limit_calc_theta_0_6.pdf")) #+begin_src subfigure (figure () (subfigure (linewidth 0.33) (caption "Axion image at $ϑ_{x,y} = 0.6$") (label "fig:limit:method_systematics:example_theta_0.6") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/limit/sanity/fWidth0.3/axion_image_limit_calc_theta_0_6.pdf")) (subfigure (linewidth 0.33) (caption "$σ_{x,y} = 0.05$") (label "fig:limit:method_systematics:example_sigma_0.05") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/limit/sanity/fWidth0.3/likelihood_sigma_0.05_manyϑx_ϑy.pdf")) (subfigure (linewidth 0.33) (caption "$σ_{x,y} = 0.25$") (label "fig:limit:method_systematics:example_sigma_0.25") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/limit/sanity/fWidth0.3/likelihood_sigma_0.25_manyϑx_ϑy.pdf")) (caption (subref "fig:limit:method_systematics:example_theta_0.6") ": Impact of the position nuisance parameter on the axion image. A value of $ϑ_{x,y} = 0.6$ is shown, moving the center of the image towards the bottom left corner. " (subref "fig:limit:method_systematics:example_sigma_0.05") " shows the impact on the likelihood itself for varying " ($ "ϑ_{x,y}") " values given a standard deviation of " ($ "σ_{x,y} = 0.05") ". Small variations of the position still yield contributions to " ($ "\\mathcal{L}_M") ". " (subref "fig:limit:method_systematics:example_sigma_0.25") " shows the same for " ($ "σ_{x,y} = 0.25") ". At this value large regions of the " ($ "ϑ_{x,y}") " parameter space contribute to " ($ "\\mathcal{L}_M") ", generally regions of larger " ($ "s/b") ".") (label "fig:limit:method_systematics:example")) #+end_src [fn:other_likelihood] For different likelihood functions other parameters may be affected. *** TODOs for this section [1/3] :noexport: - [X] *INSERT SUB FIG OF THE EXAMPLE FROM THE TALK. θ_x, θ_y*!! - [ ] *POTENTIALLY* think about moving the example images to a section further down. 
Reasoning being that at this point we have not shown or talked about the axion image at all here. This being mostly theoretical after all. - [ ] *REDO THE SYSTEMATICS PLOT WITH NEW AXION IMAGE!!* *** Example for systematics :extended: - [ ] *THINK ABOUT IF THIS IN THESIS!* For example assuming we had these systematics (expressed as relative numbers from the best guess): - signal uncertainties: - magnet length: $\SI{0.2}{\%}$ - magnet bore diameter: $\SI{2.3}{\%}$ - window thickness: $\SI{0.6}{\%}$ - position uncertainty (of where the axion image is projected): - detector alignment: $\SI{5}{\%}$ - background uncertainty: - A: $\SI{0.5}{\%}$ (whatever it may be, all real ones of mine are very specific) From here we compute 3 combined systematics: - $σ_s = \sqrt{ 0.2² + 2.3² + 0.6²} = \SI{2.38}{\%}$ - $σ_p = \SI{5}{\%}$ - $σ_b = \SI{0.5}{\%}$ *** Generate plots of systematics :extended: The leftmost image in fig. sref:fig:limit:method_systematics:example is created as part of the ~--raytracing~ sanity check. The other two are part of the ~likelihoodSystematics~ sanity check (from the ~plotLikelihoodCurves~ proc via ~calcPosition~ for either the "few" or "many" candidates case). We place these into a separate directory, because for this particular set of plots we wish to produce them with a target width of ~0.3333\textwidth~. #+begin_src sh F_WIDTH=0.33333333333 DEBUG_TEX=true ESCAPE_LATEX=true USE_TEX=true \ mcmc_limit_calculation sanity \ --limitKind lkMCMC \ --axionModel ~/org/resources/axionProduction/axionElectronRealDistance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv \ --axionImage ~/phd/resources/axionImages/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm.csv \ --combinedEfficiencyFile ~/org/resources/combined_detector_efficiencies.csv \ --switchAxes \ --sanityPath ~/phd/Figs/limit/sanity/fWidth0.3/ \ --likelihoodSystematics \ --raytracing \ --rombergIntegrationDepth 3 #+end_src *** $s'$ is equivalent to $s_i'$ ? 
:extended: \begin{align*} s &= Σ_i s_i \\ s_i' &= s_i (1 + θ_s) \\ s' &= Σ_i s_i' \\ &= Σ_i s_i (1 + θ_s) \\ &\text{as }(1 + θ_s)\text{ is constant} \\ &= (1 + θ_s) Σ_i s_i \\ &= (1 + θ_s) s \\ s' &= s (1 + θ_s) \\ \end{align*} so indeed, this is perfectly valid. ** Limit method - evaluating $\mathcal{L}$ with nuisance parameters :PROPERTIES: :CUSTOM_ID: sec:limit:method_mcmc :END: The likelihood function we started with $\mathcal{L}$ was only a function of the coupling constant $g$ we want to compute a limit for. With the inclusion of the four nuisance parameters however, $\mathcal{L}'$ is now a function of 5 parameters, $\mathcal{L}'(g, ϑ_s, ϑ_b, ϑ_x, ϑ_y)$. Following our definition of a limit via a fixed percentile of the integral over the coupling constant, eq. [[eq:limit_method:limit_def]], leads to a problem for $\mathcal{L}'$. If anything, one could define a contour describing the 95-th percentile of the "integral volume", but this would lead to infinitely many values of $g$ that describe said contour. As a result, to still define a sane limit value, the concept of the marginal likelihood function $\mathcal{L}'_M$ is introduced. The idea is to integrate out the nuisance parameters #+NAME: eq:limit:method_mcmc:L_integral \begin{equation} \mathcal{L}'_M(g) = \iiiint_{-∞}^∞ \mathcal{L}'(g, ϑ_s, ϑ_b,ϑ_x,ϑ_y)\, \dd ϑ_s \dd ϑ_b \dd ϑ_x \dd ϑ_y. \end{equation} Depending on the exact definition of $\mathcal{L}'$ in use, these integrals may be analytically computable. In many cases however they are not and numerical techniques to evaluate the integral must be utilized. Aside from the technical aspects about how to evaluate $\mathcal{L}'_M(g)$ at a specific $g$, the limit calculation continues exactly as for the case without nuisance parameters once $\mathcal{L}'_M(g)$ is defined as such. - Practical calculation of $\mathcal{L}'_M(g)$ in our case :: In case of our explicit likelihood function eq. 
[[eq:limit_method:likelihood_function_def]] there is already one particular case that makes the marginal likelihood not analytically integrable, because the $b_i' = b_i(1 + ϑ_b)$ term introduces a singularity for $ϑ_b = -1$. For practical purposes this is not too relevant: values approaching $ϑ_b = -1$ would imply zero background, and for any reasonable systematic uncertainty the penalty term suppresses contributions in this limit so strongly that this region does not meaningfully contribute to the integral. Standard numerical integration routines (Simpson, adaptive Gauss-Kronrod, etc.) are all too expensive for such a four-fold integration in the context of computing many toy limits for an expected limit. For this reason Monte Carlo approaches are used, in particular the Metropolis-Hastings cite:metropolis53_mcmc,hastings70_mcmc (MH) algorithm, a Markov Chain Monte Carlo (MCMC) method. The basic idea of general Monte Carlo integration routines is to evaluate the function at random points and to compute the integral from the function evaluations at these points (by scaling the evaluations correctly). Unless the function is very 'spiky' in the integration space, Monte Carlo approaches provide good accuracy at a fraction of the computational cost of standard numerical algorithms, even in higher dimensions. However, we can do better than relying on _fully_ random points in the integration space. The Metropolis-Hastings algorithm tries to evaluate the function more often at those points where the contributions are large. The basic idea is the following: Pick a random point in the integration space as a starting point $p_0$. Next, pick another random point $p_1$ in the vicinity of $p_0$. If the function $f$ evaluates to a larger value at $p_1$, accept it as the new current position. If it is smaller, accept it with a probability of $\frac{f(p_i)}{f(p_{i-1})}$ (i.e. 
if the new value is close to the old one, we accept it with high probability, and if it is much lower, we rarely accept it). This steers the sampling towards the regions contributing most to the integral, while the random acceptance of "worse" positions still allows escaping local maxima. However, this also implies that regions of constant $\mathcal{L}$ (regions where the values are close to 0, but also generally 'flat' regions) produce a pure random walk from the algorithm, because $\frac{f(p_i)}{f(p_{i-1})} \approx 1$ in those regions. This needs to be taken into account. If a new point is accepted and becomes the current position, the "chain" of points is extended (hence "Markov Chain"). If a point is rejected, extend the chain by duplicating the last point. By creating a chain of reasonable length, the integration space is sampled well. Because the initial point is completely random (up to some possible prior), the first $N$ links of the chain are in a region of low interest (and, depending on the interpretation of the chain, "wrong"). For that reason one defines a cutoff $N_b$ of the first elements that are thrown away as "burn-in" before using the chain to evaluate the integral or parameters. In addition, it can be valuable to not only start a single Markov Chain from one random point, but instead start /multiple/ chains from different points in the integration space. This increases the chance to cover different regions of interest even in the presence of multiple peaks separated too far apart to likely be "jumped over" via the probabilistic acceptance. As such it reduces the bias from the choice of starting points. To summarize the algorithm: 1. let $\vec{p}$ be a random vector in the integration space and $f(\vec{p})$ the function to evaluate, 2. pick a new point $\vec{p}'$ in the vicinity of $\vec{p}$, 3. draw a number $u$ from a uniform distribution in $[0, 1]$, 4. 
accept $\vec{p}'$ if $u < \frac{f(\vec{p}')}{f(\vec{p})}$, add $\vec{p}'$ to chain and iterate (if $f(\vec{p}') > f(\vec{p})$ every new link accepted!). If rejected, add $\vec{p}$ again, 5. generate a long enough chain to sample the integration space well, 6. throw away first N elements as "burn in", 7. generate multiple chains to be less dependent on starting position. Applied to eq. [[eq:limit:method_mcmc:L_integral]], we obtain $\mathcal{L}_M(g)$ by computing the histogram of all sampled $g$ values, which are one component of the vector $\vec{p}$. More on that in sec. [[#sec:limit:mcmc_calc_limit]]. *** TODOs for this section [/] :noexport: - [ ] *REWRITE THIS!!!* - [ ] Check sagemath calculations for x and y systematics -> ?? The integrals? Or what am I referring to? Old paragraph: #+begin_quote Furthermore, outside of using Metropolis-Hastings we still have to make sure the evaluation of $\mathcal{L}'(g, ϑ_s, ϑ_b, ϑ_x, ϑ_y)$ is fast. We will discuss this in the next section about the evaluation of $\mathcal{L}'$. #+end_quote -> We don't actually even have this section. But it is partially mentioned in parts below of course. No need to be specific I guess. - [ ] *DO NOT HAVE* a section *"about the evaluation of L'"* - [ ] *REFERENCES FOR METROPOLIS HASTINGS*: #+begin_quote The algorithm is named in part for Nicholas Metropolis, the first coauthor of a 1953 paper, entitled Equation of State Calculations by Fast Computing Machines, with Arianna W. Rosenbluth, Marshall Rosenbluth, Augusta H. Teller and Edward Teller. For many years the algorithm was known simply as the Metropolis algorithm.[1][2] The paper proposed the algorithm for the case of symmetrical proposal distributions, but in 1970, W.K. Hastings extended it to the more general case.[3] #+end_quote Kalos, Malvin H.; Whitlock, Paula A. (1986). Monte Carlo Methods Volume I: Basics. New York: Wiley. pp. 78–88. Tierney, Luke (1994). "Markov chains for exploring posterior distributions". 
The Annals of Statistics. 22 (4): 1701–1762. Hastings, W.K. (1970). "Monte Carlo Sampling Methods Using Markov Chains and Their Applications". Biometrika. 57 (1): 97–109. Bibcode:1970Bimka..57...97H. doi:10.1093/biomet/57.1.97. JSTOR 2334940. Zbl 0219.65008. ** Limit method - practicalities for our real case :noexport: - Evaluate $\mathcal{L}'$ in our case :: - [ ] background position dependent - [ ] use k-d tree to store background cluster information of (x, y, E) per cluster. Interpolation using custom metric with gaussian weighting in (x, y) but constant weight in E - [ ] towards corners need to correct for loss of area #+begin_src nim template computeBackground(): untyped {.dirty.} = let px = c.pos.x.toIdx let py = c.pos.y.toIdx interp.kd.query_ball_point([px.float, py.float, c.energy.float].toTensor, radius = interp.radius, metric = CustomMetric) .compValue() .correctEdgeCutoff(interp.radius, px, py) # this should be correct .normalizeValue(interp.radius, interp.energyRange, interp.backgroundTime) .toIntegrated(interp.trackingTime) #+end_src - [ ] background values cached, to avoid recomputing values if same candidate is asked for - [ ] Signal - [ ] detection efficiency, window (w/o strongback) + gas + telescope efficiency (energy dependent) - [ ] axion flux, rescale by g_ae² - [ ] conversion prob - [ ] raytracing result (telescope focusing) + window strongback - [ ] candidate sampling - [ ] handled using a grid of NxNxM volumes (x, y, E) - [ ] sample in each volume & assign uniform positions in volume ** Note about likelihood integral :extended: The likelihood is a product of probability density functions. However, it is important to note that the likelihood is a function of the *parameter* and not the data. As such integrating over all parameters does not necessarily equate to 1! ** Derivation of short form of $\mathcal{L}$ [/] :extended: - [ ] *WRITE THE NON LOG FORM* This uses the logarithm form, but the non log form is even easier actually. 
\begin{align*} \ln \mathcal{\mathcal{L}} &= \ln \prod_i \frac{ \frac{(s_i + b_i)^{n_i}}{n_i!} e^{-(s_i + b_i)} }{ \frac{b_i^{n_i}}{n_i!} e^{-b_i} } \\ &= \sum_i \ln \frac{ \frac{(s_i + b_i)^{n_i}}{n_i!} e^{-(s_i + b_i)} }{ \frac{b_i^{n_i}}{n_i!} e^{-b_i} } \\ &= \sum_i \ln \frac{(s_i + b_i)^{n_i}}{n_i!} e^{-(s_i + b_i)} - \ln \frac{b_i^{n_i}}{n_i!} e^{-b_i} \\ &= \sum_i n_i \ln (s_i + b_i) - \ln n_i! - (s_i + b_i) - (n_i \ln b_i - \ln n_i! -b_i) \\ &= \sum_i n_i \ln (s_i + b_i) - (s_i + b_i) - n_i \ln b_i + b_i \\ &= \sum_i n_i \ln (s_i + b_i) - (s_i + b_i - b_i) - n_i \ln b_i \\ &= \sum_i n_i \ln \left(\frac{s_i + b_i}{b_i}\right) - s_i \\ &= -s_{\text{tot}} + \sum_i n_i \ln \left(\frac{s_i + b_i}{b_i} \right) \\ &\text{or alternatively} \\ &= -s_{\text{tot}} + \sum_i n_i \ln \left(1 + \frac{s_i}{b_i} \right) \\ \end{align*} ** Generic limit calculation method :noexport: :PROPERTIES: :CUSTOM_ID: sec:limit:limit_method :END: - [ ] *REPHRASE ALL THIS AS TO FIRST DERIVE THE ORIGIN OF THE NATURE VERSION, THEN PRESENT OUR EXTENSION* We will now present a limit calculation method that is based on the limit presented in cite:cast_nature, but extended to provide a fully generic limit calculation method that requires no restriction to specific regions of interest. The likelihood function used in cite:cast_nature is #+NAME: eq:nature_likelihood_function \begin{align} \ln \mathcal{L} = -R_T + \sum_i^n \ln R(E_i, d_i, \vec{x}_i) \end{align} where $R_T$ is the total expected signal and $R$ the sum of signal and background contributions. The details will be explained further down. First we will derive the Bayesian method and discuss the individual contributions. #+begin_comment Describe log L and χ² distribution and how to compute limit. Unphysicality, fix by rescaling, χ² min + 4 thing *UPDATE*: good that we now understand how this actually works, i.e. 
integrate the posterior probability (likelihood * prior / normalization) #+end_comment The maths of the likelihood expression we use, is mostly straight forward. A likelihood function is purely defined as the product of the individual probabilities for each 'channel' in our measurement. That way the likelihood gives us the total probability to get this exact measurement outcome out of all possible outcomes, as a function of the coupling constant (in our case). If we start from a likelihood ratio \footnote{Likelihood ratio simply means taking a ratio of two different likelihood functions.} of the signal + background hypothesis over the pure background hypothesis, in the binned case we can derive formula [[#eq:nature_likelihood_function]] from first principles. The number of measured counts in each bin is simply a Poisson distribution: #+begin_comment Clarify what a "bin" refers to and what "channels" are. #+end_comment \[ P_{\text{Pois}}(k; λ) = \frac{λ^k e^{-λ}}{k!} \] for each bin an expected number of counts $λ$ (the mean) then means a probability given $P$ for a "measured" number of counts $k$. Combining multiple "channels" is then simply the product of these individual channels, giving us the likelihood for one experiment: \[ \mathcal{L} = \prod_i P_{i, \text{Pois}}(k; λ) = \prod_i \frac{λ_i^{k_i} e^{-λ_i}}{k_i!} \] Applying this to the previously mentioned likelihood ratio $\mathcal{L}$ gives us: \[ \mathcal{L} = \frac{\prod_i P_{i, \text{Pois}}(n_i; s_i + b_i)}{\prod_iP_{i,\text{Pois}}(n_i; b_i)} = \prod_i \frac{ \frac{(s_i + b_i)^{n_i}}{n_i!} e^{-(s_i + b_i)} }{ \frac{b_i^{n_i}}{n_i!} e^{-b_i}} \] where $n_i$ is simply the number of measured candidates within the signal sensitive region in each bin $i$. Typically, each bin might be a bin in energy, but it can be any kind of "bin" as long as each bin corresponds to something that follows a Poisson distribution. We will make use of this fact in a bit. 
This can be interpreted as an extended likelihood function, as the ratio $Q$ of two Poisson distributions no longer satisfies the normalization condition
\[ \int_{-∞}^{∞} P\, \mathrm{d}x = 1. \]
Instead we have
\[ \int_{-∞}^{∞} Q\, \mathrm{d}x = N \]
where $N$ can be interpreted as a hypothetical total number of counts measured in the experiment; the starting point of the definition of the extended maximum likelihood estimation. From here we take the logarithm of the expression to obtain the numerically more stable $\ln \mathcal{L}$ expression:
#+NAME: eq:likelihood_1_plus_s_over_b_form
\begin{align*}
\ln \mathcal{L} &= \ln \prod_i \frac{ \frac{(s_i + b_i)^{n_i}}{n_i!} e^{-(s_i + b_i)} }{ \frac{b_i^{n_i}}{n_i!} e^{-b_i} } \\
&= \sum_i \ln \frac{ \frac{(s_i + b_i)^{n_i}}{n_i!} e^{-(s_i + b_i)} }{ \frac{b_i^{n_i}}{n_i!} e^{-b_i} } \\
&= \sum_i \left[ \ln\left( \frac{(s_i + b_i)^{n_i}}{n_i!} e^{-(s_i + b_i)} \right) - \ln\left( \frac{b_i^{n_i}}{n_i!} e^{-b_i} \right) \right] \\
&= \sum_i \left[ n_i \ln (s_i + b_i) - \ln n_i! - (s_i + b_i) - (n_i \ln b_i - \ln n_i! - b_i) \right] \\
&= \sum_i \left[ n_i \ln (s_i + b_i) - (s_i + b_i) - n_i \ln b_i + b_i \right] \\
&= \sum_i \left[ n_i \ln (s_i + b_i) - (s_i + b_i - b_i) - n_i \ln b_i \right] \\
&= \sum_i \left[ n_i \ln \left(\frac{s_i + b_i}{b_i}\right) - s_i \right] \\
&= -s_{\text{tot}} + \sum_i n_i \ln \left(\frac{s_i + b_i}{b_i} \right) \\
&\text{or alternatively} \\
&= -s_{\text{tot}} + \sum_i n_i \ln \left(1 + \frac{s_i}{b_i} \right)
\end{align*}
Here $s_{\text{tot}}$ represents the total number of expected "signal" like counts (in our case the total number of expected photons due to axion conversion). The last two lines show us that all that is really important is that the normalizations of $s_i$ and $b_i$ are the same. The absolute normalization (e.g. whether it is in $\si{keV⁻¹}$ or absolute counts etc.) plays no role, as it will only be a constant multiplier.
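Since the simplification from the explicit Poisson ratio down to the $-s_{\text{tot}} + \sum_i n_i \ln(1 + s_i/b_i)$ form is easy to get wrong, a quick numerical cross-check is useful. The following is a small sketch in Python (rather than the Nim used elsewhere in this thesis), with made-up bin contents:

```python
import math

def lnL_ratio(n, s, b):
    """ln L directly from the ratio of the two Poisson likelihoods."""
    def ln_pois(k, lam):
        return k * math.log(lam) - lam - math.lgamma(k + 1)
    return sum(ln_pois(ni, si + bi) - ln_pois(ni, bi)
               for ni, si, bi in zip(n, s, b))

def lnL_simplified(n, s, b):
    """The simplified form: -s_tot + Σ_i n_i ln(1 + s_i / b_i)."""
    return -sum(s) + sum(ni * math.log(1 + si / bi)
                         for ni, si, bi in zip(n, s, b))

# made-up bin contents: measured counts, expected signal, expected background
n = [3, 0, 1, 2]
s = [0.5, 1.2, 0.3, 0.8]
b = [2.0, 1.5, 0.7, 1.1]
assert abs(lnL_ratio(n, s, b) - lnL_simplified(n, s, b)) < 1e-12
```

The $\ln n_i!$ terms cancel exactly in the ratio, which is why the simplified form needs no factorials.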
That constant can always be neglected in the total likelihood (a constant only moves the $\ln\mathcal{L}$ curve up or down, but does not change the location of its maximum!).

Coming back to the "channels" / the binning: if we choose our bins to be bins in time, so small that each bin contains either 0 or 1 count, we recover the desired unbinned log likelihood. Starting from the second to last line above and dropping the constant $-n_i \ln b_i$ terms gives
\[ \ln \mathcal{L} = -R_{\text{tot}} + \sum_{\text{candidates}} \ln(s_i + b_i) \]
where the sum now runs over each candidate instead of the abstract "channels".

- [ ] *CLARIFY THE MEANING OF EACH TERM AND WHAT IT CORRESPONDS TO BETTER!*

With an understanding of where the formula comes from, we can more safely make statements about the shape of the $\ln\mathcal{L}$ curve. For that it is important to realize that:
- $-R_{\text{tot}}$ depends on $g²_{ae}$, $-R_{\text{tot}}(g²_{ae})$
- $s_i$ depends on $g²_{ae}$, $\vec{x}$ and the cluster energy $E$, $s_i(g²_{ae}, \vec{x}, E)$
- $b_i$ only depends on the cluster energy $E$
  - [ ] *IN OUR APPLICATION $b_i$ DOES DEPEND ON POSITION AS WELL!*
This means $b_i$ is, for all intents and purposes, constant under a change of the coupling constant for a fixed set of candidate clusters. For a scan over $\ln\mathcal{L}$ this is precisely the situation. Now let us consider what each part's contribution to the $\ln\mathcal{L}$ curve will look like:
1. $-R_{\text{tot}}$: the total number of counts depends on the axion flux, the tracking time and the total detection efficiencies. The latter two are simply constants when integrating over the region of interest in energy, namely \SIrange{0}{10}{\keV}. The axion flux scales as:
   \[ f(g') = f(g) \frac{g^{\prime 2}}{g²} \]
   i.e. a squared rescaling of the flux. Given that we scan the $\ln\mathcal{L}$ space in $g²_{ae}$, this means the total flux (the integral of $f$ over all energies) just scales linearly with $g²_{ae}$.
   Due to the minus sign in front, the result is simply a line with negative slope, going through the origin at $g²_{ae} = 0$.
2. $b_i$: the background hypothesis is a function depending only on the energy of each cluster (and implicitly on the relevant area and tracking time). For a fixed set of candidates during a $g²_{ae}$ scan, its contribution is simply constant.
3. $s_i$: this is the complicated one. It depends not only on $g²_{ae}$, but also on the energy $E$ and, more importantly, the position of the cluster center $\vec{x}$. The latter determines the effective flux expected from axion conversion at each position on the chip, after focusing by the X-ray telescope. The cluster center positions are constant for a single scan of $g²_{ae}$. This means the signal $s_i$ effectively behaves exactly like $R_{\text{tot}}$ under $g²_{ae}$. The resulting behavior is thus also linear, except with a positive slope. The big difference between $s_i$ and $R_{\text{tot}}$, however, is that $R_{\text{tot}}$ is integrated over all energies, whereas $s_i$ is only evaluated at specific energies (in keV⁻¹).

#+begin_comment
Rephrase this whole part & remove the confusion, as it's now fully understood how to approach it.
#+end_comment

Combining these three facts, we expect to find a maximum where the $R_{\text{tot}}$ term and the $s_i$ term cancel each other. The $b_i$ term only contributes an offset (which however depends on the candidates, which is why it cannot be ignored!). This brings us to a particular problem: what happens if the candidates are all located outside of the axion sensitive region? In that case their contribution will be zero, due to the position dependency via $\vec{x}$. This leaves a pure $R_{\text{tot}}$ negative slope on top of a constant background $b_i$. In this case the $\ln\mathcal{L}$ curve does *not* have a maximum!
In cases where the contribution is arbitrarily small, there *will* be a maximum somewhere, but it will be very far into the unphysical range, at which point the 1σ width (based on $\ln\mathcal{L}_{\text{max}} - 0.5$) yields a width that leads to a physical limit at 0 (due to the Gaussian CDF being essentially 1 many σ away from the center).

- [ ] *THIS IS NOT A PROBLEM ANYMORE*

Given our statistics of $\mathcal{O}(30)$ candidates in our tracking time, this presents a problem. For toy experiments there is a high chance of running into precisely this problem: a large fraction of candidates (maybe 70--90 %) will be outside the sensitive region, resulting in no good way to determine the limit.

**** Further expl: Bayes integral

*AFTER MAIL FROM IGOR ON <2021-10-03 Sun>*:
- [ ] *THIS SHOULD BE REPHRASED (how it relates to final version / application of nature version) AND BECOME NOEXPORT*

-> This is very useful knowledge, because otherwise one gets as confused as I did! Essentially we simply integrate the posterior and demand:

0.95 = ∫_{-∞}^{∞} L(g_ae²) Π(g_ae²) / L_0 d(g_ae²)

where L is the likelihood function (*not* the ln L!) and Π is the prior used to exclude the unphysical region of the likelihood phase space, i.e.:

Π(g_ae²) = { 0 if g_ae² < 0, 1 if g_ae² ≥ 0 }

and L_0 is simply a normalization constant to make sure the integral is normalized to 1. Thus, the integral reduces to the physical range:

0.95 = ∫_0^∞ L(g_ae²) / L_0 d(g_ae²)

where the 0.95 is, due to normalization, simply the requirement of a 95 % confidence limit. With this out of the way I implemented this into the limit calculation code as the =lkBayesScan= limit.
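The =lkBayesScan= idea can be sketched numerically. The following is an illustrative Python snippet (not the actual Nim implementation): it computes a 95 % upper limit by integrating a toy likelihood over the physical range $g²_{ae} \geq 0$, for exactly the pathological case described above where the likelihood maximum lies in the unphysical region:

```python
import math

def bayes_limit(lnL, g2_max=1.0, n=200_000, cl=0.95):
    """95% CL upper limit: integrate the likelihood (flat prior) over the
    physical range g² >= 0 and return where the normalized CDF crosses `cl`."""
    xs = [g2_max * i / (n - 1) for i in range(n)]
    ls = [math.exp(lnL(x)) for x in xs]
    total = sum(ls)
    acc = 0.0
    for x, l in zip(xs, ls):
        acc += l
        if acc >= cl * total:
            return x
    return g2_max

# Toy likelihood: Gaussian in g² whose maximum sits in the *unphysical* region
# g² < 0, the case where the lnL_max - 0.5 prescription breaks down.
lnL = lambda g2: -0.5 * ((g2 - (-0.2)) / 0.1) ** 2
limit = bayes_limit(lnL)  # about 0.105 for this toy likelihood
```

Even though the maximum of this toy likelihood lies at $g² = -0.2$, the posterior integral over the physical range still yields a finite, well-defined limit.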
** Likelihood ingredients in detail
:PROPERTIES:
:CUSTOM_ID: sec:limit:ingredients
:END:

To reiterate, the likelihood function we finally evaluate using MCMC, with explicit dependency on the coupling constant we (mainly) intend to consider -- the axion-electron coupling $g_{ae}$ -- can be written as
\[ \mathcal{L'}_{M}(g²_{ae}) = \iiiint_{-∞}^∞ e^{-s'_{\text{tot}}(g²_{ae})} · \prod_i \left(1 +\frac{s_i''(g²_{ae})}{b_i'}\right) · \exp\left[ -\frac{ϑ_b²}{2 σ_b²} -\frac{ϑ_s²}{2 σ_s²} -\frac{ϑ_x²}{2σ_x²} -\frac{ϑ_y²}{2 σ_y²} \right] \, \dd ϑ_b \dd ϑ_s \dd ϑ_x \dd ϑ_y, \]
where $i$ runs over all candidates. We alluded to the general makeup of both the signal terms $s_{\text{tot}}$ and $s_i$ as well as the background $b_i$ in sec. [[#sec:limit:method_computing_L]]. Let us now look at what goes into each of these explicitly and how they are calculated, starting with each of the signal contributions in
\[ s_i(g²_{ae}) = f(g²_{ae}, E_i) · A · t · P_{a \rightarrow γ}(g²_{aγ}) · ε(E_i) · r(x_i, y_i), \]
in sec. [[#sec:limit:ingredients:magnet_tracking]] to sec. [[#sec:limit:ingredients:total_signal]], and the background in sec. [[#sec:limit:ingredients:background]]. Finally, sec. [[#sec:limit:ingredients:candidates]] explains how we sample toy candidate sets.

*** TODOs for this section [/] :noexport:

- [X] *All The text here below is irrelevant now* -> Must become an introduction of sort for the ingredients.
- [X] *RENAME THE CUSTOM ID OF THIS SECTION*
- [ ] *THINK ABOUT WHETHER TO ADD DEFINITION OF s AGAIN*

Old text for this section from before we added all the subsections.
#+begin_quote
<insert equation>
where primed symbols refer to the base symbol with a modification due to the value of the corresponding nuisance parameter, i.e. $x' = x(1 + ϑ_x)$. The double primed $s_i''$ not only includes $ϑ_s$, but also the position dependent nuisance parameters $ϑ_x$ and $ϑ_y$ (see again the previous section).
...
The limit calculation method is based on the approach presented in cite:cast_nature, with modifications to better suit the GridPix detector and to make the method more generic (under exchange of the model to be studied).
#+begin_comment
Extend this by the derivation for the marginal likelihood that shows how one gets to the shown equation from what's shown in the previous section.
-> We derive this in ~statusAndProgress~, incl Sagemath stuff. The sagemath must become a :noexport: section. It's important we highlight how one adds additional terms to the original likelihood to end up at the marginal likelihood.
#+end_comment
We will now go through the ingredients for the limit method one by one. The final likelihood including nuisance parameters we evaluate is (see section [[#sec:limit:limit_method]] for the derivation):

The inputs required to compute a likelihood value are (with the relevant parameters):
- a set of candidate clusters (either from the real solar tracking or randomly sampled ones) with cluster centers at $(x_i, y_i)$ and associated energies $E_i$ (over which the product $i$ runs).
- the solar axion flux produced by the model to be analyzed, as a function of energy (depending on $g_{ae}$ and $g_{aγ}$; the $g_{aγ}$ contribution can be ignored for certain choices of $g_{ae}$ and $g_{aγ}$).
- the conversion probability of axions in a magnetic field (depending on $g_{aγ}$).
- the efficiency of the X-ray optics as a function of energy $E$.
- the transmission probability of X-rays through the detector window as a function of $E$ and the entrance position $(x, y)$.
- the absorption probability of X-rays in the used argon gas as a function of energy $E$.
- the average depth $⟨d⟩$ at which the X-rays produce a photoelectron in the argon gas for the expected flux of converted solar axions (after propagation through the X-ray optics and detector window).
- the resulting flux of axion induced X-rays as a function of the position $(x, y)$ on the detector, depending on $g_{ae}$.
- the expected background rate at any position $(x, y)$ in the detector and any energy $E$.

From here we will go through each of these contributions to explain how each is obtained and what they look like, specific to our limit calculation and the CAST data taking with the Septemboard detector.
\begin{equation}
s_i(g) = f(g, E_i) · A · t · P_{a \rightarrow γ}(g_{aγ}) · ε(E_i) · r(x_i, y_i)
\end{equation}
- [ ] Should we here essentially just refer back to the "theory like" sections before where each of these ingredients is already introduced? At least for things like argon absorption / detection efficiency etc. those will certainly be presented before. That means by showing them here again, we essentially show the same thing again. For things like the background interpolation that I would just introduce "somewhere here". One option would be: for all where it _makes sense_, have a big "inputs plot" (maybe a facet, or a =ggmulti= plot) of all inputs we have? (*UPDATE*: <2022-08-13 Sat 15:48> see sanity checks for likelihood code) Then again, here we mainly present the *technique* still. So for example the final candidates wouldn't show up here. But ok, it's *only* the candidates that actually change.
#+end_quote

*** Magnet bore and solar tracking time - $A$, $t$
:PROPERTIES:
:CUSTOM_ID: sec:limit:ingredients:magnet_tracking
:END:

We start with the simplest inputs to the signal: the magnet bore area and the solar tracking time. The CAST magnet has a bore diameter of $d_{\text{bore}} = \SI{43}{mm}$, as introduced in sec. [[#sec:helioscopes:cast]]. The relevant area for the solar axion flux is the entire magnet bore, because the X-ray telescope covers the full area. As such, $A$ is a constant:
\[ A = π (\SI{21.5}{mm})² = \SI{1452.2}{mm²}. \]
The time of interest is the total solar tracking duration, in which the detector was sensitive (i.e.
removing the dead time due to readout). As given in the CAST data taking overview, sec. [[#sec:cast:data_taking_campaigns]], the amount of active solar tracking time is \[ t = \SI{160.38}{h}. \] *** Solar axion flux - $f(g, E_i)$ :PROPERTIES: :CUSTOM_ID: sec:limit:ingredients:solar_axion_flux :END: The solar axion flux is based on the calculations by J. Redondo cite:Redondo_2013 as already introduced in sec. [[#sec:theory:solar_axion_flux]]. The $f(g², E_i)$ term of the signal refers to the differential solar axion flux. The flux, fig. sref:fig:limit:ingredients:solar_axion_flux, is computed for a specific axion model and coupling constant, in this case $g_{\text{ref}} = g_{ae} = \num{1e-13}$ and $g_{aγ} = \SI{1e-12}{GeV^{-1}}$. As the flux scales by the coupling constant squared, it is rescaled to a new coupling constant $g²_{ae}$ by \[ f(g²_{ae}, E_i) = f(g²_{ae, \text{ref}}, E_i) · \frac{g²_{ae}}{g²_{\text{ref}, ae}}. \] $g_{aγ}$ is kept constant. At this ratio of the two coupling constants, the axion-photon flux is negligible. The shown differential flux is computed using a Sun to Earth distance of $d_{S⇔E} = \SI{0.989}{AU}$ due to the times of the year in which solar trackings were taken at CAST. Fig. [[sref:fig:limit:ingredients:distance_sun_earth]] shows the distance between Sun and Earth during the entire data taking period, with the solar trackings marked in green. The data for the distance is obtained using the JPL Horizons API [[cite:&jplHorizons]]. The code used to calculate the differential flux, [[cite:&JvO_axionElectron]] [fn:axion_flux_raytracing], can also be used to compute the flux for other axion models, for example a pure axion-photon coupling model. 
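The quadratic coupling rescaling (and, for illustration, the trivial $1/d²$ distance dependence; in the text the flux is computed directly at $\SI{0.989}{AU}$ rather than rescaled) can be expressed as a one-line helper. A small sketch in Python with a hypothetical reference flux value; the reference coupling $g_{ae,\text{ref}} = \num{1e-13}$ matches the text:

```python
def rescale_flux(f_ref, g2_ae, g2_ref=(1e-13) ** 2, d_au=0.989):
    """Rescale a differential flux f_ref, computed at the reference coupling
    g²_ref and a Sun-Earth distance of 1 AU, to a new coupling g²_ae and
    distance d_au: quadratic in the coupling, 1/d² in the distance."""
    return f_ref * (g2_ae / g2_ref) / d_au ** 2

f_ref = 3.0  # hypothetical flux value at one energy, arbitrary units
# doubling g_ae quadruples the flux
assert abs(rescale_flux(f_ref, g2_ae=(2e-13) ** 2, d_au=1.0) - 4 * f_ref) < 1e-10
```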
#+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Solar axion flux") (label "fig:limit:ingredients:solar_axion_flux") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/axions/differential_solar_axion_flux_by_type.pdf")) (subfigure (linewidth 0.5) (caption "Distance Sun-Earth") (label "fig:limit:ingredients:distance_sun_earth") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/systematics/sun_earth_distance_cast_solar_tracking.pdf")) (caption (subref "fig:limit:ingredients:solar_axion_flux") ": Differential solar axion flux assuming a distance to the Sun of " ($ (SI 0.989 "AU")) " based on " (subref "fig:limit:ingredients:distance_sun_earth") ".") (label "fig:limit:ingredients:flux_distance")) #+end_src [fn:axion_flux_raytracing] The code was mainly developed by Johanna von Oy under my supervision. I only contributed minor feature additions and refactoring, as well as performance related improvements. **** TODOs for this section [/] :noexport: -> This is only for the axion *IMAGE*: This means we require knowledge about the production as shown in the heatmap of fig. [[fig:limit:axion_production_heatmap]]. (Move this to theory?) - [X] generate heatmap of production as function of energy and solar radius -> In theory section! - [ ] REFERENCE THEORY DISTRIBUTION - [X] or show here just the signal? - [ ] the idea being here that we highlight what one needs to replace in order to compute a limit of something else possibly (which then allows us later to have a section "axion-photon" or "chameleon" where we just present the inputs "see sec. blub, here's the production heatmap" kind of deal - [X] *Should we show the Earth⇔Sun distance here or in data summary of CAST data taking?* -> For now here. Might be moved at some point. - [X] *JPL HORIZONS API CITATION* -> I started the citation in ~references.org~, but it's not finished yet!!!!! There's probably an official citation that one should use for it! 
- [X] *ADD REFERENCE TO AXIONELECTRONLIMIT*.
- [ ] *Rename the repository!* Has nothing to do with a *LIMIT*

**** Generate solar axion flux plot and distance Sun-Earth :extended:
:PROPERTIES:
:CUSTOM_ID: sec:limit:ingredients:solar_axion_flux:gen_flux_distance_sun
:END:

Differential flux:
-> ~readOpacityFiles~
-> See sec. [[#sec:theory:gen_solar_axion_flux_plots]] and sec. [[#sec:appendix:raytracing:generate_axion_image]]

***** Use Horizons API to download data for distance during CAST data taking

See [[file:~/org/journal.org::#sec:journal:01_07_23_sun_earth_dist]]
-> notes about writing the below code.
See [[file:~/org/journal.org::#sec:journal:28_06_23:horizonsapi_devel]]
-> notes on development of the ~horizonsapi~ Nim library that we use below.

First we download the distance between the Sun and Earth during the data taking campaign at CAST (between Jan 2017 and Dec 2019; we could be more strict, but well). This is done using https://github.com/SciNim/horizonsAPI, a simple library to interface with JPL's Horizons API, an API that allows access to all sorts of data about the solar system.

#+begin_src nim :tangle code/get_cast_sun_earth_distances.nim
import horizonsapi, datamancer, times, math

let startDate = initDateTime(01, mJan, 2017, 00, 00, 00, 00, local())
let stopDate = initDateTime(31, mDec, 2019, 23, 59, 59, 00, local())
let nMins = (stopDate - startDate).inMinutes()
const blockSize = 85_000 # max line number somewhere above 90k. Do less to have some buffer
let numBlocks = ceil(nMins.float / blockSize.float).int # we end up at a later date than `stopDate`, but that's fine
echo numBlocks
let blockDur = initDuration(minutes = blockSize)

let comOpt = {
  #coFormat : "json", # data returned as "fake" JSON
  coMakeEphem : "YES",
  coCommand : "10", # our target is the Sun, index 10
  coEphemType : "OBSERVER"
}.toTable # observational parameters
var ephOpt = {
  eoCenter : "coord@399", # observational point is a coordinate on Earth (Earth idx 399)
  eoStartTime : startDate.format("yyyy-MM-dd"),
  eoStopTime : (startDate + blockDur).format("yyyy-MM-dd"),
  eoStepSize : "1 MIN", # in 1 min steps
  eoCoordType : "GEODETIC",
  eoSiteCoord : "+6.06670,+46.23330,0", # Geneva
  eoCSVFormat : "YES"
}.toTable # data as CSV within the JSON (yes, really)
var q: Quantities
q.incl 20 ## Observer range! In this case range between our coordinates on Earth and target

var reqs = newSeq[HorizonsRequest]()
for i in 0 ..< numBlocks:
  # modify the start and end dates
  ephOpt[eoStartTime] = (startDate + i * blockDur).format("yyyy-MM-dd")
  ephOpt[eoStopTime] = (startDate + (i+1) * blockDur).format("yyyy-MM-dd")
  echo "From : ", ephOpt[eoStartTime], " to ", ephOpt[eoStopTime]
  reqs.add initHorizonsRequest(comOpt, ephOpt, q)

let res = getResponsesSync(reqs)

proc convertToDf(res: seq[HorizonsResponse]): DataFrame =
  result = newDataFrame()
  for r in res:
    result.add parseCsvString(r.csvData)

let df = res.convertToDf().unique("Date__(UT)__HR:MN")
  .select(["Date__(UT)__HR:MN", "delta", "deldot"])
echo df
df.writeCsv("/home/basti/phd/resources/sun_earth_distance_cast_datataking.csv",
            precision = 16)
#+end_src

***** Generate plot of distance with CAST trackings marked

See again [[file:~/org/journal.org::#sec:journal:01_07_23_sun_earth_dist]]

With the CSV file produced in the previous section we can now plot the CAST trackings (from the TimepixAnalysis ~resources~ directory) against it.
Note: We need to use the same plot height as for the differential axion flux produced in sec. [[#sec:theory:gen_solar_axion_flux_plots]]. Height not defined, width 600 (golden ratio).

#+begin_src nim :tangle code/plot_distance_sun_earth_horizons_cast.nim
import ggplotnim, sequtils, times, strutils, strformat

# 2017-Jan-01 00:00
const Format = "yyyy-MMM-dd HH:mm"
const OrgFormat = "'<'yyyy-MM-dd ddd H:mm'>'"
const p2017 = "~/CastData/ExternCode/TimepixAnalysis/resources/DataRuns2017_Reco_tracking_times.csv"
const p2018 = "~/CastData/ExternCode/TimepixAnalysis/resources/DataRuns2018_Reco_tracking_times.csv"

var df = readCsv("~/phd/resources/sun_earth_distance_cast_datataking.csv")
  .mutate(f{string -> int: "Timestamp" ~ parseTime(idx("Date__(UT)__HR:MN").strip, Format, local()).toUnix.int})

proc readRuns(f: string): DataFrame =
  result = readCsv(f)
    .mutate(f{string -> int: "TimestampStart" ~ parseTime(idx("Tracking start"), OrgFormat, local()).toUnix.int})
    .mutate(f{string -> int: "TimestampStop" ~ parseTime(idx("Tracking stop"), OrgFormat, local()).toUnix.int})

var dfR = readRuns(p2017)
dfR.add readRuns(p2018)
var dfHT = newDataFrame()
for tracking in dfR:
  let start = tracking["TimestampStart"].toInt
  let stop = tracking["TimestampStop"].toInt
  dfHT.add df.filter(f{int: `Timestamp` >= start and `Timestamp` <= stop})
dfHT["Type"] = "Trackings"
df["Type"] = "HorizonsAPI"
df.add dfHT

let deltas = dfHT["delta", float]
let meanD = deltas.mean
let varD = deltas.variance
let stdD = deltas.std
echo "Mean distance during trackings = ", meanD
echo "Variance of distance during trackings = ", varD
echo "Std of distance during trackings = ", stdD
# and write back the DF of the tracking positions
#dfHT.writeCsv("~/phd/resources/sun_earth_distance_cast_solar_trackings.csv")

let texts = @[r"$μ_{\text{distance}} = " & &"{meanD:.4f}$",
              #r"$\text{Variance} = " & &"{varD:.4g}$",
              r"$σ_{\text{distance}} = " & &"{stdD:.4f}$"]
let annot = texts.join(r"\\")
echo "Annot: ", annot

proc thm(): Theme =
  result = sideBySide()
  result.annotationFont = some(font(7.0)) # we don't want monospace font!

ggplot(df, aes("Timestamp", "delta", color = "Type")) +
  geom_line(data = df.filter(f{`Type` == "HorizonsAPI"})) +
  geom_point(data = df.filter(f{`Type` == "Trackings"}), size = 1.0) +
  scale_x_date(isTimestamp = true, formatString = "yyyy-MM", dateSpacing = initDuration(days = 90)) +
  xlab("Date", rotate = -45.0, alignTo = "right", margin = 3.0) +
  annotate(text = annot, x = 1.5975e9, y = 1.0075) +
  ggtitle("Distance in AU Sun ⇔ Earth") +
  legendPosition(0.7, 0.2) +
  themeLatex(fWidth = 0.5, width = 600, baseTheme = thm, useTeX = true) +
  margin(left = 3.5, bottom = 3.75) +
  ggsave("~/phd/Figs/systematics/sun_earth_distance_cast_solar_tracking.pdf", width = 600, height = 360, dataAsBitmap = true)
#+end_src

#+RESULTS:
: Mean distance during trackings = 0.9891142629616164
: Variance of distance during trackings = 1.399200318749014e-05
: Std of distance during trackings = 0.003740588615109946
: INFO: The integer column `Timestamp` has been automatically determined to be continuous. To overwrite this behavior add a `+ scale_x/y_discrete()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to. Or apply a `factor` to the column name in the `aes` call, i.e. `aes(..., factor("Timestamp"), ...)`.
*** Conversion probability - $P_{aγ}(g²_{aγ})$
:PROPERTIES:
:CUSTOM_ID: sec:limit:ingredients:conversion_probability
:END:

The conversion probability of the arriving axions is simply a constant factor depending on $g_{aγ}$; see section [[#sec:theory:axion_interactions]] for the derivation from the general formula. The simplified expression for coherent conversion [fn:incoherent] in a constant magnetic field [fn:inhomogeneous_fields] is
\[ P(g²_{aγ}, B, L) = \left(\frac{g_{aγ} \cdot B \cdot L}{2}\right)^2 \]
where the relevant numbers for the CAST magnet are:
\begin{align*}
B &= \SI{8.8}{T} &↦ B_{\text{natural}} &= \SI{1719.1}{eV^2} \\
L &= \SI{9.26}{m} &↦ L_{\text{natural}} &= \SI{4.69272e7}{eV^{-1}}.
\end{align*}
The magnetic field is taken from the CAST slow control log files and matches the values used in the CAST CAPP paper cite:cast_capp_nature (in contrast to some older papers which assumed $\SI{9}{T}$, based on when the magnet was still intended to be run above $\SI{13000}{A}$). Assuming a fixed axion-photon coupling of $g_{aγ} = \SI{1e-12}{GeV^{-1}}$ the conversion probability comes out to:
\begin{align*}
P(g²_{aγ}, B, L) &= \left(\frac{g_{aγ} \cdot B \cdot L}{2}\right)^2 \\
&= \left(\frac{\SI{1e-12}{GeV^{-1}} \cdot \SI{1719.1}{eV^2} \cdot \SI{4.693e7}{eV^{-1}}}{2}\right)^2 \\
&= \num{1.627e-21}
\end{align*}

[fn:incoherent] Note that to calculate limits for larger axion masses the $\sinc$ term of eq. [[eq:theory:axion_interaction:conversion_probability]] needs to be included.

[fn:inhomogeneous_fields] Also note that in a perfect analysis one would compute the conversion in a realistic magnetic field, as the field strength is not perfectly homogeneous. That would require a very precise field map of the magnet. In addition, the calculation of axion conversion in inhomogeneous magnetic fields is significantly more complicated.
As far as I understand, it essentially requires a "path integral like" approach over all possible paths through the magnet, where each path sees different, varying field strengths. Due to the small size of the LHC dipole prototype magnet and the generally stringent requirements for homogeneity, this is not done for this analysis. However, for future (Baby)IAXO analyses this will likely be necessary.

**** TODO for this section [/] :noexport:

- [ ] *FIX CITATION OF CAST CAPP PAPER*
- [X] *Introduce the full expression* -> In theory
- [X] *Show that it simplifies to sinc(x) case in vacuum* -> In theory
- [X] *Simplify for small masses sin(x) -> x* -> In theory
- [X] *Show probability in SI units!* -> In theory
- [ ] *MENTION 8.8 T from CAPP & CAST SLOW CONTROL LOGS*

**** Computing conversion factors and comparing natural to SI eq. :extended:

The conversion factors from Tesla and meter to natural units are as follows:
#+begin_src nim :results raw
import unchained
echo "Conversion factor Tesla: ", 1.T.toNaturalUnit()
echo "Conversion factor Meter: ", 1.m.toNaturalUnit()
#+end_src

#+RESULTS:
Conversion factor Tesla: 195.353 ElectronVolt²
Conversion factor Meter: 5.06773e+06 ElectronVolt⁻¹

*TODO*: Move this out of the thesis and just show the numbers in text? Keep the "derivation / computation" for the "full" version (:noexport: ?).
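These conversion factors can also be cross-checked from SI constants alone, independent of ~unchained~. A sketch in Python: the meter factor follows from $ħc$, and the Tesla factor from matching the field energy density $B²/(2μ_0)$ between SI (J/m³) and Heaviside-Lorentz natural units (eV⁴):

```python
import math

# CODATA SI values
hbar = 1.054571817e-34   # J·s
c    = 2.99792458e8      # m/s
e    = 1.602176634e-19   # C, and J per eV
mu0  = 1.25663706212e-6  # vacuum permeability, N/A²

# 1 m in eV⁻¹, from ħc ≈ 197.327 MeV·fm
meter_to_inv_eV = e / (hbar * c)
# 1 T in eV², from matching B²/(2μ0) in J/m³ against B_nat²/2 in eV⁴
tesla_to_eV2 = math.sqrt((hbar * c) ** 3 / mu0) / e ** 2

B = 8.8 * tesla_to_eV2      # eV², about 1719.1
L = 9.26 * meter_to_inv_eV  # eV⁻¹, about 4.6927e7
g = 1e-12 * 1e-9            # 1e-12 GeV⁻¹ expressed in eV⁻¹
P = (g * B * L / 2) ** 2    # dimensionless conversion probability
```

This reproduces the factors 195.353 eV²/T and 5.06773e6 eV⁻¹/m, and the probability $\num{1.627e-21}$ quoted in the text.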
As such, the resulting conversion probability ends up as:
#+begin_src nim :results raw
import unchained, math
echo "8.8 T = ", 8.8.T.toNaturalUnit()
echo "9.26 m = ", 9.26.m.toNaturalUnit()
echo "P = ", pow( 1e-12.GeV⁻¹ * 8.8.T.toNaturalUnit() * 9.26.m.toNaturalUnit() / 2.0, 2.0)
#+end_src

#+RESULTS:
8.8 T = 1719.1 eV²
9.26 m = 4.69272e+07 eV⁻¹
P = 1.627022264358953e-21

\begin{align*}
P(g_{aγ}, B, L) &= \left(\frac{g_{aγ} \cdot B \cdot L}{2}\right)^2 \\
&= \left(\frac{\SI{1e-12}{\per GeV} \cdot \SI{1719.1}{eV^2} \cdot \SI{4.693e7}{eV^{-1}}}{2}\right)^2 \\
&= \num{1.627e-21}
\end{align*}
Note that this is of the same (inverse) order of magnitude as the flux of solar axions ($\sim10^{21}$ in some sensible unit of time), meaning the experiment expects $\mathcal{O}(1)$ counts, which is sensible.

#+begin_src nim
import unchained, math
echo "8.8 T = ", 8.8.T.toNaturalUnit()
echo "9.26 m = ", 9.26.m.toNaturalUnit()
echo "P(natural) = ", pow( 1e-12.GeV⁻¹ * 8.8.T.toNaturalUnit() * 9.26.m.toNaturalUnit() / 2.0, 2.0)
echo "P(SI) = ", ε0 * (hp / (2*π)) * (c^3) * (1e-12.GeV⁻¹ * 8.8.T * 9.26.m / 2.0)^2
#+end_src

#+RESULTS:
: 8.8 T = 1719.1 eV²
: 9.26 m = 4.69272e+07 eV⁻¹
: P(natural) = 1.627022264358953e-21
: P(SI) = 1.62702e-21 UnitLess

As we can see, both approaches yield the same number, meaning the additional conversion factors are correct.

*** Detection efficiency - $ε(E_i)$
:PROPERTIES:
:CUSTOM_ID: sec:limit:ingredients:detection_eff
:END:

The detection efficiency $ε(E_i)$ includes multiple aspects of the full setup. It can be further decomposed into the telescope efficiency, window transparency, gas absorption, software efficiency of the classifier and veto efficiency,
\[ ε(E_i) = ε_{\text{telescope}}(E_i) · ε_{\text{window}}(E_i) · ε_{\text{gas}}(E_i) · ε_{\text{software eff.}} · ε_{\text{veto eff.}}.
\]
The first three are energy dependent, while the latter two are constants that depend on the classifier and veto setup for which we compute limits.

**** TODOs for this section [/] :noexport:

- [ ] *Don't forget this includes things like the veto efficiencies as well! Strongback only implicitly!*
- [ ] *THINK ABOUT TURNING SEC INTO PARAGRAPHS?*

**** Telescope efficiency - $ε_{\text{telescope}}(E_i)$

The X-ray telescope has a direct impact not only on the shape of the axion signal on the readout, but also on the total number of X-rays transmitted. The effective transmission of an X-ray telescope is significantly lower than in the optical range. It is typically quoted using the term "effective area". In section [[#sec:helioscopes:cast:xray_optics]] the effective area of the two X-ray optics used at CAST is shown. The term effective area refers to the equivalent area a perfect X-ray telescope ($\SI{100}{\%}$ transmission) would cover. As such, the real efficiency $ε_{\text{tel}}$ can be computed as the ratio of the effective area $A_{\text{eff}}$ and the total area of the optic $A_{\text{tel}}$ exposed to light,
\[ ε_{\text{tel}}(E) = \frac{A_{\text{eff}}(E)}{A_{\text{tel}}} \]
where the effective area $A_{\text{eff}}$ depends on the energy. [fn:eff_area_reflectivity] In the case of CAST the relevant total area is not actually the cross-sectional area of the optic itself, but rather the exposed area due to the diameter of the magnet coldbore. With a coldbore diameter of $d_{\text{bore}} = \SI{43}{mm}$ the effective area can be converted to $ε_{\text{tel}}$. The resulting effective area is shown in fig. [[fig:limit:limit_method:combined_detection_eff]] in the next section, together with the window transmission and gas absorption.

#+begin_quote
Note: all publicly available effective areas for the LLNL telescope, meaning [[cite:&anders_phd]] and [[cite:&llnl_telescope_first_cast_results]], are either inapplicable, outdated or unfortunately wrong.
Jaime Ruz sent me the simulation results used for the CAST Nature paper [[cite:&cast_nature]], which include the effective area. These numbers are used in the figure below and our limit calculation. #+end_quote [fn:eff_area_reflectivity] Note that $ε_{\text{tel}}$ here is the average effective efficiency of the full telescope and /not/ the reflectivity of a single shell. As a Wolter I optic requires two reflections $ε_{\text{tel}}$ is equivalent to the reflectivity squared $R²$. Individual reflectivities of shells are further complicated by the fact that different shells receive parallel light under different angles, which means the reflectivity varies between shells. Therefore, this is a measure for the average efficiency. ***** TODOs for this section [/] :noexport: - [ ] *TALK ABOUT WHERE DATA COMES FROM!!!!!!* - [X] *UPDATE THE USED EFFECTIVE AREA!!!* See journal [[#sec:journal:2023_07_13]]!! - [X] Merge with window transmission and argon gas, as detection efficiency - [X] Refer back to section that describes the LLNL telescope! - [ ] Think about having a proper introduction to effective area in the theory introduction? -> Then here we can just give a quick reiteration and refer to the plot of the combined efficiency? - [X] plot of the effective area ***** Notes on the effective area :extended: Some might say people working with X-ray telescopes prefer the 'effective area' as a measure of efficiency to hide the fact how inefficient X-ray telescopes are, whoops. Anyway, the effective area of the LLNL telescope is still the biggest mystery to me. If you haven't read the raytracing appendix [[#sec:appendix:raytracing]], in particular the section about the LLNL telescope, sec. [[#sec:appendix:raytracing:llnl_telescope]], the public information available about the LLNL telescope is either outdated, contradictory or plain wrong. The PhD thesis of Anders Jakobsen [[cite:&anders_phd]] contains a plot of the effective area (fig. 
4.13 on page 64, 87 of the PDF), which peaks near ~10 cm². However, it is unclear what the numbers are actually based on. Likely they describe parallel incoming light. In addition, they likely include the initial telescope design of 14 shells instead of the final 13. Both mean the result is an overestimate. Then, [[cite:&llnl_telescope_first_cast_results]], the paper about the telescope at CAST, contains another effective area plot, peaking at about 8.2 cm². It is stated that the numbers are for an HPD (half power diameter) of 75 arc seconds using solar axion emission from a disc of 3 arcmin size. And yet, apparently these numbers are still an overestimate. As mentioned in the main text above, Jaime Ruz sent me the simulations used for the CAST Nature paper [[cite:&cast_nature]], which contain the axion image and effective area. These numbers peak at only about 7.3 cm²! At the very least this roughly matches the slides from the CAST collaboration meeting on <2017-01-23 Mon>, slide 36. If one looks at those slides, one might notice that the results on slide 35 for the best model actually peak closer to the aforementioned 8.2 cm². According to Jaime the reason for this is that the higher numbers are based on the /full/ telescope area and the lower numbers only on the area exposed by CAST's magnet bore. This may very well all be true. My personal skepticism is due to two things: 1. my general feeling that the numbers are exceptionally low. Essentially the telescope is mostly /worse/ than the ABRIXAS telescope, which just surprises me. But I'm obviously not an X-ray telescope expert. 2. more importantly, every attempt of mine to compute the effective area based on the reflectivities of the shells with parallel or realistic solar axion emission yielded numbers quite a bit higher than the data sent to me by Jaime. *One note though*: I still need to repeat the effective area calculations for the 'realistic' solar axion emission after fixing a random sampling bug.
It may very well affect the result, even though it would surprise me if it explained the difference I saw. The most likely reason is simply that my simulation is off. Possibly the hydrocarbon contamination mentioned in the slides of the CCM affects the reflectivity enough to explain the difference.
**** Window transmission and argon gas absorption - $ε_{\text{window}}(E_i), ε_{\text{gas}}(E_i)$
The detector entrance window is the next point affecting the possible signal to be detected. The windows, as explained in section [[#sec:detector:sin_window]], are made from $\SI{300}{nm}$ thick silicon nitride with a $\SI{20}{nm}$ thick aluminium coating. Their transmission is very good down to about $\SI{1}{keV}$, below which it starts to degrade rapidly. While the window also has four $\SI{500}{μm}$ thick strongbacks, which in total occlude about $\SI{22.2}{\%}$ of the center region, these are /not/ taken into account in the combined detection efficiency. Instead they are handled together with the axion image $r(x_i, y_i)$ in sec. [[#sec:limit:ingredients:raytracing]].
***** TODOs for this section [/] :noexport:
- [X] compute window transmission with =xrayAttenuation= -> Needs to be done in theory already! Redo that plot that still uses Henke data! -> Already *is* an xrayAttenuation plot! But it only shows Si3N4 and argon, no aluminum or isobutane.
**** Software efficiency and veto efficiency - $ε_{\text{software eff.}} · ε_{\text{veto eff.}}$
The software efficiency $ε_{\text{software eff.}}$ of course depends on the specific setting which is used. Its value ranges between \SIrange{80}{97}{\%}. The veto efficiencies can in principle also vary significantly depending on the choice of parameters (e.g. whether the 'line veto' uses an eccentricity cutoff or not), but as explained in sec. [[#sec:background:estimate_veto_efficiency]] the septem and line vetoes are simply considered as either active or not.
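How these efficiency factors multiply can be made concrete with a small sketch (in Python purely for convenience; the analysis code itself is Nim), using the software efficiency and the veto efficiency values quoted in the next paragraph:

#+begin_src python
# Illustrative combination of software and veto efficiencies.
# Values are those quoted in the text: FADC veto, septem+line vetoes
# combined, and an 80 % software efficiency.
eps_software = 0.80
eps_fadc = 0.98
eps_septem_line = 0.7863  # septem and line veto used together

eps_total = eps_software * eps_fadc * eps_septem_line
print(f"{eps_total:.1%}")  # 61.6%
#+end_src

Each factor applies independently, so the combined signal efficiency is simply their product.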
The FADC veto has also been fixed to a $1^{\text{st}}$ to $99^{\text{th}}$ percentile cut on the signal rise time, see sec. [[#sec:background:fadc_veto]]. As such the relevant veto efficiencies are:
\begin{align*}
ε_{\text{FADC}} &= \SI{98}{\%} \\
ε_{\text{septem}} &= \SI{83.11}{\%} \\
ε_{\text{line}} &= \SI{85.39}{\%} \\
ε_{\text{septem+line}} &= \SI{78.63}{\%}
\end{align*}
where the last one corresponds to using both the septem and the line veto at the same time. Considering, for example, the case of using these vetoes together with a software efficiency of $\SI{80}{\%}$, we see that the combined efficiency is already only about $\SI{61.6}{\%}$, a severe loss in sensitivity.
***** TODOs for this section [/] :noexport:
The table from sec. [[#sec:background:estimate_veto_efficiency]] repeated here:
| Septem veto | Line veto | Real [%] | Fake [%] |
|-------------+-----------+---------------+---------------|
| y | n | $\num{14.12}$ | $\num{83.11}$ |
| n | y | $\num{25.32}$ | $\num{85.39}$ |
| y | y | $\num{9.17}$ | $\num{78.63}$ |
**** Combined detection efficiency - $ε(E_i)$
The previous sections cover the aspects which affect the detection efficiency of the detector and thus impact the amount of signal available. Combined, they yield the detection efficiency shown in fig. [[fig:limit:limit_method:combined_detection_eff]]. As can be seen, the combined detection efficiency peaks at about $\SI{46}{\%}$ around $\SI{1.5}{keV}$, without taking into account the software and veto efficiencies. If one combines this with using all vetoes at a software efficiency of $\SI{80}{\%}$, the total detection efficiency of the detector would peak at only $\SI{28.4}{\%}$ at that energy.
#+CAPTION: The combined detection efficiency of the detector, taking into account the
#+CAPTION: telescope efficiency via the effective area, the window absorption probability
#+CAPTION: and the absorption probability in the detector gas.
#+NAME: fig:limit:limit_method:combined_detection_eff [[~/phd/Figs/limit/detection_efficiency.pdf]] ***** TODOs for this section [/] :noexport: *TODO*: Also include the version that is split up into the individual pieces somewhere! -> ?? # The old plot we had here, from the sanity checks # ~/org/Figs/statusAndProgress/limitSanityChecks/sanity_detection_eff.pdf And the second old plot [[~/phd/Figs/limit/combined_detection_efficiency.pdf]] ***** Generate plot of detection efficiency [/] :extended: :PROPERTIES: :CUSTOM_ID: sec:limit:ingredients:gen_detection_eff :END: *NOTE*: We also have [[file:~/CastData/ExternCode/TimepixAnalysis/Tools/septemboardDetectionEff/septemboardDetectionEff.nim]] nowadays for the limit calculation (to produce the CSV file including LLNL effective area). *UPDATE*: <2024-05-10 Fri 17:36> Updated the code of ~septemboardDetectionEff~ to not include a mention of the 'software eff.' in the title, as that is plain wrong. To produce the CSV file #+begin_src sh USE_TEX=true ./septemboardDetectionEff \ --outpath ~/phd/resources/ \ --plotPath ~/phd/Figs/limit/ \ --llnlEff ~/org/resources/llnl_cast_nature_jaime_data/2016_DEC_Final_CAST_XRT/EffectiveArea.txt \ --sep ' ' #+end_src note the usage of the "correct" effective area file. - [X] Well, do we need the ingredients separately? Not really right? -> No. We need the effective area (ideally we would compute it! but of course currently we cannot reproduce it :( ). So just read the extended LLNL file. - [X] Need densities of Aluminium, ... -> 2.7 g•cm⁻³ - [X] Need to update xrayAttenuation to create the plot! -> Done. - [X] NEED TO update numericalnim for interpolation! 
- [X] NEED TO update seqmath for linspace fixes
- [X] *USE 2016 FINAL EFFECTIVE AREA*
*** Average absorption depth of X-rays
:PROPERTIES:
:CUSTOM_ID: sec:limit:median_absorption_depth
:END:
In order to compute a realistic axion image based on raytracing, the plane at which to compute the image needs to be known, as the focal spot size changes significantly depending on the distance to the focal point of the X-ray optics. The beamline behind the telescope is designed such that the focal spot is $\SI{1}{cm}$ behind the entrance window. [fn:source] This is of particular importance for a gaseous detector, as the raytracing only makes sense up to the generation of a photoelectron, after which the produced primary electrons undergo diffusion. Therefore, one needs to compute the typical absorption depth of X-rays in the relevant energy range for the gas mixture used in the detector. This is most easily done with a Monte Carlo simulation taking into account the incoming X-ray flux distribution (given the solar axion flux we consider) $f(E)$, the telescope effective area $ε_{\text{LLNL}}(E)$ and the window transmission, $ε_{\ce{Si3 N4}}(E), ε_{\ce{Al}}(E)$, \[ I(E) = f(E) · ε_{\text{LLNL}}(E) · ε_{\ce{Si3 N4}}(E) · ε_{\ce{Al}}(E). \] $I(E)$ yields the correct energy distribution of expected signal X-rays. For each sampled X-ray we can then draw a conversion point based on the Beer-Lambert law, using the attenuation length at its energy, as introduced in sec. [[#sec:theory:xray_matter_gas]]. The median of all conversion points is then an estimator for the point at which to compute the axion image. Performing this calculation leads to a median conversion point of $⟨d⟩ = \SI{0.2928}{cm}$ behind the detector window, with a standard deviation of $\SI{0.4247}{cm}$ due to a long tail from higher energy X-rays.
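The sampling of conversion points just described can be sketched as follows. This is a minimal illustration (in Python for convenience), assuming a single fixed attenuation length instead of the energy-dependent absorption data and flux weighting used in the real calculation, so the resulting median is not the number quoted above:

#+begin_src python
import math
import random
import statistics

random.seed(1)

def sample_depth(att_len_cm, chamber_cm, rng=random):
    """Draw a conversion depth from the Beer-Lambert law, conditioned on
    absorption happening inside the chamber (truncated exponential)."""
    u = rng.random()
    # inverse CDF of the exponential distribution truncated at chamber_cm
    return -att_len_cm * math.log(1.0 - u * (1.0 - math.exp(-chamber_cm / att_len_cm)))

# purely illustrative numbers: 0.3 cm attenuation length, 3 cm chamber depth
depths = [sample_depth(0.3, 3.0) for _ in range(100_000)]
print(statistics.median(depths))  # close to 0.3 * ln(2) ≈ 0.208 cm
#+end_src

In the real calculation the attenuation length is looked up per sampled energy, with energies drawn from $I(E)$.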
It may be worthwhile to perform this calculation for distinct energies, to then compute a different axion image for each energy with its own effective 'depth' behind the window; for the time being however we do not. For the calculation of these numbers, see appendix [[#sec:appendix:average_depth_xrays_argon]].
[fn:source] To my knowledge there exists no technical design documentation about how the beamline was designed exactly. Jaime Ruz, who was in charge of the LLNL telescope installation, told me this is what he aligned to.
**** TODOs for this section [/] :noexport:
- [X] Focal spot in center of chamber
- [X] Updated calc monte carlo for median pos
- [X] *INSERT COMPUTATION* -> in appendix, sec. [[#sec:appendix:average_depth_xrays_argon]]
- [X] *REFERENCE IT PROPERLY*
- [ ] *EXTEND WITH REFERENCES TO THEORY CHAPTER ABOUT ABSORPTION, ATTENUATION LENGTH, ETC*
*** Raytracing axion image - $r(x_i, y_i)$
:PROPERTIES:
:CUSTOM_ID: sec:limit:ingredients:raytracing
:END:
The axion image is computed based on a raytracing Monte Carlo simulation, using TrAXer [[cite:&traxer]], written as part of this thesis. Appendix [[#sec:appendix:raytracing]] contains an introduction to raytracing techniques, details about the LLNL telescope, a verification of the raytracing results against PANTER measurements of the real telescope and details about the calculation of the axion image. Fig. [[fig:limit:ingredients:axion_image]] shows the image, computed for a Sun-Earth distance of $\SI{0.989}{AU}$ and a distance of $\SI{0.2928}{cm}$ behind the detector window, i.e. $\SI{0.7072}{cm}$ _in front_ of the focal point. Hence, the image is very slightly asymmetric along the long axis. Instead of using the raytracing image to fully characterize the axion flux including efficiency losses, we /only/ use it to define the spatial distribution [fn:why_not].
This means we rescale the full axion flux distribution -- before taking the window strongback into account -- such that it represents the fractional X-ray flux per square centimeter. That way, when we multiply it with the rest of the expression in the signal calculation eq. [[eq:limit_method_signal_si]], the result is the expected number of counts at the given position and energy per $\si{cm²}$. The window strongback is not part of the simulation, because for the position uncertainty, we need to move the axion image without moving the strongback. As such the strongback is added as part of the limit calculation based on the physical position on the chip of a given candidate. #+CAPTION: Axion image as computed using raytracing for the AGSS09 cite:agss09_chemical,agss09_new_solar solar model #+CAPTION: and under the assumption that the axion-electron coupling constant #+CAPTION: $g_{ae} = \num{1e-13}$ dominates over the axion-photon coupling $g_{aγ} = \SI{1e-12}{GeV^{-1}}$. #+CAPTION: The diagonal lines with missing flux are the detector window strongbacks. #+CAPTION: It is very slightly asymmetric, because of being $\SI{0.7}{cm}$ in front #+CAPTION: of the focal point. #+NAME: fig:limit:ingredients:axion_image [[~/phd/Figs/limit/sanity/axion_image_limit_calc_no_theta.pdf]] [fn:why_not] A big reason for this approach is that so far I have not been able to reproduce the reflectivity (and thus effective area) of the telescope to a sufficient degree. A pure raytracing approach would overestimate the amount of flux currently. **** TODOs for this section [/] :noexport: It is part of the same code base as the code producing the differential solar axion flux, as mentioned in sec. [[#sec:limit:ingredients:solar_axion_flux]]. 
- [ ] *UPDATE NUMBER TO USE 1cm FROM WINDOW* **** Generate the axion image plot with strongback :extended: In the raytracing appendix we only compute the axion image without the strongback (even though we support placing the strongback into the simulation). We could either produce the plot based on ~plotBinary~, part of the TrAXer repository, after running /with/ the strongback in the simulation, or alternatively as part of the limit calculation sanity checks. The latter is the cleaner approach, because it directly shows us the strongback is added correctly in the code where it matters. We produce it by running the ~sanity~ subcommand of ~mcmc_limit_calculation~, in particular the ~raytracing~ argument. Note that we don't need any input files, the default ones are fine, because we don't run any input related sanity checks. #+begin_src sh F_WIDTH=0.9 DEBUG_TEX=true ESCAPE_LATEX=true USE_TEX=true \ mcmc_limit_calculation sanity \ --limitKind lkMCMC \ --axionModel ~/org/resources/axionProduction/axionElectronRealDistance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv \ --axionImage ~/phd/resources/axionImages/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm.csv \ --combinedEfficiencyFile ~/org/resources/combined_detector_efficiencies.csv \ --switchAxes \ --sanityPath ~/phd/Figs/limit/sanity/ \ --raytracing #+end_src *** Computing the total signal - $s_{\text{tot}}$ :PROPERTIES: :CUSTOM_ID: sec:limit:ingredients:total_signal :END: As mentioned in sec. [[#sec:limit:method_computing_L]] in principle we need to integrate the signal function $s(E, x, y)$ over the entire chip area and all energies. However, we do not actually need to perform that integration, because we know the efficiency of our telescope and detector setup as well as the amount of flux entering the telescope. 
Therefore we compute $s_{\text{tot}}$ via \[ s_{\text{tot}}(g²_{ae}) = ∫_0^{E_{\text{max}}} f(g²_{ae}, E) · A · t · P_{a ↦ γ}(g²_{aγ}) · ε(E) \, \dd E, \] making use of the fact that the position dependent function $r(x, y)$ integrates to $\num{1}$ over the entire axion image. This allows us to precompute the integral and only rescale the result for the current coupling constant $g²_{ae}$ via \[ s_{\text{tot}}(g²_{ae}) = s_{\text{tot}}(g²_{ae,\text{ref}}) · \frac{g²_{ae}}{g²_{ae, \text{ref}}}, \] where $g²_{ae, \text{ref}}$ is the reference coupling constant for which the integral is computed initially. Similar rescaling needs to be done for the axion-photon coupling or chameleon coupling, when computing a limit for either. **** TODOs for this section [0/1] :noexport: - [X] Shortly discuss what we do 1. raytracing ignored: sums to total 1 2. take solar axion flux, integrate numerically 3. ? **** Code for the calculation of the total signal :extended: This is the implementation of the ~totalSignal~ in code. We simply circumvent the integration when calculating limits by precomputing the integral in the initialization (into ~integralBase~), taking into account the detection efficiency. From there it is just a multiplication of magnet bore, tracking time and conversion probability. #+begin_src nim proc totalSignal(ctx: Context): UnitLess = ## Computes the total signal expected in the detector, by integrating the ## axion flux arriving over the total magnet bore, total tracking time. ## ## The `integralBase` is the integral over the axion flux multiplied by the detection ## efficiency (window, gas and telescope). const areaBore = π * (2.15 * 2.15).cm² let integral = ctx.integralBase.rescale(ctx) result = integral.cm⁻²•s⁻¹ * areaBore * ctx.totalTrackingTime.to(s) * conversionProbability(ctx) #+end_src *** Background :PROPERTIES: :CUSTOM_ID: sec:limit:ingredients:background :END: The background must be evaluated at the position and energy of each cluster candidate. 
As the background is not constant in energy or position on the chip (see sec. [[#sec:background:all_vetoes_combined]]), we need a continuous description of the background rate in those dimensions. To obtain one, we start from all X-ray like clusters remaining after background rejection, see for example fig. [[sref:fig:background:cluster_center_comparison]], and construct a background interpolation. We define $b_i$ as a function of candidate position $x_i, y_i$ and energy $E_i$, \[ b_i(x_i, y_i, E_i) = \frac{I(x_i, y_i, E_i)}{W(x_i, y_i, E_i)}, \] where $I$ is an intensity defined over clusters within a range $R$ and $W$ a normalization weight. From here on we will drop the candidate suffix $i$. The arguments will be combined into vectors \[ \mathbf{x} = \vektor{ \vec{x} \\ E } = \vektor{ x \\ y \\ E }. \] The intensity $I$ is given by \[ I(\mathbf{x}) = \sum_{b ∈ \{ \mathcal{D}(\mathbf{x}_b, \mathbf{x}) \leq R \}}\mathcal{M}(\mathbf{x}_b, \mathbf{x}) = \sum_{b ∈ \{ \mathcal{D}(\mathbf{x}_b, \mathbf{x}) \leq R \} } \exp \left[ -\frac{1}{2} \mathcal{D}² / σ² \right], \] where we introduce $\mathcal{M}$ to refer to a normal distribution-like measure and $\mathcal{D}$ to a custom metric (for clarity without arguments). All background clusters $\mathbf{x}_b$ within some 'radius' $R$ contribute to the intensity $I$, weighted by their distance to the point of interest $\mathbf{x}$. The metric is given by \begin{equation*} \mathcal{D}( \mathbf{x}_1, \mathbf{x}_2) = \mathcal{D}( (\vec{x}_1, E_1), (\vec{x}_2, E_2)) = \begin{cases} (\vec{x}_1 - \vec{x}_2)² \text{ if } |E_1 - E_2| \leq R \\ ∞ \text{ if } (\vec{x}_1 - \vec{x}_2)² > R² \\ ∞ \text{ if } |E_1 - E_2| > R \end{cases} \end{equation*} with $\vec{x} = \vektor{x \\ y}$. Note first of all that this effectively describes a cylinder. Any point inside $| \vec{x}_1 - \vec{x}_2 | \leq R$ simply yields a Euclidean distance, as long as the energy difference is smaller than $R$.
Further note that the distance depends only on the separation in the x-y plane, /not/ on the energy difference. This notation also requires rescaling the energy to the common number $R$; in practice the implementation of this custom metric simply compares energies directly, with the 'height' of the cylinder in energy expressed as $ΔE$ (denoted $E_c$ in the normalization below). The values used are a radius of $R = \SI{40}{pixel}$ in the x-y plane and $ΔE = ± \SI{0.3}{keV}$ in energy. The standard deviation $σ$ of the normal distribution used for the weighting in the measure is set to $\frac{R}{3}$. The basic idea of the measure is to give the highest weight to those clusters close to the point we evaluate and to approach 0 at the edge of $R$, to avoid discontinuities in the resulting interpolation. Finally, the normalization weight $W$ is required to convert the sum in $I$ into a background rate. It is the 'volume' of our measure within the boundaries set by our metric $\mathcal{D}$: \begin{align*} W(x', y', E') &= t_B ∫_{E' - E_c}^{E' + E_c} ∫_{\mathcal{D}(\vec{x'}, \vec{x}) \leq R} \mathcal{M}(x', y') \, \dd x\, \dd y\, \dd E \\ &= t_B ∫_{E' - E_c}^{E' + E_c} ∫_{\mathcal{D}(\vec{x'}, \vec{x}) \leq R} \exp\left[ -\frac{1}{2} \mathcal{D}² / σ² \right] \, \dd x \, \dd y \, \dd E \\ &= t_B ∫_{E' - E_c}^{E' + E_c} ∫_0^R ∫_0^{2π} r \exp\left[ -\frac{1}{2} \frac{r² }{σ²} \right] \, \dd r\, \dd φ\, \dd E \\ &= t_B ∫_{E' - E_c}^{E' + E_c} -2 π \left( σ² \exp\left[ -\frac{1}{2} \frac{R²}{σ²} \right] - σ² \right) \, \dd E \\ &= -4 π t_B E_c \left( σ² \exp\left[ -\frac{1}{2} \frac{R²}{σ²} \right] - σ² \right), \\ \end{align*} where we made use of the fact that within the region of interest the metric $\mathcal{D}$ is effectively just the radius $r$ around the point we evaluate. $t_B$ is the total active background data taking time.
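The closed form of $W$ can be cross-checked by direct numerical integration. A short sketch (in Python for convenience; the parameter values here are arbitrary and only serve the comparison):

#+begin_src python
import math

def w_closed(t_B, E_c, R, sigma):
    # closed form from the text: -4π t_B E_c (σ² e^{-R²/(2σ²)} - σ²)
    return -4 * math.pi * t_B * E_c * (
        sigma**2 * math.exp(-R**2 / (2 * sigma**2)) - sigma**2)

def w_numeric(t_B, E_c, R, sigma, n=4000):
    # midpoint rule for the radial integral ∫_0^R r e^{-r²/(2σ²)} dr,
    # times 2π (angular integral) and 2 E_c t_B (energy window and time)
    dr = R / n
    radial = sum((i + 0.5) * dr
                 * math.exp(-(((i + 0.5) * dr)**2) / (2 * sigma**2)) * dr
                 for i in range(n))
    return 2 * math.pi * radial * 2 * E_c * t_B

R = 40.0              # pixel
sigma = R / 3.0
t_B, E_c = 3000.0, 0.3  # arbitrary units for this check
print(w_closed(t_B, E_c, R, sigma))
print(w_numeric(t_B, E_c, R, sigma))
#+end_src

Both evaluations agree to well below the per-mille level, confirming the analytic integration above.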
If our measure were $\mathcal{M} = 1$, meaning we would count the clusters in $\mathcal{D}(\vec{x}, \vec{x}') \leq R$, the normalization $W$ would simply be the volume of the cylinder. This yields a smooth and continuous interpolation of the background over the entire chip. However, towards the edges of the chip it underestimates the background rate, because once part of the cylinder is not contained within the chip, fewer clusters contribute. For that reason we correct for the chip edges by upscaling the value within the chip by the missing area. See appendix [[#sec:appendix:background_interpolation_chip_area]]. Fig. sref:fig:limit:background_interpolation shows an example of the background interpolation centered at $\SI{3}{keV}$, with all clusters within a radius of $\num{40}$ pixels and in an energy range from $\SIrange{2.7}{3.3}{keV}$. Fig. sref:fig:limit:interpolation_clusters shows the initial step of the interpolation, with all colored points inside the circle being clusters that are contained in $\mathcal{D} \leq R$. Their color represents the weight based on the measure $\mathcal{M}$. After normalization and calculation for each point on the chip, we get the interpolation shown in fig. sref:fig:limit:background_interpolation_example. Implementation-wise, as the lookup of the closest neighbors is in general an $\mathcal{O}(N²)$ operation for $N$ clusters, all clusters are stored in a $k\text{-d}$ tree for fast querying of clusters close to the point to be evaluated. Furthermore, because the likelihood $\mathcal{L}$ is evaluated many times for a given set of candidates to compute a limit, we cache the background interpolation values for each candidate. That way the interpolation is only computed once per candidate.
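The evaluation of the intensity $I$ can be sketched in a few lines. This is a brute-force version (in Python purely for illustration, with made-up cluster positions); the actual implementation queries the $k$-d tree instead of looping over all clusters:

#+begin_src python
import math

R = 40.0       # search radius in pixels
dE = 0.3       # half 'height' of the cylinder in keV
sigma = R / 3.0

def intensity(point, energy, clusters):
    """Brute-force evaluation of I(x): sum of gaussian weights over all
    clusters inside the cylinder (radius R in x/y, ±dE in energy)."""
    total = 0.0
    for (x, y, E) in clusters:
        d2 = (x - point[0])**2 + (y - point[1])**2
        if d2 <= R * R and abs(E - energy) <= dE:
            total += math.exp(-0.5 * d2 / sigma**2)
    return total

# made-up cluster positions (x, y, E): two contribute, one is too far
# away in x/y, one too far away in energy
clusters = [(110.0, 80.0, 3.0), (130.0, 80.0, 3.1),
            (200.0, 80.0, 3.0), (111.0, 81.0, 5.0)]
print(intensity((110.0, 80.0), 3.0, clusters))  # 1 + exp(-1.125) ≈ 1.325
#+end_src

Dividing this sum by the normalization $W$ from above turns it into a background rate.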
#+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Intensity at a point") (label "fig:limit:interpolation_clusters") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/limit/sanity/interpolation_clusters_E_3.0_keV_x_110_y_80.pdf")) (subfigure (linewidth 0.5) (caption "Interpolation") (label "fig:limit:background_interpolation_example") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/limit/sanity/normalized_interpolation_at_3.0keV_ymax_5e-05.pdf")) (caption (subref "fig:limit:interpolation_clusters") ": Calculation of intensity " ($ "I") " at the center of the red circle. Black crosses indicate all background clusters. The red circle indicates cutoff " ($ "R") " in the x-y plane. Only clusters with colored dots inside the circle are within " ($ "\\SIrange{2.70}{3.3}{keV}") ". Their color is the weight based on the gaussian measure " ($ "\\mathcal{M}") ". " (subref "fig:limit:background_interpolation_example") ": Example of the resulting background interpolation at " ($ (SI "3" "keV")) " computed over the entire chip. A smooth, correctly normalized interpolation is obtained.") (label "fig:limit:background_interpolation")) #+end_src **** Notes about this section [2/9] :noexport: - [X] *SHOW RESULT OF INTEGRAL, INCLUDE SAGEMATH CALC* - [ ] *VERIFY IT IS ACTUALLY CORRECT LIKE THIS RIGHT NOW, i.e. the math notation* - [ ] *THINK ABOUT* whether to move missing circle segment to the appendix, possibly. - [X] *REFERNECE k-d TREE* -> Not even sure what there is to reference. - [X] *CLUSTER PLOT HERE AGAIN?* -> Sort of. Figures: 1. [X] background clusters -> They are part of the background rate section -> Partially in plot below. Note though that we don't have the exact set of clusters that are used in e.g. the 95% case etc. 2. [X] (optional) plot showing "selection" of clusters based on =queryBallPoint=? I.e. show all clusters in grey, radius & then in color those that are in the radius? -> Done 3. 
[-] background "interpolation" based purely on query ball point of raw data, at a slice of energy. -> not important enough 4. in a facet with 3 show same after normalization? 5. [-] show effect of area cutoff correction. Also results in a "final" background rate interpolated. -> Also not that relevant. 6. [X] interpolation at one slice - [ ] About edge correction: - [ ] Maybe make a quick schematic of Inkscape showing the different areas A, B, C, D, E, F? -> *If* we keep this, that would be very valuable. - [ ] Maybe take most of the description out of the regular thesis? -> Ding ding ding. Appendix. Highlight: So what? This allows us to evaluate the background rate correctly on the full chip! Generic ALPs can be studied this way, as we don't have to manually define regions on the chip with specific backgrounds etc.! One of the fundamental points about making this whole procedure generic. - [X] explain using k-d tree to efficiently look up "neighbors" at any point (x, y, E). In particular explain how energy works. Not taken into account in distance aside from whether inside. So gaussian weighting only in x/y. - [X] use number & distance to these points to compute a weighted number of elements in desired "radius" -> Explained - [X] Explain that energy is rescaled according to a the "radius" of the metric - [X] potentially rescale number based on area cut off due to edges of the chip. how does this work. -> Just need to rephrase it. - [X] renormalize from an effective "number" of clusters to a rate. . how does rescaling work. -> Done - [X] explain how the integration works etc, copy from sections [[sec:correct_inter_cutoff]] and [[sec:limit:gaussian_weight_normalization]] in status. -> Both sections copied now below. - [X] *INSERT CROSSES FOR CLUSTERS IN THE ENERGY RANGE!* -> Done as a separate plot! - [X] *TODO*: check the integration of the gaussian weight again. Is that really correct??!! Ahh, it might be correct. 
What we try to do is not to compute anything related to the actual neighbors found in the radius, but rather to get the "equivalent area" of the weighted data! - [X] *IMPORTANT:* Our knowledge of the random coincidence rate of the septem & line veto implies that the *time* used to calculate the tracking time and background time must be modified by that factor. So not only a dead time in the *tracking* part, but also for the *background* part! -> No, I don't think this is true. Our background rate is valid as is. This is the same as the thought of whether a background rate should be corrected for by the signal efficiency. This is never done. **** Generate the interpolation figure :extended: :PROPERTIES: :CUSTOM_ID: sec:limit:gen_background_interpolation_plots :END: Sanity check for background interpolation: #+begin_src sh F_WIDTH=0.5 DEBUG_TEX=true ESCAPE_LATEX=true USE_TEX=true \ mcmc_limit_calculation sanity \ --limitKind lkMCMC \ --axionModel ~/org/resources/axionProduction/axionElectronRealDistance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv \ --axionImage ~/phd/resources/axionImages/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm.csv \ --combinedEfficiencyFile ~/org/resources/combined_detector_efficiencies.csv \ --switchAxes \ --sanityPath ~/phd/Figs/limit/sanity/ \ --backgroundInterp #+end_src **** Sampling of background interpolation :noexport: Take interpolation. For sampling purposes we need to sample from background according to rate. How? Take a grid in x, y, E of N grid cells. Take background in those volumes & normalize to # counts in tracking time. Then Poisson sample from that as mean. Sample an (x, y, E) "position" in that cube. 10x10x20 ? cells used. Plot of the cells, normalized to counts. **** STARTED Homogeneous background for energies > 2 keV [/] :extended: :PROPERTIES: :CUSTOM_ID: sec:correct_inter_cutoff :END: - [ ] finish the final explanation part!! - [ ] Remove nowadays understood aspects of below! 
Performing the interpolation for all events above $\SI{2}{keV}$ within a reasonably large radius has one specific problem. The plot in fig. [[fig:background_interpolation_larger_2keV]] shows what the interpolation for > 2 keV looks like for a radius of 80 pixels. It is very evident that the background *appears* higher in the center area than at the edges / corners of the chip. The reason for this is pretty obvious once one thinks about it more deeply. Namely, an event with a significant energy that went through a decent amount of diffusion cannot have its cluster center (given that it's X-ray like here) actually close to the edge / corner of the detector. On average its center will be half the diffusion radius *away* from the edges. If we then interpolate based on the cluster center information, we end up with a typical boundary problem, i.e. clusters near the edges are underrepresented.
#+CAPTION: Background interpolation for 2017/18 X-ray like data for all clusters above \SI{2}{keV}
#+CAPTION: using a radius of 80 pixels. It is evident that the background in the center appears higher
#+CAPTION: than at the edges, despite expecting either the opposite or constant background. The reason is
#+CAPTION: the cutoff at the edges, so no contributions can come from outside, plus diffusion causing
#+CAPTION: cluster centers to always be a distance away from the edges.
#+NAME: fig:background_interpolation_larger_2keV
[[~/org/Figs/statusAndProgress/background_interpolation/background_interp_larger_2keV_radius_80.pdf]]
Now, what is a good solution for this problem? In principle we can just say "the background is constant over the chip at these energies above 2 keV" and neglect the whole interpolation here, i.e. set it constant. If we wish to keep an interpolation around, we will have to modify the data that we use to create the actual 2D interpolator. Of course the same issue is present in the < 2 keV dataset to an extent. The question there is: does it matter?
Essentially, the statement about having *less* background there is factually true. But only to the extent of diffusion pushing the centers away from the edges, *not* due to the part of the search radius that lies outside the chip (where no data can exist) contributing nothing. Ideally, we correct for this by scaling all points whose search radius extends beyond the chip by the ratio of the full circle area to the area that remains within the chip. That way we pretend that there is an 'equal' amount of background found in this area in the full radius around the point. How? The trigonometry for that isn't fully trivial, but also not super hard. Keep in mind the area of a [[https://en.wikipedia.org/wiki/Circular_segment][circle segment]]: \[ A = \frac{r²}{2} \left(ϑ - \sin ϑ\right) \] where $r$ is the radius of the circle and $ϑ$ the angle that cuts off the circle. However, in the general case we need to know the area of a circle that is cut off from 2 sides. By subtracting the corresponding areas of circle segments for each of the lines that cut something off, we remove too much. So we need to add back: - another circle segment, of the angle between the two angles, given by the twice counted area - the area of the triangle with the two sides given by $R - r'$ in length, where $r'$ is the distance that is cut off from the circle. In combination the area remaining for a circle cut off by two (orthogonal, fortunately) lines is: \[ E = F - A - B + C + D \] where: - $F$: the total area of the circle - $A$: the area of the first circle segment - $B$: the area of the second circle segment - $C$: the area of the triangle built by the two line cutoffs: \[ C = \frac{r' r''}{2} \] with $r'$ as defined above for cutoff A and $r''$ for cutoff B.
- $D$: the area of the circle segment given by the angle between the two cutoff lines touching the circle edge:
  \[ D = \frac{r²}{2} \left( α - \sin α \right) \]
  where $α$ is
  \[ α = \frac{π}{2} - ϑ_1 - ϑ_2 \]
  and $ϑ_{1,2}$ are related to the angles $ϑ$ needed to compute each circle segment via
  \[ ϑ' = \frac{π - ϑ}{2}, \]
  i.e. $ϑ_1$ and $ϑ_2$ are the $ϑ'$ of the two segments.

This is implemented as a prototype in: [[file:~/org/Misc/circle_segments.nim]]

*UPDATE* <2023-03-01 Wed 18:02>: the code now also lives in TPA in the ~NimUtil/helpers~ directory!

Next step: incorporate this into the interpolation to re-weight values near the edges and corners.

**** Normalization of gaussian weighted k-d tree background interpolation :extended:
:PROPERTIES:
:CUSTOM_ID: sec:limit:gaussian_weight_normalization
:END:

The background interpolation described above includes multiple steps required to finalize it. As mentioned, we start by building a k-d tree on the data using a custom metric:
#+begin_src nim
proc distance(metric: typedesc[CustomMetric], v, w: Tensor[float]): float =
  doAssert v.squeeze.rank == 1
  doAssert w.squeeze.rank == 1
  let xyDist = pow(abs(v[0] - w[0]), 2.0) + pow(abs(v[1] - w[1]), 2.0)
  let zDist = pow(abs(v[2] - w[2]), 2.0)
  if zDist <= Radius * Radius:
    result = xyDist
  else:
    result = zDist
#+end_src
or in pure math: let $R$ be a cutoff value.

\begin{equation}
\mathcal{D}\left( (\vec{x}_1, E_1), (\vec{x}_2, E_2) \right) =
\begin{cases}
(\vec{x}_1 - \vec{x}_2)² & \text{if } |E_1 - E_2| \leq R \\
(E_1 - E_2)² & \text{otherwise}
\end{cases}
\end{equation}

where we make sure to scale the energies such that a given value of the radius covers the same range in the Euclidean x/y geometry as it does in energy. This essentially creates a cylinder. In words: we use the distance in x and y as the actual distance, unless the distance in energy is larger than the allowed cutoff, in which case we return the energy distance.
This simply assures that:
- if two clusters are close in energy, but farther apart in Euclidean distance than the allowed cutoff, they will be removed later
- if two clusters are too far apart in energy they will be removed, despite possibly being close in x/y
- otherwise the distance in energy is *irrelevant*.

The next step is to compute the actual background value associated with each $(x, y, E)$ point. In the most naive approach (as presented in the first few plots in the section above), we can associate with each point the number of clusters found within a certain radius (including or excluding the energy dimension). Treating each cluster within the radius as a single count regardless of its distance (pure nearest neighbor counting) is problematic, because the distance of course matters. Our choice is therefore a weighted nearest neighbor approach: we weight each neighbor with a normal distribution centered at the location at which we want to compute the background. So, in code, our total weight for an individual point is:
#+begin_src nim
template compValue(tup: untyped, byCount = false, energyConst = false): untyped =
  if byCount:
    tup.idx.size.float # for the pure nearest neighbor case
  else:
    # weigh by distance using a Gaussian with the radius being 3 sigma
    let dists = tup[0]
    var val = 0.0
    for d in items(dists):
      val += smath.gauss(d, mean = 0.0, sigma = radius / 3.0)
    val
#+end_src
where =tup= contains the distances to all neighbors found within the desired radius. In math this means we first modify our distance measure $\mathcal{D}$ from above to:

\begin{equation}
\mathcal{D'}\left( (\vec{x}_1, E_1), (\vec{x}_2, E_2) \right) =
\begin{cases}
(\vec{x}_1 - \vec{x}_2)² & \text{if } |E_1 - E_2| \leq R \text{ and } (\vec{x}_1 - \vec{x}_2)² \leq R² \\
0 & \text{if } (\vec{x}_1 - \vec{x}_2)² > R² \\
0 & \text{if } |E_1 - E_2| > R
\end{cases}
\end{equation}

where a value of $0$ is shorthand for the cluster being dropped from the sum entirely (such points do not contribute). This incorporates the nearest neighbor property of dropping everything outside of the radius, either in x/y or in (scaled) energy.
The (unnormalized) interpolated background value at an evaluation point $(\vec{x}_e, E_e)$ is then:

\begin{align*}
I(\vec{x}_e, E_e) &= Σ_i \exp \left[ -\frac{1}{2} \mathcal{D}^{'2}\left((\vec{x}_e, E_e), (\vec{x}_i, E_i)\right) / σ² \right] \\
 &= Σ_i \exp \left[ -\frac{1}{2} \mathcal{D}^{'2} / σ² \right] \quad \text{(arguments dropped for clarity)} \\
 &= Σ_i \mathcal{M}(\vec{x}_i, E_i)
\end{align*}

where we introduce $\mathcal{M}$ to refer to the measure we use, $i$ runs over all clusters ($\mathcal{D'}$ takes care of only letting points inside the radius contribute) and the subscript $e$ stands for the evaluation point. $σ$ is the sigma of the (non-normalized!) Gaussian distribution for the weights, which is set to $σ = \frac{R}{3}$.

This gives us a valid interpolated value for each possible pair of position and energy. However, these values are neither normalized nor corrected for the cutoff of the radius once it is no longer fully "on" the chip. The correction for that cutoff is done via the area of circle segments, as described in the previous section [[#sec:correct_inter_cutoff]]. The normalization is described next.

For the case of unweighted points (counting every cluster in the 'cylinder'), it would simply be done by dividing by:
- the background data taking time
- the energy range of interest
- the *volume of the cylinder*

But for a weighted distance measure $\mathcal{D'}$, we need to perform the integration over the measure (which we do implicitly in the non-weighted case by taking the area: each point simply contributes with 1, resulting in the area of the circle). The necessary integration over the energy reduces to simply dividing by the energy range (the 'cylinder height' part, if one will), as everything is constant in the energy direction, i.e. there is no weighting along that axis. Let's look at the trivial case first, to understand what we are actually doing when normalizing an unweighted measure by the area.
The measure in the unweighted case is thus:

\[ \mathcal{M}(x, y) = 1 \]

Now, we need to integrate this measure over the region of interest around a point (i.e. from a point over the full radius that we consider):

\begin{align*}
W &= \int_{x² + y² < R²} \mathcal{M}(x, y)\, \mathrm{d}x\, \mathrm{d}y \\
  &= \int_{x² + y² < R²} 1\, \mathrm{d}x\, \mathrm{d}y \\
  &= \int_0^{2 π} \int_0^R r\, \mathrm{d}r\, \mathrm{d}φ \\
  &= \int_0^{2 π} \frac{1}{2} R²\, \mathrm{d}φ \\
  &= 2 π \frac{1}{2} R² \\
  &= π R²
\end{align*}

where the additional $r$ after the transformation from Cartesian to polar coordinates comes from the Jacobi determinant (see https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant#Example_2:_polar-Cartesian_transformation as a reminder). For this reason it is important to start the derivation in Cartesian coordinates, as otherwise we would miss that crucial factor! As expected, the result is simply the area of a circle with radius $R$, matching our intuition for the trivial measure.
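The polar integral above can be cross-checked numerically. A short Python sketch (illustrative only, not part of the analysis code), using the midpoint rule in $r$:

```python
import math

def area_unweighted(R, n=2000):
    # Integrate the constant measure M = 1 over the disc x² + y² < R²
    # in polar coordinates. The extra factor r is the Jacobian of the
    # Cartesian → polar transformation.
    dr = R / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr   # midpoint rule in r
        total += r * dr      # ∫ M · r dr with M = 1
    return 2.0 * math.pi * total  # φ integration is trivial

R = 100.0
print(area_unweighted(R), math.pi * R**2)  # both ≈ 31415.93
```

Since the integrand $r$ is linear, the midpoint rule reproduces $π R²$ to floating point precision.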
For our actual measure,

\[ \mathcal{M}(\vec{x}_i, E_i) = \exp \left[ - \frac{1}{2} \mathcal{D}^{'2}((\vec{x}_e, E_e), (\vec{x}_i, E_i)) / σ² \right], \]

the procedure follows in the exact same fashion (we leave out the arguments to $\mathcal{D'}$ in what follows):

\begin{align*}
W &= \int_{x² + y² < R²} \mathcal{M}(x, y)\, \mathrm{d}x\, \mathrm{d}y \\
  &= \int_{x² + y² < R²} \exp \left[ - \frac{1}{2} \mathcal{D}^{'2} / σ² \right] \, \mathrm{d}x\, \mathrm{d}y \\
  &= \int_0^{2 π} \int_0^R r \exp \left[ - \frac{1}{2} r² / σ² \right]\, \mathrm{d}r\, \mathrm{d}φ
\end{align*}

which can be integrated by hand using standard procedures, or simply with SageMath:
#+begin_src sage :eval no-export
sage: r = var('r') # radial variable we integrate over
sage: σ = var('σ') # constant sigma
sage: φ = var('φ') # angle variable we integrate over
sage: R = var('R') # radius up to which we integrate
sage: assume(R > 0) # required for sensible integration
sage: f = exp(-r ** 2 / (sqrt(2) * σ) ** 2) * r
sage: result = integrate(integrate(f, r, 0, R), φ, 0, 2 * pi)
sage: result
-2*pi*(σ^2*e^(-1/2*R^2/σ^2) - σ^2)
sage: result(R = 100, σ = 33.33333).n()
6903.76027055093
#+end_src

The final normalization in code:
#+begin_src nim
proc normalizeValue*(x, radius: float, energyRange: keV, backgroundTime: Hour): keV⁻¹•cm⁻²•s⁻¹ =
  let pixelSizeRatio = 65536 / (1.4 * 1.4).cm² # pixels per chip area
  let σ = Sigma
  ## The area comes from the integration of the Gaussian weighting with `sagemath`, see the notes.
  let area = -2*π*(σ*σ * exp(-1/2 * radius*radius / (σ*σ)) - (σ*σ))
  let energyRange = energyRange * 2.0 # energy range around the point (hence factor 2)
  let factor = area / pixelSizeRatio * # area in cm²
               energyRange * backgroundTime.to(Second)
  result = x / factor
#+end_src

**** Error propagation of background interpolation :extended:
:PROPERTIES:
:CUSTOM_ID: sec:background_interpolation_uncertainty
:END:

For obvious reasons the background interpolation suffers from statistical uncertainties.
Ideally, we compute the resulting error from the statistical uncertainty of the points by propagating the errors through the whole computation. That is from the nearest neighbor lookup, through the sum of the weighted distance calculation and then the normalization. We'll use [[https://github.com/SciNim/Measuremancer]].
#+begin_src nim :tangle /tmp/background_interpolation_error_propagation.nim
import datamancer, measuremancer, unchained, seqmath
#+end_src

Start by importing some data taken from running the main program. These are the distances at some energy at pixel (127, 127) to the nearest neighbors.
#+begin_src nim :tangle /tmp/background_interpolation_error_propagation.nim
when isMainModule:
  const data = """
dists
32.14
31.89
29.41
29.12
27.86
21.38
16.16
16.03
"""
#+end_src

Parse and look at it:
#+begin_src nim :tangle /tmp/background_interpolation_error_propagation.nim
when isMainModule:
  var df = parseCsvString(data)
  echo df
#+end_src

#+RESULTS:
#+begin_example
Dataframe with 1 columns and 8 rows:
     Idx    dists
  dtype:    float
       0    32.14
       1    31.89
       2    29.41
       3    29.12
       4    27.86
       5    21.38
       6    16.16
       7    16.03
#+end_example

Now import the required transformations of the code, straight from the limit code (we will remove all unnecessary bits).
First get the radius and sigma that we used here:
#+begin_src nim :tangle /tmp/background_interpolation_error_propagation.nim
when isMainModule:
  let Radius = 33.3
  let Sigma = Radius / 3.0
  let EnergyRange = 0.3.keV
#+end_src

and now the functions:
#+begin_src nim :tangle /tmp/background_interpolation_error_propagation.nim
template compValue(tup: untyped, byCount = false): untyped =
  if byCount:
    tup.size.float
  else:
    # weigh by distance using a Gaussian with the radius being 3 sigma
    let dists = tup # `NOTE:` not a tuple here anymore
    var val = 0.0
    for d in items(dists):
      val += smath.gauss(d, mean = 0.0, sigma = Sigma)
    val

defUnit(cm²)
proc normalizeValue*[T](x: T, radius, σ: float, energyRange: keV, byCount = false): auto =
  let pixelSizeRatio = 65536 / (1.4 * 1.4).cm² # pixels per chip area
  var area: float
  if byCount: # case for a regular circle with weights 1
    area = π * radius * radius # area in pixels
  else:
    area = -2*Pi*(σ*σ * exp(-1/2 * radius*radius / (σ*σ)) - (σ*σ))
  let energyRange = energyRange * 2.0 # energy range around the point (hence factor 2)
  let backgroundTime = 3300.h.to(Second)
  let factor = area / pixelSizeRatio * # area in cm²
               energyRange * backgroundTime
  result = x / factor
#+end_src

=compValue= computes the weighted (or unweighted) distance measure and =normalizeValue= computes the correct normalization based on the radius. The associated area is obtained using the integration shown in the previous section (using SageMath). Let's check if we can run the computation and see what we get:
#+begin_src nim :tangle /tmp/background_interpolation_error_propagation.nim
when isMainModule:
  let dists = df["dists", float]
  echo "Weighted value   : ", compValue(dists)
  echo "Normalized value : ", compValue(dists).normalizeValue(Radius, Sigma, EnergyRange)
#+end_src

#+RESULTS:
:RESULTS:
Weighted value   : 0.9915005015651535
Normalized value : 6.07539968974599e-06 CentiMeter⁻²•Second⁻¹
:END:

Values that seem reasonable.
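The weighted value can be reproduced independently of the Nim code. A small Python sketch (illustrative only, with the distances hard-coded from the table above) of the unnormalized Gaussian weighting:

```python
import math

# distances to the nearest neighbors from the example above
dists = [32.14, 31.89, 29.41, 29.12, 27.86, 21.38, 16.16, 16.03]
Radius = 33.3
Sigma = Radius / 3.0

def gauss(x, mu=0.0, sigma=1.0):
    # unnormalized Gaussian weight (amplitude 1 at x = mu)
    arg = (x - mu) / sigma
    return math.exp(-0.5 * arg * arg)

val = sum(gauss(d, 0.0, Sigma) for d in dists)
print(val)  # ≈ 0.9915, matching the Nim output above
```

Note that the Gaussian here is *not* normalized: a cluster sitting exactly on the evaluation point contributes a weight of 1.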
To compute the associated errors, we need to promote the functions we use above to work with =Measurement[T]= objects. =normalizeValue= we can just make generic (DONE). For =compValue= we still need a Gaussian implementation (note: we don't associate errors with $μ$ and $σ$ for now; we might add that later).

The logic for the error calculation, i.e. getting an uncertainty from the set of clusters in the search radius, is somewhat subtle. Consider the unweighted case: if we have $N$ clusters, we associate an uncertainty of $ΔN = √N$ with that number. Why is that? Because

\[ N = Σ_i (1 ± 1) =: f \]

leads to precisely that result using linear error propagation! Each count has an uncertainty of $√1 = 1$. Computing the uncertainty of a single count just yields $√((∂f/∂N_i)² ΔN_i²) = ΔN_i$. Doing the same for the *sum* of all elements gives

\[ ΔN = √( Σ_i (∂f/∂N_i)²(ΔN_i)² ) = √( Σ_i 1² ) = √N, \]

precisely what we expect. We can then just treat the Gaussian weighting in the same way, namely

\[ f = Σ_i (1 ± 1) · \text{gauss}(\vec{x} - \vec{x}_i, μ = 0, σ) \]

and propagate the errors in the same fashion. This has the effect that points further away contribute less than those closer! This is implemented here (=Measuremancer= makes this easy):
#+begin_src nim :tangle /tmp/background_interpolation_error_propagation.nim
proc gauss*[T](x: T, μ, σ: float): T =
  let
    arg = (x - μ) / σ
    res = exp(-0.5 * arg * arg)
  result = res

proc compMeasureValue*[T](tup: Tensor[T], σ: float, byCount: bool = false): auto =
  if byCount:
    let dists = tup # only a tuple in the real interpolation code
    let num = tup.size.float
    var val = 0.0 ± 0.0
    for d in items(dists):
      val = val + (1.0 ± 1.0) * 1.0 # the last `* 1.0` represents the weight, which here is one
    doAssert val == (num ± sqrt(num)) # sanity check that our math works out
    val
  else:
    # weigh by distance using a Gaussian with the radius being 3 sigma
    let dists = tup # `NOTE:` not a tuple here anymore
    var val = 0.0 ± 0.0
    for d in items(dists):
      let gv = (1.0 ± 1.0) * gauss(d, μ = 0.0, σ = σ) # equivalent to unweighted, but with Gaussian weights
      val = val + gv
    val
#+end_src

Time to take our data and plug it into the two procedures:
#+begin_src nim :tangle /tmp/background_interpolation_error_propagation.nim
when isMainModule:
  let dists = df["dists", float]
  echo "Weighted values (byCount)  : ", compMeasureValue(dists, σ = Sigma, byCount = true)
  echo "Normalized value (byCount) : ", compMeasureValue(dists, σ = Sigma, byCount = true)
    .normalizeValue(Radius, Sigma, EnergyRange, byCount = true)
  echo "Weighted values (gauss)    : ", compMeasureValue(dists, σ = Sigma, byCount = false)
  echo "Normalized value (gauss)   : ", compMeasureValue(dists, σ = Sigma, byCount = false)
    .normalizeValue(Radius, Sigma, EnergyRange)
#+end_src

#+RESULTS:
:RESULTS:
Weighted values (byCount)  : 8.00 ± 2.83
Normalized value (byCount) : 1.08e-05 ± 3.81e-06 CentiMeter⁻²•Second⁻¹
Weighted values (gauss)    : 0.992 ± 0.523
Normalized value (gauss)   : 6.08e-06 ± 3.20e-06 CentiMeter⁻²•Second⁻¹
:END:

The result mostly makes sense: in the Gaussian case we effectively have "less" statistics, because events further away are weighted down. The result is a larger relative error in the weighted case. *Note:* in this particular case the computed background rate is significantly lower (but almost within 1σ!) than in the unweighted case. This is expected and essentially demonstrates the correctness of the uncertainty: the distances of the points in the input data are simply quite large for all values.
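The same linear error propagation can be written out by hand as a cross-check: each cluster enters as $(1 ± 1) · w_i$, so the total is $Σ w_i$ with error $√(Σ w_i²)$. A small Python sketch (illustrative only, reusing the distances from above):

```python
import math

dists = [32.14, 31.89, 29.41, 29.12, 27.86, 21.38, 16.16, 16.03]
Sigma = 33.3 / 3.0

def gauss(x, sigma):
    # unnormalized Gaussian weight
    return math.exp(-0.5 * (x / sigma) ** 2)

# Each cluster enters as (1 ± 1) · w_i, so linear error propagation of
# f = Σ_i (1 ± 1) · w_i yields value = Σ w_i and error = √(Σ w_i²).
weights = [gauss(d, Sigma) for d in dists]
value = sum(weights)
error = math.sqrt(sum(w * w for w in weights))
print(f"{value:.3f} ± {error:.3f}")  # ≈ 0.992 ± 0.523, as from Measuremancer

# unweighted case: all w_i = 1 recovers the Poisson-like N ± √N
n = len(dists)
print(f"{n} ± {math.sqrt(n):.2f}")  # 8 ± 2.83
```

This confirms that the Measuremancer result is exactly the "square root of the sum of squared weights", i.e. distant clusters contribute less to both the value and the uncertainty.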
***** Random sampling to simulate background uncertainty We'll do a simple Monte Carlo experiment to assess the uncertainties from a statistical point of view and compare with the results obtained in the section above. First do the sampling of backgrounds part: #+begin_src nim :tangle /tmp/background_uncertainty_mc.nim import std / [random, math, strformat, strutils] const outDir = "/home/basti/org/Figs/statusAndProgress/background_interpolation/uncertainty" import ./sampling_helpers proc sampleBackgroundClusters(rng: var Rand, num: int, sampleFn: (proc(x: float): float) ): seq[tuple[x, y: int]] = ## Samples a number `num` of background clusters distributed over the whole chip. result = newSeq[tuple[x, y: int]](num) # sample in `y` from function let ySamples = sampleFrom(sampleFn, 0.0, 255.0, num) for i in 0 ..< num: result[i] = (x: rng.rand(255), y: ySamples[i].round.int) import ggplotnim, sequtils proc plotClusters(s: seq[tuple[x, y: int]], suffix: string) = let df = toDf({"x" : s.mapIt(it.x), "y" : s.mapIt(it.y)}) let outname = &"{outDir}/clusters{suffix}.pdf" ggplot(df, aes("x", "y")) + geom_point(size = some(1.0)) + ggtitle(&"Sampling bias: {suffix}. 
Num clusters: {s.len}") + ggsave(outname) import unchained defUnit(keV⁻¹•cm⁻²•s⁻¹) proc computeNumClusters(backgroundRate: keV⁻¹•cm⁻²•s⁻¹, energyRange: keV): float = ## computes the number of clusters we need to simulate a certain background level let goldArea = 5.mm * 5.mm let area = 1.4.cm * 1.4.cm let time = 3300.h # let clusters = 10000 # about 10000 clusters in total chip background result = backgroundRate * area * time.to(Second) * energyRange import arraymancer, measuremancer import ./background_interpolation_error_propagation import numericalnim proc compClusters(fn: (proc(x: float): float), numClusters: int): float = proc hFn(x: float, ctx: NumContext[float, float]): float = (numClusters / (256.0 * fn(127.0))) * fn(x) result = simpson(hfn, 0.0, 256.0) doAssert almostEqual(hFn(127.0, newNumContext[float, float]()), numClusters / 256.0) proc computeToy(rng: var Rand, numClusters: int, radius, σ: float, energyRange: keV, sampleFn: (proc(x: float): float), correctNumClusters = false, verbose = false, suffix = ""): tuple[m: Measurement[keV⁻¹•cm⁻²•s⁻¹], num: int] = var numClusters = numClusters if correctNumClusters: numClusters = compClusters(sampleFn, numClusters).round.int let clusters = rng.sampleBackgroundClusters(numClusters.int, sampleFn) if verbose: plotClusters(clusters, suffix) # generate a kd tree based on the data let tTree = stack([clusters.mapIt(it.x.float).toTensor, clusters.mapIt(it.y.float).toTensor], axis = 1) let kd = kdTree(tTree, leafSize = 16, balancedTree = true) let tup = kd.queryBallPoint([127.float, 127.float].toTensor, radius) let m = compMeasureValue(tup[0], σ = radius / 3.0, byCount = false) .normalizeValue(radius, σ, energyRange) let num = tup[0].len if verbose: echo "Normalized value (gauss) : ", m, " based on ", num, " clusters in radius" result = (m: m, num: num) let radius = 33.3 let σ = radius / 3.0 let energyRange = 0.3.keV let num = computeNumClusters(5e-6.keV⁻¹•cm⁻²•s⁻¹, energyRange * 2.0).round.int var rng = initRand(1337) 
import sugar # first look at / generate some clusters to see sampling works discard rng.computeToy(num, radius, σ, energyRange, sampleFn = (x => 1.0), verbose = true, suffix = "_constant_gold_region_rate") # should be the same number of clusters! discard rng.computeToy(num, radius, σ, energyRange, sampleFn = (x => 1.0), correctNumClusters = true, verbose = true, suffix = "_constant_gold_region_rate_corrected") # now again with more statistics discard rng.computeToy(100 * num, radius, σ, energyRange, sampleFn = (x => 1.0), verbose = true, suffix = "_constant") # should be the same number of clusters! discard rng.computeToy(100 * num, radius, σ, energyRange, sampleFn = (x => 1.0), correctNumClusters = true, verbose = true, suffix = "_constant_corrected") # linear discard rng.computeToy(100 * num, radius, σ, energyRange, sampleFn = (x => x), verbose = true, suffix = "_linear") # should be the same number of clusters! discard rng.computeToy(100 * num, radius, σ, energyRange, sampleFn = (x => x), correctNumClusters = true, verbose = true, suffix = "_linear_corrected") # square discard rng.computeToy(100 * num, radius, σ, energyRange, sampleFn = (x => x*x), verbose = true, suffix = "_square") # number of clusters should differ! discard rng.computeToy(100 * num, radius, σ, energyRange, sampleFn = (x => x*x), correctNumClusters = true, verbose = true, suffix = "_square_corrected") # exp discard rng.computeToy(100 * num, radius, σ, energyRange, sampleFn = (x => exp(x/64.0)), verbose = true, suffix = "_exp64") # number of clusters should differ! 
discard rng.computeToy(100 * num, radius, σ, energyRange,
                       sampleFn = (x => exp(x/64.0)),
                       correctNumClusters = true,
                       verbose = true,
                       suffix = "_exp64_corrected")

proc performToys(nmc: int, numClusters: int,
                 sampleFn: (proc(x: float): float),
                 suffix: string,
                 correctNumClusters = true): DataFrame =
  var numClusters = numClusters
  if correctNumClusters:
    echo "Old number of clusters: ", numClusters
    numClusters = compClusters(sampleFn, numClusters).round.int
    echo "Corrected number of clusters: ", numClusters
  var data = newSeq[Measurement[keV⁻¹•cm⁻²•s⁻¹]](nmc)
  var clustersInRadius = newSeq[int](nmc)
  for i in 0 ..< nmc:
    if i mod 500 == 0:
      echo "Iteration: ", i
    let (m, numInRadius) = rng.computeToy(numClusters, radius, σ, energyRange,
                                          sampleFn = sampleFn)
    data[i] = m
    clustersInRadius[i] = numInRadius
  let df = toDf({ "values" : data.mapIt(it.value.float),
                  "errors" : data.mapIt(it.error.float),
                  "numInRadius" : clustersInRadius })
  ggplot(df, aes("values")) +
    geom_histogram(bins = 500) +
    ggsave(&"{outDir}/background_uncertainty_mc_{suffix}.pdf")
  ggplot(df, aes("errors")) +
    geom_histogram(bins = 500) +
    ggsave(&"{outDir}/background_uncertainty_mc_errors_{suffix}.pdf")
  if numClusters < 500:
    ggplot(df, aes("numInRadius")) + geom_bar() +
      ggsave(&"{outDir}/background_uncertainty_mc_numInRadius_{suffix}.pdf")
  else:
    ggplot(df, aes("numInRadius")) + geom_histogram(bins = clustersInRadius.max) +
      ggsave(&"{outDir}/background_uncertainty_mc_numInRadius_{suffix}.pdf")
  let dfG = df.gather(["values", "errors"], key = "Type", value = "Value")
  ggplot(dfG, aes("Value", fill = "Type")) +
    geom_histogram(bins = 500, position = "identity", hdKind = hdOutline, alpha = some(0.5)) +
    ggtitle(&"Sampling bias: {suffix}. NMC = {nmc}, numClusters = {numClusters}") +
    ggsave(&"{outDir}/background_uncertainty_mc_combined_{suffix}.pdf")
  result = dfG
  result["sampling"] = suffix

proc performAllToys(nmc, numClusters: int, suffix = "", correctNumClusters = true) =
  var df = newDataFrame()
  df.add performToys(nmc, numClusters, (x => 1.0), "constant", correctNumClusters)
  df.add performToys(nmc, numClusters, (x => x), "linear", correctNumClusters)
  df.add performToys(nmc, numClusters, (x => x*x), "square", correctNumClusters)
  df.add performToys(nmc, numClusters, (x => exp(x/64.0)), "exp_x_div_64", correctNumClusters)
  #df = if numClusters < 100: df.filter(f{`Value` < 2e-5}) else: df
  let suffixClean = suffix.strip(chars = {'_'})
  let pltVals = ggplot(df, aes("Value", fill = "sampling")) +
    facet_wrap("Type") +
    geom_histogram(bins = 500, position = "identity", hdKind = hdOutline, alpha = some(0.5)) +
    prefer_rows() +
    ggtitle(&"Comp diff. sampling biases, {suffixClean}. NMC = {nmc}, numClusters = {numClusters}")
  #ggsave(&"{outDir}/background_uncertainty_mc_all_samplers{suffix}.pdf", height = 600, width = 800)
  let width = if numClusters < 100: 800.0 else: 1000.0
  # stacked version of number in radius
  ggplot(df.filter(f{`Type` == "values"}), aes("numInRadius", fill = "sampling")) +
    geom_bar(position = "stack") +
    scale_x_discrete() +
    xlab("# cluster in radius") +
    ggtitle(&"# clusters in interp radius, {suffixClean}. NMC = {nmc}, numClusters = {numClusters}") +
    ggsave(&"{outDir}/background_uncertainty_mc_all_samplers_numInRadius_stacked{suffix}.pdf", height = 600, width = width)
  # ridgeline version
  ggplot(df.filter(f{`Type` == "values"}), aes("numInRadius", fill = "sampling")) +
    ggridges("sampling", overlap = 1.3) +
    geom_bar(position = "identity") +
    scale_x_discrete() +
    xlab("# cluster in radius") +
    ggtitle(&"# clusters in interp radius, {suffixClean}.
NMC = {nmc}, numClusters = {numClusters}") + ggsave(&"{outDir}/background_uncertainty_mc_all_samplers_numInRadius_ridges{suffix}.pdf", height = 600, width = width) var pltNum: GgPlot # non stacked bar/histogram with alpha if numClusters < 100: pltNum = ggplot(df.filter(f{`Type` == "values"}), aes("numInRadius", fill = "sampling")) + geom_bar(position = "identity", alpha = some(0.5)) + scale_x_discrete() + ggtitle(&"# clusters in interp radius, {suffixClean}. NMC = {nmc}, numClusters = {numClusters}") else: let binEdges = toSeq(0 .. df["numInRadius", int].max + 1).mapIt(it.float - 0.5) pltNum = ggplot(df.filter(f{`Type` == "values"}), aes("numInRadius", fill = "sampling")) + geom_histogram(breaks = binEdges, hdKind = hdOutline, position = "identity", alpha = some(0.5)) + ggtitle(&"# clusters in interp radius, {suffixClean}. NMC = {nmc}, numClusters = {numClusters}")# + ggmulti([pltVals, pltNum], fname = &"{outDir}/background_uncertainty_mc_all_samplers{suffix}.pdf", widths = @[800], heights = @[600, 300]) # first regular MC const nmc = 100_000 performAllToys(nmc, num, suffix = "_uncorrected", correctNumClusters = false) # and now the artificial increased toy example performAllToys(nmc div 10, 10 * num, "_uncorrected_artificial_statistics", correctNumClusters = false) ## and now with cluster correction performAllToys(nmc, num, suffix = "_corrected", correctNumClusters = true) # and now the artificial increased toy example performAllToys(nmc div 10, 10 * num, "_corrected_artificial_statistics", correctNumClusters = true) #+end_src #+begin_src nim :tangle /tmp/sampling_helpers.nim import random, seqmath, sequtils, algorithm proc cdf[T](data: T): T = result = data.cumSum() result.applyIt(it / result[^1]) proc sampleFromCdf[T](data, cdf: seq[T]): T = # sample an index based on this CDF let idx = cdf.lowerBound(rand(1.0)) result = data[idx] proc sampleFrom*[T](data: seq[T], start, stop: T, numSamples: int): seq[T] = # get the normalized (to 1) CDF for this radius let 
    points = linspace(start, stop, data.len)
  let cdfD = cdf(data)
  result = newSeq[T](numSamples)
  for i in 0 ..< numSamples:
    # sample an index based on this CDF
    let idx = cdfD.lowerBound(rand(1.0))
    result[i] = points[idx]

proc sampleFrom*[T](fn: (proc(x: T): T), start, stop: T, numSamples: int,
                    numInterp = 10_000): seq[T] =
  # get the normalized (to 1) CDF for this function
  let points = linspace(start, stop, numInterp)
  let data = points.mapIt(fn(it))
  let cdfD = cdf(data)
  result = newSeq[T](numSamples)
  for i in 0 ..< numSamples:
    # sample an index based on this CDF
    let idx = cdfD.lowerBound(rand(1.0))
    result[i] = points[idx]
#+end_src

So, from these Monte Carlo toy experiments we can glean quite some insight. We have implemented unbiased as well as biased cluster samplers. First, one example for each of the four different cluster samplers, with the condition each time that the *total number of clusters* is the same as in the constant background rate case:

#+CAPTION: Example of an unbiased cluster sampling. Sampled 100 times (for better
#+CAPTION: visibility of the distribution) as many
#+CAPTION: clusters as predicted for our background data taking.
#+NAME: unbiased_sampled_background_clusters_uncorrected
[[~/org/Figs/statusAndProgress/background_interpolation/uncertainty/clusters_constant.pdf]]

#+CAPTION: Example of a linearly biased cluster sampling. Sampled 100 times (for better
#+CAPTION: visibility of the distribution) as many
#+CAPTION: clusters as predicted for our background data taking.
#+NAME: linear_sampled_background_clusters_uncorrected
[[~/org/Figs/statusAndProgress/background_interpolation/uncertainty/clusters_linear.pdf]]

#+CAPTION: Example of a squarely biased cluster sampling. Sampled 100 times (for better
#+CAPTION: visibility of the distribution) as many
#+CAPTION: clusters as predicted for our background data taking.
#+NAME: square_sampled_background_clusters_uncorrected
[[~/org/Figs/statusAndProgress/background_interpolation/uncertainty/clusters_square.pdf]]

#+CAPTION: Example of a $\exp(x/64)$ biased cluster sampling. Sampled 100 times (for better
#+CAPTION: visibility of the distribution) as many
#+CAPTION: clusters as predicted for our background data taking.
#+NAME: exp64_sampled_background_clusters_uncorrected
[[~/org/Figs/statusAndProgress/background_interpolation/uncertainty/clusters_exp64.pdf]]

With these in place, we performed two sets of Monte Carlo experiments to compute the value & uncertainty of the center point =(127, 127)= using the gaussian weighted nearest neighbor interpolation from the previous section. This is done for all four different samplers and the obtained values and their errors (propagated via =Measuremancer=) are plotted as histograms: once for the number of expected clusters (based on the gold region background rate), fig. [[background_uncertainty_mc_all_samplers_uncorrected]], and once with fewer toys but a 10 times higher number of clusters, fig. [[background_uncertainty_mc_all_samplers_uncorrected_artificial_statistics]].

#+CAPTION: Comparison of four different samplers (unbiased + 3 biased), showing the
#+CAPTION: result of \num{100000} MC toy experiments based on the expected number of
#+CAPTION: clusters if the same background rate as in the gold region covered the whole chip.
#+CAPTION: Below, a bar chart of the number of clusters found inside the radius.
#+CAPTION: The number of clusters corresponds to about =5e-6 keV⁻¹•cm⁻²•s⁻¹= over the
#+CAPTION: whole chip.
#+NAME: background_uncertainty_mc_all_samplers_uncorrected
[[~/org/Figs/statusAndProgress/background_interpolation/uncertainty/background_uncertainty_mc_all_samplers_uncorrected.pdf]]

#+CAPTION: Comparison of four different samplers (unbiased + 3 biased), showing the
#+CAPTION: result of \num{10000} MC toy experiments based on 10 times the expected number of
#+CAPTION: clusters if the same background rate as in the gold region covered the whole chip.
#+CAPTION: Below, a histogram of the number of clusters found inside the radius.
#+CAPTION: The number of clusters corresponds to about =5e-5 keV⁻¹•cm⁻²•s⁻¹= over the
#+CAPTION: whole chip.
#+NAME: background_uncertainty_mc_all_samplers_uncorrected_artificial_statistics
[[~/org/Figs/statusAndProgress/background_interpolation/uncertainty/background_uncertainty_mc_all_samplers_uncorrected_artificial_statistics.pdf]]

First of all, there is some visible structure in the low statistics figure (fig. [[background_uncertainty_mc_all_samplers_uncorrected]]), the meaning of which is not entirely clear to me. Initially, we thought it might be an integer effect of 0, 1, 2, ... clusters within the radius, with the additional slope coming from the distance of these clusters to the center: further away, less weight, lower background rate. But looking at the number of clusters in the radius (lowest plot in the figure), this explanation alone does not really seem to explain it.

For the high statistics case, we can see that the mean of the distribution shifts lower and lower the more extreme the bias is. This is likely because the bias causes a larger and larger number of clusters to land near the top of the chip, meaning that fewer and fewer clusters are found within the radius around the point of interpolation. Comparing the number of clusters in the radius for this case shows that indeed the square and exponential bias cases peak at lower cluster counts.
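To get a feeling for the size of this effect, we can estimate how strongly the square bias $f(x) = x²$ dilutes the cluster density at the chip center when the *total* number of clusters is held fixed. A quick Python estimate (illustrative only; the total cluster count of 7000 is an assumption, roughly 100 times the ~70 clusters expected for the full-chip rate used in the toys):

```python
def integral_x2(a, b):
    # ∫ x² dx on [a, b]
    return (b ** 3 - a ** 3) / 3.0

N = 7000  # assumption: ~100 × the expected cluster count (see text)
mean_f = integral_x2(0.0, 256.0) / 256.0  # average of f(x) = x² over the chip height
ratio = mean_f / 127.0 ** 2               # mean density relative to the center density
extra = round(N * (ratio - 1.0))          # extra clusters a correction must add
print(ratio, extra)  # ratio ≈ 1.354, i.e. the center sees ~35 % too few clusters
```

This ~35 % dilution is consistent with the "almost 2500 more" clusters noted for the corrected square sampling case in the figure captions.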
Therefore, I also computed a correction function to compute a biased distribution that matches the background rate at the center of the chip exactly, at the cost of a larger total number of sampled clusters. We know that (projecting onto the y axis alone) there are

\[ ∫_0^{256} f(x)\, \mathrm{d}x = N \]

clusters, where $N$ is the total number of clusters we draw and $f(x)$ the (suitably scaled) function we use to sample. For the constant case, this means that we have a rate of $N / 256$ clusters per pixel along the y axis (i.e. per row). In order to correct for this and compute the new required total number of clusters that gives us the same rate of $N / 256$ in the center, we use

\[ ∫_0^{256} \frac{N}{256 · f(127)} f(x)\, \mathrm{d}x = N' \]

where $f(127)$ is simply the value the currently used sampling function produces at the center of the chip. Given our definition of the functions (essentially as primitives ~f(x) = x~, ~f(x) = x * x~, etc.) we expect the linear function to match the required background rate of the constant case exactly in the middle, i.e. at 127. And this is indeed the case (as can be seen in the new linear plot below, fig. [[linear_sampled_background_clusters_corrected]]).

This correction has been implemented. The equivalent figures to the cluster distributions from further above are:

#+CAPTION: Example of an unbiased cluster sampling with the applied correction.
#+CAPTION: Sampled 100 times (for better visibility of the distribution) as many
#+CAPTION: clusters as predicted for our background data taking.
#+CAPTION: As expected the number of clusters is still the same number as above.
#+NAME: unbiased_sampled_background_clusters_corrected
[[~/org/Figs/statusAndProgress/background_interpolation/uncertainty/clusters_constant_corrected.pdf]]

#+CAPTION: Example of a linearly biased cluster sampling with the applied correction.
#+CAPTION: Sampled 100 times (for better
#+CAPTION: visibility of the distribution) as many
#+CAPTION: clusters as predicted for our background data taking.
#+NAME: linear_sampled_background_clusters_corrected
[[~/org/Figs/statusAndProgress/background_interpolation/uncertainty/clusters_linear_corrected.pdf]]

#+CAPTION: Example of a quadratically biased cluster sampling with the applied correction. Sampled 100 times (for better
#+CAPTION: visibility of the distribution) as many
#+CAPTION: clusters as predicted for our background data taking.
#+CAPTION: The correction means that the total number of clusters is now almost 2500 more than
#+CAPTION: in the uncorrected case.
#+NAME: square_sampled_background_clusters_corrected
[[~/org/Figs/statusAndProgress/background_interpolation/uncertainty/clusters_square_corrected.pdf]]

#+CAPTION: Example of a $\exp(x/64)$ biased cluster sampling with the applied correction. Sampled 100 times (for better
#+CAPTION: visibility of the distribution) as many
#+CAPTION: clusters as predicted for our background data taking.
#+CAPTION: The correction means that the total number of clusters is now almost double the
#+CAPTION: amount in the uncorrected case.
#+NAME: exp64_sampled_background_clusters_corrected
[[~/org/Figs/statusAndProgress/background_interpolation/uncertainty/clusters_exp64_corrected.pdf]]

The correction works nicely. In the center the density seems to be the same as in the constant case. From here we can again look at the same plots as above, i.e. the corrected Monte Carlo plots:

#+CAPTION: Comparison of four different samplers (unbiased + 3 biased), showing the
#+CAPTION: result of \num{100000} MC toy experiments based on the expected number of
#+CAPTION: clusters such that the background is biased and produces the same background
#+CAPTION: rate as in the gold region in the constant case.
#+CAPTION: Below a bar chart of the number of clusters found inside the radius.
#+CAPTION: The number of clusters corresponds to about =5e-6 keV⁻¹•cm⁻²•s⁻¹= over the
#+CAPTION: whole chip.
#+NAME: background_uncertainty_mc_all_samplers_corrected
[[~/org/Figs/statusAndProgress/background_interpolation/uncertainty/background_uncertainty_mc_all_samplers_corrected.pdf]]

#+CAPTION: Comparison of four different samplers (unbiased + 3 biased), showing the
#+CAPTION: result of \num{10000} MC toy experiments based on 10 times the expected number of
#+CAPTION: clusters such that the background is biased and produces the same background
#+CAPTION: rate as in the gold region in the constant case.
#+CAPTION: Below a histogram of the number of clusters found inside the radius.
#+CAPTION: The number of clusters corresponds to about =5e-5 keV⁻¹•cm⁻²•s⁻¹= over the
#+CAPTION: whole chip.
#+NAME: background_uncertainty_mc_all_samplers_corrected_artificial_statistics
[[~/org/Figs/statusAndProgress/background_interpolation/uncertainty/background_uncertainty_mc_all_samplers_corrected_artificial_statistics.pdf]]

It can be nicely seen that the mean of the distribution is now again at the same place for all samplers! This is reassuring, because it implies that any systematic uncertainty due to such a bias in our real data is *probably* negligible, as the effects will never be as strong as simulated here. Secondly, we can see that the computed uncertainty for a single element follows the actual width of the distribution nicely. In particular this is visible in the artificial high statistics case, where the mean value of the error is comparable to the width of the =value= histogram.

*** Candidates
:PROPERTIES:
:CUSTOM_ID: sec:limit:ingredients:candidates
:END:
Finally, the candidates are the X-ray-like clusters remaining after the background rejection algorithm has been applied to the data taken during the solar tracking.
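For the expected limit, toy candidate sets are drawn by Poisson sampling per cell of a background grid. A hedged sketch of that procedure, with hypothetical names and a flat dummy grid (not the actual ~mcmc_limit_calculation~ code; pixel coordinates and a 0 to 10 keV energy range are assumptions here):

```nim
import random, math, sequtils

# Hypothetical sketch of drawing one toy candidate set from a grid of
# expected counts b_ijk (not the actual limit code).
type Candidate = object
  x, y: float # cluster center, assumed in pixel coordinates (0..256)
  E: float    # energy in keV, assuming a 0–10 keV range

proc samplePoisson(rnd: var Rand, lam: float): int =
  # Knuth's inversion method; fine for the small λ per cell
  let limit = exp(-lam)
  var p = 1.0
  while true:
    p *= rnd.rand(1.0)
    if p <= limit: return
    inc result

proc drawToys(b: seq[float], rnd: var Rand): seq[Candidate] =
  const nx = 10; const ny = 10; const nE = 20 # (x, y, E) grid as in the text
  for i in 0 ..< nx:
    for j in 0 ..< ny:
      for k in 0 ..< nE:
        let lam = b[i * ny * nE + j * nE + k]  # expected counts b_ijk
        for _ in 0 ..< rnd.samplePoisson(lam): # candidates κ_ijk in this cell
          # uniform position & energy within the cell volume
          result.add Candidate(x: (i.float + rnd.rand(1.0)) * 256.0 / nx.float,
                               y: (j.float + rnd.rand(1.0)) * 256.0 / ny.float,
                               E: (k.float + rnd.rand(1.0)) * 10.0 / nE.float)

var rnd = initRand(1337)
let grid = newSeqWith(10 * 10 * 20, 0.01) # flat dummy grid, 20 counts expected in total
echo "Number of toy candidates: ", drawToys(grid, rnd).len
```

In the real calculation the grid cells are of course not flat, but filled from the background interpolation and scaled to the solar tracking duration.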
For the computation of the expected limit, the set of candidates is drawn from the background rate distribution via sampling from a Poisson distribution with the mean of the background rate. As our background model is an interpolation instead of a binned model with Poisson distributed bins, we create a grid of $(x, y, E) = (10, 10, 20)$ cells from the interpolation, which we scale such that they contain the expected number of candidates from each cell after the solar tracking duration, $b_{ijk}$. Then we can walk over the entire grid and sample from a Poisson distribution for each grid cell with mean $λ_{ijk} = b_{ijk}$. For all sampled candidates $κ_{ijk}$ in each grid cell, we then compute a random position and energy from uniform distributions along each dimension. A slice of the grid cells centered at $\SI{2.75}{keV}$ is shown in fig. sref:fig:limit:expected_counts, with the color indicating how many candidates are expected in each cell after the solar tracking duration. A set of toy candidates generated in this manner is shown in fig. sref:fig:limit:toy_candidate_set. Each point represents one toy candidate at its cluster center position. The color scale represents the energy of each cluster in $\si{keV}$. #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Grid of expected counts") (label "fig:limit:expected_counts") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/limit/sanity/candidate_sampling_grid_index_5.pdf")) (subfigure (linewidth 0.5) (caption "Toy candidate set") (label "fig:limit:toy_candidate_set") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/limit/sanity/example_candidates_1.pdf")) (caption (subref "fig:limit:expected_counts") ": Expected counts in " ($ "(10, 10, 20)") " cells, centered around " ($ (SI 2.75 "keV")) " obtained from background interpolation and normalized back to counts in solar tracking within the volume of the grid cell. 
" (subref "fig:limit:toy_candidate_set") ": A set of toy candidates drawn from cells of expected counts using a Poisson distribution with mean based on each grid cell. Each point is the center of a cluster with the color scale showing the energy of that cluster.") (label "fig:limit:toy_candidates")) #+end_src **** TODOs for this section [/] :noexport: - [ ] *MOVE THE SAMPLING PART* to the section about expected limits? Hmm. **** Generate the candidate sampling figure :extended: Sanity check for candidate sampling: #+begin_src sh F_WIDTH=0.5 DEBUG_TEX=true ESCAPE_LATEX=true USE_TEX=true \ mcmc_limit_calculation sanity \ --limitKind lkMCMC \ --axionModel ~/org/resources/axionProduction/axionElectronRealDistance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv \ --axionImage ~/phd/resources/axionImages/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm.csv \ --combinedEfficiencyFile ~/org/resources/combined_detector_efficiencies.csv \ --switchAxes \ --sanityPath ~/phd/Figs/limit/sanity/ \ --backgroundSampling #+end_src ** Systematics :PROPERTIES: :CUSTOM_ID: sec:limit:systematics :END: As explained previously in sec. [[#sec:limit:method_systematics]], we introduce 4 different nuisance parameters to handle systematics. These are split by those impacting the signal, the background and each position axis independently. Tab. [[tab:limit:systematic_uncertainties]] shows the different systematic uncertainties we consider, whether they affect signal, background or the position, their value and finally potential biases due to some imperfect knowledge. Note that the listed software efficiency systematic is an upper bound. The explicit value depends on the parameter setup for which we compute a limit, as each setup with differing software efficiency can have differing uncertainties. Further note that the accuracy given is purely the result of our estimation on the signal or background of the underlying systematic assuming some uncertainty. 
It does not, strictly speaking, reflect our knowledge to that precision. All individual systematic uncertainties are combined in quadrature, i.e. as a Euclidean distance
\[
\bar{σ} = \sqrt{\sum_i σ_i²}
\]
for each type of systematic ($s$, $b$). The combined uncertainties come out to
\begin{align*}
σ_s &\leq \SI{3.38}{\percent} \text{ (assuming } σ_{\text{software}} = \SI{2}{\%} \text{)} \\
σ_b &= \SI{0.28}{\percent} \\
σ_{xy} &= \SI{5}{\percent} \text{ (fixed, uncertainty numbers are bounds)}
\end{align*}
where again the final $σ_s$ depends on the specific setup and the given value is for a case of $σ_{\text{software}} = \SI{2}{\%}$, which is a bound larger than the observed uncertainties. The position uncertainty is fixed by hand to $\SI{5}{\%}$ due to lack of knowledge about parameters that could be used to calculate a specific value. The numbers in the table represent bounds on the maximum possible deviation. For a derivation of these numbers, see the extended thesis [fn:why_not_appendix].

#+CAPTION: Overview of the different systematics that are considered as well as possible
#+CAPTION: biases due to our imperfect knowledge.
#+NAME: tab:limit:systematic_uncertainties
#+ATTR_LATEX: :booktabs t
| Uncertainty | s or b? | rel. σ [%] | bias?
| |---------------------------------------+----------+------------+-------------------------| | Earth $⇔$ Sun distance | s | 0.7732 | none | | Window thickness (± 10 nm) | s | 0.5807 | none | | Solar models | s | $< 1$ | none | | Magnet length (- 1 cm) | s | 0.2159 | likely $\SI{9.26}{m}$ | | Magnet bore diameter (± 0.5 mm) | s | 2.32558 | measurements: 42.x - 43 | | Window rotation (30° ± 0.5°) | s | 0.18521 | none | | Software efficiency | s | $< 2$ | none | |---------------------------------------+----------+------------+-------------------------| | Gas gain time binning | b | 0.26918 | to 0 | | Reference dist interp (CDL morphing) | b | 0.0844 | none | |---------------------------------------+----------+------------+-------------------------| | Alignment (signal, related mounting) | s (pos.) | 0.5 mm | none | | Detector mounting precision (±0.25 mm) | s (pos.) | 0.25 mm | none | # | note | reference | # |--------------------------------------------------------------+----------------------------------------------------| # | | [[#sec:uncertain:distance_earth_sun]] | # | | [[#sec:uncertain:window_thickness]] | # | unclear from plot, need to look at code | [[~/org/Figs/lennert_seb_comparison_solar_models.png]] | # | | [[#sec:magnet_length_bore_uncertainty]] | # | | [[#sec:magnet_length_bore_uncertainty]] | # | rotation seems to be same in both data takings | [[#sec:window_rotation_uncertainty]] | # | For performance reasons less precise integrations. | | # | Eff. ε_photo < 2, but ε_escape > 3% (less reliable). Choose! | [[#sec:uncertain:software_efficiency]] | # |--------------------------------------------------------------+----------------------------------------------------| # | Computed background clusters for different gas gain binnings | [[#sec:uncertain:gas_gain_binning]] | # | | [[#sec:uncertain:cdl_morphing]] | # | Partially encoded / fixed w/ gas gain time binning. | | # | | [[#sec:uncertain:random_coincidences]] | # | From error prop. 
But unclear interpretation. Statistical. | [[#sec:uncertain:background_interpolation]] |
# | | [[#sec:uncertain:energy_calibration]] |
# |--------------------------------------------------------------+----------------------------------------------------|
# | From X-ray finger & laser alignment | [[#sec:window_rotation_uncertainty]] |
# | M6 screws in 6.5mm holes. Results in misalignment,
# above. | |

[fn:why_not_appendix] I did not put them into the general appendix, because they are mostly small to medium-sized pieces of code, which simply run the relevant calculation with slightly different parameters and in the end compute the ratio of the result to that of the unchanged parameters.

*** TODOs for this section [/] :noexport:
- [ ] Missing systematics:
  - Effective area uncertainty of telescope (Bias to positive!)
  - See ~statusAndProgress~ section about it!

Full table:
| Uncertainty | s or b? | rel. σ [%] | bias? |
|-------------------------------------------+----------+------------+---------------------------------+
| Earth <-> Sun distance | s | 0.7732 | none |
| Window thickness (± 10nm) | s | 0.5807 | none |
| Solar models | s | $< 1$ | none |
| Magnet length (- 1cm) | s | 0.2159 | likely 9.26m |
| Magnet bore diameter (± 0.5mm) | s | 2.32558 | measurements indicate 42.x - 43 |
| Window rotation (30° ± 0.5°) | s | 0.18521 | none |
| Nuisance parameter integration routine | | | |
| Software efficiency | s | $<2$ | none |
|-------------------------------------------+----------+------------+---------------------------------+
| Gas gain time binning | b | 0.26918 | to 0 |
| Reference dist interp (CDL morphing) | b | 0.0844 | none |
| Gas gain variation | ? | | |
| Random coincidences in septem/line veto | | | |
| Background interpolation (params & shape) | b | ? | none |
| Energy calibration | | | |
|-------------------------------------------+----------+------------+---------------------------------+
| Alignment (signal, related mounting) | s (pos.)
| 0.5 mm | none |
| Detector mounting precision (±0.25mm) | s (pos.) | 0.25 mm | |
| Gas gain vs charge calib fit | | ? | none |

*** Derivation of the systematics :noexport:
Copy over from =[status]=.

*** TODO Update numbers for systematics relating to software efficiency :noexport:
There was the bug mentioned in ~StatusAndProgress~ section [[#sec:uncertain:software_efficiency]] about the code there using wrong energies & DFs.
#+begin_src sh
./determineEffectiveEfficiency ~/CastData/data/CalibrationRuns2017_Reco.h5 ~/CastData/data/CalibrationRuns2018_Reco.h5 --real
#+end_src
See the latest update there too.

*** Thoughts on systematics :extended:
Systematics like the above are obviously useful. However, one can easily fall into the trap of realizing that -- if one is being honest -- there are a myriad of other things that introduce bias and hence yield a form of systematic uncertainty. The numbers shown in the table are those where
- understanding and calculating their impact was somewhat possible,
- estimating a reasonable number was possible.
By no means are those the only possible numbers. Some things that one could include relate to most of the algorithms used for background rejection and signal calculations. For example, the random coincidence determination itself comes with both a statistical and likely a systematic uncertainty, which we did not attempt to estimate (statistical uncertainties take care of themselves to an extent via our expected limits). The energy calibration comes with systematic uncertainties of many different kinds, but putting these into numbers is tricky. Or even things like CAST's pointing uncertainty (some of it could be computed from the CAST slow control files). Generally, combining all systematics we /do/ consider via the square root of the squared sum should be a conservative estimate.
Therefore, I hope that even if some numbers are not taken into account, the combined uncertainty is nevertheless a roughly realistic estimate of our systematic uncertainty.

One parameter of interest that I would have included, had I had the data at an earlier time, is the uncertainty of the telescope effective area. The numbers sent to me by Jaime Ruz do contain an uncertainty band, which one could have attempted to utilize.

In any case, to me the most important aspect of these systematics is that we show that including systematics directly in such a limit calculation via a Bayesian approach works well. This is interesting, because -- as far as I'm aware -- no CAST analysis before actually did that. This means that for BabyIAXO in the future, there should be more emphasis on estimating systematic uncertainties and there need be no worry about handling them in the limit calculations. In particular, each group responsible for a certain subset of the experiment should document their own systematic uncertainties. A PhD student should not be in charge of estimating uncertainties for aspects of the experiment they have no expertise in.

*** Computing the combined uncertainties :extended:
:PROPERTIES:
:CUSTOM_ID: sec:systematics:combined_uncertainties
:END:

#+begin_src nim
import math
let ss = [0.77315941, # based on real tracking dates
          # 3.3456, <- old number for Sun ⇔ Earth using min/max perihelion/aphelion
          0.5807,
          1.0,
          0.2159,
          2.32558,
          0.18521] #1.727] # software efficiency of LnL method. Included in `mcmc_limit` directly!
let bs = [0.26918, 0.0844] proc total(vals: openArray[float]): float = for x in vals: result += x * x result = sqrt(result) echo "Combined uncertainty signal: ", total(ss) / 100.0 echo "Combined uncertainty background: ", total(bs) / 100.0 echo "Position: ", sqrt(pow((0.5 / 7.0), 2) + pow((0.25 / 7.0), 2)) #+end_src #+RESULTS: | Combined | uncertainty | signal: | 0.02724743263827172 | | Combined | uncertainty | background: | 0.002821014576353691 | | Position: | 0.07985957062499248 | | | Compared to 4.582 % we're now down to 3.22%! (in each case including already the software efficiency, which we don't actually include anymore here, but in ~mcmc_limit~). Without the software efficiency we're down to 2.7%! **** Old results These were the numbers that still used the Perihelion/Aphelion based distances for the systematic of Sun ⇔ Earth distance. | Combined | uncertainty | signal: | 0.04582795952309026 | | Combined | uncertainty | background: | 0.002821014576353691 | | Position: | 0.07985957062499248 | | | *NOTE*: The value used here is not the one that was used in most mcmc limit calculations. There we used: #+begin_src nim σ_sig = 0.04692492913207222, #+end_src which comes out from assuming 2% uncertainty for the software efficiency instead of the ~1.727~ that now show up in the code! *** Signal [0/3] :extended: - [ ] *signal position* (i.e. the spot of the raytracing result) - to be implemented as a nuisance parameter (actually 2) in the limit calculation code. - [ ] pointing precision of the CAST magnet - check the reports of the CAST sun filming. That should give us a good number for the alignment accuracy - [ ] detector and telescope alignment - detector alignment goes straight into the signal position one. The telescope alignment can be estimated maybe from the geometer measurements. In any case that will also directly impact the placement / shape of the axion image. So this should be redundant. 
Still need to check the geometer measurements to get a good idea here. - [X] compute center based on X-ray finger run - [X] find image of laser alignment with plastic target - [ ] find geometer measurements and see where they place us (good for relative from 2017/18 to end of 2018) *** Signal rate & efficiency [5/7] :extended: - [ ] *CLEAN THIS UP SOMEWHAT!* - [ ] (solar model) - [X] look into the work by Lennert & Sebastian. What does their study of different solar models imply for different fluxes? - [ ] check absolute number for - [X] axion rate as a function of distance Earth ⇔ Sun (depends on time data was taken) - [X] simple: compute different rate based on perihelion & aphelion. Difference is measure for > 1σ uncertainty on flux - [ ] more complex: compute actual distance at roughly times when data taking took place. Compare those numbers with the AU distance used in the ray tracer & in axion flux (=expRate= in code). - [X] telescope and window efficiencies - [X] window: especially uncertainty of window thickness: Yevgen measured thickness of 3 samples using ellipsometry and got values O(350 nm)! Norcada themselves say 300 ± 10 nm - compute different absorptions for the 300 ± 10 nm case (integrated over some energy range) and for the extrema (Yevgen). That should give us a number in flux one might lose / gain. - [X] window rotation (position of the strongbacks), different for two run periods & somewhat uncertain - [X] measurement: look at occupancy of calibration runs. This *should* give us a well defined orientation for the strongback. From that we can adjust the raytracing. Ideally this does not count as a systematic as we can measure it (I think, but need to do!) - [X] need to look at X-ray finger runs reconstructed & check occupancy to compare with occupancies of the calibration data - [X] determine the actual loss based on the rotation uncertainty if plugged into raytracer & computed total signal? - [X] magnet length, diameter and field strength (9 T?) 
- magnet length sometimes reported as 9.25 m, other times as 9.26
  - [X] compute conversion probability for 9.26 ± 0.01 m. Result affects signal. Get number.
- diameter sometimes reported as 43 mm, sometimes 42.5 (iirc, look up again!), but numbers given by Theodoros from a measurement for CAPP indicated essentially 43 (with some measured uncertainty!)
  - [X] treated the same way as magnet length. Adjust area accordingly & get number for the possible range.
- [ ] Software signal efficiency due to linear logL interpolation, for classification signal / background
  - [ ] what we already did: took two bins surrounding a center bin and interpolated the middle one. -> what is difference between interpolated and real? This is a measure for its uncertainty.
- [X] detector mounting precision:
  - [X] 6 mounting holes, a M6. Hole size 6.5 mm. Thus, easily 0.25mm variation is possible (discussed with Tobi).
  - [X] plug can be moved about ±0.43mm away from the center. On septemboard variance of plugs is ±0.61mm.
**** Distance Earth ⇔ Sun
:PROPERTIES:
:CUSTOM_ID: sec:uncertain:distance_earth_sun
:END:
The distance between Earth and the Sun varies between:
Aphelion: 152100000 km
Perihelion: 147095000 km
Semi-major axis: 149598023 km
#+REF: https://en.wikipedia.org/wiki/Earth
which first of all is a variation of a bit more than 3% peak to peak, or about ~1.5% from one AU. The naive interpretation of the effect on the signal would then be 1 / (1.015²) = ~0.971, a loss of about 3% going from the semi-major axis to the aphelion (or the corresponding gain going to the perihelion).
In more explicit numbers: #+begin_src nim import math proc flux(r: float): float = result = 1 / (r * r) let f_au = flux(149598023) let f_pe = flux(147095000) let f_ap = flux(152100000) echo "Flux at 1 AU: ", f_au echo "Flux at Perihelion: ", f_pe echo "Flux at Aphelion: ", f_ap echo "Flux decrease from 1 AU to Perihelion: ", f_au / f_pe echo "Flux increase from 1 AU to Aphelion: ", f_au / f_ap echo "Mean of increase & decrease: ", (abs(1.0 - f_au / f_pe) + abs(1.0 - f_au / f_ap)) / 2.0 echo "Total flux difference: ", f_pe / f_ap #+end_src #+RESULTS: | Flux | at | 1 | AU: | 4.468361401371663e-17 | | | | | Flux | at | Perihelion: | 4.621725831202688e-17 | | | | | | Flux | at | Aphelion: | 4.322565390688589e-17 | | | | | | Flux | decrease | from | 1 | AU | to | Perihelion: | 0.9668166318314223 | | Flux | increase | from | 1 | AU | to | Aphelion: | 1.033729046875066 | | Mean | of | increase | & | decrease: | 0.03345620752182193 | | | | Total | flux | difference: | 1.069209002866338 | | | | | | Total | flux | difference: | 0.06691241504364387 | | | | | ***** *UPDATE*: <2023-07-01 Sat 15:50> In section [[file:~/org/journal.org::#sec:journal:01_07_23_sun_earth_dist]] of the ~journal.org~ we discuss the real distances during the CAST trackings. The numbers we actually need to care about are the following: #+begin_src Mean distance during trackings = 0.9891144450781392 Variance of distance during trackings = 1.399449924353128e-05 Std of distance during trackings = 0.003740922245052853 #+end_src referring to the CSV file: [[~/org/resources/sun_earth_distance_cast_solar_trackings.csv]] where the numbers are in units of 1 AU. 
So the absolute numbers come out to: #+begin_src nim import unchained const mean = 0.9891144450781392 echo "Actual distance = ", mean.AU.to(km) #+end_src #+RESULTS: : Actual distance = 1.47969e+08 km This means an improvement in flux, following the code snippet above: #+begin_src nim import math, unchained, measuremancer proc flux[T](r: T): T = result = 1 / (r * r) let mean = 0.9891144450781392.AU.to(km).float ± 0.003740922245052853.AU.to(km).float echo "Flux increase from 1 AU to our actual mean: ", pretty(flux(mean) / flux(1.AU.to(km).float), precision = 8) #+end_src #+RESULTS: : Flux increase from 1 AU to our actual mean: 1.0221318 ± 0.0077315941 Which comes out to be an equivalent of 0.773% for the signal uncertainty now! This is a really nice improvement from the 3.3% we had before! It should bring the signal uncertainty from ~4.5% down to close to 3% probably. This number was reproduced using ~readOpacityFile~ as well by (see ~journal.org~ on <2023-07-03 Mon 14:09> for more details): #+begin_src nim import ggplotnim let df1 = readCsv("~/org/resources/differential_flux_sun_earth_distance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_1AU.csv") .filter(f{`type` == "Total flux"}) let df2 = readCsv("~/org/resources/differential_flux_sun_earth_distance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv") .filter(f{`type` == "Total flux"}) let max1AU = df1["diffFlux", float].max let max0989AU = df2["diffFlux", float].max echo "Ratio of 1 AU to 0.989 AU = ", max0989AU / max1AU #+end_src #+RESULTS: : Ratio of 1 AU to 0.989 AU = 1.022131825899129 Bang on! **** Variation of window thickness :PROPERTIES: :CUSTOM_ID: sec:uncertain:window_thickness :END: The thickness of the SiN windows will vary somewhat. Norcada says they are within 10nm of 300nm thickness. Measurements done by Yevgen rather imply variations on the O(50 nm). Difficult to know which numbers to trust. 
The thickness goes into the transmission according to Beer-Lambert's law. Does this imply quadratically? I'm a bit confused playing around with the Henke tool. TODO: get a data file for 1 μm and for 2 μm and check what the difference is. #+begin_src nim import ggplotnim let df1 = readCsv("/home/basti/org/resources/si_nitride_1_micron_5_to_10_kev.txt", sep = ' ') .mutate(f{"TSq" ~ `Transmission` * `Transmission`}) let df2 = readCsv("/home/basti/org/resources/si_nitride_2_micron_5_to_10_kev.txt", sep = ' ') let df = bind_rows(df1, df2, id = "id") ggplot(df, aes("Energy[eV]", "Transmission", color = "id")) + geom_line() + geom_line(data = df1, aes = aes(y = "TSq"), color = "purple", lineType = ltDashed) + ggsave("/tmp/transmissions.pdf") # compute the ratio let dfI = inner_join(df1.rename(f{"T1" <- "Transmission"}), df2.rename(f{"T2" <- "Transmission"}), by = "Energy[eV]") .mutate(f{"Ratio" ~ `T1` / `T2`}) echo dfI ggplot(dfI, aes("Energy[eV]", "Ratio")) + geom_line() + ggsave("/tmp/ratio_transmissions_1_to_2_micron.pdf") #+end_src #+RESULTS: | Dataframe | with | 7 | columns | and | 101 | rows: | | | Idx | TSq | id | prevVals_GGPLOTNIM_INTERNAL | T1 | T2 | Energy[eV] | Ratio | | dtype: | float | string | float | float | float | float | float | | 0 | 0.8987 | id | 0 | 0.948 | 0.8987 | 5000 | 1.055 | | 1 | 0.9014 | id | 0 | 0.9494 | 0.9014 | 5050 | 1.053 | | 2 | 0.904 | id | 0 | 0.9508 | 0.904 | 5100 | 1.052 | | 3 | 0.9065 | id | 0 | 0.9521 | 0.9065 | 5150 | 1.05 | | 4 | 0.9089 | id | 0 | 0.9534 | 0.9089 | 5200 | 1.049 | | 5 | 0.9113 | id | 0 | 0.9546 | 0.9113 | 5250 | 1.048 | | 6 | 0.9135 | id | 0 | 0.9558 | 0.9135 | 5300 | 1.046 | | 7 | 0.9157 | id | 0 | 0.9569 | 0.9157 | 5350 | 1.045 | | 8 | 0.9178 | id | 0 | 0.958 | 0.9179 | 5400 | 1.044 | | 9 | 0.9199 | id | 0 | 0.9591 | 0.9199 | 5450 | 1.043 | | 10 | 0.9219 | id | 0 | 0.9602 | 0.9219 | 5500 | 1.042 | | 11 | 0.9238 | id | 0 | 0.9612 | 0.9238 | 5550 | 1.04 | | 12 | 0.9257 | id | 0 | 0.9621 | 0.9257 | 5600 | 1.039 | 
| 13 | 0.9275 | id | 0 | 0.9631 | 0.9275 | 5650 | 1.038 |
| 14 | 0.9292 | id | 0 | 0.964 | 0.9292 | 5700 | 1.037 |
| 15 | 0.9309 | id | 0 | 0.9649 | 0.9309 | 5750 | 1.036 |
| 16 | 0.9326 | id | 0 | 0.9657 | 0.9326 | 5800 | 1.036 |
| 17 | 0.9342 | id | 0 | 0.9665 | 0.9342 | 5850 | 1.035 |
| 18 | 0.9357 | id | 0 | 0.9673 | 0.9357 | 5900 | 1.034 |
| 19 | 0.9372 | id | 0 | 0.9681 | 0.9372 | 5950 | 1.033 |
| | | | | | | | |

The resulting =Ratio= here kind of implies that we're missing something... Ah, no. The =Ratio= thing was a brain fart. Just squaring the 1μm transmission does indeed reproduce the 2μm case! All good here.

So how do we get the correct value then for e.g. 310nm when having 300nm? If my intuition is correct (we'll check with a few other numbers in a minute) then essentially the following holds:
\[
T_{xd} = (T_d)^x
\]
where =T_d= is the transmission of the material at thickness =d= and we get the correct transmission for a different thickness that is a multiple =x= of =d= by the given power-law relation.

Let's apply this to the files we have for the 300nm window and see what we get if we also add 290 and 310 nm.
#+begin_src nim import ggplotnim, strformat, math proc readFile(fname: string): DataFrame = result = readCsv(fname, sep = ' ') .rename(f{"Energy / eV" <- "PhotonEnergy(eV)"}) .mutate(f{"E / keV" ~ c"Energy / eV" / 1000.0}) let sinDf = readFile("/home/basti/org/resources/Si3N4_density_3.44_thickness_0.3microns.txt") .mutate(f{float: "T310" ~ pow(`Transmission`, 310.0 / 300.0)}) .mutate(f{float: "T290" ~ pow(`Transmission`, 290.0 / 300.0)}) var sin1Mu = readFile("/home/basti/org/resources/Si3N4_density_3.44_thickness_1microns.txt") .mutate(f{float: "Transmission" ~ pow(`Transmission`, 0.3 / 1.0)}) sin1Mu["Setup"] = "T300_from1μm" var winDf = sinDf.gather(["Transmission", "T310", "T290"], key = "Setup", value = "Transmission") ggplot(winDf, aes("E / keV", "Transmission", color = "Setup")) + geom_line() + geom_line(data = sin1Mu, lineType = ltDashed, color = "purple") + xlim(0.0, 3.0, outsideRange = "drop") + xMargin(0.02) + yMargin(0.02) + margin(top = 1.5) + ggtitle("Impact of 10nm uncertainty on window thickness. Dashed line: 300nm transmission computed " & "from 1μm via power law T₃₀₀ = T₁₀₀₀^{0.3/1}") + ggsave("/home/basti/org/Figs/statusAndProgress/window_uncertainty_transmission.pdf", width = 853, height = 480) #+end_src Plot [[~/org/Figs/statusAndProgress/window_uncertainty_transmission.pdf]] shows us the impact on the transmission of the uncertainty on the window thickness. In terms of such transmission the impact seems almost negligible as long as it's small. However, to get an accurate number, we should check the integrated effect on the axion flux after conversion & going through the window. That then takes into account the energy dependence and thus gives us a proper number of the impact on the signal. 
#+begin_src nim import sequtils, math, unchained, datamancer import numericalnim except linspace, cumSum # import ./background_interpolation defUnit(keV⁻¹•cm⁻²) type Context = object integralBase: float efficiencySpl: InterpolatorType[float] defUnit(keV⁻¹•cm⁻²•s⁻¹) defUnit(keV⁻¹•m⁻²•yr⁻¹) defUnit(cm⁻²) defUnit(keV⁻¹•cm⁻²) proc readAxModel(): DataFrame = let upperBin = 10.0 proc convert(x: float): float = result = x.keV⁻¹•m⁻²•yr⁻¹.to(keV⁻¹•cm⁻²•s⁻¹).float result = readCsv("/home/basti/CastData/ExternCode/AxionElectronLimit/axion_diff_flux_gae_1e-13_gagamma_1e-12.csv") .mutate(f{"Energy / keV" ~ c"Energy / eV" / 1000.0}, f{float: "Flux / keV⁻¹•cm⁻²•s⁻¹" ~ convert(idx("Flux / keV⁻¹ m⁻² yr⁻¹"))}) .filter(f{float: c"Energy / keV" <= upperBin}) proc detectionEff(spl: InterpolatorType[float], energy: keV): UnitLess = # window + gas if energy < 0.001.keV or energy > 10.0.keV: return 0.0 result = spl.eval(energy.float) proc initContext(thickness: NanoMeter): Context = let combEffDf = readCsv("/home/basti/org/resources/combined_detector_efficiencies.csv") .mutate(f{float: "Efficiency" ~ pow(idx("300nm SiN"), thickness / 300.nm)}) ## no-op if input is also 300nm let effSpl = newCubicSpline(combEffDf["Energy [keV]", float].toRawSeq, combEffDf["Efficiency", float].toRawSeq) # effective area included in raytracer let axData = readAxModel() let axModel = axData .mutate(f{"Flux" ~ idx("Flux / keV⁻¹•cm⁻²•s⁻¹") * detectionEff(effSpl, idx("Energy / keV").keV) }) let integralBase = simpson(axModel["Flux", float].toRawSeq, axModel["Energy / keV", float].toRawSeq) result = Context(integralBase: integralBase, efficiencySpl: effSpl) defUnit(cm²) defUnit(keV⁻¹) func conversionProbability(): UnitLess = ## the conversion probability in the CAST magnet (depends on g_aγ) ## simplified vacuum conversion prob. 
  ## for small masses
  let B = 9.0.T
  let L = 9.26.m
  let g_aγ = 1e-12.GeV⁻¹ # ``must`` be same as reference in Context
  result = pow( (g_aγ * B.toNaturalUnit * L.toNaturalUnit / 2.0), 2.0 )

defUnit(cm⁻²•s⁻¹)
defUnit(m⁻²•yr⁻¹)
proc expRate(integralBase: float): UnitLess =
  let trackingTime = 190.h
  let areaBore = π * (2.15 * 2.15).cm²
  result = integralBase.cm⁻²•s⁻¹ * areaBore * trackingTime.to(s) * conversionProbability()

let ctx300 = initContext(300.nm)
let rate300 = expRate(ctx300.integralBase)
let ctx310 = initContext(310.nm)
let rate310 = expRate(ctx310.integralBase)
let ctx290 = initContext(290.nm)
let rate290 = expRate(ctx290.integralBase)
echo "Decrease: 300 ↦ 310 nm: ", rate310 / rate300
echo "Increase: 300 ↦ 290 nm: ", rate290 / rate300
echo "Total change: ", rate290 / rate310
echo "Averaged difference: ", (abs(1.0 - rate310 / rate300) + abs(1.0 - rate290 / rate300)) / 2.0
#+end_src

#+RESULTS:
| Decrease: | 300 | ↦ | 310 | nm: | 0.994254 | UnitLess |
| Increase: | 300 | ↦ | 290 | nm: | 1.00587 | UnitLess |
| Total | change: | 1.01168 | UnitLess | | | |
| Averaged | difference: | 0.005806760566511471 | | | | |

**** Magnet length & bore diameter
:PROPERTIES:
:CUSTOM_ID: sec:magnet_length_bore_uncertainty
:END:

The length was reported as 9.25 m in the original CAST proposal, compared to the 9.26 m reported since then. The conversion probability scales quadratically with the length, so the change in flux should also just be quadratic. The bore diameter was also initially given as 42.5 mm (if I recall correctly), but later as 43 mm. The amount of flux scales with the area.
#+begin_src nim
import math
echo 9.25 / 9.26 # Order 0.1%
echo pow(42.5 / 2.0, 2.0) / pow(43 / 2.0, 2.0) # Order 2.3%
#+end_src

#+RESULTS:
| 0.9989200863930886 |
| 0.9768793942671714 |

With the conversion probability:
\[
P_{a↦γ, \text{vacuum}} = \left(\frac{g_{aγ} B L}{2} \right)^2 \left(\frac{\sin\left(\delta\right)}{\delta}\right)^2
\]
the change in conversion probability from a variation in magnet length is thus (using the simplified form, valid if δ is small):
#+begin_src nim
import unchained, math
func conversionProbability(L: Meter): UnitLess =
  ## the conversion probability in the CAST magnet (depends on g_aγ)
  ## simplified vacuum conversion prob. for small masses
  let B = 9.0.T
  let g_aγ = 1e-12.GeV⁻¹ # ``must`` be same as reference in Context
  result = pow( (g_aγ * B.toNaturalUnit * L.toNaturalUnit / 2.0), 2.0 )
let P26 = conversionProbability(9.26.m)
let P25 = conversionProbability(9.25.m)
let P27 = conversionProbability(9.27.m) # fixed: was erroneously `9.25.m`
echo "Change from 9.26 ↦ 9.25 m = ", P26 / P25
echo "Change from 9.25 ↦ 9.27 m = ", P27 / P25
echo "Relative change = ", (abs(1.0 - P27 / P26) + abs(1.0 - P25 / P26)) / 2.0
#+end_src

#+RESULTS:
| Change | from | 9.26 | ↦ | 9.25 | m | = | 1.00216 | UnitLess |
| Change | from | 9.25 | ↦ | 9.27 | m | = | 1.00433 | UnitLess |
| Relative | change | = | 0.0021598272 | | | | | |

And now for the area: as it only goes into the expected rate by virtue of, well, being the area we integrate over, we simply need to look at the change in area from a change in bore radius.
#+begin_src nim
proc expRate(integralBase: float): UnitLess =
  let trackingTime = 190.h
  let areaBore = π * (2.15 * 2.15).cm²
  result = integralBase.cm⁻²•s⁻¹ * areaBore * trackingTime.to(s) * conversionProbability()
#+end_src

#+begin_src nim
import unchained, math
defUnit(MilliMeter²)
proc boreArea(diameter: MilliMeter): MilliMeter² =
  result = π * (diameter / 2.0)^2
let areaD = boreArea(43.mm)
let areaS = boreArea(42.5.mm)
let areaL = boreArea(43.5.mm)
echo "Change from 43 ↦ 42.5 mm = ", areaS / areaD
echo "Change from 43 ↦ 43.5 mm = ", areaL / areaD
echo "Relative change = ", (abs(1.0 - areaL / areaD) + abs(1.0 - areaS / areaD)) / 2.0
#+end_src

#+RESULTS:
| Change | from | 43 | ↦ | 42.5 | mm | = | 0.976879 | UnitLess |
| Change | from | 43 | ↦ | 43.5 | mm | = | 1.02339 | UnitLess |
| Relative | change | = | 0.02325581395348841 | | | | | |

**** Window rotation & alignment precision [2/2]
:PROPERTIES:
:CUSTOM_ID: sec:window_rotation_uncertainty
:END:

Rotation of the window: initially we assumed that the rotation was different in the two data taking periods. We can check the rotation by looking at the occupancy of the calibration runs taken in the 2017 and 2018 datasets. The 2017 occupancy (filtered to only use events with eccentricity 1.0–1.3) is
[[~/org/Figs/statusAndProgress/systematics/occupancy_clusters_run83_187_chip3_ckQuantile_80.0_region_crAll_eccentricity_1.0_1.3_applyAll_true_.pdf]]
and for 2018:
[[~/org/Figs/statusAndProgress/systematics/occupancy_clusters_run239_304_chip3_ckQuantile_80.0_region_crAll_eccentricity_1.0_1.3_applyAll_true.pdf]]
They imply that the angle was indeed the same (compare with the sketch of our windows in fig. [[300nm_sin_norcada_window_layout]]). However, there seems to be a small shift in y between the two, which seems hard to explain. Such a shift *only* makes sense (unless I'm missing something!)
if there is a shift between the *chip* and the *window*, but not for any kind of installation shift or a shift in the position of the 55Fe source. I suppose a slight change in how the window is mounted on the detector can already explain it? This is < 1mm after all.

In terms of the rotation angle, we'll just read it off using Inkscape. It comes out to pretty much _exactly_ 30°, see fig. [[fig:window_rotation]]. I suppose this makes sense given the number of screws (6?). Still, this implies that the window was mounted perfectly aligned with some line relative to 2 screws. Not that it matters.

#+CAPTION: Measurement of the rotation angle of the window in 2018 data taking (2017 is the same)
#+CAPTION: using Inkscape. Comes out to ~30° (with maybe 0.5° margin of error; aligned to
#+CAPTION: exactly 30° for the picture, but some variation around that also looks fine).
#+NAME: fig:window_rotation
[[~/org/Figs/statusAndProgress/systematics/window_rotation_2018.png]]

Need to check the number used in the raytracing code. There we have (also see the discussion with Johanna on Discord):
#+begin_src nim
case wyKind
of wy2017: result = degToRad(10.8)
of wy2018: result = degToRad(71.5)
of wyIAXO: result = degToRad(20.0) # who knows
#+end_src
so an angle of 71.5° (2018) and 10.8° (2017). Very different from the number we get in Inkscape based on the calibration runs. She used the following plot:
[[~/org/Talks/CCM_2018_Apr/figs/xray_finger_side.pdf]]
to extract the angles.

The impact of this on the signal only depends on where the strongbacks are compared to the axion image. Fig. [[fig:axion_image_71_5deg]] shows the axion image for the rotation of 71.5° (Johanna, from the X-ray finger run) and fig. [[fig:axion_image_30deg]] shows the same for a rotation of 30° (our measurement). The 30° case matches nicely with the extraction of fig. [[fig:window_rotation]].

#+CAPTION: Axion image for a window setup rotated to 71.5° (the number Johanna read off
#+CAPTION: from the X-ray finger run).
#+NAME: fig:axion_image_71_5deg
[[~/org/Figs/statusAndProgress/systematics/axion_image_2018_71_5deg.pdf]]

#+CAPTION: Axion image for a window setup rotated to 30° (the number we read off
#+CAPTION: from the calibration runs).
#+NAME: fig:axion_image_30deg
[[~/org/Figs/statusAndProgress/systematics/axion_image_2018_30deg.pdf]]

From here there are 2 things to do:
- [X] reconstruct the X-ray finger runs & check their rotation again using the same occupancy plots as for the calibration runs.
- [X] compute the integrated signal for the 71.5°, 30° and 30° ± 0.5° cases and see how the signal differs. The latter will be the number for the systematic we'll use. We do that by just summing the raytracing output.
To do the latter, we need to add an option to write the CSV files in the raytracer first.
#+begin_src nim
import datamancer
proc print(fname: string): float =
  let hmap = readCsv(fname)
  result = hmap["photon flux", float].sum

let f71 = print("/home/basti/org/resources/axion_images_systematics/axion_image_2018_71_5deg.csv")
let f30 = print("/home/basti/org/resources/axion_images_systematics/axion_image_2018_30deg.csv")
let f29 = print("/home/basti/org/resources/axion_images_systematics/axion_image_2018_29_5deg.csv")
let f31 = print("/home/basti/org/resources/axion_images_systematics/axion_image_2018_30_5deg.csv")
echo f71
echo f30
echo "Ratio : ", f30 / f71
echo "Ratio f29 / f31 ", f29 / f31
echo "Difference ", (abs(1.0 - (f29/f30)) + abs(1.0 - (f31/f30))) / 2.0
#+end_src

#+RESULTS:
| 2.177037651356148e-05 | | | | |
| 2.211544112107869e-05 | | | | |
| Ratio | : | 1.015850190156439 | | |
| Ratio | f29 | / | f31 | 1.003711153363915 |
| Difference | 0.001852083217886047 | | | |

Now on to the reconstruction of the X-ray finger run. I copied the X-ray finger runs from tpc19 over to [[file:~/CastData/data/XrayFingerRuns/]]. The run of interest is mainly run 189, as it's the run done with the detector installed as in the 2017/18 data taking.
#+begin_src sh :results none
cd /dev/shm # store here for fast access & temporary
cp ~/CastData/data/XrayFingerRuns/XrayFingerRun2018.tar.gz .
tar xzf XrayFingerRun2018.tar.gz
raw_data_manipulation -p Run_189_180420-09-53 --runType xray --out xray_raw_run189.h5
reconstruction -i xray_raw_run189.h5 --out xray_reco_run189.h5
# make sure `config.toml` for reconstruction uses `default` clustering!
reconstruction -i xray_reco_run189.h5 --only_charge
reconstruction -i xray_reco_run189.h5 --only_gas_gain
reconstruction -i xray_reco_run189.h5 --only_energy_from_e
plotData --h5file xray_reco_run189.h5 --runType=rtCalibration -b bGgPlot --ingrid --occupancy --config plotData.toml
#+end_src
which gives us the following plot:
#+CAPTION: Occupancies of cluster centers of the X-ray finger run (189) in 2018.
#+CAPTION: Shows the same rotation as the calibration runs here.
#+NAME: fig:occupancy_cluster_xray_finger_run_189
[[~/org/Figs/statusAndProgress/systematics/occupancy_clusters_run189_chip3_ckQuantile_95.0_region_crAll_eccentricity_1.0_1.3_applyAll_true.pdf]]

With many more plots here: [[file:~/org/Figs/statusAndProgress/xrayFingerRun/run189/]]
Also see the relevant section in sec. [[#sec:cast:alignment]].

Using =TimepixAnalysis/Tools/printXyDataset= we can now compute the center of the X-ray finger run.
#+begin_src sh :results drawer cd ~/CastData/ExternCode/TimepixAnalysis/Tools/ ./printXyDataset -f /dev/shm/xray_reco_run189.h5 -c 3 -r 189 \ --dset centerX --reco \ --cuts '("eccentricity", 0.9, 1.4)' \ --cuts '("centerX", 3.0, 11.0)' \ --cuts '("centerY", 3.0, 11.0)' ./printXyDataset -f /dev/shm/xray_reco_run189.h5 -c 3 -r 189 \ --dset centerY --reco \ --cuts '("eccentricity", 0.9, 1.4)' \ --cuts '("centerX", 3.0, 11.0)' \ --cuts '("centerY", 3.0, 11.0)' #+end_src #+RESULTS: :results: Parsing ("eccentricity", 0.9, 1.4) vals: @["\"eccentricity\"", " 0.9", " 1.4"] Parsed it to (dset: "eccentricity", lower: 0.9, upper: 1.4) Parsing ("centerX", 3.0, 11.0) vals: @["\"centerX\"", " 3.0", " 11.0"] Parsed it to (dset: "centerX", lower: 3.0, upper: 11.0) Parsing ("centerY", 3.0, 11.0) vals: @["\"centerY\"", " 3.0", " 11.0"] Parsed it to (dset: "centerY", lower: 3.0, upper: 11.0) Dataset: centerX Min: 3.000375816993464 Max: 10.9998178807947 Mean: 7.658099408977982 Sum: 144470.045350371 Parsing ("eccentricity", 0.9, 1.4) vals: @["\"eccentricity\"", " 0.9", " 1.4"] Parsed it to (dset: "eccentricity", lower: 0.9, upper: 1.4) Parsing ("centerX", 3.0, 11.0) vals: @["\"centerX\"", " 3.0", " 11.0"] Parsed it to (dset: "centerX", lower: 3.0, upper: 11.0) Parsing ("centerY", 3.0, 11.0) vals: @["\"centerY\"", " 3.0", " 11.0"] Parsed it to (dset: "centerY", lower: 3.0, upper: 11.0) Dataset: centerY Min: 3.002725 Max: 10.99956806282723 Mean: 6.448868421386536 Sum: 121657.902769457 :end: So we get a mean of: - centerX: 7.658 - centerY: 6.449 meaning we are ~0.5 mm away from the center in either direction. Given that there is distortion due to the magnet optic, uncertainty about the location of X-ray finger & emission characteristic, using a variation of 0.5mm seems reasonable. This also matches more or less the laser alignment we did initially, see fig. [[fig:cast_laser_alignment_target]]. #+CAPTION: Laser alignment using target on flange at CAST. 
#+CAPTION: Visible deviation is ~0.5mm, more or less.
#+NAME: fig:cast_laser_alignment_target
[[~/org/Figs/statusAndProgress/CAST_detector_laser_alignment_target.jpg]]

***** TODO Question about signal & window
One thing we currently do not take into account is that when varying the signal position using the nuisance parameters, we move the window strongback *with* the position... In principle we're not allowed to do that. The strongbacks are part of the detector & not the signal (but are currently convolved into the image). The strongback position depends only on the detector mounting precision. So if the main peak were exactly on the strongback, we'd barely see anything!

**** Integration routines for nuisance parameters
For performance reasons we cannot integrate out the nuisance parameters using the most sophisticated algorithms. Maybe in the end we could assign a systematic by computing a few "accurate" integrations (e.g. integrating out $ϑ_x$ and $ϑ_y$) with adaptive Gauss quadrature and then with our chosen method, and compare the effect on the limit? Could just be a "total" uncertainty on the limit w/o changing any parameters.

*** Detector behavior [0/1] :extended:
- [ ] drift in # hits in ⁵⁵Fe. "Adaptive gas gain" tries to minimize this; maybe the variation of the mean energy over time after applying it is a measure for the uncertainty?
  -> should mainly have an effect on the *software signal efficiency*.
  - goes into S of the limit likelihood (ε), which is currently assumed to be a constant number
- [ ] veto random coincidences

**** Random coincidences
:PROPERTIES:
:CUSTOM_ID: sec:uncertain:random_coincidences
:END:
Come up with an equation to compute the rate of random coincidences. Inputs:
- area of chip
- rate of cosmics
- shutter length
- physical time scale of background events
Leads to an effective reduction in live data taking time (increased dead time).
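As a starting point for the equation asked for above: if uncorrelated (cosmic) events arrive Poisson-distributed with rate $R$ over the chip, the chance of at least one random coincidence within a shutter window $τ$ is $P = 1 - e^{-Rτ}$. A minimal sketch with placeholder numbers (the cosmic rate and shutter length below are assumptions for illustration, not measured inputs):

#+begin_src python
import math

def coincidence_probability(rate_hz: float, shutter_s: float) -> float:
    """Chance of >= 1 uncorrelated event in one shutter window, assuming
    Poisson-distributed arrivals: P = 1 - exp(-rate * shutter)."""
    return 1.0 - math.exp(-rate_hz * shutter_s)

# placeholder inputs: ~1 cosmic / cm^2 / min over a 2 cm x 2 cm chip,
# and a 2.2 s shutter window
area_cm2 = 2.0 * 2.0
rate = area_cm2 * 1.0 / 60.0          # Hz, integrated over the chip
p = coincidence_probability(rate, 2.2)
print(f"random coincidence probability per frame: {p:.3f}")
#+end_src

Multiplying this probability by the number of frames then yields the expected number of random coincidences, i.e. the effective dead-time contribution mentioned above.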
*** Background [0/2] :extended:
- [ ] background interpolation
  - we already did: a study of the statistical uncertainty (both MC as well as via error propagation)
  - [X] extract from error propagation code
    unclear what to do with these numbers!
- [ ] the septem veto can suffer from uncertainties due to possible random coincidences of events on the outer chips that veto a center event without actually being correlated. In our current application this implies a) a lower background rate, but b) a lower software signal efficiency, *as we might also remove real photons*. So its effect is on ε as well.
- [ ] think about random coincidences, derive some formula similar to a lab course exercise to compute the chance

**** Background interpolation [0/1]
:PROPERTIES:
:CUSTOM_ID: sec:uncertain:background_interpolation
:END:
Ref: [[#sec:background_interpolation_uncertainty]] and =ntangle= this file and run =/tmp/background_interpolation_error_propagation.nim= for the background interpolation with =Measuremancer= error propagation.

For an input of 8 clusters in a search radius around a point we get numbers such as:
=Normalized value (gauss) : 6.08e-06 ± 3.20e-06 CentiMeter⁻²•Second⁻¹=
so an error that is almost 50% of the input. However, keep in mind that this is for a small area around the specific point. Purely from Poisson statistics we expect an uncertainty of 2.82 for 8 events
\[
ΔN = √8 = 2.82
\]
As such this makes sense (the number is larger due to the gaussian nature of the distance calculation etc.), being just a weighted sum of =1 ± 1= terms error propagated. If we compute the same for a larger number of points, the error should go down, which can be seen comparing fig. [[background_uncertainty_mc_all_samplers_corrected]] with fig. [[background_uncertainty_mc_all_samplers_uncorrected_artificial_statistics]] (where the latter has artificially increased statistics). As this is purely a statistical effect, I'm not sure how to quantify any kind of systematic errors.
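The purely statistical scaling referred to here is just the relative Poisson uncertainty $ΔN/N = 1/√N$; a quick sketch of how it shrinks with statistics:

#+begin_src python
import math

def rel_poisson_uncertainty(n: int) -> float:
    """Relative Poisson uncertainty: sqrt(N) / N = 1 / sqrt(N)."""
    return 1.0 / math.sqrt(n)

# 8 clusters -> ~35% relative uncertainty, before the gaussian distance
# weighting of the interpolation inflates it further
for n in (8, 80, 800):
    print(f"N = {n:3d}: {rel_poisson_uncertainty(n):.1%}")
#+end_src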
The systematics come into play due to the:
- choice of radius & sigma
- choice of gaussian weighting
- choice of "energy radius"

- [ ] look at the background interpolation uncertainty section linked above. Modify it to also include a section about a flat model that varies the different parameters going into the interpolation.
- [ ] use the existing code to compute a systematic based on the kind of background model. Impact of the background hypothesis?

*** Energy calibration, likelihood method [0/1] :extended:
- [ ] the energy calibration as a whole has many uncertainties (due to detector variation, etc.)
  - gas gain time binning:
    - [ ] compute everything up to the background rate for no time binning, 90 min and maybe 1 or 2 other values. The influence on σ_b is the change in background that we see from this (will be a lot of work, but useful to make things more reproducible).
  - [ ] compute the energy of the 55Fe peaks after energy calibration. The variation gives an indication of the systematic influence.

**** Gas gain time binning
:PROPERTIES:
:CUSTOM_ID: sec:uncertain:gas_gain_binning
:END:
We need to investigate the impact of the gas gain binning on the background rate. How do we achieve that? Simplest approach:
1. Compute gas gain slices for the different cases (no binning, 30 min binning, 90 min binning, 240 min binning?)
2. calculate the energy based on the used gas gain binning
3. compute the background rate for each case
4. compare the amount of background after that.
Question: Do we need to recompute the gas gain for the calibration data as well? Yes, as the gas gain slices directly go into the 'gain fit' that needs to be done in order to compute the energy for any cluster.

So, the whole process is only made complicated by the fact that we need to change the =config.toml= file in between runs. In the future this should be a CL argument.
For the time being, we can use the same approach as in =/home/basti/CastData/ExternCode/TimepixAnalysis/Tools/backgroundRateDifferentEffs/backgroundRateDifferentEfficiencies.nim= where we simply read the toml file, rewrite the single line and write it back. Let's write a script that does mainly steps 1 to 3 for us. #+begin_src nim :tangle /tmp/compute_systematic_gas_gain_intervals.nim import shell, strformat, strutils, sequtils, os # an interval of 0 implies _no_ gas gain interval, i.e. full run const intervals = [0, 30, 90, 240] const Tmpl = "$#Runs$#_Reco.h5" const Path = "/home/basti/CastData/data/systematics/" const TomlFile = "/home/basti/CastData/ExternCode/TimepixAnalysis/Analysis/ingrid/config.toml" proc rewriteToml(path: string, interval: int) = ## rewrites the given TOML file in the `path` to use the `interval` ## instead of the existing value var data = readFile(path).splitLines for l in mitems(data): if interval == 0 and l.startsWith("fullRunGasGain"): l = "fullRunGasGain = true" elif interval != 0 and l.startsWith("fullRunGasGain"): l = "fullRunGasGain = false" elif interval != 0 and l.startsWith("gasGainInterval"): l = "gasGainInterval = " & $interval writeFile(path, data.join("\n")) proc computeGasGainSlices(fname: string, interval: int) = let (res, err, code) = shellVerboseErr: one: cd ~/CastData/data/systematics reconstruction ($fname) "--only_gas_gain" if code != 0: raise newException(Exception, "Error calculating gas gain for interval " & $interval) proc computeGasGainFit(fname: string, interval: int) = let (res, err, code) = shellVerboseErr: one: cd ~/CastData/data/systematics reconstruction ($fname) "--only_gain_fit" if code != 0: raise newException(Exception, "Error calculating gas gain fit for interval " & $interval) proc computeEnergy(fname: string, interval: int) = let (res, err, code) = shellVerboseErr: one: cd ~/CastData/data/systematics reconstruction ($fname) "--only_energy_from_e" if code != 0: raise newException(Exception, "Error 
calculating energy for interval " & $interval) proc computeLikelihood(f, outName: string, interval: int) = let args = { "--altCdlFile" : "~/CastData/data/CDL_2019/calibration-cdl-2018.h5", "--altRefFile" : "~/CastData/data/CDL_2019/XrayReferenceFile2018.h5", "--cdlYear" : "2018", "--region" : "crGold"} let argStr = args.mapIt(it[0] & " " & it[1]).join(" ") let (res, err, code) = shellVerboseErr: one: cd ~/CastData/data/systematics likelihood ($f) "--h5out" ($outName) ($argStr) if code != 0: raise newException(Exception, "Error computing likelihood cuts for interval " & $interval) #proc plotBackgroundRate(f1, f2: string, eff: float) = # let suffix = &"_eff_{eff}" # let (res, err, code) = shellVerboseErr: # one: # cd ~/CastData/ExternCode/TimepixAnalysis/Plotting/plotBackgroundRate # ./plotBackgroundRate ($f1) ($f2) "--suffix" ($suffix) # ./plotBackgroundRate ($f1) ($f2) "--separateFiles --suffix" ($suffix) # if code != 0: # raise newException(Exception, "Error plotting background rate for eff " & $eff) let years = [2017, 2018] let calibs = years.mapIt(Tmpl % ["Calibration", $it]) let backs = years.mapIt(Tmpl % ["Data", $it]) copyFile(TomlFile, "/tmp/toml_file.backup") for interval in intervals: ## rewrite toml file rewriteToml(TomlFile, interval) ## compute new gas gain for new interval for all files for f in concat(calibs, backs): computeGasGainSlices(f, interval) ## use gas gain slices to compute gas gain fit for f in calibs: computeGasGainFit(f, interval) ## compute energy based on new gain fit for f in concat(calibs, backs): computeEnergy(f, interval) ## compute likelihood based on new energies var logFs = newSeq[string]() for b in backs: let yr = if "2017" in b: "2017" else: "2018" let fname = &"out/lhood_{yr}_interval_{interval}.h5" logFs.add fname ## XXX: need to redo likelihood computation!! computeLikelihood(b, fname, interval) ## plot background rate for all combined? or just plot cluster centers? can all be done later... 
  #plotBackgroundRate(log, eff)
#+end_src

#+begin_src nim
import shell, strformat, strutils, sequtils, os
# an interval of 0 implies _no_ gas gain interval, i.e. full run
const intervals = [0, 30, 90, 240]
const Tmpl = "$#Runs$#_Reco.h5"
echo (Tmpl % ["Data", "2017"]).extractFilename
#+end_src

#+RESULTS:
: DataRuns2017_Reco.h5

The resulting files are found in [[~/CastData/data/systematics/out/]] or [[~/CastData/data/systematics/]] on my laptop. Let's extract the number of clusters found on the center chip (gold region) for each of the intervals:
#+begin_src sh :results drawer
cd ~/CastData/data/systematics
for i in 0 30 90 240
do
    echo Interval: $i
    extractClusterInfo -f lhood_2017_interval_$i.h5 --short --region crGold
    extractClusterInfo -f lhood_2018_interval_$i.h5 --short --region crGold
done
#+end_src

#+RESULTS:
:results:
Interval: 0
Found 497 clusters in region: crGold
Found 244 clusters in region: crGold
Interval: 30
Found 499 clusters in region: crGold
Found 244 clusters in region: crGold
Interval: 90
Found 500 clusters in region: crGold
Found 243 clusters in region: crGold
Interval: 240
Found 497 clusters in region: crGold
Found 244 clusters in region: crGold
:end:

The numbers pretty much speak for themselves.
#+begin_src nim
let nums = { 0 : 497 + 244,
             30 : 499 + 244,
             90 : 500 + 243,
             240 : 497 + 244 }
# reference is 90
let num90 = nums[2][1]
var minVal = Inf
var maxVal = 0.0
for num in nums:
  let rat = num[1] / num90
  echo "Ratio of ", num, " = ", rat
  minVal = min(minVal, rat)
  maxVal = max(maxVal, rat)
echo "Deviation: ", maxVal - minVal
#+end_src

#+RESULTS:
| Ratio | of | (0, | 741) | = | 0.9973082099596231 |
| Ratio | of | (30, | 743) | = | 1.0 |
| Ratio | of | (90, | 743) | = | 1.0 |
| Ratio | of | (240, | 741) | = | 0.9973082099596231 |
| Deviation: | 0.002691790040376896 | | | | |

*NOTE*: The one 'drawback' of the approach taken here is the following: the CDL data was _not_ reconstructed using the changed gas gain data.
*However*, that is much less important, as we assume constant gain over the CDL runs anyway, more or less / want to pick the most precise description of our data!

**** Interpolation of reference distributions (CDL morphing) [/]
:PROPERTIES:
:CUSTOM_ID: sec:uncertain:cdl_morphing
:END:
We already did the study of the variation in the interpolation for the reference distributions. To estimate the systematic uncertainty related to that, we should simply look at the computation of the "intermediate" distributions again and compare the real numbers to the interpolated ones. The deviation can be computed per bin. The average & some quantiles should be a good number to refer to as a systematic.

The =cdlMorphing= tool [[file:~/CastData/ExternCode/TimepixAnalysis/Tools/cdlMorphing/cdlMorphing.nim]] is well suited to this. We will compute the difference between the morphed and real data for each bin & sum the squares for each target/filter (those that are morphed, so not the outer two of course).

Running the tool now yields the following output:
#+begin_src
Target/Filter: Cu-EPIC-0.9kV = 0.0006215219861090395
Target/Filter: Cu-EPIC-2kV = 0.0007052150065674744
Target/Filter: Al-Al-4kV = 0.001483398679126846
Target/Filter: Ag-Ag-6kV = 0.001126063558474516
Target/Filter: Ti-Ti-9kV = 0.0006524420692883554
Target/Filter: Mn-Cr-12kV = 0.0004757207676502019
Mean difference 0.0008440603445360723
#+end_src
So we really have a minuscule difference there.
- [ ] also compute the background rate achieved using no CDL morphing vs. using it.

**** Energy calibration [/]
:PROPERTIES:
:CUSTOM_ID: sec:uncertain:energy_calibration
:END:
- [ ] compute the peaks of the 55Fe energy. What is the variation?

**** Software efficiency systematic [/]
:PROPERTIES:
:CUSTOM_ID: sec:uncertain:software_efficiency
:END:
In order to estimate the systematic uncertainty of the software efficiency, we can push all calibration data through the likelihood cuts and evaluate the real efficiency that way.
This means the following: - compute likelihood values for all calibration runs - for each run, remove extreme outliers using rough RMS transverse & eccentricity cuts - filter to 2 energies (essentially a secondary cut), the photopeak and escape peak - for each peak, push through likelihood cut. # after / # before is software efficiency at that energy The variation we'll see over all runs tells us something about the systematic uncertainty & potential bias. *UPDATE*: The results presented below the code were computed with the code snippet here *as is* (and multiple arguments of course, check =zsh_history= at home for details). A modified version now also lives at [[file:~/CastData/ExternCode/TimepixAnalysis/Tools/determineEffectiveEfficiency.nim]] - [ ] *REMOVE THE PIECE OF CODE HERE, REPLACE BY CALL TO ABOVE!* *UPDATE2* :<2022-08-24 Wed 19:15> While working on the below code for the script mentioned in the first update, I noticed a bug in the =filterEvents= function: #+begin_src nim of "Escapepeak": let dset = 5.9.toRefDset() let xrayCuts = xrayCutsTab[dset] result.add applyFilters(df) of "Photopeak": let dset = 2.9.toRefDset() let xrayCuts = xrayCutsTab[dset] result.add applyFilters(df) #+end_src the energies are exchanged and =applyFilters= is applied to =df= and not =subDf= as it should here! - [ ] Investigate the effect for the systematics of CAST! -> <2023-03-20 Mon 21:53>: I just had a short look at this. 
It seems like this is the correct output: :RESULTS: DataFrame with 3 columns and 67 rows: Idx Escapepeak Photopeak RunNumber dtype: float float int 0 0.6579 0.7542 83 1 0.6452 0.787 88 2 0.6771 0.7667 93 3 0.7975 0.7599 96 4 0.799 0.7605 102 5 0.8155 0.7679 108 6 0.7512 0.7588 110 7 0.8253 0.7769 116 8 0.7766 0.7642 118 9 0.7752 0.7765 120 10 0.7556 0.7678 122 11 0.7788 0.7711 126 12 0.7749 0.7649 128 13 0.8162 0.7807 145 14 0.8393 0.7804 147 15 0.7778 0.78 149 16 0.8153 0.778 151 17 0.7591 0.7873 153 18 0.8229 0.7819 155 19 0.8341 0.7661 157 20 0.7788 0.7666 159 21 0.7912 0.7639 161 22 0.8041 0.7675 163 23 0.7884 0.777 165 24 0.8213 0.7791 167 25 0.7994 0.7833 169 26 0.8319 0.7891 171 27 0.8483 0.7729 173 28 0.7973 0.7733 175 29 0.834 0.7771 177 30 0.802 0.773 179 31 0.7763 0.7687 181 32 0.8061 0.766 183 33 0.7916 0.7799 185 34 0.8131 0.7745 187 35 0.8366 0.8256 239 36 0.8282 0.8035 241 37 0.8072 0.8045 243 38 0.851 0.8155 245 39 0.7637 0.8086 247 40 0.8439 0.8135 249 41 0.8571 0.8022 251 42 0.7854 0.7851 253 43 0.8159 0.7843 255 44 0.815 0.7827 257 45 0.8783 0.8123 259 46 0.8354 0.8094 260 47 0.8 0.789 262 48 0.8038 0.8097 264 49 0.7926 0.7937 266 50 0.8275 0.7961 269 51 0.8514 0.8039 271 52 0.8089 0.7835 273 53 0.8134 0.7789 275 54 0.8168 0.7873 277 55 0.8198 0.7886 280 56 0.8447 0.7833 282 57 0.7876 0.7916 284 58 0.8093 0.8032 286 59 0.7945 0.8059 288 60 0.8407 0.7981 290 61 0.7824 0.78 292 62 0.7885 0.7869 294 63 0.7933 0.7823 296 64 0.837 0.7834 300 65 0.7594 0.7826 302 66 0.8333 0.7949 304 Std Escape = 0.04106537728575545 Std Photo = 0.01581231947284212 Mean Escape = 0.8015071105396809 Mean Photo = 0.7837728948033928 :END: So a bit worse than initially thought... 
#+begin_src nim :tangle /tmp/calibration_software_eff_syst.nim import std / [os, strutils, random, sequtils, stats, strformat] import nimhdf5, cligen import numericalnim except linspace import ingrid / private / [likelihood_utils, hdf5_utils, ggplot_utils, geometry, cdl_cuts] import ingrid / calibration import ingrid / calibration / [fit_functions] import ingrid / ingrid_types import ingridDatabase / [databaseRead, databaseDefinitions, databaseUtils] # cut performed regardless of logL value on the data, since transverse # rms > 1.5 cannot be a physical photon, due to diffusion in 3cm drift # distance const RmsCleaningCut = 1.5 let CdlFile = "/home/basti/CastData/data/CDL_2019/calibration-cdl-2018.h5" let RefFile = "/home/basti/CastData/data/CDL_2019/XrayReferenceFile2018.h5" proc drawNewEvent(rms, energy: seq[float]): int = let num = rms.len - 1 var idx = rand(num) while rms[idx] >= RmsCleaningCut or (energy[idx] <= 4.5 or energy[idx] >= 7.5): idx = rand(num) result = idx proc computeEnergy(h5f: H5File, pix: seq[Pix], group: string, a, b, c, t, bL, mL: float): float = let totalCharge = pix.mapIt(calibrateCharge(it.ch.float, a, b, c, t)).sum # compute mean of all gas gain slices in this run (most sensible) let gain = h5f[group / "chip_3/gasGainSlices", GasGainIntervalResult].mapIt(it.G).mean let calibFactor = linearFunc(@[bL, mL], gain) * 1e-6 # now calculate energy for all hits result = totalCharge * calibFactor proc generateFakeData(h5f: H5File, nFake: int, energy = 3.0): DataFrame = ## For each run generate `nFake` fake events let refSetTuple = readRefDsets(RefFile, yr2018) result = newDataFrame() for (num, group) in runs(h5f): # first read all x / y / tot data echo "Run number: ", num let xs = h5f[group / "chip_3/x", special_type(uint8), uint8] let ys = h5f[group / "chip_3/y", special_type(uint8), uint8] let ts = h5f[group / "chip_3/ToT", special_type(uint16), uint16] let rms = h5f[group / "chip_3/rmsTransverse", float] let cX = h5f[group / "chip_3/centerX", 
float] let cY = h5f[group / "chip_3/centerY", float] let energyInput = h5f[group / "chip_3/energyFromCharge", float] let chipGrp = h5f[(group / "chip_3").grp_str] let chipName = chipGrp.attrs["chipName", string] # get factors for charge calibration let (a, b, c, t) = getTotCalibParameters(chipName, num) # get factors for charge / gas gain fit let (bL, mL) = getCalibVsGasGainFactors(chipName, num, suffix = $gcIndividualFits) var count = 0 var evIdx = 0 when false: for i in 0 ..< xs.len: if xs[i].len < 150 and energyInput[i] > 5.5: # recompute from data let pp = toSeq(0 ..< xs[i].len).mapIt((x: xs[i][it], y: ys[i][it], ch: ts[i][it])) let newEnergy = h5f.computeEnergy(pp, group, a, b, c, t, bL, mL) echo "Length ", xs[i].len , " w/ energy ", energyInput[i], " recomp ", newEnergy let df = toDf({"x" : pp.mapIt(it.x.int), "y" : pp.mapIt(it.y.int), "ch" : pp.mapIt(it.ch.int)}) ggplot(df, aes("x", "y", color = "ch")) + geom_point() + ggtitle("funny its real") + ggsave("/tmp/fake_event_" & $i & ".pdf") sleep(200) if true: quit() # to store fake data var energies = newSeqOfCap[float](nFake) var logLs = newSeqOfCap[float](nFake) var rmss = newSeqOfCap[float](nFake) var eccs = newSeqOfCap[float](nFake) var ldivs = newSeqOfCap[float](nFake) var frins = newSeqOfCap[float](nFake) var cxxs = newSeqOfCap[float](nFake) var cyys = newSeqOfCap[float](nFake) var lengths = newSeqOfCap[float](nFake) while count < nFake: # draw index from to generate a fake event evIdx = drawNewEvent(rms, energyInput) # draw number of fake pixels # compute ref # pixels for this event taking into account possible double counting etc. 
      let basePixels = (energy / energyInput[evIdx] * xs[evIdx].len.float)
      # ~115 pixels as reference at 3 keV (26 eV per pixel), draw normal w/ σ = 10 around it
      let nPix = round(basePixels + gauss(sigma = 10.0)).int
      if nPix < 4:
        echo "Less than 4 pixels: ", nPix, " skipping"
        continue
      var pix = newSeq[Pix](nPix)
      var seenPix: set[uint16] = {}
      let evNumPix = xs[evIdx].len
      if nPix >= evNumPix:
        echo "More pixels to draw than available! ", nPix, " vs ", evNumPix, ", skipping!"
        continue
      if not inRegion(cX[evIdx], cY[evIdx], crSilver):
        echo "Not in silver region. Not a good basis"
        continue
      var pIdx = rand(evNumPix - 1)
      for j in 0 ..< nPix:
        # draw a pixel index not yet used
        while pIdx.uint16 in seenPix:
          pIdx = rand(evNumPix - 1)
        seenPix.incl pIdx.uint16
        pix[j] = (x: xs[evIdx][pIdx], y: ys[evIdx][pIdx], ch: ts[evIdx][pIdx])
      # now draw
      when false: # debugging: plot the drawn fake event
        let df = toDf({"x" : pix.mapIt(it.x.int), "y" : pix.mapIt(it.y.int), "ch" : pix.mapIt(it.ch.int)})
        ggplot(df, aes("x", "y", color = "ch")) +
          geom_point() +
          ggsave("/tmp/fake_event.pdf")
        sleep(200)
      # reconstruct event
      let inp = (pixels: pix, eventNumber: 0, toa: newSeq[uint16](), toaCombined: newSeq[uint64]())
      let recoEv = recoEvent(inp, -1, num, searchRadius = 50,
                             dbscanEpsilon = 65, clusterAlgo = caDefault)
      if recoEv.cluster.len > 1 or recoEv.cluster.len == 0:
        echo "Found more than 1 or 0 clusters! Skipping"
        continue
      # compute charge
      let energy = h5f.computeEnergy(pix, group, a, b, c, t, bL, mL)
      # puhhh, now the likelihood...
      let ecc = recoEv.cluster[0].geometry.eccentricity
      let ldiv = recoEv.cluster[0].geometry.lengthDivRmsTrans
      let frin = recoEv.cluster[0].geometry.fractionInTransverseRms
      let logL = calcLikelihoodForEvent(energy, ecc, ldiv, frin, refSetTuple)
      # finally done
      energies.add energy
      logLs.add logL
      rmss.add recoEv.cluster[0].geometry.rmsTransverse
      eccs.add ecc
      ldivs.add ldiv
      frins.add frin
      cxxs.add recoEv.cluster[0].centerX
      cyys.add recoEv.cluster[0].centerY
      lengths.add recoEv.cluster[0].geometry.length
      inc count
    let df = toDf({ "energyFromCharge" : energies,
                    "likelihood" : logLs,
                    "runNumber" : num,
                    "rmsTransverse" : rmss,
                    "eccentricity" : eccs,
                    "lengthDivRmsTrans" : ldivs,
                    "centerX" : cxxs,
                    "centerY" : cyys,
                    "length" : lengths,
                    "fractionInTransverseRms" : frins })
    result.add df

proc applyLogLCut(df: DataFrame, cutTab: CutValueInterpolator): DataFrame =
  result = df.mutate(f{float: "passLogL?" ~ (block:
    #echo "Cut value: ", cutTab[idx(igEnergyFromCharge.toDset())], " at dset ", toRefDset(idx(igEnergyFromCharge.toDset())), " at energy ", idx(igEnergyFromCharge.toDset())
    idx(igLikelihood.toDset()) < cutTab[idx(igEnergyFromCharge.toDset())])})

proc readRunData(h5f: H5File): DataFrame =
  result = h5f.readDsets(chipDsets =
    some((chip: 3,
          dsets: @[igEnergyFromCharge.toDset(),
                   igRmsTransverse.toDset(),
                   igLengthDivRmsTrans.toDset(),
                   igFractionInTransverseRms.toDset(),
                   igEccentricity.toDset(),
                   igCenterX.toDset(),
                   igCenterY.toDset(),
                   igLength.toDset(),
                   igLikelihood.toDset()])))

proc filterEvents(df: DataFrame, energy: float = Inf): DataFrame =
  let xrayCutsTab {.global.} = getXrayCleaningCuts()
  template applyFilters(dfI: untyped): untyped {.dirty.} =
    let minRms = xrayCuts.minRms
    let maxRms = xrayCuts.maxRms
    let maxLen = xrayCuts.maxLength
    let maxEcc = xrayCuts.maxEccentricity
    dfI.filter(f{float -> bool:
      idx(igRmsTransverse.toDset()) < RmsCleaningCut and
      inRegion(idx("centerX"), idx("centerY"), crSilver) and
      idx("rmsTransverse") >= minRms and
      idx("rmsTransverse") <= maxRms and
      idx("length") <= maxLen and
      idx("eccentricity") <= maxEcc })
  if "Peak" in df:
    doAssert classify(energy) == fcInf
    result = newDataFrame()
    for (tup, subDf) in groups(df.group_by("Peak")):
      case tup[0][1].toStr
      of "Escapepeak":
        let dset = 2.9.toRefDset() # escape peak sits at ~2.9 keV
        let xrayCuts = xrayCutsTab[dset]
        result.add applyFilters(subDf)
      of "Photopeak":
        let dset = 5.9.toRefDset() # photopeak sits at 5.9 keV
        let xrayCuts = xrayCutsTab[dset]
        result.add applyFilters(subDf)
      else: doAssert false, "Invalid name"
  else:
    doAssert classify(energy) != fcInf
    let dset = energy.toRefDset()
    let xrayCuts = xrayCutsTab[dset]
    result = applyFilters(df)

proc splitPeaks(df: DataFrame): DataFrame =
  let eD = igEnergyFromCharge.toDset()
  result = df.mutate(f{float -> string: "Peak" ~ (
      if idx(eD) < 3.5 and idx(eD) > 2.5:
        "Escapepeak"
      elif idx(eD) > 4.5 and idx(eD) < 7.5:
        "Photopeak"
      else:
        "None")})
    .filter(f{`Peak` != "None"})

proc handleFile(fname: string, cutTab: CutValueInterpolator): DataFrame =
  ## Given a single input file, performs application of the likelihood cut for all
  ## runs in it, split by photo & escape peak. Returns a DF with a column indicating
  ## the peak, the energy of each event & a column stating whether it passed the
  ## likelihood cut. Only events that pass the input cuts are stored.
  let h5f = H5open(fname, "r")
  randomize(423)
  result = newDataFrame()
  let data = h5f.readRunData()
    .splitPeaks()
    .filterEvents()
    .applyLogLCut(cutTab)
  result.add data
  when false: # debugging: plot the energy spectrum
    ggplot(result, aes("energyFromCharge")) +
      geom_histogram(bins = 200) +
      ggsave("/tmp/ugl.pdf")
  discard h5f.close()

proc handleFakeData(fname: string, energy: float, cutTab: CutValueInterpolator): DataFrame =
  let h5f = H5open(fname, "r")
  var data = generateFakeData(h5f, 5000, energy = energy)
    .filterEvents(energy)
    .applyLogLCut(cutTab)
  result = data
  discard h5f.close()

proc getIndices(dset: string): seq[int] =
  result = newSeq[int]()
  applyLogLFilterCuts(CdlFile, RefFile, dset, yr2018, igEnergyFromCharge):
    result.add i

proc plotRefHistos(df: DataFrame, energy: float, cutTab: CutValueInterpolator,
                   dfAdditions: seq[tuple[name: string, df: DataFrame]] = @[]) =
  # map input fake energy to reference dataset
  let grp = energy.toRefDset()
  let passedInds = getIndices(grp)
  let h5f = H5open(RefFile, "r")
  let h5fC = H5open(CdlFile, "r")
  const xray_ref = getXrayRefTable()
  #for (i, grp) in pairs(xray_ref):
  var dfR = newDataFrame()
  for dset in IngridDsetKind:
    try:
      let d = dset.toDset()
      if d notin df: continue # skip things not in input
      ## first read data from CDL file (exists for sure):
      ## extract all CDL data that passes the cuts used to generate the logL histograms
      var cdlFiltered = newSeq[float](passedInds.len)
      let cdlRaw = h5fC[cdlGroupName(grp, "2019", d), float]
      for i, idx in passedInds:
        cdlFiltered[i] = cdlRaw[idx]
      echo "Total number of elements ", cdlRaw.len, " filtered to ", passedInds.len
      dfR[d] = cdlFiltered
      ## now read histograms from RefFile, if they exist (not all datasets do)
      if grp / d in h5f:
        let dsetH5 = h5f[(grp / d).dset_str]
        let (bins, data) = dsetH5[float].reshape2D(dsetH5.shape).split(Seq2Col)
        let fname = &"/tmp/{grp}_{d}_energy_{energy:.1f}.pdf"
        echo "Storing histogram in : ", fname
        # now add fake data
        let dataSum = simpson(data, bins)
        let refDf = toDf({"bins" : bins, "data" : data})
          .mutate(f{"data" ~ `data` / dataSum})
        let df = df.filter(f{float: idx(d) <= bins[^1]})
        ggplot(refDf, aes("bins", "data")) +
          geom_histogram(stat = "identity", hdKind = hdOutline, alpha = 0.5) +
          geom_histogram(data = df, aes = aes(d), bins = 200, alpha = 0.5,
                         fillColor = "orange", density = true, hdKind = hdOutline) +
          ggtitle(&"{d}. Orange: fake data from 'reducing' 5.9 keV data @ {energy:.1f}. Black: CDL ref {grp}") +
          ggsave(fname, width = 1000, height = 600)
    except AssertionError:
      continue
  # get effect of logL cut on CDL data
  dfR = dfR.applyLogLCut(cutTab)
  var dfs = @[("Fake", df), ("Real", dfR)]
  if dfAdditions.len > 0:
    dfs = concat(dfs, dfAdditions)
  var dfPlot = bind_rows(dfs, "Type")
  echo "Rough filter starts from: ", dfPlot.len
  dfPlot = dfPlot.filter(f{`lengthDivRmsTrans` <= 50.0 and `eccentricity` <= 5.0})
  echo "To ", dfPlot.len, " elements"
  ggplot(dfPlot, aes("lengthDivRmsTrans", "fractionInTransverseRms", color = "eccentricity")) +
    facet_wrap("Type") +
    geom_point(size = 1.0, alpha = 0.5) +
    ggtitle(&"Fake energy: {energy:.2f}, CDL dataset: {grp}") +
    ggsave(&"/tmp/scatter_colored_fake_energy_{energy:.2f}.png", width = 1200, height = 800)
  # plot likelihood histograms
  ggplot(dfPlot, aes("likelihood", fill = "Type")) +
    geom_histogram(bins = 200, alpha = 0.5, hdKind = hdOutline) +
    ggtitle(&"Fake energy: {energy:.2f}, CDL dataset: {grp}") +
    ggsave(&"/tmp/histogram_fake_energy_{energy:.2f}.pdf", width = 800, height = 600)
  discard h5f.close()
  discard h5fC.close()
  echo "DATASET : ", grp, "--------------------------------------------------------------------------------"
  echo "Efficiency of logL cut on filtered CDL data (should be 80%!) = ",
    dfR.filter(f{idx("passLogL?") == true}).len.float / dfR.len.float
  echo "Elements passing using `passLogL?` ", dfR.filter(f{idx("passLogL?") == true}).len,
    " vs total ", dfR.len
  let (hist, bins) = histogram(dfR["likelihood", float].toRawSeq, 200, (0.0, 30.0))
  ggplot(toDf({"Bins" : bins, "Hist" : hist}), aes("Bins", "Hist")) +
    geom_histogram(stat = "identity") +
    ggsave("/tmp/usage_histo_" & $grp & ".pdf")
  let cutVal = determineCutValue(hist, eff = 0.8)
  echo "Cut value from `determineCutValue`: ", bins[cutVal]

proc main(files: seq[string], fake = false, real = false, refPlots = false,
          energies: seq[float] = @[]) =
  ## given the input files of calibration runs, walks all files to determine the
  ## 'real' software efficiency for them & generates a plot
  let cutTab = calcCutValueTab(CdlFile, RefFile, yr2018, igEnergyFromCharge)
  var df = newDataFrame()
  if real and not fake:
    for f in files:
      df.add handleFile(f, cutTab)
    var effEsc = newSeq[float]()
    var effPho = newSeq[float]()
    var nums = newSeq[int]()
    for (tup, subDf) in groups(df.group_by(@["runNumber", "Peak"])):
      echo "------------------"
      echo tup
      #echo subDf
      let eff = subDf.filter(f{idx("passLogL?") == true}).len.float / subDf.len.float
      echo "Software efficiency: ", eff
      if tup[1][1].toStr == "Escapepeak":
        effEsc.add eff
      elif tup[1][1].toStr == "Photopeak":
        effPho.add eff
        nums.add tup[0][1].toInt # only add in one branch
      echo "------------------"
    let dfEff = toDf({"Escapepeak" : effEsc, "Photopeak" : effPho, "RunNumber" : nums})
    echo dfEff.pretty(-1)
    let stdEsc = effEsc.standardDeviationS
    let stdPho = effPho.standardDeviationS
    let meanEsc = effEsc.mean
    let meanPho = effPho.mean
    echo "Std Escape = ", stdEsc
    echo "Std Photo = ", stdPho
    echo "Mean Escape = ", meanEsc
    echo "Mean Photo = ", meanPho
    ggplot(dfEff.gather(["Escapepeak", "Photopeak"], "Type", "Value"),
           aes("Value", fill = "Type")) +
      geom_histogram(bins = 20, hdKind = hdOutline, alpha = 0.5) +
      ggtitle(&"σ_escape = {stdEsc:.4f}, μ_escape = {meanEsc:.4f}, σ_photo = {stdPho:.4f}, μ_photo = {meanPho:.4f}") +
      ggsave("/tmp/software_efficiencies_cast_escape_photo.pdf", width = 800, height = 600)
    for (tup, subDf) in groups(df.group_by("Peak")):
      case tup[0][1].toStr
      of "Escapepeak": plotRefHistos(df, 2.9, cutTab)
      of "Photopeak": plotRefHistos(df, 5.9, cutTab)
      else: doAssert false, "Invalid data: " & $tup[0][1].toStr
  if fake and not real:
    var effs = newSeq[float]()
    for e in energies:
      if e > 5.9:
        echo "Warning: energy above 5.9 keV not allowed!"
        return
      df = newDataFrame()
      for f in files:
        df.add handleFakeData(f, e, cutTab)
      plotRefHistos(df, e, cutTab)
      echo "Done generating for energy ", e
      effs.add(df.filter(f{idx("passLogL?") == true}).len.float / df.len.float)
    let dfL = toDf({"Energy" : energies, "Efficiency" : effs})
    echo dfL
    ggplot(dfL, aes("Energy", "Efficiency")) +
      geom_point() +
      ggtitle("Software efficiency from 'fake' events") +
      ggsave("/tmp/fake_software_effs.pdf")
  if fake and real:
    doAssert files.len == 1, "Not more than 1 file supported!"
    let f = files[0]
    let dfCast = handleFile(f, cutTab)
    for (tup, subDf) in groups(dfCast.group_by("Peak")):
      case tup[0][1].toStr
      of "Escapepeak": plotRefHistos(handleFakeData(f, 2.9, cutTab), 2.9, cutTab, @[("CAST", subDf)])
      of "Photopeak": plotRefHistos(handleFakeData(f, 5.9, cutTab), 5.9, cutTab, @[("CAST", subDf)])
      else: doAssert false, "Invalid data: " & $tup[0][1].toStr
  #if refPlots:
  #  plotRefHistos()

when isMainModule:
  dispatch main
#+end_src

*UPDATE* <2022-05-06 Fri 12:32>: The discussion of the results of the above
code is limited to those relevant for the systematic uncertainty of the
software efficiency. For the debugging of the unexpected software efficiencies
computed for the calibration photo & escape peaks, see section
[[file:~/org/Doc/StatusAndProgress.org::#sec:debug_software_efficiency_cdl_mapping_bug]].

After the debugging session to figure out why the software efficiency is so
different, here are finally the results of this study.
The software efficiencies for the escape & photopeak energies from the
calibration data at CAST are determined as follows:
- filter to events with =rmsTransverse= <= 1.5
- filter to events within the silver region
- filter to events passing the 'X-ray cuts'
- for the escape & photopeak, filter to energies within 1 & 1.5 keV around the
  respective peak
The remaining events are then used as the "basis" for the evaluation. From
here the likelihood cut method is applied to all clusters. In the final step
the ratio of clusters passing the logL cut over all clusters is computed,
which gives the effective software efficiency of the data.

For all 2017 and 2018 runs this gives:
#+begin_src sh
Dataframe with 3 columns and 67 rows:
   Idx   Escapepeak   Photopeak   RunNumber
dtype:        float       float         int
     0       0.6886       0.756          83
     1       0.6845       0.794          88
     2       0.6789       0.7722         93
     3       0.7748       0.7585         96
     4       0.8111       0.769         102
     5       0.7979       0.765         108
     6       0.7346       0.7736        110
     7       0.7682       0.7736        116
     8       0.7593       0.775         118
     9       0.7717       0.7754        120
    10       0.7628       0.7714        122
    11       0.7616       0.7675        126
    12       0.7757       0.7659        128
    13       0.8274       0.7889        145
    14       0.7974       0.7908        147
    15       0.7969       0.7846        149
    16       0.7919       0.7853        151
    17       0.7574       0.7913        153
    18       0.835        0.7887        155
    19       0.8119       0.7755        157
    20       0.7738       0.7763        159
    21       0.7937       0.7736        161
    22       0.7801       0.769         163
    23       0.8          0.7801        165
    24       0.8014       0.785         167
    25       0.7922       0.787         169
    26       0.8237       0.7945        171
    27       0.8392       0.781         173
    28       0.8092       0.7756        175
    29       0.8124       0.7864        177
    30       0.803        0.7818        179
    31       0.7727       0.7742        181
    32       0.7758       0.7676        183
    33       0.7993       0.7817        185
    34       0.8201       0.7757        187
    35       0.824        0.8269        239
    36       0.8369       0.8186        241
    37       0.7953       0.8097        243
    38       0.8205       0.8145        245
    39       0.775        0.8117        247
    40       0.8368       0.8264        249
    41       0.8405       0.8105        251
    42       0.7804       0.803         253
    43       0.8177       0.7907        255
    44       0.801        0.7868        257
    45       0.832        0.8168        259
    46       0.8182       0.8074        260
    47       0.7928       0.7995        262
    48       0.7906       0.8185        264
    49       0.7933       0.8039        266
    50       0.8026       0.811         269
    51       0.8328       0.8086        271
    52       0.8024       0.7989        273
    53       0.8065       0.7911        275
    54       0.807        0.8006        277
    55       0.7895       0.7963        280
    56       0.8133       0.7918        282
    57       0.7939       0.8037        284
    58       0.7963       0.8066        286
    59       0.8104       0.8181        288
    60       0.8056       0.809         290
    61       0.762        0.7999        292
    62       0.7659       0.8021        294
    63       0.7648       0.79          296
    64       0.7868       0.7952        300
    65       0.7815       0.8036        302
    66       0.8276       0.8078        304
#+end_src
with the following statistical summaries:
#+begin_src sh
Std Escape  = 0.03320160467567293
Std Photo   = 0.01727763707839311
Mean Escape = 0.7923601424260915
Mean Photo  = 0.7909126317171645
#+end_src
(where =Std= really is the standard deviation. For the escape data this is
skewed due to the first 3 runs, as visible in the DF output above.)

The data as a histogram:
#+CAPTION: Histogram of the effective software efficiencies for escape and photopeak
#+CAPTION: data at CAST for all 2017/18 calibration runs. The low efficiency outliers
#+CAPTION: are the first 3 calibration runs in 2017.
[[~/org/Figs/statusAndProgress/systematics/software_efficiencies_cast_escape_photo.pdf]]

Further, we can also ask about the behavior of fake data. Let's generate a set
and look at the effective efficiency of fake data.
#+CAPTION: Fake effective software efficiencies at different energies. Clusters are generated
#+CAPTION: from valid 5.9 keV photopeak clusters (that pass the required cuts) by randomly removing
#+CAPTION: a certain number of pixels until the desired energy is reached. Given the
#+CAPTION: approach, the achieved efficiencies seem fine.
[[~/org/Figs/statusAndProgress/systematics/fake_effective_software_efficiencies.pdf]]

#+CAPTION: Histograms showing the different distributions of the properties for the generated fake
#+CAPTION: data compared to the real reference data from the CDL. At the lowest energies the
#+CAPTION: properties start to diverge quite a bit, likely explaining the lower efficiency there.
[[~/org/Figs/statusAndProgress/systematics/histograms_properties_fake_vs_referenc_data.pdf]]

#+CAPTION: Scatter plots of the different parameters going into the logL cut method comparing
#+CAPTION: the CDL reference data & the fake generated data. The cuts (X-ray for fake &
#+CAPTION: X-ray + reference for CDL) are applied.
[[~/org/Figs/statusAndProgress/systematics/scatter_plots_logL_parameters_fake_vs_ref_data.pdf]]

*NOTE*: One big TODO is the following:
- [ ] Currently the cut values for the logL are computed using a histogram of
  200 bins, which already results in a significant variance of around 1% in
  the CDL data. By increasing the number of bins this variance goes to 0
  (eventually it depends on the number of data points). In theory I don't see
  why we can't compute the cut value purely based on the unbinned
  data. Investigate / do this!
- [ ] Choose the final uncertainty for this variable that we want to use.

***** (While generating fake data) Events with large energy, but few pixels
:PROPERTIES:
:CUSTOM_ID: sec:large_events_few_pixels_tot
:END:

While developing some fake data using existing events in the photopeak &
filtering out pixels to end up at ~3 keV, I noticed the prevalence of events
with <150 pixels & ~6 keV energy.

Produced by splicing the following code into the body of =generateFakeData=:
#+begin_src nim
for i in 0 ..< xs.len:
  if xs[i].len < 150 and energyInput[i] > 5.5:
    # recompute from data
    let pp = toSeq(0 ..< xs[i].len).mapIt((x: xs[i][it], y: ys[i][it], ch: ts[i][it]))
    let newEnergy = h5f.computeEnergy(pp, group, a, b, c, t, bL, mL)
    echo "Length ", xs[i].len , " w/ energy ", energyInput[i], " recomp ", newEnergy
    let df = toDf({"x" : pp.mapIt(it.x.int), "y" : pp.mapIt(it.y.int), "ch" : pp.mapIt(it.ch.int)})
    ggplot(df, aes("x", "y", color = "ch")) +
      geom_point() +
      ggtitle("funny its real") +
      ggsave("/tmp/fake_event.pdf")
    sleep(200)
if true: quit()
#+end_src

This gives about 100 events that fit the criteria out of a total of O(20000).
A ratio of about 1/200 seems plausible for the absorption of X-rays at 5.9 keV.
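As a quick cross-check of that ratio, one can integrate the exponential conversion probability over the 3 cm drift volume. The following is a small sketch (in Python for brevity; the absorption length of 2 cm for 5.9 keV X-rays in the gas is an assumed, purely illustrative value):

#+begin_src python
import math

def conversion_fraction(depth_from: float, depth_to: float,
                        drift: float = 3.0, absorption_length: float = 2.0) -> float:
    """Fraction of *converted* X-rays whose conversion point lies within
    [depth_from, depth_to] (in cm, measured from the detector window),
    assuming exponential absorption with the given absorption length."""
    def cdf(x: float) -> float:
        # probability to have converted within the first `x` cm
        return 1.0 - math.exp(-x / absorption_length)
    return (cdf(depth_to) - cdf(depth_from)) / cdf(drift)

# fraction of converted X-rays ending up within 300 µm of the grid,
# i.e. candidates for such dense, barely diffused events
frac_near_grid = conversion_fraction(2.97, 3.0)
#+end_src

With these (assumed) numbers the fraction comes out at the sub-percent level, i.e. the right order of magnitude for a ratio like 1/200.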
While plotting these events I noticed that they are all incredibly dense, like:

[[file:~/org/Figs/statusAndProgress/exampleEvents/event_few_pixels_large_energy.pdf]]

These must be events where the X-ray to photoelectron conversion happens very
close to the grid! This is one argument "in favor" of using ~ToT~ instead of
ToA on the Timepix1 and, more importantly, a good reason to keep using the
~ToT~ values instead of pure pixel counting, for at least some events!

- [ ] We should look at number of pixels vs. energy as a scatter plot to see
  what this gives us.

** Putting it all together :noexport:

- [ ] I don't think this is necessary now?

First: show the basic algorithm in pseudo code to compute a likelihood value
for a set of parameters ($g_{ae}²$, and the $θ$ values):
- expected rate,
- nuisance parameter penalty terms
- loop over candidates, for each candidate
  - compute signal
  - compute background
  -> combine
From here either integrate out $θ$ manually, or sample via MCMC. Goes straight
to next section.

** MCMC to sample the distribution and compute a limit
:PROPERTIES:
:CUSTOM_ID: sec:limit:mcmc_calc_limit
:END:

The Metropolis-Hastings algorithm cite:metropolis53_mcmc,hastings70_mcmc -- as
mentioned in sec. [[#sec:limit:method_mcmc]] -- is used to evaluate the
integral over the nuisance parameters to get the posterior likelihood. Instead
of building one very long Markov chain, we opt to construct 3 chains with
$\num{150000}$ links each to reduce the bias introduced by the starting
parameters. Ideally, one would construct even more chains, but a certain
number of steps from the starting parameters is usually needed to get into the
parameter space of large contributions to the integral (unless the starting
parameters are chosen in a very confined region, which itself is problematic
in terms of bias). These initial links are removed as 'burn-in', which makes
the number of chains versus the number of links per chain a trade-off.
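To make the structure of the function the chains sample concrete, here is a toy sketch (Python; the rate functions, the candidate values and the reduction to two nuisance parameters are stand-ins for illustration, not the actual implementation) of the negative log posterior: the total expected signal, minus a log term per candidate, plus Gaussian penalty terms:

#+begin_src python
import math

def neg_log_posterior(g2: float, thetas: list, candidates: list, sigmas: list) -> float:
    """Toy -ln L: total expected signal minus a log term per candidate,
    plus Gaussian nuisance parameter penalties. Each candidate carries a
    signal rate `s_ref` (at the reference coupling) and a background `b`."""
    if g2 < 0.0:
        return math.inf                       # restrict to physical couplings
    th_s, th_b = thetas
    s_tot = g2 * (1.0 + th_s) * sum(c["s_ref"] for c in candidates)  # linear in g²
    nll = s_tot
    for c in candidates:
        s_c = g2 * c["s_ref"] * (1.0 + th_s)  # signal nuisance θ_s
        b_c = c["b"] * (1.0 + th_b)           # background nuisance θ_b
        if s_c + b_c <= 0.0:
            return math.inf                   # singularity towards θ_b = -1
        nll -= math.log(s_c + b_c)
    # Gaussian penalty terms for the nuisance parameters
    nll += sum(t * t / (2.0 * s * s) for t, s in zip(thetas, sigmas))
    return nll

cands = [{"s_ref": 0.5, "b": 0.1}, {"s_ref": 1.0, "b": 0.2}]
nll0 = neg_log_posterior(1.0, [0.0, 0.0], cands, [0.05, 0.05])
#+end_src

The MCMC only ever needs point evaluations of such a function, which is what makes it attractive compared to numerically integrating out the nuisance parameters.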
The MCMC is built based on 5 dimensional vectors $\vec{x}$, \[ \vec{x} = \mtrix{g_{ae}² & ϑ_s & ϑ_b & ϑ_x & ϑ_y}^T \] containing the coupling constant of interest squared as the first entry and the four nuisance parameters after. Here we mention the axion-electron coupling constant $g²_{ae}$, but generally it can also be for example $g_{ae}²·g_{aγ}²$ (equivalent to $g²_{ae}$!), $g⁴_{aγ}$ or $β⁴_γ$, depending on the search to be conducted. The important point is that the parameter is used, under which the likelihood function is /linear/, as we otherwise bias our sampling (see the extended thesis for a longer explanation). Our initial starting vector $\vec{x_i}$ is randomly sampled by \[ \vec{x} = \vektor{ \text{rand}([0, 5]) · g²_{\text{base}} \\ \text{rand}([-0.4, 0.4]) \\ \text{rand}([-0.4, 0.4]) \\ \text{rand}([-0.5, 0.5]) \\ \text{rand}([-0.5, 0.5]) \\ } \] where $\text{rand}$ refers to a uniform random sampler in the given interval and $g_{\text{base}}$ is a reference coupling parameter of choice, which also depends on the specific search. Our default reference coupling constant for $g_{\text{base}}²$ [fn:not_ref] is $g_{ae}² = \num{1e-21}$, allowing for a range of parameters in the expected parameter space. The nuisance parameters are allowed to vary in a large region, given the standard deviations of $σ < 0.05$ for all four nuisance parameters. In the updating stage to propose a new vector, we use the following: \[ \vec{x_{i+1}} = \vec{x_i} + \vektor{ \text{rand}([ -0.5 · 3 g²_{\text{base}}, 0.5 · 3 g²_{\text{base}} ]) \\ \text{rand}([ -0.5 · 0.025, 0.5 · 0.025 ]) \\ \text{rand}([ -0.5 · 0.025, 0.5 · 0.025 ]) \\ \text{rand}([ -0.5 · 0.05, 0.5 · 0.05 ]) \\ \text{rand}([ -0.5 · 0.05, 0.5 · 0.05 ]) \\ } \] This combination leads to an acceptance rate of the new proposal typically between $\SIrange{20}{30}{\%}$. After all three chains are built, the first $\num{50000}$ links each are thrown out as burn-in to make sure we only include meaningful parameter space. 
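The proposal and acceptance scheme above can be sketched as follows (Python, reduced to one dimension for brevity; the standard normal target and the step width are stand-ins, not the actual posterior or tuning values):

#+begin_src python
import math
import random

def build_mh_chain(log_post, start: float, step: float, n_links: int,
                   rng: random.Random):
    """Metropolis-Hastings with a symmetric uniform box proposal
    x' = x + U(-step/2, step/2). Returns the chain and acceptance rate."""
    x = start
    lp = log_post(x)
    chain = [x]
    accepted = 0
    for _ in range(n_links):
        prop = x + rng.uniform(-0.5 * step, 0.5 * step)
        lp_prop = log_post(prop)
        # accept with probability min(1, π(x') / π(x))
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
            accepted += 1
        chain.append(x)
    return chain, accepted / n_links

rng = random.Random(42)
# three chains on a standard normal target, dropping a burn-in from each
results = [build_mh_chain(lambda x: -0.5 * x * x, rng.uniform(-3.0, 3.0),
                          2.0, 50_000, rng) for _ in range(3)]
samples = [x for chain, _ in results for x in chain[5_000:]]
#+end_src

In the actual implementation the state is the 5-dimensional vector above and the box width differs per component, but the accept/reject logic is identical.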
The parameter space for each of the 5 elements is restricted as follows
\[
\vektor{ g = [0, ∞] \\
ϑ_s = [-1, 1] \\
ϑ_b = [-0.8, 1] \\
ϑ_x = [-1, 1] \\
ϑ_y = [-1, 1] },
\]
meaning we restrict ourselves to physical coupling constants and give loose
bounds on the nuisance parameters. In particular for the $ϑ_b$ parameter the
restriction to values $ϑ_b > -0.8$ is due to the singularity in
$\mathcal{L}_M$ at $ϑ_b = -1$. For all realistic values of the systematic
uncertainty $σ_b$ the region close to $ϑ_b = -1$ has no physical meaning
anyway. But for unit tests and sanity checks of the implementation, larger
uncertainties are tested, which would cause computational issues if this
restriction were not in place.

Example Markov chains can be seen in fig. sref:fig:limit:mcmc_example, where
we see the different nuisance parameters of the chain and how they are
distributed. As expected for our comparatively small values of $σ$, the chain
is centered around 0 for each nuisance parameter. And the coupling constant in
fig. sref:fig:limit:mcmc_theta_x_y also shows a clear increase towards low
values.

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "\\includegraphics[width=0.03\\textwidth]{/home/basti/org/Figs/panda-face_emoji_bubu.png} $ϑ_s$ vs. $ϑ_b$")
  (label "fig:limit:mcmc_theta_s_b")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/limit/sanity/mcmc_lines_thetas_sb_real_syst_real_random_cands_0.pdf"))
 (subfigure (linewidth 0.5)
  (caption "$g²$ vs. $ϑ_y$")
  (label "fig:limit:mcmc_theta_x_y")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/limit/sanity/mcmc_lines_thetas_gy_real_syst_real_random_cands_0.pdf"))
 (caption (subref "fig:limit:mcmc_theta_s_b")
  ": MCMC of the $ϑ_s$ nuisance parameter against $ϑ_b$ with the coupling constant as the color scale. "
  (subref "fig:limit:mcmc_theta_x_y")
  ": MCMC of the coupling constant against $ϑ_x$ and $ϑ_y$ in color.
Both show a clear centering to values around 0, with the coupling constant showing a decrease in population towards larger couplings.")
 (label "fig:limit:mcmc_example"))
#+end_src

The resulting three Markov chains are finally used to compute the marginal
posterior likelihood function via the histogram of all sampled $g²$ values,
whose distribution is that of the marginal posterior likelihood. This then
allows computing a limit from the empirical distribution function of the
sampled $g²$ values, extracting the value corresponding to the
$95^{\text{th}}$ percentile. An example for this is shown in
fig. [[fig:limit:mcmc_calc_limit:limit]].

#+CAPTION: Example likelihood as a function of $g_{ae}²$ for a set of toy candidates with
#+CAPTION: the limit indicated at the intersection of the blue and red colored areas.
#+CAPTION: Blue is the lower 95-th percentile of the integral over the likelihood
#+CAPTION: function and red the upper 5-th.
#+NAME: fig:limit:mcmc_calc_limit:limit
[[~/phd/Figs/limit/sanity/mcmc_histo_real_syst_real_random_cands_0.pdf]]

[fn:not_ref] Do not confuse $g_{\text{base}}$ with the reference coupling
constant for which the axion flux is computed, $g_{\text{ref}}$, mentioned
earlier.

*** TODO TODOs for this section [6/13] :noexport:

- [ ] *NOTE*: 5e-21 is 7e-11 for g_ae. That's lower than the old limit if
  g_aγ = 1e-12. Should we increase this to be sure we do not bias ourselves?
- [ ] *POSSIBLY* move part about calculation of likelihood and limit to a
  separate section after?
- [ ] *ADD APPENDIX WITH EXAMPLES WHAT HAPPENS WHEN CHAIN GOES AWRY*
- [X] *MENTION* the cutoff we use at background values -0.8 for example to
  avoid the singularity! Generally mention it in the discussion of the 4-fold
  posterior integral.
- [ ] *REFERENCE PAPER* about multiple chains for bias reduction!
  -> Did not find it quickly. Will have to search.
- [X] Explanation of the MCMC method *OR* reference to previous explanation
- [X] Mention restrictions of MCMC parameter space
- [X] Explicit number of chains, burn in and # of samples
- [X] ACCEPTANCE RATE
- [X] STEP SIZE
- [X] Example of MCMC plots
- [X] Example of likelihood space by histogram of sampled g_ae²
- [X] Mention some of the other performance optimizations we make?
  - [X] Caching of some values? -> Mentioned in background interpolation section.
  - [ ] What else do we do? No other caches. Things like custom code paths for
    distances are irrelevant.
- [ ] *WHERE do we mention numerical integration slowness*?
- [ ] WHERE do we mention sanity checks?

*** Relevant code for init & number of chains of MCMC :noexport:

#+begin_src nim
const g_ae²Ref = 1e-21 * 1e-12^2 ## This is the reference we want to keep constant!
# ...
const nChains = 3
## Burn in of 50,000 was deemed fine even for extreme walks in L = 0 space
const BurnIn = 50_000
var totalChain = newSeq[seq[float]]()
for i in 0 ..< nChains:
  let start = @[rnd.rand(0.0 .. 5.0) * couplingRef, #1e-21, # g_ae²
                rnd.rand(-0.4 .. 0.4), rnd.rand(-0.4 .. 0.4), # θs, θb
                rnd.rand(-0.5 .. 0.5), rnd.rand(-0.5 .. 0.5)] # θx, θy
  echo "\t\tInitial chain state: ", start
  let (chain, acceptanceRate) = rnd.build_MH_chain(start,
                                                   @[3.0 * couplingRef, 0.025, 0.025, 0.05, 0.05],
                                                   150_000, fn)
#+end_src

*** Generate example MCMC plots (incl. histogram) :extended:

These plots are produced from the sanity checks in ~mcmc_limit_calculation~.
Note, if you run the command like this, it will take a while, because it will
compute several points using regular numerical integration (Romberg).
Pass ~--rombergIntegrationDepth 2~ to speed it up, but the last plot may not
be produced successfully (but we don't care about that plot here):
#+begin_src sh
F_WIDTH=0.5 DEBUG_TEX=true ESCAPE_LATEX=true USE_TEX=true \
    mcmc_limit_calculation sanity \
    --limitKind lkMCMC \
    --axionModel ~/org/resources/axionProduction/axionElectronRealDistance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv \
    --axionImage ~/phd/resources/axionImages/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm.csv \
    --combinedEfficiencyFile ~/org/resources/combined_detector_efficiencies.csv \
    --switchAxes \
    --sanityPath ~/phd/Figs/limit/sanity/ \
    --realSystematics \
    --rombergIntegrationDepth 3
#+end_src

*** Notes and thoughts about the coupling as MCMC element :extended:
:PROPERTIES:
:CUSTOM_ID: sec:limit:mcmc:notes_variation_coupling_parameter
:END:

When evaluating the likelihood function using an MCMC approach, the choice of
$g²_{ae}$ in our case is not arbitrary. It may, however, seem surprising,
given that the previous limits for $g_{ae}$ are always quoted as
$g_{ae}·g_{aγ}$. The reason is that when calculating a limit on $g_{ae}$ one
typically works under the assumption that the solar axion flux via the
axion-photon coupling $g_{aγ}$ is negligible. This means production is
$g_{ae}$ based and reconversion $g_{aγ}$ based. As a result, for $g_{ae}$
searches in helioscopes the axion-photon coupling can be taken as constant and
only $g_{ae}$ be varied. The final limit must be quoted as a product
regardless. See the sanity check ~--axionElectronAxionPhoton~ of
~mcmc_limit_calculation~ for proof that the deduced limit does not change as a
function of $g_{aγ}$. In it we compute the limit for a fixed set of candidates
at different $g_{aγ}$ values. The resulting limit on $g_{ae}·g_{aγ}$ remains
unchanged (of course, the limit produced in $g²_{ae}$ space /will/ change, but
that is precisely why one must never quote the $g²_{ae}$ 'limit' as such.
It is still only a temporary number).

The choice of coupling parameter to use inside of the MCMC is based on
linearizing the space: we choose the parameter under which the likelihood
function is linear. For the axion-electron coupling -- under the above stated
assumptions -- the likelihood via the signal $s$ is a linear function of
$g²_{ae}$ for a fixed value of $g²_{aγ}$. An /equivalent/ way to parametrize
the likelihood function as a linear function is via $g²_{ae}·g²_{aγ}$, hence
this being a common choice in past papers. This produces the *same limit* as
the $g²_{ae}$ limit. See the sanity check ~--axionElectronAxionPhotonLimit~ of
~mcmc_limit_calculation~, in which we compute limits based on
$g²_{ae}·g²_{aγ}$ instead and see that the limit is the same. /Importantly
though/, using $g_{ae}·g_{aγ}$ (without the square) as an argument to the MCMC
does *not* linearize it and thus produces a wrong limit.

For a search for the axion-photon coupling alone or the chameleon coupling,
the single coupling constant affects /both/ the production in the Sun via
$g²_{aγ}$, $β²_γ$ as well as the conversion probability in the magnet. This
means for these searches we /have/ to use $g⁴_{aγ}$ and $β⁴_γ$ as arguments to
the MCMC.

- Why is that? :: The reason for this behavior is the random sampling nature
  of the MCMC. The basic algorithm that adds new links to the Markov chain
  works by /uniformly/ sampling new values for each entry of the parameter
  vector. /Uniform/ is the crucial point. If we work with a parameter under
  which the likelihood function is /not/ linear and we uniformly sample that
  non-linear parameter, we produce an effective /non-linear/ sampling of the
  linearized function.
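This can be demonstrated with a one-dimensional toy comparison (Python sketch; the exponential target is a stand-in for a likelihood that is linear in the 'correct' parameter $k$). Sampling directly in $k$ gives the right distribution; sampling uniformly in $u$ with $k = u²$ while evaluating the same target, i.e. omitting the Jacobian, distorts the $k$ distribution and shifts the 95th percentile:

#+begin_src python
import math
import random

def mh_chain(log_p, start: float, step: float, n: int, rng: random.Random):
    """Symmetric uniform-proposal Metropolis-Hastings, returns the chain."""
    x, lp, chain = start, log_p(start), []
    for _ in range(n):
        prop = x + rng.uniform(-0.5 * step, 0.5 * step)
        lpp = log_p(prop)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = prop, lpp
        chain.append(x)
    return chain

def quantile(xs, q: float) -> float:
    s = sorted(xs)
    return s[int(q * (len(s) - 1))]

rng = random.Random(1337)
# target: L(k) ∝ exp(-k) for k >= 0, i.e. k is the linear ("correct") parameter
log_p_k = lambda k: -k if k >= 0.0 else -math.inf
k_correct = mh_chain(log_p_k, 1.0, 3.0, 200_000, rng)[10_000:]
# wrong: uniform proposals in u with k = u², evaluating the same target
# but without the Jacobian dk/du -- the resulting k samples are distorted
log_p_u = lambda u: -(u * u)
k_wrong = [u * u for u in mh_chain(log_p_u, 1.0, 2.0, 200_000, rng)[10_000:]]
#+end_src

The correct chain reproduces the exponential's 95th percentile, while the distorted one yields a markedly different value, i.e. a wrong 'limit'.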
For example for the axion-photon coupling $g_{aγ}$: if we wrongly use
$g²_{aγ}$ in the MCMC, sample new values within
\[ \text{rand}([0, 5]) · g²_{aγ, \text{base}} \]
and /then/ rescale $s$ via
\[ s'(g⁴_{aγ}) = α · f(g²_{aγ, \text{base}}) · P(g²_{aγ, \text{base}}) · \left( \frac{g²_{aγ}}{g²_{aγ, \text{base}}} \right)², \]
we square $g²_{aγ}$ again. This means we effectively did not perform a
/uniform/ sampling at all! This directly affects the samples and as a result
the limit. The shape of the histogram of all coupling constants ends up
stretched because of this.

** Expected limits of different setups
:PROPERTIES:
:CUSTOM_ID: sec:limit:expected_limits
:END:

One interesting approach to compute the expected limit, usually employed in
binned likelihood approaches, is the so-called 'Asimov dataset'. The idea is
to compute the limit based on the dataset which matches exactly the
expectation value for each Poisson term in each bin. This has the tremendous
computational advantage of providing an expected limit by only computing the
limit for a /single, very special/ candidate set. Unfortunately, for an
unbinned approach this is less straightforward, because there is no explicit
mean of expectation values anymore. On the other hand, the Asimov dataset
calculation does not provide information about the possible spread of all
limits due to the statistical variation possible in toy candidate sets.

In our case then, we fall back to computing an expected limit based on toy
candidate sets (sec. [[#sec:limit:method_expected_limit]]) that we draw from a
discretized, grid version of the background interpolation, as explained in
sec. [[#sec:limit:ingredients:candidates]]. We compute expected limits for
different possible combinations of classifier and detector vetoes, due to the
signal efficiency penalties that these imply. In a first rough 'scan' we
compute expected limits based on a smaller number of toy candidates. The exact
number depends on the particular setup.
- $\num{1000}$ toys for setups using only a classifier without any vetoes. Due to the much higher background towards the corners (see sec. [[#sec:background:all_vetoes_combined]]), many more candidates are sampled, making the computation significantly slower. - $\num{2500}$ toys for setups with the line or septem veto. These have a much smaller total expected number of candidates and are hence much faster. Then we narrow it down and compute $\num{15000}$ toy limits for the best few setups. Finally, we compute $\num{50000}$ toys for the best setup we find. The resulting setup is the one for which we unblind the solar tracking data. Tab. [[tab:limit:expected_limits]] shows the different setups with their respective expected limits. Some setups appear multiple times for the different numbers of toy candidate sets that were run. The table can be seen as an extension of tab. [[tab:background:background_rate_eff_comparisons]] in sec. [[#sec:background:all_vetoes_combined]]. Also shown is the limit achieved in case no candidate is present in the signal sensitive region, as a theoretical lower bound on the limit. This 'no candidate' limit scales down with the total efficiency, as one would expect. All limits are given as limits on $g_{ae}·g_{aγ}$ based on a fixed $g_{aγ} = \SI{1e-12}{GeV⁻¹}$. Finally, the table shows a standard deviation describing how the expected limit varies when bootstrapping new sets of limits (the standard deviation of $\num{1000}$ bootstrapped expected limits, each sampling $\num{10000}$ limits from the input). The table does not show all setups that were initially considered. Other possible parameters were excluded based on preliminary studies of their effect on the expected limit. In particular: - the scintillator veto is always used. It does not come with an efficiency penalty and therefore there is no reason not to activate it. - different FADC veto efficiencies as well as disabling it completely were considered. 
The current efficiency of $ε_{\text{FADC}} = 0.98$ was deemed optimal; harder cuts do not yield significant improvements. - the potential eccentricity cutoff for the line veto, as discussed in sec. [[#sec:background:line_veto]], is fully disabled, as its efficiency gains do not lead to an improved expected limit in practice. Based on this study, the MLP produces the best expected limits, surprisingly without any vetoes, at software efficiencies of $\SI{98.04}{\%}$, $\SI{90.59}{\%}$ and $\SI{95.23}{\%}$. However, for these cases we did not run any more toy limits, because without the septem or line veto there is a large number of candidates towards the chip corners. These slow down the calculation, making it too costly to run. In any case, given the small difference in expected limit between these cases and the best setup including vetoes, the MLP at $\SI{95.23}{\%}$ with the line veto, we prefer to stick with the vetoes. The very inhomogeneous background rates are problematic, as they make the result depend much more strongly on the value of the systematic position uncertainty. Also, for other limit calculations with larger raytracing images, a lower background over a larger area at the cost of lower total efficiency is more valuable. In this case with the line veto, the software efficiency $ε_{\text{eff}}$ corresponds to a target software efficiency of $\SI{95}{\%}$ based on the simulated X-ray data. The total combined efficiency comes out to $\SI{79.69}{\%}$ (excluding the detection efficiency of course!). This is the setup we will mainly consider for the data unblinding. The expected limit for this setup is \[ \left(g_{ae} · g_{aγ}\right)_{\text{expected}} = \SI{7.878225(6464)e-23}{GeV^{-1}}, \] based on the $\num{50 000}$ toy limits. The uncertainty is the standard deviation computed via bootstrapping as mentioned above; it quantifies the statistical uncertainty on the expected limit itself. 
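The bootstrap procedure behind this uncertainty can be sketched as follows; a minimal Python illustration assuming a hypothetical array of toy limits (the real values come from the MCMC toy limit calculation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the 50k toy limits; the real numbers come
# from mcmc_limit_calculation.
toy_limits = rng.lognormal(mean=np.log(8.0e-23), sigma=0.1, size=50_000)

# The expected limit is the median of all toy limits.
expected = np.median(toy_limits)

# Bootstrap: resample 10_000 limits with replacement 1000 times and
# take the standard deviation of the resulting medians as the
# statistical uncertainty on the expected limit itself.
medians = [np.median(rng.choice(toy_limits, size=10_000)) for _ in range(1000)]
sigma_expected = float(np.std(medians))
```

Note that this quantifies the spread of the /median/ under resampling, not the spread of a single observed limit.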
\footnotesize #+CAPTION: Expected limits of all considered setups. The FADC veto is always in use (if either the septem or line veto is active) at an efficiency of $ε_{\text{FADC}} = 0.98$, and so is the #+CAPTION: scintillator veto. These settings were defined in preliminary studies of the expected limits. #+CAPTION: Note the efficiencies associated with the septem veto, $ε_{\text{septem}} = \SI{83.11}{\%}$, the line veto, #+CAPTION: $ε_{\text{line}} = \SI{85.39}{\%}$, and both combined, $ε_{\text{septem+line}} = \SI{78.63}{\%}$, which are implicitly #+CAPTION: included in the total efficiency $ε_{\text{total}}$ based on the 'Septem' and 'Line' column values. #+CAPTION: 'No signal' is the limit without any candidates, 'Exp. $σ$' the bootstrapped standard deviation of the expected limit. #+NAME: tab:limit:expected_limits #+ATTR_LATEX: :booktabs t :environment longtable :align lrlllllll | ε_eff | nmc | Type | Septem | Line | ε_total | No signal [GeV⁻¹] | Expected [GeV⁻¹] | Exp. σ [GeV⁻¹] | |--------+-------+--------+--------+-------+--------+------------------+-----------------+---------------| | 0.9804 | 1000 | MLP | false | false | 0.9804 | 5.739e-23 | 7.805e-23 | 3.6807e-25 | | 0.9059 | 1000 | MLP | false | false | 0.9059 | 6.0109e-23 | 7.856e-23 | 4.301e-25 | | 0.9523 | 1000 | MLP | false | false | 0.9523 | 5.7685e-23 | 7.8599e-23 | 5.1078e-25 | | 0.9523 | 2500 | MLP | false | true | 0.7969 | 6.3874e-23 | 7.8615e-23 | 2.9482e-25 | | 0.9523 | 50000 | MLP | false | true | 0.7969 | 6.3874e-23 | 7.8782e-23 | 6.4635e-26 | | 0.9804 | 2500 | MLP | false | true | 0.8204 | 6.1992e-23 | 7.8833e-23 | 2.9977e-25 | | 0.8587 | 1000 | MLP | false | false | 0.8587 | 6.1067e-23 | 7.9597e-23 | 5.0781e-25 | | 0.9059 | 2500 | MLP | false | true | 0.7581 | 6.4704e-23 | 7.9886e-23 | 2.6437e-25 | | 0.9804 | 2500 | MLP | true | true | 0.7554 | 6.5492e-23 | 8.0852e-23 | 2.9225e-25 | | 0.9523 | 2500 | MLP | true | false | 0.7756 | 6.4906e-23 | 8.1135e-23 | 3.5689e-25 | | 0.9523 | 2500 | MLP | true | true | 0.7338 | 6.6833e-23 | 
8.1251e-23 | 3.0965e-25 | | 0.9804 | 2500 | MLP | true | false | 0.7985 | 6.2664e-23 | 8.1314e-23 | 3.1934e-25 | | 0.8587 | 2500 | MLP | false | true | 0.7186 | 6.8094e-23 | 8.1561e-23 | 2.9893e-25 | | 0.9059 | 2500 | MLP | true | false | 0.7378 | 6.5184e-23 | 8.2169e-23 | 2.8767e-25 | | 0.9 | 2500 | LnL | false | true | 0.7531 | 6.4097e-23 | 8.2171e-23 | 3.7248e-25 | | 0.9059 | 2500 | MLP | true | true | 0.6981 | 6.8486e-23 | 8.2868e-23 | 3.2593e-25 | | 0.8587 | 2500 | MLP | true | false | 0.6994 | 6.7322e-23 | 8.4007e-23 | 2.9498e-25 | | 0.9 | 2500 | LnL | true | true | 0.6935 | 6.7386e-23 | 8.4274e-23 | 3.3644e-25 | | 0.8587 | 2500 | MLP | true | true | 0.6617 | 6.9981e-23 | 8.4589e-23 | 3.4966e-25 | | 0.8 | 2500 | LnL | false | true | 0.6695 | 6.9115e-23 | 8.4993e-23 | 3.1983e-25 | | 0.9 | 2500 | LnL | false | false | 0.9 | 5.9862e-23 | 8.5786e-23 | 3.7241e-25 | | 0.8 | 2500 | LnL | false | false | 0.8 | 6.3885e-23 | 8.7385e-23 | 3.903e-25 | | 0.8 | 2500 | LnL | true | true | 0.6165 | 7.1705e-23 | 8.747e-23 | 4.099e-25 | | 0.7 | 2500 | LnL | false | true | 0.5858 | 7.4553e-23 | 8.9298e-23 | 4.0495e-25 | | 0.7 | 2500 | LnL | false | false | 0.7 | 6.7647e-23 | 9.0856e-23 | 3.3235e-25 | | 0.7 | 2500 | LnL | true | true | 0.5394 | 7.7018e-23 | 9.2565e-23 | 3.4573e-25 | \normalsize The distribution of all toy limits for this best setup can be seen in fig. [[fig:limit:expected_limits:toy_limit_histogram]]. It shows both the limit for the case without any candidates (red line, equivalent to 'No signal' in the table above) as well as the expected limit (blue line). Depending on the number of candidates that are inside the signal sensitive region (in regions of the solar axion image with significant flux expectation) based on $\ln(1 + s_i/b_i) > 0.5$ (at a fixed coupling constant, $g²_{ae} = (\num{8.1e-11})²$), the limits are split into histograms of different colors. 
Based on the location of these histograms and the expected limit, the most likely outcome for the real candidates seems to be 1 or 2 candidates in that region. Note that some toy limits lie below the red line for the case without candidates. This is expected, because each limit is computed from the MCMC evaluation of the likelihood. As such it is a statistical random process and the red line itself is a single sample. Further, the purple histogram for "0" candidates is *not* equivalent to the red line, because the number of signal sensitive candidates is defined via an arbitrary cutoff. For the red line literally _no candidates_ at all are considered and the limit is based purely on the $\exp(-s_{\text{tot}})$ term of the likelihood. #+CAPTION: Distribution of toy limits resulting in the best expected limit based on $\num{50000}$ #+CAPTION: toy limits. The expected limit -- the median -- is shown as the blue line. The red line #+CAPTION: shows the limit for the case without any candidates. The different colored histograms correspond #+CAPTION: to toy sets with a different number of toy candidates in the signal sensitive region, defined #+CAPTION: by $\ln(1 + s_i/b_i) > 0.5$. The most likely number of candidates in the sensitive region #+CAPTION: seems to be 0, 1 or 2. #+NAME: fig:limit:expected_limits:toy_limit_histogram [[~/phd/Figs/limit/mc_limit_lkMCMC_skInterpBackground_nmc_50000_uncertainty_ukUncertain_σs_0.0276_σb_0.0028_posUncertain_puUncertain_σp_0.0500_nmc_50k_pretty.pdf]] To see how the limit might change depending on the specific candidates, see tab. [[tab:appendix:expected_limits_percentiles]] in appendix [[#sec:appendix:exp_limit_percentiles]]. It contains different percentiles of the computed toy limit distribution for each veto setup. The percentiles -- and the ranges between them -- give insight into the probability to obtain a specific observed limit. 
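Reading such probabilities off the toy limit distribution amounts to computing percentiles; a short sketch with a hypothetical stand-in for the toy limits:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-in for the toy limit distribution.
toys = rng.lognormal(np.log(8.0e-23), 0.12, 50_000)

# Percentiles of the toy limit distribution, as in the appendix table.
p25, p75 = np.percentile(toys, [25, 75])

# By construction, half of all toy candidate sets -- and thus a 50%
# probability for the observed limit under the background-only
# hypothesis -- fall between P25 and P75.
frac = float(np.mean((toys >= p25) & (toys <= p75)))
```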
Each observed limit is associated with a single possible set of candidates, measurable in the experiment, out of all possible sets of candidates compatible with the background hypothesis (as the toy limits are sampled from it). For example, the experiment will measure an observed limit in the range from $P_{25}$ to $P_{75}$ with a probability of $\SI{50}{\%}$. *** TODOs for this section [/] :noexport: - [X] *FOR TABLE:* Add 'uncertainty' to the expected limit by adding maybe 5, 25, 75, 95 percentiles! Clarify that ~Exp σ~ is the potential variation *OF THE EXPECTED LIMIT* and not of a potential real limit! Maybe we can print them as ~X ± 25/75 ± 5/95~ in the same way we would combine stat. and syst. uncertainties? - [X] *ADD STANDARD DEVIATION FOR LIMIT INTO EXPECTED PRESENTATION!* - [ ] *ASIMOV DATASET* and how we can / cannot use it -> Check note somewhere else here about using c_i = rational number for expected counts based on ~expCount~. Could that work? I think so. - [ ] *THINK ABOUT* bootstrapped samples: - should we sample the same number of limits for each input (i.e. independent of ~nmc~) or always sample ~nmc~ for each input? - [ ] *CURRENTLY TEXT AND TABLE DISAGREE THERE!* old paragraph: #+begin_quote The reason is simply that it highlights that $\num{2500}$ toy sets is not enough to make a rigorous estimate of the expected limit. Compare row 2 with row 3 (MLP at $\SI{95.23}{\%}$ efficiency with $\num{2500}$ and $\num{15000}$ toys) for an example of a better expected limit at low statistics. #+end_quote Old table: \footnotesize #+CAPTION: The FADC veto is always in use at an efficiency of $ε_{\text{FADC}} = 0.98$ and so is the #+CAPTION: scintillator veto. These settings were defined in preliminary studies of the expected limits. 
#+CAPTION: Note the efficiencies associated with the septem veto $ε_{\text{septem}} = 0.7841$ and the line veto #+CAPTION: $ε_{\text{line}} = 0.8602$ and combined $ε_{\text{septem+line}} = 0.7325$, which are implicitly #+CAPTION: included based on the 'Septem' and 'Line' column values into the total efficiency. #+NAME: tab:limit:expected_limits #+ATTR_LATEX: :booktabs t :environment longtable :align lrllllll | ε | ~nmc~ | Type | Septem | Line | Total eff. | Limit no signal [GeV⁻¹] | Expected limit [GeV⁻¹] | |--------+-------+------+--------+-------+------------+------------------------+-----------------------| | 0.9107 | 30000 | MLP | false | true | 0.7677 | 5.9559e-23 | 7.5824e-23 | | 0.9718 | 15000 | MLP | false | true | 0.8192 | 5.8374e-23 | 7.6252e-23 | | 0.8474 | 1000 | MLP | false | true | 0.7143 | 6.1381e-23 | 7.643e-23 | | 0.9718 | 1000 | MLP | false | true | 0.8192 | 5.8374e-23 | 7.6619e-23 | | 0.8474 | 15000 | MLP | false | true | 0.7143 | 6.1381e-23 | 7.6698e-23 | | 0.9 | 1000 | LnL | false | true | 0.7587 | 6.0434e-23 | 7.7375e-23 | | 0.7926 | 15000 | MLP | false | true | 0.6681 | 6.2843e-23 | 7.8222e-23 | | 0.7926 | 1000 | MLP | false | true | 0.6681 | 6.2843e-23 | 7.8575e-23 | | 0.7398 | 1000 | MLP | false | true | 0.6237 | 6.5704e-23 | 7.941e-23 | | 0.7398 | 15000 | MLP | false | true | 0.6237 | 6.5704e-23 | 7.9913e-23 | | 0.8 | 1000 | LnL | false | true | 0.6744 | 6.3147e-23 | 8.0226e-23 | | 0.9718 | 1000 | MLP | true | true | 0.6976 | 6.2431e-23 | 8.0646e-23 | | 0.9107 | 1000 | MLP | true | true | 0.6538 | 6.432e-23 | 8.0878e-23 | | 0.9718 | 1000 | MLP | true | false | 0.7468 | 5.9835e-23 | 8.1654e-23 | | 0.9107 | 1000 | MLP | true | false | 0.6998 | 6.2605e-23 | 8.2216e-23 | | 0.8474 | 1000 | MLP | true | true | 0.6083 | 6.6739e-23 | 8.2488e-23 | | 0.9 | 1000 | LnL | true | true | 0.6461 | 6.4725e-23 | 8.3284e-23 | | 0.8474 | 1000 | MLP | true | false | 0.6511 | 6.4585e-23 | 8.338e-23 | | 0.7926 | 1000 | MLP | true | true | 0.569 | 6.8883e-23 | 
8.3784e-23 | | 0.7926 | 1000 | MLP | true | false | 0.609 | 6.6309e-23 | 8.4116e-23 | | 0.8 | 1000 | LnL | true | true | 0.5743 | 6.8431e-23 | 8.5315e-23 | | 0.8 | 1000 | LnL | true | true | 0.5743 | 6.875e-23 | 8.5437e-23 | | 0.7398 | 1000 | MLP | true | true | 0.5311 | 7.1279e-23 | 8.5511e-23 | | 0.7398 | 1000 | MLP | true | false | 0.5685 | 6.9024e-23 | 8.6142e-23 | | 0.7 | 1000 | LnL | true | true | 0.5025 | 7.2853e-23 | 8.9271e-23 | \normalsize Old expected limits plot [[~/org/Figs/statusAndProgress/mc_limit_lkMCMC_skInterpBackground_nmc_30000_uncertainty_ukUncertain_σs_0.0328_σb_0.0028_posUncertain_puUncertain_σp_0.0500nmc_30k_pretty.pdf]] *** Verification Because of the significant complexity of the limit calculation, a large number of sanity checks were written. They are used to verify all internal results are consistent with expectation. They include things like verifying the background interpolation reproduces a compatible background rate or the individual $s_i$ terms of the likelihood reproduce the total $s_{\text{tot}}$ term, while producing sensible numbers. The details of this verification are left out of the main thesis, but can be found in the extended version after this section. **** TODOs for this section [/] :noexport: - [ ] *INSERT REFERENCE TO THE SANITY CHECKS* -> We'll probably really not place them into the appendix, right? Well, we'll decide later after cleaning the output up a bit. - [ ] *INSERT DISCUSSION* Of the sanity checks here! *** TODOs for this section [/] :noexport: - [ ] *SHOW DIFFERENT EXPECTED LIMITS* We want to have different expected limits given certain assumptions, i.e. for the case of only using the lnL cut, using lnL + scintillator (well, maybe that's too fine of a difference?), lnL + septem, lnL + septem + line, lnL + line etc. This way we get a better overview of which setup is actually the best to compute a limit. 
Note that it's important especially because the vetoes have an impact on tracking & background time and therefore change the expected signal! - [ ] *EXPLAIN* how the statistical uncertainty is baked into the calculation of the expected limit due to our calculation of candidates that are sampled from the background model! That is after all, our only source of statistical uncertainties (in terms of the candidates that we actually observe!) *** Example of the candidates in sensitive region :noexport: - [ ] Show a plot with a few more words about the candidates in sensitive region from a plot. *** Notes on all limit calculations :extended: All the notes about the expected limits are here: [[file:~/org/journal.org::#sec:journal:27_11_2023:final_limits_thesis]]. *** Generate expected limits table :extended: :PROPERTIES: :CUSTOM_ID: sec:limit:gen_expected_limit_table :END: Originally written in [[file:~/org/journal.org::#sec:journal:23_11_23:gen_expected_limits]]. #+begin_src sh :dir ~/CastData/ExternCode/TimepixAnalysis/Tools/generateExpectedLimitsTable/ :results drawer ./generateExpectedLimitsTable --path ~/org/resources/lhood_limits_21_11_23/ --prefix "mc_limit_lkMCMC" #+end_src #+RESULTS: :results: File: mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0323_σb_0.0028_posUncertain_puUncertain_σp_0.0500lhood_c18_R2_crAll_sEff_0.9_lnL.h5 Standard deviation of existing limits: 1.881181322027084e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0323_σb_0.0028_posUncertain_puUncertain_σp_0.0500lhood_c18_R2_crAll_sEff_0.7_lnL_scinti_fadc_line_vQ_0.99.h5 Standard deviation of existing limits: 1.483388243259433e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0323_σb_0.0028_posUncertain_puUncertain_σp_0.0500lhood_c18_R2_crAll_sEff_0.8_lnL.h5 Standard deviation of existing limits: 1.971701121346722e-20 File: 
mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0323_σb_0.0028_posUncertain_puUncertain_σp_0.0500lhood_c18_R2_crAll_sEff_0.9_lnL_scinti_fadc_septem_line_vQ_0.99.h5 Standard deviation of existing limits: 1.700245106804615e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0323_σb_0.0028_posUncertain_puUncertain_σp_0.0500lhood_c18_R2_crAll_sEff_0.7_lnL_scinti_fadc_septem_line_vQ_0.99.h5 Standard deviation of existing limits: 1.445520680284464e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0323_σb_0.0028_posUncertain_puUncertain_σp_0.0500lhood_c18_R2_crAll_sEff_0.9_lnL_scinti_fadc_line_vQ_0.99.h5 Standard deviation of existing limits: 1.452619094436602e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0323_σb_0.0028_posUncertain_puUncertain_σp_0.0500lhood_c18_R2_crAll_sEff_0.7_lnL.h5 Standard deviation of existing limits: 1.372285385785803e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0323_σb_0.0028_posUncertain_puUncertain_σp_0.0500lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_line_vQ_0.99.h5 Standard deviation of existing limits: 1.823407217117141e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0323_σb_0.0028_posUncertain_puUncertain_σp_0.0500lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 Standard deviation of existing limits: 1.306608632559617e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_1000_uncertainty_ukUncertain_σs_0.0276_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.621471862633583e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_15000_uncertainty_ukUncertain_σs_0.0276_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.772272479247486e-20 File: 
mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0276_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.948710824120741e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0274_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.619168145135281e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_50000_uncertainty_ukUncertain_σs_0.0276_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.612857470644874e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0276_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.334752143815905e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0278_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.532833375151509e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0281_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.533064601717784e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0278_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.914237763454698e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_15000_uncertainty_ukUncertain_σs_0.0278_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.522621089145412e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0278_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.087184457140814e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_15000_uncertainty_ukUncertain_σs_0.0281_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.667094463823567e-20 File: 
mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0281_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.409073748826422e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0274_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.612856288124452e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0276_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 8.031501687857109e-21 File: mc_limit_lkMCMC_skInterpBackground_nmc_1000_uncertainty_ukUncertain_σs_0.0281_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.836230785643679e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_1000_uncertainty_ukUncertain_σs_0.0278_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 7.527443150348715e-21 File: mc_limit_lkMCMC_skInterpBackground_nmc_15000_uncertainty_ukUncertain_σs_0.0274_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.598536107791305e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0274_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.639619796624626e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_2500_uncertainty_ukUncertain_σs_0.0281_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.537689957889617e-20 File: mc_limit_lkMCMC_skInterpBackground_nmc_1000_uncertainty_ukUncertain_σs_0.0274_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 2.477371464091158e-20 | ε_eff | nmc | Type | Scinti | FADC | ε_FADC | Septem | Line | eccLineCut | ε_Septem | ε_Line | ε_SeptemLine | ε_total | Limit no signal [GeV⁻¹] | Expected limit [GeV⁻¹] | Exp. limit variance [GeV⁻²] | Exp. 
limit σ [GeV⁻¹] | P_5 | P_16 | P_25 | P_75 | P_84 | P_95 | |--------+-------+------+--------+-------+-------+--------+-------+------------+---------+--------+-------------+--------+------------------------+-----------------------+----------------------------+---------------------+------------+------------+------------+------------+------------+------------| | 0.9804 | 1000 | MLP | false | false | 0.98 | false | false | 1 | 1 | 1 | 1 | 0.9804 | 5.739e-23 | 7.805e-23 | 1.3548e-49 | 3.6807e-25 | 6.4404e-23 | 6.8193e-23 | 7.0894e-23 | 8.6464e-23 | 9.0882e-23 | 1.0288e-22 | | 0.9059 | 1000 | MLP | false | false | 0.98 | false | false | 1 | 1 | 1 | 1 | 0.9059 | 6.0109e-23 | 7.856e-23 | 1.8498e-49 | 4.301e-25 | 6.5886e-23 | 6.9554e-23 | 7.207e-23 | 8.75e-23 | 9.249e-23 | 1.0254e-22 | | 0.9523 | 1000 | MLP | false | false | 0.98 | false | false | 1 | 1 | 1 | 1 | 0.9523 | 5.7685e-23 | 7.8599e-23 | 2.609e-49 | 5.1078e-25 | 6.525e-23 | 6.8712e-23 | 7.1372e-23 | 8.7422e-23 | 9.182e-23 | 1.0231e-22 | | 0.9523 | 2500 | MLP | true | true | 0.98 | false | true | 1 | 1 | 0.8539 | 1 | 0.7969 | 6.3874e-23 | 7.8615e-23 | 8.6918e-50 | 2.9482e-25 | 6.7667e-23 | 7.0732e-23 | 7.2588e-23 | 8.7165e-23 | 9.1732e-23 | 1.0296e-22 | | 0.9804 | 15000 | MLP | true | true | 0.98 | false | true | 1 | 1 | 0.8539 | 1 | 0.8204 | 6.1992e-23 | 7.8681e-23 | 1.2956e-50 | 1.1382e-25 | 6.7049e-23 | 6.9989e-23 | 7.2022e-23 | 8.7221e-23 | 9.1995e-23 | 1.0233e-22 | | 0.9523 | 50000 | MLP | true | true | 0.98 | false | true | 1 | 1 | 0.8539 | 1 | 0.7969 | 6.3874e-23 | 7.8782e-23 | 4.1777e-51 | 6.4635e-26 | 6.7535e-23 | 7.0369e-23 | 7.2457e-23 | 8.7211e-23 | 9.1778e-23 | 1.0231e-22 | | 0.9523 | 15000 | MLP | true | true | 0.98 | false | true | 1 | 1 | 0.8539 | 1 | 0.7969 | 6.3874e-23 | 7.879e-23 | 1.4226e-50 | 1.1927e-25 | 6.7542e-23 | 7.0374e-23 | 7.2442e-23 | 8.7233e-23 | 9.1633e-23 | 1.0254e-22 | | 0.9804 | 2500 | MLP | true | true | 0.98 | false | true | 1 | 1 | 0.8539 | 1 | 0.8204 | 6.1992e-23 | 7.8833e-23 
| 8.9862e-50 | 2.9977e-25 | 6.7296e-23 | 7.0113e-23 | 7.1929e-23 | 8.7164e-23 | 9.2161e-23 | 1.0231e-22 | | 0.8587 | 1000 | MLP | false | false | 0.98 | false | false | 1 | 1 | 1 | 1 | 0.8587 | 6.1067e-23 | 7.9597e-23 | 2.5787e-49 | 5.0781e-25 | 6.7402e-23 | 7.0765e-23 | 7.3144e-23 | 8.8834e-23 | 9.3483e-23 | 1.0254e-22 | | 0.9059 | 2500 | MLP | true | true | 0.98 | false | true | 1 | 1 | 0.8539 | 1 | 0.7581 | 6.4704e-23 | 7.9886e-23 | 6.989e-50 | 2.6437e-25 | 6.9099e-23 | 7.184e-23 | 7.3827e-23 | 8.8958e-23 | 9.2975e-23 | 1.0267e-22 | | 0.9059 | 15000 | MLP | true | true | 0.98 | false | true | 1 | 1 | 0.8539 | 1 | 0.7581 | 6.4704e-23 | 8.0045e-23 | 1.1797e-50 | 1.0861e-25 | 6.9007e-23 | 7.1793e-23 | 7.376e-23 | 8.8706e-23 | 9.3435e-23 | 1.0392e-22 | | 0.9804 | 2500 | MLP | true | true | 0.98 | true | true | 1 | 1 | 1 | 0.7863 | 0.7554 | 6.5492e-23 | 8.0852e-23 | 8.5409e-50 | 2.9225e-25 | 6.9252e-23 | 7.2012e-23 | 7.4187e-23 | 8.9739e-23 | 9.4743e-23 | 1.0586e-22 | | 0.9523 | 2500 | MLP | true | true | 0.98 | true | false | 1 | 0.8311 | 1 | 1 | 0.7756 | 6.4906e-23 | 8.1135e-23 | 1.2737e-49 | 3.5689e-25 | 6.9101e-23 | 7.223e-23 | 7.4278e-23 | 9.0755e-23 | 9.5269e-23 | 1.065e-22 | | 0.9523 | 2500 | MLP | true | true | 0.98 | true | true | 1 | 1 | 1 | 0.7863 | 0.7338 | 6.6833e-23 | 8.1251e-23 | 9.5886e-50 | 3.0965e-25 | 6.9912e-23 | 7.2854e-23 | 7.4904e-23 | 9.0028e-23 | 9.4577e-23 | 1.0503e-22 | | 0.9804 | 2500 | MLP | true | true | 0.98 | true | false | 1 | 0.8311 | 1 | 1 | 0.7985 | 6.2664e-23 | 8.1314e-23 | 1.0198e-49 | 3.1934e-25 | 6.8161e-23 | 7.1565e-23 | 7.424e-23 | 9.0196e-23 | 9.4627e-23 | 1.0585e-22 | | 0.8587 | 2500 | MLP | true | true | 0.98 | false | true | 1 | 1 | 0.8539 | 1 | 0.7186 | 6.8094e-23 | 8.1561e-23 | 8.9356e-50 | 2.9893e-25 | 7.0315e-23 | 7.315e-23 | 7.5378e-23 | 9.0887e-23 | 9.5784e-23 | 1.0594e-22 | | 0.8587 | 15000 | MLP | true | true | 0.98 | false | true | 1 | 1 | 0.8539 | 1 | 0.7186 | 6.8094e-23 | 8.1826e-23 | 1.6937e-50 | 1.3014e-25 | 
7.0331e-23 | 7.3208e-23 | 7.5405e-23 | 9.0593e-23 | 9.5135e-23 | 1.0593e-22 | | 0.9059 | 2500 | MLP | true | true | 0.98 | true | false | 1 | 0.8311 | 1 | 1 | 0.7378 | 6.5184e-23 | 8.2169e-23 | 8.2755e-50 | 2.8767e-25 | 7.0347e-23 | 7.332e-23 | 7.5414e-23 | 9.1199e-23 | 9.628e-23 | 1.0712e-22 | | 0.9 | 2500 | LnL | true | true | 0.98 | false | true | 1 | 1 | 0.8539 | 1 | 0.7531 | 6.4097e-23 | 8.2171e-23 | 1.3874e-49 | 3.7248e-25 | 6.9605e-23 | 7.2788e-23 | 7.4878e-23 | 9.1329e-23 | 9.6121e-23 | 1.0624e-22 | | 0.9059 | 2500 | MLP | true | true | 0.98 | true | true | 1 | 1 | 1 | 0.7863 | 0.6981 | 6.8486e-23 | 8.2868e-23 | 1.0623e-49 | 3.2593e-25 | 7.1015e-23 | 7.4215e-23 | 7.6216e-23 | 9.1682e-23 | 9.6429e-23 | 1.0821e-22 | | 0.8587 | 2500 | MLP | true | true | 0.98 | true | false | 1 | 0.8311 | 1 | 1 | 0.6994 | 6.7322e-23 | 8.4007e-23 | 8.7013e-50 | 2.9498e-25 | 7.1866e-23 | 7.4964e-23 | 7.7161e-23 | 9.2693e-23 | 9.7118e-23 | 1.0756e-22 | | 0.9 | 2500 | LnL | true | true | 0.98 | true | true | 1 | 1 | 1 | 0.7863 | 0.6935 | 6.7386e-23 | 8.4274e-23 | 1.1319e-49 | 3.3644e-25 | 7.209e-23 | 7.5171e-23 | 7.7407e-23 | 9.3793e-23 | 9.8875e-23 | 1.1055e-22 | | 0.8587 | 2500 | MLP | true | true | 0.98 | true | true | 1 | 1 | 1 | 0.7863 | 0.6617 | 6.9981e-23 | 8.4589e-23 | 1.2226e-49 | 3.4966e-25 | 7.3152e-23 | 7.5953e-23 | 7.7906e-23 | 9.3755e-23 | 9.7631e-23 | 1.078e-22 | | 0.8 | 2500 | LnL | true | true | 0.98 | false | true | 1 | 1 | 0.8539 | 1 | 0.6695 | 6.9115e-23 | 8.4993e-23 | 1.0229e-49 | 3.1983e-25 | 7.2997e-23 | 7.6045e-23 | 7.8319e-23 | 9.4028e-23 | 9.9097e-23 | 1.0938e-22 | | 0.9 | 2500 | LnL | false | false | 0.98 | false | false | 1 | 1 | 1 | 1 | 0.9 | 5.9862e-23 | 8.5786e-23 | 1.3869e-49 | 3.7241e-25 | 6.909e-23 | 7.4264e-23 | 7.7309e-23 | 9.5704e-23 | 1.0069e-22 | 1.1172e-22 | | 0.8 | 2500 | LnL | false | false | 0.98 | false | false | 1 | 1 | 1 | 1 | 0.8 | 6.3885e-23 | 8.7385e-23 | 1.5233e-49 | 3.903e-25 | 7.1267e-23 | 7.5857e-23 | 7.8839e-23 | 9.7896e-23 | 
1.0328e-22 | 1.1451e-22 | | 0.8 | 2500 | LnL | true | true | 0.98 | true | true | 1 | 1 | 1 | 0.7863 | 0.6165 | 7.1705e-23 | 8.747e-23 | 1.6802e-49 | 4.099e-25 | 7.5205e-23 | 7.8191e-23 | 8.0272e-23 | 9.6782e-23 | 1.0246e-22 | 1.1303e-22 | | 0.7 | 2500 | LnL | true | true | 0.98 | false | true | 1 | 1 | 0.8539 | 1 | 0.5858 | 7.4553e-23 | 8.9298e-23 | 1.6399e-49 | 4.0495e-25 | 7.7223e-23 | 8.017e-23 | 8.205e-23 | 9.8623e-23 | 1.0401e-22 | 1.1601e-22 | | 0.7 | 2500 | LnL | false | false | 0.98 | false | false | 1 | 1 | 1 | 1 | 0.7 | 6.7647e-23 | 9.0856e-23 | 1.1046e-49 | 3.3235e-25 | 7.4035e-23 | 7.8682e-23 | 8.2286e-23 | 1.0096e-22 | 1.0656e-22 | 1.1869e-22 | | 0.7 | 2500 | LnL | true | true | 0.98 | true | true | 1 | 1 | 1 | 0.7863 | 0.5394 | 7.7018e-23 | 9.2565e-23 | 1.1953e-49 | 3.4573e-25 | 8.0091e-23 | 8.2758e-23 | 8.5076e-23 | 1.0242e-22 | 1.0769e-22 | 1.2037e-22 | | ε_eff | nmc | Type | ε_total | P_5 | P_16 | P_25 | P_75 | P_84 | P_95 | Expected | |--------+-------+--------+--------+------------+------------+------------+------------+------------+------------+--------------------| | 0.9804 | 1000 | MLP - | 0.9804 | 6.4404e-23 | 6.8193e-23 | 7.0894e-23 | 8.6464e-23 | 9.0882e-23 | 1.0288e-22 | 7.80503(3681)e-23 | | 0.9059 | 1000 | MLP - | 0.9059 | 6.5886e-23 | 6.9554e-23 | 7.207e-23 | 8.75e-23 | 9.249e-23 | 1.0254e-22 | 7.85600(4301)e-23 | | 0.9523 | 1000 | MLP - | 0.9523 | 6.525e-23 | 6.8712e-23 | 7.1372e-23 | 8.7422e-23 | 9.182e-23 | 1.0231e-22 | 7.85990(5108)e-23 | | 0.9523 | 2500 | MLP L | 0.7969 | 6.7667e-23 | 7.0732e-23 | 7.2588e-23 | 8.7165e-23 | 9.1732e-23 | 1.0296e-22 | 7.86154(2948)e-23 | | 0.9804 | 15000 | MLP L | 0.8204 | 6.7049e-23 | 6.9989e-23 | 7.2022e-23 | 8.7221e-23 | 9.1995e-23 | 1.0233e-22 | 7.86812(1138)e-23 | | 0.9523 | 50000 | MLP L | 0.7969 | 6.7535e-23 | 7.0369e-23 | 7.2457e-23 | 8.7211e-23 | 9.1778e-23 | 1.0231e-22 | 7.878225(6464)e-23 | | 0.9523 | 15000 | MLP L | 0.7969 | 6.7542e-23 | 7.0374e-23 | 7.2442e-23 | 8.7233e-23 | 9.1633e-23 | 
1.0254e-22 | 7.87895(1193)e-23 |
| 0.9804 | 2500 | MLP L | 0.8204 | 6.7296e-23 | 7.0113e-23 | 7.1929e-23 | 8.7164e-23 | 9.2161e-23 | 1.0231e-22 | 7.88333(2998)e-23 |
| 0.8587 | 1000 | MLP - | 0.8587 | 6.7402e-23 | 7.0765e-23 | 7.3144e-23 | 8.8834e-23 | 9.3483e-23 | 1.0254e-22 | 7.95967(5078)e-23 |
| 0.9059 | 2500 | MLP L | 0.7581 | 6.9099e-23 | 7.184e-23 | 7.3827e-23 | 8.8958e-23 | 9.2975e-23 | 1.0267e-22 | 7.98858(2644)e-23 |
| 0.9059 | 15000 | MLP L | 0.7581 | 6.9007e-23 | 7.1793e-23 | 7.376e-23 | 8.8706e-23 | 9.3435e-23 | 1.0392e-22 | 8.00447(1086)e-23 |
| 0.9804 | 2500 | MLP SL | 0.7554 | 6.9252e-23 | 7.2012e-23 | 7.4187e-23 | 8.9739e-23 | 9.4743e-23 | 1.0586e-22 | 8.08523(2922)e-23 |
| 0.9523 | 2500 | MLP S | 0.7756 | 6.9101e-23 | 7.223e-23 | 7.4278e-23 | 9.0755e-23 | 9.5269e-23 | 1.065e-22 | 8.11348(3569)e-23 |
| 0.9523 | 2500 | MLP SL | 0.7338 | 6.9912e-23 | 7.2854e-23 | 7.4904e-23 | 9.0028e-23 | 9.4577e-23 | 1.0503e-22 | 8.12513(3097)e-23 |
| 0.9804 | 2500 | MLP S | 0.7985 | 6.8161e-23 | 7.1565e-23 | 7.424e-23 | 9.0196e-23 | 9.4627e-23 | 1.0585e-22 | 8.13137(3193)e-23 |
| 0.8587 | 2500 | MLP L | 0.7186 | 7.0315e-23 | 7.315e-23 | 7.5378e-23 | 9.0887e-23 | 9.5784e-23 | 1.0594e-22 | 8.15611(2989)e-23 |
| 0.8587 | 15000 | MLP L | 0.7186 | 7.0331e-23 | 7.3208e-23 | 7.5405e-23 | 9.0593e-23 | 9.5135e-23 | 1.0593e-22 | 8.18263(1301)e-23 |
| 0.9059 | 2500 | MLP S | 0.7378 | 7.0347e-23 | 7.332e-23 | 7.5414e-23 | 9.1199e-23 | 9.628e-23 | 1.0712e-22 | 8.21687(2877)e-23 |
| 0.9 | 2500 | LnL L | 0.7531 | 6.9605e-23 | 7.2788e-23 | 7.4878e-23 | 9.1329e-23 | 9.6121e-23 | 1.0624e-22 | 8.21714(3725)e-23 |
| 0.9059 | 2500 | MLP SL | 0.6981 | 7.1015e-23 | 7.4215e-23 | 7.6216e-23 | 9.1682e-23 | 9.6429e-23 | 1.0821e-22 | 8.28676(3259)e-23 |
| 0.8587 | 2500 | MLP S | 0.6994 | 7.1866e-23 | 7.4964e-23 | 7.7161e-23 | 9.2693e-23 | 9.7118e-23 | 1.0756e-22 | 8.40075(2950)e-23 |
| 0.9 | 2500 | LnL SL | 0.6935 | 7.209e-23 | 7.5171e-23 | 7.7407e-23 | 9.3793e-23 | 9.8875e-23 | 1.1055e-22 | 8.42741(3364)e-23 |
| 0.8587 | 2500 | MLP SL | 0.6617 | 7.3152e-23 | 7.5953e-23 | 7.7906e-23 | 9.3755e-23 | 9.7631e-23 | 1.078e-22 | 8.45894(3497)e-23 |
| 0.8 | 2500 | LnL L | 0.6695 | 7.2997e-23 | 7.6045e-23 | 7.8319e-23 | 9.4028e-23 | 9.9097e-23 | 1.0938e-22 | 8.49926(3198)e-23 |
| 0.9 | 2500 | LnL - | 0.9 | 6.909e-23 | 7.4264e-23 | 7.7309e-23 | 9.5704e-23 | 1.0069e-22 | 1.1172e-22 | 8.57864(3724)e-23 |
| 0.8 | 2500 | LnL - | 0.8 | 7.1267e-23 | 7.5857e-23 | 7.8839e-23 | 9.7896e-23 | 1.0328e-22 | 1.1451e-22 | 8.73846(3903)e-23 |
| 0.8 | 2500 | LnL SL | 0.6165 | 7.5205e-23 | 7.8191e-23 | 8.0272e-23 | 9.6782e-23 | 1.0246e-22 | 1.1303e-22 | 8.74702(4099)e-23 |
| 0.7 | 2500 | LnL L | 0.5858 | 7.7223e-23 | 8.017e-23 | 8.205e-23 | 9.8623e-23 | 1.0401e-22 | 1.1601e-22 | 8.92978(4050)e-23 |
| 0.7 | 2500 | LnL - | 0.7 | 7.4035e-23 | 7.8682e-23 | 8.2286e-23 | 1.0096e-22 | 1.0656e-22 | 1.1869e-22 | 9.08559(3323)e-23 |
| 0.7 | 2500 | LnL SL | 0.5394 | 8.0091e-23 | 8.2758e-23 | 8.5076e-23 | 1.0242e-22 | 1.0769e-22 | 1.2037e-22 | 9.25655(3457)e-23 |
:end:
*** Generate plot of expected limit histogram :extended:
:PROPERTIES:
:CUSTOM_ID: sec:limit:expected_limits:gen_exp_limit_histogram
:END:
#+begin_src sh
ESCAPE_LATEX=true USE_TEX=true mcmc_limit_calculation \
    limit \
    -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \
    -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \
    --path "" \
    --years 2017 --years 2018 \
    --σ_p 0.05 \
    --energyMin 0.2 --energyMax 12.0 \
    --plotFile \
    ~/org/resources/lhood_limits_21_11_23/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99/mc_limit_lkMCMC_skInterpBackground_nmc_50000_uncertainty_ukUncertain_σs_0.0276_σb_0.0028_posUncertain_puUncertain_σp_0.0500.csv \
    --xLow 2.5e-21 \
    --xHigh 1.5e-20 \
    --limitKind lkMCMC \
    --yHigh 1300 \
    --bins 100 \
    --linesTo 800 \
    --as_gae_gaγ \
    --xLabel "Limit g_ae·g_aγ [GeV⁻¹]" \
    --yLabel "MC toy count" \
    --outpath ~/phd/Figs/limit/ \
    --suffix "_nmc_50k_pretty" \
    --nmc 50000
#+end_src
*** Run limit for 50k toys :extended:
:PROPERTIES:
:CUSTOM_ID: sec:limit:expected_limits:best_expected_50k
:END:
Now run the best case scenario again for 50k toys! <2023-11-26 Sun 21:27>.
#+begin_src sh
mcmc_limit_calculation \
    limit \
    -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \
    -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \
    --axionModel ~/org/resources/axionProduction/axionElectronRealDistance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv \
    --axionImage ~/phd/resources/axionImages/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm.csv \
    --combinedEfficiencyFile ~/org/resources/combined_detector_efficiencies.csv \
    --switchAxes \
    --path "" \
    --years 2017 --years 2018 \
    --σ_p 0.05 \
    --energyMin 0.2 --energyMax 12.0 \
    --limitKind lkMCMC \
    --outpath ~/org/resources/lhood_limits_21_11_23/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99/ \
    --suffix "" \
    --nmc 50000
#+end_src
It finished at some point during the night.
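For context on the toy-based tables in this section: the "Expected limit" column is derived from the distribution of MC toy limits, commonly taken as its median (the percentile columns characterize the same distribution). A minimal sketch of that reduction, assuming a hypothetical file ~toys.txt~ with one toy limit per line:

```shell
# Median of a list of MC toy limits (illustrative only; 'toys.txt' is a
# hypothetical input file with one limit value per line).
sort -g toys.txt | awk '
  { a[NR] = $1 }
  END {
    # middle element for odd N, mean of the two middle elements for even N
    if (NR % 2 == 1) median = a[(NR + 1) / 2]
    else             median = (a[NR / 2] + a[NR / 2 + 1]) / 2
    printf "expected limit (median of %d toys): %g\n", NR, median
  }'
```

~sort -g~ handles the scientific notation of the limit values; a plain ~sort -n~ would mis-order numbers like ~8e-23~ and ~1e-22~.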
*** Expected limits with ~nmc = 1000~ :noexport:
This is our baseline analysis. It is fixed to always using the FADC
veto and disabling the eccentricity cutoff of the line veto.

This is copy-pasted directly from the ~statusAndProgress~ section:
[[sec:limit:expected_limits_different_setups_test]]

See ~journal.org~ for more details about the calculation around this time!
#+begin_src sh
./generateExpectedLimitsTable \
    --path ~/org/resources/lhood_lnL_04_07_23/limits/ \
    --prefix mc_limit_lkMCMC_skInterpBackground_nmc_1000 \
    --path ~/org/resources/lhood_MLP_06_07_23/limits/ \
    --prefix mc_limit_lkMCMC_skInterpBackground_nmc_1000_uncertainty
#+end_src

#+RESULTS:

*** Expected limits with more statistics :noexport:
Again, straight from the ~statusAndProgress~ section about it.

For the best case:
#+begin_src sh
./generateExpectedLimitsTable --path ~/org/resources/lhood_MLP_06_07_23/limits/ --prefix mc_limit_lkMCMC_skInterpBackground_nmc_30000_
#+end_src

| ε | Type | Scinti | FADC | ε_FADC | Septem | Line | eccLineCut | ε_Septem | ε_Line | ε_SeptemLine | Total eff. | Limit no signal [GeV⁻¹] | Expected limit [GeV⁻¹] | Exp. limit variance [GeV⁻²] | Exp. limit σ [GeV⁻¹] |
|--------+------+--------+------+--------+--------+------+------------+----------+--------+--------------+------------+-------------------------+------------------------+-----------------------------+----------------------|
| 0.9107 | MLP | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.7677 | 5.9559e-23 | 7.5824e-23 | 6.0632e-51 | 7.7866e-26 |

For the next worse cases:
#+begin_src sh
./generateExpectedLimitsTable \
    --path ~/org/resources/lhood_MLP_06_07_23/limits/ \
    --prefix mc_limit_lkMCMC_skInterpBackground_nmc_15000_
#+end_src

| ε | Type | Scinti | FADC | ε_FADC | Septem | Line | eccLineCut | ε_Septem | ε_Line | ε_SeptemLine | Total eff. | Limit no signal [GeV⁻¹] | Expected limit [GeV⁻¹] | Exp. limit variance [GeV⁻²] | Exp. limit σ [GeV⁻¹] |
|--------+------+--------+------+--------+--------+------+------------+----------+--------+--------------+------------+-------------------------+------------------------+-----------------------------+----------------------|
| 0.9718 | MLP | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.8192 | 5.8374e-23 | 7.6252e-23 | 1.6405e-50 | 1.2808e-25 |
| 0.8474 | MLP | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.7143 | 6.1381e-23 | 7.6698e-23 | 1.4081e-50 | 1.1866e-25 |
| 0.7926 | MLP | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6681 | 6.2843e-23 | 7.8222e-23 | 1.3589e-50 | 1.1657e-25 |
| 0.7398 | MLP | true | true | 0.98 | false | true | 1 | 0.7841 | 0.8602 | 0.7325 | 0.6237 | 6.5704e-23 | 7.9913e-23 | 1.6073e-50 | 1.2678e-25 |

*** Older expected limit table with different FADC percentiles and eccentricity line cutoffs :noexport:
This is copy-pasted directly from the ~statusAndProgress~ section:
[[sec:limit:expected_limits_different_setups_test]]

Ideally we want to rerun this! Different ε line cutoff vetoes and FADC vetoes! <2023-03-20 Mon 12:44>

Expected limits from [[file:~/org/resources/lhood_limits_automation_with_nn_support/]], which should be more or less correct now (but they lack the eccentricity line veto cut value, so it is 0 in all columns!).
#+begin_src sh
cd $TPA/Tools/generateExpectedLimitsTable
./generateExpectedLimitsTable --path ~/org/resources/lhood_limits_automation_with_nn_support/limits
#+end_src

*NOTE*: These have different rows for different ε line veto cutoffs, but the table does not highlight that fact! 0.8602 corresponds to ε = 1.0, i.e. disabling the cutoff.

| ε_lnL | Scinti | FADC | ε_FADC | Septem | Line | eccLineCut | ε_Septem | ε_Line | ε_SeptemLine | Total eff. | Limit no signal | Expected Limit |
|-------+--------+-------+--------+--------+-------+------------+----------+--------+--------------+------------+-----------------+----------------|
| 0.9 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.7587 | 3.7853e-21 | 7.9443e-23 |
| 0.9 | true | false | 0.98 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.7742 | 3.6886e-21 | 8.0335e-23 |
| 0.9 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8794 | 0.7415 | 0.7757 | 3.6079e-21 | 8.1694e-23 |
| 0.8 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.6744 | 4.0556e-21 | 8.1916e-23 |
| 0.8 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.6744 | 4.0556e-21 | 8.1916e-23 |
| 0.9 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8946 | 0.7482 | 0.7891 | 3.5829e-21 | 8.3198e-23 |
| 0.8 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8794 | 0.7415 | 0.6895 | 3.9764e-21 | 8.3545e-23 |
| 0.8 | true | true | 0.9 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.6193 | 4.4551e-21 | 8.4936e-23 |
| 0.9 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.9076 | 0.754 | 0.8005 | 3.6208e-21 | 8.5169e-23 |
| 0.8 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8946 | 0.7482 | 0.7014 | 3.9491e-21 | 8.6022e-23 |
| 0.8 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.9076 | 0.754 | 0.7115 | 3.9686e-21 | 8.6462e-23 |
| 0.9 | true | false | 0.98 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.6593 | 4.2012e-21 | 8.6684e-23 |
| 0.7 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5901 | 4.7365e-21 | 8.67e-23 |
| 0.9 | true | true | 0.98 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.6461 | 4.3995e-21 | 8.6766e-23 |
| 0.7 | true | false | 0.98 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.6021 | 4.7491e-21 | 8.7482e-23 |
| 0.8 | true | true | 0.98 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5743 | 4.9249e-21 | 8.7699e-23 |
| 0.8 | true | true | 0.98 | false | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.784 | 3.6101e-21 | 8.8059e-23 |
| 0.8 | true | true | 0.8 | false | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5505 | 5.1433e-21 | 8.855e-23 |
| 0.7 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8794 | 0.7415 | 0.6033 | 4.4939e-21 | 8.8649e-23 |
| 0.8 | true | true | 0.98 | true | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.6147 | 4.5808e-21 | 8.8894e-23 |
| 0.9 | true | false | 0.98 | true | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.7057 | 3.9383e-21 | 8.9504e-23 |
| 0.7 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.8946 | 0.7482 | 0.6137 | 4.5694e-21 | 8.9715e-23 |
| 0.8 | true | true | 0.9 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5274 | 5.3406e-21 | 8.9906e-23 |
| 0.9 | true | true | 0.9 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5933 | 4.854e-21 | 9e-23 |
| 0.8 | false | false | 0.98 | false | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.8 | 3.5128e-21 | 9.0456e-23 |
| 0.8 | true | false | 0.98 | false | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.8 | 3.5573e-21 | 9.0594e-23 |
| 0.7 | true | true | 0.98 | false | true | 0 | 0.7841 | 0.9076 | 0.754 | 0.6226 | 4.5968e-21 | 9.0843e-23 |
| 0.7 | true | true | 0.98 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5025 | 5.627e-21 | 9.1029e-23 |
| 0.8 | true | true | 0.9 | false | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.72 | 3.8694e-21 | 9.1117e-23 |
| 0.8 | true | true | 0.9 | true | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5646 | 4.909e-21 | 9.2119e-23 |
| 0.7 | true | false | 0.98 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5128 | 5.5669e-21 | 9.3016e-23 |
| 0.7 | true | false | 0.98 | true | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5489 | 5.3018e-21 | 9.3255e-23 |
| 0.7 | true | true | 0.9 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.4615 | 6.1471e-21 | 9.4509e-23 |
| 0.8 | true | true | 0.8 | false | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.64 | 4.5472e-21 | 9.5113e-23 |
| 0.8 | true | true | 0.8 | true | true | 0 | 0.7841 | 0.8602 | 0.7325 | 0.4688 | 5.8579e-21 | 9.5468e-23 |
| 0.8 | true | true | 0.8 | true | false | 0 | 0.7841 | 0.8602 | 0.7325 | 0.5018 | 5.6441e-21 | 9.5653e-23 |

** Solar tracking candidates
:PROPERTIES:
:CUSTOM_ID: sec:limit:candidates
:END:
Based on the best-performing setup we can now look at the solar
tracking candidates. [fn:data_unblinding] In this setup, a total of
$\num{845}$ candidates are expected over the entire chip based on the
background model. [fn:expected_clusters] Computing the real candidates
yields a total of $\num{850}$ clusters over the chip. A figure
comparing the rate observed for the candidates to the background rate
over the entire chip is shown in
fig. sref:fig:limit:rate_candidates_background.

Fig. [[fig:limit:candidates]] shows all solar tracking candidates
observed with the method yielding the best expected limit. Their
energy is color coded and written above each cluster within a
$\SI{85}{pixel}$ radius of the chip center. The axion image is
underlaid to provide a visual reference for the importance of each
cluster. Very few candidates of relevant energies are seen within the
region of interest. Based on the previously mentioned
$\ln(1 + s_i/b_i) > 0.5$ condition, only a single candidate lies in
the sensitive region. See fig. sref:fig:limit:candidates_s_b for an
overview of the weighting of each candidate in this way, with only the
single cluster near coordinate $(x,y) = (105,125)$ crossing the
threshold of $\num{0.5}$.

#+CAPTION: Overview of all solar tracking candidates given the MLP@\SI{95}{\%} setup
#+CAPTION: including all vetoes except the septem veto. $\num{850}$ clusters are observed,
#+CAPTION: with the majority being low energy clusters in the chip corners. The axion image
#+CAPTION: is underlaid and the energy of each cluster is color coded. For all
#+CAPTION: candidates within a radius of $\SI{85}{pixels}$ of the center the energy is also written above.
#+CAPTION: We can see by eye that very few candidates of relevant energies
#+CAPTION: are present in the region of expected signal.
#+NAME: fig:limit:candidates
[[~/phd/Figs/trackingCandidates/background_cluster_centersmlp_0.95_scinti_fadc_line_tracking_candidates_axion_image_with_energy_radius_85.pdf]]

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption ($ "\\ln(1 + s_i/b_i)"))
  (label "fig:limit:candidates_s_b")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/trackingCandidates/real_candidates_signal_over_background.pdf"))
 (subfigure (linewidth 0.5)
  (caption "Rate candidates vs. background")
  (label "fig:limit:rate_candidates_background")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/trackingCandidates/rate_real_candidates_vs_background_rate_crAll_mlp_0.95_scinti_fadc_line.pdf"))
 (caption
  (subref "fig:limit:candidates_s_b")
  ": Weighting of each candidate based on "
  ($ "\\ln(1 + s_i/b_i)")
  ". The largest weight given is for the \\SI{2.96}{keV} cluster seen in fig. "
  (ref "fig:limit:candidates")
  " near pixel "
  ($ "(105,125)")
  " and tops out at slightly above "
  ($ "\\num{0.5}")
  ". "
  (subref "fig:limit:rate_candidates_background")
  ": Rate of background and candidate clusters over the entire chip as a log plot. Rates are generally compatible over the entire range.")
 (label "fig:limit:candidates_s_b_rates"))
#+end_src

[fn:expected_clusters] The background model contains $\num{16630}$
clusters in this case. $\SI{3156.8}{h}$ of background data and
$\SI{160.375}{h}$ of tracking data yields
$\num{16630}·\frac{160.375}{3156.8} \approx 845$ clusters.

[fn:data_unblinding] The actual data unblinding of the candidates
presented in this section was only done after the analysis of the
previous sections was fully complete. A presentation with discussion
took place first inside our own group and later with the relevant
members of the CAST collaboration to ascertain that our analysis
appears sound.
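The time-scaling argument of the footnote, together with the candidate weighting, can be checked with a quick back-of-the-envelope computation. This is only an illustrative sketch; the ~s~ and ~b~ values for the weight are made-up example numbers, not values from the analysis:

```shell
# Expected number of candidate clusters: scale the background cluster
# count by the ratio of tracking time to background time.
awk 'BEGIN {
  clusters   = 16630     # clusters in the background model
  tracking   = 160.375   # solar tracking time [h]
  background = 3156.8    # background time [h]
  printf "expected candidates: %.1f\n", clusters * tracking / background
  # Weight of a single candidate as used for the ln(1 + s_i/b_i) > 0.5
  # condition; s and b are hypothetical example values for one cluster.
  s = 0.4; b = 0.6
  printf "weight ln(1 + s/b) = %.4f\n", log(1 + s / b)
}'
# prints: expected candidates: 844.9
#         weight ln(1 + s/b) = 0.5108
```

With these example numbers the candidate would just cross the $0.5$ threshold, mirroring the single above-threshold cluster discussed in the text.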
*** TODO for this section [1/4] :noexport:
- [ ] *NOTE* Do we even want to compute other candidates? Old paragraph:
  #+begin_quote
  The candidates that are obtained for other setups of those shown in
  tab. [[tab:limit:expected_limits]], see appendix
  [[sec:appendix:solar_candidates]] and the full version of the thesis
  for even more detail.
  #+end_quote
- [ ] *NOTE: THE REFERENCE IN THE SUBFIGURE* is broken, because we
  currently don't export using custom identifiers only.
- [X] Create a time series plot of all tracking candidates to verify
  that their timestamps indeed indicate that they were part of
  trackings. Or better yet, export the timestamps as text strings to a
  CSV file? We should verify these things are correct, in particular
  given the mismatch between the expected number of candidates and the
  real ones. Or is it due to energy filtering?
  -> See: [[file:~/org/resources/candidate_cluster_dates.csv]]
- [ ] *CREATE APPENDIX FOR OTHER CANDIDATE PLOTS!!!* Old paragraph:
  #+begin_src
  , a somewhat meaningful excess. However, this excess is mostly from very low energy contributions below \SI{1}{keV} and potentially a sign of additional activity from noise related phenomena due to the tracking motor activity (which caused the significant FADC noise activity). Restricting the clusters to $\SIrange{1}{12}{keV}$ yields \num{219} cluster candidates where \num{210} would be expected (\num{4145} clusters in background).
  #+end_src
*** Rate plot comparing background to candidates :extended:
Combined:
#+begin_src sh
ESCAPE_LATEX=true plotBackgroundRate \
    ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \
    ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \
    ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k_tracking/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \
    ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k_tracking/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \
    --centerChip 3 \
    --names "Background" --names "Background" --names "Candidates" --names "Candidates" \
    --title "Rate over whole chip, MLP@95 % + line veto" \
    --showNumClusters \
    --region crAll \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --energyMin 0.2 \
    --outfile rate_real_candidates_vs_background_rate_crAll_mlp_0.95_scinti_fadc_line.pdf \
    --outpath ~/phd/Figs/trackingCandidates/ \
    --logPlot \
    --hideErrors \
    --useTeX \
    --quiet
#+end_src

#+RESULTS:
: [INFO]:Dataset: Background
: [INFO]: Integrated background rate in range: 0.2 .. 12.0: 4.4601e-03 cm⁻²·s⁻¹
: [INFO]: Integrated background rate/keV in range: 0.2 .. 12.0: 3.7797e-04 keV⁻¹·cm⁻²·s⁻¹
: [INFO]:Dataset: Candidates
: [INFO]: Integrated background rate in range: 0.2 .. 12.0: 4.4063e-03 cm⁻²·s⁻¹
: [INFO]: Integrated background rate/keV in range: 0.2 .. 12.0: 3.7342e-04 keV⁻¹·cm⁻²·s⁻¹
: [INFO]:Dataset: Background
: [INFO]: Integrated background rate in range: 0.5 .. 2.5: 1.1442e-03 cm⁻²·s⁻¹
: [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 5.7209e-04 keV⁻¹·cm⁻²·s⁻¹
: [INFO]:Dataset: Candidates
: [INFO]: Integrated background rate in range: 0.5 .. 2.5: 1.0011e-03 cm⁻²·s⁻¹
: [INFO]: Integrated background rate/keV in range: 0.5 .. 2.5: 5.0056e-04 keV⁻¹·cm⁻²·s⁻¹
: [INFO]:Dataset: Background
: [INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.4225e-03 cm⁻²·s⁻¹
: [INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 3.1611e-04 keV⁻¹·cm⁻²·s⁻¹
: [INFO]:Dataset: Candidates
: [INFO]: Integrated background rate in range: 0.5 .. 5.0: 1.3198e-03 cm⁻²·s⁻¹
: [INFO]: Integrated background rate/keV in range: 0.5 .. 5.0: 2.9329e-04 keV⁻¹·cm⁻²·s⁻¹
: [INFO]:Dataset: Background
: [INFO]: Integrated background rate in range: 0.2 .. 2.5: 3.7810e-03 cm⁻²·s⁻¹
: [INFO]: Integrated background rate/keV in range: 0.2 .. 2.5: 1.6439e-03 keV⁻¹·cm⁻²·s⁻¹
: [INFO]:Dataset: Candidates
: [INFO]: Integrated background rate in range: 0.2 .. 2.5: 3.6546e-03 cm⁻²·s⁻¹
: [INFO]: Integrated background rate/keV in range: 0.2 .. 2.5: 1.5890e-03 keV⁻¹·cm⁻²·s⁻¹
: [INFO]:Dataset: Background
: [INFO]: Integrated background rate in range: 4.0 .. 8.0: 2.0653e-04 cm⁻²·s⁻¹
: [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 5.1632e-05 keV⁻¹·cm⁻²·s⁻¹
: [INFO]:Dataset: Candidates
: [INFO]: Integrated background rate in range: 4.0 .. 8.0: 2.4595e-04 cm⁻²·s⁻¹
: [INFO]: Integrated background rate/keV in range: 4.0 .. 8.0: 6.1488e-05 keV⁻¹·cm⁻²·s⁻¹
: [INFO]:Dataset: Background
: [INFO]: Integrated background rate in range: 2.0 .. 8.0: 5.1210e-04 cm⁻²·s⁻¹
: [INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 8.5350e-05 keV⁻¹·cm⁻²·s⁻¹
: [INFO]:Dataset: Candidates
: [INFO]: Integrated background rate in range: 2.0 .. 8.0: 5.5425e-04 cm⁻²·s⁻¹
: [INFO]: Integrated background rate/keV in range: 2.0 .. 8.0: 9.2376e-05 keV⁻¹·cm⁻²·s⁻¹
: [INFO]:Dataset: Background
: [INFO]: Integrated background rate in range: 0.2 .. 8.0: 4.2249e-03 cm⁻²·s⁻¹
: [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 5.4165e-04 keV⁻¹·cm⁻²·s⁻¹
: [INFO]:Dataset: Candidates
: [INFO]: Integrated background rate in range: 0.2 .. 8.0: 4.1604e-03 cm⁻²·s⁻¹
: [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 5.3338e-04 keV⁻¹·cm⁻²·s⁻¹

| Classifier | ε_eff | Scinti | FADC | Septem | Line | ε_total | Rate      |
| MLP        | 0.957 | true   | true | false  | true | 0.807   | 0.0005417 |
| MLP        | 0.957 | true   | true | false  | true | 0.807   | 0.0005334 |

: [INFO]:DataFrame with 15 columns and 113 rows:
| Idx | Energy | Rate | totalTime | RateErr | Dataset | yMin | yMax | File | ε_total | ε_eff | Classifier | Scinti | FADC | Septem | Line |
| dtype: | float | float | float | float | string | float | float | string | constant | constant | constant | constant | constant | constant | constant |
| 0 | 0.2 | 1424.4217 | 3158.0066 | 15.82984 | Background | 1408.5919 | 1440.2516 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true |
| 1 | 0.4 | 475.68685 | 3158.0066 | 9.147824 | Background | 466.53902 | 484.83467 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true |
| 2 | 0.6 | 261.06482 | 3158.0066 | 6.7769052 | Background | 254.28792 | 267.84173 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true |
| 3 | 0.8 | 132.46753 | 3158.0066 | 4.8273851 | Background | 127.64014 | 137.29491 | Background | 0.80656401 | 0.9568089 | MLP | true | true
| false | true | | | | | | | | | | | 4 | 1 | 106.60733 | 3158.0066 | 4.3306269 | Background | 102.27671 | 110.93796 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 5 | 1.2 | 55.590623 | 3158.0066 | 3.1272169 | Background | 52.463406 | 58.71784 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 6 | 1.4 | 41.165208 | 3158.0066 | 2.6910538 | Background | 38.474154 | 43.856262 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 7 | 1.6 | 36.591296 | 3158.0066 | 2.5371499 | Background | 34.054146 | 39.128446 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 8 | 1.8 | 24.100998 | 3158.0066 | 2.0590872 | Background | 22.041911 | 26.160085 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 9 | 2 | 21.814042 | 3158.0066 | 1.9589588 | Background | 19.855083 | 23.773001 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 10 | 2.2 | 16.536451 | 3158.0066 | 1.7056047 | Background | 14.830846 | 18.242056 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 11 | 2.4 | 13.369897 | 3158.0066 | 1.5336323 | Background | 11.836264 | 14.903529 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 12 | 2.6 | 15.832772 | 3158.0066 | 1.6689207 | Background | 14.163852 | 17.501693 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 13 | 2.8 | 14.601334 | 3158.0066 | 1.6027047 | Background | 12.99863 | 16.204039 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 14 | 3 | 19.527086 | 3158.0066 | 1.853429 | Background | 17.673657 | 21.380515 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 15 | 3.2 | 18.823407 | 
3158.0066 | 1.8197274 | Background | 17.00368 | 20.643135 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 16 | 3.4 | 17.591969 | 3158.0066 | 1.7591969 | Background | 15.832772 | 19.351166 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 17 | 3.6 | 13.545816 | 3158.0066 | 1.543689 | Background | 12.002127 | 15.089505 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 18 | 3.8 | 9.8515028 | 3158.0066 | 1.3164624 | Background | 8.5350403 | 11.167965 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 19 | 4 | 4.3979923 | 3158.0066 | 0.87959846 | Background | 3.5183938 | 5.2775908 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 20 | 4.2 | 5.1016711 | 3158.0066 | 0.94735654 | Background | 4.1543145 | 6.0490276 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 21 | 4.4 | 2.8147151 | 3158.0066 | 0.70367877 | Background | 2.1110363 | 3.5183938 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 22 | 4.6 | 2.4628757 | 3158.0066 | 0.65823122 | Background | 1.8046445 | 3.1211069 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 23 | 4.8 | 6.1571892 | 3158.0066 | 1.0407549 | Background | 5.1164343 | 7.1979442 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 24 | 5 | 3.5183938 | 3158.0066 | 0.78673678 | Background | 2.7316571 | 4.3051306 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 25 | 5.2 | 4.9257514 | 3158.0066 | 0.93087951 | Background | 3.9948719 | 5.8566309 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 26 | 5.4 | 5.4535105 | 3158.0066 | 0.97947939 | Background | 4.4740311 | 
6.4329899 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 27 | 5.6 | 5.1016711 | 3158.0066 | 0.94735654 | Background | 4.1543145 | 6.0490276 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 28 | 5.8 | 4.573912 | 3158.0066 | 0.89701794 | Background | 3.6768941 | 5.4709299 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 29 | 6 | 4.9257514 | 3158.0066 | 0.93087951 | Background | 3.9948719 | 5.8566309 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 30 | 6.2 | 5.8053499 | 3158.0066 | 1.0105817 | Background | 4.7947682 | 6.8159315 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 31 | 6.4 | 5.1016711 | 3158.0066 | 0.94735654 | Background | 4.1543145 | 6.0490276 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 32 | 6.6 | 3.5183938 | 3158.0066 | 0.78673678 | Background | 2.7316571 | 4.3051306 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 33 | 6.8 | 2.9906348 | 3158.0066 | 0.72533547 | Background | 2.2652993 | 3.7159702 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 34 | 7 | 8.2682255 | 3158.0066 | 1.2060446 | Background | 7.0621809 | 9.4742702 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 35 | 7.2 | 5.8053499 | 3158.0066 | 1.0105817 | Background | 4.7947682 | 6.8159315 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 36 | 7.4 | 6.6849483 | 3158.0066 | 1.0844418 | Background | 5.6005065 | 7.7693901 | Background | 0.80656401 | 0.9568089 | MLP | true | true | false | true | | | | | | | | | | | 37 | 7.6 | 5.4535105 | 3158.0066 | 0.97947939 | Background | 4.4740311 | 6.4329899 | Background | 0.80656401 | 0.9568089 | 
| (per-bin rate table for the "Background" and "Candidates" datasets omitted: raw output of the rate plotting call) |

[INFO]: storing plot in /home/basti/phd/Figs/trackingCandidates/rate_real_candidates_vs_background_rate_crAll_mlp_0.95_scinti_fadc_line.pdf
[WARNING]: Printing total background time currently only supported for single datasets.

[[~/phd/Figs/trackingCandidates/rate_real_candidates_vs_background_rate_crAll_mlp_0.95_scinti_fadc_line.pdf]]

-> The two seem pretty compatible. Maybe there is minutely more in the middle range than expected, but I suppose that is a statistical effect.
*** *OUTDATED* Rate plot comparing background to candidates :extended:

Combined:
#+begin_src sh
plotBackgroundRate \
    /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    /home/basti/org/resources/lhood_limits_10_05_23_mlp_sEff_0.99/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh300_msecheckpoint_epoch_485000_loss_0.0055_acc_0.9933_vQ_0.99.h5 \
    /home/basti/Sync/lhood_tracking_scinti_line_mlp_0.95_2017.h5 \
    /home/basti/Sync/lhood_tracking_scinti_line_mlp_0.95_2018.h5 \
    --centerChip 3 \
    --names "Background" --names "Background" --names "Candidates" --names "Candidates" \
    --title "Rate over whole chip, MLP@91 % + line veto," \
    --showNumClusters \
    --region crAll \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --energyMin 0.2 \
    --outfile rate_real_candidates_vs_background_rate_crAll_mlp_0.95_scinti_fadc_line.pdf \
    --outpath ~/org/Figs/statusAndProgress/trackingCandidates/ \
    --logPlot \
    --hideErrors \
    --quiet
#+end_src

[[~/org/Figs/statusAndProgress/trackingCandidates/rate_real_candidates_vs_background_rate_crAll_mlp_0.95_scinti_fadc_line.pdf]]

Below 1 keV the candidates are _always_ in excess. At higher energies there may still be a slight excess, but it *seems* (though it may not be) more in line with expectation.

*** Perform the data unblinding and produce plots :extended:

1. run ~likelihood~ with ~--tracking~
2. compute ~mcmc_limit_calculation~ based on the files

Running the tracking classification for the best performing setup, MLP@95 % plus all vetoes except the septem veto, <2023-11-27 Mon 12:16>:
#+begin_src sh
./createAllLikelihoodCombinations \
    --f2017 ~/CastData/data/DataRuns2017_Reco.h5 \
    --f2018 ~/CastData/data/DataRuns2018_Reco.h5 \
    --c2017 ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --c2018 ~/CastData/data/CalibrationRuns2018_Reco.h5 \
    --regions crAll \
    --vetoSets "{+fkMLP, +fkFadc, +fkScinti, fkLineVeto}" \
    --mlpPath ~/org/resources/nn_devel_mixing/17_11_23_adam_tanh30_sigmoid_mse_3keV/mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.pt \
    --fadcVetoPercentile 0.99 \
    --signalEfficiency 0.95 \
    --out ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k_tracking/ \
    --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \
    --multiprocessing \
    --jobs 8 \
    --tracking \
    --dryRun
#+end_src

#+begin_src
Running all likelihood combinations took 399.9733135700226 s
#+end_src

Finished! I'm scared.

*** Generate plots for real candidates :extended:
:PROPERTIES:
:CUSTOM_ID: sec:limit:candidates:generate_candidate_plot
:END:

Using the files created in the previous section, let's create some plots.
#+begin_src sh
plotBackgroundClusters \
    ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k_tracking/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \
    ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k_tracking/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \
    --title "MLP@95+FADC+Scinti+Line tracking clusters" \
    --outpath ~/phd/Figs/trackingCandidates/ \
    --suffix "mlp_0.95_scinti_fadc_line_tracking_candidates_axion_image_with_energy_radius_85" \
    --energyMin 0.2 --energyMax 12.0 \
    --filterNoisyPixels \
    --axionImage ~/phd/resources/axionImages/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm.csv \
    --energyText \
    --colorBy energy \
    --energyTextRadius 85.0 \
    --switchAxes \
    --useTikZ \
    --singlePlot
#+end_src

[[~/phd/Figs/trackingCandidates/background_cluster_centersmlp_0.95_scinti_fadc_line_tracking_candidates_axion_image_with_energy_radius_85.pdf]]

*** *OUTDATED* Generate plots for real candidates :extended:

Using the files created in the previous section, let's create some plots.
(from ~journal.org~)
#+begin_src sh
plotBackgroundClusters \
    /home/basti/Sync/lhood_tracking_scinti_line_mlp_0.95_2017.h5 \
    /home/basti/Sync/lhood_tracking_scinti_line_mlp_0.95_2018.h5 \
    --title "MLP@95+FADC+Scinti+Line tracking clusters" \
    --outpath ~/org/Figs/statusAndProgress/trackingCandidates/ \
    --suffix "mlp_0.95_scinti_fadc_line_tracking_candidates_axion_image_with_energy_radius_85" \
    --energyMin 0.2 --energyMax 12.0 \
    --filterNoisyPixels \
    --axionImage /home/basti/org/resources/axion_images/axion_image_2018_1487_93_0.989AU.csv \
    --energyText \
    --colorBy energy \
    --energyTextRadius 85.0
#+end_src

[[~/org/Figs/statusAndProgress/trackingCandidates/background_cluster_centersmlp_0.95_scinti_fadc_line_tracking_candidates_axion_image_with_energy_radius_85.pdf]]

*** Number of background clusters expected :extended:

This is part of the ~sanityCheckBackgroundSampling~ procedure in the limit code. Running it on the MLP@95 + line veto files yields:
#+begin_src sh
mcmc_limit_calculation sanity --limitKind lkMCMC
#+end_src

- [ ] *FURTHER INVESTIGATE* the missing time!

*** Determine correct rotation for candidates :extended:
:PROPERTIES:
:CUSTOM_ID: sec:limit:candidates:septemboard_layout_transformations
:EXPORT_FILE_NAME: required_septemboard_layout_transformations
:END:

We know we need to apply a rotation to the candidates, because the Septemboard detector is rotated in relation to the native 'readout' direction. In principle the detector is rotated by $\SI{90}{°}$. See fig. [[fig:detector:full_septemboard_exploded]]: the larger spacing between the middle and bottom row is parallel to the high voltage bracket. The HV bracket was installed such that it leads the cables outside the lead housing.

Reminder about the Septemboard layout (why did I not just do it in Inkscape?):

<<septemboard_layout_view>>
#+begin_src sh
Coordinate system for chips *5* and *6*, each!
  0                     256
0 +----------> x
  |                                Legend:
  |                                ======= <- chip bonding area
  v        =================
  y        +-------+-------+
           |       |       |
           |   6   |   5   |
           |       |       |
       +---+---+---+---+---+---+
       |       |       |       |
       |   2   |   3   |   4   |
       |       |       |       |
       +-------+-------+-------+
       =========================
256 y      +-------+-------+
    ^      |       |       |
    |      |   0   |   1   |
    |      |       |       |
    |      +-------+-------+
    |      =================
    |
  0 +---------------------> x
    0                     256
    ^--- Coordinates of chips 0, 1, 2, 3, 4 *each*
#+end_src

Note that chips 5 and 6 are "inverted" in their ordering / numbering. Also note the inversion of the coordinate systems for chips 5 and 6, due to them being installed upside down compared to the other chips. Keep this in mind for the explanation below.

This /should/ imply that seen *from the front* (i.e. looking from the telescope onto the Septemboard):
- the top row is on the right-hand side vertically, so that chip 6 is in the top right, chip 5 in the bottom right.
- the center row is vertically in the center. Chip 2 is in the top middle, chip 3 (== center chip) obviously in the middle and chip 4 in the bottom middle.
- the bottom row is on the left vertically. Chip 0 is in the top left and chip 1 in the bottom left.

In this way the schematic above should be rotated by $\SI{90}{°}$ /clockwise/. Kind of like this:

<<cast_septemboard_view>>
#+begin_src sh
                                  Legend:
                                  ‖ <- chip bonding area

 ‖+-------+
 ‖|       |   ‖+-------+
 ‖|   0   |   ‖|       |   ‖+-------+
 ‖|       |   ‖|   2   |   ‖|       |
 ‖+-------+   ‖|       |   ‖|   6   |
 ‖+-------+   ‖+-------+   ‖|       |
 ‖|       |   ‖|       |   ‖+-------+
 ‖|   1   |   ‖|   3   |   ‖+-------+
 ‖|       |   ‖|       |   ‖|       |
 ‖+-------+   ‖+-------+   ‖|   5   |
              ‖+-------+   ‖|       |
              ‖|       |   ‖+-------+
              ‖|   4   |
              ‖|       |
              ‖+-------+
#+end_src

That should be the view from the front. This implies that when we look at the data from a single GridPix, we actually see the data in a plot matching [[septemboard_layout_view]] above, but we _want_ to see it like the rotated view [[cast_septemboard_view]] here. This means we need to remap the coordinates, so that we view the y axis as our x axis and vice versa.
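The remap just described amounts to rotating the pixel grid by 90° clockwise (with y pointing up). A minimal, standalone Python sketch of the coordinate map (illustrative only, not part of TimepixAnalysis; note it uses ~N - 1~ so pixel indices stay in 0..255, while the algebra in the next section works with 256 directly):

```python
# Rotation by 90° clockwise (viewed with y pointing up) maps a pixel
# (x, y) on an N x N grid to (x', y') = (y, N - 1 - x).
N = 256

def rotate_cw(x: int, y: int, n: int = N) -> tuple[int, int]:
    """Rotate a pixel coordinate by 90° clockwise about the chip center."""
    return y, n - 1 - x

# Four successive clockwise rotations must give back the identity:
p = (42, 100)
for _ in range(4):
    p = rotate_cw(*p)
assert p == (42, 100)

# The bottom left corner (0, 0) moves to the top left (0, 255):
assert rotate_cw(0, 0) == (0, 255)
```

The roundtrip assertion is a cheap way to convince oneself that the sign conventions are consistent.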
To be more specific about the kind of transformation required to get the desired coordinates, and some details that make this more tricky than it may appear (depending on how your brain works :) ), see the next section.

**** Viewing direction and required transformations

One extremely confusing aspect about data taking with these detectors (and presenting results) is that different people have different notions of how to interpret the data. For some people the detector is essentially a camera. We look at the world from the detector's point of view and thus 'see' the telescope and Sun in front of us. For other people the detector is an object that we look at from outside, acting more like a photo plate or a cloud chamber. You thus see the image being built up from the view of the Sun or the telescope.

To me, the latter interpretation makes more sense. I 'wait' the time of the data taking and then I 'look at' the detector 'from above' and see what was recorded on each chip. However, Christoph used the former approach in his analysis and initially I copied that over. In our code mapping pixel numbers as returned from TOS into physical positions we 'invert' the x pixel position by subtracting from the total number of pixels. See ~/TimepixAnalysis/Analysis/ingrid/private/geometry.nim~ in ~applyPitchConversion~:

#+begin_src nim
func applyPitchConversion*[T: (float | SomeInteger)](x, y: T, npix: int): (float, float) =
  ## template which returns the converted positions on a Timepix
  ## pixel position --> absolute position from pixel center in mm.
  ## Note that the x axis is 'inverted'!
  ((float(npix) - float(x) + 0.5) * PITCH, (float(y) + 0.5) * PITCH)
#+end_src

#+begin_src sh
Coordinate system of a single Timepix

    y                      Legend:
256 ^                      ======= <- chip bonding area
    |
    |      +-------+
    |      |       |
    |      |       |
    |      +-------+
    |      =========
  0 +-----------------> x
    0               256

becomes:

    y                      Legend:
256 ^                      ======= <- chip bonding area
    |
    |      +-------+
    |      |       |
    |      |       |
    |      +-------+
    |      =========
  0 +-----------------> x
  256               0
#+end_src

which is equivalent to 'looking at' the detector from above (original coordinate system) or 'looking through' the detector from behind (transformed coordinate system). Because all our physical coordinates (~centerX/Y~ in particular) live in this 'inverted' coordinate system, we need to take that into account when comparing to our raytracing results and when presenting our final data. In particular, our raytracer is a very obvious example of 'my' viewpoint, because it allows us to see the accumulation of signal on an ~ImageSensor~, see fig. [[fig:limit:septemboard_coordinates:traxer_view_at]].

#+CAPTION: Screenshot of TrAXer looking at an ~ImageSensor~ from on top of the magnet bore.
#+CAPTION: The axion image is built up on the sensor and we view it "from above".
#+NAME: fig:limit:septemboard_coordinates:traxer_view_at
[[~/phd/Figs/raytracing/traxer_cast_view_top_magnet_to_imagesensor.png]]

Let's be more specific now about what these things mean precisely and how to get to the correct coordinate system we care about (looking 'at' the detector), taking into account the 90° rotation of the Septemboard at CAST. Our target is the 'looking at' coordinates that fall out of TrAXer as seen in the screenshot. Let's use the term 'world coordinates' for this view.

- Description of our data reconstruction :: As mentioned, our data reconstruction performs an inversion of the data seen 'from above' to 'through the detector from behind'.
  This means we perform a reflection along the $y$ axis and a translation by 256 pixels, $\vec{v}_x$. A coordinate $(x, y)$ becomes $(x', y')$ in the transformed coordinates, given by
  \begin{align*}
  \vektor{x' \\ y'} &= \mathbf{T}_y · \vektor{x \\ y} + \vec{v}_x \\
  &= \mtrix{ -1 & 0 \\ 0 & 1 } · \vektor{x \\ y} + \vektor{256 \\ 0} \\
  &= \vektor{-x \\ y} + \vektor{256 \\ 0} \\
  &= \vektor{256 - x \\ y} \\
  &= \vektor{\tilde{x} \\ y}
  \end{align*}
  where we introduced $\tilde{x}$ for our new, inverted x coordinates, given in millimeter.

  (Note: we could make the translation more robust by using a 3D matrix (for this case) in homogeneous coordinates of the form
  \[
  \mathbf{V} = \mtrix{ 1 & 0 & 256 \\ 0 & 1 & 0 \\ 0 & 0 & 1 }
  \]
  where the last column performs the translation for us. Our input vectors would need to be extended by a third row, which we would set to 1. After applying the transformation we would then drop the third dimension again. The result is essentially the same as above, but can be represented as a pure matrix product. This is commonly used in computer graphics, there with a 4th dimension added to 3D vectors.)

- Description of the native data transformation :: To convert the raw GridPix data (without the above inversion) into our target coordinates (in world coordinates) we need to account for the rotation of the GridPix detector. As described in the previous section, the detector is rotated by 90° clockwise (in world coordinates). Rotations in mathematics commonly define positive rotations as counterclockwise. Thus, our detector is rotated by $\SI{-90}{°}$. Applying this rotation to our data yields the image as seen from world coordinates, mapped to the TrAXer simulation:
  \begin{align*}
  \vektor{x' \\ y'} &= R_{\SI{-90}{°}} · \vektor{x \\ y} \\
  &= \mtrix{ \cos θ & -\sin θ \\ \sin θ & \cos θ } · \vektor{x \\ y} \\
  &= \mtrix{ 0 & 1 \\ -1 & 0 } · \vektor{x \\ y} \\
  &= \vektor{y \\ -x}
  \end{align*}
  where we used $θ = -π/2$.
- Comparing our current description to the target :: Our current reconstructed data is of the form $(-x, y)$ while our target is $(y, -x)$. This means that to get to the desired outcome, all we have to do is exchange the x and y coordinates; in other words, perform a reflection along the line $x = y$ in our data.
  \begin{align*}
  \vektor{x' \\ y'} &= \mathbf{S}_{x = y} · \vektor{256 - x \\ y} \\
  &= \mtrix{ 0 & 1 \\ 1 & 0 } · \vektor{256 - x \\ y} \\
  &= \vektor{y \\ 256 - x}
  \end{align*}
  which is exactly what we want, up to the translation by $\num{256}$. That translation is not strictly speaking present in our actual data, due to the data being converted into millimeter. But this just means /the numbers are different, but the features are in the right place/.

That latter part is important. While in our (inverted) coordinate data the final transformation may look like $(y, \tilde{x})$ -- clearly different from our target $(y, -x)$ -- the underlying transformation is still of the form $(y, -x)$, barring a translation. That translation is essentially just dropped due to the millimeter conversion, but its inherent effect on the final geometry is retained (and encoded in the fact that all x coordinates are reversed).

This means that to bring our center cluster data to the same coordinates that we get from TrAXer (the world coordinates, 'looking at' the detector), all we need to do is transpose our coordinates: replace x by y and vice versa.

From here we can cross check with the X-ray finger run, run 189, see fig. sref:fig:cast:xray_finger_centers generated in sec. [[#sec:cast:alignment:xray_finger_plots]], that this indeed makes sense. To quickly reproduce that figure here and compare it to our TrAXer raytracing for an X-ray finger run, see fig. sref:fig:limit:candidates:layout_transformations.
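The chain of transformations argued for above -- x inversion during reconstruction, followed by an x ⇔ y swap, equaling the $\SI{-90}{°}$ rotation up to a constant translation -- can be verified numerically. A small standalone Python sketch (illustrative only, not part of the actual analysis code; it works in pixel units with the same 256 as the algebra above):

```python
import numpy as np

N = 256  # pixels per chip axis

def reconstruct(p):
    """x inversion performed by the data reconstruction: (x, y) -> (256 - x, y)."""
    x, y = p
    return np.array([N - x, y])

def swap(p):
    """Reflection along the line x = y: (x, y) -> (y, x)."""
    x, y = p
    return np.array([y, x])

def rotate_m90(p):
    """Rotation by -90°: (x, y) -> (y, -x)."""
    R = np.array([[0, 1], [-1, 0]])
    return R @ np.asarray(p)

for p in [(0, 0), (10, 200), (255, 3)]:
    lhs = swap(reconstruct(p))  # what the analysis pipeline produces
    rhs = rotate_m90(p)         # the target world view
    # the two agree up to the constant translation (0, 256):
    assert np.all(lhs - rhs == np.array([0, N]))
```

The constant offset ~(0, 256)~ is exactly the translation that the millimeter conversion absorbs, as described above.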
#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "X-ray finger run")
  (label "fig:limit:candidates:xray_finger_transformed")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/CAST_Alignment/xray_finger_centers_run_189.pdf"))
 (subfigure (linewidth 0.5)
  (caption "MLP prediction")
  (label "fig:limit:candidates:traxer_xray_finger")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/raytracing/xray_finger_14.2m_3mm_3keV.pdf"))
 (caption
  (subref "fig:limit:candidates:xray_finger_transformed")
  " X-ray finger run 189, which is transformed by the $x ⇔ y$ transformation. "
  (subref "fig:limit:candidates:traxer_xray_finger")
  " TrAXer X-ray finger simulation already in world coordinates. As we can see comparing the two, the shape is the same.")
 (label "fig:limit:candidates:layout_transformations"))
#+end_src

In the plotting code in that section we implement the required transformation by exchanging the X by the Y center data. This is correct, /but only/ because the conversion from pixel coordinates to millimeters when computing the cluster centers already includes the inversion of the x axis. A transposition like that should yield a clockwise rotation. This means for our limit calculation: perform the same X ⇔ Y replacement to get the correct rotation.

Furthermore, let's look at an X-ray finger run using ~TrAXer~ (read appendix [[#sec:appendix:raytracing]] first) to reproduce the X-ray finger run of run 189.

#+begin_quote
Note: the TrAXer raytracing binary output files that store the image sensor data are again inverted in y compared to what we see / what we want. In our detector the bottom left is $(0, 0)$. But the data buffer associated with an ~ImageSensor~ in TrAXer starts with $(0, 0)$ in the top left. Hence you see ~--invertY~ in all calls to ~plotBinary~!
#+end_quote

** Observed limit - $g_{ae}$
:PROPERTIES:
:CUSTOM_ID: sec:limit:observed_limit
:END:

With the data fully unblinded and the solar tracking candidates known, we can now compute the observed limit for the axion-electron coupling $g_{ae}$. We compute an observed limit of
\[
\left(g_{ae} · g_{aγ}\right)_{\text{observed}} = \SI{7.34(9)e-23}{GeV⁻¹},
\]
which is lower than the expected limit, due to the distribution of the candidates and the absence of any highly significant candidates. This limit is the mean value of $\num{200}$ limits computed via 3 Markov chains of $\num{150000}$ links each (same as for the expected limits), computed for the real candidates. The printed uncertainty represents the standard deviation of those limits. We may therefore wish to present an upper bound,
\[
\left(g_{ae} · g_{aγ}\right)_{\text{observed}} \lesssim \SI{7.35e-23}{GeV⁻¹} \text{ at } \SI{95}{\%} \text{ CL}.
\]
The expected limit for this case was $\left(g_{ae} · g_{aγ}\right)_{\text{expected}} = \SI{7.878225(6464)e-23}{GeV⁻¹}$ and the limit without any candidates at all $\left(g_{ae} · g_{aγ}\right)_{\text{no candidates}} = \SI{6.39e-23}{GeV⁻¹}$. This is a good improvement over the current best observed limit by CAST in 2013 cite:Barth_2013, which achieved
\[
\left(g_{ae} · g_{aγ}\right)_{\text{CAST2013}} \lesssim \SI{8.1e-23}{GeV⁻¹}.
\]
Unfortunately, [[cite:&Barth_2013]] does not provide an expected limit to compare to. [fn:likely_expected]

Fig. [[fig:limit:observed_axion_electron]] shows the marginal posterior likelihood function for the observed solar tracking candidates, for a single calculation run (out of the $\num{200}$ mentioned). The limit is at the $95^{\text{th}}$ percentile of the histogram, shown by the intersection of the blue and red filling. In addition, the yellow line shows values based on a numerical integration using Romberg's method [[cite:&romberg_integration]] at 20 different coupling constants. This serves as a cross validation of the MCMC result.
[fn:romberg_integration_performance]

Note that this observed limit is valid for axion masses in the range where the coherence condition in the conversion probability is met, that is $qL \ll π$; refer back to equation [[eq:theory:axion_interaction:conversion_probability]]. This holds up to axion masses around $m_a \lesssim \SI{0.02}{eV}$, but the exact value is both energy dependent and based on the desired cutoff in the reduction of the conversion probability. See fig. [[fig:appendix:conversion_probability_vs_mass]] in appendix [[#sec:appendix:conversion_probability]] for how the conversion probability develops as a function of the axion mass. The expected and observed limits simply (inversely) follow the conversion probability, i.e. out of coherence they get exponentially worse, superimposed with the periodic modulation seen in the conversion probability. As we did not perform a buffer gas run, the behavior in that range is not computed, because it is mostly trivial (only the exact point at which the limit decreases changes, depending on the exact energies of the candidates).

Finally, note that if one combines the existing astrophysical limits on $g_{ae}$ alone (for example tip of the red giant branch brightness limits, cite:&capozzi20_axion_neutr_bound_improv_with at $g_{ae} < \num{1.3e-13}$) with an axion-photon coupling of choice (for example the current best limit of [[cite:&cast_nature]]), one may very well obtain a 'better' limit on $g_{ae}·g_{aγ}$. In that sense the above, at the very least, represents the best helioscope limit on the product of both coupling constants. It also suffers less from uncertainties than astrophysical limits do, see for example cite:&dennis2023tip.

#+CAPTION: The marginal posterior likelihood function for the solar candidates
#+CAPTION: with the observed limit at the intersection between the blue and
#+CAPTION: red filling.
Also shown is a line based on a numerical integration of #+CAPTION: the 4-fold integral at 20 steps as a cross check of the MCMC. The x-axis #+CAPTION: is the parameter we sample, namely $g²_{ae}$. Limit at #+CAPTION: $g² \approx \num{5.4e-21} ⇒ g \approx \num{7.35e-11}$. #+NAME: fig:limit:observed_axion_electron [[~/phd/Figs/trackingCandidates/mcmc_real_limit_likelihood_ck_g_ae².pdf]] [fn:likely_expected] Judging by the $χ²$ distribution in [[cite:&Barth_2013]], fig. 6, having a minimum for negative $g²_{ae}g²_{gγ}$ values, potentially implies a better observed limit than expected. [fn:romberg_integration_performance] Note though, while the calculation of the observed limit via the MCMC takes about $\SI{10}{s}$, the numerical integration using Romberg's method takes $\sim\SI{1}{h}$ for only 20 points. And that is only using an integration level of 5 (a parameter of the Romberg method, one often uses 8 for Romberg for accuracy). This highlights the need for Monte Carlo methods, _especially_ for expected limits. **** TODOs for this section [/] :noexport: - [X] *MENTION THAT THIS LIMIT IS FOR AXION MASSES < 1e-3 or whatever* - [X] Refer to scaling of full magnet length (no simplification) - [X] Add plot to appendix of conversion probability scaling - [X] *REPLACE THIS FIG BY VERSION WITH g_ae·g_aγ!!!* -> But they look extremely different! So no. - [ ] *ALSO PRODUCE A χ² PLOT OF THE SPACE!* For limit: - [ ] *likelihood space* of the real candidates as plot - [ ] try to fit an exponential to the likelihood and provide that. Should give a better way to combine it maybe? - [ ] ? - [ ] *CITE ROMBERG* - [ ] *PROVIDE AT LEAST A TABLE* of the limits we get for a few other setups! 
Old limit: \[ g_{ae} · g_{aγ} = \SI{6.56e-23}{GeV⁻¹}, \] **** Sanity check :extended: This is only one of them, but quick, gives an overview: Running the sanity checks for the limits by varying ~g²_ae~ as described in the thesis: #+begin_src sh mcmc_limit_calculation \ sanity --limitKind lkMCMC \ --axionModel ~/phd/resources/readOpacityFile/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_fluxKind_fkAxionElectronPhoton_0.989AU.csv \ --axionImage ~/phd/resources/axionImages/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm.csv \ --combinedEfficiencyFile ~/org/resources/combined_detector_efficiencies.csv \ --switchAxes --sanityPath ~/phd/Figs/limit/sanity/axionElectronSanity/ \ --axionElectronLimit #+end_src See the plots in [[~/phd/Figs/limit/sanity/axionElectronSanity/]] and the sanity log: #+begin_src [2024-01-11 - 15:39:15] - INFO: =============== Input =============== [2024-01-11 - 15:39:15] - INFO: Input path: [2024-01-11 - 15:39:15] - INFO: Input files: @[(2017, "/home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5"), (2018, "/home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5")] [2024-01-11 - 15:39:15] - INFO: =============== Time =============== [2024-01-11 - 15:39:15] - INFO: Total background time: 3158.01 h [2024-01-11 - 15:39:15] - INFO: Total tracking time: 159.899 h [2024-01-11 - 15:39:15] - INFO: Ratio of tracking to background time: 1 UnitLess [2024-01-11 - 15:39:16] - INFO: Saving plot: /home/basti/phd/Figs/limit/sanity/axionElectronSanity/candidates_signal_over_background_axion_electron.pdf [2024-01-11 - 15:39:16] - INFO: =============== Chameleon coupling constant =============== [2024-01-11 - 15:39:16] - INFO: 
Conversion probability using default g_aγ² = 9.999999999999999e-25, yields P_a↦γ = 1.62702e-21 UnitLess [2024-01-11 - 15:39:25] - INFO: Limit with default g_ae² = 1e-26 is = 4.773876062173374e-21, and as g_ae = 6.909324179811926e-11 [2024-01-11 - 15:39:25] - INFO: Saving plot: /home/basti/phd/Figs/limit/sanity/axionElectronSanity/candidates_signal_over_background_axion_electron.pdf [2024-01-11 - 15:39:37] - INFO: 2. Limit with default g_ae² = 1e-26 is = 8.300544615154112e-21, and as g_ae = 9.110732470638194e-11 [2024-01-11 - 15:39:37] - INFO: Saving plot: /home/basti/phd/Figs/limit/sanity/axionElectronSanity/candidates_signal_over_background_axion_electron.pdf [2024-01-11 - 15:39:46] - INFO: 3. Limit with default g_ae² = 1e-26 is = 5.284512161896287e-21, and as g_ae = 7.26946501600791e-11 [2024-01-11 - 15:39:47] - INFO: Saving plot: /home/basti/phd/Figs/limit/sanity/axionElectronSanity/candidates_signal_over_background_axion_electron.pdf [2024-01-11 - 15:39:58] - INFO: 4. Limit with default g_ae² = 1e-26 is = 5.92914993487365e-21, and as g_ae = 7.700097359692051e-11 #+end_src And by varying g²_ae·g²_aγ instead (equivalent): #+begin_src sh mcmc_limit_calculation \ sanity --limitKind lkMCMC \ --axionModel ~/phd/resources/readOpacityFile/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_fluxKind_fkAxionElectronPhoton_0.989AU.csv \ --axionImage ~/phd/resources/axionImages/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm.csv \ --combinedEfficiencyFile ~/org/resources/combined_detector_efficiencies.csv \ --switchAxes --sanityPath ~/phd/Figs/limit/sanity/axionElectronAxionPhotonSanity/ \ --axionElectronAxionPhotonLimit #+end_src see [[~/phd/Figs/limit/sanity/axionElectronAxionPhotonSanity/]] and the sanity log: #+begin_src [2024-01-11 - 15:42:04] - INFO: =============== Input =============== [2024-01-11 - 15:42:04] - INFO: Input path: [2024-01-11 - 15:42:04] - INFO: Input files: @[(2017, 
"/home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5"), (2018, "/home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5")] [2024-01-11 - 15:42:05] - INFO: =============== Time =============== [2024-01-11 - 15:42:05] - INFO: Total background time: 3158.01 h [2024-01-11 - 15:42:05] - INFO: Total tracking time: 159.899 h [2024-01-11 - 15:42:05] - INFO: Ratio of tracking to background time: 1 UnitLess [2024-01-11 - 15:42:05] - INFO: Saving plot: /home/basti/phd/Figs/limit/sanity/axionElectronAxionPhotonSanity/candidates_signal_over_background_axion_electron_axion_photon.pdf [2024-01-11 - 15:42:05] - INFO: =============== Axion-electron axion-photon coupling constant =============== [2024-01-11 - 15:42:05] - INFO: Conversion probability using default g_aγ² = 9.999999999999999e-25, yields P_a↦γ = 1.62702e-21 UnitLess [2024-01-11 - 15:42:14] - INFO: Limit is g_ae²·g_aγ² = 4.722738218023592e-45, as g_ae·g_aγ = 6.872218141199821e-23 [2024-01-11 - 15:42:14] - INFO: Saving plot: /home/basti/phd/Figs/limit/sanity/axionElectronAxionPhotonSanity/candidates_signal_over_background_axion_electron_axion_photon.pdf [2024-01-11 - 15:42:25] - INFO: 2. Limit is g_ae²·g_aγ² = 8.597830720112351e-45, as g_ae·g_aγ = 9.272448824400355e-23 [2024-01-11 - 15:42:25] - INFO: Saving plot: /home/basti/phd/Figs/limit/sanity/axionElectronAxionPhotonSanity/candidates_signal_over_background_axion_electron_axion_photon.pdf [2024-01-11 - 15:42:35] - INFO: 3. 
Limit is g_ae²·g_aγ² = 5.266187342850787e-45, as g_ae·g_aγ = 7.256850103764572e-23 [2024-01-11 - 15:42:35] - INFO: Saving plot: /home/basti/phd/Figs/limit/sanity/axionElectronAxionPhotonSanity/candidates_signal_over_background_axion_electron_axion_photon.pdf [2024-01-11 - 15:42:46] - INFO: 4. Limit is g_ae²·g_aγ² = 6.027243097935441e-45, as g_ae·g_aγ = 7.763532120069731e-23 #+end_src Compare the limits with the above to see that they are basically the same limits (the variation is down to the MCMC uncertainty for a single limit). *NOTE*: Running the limit using ~--axionElectronAxionPhotonLimitWrong~ will run it by varying ~g_ae·g_aγ~ (without the square) directly. This is to illustrate the explanation of sec. [[#sec:limit:mcmc:notes_variation_coupling_parameter]]. It will show a distorted histogram and different limits than in the above two cases: #+begin_src sh mcmc_limit_calculation \ sanity --limitKind lkMCMC \ --axionModel ~/phd/resources/readOpacityFile/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_fluxKind_fkAxionElectronPhoton_0.989AU.csv \ --axionImage ~/phd/resources/axionImages/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm.csv \ --combinedEfficiencyFile ~/org/resources/combined_detector_efficiencies.csv \ --switchAxes --sanityPath ~/phd/Figs/limit/sanity/axionElectronAxionPhotonWrongSanity/ \ --axionElectronAxionPhotonLimitWrong #+end_src See [[~/phd/Figs/limit/sanity/axionElectronAxionPhotonWrongSanity/]] and the sanity log: #+begin_src [2024-01-11 - 15:43:43] - INFO: =============== Input =============== [2024-01-11 - 15:43:43] - INFO: Input path: [2024-01-11 - 15:43:43] - INFO: Input files: @[(2017, "/home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5"), (2018, 
"/home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5")] [2024-01-11 - 15:43:43] - INFO: =============== Time =============== [2024-01-11 - 15:43:43] - INFO: Total background time: 3158.01 h [2024-01-11 - 15:43:43] - INFO: Total tracking time: 159.899 h [2024-01-11 - 15:43:43] - INFO: Ratio of tracking to background time: 1 UnitLess [2024-01-11 - 15:43:43] - INFO: Saving plot: /home/basti/phd/Figs/limit/sanity/axionElectronAxionPhotonWrongSanity/candidates_signal_over_background_axion_electron_axion_photon_wrong.pdf [2024-01-11 - 15:43:44] - INFO: =============== Axion-electron axion-photon coupling constant via g_ae·g_aγ directly =============== [2024-01-11 - 15:43:44] - INFO: Conversion probability using default g_aγ² = 9.999999999999999e-25, yields P_a↦γ = 1.62702e-21 UnitLess [2024-01-11 - 15:43:53] - INFO: Limit is g_ae·g_aγ = 5.536231179166123e-23 [2024-01-11 - 15:43:54] - INFO: Saving plot: /home/basti/phd/Figs/limit/sanity/axionElectronAxionPhotonWrongSanity/candidates_signal_over_background_axion_electron_axion_photon_wrong.pdf [2024-01-11 - 15:44:05] - INFO: 2. Limit is g_ae·g_aγ = 7.677750408651383e-23 [2024-01-11 - 15:44:05] - INFO: Saving plot: /home/basti/phd/Figs/limit/sanity/axionElectronAxionPhotonWrongSanity/candidates_signal_over_background_axion_electron_axion_photon_wrong.pdf [2024-01-11 - 15:44:15] - INFO: 3. Limit is g_ae·g_aγ = 5.828448006406443e-23 [2024-01-11 - 15:44:15] - INFO: Saving plot: /home/basti/phd/Figs/limit/sanity/axionElectronAxionPhotonWrongSanity/candidates_signal_over_background_axion_electron_axion_photon_wrong.pdf [2024-01-11 - 15:44:27] - INFO: 4. 
Limit is g_ae·g_aγ = 6.573837668159744e-23 #+end_src See how the limits are much lower than for the two cases above (the additional 1e-12 difference is down to ~g_aγ~ not being included, note the numbers outside of the power). And in particular, compare [[~/phd/Figs/limit/sanity/axionElectronAxionPhotonWrongSanity/mcmc_histo_real_syst_.pdf]] with the correct [[~/phd/Figs/limit/sanity/axionElectronAxionPhotonSanity/mcmc_histo_real_syst_.pdf]] and [[~/phd/Figs/limit/sanity/axionElectronSanity/mcmc_histo_real_syst_.pdf]] which illustrates the sampling behavior nicely. **** Calculate the observed limit :extended: We use ~F_WIDTH=0.5~ for the ~ln(1 + s/b)~ plot. The MCMC g_ae² histogram is forced to be 0.9 in width anyway. #+begin_src sh F_WIDTH=0.5 ESCAPE_LATEX=true USE_TEX=true mcmc_limit_calculation \ limit \ -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --tracking ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k_tracking/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --tracking ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k_tracking/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --axionModel ~/org/resources/axionProduction/axionElectronRealDistance/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_0.989AU.csv \ --axionImage ~/phd/resources/axionImages/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm.csv \ --combinedEfficiencyFile 
~/org/resources/combined_detector_efficiencies.csv \
    --switchAxes \
    --path "" \
    --years 2017 --years 2018 \
    --σ_p 0.05 \
    --energyMin 0.2 --energyMax 12.0 \
    --limitKind lkMCMC \
    --outpath ~/phd/Figs/trackingCandidates/ \
    --suffix ""
#+end_src

-> The plot looks exactly like the one already in the thesis? Surely they
should be switched around x/y?

The log file: [[~/phd/Figs/trackingCandidates/real_candidates_limit.log]]

** Other coupling constants
:PROPERTIES:
:CUSTOM_ID: sec:limit:other_couplings
:END:

As explained when we introduced the limit calculation method, one important
aim was to develop a method agnostic to the coupling constant of choice. We
now make use of this to compute expected and observed limits for the
axion-photon coupling $g_{aγ}$, sec. [[#sec:limit:axion_photon]], and the
chameleon coupling $β_γ$, sec. [[#sec:limit:chameleon]].

*** TODOs for this section :noexport:

We will start with the main coupling constant of interest in the context of
this thesis, the axion-electron coupling $g_{ae}$ in sec.
[[#sec:limit:observed_axion_electron]]. Afterwards, we will also shortly
present the limits achievable by this detector, dataset and methodology on the
axion-photon coupling $g_{aγ}$, sec. [[#sec:limit:observed_axion_photon]], as
well as the chameleon coupling constant $β$, sec.
[[#sec:limit:observed_chameleon]].

*** Axion-photon coupling - $g⁴_{aγ}$
:PROPERTIES:
:CUSTOM_ID: sec:limit:axion_photon
:END:

The axion-photon coupling requires the following changes:
- use $g⁴_{aγ}$ in the vector of the Markov chain, replacing the $g²_{ae}$
  term. As $g_{aγ}$ affects both the production and the reconversion, it
  enters in the fourth power.
- the axion flux based on Primakoff production only.
- the axion image based on the Primakoff flux only.

The axion flux and axion image for the Primakoff production are shown in
fig. sref:fig:limit:axion_photon:flux_image_inputs.
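Independent of the coupling, the final limit extraction step never changes:
the Markov chain samples the parameter the signal rate is linear in
($g²_{ae}·g²_{aγ}$ for the axion-electron case, $g⁴_{aγ}$ here), the limit is
the $95^{\text{th}}$ percentile of those samples, and the physical coupling
follows by taking the matching root. A minimal Python sketch of this last
step, using synthetic stand-in samples (the shape and scale of the toy
posterior are made up for illustration; the real implementation is the
Nim-based ~mcmc_limit_calculation~ tool):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the marginal posterior samples of the chain.
# The chain walks in u = g_aγ⁴, the parameter the signal rate is linear in.
# The exponential shape and scale here are purely illustrative.
u_samples = rng.exponential(scale=2.0e-41, size=150_000)

# The limit is the 95th percentile of the sampled parameter ...
u_95 = np.quantile(u_samples, 0.95)

# ... and the linear coupling follows via the matching root:
g_limit = u_95 ** 0.25  # fourth root for u = g_aγ⁴
# (for u = g_ae²·g_aγ², the bound on the product would be sqrt(u_95))

print(f"limit: g_aγ⁴ < {u_95:.3e}  =>  g_aγ < {g_limit:.3e}")
```

Quoting the limit in $g⁴_{aγ}$ or in $g_{aγ}$ is equivalent, since percentiles
commute with the monotonic fourth root; what matters for the sampling itself
is that the flat prior is uniform in the parameter the signal is linear in.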
Note that the axion image is computed at the same effective conversion
position as for the axion-electron flux. Strictly speaking this is not quite
correct, due to the different energies of the two fluxes and hence different
absorption lengths, but for our purposes the inaccuracy is acceptable.

Based on these inputs we compute an expected limit for the same setup that
yielded the best expected limit for the axion-electron coupling constant,
namely the MLP classifier at $\SI{95}{\%}$ efficiency using all vetoes except
the septem veto. The expected limit is based on $\num{1e4}$ toy candidate
sets. Fig. [[fig:limit:axion_photon:expected_limit]] shows the distribution
of limits obtained for the different toy candidate sets, including the
expected limit. Tab. [[tab:appendix:expected_limits_percentiles_axion_photon]]
in appendix [[#sec:appendix:exp_limit_percentiles]] lists the percentiles of
this distribution. The obtained expected limit is
\[
g_{aγ, \text{expected}} = \SI{9.0650(75)e-11}{GeV⁻¹},
\]
which is of course significantly worse than the observed CAST Nature
[[cite:&cast_nature]] limit of
$g_{aγ, \text{Nature}} = \SI{6.6e-11}{GeV^{−1}}$. This is expected, however,
due to the much shorter tracking time and higher background rates. The limit
without any candidates comes out to
\[
g_{aγ, \text{no candidates}} = \SI{7.95e-11}{GeV⁻¹}.
\]
Based on the same candidates as in sec. [[#sec:limit:observed_limit]] we
obtain an observed limit of
\[
g_{aγ, \text{observed}} = \SI{8.99(7)e-11}{GeV⁻¹},
\]
which again is the mean of 200 MCMC limits. As for the axion-electron
coupling, it is slightly better than the expected limit. Stated as a bound,
it is
\[
g_{aγ, \text{observed}} \lesssim \SI{9.0e-11}{GeV⁻¹} \text{ at } \SI{95}{\%} \text{ CL}.
\]
The distribution of the posterior likelihood function can be found in fig.
[[fig:appendix:posterior_likelihood_axion_photon]] of appendix [[#sec:appendix:limit_additional:axion_photon]]. #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "$g_{aγ}$ flux") (label "fig:limit:axion_photon:flux") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/axions/differential_axion_flux_primakoff.pdf")) (subfigure (linewidth 0.5) (caption "$g_{aγ}$ image") (label "fig:limit:axion_photon:image") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/raytracing/solar_axion_image_fkAxionPhoton_0.989AU_1492.93mm.pdf")) (caption (subref "fig:limit:axion_photon:flux") ": Differential axion flux arriving on Earth due to Primakoff production assuming " ($ (SI 1e-12 "GeV⁻¹")) ". " (subref "fig:limit:axion_photon:image") ": Axion image for the Primakoff emission in the Sun.") (label "fig:limit:axion_photon:flux_image_inputs")) #+end_src #+CAPTION: $\num{1e4}$ toy limits for the axion-photon coupling $g⁴_{aγ}$. The expected limit is #+CAPTION: determined to $g_{aγ, \text{expected}} = \SI{9.0650(75)e-11}{GeV⁻¹}$, with the no candidates limit #+CAPTION: being $g_{aγ, \text{no candidates}} = \SI{7.95e-11}{GeV⁻¹}$. #+NAME: fig:limit:axion_photon:expected_limit [[~/phd/Figs/limit/mc_limit_lkMCMC_skInterpBackground_nmc_10000_uncertainty_ukUncertain_σs_0.0276_σb_0.0028_posUncertain_puUncertain_σp_0.0500_axion_photon_nmc_10k_pretty.pdf]] **** TODOs for this section [/] :noexport: - [ ] *COMPUTE STANDARD DEVIATION OF EXPECTED LIMIT VIA BOOTSTRAPPING* - [X] subfigure - axion flux - axion image - [ ] expected limits 10k samples, histogram - [ ] histogram of observed limit - [ ] Show differential solar flux - [ ] Show axion image for Primakoff origin - [ ] How is the axion emission modeled for the axion-photon flux? -> Figure out, just need that! 
**** Calculate differential axion-photon flux and emission rates :extended: Same as for the axion-electron coupling, we use ~readOpacityFile~ with the ~~--fluxKind fkAxionPhoton~ argument (see also [[#sec:appendix:raytracing:generate_axion_image]]): #+begin_src sh :dir ~/CastData/ExternCode/AxionElectronLimit/src ./readOpacityFile \ --suffix "_0.989AU" \ --distanceSunEarth 0.9891144450781392.AU \ --fluxKind fkAxionPhoton \ --plotPath ~/phd/Figs/readOpacityFile/ \ --outpath ~/phd/resources/readOpacityFile/ #+end_src **** Generate plot of differential Primakoff flux :extended: Produce a standalone plot of the axion-photon flux: #+begin_src nim import ggplotnim, unchained let df = readCsv("~/phd/resources/readOpacityFile/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_fluxKind_fkAxionPhoton_0.989AU.csv") .mutate(f{float: "Flux" ~ idx("Flux / keV⁻¹ m⁻² yr⁻¹").keV⁻¹•m⁻²•yr⁻¹.toDef(keV⁻¹•cm⁻²•s⁻¹).float}) .filter(f{string: `type` == "Total flux"}) ggplot(df, aes("Energy [keV]", "Flux")) + geom_line() + ylab(r"Flux [$\si{keV⁻¹.cm⁻².s⁻¹}$]") + margin(left = 4.5) + ggtitle(r"Primakoff flux at $g_{aγ} = \SI{1e-12}{GeV⁻¹}$") + xlim(0.0, 15.0) + themeLatex(fWidth = 0.5, width = 600, baseTheme = sideBySide) + ggsave("~/phd/Figs/axions/differential_axion_flux_primakoff.pdf") #+end_src #+RESULTS: | [INFO]: | No | plot | ratio | given, | using | golden | ratio. | | [INFO] | TeXDaemon | ready | for | input. 
| | | | | shellCmd: | command | -v | lualatex | | | | | | shellCmd: | lualatex | -output-directory | /home/basti/phd/Figs/axions | /home/basti/phd/Figs/axions/differential_axion_flux_primakoff.tex | | | | | Generated: | /home/basti/phd/Figs/axions/differential_axion_flux_primakoff.pdf | | | | | | | **** Generate axion image for axion-photon emission :extended: Now we just run the raytracer, using the correct position (1492.93 mm effective distance from telescope center) and produced emission file: #+begin_src sh ./raytracer \ --width 1200 --speed 10.0 --nJobs 32 --vfov 15 --maxDepth 5 \ --llnl --focalPoint --sourceKind skSun \ --solarModelFile ~/phd/resources/readOpacityFile/solar_model_dataframe_fluxKind_fkAxionPhoton_0.989AU.csv \ --sensorKind sSum \ --usePerfectMirror=false \ --rayAt 0.995286666667 \ --ignoreWindow #+end_src :RESULTS: [INFO] Writing buffers to binary files. [INFO] Writing file: out/buffer_2024-01-08T13:19:16+01:00_type_uint32_len_1440000_width_1200_height_1200.dat [INFO] Writing file: out/counts_2024-01-08T13:19:16+01:00_type_int_len_1440000_width_1200_height_1200.dat [INFO] Writing file: out/image_sensor_0_2024-01-08T13:19:16+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat :END: And plot it, produce the CSV axion image for the limit: #+begin_src sh :dir ~/CastData/ExternCode/RayTracing/ F_WIDTH=0.5 USE_TEX=true ./plotBinary \ --dtype float \ -f out/image_sensor_0_2024-01-08T13:19:16+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat \ --invertY \ --out ~/phd/Figs/raytracing/solar_axion_image_fkAxionPhoton_0.989AU_1492.93mm.pdf \ --inPixels=false \ --gridpixOutfile ~/phd/resources/axionImages/solar_axion_image_fkAxionPhoton_0.989AU_1492.93mm.csv \ --title "Solar axion image (g_aγ) at 0.989 AU from Sun, 1492.93 mm" #+end_src yields: - [[~/phd/Figs/raytracing/solar_axion_image_fkAxionPhoton_0.989AU_1492.93mm.pdf]] - 
[[~/phd/Figs/raytracing/solar_axion_image_fkAxionPhoton_0.989AU_1492.93mm_log10.pdf]] - [[~/phd/Figs/raytracing/solar_axion_image_fkAxionPhoton_0.989AU_1492.93mm_hpd_via_eef_50.pdf]] **** Compute an expected limit :extended: Given that this is only an 'add-on', we will just compute O(10k) toy limits for an expected limit using the best performing setup from the axion-electron limit. Based on those we will then compute the observed limit too. Sanity checks: #+begin_src sh mcmc_limit_calculation \ sanity --limitKind lkMCMC \ --axionModel ~/phd/resources/readOpacityFile/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_fluxKind_fkAxionPhoton_0.989AU.csv \ --axionImage ~/phd/resources/axionImages/solar_axion_image_fkAxionPhoton_0.989AU_1492.93mm.csv \ --combinedEfficiencyFile ~/org/resources/combined_detector_efficiencies.csv \ --switchAxes --sanityPath ~/phd/Figs/limit/sanity/axionPhotonSanity/ \ --axionPhotonLimit #+end_src see [[~/phd/Figs/limit/sanity/axionPhotonSanity/]] and the sanity log file: #+begin_src [2024-01-11 - 15:48:04] - INFO: =============== Input =============== [2024-01-11 - 15:48:04] - INFO: Input path: [2024-01-11 - 15:48:04] - INFO: Input files: @[(2017, "/home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5"), (2018, "/home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5")] [2024-01-11 - 15:48:05] - INFO: =============== Time =============== [2024-01-11 - 15:48:05] - INFO: Total background time: 3158.01 h [2024-01-11 - 15:48:05] - INFO: Total tracking time: 159.899 h [2024-01-11 - 15:48:05] - INFO: Ratio of tracking to background time: 1 UnitLess [2024-01-11 - 15:48:05] - INFO: Saving plot: 
/home/basti/phd/Figs/limit/sanity/axionPhotonSanity/candidates_signal_over_background_axionPhoton.pdf [2024-01-11 - 15:48:05] - INFO: =============== Axion-photon coupling constant =============== [2024-01-11 - 15:48:05] - INFO: Conversion probability using default g_aγ² = 9.999999999999999e-25, yields P_a↦γ = 7.97241e-18 UnitLess [2024-01-11 - 15:48:24] - INFO: Limit with default g_aγ² = 9.999999999999999e-25 is = 5.348452506385883e-41, and as g_aγ = 8.551790162182624e-11 [2024-01-11 - 15:48:24] - INFO: Saving plot: /home/basti/phd/Figs/limit/sanity/axionPhotonSanity/candidates_signal_over_background_axionPhoton.pdf [2024-01-11 - 15:48:45] - INFO: 2. Limit with default g_aγ² = 9.999999999999999e-25 is = 9.258158870434389e-41, and as g_aγ = 9.809145065039609e-11 [2024-01-11 - 15:48:45] - INFO: Saving plot: /home/basti/phd/Figs/limit/sanity/axionPhotonSanity/candidates_signal_over_background_axionPhoton.pdf [2024-01-11 - 15:49:04] - INFO: 3. Limit with default g_aγ² = 9.999999999999999e-25 is = 6.93624112433336e-41, and as g_aγ = 9.126012210624729e-11 [2024-01-11 - 15:49:04] - INFO: Saving plot: /home/basti/phd/Figs/limit/sanity/axionPhotonSanity/candidates_signal_over_background_axionPhoton.pdf [2024-01-11 - 15:49:25] - INFO: 4. Limit with default g_aγ² = 9.999999999999999e-25 is = 6.105369380817466e-41, and as g_aγ = 8.839505819701355e-11 #+end_src The limits seem reasonable for our detector. Worse than the Nature limit by quite a bit, but still acceptable! See sec. [[#sec:limit:expected_limits:best_expected_50k]] for the command for axion-electron using 50k toys. 
The main differences: - axion photon differential flux - axion photon axion image - couplingKind: g_aγ #+begin_src sh mcmc_limit_calculation \ limit \ -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --axionModel ~/phd/resources/readOpacityFile/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_fluxKind_fkAxionPhoton_0.989AU.csv \ --axionImage ~/phd/resources/axionImages/solar_axion_image_fkAxionPhoton_0.989AU_1492.93mm.csv \ --combinedEfficiencyFile ~/org/resources/combined_detector_efficiencies.csv \ --switchAxes \ --path "" \ --years 2017 --years 2018 \ --σ_p 0.05 \ --energyMin 0.2 --energyMax 12.0 \ --limitKind lkMCMC \ --couplingKind ck_g_aγ⁴ \ --outpath ~/org/resources/lhood_limits_axion_photon_11_01_24/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99/ \ --suffix "" \ --nmc 10000 #+end_src :RESULTS: Expected limit: 6.752456878081697e-41 Generating group /ctx/trackingDf Generating group /ctx/axionModel Serializing Interpolator by evaluating 0.001 to 15.0 of name: axionSpl Serializing Interpolator by evaluating 0.0 to 10.0 of name: efficiencySpl Serializing Interpolator by evaluating 0.2 to 12.0 of name: backgroundSpl Generating group /ctx/backgroundDf Wrote outfile 
/home/basti/org/resources/lhood_limits_axion_photon_11_01_24/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99/mc_limit_lkMCMC_skInterpBackground_nmc_10000_uncertainty_ukUncertain_σs_0.0276_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 :END: Finished <2024-01-11 Thu 17:53>. It took about 3 hours maybe. - [ ] Make the output plot prettier! It's super ugly due to a few _very_ large limits. #+begin_src sh :dir ~/CastData/ExternCode/TimepixAnalysis/Tools/generateExpectedLimitsTable/ :results drawer ./generateExpectedLimitsTable --path ~/org/resources/lhood_limits_axion_photon_11_01_24/ --prefix "mc_limit_lkMCMC" --precision 2 --coupling ck_g_aγ⁴ #+end_src #+RESULTS: :results: File: mc_limit_lkMCMC_skInterpBackground_nmc_10000_uncertainty_ukUncertain_σs_0.0276_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.451446348780004e-40 | ε_eff | nmc | Type | Scinti | FADC | ε_FADC | Septem | Line | eccLineCut | ε_Septem | ε_Line | ε_SeptemLine | ε_total | Limit no signal [GeV⁻¹] | Expected limit [GeV⁻¹] | Exp. limit variance [GeV⁻²] | Exp. 
limit σ [GeV⁻¹] | P_5 | P_16 | P_25 | P_75 | P_84 | P_95 | |------+-------+------+--------+------+-------+--------+------+------------+---------+-------+-------------+--------+------------------------+-----------------------+----------------------------+---------------------+----------+---------+----------+----------+----------+----------| | 0.95 | 10000 | MLP | true | true | 0.98 | false | true | 1 | 1 | 0.85 | 1 | 0.8 | 7.84e-11 | 9.06e-11 | 5.57e-27 | 7.46e-14 | 8.24e-11 | 8.5e-11 | 8.66e-11 | 9.56e-11 | 9.83e-11 | 1.04e-10 | | ε_eff | nmc | Type | ε_total | P_5 | P_16 | P_25 | P_75 | P_84 | P_95 | Expected | |------+-------+-------+--------+----------+---------+----------+----------+----------+----------+----------------| | 0.95 | 10000 | MLP L | 0.8 | 8.24e-11 | 8.5e-11 | 8.66e-11 | 9.56e-11 | 9.83e-11 | 1.04e-10 | 9.0650(75)e-11 | :end: **** Generate plot of expected limit histogram :extended: #+begin_src sh ESCAPE_LATEX=true USE_TEX=true mcmc_limit_calculation \ limit \ -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --axionModel ~/phd/resources/readOpacityFile/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_fluxKind_fkAxionPhoton_0.989AU.csv \ --axionImage ~/phd/resources/axionImages/solar_axion_image_fkAxionPhoton_0.989AU_1492.93mm.csv \ --combinedEfficiencyFile ~/org/resources/combined_detector_efficiencies.csv \ --path "" \ --years 2017 --years 2018 \ --σ_p 0.05 \ --energyMin 0.2 --energyMax 12.0 \ --plotFile 
~/org/resources/lhood_limits_axion_photon_11_01_24/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99/mc_limit_lkMCMC_skInterpBackground_nmc_10000_uncertainty_ukUncertain_σs_0.0276_σb_0.0028_posUncertain_puUncertain_σp_0.0500.csv \ --xLow 2e-41 \ --xHigh 2e-40 \ --couplingKind ck_g_aγ⁴ \ --limitKind lkMCMC \ --yHigh 400 \ --bins 100 \ --linesTo 220 \ --xLabel "Limit g_aγ⁴ [GeV⁻⁴]" \ --yLabel "MC toy count" \ --outpath ~/phd/Figs/limit/ \ --suffix "_axion_photon_nmc_10k_pretty" \ --nmc 10000 #+end_src **** Calculate the observed limit :extended: :PROPERTIES: :CUSTOM_ID: sec:limit:axion_photon:compute_axion_photon_observed :END: We use ~F_WIDTH=0.5~ for the ~ln(1 + s/b)~ plot. The MCMC g_ae² histogram is forced to be 0.9 in width anyway. #+begin_src sh F_WIDTH=0.5 ESCAPE_LATEX=true USE_TEX=true mcmc_limit_calculation \ limit \ -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --tracking ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k_tracking/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --tracking ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k_tracking/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --axionModel ~/phd/resources/readOpacityFile/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_fluxKind_fkAxionPhoton_0.989AU.csv \ --axionImage 
~/phd/resources/axionImages/solar_axion_image_fkAxionPhoton_0.989AU_1492.93mm.csv \ --combinedEfficiencyFile ~/org/resources/combined_detector_efficiencies.csv \ --couplingKind ck_g_aγ⁴ \ --switchAxes \ --path "" \ --years 2017 --years 2018 \ --σ_p 0.05 \ --energyMin 0.2 --energyMax 12.0 \ --limitKind lkMCMC \ --outpath ~/phd/Figs/trackingCandidates/axionPhoton/ \ --suffix "" #+end_src The log file: [[~/phd/Figs/trackingCandidates/axionPhoton/real_candidates_limit.log]] #+begin_src [2024-01-18 - 14:18:05] - INFO: Mean of 200 real limits (3·150k MCMC) = 8.999457e-11 [2024-01-18 - 14:18:05] - INFO: Median of 200 real limits (3·150k MCMC) = 8.991653e-11 [2024-01-18 - 14:18:05] - INFO: σ of 200 real limits (3·150k MCMC) = 7.388529e-13 [2024-01-18 - 14:18:05] - INFO: Combined real limit (200 times 3·150k MCMC) = 8.99946(7389)e-11 [2024-01-18 - 14:18:05] - INFO: Real limit based on 3 150k long MCMCs: g_aγ = 8.961844793705987e-11 #+end_src (that should read g_aγ⁴, to be updated), yields $g_aγ = \SI{8.99946(7389)e-11}{GeV⁻¹}$ *** Chameleon coupling - $β⁴_γ$ :PROPERTIES: :CUSTOM_ID: sec:limit:chameleon :END: Let's now have a look at the chameleon coupling. In addition to the inputs and MCMC parameter that needs to be changed similarly to the axion-photon coupling (flux, image and using $β^4_γ$ in the MCMC), the conversion probability needs to be adjusted according to eq. [[eq:theory:chameleon_conversion_prob]]. This assumes the conversion is fully coherent and we restrict ourselves to non-resonant production. That means a chameleon-matter coupling $β_m$ of \[ 1 \leq β_m \leq \num{1e6}. \] Further, because the chameleon production occurs in the solar tachocline -- at around $0.7 · R_{\odot}$ -- the angles under which chameleons can reach the CAST magnet are much larger than for axions. This leads to a significant fraction of chameleons not traversing the entire magnet. For any chameleon traversing only parts of it, its probability for conversion is decreased. 
According to [[cite:&chameleons_sdd_cast]] this is accounted for by a correction factor reducing the signal by $\SI{38.9}{\%}$. [[cite:&krieger2018search]] used a simplified raytracing model for this. While the distance through the magnet can easily be modeled with our raytracer (see appendix [[#sec:appendix:raytracing]]), we will still use the correction factor [fn:why_factor]. Fig. sref:fig:limit:chameleon:flux_image_inputs shows the differential chameleon flux, assuming a magnetic field of $\SI{10}{T}$ at the solar tachocline region and using $β_γ = β^{\text{sun}}_γ = \num{6.46e10}$ (the bound on the chameleon coupling from solar physics). The chameleon image is in stark contrast to the very small axion image seen in the previous sections. The outermost ring corresponds to production in the tachocline regions where our view is effectively tangential to the tachocline shell, i.e. perpendicular to the tachocline normal (the 'outer ring' of the solar tachocline when looking at the Sun). Also visible is the asymmetry in the signal on the chip, due to the LLNL telescope. The densest flux regions are at the top and bottom. These correspond to the narrow sides of the ellipsoid, for example in fig. sref:fig:limit:axion_photon:image. This 'focusing effect' was not visible in the raytracing simulation of [[cite:&krieger2018search]], due to the simpler raytracing approach, which approximated the ABRIXAS telescope as a lens (slight differences between the ABRIXAS and LLNL telescopes would of course exist). The relative size of the chameleon image compared to the size of the GridPix was one of the main motivators for implementing the background interpolation introduced in sec. [[#sec:limit:ingredients:background]]. This allows us to utilize the entire chameleon flux and weight each chameleon candidate correctly. This is a significant improvement compared to the 2014/15 detector result [[cite:&krieger2018search]], in which the entire outer ring of the chameleon flux had to be dropped.
Not only can we include these regions in our calculation due to our approach, but our background level is also /significantly/ lower in these outer regions thanks to the usage of our vetoes (compare fig. sref:fig:background:background_suppression_comparison, where sref:fig:background:suppression_lnL80_without is comparable to the background level of cite:&krieger2018search). Again, we follow the approach used for the axion-photon coupling and restrict ourselves to a single calculation of an expected limit for the best veto setup (MLP at $\SI{95}{\%}$ efficiency using all vetoes except the septem veto) based on $\num{1e4}$ toy candidate sets. This yields an expected limit of
\[ β_{γ, \text{expected}} = \num{3.6060(39)e+10}. \]
Without any candidates -- which for the chameleon, due to its much larger focused image, is a significantly more extreme scenario -- the limit comes out to
\[ β_{γ, \text{no candidates}} = \num{2.62e10}. \]
Fig. [[fig:limit:chameleon:expected_limit]] shows the histograms of these toy limits. The differently colored histograms are again based on an arbitrary cutoff in $\ln(1 + s/b)$ for a fixed coupling constant. We can see that the difference between the 'no candidate' limit and the lowest toy limits is much larger than for the two axion limits. This is due to the chameleon image covering a large fraction of the chip, making it incredibly unlikely to have no candidates. Further, in appendix [[#sec:appendix:exp_limit_percentiles]] we find tab. [[tab:appendix:expected_limits_percentiles_chameleon]], containing the different percentiles for the distribution of toy limits. For the chameleon coupling it may be worthwhile to investigate other veto setups again, because of the very different nature of the chameleon image and the even lower peak of the differential flux. Based on the same setup we compute an observed limit, using the same set of candidates as previously, of
\[ β_{γ, \text{observed}} = \num{3.10(2)e+10}.
\]
or, as an upper bound,
\[ β_{γ, \text{observed}} \lesssim \num{3.1e+10} \text{ at } \SI{95}{\%} \text{ CL}. \]
This is a good improvement over the limit of [[cite:&krieger2018search;&krieger_chameleon_jcap]], the current best chameleon-photon bound, of
\[ β_{γ, \text{Krieger}} = \num{5.74e10} \text{ at } \SI{95}{\%} \text{ CL}, \]
despite significantly less solar tracking time than in the former (our $\SI{160}{h}$ compared to $\SI{254}{h}$ in [[cite:&krieger2018search]]). This is thanks to the significant improvements in background due to the detector vetoes, the higher detection efficiency (thinner window), the better classifier and the improved limit calculation method, which allows for the inclusion of the entire chameleon flux. A figure of the sampled coupling constants, similar to fig. [[fig:limit:observed_axion_electron]], can be found in appendix [[#sec:appendix:limit_additional:chameleon]], fig. [[fig:appendix:posterior_likelihood_chameleon]]. Note, however, that it is somewhat surprising that the observed limit is also an improvement over the expected limit, as the total number of clusters in the tracking data is almost exactly as expected (850 observed for 844 expected). Based on all evaluations I have done, it seems to be a real effect of the candidates. In the spectrum comparing candidates to background, fig. sref:fig:limit:rate_candidates_background, we see a slight, but noticeable, lower rate at energies below $\SI{2}{keV}$, which is the relevant range for chameleons. As such it may very well be a real effect, despite it lying below the $5^{\text{th}}$ percentile of the toy limits (compare tab. [[tab:appendix:expected_limits_percentiles_chameleon]]). Further investigation seems appropriate.
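The expected limit and the percentile values quoted here and tabulated in the appendix are, in essence, order statistics of the toy limit distribution: the expected limit is the median of the toy limits, and the bands are plain percentiles. A minimal, illustrative Python sketch (the actual analysis code is Nim; the toy values here are randomly generated stand-ins, not real limits):

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for 10'000 toy limits on β⁴ (hypothetical values; the real
# ones come from MCMC limits computed on toy candidate sets).
toy_limits = rng.lognormal(mean=0.0, sigma=0.3, size=10_000)

# Expected limit: the median of the toy limit distribution.
expected = np.median(toy_limits)
# Percentile bands as tabulated in the appendix.
p5, p16, p25, p75, p84, p95 = np.percentile(toy_limits, [5, 16, 25, 75, 84, 95])
assert p5 < p16 < p25 < expected < p75 < p84 < p95

# A limit on the quartic coupling β⁴ converts to a bound on β via the
# quartic root.
beta_expected = expected ** 0.25
```

An observed limit "below the 5th percentile" then simply means it falls below `p5` of this distribution.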
#+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "$β_γ$ flux") (label "fig:limit:chameleon:flux") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/axions/differential_chameleon_flux.pdf")) (subfigure (linewidth 0.5) (caption "$β_{γ}$ image") (label "fig:limit:chameleon:image") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/raytracing/solar_chameleon_image_0.989AU_1500mm.pdf")) (caption (subref "fig:limit:chameleon:flux") ": Differential chameleon flux arriving on Earth assuming " ($ "β_γ^{\\text{sun}} = \\num{6.46e10}") " and a magnetic field of " ($ "B = " (SI 10 "T")) " in the solar tachocline. " (subref "fig:limit:chameleon:image") ": Chameleon image for chameleon emission in the Sun.") (label "fig:limit:chameleon:flux_image_inputs")) #+end_src #+CAPTION: $\num{1e4}$ toy limits for the chameleon coupling $β⁴_{γ}$. The expected limit is #+CAPTION: determined to $β_{γ, \text{expected}} = \num{3.6060(39)e10}$, with the no candidates limit #+CAPTION: being $β_{γ, \text{no candidates}} = \num{2.62e10}$. #+NAME: fig:limit:chameleon:expected_limit [[~/phd/Figs/limit/mc_limit_lkMCMC_skInterpBackground_nmc_10000_uncertainty_ukUncertain_σs_0.0276_σb_0.0028_posUncertain_puUncertain_σp_0.0500_chameleon_nmc_10k_pretty.pdf]] [fn:why_factor] Including the conversion probability from the raytracer would ideally mean to include the reflectivities under angles encountered for each ray. However, attempts to reproduce the effective area as provided by LLNL raytracing simulations have failed so far. In order to avoid complications with potential inaccuracies, we stick to the previous approach for simplicity and better comparison. **** TODOs for this section [/] :noexport: - [ ] *INVESTIGATE REAL LIMIT FOR CHAMELEONS A BIT MORE!!!* - [ ] *ONE SHORTCOMING* of this calculation is that our effective area is still the axion effective area from the LLNL raytracing simulation. 
That means we might be slightly overestimating the efficiency, due to slightly larger incoming angles of the chameleons. -> Ideally, we could correctly reproduce the effective area to correctly take this into account! -> Also note: the effect is very small anyway. The Sun has an angular diameter of about 32 arc minutes. That means the largest possible angles are on the order of 16 arc minutes. Compared to the angles of the telescope mirrors (1° and more), this is not that significant (16 arc minutes = 0.26°). - [X] For chameleon mainly just need to look at how Christoph sampled from the solar radius to compute the axion image in his case. We can then plug that into the raytracer. **** Chameleon references :extended: For chameleon theory. Best overview in detail: ~/Documents/Papers/PapersHome/Masterarbeit/Chameleons/thorough_chameleon_review_0611816.pdf later review: chameleon_review_1602.03869.pdf To differentiate chameleon gravity from other GR extensions: especially ~/Documents/Papers/brax_distinguishing_modified_gravity_models_1506.01519.pdf and ~/Documents/Papers/brax_lectures_on_screened_modified_gravity_1211.5237v1.pdf Detection in experiments, e.g. CAST ~/Documents/Papers/brax_detection_chameleons_1110.2583.pdf and a bit brax_solar_chameleons_1004.1846v1.pdf (is this a precursor preprint?) Other reading: chameleons and solar physics: chameleons_and_solar_physics_1405.1581.pdf polarizations_produced_by_chameleons_PhysRevD.79.044028.pdf **** Chameleon spectrum and plot :extended: We have the chameleon spectrum from Christoph's limit code back in the day. The file [[file:resources/chameleon-spectrum.dat]] contains the flux in units of ~1/16mm2/hour/keV~ at ~β_m = β^sun_m = 10^10.81~ assuming a magnetic field of 10 T at the solar tachocline, iirc. [[file:~/org/Misc/chameleon_spectrum.nim]] - [ ] Insert flux! 
#+begin_src nim :tangle code/generate_chameleon_flux.nim
import ggplotnim, unchained
# The data file `chameleon-spectrum.dat` contains the spectrum in units of
# `keV⁻¹•16mm⁻²•h⁻¹` at β_m = β_m^sun = 6.457e10 or 10^10.81.
# See fig. 11.2 in Christoph's thesis
defUnit(keV⁻¹•mm⁻²•h⁻¹)
defUnit(keV⁻¹•cm⁻²•s⁻¹)
func conversionProbabilityChameleon(B: Tesla, L: Meter): float =
  const M_pl = sqrt(((hp_bar * c) / G_Newton).toDef(kg²)).toNaturalUnit.to(GeV) / sqrt(8 * π) # reduced Planck mass in natural units
  const βγsun = pow(10, 10.81)
  let M_γ = M_pl / βγsun
  result = (B.toNaturalUnit * L.toNaturalUnit / (2 * M_γ))^2
proc convertChameleon(x: float): float =
  # Divide by 16 to get from /16mm² to /1mm². The input flux has already
  # taken the conversion probability into account.
  let P = conversionProbabilityChameleon(9.0.T, 9.26.m) # values used by Christoph!
  result = (x.keV⁻¹•mm⁻²•h⁻¹ / 16.0 / P).to(keV⁻¹•cm⁻²•s⁻¹).float
let df = readCsv("~/phd/resources/chameleon-spectrum.dat", sep = '\t', header = "#")
  .mutate(f{"Flux" ~ convertChameleon(idx("I[/16mm2/hour/keV]"))},
          f{"Energy [keV]" ~ `energy` / 1000.0})
ggplot(df, aes("Energy [keV]", "Flux")) +
  geom_line() +
  ylab(r"Flux [$\si{keV⁻¹.cm⁻².s⁻¹}$]") +
  margin(left = 4.5) +
  ggtitle(r"Chameleon flux at $β^{\text{sun}}_γ = \num{6.46e10}$") +
  themeLatex(fWidth = 0.5, width = 600, baseTheme = sideBySide) +
  ggsave("~/phd/Figs/axions/differential_chameleon_flux.pdf")
#+end_src

#+RESULTS:

**** Conversion probability :extended:

Chameleon references: [[cite:&krieger_chameleon_jcap]] [[cite:&chameleons_sdd_cast]]

Conversion probability for back conversion. We are in the coherent regime. In this case cite:brax12_chameleons equation 52:
\[ P_{c↦γ} = \frac{B² L²}{4 M²_γ} \]
where $M_γ$ is defined implicitly via the chameleon-photon coupling $β_γ$,
\[ β_γ = \frac{m_{\text{pl}}}{M_γ} \]
where $m_{\text{pl}}$ is the /reduced/ Planck mass, $m_{\text{pl}} = \frac{M_{\text{pl}}}{\sqrt{8 π}}$ (i.e.
using natural units with $G = \frac{1}{8π}$ instead of $G = 1$, used in cosmology because it removes the $8π$ term from the Einstein field equations). See [[cite:&krieger_chameleon_jcap]] for mention that it is the reduced Planck /mass/ here. But the $\sim \SI{2e18}{GeV}$ also gives it away.

Let's check that the numbers hold:
#+begin_src nim
import unchained
let M_pl = sqrt ((hp_bar * c) / G_Newton).toDef(kg²)
echo "Planck mass = ", M_pl, " in GeV = ", M_pl.toNaturalUnit.to(GeV) / sqrt(8 * π)
#+end_src

#+RESULTS:
: Planck mass = 2.17643e-08 kg in GeV = 2.43532e+18 GeV

which indeed matches (although *2* is a rough approximation of the value!).

Let's compute the conversion probability for a single $β_γ$ value:
#+begin_src nim
import unchained
proc conversionProbability(B: Tesla, L: Meter, β_γ: float): float =
  let M_pl = sqrt(((hp_bar * c) / G_Newton).toDef(kg²)).toNaturalUnit.to(GeV) / sqrt(8 * π)
  #let M_γ = M_pl / β_γ
  result = (β_γ * B.toNaturalUnit() * L.toNaturalUnit() / (2 * M_pl))^2
echo "Conversion probability: ", conversionProbability(8.8.T, 9.26.m, 5.6e10)
#+end_src

#+RESULTS:
: Conversion probability: 8.603132749603562e-13

O(1e-12) seems somewhat reasonable, given that the flux is generally much lower than for axions? But maybe the flux is too low? The flux is on the order of 1e-4 keV⁻¹•cm⁻²•s⁻¹ after all. Let's compare:
#+begin_src nim
import unchained, math
defUnit(cm²)
defUnit(keV⁻¹)
func conversionProbability(): UnitLess =
  ## the conversion probability in the CAST magnet (depends on g_aγ)
  ## simplified vacuum conversion prob.
for small masses let B = 9.0.T let L = 9.26.m let g_aγ = 1e-12.GeV⁻¹ # ``must`` be same as reference in Context result = pow( (g_aγ * B.toNaturalUnit * L.toNaturalUnit / 2.0), 2.0 ) echo conversionProbability() #+end_src #+RESULTS: : 1.70182e-21 UnitLess #+begin_src nim import unchained let flx = 2e18.keV⁻¹•m⁻²•yr⁻¹ echo flx.toDef(keV⁻¹•cm⁻²•s⁻¹) #+end_src #+RESULTS: : 6.33762e+06 keV⁻¹•cm⁻²•s⁻¹ So 6e6 axions at 1.7e-21 vs 8.6e-13 at 1e-4: #+begin_src nim echo 6e6 * 1.7e-21 echo 1e-4 * 8.6e-13 #+end_src #+RESULTS: | 1.02e-14 | | 8.6e-17 | 3 orders of magnitude difference. That seems like it would be too much? Surely not made up by the fact that the area of interest is so much larger, no? Only one way to find out, I guess. **** Generate chameleon solar image :extended: #+begin_src sh ./raytracer \ --width 1200 --speed 10.0 --nJobs 32 --vfov 15 --maxDepth 5 \ --llnl --focalPoint --sourceKind skSun \ --chameleonFile ~/org/resources/chameleon-spectrum.dat \ --sensorKind sSum \ --usePerfectMirror=false \ --ignoreWindow \ --ignoreMagnet #+end_src :RESULTS: [INFO] Writing buffers to binary files. 
[INFO] Writing file: out/buffer_2024-01-08T20:29:05+01:00_type_uint32_len_1440000_width_1200_height_1200.dat [INFO] Writing file: out/counts_2024-01-08T20:29:05+01:00_type_int_len_1440000_width_1200_height_1200.dat [INFO] Writing file: out/image_sensor_0_2024-01-08T20:29:05+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat :END: #+begin_src sh :dir ~/CastData/ExternCode/RayTracing/ F_WIDTH=0.5 USE_TEX=true ./plotBinary \ --dtype float \ -f out/image_sensor_0_2024-01-08T20:29:05+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat \ --invertY \ --out ~/phd/Figs/raytracing/solar_chameleon_image_0.989AU_1500mm.pdf \ --inPixels=false \ --gridpixOutfile ~/phd/resources/axionImages/solar_chameleon_image_0.989AU_1500mm.csv \ --title "Solar chameleon image at 0.989 AU from Sun, 1500 mm" #+end_src **** Compute an expected limit :extended: Sanity checks: #+begin_src sh mcmc_limit_calculation \ sanity --limitKind lkMCMC \ --axionModel ~/phd/resources/chameleon-spectrum.dat \ --isChameleon \ --axionImage ~/phd/resources/axionImages/solar_chameleon_image_0.989AU_1500mm.csv \ --combinedEfficiencyFile ~/org/resources/combined_detector_efficiencies.csv \ --switchAxes --sanityPath ~/phd/Figs/limit/sanity/chameleonSanity \ --chameleonLimit #+end_src see [[~/phd/Figs/limit/sanity/chameleonSanity/]] and the sanity log file: #+begin_src [2024-01-11 - 15:50:40] - INFO: =============== Input =============== [2024-01-11 - 15:50:40] - INFO: Input path: [2024-01-11 - 15:50:40] - INFO: Input files: @[(2017, "/home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5"), (2018, "/home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5")] 
[2024-01-11 - 15:50:41] - INFO: =============== Time =============== [2024-01-11 - 15:50:41] - INFO: Total background time: 3158.01 h [2024-01-11 - 15:50:41] - INFO: Total tracking time: 159.899 h [2024-01-11 - 15:50:41] - INFO: Ratio of tracking to background time: 1 UnitLess [2024-01-11 - 15:50:41] - INFO: Saving plot: /home/basti/phd/Figs/limit/sanity/chameleonSanity/candidates_signal_over_background_chameleon.pdf [2024-01-11 - 15:50:41] - INFO: =============== Chameleon coupling constant =============== [2024-01-11 - 15:50:41] - INFO: Conversion probability using default β² = 4.168693834703363e+21, yields P_c↦γ = 1.06716e-14 UnitLess [2024-01-11 - 15:50:52] - INFO: Limit with default β² = 4.168693834703363e+21 is = 1.637454027281386e+42, and as β = 3.577192e+10 [2024-01-11 - 15:50:52] - INFO: Saving plot: /home/basti/phd/Figs/limit/sanity/chameleonSanity/candidates_signal_over_background_chameleon.pdf [2024-01-11 - 15:51:03] - INFO: 2. Limit with default β² = 4.168693834703363e+21 is = 2.129378077440907e+42, and as β = 3.819999e+10 [2024-01-11 - 15:51:04] - INFO: Saving plot: /home/basti/phd/Figs/limit/sanity/chameleonSanity/candidates_signal_over_background_chameleon.pdf [2024-01-11 - 15:51:15] - INFO: 3. Limit with default β² = 4.168693834703363e+21 is = 1.890732758560203e+42, and as β = 3.708152e+10 [2024-01-11 - 15:51:15] - INFO: Saving plot: /home/basti/phd/Figs/limit/sanity/chameleonSanity/candidates_signal_over_background_chameleon.pdf [2024-01-11 - 15:51:26] - INFO: 4. Limit with default β² = 4.168693834703363e+21 is = 1.391973234680927e+42, and as β = 3.434850e+10 #+end_src These limits look good! Quite a bit better than Christoph's (which was 5.5e10). See sec. [[#sec:limit:expected_limits:best_expected_50k]] for the command for axion-electron using 50k toys. 
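The β values in the sanity log above are simply the quartic roots of the limits computed on β⁴. A quick Python cross-check of the logged numbers (not part of the analysis code):

```python
# Limits are computed on β⁴; the bound on β follows by taking the
# quartic root. First logged limit from the sanity log:
limit_beta4 = 1.637454027281386e+42
beta = limit_beta4 ** 0.25
assert abs(beta - 3.577192e10) / 3.577192e10 < 1e-4

# Same conversion for the no-candidates limit quoted further down:
beta_no_cands = 4.737234852038135e+41 ** 0.25
assert abs(beta_no_cands - 2.62350096896e10) / 2.62350096896e10 < 1e-6
```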
The main differences: - chameleon differential flux - chameleon axion image - couplingKind: β_m #+begin_src sh mcmc_limit_calculation \ limit \ -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --axionModel ~/phd/resources/chameleon-spectrum.dat \ --isChameleon \ --axionImage ~/phd/resources/axionImages/solar_chameleon_image_0.989AU_1500mm.csv \ --combinedEfficiencyFile ~/org/resources/combined_detector_efficiencies.csv \ --couplingKind ck_β⁴ \ --switchAxes \ --path "" \ --years 2017 --years 2018 \ --σ_p 0.05 \ --energyMin 0.2 --energyMax 12.0 \ --limitKind lkMCMC \ --outpath ~/org/resources/lhood_limits_chameleon_12_01_24/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99/ \ --suffix "" \ --nmc 10000 #+end_src Expected limit: 3.605996073373334E10 Limit no candidates: 4.737234852038135E41 ^ 0.25 = 2.62350096896e10 ***** Initial run with segfault :noexport: :RESULTS: Expected limit: 1.690834101979812e+42 Generating group /ctx/trackingDf Generating group /ctx/axionModel Serializing Interpolator by evaluating 0.01 to 2.0 of name: axionSpl Serializing Interpolator by evaluating 0.0 to 10.0 of name: efficiencySpl Serializing Interpolator by evaluating 0.2 to 12.0 of name: backgroundSpl Generating group /ctx/backgroundDf Wrote outfile 
/home/basti/org/resources/lhood_limits_chameleon_11_01_24/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99/mc_limit_lkMCMC_skInterpBackground_nmc_10000_uncertainty_ukUncertain_σs_0.0276_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 :END: Started <2024-01-11 Thu 18:00>. It, uhh, 'finished' some time before <2024-01-11 Thu 20:22>, however, it gave a segmentation fault: #+begin_src WARNING: Unexpected data type VNull of column: limits! WARNING: Unexpected data type VNull of column: 4.6269e+41! WARNING: Unexpected data type VNull of column: 1.6908e+42! WARNING: Unexpected data type VNull of column: 0! WARNING: Unexpected data type VNull of column: 0! WARNING: Unexpected data type VNull of column: 0! WARNING: Unexpected data type VNull of column: 0! WARNING: Unexpected data type VNull of column: 1000! WARNING: Unexpected data type VNull of column: 1000! WARNING: Unexpected data type VNull of column: candsInSens! SIGSEGV: Illegal storage access. (Attempt to read from nil?) zsh: segmentation fault mcmc_limit_calculation limit -f -f --axionModel --isChameleon --axionImage #+end_src I guess something about the positive numbers of the chameleon limit yields rubbish somewhere? :) Ahh, I think I didn't adjust the ~candsInSens~ threshold value. 
**** Generate plot of expected limit histogram :extended: #+begin_src sh ESCAPE_LATEX=true USE_TEX=true mcmc_limit_calculation \ limit \ -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --axionModel ~/phd/resources/chameleon-spectrum.dat \ --isChameleon \ --axionImage ~/phd/resources/axionImages/solar_chameleon_image_0.989AU_1500mm.csv \ --combinedEfficiencyFile ~/org/resources/combined_detector_efficiencies.csv \ --couplingKind ck_β⁴ \ --path "" \ --years 2017 --years 2018 \ --σ_p 0.05 \ --energyMin 0.2 --energyMax 12.0 \ --plotFile ~/org/resources/lhood_limits_chameleon_12_01_24/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99/mc_limit_lkMCMC_skInterpBackground_nmc_10000_uncertainty_ukUncertain_σs_0.0276_σb_0.0028_posUncertain_puUncertain_σp_0.0500.csv \ --xLow 2e41 \ --xHigh 4.2e42 \ --limitKind lkMCMC \ --yHigh 180 \ --bins 100 \ --linesTo 100 \ --xLabel "Limit β⁴" \ --yLabel "MC toy count" \ --outpath ~/phd/Figs/limit/ \ --suffix "_chameleon_nmc_10k_pretty" \ --nmc 10000 #+end_src **** Compute the observed limit :extended: We use ~F_WIDTH=0.5~ for the ~ln(1 + s/b)~ plot. The MCMC g_ae² histogram is forced to be 0.9 in width anyway. 
#+begin_src sh
F_WIDTH=0.5 ESCAPE_LATEX=true USE_TEX=true mcmc_limit_calculation \
    limit \
    -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \
    -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \
    --tracking ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k_tracking/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \
    --tracking ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k_tracking/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \
    --axionModel ~/phd/resources/chameleon-spectrum.dat \
    --isChameleon \
    --axionImage ~/phd/resources/axionImages/solar_chameleon_image_0.989AU_1500mm.csv \
    --combinedEfficiencyFile ~/org/resources/combined_detector_efficiencies.csv \
    --couplingKind ck_β⁴ \
    --switchAxes \
    --path "" \
    --years 2017 --years 2018 \
    --σ_p 0.05 \
    --energyMin 0.2 --energyMax 12.0 \
    --limitKind lkMCMC \
    --outpath ~/phd/Figs/trackingCandidates/chameleon/ \
    --suffix ""
#+end_src

The log file: [[~/phd/Figs/trackingCandidates/chameleon/real_candidates_limit.log]]
#+begin_src
[2024-01-18 - 14:45:00] - INFO: Mean of 200 real limits (3·150k MCMC) = 3.103796e+10
[2024-01-18 - 14:45:00] - INFO: Median of 200 real limits (3·150k MCMC) = 3.103509e+10
[2024-01-18 - 14:45:00] - INFO: σ of 200 real limits (3·150k MCMC) = 2.299418e+08
[2024-01-18 - 14:45:00] - INFO: Combined real limit (200 times 3·150k MCMC) = 3.10380(2299)e+10
[2024-01-18 - 14:45:00] - INFO: Real limit based on 3 150k long MCMCs: β_γ = 31003855007.8231
#+end_src

This yields $β_γ = \num{3.10380(2299)e+10}$.

**** Generate a plot of the chameleon spectrum with the candidates :extended:

#+begin_src sh
plotBackgroundClusters \
    ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k_tracking/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \
    ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k_tracking/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \
    --title "MLP@95+FADC+Scinti+Line tracking clusters" \
    --outpath ~/phd/Figs/trackingCandidates/chameleon/ \
    --suffix "mlp_0.95_scinti_fadc_line_tracking_candidates_chameleon_below_2keV" \
    --energyMin 0.2 --energyMax 2.001 \
    --filterNoisyPixels \
    --axionImage ~/phd/resources/axionImages/solar_chameleon_image_0.989AU_1500mm.csv \
    --energyText \
    --colorBy energy \
    --switchAxes \
    --useTikZ \
    --singlePlot
#+end_src

[[~/phd/Figs/trackingCandidates/chameleon/background_cluster_centersmlp_0.95_scinti_fadc_line_tracking_candidates_chameleon_below_2keV.pdf]]

*** Further note on the difference between axion-electron and axion-photon/chameleon limits :extended:

There is a fundamental difference between computing a limit on the axion-electron coupling and computing one on either the axion-photon or the chameleon coupling. In the case of the axion-photon and chameleon couplings, a change in the coupling constant changes *both* the production and the conversion probability by that amount; specifically
\[ s(g⁴) = α · f(g²) · P(g²) \]
where $α$ is the entire contribution independent of $g$. For the axion-electron coupling, however, a change in $g²_{ae}$ only changes the production. Or, when working with $g²_{ae}·g²_{aγ}$, a fixed change of the product is 'dedicated' only /partially/ to production and /partially/ to the conversion probability.
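This scaling difference can be made concrete in a few lines: for the axion-photon or chameleon coupling both the production flux and the conversion probability carry a factor of the squared coupling, so the signal scales with the fourth power, whereas for the axion-electron coupling (at fixed $g_{aγ}$) only the production rescales. A small Python sketch with made-up reference values (all names here are illustrative, not from the analysis code):

```python
# Signal scaling with the coupling, relative to a reference coupling.
# All reference values are made up for illustration.

def signal_axion_photon(g, s_ref=1.0, g_ref=1.0):
    # Production flux ∝ g² and conversion probability ∝ g²,
    # so the signal scales with the fourth power of the coupling.
    return s_ref * (g / g_ref) ** 4

def signal_axion_electron(g_ae, s_ref=1.0, g_ae_ref=1.0):
    # Only the production flux rescales; the conversion probability is
    # fixed by g_aγ, so the signal scales with the second power only.
    return s_ref * (g_ae / g_ae_ref) ** 2

# Doubling the coupling multiplies the signal by 16 in the quartic case,
# but only by 4 for the axion-electron coupling at fixed g_aγ.
assert signal_axion_photon(2.0) == 16.0
assert signal_axion_electron(2.0) == 4.0
```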
** Comparison to 2013 limit (using their method) :extended:

Some years ago I attempted to reproduce the limit calculation of the CAST 2013 axion-electron limit paper [[cite:&Barth_2013]]. Given that their main approach is a binned likelihood, I thought it should be rather simple to reproduce by extracting the background rates and candidates from the figures in the paper and implementing the likelihood function. I did this in [[file:~/org/Doc/StatusAndProgress.org::#sec:limit:reproduce_2013]], but was not fully able to reproduce the numbers shown there. In particular, the $χ²$ minimum was at values near $\sim 40$ instead of $\sim 22$. However, at the time I was just learning about limit calculations and had a number of misunderstandings, especially regarding the negative coupling constants and how to compute a limit using the $χ²$ approach in general. Still, I would like to see the numbers reproduced. By now I do have access to the code that was supposedly used to calculate the numbers of that paper. I have yet to run it myself, but given that the limit calculation is so simple, reproducing the numbers should be straightforward in any case.

*** TODOs for this section :noexport:

- [ ] Try to extract the limit calculation from statusAndProgress and see if there's something wrong in our current code?

** Observed limit for different axion masses :extended:

*** Generate limits for different axion masses
:PROPERTIES:
:CUSTOM_ID: sec:limit:limit_different_axion_mass:gen_data_plots
:END:

In order to compute the observed limit on the axion-photon coupling, we reuse the command from sec.
[[#sec:limit:axion_photon:compute_axion_photon_observed]], but add the ~--massLow/High/Steps~ commands and adjust the output path: #+begin_src sh mcmc_limit_calculation \ limit \ -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ -f /home/basti/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --tracking ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k_tracking/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --tracking ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k_tracking/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --axionModel ~/phd/resources/readOpacityFile/solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15_fluxKind_fkAxionPhoton_0.989AU.csv \ --axionImage ~/phd/resources/axionImages/solar_axion_image_fkAxionPhoton_0.989AU_1492.93mm.csv \ --combinedEfficiencyFile ~/org/resources/combined_detector_efficiencies.csv \ --couplingKind ck_g_aγ⁴ \ --switchAxes \ --path "" \ --years 2017 --years 2018 \ --σ_p 0.05 \ --energyMin 0.2 --energyMax 12.0 \ --limitKind lkMCMC \ --massLow 1e-4 --massHigh 2e-1 --massSteps 500 \ --outpath ~/phd/Figs/trackingCandidates/axionPhotonMassScan/ \ --suffix "" #+end_src Convert the output to a CSV file for external users: #+begin_src sh cd ~/CastData/ExternCode/TimepixAnalysis/Tools/exportLimitLikelihood/ ./exportLimitLikelihood \ massScan \ -f 
~/phd/resources/trackingCandidates/axionPhotonMassScan/mc_limit_lkMCMC_skInterpBackground_nmc_500_uncertainty_ukUncertain_σs_0.0276_σb_0.0028_posUncertain_puUncertain_σp_0.0500mass_scan.h5 \
    -o ~/phd/resources/trackingCandidates/axionPhotonMassScan/gridpix_axion_photon_likelihood_vs_mass.csv
#+end_src

With the fixes to the adjustment of the MCMC sampling range based on the ~integralBase~ instead of the ~conversionProbability~ at a fixed energy, the limits now <2024-06-10 Mon 17:37> look good at all masses!

* Outlook
:PROPERTIES:
:CUSTOM_ID: sec:outlook
:END:

Having computed a new best limit on the axion-electron and chameleon couplings, as well as presented a limit on the axion-photon coupling, let us think about the next steps that should be taken from here. The first obvious step would be computing a combined limit of the dataset from this thesis with the GridPix data from 2014/15 [[cite:&krieger2018search]]. As the old detector used the same readout system, the entire analysis framework used in this thesis is directly compatible with the old data. This would be straightforward and should lead to a decent improvement on the limit, in particular due to the good expected limits for the MLP classifier at high software efficiencies, as there are no vetoes available for the old detector. Secondly, a combined limit of all GridPix data with all Micromegas data should be computed. A combination at the likelihood level -- i.e. multiplying the posterior likelihood functions for each detector and dataset -- would be little work and could lead to another limit improvement. While this would imply excluding systematic uncertainties for the non-GridPix data, in theory one could even attempt to compute limits with the MCMC code written for this thesis. GridPix and Micromegas detectors differ, but at the level of the limit calculation both fundamentally work with cluster center positions and energies.
This would require the corresponding systematic uncertainties and detection
efficiencies.

Focusing on a new detector, there are two major improvements on the horizon.
First of all, a future detector is likely going to be built from radiopure
materials. This should significantly reduce the inherent background seen in the
detector. In addition, such a detector will be built on top of the Timepix3
instead of the Timepix. This comes with massive improvements in terms of
achievable background rates at higher signal efficiencies. The Timepix3 can be
read out in a stream-based fashion. Therefore, there is no more dead time
associated with a readout, and long shutter times are a thing of the past. More
importantly, this means that for a 7-GridPix detector layout, the very high
random coincidence rates seen for the current detector (see
sec. [[#sec:background:estimate_veto_efficiency]]) are completely removed. For a
Timepix3 based detector the septem veto and line veto can be used /without any
efficiency penalty/. Combined with an MLP classifier as used in this thesis and
without considering radiopure materials, background rates in the
$\SIrange{5e-6}{7e-6}{keV^{-1}.cm^{-2}.s^{-1}}$ range are achievable at software
efficiencies around the $\SI{95}{\%}$ mark (see
tab. [[tab:background:background_rate_eff_comparisons]] and
tab. [[tab:limit:expected_limits]]). This alone would lead to a significant
improvement in limit calculations.

Furthermore, the Timepix3 supports reading out the Time over Threshold values
/in addition to/ the Time of Arrival (ToA) values. At that point the FADC is no
longer needed to provide time information nor to act as a readout trigger. All
noise related issues due to dealing with analogue signals in such an experiment
are side-stepped. Further, the availability of pixel-wise ToA values allows
reconstructing events in three dimensions, making the equivalent of the rise
time veto for the FADC much more accurate.
Indeed, a prototype Timepix3 based detector utilizing an upper cut on the ToA
values of clusters already shows promising results [[cite:&schiffer_phd]].

One positive aspect of a new Timepix3 based detector is that the full analysis
procedure explained in the context of this thesis is already written with
Timepix3 based detectors in mind. While some additions will likely be desired,
the basis is already there, and given a dataset for a Timepix3 based detector,
a limit calculation could be done in a matter of days. [fn:reproducible]

In terms of a possible setup of such a Timepix3 based detector at BabyIAXO in
the future, a few things are important. Firstly, a much improved scintillator
veto setup will greatly reduce the possible background due to muons or X-ray
fluorescence. One thing to consider here, though, is to move away from a purely
discriminator based system. The scintillators used in this thesis were much
less useful than initially hoped, due to the lack of interpretability of the
data and imperfect calibrations. Ideally, the analogue signals of each
scintillator would be read out, so that discriminator thresholds can be applied
offline during data analysis (while this reintroduces analogue signals into the
system, their highly amplified nature makes them significantly easier to deal
with). This makes the system much more flexible and less error prone with
respect to choosing the wrong parameters for a data taking campaign. Next,
given that a Timepix3 based detector will surely need a cooling system as well,
time should be invested in characterizing its performance and possibly even
working towards a temperature stabilized system. Generally, more sensors should
be installed, both for temperature measurements in different places as well as
for pressures, to make correlating detector behavior with external parameters
easier. On the data calibration side, more importance should be placed on
measurements behind an X-ray tube.
The quality of the CAST Detector Lab (CDL) data for this detector was
unfortunately not ideal, and generally more statistics should be available.
While probably unreasonable, BabyIAXO should have a calibration source
installed on the opposite side of the magnet, which can produce different X-ray
energies. This would allow calibrations both through the X-ray telescope for
more frequent alignment measurements, as well as X-ray calibration data taken
under the same conditions as the background and tracking data. The discrepancy
between the detector behavior in the CAST data and the CDL data is larger than
I would like.

Moreover, simulations of a future detector should be a higher priority in the
future, both in terms of general background studies and electric field
simulations, as well as event simulations. Having more realistic synthetic
datasets would be invaluable for better development of classifiers /without/
the need to rely on real datasets. This was already very successful with the
comparatively simple synthetic X-rays in this thesis, but could be extended to
background data in the future for purely synthetically trained machine learning
classifiers. In this vein more powerful approaches would become feasible,
especially if more calibration data were available as well. Convolutional
Neural Networks are a promising candidate to investigate, especially for the 3D
cluster data of a Timepix3 detector.

And hey, maybe soon we can also compute a limit on the Axio-Chameleon
cite:brax2023axiochameleons?

[fn:reproducible] This is also one additional reason why I felt it important to
provide a thesis that is fully reproducible. This makes it easier for other
people to actually use the developed software and understand the methods.

** TODOs for this section [/] :noexport:

- [X] *OUTLOOK* in terms of *ANALYSIS* -> Combine this thesis data with other searches!
- [X] ToA -> no more noise issues due to analogue parts of FADC - [X] *MENTION THAT DUE TO TOA INFORMATION A GRIDPIX3 DETECTOR WITH SEPTEMBOARD COULD USE SEPTEM VETO WITHOUT SIGNAL EFFICIENCY PENALTY!!!* -> *TOA 2 PURPOSES* 1. ToA as a cut itself 2. No random coincidences for Septemboard vetoes. Extremely important for an ultra high efficiency classifier. MLP @ 95% plus septem veto + line veto a possibility! Could mean that alone yields 5e-6 background rate *at 95%* software efficiency! For the setup at BabyIAXO .... (scintis + sensors) - [X] *OUTLOOK* in terms of *DETECTOR* - [ ] For IAXO detector, better simulation of X-rays would be useful also for MLP training - [ ] *MORE* CDL data! - [ ] *REWRITE ME* -> then refer to Tobi thesis Radiopure materials will also help to further reduce the background that is still present in the detector of this thesis. For a BabyIAXO detector the 7 GridPix layout is a must due to the power of the septem-veto. Finally, a new setup should make sure to at the very least have multiple temperature sensors, but more importantly likely use some kind of temperature control to avoid strong gas gain variations. - [X] *NOT BIGGEST MISTAKE, BECAUSE TIMERS?* -> Refers to ToA not used in this detector - [X] *TODO: get correct numbers!* -> No, we leave that out. #+begin_quote In a test background run using a GridPix3, which was pointed towards the zenith, 50 clusters remained after the likelihood cut. Using the duration of the clusters, another 15 could be removed, showing the usefulness of the approach. For more information on this see the upcoming cite:schiffer_phd. #+end_quote #+begin_quote Timepix3 based detector will be big improvement, as long readout times without time information are probably the biggest issue (and partially biggest mistake) in the data taking campaign. Further, not requiring any analog readout parts like an FADC is a big help as handling EMI properly is complex. 
#+end_quote

* Summary & conclusion
:PROPERTIES:
:CUSTOM_ID: sec:summary
:END:

In this thesis we have presented the data taking campaign of the 7-GridPix
'Septemboard' detector used at CAST in 2017/18, its data analysis and limit
calculations for different coupling constants.

In total about $\SI{3150}{h}$ of active background and $\SI{160}{h}$ of active
solar tracking data were taken at CAST with this detector. Generally, detector
performance was very stable, with a few issues likely relating to varying
operating conditions. There were some other minor setbacks during CAST data
taking, most notably a ruptured window, which delayed data taking for a short
time. The FADC setup was also partially affected by significant noise, which
was fixed by changing amplifier settings. In addition, the scintillator data
was not recorded correctly in the Run-2 data taking period (Oct 2017 to
Apr 2018) and the temperature log files of the detector were mostly lost. In
the grand scheme of things these issues are minor and do not affect the physics
potential of the data much.

We have shown that the additional detector features are an extremely valuable
addition. Most notably the Septemboard itself, in the form of the 'septem veto'
and 'line veto', provides a large improvement to the backgrounds seen over the
majority of the center GridPix. While the improvements to the center \goldArea
region are not as large, over the entire chip the background is suppressed by
an order of magnitude.

In the course of the thesis, the data reconstruction and analysis framework
written for it, ~TimepixAnalysis~ [[cite:&TPA]], was designed with future Timepix3
based detectors in mind. It is ready for use with such detectors. Further,
novel ideas were implemented to improve reliability, in the form of
interpolating the reference datasets the likelihood cut classifier is based on.
More interestingly though, a machine learning approach using a small
multi-layer perceptron (MLP) was implemented, trained entirely on synthetic
X-ray data and background data of the outer chips. This yields a classifier
fully defined by data unrelated to its main application (background and
tracking datasets) and verification (\cefe calibration and X-ray tube data).
This MLP classifier achieves comparable performance to the likelihood cut at
its default $ε_{\ln\mathcal{L}} = \SI{80}{\%}$ while using a software
efficiency of $\SI{95}{\%}$. Significant improvements to the limit calculation
are possible as a result.

Following the 2017 CAST Nature paper [[cite:&cast_nature]], an unbinned Bayesian
likelihood method for limit calculation was implemented to compute limits on
the axion-electron coupling $g_{ae}$, the axion-photon coupling $g_{aγ}$ and
the chameleon coupling $β$. This limit calculation requires a description of
the irreducible background during data taking and several inputs to calculate
the expected number of axion induced X-rays during the solar tracking time. The
background is obtained from the application of the classifier (\lnL or MLP) to
the background dataset. To calculate the expected number of axion induced
X-rays during the solar tracking dataset, the differential solar axion flux and
the radial emission profile in the Sun are needed (in addition to the losses
expected due to the detector window and gas transmission). To properly
characterize the expected 'axion image' on the detector, a raytracing
simulation taking these into account is required. Such a simulation was
implemented in the context of this thesis and verified against PANTER
measurements of the LLNL telescope.

Despite the additional detector features producing a more homogeneous
background rate over the entire central chip, an interpolation of the
background rate at each point is still required.
Such an interpolation was developed based on a normal distribution weighted nearest neighbor approach, producing smooth results. Further, the limit calculation method of [[cite:&cast_nature]] was extended to allow the inclusion of systematic uncertainties into the likelihood function by usage of four nuisance parameters. As these need to be integrated out to obtain the posterior likelihood function from which a limit is computed, a Markov Chain Monte Carlo (MCMC) approach was developed to sample from the likelihood function efficiently. This is needed, because different choices of parameters (detector vetoes, software efficiencies and so on) are evaluated based on their resulting expected limit. Expected limits are computed by sampling toy candidate sets from the background distribution and computing their limits. The expected limit then is defined by the median of all such limits. The expected limit for the best method based on $\num{50 000}$ toy candidate sets came out to \[ \left(g_{ae} · g_{aγ}\right)_{\text{expected}} = \SI{7.878225(6464)e-23}{GeV^{-1}}, \] while the observed limit was computed to \[ \left(g_{ae} · g_{aγ}\right)_{\text{observed}} \lesssim \SI{7.35e-23}{GeV⁻¹} \text{ at } \SI{95}{\%} \text{ CL}. \] This is a good improvement over the current best limit obtained by CAST in 2013 [[cite:&Barth_2013]] of \[ \left(g_{ae} · g_{aγ}\right)_{\text{CAST2013}} \lesssim \SI{8.1e-23}{GeV⁻¹}. \] For the axion-photon coupling our detector was not expected to improve on the current best limit ($g_{aγ, \text{Nature}} < \SI{6.6e-11}{GeV⁻¹}$, cite:cast_nature), which is validated by an expected limit of \[ g_{aγ, \text{expected}} = \SI{9.0650(75)e-11}{GeV⁻¹}, \] with an observed limit of \[ g_{aγ, \text{observed}} \lesssim \SI{9.0e-11}{GeV⁻¹} \text{ at } \SI{95}{\%} \text{ CL}. 
\]
However, for the chameleon limit this detector was expected to be highly
competitive, due to a $\SI{300}{nm}$ thin \ccsini window greatly increasing
efficiency at the required low energies and the additional detector vetoes
improving background rates over the entire center GridPix. Indeed, these,
combined with an improved limit calculation method -- allowing the inclusion of
the entire center chip and thus the full chameleon flux -- and the better
classifier in the form of the MLP, yield an expected limit of
\[ β_{γ, \text{expected}} = \num{3.6060(39)e+10}. \]
The observed limit was then computed to
\[ β_{γ, \text{observed}} \lesssim \num{3.1e+10} \text{ at } \SI{95}{\%} \text{ CL}, \]
which is a significant improvement over
[[cite:&krieger2018search;&krieger_chameleon_jcap]]
\[ β_{γ, \text{Krieger}} < \num{5.74e10} \text{ at } \SI{95}{\%} \text{ CL}, \]
the current best bound on the chameleon-photon coupling.

#+BIBLIOGRAPHY: references
# Use biblatex for the bibliography
# Add bibliography to Table of Contents
# Comment out this command if your references are printed for each chapter.
#+LATEX: \printbibliography[heading=bibintoc]
#+LATEX: \appendix
# * Appendix :Appendix:
# #+LATEX: \minitoc
# TODO: need backend specific sections here I think

** TODOs for this section [/] :noexport:
- [ ] *WRITE SUMMARY IN FIRST OR THIRD PERSON?*
- [ ] *EXTEND FOR OTHER COUPLING CONSTANT LIMITS!*

* Bibliography :html:
:PROPERTIES:
:CUSTOM_ID: sec:bibliography
:END:
<<bibliographystyle link>>
bibliographystyle:unsrtnat
<<bibliography link>>
bibliography:references.bib

* Data acquisition and detector monitoring :Appendix:Detector:
:PROPERTIES:
:CUSTOM_ID: sec:daq
:END:
#+LATEX: \minitoc

Having introduced the detector used for the data taking in chapter
[[#sec:septemboard]], this appendix introduces the data acquisition (DAQ)
software for the detector (sec. [[#sec:daq:tof]] and [[#sec:daq:tos]]), and
discusses the data formats used for readout as well as the logging facilities.
In the last section [[#sec:daq:septemboard_event_display]] we will discuss the
event display tool used to monitor daily data taking at CAST.

** TODOs for this section [/] :noexport:
- [X] introduce TOS
- [X] introduce data format
- [X] logging data (temperature)
- different timepix calibrations (here?) maybe just introduce different calibrations and then in data taking part talk about what they actually look like?
- [X] *INTRODUCE DAQ ACRONYM*

** Timepix Operating Firmware - TOF
:PROPERTIES:
:CUSTOM_ID: sec:daq:tof
:END:

We start with the firmware of the detector, the \textbf{T}imepix
\textbf{O}perating \textbf{F}irmware (TOF), which runs on a Virtex-6 FPGA,
specifically the Xilinx Virtex-6 (V6) ML605 evaluation board. TOF controls the
Timepix ASICs of the Septemboard (both the slow control aspects and data
taking) and coordinates the scintillator signals and the FADC trigger. It is a
VHDL project, intended to run at a clock frequency of $\SI{40}{MHz}$.
Communication with the GridPixes is done via two \textbf{H}igh-\textbf{D}efinition
\textbf{M}ultimedia \textbf{I}nterface (HDMI) cables, while communication with
the readout software on the DAQ computer is handled via Ethernet. For a
detailed introduction to TOF, see cite:lupberger2016pixel as well as
cite:schiffer_phd. See sec. [[#sec:appendix:configuration:tos_tof_versions]]
for a few notes on the used firmware versions during data taking.

*** TODOs for this section :noexport:
- [X] *CHECK IF VHDL* -> Confirmed by Tobi on discord.
- [X] *LUPBERGER THESIS*

** Timepix Operating Software - TOS
:PROPERTIES:
:CUSTOM_ID: sec:daq:tos
:END:

The \textbf{T}imepix \textbf{O}perating \textbf{S}oftware (TOS) is the
computer-side data acquisition software to read out Timepix based detectors. It
is an object oriented C++ project, available at cite:TOS_github. [fn:TOS_versions]
The project needs to be used in conjunction with the \textbf{T}imepix
\textbf{O}perating \textbf{F}irmware (TOF), which communicates with TOS via
Ethernet. The TOS project was started as far back as 2009 by people at the
University of Mainz.

Next is a short overview of the basic blocks that make up the main logic of the
software. The fully object oriented nature of the project means that there are
different classes for the different software pieces:
- =Console= :: A class representing the user facing REPL (Read-Evaluate-Print
  Loop, an 'interpreter') to control the software.
- =PC= :: A class representing the network layer and communication side of the
  software, sitting between the console and lower layers.
- =FPGA= :: A class representing the functionality required to control the FPGA
  on the Virtex-6 evaluation board.
- =Chip= :: A class representing each Timepix ASIC and its functionality.
- =HFManager= :: A class unifying the FADC & Wiener HV control unit, as they
  are both controlled via USB, installed in a VME crate. This class contains
  individual attributes that contain explicit classes for these two
  devices. The name is short for 'High Voltage and FADC Manager'.
- =V1729= :: A class representing the Ortec Flash ADC.
- =HV*= :: Multiple classes representing HV channels, groups and more.
- =MCP2210= :: A class representing the PT1000 temperature sensors installed on
  the detector via an =MCP2210= microcontroller, optionally connected via
  USB. The actual microcontrollers with attached PT1000s are =MAX31685= models.
- Misc :: There are a few further classes of minor interest to the general
  functionality of TOS (tab command completion and history, classes to set
  masks on the chips, etc.).

In general TOS is a fully command line driven software package, with its own
REPL.
It brings all the expected features one might wish for from a REPL, including
auto completion, history lookup, emacs style keybindings (based on GNU Readline
[fn:gnu_readline]) and more.

The aforementioned =HFManager= and the temperature sensors are optional pieces
that are not required for basic Timepix operation. Their functionality has to
be activated via a manual command, =ActivateHFM=. This triggers the USB
connection to the VME crate and tries to find the Wiener HV module as well as
the Ortec FADC in the crate. Additionally, TOS attempts to find the temperature
sensors (via a secondary, optional USB connection). If the latter are found,
continuous temperature logging begins (see
sec. [[#sec:daq:temperature_readout]]).

The HV controls are specific to Wiener HV power supplies. In principle the
implemented functionality is a fully featured HV controller that supports all
Wiener functionality, like grouping different channels to ramp up together,
killing channels on a trip and more. Most importantly, it implements a custom,
slow HV ramping logic, which keeps the relative potentials constant between
channels in a group to avoid tripping a channel.

An example of a typical startup procedure is shown in listing
[[TOS_startup_commands]], in this case to start a background run. Note that
most essential commands in TOS also have shortened names via numbers, for
historic reasons (TOS originally had no autocompletion and did not allow moving
the cursor in text input, making typing complex names cumbersome and error
prone), which is why many of the inputs are simple numbers.

#+CAPTION: An example of the typical startup routine of TOS for a background data taking measurement at CAST
#+CAPTION: for the Septemboard based GridPix detector. The indented lines refer to commands given to the
#+CAPTION: previous command at top level.
#+LABEL: TOS_startup_commands #+begin_src sh user@ingrid-DAQ~/ ./TOS > 7 # number of chips > 4 # preload > SetChipIDOffset > 190 > lf # load FSR values for the chips > # return 7 times enter to load default paths > uma # create a uniform matrix for all chips > 1 # Matrix settings > 0 > 1 > 1 > 0 > LoadThreshold # load threshold equalisation files > 4 # write matrix > 3 # read out > 3 # 2nd readout to make sure pixels are 'empty' > ActivateHFM # startup HV & FADC controls > SetFadcSettings # load the FADC settings > Run # start a data taking run > 1 # run time via # frames > 0 > 0 > 0 > 2 # shutter range select > 30 # shutter time select (2 + 30 yields ~2.2 s frames) > 0 # zero suppression > 1 # FADC usage > 0 # accept FADC settings #+end_src [fn:TOS_versions] There are unfortunately 2 different versions of TOS, as development diverged for different readout systems. One version is for the Xilinx Virtex-6 (V6) ML605 evaluation board and the other for the \textbf{S}calable \textbf{R}eadout \textbf{S}ystem (SRS). The V6 version can read out only a single detector (with up to 8 Timepix ASICs), but supports readout of an Ortec FADC and controlling a Wiener HV module via VME. The SRS version instead supports neither of these additional features, but supports multiple detectors at the same time. The detector used in this thesis is read out using the Virtex-6 board. [fn:gnu_readline] https://tiswww.case.edu/php/chet/readline/rltop.html *** TODOs for this section [/] :noexport: *ACCORDING TO WIENER VME MANUAL ALL FILES ARE OPEN SOURCE* https://wikihost.nscl.msu.edu/S800Doc/lib/exe/fetch.php?media=wiki:manual_vm-usb_9_01_1.pdf [[file:~/org/Papers/manual_vm-usb_9_01_1.pdf]] -> page 13 - [ ] *PUSH TOS TO GITHUB OR SIMILAR AND REFERENCE* -> Pushed, but not public yet! - [ ] *CHECK AGAIN WIENER VME SOURCES OPEN SOURCE* -> See above. - [X] *ADD FULL NAME OF V6 BOARD* Link to repositories (maybe we can make the Virtex TOS public?) - [ ] *Link to TOF firmware.* -> ??? 
I guess they are on our office computer? - [X] *Septem event display example.* Section further down - [X] *INSERT TOS CONFIG FILES SOMEWHERE* -> Appendix, but there's a configuration section. Referred there. *** TOS output data format :PROPERTIES: :CUSTOM_ID: sec:daq:tos_output_format :END: When starting a data taking run with TOS, a new directory for the run is created in the ~data/runs~ subdirectory. The name will be of the form ~Run_<run number>_YYMMdd-HH-mm~ where the run number is an increasing number based on the last run present in the directory and the suffix is the date and time of day when starting the run. This directory contains the configuration of all DACs for each chip, ~fsr<chip number>.txt~, the written configuration matrix for all pixels, ~matrix<chip number>.txt~ and finally the data files ~data<event number>.txt~. If an FADC is used for the readout additional ~data<event number>.txt-fadc~ files are created, one for each file in which the FADC triggered (sec. [[#sec:daq:fadc_data_files]]). The Timepix data files are stored -- for historic reasons -- in raw ASCII format. Two different readout modes (with different output formats) are supported. For the following explanation it is assumed the Timepix is used in the ToT (Time-over-Threshold) mode. - full matrix readout :: reads out the whole Timepix ASIC(s) and writes a single 256x256 pixel matrix as an ASCII file (for each chip). 256 lines, each containing space separated ToT values for each pixel. - zero suppressed readout :: reads out only those pixels that have ToT values larger than 0, up to \num{4096} pixels. Stores the data in TSV files (tab separated values) '=X Y ToT=' with an additional header. The header contains a global "run" and "event" header, which contains information about the run the event is taken from and a "chip" header, which contains information about the specific Timepix ASIC(s) being read out (up to 8 can be read out at the same time using TOS). 
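To make the header and pixel layout concrete, the following is a minimal Python
sketch of how such a zero suppressed file could be parsed. It is purely
illustrative and not code from TOS or ~TimepixAnalysis~; the function name
~parse_zero_suppressed~ and the exact header handling are assumptions based on
the format as described in the listings of this section.

```python
from collections import defaultdict

def parse_zero_suppressed(lines):
    """Return a dict mapping chip number -> list of (x, y, ToT) tuples.

    '##' prefixed lines are run / event header fields, '#' prefixed lines
    are chip header fields. Remaining lines are 'X Y ToT' pixel values.
    """
    pixels = defaultdict(list)
    chip = None
    for line in lines:
        line = line.strip()
        if line.startswith("# chipNumber:"):
            chip = int(line.split(":")[1])  # pixels that follow belong to this chip
        elif line.startswith("#") or not line:
            continue  # run / event header or other chip header fields
        else:
            x, y, tot = (int(v) for v in line.split())
            pixels[chip].append((x, y, tot))
    return pixels

# shortened example mimicking the file layout described in the text
example = """\
## [General]
## runNumber: 339
## [Event]
## eventNumber: 2
# chipNumber: 2
# chipName: H 9 W69
# numHits: 2
106 160 75
211 142 2
"""
pixels_by_chip = parse_zero_suppressed(example.splitlines())
# maps chip 2 to the two active pixels (106, 160, 75) and (211, 142, 2)
```

Reconstructing the run and event header fields into a dictionary would follow
the same pattern, splitting on the first ~:~ of each ~##~ line.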
As most events are extremely sparse for our purposes (< 500 pixels active), the
zero suppressed readout is the only relevant readout mode. The data files can
be split into three distinct parts. The first is a global run header, see
listing [[code:daq:zero_suppressed_readout_run_header]], which contains
information about the run the event is part of, including important settings
used as well as the timestamp of the event. Next is an event specific header,
which contains specific information about the event in relation to the FADC and
the scintillators, see listing
[[code:daq:zero_suppressed_readout_event_header]]. The final part of the zero
suppressed data files is the chip header and the tab separated value part of
the '~X Y ToT~' pairs of the active pixels for each chip of the detector in
that event, see listing [[code:daq:zero_suppressed_readout_chips]].

#+CAPTION: TOS generated data files start with a general header, which mainly contains
#+CAPTION: information about the run the data file is part of. The only exception is the
#+CAPTION: =dateTime= field, which represents the timestamp of the event (an oversight; it should
#+CAPTION: have been in the ~[Event]~ header).
#+NAME: code:daq:zero_suppressed_readout_run_header
#+begin_src toml
## [General]
## runNumber: 339
## runTime: 7200
## runTimeFrames: 0
## pathName: data/runs/Run_339_190218-10-36
## dateTime: 2019-02-18.10:36:34
## numChips: 7
## shutterTime: 2
## shutterMode: verylong
## runMode: 0
## fastClock: 0
## externalTrigger: 0
#+end_src

#+CAPTION: After the general header follows the event header in similar fashion. It records
#+CAPTION: the event number and information about the FADC and scintillators. If the FADC triggered,
#+CAPTION: ~fadcReadout~ is 1. Scintillator triggers may then be values in $[0, 4096)$. The ~fadcTriggerClock~
#+CAPTION: is the clock cycle of the Timepix frame in which the FADC trigger was received.
#+NAME: code:daq:zero_suppressed_readout_event_header
#+begin_src sh
## [Event]
## eventNumber: 2
## useHvFadc: 1
## fadcReadout: 1
## szint1ClockInt: 0
## szint2ClockInt: 0
## fadcTriggerClock: 647246
#+end_src

#+CAPTION: The event header is followed by the beginning of the actual GridPix data. Each chip
#+CAPTION: appears with a 3-line chip header containing number and name as well as the number
#+CAPTION: of hits seen by that chip in the event. ~numHits~ lines follow with '~X Y TOT~' values
#+CAPTION: in tab separated fashion. This snippet would be followed by the remaining chips,
#+CAPTION: as many as written in the run header [[code:daq:zero_suppressed_readout_run_header]]
#+CAPTION: as ~numChips~.
#+NAME: code:daq:zero_suppressed_readout_chips
#+begin_src sh
# chipNumber: 0
# chipName: E 6 W69
# numHits: 0
# chipNumber: 1
# chipName: K 6 W69
# numHits: 0
# chipNumber: 2
# chipName: H 9 W69
# numHits: 2
106 160 75
211 142 2
#+end_src

**** TODOs for this section :noexport:
- [ ] *MAYBE* move this section _after_ the configuration file? That way the FADC explanation makes a bit more sense? For the HV of course it is pretty much the same.
- [ ] The below was still mentioned in the section above, but I don't think this is needed here anymore, given that we explain this before:
#+begin_quote
The Timepix is only capable of shutter based readouts. Typically, a fixed shutter is used. The readout is complicated for the case of using an FADC, in which case an FADC signal can be used as an external trigger to close the shutter early. This will be further explained in the FADC section, [[FADC]] *REWRITE THIS PART, REFER TO SCHEMATIC ABOUT TIMEPIX*
#+end_quote
- [X] TOS needs to talk about data format that was used in V6 TOS. Stupid ASCII files. Mention that in hindsight the time should have been invested to either use a really simple binary format (like NIO) or HDF5 (even if painful from C++).
- [X] *EXPLAIN RUN DATA LOCATION.
DATA STORED IN =data/runs= AND DIRECTORY NAME SCHEMA*
- [ ] *HV AND WIENER VME EXPLANATION*
- [X] *POSSIBLY SPLIT ZERO SUPPRESSED OUTPUT INTO CHUNKS? !! GENERAL HEADER, EVENT HEADER, CHIP HEADER, DATA*
- [ ] *ADD SECTION (EVEN IF POSSIBLY NOEXPORT) EXPLAINING THE FIELDS OF THE DATA FILE*
- [ ] *ADD EXPLANATION OF HOW TO CALCULATE SHUTTER LENGTH. POSSIBLY SOMEWHERE HERE? OR IN TIMEPIX INTRO? NEED IT LATER TO EXPLAIN LENGTH OF EVENTS*
  -> This is explained later where we talk about extracting information from the data files in sec. [[#sec:reco:event_duration]].

**** FADC data files
:PROPERTIES:
:CUSTOM_ID: sec:daq:fadc_data_files
:END:

If the FADC triggered during an event, as indicated by the ~fadcReadout~ field
in the event header seen in listing
[[code:daq:zero_suppressed_readout_event_header]], an additional data file is
written with the same name as the event file, but a ~.txt-fadc~ extension. It
contains a memory dump of the channels of the circular memory of the FADC, plus
a basic header about the FADC settings and the information about when the
trigger happened. The different fields in the header, see listing
[[code:daq:fadc_data_header]], are as follows:
- ~nb of channels~: decimal value of a 4-bit field that decides the number of
  active channels. ~0~ corresponds to using all channels as separate
  channels. We only use a single channel. [fn:fadc_chosen_settings]
- ~channel mask~: decimal value of a 4-bit field to (de-)activate
  channels. ~15~ corresponds to all 4 channels active.
- ~posttrig~: how many clock cycles of the $\SI{50}{MHz}$ [fn:base_clock] base
  clock of the FADC it continues taking data after the trigger before
  commencing the readout (useful to record the rest of the signal and center it
  in the readout window)
- ~pretrig~: the minimum acquisition time before a trigger is allowed to
  happen, in units of the $\SI{50}{MHz}$ [fn:base_clock] base clock.
- ~triggerrecord~: together with ~posttrig~ allows to reconstruct the time of the trigger in the acquisition window - ~frequency~: decimal representation of a 6-bit field to select the operating frequency. ~2 = 0b000010~ corresponds to $\SI{1}{GHz}$ operation. - ~sampling mode~: decimal representation of a 3-bit field changing the operation mode (manual or automatic trigger) and register working mode (12 or 14-bit sensitivity of each register). We run in manual trigger and 12-bit mode. [fn:fadc_chosen_settings] - ~pedestal run~: a 1-bit flag indicating whether this file is a pedestal run. #+CAPTION: The file starts with a header indicated by ~#~. Some of the values are #+CAPTION: decimal representation of bit fields, hence the weird values like #+CAPTION: "0 channels". It mixes both the configuration used as well as the #+CAPTION: time the trigger occurred (~triggerrecord~). #+NAME: code:daq:fadc_data_header #+begin_src sh # nb of channels: 0 # channel mask: 15 # posttrig: 80 # pretrig: 15000 # triggerrecord: 56 # frequency: 2 # sampling mode: 0 # pedestal run: 0 #+end_src The data portion starts with another semi-header of 12 data points, see listing [[code:daq:fadc_data_header_2]]. It contains fields that are not explained in the FADC manual, but instead refer to "reserved for expert usage" cite:fadc_manual. One exception is data point 2, which is the so called Vernier, which could be used to determine the trigger time within two registers to get up to $\sim\SI{50}{ps}$ RMS accurate time information. For our purposes though $\SI{1}{ns}$ time resolution is more than enough, given the signal undergoes integration and differentiation of a multiple of that in the shaping amplifier, anyway. #+CAPTION: After the header starts the data portion with some auxiliary information. #+CAPTION: The lines are neither of significant interest to us, nor are they properly #+CAPTION: explained in the manual. 
#+CAPTION: The second number corresponds to the Vernier, which can be used to
#+CAPTION: determine the trigger time more precisely than individual register
#+CAPTION: values allow. This is also not important for our purposes, as $\SI{1}{ns}$
#+CAPTION: resolution is plenty.
#+NAME: code:daq:fadc_data_header_2
#+begin_src sh
# Data:
# 3928
# 8022
# 3957
# 8076
# 3928
# 8023
# 3957
# 8077
# 2048
# 6138
# 2031
# 6151
#+end_src

The final portion of the file contains the actual data ($\num{10240}$ lines) and 3 fields at the very end to reconstruct the trigger within the acquisition window [fn:fadc_data_last_3_lines]. The data represents a pure memory dump of the cyclic registers. See listing [[code:daq:fadc_data_raw]] for a shortened example.

#+CAPTION: Actual data portion of the FADC data. The first $\num{10240}$ lines represent
#+CAPTION: a memory dump of the cyclic registers at trigger time (that is, in their natural
#+CAPTION: order instead of starting from the register in which the trigger was recorded).
#+CAPTION: It starts at register 0 for channel 0, followed by register 0 of channel 1, and so on.
#+CAPTION: As such every $4^{\text{th}}$ line corresponds to one channel. This is why the
#+CAPTION: values jump so much from line to line.
#+CAPTION: The last 3 lines contain information to recover the trigger point in the acquisition
#+CAPTION: window.
#+NAME: code:daq:fadc_data_raw
#+begin_src sh
2028 # register 0, channel 0
6119 # register 0, channel 1
1999 # register 0, channel 2
6100 # register 0, channel 3
2021 # register 1, channel 0
6108 # ...
... # 10240 lines of data in total
# 56
# 4096
# 0
#+end_src

[fn:base_clock] If running in $\SI{1}{GHz}$ mode. Otherwise it corresponds to the $\SI{100}{MHz}$ clock.

[fn:fadc_chosen_settings] As of writing this thesis, I don't remember why the choice was made to only use a single channel instead of using all 4 channels to extend the time interval (development of these things happened between 2015-2017).
It's possible there were issues trying to combine all 4 channels. But it's also just as likely it was an oversight due to lack of time, combined with the fact that a $\SI{2.5}{μs}$ window is long enough for all intents and purposes. However, combining all 4 channels would even yield a long enough acquisition window when running in the $\SI{2}{GHz}$ sampling mode. Similarly, the 12-bit readout mode may provide plenty of resolution in ADC values, but it would have seemed prudent to use the 14-bit mode, given its availability. All in all it leaves me scratching my head (and thinking the likely reason was lack of time and being happy that things worked in the first place at the time).

[fn:fadc_data_last_3_lines] The last 3 lines of the data portion contain the trigger record, which we already print in the header part, and the ~Valp_cp~ and ~Vali_cp~ registers, which are only important if the FADC is used at a sampling frequency of $< \SI{1}{GHz}$, which is why we ignore them here.

***** TODOs for this section [/] :noexport:
- [ ] Maybe make this a full section and not subsection of TOS output format?
- [X] *Introduce FADC data files.*
- [X] *VERIFY THE EXPLANATION OF THE HEADER FIELDS*
- [X] *ISN'T THE POSTTRIG VALUE SOMETHING ONE SHOULD TAKE INTO ACCOUNT WHEN COMPUTING THE TIME OF THE ACTUAL TRIGGER IN RELATION TO OTHER DETECTOR FEATURES?*
  -> No, because our explanation was wrong. It has nothing to do with when the trigger is sent! It is sent, whenever it appears. It just changes the time *after the trigger* that the FADC continues recording data into the registers to record full signal shapes and center the signal!

***** Explanation of beginning of data portion :extended:
Explanation of the data header [[code:daq:fadc_data_header_2]] here:
- =hvFadcManager.cpp= =writeFadcData=
- FADC manual page 27 lists the data sent by the FADC.
However, neither explains what the "first sample" and the "rest baseline" are.
The manual calls these "expert features" and doesn't explain them... That's how you keep it as an expert feature! The second line is the Vernier, which I honestly don't really understand either. I think it allows one to determine the trigger time between two register entries more precisely? Ah yes, see page 14 of the manual about the Vernier. It allows for 50 ps RMS time information between two bins.

***** Explanation of FADC settings :extended:
As mentioned in the footnote in the previous section, I really don't understand the choice of FADC settings we used for the actual data taking. It's quite possible I'm nowadays just not aware of something important, but well. Either way, unfortunately I only started being serious about note taking about my work around the beginning of the data taking period in October 2017. So retracing my thoughts during my master thesis (2015-2016) and the beginning of my PhD is unfortunately pretty much impossible. However, as mentioned, the settings are good enough for what we are doing with the data. The much bigger issues are related to the noise we observed at times etc., which will be mentioned later.

*** TOS configuration file
:PROPERTIES:
:CUSTOM_ID: sec:daq:tos_config_file
:END:

Everything related to the =HFManager= in TOS is controlled by a configuration file, normally located in =TOS/config/HFM_settings.ini=. The TOS configuration file used during CAST data taking is found in appendix [[#sec:appendix:configuration:tos_config]]. We will go through its sections one by one and explain them, starting with the =[General]= section in listing [[code:daq:general_config]]. This section defines the VME related settings. The VME address of the HV module installed in the crate is used as the base address. The FADC address in the same VME crate is calculated from an offset in units of the VME address spacing of =0x0400=. [fn:base_address_hv]

#+CAPTION: General section of the TOS configuration file.
#+CAPTION: It sets the base address of the HV module installed in the VME crate.
#+CAPTION: The FADC address is given as an offset from the base address.
#+NAME: code:daq:general_config
#+begin_src toml
[General]
sAddress_fadc = 1
baseAddress_hv = 0x4000
#+end_src

The next section, =[HvModule]= (listing [[code:daq:hv_module_config]]), contains general settings about the HV module used. The settings are related to the =KillEnable= feature of Wiener HV power supplies, the ramping speed of the HV channels and the time in seconds between sanity checks of the HV during data taking. [fn:check_hv_interval_setting]

#+CAPTION: The =[HvModule]= section contains settings related to the HV module as a whole:
#+CAPTION: whether a single channel tripping causes all channels to ramp down (=KillEnable=),
#+CAPTION: the ramp speed, and the interval at which the HV module sanity status is checked.
#+NAME: code:daq:hv_module_config
#+begin_src toml
[HvModule]
setKillEnable = true
# Voltage and Current RampSped currently set to arbitrary value
# in percent / second
moduleVoltageRampSpeed = 0.1
moduleCurrentRampSpeed = 50
# checkModuleTimeInterval = 60, checks the status of the
# module every 60 seconds during a Run, between two events
checkModuleTimeInterval = 60
#+end_src

Next up, the =[HvGroups]= section in listing [[code:daq:hv_groups_config]] defines the different groups that combine multiple channels. There are multiple different kinds of groups in Wiener HV power supplies. The important groups are ramping groups and trip groups. Essentially, if one channel in a group starts ramping or trips, all other channels in the group also start ramping or shut off their HV, respectively. The section in the config file mainly exposes the already predefined sets of groups that are relevant for the Septemboard detector in TOS.

#+CAPTION: =[HvGroups]= defines multiple groups of different HV channels together.
#+CAPTION: The config file does not expose arbitrary groupings, but only sets flags for
#+CAPTION: whether groups are active and what their numbers are.
#+NAME: code:daq:hv_groups_config
#+begin_src toml
# if this flag is set to true, anode and grid
# will be coupled to one group
[HvGroups]
anodeGridGroupFlag = true
# grid is master channel of set on group
anodeGridGroupMasterChannel = 4
anodeGridGroupNumber = 0
monitorTripGroupFlag = true
monitorTripGroupNumber = 1
rampingGroupFlag = true
rampingGroupNumber = 2
gridChannelNumber = 4
anodeChannelNumber = 5
cathodeChannelNumber = 8
#+end_src

After the =[HvGroups]= section comes the definition of the individual HV channels in listing [[code:daq:hv_channels_config]]. Here the physical channels on the device are mapped to the desired voltages and current bounds as well as to a human readable name. The fields repeat with increasing prefix numbers.

#+CAPTION: This is an excerpt of the full =[HvChannels]= section for a single HV channel.
#+CAPTION: It maps the physical HV connectors to their voltages, current bounds and
#+CAPTION: a human readable name. In this case the grids of the GridPixes of the Septemboard
#+CAPTION: all receive a voltage of $\SI{300}{V}$. The naming scheme of the fields is
#+CAPTION: hardcoded for practical reasons and simply repeats with increasing numbers.
#+NAME: code:daq:hv_channels_config
#+begin_src toml
[HvChannels]
# all currents given in A (vmecontrol shows mA)
0_Name = grid
0_Number = 5
0_VoltageSet = 300
0_VoltageNominal = 500
0_VoltageBound = 2.5
0_CurrentSet = 0.000050
0_CurrentNominal = 0.000500
0_CurrentBound = 0
#+end_src

Second to last is the =[FADC]= section in listing [[code:daq:fadc_config]]. As the name implies, it configures all parameters of the FADC. The main parameter to change is =fadcTriggerThresholdRegisterAll=, which effectively defines the trigger threshold in $\si{mV}$. Depending on the amount of noise in the system, adjustments to the threshold may be necessary.
#+CAPTION: The =[FADC]= section configures the FADC. The most important setting is the trigger
#+CAPTION: threshold, as it defines the voltage required to trigger the FADC.
#+NAME: code:daq:fadc_config
#+begin_src toml
[Fadc]
# FADC Settings
fadcTriggerType = 3
fadcFrequency = 2
fadcPosttrig = 80
fadcPretrig = 15000
# was 2033 before, 1966 corresponds to -40 mV
fadcTriggerThresholdRegisterAll = 1966
# run time of a single pedestal run for the FADC in ms
fadcPedestalRunTime = 100
# number of acquisition runs done for each pedestal calibration
fadcPedestalNumRuns = 10
# using channel 0 on FADC as trigger source, thus bit 0 = 1!
fadcChannelSource = 1
# set FADC mode register (mainly to enable 14-bit readout)
fadcModeRegister = 0b000
#+end_src

The last section of the configuration file is the =[Temperature]= section, which deals with the safety ranges of the temperature of the detector. If the temperature leaves the safe range, the detector is to be shut down. [fn:temperature_safety_config_settings]

#+CAPTION: =[Temperature]= defines the safe operating ranges of the detector. If the
#+CAPTION: temperature leaves this range, the detector is to be shut down.
#+NAME: code:daq:temperature_config
#+begin_src toml
[Temperature]
# temperature related parameters, all temps in °C
safeUpperTempIMB = 61
safeUpperTempSeptem = 61
safeLowerTempIMB = 0
safeLowerTempSeptem = 0
#+end_src

[fn:base_address_hv] If in doubt about what the base address of the HV supply in the VME crate is, start one of the Wiener HV programs (for example =isegControl=), as it auto-detects the module and prints the address.

[fn:check_hv_interval_setting] The =checkModuleTimeInterval= setting to check the HV status during the data taking was disabled at CAST, as it caused issues due to false alarms of the HV status. Given that the =KillEnable= flag was used, it was deemed unimportant. Attempting to fix it would have risked data loss, as it would have been tested on the live detector.
[fn:temperature_safety_config_settings] The temperature safety range is coupled to the =checkModuleTimeInterval= setting in the previous footnote. It was disabled together with the above during actual data taking.

**** TODOs about this section [/] :noexport:
- [ ] *MAYBE* move the FADC parts to the explanation of the FADC data files? I'm not sure the order of things makes so much sense as it is now. At no point are we actually explaining what the different settings really are! -> Maybe the config file could actually be introduced before the data files are explained?

**** Understanding FADC settings :extended:
- [ ] *This should at least* be an :extended: section with an explanation of all the fields that one can actually set.

Given the time since last working with this, I need to look up the values of the TOS config file in the FADC manual. For reference our settings:
#+begin_src toml
[Fadc]
# FADC Settings
fadcTriggerType = 3
fadcFrequency = 2
fadcPosttrig = 80
fadcPretrig = 15000
# was 2033 before, 1966 corresponds to -40 mV
fadcTriggerThresholdRegisterAll = 1966
# run time of a single pedestal run for the FADC in ms
fadcPedestalRunTime = 100
# number of acquisition runs done for each pedestal calibration
fadcPedestalNumRuns = 10
# using channel 0 on FADC as trigger source, thus bit 0 = 1!
fadcChannelSource = 1
# set FADC mode register (mainly to enable 14-bit readout)
fadcModeRegister = 0b000
#+end_src

=FP_FREQUENCY= is the name for address =0x01=. It needs 6 bits of data:
#+begin_src
Bits 0-5   Function
Val = 1  => Fsample = 2GHz.
Val = 2  => Fsample = 1GHz.
Val = 4  => Fsample = 500MHz.
Val = 5  => Fsample = 400MHz.
Val = 10 => Fsample = 200MHz.
Val = 20 => Fsample = 100MHz.
Val = 40 => Fsample = 50MHz.
#+end_src
As such, our used value of ~fadcFrequency = 2~ corresponds to $\SI{1}{GHz}$, as I remembered.

=MODE_REGISTER= is the name for address =0x03= and takes 3 bits of data.
[[/home/basti/phd/Figs/fadc_settings_mode_register.png]]

Our value of =0b000= for the data means no interruption tagging, 12 bits data output and normal acquisition mode.

The ~fadcTriggerThresholdRegisterAll~ controls the trigger threshold of the FADC. See [[cite:&fadc_manual]] page 31-32:
#+begin_quote
TRIGGER THRESHOLD DAC : common pre-loading register of the DACs. This 12-bit register covers the range from –1V (000) to +1V (FFF). By USB or GPIB, one has access to the MSBs and LSBs via 2 distinct sub-addresses. The access is necessarily made in the order MSB (0B) then LSB (0A). By VME, the access is made via a single sub-address (0A). After loading of this register, one must transfer the value in the analog converter via the LOAD_TRIGGER THRESHOLD DAC (09) command.
#+end_quote

This implies the threshold voltage is calculated by:
#+begin_src nim
import unchained
const U_range = 2.V
const DAC_range = 4096
proc threshold(x: float): MilliVolt =
  result = (U_range / DAC_range * x - 1.V).to(mV)
echo threshold(1966.0)
#+end_src

#+RESULTS:
: -40.0391 mV

which precisely reproduces our $\SI{-40}{mV}$ number.

*** HV control via TOS
:PROPERTIES:
:CUSTOM_ID: sec:daq:tos_hv_control
:END:

As alluded to in the previous section [[#sec:daq:tos_config_file]], the HV control built into TOS can also handle ramping the channels of the detector. This is particularly convenient as it offers a very smooth ramping mode, which keeps the voltages of all channels at a constant ratio to one another. This allows for automatic ramping even of highly sensitive channels (like the GridPix grid). In order to use the HV control and ramp the channels via TOS, =ActivateHFM= must be followed by =InitHV=, which attempts to connect to the HV power supply using the settings of the config file. If the channels are not ramped up, a call to =RampChannels= will start the smooth ramping process (see listing [[code:daq:ramp_hv_channels]]).
#+CAPTION: The required commands to ramp the HV channels using the configuration
#+CAPTION: from the config file with a smooth ramping mode.
#+NAME: code:daq:ramp_hv_channels
#+begin_src sh
> ActivateHFM
> InitHV
> RampChannels
#+end_src

If the HV is to be ramped down, a call to =ShutdownHFM= will ask whether the channels should be ramped down. A multitude of further commands is available to communicate with the module, check the voltages, print status information and so on. Note that TOS can be started without ramping the HV channels up and stopped without ramping the channels down. It is capable of connecting to a running HV power supply, or of leaving it running after shutdown.

*** Temperature monitoring
:PROPERTIES:
:CUSTOM_ID: sec:daq:temperature_readout
:END:

The two =MAX31865= microcontrollers read out =PT1000= sensors, a group of \textbf{R}esistance \textbf{T}emperature \textbf{D}etectors (RTDs), which measure temperature via its effect on their electrical resistance. They are platinum based and have a resistance of $\SI{1000}{Ω}$ at $\SI{0}{\celsius}$. As the expected change in resistance is well understood, the temperature can be precisely measured. Each of these microcontrollers communicates with another microcontroller, an =MCP2210=, via the \textbf{S}erial \textbf{P}eripheral \textbf{I}nterface (SPI). SPI allows addressing both =MAX31865= chips from the single =MCP2210=. The =MCP2210= is a USB-to-SPI microcontroller. The USB connection from the intermediate board to the computer is separate from the rest of the detector communication. =TOS= communicates with it via the standard \textbf{H}uman \textbf{I}nterface \textbf{D}evice (HID) driver and utilizes an existing open source library for the =MCP2210= cite:wong_mcp2210, which is slightly adapted. The =ActivateHFM= command mentioned in the previous section also attempts to find the USB device of the =MCP2210= (the two are mainly intertwined, as the Septemboard detector is the only detector with either of the two features).
If it is found, temperature logging starts immediately and the log files are placed in the default =log= directory of the =TOS= repository. Once a data taking run starts, the logging location is moved over to the data storage directory of the run. In either case the log file is named =temp_log.txt= and contains one temperature value for the intermediate board sensor (=Temp_IMB=) and one for the carrier board sensor (=Temp_Septem=), each computed as an average over $\SI{5}{s}$, together with a timestamp (=DateTime=). A short snippet of the temperature log is shown in listing [[code:daq:temperature_readout]]. [fn:temp_logs_lost]

#+CAPTION: Snippet of a temperature log file as recorded for a run during the
#+CAPTION: CAST detector lab measurements. Tabs were replaced by spaces for
#+CAPTION: better visual alignment here.
#+NAME: code:daq:temperature_readout
#+begin_src sh
# Temperature log file
# Temp_IMB Temp_Septem DateTime
26.5186 42.1472 2019-02-16.16:11:45
26.5217 42.2798 2019-02-16.16:11:51
26.5202 42.4371 2019-02-16.16:11:57
26.5309 42.5944 2019-02-16.16:12:03
26.5324 42.7347 2019-02-16.16:12:09
26.5355 42.8627 2019-02-16.16:12:15
26.5416 42.9707 2019-02-16.16:12:21
26.5432 43.0771 2019-02-16.16:12:27
26.5616 43.1804 2019-02-16.16:12:33
...
#+end_src

[fn:temp_logs_lost] This default temperature logging location was also used during data taking, as an unintended fallback mechanism, if the HV of the detector was considered out of certain bounds. Unfortunately, the bounds checking was entangled with the HV module sanity checks. As both features were only implemented very shortly before data taking, they triggered data taking aborts. For that reason the feature was disabled manually in code for the data taking at CAST. This however triggered a secondary code path for the temperature logging, storing the logs in the default location outside the specific run directories.
As a result, the majority of CAST temperature logging data has been lost, as most of it was overwritten several times. Roughly daily manual temperature measurements remain and show the detector operating in a good temperature range. Precise correlations with certain detector behaviors are unfortunately impossible. The two different code paths for the temperature logging are essentially an unintended bug, stemming from the fact that temperature logging must be done to a 'global' location outside of data taking (as no data taking specific directory exists). Due to how the temperature logging and HV & FADC controls were added to TOS, these things were more entangled than necessary. A more thorough testing period of the detector and software package should have been performed, but was not in scope.

**** TODOs for this section [/] :noexport:
- [ ] *SHOULD EXPLANATION OF THE READOUT _HARDWARE_ NOT BE IN THE DETECTOR CHAPTER?* -> Probably yes. In particular once / if we move the data readout parts of the detailed descriptions to the appendix, it becomes more important to put the actual hardware description into the detector chapter where we already mention it a bit.
- [X] *INSERT TEMPERATURE LOGGING EXAMPLE SNIPPET*
- [ ] *THINK ABOUT PUTTING FOOT NOTE INTO ACTUAL TEXT*
- [X] *ADD CITATION* https://github.com/kerrydwong/MCP2210-Library

*** TOS development :extended:
As mentioned in one of the footnotes in the previous section, there are nowadays 2 independent versions of TOS. The detector used for CAST in 2014/15 (and thus its successor used in this thesis) was based on a readout using the Virtex 6 FPGA.
This system was, at the point I started on my master thesis in 2015, already quite diverged from the SRS based system, which was mainly developed for multi chip detectors that were initially planned for a large GridPix based TPC to be used for the ILD (the detector planned for the ILC, the International Linear Collider to be built in Japan). In addition there was a recent master thesis (by Alexander Deisting [[cite:&Deisting]]), which included work on using an FADC to read out the induced charges on the grid of the InGrid by decoupling the signal using a capacitor. The software library to interact with the used FADC had partially been implemented into the Virtex 6 TOS. As the FADC was an integral part of the new detector design, it was natural to start with the Virtex 6 version. At the same time, the SRS TOS version of the time was even uglier than the same code paths in the Virtex 6 TOS, due to its hardcoded extra loops for each FEC of the SRS. At a later time the SRS TOS was required for other detectors, and so development effort unfortunately had to be spent on both systems.

** schematic of whole readout chain [0/1] :noexport:
*ELSEWHERE AS WELL?* Create a full flow chart of how everything is connected. We have our notes about where each cable goes etc. We have a schematic in the master thesis. That can be modified a bit for the PhD thesis.
- [X] *THIS IS IN DETECTOR OVERVIEW NOW*

** Septemboard event display
:PROPERTIES:
:CUSTOM_ID: sec:daq:septemboard_event_display
:END:

In order to monitor the data taking process while the detector is running, an online event display tool was developed during the first CAST data taking period in March 2017. It is a Python [fn:daq_python] based project making heavy use of =matplotlib= cite:Hunter:2007 for an interactive view of the Septemboard events as they are recorded, the FADC readout, and a general information header about the current data taking run.
The backend consists of a multiprocessing architecture with multiple worker processes. One process watches the current run directory for changes and reads the raw data files, another performs basic event reconstruction and a final one updates the current event to be displayed. The main process renders the =matplotlib= based graphical user interface (GUI) [fn:daq_backend]. Fig. [[fig:daq:septemboard_event_display_example]] shows the graphical user interface of the Septemboard event display during a background run. General information about the current run and event is shown in the box at the top center. The top left box shows hit specific information for the current event. The current Septemboard event is always shown in the left pane in a realistic layout of the Septemboard. By default the Viridis [fn:viridis] color scale is used in the display of the Septemboard events, each shown as an image of $(256, 256)$ pixels. If a chip did not record any activity during an event, its plot remains white for easier identification of few hits compared to no hits. The color scale can be adjusted when starting the program and the images can be downsampled by factors of 2 for better visibility, as is done in fig. [[fig:daq:septemboard_event_display_example]]. The right pane of the event display shows the last recorded event of the FADC. It does not automatically update the plot every time a new Septemboard event is recorded, as there can be multiple events without FADC activity and it is useful to be able to glance at the last large event on the FADC. The filename is printed as the title of the plot to show which Septemboard event it corresponds to.

#+CAPTION: Screenshot of the Septemboard event display showing a background
#+CAPTION: event from a CAST data taking run in 2017.
#+CAPTION: The pixel density in the Septemboard on the left has been downsampled
#+CAPTION: by a factor of 2 from $(256, 256)$ for each chip to $(128, 128)$ for better
#+CAPTION: visibility of the activity. The event display shows general information like
#+CAPTION: run and event number in the box at the top, hit specific information for the
#+CAPTION: current event in the top left box and the current Septemboard event
#+CAPTION: in the left pane. The right pane shows the last FADC event (if no
#+CAPTION: new FADC event is recorded, it stays).
#+NAME: fig:daq:septemboard_event_display_example
[[~/phd/Figs/daq/example_event_display.pdf]]

The event display provides multiple forms of interactivity, such as an "auto follow" mode (the default, in which new events are shown as they are recorded), a "playback" mode (which walks through all events with a certain delay), and general back-and-forth navigation. Further, a shortcut to save images directly exists, as well as simple computation of aggregate statistics of the current data taking period (different occupancy maps and simple histograms showing the number of hits per event and chip). All in all it provides a simple but powerful way to monitor the detector activity online as it takes data. [fn:daq_downsides]

[fn:daq_python] https://python.org

[fn:daq_backend] =matplotlib= provides a multitude of different GUI backends. The explicit choice depends on the specific machine (available backends may differ) and preference. Common choices are GTK and TkAgg. https://matplotlib.org/stable/users/explain/figure/backends.html

[fn:viridis] See here for the introduction of Viridis and its siblings: https://bids.github.io/colormap/

[fn:daq_downsides] The main drawbacks are related to it being a Python based project that utilizes =matplotlib= possibly too heavily. The combination means the tool is not useful for fast data taking, as it is too slow to show events in real time if data taking exceeds one frame per second significantly.
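
The directory-watching part of the backend described above can be sketched in a few lines. This is a minimal Python sketch, not the actual implementation: the polling approach and function names are assumptions, only the ~.txt~ / ~.txt-fadc~ file naming follows the data format described earlier.

```python
import os
import time

def scan_new_events(run_dir, seen):
    """Return (event file, FADC file or None) pairs for files not yet in `seen`.

    Event files are assumed to end in '.txt'; a matching FADC file has the
    same name with a '.txt-fadc' extension, as described for the data format.
    """
    new = []
    for name in sorted(os.listdir(run_dir)):
        if not name.endswith(".txt") or name in seen:
            continue
        seen.add(name)
        fadc = os.path.join(run_dir, name[:-len(".txt")] + ".txt-fadc")
        new.append((os.path.join(run_dir, name),
                    fadc if os.path.exists(fadc) else None))
    return new

def watch_run_directory(run_dir, handle_event, poll_interval=0.2):
    """Poll `run_dir` forever, handing each new event (and its FADC file,
    if one exists) to the `handle_event` callback."""
    seen = set()
    while True:
        for event, fadc in scan_new_events(run_dir, seen):
            handle_event(event, fadc)
        time.sleep(poll_interval)
```

In the real tool this loop would run in a worker process, feeding the reconstruction and display workers through queues; the sketch only illustrates the "watch for new files" step.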
*** TODOs for this section [1/1] :noexport:
- [X] *CITE SOMETHING FOR VIRIDIS COLORSCALE*

* Configuration and TOS / TOF versions :Appendix:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:configuration
:END:

In this appendix we present the TOS configuration file used for data taking at CAST and briefly comment on the versions of TOS and TOF used.

** TOS configuration file
:PROPERTIES:
:CUSTOM_ID: sec:appendix:configuration:tos_config
:END:

Below is the configuration file as it was used at CAST during the data taking periods. Most notably it contains the required address for the FADC in the VME crate, the high voltage settings (groups, voltages and current bounds) and the FADC settings (channel, trigger threshold etc.).

#+begin_src toml
[General]
sAddress_fadc = 1
baseAddress_hv = 0x4000

[HvModule]
setKillEnable = true
# Voltage and Current RampSped currently set to arbitrary value
# in percent / second
moduleVoltageRampSpeed = 0.1
moduleCurrentRampSpeed = 50
# checkModuleTimeInterval = 60, checks the status of the
# module every 60 seconds during a Run, between two events
checkModuleTimeInterval = 60

# if this flag is set to true, anode and grid
# will be coupled to one group
[HvGroups]
anodeGridGroupFlag = true
# grid is master channel of set on group
anodeGridGroupMasterChannel = 5
anodeGridGroupNumber = 0
monitorTripGroupFlag = true
monitorTripGroupNumber = 1
rampingGroupFlag = true
rampingGroupNumber = 2
gridChannelNumber = 5
anodeChannelNumber = 6
cathodeChannelNumber = 9

[HvChannels]
# grid, anode and cathode settings
# all currents given in A (vmecontrol shows mA)
0_Name = grid
0_Number = 5
0_VoltageSet = 300
0_VoltageNominal = 500
0_VoltageBound = 10
0_CurrentSet = 0.000050
0_CurrentNominal = 0.000500
0_CurrentBound = 0
1_Name = anode
1_Number = 6
1_VoltageSet = 375
1_VoltageNominal = 500
1_VoltageBound = 10
1_CurrentSet = 0.000050
1_CurrentNominal = 0.000500
1_CurrentBound = 0
2_Name = cathode
2_Number = 9
2_VoltageSet = 1875
2_VoltageNominal = 2500
2_VoltageBound =
15
2_CurrentSet = 0.000050
2_CurrentNominal = 0.000500
2_CurrentBound = 0
3_Name = Ring1
3_Number = 7
3_VoltageSet = 415
3_VoltageNominal = 500
3_VoltageBound = 15
3_CurrentSet = 0.000100
3_CurrentNominal = 0.000500
3_CurrentBound = 0
4_Name = Ring29
4_Number = 8
4_VoltageSet = 1830
4_VoltageNominal = 2500
4_VoltageBound = 15
4_CurrentSet = 0.000100
4_CurrentNominal = 0.000500
4_CurrentBound = 0
6_Name = sipm
6_Number = 4
6_VoltageSet = 65.6
6_VoltageNominal = 100
6_VoltageBound = 5
6_CurrentSet = 0.0005
6_CurrentNominal = 0.0005
6_CurrentBound = 0
# The veto paddle scintillator is commented out, as it was supplied
# with HV by an external CAEN HV power supply.
# 5_Name = szintillator
# 5_Number = 11
# #5_VoltageSet = 1300
# 5_VoltageSet = 0
# 5_VoltageNominal = 2500
# 5_VoltageBound = 5
# 5_CurrentSet = 0.002
# 5_CurrentNominal = 0.002
# 5_CurrentBound = 0

[Fadc]
# FADC Settings
fadcTriggerType = 3
fadcFrequency = 2
fadcPosttrig = 80
fadcPretrig = 15000
# was 2033 before, 1966 corresponds to -40 mV
fadcTriggerThresholdRegisterAll = 1966
# run time of a single pedestal run for the FADC in ms
fadcPedestalRunTime = 100
# number of acquisition runs done for each pedestal calibration
fadcPedestalNumRuns = 10
# using channel 0 on FADC as trigger source, thus bit 0 = 1!
fadcChannelSource = 1
# set FADC mode register (mainly to enable 14-bit readout)
fadcModeRegister = 0b000

[Temperature]
# temperature related parameters
safeUpperTempIMB = 61
safeUpperTempSeptem = 61
safeLowerTempIMB = 0
safeLowerTempSeptem = 0
#+end_src

** TOS and TOF versions used at CAST
:PROPERTIES:
:CUSTOM_ID: sec:appendix:configuration:tos_tof_versions
:END:

For the DAQ software TOS unfortunately no discrete git tags exist, due to its rocky development. However, based on the git repository [[cite:&TOS_github]] and the dates of the start of the data taking campaigns (see [[#sec:cast:timeline]]), it is simple to deduce the corresponding commits that were used.
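This deduction can be done with git itself. A short sketch, assuming a local clone of the TOS repository in =./TOS= and using a placeholder date (not an actual campaign start date):

```shell
# Print the hash of the last commit made before the given date
# on the currently checked out branch of the TOS clone:
git -C TOS rev-list -n 1 --before="2017-10-01" HEAD
```

=rev-list --before= filters by commit date, so the printed hash is the state of the code at that point in time.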
The detector firmware versions, TOF, have an even less well-defined history. No development history exists, strictly speaking. The ~.bit~ files are available, but their names are only 'descriptive' in nature and do not contain any version numbers. If memory serves correctly, the version used in /Run-3/ at CAST contained something like ~szint1_fixed~ in its filename, indicating that the scintillator trigger logic was fixed (which was the major bug in Run-2).

*** More notes on the TOF versions :extended:
Tobi expressed some of his development frustrations by coming up with ever more creative ways to name the binary files! The initial version might have been ~bastis neues lielibings tof~ or something like that. Well, it's not like it matters much at this point, which is why I did not spend any time really digging into which version exactly was used.

*** TODOs for this section :noexport:
- [ ] *WHICH VERSIONS WERE USED*
- [ ] *WHERE ARE THE FILES*. Office computer?

* Calibrations :Appendix:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:calibration
:END:

This appendix explains the different calibrations that need to be performed for the Timepix to bring it into its operating window, sec. [[#sec:appendix:calibration:timepix]]. Then in sec. [[#sec:appendix:septemboard_calibrations]] we present additional calibrations for all chips.

** Timepix calibrations
:PROPERTIES:
:CUSTOM_ID: sec:appendix:calibration:timepix
:END:

Before a Timepix based detector can be used for data taking, different calibrations have to be performed. Here we will discuss those calibrations which are performed before any data taking. First come the =THS= optimization and threshold equalization (sec. [[#sec:operation_calibration:ths_opt_equalization]]). These two calibrations are used to set different DACs on the Timepix to good working points. The very important ToT calibration was already introduced in the main body, section [[#sec:operation_calibration:tot_calibration]].
Its purpose is to interpret the ToT values in terms of the amount of recorded charge in electrons. In principle there are many other calibrations one could perform, as the Timepix has 13 different DACs. Most are used with the default values shown in tab. [[tab:daq:common_dac_values]]. S-Curve scans are introduced in sec. [[#sec:operation_calibration:scurve_scan]], which can be used to map ~THL~ threshold DAC values to charges and determine the activation threshold. Similarly, the Pólya distribution can also act as a means to determine the activation threshold, see sec. [[#sec:appendix:calibration:polya_distribution_threshold]]. Important references for the Timepix in general and for the calibration procedures explained below are cite:LLOPART2007485_timepix,LlopartCudie_1056683,timepix_manual,lupberger2016pixel. *** TODOs for this section [/] :noexport: - [ ] *THINK ABOUT KEEPING SCURVE HERE, GIVEN THAT APPENDIX NOW* *** =THS= optimization and threshold equalization :PROPERTIES: :CUSTOM_ID: sec:operation_calibration:ths_opt_equalization :END: For an optimal operation of a Timepix based detector, each pixel should ideally have the same threshold. While all pixels are theoretically identical, imperfections in the production process will always lead to slight differences, either local effects (transistor threshold voltage or current mismatches cite:LLOPART2007485_timepix,pelgrom1989matching) or global effects like small supply voltage instabilities. Therefore, each pixel has 4 independently selectable current sources to minimize the spread of threshold values cite:timepix_manual,LLOPART2007485_timepix. Together all 4 sources act as an effective 4-bit DAC to slightly adjust the threshold. The absolute current for the 4 sources depends on the global =THS= DAC, allowing currents in the range of $\SIrange{0}{40}{nA}$. To achieve a good calibration for a homogeneous threshold, first the =THS= DAC has to be set correctly. This is referred to as the =THS= optimization.
Once the correct value is found, the 4-bit DAC on each pixel can be adjusted to minimize the spread of threshold values of all pixels together. If the =THS= DAC is set too high, the 4-bit DAC on each pixel will be too coarse for a fine adjustment (as the 'current steps' will be too large). If it is too low, not enough range will be available to adjust each pixel to an equal noise / sensitivity level (not enough current available via the 4 current sources). The goal of the =THS= optimization is therefore to find just the right value, so as to provide a range in which all pixels can be shifted to the same value of the threshold DAC =THL=. The algorithm scans a range of =THL= values through a subset of $\num{4096}$ pixels using two different 4-bit DAC values: first 0 for all pixels, then the maximum value of 15. At each =THL= and 4-bit value the number of hits due to pure noise is recorded for each pixel. The weighted mean of the =THL= values, using the number of hits as weight, is the value of interest for each pixel and each 4-bit DAC value: the effective =THL= noise value for that pixel. For each of the two cases (4-bit value 0 and 15) we can then compute a histogram of the number of pixels at each =THL= value. The resulting histogram will be approximately a normal distribution around a specific =THL= value. The stopping criterion, which defines the final =THS= value, is such that these two distributions overlap at the 3 RMS level. This is performed by comparing the means of the 0 and 15 value distributions at a starting =THS= value and again at half of that =THS= value. Using a linear regression of the two differences, the optimal =THS= value is computed. With a suitable =THS= value set, the actual threshold equalization can start. The algorithm used is fundamentally very similar to the logic of the =THS= optimization. Each pixel of the chip is scanned for a range of =THL= values and the weighted =THL= noise mean is computed both at a 4-bit DAC value of 0 and at 15.
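The per-pixel weighted mean described above can be sketched as follows. This is an illustrative Python sketch on synthetic noise data, not the actual implementation (which lives in the C++ TOS code base):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic noise data: hits[i, j] = number of noise hits of pixel j at
# the i-th scanned THL value, with every noise peak centered at THL = 400
thl_values = np.arange(380, 421)
expected = 100.0 * np.exp(-0.5 * ((thl_values[:, None] - 400.0) / 3.0) ** 2)
hits = rng.poisson(np.broadcast_to(expected, (thl_values.size, 16)))

# Effective THL noise value per pixel: mean of the scanned THL values,
# weighted by the number of noise hits recorded at each of them
mean_thl = (hits * thl_values[:, None]).sum(axis=0) / hits.sum(axis=0)
# every entry of mean_thl lies close to the true noise peak at THL = 400
```

The histograms of these per-pixel means, computed once with all 4-bit DAC values at 0 and once at 15, are what the stopping criterion above compares.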
The normalized deviation of each pixel's =THL= value from the mean =THL= value of all pixels is computed. Using a linear regression, the optimal required shift (in units of the 4-bit DAC) is computed, yielding the final 4-bit DAC value for each pixel. An example of the 0 and 15 value distributions as well as the distribution using the final 4-bit DAC values for each pixel is shown in fig. [[sref:fig:daq:optimal_ths_distribution]]. [fn:daq_histo_source] Each of the distributions represents a different 4-bit DAC setting of all pixels of the chip. Orange ("min") represents all pixels using a 4-bit DAC value of 0, purple ("max") of 15. In green is the same distribution for the case where every pixel uses its optimal 4-bit DAC value. The threshold equalization thus yields a very strong reduction in the =THL= spread of all pixels. Fig. [[sref:fig:daq:4bit_dac_distribution]] shows how the pixels are distributed over the values of the 4-bit DAC. The narrow equalized line of fig. [[sref:fig:daq:optimal_ths_distribution]] is achieved by a normal distribution around $\num{8}$ of the 4-bit DAC values, with only very few at the edges of the DAC ($\num{0}$ and $\num{15}$). Finally, fig. [[fig:daq:4bit_dac_heatmap]] shows a heatmap of an entire chip with its 4-bit DAC values after equalization. Similar plots for all other chips during both run periods can be found in the extended thesis. #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "THL distributions") (label "fig:daq:optimal_ths_distribution") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/detector/calibration/ths_optimization_distributions_Run2_chip_3.pdf")) (subfigure (linewidth 0.5) (caption "4-bit distributions") (label "fig:daq:4bit_dac_distribution") (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/detector/calibration/optimized_equalization_bits_Run2_chip_3.pdf")) (caption (subref "fig:daq:optimal_ths_distribution") "Distributions of different 4-bit DAC settings of all pixels of the chip.
Orange (\"min\"): all pixels using a 4-bit DAC value of 0, purple (\"max\"): 15. Green: every pixel uses the optimal 4-bit DAC value after equalization. The result is a significant reduction in the =THL= value spread of all pixels." (subref "fig:daq:4bit_dac_distribution") "Distribution of all 4-bit DAC values for the pixels after the threshold equalization. A normal distribution around a middle value is expected, as it has the largest likelihood of achieving a flat threshold across the whole chip. Very few pixels are at either value \\num{0} or \\num{15}, implying few pixels are likely outside the range needed to adjust to the required threshold. Shown in this example is the center chip of the Septemboard with its calibration from July 2018.") (label "fig:daq:thl_and_4bit_distr")) #+end_src #+CAPTION: Heatmap of the distribution of the 4-bit DAC values of all pixels #+CAPTION: as they are spread over the full Timepix. Shown in this example #+CAPTION: is the center chip of the Septemboard with its calibration #+CAPTION: from July 2018. #+NAME: fig:daq:4bit_dac_heatmap [[~/phd/Figs/detector/calibration/heatmap_threshold_equalization_Run3_chip_3.pdf]] [fn:daq_histo_source] The plot is generated from the =thresholdMeans.txt= file created as part of the equalization procedure in TOS. [fn:daq_tos_code_quality] Having to check out the TOS code to verify the logic of the THS optimization and equalization procedures reminded me of the abhorrent code quality of that code base. Holy moly... **** TODOs for this section [/] :noexport: - [ ] Maybe move heatmap of equalization to extended version? - [X] *I THINK THS OPT DOES NOT!! USE TEST PULSES* Yes, pretty sure now. The THS optimization (and obviously threshold equalization) does not use test pulses - [ ] *INTRODUCE THE NAME 'PIXEL DAC' INSTEAD 4-BIT DAC?* - [ ] *MAYBE REPHRASE THE FOLLOWING. BUT THIS IS WHAT'S HAPPENING.
THE TOS CODE IS PRETTY SIMPLE, JUST BLOATED AND UGLY* - [X] *FIRST EXPLAIN THS OPT, THEN EQ, BOTH USING THE PLOT* - [X] *REWRITE BELOW FOLLOWING THE ABOVE NOW* - [X] *MENTION THIS IS TO MAKE EQUALIZATION OF EACH PIXEL DAC NICE* - [ ] *PSEUDO CODE ALGORITHM? I THINK IT WOULD CLARIFY THE EXPLANATION QUITE A BIT. ESPECIALLY BECAUSE IT'S NOT DIFFICULT. JUST TOS IS UGLY* - [X] *PLOT OF THESE TWO HISTOGRAMS. SHOW MAYBE IDEALIZED EXAMPLE OF BAD THS AND GOOD THS VALUE* - [X] *LOOK AT TOS CODE AGAIN* - [X] *LOOK AT TOS CODE FOR EQUALIZATION AGAIN* - [X] *CROSS CHECK THE NAMES ETC* **** Generate the plot for the THS optimization result :extended: #+begin_src nim :tangle code/ths_optimization.nim import std / strformat import ggplotnim proc csbs(): Theme = result = sideBySide() result.titleFont = some(font(7.0)) proc main(fname, runPeriod: string, chip: int) = var df = readCsv(fname, sep = '\t', colNames = @["x", "y", "min", "max", "bit", "opt"]) let breaks = linspace(-0.5, 15.5, 17).toSeq1D echo breaks ggplot(df, aes("bit")) + geom_histogram(breaks = breaks, hdKind = hdOutline) + scale_x_continuous() + xlim(-0.5, 16.5) + xlab("4-bit DAC") + margin(left = 3.5) + themeLatex(fWidth = 0.5, width = 600, height = 420, baseTheme = csbs) + ggtitle(&"All equalization bits after optimization, {runPeriod}, chip {chip}") + ggsave(&"/home/basti/phd/Figs/detector/calibration/optimized_equalization_bits_{runPeriod}_chip_{chip}.pdf", useTeX = true, standalone = true) df = df.gather(["min", "max", "opt"], "type", "THL") ggplot(df.filter(f{`THL` > 330.0 and `THL` < 460.0}), aes("THL", fill = "type")) + geom_histogram(binWidth = 1.0, position = "identity", hdKind = hdOutline, alpha = 0.7) + ggtitle(&"All equalization bits at 0, 15 and optimized, {runPeriod}, chip {chip}") + #xlim(330, 460) + margin(left = 3.5) + themeLatex(fWidth = 0.5, width = 600, height = 420, baseTheme = csbs) + ggsave(&"/home/basti/phd/Figs/detector/calibration/ths_optimization_distributions_{runPeriod}_chip_{chip}.pdf", 
useTeX = true, standalone = true) when isMainModule: import cligen dispatch main #+end_src Laptop: #+begin_src zsh for chip in {0..6}; do ./code/ths_optimization -f ~/septemH_calibration/SeptemH_FullCalib_2018_2/chip$chip/thresholdMeans$chip.txt --runPeriod Run3 --chip $chip &; done #+end_src Desktop / Laptop: #+begin_src zsh :results append file :file ths_optimization.org :output-dir resources for run in 2 3; do for chip in {0..6}; do ./code/ths_optimization -f ~/CastData/ExternCode/TimepixAnalysis/resources/ChipCalibrations/Run$run/chip$chip/thresholdMeans$chip.txt --runPeriod Run$run --chip $chip &; done done #+end_src #+RESULTS: file:resources/ths_optimization.org #+begin_src nim :tangle code/threshold_equalization_heatmap.nim import ggplotnim import std / [sequtils, strutils, strformat] proc main(fname, runPeriod: string, chip: int) = let aranged = toSeq(0 .. 255).mapIt($it) var df = readCsv(fname, sep = '\t', colNames = aranged) df["y"] = toSeq(0 .. 255) df = df.gather(aranged, "x", "4-bit DAC") .mutate(f{"x" ~ `x`.parseInt}) echo df ggplot(df, aes("x", "y", fill = "4-bit DAC")) + geom_raster() + #scale_x_continuous() + #xlim(-0.5, 16.5) + coord_fixed(1.0) + xlab("x [pixel]") + ylab("y [pixel]") + xlim(0, 255) + ylim(0, 255) + themeLatex(fWidth = 0.9, width = 600, baseTheme = singlePlot) + ggtitle(&"Equalization bits after optimization, {runPeriod}, chip: {chip}") + ggsave(&"/home/basti/phd/Figs/detector/calibration/heatmap_threshold_equalization_{runPeriod}_chip_{chip}.pdf", useTeX = true, standalone = true) when isMainModule: import cligen dispatch main #+end_src #+begin_src sh ./code/threshold_equalization_heatmap -f ~/septemH_calibration/SeptemH_FullCalib_2018_2/chip3/threshold3.txt #+end_src #+begin_src zsh :results append file :file heatmap_threshold_equalization.org :output-dir resources for run in 2 3; do for chip in {0..6}; do ./code/threshold_equalization_heatmap -f 
~/CastData/ExternCode/TimepixAnalysis/resources/ChipCalibrations/Run$run/chip$chip/threshold$chip.txt --runPeriod Run$run --chip $chip &; done done #+end_src #+RESULTS: [[file:resources/heatmap_threshold_equalization.org]] All plots are found under sec. [[#sec:appendix:septemboard_calibrations]]. **** Relevant code for calculation of mean values from TOS :extended: Filling of =sum= in =THscan=. =array_pos= is effectively the =THL= value currently being scanned. =pix_tempdata= is the response matrix of each pixel (contains hit counter for each pixel). Also fills =hit_counter=, which is simply the counts. #+begin_src c++ fpga->DataFPGAPC(pix_tempdata2,chp); //!!!only one chip!!! for(short y=step;y<256;y+=(256/pix_per_row)){ for(short x=0;x<256;x++){ if(pix_tempdata2[y][x]>=20 and pix_tempdata2[y][x]!=11810){ //if (pix_tempdata2[y][x]>=200) {std::cout << "hits for thl " << thl <<" :" << pix_tempdata2[y][x] << std::endl;} p3DArray[y][x][array_pos] = pix_tempdata2[y][x]; //if(LFSR_LookUpTable[(*VecData)[chp][y][x]]>=20 and LFSR_LookUpTable[(*VecData)[chp][y][x]]!=11810){ //p3DArray[y][x][array_pos] = LFSR_LookUpTable[(*VecData)[chp][y][x]]; sum[y][x]+=p3DArray[y][x][array_pos]*(array_pos); hit_counter[y][x]+=p3DArray[y][x][array_pos]; } else{ p3DArray[y][x][array_pos] = 0; sum[y][x]+=0; hit_counter[y][x]+=0; } } } #+end_src And in the =THSopt= the code to compute the mean: #+begin_src c++ for(y=0;y<256;y++){ for(x=0;x<256;x++){ if (hit_counter0[y][x]!=0){ mean0[y][x] = sum0[y][x]/hit_counter0[y][x]; mean0entries += 1; summean0 += mean0[y][x]; } if (hit_counter15[y][x]!=0){ mean15[y][x] = sum15[y][x]/hit_counter15[y][x]; mean15entries += 1; summean15 += mean15[y][x]; } } } #+end_src Length of shutter used is #+begin_src c++ // calling CountingTime with second argument == 1 // corresponds to n = 1, power of 256 fpga->CountingTime(10, 1); #+end_src (could compute the length, but not important right now) Given that the =THscan= is run for each THL value (and thus summing up 
all contributions of all THL values for =sum0=), the algorithm effectively computes: #+begin_src mean = Σ_i #hits_i * THL_i / Σ_i #hits_i #+end_src which is simply *the weighted mean of the =THL= value, weighted by the number of hits.* Essentially we compute the THL value with the most dominant noise. This makes sense, as changing the 4-bit DAC effectively moves around the position of that noise. The point of interest here is the fact that the number of hits depends strongly on the THL value. We only see the number of injected test pulses if we're above the noise. Ideally we don't want to see any noise due to a too low =THL= range. Therefore let's check what is used in TOS. We will verify this by computing the same value for an S-curve calibration file: #+begin_src nim import ggplotnim const path = "/home/basti/septemH_calibration/SCurve/chip_3/voltage_100.txt" let df = readCsv(path, sep = '\t', header = "#", colNames = @["THL", "counts"]) .filter(f{`THL` > 424}) echo df let thls = df["THL", float] let counts = df["counts", float] var sum = 0.0 var hits = 0.0 for (thl, count) in zip(thls, counts): sum += count * thl hits += count echo "Mean value = ", sum / hits #+end_src #+RESULTS: | DataFrame | with | 2 | columns | and | 175 | rows: | | Idx | THL | counts | | | | | | dtype: | int | int | | | | | | 0 | 425 | 1000 | | | | | | 1 | 426 | 1000 | | | | | | 2 | 427 | 1000 | | | | | | 3 | 428 | 1000 | | | | | | 4 | 429 | 1000 | | | | | | 5 | 430 | 1000 | | | | | | 6 | 431 | 1000 | | | | | | 7 | 432 | 1000 | | | | | | 8 | 433 | 1000 | | | | | | 9 | 434 | 1000 | | | | | | 10 | 435 | 1000 | | | | | | 11 | 436 | 1000 | | | | | | 12 | 437 | 1000 | | | | | | 13 | 438 | 1000 | | | | | | 14 | 439 | 1000 | | | | | | 15 | 440 | 1000 | | | | | | 16 | 441 | 1000 | | | | | | 17 | 442 | 1000 | | | | | | 18 | 443 | 1000 | | | | | | 19 | 444 | 1000 | | | | | | | | | | | | | | Mean | value | = | 463.8075954222876 | | | | which results in a mean value of 463.8.
Given the range of the data, that is, unsurprisingly, what we would expect from a weighted mean using the hit counter as weight. Of course, in the =THS= optimization the input is purely noise and not a fixed set of test pulses. *** Final =THL= (threshold) DAC value selection Once the detector is =THS= optimized and threshold equalized, the final threshold value of the =THL= DAC can be determined for the data taking. While measurements like an S-Curve scan (see sec. [[#sec:operation_calibration:scurve_scan]]) can be used to understand where the noise level of the chip is in terms of =THL= values, it is typically not a reliable measure as the real noise depends strongly on the shutter length. If an experiment -- like a low-rate experiment such as CAST -- requires long shutter lengths, the best way to determine the lowest possible noise-free =THL= value is to perform a simple scan through all =THL= values using the shutter length in use for the experiment. For a correctly equalized chip a sharp drop-off of noisy pixels should be visible at a certain threshold. In principle the =THL= value at which no more pixels are noisy is the ideal =THL= value. TOS first performs a quick scan in a ~THL~ range given by the user, using short shutter lengths. The determined drop-off from values that still see some noise to a noise-free range is then used as the basis for a scan at a long, user-given shutter length. For safe, noise-free operation one should choose a ~THL~ value 2 or 3 above the first noise-free THL value at the target shutter length. Especially for long shutter lengths it is important to perform this calibration without any high voltage applied to the detector, as otherwise cosmic background starts to affect the data. **** TODOs for this section [/] :noexport: Old paragraph for above: #+begin_quote - [ ] *REPHRASE AND NOT TALK ABOUT ENC HERE?* Each Timepix pixel has an electronic noise charge (ENC) of *CHECK THIS* electrons.
Of course the behavior of the charges on the pixels are statistically distributed. For the different calibrations typically very short shutter opening times are used to get fast calibrations. For practical data takings at experiments like CAST, very long shutter times on the order of $\mathcal{O}(> \SI{1}{\s})$ are used however. Due to the statistical nature of noise, a =THL= value that is noise less may not be fully noise free for long shutters. Therefore, one often uses =THL= values that are slightly larger (i.e. ~3 values larger of the 10 bit DAC). In this sense the S-Curve scan is a good cross check for whether the =THL= value seems sensible, but in practice a =THL= scan with a longer shutter time is more useful. #+end_quote ... THL scan at desired shutter length! Semi automatically. Scan a take no noise + 2 - [ ] *CHECK ENC VALUE OF TIMEPIX AND CITE TIMEPIX PAPER* -> ENC is ~100 (timepix paper), but the ENC isn't the relevant property here. I think I misunderstood what the ENC really is. - [ ] *FOR THE SECTION ABOVE I THINK ONLY THE MINIMALLY DETECTABLE CHARGE IS RELEVANT?* - [X] *REWRITE FULL SECTION WITH A CLEAR HEAD* *** S-Curve scan :PROPERTIES: :CUSTOM_ID: sec:operation_calibration:scurve_scan :END: The S-Curve scan is one of two ways to determine the optimal =THL= value. The purpose of the S-curve scan is to understand the relationship between injected charge in electrons and the =THL= DAC values by providing a ${\text{\# } e^-}/\mathtt{THL}\text{ step}$ number (or without a =ToT= calibration $\mathtt{ToT}/\mathtt{THL}$). It works by injecting charges onto each pixel and checking the response of each pixel at different =THL= values. Below a certain =THL= value all pixels will respond to the injected charge. At some point certain pixels will be insensitive to the induced charge and a 90° rotated "S" will form. By fitting an error function to this S an ideal =THL= value can be deduced.
By calculating the =THL= value at which half of all test pulses are recorded, we can compute the number of electrons corresponding to that =THL= DAC value, as we know the amplitude of the test pulse and thus the number of injected electrons. Fig. [[fig:daq:s_curves_example]] shows an S-Curve scan of chip 0 of the Septemboard using the calibration from July 2018. The peak in the middle is the noise peak of the detector at the shutter length used for the S-Curve scan. The symmetrical shape is due to specific implementation details of how the pixels function; the upper side is the one of interest. The center point (halfway between both plateaus) corresponds to the effective threshold of the detector at that injected charge. The falling edge of each curve can be fit by the complement of the cumulative distribution function of a normal distribution, eq. [[eq:daq:s_curve_fit_function]]. #+NAME: eq:daq:s_curve_fit_function \begin{equation} f(μ, σ, N) = \frac{N}{2} · \text{erfc}((x - μ) / (σ · \sqrt{2})) \end{equation} where the parameter $N$ is simply a scaling factor and $μ$ represents the x value of the half-amplitude point. $σ$ is the spread of the drop and $\text{erfc}$ is the complementary error function $\text{erfc}(x) = 1 - \text{erf}(x)$. The error function is of course just the integral over a normal distribution up to the evaluation point $x$: \[ \text{erf}(x) = \frac{2}{\sqrt{π}} ∫_0^{x} e^{-t²} \dd t. \] Given that the number of injected electrons is known for each test pulse amplitude (see sec. [[#sec:operation_calibration:tot_calibration]]), we can compute the relationship of the number of electrons per =THL= value step. This is called the =THL= calibration and an example corresponding to fig. [[fig:daq:s_curves_example]] is shown in fig. [[fig:daq:thl_calibration_example]], where the =THL= values used correspond to the $μ$ parameters of eq. [[eq:daq:s_curve_fit_function]].
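The half-amplitude logic of eq. [[eq:daq:s_curve_fit_function]] can be illustrated with a short Python sketch. The numbers are made up for illustration; the actual fit is performed in TimepixAnalysis:

```python
import math

def s_curve(thl, mu, sigma, n_pulses):
    """Falling edge of the S-curve: N/2 * erfc((x - mu)/(sigma * sqrt(2)))."""
    return 0.5 * n_pulses * math.erfc((thl - mu) / (sigma * math.sqrt(2.0)))

# Made-up scan: 1000 injected test pulses, effective threshold at THL = 450
counts = {thl: s_curve(thl, 450.0, 4.5, 1000.0) for thl in range(430, 471)}

# The THL value at which half of the test pulses are recorded recovers
# the fit parameter mu, i.e. the effective threshold
half_point = min(counts, key=lambda t: abs(counts[t] - 500.0))  # -> 450
```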
The resulting fit is useful, as it allows one to easily convert a given =THL= DAC value into an effective number of electrons, which then corresponds to the effective threshold in electrons required to activate a pixel on average. When looking at the distribution of charges in a dataset, that cutoff in electrons is of interest (see sec. [[#sec:appendix:calibration:polya_distribution_threshold]]). Tab. [[tab:daq:scurves_fit_params]] shows the fit parameters for the fits of fig. [[fig:daq:s_curves_example]]. See the extended thesis for all fit parameters and plots. #+CAPTION: S-Curve scan of chip 0 of the Septemboard using the calibration from #+CAPTION: July 2018. The scan works by injecting \num{1000} test pulses at different #+CAPTION: amplitudes onto the pixels. Each line represents one such measurement #+CAPTION: and each point is the mean number of counted hits for all pixels with injected #+CAPTION: test pulses. #+CAPTION: The peak in the middle is the noise peak of the #+CAPTION: detector. The symmetrical shape is due to specific implementation #+CAPTION: details of how the pixels function. The upper side is the one of #+CAPTION: interest. The falling edge of each curve can be fit by the complement of the #+CAPTION: cumulative distribution function of a normal distribution. The center point #+CAPTION: (halfway between both plateaus) corresponds to the effective threshold #+CAPTION: of the detector at that injected charge. The fit parameters are found in tab. [[tab:daq:scurves_fit_params]]. #+NAME: fig:daq:s_curves_example [[~/phd/Figs/detector/calibration/s_curves_0_Run3_lX_425.0_lY_3050.0.pdf]] #+CAPTION: The =THL= calibration can be used to gauge the 'threshold gain' the
#+CAPTION: The =THL= DAC in the Timepix normally adjusts the threshold by about #+CAPTION: \num{25} electrons per DAC value cite:timepix_manual, which is reproduced #+CAPTION: well here (parameter $1/m$). The root of the linear fit (written as #+CAPTION: $f⁻¹(y=0)$ in the annotation) corresponds to the position of the noise peak #+CAPTION: in fig. [[fig:daq:s_curves_example]]. #+NAME: fig:daq:thl_calibration_example [[~/phd/Figs/detector/calibration/thl_calibration_chip_0_Run3_lX_425.0_lY_3050.0.pdf]] #+CAPTION: Fit parameters of all S-Curves for Run-3 of chip 0 as shown in fig. [[fig:daq:s_curves_example]]. #+NAME: tab:daq:scurves_fit_params #+ATTR_LATEX: :booktabs t | V [U] | N | ΔN | μ | Δμ | σ | Δσ | |-------+------+--------+-------+----------+-------+----------| | 20 | 1001 | 0.37 | 421.3 | 0.004754 | 4.221 | 0.006294 | | 25 | 1004 | 0.3732 | 430.8 | 0.004994 | 4.545 | 0.006582 | | 30 | 1000 | 0.3867 | 440.1 | 0.004989 | 4.429 | 0.006561 | | 35 | 1005 | 0.3935 | 449.3 | 0.005154 | 4.627 | 0.006748 | | 40 | 1002 | 0.3842 | 459.4 | 0.005048 | 4.53 | 0.006635 | | 50 | 1002 | 0.3928 | 478.2 | 0.005145 | 4.603 | 0.006741 | | 60 | 1001 | 0.3806 | 497.6 | 0.00505 | 4.553 | 0.006643 | | 100 | 1004 | 0.3936 | 569.8 | 0.005365 | 4.895 | 0.007004 | **** TODOs for this section [5/18] :noexport: - [ ] *FIGURE OUT WHY 2017 CALIBRATION HAS ABOUT 50e/THL COMPARED TO 25e/THL!* - [X] *REPLACE BELOW BY =plotCalibration= CALL?* That already handles it, even though currently via plotly. - [ ] *REWRITE THE CAPTION OF THL PLOT* - [ ] *TURN INTO IF ELSE FUNCTION FOR z* (defined in seqmath) #+begin_src nim let z = (x - x0) / (sigma * sqrt(2.0)) if z > 1.0: result = 0.5 * erfc(z) else: result = 0.5 * (1.0 - erf(z)) #+end_src - [ ] *MAIN PURPOSE IS NOT!! 
THL VALUE DETERMINATION* - [ ] *LIKELY MOVE THIS EITHER TO APPENDIX OR EVEN EXTENDED THESIS* - [X] *ADD FIT FUNCTION FOR SCURVE* - [X] *REFER TO THE CODE THAT DOES THE FIT* https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/calibration/calib_fitting.nim#L111 - [X] *FIX UP TYPESETTING OF FIT PARAMETERS IN THL PLOT* - [X] *INSERT AND REFERENCE TABLE OF ALL FIT PARAMETERS FOR THE SCURVES* - [ ] *CHECK WHAT 25e/THL IS CALLED IN TABLE OF TIMEPIX MANUAL* - [X] *FIX PARAMETERS OF THL CALIBRATION. THEY ARE NONSENSICAL AND NOT ALIGNED* - [X] *ADD THAT AT EVEN LOWER VALUE WE JUST SEE NOISE AND THEN DUE TO IMPLEMENTATION DETAILS THE WHOLE THING INVERTS* - [X] *ADD ITS OWN SHORT SECTION ABOUT THRESHOLD SCANNING WITH BELOW REWRITTEN TEXT* - [ ] *ADD NUMBER OF EXPECTED ELECTRONS AS THRESHOLD AS MENTIONED IN TIMEPIX ORIGINAL PAPER* - [X] *PLOT OF SCURVE SCANS* - [X] *APPENDIX ALL SCURVE SCANS FOR CAST DATA TAKING* -> If anything only for the extended thesis. We don't use them for literally anything. - [ ] *UNDERSTAND HOW WE CAN USE SCURVE TO DEDUCE THRESHOLD AND EXPLAIN IT* - [ ] *S-CURVE CAN BE USED TO DETERMINE #electron / THL DAC VALUE* (first gets THL value against pulse height of course) - [X] *MAKE THAT FIT AND SHOW EXAMPLE* -> That's the fit in [[fig:daq:thl_calibration_example]] no? 
- [ ] *ADD EQUATION FOR MINIMUM DETECTABLE CHARGE AND GIVE A NUMBER* **** Fit function for S-Curve :extended: The implementation of the S-Curve fit is found in [[file:~/CastData/ExternCode/TimepixAnalysis/Analysis/ingrid/calibration/fit_functions.nim]], #+begin_src nim func sCurveFunc*(p: seq[float], x: float): float = ## we fit the complement of a cumulative distribution function ## of the normal distribution # parameter p[2] == sigma # parameter p[1] == x0 # parameter p[0] == scale factor result = normalCdfC(x, p[2], p[1]) * p[0] #+end_src which is typically called from [[file:~/CastData/ExternCode/TimepixAnalysis/Analysis/ingrid/calibration/calib_fitting.nim]], and for these plots used in file:~/CastData/ExternCode/TimepixAnalysis/Plotting/plotCalibration/plotCalibration.nim. **** Generate plots and compute electrons per THL DAC value [0/2] :extended: We generate both the S-Curve and THL calibration plots using ~plotCalibration~. Let's simply generate all Run2 and Run3 plots for S-Curves and THL values. #+begin_src zsh :results append file :file scurve_fits.org :output-dir resources :exports code for run in Run2 Run3; do for chip in {0..6}; do plotCalibration --scurve --chip $chip --runPeriod $run --useTeX --legendX 425.0 --legendY 3050 --outpath ~/phd/Figs/detector/calibration --quiet &; done done #+end_src #+RESULTS: [[file:resources/scurve_fits.org]] and for the THL calibration: #+begin_src zsh :results none for run in Run2 Run3; do for chip in {0..6}; do plotCalibration --scurve --chip $chip --runPeriod $run --useTeX --legendX 422.0 --legendY 3550 --outpath ~/phd/Figs/detector/calibration --quiet &; done done #+end_src the only difference being the legend placement (as the coordinates are applied to both plots). 
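For reference, the conversion from test-pulse amplitude to injected electrons used by the plotting code is simply $Q = C · V / e$ with the $\SI{8}{fF}$ test capacitor of the Timepix (cf. the ~charge~ proc in the Nim code). A quick Python check with an example amplitude:

```python
# Number of electrons injected by a test pulse of amplitude V at the
# 8 fF test capacitor of the Timepix: Q = C * V / e
C = 8e-15            # test capacitance in farad
e = 1.602176634e-19  # elementary charge in coulomb

def electrons(voltage_mV):
    return C * (voltage_mV * 1e-3) / e

n = electrons(100.0)  # a 100 mV test pulse injects roughly 5000 electrons
```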
The following is essentially the main code required to plot the SCurves, determine the middle point and fit THL calib (not the SCurve fit though): #+begin_src nim :tangle /home/basti/phd/code/s_curve_electrons_per_thl.nim import std / [strutils, strscans, os, strformat] import ggplotnim import unchained proc charge(voltage: mV): UnitLess = ## Returns the number of electrons given a voltage pulse of amplitude `voltage` ## at the 8.fF capacitor of the Timepix1 result = 8.fF * voltage / e #const path = "/home/basti/septemH_calibration/SCurve/chip_3/voltage_*.txt" #const path = "/home/basti/septemH_calibration/CalibJul2018/SCurves/chip_1/voltage_*.txt" const path = "/home/basti/septemH_calibration/SeptemH_FullCalib_InGridDatabase/chip3/SCurve/voltage_*.txt" #const path = "/home/basti/septemH_calibration/SeptemH_FullCalib_2018_2/chip0/SCurve/voltage_*.txt" var charges = newSeq[float]() var thls = newSeq[int]() for file in walkFiles(path): let (success, _, voltage) = scanTuple(file.extractFilename, "$*_$i.txt$.") if voltage == 0: continue # skip 0 charges.add charge(voltage.mV) ## we'll do the simplest approach to get the correct THL value: ## - strip everything before noise peak (to have single THL value) ## - compute var df = readCsv(file, sep = '\t', header = "#", colNames = @["THL", "counts"]) let thlAtMax = df.filter(f{int: `counts` == `counts`.max})["THL", int][0] # must be single element const TestPulses = 1000 df = df.filter(f{`THL` > thlAtMax}) .mutate(f{int: "DiffHalf" ~ abs(`counts` - TestPulses div 2)}) .filter(f{int: `DiffHalf` == min(col("DiffHalf"))}) # f{int: `DiffHalf` < 200}) thls.add df["THL", int][0] # must be single element import polynumeric let fit = polyFit(thls.toTensor.asType(float), charges.toTensor, polyOrder = 1) echo fit proc linear(x, m, b: float): float = m * x + b let thlFit = linspace(thls.min, thls.max, 10) let chargesFit = linspace(charges.min, charges.max, 10) var dfFit = toDf({ "thls" : thlFit, "charges" : thlFit.map_inline(linear(x, 
fit[1], fit[0])) }) echo dfFit let df = toDf(thls, charges) ggplot(df, aes("thls", "charges")) + geom_point() + geom_line(data = dfFit, aes = aes("thls", "charges"), color = parseHex("FF00FF")) + xlab("THL DAC") + ylab("Injected charge [e⁻]") + ggtitle(&"Fit parameters: m = {fit[1]:.2f} e⁻/THL, b = {fit[0]:.2f} e⁻") + ggsave("/home/basti/phd/Figs/charge_per_thl.pdf") #+end_src #+RESULTS: All plots are found under sec. [[#sec:appendix:septemboard_calibrations]]. **** Compute minimally detectable charge :extended: Ref: - cite:LLOPART2007485_timepix Timepix paper - cite:LlopartCudie_1056683 Llopart PhD thesis The following quotes explain how to compute the effective threshold: page 5/10: #+begin_quote The electronic noise and effective threshold can be measured using the s-curve method [8] when the pixel is set in counting mode. #+end_quote page 5/10: #+begin_quote The effective threshold is at 50% of this s-curve. The charge difference between the 97.75% and 2.25% of the s-curve is four times the RMS noise of the front end assuming a gaussian distributed noise. #+end_quote #+begin_quote The measured electronic noise is 99.4 ± 3.8 e⁻ rms for hole collection and 104.8 ± 6 e⁻ rms for electron collection. The measured DAC step gain is 24.7 ± 0.7 e⁻/step for hole collection and 25.4 ± 1.2 e⁻/step for electron collection. #+end_quote page 7/10: #+begin_quote The threshold variation before equalization is ~240 e⁻ rms and after equalization the achieved noise free threshold variation is ~35 e⁻ rms for both polarities. #+end_quote page 7/10: #+begin_quote The minimum detectable charge can be calculated by quadratically adding the measured electronic noise and the threshold variation because both measurements are uncorrelated. #+end_quote #+begin_quote Before equalization the minimum detectable charge for the full matrix is ~1600 e⁻ and after equalization is ~650 e⁻ for both polarities. #+end_quote This means: - [ ] compute the 97.75 ⇔ 2.25% range of the S-curve.
Yields 4 times the RMS noise. Its width in THL can be converted to a number of electrons using the THL calibration. -> electronic noise $N_e$
- [ ] compute the width of the optimized threshold variation based on its histogram. Fit a gaussian (?) to the optimized threshold variation. The σ of that gaussian is the width in THL. Convert THL to electrons. -> threshold variation $N_t$
- [ ] *CHECK IF THIS IS CORRECT*: If I'm not mistaken, the noise peak we see in the S-curve scan is essentially the same as the optimized distribution from the threshold equalization.

Then with those two parameters we compute the effective detectable charge as:
\[ N_d = \sqrt{N_e^2 + N_t^2} \]
which for the numbers listed above yields
\[ N_d = \sqrt{105^2 + 35^2} \approx 110, \]
which is not close to the expected 650 e⁻! Checking in the PhD thesis of Llopart, page 115, equation 4.5 is:
\[ \text{MinDetect}Q = 6 \cdot \sqrt{\text{ENC}^2 + σ_{\text{dist}}^2} \]
which then works out nicely ($110 \cdot 6 = 660$ is close enough)! So in theory we can compute this for all our chips and all calibrations.

- [ ] *CALCULATE FOR ALL CHIPS AND PUT INTO APPENDIX*

*** Pólya distribution for threshold detection
:PROPERTIES:
:CUSTOM_ID: sec:appendix:calibration:polya_distribution_threshold
:END:

In a gaseous detector the gas amplification (see sec. [[#sec:theory:gas_gain_polya]]) makes it easy to exceed the minimum detectable charge of $\mathcal{O}(\SIrange{500}{1000}{e^-})$. However, for multiple reasons the =THL= threshold typically in use is quite a bit higher than this 'theoretical limit'. One can either compute the effective threshold in use based on the =THL= calibration as explained in sec. [[#sec:operation_calibration:scurve_scan]], or use an experimental approach by utilizing the Pólya distribution as introduced in sec. [[#sec:theory:gas_gain_polya]] and [[#sec:daq:polya_distribution]]. By taking data over a certain period of time and computing a histogram of the charge values recorded by each pixel, a Pólya distribution naturally arises.
The Pólya distribution can be used to determine the actual activation threshold by simply checking for the lowest charge that still sees significant statistics. An example of such a Pólya distribution with a very obvious cutoff at low charges is shown in fig. [[fig:daq:polya_example_chip0]]. It shows chip 0, using the same calibration from July 2018 as the figures in the previous section [[#sec:operation_calibration:scurve_scan]]. The data is a $\SI{90}{min}$ interval of background data at CAST. The pink line represents the fit of the Pólya distribution to the data. The dashed part of the line was not used for the fit and is only an extension using the final fit parameters. The cutoff at the lower end due to the chip's threshold is clearly visible. The fit determines a gas gain of about $\num{2700}$, compared to the mean of the data, which yields about $\num{2430}$. Based on the data a threshold value of -- very roughly -- $\num{1000}$ electrons can be estimated. Using the =THL= calibration of the chip as shown in fig. [[fig:daq:thl_calibration_example]] yields a value of
\[ Q(\text{THL} = 419) = 26.8 · 419 - 10300 = 929.2, \]
where we used the fit parameters as printed on the plot ($1/m$ and $f^{-1}(y=0)$) and the =THL= DAC value of $\num{419}$ as used during the data taking for this chip. Indeed, the real threshold is in the same range, but clearly a bit higher than the theoretical limit for this chip. This matches our expectation.

#+CAPTION: An example of a Pólya distribution of chip 0 using the calibration
#+CAPTION: of July 2018 based on \SI{90}{min} of background data.
#+CAPTION: The lower cutoff is easily visible. The pink line represents the
#+CAPTION: fit of the Pólya distribution to the data. In the dashed region the
#+CAPTION: line was extended using the final fit parameters.
#+NAME: fig:daq:polya_example_chip0 [[~/phd/Figs/gasGain/gas_gain_run_306_chip_0_2_90_min_1545296149.pdf]] **** TODOs for this section :noexport: - [ ] *REWRITE BELOW, OUTDATED REFERENCES* - [X] *DON'T NEED TO MENTION THL AND ACTIVATION THRESHOLD THL HERE* - [X] *THIS NEEDS A GOOD REWRITE STILL!* - [X] The additional threshold discussion can be moved to the appendix I think **** Generate Polya plot for chip 0, run period 3 [0/1] :extended: The current placeholder polya distribution (although it's the right chip and calibration) is: #+begin_src sh basti at void in /mnt/1TB/CAST/2018_2/out/DataRuns2018_Raw_2020-04-28_16-13-28 λ cp gas_gain_run_306_chip_0_5_30_min_1545294386.pdf \ ~/phd/Figs/gas_gain_run_306_chip_0_placeholder_example.pdf #+end_src - [X] *REPLACE BY BETTER PLOT. USING 90MIN AMONG OTHER THINGS* We simply use the existing reconstructed data and create the gas gain plots again: #+begin_src sh WRITE_PLOT_CSV=true reconstruction -i ~/CastData/data/DataRuns2018_Reco.h5 \ --only_gas_gain \ --run 306 \ --plotOutPath ~/phd/Figs/gasGain/ \ --useTeX=true \ --overwrite #+end_src ** Septemboard calibration :PROPERTIES: :CUSTOM_ID: sec:appendix:septemboard_calibrations :END: In the extended thesis this section contains all ~THS~ optimization, S-Curves and ~ToT~ calibrations for both run periods and all chips of the Septemboard. As this leads to /a lot/ of pages of figures, here we only show a histogram of all optimized THL distributions for each run period in sec. [[#sec:appendix:operation_calibration:all_thl_calib]]. *** TODOs for this section [/] :noexport: - [ ] Show all calibrations for each of the runs. That means, for all chips: - [X] THS optimization & equalization bits (extended) - [X] equalized matrix of all pixels (extended) -> This is really not very insightful. 
- [ ] ~ToT~ calibration
- [X] S-curves
- [ ] fit to 50% point of S-curves to get #electrons / THL DAC value

*** Generate all ToT calibration plots :extended:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:calibration:gen_tot_calibration
:END:

See sec. [[#sec:operation_calibration:gen_tot_calib_plot]] for the section producing the single plot used in the thesis.
#+begin_src zsh :results append file :file tot_calibrations.org :output-dir resources
# To generate fig:septem:tot_calibration_example
for run in 2 3; do
    for chip in {0..6}; do
        plotCalibration --tot --chip $chip --runPeriod Run$run \
            --useTeX \
            --file ~/CastData/ExternCode/TimepixAnalysis/resources/ChipCalibrations/Run$run/chip$chip/TOTCalib$chip.txt \
            --outpath ~/phd/Figs/detector/calibration \
            --quiet &
    done
done
#+end_src

#+RESULTS:
[[file:resources/tot_calibrations.org]]

*** All ToT calibrations :extended:

All plots without captions; that would be too much repetition, and the run period & chip number are in the title anyway. Looking at these plots, one of the things I've long thought is that the fit function is way too overspecified, given that usually one parameter (most of the time $t$) is just zero. The fit for Run-2, chip 0 is the worst offender and produces absurdly large parameter uncertainties. Not quite sure why, but I guess it's just another symptom of the function having too many parameters.
:) - [[~/phd/Figs/detector/calibration/tot_calib_Run2_chip_0.pdf]] - [[~/phd/Figs/detector/calibration/tot_calib_Run2_chip_1.pdf]] - [[~/phd/Figs/detector/calibration/tot_calib_Run2_chip_2.pdf]] - [[~/phd/Figs/detector/calibration/tot_calib_Run2_chip_3.pdf]] - [[~/phd/Figs/detector/calibration/tot_calib_Run2_chip_4.pdf]] - [[~/phd/Figs/detector/calibration/tot_calib_Run2_chip_5.pdf]] - [[~/phd/Figs/detector/calibration/tot_calib_Run2_chip_6.pdf]] - [[~/phd/Figs/detector/calibration/tot_calib_Run3_chip_0.pdf]] - [[~/phd/Figs/detector/calibration/tot_calib_Run3_chip_1.pdf]] - [[~/phd/Figs/detector/calibration/tot_calib_Run3_chip_2.pdf]] - [[~/phd/Figs/detector/calibration/tot_calib_Run3_chip_3.pdf]] - [[~/phd/Figs/detector/calibration/tot_calib_Run3_chip_4.pdf]] - [[~/phd/Figs/detector/calibration/tot_calib_Run3_chip_5.pdf]] - [[~/phd/Figs/detector/calibration/tot_calib_Run3_chip_6.pdf]] **** Investigate fit alone of Run-2, chip 0 #+begin_src sh :results drawer plotCalibration --tot --chip 0 --runPeriod Run2 --useTeX --file ~/CastData/ExternCode/TimepixAnalysis/resources/ChipCalibrations/Run2/chip0/TOTCalib0.txt \ --outpath /tmp/ #+end_src #+RESULTS: :results: testlinfit status = 1 χ² = 0.00204322 (10 DOF) χ²/dof = 0.000204322 NPAR = 4 NFREE = 4 NPEGGED = 0 NITER = 14 NFEV = 68 P[0] = 0.371063 +/- 0.87745 P[1] = 46.5797 +/- 359.443 P[2] = 1605.21 +/- 34712.9 P[3] = -11.3109 +/- 512.982 9 from ticks: 10 from scale: (low: 0.0, high: 450.0) and (low: 0.0, high: 450.0) [INFO] TeXDaemon ready for input. 5 from ticks: 10 from scale: (low: 0.0, high: 250.0) and (low: 0.0, high: 250.0) shellCmd: command -v lualatex shellCmd: lualatex -output-directory /tmp /tmp//tot_calib_Run2_chip_0.tex Generated: /tmp//tot_calib_Run2_chip_0.pdf :end: See the crazy errors on parameters 1, 2 and 3? *** All plots of THS optimization and equalization :extended: Again, all plots without caption or anything. That's why they have titles, you know. 
- [[~/phd/Figs/detector/calibration/heatmap_threshold_equalization_Run2_chip_0.pdf]] - [[~/phd/Figs/detector/calibration/heatmap_threshold_equalization_Run2_chip_1.pdf]] - [[~/phd/Figs/detector/calibration/heatmap_threshold_equalization_Run2_chip_2.pdf]] - [[~/phd/Figs/detector/calibration/heatmap_threshold_equalization_Run2_chip_3.pdf]] - [[~/phd/Figs/detector/calibration/heatmap_threshold_equalization_Run2_chip_4.pdf]] - [[~/phd/Figs/detector/calibration/heatmap_threshold_equalization_Run2_chip_5.pdf]] - [[~/phd/Figs/detector/calibration/heatmap_threshold_equalization_Run2_chip_6.pdf]] - [[~/phd/Figs/detector/calibration/heatmap_threshold_equalization_Run3_chip_0.pdf]] - [[~/phd/Figs/detector/calibration/heatmap_threshold_equalization_Run3_chip_1.pdf]] - [[~/phd/Figs/detector/calibration/heatmap_threshold_equalization_Run3_chip_2.pdf]] - [[~/phd/Figs/detector/calibration/heatmap_threshold_equalization_Run3_chip_3.pdf]] - [[~/phd/Figs/detector/calibration/heatmap_threshold_equalization_Run3_chip_4.pdf]] - [[~/phd/Figs/detector/calibration/heatmap_threshold_equalization_Run3_chip_5.pdf]] - [[~/phd/Figs/detector/calibration/heatmap_threshold_equalization_Run3_chip_6.pdf]] *** All S-Curve plots :extended: Excuse the mismatched order of the tables... I don't really expect anyone to be particularly interested in this, hence I didn't want to spend time sorting them by hand. 
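For reference, the fit parameters listed in the tables below ($N$, $μ$, $σ$) presumably correspond to an error-function model of the S-curve, with $μ$ the 50% point in THL DAC units and $σ$ the noise width. A minimal sketch of such a model (an illustration only, not necessarily the exact function used for these fits):

#+begin_src nim
import std / math

## Sketch of an S-curve model for `n` injected test pulses, 50 % point `mu`
## (in THL DAC units) and noise width `sigma`. Illustrative only; the exact
## model used for the fits below may differ (e.g. in the sign convention).
proc sCurve(thl, n, mu, sigma: float): float =
  n / 2.0 * (1.0 + erf((thl - mu) / (sqrt(2.0) * sigma)))

echo sCurve(437.3, 1000.0, 437.3, 4.256) # at THL = mu: exactly n/2 = 500 counts
#+end_src

The numbers in the call correspond to the first row of the first table below (chip 4, 20 mV, Run3).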
All figures: - [[~/phd/Figs/detector/calibration/s_curves_0_Run2_lX_425.0_lY_3050.0.pdf]] - [[~/phd/Figs/detector/calibration/s_curves_0_Run3_lX_425.0_lY_3050.0.pdf]] - [[~/phd/Figs/detector/calibration/s_curves_1_Run2_lX_425.0_lY_3050.0.pdf]] - [[~/phd/Figs/detector/calibration/s_curves_1_Run3_lX_425.0_lY_3050.0.pdf]] - [[~/phd/Figs/detector/calibration/s_curves_2_Run2_lX_425.0_lY_3050.0.pdf]] - [[~/phd/Figs/detector/calibration/s_curves_2_Run3_lX_425.0_lY_3050.0.pdf]] - [[~/phd/Figs/detector/calibration/s_curves_3_Run2_lX_425.0_lY_3050.0.pdf]] - [[~/phd/Figs/detector/calibration/s_curves_3_Run3_lX_425.0_lY_3050.0.pdf]] - [[~/phd/Figs/detector/calibration/s_curves_4_Run2_lX_425.0_lY_3050.0.pdf]] - [[~/phd/Figs/detector/calibration/s_curves_4_Run3_lX_425.0_lY_3050.0.pdf]] - [[~/phd/Figs/detector/calibration/s_curves_5_Run2_lX_425.0_lY_3050.0.pdf]] - [[~/phd/Figs/detector/calibration/s_curves_5_Run3_lX_425.0_lY_3050.0.pdf]] - [[~/phd/Figs/detector/calibration/s_curves_6_Run2_lX_425.0_lY_3050.0.pdf]] - [[~/phd/Figs/detector/calibration/s_curves_6_Run3_lX_425.0_lY_3050.0.pdf]] All tables: | chip | voltage | runPeriod | N | ΔN | μ | Δμ | σ | Δσ | |------+---------+-----------+-------+--------+-------+----------+-------+----------| | 4 | 20 | Run3 | 999.7 | 0.4 | 437.3 | 0.004919 | 4.256 | 0.006459 | | 4 | 25 | Run3 | 1003 | 0.376 | 446.7 | 0.005023 | 4.561 | 0.006614 | | 4 | 30 | Run3 | 999.5 | 0.3509 | 455.5 | 0.004807 | 4.4 | 0.006384 | | 4 | 35 | Run3 | 999.4 | 0.424 | 465.1 | 0.005229 | 4.494 | 0.0068 | | 4 | 40 | Run3 | 997.6 | 0.372 | 474.8 | 0.004981 | 4.492 | 0.006571 | | 4 | 50 | Run3 | 997.6 | 0.4397 | 492.6 | 0.005304 | 4.476 | 0.006868 | | 4 | 60 | Run3 | 1004 | 0.3632 | 511.5 | 0.005063 | 4.706 | 0.006681 | | 4 | 100 | Run3 | 998.8 | 0.3903 | 582.7 | 0.005315 | 4.818 | 0.00695 | | chip | voltage | runPeriod | N | ΔN | μ | Δμ | σ | Δσ | |------+---------+-----------+-------+--------+-------+----------+-------+----------| | 3 | 20 | Run2 | 1020 | 1.225 | 
422.9 | 0.00906 | 4.107 | 0.009371 | | 3 | 25 | Run2 | 1011 | 0.5655 | 428.1 | 0.005788 | 4.353 | 0.007229 | | 3 | 30 | Run2 | 1003 | 0.3785 | 433.1 | 0.004842 | 4.303 | 0.006391 | | 3 | 35 | Run2 | 1001 | 0.3727 | 438.4 | 0.004825 | 4.305 | 0.006377 | | 3 | 40 | Run2 | 1001 | 0.3724 | 444.3 | 0.004802 | 4.272 | 0.00635 | | 3 | 50 | Run2 | 1001 | 0.3643 | 454.1 | 0.004931 | 4.496 | 0.006519 | | 3 | 60 | Run2 | 999.5 | 0.3824 | 464.3 | 0.005009 | 4.477 | 0.006591 | | 3 | 100 | Run2 | 1005 | 0.4337 | 502.6 | 0.005988 | 5.397 | 0.007674 | | chip | voltage | runPeriod | N | ΔN | μ | Δμ | σ | Δσ | |------+---------+-----------+-------+--------+-------+----------+-------+----------| | 4 | 20 | Run2 | 1040 | 1.203 | 427.4 | 0.009416 | 4.453 | 0.009669 | | 4 | 25 | Run2 | 1006 | 0.6367 | 433.2 | 0.006145 | 4.298 | 0.007513 | | 4 | 30 | Run2 | 1005 | 0.3829 | 438.1 | 0.004901 | 4.371 | 0.006456 | | 4 | 35 | Run2 | 1001 | 0.3561 | 443.3 | 0.004829 | 4.407 | 0.006405 | | 4 | 40 | Run2 | 1001 | 0.4318 | 449.2 | 0.005384 | 4.642 | 0.006972 | | 4 | 50 | Run2 | 999.3 | 0.362 | 459.1 | 0.004891 | 4.445 | 0.006473 | | 4 | 60 | Run2 | 1010 | 0.4457 | 469.4 | 0.005662 | 4.944 | 0.00727 | | 4 | 100 | Run2 | 1006 | 0.4092 | 507.6 | 0.005949 | 5.552 | 0.00768 | | chip | voltage | runPeriod | N | ΔN | μ | Δμ | σ | Δσ | |------+---------+-----------+-------+--------+-------+----------+-------+----------| | 1 | 20 | Run3 | 1007 | 0.5532 | 384.6 | 0.00592 | 4.532 | 0.007397 | | 1 | 25 | Run3 | 1023 | 0.6033 | 393.6 | 0.007034 | 5.418 | 0.008488 | | 1 | 30 | Run3 | 1000 | 0.467 | 403.7 | 0.005764 | 4.853 | 0.007362 | | 1 | 35 | Run3 | 983.4 | 0.3714 | 413.1 | 0.004776 | 4.133 | 0.006327 | | 1 | 40 | Run3 | 1013 | 0.3916 | 422 | 0.005359 | 4.966 | 0.006994 | | 1 | 50 | Run3 | 1022 | 0.6807 | 440.9 | 0.007737 | 5.551 | 0.009052 | | 1 | 60 | Run3 | 982.4 | 0.3855 | 461.3 | 0.005151 | 4.526 | 0.006767 | | 1 | 100 | Run3 | 999.6 | 0.3774 | 534.6 | 0.005401 | 5.032 | 0.007076 | | chip | voltage | 
runPeriod | N | ΔN | μ | Δμ | σ | Δσ | |------+---------+-----------+-------+--------+-------+----------+-------+----------| | 5 | 20 | Run3 | 980.6 | 0.3185 | 351.3 | 0.004076 | 3.441 | 0.005511 | | 5 | 25 | Run3 | 1012 | 0.4223 | 359.8 | 0.005834 | 5.346 | 0.007509 | | 5 | 30 | Run3 | 1003 | 0.3553 | 369.1 | 0.00509 | 4.786 | 0.006727 | | 5 | 35 | Run3 | 999.5 | 0.5331 | 378.9 | 0.006367 | 5.063 | 0.007928 | | 5 | 40 | Run3 | 991.5 | 0.4121 | 388.8 | 0.005378 | 4.701 | 0.006998 | | 5 | 50 | Run3 | 993.7 | 0.3681 | 406.8 | 0.004946 | 4.445 | 0.006536 | | 5 | 60 | Run3 | 1009 | 0.3795 | 425.5 | 0.005705 | 5.503 | 0.007451 | | 5 | 100 | Run3 | 1006 | 0.4792 | 498.9 | 0.006008 | 5.086 | 0.007616 | | chip | voltage | runPeriod | N | ΔN | μ | Δμ | σ | Δσ | |------+---------+-----------+------+--------+-------+----------+-------+----------| | 0 | 20 | Run3 | 1001 | 0.37 | 421.3 | 0.004754 | 4.221 | 0.006294 | | 0 | 25 | Run3 | 1004 | 0.3732 | 430.8 | 0.004994 | 4.545 | 0.006582 | | 0 | 30 | Run3 | 1000 | 0.3867 | 440.1 | 0.004989 | 4.429 | 0.006561 | | 0 | 35 | Run3 | 1005 | 0.3935 | 449.3 | 0.005154 | 4.627 | 0.006748 | | 0 | 40 | Run3 | 1002 | 0.3842 | 459.4 | 0.005048 | 4.53 | 0.006635 | | 0 | 50 | Run3 | 1002 | 0.3928 | 478.2 | 0.005145 | 4.603 | 0.006741 | | 0 | 60 | Run3 | 1001 | 0.3806 | 497.6 | 0.00505 | 4.553 | 0.006643 | | 0 | 100 | Run3 | 1004 | 0.3936 | 569.8 | 0.005365 | 4.895 | 0.007004 | | chip | voltage | runPeriod | N | ΔN | μ | Δμ | σ | Δσ | |------+---------+-----------+-------+--------+-------+----------+-------+----------| | 2 | 20 | Run2 | 998.4 | 0.8093 | 351.8 | 0.007102 | 4.242 | 0.008231 | | 2 | 25 | Run2 | 1028 | 0.7173 | 356.9 | 0.006721 | 4.518 | 0.00796 | | 2 | 30 | Run2 | 1006 | 0.396 | 362.3 | 0.005205 | 4.683 | 0.006806 | | 2 | 35 | Run2 | 1003 | 0.3835 | 368.1 | 0.005318 | 4.898 | 0.006962 | | 2 | 40 | Run2 | 1013 | 0.4821 | 373.9 | 0.006488 | 5.664 | 0.008148 | | 2 | 50 | Run2 | 985.4 | 0.405 | 386.9 | 0.005722 | 5.14 | 0.007423 | | 2 
| 60 | Run2 | 1015 | 0.4928 | 397.8 | 0.006599 | 5.721 | 0.008248 | | 2 | 100 | Run2 | 1002 | 0.4138 | 436 | 0.005847 | 5.353 | 0.007547 | | chip | voltage | runPeriod | N | ΔN | μ | Δμ | σ | Δσ | |------+---------+-----------+-------+--------+-------+----------+-------+----------| | 6 | 20 | Run2 | 1008 | 1.057 | 400.5 | 0.00954 | 4.876 | 0.01006 | | 6 | 25 | Run2 | 981.7 | 0.433 | 407.7 | 0.005316 | 4.43 | 0.006901 | | 6 | 30 | Run2 | 990.9 | 0.3359 | 413.2 | 0.004408 | 3.878 | 0.005909 | | 6 | 35 | Run2 | 996.6 | 0.3557 | 417.6 | 0.004588 | 4.055 | 0.006108 | | 6 | 40 | Run2 | 995.3 | 0.3669 | 423.4 | 0.004743 | 4.192 | 0.006287 | | 6 | 50 | Run2 | 995.7 | 0.384 | 433.3 | 0.005026 | 4.464 | 0.006612 | | 6 | 60 | Run2 | 990.4 | 0.3798 | 444.6 | 0.005093 | 4.542 | 0.006701 | | 6 | 100 | Run2 | 998.2 | 0.4102 | 483.5 | 0.00559 | 5.026 | 0.007249 | | chip | voltage | runPeriod | N | ΔN | μ | Δμ | σ | Δσ | |------+---------+-----------+------+--------+-------+----------+-------+----------| | 2 | 20 | Run3 | 1000 | 0.3853 | 366.4 | 0.005084 | 4.56 | 0.006677 | | 2 | 25 | Run3 | 1001 | 0.3791 | 375.9 | 0.005146 | 4.689 | 0.006762 | | 2 | 30 | Run3 | 1003 | 0.4448 | 384.9 | 0.005457 | 4.657 | 0.007037 | | 2 | 35 | Run3 | 1001 | 0.3878 | 394.5 | 0.005168 | 4.66 | 0.006776 | | 2 | 40 | Run3 | 1001 | 0.4325 | 405 | 0.005309 | 4.55 | 0.006881 | | 2 | 50 | Run3 | 1002 | 0.3695 | 423.3 | 0.005096 | 4.69 | 0.006713 | | 2 | 60 | Run3 | 1001 | 0.3829 | 443.1 | 0.005312 | 4.886 | 0.006957 | | 2 | 100 | Run3 | 1003 | 0.3783 | 516.5 | 0.005342 | 4.972 | 0.007 | | chip | voltage | runPeriod | N | ΔN | μ | Δμ | σ | Δσ | |------+---------+-----------+-------+--------+-------+----------+-------+----------| | 3 | 20 | Run3 | 998.8 | 0.3833 | 436 | 0.004918 | 4.348 | 0.006479 | | 3 | 25 | Run3 | 1002 | 0.377 | 445.6 | 0.005006 | 4.521 | 0.006593 | | 3 | 30 | Run3 | 1001 | 0.3666 | 454.1 | 0.00497 | 4.537 | 0.006564 | | 3 | 35 | Run3 | 1005 | 0.3822 | 463.6 | 0.005077 | 4.604 | 0.00667 | 
| 3 | 40 | Run3 | 1001 | 0.3767 | 473.6 | 0.004984 | 4.488 | 0.006567 | | 3 | 50 | Run3 | 1002 | 0.3772 | 491.7 | 0.005049 | 4.579 | 0.006646 | | 3 | 60 | Run3 | 999.8 | 0.3738 | 511.6 | 0.00493 | 4.427 | 0.006506 | | 3 | 100 | Run3 | 954.5 | 0.3897 | 583.8 | 0.005565 | 4.828 | 0.007279 | | chip | voltage | runPeriod | N | ΔN | μ | Δμ | σ | Δσ | |------+---------+-----------+-------+--------+-------+----------+-------+----------| | 0 | 20 | Run2 | 989.8 | 0.8928 | 408.9 | 0.007268 | 3.991 | 0.008277 | | 0 | 25 | Run2 | 1006 | 0.4597 | 414.2 | 0.004923 | 3.974 | 0.006386 | | 0 | 30 | Run2 | 1004 | 0.3651 | 419.2 | 0.004602 | 4.067 | 0.006111 | | 0 | 35 | Run2 | 999.3 | 0.3604 | 424.4 | 0.004597 | 4.058 | 0.006113 | | 0 | 40 | Run2 | 1003 | 0.3845 | 429.2 | 0.004994 | 4.465 | 0.006569 | | 0 | 50 | Run2 | 1002 | 0.3689 | 439 | 0.00499 | 4.554 | 0.006585 | | 0 | 60 | Run2 | 1004 | 0.4107 | 449.1 | 0.00541 | 4.834 | 0.007032 | | 0 | 100 | Run2 | 1013 | 0.4559 | 486.9 | 0.006465 | 5.853 | 0.008183 | | chip | voltage | runPeriod | N | ΔN | μ | Δμ | σ | Δσ | |------+---------+-----------+-------+--------+-------+----------+-------+----------| | 5 | 20 | Run2 | 1017 | 0.9365 | 336 | 0.007614 | 4.191 | 0.008502 | | 5 | 25 | Run2 | 1047 | 0.7914 | 341.5 | 0.00795 | 5.237 | 0.009016 | | 5 | 30 | Run2 | 988 | 0.4721 | 347.8 | 0.005962 | 4.962 | 0.007589 | | 5 | 35 | Run2 | 993.4 | 0.3367 | 352.5 | 0.004536 | 4.066 | 0.006067 | | 5 | 40 | Run2 | 1002 | 0.3981 | 357.8 | 0.005052 | 4.453 | 0.00662 | | 5 | 50 | Run2 | 999 | 0.3784 | 368.3 | 0.005305 | 4.891 | 0.006957 | | 5 | 60 | Run2 | 982.6 | 0.4703 | 381.5 | 0.006751 | 5.831 | 0.0085 | | 5 | 100 | Run2 | 1009 | 0.407 | 420.7 | 0.005546 | 5.066 | 0.007196 | | chip | voltage | runPeriod | N | ΔN | μ | Δμ | σ | Δσ | |------+---------+-----------+-------+--------+-------+----------+-------+----------| | 6 | 20 | Run3 | 993.9 | 0.375 | 413.2 | 0.004854 | 4.282 | 0.006414 | | 6 | 25 | Run3 | 1002 | 0.3873 | 422.3 | 0.005074 | 4.549 | 
0.006662 |
| 6 | 30 | Run3 | 997.5 | 0.3923 | 431.5 | 0.005272 | 4.74 | 0.006896 |
| 6 | 35 | Run3 | 1000 | 0.4067 | 441.1 | 0.005354 | 4.764 | 0.006972 |
| 6 | 40 | Run3 | 1001 | 0.379 | 450.8 | 0.005124 | 4.662 | 0.006734 |
| 6 | 50 | Run3 | 1005 | 0.4097 | 469.4 | 0.005494 | 4.956 | 0.007131 |
| 6 | 60 | Run3 | 1000 | 0.3918 | 489.3 | 0.005174 | 4.637 | 0.006778 |
| 6 | 100 | Run3 | 1009 | 0.4163 | 562.9 | 0.005794 | 5.32 | 0.007475 |

| chip | voltage | runPeriod | N | ΔN | μ | Δμ | σ | Δσ |
|------+---------+-----------+-------+--------+-------+----------+-------+----------|
| 1 | 20 | Run2 | 1069 | 1.481 | 368.1 | 0.01156 | 4.832 | 0.01095 |
| 1 | 25 | Run2 | 1067 | 0.7955 | 374.7 | 0.009635 | 6.7 | 0.01062 |
| 1 | 30 | Run2 | 975.2 | 0.3802 | 382.1 | 0.005377 | 4.803 | 0.007053 |
| 1 | 35 | Run2 | 990.6 | 0.4188 | 387.6 | 0.005426 | 4.709 | 0.007046 |
| 1 | 40 | Run2 | 1023 | 0.6093 | 393.5 | 0.007115 | 5.453 | 0.008561 |
| 1 | 50 | Run2 | 989.8 | 0.4391 | 405.3 | 0.006054 | 5.321 | 0.007752 |
| 1 | 60 | Run2 | 987.7 | 0.3325 | 415.5 | 0.004108 | 3.466 | 0.005535 |
| 1 | 100 | Run2 | 1018 | 0.4941 | 452.2 | 0.006787 | 5.945 | 0.008457 |

*** THL calibration
:PROPERTIES:
:CUSTOM_ID: sec:appendix:operation_calibration:all_thl_calib
:END:

Figure [[fig:daq:thl_optimized_distributinons]] shows the optimized =THL= distributions of all chips after threshold equalization for run 2 (left) and 3 (right).

#+CAPTION: Distributions of the =THL= values of all Septemboard (board H) chips
#+CAPTION: at the noise peak with the calibration for run 2 on the left and
#+CAPTION: run 3 on the right.
#+NAME: fig:daq:thl_optimized_distributinons
[[~/phd/Figs/detector/calibration/septemboard_all_thl_optimized.pdf]]

**** Generate facet plot of THL optimization :extended:

The plot is generated as part of sec. [[#sec:calib:generate_fsr_table]].
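For reference, a =THL= DAC setting translates into an approximate charge threshold via the linear calibration $Q(\text{THL}) = m \cdot \text{THL} + b$. A minimal sketch, using the example chip 0 fit parameters quoted in the Pólya section above as placeholder values (they are not a general calibration):

#+begin_src nim
## Convert a THL DAC value into an approximate charge threshold in electrons,
## given the slope `m` and offset `b` of a chip's THL calibration.
## Default values: the example chip 0 numbers from the Pólya section above.
proc thlToCharge(thl: float, m = 26.8, b = -10300.0): float =
  m * thl + b

echo thlToCharge(419.0) # ≈ 929.2 e⁻, matching the computation in the Pólya section
#+end_src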
*** All THL calibration plots :extended:

*Note*: The text should be placed in relative coordinates based on the data range, but that's currently not done. Sorry about that.

- [[~/phd/Figs/detector/calibration/thl_calibration_chip_0_Run2_lX_425.0_lY_3050.0.pdf]]
- [[~/phd/Figs/detector/calibration/thl_calibration_chip_0_Run3_lX_425.0_lY_3050.0.pdf]]
- [[~/phd/Figs/detector/calibration/thl_calibration_chip_1_Run2_lX_425.0_lY_3050.0.pdf]]
- [[~/phd/Figs/detector/calibration/thl_calibration_chip_1_Run3_lX_425.0_lY_3050.0.pdf]]
- [[~/phd/Figs/detector/calibration/thl_calibration_chip_2_Run2_lX_425.0_lY_3050.0.pdf]]
- [[~/phd/Figs/detector/calibration/thl_calibration_chip_2_Run3_lX_425.0_lY_3050.0.pdf]]
- [[~/phd/Figs/detector/calibration/thl_calibration_chip_3_Run2_lX_425.0_lY_3050.0.pdf]]
- [[~/phd/Figs/detector/calibration/thl_calibration_chip_3_Run3_lX_425.0_lY_3050.0.pdf]]
- [[~/phd/Figs/detector/calibration/thl_calibration_chip_4_Run2_lX_425.0_lY_3050.0.pdf]]
- [[~/phd/Figs/detector/calibration/thl_calibration_chip_4_Run3_lX_425.0_lY_3050.0.pdf]]
- [[~/phd/Figs/detector/calibration/thl_calibration_chip_5_Run2_lX_425.0_lY_3050.0.pdf]]
- [[~/phd/Figs/detector/calibration/thl_calibration_chip_5_Run3_lX_425.0_lY_3050.0.pdf]]
- [[~/phd/Figs/detector/calibration/thl_calibration_chip_6_Run2_lX_425.0_lY_3050.0.pdf]]
- [[~/phd/Figs/detector/calibration/thl_calibration_chip_6_Run3_lX_425.0_lY_3050.0.pdf]]

** Calibration measurements of the veto scintillator paddle :extended:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:scintillator_calibration_notes
:END:

The following is a set of notes taken when calibrating the scintillator paddle in the laboratory of the RD51 group at CERN. It is reproduced here for transparency and completeness.

*** Scintillator paddle calibrations [0/2]

This document contains the data for the calibration of the MM veto scintillator. It is a 'report' created while taking data and thus may contain conflicting information.
Not to be understood as a simple reference protocol.

The scintillator has a Canberra 2007 base, which accepts positive HV. The PMT is a Bicron Corp. 31.49x15.74M2BC408/2-X, where the first two numbers are the scintillator's dimensions in inches.

=For calibration we're using $\SI{1400}{\volt}$, while Juanan mentioned in his mail to use $\SI{1200}{\volt}$ during data taking.=

For calibration we're using an Ortec 9302 amplifier after the PMT with a gain of 20. This is fed into an LRS 621CL discriminator. The PMT and base are used at a HV of $+\SI{1200}{\volt}$.

The scintillator is of size $\SI{42}{\cm}$ times $\SI{82}{\cm}$, which is an area of
#+BEGIN_SRC nim
let x = 0.42
let y = 0.82
echo x * y
#+END_SRC

#+RESULTS:
: 0.3444

- [ ] *TODO: CROSS CHECK THESE NUMBERS HERE*

At a cosmic muon rate of $\sim\SI{100}{\hertz \per \meter \squared \steradian}$, the expected signal rate of muons is thus $\sim \SI{33}{\hertz}$.
#+BEGIN_SRC nim
let area = 0.34
let total_muons = 60000.0
echo area * total_muons
#+END_SRC

#+RESULTS:
: 20400.0

- [ ] *UPDATE*: Muon rate about $\SI{1}{cm^{-2}.min^{-1}} \approx \SI{166.67}{m^{-2}.s^{-1}}$

**** Calibration
:PROPERTIES:
:ORDERED: t
:END:

Threshold values are scaled by a factor of 10.

Coincidence using Theodoros' 2 scintillator paddles in the RD51 lab.
- upper scinti: $\SI{-2070}{\volt}$
- lower scinti: $\SI{-2050}{\volt}$

Measurement time for each value: $\SI{10}{\minute}$

Note: The reason the coincidences are much lower than the single scintillator counts is of course due to the much smaller coincidence area of the small scintillators used for the measurement.
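Returning to the muon rate estimate above (flagged with a cross-check TODO): it can be checked numerically. A rough sketch, taking the quoted numbers at face value (the $\sim\SI{100}{\hertz\per\meter\squared}$ figure and the updated $\SI{1}{cm^{-2}.min^{-1}}$ rate; no solid-angle acceptance is included):

#+begin_src nim
## Rough cross-check of the expected muon rate through the veto paddle.
## Numbers are the ones quoted in the notes above; this is not a precise
## calculation (no angular acceptance).
let area = 0.42 * 0.82          # paddle area in m², ≈ 0.3444
let rateOld = 100.0             # Hz·m⁻², the figure used above
let rateUpdated = 1.0e4 / 60.0  # 1 cm⁻²·min⁻¹ → ≈ 166.7 m⁻²·s⁻¹
echo area * rateOld             # ≈ 34 Hz, close to the quoted ~33 Hz
echo area * rateUpdated         # ≈ 57 Hz with the updated rate
#+end_src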
| Threshold / mV | Counts Szinti | Counts Coincidence |
|----------------+---------------+--------------------|
| -301.9 | 24062 | 760 |
| -399 | 13332 | 496 |
| -498 | 6584 | 300 |
| -603 | 3363 | 167 |
| -699 | 1900 | 104 |
| -802 | 1087 | 83 |
| -901 | 651 | 54 |
| -1005 | 523 | 50 |
| -1104 | 361 | 32 |
| -1203 | 231 | 32 |
| -1305 | 189 | 38 |
| -1400 | 151 | 23 |
| -1502 | 96 | 14 |
| -1602 | 78 | 15 |
| -1703 | 72 | 10 |
| -1802 | 58 | 11 |
| | | |

Second set of measurements around the interesting point of $\SI{1000}{\milli\volt}$:

| Threshold / mV | Counts Szinti | Counts Coincidence |
|----------------+---------------+--------------------|
| -1200 | 259 | 35 |
| -1100 | 350 | 34 |
| -1000 | 456 | 48 |
| -900 | 774 | 42 |

A third measurement using an amplifier after the PMT, since the output signal of the PMT is so small (see mail of JuanAn). Now the HV was lowered to $\SI{1200}{\volt}$ again, since a higher voltage is not necessary.

#+NAME: tab-test
| Threshold / mV | Counts Szinti | Counts Coincidence |
|----------------+---------------+--------------------|
| -598 | 31221 | 634 |
| -700 | 30132 | 674 |
| -804 | 28893 | 635 |
| -903 | 28076 | 644 |
| -1005 | 27012 | 684 |
| -1103 | 25259 | 566 |
| -1200 | 22483 | 495 |
| -1303 | 19314 | 437 |
| -1403 | 16392 | 356 |
| -1505 | 13677 | 312 |
| -1600 | 11866 | 267 |
| -1701 | 10008 | 243 |
| | | |
| -900 | 28263 | 892 |
| -1000 | 26789 | 991 |

#+begin_src nim :var tbl=tab-test
import ggplotnim, sequtils, strutils # strutils needed for `parseFloat`
proc parse(s: openArray[string]): seq[float] =
  s.filterIt(it.len > 0).mapIt(it.parseFloat)
let df = toDf({ "Thr" : tbl["Threshold / mV"].parse,
                "Szinti" : tbl["Counts Szinti"].parse,
                "Coinc" : tbl["Counts Coincidence"].parse })
ggplot(df, aes("Thr", "Szinti")) +
  geom_point() +
  geom_line() +
  ggsave("/t/test.pdf")
#+end_src

Export this table using org-table-export to [[file:data/veto_szinti_counts.txt]]. Then remove the unnecessary last line and add a # to the beginning of the first line.
- [ ] *REWRITE TO USE AN INLINE GGPLOTNIM PLOTTING AND SHOW DATA*

Then use [[file:PyS_mm_veto_szinti_calib.py]] to plot the data. A threshold of $\SI{-110}{\milli\volt}$ was selected after analysis.

* CAST operation procedures :Appendix:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:cast_operations
:END:

This appendix provides some guidance on the typical operations necessary during maintenance or installation of the detector, vacuum system and water cooling at CAST. The majority of this appendix was written as an overview of the Septemboard operation at CAST, as required by CERN at the time.

We first quickly look at the terminology used to describe different areas in the CAST hall in sec. [[#sec:appendix:cast_operations:terminology]]. Then we go over the different systems: detector high voltage (sec. [[#sec:cast:high_voltage]]), vacuum system (sec. [[#sec:cast:vacuum_system]]), water cooling and gas supply (sec. [[#sec:cast:watercooling_gas]]) and the safety interlock systems (sec. [[#sec:cast:interlock_systems]]). We finally end with the CAST log files in sec. [[#sec:appendix:cast_log_files]]. These are the most important aspect in terms of performing data analysis with the data taken at CAST (magnetic field, solar tracking and more).

** CAST terminology
:PROPERTIES:
:CUSTOM_ID: sec:appendix:cast_operations:terminology
:END:

In the CAST collaboration there is a common terminology in use to describe different parts of the hall, the different bores and the detector installations. Essentially, the two magnet bores and the areas 'behind' them (seen from the center of the magnet, if you will) are named after what lies closest to them: the Geneva airport on one side and the Jura mountains on the other. The magnet ends are named according to whether a detector installed on that side observes the sunrise or the sunset. See fig. [[fig:appendix:cast_operations:terminology]] for a schematic.

#+CAPTION: Schematic of the CAST hall with the magnet and the parts that give the
#+CAPTION: names to different areas.
The bores (and as an extension sides of the magnet) #+CAPTION: are named by the airport / Jura mountains and the magnet ends by whether #+CAPTION: a detector observes sunrise or sunset. #+NAME: fig:appendix:cast_operations:terminology [[~/phd/Figs/CAST_terminology/cast_terminology.pdf]] *** TODOs for this section :noexport: - [X] *HAVE A SCHEMATIC OF SORTS THAT SHOWS WHAT AIRPORT, JURA, SUNRISE, SUNSET MEANS* -> Realistically: do a simple schematic showing CAST magnet, tracks, telescope side etc and annotate. -> We don't have the time, but it would be fun to experiment with gaussian splatting based on all the CAST pictures we have to see if we could get an interactive 3D view of the CAST hall, haha. *Hey*, you there! You seem like an interested reader. You should make a Gaussian splatting 3D scene of CAST. Or whatever fancy tech you have in your timeline! ** High voltage supply :PROPERTIES: :CUSTOM_ID: sec:cast:high_voltage :END: The high voltage supply is an iseg HV module, which is located in the VME crate on the airport side of the magnet. The HV is controlled via a USB connection to the VME crate, which it shares with the FADC. The veto scintillator however has its own HV supply, since it needs a positive HV, instead of a negative one. The detector uses $\num{7}$ different high voltages. $\num{5}$ of these are for the detector itself, $\num{1}$ for the SiPM and the last for the veto scintillator on top. Their voltages are shown in tab. [[tab:cast:high_voltage]]. The voltages actually used are defined in [[file:~/TOS/config/HFM_settings.ini][TOS/config/HFM_settings.ini]]. #+CAPTION: Table of high voltages in use for the Septemboard detector. #+CAPTION: Note that the veto scintillator is not controlled via #+CAPTION: the iseg module, but by a CAEN N470. 
#+NAME: tab:cast:high_voltage #+ATTR_LATEX: :booktabs t |-------------+---------+-------------+------------------| | Description | Channel | Voltage / V | TripCurrent / mA | |-------------+---------+-------------+------------------| | grid | 0 | -300 | 0.050 | | anode | 1 | -375 | 0.050 | | cathode | 2 | -1875 | 0.050 | | ring 1 | 3 | -415 | 0.100 | | ring 29 | 4 | -1830 | 0.100 | | veto scinti | 5 | +1200 | 2 | | SiPM | 6 | -65.6 | 0.05 | |-------------+---------+-------------+------------------| The HV cables in use are red cables with LEMO HV connectors. They run from the detector to an iseg HV module sitting in a VME crate on the airport side of the magnet. The cables are marked with zip ties and the same names as in tab. [[tab:cast:high_voltage]]. The interlock system for the high voltage supply is detailed in section [[#sec:cast:hv_interlock]], together with the other interlock systems in place. *** TODOs for this section [/] :noexport: - [X] *MOVE SOME OF THIS TO THE DAQ SECTION WHERE WE HAVE A HV SECTION* -> The table is now also part of the detector chapter [[#sec:operation_calibration:high_voltage]]. - [ ] *ONCE FINAL: REFERENCE A RELATIVE LINK TO THE TOS REPOSITORY* -> Or to Github. *** Ramping the HV :PROPERTIES: :CUSTOM_ID: sec:cast:ramping_high_voltage :END: The high voltage supply can be controlled in two different ways. Besides differing in usability terms (one is manual, the other automatic), the main difference between the two is the HV interlock, which is only partially usable in case of the manual HV control. 1. in the manual way using the Linux software supplied by iseg. On the InGrid-DAQ computer it is located in [[file:/home/ingrid/src/isegControl/isegControl][~/src/isegControl/isegControl]]. Depending on the setup of the machine, the software may need superuser rights to access the USB connection. With the software the given channel as shown in tab. [[tab:cast:high_voltage]] can be set up and the HV can be ramped up. 
   Note: one needs to activate SetKillEnable such that the HV is shut down in case of a current trip (exceeding the specified current). One should then set 'groups' of different channels, so that grid, anode and ring 1 are shut down at the same time in case of a current trip, as well as ring 29 and the cathode! In addition, the trip current needs to be set manually about a factor of 5 higher during ramping, because capacitors need to be charged first. Otherwise the channels trip immediately. In this case the HV interlock is restricted to basic current limits. Anything detector related is not included!
2. in the automatic way via the TOS. The TOS takes care of everything mentioned above. To use the TOS for the HV control (and thus also use the complete HV interlock, as it exists at the moment), perform the following steps:
   1. check [[file:~/TOS/config/HFM_settings.ini][~/TOS/config/HFM_settings.ini]] and compare with tab. [[tab:cast:high_voltage]] whether the settings seem reasonable
   2. after starting TOS and setting up the chips, call
      #+BEGIN_SRC sh
      > ActivateHFM
      #+END_SRC
      which will set up TOS to use the combined HV and FADC (since both sit inside the same VME crate, the two are intertwined). This configures the FADC and reads the desired HV settings, but does not set the HV settings on the module yet.
   3. to write the HV to the HV module, call
      #+BEGIN_SRC sh
      > InitHV
      #+END_SRC
      which will write the HV settings from ~HFM_settings.ini~ to the HV module. At the end it will ask the user whether the HV should be ramped up:
      #+BEGIN_SRC sh
      Do you wish to ramp up the channels now? (Y / n)
      #+END_SRC
      If yes, the ramping progress will be shown via calls to the CheckModuleIsRamping() function (which can also be called manually in TOS). This should properly ramp up all channels. It is possible that TOS fails to connect to the VME crate and hence is not able to ramp up the channels.
The most likely reason for this is that the isegControl software is still open, since only one application can access a single USB interface at the same time.

** Vacuum system
:PROPERTIES:
:CUSTOM_ID: sec:cast:vacuum_system
:END:

This section covers the vacuum system of the detector. It is pumped via a single primary membrane pump and two turbo pumps. One turbo pump is used to pump the main vacuum vessel of the beam pipe, while the second, small turbo pump is used to pump the interstage part of our X-ray source manipulator to reduce leakage during movement of the source. Fig. [[fig:cast:vacuum-schematic]] shows a schematic of the whole vacuum system including all interlock systems and the pressure sensors. The pressures of the sensors P3 and P-MM are used as an interlock for VT3 and the gas supply, respectively.

#+CAPTION: Schematic of the vacuum system behind the LLNL telescope including
#+CAPTION: interlocks and pressure sensors.
#+NAME: fig:cast:vacuum-schematic
#+ATTR_LATEX: :width 1\textwidth :options angle=90
[[file:~/phd/Figs/detector/vacuum_system2017.png]]

*** TODOs for this section [/] :noexport:

- [X] *EXPLAIN INTERLOCKS HERE OR IN APPENDIX?*
  -> This entire thing will be in the appendix later. :)

*** Vacuum operations

The vacuum system as described in sec. [[#sec:cast:vacuum_system]] usually does not require manual intervention during normal operation. For maintenance, the following two sections describe how to pump the system safely as well as how to flush it with nitrogen. Both processes are rather delicate due to the sensitive $\ce{Si_x N_y}$ window. Pumping needs to be done slowly, $O(\SI{1}{\milli\bar \per \second})$. To be able to do this, the needle valve $V_{\text{Needle}}$ (cf. fig. [[fig:cast:vacuum-schematic]]) is installed.

One may separate the vacuum volume into two separate vacua: a bad vacuum before the primary pump and after the turbo pumps T1 and T2, and a good vacuum before the turbo pump T2.
There are three connections from the good vacuum to the bad one.
1. through T2, closable via $V_{\text{T2}}$, $\SI{40}{\mm}$ tubing
2. through the needle valve $V_{\text{needle}}$, $\SI{16}{\mm}$ tubing
3. through T1 via the manipulator interstage, normally closed (see the note below), $\SI{25}{\mm}$ tubing
Connection 3 is mainly irrelevant for pumping purposes, since there is no valve to open or close; it is always closed by a 2-O-ring seal towards the good vacuum. While connection 1 is the main path for pumping during operation, connection 2 is the one used during pump down or flushing of the system, since it can be controlled very granularly.

For both explanations below, it is very important to always think about each step (are the correct valves open / closed? etc.). A small mistake can lead to severe damage to the hardware (turbo pumps can break, the window can rupture).

_Note_: There is a third, very small vacuum volume before T1, which is the volume up to the manipulator interstage. This volume is separate from the main good vacuum chamber, due to a 2-O-ring seal on both ends of the manipulator. Compare with fig. [[fig:cast:vacuum-schematic]] at the location of the two clamped flanges 'above' the manipulator. One 2-O-ring seal is at the upper flange and one at the lower. This is because the manipulator part furthest from the beampipe is under air. These seals are in place to separate the air from the vacuum, especially during movement of the source. However, while the 2-O-ring seals provide decent sealing, they are not perfect. This is why the small turbo pump T1 is in place at all: to reduce the amount of air which might enter the system during source manipulation. Another aspect to keep in mind is air that can get trapped in between the two O-rings. This air will be released during movement of the seals. Especially after the system was open to air, it is expected that a small pressure increase on $P_{\text{MM}}$ can be seen during operation, despite T1 being in place.
After several movement cycles, $O(10)$, these peaks should be negligible.

**** Pumping the vacuum

Before pumping it is a good idea to connect two linear gauges to the two $P_{\text{Linear}}$ pressure sensors. To pump the system safely, perform the following steps:
1. Make sure every pump is turned off.
2. Make sure every valve in the system is closed:
   1. $V_{\text{Primary}}$
   2. $V_{\text{Leak}}$
   3. $V_{\text{T2}}$
   4. $V_{\text{Needle}}$
3. Connect a linear gauge to $P_{\text{P, Linear}}$ on the primary pump line.
4. Start the primary pump. Tubing up to $V_{\text{Primary}}$ will be pumped, visible on the linear gauge connected to $P_{\text{P, Linear}}$. Check that the second linear gauge remains unchanged; if not, $V_{\text{Primary}}$ and $V_{\text{Needle}}$ are open!
5. Once $P_{\text{P, Linear}}$ shows $\leq \SI{10}{\milli bar}$, slowly open $V_{\text{Primary}}$, again checking that $P_{\text{N, Linear}}$ remains unchanged. This will increase the pressure on $P_{\text{P, Linear}}$ again until the volume is pumped.
6. This step is the most crucial. With $V_{\text{T2}}$ still closed, very carefully open $V_{\text{Needle}}$, while keeping an eye on $P_{\text{N, Linear}}$. Note that $V_{\text{Needle}}$ has two locking mechanisms: the knob at the end with the analog indicator, and a general lock in front of that. While the analog indicator shows =000=, open the general lock. Then slowly start turning the knob. At around =300= the pressure on $P_{\text{N, Linear}}$ should slowly start to decrease. Keep turning the knob until you reach a pump rate of $O(\SI{1}{\milli \bar \per \second})$. The lower the pressure, the further you will have to open the needle valve to keep the pump rate constant.
7. Once both linear gauges have equalized (up to different offsets), close the needle valve again.
8. Open $V_{\text{T2}}$.
9. Start T2 by turning on the power and pressing the rightmost button.
   Use the arrow buttons to select the 'actual RPM' setting to see that the turbo is spinning up. The final speed should be set to $\SI{1500}{\Hz}$.
10. While T2 is spinning up, start T1 by turning on the power at the back. There is no additional button to press.

The system should now be in the following state:
- $V_{\text{Leak}}$ closed
- $V_{\text{Needle}}$ closed
- $V_{\text{T2}}$ open
- $V_{\text{Primary}}$ open
- T2 & T1 running
- Primary pump running
If so, the system is now pumping. Note that it may take several days to reach a vacuum good enough to satisfy the interlock.

**** Flushing the system

Flushing the system is somewhat of a reverse of pumping it. Follow these steps to safely flush the system with nitrogen. See section [[Nitrogen supply]] for an explanation of which valves need to be operated to open the nitrogen line. Before flushing the system, connect two linear gauges to both $P_{\text{Linear}}$ sensors.
1. Make sure the turbo pumps are turned off; if not, turn both off and wait for them to come to a halt.
2. Turn off the primary pump.
3. Close $V_{\text{T2}}$. $V_{\text{Leak}}$ and $V_{\text{Needle}}$ should already be closed, while $V_{\text{T2}}$ and $V_{\text{Primary}}$ should still be open.
4. Connect the nitrogen line to the blind flange before $V_{\text{Leak}}$.
5. Slowly open $V_{\text{Leak}}$, while checking both linear gauges. Make sure only the pressure on $P_{\text{P, Linear}}$ increases, while $P_{\text{N, Linear}}$ remains under vacuum. If not, another valve is still open. Close $V_{\text{Leak}}$ immediately again!
6. Keep flushing nitrogen until $P_{\text{P, Linear}}$ gets close to $\SI{1000}{\milli\bar}$ (the sensors will never actually reach that value).
7. Close $V_{\text{Leak}}$ again to make sure you do not put the system above one atmosphere of pressure.
8. This step is the most crucial. With $V_{\text{T2}}$ still closed, very carefully open $V_{\text{Needle}}$, while keeping an eye on $P_{\text{N, Linear}}$.
   Note that $V_{\text{Needle}}$ has two locking mechanisms: the knob at the end with the analog indicator, and a general lock in front of that. While the analog indicator shows =000=, open the general lock. Then slowly start turning the knob. At around =300= the pressure on $P_{\text{N, Linear}}$ should slowly start to increase. Keep turning the knob until you reach a venting rate of $O(\SI{1}{\milli \bar \per \second})$.
9. You will notice that the pressure on $P_{\text{P, Linear}}$ starts to decrease, since the gas distributes itself in a larger volume. Open $V_{\text{Leak}}$ again slightly to keep $P_{\text{P, Linear}}$ roughly constant.
10. Keep flushing with $\SI{1}{\milli\bar\per\second}$ until both sensors read $O(\SI{1000}{\milli\bar})$.
11. Close all valves in the system again.
This way the system is safely flushed with nitrogen. This helps to pump down faster after a short maintenance, because less humidity can enter the system.

** Watercooling system & gas supply
:PROPERTIES:
:CUSTOM_ID: sec:cast:watercooling_gas
:END:

In this section the watercooling system as well as the gas supply is discussed. In section [[#sec:cast:water_gas_schematic]] a combined schematic of both systems is shown.

*** Watercooling

In order to keep the detector cool enough to avoid noise and damage to the septemboard, a watercooling system is used. This section describes the relevant information for the system as used at CAST.

To read out the temperature, two PT1000 temperature sensors are installed on the detector. One is located on the bottom side of the intermediate board (outside of the detector volume), while the other is located on the bottom side of the Septemboard. This temperature $T_{\text{Septem}}$ is also included in the schematic [[fig:detector-schematic]] below, because it was intended to be part of the HV interlock, as described in [[#sec:cast:hv_interlock]]. In the end it was not, due to lack of time for proper testing.[fn:data_loss]

Fig. sref:fig:cast:water_cooling_system shows the main part of the system including the pump, reservoir and radiator. The tubing is deliberately *blue* to clear up potential confusion with other tubes used in the detector system.[fn:cooling_colors] These tubes use special Festo quick couplings, which cannot be connected to the connectors of the gas supply, to avoid potential accidents. The tubes have zipties installed on them, which label the tubes as well, with the naming convention as used in fig. [[fig:detector-schematic]].

- Maintenance :: At the end of every shift it should be checked whether the water level in the reservoir is still above the red line seen in Fig. [[sref:fig:cast:water_cooling_reservoir]]. If not, water should be added by the shift coordinator (or trusted shifters).

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.36)
  (caption "Reservoir")
  (label "fig:cast:water_cooling_reservoir")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/phd/Figs/CAST_water_cooling_reservoir.jpg"))
 (subfigure (linewidth 0.64)
  (caption "Water cooling system")
  (label "fig:cast:water_cooling_system")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/phd/Figs/CAST_water_cooling_system.jpg"))
 (caption (subref "fig:cast:water_cooling_reservoir") "shows a closeup of the water cooling reservoir with indicated markings to be checked for safe water levels." (subref "fig:cast:water_cooling_system") "is a look from the ground floor at the water cooling system including radiator, pump and reservoir.")
 (label "fig:cast:water_cooling_overview"))
#+end_src

[fn:data_loss] As mentioned in a previous footnote, the decision to take it out of the HV interlock is the reason for the loss of temperature logging data.

[fn:cooling_colors] This change was made after the 'window accident'.

*** Gas supply

The gas supply uses red tubing (in parts where flexible tubing is used) to differentiate itself from the watercooling system.
Additionally, the tubes have zipties showing which tube is which. These are located on both ends of the tubes. The naming convention is the same as in [[fig:detector-schematic]]. The connectors of the gas line are standard Swagelok connectors.

As can be seen in the schematic, the gas supply has 4 valves on the inlet side and 2 on the outlet side. In addition, a buffer gas volume is installed before the detector for better flow control. A short explanation of the different valves follows:
- $V_{\text{in, 1}}$ is the main electrovalve installed right after the gas bottle outside the CAST hall.
- $V_{\text{interlock}}$ is the electrovalve installed below the platform where the beamline is located. This valve is part of the gas supply interlock, as described in section [[#sec:cast:gas_supply_interlock]].
- $V_{\text{in, 2}}$ is the manual valve located on the second gas supply mounting below the beamline.
- $V_{\text{in, N1}}$ is the first needle valve, which is located on the second gas supply mounting below the beamline. It is part of the flow meter placed there.
- $V_{\text{P}}$ is the valve inside the pressure controller, which is placed roughly below the telescope (on the platform), while $P_{\text{Detector}}$ is the pressure gauge inside this controller.

**** Gas system operations

If the requirements of the gas supply interlock are satisfied (cf. sec. [[Gas supply interlock]]), it is possible to flush the detector with gas. For that, follow these steps:
1. Make sure the pressure controller is connected and running. Check the InGrid-PLC computer in the control room and see if the pressure control software is running. If the gas supply is currently closed, the pressure inside the detector is usually reported as $\SIrange{960}{980}{\milli\bar}$.
2. Outside the building, open the main valve of the currently active gas bottle (check the arrow on the bottle selector mechanism). See fig. [[sref:fig:gas-bottle-outside]].
3.
   Open the second valve near the bottle.
4. Pressure values should be:
   - gas bottle: $\sim\SIrange{30}{100}{\bar}$
   - pre-line pressure: $\sim\SI{7}{\bar}$
   - line pressure: $\sim\SI{0.45}{\bar}$
5. Activate the gas supply at the interlock box by turning the key to =Security on= and pressing the large button. See fig. [[sref:fig:interlock-box]].
6. Go to the airport side of the magnet. Open the valve on the InGrid gas panel below the telescope platform. See fig. [[fig:ingrid-gas-panel]].
7. Slowly open the needle valve on the flow meter on the previous panel. Increase the gas flow up to $\sim\SI{2}{\liter\per\hour}$.
8. Open the needle valve on the gas supply line on the side of the platform.
9. After $\SIrange{5}{10}{\minute}$ the pressure controller on the InGrid-PLC computer should report $\SI{1050}{\milli\bar}$.
# 7. Make sure the electrovalves on the second gas panel (see
#    fig. NOT HERE. NO PANEL) are open. THESE VALVES CANNOT BE CHECKED
#    EXPLICITLY. IF INTERLOCK BOX IS ACTIVE, WILL BE OPEN?! MAYBE CHECK
#    BOX POWER SUPPLY? NO LED I THINK.
Now the detector should be flushed with $\ce{Ar} / \ce{iC_4H_{10}}$. Before turning on the HV, make sure to flush for at least $\SI{12}{\hour}$ to be on the safe side.

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "Gas bottle")
  (label "fig:gas-bottle-outside")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/org/Doc/Detector/figs/gas_bottles_outside.jpg"))
 (subfigure (linewidth 0.5)
  (caption "Interlock box")
  (label "fig:interlock-box")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/org/Doc/Detector/figs/gas_interlock_box.jpg"))
 (caption (subref "fig:gas-bottle-outside") ": Location of the argon-isobutane bottle and the main valves outside the building." (subref "fig:interlock-box") ": Location of the gas interlock box")
 (label "fig:cast:bottle_valve_outside"))
#+end_src

#+CAPTION: Location of the InGrid gas panel below the telescope platform on the airport side.
#+NAME: fig:ingrid-gas-panel
#+ATTR_LATEX: :width 0.6\textwidth
[[file:~/org/Doc/Detector/figs/gas_panel.jpg]]

***** TODOs for this section :noexport:

It turns out I never took a good picture of the electrovalve I think :)

#+CAPTION: Location of the needle valve on the gas supply line
#+NAME: fig:gas-needle-valve
<file to be inserted>

*** Combined schematic (water & gas)
:PROPERTIES:
:CUSTOM_ID: sec:cast:water_gas_schematic
:END:

Fig. [[fig:detector-schematic]] shows a combined schematic of both the watercooling system and the gas supply. Additionally, the relevant interlock systems and their corresponding members are shown.

#+CAPTION: Combined schematic of the detector system, consisting of the
#+CAPTION: water cooling system and the gas supply. The interlock systems
#+CAPTION: are shown with dashed lines. See section [[#sec:cast:interlock_systems]]
#+CAPTION: regarding explanations of when the interlock is activated.
#+NAME: fig:detector-schematic
#+ATTR_LATEX: :width 1\textwidth :options angle=90
[[~/org/Doc/Detector/figs/detector_system2017.png]]

*** Nitrogen supply

Nitrogen is supplied by a nitrogen bottle outside the building. To open the nitrogen line, 5 valves need to be opened. The line ends in a copper pipe on the airport side, which is usually rolled up there (the copper is somewhat flexible).
1. Open the lever on the nitrogen bottle outside the building (see fig. [[sref:fig:nitrogen-bottle-outside]]).
2. Open the valve next to the bottle.
3. Go to the gas lines next to the control room. The rightmost line (see fig. [[sref:fig:valves-control-room]] and fig. [[sref:fig:nitrogen-valve-control-room]]) is the nitrogen line. Open the valve.
4. Open the needle valve on the flow meter to the right of the previous valve.
5. Open the needle valve on the airport side of the magnet.
That is all that is needed to open the nitrogen line. The flow through the pipe is not too large, but it should be large enough to feel it on the back of the hand.
#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "Nitrogen bottle")
  (label "fig:nitrogen-bottle-outside")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/org/Doc/Detector/figs/nitrogen_bottle.jpg"))
 (subfigure (linewidth 0.5)
  (caption "Control room")
  (label "fig:valves-control-room")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/org/Doc/Detector/figs/nitrogen_valve_location.jpg"))
 (caption (subref "fig:nitrogen-bottle-outside") ": Location of the nitrogen bottle and the main valves outside the building." (subref "fig:valves-control-room") ": Location of the valves next to the control room")
 (label "fig:nitrogen_bottle_valve"))
#+end_src

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "Nitrogen valve")
  (label "fig:nitrogen-valve-control-room")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/org/Doc/Detector/figs/nitrogen_valve.jpg"))
 (subfigure (linewidth 0.5)
  (caption "Flow meter")
  (label "fig:nitrogen-flow-meter-control-room")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/org/Doc/Detector/figs/nitrogen_flow_meter.jpg"))
 (caption (subref "fig:nitrogen-valve-control-room") ": The actual nitrogen valve on the set of valves near the control room." (subref "fig:nitrogen-flow-meter-control-room") ": Location of the flow meter near the valve next to the control room")
 (label "fig:cast:nitrogen_flow_meter_valve"))
#+end_src

#+BEGIN_EXPORT latex
\newpage
#+END_EXPORT

**** TODOs for this section :noexport:

For the needle valve on the airport side, I do have a picture, but no need I think...

** Interlock systems
:PROPERTIES:
:CUSTOM_ID: sec:cast:interlock_systems
:END:

This section describes the interlock systems related to the detector. There are 3 interlock systems to speak of.
- The CAST magnet interlock, which prohibits the gate valve VT3 from being opened if the pressure in the vacuum system is not good enough.
- A gas supply interlock, which makes sure the detector is only flushed with gas if all other parameters are considered nominal (mainly a good vacuum in the system).
- An HV interlock, which makes sure the detector is only under HV if the temperature of the detector is still good (otherwise a lot of sparks are produced) and the currents on the HV lines are nominal.

*** HV interlock
:PROPERTIES:
:CUSTOM_ID: sec:cast:hv_interlock
:END:

#+begin_quote
Note: This section describes the high voltage interlock as it was intended. But as mentioned multiple times previously, it was deactivated for the final data taking due to several bugs causing it to trigger under unintended circumstances.
#+end_quote

The HV system is part of an interlock, which tracks the following properties:
- detector temperature
- currents on HV lines
- TO BE IMPLEMENTED: gas pressure inside the detector [fn:gas_pressure]

The detector temperature is measured at two points by PT1000 sensors. One of these is located on the bottom side of the intermediate board (and thus is more a measure of the temperature surrounding the detector), while the second is located on the bottom side of the septemboard. The second is the best possible measure for the temperature of the InGrids. However, there is still a PCB separating the sensor from the actual InGrids. This means there is probably a temperature difference of a minimum of $\SI{10}{\celsius}$ between the measured value and the actual temperature of the InGrids.

Whenever the TOS is configured to use the FADC readout and control the HV (note: the two are intertwined, since both sit in the same VME crate, which is controlled via a single USB connection), a background process which monitors the temperature is started. If the temperature leaves the boundaries
\[ \SI{0}{\celsius} \leq T \leq \SI{60}{\celsius} \]
on the lower side of the septemboard, the HV is shut down immediately.
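In essence the monitoring process implements a simple bounds check in a polling loop. The following is a minimal sketch of that logic in Python; the actual implementation is part of TOS, and the ~read_septem_temperature~ and ~shutdown_hv~ callables here are hypothetical placeholders for the sensor readout and HV control:

```python
import time

# Temperature bounds in °C within which the HV may remain on. The lower
# bound mainly catches bogus (negative) sensor readings.
T_MIN, T_MAX = 0.0, 60.0

def temperature_ok(t_celsius: float) -> bool:
    """Return True if the septemboard temperature is within the safe bounds."""
    return T_MIN <= t_celsius <= T_MAX

def monitor(read_septem_temperature, shutdown_hv, interval_s: float = 1.0):
    """Poll the temperature sensor; shut down the HV once the bounds are left."""
    while True:
        if not temperature_ok(read_septem_temperature()):
            shutdown_hv()
            break
        time.sleep(interval_s)
```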
The lower bound is of less practical value in a physical sense, but in case of sensor problems negative temperature values may be reported. The upper bound is chosen as the value at which sparks and general noise on the pixels become noticeable.

The interlock currents at which the HV trips were already shown in tab. [[tab:cast:high_voltage]]. During ramp up of the HV, these trip currents are set higher to avoid trips while the capacitors are being charged.

The gas pressure within the detector was intended to be included in the interlock system to shut down the HV if the pressure inside the detector leaves a certain safe boundary, since this could indicate a leak in the detector or an empty gas bottle.

[fn:gas_pressure] The gas pressure inside the detector was intended to be added to the HV interlock as well, but also due to lack of time it never was.

*** Gas supply interlock
:PROPERTIES:
:CUSTOM_ID: sec:cast:gas_supply_interlock
:END:

The gas supply is also part of an interlock system. In case of a window rupture, the amount of gas that might enter the vacuum system should be limited by electrovalves.

Ideally, the pressure inside the detector would have been included in the gas supply interlock as well. This could have made sure the gas inlet and outlet are closed in case the pressure inside the detector drops (which might indicate a leak somewhere or an empty gas bottle) or in case of rising pressure. The latter is not as important though, because a pressure controller is already installed behind the detector, which controls the flow such that the pressure stays at $\SI{1050}{\milli \bar}$. While a failure of the controller is conceivable, potentially leading to a pressure increase inside the detector, it is questionable whether this could be dealt with using this interlock system. That is because the pressure sensor used is part of the pressure controller.

Three electrovalves are placed on the gas line of the detector.
One, $V_{\text{in}}$, is outside of the building next to the gas bottles (see fig. [[sref:fig:cast:electrovalve-outside]]). The second valve $V_{\text{interlock}}$ is located right before the buffer volume, next to the watercooling system, below the shielding platform, fig. sref:fig:cast:electrovalve-V-interlock. The final electrovalve $V_{\text{out}}$ is located after the pressure controller on a blue beam, which supports the optical table below the telescope (see fig. [[sref:fig:cast:electrovalve-V-out]]). These valves are normally closed, i.e. in case of power loss they automatically close. They are open if a voltage is applied to them.

The valves are connected to the pressure sensor $P_{\text{MM}}$ (see fig. [[fig:cast:vacuum-schematic]]). The pressures that activate the interlock system are defined asymmetrically by upper and lower thresholds. They are as follows:
\begin{align*}
P_{\text{MM, Gas enable}} \leq \SI{9.9e-3}{\milli \bar}
\end{align*}
and
\begin{align*}
P_{\text{MM, Gas disable}} \geq \SI{2e-2}{\milli \bar}
\end{align*}

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.33)
  (caption "Outside $V_{\\text{in}}$")
  (label "fig:cast:electrovalve-outside")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/org/Doc/Detector/figs/electrovalve_outside.jpg"))
 (subfigure (linewidth 0.33)
  (caption "Below $V_{\\text{interlock}}$")
  (label "fig:cast:electrovalve-V-interlock")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/org/Doc/Detector/figs/electrovalve_V_interlock.jpg"))
 (subfigure (linewidth 0.33)
  (caption "Output $V_{\\text{out}}$")
  (label "fig:cast:electrovalve-V-out")
  (includegraphics (list (cons 'width (linewidth 0.95))) "~/org/Doc/Detector/figs/electrovalve_V_out.jpg"))
 (caption (subref "fig:cast:electrovalve-outside") ": Location of the electrovalve $V_{\\text{in}}$ located outside the building."
 (subref "fig:cast:electrovalve-V-interlock") ": Location of the electrovalve $V_{\\text{interlock}}$ next to the watercooling system below the shielding table." (subref "fig:cast:electrovalve-V-out") ": Location of the electrovalve $V_{\\text{out}}$ connected to the beam supporting the optical table, on which the telescope is mounted.")
 (label "fig:cast:interlock_electrovalves"))
#+end_src

**** TODOs for this section [/] :noexport:

#+BEGIN_COMMENT
*CHECK THESE VALUES, they are still wrong I believe*.
#+END_COMMENT

- [X] *MERGE IMAGES INTO ONE SUBFIG*

**** Note on pressures of interlock :extended:

It's possible the exact numbers are slightly wrong. At this point I'm not certain anymore, but I had a note to cross check them in the document the above was initially in (a document about the detector & operation for CERN).

*** CAST magnet interlock
:PROPERTIES:
:CUSTOM_ID: sec:cast:cast_magnet_interlock
:END:

The main CAST magnet interlock, as far as it is relevant to our detector, is as follows. The gate valve VT3 separating the magnet volume from the telescope volume is interlocked. Only if the vacuum in the telescope volume is good enough can VT3 be opened while the interlock is activated. For this the pressure of $P_{\text{3}}$ is considered relevant (cf. fig. [[fig:cast:vacuum-schematic]]). The upper and lower thresholds which activate and deactivate the interlock are asymmetric and as follows:
\begin{align*}
P_{\text{3, VT3 enable}} &\leq \SI{1e-5}{\milli\bar}
\end{align*}
while
\begin{align*}
P_{\text{3, VT3 disable}} &\geq \SI{8e-5}{\milli\bar}.
\end{align*}
This is to make sure there can be no rapid toggling between the two states during pumping or flushing of the system.

** CAST log files
:PROPERTIES:
:CUSTOM_ID: sec:appendix:cast_log_files
:END:

The CAST experiment is controlled by a central computer running a slow control system written in LabVIEW [fn:labview].
It receives data from all sensors installed at CAST (vacuum pressures, temperatures, magnet current, magnet position and so on). A separate LabVIEW program is responsible for controlling the magnet position, either by moving the magnet to a specific coordinate or by following the Sun, if it is in reach of the magnet. The slow control software records all sensor data and presents it in a multitude of graphs. All data is also written to log files.

There are two different kinds of log files of interest in the context of the septemboard detector at CAST.
1. The general slow control log file is a versioned space separated values file (similar to a comma separated value (CSV) or tab separated value (TSV) file, except using spaces as delimiters). The filename contains the version number, which determines the columns contained in the file and their order. A ~Version.idx~ file maps version numbers to file columns for easy access. These slow control log files normally contain one entry every minute. For the Septemboard detector, the columns of interest in this file are mainly the magnet current, the relevant vacuum pressure sensors and the state of the gate valve ~VT3~.
2. The second kind of log files are the tracking log files. These are also space separated value files, with fields describing the pointing location of the magnet as well as whether the magnet is currently tracking the Sun.

Both of these log files need to be read by the analysis software to decide whether one or multiple trackings took place in a given background run and, if so, to extract the exact start and end times for a precise data splitting and calculation of the background / solar tracking time. Generally, the latter log files are all that is needed to determine the existence of a solar tracking. However, the slow control log file can be used as a further sanity check to make sure the gate valve was actually open and the magnet under current during the solar tracking.
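The extraction of tracking start and end times from such a log can be sketched in a few lines. This is only an illustration, not the actual implementation (that is ~cast_log_reader~, written in Nim), and the assumed column layout (a Unix timestamp plus a 0/1 tracking flag) is hypothetical:

```python
from datetime import datetime, timezone

def tracking_intervals(lines, time_col=0, tracking_col=1):
    """Extract (start, end) pairs of solar trackings from a tracking log.

    `lines` are the rows of a space separated log file. `time_col` is
    assumed to hold a Unix timestamp and `tracking_col` a 0/1 flag
    indicating whether the magnet is currently tracking the Sun
    (hypothetical column layout!).
    """
    intervals = []
    start = t = None
    for line in lines:
        fields = line.split()
        t = datetime.fromtimestamp(float(fields[time_col]), tz=timezone.utc)
        if fields[tracking_col] == "1":
            if start is None:
                start = t  # a tracking just began
        elif start is not None:
            intervals.append((start, t))  # the tracking just ended
            start = None
    if start is not None:  # log ended while still tracking
        intervals.append((start, t))
    return intervals
```

Such intervals can then be intersected with the start and stop times of a background run to split its data into tracking and non-tracking parts.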
This is implemented in the ~cast_log_reader~ program, part of ~TimepixAnalysis~. See sec. [[#sec:appendix:software:cast_log_reader]] for more details on the program.

[fn:labview] https://www.ni.com/labview

*** TODOs for this section [/] :noexport:

- [ ] in the above we simply state the number of trackings etc. Of course realistically that is simply not enough. We need the slow control and tracking files to compute when a tracking starts and stops. I think this implies it needs to go before the summary!
- [X] INSERT FOOTNOTE LINK TO LABVIEW
- [ ] *LINK TO CODE!*

*** Add tracking log information to background files :extended:
:PROPERTIES:
:CUSTOM_ID: sec:cast:log_files:add_tracking_info
:END:

The code dealing with the log files is [[file:~/CastData/ExternCode/TimepixAnalysis/LogReader/cast_log_reader.nim]].

In case previous log information is available, we first have to remove that information. This is done by calling the program with just the H5 file as an input:
#+begin_src sh
cast_log_reader h5file -f ~/CastData/data/DataRuns2017_Reco.h5
cast_log_reader h5file -f ~/CastData/data/DataRuns2018_Reco.h5
#+end_src

With an 'empty' (in terms of tracking information) HDF5 file we can then go ahead and add all tracking information to them. This of course requires the tracking log files:
#+begin_src sh
./cast_log_reader tracking -p ../resources/LogFiles/tracking-logs --h5out ~/CastData/data/DataRuns2017_Reco.h5
./cast_log_reader tracking -p ../resources/LogFiles/tracking-logs --h5out ~/CastData/data/DataRuns2018_Reco.h5
#+end_src

One can also run it without any output H5 file ~--h5out~ to simply get information about the runs.
In particular, it prints all trackings found in the given time range:
#+begin_src sh
./cast_log_reader tracking -p ../resources/LogFiles/tracking-logs --startTime 2017/09/01 --endTime 2018/12/30
#+end_src
:RESULTS:
FILTERING TO DATE: 2017-09-01T02:00:00+02:00 and 2018-12-30T01:00:00+01:00
<2017-10-31 Tue 6:37> <2017-10-31 Tue 8:10> <2017-11-02 Thu 6:39> <2017-11-02 Thu 8:14> <2017-11-03 Fri 6:41> <2017-11-03 Fri 8:15> <2017-11-04 Sat 6:42> <2017-11-04 Sat 8:17> <2017-11-05 Sun 6:43> <2017-11-05 Sun 8:18> <2017-11-07 Tue 6:47> <2017-11-07 Tue 8:23> <2017-11-06 Mon 6:45> <2017-11-06 Mon 8:20> <2017-11-08 Wed 6:48> <2017-11-08 Wed 8:24> <2017-11-09 Thu 6:49> <2017-11-09 Thu 8:26> <2017-11-10 Fri 6:51> <2017-11-10 Fri 8:28> <2017-11-11 Sat 6:52> <2017-11-11 Sat 8:30> <2017-11-12 Sun 6:54> <2017-11-12 Sun 8:31> <2017-11-14 Tue 6:56> <2017-11-14 Tue 8:34> <2017-11-13 Mon 6:55> <2017-11-13 Mon 8:33> <2017-11-15 Wed 6:57> <2017-11-15 Wed 8:36> <2017-11-17 Fri 7:00> <2017-11-17 Fri 8:39> <2017-11-18 Sat 7:01> <2017-11-18 Sat 8:41> <2017-11-19 Sun 7:02> <2017-11-19 Sun 8:43> <2017-11-25 Sat 7:10> <2017-11-25 Sat 8:53> <2017-11-26 Sun 7:11> <2017-11-26 Sun 8:54> <2017-11-27 Mon 7:12> <2017-11-27 Mon 8:56> <2017-11-28 Tue 7:14> <2017-11-28 Tue 8:57> <2017-11-29 Wed 7:15> <2017-11-29 Wed 8:59> <2017-11-30 Thu 7:16> <2017-11-30 Thu 9:00> <2017-12-01 Fri 7:17> <2017-12-01 Fri 9:01> <2017-12-02 Sat 7:18> <2017-12-02 Sat 8:59> <2017-12-03 Sun 7:19> <2017-12-03 Sun 8:58> <2017-12-04 Mon 7:20> <2017-12-04 Mon 9:01> <2017-12-05 Tue 7:21> <2017-12-05 Tue 8:59> <2017-12-07 Thu 7:22> <2017-12-07 Thu 9:02> <2017-12-09 Sat 7:25> <2017-12-09 Sat 9:05> <2017-12-10 Sun 7:26> <2017-12-10 Sun 9:01> <2017-12-11 Mon 7:27> <2017-12-11 Mon 9:00> <2017-12-12 Tue 7:28> <2017-12-12 Tue 9:02> <2017-12-13 Wed 7:28> <2017-12-13 Wed 9:00> <2017-12-14 Thu 7:29> <2017-12-14 Thu 8:59> <2017-12-15 Fri 7:29> <2017-12-15 Fri 9:00> <2017-12-16 Sat 7:30> <2017-12-16 Sat 9:01> <2017-12-17 Sun 7:31> <2017-12-17 Sun 9:01> <2017-12-18 Mon 7:31> <2017-12-18 Mon
9:01> <2017-12-19 Tue 7:32> <2017-12-19 Tue 9:02> <2017-12-20 Wed 7:33> <2017-12-20 Wed 9:02> <2018-02-15 Thu 7:01> <2018-02-15 Thu 8:33> <2018-02-16 Fri 7:00> <2018-02-16 Fri 8:31> <2018-02-18 Sun 6:57> <2018-02-18 Sun 8:28> <2018-02-19 Mon 6:56> <2018-02-19 Mon 8:26> <2018-02-20 Tue 6:53> <2018-02-20 Tue 8:24> <2018-02-21 Wed 6:52> <2018-02-21 Wed 8:22> <2018-02-22 Thu 6:50> <2018-02-22 Thu 8:20> <2018-02-23 Fri 6:48> <2018-02-23 Fri 8:18> <2018-02-24 Sat 6:47> <2018-02-24 Sat 8:16> <2018-02-28 Wed 6:40> <2018-02-28 Wed 8:09> <2018-03-02 Fri 6:36> <2018-03-02 Fri 8:05> <2018-03-03 Sat 6:35> <2018-03-03 Sat 8:03> <2018-03-04 Sun 6:33> <2018-03-04 Sun 8:01> <2018-03-05 Mon 6:31> <2018-03-05 Mon 7:59> <2018-03-06 Tue 6:29> <2018-03-06 Tue 7:57> <2018-03-07 Wed 6:27> <2018-03-07 Wed 7:55> <2018-03-14 Wed 6:14> <2018-03-14 Wed 7:41> <2018-03-15 Thu 6:12> <2018-03-15 Thu 7:39> <2018-03-16 Fri 6:10> <2018-03-16 Fri 7:37> <2018-03-17 Sat 6:08> <2018-03-17 Sat 7:35> <2018-03-18 Sun 6:07> <2018-03-18 Sun 7:33> <2018-03-19 Mon 6:05> <2018-03-19 Mon 7:31> <2018-03-20 Tue 6:03> <2018-03-20 Tue 7:29> <2018-03-21 Wed 6:01> <2018-03-21 Wed 7:27> <2018-03-22 Thu 5:59> <2018-03-22 Thu 7:25> <2018-03-24 Sat 5:55> <2018-03-24 Sat 7:21> <2018-03-25 Sun 6:53> <2018-03-25 Sun 8:20> <2018-03-26 Mon 5:51> <2018-03-26 Mon 7:18> <2018-10-19 Fri 6:21> <2018-10-19 Fri 7:51> <2018-10-22 Mon 6:24> <2018-10-22 Mon 7:55> <2018-10-23 Tue 6:26> <2018-10-23 Tue 7:57> <2018-10-24 Wed 6:27> <2018-10-24 Wed 7:58> <2018-10-25 Thu 6:28> <2018-10-25 Thu 8:00> <2018-10-26 Fri 6:30> <2018-10-26 Fri 8:02> <2018-10-27 Sat 6:31> <2018-10-27 Sat 8:03> <2018-10-28 Sun 5:32> <2018-10-28 Sun 7:05> <2018-10-29 Mon 6:34> <2018-10-29 Mon 8:06> <2018-10-30 Tue 6:35> <2018-10-30 Tue 8:08> <2018-11-01 Thu 6:38> <2018-11-01 Thu 8:11> <2018-11-02 Fri 6:39> <2018-11-02 Fri 8:13> <2018-11-03 Sat 6:40> <2018-11-03 Sat 8:16> <2018-11-04 Sun 6:43> <2018-11-04 Sun 8:17> <2018-11-05 Mon 6:44> <2018-11-05 Mon 8:19> <2018-11-06 
Tue 6:45> <2018-11-06 Tue 8:21> <2018-11-09 Fri 6:49> <2018-11-09 Fri 8:26> <2018-11-10 Sat 6:51> <2018-11-10 Sat 8:27> <2018-11-11 Sun 6:52> <2018-11-11 Sun 8:29> <2018-11-12 Mon 6:53> <2018-11-12 Mon 8:31> <2018-11-13 Tue 6:54> <2018-11-13 Tue 8:32> <2018-11-14 Wed 6:56> <2018-11-14 Wed 8:34> <2018-11-15 Thu 6:57> <2018-11-15 Thu 8:36> <2018-11-16 Fri 6:58> <2018-11-16 Fri 8:37> <2018-11-17 Sat 7:00> <2018-11-17 Sat 8:39> <2018-11-18 Sun 7:01> <2018-11-18 Sun 8:41> <2018-11-19 Mon 7:02> <2018-11-19 Mon 8:42> <2018-11-24 Sat 7:08> <2018-11-24 Sat 7:30> <2018-11-25 Sun 7:09> <2018-11-25 Sun 8:52> <2018-11-26 Mon 7:10> <2018-11-26 Mon 8:53> <2018-11-27 Tue 7:11> <2018-11-27 Tue 8:55> <2018-11-29 Thu 7:14> <2018-11-29 Thu 8:58> <2018-11-30 Fri 7:15> <2018-11-30 Fri 8:59> <2018-12-01 Sat 7:16> <2018-12-01 Sat 9:00> <2018-12-02 Sun 7:17> <2018-12-02 Sun 9:02> <2018-12-03 Mon 7:18> <2018-12-03 Mon 9:03> <2018-12-05 Wed 8:55> <2018-12-05 Wed 9:04> <2018-12-06 Thu 7:22> <2018-12-06 Thu 9:04> <2018-12-07 Fri 7:23> <2018-12-07 Fri 9:03> <2018-12-08 Sat 7:24> <2018-12-08 Sat 9:04> <2018-12-10 Mon 7:25> <2018-12-10 Mon 9:03> <2018-12-11 Tue 7:26> <2018-12-11 Tue 9:02> <2018-12-12 Wed 7:27> <2018-12-12 Wed 9:03> <2018-12-13 Thu 7:28> <2018-12-13 Thu 9:03> <2018-12-14 Fri 7:29> <2018-12-14 Fri 9:06> <2018-12-16 Sun 7:30> <2018-12-16 Sun 9:00> <2018-12-15 Sat 7:30> <2018-12-15 Sat 9:01> <2018-12-17 Mon 7:31> <2018-12-17 Mon 9:01> <2018-12-18 Tue 7:32> <2018-12-18 Tue 9:01> <2018-12-20 Thu 7:33> <2018-12-20 Thu 9:03>
There are 120 solar trackings found in the log file directory
The total time of all trackings: 186 h (exact: 1 week, 18 hours, 46 minutes, and 19 seconds)
Total time the magnet was on (> 1 T): 0 h h
:END:
Among these are 5 solar trackings that were *not* covered by our data taking, meaning we lost them.
Finally, and most useful for informational purposes, we can run the log reader with the ~--dryRun~ flag to perform a fake insertion, which only outputs the run information, including which trackings could not be mapped to any run:
#+begin_src sh
./cast_log_reader tracking -p ../resources/LogFiles/tracking-logs \
    --startTime 2017/09/01 --endTime 2018/05/01 \
    --h5out ~/CastData/data/DataRuns2017_Reco.h5 --dryRun
#+end_src
:RESULTS:
There are 70 solar trackings found in the log file directory
The total time of all trackings: 109 h (exact: 4 days, 13 hours, 3 minutes, and 22 seconds)
Total time the magnet was on (> 1 T): 0 h h
Filtering tracking logs to date: 2017-09-01T02:00:00+02:00 and 2018-05-01T02:00:00+02:00
========== Logs mapped to trackings in the output file: ==========
<2017-10-31 Tue 6:37> <2017-10-31 Tue 8:10> for run: 76 <2017-11-02 Thu 6:39> <2017-11-02 Thu 8:14> for run: 77 <2017-11-03 Fri 6:41> <2017-11-03 Fri 8:15> for run: 78 <2017-11-04 Sat 6:42> <2017-11-04 Sat 8:17> for run: 79 <2017-11-05 Sun 6:43> <2017-11-05 Sun 8:18> for run: 80 <2017-11-06 Mon 6:45> <2017-11-06 Mon 8:20> for run: 81 <2017-11-07 Tue 6:47> <2017-11-07 Tue 8:23> for run: 82 <2017-11-08 Wed 6:48> <2017-11-08 Wed 8:24> for run: 82 <2017-11-09 Thu 6:49> <2017-11-09 Thu 8:26> for run: 84 <2017-11-10 Fri 6:51> <2017-11-10 Fri 8:28> for run: 86 <2017-11-11 Sat 6:52> <2017-11-11 Sat 8:30> for run: 87 <2017-11-12 Sun 6:54> <2017-11-12 Sun 8:31> for run: 87 <2017-11-13 Mon 6:55> <2017-11-13 Mon 8:33> for run: 89 <2017-11-14 Tue 6:56> <2017-11-14 Tue 8:34> for run: 90 <2017-11-15 Wed 6:57> <2017-11-15 Wed 8:36> for run: 91 <2017-11-17 Fri 7:00> <2017-11-17 Fri 8:39> for run: 92 <2017-11-18 Sat 7:01> <2017-11-18 Sat 8:41> for run: 94 <2017-11-19 Sun 7:02> <2017-11-19 Sun 8:43> for run: 95 <2017-11-25 Sat 7:10> <2017-11-25 Sat 8:53> for run: 97 <2017-11-26 Sun 7:11> <2017-11-26 Sun 8:54> for run: 98 <2017-11-27 Mon 7:12> <2017-11-27 Mon 8:56> for run: 99 <2017-11-28 Tue 7:14> <2017-11-28 Tue
8:57> for run: 100 <2017-11-29 Wed 7:15> <2017-11-29 Wed 8:59> for run: 101 <2017-11-30 Thu 7:16> <2017-11-30 Thu 9:00> for run: 103 <2017-12-01 Fri 7:17> <2017-12-01 Fri 9:01> for run: 104 <2017-12-03 Sun 7:19> <2017-12-03 Sun 8:58> for run: 106 <2017-12-02 Sat 7:18> <2017-12-02 Sat 8:59> for run: 105 <2017-12-04 Mon 7:20> <2017-12-04 Mon 9:01> for run: 107 <2017-12-05 Tue 7:21> <2017-12-05 Tue 8:59> for run: 109 <2017-12-07 Thu 7:22> <2017-12-07 Thu 9:02> for run: 112 <2017-12-09 Sat 7:25> <2017-12-09 Sat 9:05> for run: 112 <2017-12-11 Mon 7:27> <2017-12-11 Mon 9:00> for run: 114 <2017-12-10 Sun 7:26> <2017-12-10 Sun 9:01> for run: 113 <2017-12-12 Tue 7:28> <2017-12-12 Tue 9:02> for run: 115 <2017-12-13 Wed 7:28> <2017-12-13 Wed 9:00> for run: 117 <2017-12-14 Thu 7:29> <2017-12-14 Thu 8:59> for run: 119 <2017-12-15 Fri 7:29> <2017-12-15 Fri 9:00> for run: 121 <2017-12-16 Sat 7:30> <2017-12-16 Sat 9:01> for run: 123 <2017-12-17 Sun 7:31> <2017-12-17 Sun 9:01> for run: 124 <2017-12-18 Mon 7:31> <2017-12-18 Mon 9:01> for run: 124 <2017-12-19 Tue 7:32> <2017-12-19 Tue 9:02> for run: 125 <2017-12-20 Wed 7:33> <2017-12-20 Wed 9:02> for run: 127 <2018-02-18 Sun 6:57> <2018-02-18 Sun 8:28> for run: 146 <2018-02-20 Tue 6:53> <2018-02-20 Tue 8:24> for run: 150 <2018-02-19 Mon 6:56> <2018-02-19 Mon 8:26> for run: 148 <2018-02-21 Wed 6:52> <2018-02-21 Wed 8:22> for run: 152 <2018-02-22 Thu 6:50> <2018-02-22 Thu 8:20> for run: 154 <2018-02-23 Fri 6:48> <2018-02-23 Fri 8:18> for run: 156 <2018-02-24 Sat 6:47> <2018-02-24 Sat 8:16> for run: 158 <2018-02-28 Wed 6:40> <2018-02-28 Wed 8:09> for run: 160 <2018-03-02 Fri 6:36> <2018-03-02 Fri 8:05> for run: 162 <2018-03-03 Sat 6:35> <2018-03-03 Sat 8:03> for run: 162 <2018-03-04 Sun 6:33> <2018-03-04 Sun 8:01> for run: 162 <2018-03-05 Mon 6:31> <2018-03-05 Mon 7:59> for run: 164 <2018-03-06 Tue 6:29> <2018-03-06 Tue 7:57> for run: 164 <2018-03-07 Wed 6:27> <2018-03-07 Wed 7:55> for run: 166 <2018-03-14 Wed 6:14> <2018-03-14 Wed 
7:41> for run: 170 <2018-03-15 Thu 6:12> <2018-03-15 Thu 7:39> for run: 172 <2018-03-16 Fri 6:10> <2018-03-16 Fri 7:37> for run: 174 <2018-03-17 Sat 6:08> <2018-03-17 Sat 7:35> for run: 176 <2018-03-18 Sun 6:07> <2018-03-18 Sun 7:33> for run: 178 <2018-03-19 Mon 6:05> <2018-03-19 Mon 7:31> for run: 178 <2018-03-20 Tue 6:03> <2018-03-20 Tue 7:29> for run: 178 <2018-03-21 Wed 6:01> <2018-03-21 Wed 7:27> for run: 178 <2018-03-22 Thu 5:59> <2018-03-22 Thu 7:25> for run: 178 <2018-03-24 Sat 5:55> <2018-03-24 Sat 7:21> for run: 180 <2018-03-25 Sun 6:53> <2018-03-25 Sun 8:20> for run: 182 <2018-03-26 Mon 5:51> <2018-03-26 Mon 7:18> for run: 182
==================================================================
========== Logs *not* mapped to a run ============================
<2018-02-15 Thu 7:01> <2018-02-15 Thu 8:33>
<2018-02-16 Fri 7:00> <2018-02-16 Fri 8:31>
==================================================================
:END:
And for 2018:
#+begin_src sh
./cast_log_reader tracking -p ../resources/LogFiles/tracking-logs \
    --startTime 2018/05/01 --endTime 2018/12/31 \
    --h5out ~/CastData/data/DataRuns2018_Reco.h5 --dryRun
#+end_src
:RESULTS:
There are 50 solar trackings found in the log file directory
The total time of all trackings: 77 h (exact: 3 days, 5 hours, 42 minutes, and 57 seconds)
Total time the magnet was on (> 1 T): 0 h h
Filtering tracking logs to date: 2018-05-01T02:00:00+02:00 and 2018-12-31T01:00:00+01:00
========== Logs mapped to trackings in the output file: ==========
<2018-10-22 Mon 6:24> <2018-10-22 Mon 7:55> for run: 240 <2018-10-23 Tue 6:26> <2018-10-23 Tue 7:57> for run: 242 <2018-10-24 Wed 6:27> <2018-10-24 Wed 7:58> for run: 244 <2018-10-25 Thu 6:28> <2018-10-25 Thu 8:00> for run: 246 <2018-10-26 Fri 6:30> <2018-10-26 Fri 8:02> for run: 248 <2018-10-27 Sat 6:31> <2018-10-27 Sat 8:03> for run: 250 <2018-10-29 Mon 6:34> <2018-10-29 Mon 8:06> for run: 254 <2018-10-30 Tue 6:35> <2018-10-30 Tue 8:08> for run: 256 <2018-11-01 Thu 6:38> <2018-11-01
Thu 8:11> for run: 258 <2018-11-02 Fri 6:39> <2018-11-02 Fri 8:13> for run: 261 <2018-11-03 Sat 6:40> <2018-11-03 Sat 8:16> for run: 261 <2018-11-04 Sun 6:43> <2018-11-04 Sun 8:17> for run: 261 <2018-11-05 Mon 6:44> <2018-11-05 Mon 8:19> for run: 263 <2018-11-06 Tue 6:45> <2018-11-06 Tue 8:21> for run: 265 <2018-11-09 Fri 6:49> <2018-11-09 Fri 8:26> for run: 268 <2018-11-10 Sat 6:51> <2018-11-10 Sat 8:27> for run: 270 <2018-11-11 Sun 6:52> <2018-11-11 Sun 8:29> for run: 270 <2018-11-12 Mon 6:53> <2018-11-12 Mon 8:31> for run: 272 <2018-11-13 Tue 6:54> <2018-11-13 Tue 8:32> for run: 272 <2018-11-14 Wed 6:56> <2018-11-14 Wed 8:34> for run: 272 <2018-11-15 Thu 6:57> <2018-11-15 Thu 8:36> for run: 274 <2018-11-16 Fri 6:58> <2018-11-16 Fri 8:37> for run: 274 <2018-11-17 Sat 7:00> <2018-11-17 Sat 8:39> for run: 274 <2018-11-18 Sun 7:01> <2018-11-18 Sun 8:41> for run: 276 <2018-11-19 Mon 7:02> <2018-11-19 Mon 8:42> for run: 276 <2018-11-25 Sun 7:09> <2018-11-25 Sun 8:52> for run: 279 <2018-11-26 Mon 7:10> <2018-11-26 Mon 8:53> for run: 279 <2018-11-27 Tue 7:11> <2018-11-27 Tue 8:55> for run: 281 <2018-11-29 Thu 7:14> <2018-11-29 Thu 8:58> for run: 283 <2018-11-30 Fri 7:15> <2018-11-30 Fri 8:59> for run: 283 <2018-12-01 Sat 7:16> <2018-12-01 Sat 9:00> for run: 283 <2018-12-02 Sun 7:17> <2018-12-02 Sun 9:02> for run: 285 <2018-12-03 Mon 7:18> <2018-12-03 Mon 9:03> for run: 285 <2018-12-05 Wed 8:55> <2018-12-05 Wed 9:04> for run: 287 <2018-12-06 Thu 7:22> <2018-12-06 Thu 9:04> for run: 289 <2018-12-07 Fri 7:23> <2018-12-07 Fri 9:03> for run: 291 <2018-12-08 Sat 7:24> <2018-12-08 Sat 9:04> for run: 291 <2018-12-10 Mon 7:25> <2018-12-10 Mon 9:03> for run: 293 <2018-12-11 Tue 7:26> <2018-12-11 Tue 9:02> for run: 295 <2018-12-12 Wed 7:27> <2018-12-12 Wed 9:03> for run: 297 <2018-12-13 Thu 7:28> <2018-12-13 Thu 9:03> for run: 297 <2018-12-14 Fri 7:29> <2018-12-14 Fri 9:06> for run: 298 <2018-12-15 Sat 7:30> <2018-12-15 Sat 9:01> for run: 299 <2018-12-16 Sun 7:30> <2018-12-16 Sun 
9:00> for run: 301 <2018-12-17 Mon 7:31> <2018-12-17 Mon 9:01> for run: 301 <2018-12-18 Tue 7:32> <2018-12-18 Tue 9:01> for run: 303 <2018-12-20 Thu 7:33> <2018-12-20 Thu 9:03> for run: 306
==================================================================
========== Logs *not* mapped to a run ============================
<2018-10-19 Fri 6:21> <2018-10-19 Fri 7:51>
<2018-10-28 Sun 5:32> <2018-10-28 Sun 7:05>
<2018-11-24 Sat 7:08> <2018-11-24 Sat 7:30>
==================================================================
:END:

* CAST data taking run list :Appendix:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:cast_run_list
:END:

The following table, [[tab:appendix:cast_run_list]], is a simplified version of the entire run list of data taken at CAST. It includes the run numbers and the type of data (background ~b~ and \cefe calibration ~c~), split by run period. The start and end times are given for each run. Additionally, the number of trackings in each run as well as the total number of recorded events on the Septemboard and FADC are shown.

\scriptsize
#+CAPTION: List of all runs recorded with the Septemboard detector during Run-2 and Run-3 at CAST.
#+CAPTION: The run type is listed as ~b~: background with possible tracking and ~c~: calibration with
#+CAPTION: the \cefe source.
#+NAME: tab:appendix:cast_run_list #+ATTR_LATEX: :environment longtable :width \textwidth :spread | Run # | Type | Start | End | Length | # trackings | # frames | # FADC | |-------+------+------------------------+------------------------+---------------+-------------+----------+--------| | Run-2 | | | | | | | | |-------+------+------------------------+------------------------+---------------+-------------+----------+--------| | 76 | b | <2017-10-30 Mon 18:39> | <2017-11-02 Thu 5:24> | 2 days 10:44 | 1 | 88249 | 19856 | | 77 | b | <2017-11-02 Thu 5:24> | <2017-11-03 Fri 5:28> | 1 days 00:03 | 1 | 36074 | 8016 | | 78 | b | <2017-11-03 Fri 5:28> | <2017-11-03 Fri 20:45> | 0 days 15:17 | 1 | 23506 | 5988 | | 79 | b | <2017-11-03 Fri 20:46> | <2017-11-05 Sun 0:09> | 1 days 03:22 | 1 | 40634 | 8102 | | 80 | b | <2017-11-05 Sun 0:09> | <2017-11-05 Sun 23:50> | 0 days 23:40 | 1 | 35147 | 6880 | | 81 | b | <2017-11-05 Sun 23:54> | <2017-11-07 Tue 0:00> | 1 days 00:06 | 1 | 35856 | 7283 | | 82 | b | <2017-11-07 Tue 0:01> | <2017-11-08 Wed 15:58> | 1 days 15:56 | 2 | 59502 | 12272 | | 83 | c | <2017-11-08 Wed 16:27> | <2017-11-08 Wed 17:27> | 0 days 00:59 | 0 | 4915 | 4897 | | 84 | b | <2017-11-08 Wed 17:49> | <2017-11-09 Thu 19:01> | 1 days 01:11 | 1 | 37391 | 7551 | | 85 | b | <2017-11-09 Thu 19:01> | <2017-11-09 Thu 21:46> | 0 days 02:45 | 0 | 4104 | 899 | | 86 | b | <2017-11-09 Thu 21:47> | <2017-11-11 Sat 2:17> | 1 days 04:29 | 1 | 42396 | 9656 | | 87 | b | <2017-11-11 Sat 2:17> | <2017-11-12 Sun 14:29> | 1 days 12:11 | 2 | 54786 | 15123 | | 88 | c | <2017-11-12 Sun 14:30> | <2017-11-12 Sun 15:30> | 0 days 00:59 | 0 | 4943 | 4934 | | 89 | b | <2017-11-12 Sun 15:30> | <2017-11-13 Mon 18:27> | 1 days 02:57 | 1 | 25209 | 6210 | | 90 | b | <2017-11-13 Mon 19:14> | <2017-11-14 Tue 20:24> | 1 days 01:09 | 1 | 37497 | 8122 | | 91 | b | <2017-11-14 Tue 20:24> | <2017-11-15 Wed 21:44> | 1 days 01:20 | 1 | 37732 | 8108 | | 92 | b | <2017-11-15 Wed 21:45> | <2017-11-17 Fri 19:18> | 
1 days 21:32 | 1 | 67946 | 14730 | | 93 | c | <2017-11-17 Fri 19:18> | <2017-11-17 Fri 20:18> | 0 days 01:00 | 0 | 4977 | 4968 | | 94 | b | <2017-11-17 Fri 20:48> | <2017-11-19 Sun 2:34> | 1 days 05:46 | 1 | 44344 | 9422 | | 95 | b | <2017-11-19 Sun 2:35> | <2017-11-23 Thu 10:41> | 4 days 08:06 | 1 | 154959 | 33112 | | 96 | c | <2017-11-23 Thu 10:42> | <2017-11-23 Thu 17:43> | 0 days 07:01 | 0 | 34586 | 34496 | | 97 | b | <2017-11-23 Thu 17:43> | <2017-11-26 Sun 1:41> | 2 days 07:57 | 1 | 83404 | 18277 | | 98 | b | <2017-11-26 Sun 1:42> | <2017-11-26 Sun 21:18> | 0 days 19:36 | 1 | 29202 | 6285 | | 99 | b | <2017-11-26 Sun 21:18> | <2017-11-28 Tue 6:46> | 1 days 09:27 | 1 | 49921 | 10895 | | 100 | b | <2017-11-28 Tue 6:46> | <2017-11-29 Wed 6:40> | 0 days 23:53 | 1 | 35658 | 7841 | | 101 | b | <2017-11-29 Wed 6:40> | <2017-11-29 Wed 20:18> | 0 days 13:37 | 1 | 20326 | 4203 | | 102 | c | <2017-11-29 Wed 20:19> | <2017-11-29 Wed 22:19> | 0 days 02:00 | 0 | 9919 | 9898 | | 103 | b | <2017-11-29 Wed 22:26> | <2017-12-01 Fri 6:46> | 1 days 08:19 | 1 | 47381 | 7867 | | 104 | b | <2017-12-01 Fri 6:47> | <2017-12-02 Sat 6:48> | 1 days 00:00 | 1 | 35220 | 5866 | | 105 | b | <2017-12-02 Sat 6:48> | <2017-12-03 Sun 6:39> | 0 days 23:51 | 1 | 34918 | 5794 | | 106 | b | <2017-12-03 Sun 6:40> | <2017-12-04 Mon 6:54> | 1 days 00:14 | 1 | 35576 | 6018 | | 107 | b | <2017-12-04 Mon 6:54> | <2017-12-04 Mon 13:38> | 0 days 06:44 | 1 | 9883 | 1641 | | 108 | c | <2017-12-04 Mon 13:39> | <2017-12-04 Mon 17:39> | 0 days 04:00 | 0 | 19503 | 19448 | | 109 | b | <2017-12-04 Mon 17:47> | <2017-12-05 Tue 11:20> | 0 days 17:32 | 1 | 28402 | 8217 | | 110 | c | <2017-12-05 Tue 11:20> | <2017-12-05 Tue 13:20> | 0 days 01:59 | 0 | 9804 | 9786 | | 111 | b | <2017-12-05 Tue 13:23> | <2017-12-05 Tue 16:17> | 0 days 02:53 | 0 | 4244 | 644 | | 112 | b | <2017-12-06 Wed 14:50> | <2017-12-10 Sun 6:46> | 3 days 15:55 | 2 | 128931 | 19607 | | 113 | b | <2017-12-10 Sun 6:46> | <2017-12-11 Mon 6:49> | 1 days 
00:03 | 1 | 35100 | 5174 | | 114 | b | <2017-12-11 Mon 6:50> | <2017-12-11 Mon 18:33> | 0 days 11:43 | 1 | 17111 | 2542 | | 115 | b | <2017-12-11 Mon 18:36> | <2017-12-12 Tue 20:58> | 1 days 02:21 | 1 | 40574 | 9409 | | 116 | c | <2017-12-12 Tue 20:59> | <2017-12-12 Tue 22:59> | 0 days 02:00 | 0 | 9741 | 9724 | | 117 | b | <2017-12-12 Tue 23:56> | <2017-12-13 Wed 21:29> | 0 days 21:33 | 1 | 31885 | 5599 | | 118 | c | <2017-12-13 Wed 21:30> | <2017-12-13 Wed 23:30> | 0 days 02:00 | 0 | 9771 | 9748 | | 119 | b | <2017-12-14 Thu 0:07> | <2017-12-14 Thu 17:04> | 0 days 16:57 | 1 | 25434 | 4903 | | 120 | c | <2017-12-14 Thu 17:04> | <2017-12-14 Thu 21:04> | 0 days 04:00 | 0 | 19308 | 19261 | | 121 | b | <2017-12-14 Thu 21:07> | <2017-12-15 Fri 19:22> | 0 days 22:14 | 1 | 33901 | 6947 | | 122 | c | <2017-12-15 Fri 19:22> | <2017-12-16 Sat 1:20> | 0 days 05:57 | 0 | 29279 | 29208 | | 123 | b | <2017-12-16 Sat 1:21> | <2017-12-17 Sun 1:06> | 0 days 23:45 | 1 | 34107 | 3380 | | 124 | b | <2017-12-17 Sun 1:06> | <2017-12-19 Tue 2:57> | 2 days 01:50 | 2 | 71703 | 7504 | | 125 | b | <2017-12-19 Tue 2:57> | <2017-12-19 Tue 16:20> | 0 days 13:22 | 1 | 19262 | 1991 | | 126 | c | <2017-12-19 Tue 16:21> | <2017-12-19 Tue 19:21> | 0 days 02:59 | 0 | 14729 | 14689 | | 127 | b | <2017-12-19 Tue 19:27> | <2017-12-22 Fri 0:17> | 2 days 04:50 | 1 | 75907 | 7663 | | 128 | c | <2017-12-22 Fri 0:18> | <2017-12-22 Fri 9:23> | 0 days 09:05 | 0 | 44806 | 44709 | | 145 | c | <2018-02-17 Sat 17:18> | <2018-02-17 Sat 20:40> | 0 days 03:22 | 0 | 16797 | 16796 | | 146 | b | <2018-02-17 Sat 20:41> | <2018-02-18 Sun 18:12> | 0 days 21:30 | 1 | 32705 | 3054 | | 147 | c | <2018-02-18 Sun 18:12> | <2018-02-18 Sun 20:12> | 0 days 01:59 | 0 | 10102 | 10102 | | 148 | b | <2018-02-18 Sun 20:46> | <2018-02-19 Mon 17:24> | 0 days 20:37 | 1 | 31433 | 3120 | | 149 | c | <2018-02-19 Mon 17:25> | <2018-02-19 Mon 19:25> | 0 days 02:00 | 0 | 9975 | 9975 | | 150 | b | <2018-02-19 Mon 19:53> | <2018-02-20 Tue 17:36> 
| 0 days 21:42 | 1 | 33192 | 3546 | | 151 | c | <2018-02-20 Tue 17:36> | <2018-02-20 Tue 19:36> | 0 days 01:59 | 0 | 9907 | 9907 | | 152 | b | <2018-02-20 Tue 21:54> | <2018-02-21 Wed 18:05> | 0 days 20:10 | 1 | 30809 | 3319 | | 153 | c | <2018-02-21 Wed 18:05> | <2018-02-21 Wed 20:05> | 0 days 01:59 | 0 | 10103 | 10102 | | 154 | b | <2018-02-21 Wed 21:10> | <2018-02-22 Thu 17:23> | 0 days 20:12 | 1 | 30891 | 3426 | | 155 | c | <2018-02-22 Thu 17:23> | <2018-02-22 Thu 19:23> | 0 days 02:00 | 0 | 9861 | 9861 | | 156 | b | <2018-02-23 Fri 6:06> | <2018-02-23 Fri 17:41> | 0 days 11:35 | 1 | 17686 | 1866 | | 157 | c | <2018-02-23 Fri 17:41> | <2018-02-23 Fri 19:41> | 0 days 01:59 | 0 | 9962 | 9962 | | 158 | b | <2018-02-23 Fri 19:42> | <2018-02-26 Mon 8:46> | 2 days 13:03 | 1 | 93205 | 9893 | | 159 | c | <2018-02-26 Mon 8:46> | <2018-02-26 Mon 12:46> | 0 days 04:00 | 0 | 19879 | 19878 | | 160 | b | <2018-02-26 Mon 14:56> | <2018-03-01 Thu 10:24> | 2 days 19:28 | 1 | 103145 | 11415 | | 161 | c | <2018-03-01 Thu 10:26> | <2018-03-01 Thu 14:26> | 0 days 04:00 | 0 | 19944 | 19943 | | 162 | b | <2018-03-01 Thu 17:07> | <2018-03-04 Sun 20:16> | 3 days 03:08 | 3 | 114590 | 11897 | | 163 | c | <2018-03-04 Sun 20:17> | <2018-03-04 Sun 22:17> | 0 days 02:00 | 0 | 10093 | 10093 | | 164 | b | <2018-03-04 Sun 22:57> | <2018-03-06 Tue 19:15> | 1 days 20:18 | 2 | 67456 | 6488 | | 165 | c | <2018-03-06 Tue 19:15> | <2018-03-06 Tue 23:15> | 0 days 04:00 | 0 | 19882 | 19879 | | 166 | b | <2018-03-07 Wed 0:50> | <2018-03-07 Wed 18:28> | 0 days 17:38 | 1 | 26859 | 2565 | | 167 | c | <2018-03-07 Wed 18:29> | <2018-03-07 Wed 20:29> | 0 days 02:00 | 0 | 9938 | 9938 | | 168 | b | <2018-03-07 Wed 20:37> | <2018-03-13 Tue 16:54> | 5 days 20:16 | 0 | 213545 | 20669 | | 169 | c | <2018-03-13 Tue 16:55> | <2018-03-13 Tue 22:55> | 0 days 06:00 | 0 | 29874 | 29874 | | 170 | b | <2018-03-13 Tue 23:19> | <2018-03-14 Wed 21:01> | 0 days 21:42 | 1 | 33098 | 3269 | | 171 | c | <2018-03-14 Wed 21:01> | 
<2018-03-14 Wed 23:01> | 0 days 02:00 | 0 | 9999 | 9999 | | 172 | b | <2018-03-14 Wed 23:06> | <2018-03-15 Thu 17:57> | 0 days 18:50 | 1 | 28649 | 2773 | | 173 | c | <2018-03-15 Thu 17:59> | <2018-03-15 Thu 19:59> | 0 days 01:59 | 0 | 9898 | 9897 | | 174 | b | <2018-03-15 Thu 20:39> | <2018-03-16 Fri 16:27> | 0 days 19:48 | 1 | 30163 | 2961 | | 175 | c | <2018-03-16 Fri 16:28> | <2018-03-16 Fri 18:28> | 0 days 01:59 | 0 | 10075 | 10075 | | 176 | b | <2018-03-16 Fri 18:35> | <2018-03-17 Sat 20:55> | 1 days 02:19 | 1 | 40084 | 3815 | | 177 | c | <2018-03-17 Sat 20:55> | <2018-03-17 Sat 22:55> | 0 days 01:59 | 0 | 9967 | 9966 | | 178 | b | <2018-03-17 Sat 23:31> | <2018-03-22 Thu 17:40> | 4 days 18:09 | 5 | 174074 | 17949 | | 179 | c | <2018-03-22 Thu 17:41> | <2018-03-22 Thu 19:41> | 0 days 01:59 | 0 | 9887 | 9887 | | 180 | b | <2018-03-22 Thu 20:47> | <2018-03-24 Sat 18:10> | 1 days 21:22 | 1 | 69224 | 7423 | | 181 | c | <2018-03-24 Sat 18:10> | <2018-03-24 Sat 22:10> | 0 days 04:00 | 0 | 20037 | 20036 | | 182 | b | <2018-03-24 Sat 23:32> | <2018-03-26 Mon 19:46> | 1 days 19:14 | 2 | 65888 | 6694 | | 183 | c | <2018-03-26 Mon 19:47> | <2018-03-26 Mon 23:47> | 0 days 03:59 | 0 | 20026 | 20026 | | 184 | b | <2018-03-27 Tue 0:32> | <2018-03-30 Fri 14:18> | 3 days 13:45 | 0 | 130576 | 12883 | | 185 | c | <2018-03-30 Fri 14:18> | <2018-03-30 Fri 18:18> | 0 days 03:59 | 0 | 19901 | 19901 | | 186 | b | <2018-03-30 Fri 19:03> | <2018-04-11 Wed 16:03> | 11 days 21:00 | 0 | 434087 | 42830 | | 187 | c | <2018-04-11 Wed 16:04> | <2018-04-11 Wed 20:04> | 0 days 04:00 | 0 | 19667 | 19665 | | 188 | b | <2018-04-11 Wed 20:53> | <2018-04-17 Tue 10:53> | 5 days 14:00 | 0 | 204281 | 20781 | |-------+------+------------------------+------------------------+---------------+-------------+----------+--------| | Run-3 | | | | | | | | |-------+------+------------------------+------------------------+---------------+-------------+----------+--------| | 239 | c | <2018-10-20 Sat 18:31> | 
<2018-10-20 Sat 20:31> | 0 days 02:00 | 0 | 9565 | 9518 | | 240 | b | <2018-10-21 Sun 14:54> | <2018-10-22 Mon 16:15> | 1 days 01:21 | 1 | 38753 | 4203 | | 241 | c | <2018-10-22 Mon 16:16> | <2018-10-22 Mon 18:16> | 0 days 02:00 | 0 | 9480 | 9426 | | 242 | b | <2018-10-22 Mon 18:44> | <2018-10-23 Tue 22:08> | 1 days 03:24 | 1 | 41933 | 4843 | | 243 | c | <2018-10-23 Tue 22:09> | <2018-10-24 Wed 0:09> | 0 days 01:59 | 0 | 9488 | 9429 | | 244 | b | <2018-10-24 Wed 0:32> | <2018-10-24 Wed 19:24> | 0 days 18:52 | 1 | 28870 | 3317 | | 245 | c | <2018-10-24 Wed 19:25> | <2018-10-24 Wed 21:25> | 0 days 01:59 | 0 | 9573 | 9530 | | 246 | b | <2018-10-24 Wed 21:59> | <2018-10-25 Thu 16:18> | 0 days 18:18 | 1 | 27970 | 2987 | | 247 | c | <2018-10-25 Thu 16:19> | <2018-10-25 Thu 18:19> | 0 days 01:59 | 0 | 9389 | 9334 | | 248 | b | <2018-10-25 Thu 18:25> | <2018-10-26 Fri 22:29> | 1 days 04:04 | 1 | 42871 | 4544 | | 249 | c | <2018-10-26 Fri 22:30> | <2018-10-27 Sat 0:30> | 0 days 02:00 | 0 | 9473 | 9431 | | 250 | b | <2018-10-27 Sat 1:31> | <2018-10-27 Sat 22:26> | 0 days 20:54 | 1 | 31961 | 3552 | | 251 | c | <2018-10-27 Sat 22:26> | <2018-10-28 Sun 0:26> | 0 days 01:59 | 0 | 9551 | 9503 | | 253 | c | <2018-10-28 Sun 19:18> | <2018-10-28 Sun 21:39> | 0 days 02:20 | 0 | 11095 | 11028 | | 254 | b | <2018-10-28 Sun 21:40> | <2018-10-29 Mon 23:03> | 1 days 01:23 | 1 | 38991 | 4990 | | 255 | c | <2018-10-29 Mon 23:03> | <2018-10-30 Tue 1:03> | 0 days 02:00 | 0 | 9378 | 9330 | | 256 | b | <2018-10-30 Tue 1:49> | <2018-10-31 Wed 22:18> | 1 days 20:29 | 1 | 68315 | 8769 | | 257 | c | <2018-10-31 Wed 22:19> | <2018-11-01 Thu 0:19> | 0 days 01:59 | 0 | 9648 | 9592 | | 258 | b | <2018-11-01 Thu 0:20> | <2018-11-01 Thu 16:15> | 0 days 15:55 | 1 | 24454 | 3103 | | 259 | c | <2018-11-01 Thu 16:16> | <2018-11-01 Thu 17:31> | 0 days 01:14 | 0 | 5900 | 5864 | | 260 | c | <2018-11-01 Thu 17:39> | <2018-11-01 Thu 19:09> | 0 days 01:30 | 0 | 7281 | 7251 | | 261 | b | <2018-11-01 Thu 19:39> | 
<2018-11-04 Sun 15:23> | 2 days 19:43 | 3 | 103658 | 12126 | | 262 | c | <2018-11-04 Sun 15:24> | <2018-11-04 Sun 21:24> | 0 days 05:59 | 0 | 28810 | 28681 | | 263 | b | <2018-11-05 Mon 0:35> | <2018-11-05 Mon 20:28> | 0 days 19:52 | 1 | 30428 | 3610 | | 264 | c | <2018-11-05 Mon 20:28> | <2018-11-05 Mon 22:28> | 0 days 01:59 | 0 | 9595 | 9544 | | 265 | b | <2018-11-05 Mon 22:52> | <2018-11-07 Wed 22:14> | 1 days 23:21 | 1 | 72514 | 8429 | | 266 | c | <2018-11-07 Wed 22:14> | <2018-11-08 Thu 0:14> | 0 days 01:59 | 0 | 9555 | 9506 | | 267 | b | <2018-11-08 Thu 2:05> | <2018-11-08 Thu 6:54> | 0 days 04:48 | 0 | 7393 | 929 | | 268 | b | <2018-11-09 Fri 6:15> | <2018-11-09 Fri 17:20> | 0 days 11:04 | 1 | 16947 | 1974 | | 269 | c | <2018-11-09 Fri 17:20> | <2018-11-09 Fri 21:20> | 0 days 04:00 | 0 | 19382 | 19302 | | 270 | b | <2018-11-09 Fri 21:27> | <2018-11-11 Sun 21:02> | 1 days 23:34 | 2 | 72756 | 8078 | | 271 | c | <2018-11-11 Sun 21:03> | <2018-11-11 Sun 23:46> | 0 days 02:43 | 0 | 13015 | 12944 | | 272 | b | <2018-11-12 Mon 0:09> | <2018-11-14 Wed 19:07> | 2 days 18:58 | 3 | 102360 | 11336 | | 273 | c | <2018-11-14 Wed 19:08> | <2018-11-14 Wed 21:08> | 0 days 01:59 | 0 | 9535 | 9471 | | 274 | b | <2018-11-14 Wed 21:28> | <2018-11-17 Sat 18:14> | 2 days 20:45 | 3 | 105187 | 12101 | | 275 | c | <2018-11-17 Sat 18:14> | <2018-11-17 Sat 20:57> | 0 days 02:43 | 0 | 13179 | 13116 | | 276 | b | <2018-11-17 Sat 22:08> | <2018-11-22 Thu 2:26> | 4 days 04:17 | 2 | 153954 | 19640 | | 277 | c | <2018-11-22 Thu 2:26> | <2018-11-22 Thu 16:14> | 0 days 13:48 | 0 | 66052 | 65749 | | 278 | b | <2018-11-22 Thu 16:14> | <2018-11-23 Fri 10:51> | 0 days 18:36 | 0 | 28164 | 3535 | | 279 | b | <2018-11-24 Sat 10:51> | <2018-11-26 Mon 14:58> | 2 days 04:07 | 2 | 79848 | 9677 | | 280 | c | <2018-11-26 Mon 14:59> | <2018-11-26 Mon 18:59> | 0 days 04:00 | 0 | 19189 | 19112 | | 281 | b | <2018-11-26 Mon 19:02> | <2018-11-28 Wed 18:07> | 1 days 23:04 | 1 | 72230 | 8860 | | 282 | c | 
<2018-11-28 Wed 18:07> | <2018-11-28 Wed 20:51> | 0 days 02:43 | 0 | 12924 | 12860 | | 283 | b | <2018-11-28 Wed 22:31> | <2018-12-01 Sat 14:38> | 2 days 16:07 | 3 | 98246 | 11965 | | 284 | c | <2018-12-01 Sat 14:39> | <2018-12-01 Sat 18:39> | 0 days 03:59 | 0 | 19017 | 18904 | | 285 | b | <2018-12-01 Sat 19:06> | <2018-12-03 Mon 19:39> | 2 days 00:33 | 2 | 74405 | 8887 | | 286 | c | <2018-12-04 Tue 15:57> | <2018-12-04 Tue 17:57> | 0 days 02:00 | 0 | 9766 | 9715 | | 287 | b | <2018-12-04 Tue 19:07> | <2018-12-05 Wed 15:08> | 0 days 20:01 | 1 | 30598 | 3393 | | 288 | c | <2018-12-05 Wed 17:28> | <2018-12-05 Wed 19:28> | 0 days 02:00 | 0 | 9495 | 9443 | | 289 | b | <2018-12-05 Wed 23:07> | <2018-12-06 Thu 19:11> | 0 days 20:03 | 1 | 30629 | 3269 | | 290 | c | <2018-12-06 Thu 19:11> | <2018-12-06 Thu 21:11> | 0 days 02:00 | 0 | 9457 | 9394 | | 291 | b | <2018-12-06 Thu 23:14> | <2018-12-08 Sat 13:39> | 1 days 14:24 | 2 | 58602 | 6133 | | 292 | c | <2018-12-08 Sat 13:39> | <2018-12-08 Sat 15:39> | 0 days 02:00 | 0 | 9475 | 9426 | | 293 | b | <2018-12-08 Sat 17:42> | <2018-12-10 Mon 21:50> | 2 days 04:07 | 1 | 79677 | 8850 | | 294 | c | <2018-12-10 Mon 21:50> | <2018-12-10 Mon 23:50> | 0 days 02:00 | 0 | 9514 | 9467 | | 295 | b | <2018-12-11 Tue 0:54> | <2018-12-11 Tue 20:31> | 0 days 19:37 | 1 | 29981 | 3271 | | 296 | c | <2018-12-11 Tue 20:31> | <2018-12-11 Tue 22:31> | 0 days 02:00 | 0 | 9565 | 9517 | | 297 | b | <2018-12-12 Wed 0:14> | <2018-12-13 Thu 18:30> | 1 days 18:15 | 2 | 68124 | 12530 | | 298 | b | <2018-12-13 Thu 18:39> | <2018-12-15 Sat 6:41> | 1 days 12:01 | 1 | 53497 | 0 | | 299 | b | <2018-12-15 Sat 6:43> | <2018-12-15 Sat 18:13> | 0 days 11:29 | 1 | 17061 | 0 | | 300 | c | <2018-12-15 Sat 18:38> | <2018-12-15 Sat 20:38> | 0 days 02:00 | 0 | 9466 | 9415 | | 301 | b | <2018-12-15 Sat 21:34> | <2018-12-17 Mon 14:17> | 1 days 16:43 | 2 | 62454 | 7751 | | 302 | c | <2018-12-17 Mon 14:18> | <2018-12-17 Mon 16:18> | 0 days 01:59 | 0 | 9616 | 9577 | | 303 | b 
| <2018-12-17 Mon 16:52> | <2018-12-18 Tue 16:41> | 0 days 23:48 | 1 | 36583 | 4571 | | 304 | c | <2018-12-19 Wed 9:33> | <2018-12-19 Wed 11:33> | 0 days 01:59 | 0 | 9531 | 9465 | | 306 | b | <2018-12-20 Thu 6:55> | <2018-12-20 Thu 11:53> | 0 days 04:58 | 1 | 7546 | 495 | \normalsize ** Full version :extended: This is a shortened version of the data in appendix [[#sec:appendix:cast_data_taking_notes]]. * CAST data taking notes [0/1] :Appendix:extended: :PROPERTIES: :CUSTOM_ID: sec:appendix:cast_data_taking_notes :END: *NOTE*: This can't be properly exported to the full thesis in a PDF. The layout is all sorts of broken. This file contains a run list of all runs taken during the data taking period starting in October 2017. It lists each run, separated as data or calibration run and includes notes about chip settings etc. (in case there were changes). - [ ] *HAVE A NOEXPORT SECTION EACH ABOUT:* - CAST detector documentation - Shifter documentation - ..? ** Run table This section contains a table of the different runs, which identifies what type each run is, when it started and ended. The type column describes the run as either - 'd' == data runs - 'c' == calibration run - 'x' == experimental, related to development, problems etc *** Run in 2017 | Run # | Type {d, c} | Start | End | Length | Backup? 
| Notes |
|-------+-------------+------------------------+------------------------+--------------+---------+--------------------------------------------|
| 76 | d | <2017-10-30 Mon 18:39> | <2017-11-02 Thu 5:24> | 2 days 10:44 | y | |
| 77 | d | <2017-11-02 Thu 05:24> | <2017-11-03 Fri 5:28> | 1 days 00:03 | y | |
| 78 | d | <2017-11-03 Fri 05:28> | <2017-11-03 Fri 20:45> | 0 days 15:17 | y | |
| 79 | d | <2017-11-03 Fri 20:46> | <2017-11-05 Sun 0:09> | 1 days 03:22 | y | |
| 80 | d | <2017-11-05 Sun 00:09> | <2017-11-05 Sun 23:50> | 0 days 23:40 | y | |
| 81 | d | <2017-11-05 Sun 23:54> | <2017-11-07 Tue 0:00> | 1 days 00:06 | y | |
| 82 | d | <2017-11-07 Tue 00:01> | <2017-11-08 Wed 15:58> | 1 days 15:56 | y | |
| 83 | c | <2017-11-08 Wed 16:27> | <2017-11-08 Wed 17:27> | 0 days 00:59 | y | |
| 84 | d | <2017-11-08 Wed 17:49> | <2017-11-09 Thu 19:01> | 1 days 01:11 | y | |
| 85 | d | <2017-11-09 Thu 19:01> | <2017-11-09 Thu 21:46> | 0 days 02:45 | y | |
| 86 | d | <2017-11-09 Thu 21:47> | <2017-11-11 Sat 2:17> | 1 days 04:29 | y | |
| 87 | d | <2017-11-11 Sat 2:17> | <2017-11-12 Sun 14:29> | 1 days 12:11 | y | |
| 88 | c | <2017-11-12 Sun 14:30> | <2017-11-12 Sun 15:30> | 0 days 0:59 | y | |
| 89 | d | <2017-11-12 Sun 15:30> | <2017-11-13 Mon 18:27> | 1 days 2:57 | y | See note below about length |
| 90 | d | <2017-11-13 Mon 19:14> | <2017-11-14 Tue 20:24> | 1 days 1:09 | y | |
| 91 | d | <2017-11-14 Tue 20:24> | <2017-11-15 Wed 21:44> | 1 days 1:20 | y | |
| 92 | d | <2017-11-15 Wed 21:45> | <2017-11-17 Fri 19:18> | 1 days 21:32 | y | No Run on <2017-11-16 Thu> |
| 93 | c | <2017-11-17 Fri 19:18> | <2017-11-17 Fri 20:18> | 0 days 1:00 | y | |
| 94 | d | <2017-11-17 Fri 20:48> | <2017-11-19 Sun 2:34> | 1 days 5:46 | y | |
| 95 | d | <2017-11-19 Sun 2:35> | <2017-11-23 Thu 10:41> | 4 days 8:06 | y | Beginning of GRID |
| 96 | c | <2017-11-23 Thu 10:42> | <2017-11-23 Thu 17:43> | 0 days 7:01 | y | Long calibration for statistics |
| 97 | d | <2017-11-23 Thu 17:43> |
<2017-11-26 Sun 1:41> | 2 days 7:57 | y | ~4 min of tracking lost on 25/11, see note | | 98 | d | <2017-11-26 Sun 1:42> | <2017-11-26 Sun 21:18> | 0 days 19:36 | y | | | 99 | d | <2017-11-26 Sun 21:18> | <2017-11-28 Tue 6:46> | 1 days 9:27 | y | | | 100 | d | <2017-11-28 Tue 6:46> | <2017-11-29 Wed 6:40> | 0 days 23:53 | y | | | 101 | d | <2017-11-29 Wed 6:40> | <2017-11-29 Wed 20:18> | 0 days 13:37 | y | FADC amp settings changed, see below | | 102 | c | <2017-11-29 Wed 20:19> | <2017-11-29 Wed 22:19> | 0 days 2:00 | y | | | 103 | d | <2017-11-29 Wed 22:26> | <2017-12-01 Fri 6:46> | 1 days 8:19 | y | | | 104 | d | <2017-12-01 Fri 6:47> | <2017-12-02 Sat 6:48> | 1 days 0:00 | y | | | 105 | d | <2017-12-02 Sat 6:48> | <2017-12-03 Sun 6:39> | 0 days 23:51 | y | | | 106 | d | <2017-12-03 Sun 6:40> | <2017-12-04 Mon 6:54> | 1 days 0:14 | y | | | 107 | d | <2017-12-04 Mon 6:54> | <2017-12-04 Mon 13:38> | 0 days 6:44 | y | | | 108 | c | <2017-12-04 Mon 13:39> | <2017-12-04 Mon 17:39> | 0 days 4:00 | y | | | 109 | d | <2017-12-04 Mon 17:47> | <2017-12-05 Tue 11:20> | 0 days 17:32 | y | A lot of noise during this shift | | 110 | c | <2017-12-05 Tue 11:20> | <2017-12-05 Tue 13:20> | 0 days 1:59 | y | | | 111 | d | <2017-12-05 Tue 13:23> | <2017-12-05 Tue 16:17> | 0 days 2:53 | y | gas interlock box fuse burned, early stop | | 112 | d | <2017-12-06 Wed 14:50> | <2017-12-10 Sun 6:46> | 3 days 15:55 | y | FADC: int. 
time: 50->100->50ns + quench | | 113 | d | <2017-12-10 Sun 6:46> | <2017-12-11 Mon 6:49> | 1 days 0:03 | y | | | 114 | d | <2017-12-11 Mon 6:50> | <2017-12-11 Mon 18:33> | 0 days 11:43 | y | | | 115 | d | <2017-12-11 Mon 18:36> | <2017-12-12 Tue 20:58> | 1 days 2:21 | y | | | 116 | c | <2017-12-12 Tue 20:59> | <2017-12-12 Tue 22:59> | 0 days 2:00 | y | | | 117 | d | <2017-12-12 Tue 23:56> | <2017-12-13 Wed 21:29> | 0 days 21:33 | y | | | 118 | c | <2017-12-13 Wed 21:30> | <2017-12-13 Wed 23:30> | 0 days 2:00 | y | | | 119 | d | <2017-12-14 Thu 0:07> | <2017-12-14 Thu 17:04> | 0 days 16:57 | y | | | 120 | c | <2017-12-14 Thu 17:04> | <2017-12-14 Thu 21:04> | 0 days 4:00 | y | | | 121 | d | <2017-12-14 Thu 21:07> | <2017-12-15 Fri 19:22> | 0 days 22:14 | y | Jochen: FADC int. time: 50->100ns, c note | | 122 | c | <2017-12-15 Fri 19:22> | <2017-12-16 Sat 1:20> | 0 days 5:57 | y | | | 123 | d | <2017-12-16 Sat 1:21> | <2017-12-17 Sun 1:06> | 0 days 23:45 | y | | | 124 | d | <2017-12-17 Sun 1:06> | <2017-12-19 Tue 2:57> | 2 days 1:50 | y | | | 125 | d | <2017-12-19 Tue 2:57> | <2017-12-19 Tue 16:20> | 0 days 13:22 | y | | | 126 | c | <2017-12-19 Tue 16:21> | <2017-12-19 Tue 19:21> | 0 days 2:59 | y | | | 127 | d | <2017-12-19 Tue 19:27> | <2017-12-22 Fri 0:17> | 2 days 4:50 | y | | | 128 | c | <2017-12-22 Fri 0:18> | <2017-12-22 Fri 9:23> | 0 days 9:05 | y | Final run of 2017 | *** Run 1 in 2018 | Run # | Type {d, c} | Start | End | Length | # trackings | Backup? | Notes | |-------+-------------+------------------------+------------------------+---------------+-------------+---------+-------------------------------------------------------------------| | 137 | d | <2018-02-15 Thu 5:34> | <2018-02-15 Thu 17:08> | 0 days 11:34 | | y* | WARNING: do not use, THL problems, see note! | | 138 | c | <2018-02-15 Thu 17:09> | <2018-02-15 Thu 19:34> | 0 days 2:24 | | y* | Seems like gas amplification down by factor 2! 
| | 139 | c | <2018-02-15 Thu 20:31> | <2018-02-15 Thu 21:53> | 0 days 1:22 | | y* | Central THL 450 -> 400 from here on! (if result good) | | 140 | d | <2018-02-15 Thu 21:53> | <2018-02-16 Fri 17:59> | 0 days 20:05 | | y* | Running THL from Run 139. | | 141 | x (c) | <2018-02-16 Fri 18:00> | <2018-02-17 Sat 13:28> | 0 days 19:28 | | y* | Calibration run over night, showcasing increasing [Power Problem] | | 142 | x (d) | <2018-02-17 Sat 14:04> | <2018-02-17 Sat 16:17> | 0 days 2:12 | | y* | Background data run w/ THL 400 and problems, see [Power Problem] | | 143 | x (c) | <2018-02-17 Sat 16:18> | <2018-02-17 Sat 16:26> | 0 days 0:07 | | y* | More calibration for testing | | 144 | x (c) | <2018-02-17 Sat 16:32> | <2018-02-17 Sat 17:18> | 0 days 0:45 | | y* | Calibration run in which [Power Problem] was fixed | | 145 | c | <2018-02-17 Sat 17:18> | <2018-02-17 Sat 20:40> | 0 days 3:22 | | y | Proper calibration run w/ THL = 450 and fixed [Power Problem] | | 146 | d | <2018-02-17 Sat 20:41> | <2018-02-18 Sun 18:12> | 0 days 21:30 | | y | First good shift after power supply problem | | 147 | c | <2018-02-18 Sun 18:12> | <2018-02-18 Sun 20:12> | 0 days 1:59 | | y | Calibration run | | 148 | d | <2018-02-18 Sun 20:46> | <2018-02-19 Mon 17:24> | 0 days 20:37 | | y | | | 149 | c | <2018-02-19 Mon 17:25> | <2018-02-19 Mon 19:25> | 0 days 2:00 | | y | | | 150 | d | <2018-02-19 Mon 19:53> | <2018-02-20 Tue 17:36> | 0 days 21:42 | | y | | | 151 | c | <2018-02-20 Tue 17:36> | <2018-02-20 Tue 19:36> | 0 days 1:59 | | y | | | 152 | d | <2018-02-20 Tue 21:54> | <2018-02-21 Wed 18:05> | 0 days 20:10 | | y | | | 153 | c | <2018-02-21 Wed 18:05> | <2018-02-21 Wed 20:05> | 0 days 1:59 | | y | | | 154 | d | <2018-02-21 Wed 21:10> | <2018-02-22 Thu 17:23> | 0 days 20:12 | | y | | | 155 | c | <2018-02-22 Thu 17:23> | <2018-02-22 Thu 19:23> | 0 days 1:59 | | y | | | 156 | d | <2018-02-23 Fri 6:06> | <2018-02-23 Fri 17:41> | 0 days 11:35 | | y | | | 157 | c | <2018-02-23 Fri 17:41> | 
<2018-02-23 Fri 19:41> | 0 days 1:59 | | y | | | 158 | d | <2018-02-23 Fri 19:42> | <2018-02-26 Mon 8:46> | 2 days 13:03 | | y | | | 159 | c | <2018-02-26 Mon 8:46> | <2018-02-26 Mon 12:46> | 0 days 4:00 | | y | | | 160 | d | <2018-02-26 Mon 14:56> | <2018-03-01 Thu 10:24> | 2 days 19:28 | | y | | | 161 | c | <2018-03-01 Thu 10:26> | <2018-03-01 Thu 14:26> | 0 days 4:00 | | y | | | 162 | d | <2018-03-01 Thu 17:07> | <2018-03-04 Sun 20:16> | 3 days 3:08 | | y | | | 163 | c | <2018-03-04 Sun 20:17> | <2018-03-04 Sun 22:17> | 0 days 2:00 | | y | | | 164 | d | <2018-03-04 Sun 22:57> | <2018-03-06 Tue 19:15> | 1 days 20:18 | | y | | | 165 | c | <2018-03-06 Tue 19:15> | <2018-03-06 Tue 23:15> | 0 days 4:00 | | y | | | 166 | d | <2018-03-07 Wed 0:50> | <2018-03-07 Wed 18:28> | 0 days 17:38 | | y | | | 167 | c | <2018-03-07 Wed 18:29> | <2018-03-07 Wed 20:29> | 0 days 2:00 | | y | | | 168 | d | <2018-03-07 Wed 20:37> | <2018-03-13 Tue 16:54> | 5 days 20:16 | | y | | | 169 | c | <2018-03-13 Tue 16:55> | <2018-03-13 Tue 22:55> | 0 days 6:00 | | y | | | 170 | d | <2018-03-13 Tue 23:19> | <2018-03-14 Wed 21:01> | 0 days 21:42 | 1 | y | First shift including sun tracking | | 171 | c | <2018-03-14 Wed 21:01> | <2018-03-14 Wed 23:01> | 0 days 2:00 | | y | | | 172 | d | <2018-03-14 Wed 23:06> | <2018-03-15 Thu 17:57> | 0 days 18:50 | 1 | y | | | 173 | c | <2018-03-15 Thu 17:59> | <2018-03-15 Thu 19:59> | 0 days 1:59 | | y | | | 174 | d | <2018-03-15 Thu 20:39> | <2018-03-16 Fri 16:27> | 0 days 19:48 | 1 | y | | | 175 | c | <2018-03-16 Fri 16:28> | <2018-03-16 Fri 18:28> | 0 days 1:59 | | y | | | 176 | d | <2018-03-16 Fri 18:35> | <2018-03-17 Sat 20:55> | 1 days 2:19 | 1 | y | | | 177 | c | <2018-03-17 Sat 20:55> | <2018-03-17 Sat 22:55> | 0 days 1:59 | | y | | | 178 | d | <2018-03-17 Sat 23:31> | <2018-03-22 Thu 17:40> | 4 days 18:09 | 5 | y | | | 179 | c | <2018-03-22 Thu 17:41> | <2018-03-22 Thu 19:41> | 0 days 1:59 | | y | | | 180 | d | <2018-03-22 Thu 20:47> | <2018-03-24 Sat 
18:10> | 1 days 21:22 | 2 | y | |
| 181 | c | <2018-03-24 Sat 18:10> | <2018-03-24 Sat 22:10> | 0 days 4:00 | | y | |
| 182 | d | <2018-03-24 Sat 23:32> | <2018-03-26 Mon 19:46> | 1 days 19:14 | 2 | y | Last run including tracking |
| 183 | c | <2018-03-26 Mon 19:47> | <2018-03-26 Mon 23:47> | 0 days 3:59 | | y | |
| 184 | d | <2018-03-27 Tue 0:32> | <2018-03-30 Fri 14:18> | 3 days 13:45 | | y | |
| 185 | c | <2018-03-30 Fri 14:18> | <2018-03-30 Fri 18:18> | 0 days 3:59 | | y | |
| 186 | d | <2018-03-30 Fri 19:03> | <2018-04-11 Wed 16:03> | 11 days 21:00 | | y | |
| 187 | c | <2018-04-11 Wed 16:04> | <2018-04-11 Wed 20:04> | 0 days 4:00 | | y | |
| 188 | d | <2018-04-11 Wed 20:53> | <2018-04-17 Tue 10:53> | 5 days 14:00 | | y | Last background data run of 2017/18 |
| 189 | X* | <2018-04-20 Fri 9:53> | <2018-04-21 Sat 18:39> | 1 days 08:45 | | y | X-ray finger run <2018-04-20 Fri> |

y* == located in 2018/BadRuns folder to not mix up with 'good' data
X* == X-ray finger run
*** Run 2 in 2018
The second data taking period started on <2018-10-20 Sat 18:33> with a 2h calibration run, after the detector was finally fixed on <2018-10-19 Fri>. The issue was a bad solder joint on the Phoenix connector of the intermediate board.
| Run # | Type {d, c} | Start | End | Length | # trackings | Backup? | Notes |
|-------+-------------+-------+-----+--------------+-------------+---------+------------------------|
| 239 | c | | | 0 days 02:00 | | | |
| 240 | d | | | | 1 | | no B field! |
| 297 | d | | | | | | crazy noise at the end |
| 298 | d | | | | | | run without FADC |

| Run # | Type | DataType | Start | End | Length | # trackings | # frames | # FADC | Backup?
| Notes |
|-------+---------------+----------+------------------------+------------------------+--------------+-------------+----------+----------------+---------+-------|
| 239 | rtCalibration | rfNewTos | <2018-10-20 Sat 18:31> | <2018-10-20 Sat 20:31> | 0 days 02:00 | | 9565 | 9518 | y | |
| 240 | rtBackground | rfNewTos | <2018-10-21 Sun 14:54> | <2018-10-22 Mon 16:15> | 1 days 01:21 | | 38753 | 4203 | y | |
| 241 | rtCalibration | rfNewTos | <2018-10-22 Mon 16:16> | <2018-10-22 Mon 18:16> | 0 days 02:00 | | 9480 | 9426 | y | |
| 242 | rtBackground | rfNewTos | <2018-10-22 Mon 18:44> | <2018-10-23 Tue 22:08> | 1 days 03:24 | | 41933 | 4843 | y | |
| 243 | rtCalibration | rfNewTos | <2018-10-23 Tue 22:09> | <2018-10-24 Wed 0:09> | 0 days 01:59 | | 9488 | 9429 | y | |
| 244 | rtBackground | rfNewTos | <2018-10-24 Wed 0:32> | <2018-10-24 Wed 19:24> | 0 days 18:52 | | 28870 | 3317 | y | |
| 245 | rtCalibration | rfNewTos | <2018-10-24 Wed 19:25> | <2018-10-24 Wed 21:25> | 0 days 01:59 | | 9573 | 9530 | y | |
| 246 | rtBackground | rfNewTos | <2018-10-24 Wed 21:59> | <2018-10-25 Thu 16:18> | 0 days 18:18 | | 27970 | 2987 | y | |
| 247 | rtCalibration | rfNewTos | <2018-10-25 Thu 16:19> | <2018-10-25 Thu 18:19> | 0 days 01:59 | | 9389 | 9334 | y | |
| 248 |
rtBackground | rfNewTos | <2018-10-25 Thu 18:25> | <2018-10-26 Fri 22:29> | 1 days 04:04 | | 42871 | 4544 | y | | | 249 | rtCalibration | rfNewTos | <2018-10-26 Fri 22:30> | <2018-10-27 Sat 0:30> | 0 days 02:00 | | 9473 | 9431 | y | | | 250 | rtBackground | rfNewTos | <2018-10-27 Sat 1:31> | <2018-10-27 Sat 22:26> | 0 days 20:54 | | 31961 | 3552 | y | | | 251 | rtCalibration | rfNewTos | <2018-10-27 Sat 22:26> | <2018-10-28 Sun 0:26> | 0 days 01:59 | | 9551 | 9503 | y | | | 252 | rtNone | rfNewTos | <2018-10-28 Sun 0:59> | <2018-10-28 Sun 2:20> | 0 days 01:20 | | 2060 | 214 | y | | | 253 | rtCalibration | rfNewTos | <2018-10-28 Sun 19:18> | <2018-10-28 Sun 21:39> | 0 days 02:20 | | 11095 | 11028 | y | | | 254 | rtBackground | rfNewTos | <2018-10-28 Sun 21:40> | <2018-10-29 Mon 23:03> | 1 days 01:23 | | 38991 | 4990 | y | | | 255 | rtCalibration | rfNewTos | <2018-10-29 Mon 23:03> | <2018-10-30 Tue 1:03> | 0 days 02:00 | | 9378 | 9330 | y | | | 256 | rtBackground | rfNewTos | <2018-10-30 Tue 1:49> | <2018-10-31 Wed 22:18> | 1 days 20:29 | | 68315 | 8769 | y | | | 257 | rtCalibration | rfNewTos | <2018-10-31 Wed 22:19> | <2018-11-01 Thu 0:19> | 0 days 01:59 | | 9648 | 9592 | y | | | 258 | rtBackground | rfNewTos | <2018-11-01 Thu 0:20> | <2018-11-01 Thu 16:15> | 0 days 15:55 | | 24454 | 3103 | y | | | 259 | rtCalibration | rfNewTos | <2018-11-01 Thu 16:16> | <2018-11-01 Thu 17:31> | 0 days 01:14 | | 5900 | 5864 | y | | | 260 | rtCalibration | rfNewTos | <2018-11-01 Thu 17:39> | <2018-11-01 Thu 19:09> | 0 days 01:30 | | 7281 | 7251 | y | | | 261 | rtBackground | rfNewTos | <2018-11-01 Thu 19:39> | <2018-11-04 Sun 15:23> | 2 days 19:43 | | 103658 | 12126 | y | | | 262 | rtCalibration | rfNewTos | <2018-11-04 Sun 15:24> | <2018-11-04 Sun 21:24> | 0 days 05:59 | | 28810 | 28681 | y | | | 263 | rtBackground | rfNewTos | <2018-11-05 Mon 0:35> | <2018-11-05 Mon 20:28> | 0 days 19:52 | | 30428 | 3610 | y | | | 264 | rtCalibration | rfNewTos | <2018-11-05 Mon 20:28> | 
<2018-11-05 Mon 22:28> | 0 days 01:59 | | 9595 | 9544 | y | | | 265 | rtBackground | rfNewTos | <2018-11-05 Mon 22:52> | <2018-11-07 Wed 22:14> | 1 days 23:21 | | 72514 | 8429 | y | | | 266 | rtCalibration | rfNewTos | <2018-11-07 Wed 22:14> | <2018-11-08 Thu 0:14> | 0 days 01:59 | | 9555 | 9506 | y | | | 267 | rtBackground | rfNewTos | <2018-11-08 Thu 2:05> | <2018-11-08 Thu 6:54> | 0 days 04:48 | | 7405 | 930 | y | | | 268 | rtBackground | rfNewTos | <2018-11-09 Fri 6:15> | <2018-11-09 Fri 17:20> | 0 days 11:04 | | 16947 | 1974 | y | | | 269 | rtCalibration | rfNewTos | <2018-11-09 Fri 17:20> | <2018-11-09 Fri 21:20> | 0 days 04:00 | | 19382 | 19302 | y | | | 270 | rtBackground | rfNewTos | <2018-11-09 Fri 21:27> | <2018-11-11 Sun 21:02> | 1 days 23:34 | | 72756 | 8078 | y | | | 271 | rtCalibration | rfNewTos | <2018-11-11 Sun 21:03> | <2018-11-11 Sun 23:46> | 0 days 02:43 | | 13015 | 12944 | y | | | 272 | rtBackground | rfNewTos | <2018-11-12 Mon 0:09> | <2018-11-14 Wed 19:07> | 2 days 18:58 | | 102360 | 11336 | y | | | 273 | rtCalibration | rfNewTos | <2018-11-14 Wed 19:08> | <2018-11-14 Wed 21:08> | 0 days 01:59 | | 9535 | 9471 | y | | | 274 | rtBackground | rfNewTos | <2018-11-14 Wed 21:28> | <2018-11-17 Sat 18:14> | 2 days 20:45 | | 105187 | 12101 | y | | | 275 | rtCalibration | rfNewTos | <2018-11-17 Sat 18:14> | <2018-11-17 Sat 20:57> | 0 days 02:43 | | 13179 | 13116 | y | | | 276 | rtBackground | rfNewTos | <2018-11-17 Sat 22:08> | <2018-11-22 Thu 2:26> | 4 days 04:17 | | 153954 | 19640 | y | | | 277 | rtCalibration | rfNewTos | <2018-11-22 Thu 2:26> | <2018-11-22 Thu 16:14> | 0 days 13:48 | | 66052 | 65749 | y | | | 278 | rtBackground | rfNewTos | <2018-11-22 Thu 16:14> | <2018-11-23 Fri 10:51> | 0 days 18:36 | | 28899 | 3581 | y | | | 279 | rtBackground | rfNewTos | <2018-11-24 Sat 10:51> | <2018-11-26 Mon 14:58> | 2 days 04:07 | | 79848 | 9677 | y | | | 280 | rtCalibration | rfNewTos | <2018-11-26 Mon 14:59> | <2018-11-26 Mon 18:59> | 0 days 04:00 | | 
19189 | 19112 | y | | | 281 | rtBackground | rfNewTos | <2018-11-26 Mon 19:02> | <2018-11-28 Wed 18:07> | 1 days 23:04 | | 72230 | 8860 | y | | | 282 | rtCalibration | rfNewTos | <2018-11-28 Wed 18:07> | <2018-11-28 Wed 20:51> | 0 days 02:43 | | 12924 | 12860 | y | | | 283 | rtBackground | rfNewTos | <2018-11-28 Wed 22:31> | <2018-12-01 Sat 14:38> | 2 days 16:07 | | 98246 | 11965 | y | | | 284 | rtCalibration | rfNewTos | <2018-12-01 Sat 14:39> | <2018-12-01 Sat 18:39> | 0 days 03:59 | | 19017 | 18904 | y | | | 285 | rtBackground | rfNewTos | <2018-12-01 Sat 19:06> | <2018-12-03 Mon 19:39> | 2 days 00:33 | | 74431 | 8888 | y | | | 286 | rtCalibration | rfNewTos | <2018-12-04 Tue 15:57> | <2018-12-04 Tue 17:57> | 0 days 02:00 | | 9766 | 9715 | y | | | 287 | rtBackground | rfNewTos | <2018-12-04 Tue 19:07> | <2018-12-05 Wed 15:08> | 0 days 20:01 | | 30622 | 3395 | y | | | 288 | rtCalibration | rfNewTos | <2018-12-05 Wed 17:28> | <2018-12-05 Wed 19:28> | 0 days 02:00 | | 9495 | 9443 | y | | | 289 | rtBackground | rfNewTos | <2018-12-05 Wed 23:07> | <2018-12-06 Thu 19:11> | 0 days 20:03 | | 30629 | 3269 | y | | | 290 | rtCalibration | rfNewTos | <2018-12-06 Thu 19:11> | <2018-12-06 Thu 21:11> | 0 days 02:00 | | 9457 | 9394 | y | | | 291 | rtBackground | rfNewTos | <2018-12-06 Thu 23:14> | <2018-12-08 Sat 13:39> | 1 days 14:24 | | 58602 | 6133 | y | | | 292 | rtCalibration | rfNewTos | <2018-12-08 Sat 13:39> | <2018-12-08 Sat 15:39> | 0 days 02:00 | | 9475 | 9426 | y | | | 293 | rtBackground | rfNewTos | <2018-12-08 Sat 17:42> | <2018-12-10 Mon 21:50> | 2 days 04:07 | | 79677 | 8850 | y | | | 294 | rtCalibration | rfNewTos | <2018-12-10 Mon 21:50> | <2018-12-10 Mon 23:50> | 0 days 02:00 | | 9514 | 9467 | y | | | 295 | rtBackground | rfNewTos | <2018-12-11 Tue 0:54> | <2018-12-11 Tue 20:31> | 0 days 19:37 | | 29981 | 3271 | y | | | 296 | rtCalibration | rfNewTos | <2018-12-11 Tue 20:31> | <2018-12-11 Tue 22:31> | 0 days 02:00 | | 9565 | 9517 | y | | | 297 | rtBackground 
| rfNewTos | <2018-12-12 Wed 0:14> | <2018-12-13 Thu 18:30> | 1 days 18:15 | | 68124 | 12530 | y | |
| 298 | rtBackground | rfNewTos | <2018-12-13 Thu 18:39> | <2018-12-15 Sat 6:41> | 1 days 12:01 | | 53497 | 0 | y | |
| 299 | rtBackground | rfNewTos | <2018-12-15 Sat 6:43> | <2018-12-15 Sat 18:13> | 0 days 11:29 | | 17061 | 0 | y | |
| 300 | rtCalibration | rfNewTos | <2018-12-15 Sat 18:38> | <2018-12-15 Sat 20:38> | 0 days 02:00 | | 9466 | 9415 | y | |
| 301 | rtBackground | rfNewTos | <2018-12-15 Sat 21:34> | <2018-12-17 Mon 14:17> | 1 days 16:43 | | 62454 | 7751 | y | |
| 302 | rtCalibration | rfNewTos | <2018-12-17 Mon 14:18> | <2018-12-17 Mon 16:18> | 0 days 01:59 | | 9616 | 9577 | y | |
| 303 | rtBackground | rfNewTos | <2018-12-17 Mon 16:52> | <2018-12-18 Tue 16:41> | 0 days 23:48 | | 36583 | 4571 | y | |
| 304 | rtCalibration | rfNewTos | <2018-12-19 Wed 9:33> | <2018-12-19 Wed 11:33> | 0 days 01:59 | | 9531 | 9465 | y | |
| 305 | rtCalibration | rfNewTos | <2018-12-19 Wed 13:24> | <2018-12-20 Thu 3:23> | 0 days 13:58 | | 32655 | 25702 | y* | |
| 306 | rtBackground | rfNewTos | <2018-12-20 Thu 6:55> | <2018-12-20 Thu 11:53> | 0 days 04:58 | | 7574 | 496 | y | |

NOTE: Run 305 is considered a *bad* run now! See [[file:~/org/Doc/StatusAndProgress.org]] for more information. It is *not* a calibration run. Probably it is just a background run, but the confusion makes me not want to trust it!

y* == located in 2018/BadRuns folder to not mix up with 'good' data
X* == X-ray finger run

X-ray finger runs are apparently number 21 (old, before data taking??) and then 189 at the end of the 2017/18 data taking in Apr 2018. See [[file:analysis.org]] for the reference to run 21.
*** Total run time
After Run 96:
- Currently 23.85625 days (552.5 hours) of data taking (raw, beginning to end)
- of that were 10 hours calibration
- 17 shifts à 90 minutes => 25.5 hours
=>
- 517 hours background
- 10 hours calibration
- 25.5 hours tracking
After Run 116 (since excl.
96):
- another 449.95 hours of data taking
- of that were 16 shifts => ~24 hours tracking
- 10 hours of calibration
=>
- 417.95 hours of background
- 24 hours of tracking
- 10 hours of calibration
Combined so far:
- background: 934.95 h
- tracking: 49.5 h
- calibration: 20 h
Up to incl. 128 (since excl. 116):
- 4 days, 100 hours, 271 minutes background
- 200.5 hours of background + tracking
- 8 shifts since run 117 incl = 12 hours
- calibration: 22 hours
=>
- 188 h background
- 12 h tracking
- 22 h calibration
Combined Runs 2017:
- background: 1123 h
- tracking: 61.5 h
- calibration: 42 h
Run time 2018 from run 145 to run 167:
- total time: 8 days + 211 hours + 578 minutes = 17 days + 4 hours + 38 minutes
- background + tracking: 15 days + 5 hours + 3.5 minutes ~ 365 h
- background (90 minutes tracking): 342.5 h
- tracking: 22.5 h
- calibration: 31 hours + 17 minutes ~ 31.5 h
Run time 2018 from run 168 to Run 187:
- background + tracking: 26 days + 172 hours + 265 minutes = 33 days + 8 hours + 25 minutes ~ 800.5 h
- # trackings: 13
- background: 781 h
- tracking: 19.5 h
- calibration: 32 hours
Total time of Run 2017 / 2018:
- background: 2288 h
- tracking: 103.5 h
- calibration: 105.5 h
** Data runs
This section covers the data runs that took place. Unless otherwise stated, the runs use the FSR files from the Septem H calibration Git repository, in the folder fsr_in_use. These are the THL values obtained during calibration, but with THL+50 for chips 1, 2, 5 and 6. The HV values are the ones documented in the detector documentation [[file:Doc/Detector/CastDetectorDocumentation.org]] (as of <2017-11-09 Thu 19:05>).
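The mixed-unit sums in the total run time bookkeeping above (e.g. "8 days + 211 hours + 578 minutes") can be normalized mechanically. A minimal Python sketch to verify the arithmetic; this is not part of the actual analysis tooling:

```python
from datetime import timedelta

# Normalize the mixed days + hours + minutes sums from the
# "Total run time" section above into a single duration.
# Runs 145 to 167: 8 days + 211 hours + 578 minutes
total_145_167 = timedelta(days=8, hours=211, minutes=578)
print(total_145_167)  # 17 days, 4:38:00

# Runs 168 to 187: 26 days + 172 hours + 265 minutes
total_168_187 = timedelta(days=26, hours=172, minutes=265)
print(total_168_187)  # 33 days, 8:25:00
```

Both results match the normalized values quoted above (17 days + 4 hours + 38 minutes and 33 days + 8 hours + 25 minutes).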
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/DataRuns/Run_76_171030-18-39.tar.gz]]
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/DataRuns/Run_77_171102-05-24.tar.gz]]
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/DataRuns/Run_78_171103-05-28.tar.gz]]
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/DataRuns/Run_79_171103-20-46.tar.gz]]
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/DataRuns/Run_80_171105-00-09.tar.gz]]
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/DataRuns/Run_81_171105-23-54.tar.gz]]
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/DataRuns/Run_82_171107-00-01.tar.gz]]

*NOTE:* This was the first run with the correct SiPM HV setting of $\SI{65.6}{\volt}$.
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/DataRuns/Run_84_171108-17-49.tar.gz]]

*NOTE*: This run was stopped early to fix the src/waitconditions.cpp bug mentioned in the note below.
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/DataRuns/Run_85_171109-19-01.tar.gz]]

*NOTE: VERY IMPORTANT* In all runs above, there was a bug in src/waitconditions.cpp, which caused the FADC and scintillator values to be written to all subsequent files, from an event in which the FADC triggered until the next one (in the case of non-subsequent FADC events). Therefore, for the analysis we need to take only events with fadcReadout == 1 into account. Otherwise we read random scintillator trigger values as well as FADC trigger clock cycles. All runs below are without the aforementioned error.

[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/DataRuns/Run_86_171109-21-47.tar.gz]]
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/DataRuns/Run_87_171111-02-17.tar.gz]]

*NOTE:* During Run 89 the byobu buffer containing TOS got stuck on <2017-11-12 Sun 19:35> due to <F7> being pressed (which eventually pauses the thread). I was called by Cristian at roughly <2017-11-13 Mon 5:55> and fixed the issue.
Therefore the length given in the table is misleading, as it does not show the actual data taking time of that run.
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/DataRuns/Run_89_171112-15-30.tar.gz]]
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/DataRuns/Run_90_171113-19-14.tar.gz]]
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/DataRuns/Run_91_171114-20-24.tar.gz]]
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/DataRuns/Run_92_171115-21-45.tar.gz]]
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/DataRuns/Run_94_171117-20-48.tar.gz]]

*NOTE:* The following run contains the beginning (and most) of the GRID measurement. It only contains a single tracking; that is why the run is so long.
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/DataRuns/Run_95_171119-02-35.tar.gz]]

*NOTE:* Regarding Run 97, morning shift on 25/11/17, from the elog:
#+BEGIN_SRC
Remarks: At approx. 7:02 there was an error message in the Slow Control
program indicating that some file could not be saved because there was
not enough memory. After about a minute, the Slow Control PC restarted
by itself. At about the same time, an error message appeared in the
Tracking program, probably related to the fact that it could not
communicate with the Slow Control program in order to read proximity
sensor data. After the restart of the Slow control PC and Slow control
program, things went back to normal. Because of all that, we lost first
3-4 minutes of tracking
#+END_SRC

*NOTE: Still need to add links to previous runs (97, 98, 99)*

*NOTE:* Regarding Run 101: Because of pretty bad noise during the run, I decided to play around with the main amplifier of the analogue signal. I changed the settings as follows:
- Diff: 50 ns -> 20 ns (one to the left)
- Coarse gain: 6x -> 10x (one to the right)
This got rid of (almost?) all of the noise. However, this obviously changes the FADC data completely.
The shapes are slightly altered (a little steeper).

*NOTE:* Regarding Run 109: CRAZY amounts of noise during that run. Interestingly, the only difference to the previous runs, from what I can tell (although I wasn't present in the shift), was that the main light in the LHCb part of the hall was turned on. Maybe this causes the electrical circuit to work under high load, producing a lot more noise? Will keep an eye on this.

*NOTE:* Regarding Run 111: Was stopped early, because I tried to debug the noise and in doing so burned a fuse in the gas interlock box by connecting the NIM crate to a wrong power cable. -> No shift on 6/12/17; background data being taken again since <2017-12-06 Wed 14:55>.

*NOTE:* Regarding Run 112: Another change of FADC settings due to crazy amounts of noise. Changed the integration time:
- 50ns -> 100ns
This gets rid of all noise for now. However, the shapes are much smoother than before, which might make differentiation much harder later. This was done at around <2017-12-07 Thu 8:00>.
*Also:* The power cable from the main amplifier to the pre-amplifier was not properly inserted. Fixed that before changing the settings; it seemed to help, but the noise eventually returned.

*NOTE:* Regarding Run 112 and the previous note: Turned the integration time back down from 100ns to 50ns at around <2017-12-08 Fri 17:50>.

*NOTE:* Regarding Run 112: The run is so long because, after the problems with the fuse, we had a quench on <2017-12-07 Thu 18:31:53>.

*NOTE:* Regarding Run 121: Jochen set the FADC main amplifier integration time from 50 to 100 ns again. This happened at around <2017-12-15 Fri 10:20>, maybe 5 min later.

*** Notes 2018
After Run 137 (first run of 2018) I did a calibration run, only to notice that we didn't recover 220 electrons anymore, but rather 110. A THLscan revealed that indeed the THL values of the central chips (and potentially all others) changed.
Previously we used:
- Chip #: 3, THL: 450
Now a value of THL: 400 produces no noise, if run without a source on 2.4s frames, and recovers all electrons again, it seems.
- Calibration Run 138 uses THL 450
- Calibration Run 139 uses THL 400
The change is reflected in the CAST calibration Git repository, in the fsr_as_used folder!
**** Power Problem
A problem due to the power supply, causing what looked like changes to the thresholds of all chips. See the following mail for a short explanation:
[[file:Mails/cast_power_supply_problem_thlshift/power_supply_problem.org]]
**** Run 297 and 298
There was some pretty crazy noise in run 297 (see Sergio's video on WhatsApp), so I disabled the FADC on <2018-12-13 Thu 18:40> for Run 298, since I'm heading down to CERN on <2018-12-14 Fri>.
** Calibration runs
This section covers the calibration runs that took place.
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/CalibrationRuns/Run_83_171108-16-27.tar.gz]]
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/CalibrationRuns/Run_88_171112-14-30.tar.gz]]
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/CalibrationRuns/Run_93_171117-19-18.tar.gz]]
[[file:/ssh:tpc@tpc00:/volume1/cast/data/2017_CAST-Run/CalibrationRuns/Run_96_171123-10-42.tar.gz]]
** Automatically generated run list
The following run list is created by the =writeRunList= tool:
[[file:~/CastData/ExternCode/TimepixAnalysis/Tools/writeRunList/writeRunList.nim]]
based on the tracking logs.
| Run # | Type | DataType | Start | End | Length | # trackings | # frames | # FADC | Backup?
| Notes |
|-------+---------------+----------+------------------------+------------------------+---------------+-------------+----------+--------+---------+-------|
|    76 | rtBackground  | rfNewTos | <2017-10-30 Mon 18:39> | <2017-11-02 Thu 5:24>  | 2 days 10:44  | 1 |  88249 | 19856 | y | |
|    77 | rtBackground  | rfNewTos | <2017-11-02 Thu 5:24>  | <2017-11-03 Fri 5:28>  | 1 days 00:03  | 1 |  36074 |  8016 | y | |
|    78 | rtBackground  | rfNewTos | <2017-11-03 Fri 5:28>  | <2017-11-03 Fri 20:45> | 0 days 15:17  | 1 |  23506 |  5988 | y | |
|    79 | rtBackground  | rfNewTos | <2017-11-03 Fri 20:46> | <2017-11-05 Sun 0:09>  | 1 days 03:22  | 1 |  40634 |  8102 | y | |
|    80 | rtBackground  | rfNewTos | <2017-11-05 Sun 0:09>  | <2017-11-05 Sun 23:50> | 0 days 23:40  | 1 |  35147 |  6880 | y | |
|    81 | rtBackground  | rfNewTos | <2017-11-05 Sun 23:54> | <2017-11-07 Tue 0:00>  | 1 days 00:06  | 1 |  35856 |  7283 | y | |
|    82 | rtBackground  | rfNewTos | <2017-11-07 Tue 0:01>  | <2017-11-08 Wed 15:58> | 1 days 15:56  | 2 |  59502 | 12272 | y | |
|    83 | rtCalibration | rfNewTos | <2017-11-08 Wed 16:27> | <2017-11-08 Wed 17:27> | 0 days 00:59  | 0 |   4915 |  4897 | y | |
|    84 | rtBackground  | rfNewTos | <2017-11-08 Wed 17:49> | <2017-11-09 Thu 19:01> | 1 days 01:11  | 1 |  37391 |  7551 | y | |
|    85 | rtBackground  | rfNewTos | <2017-11-09 Thu 19:01> | <2017-11-09 Thu 21:46> | 0 days 02:45  | 0 |   4104 |   899 | y | |
|    86 | rtBackground  | rfNewTos | <2017-11-09 Thu 21:47> | <2017-11-11 Sat 2:17>  | 1 days 04:29  | 1 |  42396 |  9656 | y | |
|    87 | rtBackground  | rfNewTos | <2017-11-11 Sat 2:17>  | <2017-11-12 Sun 14:29> | 1 days 12:11  | 2 |  54786 | 15123 | y | |
|    88 | rtCalibration | rfNewTos | <2017-11-12 Sun 14:30> | <2017-11-12 Sun 15:30> | 0 days 00:59  | 0 |   4943 |  4934 | y | |
|    89 | rtBackground  | rfNewTos | <2017-11-12 Sun 15:30> | <2017-11-13 Mon 18:27> | 1 days 02:57  | 1 |  25209 |  6210 | y | |
|    90 | rtBackground  | rfNewTos | <2017-11-13 Mon 19:14> | <2017-11-14 Tue 20:24> | 1 days 01:09  | 1 |  37497 |  8122 | y | |
|    91 | rtBackground  | rfNewTos | <2017-11-14 Tue 20:24> | <2017-11-15 Wed 21:44> | 1 days 01:20  | 1 |  37732 |  8108 | y | |
|    92 | rtBackground  | rfNewTos | <2017-11-15 Wed 21:45> | <2017-11-17 Fri 19:18> | 1 days 21:32  | 1 |  67946 | 14730 | y | |
|    93 | rtCalibration | rfNewTos | <2017-11-17 Fri 19:18> | <2017-11-17 Fri 20:18> | 0 days 01:00  | 0 |   4977 |  4968 | y | |
|    94 | rtBackground  | rfNewTos | <2017-11-17 Fri 20:48> | <2017-11-19 Sun 2:34>  | 1 days 05:46  | 1 |  44344 |  9422 | y | |
|    95 | rtBackground  | rfNewTos | <2017-11-19 Sun 2:35>  | <2017-11-23 Thu 10:41> | 4 days 08:06  | 1 | 154959 | 33112 | y | |
|    96 | rtCalibration | rfNewTos | <2017-11-23 Thu 10:42> | <2017-11-23 Thu 17:43> | 0 days 07:01  | 0 |  34586 | 34496 | y | |
|    97 | rtBackground  | rfNewTos | <2017-11-23 Thu 17:43> | <2017-11-26 Sun 1:41>  | 2 days 07:57  | 1 |  83404 | 18277 | y | |
|    98 | rtBackground  | rfNewTos | <2017-11-26 Sun 1:42>  | <2017-11-26 Sun 21:18> | 0 days 19:36  | 1 |  29202 |  6285 | y | |
|    99 | rtBackground  | rfNewTos | <2017-11-26 Sun 21:18> | <2017-11-28 Tue 6:46>  | 1 days 09:27  | 1 |  49921 | 10895 | y | |
|   100 | rtBackground  | rfNewTos | <2017-11-28 Tue 6:46>  | <2017-11-29 Wed 6:40>  | 0 days 23:53  | 1 |  35658 |  7841 | y | |
|   101 | rtBackground  | rfNewTos | <2017-11-29 Wed 6:40>  | <2017-11-29 Wed 20:18> | 0 days 13:37  | 1 |  20326 |  4203 | y | |
|   102 | rtCalibration | rfNewTos | <2017-11-29 Wed 20:19> | <2017-11-29 Wed 22:19> | 0 days 02:00  | 0 |   9919 |  9898 | y | |
|   103 | rtBackground  | rfNewTos | <2017-11-29 Wed 22:26> | <2017-12-01 Fri 6:46>  | 1 days 08:19  | 1 |  47381 |  7867 | y | |
|   104 | rtBackground  | rfNewTos | <2017-12-01 Fri 6:47>  | <2017-12-02 Sat 6:48>  | 1 days 00:00  | 1 |  35220 |  5866 | y | |
|   105 | rtBackground  | rfNewTos | <2017-12-02 Sat 6:48>  | <2017-12-03 Sun 6:39>  | 0 days 23:51  | 1 |  34918 |  5794 | y | |
|   106 | rtBackground  | rfNewTos | <2017-12-03 Sun 6:40>  | <2017-12-04 Mon 6:54>  | 1 days 00:14  | 1 |  35576 |  6018 | y | |
|   107 | rtBackground  | rfNewTos | <2017-12-04 Mon 6:54>  | <2017-12-04 Mon 13:38> | 0 days 06:44  | 1 |   9883 |  1641 | y | |
|   108 | rtCalibration | rfNewTos | <2017-12-04 Mon 13:39> | <2017-12-04 Mon 17:39> | 0 days 04:00  | 0 |  19503 | 19448 | y | |
|   109 | rtBackground  | rfNewTos | <2017-12-04 Mon 17:47> | <2017-12-05 Tue 11:20> | 0 days 17:32  | 1 |  28402 |  8217 | y | |
|   110 | rtCalibration | rfNewTos | <2017-12-05 Tue 11:20> | <2017-12-05 Tue 13:20> | 0 days 01:59  | 0 |   9804 |  9786 | y | |
|   111 | rtBackground  | rfNewTos | <2017-12-05 Tue 13:23> | <2017-12-05 Tue 16:17> | 0 days 02:53  | 0 |   4244 |   644 | y | |
|   112 | rtBackground  | rfNewTos | <2017-12-06 Wed 14:50> | <2017-12-10 Sun 6:46>  | 3 days 15:55  | 2 | 128931 | 19607 | y | |
|   113 | rtBackground  | rfNewTos | <2017-12-10 Sun 6:46>  | <2017-12-11 Mon 6:49>  | 1 days 00:03  | 1 |  35100 |  5174 | y | |
|   114 | rtBackground  | rfNewTos | <2017-12-11 Mon 6:50>  | <2017-12-11 Mon 18:33> | 0 days 11:43  | 1 |  17111 |  2542 | y | |
|   115 | rtBackground  | rfNewTos | <2017-12-11 Mon 18:36> | <2017-12-12 Tue 20:58> | 1 days 02:21  | 1 |  40574 |  9409 | y | |
|   116 | rtCalibration | rfNewTos | <2017-12-12 Tue 20:59> | <2017-12-12 Tue 22:59> | 0 days 02:00  | 0 |   9741 |  9724 | y | |
|   117 | rtBackground  | rfNewTos | <2017-12-12 Tue 23:56> | <2017-12-13 Wed 21:29> | 0 days 21:33  | 1 |  31885 |  5599 | y | |
|   118 | rtCalibration | rfNewTos | <2017-12-13 Wed 21:30> | <2017-12-13 Wed 23:30> | 0 days 02:00  | 0 |   9771 |  9748 | y | |
|   119 | rtBackground  | rfNewTos | <2017-12-14 Thu 0:07>  | <2017-12-14 Thu 17:04> | 0 days 16:57  | 1 |  25434 |  4903 | y | |
|   120 | rtCalibration | rfNewTos | <2017-12-14 Thu 17:04> | <2017-12-14 Thu 21:04> | 0 days 04:00  | 0 |  19308 | 19261 | y | |
|   121 | rtBackground  | rfNewTos | <2017-12-14 Thu 21:07> | <2017-12-15 Fri 19:22> | 0 days 22:14  | 1 |  33901 |  6947 | y | |
|   122 | rtCalibration | rfNewTos | <2017-12-15 Fri 19:22> | <2017-12-16 Sat 1:20>  | 0 days 05:57  | 0 |  29279 | 29208 | y | |
|   123 | rtBackground  | rfNewTos | <2017-12-16 Sat 1:21>  | <2017-12-17 Sun 1:06>  | 0 days 23:45  | 1 |  34107 |  3380 | y | |
|   124 | rtBackground  | rfNewTos | <2017-12-17 Sun 1:06>  | <2017-12-19 Tue 2:57>  | 2 days 01:50  | 2 |  71703 |  7504 | y | |
|   125 | rtBackground  | rfNewTos | <2017-12-19 Tue 2:57>  | <2017-12-19 Tue 16:20> | 0 days 13:22  | 1 |  19262 |  1991 | y | |
|   126 | rtCalibration | rfNewTos | <2017-12-19 Tue 16:21> | <2017-12-19 Tue 19:21> | 0 days 02:59  | 0 |  14729 | 14689 | y | |
|   127 | rtBackground  | rfNewTos | <2017-12-19 Tue 19:27> | <2017-12-22 Fri 0:17>  | 2 days 04:50  | 1 |  75907 |  7663 | y | |
|   128 | rtCalibration | rfNewTos | <2017-12-22 Fri 0:18>  | <2017-12-22 Fri 9:23>  | 0 days 09:05  | 0 |  44806 | 44709 | y | |
|   145 | rtCalibration | rfNewTos | <2018-02-17 Sat 17:18> | <2018-02-17 Sat 20:40> | 0 days 03:22  | 0 |  16797 | 16796 | y | |
|   146 | rtBackground  | rfNewTos | <2018-02-17 Sat 20:41> | <2018-02-18 Sun 18:12> | 0 days 21:30  | 1 |  32705 |  3054 | y | |
|   147 | rtCalibration | rfNewTos | <2018-02-18 Sun 18:12> | <2018-02-18 Sun 20:12> | 0 days 01:59  | 0 |  10102 | 10102 | y | |
|   148 | rtBackground  | rfNewTos | <2018-02-18 Sun 20:46> | <2018-02-19 Mon 17:24> | 0 days 20:37  | 1 |  31433 |  3120 | y | |
|   149 | rtCalibration | rfNewTos | <2018-02-19 Mon 17:25> | <2018-02-19 Mon 19:25> | 0 days 02:00  | 0 |   9975 |  9975 | y | |
|   150 | rtBackground  | rfNewTos | <2018-02-19 Mon 19:53> | <2018-02-20 Tue 17:36> | 0 days 21:42  | 1 |  33192 |  3546 | y | |
|   151 | rtCalibration | rfNewTos | <2018-02-20 Tue 17:36> | <2018-02-20 Tue 19:36> | 0 days 01:59  | 0 |   9907 |  9907 | y | |
|   152 | rtBackground  | rfNewTos | <2018-02-20 Tue 21:54> | <2018-02-21 Wed 18:05> | 0 days 20:10  | 1 |  30809 |  3319 | y | |
|   153 | rtCalibration | rfNewTos | <2018-02-21 Wed 18:05> | <2018-02-21 Wed 20:05> | 0 days 01:59  | 0 |  10103 | 10102 | y | |
|   154 | rtBackground  | rfNewTos | <2018-02-21 Wed 21:10> | <2018-02-22 Thu 17:23> | 0 days 20:12  | 1 |  30891 |  3426 | y | |
|   155 | rtCalibration | rfNewTos | <2018-02-22 Thu 17:23> | <2018-02-22 Thu 19:23> | 0 days 02:00  | 0 |   9861 |  9861 | y | |
|   156 | rtBackground  | rfNewTos | <2018-02-23 Fri 6:06>  | <2018-02-23 Fri 17:41> | 0 days 11:35  | 1 |  17686 |  1866 | y | |
|   157 | rtCalibration | rfNewTos | <2018-02-23 Fri 17:41> | <2018-02-23 Fri 19:41> | 0 days 01:59  | 0 |   9962 |  9962 | y | |
|   158 | rtBackground  | rfNewTos | <2018-02-23 Fri 19:42> | <2018-02-26 Mon 8:46>  | 2 days 13:03  | 1 |  93205 |  9893 | y | |
|   159 | rtCalibration | rfNewTos | <2018-02-26 Mon 8:46>  | <2018-02-26 Mon 12:46> | 0 days 04:00  | 0 |  19879 | 19878 | y | |
|   160 | rtBackground  | rfNewTos | <2018-02-26 Mon 14:56> | <2018-03-01 Thu 10:24> | 2 days 19:28  | 1 | 103145 | 11415 | y | |
|   161 | rtCalibration | rfNewTos | <2018-03-01 Thu 10:26> | <2018-03-01 Thu 14:26> | 0 days 04:00  | 0 |  19944 | 19943 | y | |
|   162 | rtBackground  | rfNewTos | <2018-03-01 Thu 17:07> | <2018-03-04 Sun 20:16> | 3 days 03:08  | 3 | 114590 | 11897 | y | |
|   163 | rtCalibration | rfNewTos | <2018-03-04 Sun 20:17> | <2018-03-04 Sun 22:17> | 0 days 02:00  | 0 |  10093 | 10093 | y | |
|   164 | rtBackground  | rfNewTos | <2018-03-04 Sun 22:57> | <2018-03-06 Tue 19:15> | 1 days 20:18  | 2 |  67456 |  6488 | y | |
|   165 | rtCalibration | rfNewTos | <2018-03-06 Tue 19:15> | <2018-03-06 Tue 23:15> | 0 days 04:00  | 0 |  19882 | 19879 | y | |
|   166 | rtBackground  | rfNewTos | <2018-03-07 Wed 0:50>  | <2018-03-07 Wed 18:28> | 0 days 17:38  | 1 |  26859 |  2565 | y | |
|   167 | rtCalibration | rfNewTos | <2018-03-07 Wed 18:29> | <2018-03-07 Wed 20:29> | 0 days 02:00  | 0 |   9938 |  9938 | y | |
|   168 | rtBackground  | rfNewTos | <2018-03-07 Wed 20:37> | <2018-03-13 Tue 16:54> | 5 days 20:16  | 0 | 213545 | 20669 | y | |
|   169 | rtCalibration | rfNewTos | <2018-03-13 Tue 16:55> | <2018-03-13 Tue 22:55> | 0 days 06:00  | 0 |  29874 | 29874 | y | |
|   170 | rtBackground  | rfNewTos | <2018-03-13 Tue 23:19> | <2018-03-14 Wed 21:01> | 0 days 21:42  | 1 |  33098 |  3269 | y | |
|   171 | rtCalibration | rfNewTos | <2018-03-14 Wed 21:01> | <2018-03-14 Wed 23:01> | 0 days 02:00  | 0 |   9999 |  9999 | y | |
|   172 | rtBackground  | rfNewTos | <2018-03-14 Wed 23:06> | <2018-03-15 Thu 17:57> | 0 days 18:50  | 1 |  28649 |  2773 | y | |
|   173 | rtCalibration | rfNewTos | <2018-03-15 Thu 17:59> | <2018-03-15 Thu 19:59> | 0 days 01:59  | 0 |   9898 |  9897 | y | |
|   174 | rtBackground  | rfNewTos | <2018-03-15 Thu 20:39> | <2018-03-16 Fri 16:27> | 0 days 19:48  | 1 |  30163 |  2961 | y | |
|   175 | rtCalibration | rfNewTos | <2018-03-16 Fri 16:28> | <2018-03-16 Fri 18:28> | 0 days 01:59  | 0 |  10075 | 10075 | y | |
|   176 | rtBackground  | rfNewTos | <2018-03-16 Fri 18:35> | <2018-03-17 Sat 20:55> | 1 days 02:19  | 1 |  40084 |  3815 | y | |
|   177 | rtCalibration | rfNewTos | <2018-03-17 Sat 20:55> | <2018-03-17 Sat 22:55> | 0 days 01:59  | 0 |   9967 |  9966 | y | |
|   178 | rtBackground  | rfNewTos | <2018-03-17 Sat 23:31> | <2018-03-22 Thu 17:40> | 4 days 18:09  | 5 | 174074 | 17949 | y | |
|   179 | rtCalibration | rfNewTos | <2018-03-22 Thu 17:41> | <2018-03-22 Thu 19:41> | 0 days 01:59  | 0 |   9887 |  9887 | y | |
|   180 | rtBackground  | rfNewTos | <2018-03-22 Thu 20:47> | <2018-03-24 Sat 18:10> | 1 days 21:22  | 1 |  69224 |  7423 | y | |
|   181 | rtCalibration | rfNewTos | <2018-03-24 Sat 18:10> | <2018-03-24 Sat 22:10> | 0 days 04:00  | 0 |  20037 | 20036 | y | |
|   182 | rtBackground  | rfNewTos | <2018-03-24 Sat 23:32> | <2018-03-26 Mon 19:46> | 1 days 19:14  | 2 |  65888 |  6694 | y | |
|   183 | rtCalibration | rfNewTos | <2018-03-26 Mon 19:47> | <2018-03-26 Mon 23:47> | 0 days 03:59  | 0 |  20026 | 20026 | y | |
|   184 | rtBackground  | rfNewTos | <2018-03-27 Tue 0:32>  | <2018-03-30 Fri 14:18> | 3 days 13:45  | 0 | 130576 | 12883 | y | |
|   185 | rtCalibration | rfNewTos | <2018-03-30 Fri 14:18> | <2018-03-30 Fri 18:18> | 0 days 03:59  | 0 |  19901 | 19901 | y | |
|   186 | rtBackground  | rfNewTos | <2018-03-30 Fri 19:03> | <2018-04-11 Wed 16:03> | 11 days 21:00 | 0 | 434087 | 42830 | y | |
|   187 | rtCalibration | rfNewTos | <2018-04-11 Wed 16:04> | <2018-04-11 Wed 20:04> | 0 days 04:00  | 0 |  19667 | 19665 | y | |
|   188 | rtBackground  | rfNewTos | <2018-04-11 Wed 20:53> | <2018-04-17 Tue 10:53> | 5 days 14:00  | 0 | 204281 | 20781 | y | |
|   239 | rtCalibration | rfNewTos | <2018-10-20 Sat 18:31> | <2018-10-20 Sat 20:31> | 0 days 02:00  | 0 |   9565 |  9518 | y | |
|   240 | rtBackground  | rfNewTos | <2018-10-21 Sun 14:54> | <2018-10-22 Mon 16:15> | 1 days 01:21  | 1 |  38753 |  4203 | y | |
|   241 | rtCalibration | rfNewTos | <2018-10-22 Mon 16:16> | <2018-10-22 Mon 18:16> | 0 days 02:00  | 0 |   9480 |  9426 | y | |
|   242 | rtBackground  | rfNewTos | <2018-10-22 Mon 18:44> | <2018-10-23 Tue 22:08> | 1 days 03:24  | 1 |  41933 |  4843 | y | |
|   243 | rtCalibration | rfNewTos | <2018-10-23 Tue 22:09> | <2018-10-24 Wed 0:09>  | 0 days 01:59  | 0 |   9488 |  9429 | y | |
|   244 | rtBackground  | rfNewTos | <2018-10-24 Wed 0:32>  | <2018-10-24 Wed 19:24> | 0 days 18:52  | 1 |  28870 |  3317 | y | |
|   245 | rtCalibration | rfNewTos | <2018-10-24 Wed 19:25> | <2018-10-24 Wed 21:25> | 0 days 01:59  | 0 |   9573 |  9530 | y | |
|   246 | rtBackground  | rfNewTos | <2018-10-24 Wed 21:59> | <2018-10-25 Thu 16:18> | 0 days 18:18  | 1 |  27970 |  2987 | y | |
|   247 | rtCalibration | rfNewTos | <2018-10-25 Thu 16:19> | <2018-10-25 Thu 18:19> | 0 days 01:59  | 0 |   9389 |  9334 | y | |
|   248 | rtBackground  | rfNewTos | <2018-10-25 Thu 18:25> | <2018-10-26 Fri 22:29> | 1 days 04:04  | 1 |  42871 |  4544 | y | |
|   249 | rtCalibration | rfNewTos | <2018-10-26 Fri 22:30> | <2018-10-27 Sat 0:30>  | 0 days 02:00  | 0 |   9473 |  9431 | y | |
|   250 | rtBackground  | rfNewTos | <2018-10-27 Sat 1:31>  | <2018-10-27 Sat 22:26> | 0 days 20:54  | 1 |  31961 |  3552 | y | |
|   251 | rtCalibration | rfNewTos | <2018-10-27 Sat 22:26> | <2018-10-28 Sun 0:26>  | 0 days 01:59  | 0 |   9551 |  9503 | y | |
|   253 | rtCalibration | rfNewTos | <2018-10-28 Sun 19:18> | <2018-10-28 Sun 21:39> | 0 days 02:20  | 0 |  11095 | 11028 | y | |
|   254 | rtBackground  | rfNewTos | <2018-10-28 Sun 21:40> | <2018-10-29 Mon 23:03> | 1 days 01:23  | 1 |  38991 |  4990 | y | |
|   255 | rtCalibration | rfNewTos | <2018-10-29 Mon 23:03> | <2018-10-30 Tue 1:03>  | 0 days 02:00  | 0 |   9378 |  9330 | y | |
|   256 | rtBackground  | rfNewTos | <2018-10-30 Tue 1:49>  | <2018-10-31 Wed 22:18> | 1 days 20:29  | 1 |  68315 |  8769 | y | |
|   257 | rtCalibration | rfNewTos | <2018-10-31 Wed 22:19> | <2018-11-01 Thu 0:19>  | 0 days 01:59  | 0 |   9648 |  9592 | y | |
|   258 | rtBackground  | rfNewTos | <2018-11-01 Thu 0:20>  | <2018-11-01 Thu 16:15> | 0 days 15:55  | 1 |  24454 |  3103 | y | |
|   259 | rtCalibration | rfNewTos | <2018-11-01 Thu 16:16> | <2018-11-01 Thu 17:31> | 0 days 01:14  | 0 |   5900 |  5864 | y | |
|   260 | rtCalibration | rfNewTos | <2018-11-01 Thu 17:39> | <2018-11-01 Thu 19:09> | 0 days 01:30  | 0 |   7281 |  7251 | y | |
|   261 | rtBackground  | rfNewTos | <2018-11-01 Thu 19:39> | <2018-11-04 Sun 15:23> | 2 days 19:43  | 3 | 103658 | 12126 | y | |
|   262 | rtCalibration | rfNewTos | <2018-11-04 Sun 15:24> | <2018-11-04 Sun 21:24> | 0 days 05:59  | 0 |  28810 | 28681 | y | |
|   263 | rtBackground  | rfNewTos | <2018-11-05 Mon 0:35>  | <2018-11-05 Mon 20:28> | 0 days 19:52  | 1 |  30428 |  3610 | y | |
|   264 | rtCalibration | rfNewTos | <2018-11-05 Mon 20:28> | <2018-11-05 Mon 22:28> | 0 days 01:59  | 0 |   9595 |  9544 | y | |
|   265 | rtBackground  | rfNewTos | <2018-11-05 Mon 22:52> | <2018-11-07 Wed 22:14> | 1 days 23:21  | 1 |  72514 |  8429 | y | |
|   266 | rtCalibration | rfNewTos | <2018-11-07 Wed 22:14> | <2018-11-08 Thu 0:14>  | 0 days 01:59  | 0 |   9555 |  9506 | y | |
|   267 | rtBackground  | rfNewTos | <2018-11-08 Thu 2:05>  | <2018-11-08 Thu 6:54>  | 0 days 04:48  | 0 |   7393 |   929 | y | |
|   268 | rtBackground  | rfNewTos | <2018-11-09 Fri 6:15>  | <2018-11-09 Fri 17:20> | 0 days 11:04  | 1 |  16947 |  1974 | y | |
|   269 | rtCalibration | rfNewTos | <2018-11-09 Fri 17:20> | <2018-11-09 Fri 21:20> | 0 days 04:00  | 0 |  19382 | 19302 | y | |
|   270 | rtBackground  | rfNewTos | <2018-11-09 Fri 21:27> | <2018-11-11 Sun 21:02> | 1 days 23:34  | 2 |  72756 |  8078 | y | |
|   271 | rtCalibration | rfNewTos | <2018-11-11 Sun 21:03> | <2018-11-11 Sun 23:46> | 0 days 02:43  | 0 |  13015 | 12944 | y | |
|   272 | rtBackground  | rfNewTos | <2018-11-12 Mon 0:09>  | <2018-11-14 Wed 19:07> | 2 days 18:58  | 3 | 102360 | 11336 | y | |
|   273 | rtCalibration | rfNewTos | <2018-11-14 Wed 19:08> | <2018-11-14 Wed 21:08> | 0 days 01:59  | 0 |   9535 |  9471 | y | |
|   274 | rtBackground  | rfNewTos | <2018-11-14 Wed 21:28> | <2018-11-17 Sat 18:14> | 2 days 20:45  | 3 | 105187 | 12101 | y | |
|   275 | rtCalibration | rfNewTos | <2018-11-17 Sat 18:14> | <2018-11-17 Sat 20:57> | 0 days 02:43  | 0 |  13179 | 13116 | y | |
|   276 | rtBackground  | rfNewTos | <2018-11-17 Sat 22:08> | <2018-11-22 Thu 2:26>  | 4 days 04:17  | 2 | 153954 | 19640 | y | |
|   277 | rtCalibration | rfNewTos | <2018-11-22 Thu 2:26>  | <2018-11-22 Thu 16:14> | 0 days 13:48  | 0 |  66052 | 65749 | y | |
|   278 | rtBackground  | rfNewTos | <2018-11-22 Thu 16:14> | <2018-11-23 Fri 10:51> | 0 days 18:36  | 0 |  28164 |  3535 | y | |
|   279 | rtBackground  | rfNewTos | <2018-11-24 Sat 10:51> | <2018-11-26 Mon 14:58> | 2 days 04:07  | 2 |  79848 |  9677 | y | |
|   280 | rtCalibration | rfNewTos | <2018-11-26 Mon 14:59> | <2018-11-26 Mon 18:59> | 0 days 04:00  | 0 |  19189 | 19112 | y | |
|   281 | rtBackground  | rfNewTos | <2018-11-26 Mon 19:02> | <2018-11-28 Wed 18:07> | 1 days 23:04  | 1 |  72230 |  8860 | y | |
|   282 | rtCalibration | rfNewTos | <2018-11-28 Wed 18:07> | <2018-11-28 Wed 20:51> | 0 days 02:43  | 0 |  12924 | 12860 | y | |
|   283 | rtBackground  | rfNewTos | <2018-11-28 Wed 22:31> | <2018-12-01 Sat 14:38> | 2 days 16:07  | 3 |  98246 | 11965 | y | |
|   284 | rtCalibration | rfNewTos | <2018-12-01 Sat 14:39> | <2018-12-01 Sat 18:39> | 0 days 03:59  | 0 |  19017 | 18904 | y | |
|   285 | rtBackground  | rfNewTos | <2018-12-01 Sat 19:06> | <2018-12-03 Mon 19:39> | 2 days 00:33  | 2 |  74405 |  8887 | y | |
|   286 | rtCalibration | rfNewTos | <2018-12-04 Tue 15:57> | <2018-12-04 Tue 17:57> | 0 days 02:00  | 0 |   9766 |  9715 | y | |
|   287 | rtBackground  | rfNewTos | <2018-12-04 Tue 19:07> | <2018-12-05 Wed 15:08> | 0 days 20:01  | 1 |  30598 |  3393 | y | |
|   288 | rtCalibration | rfNewTos | <2018-12-05 Wed 17:28> | <2018-12-05 Wed 19:28> | 0 days 02:00  | 0 |   9495 |  9443 | y | |
|   289 | rtBackground  | rfNewTos | <2018-12-05 Wed 23:07> | <2018-12-06 Thu 19:11> | 0 days 20:03  | 1 |  30629 |  3269 | y | |
|   290 | rtCalibration | rfNewTos | <2018-12-06 Thu 19:11> | <2018-12-06 Thu 21:11> | 0 days 02:00  | 0 |   9457 |  9394 | y | |
|   291 | rtBackground  | rfNewTos | <2018-12-06 Thu 23:14> | <2018-12-08 Sat 13:39> | 1 days 14:24  | 2 |  58602 |  6133 | y | |
|   292 | rtCalibration | rfNewTos | <2018-12-08 Sat 13:39> | <2018-12-08 Sat 15:39> | 0 days 02:00  | 0 |   9475 |  9426 | y | |
|   293 | rtBackground  | rfNewTos | <2018-12-08 Sat 17:42> | <2018-12-10 Mon 21:50> | 2 days 04:07  | 1 |  79677 |  8850 | y | |
|   294 | rtCalibration | rfNewTos | <2018-12-10 Mon 21:50> | <2018-12-10 Mon 23:50> | 0 days 02:00  | 0 |   9514 |  9467 | y | |
|   295 | rtBackground  | rfNewTos | <2018-12-11 Tue 0:54>  | <2018-12-11 Tue 20:31> | 0 days 19:37  | 1 |  29981 |  3271 | y | |
|   296 | rtCalibration | rfNewTos | <2018-12-11 Tue 20:31> | <2018-12-11 Tue 22:31> | 0 days 02:00  | 0 |   9565 |  9517 | y | |
|   297 | rtBackground  | rfNewTos | <2018-12-12 Wed 0:14>  | <2018-12-13 Thu 18:30> | 1 days 18:15  | 2 |  68124 | 12530 | y | |
|   298 | rtBackground  | rfNewTos | <2018-12-13 Thu 18:39> | <2018-12-15 Sat 6:41>  | 1 days 12:01  | 1 |  53497 |     0 | y | |
|   299 | rtBackground  | rfNewTos | <2018-12-15 Sat 6:43>  | <2018-12-15 Sat 18:13> | 0 days 11:29  | 1 |  17061 |     0 | y | |
|   300 | rtCalibration | rfNewTos | <2018-12-15 Sat 18:38> | <2018-12-15 Sat 20:38> | 0 days 02:00  | 0 |   9466 |  9415 | y | |
|   301 | rtBackground  | rfNewTos | <2018-12-15 Sat 21:34> | <2018-12-17 Mon 14:17> | 1 days 16:43  | 2 |  62454 |  7751 | y | |
|   302 | rtCalibration | rfNewTos | <2018-12-17 Mon 14:18> | <2018-12-17 Mon 16:18> | 0 days 01:59  | 0 |   9616 |  9577 | y | |
|   303 | rtBackground  | rfNewTos | <2018-12-17 Mon 16:52> | <2018-12-18 Tue 16:41> | 0 days 23:48  | 1 |  36583 |  4571 | y | |
|   304 | rtCalibration | rfNewTos | <2018-12-19 Wed 9:33>  | <2018-12-19 Wed 11:33> | 0 days 01:59  | 0 |   9531 |  9465 | y | |
|   306 | rtBackground  | rfNewTos | <2018-12-20 Thu 6:55>  | <2018-12-20 Thu 11:53> | 0 days 04:58  | 1 |   7546 |   495 | y | |

** Automatically calculated total run times

Run period 2 (= 2017/18):
#+BEGIN_SRC
Type: rtBackground
trackingDuration: 4 days, 10 hours, and 20 seconds
nonTrackingDuration: 14 weeks, 2 days, 1 hour, 23 minutes, and 18 seconds
Type: rtCalibration
trackingDuration: 0 nanoseconds
nonTrackingDuration: 4 days, 11 hours, 25 minutes, and 20 seconds
#+END_SRC
Which amounts to:
- Calibration data: 107.42 h
- Background data: 2401.40 h
- Tracking data: 106 h

Run period 3 (= Oct-Dec 2018):
#+BEGIN_SRC
Type: rtBackground
trackingDuration: 3 days, 2 hours, 17 minutes, and 53 seconds
nonTrackingDuration: 6 weeks, 4 days, 20 hours, 54 minutes, and 29 seconds
Type: rtCalibration
trackingDuration: 0 nanoseconds
nonTrackingDuration: 3 days, 15 hours, 3 minutes, and 45 seconds
#+END_SRC
Which amounts to:
- Calibration data: 87.05 h
- Background data: 1124.9 h
- Tracking data: 74.29 h

So in total:
- Calibration data: 194.47 h
- Background data: 3526.3 h
- Tracking data: 180.29 h

** InGrid temperature from shift forms

Due to the bug in TOS that caused the temperature log files to be placed in =./TOS/log/= instead of the respective run folders, the files were, it seems, periodically overwritten. The only temperature information we still have from that period is therefore the shift forms. There are also pictures of every shift form in my Google Photos.
| Run number | Date                   | Temp / °C | Notes                                          |
|------------+------------------------+-----------+------------------------------------------------|
|         76 | <2017-10-17 Tue 05:00> | -         | Not written down yet                           |
|         77 | <2017-11-02 Thu 05:25> | 40.50     |                                                |
|         78 | <2017-11-03 Fri 05:25> | 40.30     |                                                |
|         79 | <2017-11-04 Sat 05:30> | 40.63     |                                                |
|         80 | <2017-11-05 Sun 05:30> | 40.80     |                                                |
|         81 | <2017-11-06 Mon 06:07> | 40.27     |                                                |
|         82 | <2017-11-07 Tue 06:00> | 40.10     |                                                |
|         82 | <2017-11-08 Wed 05:32> | -         | Wrote 2nd val: 22.73                           |
|         84 | <2017-11-09 Thu 05:33> | 40.22     |                                                |
|         86 | <2017-11-10 Fri 05:32> | 40.13     |                                                |
|         87 | <2017-11-11 Sat 05:36> | 40.00     |                                                |
|         87 | <2017-11-12 Sun 06:07> | 40.31     |                                                |
|         89 | <2017-11-13 Mon 05:40> | 39.87     |                                                |
|         90 | <2017-11-14 Tue 05:40> | 39.74     |                                                |
|         91 | <2017-11-15 Wed 05:35> | 39.68     |                                                |
|         92 | <2017-11-17 Fri 05:36> | 39.72     |                                                |
|         94 | <2017-11-18 Sat 05:38> | 39.72     |                                                |
|         95 | <2017-11-19 Sun 05:44> | 39.70     |                                                |
|         97 | <2017-11-25 Sat 05:52> | 40.35     |                                                |
|         98 | <2017-11-26 Sun 06:28> | 39.70     |                                                |
|         99 | <2017-11-27 Mon 06:25> | 39.61     |                                                |
|        100 | <2017-11-28 Tue 06:35> | 38.93     |                                                |
|        101 | <2017-11-29 Wed 06:35> | 38.72     |                                                |
|        103 | <2017-11-30 Thu 06:32> | 39.70     |                                                |
|        104 | <2017-12-01 Fri 06:42> | 38.68     |                                                |
|        105 | <2017-12-02 Sat 06:40> | 39.46     |                                                |
|        106 | <2017-12-03 Sun 06:36> | 39.46     |                                                |
|        107 | <2017-12-04 Mon 06:50> | 38.67     |                                                |
|        109 | <2017-12-05 Tue 06:35> | 38.31     |                                                |
|        112 | <2017-12-07 Thu 06:45> | 38.46     |                                                |
|        112 | <2017-12-09 Sat 06:38> | 39.79     |                                                |
|        113 | <2017-12-10 Sun 06:37> | 38.56     |                                                |
|        114 | <2017-12-11 Mon 06:45> | 38.86     |                                                |
|        115 | <2017-12-12 Tue 06:45> | 39.86     |                                                |
|        117 | <2017-12-13 Wed 06:41> | 39.71     |                                                |
|        119 | <2017-12-14 Thu 06:42> | 40.32     |                                                |
|        121 | <2017-12-15 Fri 06:45> | 39.95     |                                                |
|        123 | <2017-12-16 Sat 06:42> | 40.03     |                                                |
|        124 | <2017-12-17 Sun 06:47> | 39.96     |                                                |
|        124 | <2017-12-18 Mon 06:45> | 40.07     |                                                |
|        125 | <2017-12-19 Tue 06:44> | 40.61     |                                                |
|        127 | <2017-12-20 Wed 06:46> | 40.75     |                                                |
|        137 | <2018-02-15 Thu 05:48> | -         | Start of 2018 data taking, temp readout broken |
|        140 | <2018-02-16 Fri 05:44> | -         |                                                |
|        146 | <2018-02-18 Sun 06:15> | -         |                                                |
|        148 | <2018-02-19 Sun 06:12> | -         |                                                |
|        150 | <2018-02-20 Tue 06:00> | -         |                                                |
|        152 | <2018-02-21 Wed 06:12> | -         |                                                |
|        154 | <2018-02-22 Thu 06:14> | -         |                                                |
|        156 | <2018-02-23 Fri 05:58> | -         |                                                |
|        158 | <2018-02-24 Sat 06:15> | -         |                                                |
|        160 | <2018-02-28 Wed 06:01> | -         |                                                |
|        162 | <2018-03-02 Fri 05:58> | -         |                                                |
|        162 | <2018-03-03 Tue 06:03> | -         |                                                |
|        162 | <2018-03-04 Sun 05:57> | -         |                                                |
|        164 | <2018-03-05 Mon 06:00> | -         |                                                |
|        164 | <2018-03-06 Tue 06:00> | -         |                                                |
|        166 | <2018-03-07 Wed 05:44> | -         |                                                |
|        170 | <2018-03-14 Wed 05:35> | -         |                                                |
|        172 | <2018-03-15 Thu 05:29> | -         |                                                |
|        174 | <2018-03-16 Fri 05:30> | -         |                                                |
|        176 | <2018-03-17 Sat 05:25> | -         |                                                |
|        178 | <2018-03-18 Sun 05:24> | -         |                                                |
|        178 | <2018-03-19 Mon 05:22> | -         |                                                |
|        178 | <2018-03-20 Tue 05:16> | -         |                                                |
|        178 | <2018-03-21 Wed 05:15> | -         |                                                |
|        178 | <2018-03-22 Thu 05:12> | -         |                                                |
|        180 | <2018-03-23 Fri 05:12> | -         |                                                |
|        180 | <2018-03-24 Sat 05:17> | -         |                                                |
|        182 | <2018-03-25 Sun 06:07> | -         |                                                |
|        182 | <2018-03-26 Mon 06:07> | -         |                                                |
|        240 | <2018-10-22 Mon 06:38> | 48.27     | Begin of Run 3 period                          |
|        242 | <2018-10-23 Tue 06:38> | 48.61     |                                                |
|        244 | <2018-10-24 Wed 06:45> | 48.70     |                                                |
|        246 | <2018-10-25 Thu 06:47> | 49.83     |                                                |
|        248 | <2018-10-26 Fri 06:47> | 49.37     |                                                |
|        250 | <2018-10-27 Sat 06:46> | 47.55     |                                                |
|        252 | <2018-10-28 Sun 05:40> | 47.16     |                                                |
|        254 | <2018-10-29 Mon 05:55> | 45.77     |                                                |
|        256 | <2018-10-30 Tue 05:33> | 45.56     |                                                |
|        258 | <2018-11-01 Thu 05:50> | 46.15     |                                                |
|        261 | <2018-11-02 Fri 06:10> | 46.65     |                                                |
|        261 | <2018-11-03 Sat 05:53> | 47.02     |                                                |
|        261 | <2018-11-04 Sun 06:14> | 47.97     |                                                |
|        263 | <2018-11-05 Mon 06:14> | 47.94     |                                                |
|        265 | <2018-11-06 Tue 05:53> | 47.27     |                                                |
|        268 | <2018-11-09 Fri 06:00> | 47.66     |                                                |
|        270 | <2018-11-10 Sat 06:04> | 48.39     |                                                |
|        270 | <2018-11-11 Sun 06:11> | 48.43     |                                                |
|        272 | <2018-11-12 Mon 06:10> | 48.69     |                                                |
|        272 | <2018-11-13 Tue 06:04> | 48.59     |                                                |
|        272 | <2018-11-14 Wed 06:10> | 48.96     |                                                |
|        274 | <2018-11-15 Thu 06:10> | 48.57     |                                                |
|        274 | <2018-11-16 Fri 06:07> | 48.60     |                                                |
|        274 | <2018-11-17 Sat 06:03> | 47.98     |                                                |
|        276 | <2018-11-18 Sun 06:05> | 47.70     |                                                |
|        276 | <2018-11-19 Mon 06:10> | 47.36     |                                                |
|        278 | <2018-11-24 Sat 06:22> | 46.60     |                                                |
|        279 | <2018-11-25 Sun 06:29> | 46.74     |                                                |
|        279 | <2018-11-26 Mon 06:26> | 46.67     |                                                |
|        281 | <2018-11-27 Tue 06:20> | 46.57     |                                                |
|        283 | <2018-11-29 Thu 06:22> | 46.31     |                                                |
|        283 | <2018-11-30 Fri 06:26> | 46.76     |                                                |
|        283 | <2018-12-01 Sat 06:29> | 46.83     |                                                |
|        285 | <2018-12-02 Sun 06:26> | 46.75     |                                                |
|        285 | <2018-12-03 Mon 06:28> | 46.62     |                                                |
|        287 | <2018-12-05 Wed 06:28> | 47.10     |                                                |
|        289 | <2018-12-06 Thu 06:29> | 47.44     |                                                |
|        291 | <2018-12-07 Fri 06:31> | 46.23     |                                                |
|        291 | <2018-12-08 Sat 06:35> | 46.13     |                                                |
|        293 | <2018-12-09 Sun 06:32> | 46.02     |                                                |
|        293 | <2018-12-10 Mon 06:32> | 45.78     |                                                |
|        295 | <2018-12-11 Tue 06:33> | 45.41     |                                                |
|        297 | <2018-12-12 Wed 06:37> | 44.38     |                                                |
|        297 | <2018-12-13 Thu 06:35> | 44.50     |                                                |
|        298 | <2018-12-14 Fri 06:40> | 44.74     |                                                |
|        299 | <2018-12-15 Sat 06:43> | 44.50     |                                                |
|        301 | <2018-12-16 Sun 06:41> | 44.44     |                                                |
|        301 | <2018-12-17 Mon 06:26> | 45.03     |                                                |
|        303 | <2018-12-18 Tue 06:52> | 44.69     |                                                |
|        306 | <2018-12-20 Thu 06:30> | 40.04     |                                                |

* Cabling & software setup [/] :Appendix:extended:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:cabling_and_softwar_setup
:END:

This appendix gives a short overview of how the cabling of the detector needs to be set up and how to set up a computer to flash the firmware onto the FPGA. Finally, an example is given of how to start up TOS and launch a background run. These are mainly notes taken for myself, but I think they should be useful for anyone attempting to run this detector.
** TODOs for this section :noexport:
- [ ] *CLEAN THIS UP TO MAKE IT STAND ON ITS OWN OR MERGE INTO ABOVE CAST OPERATION?*

** Virtex V6 cabling
The following pieces are required:
- power supply
- 2 HDMI to intermediate board
- mini USB into JTAG port on backside (to computer)
- RJ45 from ethernet port into 2nd ethernet card on DAQ PC
The RJ45 connection is only required to flash the firmware onto the Virtex.

** Detector cabling
The following is the cabling for the FADC and the scintillators to the intermediate board. It's a useful reference when connecting the cabling!

FADC Trig out (on the FADC):
- trig out -> level adapter (NIM module) into NIM IN, set to +NORM+ / *COMPL* (*this is active*)
  TTL out -> TTL signal clipper (TTL 5V -> 2.4V) into port marked =I=
  TTL signal clipper -> adapter board into *left LEMO at back* (viewed from behind the crate; cable still at CAST)

FPGA to FADC (shutter signal):
- Adapter board *right LEMO at back* (viewed from behind crate) -> level adapter (NIM module), set to *NORM* / +COMPL+ into TTL IN
  NIM out -> FADC EXT EN TRIG

Veto scintillator (*NOTE* this may be *WRONG*; the input to the adapter board, that is); the signal order is reversed!
- Adapter board *left LEMO on top* (viewed from behind the crate, input number *1* on adapter board) -> discriminator OUT (top discriminator in NIM module)
  discriminator IN -> back of Amplifier Discriminator NIM module
  amplifier input -> veto scintillator signal
The signal needs to be a *2.4 V TTL signal!*

*** TODOs for this section :noexport:
- [ ] *CLEAN THIS UP*

** Vivado / ISE on Void Linux & flashing the Virtex V6
The setup described here was written for [[https://voidlinux.org/][Void Linux]], but should of course hold generally with minor modifications. Installing Vivado on Void Linux is relatively straightforward (which does not mean it's not annoying). First of all, note that Vivado version 2020.3 apparently dropped support for many devices!
That's why I decided to download the 2020.2 self-installing Linux web installer from here:
https://www.xilinx.com/member/forms/download/xef.html?filename=Xilinx_Unified_2020.2_1118_1232_Lin64.bin
(a Xilinx account is needed, of course). Installation is done simply by running the installer and waiting a long time (it seems to work just fine on Void). On =tpc19= I installed it under =/data/Xilinx= (that's the 2 TB HDD in there). It complained at the end about some Python runtime error while setting things up, but said compilation was otherwise successful.

Note: it needs about 56 GB (!!!) of space!

After installation we need to do 3 more things to actually run it:
1. Install (if not already done) =libtinfo=:
   #+begin_src sh
   sudo xbps-install -S ncurses-libtinfo-libs ncurses-libtinfo-devel
   #+end_src
   However, this only installs version 6 of =libtinfo=, but we need 5. Fortunately, 5 and 6 are API compatible, so we can link version 6 to version 5:
   #+begin_src sh
   cd /lib64
   sudo ln -s libtinfo.so.6 libtinfo.so.5
   #+end_src
2. Source the =settings64.sh= file:
   #+begin_src sh
   source /data/Xilinx/Vivado/2020.2/settings64.sh
   #+end_src
3. Finally, set a Java-related environment variable, because otherwise under a tiling WM we only get a white screen upon launch:
   #+begin_src sh
   export _JAVA_AWT_WM_NONREPARENTING=1
   #+end_src
   (ref: https://forums.xilinx.com/t5/Installation-and-Licensing/Vivado-hangs-forever-with-white-window-during-startup-Linux/td-p/479058)

After that it should launch successfully.

*** Installing ISE
ISE has the advantage over Vivado that it allows for a so-called "lab tools" installation, which only weighs about 5.5 GB (compared to Vivado's > 50 GB). Download the installer from:
https://www.xilinx.com/member/forms/download/xef.html?filename=Xilinx_ISE_DS_Lin_14.7_1015_1.tar
(requires a Xilinx account). Untar the file somewhere and execute =xsetup= with superuser access.
This is in case the chosen installation path is not user writable or the user wishes to install the USB drivers. The latter is very useful, but possibly does not work. It is relatively straightforward to install the USB drivers manually though (see [[Install USB drivers]] below).
#+begin_src sh
tar xf Xilinx_ISE_DS_Lin_14.7_1015_1.tar
cd Xilinx_ISE_DS_Lin_14.7_1015_1
sudo ./xsetup
#+end_src
Simply follow the installation instructions and select the "LabTools" package. It contains =impact=, which is all we care about to flash the Virtex V6. To run =impact= we need to source the =settings64.sh= file:
#+begin_src sh
source <ISE_PATH>/settings64.sh
#+end_src

*** Install USB drivers
To see whether the correct USB drivers are installed on a machine, connect the Virtex V6 board to the computer using a mini USB cable in the JTAG port. Running
#+begin_src sh
lsusb
#+end_src
in the terminal, we should see a Xilinx device. The bus and device numbers of course depend on the machine and port. If the device is recognized, the ID can either be:
#+begin_src sh
03fd:000d Xilinx, Inc.
#+end_src
or
#+begin_src sh
03fd:0008 Xilinx, Inc. Platform Cable USB II
#+end_src
The former means only some generic Xilinx driver was loaded. The latter is the one we want.

There are 4 different ways to install the USB drivers, 3 of which are very similar to one another:
1. Using the USB drivers shipped with Vivado. These are found in:
   #+begin_src
   <Vivado_Path>/data/xicom/cable_drivers/lin64/install_script/install_drivers
   #+end_src
   From here, in theory, we should just have to run the =install_drivers= script with sudo rights and they should be installed. This does *not* work on Void Linux.
   Ref: https://www.xilinx.com/support/answers/59128.html
2. Using the USB drivers shipped with ISE. These are found in:
   #+begin_src
   <ISE_Path>/LabTools/LabTools/bin/lin64/install_script/install_drivers
   #+end_src
   Here we also find a =readme.txt=, which contains instructions for the installation.
   In theory we should just have to run the =install_drivers= script with sudo rights and they should be installed. This also does *not* work on Void Linux.
   Ref: https://www.xilinx.com/support/answers/54381.html
3. Using the USB drivers downloaded manually from Xilinx. Download the files from here:
   https://secure.xilinx.com/webreg/clickthrough.do?cid=103670
   which gives us an =install_drivers.tar.gz=. Untar:
   #+begin_src sh
   tar xzf install_drivers.tar.gz
   cd install_drivers
   #+end_src
   and again, in theory, we should be able to run the =install_drivers= script here. On Void Linux this approach got me a bit farther. I had to install =fxload=, which I took from https://github.com/esden/fxload, compiled, and symlinked =fxload= to =/sbin/fxload=. That made the installer (which again has to be run as root) find fxload. However, from there many more errors appeared, apparently because the scripts use =/bin/sh= but expect a real Bourne shell (bash) and not dash (typically used in Ubuntu and many other distributions). I patched all occurrences of =/bin/sh= to explicitly use =/bin/bash=, which got me farther, but in the end I still could not successfully install the drivers.
   Ref: https://www.xilinx.com/support/documentation/user_guides/ug344.pdf
4. The best (and simplest) way is to follow the Arch Wiki:
   Ref: https://wiki.archlinux.org/title/Xilinx_ISE_WebPACK#Xilinx_Platform_Cable_USB-JTAG_Drivers
   The instructions here make use of a custom USB driver package by "zerfleddert". Essentially, just clone the repository:
   #+begin_src sh
   git clone git://git.zerfleddert.de/usb-driver
   #+end_src
   and compile it by running =make=. Afterwards we need to run the =setup_pcusb= script with the path to the ISE installation (again with superuser rights). However, the path to the ISE installation is *not* just the path to the =ISE= directory, but:
   #+begin_src sh
   sudo ./setup_pcusb <ISE_PATH>/LabTools/common
   #+end_src
   (the =common= directory is the one that contains the actual project).
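Once the drivers are in place, whether the cable came up with the right firmware can be checked directly from the =lsusb= output. A minimal sketch; the =check_xilinx_cable= helper below is made up for illustration and is not part of any Xilinx tooling:

#+begin_src sh
# Made-up helper: classify the Xilinx USB ID that lsusb reports.
check_xilinx_cable() {
  case "$1" in
    *03fd:0008*) echo "platform-cable-ok"   ;; # proper Platform Cable firmware loaded
    *03fd:000d*) echo "generic-driver-only" ;; # only the generic Xilinx ID, fxload did not run
    *)           echo "no-xilinx-device"    ;;
  esac
}

# With the 'good' ID from above:
check_xilinx_cable "ID 03fd:0008 Xilinx, Inc. Platform Cable USB II"  # prints platform-cable-ok
#+end_src

In practice one would feed it the matching line from =lsusb=, e.g. =check_xilinx_cable "$(lsusb | grep -i xilinx)"=.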
From here I compared my (now created) =/etc/udev/rules.d/xusbdfwu.rules= file with the file shown in the Arch wiki. They are not exactly identical, but I tried mine first before modifying it. To be certain, I reloaded the udev rules anyway:
#+begin_src sh
sudo udevadm control --reload-rules
#+end_src
Afterwards, reconnecting the Virtex V6 board and running =lsusb= showed the correct driver. Flashing the firmware now works correctly both in Vivado and in Impact.

*** Setting up ethernet device
*** Ethernet connection with Virtex
The ethernet connection to the Virtex needs to be set up manually. On the one hand, a static IP address is required for the secondary ethernet device; in addition, we need to set an ARP entry (note: the =arp= program is part of the =net-tools= package; the package name is the same on Ubuntu and Void Linux). Under Ubuntu this setup can be done using the network manager. On Void we need to use =ip= from the terminal. The settings are as follows:
- IP address: 10.1.2.3
- Subnet: 24
Setup of the device can be done according to the example here: https://docs.voidlinux.org/config/network/index.html, namely (with superuser rights):
#+begin_src sh
ip addr show                              # check the name of the correct, secondary device
ip link set dev enp4s0 up                 # name on tpc19
ip addr add 10.1.2.3/24 brd + dev enp4s0
#+end_src
Afterwards we can set the ARP entries (ref: https://confluence.team.uni-bonn.de/display/PHYGASDET/How+to+automatically+set+ARP+entries):
#+begin_src sh
arp -i enp4s0 -s 10.1.2.2 AA:BA:DD:EC:AD:E2
#+end_src

*** Make these two steps automatic under Void
In principle it should be enough to put the above steps into =/etc/rc.local=.

*** Temperature readout via MCP2210 and MAX31865
Ref: https://datasheets.maximintegrated.com/en/ds/MAX31865.pdf

The temperature readout via the MCP2210 micro-controllers installed on the intermediate board is done via SPI through a micro USB port on the intermediate board.
This micro-controller talks to a MAX31865 for each of the
sensors. The micro-controller is powered via an additional power
line! That means one needs to use one of the "big" power supplies
with an additional +5V input. Using such a power supply and plugging
the micro USB port from the intermediate board into the PC should
yield an entry like the following on =lsusb=:
#+begin_src sh
lsusb
#+end_src
#+RESULTS:
Bus 001 Device 024: ID 04d8:00de Microchip Technology, Inc. MCP2210 USB to SPI Master
The MCP2210 communication is done through code built on an open
source library by Kerry Wong:
https://github.com/kerrydwong/MCP2210-Library
In TOS this is embedded using the HID API. To get the communication
working, the final step is to set up the =udev= rules. The file
https://github.com/kerrydwong/MCP2210-Library/blob/master/99-hid.rules
needs to be placed in =/etc/udev/rules.d/=. Since we use the HID API
the second entry is the relevant one. The file as it is used on
=tpc19= at the moment is =/etc/udev/rules.d/99-mcp2210.rules=:
#+begin_src sh
# This is a sample udev file for HIDAPI devices which changes the permissions
# to 0666 (world readable/writable) for a specified device on Linux systems.

# If you are using the hidraw implementation, then do something like the
# following, substituting the VID and PID with your device. Busnum 1 is USB.
# HIDAPI/hidraw
KERNEL=="hidraw*", ATTRS{busnum}=="1", ATTRS{idVendor}=="04d8", ATTRS{idProduct}=="00de", MODE="0666"

# Once done, optionally rename this file for your device, and drop it into
# /etc/udev/rules.d and unplug and re-plug your device. This is all that is
# necessary to see the new permissions. Udev does not have to be restarted.

# Note that the hexadecimal values for VID and PID are case sensitive and
# must be lower case.

# If you think permissions of 0666 are too loose, then see:
# http://reactivated.net/writing_udev_rules.html for more information on finer
# grained permission setting.
# For example, it might be sufficient to just set the group or user
# owner for specific devices (for example the plugdev group on some
# systems).
#+end_src
After writing this file (take care to check that the USB device is
actually on USB bus number 1, as in the case of the example output of
=lsusb= above), we can reload the =udev= rules:
#+begin_src sh
sudo udevadm control --reload-rules
#+end_src
and then replug the USB connection to the intermediate board. Now,
TOS (or a standalone program using the MCP2210) should work fine.
** Setting up the chips in TOS
This section is specific to the Septemboard used at CAST. We follow
the steps described in the shifter documentation
[[file:~/org/Doc/ShiftDocumentation/shifter_documentation.org]]:
#+begin_src sh
7 # number of chips
4 # preload
SetChipIDOffset
190
lf # 7 times enter to load default paths
uma 1 # Matrix settings
0
1
1
0
LoadThreshold # load threshold equalisation files
4 # write matrix
3 # read out
3
ActivateHFM
SetFadcSettings
Run
1 # run time via # frames
0
0
0
2 # shutter range select
30 # shutter time select
0 # zero suppression
1 # FADC usage
0 # accept FADC settings
#+end_src
The above would launch a full background run.
* Window rupture and vacuum contamination :Appendix:extended:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:vacuum_contamination
:END:
*NOTE*: This section is a direct copy of my notes about the
calculation of the possible contamination of the LLNL telescope.
This file is a simple readme about the calculation of the potential
vacuum contamination of the system. The rough ideas are stated here.
** Calculation of vacuum volume
In order to calculate the total amount of gas which entered the
vacuum volume, we first need to calculate the total volume of the
vacuum system. The calculation of the volume is done in the Nim
program [[file:~/CAST/VacuumContamination/vacuum_contamination.nim][vacuum_contamination.nim]].
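Each piece of tubing listed below is treated as a cylinder of
diameter $d_i$ and length $L_i$ (matching the (diameter, length)
pairs in the tables), so the total volume is simply the sum
#+BEGIN_LaTeX
\[ V_{\text{vacuum}} = \sum_i \pi \left(\frac{d_i}{2}\right)^2 L_i. \]
#+END_LaTeX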
*** Tubing
The following lists the different pieces of vacuum piping, tubes
etc.:
**** Static tubing:
This table lists the static tubes used in the vacuum system:
Table A:
| Diameter / mm | Length / cm | Index | Notes                                  |
|---------------+-------------+-------+----------------------------------------|
| 63            | 10          | 1     | telescope to gate valve                |
| 63            | 51          | 2     | telescope to detector, stainless steel |
| 63            | 21.5        | 3     | between st. steel tube & copper tube   |
| 25            | 33.7        | 4     | copper tube                            |
| 63            | 20          | 5     | bellow in front of telescope           |
| ?             | 50          | 6     | telescope                              |
| 40            | 15.5        | 7     | piece needle valve connects to         |
| 16            | 13          | 8     | Pirani <-> Mini turbo                  |
| 40            | 10          | 10    | 90 deg next to cross for needle valve  |
**** Flexible tubing:
Table B:
| Diameter / mm | Length / cm | Index | Notes                                         |
|---------------+-------------+-------+-----------------------------------------------|
| 16            | 25          | 1     | connection from needle valve to primary 1 / 4 |
| 16            | 25          | 2     | 2 / 4                                         |
| 16            | 25          | 3     | 3 / 4                                         |
| 16            | 25          | 4     | 4 / 4                                         |
| 16            | 40          | 5     | to needle valve                               |
| 25            | 90          | 6     | Pirani <-> Mini turbo                         |
| 25            | 80          | 7     | Mini turbo <-> T piece towards primary        |
| 40            | 50          | 8     | before turbo pump                             |
| 16            | 150         | 9     | connection to primary                         |
| 40            | 80          | 10    | needle valve to turbo, before index C3        |
| 40            | 80          | 11    | to turbo pump, after index C3                 |
**** T-pieces:
Table C:
| Diameter / mm | Length / cm | Index | Notes                                        |
|---------------+-------------+-------+----------------------------------------------|
| 40            | 18 x 21     | 1     | orthogonal to 3A                             |
| 16            | 7 x 4.5     | 2     | in between mini turbo, needle valve, primary |
| 40            | 10          | 3     | T-piece connection of P-MM                   |
**** Crosses:
Table D:
| Diameter / mm | Length / cm | Index | Notes                           |
|---------------+-------------+-------+---------------------------------|
| 16            | 10 x 10     | 1     | before primary                  |
| 40            | 14 x 14     | 2     | before turbo, behind gate valve |
| 40            | 14 x 14     | 3     | same as above                   |
| 40            | 14 x 14     | 4     | cross at needle valve           |
*** Implementation of tubing and calculation of volume
Given the tubing data as above, the Nim module defines a =TubesMap=
data structure, which is simply an object with 4 different fields,
one for each type of vacuum tubing:
- static
- flexible
- T-pieces
- crosses
where for each a sequence of tuples of (diameter / mm, length / cm)
is created. This is defined in
[[file:~/CAST/VacuumContamination/tubing.nim][tubing.nim]], which
also offers a function to get such a =TubesMap= for the data. This
tubing object is taken in =main()= and handed to
=calcTotalVacuumVolume()=, which uses the helper volume functions to
calculate the total vacuum volume. In a functional style using =map=,
we iterate over each of the fields of =TubesMap= one after
another. For each item (a tuple of floats), an anonymous function is
used to calculate the volume of that specific part. Finally, the
volume elements in the resulting sequence are summed to give the
total volume. This amounts to
#+BEGIN_LaTeX
$V_{\text{vacuum}} \approx 10.88 \, \mathrm{l}$.
#+END_LaTeX
** Calculation of potential influx of gas
Once we have the total volume of the vacuum system, we still need the
potential amount of gas which entered the system. This can be
separated into two parts.
1. A static initial state given by the detector volume under the
   pressure at which the window burst:
   #+BEGIN_LaTeX
   $n_{\text{initial}} = \frac{p_{\text{burst}} V_{\text{det}}}{R T_{\text{amb}}}$.
   #+END_LaTeX
2. And afterwards a dynamic flow, given by the compressed air tube,
   inserting gas until it was shut off after
   $\SIrange{2}{5}{\second}$. For this one needs to consider the
   following. The compressed air supply tries to provide
   $\SI{6}{\bar}$; we assume the last gauge sees $\SI{6}{\bar}$ the
   whole time. From there, $\SI{2}{\meter}$ of tubing with about
   $\SI{3}{\milli \meter}$ inner diameter results in a pressure of
   #+BEGIN_LaTeX
   $p_{\text{exit}} = p_i - z L \frac{\partial V}{\partial t}$
   #+END_LaTeX
   (or something like this?).
$p_{\text{exit}}$ is the pressure at the end of the tube, i.e. inside
the detector. $p_i$ is the initial $\SI{6}{\bar}$, while $z$ is the
specific impedance (per length) of the tube for the compressed
air. The partial derivative should describe the flow of the
gas. Analogous to currents, an impedance should drop the pressure
inside the tube depending on the length $L$ due to the flow of gas
inside it. It is problematic to estimate the impedance of the tubes;
look into Demtröder etc. Given this, one can calculate the flow into
the detector for the time gas was still flowing.
#+BEGIN_LaTeX
$n_{\text{total}} = n_{\text{initial}} + \frac{p_{\text{exit}}}{R T_{\text{amb}}}\frac{\mathrm{d}V}{\mathrm{d}t} t$
#+END_LaTeX
Or something similar... An alternative way to estimate the total gas
is to consider the increase of pressure inside the $\sim
\SI{11}{\liter}$ of vacuum volume while the turbopumps and primary
were still running. However, this is probably less accurate, because
this should be highly non-linear, since the turbos shut off
immediately (ramping down slowly, i.e. pumping less and less). The
primary kept pumping for about $\SI{2}{\minute}$.
Note: the parts above remain for now; we change the approach for
point 2 above. We calculate the flow rate of the compressed air
inside the tube using the Poiseuille equation
#+BEGIN_LaTeX
$Q = \frac{\pi D^4 \Delta P}{128 \mu \Delta x}$,
#+END_LaTeX
where $Q$ is the flow rate in $\si{\liter\per\second}$, $D$ the
diameter of the tube, $\Delta P$ the pressure difference between both
ends of the tube, $\mu$ the dynamic viscosity of air and $\Delta x$
the length of the tube. Regarding the dynamic viscosity, we use
https://www.lmnoeng.com/Flow/GasViscosity.php to calculate the
viscosity of the compressed air.
As a good approximation, the dynamic viscosity is unchanged under
pressure changes
(https://www.quora.com/What-is-the-effect-of-pressure-on-viscosity-of-gases-and-liquids),
which means we can use the above calculator for air at
$\SI{20}{\celsius}$ to get a value of
#+BEGIN_LaTeX
$\mu = \SI{1.8369247E-5}{\pascal \second}$.
#+END_LaTeX
In principle we need to check whether the flow in the tube is still
laminar, which we can do following:
https://engineering.stackexchange.com/questions/8004/how-to-calculate-flow-rate-of-water-through-a-pipe.
This calculation results in a value of
#+BEGIN_LaTeX
$Q_{\text{air, laminar}} = \SI{3.246}{\liter \per \second}$,
#+END_LaTeX
which should be a good upper bound, since the equation is only valid
for laminar, incompressible fluids with a small pressure
gradient. Especially the last is definitely not valid, while the
first two are at least questionable. A quick calculation of the
Reynolds number (to determine laminar or turbulent flow) shows that
(using velocity of flow $v$):
#+BEGIN_LaTeX
\[ v = \frac{Q}{A} \]
\[ \mathrm{Re} = \frac{\rho d v}{\mu} \]
#+END_LaTeX
#+BEGIN_SRC nim :exports both
import math
let v = 3.246e-3 / (PI * pow(1.5e-3, 2))
echo v
let Re = 1.225 * 1.5e-3 * v / (1.8369e-5)
echo Re
#+END_SRC
#+RESULTS:
| 459.2150624678154 |
| 45936.50592218471 |
which shows that this calculation is wrong on several levels. The
speed of the flow is much too high I would assume. Although one thing
is to be noted: given a flow of compressed air into a vacuum, one
might expect a speed similar to the speed of sound at the inlet
pressure?! At the same time, if one were to trust this, it suggests
the flow to be in the turbulent range (cp. laminar is $Re <
\num{2300}$).
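The two numbers from the Nim snippet can be cross-checked quickly in
a shell; this is just the same arithmetic redone with =awk= and not
part of the original analysis:
#+begin_src sh
# Cross-check of the flow speed and Reynolds number from the Nim
# snippet above. Same inputs: Q = 3.246 l/s, r = 1.5 mm,
# rho = 1.225 kg/m^3, mu = 1.8369e-5 Pa s.
awk 'BEGIN {
  pi = 3.14159265358979
  v  = 3.246e-3 / (pi * 1.5e-3^2)        # flow speed v = Q / A in m/s
  re = 1.225 * 1.5e-3 * v / 1.8369e-5    # Reynolds number
  printf "v = %.1f m/s, Re = %.1f\n", v, re
}'
# prints: v = 459.2 m/s, Re = 45936.5
#+end_src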
Maximal bound given by Bernoulli's principle:
#+BEGIN_SRC nim :exports both
import math
let v = sqrt(2 * 6e5 / 1.2)
let Q = PI * pow(1.5e-3, 2) * v
echo v
echo Q * 1000
#+END_SRC
#+RESULTS:
| 1000.0 |
| 7.068583470577035 |
meaning a speed of $\SI{1000}{\meter \per \second}$ and a maximal
flow of $\SI{7.07}{\liter\per\second}$. On the other hand, for a more
practical value (ignoring more complex calculations including
turbulent, compressible gases), see the following plot from
http://www.engineeringtoolbox.com/air-flow-compressed-air-pipe-line-d_1280.html:
#+CAPTION: Compressed air capacities for different inner sizes. 1/8" is roughly a 3mm inner tube.
#+NAME: fig::comp-air-capacity
[[file:~/org/Figs/compressed-air-pipeline-capacity-liter.png]]
which shows a capacity of the compressed air line of
$\sim\SI{2}{\liter\per\second}$. Thus, we can safely assume the
calculated $\SI{3.25}{\liter\per\second}$ to be a worst case
scenario. Hence, the total amount of air introduced into the system
is:
#+BEGIN_LaTeX
$n_{\text{total}} = n_{\text{initial}} + \frac{Q_{\text{comp. air}} \cdot \SI{5}{\second} \cdot p_{\text{atm}}}{R T_{\text{amb}}}$, where $Q_{\text{comp. air}} = \SI{3.246}{\liter\per\second}$
#+END_LaTeX
This ends up being:
#+BEGIN_LaTeX
$n_{\text{total}} = n_{\text{initial}} + n_{\text{flow}} = \SI{0.0069}{\mol} + \SI{0.666}{\mol} = \SI{0.673}{\mol}$
#+END_LaTeX
Given in volume at normal conditions, this results in a gas volume of
#+BEGIN_LaTeX
$V_{\text{gas}} = \SI{16.4}{\liter}$
#+END_LaTeX
** Consider pumping of pumps
Since the pumps were still running during this period, they would
have extracted most of the gas immediately again. See the last point
in the previous section.
** Calculation of possible contamination
Finally, given the total vacuum volume and the amount of gas which
entered the system, we can estimate the maximum possible
contamination.
The upper limit is of course all contaminations in the gas forming a
monolayer in the whole vacuum system. Assuming a certain ppm
contamination in the gas, the maximum contamination is simply
#+BEGIN_LaTeX
$d_{\text{cont}} = \frac{n_{\text{total}} R T_{\text{amb}} \cdot q_{\text{cont}}}{A_{\text{vacuum}}}$
#+END_LaTeX
where $q_{\text{cont}}$ is the ppm contamination in the total gas
volume $nRT$ which entered, while $A_{\text{vacuum}}$ is the total
surface area of all vacuum tubing.
#+BEGIN_COMMENT
However, first of all calculate the total amount of oil, which
entered the system assuming a (very high) 1 ppm, based on the
calculated total moles entering the vacuum of $n_{\text{total}} =
\SI{0.673}{\mol}$. This means a total of $\SI{6.73e-7}{\mol}$ of oil
entered the vacuum.
#+END_COMMENT
However, first we estimate the amount of oil which entered the system
from a typical oil contamination in compressed air. The ISO standard
ISO 8573-1:2010 defines different classes for compressed air in
different applications. Classes regarding oil contamination range
from 0 to 4, with class 4 being the worst. Class 4 calls for
$\text{ppmv}_{\text{oil}} \leq \SI{5}{\milli \gram\per\meter\cubed}$.
Thus, even if CERN's compressed air is a lot worse than this,
$\text{ppmv}_{\text{oil}} \approx \SI{10}{\milli\gram\per\meter\cubed}$
should be sufficient as a baseline. This means the entered air will
contain about
#+BEGIN_SRC nim :exports both
import math
let air_vol = 16.4
let ppmv = 10e-3
echo ppmv * air_vol
#+END_SRC
#+RESULTS:
: 0.164
which is $\SI{0.164}{\mg}$ of oil. The telescope surface is not known
exactly. No time to find out until this needs to be done. Can be
checked again later with Jaime / find slides, paper about the LLNL
telescope, to get better numbers.
Assuming 10 quarter shells of a radius of $\SI{5}{\centi\meter}$
(some larger, some smaller radius), the telescope has an area of:
#+BEGIN_SRC nim :exports both
import math
let A = 10.0 * 0.5 * PI * 0.05 * 0.5
echo A
#+END_SRC
#+RESULTS:
: 0.3926990816987241
, i.e. an area of $A_{\text{telescope}} = \SI{0.393}{\meter\squared}$.
If all of the oil was placed on the telescope, this would result in
#+BEGIN_SRC nim :exports both
import math
let A = 0.393
let oil_mg = 0.164
let ratio = oil_mg / A * 1e-4
echo ratio
#+END_SRC
#+RESULTS:
: 4.173027989821883e-05
A contamination of $d_{\text{max, cont}} =
\SI{41.73}{\nano\gram\per\cm\squared}$ is an upper bound on oil on
the telescope. Realistically, the telescope only has $<
\frac{1}{10}$ of the total vacuum surface area, while probably $>
\SI{90}{\percent}$ of the oil will have left the system via the
pumps, pushing the contamination as low as $d_{\text{cont}} <
\SI{0.4173}{\nano\gram\per\cm\squared}$. This may still sound like
quite a bit, but the assumptions made here are all extremely
conservative:
- 5 seconds with an open valve. More likely it was about
  \SI{3}{\second} => factor of $3/5$.
- flow of compressed air of $\SI{3.25}{\liter\per\second}$, more
  likely about $\SI{2}{\liter\per\second}$ => factor of another
  $2/3.25$.
- area of telescope vs total area of gas volume (hard to calculate
  due to flexible tubing) probably quite a bit less than 1 / 10?
- $\SI{10}{\percent}$ of the oil sticking to the surface is probably
  also extremely unlikely. Maybe 2 orders of magnitude less?
#+BEGIN_SRC nim :exports both
let factors = (3.0 / 5.0) * (2.0 / 3.25) * 0.5 * 1e-2
echo factors
echo factors * 0.4173
#+END_SRC
#+RESULTS:
| 0.001846153846153846 |
| 0.0007704000000000001 |
Meaning values as low as $d_{\text{cont}} <
\SI{0.7704}{\pico \gram\per\cm\squared}$ may even be more
reasonable. It is probably safe to say that this level is easily
reached if the telescope sits open during installation etc.
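Putting these correction factors together explicitly (same numbers as
in the code block above):
#+BEGIN_LaTeX
$d_{\text{cont}} < \SI{0.4173}{\nano\gram\per\cm\squared} \cdot \frac{3}{5} \cdot \frac{2}{3.25} \cdot \frac{1}{2} \cdot 10^{-2} \approx \SI{0.77}{\pico\gram\per\cm\squared}$
#+END_LaTeX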
# A contamination of $d_{\text{max, cont}} =
# \SI{1.715e-10}{\mol\per\cm\squared}$. Realistically, the telescope only
# has $< \frac{1}{10}$ of the total vacuum surface area, while probably
# $> \SI{90}{\percent}$ of the oil will have left the system via the
# pumps, pushing the contamination as low as $d_{\text{cont}} <
# \SI{1.74e-12}{\mol\per\cm\squared}$. Still a lot of particles, but
# probably much less than contaminating the telescope due to exposure to
# air.
# #+BEGIN_SRC nim :exports both
# let saw = 500.0
# echo 1.74e-12 * saw
# #+END_SRC
# #+RESULTS:
# : 8.7e-10
# This is about $\SI{8.7e-10}{\milli \gram \per \cm\squared}$ of oil. This
# still sounds like quite a bit, but the assumptions made here are all
# extremely conservative:
# - 5 seconds with an open valve. More likely it was about
#   \SI{3}{\second} => factor of $3/5$.
# - flow of compressed air of $\SI{3.25}{\liter\per\second}$, more
#   likely about $\SI{2}{\liter\per\second}$ => factor of another
#   $2/3.25$.
# - area of telescope vs total area of gas volume (hard to calculate due
#   to flexible tubing) probably quite a bit less than 1 / 10.
# - $\SI{10}{\percent}$ of oils sticking to surface probably also
#   extremely unlikely. Maybe 2 orders of magnitude less.
# #+BEGIN_SRC nim
# let factors = (3.0 / 5.0) * (2.0 / 3.25) * 0.5 * 1e-2
# echo factors
# echo factors * 2.6
# #+END_SRC
# #+RESULTS:
# | 0.001846153846153846 |
# | 0.0048 |
# => Would result in about $\SI{5}{\micro \gram \per \cm \squared}$.
# #+BEGIN_SRC nim :exports both
# import math
# let oil_g = 16.4e-3
# let g_per_mol = 500.0
# echo oil_g / g_per_mol
# echo 6.73e-7 * 500.0
# echo oil_g / 0.8
# #+END_SRC
# #+RESULTS:
# | 3.28e-05 |
# | 0.0003365 |
# | 0.0205 |
* Detector behavior over time :Appendix:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:detector_time_behavior
:END:
In section [[#sec:calib:detector_behavior_over_time]] we covered the
median charge and energies of clusters in the background and
calibration data.
Here in the appendix we present a few additional plots. First, in
sec. [[#sec:appendix:choice_gas_gain_binning]] are the equivalent
figures to fig. [[fig:calib:median_energy_ridgeline_30_10_2017]] for
other time periods. The goodness-of-fit tests mentioned in
sec. [[#sec:calib:detector_behavior_over_time]] for the selection of
the best interval length are shown as well. In
sec. [[#sec:appendix:correlation_gas_gain_ambient_temp]] we have the
equivalent figures to
fig. [[fig:calib:correlation_ambient_temperature_gasgain_and_spectra]]
for the other run periods.
** TODOs for this section [/] :noexport:
- [ ] *PROBABLY SCRAP THIS IF WE PUT BOTH DATA INTO ONE PLOT*
  -> Not sure which plot this referenced anymore even.
** Choice of gas gain binning time interval
:PROPERTIES:
:CUSTOM_ID: sec:appendix:choice_gas_gain_binning
:END:
The following figures show the behavior of the different time
intervals for the choice of 'ideal' gas gain time binning for all run
periods (not in the sense of Run-2 and Run-3, but those split by
significant off time). In addition
fig. [[fig:appendix:calib:gof_tests_different_binnings]] shows the
results of applying a range of goodness-of-fit tests to the cluster
data (we use a plot and not a table for easier visual parsing). Note
that the repository of this thesis contains even more figures related
to this in the ~Figs/behavior_over_time~ directory.
#+CAPTION: Kernel density estimation of the median energies split by the somewhat
#+CAPTION: distinct run periods and the time intervals used. A KDE instead of a histogram
#+CAPTION: is used as the binning has too large of an impact for the dataset.
#+NAME: fig:appendix:calib:median_energy_kde_intervals
[[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_kde_intervals.pdf]]
#+CAPTION: Comparison of the different time intervals in each run period using a set
#+CAPTION: of different goodness-of-fit tests. The $\SI{45}{min}$ interval seems optimal
#+CAPTION: in the 30/10/2017 period, but worse in others. The $\SI{90}{min}$ interval is
#+CAPTION: average in most cases.
#+NAME: fig:appendix:calib:gof_tests_different_binnings
[[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/gofs_for_different_binnings.pdf]]
#+CAPTION: Equivalent plot to fig. [[fig:calib:median_energy_ridgeline_30_10_2017]] for data from
#+CAPTION: Feb 2018 to Apr 2018.
#+CAPTION: Ridgeline plot of a kernel density estimation (bandwidth based on Silverman's rule of thumb)
#+CAPTION: of the median cluster energies split by the used time intervals. The overlap of the individual ridges is for
#+CAPTION: easier visual comparison and a KDE was selected over a histogram due to strong
#+CAPTION: binning dependence of the resulting histograms.
#+NAME: fig:calib:median_energy_ridgeline_Feb2018
[[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_kde_ridges_17_02_2018.pdf]]
#+CAPTION: Equivalent plot to fig. [[fig:calib:median_energy_ridgeline_30_10_2017]] for data from
#+CAPTION: Oct 2018 to Dec 2018.
#+CAPTION: Ridgeline plot of a kernel density estimation (bandwidth based on Silverman's rule of thumb)
#+CAPTION: of the median cluster energies split by the used time intervals. The overlap of the individual ridges is for
#+CAPTION: easier visual comparison and a KDE was selected over a histogram due to strong
#+CAPTION: binning dependence of the resulting histograms.
#+NAME: fig:calib:median_energy_ridgeline_Oct2018
[[~/phd/Figs/behavior_over_time/optimizeGasGainSliceTime/medianEnergy_kde_ridges_21_10_2018.pdf]]
\clearpage
** Correlation of gas gain and ambient CAST temperature
:PROPERTIES:
:CUSTOM_ID: sec:appendix:correlation_gas_gain_ambient_temp
:END:
Let's look at the rest of the data not shown in
sec. [[#sec:calib:causes_variability]]. First in
fig. [[fig:appendix:correlation_ambient_temperature_gasgain_and_spectra_run2_2017]]
we see the same plot as
fig. [[fig:calib:correlation_ambient_temperature_gasgain_and_spectra]],
but only for Run-2 data from 2017. The anti-correlation is not quite
visible here; instead in parts it seems like the expected correlation
of temperature and gas gain is visible.
Fig. [[fig:appendix:correlation_ambient_temperature_gasgain_and_spectra_run2_2018]]
is the data for Feb 2018 to Apr 2018. Here there seems to be some of
the anti-correlation, but less than in the Run-3 data presented in
the main part of the discussion. Finally,
fig. [[fig:appendix:gas_gain_vs_ambient_temp_center]] shows the gas
gain of the center chip plotted directly against the ambient
temperature at CAST as a scatter plot. Here the anti-correlation
becomes very visible as a global trend.
#+CAPTION: Normalized data for Run-2 (only 2017) of the temperature sensors from the CAST slow control log
#+CAPTION: files compared to the behavior of the mean peak position in the \cefe pixel spectra
#+CAPTION: (black points), the recovered temperature values recorded during each solar tracking
#+CAPTION: (blue points) and the gas gain values computed based on \SI{90}{min} of data for each
#+CAPTION: chip (smaller points using Viridis color scale). The shift log temperatures nicely
#+CAPTION: follow the trend of the general temperatures. In this period no real anti-correlation is
#+CAPTION: visible. Instead in parts it looks like the expected proportionality between temperature
#+CAPTION: and gas gain appears.
#+NAME: fig:appendix:correlation_ambient_temperature_gasgain_and_spectra_run2_2017
#+ATTR_LATEX: :float sideways
[[~/phd/Figs/behavior_over_time/correlation_fePixel_all_chips_gasgain_period_2017-10-30.pdf]]
#+CAPTION: Normalized data for Run-2 (only Feb. to Apr. of 2018) of the temperature sensors from the CAST slow control log
#+CAPTION: files compared to the behavior of the mean peak position in the \cefe pixel spectra
#+CAPTION: (black points), the recovered temperature values recorded during each solar tracking
#+CAPTION: (blue points) and the gas gain values computed based on \SI{90}{min} of data for each
#+CAPTION: chip (smaller points using Viridis color scale). The shift log temperatures nicely
#+CAPTION: follow the trend of the general temperatures. Here the anti-correlation seems to
#+CAPTION: be visible in some parts, but also less extreme than in the end-of-2018 Run-3 data
#+CAPTION: presented in the main section.
#+NAME: fig:appendix:correlation_ambient_temperature_gasgain_and_spectra_run2_2018
#+ATTR_LATEX: :float sideways
[[~/phd/Figs/behavior_over_time/correlation_fePixel_all_chips_gasgain_period_2018-02-15.pdf]]
#+CAPTION: Gas gains of the center chip (by $\SI{90}{min}$ time slices) against the ambient
#+CAPTION: temperature at CAST. As a general trend the anti-correlation is very visible. Also
#+CAPTION: visible though is that for the 2017 Run-2 data that effect does not really appear.
#+NAME: fig:appendix:gas_gain_vs_ambient_temp_center
[[~/phd/Figs/behavior_over_time/gain_vs_temp_center_chip.pdf]]
* CAST Detector Lab data :Appendix:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:cast_detector_lab
:END:
In this appendix we present additional plots of the CAST Detector Lab
(CDL) data taking campaign, as introduced in sec. [[#sec:cdl]]. First
in sec. [[#sec:appendix:cdl_spectra_by_run]] plots of all CDL runs by
their target and filter combination, both for the pixel and charge
spectrum, can be found, split by each run.
Sec. [[#sec:appendix:cdl:all_spectra_fits_by_run]] then contains all
figures equivalent to fig. [[fig:cdl:ti_ti_charge_spectrum_run_326]]
for all the different runs (both as pixel and charge spectrum).
** Generate all spectrum plots split by run :extended:
All the plots of the spectra split by run are produced by
~cdl_spectrum_creation~ as mentioned in
sec. [[#sec:background:gen_plots_cdl_data]]. However, for this
appendix we wish to have slightly larger fonts, because the only plot
used in the main thesis body is a single plot, not side by side. So
the command for this appendix:
#+begin_src sh
F_WIDTH=0.5 WRITE_PLOT_CSV=true ESCAPE_LATEX=true USE_TEX=true \
    cdl_spectrum_creation \
    -i ~/CastData/data/CDL_2019/CDL_2019_Reco.h5 \
    --cutcdl --dumpAccurate --hideNloptFit \
    --plotPath ~/phd/Figs/CDL/
#+end_src
** All spectra split by run
:PROPERTIES:
:CUSTOM_ID: sec:appendix:cdl_spectra_by_run
:END:
The plots in this section are useful to see the variation visible for
one target/filter combination between different runs. In theory the
peaks should always be at the same pixel / charge value, but in
reality for some targets they vary significantly between runs. This
is the main motivation to treat the data on a 'run-by-run' basis.
#+begin_comment
Below is the code we use to produce all the ~subsection~ DSL side by
side figures, their labels and captions. Too much work to do by hand,
if it's all the same anyway.
:) #+end_comment #+begin_src nim :exports results :results drawer import std / [os, algorithm, strformat, strutils] import ingrid / ingrid_types const path = "~/phd/Figs/CDL/" proc subfigureSec(body: string): string = result = """ $#begin_src subfigure (figure () $# ) $#end_src """ % ["#+", body, "#+"] proc subfigure(caption, label, path: string): string = result = """ (subfigure (linewidth 0.5) (caption "$#") (label "$#") (includegraphics (list (cons 'width (linewidth 1.0))) "$#")) """ % [caption, label, path] proc caption(isPixel: bool): string = result = if isPixel: "Pixel spectrum" else: "Charge spectrum" proc label(tfKind: TargetFilterKind, isPixel: bool): string = let typ = if isPixel: "pixel" else: "charge" result = &"fig:appendix:cdl_{typ}_{$tfKind}_by_run" proc fullLabel(tfKind: TargetFilterKind): string = &" (label \"fig:appendix:cdl_{$tfKind}_by_run\")" proc subref(tfKind: TargetFilterKind, isPixel: bool): string = result = "(subref \"$#\")" % [label(tfKind, isPixel)] proc fullCaption(tfKind: TargetFilterKind): string = result = """ (caption "Pixel spectra, " $# ", and charge spectra, " $# ", of the raw data for $# split by the data taking run.") """ % [subref(tfKind, true), subref(tfKind, false), $tfKind] var plots = newSeq[string]() for tfKind in TargetFilterKind: let pFile = (path / $tfKind & "-2019_by_run.pdf").expandTilde let cFile = (path / $tfKind & "Charge-2019_by_run.pdf").expandTilde doAssert fileExists(pFile) doAssert fileExists(cFile) let plot = subfigureSec( subfigure(caption(isPixel = true), label(tfKind, isPixel = true), pFile) & subfigure(caption(isPixel = false), label(tfKind, isPixel = false), cFile) & fullCaption(tfKind) & fullLabel(tfKind) ) plots.add plot for plt in plots.reversed: echo plt #+end_src #+RESULTS: :results: #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_C-EPIC-0.6kV_by_run") (includegraphics (list (cons 'width (linewidth 1.0))) 
"/home/basti/phd/Figs/CDL/C-EPIC-0.6kV-2019_by_run.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_C-EPIC-0.6kV_by_run") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/C-EPIC-0.6kVCharge-2019_by_run.pdf")) (caption "Pixel spectra, " (subref "fig:appendix:cdl_pixel_C-EPIC-0.6kV_by_run") ", and charge spectra, " (subref "fig:appendix:cdl_charge_C-EPIC-0.6kV_by_run") ", of the raw data split by the data taking run.") (label "fig:appendix:cdl_C-EPIC-0.6kV_by_run") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Cu-EPIC-0.9kV_by_run") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-EPIC-0.9kV-2019_by_run.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Cu-EPIC-0.9kV_by_run") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-EPIC-0.9kVCharge-2019_by_run.pdf")) (caption "Pixel spectra, " (subref "fig:appendix:cdl_pixel_Cu-EPIC-0.9kV_by_run") ", and charge spectra, " (subref "fig:appendix:cdl_charge_Cu-EPIC-0.9kV_by_run") ", of the raw data split by the data taking run.") (label "fig:appendix:cdl_Cu-EPIC-0.9kV_by_run") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Cu-EPIC-2kV_by_run") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-EPIC-2kV-2019_by_run.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Cu-EPIC-2kV_by_run") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-EPIC-2kVCharge-2019_by_run.pdf")) (caption "Pixel spectra, " (subref "fig:appendix:cdl_pixel_Cu-EPIC-2kV_by_run") ", and charge spectra, " (subref "fig:appendix:cdl_charge_Cu-EPIC-2kV_by_run") ", of the raw data split by the data taking run.") 
(label "fig:appendix:cdl_Cu-EPIC-2kV_by_run") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Al-Al-4kV_by_run") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Al-Al-4kV-2019_by_run.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Al-Al-4kV_by_run") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Al-Al-4kVCharge-2019_by_run.pdf")) (caption "Pixel spectra, " (subref "fig:appendix:cdl_pixel_Al-Al-4kV_by_run") ", and charge spectra, " (subref "fig:appendix:cdl_charge_Al-Al-4kV_by_run") ", of the raw data split by the data taking run.") (label "fig:appendix:cdl_Al-Al-4kV_by_run") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Ag-Ag-6kV_by_run") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ag-Ag-6kV-2019_by_run.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Ag-Ag-6kV_by_run") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ag-Ag-6kVCharge-2019_by_run.pdf")) (caption "Pixel spectra, " (subref "fig:appendix:cdl_pixel_Ag-Ag-6kV_by_run") ", and charge spectra, " (subref "fig:appendix:cdl_charge_Ag-Ag-6kV_by_run") ", of the raw data split by the data taking run.") (label "fig:appendix:cdl_Ag-Ag-6kV_by_run") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Ti-Ti-9kV_by_run") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ti-Ti-9kV-2019_by_run.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Ti-Ti-9kV_by_run") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ti-Ti-9kVCharge-2019_by_run.pdf")) (caption "Pixel 
spectra, " (subref "fig:appendix:cdl_pixel_Ti-Ti-9kV_by_run") ", and charge spectra, " (subref "fig:appendix:cdl_charge_Ti-Ti-9kV_by_run") ", of the raw data split by the data taking run.") (label "fig:appendix:cdl_Ti-Ti-9kV_by_run") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Mn-Cr-12kV_by_run") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Mn-Cr-12kV-2019_by_run.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Mn-Cr-12kV_by_run") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Mn-Cr-12kVCharge-2019_by_run.pdf")) (caption "Pixel spectra, " (subref "fig:appendix:cdl_pixel_Mn-Cr-12kV_by_run") ", and charge spectra, " (subref "fig:appendix:cdl_charge_Mn-Cr-12kV_by_run") ", of the raw data split by the data taking run.") (label "fig:appendix:cdl_Mn-Cr-12kV_by_run") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Cu-Ni-15kV_by_run") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-Ni-15kV-2019_by_run.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Cu-Ni-15kV_by_run") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-Ni-15kVCharge-2019_by_run.pdf")) (caption "Pixel spectra, " (subref "fig:appendix:cdl_pixel_Cu-Ni-15kV_by_run") ", and charge spectra, " (subref "fig:appendix:cdl_charge_Cu-Ni-15kV_by_run") ", of the raw data split by the data taking run.") (label "fig:appendix:cdl_Cu-Ni-15kV_by_run") ) #+end_src :end: \clearpage ** All CDL spectra with line fits :extended: :PROPERTIES: :CUSTOM_ID: sec:appendix:cdl:all_spectra_fits :END: The plots in this section are the raw and cleaned pixel and charge spectra for each target/filter combination, but not split by run. 
Any parameter missing from the listed fit results was kept fixed in the fit. For lines marked with an explicit 'fixed', all parameters were kept fixed.
#+begin_comment
Below is the code we use to produce all the ~subfigure~ DSL side by side
figures, their labels and captions. Too much work to do by hand, if it's
all the same anyway. :)
#+end_comment
#+begin_src nim :exports results :results drawer
import std / [os, algorithm, strformat, strutils]
import ingrid / ingrid_types

const path = "~/phd/Figs/CDL/"

proc subfigureSec(body: string): string =
  ## Wraps `body` into a full `figure` inside a ~subfigure~ DSL source block.
  result = """
$#begin_src subfigure
(figure ()
$#
)
$#end_src
""" % ["#+", body, "#+"]

proc subfigure(caption, label, path: string): string =
  ## A single subfigure of half the line width showing the plot at `path`.
  result = """
(subfigure (linewidth 0.5)
  (caption "$#")
  (label "$#")
  (includegraphics (list (cons 'width (linewidth 1.0))) "$#"))
""" % [caption, label, path]

proc caption(isPixel: bool): string =
  result = if isPixel: "Pixel spectrum" else: "Charge spectrum"

proc label(tfKind: TargetFilterKind, isPixel: bool): string =
  let typ = if isPixel: "pixel" else: "charge"
  result = &"fig:appendix:cdl_{typ}_{$tfKind}"

proc fullLabel(tfKind: TargetFilterKind): string =
  &" (label \"fig:appendix:cdl_{$tfKind}\")"

proc subref(tfKind: TargetFilterKind, isPixel: bool): string =
  result = "(subref \"$#\")" % [label(tfKind, isPixel)]

proc fullCaption(tfKind: TargetFilterKind): string =
  result = """
(caption "Pixel spectrum, " $# ", and charge spectrum, " $# ", including fit
parameters, raw and cut data of the $# dataset.")
""" % [subref(tfKind, true), subref(tfKind, false), $tfKind]

var plots = newSeq[string]()
for tfKind in TargetFilterKind:
  let pFile = (path / $tfKind & "-2019.pdf").expandTilde
  let cFile = (path / $tfKind & "Charge-2019.pdf").expandTilde
  doAssert fileExists(pFile)
  doAssert fileExists(cFile)
  let plot = subfigureSec(
    subfigure(caption(isPixel = true), label(tfKind, isPixel = true), pFile) &
    subfigure(caption(isPixel = false), label(tfKind, isPixel = false), cFile) &
    fullCaption(tfKind) &
    fullLabel(tfKind)
  )
  plots.add plot
for plt in plots.reversed:
  echo plt
#+end_src

#+RESULTS:
:results:
#+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_C-EPIC-0.6kV") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/C-EPIC-0.6kV-2019.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_C-EPIC-0.6kV") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/C-EPIC-0.6kVCharge-2019.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_C-EPIC-0.6kV") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_C-EPIC-0.6kV") ", including fit parameters, raw and cut data of the C-EPIC-0.6kV dataset.") (label "fig:appendix:cdl_C-EPIC-0.6kV") ) #+end_src
#+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Cu-EPIC-0.9kV") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-EPIC-0.9kV-2019.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Cu-EPIC-0.9kV") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-EPIC-0.9kVCharge-2019.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Cu-EPIC-0.9kV") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Cu-EPIC-0.9kV") ", including fit parameters, raw and cut data of the Cu-EPIC-0.9kV dataset.") (label "fig:appendix:cdl_Cu-EPIC-0.9kV") ) #+end_src
#+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Cu-EPIC-2kV") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-EPIC-2kV-2019.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Cu-EPIC-2kV") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-EPIC-2kVCharge-2019.pdf")) (caption "Pixel spectrum, "
(subref "fig:appendix:cdl_pixel_Cu-EPIC-2kV") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Cu-EPIC-2kV") ", including fit parameters, raw and cut data of the Cu-EPIC-2kV dataset.") (label "fig:appendix:cdl_Cu-EPIC-2kV") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Al-Al-4kV") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Al-Al-4kV-2019.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Al-Al-4kV") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Al-Al-4kVCharge-2019.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Al-Al-4kV") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Al-Al-4kV") ", including fit parameters, raw and cut data of the Al-Al-4kV dataset.") (label "fig:appendix:cdl_Al-Al-4kV") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Ag-Ag-6kV") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ag-Ag-6kV-2019.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Ag-Ag-6kV") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ag-Ag-6kVCharge-2019.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Ag-Ag-6kV") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Ag-Ag-6kV") ", including fit parameters, raw and cut data of the Ag-Ag-6kV dataset.") (label "fig:appendix:cdl_Ag-Ag-6kV") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Ti-Ti-9kV") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ti-Ti-9kV-2019.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Ti-Ti-9kV") 
(includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ti-Ti-9kVCharge-2019.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Ti-Ti-9kV") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Ti-Ti-9kV") ", including fit parameters, raw and cut data of the Ti-Ti-9kV dataset.") (label "fig:appendix:cdl_Ti-Ti-9kV") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Mn-Cr-12kV") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Mn-Cr-12kV-2019.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Mn-Cr-12kV") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Mn-Cr-12kVCharge-2019.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Mn-Cr-12kV") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Mn-Cr-12kV") ", including fit parameters, raw and cut data of the Mn-Cr-12kV dataset.") (label "fig:appendix:cdl_Mn-Cr-12kV") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Cu-Ni-15kV") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-Ni-15kV-2019.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Cu-Ni-15kV") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-Ni-15kVCharge-2019.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Cu-Ni-15kV") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Cu-Ni-15kV") ", including fit parameters, raw and cut data of the Cu-Ni-15kV dataset.") (label "fig:appendix:cdl_Cu-Ni-15kV") ) #+end_src :end: *** TODOs for this section [/] :noexport: - [X] *ADD TABLE OF THE FITS PERFORMED HERE?* -> What use would such a table be? All the fits are 'transient' numbers. 
They don't really matter; the approach matters.
- [X] *NEED TO REPLACE BELOW BY THE PLOTS FOR THE INDIVIDUAL RUNS*
  -> This is a good idea then to remove the pixel spectra that are still
  present here and replace them by side-by-side versions of the different runs.
** All CDL spectra with line fits by run
:PROPERTIES:
:CUSTOM_ID: sec:appendix:cdl:all_spectra_fits_by_run
:END:
The plots in this section are the raw and cleaned pixel and charge spectra
for each target/filter combination, split by run. Any parameter missing from
the listed fit results was kept fixed in the fit. For lines marked with an
explicit 'fixed', all parameters were kept fixed.
#+begin_comment
Below is the code we use to produce all the ~subfigure~ DSL side by side
figures, their labels and captions. Too much work to do by hand, if it's
all the same anyway. :)
#+end_comment
#+begin_src nim :exports results :results drawer
import std / [os, algorithm, strformat, strutils, sugar]
import ingrid / ingrid_types

const path = "~/phd/Figs/CDL/"

proc subfigureSec(body: string): string =
  result = """
$#begin_src subfigure
(figure ()
$#
)
$#end_src
""" % ["#+", body, "#+"]

proc subfigure(caption, label, path: string): string =
  result = """
(subfigure (linewidth 0.5)
  (caption "$#")
  (label "$#")
  (includegraphics (list (cons 'width (linewidth 1.0))) "$#"))
""" % [caption, label, path]

proc caption(isPixel: bool): string =
  result = if isPixel: "Pixel spectrum" else: "Charge spectrum"

proc label(tfKind: TargetFilterKind, isPixel: bool, run: int): string =
  let typ = if isPixel: "pixel" else: "charge"
  result = &"fig:appendix:cdl_{typ}_{$tfKind}_run_{run}"

proc fullLabel(tfKind: TargetFilterKind, run: int): string =
  &" (label \"fig:appendix:cdl_{$tfKind}_run_{run}\")"

proc subref(tfKind: TargetFilterKind, isPixel: bool, run: int): string =
  result = "(subref \"$#\")" % [label(tfKind, isPixel, run)]

proc fullCaption(tfKind: TargetFilterKind, run: int): string =
  result = """
(caption "Pixel spectrum, " $# ", and charge spectrum, " $# ", including fit
parameters, raw and cut data of the $# dataset for run $#.")
""" % [subref(tfKind, true, run), subref(tfKind, false, run), $tfKind, $run]

var plots = newSeq[string]()
# Walk all charge spectra PDFs and parse the target/filter combination and
# run number from each file name.
for cFile in walkFiles(path.expandTilde & "*Charge-2019_run_*.pdf"):
  let fname = cFile.extractFilename()
  let tfKind = parseEnum[TargetFilterKind](fname.split("Charge")[0])
  let run = fname.split("_")[^1].dup(removeSuffix(".pdf")).parseInt
  let pFile = (path / &"{tfKind}-2019_run_{run}.pdf").expandTilde
  doAssert fileExists(pFile)
  doAssert fileExists(cFile)
  let plot = subfigureSec(
    subfigure(caption(isPixel = true), label(tfKind, isPixel = true, run), pFile) &
    subfigure(caption(isPixel = false), label(tfKind, isPixel = false, run), cFile) &
    fullCaption(tfKind, run) &
    fullLabel(tfKind, run)
  )
  plots.add plot
for plt in plots.reversed:
  echo plt
#+end_src

#+RESULTS:
:results:
#+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Ti-Ti-9kV_run_349") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ti-Ti-9kV-2019_run_349.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Ti-Ti-9kV_run_349") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ti-Ti-9kVCharge-2019_run_349.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Ti-Ti-9kV_run_349") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Ti-Ti-9kV_run_349") ", including fit parameters, raw and cut data of the Ti-Ti-9kV dataset for run 349.") (label "fig:appendix:cdl_Ti-Ti-9kV_run_349") ) #+end_src
#+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Ti-Ti-9kV_run_326") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ti-Ti-9kV-2019_run_326.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Ti-Ti-9kV_run_326") (includegraphics (list
(cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ti-Ti-9kVCharge-2019_run_326.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Ti-Ti-9kV_run_326") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Ti-Ti-9kV_run_326") ", including fit parameters, raw and cut data of the Ti-Ti-9kV dataset for run 326.") (label "fig:appendix:cdl_Ti-Ti-9kV_run_326") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Ti-Ti-9kV_run_325") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ti-Ti-9kV-2019_run_325.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Ti-Ti-9kV_run_325") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ti-Ti-9kVCharge-2019_run_325.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Ti-Ti-9kV_run_325") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Ti-Ti-9kV_run_325") ", including fit parameters, raw and cut data of the Ti-Ti-9kV dataset for run 325.") (label "fig:appendix:cdl_Ti-Ti-9kV_run_325") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Mn-Cr-12kV_run_347") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Mn-Cr-12kV-2019_run_347.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Mn-Cr-12kV_run_347") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Mn-Cr-12kVCharge-2019_run_347.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Mn-Cr-12kV_run_347") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Mn-Cr-12kV_run_347") ", including fit parameters, raw and cut data of the Mn-Cr-12kV dataset for run 347.") (label "fig:appendix:cdl_Mn-Cr-12kV_run_347") ) #+end_src #+begin_src subfigure (figure () (subfigure 
(linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Mn-Cr-12kV_run_323") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Mn-Cr-12kV-2019_run_323.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Mn-Cr-12kV_run_323") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Mn-Cr-12kVCharge-2019_run_323.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Mn-Cr-12kV_run_323") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Mn-Cr-12kV_run_323") ", including fit parameters, raw and cut data of the Mn-Cr-12kV dataset for run 323.") (label "fig:appendix:cdl_Mn-Cr-12kV_run_323") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Mn-Cr-12kV_run_315") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Mn-Cr-12kV-2019_run_315.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Mn-Cr-12kV_run_315") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Mn-Cr-12kVCharge-2019_run_315.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Mn-Cr-12kV_run_315") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Mn-Cr-12kV_run_315") ", including fit parameters, raw and cut data of the Mn-Cr-12kV dataset for run 315.") (label "fig:appendix:cdl_Mn-Cr-12kV_run_315") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Cu-Ni-15kV_run_345") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-Ni-15kV-2019_run_345.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Cu-Ni-15kV_run_345") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-Ni-15kVCharge-2019_run_345.pdf")) 
(caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Cu-Ni-15kV_run_345") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Cu-Ni-15kV_run_345") ", including fit parameters, raw and cut data of the Cu-Ni-15kV dataset for run 345.") (label "fig:appendix:cdl_Cu-Ni-15kV_run_345") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Cu-Ni-15kV_run_320") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-Ni-15kV-2019_run_320.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Cu-Ni-15kV_run_320") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-Ni-15kVCharge-2019_run_320.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Cu-Ni-15kV_run_320") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Cu-Ni-15kV_run_320") ", including fit parameters, raw and cut data of the Cu-Ni-15kV dataset for run 320.") (label "fig:appendix:cdl_Cu-Ni-15kV_run_320") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Cu-Ni-15kV_run_319") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-Ni-15kV-2019_run_319.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Cu-Ni-15kV_run_319") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-Ni-15kVCharge-2019_run_319.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Cu-Ni-15kV_run_319") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Cu-Ni-15kV_run_319") ", including fit parameters, raw and cut data of the Cu-Ni-15kV dataset for run 319.") (label "fig:appendix:cdl_Cu-Ni-15kV_run_319") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label 
"fig:appendix:cdl_pixel_Cu-EPIC-2kV_run_337") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-EPIC-2kV-2019_run_337.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Cu-EPIC-2kV_run_337") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-EPIC-2kVCharge-2019_run_337.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Cu-EPIC-2kV_run_337") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Cu-EPIC-2kV_run_337") ", including fit parameters, raw and cut data of the Cu-EPIC-2kV dataset for run 337.") (label "fig:appendix:cdl_Cu-EPIC-2kV_run_337") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Cu-EPIC-2kV_run_336") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-EPIC-2kV-2019_run_336.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Cu-EPIC-2kV_run_336") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-EPIC-2kVCharge-2019_run_336.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Cu-EPIC-2kV_run_336") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Cu-EPIC-2kV_run_336") ", including fit parameters, raw and cut data of the Cu-EPIC-2kV dataset for run 336.") (label "fig:appendix:cdl_Cu-EPIC-2kV_run_336") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Cu-EPIC-2kV_run_335") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-EPIC-2kV-2019_run_335.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Cu-EPIC-2kV_run_335") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-EPIC-2kVCharge-2019_run_335.pdf")) (caption "Pixel spectrum, " (subref 
"fig:appendix:cdl_pixel_Cu-EPIC-2kV_run_335") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Cu-EPIC-2kV_run_335") ", including fit parameters, raw and cut data of the Cu-EPIC-2kV dataset for run 335.") (label "fig:appendix:cdl_Cu-EPIC-2kV_run_335") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Cu-EPIC-0.9kV_run_340") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-EPIC-0.9kV-2019_run_340.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Cu-EPIC-0.9kV_run_340") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-EPIC-0.9kVCharge-2019_run_340.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Cu-EPIC-0.9kV_run_340") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Cu-EPIC-0.9kV_run_340") ", including fit parameters, raw and cut data of the Cu-EPIC-0.9kV dataset for run 340.") (label "fig:appendix:cdl_Cu-EPIC-0.9kV_run_340") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Cu-EPIC-0.9kV_run_339") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-EPIC-0.9kV-2019_run_339.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Cu-EPIC-0.9kV_run_339") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Cu-EPIC-0.9kVCharge-2019_run_339.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Cu-EPIC-0.9kV_run_339") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Cu-EPIC-0.9kV_run_339") ", including fit parameters, raw and cut data of the Cu-EPIC-0.9kV dataset for run 339.") (label "fig:appendix:cdl_Cu-EPIC-0.9kV_run_339") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label 
"fig:appendix:cdl_pixel_C-EPIC-0.6kV_run_343") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/C-EPIC-0.6kV-2019_run_343.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_C-EPIC-0.6kV_run_343") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/C-EPIC-0.6kVCharge-2019_run_343.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_C-EPIC-0.6kV_run_343") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_C-EPIC-0.6kV_run_343") ", including fit parameters, raw and cut data of the C-EPIC-0.6kV dataset for run 343.") (label "fig:appendix:cdl_C-EPIC-0.6kV_run_343") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_C-EPIC-0.6kV_run_342") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/C-EPIC-0.6kV-2019_run_342.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_C-EPIC-0.6kV_run_342") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/C-EPIC-0.6kVCharge-2019_run_342.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_C-EPIC-0.6kV_run_342") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_C-EPIC-0.6kV_run_342") ", including fit parameters, raw and cut data of the C-EPIC-0.6kV dataset for run 342.") (label "fig:appendix:cdl_C-EPIC-0.6kV_run_342") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Al-Al-4kV_run_333") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Al-Al-4kV-2019_run_333.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Al-Al-4kV_run_333") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Al-Al-4kVCharge-2019_run_333.pdf")) (caption "Pixel spectrum, " 
(subref "fig:appendix:cdl_pixel_Al-Al-4kV_run_333") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Al-Al-4kV_run_333") ", including fit parameters, raw and cut data of the Al-Al-4kV dataset for run 333.") (label "fig:appendix:cdl_Al-Al-4kV_run_333") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Al-Al-4kV_run_332") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Al-Al-4kV-2019_run_332.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Al-Al-4kV_run_332") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Al-Al-4kVCharge-2019_run_332.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Al-Al-4kV_run_332") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Al-Al-4kV_run_332") ", including fit parameters, raw and cut data of the Al-Al-4kV dataset for run 332.") (label "fig:appendix:cdl_Al-Al-4kV_run_332") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Ag-Ag-6kV_run_351") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ag-Ag-6kV-2019_run_351.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Ag-Ag-6kV_run_351") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ag-Ag-6kVCharge-2019_run_351.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Ag-Ag-6kV_run_351") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Ag-Ag-6kV_run_351") ", including fit parameters, raw and cut data of the Ag-Ag-6kV dataset for run 351.") (label "fig:appendix:cdl_Ag-Ag-6kV_run_351") ) #+end_src #+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Ag-Ag-6kV_run_329") (includegraphics (list (cons 'width 
(linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ag-Ag-6kV-2019_run_329.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Ag-Ag-6kV_run_329") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ag-Ag-6kVCharge-2019_run_329.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Ag-Ag-6kV_run_329") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Ag-Ag-6kV_run_329") ", including fit parameters, raw and cut data of the Ag-Ag-6kV dataset for run 329.") (label "fig:appendix:cdl_Ag-Ag-6kV_run_329") ) #+end_src
#+begin_src subfigure (figure () (subfigure (linewidth 0.5) (caption "Pixel spectrum") (label "fig:appendix:cdl_pixel_Ag-Ag-6kV_run_328") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ag-Ag-6kV-2019_run_328.pdf")) (subfigure (linewidth 0.5) (caption "Charge spectrum") (label "fig:appendix:cdl_charge_Ag-Ag-6kV_run_328") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/CDL/Ag-Ag-6kVCharge-2019_run_328.pdf")) (caption "Pixel spectrum, " (subref "fig:appendix:cdl_pixel_Ag-Ag-6kV_run_328") ", and charge spectrum, " (subref "fig:appendix:cdl_charge_Ag-Ag-6kV_run_328") ", including fit parameters, raw and cut data of the Ag-Ag-6kV dataset for run 328.") (label "fig:appendix:cdl_Ag-Ag-6kV_run_328") ) #+end_src
:end:
* CAST Detector Lab variations and fitting by run :Appendix:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:fit_by_run_justification
:END:
The main difference between the treatment of the CAST detector lab data in
cite:krieger2018search and in this thesis is that here we treat each data
taking run of the CDL data as one unit, instead of combining all data of one
target/filter combination. The reason is the strong variation of the
detector's gas gain during the CDL data taking, which makes it impossible to
sensibly fit the spectra when combining all data from all runs for a single
target/filter combination.
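The effect can be illustrated with a toy example (a Python sketch with made-up peak positions, widths and run numbers, not part of the actual Nim analysis code): two runs of the same fluorescence line, recorded at different gas gains, place the charge peak at different positions. Per-run Gaussian fits recover each peak, while the combined sample's spread is dominated by the gain difference rather than the intrinsic line width.

```python
# Toy sketch (hypothetical numbers, not CDL data): two runs of the same
# X-ray line at different gas gains shift the charge peak position.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)

def gauss(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Two runs, peak positions differing by ~30 % due to the gas gain
runs = {315: rng.normal(1.05, 0.08, 5000),
        347: rng.normal(0.75, 0.08, 5000)}

def fit_peak(samples):
    # Histogram the charges and fit a single Gaussian to the spectrum
    hist, edges = np.histogram(samples, bins=50, range=(0.4, 1.4))
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [hist.max(), centers[np.argmax(hist)], 0.1]
    popt, _ = curve_fit(gauss, centers, hist, p0=p0)
    return popt[1], abs(popt[2])  # fitted mean and width

# Fitting each run separately recovers the individual peak positions ...
fits = {run: fit_peak(s) for run, s in runs.items()}
# ... while the combined sample is effectively bimodal: its spread is
# dominated by the gain difference, not the intrinsic width of the line.
combined = np.concatenate(list(runs.values()))
print(fits, combined.std())
```

In the actual data the role of the toy "gain shift" is played by the measured gas gain variation between CDL runs.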
In the extreme cases ($\ce{Cu}-\ce{Ni}$ for example) this leads to two
clearly visible peaks in the charge spectrum. Therefore, each run is fit
separately. The remaining question about this approach is whether the
cluster properties are stable under changes of the gas gain.
Sec. [[#sec:appendix:fit_by_run:gas_gain_var_cluster_prop]] shows that this
is indeed the case.
** TODOs for this section [/] :noexport:
- [ ] *THINK ABOUT WHERE TO PUT TEMPERATURE INFORMATION FROM CDL DATA!*
  -> the temperature plot split by run is relatively interesting. Should
  appear at least in extended version.
  -> Isn't that already in the CDL section as extended after in the main text?
- [ ]
** Influence of gas gain variations on cluster properties
:PROPERTIES:
:CUSTOM_ID: sec:appendix:fit_by_run:gas_gain_var_cluster_prop
:END:
The following figures are ridgeline plots of all relevant cluster
properties, as introduced in sec. [[#sec:reco:cluster_geometry]]. For each
plot and each property, all CDL runs are shown as kernel density
estimations. Aside from the number of hits and the total charge of a
cluster (which of course are expected to vary with the gas gain), the
properties remain stable even in those cases in which the gas gain varies
strongly.
#+begin_comment
Below is the code we use to produce all the figures, their labels and
captions. Too much work to do by hand, if it's all the same anyway. :)
#+end_comment
#+begin_src nim :exports results :results drawer
import std / [os, algorithm, strformat, strutils]
import ingrid / ingrid_types

const path = "~/phd/Figs/CDL/"

proc caption(tfKind: TargetFilterKind): string =
  result = """
$#CAPTION: Ridgeline plot of kernel density estimations of all cluster properties
$#CAPTION: split by each CDL run. Target/filter: $#
""" % ["#+", "#+", $tfKind]

proc label(tfKind: TargetFilterKind): string =
  result = &"#+NAME: fig:appendix:cdl_ridgeline_kde_{$tfKind}_by_run\n"

var plots = newSeq[string]()
for tfKind in TargetFilterKind:
  let file = (path / $tfKind & "_ridgeline_kde_by_run.pdf").expandTilde
  let plot = caption(tfKind) & label(tfKind) & &"[[{file}]]\n"
  plots.add plot
for plt in plots.reversed:
  echo plt
#+end_src

#+RESULTS:
:results:
#+CAPTION: Ridgeline plot of kernel density estimations of all cluster properties
#+CAPTION: split by each CDL run. Target/filter: C-EPIC-0.6kV
#+NAME: fig:appendix:cdl_ridgeline_kde_C-EPIC-0.6kV_by_run
[[/home/basti/phd/Figs/CDL/C-EPIC-0.6kV_ridgeline_kde_by_run.pdf]]
#+CAPTION: Ridgeline plot of kernel density estimations of all cluster properties
#+CAPTION: split by each CDL run. Target/filter: Cu-EPIC-0.9kV
#+NAME: fig:appendix:cdl_ridgeline_kde_Cu-EPIC-0.9kV_by_run
[[/home/basti/phd/Figs/CDL/Cu-EPIC-0.9kV_ridgeline_kde_by_run.pdf]]
#+CAPTION: Ridgeline plot of kernel density estimations of all cluster properties
#+CAPTION: split by each CDL run. Target/filter: Cu-EPIC-2kV
#+NAME: fig:appendix:cdl_ridgeline_kde_Cu-EPIC-2kV_by_run
[[/home/basti/phd/Figs/CDL/Cu-EPIC-2kV_ridgeline_kde_by_run.pdf]]
#+CAPTION: Ridgeline plot of kernel density estimations of all cluster properties
#+CAPTION: split by each CDL run. Target/filter: Al-Al-4kV
#+NAME: fig:appendix:cdl_ridgeline_kde_Al-Al-4kV_by_run
[[/home/basti/phd/Figs/CDL/Al-Al-4kV_ridgeline_kde_by_run.pdf]]
#+CAPTION: Ridgeline plot of kernel density estimations of all cluster properties
#+CAPTION: split by each CDL run. Target/filter: Ag-Ag-6kV
#+NAME: fig:appendix:cdl_ridgeline_kde_Ag-Ag-6kV_by_run
[[/home/basti/phd/Figs/CDL/Ag-Ag-6kV_ridgeline_kde_by_run.pdf]]
#+CAPTION: Ridgeline plot of kernel density estimations of all cluster properties
#+CAPTION: split by each CDL run. Target/filter: Ti-Ti-9kV
#+NAME: fig:appendix:cdl_ridgeline_kde_Ti-Ti-9kV_by_run
[[/home/basti/phd/Figs/CDL/Ti-Ti-9kV_ridgeline_kde_by_run.pdf]]
#+CAPTION: Ridgeline plot of kernel density estimations of all cluster properties
#+CAPTION: split by each CDL run. Target/filter: Mn-Cr-12kV
#+NAME: fig:appendix:cdl_ridgeline_kde_Mn-Cr-12kV_by_run
[[/home/basti/phd/Figs/CDL/Mn-Cr-12kV_ridgeline_kde_by_run.pdf]]
#+CAPTION: Ridgeline plot of kernel density estimations of all cluster properties
#+CAPTION: split by each CDL run. Target/filter: Cu-Ni-15kV
#+NAME: fig:appendix:cdl_ridgeline_kde_Cu-Ni-15kV_by_run
[[/home/basti/phd/Figs/CDL/Cu-Ni-15kV_ridgeline_kde_by_run.pdf]]
:end:
** Data overview with pixel spectra [/] :extended:
- [ ] *SHOULD THIS BE NOEXPORT OR NOT?*
#+CAPTION: Equivalent table to tab. [[tab:cdl:run_overview_tab]], but showing the fit results of the pixel
#+CAPTION: spectra.
#+NAME: tab:cdl:run_overview_tab_pixels
|-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------|
| Run | FADC? | Target | Filter | HV [kV] | Line                          | Energy [keV] | μ                      | σ                     | σ/μ                   |
|-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------|
| 319 | y     | Cu     | Ni     | 15      | $\ce{Cu}$ $\text{K}_{\alpha}$ | 8.04         | $\num{3.2114(58)e+02}$ | $\num{1.826(57)e+01}$ | $\num{5.68(18)e-02}$  |
| 320 | n     | Cu     | Ni     | 15      |                               |              | $\num{3.1127(52)e+02}$ | $\num{2.280(48)e+01}$ | $\num{7.32(15)e-02}$  |
| 345 | y     | Cu     | Ni     | 15      |                               |              | $\num{2.6735(37)e+02}$ | $\num{2.007(34)e+01}$ | $\num{7.51(13)e-02}$  |
|-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------|
| 315 | y     | Mn     | Cr     | 12      | $\ce{Mn}$ $\text{K}_{\alpha}$ | 5.89         | $\num{2.1680(98)e+02}$ | $\num{2.573(79)e+01}$ | $\num{1.187(37)e-01}$ |
| 323 | n     | Mn     | Cr     | 12      |                               |              | $\num{2.2649(29)e+02}$ | $\num{1.824(23)e+01}$ | $\num{8.05(10)e-02}$  |
| 347 | y     | Mn     | Cr     | 12      |                               |              | $\num{2.0058(31)e+02}$ | $\num{1.440(26)e+01}$ | $\num{7.18(13)e-02}$  |
|-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------|
| 325 | y     | Ti     | Ti     | 9       | $\ce{Ti}$ $\text{K}_{\alpha}$ | 4.51         | $\num{1.810(12)e+02}$  | $\num{1.341(70)e+01}$ | $\num{7.41(39)e-02}$  |
| 326 | n     | Ti     | Ti     | 9       |                               |              | $\num{1.7558(61)e+02}$ | $\num{1.350(35)e+01}$ | $\num{7.69(20)e-02}$  |
| 349 | y     | Ti     | Ti     | 9       |                               |              | $\num{1.6036(90)e+02}$ | $\num{1.224(49)e+01}$ | $\num{7.63(31)e-02}$  |
|-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------|
| 328 | y     | Ag     | Ag     | 6       | $\ce{Ag}$ $\text{L}_{\alpha}$ | 2.98         | $\num{1.1761(29)e+02}$ | $\num{1.091(25)e+01}$ | $\num{9.27(21)e-02}$  |
| 329 | n     | Ag     | Ag     | 6       |                               |              | $\num{1.1625(16)e+02}$ | $\num{1.190(13)e+01}$ | $\num{1.024(11)e-01}$ |
| 351 |
y | Ag | Ag | 6 | | | $\num{1.0675(21)e+02}$ | $\num{1.139(17)e+01}$ | $\num{1.067(16)e-01}$ | |-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------| | 332 | y | Al | Al | 4 | $\ce{Al}$ $\text{K}_{\alpha}$ | 1.49 | $\num{5.769(15)e+01}$ | $\num{6.12(11)e+00}$ | $\num{1.061(20)e-01}$ | | 333 | n | Al | Al | 4 | | | $\num{5.674(12)e+01}$ | $\num{7.18(10)e+00}$ | $\num{1.265(18)e-01}$ | |-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------| | 335 | y | Cu | EPIC | 2 | $\ce{Cu}$ $\text{L}_{\alpha}$ | 0.930 | $\num{3.542(35)e+01}$ | $\num{6.37(56)e+00}$ | $\num{1.80(16)e-01}$ | | 336 | n | Cu | EPIC | 2 | | | $\num{3.309(38)e+01}$ | $\num{8.62(47)e+00}$ | $\num{2.60(15)e-01}$ | | 337 | n | Cu | EPIC | 2 | | | $\num{3.392(56)e+01}$ | $\num{9.77(32)e+00}$ | $\num{2.88(11)e-01}$ | |-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------| | 339 | y | Cu | EPIC | 0.9 | $\ce{O }$ $\text{K}_{\alpha}$ | 0.525 | $\num{2.522(35)e+01}$ | $\num{6.32(58)e+00}$ | $\num{2.51(23)e-01}$ | | 340 | n | Cu | EPIC | 0.9 | | | $\num{2.121(10)e+01}$ | $\num{5.49(16)e+00}$ | $\num{2.590(76)e-01}$ | |-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------| | 342 | y | C | EPIC | 0.6 | $\ce{C }$ $\text{K}_{\alpha}$ | 0.277 | $\num{1.907(12)e+01}$ | $\num{4.446(97)e+00}$ | $\num{2.331(53)e-01}$ | | 343 | n | C | EPIC | 0.6 | | | $\num{1.7930(66)e+01}$ | $\num{5.243(51)e+00}$ | $\num{2.924(30)e-01}$ | |-----+-------+--------+--------+---------+-------------------------------+--------------+------------------------+-----------------------+-----------------------| * Morphing of 
CDL reference spectra :Appendix: :PROPERTIES: :CUSTOM_ID: sec:appendix:morphing_cdl_spectra :END: This appendix contains further figures about the interpolation of the reference distributions entering the likelihood method as discussed in sec. [[#sec:cdl:cdl_morphing]] and sec. [[#sec:cdl:cdl_morphing_energy_logL]]. Fig. [[fig:appendix:cdl_morphing_logL_vs_energy]] directly shows the impact of interpolating the reference distributions on the likelihood values (colored points for each different target/filter combination) and the cut value (black line). Both the points and the cut value become smoother when utilizing the interpolation. Note that even the interpolation is not perfectly smooth, as the underlying reference data is still binned to a histogram. Further, section [[#sec:appendix:morphing_cdl_spectra:tilemaps]] contains tilemaps of the non-interpolated reference data for each dataset and sec. [[#sec:appendix:morphing_cdl_spectra:interpolation_raster]] shows the fully interpolated space for each dataset. Finally, sec. [[#sec:appendix:morphing_cdl_spectra:binwise_linear]] contains the histograms of the binwise linear interpolations where for each ridge one dataset is skipped. See the extended thesis for the development notes of the interpolation technique, including different approaches (e.g. binwise spline interpolation instead of linear interpolation). #+CAPTION: $\ln\mathcal{L}$ values for all the cleaned CDL cluster data against the #+CAPTION: energy of the cluster. Left is the binwise linear interpolation and #+CAPTION: right is the calculation using the old cite:krieger2018search #+CAPTION: approach of fixed energy intervals. The binwise linear interpolation helps to #+CAPTION: provide a smoother description of the $\ln\mathcal{L}$ data.
#+NAME: fig:appendix:cdl_morphing_logL_vs_energy [[~/phd/Figs/background/logL_of_CDL_vs_energy.pdf]] \clearpage ** TODOs for this section [/] :noexport: - [ ] *NEED TO REREAD THIS AND FIX UP THE WORST ISSUES!* -> especially if this is to remain in the final thesis! - [ ] *PLACE ME SOMEWHERE MORE IMPORTANT?* -> The logL of CDL vs energy plot - [ ] CDL tile maps - [ ] CDL raster for all - [ ] linear binwise morph mtLinear ** Generate all morphing / tile related plots :extended: In principle the plots are produced with the same command as in sec. [[#sec:background:generated_morphing_plots]], but here we wish to have wider versions that are not side-by-side: #+begin_src sh :dir ~/CastData/ExternCode/TimepixAnalysis/Tools/cdlMorphing/ FWIDTH=0.9 HEIGHT=420 WRITE_PLOT_CSV=true USE_TEX=true ./cdlMorphing \ --cdlFile ~/CastData/data/CDL_2019/calibration-cdl-2018.h5 \ --outpath ~/phd/Figs/CDL/cdlMorphing/fWidth0.9/ #+end_src ** Generate plot comparing likelihood behavior :extended: Fig. [[fig:appendix:cdl_morphing_logL_vs_energy]] is produced in sec. [[#sec:background:correlation_lnL_variables]]. ** Tilemap of each likelihood dataset :PROPERTIES: :CUSTOM_ID: sec:appendix:morphing_cdl_spectra:tilemaps :END: The tilemaps shown here should be compared to the resulting raster plots of the next section, sec. [[#sec:appendix:morphing_cdl_spectra:interpolation_raster]]. For each dataset entering the likelihood method the reference distribution computed for each target/filter combination is spread over its applicable energy range. In the plots the colormap is the height of the distribution. The fluorescence lines that define the X-ray energy of each target are indicated. This illustrates how a cluster energy is mapped to a probability from each of the 8 target/filter distributions. The distinct jump in the distributions is visible in the form of a cut along the horizontal axis at different energies. #+CAPTION: Tilemap of the eccentricity dataset against energy.
The colormap corresponds to the #+CAPTION: height of the eccentricity distribution at that point. Along the energy axis the #+CAPTION: data is constant within the applicable energy range of the target/filter kind. The #+CAPTION: energy of each fluorescence line is indicated. #+NAME: fig:appendix:morphing_cdl_spectra:cdl_as_tile_eccentricity [[~/phd/Figs/CDL/cdlMorphing/fWidth0.9/cdl_as_tile_morph_mtNone_eccentricity.pdf]] #+CAPTION: Tilemap of the fractionInTransverseRms dataset against energy. The colormap corresponds to the #+CAPTION: height of the fractionInTransverseRms distribution at that point. Along the energy axis the #+CAPTION: data is constant within the applicable energy range of the target/filter kind. The #+CAPTION: energy of each fluorescence line is indicated. #+NAME: fig:appendix:morphing_cdl_spectra:cdl_as_tile_fracRms [[~/phd/Figs/CDL/cdlMorphing/fWidth0.9/cdl_as_tile_morph_mtNone_fractionInTransverseRms.pdf]] #+CAPTION: Tilemap of the lengthDivRmsTrans dataset against energy. The colormap corresponds to the #+CAPTION: height of the lengthDivRmsTrans distribution at that point. Along the energy axis the #+CAPTION: data is constant within the applicable energy range of the target/filter kind. The #+CAPTION: energy of each fluorescence line is indicated. #+NAME: fig:appendix:morphing_cdl_spectra:cdl_as_tile_ldiv [[~/phd/Figs/CDL/cdlMorphing/fWidth0.9/cdl_as_tile_morph_mtNone_lengthDivRmsTrans.pdf]] \clearpage ** Interpolation of each likelihood dataset :PROPERTIES: :CUSTOM_ID: sec:appendix:morphing_cdl_spectra:interpolation_raster :END: In contrast to the plots of the previous section, sec. [[#sec:appendix:morphing_cdl_spectra:tilemaps]] all plots of the different datasets entering the likelihood method here show the distribution in property / energy space after binwise linear interpolation. Compared to the previous plots it results in a smooth transition over the entire energy range. 
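The binwise linear interpolation behind these raster plots can be sketched in a few lines. The following is a minimal, self-contained sketch; =morphBinwise= and the toy histograms are hypothetical stand-ins, not the actual implementation in =cdlMorphing.nim=:

#+begin_src nim
import std / [math]

proc morphBinwise(fLow, fHigh: seq[float], eLow, eHigh, E: float): seq[float] =
  ## Interpolate each histogram bin linearly between the two neighboring
  ## reference distributions located at energies `eLow` <= `E` <= `eHigh`.
  let deltaE = eHigh - eLow
  result = newSeq[float](fLow.len)
  for i in 0 ..< fLow.len:
    result[i] = fLow[i]  * (1 - abs(eLow  - E) / deltaE) +
                fHigh[i] * (1 - abs(eHigh - E) / deltaE)

when isMainModule:
  # two normalized toy "distributions", e.g. at 0.525 and 0.930 keV
  let fLow  = @[0.5, 0.3, 0.2]
  let fHigh = @[0.2, 0.3, 0.5]
  let morphed = morphBinwise(fLow, fHigh, 0.525, 0.930, 0.7)
  # the two weights always sum to 1, so a morph of normalized
  # histograms stays normalized
  doAssert abs(morphed.sum - 1.0) < 1e-9
#+end_src

Since the two weights form a convex combination, the normalization requirement discussed in the development notes is satisfied automatically.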
#+CAPTION: Raster plot of the eccentricity dataset against energy after binwise linear interpolation. #+CAPTION: The colormap corresponds to the height of the eccentricity distribution at that point. The #+CAPTION: energy of each fluorescence line is indicated. #+NAME: fig:appendix:morphing_cdl_spectra:cdl_as_raster_eccentricity [[~/phd/Figs/CDL/cdlMorphing/fWidth0.9/cdl_as_raster_interpolated_morph_mtNone_eccentricity.pdf]] #+CAPTION: Raster plot of the fractionInTransverseRms dataset against energy after binwise linear interpolation. #+CAPTION: The colormap corresponds to the height of the fractionInTransverseRms distribution at that point. The #+CAPTION: energy of each fluorescence line is indicated. #+NAME: fig:appendix:morphing_cdl_spectra:cdl_as_raster_fracRms [[~/phd/Figs/CDL/cdlMorphing/fWidth0.9/cdl_as_raster_interpolated_morph_mtNone_fractionInTransverseRms.pdf]] #+CAPTION: Raster plot of the lengthDivRmsTrans dataset against energy after binwise linear interpolation. #+CAPTION: The colormap corresponds to the height of the lengthDivRmsTrans distribution at that point. The #+CAPTION: energy of each fluorescence line is indicated. #+NAME: fig:appendix:morphing_cdl_spectra:cdl_as_raster_ldiv [[~/phd/Figs/CDL/cdlMorphing/fWidth0.9/cdl_as_raster_interpolated_morph_mtNone_lengthDivRmsTrans.pdf]] \clearpage ** Binwise linear interpolations for each likelihood dataset :PROPERTIES: :CUSTOM_ID: sec:appendix:morphing_cdl_spectra:binwise_linear :END: The figures in this section show the binwise linear interpolation by skipping the dataset in each ridge and interpolating using the nearest neighbors above/below in energy. These are the same as shown in fig. sref:fig:cdl:cdl_morphing_frac_known_lines for all three datasets (i.e. fig. [[fig:appendix:morphing_cdl_spectra:fracRms_binwise]] is the same). 
As mentioned in the main text, skipping the dataset at each ridge means the interpolation errors are much larger than those encountered in practice (as shown in the previous section [[#sec:appendix:morphing_cdl_spectra:interpolation_raster]]). #+CAPTION: Binwise linear interpolation of the eccentricity dataset skipping the #+CAPTION: target/filter dataset being interpolated. #+NAME: fig:appendix:morphing_cdl_spectra:eccentricity_binwise [[~/phd/Figs/CDL/cdlMorphing/fWidth0.9/eccentricity_ridgeline_morph_mtLinear_calibration-cdl-2018.h5_2018.pdf]] #+CAPTION: Binwise linear interpolation of the fractionInTransverseRms dataset skipping the #+CAPTION: target/filter dataset being interpolated. #+NAME: fig:appendix:morphing_cdl_spectra:fracRms_binwise [[~/phd/Figs/CDL/cdlMorphing/fWidth0.9/fractionInTransverseRms_ridgeline_morph_mtLinear_calibration-cdl-2018.h5_2018.pdf]] #+CAPTION: Binwise linear interpolation of the lengthDivRmsTrans dataset skipping the #+CAPTION: target/filter dataset being interpolated. #+NAME: fig:appendix:morphing_cdl_spectra:ldiv_binwise [[~/phd/Figs/CDL/cdlMorphing/fWidth0.9/lengthDivRmsTrans_ridgeline_morph_mtLinear_calibration-cdl-2018.h5_2018.pdf]] ** Notes on CDL morphing development :extended: :PROPERTIES: :CUSTOM_ID: sec:appendix:morphing_cdl_spectra:notes :END: This section of the (not exported) appendix contains our notes about building and implementing the linear interpolation of CDL data (when talking about the 'current approach' below, we refer to the old, non-interpolating approach). It should give an understanding of what was considered and why the final choice is a linear interpolation. One problem with the current approach of utilizing the CDL data is that the reference distributions for the different logL variables are discontinuous between two energy bins. This means that if a cluster is moved from one bin to another it suddenly has a very different cut for each property. It might be possible to morph CDL spectra between two energies.
That is to allow interpolation between the shape of two neighboring reference datasets. This is the likely cause for the sudden steps visible in the background rate. With a fully morphed function this should hopefully disappear. *** References & ideas Read up on morphing of different functions: - in HEP: https://indico.cern.ch/event/507948/contributions/2028505/attachments/1262169/1866169/atlas-hcomb-morphwshop-intro-v1.pdf - https://mathematica.stackexchange.com/questions/208990/morphing-between-two-functions - https://mathematica.stackexchange.com/questions/209039/convert-symbolic-to-numeric-code-speed-up-morphing Aside from morphing, the theory of optimal transport seems to be directly related to such problems: - https://de.wikipedia.org/wiki/Optimaler_Transport (funny, this can be described by topology using GR lingo; there's no English article on this) - https://en.wikipedia.org/wiki/Transportation_theory_(mathematics) see in particular: https://upload.wikimedia.org/wikipedia/commons/thumb/c/c3/Continuous_optimal_transport.png/200px-Continuous_optimal_transport.png This seems to imply that given some functions $f(x)$, $g(x)$ we are looking for the transport function $T$ which maps $T: f(x) \rightarrow g(x)$ in the language of transportation theory. See the linked article about the Wasserstein metric: - https://en.wikipedia.org/wiki/Wasserstein_metric in particular the section about its connection to the optimal transport problem It describes a distance metric between two probability distributions. In that sense the distance between two distributions to be transported is directly related to the Wasserstein distance. One of the major constraints of transportation theory is that the transportation has to preserve the integral of the transported function. Technically, this is not the case for our application to the CDL data, due to different amount of data available for each target. 
However, of course we normalize the CDL data and assume that the given data actually is a PDF. At that point each distribution is normalized to 1 and thus each morphed function has to be normalized to 1 as well. This is a decent check for a morph result. If a morphing technique does *not* satisfy this property we need to renormalize the result. On the other hand, considering the slides by Verkerke (indico.cern.ch link above), the morphing between two functions can also be interpreted as a simple interpolation problem. In that sense there are multiple approaches to compute an intermediate step of the CDL distributions. 1. visualize the CDL data as a 2D heatmap: - value of the logL variable is x - energy of each fluorescence line is y 2. linear interpolation at a specific energy $E$ based on the two neighboring CDL distributions (interpolation thus *only* along y axis) 3. spline interpolation over all energy ranges 4. KDE along the energy axis (only along *y*). Possibly extend the KDE to 2D? 5. bicubic interpolation (problematic, because our energies/variables are *not* spread on a rectilinear grid; energy is not spread evenly) 6. other distance based interpolations, i.e. a KD-tree? Simply perform an interpolation based on all neighboring points within a certain distance? Of these, options 2 and 4 seem to be the easiest to implement. A KD-tree would also be easy, provided I finally finish the implementation. We will investigate different ideas in [[file:~/CastData/ExternCode/TimepixAnalysis/Tools/cdlMorphing/cdlMorphing.nim]]. **** DONE Visualize CDL data as a 2D heatmap In each of the following plots each distribution (= target filter combination) is normalized to 1. First let's visualize all of the CDL data as a scatter plot. That is pretty simple and gives an idea where the lines are and what the shape is roughly, fig. [[logL_scatter_lengthDivRmsTrans]].
#+CAPTION: Scatter plot of the reference dataset for the length / transverse RMS #+CAPTION: logL variable, colored by the normalized counts for each distribution #+CAPTION: (each is normalized to 1). The distributions are drawn at the energy #+CAPTION: of the fluorescence lines. #+NAME: logL_scatter_lengthDivRmsTrans [[~/phd/Figs/CDL/cdlMorphing/cdl_as_scatter_lengthDivRmsTrans.pdf]] Now we can check out what the data looks like if we interpret the whole (value of each variable, Energy) phase space as a tile map. In this way, morphing can be interpreted as performing interpolation along the energy axis in the resulting tile map. In addition, each figure contains colored lines marking the *start* of each energy range as currently used. So clusters are assigned the distribution defined by the line below them. In addition, the energy of each fluorescence line is plotted in red at the corresponding energy value. This also shows that the intervals and the energy of the lines are highly asymmetric. #+CAPTION: Tile map of the reference dataset for the eccentricity #+CAPTION: logL variable, colored by the normalized counts for each distribution #+CAPTION: (each is normalized to 1 along the x axis). Each tile covers the range #+CAPTION: from start to end interval. In addition in red the energy of each #+CAPTION: fluorescence line is plotted. #+NAME: logL_tile_eccentricity [[~/phd/Figs/CDL/cdlMorphing/cdl_as_tile_eccentricity.pdf]] #+CAPTION: Tile map of the reference dataset for the length / transverse RMS #+CAPTION: logL variable, colored by the normalized counts for each distribution #+CAPTION: (each is normalized to 1 along the x axis). Each tile covers the range #+CAPTION: from start to end interval. In addition in red the energy of each #+CAPTION: fluorescence line is plotted.
#+NAME: logL_tile_lengthDivRmsTrans [[~/phd/Figs/CDL/cdlMorphing/cdl_as_tile_lengthDivRmsTrans.pdf]] #+CAPTION: Tile map of the reference dataset for the fraction within transverse RMS #+CAPTION: logL variable, colored by the normalized counts for each distribution #+CAPTION: (each is normalized to 1 along the x axis). Each tile covers the range #+CAPTION: from start to end interval. In addition in red the energy of each #+CAPTION: fluorescence line is plotted. #+NAME: logL_tile_fracRmsTrans [[~/phd/Figs/CDL/cdlMorphing/cdl_as_tile_fractionInTransverseRms.pdf]] **** DONE Morph by linear interpolation bin by bin Based on the tile maps in the previous section it seems like a decent idea to perform a linear interpolation for any point in between two intervals: #+begin_export latex \[f(E, x) = f_\text{low}(x) \cdot \left(1 - \frac{ |L_\text{low} - E|}{\Delta E}\right) + f_\text{high}(x) \cdot \left(1 - \frac{ |L_\text{high} - E|}{\Delta E}\right) \] #+end_export where $f_\text{low,high}$ are the distributions below / above the given energy $E$. $L_\text{low,high}$ corresponds to the energy of the fluorescence line corresponding to the distribution below / above $E$. $\Delta E$ is the difference in energy between the lower and higher fluorescence lines. $x$ is the value of the given logL variable. In code this is: #+begin_src nim let E = ... 
# given as argument to function
let lineEnergies = getXrayFluorescenceLines()
let refLowT = df.filter(f{string -> bool: `Dset` == refLow})["Hist", float]
let refHighT = df.filter(f{string -> bool: `Dset` == refHigh})["Hist", float]
result = zeros[float](refLowT.size.int)
let deltaE = abs(lineEnergies[idx] - lineEnergies[idx+offset])
# walk over each bin and compute the linear interpolation between the two references
for i in 0 ..< refLowT.size:
  result[i] = refLowT[i] * (1 - (abs(lineEnergies[idx] - E)) / deltaE) +
              refHighT[i] * (1 - (abs(lineEnergies[idx+offset] - E)) / deltaE)
#+end_src Doing this for a point between two lines is not particularly helpful for validation, because we do not /know/ what the distribution in between actually looks like. Instead, for validation, we will now try to compute the =Cu-EPIC-0.9kV= distribution (corresponding to the $\text{O K}_{α}$ line at $\SI{0.525}{\kilo \electronvolt}$) based on the =C-EPIC-0.6kV= and =Cu-EPIC-2kV= distributions. That means we interpolate the second ridge from the first and third ridges in the CDL ridgeline plots. This is shown in fig. [[linear_interpolation_morph_eccentricity]], [[linear_interpolation_morph_lengthDivRmsTrans]] and [[linear_interpolation_morph_fracRmsTrans]]. The real data for each distribution is shown in red and the morphed linear binwise interpolation for the second ridge is shown in blue. #+CAPTION: Interpolation of the =Cu-EPIC-0.9kV= distribution for the eccentricity logL variable #+CAPTION: using binwise linear interpolation based on the =C-EPIC-0.6kV= and =Cu-EPIC-2kV= distributions. #+CAPTION: The real data is shown in the second ridge in red and the morphed interpolation #+CAPTION: is shown in blue. The agreement is remarkable for the simplicity of the method.
#+NAME: linear_interpolation_morph_eccentricity [[~/phd/Figs/CDL/cdlMorphing/eccentricity_ridgeline_XrayReferenceFile2018.h5_2018.pdf]] #+CAPTION: Interpolation of the =Cu-EPIC-0.9kV= distribution for the length / transverse RMS logL variable #+CAPTION: using binwise linear interpolation based on the =C-EPIC-0.6kV= and =Cu-EPIC-2kV= distributions. #+CAPTION: The real data is shown in the second ridge in red and the morphed interpolation #+CAPTION: is shown in blue. The agreement is remarkable for the simplicity of the method. #+NAME: linear_interpolation_morph_lengthDivRmsTrans [[~/phd/Figs/CDL/cdlMorphing/lengthDivRmsTrans_ridgeline_XrayReferenceFile2018.h5_2018.pdf]] #+CAPTION: Interpolation of the =Cu-EPIC-0.9kV= distribution for the fraction in transverse RMS logL variable #+CAPTION: using binwise linear interpolation based on the =C-EPIC-0.6kV= and =Cu-EPIC-2kV= distributions. #+CAPTION: The real data is shown in the second ridge in red and the morphed interpolation #+CAPTION: is shown in blue. This in particular is the problematic variable, due to the integer nature #+CAPTION: of the data at low energies. However, even here the interpolation works extremely well. #+NAME: linear_interpolation_morph_fracRmsTrans [[~/phd/Figs/CDL/cdlMorphing/fractionInTransverseRms_ridgeline_XrayReferenceFile2018.h5_2018.pdf]] **** DONE Compute all reference spectra from neighbors Similar to the plots in the previous section, we can now compute all reference spectra based on the neighboring spectra. This is done in fig. [[linear_interpolation_morph_eccentricity_all]], [[linear_interpolation_morph_lengthDivRmsTrans_all]], [[linear_interpolation_morph_fracRmsTrans_all]]. #+CAPTION: Interpolation of all reference distributions for the eccentricity logL variable #+CAPTION: using binwise linear interpolation based on the neighboring distributions. #+CAPTION: The real data is shown in red while the morphed data is shown in cyan.
#+CAPTION: In most ridges the agreement is very good. #+NAME: linear_interpolation_morph_eccentricity_all [[~/phd/Figs/CDL/cdlMorphing/eccentricity_ridgeline_morph_all_XrayReferenceFile2018.h5_2018.pdf]] #+CAPTION: Interpolation of all reference distributions for the length / transverse RMS logL variable #+CAPTION: using binwise linear interpolation based on the neighboring distributions. #+CAPTION: The real data is shown in red while the morphed data is shown in cyan. #+CAPTION: In most ridges the agreement is very good. #+NAME: linear_interpolation_morph_lengthDivRmsTrans_all [[~/phd/Figs/CDL/cdlMorphing/lengthDivRmsTrans_ridgeline_morph_all_XrayReferenceFile2018.h5_2018.pdf]] #+CAPTION: Interpolation of all reference distributions for the fraction in transverse RMS logL variable #+CAPTION: using binwise linear interpolation based on the neighboring distributions. #+CAPTION: The real data is shown in red while the morphed data is shown in cyan. #+CAPTION: In most ridges the agreement is very good. #+NAME: linear_interpolation_morph_fracRmsTrans_all [[~/phd/Figs/CDL/cdlMorphing/fractionInTransverseRms_ridgeline_morph_all_XrayReferenceFile2018.h5_2018.pdf]] **** DONE Compute full linear interpolation between fluorescence lines We can now apply the lessons from the last section to compute arbitrary reference spectra. We will use this to compute a heatmap of all possible energies in between the first and last fluorescence line. For all three logL variables, these are shown in figs. [[linear_interpolation_raster_eccentricity]], [[linear_interpolation_raster_lengthDivRmsTrans]] and [[linear_interpolation_raster_fracRmsTrans]]. #+CAPTION: Heatmap of a fully linearly interpolated view of the energy / eccentricity #+CAPTION: phase space in between the first and last fluorescence line. #+NAME: linear_interpolation_raster_eccentricity [[~/phd/Figs/CDL/cdlMorphing/cdl_as_raster_interpolated_eccentricity.pdf]] #+CAPTION: Heatmap of a fully linearly interpolated view of the energy / lengthDivRmsTrans #+CAPTION: phase space in between the first and last fluorescence line.
#+NAME: linear_interpolation_raster_lengthDivRmsTrans [[~/phd/Figs/CDL/cdlMorphing/cdl_as_raster_interpolated_lengthDivRmsTrans.pdf]] #+CAPTION: Heatmap of a fully linearly interpolated view of the energy / fracRmsTrans #+CAPTION: phase space in between the first and last fluorescence line. #+NAME: linear_interpolation_raster_fracRmsTrans [[~/phd/Figs/CDL/cdlMorphing/cdl_as_raster_interpolated_fractionInTransverseRms.pdf]] *** KDE approach :extended: Using a KDE is problematic, because our data is already pre-binned of course. This leads to a very sparse phase space, which either makes the local prediction around a known distribution good but fails miserably in between (small bandwidth), or gives decent predictions in between but a pretty bad reconstruction of the known distributions (larger bandwidth). There is also a strong conflict in bandwidth selection, due to the non-linear steps in energy between the different CDL distributions. This leads to a too large / too small bandwidth at either end of the energy range. Fig. [[eccentricity_ridgeline_morph_kde]], [[fracTransRms_ridgeline_morph_kde]], [[lengthDivRmsTrans_ridgeline_morph_kde]] show the default bandwidth (Silverman's rule of thumb). In comparison, fig. [[eccentricity_ridgeline_morph_kde_small_bw]], [[fracTransRms_ridgeline_morph_kde_small_bw]], [[lengthDivRmsTrans_ridgeline_morph_kde_small_bw]] show the same plots using a much smaller custom bandwidth of 0.3 keV. The agreement is much better, but the actual prediction between the different distributions becomes much worse. Compare fig. [[eccentricity_raster_morph_kde]] (default bandwidth) to fig. [[eccentricity_raster_morph_kde_small_bw]]. The latter has regions of almost no counts, which is obviously wrong. Note that fig. [[eccentricity_raster_morph_kde]] is also problematic. An effect of a bad KDE input is visible, namely that the bandwidth vs.
number of datapoints is such that the center region (in energy) has higher values than the edges, because predictions near the boundaries see no signal. This boundary effect could be corrected for by assuming suitable boundary conditions, e.g. just extending the first/last distributions in the respective ranges. It is not clear, however, at what spacing such a distribution should be placed. #+CAPTION: Reconstruction of the different eccentricity CDL distributions using a KDE #+CAPTION: with the bin counts as weights with the automatically computed #+CAPTION: bandwidth using Silverman's rule of thumb (about 1.6 keV in this case). #+CAPTION: Bad match with real data for low energies (top ridges). #+NAME: eccentricity_ridgeline_morph_kde [[~/phd/Figs/CDL/cdlMorphing/eccentricity_ridgeline_morph_kde.pdf]] #+CAPTION: Reconstruction of the different fraction in transverse RMS CDL distributions using a KDE #+CAPTION: with the bin counts as weights with the automatically computed #+CAPTION: bandwidth using Silverman's rule of thumb (about 1.6 keV in this case). #+CAPTION: Bad match with real data for low energies (top ridges). #+NAME: fracTransRms_ridgeline_morph_kde [[~/phd/Figs/CDL/cdlMorphing/fractionInTransverseRms_ridgeline_morph_kde.pdf]] #+CAPTION: Reconstruction of the different length / transverse RMS CDL distributions using a KDE #+CAPTION: with the bin counts as weights with the automatically computed #+CAPTION: bandwidth using Silverman's rule of thumb (about 1.6 keV in this case). #+CAPTION: Bad match with real data for low energies (top ridges). #+NAME: lengthDivRmsTrans_ridgeline_morph_kde [[~/phd/Figs/CDL/cdlMorphing/lengthDivRmsTrans_ridgeline_morph_kde.pdf]] \clearpage #+CAPTION: Reconstruction of the different eccentricity CDL distributions using a KDE #+CAPTION: with the bin counts as weights with a custom bandwidth of 0.3 keV. #+CAPTION: Bad match with real data for low energies (top ridges).
#+NAME: eccentricity_ridgeline_morph_kde_small_bw [[~/phd/Figs/CDL/cdlMorphing/eccentricity_ridgeline_morph_kde_small_bw.pdf]] #+CAPTION: Reconstruction of the different fraction in transverse RMS CDL distributions using a KDE #+CAPTION: with the bin counts as weights with a custom bandwidth of 0.3 keV. #+CAPTION: Bad match with real data for low energies (top ridges). #+NAME: fracTransRms_ridgeline_morph_kde_small_bw [[~/phd/Figs/CDL/cdlMorphing/fractionInTransverseRms_ridgeline_morph_kde_small_bw.pdf]] #+CAPTION: Reconstruction of the different length / transverse RMS CDL distributions using a KDE #+CAPTION: with the bin counts as weights with a custom bandwidth of 0.3 keV. #+CAPTION: Bad match with real data for low energies (top ridges). #+NAME: lengthDivRmsTrans_ridgeline_morph_kde_small_bw [[~/phd/Figs/CDL/cdlMorphing/lengthDivRmsTrans_ridgeline_morph_kde_small_bw.pdf]] \clearpage #+CAPTION: Raster of the KDE interpolation for the eccentricity using the automatically #+CAPTION: determined bandwidth based on Silverman's rule of thumb (about 1.6 keV in this case). #+CAPTION: Boundary effects are visible due to apparently more activity near the center #+CAPTION: energies. #+NAME: eccentricity_raster_morph_kde [[~/phd/Figs/CDL/cdlMorphing/eccentricity_raster_kde.pdf]] #+CAPTION: Raster of the KDE interpolation for the eccentricity using a custom #+CAPTION: bandwidth of 0.3 keV. #+CAPTION: Better agreement at the different CDL target energies at the expense of #+CAPTION: a reasonable prediction between the different regions. #+NAME: eccentricity_raster_morph_kde_small_bw [[~/phd/Figs/CDL/cdlMorphing/eccentricity_raster_kde_small_bw.pdf]] *** Spline approach Another idea is to use a spline interpolation. This has the advantage that the existing distributions will be correctly predicted (as for the linear interpolation), but possibly yields better results between distributions (or when predicting a known distribution). Fig.
[[eccentricity_ridgeline_morph_spline]],
[[fracTransRms_ridgeline_morph_spline]],
[[lengthDivRmsTrans_ridgeline_morph_spline]] show the predictions
using a spline. As for the linear interpolation, each morphed
distribution was computed by excluding that distribution from the
spline definition and then predicting at the energy of the respective
fluorescence line. The result looks somewhat better than the linear
interpolation in certain areas, but shows unphysical artifacts
(negative values) in others, while also deviating quite a bit. For
that reason, simpler seems to be better in the case of CDL morphing
(at least if it is done bin-wise).

#+CAPTION: Reconstruction of the different eccentricity CDL distributions using a spline
#+CAPTION: interpolation (by excluding each distribution that is being predicted).
#+CAPTION: The prediction sometimes even yields negative values, highlighting the
#+CAPTION: problems of a spline in certain use cases (unphysical results).
#+NAME: eccentricity_ridgeline_morph_spline
[[~/phd/Figs/CDL/cdlMorphing/eccentricity_ridgeline_morph_spline.pdf]]

#+CAPTION: Reconstruction of the different fraction in transverse RMS CDL distributions using a spline
#+CAPTION: interpolation (by excluding each distribution that is being predicted).
#+CAPTION: The prediction sometimes even yields negative values, highlighting the
#+CAPTION: problems of a spline in certain use cases (unphysical results).
#+NAME: fracTransRms_ridgeline_morph_spline
[[~/phd/Figs/CDL/cdlMorphing/fractionInTransverseRms_ridgeline_morph_spline.pdf]]

#+CAPTION: Reconstruction of the different length / transverse RMS CDL distributions using a spline
#+CAPTION: interpolation (by excluding each distribution that is being predicted).
#+CAPTION: The prediction sometimes even yields negative values, highlighting the
#+CAPTION: problems of a spline in certain use cases (unphysical results).
#+NAME: lengthDivRmsTrans_ridgeline_morph_spline
[[~/phd/Figs/CDL/cdlMorphing/lengthDivRmsTrans_ridgeline_morph_spline.pdf]]

\clearpage

*** Summary

For the time being we will use the linear interpolation method and see
where this leads us. It should be a significant improvement over the
current interval-based option. For the results of applying linear
interpolation based morphing to the likelihood analysis see section
[[#sec:appendix:morphing_cdl_spectra:implementation]].

*** Implementation in =likelihood.nim=
:PROPERTIES:
:CUSTOM_ID: sec:appendix:morphing_cdl_spectra:implementation
:END:

Thoughts on the implementation of CDL morphing in =likelihood.nim=:
0. Add the interpolation code from =cdlMorphing.nim= to =private/cdl_cuts.nim=.
1. Add a field to =config.nim= that describes the morphing technique to be used.
2. Add an enum for the possible morphing techniques, =MorphingKind=,
   with fields =mkNone= and =mkLinear=.
3. =calcCutValueTab= currently returns a =Table[string, float]=
   mapping target/filter combinations to cut values. This needs to be
   modified such that we have something that hides away the input ->
   output mapping and yields what we need. Define a
   =CutValueInterpolator= type, which is returned instead. It will be
   a variant object with case =kind: MorphingKind=. This object will
   allow access to cut values based on:
   - =string=: a target/filter combination.
     - =mkNone=: access the internal =Table[string, float]= as done currently.
     - =mkLinear=: raise an exception, since this access does not make sense.
   - =float=: an energy in keV.
     - =mkNone=: convert the energy to a target/filter combination and
       access the internal =Table=.
     - =mkLinear=: access the closest energy distribution and return its cut value.
4. In =filterClustersByLogL= replace the =cutTab= name and access by
   the energy of the cluster instead of by a converted target/filter
   dataset.
With these steps we should have a working interpolation routine.
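The type sketched in the steps above could look roughly as follows. This is only a minimal sketch of the idea, not the actual implementation; in particular the helper =toRefDset= (mapping an energy to a target/filter dataset name) is a placeholder for whatever conversion the real code uses.

#+begin_src nim
import std / tables

type
  MorphingKind = enum
    mkNone, mkLinear

  CutValueInterpolator = object
    case kind: MorphingKind
    of mkNone:
      cutTab: Table[string, float] # target/filter combination -> cut value
    of mkLinear:
      cutEnergies: seq[float]      # energies at which morphed cut values exist
      cutValues: seq[float]        # cut value at each of these energies

proc `[]`(cv: CutValueInterpolator, tfKind: string): float =
  ## Access by target/filter combination.
  case cv.kind
  of mkNone: result = cv.cutTab[tfKind]
  of mkLinear:
    raise newException(ValueError,
      "Access by target/filter combination makes no sense for `mkLinear`.")

proc `[]`(cv: CutValueInterpolator, energy: float): float =
  ## Access by cluster energy in keV.
  case cv.kind
  of mkNone:
    result = cv.cutTab[energy.toRefDset()] # placeholder energy -> dataset conversion
  of mkLinear:
    # return the cut value of the closest morphed energy
    var minIdx = 0
    for i, e in cv.cutEnergies:
      if abs(e - energy) < abs(cv.cutEnergies[minIdx] - energy):
        minIdx = i
    result = cv.cutValues[minIdx]
#+end_src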
Of course, the code used in the =cdlMorphing.nim= test script needs to
be added to provide the linear interpolation logic (see step 0).

**** Bizarre Al-Al 4kV behavior with =mkLinear=

After the first implementation we see some very bizarre behavior in
the case of linear interpolation for the logL distributions. This is
visible both with =plotCdl.nim= as well as with the plotting code in
=likelihood.nim=. See fig. [[cdl_logL_linear_bizarre_Al_Al_4kV]].

#+CAPTION: LogL distributions after implementing linear interpolation and running
#+CAPTION: with =mkLinear=. The Al-Al 4kV line is nowhere near where we expect it.
#+CAPTION: The code currently recomputes the logL values by
#+CAPTION: default, in which =mkLinear= plays a role. The bug has to be somewhere
#+CAPTION: in that part of the interpolation.
#+NAME: cdl_logL_linear_bizarre_Al_Al_4kV
[[~/phd/Figs/CDL/cdlMorphing/cdl_logl_linear_bizarre_Al_Al_4kV.pdf]]

*UPDATE*: The issue was a couple of bugs and design choices in the
implementation of the linear interpolation in =likelihood_utils.nim=.
In particular the design of the DF returned from
=getInterpolatedWideDf= and a bug that accessed the full DF instead of
the sub DF in the loop. The fixed result is shown in
fig. [[cdl_logL_linear_fixed]] and, in comparison, the result using no
interpolation (the reference, in a way) in
fig. [[cdl_logL_no_interp]].

#+CAPTION: LogL distributions after implementing linear interpolation and running
#+CAPTION: with =mkLinear= and after the above mentioned bug has been fixed.
#+CAPTION: This is the same result as for =mkNone=, see fig. [[cdl_logL_no_interp]].
#+NAME: cdl_logL_linear_fixed
[[~/phd/Figs/CDL/cdlMorphing/cdl_logl_linear_fixed.pdf]]

#+CAPTION: LogL distributions without any interpolation (=mkNone=), serving as the
#+CAPTION: reference. The fixed =mkLinear= result in fig. [[cdl_logL_linear_fixed]]
#+CAPTION: reproduces this.
#+NAME: cdl_logL_no_interp
[[~/phd/Figs/CDL/cdlMorphing/cdl_logl_no_interp.pdf]]

* Occupancy maps :Appendix:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:occupancy
:END:

In this appendix we find a few plots of the entire Septemboard
activity during the CAST Run-2 and Run-3 campaigns. For each, a plot
of the raw pixel activity is shown both in the form of counts (how
many times each pixel was activated) and in the form of the sum of the
Time over Threshold (ToT) values. In both cases the color scale is cut
to the $95^{\text{th}}$ percentile of the data for better visibility.
In each plot small regions that see infrequent sparks show up.
Different chips have different amounts of activity, due to imperfect
calibrations, in particular of the 6 outer chips.

** Run-2

#+CAPTION: Occupancy maps of the Septemboard during the Run-2 data taking campaign
#+CAPTION: at CAST based on the raw pixel activity counts. Color scale saturates at the $95^{\text{th}}$ percentile
#+CAPTION: of all data. Some regions with sparks are
#+CAPTION: visible: the top right chip on its left edge and bottom right, the center right chip
#+CAPTION: on its bottom left and the bottom right chip in multiple spots. Generally, activity is
#+CAPTION: slightly different between chips, due to different threshold calibrations.
#+NAME: fig:appendix:occupancy_maps:run2_counts
[[~/phd/Figs/occupancyMaps/run2_occupancy_map_by_count_perc95.pdf]]

#+CAPTION: Occupancy maps of the Septemboard during the Run-2 data taking campaign
#+CAPTION: at CAST based on the sum of raw pixel ToT values. Color scale saturates at the $95^{\text{th}}$ percentile
#+CAPTION: of all data. Any pixel activity with $\mathtt{ToT} > 1000$ is filtered out.
#+NAME: fig:appendix:occupancy_maps:run2_charge
[[~/phd/Figs/occupancyMaps/run2_occupancy_map_by_charge_perc95_max1000.pdf]]

\clearpage

** Run-3

#+CAPTION: Occupancy maps of the Septemboard during the Run-3 data taking campaign
#+CAPTION: at CAST based on the raw pixel activity counts.
#+CAPTION: Color scale saturates at the $95^{\text{th}}$ percentile
#+CAPTION: of all data. Less spark activity is visible than in Run-2. Differing levels of
#+CAPTION: activity are still visible.
#+NAME: fig:appendix:occupancy_maps:run3_counts
[[~/phd/Figs/occupancyMaps/run3_occupancy_map_by_count_perc95.pdf]]

#+CAPTION: Occupancy maps of the Septemboard during the Run-3 data taking campaign
#+CAPTION: at CAST based on the sum of raw pixel ToT values. Color scale saturates at the $95^{\text{th}}$ percentile
#+CAPTION: of all data. Any pixel activity with $\mathtt{ToT} > 1000$ is filtered out.
#+NAME: fig:appendix:occupancy_maps:run3_charge
[[~/phd/Figs/occupancyMaps/run3_occupancy_map_by_charge_perc95_max1000.pdf]]

** TODOs for this section [/] :noexport:
- [ ] This *might* become an extended appendix, we'll see.
- [ ] *PUT IN THE RAW OCCUPANCIES OF ALL CHIPS IN BACKGROUND OF EACH RUN PERIOD*
- [ ] Can we use the approach used for the sparking issue?
  -> The plot for that is in [[#sec:detector:sparking_behavior]]
  I think this should not be too difficult.
  -> Put the septem logic into ~geometry.nim~. Then turn the script into a
  ~plotSeptemOccupancy~ file with single run / multiple run arguments etc.!
** Generate occupancy map plots :extended:

Run-2:
#+begin_src sh
./plotBackgroundSeptemboard \
    -f ~/CastData/data/DataRuns2017_Reco.h5 \
    --title "Run-2 CAST data occupancy" \
    --outfile ~/phd/Figs/occupancyMaps/run2_occupancy_map_by_count_perc95.pdf &
./plotBackgroundSeptemboard \
    -f ~/CastData/data/DataRuns2017_Reco.h5 \
    --title "Run-2 CAST data occupancy" \
    --outfile ~/phd/Figs/occupancyMaps/run2_occupancy_map_by_charge_perc95_max1000.pdf \
    --onlyCount=false \
    --chargeCut 1000
#+end_src

Run-3:
#+begin_src sh
./plotBackgroundSeptemboard \
    -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --title "Run-3 CAST data occupancy" \
    --outfile ~/phd/Figs/occupancyMaps/run3_occupancy_map_by_count_perc95.pdf &
./plotBackgroundSeptemboard \
    -f ~/CastData/data/DataRuns2018_Reco.h5 \
    --title "Run-3 CAST data occupancy" \
    --outfile ~/phd/Figs/occupancyMaps/run3_occupancy_map_by_charge_perc95_max1000.pdf \
    --onlyCount=false \
    --chargeCut 1000
#+end_src

* FADC :Appendix:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:fadc
:END:

This appendix contains a few extra figures about the FADC signals.
Sec. [[#sec:appendix:fadc:rise_fall]] contains additional figures --
similar to fig. [[fig:background:fadc_rise_time]] used in the main
thesis -- about the distribution of rise and fall times.
Sec. [[#sec:appendix:background:fadc]] contains more specifics related
to the FADC veto. In the context of the FADC veto in the main body the
expected cluster size is mentioned, which is supported by a figure in
sec. [[#sec:appendix:fadc_veto_empirical_cluster_length]].

** TODOs for this section [/] :noexport:
- [X] FADC fall times
- [X] FADC rise signal eff / back eff?
- [X] FADC different settings rise / fall

** FADC rise and fall time
:PROPERTIES:
:CUSTOM_ID: sec:appendix:fadc:rise_fall
:END:

This section shows a few extra figures of the FADC signal rise and
fall times, comparing the \cefe calibration data with background data
for Run-2 and Run-3 of the CAST data taking campaign.
In the Run-2 figures more variation is visible, due to the different
FADC amplifier settings. These figures are similar to (and include)
fig. [[fig:background:fadc_rise_time]] shown in the main body. See in
particular figures
[[fig:background:fadc_rise_time_run2_different_settings]] and
[[fig:background:fadc_fall_time_run2_different_settings]] for the
impact of the different settings on the rise and fall times. The fall
time figures show that the difference between signal and background is
not very pronounced. This is expected, as the fall times are dominated
by the resistor-capacitor behavior of the circuit.

#+CAPTION: KDE of the rise time of the FADC signals in the \cefe and background data
#+CAPTION: of the CAST Run-2 dataset. The X-ray data is a single peak with a mean of
#+CAPTION: about $\SI{70}{ns}$ while the background distribution is extremely wide,
#+CAPTION: motivating a veto based on this data.
#+NAME: fig:background:fadc_rise_time_run2
[[~/phd/Figs/FADC/fadc_riseTime_kde_signal_vs_background_run2.pdf]]

#+CAPTION: KDE of the rise time of the FADC signals in the \cefe and background data
#+CAPTION: of the CAST Run-3 dataset. The X-ray data is a single peak with a mean of
#+CAPTION: about $\SI{55}{ns}$ while the background distribution is extremely wide,
#+CAPTION: motivating a veto based on this data.
#+NAME: fig:appendix:fadc_rise_time_run3
[[~/phd/Figs/FADC/fadc_riseTime_kde_signal_vs_background_run3.pdf]]

#+CAPTION: KDE of the fall time of the FADC signals in the \cefe and background data
#+CAPTION: of the CAST Run-2 dataset. Difference in the two signal types is negligible.
#+CAPTION: Multi-peak structure is due to different FADC settings, see fig. [[fig:background:fadc_fall_time_run2_different_settings]].
#+NAME: fig:background:fadc_fall_time_run2
[[~/phd/Figs/FADC/fadc_fallTime_kde_signal_vs_background_run2.pdf]]

#+CAPTION: KDE of the fall time of the FADC signals in the \cefe and background data
#+CAPTION: of the CAST Run-3 dataset.
#+CAPTION: Difference in the two signal types is negligible.
#+NAME: fig:background:fadc_rise_time_run3
[[~/phd/Figs/FADC/fadc_fallTime_kde_signal_vs_background_run3.pdf]]

#+CAPTION: KDE of the rise time of the FADC signals in the \cefe data of Run-2 split
#+CAPTION: by the different FADC settings.
#+NAME: fig:background:fadc_rise_time_run2_different_settings
[[~/phd/Figs/FADC/fadc_riseTime_kde_signal_vs_background_different_fadc_amp_settings_run2.pdf]]

#+CAPTION: KDE of the fall time of the FADC signals in the \cefe data of Run-2 split
#+CAPTION: by the different FADC settings. Impact on the fall time is more pronounced
#+CAPTION: than on the rise time above.
#+NAME: fig:background:fadc_fall_time_run2_different_settings
[[~/phd/Figs/FADC/fadc_fallTime_kde_signal_vs_background_different_fadc_amp_settings_run2.pdf]]

\clearpage

*** Generate plots for signal and background comparison :extended:

These plots are generated using ~plotFadc~ in
sec. [[#sec:background:fadc_veto:gen_signal_back_fadc_plots]].

** FADC veto
:PROPERTIES:
:CUSTOM_ID: sec:appendix:background:fadc
:END:

Only briefly mentioned in sec. [[#sec:background:fadc_veto]] is the
addition of a cut on the skewness of the FADC signal, namely a
skewness $< -0.4$. Fig. [[fig:appendix:fadc_veto:rise_skewness_run2]]
shows the FADC signal rise times of all events in Run-2 and
fig. [[fig:appendix:fadc_veto:rise_skewness_run3]] for Run-3 against
the skewness of the signal (i.e. a measure of how one-sided the signal
is, due to the negative dip from the baseline of a single FADC pulse).
The color scale indicates whether the event is considered 'noisy' by
our noise filter. We can see the majority of all noise events at
skewness values of roughly $\geq -0.4$. There are pockets of non-noisy
events above this skewness value, in particular rise times of
$\numrange{180}{210}$ and skewness $\numrange{-0.7}{0.3}$ in Run-2.
Events in this range were inspected by eye (see extended thesis).
They contain other noisy events not registered by the noise filter and
certain types of multi-cluster events resulting in peculiar shapes.

Finally, fig. [[fig:appendix:fadc_veto:fadc_sig_back_efficiency]]
shows the achievable signal efficiencies against background
suppression as a function of the rise time cut. In this case only the
upper range is cut (in practice the cut is symmetric on the lower and
upper end). This illustrates the efficiency of the FADC veto without
any additional detector parts. At the $\SI{99}{\%}$ upper range cut we
use, the background is suppressed to about $\SI{60}{\%}$ of the full
background in the energy range of the \cefe photopeak.

#+CAPTION: Scatter plot of all FADC events in Run-2 of the rise time of each event
#+CAPTION: against the skewness. Color coded is whether each event is considered noisy
#+CAPTION: by our noise filter.
#+NAME: fig:appendix:fadc_veto:rise_skewness_run2
[[~/phd/Figs/FADC/fadc_risetime_skewness_run2.pdf]]

#+CAPTION: Scatter plot of all FADC events in Run-3 of the rise time of each event
#+CAPTION: against the skewness. Color coded is whether each event is considered noisy
#+CAPTION: by our noise filter.
#+NAME: fig:appendix:fadc_veto:rise_skewness_run3
[[~/phd/Figs/FADC/fadc_risetime_skewness_run3.pdf]]

#+CAPTION: Signal efficiency achievable using only the FADC veto for \cefe calibration data
#+CAPTION: by cutting on the rise time (only the upper end) compared to the corresponding
#+CAPTION: background suppression (for Run-3 data).
#+NAME: fig:appendix:fadc_veto:fadc_sig_back_efficiency
[[~/phd/Figs/FADC/fadc_rise_time_efficiencies_run3.pdf]]

\clearpage

** Generate plot of rise time vs skewness :extended:

The code here originates from:
[[file:~/org/Doc/StatusAndProgress.org::#sec:fadc:noisy_events_and_fadc_veto]]
That section also contains the commands and studies of the
aforementioned regions of non-noisy events with 'larger than veto'
skewness values at low rise times.
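For reference, the veto decision discussed in the previous section boils down to two conditions. The following is a minimal sketch, not the actual implementation: the name =passesFadcVeto= and the parameters =riseLow= / =riseHigh= are hypothetical, standing in for the percentile based rise time window determined from the \cefe data.

#+begin_src nim
# Hypothetical sketch of the FADC veto decision (NOT the real code).
# `riseLow` / `riseHigh` would be the percentile based bounds of the
# X-ray rise time distribution (e.g. the 1st and 99th percentile).
proc passesFadcVeto(riseTime, skewness: float,
                    riseLow, riseHigh: float): bool =
  ## An event is kept as X-ray like if its skewness is sufficiently
  ## negative (a single negative pulse from the baseline) and its
  ## rise time falls into the X-ray window.
  skewness < -0.4 and riseTime in riseLow .. riseHigh
#+end_src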
#+begin_src nim :tangle /tmp/fadc_data_skewness.nim
import nimhdf5, ggplotnim
import std / [strutils, os, sequtils, stats, strformat]
import ingrid / [tos_helpers, fadc_helpers, ingrid_types, fadc_analysis]

proc getFadcSkewness(h5f: H5File, run: int): DataFrame =
  let fadcRun = readRecoFadcRun(h5f, run)
  let recoFadc = readRecoFadc(h5f, run)
  let num = fadcRun.eventNumber.len
  var skews = newSeqOfCap[float](num)
  for idx in 0 ..< fadcRun.eventNumber.len:
    skews.add fadcRun.fadcData[idx, _].squeeze.toSeq1D.skewness()
  result = toDf({ skews,
                  "riseTime" : recoFadc.riseTime.asType(float),
                  "noisy" : recoFadc.noisy })
  echo result

proc main(fname: string, outpath = "/tmp/", suffix = "") =
  let tmpFile = "/tmp/" & fname.extractFilename.replace(".h5", ".csv")
  var df = newDataFrame()
  if not fileExists tmpFile:
    var h5f = H5open(fname, "r")
    let fileInfo = h5f.getFileInfo()
    var dfs = newSeq[DataFrame]()
    for run in fileInfo.runs:
      echo "Run = ", run
      let fadcGroup = fadcRecoPath(run)
      if fadcGroup in h5f: # there were some runs at end of data taking without any FADC (298, 299)
        dfs.add h5f.getFadcSkewness(run)
    df = assignStack(dfs)
    df.writeCsv(tmpFile)
  else:
    df = readCsv(tmpFile)
  echo df
  df = df.mutate(f{int -> bool: "noisy" ~ (if `noisy` == 0: false else: true)})
  ggplot(df, aes("skews")) +
    geom_density() +
    ggsave(&"{outpath}/fadc_skewness_kde_{suffix}.pdf")
  ggplot(df, aes("skews", "riseTime", color = "noisy")) +
    #geom_point(size = 1.0, alpha = 0.2) +
    geom_point(size = 0.5, alpha = 0.75) +
    xlab("Skewness") + ylab("riseTime [ns]") +
    themeLatex(fWidth = 0.9, width = 600, baseTheme = singlePlot) +
    ggsave(&"{outpath}/fadc_risetime_skewness_{suffix}.pdf", dataAsBitmap = true)

when isMainModule:
  import cligen
  dispatch main
#+end_src

Run-2:
#+begin_src sh
WRITE_PLOT_CSV=true \
    ./fadc_data_skewness \
    -f ~/CastData/data/DataRuns2017_Reco.h5 \
    --outpath ~/phd/Figs/FADC/ \
    --suffix "run2"
#+end_src

and for sanity, Run-3:
#+begin_src sh
WRITE_PLOT_CSV=true \
    ./fadc_data_skewness \
    -f \
~/CastData/data/DataRuns2018_Reco.h5 \
    --outpath ~/phd/Figs/FADC/ \
    --suffix "run3"
#+end_src

** Expected cluster size
:PROPERTIES:
:CUSTOM_ID: sec:appendix:fadc_veto_empirical_cluster_length
:END:

While not strictly speaking FADC data, the expected size of clusters
was mentioned in sec. [[#sec:background:fadc_veto]] to be about
$\SI{6}{mm}$ in length for \cefe calibration data. This is shown in
fig. [[fig:appendix:empirical_cluster_lengths]].

#+CAPTION: Cluster lengths in millimeters of \cefe calibration data during the Run-2 data taking campaign.
#+CAPTION: These $\SI{5.9}{keV}$ clusters peak at around $\SI{5.5}{mm}$ in length with a significant drop
#+CAPTION: in statistics beyond $\SI{6}{mm}$.
#+NAME: fig:appendix:empirical_cluster_lengths
[[~/phd/Figs/expectedClusterSize/length_run83_187_chip3_0.04833333333333333_binSize_binRange-0.0_14.5_region_crSilver_rmsTransverse_0.1_1.1_applyAll_true.pdf]]

*** Generate cluster length plot :extended:

#+begin_src sh
WRITE_PLOT_CSV=true LINE_BREAK=true T_MARGIN=1.75 USE_TEX=true WIDTH=600 HEIGHT=420 \
    plotData --h5file ~/CastData/data/CalibrationRuns2017_Reco.h5 \
    --runType rtCalibration \
    --chips 3 \
    --config ~/CastData/ExternCode/TimepixAnalysis/Plotting/karaPlot/config.toml \
    --ingrid \
    --cuts '("rmsTransverse", 0.1, 1.1)' \
    --applyAllCuts \
    --region crSilver \
    --plotPath ~/phd/Figs/expectedClusterSize/
#+end_src

*** TODOs for this section [/] :noexport:
- [X] Include the plot about empirical length of clusters
  -> What plot is that?
  -> I guess a plot of the transverse RMS?
  -> Ahh, not quite! It refers to the literal *length* of \cefe X-ray
  clusters, mentioned in sec. [[#sec:background:fadc_veto]]
  -> Based on [[file:~/org/Doc/StatusAndProgress.org::#sec:fadc:estimate_rise_times]].
#+begin_quote
From the CAST \cefe data we see a peak at around $\SI{6}{mm}$ of
transverse cluster size along the longer axis, which matches well with
our expectation (see appendix
[[#sec:appendix:fadc_veto_empirical_cluster_length]] for the length
data).
#+end_quote

* Raw data and background rates :Appendix:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:background_rates
:END:

In this appendix we find a few more figures and numbers about the
background rates and a comparison to the raw data rates.
Fig. [[fig:appendix:background_rates:all_cast_data]] shows the raw
data rate of the full center chip over the entire CAST data taking
campaign, in the same units as used for the normal background rate
plots. The background increases towards low energies, but a hump is
visible in the $\SIrange{2}{3}{keV}$ range, matching the expected
energies for muon tracks. The rates are roughly four orders of
magnitude larger than the best achieved background rates.

#+CAPTION: All CAST data of the entire center chip without any classifier or other cuts.
#+CAPTION: Uses the same units as all background plots for easier comparison.
#+NAME: fig:appendix:background_rates:all_cast_data
[[~/phd/Figs/CAST_raw_data/all_cast_data_rate_crAll_chip_3_log.pdf]]

Further, in sec. [[#sec:appendix:background_rates:full_chip]] we find
background rates not only for the \goldArea region, but for the entire
center chip. This serves both to compare with
fig. [[fig:appendix:background_rates:all_cast_data]] and to get a
better idea of the efficiency of the vetoes.

** TODOs for this section [/] :noexport:
- [ ] Possibly more background rate plots?
- [ ] *IMPORTANT*
- [ ] Background rate plots for MLP best case 95% plus line veto
- [ ] Background rates over the entire chip for different veto setups!
  Emphasizes even more effect of vetoes!
- [ ] Raw data rate without any classifier or vetoes!
- [ ] Table of rates over whole chip

** Background rates over full chip
:PROPERTIES:
:CUSTOM_ID: sec:appendix:background_rates:full_chip
:END:

Fig. [[fig:appendix:background_rates:full_chip_crAll_lnL80]] shows the
effect of the different vetoes over the entire chip, using the \lnL
method at $\SI{80}{\%}$ software efficiency as a base. It better
highlights the efficiency of some vetoes; for example, the FADC veto
is more effective over the entire chip than in the center \goldArea
region alone (compare with
fig. [[fig:background:background_rate_fadc_veto]]). Of course, the
septem and line vetoes are even more efficient over the entire chip,
as expected and previously seen in the form of the background
suppression plots (for example
fig. sref:fig:background:background_suppression_comparison).

Fig. [[fig:appendix:background_rates:full_chip_crAll_lnL_mlp_setups]]
shows how the different software efficiencies affect the background
rate, comparing different \lnL and MLP efficiencies with all vetoes
included. Finally,
tab. [[tab:appendix:background_rates:mean_background_rates]] shows the
mean background rates over the entire chip for all considered \lnL and
MLP veto setups in the energy range $\SIrange{0.2}{8}{keV}$. It is the
equivalent of tab. [[tab:background:background_rate_eff_comparisons]]
for the full chip.

#+CAPTION: Background rate over the entire center chip of all CAST data, comparing the
#+CAPTION: \lnL cut method at $\SI{80}{\%}$ software efficiency without any vetoes with
#+CAPTION: each veto setup (vetoes are additive again).
#+NAME: fig:appendix:background_rates:full_chip_crAll_lnL80
[[~/phd/Figs/background/background_rate_crAll_lnL80.pdf]]

#+CAPTION: Background rate over the entire center chip of all CAST data, comparing the
#+CAPTION: \lnL cut method and the MLP at different software efficiencies including all
#+CAPTION: vetoes.
#+NAME: fig:appendix:background_rates:full_chip_crAll_lnL_mlp_setups
[[~/phd/Figs/background/background_rate_crAll_all_vetoes_lnL_mlp.pdf]]

#+CAPTION: Mean background rates in $\si{keV^{-1}.cm^{-2}.s^{-1}}$ over the entire chip comparing all different \lnL and MLP setups
#+CAPTION: in an energy range from $\SIrange{0.2}{8}{keV}$. Due to its efficiency at lower energies, where
#+CAPTION: the majority of background is, the MLP produces the lowest mean rates.
#+NAME: tab:appendix:background_rates:mean_background_rates
#+ATTR_LATEX: :booktabs t
|------------+------------------+--------+-------+--------+-------+--------------------+---------------------------------------|
| Classifier | $ε_{\text{eff}}$ | Scinti | FADC  | Septem | Line  | $ε_{\text{total}}$ | Rate [$\si{keV^{-1}.cm^{-2}.s^{-1}}$] |
|------------+------------------+--------+-------+--------+-------+--------------------+---------------------------------------|
| MLP        | 0.865            | true   | true  | true   | true  | 0.621              | $\num{ 1.73569(3138)e-05}$            |
| MLP        | 0.912            | true   | true  | true   | true  | 0.655              | $\num{ 2.26378(3583)e-05}$            |
| LnL        | 0.700            | true   | true  | true   | true  | 0.503              | $\num{ 2.54625(3800)e-05}$            |
| LnL        | 0.800            | true   | true  | true   | true  | 0.574              | $\num{ 3.30065(4327)e-05}$            |
| MLP        | 0.957            | true   | true  | true   | true  | 0.687              | $\num{ 3.55534(4491)e-05}$            |
| LnL        | 0.900            | true   | true  | true   | true  | 0.646              | $\num{ 4.51280(5059)e-05}$            |
| MLP        | 0.865            | true   | true  | false  | true  | 0.729              | $\num{ 4.90021(5272)e-05}$            |
| LnL        | 0.800            | true   | true  | true   | false | 0.615              | $\num{ 5.48218(5576)e-05}$            |
| MLP        | 0.983            | true   | true  | true   | true  | 0.706              | $\num{ 5.79245(5732)e-05}$            |
| MLP        | 0.912            | true   | true  | false  | true  | 0.769              | $\num{ 6.25757(5958)e-05}$            |
| LnL        | 0.700            | true   | true  | false  | true  | 0.590              | $\num{ 6.59450(6116)e-05}$            |
| LnL        | 0.800            | true   | true  | false  | true  | 0.674              | $\num{ 8.42889(6915)e-05}$            |
| MLP        | 0.957            | true   | true  | false  | true  | 0.807              | $\num{ 9.12033(7193)e-05}$            |
| LnL        | 0.900            | true   | true  | false  | true  | 0.759              | $\num{1.107044(7924)e-04}$            |
| MLP        | 0.983            | true   | true  | false  | true  | 0.829              | $\num{1.344765(8734)e-04}$            |
| MLP        | 0.865            | false  | false | false  | false | 0.865              | $\num{ 3.09861(1326)e-04}$            |
| LnL        | 0.700            | false  | false | false  | false | 0.700              | $\num{ 3.35726(1380)e-04}$            |
| LnL        | 0.800            | true   | true  | false  | false | 0.784              | $\num{ 3.79510(1467)e-04}$            |
| MLP        | 0.912            | false  | false | false  | false | 0.912              | $\num{ 3.90043(1487)e-04}$            |
| LnL        | 0.800            | true   | false | false  | false | 0.800              | $\num{ 4.21893(1547)e-04}$            |
| LnL        | 0.800            | false  | false | false  | false | 0.800              | $\num{ 4.26918(1556)e-04}$            |
| MLP        | 0.957            | false  | false | false  | false | 0.957              | $\num{ 5.33187(1739)e-04}$            |
| LnL        | 0.900            | false  | false | false  | false | 0.900              | $\num{ 5.60453(1783)e-04}$            |
| MLP        | 0.983            | false  | false | false  | false | 0.983              | $\num{ 7.25588(2029)e-04}$            |

** Generate rate without any vetoes over full chip :extended:

We will use the same units as for the background rate for easier
comparison, i.e. 1e-5 keV⁻¹·cm⁻²·s⁻¹.

#+begin_src sh
plotBackgroundRate \
    ~/CastData/data/DataRuns2017_Reco.h5 \
    ~/CastData/data/DataRuns2018_Reco.h5 \
    --names "All" --names "All" \
    --centerChip 3 \
    --title "Raw CAST data rate" \
    --showNumClusters \
    --region crAll \
    --showTotalTime \
    --topMargin 1.5 \
    --energyDset energyFromCharge \
    --energyMin 0.0 \
    --outfile all_cast_data_rate_crAll_chip_3_log.pdf \
    --outpath ~/phd/Figs/CAST_raw_data \
    --useTeX \
    --quiet
#+end_src

#+RESULTS:

** Generate background rates over full chip :extended:

- [ ] Adjust the below to use the interesting input files. The region
  is already ~crAll~. Just need to remove the files we don't want for
  different plots.
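As a quick sanity check on the $ε_{\text{total}}$ column of the table in the previous section: it is simply the product of the software efficiency and the individual veto efficiencies, which can be read off by dividing suitable rows of the table. A minimal sketch (all numbers taken from the table above; this snippet is not part of the analysis code):

#+begin_src nim
# Values read off the mean background rate table (LnL at 80% software
# efficiency). The scintillator veto is essentially free in efficiency,
# the FADC veto costs about 2%.
let epsLnL    = 0.800         # software efficiency of the lnL cut
let epsScinti = 0.800 / 0.800 # "LnL | 0.800 | true | false | ..." row -> 1.0
let epsFadc   = 0.784 / 0.800 # "LnL | 0.800 | true | true | false | false" row -> 0.98
echo epsLnL * epsScinti * epsFadc # ≈ 0.784, the table's ε_total for LnL80 + Scinti + FADC
#+end_src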
LnL 80% veto setups: #+begin_src sh :results drawer WRITE_PLOT_CSV=true plotBackgroundRate \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_line_vQ_0.99.h5 \ --names "No vetoes" --names "No vetoes" \ --names "Scinti" --names "Scinti" \ --names "FADC" --names "FADC" \ --names "Septem" --names "Septem" \ --names "All vetoes" --names "All vetoes" \ --names "Line" --names "Line" \ --centerChip 3 \ --title "Background rate from CAST data over entire chip, lnL@80% incl. 
vetoes" \ --showNumClusters \ --showTotalTime \ --topMargin 1.5 \ --energyDset energyFromCharge \ --outfile background_rate_crAll_lnL80.pdf \ --outpath ~/phd/Figs/background/ \ --region crAll \ --useTeX \ --logPlot \ --hidePoints --hideErrors \ --quiet #+end_src Comparison of different lnL and MLP setups with all vetoes in place: #+begin_src sh :results drawer WRITE_PLOT_CSV=true plotBackgroundRate \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ --names "lnL@80" --names "lnL@80" \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.7_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.7_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ --names "lnL@70" --names "lnL@70" \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.9_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.9_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ --names "lnL@90" --names "lnL@90" \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.85_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --names "MLP@85" --names "MLP@85" \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.9_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ 
~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.9_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --names "MLP@90" --names "MLP@90" \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --names "MLP@95" --names "MLP@95" \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.98_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.98_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --names "MLP@98" --names "MLP@98" \ --centerChip 3 \ --title "Background rate from CAST data over entire chip, all vetoes" \ --showNumClusters \ --showTotalTime \ --topMargin 1.5 \ --energyDset energyFromCharge \ --outfile background_rate_crAll_all_vetoes_lnL_mlp.pdf \ --outpath ~/phd/Figs/background/ \ --region crAll \ --energyMin 0.2 \ --useTeX \ --logPlot \ --hidePoints --hideErrors \ --quiet #+end_src ** Generate table of background rates for all setups :extended: We use the background rate plotting tool to produce a table of the background rates over the entire chip in an energy range from 0 to 8 keV. 
Note the ~--rateTable~ argument at the end in the massive wall below: #+begin_src sh :results drawer WRITE_PLOT_CSV=true plotBackgroundRate \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.8_lnL_scinti_fadc_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.8_lnL_scinti_fadc_line_vQ_0.99.h5 \ --names "LnL80" --names "LnL80" \ --names "LnL80+Sc" --names "LnL80+Sc" \ --names "LnL80+F" --names "LnL80+F" \ --names "LnL80+S" --names "LnL80+S" \ --names "LnL80+SL" --names "LnL80+SL" \ --names "LnL80+L" --names "LnL80+L" \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.7_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.7_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.7_lnL_scinti_fadc_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.7_lnL_scinti_fadc_line_vQ_0.99.h5 
\ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.7_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.7_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ --names "LnL70" --names "LnL70" \ --names "LnL70+L" --names "LnL70+L" \ --names "LnL70+S" --names "LnL70+S" \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.9_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.9_lnL.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.9_lnL_scinti_fadc_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.9_lnL_scinti_fadc_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R2_crAll_sEff_0.9_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ ~/org/resources/lhood_lnL_17_11_23_septem_fixed/lhood_c18_R3_crAll_sEff_0.9_lnL_scinti_fadc_septem_line_vQ_0.99.h5 \ --names "LnL90" --names "LnL90" \ --names "LnL90+L" --names "LnL90+L" \ --names "LnL90+S" --names "LnL90+S" \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.85_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.85_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.85_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ 
~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.85_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.85_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --names "MLP@85" --names "MLP@85" \ --names "MLP@85+L" --names "MLP@85+L" \ --names "MLP@85+S" --names "MLP@85+S" \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.9_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.9_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.9_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.9_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.9_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.9_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --names "MLP@9" --names "MLP@9" \ --names "MLP@9+L" --names "MLP@9+L" \ --names "MLP@9+S" --names "MLP@9+S" \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ 
~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.95_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --names "MLP@95" --names "MLP@95" \ --names "MLP@95+L" --names "MLP@95+L" \ --names "MLP@95+S" --names "MLP@95+S" \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.98_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.98_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.98_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.98_scinti_fadc_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ 
~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R2_crAll_sEff_0.98_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ ~/org/resources/lhood_mlp_17_11_23_adam_tanh30_sigmoid_mse_82k/lhood_c18_R3_crAll_sEff_0.98_scinti_fadc_septem_line_mlp_mlp_tanh_sigmoid_MSE_Adam_30_2checkpoint_epoch_82000_loss_0.0249_acc_0.9662_vQ_0.99.h5 \ --names "MLP@98" --names "MLP@98" \ --names "MLP@98+L" --names "MLP@98+L" \ --names "MLP@98+S" --names "MLP@98+S" \ --centerChip 3 \ --title "Background rate from CAST data, incl. scinti, FADC, septem, line veto" \ --showNumClusters \ --showTotalTime \ --topMargin 1.5 \ --energyDset energyFromCharge \ --outfile background_rate_crAll_different_veto_cases.pdf \ --outpath ~/phd/Figs/background/ \ --region crAll \ --energyMin 0.2 \ --logPlot --hidePoints --hideErrors \ --rateTable ~/phd/resources/background_rate_comparisons_crAll.org \ --noPlot \ --quiet #+end_src #+RESULTS: :results: Manual rate = 2.66983(1097)e-03 [INFO]:Dataset: LnL70 [INFO]: Integrated background rate in range: 0.2 .. 8.0: 2.61866(1076)e-03 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 3.35726(1380)e-04 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 5.24421(4864)e-04 [INFO]:Dataset: LnL70+L [INFO]: Integrated background rate in range: 0.2 .. 8.0: 5.14371(4770)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 6.59450(6116)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.02488(3022)e-04 [INFO]:Dataset: LnL70+S [INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.98608(2964)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 2.54625(3800)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.39502(1238)e-03 [INFO]:Dataset: LnL80 [INFO]: Integrated background rate in range: 0.2 .. 8.0: 3.32996(1214)e-03 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 
8.0: 4.26918(1556)e-04 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.01802(1167)e-03 [INFO]:Dataset: LnL80+F [INFO]: Integrated background rate in range: 0.2 .. 8.0: 2.96018(1144)e-03 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 3.79510(1467)e-04 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 6.70299(5499)e-04 [INFO]:Dataset: LnL80+L [INFO]: Integrated background rate in range: 0.2 .. 8.0: 6.57454(5393)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 8.42889(6915)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 4.35965(4435)e-04 [INFO]:Dataset: LnL80+S [INFO]: Integrated background rate in range: 0.2 .. 8.0: 4.27610(4350)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 5.48218(5576)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.62481(3441)e-04 [INFO]:Dataset: LnL80+SL [INFO]: Integrated background rate in range: 0.2 .. 8.0: 2.57451(3375)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 3.30065(4327)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.35506(1230)e-03 [INFO]:Dataset: LnL80+Sc [INFO]: Integrated background rate in range: 0.2 .. 8.0: 3.29076(1207)e-03 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 4.21893(1547)e-04 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 4.45695(1418)e-03 [INFO]:Dataset: LnL90 [INFO]: Integrated background rate in range: 0.2 .. 8.0: 4.37153(1391)e-03 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 5.60453(1783)e-04 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 8.80365(6302)e-04 [INFO]:Dataset: LnL90+L [INFO]: Integrated background rate in range: 0.2 .. 8.0: 8.63494(6181)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 1.107044(7924)e-04 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.58876(4023)e-04 [INFO]:Dataset: LnL90+S [INFO]: Integrated background rate in range: 0.2 .. 8.0: 3.51999(3946)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 
8.0: 4.51280(5059)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.46414(1054)e-03 [INFO]:Dataset: MLP@85 [INFO]: Integrated background rate in range: 0.2 .. 8.0: 2.41692(1034)e-03 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 3.09861(1326)e-04 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.89685(4193)e-04 [INFO]:Dataset: MLP@85+L [INFO]: Integrated background rate in range: 0.2 .. 8.0: 3.82217(4112)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 4.90021(5272)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.38029(2495)e-04 [INFO]:Dataset: MLP@85+S [INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.35384(2447)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 1.73569(3138)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 3.10178(1183)e-03 [INFO]:Dataset: MLP@9 [INFO]: Integrated background rate in range: 0.2 .. 8.0: 3.04234(1160)e-03 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 3.90043(1487)e-04 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 4.97627(4738)e-04 [INFO]:Dataset: MLP@9+L [INFO]: Integrated background rate in range: 0.2 .. 8.0: 4.88091(4647)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 6.25757(5958)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.80024(2850)e-04 [INFO]:Dataset: MLP@9+S [INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.76574(2795)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 2.26378(3583)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 4.24012(1383)e-03 [INFO]:Dataset: MLP@95 [INFO]: Integrated background rate in range: 0.2 .. 8.0: 4.15886(1356)e-03 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 5.33187(1739)e-04 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 7.25285(5720)e-04 [INFO]:Dataset: MLP@95+L [INFO]: Integrated background rate in range: 0.2 .. 8.0: 7.11386(5610)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 
8.0: 9.12033(7193)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 2.82735(3571)e-04 [INFO]:Dataset: MLP@95+S [INFO]: Integrated background rate in range: 0.2 .. 8.0: 2.77316(3503)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 3.55534(4491)e-05 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 5.77017(1613)e-03 [INFO]:Dataset: MLP@98 [INFO]: Integrated background rate in range: 0.2 .. 8.0: 5.65959(1582)e-03 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 7.25588(2029)e-04 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 1.069411(6945)e-03 [INFO]:Dataset: MLP@98+L [INFO]: Integrated background rate in range: 0.2 .. 8.0: 1.048917(6812)e-03 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 1.344765(8734)e-04 keV⁻¹·cm⁻²·s⁻¹ Manual rate = 4.60639(4558)e-04 [INFO]:Dataset: MLP@98+S [INFO]: Integrated background rate in range: 0.2 .. 8.0: 4.51811(4471)e-04 cm⁻²·s⁻¹ [INFO]: Integrated background rate/keV in range: 0.2 .. 8.0: 5.79245(5732)e-05 keV⁻¹·cm⁻²·s⁻¹ | Classifier | ε_eff | Scinti | FADC | Septem | Line | ε_total | rateMeas | Rate | | MLP | 0.865 | true | true | true | true | 0.621 | (1.7357 ± 0.0314)e-05 | 1.73569(3138)e-05 | | MLP | 0.912 | true | true | true | true | 0.655 | (2.2638 ± 0.0358)e-05 | 2.26378(3583)e-05 | | LnL | 0.700 | true | true | true | true | 0.503 | (2.5463 ± 0.0380)e-05 | 2.54625(3800)e-05 | | LnL | 0.800 | true | true | true | true | 0.574 | (3.3007 ± 0.0433)e-05 | 3.30065(4327)e-05 | | MLP | 0.957 | true | true | true | true | 0.687 | (3.5553 ± 0.0449)e-05 | 3.55534(4491)e-05 | | LnL | 0.900 | true | true | true | true | 0.646 | (4.5128 ± 0.0506)e-05 | 4.51280(5059)e-05 | | MLP | 0.865 | true | true | false | true | 0.729 | (4.9002 ± 0.0527)e-05 | 4.90021(5272)e-05 | | LnL | 0.800 | true | true | true | false | 0.615 | (5.4822 ± 0.0558)e-05 | 5.48218(5576)e-05 | | MLP | 0.983 | true | true | true | true | 0.706 | (5.7925 ± 0.0573)e-05 | 5.79245(5732)e-05 | | MLP | 0.912 | true | true | false | true | 0.769 | (6.2576 ± 
0.0596)e-05 | 6.25757(5958)e-05 | | LnL | 0.700 | true | true | false | true | 0.590 | (6.5945 ± 0.0612)e-05 | 6.59450(6116)e-05 | | LnL | 0.800 | true | true | false | true | 0.674 | (8.4289 ± 0.0691)e-05 | 8.42889(6915)e-05 | | MLP | 0.957 | true | true | false | true | 0.807 | (9.1203 ± 0.0719)e-05 | 9.12033(7193)e-05 | | LnL | 0.900 | true | true | false | true | 0.759 | (1.10704 ± 0.00792)e-04 | 1.107044(7924)e-04 | | MLP | 0.983 | true | true | false | true | 0.829 | (1.34477 ± 0.00873)e-04 | 1.344765(8734)e-04 | | MLP | 0.865 | false | false | false | false | 0.865 | (3.0986 ± 0.0133)e-04 | 3.09861(1326)e-04 | | LnL | 0.700 | false | false | false | false | 0.700 | (3.3573 ± 0.0138)e-04 | 3.35726(1380)e-04 | | LnL | 0.800 | true | true | false | false | 0.784 | (3.7951 ± 0.0147)e-04 | 3.79510(1467)e-04 | | MLP | 0.912 | false | false | false | false | 0.912 | (3.9004 ± 0.0149)e-04 | 3.90043(1487)e-04 | | LnL | 0.800 | true | false | false | false | 0.800 | (4.2189 ± 0.0155)e-04 | 4.21893(1547)e-04 | | LnL | 0.800 | false | false | false | false | 0.800 | (4.2692 ± 0.0156)e-04 | 4.26918(1556)e-04 | | MLP | 0.957 | false | false | false | false | 0.957 | (5.3319 ± 0.0174)e-04 | 5.33187(1739)e-04 | | LnL | 0.900 | false | false | false | false | 0.900 | (5.6045 ± 0.0178)e-04 | 5.60453(1783)e-04 | | MLP | 0.983 | false | false | false | false | 0.983 | (7.2559 ± 0.0203)e-04 | 7.25588(2029)e-04 | :end: * Background interpolation chip cutout correction :Appendix: :PROPERTIES: :CUSTOM_ID: sec:appendix:background_interpolation_chip_area :END: As mentioned in the main text (sec. [[#sec:limit:ingredients:background]]), when performing the background interpolation, we need to correct for the fact that towards the edges of the chip, part of the circle (the x-y plane component) will be cut off. To correct for this we want to compute the area of the circle that is still contained on the chip. By scaling the intensity $I$ by the missing area we correct for this. 
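Before the full derivation, a minimal Python sketch (the analysis itself is written in Nim; all names here are illustrative) of the simplest case, a circle cut off by a single chip edge: the cut-off circular segment is computed from the segment-area formula, and the intensity is scaled by the ratio of full to remaining area.

#+begin_src python
import math

def single_edge_cutout(R: float, dx: float) -> float:
    """Area of a circle of radius R remaining on the chip when its center
    lies a distance dx (0 <= dx <= R) from a single chip edge.
    Uses the circular segment area A = R²/2 · (ϑ - sin ϑ), ϑ = 2·arccos(dx/R)."""
    theta = 2.0 * math.acos(dx / R)                 # opening angle of cut segment
    segment = R**2 / 2.0 * (theta - math.sin(theta))
    return math.pi * R**2 - segment                 # full area F minus segment

def corrected_intensity(I: float, R: float, dx: float) -> float:
    """Scale the interpolated intensity by F / E to compensate the cutout."""
    F = math.pi * R**2
    return I * F / single_edge_cutout(R, dx)

# Center directly on the edge: half the circle is off-chip, so I is doubled.
print(round(corrected_intensity(1.0, 0.3, 0.0), 6))  # prints 2.0
#+end_src

The double-cutout case handled in the following derivation adds a second, orthogonal edge and the overlap bookkeeping between the two segments.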
Let's discuss the actual calculation of the (possibly) doubly cut-out circle. The area of a single [[https://en.wikipedia.org/wiki/Circular_segment][circle segment]] can be written as
\[
A = R² / 2 · (ϑ - \sin(ϑ))
\]
where $R$ is the radius of the circle and $ϑ$ the angle that cuts off the circle. See the upper center part of fig. [[fig:appendix:background_interp_cutoff_explanation]] for the area $A$.

In the general case we need to know the area of a circle that is cut off from two sides, with angles $ϑ_1$ and $ϑ_2$, which are orthogonal to one another. See the middle left part of fig. [[fig:appendix:background_interp_cutoff_explanation]] to see areas $A$ and $B$ cut off, leaving area $E$ as the remaining area contained on the chip.

For a point $(x, y)$, define the distance to the edge of the chip to be $(Δx, Δy)$ for each axis. Then, the cutout areas $A$ and $B$ are given by
\[
A = R² / 2 · (ϑ_1 - \sin(ϑ_1))
\]
and
\[
B = R² / 2 · (ϑ_2 - \sin(ϑ_2))
\]
where the angles $ϑ_1$ and $ϑ_2$ (see center of fig. [[fig:appendix:background_interp_cutoff_explanation]]) are related to the distances to the edge of the chip by
\begin{align*}
ϑ_1 &= 2 \arccos(Δx / R) \\
ϑ_2 &= 2 \arccos(Δy / R).
\end{align*}
By subtracting areas $A$ and $B$ from the total area $F$, however, we remove too much. So we need to add back:
- another circle segment $D$, of the angle $α$ given by the lines connecting to the ends of each cut-off line (see the center row of fig. [[fig:appendix:background_interp_cutoff_explanation]])
- the area of the triangle $C$, see the bottom row of fig. [[fig:appendix:background_interp_cutoff_explanation]].

The angle $α$ relates to angles $ϑ_1$ and $ϑ_2$ via
\[
α = \frac{ϑ_2}{2} - \left(π - \frac{ϑ_1}{2}\right),
\]
see the center row of fig. [[fig:appendix:background_interp_cutoff_explanation]]. To calculate the area $C$, we need the catheti of the triangle, $x'$ and $y'$. See the bottom part of fig. [[fig:appendix:background_interp_cutoff_explanation]].
These are the distances from the /orthogonal/ cutoff line to the edge of the circle, as hopefully made clear in the figure. Given that we know the center position of the circle (as that is the interpolation point), we can express $x'$ and $y'$ via the circle radius $R$ and the distances from the center to the chip edge in each axis, $Δx$ and $Δy$ (they may in theory be negative if the center is outside the chip). They are thus
\begin{align*}
x' &= \cos β · R - Δx \\
y' &= \cos γ · R - Δy,
\end{align*}
where $γ$ is the same angle as in the middle row of the schematic and $β$ is the equivalent for $ϑ_2$, $β = \frac{ϑ_2}{2} - \frac{π}{2}$.

In combination, the area $E$ can then be expressed as:
\[
E = F - A - B + C + D,
\]
with $F$ being the total area of the circle. This finally means that, to correct the background interpolation at a point close to the chip edges, we adjust $I$ by
\[
I'(x,y) = I · \frac{F}{E(x,y)},
\]
where we emphasize that $E(x,y)$ depends on the position, before normalizing with the weight $W$.

#+CAPTION: Explanation of the different areas appearing in the calculation. We want
#+CAPTION: to calculate area $E$, the area remaining on the chip. Subtracting $A$ and
#+CAPTION: $B$ makes us subtract areas $C$ and $D$ twice.
#+NAME: fig:appendix:background_interp_cutoff_explanation
#+ATTR_LATEX: :width 1.0\linewidth
[[~/phd/Figs/circleCutout/circle_cutout_explanation.pdf]]

* Additional limit information :Appendix:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:limit_additional
:END:

** Conversion probability as a function of mass :Appendix:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:conversion_probability
:END:

Fig. [[fig:appendix:conversion_probability_vs_mass]] shows how the axion-photon conversion probability changes as a function of the axion mass. This implements eq.
[[eq:theory:axion_interaction:conversion_probability]], reproduced here with slightly changed notation,
\[
P_{a↦γ} = \left( \frac{g_{aγ} B L}{2} \right)² \left(\frac{\sin\left(\frac{q L}{2}\right)}{\frac{q L}{2}}\right)²,
\]
with $q = \frac{m²_γ - m²_a}{2 E_a}$. $E_a$ is the energy of the axion (or, in the context of a limit calculation, the energy of a candidate). In the vacuum setup $m_γ = 0$. The figure shows this conversion probability for different axion energies, using the parameters of the CAST magnet. We see that the conversion probability starts falling off roughly around $m_a \approx \SI{0.01}{eV}$, with the exact value depending on energy (and one's chosen $ΔP$ cutoff).

#+CAPTION: Axion-photon conversion probability as a function of axion mass. Using
#+CAPTION: $g_{aγ} = \SI{1e-12}{GeV⁻¹}, B = \SI{8.8}{T}, L = \SI{9.26}{m}$. Different
#+CAPTION: axion energies indicated by color.
#+NAME: fig:appendix:conversion_probability_vs_mass
[[~/phd/Figs/axions/axion_conversion_probability_vs_mass.pdf]]

\clearpage

*** TODOs for this section [1/1] :noexport:

- [X] Plot of conversion probability scaling

*** Generate the plot of the conversion probability :extended:

Taken and adapted from [[file:~/org/Code/CAST/babyIaxoAxionMassRange/axionMass.org]].

#+begin_src nim
import ggplotnim, unchained, sequtils

proc momentumTransfer(m_γ, m_a: eV, E_a = 4.2.keV): eV =
  ## calculates the momentum transfer for a given effective photon
  ## mass `m_gamma` and axion mass `m_a` at an axion energy of
  ## `E_a` (4.2 keV by default).
  result = abs((m_γ * m_γ - m_a * m_a) / (2 * E_a))

proc vacuumConversionProb(E_a: keV, m_a: eV, B: Tesla, L: Meter): float =
  ## calculates the vacuum conversion probability for the given axion
  ## mass `m_a`, axion energy `E_a` and magnet parameters `B` and `L`
  # both `g_aγ` and `B` only scale the absolute value of `P`;
  # they do not affect the mass dependence
  const g_aγ = 1e-12.GeV⁻¹
  let q = momentumTransfer(m_γ = 0.0.eV, m_a = m_a, E_a = E_a)
  # `toNaturalUnit` converts the length in `m` to natural units
  let term1 = pow(g_aγ * B.toNaturalUnit() * L.toNaturalUnit() / 2.0, 2)
  let term2 = pow(sin(q * L.toNaturalUnit() / 2.0) / (q * L.toNaturalUnit() / 2.0), 2.0)
  result = term1 * term2

let energies = arange(1, 9, 2)
var df = newDataFrame()
for E_a in energies:
  let masses = logspace(-6, 0, 1000)
  let Ps = masses.mapIt(vacuumConversionProb(E_a.keV, it.eV, 8.8.T, 9.26.m))
  df.add toDf({"masses" : masses, "Ps" : Ps, "E_a [keV]" : E_a})
ggplot(df, aes("masses", "Ps", color = "E_a [keV]")) +
  geom_line() +
  xlab("Axion mass [eV]") + ylab("Conversion probability") +
  themeLatex(fWidth = 0.9, width = 600, baseTheme = singlePlot) +
  scale_x_log10() + scale_y_log10() +
  ggsave("~/phd/Figs/axions/axion_conversion_probability_vs_mass.pdf")
#+end_src

#+RESULTS:
: [INFO]: No plot ratio given, using golden ratio.
: INFO: The integer column `E_a [keV]` has been automatically determined to be discrete. To overwrite this behavior add a `+ scale_x/y_continuous()` call to the plotting chain. Choose `x` or `y` depending on which axis this column refers to.
: [INFO] TeXDaemon ready for input.
: shellCmd: command -v lualatex
: shellCmd: lualatex -output-directory /home/basti/phd/Figs/axions /home/basti/phd/Figs/axions/axion_conversion_probability_vs_mass.tex
: Generated: /home/basti/phd/Figs/axions/axion_conversion_probability_vs_mass.pdf

** Expected limit table with percentiles
:PROPERTIES:
:CUSTOM_ID: sec:appendix:exp_limit_percentiles
:END:

Tab. [[tab:appendix:expected_limits_percentiles]] shows the same data as tab. [[tab:limit:expected_limits]] in the main part of the thesis, but with a focus on the variation of the limits, namely on different percentiles of the distribution of sampled toy limits. The $P_i$ columns correspond to the limit at the $i^{\text{th}}$ percentile of all toy limits; $P_{50}$ would be the median and thus the expected limit. The table yields insight into the probabilities with which limits are expected for certain setups, given the purely statistical fluctuations of the measured candidates.

The veto information has been merged into the 'Type' column. A suffix 'L' indicates the 'line veto', 'S' the 'septem veto' and 'SL' both vetoes. '-' means no vetoes. FADC and scintillators are implicitly included if either the septem or line veto is in use. The units are excluded in the column names to save space. For the axion-electron and axion-photon tables they are all in $\si{GeV⁻¹}$. For the expected limit (last column) the uncertainty is again a bootstrapped standard deviation.

The analogous tables for the expected axion-photon and chameleon limits are tab. [[tab:appendix:expected_limits_percentiles_axion_photon]] and tab. [[tab:appendix:expected_limits_percentiles_chameleon]], respectively.
They only have a single row, because we only computed the expected limit for one veto setup. \footnotesize #+CAPTION: Table of the expected limits for different veto setups, comparable to tab. [[tab:limit:expected_limits]], #+CAPTION: with a focus on the percentiles $P_i$ of the computed toy limits. #+CAPTION: For example $P_{25}$ is the $25^{\text{th}}$ percentile of the distribution of toy limits. #+CAPTION: All values in units of $\si{GeV⁻¹}$. #+NAME: tab:appendix:expected_limits_percentiles #+ATTR_LATEX: :booktabs t :environment longtable | ε_eff | nmc | Type | ε_total | $P_5$ | $P_{16}$ | $P_{25}$ | $P_{75}$ | $P_{84}$ | $P_{95}$ | Expected | |------+-------+--------+--------+----------+----------+----------+----------+----------+----------+----------------| | 0.98 | 1000 | MLP - | 0.98 | 6.44e-23 | 6.82e-23 | 7.09e-23 | 8.65e-23 | 9.09e-23 | 1.03e-22 | 7.805(37)e-23 | | 0.91 | 1000 | MLP - | 0.91 | 6.59e-23 | 6.96e-23 | 7.21e-23 | 8.75e-23 | 9.25e-23 | 1.03e-22 | 7.856(43)e-23 | | 0.95 | 1000 | MLP - | 0.95 | 6.53e-23 | 6.87e-23 | 7.14e-23 | 8.74e-23 | 9.18e-23 | 1.02e-22 | 7.860(51)e-23 | | 0.95 | 2500 | MLP L | 0.8 | 6.77e-23 | 7.07e-23 | 7.26e-23 | 8.72e-23 | 9.17e-23 | 1.03e-22 | 7.862(29)e-23 | | 0.98 | 15000 | MLP L | 0.82 | 6.7e-23 | 7e-23 | 7.2e-23 | 8.72e-23 | 9.2e-23 | 1.02e-22 | 7.868(11)e-23 | | 0.95 | 50000 | MLP L | 0.8 | 6.75e-23 | 7.04e-23 | 7.25e-23 | 8.72e-23 | 9.18e-23 | 1.02e-22 | 7.8782(65)e-23 | | 0.95 | 15000 | MLP L | 0.8 | 6.75e-23 | 7.04e-23 | 7.24e-23 | 8.72e-23 | 9.16e-23 | 1.03e-22 | 7.879(12)e-23 | | 0.98 | 2500 | MLP L | 0.82 | 6.73e-23 | 7.01e-23 | 7.19e-23 | 8.72e-23 | 9.22e-23 | 1.02e-22 | 7.883(30)e-23 | | 0.86 | 1000 | MLP - | 0.86 | 6.74e-23 | 7.08e-23 | 7.31e-23 | 8.88e-23 | 9.35e-23 | 1.03e-22 | 7.960(51)e-23 | | 0.91 | 2500 | MLP L | 0.76 | 6.91e-23 | 7.18e-23 | 7.38e-23 | 8.9e-23 | 9.3e-23 | 1.03e-22 | 7.99(16)e-23 | | 0.91 | 15000 | MLP L | 0.76 | 6.9e-23 | 7.18e-23 | 7.38e-23 | 8.87e-23 | 9.34e-23 | 1.04e-22 | 
8.004(11)e-23 | | 0.98 | 2500 | MLP SL | 0.76 | 6.93e-23 | 7.2e-23 | 7.42e-23 | 8.97e-23 | 9.47e-23 | 1.06e-22 | 8.085(29)e-23 | | 0.95 | 2500 | MLP S | 0.78 | 6.91e-23 | 7.22e-23 | 7.43e-23 | 9.08e-23 | 9.53e-23 | 1.07e-22 | 8.113(36)e-23 | | 0.95 | 2500 | MLP SL | 0.73 | 6.99e-23 | 7.29e-23 | 7.49e-23 | 9e-23 | 9.46e-23 | 1.05e-22 | 8.125(31)e-23 | | 0.98 | 2500 | MLP S | 0.8 | 6.82e-23 | 7.16e-23 | 7.42e-23 | 9.02e-23 | 9.46e-23 | 1.06e-22 | 8.131(32)e-23 | | 0.86 | 2500 | MLP L | 0.72 | 7.03e-23 | 7.32e-23 | 7.54e-23 | 9.09e-23 | 9.58e-23 | 1.06e-22 | 8.156(30)e-23 | | 0.86 | 15000 | MLP L | 0.72 | 7.03e-23 | 7.32e-23 | 7.54e-23 | 9.06e-23 | 9.51e-23 | 1.06e-22 | 8.183(13)e-23 | | 0.91 | 2500 | MLP S | 0.74 | 7.03e-23 | 7.33e-23 | 7.54e-23 | 9.12e-23 | 9.63e-23 | 1.07e-22 | 8.22(19)e-23 | | 0.9 | 2500 | LnL L | 0.75 | 6.96e-23 | 7.28e-23 | 7.49e-23 | 9.13e-23 | 9.61e-23 | 1.06e-22 | 8.217(37)e-23 | | 0.91 | 2500 | MLP SL | 0.7 | 7.1e-23 | 7.42e-23 | 7.62e-23 | 9.17e-23 | 9.64e-23 | 1.08e-22 | 8.287(33)e-23 | | 0.86 | 2500 | MLP S | 0.7 | 7.19e-23 | 7.5e-23 | 7.72e-23 | 9.27e-23 | 9.71e-23 | 1.08e-22 | 8.401(29)e-23 | | 0.9 | 2500 | LnL SL | 0.69 | 7.21e-23 | 7.52e-23 | 7.74e-23 | 9.38e-23 | 9.89e-23 | 1.11e-22 | 8.427(34)e-23 | | 0.86 | 2500 | MLP SL | 0.66 | 7.32e-23 | 7.6e-23 | 7.79e-23 | 9.38e-23 | 9.76e-23 | 1.08e-22 | 8.459(35)e-23 | | 0.8 | 2500 | LnL L | 0.67 | 7.3e-23 | 7.6e-23 | 7.83e-23 | 9.4e-23 | 9.91e-23 | 1.09e-22 | 8.499(32)e-23 | | 0.9 | 2500 | LnL - | 0.9 | 6.91e-23 | 7.43e-23 | 7.73e-23 | 9.57e-23 | 1.01e-22 | 1.12e-22 | 8.579(37)e-23 | | 0.8 | 2500 | LnL - | 0.8 | 7.13e-23 | 7.59e-23 | 7.88e-23 | 9.79e-23 | 1.03e-22 | 1.15e-22 | 8.738(39)e-23 | | 0.8 | 2500 | LnL SL | 0.62 | 7.52e-23 | 7.82e-23 | 8.03e-23 | 9.68e-23 | 1.02e-22 | 1.13e-22 | 8.747(41)e-23 | | 0.7 | 2500 | LnL L | 0.59 | 7.72e-23 | 8.02e-23 | 8.21e-23 | 9.86e-23 | 1.04e-22 | 1.16e-22 | 8.930(40)e-23 | | 0.7 | 2500 | LnL - | 0.7 | 7.4e-23 | 7.87e-23 | 8.23e-23 | 1.01e-22 | 
1.07e-22 | 1.19e-22 | 9.086(33)e-23 | | 0.7 | 2500 | LnL SL | 0.54 | 8.01e-23 | 8.28e-23 | 8.51e-23 | 1.02e-22 | 1.08e-22 | 1.2e-22 | 9.257(35)e-23 | \normalsize \footnotesize #+CAPTION: Table of the different percentiles for the single axion-photon expected limit. #+CAPTION: All values in units of $\si{GeV⁻¹}$. #+NAME: tab:appendix:expected_limits_percentiles_axion_photon #+ATTR_LATEX: :booktabs t :environment longtable | ε_eff | nmc | Type | ε_total | P_5 | P_16 | P_25 | P_75 | P_84 | P_95 | Expected | |------+-------+-------+--------+----------+---------+----------+----------+----------+----------+----------------| | 0.95 | 10000 | MLP L | 0.8 | 8.24e-11 | 8.5e-11 | 8.66e-11 | 9.56e-11 | 9.83e-11 | 1.04e-10 | 9.0650(75)e-11 | \normalsize \footnotesize #+CAPTION: Table of the different percentiles for the single chameleon expected limit. #+NAME: tab:appendix:expected_limits_percentiles_chameleon #+ATTR_LATEX: :booktabs t :environment longtable | ε_eff | nmc | Type | ε_total | P_5 | P_16 | P_25 | P_75 | P_84 | P_95 | Expected | |------+-------+-------+--------+----------+----------+----------+----------+----------+----------+----------------| | 0.95 | 10000 | MLP L | 0.8 | 3.22e+10 | 3.35e+10 | 3.43e+10 | 3.82e+10 | 3.93e+10 | 4.16e+10 | 3.6060(39)e+10 | \normalsize *** Generate the expected limit table with percentiles :extended: :PROPERTIES: :CUSTOM_ID: sec:appendix:limit_additional:generate_expected_limit_percentiles :END: Following sec. 
[[#sec:limit:gen_expected_limit_table]], #+begin_src sh :dir ~/CastData/ExternCode/TimepixAnalysis/Tools/generateExpectedLimitsTable/ :results drawer ./generateExpectedLimitsTable --path ~/org/resources/lhood_limits_21_11_23/ --prefix "mc_limit_lkMCMC" --precision 2 #+end_src #+RESULTS: :results: | ε_eff | nmc | Type | ε_total | P_5 | P_16 | P_25 | P_75 | P_84 | P_95 | Expected | |----|----|----|----|----|----|----|----|----|----|----| |0.98|1000|MLP -|0.98|6.44e-23|6.82e-23|7.09e-23|8.65e-23|9.09e-23|1.03e-22|7.805(37)e-23| |0.91|1000|MLP -|0.91|6.59e-23|6.96e-23|7.21e-23|8.75e-23|9.25e-23|1.03e-22|7.856(43)e-23| |0.95|1000|MLP -|0.95|6.53e-23|6.87e-23|7.14e-23|8.74e-23|9.18e-23|1.02e-22|7.860(51)e-23| |0.95|2500|MLP L|0.8|6.77e-23|7.07e-23|7.26e-23|8.72e-23|9.17e-23|1.03e-22|7.862(29)e-23| |0.98|15000|MLP L|0.82|6.7e-23|7e-23|7.2e-23|8.72e-23|9.2e-23|1.02e-22|7.868(11)e-23| |0.95|50000|MLP L|0.8|6.75e-23|7.04e-23|7.25e-23|8.72e-23|9.18e-23|1.02e-22|7.8782(65)e-23| |0.95|15000|MLP L|0.8|6.75e-23|7.04e-23|7.24e-23|8.72e-23|9.16e-23|1.03e-22|7.879(12)e-23| |0.98|2500|MLP L|0.82|6.73e-23|7.01e-23|7.19e-23|8.72e-23|9.22e-23|1.02e-22|7.883(30)e-23| |0.86|1000|MLP -|0.86|6.74e-23|7.08e-23|7.31e-23|8.88e-23|9.35e-23|1.03e-22|7.960(51)e-23| |0.91|2500|MLP L|0.76|6.91e-23|7.18e-23|7.38e-23|8.9e-23|9.3e-23|1.03e-22|7.99(16)e-23| |0.91|15000|MLP L|0.76|6.9e-23|7.18e-23|7.38e-23|8.87e-23|9.34e-23|1.04e-22|8.004(11)e-23| |0.98|2500|MLP SL|0.76|6.93e-23|7.2e-23|7.42e-23|8.97e-23|9.47e-23|1.06e-22|8.085(29)e-23| |0.95|2500|MLP S|0.78|6.91e-23|7.22e-23|7.43e-23|9.08e-23|9.53e-23|1.07e-22|8.113(36)e-23| |0.95|2500|MLP SL|0.73|6.99e-23|7.29e-23|7.49e-23|9e-23|9.46e-23|1.05e-22|8.125(31)e-23| |0.98|2500|MLP S|0.8|6.82e-23|7.16e-23|7.42e-23|9.02e-23|9.46e-23|1.06e-22|8.131(32)e-23| |0.86|2500|MLP L|0.72|7.03e-23|7.32e-23|7.54e-23|9.09e-23|9.58e-23|1.06e-22|8.156(30)e-23| |0.86|15000|MLP L|0.72|7.03e-23|7.32e-23|7.54e-23|9.06e-23|9.51e-23|1.06e-22|8.183(13)e-23| |0.91|2500|MLP 
S|0.74|7.03e-23|7.33e-23|7.54e-23|9.12e-23|9.63e-23|1.07e-22|8.22(19)e-23| |0.9|2500|LnL L|0.75|6.96e-23|7.28e-23|7.49e-23|9.13e-23|9.61e-23|1.06e-22|8.217(37)e-23| |0.91|2500|MLP SL|0.7|7.1e-23|7.42e-23|7.62e-23|9.17e-23|9.64e-23|1.08e-22|8.287(33)e-23| |0.86|2500|MLP S|0.7|7.19e-23|7.5e-23|7.72e-23|9.27e-23|9.71e-23|1.08e-22|8.401(29)e-23| |0.9|2500|LnL SL|0.69|7.21e-23|7.52e-23|7.74e-23|9.38e-23|9.89e-23|1.11e-22|8.427(34)e-23| |0.86|2500|MLP SL|0.66|7.32e-23|7.6e-23|7.79e-23|9.38e-23|9.76e-23|1.08e-22|8.459(35)e-23| |0.8|2500|LnL L|0.67|7.3e-23|7.6e-23|7.83e-23|9.4e-23|9.91e-23|1.09e-22|8.499(32)e-23| |0.9|2500|LnL -|0.9|6.91e-23|7.43e-23|7.73e-23|9.57e-23|1.01e-22|1.12e-22|8.579(37)e-23| |0.8|2500|LnL -|0.8|7.13e-23|7.59e-23|7.88e-23|9.79e-23|1.03e-22|1.15e-22|8.738(39)e-23| |0.8|2500|LnL SL|0.62|7.52e-23|7.82e-23|8.03e-23|9.68e-23|1.02e-22|1.13e-22|8.747(41)e-23| |0.7|2500|LnL L|0.59|7.72e-23|8.02e-23|8.21e-23|9.86e-23|1.04e-22|1.16e-22|8.930(40)e-23| |0.7|2500|LnL -|0.7|7.4e-23|7.87e-23|8.23e-23|1.01e-22|1.07e-22|1.19e-22|9.086(33)e-23| |0.7|2500|LnL SL|0.54|8.01e-23|8.28e-23|8.51e-23|1.02e-22|1.08e-22|1.2e-22|9.257(35)e-23| :end: **** Axion-photon: #+begin_src sh :dir ~/CastData/ExternCode/TimepixAnalysis/Tools/generateExpectedLimitsTable/ :results drawer ./generateExpectedLimitsTable --path ~/org/resources/lhood_limits_axion_photon_11_01_24// --prefix "mc_limit_lkMCMC" --precision 2 --coupling ck_g_aγ⁴ #+end_src #+RESULTS: :results: File: mc_limit_lkMCMC_skInterpBackground_nmc_10000_uncertainty_ukUncertain_σs_0.0276_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 1.451446348780004e-40 | ε_eff | nmc | Type | Scinti | FADC | ε_FADC | Septem | Line | eccLineCut | ε_Septem | ε_Line | ε_SeptemLine | ε_total | Limit no signal [GeV⁻¹] | Expected limit [GeV⁻¹] | Exp. limit variance [GeV⁻²] | Exp. 
limit σ [GeV⁻¹] | P_5 | P_16 | P_25 | P_75 | P_84 | P_95 | |----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----| |0.95|10000|MLP|true|true|0.98|false|true|1|1|0.85|1|0.8|7.84e-11|9.06e-11|5.57e-27|7.46e-14|8.24e-11|8.5e-11|8.66e-11|9.56e-11|9.83e-11|1.04e-10| | ε_eff | nmc | Type | ε_total | P_5 | P_16 | P_25 | P_75 | P_84 | P_95 | Expected | |----|----|----|----|----|----|----|----|----|----|----| |0.95|10000|MLP L|0.8|8.24e-11|8.5e-11|8.66e-11|9.56e-11|9.83e-11|1.04e-10|9.0650(75)e-11| :end: **** Chameleon #+begin_src sh :dir ~/CastData/ExternCode/TimepixAnalysis/Tools/generateExpectedLimitsTable/ :results drawer ./generateExpectedLimitsTable --path ~/org/resources/lhood_limits_chameleon_12_01_24/ --prefix "mc_limit_lkMCMC" --precision 2 --coupling ck_β⁴ #+end_src #+RESULTS: :results: File: mc_limit_lkMCMC_skInterpBackground_nmc_10000_uncertainty_ukUncertain_σs_0.0276_σb_0.0028_posUncertain_puUncertain_σp_0.0500.h5 Standard deviation of existing limits: 2.481905781358339e+42 | ε_eff | nmc | Type | Scinti | FADC | ε_FADC | Septem | Line | eccLineCut | ε_Septem | ε_Line | ε_SeptemLine | ε_total | Limit no signal | Expected limit | Exp. limit variance | Exp. limit σ | P_5 | P_16 | P_25 | P_75 | P_84 | P_95 | |----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----| |0.95|10000|MLP|true|true|0.98|false|true|1|1|0.85|1|0.8|2.61e+10|3.61e+10|1.5e+15|3.88e+07|3.22e+10|3.35e+10|3.43e+10|3.82e+10|3.93e+10|4.16e+10| | ε_eff | nmc | Type | ε_total | P_5 | P_16 | P_25 | P_75 | P_84 | P_95 | Expected | |----|----|----|----|----|----|----|----|----|----|----| |0.95|10000|MLP L|0.8|3.22e+10|3.35e+10|3.43e+10|3.82e+10|3.93e+10|4.16e+10|3.6060(39)e+10| :end: ** Observed limit - axion photon $g_{aγ}$ :PROPERTIES: :CUSTOM_ID: sec:appendix:limit_additional:axion_photon :END: Fig. 
[[fig:appendix:posterior_likelihood_axion_photon]] shows the sampled coupling constants in $g⁴_{aγ}$ from the calculation of the observed limit, i.e. the marginal posterior likelihood function of the real candidates for the axion-photon coupling.

#+CAPTION: Marginal posterior likelihood function of the real candidates for the axion-photon
#+CAPTION: coupling constant in $g⁴_{aγ}$ space. The yellow line is a numerical integration
#+CAPTION: of the likelihood function using Romberg's method [[cite:&romberg_integration]].
#+CAPTION: Limit at $g⁴ \approx \SI{6.56e-41}{GeV⁻⁴} ⇒ g \approx \SI{9e-11}{GeV⁻¹}$.
#+NAME: fig:appendix:posterior_likelihood_axion_photon
[[~/phd/Figs/trackingCandidates/axionPhoton/mcmc_real_limit_likelihood_ck_g_aγ⁴.pdf]]

** Observed limit - chameleon $β_γ$
:PROPERTIES:
:CUSTOM_ID: sec:appendix:limit_additional:chameleon
:END:
Fig. [[fig:appendix:posterior_likelihood_chameleon]] shows the sampled coupling constants in $β⁴_γ$ from the calculation of the observed limit, i.e. the marginal posterior likelihood function of the real candidates for the chameleon coupling.

#+CAPTION: Marginal posterior likelihood function of the real candidates for the chameleon
#+CAPTION: coupling constant in $β⁴_γ$ space. The yellow line is a numerical integration
#+CAPTION: of the likelihood function using Romberg's method [[cite:&romberg_integration]].
#+CAPTION: Limit at $β⁴ \approx \num{9.2e41} ⇒ β \approx \num{3.1e10}$.
#+NAME: fig:appendix:posterior_likelihood_chameleon
[[~/phd/Figs/trackingCandidates/chameleon/mcmc_real_limit_likelihood_ck_β⁴.pdf]]

** TODOs for this section [/] :noexport:
- [X] *INSERT FIGS OF THE SOLAR TRACKING CANDIDATES*
  -> the distribution over the chip w/ axion image
  -> rate background vs candidates
  -> s/b plot
  -> These are all already in the main body now!
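The Romberg integration referenced in the figure captions above is conceptually simple: trapezoid estimates on successively halved step sizes, accelerated by Richardson extrapolation. A self-contained sketch (illustrative only, not the thesis implementation):

```python
import math

def romberg(f, a, b, max_k=12, tol=1e-10):
    """Romberg integration: trapezoid rule on halved step sizes,
    refined by Richardson extrapolation."""
    R = [[0.5 * (b - a) * (f(a) + f(b))]]  # single-trapezoid estimate
    for k in range(1, max_k):
        h = (b - a) / 2**k
        # refine the previous trapezoid sum with the newly added midpoints
        mids = sum(f(a + (2 * i - 1) * h) for i in range(1, 2**(k - 1) + 1))
        row = [0.5 * R[k - 1][0] + h * mids]
        # Richardson extrapolation along the current row
        for j in range(1, k + 1):
            row.append(row[j - 1] + (row[j - 1] - R[k - 1][j - 1]) / (4**j - 1))
        R.append(row)
        if abs(R[k][k] - R[k - 1][k - 1]) < tol:
            return R[k][k]
    return R[-1][-1]

# e.g. normalizing a Gaussian-shaped marginal posterior over a finite window
norm = romberg(lambda x: math.exp(-x * x), -5.0, 5.0)  # ~ sqrt(pi)
```

The diagonal of the Romberg tableau converges very quickly for smooth integrands such as the posterior likelihood shown above.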
* Software :Software:Appendix:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:software
:END:
#+LATEX: \minitoc
In this appendix we go over the software developed and used in the course of this thesis in a bit more detail than in the main text (beginning of chapter [[#sec:reconstruction]]). The focus is on the technical and usability side rather than on the physics application. Read this appendix if you
- are simply interested,
- intend to reproduce the analysis or use (parts of) the software for your own data,
- wish to further process data produced by these tools with your own software.

** TODOs for this section [/] :noexport:
Essentially we aim to move the chapter [[Software]] here and extend it to:
- give an overview of all tools we use in this thesis behind the scenes (meaning what they do, what their CLI looks like and how to use them in the context of generating the limit starting from raw data)
Introduce the software used for the analysis. Previous code used MarlinTPC (an already extended framework for use with gaseous detectors, focused on strips). Extending it further to use our new detector features would have been beyond the scope of the framework.
- [ ] *POSSIBLY MAKE THIS A LONG APPENDIX* Then we can just refer to that appendix and at the same time don't have to worry about keeping it very focused on things that "should be" in a thesis.
- [ ] *IN APPENDIX ABOUT THIS, EXPLAIN CONFIG FILE OF TPA!*

** Why did I start writing my own analysis framework? :extended:
Some of you may wonder why I wrote all this code in order to analyze my data. It is a huge undertaking with not much upside in the context of finishing my PhD, after all. The reason boils down to two main points:
1. Christoph [[cite:&krieger2018search]] used the software framework [[https://ilcsoft.desy.de/portal/software_packages/marlintpc/][MarlinTPC]] (see also [fn:marlin_paper]).
   It is an extension to the [[https://github.com/iLCSoft/Marlin]['Marlin']] framework, intended for the application of TPCs in the context of the International Linear Collider (ILC). I had multiple issues with this:
   - It is mostly intended for TPCs. While a GridPix is a TPC of sorts, most regular TPCs use strips or pads as a readout and not pixels. This made introducing the GridPix inefficient.
   - Christoph's existing code was written for a single GridPix. The addition of the Septemboard and, more importantly, the detector vetoes would have meant a significant time investment in any case. From a first glance it did not seem like MarlinTPC would make for an easy platform to introduce the required features.
   - It is a framework based on a base program, which is controlled by XML files called 'steering files'. These describe the processes to be run in order. Instead of building small programs that do one thing, the main program is large and decides at runtime which processes to run. All this is built in a very heavy object-oriented manner, resulting in massive amounts of boilerplate code for each new feature. In addition, assembling processes in such a way results in generally low performance.
   - It uses CERN's ROOT internally, of which I am not a fan, especially for projects that are not LHC scale.
2. I was a bit naive in some respects. Less so in terms of underestimating the amount of code I would need to write to reproduce the basic reconstruction of Christoph's code. Yes, that also was quite some work, but it was manageable. Firstly, I was naive enough to think people would "accept" my results easily in case there were discrepancies between my results and Christoph's (of which there were many). As it turns out, two competing pieces of code often don't produce exactly the same results. Understanding and minimizing the discrepancies was a serious hindrance. Some boiled down to bugs in my own code, others to bugs in MarlinTPC.
   Secondly, by choosing Nim as the target language I underestimated the amount of code I would have to write completely independently of the actual ~TimepixAnalysis~ code base -- things like a plotting library, a dataframe library etc. Initially I thought I would either not need these or simply use Python for additional processing, but the joy of building things and my urge for "purity" (few external dependencies) led me down the path of replacing more and more of my dependencies with my own code. It was a lot of work, of course. But while it certainly delayed the end of my thesis by a significant amount of time, I learned /way/ more than I would have otherwise.

On another note: having looked into MarlinTPC again now (<2024-01-04 Thu 16:38>), the choice was /very much/ the right one. Development seemingly has halted or at the very least is not public. This page https://znwiki3.ifh.de/MarlinTPC/ mentions a migration to DESY's GitLab here: https://gitlab.desy.de/users/sign_in. Let me be blunt: screw non-public code access!

Second note: [[https://github.com/rest-for-physics/framework][REST]] was not known to me at the time I started development. But more importantly, it pretty much follows in the footsteps of MarlinTPC (in terms of XML steering files plus a base ~rest-manager~ program, being based on CERN's ROOT, a heavy OOP paradigm etc.). We'll see how REST holds up in 10 years' time (maybe it blooms in the context of BabyIAXO!). At the very least I'm pretty confident I'll be able to get this code up and running with little work at that point.

[fn:marlin_paper] https://arxiv.org/abs/0709.0790

** Nim :extended:
As already briefly mentioned in the main body, Nim is a -- relatively -- young programming language that offers C-like performance combined with Lisp-like metaprogramming, a Python-like, whitespace-sensitive syntax with few operators, and an Ada-like type system. Combined, these provide the perfect base for a single developer to be productive and build fast, safe software.
In other words: the language gets out of my way and lets me build stuff that is fast and should /in theory/ be understandable to people coming after me.

** TimepixAnalysis
:PROPERTIES:
:CUSTOM_ID: sec:appendix:timepix_analysis
:END:
Introduced in the main part, in sec. [[#sec:reco:tpa]], ~TimepixAnalysis~ [[cite:&TPA]] is the name of the repository containing a large collection of different programs for the data reconstruction and analysis of Timepix-based detectors.

Generally, the ~README~ in the repository gives an overview of all the relevant programs, installation instructions and more. For further details, check there or simply open an issue in the repository [[cite:&TPA]]. Here we will go over the main programs required to handle the Septemboard data taken at CAST, so that in appendix [[#sec:appendix:full_data_reconstruction]] we can present the commands for the entire CAST data reconstruction.

#+begin_quote
Note: in the PDF and HTML versions of the thesis I provide some links to different parts of the repository. These generally point to Github, because the main public repository of ~TimepixAnalysis~ is found there. This is mostly out of convenience though. It should be straightforward to map them to the paths inside your local copy of the repository, which would however be trickier to link to.
#+end_quote

*** TODOs for this section [/] :noexport:
Framework written for data analysis. Rewrites Timepix / InGrid related code from MarlinTPC in Nim and extends it (e.g. supports Timepix3).

[[https://github.com/Vindaar/TimepixAnalysis]]

After the thesis is published it is possible that this repository will become the de facto repository for the thesis and the actual analysis code will become its own repository. We'll see.

*THESE SECTIONS MUST BE MERGED WITH THE ANALYSIS BELOW*. Maybe let this chapter simply be a high level overview: introduce Nim and the why.
Mention the TimepixAnalysis repo only as a "this is the code for the analysis", the details of which will be explained in the next section? In that case one might merge the whole chapter with the next one and simply have these as the introductory part of the chapter.

*** Common points between all TimepixAnalysis programs
All programs in the TimepixAnalysis repository are command-line only. While it would be quite doable to merge the different programs into a single graphical user interface (GUI), I'm personally not much of a GUI person. Each program usually has a large number of (optional) parameters. Keeping a GUI up to date with (in the past, quickly) changing features is just extra work, which I personally did not have any use for (if someone wishes to write a GUI for TimepixAnalysis, I'd be more than happy to mentor though).

Every program uses ~cligen~ [fn:cligen], a command line interface generator. Based on the definition of the main procedure(s) in the program, a command line interface is generated. While ~cligen~ provides extremely simplified command line argument parsing for the developer, it also gives a nice ~help~ screen for every program. For example, running the first program of the analysis pipeline, ~raw_data_manipulation~, with the ~-h~ or ~--help~ option:
#+begin_src sh
raw_data_manipulation -h
#+end_src
yields the help screen as shown in listing [[list:raw_data_manipulation_help]] [fn:colors]. Keep this in mind if you are unsure about how to use any of the programs mentioned here.

Further, there is a TOML configuration file in the repository (~Analysis/ingrid/config.toml~ from the repository root), which controls many aspects of the different programs. Most of these settings can be overwritten by command line arguments to the appropriate programs, and some also via environment variables. See the extended thesis for information about this; it is mentioned where important.
#+CAPTION: Example help output of the ~raw_data_manipulation~ program when run with ~-h~ or ~--help~ #+CAPTION: (some options were removed here to shorten the output). #+NAME: list:raw_data_manipulation_help #+begin_src sh Usage: main [REQUIRED,optional-params] Version: 44c0c91 built on: 2023-12-06 at 13:01:35 Options: -h, --help print this cligen-erated help --help-syntax advanced: prepend,plurals,.. -p=, --path= string REQUIRED set path -r=, --runType= RunTypeKind REQUIRED Select run type (Calib | Back | Xray) The following are parsed case insensetive: Calib = {"calib", "calibration", "c"} Back = {"back", "background", "b"} Xray = {"xray", "xrayfinger", "x"} -o=, --out= string "" Filename of output file. If none given will be set to run_file.h5. -n, --nofadc bool false Do not read FADC files. -i, --ignoreRunList bool false If set ignores the run list 2014/15 to indicate using any rfOldTos run -c=, --config= string "" Path to the configuration file to use. Default is config.toml in directory of this source file. ... -t, --tpx3 bool false Convert data from a Timepix3 H5 file to TPA format instead of a Tpx1 run directory ... #+end_src [fn:cligen] [[https://github.com/c-blake/cligen]] [fn:colors] You can have a ~$HOME/.config/cligen/config~ configuration file to adjust the output style (color, column widths, drop entire columns etc.). *** Dependencies TimepixAnalysis mainly has a single noteworthy external dependency, namely the HDF5 [[cite:&hdf5]] library. The vast majority of code (inside the repository itself and its dependencies) is pure Nim. Two optimization libraries written in C ~mpfit~ (Levenberg-Marquardt) [fn:mpfit] and NLopt [fn:nlopt] are wrapped from Nim and are further minor dependencies. Local compilation and installation of these is trivial and explained in the TimepixAnalysis README. 
For those programs related to the multilayer perceptron (MLP) training or usage, PyTorch [[cite:&Paszke_PyTorch_An_Imperative_2019]] is an additional dependency via [[cite:&flambeau]]. Flambeau installs a suitable PyTorch version for you. Other common dependencies are the ~cairo~ graphics library [fn:cairo] and a working BLAS and LAPACK installation.

[fn:cairo] https://cairographics.org/download/
[fn:mpfit] https://pages.physics.wisc.edu/~craigm/idl/cmpfit.html
[fn:nlopt] https://nlopt.readthedocs.io/en/latest/

*** Compilation
Nim being a compiled language means we need to compile the programs mentioned below. Nim can target a C or C++ backend (among others). The compilation commands differ slightly between the different programs and can depend on usage. The ~likelihood~ program below, for example, can be compiled for the C or C++ backend. In the latter case, the MLP as a classifier is compiled in. Generally, compilation is done via:
#+begin_src sh
nim c -d:release foo.nim
#+end_src
where ~foo.nim~ is any of the programs below. ~-d:release~ tells the Nim compiler to compile with optimizations (you can compile with ~-d:danger~ for even faster, but less safe, code). Replace ~c~ by ~cpp~ to compile for the C++ backend. See the TimepixAnalysis README for further details on how to compile each program.

Unless otherwise specified, each program mentioned below is located in ~Analysis/ingrid~ from the root of the TimepixAnalysis repository.

*** =raw_data_manipulation=
~raw_data_manipulation~ is the first step of the analysis pipeline. Essentially, it parses the data generated by TOS (see section [[#sec:daq:tos_output_format]] for an explanation of the format) and stores it in a compressed HDF5 [[cite:&hdf5]] data file.

The program is fed a directory containing a TOS run via the ~-p / --path~ argument. This is either a directory containing a single run (i.e.
a data taking period typically ranging from minutes to days in length), or a directory that itself contains multiple TOS run directories. Runs compressed as gzipped tarballs (~.tar.gz~) are also supported. All data files contained in a run directory are then parsed in a multithreaded way. The files are memory mapped and parsed in parallel into a =Run= data structure, which itself contains =Event= structures. If FADC files are present in a directory, these are also parsed into =FadcEvent= structures in a similar fashion, unless explicitly disabled via the ~--nofadc~ option.

Each run is then written into the output HDF5 file as a 'group' (HDF5 terminology). The metadata about each run and event are stored as 'attributes' and additional 'datasets', respectively. The structure of the produced HDF5 file is shown in sec. [[#sec:appendix:tos:raw_data_layout]].

The tool also supports input from HDF5 files containing the raw data from a Timepix3 detector. That data is parsed and reprocessed into the same kind of file structure.

**** HDF5 data layout generated by ~raw_data_manipulation~
:PROPERTIES:
:CUSTOM_ID: sec:appendix:tos:raw_data_layout
:END:
Listing [[code:reco:abstract_hdf5_layout]] shows the layout of the data stored in the HDF5 files after the ~raw_data_manipulation~ program has processed the TOS run folders. The data is structured in groups based on each run, chip and the FADC (if available). Generally, each "property" is stored in its own dataset for performance reasons, to allow faster access to individual subsets of the data (read only the hits, only the $x/y$ data, etc.). While HDF5 even supports heterogeneous compound datasets (that is, different data types in different "columns" of a 2D-like dataset), these are only used sparingly and not at all in the ~raw_data_manipulation~ output, as reading individual columns from them is inefficient.

#+CAPTION: An abstract overview of the general layout of the generated HDF5 files.
#+CAPTION: Each entry shown that has any children is an HDF5 group. Every
#+CAPTION: leaf node is an HDF5 dataset. The data is ordered by availability. Each
#+CAPTION: run is a separate group. Within each run, all chips have their own groups with data
#+CAPTION: associated to that chip. The common datasets are those that contain data
#+CAPTION: from the TOS event header. FADC data is stored as the raw memory dumps from the
#+CAPTION: FADC files (of which fewer exist than regular TOS data files).
#+NAME: code:reco:abstract_hdf5_layout
#+begin_src toml
- runs
  - run_<number>
    - chip_0            # one for each chip in the event
      - Hits            # number of hits in each event
      - Occupancy       # a 2D occupancy map of this run
      - ToT             # all ToT values of this run
      - raw_ch          # the ToT/ToA values recorded for each event (ragged data)
      - raw_x           # the x coordinates recorded for each event (ragged data)
      - raw_y           # the y coordinates recorded for each event (ragged data)
    - chip_i            # all other chips
      - ...
    - fadc              # if available
      - eventNumber     # event number of each entry
                        # (not all events have FADC data)
      - raw_fadc        # raw FADC data (uncorrected, all 10240 registers)
      - trigger_record  # temporal correction factor for each event
    - fadcReadout       # flag if FADC was readout in each event
    - fadcTriggerClock  # clock cycle FADC triggered
    - scintillator trigger clocks # datasets for each scintillator
    - timestamp         # timestamp of each event
  - run_i               # all other runs
    - ...
#+end_src

***** Notes on data layout :extended:
Of course we could generate the layout from real data files, either using ~h5ls~ or using ~nimhdf5~, most conveniently via its iterators and/or its JSON interface.

*** =reconstruction=
:PROPERTIES:
:CUSTOM_ID: sec:appendix:tpa:reconstruction
:END:
After the raw data has been converted to HDF5 storage, the =reconstruction= tool is used to start the actual analysis of the data. The program receives an input HDF5 file via the ~-i / --input~ argument.
As the name implies, the first stage of the data analysis consists of reconstructing the basic properties of each event. In this stage all events are processed in a multithreaded way. The steps for cluster finding and [[https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/private/geometry.nim#L308-L366][geometric]] [[https://github.com/Vindaar/TimepixAnalysis/blob/master/Analysis/ingrid/private/geometry.nim#L517-L569][cluster]] reconstruction (as mentioned in sec. [[#sec:reco:data_reconstruction]]) are performed and the data is written to the desired output file given by ~-o / --outfile~.

The produced output HDF5 file then also acts as the /input/ file for ~reconstruction~ for all further, optional reconstruction steps. These are mentioned at various points in the thesis, but we briefly summarize them here.
- ~--only_fadc~ :: Performs the reconstruction of the FADC data to calculate FADC values such as rise and fall times.
- ~--only_fe_spec~ :: If the input file contains \cefe calibration runs, creates the \cefe spectra and performs fits to them. Also performs the energy calibration for each run.
- ~--only_charge~ :: Performs the ~ToT~ calibration of all runs to compute the detected charges in electrons. Requires each chip to be present in the InGrid database (see sec. [[#sec:appendix:software:ingrid_database]]).
- ~--only_gas_gain~ :: Computes the gas gain in the desired interval lengths via Pólya fits.
- ~--only_gain_fit~ :: If the input file contains \cefe calibration runs, performs the fit of the energy calibration factors of those runs against the gas gain of each interval. Required to perform the energy calibration of background runs.
- ~--only_energy_from_e~ :: Performs the energy calibration for each cluster in the input file.

**** TODOs for this section [/] :noexport:
*NOTE:* How should we take care of linking to our code? Of course need tagged version that corresponds to stuff in the thesis, but beyond that?
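The first-pass cluster finding mentioned above can be pictured as grouping all pixels that are transitively within a search radius of each other. The following is a simplified pure-Python sketch (Chebyshev metric, O(n²)); the actual algorithm in ~geometry.nim~ differs in its details and optimizations:

```python
from collections import deque

def find_clusters(pixels, search_radius=50):
    """Group (x, y) pixels into clusters: pixels closer than
    `search_radius` (Chebyshev distance) end up in the same cluster."""
    unassigned = set(range(len(pixels)))
    clusters = []
    while unassigned:
        seed = unassigned.pop()
        queue, members = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            xi, yi = pixels[i]
            # all still unassigned pixels within the search radius of pixel i
            near = [j for j in unassigned
                    if max(abs(pixels[j][0] - xi), abs(pixels[j][1] - yi)) <= search_radius]
            for j in near:
                unassigned.remove(j)
                queue.append(j)
                members.append(j)
        clusters.append([pixels[m] for m in members])
    return clusters

# two well separated blobs on a chip -> two clusters
hits = [(10, 10), (12, 11), (11, 13), (200, 200), (202, 199)]
clusters = find_clusters(hits, search_radius=5)
```

The ~--clusterAlgo~ and ~-s / --searchRadius~ options shown in the help listing select the algorithm and this radius.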
A cluster finding algorithm is applied to each event on each chip separately, splitting a single event into possibly multiple clusters. Clusters are defined based on a certain notion of distance (the details depend on the clustering algorithm used). The multiple clusters from a single event are then treated fully equally for the rest of the analysis. The fact that they originate from the same event has no further relevance (with a slight exception for one veto technique, which utilizes clustering over multiple chips; more on that in section [[sec:septem_veto]]).

For each individual cluster, geometric properties are computed. These are the long and short axes, the eccentricity, as well as the statistical moments up to the kurtosis along the long and short axes. The full list is shown in tab. [[tab:geometric_properties]].

#+CAPTION: Table of all the (mostly) geometric properties of a single cluster computed by the
#+CAPTION: =reconstruction= tool. All but the likelihood, charge and energy properties are computed
#+CAPTION: during the first pass of the tool.
#+NAME: tab:geometric_properties
#+ATTR_LATEX: :booktabs t
|---------------------------+------------------------------------------------------------------|
| Property                  | Meaning                                                          |
|---------------------------+------------------------------------------------------------------|
| igCenterX                 | =x= position of cluster center                                   |
| igCenterY                 | =y= position of cluster center                                   |
| igHits                    | number of pixels in cluster                                      |
| igEventNumber             | event number cluster is from                                     |
| igEccentricity            | eccentricity of the cluster                                      |
| igSkewnessLongitudinal    | skewness along long axis                                         |
| igSkewnessTransverse      | skewness along short axis                                        |
| igKurtosisLongitudinal    | kurtosis along long axis                                         |
| igKurtosisTransverse      | kurtosis along short axis                                        |
| igLength                  | size along long axis                                             |
| igWidth                   | size along short axis                                            |
| igRmsLongitudinal         | RMS along long axis                                              |
| igRmsTransverse           | RMS along short axis                                             |
| igLengthDivRmsTrans       | length divided by transverse RMS                                 |
| igRotationAngle           | rotation angle of long axis over chip coordinate system          |
| igEnergyFromCharge        | energy of cluster computed from its charge                       |
| igLikelihood              | likelihood value for cluster                                     |
| igFractionInTransverseRms | fraction of pixels within radius of transverse RMS around center |
| igTotalCharge             | integrated charge of total cluster in electrons                  |
| igNumClusters             |                                                                  |
| igFractionInHalfRadius    | fraction of pixels in half radius around center                  |
| igRadiusDivRmsTrans       | radius divided by transverse RMS                                 |
| igRadius                  | radius of cluster                                                |
| igLengthDivRadius         | length divided by radius                                         |
|---------------------------+------------------------------------------------------------------|

After all geometrical properties have been computed, the next step is to apply the ~ToT~ calibration (sec. [[sec:operation_calibration:tot_calibration]]) to the ~ToT~ values of all clusters, resulting in the equivalent charge in electrons.
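Many of the properties in the table derive from the second central moments of the pixel distribution. A pure-Python sketch of this shape reconstruction (unweighted pixels; eccentricity taken here as the ratio of long to transverse RMS -- the exact definitions in ~reconstruction~ may differ in detail):

```python
import math

def cluster_shape(pixels):
    """Rotation angle of the long axis, RMS along the long/short axes and
    an eccentricity-like ratio, from the second central moments."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    mxx = sum((x - cx) ** 2 for x, _ in pixels) / n
    myy = sum((y - cy) ** 2 for _, y in pixels) / n
    mxy = sum((x - cx) * (y - cy) for x, y in pixels) / n
    theta = 0.5 * math.atan2(2 * mxy, mxx - myy)  # long-axis rotation angle
    c, s = math.cos(theta), math.sin(theta)
    # variances along the rotated axes; the long axis gets the larger one
    v1 = c * c * mxx + 2 * c * s * mxy + s * s * myy
    v2 = s * s * mxx - 2 * c * s * mxy + c * c * myy
    rms_long, rms_trans = math.sqrt(max(v1, v2)), math.sqrt(min(v1, v2))
    return theta, rms_long, rms_trans, rms_long / rms_trans

# an elongated track along x is far from round: large eccentricity
pix = [(x, y) for x in range(10) for y in (0, 1)]
theta, rms_long, rms_trans, ecc = cluster_shape(pix)
```

The higher moments (skewness, kurtosis) are computed analogously along the two rotated axes.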
The charge values for all recorded pixels are then used to compute a histogram, which roughly follows a Pólya distribution (sec. [[sec:polya_distribution]]). From the mean value of that distribution a value for the gas gain is obtained, which is a necessary input to perform the energy calibration for each cluster.

In summary, ~reconstruction~ is the second step of the analysis and performs most of the major steps:
- cluster finding
- calculation of geometric properties
- charge calibration
- gas gain computation
- energy calibration
- ...

**** Command line interface of ~reconstruction~ :extended:
#+CAPTION: Usage of the =reconstruction= tool. The input is an HDF5 file produced by
#+CAPTION: ~raw_data_manipulation~, or by a previous ~reconstruction~ call when using
#+CAPTION: the ~--only_*~ flags. The results are stored in compressed HDF5 files.
#+NAME: list:reconstruction_help
#+begin_src sh
Usage:
  main [REQUIRED,optional-params]
InGrid reconstruction and energy calibration.

NOTE: When calling reconstruction without any of the --only_ flags, the input
file has to be a H5 file resulting from raw_data_manipulation. In the other
cases the input is simply a file resulting from a prior reconstruction call!

The optional flags are given roughly in the order in which the full analysis
chain requires them to be run. If unsure on the order, check the
runAnalysisChain.nim file.

Version: 6f8ed08 built on: 2023-11-17 at 18:36:40

Options:
  -h, --help                      print this cligen-erated help
  --help-syntax                   advanced: prepend,plurals,..
  -i=, --input=    string REQUIRED set input
  -o=, --outfile=  string ""      Filename and path of output file
  -r=, --runNumber= int   none    Only work on this run
  -c, --create_fe_spec bool false Toggle to create Fe calibration spectrum based on cuts
                                  Takes precedence over --calib_energy if set!
  --only_fadc      bool   false   If this flag is set, the reconstructed FADC
                                  data is used to calculate FADC values such as
                                  rise and fall times among others, which are
                                  written to the H5 file.
--only_fe_spec bool false Toggle to /only/ create the Fe spectrum for this run and perform the fit of it. Will try to perform a charge calibration, if possible. --only_charge bool false Toggle to /only/ calculate the charge for each TOT value based on the TOT calibration. The ingridDatabase.h5 needs to be present. --only_gas_gain bool false Toggle to /only/ calculate the gas gain for the runs in the input file based on the polya fits to time slices defined by gasGainInterval. ingridDatabase.h5 needs to be present. --only_gain_fit bool false Toggle to /only/ calculate the fit mapping the energy calibration factors of the 55Fe runs to the gas gain values for each time slice. Required to calculate the energy in any run using only_energy_from_e. --only_energy_from_e bool false Toggle to /only/ calculate the energy for each cluster based on the Fe charge spectrum vs gas gain calibration --only_energy= float none Toggle to /only/ perform energy calibration using the given factor. Takes precedence over --create_fe_spec if set. If no runNumber is given, performs energy calibration on all runs in the HDF5 file. --clusterAlgo= ClusteringAlgorithm none The clustering algorithm to use. Leave at caDefault unless you know what you're doing. -s=, --searchRadius= int none The radius in pixels to use for the default clustering algorithm. -d=, --dbscanEpsilon= float none The radius in pixels to use for the DBSCAN clustering algorithm. -u=, --useTeX= bool none Whether to use TeX to produce plots instead of Cairo. --config= string "" Path to the configuration file to use. -p=, --plotOutPath= string none set plotOutPath #+end_src **** HDF5 data layout generated by ~reconstruction~ :PROPERTIES: :CUSTOM_ID: sec:appendix:tos:reco_data_layout :END: The HDF5 file generated by ~reconstruction~ follows closely the one from ~raw_data_manipulation~. 
The main difference is that the datasets in each chip group now have a different number of entries, as each entry corresponds to a single cluster and no longer to an event from the detector. In some events multiple clusters may be reconstructed on a single chip, while other events may be fully empty. This means an additional ~eventNumber~ dataset is required for each chip, which maps each cluster back to its corresponding event.

Aside from that, the other major difference is simply that each chip group contains a larger number of datasets, as each computed cluster property is stored in its own dataset. Additional datasets are also created during the data calibration (charge calibration, computation of the gas gain, etc.). Listing [[code:reco:abstract_reco_hdf5_layout]] shows the layout in a similar fashion to the equivalent for ~raw_data_manipulation~ before.

#+CAPTION: Abstract overview of the data layout of the ~reconstruction~ HDF5
#+CAPTION: output. It is essentially the same layout as the ~raw_data_manipulation~
#+CAPTION: HDF5 files, but contains more datasets due to the larger number of
#+CAPTION: properties.
#+NAME: code:reco:abstract_reco_hdf5_layout
#+begin_src toml
- reconstruction
  - run_<number>
    - chip_0              # one for each chip in the event
      - datasets for each property
      - optional datasets for calibrations
    - chip_i              # all other chips
      - ...
    - fadc                # if available
      - datasets for each FADC property
    - common datasets     # copied from `raw_data_manipulation` input
  - run_i                 # all other runs
    - ...
#+end_src

*** =cdl_spectrum_creation=
This is a helper program responsible for the treatment of the X-ray reference data taken at the CAST Detector Lab (CDL) in Feb. 2019. It receives as input an HDF5 file containing all runs taken in the CDL, fully reconstructed using ~raw_data_manipulation~ and ~reconstruction~. An additional Org table, found in ~resources/cdl_runs_2019.org~, is used as a reference to map each run to the correct target/filter kind.
The program performs the fits to the correct fluorescence lines for each run based on the target/filter kind in use. It can also produce a helper HDF5 file called ~calibration-cdl-2018.h5~ via the ~genCdlFile~ argument, which contains all CDL data split by target/filter kind. This file is used in the context of the likelihood cut method to produce the reference distributions for each cluster property used.

*** =likelihood=

The ~likelihood~ program is the (historically named) tool that applies the classifier and any selection of vetoes to an input file. The input files are a fully reconstructed background HDF5 file, corresponding calibration runs and the ~calibration-cdl-2018.h5~ file mentioned above. It has a large number of command line options to adjust the classifier that is used, the software efficiency, the vetoes, the region of the chip to cut to, whether tracking data or background data is selected, and more.

The program writes the remaining clusters (with additional meta information) to the HDF5 file given by the ~--h5out~ argument. The structure is essentially identical to that of the ~reconstruction~ tool (the data is stored in a ~likelihood~ group instead of a ~reconstruction~ group).

The selection of tracking or non-tracking data requires information about when solar trackings took place, stored as attributes inside the background HDF5 files. These are added using the ~cast_log_reader~, see sec. [[#sec:appendix:software:cast_log_reader]].

The ~likelihood~ program is also used directly to estimate the random coincidences of the septem and line vetoes, as mentioned in sec. [[#sec:background:estimate_veto_efficiency]].

*** ~determineDiffusion~ directory

The ~Analysis/ingrid/determineDiffusion~ directory contains the library / binary to empirically determine the gas diffusion parameters from input data, as explained in sec. [[#sec:background:mlp:determine_gas_diffusion]]. It can either be compiled as a standalone program or be used as a library.
*** ~nn~ directory

The ~Analysis/ingrid/nn~ subdirectory in the TimepixAnalysis repository contains the programs related to the training and evaluation of the multilayer perceptrons (MLP) used in the thesis. Of note is the ~train_ingrid~ program, which is used to train a network. It allows customizing the network to be trained via command line arguments describing the number of neurons, hidden layers, optimizers, activation functions and so forth. The extended thesis contains the command to train the best performing network.

Secondly, the ~simulate_xrays~ program is a helper program to produce an HDF5 file containing simulated X-rays as described in sec. [[#sec:background:mlp:event_generation]]. It makes use of the ~fake_event_generator.nim~ file in the ~ingrid~ directory, which contains the actual logic.

*** ~InGridDatabase~
:PROPERTIES:
:CUSTOM_ID: sec:appendix:software:ingrid_database
:END:

The InGrid database is a library in the TimepixAnalysis repository (the ~InGridDatabase~ directory), a binary tool, and the name of a very simple 'database' storing information about different GridPix chips. At its core the 'database' part is an HDF5 file containing chip calibrations (ToT, SCurve, ...) mapped to timestamps or run numbers in which these are applicable. This allows (mainly) the ~reconstruction~ program to retrieve the required calibrations automatically, without user input, based on the given input files.

To utilize it, the [[https://github.com/Vindaar/TimepixAnalysis/tree/master/InGridDatabase][~databaseTool~]] needs to be compiled as a binary. Chips are added to the database using this tool. A directory describing the applicable run period and containing calibration files for the chip needs to follow the format seen for example in:
https://github.com/Vindaar/TimepixAnalysis/tree/master/resources/ChipCalibrations/Run2
for the Run-2 period of the Septemboard.
The ~runPeriod.toml~ file describes the applicability of the data, see listing [[list:ingrid_database:run_period_config]] for the file in this case. For each chip there is simply a directory with the calibration files as produced by TOS and an additional ~chipInfo.txt~ file, see listing [[list:ingrid_database:chip_info]]. Note that the ~runPeriod~ name needs to match the name of one of the run periods listed in the TOML file. The ~databaseTool~ also allows performing fits to the calibration data, if needed (for example to analyze SCurves or the raw ToT calibration data).

#+CAPTION: Example of a ~runPeriod.toml~ file.
#+NAME: list:ingrid_database:run_period_config
#+begin_src toml
title = "Run period 2 of CAST, 2017/18"
# list of the run periods defined in the file
runPeriods = ["Run2"]

[Run2]
start = 2017-10-30
stop = 2018-04-11
# either as a sequence of run numbers
validRuns = [
     76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89,
     90, 91, 92, 93, 94, 95, 96, 97, 98, 99,100,101,102,103,
    104,105,106,107,108,109,110,111,112,113,114,115,116,117,
    118,119,120,121,122,123,124,125,126,127,128,145,146,147,
    148,149,150,151,152,153,154,155,156,157,158,159,160,161,
    162,163,164,165,166,167,168,169,170,171,172,173,174,175,
    176,177,178,179,180,181,182,183,184,185,186,187,188,189
]
# or as simply a range given as start and stop values
firstRun = 76
lastRun = 189
#+end_src

#+CAPTION: An example of a ~chipInfo.txt~ file describing a single chip for a run period.
#+NAME: list:ingrid_database:chip_info
#+begin_src sh
chipName: H10 W69
runPeriod: Run2
board: SeptemH
chipNumber: 3
Info: This calibration data is valid for Run 2 starting until March 2018!
#+end_src

**** TODOs for this section :noexport:

- explain database
- explain data structure needed to add to DB
- show how to add

Section about InGrid database (or maybe :noexport: instead of :optional: ?), which explains the idea of storing the chip calibration data in a single HDF5 file, which can then be easily accessed from the reconstruction / calibration (or :noexport: ?)
- [ ] introduce database including the data structure needed to add a detector to the database?
- [ ] *OR SHOULD THIS BE PART OF THE APPENDIX ABOUT SOFTWARE AND IN SECTION ABOVE WE MENTION AND REFER TO THAT?*

*** =cast_log_reader=
:PROPERTIES:
:CUSTOM_ID: sec:appendix:software:cast_log_reader
:END:

The [[https://github.com/Vindaar/TimepixAnalysis/tree/master/LogReader][~LogReader/cast_log_reader~]] is a utility to work with the slow control and tracking log files produced by CAST. It can parse and analyze the different log file formats used over the years and provide different information (for example magnet operation statistics). For the Septemboard detector it provides the option to parse the tracking log files and add the tracking information to the background HDF5 files.

The program first parses the log files and determines valid solar trackings. Then, given an HDF5 file containing the background data, each solar tracking is mapped to a background run. One background run may have zero or more solar trackings attached to it. In the final step the solar tracking start and stop information is added to each run as additional metadata. During the later stages of processing in other ~TimepixAnalysis~ programs, notably ~likelihood~, this metadata is then used to only consider background (non-tracking) or solar tracking data.

The CAST log files relevant for the Septemboard detector can be found together with the Septemboard CAST data.
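The core of the mapping step can be sketched as a simple interval-containment check. The following Python sketch uses entirely made-up run and tracking times (the actual tool is written in Nim and reads the real log files):

```python
from datetime import datetime

# Hypothetical background runs and solar trackings as (start, stop) pairs.
runs = {
    96: (datetime(2017, 11, 25, 8), datetime(2017, 11, 27, 20)),
    97: (datetime(2017, 11, 27, 21), datetime(2017, 11, 29, 6)),
}
trackings = [
    (datetime(2017, 11, 26, 6, 30), datetime(2017, 11, 26, 8, 0)),
    (datetime(2017, 11, 27, 6, 30), datetime(2017, 11, 27, 8, 0)),
    (datetime(2017, 11, 28, 6, 30), datetime(2017, 11, 28, 8, 0)),
]

# Assign each tracking to the run whose time interval contains it.
# One run may end up with zero or more trackings attached.
per_run = {r: [] for r in runs}
for t0, t1 in trackings:
    for run, (r0, r1) in runs.items():
        if r0 <= t0 and t1 <= r1:
            per_run[run].append((t0, t1))

print({r: len(ts) for r, ts in per_run.items()})  # {96: 2, 97: 1}
```

In the real files the tracking start/stop pairs then become attributes of the matching run group, which ~likelihood~ later reads to split tracking from non-tracking data.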
**** TODOs for this section [/] :noexport:

- [ ] *LINK LOG FILES!!!*
- [ ] *IMPORTANT* to read log files for tracking info
- [ ] Also mention some of the other features, i.e. generating times the magnet was on, point of the magnet etc.

**** Adding tracking information to HDF5 files :extended:

See sec. [[#sec:cast:log_files:add_tracking_info]] on how to use ~cast_log_reader~ to add the tracking information to the HDF5 files.

*** =mcmc_limit_calculation=

~mcmc_limit_calculation~ is the 'final' tool relevant for this thesis. As the name implies it performs the limit calculation using Markov Chain Monte Carlo (MCMC) as explained in detail in chapter [[#sec:limit]]. See the extended thesis on how to use it.

*** ~Tools~ directory

A multitude of tools for various things that were analyzed over the years. It includes things like computing the gas properties of the Septemboard gas mixture and the detection efficiency.

*** ~resources~ directory

A large number of resources, required or simply useful, about different data takings, efficiencies, log files and more, which are small enough to be part of a non-LFS git repository.

*** ~Plotting~ directory

From a practical analysis point of view, the [[https://github.com/Vindaar/TimepixAnalysis/tree/master/Plotting][~Plotting~]] directory is one of the most interesting parts of the repository. It contains different tools to visualize data at various stages of the analysis pipeline. The most relevant are mentioned briefly here.

**** ~plotBackgroundClusters~

[[https://github.com/Vindaar/TimepixAnalysis/blob/master/Plotting/plotBackgroundClusters/plotBackgroundClusters.nim][~plotBackgroundClusters~]] produces plots of the distribution of cluster centers left after application of the ~likelihood~ program. This is used to produce figures like [[fig:detector:cluster_centers_likelihood]] and fig. sref:fig:background:background_suppression_comparison.
**** ~plotBackgroundRate~

[[https://github.com/Vindaar/TimepixAnalysis/blob/master/Plotting/plotBackgroundRate/plotBackgroundRate.nim][~plotBackgroundRate~]] is the main tool to visualize background (or raw data) spectra. All such plots in the thesis are produced with it. Input files are reconstructed HDF5 files or the result of the ~likelihood~ program.

**** ~plotCalibration~

[[https://github.com/Vindaar/TimepixAnalysis/blob/master/Plotting/plotCalibration/plotCalibration.nim][~plotCalibration~]] is a tool to produce visualizations of the different Timepix calibration steps, e.g. ToT calibration, SCurve scans and so on. The figures in sec. [[#sec:appendix:calibration:timepix]] are produced with it.

**** ~plotData~

[[https://github.com/Vindaar/TimepixAnalysis/blob/master/Plotting/karaPlot/plotData.nim][~plotData~]] is a very versatile tool to produce a variety of different plots. It can produce histograms of the different geometric properties, occupancy maps, event displays and more. If desired, it can produce a large number of plots for an input data file in one go. It is very powerful, because it can receive an arbitrary number of cuts on any dataset present in the input. This allows producing visualizations for any desired subset of the data, for example event displays or histograms for only those events with specific geometric properties. It is an exceptionally useful tool to understand certain subsets of data that appear 'interesting'. In sec. [[#sec:appendix:background:fadc]] we mention non-noisy FADC events in a region of the rise time / skewness space of fig. [[fig:appendix:fadc_veto:rise_skewness_run2]]. Such events are easily filtered and investigated using ~plotData~. Also fig. [[fig:appendix:empirical_cluster_lengths]] and fig. [[fig:reco:fadc_reco_example]] are produced with it, among others. Generally though, it favors information (density) over aesthetically pleasing visualizations.
** Other libraries relevant for TimepixAnalysis

A few other libraries not part of the TimepixAnalysis repository bear mentioning, due to their importance. They were written alongside TimepixAnalysis.

- [[https://github.com/Vindaar/ggplotnim][ggplotnim]] :: A ~ggplot2~ [fn:ggplot2] inspired plotting library. All plots in this thesis are produced with it.
- [[https://github.com/SciNim/Datamancer][Datamancer]] :: A ~dplyr~ [fn:dplyr] inspired data frame library.
- [[https://github.com/Vindaar/nimhdf5][nimhdf5]] :: A high level interface to the HDF5 [[cite:&hdf5]] library, somewhat similar to ~h5py~ [fn:h5py] for Python.
- [[https://github.com/SciNim/Unchained][Unchained]] :: A library for compile-time checking and conversion of physical units with zero runtime overhead. Exceptionally useful to avoid bugs due to wrong unit conversions and a big help when dealing with natural units.
- [[https://github.com/SciNim/Measuremancer][Measuremancer]] :: A library to deal with measurements with uncertainties. It performs automatic Gaussian error propagation for calculations with such measurements.
- [[https://github.com/SciNim/xrayAttenuation][xrayAttenuation]] :: A library dealing with the interaction of X-rays with matter (gases and solids). It is used to calculate things like the absorption in the detector gas and the reflectivity of the X-ray telescope in this thesis.
- [[https://github.com/Vindaar/TrAXer][TrAXer]] :: The raytracer used to compute the axion image, expanded on in appendix [[#sec:appendix:raytracing]].

[fn:ggplot2] https://ggplot2.tidyverse.org/
[fn:dplyr] https://dplyr.tidyverse.org/
[fn:h5py] https://www.h5py.org/

* Full data reconstruction :Appendix:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:full_data_reconstruction
:END:

We will now go over how to fully reconstruct all CAST data from the Run-2 and Run-3 data taking campaigns, all the way up to the computation of the observed limit.
This assumes the following:
- all CAST data files are located in a directory accessible via a ~DATA~ environment variable. Inside should be the directory structure containing the directories ~2017~, ~2018~, ~2018_2~ and ~CDL_2019~ as downloaded from Zenodo [[cite:&schmidt_2024_10521887]].
- all binaries used below are compiled and their location is in your ~PATH~ variable. There is a ~bin~ directory in the repository root, which contains symbolic links to the binaries if they are placed next to the source file. I recommend adding this directory to your ~PATH~.

#+begin_quote
Note: There is also the ~Analysis/ingrid/runAnalysisChain~ program, which automates a majority of these calls. See the section below on how it replaces the data parsing and reconstruction. Here I wish to illustrate the actual steps without hiding any.
#+end_quote

Furthermore, the commands below produce the correct results assuming the ~Analysis/ingrid/config.toml~ file is used as committed as part of the ~phd~ tag in the TimepixAnalysis repository.

** Raw data parsing and reconstruction

If this is all in place, first perform the raw data parsing.
Run-2, background and calibration:
#+begin_src sh
raw_data_manipulation -p $DATA/2017/DataRuns \
    --runType rtBackground \
    --out $DATA/DataRuns2017_Raw.h5
raw_data_manipulation -p $DATA/2017/CalibrationRuns \
    --runType rtCalibration \
    --out $DATA/CalibrationRuns2017_Raw.h5
#+end_src
Run-3:
#+begin_src sh
raw_data_manipulation -p $DATA/2018_2/DataRuns \
    --runType rtBackground \
    --out $DATA/DataRuns2018_Raw.h5
raw_data_manipulation -p $DATA/2018_2/CalibrationRuns \
    --runType rtCalibration \
    --out $DATA/CalibrationRuns2018_Raw.h5
#+end_src

Next, the initial data reconstruction (geometric properties):

Run-2:
#+begin_src sh
reconstruction -i $DATA/DataRuns2017_Raw.h5 \
    -o $DATA/DataRuns2017_Reco.h5
reconstruction -i $DATA/CalibrationRuns2017_Raw.h5 \
    -o $DATA/CalibrationRuns2017_Reco.h5
#+end_src
Run-3:
#+begin_src sh
reconstruction -i $DATA/DataRuns2018_Raw.h5 \
    -o $DATA/DataRuns2018_Reco.h5
reconstruction -i $DATA/CalibrationRuns2018_Raw.h5 \
    -o $DATA/CalibrationRuns2018_Reco.h5
#+end_src

Now, the next steps of the reconstruction (charge calibration, gas gain and FADC reconstruction):
#+begin_src sh
# DATA is assumed to be set already, e.g. DATA=~/CastData/data
for typ in Data Calibration; do
    for year in 2017 2018; do
        file="${DATA}/${typ}Runs${year}_Reco.h5"
        reconstruction -i $file --only_charge
        reconstruction -i $file --only_fadc
        reconstruction -i $file --only_gas_gain
    done
done
#+end_src
where we simply loop over the ~Data~ and ~Calibration~ prefixes and years.
With this done, we can perform the \cefe calibration fits:
#+begin_src sh
reconstruction -i $DATA/CalibrationRuns2017_Reco.h5 --only_fe_spec
reconstruction -i $DATA/CalibrationRuns2018_Reco.h5 --only_fe_spec
#+end_src
and then finally the fit of the energy calibration factors determined from each fit against the corresponding gas gain:
#+begin_src sh
reconstruction -i $DATA/CalibrationRuns2017_Reco.h5 --only_gain_fit
reconstruction -i $DATA/CalibrationRuns2018_Reco.h5 --only_gain_fit
#+end_src
This then allows calibrating the energy in each file:
#+begin_src sh
reconstruction -i $DATA/DataRuns2017_Reco.h5 --only_energy_from_e
reconstruction -i $DATA/DataRuns2018_Reco.h5 --only_energy_from_e
reconstruction -i $DATA/CalibrationRuns2017_Reco.h5 --only_energy_from_e
reconstruction -i $DATA/CalibrationRuns2018_Reco.h5 --only_energy_from_e
#+end_src

** Parse and reconstruct the CDL data

In order to use the likelihood cut method we also need to parse and reconstruct the CDL data. Note that this can be done before even reconstructing any of the CAST data files (with the exception of ~--only_energy_from_e~, which is optional anyway).
#+begin_src sh
raw_data_manipulation -p $DATA/CDL_2019/ -r Xray -o $DATA/CDL_2019/CDL_2019_Raw.h5
reconstruction -i $DATA/CDL_2019/CDL_2019_Raw.h5 -o $DATA/CDL_2019/CDL_2019_Reco.h5
reconstruction -i $DATA/CDL_2019/CDL_2019_Reco.h5 --only_charge
reconstruction -i $DATA/CDL_2019/CDL_2019_Reco.h5 --only_fadc
reconstruction -i $DATA/CDL_2019/CDL_2019_Reco.h5 --only_gas_gain
reconstruction -i $DATA/CDL_2019/CDL_2019_Reco.h5 --only_energy_from_e
#+end_src
With this file we can then run ~cdl_spectrum_creation~ in order to produce the ~calibration-cdl-2018.h5~ file:
#+begin_src sh
cdl_spectrum_creation $DATA/CDL_2019/CDL_2019_Reco.h5 \
    --genCdlFile --year=2018
#+end_src

** Add tracking information to background files

To add the tracking information, the CAST log files are needed.
The default path (which may be a symbolic link of course) is ~resources/LogFiles/tracking-logs~ from the TimepixAnalysis root.
#+begin_src sh
for year in 2017 2018; do
    cast_log_reader tracking \
        -p $TPXDIR/resources/LogFiles/tracking-logs \
        --startTime "2017/01/01" \
        --endTime "2018/12/31" \
        --h5out "${DATA}/DataRuns${year}_Reco.h5"
done
#+end_src
Here we assume ~TPXDIR~ is an environment variable pointing to the root of the TimepixAnalysis repository. Feel free to adjust if your path differs.

** Using ~runAnalysisChain~

All of the commands above can also be performed in one go using ~runAnalysisChain~:
#+begin_src sh
./runAnalysisChain \
    -i $DATA \
    --outpath $DATA \
    --years 2017 --years 2018 \
    --calib --back --cdl \
    --raw --reco \
    --logL \
    --tracking
#+end_src
which tells the ~runAnalysisChain~ helper to parse all data files for Run-2 and Run-3, do the same for the CDL data files, reconstruct with all steps, compute the likelihood values and add the tracking information to the background data files. It makes the same assumptions about the location of the data files as above. In particular, if the tracking log files are in a different location, add the ~--trackingLogs~ argument with the correct path.

** Applying a classifier

To apply a classifier we use the ~likelihood~ program. We will now show how to apply it to one file. Generally, calling ~likelihood~ manually is not needed, more on that below.

Let's apply the likelihood cut method using all vetoes for the Run-2 data:
#+begin_src sh
likelihood \
    -f $DATA/DataRuns2017_Reco.h5 \
    --h5out $DATA/classifier/run2_lnL_80_all_vetoes.h5 \
    --region=crAll \
    --cdlYear=2018 \
    --lnL --signalEfficiency=0.8 \
    --scintiveto --fadcveto --septemveto --lineveto \
    --vetoPercentile=0.99 \
    --cdlFile=$DATA/CDL_2019/calibration-cdl-2018.h5 \
    --calibFile=$DATA/CalibrationRuns2017_Reco.h5
#+end_src
which should be mostly self-explanatory.
The ~--signalEfficiency~ argument is the software efficiency of the likelihood cut. ~--vetoPercentile~ sets the percentile used for the FADC veto cut. The method is applied to the entire center chip (~--region=crAll~). A ~--tracking~ flag could be added to apply the classifier only to the tracking data (in this case it is applied only to /non/ tracking data).

As explained in sec. [[#sec:background:all_vetoes_combined]], many different veto setups were considered. For this reason a helper tool ~createAllLikelihoodCombinations~ (excuse the verbose name, heh) exists to simplify the process of calling ~likelihood~. It is located in the ~Analysis~ directory. For example:
#+begin_src sh :dir ~/CastData/ExternCode/TimepixAnalysis/Analysis/
./createAllLikelihoodCombinations \
    --f2017 $DATA/DataRuns2017_Reco.h5 \
    --f2018 $DATA/DataRuns2018_Reco.h5 \
    --c2017 $DATA/CalibrationRuns2017_Reco.h5 \
    --c2018 $DATA/CalibrationRuns2018_Reco.h5 \
    --regions crAll \
    --vetoSets "{fkMLP, fkFadc, fkScinti, fkSeptem, fkLineVeto, fkExclusiveLineVeto}" \
    --mlpPath $MLP/<mlp_of_choice>.pt \
    --fadcVetoPercentile 0.99 \
    --signalEfficiency 0.85 --signalEfficiency 0.90 \
    --signalEfficiency 0.95 --signalEfficiency 0.98 \
    --out $DATA/classifier/mlp \
    --cdlFile $DATA/CDL_2019/calibration-cdl-2018.h5 \
    --multiprocessing \
    --jobs 6 \
    --dryRun
#+end_src
would apply the MLP classifier (here with a dummy name) with successive additions of all vetoes in ~vetoSets~ at four different signal efficiencies and store the HDF5 output files in ~$DATA/classifier/mlp~. Multiple ~vetoSets~ arguments can be given in one call. Also, each ~fk~ entry can be prefixed with a ~+~ to indicate that this veto should not be run on its own. This allows for flexible veto combinations. As indicated by ~--multiprocessing~ and ~--jobs 6~, this would run 6 processes in parallel. Note that each process can peak at up to 10 GB of RAM (depending on the used classifier and vetoes).
** Computing limits To compute limits we now feed each of the produced HDF5 output files of the ~likelihood~ program (in pairs including Run-2 and Run-3 output files) into ~mcmc_limit_calculation~. As the number of combinations can be quite large, this can again be automated using the ~Analysis/runLimits~ program. Let's continue with the example from above and pretend we wish to run ~1000~ toy candidate sets for each input pair, but only those that include the line veto. This can be accomplished by using the ~--prefix~ argument to ~runLimits~, which understands standard glob patterns. #+begin_src sh ./runLimits \ --path $DATA/classifier/mlp \ --prefix "lhood_c18_R2_crAll_*line*" \ --exclude "_septem_" \ --outpath $DATA/limits/mlp/ \ --energyMin 0.2 --energyMax 12.0 \ --axionModel <path_to_differential_axion_flux>.csv \ --axionImage <path_to_axion_image>.csv \ --combinedEfficiencyFile <path_to_detection_efficiency>.csv \ --switchAxes \ --nmc 1000 \ --dryRun #+end_src Note that the CSV files mentioned here are found in the ~resources~ directory of the repository of this thesis. See the extended thesis on how to produce them. To produce the observed limit, we run: #+begin_src sh mcmc_limit_calculation \ limit \ -f <path_to_background_run2>.h5 \ -f <path_to_background_run3>.h5 \ --tracking <path_to_tracking_run2>.h5 \ --tracking <path_to_tracking_run3>.h5 \ --axionModel <path_to_differential_axion_flux>.csv \ --axionImage <path_to_axion_image>.csv \ --combinedEfficiencyFile <path_to_detection_efficiency>.csv \ --switchAxes \ --path "" \ --years 2017 --years 2018 \ --σ_p 0.05 \ --energyMin 0.2 --energyMax 12.0 \ --limitKind lkMCMC \ --outpath ~/phd/Figs/trackingCandidates/ \ --suffix "" #+end_src where the ~<path_to_*_run>~ refers to the output HDF5 file of the ~likelihood~ program for the best setup as described in sec. [[#sec:limit:expected_limits]]. ~--σ_p~ refers to the position uncertainty to use. 
The signal and background uncertainties default to their real values.

With this done, you have successfully reproduced the final results of the thesis! As mentioned in chapter [[#sec:about_thesis]], see the extended thesis for commands, code snippets and more information about how each figure and table is produced exactly.

*** Produce the axion model, axion image and detection efficiency file :extended:

For the axion model and axion image, see sec. [[#sec:appendix:raytracing:generate_axion_image]]. For the detection efficiency see sec. [[#sec:limit:ingredients:gen_detection_eff]].

** TODOs for this section :noexport:

- [ ] well, actually we want to reduce all this to a single script...
- [ ] Replace all this by a call to ~runAnalysisChain~.
- [ ] This chapter may still be one that contains some other stuff that needs to be run. Not sure. Or it might contain the code to set everything up, we'll see.

This part of the appendix contains the full set of operations to perform the data reconstruction.
1. setup toolchain
2. compile TPA binaries
3. (chips to ingrid database (well, it's commited))
3. raw data
4. reco
5. cdl same
6. background rate
7.
limit 2017 data: #+begin_src sh reconstruction ~/CastData/data/CalibrationRuns2017_Raw.h5 --out ~/CastData/data/CalibrationRuns2017_Reco_withFadc.h5 #+end_src #+begin_src sh reconstruction ~/CastData/data/CalibrationRuns2017_Reco_withFadc.h5 --only_fadc #+end_src #+begin_src sh reconstruction ~/CastData/data/DataRuns2017_Raw.h5 --out ~/CastData/data/DataRuns2017_Reco_withFadc.h5 #+end_src #+begin_src sh reconstruction ~/CastData/data/DataRuns2017_Reco_withFadc.h5 --only_fadc #+end_src 2018 data: #+begin_src sh reconstruction ~/CastData/data/CalibrationRuns2018_Raw.h5 --out ~/CastData/data/CalibrationRuns2018_Reco_withFadc.h5 #+end_src #+begin_src sh reconstruction ~/CastData/data/CalibrationRuns2018_Reco_withFadc.h5 --only_fadc #+end_src #+begin_src sh reconstruction ~/CastData/data/DataRuns2018_Raw.h5 --out ~/CastData/data/DataRuns2018_Reco_withFadc.h5 #+end_src #+begin_src sh reconstruction ~/CastData/data/DataRuns2018_Reco_withFadc.h5 --only_fadc #+end_src * Average distance X-rays travel in argon at CAST conditions :Appendix: :PROPERTIES: :CUSTOM_ID: sec:appendix:average_depth_xrays_argon :END: #+begin_quote Note: This section and its subsections are also available as a standalone document titled ~SolarAxionConversionPoint~. It is linked in the extended thesis and this appendix is only a minor modification. #+end_quote In order to be able to compute the correct distance to use in the raytracer for the position of the axion image, we need a good understanding of where the X-ray will generally convert in the gas. By combining the expected axion flux (folded with the telescope efficiency and window transmission to get the correct energy distribution) with the absorption length [fn:xrayAtt] of X-rays at different energies we can compute a weighted mean of all X-rays and come up with a single number. The difficulty lies in combining the statistical process of absorption with the incoming flux distribution. 
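The combination of the flux distribution with the statistical absorption process can be previewed with a toy Monte Carlo, here sketched in Python with entirely made-up numbers (two discrete energies, arbitrary weights and absorption lengths; the full Nim implementation follows below):

```python
import math
import random

rng = random.Random(1)

# Toy model: two X-ray energies with relative flux weights and
# hypothetical absorption lengths in cm. The real calculation uses the
# full intensity distribution I(E) and the energy-dependent l_abs(E).
energies = [1.0, 5.0]            # keV
weights = [0.7, 0.3]             # relative intensity entering the detector
l_abs = {1.0: 0.1, 5.0: 2.0}     # absorption length per energy, cm

depths = []
for _ in range(200_000):
    E = rng.choices(energies, weights=weights)[0]       # sample an energy
    depths.append(-l_abs[E] * math.log(1.0 - rng.random()))  # Beer-Lambert depth

mean_depth = sum(depths) / len(depths)
print(round(mean_depth, 2))  # ≈ 0.7·0.1 + 0.3·2.0 = 0.67 cm
```

The mean conversion depth is the flux-weighted mean of the per-energy absorption lengths; the real computation does exactly this, only with continuous sampling from $I(E)$.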
We implement a numerical Monte Carlo approach in literate programming style below.

[fn:xrayAtt] This was one of the reasons I wrote [[https://github.com/SciNim/xrayAttenuation][xrayAttenuation]].

** TODOs for this section :noexport:

- [ ] *REWRITE INTRODUCTION*
- [ ] *UPLOAD ORIGINAL DOC*

** Reference to original document :extended:

As mentioned in the note above, the original document is titled ~SolarAxionConversionPoint.org~. From the Org file a PDF and an HTML version are produced. These can be found on
http://phd.vindaar.de/docs/SolarAxionConversionPoint/

That document contains an attempt to compute the same thing analytically, which, as I later realized, of course produces wrong results (at least done the way I did it there). See the document.

** Calculate conversion point numerically

In order to calculate the conversion point, we need:
- random sampling logic
- sampling from an exponential distribution depending on energy
- the axion flux, telescope effective area and window absorption

Let's start by importing the modules we need:
#+begin_src nim :tangle code/sample_axion_xrays_conversion_points.nim
import helpers / sampling_helper # sampling distributions
import unchained                 # sane units
import ggplotnim                 # see something!
import xrayAttenuation           # window efficiencies
import math, sequtils
#+end_src
where ~sampling_helper~ is a small module to sample from a procedure or a sequence.

In addition let's define some helpers:
#+begin_src nim :tangle code/sample_axion_xrays_conversion_points.nim
from os import `/`, expandTilde
const ResourcePath = "~/org/resources".expandTilde
const OutputPath = "~/phd/Figs/axion_conversion_point_sampling/".expandTilde
proc thm(): Theme =
  ## A shorthand to define a `ggplotnim` theme that looks nice
  ## in the thesis
  result = themeLatex(fWidth = 0.9, width = 600, baseTheme = singlePlot)
#+end_src

Now let's read the LLNL telescope efficiency as well as the axion flux model.
Note that we may wish to calculate the absorption points not only for a specific axion flux model, but potentially for any other kind of signal. We'll build in functionality to disable the different contributions.
#+begin_src nim :tangle code/sample_axion_xrays_conversion_points.nim
let flux = "solar_axion_flux_differential_g_ae_1e-13_g_ag_1e-12_g_aN_1e-15.csv"
let dfAx = readCsv(ResourcePath / flux)
  .filter(f{`type` == "Total flux"})
let llnl = "llnl_xray_telescope_cast_effective_area_parallel_light_DTU_thesis.csv"
let dfLLNL = readCsv(ResourcePath / llnl)
  .mutate(f{"Efficiency" ~ idx("EffectiveArea[cm²]") / (PI * 2.15 * 2.15)})
#+end_src
Note: to get the differential axion flux use ~readOpacityFile~ from https://github.com/jovoy/AxionElectronLimit. It generates the CSV file.

Next up we need to define the material properties of the detector window in order to compute its transmission.
#+begin_src nim :tangle code/sample_axion_xrays_conversion_points.nim
let Si₃N₄ = compound((Si, 3), (N, 4)) # actual window
const ρSiN = 3.44.g•cm⁻³
const lSiN = 300.nm                   # window thickness
let Al = Aluminium.init()             # aluminium coating
const ρAl = 2.7.g•cm⁻³
const lAl = 20.nm                     # coating thickness
#+end_src
With these numbers we can compute the transmission at an arbitrary energy.

We now have everything needed to compute the inputs for the calculation. We wish to compute the intensity $I(E)$, the flux that enters the detector,
\[
I(E) = f(E) · ε_{\text{LLNL}} · ε_{\ce{Si3N4}} · ε_{\ce{Al}}
\]
where $f(E)$ is the solar axion flux and the $ε_i$ are the efficiencies associated with the telescope and the transmission of the window. The idea is to sample from this intensity distribution to get a realistic set of X-rays as they would be experienced in the experiment. One technical aspect still to be done is an interpolation of the axion flux and the LLNL telescope efficiency, in order to evaluate the data at an arbitrary energy and thus define a function that yields $I(E)$.
#+begin_quote _Important note_: We fully neglect here the conversion probability and area of the magnet bore. These (as well as a potential time component) are purely constants and do not affect the *shape* of the distribution $I(E)$. We want to sample from it to get the correct weighting of the different energies, but do not care about absolute numbers. So differential fluxes are fine. #+end_quote The idea is to define the interpolators and then create a procedure that captures the previously defined properties and interpolators. #+begin_src nim :tangle code/sample_axion_xrays_conversion_points.nim from numericalnim import newLinear1D, eval let axInterp = newLinear1D(dfAx["Energy", float].toSeq1D, dfAx["diffFlux", float].toSeq1D) let llnlInterp = newLinear1D(dfLLNL["Energy[keV]", float].toSeq1D, dfLLNL["Efficiency", float].toSeq1D) #+end_src With the interpolators defined let's write the implementation for $I(E)$: #+begin_src nim :tangle code/sample_axion_xrays_conversion_points.nim proc I(E: keV): float = ## Compute the intensity of the axion flux after telescope & window eff. ## ## Axion flux and LLNL efficiency can be disabled by compiling with ## `-d:noAxionFlux` and `-d:noLLNL`, respectively. result = transmission(Si₃N₄, ρSiN, lSiN, E) * transmission(Al, ρAl, lAl, E) when not defined(noAxionFlux): result *= axInterp.eval(E.float) when not defined(noLLNL): result *= llnlInterp.eval(E.float) #+end_src Let's test it and see what we get for e.g. $\SI{1}{keV}$: #+begin_src nim :tangle code/sample_axion_xrays_conversion_points.nim echo I(1.keV) #+end_src yields $1.249e20$. Not the most insightful, but it seems to work. 
Let's plot it:

#+begin_src nim :tangle code/sample_axion_xrays_conversion_points.nim
let energies = linspace(0.01, 10.0, 1000).mapIt(it.keV)
let Is = energies.mapIt(I(it))
block PlotI:
  let df = toDf({ "E [keV]" : energies.mapIt(it.float),
                  "I" : Is })
  ggplot(df, aes("E [keV]", "I")) +
    geom_line() +
    ggtitle("Intensity entering the detector gas") +
    margin(left = 3.0) + thm() +
    ggsave(OutputPath / "intensity_axion_conversion_point_simulation.pdf")
#+end_src

shown in fig. [[fig:axion_conversion_point:intensity]]. It looks exactly as we would expect.

#+CAPTION: Intensity that enters the detector taking into account LLNL telescope and window
#+CAPTION: efficiencies as well as the solar axion flux.
#+NAME: fig:axion_conversion_point:intensity
[[~/phd/Figs/axion_conversion_point_sampling/intensity_axion_conversion_point_simulation.pdf]]

Now we define the sampler for the intensity distribution $I(E)$, which returns an energy weighted by $I(E)$:

#+begin_src nim :tangle code/sample_axion_xrays_conversion_points.nim
let Isampler = sampler(
  (proc(x: float): float = I(x.keV)), # wrap `I(E)` to take `float`
  0.01, 10.0, num = 1000 # use 1000 points for EDF & sample in 0.01 to 10 keV
)
#+end_src

and define a random number generator:

#+begin_src nim :tangle code/sample_axion_xrays_conversion_points.nim
import random
var rnd = initRand(0x42)
#+end_src

First we will sample 100,000 energies from the distribution to see if we recover the intensity plot from before.

#+begin_src nim :tangle code/sample_axion_xrays_conversion_points.nim
block ISampled:
  const nmc = 100_000
  let df = toDf( {"E [keV]" : toSeq(0 ..< nmc).mapIt(rnd.sample(Isampler)) })
  ggplot(df, aes("E [keV]")) +
    geom_histogram(bins = 200, hdKind = hdOutline) +
    ggtitle("Energies sampled from I(E)") +
    thm() +
    ggsave(OutputPath / "energies_intensity_sampled.pdf")
#+end_src

This yields fig. [[fig:axion_conversion_point:energies_sampled_intensity]], which clearly shows the sampling works as intended.
#+CAPTION: Energies sampled from the distribution $I(E)$ using 100k samples.
#+CAPTION: The shape is nicely reproduced, here plotted using a histogram of
#+CAPTION: 200 bins.
#+NAME: fig:axion_conversion_point:energies_sampled_intensity
[[~/phd/Figs/axion_conversion_point_sampling/energies_intensity_sampled.pdf]]

The final piece now is to use the same sampling logic to generate energies according to $I(E)$, which correspond to X-rays of said energy entering the detector. For each of these energies we then sample from the Beer-Lambert law (see sec. [[#sec:theory:xray_matter_gas]])
\[ I(z) = I_0 \exp\left[ - \frac{z}{l_{\text{abs}} } \right], \]
where $I_0$ is some initial intensity and $l_\text{abs}$ the absorption length. The absorption length is computed from the gas mixture properties of the gas used at CAST, namely argon/isobutane 97.7/2.3 at $\SI{1050}{mbar}$. It is the inverse of the attenuation coefficient $μ$
\[ l_{\text{abs}} = \frac{1}{μ} \]
where the attenuation coefficient is computed via
\[ μ = \frac{ρ N_A}{M} σ_A \]
with $ρ$ the density of the gas, $N_A$ Avogadro's constant, $M$ the molar mass of the compound and $σ_A$ the atomic absorption cross section. The latter again is defined by
\[ σ_A = 2 r_e λ f₂ \]
with $r_e$ the classical electron radius, $λ$ the wavelength of the X-ray and $f₂$ the second scattering factor. Scattering factors are tabulated for different elements, for example by [[https://www.nist.gov/pml/x-ray-form-factor-attenuation-and-scattering-tables][NIST]] and [[https://henke.lbl.gov/optical_constants][Henke]]. For a further discussion of this see the README and implementation of [[https://github.com/SciNim/xrayAttenuation][~xrayAttenuation~]] [[cite:&Schmidt_xrayAttenuation_2022]].
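The chain from scattering factor to absorption length can be sketched in a few lines. The following Python sketch (not the ~xrayAttenuation~ implementation) strings the formulas together; the value ~f2 = 4.0~ is a placeholder for illustration, *not* a tabulated scattering factor, and the density is merely argon-like.

#+begin_src python
import math

R_E = 2.8179403e-13   # classical electron radius [cm]
N_A = 6.02214076e23   # Avogadro's constant [1/mol]
HC  = 1.23984193e-7   # h*c [cm keV]

def absorption_length(E_keV, f2, M, rho):
    """l_abs = 1 / (mu_m * rho) with mu_m = N_A / M * sigma_A and
    sigma_A = 2 r_e lambda f2; returns cm."""
    lam = HC / E_keV                 # X-ray wavelength [cm]
    sigma_A = 2.0 * R_E * lam * f2   # atomic absorption cross section [cm^2]
    mu_m = N_A / M * sigma_A         # mass attenuation coefficient [cm^2/g]
    return 1.0 / (mu_m * rho)

# placeholder inputs for an argon-like gas at roughly atmospheric density;
# f2 = 4.0 is an illustrative value, NOT a tabulated scattering factor
l_abs = absorption_length(3.0, f2=4.0, M=39.95, rho=1.78e-3)
print(l_abs)
#+end_src

At fixed $f₂$ the absorption length grows with energy, since the wavelength, and with it the cross section, shrinks.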
We will now go ahead and define the CAST gas mixture:

#+begin_src nim :tangle code/sample_axion_xrays_conversion_points.nim
proc initCASTGasMixture(): GasMixture =
  ## Returns the gas mixture for the CAST gas conditions:
  ## - Argon / Isobutane 97.7 / 2.3 %
  ## - 20°C (for this the difference in temperature barely matters)
  let arC = compound((Ar, 1)) # need Argon gas as a Compound
  let isobutane = compound((C, 4), (H, 10))
  # define the gas mixture
  result = initGasMixture(293.K, 1050.mbar, [(arC, 0.977), (isobutane, 0.023)])
let gm = initCASTGasMixture()
#+end_src

To sample from the Beer-Lambert law with a given absorption length we also define a helper that returns a sampler for the target energy using the definition of a normalized exponential distribution
\[ f_e(x, λ) = \frac{1}{λ} \exp \left[ -\frac{x}{λ} \right]. \]

#+begin_quote
Note: The sampling of the conversion point is the crucial aspect of this. Naively we might want to sample within the detector volume from 0 to $\SI{3}{cm}$. However, this skews our result. Our calculation depends on the energy distribution of the incoming X-rays. If the absorption length is long enough, the probability of reaching the readout plane, and thus not being detected, is significant. Restricting the sampler to $\SI{3}{cm}$ would pretend that, independent of the absorption length, X-rays _always_ convert within the volume, giving too large a weight to those energies that should sometimes not be detected!
#+end_quote

Let's define the sampler now. It takes the gas mixture and the target energy. A constant ~SampleTo~ is defined to adjust the position up to which we sample at compile time (to play around with different numbers).
#+begin_src nim :tangle code/sample_axion_xrays_conversion_points.nim proc generateSampler(gm: GasMixture, targetEnergy: keV): Sampler = ## Generate the exponential distribution to sample from based on the ## given absorption length # `xrayAttenuation` `absorptionLength` returns number in meter! let λ = absorptionLength(gm, targetEnergy).to(cm) let fnSample = (proc(x: float): float = result = expFn(x, λ.float) # expFn = 1/λ · exp(-x/λ) ) const SampleTo {.intdefine.} = 20 ## `SampleTo`, set via `-d:SampleTo=<int>` let num = (SampleTo.float / 3.0 * 1000).round.int # # of points to sample at result = sampler(fnSample, 0.0, SampleTo, num = num) #+end_src Note that this is inefficient, because we generate a new sampler from which we only sample a single point, namely the conversion point of that X-ray. If one intended to perform a more complex calculation or wanted to sample orders of magnitude more X-rays, one should either restructure the code (i.e. sample from known energies and then reorder based on the weight defined by $I(E)$) or cache the samplers and pre-bin the energies. For reference let's compute the absorption length as a function of energy for the CAST gas mixture: #+begin_src nim :tangle code/sample_axion_xrays_conversion_points.nim block GasAbs: let Es = linspace(0.03, 10.0, 1000) let lAbs = Es.mapIt(absorptionLength(gm, it.keV).m.to(cm).float) let df = toDf({ "E [keV]" : Es, "l_abs [cm]" : lAbs }) ggplot(df, aes("E [keV]", "l_abs [cm]")) + geom_line() + ggtitle(r"Absorption length of X-rays in CAST gas mixture: \\" & $gm) + margin(top = 1.5) + thm() + ggsave(OutputPath / "cast_gas_absorption_length.pdf") #+end_src which yields fig. [[fig:axion_conversion_point:absorption_length]] #+CAPTION: Absorption length in the CAST gas mixture as a function of X-ray energy. #+NAME: fig:axion_conversion_point:absorption_length [[~/phd/Figs/axion_conversion_point_sampling/cast_gas_absorption_length.pdf]] So, finally: let's write the MC sampling! 
#+begin_src nim :tangle code/sample_axion_xrays_conversion_points.nim
const nmc = 500_000 # number of samples to draw
var Es = newSeqOfCap[keV](nmc)
var zs = newSeqOfCap[cm](nmc)
while zs.len < nmc:
  # 1. sample an energy according to `I(E)`
  let E = rnd.sample(Isampler).keV
  # 2. get the sampler for this energy
  let distSampler = generateSampler(gm, E)
  # 3. sample from it
  var z = Inf.cm
  when defined(Equiv3cmSampling):
    ## To get the same result as directly sampling
    ## only up to 3 cm use the following code
    while z > 3.0.cm:
      z = rnd.sample(distSampler).cm
  elif defined(UnboundedVolume):
    ## This branch pretends the detection volume
    ## is unbounded if we sample within 20cm
    z = rnd.sample(distSampler).cm
  else:
    ## This branch is the physically correct one. If an X-ray reaches the
    ## readout plane it is _not_ recorded, but it was still part of the
    ## incoming flux!
    z = rnd.sample(distSampler).cm
    if z > 3.0.cm: continue # just drop this X-ray
  zs.add z
  Es.add E
#+end_src

Great, now we have sampled the conversion points according to the correct intensity. We can now ask for statistics or create different plots (e.g. conversion point by energies etc.).

#+begin_src nim :tangle code/sample_axion_xrays_conversion_points.nim
import stats, seqmath # mean, variance and percentile
let zsF = zs.mapIt(it.float) # for math
echo "Mean conversion position = ", zsF.mean().cm
echo "Median conversion position = ", zsF.percentile(50).cm
echo "Variance of conversion position = ", zsF.variance().cm
#+end_src

This prints the following:

#+begin_src
Mean conversion position = 0.556813 cm
Median conversion position = 0.292802 cm
Variance of conversion position = 0.424726 cm
#+end_src

We see the mean conversion position is at about $\SI{0.56}{cm}$. If we consider the median it's only $\SI{0.29}{cm}$ (the number we use in [[#sec:appendix:raytracing:axion_image]]). This number provides the target for the raytracing of the axion image as an offset from the focal length in sec.
[[#sec:appendix:raytracing:axion_image]]. Let's plot the conversion points of all sampled (and recorded!) X-rays as well as what their distribution against energy looks like. #+begin_src nim :tangle code/sample_axion_xrays_conversion_points.nim let dfZ = toDf({ "E [keV]" : Es.mapIt(it.float), "z [cm]" : zs.mapIt(it.float) }) ggplot(dfZ, aes("z [cm]")) + geom_histogram(bins = 200, hdKind = hdOutline) + ggtitle("Conversion points of all sampled X-rays according to I(E)") + thm() + ggsave(OutputPath / "sampled_axion_conversion_points.pdf") ggplot(dfZ, aes("E [keV]", "z [cm]")) + geom_point(size = 0.5, alpha = 0.2) + ggtitle("Conversion points of all sampled X-rays according to I(E) " & "against their energy") + thm() + ggsave(OutputPath / "sampled_axion_conversion_points_vs_energy.pdf", dataAsBitmap = true) #+end_src The former is shown in fig. [[fig:axion_conversion_point:sampled_axion_conversion_points]]. The overlapping exponential distribution is obvious, as one would expect. The same data is shown in fig. [[fig:axion_conversion_point:sampled_axion_conversion_points_by_energy]], but in this case not as a histogram, but by their energy as a scatter plot. We can clearly see the impact of the absorption length on the conversion points for each energy! #+CAPTION: Distribution of the conversion points of all sampled X-rays for which #+CAPTION: conversion in the detector took place as sampled from $I(E)$. #+NAME: fig:axion_conversion_point:sampled_axion_conversion_points [[~/phd/Figs/axion_conversion_point_sampling/sampled_axion_conversion_points.pdf]] #+CAPTION: Distribution of the conversion points of all sampled X-rays for which #+CAPTION: conversion in the detector took place as sampled from $I(E)$ as a scatter #+CAPTION: plot against the energy for each X-ray. 
#+NAME: fig:axion_conversion_point:sampled_axion_conversion_points_by_energy
[[~/phd/Figs/axion_conversion_point_sampling/sampled_axion_conversion_points_vs_energy.pdf]]

** Compiling and running the code :extended:

The code above is written in literate programming style. To compile and run it we use ~ntangle~ to extract it from the Org file:

#+begin_src sh
ntangle <file>
#+end_src

which generates [[file:code/sample_axion_xrays_conversion_points.nim]]. Compiling and running it can be done via:

#+begin_src sh
nim r -d:danger code/sample_axion_xrays_conversion_points.nim
#+end_src

which compiles and runs it as an optimized build. We have the following compilation flags to compute different cases:
- ~-d:noLLNL~: do not include the LLNL efficiency in the input intensity
- ~-d:noAxionFlux~: do not include the axion flux in the input intensity
- ~-d:SampleTo=<int>~: change up to where we sample the position (e.g. only to 3 cm)
- ~-d:UnboundedVolume~: if used together with the default ~SampleTo~ (or any large value) this effectively computes the case of an unbounded detection volume (i.e. every X-ray recorded with 100% certainty).
- ~-d:Equiv3cmSampling~: Running this with the default ~SampleTo~ (or any large value) effectively changes the sampling to a maximum of \SI{3}{cm}. This can be used as a good crosscheck to verify that the sampling behavior is independent of the sampling range.
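The difference between the unbounded and the truncated (default) case can be cross-checked analytically for a single absorption length $λ$: unbounded sampling has mean conversion depth $λ$, while dropping X-rays beyond a depth $L$ gives $\langle z \rangle = λ - L \, e^{-L/λ} / (1 - e^{-L/λ})$. The following Python sketch (with illustrative values $λ = 2$, $L = 3$ in cm, not CAST numbers) verifies this by Monte Carlo using the same rejection logic as the main program.

#+begin_src python
import math, random

def truncated_mean(lam, L):
    # E[z | z < L] for the exponential with scale lam, i.e. the mean
    # conversion depth when X-rays reaching the readout plane are dropped
    return lam - L * math.exp(-L / lam) / (1.0 - math.exp(-L / lam))

lam, L = 2.0, 3.0   # illustrative absorption length and drift length [cm]
rnd = random.Random(1)
zs = []
while len(zs) < 200_000:
    z = -lam * math.log(1.0 - rnd.random())  # inverse-CDF exponential sample
    if z < L:        # rejection: X-ray converts within the detector volume
        zs.append(z)
mc = sum(zs) / len(zs)
print(mc, truncated_mean(lam, L))
#+end_src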
Configurations of note:

#+begin_src sh
nim r -d:danger -d:noAxionFlux code/sample_axion_xrays_conversion_points.nim
#+end_src

$⇒$ realistic case for a flat input spectrum. Yields:

#+begin_src
Mean conversion position = 0.712102 cm
Median conversion position = 0.445233 cm
Variance of conversion position = 0.528094 cm
#+end_src

#+begin_src sh
nim r -d:danger -d:noAxionFlux -d:UnboundedVolume code/sample_axion_xrays_conversion_points.nim
#+end_src

$⇒$ the closest analogue to the analytical calculation from section [[#sec:axion_conversion_point:analytical]] (apart from the inclusion of isobutane here). Yields:

#+begin_src
Mean conversion position = 1.25789 cm
Median conversion position = 0.560379 cm
Variance of conversion position = 3.63818 cm
#+end_src

#+begin_src sh
nim r -d:danger code/sample_axion_xrays_conversion_points.nim
#+end_src

$⇒$ the case we most care about, whose numbers are mentioned in the text above.

* Raytracing :Software:Appendix:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:raytracing
:END:

#+LATEX: \minitoc

In this appendix we will introduce the concept of raytracing, present our raytracer in sec. [[#sec:appendix:raytracing:traxer]], give a more technical overview of the LLNL telescope in sec. [[#sec:appendix:raytracing:llnl_telescope]], show comparisons of our raytracer to PANTER measurements, sec. [[#sec:appendix:raytracing:panter]], and finally compute the axion image expected for CAST behind the LLNL telescope, sec. [[#sec:appendix:raytracing:axion_image]].

Raytracing is a technique for the rendering of computer-generated images [fn:history]. The classical raytracing algorithm goes back to a paper by Turner Whitted in 1979 [[cite:&whitted79]]. It is essentially a recursive algorithm, which shoots rays ("photons") from a camera into a scene of 3D objects. Interactions with the objects follow geometrical optics.
The ray equation
\[ \vec{r}(t) = \vec{o} + t \vec{d}, \]
describes the ray vector $\vec{r}$, how it propagates from its origin $\vec{o}$ along the direction $\vec{d}$ as a function of the parameter $t$ (it is no coincidence that $t$ evokes the notion of time). Ray-object intersection tests are performed based on the ray equation and parametrizations of the scene objects (either analytical parametrizations for geometric primitives like spheres, cones and the like, or complex objects made up of a large number of triangles). Each ray is traced until a certain 'depth', defined by the maximum number of reflections a ray may undergo, is reached and the recursion is stopped. Building on this, more sophisticated methods were invented, in particular the 'path tracing' algorithm and the introduction of the 'rendering equation' cite:kajiya86,immel86. Here the geometrical optics approximation is replaced by the concepts of radiometry (power, irradiance, radiance, spectral radiance and so on), embedding the concept in more robust physical terms. At its heart, the rendering equation is a statement of conservation of energy. To generate images of a scene, the most interesting quantity is the exitant radiance $L_o$ at a point on a surface. It is the sum of the emitted radiance $L_e$ and the scattered radiance $L_{o,s}$,
\[ L_o = L_e + L_{o,s}. \]
In the context of rendering, the emitted radiance $L_e$ is commonly a property of the materials in the scene. $L_{o,s}$ needs to be computed via the scattering equation
\[ L_{o,s}(\vec{x}, ω_o) = ∫_{S²} L_i(\vec{x}, ω_i) f_s(\vec{x}, ω_i ↦ ω_o) \sin ϑ \: \mathrm{d}\sin ϑ \, \mathrm{d}ϕ. \]
$L_i$ is the incident radiance at point $\vec{x}$ and direction $ω_i$ (do not confuse it with an energy). $f_s$ is the bidirectional scattering distribution function (BSDF) and describes the scattering properties of a material surface.
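As a small illustration of such an intersection test, the following Python sketch (Python purely for illustration, independent of the raytracer discussed here) solves $|\vec{o} + t\vec{d} - \vec{c}|^2 = R^2$ for a sphere of radius $R$ centered at $\vec{c}$ and returns the closest positive $t$ along the ray:

#+begin_src python
import math

def hit_sphere(o, d, c, R):
    """Smallest t > 0 with |o + t*d - c|^2 = R^2, or None if the ray misses."""
    oc = [o[i] - c[i] for i in range(3)]
    a = sum(x * x for x in d)                       # |d|^2
    b = 2.0 * sum(oc[i] * d[i] for i in range(3))   # 2 (o - c) . d
    cc = sum(x * x for x in oc) - R * R             # |o - c|^2 - R^2
    disc = b * b - 4 * a * cc
    if disc < 0:
        return None                                 # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)            # closer of the two roots
    return t if t > 0 else None

# ray from the origin along +z towards a unit sphere centered at (0, 0, 5)
print(hit_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
#+end_src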
In the context of an equilibrium in radiance (the light description does not change over time in the scene) the incident radiance can be expressed as
\[ L_i(\vec{x}, ω) = L_o(\mathbf{x}_{\mathcal{M}}(\vec{x}, ω), -ω). \]
Here $\mathbf{x}_{\mathcal{M}}(\vec{x}, ω)$ is the 'ray-casting function', which yields the closest point on the set of surfaces $\mathcal{M}$ of the scene visible from $\vec{x}$ in direction $ω$. These expressions can be combined into a form independent of $L_i$ to
\[ L_o(\vec{x}, ω_o) = L_e(\vec{x}, ω_o) + ∫_{S²} L_o(\mathbf{x}_{\mathcal{M}}(\vec{x}, ω_i), -ω_i) f_s(\vec{x}, ω_i ↦ ω_o) \sin ϑ \: \mathrm{d}\sin ϑ \, \mathrm{d}ϕ, \]
which is the 'light transport equation', commonly used nowadays in path tracing based computer graphics. The fundamental problem with raytracing, and path tracing based on the light transport equation in particular, is the extreme computational cost. Monte Carlo based approaches to sample only relevant directions and space partitioning data structures are employed to reduce this cost. One of the seminal documents of modern Monte Carlo based rendering is Eric Veach's PhD thesis [[cite:&veach1998robust]], on which the notation used above is based. If you are interested in the topic I highly recommend at least a cursory look at it. Movies have made the transition from rasterization based computer graphics in /Toy Story/ (1995) over the course of 15 to 20 years [fn:monster_house] to the full path tracing graphics commonly used today. The use of offline rendering utilizing many compute hours per frame of the final movie allowed movies to introduce path tracing at the end of the 90s (/Bunny/, an animated short released in 1998). In recent years the idea of real time path tracing has become tangible, in large part due to the -- at the time bold -- bet by Nvidia. They dedicated specific parts of their 'Turing' architecture of graphics processing units (GPUs) in 2018 to accelerate raytracing.
Starting with path traced versions of old video games, like /Quake II RTX/ running in real time on 'Turing' GPUs, in 2023 we have path traced versions of modern high budget ("AAA") video games like /Cyberpunk 2077/ and /Alan Wake 2/ [fn:other_raytracing]. This is partly due to the general increase in compute performance, partly due to specific hardware features and in large part due to the use of machine learning (to denoise and upsample images and to reconstruct missing rays). For an overview of the developments in the movie industry towards its adoption of path tracing, have a look at [[cite:&christensen16path]]. To understand how path tracing is implemented in video games on modern GPUs, see the 'Raytracing Gems' series cite:raytracingGemsHaines2019,raytracingGemsIIMarrs2021.

[fn:history] The history of raytracing goes back much further than its application to computer graphics, though.

[fn:monster_house] The first feature length movie entirely rendered using path tracing was /Monster House/ (2006).

[fn:other_raytracing] There were of course previous experiments in interactive real time raytracing.

** TODOs for this section :noexport:

- [ ] *CAN WE* get this appendix to be Appendix R in the final thesis? That would be too fun, haha.
  *UPDATE*: <2023-01-17 Tue 19:55> For length reasons of the thesis this will remain a pretty short appendix after all. While writing a longer intro to raytracing would be really neat, it's just going to make the thesis explode in length. Also it would take significant time to write it after all.
  *UPDATE 2*: <2023-08-04 Fri 17:26> Having thought about this now, I think it would make sense to have a basic introduction to raytracing (just the ray equation and the like) and then present the LLNL telescope setup and a couple of plots showing e.g. that the fraction of shells hit matches the expectation from the thesis (i.e. the opening area).
*UPDATE 3*: <2023-09-08 Fri 13:42> Given that we have finally after all implemented the interactive raytracer, we should write a short raytracing introduction (i.e. camera + rays into scene + ray equation + object intersections + Whitted algorithm + Path tracing algorithm + solving the light ... equation) - [X] Whitted raytracing algorithm! - [X] Mention the Path Tracing algorithm - [X] Mention Eric Veach's thesis - [X] Reference modern real time raytracing features & support on GPUs - [X] *INTRODUCE CONCEPT OF RAYTRACING* - [ ] *INTRODUCE LIGHT TRANSPORT EQUATION* - [X] *CITE PBR* - [ ] *MAYBE SHOW RAYTRACING IN A WEEKEND PICTURE AS EXAMPLE* -> Could of course mention fancy realtime raytracing possible nowadays. - [ ] *PLOT OF MIRROR SHELLS* -> And relation of LLNL telescope table from PhD thesis & reproducing the fractions in our code via the ~geom_bar~ plot of how many times each shell hit - [X] *AXION ELECTRON IMAGE* -> Found in limit chapter - [ ] *PRIMAKOFF IMAGE* -> If we _do_ compute it, will be placed in relevant chapter there next to flux - [ ] *CHAMELEON IMAGE* -> If we _do_ compute it, will be placed in relevant chapter there next to flux - [ ] *NUMBERS FOR FLUX FRACTION ENCOUNTERED WHERE* - [ ] *LLNL telescope overview*: In the section where we talk about our raytracing of this telescope include: - table of parameters of the telescope - numbers for telescope design 'as built' - Wolter equation with correct radius to use - the multilayer recipes used ** TrAXer - An interactive axion raytracer :PROPERTIES: :CUSTOM_ID: sec:appendix:raytracing:traxer :END: #+begin_src subfigure (figure () (subfigure (linewidth 0.525) (caption "RTOW spheres example") (label "fig:appendix:raytracing:riow_example") (includegraphics (list (cons 'width (linewidth 1.0))) "/home/basti/phd/Figs/raytracing/raytracing_in_one_weekend.png")) (subfigure (linewidth 0.475) (caption "CAST setup") (label "fig:appendix:raytracing:cast_setup_example") (includegraphics (list (cons 'width 
(linewidth 1.0))) "/home/basti/phd/Figs/raytracing/traxer_cast_llnl_setup_sun.png")) (caption (subref "fig:appendix:raytracing:riow_example") " is the reproduction of the final result of the book 'Ray Tracing in One Weekend' (RTOW) " (cite "Shirley2020RTW1") " I originally followed. " (subref "fig:appendix:raytracing:cast_setup_example") " is a very narrow angle of view on the CAST setup. In the bottom left is the image sensor with the size of a GridPix. In the middle with a reddish color is the LLNL telescope sitting in front of the CAST magnet bore. On the right is the Sun. Note that only the essential parts are simulated floating in the air. The angle and view from above makes the image sensor look a bit odd.") (label "fig:appendix:raytracing:example_screenshots"))
#+end_src

The simulation of light based on physical principles, as done in path tracing algorithms, combined with the performance of modern computers makes it an interesting solution for light propagation problems which are difficult, if not impossible, to solve analytically. The calculation of the expected image produced by a realistic X-ray telescope for solar axion emission is one such problem. In [[cite:&krieger2018search]] Christoph Krieger already wrote a basic hardcoded [fn:hardcoded] raytracer for the 2014/15 data taking behind the ABRIXAS telescope. He approximated the ABRIXAS telescope as a simple lens. For the data taking with the detector of this thesis behind the LLNL telescope, a more precise solution was desired. As such, Johanna von Oy wrote a new raytracer for this purpose as part of her master thesis [[cite:&vonOy_MSc]]. However, although this code [[cite:&JvO_axionElectron]] correctly simulates the LLNL telescope (and as a matter of fact also other telescopes) based on the correct layer descriptions, it still uses a similar hardcoded [fn:hardcoded] approach, making it inflexible for different scenes.
More importantly, it makes understanding and verifying the code for correctness very difficult. What started mainly as a toy project I wrote out of curiosity, following the popular 'Ray Tracing in One Weekend' (RTOW) [[cite:&Shirley2020RTW1]] book [fn:literal_weekend], in any case made me appreciate the advantage of a generic raytracer (see fig. sref:fig:appendix:raytracing:riow_example for the final result of that book, created by this project). Generic meaning that the scene to be rendered is described by objects placed in a simulated 3D world space. This reduces the amount of code that needs to be checked for correctness by orders of magnitude (reflection for one material type is a few-line function, instead of lines of code for every single reflection in a hardcoded raytracer). It also makes the program much more flexible in supporting different setups. In particular though, the concept of a camera independent of the scene geometry allows one to visualize the scene (obviously the entire purpose of a normal raytracer!). As a personal addition to the RTOW project, I implemented rendering to an SDL2 cite:SDL2 window and handling of mouse and keyboard inputs to produce an interactive real-time raytracer (this was early 2021). This sparked the idea to use the same approach for our axion raytracer. I pitched the idea to Johanna, but did not manage to convince her [fn:not_criticism]. While working on finalizing the axion image and X-ray finger simulations for this thesis, several big discrepancies between our raytracing results and those used for the CAST Nature paper in 2017 [[cite:&cast_nature]] were noticed. The latter were done by Michael Pivovaroff at LLNL using a private in-house raytracer [fn:my_belief]. The need to better verify the correctness of our own raytracing simulation led me down the path of finishing the project of turning the RTOW based raytracer into an axion X-ray raytracer. This way verifying correctness was easier.
Some additional features of the second book of the RTOW series were added (light sources among others) and other aspects were inspired by Physically Based Rendering [[cite:&pharr2016physically]] (in particular the idea of propagating multiple energies in each ray). The result is ~TrAXer~ cite:traxer, an interactive real-time visible light and X-ray raytracer. In essence it is two raytracers in one. First, a 'normal' visible light raytracer, which the camera uses. Secondly, an X-ray raytracer. Both use the same raytracing logic, but they differ in sampling. For visible light we emit rays from the camera into the scene, while for X-rays we emit from an X-ray source (an object placed in the scene, for example the Sun). Both of these run in parallel. Scenes are defined by placing geometric primitives with different materials (glass, metal etc.) into the scene. The X-ray raytracer is used by choosing X-ray emitting materials and adding another object as an X-ray target (another material), which the X-ray sampling will be biased towards. There is further a specific X-ray material an object can be made of, which behaves as expected when hit by an X-ray, calculated using [[cite:&Schmidt_xrayAttenuation_2022]]. The behavior is described by the reflectivity computed via the Fresnel equations, as discussed in sec. [[#sec:theory:xray_reflectivity]]. To simulate the reflectivity of the LLNL telescope, the depth graded multilayers of the different layers are taken into account. X-ray transmission is currently not implemented, but only because we have no need for it. The addition would be easy, based on mass attenuation coefficients and the Fresnel equations for transmission. In order to detect and read out results from the X-ray raytracing, we place "image sensors" into the scene. They use a special material, which accumulates the X-rays hitting them, split into spatial pixels.
For visible light they simply emit the current values stored in each pixel of the image sensor, mapped to the Viridis color scale. These things are visible in the example screenshot showing the "CAST" scene in fig. [[sref:fig:appendix:raytracing:cast_setup_example]]. The view is seen from behind and above the setup with a narrow field of view (like a telephoto lens). Only relevant pieces of the setup are placed into the scene (partially for performance, although bounding volume hierarchies help there, mostly for simplicity). The Sun is visible on the right side, emitting a yellow hue for the visible rays and an invisible X-ray spectrum following the expected solar axion flux (with correct radial emission). The long blueish tube is the inner bore of the CAST magnet. Then, with red hues (in visible light) we have the LLNL telescope. In the bottom left is the image sensor, which has the same physical size as the center GridPix of the Septemboard detector and is placed in the focal point of the telescope. As every object of the scene is floating above the earth, combined with the telephoto like view, the sensor looks a bit odd. [fn:sensor_sun] [fn:hardcoded] By 'hardcoded' in this context I mean that the scene geometry is entirely embedded in imperative code. Each reflection is calculated manually. Thus, the raytracer is not generic. [fn:literal_weekend] The nice thing about the first book is that it can literally be implemented in a single weekend! [fn:not_criticism] Some people may just be better at judging which efforts are worth it, than I am, haha! On a serious note, given her other obligations and this not being an urgent problem, we did not pursue it at the time. [fn:my_belief] At least that is my understanding. There are two public X-ray raytracing packages I am aware of, ~McXtrace~ cite:mc_xtrace and ~MT_RAYOR~ cite:mt_rayor. 
There is a chance ~MT_RAYOR~ may have been used, because it is written in an interpreted programming language, 'Yorick' cite:yorick, developed at LLNL. And it was used for NuSTAR raytracing simulations in the past cite:nustar_dtu_phd. [fn:sensor_sun] You may wonder why there is nothing shielding the image sensor from direct illumination by the Sun. The reason is that the X-rays emitted from the Sun are only sampled into the entrance of the magnet bore. This way we save compute and do not need a mask to shield the sensor. *** Discrepancies raytracer and LLNL raytracer :extended: - [ ] LINK TO DOCUMENT ANSWER TO IGOR! -> For that we need to host the document somewhere first! *** TODOs for this section :noexport: Old text: #+begin_quote *Signal hypothesis - raytracing from Sun to detector* (It can be a reasonably long section, I think that's fair) An interactive raytracer for the applications of solar axion fluxes, which allows to investigate the scene (geometry of objects) in 'visible light' as well as serve as an X-ray raytracer is in development. +For time reasons that development is somewhat on hold unfortunately.+ +https://github.com/Vindaar/rayTracingInOneWeekend/tree/interactiveRayTracer+ https://github.com/Vindaar/TrAXer -> Done. Ray tracing through the detector. Put this before limit calculation stuff? The whole theoretical side needs to be described in the theory chapter in [[Axion-electron flux]]. Then here we can just describe how one implements this (in particular in the noexport section). Raytracing in a Weekend: cite:Shirley2020RTW1 For an in depth guide to raytracing, from theoretical principles to a pretty sophisticated raytracer, see the amazing 'Physically based rendering' cite:pharr2016physically. #+end_quote *** Produce screenshots of TrAXer :extended: This is simply a manual matter of running ~TrAXer~ with the desired scene and taking screenshots manually. 
In theory we can save the frame buffers using ~F5~ and then produce an image from the raw data, but there's not much point really. The main question is what we really want to showcase. For example to produce an example screenshot showcasing the original 'Raytracing in One Weekend' final image, we can do: #+begin_src sh ./raytracer --width 1200 --speed 10.0 --nJobs 32 --maxDepth 10 --spheres #+end_src and take a screenshot from: #+begin_src Now looking at: (Point: [-4.278876423185614, 1.499472514352247, -5.283379080542512]) from : (Point: [-4.825481106391748, 1.938486638958975, -5.996463871218145]), yaw = 5.366388980384832, pitch = -0.4545011087932843 #+end_src (we currently don't have a way to specify the starting location and point we look at. You see the current values printed to the terminal. Note that the movement speed can be adjusted using page up / page down). - [ ] Or use the original raytracing in one weekend screenshot we created? We are not using any depth of field in the current camera setup. -> I also get the impression that something about our image integration is not entirely working in the current raytracing code. The code has been running quite a while now (given that it's 28 threads), but the noise does not really get better. -> Well, our color code was simpler in the past. Just adding colors then dividing by num samples & gamma correcting. But it did not include yet how to integrate. This way we took: <<INSERT>> Let's place this screenshot next to one that showcases something more CAST related? - [ ] CAST screenshot. Maybe image sensor, magnet, telescope sun? 
For example, to simulate the 'realistic' CAST setup with the Sun as the X-ray source:
#+begin_src sh
./raytracer \
    --width 1200 --speed 10.0 --nJobs 32 --vfov 15 --maxDepth 5 \
    --llnl --focalPoint --sourceKind skSun \
    --solarModelFile ~/CastData/ExternCode/AxionElectronLimit/resources/solar_model_dataframe.csv \
    --rayAt 1.0 \
    --sensorKind sSum \
    --usePerfectMirror=false \
    --ignoreWindow
#+end_src
from
#+begin_quote
[INFO] Current position (lookFrom) = (Point: [-207.533615402934, 75.77234678496751, -2003.522751711818]) at (lookAt) (Point: [-207.4532559830788, 75.71290478850352, -2002.527759745671])
#+end_quote
[[~/phd/Figs/raytracing/traxer_cast_llnl_setup_sun.png]]

** A few more details about the LLNL telescope
:PROPERTIES:
:CUSTOM_ID: sec:appendix:raytracing:llnl_telescope
:END:

The Lawrence Livermore National Laboratory (LLNL) telescope, introduced in
sec. [[#sec:helioscopes:llnl_telescope]], is the X-ray telescope of interest
for us. Let us first get an overview of the telescope and its construction,
before we move on to comparing the TrAXer simulations with PANTER
measurements and with raytracing simulations by Michael Pivovaroff of LLNL.

Its design follows the technology developed for the NuSTAR mission
cite:Harrison_2013,Harrison2006,nustar_design_performance,nustar_fabrication,nustar_overview_status.
This means it is not a true Wolter optic (meaning parabolic and hyperbolic
mirrors), but instead uses a cone approximation cite:Petre:85. Conical
mirrors are easier to produce and, in particular for CAST, provide more than
enough angular resolution. It consists of two sets of 13 layers. Each mirror
has a physical length of $\SI{225}{mm}$ and a thickness of
$d_{\text{glass}} = \SI{0.21}{mm}$. The mirrors are made of glass, thermally
formed ("slumped") into their conical shapes. Production of the mirrors
happened at the National Space Institute at the Technical University of
Denmark (DTU Space) as part of the PhD thesis of Anders Jakobsen
cite:anders_phd.
The first set of layers $i$ describes truncated cones with a remaining
height of $\SI{225}{mm}$, an arc angle of $\SI{30}{°}$, a radius on the
opening end [fn:opening_end] of $R_{1,i}$ and a cone angle of $α_i$. That
is, for each layer a non-truncated cone would have a total height of
$h = R_{1,i} / \tan{α_i}$. The second set of 13 layers only differs by their
radii, $R_{4,i}$ on the opening end, and an angle of $3 α_i$. In the
horizontal direction along the optical axis of the telescope, there is a
spacing of $x_{\text{sep}} = \SI{4}{mm}$ between the two sets of layers. Due
to the tilt of the mirrors the physical distance between the sets is
minutely larger. Combined, the telescope thus has a length of roughly
$\SI{454}{mm}$.

One can define additional helpful radii: $R_{2,i}$, the minimum radius of
the first set of cones, $R_{5,i}$, the minimum radius of the second set, and
finally $R_{3,i}$, the radius at the mid point where the hypothetical
extensions of the two mirrors of a layer would meet, at $x = \SI{2}{mm}$
from each set of mirrors. Equations [[eq:llnl_rest:radii_eqs]] specify how
these numbers are related, with $l$ the length of the mirrors.
#+NAME: eq:llnl_rest:radii_eqs
\begin{align}
R_{2,i} &= R_{3,i} + 0.5 x_{\text{sep}} \tan(α_i) \\
R_{1,i} &= R_{2,i} + l \sin(α_i) \\
R_{4,i} &= R_{3,i} - 0.5 x_{\text{sep}} \tan(3α_i) \\
R_{5,i} &= R_{4,i} - l \sin(3α_i)
\end{align}
which we can rewrite to compute everything from $R_{1,i}$ by:
\begin{align*}
R_{2,i} &= R_{1,i} - l \sin(α_i) \\
R_{3,i} &= R_{2,i} - 0.5 x_{\text{sep}} \tan(α_i) \\
R_{4,i} &= R_{3,i} - 0.5 x_{\text{sep}} \tan(3α_i) \\
R_{5,i} &= R_{4,i} - l \sin(3α_i)
\end{align*}
I emphasize this, because this conical nature describes the /entire
telescope/ as long as the radii $R_{1,i}$, $R_{4,i}$ and angles $α_i$ are
known.
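These relations can be checked numerically. The following is a minimal
Python sketch using the layer 1 "as built" values listed further below;
note that for the telescope as actually built the radii of the second set
deviate from the idealized chain by a few hundredths of a millimeter.

#+begin_src python
import math

# Layer 1 of the LLNL telescope "as built": R1 and graze angle α.
l_mirror = 225.0    # mirror length [mm]
x_sep    = 4.0      # spacing between the two sets of mirrors [mm]
R1       = 63.2384  # radius at the opening end [mm]
alpha    = math.radians(0.5924)

# Walk down the radii using the relations above.
R2 = R1 - l_mirror * math.sin(alpha)
R3 = R2 - 0.5 * x_sep * math.tan(alpha)
R4 = R3 - 0.5 * x_sep * math.tan(3 * alpha)
R5 = R4 - l_mirror * math.sin(3 * alpha)

# Wolter equation at the virtual midpoint radius: reproduces f ≈ 1500 mm.
R_virtual = R1 - 0.5 * l_mirror * math.sin(alpha)
f = R_virtual / math.tan(4 * alpha)
print(R2, R3, R4, R5, f)
#+end_src

$R_2$ and $R_3$ reproduce the tabulated values ($\num{60.9121}$ and
$\num{60.8914}$) to better than a micrometer, while $R_4$ and $R_5$ come out
about $\SI{0.03}{mm}$ low, reflecting the small as-built deviations.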
[fn:alternative] Alternatively, this can also be used to /construct/ an
optimal conical telescope from a starting radius $R_{1,0}$ (typically called
the "mandrel"), if the iterative condition
$R_{3,i+1} = R_{1,i} + d_{\text{glass}}$ is employed (in this case some
target parameters are needed of course, like focal length, $x_{\text{sep}}$
and a few more).

To calculate the focal length $f$ of an X-ray telescope of the Wolter type,
we can use the Wolter equation,
\[ \tan(4α_i) = \frac{R_{3,i,\text{virtual}}}{f} \]
where $α_i$ is the angle of the $i^{\text{th}}$ layer and
$R_{3,i,\text{virtual}}$ is the radius corresponding to the virtual
reflection point. Due to the double reflection inherent to a Wolter
telescope, the incoming ray picks up a total angle of $4α$ ($0$ to $2α$
after the first reflection off the shell with angle $α$, then $2α$ to $4α$
after reflecting off the second shell with angle $3α$). The virtual radius
is obtained by extending the incoming ray through the first set of mirrors
and reflecting it at a single virtual mirror of angle $4α$ precisely at the
midpoint of the telescope. This is illustrated in
fig. [[fig:appendix:llnl_schematic_explanation]], which is an adapted version
of fig. 4.6 from A. Jakobsen's PhD thesis [[cite:&anders_phd]] to better
illustrate this.

In addition to the PhD thesis mentioned, there is a paper
cite:llnl_telescope_first_cast_results about the telescope for CAST and
initial results. *Note though*, that /both/ the PhD thesis and the paper
contain contradictory and wrong information about the details of the
telescope design. Neither of them, not even combined, describes the real
telescope. [fn:whats_wrong] [fn:clarify] Thanks to personal communication
with Jaime Ruz and Julia Vogel we were able to both clear up the confusion
and find the original numbers of the telescope that was actually built.

Table [[tab:appendix:llnl_overview]] gives an overview of the telescope
design.
It is adapted from cite:llnl_telescope_first_cast_results, but modified to
use the correct numbers. Then in tab. [[tab:appendix:llnl_as_built]] is the
list of all layers and their corresponding radii in detail as given to me by
Jaime Ruz.

#+CAPTION: Properties of the LLNL telescope. Adapted from [[cite:&llnl_telescope_first_cast_results]]
#+CAPTION: and modified to match the numbers actually built.
#+NAME: tab:appendix:llnl_overview
#+ATTR_LATEX: :booktabs t
| Property                                    | Value                         |
|---------------------------------------------+-------------------------------|
| Mirror substrates                           | glass, Schott D263            |
| Substrate thickness                         | $\SI{0.21}{mm}$               |
| $l$, length of upper and lower mirrors      | $\SI{225}{mm}$                |
| Overall telescope length                    | $\sim\SI{454}{mm}$            |
| $f$, focal length                           | $\SI{1500}{mm}$               |
| Layers                                      | 13                            |
| Total number of individual mirrors in optic | 26                            |
| $R_{1,i}$, range of maximum radii           | $\SIrange{63.24}{102.38}{mm}$ |
| $R_{3,i}$, range of mid-point radii         | $\SIrange{62.07}{100.5}{mm}$  |
| $R_{5,i}$, range of minimum radii           | $\SIrange{53.88}{87.19}{mm}$  |
| $α$, range of graze angles                  | $\SIrange{0.592}{0.958}{°}$   |
| Azimuthal extent                            | $\sim\SI{30}{°}$              |

#+CAPTION: Overview of the relevant numbers as the telescope was actually built. Based on a data file
#+CAPTION: sent to me by Jaime Ruz. The values for $R_3$ and the angles $α$, $3α$ were calculated from
#+CAPTION: the values for $R_1, R_2$ and $R_4, R_5$ based on equations [[eq:llnl_rest:radii_eqs]].
#+NAME: tab:appendix:llnl_as_built
#+ATTR_LATEX: :booktabs t
|  i | $R_1$ [$\si{mm}$] | $R_2$ [$\si{mm}$] | $R_3$ [$\si{mm}$] | $R_4$ [$\si{mm}$] | $R_5$ [$\si{mm}$] | α [$\si{°}$] | 3α [$\si{°}$] |
|----+-------------------+-------------------+-------------------+-------------------+-------------------+--------------+---------------|
|  1 |           63.2384 |           60.9121 |           60.8914 |           60.8632 |           53.8823 |       0.5924 |        1.7780 |
|  2 |           65.8700 |           63.4470 |           63.4255 |           63.3197 |           56.0483 |       0.6170 |        1.8520 |
|  3 |           68.6059 |           66.0824 |           66.0600 |           65.9637 |           58.3908 |       0.6426 |        1.9288 |
|  4 |           71.4175 |           68.7898 |           68.7664 |           68.6794 |           60.7934 |       0.6692 |        2.0086 |
|  5 |           74.4006 |           71.6647 |           71.6404 |           71.5582 |           63.3473 |       0.6967 |        2.0913 |
|  6 |           77.4496 |           74.6014 |           74.5761 |           74.4997 |           65.9515 |       0.7253 |        2.1773 |
|  7 |           80.6099 |           77.6452 |           77.6188 |           77.5496 |           68.6513 |       0.7550 |        2.2665 |
|  8 |           83.9198 |           80.8341 |           80.8067 |           80.7305 |           71.4688 |       0.7858 |        2.3591 |
|  9 |           87.3402 |           84.1290 |           84.1005 |           84.0137 |           74.3748 |       0.8178 |        2.4553 |
| 10 |           90.8910 |           87.5495 |           87.5198 |           87.4316 |           77.4012 |       0.8510 |        2.5551 |
| 11 |           94.5780 |           91.1013 |           91.0704 |           90.9865 |           80.5497 |       0.8850 |        2.6587 |
| 12 |           98.3908 |           94.7737 |           94.7415 |           94.6549 |           83.7962 |       0.9211 |        2.7662 |
| 13 |           102.381 |           98.6187 |           98.5853 |           98.4879 |           87.1914 |       0.9581 |        2.8778 |

#+CAPTION: Schematic of the reflection of an X-ray on a single layer of the telescope.
#+CAPTION: Angles are exaggerated ($α = \SI{5}{°}$ here). The virtual reflection point is
#+CAPTION: the relevant point to consider when calculating the focal length.
#+NAME: fig:appendix:llnl_schematic_explanation
#+ATTR_LATEX: :width 1\textwidth
[[~/org/Figs/llnlExplanation/llnl_layers_explanation.pdf]]

Having discussed the physical /layout/ of the telescope, let us quickly talk
about the telescope coatings. In contrast to telescopes like ABRIXAS or
XMM-Newton, whose optics use a single layer gold coating, the NuSTAR design
uses a depth graded multilayer coating as introduced in sec.
[[#sec:theory:xray_reflectivity]]. There are four different 'recipes', each
used for a different set of layers. All recipes are depth graded multilayers
of different numbers of Pt/C layers.
#+begin_quote
Note that in this terminology the high $Z$ material (platinum) is actually
/below/ the low $Z$ material (carbon), contrary to what Pt/C might imply!
#+end_quote
Table [[tab:appendix:llnl_recipes]] gives an overview of which recipes are
used for which layer and how they differ. $d_{\text{min}}$ is the minimum
thickness of one multilayer and $d_{\text{max}}$ the maximum thickness. The
topmost multilayer is the thickest. $Γ$ is the fraction of each multilayer's
thickness taken up by the top material. For example, recipe 1 has a top
multilayer of carbon of a thickness $Γ · d_{\text{max}} = \SI{10.125}{nm}$
on top of $(1 - Γ) · d_{\text{max}} = \SI{12.375}{nm}$ of platinum. Because
recipe 1 only has 2 such multilayers, the bottom Pt/C layer has a combined
thickness of $d_{\text{min}}$.

To reiterate the equations that describe the layer thicknesses for $N$
layers as given in sec. [[#sec:theory:xray_reflectivity]], a depth-graded
multilayer is described by the equation:
#+NAME: eq:appendix:depth_graded_multilayer
\begin{equation}
d_i = \frac{a}{(b + i)^c}
\end{equation}
where $d_i$ is the depth of layer $i$ (out of $N$ layers),
\[ a = d_{\text{min}} (b + N)^c \]
and
\[ b = \frac{1 - N k}{k - 1} \]
with
\[ k = \left(\frac{d_{\text{min}}}{d_{\text{max}}}\right)^{\frac{1}{c}}. \]
In all four recipes used for the LLNL telescope the parameter $c$ is set
to 1.

#+CAPTION: Overview of the different depth graded multilayer coatings used for
#+CAPTION: the telescope. Adapted from fig. 4.11 of [[cite:&anders_phd]] with the correct
#+CAPTION: numbers as finally built. Parameter $c$ of equation [[eq:appendix:depth_graded_multilayer]] is
#+CAPTION: set to 1 in all layers.
#+NAME: tab:appendix:llnl_recipes
| Recipe | # of layers | $d_{\text{min}}$ [$\si{nm}$] | $d_{\text{max}}$ [$\si{nm}$] |    Γ |
|--------+-------------+------------------------------+------------------------------+------|
|      1 |           2 |                         11.5 |                         22.5 | 0.45 |
|      2 |           4 |                          7.0 |                         19.0 | 0.45 |
|      3 |           4 |                          5.5 |                         16.0 |  0.4 |
|      4 |           2 |                          5.0 |                         14.0 |  0.4 |

Calculation of the reflectivity follows the equations also mentioned in the
theory section. The reflectivities of each layer are calculated with
[[cite:&Schmidt_xrayAttenuation_2022]], which is used as part of the material
description in ~TrAXer~.

[fn:opening_end] By 'opening end' I mean the side of a truncated cone with
the larger radius. For an ice cream cone it would simply be the radius at
the top.

[fn:alternative] An alternative, equivalent description replaces the angle
by the minimum radii of the cones.

[fn:whats_wrong] Anders' thesis contains numbers for a telescope design with
a focal length of $\SI{1530}{mm}$ instead of $\SI{1500}{mm}$, possibly due
to a typo in a form of the Wolter equation. The paper contains schematics
with factually wrong annotations and more wrong (or outdated?) numbers.

[fn:clarify] Let me be very clear: by no means am I trying to demean Anders'
work! These things happen. It is just a shame to not have any public,
accurate information about the telescope. In particular the paper
[[cite:&llnl_telescope_first_cast_results]] should have received an erratum.

*** TODOs for this section :noexport:

- [X] Explain the layout, *cones*, table of the shells (*final shells!!*)
  and length of mirrors, angles, $x_{\text{sep}}$ and all that jazz
- [ ] Mention the recipes used for each set of layers and which layers are
  affected by each
- [ ] Mention calculation with xrayAttenuation of those.
- [ ] Calculate correct radii and angles, update ρ mid
- [ ] Make updated schematic like Anders for the angles

[fn:table_llnl] The data in the table was sent to me in personal
communication by Jaime Ruz.
*IS WRITTEN CORRECT?*

*** Calculate angles and focal length based on Wolter equation for telescope data files :extended:

The following, initially adapted from
[[file:~/org/Mails/llnlAxionImage/llnl_axion_image.org::#sec:expected_focal_length]]
and found in
[[file:~/org/Doc/LLNL_TrAXer_comparison/llnl_traxer_raytracing_comparison.org::#sec:llnl_traxer:focal_length_wolter_eq]]
and [[file:~/org/journal.org::#sec:journal:llnl_focal_length_wolter_eq]]
computes the expected focal length based on the Wolter equation:
\[ \tan(4α) = \frac{R}{f} \]
where the radius $R$ is the radius of the virtual reflection point of the
combined telescope system. The data files are
[[file:~/org/resources/LLNL_telescope/cast20l4_f1500mm_asBuilt.txt]] and
[[file:~/org/resources/LLNL_telescope/cast20l4_f1500mm_asDesigned.txt]].

What we can see below is that the numbers from Anders' PhD thesis yield a
telescope with a focal length of $\SI{1530}{mm}$ instead of the target. It
also computes the angles for all layers based on the radii.

The data file 'as built' is the one of main interest! The angles printed in
the table below for that case are the ones shown in the main section above.
#+begin_src nim :results raw import math, sequtils, datamancer const lMirror = 225.0 const xSep = 4.0 proc calcAngle(r1, r2: float): float = result = arcsin(abs(r1 - r2) / lMirror) proc calcR3(r1, lMirror: float, α: float): float = let r2 = r1 - lMirror * sin(α) result = r2 - 0.5 * xSep * tan(α) proc printTab(R1s, R2s, R4s, R5s: openArray[float]) = var df = newDataFrame() for i in 0 ..< R1s.len: let r1 = R1s[i] let r2 = R2s[i] let α = calcAngle(r1, r2) let r3 = calcR3(r1, lMirror, α) let r1minus = r1 - sin(α) * lMirror/2 let fr3 = r3 / tan(4 * α) let fr1m = r1minus / tan(4 * α) df.add (i: i+1, f_R3: fr3.float, f_R1m: fr1m.float, α: α.radToDeg, R3: r3) echo df.toOrgTable proc printBuiltTab(R1s, R2s, R4s, R5s: openArray[float]) = var df = newDataFrame() for i in 0 ..< R1s.len: let r1 = R1s[i] let r2 = R2s[i] let r4 = R4s[i] let r5 = R5s[i] let α = calcAngle(r1, r2) let α3 = calcAngle(r4, r5) let r3 = calcR3(r1, lMirror, α) df.add (i: i+1, R1: r1, R2: r2, R3: r3, R4: r4, R5: r5, α: α.radToDeg, α3: α3.radToDeg) echo df.rename(f{"3α" <- "α3"}).toOrgTable(precision = 6) block AndersPhD: let R1s = @[63.006, 65.606, 68.305, 71.105, 74.011, 77.027, 80.157, 83.405, 86.775, 90.272, 93.902, 97.668, 101.576, 105.632] let αs = @[0.579, 0.603, 0.628, 0.654, 0.680, 0.708, 0.737, 0.767, 0.798, 0.830, 0.863, 0.898, 0.933, 0.970] proc calcR2(r1, α: float): float = r1 - sin(α.degToRad) * lMirror let R2s = toSeq(0 ..< R1s.len).mapIt(calcR2(R1s[it], αs[it])) echo "Using values from Anders PhD thesis" printTab(R1s, R2s, [], []) block AsDesigned: # `cast20l4_f1500mm_asDesigned.txt` let R1s = [ 63.2412, 65.8741, 68.6075, 71.4450, 74.3908, 77.4488, 80.6233, 83.9188, 87.3398, 90.8911, 94.5775, 98.4043, 102.377 ] let R2s = [ 60.9149, 63.4511, 66.0840, 68.8173, 71.6549, 74.6006, 77.6586, 80.8331, 84.1286, 87.5496, 91.1008, 94.7872, 98.6139 ] let R4s = [ 60.8322, 63.3650, 65.9942, 68.7239, 71.5576, 74.4993, 77.5532, 80.7234, 84.0144, 87.4307, 90.9771, 94.6586, 98.4801 ] let R5s = [ 53.8513, 
56.0936, 58.4213, 60.8379, 63.3467, 65.9511, 68.6549, 71.4617, 74.3755, 77.4003, 80.5403, 83.7999, 87.1836 ] let diffs = [ 10.3390, 10.7690, 11.2150, 11.6780, 12.1590, 12.6580, 13.1760, 13.7130, 14.2710, 14.8500, 15.4510, 16.0740, 16.7210 ] echo "Using values from .txt file 'as designed'" printTab(R1s, R2s, R4s, R5s) block AsBuilt: # `cast20l4_f1500mm_asBuilt.txt` # These are the numbers from the "as built" text file let R1s = [ 63.2384, 65.8700, 68.6059, 71.4175, 74.4006, 77.4496, 80.6099, 83.9198, 87.3402, 90.8910, 94.5780, 98.3908, 102.381 ] let R2s = [ 60.9121, 63.4470, 66.0824, 68.7898, 71.6647, 74.6014, 77.6452, 80.8341, 84.1290, 87.5495, 91.1013, 94.7737, 98.6187 ] let R4s = [ 60.8632, 63.3197, 65.9637, 68.6794, 71.5582, 74.4997, 77.5496, 80.7305, 84.0137, 87.4316, 90.9865, 94.6549, 98.4879 ] let R5s = [ 53.8823, 56.0483, 58.3908, 60.7934, 63.3473, 65.9515, 68.6513, 71.4688, 74.3748, 77.4012, 80.5497, 83.7962, 87.1914 ] # this last one should be the difference between R5 and R1 let diffs = [ 10.339, 10.769, 11.216, 11.679, 12.160, 12.659, 13.176, 13.714, 14.272, 14.851, 15.452, 16.076, 16.725 ] echo "Using values from .txt file 'as built'" printTab(R1s, R2s, R4s, R5s) # And now print the table to use in thesis echo "Overview of radii and angles of telescope was built:" printBuiltTab(R1s, R2s, R4s, R5s) #+end_src #+RESULTS: Using values from Anders PhD thesis | i | f_R3 | f_R1m | α | R3 | |----+-----------+-----------+-------+-----------| | 0 | 1501.1452 | 1529.7542 | 0.579 | 60.712099 | | 1 | 1500.7995 | 1529.4071 | 0.603 | 63.217019 | | 2 | 1500.2462 | 1528.8523 | 0.628 | 65.816977 | | 3 | 1499.554 | 1528.1586 | 0.654 | 68.513974 | | 4 | 1501.1365 | 1529.7394 | 0.68 | 71.316971 | | 5 | 1500.4047 | 1529.0057 | 0.708 | 74.222046 | | 6 | 1499.816 | 1528.4149 | 0.737 | 77.23716 | | 7 | 1499.4292 | 1528.026 | 0.767 | 80.366313 | | 8 | 1499.2931 | 1527.8876 | 0.798 | 83.613505 | | 9 | 1499.4644 | 1528.0564 | 0.83 | 86.983737 | | 10 | 1500.0058 | 1528.5952 | 0.863 
| 90.483008 | | 11 | 1499.1816 | 1527.768 | 0.898 | 94.110358 | | 12 | 1500.579 | 1529.1623 | 0.933 | 97.879709 | | 13 | 1500.8171 | 1529.397 | 0.97 | 101.78914 | Using values from .txt file 'as designed' | i | f_R3 | f_R1m | α | R3 | |----+-----------+-----------+------------+-----------| | 0 | 1471.5582 | 1500.1664 | 0.59239799 | 60.894221 | | 1 | 1471.5794 | 1500.1861 | 0.61702381 | 63.429561 | | 2 | 1471.5244 | 1500.1297 | 0.64261747 | 66.061567 | | 3 | 1471.5363 | 1500.1398 | 0.66915352 | 68.793941 | | 4 | 1471.5242 | 1500.1259 | 0.69670838 | 71.630579 | | 5 | 1471.5125 | 1500.1123 | 0.72530755 | 74.575281 | | 6 | 1471.5293 | 1500.127 | 0.7549765 | 77.632245 | | 7 | 1471.5027 | 1500.0982 | 0.78579169 | 80.805669 | | 8 | 1471.5144 | 1500.1074 | 0.81775313 | 84.100053 | | 9 | 1471.501 | 1500.0913 | 0.85093727 | 87.519895 | | 10 | 1471.4966 | 1500.0841 | 0.88536962 | 91.069892 | | 11 | 1471.453 | 1500.0374 | 0.92112663 | 94.755044 | | 12 | 1471.2913 | 1499.8723 | 0.95831023 | 98.580446 | Using values from .txt file 'as built' | i | f_R3 | f_R1m | α | R3 | |----+-----------+-----------+------------+-----------| | 0 | 1471.4905 | 1500.0987 | 0.59239799 | 60.891421 | | 1 | 1471.4842 | 1500.091 | 0.61702381 | 63.425461 | | 2 | 1471.4888 | 1500.094 | 0.64261747 | 66.059967 | | 3 | 1470.948 | 1499.5516 | 0.66915352 | 68.766441 | | 4 | 1471.7255 | 1500.3272 | 0.69670838 | 71.640379 | | 5 | 1471.5282 | 1500.128 | 0.72530755 | 74.576081 | | 6 | 1471.2753 | 1499.873 | 0.7549765 | 77.618845 | | 7 | 1471.5209 | 1500.1164 | 0.78579169 | 80.806669 | | 8 | 1471.5214 | 1500.1144 | 0.81775313 | 84.100453 | | 9 | 1471.4993 | 1500.0896 | 0.85093727 | 87.519795 | | 10 | 1471.5047 | 1500.0922 | 0.88536962 | 91.070392 | | 11 | 1471.2434 | 1499.8278 | 0.92112663 | 94.741544 | | 12 | 1471.6768 | 1500.2579 | 0.95810648 | 98.585253 | Overview of radii and angles of telescope was built: | i | R1 | R2 | R3 | R4 | R5 | α | 3α | 
|----+---------+---------+---------+---------+---------+----------+---------| | 0 | 63.2384 | 60.9121 | 60.8914 | 60.8632 | 53.8823 | 0.592398 | 1.77796 | | 1 | 65.87 | 63.447 | 63.4255 | 63.3197 | 56.0483 | 0.617024 | 1.85197 | | 2 | 68.6059 | 66.0824 | 66.06 | 65.9637 | 58.3908 | 0.642617 | 1.92879 | | 3 | 71.4175 | 68.7898 | 68.7664 | 68.6794 | 60.7934 | 0.669154 | 2.00856 | | 4 | 74.4006 | 71.6647 | 71.6404 | 71.5582 | 63.3473 | 0.696708 | 2.09135 | | 5 | 77.4496 | 74.6014 | 74.5761 | 74.4997 | 65.9515 | 0.725308 | 2.17731 | | 6 | 80.6099 | 77.6452 | 77.6188 | 77.5496 | 68.6513 | 0.754977 | 2.26652 | | 7 | 83.9198 | 80.8341 | 80.8067 | 80.7305 | 71.4688 | 0.785792 | 2.35914 | | 8 | 87.3402 | 84.129 | 84.1005 | 84.0137 | 74.3748 | 0.817753 | 2.45528 | | 9 | 90.891 | 87.5495 | 87.5198 | 87.4316 | 77.4012 | 0.850937 | 2.55507 | | 10 | 94.578 | 91.1013 | 91.0704 | 90.9865 | 80.5497 | 0.88537 | 2.65866 | | 11 | 98.3908 | 94.7737 | 94.7415 | 94.6549 | 83.7962 | 0.921127 | 2.76622 | | 12 | 102.381 | 98.6187 | 98.5853 | 98.4879 | 87.1914 | 0.958106 | 2.87784 | :shocked_face: :exploding_head: *** Create the schematic of the layers :noexport: The final schematic included above is: [[~/org/Figs/llnlExplanation/llnl_layers_explanation.pdf]] The same directory contains the Inkscape SVG as well. For the schematic we want a few annotations. 
x separation:
#+begin_src nim :results none
import latexdsl
let body = r"$x_{\text{sep}} = \SI{4}{mm}$"
compile("/tmp/xsep.tex", body)
#+end_src

mirror length:
#+begin_src nim :results none
import latexdsl
let body = r"$l = \SI{225}{mm}$"
compile("/tmp/mirror_length.tex", body)
#+end_src

#+begin_src nim :results none
import latexdsl
let body = r"$l_{\text{tel}} = \SI{454}{mm}$"
compile("/tmp/telescope_length.tex", body)
#+end_src

#+begin_src nim :results none
import latexdsl
let body = r"$α$"
compile("/tmp/alpha.tex", body)
let body2 = r"$3α$"
compile("/tmp/3alpha.tex", body2)
#+end_src

#+begin_src nim :results none
import latexdsl
let r1 = r"$R_1$"
compile("/tmp/r1.tex", r1)
let r2 = r"$R_2$"
compile("/tmp/r2.tex", r2)
let r3 = r"$R_3$"
compile("/tmp/r3.tex", r3)
let r4 = r"$R_4$"
compile("/tmp/r4.tex", r4)
let r5 = r"$R_5$"
compile("/tmp/r5.tex", r5)
let r3v = r"$R_{3,\text{virtual}}$"
compile("/tmp/r3v.tex", r3v)
#+end_src

Wolter equation:
#+begin_src nim :results none
import latexdsl
let wolter = r"$\tan(4α) = \frac{R_{3,\text{virtual}}}{f}$"
compile("/tmp/wolter.tex", wolter)
#+end_src

Focal length:
#+begin_src nim :results none
import latexdsl
let focal = r"$f = \SI{1500}{mm}$"
compile("/tmp/focal.tex", focal)
#+end_src

** Comparison of TrAXer results with PANTER measurements
:PROPERTIES:
:CUSTOM_ID: sec:appendix:raytracing:panter
:END:

The LLNL telescope was tested and characterized at the PANTER X-ray test
facility in Munich in July 2016. These tests were reported on during the
$62^{\text{nd}}$ CAST collaboration meeting (CCM) on <2016-09-26 Mon> and
again in the CCM on <2017-01-23 Mon>. [fn:slides] These results are valuable
to us, because they can be used as a cross-check to verify our ~TrAXer~
based raytracing simulations.

At the PANTER facility the X-ray source sits $\SI{130.297}{m}$ away from the
center of the telescope (the plane of the virtual reflection). It can be
approximated as a source of radius $\SI{0.42}{mm}$.
The detector used to measure at PANTER is installed on the optical axis
(i.e. directly 'in front of' the source). The telescope itself is offset by
the radius of its shells, such that its optical axis aligns with the optical
axis defined by the X-ray beam (instead of aligning the telescope entrance
with the beam axis and having the detector offset from it!). In this way the
light entering the telescope does not enter perpendicular to the opening,
but under an angle.

The setup of having such an effective point source implies that the best
focal point will not be in the physical focal spot defined by the focal
length, but slightly behind it. At PANTER this was measured to be at
$\SI{1519}{mm}$ instead of $\SI{1500}{mm}$, which provides a first check of
our raytracer. When reproducing the same setup in TrAXer we find the
smallest, most symmetric spot around the $\SIrange{1519}{1520}{mm}$ mark as
well. [fn:precise]

In this setup three different X-ray fluorescence lines were measured:
$\ce{Al}$ Kα ($\SI{1.49}{keV}$), $\ce{Ti}$ Kα ($\SI{4.51}{keV}$) and
$\ce{Fe}$ Kα ($\SI{6.41}{keV}$). We can compare the images between
measurements and simulations as well as compute the 'half power diameter'
(HPD). The HPD is the diameter of the circle around the image center (based
on the weighted mean center) which contains $\SI{50}{\%}$ of the flux. This
is computed based on the encircled energy function (EEF): integrate the flux
in radial slices around the center, compute a radial cumulative distribution
function and find the radius corresponding to $\SI{50}{\%}$; the HPD is
twice this radius.

Let's consider the aluminum Kα line.
Fig. sref:fig:appendix:raytracing:comparison_alKalpha compares the PANTER
data with the LLNL raytracing result and our raytracing
result. [fn:scene_panter] All three plots show the inner
$\num{3}·\SI{3}{mm²}$ of the image. Generally, the agreement is very
good. The "bow tie" shape can be seen in all three figures.
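The shift of the best focus behind the nominal focal point can be estimated
from simple imaging optics. Treating the optic as a thin lens with
$1/f = 1/g + 1/b$ is only a rough approximation for a grazing-incidence
telescope, but it reproduces the measured number well:

#+begin_src python
f = 1500.0    # nominal focal length [mm]
g = 130.297e3 # distance of the PANTER source to the optic [mm]
# Imaging relation for a source at finite distance g:
# 1/f = 1/g + 1/b  =>  image forms at distance b behind the optic
b = 1.0 / (1.0 / f - 1.0 / g)
print(round(b, 1))  # ≈ 1517.5 mm, close to the measured 1519 mm
#+end_src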
#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.3)
  (caption "PANTER data")
  (label "fig:appendix:raytracing:panter_data_alK")
  (includegraphics (list (cons 'width (linewidth 0.95))) "/home/basti/phd/Figs/raytracing/llnlPanter/PANTER_LLNL_CCM_slide_11_cropped_panter_data.pdf"))
 (subfigure (linewidth 0.3)
  (caption "LLNL simulation")
  (label "fig:appendix:raytracing:simulation_alK")
  (includegraphics (list (cons 'width (linewidth 0.95))) "/home/basti/phd/Figs/raytracing/llnlPanter/PANTER_LLNL_CCM_slide_11_cropped_simulation.pdf"))
 (subfigure (linewidth 0.39)
  (caption "TrAXer")
  (label "fig:appendix:raytracing:traxer_sim_alK")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/raytracing/llnlPanter/panter_source_Al_Kalpha_imperfect_mirrors_3x3mm_fWidth0.3.pdf"))
 (caption (subref "fig:appendix:raytracing:panter_data_alK") ": Data of the telescope taken at PANTER. " (subref "fig:appendix:raytracing:simulation_alK") ": Raytracing simulation performed by M. Pivovaroff at LLNL. Both of these are taken from the CCM in Jan 2017, slide 11. " (subref "fig:appendix:raytracing:traxer_sim_alK") ": Simulation performed with TrAXer. All three plots show the inner " ($ "\\num{3}·\\SI{3}{mm²}") " of the image.")
 (label "fig:appendix:raytracing:comparison_alKalpha"))
#+end_src

Computing the encircled energy function yields a figure as shown in
fig. [[fig:appendix:raytracing:traxer_encircled_energy_function]]. We see the
HPD shown as the red line at a radius of
$r_{\text{HPD}} \approx \SI{0.79}{mm}$. This is converted to an HPD in arc
seconds via
\[ α_{\text{HPD}} = 2 \arctan\left( \frac{r_{\text{HPD}}}{f} \right) \]
where $f$ is the focal length of the optic and the factor $2$ accounts for
the HPD being a diameter, while $r_{\text{HPD}}$ is a radius. This leads to
an HPD of $\SI{216.38}{\arcsecond}$, which is slightly above both the LLNL
raytracing simulation (PANTER model) and the PANTER data. Doing this for all
three fluorescence lines, we get the numbers shown in
tab. [[tab:appendix:llnl_panter_eef_hpd_measurements]], where the differences
are even smaller.
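As a quick numerical check of this conversion (a Python sketch; the HPD is a
diameter, so the radius-based angle is doubled):

#+begin_src python
import math

r_hpd = 0.79   # radius containing 50 % of the flux [mm], read off the EEF plot
f     = 1500.0 # focal length of the optic [mm]
# The HPD is a diameter, hence the factor 2 on the half-opening angle.
hpd_arcsec = 2 * math.degrees(math.atan(r_hpd / f)) * 3600
print(round(hpd_arcsec, 1))  # ≈ 217 arcsec
#+end_src

This is within half a percent of the quoted $\SI{216.38}{\arcsecond}$ (the
exact $r_{\text{HPD}}$ is slightly below $\SI{0.79}{mm}$).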
Therefore, our implementation of the raytracer including figure errors
yields results very compatible with the LLNL model developed for PANTER. The
usage of TrAXer as the tool of choice for the calculation of the axion image
seems justified. See sec. [[#sec:appendix:raytracing:figure_error_model]]
below for more details on how the figure error is introduced and how its
parameters are determined.

#+CAPTION: Encircled energy function of the aluminum Kα line, corresponding to the data
#+CAPTION: of fig. sref:fig:appendix:raytracing:traxer_sim_alK. The HPD comes out to
#+CAPTION: $\SI{186.45}{\arcsecond}$.
#+NAME: fig:appendix:raytracing:traxer_encircled_energy_function
[[~/phd/Figs/raytracing/llnlPanter/panter_source_Al_Kalpha_imperfect_mirrors_3x3mm_hpd_via_eef_50.pdf]]

#+CAPTION: Measurements and simulation results given on slides 18-20 of the
#+CAPTION: CCM slides Jan 2017. 'Point source' refers to LLNL raytracing simulations.
#+CAPTION: 'PANTER model' is an LLNL raytracing model adjusted to best match the
#+CAPTION: PANTER data.
#+CAPTION: Our raytracing results are the TrAXer rows.
#+CAPTION: TrAXer generally overestimates the HPD by about $\SI{20}{\arcsecond}$
#+CAPTION: compared to the perfect mirror LLNL raytracing results.
#+NAME: tab:appendix:llnl_panter_eef_hpd_measurements #+ATTR_LATEX: :booktabs t |--------------------------------+------------+------------+------------| | Al Kα (1.49 keV) | | | | |--------------------------------+------------+------------+------------| | | 50% (HPD) | 80% circle | 90% circle | |--------------------------------+------------+------------+------------| | Point source (perfect mirrors) | 168 arcsec | 270 arcsec | 313 arcsec | | Point source (figure errors) | 206 | 387 | 568 | | PANTER data | 206 | 397 | 549 | | PANTER model | 211 | 391 | 559 | | TrAXer (perfect mirrors) | 183.19 | 304.61 | 351.54 | | TrAXer (figure errors) | 216.38 | 389.72 | 559.79 | |--------------------------------+------------+------------+------------| |--------------------------------+------------+------------+------------| | Ti Kα (4.51 keV) | | | | |--------------------------------+------------+------------+------------| | | 50% (HPD) | 80% circle | 90% circle | | Point source (perfect mirrors) | 161 | 259 | 301 | | Point source (figure errors) | 202 | 382 | 566 | | PANTER data | 196 | 380 | 511 | | PANTER model | 206 | 380 | 559 | | TrAXer (perfect mirrors) | 174.84 | 288.54 | 333.75 | | TrAXer (figure errors) | 207.43 | 378.16 | 552.55 | |--------------------------------+------------+------------+------------| |--------------------------------+------------+------------+------------| | Fe Kα (6.41 keV) | | | | |--------------------------------+------------+------------+------------| | | 50% (HPD) | 80% circle | 90% circle | | Point source (perfect mirrors) | 144 | 233 | 265 | | Point source (figure errors) | 184 | 350 | 541 | | PANTER data | 196 | 364 | 483 | | PANTER model | 185 | 348 | 516 | | TrAXer (perfect mirrors) | 160.38 | 257.79 | 296.79 | | TrAXer (figure errors) | 189.75 | 345.20 | 518.51 | |--------------------------------+------------+------------+------------| [fn:slides] I mention the exact dates, because if you have access to the indico page of all the CAST 
collaboration meetings, this should make it easy to find the slides. [fn:precise] The differences on the order of $\SI{1}{mm}$ are very small. I haven't implemented an optimization routine to find the absolute smallest value precisely. This is by eye. [fn:scene_panter] I don't show a screenshot of the scene, because the source is obviously tiny and far away. The telescope and sensor are the same as in the CAST scene shown earlier. The CAST magnet bore is not included. *** TODOs for this section :noexport: - [ ] *TRY TO RERUN RAYTRACER WITH DIFFERENT FIGURE ERROR VALUES AND SEE WHAT HAPPENS* - [ ] *TRY RUNNING AGAIN WITH A HOMOGENEOUS FIGURE ERROR* -> i.e. not sampling from a normal distribution - [X] *UPDATE NUMBERS IN TABLE!* - [X] *ADD 80% AND 90% VALUES.* #+begin_src nim import math echo arctan(1.3559 / 2.0 / 1519.0).radToDeg * 3600.0 * 2.0 echo arctan(1.3 / 1500.0).radToDeg * 3600.0 * 2.0 echo arctan(1.3559 / 1500.0).radToDeg * 3600.0 * 2.0 #+end_src #+RESULTS: | 184.117466899582 | | 357.5255746478411 | | 372.8991661558662 | Old paragraph: #+begin_quote What our simulation struggles with is how the figure error affects the encircled energy function, especially visible in the table for the $\SI{90}{\%}$ circles. This is likely due to mishandling how a figure error should be simulated. The current ~TrAXer~ implementation for figure errors takes the reflected ray from a mirror surface and adds another small vector to the reflected ray. The small vector is sampled from a 3D normal distribution with a standard deviation such that it best reproduces the PANTER measurements (without figure errors, the image in fig. sref:fig:appendix:raytracing:traxer_sim_alK is very sharply defined). However, it seems that it does not handle possible scattering to very large deviations correctly. In a realistic simulation one would use a map of the real surface roughness of each mirror shell (as done in [[cite:&nustar_dtu_phd]] for the NuSTAR telescope using ~MT_RAYOR~). 
As mentioned previously, based on the visual difference between our
simulation and the real PANTER data it may appear as if the figure
error is /too large/. But what seems to be happening is that our figure
error causes a /radial/ enlargement, whereas the real figure error
enlarges the data more along one axis than the other, increasing the
HPD in size.
#+end_quote

*** Definition of the figure error and parameter determination
:PROPERTIES:
:CUSTOM_ID: sec:appendix:raytracing:figure_error_model
:END:

The figure error implementation has multiple free parameters.
Generally speaking, upon reflection on a mirror surface the reflected
direction is slightly modified. Let $\vec{r}_i$ be the incoming ray,
$\vec{n}$ the surface normal on which the ray reflects and $\vec{r}_r$
the reflected ray. Reflection happens in the plane defined by
$\vec{n}$ and $\vec{r}_{i,r}$. For a realistic figure error we wish to
mainly vary the reflected direction in this plane around $\vec{r}_r$,
but still allow for smaller deviations in the orthogonal direction.
The vector orthogonal to this plane is
\[ \vec{n}_{\perp} = \vec{n} \times \vec{r}_r. \]
This orthogonal vector $\vec{n}_{\perp}$ further allows us to
construct a vector that is orthogonal to $\vec{r}_r$ but in the plane
of the incoming and outgoing ray,
\[ \vec{s} = \vec{n}_{\perp} \times \vec{r}_r. \]
Based on two fuzzing parameters $f_{\parallel}$ and $f_{\perp}$ we can
then define the real reflected ray to be
\[ \vec{r}_{\text{fuzzed}} = \vec{p}_{\text{hit}} + \vec{r}_r + f_{\parallel} \cdot \vec{s} + f_{\perp} \cdot \vec{n}_{\perp}, \]
where $\vec{p}_{\text{hit}}$ is the point at which the ray hit a
mirror shell. Generally, $f_{\parallel}$ and $f_{\perp}$ are sampled
from normal distributions. In practice only $f_{\perp}$ is directly
sampled from a single normal distribution, while $f_{\parallel}$ is --
effectively -- a mix of a narrow and a wide normal distribution.
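To make the construction concrete, the following is a minimal sketch in Python (TrAXer itself is written in Nim). The names ~fuzz_reflect~, ~sigma_par~ and ~sigma_perp~ are illustrative, and a single normal distribution stands in for the narrow/wide mix actually used for $f_{\parallel}$:

```python
import math
import random

def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def fuzz_reflect(r_i, n, sigma_par, sigma_perp):
    """Specularly reflect r_i off a surface with normal n, then perturb
    the reflected direction: mainly within the plane of incidence
    (f_par along s), slightly orthogonal to it (f_perp along n_perp).
    Returns the fuzzed *direction*, to be added to the hit point."""
    r_i, n = normalize(r_i), normalize(n)
    # ideal specular reflection: r_r = r_i - 2 (r_i . n) n
    d = sum(a * b for a, b in zip(r_i, n))
    r_r = [a - 2.0 * d * b for a, b in zip(r_i, n)]
    n_perp = normalize(cross(n, r_r))  # orthogonal to plane of incidence
    s = normalize(cross(n_perp, r_r))  # in-plane, orthogonal to r_r
    f_par = random.gauss(0.0, sigma_par)
    f_perp = random.gauss(0.0, sigma_perp)
    return normalize([r + f_par * a + f_perp * b
                      for r, a, b in zip(r_r, s, n_perp)])
```

With both sigmas set to zero this reduces to ideal specular reflection.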
In total there are $\num{5}$ input parameters, which define the final
figure error model. In order to find the best matching parameters, we
perform non-linear optimization using a global, derivative-free
algorithm [fn:nlopt_algo]. The optimization calls the raytracer with a
set of parameters in batch mode [fn:batch_mode], traces a fixed number
of rays, computes the HPD and the radii containing $\SI{80}{\%}$ and
$\SI{90}{\%}$ of the flux, and finally computes the mean squared error
between these values and the given target values. Ideally of course,
one would use a map of the real surface roughness of each mirror shell
(as done in [[cite:&nustar_dtu_phd]] for the NuSTAR telescope using
~MT_RAYOR~) as the basis for a more accurate reflection on the
surface.

See fig. sref:fig:appendix:raytracing:comparison_traxer_perfect_imperfect
for a comparison of the image without any figure error (assuming
perfect mirrors) and with the final figure errors (the ones used in
the table and plots of the previous section). The right plot shows the
same data as seen in fig. sref:fig:appendix:raytracing:traxer_sim_alK
above. The difference in the sharpness of the data is massive.

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "Perfect mirrors")
  (label "fig:appendix:raytracing:traxer_alK_perfect")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/raytracing/llnlPanter/panter_source_Al_Kalpha_perfect_mirrors_3x3mm_fWidth0.5.pdf"))
 (subfigure (linewidth 0.5)
  (caption "Imperfect mirrors")
  (label "fig:appendix:raytracing:traxer_alK_imperfect")
  (includegraphics (list (cons 'width (linewidth 1.0))) "~/phd/Figs/raytracing/llnlPanter/panter_source_Al_Kalpha_imperfect_mirrors_3x3mm_fWidth0.5.pdf"))
 (caption (subref "fig:appendix:raytracing:traxer_alK_perfect")
          ": TrAXer simulation for Al Kα without any figure error. The resulting image is extremely well defined. "
          (subref "fig:appendix:raytracing:traxer_alK_imperfect")
          ": TrAXer simulation of Al Kα including a figure error. This is the same image as shown in "
          (subref "fig:appendix:raytracing:traxer_sim_alK") ".")
 (label "fig:appendix:raytracing:comparison_traxer_perfect_imperfect"))
#+end_src

[fn:nlopt_algo] To be precise, the ~NLOPT_GN_DIRECT_L~ algorithm of
the NLopt cite:NLopt library, an implementation of the locally biased
'DIviding RECTangles' algorithm cite:jones93_direct,gablonsky01_direct_L.
The algorithm performs a hyperrectangle search of the bounded
parameter space. Note that many other algorithms would work fine too,
but we want a global search algorithm, as the parameter space possibly
contains multiple local minima. A derivative-free method is an
advantage, as each iteration is expensive: the entire raytracer has to
run to accumulate enough statistics ($\mathcal{O}(\SI{10}{s})$).

[fn:batch_mode] Batch mode simply refers to running only the X-ray
raytracer, without interactivity, a graphical interface or the visible
light raytracer.

**** Determining the optimal parameters :extended:

The optimal parameters in use today were determined starting from the
Al Kα line. The program performing the optimization is
~optimize_figure_error.nim~ in the TrAXer repository [[cite:&traxer]].
#+begin_src sh
./optimize_figure_error \
    --hpd 206 --c80 397 --c90 568 \
    --bufOutdir out_GN_DIRECT_L \
    --shmFile /dev/shm/image_sensor_GN_DIRECT_L.dat
#+end_src
It calls the raytracer with a set of fuzzing parameters, computes the
EEF for the HPD, c80 and c90 radii and compares them to the given
arguments (these are the target values for the PANTER data of the Al
Kα line, see tab. [[tab:appendix:llnl_panter_eef_hpd_measurements]]).
See the NLopt documentation about the algorithm here:
https://nlopt.readthedocs.io/en/latest/NLopt_Algorithms/#direct-and-direct-l

To run the program, after checking out the repository you first need
to compile the raytracer,
#+begin_src sh
nim c -d:ThreadPoolSize=32 -d:danger raytracer.nim
#+end_src
and the ~calc_eef~ helper:
#+begin_src sh
nim c -d:danger calc_eef.nim
#+end_src

*** More notes :extended:

See the following documents for more information about the LLNL
telescope and the raytracer. They document the resolution of various
confusions mentioned above (and more), the reasoning for how I ended
up finishing ~TrAXer~ in the first place (the reply to Igor), its
development notes and further questions, and finally much more about
these PANTER / TrAXer comparisons.
- [[~/org/Doc/LLNL_def_REST_format/llnl_def_rest_format.org]]
- [[~/org/Mails/igorReplyLLNL/igor_reply_llnl_axion_image.org]]
- [[~/org/Doc/interactive_raytracer_development_notes.pdf]] -> These are part of [[file:~/org/Doc/StatusAndProgress.org::#sec:interactive_raytracing_development_notes]]!
- [[~/org/Mails/llnlAxionImage/llnl_axion_image.org]]
- [[~/org/Doc/LLNL_TrAXer_comparison/llnl_traxer_raytracing_comparison.org]]

*** Half-power diameter definition / calculation :extended:

Slide 10 of the slides from the $62^{\text{nd}}$ CCM contains the
following explanation of how the HPD should be calculated:
#+begin_quote
- Actually, more useful to consider the encircled energy function (EEF)
- Draw a circle around the PSF, integrate the flux, record the value
- Repeat process, building up cumulative distribution as a function of radius; normalize to unity
- It’s the integral of the PSF
- The half-power diameter (HPD) is just diameter where the EEF = 50%
#+end_quote
The slide in the CCM from Jan 2017 showing their EEFs is slide 17,
fig. [[fig:appendix:raytracing:lln_traxer_compare:radial_cdfs_eef_slide17]].
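The recipe from the quote can be sketched as follows. This is a Python illustration, not TrAXer code; ~pixels~ is a hypothetical list of ~(x, y, flux)~ samples:

```python
import math

def hpd(pixels, cx, cy, fraction=0.5):
    """Encircled energy function: accumulate flux in order of
    increasing radius around (cx, cy) and return the *diameter* at
    which the normalized EEF reaches `fraction` (0.5 -> HPD)."""
    by_radius = sorted((math.hypot(x - cx, y - cy), flux)
                       for x, y, flux in pixels)
    total = sum(flux for _, flux in by_radius)
    acc = 0.0
    for r, flux in by_radius:
        acc += flux
        if acc >= fraction * total:
            return 2.0 * r  # diameter, not radius
    return 2.0 * by_radius[-1][0]
```

A diameter $d$ (in mm) at a focal length $f$ then presumably converts to an angle via $2 \arctan\left(\frac{d}{2f}\right)$, which matches the arcsecond annotations in the ~plotBinary~ output shown further below.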
#+CAPTION: Radial cumulative distribution functions of the signal, in other words
#+CAPTION: the 'encircled energy function' (EEF), for the three fluorescence lines.
#+NAME: fig:appendix:raytracing:lln_traxer_compare:radial_cdfs_eef_slide17
[[~/org/Figs/statusAndProgress/LLNL_telescope/PANTER_LLNL_CCM_slide_17.pdf]]

*** Comparison of slices along short axis :extended:

The CCM presentation from Jan 2017 also contains plots that sum the
signal along each axis in the data. These can be used for a more
precise determination of whether our signal is too wide (due to a too
large figure error) or not. See slide 12,
[[~/org/Figs/statusAndProgress/LLNL_telescope/PANTER_LLNL_CCM_slide_12.pdf]],
namely the plot in the bottom right. It is quite hard to read precise
numbers off it, but the $\SI{50}{\%}$ amplitude is roughly near the
$\SI{0.1}{mm}$ mark. Reproducing a similar plot with our data yields
fig. [[fig:appendix:raytracing:traxer_alKalpha_short_axis_sum]]. We
see that the $\SI{50}{\%}$ mark here is at $\SI{0.13}{mm}$. This may
be a slight indication that our result is somewhat too wide. In any
case, the actual problem with the HPD mentioned in the main text
remains.

#+CAPTION: Cut along the short axis, summing the long axis for each bin for aluminium
#+CAPTION: Kα. This yields a $\SI{50}{\%}$ amplitude width of about $\SI{0.13}{mm}$.
#+NAME: fig:appendix:raytracing:traxer_alKalpha_short_axis_sum
[[~/phd/Figs/raytracing/llnlPanter/panter_source_Al_Kalpha_imperfect_mirrors_3x3mm_hpd_x.pdf]]

*** More on figure errors :extended:

One of the main issues with figure errors -- aside from not being very
well defined -- is that they are usually quoted as a single number for
the entire X-ray optic. This is problematic when implementing them in
a raytracer, however, because the figure error is a property of the
material surface. And because an X-ray optic works by a /double/
reflection, the uncertainty is squared (in a non-trivial way).
Therefore, it is not straightforward to compute, from a real
measurement, a correct number to apply to the uncertainty of the
scattering on the mirror surfaces. (Hey, if you do know how to do it,
let me know!)

*** Produce $\ce{Al}Kα$ plot from ~TrAXer~ data :extended:

(We define a simple helper script to extract the last few lines of
output from the raytracing program
#+NAME: nim-head
#+begin_src nim :var data="" :var lines=5 :exports code
import strutils
# `data` holds the raw output of the previous block; print its last `lines` lines.
let ls = data.strip.splitLines
for i in max(ls.len - lines, 0) ..< ls.len:
  echo ls[i]
#+end_src
)

In order to generate the data file needed for the HPD calculation and
the plots used above, we first need to run the raytracer to produce
the binary data files and then use ~plotBinary~ to create the
plots. Let's produce all three targets now. The HPD values are
extracted from the output of ~plotBinary~ or from the
~*_hpd_via_eef.pdf~ plot annotations.

**** Al Kα

Run the ~raytracer~, note the ~energyMin~ and ~energyMax~.
#+begin_src sh :post nim-head(*this*,lines=5) :eval no-export
./raytracer \
    --width 1200 --speed 10.0 --nJobs 32 --vfov 45 --maxDepth 10 \
    --llnl --focalPoint \
    --sourceKind skXrayFinger \
    --rayAt 1.013 \
    --sensorKind sSum \
    --energyMin 1.48 --energyMax 1.50 \
    --usePerfectMirror=false \
    --ignoreWindow \
    --sourceDistance 130.297.m \
    --sourceRadius 0.42.mm \
    --telescopeRotation 90.0 \
    --sourceOnOpticalAxis \
    --ignoreMagnet \
    --targetRadius 40.mm
#+end_src

:RESULTS:
[INFO] Writing buffers to binary files.
[INFO] Writing file: out/buffer_2023-11-12T12:56:26+01:00_type_uint32_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/counts_2023-11-12T12:56:26+01:00_type_int_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/image_sensor_0_2023-11-12T12:56:26+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat
:END:

Once you deem that enough statistics has been accumulated, press ~F5~
on your keyboard to save all buffers to binary files.
The output filenames will be printed to the terminal. With the
produced binary files we can then create the plots and compute the
HPDs (note: we create the plots such that the text size matches for
the 3 plots side by side, then with ~F_WIDTH=0.9~ for the HPD plot and
with ~F_WIDTH=0.5~ for the perfect/imperfect comparison):
#+begin_src sh :dir ~/CastData/ExternCode/RayTracing/
F_WIDTH=0.3333333 USE_TEX=true ./plotBinary \
    --dtype float \
    -f out/image_sensor_0_2023-11-12T12:56:26+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat \
    --invertY \
    --out ~/phd/Figs/raytracing/llnlPanter/panter_source_Al_Kalpha_imperfect_mirrors_3x3mm_fWidth0.3.pdf \
    --inPixels=false \
    --title "Al Kα, 1.49 keV, 3x3 mm, figure errors, source optical axis" \
    --xrange 1.5
F_WIDTH=0.5 USE_TEX=true ./plotBinary \
    --dtype float \
    -f out/image_sensor_0_2023-11-12T12:56:26+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat \
    --invertY \
    --out ~/phd/Figs/raytracing/llnlPanter/panter_source_Al_Kalpha_imperfect_mirrors_3x3mm_fWidth0.5.pdf \
    --inPixels=false \
    --title "Al Kα, 1.49 keV, 3x3 mm, figure errors, source optical axis" \
    --xrange 1.5
USE_TEX=true ./plotBinary \
    --dtype float \
    -f out/image_sensor_0_2023-11-12T12:56:26+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat \
    --invertY \
    --out ~/phd/Figs/raytracing/llnlPanter/panter_source_Al_Kalpha_imperfect_mirrors_3x3mm.pdf \
    --inPixels=false \
    --title "Al Kα, 1.49 keV, 3x3 mm, figure errors, source optical axis" \
    --xrange 1.5
#+end_src

#+RESULTS:
| INFO: | using | default | height | of | 1 | since | no | raster | height | information | supplied. | Add | `height` | or | (`yMin`, | `yMax`) | as | aesthetics | for | a | different | values. |
| INFO: | using | default | width | of | 1 | since | no | raster | width | information | supplied. | Add | `width` | or | (`xMin`, | `xMax`) | as | aesthetics | for | a | different | values.
| | INFO: | using | default | height | of | 1 | since | no | raster | height | information | supplied. | Add | `height` | or | (`yMin`, | `yMax`) | as | aesthetics | for | a | different | values. | | INFO: | using | default | width | of | 1 | since | no | raster | width | information | supplied. | Add | `width` | or | (`xMin`, | `xMax`) | as | aesthetics | for | a | different | values. | | HPD | along | x: | 0.1959999999999997 | as | angle: | 26.42346523090174 '' | | | | | | | | | | | | | | | | | | HPD | along | y: | 2.688000000000001 | as | angle: | 362.3785808854751 '' | | | | | | | | | | | | | | | | | #+CAPTION: Our result Al Kα as 3x3 mm with imperfect mirrors [[~/phd/Figs/raytracing/llnlPanter/panter_source_Al_Kalpha_imperfect_mirrors_3x3mm.pdf]] #+CAPTION: Our result Al Kα as 3x3 mm with imperfect mirrors, log10 [[~/phd/Figs/raytracing/llnlPanter/panter_source_Al_Kalpha_imperfect_mirrors_3x3mm_log10.pdf]] #+CAPTION: HPD in X for imperfect mirrors of Al Kα [[~/phd/Figs/raytracing/llnlPanter/panter_source_Al_Kalpha_imperfect_mirrors_3x3mm_hpd_x.pdf]] #+CAPTION: HPD in Y for imperfect mirrors of Al Kα [[~/phd/Figs/raytracing/llnlPanter/panter_source_Al_Kalpha_imperfect_mirrors_3x3mm_hpd_y.pdf]] #+CAPTION: HPD of Al Kα computed from the EEF (radial cumulative distribution function, encircled energy) [[~/phd/Figs/raytracing/llnlPanter/panter_source_Al_Kalpha_imperfect_mirrors_3x3mm_hpd_via_eef_50.pdf]] ***** Perfect mirrors Perfect mirror: #+begin_src sh :post nim-head(*this*,lines=5) :eval no-export ./raytracer \ --width 1200 --speed 10.0 --nJobs 32 --vfov 45 --maxDepth 10 \ --llnl --focalPoint \ --sourceKind skXrayFinger \ --rayAt 1.013 \ --sensorKind sSum \ --energyMin 1.48 --energyMax 1.49 \ --usePerfectMirror=true \ --ignoreWindow \ --sourceDistance 130.297.m \ --sourceRadius 0.42.mm \ --telescopeRotation 90.0 \ --sourceOnOpticalAxis \ --ignoreMagnet \ --targetRadius 40.mm #+end_src :RESULTS: [INFO] Writing buffers to binary files. 
[INFO] Writing file: out/buffer_2023-11-11T09:31:09+01:00_type_uint32_len_1440000_width_1200_height_1200.dat [INFO] Writing file: out/counts_2023-11-11T09:31:09+01:00_type_int_len_1440000_width_1200_height_1200.dat [INFO] Writing file: out/image_sensor_0_2023-11-11T09:31:09+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat :END: Create the plots (only in 3x3 mm): #+Begin_src sh :dir ~/CastData/ExternCode/RayTracing/ F_WIDTH=0.5 USE_TEX=true ./plotBinary \ --dtype float \ -f out/image_sensor_0_2023-11-11T09:31:09+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat \ --invertY \ --out ~/phd/Figs/raytracing/llnlPanter/panter_source_Al_Kalpha_perfect_mirrors_3x3mm_fWidth0.5.pdf \ --inPixels=false \ --title "Al Kα, 1.49 keV, 3x3 mm, source optical axis" \ --xrange 1.5 #+end_src #+RESULTS: | INFO: | using | default | height | of | 1 | since | no | raster | height | information | supplied. | Add | `height` | or | (`yMin`, | `yMax`) | as | aesthetics | for | a | different | values. | | INFO: | using | default | width | of | 1 | since | no | raster | width | information | supplied. | Add | `width` | or | (`xMin`, | `xMax`) | as | aesthetics | for | a | different | values. | | INFO: | using | default | height | of | 1 | since | no | raster | height | information | supplied. | Add | `height` | or | (`yMin`, | `yMax`) | as | aesthetics | for | a | different | values. | | INFO: | using | default | width | of | 1 | since | no | raster | width | information | supplied. | Add | `width` | or | (`xMin`, | `xMax`) | as | aesthetics | for | a | different | values. 
| | HPD | along | x: | 0.1400000000000006 | as | angle: | 18.87390378692756 '' | | | | | | | | | | | | | | | | | | HPD | along | y: | 2.618 | as | angle: | 352.941657341418 '' | | | | | | | | | | | | | | | | | | Val | at: | 0.9 | = | 351.5354019605708 | | | | | | | | | | | | | | | | | | | | Sum | below | 1.278219101380229 | : | 215505260.1427205 | | | | | | | | | | | | | | | | | | | | Sum | above | 1.278219101380229 | : | 23943483.16359809 | | | | | | | | | | | | | | | | | | | | Val | at: | 0.8 | = | 304.6164587588692 | | | | | | | | | | | | | | | | | | | | Sum | below | 1.107616901540538 | : | 191568517.2606231 | | | | | | | | | | | | | | | | | | | | Sum | above | 1.107616901540538 | : | 47880226.04569627 | | | | | | | | | | | | | | | | | | | | Val | at: | 0.5 | = | 183.1855614136018 | | | | | | | | | | | | | | | | | | | | Sum | below | 0.6660815414431028 | : | 119737938.6530822 | | | | | | | | | | | | | | | | | | | | Sum | above | 0.6660815414431028 | : | 119710804.6532382 | | | | | | | | | | | | | | | | | | | Produces the following figures: #+CAPTION: Our result Al Kα as 3x3 mm with perfect mirrors [[~/phd/Figs/raytracing/llnlPanter/panter_source_Al_Kalpha_perfect_mirrors_3x3mm.pdf]] #+CAPTION: Our result Al Kα as 3x3 mm with perfect mirrors, log10 [[~/phd/Figs/raytracing/llnlPanter/panter_source_Al_Kalpha_perfect_mirrors_3x3mm_log10.pdf]] #+CAPTION: HPD in X for perfect mirrors of Al Kα [[~/phd/Figs/raytracing/llnlPanter/panter_source_Al_Kalpha_perfect_mirrors_3x3mm_hpd_x.pdf]] #+CAPTION: HPD in Y for perfect mirrors of Al Kα [[~/phd/Figs/raytracing/llnlPanter/panter_source_Al_Kalpha_perfect_mirrors_3x3mm_hpd_y.pdf]] #+CAPTION: HPD of Al Kα computed from the EEF (radial cumulative distribution function, encircled energy) [[~/phd/Figs/raytracing/llnlPanter/panter_source_Al_Kalpha_perfect_mirrors_3x3mm_hpd_via_eef_50.pdf]] **** Ti Kα Run the ~raytracer~, note the ~energyMin~ and ~energyMax~. 
#+begin_src sh :post nim-head(*this*,lines=5) :eval no-export ./raytracer \ --width 1200 --speed 10.0 --nJobs 32 --vfov 45 --maxDepth 10 \ --llnl --focalPoint \ --sourceKind skXrayFinger \ --rayAt 1.013 \ --sensorKind sSum \ --energyMin 4.50 --energyMax 4.52 \ --usePerfectMirror=false \ --ignoreWindow \ --sourceDistance 130.297.m \ --sourceRadius 0.42.mm \ --telescopeRotation 90.0 \ --sourceOnOpticalAxis \ --ignoreMagnet \ --targetRadius 40.mm #+end_src :RESULTS: [INFO] Writing buffers to binary files. [INFO] Writing file: out/buffer_2023-11-12T12:58:39+01:00_type_uint32_len_1440000_width_1200_height_1200.dat [INFO] Writing file: out/counts_2023-11-12T12:58:39+01:00_type_int_len_1440000_width_1200_height_1200.dat [INFO] Writing file: out/image_sensor_0_2023-11-12T12:58:39+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat :END: #+begin_src sh :dir ~/CastData/ExternCode/RayTracing/ ./plotBinary \ --dtype float \ -f out/image_sensor_0_2023-11-12T12:58:39+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat \ --invertY \ --out ~/phd/Figs/raytracing/llnlPanter/panter_source_Ti_Kalpha_imperfect_mirrors_3x3mm.pdf \ --inPixels=false \ --title "Ti Kα, 4.51 keV, 3x3 mm, figure errors, source optical axis" \ --xrange 1.5 #+end_src #+RESULTS: | INFO: | using | default | height | of | 1 | since | no | raster | height | information | supplied. | Add | `height` | or | (`yMin`, | `yMax`) | as | aesthetics | for | a | different | values. | | INFO: | using | default | width | of | 1 | since | no | raster | width | information | supplied. | Add | `width` | or | (`xMin`, | `xMax`) | as | aesthetics | for | a | different | values. | | INFO: | using | default | height | of | 1 | since | no | raster | height | information | supplied. | Add | `height` | or | (`yMin`, | `yMax`) | as | aesthetics | for | a | different | values. | | INFO: | using | default | width | of | 1 | since | no | raster | width | information | supplied. 
| Add | `width` | or | (`xMin`, | `xMax`) | as | aesthetics | for | a | different | values. | | HPD | along | x: | 0.1959999999999997 | as | angle: | 26.42346523090174 '' | | | | | | | | | | | | | | | | | | HPD | along | y: | 2.534000000000001 | as | angle: | 341.617347141652 '' | | | | | | | | | | | | | | | | | #+CAPTION: Our result Ti Kα as 3x3 mm with imperfect mirrors [[~/phd/Figs/raytracing/llnlPanter/panter_source_Ti_Kalpha_imperfect_mirrors_3x3mm.pdf]] #+CAPTION: Our result Ti Kα as 3x3 mm with imperfect mirrors, log10 [[~/phd/Figs/raytracing/llnlPanter/panter_source_Ti_Kalpha_imperfect_mirrors_3x3mm_log10.pdf]] #+CAPTION: HPD in X for imperfect mirrors of Ti Kα [[~/phd/Figs/raytracing/llnlPanter/panter_source_Ti_Kalpha_imperfect_mirrors_3x3mm_hpd_x.pdf]] #+CAPTION: HPD in Y for imperfect mirrors of Ti Kα [[~/phd/Figs/raytracing/llnlPanter/panter_source_Ti_Kalpha_imperfect_mirrors_3x3mm_hpd_y.pdf]] #+CAPTION: HPD of Ti Kα computed from the EEF (radial cumulative distribution function, encircled energy) [[~/phd/Figs/raytracing/llnlPanter/panter_source_Ti_Kalpha_imperfect_mirrors_3x3mm_hpd_via_eef_50.pdf]] ***** Perfect mirrors Perfect mirror: #+begin_src sh :post nim-head(*this*,lines=5) :eval no-export ./raytracer \ --width 1200 --speed 10.0 --nJobs 32 --vfov 45 --maxDepth 10 \ --llnl --focalPoint \ --sourceKind skXrayFinger \ --rayAt 1.013 \ --sensorKind sSum \ --energyMin 4.50 --energyMax 4.52 \ --usePerfectMirror=true \ --ignoreWindow \ --sourceDistance 130.297.m \ --sourceRadius 0.42.mm \ --telescopeRotation 90.0 \ --sourceOnOpticalAxis \ --ignoreMagnet \ --targetRadius 40.mm #+end_src :RESULTS: [INFO] Writing buffers to binary files. 
[INFO] Writing file: out/buffer_2023-11-11T09:37:15+01:00_type_uint32_len_1440000_width_1200_height_1200.dat [INFO] Writing file: out/counts_2023-11-11T09:37:15+01:00_type_int_len_1440000_width_1200_height_1200.dat [INFO] Writing file: out/image_sensor_0_2023-11-11T09:37:15+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat :END: #+begin_src sh :dir ~/CastData/ExternCode/RayTracing/ ./plotBinary \ --dtype float \ -f out/image_sensor_0_2023-11-11T09:37:15+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat \ --invertY \ --out ~/phd/Figs/raytracing/llnlPanter/panter_source_Ti_Kalpha_perfect_mirrors_3x3mm.pdf \ --inPixels=false \ --title "Ti Kα, 4.51 keV, 3x3 mm, source optical axis" \ --xrange 1.5 #+end_src #+RESULTS: | INFO: | using | default | height | of | 1 | since | no | raster | height | information | supplied. | Add | `height` | or | (`yMin`, | `yMax`) | as | aesthetics | for | a | different | values. | | INFO: | using | default | width | of | 1 | since | no | raster | width | information | supplied. | Add | `width` | or | (`xMin`, | `xMax`) | as | aesthetics | for | a | different | values. | | INFO: | using | default | height | of | 1 | since | no | raster | height | information | supplied. | Add | `height` | or | (`yMin`, | `yMax`) | as | aesthetics | for | a | different | values. | | INFO: | using | default | width | of | 1 | since | no | raster | width | information | supplied. | Add | `width` | or | (`xMin`, | `xMax`) | as | aesthetics | for | a | different | values. 
| | HPD | along | x: | 0.1400000000000006 | as | angle: | 18.87390378692756 '' | | | | | | | | | | | | | | | | | | HPD | along | y: | 2.436 | as | angle: | 328.4056493105146 '' | | | | | | | | | | | | | | | | | | Val | at: | 0.9 | = | 333.7531445943796 | | | | | | | | | | | | | | | | | | | | Sum | below | 1.213560944372482 | : | 65510262.06201264 | | | | | | | | | | | | | | | | | | | | Sum | above | 1.213560944372482 | : | 7278495.262482432 | | | | | | | | | | | | | | | | | | | | Val | at: | 0.8 | = | 288.5410499685314 | | | | | | | | | | | | | | | | | | | | Sum | below | 1.049165035489549 | : | 58231807.0367901 | | | | | | | | | | | | | | | | | | | | Sum | above | 1.049165035489549 | : | 14556950.28770481 | | | | | | | | | | | | | | | | | | | | Val | at: | 0.5 | = | 174.8413254494009 | | | | | | | | | | | | | | | | | | | | Sum | below | 0.6357410375749298 | : | 36396804.70486 | | | | | | | | | | | | | | | | | | | | Sum | above | 0.6357410375749298 | : | 36391952.61963489 | | | | | | | | | | | | | | | | | | | #+CAPTION: Our result Ti Kα as 3x3 mm with perfect mirrors [[~/phd/Figs/raytracing/llnlPanter/panter_source_Ti_Kalpha_perfect_mirrors_3x3mm.pdf]] #+CAPTION: Our result Ti Kα as 3x3 mm with perfect mirrors, log10 [[~/phd/Figs/raytracing/llnlPanter/panter_source_Ti_Kalpha_perfect_mirrors_3x3mm_log10.pdf]] #+CAPTION: HPD in X for perfect mirrors of Ti Kα [[~/phd/Figs/raytracing/llnlPanter/panter_source_Ti_Kalpha_perfect_mirrors_3x3mm_hpd_x.pdf]] #+CAPTION: HPD in Y for perfect mirrors of Ti Kα [[~/phd/Figs/raytracing/llnlPanter/panter_source_Ti_Kalpha_perfect_mirrors_3x3mm_hpd_y.pdf]] #+CAPTION: HPD of Ti Kα computed from the EEF (radial cumulative distribution function, encircled energy) [[~/phd/Figs/raytracing/llnlPanter/panter_source_Ti_Kalpha_perfect_mirrors_3x3mm_hpd_via_eef_50.pdf]] **** Fe Kα Run the ~raytracer~, note the ~energyMin~ and ~energyMax~. 
#+begin_src sh :post nim-head(*this*,lines=5) :eval no-export ./raytracer \ --width 1200 --speed 10.0 --nJobs 32 --vfov 45 --maxDepth 10 \ --llnl --focalPoint \ --sourceKind skXrayFinger \ --rayAt 1.013 \ --sensorKind sSum \ --energyMin 6.39 --energyMax 6.41 \ --usePerfectMirror=false \ --ignoreWindow \ --sourceDistance 130.297.m \ --sourceRadius 0.42.mm \ --telescopeRotation 90.0 \ --sourceOnOpticalAxis \ --ignoreMagnet \ --targetRadius 40.mm #+end_src :RESULTS: [INFO] Writing buffers to binary files. [INFO] Writing file: out/buffer_2023-11-12T12:59:52+01:00_type_uint32_len_1440000_width_1200_height_1200.dat [INFO] Writing file: out/counts_2023-11-12T12:59:52+01:00_type_int_len_1440000_width_1200_height_1200.dat [INFO] Writing file: out/image_sensor_0_2023-11-12T12:59:52+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat :END: #+begin_src sh :dir ~/CastData/ExternCode/RayTracing/ ./plotBinary \ --dtype float \ -f out/image_sensor_0_2023-11-12T12:59:52+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat \ --invertY \ --out ~/phd/Figs/raytracing/llnlPanter/panter_source_Fe_Kalpha_imperfect_mirrors_3x3mm.pdf \ --inPixels=false \ --title "Fe Kα, 6.41 keV, 3x3 mm, figure errors, source optical axis" \ --xrange 1.5 #+end_src #+RESULTS: | INFO: | using | default | height | of | 1 | since | no | raster | height | information | supplied. | Add | `height` | or | (`yMin`, | `yMax`) | as | aesthetics | for | a | different | values. | | INFO: | using | default | width | of | 1 | since | no | raster | width | information | supplied. | Add | `width` | or | (`xMin`, | `xMax`) | as | aesthetics | for | a | different | values. | | INFO: | using | default | height | of | 1 | since | no | raster | height | information | supplied. | Add | `height` | or | (`yMin`, | `yMax`) | as | aesthetics | for | a | different | values. | | INFO: | using | default | width | of | 1 | since | no | raster | width | information | supplied. 
| Add | `width` | or | (`xMin`, | `xMax`) | as | aesthetics | for | a | different | values. | | HPD | along | x: | 0.1679999999999993 | as | angle: | 22.64868451649989 '' | | | | | | | | | | | | | | | | | | HPD | along | y: | 2.31 | as | angle: | 311.4191767261138 '' | | | | | | | | | | | | | | | | | #+CAPTION: Our result Fe Kα as 3x3 mm with imperfect mirrors [[~/phd/Figs/raytracing/llnlPanter/panter_source_Fe_Kalpha_imperfect_mirrors_3x3mm.pdf]] #+CAPTION: Our result Fe Kα as 3x3 mm with imperfect mirrors, log10 [[~/phd/Figs/raytracing/llnlPanter/panter_source_Fe_Kalpha_imperfect_mirrors_3x3mm_log10.pdf]] #+CAPTION: HPD in X for imperfect mirrors of Fe Kα [[~/phd/Figs/raytracing/llnlPanter/panter_source_Fe_Kalpha_imperfect_mirrors_3x3mm_hpd_x.pdf]] #+CAPTION: HPD in Y for imperfect mirrors of Fe Kα [[~/phd/Figs/raytracing/llnlPanter/panter_source_Fe_Kalpha_imperfect_mirrors_3x3mm_hpd_y.pdf]] #+CAPTION: HPD of Fe Kα computed from the EEF (radial cumulative distribution function, encircled energy) [[~/phd/Figs/raytracing/llnlPanter/panter_source_Fe_Kalpha_imperfect_mirrors_3x3mm_hpd_via_eef_50.pdf]] ***** Perfect mirrors Perfect mirror: #+begin_src sh :post nim-head(*this*,lines=5) :eval no-export ./raytracer \ --width 1200 --speed 10.0 --nJobs 32 --vfov 45 --maxDepth 10 \ --llnl --focalPoint \ --sourceKind skXrayFinger \ --rayAt 1.013 \ --sensorKind sSum \ --energyMin 6.39 --energyMax 6.41 \ --usePerfectMirror=true \ --ignoreWindow \ --sourceDistance 130.297.m \ --sourceRadius 0.42.mm \ --telescopeRotation 90.0 \ --sourceOnOpticalAxis \ --ignoreMagnet \ --targetRadius 40.mm #+end_src :RESULTS: [INFO] Writing buffers to binary files. 
[INFO] Writing file: out/buffer_2023-11-11T09:41:47+01:00_type_uint32_len_1440000_width_1200_height_1200.dat [INFO] Writing file: out/counts_2023-11-11T09:41:47+01:00_type_int_len_1440000_width_1200_height_1200.dat [INFO] Writing file: out/image_sensor_0_2023-11-11T09:41:47+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat :END: #+begin_src sh :dir ~/CastData/ExternCode/RayTracing/ ./plotBinary \ --dtype float \ -f out/image_sensor_0_2023-11-11T09:41:47+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat \ --invertY \ --out ~/phd/Figs/raytracing/llnlPanter/panter_source_Fe_Kalpha_perfect_mirrors_3x3mm.pdf \ --inPixels=false \ --title "Fe Kα, 6.41 keV, 3x3 mm, source optical axis" \ --xrange 1.5 #+end_src #+RESULTS: | INFO: | using | default | height | of | 1 | since | no | raster | height | information | supplied. | Add | `height` | or | (`yMin`, | `yMax`) | as | aesthetics | for | a | different | values. | | INFO: | using | default | width | of | 1 | since | no | raster | width | information | supplied. | Add | `width` | or | (`xMin`, | `xMax`) | as | aesthetics | for | a | different | values. | | INFO: | using | default | height | of | 1 | since | no | raster | height | information | supplied. | Add | `height` | or | (`yMin`, | `yMax`) | as | aesthetics | for | a | different | values. | | INFO: | using | default | width | of | 1 | since | no | raster | width | information | supplied. | Add | `width` | or | (`xMin`, | `xMax`) | as | aesthetics | for | a | different | values. 
| | HPD | along | x: | 0.1400000000000006 | as | angle: | 18.87390378692756 '' | | | | | | | | | | | | | | | | | | HPD | along | y: | 2.197999999999999 | as | angle: | 296.3200864311206 '' | | | | | | | | | | | | | | | | | | Val | at: | 0.9 | = | 296.523470064401 | | | | | | | | | | | | | | | | | | | | Sum | below | 1.0781899486169 | : | 48506492.61313087 | | | | | | | | | | | | | | | | | | | | Sum | above | 1.0781899486169 | : | 5385733.150556044 | | | | | | | | | | | | | | | | | | | | Val | at: | 0.8 | = | 257.7868500325422 | | | | | | | | | | | | | | | | | | | | Sum | below | 0.9373395598019377 | : | 43114338.96146736 | | | | | | | | | | | | | | | | | | | | Sum | above | 0.9373395598019377 | : | 10777886.8022196 | | | | | | | | | | | | | | | | | | | | Val | at: | 0.5 | = | 160.3741245847753 | | | | | | | | | | | | | | | | | | | | Sum | below | 0.5831368020869885 | : | 26950827.2941249 | | | | | | | | | | | | | | | | | | | | Sum | above | 0.5831368020869885 | : | 26941398.46956202 | | | | | | | | | | | | | | | | | | | #+CAPTION: Our result Fe Kα as 3x3 mm with perfect mirrors [[~/phd/Figs/raytracing/llnlPanter/panter_source_Fe_Kalpha_perfect_mirrors_3x3mm.pdf]] #+CAPTION: Our result Fe Kα as 3x3 mm with perfect mirrors, log10 [[~/phd/Figs/raytracing/llnlPanter/panter_source_Fe_Kalpha_perfect_mirrors_3x3mm_log10.pdf]] #+CAPTION: HPD in X for perfect mirrors of Fe Kα [[~/phd/Figs/raytracing/llnlPanter/panter_source_Fe_Kalpha_perfect_mirrors_3x3mm_hpd_x.pdf]] #+CAPTION: HPD in Y for perfect mirrors of Fe Kα [[~/phd/Figs/raytracing/llnlPanter/panter_source_Fe_Kalpha_perfect_mirrors_3x3mm_hpd_y.pdf]] #+CAPTION: HPD of Fe Kα computed from the EEF (radial cumulative distribution function, encircled energy) [[~/phd/Figs/raytracing/llnlPanter/panter_source_Fe_Kalpha_perfect_mirrors_3x3mm_hpd_via_eef_50.pdf]] ** Computing an axion image with TrAXer :PROPERTIES: :CUSTOM_ID: sec:appendix:raytracing:axion_image :END: The previous section provides us with reasonable guarantees 
that our raytracer produces results compatible with both the PANTER measurements and the LLNL raytracing results. In particular for the axion-electron coupling we do not wish to reuse the raytracing prediction for the solar axion image computed by Michael Pivovaroff at LLNL, as used in the CAST Nature 2017 paper [[cite:&cast_nature]]. [fn:have_access] This is because of the different radial emission profile (compare fig. [[sref:fig:theory:solar_axion_flux:radial_dependence_ksvz_dfsz]]). For the chameleon, which is produced in the solar tachocline, a dedicated simulation is of course absolutely mandatory.

To simulate the solar axion image, we use the scene already shown in fig. sref:fig:appendix:raytracing:cast_setup_example. The Sun is placed as an X-ray emitter [fn:why_xray] at a distance of $\SI{0.989}{AU}$ [fn:distance] from the telescope. It emits X-rays with energies and from radii sampled from the distribution seen in fig. sref:fig:theory:solar_axion_flux:flux_vs_energy_and_radius, calculated using the opacity calculation code [[cite:&JvO_axionElectron]] also developed by Johanna von Oy during her master thesis [[cite:&vonOy_MSc]]. The calculations are based on the AGSS09 solar model cite:agss09_chemical,agss09_new_solar. The telescope is rotated by the angle under which it was installed at CAST, as deduced from the X-ray finger run taken at CAST, fig. sref:fig:cast:xray_finger_centers (about $\SI{14}{°}$). The image sensor is placed slightly in front of the focal point, $\SI{1492.93}{mm}$ away from the center of the telescope (to the virtual reflection point) instead of $\SI{1500}{mm}$, to account for the median conversion point of the expected axion-electron X-ray flux in the detector (as mentioned in sec. [[#sec:limit:median_absorption_depth]] and calculated in [[#sec:appendix:average_depth_xrays_argon]]).

With all of this in place, the axion image comes out as seen in fig. sref:fig:appendix:raytracing:traxer_solar_axion_image. In comparison fig.
sref:fig:appendix:raytracing:llnl_solar_axion_image shows the axion image as computed by the LLNL raytracing code (albeit only for the axion-photon coupling and its related emission). The similarity between the two images implies that our raytracing simulation is sensible. Small differences are expected, not least because of the different solar emission model. The different rotation angle is due to a different assumed rotation of the telescope. Our raytracing image shows a larger bulge in the center at very low intensity, but the absolute size of the major flux region is very comparable. The slight asymmetry in our image is due to the sensor sitting roughly $\SI{7}{mm}$ in front of the focal point.

#+begin_src subfigure
(figure ()
 (subfigure (linewidth 0.5)
  (caption "TrAXer")
  (label "fig:appendix:raytracing:traxer_solar_axion_image")
  (includegraphics (list (cons 'width (linewidth 1.0)))
   "~/phd/Figs/raytracing/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm.pdf"))
 (subfigure (linewidth 0.5)
  (caption "LLNL")
  (label "fig:appendix:raytracing:llnl_solar_axion_image")
  (includegraphics (list (cons 'width (linewidth 1.0)))
   "~/phd/Figs/raytracing/raytracing_axion_image_llnl_jaime_all_energies.pdf"))
 (caption (subref "fig:appendix:raytracing:traxer_solar_axion_image")
          ": TrAXer simulation of the solar axion image using the correct Sun-Earth distance with the image sensor placed "
          ($ (SI 1492.93 "mm"))
          " away from the telescope center. "
          (subref "fig:appendix:raytracing:llnl_solar_axion_image")
          ": Corresponding raytracing simulation done at LLNL, however for axion-photon production only.")
 (label "fig:appendix:raytracing:comparison_traxer_llnl_axion_image"))
#+end_src

[fn:have_access] Moreover, we only got access to these raytracing results in mid 2023, while our raytracing work started years before that.

[fn:why_xray] The axion-photon conversion probability is handled in the limit code.
[fn:distance] This was the average distance to the Sun during our CAST data taking campaign, as discussed in sec. [[#sec:limit:ingredients:solar_axion_flux]].

*** TODOs for this section [/] :noexport:

Related to explicit axion image
- [ ] *SHOW PLOTS CHARACTERIZING USED SOLAR MODEL*
- [X] *REFERENCE SOLAR MODEL*
- [ ] *INVERT LLNL IMAGE PLOT*

*** Further thoughts on the (missing?) bow tie :extended:

The bow tie feature in the raytracing images is partially present even in the perfect mirror raytracing results. But in those there is a hard cutoff at some point, because the last shell and the largest angles have been reached. Once figure errors are included, this turns into a much smoother process, resulting in the bow tie shape.

*** Note on conversion point :extended:

The calculation for the conversion point was mentioned in
- [ ] *REFERENCE*

*** Generate solar axion image using TrAXer [/] :extended:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:raytracing:generate_axion_image
:END:

- [X] *WE FORGOT TO USE THE CORRECT DISTANCE, I.E. MEAN CONVERSION!* -> Corrected.

The main ingredient we need to simulate the correct axion image is the data file that describes the radial and spectral emission of axions from the Sun. Make sure to get the [[cite:&JvO_axionElectron]] code:
#+begin_src sh
cd <path/of/choice>
git clone https://github.com/jovoy/AxionElectronLimit.git
cd AxionElectronLimit
cd src
nim c -d:danger readOpacityFile
#+end_src
Note that you also need to download the OPCD files from the Opacity Project, https://cdsweb.u-strasbg.fr/topbase/TheOP.html. Download the ~.tar~ file and place it in a location of your choice. With that done, update the ~AxionElectronLimit~ config file found in [[file:~/CastData/ExternCode/AxionElectronLimit/config/config.toml]] by setting the
#+begin_src toml
opcdPath = "/home/basti/CastData/data/"
#+end_src
field under ~[ReadOpacityFile]~ to the correct directory. The directory should point to the parent directory of the ~OPCD_3.3~ dir.
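A common stumbling block is pointing ~opcdPath~ at the ~OPCD_3.3~ directory itself instead of its parent. A small Python check can help; the concrete path below is only a stand-in for wherever you unpacked the tarball:

#+begin_src python
import os
import tempfile

def opcd_path_ok(opcd_parent: str) -> bool:
    """Return True if `opcd_parent` looks like a valid `opcdPath`,
    i.e. it is the *parent* of the unpacked `OPCD_3.3` directory."""
    return os.path.isdir(os.path.join(opcd_parent, "OPCD_3.3"))

# Demonstration with a temporary stand-in for e.g. /home/basti/CastData/data/:
parent = tempfile.mkdtemp()
os.mkdir(os.path.join(parent, "OPCD_3.3"))
print(opcd_path_ok(parent))  # → True
#+end_src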
If you do not have a ~config.toml~ file, just create a copy of ~config_default.toml~ as ~config.toml~. Now we can run ~readOpacityFile~:
#+begin_src sh :dir ~/CastData/ExternCode/AxionElectronLimit/src
./readOpacityFile \
    --suffix "_0.989AU" \
    --distanceSunEarth 0.9891144450781392.AU \
    --fluxKind fkAxionElectronPhoton \
    --plotPath ~/phd/Figs/readOpacityFile/ \
    --outpath ~/phd/resources/readOpacityFile/
#+end_src
where we insert the correct distance from Sun to Earth, see sec. [[#sec:limit:ingredients:solar_axion_flux:gen_flux_distance_sun]], and set the fluxes to only those interactions we are interested in (ignoring things like plasmon interactions). The output data will be written as a CSV file to a location also defined in the config file.

Using this CSV file, which should be called ~solar_model_dataframe_fluxKind_fkAxionElectronPhoton_0.989AU.csv~, we can run the raytracer, in this case in the focal point:
#+begin_src sh
./raytracer \
    --width 1200 --speed 10.0 --nJobs 32 --vfov 15 --maxDepth 5 \
    --llnl --focalPoint --sourceKind skSun \
    --solarModelFile ~/CastData/ExternCode/AxionElectronLimit/resources/solar_model_dataframe_fluxKind_fkAxionElectronPhoton_0.989AU.csv \
    --sensorKind sSum \
    --usePerfectMirror=false \
    --ignoreWindow
#+end_src

:RESULTS:
# Using the new FUZZ parameters:
[INFO] Writing buffers to binary files.
[INFO] Writing file: out/buffer_2023-11-12T16:12:11+01:00_type_uint32_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/counts_2023-11-12T16:12:11+01:00_type_int_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/image_sensor_0_2023-11-12T16:12:11+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat
# Using the old single FUZZ with sampling from normal distribution:
[INFO] Writing buffers to binary files.
[INFO] Writing file: out/buffer_2023-11-10T12:40:44+01:00_type_uint32_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/counts_2023-11-10T12:40:44+01:00_type_int_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/image_sensor_0_2023-11-10T12:40:44+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat
:END:

The important points are the ~--solarModelFile~ parameter and ~--sensorKind sSum~. The latter sets the image sensors up to sum all flux contributions in each pixel (instead of just counting how many rays hit each pixel). ~--ignoreWindow~ is given because we do not want the GridPix window strongback in the result. The raytracer samples energies between $\SI{0.03}{keV}$ and $\SI{15}{keV}$; the lower bound is due to the limit on the reflectivity calculation, which is based on the atomic scattering factors.

Time to produce a plot of the image:
#+begin_src sh :dir ~/CastData/ExternCode/RayTracing/
./plotBinary \
    --dtype float \
    -f out/image_sensor_0_2023-11-10T12:40:44+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat \
    --invertY \
    --out ~/phd/Figs/raytracing/solar_axion_image_fkAxionElectronPhoton_0.989AU_old_figure_errors.pdf \
    --inPixels=false \
    --title "Solar axion image at 0.989 AU from Sun (old figure errors)"
#+end_src

#+RESULTS:
#+begin_example
HPD along x: 2.421999999999999 as angle: 326.5182636835636 ''
HPD along y: 1.148000000000001 as angle: 154.7659824408381 ''
#+end_example

With the new fuzzing logic:
#+begin_src sh :dir ~/CastData/ExternCode/RayTracing/
./plotBinary \
    --dtype float \
    -f out/image_sensor_0_2023-11-12T16:12:11+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat \
    --invertY \
    --out ~/phd/Figs/raytracing/solar_axion_image_fkAxionElectronPhoton_0.989AU.pdf \
    --inPixels=false \
    --title "Solar axion image at 0.989 AU from Sun (imperfect mirrors)"
#+end_src

#+RESULTS:
#+begin_example
HPD along x: 2.59 as angle: 349.166887507867 ''
HPD along y: 1.204 as angle: 162.3155395156477 ''
#+end_example

For comparison let's also run the code without any figure errors:
#+begin_src sh
./raytracer \
    --width 1200 --speed 10.0 --nJobs 32 --vfov 15 --maxDepth 5 \
    --llnl --focalPoint --sourceKind skSun \
    --solarModelFile ~/CastData/ExternCode/AxionElectronLimit/resources/solar_model_dataframe_fluxKind_fkAxionElectronPhoton_0.989AU.csv \
    --sensorKind sSum \
    --usePerfectMirror=true \
    --ignoreWindow
#+end_src

:RESULTS:
[INFO] Writing buffers to binary files.
[INFO] Writing file: out/buffer_2023-11-10T12:45:29+01:00_type_uint32_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/counts_2023-11-10T12:45:29+01:00_type_int_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/image_sensor_0_2023-11-10T12:45:29+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat
:END:

#+begin_src sh :dir ~/CastData/ExternCode/RayTracing/
./plotBinary \
    --dtype float \
    -f out/image_sensor_0_2023-11-10T12:45:29+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat \
    --invertY \
    --out ~/phd/Figs/raytracing/solar_axion_image_perfect_mirrors_fkAxionElectronPhoton_0.989AU.pdf \
    --inPixels=false \
    --title "Solar axion image at 0.989 AU from Sun, perfect mirrors"
#+end_src

And finally we also compute the axion image at the point of most likely conversion of X-rays, i.e. at $\SI{1492.93}{mm}$ instead of $\SI{1500}{mm}$.
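The sensor placement is handed to the raytracer as a fraction of the distance to the focal point. A quick cross-check of the numbers used here (plain arithmetic, independent of TrAXer):

#+begin_src python
# The sensor sits at 1492.93 mm instead of the 1500 mm focal length,
# to account for the median X-ray conversion point in the detector gas.
focal_length = 1500.0    # mm
sensor_pos   = 1492.93   # mm

ray_at = sensor_pos / focal_length
offset = focal_length - sensor_pos

print(f"rayAt  = {ray_at:.12f}")   # → rayAt  = 0.995286666667
print(f"offset = {offset:.2f} mm") # → offset = 7.07 mm
#+end_src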
Thus, we pass ~--rayAt 0.995286666667~ ($= 1492.93 / 1500$) to get the correct placement:
#+begin_src sh
./raytracer \
    --width 1200 --speed 10.0 --nJobs 32 --vfov 15 --maxDepth 5 \
    --llnl --focalPoint --sourceKind skSun \
    --solarModelFile ~/CastData/ExternCode/AxionElectronLimit/resources/solar_model_dataframe_fluxKind_fkAxionElectronPhoton_0.989AU.csv \
    --sensorKind sSum \
    --usePerfectMirror=false \
    --rayAt 0.995286666667 \
    --ignoreWindow
#+end_src

:RESULTS:
# Using the new FUZZ parameters:
[INFO] Writing buffers to binary files.
[INFO] Writing file: out/buffer_2023-11-12T16:23:18+01:00_type_uint32_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/counts_2023-11-12T16:23:18+01:00_type_int_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/image_sensor_0_2023-11-12T16:23:18+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat
# Using the old single FUZZ with sampling from normal distribution:
[INFO] Writing buffers to binary files.
[INFO] Writing file: out/buffer_2023-11-10T20:38:55+01:00_type_uint32_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/counts_2023-11-10T20:38:55+01:00_type_int_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/image_sensor_0_2023-11-10T20:38:55+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat
:END:

Which we also plot:
#+begin_src sh :dir ~/CastData/ExternCode/RayTracing/
./plotBinary \
    --dtype float \
    -f out/image_sensor_0_2023-11-10T20:38:55+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat \
    --invertY \
    --out ~/phd/Figs/raytracing/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm_old_figure_error.pdf \
    --inPixels=false \
    --title "Solar axion image at 0.989 AU from Sun, 1492.93 mm (old figure error)"
#+end_src

#+RESULTS:
#+begin_example
HPD along x: 2.407999999999999 as angle: 324.6308780019351 ''
HPD along y: 1.176 as angle: 158.5407610313404 ''
#+end_example

#+begin_src sh :dir ~/CastData/ExternCode/RayTracing/
F_WIDTH=0.5 USE_TEX=true ./plotBinary \
    --dtype float \
    -f out/image_sensor_0_2023-11-12T16:23:18+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat \
    --invertY \
    --out ~/phd/Figs/raytracing/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm.pdf \
    --inPixels=false \
    --gridpixOutfile ~/phd/resources/axionImages/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm.csv \
    --title "Solar axion image at 0.989 AU from Sun, 1492.93 mm"
#+end_src

#+RESULTS:
#+begin_example
HPD along x: 2.617999999999999 as angle: 352.9416573414177 ''
HPD along y: 1.204000000000001 as angle: 162.3155395156478 ''
[INFO] Writing GridPix CSV file: /home/basti/phd/resources/axionImages/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm.csv
#+end_example

This yields the files included in the main body:
- ~~/phd/Figs/raytracing/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm.pdf~
- ~~/phd/Figs/raytracing/solar_axion_image_fkAxionElectronPhoton_0.989AU.pdf~
- ~~/phd/Figs/raytracing/solar_axion_image_perfect_mirrors_fkAxionElectronPhoton_0.989AU.pdf~

If you compare [[~/phd/Figs/raytracing/solar_axion_image_fkAxionElectronPhoton_0.989AU.pdf]] with [[~/phd/Figs/raytracing/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm.pdf]], you see that there is a minor amount of asymmetry in the version that is about $\SI{7}{mm}$ in front of the focal point, as expected. The impact on the size is very small though. This can be seen when comparing:
- [[~/phd/Figs/raytracing/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm_hpd_via_eef_50.pdf]]
- [[~/phd/Figs/raytracing/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm_hpd_x.pdf]]
- [[~/phd/Figs/raytracing/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm_hpd_y.pdf]]
with the same plots from the focal point.
- [[~/phd/Figs/raytracing/solar_axion_image_fkAxionElectronPhoton_0.989AU_hpd_via_eef_50.pdf]]
- [[~/phd/Figs/raytracing/solar_axion_image_fkAxionElectronPhoton_0.989AU_hpd_x.pdf]]
- [[~/phd/Figs/raytracing/solar_axion_image_fkAxionElectronPhoton_0.989AU_hpd_y.pdf]]
The HPD increases by less than $\SI{0.05}{mm}$.

Comparing this to the LLNL raytracing axion image,
[[~/org/Figs/statusAndProgress/rayTracing/raytracing_axion_image_llnl_jaime_all_energies_gridpix_size.pdf]]
they actually look very compatible.

*Note*: we also produced figures for $\SI{1487.93}{mm}$ instead of the final $\SI{1492.93}{mm}$, found in [[file:Figs/raytracing/axion_image_assuming_1.5cm_behind_window/]].

**** What does the axion image look like without any orthogonal fuzzing? :extended:

This thought just occurred to me, because the LLNL raytracing image still has more of a 'waist' in the center.
#+begin_src sh
FUZZ_ORTH=0.0 ./raytracer \
    --width 1200 --speed 10.0 --nJobs 32 --vfov 15 --maxDepth 5 \
    --llnl --focalPoint --sourceKind skSun \
    --solarModelFile ~/CastData/ExternCode/AxionElectronLimit/resources/solar_model_dataframe_fluxKind_fkAxionElectronPhoton_0.989AU.csv \
    --sensorKind sSum \
    --usePerfectMirror=false \
    --ignoreWindow
#+end_src

:RESULTS:
[INFO] Writing buffers to binary files.
[INFO] Writing file: out/buffer_2023-11-12T16:29:50+01:00_type_uint32_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/counts_2023-11-12T16:29:50+01:00_type_int_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/image_sensor_0_2023-11-12T16:29:50+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat
:END:

#+begin_src sh :dir ~/CastData/ExternCode/RayTracing/
./plotBinary \
    --dtype float \
    -f out/image_sensor_0_2023-11-12T16:29:50+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat \
    --invertY \
    --out /tmp/solar_axion_image_fkAxionElectronPhoton_0.989AU_1492.93mm_no_orth_fuzz.pdf \
    --inPixels=false \
    --title "Solar axion image at 0.989 AU from Sun, 1492.93 mm (no orth fuzz)"
#+end_src

#+RESULTS:
#+begin_example
HPD along x: 2.59 as angle: 349.166887507867 ''
HPD along y: 1.204 as angle: 162.3155395156477 ''
#+end_example

Ok, the plot looks near identical.
:)

*** Generating the plot of the LLNL axion image :extended:

The origin of the raytracing plot is [[file:~/org/Doc/StatusAndProgress.org::#sec:raytracing:llnl_raytracing_results]], but we reproduce the code here to change it slightly. The code requires the text data of the LLNL raytracing simulations.
- [ ] *FIND OUT IF I CAN PUBLISH THEM*
#+begin_src nim :tangle code/raytracing_axion_images_llnl_jaime.nim
import ggplotnim, seqmath
import std / [os, sequtils, strutils]

proc readRT(p: string): DataFrame =
  result = readCsv(p, sep = ' ', skipLines = 4, colNames = @["x", "y", "z"])
  result["File"] = p

proc meanData(df: DataFrame): DataFrame =
  result = df.mutate(f{"x" ~ `x` - mean(col("x"))},
                     f{"y" ~ `y` - mean(col("y"))})

proc customSideBySide(): Theme =
  result = sideBySide()
  result.titleFont = some(font(8.0))

proc plots(df: DataFrame, title, outfile: string) =
  var customInferno = inferno()
  customInferno.colors[0] = 0 # transparent
  ggplot(df.filter(f{`x` >= -7.0 and `x` <= 7.0 and `y` >= -7.0 and `y` <= 7.0}),
         aes("x", "y", fill = "z")) +
    geom_raster() +
    scale_fill_gradient(customInferno) +
    xlab("x [mm]") + ylab("y [mm]") +
    xlim(-7.0, 7.0) + ylim(-7.0, 7.0) +
    coord_fixed(1.0) +
    ggtitle(title) +
    themeLatex(fWidth = 0.5, width = 600, baseTheme = customSideBySide, useTeX = true) +
    ggsave(outfile)

var dfs = newSeq[DataFrame]()
for f in walkFiles("/home/basti/org/resources/llnl_cast_nature_jaime_data/2016_DEC_Final_CAST_XRT/*2Dmap.txt"):
  echo "Reading: ", f
  dfs.add readRT(f)
echo "Summarize"
var df = dfs.assignStack()
df = df.group_by(@["x", "y"])
  .summarize(f{float: "z" << sum(`z`)}, f{float: "zMean" << mean(`z`)})
  .mutate(f{"y" ~ col("y").max - idx(`y`)}) # invert the y axis
df = df.meanData()
plots(df, "LLNL raytracing of axion image (sum all energies)",
      "~/phd/Figs/raytracing/raytracing_axion_image_llnl_jaime_all_energies.pdf")
#+end_src

** Reproducing an X-ray finger run with TrAXer :extended:

We can of course also attempt to reproduce an X-ray finger run using TrAXer.
Essentially it is not too dissimilar from the PANTER measurements, with two differences:
1. source distance and size,
2. source placement.
In contrast to the PANTER measurement, where the source is on the optical axis, at CAST the X-ray finger source sits in the magnet bore and therefore in front of the telescope (i.e. _away_ from the optical axis).

The main reason this is not part of the main thesis is that it is neither very important, nor do we actually know the dimensions or distance of the X-ray finger used at CAST very well.

-> Decide final arguments
#+begin_src sh :post nim-head(*this*,lines=5) :eval no-export
./raytracer \
    --width 1200 --speed 10.0 --nJobs 32 --vfov 45 --maxDepth 10 \
    --llnl --focalPoint \
    --sourceKind skXrayFinger \
    --sensorKind sSum \
    --energyMin 2.5 --energyMax 3.5 \
    --usePerfectMirror=false \
    --sourceDistance 10.m \
    --sourceRadius 2.0.mm
#+end_src

:RESULTS:
[INFO] Writing buffers to binary files.
[INFO] Writing file: out/buffer_2023-11-12T17:16:36+01:00_type_uint32_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/counts_2023-11-12T17:16:36+01:00_type_int_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/image_sensor_0_2023-11-12T17:16:36+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat
:END:

Note the lack of ~--sourceOnOpticalAxis~; without it, the source is placed at the center of the magnet bore instead of on the optical axis.

#+begin_src sh :dir ~/CastData/ExternCode/RayTracing/
./plotBinary \
    --dtype float \
    -f out/image_sensor_0_2023-11-12T17:16:36+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat \
    --invertY \
    --out ~/phd/Figs/raytracing/xray_finger_10m_2mm_3keV.pdf \
    --inPixels=false \
    --title "X-ray finger in 10 m distance, 2 mm radius at 2.5-3.5 keV"
#+end_src

#+RESULTS:
#+begin_example
HPD along x: 5.600000000000001 as angle: 754.9527823402237 ''
HPD along y: 5.809999999999999 as angle: 783.2632444367842 ''
#+end_example

And now using $\SI{14.2}{m}$ and a $\SI{3}{mm}$ source, which is what is mentioned in [[cite:&anders_phd]] under fig. 4.32. While I don't understand how these numbers are supposed to make sense, they produce a better image.
#+begin_src sh :post nim-head(*this*,lines=5) :eval no-export
./raytracer \
    --width 1200 --speed 10.0 --nJobs 32 --vfov 45 --maxDepth 10 \
    --llnl --focalPoint \
    --sourceKind skXrayFinger \
    --sensorKind sSum \
    --energyMin 2.5 --energyMax 3.5 \
    --usePerfectMirror=false \
    --sourceDistance 14.2.m \
    --sourceRadius 3.0.mm
#+end_src

:RESULTS:
[INFO] Writing buffers to binary files.
[INFO] Writing file: out/buffer_2023-11-12T17:21:37+01:00_type_uint32_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/counts_2023-11-12T17:21:37+01:00_type_int_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/image_sensor_0_2023-11-12T17:21:37+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat
:END:

#+begin_src sh :dir ~/CastData/ExternCode/RayTracing/
./plotBinary \
    --dtype float \
    -f out/image_sensor_0_2023-11-12T17:21:37+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat \
    --invertY \
    --out ~/phd/Figs/raytracing/xray_finger_14.2m_3mm_3keV.pdf \
    --inPixels=false \
    --title "X-ray finger in 14.2 m distance, 3 mm radius at 2.5-3.5 keV"
#+end_src

#+RESULTS:
#+begin_example
HPD along x: 4.354 as angle: 586.9768249098241 ''
HPD along y: 4.116000000000001 as angle: 554.8914342751674 ''
#+end_example

This yields (10 m):
[[~/phd/Figs/raytracing/xray_finger_10m_2mm_3keV.pdf]]
and (14.2 m):
[[~/phd/Figs/raytracing/xray_finger_14.2m_3mm_3keV.pdf]]

*I still believe* it is a bit weird that the 10 m case comes out definitely too large.
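As an aside, the angles printed next to the HPD values in millimeters are consistent with $\theta = \arctan(d/f)$ for a focal length of $f = \SI{1530}{mm}$. Note that this constant is inferred from the printed numbers themselves, not read from the TrAXer source, so treat it as an assumption. A small Python check:

#+begin_src python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600  # ≈ 206264.8

def hpd_to_arcsec(hpd_mm: float, focal_length_mm: float = 1530.0) -> float:
    """Convert a half-power diameter on the sensor to an angle in arcseconds."""
    return math.atan(hpd_mm / focal_length_mm) * ARCSEC_PER_RAD

# Reproduces the values printed above:
print(round(hpd_to_arcsec(5.6), 3))    # → 754.953
print(round(hpd_to_arcsec(4.354), 3))  # → 586.977
#+end_src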
Comparing our result to
[[~/org/Figs/statusAndProgress/rayTracing/raytracing_xray_finger_llnl_jaime_gridpix_size.pdf]]
(from [[file:~/org/Doc/StatusAndProgress.org::#sec:raytracing:llnl_raytracing_results:xray_finger]]), the LLNL result also looks a bit smaller than ours. So maybe there is still something wrong, but I think this will be for someone else to understand. :/

In any case, comparing the 14.2 m simulation to our real data,
[[~/phd/Figs/CAST_Alignment/xray_finger_centers_run_189.pdf]]
shows a near perfect match! So all in all this is a success!

** DONE Can we finish our interactive ray tracer? :extended:

Need:
- light sources (4h of work at most)
- cylinders, hyperboloids, paraboloids as objects (once we figure out one, the rest should be relatively easy)
- placing different telescope layers etc. (2h)
In theory this *should* be possible as an extensive weekend project! I'd say this is definitely worth it.

<2023-11-02 Thu 19:11>, damn I cannot stress how good it felt to just turn the old TODO there into a DONE! When I wrote the above lines, I didn't think I'd end up finishing that raytracer, to be honest. At the same time I also didn't think the thesis would still be worked on at the end of 2023!

"Extensive weekend project" -- yeah. I mean my estimates weren't that far off (within a factor of π, I guess), but the additional work to get everything working correctly took far longer: implementing the actual X-ray raytracer on top of the visible-light raytracer, using spectral radiance instead of an RGB based approach, and figuring out how to essentially have two raytracers in one, with ImageSensors as a replacement for the camera for X-rays, and so on. In the end it was a good 4 weeks of work instead of a weekend, I guess. :)

** Computation of atomic processes :noexport:

Computation of atomic processes done in *CODE*.
** Rerunning Al Kα after replacing target :extended:

I moved the "light target" to the end of the magnet (on the telescope side) in order to get better statistics for regular X-ray finger runs. While doing this I realized that with ~--sourceOnOpticalAxis~ a target at the beginning of the magnet is problematic, because the telescope may not be fully illuminated at all! But the below shows that it does not make an important difference.

*UPDATE* <2023-11-10 Fri 21:28>: I forgot to add ~--ignoreMagnet~ to all commands above! Need to rerun them tomorrow!!!
- [X] *FOR THAT* adjust the light target size, make it a parameter! -> Done.

#+begin_src sh :post nim-head(*this*,lines=5) :eval no-export
./raytracer \
    --width 1200 --speed 10.0 --nJobs 32 --vfov 45 --maxDepth 10 \
    --llnl --focalPoint \
    --sourceKind skXrayFinger \
    --rayAt 1.013 \
    --sensorKind sSum \
    --energyMin 1.48 --energyMax 1.50 \
    --usePerfectMirror=false \
    --ignoreWindow \
    --sourceDistance 130.297.m \
    --sourceRadius 0.42.mm \
    --telescopeRotation 90.0 \
    --sourceOnOpticalAxis \
    --ignoreMagnet \
    --targetRadius 40.mm
#+end_src

:RESULTS:
[INFO] Writing buffers to binary files.
[INFO] Writing file: out/buffer_2023-11-11T09:26:25+01:00_type_uint32_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/counts_2023-11-11T09:26:25+01:00_type_int_len_1440000_width_1200_height_1200.dat
[INFO] Writing file: out/image_sensor_0_2023-11-11T09:26:25+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat
:END:

Once you deem that enough statistics has been accumulated, press ~F5~ on your keyboard to save all buffers to binary files. The output filenames will be printed to the terminal.
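The buffer filenames encode the data type and the image dimensions, which is convenient if you want to process the files outside of ~plotBinary~. A hypothetical sketch of recovering that metadata; only the filename pattern is taken from the output above, nothing here relies on TrAXer internals:

#+begin_src python
import re

def parse_buffer_name(fname: str) -> dict:
    """Extract the metadata TrAXer encodes in its buffer filenames,
    e.g. `..._type_float_len_1000000_width_1000_height_1000.dat`."""
    m = re.search(r"_type_(\w+)_len_(\d+)_width_(\d+)_height_(\d+)\.dat$", fname)
    if m is None:
        raise ValueError(f"not a TrAXer buffer file: {fname}")
    return {"dtype": m.group(1), "len": int(m.group(2)),
            "width": int(m.group(3)), "height": int(m.group(4))}

meta = parse_buffer_name(
    "image_sensor_0_2023-11-11T09:26:25+01:00__dx_14.0_dy_14.0_dz_0.1"
    "_type_float_len_1000000_width_1000_height_1000.dat")
print(meta)  # → {'dtype': 'float', 'len': 1000000, 'width': 1000, 'height': 1000}
#+end_src

Note that ~len~ is simply ~width * height~, i.e. the number of pixels in the flat buffer.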
And then with the produced binary files we can plot them and compute the HPDs:

#+begin_src sh :dir ~/CastData/ExternCode/RayTracing/
./plotBinary \
    --dtype float \
    -f out/image_sensor_0_2023-11-11T09:26:25+01:00__dx_14.0_dy_14.0_dz_0.1_type_float_len_1000000_width_1000_height_1000.dat \
    --invertY \
    --out /tmp/test_me.pdf \
    --inPixels=false \
    --title "Al Kα, 1.49 keV, 3x3 mm, figure errors, source optical axis" \
    --xrange 1.5
#+end_src

** Figure error development notes :extended:

For my development notes taken when I fixed the implementation to its current state above, see [[file:~/org/Doc/StatusAndProgress.org::#sec:raytracing:interactive:fixing_figure_error]].

* List of figures and tables :noexport:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:list_figures_tables
:END:

Read up on =ox-extra= and the :ignore: tag that it includes to only export this, but not the actual header.

#+LATEX: \listoffigures{}
#+LATEX: \listoftables{}

* Acknowledgments :Ack:
:PROPERTIES:
:CUSTOM_ID: sec:appendix:acknowledgments
:END:

If you come here after having read (parts of?) the thesis, then first of all my gratitude to you, the reader, for having spent time with my writing. It means a lot.

Of course, without Klaus Desch I wouldn't even have been able to work on this thesis. Thanks a lot, Klaus. And especially thank you for taking the time for our personal meetings these last years. I really appreciate it!

Next, I want to thank everyone in our group. Although I haven't been to the office much, especially since COVID hit, it was always a great environment to work in. Most notably I want to thank Tobias (Schiffer) and Markus (Gruber) as my longest office mates. We had a lot of fun together (maybe some days a bit too much, hehe) and I hope we can continue even when I'm not around anymore! Johanna (von Oy), you came later, but that doesn't mean I didn't enjoy working with you. On the contrary, it was great. Thank you for being open-minded about my software ideas.
:) Some people in our group left a while ago, but you're not forgotten. Christoph (Krieger), thank you for your supervision all those years ago. I hope you are satisfied with my work and maybe a little bit proud! Lucian (Scharenberg), you technically didn't really 'leave' until recently, but we didn't see each other much with you being at CERN for your PhD. Also, thank you for the good times, and we'll keep in touch! Hendrik (Schmick), although you were only around for the year of your master thesis, it was a lot of fun to supervise you! The same holds for Jannes (Schmitz), even though it was even shorter with your Bachelor thesis.

During the years of my thesis I spent a significant chunk of time at CERN. I want to extend my gratitude to everyone in the CAST collaboration. You're a great bunch of people! Thank you for being welcoming and supportive. Konstantin (Zioutas), Giovanni (Cantatore), Horst (Fischer), your focus was elsewhere in those last years of CAST of course, but I appreciated your feedback and it was fun working with you! Theodoros (Vafeiadis), a special thanks goes to you for not only being an excellent technical coordinator, but also for being fun and helpful outside of work! Marios (Maroudas) and Justin (Baier), thank you two as well. And Cristian (Cogollos) and Sergio (Arguedas), I loved our time together sharing "the corridor". You know who you are!

While I am not part of the group of Igor (García Irastorza) in Zaragoza, I spent a lot of time there in the last few years. Every one of you treated me as if I /were/ part of your group (aside from being IAXO collaboration members!). Thank you all! Special thanks go out to Julia (Vogel) and Jaime (Ruz) for their efforts in helping me understand the LLNL telescope! And Konrad (Altenmüller), thanks for 'abseiling' into my life via your postdoc, haha! And Igor, thank you for helping me untangle the limit calculation.
On a completely different note, thanks to Andreas (Rumpf, [[https://github.com/Araq][@araq]]) for inventing the Nim programming language. Without you my thesis would certainly be different! In general, the Nim community is a great bunch of really talented people (give Nim a try!). Too many to list here, but I want to highlight a few. Mamy (Ratsimbazafy, [[https://github.com/mratsim][@mratsim]]), your work is extremely appreciated, not only by me, as you very well know. And I'm especially grateful to you for being as trusting as you were right from the start. You didn't know me, but you treated me with respect and collaborated with me, which gave me a huge boost in confidence to put my own code out there! Brent (Pedersen, [[https://github.com/brentp][@brentp]]), thanks for developing [[https://github.com/SciNim/nim-plotly][~nim-plotly~]]. It was really helpful in getting started using Nim for my work! The same goes for Yuriy (Glukhov, [[https://github.com/yglukhov][@yglukhov]]): being able to interact seamlessly with Python from Nim via [[https://github.com/yglukhov/nimpy][~nimpy~]] was exceptionally useful. Regis (Caillaud, [[https://github.com/Clonkk][@clonkk]]) and Hugo (Granström, [[https://github.com/hugogranstrom][@hugogranstrom]]), I love collaborating with you guys. Thanks for working with me on [[https://github.com/SciNim][~SciNim~]]. Onto the future! And finally, Chuck (Charles Blake, [[https://github.com/c-blake][@c-blake]]), thank you for being a mentor and a friend. :)

I consider most of the people named above good friends, but mentioned you in a context that somehow relates to the thesis. A few people should be mentioned, though, who don't quite fit such a context! First, David (Helten), I hope we manage to stay in better contact again in the future! Stephan (Kürten), a big thank you to you, too (and of course for proofreading!). And Roberto (Röll), thanks for always being there for me (and for reading the thesis!).

And finally, thanks to my family.
Thank you, Bianca, Papa and Mama, for being there! And thank you, Cristina (Margalejo), for being my partner and being at my side!

** TODOs for this section :noexport:

- [X] WHAT TO CALL ZGZ GROUP -> I think this is fine.
- [X] Thanks to Klaus & group.
- [X] Thanks to Julia and Jaime for LLNL help
- [X] Also thanks to Theodoros (and maybe Giovanni / Konstantin / Horst? Maybe not...)
- [X] Thanks to Araq for building Nim. Thanks to Roberto. Thanks to Chuck.
- [X] Thanks to the Nim community, and especially: Mamy (@mratsim), Hugo (@hugogranstrom), Clonkk, Chuck (@c-blake), Andrea Ferretti (alea, among others), @brentp (plotly was a *huge* help in the beginning), @Bluenote10 (NimData was great), @yglukhov (nimpy in particular!!)