UNCERTAINTY ON RADIATION DOSES ESTIMATED BY BIOLOGICAL AND RETROSPECTIVE PHYSICAL METHODS

Abstract

Biological and physical retrospective dosimetry are recognised as key techniques to provide individual estimates of dose following unplanned exposures to ionising radiation. Whilst there has been a relatively large amount of recent development in the biological and physical procedures, development of statistical analysis techniques has failed to keep pace. The aim of this paper is to review the current state of the art in uncertainty analysis techniques across the ‘EURADOS Working Group 10—Retrospective dosimetry’ members, to give concrete examples of implementation of the techniques recommended in the international standards, and to further promote the use of Monte Carlo techniques to support characterisation of uncertainties. It is concluded that sufficient techniques are available and in use by most laboratories for acute, whole body exposures to highly penetrating radiation, but further work will be required to ensure that statistical analysis is always wholly sufficient for the more complex exposure scenarios.

INTRODUCTION

In Europe today, and indeed the whole world, the current state of the art of retrospective radiation dosimetry incorporates a number of key biological and physical retrospective techniques. Amongst the biological methods, the dicentric chromosome assay (DCA), which relies on the relationship between the frequency of dicentric chromosomes in peripheral blood lymphocytes and the absorbed dose (‘dose’) of exposed persons, is recognised to be the most well developed of these(1, 2). The cytokinesis-blocked micronucleus (CBMN) assay is also regarded as an important tool for biological assessment of radiation dose and is growing in importance(3). For longer-term retrospective dosimetry, fluorescence in situ hybridisation (FISH) staining of translocations can be used(4). In addition, new techniques for premature chromosome condensation and scoring offer alternative methodologies(5), and the γ-H2AX assay allows direct measurement of double-strand breaks (DSB), which are caused almost exclusively by ionising radiation, in a human cell(6). For physical dosimetry(7), the key techniques are electron paramagnetic resonance (EPR) with bone(8), teeth(9), nails(10–12) and glass(13), and optically or thermally stimulated luminescence (OSL/TL), chiefly of electronic components in mobile phones(14). A large number of studies have demonstrated the need for a wider range of techniques, both separately and in collaboration, in order that dosimetry can be carried out in a range of potential exposure scenarios(15, 16). Most importantly, the operational assays have recently been amalgamated into fully functional emergency response plans through the network ‘Realising the European network of biodosimetry’ (RENEB)(16, 17).

The European Radiation Dosimetry Group (EURADOS) was set up to advance the scientific understanding and the technical development of the dosimetry of ionising radiation by stimulating collaboration between laboratories in Europe. As part of the network, EURADOS Working Group 10 (WG10) was formed, with the objective of establishing a network of European (and some non-European) laboratories with expertise in the areas of physical and biological retrospective dosimetry.
The aims of WG10 include: establishing a multi-parameter approach to dose assessment in retrospective dosimetry (including emergency response); evaluating newly developed dosimetry methods; and establishing a common approach for uncertainty estimation throughout biological and physical methods of dosimetry. In order to address this last point, EURADOS WG10 task group 10.6 was created, with a focus on uncertainties associated with retrospective dosimetry techniques.

Uncertainty analysis can be defined as the quantification of the degree of confidence in the outputs of a model, taking into account the model inputs (data and parameters)(18). In classical statistics, the aim of uncertainty assessment is to provide a range around the estimated quantity, the uncertainty interval, within which the real value of the variable is likely to be found. The uncertainty interval is usually based on the standard deviation associated with the estimate of the variable. The process of uncertainty assessment is an intrinsic part of any method of retrospective radiation dosimetry; however, it has previously been noted that estimation approaches vary between different dosimetry techniques and, furthermore, that the overall effort devoted to uncertainty analysis varies widely between groups of retrospective dosimetry practitioners(19, 20). The Guide to the Expression of Uncertainty in Measurement (GUM) is the standard reference for uncertainty analysis techniques, and most of the retrospective dosimetry laboratories across Europe and the world use the GUM uncertainty estimation techniques(21). There are several relevant ISO standards and guides to assist with implementation of GUM(22, 23) and the associated analysis techniques, e.g. for assessment of accuracy(24) and laboratory intercomparisons(25). There are also a number of alternative techniques that may be employed, both for measurement and calculation of the quantity of interest, i.e. absorbed dose, for the propagation of uncertainties through this calculation, and for quantification of both type A (generally corresponding to random) and type B (generally systematic) uncertainty estimates. For example, Monte Carlo simulation, which can include auto- and cross-correlations among the components, can be a convenient means of obtaining probability distributions for parameter estimation in non-linear systems(26, 27). This can be combined with the use of Bayesian logical constructs to constrain parameter estimation and help improve the accuracy of dose estimates(28). Bayesian statistics treat the uncertain parameters and variables as probabilistic. Bayesian analysis combines assumed prior distributions for each parameter (which must be chosen by the user) with the experimental data to produce a posterior probability density function (PDF) that describes the probability of the parameter values given the observations. In all cases, the appropriateness of the approach should be carefully evaluated(29).

The aims of EURADOS WG10 task 10.6 are thus to survey the different methods used to assess uncertainties in retrospective techniques, to compare and contrast the different approaches, to popularise the potential of Monte Carlo or Bayesian techniques, to identify best practice and finally to attempt to homogenise the approaches where possible and/or desirable. The survey of uncertainty assessment methods used by WG10 participants was carried out in 2012.
The survey participants were overwhelmingly supportive of the need for a review of uncertainty analysis techniques, inter- and intra-technique comparisons of methods, and organisation of centrally administrated training once best practice has been identified. In this publication, the authors present a comparison of techniques of uncertainty estimation amongst laboratories using the biological and physical retrospective dosimetry methods of dicentric chromosome, micronucleus, PCC, FISH translocation and γ-H2AX analysis, EPR, OSL and TL. The data were collated from the survey and discussions at EURADOS annual meetings 2012–16 and from results of the RENEB networking project(17). The similarities and differences in recommended uncertainty analysis methods, those used in practice, and also the experimental and external factors that influence the results, are discussed. In biological dosimetry, Bayesian analysis methods have so far only been implemented for the DCA(30) and thus are discussed in this context. Bayesian approaches have been developed for application in luminescence retrospective dosimetry per se(28), and others are widely used to compare retrospective archaeological and geological age estimates(31), but have not yet been applied for accident reconstruction. The wider applications of Monte Carlo sampling are also discussed, with the aim of promoting the techniques for use within the community. Finally, areas in which biological and physical retrospective dosimetry uncertainty analysis methods may be improved are considered.

DOSIMETRY METHODS

There are a number of relevant publications including recent reviews of biological and physical retrospective dosimetry methods; thus only a relatively high-level summary is presented here, in order to set the scene for the review of the uncertainty estimation methods that are currently in use.

Biodosimetry

The most well-established assays for biological dosimetry rely on assessing chromosome aberrations in peripheral blood lymphocytes, as the number of aberrations induced by ionising radiation corresponds to the absorbed dose. The established techniques are: (1) the dicentric chromosome assay (DCA), which is the ‘gold standard’(2); (2) fluorescence in situ hybridisation (FISH)(4); (3) premature chromosome condensation (PCC)(5); (4) the cytokinesis-block micronucleus assay (CBMN)(3); and (5) counting of γ-H2AX foci, which form at the sites of double-strand breaks(6). The two main prerequisites for biodosimetry are the stability of aberrations with time following irradiation and knowledge of the background level, i.e. the number of aberrations in a given sample before irradiation occurred. DCA, PCC and CBMN show stability of responses for several months: induction of these aberrations means the cells cannot reproduce, thus the cells die and the aberrations are gradually removed from the population of circulating lymphocytes. The relative specificity to ionising radiation therefore drives the accuracy of these methods. In contrast, while γ-H2AX and FISH are not radiation specific, the individual background levels are becoming better understood(4, 6, 32). The vast majority of γ-H2AX foci disappear within approximately 24 h, making this very much a short-term assay; translocations detected by FISH, however, are stable over many years. All of the established methods were developed based on visual scoring techniques, and a large amount of work has been required to ensure their suitability for mass casualty events(17).
The current status is promising, although the need for automation and other more rapid strategies remains(1, 33, 34), and there is also a need for development of appropriate, reliable uncertainty methods.

Dicentric chromosome assay

As the ‘gold standard’ of biodosimetry, the statistical analysis methods for the dicentric chromosome assay (DCA) are extremely well defined. An ISO standard has been created for the assay in routine(35) and rapid assessment ‘triage’(36) modes. The assay’s standardised guidance text, the International Atomic Energy Agency (IAEA) cytogenetics manual(1), is used by almost all practitioners of the assay for biodosimetry purposes. In brief, the dicentric yield y (average number of dicentric chromosomes per cell) is modelled as a function of the absorbed dose D by formation of a linear or linear-quadratic calibration curve:

y = C + \alpha D + \beta D^2   (1)

The specific type (energy/LET) of radiation determines the type of curve, which is created by counting (‘scoring’) aberrations in large numbers of cells, especially at low doses. For in vivo exposure cases, the yield of aberrations observed in the exposed sample is then compared to the calibration curve. Dicentrics are highly specific to ionising radiation, with only a few rarely used radiomimetic drugs also able to induce them, and the lower limit of detection is approximately 100 mGy(1). Simple methods also exist to account for fractionation or partial-body exposures, chiefly based on adherence to or departure from the expected Poisson distribution of aberrations(1).

Micronucleus assay

The in vitro cytokinesis-block micronucleus assay (CBMN) is another well-established method for biodosimetry(3). The assay is based on assessing the frequency of acentric chromosome fragments (micronuclei, MN) and, to a small extent, malsegregation of whole chromosomes in binucleated (BN) cells. MN are caused by many clastogenic and aneugenic agents(37) and thus are not specific for ionising radiation. Compared to the DCA, scoring of MN is simple and quick. Furthermore, scoring can be automated relatively easily and international standardisation is in place(38). The reported background frequency of MN is variable: values ranging from 0 to 40 per 1000 BN cells have been recorded(1). Consequently, the lower limit for dose detection by conventional CBMN is 0.2–0.3 Gy, although more detailed analysis restricted to centromere-positive cells lowers this limit to 0.05–0.1 Gy. The most important determinants influencing MN background rates are dietary factors, exposure to environmental clastogens and aneugens, age and gender(1). When possible, for instance for medical exposure scenarios, the variability of the background level should be decoupled from the other parameters by identification of the individual background level in blood samples taken before irradiation(39). Dose estimation using CBMN follows the same strategy as for the DCA. The absorbed dose can be assessed up to a few months after exposure(1, 40, 41). Drawbacks of CBMN are the natural overdispersion of MN data, which makes partial-body irradiation hard to detect, and the larger variation in background levels, which means that the detection limit is generally greater(37).

PCC

An important limitation of the DCA, CBMN and FISH assays is the prerequisite for lymphocytes to enter the mitotic phase of the cell cycle, which requires culturing for 48 h or more. Thus dose estimates always take several days.
The need for metaphases also leads to several technical problems, including radiation-induced mitotic delay and cell death, that can lead to non-representative cell samples. After high doses of ionising radiation, this can cause considerable underestimation of absorbed doses. Premature chromosome condensation (PCC) can be induced by cell fusion or chemical induction. The cell fusion PCC technique visualises chromosome aberrations in interphase cells, which can allow same-day biological dose assessment. Chemically induced PCC has been validated for triage in several potential exposure scenarios(1, 5, 40). PCC analysis is not a biomarker on its own; rather, it should be combined with scoring of specific chromosome aberrations (e.g. fragments, rings or translocations). The frequency of spontaneously occurring PCC fragments is in the range of the dicentric frequency, 1–3 in 1000 cells. For dose assessment with PCC, the same tools are used as presented for the DCA. The PCC assay is particularly useful for assessment of a wide range of doses: it is applicable both to low-dose exposures (as low as 0.2 Gy for PCC fragments) and to life-threatening high acute doses of low- and high-LET radiation (up to 20 Gy)(42).

Fluorescence in situ hybridisation

Dicentric chromosomes, rings and MN are ‘unstable’ chromosome aberrations and thus they are lost from the peripheral blood lymphocyte pool at the rate at which cell renewal occurs. FISH techniques allow identification of stable translocations, and have been used for many years for assessment of past exposures(1, 43). However, background frequencies increase significantly with age and vary greatly between individuals of the same age and dose history. No significant gender effects have been observed, but smoking habit has been suggested to be of significance(32). The observed number of aberrations must be corrected for these known confounding factors in order to obtain radiation-induced translocations only, before dose–response curves are calculated as for dicentrics(1). Note that as a consequence of stability and non-specificity to radiation, the minimum detectable dose is limited as a function of time, with background contributions to the minimum detectable dose in the region of 1.8 mGy per year (20–69 years) for acute doses and 15.9 mGy per year for chronic exposure(42). At high doses, correlation of translocations and complex aberrations in cells is also of importance(1). Although the biological complexity of this assay is relatively well understood, the uncertainty analysis techniques remain simplistic and more work is needed.

γ-H2AX

The γ-H2AX assay, commonly used for investigating radiosensitivity, has in recent years become a well-established biomarker for radiation-induced DNA double-strand breaks and thus radiation exposure(6, 44). Fluorescence microscopy or flow cytometry is used to measure the formation of DNA repair-protein clusters termed γ-H2AX foci (in terms of number or intensity) in peripheral blood lymphocytes of exposed persons(45, 46). The foci are not specific for radiation exposure, but the spontaneous frequency is very low. Wide use for biodosimetry purposes is limited by fast loss of the signal (the maximal γ-H2AX level is reached 30 min after irradiation, and the tissue-related half-life of the signal is 2–7 h) as well as by high individual variability(42, 47, 48).
The advantages of the assay are its high sensitivity and relatively low detection limit (as low as tens of mGy), the need for only a few drops of blood, the fact that lymphocyte cultures are not required, and the ease of automation. However, the absorbed dose can only be assessed up to ~1 day after exposure(49). In addition, the influence of factors such as age, gender and genotype is not yet well understood(50–56). The relatively large uncertainties allow γ-H2AX to be used for biodosimetry only in extremely controlled scenarios(57), and uncertainty assessment is carried out on a case-by-case basis, with the sophistication of the analysis varying greatly between laboratories.

Physical retrospective dosimetry

Physical retrospective dosimetry methods are dose estimation techniques based on the quantitative evaluation of detectable changes induced by ionising radiation in inert materials, or on the activation of atoms such as sodium or phosphorus when exposed to neutrons. They are usually only suitable for detection of external exposure and, in situations of partial or localised exposure, they provide useful dosimetric information only if the ‘fortuitous dosemeter’ (defined as a material validated for dosimetry which the individual happens to be carrying) is by chance within the radiation field. The three physical dosimetry techniques considered in this review are electron paramagnetic resonance (EPR), optically stimulated luminescence (OSL) and thermoluminescence (TL).

Electron paramagnetic resonance

EPR dosimetry is based on the quantification of paramagnetic species (defects or free radicals) induced by ionising radiation. In solids, such as crystalline materials, the radicals/defects can be trapped and are thus generally sufficiently stable to be measured. The EPR signal is a measure of the radical/defect concentration within the solid matrix and is usually proportional to the mean absorbed dose in the sample. The principles of EPR spectroscopy may be found in several textbooks(58–60) and a detailed description of applications in retrospective dosimetry is given by Trompier and colleagues(61). In general, depending on the complexity of the EPR spectra, the area under the absorption curve or the peak-to-peak amplitude of an EPR signal is used and considered to be proportional to the concentration of paramagnetic species. The validated assays for retrospective analysis are calcified tissues (tooth enamel and bone) and sugar, while materials that are widespread among humans, such as fingernails, mineral glass, sweeteners, plastics and clothing fabrics, are under investigation(12, 13, 62–66). The ideal characteristics of a fortuitous dosemeter are a high radiation-induced signal yield, the absence of endogenous signals, a low UV-induced signal, a low detection limit, linearity of the signal with dose and post-irradiation signal stability(61). The quantity of radiation-induced radicals may be correlated to a value of the absorbed dose either via application of a ‘positive control’, i.e. the delivery of additional radiation doses with a laboratory source (‘additive dose’ method), or via a calibration curve. Calibration curves may be created on an individual basis for each sample, or using different samples each irradiated at a different radiation dose, i.e. always applying the same universal signal-to-dose calibration curve. A disadvantage of the universal calibration curve is that it does not take into account the specific sensitivity of the sample.
Instead, the curve is built using the average of the sensitivities of different samples. This of course affects the uncertainty associated with the estimated dose. When EPR is used for ionising radiation dosimetry, other confounding factors such as UV exposure or mechanical stress may generate additional radicals, resulting in signals which may overlap or mask the radiation-induced component of the total signal. In these cases, spectral simulations or other numerical analysis methods are needed in order to decompose the different components of the spectrum and extract the signal of interest. The average lifetimes of the radicals vary from minutes to billions of years. Controlling the sample storage conditions, for example by keeping the samples in the dark, at low humidity and sometimes in a freezer, will protect the samples against unwanted changes. A key advantage of EPR analysis is that it can be repeated as many times as needed, as the readout process does not alter the signal. This gives the possibility to estimate the effects of sample positioning and of spectrometer reproducibility and stability, which all play a large role in the uncertainty budget for EPR dose estimates.

Optically/thermally stimulated luminescence

Luminescence dosimetry relies on the stimulated emission of light from an insulator or a semiconductor after the absorption of energy from ionising radiation(67). Ionising radiation transfers energy to the electrons of the solid, moving them to a metastable state. When the electrons return to the ground state, recombination occurs and luminescence light is emitted. This recombination occurs after absorption of stimulation energy, provided by heat in the case of TL and by light for OSL. Crystals contain defects, which produce spatially localised energy levels in the energetically ‘forbidden’ zone between the valence and conduction bands. Ionising radiation produces electron–hole pairs by exciting electrons beyond the potential of their parent molecule into a delocalised state, which is most commonly the conduction band. As electrons and holes migrate in the conduction and valence bands, most recombine rapidly but some become trapped in metastable states associated with the defects. Later, these can be excited by thermal or optical stimulation, so that electrons and holes are again able to recombine. Following recombination, the host molecule is excited, and some molecules emit photons at visible wavelengths as they de-excite: this emission is termed thermally or optically stimulated luminescence, depending on the type of stimulation. In TL, the light yield is recorded as a glow curve, i.e. as a function of the stimulating temperature, whereas in OSL the number of emitted photons per time interval is recorded as a function of the optical stimulation time. For both stimulation modes, the area under the glow curve/OSL decay curve is related to the total number of emitted photons and thus to the absorbed dose in the dosemeter. Published results on the applications of TL and OSL for retrospective dosimetry have concentrated on chip cards, electronic components and glass from personal electronic devices(68). For some of these methods, interlaboratory comparisons have been performed(69).

UNCERTAINTY ESTIMATION APPROACHES

Biological Dosimetry—Dicentric Assay

Frequentist approach: confidence limits

Uncertainty assessment for cytogenetic dosimetry is widely understood as the quantification of the variability within the dosimetric model, e.g. as defined by equation (1).
Thus, parameter uncertainties as well as biomarker variability need to be considered. Indeed, a full uncertainty analysis for cytogenetic dosimetry following the GUM(21) considers a long list of factors. For routine dosimetry, this list includes: the type and parameters of the dose–response curve, the stochastic characteristics of the biological marker, inter- and intra-individual variability, technical noise sources and practical limitations (e.g. in vitro calibrated methods applied to in vivo data). More complex scenarios of exposure induce further challenges, as described by Vinnikov and colleagues(19, 70). As processing is time consuming and the level of experience varies between laboratories, the potential for standardisation and verification of uncertainty analysis methods has so far been very limited. This may explain the absence of agreement on some of the expected uncertainty parameters within this field, such as the coverage factor(21).

For the DCA, which has the most well-developed uncertainty estimation methods of all the biological assays, uncertainties on estimated doses are generally assessed by analysis of the variability of the dicentric yield y and of the parameters of the calibration curve C, α and β(71–74), according to the GUM methodology(21) (with detailed examples for several of the biological techniques given by Ainsbury et al.(75)). In the version of the IAEA manual published in 2001, which was the first time uncertainty analysis was discussed in detail in a methodological biodosimetry publication, three methods of uncertainty assessment were presented, labelled A–C. It was reasoned that Merkle’s Approach ‘C’(72) performs best for low numbers of dicentrics (as in low doses and/or few cells), whereas for high doses Savage’s Approach ‘A’(71) is more precise. In the updated 2011 version of the IAEA manual(1), Merkle’s approach is discussed in greater detail. This method of uncertainty assessment allows incorporation of both the Poisson error of the yield and the errors in the dose–response curve parameters. The confidence bands of the calibration curve follow from the insight that the maximum likelihood estimate of the parameter vector is asymptotically multivariate normal(72). The upper limit y_ul and lower limit y_ll for the expected mean yield are therefore:

y_{ul/ll} = C + \alpha D + \beta D^2 \pm R \sqrt{ s_C^2 + s_\alpha^2 D^2 + s_\beta^2 D^4 + 2 s_{C,\alpha} D + 2 s_{C,\beta} D^2 + 2 s_{\alpha,\beta} D^3 }   (2)

where s_x² denotes the variance of x and s_{x,z} the covariance of x and z, for each value of x and z in the equation. The regression factor R defines the range within which the true average yield is to be expected: R² is equivalent to the confidence limit of the Chi-squared distribution with 2 or 3 degrees of freedom for linear or linear-quadratic fits, i.e. for 95% confidence intervals, R = 2.45 at 2 degrees of freedom (df) and R = 2.80 at df = 3, respectively. Finally, the calculation of confidence intervals for the estimated dose includes two steps: (1) determination of the boundaries Y_U and Y_L of the dicentric yields which are consistent with the observed yield in the sample (95% confidence limits for the mean parameter of a Poisson random number);
(2) determination of the absorbed dose at which the upper dose–response curve y_ul exceeds Y_L, and of the dose at which the lower curve y_ll exceeds Y_U. Note that some authors reason that the combination of 95% confidence intervals for the dicentric yield as well as for the confidence bands leads to a falsely large confidence interval for the dose; thus, in the case of combined errors, 83% confidence limits of the Chi-squared distribution are more appropriate(76, 77) (the corresponding regression factors at 1−α = 83% are R = 1.88 (df = 2) and R = 2.24 (df = 3), respectively). The alternative approaches given in the IAEA manual(1), ‘A’ (Savage(71)) and ‘B’ (again Merkle(72)), are built on classical error propagation calculations for normally distributed random variables. They apply the Delta method to calculate the standard error for the estimated dose from the calibration curve and its parameter uncertainties. For dose estimations in more complicated scenarios, extensions exist, including correction for protracted or fractionated doses and partial-body exposures. In these cases, in principle, after correction of the curve, uncertainty assessment follows the same strategy as above. However, in this case, either the simplified method ‘C’ described above is used (with no parameter uncertainties) or equation (2) must be adjusted manually. Tools to apply the standard (IAEA) methods for automated dose estimation and uncertainty assessment are available, including CABAS (Chromosomal ABerration Calculation Software)(78) and DoseEstimate(79).

In order to assess inhomogeneous exposures more realistically, Sasaki modelled the damaged cell population with a mixed Poisson distribution, which can be numerically deconvoluted(80). The resulting exposure profile indicates some uncertainty within the dose; however, this does not represent a rigorous uncertainty assessment. In addition, a correction factor for confidence intervals of overdispersed data (i.e. dispersion index σ²/y > 1) has been proposed(1). For those samples, the limits of the expected range of the yield, Y_U and Y_L, of a sample with mean y and variance σ² should be adjusted as follows (with either Y_U or Y_L as appropriate):

Y_{U/L}^{*} = Y_{U/L} \left( \frac{Y_{U/L}}{y} \right)^{\sigma^2/y - 1}   (3)

Probabilistic approach: Bayesian methods

In parallel to the classical, frequentist approach, Bayesian methods are becoming increasingly popular in the field of biological dosimetry(30, 81, 82). Key to the Bayesian concept is the application of the inversion theorem in its continuous version, i.e.:

P(D | X_{obs}) = \frac{P(X_{obs} | D) \, P(D)}{\int P(X_{obs} | D) \, P(D) \, dD}   (4)

where D denotes the unknown parameter (absorbed dose) and X_obs the observation (the dicentric yield within the sample, y, and the calibration data). Thus, the posterior dose distribution (or calibrative dose density), P(D|X_obs), scales with the product of the likelihood (or predictive density) P(X_obs|D) and the prior P(D):

P(D | X_{obs}) \propto P(X_{obs} | D) \, P(D)   (5)

With respect to uncertainty analysis, the Bayesian approach does not require additional considerations, since the resulting distribution P(D|X_obs) (the probability of a dose given the data) inherently provides quantification of the uncertainty within the dosimetric model. Consequently, Bayesian uncertainty intervals for the calibration parameter (the dose in this case) are accurate. Apart from the intrinsic inclusion of uncertainty within the posterior model, the greatest advantage compared to the frequentist approach is the possibility to include other information besides the number of aberrations, through the chosen prior distribution(s).
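To illustrate equations (4) and (5) in their simplest practical form, the following is a minimal sketch (in Python) of a grid-based evaluation of the posterior dose density for a Poisson-distributed dicentric count and a linear-quadratic calibration curve. All numerical values (the curve parameters, cell and dicentric counts, and the Gamma prior) are hypothetical and chosen purely for illustration; they do not correspond to any particular published calibration.

```python
import numpy as np
from scipy.stats import poisson, gamma

# Hypothetical calibration parameters for y = C + alpha*D + beta*D^2
C, alpha, beta = 0.001, 0.02, 0.06   # dicentrics per cell; Gy^-1; Gy^-2
n_cells, n_dics = 500, 25            # observed sample: 25 dicentrics in 500 cells

doses = np.linspace(0.0, 6.0, 601)   # dose grid (Gy)
yield_curve = C + alpha * doses + beta * doses**2

# Likelihood P(X_obs | D): Poisson count with mean n_cells * yield(D)
likelihood = poisson.pmf(n_dics, n_cells * yield_curve)

# Prior P(D): weakly informative Gamma prior (shape and scale chosen by the user)
prior = gamma.pdf(doses, a=2.0, scale=1.5)

# Posterior P(D | X_obs) proportional to likelihood * prior, equation (5),
# normalised numerically on the grid
posterior = likelihood * prior
posterior /= np.trapz(posterior, doses)

mean_dose = np.trapz(doses * posterior, doses)
cdf = np.cumsum(posterior) * (doses[1] - doses[0])
ci_low, ci_high = np.interp([0.025, 0.975], cdf, doses)
print(f"posterior mean {mean_dose:.2f} Gy, "
      f"95% credible interval [{ci_low:.2f}, {ci_high:.2f}] Gy")
```

In this simple setting, the posterior mean and credible interval are read directly from the normalised grid, without any Gaussian approximation; a fully Bayesian analysis would additionally propagate the uncertainties of the calibration parameters, as discussed below.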
The choice of the prior can be a sensitive issue, since well-chosen, informative priors should guide noisy data towards the true dose, whereas incorrect priors may drive the estimate away from the true dose. Some authors reason that high quality prior information is almost always available, and thus that Bayesian approaches are more appropriate for biological dosimetry(70, 81). Higueras and colleagues(30) also showed that, if an appropriate prior is applied, the actual choice of prior does not in fact greatly impact the overall dose assessment in some scenarios. In contrast to the frequentist approach, which relies on processing the information successively from the initial sample, to the calibration data, to a point estimate of the dose surrounded by a confidence interval, the Bayesian approach incorporates all decisions at the same time, and the resulting equation is then solved analytically, numerically or empirically for the required components. A fully Bayesian method thereby also adjusts the PDF of the aberrations to the specific scenario of exposure and simultaneously incorporates the uncertainties of the parameters of the dose–response curve. The mathematical complexity of this task means that it is not possible to define a general Bayesian solution applicable to all exposure scenarios. Nevertheless, Di Giorgio and Zaretzky demonstrated a procedure to include prior information in dose estimation in a Bayes-like manner(83): discretisation of the dose range and a separate frequentist estimation of the response curve parameters provide a straightforward method resulting in a Bayes-like posterior of the dose(83). Note that this example also illustrates the influence of the prior on the credible intervals for the dose.

In general, to date, three types of Bayesian solutions can be identified within the literature. Firstly, analytical expressions for simplified scenarios: the earliest prominent example of such a solution is the calibrative density for a Poisson-distributed number of aberrations linked to the absorbed dose via a linear dose–response without intercept (C = β = 0), using Gamma priors for dose and slope(84). In this case, an analytical expression is derived that is proportional to the posterior. The authors reasoned that the trivial dose–response curve is appropriate for neutrons (high LET) at high doses; however, neutron dose responses are known to exhibit overdispersion(85). Brame and Groer revisited the same scenario in 2002, replacing the Poisson distribution by a negative binomial in order to jointly model the density of the slope and the degree of overdispersion(82). Secondly, practical guides for specific scenarios that provide R code for reuse: Higueras et al.(39) discussed the reasonable set-up of Poisson and compound Poisson models (Neyman A, negative binomial, Hermite) for biodosimetry. Complete and simplified models are provided and three examples are given for dose assessments with a linear-quadratic calibration curve (two Poisson and one negative binomial regression model). For each example, three different priors are compared(39). Together with Vinnikov, the same group of authors presented a guide for analysis of partial-body exposure with a zero-inflated Poisson model(30).
This guide approximates credible intervals for the irradiated fraction of the body and the received dose simultaneously from:

P(D, F, d_0 | y) \propto \left( F e^{-D/d_0} - F + 1 \right)^{-n} \sum_{j=1}^{n_0} \binom{n_0}{j} F^{n-j} (1-F)^{j} (n-j)^{s} \, P(X_j = s | D) \, P(D) \, P(F) \, P(d_0)   (6)

where D is the absorbed dose, F the fraction of the body irradiated, d_0 the 37% cell survival dose, n the number of cells in the patient data, n_0 the number of cells without aberrations, and X_j a negative binomial distributed random number corresponding to the unirradiated fraction of the cells, with mean and variance depending on the index j and on the mean and variance of the calibration curve, respectively. Uniform priors for F and d_0 and a Gamma prior for the dose are used(30).

Thirdly, software packages: the Java application CytoBayesJ for cytogenetic radiation dosimetry(81) and the R package radir containing the models by Higueras(86) offer platform-independent software solutions for Bayesian uncertainty assessment. CytoBayesJ offers tools for (i) distribution testing (compound Poisson models), (ii) posterior calculations of the number of aberrations (several combinations of priors and yield models), (iii) Bayesian-like dose assessment (Poisson data), (iv) full Bayesian calculation of posteriors of the dose (Poisson data in y = αD), as well as (v) Bayesian methods for detection limits(81). In order to simplify the analysis, given the mathematical complexity, most scenarios include a Bayesian uncertainty assessment of the dicentric yield followed by a conventional frequentist inverse regression step.

Ultimately, for biological dosimetry, it can be concluded that the Bayesian methodology provides the most coherent approach, but at the same time it is far more technically challenging than the dose and uncertainty assessment methods currently recommended and used by most practitioners(1). Despite recent developments, such methods thus remain to date ‘expert’ tools. Therefore, software solutions such as those described above will be required to bridge the gap between the necessary mathematical skills and the users. In particular, the potential pitfall of incorrectly chosen priors will need careful consideration going forward, since the methodological coherence of prior and posterior can be seen conceptually as a self-fulfilling prophecy that masks unexpected results.

EPR

Sources of uncertainty

Uncertainties or ‘errors’ can be classified into two types: type A errors can be evaluated by statistical methods, whereas type B errors are commonly termed ‘systematic errors’ and must be dealt with by other means. Historically, uncertainties have also been classified as ‘random’ or ‘systematic’ errors, and these terms are still sometimes used for type A and B errors, respectively. However, it is important to note that the GUM recommends the nomenclature of ‘type A’ or ‘type B’ to classify how an error is dealt with rather than where it originates(21). For EPR, retrospective determination of irradiation doses is not a straightforward task: uncertainties both of type A and of type B are introduced and must be carefully analysed and reported. Several technical publications have been produced dedicated to the determination of uncertainties with EPR spectroscopy on materials such as tooth enamel or alanine(87–90).
A list of possible sources of uncertainty has been drafted for tooth enamel dosimetry by Fattibene and Callens(91), and many issues considered in the list (effect of sample anisotropy, parameters of spectrum acquisition, spectrometer instability, sample mass, spectrum processing methods, uncertainty linked to the dose calibration curve, etc.) are valid for almost all EPR dosimetry methods. As recommended by the IAEA(87), the total combined uncertainty is expressed as the quadratic sum of the possible sources of uncertainty, under the assumption that these sources are uncorrelated. Specifically:

\sigma_{ED} = \sqrt{ \sigma_F^2 + \sigma_S^2 + \sigma_E^2 + \sigma_C^2 + \sigma_T^2 }   (7)

where σF is the contribution from the fading correction; σS is the contribution from the sample preparation; σE the contribution from the EPR measurement; σT the contribution from the numerical treatment of spectra; and σC the contribution from the calibration of the EPR dose–response, including differences in radiation sensitivity.

The fading contribution depends on the detector materials used in the analysis. For tooth enamel it can be assumed that σF does not contribute to the overall uncertainty(87); however, a sufficient delay should be observed after irradiation for signal stabilisation due to recombination of short-lived species. A minimum delay of 48 h is usually recommended for calcified tissue. This recommendation is also valid for most irradiated materials including sugars, for example, for which the stabilisation delay can reach weeks(92). EPR dosimetry is usually not performed on unstable species. Nevertheless, a few groups are considering the use of the unstable signal component in nails for dosimetry applications(10, 11, 93). In this case, the fading correction may contribute significantly to the uncertainty budget, because of the influence of multiple parameters (temperature, humidity, light, etc.) which may be difficult to evaluate for the delay between irradiation and sample harvest. In addition, special attention must be paid to controlling fading during the storage period, or at least to monitoring the parameters of influence. Similarly, the sample preparation approach depends on the material used as well as on the method chosen. The other three factors affecting the total uncertainty stem from measurements and data processing. The EPR measurement uncertainty, σE, depends on a complex combination of uncertainties linked to the performance of EPR spectrometers per se and to the experimental set-up. Uncertainty contributions from the numerical treatment of spectra, σT, and contributions from the calibration of the EPR dose–response, σC, can be minimised through experimental validation of the method for different materials, as has been done for tooth enamel(91) and smart phone touch screen glass(13). For this reason, international interlaboratory comparisons of EPR dose reconstruction are the most useful tools for identifying contributing sources of uncertainties and finding the best solutions to minimise these.

Evaluation of uncertainties

The uncertainty analysis approach used by several EURADOS partners is the standard one recommended for uncertainty estimation for EPR on alanine. In brief, this method consists of taking the mean value of multiple measurements as the best evaluation of the true value and the sample standard deviation as the uncertainty on the signal. A minimum of 10 measurements is commonly used; however, some EURADOS WG10 members regularly perform 12 or 16 measurements (three dosemeters in four different orientations, or four dosemeters in four orientations), as illustrated in the sketch below.
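As a concrete illustration of this type A evaluation, the following minimal Python sketch computes the mean signal and its standard uncertainty from such a set of repeated readings; the amplitude values are hypothetical and purely illustrative.

```python
import statistics

# Hypothetical peak-to-peak EPR amplitudes (arbitrary units):
# three dosemeters, each read in four orientations
signals = [12.1, 11.8, 12.4, 12.0,
           11.6, 12.2, 11.9, 12.3,
           12.5, 11.7, 12.0, 12.2]

mean_signal = statistics.mean(signals)    # best evaluation of the true signal
sd = statistics.stdev(signals)            # sample standard deviation (type A)
u_mean = sd / len(signals) ** 0.5         # standard uncertainty of the mean

print(f"signal = {mean_signal:.2f} +/- {u_mean:.2f} (k = 1, n = {len(signals)})")
```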
In order to measure unknown absorbed doses, a calibration curve must be created to describe the relationship between the EPR signal (y) and the dose (D): y = F(D). The parameters of the function F are estimated through a best-fit procedure. The dose–signal relation is usually linear, and weighted regression is sometimes, but not always, applied. Thus, the general expression for the estimated dose E_D as a function of the measured signal, y, is as follows:

E_D = \frac{y - b}{a} \pm u(E_D) = \frac{y - b}{a} \pm \sqrt{ \frac{1}{a^2} u^2(y) + \frac{1}{a^2} u^2(b) + \frac{(y-b)^2}{a^4} u^2(a) + \frac{2(y-b)}{a^3} u(b,a) + u_{cal}^2 }   (8)

where a is the calibration curve slope and b is the y-axis offset. The combined variance u²(E_D) consists of the variance in the measured signals u²(y), the variance in the y-axis offset u²(b), the variance in the slope of the calibration curve u²(a), the covariance of b and a, and finally the dose-dependent variance in the doses given to the calibration samples, u²_cal. The covariance term was found to be negligible in accurate EPR dosimetry(94, 95) and it is assumed that this will also be the case for retrospective dosimetry. An unbiased estimate of the variance of the measured signals can be obtained from:

u^2(y) = \frac{1}{n-2} \sum_{k=1}^{n} (y_k - b_k - a D_k)^2   (9)

where D_k are the known absorbed doses given to the calibration samples, y_k are the corresponding signal values and b_k is in this case the zero-dose signal for each calibration sample. The denominator represents the number of calibration points minus the two degrees of freedom taken by the fit. Standard regression analysis yields:

u^2(b) = u^2(y) \, \frac{\sum_{k=1}^{n} D_k^2}{n \, \sigma^2(D)}   (10)

u^2(a) = \frac{u^2(y)}{\sigma^2(D)}   (11)

where:

\sigma^2(D) = \sum_{k=1}^{n} (D_k - \bar{D})^2   (12)

and the covariance term is omitted. These principles allow deduction of chiefly type A uncertainties. Type B uncertainties are generally considered by taking into account uncertainties in fading, corrections for radiation energy, environmental factors, spectrometer variations and the calibration dose. Note, however, that fading could be treated as a type A uncertainty if multiple measurements of the dispersion in fading for a given time period are performed; such a method is used for TL/OSL dosimetry, as described in the next section.

An alternative approach relies on determination of the absorbed dose in alanine measurements from the calibration curve, by relating the amplitude of the EPR signal to the absorbed dose. To estimate the uncertainty of the dose, u(E_D), an imperfect calibration curve is designed and the procedure described by Nagy(89) is applied; in the case of a calibration plot based on n calibration points, the confidence interval for the dose value D, determined from m replicate measurements of the signal of a test sample, is calculated using the following expression:

D = D_0 \pm t_{n-2,P} \cdot \frac{s_{fit}}{b} \cdot \sqrt{ \frac{1}{m} + \frac{1}{n} + \frac{(D_0 - D_{mean})^2}{\sum_{i=1}^{n} (D_i - D_{mean})^2} }   (13)

where t_{n−2,P} is the Student coefficient for the chosen probability P; s_fit is the standard uncertainty of the mean of the fit; b is the slope of the regression; D_0 is the dose value to be determined; and D_mean is the mean of the dose values of all calibration points D_i.

Performance parameters and predicted uncertainty

In the framework of the European Research project Southern Urals Radiation Risk Research (SOUL, 2005), a benchmark protocol was established between three EPR laboratories (HMGU, Munich; ISS, Rome; and IMP, Ekaterinburg) for the definition of the performance parameters for EPR dosimetry with tooth enamel and for the prediction of the associated uncertainty(96).
The parameters ‘critical dose’ and ‘limit of detection’, taken from chemical metrology(97, 98), were deemed to be most appropriate to characterise the uncertainty of EPR measurements. The definition of the critical dose follows from the hypothesis test at 95% probability for an unirradiated sample, and hence allows a false positive error rate, α, of 5%. In other words, within the distribution of measured EPR signal amplitudes from unexposed samples, there is an accepted probability of 5% that the amplitude is larger than the critical amplitude, which is the decision limit below which it is assumed that the sample was not exposed and above which it is assumed that an exposure occurred. The absorbed dose value corresponding to the critical amplitude on the EPR signal-to-dose–response curve is then termed the critical dose. The definition of the limit of detection follows from the hypothesis test at 95% probability that the sample was exposed, hence allowing a false negative error rate, β, of 5% of wrongly indicating that an exposure did not occur. That is, within the distribution of the measured EPR signal amplitudes from exposed samples, there is a probability β of 5% that the amplitude is lower than the critical amplitude. A graphical illustration of the definitions of critical amplitude and limit of detection is shown in Figure 1.

[Figure 1. A graphical illustration of the definitions of critical amplitude (I_CL) and critical dose (D_CL), and of the limit of detection of signal amplitude (I_DL) and of absorbed dose (D_DL). Figure adapted from(96).]

The critical amplitude, I_CL, and the limit of detection, I_DL, of the EPR signal intensity are calculated from the mean of measurements of unexposed samples (b_0) and the estimated standard deviations of n EPR measurements of unexposed samples, σ̂_0, and of samples exposed to a dose D_DL, σ̂_DL, respectively:

I_{CL} = b_0 + t_{(1-\alpha, n-2)} \, \hat{\sigma}_0   (14)

I_{DL} = I_{CL} + t_{(1-\beta, n-2)} \, \hat{\sigma}_{DL}   (15)

The estimated standard deviation must be multiplied by the Student’s critical value t_{(1−[α or β], n−2)}, the (1−[α or β]) percentage point of Student’s t distribution, with the single-sided confidence interval chosen according to the desired confidence level (1−[α or β]) and the number of samples n; a numerical illustration is given in the sketch below. The standard deviations may be evaluated from the 90% prediction bands of an unweighted linear least-squares fit of the EPR signal-to-dose–response curves in the case of constant uncertainty. Alternatively, in the case of dose-dependent uncertainty, the values of the standard deviations may be predicted from an analytical model function formulated from the variance of the EPR measurements as a function of the absorbed dose. An example of such a function, developed at the EPR laboratory of the ISS, is presented in Figure 2.

[Figure 2. Model function of variance as a function of the EPR signal amplitude in tooth enamel, built at the EPR laboratory of the ISS.]

Following the work carried out under the European project SOUL, the benchmark protocol has been used for the estimation of the performance parameters within several EPR dosimetry method intercomparisons(13, 99).
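For illustration, the following minimal Python sketch evaluates equations (14) and (15) for two hypothetical sets of amplitude measurements (unexposed samples, and samples exposed at the detection-limit dose D_DL); the numbers are invented for the example, and SciPy is assumed to be available for the Student t quantiles.

```python
import numpy as np
from scipy.stats import t

# Hypothetical EPR amplitudes (arbitrary units) of unexposed samples
# and of samples exposed at the detection-limit dose D_DL
unexposed = np.array([0.8, 1.1, 0.9, 1.2, 1.0, 0.7, 1.3, 0.9])
exposed_at_DDL = np.array([2.9, 3.4, 3.1, 2.7, 3.3, 3.0, 3.2, 2.8])

alpha = beta = 0.05        # accepted false positive / false negative rates
n = len(unexposed)

b0 = unexposed.mean()      # mean amplitude of the unexposed samples
# Critical amplitude, equation (14): single-sided Student factor, df = n - 2
I_CL = b0 + t.ppf(1 - alpha, n - 2) * unexposed.std(ddof=1)
# Limit of detection of the signal amplitude, equation (15)
I_DL = I_CL + t.ppf(1 - beta, len(exposed_at_DDL) - 2) * exposed_at_DDL.std(ddof=1)

print(f"I_CL = {I_CL:.2f}, I_DL = {I_DL:.2f} (signal units)")
# The corresponding critical dose D_CL and detection limit D_DL follow by
# converting I_CL and I_DL to dose via the signal-to-dose-response curve (Figure 1).
```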
OSL/TL

Evaluation of uncertainties

In OSL and TL there is no specific standard dedicated to the evaluation of uncertainties, and evaluations normally follow the classical GUM(21) guidance. Uncertainty analysis is performed using the standard theory of error propagation. If only a single dose calibration point is used, then the unknown absorbed dose, D_X, is obtained through a simple comparison between the corresponding luminescence signal (TL or OSL) and the luminescence signal, I_cal, obtained after exposing the same dosemeter to a calibration dose D_cal. If fading is an issue, then either the signal or the measured dose can be corrected for this effect. Two cases will be considered here, both of which have been applied in the literature: (1) the fading factor is determined individually for the sample in question using the (known) time t_X since irradiation, and (2) the fading factor is calculated based on a known fading function, with associated uncertainties.

In the case of (1), a possible approach would be as follows: after measurement of the signal I_X related to the unknown dose D_X of the incident, with a time delay t_X since this incident, the sample is given a calibration dose D_cal and the corresponding signal I_cal is measured after a time t_cal. The latter procedure is then repeated with the same dose D_cal, but this time waiting for the longer time interval t_X (the same time delay as for the accidental exposure) before measuring the corresponding signal I_X,Dcal. The fading factor is then directly determined by the simple relation:

f = \frac{I_{X,D_{cal}}}{I_{cal}}   (16)

In this case, only the measurement uncertainties of I_X,Dcal and I_X are required for evaluation of the uncertainty in f. The unknown absorbed dose D_X is then calculated as:

D_X = \frac{I_X}{f \, I_{cal}} D_{cal}   (17)

with I_X being, as above, the signal measured after the unknown exposure, with a delay time t_X. It is important to note the difference between I_X and I_X,Dcal. Equation (17) can then be simplified to:

D_X = \frac{I_X}{I_{X,D_{cal}}} D_{cal}   (18)

Therefore, only the measurement uncertainties in I_X and I_X,Dcal and the uncertainty in determining D_cal are needed for the calculation of the uncertainty in D_X, which can be carried out using the GUM methodology as explained in the sections above. This method assumes that the uncertainties in I_X and I_X,Dcal exhaustively explain the observed deviations in the dose–response and fading curves. However, from experience, it is known that this is probably not always the case, and the uncertainty in D_X is therefore likely to be underestimated using the above procedure. For instance, an uncertainty in the time t_X since the unknown exposure is not considered.

In the case of (2), fading is calculated according to a functional relationship fitted to datasets of other samples. For chip cards and electronic components, where the effect of anomalous fading is suspected, this functional relationship between intensity and time since irradiation is well known(69, 100–104):

I(t) = I_C \left[ 1 - \kappa \ln\left( \frac{t}{t_C} \right) \right]   (19)

with I_C being the signal intensity that would be observed after an (arbitrarily) chosen time t_C after irradiation, and κ a fitting constant (in the literature, the common logarithm is often used and κ replaced by g/100, with g being the percentage decrease per decade). If, for convenience, t_C is set to t_cal before fitting of equation (19), then the fading factor can be calculated as:

f = \frac{I(t_X)}{I(t_{cal})} = 1 - \kappa \ln\left( \frac{t_X}{t_{cal}} \right)   (20)

The difference between equations (20) and (16) is that here the signal intensities are calculated rather than measured, as illustrated in the sketch below.
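As a minimal numerical sketch of this second case, the following Python fragment evaluates the fading factor of equation (20) and the fading-corrected dose of equation (17); the fitted constant κ, the signals and the delay times are hypothetical values chosen purely for illustration.

```python
import math

kappa = 0.05                  # hypothetical fitted fading constant, equation (19)
t_X, t_cal = 240.0, 2.0       # hours since the accident / since calibration irradiation
I_X, I_cal = 1450.0, 5200.0   # measured signals (counts): unknown and calibration dose
D_cal = 2.0                   # calibration dose (Gy)

# Fading factor from the fitted fading function (t_C set to t_cal), equation (20)
f = 1.0 - kappa * math.log(t_X / t_cal)

# Fading-corrected estimate of the unknown dose, equation (17)
D_X = I_X / (f * I_cal) * D_cal
print(f"f = {f:.3f}, D_X = {D_X:.2f} Gy")
```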
If, again, a single calibration dose is used to convert signal to dose, the unknown dose D_X is calculated according to equation (17). If uncertainties are assumed in κ, t_X, t_cal, I_X, I_cal and D_cal, then in this simplified case the uncertainty in D_X can be assessed using the GUM methodology:

\delta D_X = D_X \sqrt{ \left( \frac{\delta D_{cal}}{D_{cal}} \right)^2 + \left( \frac{\delta I_X}{I_X} \right)^2 + \left( \frac{\delta I_{cal}}{I_{cal}} \right)^2 + \left( \frac{\delta f}{f} \right)^2 }   (21)

with:

\sigma_f^2 = \left( \ln \frac{t_X}{t_{cal}} \right)^2 \sigma_\kappa^2 + \left( \frac{\kappa}{t_X} \right)^2 \sigma_{t_X}^2 + \left( \frac{\kappa}{t_{cal}} \right)^2 \sigma_{t_{cal}}^2   (22)

It should be emphasised that if published fading parameters are used in equation (19) and t_C does not equal t_cal, then equation (20) should be applied in its more general form, as a ratio of two calculated intensities with associated uncertainties. The calculation of the uncertainty in D_X is then more laborious, but still straightforward. Another issue is that if equation (19) is fitted to fading data obtained from averaged signals of several samples, which is sometimes done, then the calculated signal uncertainty will always be lower than the standard deviation of the input data, i.e. the uncertainties in the parameters κ and I_C are unlikely to describe the full variability in the observed fading behaviour. One possibility to circumvent this issue is to fit the fading data of each sample individually, rather than averaging the signals, and then to calculate the average and standard deviation of the group of obtained parameter values. This approach was pursued, for example, in the MULTIBIODOSE project(49).

For a luminescence reader with a built-in calibration source, the time delay t_cal between irradiation with the calibration dose and measurement is usually known very accurately; the uncertainty δt_cal can therefore be neglected in this case. On the other hand, an increase in the value of t_cal leads to a reduction in the first term in equation (22) and thus to a reduction in the uncertainty of the fading factor. Furthermore, there will also be fading during the irradiation period itself, i.e. during t_irr. If t_cal is of the order of t_irr, this should be accounted for in the evaluation of the uncertainty in D_X, for example (approximately) by adding t_irr/2 or t_irr/ln 2 to the delay time. It is more correct, however, to make t_cal » t_irr. It should be noted that, in general, the fading function that is appropriate to the materials being studied should be determined independently, and the uncertainty analysis appropriate to that expression should then be evaluated and used. As an example, in the case of human teeth as well as integrated circuits from mobile phones, the fading curves were better fitted by a bi-exponential decay function(100, 102).

In contrast to electronic components, sensitive dosemeter materials with comparatively slow or no fading, such as household salt (NaCl)(101, 105) and quartz extracted from building materials(106–108), may have a substantial background signal if shielded from light during the time of storage before irradiation. The detection limit in such materials is related to the magnitude of, and the uncertainty in, the background signal. The background absorbed dose in, for example, household salt may vary depending on how the salt was manufactured(101), in what package it was kept, and how and where this package was stored(105). For quartz extracted from bricks, the background dose depends on the concentration of natural radionuclides in the brick, plaster and soil in front of the building, and on the age of the bricks (as referenced above).
In these cases, when the dose is in the region of the background dose, the variance in the background dose will predominate over the uncertainty in the dose measurement itself. However, for higher doses, in the region of several hundred mGy or above, the uncertainty in the background dose will have a lesser impact.

Luminescence signals as recorded by photon counting hardware are in essence binomial, and are assumed to approximate the Poisson distribution when sufficient counts are registered. In OSL, a background is usually subtracted from the measured signal, determined from a certain part of the OSL decay curve (often the last seconds of the measured signal). The background can be a combination of hard-to-bleach components and instrumental background, and as such can be overdispersed. Detailed approaches for the calculation of the uncertainty in the net OSL count in such a case can be found in the literature, e.g.(109).

If several calibration doses are used in order to verify the dose–response curve, or several delay times are used to verify the fading curve, a number of different methods are applied. These include proprietary codes or spreadsheets, software for luminescence data processing (e.g. Analyst(110)), or dedicated curve fitting packages (e.g. Sigmaplot(111), Origin(112)). The equations chosen to approximate dose responses are commonly linear, saturating exponential (sublinear) or exponential/quadratic (superlinear) in form(113). If a sample is divided into several aliquots to assess D_X, the quantities I_X and I_cal could be calculated as the average of the luminescence signals of the different aliquots, and their uncertainty as the weighted standard error of the mean. However, to avoid additional uncertainties due to the different aliquot sensitivities, in practice a dose is usually measured for each aliquot individually and the obtained distribution of aliquot doses is further analysed to obtain a best estimate and uncertainty for D_X. A variety of approaches has been developed for obtaining central measures from non-perfect data in this case(28, 114, 115). Means and maximum likelihood estimates such as the weighted mean (weighted by inverse variance) are associated with well-defined uncertainty estimates(21), which are obtained by propagation of uncertainties through the calculation (internal error) or by evaluation of the dispersion in the observed results (external error). The dispersion in observations is commonly observed to be greater than that predicted by propagation of uncertainties through the calculation of the central estimate, leading to overdispersion. This can relate to experimental variables that are undefined or not included in the calculation, particularly where signal levels are low. It may also relate to the assumptions underlying the calculations themselves. The combination of data in GUM-based approaches assumes a Gaussian approximation of the Poisson distribution.

MONTE CARLO MODELLING TO SUPPORT UNCERTAINTY CALCULATIONS

A key aim of the EURADOS WG10 uncertainties task is to further promote the powerful Monte Carlo (MC) techniques for uncertainty estimation. Thus, in addition to the above review of uncertainty analysis techniques in retrospective dosimetry, we present the following review of the use of MC methods within uncertainty estimation. With the availability of high-power computational facilities, numerical simulations have become increasingly practical and popular for the analysis of physical or biological systems.
MONTE CARLO MODELLING TO SUPPORT UNCERTAINTY CALCULATIONS

A key aim of the EURADOS WG10 uncertainties task is to further promote the powerful Monte Carlo (MC) techniques for uncertainty estimation. Thus, in addition to the above review of uncertainty analysis techniques in retrospective dosimetry, we present the following review of the use of MC methods within uncertainty estimation. With the availability of high-power computational facilities, numerical simulations have become increasingly practical and popular for the analysis of physical and biological systems. One method of numerical simulation that has widespread application in dosimetry, as well as in countless other physical and biological sciences, is the Monte Carlo method. MC modelling can be used to aid and analyse uncertainty propagation, where the goal is to determine how random variation, lack of knowledge or error affects the sensitivity, performance or reliability of the system being modelled. However, this inevitably comes at a cost: the MC method is itself prone to uncertainty, and can therefore introduce additional sources of error. This section focuses on two applications of the Monte Carlo method that are relevant to retrospective dosimetry. The first is the use of Monte Carlo programmes created specifically for uncertainty propagation analysis, with an illustrative example of the technique. The second concerns the MC transport of ionising radiation through matter, a common technique used to model retrospective dosimetry systems. In each case, the role that MC plays in both increasing and decreasing a user's understanding of uncertainty is discussed.

Uncertainty propagation with Monte Carlo

Monte Carlo simulation (MCS) provides a practical alternative to the GUM modelling approach. Indeed, the GUM method has limitations, especially where the model is characterised by a non-linear function and a first-order Taylor series expansion is not sufficient for error propagation. Furthermore, the uncertainty distributions may be non-Gaussian, in which case it is not always possible to propagate uncertainties using the GUM approach(21, 116–120). The use of MCS for the evaluation of measurement uncertainty is presented in the 'GUM Supplement'(21). MCS may be applied to estimate the combined effects of uncertainty propagation through a physical system comprising a number of individual components, each of which possesses outcomes and uncertainties expressed by independent probability functions. Many authors report applications of MCS for the determination of measurement uncertainties: Couto et al.(116), for example, recommended its use for complex problems that cannot be solved by the GUM method. Whereas GUM calculations are purely analytical, MC analyses perform a large series of simulated experiments, with estimates of uncertainties then derived from the distributions of their results. In the simplest cases, the theoretical GUM results can be compared and tested against the MC ones; in more complicated situations, where the GUM approach would be difficult or unfeasible, MC simulations may still provide reliable results. Moreover, whereas the GUM modelling approach may require advanced mathematical skills for many of its procedures, the MCS method can be applied using readily available spreadsheet software, such as Microsoft Excel or LibreOffice Calc: complex uncertainty calculations can hence be accomplished by non-statisticians using standard spreadsheet applications rather than technically demanding mathematical procedures(119, 120).

The MCS method for assessing uncertainty propagation

MC analyses require the definition of a measurement model (with the corresponding functional relationship) that describes the measurement process in terms of the inputs to the measurand, and the assessment of the types of distribution that apply to the various input uncertainties.
The aim of the MC analysis is then to obtain properties of the measurand, Y, such as expectations, variances, covariances and coverage regions, by calculating an approximate numerical representation of the distribution function GY for Y. Suppose that Y is a function of various independent variables Xi, i.e. Y = f(X) = f(X1, …, XN), with i from 1 to N; for the present discussion the Xi are assumed to be continuous parameters, but similar techniques can be used for discrete variables. For each input variable Xi, the corresponding PDF, P(Xi), describing its likely values is assumed to be known. A value for Y may therefore be drawn by sampling the N input quantities Xi from their respective PDFs. In practice, this sampling is typically achieved computationally using pseudo-random numbers that are generated algorithmically according to a uniform distribution between 0 and 1, and then suitably transformed to obtain the prescribed probability distribution. One such transformation makes use of the cumulative distribution function (CDF), C(Xi), corresponding to a given P(Xi), which is a monotonically increasing, normalisable function with a range constrained between 0 and 1; the result of the MC sampling of the uniform distribution is identified with a value within the range of this CDF, which then maps uniquely to a specific outcome Xi. The commonest distributions P(Xi) used in uncertainty calculations are the Gaussian, rectangular, triangular, t, exponential, gamma and multivariate Gaussian distributions; each of these can be sampled fairly in this way from a uniform distribution between 0 and 1. Sampling once from each of the N PDFs P(Xi) corresponding to the N independent input quantities Xi, and evaluating f(X), provides one value of the measurand, which may be labelled Y1. Clearly, the value of Y1 depends on the specific outcomes obtained during the N random samplings, and repeating the process is likely to yield a different estimate, Y2. If the MCS is repeated M times, requiring M × N samplings overall, a distribution GY of M values of the measurand is generated, i.e. {Y1, Y2, …, YM}. The process is repeated a sufficiently large number of times (i.e. M is very large) to achieve adequate statistics, i.e. until it may be assumed that the generated distribution GY provides a reasonable estimate of the true distribution of the measurand Y. Since the input values are randomly drawn from the predefined probability distributions associated with each of the input variables, the information regarding these PDFs is carried implicitly into the distribution of Y; this is the propagation of distributions.
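As a minimal sketch of the inverse-CDF transformation just described, the following fragment samples an exponential distribution, for which the inverse CDF is available in closed form; in practice, library samplers would normally be used for the common distributions listed above.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Step 1: uniform pseudo-random numbers on (0, 1).
u = rng.random(100_000)

# Step 2: map each uniform draw through the inverse CDF of the target
# distribution. For an exponential PDF with rate lam, C(x) = 1 - exp(-lam*x),
# so the inverse is x = -ln(1 - u) / lam.
lam = 0.5
x = -np.log(1.0 - u) / lam

# The sample mean and standard deviation should both approach 1/lam = 2.
print(f"mean = {x.mean():.3f}, std = {x.std():.3f}")
```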
Once the representation GY of the distribution function for Y has been derived, values for the mean and standard deviation associated with Y, as well as the other moments of the distribution function, can be extracted from it. Moreover, the distribution of the output data can be plotted, and additional information extracted from that graph, such as the coverage interval of the measurand for a stipulated coverage probability, p, even when the PDF of the measurand is significantly asymmetric. The graphical representation of the distribution of the measurand obtained through the MCS procedure thus allows the detection of any asymmetry or deviation from a Gaussian shape, and favours the determination of a coverage interval corresponding to a stipulated coverage probability.

From the above discussion, the advantages of MC simulation with respect to the GUM approach(21) are seen to be manifold. The MC technique involves the propagation of distributions and always provides a PDF for the output quantity that is consistent with the PDFs of the various inputs, whereas the GUM modelling approach cannot explicitly determine a PDF for the output quantity. Also, where the input quantities themselves depend on other quantities, including corrections and correction factors for type B errors, MCS is able to calculate the combined standard uncertainty of the measurand even if the functional relationships are complex or analytically intractable. Similarly, if two inputs are correlated via a bivariate distribution, MC analysis can simulate them jointly, provided the input PDFs are defined so as to include the correlation coefficient. Additionally, the MCS procedure intrinsically accounts for any non-linearity in the functional relationship, whereas GUM does not; in general, more accurate estimates of uncertainties for non-linear models are therefore achieved through MC calculations.

Application of the MCS method for uncertainty propagation

To show how the MC method is used to evaluate measurement uncertainties, an example application to EPR retrospective dosimetry is reported here. Consider a plot of EPR signal as a function of absorbed dose, and assume that the data may be fitted with a calibration curve characterised by a quadratic trend. This behaviour is common for samples that present a background signal of the same shape as, and overlapping with, the radiation-induced signal; in these cases, exposure to ionising radiation produces an increase in the EPR intensity. The fitting function in this case has the general form

\[ S = a + bD + cD^{2} \tag{23} \]

where S is the EPR signal, D is the absorbed dose, and a, b and c are the fitting parameters of the calibration curve, which in this example take the values a = 10.4 ± 0.2, b = 1.527 ± 0.011 and c = 0.409 ± 0.005. In EPR retrospective dosimetry, the general approach is to reconstruct the absorbed dose Dr deposited during an exposure from a measurement of the induced signal, with corrections applied to account for fading and other measurement conditions. As per equation (7), S is the signal measured from a sample, and its standard deviation σED is calculated by combining the contributions from the standard deviations associated with the fading correction (σF), the sample preparation process (σS), the EPR measurement (σE) and the numerical treatment of the spectra (σT). For the present example, suppose that S ± σED = 25.5 ± 0.7. The reconstructed dose can be calculated by inverting equation (23), i.e.:

\[ D = \frac{-b+\sqrt{b^{2}-4ac+4cS}}{2c} \tag{24} \]

As can be seen, equation (24) contains a fraction and a square-root term. Calculating the standard deviation, σD, of the reconstructed dose following the GUM modelling approach is therefore not straightforward, because accounting for the uncertainties of the calibration-curve parameters requires the partial derivatives of equation (24) with respect to each of them. Under the simplifying hypothesis that the covariances between the various fitting coefficients are negligible, this uncertainty becomes analogous to equation (8):

\[ \sigma_D=\sqrt{\left(\frac{\partial D}{\partial a}\right)^{2}\sigma_a^{2}+\left(\frac{\partial D}{\partial b}\right)^{2}\sigma_b^{2}+\left(\frac{\partial D}{\partial c}\right)^{2}\sigma_c^{2}+\left(\frac{\partial D}{\partial S}\right)^{2}\sigma_S^{2}} \tag{25} \]

After calculating the partial derivatives and substituting the values given above, this approach yields the result Dr = 4.49 ± 0.14 Gy.
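The partial derivatives in equation (25) need not be derived algebraically; the following sketch evaluates them by central finite differences, using the values of the worked example, and should reproduce the quoted result of approximately 4.49 ± 0.14 Gy.

```python
import numpy as np

def dose(a, b, c, S):
    # Equation (24): inversion of the quadratic calibration curve.
    return (-b + np.sqrt(b**2 - 4*a*c + 4*c*S)) / (2*c)

# Central values and standard uncertainties from the worked example.
vals = dict(a=10.4, b=1.527, c=0.409, S=25.5)
sigs = dict(a=0.2, b=0.011, c=0.005, S=0.7)

var = 0.0
for name, sigma in sigs.items():
    # Central finite difference approximates each partial derivative in eq. (25).
    hi, lo = dict(vals), dict(vals)
    h = 1e-6 * abs(vals[name])
    hi[name] += h
    lo[name] -= h
    dD = (dose(**hi) - dose(**lo)) / (2*h)
    var += (dD * sigma)**2

print(f"D = {dose(**vals):.2f} +/- {np.sqrt(var):.2f} Gy")  # ~4.49 +/- 0.14 Gy
```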
On the other hand, the same calculation can be performed by MCS following a simple procedure in common spreadsheet software. An example of this MC analysis is reported in Figure 3, where the values in columns A, B, C and D are realisations of the S measurement and the a, b and c parameters, respectively. All of these quantities are assumed to be distributed normally around their respective average values with their respective standard deviations, as stated at the top of Figure 3. Based on these distributions, trial values are drawn for each of the input variables (a, b, c and S), and the corresponding value of the dose D (column H) is then calculated using equation (24). In this example spreadsheet, the values were generated using a combination of the NORMINV (for calculating Gaussian-distributed values) and RAND (for generating pseudo-random values) functions, according to the procedure described in the literature(120). Figure 3 lists the results of 10 such applications of the process; for the complete analysis, a total of 10^6 trials were performed. A histogram of the dose values obtained from these 10^6 trials, binned in 0.02 Gy increments and with the probabilities on the y-axis derived by normalising the respective populations, is shown in Figure 4.

Figure 3. Example of a Monte Carlo simulation for estimating the uncertainty of an EPR measurement, performed using a spreadsheet programme.

Figure 4. Histogram of dose values obtained by means of Monte Carlo simulations.

The mean value and the standard deviation of the results are readily calculated from the histogram (Figure 4) and, in this example, are equal to 4.49 Gy and 0.14 Gy, respectively. These values are consistent with those obtained by the GUM approach. However, as mentioned previously, an advantage of the Monte Carlo analysis is that it also provides the PDF of the output quantity, which depends on the PDFs of the various inputs. Other moments of the distribution can also be obtained from this PDF, such as the skewness and the kurtosis. These are found to be −0.0588 and 3.0093, respectively, for the data in the current example (Figure 4), close to the values of 0 (skewness) and 3 (kurtosis) of a Gaussian distribution, as expected since the PDFs of the input variables were all assumed to be Gaussian.
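For readers preferring a scripting language to a spreadsheet, the following sketch is the direct analogue of the NORMINV/RAND construction of Figure 3, and should reproduce the values quoted above to within Monte Carlo fluctuations.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
M = 1_000_000  # number of Monte Carlo trials

# Draw M realisations of each input from its (assumed Gaussian) PDF;
# this replaces the spreadsheet NORMINV/RAND construction.
a = rng.normal(10.4, 0.2, M)
b = rng.normal(1.527, 0.011, M)
c = rng.normal(0.409, 0.005, M)
S = rng.normal(25.5, 0.7, M)

# Equation (24) applied trial by trial: the propagation of distributions.
D = (-b + np.sqrt(b**2 - 4*a*c + 4*c*S)) / (2*c)

mean, std = D.mean(), D.std()
skew = np.mean(((D - mean) / std)**3)
kurt = np.mean(((D - mean) / std)**4)
print(f"D = {mean:.2f} +/- {std:.2f} Gy, "
      f"skewness = {skew:.3f}, kurtosis = {kurt:.3f}")

# 95% coverage interval taken directly from the simulated distribution.
lo, hi = np.percentile(D, [2.5, 97.5])
print(f"95% coverage interval: [{lo:.2f}, {hi:.2f}] Gy")
```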
Another application of the Monte Carlo technique is the uncertainty analysis of TL measurements on the display glass of mobile phones. The absorbed dose measurement is influenced in this case by the presence of an intrinsic background signal and by signal fading(103). The intrinsic background signal can be reduced, but not completely eliminated, by etching the glass sample in concentrated HF before measurement(104). In both cases (etched or unetched), the distribution of intrinsic background doses was found to follow approximately a log-normal distribution. From the measured dose D, along with its estimated uncertainty σD, the corrected unknown absorbed dose Dcorr is then calculated from the expression

\[ D_{\mathrm{corr}}=\frac{D-D_{BG}}{f} \tag{26} \]

where DBG is the median of the intrinsic background dose distribution and f is the fading factor. Analysis of the signal fading of 17 different glass samples for different storage times indicated that the variability (standard deviation) in f is approximately independent of the value of f itself; a constant value for σf is therefore assumed. Since the calculation of the uncertainty of the corrected dose involves the combination of Gaussian and non-symmetrically distributed parameters, the GUM methodology is not directly applicable, whereas with MCS the simulated distribution of possible corrected dose values is easily obtained, allowing immediate assessment of the median and the 95% confidence interval. An example for two unetched glass samples from mobile phones is shown in Figure 5. For the sample with the lower dose, the uncertainty in the intrinsic background dose dominates, leading to a distribution skewed to the left, whereas for the sample with the higher dose the uncertainty in the fading dominates, leading to a distribution skewed to the right.

Figure 5. Histogram of dose values for two display glass samples of irradiated mobile phones (Samsung Galaxy Y S5360). Nominal doses were 0.6 Gy and 1.5 Gy; reconstructed doses with 95% CI were 0.59 [0.18–0.83] Gy and 1.6 [1.3–2.2] Gy, respectively.
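A sketch of this calculation is given below; the measured dose, background-dose parameters and fading factor are hypothetical stand-ins, chosen only to illustrate how the median and percentile-based interval are extracted from the simulated distribution of equation (26).

```python
import numpy as np

rng = np.random.default_rng(seed=7)
M = 1_000_000

# Hypothetical inputs: measured dose (Gaussian), intrinsic background dose
# (log-normal, as observed for the glass samples), fading factor with
# constant standard deviation.
D = rng.normal(0.80, 0.05, M)              # measured dose (Gy)
D_bg = rng.lognormal(np.log(0.20), 0.6, M) # background dose, median 0.20 Gy
f = rng.normal(0.85, 0.05, M)              # fading factor (dimensionless)

# Equation (26) applied to every trial; the result is generally asymmetric,
# so the median and a percentile-based 95% interval are reported rather
# than mean +/- sigma.
D_corr = (D - D_bg) / f

med = np.median(D_corr)
lo, hi = np.percentile(D_corr, [2.5, 97.5])
print(f"D_corr = {med:.2f} [{lo:.2f}-{hi:.2f}] Gy (median, 95% CI)")
```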
Radiation transport modelling with Monte Carlo

A number of Monte Carlo radiation transport codes are currently available, examples including the EGSnrc(121), FLUKA(122), GEANT(123), MCNP(124), PENELOPE(125) and PHITS(126) families of software. These codes are described as 'general purpose': they are intended, in principle, to be able to model the passage of any type of ionising radiation from any type of source through any arrangement of matter that might be required by their users, providing output data on quantities such as energy deposition and fluence at any location of interest in the geometry. Accordingly, these codes have widespread application in retrospective dosimetry(127–133), where computer models of the dosimetry system in question may be created and interrogated to understand or improve its performance, limitations and uncertainties. Despite these successes, however, the techniques are not without drawbacks: although they may be a valuable tool for evaluating and handling uncertainty, they may also be a source of it.

Statistical uncertainties with MC modelling

It is relatively easy for the users of general purpose MC codes to reduce the statistical uncertainties on their results. Essentially, the procedures rely on increasing the number of scored histories in the regions of interest within the geometry. The most elementary method is simply to instruct the programme to simulate a greater number of particle histories, though this is inevitably achieved at the expense of increased CPU time. More sophisticated variance reduction techniques can also often be implemented, such as biasing source directions, forcing particular interactions to occur, or artificially splitting individual particle histories into multiple copies, with the scores then weighted accordingly to ensure fairness and fidelity of the results. Using these techniques, coupled with the power of modern computers (especially cluster-based platforms), it is not uncommon for simulated results to be associated with very small statistical uncertainties, sometimes a fraction of a per cent. Anecdotally, however, at least within the experience of the authors of this article, such values are often then quoted in the scientific literature as the primary or only uncertainty provided with a particular Monte Carlo result. This is misleading, because it neglects the type B uncertainties that are also inevitably associated with the modelling, which tend to be much harder to quantify and may be substantially larger in magnitude. In summary, with Monte Carlo modelling it is often easy to derive highly precise results, but this does not necessarily mean that they are accurate; arguably, the important difference between these two qualities is not always given enough weight.

Known type B 'systematic' uncertainties

The MC radiation transport method simulates the passage of particles through a user-defined configuration of matter. Accordingly, the accuracy with which the computational model reflects physical reality will dictate the accuracy of its results. Clearly, then, there are a number of factors that could introduce significant uncertainty into the modelling. These might be classified into two broad types: uncertainties inherent in the Monte Carlo software, and uncertainties associated with the user-defined model itself. Type B uncertainties within the Monte Carlo software include factors such as uncertainties in the underlying physics on which it relies, for example limitations and inaccuracies of the interaction models in use, including any energy dependencies they might have. Although many of these uncertainties may be known in principle, or could be derived from the various references that describe the origins of the physical data and models underpinning the general purpose codes, their magnitudes may not be readily apparent to 'casual' users of the software, and their combined effects are even harder to quantify. Their contributions to the overall uncertainty budget arising from the use of Monte Carlo modelling in retrospective dosimetry are therefore highly context dependent, and difficult to enumerate in general. Type B uncertainties that originate from the users of the codes reflect the inevitable inability of those users to construct a perfect model of the physical system, for instance because some factors can only be known with limited resolution, and can hence only be input to the MC programme with limited accuracy.
To give an illustrative example, in the modelling of resistors in mobile phones for fortuitous dosimetry(132), the absorbed dose received by the target depends strongly on accurate knowledge of the material compositions and densities of the aluminium oxide substrate, the high-Z contact electrodes adjacent to it, the circuit board to which it is attached, and the screen, case, battery and other features that surround it, as well as on all of their relative locations in 3D space; the estimate of each of these physical parameters is subject to a significant measurement uncertainty, which translates into an unavoidable inaccuracy of the MC model (and hence of its results). Some type B uncertainties may be mitigated by performing sensitivity analyses with the model; indeed, investigating the likely effects of such sensitivities in the physical world might be the primary motivation for developing the Monte Carlo model in the first instance. For example, the impact on dosimetry of the measurement uncertainty on the density of a given object in the real world may be estimated by perturbing the density of that object in the model by an amount deemed equivalent to that uncertainty, and then repeating the simulation; comparison of the perturbed and unperturbed results provides an estimate of the effect of that density uncertainty. Similarly, by varying the concentrations of crucial elements, the same approach may be used to estimate the impact of uncertainty in the material composition(128). The MC method is thus a quick and effective means of quantifying the effects of a given uncertainty in a physical system. This univariate sensitivity analysis may be generalised to account for error propagation and the overall uncertainty budget: uncertainty propagation analyses can be achieved by repeating the sensitivity analysis for all parameters within the physical model that are associated with a significant type B uncertainty. In fact, the procedure may be applied to assess the impacts of both the physical uncertainties (i.e. the measurement uncertainties) and the code-specific ones; the latter assessment might be achieved by rerunning the simulation with different simulation parameters, for example choosing alternative cross-section databases or interaction models. Overall, the procedure leads to distributions of perturbed data around a mean, which may be interrogated by standard techniques to obtain a handle on the overall quality of the quoted result, and hence on the robustness of its predictions about the performance of the physical system being modelled.
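In practice, each perturbed case is a full rerun of the transport code; for illustration, the sketch below substitutes a toy analytic attenuation model (a hypothetical surrogate, not the output of any particular code) so that the perturbation bookkeeping itself can be shown.

```python
import numpy as np

def surrogate_dose(density, thickness, mu_rho=0.05):
    # Toy stand-in for a full MC transport run: exponential attenuation of
    # dose through a slab (mu_rho in cm^2/g, density in g/cm^3, thickness in cm).
    return np.exp(-mu_rho * density * thickness)

# Nominal parameter values and their estimated (type B) standard uncertainties.
nominal = {"density": 2.70, "thickness": 1.20}
uncert = {"density": 0.10, "thickness": 0.05}

d0 = surrogate_dose(**nominal)
var = 0.0
for name, sigma in uncert.items():
    # Perturb one parameter at a time by +/- one standard uncertainty
    # and rerun the 'simulation'.
    up, down = dict(nominal), dict(nominal)
    up[name] += sigma
    down[name] -= sigma
    dd = (surrogate_dose(**up) - surrogate_dose(**down)) / 2.0
    print(f"{name}: relative dose change {dd / d0:+.3%} per 1-sigma perturbation")
    var += dd**2

print(f"combined relative uncertainty ~ {np.sqrt(var) / d0:.3%}")
```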
Unknown type B 'systematic' uncertainties

In addition to applying Monte Carlo techniques to estimate the effects of known type B uncertainties in a physical dosimetry system, such as uncertainties in the precise material composition of a dosemeter, it is also possible to use them to estimate the effects of unknown type B uncertainties. These include factors that affect the results recorded by a retrospective dosemeter but arise from ignorance of the values of key parameters, rather than from any imprecision in the estimates of them; examples are uncertainties resulting from incomplete knowledge of the exposure conditions of the dosemeter, or missing data on the characteristics of the source term. An example might be a situation in which it is acknowledged that an energy-dependent correction to the response of a dosemeter needs to be made, but the energy of the radiation source to which it was exposed is not known. In such cases, it may be appropriate to use Monte Carlo techniques to model the responses of the dosemeter to a range of plausible sources with different energies, with the resulting variation in the results then used to provide a handle on the maximum error that is likely to be caused by ignorance of the true energy of the physical exposure.

As an illustrative case study of the application of MC in handling this type of uncertainty, consider the use of mobile phones as emergency retrospective dosemeters. Although phones, or more specifically their display screens and resistors, possess many of the features required of reliable fortuitous dosemeters, for them to be useful it is necessary to relate the absorbed doses they record in an exposure to the concurrent doses deposited in their owners. This can be achieved by Monte Carlo modelling of phones located at various positions on an anthropomorphic phantom, exposing the configuration to various fields, and then comparing the doses deposited in the phones and the phantom to generate a set of exposure- and location-dependent conversion factors; for some locations and exposures, phone and body absorbed doses may differ by a factor of ~20(129). In the real world, however, the precise location of the phone relative to the body during an unplanned exposure may not be known in hindsight, at least not to those performing the dosimetry; moreover, the precise exposure conditions, and the orientation of the individual relative to the source, are also unlikely to be recorded. This ignorance introduces a significant unknown type B uncertainty into the conversion of phone doses to body doses. It may, however, be managed by the use of mean conversion factors that are averaged over the datasets of all of the parameters that are unknown. For instance, if the exposure geometry and radiation source were known with some degree of confidence, the averaging might be only over the conversion-factor datasets generated by the Monte Carlo model for the different phone positions; but if only the source were known, the averaging would be over the datasets for all phone positions and all exposure geometries. Use of these mean conversion factors is then associated with conservative uncertainties, identified as the maximum over- and under-responses that are expected to arise from their application. These extrema may be taken from the envelope of the conversion factors that were averaged over, and quantify the worst-case errors in the dosimetry that might be anticipated in adopting this conversion process, due to the unknown type B uncertainty in the exposure conditions. This is an important separate topic for further consideration.
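The averaging-and-envelope logic can be summarised in a few lines; the conversion factors below are hypothetical placeholders, not the published values of reference (129).

```python
import numpy as np

# Hypothetical phone-to-body dose conversion factors from MC simulations,
# indexed by (phone position, exposure geometry); real factors can differ
# by up to a factor of ~20 between configurations (see text).
factors = np.array([
    [0.9, 1.4, 2.1],   # trouser pocket: three exposure geometries
    [1.1, 1.8, 2.6],   # chest pocket
    [0.7, 1.2, 1.9],   # handbag
])

# If neither position nor geometry is known, average over everything.
mean_cf = factors.mean()
print(f"mean conversion factor: {mean_cf:.2f}")

# Conservative bounds from the envelope of the summed-over factors: the
# worst-case under-/over-response incurred by applying the mean factor.
print(f"worst-case ratio range: {factors.min() / mean_cf:.2f} "
      f"to {factors.max() / mean_cf:.2f}")
```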
BRIEF DISCUSSION AND CONCLUSIONS

In this paper, the current state of the art in uncertainty analysis techniques for biological dosimetry (with the DCA as the most well-developed example) and physical retrospective dosimetry has been reviewed, with particular emphasis on the potential for increased use of the more sophisticated Bayesian and Monte Carlo modelling methodologies to support uncertainty characterisation. To survey the current situation, a questionnaire was compiled and sent to all members of EURADOS WG10 on retrospective dosimetry. The questionnaire was designed to gather information on current experience of uncertainty estimation, and to assess possible needs in terms of training or courses. Of the 28 laboratories that responded, 72% currently use physical retrospective dosimetry techniques (EPR, TL and OSL), 19% biological techniques (micronuclei, dicentrics, FISH and γ-H2AX) and 8% other techniques (UV–vis spectroscopy, neutron activation, etc.). Fifty-six per cent of the responders use only the classical GUM approach, 13% the approach described in the IAEA manual for cytogenetics, about 18% Monte Carlo methods, and about 9% a partial Bayesian approach; none of the responders used a formal Bayesian approach. It is interesting to note that 56% of responders use software to calculate the uncertainties (35% in-house software, 41% commercial software and 24% freely available software). Sixty-four per cent of responders were satisfied with their uncertainty characterisation method but would be interested in improving it and/or in evaluating and comparing other approaches such as Bayesian or MC methods; in contrast, 24% were aware of weaknesses in their approach, or were concerned that their method might be incorrect or inappropriate.

In general, then, the situation looks positive for the most well-established assays, with practical, accepted analysis techniques in place (chiefly based on GUM) that likely support good overall absorbed dose and uncertainty estimates. The exception is the more complex exposure scenarios(19), which are known to introduce additional uncertainties that should ideally be characterised on a case-by-case basis. The tools to do this are in place (for instance in the GUM(21)) but in practice are not often applied for biological dose estimation, and to date physical retrospective dosimetry techniques have only really been used for a set of 'standard' scenarios. Thus there is still work to do. The next steps in the development of uncertainty analysis techniques across the field of retrospective dosimetry will be to look at standardisation, i.e. to evaluate which of the methods detailed in this work give the most accurate representation of uncertainty in the various exposure scenarios, including the more difficult cases. In addition, as discussed, uncertainty analysis is a complex field in itself, and an 'expert' level of knowledge is required. For example, the application of GUM can be very complex and time consuming, especially when the different uncertainty terms are correlated (cf. the calculation of covariance terms). In many circumstances GUM also requires approximation, and the situation is further complicated when the mathematical model relating the input data (the measured quantity) to the output data (dose) is non-linear. The estimation of each of the uncertainty terms may require a large amount of work (including experimental work). In light of this, MC or Bayesian approaches would be particularly efficient for retrospective dosimetry applications. However, to date only a small number of EURADOS retrospective dosimetry group members use these methods, and further work is required. For example, for GUM the algebraic benefits of using the Gaussian approximation should be balanced against its potential divergence from the 'true' uncertainty of the observations.
Likewise, the way in which the uncertainty is characterised at each stage of a Monte Carlo calculation should be appropriate to the observations and/or should allow for uncertainty in its own assignment(28), and the extra analytical power provided by the use of Bayesian priors requires that their presence and form be carefully justified in order to limit the potential for mistakes(26, 31, 134). In addition, it is worth noting that, besides retrospective dosimetry, EPR has long been used for metrology, for instance with alanine and more recently with tartrates and formates(135, 136). In that field, a comparatively large amount of effort has ensured that the uncertainty analysis supports highly accurate absorbed dose determinations(137). These principles are equally useful for retrospective dosimetry, even though the intrinsic uncertainties are larger and the accuracy therefore normally lower. It will thus be very important for researchers active in these fields to ensure that new methods are disseminated and that new and existing colleagues have access to appropriate training. This is something that EURADOS WG10 will continue to support going forward.

ACKNOWLEDGEMENTS

The authors would like to express their thanks to the EURADOS WG10 members who supported this work, in particular Octavia Monteiro Gil of the Centro de Ciencias e Tecnologias Nucleares, Instituto Superior Tecnico, Portugal, and Seongjae Jang of the Korea Institute of Radiological and Medical Sciences.

FUNDING

This work was partly supported by the European Radiation Dosimetry Group (EURADOS; WG10) and by the National Institute for Health Research Health Protection Research Unit (NIHR HPRU) in Chemical & Radiation Threats & Hazards at Newcastle University, in partnership with Public Health England (PHE). The views expressed are those of the author(s) and not necessarily those of the NIHR, the Department of Health or Public Health England.

REFERENCES

1. International Atomic Energy Agency. EPR-Biodosimetry series. Cytogenetic dosimetry: applications in preparedness for and response to radiation emergencies (2011).
2. Oestreicher, U. et al. RENEB intercomparisons applying the conventional Dicentric Chromosome Assay (DCA). Int. J. Radiat. Biol. 93, 20–29 (2017).
3. Depudyt, J. et al. RENEB intercomparison exercises analyzing micronuclei (Cytokinesis-block Micronucleus Assay). Int. J. Radiat. Biol. 93, 36–47 (2017).
4. Barquinero, J. F. et al. RENEB biodosimetry intercomparison analyzing translocations by FISH. Int. J. Radiat. Biol. 93, 30–35 (2017).
5. Terzoudi, G. I. et al. Dose assessment intercomparisons within the RENEB network using G0-lymphocyte prematurely condensed chromosomes (PCC assay). Int. J. Radiat. Biol. 93, 48–57 (2017).
6. Moquet, J. et al. The second gamma-H2AX assay inter-comparison exercise carried out in the framework of the European biodosimetry network (RENEB). Int. J. Radiat. Biol. 93, 58–64 (2017).
7. Trompier, F. et al. Overview of physical dosimetry methods for triage application integrated in the new European network RENEB. Int. J. Radiat. Biol. 93, 65–74 (2017).
8. Trompier, F. et al. EPR dosimetry for actual and suspected overexposures during radiotherapy treatments in Poland. Radiat. Meas. 42, 1025–1028 (2007).
9. Degteva, M. O. et al. Analysis of EPR and FISH studies of radiation doses in persons who lived in the upper reaches of the Techa River. Radiat. Environ. Biophys. 54, 433–444 (2015).
10. Trompier, F. et al. EPR retrospective dosimetry with fingernails. Health Phys. 106, 798–805 (2014).
11. Sholom, S. and McKeever, S. W. S. Emergency EPR dosimetry technique using vacuum-stored dry nails. Radiat. Meas. 88, 41–47 (2016).
12. Marciniak, A. and Ciesielski, B. EPR dosimetry in nails – a review. Appl. Spectrosc. Rev. 51, 73–92 (2016).
13. Fattibene, P. et al. EPR dosimetry intercomparison using smart phone touch screen glass. Radiat. Environ. Biophys. 53, 311–320 (2014).
14. Bailiff, I. K., Sholom, S. and McKeever, S. W. S. Retrospective and emergency dosimetry in response to radiological incidents and nuclear mass-casualty events: a review. Radiat. Meas. 94, 83–139 (2016).
15. Jaworska, A. et al. Operational guidance for radiation emergency response organisations in Europe for using biodosimetric tools developed in EU MULTIBIODOSE project. Radiat. Prot. Dosim. 164, 165–169 (2015).
16. Kulka, U. et al. Realising the European network of biodosimetry: RENEB-status quo. Radiat. Prot. Dosim. 164, 42–45 (2015).
17. Kulka, U. et al. RENEB – Running the European Network of biological dosimetry and physical retrospective dosimetry. Int. J. Radiat. Biol. 93, 2–14 (2017).
18. Helton, J. C. Uncertainty and sensitivity analysis for models of complex systems. In: Lecture Notes in Computational Science and Engineering, Vol. 62 (Berlin, Heidelberg: Springer) (2008).
19. Vinnikov, V. A., Ainsbury, E. A., Lloyd, D. C., Maznyk, N. A. and Rothkamm, K. Difficult cases for chromosomal dosimetry: statistical considerations. Radiat. Meas. 46, 1004–1008 (2011).
20. Ainsbury, E. et al. Integration of new biological and physical retrospective dosimetry methods into EU emergency response plans – joint RENEB and EURADOS inter-laboratory comparisons. Int. J. Radiat. Biol. 93, 99–109 (2017).
21. ISO/IEC. Uncertainty of measurement – guide to the expression of uncertainty in measurement (GUM:1995). ISO/IEC Guide 98-3:2008 (2008).
22. International Organization for Standardization. ISO/IEC 17025: General requirements for the competence of testing and calibration laboratories (2005).
23. JCGM 100:2008. Evaluation of measurement data – guide to the expression of uncertainty in measurement (2008).
24. ISO 5725-1:1994. Accuracy (trueness and precision) of measurement methods and results – Part 1: general principles and definitions (1994).
25. ISO 13528:2015. Statistical methods for use in proficiency testing by interlaboratory comparison (2015).
26. Sivia, D. S. and Skilling, J. Data Analysis: A Bayesian Tutorial (Oxford: Oxford University Press) (2006).
27. Marrale, M. et al. Assessing the impact of copy number variants on miRNA genes in autism by Monte Carlo simulation. PLoS ONE 9, e90947 (2014).
28. Sivia, D. S., Burbidge, C., Roberts, R. G. and Bailey, R. M. A Bayesian approach to the evaluation of equivalent doses in sediment mixtures for luminescence dating. AIP Conf. Proc. 735, 305–311 (2004).
29. Thomsen, K. J., Murray, A. S. and Bøtter-Jensen, L. Sources of variability in OSL dose measurements using single grains of quartz. Radiat. Meas. 39, 47–61 (2005).
30. Higueras, M., Puig, P., Ainsbury, E. A., Vinnikov, V. A. and Rothkamm, K. A new Bayesian model applied to cytogenetic partial body irradiation estimation. Radiat. Prot. Dosim. 168, 330–336 (2016).
31. Ramsey, C. B. Deposition models for chronological records. Quat. Sci. Rev. 27, 42–60 (2008).
32. Sigurdson, A. J. et al. International study of factors affecting human chromosome translocations. Mutat. Res. Genet. Toxicol. Environ. Mutagen. 652, 112–121 (2008).
33. McNamee, J. P., Flegal, F. N., Greene, H. B., Marro, L. and Wilkins, R. C. Validation of the cytokinesis-block micronucleus (CBMN) assay for use as a triage biological dosimetry tool. Radiat. Prot. Dosim. 135, 232–242 (2009).
34. Flegal, F. N., Devantier, Y., McNamee, J. P. and Wilkins, R. C. Quickscan dicentric chromosome analysis for radiation biodosimetry. Health Phys. 98, 276–281 (2010).
35. ISO 19238:2014. Radiological protection – performance criteria for service laboratories performing biological dosimetry by cytogenetics (2014).
36. ISO 21243:2008. Radiation protection – performance criteria for laboratories performing cytogenetic triage for assessment of mass casualties in radiological or nuclear emergencies – general principles and application to dicentric assay (2008).
37. Vral, A., Fenech, M. and Thierens, H. The micronucleus assay as a biological dosimeter of in vivo ionising radiation exposure. Mutagenesis 26, 11–17 (2011).
38. ISO 17099:2014. Radiological protection – performance criteria for laboratories using the cytokinesis block micronucleus (CBMN) assay in peripheral blood lymphocytes for biological dosimetry (2014).
39. Higueras, M. et al. A new inverse regression model applied to radiation biodosimetry. Proc. R. Soc. A 471, 20140 (2015).
40. Ainsbury, E. A. and Barquinero, J. F. Biodosimetric tools for a fast triage of people accidentally exposed to ionizing radiation. Statistical and computational aspects. Ann. Ist. Super. Sanità 45, 307–312 (2009).
41. Fenech, M. The lymphocyte cytokinesis-block micronucleus cytome assay and its application in radiation biodosimetry. Health Phys. 98, 234–243 (2010).
42. Pernot, E. et al. Ionizing radiation biomarkers for potential use in epidemiological studies. Mutat. Res. Rev. Mutat. Res. 751, 258–286 (2012).
43. Ainsbury, E. A. et al. What radiation dose does the FISH translocation assay measure in cases of incorporated radionuclides for the Southern Urals populations? Radiat. Prot. Dosim. 159, 26–33 (2014).
44. Barnard, S. et al. The first gamma-H2AX biodosimetry intercomparison exercise of the developing European biodosimetry network RENEB. Radiat. Prot. Dosim. 164, 265–270 (2015).
45. Rothkamm, K., Krüger, I., Thompson, L. H. and Löbrich, M. Pathways of DNA double-strand break repair during the mammalian cell cycle. Mol. Cell. Biol. 23, 5706–5715 (2003).
46. Rothkamm, K. and Horn, S. gamma-H2AX as protein biomarker for radiation exposure. Ann. Ist. Super. Sanità 45, 265–271 (2009).
47. Horn, S. and Rothkamm, K. Candidate protein biomarkers as rapid indicators of radiation exposure. Radiat. Meas. 46, 903–906 (2011).
48. Valdiglesias, V., Laffon, B., Pásaro, E. and Méndez, J. Evaluation of okadaic acid-induced genotoxicity in human cells using the micronucleus test and γH2AX analysis. J. Toxicol. Environ. Health A 74, 980–992 (2011).
49. Wojcik, A. et al. Multidisciplinary biodosimetric tools for a large-scale radiological emergency – the MULTIBIODOSE project. Radiat. Emerg. Med. 3, 19–23 (2014).
50. Rübe, C. E. et al. DNA repair in the context of chromatin: new molecular insights by the nanoscale detection of DNA repair complexes using transmission electron microscopy. DNA Repair (Amst.) 10, 427–437 (2011).
51. Löbrich, M. et al. In vivo formation and repair of DNA double-strand breaks after computed tomography examinations. Proc. Natl. Acad. Sci. USA 102, 8984–8989 (2005).
52. Rothkamm, K., Balroop, S., Shekhdar, J., Fernie, P. and Goh, V. Leukocyte DNA damage after multi-detector row CT: a quantitative biomarker of low-level radiation exposure. Radiology 242, 244–251 (2007).
53. Sedelnikova, O. A. et al. Delayed kinetics of DNA double-strand break processing in normal and pathological aging. Aging Cell 7, 89–100 (2008).
54. Joyce, E. F. et al. Drosophila ATM and ATR have distinct activities in the regulation of meiotic DNA damage and repair. J. Cell Biol. 195, 359–367 (2011).
55. Ricceri, F. et al. Involvement of MRE11A and XPA gene polymorphisms in the modulation of DNA double-strand break repair activity: a genotype–phenotype correlation study. DNA Repair 10, 1044–1050 (2011).
56. Valdiglesias, V., Giunta, S., Fenech, M., Neri, M. and Bonassi, S. γH2AX as a marker of DNA double strand breaks and genomic instability in human population studies. Mutat. Res. Rev. Mutat. Res. 753, 24–40 (2013).
57. Horn, S., Barnard, S. and Rothkamm, K. Gamma-H2AX-based dose estimation for whole and partial body radiation exposure. PLoS ONE 6, e25113 (2011).
58. Abragam, A. and Bleaney, B. Electron Paramagnetic Resonance of Transition Ions (Oxford: Clarendon Press) (1970).
59. Weil, J. A. and Bolton, J. R. Electron Paramagnetic Resonance: Elementary Theory and Practical Applications, second edn (Hoboken, NJ: John Wiley & Sons) (2006).
60. Poole, C. P. Electron Spin Resonance: A Comprehensive Treatise on Experimental Techniques (Mineola, NY: Dover Publications) (1997).
61. Trompier, F. et al. Radiation-induced signals analysed by EPR spectrometry applied to fortuitous dosimetry. Ann. Ist. Super. Sanità 45, 287–296 (2009).
62. Israelsson, A., Gustafsson, H. and Lund, E. Dose response of xylitol and sorbitol for EPR retrospective dosimetry with applications to chewing gum. Radiat. Prot. Dosim. 154, 133–141 (2013).
63. Trompier, F., Bassinet, C. and Clairand, I. Radiation accident dosimetry on plastics by EPR spectrometry. Health Phys. 98, 388–394 (2010).
64. Sholom, S. and Chumak, V. EPR emergency dosimetry with plastic components of personal goods. Health Phys. 98, 395–399 (2010).
65. Kamenopoulou, V., Barthe, J., Hickman, C. and Portal, G. Accidental gamma irradiation dosimetry using clothing. Radiat. Prot. Dosim. 17, 185–188 (1986).
66. Barthe, J., Kamenopoulou, V., Cattoire, B. and Portal, G. Dose evaluation from textile fibers: a post-determination of initial ESR signal. Int. J. Radiat. Appl. Instrum. A Appl. Radiat. Isot. 40, 1029–1033 (1989).
67. McKeever, S. W. S. et al. Numerical solutions to the rate equations governing the simultaneous release of electrons and holes during thermoluminescence and isothermal decay. Phys. Rev. B 32, 3835–3843 (1985).
68. Woda, C. et al. Radiation-induced damage analysed by luminescence methods in retrospective dosimetry and emergency response. Ann. Ist. Super. Sanità 45, 297–306 (2009).
69. Bassinet, C. et al. Retrospective radiation dosimetry using OSL of electronic components: results of an inter-laboratory comparison. Radiat. Meas. 71, 475–479 (2014).
70. Vinnikov, V. A., Ainsbury, E. A., Maznyk, N. A., Lloyd, D. C. and Rothkamm, K. Limitations associated with analysis of cytogenetic data for biological dosimetry. Radiat. Res. 174, 403–414 (2010).
71. Savage, J. R. K. and Papworth, D. G. Constructing a 2B calibration curve for retrospective dose reconstruction. Radiat. Prot. Dosim. 88, 69–76 (2000).
72. Merkle, W. Statistical methods in regression and calibration analysis of chromosome aberration data. Radiat. Environ. Biophys. 21, 217–233 (1983).
73. Szłuińska, M., Edwards, A. A. and Lloyd, D. C. Statistical methods for biological dosimetry (Health Protection Agency, Radiation Protection Division) (2005).
74. Ainsbury, E. A., Vinnikov, V. A., Maznyk, N. A., Lloyd, D. C. and Rothkamm, K. A comparison of six statistical distributions for analysis of chromosome aberration data for radiation biodosimetry. Radiat. Prot. Dosim. 155, 253–267 (2013).
75. Ainsbury, E. A. et al. Uncertainty of fast biological radiation dose assessment for emergency response scenarios. Int. J. Radiat. Biol. 93, 127–135 (2017).
76. Schenker, N. and Gentleman, J. F. On judging the significance of differences by examining the overlap between confidence intervals. J. Am. Statist. (2012). http://dx.doi.org/10.1198/000313001317097960.
77. Austin, P. and Hux, J. A brief note on overlapping confidence intervals. J. Vasc. Surg. 36, 194–195 (2002).
78. Deperas, J. et al. CABAS: a freely available PC program for fitting calibration curves in chromosome aberration dosimetry. Radiat. Prot. Dosim. 124, 115–123 (2007).
79. Ainsbury, E. A. and Lloyd, D. C. Dose estimation software for radiation biodosimetry. Health Phys. 98, 290–295 (2010).
80. Sasaki, M. S. Chromosomal biodosimetry by unfolding a mixed Poisson distribution: a generalized model. Int. J. Radiat. Biol. 79, 83–97 (2003).
81. Ainsbury, E. A. et al. CytoBayesJ: software tools for Bayesian analysis of cytogenetic radiation dosimetry data. Mutat. Res. Genet. Toxicol. Environ. Mutagen. 756, 184–191 (2013).
82. Brame, R. S. and Groer, P. G. Bayesian analysis of overdispersed chromosome aberration data with the negative binomial model. Radiat. Prot. Dosim. 102, 115–119 (2002).
83. DiGiorgio, M. and Zaretzky, A. Biological dosimetry – a Bayesian approach for presenting uncertainty on biological dose estimates. Annals of 'II Encuentro de Docentes e Investigadores de Estadística en Psicología' (2011).
84. Groer, P. G. and De Pereira, C. A. B. Probability and Bayesian Statistics (US: Springer) pp. 225–232 (1987).
85. Edwards, A. A. and Lloyd, D. C. The Early Effects of Radiation on DNA (Berlin, Heidelberg: Springer) pp. 385–396 (1991).
86. Moriña, D., Higueras, M., Puig, P., Ainsbury, E. A. and Rothkamm, K. radir package: an R implementation for cytogenetic biodosimetry dose estimation. J. Radiol. Prot. 35, 557–569 (2015).
87. IAEA. Use of electron paramagnetic resonance dosimetry with tooth enamel for retrospective dose assessment (Vienna: IAEA) (2002).
88. Fattibene, P., La Civita, S., De Coste, V. and Onori, S. Analysis of sources of uncertainty of tooth enamel EPR signal amplitude. Radiat. Meas. 43, 827–830 (2008).
89. Nagy, V. Accuracy considerations in EPR dosimetry. Appl. Radiat. Isot. 52, 1039–1050 (2000).
90. ISO/ASTM 51607:2013. Practice for use of the alanine-EPR dosimetry system (2013).
91. Fattibene, P. and Callens, F. EPR dosimetry with tooth enamel: a review. Appl. Radiat. Isot. 68, 2033–2116 (2010).
92. Fattibene, P., Duckworth, T. L. and Desrosiers, M. F. Critical evaluation of the sugar-EPR dosimetry system. Appl. Radiat. Isot. 47, 1375–1379 (1996).
93. Wilcox, D. E. et al. Dosimetry based on EPR spectral analysis of fingernail clippings. Health Phys. 98, 309–317 (2010).
94. Anton, M. et al. Uncertainties in alanine/ESR dosimetry at the Physikalisch-Technische Bundesanstalt. Phys. Med. Biol. 51, 5419–5440 (2006).
95. Antonovic, L., Gustafsson, H., Carlsson, G. A. and Carlsson Tedgren, A. Evaluation of a lithium formate EPR dosimetry system for dose measurements around 192Ir brachytherapy sources. Med. Phys. 36, 2236–2247 (2009).
96. Wieser, A. et al. Assessment of performance parameters for EPR dosimetry with tooth enamel. Radiat. Meas. 43, 731–736 (2008).
97. Currie, L. A. Nomenclature in evaluation of analytical methods including detection and quantification capabilities (IUPAC Recommendations 1995). Anal. Chim. Acta 391, 105–126 (1999).
98. Currie, L. A. Detection and quantification limits: basic concepts, international harmonization, and outstanding ('low-level') issues. Appl. Radiat. Isot. 61, 145–149 (2004).
99. Fattibene, P. et al. The 4th international comparison on EPR dosimetry with tooth enamel, part 1: report on the results. Radiat. Meas. 46, 765–771 (2011).
100. Sholom, S., Dewitt, R., Simon, S. L., Bouville, A. and McKeever, S. W. S. Emergency dose estimation using optically stimulated luminescence from human tooth enamel. Radiat. Meas. 46, 778–782 (2011).
101. Bernhardsson, C., Christiansson, M., Mattsson, S. and Rääf, C. L. Household salt as a retrospective dosemeter using optically stimulated luminescence. Radiat. Environ. Biophys. 48, 21–28 (2009).
102. Sholom, S. and McKeever, S. W. S. Integrated circuits from mobile phones as possible emergency OSL/TL dosimeters. Radiat. Prot. Dosim. 170, 398–401 (2015). doi:10.1093/rpd/ncv446.
103. Discher, M. and Woda, C. Thermoluminescence of glass display from mobile phones for retrospective and accident dosimetry. Radiat. Meas. 53–54, 12–21 (2013).
104. Discher, M., Woda, C. and Fiedler, I. Improvement of dose determination using glass display of mobile phones for accident dosimetry. Radiat. Meas. 56, 240–243 (2013).
105. Christiansson, M., Bernhardsson, C., Geber-Bergstrand, T., Mattsson, S. and Rääf, C. L. Household salt for retrospective dose assessments using OSL: signal integrity and its dependence on containment, sample collection, and signal readout. Radiat. Environ. Biophys. 53, 559–569 (2014).
106. Woda, C. et al. Luminescence dosimetry in a contaminated settlement of the Techa River valley, Southern Urals, Russia. Radiat. Meas. 46, 277–285 (2011).
107. Woda, C. et al. Evaluation of external exposures of the population of Ozyorsk, Russia, with luminescence measurements of bricks. Radiat. Environ. Biophys. 48, 405–417 (2009).
108. Bailiff, I. K. et al. The application of retrospective luminescence dosimetry in areas affected by fallout from the Semipalatinsk nuclear test site: an evaluation of potential. Health Phys. 87, 625–641 (2004).
109. Galbraith, R. F. A further note on the variance of a background-corrected OSL count. Anc. TL 32, 1–3 (2014).
110. Duller, G. A. T. The Analyst software package for luminescence data: overview and recent improvements. Anc. TL 33, 35–42 (2015).
111. SigmaPlot (2017). Available at: http://www.sigmaplot.co.uk/products/peakfit/peakfit.php (accessed 22 February 2017).
112. Origin (2017). Available at: http://www.originlab.com/index.aspx?go=Solutions/Applications/Spectroscopy (accessed 22 February 2017).
113. Burbidge, C. I. A broadly applicable function for describing luminescence dose response. J. Appl. Phys. 118, 044904 (2015).
114. Analytical Methods Committee. Robust statistics: a method of coping with outliers. AMC Tech. Brief 2 (2001).
115. Galbraith, R. F., Roberts, R. G., Laslett, G. M., Yoshida, H. and Olley, J. M. Optical dating of single and multiple grains of quartz from Jinmium rock shelter, northern Australia: Part I, experimental design and statistical models. Archaeometry 41, 339–364 (1999).
116. Couto, P. R. G., Damasceno, J. C. and de Oliveira, S. P. Monte Carlo simulations applied to uncertainty in measurement. In: Theory and Applications of Monte Carlo Simulations. Chan, W. K. V., Ed. (InTech) (2013). doi:10.5772/53014.
117. Ángeles Herrador, M. and González, A. G. Evaluation of measurement uncertainty in analytical assays by means of Monte-Carlo simulation. Talanta 64, 415–422 (2004).
118. Lepek, A. A computer program for a general case evaluation of the expanded uncertainty. Accredit. Qual. Assur. 8, 296–299 (2003).
119. Chew, G. and Walczyk, T. A Monte Carlo approach for estimating measurement uncertainty using standard spreadsheet software. Anal. Bioanal. Chem. 402, 2463–2469 (2012).
120. Farrance, I. and Frenkel, R. Uncertainty in measurement: a review of Monte Carlo simulation using Microsoft Excel for the calculation of uncertainties through functional relationships, including uncertainties in empirically derived constants. Clin. Biochem. Rev. 35, 37–61 (2014).
121. Kawrakow, I. and Rogers, D. The EGSnrc code system: Monte Carlo simulation of electron and photon transport. NRCC Report PIRS-701 (2000).
122. Ferrari, A., Sala, P. R., Fasso, A. and Ranft, J. FLUKA: a multi-particle transport code (2005).
123. Agostinelli, S. et al. Geant4 – a simulation toolkit. Nucl. Instrum. Methods Phys. Res. A 506, 250–303 (2003).
124. Pelowitz, D. B., Ed. MCNP6 user's manual, version 1.0. LA-CP-13-00634 (2013).
125. Salvat, F., Fernandez-Varea, J. M. and Sempau, J. PENELOPE-2006: a code system for Monte Carlo simulation of electron and photon transport. Workshop proceedings, Barcelona, Spain, 4–7 July 2006 (2006).
126. Niita, K., Matsuda, N., Iwamoto, Y., Iwase, H. and Sato, T. PHITS: Particle and Heavy Ion Transport code System, version 2.23. JAEA-Data/Code (2010).
127. Discher, M. Lumineszenzuntersuchungen an körpernah getragenen Gegenständen für die Notfalldosimetrie [Luminescence investigations of objects worn close to the body for emergency dosimetry] (München: Technische Universität) (2015).
128. Discher, M., Hiller, M. and Woda, C. MCNP simulations of a glass display used in a mobile phone as an accident dosimeter. Radiat. Meas. 75, 21–28 (2015).
129. Eakins, J. S. and Kouroukla, E. Luminescence-based retrospective dosimetry using Al2O3 from mobile phones: a simulation approach to determine the effects of position. J. Radiol. Prot. 35, 343–381 (2015).
130. Gómez-Ros, J. M., Pröhl, G., Ulanovsky, A. and Lis, M. Uncertainties of internal dose assessment for animals and plants due to non-homogeneously distributed radionuclides. J. Environ. Radioact. 99, 1449–1455 (2008).
131. Hervé, M. L., Clairand, I., Trompier, F., Tikunov, D. and Bottollier-Depois, J. F. Relation between organ and whole body doses and local doses measured by ESR for standard and realistic neutron and photon external overexposures. Radiat. Prot. Dosim. 125, 355–360 (2007).
125, 355– 360 ( 2007). Google Scholar CrossRef Search ADS   132 Kouroukla, E. Luminescence dosimetry with ceramic materials for application to radiological emergencies and other incidents. ( 2015). 133 Ulanovsky, A., Pröhl, G. and Gómez-Ros, J. M. Methods for calculating dose conversion coefficients for terrestrial and aquatic biota. J. Environ. Radioact.  99, 1440– 1448 ( 2008). Google Scholar CrossRef Search ADS   134 Chipman, H., George, E. and MuCulloch, R. The practical implementation of Bayesian model selection. IMS Lect. Notes – Monogr. Ser. 38, ( 2001). 135 Lund, E. et al.  . Formates and dithionates: sensitive EPR-dosimeter materials for radiation therapy. Appl. Radiat. Isot.  62, 317– 324 ( 2005). Google Scholar CrossRef Search ADS   136 Marrale, M. et al.  . Neutron ESR dosimetry through ammonium tartrate with low Gd content. Radiat. Prot. Dosim.  159, 233– 236 ( 2014). Google Scholar CrossRef Search ADS   137 Bergstrand, E. S., Hole, E. O. and Sagstuen, E. A simple method for estimating dose uncertainty in ESR/alanine dosimetry. Appl. Radiat. Isot.  7, 845– 854 ( 1998). Google Scholar CrossRef Search ADS   © Crown copyright 2017. This article contains public sector information licensed under the Open Government Licence v3.0 (http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/). http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Radiation Protection Dosimetry Oxford University Press
The survey participants were overwhelmingly supportive of the need for a review of uncertainty analysis techniques, inter- and intra-technique comparisons of methods, and organisation of centrally administered training once best practice has been identified. In this publication, the authors present a comparison of techniques of uncertainty estimation amongst laboratories using biological and physical retrospective dosimetry methods of dicentric chromosome, micronucleus, PCC, FISH translocation and γ-H2AX analysis, EPR, OSL and TL. The data were collated from the survey and discussions at EURADOS annual meetings 2012–16 and from results of the RENEB networking project(17). The similarities and differences in recommended uncertainty analysis methods, those used in practice, and also the experimental and external factors that influence the results, are discussed. In biological dosimetry, Bayesian analysis methods have so far only been implemented for the DCA(30) and thus are discussed in this context. Bayesian approaches have been developed for application in luminescence retrospective dosimetry per se(28), and others are widely used to compare retrospective archaeological and geological age estimates(31), but have not yet been applied for accident reconstruction. The wider applications of Monte Carlo sampling are also discussed, with the aim of promoting the techniques for use within the community. Finally, areas in which biological and physical retrospective dosimetry uncertainty analysis methods may be improved are considered.

DOSIMETRY METHODS

There are a number of relevant publications, including recent reviews of biological and physical retrospective dosimetry methods, so only a relatively high-level summary is presented here in order to set the scene for the review of the uncertainty estimation methods that are currently in use.

Biodosimetry

The most well-established assays for biological dosimetry rely on assessing chromosome aberrations in peripheral blood lymphocytes, as the number of aberrations induced by ionising radiation corresponds to the absorbed dose. The established techniques are: (1) the dicentric chromosome assay (DCA), which is the ‘gold standard’(2); (2) fluorescence in situ hybridisation (FISH)(4); (3) premature chromosome condensation (PCC)(5); (4) the cytokinesis-block micronucleus assay (CBMN)(3); and (5) counting of γ-H2AX foci, which form at the sites of double-strand breaks(6). The two main prerequisites for biodosimetry are the stability of aberrations with time following irradiation and knowledge of the background level, i.e. the number of aberrations in a given sample before irradiation occurred. DCA, PCC and CBMN show stability of responses for several months; because cells carrying these aberrations cannot reproduce, they die and the aberrations are gradually removed from the population of circulating lymphocytes. The relative specificity of these aberrations to ionising radiation drives the accuracy of these methods. In contrast, while γ-H2AX and FISH are not radiation specific, the individual background levels are becoming better understood(4, 6, 32). The vast majority of γ-H2AX foci disappear within approximately 24 h, so this is very much a short-term assay; translocations detected by FISH, however, are stable over many years. All of the established methods were developed based on visual scoring techniques, and a large amount of work has been required to ensure their suitability for mass casualty events(17).
The current status is promising, although the need for automation and other more rapid strategies remains(1, 33, 34), and there is also a need for the development of appropriate, reliable uncertainty methods.

Dicentric chromosome assay

As the ‘gold standard’ of biodosimetry, the statistical analysis methods for the dicentric chromosome assay (DCA) are extremely well defined. An ISO standard has been created for the assay in routine(35) and rapid assessment ‘triage’(36) modes. The assay’s standardised guidance text, the International Atomic Energy Agency (IAEA) cytogenetics manual(1), is used by almost all practitioners of the assay for biodosimetry purposes. In brief, the dicentric yield y (average number of dicentric chromosomes per cell) is modelled as a function of the absorbed dose D by formation of a linear or linear-quadratic calibration curve:

$$y = C + \alpha D + \beta D^2 \qquad (1)$$

The specific type (energy/LET) of radiation determines the type of curve, which is created by counting (‘scoring’) aberrations in large numbers of cells, especially at low doses. For in vivo exposure cases, the yield of aberrations observed in the exposed sample is then compared to the calibration curve. Dicentrics are highly specific to ionising radiation, with only a few rarely used radiomimetic drugs also able to induce them, and the lower limit of detection is approximately 100 mGy(1). Simple methods also exist to account for fractionation or partial-body exposures, chiefly based on adherence to or departure from the expected Poisson distribution of aberrations(1).

Micronucleus assay

The in vitro cytokinesis-block micronucleus assay (CBMN) is another well-established method for biodosimetry(3). The assay is based on assessing the frequency of acentric chromosome fragments (micronuclei, MN) and, to a small extent, malsegregation of whole chromosomes in binucleated (BN) cells. MN are caused by many clastogenic and aneugenic agents(37) and thus are not specific to ionising radiation. Compared to the DCA, scoring of MN is simple and quick. Furthermore, scoring can be automated relatively easily, and international standardisation is in place(38). The reported background frequency of MN is variable: values ranging from 0 to 40 per 1000 BN cells have been recorded(1). Consequently, the lower limit for dose detection by conventional CBMN is 0.2–0.3 Gy, although more detailed analysis restricted to centromere-positive cells lowers this limit to 0.05–0.1 Gy. The most important determinants influencing MN background rates are dietary factors, exposure to environmental clastogens and aneugens, age and gender(1). When possible, for instance in medical exposure scenarios, the variability of the background level should be decoupled from the other parameters by identification of the individual background level in blood samples taken before irradiation(39). Dose estimation using CBMN follows the same strategy as for the DCA. The absorbed dose can be assessed up to a few months after exposure(1, 40, 41). Drawbacks of CBMN are the natural overdispersion of MN data, which makes partial-body irradiation hard to detect, and the larger variation in background levels, which means that the detection limit is generally greater(37).

PCC

An important limitation of the DCA, CBMN and FISH assays is the prerequisite for lymphocytes to enter the mitotic phase of the cell cycle, which requires culturing for 48 h or more. Thus dose estimates always take several days.
The need for metaphases also leads to several technical problems, including radiation-induced mitotic delay and cell death, which can result in non-representative cell samples. After high doses of ionising radiation, this can cause considerable underestimation of absorbed doses. Premature chromosome condensation (PCC) can be induced by cell fusion or by chemical induction. The cell fusion PCC technique visualises chromosome aberrations in interphase cells, which can allow same-day biological dose assessment. Chemically induced PCC has been validated for triage in several potential exposure scenarios(1, 5, 40). PCC analysis is not a biomarker on its own; rather, it should be combined with scoring of specific chromosome aberrations (e.g. fragments, rings or translocations). The frequency of spontaneously occurring PCC fragments is in the range of the dicentric frequency, 1–3 per 1000 cells. For dose assessment with PCC, the same tools are used as presented for the DCA. The PCC assay is particularly useful for assessment of a wide range of doses: it is applicable both to exposures at low doses (as low as 0.2 Gy for PCC fragments) and to life-threatening high acute doses of low- and high-LET radiation (up to 20 Gy)(42).

Fluorescence in situ hybridisation

Dicentric chromosomes, rings and MN are ‘unstable’ chromosome aberrations, and thus they are lost from the peripheral blood lymphocyte pool at the rate at which cell renewal occurs. FISH techniques allow identification of stable translocations and have been used for many years for assessment of past exposures(1, 43). However, background frequencies increase significantly with age and vary greatly between individuals of the same age and dose history. No significant gender effects have been observed, but smoking habit has been suggested to be of significance(32). The observed number of aberrations must be corrected for these known confounding factors in order to obtain radiation-induced translocations only, before dose–response curves are calculated as for dicentrics(1). Note that, as a consequence of this stability and non-specificity to radiation, the minimum detectable dose increases as a function of time: the background contributes to the minimum detectable dose in the region of 1.8 mGy per year of age (20–69 years) for acute exposures and 15.9 mGy per year for chronic exposures(42). At high doses, the correlation of translocations and complex aberrations in cells is also of importance(1). Although the biological complexity of this assay is relatively well understood, the uncertainty analysis techniques remain simplistic and more work is needed.

γ-H2AX

The γ-H2AX assay, commonly used for investigating radiosensitivity, has in recent years become a well-established biomarker for radiation-induced DNA double-strand breaks and thus radiation exposure(6, 44). Fluorescence microscopy or flow cytometry is used to measure the formation of DNA repair-protein clusters, termed γ-H2AX foci (in terms of number or intensity), in peripheral blood lymphocytes of exposed persons(45, 46). The foci are not specific to radiation exposure, but the spontaneous frequency is very low. Wide use for biodosimetry purposes is limited by the fast loss of the signal (the maximal γ-H2AX level is reached 30 min after irradiation, and the tissue-related half-life of the signal is 2–7 h), as well as by high individual variability(42, 47, 48).
The advantages of the assay are its high sensitivity and relatively low detection limit (as low as tens of mGy), the need for only a few drops of blood, the fact that lymphocyte cultures are not required, and the ease of automation. However, the absorbed dose can only be assessed up to ~1 day after exposure(49). In addition, the influence of factors such as age, gender and genotype is not yet well understood(50–56). The relatively large uncertainties allow γ-H2AX to be used for biodosimetry only in extremely controlled scenarios(57), and uncertainty assessment is carried out on a case-by-case basis, with the sophistication of the analysis varying greatly between laboratories.

Physical retrospective dosimetry

For the purposes of retrospective dosimetry, physical dosimetry methods are retrospective dose estimation techniques based on the quantitative evaluation of detectable changes induced by ionising radiation in inert materials, or by the activation of atoms such as sodium or phosphorus when exposed to neutrons. They are usually only suitable for the detection of external exposure and, in situations of partial or localised exposure, they provide useful dosimetric information only if the ‘fortuitous dosemeter’ (defined as a material validated for dosimetry which the individual happens to be carrying) is by chance within the radiation field. The three physical dosimetry techniques considered in this review are electron paramagnetic resonance (EPR), optically stimulated luminescence (OSL) and thermoluminescence (TL).

Electron paramagnetic resonance

EPR dosimetry is based on the quantification of paramagnetic species (defects or free radicals) induced by ionising radiation. In solids, such as crystalline materials, the radicals/defects can be trapped and are thus generally sufficiently stable to be measured. The EPR signal is a measure of the radical/defect concentration within the solid matrix and is usually proportional to the mean absorbed dose in the sample. The principles of EPR spectroscopy may be found in several textbooks(58–60), and a detailed description of applications in retrospective dosimetry is given by Trompier and colleagues(61). In general, depending on the complexity of the EPR spectra, the area under the absorption curve or the peak-to-peak amplitude of an EPR signal is used and considered to be proportional to the concentration of paramagnetic species. The validated assays for retrospective analysis are calcified tissues (tooth enamel and bone) and sugar, while materials that are widespread among humans, such as fingernails, mineral glass, sweeteners, plastics and clothing fabrics, are under investigation(12, 13, 62–66). The ideal characteristics of a fortuitous dosemeter are a high radiation-induced signal yield, the absence of endogenous signals, a low UV-induced signal, a low detection limit, linearity of the signal with dose, and post-irradiation signal stability(61). The quantity of radiation-induced radicals may be correlated to a value of the absorbed dose either via application of a ‘positive control’, i.e. the delivery of additional radiation doses with a laboratory source (the ‘additive dose’ method), or via a calibration curve. Calibration curves may be created on an individual basis for each sample, or using different samples each irradiated at a different radiation dose, i.e. always applying the same, universal signal-to-dose calibration curve. A disadvantage of the universal calibration curve is that it does not take into account the specific sensitivity of the sample.
Instead, the curve is built using the average of the sensitivities of different samples. This, of course, affects the uncertainty associated with the estimated dose. When EPR is used for ionising radiation dosimetry, other confounding factors, such as UV exposure or mechanical stress, may generate additional radicals, resulting in signals which may overlap or mask the radiation-induced component of the total signal. In these cases, spectral simulations or other numerical analysis methods are needed in order to decompose the different components of the spectrum and extract the signal of interest. The average lifetimes of the radicals vary from minutes to billions of years. Controlling the sample storage conditions, for example by keeping the samples in dark, low-humidity conditions and sometimes in a freezer, will protect the samples against unwanted changes. A key advantage of EPR analysis is that it can be repeated as many times as needed, as the readout process does not alter the signal. This makes it possible to estimate the effects of sample positioning and of spectrometer reproducibility and stability, which all play a large role in the uncertainty budget for EPR dose estimates.

Optically/thermally stimulated luminescence

Luminescence dosimetry relies on the stimulated emission of light from an insulator or a semiconductor after the absorption of energy from ionising radiation(67). Ionising radiation transfers energy to the electrons of the solid, moving them to a metastable state. When the electrons return to the ground state, recombination occurs and luminescence light is emitted. This recombination occurs after the absorption of stimulation energy, provided by heat in the case of TL and by light for OSL. Crystals contain defects, which produce spatially localised energy levels in the energetically ‘forbidden’ zone between the valence and conduction bands. Ionising radiation produces electron–hole pairs by exciting electrons beyond the potential of their parent molecule into a delocalised state, which is most commonly the conduction band. As electrons and holes migrate in the conduction and valence bands, most recombine rapidly, but some become trapped in metastable states associated with the defects. Later, these can be excited by thermal or optical stimulation, so that electrons and holes are again able to recombine. Following recombination, the host molecule is excited, and some emit photons at visible wavelengths as they de-excite: this emission is termed thermally or optically stimulated luminescence, depending on the type of stimulation. In TL, the light yield is recorded as a glow curve, i.e. as a function of the stimulating temperature, whereas in OSL the number of emitted photons per time interval is recorded as a function of the optical stimulation time. For both stimulation modes, the area under the glow curve/OSL decay curve is related to the total number of emitted photons and thus to the absorbed dose in the dosemeter. Published results on the applications of TL and OSL for retrospective dosimetry have concentrated on chip cards, electronic components and glass from personal electronic devices(68). For some of these methods, interlaboratory comparisons have been performed(69).

UNCERTAINTY ESTIMATION APPROACHES

Biological Dosimetry—Dicentric Assay

Frequentist approach: confidence limits

Uncertainty assessment for cytogenetic dosimetry is widely understood as the quantification of the variability within the dosimetric model, e.g. as defined by equation (1).
Thus, parameter uncertainties as well as biomarker variability need to be considered. Indeed, a full uncertainty analysis for cytogenetic dosimetry following the GUM(21) considers a long list of factors. For routine dosimetry, this list includes: the type and parameters of the dose–response curve, the stochastic characteristics of the biological marker, inter- and intra-individual variability, technical noise sources and practical limitations (e.g. in vitro calibrated methods applied to in vivo data). More complex exposure scenarios introduce further challenges, as described by Vinnikov and colleagues(19, 70). As processing is time consuming and the level of experience varies between laboratories, the potential for standardisation and verification of uncertainty analysis methods has so far been very limited. This may explain the absence of agreement on some of the expected uncertainty parameters within this field, such as the coverage factor(21). For the DCA, which has the most well-developed uncertainty estimation methods of all the biological assays, uncertainties on estimated doses are generally assessed by analysis of the variability of the dicentric yield y and of the parameters of the calibration curve C, α and β(71–74), according to the GUM methodology(21) (with detailed examples for several of the biological techniques given by Ainsbury et al.(75)). In the version of the IAEA manual published in 2001, which was the first time uncertainty analysis was discussed in detail in a methodological biodosimetry publication, three methods of uncertainty assessment were presented, labelled A–C. It was reasoned that Merkle’s Approach ‘C’(72) performs best for low numbers of dicentrics (as in low doses and/or few cells), whereas for high doses Savage’s Approach ‘A’(71) is more precise. In the updated 2011 version of the IAEA manual(1), Merkle’s approach is discussed in greater detail. This method of uncertainty assessment allows incorporation of both the Poisson error of the yield and the errors in the dose–response curve parameters. The confidence bands of the calibration curve follow from the insight that the maximum likelihood estimate of the parameter vector is asymptotically multivariate normal(72). The upper limit y_ul and lower limit y_ll for the expected mean yield are therefore:

$$y_{ul/ll} = C + \alpha D + \beta D^2 \pm R\sqrt{s_C^2 + s_\alpha^2 D^2 + s_\beta^2 D^4 + 2s_{C,\alpha}D + 2s_{C,\beta}D^2 + 2s_{\alpha,\beta}D^3} \qquad (2)$$

where $s_x^2$ denotes the variance of x and $s_{x,z}$ the covariance of x and z, for each parameter x and z in the equation. The square of the regression factor, R², defines the range within which the true average yield is expected; it is equivalent to the confidence limit of the Chi-squared distribution with 2 or 3 degrees of freedom for linear or linear-quadratic fits, i.e. for 95% confidence intervals, R = 2.45 at 2 degrees of freedom (df) and R = 2.80 at df = 3, respectively. Finally, the calculation of confidence intervals for the estimated dose includes two steps: (1) determination of the boundaries YU and YL of the dicentric yields that are consistent with the observed yield in the sample (95% confidence limits for the mean parameter of a Poisson random number); and (2) determination of the absorbed dose for which the upper dose–response curve y_ul exceeds YL, and of the dose for which the lower curve y_ll exceeds YU, as illustrated in the sketch below.
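To make this two-step procedure concrete, the following sketch implements equations (1) and (2) for a hypothetical linear-quadratic curve; the curve parameters, their (co)variances and the scored cell counts are invented example values, not data from any of the surveyed laboratories.

```python
"""Sketch of the Merkle-type confidence limit calculation for the dicentric
assay, equations (1) and (2), with invented example values throughout."""
import numpy as np
from scipy.stats import chi2

# Hypothetical linear-quadratic calibration curve y = C + alpha*D + beta*D^2
C, alpha, beta = 0.001, 0.03, 0.06
s2 = {"C": 2e-7, "a": 2.5e-5, "b": 1e-5,     # variances of C, alpha, beta
      "Ca": -1e-6, "Cb": 5e-7, "ab": -1e-5}  # covariances

def mean_yield(D):
    return C + alpha * D + beta * D**2

def bands(D, R):
    """Lower/upper limits y_ll, y_ul of the mean yield, equation (2)."""
    var = (s2["C"] + s2["a"] * D**2 + s2["b"] * D**4
           + 2 * s2["Ca"] * D + 2 * s2["Cb"] * D**2 + 2 * s2["ab"] * D**3)
    half = R * np.sqrt(var)
    return mean_yield(D) - half, mean_yield(D) + half

X, N = 25, 500                              # observed dicentrics in N cells
y_obs = X / N

# Point estimate: invert equation (1) for the observed yield
D_est = (-alpha + np.sqrt(alpha**2 + 4 * beta * (y_obs - C))) / (2 * beta)

# Step 1: exact 95% Poisson limits on the mean count, expressed as yields
YL = chi2.ppf(0.025, 2 * X) / 2 / N
YU = chi2.ppf(0.975, 2 * (X + 1)) / 2 / N

# Step 2: scan the dose axis for the crossing points of the bands
R = np.sqrt(chi2.ppf(0.95, df=3))           # R = 2.80 for linear-quadratic fits
doses = np.linspace(0, 10, 100001)
y_ll, y_ul = bands(doses, R)
D_lower = doses[np.argmax(y_ul >= YL)]      # first dose whose upper band reaches YL
D_upper = doses[np.argmax(y_ll >= YU)]      # first dose whose lower band reaches YU
print(f"D = {D_est:.2f} Gy, ~95% CI ({D_lower:.2f}, {D_upper:.2f}) Gy")
```

The exact Poisson limits on the mean of the observed count are obtained here from Chi-squared quantiles; for overdispersed data they would first be widened according to equation (3) below.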
Note that some authors reason that the combination of 95% confidence intervals for the dicentric yield and for the confidence bands leads to a falsely large confidence interval for the dose; thus, in the case of such combined errors, the 83% confidence limits of the Chi-squared distribution are more appropriate(76, 77) (at 1−α = 83%, R = 1.88 at df = 2 and R = 2.24 at df = 3, respectively). The alternative approaches given in the IAEA manual(1), ‘A’ (Savage(71)) and ‘B’ (again Merkle(72)), are built on classical error propagation calculations for normally distributed random variables. They apply the Delta method to calculate the standard error of the estimated dose from the calibration curve and its parameter uncertainties. For dose estimation in more complicated scenarios, extensions exist, including corrections for protracted or fractionated doses and for partial-body exposures. In these cases, in principle, after correction of the curve, uncertainty assessment follows the same strategy as above. However, in this case, either the simplified method ‘C’ described above is used (with no parameter uncertainties) or equation (2) must be adjusted manually. Tools to apply the standard (IAEA) methods for automated dose estimation and uncertainty assessment are available, including CABAS (Chromosomal ABerration Calculation Software)(78) and DoseEstimate(79). In order to assess inhomogeneous exposures more realistically, Sasaki modelled the damaged cell population with a mixed Poisson distribution, which can be numerically deconvoluted(80). The resulting exposure profile gives some indication of the uncertainty in the dose; however, this does not represent a rigorous uncertainty assessment. In addition, a correction factor for confidence intervals of overdispersed data (i.e. dispersion index σ²/y > 1) has been proposed(1). For such samples, the limits of the expected range of the yield, YU and YL, for a sample with mean y and variance σ² should be adjusted as follows (with either YU or YL as appropriate):

$$Y_{U/L}^{*} = Y_{U/L}\left(\frac{Y_{U/L}}{y}\right)^{\sigma^{2}/y} \qquad (3)$$

Probabilistic approach: Bayesian methods

In parallel to the classical, frequentist approach, Bayesian methods are becoming increasingly popular in the field of biological dosimetry(30, 81, 82). Key to the Bayesian concept is the application of the inversion theorem in its continuous version, i.e.:

$$P(D\mid X_{obs}) = \frac{P(X_{obs}\mid D)\,P(D)}{\int P(X_{obs}\mid D)\,P(D)\,\mathrm{d}D} \qquad (4)$$

where D denotes the unknown parameter (absorbed dose) and X_obs the observation (the dicentric yield within the sample, y, and the calibration data). Thus, the posterior dose distribution (or calibrative dose density), P(D|X_obs), scales with the product of the likelihood (or predictive density) P(X_obs|D) and the prior P(D):

$$P(D\mid X_{obs}) \propto P(X_{obs}\mid D)\,P(D) \qquad (5)$$

With respect to uncertainty analysis, the Bayesian approach does not require additional considerations, since the resulting distribution P(D|X_obs) (the probability of a dose given the data) inherently provides quantification of the uncertainty within the dosimetric model. Consequently, Bayesian uncertainty intervals for the calibration parameter (the dose in this case) are accurate. Apart from the intrinsic inclusion of uncertainty within the posterior model, the greatest advantage compared to the frequentist approach is the possibility of including information other than the number of aberrations, through the chosen prior distribution(s).
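As a minimal numerical illustration of equations (4) and (5), and not a reproduction of any of the published packages discussed below, the posterior can be evaluated on a dose grid. The sketch assumes a Poisson likelihood for the dicentric count, a fixed hypothetical calibration curve and a gamma prior, and it neglects calibration-curve uncertainty for brevity.

```python
"""Grid-based sketch of the Bayesian dose posterior, equations (4)-(5),
for a Poisson dicentric count and a gamma dose prior; all values invented."""
import numpy as np
from scipy.stats import poisson, gamma

C, alpha, beta = 0.001, 0.03, 0.06       # hypothetical calibration curve
X, N = 25, 500                           # observed dicentrics in N cells

doses = np.linspace(0, 10, 2001)         # dose grid (Gy)
yields = C + alpha * doses + beta * doses**2

likelihood = poisson.pmf(X, mu=yields * N)       # P(X_obs | D)
prior = gamma.pdf(doses, a=2.0, scale=1.0)       # weakly informative prior
posterior = likelihood * prior
posterior /= np.trapz(posterior, doses)          # normalise, equation (4)

dx = doses[1] - doses[0]
cdf = np.cumsum(posterior) * dx
mode = doses[np.argmax(posterior)]
lo, hi = np.interp([0.025, 0.975], cdf, doses)   # 95% credible interval
print(f"Posterior mode {mode:.2f} Gy, 95% CrI ({lo:.2f}, {hi:.2f}) Gy")
```

Replacing the fixed curve parameters with draws from their fitted distribution, or the Poisson likelihood with a negative binomial, extends the same scheme to the overdispersed and partial-body models discussed below.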
The choice of the prior can be sensitive, since well-chosen, informative priors should guide noisy data towards the true dose, whereas incorrect priors may drive the estimate away from it. Some authors reason that high-quality prior information is almost always available, and thus that Bayesian approaches are more appropriate for biological dosimetry(70, 81). Higueras and colleagues(30) also showed that, provided an appropriate prior is applied, the actual choice of prior does not greatly affect the overall dose assessment in some scenarios. In contrast to the frequentist approach, which relies on processing the information successively, from the initial sample, to the calibration data, to a point estimate of the dose surrounded by a confidence interval, the Bayesian approach incorporates all decisions at the same time, and the resulting equation is then solved analytically, numerically or empirically for the required components. A fully Bayesian method thereby also adjusts the PDF of the aberrations to the specific exposure scenario and simultaneously incorporates the uncertainties of the parameters of the dose–response curve. The mathematical complexity of this task means that it is not possible to define a general Bayesian solution applicable to all exposure scenarios. Nevertheless, Di Giorgio and Zaretzky demonstrated a procedure to include prior information in dose estimation in a Bayes-like manner(83): discretisation of the dose range and a separate frequentist estimation of the response curve parameters provide a straightforward method resulting in a Bayes-like posterior of the dose(83). Note that this example also illustrates the influence of the prior on the credible intervals for the dose. In general, to date, three types of Bayesian solutions can be identified within the literature. Firstly, analytical expressions for simplified scenarios: the earliest prominent example of such a solution is the calibrative density for a Poisson-distributed number of aberrations linked to the absorbed dose via a linear dose–response without intercept (C = β = 0), using Gamma priors for the dose and the slope(84). In this case, an analytical expression is derived that is proportional to the posterior. The authors reasoned that this trivial dose–response curve is appropriate for neutrons (high LET) at high doses; however, the neutron dose response is known to exhibit overdispersion(85). Brame and Groer revisited the same scenario in 2002, replacing the Poisson distribution by a negative binomial in order to jointly model the density of the slope and the degree of overdispersion(82). Secondly, practical guides for specific scenarios that provide R code for reuse: Higueras et al.(39) discussed the reasonable set-up of Poisson and compound Poisson models (Neyman A, negative binomial, Hermite) for biodosimetry. Complete and simplified models are provided, and three examples are given of dose assessments for a linear-quadratic calibration curve (two Poisson and one negative binomial regression model). For each example, three different priors are compared(39). Together with Vinnikov, the same group of authors presented a guide for the analysis of partial-body exposure with a zero-inflated Poisson model(30).
This guide approximates credible intervals for the irradiated fraction of the body and the received dose simultaneously from:

$$P(D,F,d_0\mid y) \propto \left(Fe^{-D/d_0} - F + 1\right)^{-n} \sum_{j=1}^{n_0}\binom{n_0}{j}\,F^{\,n-j}(1-F)^{\,j}(n-j)^{s}\,P(X_j = s\mid D)\,P(D)\,P(F)\,P(d_0) \qquad (6)$$

where D is the absorbed dose, F the irradiated fraction of the body, d0 the 37% cell survival dose, n the number of cells in the patient data, n0 the number of cells without aberrations, and Xj a negative binomial distributed random number corresponding to the unirradiated fraction of the cells, with mean and variance depending on the index j and on the mean and variance of the calibration curve, respectively. Uniform priors for F and d0 and a Gamma prior for the dose are used(30). Thirdly, software packages: the Java application CytoBayesJ for cytogenetic radiation dosimetry(81) and the R package radir containing the models by Higueras(86) offer platform-independent software solutions for Bayesian uncertainty assessment. CytoBayesJ offers tools for (i) distribution testing (compound Poisson models), (ii) posterior calculations of the number of aberrations (several combinations of priors and yield models), (iii) Bayesian-like dose assessment (Poisson data), (iv) full Bayesian calculation of posteriors of the dose (Poisson data in y = αD), as well as (v) Bayesian methods for detection limits(81). In order to simplify the analysis, most scenarios include a Bayesian uncertainty assessment of the dicentric yield and then, owing to the mathematical complexity, make a conventional frequentist inverse regression step. Ultimately, for biological dosimetry, it can be concluded that the Bayesian methodology provides the most coherent approach, but at the same time it is far more technically challenging than the dose and uncertainty assessment methods currently recommended and used by most practitioners(1). Despite recent developments, such methods thus remain, to date, ‘expert’ tools. Therefore, software solutions such as those described above will be required to bridge the gap between the necessary mathematical skills and the users. In particular, the potential pitfall of incorrectly chosen priors will need careful consideration going forward, since the methodological coherence of prior and posterior can be seen conceptually as a self-fulfilling prophecy that masks unexpected results.

EPR

Sources of uncertainty

Uncertainties or ‘errors’ can be classified into two types: type A errors can be evaluated by statistical methods, whereas type B errors, commonly termed ‘systematic errors’, must be dealt with by other means. Historically, uncertainties have also been classified as ‘random’ or ‘systematic’ errors, and these terms are still sometimes used for type A and type B errors, respectively. However, it is important to note that the GUM recommends the nomenclature ‘type A’ and ‘type B’ to classify how an error is dealt with rather than where it originates(21). For EPR, retrospective determination of irradiation doses is not a straightforward task: uncertainties of both type A and type B are introduced and must be carefully analysed and reported. Several technical publications have been produced dedicated to the determination of uncertainties with EPR spectroscopy on materials such as tooth enamel or alanine(87–90).
A list of possible sources of uncertainty was drafted for tooth enamel dosimetry by Fattibene and Callens(91), and many of the issues considered in that list (the effect of sample anisotropy, the parameters of spectrum acquisition, spectrometer instability, sample mass, spectrum processing methods, the uncertainty linked to the dose calibration curve, etc.) are valid for almost all EPR dosimetry methods. As recommended by the IAEA(87), the total combined uncertainty is expressed as the quadratic sum of the possible sources of uncertainty, under the assumption that these sources are uncorrelated. Specifically:

$$\sigma_{ED} = \sqrt{\sigma_F^2 + \sigma_S^2 + \sigma_E^2 + \sigma_C^2 + \sigma_T^2} \qquad (7)$$

where σF is the contribution from the fading correction; σS is the contribution from the sample preparation; σE is the contribution from the EPR measurement; σT is the contribution from the numerical treatment of spectra; and σC is the contribution from the calibration of the EPR dose response, including differences in radiation sensitivity. The fading contribution depends on the detector material used in the analysis. For tooth enamel it can be assumed that σF does not contribute to the overall uncertainty(87); however, a sufficient delay should be observed after irradiation to allow signal stabilisation due to the recombination of short-lived species. A minimum delay of 48 h is usually recommended for calcified tissue. This recommendation is also valid for most irradiated materials, including sugars, for example, for which the stabilisation delay can reach weeks(92). EPR dosimetry is usually not performed on unstable species. Nevertheless, a few groups are considering the use of the unstable signal component in nails for dosimetry applications(10, 11, 93). In this case, the fading correction may contribute significantly to the uncertainty budget because of the influence of multiple parameters (temperature, humidity, light, etc.), which may be difficult to evaluate for the delay between irradiation and sample harvest. In addition, special attention must be paid to controlling fading during the storage period, or at least the parameters of influence. Similarly, the sample preparation approach depends on the material used as well as on the method chosen. The other three factors affecting the total uncertainty stem from measurements and data processing. The EPR measurement uncertainty, σE, depends on a complex combination of uncertainties linked to the performance of the EPR spectrometer per se and to the experimental set-up. The uncertainty contributions from the numerical treatment of spectra, σT, and from the calibration of the EPR dose response, σC, can be minimised through experimental validation of the method for different materials, as has been done for tooth enamel(91) and smartphone touch screen glass(13). For this reason, international interlaboratory comparisons of EPR dose reconstruction are the most useful tools for identifying the contributing sources of uncertainty and finding the best solutions to minimise them.

Evaluation of uncertainties

The uncertainty analysis approach used by several EURADOS partners is the standard one recommended for uncertainty estimation for EPR on alanine. In brief, this method consists of taking the mean value of multiple measurements as the best evaluation of the true value and the sample standard deviation as the uncertainty on the signal. A minimum of 10 measurements is commonly used; however, some EURADOS WG10 members regularly perform 12 or 16 measurements (three dosemeters in four different orientations, or four dosemeters in four orientations).
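A minimal sketch of how equation (7) might be evaluated in practice is given below; the twelve readings and the sizes of the non-measurement components are invented example values rather than recommended figures.

```python
"""Sketch of the combined EPR uncertainty of equation (7): repeated readings
of one sample give the measurement term, and the remaining components are
assumed (hypothetically) to have been evaluated separately."""
import numpy as np

# Twelve hypothetical readings (3 dosemeters x 4 orientations), already
# converted to dose in Gy
readings = np.array([1.92, 2.05, 1.98, 2.11, 2.02, 1.95,
                     2.08, 1.99, 2.04, 1.97, 2.06, 2.01])
dose = readings.mean()
sigma_E = readings.std(ddof=1)   # sample standard deviation, as described above

# Hypothetical estimates of the other components of equation (7), in Gy
sigma_F, sigma_S, sigma_C, sigma_T = 0.00, 0.03, 0.05, 0.02

sigma_ED = np.sqrt(sigma_F**2 + sigma_S**2 + sigma_E**2
                   + sigma_C**2 + sigma_T**2)
print(f"ED = {dose:.2f} +/- {sigma_ED:.2f} Gy (combined, equation (7))")
```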
In order to measure unknown absorbed doses, a calibration curve must be created to describe the relationship between the EPR signal (y) and the dose (D): y = F(D). The parameters of the function F are estimated through a best-fit procedure. The dose–signal relation is usually linear, and weighted regression is sometimes, but not always, applied. Thus, the general expression of the estimated dose, ED, as a function of the measured signal, y, is as follows:

$$ED = \frac{y-b}{a} \pm u(ED) = \frac{y-b}{a} \pm \sqrt{\frac{u^2(y)}{a^2} + \frac{u^2(b)}{a^2} + \frac{(y-b)^2}{a^4}\,u^2(a) + \frac{2(y-b)}{a^3}\,u(b,a) + u_{cal}^2} \qquad (8)$$

where a is the calibration curve slope and b is the y-axis offset. The combined variance u²(ED) consists of the variance in the measured signals, u²(y); the variance in the y-axis offset, u²(b); the variance in the slope of the calibration curve, u²(a); the covariance of b and a; and, finally, the dose-dependent variance in the doses given to the calibration samples, u²cal. The covariance term was found to be negligible in accurate EPR dosimetry(94, 95), and it is assumed that this will also be the case for retrospective dosimetry. An unbiased estimate of the variance of the measured signals can be obtained from:

$$u^2(y) = \frac{1}{n-2}\sum_{k=1}^{n}\left(y_k - b_k - a D_k\right)^2 \qquad (9)$$

where the Dk are the known absorbed doses given to the calibration samples, the yk are the corresponding signal values and bk is, in this case, the zero-dose signal for each calibration sample. The denominator represents the number of calibration points minus the two degrees of freedom. Standard regression analysis yields:

$$u^2(b) = u^2(y)\,\frac{\sum_{k=1}^{n} D_k^2}{n\,\sigma^2(D)} \qquad (10)$$

$$u^2(a) = \frac{u^2(y)}{\sigma^2(D)} \qquad (11)$$

where:

$$\sigma^2(D) = \sum_{k=1}^{n}\left(D_k - \bar{D}\right)^2 \qquad (12)$$

and the covariance term is omitted. These principles allow the deduction of chiefly type A uncertainties. Type B uncertainties are generally considered by taking into account uncertainties in fading, corrections for radiation energy, environmental factors, spectrometer variations and the calibration dose. Note, however, that fading could be treated as a type A uncertainty if multiple measurements of the dispersion in fading for a given time period are performed; such a method is used for TL/OSL dosimetry, as described in the next section. An alternative approach relies on determination of the absorbed dose in alanine measurements from the calibration curve, by relating the amplitude of the EPR signal to the absorbed dose. To estimate the uncertainty of the dose, u(ED), an imperfect calibration curve is designed and the procedure described by Nagy(89) is applied: in the case of a calibration plot based on n calibration points, the confidence interval for the dose value D, determined from m replicate measurements of the signal of a test sample, is calculated using the following expression:

$$D = D_0 \pm t_{n-2,P}\cdot\frac{s_{fit}}{b}\cdot\sqrt{\frac{1}{m} + \frac{1}{n} + \frac{(D_0 - D_{mean})^2}{\sum_{i=1}^{n}\left(D_i - D_{mean}\right)^2}} \qquad (13)$$

where t_{n−2,P} is the Student coefficient for the chosen probability P; s_fit is the standard uncertainty of the mean of the fit; b is the slope of the regression; D0 is the dose value to be determined; and D_mean is the mean of the D values of all calibration points Di.
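The regression formulae of equations (8)–(12) can be illustrated with a short sketch; the calibration doses and signals below are invented, a single fitted intercept b is used in place of the per-sample zero-dose signals bk of equation (9), and the covariance term is omitted, as discussed above.

```python
"""Sketch of the linear EPR calibration of equations (8)-(12): an unweighted
straight-line fit with the dose uncertainty propagated from the fit
statistics; all calibration data are invented example values."""
import numpy as np

D = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0])        # added doses (Gy)
y = np.array([0.02, 0.55, 1.08, 2.01, 4.12, 5.94])  # net EPR amplitudes (a.u.)

n = len(D)
a, b = np.polyfit(D, y, 1)                      # slope a, offset b

u2_y = np.sum((y - b - a * D)**2) / (n - 2)     # equation (9), with b_k = b
s2_D = np.sum((D - D.mean())**2)                # equation (12)
u2_b = u2_y * np.sum(D**2) / (n * s2_D)         # equation (10)
u2_a = u2_y / s2_D                              # equation (11)
u2_cal = 0.02**2                                # hypothetical calibration-dose term

y_x = 3.10                                      # signal of the unknown sample
ED = (y_x - b) / a                              # equation (8), covariance omitted
u_ED = np.sqrt(u2_y / a**2 + u2_b / a**2
               + (y_x - b)**2 / a**4 * u2_a + u2_cal)
print(f"ED = {ED:.2f} +/- {u_ED:.2f} Gy")
```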
Performance parameters and predicted uncertainty

In the framework of the European research project Southern Urals Radiation Risk Research (SOUL, 2005), a benchmark protocol was established between three EPR laboratories (HMGU, Munich; ISS, Rome; and IMP, Ekaterinburg) for the definition of the performance parameters for EPR dosimetry with tooth enamel and for the prediction of the associated uncertainty(96). The parameters ‘critical dose’ and ‘limit of detection’, taken from chemical metrology(97, 98), were deemed to be the most appropriate to characterise the uncertainty of EPR measurements. The definition of the critical dose follows from a hypothesis test at 95% probability for an unirradiated sample and hence allows a false positive error rate, α, of 5%. In other words, within the distribution of measured EPR signal amplitudes from unexposed samples, there is an accepted probability of 5% that the amplitude is larger than the critical amplitude, which is the decision limit below which it is assumed that the sample was not exposed and above which it is assumed that an exposure occurred. The absorbed dose value corresponding to the critical amplitude on the EPR signal-to-dose response curve is then termed the critical dose. The definition of the limit of detection follows from a hypothesis test at 95% probability that the sample was exposed, hence allowing a false negative error rate, β, of 5%, i.e. the probability of wrongly concluding that an exposure did not occur. That is, within the distribution of the measured EPR signal amplitudes from exposed samples, there is a probability β of 5% that the amplitude is lower than the critical amplitude. A graphical illustration of the definitions of the critical amplitude and the limit of detection is shown in Figure 1.

Figure 1. A graphical illustration of the definitions of critical amplitude (ICL) and critical dose (DCL), and of the limit of detection of signal amplitude (IDL) and of absorbed dose (DDL) (figure adapted from(96)).

The critical amplitude, ICL, and the limit of detection, IDL, of the EPR signal intensity are calculated from the mean of measurements of unexposed samples, b0, and from the estimated standard deviations of n EPR measurements of unexposed samples, σ̂0, and of samples exposed to a dose DDL, σ̂DL, respectively:

$$I_{CL} = b_0 + t_{(1-\alpha,\,n-2)}\,\hat{\sigma}_0 \qquad (14)$$

$$I_{DL} = I_{CL} + t_{(1-\beta,\,n-2)}\,\hat{\sigma}_{DL} \qquad (15)$$

The estimated standard deviation must be multiplied by the Student’s critical value t(1−[α or β], n−2), the (1−[α or β]) percentage point of Student’s t distribution, with the single-sided confidence interval chosen according to the desired confidence level (1−[α or β]) and the number of samples n. The standard deviations may be evaluated from the 90% prediction bands of an unweighted linear least-squares fit of the EPR signal-to-dose response curves in the case of constant uncertainty. Alternatively, in the case of dose-dependent uncertainty, the values of the standard deviations may be predicted from an analytical model function formulated from the variance of the EPR measurements as a function of the absorbed dose. An example of such a function, developed at the EPR laboratory of the ISS for tooth enamel, is presented in Figure 2.

Figure 2. Model function of the variance as a function of the EPR signal amplitude in tooth enamel, built at the EPR laboratory of the ISS.

Following the work carried out under the European project SOUL, the benchmark protocol has been used for the estimation of the performance parameters within several EPR dosimetry method intercomparisons(13, 99).
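The following sketch evaluates equations (14) and (15) for hypothetical sets of amplitudes from unexposed samples and from samples exposed at the candidate detection-limit dose; the sample sizes, amplitudes and the use of n − 2 degrees of freedom simply follow the equations as written.

```python
"""Sketch of the decision threshold and detection limit of equations (14)-(15),
using invented EPR amplitude data."""
import numpy as np
from scipy.stats import t

unexposed = np.array([0.10, 0.14, 0.08, 0.12, 0.11, 0.09, 0.13, 0.10])
exposed_at_DDL = np.array([0.31, 0.28, 0.35, 0.30, 0.33, 0.29, 0.32, 0.34])

alpha = beta = 0.05
n = len(unexposed)
b0, s0 = unexposed.mean(), unexposed.std(ddof=1)
sDL = exposed_at_DDL.std(ddof=1)

I_CL = b0 + t.ppf(1 - alpha, n - 2) * s0     # equation (14)
I_DL = I_CL + t.ppf(1 - beta, n - 2) * sDL   # equation (15)
print(f"Critical amplitude I_CL = {I_CL:.3f}, detection limit I_DL = {I_DL:.3f}")
# The critical dose D_CL and detectable dose D_DL follow by mapping I_CL and
# I_DL through the signal-to-dose calibration curve.
```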
OSL/TL

Evaluation of uncertainties

In OSL and TL there is no specific standard dedicated to the evaluation of uncertainties, and evaluations normally follow the classical GUM(21) guidance, with the uncertainty analysis performed using the standard theory of error propagation. If only a single dose calibration point is used, then the unknown absorbed dose, DX, is obtained through a simple comparison between the corresponding luminescence signal (TL or OSL) and the luminescence signal, Ical, obtained after exposing the same dosemeter to a calibration dose, Dcal. If fading is an issue, then either the signal or the measured dose can be corrected for this effect. Two cases will be considered here, both of which have been applied in the literature: (1) the fading factor is determined individually for the sample in question, using the (known) time tX since irradiation; and (2) the fading factor is calculated from a known fading function, with associated uncertainties. In case (1), a possible approach would be as follows: after measurement of the signal IX related to the unknown dose DX of the incident, with a time delay tX since this incident, the sample is given a calibration dose Dcal and the corresponding signal Ical is measured after a time tcal. The latter procedure is then repeated with the same dose Dcal, but this time waiting for the longer time interval tX (the same delay as for the accidental exposure) before measuring the corresponding signal IX,Dcal. The fading factor is then directly determined by the simple relation:

$$f = \frac{I_{X,D_{cal}}}{I_{cal}} \qquad (16)$$

In this case, only the measurement uncertainties of IX,Dcal and Ical are required for the evaluation of the uncertainty in f. The unknown absorbed dose DX is then calculated as:

$$D_X = \frac{I_X}{f\,I_{cal}}\,D_{cal} \qquad (17)$$

with IX being, as above, the signal measured after the unknown exposure, with a delay time tX (note the difference between IX and IX,Dcal). Equation (17) can then be simplified to:

$$D_X = \frac{I_X}{I_{X,D_{cal}}}\,D_{cal} \qquad (18)$$

Therefore, only the measurement uncertainties in IX and IX,Dcal and the uncertainty in determining Dcal are needed for the calculation of the uncertainty in DX, which can be carried out using the GUM methodology as explained in the sections above. This method assumes that the uncertainties in IX and IX,Dcal exhaustively explain the observed deviances in the dose–response and fading curves. From experience, however, it is known that this is probably not always the case, and the uncertainty in DX is therefore likely to be underestimated using the above procedure; for instance, an uncertainty in the time tX since the unknown exposure is not considered. In case (2), fading is calculated according to a functional relationship fitted to datasets of other samples. For chip cards and electronic components, where the effect of anomalous fading is suspected, this functional relationship between intensity and time since irradiation is well known(69, 100–104):

$$I(t) = I_C\left[1 - \kappa\ln\!\left(\frac{t}{t_C}\right)\right] \qquad (19)$$

with IC being the signal intensity that would be observed after an (arbitrarily) chosen time tC after irradiation, and κ a fitting constant (in the literature, the common logarithm is often used and κ is replaced by g/100, with g being the percentage decrease per decade). If, for convenience, tC is set to tcal before fitting equation (19), then the fading factor can be calculated as:

$$f = \frac{I(t_X)}{I(t_{cal})} = 1 - \kappa\ln\!\left(\frac{t_X}{t_{cal}}\right) \qquad (20)$$

The difference between equations (20) and (16) is that here the signal intensities are calculated rather than measured.
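A sketch of how equation (19) might be fitted and the fading factor of equation (20) derived is shown below; the fading data, the delay times and the choice tC = tcal are all hypothetical.

```python
"""Sketch of case (2): fitting the logarithmic fading model of equation (19)
to hypothetical fading data and deriving the fading factor of equation (20)."""
import numpy as np

t_data = np.array([1, 3, 10, 30, 100, 300])               # delay times (h)
I_data = np.array([1.00, 0.97, 0.93, 0.90, 0.86, 0.82])   # normalised intensities

t_cal = 1.0                                  # reference time t_C = t_cal (h)
# Equation (19) is linear in ln(t/t_cal): I = I_C - (I_C * kappa) * ln(t/t_cal)
x = np.log(t_data / t_cal)
slope, I_C = np.polyfit(x, I_data, 1)
kappa = -slope / I_C

t_X = 72.0                                   # delay of the accidental exposure (h)
f = 1 - kappa * np.log(t_X / t_cal)          # equation (20)
print(f"kappa = {kappa:.4f}, fading factor f = {f:.3f} after {t_X:.0f} h")
```

Fitting each sample individually and pooling the resulting κ values, as discussed below, gives a more honest estimate of the parameter uncertainty than fitting a single averaged curve.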
If, again, a single calibration dose is used to convert signal to dose, the unknown dose DX is calculated according to equation (17). If uncertainties are assumed in κ, tX, tcal, IX, Ical and Dcal, then in this simplified case the uncertainty in DX can be assessed using the GUM methodology:

$$\delta D_X = D_X\cdot\sqrt{\left(\frac{\delta D_{cal}}{D_{cal}}\right)^2 + \left(\frac{\delta I_X}{I_X}\right)^2 + \left(\frac{\delta I_{cal}}{I_{cal}}\right)^2 + \left(\frac{\delta f}{f}\right)^2} \qquad (21)$$

with:

$$\sigma_f^2 = \left(\ln\frac{t_X}{t_{cal}}\right)^2\sigma_\kappa^2 + \left(\frac{\kappa}{t_X}\right)^2\sigma_{t_X}^2 + \left(\frac{\kappa}{t_{cal}}\right)^2\sigma_{t_{cal}}^2 \qquad (22)$$

It should be emphasised that if published fading parameters are used in equation (19) and tC does not equal tcal, then equation (20) should be applied in its more general form, as a ratio of two calculated intensities with associated uncertainties. The calculation of the uncertainty in DX is then more laborious, but still straightforward. Another issue is that if equation (19) is fitted to fading data obtained from the averaged signals of several samples, as is sometimes done, then the calculated signal uncertainty will always be lower than the standard deviation of the input data, i.e. the uncertainties in the parameters κ and IC are unlikely to describe the full variability in the observed fading behaviour. One possibility to circumvent this issue is to fit the fading data of each sample individually, rather than to average the signals, and then to calculate the average and standard deviation of the group of obtained parameter values. This approach was pursued, for example, in the MULTIBIODOSE project(49). For a luminescence reader with a built-in calibration source, the time delay tcal between irradiation with the calibration dose and measurement is usually known very accurately, and the uncertainty δtcal can therefore be neglected in this case. On the other hand, an increase in the value of tcal leads to a reduction of the first term in equation (22) and thus to a reduction in the uncertainty of the fading factor. Furthermore, there will also be fading during the irradiation period itself, i.e. during tirr. If tcal is of the order of tirr, this should be accounted for in the evaluation of the uncertainty in DX, for example (approximately) by adding tirr/2 or tirr/ln 2 to the delay time; it is more correct, however, to make tcal ≫ tirr. It should be noted that, in general, the fading function appropriate to the material being studied should be determined independently, and the uncertainty analysis appropriate to that expression should then be evaluated and used. As an example, in the case of human teeth as well as of integrated circuits from mobile phones, the fading curves were better fitted by a bi-exponential decay function(100, 102).
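Under the logarithmic fading model, the propagation of equations (21) and (22) can be sketched as follows; all measured values and uncertainties are invented for illustration.

```python
"""Sketch of the error propagation of equations (21)-(22) for a fading-
corrected OSL/TL dose; all inputs are hypothetical."""
import numpy as np

I_X, dI_X = 1250.0, 40.0        # signal of the unknown exposure (a.u.)
I_cal, dI_cal = 2400.0, 50.0    # calibration signal (a.u.)
D_cal, dD_cal = 2.0, 0.04       # calibration dose (Gy)
kappa, dkappa = 0.032, 0.004    # fading constant
t_X, dt_X = 72.0, 6.0           # delay since exposure (h)
t_cal, dt_cal = 0.5, 0.01       # delay for the calibration readout (h)

f = 1 - kappa * np.log(t_X / t_cal)            # equation (20)
df2 = (np.log(t_X / t_cal)**2 * dkappa**2      # equation (22)
       + (kappa / t_X)**2 * dt_X**2
       + (kappa / t_cal)**2 * dt_cal**2)

D_X = I_X / (f * I_cal) * D_cal                # equation (17)
dD_X = D_X * np.sqrt((dD_cal / D_cal)**2 + (dI_X / I_X)**2
                     + (dI_cal / I_cal)**2 + df2 / f**2)  # equation (21)
print(f"D_X = {D_X:.2f} +/- {dD_X:.2f} Gy (f = {f:.3f})")
```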
In contrast to electronic components, sensitive dosemeter materials with comparatively slow or no fading, such as household salt (NaCl)(101, 105) and quartz extracted from building materials(106–108), may have a substantial background signal if shielded from light during the time of storage before irradiation. The detection limit in such materials is related to the magnitude of, and the uncertainty in, the background signal. The background absorbed dose in, for example, household salt may vary depending on how the salt was manufactured(101), in what package it was kept, and how and where this package was stored(105). For quartz extracted from bricks, the background dose depends on the concentration of natural radionuclides in the brick, plaster and soil in front of the building, and on the age of the bricks (as referenced above). In these cases, when the dose is in the region of the background dose, the variance in the background dose will predominate over the uncertainty in the dose measurement itself; for higher doses, in the region of several hundred mGy or above, the uncertainty in the background dose will have a lesser impact. Luminescence signals recorded by photon-counting hardware are in essence binomial, and are assumed to approximate a Poisson distribution when sufficient counts are registered. In OSL, a background determined from a certain part of the OSL decay curve (often the last seconds of the measured signal) is usually subtracted from the measured signal. This background can be a combination of hard-to-bleach components and instrumental background and, as such, can be overdispersed. Detailed approaches for the calculation of the uncertainty in the net OSL count in such cases can be found in the literature, e.g. Galbraith(109). If several calibration doses are used in order to verify the dose–response curve, or several delay times are used to verify the fading curve, a number of different methods are applied. These include proprietary codes or spreadsheets, software for luminescence data processing (e.g. Analyst(110)) and dedicated curve-fitting packages (e.g. SigmaPlot(111), Origin(112)). The equations chosen to approximate dose responses are commonly linear, saturating exponential (sublinear) or exponential/quadratic (superlinear) in form(113). If a sample is divided into several aliquots to assess DX, the quantities IX and Ical could be calculated as the averages of the luminescence signals of the different aliquots, with their uncertainties taken as the weighted standard errors of the mean. However, to avoid additional uncertainties due to the different aliquot sensitivities, in practice a dose is usually measured for each aliquot individually, and the obtained distribution of aliquot doses is then further analysed to obtain a best estimate and uncertainty for DX. A variety of approaches has been developed for obtaining central measures from non-perfect data in this case(28, 114, 115). Means and maximum likelihood estimates such as the weighted mean (weighted by inverse variance) are associated with well-defined uncertainty estimates(21), which are obtained either by propagation of uncertainties through the calculation (internal error) or by evaluation of the dispersion in the observed results (external error). The dispersion in the observations is commonly greater than that predicted by propagation of uncertainties through the calculation of the central estimate, leading to overdispersion. This can relate to experimental variables that are undefined or not included in the calculation, particularly where signal levels are low; it may also relate to the assumptions underlying the calculations themselves, since the combination of data in GUM-based approaches assumes a Gaussian approximation to the Poisson distribution.
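The weighted-mean machinery just described can be sketched as follows; the aliquot doses and uncertainties are invented, and the external error is obtained by scaling the internal error with the square root of the reduced chi-square, which is one common convention.

```python
"""Sketch of the inverse-variance weighted mean of aliquot doses, with
'internal' (propagated) and 'external' (dispersion-based) uncertainties."""
import numpy as np

doses = np.array([1.10, 0.95, 1.22, 1.05, 1.31, 0.88])   # Gy, per aliquot
sigma = np.array([0.08, 0.07, 0.10, 0.09, 0.12, 0.08])   # Gy, per aliquot

w = 1.0 / sigma**2
mean = np.sum(w * doses) / np.sum(w)
internal = np.sqrt(1.0 / np.sum(w))                # from propagated uncertainties
chi2_red = np.sum(w * (doses - mean)**2) / (len(doses) - 1)
external = internal * np.sqrt(chi2_red)            # from the observed dispersion

print(f"D_X = {mean:.2f} Gy; internal {internal:.3f} Gy, external {external:.3f} Gy")
# A reduced chi-square well above 1 signals overdispersion, i.e. scatter
# beyond that explained by the individual aliquot uncertainties.
```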
MONTE CARLO MODELLING TO SUPPORT UNCERTAINTY CALCULATIONS A key aim of the EURADOS WG10 uncertainties task is to further promote the powerful Monte Carlo (MC) techniques for uncertainty estimation. Thus, in addition to the above review of uncertainty analysis techniques in retrospective dosimetry, we present the following review of the use of MC methods within uncertainty estimation. With the availability of high-power computational facilities, numerical simulations have become increasingly practical and popular for the analysis of physical and biological systems. One method of numerical simulation with widespread application in dosimetry, as well as in countless other physical and biological sciences, is the Monte Carlo method. MC modelling can be used to aid and analyse uncertainty propagation, where the goal is to determine how random variation, lack of knowledge or error affects the sensitivity, performance or reliability of the system being modelled. This inevitably comes at a cost, however: the MC method is itself prone to uncertainty, and can therefore introduce additional sources of error. The current section of this paper focuses on two applications of the Monte Carlo method that are relevant to retrospective dosimetry. The first is the use of Monte Carlo programmes created specifically for uncertainty propagation analysis, for which an illustrative example of the technique is given. The second concerns the MC transport of ionising radiation through matter, a common technique used to model retrospective dosimetry systems. In each case, the role that MC plays in both increasing and decreasing a user's understanding of uncertainty is discussed. Uncertainty propagation with Monte Carlo Monte Carlo simulation (MCS) provides a practical alternative to the GUM modelling approach. Indeed, the GUM method has limitations, especially where the model is characterised by a non-linear function and the approximation of a Taylor series expansion up to first-order terms is not sufficient for error propagation. Furthermore, the uncertainty distributions may be non-Gaussian, in which case it is not always possible to propagate uncertainties using the GUM approach(21, 116–120). The use of MCS for the evaluation of measurement uncertainty is presented in the 'GUM Supplement'(21). MCS may be applied to estimate the combined effects of uncertainty propagation through a physical system comprising a number of individual components, each of which possesses outcomes and uncertainties expressed by independent probability functions. Many authors report applications of MCS for the determination of measurement uncertainties: Couto et al.(116), for example, recommended its use for complex problems that could not be solved by the GUM method. Whereas GUM calculations are purely theoretical, MC analyses perform a large series of simulated experiments, with estimates of uncertainties then derived from the distributions of their results. In most simple cases, the theoretical GUM results can be compared and tested against the MC ones. In more complicated situations, where the GUM approach would be difficult or unfeasible, MC simulations may still provide reliable results. Moreover, whereas the GUM modelling approach may require advanced mathematical skills for many of its procedures, the MCS method can be applied using readily available spreadsheet software such as Microsoft Excel or LibreOffice Calc: complex uncertainty calculations can hence be accomplished by non-statisticians using standard spreadsheet applications rather than technically demanding mathematical procedures(119, 120). The MCS method for assessing uncertainty propagation MC analyses require the definition of a measurement model (with the corresponding functional relationship) that describes the measurement process in terms of the inputs to the measurand, and the assessment of the types of distribution that apply to the various input uncertainties.
The aim of the MC analysis is then to obtain properties of the measurand quantity, Y, such as expectations, variances, covariances and coverage regions, by calculating an approximate numerical representation of the distribution function G_Y for Y. Suppose that Y is a function of N independent variables X_i, i.e. Y = f(X) = f(X_1, …, X_N); for the present discussion the X_i are assumed to be continuous parameters, but similar techniques can be used for discrete variables. For each input variable X_i the corresponding PDF, P(X_i), describing its likely values is assumed to be known. A value for Y may therefore be drawn by sampling the N input quantities X_i from their respective PDFs. In practice, this sampling procedure is typically achieved computationally using pseudo-random numbers that are generated algorithmically according to a uniform distribution between 0 and 1, and then suitably 'transformed' to obtain the prescribed probability distribution. One such transformation makes use of the cumulative distribution function (CDF), C(X_i), corresponding to a given P(X_i), which is a monotonically increasing, normalisable function with a range constrained between 0 and 1; the result from the MC sampling of the uniform distribution is identified with a value within the range of this CDF, which then maps uniquely to a specific outcome X_i. The distributions P(X_i) most commonly used in uncertainty calculations are the Gaussian, rectangular, triangular, t, exponential, gamma and multivariate Gaussian; each of these can be sampled fairly in this way from a uniform distribution between 0 and 1. Sampling once from each of the N PDFs P(X_i) corresponding to the N independent input quantities X_i, and evaluating f(X), provides one value of the measurand, which may be labelled Y_1. Clearly, the value of Y_1 will depend on the specific outcomes obtained during the N random samplings, and repeating the process is likely to yield a different estimate, Y_2. If the MCS is repeated M times, requiring M × N samplings overall, a distribution G_Y of M values for the measurand is generated, i.e. {Y_1, Y_2, …, Y_M}. The process is repeated a sufficiently large number of times (i.e. M is very large) to give significant statistics, i.e. until it may be assumed that the generated distribution G_Y provides a reasonable estimate of the distribution of the true measurand Y. Since the input values are randomly drawn from the predefined probability distributions associated with each of the input variables, the information regarding these PDFs is included implicitly in the distribution of the Y variable; this is what allows for the propagation of distributions. Once the representation G_Y of the distribution function for Y has been derived, values for the mean and standard deviation associated with Y, as well as the other moments of the distribution function, can be extracted from it. Moreover, the distribution of output data can be plotted and additional information extracted from the graph, such as the coverage interval of the measurand for a stipulated coverage probability, p, even when the PDF of the measurand has significant asymmetry. Graphical representation of the distribution of the measurand obtained through the MCS procedure thus allows for the detection of possible asymmetry or deviation from a Gaussian shape.
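As a minimal sketch of this procedure, the script below draws uniform deviates, transforms one set through an inverse CDF, and propagates the samples through a hypothetical measurement model f(X); the model and all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 1_000_000  # number of MC trials

# Inverse-CDF ('transformation') sampling: a uniform deviate on (0, 1)
# is mapped through the inverse cumulative distribution function,
# shown here for an exponential input with rate 2.
u = rng.random(M)
x1 = -np.log(1.0 - u) / 2.0
x2 = rng.normal(10.0, 0.5, M)     # Gaussian input
x3 = rng.uniform(0.9, 1.1, M)     # rectangular input

# Hypothetical measurement model Y = f(X1, X2, X3).
y = x2 * x3 + x1

print(y.mean(), y.std(ddof=1))        # expectation and standard uncertainty
print(np.percentile(y, [2.5, 97.5]))  # 95% coverage interval from G_Y
```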
Such a graphical representation also aids the determination of a coverage interval corresponding to a stipulated coverage probability. From the above discussion, the advantages of MC simulation with respect to the GUM approach(21) are seen to be manifold. The MC technique involves propagation of distributions and always provides a PDF for the output quantity that is consistent with the PDFs of the various inputs, whereas the GUM modelling approach cannot explicitly determine a PDF for the output quantity. Also, where the input quantities themselves depend on other quantities, including corrections and correction factors for type B errors, MCS is able to calculate the combined standard uncertainty of the measurand even if the functional relationships are complex or difficult to treat analytically. Similarly, if two inputs are correlated via a bivariate distribution, MC analysis can provide a joint simulation, provided the input PDFs are defined so as to include the correlation coefficient. Additionally, the MCS procedure intrinsically accounts for any non-linearity in the functional relationship, whereas GUM does not; in general, more accurate estimates of uncertainties for non-linear models are therefore achieved through MC calculations. Application of the MCS method for uncertainty propagation To show how the MC method is used to evaluate measurement uncertainties, an example application for EPR retrospective dosimetry is reported here. Consider a plot of EPR signal as a function of absorbed dose, and assume that the data may be fitted with a calibration curve characterised by a quadratic trend. This behaviour is common for samples with a native background signal of the same shape as, and overlapping, the radiation-induced signal; in such cases exposure to ionising radiation increases the intensity of the combined signal. The fitting function in this case is of the general form:

$$S = a + bD + cD^2 \qquad (23)$$

where S is the EPR signal, D is the absorbed dose, and a, b and c are the fitting parameters of the calibration curve, which in this example have the values a = 10.4 ± 0.2, b = 1.527 ± 0.011 and c = 0.409 ± 0.005. In EPR retrospective dosimetry, the general approach is to reconstruct the absorbed dose D_r deposited during an exposure from a measurement of the induced signal, with corrections applied to account for fading and other measurement conditions. As per equation (7), S is the signal measured from a sample, and the corresponding standard deviation σ_ED is calculated by combining the contributions from the standard deviations associated with the fading correction (σ_F), the sample preparation process (σ_S), the EPR measurement (σ_E) and the numerical treatment of spectra (σ_T). For the present example, suppose that S ± σ_ED = 25.5 ± 0.7. The reconstructed dose can be calculated by inverting equation (23), i.e.:

$$D = \frac{-b + \sqrt{b^2 - 4ac + 4cS}}{2c} \qquad (24)$$

As can be seen, equation (24) contains a fraction and a square-root term. Calculating the standard deviation σ_D of the reconstructed dose following the GUM modelling approach is therefore not straightforward because, to take into account the uncertainties of the calibration curve parameters, the corresponding partial derivatives of the factors in equation (24) must be calculated.
In fact, under the simplifying hypothesis that the covariances between the various fitting coefficients are negligible, this uncertainty becomes analogous to equation (8):

$$\sigma_D = \sqrt{\left(\frac{\partial D}{\partial a}\right)^2\sigma_a^2 + \left(\frac{\partial D}{\partial b}\right)^2\sigma_b^2 + \left(\frac{\partial D}{\partial c}\right)^2\sigma_c^2 + \left(\frac{\partial D}{\partial S}\right)^2\sigma_S^2} \qquad (25)$$

After calculating the partial derivatives and substituting the above values, this approach can be shown to give the result D_r = 4.49 ± 0.14 Gy. Alternatively, the calculation can be performed by MCS, following a simple procedure with common spreadsheet software. An example of this MC analysis is reported in Figure 3, in which the values in columns A, B, C and D are realisations of the S measurement and of the a, b and c parameters, respectively. All of these quantities are assumed to be distributed normally around their respective average values with their respective standard deviations, which are stated at the top of Figure 3. Based on these distributions, trial values are drawn for each of the input variables (a, b, c and S), and the corresponding value of the dose D (column H) is then calculated using equation (24). In this example spreadsheet, the values were calculated using a combination of the NORMINV (for calculating Gaussian-distributed values) and RAND (for generating pseudo-random values) functions, according to the procedure described in the literature(120). Figure 3 lists the results from 10 such applications of this process; for the complete analysis, a total of 10^6 trials were performed. A histogram of the dose values obtained from these 10^6 trials is reported in Figure 4, binned in 0.02 Gy increments, with the probability (y-axis) derived from a normalisation of the respective populations. Figure 3. Example of a Monte Carlo simulation for estimating the uncertainty of an EPR measurement, performed using a spreadsheet programme. Figure 4. Histogram of dose values obtained by means of Monte Carlo simulations. The mean value and the standard deviation of the results are readily calculated from the histogram (Figure 4), and in this example are equal to 4.49 Gy and 0.14 Gy, respectively. These values are consistent with those obtained by the GUM approach. However, as mentioned previously, an advantage of the Monte Carlo analysis is that it also provides the PDF for the output quantity, which depends on the PDFs of the various inputs. Other moments of the distribution can also be obtained from this PDF, such as the skewness and the kurtosis; these are found to be −0.0588 and 3.0093, respectively, for the data in the current example (Figure 4), close to the values of 0 (skewness) and 3 (kurtosis) of a Gaussian distribution, as expected since the PDFs of the input variables were all assumed to be Gaussian.
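The spreadsheet calculation of Figures 3 and 4 can be reproduced in a few lines of code. The sketch below is a Python analogue of the NORMINV/RAND construction(120), using the parameter values quoted above; it is not the spreadsheet used for the figures.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 1_000_000  # number of trials, as in the text

# Gaussian realisations of the inputs (analogue of NORMINV(RAND(), m, s)):
a = rng.normal(10.4, 0.2, M)
b = rng.normal(1.527, 0.011, M)
c = rng.normal(0.409, 0.005, M)
S = rng.normal(25.5, 0.7, M)

# Equation (24): inversion of the quadratic calibration S = a + b*D + c*D^2
D = (-b + np.sqrt(b ** 2 - 4 * a * c + 4 * c * S)) / (2 * c)

print(D.mean(), D.std(ddof=1))           # close to 4.49 Gy and 0.14 Gy
z = (D - D.mean()) / D.std()
print((z ** 3).mean(), (z ** 4).mean())  # skewness ~0, kurtosis ~3
```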
Another application of the Monte Carlo technique is the uncertainty analysis of TL measurements on the display glass of mobile phones. The absorbed dose measurement is influenced in this case by the presence of an intrinsic background signal and by signal fading(103). The intrinsic background signal can be reduced, though not completely eliminated, by etching the glass sample in concentrated HF before measurement(104). In both cases (etched or unetched), the distribution of intrinsic background doses was shown to follow approximately a log-normal distribution. From the measured dose D, along with its estimated uncertainty σ_D, the corrected unknown absorbed dose D_corr is then calculated from the expression:

$$D_{\mathrm{corr}} = \frac{D - D_{\mathrm{BG}}}{f} \qquad (26)$$

with D_BG the median of the intrinsic background dose distribution and f the fading factor. Analysis of the signal fading of 17 different glass samples for different storage times indicated that the variability (standard deviation) in f is approximately independent of the value of f itself, so a constant value for σ_f is assumed. Since the calculation of the uncertainty of the corrected dose involves the combination of Gaussian and non-symmetrically distributed parameters, the GUM methodology is not directly applicable, whereas with MCS the simulated distribution of possible corrected dose values is easily obtained, allowing immediate assessment of the median and the 95% confidence interval. An example for two unetched glass samples from mobile phones is shown in Figure 5. For the sample with the lower dose, the uncertainty in the intrinsic background dose dominates, leading to a distribution skewed to the left; for the sample with the higher dose, the uncertainty in the fading dominates, leading to a distribution skewed to the right. Figure 5. Histogram of dose values for two display glass samples of irradiated mobile phones (Samsung Galaxy Y S5360). Nominal doses were 0.6 Gy and 1.5 Gy; reconstructed doses with 95% CI were 0.59 [0.18–0.83] Gy and 1.6 [1.3–2.2] Gy, respectively.
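A minimal sketch of this combination of distributions follows. The structure, a Gaussian measured dose and fading factor combined with a log-normal background dose via equation (26), is as described above, but all numerical parameters are invented (the real values are sample-specific):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 1_000_000

D    = rng.normal(0.80, 0.10, M)            # measured dose +/- sigma_D (Gy)
D_BG = rng.lognormal(np.log(0.15), 0.5, M)  # log-normal intrinsic background
f    = rng.normal(0.85, 0.05, M)            # fading factor, constant sigma_f

# Equation (26): corrected dose, trial by trial
D_corr = (D - D_BG) / f

print(np.median(D_corr))                   # central estimate
print(np.percentile(D_corr, [2.5, 97.5]))  # 95% confidence interval
```

The resulting histogram is skewed whenever one asymmetric component dominates, exactly as seen for the two samples in Figure 5.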
Radiation transport modelling with Monte Carlo There are a number of Monte Carlo radiation transport codes currently available, examples including the EGSnrc(121), FLUKA(122), GEANT(123), MCNP(124), PENELOPE(125) and PHITS(126) families of software. These codes are described as 'general purpose': they are intended, in principle, to be able to model the passage of any type of ionising radiation from any type of source through any arrangement of matter that might be required by their users, providing output data on parameters such as energy depositions and fluences at any location of interest in the geometry. Accordingly, these codes have widespread application in retrospective dosimetry(127–133), where computer models of the dosimetry system in question may be created and interrogated to understand or improve its performance, limitations and uncertainties. Despite these successes, however, the techniques are not without drawbacks: although they may be a valuable tool for evaluating and handling uncertainty, they may also be a source of it. Statistical uncertainties with MC modelling It is relatively easy for the users of general purpose MC codes to reduce the statistical uncertainties on their results. The procedures for doing so typically rely on increasing the number of scored histories in the regions of interest within the geometry. The most elementary method is simply to instruct the programme to simulate a greater number of particle histories, though this is inevitably achieved at the expense of increased CPU time. More sophisticated variance reduction techniques can also often be implemented, such as biasing source directions, forcing particular interactions to occur, or artificially splitting individual particle histories into multiplicities, with scores then weighted accordingly to ensure fairness and fidelity of the results. Using these techniques, coupled with the power of modern computers (especially cluster-based platforms), it is not uncommon for simulated results to carry very small statistical uncertainties, sometimes a fraction of a per cent. Anecdotally, however, at least within the experience of the authors of this article, such values are then often quoted in the scientific literature as the primary or only uncertainty provided with a particular Monte Carlo result. This is misleading, because it neglects the type B uncertainties that are also inevitably associated with the modelling, which tend to be much harder to quantify and may be substantially larger in magnitude. In summary, with Monte Carlo modelling it is often easy to derive highly precise results, but this does not necessarily mean that they are accurate; arguably, the important difference between these two qualities is not always given enough weight. Known type B 'systematic' uncertainties The MC radiation transport method simulates the passage of particles through a user-defined configuration of matter. Accordingly, the accuracy with which the computational model reflects physical reality will dictate the accuracy of its results. Clearly, then, a number of factors could introduce significant uncertainty into the modelling. These might be classified into two broad types: uncertainties inherent in the Monte Carlo software, and uncertainties associated with the user-defined model itself. Type B uncertainties within the Monte Carlo software include factors such as uncertainties in the underlying physics on which it relies, for example limitations and inaccuracies of the interaction models in use, including any energy dependencies they might have. Although many of these uncertainties may be known in principle, or could be derived from the various references that describe the origins of the physical data and models underpinning the general purpose codes, their magnitudes may not be readily apparent to 'casual' users of the software, and their combined effects are even harder to quantify. Their contributions to the overall uncertainty budgets arising from the use of Monte Carlo modelling in retrospective dosimetry are therefore highly context dependent, and difficult to enumerate in general. Type B uncertainties that originate from the users of the codes reflect the inevitable inability of those users to construct a perfect model of the physical system. This might be because of factors that can only be known with limited resolution, and can hence only be input to the MC programme with limited accuracy.
To give an illustrative example, in the modelling of resistors in mobile phones for fortuitous dosimetry(132), the absorbed doses received by the target will depend strongly on accurate knowledge of the material compositions and densities of the aluminium oxide substrate, the high-Z contact electrodes adjacent to it, the circuit board to which it is attached, and the screen, case, battery and other features that surround it, as well as on all of their relative locations in 3D space; the estimate of each of these physical parameters is subject to a significant measurement uncertainty, which translates into an unavoidable inaccuracy of the MC model (and hence of its results). Some type B uncertainties may be mitigated by performing sensitivity analyses with the model; indeed, investigating the likely effects of such sensitivities in the physical world might be the primary motivation for developing the Monte Carlo model in the first instance. For example, the impact on dosimetry of the measurement uncertainty on the density of a given object in the real world may be estimated by perturbing the density of that object in the model by an amount deemed equivalent to that uncertainty and then repeating the simulation; comparison of the perturbed and unperturbed results provides an estimate of the effects of that density uncertainty. Similarly, by varying the concentration of crucial elements, the same approach may be used to estimate the impact of uncertainty regarding the material composition(128). The MC method is thus a quick and effective means of quantifying the effects of a given uncertainty in a physical system. This univariate sensitivity analysis may be generalised to account for error propagation and the overall uncertainty budget: uncertainty propagation analyses can be achieved by repeating the sensitivity analysis for all parameters within the physical model that are associated with a significant type B uncertainty. In fact, this procedure may be applied to assess the impacts of both the physical uncertainties (i.e. the measurement uncertainties) and the code-specific ones; the latter assessment might be achieved by re-running the simulation with different simulation parameters, for example choosing alternative cross-section databases or interaction models. Overall, the procedure leads to distributions of perturbed data around a mean, which may be interrogated by standard techniques to obtain a handle on the overall quality of the quoted result, and hence on the robustness of its predictions about the performance of the physical system being modelled, as sketched below.
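Schematically, such a perturbation analysis is a loop over re-runs of the transport calculation. In the sketch below, the call to the transport code is replaced by an invented analytic stand-in, since an actual MCNP or GEANT4 run cannot be embedded here; only the perturb-and-compare logic is the point.

```python
import numpy as np

def simulated_dose(density):
    """Stand-in for a full MC transport run (e.g. MCNP, GEANT4) that
    would be re-executed with the perturbed geometry; the analytic
    form used here is purely illustrative."""
    return 1.0 * np.exp(-0.12 * density)

rho, s_rho = 2.70, 0.05   # nominal density (g/cm^3) and its uncertainty

d0 = simulated_dose(rho)
d_plus = simulated_dose(rho + s_rho)
d_minus = simulated_dose(rho - s_rho)

# Symmetric-perturbation estimate of the dose uncertainty contributed
# by the density uncertainty alone (one term of the type B budget):
s_dose = 0.5 * abs(d_plus - d_minus)
print(d0, s_dose)
```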
Unknown type B 'systematic' uncertainties In addition to applying Monte Carlo techniques to estimate the effects of known type B uncertainties in a physical dosimetry system, such as uncertainties on the precise material composition of a dosemeter, it is also possible to use them to estimate the effects of unknown type B uncertainties. Unknown type B uncertainties include those factors that will affect the results recorded by a retrospective dosemeter but are a consequence of ignorance of the values of key parameters, rather than of any imprecision in the estimates of them; these could include uncertainties resulting from incomplete knowledge of the exposure conditions of the dosemeter, for instance, or missing data on the characteristics of the source term. An example might be a situation in which it is acknowledged that an energy-dependent correction to the response of a dosemeter needs to be made, but the energy of the radiation source to which it was exposed is not known. In such cases, it might be appropriate to use Monte Carlo techniques to model the responses of the dosemeter to a range of plausible sources with different energies, with the resulting variation in the results then used to provide a handle on the maximum error likely to be caused by ignorance of the true energy of the physical exposure. As an illustrative case study of the application of MC in handling this type of uncertainty, consider the use of mobile phones as emergency retrospective dosemeters. Although phones, or more specifically their display screens or resistors, possess many of the features considered advantageous for reliable fortuitous dosemeters, for them to be useful it is mandatory to relate the absorbed doses they record in an exposure to the concurrent doses deposited in their owners. This can be achieved by Monte Carlo modelling of the phones located at various positions on an anthropomorphic phantom, exposing the configuration to various fields, and then comparing the doses deposited in the phones and the phantom to generate a set of exposure- and location-dependent conversion factors; for some locations and exposures, phone and body absorbed doses may differ by a factor of ~20(129). In the real world, however, the precise location of the phone relative to the body during an unplanned exposure may not be known in hindsight, at least not to those performing the dosimetry; moreover, the precise exposure conditions, and the orientation of the individual relative to the source, are also unlikely to be recorded. This ignorance introduces a significant unknown type B uncertainty into the conversion of phone doses to body doses. It may, however, be managed by the use of mean conversion factors that are averaged over the datasets of all of the parameters that are unknown. For instance, if the exposure geometry and radiation source were known with some degree of confidence, this averaging might be over just the conversion factor datasets generated by the Monte Carlo model for the different phone positions; if only the source were known, the averaging would be over the datasets for all phone positions and all exposure geometries. Use of these mean conversion factors is then associated with conservative uncertainties, identified as the maximum over- and under-responses that are expected to arise from their application. These extrema may be taken from the envelope function of the conversion factors that were summed over in the averaging process, and quantify the worst-case errors in the dosimetry that might be anticipated in adopting this conversion process due to the unknown type B uncertainty in the exposure conditions, as sketched below. This is an important separate topic for further consideration.
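The logic of the mean-and-envelope approach can be sketched as follows; the conversion-factor matrix is invented for illustration and does not reproduce the values of reference (129):

```python
import numpy as np

# Hypothetical phone-to-body conversion factors from a set of MC runs,
# indexed by (phone position, exposure geometry); values are invented.
cf = np.array([[0.9, 1.4, 2.1],
               [1.1, 1.7, 2.6],
               [0.7, 1.2, 1.9]])

mean_cf = cf.mean()          # applied when position and geometry are unknown
lo, hi = cf.min(), cf.max()  # envelope of the factors summed over

# Worst-case fractional errors from applying the mean factor when the
# true factor lies at either extreme of the envelope:
print(mean_cf, mean_cf / hi - 1.0, mean_cf / lo - 1.0)
```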
BRIEF DISCUSSION AND CONCLUSIONS In this paper, the current state of the art in uncertainty analysis techniques for biological dosimetry (with the DCA as the most well-developed example) and physical retrospective dosimetry has been reviewed, with a particular emphasis on the potential for increased use of the more sophisticated Bayesian and Monte Carlo modelling methodologies to support uncertainty characterisation. To survey the current situation, a questionnaire was compiled and sent to all members of EURADOS WG10 on retrospective dosimetry. The questionnaire was designed to gather information on current experience related to uncertainty estimation, and also to assess possible needs in terms of training or courses. Of the 28 laboratories that responded, 72% currently use physical retrospective dosimetry techniques (EPR, TL and OSL), 19% biological techniques (micronuclei, dicentrics, FISH and γ-H2AX) and 8% other techniques (UV–vis spectroscopy, neutron activation, etc.). Fifty-six per cent of the responders use only the classical GUM approach, 13% the approach described in the IAEA manual for cytogenetics, about 18% Monte Carlo methods and about 9% a partial Bayesian approach; none of the responders used a formal Bayesian approach. It is interesting to note that 56% of responders use software to calculate the uncertainties (35% in-house-developed software, 41% commercial software and 24% freely available software). Sixty-four per cent of responders were satisfied with their uncertainty characterisation method but would be interested in improving it and/or in evaluating and comparing other approaches such as Bayesian or MC methods; in contrast, 24% were aware of weaknesses in their approach, or were concerned that their method may be incorrect or inappropriate. In general, then, the situation looks positive for the most well-established assays, with practical, accepted analysis techniques in place (chiefly based on GUM) that likely support good overall absorbed dose and uncertainty estimates. The exception is the more complex exposure scenarios(19), which are known to introduce additional uncertainties that should ideally be characterised on a case-by-case basis. The tools to do this are in place (for instance in the GUM(21)) but, in practice, are not often applied for biological dose estimation, and to date physical retrospective dosimetry techniques have only really been used for a set of 'standard' scenarios. Thus, there is still work to do. The next steps in the development of uncertainty analysis techniques across the field of retrospective dosimetry will be to look at standardisation of techniques, i.e. to evaluate which of the methods detailed in this work give the most accurate representation of uncertainty in the various different exposure scenarios, including the more difficult cases. In addition, as discussed, uncertainty analysis is a complex field in itself, and an 'expert' level of knowledge is thus required. For example, the application of GUM can be very complex and time consuming, especially when the different uncertainty terms are correlated (cf. the calculation of covariance terms). In many circumstances GUM also requires approximation, and the situation is further complicated when the mathematical model that relates the input data (measurand) to the output data (dose) is non-linear. The estimation of each of the uncertainty terms may require a large amount of work (including experimental work). In light of this, MC or Bayesian approaches would be particularly efficient for retrospective dosimetry applications. However, to date only a small number of EURADOS retrospective dosimetry group members use these methods, and further work is required. For example, for GUM the algebraic benefits of using the Gaussian approximation should be balanced against its potential divergence from the 'true' uncertainty of the observations.
However, the way in which the uncertainty is characterised at each stage of a Monte Carlo calculation should be appropriate to the observations and/or should allow for uncertainty in its own assignment(28), and the extra analytical power provided by the use of Bayesian priors requires that their presence and form be carefully justified in order to limit the potential for mistakes(26, 31, 134). In addition, it is worth noting that, besides retrospective dosimetry, EPR has long been used for metrology, for instance with alanine and, more recently, with tartrates and formates(135, 136). In this field, a comparatively large amount of effort has ensured that the uncertainty analysis supports highly accurate absorbed dose determinations(137). These principles are equally useful for retrospective dosimetry, even though the intrinsic uncertainties are larger and the accuracy is therefore normally lower. It will thus be very important for researchers active in these fields to ensure that new methods are disseminated and that new and existing colleagues access appropriate training. This is something that EURADOS WG10 will continue to support going forward.
ACKNOWLEDGEMENTS The authors would like to express their thanks to the EURADOS WG10 members who supported this work, in particular Octavia Monteiro Gil of the Centro de Ciências e Tecnologias Nucleares, Instituto Superior Técnico, Portugal, and Seongjae Jang of the Korea Institute of Radiological and Medical Sciences.
FUNDING This work was partly supported by the European Radiation Dosimetry Group (EURADOS; WG10) and by the National Institute for Health Research Health Protection Research Unit (NIHR HPRU) in Chemical & Radiation Threats & Hazards at Newcastle University in partnership with Public Health England (PHE). The views expressed are those of the author(s) and not necessarily those of the NIHR, the Department of Health or Public Health England.
REFERENCES
1. International Atomic Energy Agency. EPR-Biodosimetry series. Cytogenetic dosimetry: applications in preparedness for and response to radiation emergencies (2011).
2. Oestreicher, U. et al. RENEB intercomparisons applying the conventional dicentric chromosome assay (DCA). Int. J. Radiat. Biol. 93, 20–29 (2017).
3. Depudyt, J. et al. RENEB intercomparison exercises analyzing micronuclei (cytokinesis-block micronucleus assay). Int. J. Radiat. Biol. 93, 36–47 (2017).
4. Barquinero, J. F. et al. RENEB biodosimetry intercomparison analyzing translocations by FISH. Int. J. Radiat. Biol. 93, 30–35 (2017).
5. Terzoudi, G. I. et al. Dose assessment intercomparisons within the RENEB network using G0-lymphocyte prematurely condensed chromosomes (PCC assay). Int. J. Radiat. Biol. 93, 48–57 (2017).
6. Moquet, J. et al. The second gamma-H2AX assay inter-comparison exercise carried out in the framework of the European biodosimetry network (RENEB). Int. J. Radiat. Biol. 93, 58–64 (2017).
7. Trompier, F. et al. Overview of physical dosimetry methods for triage application integrated in the new European network RENEB. Int. J. Radiat. Biol. 93, 65–74 (2017).
8. Trompier, F. et al. EPR dosimetry for actual and suspected overexposures during radiotherapy treatments in Poland. Radiat. Meas. 42, 1025–1028 (2007).
9. Degteva, M. O. et al. Analysis of EPR and FISH studies of radiation doses in persons who lived in the upper reaches of the Techa River. Radiat. Environ. Biophys. 54, 433–444 (2015).
10. Trompier, F. et al. EPR retrospective dosimetry with fingernails. Health Phys. 106, 798–805 (2014).
11. Sholom, S. and McKeever, S. W. S. Emergency EPR dosimetry technique using vacuum-stored dry nails. Radiat. Meas. 88, 41–47 (2016).
12. Marciniak, A. and Ciesielski, B. EPR dosimetry in nails: a review. Appl. Spectrosc. Rev. 51, 73–92 (2016).
13. Fattibene, P. et al. EPR dosimetry intercomparison using smart phone touch screen glass. Radiat. Environ. Biophys. 53, 311–320 (2014).
14. Bailiff, I. K., Sholom, S. and McKeever, S. W. S. Retrospective and emergency dosimetry in response to radiological incidents and nuclear mass-casualty events: a review. Radiat. Meas. 94, 83–139 (2016).
15. Jaworska, A. et al. Operational guidance for radiation emergency response organisations in Europe for using biodosimetric tools developed in the EU MULTIBIODOSE project. Radiat. Prot. Dosim. 164, 165–169 (2015).
16. Kulka, U. et al. Realising the European network of biodosimetry: RENEB – status quo. Radiat. Prot. Dosim. 164, 42–45 (2015).
17. Kulka, U. et al. RENEB – running the European network of biological dosimetry and physical retrospective dosimetry. Int. J. Radiat. Biol. 93, 2–14 (2017).
18. Helton, J. C. Uncertainty and sensitivity analysis for models of complex systems. In: Lecture Notes in Computational Science and Engineering, volume 62 (Berlin, Heidelberg: Springer) (2008).
19. Vinnikov, V. A., Ainsbury, E. A., Lloyd, D. C., Maznyk, N. A. and Rothkamm, K. Difficult cases for chromosomal dosimetry: statistical considerations. Radiat. Meas. 46, 1004–1008 (2011).
20. Ainsbury, E. et al. Integration of new biological and physical retrospective dosimetry methods into EU emergency response plans: joint RENEB and EURADOS inter-laboratory comparisons. Int. J. Radiat. Biol. 93, 99–109 (2017).
21. ISO/IEC. Uncertainty of measurement: guide to the expression of uncertainty in measurement (GUM:1995). ISO/IEC Guide 98-3:2008 (2008).
22. International Organization for Standardization. ISO/IEC 17025: general requirements for the competence of testing and calibration laboratories (2005).
23. JCGM 100:2008. Evaluation of measurement data: guide to the expression of uncertainty in measurement (2008).
24. ISO 5725-1:1994. Accuracy (trueness and precision) of measurement methods and results. Part 1: general principles and definitions (1994).
25. ISO 13528:2015. Statistical methods for use in proficiency testing by interlaboratory comparison (2015).
26. Sivia, D. S. and Skilling, J. Data Analysis: A Bayesian Tutorial (Oxford: Oxford University Press) (2006).
27. Marrale, M. et al. Assessing the impact of copy number variants on miRNA genes in autism by Monte Carlo simulation. PLoS ONE 9, e90947 (2014).
28. Sivia, D. S., Burbidge, C., Roberts, R. G. and Bailey, R. M. A Bayesian approach to the evaluation of equivalent doses in sediment mixtures for luminescence dating. AIP Conf. Proc. 735, 305–311 (2004).
29. Thomsen, K. J., Murray, A. S. and Bøtter-Jensen, L. Sources of variability in OSL dose measurements using single grains of quartz. Radiat. Meas. 39, 47–61 (2005).
30. Higueras, M., Puig, P., Ainsbury, E. A., Vinnikov, V. A. and Rothkamm, K. A new Bayesian model applied to cytogenetic partial body irradiation estimation. Radiat. Prot. Dosim. 168, 330–336 (2016).
31. Ramsey, C. B. Deposition models for chronological records. Quat. Sci. Rev. 27, 42–60 (2008).
32. Sigurdson, A. J. et al. International study of factors affecting human chromosome translocations. Mutat. Res. Genet. Toxicol. Environ. Mutagen. 652, 112–121 (2008).
33. McNamee, J. P., Flegal, F. N., Greene, H. B., Marro, L. and Wilkins, R. C. Validation of the cytokinesis-block micronucleus (CBMN) assay for use as a triage biological dosimetry tool. Radiat. Prot. Dosim. 135, 232–242 (2009).
34. Flegal, F. N., Devantier, Y., McNamee, J. P. and Wilkins, R. C. Quickscan dicentric chromosome analysis for radiation biodosimetry. Health Phys. 98, 276–281 (2010).
35. ISO 19238:2014. Radiological protection: performance criteria for service laboratories performing biological dosimetry by cytogenetics (2014).
36. ISO 21243:2008. Radiation protection: performance criteria for laboratories performing cytogenetic triage for assessment of mass casualties in radiological or nuclear emergencies. General principles and application to dicentric assay (2008).
37. Vral, A., Fenech, M. and Thierens, H. The micronucleus assay as a biological dosimeter of in vivo ionising radiation exposure. Mutagenesis 26, 11–17 (2011).
38. ISO 17099:2014. Radiological protection: performance criteria for laboratories using the cytokinesis-block micronucleus (CBMN) assay in peripheral blood lymphocytes for biological dosimetry (2014).
39. Higueras, M. et al. A new inverse regression model applied to radiation biodosimetry. Proc. R. Soc. A 471, 20140 (2015).
40. Ainsbury, E. A. and Barquinero, J. F. Biodosimetric tools for a fast triage of people accidentally exposed to ionizing radiation. Statistical and computational aspects. Ann. Ist. Super. Sanità 45, 307–312 (2009).
41. Fenech, M. The lymphocyte cytokinesis-block micronucleus cytome assay and its application in radiation biodosimetry. Health Phys. 98, 234–243 (2010).
42. Pernot, E. et al. Ionizing radiation biomarkers for potential use in epidemiological studies. Mutat. Res. Rev. Mutat. Res. 751, 258–286 (2012).
43. Ainsbury, E. A. et al. What radiation dose does the FISH translocation assay measure in cases of incorporated radionuclides for the Southern Urals populations? Radiat. Prot. Dosim. 159, 26–33 (2014).
44. Barnard, S. et al. The first gamma-H2AX biodosimetry intercomparison exercise of the developing European biodosimetry network RENEB. Radiat. Prot. Dosim. 164, 265–270 (2015).
45. Rothkamm, K., Krüger, I., Thompson, L. H. and Löbrich, M. Pathways of DNA double-strand break repair during the mammalian cell cycle. Mol. Cell. Biol. 23, 5706–5715 (2003).
46. Rothkamm, K. and Horn, S. Gamma-H2AX as protein biomarker for radiation exposure. Ann. Ist. Super. Sanità 45, 265–271 (2009).
47. Horn, S. and Rothkamm, K. Candidate protein biomarkers as rapid indicators of radiation exposure. Radiat. Meas. 46, 903–906 (2011).
48. Valdiglesias, V., Laffon, B., Pásaro, E. and Méndez, J. Evaluation of okadaic acid-induced genotoxicity in human cells using the micronucleus test and γH2AX analysis. J. Toxicol. Environ. Health A 74, 980–992 (2011).
49. Wojcik, A. et al. Multidisciplinary biodosimetric tools for a large-scale radiological emergency: the MULTIBIODOSE project. Radiat. Emerg. Med. 3, 19–23 (2014).
50. Rübe, C. E. et al. DNA repair in the context of chromatin: new molecular insights by the nanoscale detection of DNA repair complexes using transmission electron microscopy. DNA Repair (Amst.) 10, 427–437 (2011).
51. Löbrich, M. et al. In vivo formation and repair of DNA double-strand breaks after computed tomography examinations. Proc. Natl. Acad. Sci. USA 102, 8984–8989 (2005).
52. Rothkamm, K., Balroop, S., Shekhdar, J., Fernie, P. and Goh, V. Leukocyte DNA damage after multi-detector row CT: a quantitative biomarker of low-level radiation exposure. Radiology 242, 244–251 (2007).
53. Sedelnikova, O. A. et al. Delayed kinetics of DNA double-strand break processing in normal and pathological aging. Aging Cell 7, 89–100 (2008).
54. Joyce, E. F. et al. Drosophila ATM and ATR have distinct activities in the regulation of meiotic DNA damage and repair. J. Cell Biol. 195, 359–367 (2011).
55. Ricceri, F. et al. Involvement of MRE11A and XPA gene polymorphisms in the modulation of DNA double-strand break repair activity: a genotype–phenotype correlation study. DNA Repair 10, 1044–1050 (2011).
56. Valdiglesias, V., Giunta, S., Fenech, M., Neri, M. and Bonassi, S. γH2AX as a marker of DNA double strand breaks and genomic instability in human population studies. Mutat. Res. Rev. Mutat. Res. 753, 24–40 (2013).
57. Horn, S., Barnard, S. and Rothkamm, K. Gamma-H2AX-based dose estimation for whole and partial body radiation exposure. PLoS ONE 6, e25113 (2011).
58. Abragam, A. and Bleaney, B. Electron Paramagnetic Resonance of Transition Ions (Oxford: Clarendon Press) (1970).
59. Weil, J. A. and Bolton, J. Electron Paramagnetic Resonance: Elementary Theory and Practical Applications, second edn. (Hoboken, NJ: John Wiley & Sons) (2006).
60. Poole, C. P. Electron Spin Resonance: A Comprehensive Treatise on Experimental Techniques (Mineola, NY: Dover Publications) (1997).
61. Trompier, F. et al. Radiation-induced signals analysed by EPR spectrometry applied to fortuitous dosimetry. Ann. Ist. Super. Sanità 45, 287–296 (2009).
62. Israelsson, A., Gustafsson, H. and Lund, E. Dose response of xylitol and sorbitol for EPR retrospective dosimetry with applications to chewing gum. Radiat. Prot. Dosim. 154, 133–141 (2013).
63. Trompier, F., Bassinet, C. and Clairand, I. Radiation accident dosimetry on plastics by EPR spectrometry. Health Phys. 98, 388–394 (2010).
64. Sholom, S. and Chumak, V. EPR emergency dosimetry with plastic components of personal goods. Health Phys. 98, 395–399 (2010).
65. Kamenopoulou, V., Barthe, J., Hickman, C. and Portal, G. Accidental gamma irradiation dosimetry using clothing. Radiat. Prot. Dosim. 17, 185–188 (1986).
66. Barthe, J., Kamenopoulou, V., Cattoire, B. and Portal, G. Dose evaluation from textile fibers: a post-determination of initial ESR signal. Int. J. Radiat. Appl. Instrum. A Appl. Radiat. Isot. 40, 1029–1033 (1989).
67. McKeever, S. W. S. et al. Numerical solutions to the rate equations governing the simultaneous release of electrons and holes during thermoluminescence and isothermal decay. Phys. Rev. B 32, 3835–3843 (1985).
68. Woda, C. et al. Radiation-induced damage analysed by luminescence methods in retrospective dosimetry and emergency response. Ann. Ist. Super. Sanità 45, 297–306 (2009).
69. Bassinet, C. et al. Retrospective radiation dosimetry using OSL of electronic components: results of an inter-laboratory comparison. Radiat. Meas. 71, 475–479 (2014).
70. Vinnikov, V. A., Ainsbury, E. A., Maznyk, N. A., Lloyd, D. C. and Rothkamm, K. Limitations associated with analysis of cytogenetic data for biological dosimetry. Radiat. Res. 174, 403–414 (2010).
71. Savage, J. R. K. and Papworth, D. G. Constructing a 2B calibration curve for retrospective dose reconstruction. Radiat. Prot. Dosim. 88, 69–76 (2000).
72. Merkle, W. Statistical methods in regression and calibration analysis of chromosome aberration data. Radiat. Environ. Biophys. 21, 217–233 (1983).
73. Szłuińska, M., Edwards, A. A. and Lloyd, D. C. Statistical Methods for Biological Dosimetry (Health Protection Agency, Radiation Protection Division) (2005).
74. Ainsbury, E. A., Vinnikov, V. A., Maznyk, N. A., Lloyd, D. C. and Rothkamm, K. A comparison of six statistical distributions for analysis of chromosome aberration data for radiation biodosimetry. Radiat. Prot. Dosim. 155, 253–267 (2013).
75. Ainsbury, E. A. et al. Uncertainty of fast biological radiation dose assessment for emergency response scenarios. Int. J. Radiat. Biol. 93, 127–135 (2017).
76. Schenker, N. and Gentleman, J. F. On judging the significance of differences by examining the overlap between confidence intervals. Am. Stat. (2001). doi:10.1198/000313001317097960.
77. Austin, P. and Hux, J. A brief note on overlapping confidence intervals. J. Vasc. Surg. 36, 194–195 (2002).
78. Deperas, J. et al. CABAS: a freely available PC program for fitting calibration curves in chromosome aberration dosimetry. Radiat. Prot. Dosim. 124, 115–123 (2007).
79. Ainsbury, E. A. and Lloyd, D. C. Dose estimation software for radiation biodosimetry. Health Phys. 98, 290–295 (2010).
80. Sasaki, M. S. Chromosomal biodosimetry by unfolding a mixed Poisson distribution: a generalized model. Int. J. Radiat. Biol. 79, 83–97 (2003).
81. Ainsbury, E. A. et al. CytoBayesJ: software tools for Bayesian analysis of cytogenetic radiation dosimetry data. Mutat. Res. Genet. Toxicol. Environ. Mutagen. 756, 184–191 (2013).
82. Brame, R. S. and Groer, P. G. Bayesian analysis of overdispersed chromosome aberration data with the negative binomial model. Radiat. Prot. Dosim. 102, 115–119 (2002).
83. DiGiorgio, M. and Zaretzky, A. Biological dosimetry: a Bayesian approach for presenting uncertainty on biological dose estimates. Annals of 'II Encuentro de Docentes e Investigadores de Estadística en Psicología' (2011).
84. Groer, P. G. and de Pereira, C. A. B. Probability and Bayesian Statistics (US: Springer) pp. 225–232 (1987).
85. Edwards, A. A. and Lloyd, D. C. The Early Effects of Radiation on DNA (Berlin, Heidelberg: Springer) pp. 385–396 (1991).
86. Moriña, D., Higueras, M., Puig, P., Ainsbury, E. A. and Rothkamm, K. radir package: an R implementation for cytogenetic biodosimetry dose estimation. J. Radiol. Prot. 35, 557–569 (2015).
87. IAEA. Use of electron paramagnetic resonance dosimetry with tooth enamel for retrospective dose assessment (Vienna: IAEA) (2002).
88. Fattibene, P., La Civita, S., De Coste, V. and Onori, S. Analysis of sources of uncertainty of tooth enamel EPR signal amplitude. Radiat. Meas. 43, 827–830 (2008).
89. Nagy, V. Accuracy considerations in EPR dosimetry. Appl. Radiat. Isot. 52, 1039–1050 (2000).
90. ISO/ASTM 51607:2013. Practice for use of the alanine-EPR dosimetry system (2013).
91. Fattibene, P. and Callens, F. EPR dosimetry with tooth enamel: a review. Appl. Radiat. Isot. 68, 2033–2116 (2010).
92. Fattibene, P., Duckworth, T. L. and Desrosiers, M. F. Critical evaluation of the sugar-EPR dosimetry system. Appl. Radiat. Isot. 47, 1375–1379 (1996).
93. Wilcox, D. E. et al. Dosimetry based on EPR spectral analysis of fingernail clippings. Health Phys. 98, 309–317 (2010).
94. Anton, M. et al. Uncertainties in alanine/ESR dosimetry at the Physikalisch-Technische Bundesanstalt. Phys. Med. Biol. 51, 5419–5440 (2006).
95. Antonovic, L., Gustafsson, H., Carlsson, G. A. and Carlsson Tedgren, Å. Evaluation of a lithium formate EPR dosimetry system for dose measurements around 192Ir brachytherapy sources. Med. Phys. 36, 2236–2247 (2009).
96. Wieser, A. et al. Assessment of performance parameters for EPR dosimetry with tooth enamel. Radiat. Meas. 43, 731–736 (2008).
97. Currie, L. A. Nomenclature in evaluation of analytical methods including detection and quantification capabilities (IUPAC Recommendations 1995). Anal. Chim. Acta 391, 105–126 (1999).
98. Currie, L. A. Detection and quantification limits: basic concepts, international harmonization, and outstanding ('low-level') issues. Appl. Radiat. Isot. 61, 145–149 (2004).
99. Fattibene, P. et al. The 4th international comparison on EPR dosimetry with tooth enamel, part 1: report on the results. Radiat. Meas. 46, 765–771 (2011).
100. Sholom, S., DeWitt, R., Simon, S. L., Bouville, A. and McKeever, S. W. S. Emergency dose estimation using optically stimulated luminescence from human tooth enamel. Radiat. Meas. 46, 778–782 (2011).
101. Bernhardsson, C., Christiansson, M., Mattsson, S. and Rääf, C. L. Household salt as a retrospective dosemeter using optically stimulated luminescence. Radiat. Environ. Biophys. 48, 21–28 (2009).
102. Sholom, S. and McKeever, S. W. S. Integrated circuits from mobile phones as possible emergency OSL/TL dosimeters. Radiat. Prot. Dosim. 170, 398–401 (2015). doi:10.1093/rpd/ncv446.
103. Discher, M. and Woda, C. Thermoluminescence of glass display from mobile phones for retrospective and accident dosimetry. Radiat. Meas. 53–54, 12–21 (2013).
104. Discher, M., Woda, C. and Fiedler, I. Improvement of dose determination using glass display of mobile phones for accident dosimetry. Radiat. Meas. 56, 240–243 (2013).
105. Christiansson, M., Bernhardsson, C., Geber-Bergstrand, T., Mattsson, S. and Rääf, C. L. Household salt for retrospective dose assessments using OSL: signal integrity and its dependence on containment, sample collection, and signal readout. Radiat. Environ. Biophys. 53, 559–569 (2014).
106. Woda, C. et al. Luminescence dosimetry in a contaminated settlement of the Techa River valley, Southern Urals, Russia. Radiat. Meas. 46, 277–285 (2011).
107. Woda, C. et al. Evaluation of external exposures of the population of Ozyorsk, Russia, with luminescence measurements of bricks. Radiat. Environ. Biophys. 48, 405–417 (2009).
108. Bailiff, I. K. et al. The application of retrospective luminescence dosimetry in areas affected by fallout from the Semipalatinsk nuclear test site: an evaluation of potential. Health Phys. 87, 625–641 (2004).
109. Galbraith, R. F. A further note on the variance of a background-corrected OSL count. Anc. TL 32, 1–3 (2014).
110. Duller, G. A. T. The Analyst software package for luminescence data: overview and recent improvements. Anc. TL 33, 35–42 (2015).
111. SigmaPlot (2017). Available at: http://www.sigmaplot.co.uk/products/peakfit/peakfit.php (accessed 22 February 2017).
112. Origin (2017). Available at: http://www.originlab.com/index.aspx?go=Solutions/Applications/Spectroscopy (accessed 22 February 2017).
113. Burbidge, C. I. A broadly applicable function for describing luminescence dose response. J. Appl. Phys. 118, 044904 (2015).
114. Analytical Methods Committee. Robust statistics: a method of coping with outliers. AMC Technical Brief No. 2 (2001).
115. Galbraith, R. F., Roberts, R. G., Laslett, G. M., Yoshida, H. and Olley, J. M. Optical dating of single and multiple grains of quartz from Jinmium rock shelter, northern Australia: Part I, experimental design and statistical models. Archaeometry 41, 339–364 (1999).
116. Couto, P. R. G., Damasceno, J. C. and de Oliveira, S. P. Monte Carlo simulations applied to uncertainty in measurement. In: Theory and Applications of Monte Carlo Simulations (Chan, W. K. V., Ed.) (InTech) (2013). doi:10.5772/53014. Available at: https://www.intechopen.com/books/theory-and-applications-of-monte-carlo-simulations/monte-carlo-simulations-applied-to-uncertainty-in-measurement.
117. Ángeles Herrador, M. and González, A. G. Evaluation of measurement uncertainty in analytical assays by means of Monte-Carlo simulation. Talanta 64, 415–422 (2004).
118. Lepek, A. A computer program for a general case evaluation of the expanded uncertainty. Accredit. Qual. Assur. 8, 296–299 (2003).
119. Chew, G. and Walczyk, T. A Monte Carlo approach for estimating measurement uncertainty using standard spreadsheet software. Anal. Bioanal. Chem. 402, 2463–2469 (2012).
120. Farrance, I. and Frenkel, R. Uncertainty in measurement: a review of Monte Carlo simulation using Microsoft Excel for the calculation of uncertainties through functional relationships, including uncertainties in empirically derived constants. Clin. Biochem. Rev. 35, 37–61 (2014).
121. Kawrakow, I. and Rogers, D. The EGSnrc code system: Monte Carlo simulation of electron and photon transport. NRCC Report PIRS-701 (2000).
122. Ferrari, A., Sala, P. R., Fassò, A. and Ranft, J. FLUKA: a multi-particle transport code (2005).
123. Agostinelli, S. et al. Geant4: a simulation toolkit. Nucl. Instrum. Methods Phys. Res. A 506, 250–303 (2003).
124. Pelowitz, D. B., Ed. MCNP6 user's manual, version 1.0. LA-CP-13-00634 (2013).
125. Salvat, F., Fernández-Varea, J. M. and Sempau, J. PENELOPE-2006: a code system for Monte Carlo simulation of electron and photon transport. Workshop proceedings, Barcelona, Spain, 4–7 July 2006 (OECD Nuclear Energy Agency) (2006).
126. Niita, K., Matsuda, N., Iwamoto, Y., Iwase, H. and Sato, T. PHITS: Particle and Heavy Ion Transport code System, version 2.23. JAEA-Data/Code (2010).
127. Discher, M. Lumineszenzuntersuchungen an körpernah getragenen Gegenständen für die Notfalldosimetrie [Luminescence investigations of objects worn close to the body for emergency dosimetry]. Thesis, Technische Universität München (2015).
128. Discher, M., Hiller, M. and Woda, C. MCNP simulations of a glass display used in a mobile phone as an accident dosimeter. Radiat. Meas. 75, 21–28 (2015).
129. Eakins, J. S. and Kouroukla, E. Luminescence-based retrospective dosimetry using Al2O3 from mobile phones: a simulation approach to determine the effects of position. J. Radiol. Prot. 35, 343–381 (2015).
130. Gómez-Ros, J. M., Pröhl, G., Ulanovsky, A. and Lis, M. Uncertainties of internal dose assessment for animals and plants due to non-homogeneously distributed radionuclides. J. Environ. Radioact. 99, 1449–1455 (2008).
131. Hervé, M. L., Clairand, I., Trompier, F., Tikunov, D. and Bottollier-Depois, J. F. Relation between organ and whole body doses and local doses measured by ESR for standard and realistic neutron and photon external overexposures. Radiat. Prot. Dosim. 125, 355–360 (2007).
132. Kouroukla, E. Luminescence dosimetry with ceramic materials for application to radiological emergencies and other incidents (2015).
133. Ulanovsky, A., Pröhl, G. and Gómez-Ros, J. M. Methods for calculating dose conversion coefficients for terrestrial and aquatic biota. J. Environ. Radioact. 99, 1440–1448 (2008).
134. Chipman, H., George, E. and McCulloch, R. The practical implementation of Bayesian model selection. IMS Lecture Notes – Monograph Series 38 (2001).
135. Lund, E. et al. Formates and dithionates: sensitive EPR-dosimeter materials for radiation therapy. Appl. Radiat. Isot. 62, 317–324 (2005).
136. Marrale, M. et al. Neutron ESR dosimetry through ammonium tartrate with low Gd content. Radiat. Prot. Dosim. 159, 233–236 (2014).
137. Bergstrand, E. S., Hole, E. O. and Sagstuen, E. A simple method for estimating dose uncertainty in ESR/alanine dosimetry. Appl. Radiat. Isot. 49, 845–854 (1998).
© Crown copyright 2017. This article contains public sector information licensed under the Open Government Licence v3.0 (http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/).
