Williams et al. discuss Chandra and FUSE absorption lines towards Mkn421 and argue that they have detected the IGM at z~0.

Finoguenov et al analyze an XMM mosaic of A3266 and find an extended region of low entropy gas which they interpret as stripped from an infalling subcluster core.

Wolfe and Melia argue that continuous stochastic acceleration cannot explain the hard X-ray emission from Coma because the energy gained by the particles is redistributed over the whole plasma on a fast timescale.

Crawford et al. present optical and Chandra observations of the filament in A1795.

Willis et al. claim detections of polarization in GRBs based on detailed modelling of Compton scattering of the prompt emission off the Earth's atmosphere.

Berger et al. present the results of optical, near-IR and radio follow-ups of Swift bursts. Up to a third of the bursts may be dark.

Subramanian et al. discuss turbulence and magnetic fields in the ICM.

Brogan et al. identify a HESS source with a source of non-thermal radio and X-ray emission due to a young shell-type SNR.

## Tuesday, May 31, 2005

## Monday, May 23, 2005

### posm model

Chris Shrader provided an improved version of the posm model which performs a trapezoid integration over each energy bin. Available for v11 as bug fix 11.3.2e.
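The idea of trapezoid integration over an energy bin can be sketched as follows (a hypothetical Python illustration, not the actual posm Fortran code; the function and bin-edge names are invented):

```python
import numpy as np

def integrate_model_over_bins(model, bin_edges):
    """Integrate model(E) over each energy bin with the trapezoid rule.

    model     : callable returning flux density at energy E
    bin_edges : array of N+1 bin boundaries (keV)
    Returns an array of N bin-integrated fluxes.
    """
    lo, hi = bin_edges[:-1], bin_edges[1:]
    # Trapezoid rule: average of the endpoint values times the bin width.
    return 0.5 * (model(lo) + model(hi)) * (hi - lo)

# Example: a power law E^-2 integrated over four 1 keV bins.
edges = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
flux = integrate_model_over_bins(lambda e: e**-2.0, edges)
```

This is more accurate than evaluating the model once per bin center when the model varies quickly across a bin.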

Keywords: heasoft, xspec

### kbmodels

Terry Gaetz spotted a couple of errors in sizing of the param array in kbmodels.f. These generated compiler warnings but probably had no effect. Fixed them.

Keywords: heasoft

## Saturday, May 21, 2005

### GravStat 2005: Bose

Networks of broadband detectors.

A single broadband detector can be thought of as a network of narrow band detectors.

Mergers have 3 phases - s/n may be small in each phase but combined may exceed threshold. How can we be sure we are associating 3 phases correctly - consistency checks.

Networking broadband detectors -> sky positions, stochastic GW background. Computationally very intensive.

Detectors have hugely different sensitivities - network dimensionality reduced for weakest detectable sources (only most sensitive detectors are likely to see them).

Network of aligned detectors - same signal in all detectors (to within time delay), can trigger on one detector and search on others, improved background determination. But blind spots and cannot find polarization of transients.

Unaligned detectors - can unravel more parameters, signal coverage better.

With more detectors there are more ways of analyzing the data - how do we decide which are best ?

What happens if different detectors have different noise characteristics - gaussian vs non-gaussian ?

In a network of detectors do you set confidence intervals for each unique combination ?

With large numbers of detectors computational costs are high - tricks to control this mutilate the normality of the initial statistic pdf.

How stringent are requirements on calibration accuracies across the network ?

How and when should we apply astrophysical figures-of-merit, vetoes ? eg suppose the derived position is where the network has low sensitivity ?

### GravStat 2005: Prodi

Time-coincidence search using bar detectors (IGEC). Template search - matched filters optimized for short and rare transients.

Many trials at once (9 pairs + 7 triples + 2 4-folds). Many target thresholds. Directional/non-directional searches.

Control of false dismissal probability - balance between detection efficiency and background fluctuations.

Background noise estimation - 10^3 time lags for pairs, 10^4-10^5 time lags for triples. GOF tests with background model.

Blind analysis - tuning of procedures before looking at data (no playground).

IGEC-2 has agreed on blind data exchange (secret time lag).

Resample by shifting event time series to get statistics of accidental claims.
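The time-shift resampling idea can be sketched like this (a toy illustration, not IGEC code; the event trains, coincidence window, and shift values are all invented):

```python
import numpy as np

rng = np.random.default_rng(42)

def count_coincidences(t1, t2, window):
    """Count events in t1 with at least one t2 event within +/- window."""
    t2 = np.sort(t2)
    idx = np.searchsorted(t2, t1)
    n = 0
    for i, t in zip(idx, t1):
        near = []
        if i > 0:
            near.append(abs(t - t2[i - 1]))
        if i < len(t2):
            near.append(abs(t - t2[i]))
        if near and min(near) <= window:
            n += 1
    return n

# Two uncorrelated Poisson event trains over a day-long stretch.
T = 86400.0
t1 = np.sort(rng.uniform(0, T, 100))
t2 = np.sort(rng.uniform(0, T, 100))

# Accidental-coincidence statistics from many unphysical time shifts:
# shifting one train destroys any real coincidences, so the shifted
# counts sample the background of chance alignments.
shifts = np.arange(1, 1001) * 50.0
bg = [count_coincidences(t1, (t2 + s) % T, window=0.1) for s in shifts]
observed = count_coincidences(t1, t2, window=0.1)
```

The observed count at zero lag is then compared with the distribution of the shifted counts.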

Realized that technique good for upper limits but optimized for discoveries - false positives hide true discoveries. Need to decrease false alarm probability (type I error).

Want to build confidence belt based on false alarm probability for each value of unknown parameter. Use Benjamini & Hochberg FDR procedure.
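A minimal sketch of the Benjamini & Hochberg step-up procedure (a generic implementation, not any collaboration's code):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.1):
    """Return a boolean mask of rejected hypotheses at FDR level alpha.

    Reject the k smallest p-values, where k is the largest index with
    p_(k) <= (k/m) * alpha (step-up rule).
    """
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * (np.arange(1, m + 1) / m)
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[:k + 1]] = True
    return reject

# Example: 8 trigger p-values controlled at an FDR of 5%.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]
rejected = benjamini_hochberg(pvals, alpha=0.05)
```

Unlike a fixed per-test threshold, the rejection threshold adapts to how many small p-values there are, controlling the expected fraction of false discoveries among the rejections.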

Open questions:

Check fluctuation of random variable FDR with respect to mean

Are we required to report a rejection of the null hypothesis ? - this is claim for an excess correlation in the observations at the true time, not taking into account measured noise background at different time lags - it might not be GW.

## Friday, May 20, 2005

### GravStat 2005: Summerscales

Max. entropy deconvolution of detector response.

Then as a test for supernova detection cross-correlated with all waveforms in Ott et al. catalog. Showed able to recover physics.

Simultaneous analysis of L and H time series to give one deconvolved "input" waveform.

### GravStat 2005: Shawhan

GW transients: upper limits and discoveries. Waveforms may be known or unknown. Known depends on one or more physical parameters. Physically identical signals may have different source origins. Given source classes can produce different signals.

Selection of event candidates

Identify regions of interest - filter to produce a trigger. Use a low threshold then apply additional tests - waveform consistency, noise stationarity, coincidence among detectors, cross-correlation of datastreams.

Summarize the outcome - define a statistic which has some power to distinguish S+B from B eg events with S/N > some threshold.

Cleanest evidence for discovery is a p-value < some threshold. A frequentist confidence interval is a set of models for which the observed outcome was "likely". Maximum S/N ratio test (Brady et al. CQG 21, S1775, 2004).

Issues:

Do we really need to calculate a statistic - can we use all the information available.

Is there another way to talk about discovery other than the p-value.

What is appropriate background rate ?

Why do a F analysis, a B analysis,... ?

### GravStat 2005: Loredo (again)

Spatial coincidence (with Wasserman). Consider 2 objects - frequentist nearest neighbour test vs. Bayesian coincidence assessment. Compare models of no repetition and repetition. Odds favouring repetition = 4 pi Int dn L_1(n)L_2(n) ~ (2/sigma_12) exp(-theta_12^2/2sigma_12^2), sigma_12^2 = sigma_1^2+sigma_2^2. Generalizing to more events is tough - number of pairs to calculate goes as N^4.
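The quoted approximation can be evaluated directly (a sketch using the expression exactly as written in these notes; angles and uncertainties in radians, numbers invented):

```python
import numpy as np

def repetition_odds(theta_12, sigma_1, sigma_2):
    """Odds favouring repetition (a common source direction) for two events.

    theta_12         : angular separation of the two events (radians)
    sigma_1, sigma_2 : positional uncertainties of each event (radians)
    Implements the approximation as quoted in the note:
        O ~ (2 / sigma_12) * exp(-theta_12**2 / (2 * sigma_12**2))
    with sigma_12**2 = sigma_1**2 + sigma_2**2.
    """
    s12_sq = sigma_1**2 + sigma_2**2
    return (2.0 / np.sqrt(s12_sq)) * np.exp(-theta_12**2 / (2.0 * s12_sq))

# Two well-localized, nearly coincident events give large odds;
# a separation of many error radii drives the odds to zero.
odds_close = repetition_odds(0.0, 0.01, 0.01)
odds_far = repetition_odds(0.5, 0.01, 0.01)
```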

Bayesian Adaptive Exploration (with Chernoff). Sequential analysis.

Observation -> Inference -> Design

Stopping problem: binomial (fixed # of obs) and negative binomial (fixed # of particular type of object) give different frequentist test results. A Bayesian comparison does not depend on the stopping condition.
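The stopping problem can be made concrete with a standard textbook example (my own illustration, not from the talk): 9 successes observed together with 3 failures, analysed under the two stopping rules.

```python
from math import comb

# Data: 9 successes and 3 failures. Test H0: success probability p0 = 0.5.

# Stopping rule A - fixed number of trials, n = 12 (binomial):
# one-sided p-value P(X >= 9 | n = 12, p0).
p_binom = sum(comb(12, k) for k in range(9, 13)) / 2**12

# Stopping rule B - stop at the 3rd failure (negative binomial):
# P(k successes before the 3rd failure) = C(k+2, 2) * p0**k * (1 - p0)**3.
p_negbinom = 1.0 - sum(comb(k + 2, 2) * 0.5**(k + 3) for k in range(9))

# Same data and proportional likelihoods, yet p_binom ~ 0.073 while
# p_negbinom ~ 0.033: one crosses a 5% threshold, the other does not.
# A Bayesian comparison uses only the likelihood, so it is identical
# under both stopping rules.
```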

Maximum entropy sampling: learn the most by observing where we know the least.

### GravStat 2005: Rawlins

Frequentist interpretation. Feldman-Cousins unified scheme for confidence intervals. A 90% CL interval is guaranteed to include the true parameter value in 90% of hypothetical experiments (note F-C technique actually overcovers most of the time).

Intervals (each correct) will look different depending on background, ordering scheme, and probability distribution assumed - so need to fix these before the experiment.

But what happens if you come up with a new background component after performing the experiment ?

When a first GW is discovered it will be investigated in detail to check for alternative explanations. We can't routinely examine the background at this level so we will be potentially vetoing based on information not originally considered.

Knowing in advance the veto procedure before looking at the final events helps a lot. If changes after the fact are unavoidable then understanding the process as much as possible helps estimate the errors.

What should be done ?

1. Open box and take what's inside - statistically pure but risky

2. Stop worrying and do the best you can. Establish "fire drill" procedures

3. Conduct two separate searches : one for upper limits and one for discovery. For latter limit yourself to best behaved data and post hoc veto.

If we publish 90% confidence limits why aren't there more claimed detections ?

### GravStat 2005: Woan

Search for quasi-sinusoidal signal from triaxial NS. Large number of parameters, some of large dimension. However signal is believed to be coherent on long timescales and may be phase-locked to the radio pulse.

If searching for a radio pulsar only need narrow bandwidth so can be in a clear region of the frequency spectrum. If frequency is not known then problem is that instrument has many lines that can be picked up by matched filters. However instrumental lines will not exhibit correct doppler/antenna pattern modulations.

Time-domain searches targeted MCMC over narrow parameter space. Frequency domain searches for isolated lines. Incoherent searches - Hough transform, stack-slide, powerflux.

Incoherent techniques are worth doing because coherent searches are over such a large parameter space that a source must be strong to be statistically significant.

LISA confusion - Umstatter et al 2005 blind detection and parameter estimation of 100 unknown sinusoids in noise of unknown variance. Frequentist approaches are fast but clumsy when making a final statement on sensitivity - can we recast in B terms ? Is it sensible to use worst-case scenarios for upper limits ?

Given a posterior how do we make a statement about whether a detection has been made ? What is the best credible interval to use ?

### GravStat 2005: Romano

Search for stochastic GWs by cross-correlating 2 GW detectors

Y = Int df s_1(f) Q(f) s_2(f)

where Q(f) is the filter. Look for Omega ~ f^alpha. Q normalized to give <Y> = Omega_0.
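A toy version of the statistic Y can be sketched as follows (an illustration only - the detector strains, sampling rate, and flat filter are invented, and a real search applies the calibration, windowing, and PSD steps the notes list):

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_correlation_Y(s1, s2, Q, dt):
    """Frequency-domain cross-correlation Y = Int df s1*(f) Q(f) s2(f).

    s1, s2 : time series from the two detectors
    Q      : filter evaluated on the rfft frequency grid
    dt     : sample spacing (s)
    """
    f1 = np.fft.rfft(s1) * dt          # approximate continuous Fourier transform
    f2 = np.fft.rfft(s2) * dt
    df = 1.0 / (len(s1) * dt)
    return np.real(np.sum(np.conj(f1) * Q * f2) * df)

# Toy example: a common 'stochastic' signal buried in independent noise.
n, dt = 4096, 1.0 / 1024
common = rng.normal(0, 1, n)
s1 = common + rng.normal(0, 5, n)
s2 = common + rng.normal(0, 5, n)
freqs = np.fft.rfftfreq(n, dt)
Q = np.ones_like(freqs)                # flat filter (alpha = 0 toy case)
Y = cross_correlation_Y(s1, s2, Q, dt)
```

The independent noise averages away in the cross-correlation while the common component does not, which is why two detectors are needed for a stochastic search.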

Split data into short segments and calculate Y. Downsample 16384 -> 1024 Hz. High-pass filter and window to reduce leakage. Calibrate in f domain. Sliding PSD estimation to eliminate bias. Remove coherent lines (nx60Hz, nx16Hz). Optimal filter for different spectral indices. H1-H2 shows broadband coherence due to acoustic coupling. B analysis with prior 1/Omega_max where Omega_max is from previous analysis.

Marginalise over theoretical variances - estimating for observed Y. S3 H1-L1 fits theoretical variance very well. Doing search in S4 for anisotropic stochastic GW - eg center of Galaxy, Virgo cluster.

Are there more general relationships for more detectors ?

### GravStat 2005: Roundtable

Giovanni Prodi introductory remarks

1. Blind vs. explanatory analysis - use of priors, exploit F & B methods.

2. Noise background estimation - confidence required ? statistical uncertainties on background ? discriminate correlated noise from target gw signals.

3. Can we rely on statistics alone for gw searches ? what is added value of a posteriori physical information ? role of systematic uncertainties ?

4. Multiple observations - comparison & synthesis of different observations, control False Discovery Rate in multiple trials.

5. Validation of results of a network or a detector - independent analysis procedures, injections of s/w signals.

Comments by Daisuke Tatsumi

On 1) blind analysis ideal for 1st detection. Need full detector noise simulation (including burst noise).

On 2) most important is systematics

On 5) joint search program looks good.

Questions from Peter Shawhan

How much should astrophysical assumptions feed into Bayesian analysis of data ?

How much should assumptions on rates influence design of analysis procedures ?

General discussion

Should posteriors from previous experiments be used as priors ? This is done for pulsars. For LIGO the sensitivities are improving so rapidly that posteriors for previous runs are effectively flat.

What cuts do you apply on background ? This might depend on how strong a source you were looking for.

Types of noise: white, harmonic, burst, auto-regressive (1/f). If all these matter then very tough statistical problem.

Can create models looking for ill-defined targets eg Larry Bretthorst's work on searching for radar profiles of planes for the US Army - priors for known planes plus a low prior for a new type of plane which is very poorly defined. An unknown type of plane will not be detected till it is closer but it can be detected.

There are techniques to build likelihood from multiple time series that have linear correlations.

Critical test is: does signal/noise ratio improve as sensitivity improves.

### xselect case-insensitivity

Since the Astro-E2 folks insist on having TELESCOP='Astro-E2' not 'ASTRO-E2', I modified xselect so that it is case-insensitive for mission, detector, etc. names. This shouldn't matter for any other missions. Also copied xselect.mdb entries for ASTRO-E to those for Astro-E2.

Keywords: heasoft, Astro-E2

### GravStat 2005: Finn

Sam Finn's talk on statistical issues for GW.

Inference requires assumptions:

Examples : Bound the rate of coalescing compact binaries as a function of mass - what can we surmise about binary synthesis and evolution. Bound amplitude of gravitational waves from a pulsar - how can we choose among models for nuclear EOS. Detect burst associated with GRB - what can we say about the central engine.

Regimes of interest

Today : weak (low snr) & rare (serendipitous)

Tomorrow: ground-based - strong at moderate rate; space-based - strong and (over-)abundant - confusion

We observe signals not sources

Stochastic: continuous, possibly modulated

Periodic: known frequency, unknown frequency

Bursts: characterized by eg waveform, (time-)frequency spectrum, energy spectrum, amplitude, bandwidth, duration

Multi-layered data analysis:

Data conditioning: removing artifacts - regression, vetoes, whitening (flat-field),...

Signal ID and characterization: amplitudes, durations, spectra, locations, bandwidths, waveforms, rates. Classification - signals of same type.

Interpretation of signals as sources: source eg mass, spin, differential rotation; process - hypernova or binary coalescence, neutrino opacity etc.; population - mass function, spatial distribution, luminosity function, evolution,...

Where are we now ?

Focussed on signal ID and characterization - haven't seriously tackled interpretation.

Statistical tools used for ID and characterization:

confidence interval construction on rates (bursts, stochastic)

extreme value statistics (inspiral)

Bayesian credible sets (periodic signals)

Open questions:

Do we/did we see strong evidence for gravitational waves. Quantify strong.

How do we incorporate prior knowledge or experience without closing our minds to discovery ?

When and how may we "re-analyze" data ? ie carry out an improved analysis, incorporate new info.

How do we go about classifying signals ?

How do we select among physical models for phenomena ?

How do we infer population properties for incomplete and biased samples ?

How do we compare and combine different experiments.

## Thursday, May 19, 2005

### GravStat 2005: Loredo

Tom Loredo's introductory talk on statistics.

Inference: deductive/inductive

Statistical inference: quantify inductive inference -> probability

Consider set of models M_i with parameters P_i.

Parameter estimation - given model i what can we say about P_i

Model uncertainty - which model is better ? Is M_0 adequate ?

Hybrid uncertainty - models share some common parameters: what can we say about them ?

Frequentist (F): devise procedure to choose among hypotheses H_i using D. Apply to D_obs. Report long-run performance.

Bayesian (B) : calculate probability of hypotheses given D_obs and modelling premises using rules of probability theory.

Frequency -> Probability - Bernoulli law of large numbers

Probability -> Frequency - Bayes' original paper

B vs F

B more general: can in principle contemplate probability of anything

B more narrow: data only appear through likelihood function

F can always base a procedure on a B calculation.

Decision theory: a rule a(D) that chooses a particular action when D is observed.

Risk : R(o) = Sum_D p(D|o) L(a(D),o)

where L is the loss function. Seek rules with small risks.

Inference: F calibration - the long-run average actual accuracy ~ long-run average reported accuracy. Decision theory is used to decide between calibrated rules.

B decision theory : average over outcomes.

Wald's Complete Class Theorem: Admissable F decision rules are B rules. Less useful than appears because sometimes inadmissable rules are better than admissable.

Model uncertainty: B method requires alternative model.

Comparison of B & F: B credible regions are usually close to F confidence regions but often better. Decision results are very different.

Counting experiment with background known - B approach (Helene 1983), F approach (Roe & Woodroofe 1999).
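Helene's flat-prior Poisson upper limit can be sketched as follows (a generic implementation of the cited approach under my reading of the formula, solved by bisection rather than in closed form):

```python
from math import exp, factorial

def helene_upper_limit(n_obs, b, cl=0.90):
    """Bayesian (flat-prior) upper limit on a Poisson signal s with known
    background b, after Helene (1983): solve
        1 - CL = exp(-s) * sum_{k<=n} (s+b)^k/k! / sum_{k<=n} b^k/k!
    for s. The tail probability is monotone in s, so bisection works.
    """
    def tail(s):
        num = exp(-s) * sum((s + b)**k / factorial(k) for k in range(n_obs + 1))
        den = sum(b**k / factorial(k) for k in range(n_obs + 1))
        return num / den

    lo, hi = 0.0, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tail(mid) > 1.0 - cl:
            lo = mid           # tail still too large: limit is higher
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With no background and no counts this reduces to the familiar -ln(0.1) ~ 2.3 events at 90% CL, and the limit tightens as the known background grows.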

Nuisance parameters : profile likelihood - maximise over nuisance parameters

L_p(P) = max_Q L(P,Q)

can be biased and confidence intervals too small. Modern F approach is asymptotic adjustment

L_p(P) x |I_QQ(P)|^-1/2

where I_QQ is information matrix for Q. Alternatively evaluate F properties of B marginal solution.
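A profile-likelihood sketch for the simplest case, a Gaussian mean with the variance as nuisance parameter (a hypothetical example, not from the talk; the maximisation over the nuisance parameter is done analytically):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.5, size=50)

def profile_loglike(mu, data):
    """Profile log-likelihood for a Gaussian mean mu: maximise over the
    nuisance sigma analytically, sigma_hat(mu)^2 = mean((x - mu)^2)."""
    n = len(data)
    s2 = np.mean((data - mu)**2)
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

# Scan mu on a grid and read off the estimate and a rough interval.
mus = np.linspace(0.0, 4.0, 401)
lp = np.array([profile_loglike(m, x) for m in mus])
mu_hat = mus[np.argmax(lp)]
# Approximate 68% interval: points within 0.5 of the maximum log-likelihood.
inside = mus[lp >= lp.max() - 0.5]
```

The talk's point is that intervals read off this curve can be too small, which the quoted |I_QQ|^-1/2 adjustment is designed to correct.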

Hypothesis testing :

No one does Neyman-Pearson - you need to pick alpha ahead of time and only report that, eg quote a 2 sigma detection even if actually 10 sigma.

Fisher proposed p-values to get round this. But the p-value depends on the data and is easily misinterpreted. p-values don't accurately measure how often the null will be wrongly rejected.

Recent work on conditional testing solves some of these problems. Comparison with B methods by Berger.

Multiple tests - False Discovery Rate. Active research topic.

Non-parametric situation between F and B is much more controversial and has not converged.

Keywords: statistics

### genrsp

For Astro-E2 XRS we want to be able to use genrsp to make the response. This requires adding the capability to specify and calculate escape peaks (assumed gaussian in shape). I've made the changes to support this except for the code in rdinpd which will actually get the main and escape peak information. This is waiting on the definition of the FITS file.

Keywords: heasoft, response

## Monday, May 16, 2005

### Mac freebies and wiki stuff

From Lifehacker :

Mac freebies : TextWrangler text editor, Quicksilver App launcher, GmailStatus notifier, iTerm terminal, VLC media player.

Wiki stuff : Instiki, GTD TiddlyWiki

## Wednesday, May 11, 2005

### v12 current issues

The following are larger outstanding issues I need to think about

1. The xmmpsf model needs the ability to read and write FITS files with the fact array. Also need to work out why the v11 and v12 results differ.

2. The /b syntax for background models is no longer available. Can we provide simple instructions, or even a tool, to convert v11 scripts?

3. Need to add the extend command. Is there a better way to deal with this than in v11?

Keywords: xspec

### Attilla Kovacs: The Universe seen through the eyes of a SHARC

Attilla Kovacs from Caltech gave the EUD seminar on results using the SHARC II array. This 2-D bolometer array, built by Harvey Moseley's group, is on the Caltech Submillimeter Observatory on Mauna Kea. The sky is so bright and variable at 350 microns that it is equivalent to observing a 16th mag star in the daytime. They use an iterative scheme of successive maximum likelihood solutions for each process that creates signal or noise in the image. They are looking at a wide variety of objects from the solar system to distant radio galaxies.

### Steinn Sigurdsson: Planets round WDs

UMCP colloquium by Steinn Sigurdsson on finding planets around WDs. The idea is to look at systems where the contrast between star and planet is smaller. DAZ WDs are H WDs but with metal lines in their atmospheres. The lifetimes for such lines are short since the metals should settle. One theory is that the WDs are being bombarded by comets (note that Lee Mundy wondered about X-ray flares in this case). However, comet rates are ~100 times too low. Steinn worked out that if there is a planetary system then comet rates could be increased as required. So he searched 6 nearby DAZs with HST and found a number of candidates, none of which have been confirmed by proper motion tests. Impressive image taken with the Gemini(N) AO system, where they guided on the WD to reach 50 mas resolution.

## Monday, May 09, 2005

### gauss model in v11

John Houck and Mike Nowak spotted a potential problem with xsgaul.f when the Gaussian line extends beyond the lowest energy channel. Fixed this as 11.3.2d.

I then noticed that a number of models are not in sync between v11 and v12. It looks like a number of minor bug fixes and enhancements made over the last couple of years didn't make it into the files in the XSFunctions directory. Files changed are compbb.f, cont.f, ionsneqs.f, ldaped.f, neispec.f, oneispec.f, pileup.f, reflct.f, smdem2.f, sumape.f, sumdem.f, xeq.f, xsdskb.f, xseq.f, xsgaul.f, xsnteea.f, xsvape.f.

Keywords: xspec

## Friday, May 06, 2005

### Preprints

Ostriker et al. present a simple polytropic model for the ICM.

Nulsen et al. find a Mach 1.65 shock in the X-ray image of the cluster around Her A.

Nevaleinen et al. describe XMM EPIC background modelling.

Page et al. suggest that it may be possible to use observations of minor planets in the outer Solar System to test the Pioneer Effect.

Matheson and Safi-Harb put together all the Chandra data on G21.5-0.9.

Begelmann & Nath invoke a feedback model to explain the MBH-bulge sigma relation.

Angulo et al. argue that a survey of clusters at z~1 looking for baryon fluctuations can determine dark energy w to better than 10% without requiring knowledge of the cluster mass.

Jenet et al. show that radio observations of 40 pulsars with a timing accuracy of 100 nanoseconds could detect the effects of the stochastic gravitational wave background from merging BHs. This would take 5-20 years of observations depending on the actual background.

Fabian et al. analyze six XMM observations of 1H0419-577 and argue that the lowest flux state is dominated by emission from within a few gravitational radii of the BH, which must be in an extreme Kerr state.

Keywords: ICM, XMM, Gravity, BH, SNR
