Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018 original; 2018 edition)

by Deborah G. Mayo (Author)

Members: 31 · Reviews: 1 · Popularity: 776,281 · Average rating: 4 · Conversations: None
Mounting failures of replication in social and biological sciences give a new urgency to critically appraising proposed reforms. This book pulls back the cover on disagreements between experts charged with restoring integrity to science. It denies two pervasive views of the role of probability in inference: to assign degrees of belief, and to control error rates in a long run. If statistical consumers are unaware of assumptions behind rival evidence reforms, they can't scrutinize the consequences that affect them (in personalized medicine, psychology, etc.). The book sets sail with a simple tool: if little has been done to rule out flaws in inferring a claim, then it has not passed a severe test. Many methods advocated by data experts do not stand up to severe scrutiny and are in tension with successful strategies for blocking or accounting for cherry picking and selective reporting. Through a series of excursions and exhibits, the philosophy and history of inductive inference come alive. Philosophical tools are put to work to solve problems about science and pseudoscience, induction and falsification.
Member: jfullback
Title: Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars
Authors: Deborah G. Mayo (Author)
Information: Cambridge University Press (2018), Edition: 1, 500 pages
Collections: Your library
Rating:
Tags: None

Work Information

Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars by Deborah G. Mayo (2018)


I think I'd be wasting my time continuing with this. I'm not a statistician or a user of statistical methods, so I'm not really in a position to judge the book -- it's just not for me. A couple of thoughts anyway:

- The style is strange; it gave me the feeling of being dropped into the author's train of thought, or perhaps thrown headlong into its path. I don't think that was due to the complexity of the ideas, or even to the semantic density of the writing, but rather to some combination of stream-of-consciousness composition and assumed shared background between writer and reader. In a popular book I would simply call this bad writing/editing, but since Mayo is a specialist writing for specialists, I'm not in a position to lay down that kind of judgment. It didn't feel necessary, though, at least in the early sections that I read.

- (What follows may simply reveal my own ignorance and misunderstanding)

p. 53: "Analogous situations to the optional stopping example occur even without optional stopping, as with selecting a data-dependent, maximally likely, alternative. Here's an example from Cox and Hinkley [citation] attributed to Allan Birnbaum [citation]. A single observation is made on X, which can take values 1, 2, ..., 100. [...] If X is observed to be r, [...] then the most likely hypothesis is θ = r. In fact, Pr(X = r; θ = r) = 1. By contrast, Pr(X = r; θ = 0) = 0.01. Whatever value r that is observed, hypothesis θ = r is 100 times as likely as is θ = 0. [...] So "even if in fact θ = 0, we are certain to find evidence apparently pointing strongly against θ = 0, if we allow comparisons of likelihoods chosen in the light of the data" [citation]. This does not happen if the test is restricted to two preselected values. [...] Allan Birnbaum gets the prize for inventing chestnuts that deeply challenge both those who do, and those who do not, hold the Likelihood Principle."

(As far as I can remember, or find using the index, the Likelihood Principle has not yet been formally defined, but on page 30 it is said to be "related" to the "Law of Likelihood", i.e. "Data x are better evidence for hypothesis H1 than for H0 if x is more probable under H1 than under H0: Pr(x; H1) > Pr(x; H0), that is, the likelihood ratio (LR) of H1 over H0 exceeds 1.")

Earlier (p. 38) a similar case is used to illustrate "our most serious problem: The Law of Likelihood permits finding evidence in favor of a hypothesis deliberately arrived at using the data". A deck of cards is shuffled, and the top card turned over: it is the Ace of Diamonds. The LL tells us that "the hypothesis that the deck consists of 52 aces of diamonds [...] is [52 times] better supported than the hypothesis that the deck is normal". This is supposed to present a problem for 'Likelihoodists', one which they can only evade by insisting on the distinction between evidence and belief.

I don't understand what is supposed to be strange or threatening about the 'trick deck' case. The hypothesis that the deck consists of 52 aces of diamonds *has* just received some support, whether we formulated it in advance or not -- but this doesn't imply that we need to reduce our confidence that the deck is normal. All that has happened is that the other 51 'the deck consists of 52 copies of one card' hypotheses have been ruled out. If we gave (or would have given) each of them a 0.01% chance before looking at the data, we should now consider the AD hypothesis 0.52% likely to be true.
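Spelling out that arithmetic (the 0.01% priors are my own numbers, chosen only for illustration):

# Bayesian update for the trick-deck case. Priors (mine, for illustration):
# each of the 52 'all one card' hypotheses gets 0.01%; the rest goes to
# 'the deck is normal'. Data: the top card is the Ace of Diamonds.
p_trick_each = 0.0001                     # 0.01% per trick-deck hypothesis
p_normal = 1 - 52 * p_trick_each          # 0.9948

# Likelihood of the observation: 1 under the all-AD deck, 0 under the
# other 51 trick decks, 1/52 under a normal deck.
joint_ad = p_trick_each * 1.0
joint_normal = p_normal * (1 / 52)
posterior_ad = joint_ad / (joint_ad + joint_normal)
print(posterior_ad)    # ~0.0052, i.e. the 0.52% above; the normal deck keeps ~99.48%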

Similarly, I don't understand what is supposed to be puzzling about the Allan Birnbaum case, which seems to be presented almost as a paradox. If we had no other reason to favour the hypothesis that θ = r, then the observed value provides no evidence for or against θ = 0, only evidence in favour of θ = r. The ratio Pr(θ = r)/Pr(θ = 0) increases, but only because θ = [anything other than 0 or r] has been completely ruled out, and its probability mass transferred to θ = r.
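Again with a toy prior (uniform over θ ∈ {0, 1, ..., 100} -- my assumption; the book assigns no priors here), the bookkeeping comes out exactly as described: the posterior of θ = 0 equals its prior, and θ = r absorbs everything else:

# Posterior over theta in {0, 1, ..., 100} after observing X = r,
# under a uniform prior (my illustration only).
def likelihood(r, theta):                 # same toy model as above
    return 0.01 if theta == 0 else (1.0 if r == theta else 0.0)

r = 37                                    # any observed value behaves identically
prior = 1 / 101
joint = {theta: prior * likelihood(r, theta) for theta in range(101)}
total = sum(joint.values())
print(joint[0] / total, prior)            # both ~0.00990: theta = 0 is untouched
print(joint[r] / total)                   # ~0.99010: mass from ruled-out thetas moved to theta = r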

Surely nobody really advocates the Bayesianism-without-prior-probabilities approach that these examples seem to target? Mayo must have a reason for presenting them, but I would have loved a clear explanation of what it is.
matt_ar | Dec 6, 2019