

Expert Political Judgment: How Good Is It? How Can We Know?

by Philip E. Tetlock

Members: 305 | Reviews: 5 | Popularity: 85,978 | Average rating: 4.22 | Mentions: 5
The intelligence failures surrounding the invasion of Iraq dramatically illustrate the necessity of developing standards for evaluating expert opinion. This book fills that need. Here, Philip E. Tetlock explores what constitutes good judgment in predicting future events, and looks at why experts are often wrong in their forecasts. Tetlock first discusses arguments about whether the world is too complex for people to find the tools to understand political phenomena, let alone predict the future. He evaluates predictions from experts in different fields, comparing them to predictions by well-informed laity or those based on simple extrapolation from current trends. He goes on to analyze which styles of thinking are more successful in forecasting. Classifying thinking styles using Isaiah Berlin's prototypes of the fox and the hedgehog, Tetlock contends that the fox--the thinker who knows many little things, draws from an eclectic array of traditions, and is better able to improvise in response to changing events--is more successful in predicting the future than the hedgehog, who knows one big thing, toils devotedly within one tradition, and imposes formulaic solutions on ill-defined problems. He notes a perversely inverse relationship between the best scientific indicators of good judgment and the qualities that the media most prizes in pundits--the single-minded determination required to prevail in ideological combat. Clearly written and impeccably researched, the book fills a huge void in the literature on evaluating expert opinion. It will appeal across many academic disciplines as well as to corporations seeking to develop standards for judging expert decision-making.

Showing 5 of 5
An interesting book about the kinds of biases "professional" pundits bring to political topics. Very data-driven, and it shows that experts are better than the completely uninformed (college students or worse), but also that experts remain good outside their own areas of expertise, so long as they rely on decent information sources. There's an even more striking result -- fairly straightforward computer models can do even better than the experts, even within the experts' areas of expertise.

One thing missing is that experts are often better at coming up with indicators -- i.e., "if X happens, then Y will result" -- which is more useful than just saying "I think Y will happen". There are also structured ways to get far better predictions out of people (on par with the best machine models) by asking better questions, using markets, and so on.

Irony: most of the "wrong" predictions cited in the book ended up coming true shortly thereafter ("Russia invades Ukraine", etc.).

Unfortunately, the audiobook narrator is gratingly annoying; I'd read the book instead.
  octal | Jan 1, 2021 |
This book is an empirical study of expert knowledge. The author questioned experts on political matters for more than a decade and followed up to see which predictions held up and which didn't. He presents most of his results in terms of two kinds of experts: "hedgehogs", who stick to one big scheme, and "foxes", who are more open to alternative interpretations.

The book is filled with correlation coefficients and other statistical data, so it certainly doesn't lack precision. But I was a bit disappointed with the narrow scope of "political judgment" in this study. The survey questions seem to have focused very much on international politics and other very large issues. No wonder the expert predictions were so shaky. It would have been interesting to see how expert opinion on small-scale, local issues would have fared in comparison. This book wasn't quite as informative as I expected, but I still hope other researchers will be inspired to conduct similar studies.
  thcson | May 5, 2013 |
This is a critical book for anyone who depends on professional forecasters of "social" variables, and even more so for anyone whose livelihood rests on making such forecasts. "Social" because Tetlock's book is focused on political forecasting, but I'm convinced that it applies to economic and social forecasting as well. (Having spent a professional career forecasting economic variables, I have some insight here.) Tetlock is not discussing forecasting in the hard sciences, where forecasting is based on much harder data.

His first critical conclusion is that, in forecasting complex political events, "we could do as well by tossing coins as by consulting experts". This is based on a massive set of surveys of expert opinion that were compared to real-world outcomes over many years. The task was enormously complex to set up; designing an experiment in the social sciences raises the problems that constantly arise in making judgments in these fields (what does one measure, and how? how can bias be measured and eliminated? and so on). Much of the book is devoted to the problems in constructing the study, and how they were resolved.

His second key conclusion is that, while that may be true of experts as an undifferentiated group, some experts do significantly better than other experts. This does not reflect the level of expertise involved, nor does it reflect political orientation. Rather, it reflects the way the experts think. Poorer performers tend to be what Tetlock characterizes as "hedgehogs" -- people who apply theoretical frameworks, who stick with a line of argument, and who believe strongly in their own forecasts. The better performers tend to be what he calls "foxes" -- those with an eclectic approach, who examine many hypotheses, and who are more inclined to think probabilistically, by grading the likelihood of their forecasts.
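
Tetlock scores such probabilistic forecasts against real-world outcomes. As a minimal sketch of how that kind of scoring works -- the Brier score is a standard tool in forecasting research, and the forecasts below are invented for illustration, not taken from the book -- here is how a hedged "fox" can beat both an overconfident "hedgehog" and the coin-toss baseline:

# Illustration only: invented forecasts, not Tetlock's data or code.
# The Brier score is the mean squared error between a stated probability
# and the 0/1 outcome; lower is better, and an always-50% "coin toss"
# scores a flat 0.25.

def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and binary outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Five hypothetical yes/no political events (1 = it happened).
outcomes = [1, 0, 0, 1, 1]

hedgehog = [0.95, 0.90, 0.05, 0.10, 0.99]  # confident, theory-driven calls
fox      = [0.70, 0.30, 0.20, 0.60, 0.80]  # hedged, probability-graded calls
coin     = [0.50] * 5                      # the coin-toss baseline

for name, f in [("hedgehog", hedgehog), ("fox", fox), ("coin toss", coin)]:
    print(f"{name:>9}: {brier_score(f, outcomes):.3f}")

With two confident misses, the hedgehog here scores worse than the coin toss (0.325 vs. 0.250), while the fox's graded probabilities come out well ahead (0.084) -- the same ordering described above, in miniature.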

But, as he notes, the forecasters who get the most media exposure tend to be the hedgehogs, those with a strong point of view that can be clearly expressed. This makes all the sense in the world; someone with a clear-cut and compelling story is much more fun to listen to (and much more memorable) than someone who presents a range of possible outcomes (as a former many-handed economist, I know this all too well).

What does that mean for those of us who use forecasts? We use them in making political decisions, personal financial decisions, and investment decisions. This book tells us that WHAT THE EXPERTS SAY IS NOT LIKELY TO ADD MUCH TO THE QUALITY OF YOUR OWN DECISION MAKING. That means being careful about how much you pay for expert advice, and how much you rely on it. It applies, of course, to experts in the social sciences, NOT to experts in the hard (aka real) sciences. Generally, it is a good idea to regard your doctor as a real expert.

Because it makes these conclusions impossible to avoid, I gave this book five stars; this is very important stuff. I would not have given it five stars for the way in which it is written. For me, it read as if it had been written for other academics rather than for the general reader. That is hard to avoid, but some other works in the field do manage it -- for example, "Thinking, Fast and Slow". Don't skip the book because it is not exactly an enjoyable read, however: its merit far outweighs its manner.
  annbury | Sep 6, 2012 |
How good are political academics/think-tankers/pundits at predicting the outcome of political events? Tetlock studies their predictions over many years in an attempt to answer this question. It's an interesting question, and the research is solid, but I ended up drowning in the details of his analysis and his treatment of the various threats to validity. The book feels too much like a PhD dissertation to be a compelling read.
1 vote · lorin | Mar 19, 2009 |

References to this work in external sources: Wikipedia in English (1).



Rating (average 4.22, 30 votes):
5 stars: 12 | 4.5: 2 | 4: 10 | 3.5: 1 | 3: 4 | 2: 1
