Comments on Big Nick At Large: Corrupt Indexes, Part II

Nick Willett-Jeffries (2010-10-11):

Pretty much completely agree. I don't see any easy way to get the public to interpret data better. Teaching basic statistics in high schools might help a little, but this is, at best, only going to result in gradual progress.

But that's not too much of a problem. What you describe as a "stopgap measure," I would actually consider a fairly elegant solution. Scientists and researchers need to be more careful about the format and language they use when they release data. A good start would be doing away with executive summaries (or writing better ones). Changing the way we graphically present data would help too. Just putting error bars onto the graphs that are given to the media would be progress. Not publishing if your research has HUGE error ranges would also be a good thing... This actually isn't too much to ask...

Another side of this problem is that we don't have a good, preferably open-source and free, way to produce high-quality data graphics.
Excel is terribad for this sort of thing, and R, which is what most real statisticians use, is horribly inaccessible for your average user...

Anonymous (2010-10-10):

I've just been catching up on your blog, and I think you've really identified one of the major problems facing the broader scientific community and its interface with the public media. We're past the point where scientists and statisticians can claim innocence in how their data are used. Especially in the case of something like the CPI, where the study is done entirely to further practical goals of reducing production, the onus is on the people releasing the data to consider how it will be picked up and used by a broader public who have not done all of the background research and who don't understand all of the nuances of the methodology. Facts are 'newsworthy' and uncertainty is not, so organizations that are largely concerned with the public and political use of their statistics need to be very careful about the nature of the facts they are releasing when the uncertainty is actually the predominant story. I don't see an easy answer for how to improve the use of scientific data in the public domain, and maybe for now the best stopgap measure we can take is to foresee its misappropriation and control the language of what actually gets released. It's a sticky situation. Interesting stuff.

Me (2010-09-27):

Very interesting.
If you are not familiar with their programs, you should look at the Millennium Challenge Corporation in the U.S. Much of what you say had to be incorporated into their decision-making process. This program (MCC) was slow to get off the ground because of the difficulties in getting comfortable with the "measurables." The program now seems to be chugging along very well.