"Gromov: Mathematical Structures of Biology", a playlist created by Daniel Ferrante: Four videos from IHES worth their time.
— Franz Kafka, writer.
— Isaac Asimov, Russian-born American writer and biochemist.
— Charles Bukowski, German-born American writer.
— Scott Adams, American cartoonist.
Two very interesting articles from today’s The Economist:
» “Problems with scientific research: How science goes wrong”, http://www.economist.com/news/leaders/21588069-scientific-research-has-changed-world-now-it-needs-change-itself-how-science-goes-wrong
» “Unreliable research: Trouble at the lab”, http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble
Particularly relevant when read in light of the following Nature article:
» “Power failure: why small sample size undermines the reliability of neuroscience”, http://www.nature.com/nrn/journal/v14/n5/full/nrn3475.html
And, to keep the “academic” theme rolling…
» “Study: Poor children are now the majority in American public schools”, http://www.washingtonpost.com/local/education/study-poor-children-are-now-the-majority-in-american-public-schools-in-south-west/2013/10/16/34eb4984-35bb-11e3-8a0e-4e2cf80831fc_story.html?hpid=z5
To cap it off, let’s put some numbers in perspective: these past two weeks of Government Shutdown cost about the equivalent of NIH’s annual budget:
» “Government shutdown: Cost could be up to $24 billion”, http://www.nbcnews.com/business/budget-battles-bite-out-economy-will-be-billions-8C11409508
» “NIH Budget”, http://www.nih.gov/about/budget.htm
But, because it’s Friday night, let’s finish with some positive news:
» “Open access science news is mostly good, with a bit of ugly”, http://arstechnica.com/science/2013/04/open-access-science-news-is-mostly-good-with-a-bit-of-ugly/
'Maddog', much like Stallman, has been hammering points such as this for a very loOong time. But I still think there's a larger point to be captured in this dialogue, a perspective that does not seem to have been fully appreciated yet. Computer Science is an endeavor of human knowledge that manages to turn its highly theoretical and abstract results into things of daily value, fairly well understood by the lay public. Think, for example, of cryptography, or even browsers: when was the last time someone knowingly used Graph Theory in the “real world”?
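To make the Graph Theory point concrete: web search is one place where an abstract graph algorithm quietly delivers daily value. Below is a minimal sketch of PageRank-style power iteration on a tiny made-up link graph (the page names and numbers are invented for illustration, not taken from any real system):

```python
# Toy illustration of graph theory in everyday tools: ranking pages by
# the stationary distribution of a "random surfer" on a link graph.
import numpy as np

links = {                      # invented directed link graph: page -> outlinks
    "home": ["blog", "about"],
    "blog": ["home"],
    "about": ["home", "blog"],
}

pages = sorted(links)
n = len(pages)
idx = {p: i for i, p in enumerate(pages)}

# Column-stochastic transition matrix M: M[j, i] = prob. of moving i -> j
M = np.zeros((n, n))
for src, outs in links.items():
    for dst in outs:
        M[idx[dst], idx[src]] = 1.0 / len(outs)

d = 0.85                       # standard damping factor
rank = np.full(n, 1.0 / n)
for _ in range(100):           # power iteration toward the stationary vector
    rank = (1 - d) / n + d * M @ rank

print({p: round(float(r), 3) for p, r in zip(pages, rank)})
```

Here "home", which every other page links to, ends up with the highest rank — the kind of result billions of people rely on daily without ever naming the underlying mathematics.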
My point is that very similar (/mutatis mutandis/) warnings can be made across other disciplines of Science, ranging from Physics to Molecular Biology, Genetics, Bioinformatics, and Neuroscience. As our technology improves further, the “distance” between ‘abstract rumblings’ (theory) and ‘daily applications’ (experiment) keeps decreasing. And the point remains: there exist analogues of ‘closed source’ practices in many fields out there; Computer Science is far from alone here. So, why not sound the proverbial trumpet on these other ones as well? Are they not as relevant, or as strategic, to society?
Software, at this stage of our history, is ubiquitous: from elevators and cell phones to GPS and the Space Station. But so is Science: without transistors we would not have had this “computer revolution”, just as without modern medical treatments many diseases would still be fatal, rather than having been transformed into chronic conditions (treatable and manageable).
Thus, even though this one example raised by ‘Maddog’ and Stallman is very relevant to our day-to-day activities, so are the myriad of others that seem to go mostly unnoticed. The more this “distance” between “theory” and “practice” decreases (and it does so every day!), the more relevant comments such as this one become. We need “better practices” not only in the software industry but in many others out there… and this discussion needs to start at some point… FOSS and its principles were ‘borrowed’ from practices long held in high esteem by the scientific community. If they have overflowed into the software industry, why not have the same principles flood other industries?
» “Hello, President Rousseff … I told you so.”, http://www.linuxpromagazine.com/Online/Blogs/Paw-Prints-Writings-of-the-maddog/Hello-President-Rousseff-I-told-you-so
» “Project Cauã”, http://www.projectcaua.org/
The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlation between assessor scores and between assessor score and the number of citations is weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that the three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.
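The abstract's key statistical move — that the assessor-score correlation weakens once you control for journal — can be illustrated with a toy simulation. All numbers below are invented for illustration; this is not the paper's data or model, just a sketch of the mechanism: if two assessors' scores are driven mostly by a shared journal-prestige effect plus independent noise, their raw correlation looks respectable, but the within-journal correlation largely vanishes.

```python
# Toy simulation (invented numbers) of the journal-bias argument:
# two assessors score the same papers, and each score is a shared
# journal-prestige effect plus independent noise about the paper itself.
import numpy as np

rng = np.random.default_rng(0)
n_journals, per_journal = 20, 50
prestige = rng.normal(0, 1, n_journals)        # journal (impact-factor) effect

journal = np.repeat(np.arange(n_journals), per_journal)
score_a = prestige[journal] + rng.normal(0, 1, journal.size)
score_b = prestige[journal] + rng.normal(0, 1, journal.size)

# Raw correlation between assessors, pooled across all journals
raw_r = np.corrcoef(score_a, score_b)[0, 1]

# "Controlling for journal": correlate within-journal residuals instead
mean_a = np.array([score_a[journal == j].mean() for j in range(n_journals)])
mean_b = np.array([score_b[journal == j].mean() for j in range(n_journals)])
within_r = np.corrcoef(score_a - mean_a[journal],
                       score_b - mean_b[journal])[0, 1]

print(f"raw r = {raw_r:.2f}, within-journal r = {within_r:.2f}")
```

In this setup the pooled correlation is moderate (the true value is 0.5 by construction) while the within-journal correlation hovers near zero — exactly the signature the authors interpret as assessors rating the journal rather than the paper.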
— Leonardo da Vinci, Italian artist, inventor and mathematician.