May 28, 2012

The Space for Magic in Science

Whiteboarding is cheap, almost free.

Yet experimenters rush new experiments from the whiteboard into data collection, only to thrash during analysis. "It would be groundbreaking if we did ________ analysis." But alas, the design does not support it.

Try letting an experiment linger unfinished on the whiteboard a little longer.

Something magical might happen in that space.

May 25, 2012

How to look inside the brain: Carl Schoonover on TED.com



It is amazing how far and fast neuroscience has developed in the last 100 years. This talk reaffirms my passion to explore one of the most important frontiers of human knowledge. We are in the golden age of neuroscience, and I strive to make my meager contribution every day.

Additionally, the beautiful, powerful images in this talk are examples of "Science as Art." I am off to track down more prints for my office.

May 23, 2012

Best Practices for Science

There is a plethora of "best practices for business" books.

I cannot find any "best practices for science" books.

Hmmm ...

May 18, 2012

The Assumptions of Academic Hierarchy

It is assumed that someone who can:
Achieve high grades can conduct novel research
Conduct novel research can teach effectively
Teach effectively can lead a research team
Lead a research team will be skilled at administration

In my experience, those assumptions are violated more often than they hold.

May 16, 2012

A Critical Look at Presenting Science: David Poeppel


MIT Tech TV

David Poeppel's talk is a tour de force of presenting science: equal parts wit, clarity, brevity, and humility.

A good example of presenting data can be seen at 9:12. In the top panel, he presents group means with error bars and a best-fit line. The x-axis is tone frequency in Hz, so the assumption of interval scaling is more likely to hold. In the bottom panel, he shows data for individual trials with error bars. Then he goes one step further by presenting each individual participant's data with error bars. The audience can decide at which level the data should be averaged, if at all. Remember: not all data should be averaged.

Those three graphs provide the audience with the information needed to make their own informed decisions concerning the validity of his conclusions.
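The value of showing data at multiple levels of aggregation can be sketched in a few lines of NumPy. The numbers below are invented for illustration, not data from the talk; the point is that pooling every trial into one grand average can produce misleadingly tight error bars when participants differ from one another:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 3 participants x 20 trials of some response measure.
# Each participant has a different true mean (400, 450, 500).
trials = rng.normal(loc=[[400.0], [450.0], [500.0]], scale=30.0, size=(3, 20))

# Level 1: pool every trial and average once.
pooled_mean = trials.mean()
pooled_sem = trials.std(ddof=1) / np.sqrt(trials.size)

# Level 2: average within each participant first, then across participants.
participant_means = trials.mean(axis=1)
group_mean = participant_means.mean()
group_sem = participant_means.std(ddof=1) / np.sqrt(len(participant_means))

print(f"pooled: {pooled_mean:.1f} +/- {pooled_sem:.1f}")
print(f"by participant: {group_mean:.1f} +/- {group_sem:.1f}")
```

With equal trial counts the two means coincide, but the pooled standard error hides the between-participant variance that participant-level error bars reveal. Showing both levels lets the audience judge which one matters for the claim being made.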

May 14, 2012

A Critical Look at Presenting Science: Patricia Kuhl


I had the privilege of attending the 2012 McGovern Institute Symposium "MEG: Applications to Cognitive Neuroscience." The symposium covered the breadth and depth of magnetoencephalography (MEG) research. The majority of the talks were brilliantly presented, well-done science. There was one talk, however, that did not live up to rigorous scientific standards, despite being well received by the audience.

Patricia Kuhl's talk is rife with scientific peccadilloes. The figure presented at 24:19 is one of the best examples. The complete absence of error bars is the greatest error of omission. Variance is a critical element in understanding mean differences; without that information, the audience cannot draw their own conclusions about the significance of those differences.

In addition to that error of omission, there is an error of commission. Dr. Kuhl drew a straight line between the groups in the graph, assuming, with no justification, interval scaling on the x-axis. Why should we assume the distance between 6-8 months and 10-12 months is the same as the distance between 10-12 months and "adult"*? When in doubt, a scientist should assume the lowest scale of measurement. It is better to assume an ordinal scale when comparing these groups, which makes a bar graph appropriate. Just because you can draw a line between data points does not mean you should.
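When only an ordinal scale can be justified, a bar chart makes that choice explicit. Here is a minimal matplotlib sketch; the group means and standard deviations are invented placeholders, since the actual values are not recoverable from the talk:

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical values standing in for the figure's data.
groups = ["6-8 mo", "10-12 mo", "adult"]  # ordered categories, nothing more
means = np.array([0.62, 0.74, 0.91])
sds = np.array([0.08, 0.10, 0.05])

fig, ax = plt.subplots()
# Bars preserve the group order without implying anything about the
# spacing between groups -- the assumption a connecting line smuggles in.
ax.bar(groups, means, yerr=sds, capsize=4, color="0.7", edgecolor="black")
ax.set_ylabel("Hypothetical response measure")
fig.savefig("ordinal_groups.png")
```

The error bars supply the variance information missing from the original figure, and the categorical x-axis stops the eye from reading a trend slope into unequal, unjustified intervals.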

There needs to be a high standard for the presentation of scientific findings in all contexts. It is the responsibility of fellow scientists to uphold that standard.

* It is unclear what "adult" means in this context. Is it a chronological or developmental definition? How was it operationalized?

May 4, 2012

Best (but not right)

All models are inherently wrong.

They can never match reality. The distillation of reality is their power and their limitation.

Therefore the goal is not to find the right model but to find the best model for a given context.