Read on Wikipedia:

“Cut out all those exclamation points. An exclamation point is like laughing at your own jokes.”

—F. Scott Fitzgerald

I am afraid we have reached a point where the discussion is no longer relevant for a general audience… So we will discuss those points later this week in Paris! (Yes, yet another exclamation mark. I am working on it. I was also convinced that “sic” was written “sic!”; thanks for the correction, and pardon my poor Latin.)

p. 40:

“The Central Limit Theorem is a very powerful result. It implies that for any population distribution, under simple random sampling (for appropriate n and n/N), the sample average has an approximate normal distribution.

“The normal distribution can be used to provide interval estimates for the population parameter. One interval estimate for \mu is

(\bar{x} -\sigma /\sqrt{n},\bar{x} + \sigma /\sqrt{n}).

“The interval is called a 68% confidence interval for the population parameter. A 95% confidence interval for \mu is (\bar{x}-2\sigma/\sqrt{n},\bar{x}+2\sigma/\sqrt{n}). These interval estimates derive their names from the fact that, by the Central Limit Theorem, the chance that \bar{x} is within one (or two) standard error(s) of \mu is approximately 68% (or 95%).”

Further down on the page, they mention that the 95% refers to the confidence intervals under repeated sampling (you can read the rest in the book: “The sample statistic \bar{x} is random, so we can think of the interval estimates as random intervals….”).

What do you think of this presentation of the CLT and confidence intervals? It’s been written by authorities in the field. If you like this more than our presentation, we can modify our book in the corrected edition to quote Nolan and Speed.

This book is also published by Springer, by the way, in the series “Springer Texts in Statistics” (series editors: George Casella, Stephen Fienberg, Ingram Olkin). John Rice is mentioned in the acknowledgements.

Brief version:

The main point of your response is well-taken. The criticism that the book should not have been published without review by a statistician is valid. I should have made sure I could recruit a statistician to help us out. There is one caveat: this book cannot satisfy statisticians even after we correct the mistakes you pointed out. It’s intended for a very different audience than the one you encounter.

Long version:

1. Regarding the fact that the book should have been reviewed by a statistician.

The book draft was reviewed by an anonymous reviewer; I don’t know whether that person was a statistician. The review is in German, and I can send it to Christian if he wants to see it and can read German (failing that, Google Translate should do an adequate job).

In addition, I had it read by an expert in statistical methods in linguistics (but not someone like Christian), a sort of internal review. This was after I failed to get statisticians to read it for me (they refused, for lack of time or whatever other reason). In other words, I was aware that I could use a review by a statistician, but nobody I asked was willing to do it.

Why didn’t Springer have the book reviewed by a statistician? I don’t know, but I can guess. The type of person this book is addressed to is quite different from the type of person a professional statistician is. I explain this below in more detail.

Christian, would you be willing to do such a review? This is why I want to meet you in person, to ask you specific questions about your comments, because the review you published on the various blogs is inadequate. I’m now going to give you a detailed critique of what I think is wrong with your review. BTW, I appreciate the fact that you are willing to talk about this; most statisticians just shut me out. So bear with me.

A critique of Christian’s review:

a. “The following chapters are about analysis of variance (5), linear models (6), and linear mixed models (7), all of which face fatal deficiencies similar to the ones noted above.”

“And then the permanent confusion between the distribution and the sample, the true parameters and their estimates.”

I think you’ll agree this is all a bit vague. It would have been helpful to know what the precise “fatal deficiencies” are that you mention, and some examples of this “permanent confusion.” We are definitely not confused about the distribution and the sample, the true parameters and their estimates, but I don’t know how to evaluate this complaint because no details are provided. It’s a dismissive review in that sense. I think you owe me more detail there.

I am grateful for the errors you did take the trouble to point out (the one about \bar{x}=\mu is really embarrassing, but it’s not a result of ignorance, as you imply; we know it’s a meaningless statement. Saying that s is an unbiased estimator of \sigma *is* a result of ignorance on my part, and I am working on fixing my understanding of this).
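Since I mention it: the distinction is easy to see in a quick simulation (my own sketch, not from the book; the sample size n = 5 and the normal population are arbitrary choices):

```r
## s^2 is an unbiased estimator of sigma^2, but s is a (slightly)
## biased estimator of sigma. Normal samples of size 5, sigma = 1:
set.seed(1)
s <- replicate(20000, sd(rnorm(5)))
mean(s^2)  ## close to 1: unbiased for sigma^2
mean(s)    ## about 0.94: E[s] < sigma
```
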

b. You probably don’t like the statement of the Central Limit Theorem because we ignore the Cauchy distribution and we don’t mention the Law of Large Numbers, perhaps among other things. We could have mentioned all that and more in the book, I agree. But the issue was that we were focused on explaining the relevance of the CLT for understanding what a standard error is and why it’s important. This is a concept which seems to be very hard for students to grasp. Given the evidence (looking at results from courses I have taught), we succeed in that goal. Students will have an incomplete understanding of statistics once they read our book, and that’s absolutely fine at their level (I explain more below). There are books out there that present this material with a similar degree of incompleteness (look at the book Stat Labs for example; I could list a lot more books here, written by statisticians or psychologists).
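In fact, the kind of simulation we rely on in the book makes the point in a few lines (a sketch of my own; the skewed exponential population with \mu = \sigma = 1 and n = 40 are arbitrary choices):

```r
## Under repeated sampling, roughly 95% of the intervals
## xbar +/- 2*sigma/sqrt(n) contain mu, even for a skewed population:
set.seed(1)
n <- 40; mu <- 1; sigma <- 1
covered <- replicate(10000,
  abs(mean(rexp(n, rate = 1)) - mu) < 2 * sigma / sqrt(n))
mean(covered)  ## close to 0.95
```
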

c. You dislike our use of integration to find the area under the curve; you want us to use the pnorm function. But we wanted students to make the connection with the summations we did in the previous chapter, and to show that integration (despite the frightening visual effect it has on students when they see it in other textbooks) is just a summation. It serves a useful purpose in this particular context.
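The two routes agree, of course, and showing students both is precisely the point (a two-line check, not from the book):

```r
## Area under the standard normal curve between -1 and 1,
## once by numerical integration, once via pnorm:
integrate(dnorm, lower = -1, upper = 1)$value  ## 0.6826895
pnorm(1) - pnorm(-1)                           ## 0.6826895
```
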

d. You say: “As the authors seem reluctant to introduce the binomial probability function from the start, they resort to an intuitive discourse based on (rather repetitive) graphs.” We are forced to remain at an intuitive level for the audience we are targeting (see below). I have tried more mathematically oriented approaches and these have never worked.

Similarly, you find it remarkable that “the book spends four pages [36-39] showing through an R experiment that “the sum of squared deviations from the mean are [sic!] smaller than from any other number.” Many students that I teach need that level of presentation to really understand what this means.
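For a reader of this exchange who does not need the four pages, the fact itself can be checked in a few lines of R (illustrative numbers of my own choosing):

```r
## The sum of squared deviations from a candidate value c
## is minimized when c is the sample mean:
x <- c(2, 4, 4, 7, 9)
ssd <- function(c) sum((x - c)^2)
candidates <- seq(0, 10, by = 0.1)
candidates[which.min(sapply(candidates, ssd))]  ## 5.2
mean(x)                                         ## 5.2
```

The book spends four pages on this because, for our students, the closed-form claim means nothing until they have watched it happen.
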

BTW, the English grammar error that you point out above with “sic” (using your favorite punctuation tool, the exclamation mark, which at least to me reads like a triumphant proof of the writers’ incompetence, not just in statistics but even in English) is a frequent error about which scientific papers have been written. It’s called the agreement attraction error. There’s a singular noun phrase headed by “sum”, modified by a prepositional phrase, “of squared deviations…”, and then the auxiliary verb. For reasons that psycholinguists are still trying to uncover, the human language production system makes the interesting error of letting the nearer plural noun control agreement on the auxiliary, even though syntactically it’s an impossible candidate for agreement. It’s one of a slew of truly amazing but very common errors that the human comprehension/production system makes, and we/I made that error. I wouldn’t get as excited about it as you did with that exclamation mark.

e. In many places you assume that your criticism is self-evident; this effect is achieved by a generous but completely uninformative use of exclamation marks. For example:

“I am also dissatisfied with the way confidence and testing are handled (and not only because of my Bayesian inclinations!). The above quote, which replicates the usual fallacy about the interpretation of confidence intervals, is found a few lines away from a warning about the inversion of confidence statements! A warning only repeated later “it’s a statement about the probability that the hypothetical confidence intervals (that would be computed from the hypothetical repeated samples) will contain the population mean” (page 59).”

We did start talking about CIs with a statement that’s widely believed but false, but as you point out, we do explain the meaning correctly. I don’t know how to interpret the exclamation mark in: “The above quote, which replicates the usual fallacy about the interpretation of confidence intervals, is found a few lines away from a warning about the inversion of confidence statements!” Does the exclamation mark mean that we should have provided the warning earlier, or do you mean that the warning is wrong? It’s hard to know from your review, at least for me.

You imply that a proper treatment of confidence intervals should be a Bayesian one:

“I am also dissatisfied with the way confidence and testing are handled (and not only because of my Bayesian inclinations!).”

If you are seriously suggesting that the audience this is intended for should be given an introduction to Bayesian methods, you need a reality check. Try teaching such a course yourself once. There is a good Bayesian book by Kruschke that might work at the grad level for linguists, but introducing CIs from a Bayesian perspective at the undergrad level would only lead to disaster. Even Gelman and Hill (2007) start their book from a non-Bayesian perspective.

f. “The book spends a large amount of pages on hypothesis testing, presumably because of the own interests of the authors, however it is unclear a neophyte could gain enough expertise from those pages to conduct his own tests.”

“[the book] cannot deliver the expected outcome on its readers and train them towards more sophisticated statistical analyses. As a non-expert on linguistics, I cannot judge of the requirements of the field and of the complexity of the statistical models it involves. However, even the most standard models and procedures should be treated with the appropriate statistical rigour. While the goals of the book were quite commendable, it seems to me it cannot endow its intended readers with the proper perspective on statistics.”

How do you know how useful this book will be for linguists? What experience do you have in teaching linguistics students (or students in similar areas) to do t-tests, ANOVAs, linear mixed models, etc.? I would like to point out that you have zero experience in that area, and so you are not really competent to judge the book’s usefulness for the audience it’s aimed at.

I looked at your home page to find out what level you teach at. I thought that the course entitled “Cours de Statistique” (http://www.ceremade.dauphine.fr/~xian/MD2.html) would probably be an introductory one compared to the others you have. None of your links for the chapters accompanying the lectures work (I feel the urge to put an exclamation mark here, but I desisted! ;) so I couldn’t judge what the exact level is. But your course textbook, Casella and Berger (which I have; a great book), would be an impossible textbook for my students to use. Just as an example: I would lose my students if I tried to walk them through chapter 1 on probability theory. Example 1.5.5 on page 32 would be the kind of thing that would cause the course to sink like a stone. I saw the Feuille de Travaux Dirigés 1 on your home page. There is no way a linguistics student could work through those problems.

I would like to suggest an experiment: just for fun, go and teach statistics to an undergraduate class in linguistics or some such area (I can give you the names of some contacts in Paris; there are many prominent psychologists and psycholinguists there). Please do share your experiences on this blog once you do that. It’s possible French undergrads are much better than US or German ones. If so, please let me know, and please let me know how I can get a professorship in Paris, I speak French and would love to teach such students (even if I have to teach in French).

Your above statements about the supposed uselessness of this book (modulo the errors, which I apologize for and will correct as soon as I can get Springer to bring out a corrected edition) are empirically verifiable. Your comment only shows how much of a disconnect there is between statisticians and amateur end-users of statistics.

Here are some emails I got about ch. 7 of the book (which was available online for free as a tutorial for some years). I anonymized the emails because I am not sure people would want to see their names in the public domain.

1. “Your “Tutorial on mixed-efects [sic] models (Part 1 of 2)” was a great introduction of the topic for me.”

2. “I’ve read your tutorial part 1, and it was very helpfull [sic] for me.”

3. “I have come across a paper written by you entitled: Tutorial on mixed-effects models (Part 1 of 2). I have found it very helpful in learning mixed effects models in R,”

4. “This is xxx, a PhD student in the Psychology department at Stanford. I read your lmertutorial.pdf, and I was wondering if Part 2 is already available. It is such a good tutorial so far!!”

5. [Math student:]

“I’d like to thank you for writing the tutorial, it has been very helpful in getting an understanding of mixed-effect modelling in R. I hope you’ll write part 2 eventually.”

6. “Hi, my name is xxxx; I’m a Ph.D. student in Bioinformatics at xxx University, where I study natural language processing, machine learning, and Neuroscience. I’ve been researching various methods for analyzing a particularly complicated dataset of longitudinal, user-specific data of …, and recently came across a document you wrote this past May–“Tutorial on mixed-effects models (Part 1 of 2).” You have a unique ability to explain mathematical ideas in an easy-to-understand way; I found this article very helpful, especially since R is my preferred program for statistical analysis. I’m giving a talk to some colleagues next week on the potential usefulness of using mixed-effects models on psychophysical and neuropsychological data, and I was wondering, would you mind if I passed your article along to my audience as background reading? Is your May, 6th, 2008 version the most recent one, or is there a version that includes part 2? Were there any articles you found particularly helpful in seeing how the mixed-effects framework could be applied in your research?”

And here’s a review from a summer school (European Summer School on Language, Logic, and Information) course I taught in 2009 in Bordeaux. Keep in mind that the kind of students you get in ESSLLI are the cream of the crop; many of these students come from mathematics and logic and could (and do) run circles around me. At this point (in 2009) the book existed as a free download, and several of the students told me that I should publish it because they like to have a physical book to look at rather than (free) online lecture notes.

####################### begin ##############################

Sophia Katrenko katrenko@science.uva.nl


to vasishth@rz.uni-potsdam.de

date Tue, Feb 9, 2010 at 1:41 PM

subject ESSLLI 2009 Course Evaluation

Dear Prof. Vasishth,

Thank you very much for a very interesting course at ESSLLI 2009! Please find the evaluation details below (max = 5).

Lecturer 4.8

Course content (did it correspond to what was proposed?) 4.6

Course notes 4.2

Session attendance 4.6

(27 respondents)

interesting, entertaining, and useful – a great introduction to the topic!

Excellent starters course. Even though I knew all the content, it was nice to listen to the course

the lecture was excellent! Motivating and inspiring.

Excellent course! Very useful and packed with lots of tips and statistical best practices. It was great it came with an ebook as sometimes the material was a bit too dense and the ebook helps you to catch up easily. The simulations in R were very useful. The lecturer was great in explaining the topics and made the class quite interesting. One of my favourite courses at ESSLLI this year.

Very good lecturer and teacher. A bit too much of handwaving maybe at times, and some steep accelerations between class 2 and 3, but the lecture notes and the demos are very helpful, and the atmosphere was excellent. It definitely helps me to move on to R, so the purpose of the course is fulfilled for me.

The speaker was very ambitious and motivating! Well done! :) but the level of the course was not foundational for me (too difficult)

The lecturer did a good job catering for a highly heterogeneous audience. The course covered hypothesis testing with little time left for simulations. Perhaps next time the statistics course could be spread across 2 weeks?

I wish there had been more exploration of the program R, but I can understand why the instructor chose to focus more on the theoretical side.

I wish this would have been a two weeks course.

I really liked this course and the lecturer! It was entertaining and useful!

Great course! Great teaching style — clear and at the same time entertaining. Really enjoyed it and learned a lot that I’m sure I’m gonna use! Thanks!

Everything I wanted to know about (this kind of) statistics but was afraid to ask :-). Besides it gave immediately re-usable examples of how to use R – what more can you ask?

adapted for people who don’t know statistics; very good presentation and motivation of the lecturer

On behalf of the ESSLLI Standing Committee,

Sophia Katrenko

—

Sophia Katrenko

Informatics Institute

Faculty of Science

University of Amsterdam

Science Park 107

1098XG Amsterdam

The Netherlands

####################### end ##############################

This is only a sampling of the feedback I got. Of course, I could have made up the above. But I didn’t. I can forward the emails to you if you want to verify the sources yourself.

g. You say: “In Section 2.3, the distinction between binomial and hypergeometric sampling is not mentioned, i.e. the binomial approximation is used without any warning.”

Again, our goal is not completeness, our goal is to try to get people to understand at a very basic level the minimum that they need to know in order to use statistics. Knowing about hypergeometric sampling is very cool, but it’s not going to take them any further.

You need to understand that there’s a cron job running inside the head of each student: “Why do I need to know this?” (another cron job is: “Is this going to be on the exam?”; most students are not there because they are hungry for knowledge, they are there to pass the course somehow). If you start telling them things that will not be relevant down the road, you risk losing them completely. I would like to point out that the “normal” linguistics student who uses statistics will never need to know what hypergeometric sampling is. For example, an advanced book like Gelman and Hill 2007 talks extensively about binomial sampling, but never about hypergeometric sampling (correct me if I am wrong). Now why would that be? If we add this kind of detail to the book, students will be asking us: why are you telling us this? The same problem holds for Casella and Berger’s intro chapter, which has an extensive excursus (excursus from the students’ perspective) on set theory.
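And the approximation the reviewer wants flagged is, in the regime our students actually encounter, numerically almost invisible (hypothetical numbers: population N = 10000 with K = 3000 successes, sample n = 20):

```r
## Sampling without replacement (hypergeometric) vs. the binomial
## approximation, which is excellent when n/N is small:
N <- 10000; K <- 3000; n <- 20
round(dhyper(0:4, K, N - K, n), 4)
round(dbinom(0:4, n, K / N), 4)  ## nearly identical
```
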

h. I would like to point out that you did not read the book carefully and as a result your review misrepresents quite a few things.

(i) For example, you say: “the variance being np(1-p) is not stated at all.” Look at page 168. Maybe you wanted it in the main text, but the problem was: how to explain where np(1-p) comes from?

What I really dislike about statistics textbooks are their off-hand comments (rather, commandments), which one just has to take on trust. For example, Gelman and Hill 2007 really made me suffer regarding residuals: on p. 46 they say: “The regression assumption that is generally least important is that the errors are normally distributed. In fact, for the purpose of estimating the regression line (as compared to predicting individual data points), the assumption of normality is barely important at all. Thus, in contrast to many regression textbooks, we do not recommend diagnostics of the normality of regression residuals.” There’s *no* explanation for where this comes from. Statisticians make “recommendations” that we non-statisticians are supposed to take and use blindly. This is the kind of a-statistician-told-me-this situation that leads to cookbook approaches and disasters of the kind I outline below. BTW, I have asked Andrew about this several times, but never really got any useful answer about why the residuals are not important. This issue has important implications for my own research. In eyetracking data for reading, we often compute something called re-reading time: the time spent on a word when one returns to it after having already moved past it to a later word. Often, 80% or 85% of the re-reading times are 0 milliseconds, which means that only 15-20% of the dependent variable values are non-zero reading times. Should I fit a model like:

lmer(rrt ~ factor + (1|subj) + (1|item), data)

knowing that the residuals are going to be wildly non-normal? Till now, no statistician has given me a straight answer to this question. Can you? This is the kind of thing that irritates me about statisticians. They’ve got clear ideas in their mind about the modeling issues, but they won’t take the time to articulate it. Instead, I get responses like “it’s in any standard stats book” (really? which one?), which I interpret as “go to hell, I don’t have the time or energy to talk about this”. Or they’ll heap scorn on you for not knowing the apparently obvious answer to this question.

Returning to the np(1-p) complaint, it’s because I didn’t want to hand down formulas without showing where they come from that I put it in the appendix. For a statistician like you the way things should be presented is very different than the way things need to be presented to the kind of students I teach. The problem is that you have no experience in this area (of presenting material to unprepared students).

(ii) You are pretty contemptuous about the recommendation to use LaTeX etc., wrongly implying in the review that we consider it a prerequisite, but you leave out the statement in the book that none of this is necessary for reading the book. I interpret that as rage, because you deliberately leave out the crucial sentence from the book, which I repeat below:

“In order to use the code that comes with this book, you only need to install R.”

(iii) The first line of your review is wrong. You say that the book is written by “two linguists”. Go and look at Michael Broe’s home page. He was a linguist (and perhaps, once a linguist, always a linguist), but he was certainly not a linguist at the time this book was written.

(iv) Your criticism about the blog is factually incorrect and that’s why I think it reflects rage rather than reason. You didn’t spend any time verifying whether what you wrote about the blog is correct. I didn’t bring this up in my first response because I felt it was petty and I should stick to responding to the important points in your review.

There are two mistakes in your characterization. First, you say: “The authors advertise a blog about the book that contains very little information. (The last entry is from December 2010: “The book is out”.)” The last entry *is* in December 2010, but it’s *not* what you claim. You must have looked at the blog and you must have seen that that’s not the last entry (I didn’t manipulate the blog by the way–I wouldn’t even know how to), so this is a deliberate misrepresentation, and that’s why I feel that the review rampaged out of control.

Second, you say: “I perfectly understand the many reasons for not maintaining a blog (!), but then the site should have been advertised as a site rather than a blog.” You also say: “The reference to a blog in the book could be a major incentive to adopt the book, so if the blog does not live as a blog, it is both a disappointment to the reader and a sort of a breach of advertising.” This is also a misrepresentation and comes across as a hostile attack based not on fact but on rage (this is what I was getting at in my first reply). If you look at page 2 of the book, we advertise a *static* web page for the book, and then we say: “The accompanying website for the book contains (among other things): 1. A blog for asking questions that the authors or other readers can answer, or for submitting comments (corrections, suggestions, etc.) regarding the book that other readers can also benefit from. …” The static page *is* the main page. How is this a breach of advertising? It’s a blog for the *reader* to raise questions etc. The book is six months old, and I’m hoping that there will be more discussion (such as your comments). I agree that I could have made regular entries to the blog (of the type I made in December 2010), but that would be just to keep it active for its own sake.

I think you devalue your review when you make this kind of unfair attack; it’s a gross misrepresentation of the web content accompanying the book (main point: the static page is the main page and that’s clearly stated in the book). I’m going on about this at great length because this kind of thing tells me you are just looking for things to criticize, regardless of whether the facts support your statements or not. That’s a sign of uncontrolled rage. Of course, I may be wrong, maybe you just quickly scanned the pages and made some leaps of inference that just turned out to be wrong. But such is the nature of electronic communication, I cannot see into your state of mind.

The bottom line:

I had offered Christian co-authorship if he would help us clean up the book, but (like all other statisticians I have ever encountered) I got a highly predictable answer: no. One reason for this, according to Christian (see his blog entry above), is that he would not want to be co-author on a book that refuses to use calculus (another reason is that he’s busy, which he explained to me in an email; I understand, but the bottom line is that this leaves me where I started, trying to engage with statisticians but failing). This insistence on calculus shows me that statisticians and mathematicians have not the faintest clue what things are like in a humanities department. I would like to use this opportunity to educate Christian and people like him a little bit:

Most of my linguistics students (graduate and undergraduate) range from reasonably intelligent to highly capable people (we do get people at the undergrad level who should not be in university, but they get filtered out quickly). However, in the last seven years of my teaching statistics to such students, all but one (maybe two) failed to answer this question correctly: Define odds = p/(1-p). Can you solve for p? I.e., can you state p in terms of odds? Most of them simply cannot do it (I gave them 15 minutes). This is a stunning fact for me, and it should give Christian pause. Many students have never even heard of intercept and slope in school, never seen a summation symbol (or so they claim; they probably forget everything). If you mention the phrase “system of linear equations” in class, the humidity level goes up exponentially because everyone starts to sweat buckets of fear. Straightforward matrix multiplication is an alien concept to them (I encountered matrices in class 11 in India; apparently, some students have never seen matrices).
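(For the record, the expected answer is nothing more than a three-step rearrangement:

odds = p/(1-p)
odds(1-p) = p
odds = p(1+odds)
p = odds/(1+odds)

That is all that is being asked for.)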

Now, what should be the response to such a situation? Are these people dumb, and should they therefore all be thrown out of university? The answer is clearly no. We have lots of other evidence about them that suggests that they (those who survive our courses into grad school, and many others who move on to other things) are highly capable people, and at least some will go on to make important contributions to linguistics. Was their schooling deficient? I don’t know–I don’t have the resources to find out what’s wrong with school education such that they cannot work out the answer to the above question.

These people need statistics in their daily life, and calculus and friends are completely beyond their reach. There have been two responses to this dilemma, which is present worldwide in linguistics. One response is to teach cookbook statistics: When you want to do a paired t-test, do t.test(x1,x2,paired=TRUE), and if the p-value is below 0.05 accept the alternative hypothesis, otherwise accept the null hypothesis. YES! This is the suggestion I have gotten in the past. Indeed, if you scan the literature in psycholinguistics and even psychology, you will find that it’s flooded with published null results in top-class journals; such null results are published as crucial evidence in favor of this or that theory. To my knowledge there is not a single attempt in psycholinguistics to do a sensible Bayesian analysis to argue for a null result (Kruschke’s campaign will probably change that soon). When the focus is on publishing non-null results, journal editors in top journals think that lower p-values mean that the probability of the alternative being true is higher, and so they try to force you to get low p-values. I.e., when you teach a generation of linguists to do cookbook statistics, you mostly get a lot of odd stuff flooding the field that would shock a statistician like you. Here are some more examples: It’s standard in psycholinguistics (e.g., event-related potentials experiments) to do 40-50 ANOVAs on different (sometimes overlapping) partitions of the data, and then declare the one or two that are significant at alpha=0.05 as informative about the result. As a reviewer I have complained about this to editors who are world leaders in psycholinguistics, but the complaints are quietly shelved with no further discussion. The current American Psychological Association guidelines (I have only heard this second-hand, I might be wrong) ask for p-values up to three decimal places. Despite Cohen’s and others’ many publications on the subject of power, people do low-power experiments to argue for null results. A reviewer complained about one of my recent papers that we were only repeatedly replicating a previous experiment done by someone else (our result was different, and we did four replications); what’s the point of that, if we already got a significant result in the earlier published work? I once asked a prominent researcher why they did event-related potentials studies with so few subjects. The answer was: there’s a pressure to publish. There just isn’t enough knowledge out there for even the basic stuff.
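To put a number on the 40-50-ANOVAs practice mentioned above, here is a simulation of my own (40 independent tests on pure noise, which if anything understates the problem, since overlapping partitions are not even independent):

```r
## Chance of at least one "significant" result among 40 tests at
## alpha = 0.05 when every null hypothesis is in fact true:
set.seed(1)
hits <- replicate(2000, {
  p <- replicate(40, t.test(rnorm(20), rnorm(20))$p.value)
  any(p < 0.05)
})
mean(hits)  ## about 0.87, i.e., roughly 1 - 0.95^40
```
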

Cookbook statistics leads to chaos. So the alternative is to teach them something about what a p-value and a confidence interval are (and aren’t), what a standard error is, what a t-value is, from scratch and without getting too technical (for reasons explained above). That’s what this book does, and it seems to succeed in this modest aim. At the end of the course our students know what these things mean. One major drawback of this approach is that mathematical precision suffers, and this is what Christian is reacting to. I would like to point out to Christian that (a) he has no good suggestion for an alternative (because he doesn’t teach at this level), and (b) he is not willing to put his money where his mouth is (he will not help write a book, or write a book himself, that’s accessible to such an audience as I describe). I completely agree that someone more competent than me should be doing this; I would love not to have to amateurishly wade through statistics. But statisticians seem to be incapable of doing this (one great exception is Fox’s books, but even those are too hard for the majority of my students).

Now, Christian can say: well, these students need to learn some math. I agree. But we have to teach them a lot of other material, and we cannot provide them with the math education they should have gotten in grade 8. Solving for p in the definition of odds above should be a highly practiced procedure that’s in their blood and is executed effortlessly. This is not the case, and it’s too late for these students to rewind their lives to an early stage and catch up now (well, they could catch up if they wanted to, but they don’t).

BTW, Christian, about one month before you started this discussion, I came up with a solution for bridging the statistician-user gap. I’ve got a sabbatical right now, and during this sabbatical I’m starting to work towards doing an MSc in statistics at Sheffield (distance course). My solution is that I will give up three years of my research life to cover my gaps in understanding of statistical theory so that I can communicate accurately with the audience I am faced with. I’m going to have to become the statistician that nobody else is willing to be.
