The Ethical Advantage of Collaborative Research
The Citizens’ Initiative Review (CIR) research project is a collaborative effort, involving a pair of principal investigators and dozens of graduate student and faculty colleagues who serve as authors and co-authors. Recent events involving falsified data in a prominent political science article have prompted me to reflect on my choice to lead research collaboratives over the past several years.
The original purpose for conducting the research in this way was entirely pragmatic: More can be learned (and published more rapidly) by involving a large and diverse group of colleagues, many of whom are junior scholars eager to get their work completed as they prepare for the job market or seek tenure or a promotion at their respective colleges and universities.
I chose this model for studying the CIR because it had proven effective in previous projects I’ve undertaken with funds from the National Science Foundation (NSF). My work on the civic impact of juries spawned more than a dozen articles with a diverse set of lead and co-authors, as well as a book I wrote with three colleagues. One of those articles was inspired by a colleague who believed a reanalysis of our primary data would reveal hidden civic benefits that flowed from civil juries, and her hunch proved correct. The resulting article, “Deliberative democracy and the American civil jury,” was published in 2014, a full decade after the corresponding data were collected.
Another NSF grant enabled me to join a collaborative project already in progress to study the 2009 Australian Citizens’ Parliament. Once we completed primary data collection on that project, three colleagues and I chose to write up the results with two dozen other authors in an edited volume that examined the unique deliberative event from a variety of perspectives. The resulting book, The Australian Citizens’ Parliament and the Future of Deliberative Democracy, includes chapters written by graduate students from the US and the UK who had full access to the data. Some of the conclusions they drew were far from the original interpretations of those who initiated the research.
The CIR project has taken this idea even further. Whenever a CIR event takes place, two or three researchers observe the process first-hand, and thirteen different people have played that role across the nine CIRs held from 2010 to 2014. The transcripts created by an independent agency are widely shared for interpretation from any conceivable angle. More than a dozen people have shaped the surveys, with new items added each round to test new hypotheses. The survey data themselves are then shared even more widely. When a co-author presents work at an academic conference, the audience is typically invited to join the team and investigate the data further. Thus, as the CIR dataset grows richer, the collaborative attracts more partners. (Anyone reading this who wants to join the team should drop us a line. The data are described broadly at this site, and inquiries should be sent to firstname.lastname@example.org.)
One indirect result of this approach is that the data, and the results of our analyses thereof, become ever more reliable and transparent. Any member of the collaborative can inspect the analyses done by another, and no single person controls the quantitative or qualitative data. Even the grant proposals seeking funding for the research have been a collaborative effort, with a successful 2013 proposal being refined and vetted at a large workshop consisting of CIR researchers.
The tragic case of academic misconduct currently in the news (in which a graduate student falsified data in a study of attitude change on same-sex marriage) had precisely the opposite features. The fraud was carried out by a single individual whose co-author and academic adviser were too distant from the initial data collection and its refinement to recognize what was happening. When Science chose to retract the article, editor-in-chief Marcia McNutt cited three reasons:
(i) Survey incentives were misrepresented.
(ii) The statement on sponsorship was false.
(iii) …independent researchers have noted certain statistical irregularities in the responses [and the primary author] has not produced the original survey data from which someone else could independently confirm the validity of the reported findings.
Those kinds of misrepresentations and obfuscations would be impossible in a collaborative like the CIR, given that so many different hands touch each of these elements of the project. One can read about other cases of data and analysis falsification in places like Retraction Watch and see that nearly all of these involve a “lone wolf,” a student or faculty member who chose to write false lab notes, invent or doctor a dataset, or simply produce fake results to advance his or her career. Working in a collaborative provides a safeguard against such activities because any prospective perpetrator would have to recognize the exceedingly high likelihood of getting caught by another member of the team.
As with previous projects on the American jury and the Australian Citizens’ Parliament, the membership of the collaborative changes each year as new scholars come aboard. In other words, one does not necessarily even know who might next pick up the data for analysis and what they could discover. This fosters a spirit of openness, which encourages me to seek out and bring on as co-authors anyone with relevant expertise who shares our interest in the CIR.
When I reflect on where I got started with this approach, I think back to my doctoral dissertation. As I worked to get its various chapters published, I would discuss my findings with anyone interested in the subject matter. That led to a collaboration with Dr. Patricia Moy at the University of Washington, who suggested a reversal of a causal model I had used in my thesis. In the end, I convinced her to become the first author of that particular essay (“Predicting deliberative conversation”). I now belong to what is probably a very small group of scholars who have published a dissertation study as second author, with the lead being someone they didn’t even know at the time the thesis was completed.
At the time, Dr. Moy thought it was generous of me to offer her lead author status, but I suspected then what I now know to be true: If one is systematically studying sufficiently interesting research topics (and has a measure of good fortune), there will be more than enough publication credit to share with one’s colleagues and students. The greater challenge is keeping up with the incoming data and writing up its infinitely complex results. This problem is particularly acute if one can attract the support of NSF, which helps make large-scale research programs possible. By drawing in colleagues, including graduate and undergraduate students, a scholar can build a research team that can not only carry the load but also keep developing the ideas within a research program. As I now recognize, that collaborative model also protects the integrity of the project against any single author who might otherwise suffer an ethical lapse.