Friday, May 07, 2010

Cool Site - Open source cognitive science

Excellent. I am a fan of all things open source, especially open research - we need to move beyond the competition model of research and toward a collaborative model. I just wish they would post more often.

Open source cognitive science

About

In the cognitive sciences, experimentation, analysis, and dissemination are becoming cheaper, easier, and more open. Researchers can produce results more accurately, reproducibly, and collaboratively.

These articles range across the cognitive sciences, and over a suite of techniques that are coming to be known as Science 2.0.

________

Bio

Based in Ottawa, Worldchanging Canada editor Mark Tovey is completing his Ph.D. in Cognitive Science in the Advanced Cognitive Engineering Lab at Carleton University. He is studying, among other things, human response to change, and how to accelerate and catalyze societal change.

He recently edited Collective Intelligence: Creating a Prosperous World at Peace (648 pp., EIN Press), a book that looks at how Web 2.0, mass collaboration, and open source methods can be used to gain traction on global problems.

For more information on Mark, see his website at marktovey.ca.

Here are a couple of recent posts - the one on open source psych research makes me giddy.

Cognitive Science Dictionaries and Open Access

September 22, 2009

There are some quick-reference applications in the cognitive sciences for which Wikipedia is not yet fully adequate. I notice this especially when I’m trying to understand a difficult paper. Usually, one of the reasons a paper is difficult to understand is that it contains unfamiliar terminology.

My own experience - and I suspect this will be a fairly uncontroversial claim - is that the technical coverage provided by paper-only specialist dictionaries is currently greater than that provided by Wikipedia or most other free online reference sources. Three paper reference works I find particularly helpful are the APA Dictionary of Psychology (which seems to be one of the most extensive available), David Crystal’s A Dictionary of Linguistics and Phonetics, and the Dictionary of Cell and Molecular Biology.

When I am fortunate enough to be reading a paper in the library, what I sometimes like to do is pull from the shelves not just one, but several dictionaries relating to the subject at hand. Each time I come to a word I don’t understand, I will look up that term in all of the dictionaries. In many cases, there are pieces missing from one definition that are neatly filled in by another.

If I am reading a psychology paper, I will assemble several dictionaries of psychological terms. In reading a linguistics paper, I will take down multiple dictionaries of linguistic terms. And so on.

There are a few problems with this approach. First, if I am using a set of dictionaries, no one else in the library can use them. Second, I must actually be sitting in a reference library, with a stack of dictionaries in front of me, to do my reading. Third, it is time-consuming to look up each of the definitions one by one.

Single library copies are less of a problem as publishers create online editions of their reference works, and libraries subscribe to them. My institution, for instance, allows me to access the Routledge Encyclopedia of Philosophy (REP) and the MIT Encyclopedia of Cognitive Science (MITECS).

This is useful to the academic researcher, but not all institutions subscribe to all publications, and not all researchers have institutional access. Even when they do, online access does not solve the third problem: looking up a single term in several dictionaries, one at a time, is still time-consuming.

There are a lot of metacrawler search engines out there, with various levels of customizability. None that I know of, however, is flexible enough to work with your library’s proxy server and query multiple subscribed dictionaries at once.
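
A minimal sketch of the tool I have in mind, assuming invented dictionary endpoints and an EZproxy-style gateway address (none of these URLs is real), might look like this in Python: route each query through the library proxy and hit every subscribed dictionary concurrently.

    import concurrent.futures
    import urllib.parse
    import urllib.request

    # Hypothetical search-URL patterns; a real version would use each
    # publisher's actual query URL and parse its result pages.
    SOURCES = {
        "dictionary-a": "https://dict-a.example.edu/search?q={term}",
        "dictionary-b": "https://dict-b.example.edu/lookup?entry={term}",
    }
    LIBRARY_PROXY = "http://proxy.mylibrary.example:8080"  # assumed EZproxy-style gateway

    def fetch_entry(name, pattern, term):
        """Fetch one source's result page for a term, routed through the proxy."""
        url = pattern.format(term=urllib.parse.quote(term))
        handler = urllib.request.ProxyHandler(
            {"http": LIBRARY_PROXY, "https": LIBRARY_PROXY})
        opener = urllib.request.build_opener(handler)
        with opener.open(url, timeout=10) as resp:
            return name, resp.read().decode("utf-8", errors="replace")

    def lookup_everywhere(term):
        """Query all configured dictionaries concurrently."""
        with concurrent.futures.ThreadPoolExecutor() as pool:
            futures = [pool.submit(fetch_entry, n, p, term)
                       for n, p in SOURCES.items()]
            return dict(f.result() for f in concurrent.futures.as_completed(futures))

    if __name__ == "__main__":
        for source, page in lookup_everywhere("phoneme").items():
            print(source, len(page), "bytes")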

I started wondering: when it comes to reference works in the cognitive sciences, are there strong open access alternatives? Is it possible to produce a reference work which approaches the depth and specificity of those mentioned above, in a cost-effective, open access format?

It is possible, because at least one example already exists.

A model reference work in this space is the Stanford Encyclopedia of Philosophy (SEP). It’s peer-reviewed, revised quarterly, and each entry is maintained by a single expert or team of experts. Reading the SEP’s Publishing Model is highly instructive. They have carefully automated a great deal of their workflow to keep costs down. Designated areas for authors, subject editors, and the Principal Editor each provide functionality designed to make that role easier to perform. Reminders, quarterly archiving (to create unchanging, citable references), cross-referencing, and link-checking are all automatic, reducing the editorial burden. This approach, which the SEP bills as a scholarly dynamic reference work, seems to be unique to the SEP, but could clearly be applied elsewhere.
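
To give a flavour of that kind of automation: here is a toy Python sketch of the simplest such task, link-checking, which flags every external link in an entry that no longer responds. The SEP’s actual tooling is surely more sophisticated; the crude href regex here is just for illustration.

    import re
    import urllib.request

    LINK_RE = re.compile(r'href="(https?://[^"]+)"')  # crude; real HTML needs a parser

    def broken_links(entry_html):
        """Return the external links in an entry that fail to respond."""
        broken = []
        for url in LINK_RE.findall(entry_html):
            try:
                req = urllib.request.Request(url, method="HEAD")
                urllib.request.urlopen(req, timeout=10)
            except Exception:
                broken.append(url)
        return broken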

What other Open Access reference works exist in the cognitive sciences? Are there other Open Access reference works with the depth of MITECS or the range of the APA dictionary? Could the scholarly dynamic approach (peer-reviewed, single-author, regular fixed editions) work across the cognitive sciences? What other models exist that might be tried?

Lastly, are there tools I have missed that might be useful for looking up multiple definitions simultaneously?

I welcome your thoughts.

______

‘About the Stanford Encyclopedia of Philosophy’. The Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.). URL = <http://plato.stanford.edu/about.html#pubmod>.

Craig, E. (2003). Routledge encyclopedia of philosophy online. London: Routledge. http://www.rep.routledge.com/index.html.

Crystal, D. (2008). A dictionary of linguistics and phonetics. Malden, MA: Blackwell Pub.

Lackie, J. M., Dow, J. A. T., & Blackshaw, S. E. (1999). The dictionary of cell and molecular biology. San Diego: Academic Press.

Wilson, R. A., & Keil, F. C. (Eds.). (1999). The MIT encyclopedia of the cognitive sciences. Cambridge, MA: MIT Press.

This is the one:

Open data for online psychology experiments

April 13, 2010

Research activities in cognitive science are usually limited to a single lab, or to a small group of collaborators. But they need not be. A key problem in encouraging wider collaboration is finding ways of sharing data from human subjects that do not compromise the privacy and confidentiality of the participants (DeAngelis, 2004), or violate the legal and ethical norms designed to protect that privacy.

Bullock, quoted in DeAngelis (2004), notes other difficulties: psychology traditionally has not built large data systems for storing or sharing large data sets, and has not developed the culture of data sharing seen in other disciplines.

The challenge in brief: what would it take to build a general-purpose internet-based experiment presentation and data collection system in which the resulting data is automatically and anonymously shared after a suitable embargo period? This could be done by individual researchers self-archiving data, or by sending the data to a repository. One advantage of such an automated data publishing system would be a reduction in the cost of publishing properly formatted raw psychological data.
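
The core bookkeeping of such a system is small. As a sketch only (the embargo length, in-memory queue, and CSV output are arbitrary choices of mine, not a description of any existing system): store each anonymized result with a release date, and have a scheduled job publish whatever has passed its embargo.

    import csv
    import datetime

    PENDING = []  # (release_date, anonymized_row) pairs awaiting release

    def archive(anonymized_row, embargo_days=180):
        """Queue an anonymized result row for release after the embargo."""
        release = datetime.date.today() + datetime.timedelta(days=embargo_days)
        PENDING.append((release, anonymized_row))

    def publish_due(outfile="open_data.csv"):
        """Append every row whose embargo has expired to the public file."""
        today = datetime.date.today()
        due = [row for release, row in PENDING if release <= today]
        PENDING[:] = [(r, row) for r, row in PENDING if r > today]
        with open(outfile, "a", newline="") as f:
            csv.writer(f).writerows(due)
        return len(due)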

This is a practical project, entirely feasible to contemplate and build with today’s technology. Its charm is that it requires three distinct issues to be worked out in order to succeed.

One of these issues has to do with the internet: how do we offer reasonable guarantees that collected data remains anonymous, and is not compromised in transmission back to the experimental server or on the server itself? Although requirements vary by institution, the main one for making human subjects data publicly available is that the data be presented anonymously: the participant’s name, and any personally identifying information, must not be stored with the data.

This simple practice of separating data from identifying information is complicated by the fact that the data is sent over the internet, where it can be intercepted and where servers can be compromised. Solutions could include strong encryption or IP anonymization. They could also include discarding potentially identifying data on the client machine before it is even transmitted, or discarding some of the received information after summary calculations are made, but before it is stored.
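
The client-side discarding idea can be made concrete with a few lines of Python; the field names here are invented for illustration. The point is that identifying fields never leave the participant’s machine, and quasi-identifying values like exact age are coarsened before transmission.

    IDENTIFYING_FIELDS = {"name", "email", "ip_address", "user_agent"}

    def anonymize_trial(record):
        """Return a copy of a trial record with identifying fields dropped
        and exact age coarsened to a decade band."""
        clean = {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
        if "age" in clean:
            clean["age_band"] = f"{(clean.pop('age') // 10) * 10}s"
        return clean

    trial = {"name": "A. Participant", "email": "a@example.org", "age": 34,
             "stimulus": "word_17", "rt_ms": 512, "correct": True}
    print(anonymize_trial(trial))
    # -> {'stimulus': 'word_17', 'rt_ms': 512, 'correct': True, 'age_band': '30s'}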

A second issue is societal: under what circumstances do we have the right to share human subjects data? The answer can vary by institution, and it involves aligning the policies and interests of different stakeholders. University oversight on these matters can include ethics committees, privacy officers, and legal departments. There may also be assertions of intellectual property by the institution or by the body funding the research. One or all of these groups might need to be consulted, depending on the location of the researcher and/or the repository. This is an issue that would need to be explored carefully and sensitively with the relevant stakeholders at the repository institution.

A third issue is experimental. There is no shortage of online experiments on the web: Psychological Research on the Web lists hundreds. As the Top Ten Online Psychology Experiments points out, it is a little hard to assess the validity of their results because of variations in the speed of the hardware. (That post also notes that we don’t know who is taking these tests, or whether they have understood the instructions properly.) How can we offer reasonable guarantees that data collected on different hardware will be valid? This can include ensuring timing accuracy for both input and output (Plant & Turner, 2009), and adjusting stimuli to ensure similarity of size, colour, or volume.
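
Software alone cannot fully validate display and input timing, which is part of Plant and Turner’s argument, but a crude self-check is easy to sketch. This Python snippet (the 16.7 ms nominal interval is an assumption, roughly one frame at 60 Hz) requests a fixed delay repeatedly and reports how far the actual delays drift, bounding the host’s timer jitter:

    import statistics
    import time

    def measure_jitter(interval_ms=16.7, samples=200):
        """Request a fixed delay repeatedly and report the mean/sd overshoot."""
        errors = []
        for _ in range(samples):
            start = time.perf_counter()
            time.sleep(interval_ms / 1000.0)
            actual_ms = (time.perf_counter() - start) * 1000.0
            errors.append(actual_ms - interval_ms)
        return statistics.mean(errors), statistics.stdev(errors)

    mean_err, sd_err = measure_jitter()
    print(f"mean overshoot {mean_err:.2f} ms, sd {sd_err:.2f} ms")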

An embargo period is important for three reasons: (a) to protect participant privacy, (b) to protect the integrity of the experiment, and (c) to protect, to the extent they desire, the researcher’s work. In particular,

(a) it is important not to release data immediately upon collection, because anyone who knew when a participant completed the experiment might be able to trace the data back to them. A standard (known) embargo period would have the same weakness, since the release date would still reveal the collection date. A better approach may be a randomized embargo period (a sketch of which follows this list), or a single set embargo applied to all of the collected data, so that records are released together.

(b) when online experiments are conducted, the data is usually not made available until after the experiment has finished running, so that no potential participant can look at the results and be influenced by them.

(c) some researchers may not wish to release their data until they have published, but would be happy to release the data afterwards.
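
To make point (a) concrete, here is a minimal sketch of the randomized embargo; the nominal period and window are arbitrary illustrative values:

    import datetime
    import random

    def randomized_release(collection_date, nominal_days=180, window_days=60):
        """Draw a release date uniformly within +/- window of the nominal
        embargo, so the release date no longer pins down the collection date."""
        delay = nominal_days + random.randint(-window_days, window_days)
        return collection_date + datetime.timedelta(days=delay)

    print(randomized_release(datetime.date.today()))  # varies run to run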

The online-experiment-runner could, of course, be made open source, as could the experiments that run on it, but these are separate issues.

Does my account include problems that don’t exist in practice? Are there places where it is actually more complicated than I’ve sketched out here? Are there examples of open data collected on the internet? Do you know of other references on the ethics and practice of making human subjects data available in various contexts (for psychology or otherwise)?

Acknowledgements: Thanks to Terry Stewart for many illuminating conversations on open models and open modelling, and to the folks at ISPOC for their model repository. Thanks also for stimulating questions and feedback to Michael Nielsen, Greg Wilson, Jon Udell, Andre and Carlene Paquette, Jim McGinley, and James Redekop.

______

DeAngelis, T. (2004). ‘Data sharing: a different animal’. APA Monitor on Psychology, 35(2), February 2004.

Plant, R. R., & Turner, G. (2009). ‘Millisecond precision psychological research in a world of commodity computers: New hardware, new problems?’ Behavior Research Methods, 41(3), 598-614. doi:10.3758/BRM.41.3.598. [Thanks to Mike Lawrence for pointing me at this.]
