Open vs Traditional Science Session
Thursday 17 July 14:00-15:00, OKFest14, M1
http://sched.co/1oCWs4R
Hashtag: #ovcs
AT THE FESTIVAL
## Facilitators
- Alexandre Hannud Abdo
- Daniel Mietchen
- Jenny Molloy
## Participants - name, contact (if you want to leave it), number of attendees
- Konrad Förstner (@konradfoerstner)
- Matthias Fromm (@matthiasfromm, fromm@mfromm.de)
- Catriona MacCallum (@catmacOA)
INTRO
Daniel Mietchen posted a great blog post about this session at http://2014.okfestival.org/can-we-make-research-more-efficient-through-increased-openness/ which is copied below:
Researchers spend a lot of their time thinking about how to test assumptions or hypotheses and how to separate different effects that jointly influence some observation or measurement. In their famous 1887 experiment, for instance, Michelson and Morley took great care to measure the speed of light both in the direction of the Earth’s motion and perpendicular to it. Within a small observational error, the two speeds were identical, which provided the first crucial hints that the speed of light might actually be a constant in a given medium, and that there may be no ether involved in transmitting light through space.
Surprisingly, similar rigor is not normally applied to the practice of research itself: we do not know what research funding and evaluation schemes are best suited to make specific kinds of research most efficient, we keep the Journal Impact Factor as a means of evaluating articles, researchers, institutions and all sorts of other non-journal things despite knowing that it is ill-suited for those purposes, and we do not know whether the status quo of keeping the research process out of public view (and publishing some rough summary at the end) is actually beneficial to the research system as a whole.
We want to tackle the latter issue by putting research practice to a test in which we compare the efficiency of traditional to that of open science. While there is some anecdotal evidence, this has never been investigated systematically before. That’s why we are organizing a session at OKFest (Thursday, July 17 • 14:00 – 15:00) to develop a framework for:
- showing that open science can be more efficient than traditional approaches under some conditions
- exploring the space of meaningful conditions in a systematic fashion
- understanding what outcomes can be properly compared in a practical experiment
- actually putting open and traditional research to an efficiency test
- identifying funders that may be interested in supporting such an efficiency test
We hope to see you there in person or via the session’s Etherpad.
The idea is to turn this framework into a research proposal that stands realistic chances of getting funded in some way. The outcomes of the session will then be fed into a whole-day open grant writing session at Open Science, Open Issues on August 19, at the end of which we hope to have a draft proposal that covers all major aspects of the topic and can easily be adapted for submission to suitable funding schemes around the globe.
Even if these proposals are rejected, the submissions will help to raise awareness of the issue amongst funders and reviewers, and if such a proposal actually gets funded, then we can finally put research to a test to find out whether openness increases research efficiency or not.
Participants could choose to join one of four main groups, covering:
- Who could be the research subjects for a study on open science? Is a particular discipline/country/career stage better suited to such a study?
- How could one go about answering the question? Potential research methodologies and pitfalls, correlated factors and complications to consider.
- How can we actually make this happen? What might the funding mechanism be and are there collaborators or funding agencies that would be interested?
- An introduction to how we reached this point - what are the proposed benefits and problems of open science and what questions should we be asking more broadly (this session focuses on efficiency)
As background, some of the discussions from the open and collaborative science for development workshops (funded by IDRC and organised by OKF and the OpenUCT Initiative in 2013) were mentioned. The working paper arising from that project is openly licensed and open for comments and suggestions here:
https://t.co/iaYp8Jk6nJ
NOTES FROM GROUP WORK
In which contexts could we study open science?
- Disciplines
  - In computer science - open source vs. closed source software projects
    - evaluation via hours spent/time needed - help from external committers
  - Drug development
    - Could not be completely open - another layer to anonymise would be needed
  - Should be performed in different research fields to prove the increase in efficiency for all of them
  - ROS (ROS.org) - Robot Operating System - as an example where an open approach is widely adopted
  - Matthew Todd - had a simple setup - testing the activity of compounds; did this openly
  - Focus maybe only on certain parts of the research cycle, e.g. grant applications
    - would require a larger number of grant applications
- Geography
  - Could be done in developing countries
- Career Stage
  - An established scientist (e.g. a professor) has a lower risk and could do this but would not necessarily have the time; a PhD student or postdoc would
- Methods
  - Challenge/competition to solve a research question where you can win a prize
    - As the closed group could have access to the open group's outcomes, the tasks would have to differ a little
    - Maybe use simple questions, like finding the optimal concentration of a compound
    - A lower prize might be an incentive to perform an experiment openly
  - Universities and funders should require openness
  - How can we measure the advantages of the open approach, like usability?
    - in the long run via data citations; currently we might miss it
    - There is an increasing number of tools for this in the Digital Humanities - these could be applied to conventional approaches
  - We always need a (closed) control group - in many scenarios this is already available
  - Maybe do a meta-study with the open projects that are around
    - problem - usually no good control group
How could we fund research into open science vs traditional science?
- Study Considerations
  - advantages of open science may be harder to test than advantages of open access
  - studies/analyses of impact are necessary
  - mention of the Human Genome Project -> huge investments followed the funding of the project
  - efficiency might be a good argument, but how do we measure efficiency?
  - who can tell how efficient science is at the moment?
  - fund projects that can demonstrate the need for open data and open science - reuse of data, new applications, etc.
  - there are criteria besides efficiency, e.g. use for society
- The funding landscape
  - "how do you provide incentives" - this is something funders can do
  - how do we change the incentive system?
  - have funders ask the right questions, e.g. was your past research needed?
  - increase funders' pressure regarding requirements of openness
- Projects and people
  - who can do that, who can apply for funds?
  - social scientists and economists might be the people to test this and to apply for grants
  - citation analysts
- Suitable projects
  - if open data is useful, there should be an impact in papers
  - how many papers, how many products, how many actual users result from science projects?
  - metadata on open access publications: where did innovations and products come from, do they stem from open access publications?
  - citation analysis
  - altmetrics
  - look at publication data, citations?
  - fund open review to discover advantages of open science projects
  - look for patterns where changes follow policies
  - measure citizen science projects' success
How could we study the efficiency of open science?
- Has this been tried before?
  - European Commission: open data pilot review
    - they don't know themselves how to evaluate
    - Science 2.0 framework: look at evaluation
    - incentives for reuse and efficiency
    - based on how they implemented open access
  - Viability evidence
    - Identified galaxy times
- What is efficiency?
- Two approaches:
  - efficient for whom?
    - the people, the scientists, the institutions
    - but public policy can handle incentives to scientists
  - Can we assume non-negativity of open?
    - why not make things open? fear
    - overhead: metadata, reproducible outcomes
    - but it also benefits yourself
- Measurables
  - Impact of different licenses
  - measuring the fear of opening
  - Replication
    - assume it is generally a good thing
    - study if it leads to more, better publications
    - compare whether studies have been replicated, for how long, and whether replications confirm or disprove
    - use licences
    - Science Exchange
    - could be turned into an experiment
    - measure time between publication of work and citation (of some work item) (in open science it could be negative)
    - indexed vs non-indexed open scientific data
  - Reuse
    - time for data reuse
    - databases like Dryad have both closed and open datasets
    - are open scientific works less likely to be lost? (data loss)
    - there is a paper already looking at a baseline for this
  - efficiency of translating scientific results into technology
- Process
  - review of past registry
  - social experiments
    - take two researchers and give them a similar task
  - track the open bits of a project in a registry, over multiple projects
    - but how to separate?
    - what can be done, what can't be done
  - there are measurables throughout the process, not only in the outcomes
    - number of collaborators
    - work hour metrics?
  - list of people who applied for a study from a funding body
    - check the outcomes for open and closed
  - collaboration with people who are not under the influence of the open science advocacy mindset
  - an ideal experiment would involve both open and closed science advocates
  - participation of multiple interests in the design
- Our discussion assumed simple measurements that are vulnerable to variations in context and actors.
- We did not have time to discuss other methods, such as qualitative or mixed methodologies, that might shed light on the biases and confounders involved.
- Dependencies on what part, or tendency, of open science
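One of the measurables proposed above - the lag between publication and first reuse or citation - could be compared between an open and a closed group with very simple tooling. As a minimal sketch, using entirely invented project records and dates (nothing below is real data):

```python
from datetime import date
from statistics import median

# Hypothetical records: (publication_date, first_reuse_date) per project.
# All dates are invented for illustration only.
open_projects = [
    (date(2013, 1, 10), date(2013, 3, 1)),
    (date(2013, 5, 2), date(2013, 6, 15)),
    (date(2013, 9, 20), date(2014, 1, 5)),
]
closed_projects = [
    (date(2013, 2, 1), date(2013, 11, 30)),
    (date(2013, 4, 18), date(2014, 2, 2)),
    (date(2013, 8, 7), date(2014, 6, 1)),
]

def median_reuse_lag_days(projects):
    """Median number of days between publication and first recorded reuse."""
    return median((reuse - pub).days for pub, reuse in projects)

print("open:", median_reuse_lag_days(open_projects), "days")
print("closed:", median_reuse_lag_days(closed_projects), "days")
```

A real study would of course need matched samples, agreed definitions of "reuse", and the control-group caveats raised above; the point here is only that the raw metric itself is cheap to compute once the dates are collected.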
NEXT STEPS
Daniel will be taking these results to Rio in August for a whole day open grant writing session at Open Science, Open Issues http://www.cienciaaberta.net/encontro2014-en/
The Etherpad here will remain live; we will look into setting up further methods of communication around the topic for the months (and years!) to come.
Interested in Participating?
Register your interest and contact details below and we'll make sure you're sent links to relevant documents.
Jacopo Durandi, @jdurandi jacopo.durandi@gmail.com