DuraSpace at Public Access to Federally-Supported Research and Development Data and Publications Meetings
Submitted on Thu, 2013-05-23 12:53
Winchester, MA: DuraSpace staff members Jonathan Markow and Carol Minton Morris participated in two planning meetings focused on sharing information related to Public Access to Federally-Supported Research and Development Data and Publications. The meetings were held at the National Academies in Washington, DC, May 14-17, 2013 to address issues related to the Office of Science and Technology Policy (OSTP) memorandum requiring that the results of federally funded research be made openly accessible. The meetings were sponsored by a group of cooperating agencies and the National Research Council (NRC) Division on Behavioral and Social Sciences and Education (DBASSE).
In the public comment period DuraSpace was introduced as an independent not-for-profit organization committed to our shared digital future in collaboration with academic, research, cultural, government and technology communities. It was emphasized that our open source repository projects help knowledge communities ensure that current and future generations will have access to our digital heritage while hosted services enable organizations to archive and preserve content with minimal maintenance.
It was made clear that DuraSpace supports the OSTP initiative to promote open access to, dissemination of, and long-term stewardship of publicly funded research.
DuraSpace strongly recommended that technology solutions deployed for this initiative be based on open source software applications, which have advantages relevant to the OSTP directive.
"For one thing, licensing expenses are non-existent compared to the often steep costs of commercially licensed software. Open source software also comes with freely available source code and is supported by active and engaged communities of practice. Government agencies and departments deploying open source applications like DSpace and Fedora are able to join a global community of developers to add or change features to meet specific requirements. Changes may be contributed back to the community so that others can take advantage of them and help maintain them. Or, agencies may simply use the software without any obligation to write program code themselves. Finally, open source software is most often based on open standards, which facilitate interoperability with other applications that adhere to those standards.
Most importantly, users of open source software may invest in its use without any fear that changes to proprietary code will someday stop an application from functioning or, even worse, that the application will become obsolete and simply disappear from the marketplace, stranding users without a growth path. It seems to us that this kind of assurance is critical when one is considering the preservation of our nation's research data and publications."
You may access archived videos of the webcast, written statements, PowerPoint presentations from the formal speakers and rapporteur, and a transcript of the verbal comments made at the meeting here:
Federal agencies have historically relied on publishers and learned societies to review the results of publicly funded research, and on academic libraries to purchase, make available, and preserve the results published in scholarly journals. Everyone at the planning meetings spoke highly of the concept of public open access. How to balance the revenue equation among publishers, learned societies, libraries, and the general public, however, remains unresolved.
There is a wide variety of federal cultures and related content and data types (intelligence, health, finance, etc.). The idea of a federated repository of interoperating repositories seems to have some traction because one size will not fit all. Open source solutions could ensure that research results will not be locked up and that the related software will evolve into the future with community support.
A technical functionality wish list came out of the summary of the publications portion of the meetings:
• Unified search
• Links to data (not all data is tied to publications)
• Globally unique, persistent identifiers required for articles, data, authors, funders, and everything else
• Reuse and machine-readable interoperability
• Mechanisms for open evaluation metrics
• Platforms for grey literature and negative results
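The machine-readable interoperability item above is already partly achievable in practice: repositories such as DSpace and Fedora expose their metadata through the OAI-PMH harvesting protocol, which a unified search service could aggregate. The sketch below is only an illustration, not an endorsed design from the meetings; the XML is a minimal hand-written sample of an OAI-PMH ListRecords response (a real repository returns far richer records), and the DOI shown is hypothetical.

```python
import xml.etree.ElementTree as ET

# Hand-written minimal sample of an OAI-PMH ListRecords response, for
# illustration only. Real responses include headers, datestamps,
# resumption tokens, and many more Dublin Core fields.
SAMPLE_RESPONSE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Open Data and Agriculture</dc:title>
          <dc:identifier>https://doi.org/10.9999/example.1234</dc:identifier>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

# Namespace prefixes used in XPath queries below.
NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def extract_records(xml_text):
    """Return (title, persistent identifier) pairs from a ListRecords response."""
    root = ET.fromstring(xml_text)
    results = []
    for record in root.findall(".//oai:record", NS):
        title = record.findtext(".//dc:title", namespaces=NS)
        identifier = record.findtext(".//dc:identifier", namespaces=NS)
        results.append((title, identifier))
    return results

print(extract_records(SAMPLE_RESPONSE))
```

Because OAI-PMH is an open standard, the same harvester logic works against any compliant repository regardless of vendor, which is exactly the interoperability argument made for open standards above.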
Many stakeholders offered excellent suggestions for solving the open access issue for federal agencies. There are integrally related and significant issues around the publication of research data, either as part of publications or as stand-alone data sets. In a move to enhance government efficiency and fuel economic growth, the Obama Administration recently released open data rules: bit.ly/ZTDmKM
Open access to data has begun to transform cultures and governments around the world. In a recent IEEE Spectrum article, author Prachi Patel provides an overview of the recent Forum for Agricultural Research in Africa conference, where "leaders of the G8—the world’s eight wealthiest countries—brainstormed the best ways to make data available without restrictions, in formats easy for humans and machines to parse." Open agricultural data and other government data (such as the Kenya Open Data initiative: https://opendata.go.ke) could provide the key to sustaining a growing world population.