2016 Texas Conference on Digital Libraries
Permanent URI for this collection: https://hdl.handle.net/2249.1/76248
Browsing 2016 Texas Conference on Digital Libraries by Issue Date
Now showing 1 - 20 of 57
Item: Introducing MAGPIE (Metadata Assignment GUI Providing Ingest and Export) (2015-05-26)
Welling, William; Elmquist, Stephanie; Creel, James; Huff, Jeremy; Savell, Jason; Mathew, Rincy; Hahn, Doug; Bolton, Michael; Texas A&M University

The Libraries at Texas A&M University have curated immense output from graduate programs for many decades. With the advent of the Vireo ETD (Electronic Thesis and Dissertation) submittal system, dissertations have been submitted in digital format and made available for download from TAMU’s OAKTrust institutional repository. However, many older dissertations are discoverable only through TAMU’s Voyager-based online card catalog and are publicly available to visiting researchers only in print format. A current digitization effort will make these dissertations available online in OAKTrust. The tool being developed for this purpose is designated MAGPIE (Metadata Assignment GUI Providing Ingest and Export). For the dissertation use case, librarians specified that the tool should display scanned PDF files and OCR (optical character recognition) text output from a file system. The tool then presents these data to annotators (typically student workers) to augment and amend metadata. The presentation interface reads metadata, in this case MARC records, from TAMU’s Voyager card catalog database, thereby pre-populating important fields such as the title and author name. However, a number of other fields, such as the abstract and the names of committee members, do not exist in the card catalog but are available in the document itself. The annotator can simply copy and paste these character strings from the source document into a metadata input form specifically configured for the legacy dissertation digitization and preservation project. The MAGPIE workflow allows a manager to amend, reject, or approve these metadata entries, and to push approved documents into the OAKTrust repository with a single click.
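The pre-population step described above can be sketched as follows. This is an illustrative assumption, not MAGPIE’s actual code: the function name and the simplified record layout are invented, though the MARC tags follow the usual convention (245 = title, 100 = author, 520 = abstract).

```python
# Hypothetical sketch of MAGPIE-style form pre-population from a MARC record.
# The record is modeled as a plain dict of MARC tag -> value for illustration.

def pre_populate(marc_fields):
    """Map MARC tags onto a metadata form's initial values.

    Fields present in the card catalog are copied in; fields that typically
    do not exist there (e.g. the abstract, tag 520) stay blank so the
    annotator can paste them in from the scanned document.
    """
    mapping = {"245": "title", "100": "author", "520": "abstract"}
    form = {label: "" for label in mapping.values()}
    for tag, value in marc_fields.items():
        if tag in mapping:
            form[mapping[tag]] = value
    return form

record = {"245": "A Study of Longhorn Migration", "100": "Doe, Jane"}
print(pre_populate(record))
```

The annotator then fills the remaining blank fields by copy-and-paste from the source document, as the abstract describes.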
The MAGPIE tool has been developed using the Weaver framework, an open-source web-development front-end and web service code base from TAMU Libraries. The web service is built on top of Spring Boot, a popular framework with a large and growing community, documentation, and support. The front end of the web stack consists of AngularJS and Bootstrap. The Weaver framework offers certain advantages, such as automatic updates of document status in the browser window without a page reload. The MAGPIE tool has also been developed with future projects in mind: the importation of content is modular and customizable, as are the metadata import service, the metadata form, and the export/push functionality. We anticipate that the MAGPIE tool will find use for metadata enhancement and automatic repository deposit of newspapers, images, and other institutional collections with or without existing metadata. In this talk, we will examine the initial use case of scanned legacy dissertations, provide some background on the MAGPIE software and its development, demonstrate the functionality of the tool, and conclude with an overview of future ambitions.

Item: Batch Importing into DSpace with the SAFCreator (2016-05-24)
Creel, James; Texas A&M University

A commonly difficult use case for any digital repository is the ingest of large batches of items. Batches can come from all sorts of campus and community stakeholders with varying types and quantities of content, differing ways of representing metadata, and unique needs for access control and licensing. The heterogeneous nature of batches presents a fundamental challenge to automating the importation workflow and has led to ad hoc and brittle solutions. The DSpace institutional repository software enjoys wide adoption in academia and industry, and is a flagship service of the Texas Digital Library to its member institutions.
DSpace offers a simple but powerful batch import format called SAF (Simple Archive Format) that allows for metadata assignment, licensing, organization of files into bundles, and authorization management. SAF is simpler than other programmatic means of importation into DSpace, such as the METS SIP (Submission Information Package) used by SWORD or HTTP POST requests to the REST API. However, generating SAF batches usually still requires external software, programming work, or a combination of both. There have been some efforts to provide generalized tools for processing metadata and content into SAF (notably Peter Dietz’s SAFBuilder, https://github.com/DSpace-Labs/SAFBuilder), but batches with special requirements regarding licensing and permissions have usually entailed custom processing code. In addition, the spreadsheets often used to encode metadata are prone to errors such as invalid field labels and incorrect or missing filenames. It greatly accelerates a batch loading workflow to validate the input prior to generating the archive and attempting to import it into DSpace. A new tool designated SAFCreator aims to provide enough flexibility to eliminate programming requirements for a wide variety of batch loads, and has been used by librarians at Texas A&M to ingest content into several collections this past year. The tool is packaged as a lightweight desktop Java application. Important features include: input of metadata and file references as CSV spreadsheets; support for any number of schema.element.qualifier labels; support for multiple values in a field; wildcards to select all the files in a directory; customizable item licenses; customizable read access policies on items; and modular verifiers for batches. The code is open source at https://github.com/jcreel/SAFCreator and under active development. I welcome and encourage pull requests for new features and verifiers.
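To make the SAF layout concrete, a minimal batch can be generated from a CSV with a short script. This sketch assumes a two-column spreadsheet (“filename”, “dc.title”) and writes only the contents and dublin_core.xml files per item; a real batch (and SAFCreator itself) may also carry license files, additional qualified fields, and policy information.

```python
# Sketch of a minimal DSpace Simple Archive Format (SAF) batch generator.
# Each item becomes a directory holding a "contents" file (one bitstream
# filename per line) and a "dublin_core.xml" file of dcvalue elements.
import csv
import io
import os
from xml.sax.saxutils import escape

def write_saf_item(batch_dir, index, row):
    item_dir = os.path.join(batch_dir, f"item_{index:03d}")
    os.makedirs(item_dir, exist_ok=True)
    # contents: names of the bitstreams to attach to the item
    with open(os.path.join(item_dir, "contents"), "w") as f:
        f.write(row["filename"] + "\n")
    # dublin_core.xml: one dcvalue element per metadata value
    with open(os.path.join(item_dir, "dublin_core.xml"), "w") as f:
        f.write('<dublin_core schema="dc">\n')
        f.write(f'  <dcvalue element="title">{escape(row["dc.title"])}</dcvalue>\n')
        f.write("</dublin_core>\n")

csv_text = "filename,dc.title\nthesis1.pdf,A Sample Thesis\n"
for i, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=1):
    write_saf_item("saf_batch", i, row)
```

A directory laid out this way is what SAFBuilder and SAFCreator produce, and is typically loaded with DSpace’s `import` command-line tool.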
In this workshop, I will demonstrate the tool and provide instruction on DSpace batch imports with SAF.

Item: BIBFRAME Beginnings at UT Austin (2016-05-24)
Cofield, Melanie; Davis, Jee-Hyun; Brown, Amy; Quagliana, Alisha; Ringwood, Alan; University of Texas at Austin; Harry Ransom Center

Staff from UT Libraries, the Harry Ransom Center, and the Tarlton Law Library have been collaborating in discussion group activities during the last year to develop knowledge and skills in anticipation of life after MARC, investigating the brave new world of linked data in libraries with a focus on the Library of Congress Bibliographic Framework (BIBFRAME) initiative. Our group efforts to better understand BIBFRAME and linked data for libraries include in-depth discussions of current literature, webcasts, and presentations; strategic application of Zepheira’s Practical Practitioner training; and hands-on experimentation transforming local metadata in various formats, for various resource types, to BIBFRAME. Our analysis of the resulting transformations has helped us gain insight into mapping complexities, data loss, false transformations, potential new metadata displays, and the limitations of the tools involved. The experimentation process overall has afforded us the opportunity to ask targeted questions about what is needed to move toward linked data and to gain a better view of the frontier of Technical Services staff skill sets. In this panel presentation, we’ll share details about our approaches to maximizing the group learning experience, and lessons learned from grappling with new concepts, data models, terminology, and tools.
Representatives from our experimentation teams will report on the initial experience of transforming MARC and non-MARC data sets to BIBFRAME, and on what we see as emerging questions and next steps.

Item: Memorandum of Understanding Workshop: Creating a Process for Successful Digital Collaboration (2016-05-24)
Currier, Brett D.; Mirza, Rafia; Williamson, Peace Ossom; University of Texas at Arlington

When working on digital projects, it is necessary to draw on experience in various departments within and outside of the library. A planning document called a Memorandum of Understanding (MOU) serves as an agreement between all stakeholders, which will likely include multiple library departments. In order to set expectations, an MOU can assist in the following ways: (1) evaluating current and potential infrastructure; (2) determining whether funding is needed or available; (3) establishing clearly demarcated responsibilities and outcomes for each individual participant; (4) accounting for and settling potential disagreements; and (5) serving as a project management plan. Many faculty members, librarians, and other higher education staff members have experience working individually on projects, but not on large-scale, multi-phase, interdepartmental, collaborative projects that require project management as a priority. The goals of this workshop are to introduce participants to a method of project management suitable for digital projects, to provide participants with the opportunity to practice the negotiations necessary for creating common ground around digital projects, and to supply participants with documentation that they can adapt for use at their institutions. This workshop will begin with the presenters introducing the MOU template (licensed under CC-BY-NC-SA), followed by a presentation and discussion of its structure, in which each section will be described. Participants will be able to ask questions about the document in order to begin work on their own MOU.
After choosing between an MOU for a Data Services Project, Digital Humanities Project, or Scholarly Communications Project, participants will spend time developing their draft MOU. Each presenter will provide guidance for participants working on MOUs in the area of focus that falls under his or her expertise. The presenters will pose common questions and perspectives of outside stakeholders, and participants will work through common pitfalls and troubleshoot and negotiate problems with an example patron. Each participant will leave with a Memorandum of Understanding Workbook, which includes an executive summary, an MOU template, an example workflow, a document to track estimates of university support, and explanatory documents for all of the above, and will have completed an example MOU around one of the three subject areas. Sample activity: after the Workbook has been presented to attendees, each presenter will walk a group through writing an MOU on a project based on their expertise. Drawing on their experience, the presenters will emphasize the iterative and collaborative nature of the process by presenting common issues that arise at each step. Participants in each group will write a memorandum of understanding based on the template. After progress has been made on the draft, presenters will then present common pitfalls, roadblocks, and objections. Participants will learn how to account for the following issues: party disengagement, third-party stakeholders who are not parties to the MOU, and gold-plating (the act of giving the partner more than what they originally asked for). Attendees will then be given the opportunity to work through those issues with their groups.

Item: Managing Assets as Linked Data with Fedora 4 (2016-05-24)
Woods, Andrew; DuraSpace

Fedora is a flexible, extensible, open-source repository platform for managing, preserving, and providing access to digital content.
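Fedora 4 exposes repository resources over a REST API and describes them as linked data. As a rough, hypothetical sketch (the local base URL is Fedora 4’s default, but the resource path and RDF properties are invented for illustration), creating a container with metadata supplied as Turtle looks roughly like this:

```python
# Sketch of building (not sending) a Fedora 4 REST request: PUT a container
# whose RDF properties are supplied in the request body as Turtle.
import urllib.request

FEDORA = "http://localhost:8080/rest"  # assumed local Fedora 4 endpoint

def create_container(path, turtle):
    """Build a PUT request that would create a container with RDF properties."""
    return urllib.request.Request(
        f"{FEDORA}/{path}",
        data=turtle.encode("utf-8"),
        method="PUT",
        headers={"Content-Type": "text/turtle"},
    )

turtle = """@prefix dc: <http://purl.org/dc/elements/1.1/> .
<> dc:title "Sample object" .
"""
req = create_container("objects/sample", turtle)
# urllib.request.urlopen(req) would actually execute the request
print(req.get_method(), req.full_url)
```

Once resources exist, their properties can be indexed into a triplestore and queried with SPARQL, which is the workflow the workshop’s hands-on exercises cover.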
Fedora is used in a wide variety of institutions, including libraries, museums, archives, and government organizations. Fedora 4 introduces native linked data capabilities and a modular architecture based on well-documented APIs and ease of integration with existing applications. Both new and existing Fedora users will be interested in learning about and experiencing Fedora 4 features and functionality first-hand. Attendees will be given pre-configured virtual machines that include Fedora 4 bundled with the Solr search application and a triplestore, which they can install on their laptops and continue using after the workshop. These virtual machines will be used in hands-on exercises that give attendees a chance to experience Fedora 4 by following step-by-step instructions. Participants will learn how to create and manage content in Fedora 4 in accordance with linked data best practices, and how to search and run SPARQL queries against content in Fedora using the included Solr index and triplestore. This workshop is intended as an introduction to Fedora 4; no prior experience with the platform is required. Repository managers and librarians will get the most out of this workshop, though developers new to Fedora would likely also be interested. Attendees can expect to come away with a working understanding of Fedora’s main features and benefits, and a clear path for adopting Fedora as a new repository platform or migrating from a previous version of Fedora.

Item: Animating Digital Libraries (2016-05-24)
Williamson, James; Southern Methodist University

Cultural heritage institutions and archival repositories are increasing their presence online with social media and are working to make a bigger impact online while making the best use of staff time. One of the ways these institutions have been successful in reaching communities on social media has been through the adoption of internet communication and language.
Significantly, more institutions are using frame animation to create GIFs (Graphics Interchange Format). GIFs created from movies, TV shows, artwork, etc. are ubiquitous on the internet. By taking physical and digital materials from their collections and manipulating them to create looping videos, animated artwork, and 3D models, institutions have found a way to adapt this internet currency to promote their archival holdings. This workshop will instruct participants in how to use image editing tools to create three types of GIFs used by cultural heritage institutions and archival repositories on the web. The instructor will lay out, step by step, the underlying techniques that go into creating these GIFs. The first part of the workshop will facilitate the use of several still images of an object to create a 3D-like model. The second part will help participants work with digitized video to create a looping video. The third part will train participants to animate a piece of artwork. The workshop will conclude with a discussion of the issues surrounding the use of these techniques and how they can be addressed. The workshop will last 2 hours. Participants will need to bring their own laptop along with a version of either Photoshop or Photoshop Elements; both are available as a 30-day trial version. To better facilitate hands-on instruction, the number of participants will be capped at 25.

Item: Updating a community metadata standard: Challenges and outcomes (2016-05-25)
Long, Kara; Rivero, Monica; Thompson, Santi; Potvin, Sarah; Park, Kristi; Lyon, Colleen; Baylor University; Rice University; University of Houston; Texas A&M University; Texas Digital Library; University of Texas at Austin

In 2014 the Texas Digital Library (TDL) convened a working group charged with updating the existing descriptive metadata standard for electronic theses and dissertations (ETDs), published by TDL in 2008.
The metadata working group’s report and accompanying data dictionary were released in September 2015 (http://hdl.handle.net/2249.1/68437). The group’s mixed-methods approach to revising the standard revealed divergences between the 2008 guidelines as originally published and the emergent practices of librarians, repository administrators, and others involved in ETD workflows. The updated standard needed to address and recommend significant changes to the Vireo ETD Submission Management System, also developed and hosted by TDL. In this presentation, members of the TDL ETD metadata working group will discuss the effort to update the standard, with a focus on negotiating barriers to dramatic shifts in community standards. We will also discuss outstanding issues, areas of future focus, and the difficult-to-achieve balance between ease of adoption and creating an optimal descriptive metadata standard.

Item: “A Battle Axe in the Time of Battle” - Procedures, Policies and Other Protectants When Working With Sensitive Content (2016-05-25)
Ames, Eric; Baylor University

Every collection has them: materials that contain sensitive content, from songs that disparage cultural groups to forms with identifying information like Social Security numbers and birthdates. And as more and more archival resources are digitized and made available online, it can be a challenge to ensure nothing slips through the cracks. Fortunately, there are concrete steps digitizers can take to keep unpleasant surprises from derailing digital content. This presentation will focus on three key concepts in dealing with sensitive content: prescreening archival collections, working with stakeholder groups, and creating policies to help the institution prepare for and address negative feedback. Eric S. Ames, Baylor University’s curator of digital collections, will share the steps Baylor has taken over the years to ensure proper handling of materials ranging from sheet music to campus yearbooks and beyond.

Item: ETDs, ORCID, and Vireo (2016-05-25)
Lyon, Colleen; University of Texas at Austin

In early spring of 2015 the University of Texas at Austin added an option for graduate students to claim and add an ORCID to their ETD submission. UT Austin uses the Vireo submission system for processing ETDs, and the ORCID option was added as part of a software upgrade. ORCIDs are persistent digital identifiers for researchers; they help researchers distinguish their research from everyone else’s. At the time we weren’t prepared to publicize the option, but we wanted to make it available for anyone to use, intending to provide education and outreach at some point in the future. In the summer of 2015, library staff noticed that many submissions were coming through Vireo with an ORCID included. An initial look at the data revealed that approximately 29% of students had chosen to include an ORCID. This was quite a surprise given the lack of marketing, so in an effort to better understand how many students were choosing this option, we decided to investigate the use of ORCID for all 2015 submissions. A complete assessment of ORCID will be done once all the December 2015 submissions have finished being processed in late February. We intend to look at total numbers, numbers by department, and numbers by degree level (masters vs. doctoral).
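The kind of assessment described, ORCID inclusion rates broken down by degree level, can be sketched in a few lines. The record layout below is invented for illustration and is not Vireo’s actual export format:

```python
# Illustrative sketch: compute the share of ETD submissions that include an
# ORCID, grouped by degree level. The sample records are made up.
from collections import defaultdict

submissions = [
    {"degree": "doctoral", "orcid": "0000-0002-1825-0097"},
    {"degree": "doctoral", "orcid": None},
    {"degree": "masters", "orcid": None},
    {"degree": "masters", "orcid": "0000-0001-5109-3700"},
]

def orcid_rate(records):
    """Return the fraction of records with an ORCID, per degree level."""
    totals, with_id = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["degree"]] += 1
        if r["orcid"]:
            with_id[r["degree"]] += 1
    return {degree: with_id[degree] / totals[degree] for degree in totals}

print(orcid_rate(submissions))
```

The same grouping applied to a department field would yield the by-department numbers the presenters plan to report.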
We will present our findings along with plans for marketing efforts to increase the use of ORCID.

Item: Crafting a Digital Preservation Patchwork: Stitching the Pieces Together (2016-05-25)
Buckner, Sean; Texas A&M University

Charged with developing a digital preservation program at Texas A&M University that would provide coverage for the University Libraries and those they serve, the newly hired Digital Preservation Librarian began in 2015 to assess the Libraries’ goals, content, resources, and needs with regard to digital preservation. What he found was a set of existing and missing elements that were generally not interdependent or connected. This poster would visually represent the actions taken at A&M to “stitch” together a Libraries-wide digital preservation program, a gradual and ongoing process that involves interweaving previously independent or non-existent elements into one blanketing program. This patchwork of elements includes, among others, the development of guiding documentation; selection and/or implementation of crucial asset management and storage systems; modification of preexisting and future workflows; reorganization of legacy content with retroactive acquisition of associated metadata; and coordination with interested or overseeing units. The poster would detail and describe the reasoning, methodology, and results of crafting a nascent digital preservation program in this manner at A&M.

Item: Digital Image Collections @ University of Hawaii (2016-05-25)
Chantiny, Martha; University of Hawaii at Manoa

Presents a wide range of uses of the open-source software “Streetprint” (a LAMP-based, pre-Omeka-type application), including making unique and fragile scrapbooks and research materials available as digital surrogates.
See http://digicoll.manoa.hawaii.edu/.

Item: Open Source: From Technology to Content: A Bigger Picture (2016-05-25)
Herbert, John; LYRASIS

Item: Digitizing the Hawaii Mainichi - English/Japanese OCR @ University of Hawaii at Manoa (2016-05-25)
Chantiny, Martha; University of Hawaii at Manoa

Poster presentation for the 2016 Texas Conference on Digital Libraries (TCDL) discussing the University of Hawai'i at Manoa Library’s digitization of the Hawaii Mainichi and the English/Japanese OCR.

Item: oEmbed Service for Islandora (2016-05-25)
Zhao, Tao; University of Oklahoma

The University of Oklahoma Libraries launched an international exhibit in 2015 called Galileo’s World (http://galileo.ou.edu) featuring our unique History of Science collections. The Drupal-based exhibit site links to high-resolution scans in an Islandora repository (https://repository.ou.edu) using an Islandora-specific fork of oEmbed (http://oembed.com) that will be contributed to the Islandora Foundation in 2016.

Item: Systems Interoperability and Collaborative Development for Web Archiving - Filling Gaps in the IMLS National Digital Platform (2016-05-25)
Mumma, Courtney; Phillips, Mark; Internet Archive; University of North Texas

The Institute of Museum and Library Services (IMLS) awarded a National Leadership Grant, in the National Digital Platform category, to a proposal by Internet Archive’s Archive-It, Stanford University Libraries (DLSS and LOCKSS), the University of North Texas, and Rutgers University. The $353,221 grant will support “Systems Interoperability and Collaborative Development for Web Archiving,” a two-year research project to test economic and community models for collaborative technology development, prototype system integration through development of export APIs, and build community participation in web archiving development and new research and access tools.
The project supports the National Digital Platform funding priority of IMLS by increasing access to shared services and infrastructure while building capacity for broader community input in technology development. Project outcomes will promote system integration, facilitate increased distributed preservation of archived data, and help support new global and local access models made possible through export APIs, with an eye toward modeling post-grant interoperable systems architectures. Archive-It’s status as widely used, shared web archiving infrastructure ensures broad community impact and makes possible the involvement of institutions of all sizes in project work. The involvement of Stanford University Libraries builds on their work in the Hydra community and with digital preservation services. UNT contributes experience in digital library and web archiving technology development, and Rutgers’ work on research uses of web archives ensures the involvement of downstream user communities. Overall, the project will lay the groundwork for future collaboration around interoperability that will enhance the integration of disparate systems, increase local preservation, and improve the discoverability and use of web archives. Mark Phillips of UNT and Courtney Mumma of IA will describe the grant and provide an update on the work completed in the first six months of activity. Attendees will be invited to participate in an active and growing community, a key component of the grant’s success and the work’s sustainability.

Item: The Dataverse Project: An Open-Source Data Repository and Data Sharing Community (2016-05-25)
Quigley, Elizabeth; Harvard University

This poster discusses the Dataverse Project, an open-source data repository and data sharing community.

Item: Automating Digital Collection Processes (2016-05-25)
Starcher, Christopher; Luttrell, Robert; Texas Tech University

Processing digital collections is tedious and time-consuming. It is also susceptible to human error.
Although it is impossible to automate all digital collection creation processes, at the Texas Tech University Libraries we have taken steps to automate many of our processes in an effort to expedite digital collection creation and to eliminate human error where possible. The University Library and the Southwest Collection/Special Collections Library conduct digital collection projects autonomously and collaboratively. While some of the processes at each library are similar, others are unique to the individual environments. In this presentation, we will discuss the needs of each environment and reveal the solutions implemented to meet those needs.

Item: A Geospatially Oriented Humanities Exhibit (2016-05-25)
Peters, Todd C.; Dede-Bamfo, Nathaniel; Long, Jason R.; Texas State University

On January 1, 1987, Texas Monthly writer Dick Reavis set out on a year-long journey to drive every road on the official map of Texas and report his experiences in a series of articles. The Dick Reavis Papers at The Wittliff Collections at Texas State University holds a large collection of postcards, color slides, a travel log book, and several hundred pages of typewritten notes from the journey. The Digital and Web Services Department and The Wittliff Collections are building an innovative web exhibit that uses Reavis’ own shaded highway map to navigate digitized items from the collection, built with ArcGIS, Google Maps, and web scripting.
This presentation will discuss the overall development of the project, the digitization of materials, the use of ArcGIS to create shapefiles, and the creation and integration of the website with Google Maps.

Item: Envisioning a Geospatial Data Portal and Curation Network (2016-05-25)
Weimer, Katherine Hart; Burns, Douglas; Been, Joshua; Ricker, Kim; Smith, Cecelia; Rice University; University of North Texas; University of Houston; Texas A&M University

Local, state, and federal agencies, as well as non-governmental agencies and a variety of researchers, are producing geospatial data in increasing amounts. Libraries are challenged to collect, manage, and provide discovery for this variety of geospatial data. This work requires both technical infrastructure and personal expertise. University libraries are curating and storing locally created geospatial data at various levels; however, there are no coordinated efforts across the state of Texas to curate, store, or share the data. This panel will explore the multifaceted issues surrounding geospatial data curation, including:

• What are the current local efforts to curate, store, and share data?
• What technical options exist for a collaborative data preservation and discovery environment (i.e., a data portal)?
• What skills and expertise are required?
• What metadata standards are being followed?
• What costs and benefits are there to a coordinated approach?
• How might TDL serve to facilitate this endeavor?
• How might a collective/common data portal support GIS services across TDL libraries?

This panel will include GIS librarians and data managers who will share their experiences and challenges in an effort to begin conversations toward creating a state-wide geospatial data portal. Each panelist will present a ten-minute briefing on the GIS data and services provided through their library and describe their campus environment, including any challenges or gaps they have found that impede meeting patron requests.
Each will share ideas on what possible improvements may exist and what collaborative role TDL may play. The panelists will encourage wide audience engagement in question-and-answer and discussion during the second half of the allotted panel time.

Item: Enhancing discovery and slaying workflows: Using the WorldCat Digital Collection Gateway to sync repository metadata to worldcat.org (2016-05-25)
Lindsey, Nerissa Spring; Texas A&M International University

As the cataloger at Texas A&M International University, I am charged with making our digital repository items discoverable through our local library discovery tool. This presentation will explore how I was able to use the WorldCat Digital Collection Gateway to make the workflow of representing repository items in our local discovery tool more efficient, while also increasing the potential for global discovery of our local content. The WorldCat Digital Collection Gateway is a free, self-service tool that managers of OAI-PMH-compliant repositories can use to automatically crosswalk metadata into MARC format on worldcat.org.
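Gateway-style syncing rests on OAI-PMH harvesting. As a hedged sketch of that protocol layer (the repository base URL is a placeholder, and the response is a truncated, illustrative fragment rather than real Gateway traffic), a ListRecords request and the Dublin Core payload it returns look roughly like this:

```python
# Sketch of the OAI-PMH request/response cycle that harvesting tools rely on.
import urllib.parse
import xml.etree.ElementTree as ET

def list_records_url(base_url, metadata_prefix="oai_dc", set_spec=None):
    """Build an OAI-PMH ListRecords URL per the protocol's query parameters."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec
    return base_url + "?" + urllib.parse.urlencode(params)

url = list_records_url("https://repository.example.edu/oai/request")
print(url)

# Extracting Dublin Core titles from a (truncated, illustrative) response:
response = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords><record><metadata>
    <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
               xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>Sample repository item</dc:title>
    </oai_dc:dc>
  </metadata></record></ListRecords></OAI-PMH>"""
titles = [e.text for e in ET.fromstring(response).iter(
    "{http://purl.org/dc/elements/1.1/}title")]
print(titles)
```

Harvested Dublin Core values like these are what the Gateway crosswalks into MARC for worldcat.org.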