Browsing by Subject "metadata"
Now showing 1 - 20 of 50
Item Accessing the Making Cancer History® Voices Oral History Collection (2015-04-27) Garza, Jose Javier; MD Anderson Cancer Center
The Historical Resources Center of the UT MD Anderson Cancer Center has been collecting oral history interviews since 2000. With over sixty participants and several hundred hours of interview footage, the archives is using the Oral History Metadata Synchronizer (OHMS) and CONTENTdm (CDM) to facilitate access to the interview collection. Since 2009, the archives has been experimenting with various platforms to ensure access to both audio and text versions of the oral history interviews while protecting the privacy of MD Anderson faculty, staff, and patients. The entire oral history collection is described using an internal coding scheme to allow cross-referencing among key topics in the interviews. After consideration, the archives believes that the combination of OHMS as the delivery platform and CDM as the search tool will create a searchable ecosystem that provides access to the interviews while preserving their internal metadata structure.

Item AtoM-izing Archival Collections (Texas Digital Library, 2023-05-18) Richardson, Matthew
Launched in March 2020 (really!), the McGovern Historical Center (MHC) archival management system uses Artefactual's AtoM (Access to Memory) application to provide access to archival finding aids as well as digitized and born-digital assets. This short presentation will walk through the MHC's workflow for creating archival description, importing it into AtoM via CSV, and making it available online. This lightning talk will give particular emphasis to the workflow as it relates to digital objects. The CSV upload includes links to Amazon S3, where the MHC stores its access files. From there, AtoM automatically generates thumbnail representations, which appear in-line with the archival description and link out to the full-size access files. While nothing is perfect, the MHC's systems and workflow offer efficiencies for a small staff and seamless discoverability for users.
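To make the CSV step concrete, here is a minimal sketch of building an import file in the shape AtoM's CSV templates expect, with a digitalObjectURI column pointing at an S3 access file. The column set, identifier, title, and URL are illustrative assumptions rather than the MHC's actual data.

```python
import csv

# Minimal sketch of an AtoM-style descriptions CSV with remote digital objects.
# Column names follow AtoM's CSV import templates; the S3 URL and identifier
# below are hypothetical placeholders, not the MHC's real values.
rows = [
    {
        "legacyId": "mhc-0001",
        "title": "Board of Trustees minutes, 1952",
        "levelOfDescription": "Item",
        "scopeAndContent": "Digitized minutes from the hospital board.",
        # AtoM fetches this URL on import and generates the thumbnail itself.
        "digitalObjectURI": "https://example-bucket.s3.amazonaws.com/access/mhc-0001.pdf",
    },
]

with open("atom_import.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```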
Item Austin Music Documentation Initiative Portal (2015-04-27) Atkins, Grace; Rainey, Hannah; Selvidge, Jeremy; University of Texas at Austin
Austin, Texas is famous for its thriving music scene. The contemporary scene is apparent to all who visit or move to Austin, yet the rich history and development of the music scene are hidden in various private and public collections. The Austin Music Documentation Initiative (AMDI) intends to increase access to and awareness of the music history of Austin by providing a portal through which organizations and individuals can contribute metadata and thumbnails of materials related to Austin music history. Under the guidance of the digital archivist at the Perry-Castañeda Library, students from the Spring 2015 section of Digital Libraries at the UT iSchool will create a proof-of-concept cataloging app for the AMDI. The proof-of-concept app will be used in support of grant applications. The end goal for this project will include a metadata schema, as well as a form and workflow for uploading metadata to a central directory.

Item BIBFRAME Beginnings at UT Austin (2016-05-24) Cofield, Melanie; Davis, Jee-Hyun; Brown, Amy; Quagliana, Alisha; Ringwood, Alan; University of Texas at Austin; Harry Ransom Center
Staff from UT Libraries, the Harry Ransom Center, and the Tarlton Law Library have been collaborating in discussion group activities during the last year to develop knowledge and skills in anticipation of life after MARC, investigating the brave new world of linked data in libraries with a focus on the Library of Congress Bibliographic Framework (BIBFRAME) initiative. Our group efforts to better understand BIBFRAME and linked data for libraries include in-depth discussions of current literature, webcasts, and presentations; strategic application of Zepheira's Practical Practitioner training; and hands-on experimentation transforming local metadata in various formats, for various resource types, to BIBFRAME. Our analysis of the resulting transformations has helped us gain insight into mapping complexities, data loss, false transformations, potential new metadata displays, and the limitations of the tools involved. The experimentation process overall has afforded us the opportunity to ask targeted questions about what is needed to move toward linked data and to gain a better view of the frontier of Technical Services staff skillsets. In this panel presentation, we'll share details about our approaches to maximizing the group learning experience, and lessons learned from grappling with new concepts, data models, terminology, and tools. Representatives from our experimentation teams will report on the initial experience of transforming MARC and non-MARC data sets to BIBFRAME, and what we see as emerging questions and next steps.
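As a generic illustration of the sort of MARC-to-BIBFRAME transformation the group experimented with (not their actual Zepheira-based pipeline), here is a minimal sketch using the pymarc and rdflib libraries; the input filename and instance base URI are hypothetical.

```python
from pymarc import MARCReader                       # pip install pymarc
from rdflib import Graph, Literal, Namespace, RDF, URIRef, BNode

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")

g = Graph()
g.bind("bf", BF)

# records.mrc is a placeholder filename; any binary MARC file works here.
with open("records.mrc", "rb") as fh:
    for i, record in enumerate(MARCReader(fh)):
        # Hypothetical base URI for minted bf:Instance resources.
        instance = URIRef(f"http://example.org/instance/{i}")
        g.add((instance, RDF.type, BF.Instance))
        fields_245 = record.get_fields("245")
        if fields_245 and fields_245[0].get_subfields("a"):
            title = BNode()
            g.add((instance, BF.title, title))
            g.add((title, RDF.type, BF.Title))
            g.add((title, BF.mainTitle, Literal(fields_245[0].get_subfields("a")[0])))

print(g.serialize(format="turtle"))
```

Even a toy mapping like this surfaces the issues the panel describes: which MARC subfields survive the trip, and which are silently dropped.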
Item Card Catalog Conversion: The Revenant (2016-05-25) Scott, Bethany; Vinson, Emily; University of Houston
Throughout the late 1970s and early 1980s, libraries around the world tackled the monumental task of converting miles of card catalog to machine-readable formats accessible by computer. While this immense undertaking was ably handled and traditional card catalogs are rarely seen in libraries today, card catalogs continue to be the only means of access to some legacy archival collections. This was the case with a substantial portion of the KUHT TV video collection at the University of Houston Special Collections. In the summer of 2015, six Rolodexes were donated along with several thousand videos representing almost 30 years of public television broadcasting in Houston. With a goal of gaining intellectual control and creating patron access to this unique video collection, Bethany Scott, UH Coordinator of Digital Projects, and Emily Vinson, UH Audiovisual Archivist, designed a pilot project to assess methodologies for card catalog conversion in the twenty-first century. In this presentation we will discuss past approaches to card catalog digitization and the two methods we utilized to convert our Rolodex card data into a usable digital format: manual data entry for handwritten cards, and scanning, OCR, and data parsing for typewritten cards. We will discuss the pros and cons of each approach, how this pilot will inform future UH projects, and ideas for others wishing to create digital access points for similar collections.
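For the typewritten-card path (scan, OCR, parse), a rough sketch using the pytesseract OCR library; the filename and the "Title:"/"Date:" card layout are invented for illustration and do not reflect the actual KUHT cards.

```python
import re
from PIL import Image          # pip install pillow
import pytesseract             # pip install pytesseract (requires the tesseract binary)

# card_0001.tif is a hypothetical scan of one typewritten Rolodex card.
text = pytesseract.image_to_string(Image.open("card_0001.tif"))

# Parse labeled lines out of the OCR text; the labels here are invented.
record = {}
for label, pattern in {
    "title": r"Title[:\s]+(.+)",
    "date": r"Date[:\s]+([\d/\-]+)",
}.items():
    match = re.search(pattern, text, re.IGNORECASE)
    if match:
        record[label] = match.group(1).strip()

print(record)
```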
Item Data Management 101 (2016-11-15) Trelogan, Jessica; University of Texas at Austin
This three-hour workshop provides a high-level overview of a range of topics related to the management of research data. Intended for librarians and library staff who are new to providing research data services to faculty, students, and staff, this introductory course will cover the basics of data management throughout the research lifecycle, from the creation of a data management plan through long-term archiving.

Item Diacritics and Special Characters: How to avoid gobbledygook in item displays (Texas Digital Library, 2021-05-24) Stokes, Charity

Item Digitizing, Georeferencing, and Metadata Creation of Local Maps: Enhancing Discoverability and Celebrating the Centennial Anniversary of the Texas Collection at Baylor University (Texas Digital Library, 2023-05-18) Been, Joshua; Stuhr, Darryl
This project involves the digitization, georeferencing, and metadata creation of the local Waco map collection at the Texas Collection at Baylor University Libraries. We have two main objectives. First, to create an interactive mapping website to celebrate the upcoming centennial anniversary of the Texas Collection. Second, to enhance the discoverability of the georeferenced map content for researchers in Texas and worldwide. When this project was envisaged in the Fall of 2022, only a handful of Waco-area maps from our collection had been scanned and made discoverable through our Digital Collections portal powered by Quartex. To increase this collection, the first two steps were to scan additional maps and then to georeference these maps. The digitization process involves scanning the original maps and converting them into digital files. The georeferencing process involves aligning the scanned maps with real-world coordinates so that they can be overlaid using mapping platforms. One of our objectives is to enhance the accessibility of our local Waco-area maps to researchers beyond Baylor University, both within and beyond the state. To achieve this, we will create metadata in the Aardvark format, which will enable other institutions running a GeoBlacklight portal to import our metadata and discover our growing collection. Another key objective is to develop an interactive mapping display that highlights the georeferenced maps and enables users to explore the historical geography of the local area. To accomplish this, we will leverage ArcGIS Online to create a compelling and user-friendly interactive mapping experience.
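For a sense of what the Aardvark metadata might look like, here is a minimal record sketch using a subset of the OpenGeoMetadata Aardvark schema's fields; the identifier, bounding box, and values are hypothetical placeholders.

```python
import json

# A minimal Aardvark record (a subset of the schema's fields). The identifier,
# title, and envelope coordinates below are illustrative, not Baylor's data.
record = {
    "id": "baylor-waco-map-0001",
    "dct_title_s": "Map of Waco, Texas, 1913",
    "gbl_mdVersion_s": "Aardvark",
    "gbl_resourceClass_sm": ["Maps"],
    "dct_accessRights_s": "Public",
    "dct_spatial_sm": ["Waco, Texas"],
    # Bounding box as a WKT-style envelope: west, east, north, south.
    "locn_geometry": "ENVELOPE(-97.20,-97.05,31.62,31.50)",
}

with open("waco-map-0001.json", "w", encoding="utf-8") as f:
    json.dump(record, f, indent=2)
```

Records in this shape can be contributed to an OpenGeoMetadata repository, where other GeoBlacklight sites can harvest and index them.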
Item Do more with less: Potential automated ETD cataloging with batch processing (2017-05-24) Garrett, Kelly A.; University of Texas at Austin
At UT Libraries, current metadata workflows for electronic theses and dissertations (ETDs) require catalogers to create records in separate systems concurrently: (1) Dublin Core for the DSpace institutional repository via the ETD management software Vireo, and (2) MARC in OCLC Connexion for the library catalog. In the spring of 2016, UT Libraries' Cataloging & Metadata Services began exploring the capabilities of Vireo's batch export feature as a means to streamline the work. This 24x7 presentation will focus on UT Libraries' envisioned ETD batch editing workflow using the Vireo 3 MARC export feature, MarcEdit tools, and regular expressions. Lingering issues and recommendations for Vireo 4 export features will also be covered.

Item Enhancing discovery and slaying workflows: Using the WorldCat Digital Collection Gateway to sync repository metadata to worldcat.org (2016-05-25) Lindsey, Nerissa Spring; Texas A&M International University
As the cataloger at Texas A&M International University, I am charged with the task of making our digital repository items discoverable through our local library discovery tool. This presentation will explore how I was able to use the WorldCat Digital Collection Gateway to make the workflow of representing repository items in our local discovery tool more efficient, while also increasing the potential for global discovery of our local content. The WorldCat Digital Collection Gateway is a free, self-service tool that managers of OAI-PMH-compliant repositories can use to automatically crosswalk their metadata into MARC format for worldcat.org.
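The Gateway is a hosted OCLC service, but the harvesting underneath is standard OAI-PMH. Here is a minimal sketch of the kind of request it issues against a compliant repository, with a hypothetical base URL:

```python
import requests
import xml.etree.ElementTree as ET

# Hypothetical OAI-PMH endpoint; any OAI-PMH-compliant repository works.
BASE_URL = "https://repository.example.edu/oai/request"
resp = requests.get(BASE_URL, params={"verb": "ListRecords", "metadataPrefix": "oai_dc"})
resp.raise_for_status()

ns = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}
root = ET.fromstring(resp.content)
# Print each harvested record's Dublin Core title(s).
for rec in root.findall(".//oai:record", ns):
    for title in rec.findall(".//dc:title", ns):
        print(title.text)
```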
Item Enhancing Educational Access to Art (2012-05-25) Higgins, Jessica; Karadkar, Unmil P.; Pavelka, Karen; Zinser, Catherine; University of Texas at Austin
Art museums are important units on many university campuses. These museums bring value to the university community by serving as custodians of paintings, sculptures, prints, and drawings. They serve as a resource of unparalleled importance in education related to art, architecture, language, and culture by providing instructors with access to rare artifacts of cultural significance. While the museum staff is committed to helping faculty locate items of interest, they are hard-pressed for time and do not always possess the domain-specific vocabulary used by instructors in diverse disciplines. Artifacts in the museums are organized and described by museum professionals, while they are used by academics. The resulting disconnect between the expectations of the two groups affects the use of these artifacts. We aim to address this issue by enhancing a collection of prints and drawings at the Blanton Museum of Art with rich, domain-specific description that meets the expectations of a multi-disciplinary faculty. Instructors in several departments at UT Austin use the Prints and Drawings Collection as a teaching tool. This collection includes over 13,000 artifacts, executed over four centuries. This is a closed collection, and the collection manager provides access to specific prints and drawings upon request. The metadata related to the prints can be accessed only through computers situated in the museum, further limiting access. Thus, instructors are unable to browse the collection at their convenience and rely heavily on the Blanton staff to provide suggestions for relevant works. We take a user-centered design approach to create a prototype of a richly described repository of artifacts from this collection. We started by conducting interviews of faculty in the areas of Art, Art History, French, and Architecture to gain an understanding of their challenges in accessing the collection and their needs for effectively locating items of interest. Based on the responses from these instructors, we have made two modifications to the infrastructure: first, we populated a repository using CollectiveAccess, an open-source repository platform, with representative samples of prints used by these instructors to enable long-distance, internet-based access. We also augmented the metadata contained in the museum's proprietary cataloging software to include fields and content desired by the instructors, using the Getty's CDWA Lite schema. The resulting repository is thus based on open standards, improving the potential for its use by various demographics on campus as well as improving its visibility for remote users and repositories through interoperability protocols. We are currently evaluating this prototype repository. In the first stage, we are evaluating our design with the help of the instructors who set the expectations for this repository. This evaluation will help us fine-tune the interface features, the repository architecture, and our use of the CDWA Lite schema.

Item The Government Documents Digitization Initiative: Shepherding Resources from Shelf to Server (2017-05-25) Laddusaw, Ryan; Sare, Laura; Buckner, Sean; Texas A&M University
In Fall 2016, the Texas A&M University Libraries embarked on a project to digitize a collection of Flood Insurance Studies published by the Federal Insurance & Mitigation Administration and to submit them to HathiTrust. To enable long-term access and discoverability, we have decided to assign each item an Archival Resource Key (ARK) as both a persistent identifier and a uniform resource locator. We are using the EZID service to maintain our identifiers and its N2T (name-to-thing) resolver to persist and provide metadata for our items. We then create metadata for each report, process each one into a submission information package according to HathiTrust's guidelines, and submit it for ingestion. A Flood Insurance Study (FIS) is a compilation and presentation of flood risk data for specific watercourses, lakes, and coastal flood hazard areas within a community. When a flood study is completed for the National Flood Insurance Program (NFIP), the information and maps are assembled into an FIS. For a few years, these studies were distributed to federal depository libraries. Many depository libraries are digitizing their collections for inclusion in HathiTrust. We noticed that some FIS digitized in HathiTrust were missing some of the foldout data tables, so we decided to digitize our collection and focus on making sure the maps and data tables were viewable in an online format. To ensure continued access to this collection, we have created an Archival Resource Key (ARK) for each item. ARKs are a type of persistent identifier that also function as a Uniform Resource Locator (URL). This allows a user to enter the ARK and the N2T resolver's hostname into a web browser and arrive at a page containing metadata that will enable them to easily identify and locate the desired resource. Researchers can embed the ARK in their work, and anyone can use this URL to quickly locate and access the referenced material. By digitizing this collection, we are able to increase the accessibility and discoverability of these resources. This project will produce a digital version of this collection and will allow us to reduce the size of the physical collection and save space, without sacrificing access to any of these items. In this presentation, we will review the origins of the project, present the workflow involved from scanning to HathiTrust submission, and talk about the future of the project.
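For context on the EZID step, here is a minimal sketch of minting an ARK through EZID's HTTP API; ark:/99999/fk4 is EZID's public test shoulder, and the account, target URL, and metadata values are placeholders, not the project's own.

```python
import requests

# EZID accepts ANVL-formatted metadata in the request body. The _target line
# sets where the ARK will resolve; erc.what is a brief ERC description.
anvl = "\n".join([
    "_target: https://repository.example.edu/fis/tx-0001",
    "erc.what: Flood Insurance Study, Example County, Texas",
])

resp = requests.post(
    "https://ezid.cdlib.org/shoulder/ark:/99999/fk4",   # public test shoulder
    data=anvl.encode("utf-8"),
    headers={"Content-Type": "text/plain; charset=UTF-8"},
    auth=("username", "password"),                      # placeholder credentials
)
print(resp.text)  # e.g. "success: ark:/99999/fk4..." when minting succeeds
```

The minted ARK, prefixed with the N2T resolver's hostname, is the persistent URL that researchers can then embed in their work.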
Item Harvesting Quality: Evaluating Metadata for Digital Collections (2014-03-25) Biswas, Paromita; Western Carolina University
Metadata creation practices for digital library projects vary widely amongst libraries. Digital library projects often have to deal with multiple metadata creators, new formats and resources, and dynamic metadata standards for different communities (Park & Tosaka, 2010). As a result, while accuracy and consistency in metadata are prioritized by field practitioners, metadata records created for specific digital projects may lack the quality needed to support successful end-user resource discovery and access. Park and Tosaka's survey of metadata quality control in digital repositories and collections reveals that digital repositories often rely on periodic sampling or peer review of original metadata records as mechanisms for quality assurance (Park & Tosaka, 2010). This poster proposal presents another means of running quality checks on metadata created for digital projects, based on Hunter Library's experience with the WorldCat Digital Collection Gateway tool used for harvesting metadata for digital collections into WorldCat. Hunter Library's digital collections are described using Dublin Core in CONTENTdm, and the Library has recently started harvesting its collections into WorldCat using the Gateway. During harvesting, the Gateway by default places the names of "creators" and "contributors" recorded in separate fields in the local metadata environment into one broad "Author" field for WorldCat users. A cursory review of this "Author" field in WorldCat for several harvested items from one of the library's collections revealed an unexpected presence of corporate body names alongside personal names. Consequently, this led to an evaluation of how the "creator" and "contributor" fields had been used in that collection. The "Frequency Analysis" feature in the Gateway proved particularly useful in this evaluation, since it provided a breakdown of each field in a particular collection by the values used in that field and the number of times they had been used. For example, high-frequency usage of a particular name indicated that the usage had not been a random mistake but had been consistent. A subsequent analysis of the library's digital collections' metadata using "Frequency Analysis" revealed that for some collections, the "contributor" field had been used to record entities whose roles, in relation to the item described, ranged from publisher, printer, and editor to recipient of a letter. However, the library's then-current metadata schema had limited the definition of the "contributor" field to entities who had a direct but secondary role in the creation of an item, such as editors or illustrators. This discrepancy between the library's metadata schema and the usage of the "contributor" field led to a redefinition of the role of the "contributor." The schema now incorporates the plethora of roles that "contributors" can have in relation to an item and recommends that the role of each "contributor" be explained in the "description" field to account for the diversity of roles. Updating the schema has thus promoted consistency in recording the "contributor" field across the library's digital collections while also benefiting users searching for an item by the various names associated with it.
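The Gateway's "Frequency Analysis" amounts to a value-by-value tally of each field, which is easy to approximate locally before harvesting. A minimal sketch, assuming metadata exported to a CSV with a semicolon-delimited contributor column (the filename and column name are hypothetical):

```python
import csv
from collections import Counter

# A local stand-in for the Gateway's "Frequency Analysis": tally every value
# used in the contributor column of an exported metadata CSV.
counts = Counter()
with open("collection_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        for value in (row.get("contributor") or "").split(";"):
            if value.strip():
                counts[value.strip()] += 1

# High-frequency values point to consistent (if undocumented) local practice,
# e.g. corporate bodies recorded alongside personal names.
for value, n in counts.most_common(20):
    print(f"{n:5d}  {value}")
```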
Item Here Be Dragons: Navigating the Uncharted Waters of Legacy Thesis and Dissertation Digitization (Texas Digital Library, 2021-05-24) Weidner, Andrew; Wu, Annie

Item Hot Topics in Metadata (2010-05-18) Harlan, Amanda; Baylor University
An informal discussion about metadata topics and issues that concern Texas academic institutions.

Item Inter-University Upper Atmosphere Global Observation NETwork: Metadata Database for Geoscience by using DSpace (2011-06-08) Koyama, Yukinobu; Kouno, Takahisa; Hori, Tomoaki; Abe, Shuji; Yoshida, Daiki; Hayashi, Hiroo; Shinbori, Atsuki; Tanaka, Yoshimasa; Kagitani, Masato; UeNo, Satoru; Kaneda, Naoki; Tadokoro, Hiroyasu; Yoneda, Mizuki; Kyoto University; University of Tokyo; Nagoya University; Kyushu University; Weather Information and Communications Service; Tohoku University

Item Intermediate DSpace: Metadata Imports and Exports (2017-03-22) McElfresh, Laura

Item Intermediate DSpace: Metadata Imports and Exports [presentation] (2016-12-21) McElfresh, Laura Kane

Item Intermediate DSpace: Metadata Imports and Exports [video] (2016-12-21) McElfresh, Laura Kane

Item Introducing MAGPIE (Metadata Assignment GUI Providing Ingest and Export) (2015-05-26) Welling, William; Elmquist, Stephanie; Creel, James; Huff, Jeremy; Savell, Jason; Mathew, Rincy; Hahn, Doug; Bolton, Michael; Texas A&M University
The Libraries at Texas A&M University have curated immense output from graduate programs for many decades. With the advent of the Vireo ETD (Electronic Thesis and Dissertation) submittal system, dissertations have been submitted in digital format and made available for download from TAMU's OAKTrust institutional repository. However, many older dissertations are only discoverable through TAMU's Voyager-based online card catalog and are publicly available to visiting researchers only in print format. A current digitization effort will make these dissertations available online at OAKTrust. The tool being developed for this purpose is designated MAGPIE (Metadata Assignment GUI Providing Ingest and Export). For the dissertation use case, librarians specified that the tool should display scanned PDF files and OCR (optical character recognition) text output from a file system. The tool then presents these data to annotators (typically, student workers) to augment and amend metadata. The presentation interface reads metadata, in this case MARC records, from TAMU's Voyager card catalog database, thereby pre-populating important fields such as the title and author name. However, a number of other fields, such as the abstract and the names of committee members, do not exist in the card catalog but are available in the document itself. The annotator can simply copy and paste these character strings from the source document into a metadata input form specifically configured for the legacy dissertation digitization and preservation project. The MAGPIE workflow allows a manager to amend, reject, or approve these metadata entries, and to push approved documents into the OAKTrust repository with a single click. The MAGPIE tool has been developed using the Weaver framework, an open-source web-development front end and web service code base from TAMU Libraries. The web service is built on Spring Boot, a popular framework with a large and growing community and extensive documentation and support. The front end of the web stack consists of AngularJS and Bootstrap. The Weaver framework offers certain advantages, such as automatic updates of document status in the browser window without a page reload. The MAGPIE tool has also been developed with future projects in mind: the importation of content is modular and customizable, as is the metadata import service, the metadata form, and the export/push functionality. We anticipate that MAGPIE will find use for metadata enhancement and automatic repository deposit of newspapers, images, and other institutional collections with or without existing metadata. In this talk, we will examine the initial use case of scanned legacy dissertations, provide some background on the MAGPIE software and its development, demonstrate the functionality of the tool, and conclude with an overview of future ambitions.
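The abstract does not spell out how MAGPIE's one-click push reaches OAKTrust, so as one plausible route into a DSpace repository, here is a minimal sketch that writes an approved item in DSpace's Simple Archive Format (an item directory holding a dublin_core.xml plus a contents manifest); all values are placeholders.

```python
from pathlib import Path
from xml.sax.saxutils import escape

# Hypothetical approved metadata from an annotator; keys are Dublin Core
# element names, optionally qualified ("element.qualifier").
item = {
    "title": "A Hypothetical Legacy Dissertation",
    "contributor.author": "Doe, Jane",
    "date.issued": "1972",
}

item_dir = Path("saf/item_000")
item_dir.mkdir(parents=True, exist_ok=True)

dcvalues = []
for key, value in item.items():
    element, _, qualifier = key.partition(".")
    q = f' qualifier="{qualifier}"' if qualifier else ""
    dcvalues.append(f'  <dcvalue element="{element}"{q}>{escape(value)}</dcvalue>')

(item_dir / "dublin_core.xml").write_text(
    "<dublin_core>\n" + "\n".join(dcvalues) + "\n</dublin_core>\n", encoding="utf-8"
)
# The contents file lists each bitstream to attach to the item.
(item_dir / "contents").write_text("dissertation.pdf\n", encoding="utf-8")
```

A package in this shape can then be batch-ingested with DSpace's standard import tooling.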