Browsing by Author "Texas A&M University"
Now showing 1 - 20 of 52
Item Barn Raising the Digital Humanities (2013-04-16)
Christy, Matthew; Potvin, Sarah; Clement, Tanya; Henry, Geneva; Mitchell, J. Lawrence; Hoover, Ryan
University of Texas at Austin; Rice University; Texas A&M University; St. Edwards University

Writing in a recent special issue of the Journal of Library Administration dedicated to exploring “Digital Humanities in Libraries: New Models for Scholarly Engagement,” Miriam Posner references libraries and archives as formative to Digital Humanities (DH): “what we now call digital humanities grew out of a set of practices, and a community of practitioners, which themselves arose in libraries and archives.” This foundational relationship continues to shift and grow, some have argued, to the mutual benefit of academic DH and cultural heritage institutions (including galleries, libraries, museums, and archives) alike. Jefferson Bailey, writing for dh+lib, notes that “DH tools, methods, and technologies have the potential to help enhance and evolve a wealth of professional practices beyond academic … It is this ability to reinvigorate the work of non-academics, such as librarians, archivists, and collection managers, that has many of us in cultural heritage excited about DH as an emerging idiom within memory institutions.” With the increasing popularity and growth of Digital Humanities, however, the question of how universities can and will support this emergent area asserts itself. In 2009, Christine Borgman observed a lack of basic infrastructure for DH and recommended: “Much work remains to build the scholarly infrastructure necessary for digital scholarship to become mainstream in the humanities. Humanities scholars must lead the effort, because only they understand the goals and requirements for what should be built. Librarians, archivists, programmers, and computer scientists will be essential collaborators, each bringing complementary skills.” Four years later, what does this infrastructure look like?
With this background in mind, our panel seeks to meet two objectives. First, and simply, we aim to introduce TCDL attendees to a range of DH projects and initiatives underway in universities across Texas. To this end, we’ve invited a number of distinguished panelists, based in academic departments, libraries, archives, and information schools and engaged with a variety of DH projects, and asked them to provide an overview of their work. Second, we are interested in understanding and examining the structures and supports in place to enable DH collaboration, as well as those being built. In considering this, we’ll pay particular attention to the role of libraries and archives. What has institutional collaboration looked like for those engaged in DH activities from the perspective of libraries, archives, and academic departments? And what has facilitated or impeded this collaboration?

Panelists:
• Tanya Clement, Assistant Professor, School of Information, University of Texas at Austin
• Geneva Henry, Executive Director, Center for Digital Scholarship, Rice University
• J. Lawrence Mitchell, Professor, Department of English, and Director of Cushing Memorial Library & Archives, Texas A&M University
• Ryan Hoover, Assistant Professor, English Writing and Rhetoric, St. Edwards University

Moderators/Proposers:
• Sarah Potvin, Assistant Professor, Texas A&M University Libraries
• Matthew Christy, Lead Software Applications Developer, Initiative for Digital Humanities, Media, and Culture, Texas A&M University

Item Batch Importing into DSpace with the SAFCreator (2016-05-24)
Creel, James
Texas A&M University

A commonly difficult use case for any digital repository is the ingest of large batches of items. Batches can come from all sorts of campus and community stakeholders with varying types and quantities of content, differing ways of representing metadata, and unique needs for access control and licensing.
The heterogeneous nature of batches presents a fundamental challenge to automating the importation workflow and has led to ad hoc and brittle solutions. The DSpace institutional repository software enjoys wide adoption in academia and industry, and is a flagship service of the Texas Digital Library to its member institutions. DSpace offers a simple but powerful batch import format called SAF (Simple Archive Format) that allows for metadata assignment, licensing, organization of files into bundles, and authorization management. SAF is simpler than other programmatic means of importation into DSpace, such as the METS SIP (Submission Information Package) used by SWORD or HTTP POST requests to the REST API. However, generating SAF batches usually still requires external software, programming work, or a combination of both. There have been some efforts to provide generalized tools for processing metadata and content into SAF (notably Peter Dietz’s SAFBuilder, https://github.com/DSpace-Labs/SAFBuilder), but when batches have special requirements regarding licensing and permissions, custom code has usually been needed to do the processing. In addition, the spreadsheets often used to encode metadata are prone to errors such as invalid field labels and incorrect or missing filenames. It greatly accelerates a batch loading workflow to validate the input prior to generating the archive and attempting to import it into DSpace. A new tool designated SAFCreator aims to provide enough flexibility to eliminate programming requirements for a wide variety of batch loads, and has been used by librarians at Texas A&M to ingest content into several collections this past year. The tool is packaged as a lightweight desktop Java application.
A list of important features includes:
• input of metadata and file references as CSV spreadsheets;
• support for any number of schema.element.qualifier labels;
• support for multiple values in a field;
• wildcards to select all the files in a directory;
• customizable item licenses;
• customizable read access policies on items;
• modular verifiers for batches.

The code is open source at https://github.com/jcreel/SAFCreator and under active development. I welcome and encourage pull requests for new features and verifiers. In this workshop, I will demonstrate the tool and provide instruction on DSpace batch imports with SAF.

Item Beyond the Early Modern OCR Project (2015-04-27)
Christy, Matthew; Grumbach, Elizabeth; Mandell, Laura
Texas A&M University

The Early Modern OCR Project (eMOP) is a Mellon Foundation grant-funded project nearing completion at the Initiative for Digital Humanities, Media, and Culture (IDHMC) at Texas A&M University. eMOP’s goal is to improve optical character recognition (OCR) output for early modern printed English-language texts by utilizing and creating open-source tools and workflows. In addition to establishing an impressive OCR workflow infrastructure, eMOP has produced several open-source post-processing tools to evaluate and improve the text output of Google’s Tesseract OCR engine. Work on eMOP is nearing completion this summer, and the team is now looking beyond eMOP towards sharing its accrued knowledge and tools. As a Mellon Foundation grant-funded project, eMOP is tasked with sharing the results of its work whenever possible. This is in line with the IDHMC’s stated goals of aiding Humanities scholars with conducting digital research and/or creating digital outcomes of their research. As such, we are pursuing a variety of methods to disseminate the various products of our work. We are creating open-source code repositories for all software created by, and for, eMOP.
We are creating an open-source repository of all eMOP typeface training created for the Tesseract OCR engine. We are creating a publicly available database of early modern printers, publishers, and booksellers based on the imprint metadata of the entire Eighteenth Century Collections Online (ECCO) and Early English Books Online (EEBO) proprietary collections. We are making the recently released Phase I hand-transcriptions of EEBO by the Text Creation Partnership (TCP) available for full-text searching via the Advanced Research Consortium’s (ARC’s) 18thConnect website. We are making the first-ever-produced OCR transcriptions of the entire EEBO catalog available via 18thConnect’s online crowd-sourced transcript correction tool, TypeWright. TypeWright will provide free access to the EEBO transcriptions, and anyone who corrects an entire document will receive a text or XML version of that corrected transcription. In addition, the eMOP team is committed to continuously improving the accuracy and robustness of our workflow. We are currently in discussion with, or actively engaged in, partnerships with teams at Notre Dame, Penn State, and the University of Texas to apply eMOP’s workflow to different collections. These partnerships will provide us with the ability to improve eMOP by:
• adding more OCR engines to our workflow in addition to Tesseract, which is currently used;
• expanding our collected dictionaries beyond the early modern English currently used with eMOP;
• expanding our database of Google 3-grams beyond the early modern period, to aid in post-processing OCR correction of documents outside that period;
• expanding our printers and publishers database to include data from outside the ECCO and EEBO collections.

We are proud of the work we have done with eMOP and are eager to continue to find ways to build upon what we have accomplished. We feel that much of our work would be of interest to libraries and librarians.
We look forward to sharing the outcomes of eMOP and our vision for future work with the participants at TCDL this April.

Item Cataloging Services in Support of Digital Library Collections (2016-05-26)
Olivarez, Joseph; Furubotten, Lisa; McGeachin, Robert
Texas A&M University

To promote inter-departmental collaboration, our presentation shows real examples of some of the many services your cataloging department can offer to support your digital projects:
• organizing your serials/series and their numbering and sequencing for digitizing and indexing;
• repairing or creating MARC bibliographic records representing the print format of your objects;
• mapping the resulting MARC data to spreadsheets or other schemas as needed (e.g., Dublin Core, MODS);
• preparing MARCXML records for HathiTrust;
• using XSLT to crosswalk data from one schema to another;
• using MARCEdit and/or other tools to efficiently normalize, refine, and correct data;
• developing strategies, workflows, tools, and documentation enabling staff to quickly create metadata for unregistered collections.

Item A Catalyst for Social Activism: The Digital Black Bibliographic Project at Texas A&M University (2016-05-26)
Potvin, Sarah; Hankins, Rebecca; Ives, Maura; Earhart, Amy
Texas A&M University

What can we learn from bibliographies? A proof of concept currently underway at Texas A&M University, the Digital Black Bibliographic Project (DiBB) poses bibliographies as sites of and tools for activism, allowing new fields and communities to quickly categorize and organize themselves.
This presentation considers Dorothy Porter’s A catalogue of the African collection in the Moorland Foundation, Howard University Library (1958) and Abdul Al-Kalimat’s The Afro-Scholar Newsletter (1983-91); reviews a historical schism in libraries between bibliographies and subject categorization; and outlines the goals of DiBB, which seeks to diversify the digital cultural record and produce a robust dataset for black cultural research.

Item Crafting a Digital Preservation Patchwork: Stitching the Pieces Together (2016-05-25)
Buckner, Sean
Texas A&M University

Charged with developing a digital preservation program at Texas A&M University that would provide coverage for the University Libraries and those they serve, in 2015 the newly hired Digital Preservation Librarian began assessing the Libraries’ goals, content, resources, and needs with regard to digital preservation. What he found was a set of existing and missing elements that were generally not interdependent or connected. This poster would visually represent the actions taken at A&M to “stitch” together a Libraries-wide digital preservation program, a gradual and ongoing process that involves interweaving previously independent or non-existent elements into one blanketing program. This patchwork of elements includes, among others, the development of guiding documentation, selection and/or implementation of crucial asset management and storage systems, modification of preexisting and future workflows, reorganization of legacy content with retroactive acquisition of associated metadata, and coordination with interested or overseeing units.
The poster would detail and describe the reasoning, methodology, and results for crafting a nascent digital preservation program in this manner at A&M.

Item Data Management Plans (2016-11-15)
Herbert, Bruce
Texas A&M University

Item Determining and Mapping Locations of Study in Scholarly Documents: A Spatial Representation and Visualization Tool for Information Discovery (2013-03-21)
Creel, James; Weimer, Katherine
Texas A&M University

Theses and dissertations play a significant role in the scholarly literature and often refer to locations of interest or regions under study. Through geoparsing, which is the identification and disambiguation of place names, we have created a tool to generate interactive maps of the geographic locations referenced in theses and dissertations. Our visualization affords increased awareness of the numerous locations being researched, and of which departments and majors are studying each location. More broadly, the interface supports multidisciplinary research, student recruitment, and faculty collaboration. Using geographic and gazetteer metadata and open-source mapping applications, this tool provides researchers with serendipitous geographic and interdisciplinary connections. The beta version consists of several DSpace curation tasks that take a given ETD through each step of the metadata creation and mapping processes. Once the tool has suggested geospatial metadata for an ETD, the DSpace administrative interface allows curators to approve the suggested metadata values. Our geoparser integrates various open-source tools as well as specialized heuristics to automate the name extraction and disambiguation tasks. We have employed the OpenNLP and Stanford NLP libraries for the name extraction task, and use the Geonames gazetteer as our source for referenced entities. A preliminary evaluation of the tool indicates an accuracy of 84% with regard to the disambiguation of names to specific Geonames IDs. Work toward improving the accuracy is ongoing.
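The geospatial metadata that feeds the tool's maps is expressed as KML. As a rough illustration of that hand-off (this is not the project's actual code, and the place data below is hypothetical), a list of gazetteer-resolved names could be serialized into a minimal KML document like this:

```python
from xml.sax.saxutils import escape

def places_to_kml(places):
    """Serialize resolved place names as a minimal KML document.

    places: list of (name, latitude, longitude) tuples, e.g. the
    output of a gazetteer-disambiguation step.
    """
    parts = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<kml xmlns="http://www.opengis.net/kml/2.2">',
             '<Document>']
    for name, lat, lon in places:
        parts += ['  <Placemark>',
                  f'    <name>{escape(name)}</name>',
                  # KML coordinates use longitude,latitude order
                  f'    <Point><coordinates>{lon},{lat}</coordinates></Point>',
                  '  </Placemark>']
    parts += ['</Document>', '</kml>']
    return '\n'.join(parts)
```

Any KML-aware map client (OpenLayers, Google Maps, Google Earth) can then render each placemark as a clickable point.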
The visualization component of the tool reads geospatial metadata as KML and can render the referenced locations in any of three map visualization options selected by the reader: OpenLayers, OpenStreetMap, and Google Maps. Once a site of interest is located on the map, the reader may select a link to the complete thesis or dissertation stored in the university's instance of DSpace, our institutional repository. The long-term goal of this project is to extend the content to include all TDL ETDs in a widely used search mechanism.

Item Developing a Common Submission System for ETDs in the Texas Digital Library (2007-05-30)
Mikeal, Adam; Brace, Tim
Texas A&M University; University of Texas at Austin

The Texas Digital Library (TDL) is a consortium of universities organized to provide a single digital infrastructure for the scholarly activities of Texas universities. The four current Association of Research Libraries (ARL) universities and their systems comprise more than 40 campuses, 375,000 students, 30,000 faculty, and 100,000 staff, while non-ARL institutions represent another sizable addition in both students and faculty. TDL's principal collection is currently its federated collection of ETDs from three of the major institutions: The University of Texas, Texas A&M University, and Texas Tech University. Since the ARL institutions in Texas alone produce over 4,000 ETDs per year, the growth potential for a single state-wide repository is significant. To facilitate the creation of this federated collection, the schools agreed upon a common metadata standard represented by a MODS XML schema. Although this creates a baseline for metadata consistency, there exists ambiguity within the interpretation of the schema that creates usability and interoperability challenges.
Name resolution issues are not addressed by the schema, and certain descriptive metadata elements need consistency in format and level of significance so that common repository functionality will operate intuitively across the collection. It was determined that a common ingestion point for ETDs was needed to collect metadata in a consistent, authoritative manner. A working group was formed, consisting of representatives from five universities, and a state-wide survey on the state of ETDs was conducted, with varied levels of engagement with ETDs reported. Many issues were identified, including policy questions such as open access publishing, copyright considerations and the collection of release authorizations, the role of infrastructure development such as a Shibboleth federation for authentication, and interoperability with third-party publishers such as UMI. ETD workflows at six schools were analyzed, and a meta-workflow was identified with three stages: ingest, verification, and publication. It was decided that Shibboleth would be used for authentication and identity management within the application. This paper reports on the results of the survey and describes the system and submission workflow that was developed as a consequence. A functional prototype of the ingest stage has been built, and a full prototype with Shibboleth integration is slated for completion in June of 2007. Demonstrators of the application are expected to be deployed in fall of 2007 at three schools.

Item Developing a Library Open Access Portal That Bypasses the Need for Authentication (2014-03-25)
Herbert, Bruce; Potvin, Sarah; Ponsford, Bennett; Highsmith, Anne
Texas A&M University

Texas A&M University was established as Texas’ only land grant university through the First Morrill Act (1862), which sought to provide a broad segment of the population with a practical education that had direct relevance to their daily lives.
Our impact on society was later expanded through the creation of the agricultural experiment stations and the Cooperative Extension Service, which disseminate the results of experiment station research to improve the state’s agricultural industry. The Sterling C. Evans Library at Texas A&M is building upon this history to help bring all of Texas A&M’s scholarly work to bear on many of society’s greatest challenges by promoting open access. We are working to identify and advance appropriate information systems, practices, and policies that improve societal access to the scholarly and creative work at Texas A&M. The Texas A&M University Libraries has begun work to design a portal that bypasses the need for authentication and allows a user to search through a collection of open access materials. Working with Ex Libris, the vendor from which we license our Primo discovery layer, we have installed a separate instance of Primo aimed at aggregating open access materials and making them accessible to the public. This dedicated portal will draw materials identified as open access from the Primo Central Index, a “meta-aggregation of hundreds of millions of scholarly e-resources of global and regional importance,” including “journal articles, e-books, reviews, legal documents and more.” We are currently working to have OAK Trust open access items harvested into Primo Central and made available alongside harvests from other institutional repositories. In establishing this Portal to Open Access Resources, we will also work to identify materials that are legitimately open access (gratis) and that meet basic quality standards.
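Institutional-repository harvests of this kind typically travel over the OAI-PMH protocol, which returns Dublin Core records as XML. As a small illustration of what a harvester consumes (the sample response below is fabricated, and this is not Primo's actual harvesting code), titles can be pulled from a ListRecords response with the standard library:

```python
import xml.etree.ElementTree as ET

# Namespaces used in a standard OAI-PMH ListRecords response
NS = {
    'oai': 'http://www.openarchives.org/OAI/2.0/',
    'dc': 'http://purl.org/dc/elements/1.1/',
}

def titles_from_listrecords(xml_text):
    """Return the dc:title values found in an OAI-PMH ListRecords response."""
    root = ET.fromstring(xml_text)
    return [t.text for t in root.findall('.//dc:title', NS)]

# A fabricated, minimal response for illustration
sample = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>An Open Access Thesis</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""
```

A production harvester would additionally page through resumptionTokens and map every record's metadata fields, not just titles, into the discovery index.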
This poster presentation will discuss the technical aspects and policy decisions made during the design and implementation phases of the project, and show how the portal supports a Texas A&M University – K12 school district collaboration to reform science, technology, engineering, and mathematics (STEM) education.

Item Developments and Innovations in the Vireo 4.x ETD Submittal System (2016-05-26)
Larrison, Stephanie; Krumholz, Gad; Creel, James; Huff, Jeremy; Welling, William; Mathew, Rincy; Hahn, Doug; Bolton, Michael; Steans, Ryan
Texas State University; Texas A&M University; Texas Digital Library

The Vireo ETD (Electronic Thesis and Dissertation) Submission and Management System began service in the graduate office at Texas A&M in 2008. An open source software project, Vireo has now been deployed at over a dozen universities in Texas and almost as many outside of the state, and is used to process thousands of theses and dissertations every semester. Developed primarily as a collaboration of the Texas A&M University Libraries and the Texas Digital Library, new features and changes originate from, and are approved by, the Vireo Users Group, a nation-wide community of practitioners from both libraries and graduate schools. This community effort has resulted in the release of new versions of the software on a continual basis. The current release in deployment is Vireo 3.0.5. As practices have matured, the needs and expectations of students, administrators, and libraries of record have evolved, and the community has become aware of the potential for more robust uses of the software. In Summer 2016, Vireo 4 will undergo its beta release and first deployments. This latest version involves a fundamental reimagining of the flexibility and power of the system.
Customers of the Vireo software have come to recognize the diversity of needs among various schools, programs, and departments, and version 4 addresses these needs with highly customizable workflows that can be applied at any level, from the institution to the specific degree. In a related initiative, input forms can now be customized with controlled vocabularies to enhance discipline-specific metadata and facilitate knowledge capture from authors at the time of submittal. Finally, the software is being migrated to a modern web framework using Spring Boot and AngularJS. The reimagined functionality has meant that the migration is more than a simple re-write, and required software developers to devise a novel, highly sophisticated data model to efficiently support new dynamic use cases. This presentation will discuss the workings of the Vireo Users Group and the major changes this will mean for Vireo as a tool and for its users. We will discuss the software development process, technical decisions made among the development team in conjunction with the Co-Chair of the Vireo Users Group, and the plans surrounding the 4.0 release.

Item Diving into Data: Implementing a Data Repository at the Texas Digital Library (2016-05-26)
Thompson, Santi; Park, Kristi; Donald, Jeremy; Herbert, Bruce; Quigley, Elizabeth; Buckner, Sean; Kaspar, Wendi Arant; Lauland, Nick; Peters, Todd C.; Rodgers, Denyse; Smith, Cecelia; Starcher, Christopher; Uzwyshyn, Ray; Waugh, Laura
University of Houston; Texas Digital Library; Trinity University; Texas A&M University; Harvard University; Texas State University; Baylor University; Texas Tech University

The need for data management services is one of two large-scale needs consistently expressed by members of the Texas Digital Library (TDL), a consortium of academic libraries throughout the state. In particular, members are seeking a repository that offers researchers a platform for publishing, citing, reusing, and preserving research data.
In response to this need, TDL has formed a series of working groups aimed at building a statewide data repository. This panel session presentation will document the work of two TDL working groups focused on the storage and accessibility of research data, as well as connect their efforts to a growing number of research data repositories worldwide: The first group, the TDL Data Management Working Group, selected a platform to act as the statewide repository. Panel presenters will outline the group’s methodology, including the development of researcher use cases and system evaluation criteria and the testing of Dataverse, an open source platform for research data sharing and management developed by Harvard’s Institute for Quantitative Social Science (IQSS). They will also highlight the results of these efforts and discuss why the group recommended that TDL and its members implement the Dataverse repository. Secondly, presenters will share the current activities of the TDL Dataverse Implementation Working Group, which is charged with launching an instance of Dataverse as the statewide data repository for Texas. Updates will focus on the work of four subgroups (Budget and Business Model, Policy and Governance, Technical Configuration, and Workflow and Outreach) as well as the results and lessons learned from an initial pilot launch of the software in Spring 2016. Finally, a representative of the Dataverse project from Harvard IQSS will situate the TDL Dataverse project within a wider community of Dataverse implementations, both at Harvard and elsewhere across the globe. As more institutions consider launching a repository for research data, our panel presentation offers important lessons that others may value. 
Attendees of our session will learn more about the assessment of data repositories, including potential methods and criteria for evaluating systems, as well as the challenges and benefits of building a collaborative, consortial data repository.

Item Elements Supporting the Development of Effective Data Management Programs (2016-11-15)
Herbert, Bruce
Texas A&M University

Item Embedding a Digital Repository within the Texas A&M University Library Web Services (2008-06-09)
Leggett, John; Tarpley, Jeremy; Ponsford, Bennett; Phillips, Scott; Mikeal, Adam; Maslov, Alexey; Messinger, Tina; Armstrong, Tommy; Creel, James
Texas A&M University; Texas Digital Library

The development and deployment of the Manakin theme for the digital repository at Texas A&M University provides an informative case study in embedding DSpace repositories within an institutional web presence. Last year, the Texas A&M University Libraries began a redesign of the existing web interface in accordance with a new institution-wide branding initiative. A collaborative effort between administrators, designers, and developers has yielded a look and feel for the institutional repository that integrates seamlessly with the library's and university's other web services while providing the unique functionalities required by various and diverse collections. The use of Manakin themes ensured that the development process was modular and employed well-established web development techniques and technologies. The design of the digital repository theme began with consultations between library designers and TAMU branding authorities. The designers used Photoshop to produce mock-up pages for primary use cases with colors, fonts, and graphics that adhered to the institutional branding mandates while satisfying usability heuristics. These designs underwent iterative refinement with comments from administrators and developers.
When all parties were satisfied, the design team translated the images into HTML and CSS mock-ups for web browser rendering. Designers handed off the HTML code to the Manakin theme developers, who coded XSL to produce such HTML from the XML DRI data generated by the repository. Developers coded additional Javascript to implement the UI vision of the designers. Developers produced two Manakin themes of different specificity: a theme for the repository in general, and one that applies specifically to the Geologic Atlas of the United States map collection. That theme, known as "Geofolios," employs the Yahoo! Maps API and Google Earth overlays to allow patrons to browse the collection in the context of manipulable maps indicating the geographic context of the folios. In summary, embedding the digital repository in the institutional web presence required no more effort than other XML-based content would have. The pre-development design process and use of XSL transforms are standard practices in institutional web development. Manakin's ability to apply themes to specific content enabled a neat separation of development between the Geofolios theme and the general theme. The augmentation of additional collections with customized interfaces in the future would be a similarly modular activity. Importantly, the use of Manakin themes provides a seamless integration between the repository and the library's existing web presence, reducing patrons' cognitive overhead in navigating between the repository and other services.

Item Energy Systems Laboratory: Building a Repository Collection and Planning for the Future (2008-06-09)
Koenig, Jay; Haberl, Jeff S.; Gilman, Don; Hughes, Sherrie
Texas A&M University; Energy Systems Laboratory

The Energy Systems Laboratory (ESL) is a division of the Texas Engineering Experiment Station and part of the Texas A&M University System.
First established in 1939, the ESL maintains a testing laboratory on the Riverside Campus in Bryan, Texas, and offices on the main campus of Texas A&M. The group consists of five faculty members from the Department of Mechanical Engineering, as well as three faculty members from the Departments of Architecture and Construction Science. The lab currently employs approximately 120 staff members, including mechanical engineers, computer science graduates, lab technicians, support staff, and graduate and undergraduate students. The Lab focuses on energy-related research, energy efficiency, and emissions reduction, and has a total annual income for external research and testing exceeding $4.5 million. With energy research and policy at the forefront of public discussion, both academic and political, the urgency of making this research publicly available is very high. The Energy Systems Laboratory collection in the Texas A&M Digital Repository is unique in a number of ways. After first contacting the library in March 2005, the ESL became one of Texas A&M's earliest adopters of the repository. The collection is very diverse, and contains conference proceedings, published articles, technical reports, and electronic theses and dissertations produced by students affiliated with the ESL. The ESL is also the first repository client to take the initiative of assigning staff members to learn the batch loading process for themselves, both relieving library staff of the burden and allowing the collection to expand even more rapidly. The collection has also successfully made the transition, despite some challenges, from the original DSpace interface to the Manakin-themed repository now in place. After three years, the collection remains one of the largest collections in the system, continues to grow as more of the group's research and publications are added to the collection, and is held forth as a model collection to prospective repository clients in the Texas A&M community. 
This is a testament to the Energy Systems Laboratory's dedication to the building of their repository collection, and their clear understanding of the advantages of open access. This presentation will discuss the excellent working relationship built between the Energy Systems Laboratory and the library, and how such relationships can be fostered with other collections as the repository expands. It will also recount the events leading up to the ESL's original adoption of the repository, and will chronicle the evolution of the repository collection, the addition of new content, the transition and adaptation to new technology, the copyright and other challenges faced, and the group's future needs for additional tools and services.

Item Envisioning a Geospatial Data Portal and Curation Network (2016-05-25)
Weimer, Katherine Hart; Burns, Douglas; Been, Joshua; Ricker, Kim; Smith, Cecelia
Rice University; University of North Texas; University of Houston; Texas A&M University

Local, state, and federal agencies, as well as non-governmental agencies and a variety of researchers, are producing geospatial data in increasing amounts. Libraries are challenged to collect, manage, and provide discovery for this variety of geospatial data. This work requires both technical infrastructure and personal expertise. University libraries are curating and storing locally created geospatial data at various levels; however, there are no coordinated efforts across the state of Texas to curate, store, or share the data. This panel will explore the multifaceted issues surrounding geospatial data curation, including:
• What are the current local efforts to curate, store, and share data?
• What technical options exist for a collaborative data preservation and discovery environment (i.e., a data portal)?
• What skills and expertise are required?
• What metadata standards are being followed?
• What costs and benefits are there to a coordinated approach?
• How might TDL serve to facilitate this endeavor?
• How might a collective/common data portal support GIS services across TDL libraries?

This panel will include GIS librarians and data managers who will share their experiences and challenges in an effort to begin conversations toward creating a state-wide geospatial data portal. Each panelist will present a ten-minute briefing on the GIS data and services provided through their library and describe their campus environment, including any challenges or gaps they have found that impede meeting patron requests. Each will share ideas on what improvements may be possible and what collaborative role TDL may play. The panelists will encourage wide audience engagement in questions, answers, and discussion during the second half of the allotted panel time.

Item ETD Embargoes: A Comparison of Institutional Policies and Practices (2013-05-09) Larrison, Stephanie; Hammons, Laura; Henry, Geneva; Texas State University; Texas A&M University; Rice University

ETD embargo policies and practices vary widely among institutions. Although institutional websites often make embargo policies quite clear, the practices that support those policies are less so. Even more interesting, but less obvious, are the history and rationale surrounding the development of embargo policies, as well as how exceptional cases and appeals for extensions, redactions, and permanent holds are handled. The Vireo ETD Submission and Management System is a flexible tool that can accommodate variations in ETD embargo policies and practices to support the needs of the institution.
In this birds-of-a-feather session, you will have an opportunity to learn the what, why, and how of embargo policy and management (both inside and outside of Vireo) from three different institutions.

Item ETD Management in DSpace (2009-05-27) Mikeal, Adam; Creel, James; Maslov, Alexey; Phillips, Scott; Texas A&M University

The Texas Digital Library (TDL) is a consortium of public and private educational institutions from across the state of Texas. Founded in 2005, TDL exists to promote the scholarly activities of its members. One such activity is the collection and dissemination of ETDs. A federated collection of ETDs from multiple institutions was created in 2006, and has since grown into an all-encompassing ETD Repository project that is partially supported by a grant from the Institute of Museum and Library Services (IMLS). This project seeks to address the full life-cycle of ETDs, providing tools and services from the point of ingestion, through the review process, and finally to dissemination in the centrally federated repository. A primary component of this project was the development of Vireo, a web application for ETD submission and management. Built directly into the DSpace repository, Vireo provides a customized submission process for students and a rich, "Web 2.0"-style management interface for graduate and library staff. Because it is built directly into DSpace, the system scales from a single department or college up to a multi-institution consortium. In 2008, we reported the results of a demonstrator system that took place at Texas A&M University. Vireo has replaced the legacy application and is now the single point of entry for all theses and dissertations at that university. Rollout to other schools will follow a gradual, phased approach.
This presentation examines the challenges faced as Texas A&M transitioned to a new ETD management system, and the architectural issues involved in scaling such a system to a statewide consortium. Finally, it will discuss the application's release to the ETD community under an open-source license.

Item ETD Management in the Texas Digital Library (2008-06-09) Brace, Tim; Mikeal, Adam; Paz, Jay; Phillips, Scott; McFarland, Mark; Leggett, John; Texas A&M University; University of Texas at Austin; Texas Digital Library

One of the earliest TDL initiatives was a federated collection of electronic theses and dissertations (ETDs) from across the state. There are currently four schools contributing over 4,000 ETDs per year, and with 16 participating member schools in TDL, this number is continually increasing. A diverse set of content contributors introduces problems of inconsistent metadata and incompatible storage and access methods, making it difficult to offer effective tools and services. This situation drove the decision to create a common system for managing the entire life-cycle of ETDs, from the point of ingestion to final publication. ETD management fits nicely with the other services offered by TDL, and a single point of ingestion is appealing for both technical and economic reasons. In 2007, we reported on the status of the functional system prototype. Much progress has been made toward implementation of this system, starting with the majority of the development and leading to the demonstrator event currently taking place in spring 2008 at Texas A&M University and the University of Texas. This presentation discusses the ETD management system from a functional point of view, starting with the student interface for ETD submission (the ingestion point into the repository) and then covering the administrative interface used by university staff members for managing the iterative verification workflow.
Finally, we will discuss the requirements for moving into a production environment. These include testing and scaling the system to handle large numbers of users dispersed over a significant geographic area (Texas is the third-highest producer of PhDs in the United States). Rough timelines will be discussed for deployment, first at Texas A&M and the University of Texas, then as the system is gradually expanded through a program of beta testers, and finally into open enrollment.

Item An Evolving Model for Supporting Scholarly Communication at Texas A&M University (2010-05-17) Mercer, Holly; Texas A&M University

In 2007, the Texas A&M University Libraries Bridge Group was charged to "support the developing infrastructure of the Texas A&M University's and TDL's Repositories." The group reported on its activities at the 2008 TCDL. The Texas A&M University Libraries continues to expand its support of scholarly communication activities with changes in services, strategies, and staffing. Using Texas A&M as an example, this poster will help attendees explore changing support models and resource needs for a growing scholarly communication program.