2019 Texas Conference on Digital Libraries
Permanent URI for this collection: https://hdl.handle.net/2249.1/156348
Item Leadership Academy 2019 (Texas Digital Library, 2019-05-20) Bailey, Diane

Item TDL DSpace User Group Pre-conference Session (Texas Digital Library, 2019-05-21) Creel, James; Davis-Van Atta, Taylor; Lyon, Colleen; Park, Kristi; Warga, Edward
The TDL DSpace User Group proposes a full-day (10:00 AM - 4:30 PM) pre-conference to take place on May 21, 2019. A series of interactive sessions has been planned and will be offered to any TCDL registrant, regardless of whether they are a member of the User Group. Key presenters and organizers for each session have committed to leading the planning and design of this pre-conference in collaboration with the coauthors of this proposal. The proposed pre-conference schedule is as follows:
•10 AM - 12 PM: Hack-a-thon session with possible breakout sessions, including focuses on DSpace fundamentals and DSpace API functionality
•12 PM - 1 PM: Networking lunch
•1 PM - 2:30 PM: DSpace import training workshop, focused on using the SAFCreator tool and SAF bulk packaging functionality
•2:30 PM - 3:30 PM: DSpace Accessibility Panel, with potential participants including representatives from DuraSpace, TDL developer staff, and the UT-Austin DEI community
•3:30 PM - 4:30 PM: TDL update on DSpace 7, with a virtual call from Tim Donohue at DuraSpace
This pre-conference has been developed in coordination with the entire TDL DSpace User Group and addresses many of the most pressing issues faced by the community. Each session will be developed in open collaboration with the User Group so that there are opportunities for members to interact with one another around a variety of different aspects of the DSpace technology, as well as repository services and scholarly communication more generally.

Item Vireo 4 Workshop. Y'all ready for this? (Texas Digital Library, 2019-05-21) Larrison, Stephanie; Mumma, Courtney; Starcher, Christopher
The walls of development are down and Vireo 4 is taking flight! Are you ready to get your learn on?
The Vireo Users Group Steering Committee is proud to offer this free workshop where you will get the essential training you need to use this powerful new system.
•Want to separate your dissertation submissions from theses?
•Want to give students of Creative Writing their own unique submission area?
•Want to include undergraduate theses in the institutional repository, or preserve them in a separate location?
•Tired of correcting the names and emails of committee members that students have provided?
•Need to collect information from submitters that you were never able to collect previously?
•Happy with what you have and want to keep it that way?
All of this and more is possible with Vireo 4. Learning outcomes include:
•Understanding the core differences between Vireo 3 and Vireo 4
•Understanding the new organizational structure of Vireo 4 and how to create workflows
•Understanding the use of field profiles for customization of workflows
•Understanding the creation and application of controlled vocabularies
Bring your laptop for hands-on exercises that will prepare you to construct your institution’s unique workflow. This workshop is only open to current users of the Vireo ETD Submission & Management System.

Item TCDL 2019 Signage (Texas Digital Library, 2019-05-21) DeForest, Lea

Item LaTeX for Beginners (Texas Digital Library, 2019-05-21) Barba, Ian; Barba, Shelley E.
LaTeX is an important document standard for a variety of disciplines. LaTeX has features that distinguish it from other document software (e.g., MS Word) and allow for greater flexibility. However, as a result of these features, the format may be less intuitive for first-time users. This proposed workshop will cover basic use of LaTeX editors, as well as an introduction to the LaTeX markup language. Participants will gain a basic understanding of how to use the format to create documents, how to enter plain text, and how to format non-standard scripts and typesettings.
For professionals who work directly or indirectly with students, particularly graduate students writing theses and dissertations, familiarity with LaTeX improves libraries’ ability to help navigate the document creation process. Though librarians typically have little to no familiarity with LaTeX, its prominence as a document standard argues for building that familiarity.

Item Querying & Accessing Scholarly Literature Metadata: Using rcrossref, rorcid, and roadoi (Texas Digital Library, 2019-05-21) Iakovakis, Clarke
Librarians are increasingly called on to gather and analyze metadata from the scholarly literature. This may include understanding open access publishing at their own institutions, publication patterns in specific disciplines or journals, citation analysis, and much more. Software developers have created a number of packages for accessing the scholarly literature in R over the last several years, among them rcrossref, rorcid, and roadoi. These packages make use of the APIs in their respective systems to allow users to execute specific queries and pull the structured data into R, where it can be reshaped, merged with other data, and analyzed. While some experience working in R will be helpful, this session will assume no knowledge of R. The session will therefore begin with a brief introduction to what R is, what it can do, and how to operate in the RStudio environment. In advance of the workshop, attendees will be provided full instructions for installing R and preparing their computers for the session. They will also be provided pre-written R scripts, as well as step-by-step instructions for each section of the course. This will help ease them into using R, and will serve as a resource they can use in the future as they make their own queries. Three R packages will be introduced that allow us to access the scholarly literature.
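As a flavor of what such packages do under the hood, here is a minimal sketch, in Python rather than R, of building a query against the public Crossref REST API (the same API that rcrossref wraps). The query values are illustrative, not from the workshop materials:

```python
from urllib.parse import urlencode

CROSSREF_API = "https://api.crossref.org/works"

def crossref_works_url(query=None, rows=20, **filters):
    """Build a Crossref /works query URL.

    Keyword `filters` become the API's `filter` parameter, e.g.
    from_pub_date="2019-01-01" -> filter=from-pub-date:2019-01-01
    (underscores in keyword names map to the API's hyphens).
    """
    params = {"rows": rows}
    if query:
        params["query"] = query
    if filters:
        params["filter"] = ",".join(
            f"{k.replace('_', '-')}:{v}" for k, v in sorted(filters.items())
        )
    return f"{CROSSREF_API}?{urlencode(params)}"

# Fetching and parsing the JSON response would then be (requires network):
#
#   import json, urllib.request
#   url = crossref_works_url(query="digital libraries", rows=5)
#   with urllib.request.urlopen(url) as resp:
#       items = json.load(resp)["message"]["items"]
#   for item in items:
#       print(item.get("DOI"), item.get("title", ["(untitled)"])[0])

if __name__ == "__main__":
    print(crossref_works_url(query="digital libraries", rows=5,
                             from_pub_date="2019-01-01"))
```

The R packages return this same structured data as data frames, ready for the reshaping and merging the abstract describes.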
rcrossref interfaces with the Crossref API, allowing users to pull article metadata based on DOIs, keywords, funders, authors, and more. This can be immensely powerful for collecting citation data, conducting literature reviews, understanding publication patterns, and more. rorcid interfaces with the ORCID API, allowing users to pull publication data based on a specific ORCID iD, or to input names and other identifying information to find a specific individual’s identifier. Finally, roadoi interfaces with Unpaywall, allowing users to input a set of DOIs and return publication information along with potential locations of open access versions. By the conclusion of the session, attendees will be able to work with and analyze data in R on a basic level, and will be familiar with some of the major functions in each of the listed packages. On a deeper level, they will have more powerful tools for gathering subsets of the scholarly literature, in clean and structured formats, based on specific parameters. Furthermore, as the session is designed to provide basic competence in R, attendees will be able to make use of a far more powerful tool than spreadsheet software such as Excel. As librarians are increasingly required to master and make sense of data, using R provides many more paths for analysis and visualization of that data, and therefore for understanding it.

Item TCDL 2019 Conference Program (Texas Digital Library, 2019-05-21) DeForest, Lea

Item Give it Away Give it Away Give it Away Now (Texas Digital Library, 2019-05-21) Degler, Roy
Learn how to promote and give away free textbooks. Our library needed a simple and functional way to distribute OER textbooks created as part of the Wise Open Textbook Initiative, which offered faculty a stipend to develop or adopt OER textbooks. This presentation introduces OERx, a textbook delivery system built using Modx, and an Adopt an OER Textbook suggestion site to aid faculty with selecting OER textbooks.
The OERx software is free to all, and you’ll learn how to get it for your library.

Item A Quest to Upgrade from a Legacy to a Modern Open Source Repository (Texas Digital Library, 2019-05-22) Casados, Teblos; Goldsmith, Beth; Knudson, Frances; Neblitt-Jones, Valentina; Sacks, Sara; Thomas, Cristel; Trujillo, Valerie
The Los Alamos National Laboratory Research Library is facing the inevitable need to upgrade the software supporting its online tools. Our legacy repository was developed in house and grew with our data model and environment, tailored to our needs but complex to maintain. We turned to existing open source software for a replacement to ease both software setup and maintenance, with the understanding that we would have to curate and standardize our data for the new infrastructure. We will present our entire workflow for this endeavor, documenting each step from the choice of an open source repository to the very few modifications needed to meet our deployment environment needs. DSpace is a lightweight infrastructure which meets our technical requirements and upkeep goals, while the DSpace-CRIS extension offers additional record management capabilities. We selected version 5.8 of DSpace-CRIS and elected to deploy it in Docker instances. This strategy suits our particular environment framework, as the Research Library seeks to replace separate user portals operated on separate networks with different access controls. Docker instances running the same DSpace code can be configured for each user portal and deployed on different networks while minimizing development efforts and required maintenance. Additionally, Docker containers offer independence from deployment systems, shifting the burden of system administration to Docker container setup, which is ideal in an academic context where system administration is often the responsibility of other entities.
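As an illustration of this kind of deployment, a minimal Docker Compose sketch for a DSpace 5.x-era stack might look like the following. The image names, ports, volumes, and credentials are placeholders, not the Research Library’s actual configuration:

```yaml
# Hypothetical compose file: one DSpace web application per user portal,
# each backed by its own PostgreSQL database. All names and credentials
# below are illustrative placeholders.
version: "3"
services:
  dspace-portal-a:
    image: example/dspace-cris:5.8        # placeholder image
    ports:
      - "8080:8080"                       # Tomcat HTTP port
    environment:
      - DSPACE_DB_URL=jdbc:postgresql://db-portal-a:5432/dspace
    depends_on:
      - db-portal-a
    volumes:
      - assetstore-a:/dspace/assetstore   # keep bitstreams outside the container
  db-portal-a:
    image: postgres:9.6
    environment:
      - POSTGRES_DB=dspace
      - POSTGRES_USER=dspace
      - POSTGRES_PASSWORD=changeme        # placeholder
    volumes:
      - pgdata-a:/var/lib/postgresql/data
volumes:
  assetstore-a:
  pgdata-a:
```

A second portal on a separate network would repeat the service pair with its own configuration, keeping the DSpace code identical across portals, which is the maintenance win the abstract describes.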
In addition to front-end replacement, upgrading our repositories entailed migrating data from the database, as well as translating our legacy data models to one supported by DSpace. Extending the DSpace-supported schemas with custom fields allowed enough flexibility for backward compatibility and harmonization of the various data sets we migrated. We customized the DSpace code very lightly to accommodate the data model extension, provide users with a pleasant experience, and satisfy our contractual requirements. Our aging-software replacement strategy, while specific to our system and unquestionably not the only approach to follow, can certainly shed some light on potential hurdles and considerations faced by other libraries in a similar predicament.

Item CAP - Curators' Administrative Platform - A Flexible Approach to Repository Management (Texas Digital Library, 2019-05-22) Bolton, Michael; Creel, James; Day, Kevin; Hahn, Doug; Huff, Jeremy; Laddusaw, Ryan; Savell, Jason; Welling, William
The Applications Development team at Texas A&M Libraries has developed an innovative approach to IR (Institutional Repository) management with an open-source application called CAP (Curators’ Administrative Platform), available at https://github.com/TAMULib/Cap. Initial usage at Texas A&M Libraries has shown exciting capabilities for curators. Recent discussions and demonstrations have piqued the attention of the wider library community, including a recent feature on DuraSpace’s latest news blog: https://duraspace.org/introducing-cap-curators-administrative-platform-from-texas-am-university-libraries/. We believe that the current appeal of CAP derives from two major features: 1) the management of multiple Repository Views (RVs) from one application, and 2) the ability to import RDF vocabularies from the internet and cherry-pick appropriate metadata properties on a per-RV basis.
These features are exposed via both an API, consumable by a Digital Asset Management Ecosystem (DAME), and a modern user interface. Initially, the CAP project was conceived as a replacement for the demonstration UI that is distributed with Fedora’s REST client. We hoped to build a production-ready user interface for interacting with Fedora which could receive the benefits of an iterative development process. We opted to build a new front-end based on the fcrepo4 Java client. In this framework, in the effort to recapitulate and simplify the functionality of the demonstration UI, we also found it extremely simple to enable registration and configuration of multiple IR instances with the system. Capabilities of the current build include navigation with breadcrumbs through an RV container hierarchy and CRUD (Create, Read, Update, Delete) functionality for RV instances, resources, metadata, and metadata schemata. CAP has been designed to enable dynamic customization of metadata application profiles for any of its registered repositories and to facilitate basic interactions with those repositories’ APIs (LDP, Fixity, Versioning, and Transactions) through a clean, intuitive UI. CAP also includes viewers to render image resources in the browser with <img> tags or with OpenSeadragon and an IIIF Image Server. In the near term, we will explore development in several other areas. First, we plan to add DSpace as another RV type. We would also like to explore the ability of users to define new navigational relationships in addition to the default parent-child hierarchy. Finally, we would like to expand on CAP’s interconnectivity with other DAME components through, e.g., IIIF manifest generation. Additionally, the CAP API is positioned to play a central role in the DAME through its ability to serve as a generalized API for RV discovery and interaction.
CAP could indicate to a DAME all RVs which it manages and expose a unified API for interacting with all of those repositories. In this way, the participation of an Institutional Repository with the DAME would be mediated through CAP. Though CAP is still in the early stages of development, the current and potential innovations represent a culmination of many lessons learned about healthy interactions between Institutional Repositories and the ecosystems in which they operate.
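The IIIF Image API that CAP leans on for image rendering defines a fixed URI template ({base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}). As a small illustration of how a viewer such as OpenSeadragon addresses an image server, here is a sketch of building such request URIs; the server URL and identifier are hypothetical:

```python
from urllib.parse import quote

def iiif_image_url(base, identifier, region="full", size="full",
                   rotation="0", quality="default", fmt="jpg"):
    """Build a IIIF Image API 2.x request URI.

    `base` is the image server prefix (a placeholder below); the
    identifier is percent-encoded because IIIF identifiers often
    contain slashes and colons.
    """
    return "/".join([
        base.rstrip("/"),
        quote(identifier, safe=""),   # encode '/' and ':' inside the identifier
        region,                        # e.g. "full" or "x,y,w,h"
        size,                          # e.g. "full", "200,", "!300,300"
        rotation,                      # degrees, optionally mirrored ("!90")
    ]) + f"/{quality}.{fmt}"

# Hypothetical server and identifier:
print(iiif_image_url("https://iiif.example.org/image",
                     "ark:/123/page1", size="!300,300"))
# -> https://iiif.example.org/image/ark%3A%2F123%2Fpage1/full/!300,300/0/default.jpg
```

A viewer requests many such URIs at different regions and sizes, which is what makes deep zoom over large TIFF masters practical.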
Item Developing Metadata Guidelines for TxHub (Texas Digital Library, 2019-05-22) Duran, Albert; Long, Kara; Tomfohrde, Katie; Washington, Anne; Woodward, Nick
The DPLA Metadata Working Group formed in 2018 and was charged with developing a metadata application profile for the Texas aggregation hub. This group has worked closely with TDL through the development and pilot of the prototype harvester. Building on UNT’s experience with the Portal, some of the challenges the group faces include the mapping of new properties added to DPLA MAP v.5 to support the International Image Interoperability Framework (IIIF), for which a clear crosswalk has not yet been defined by the community, as well as the uncertainties brought about by the recent reduction in DPLA staffing. This poster will provide a visual introduction to the technical infrastructure of TxHub at its current stage of development, an overview of the metadata requirements and guidelines, and a timeline for implementation and future goals. We will also provide information on how to get involved and provide community feedback on the current proposed guidelines. This will be of interest to libraries and other institutions with digital collections that are not currently visible in DPLA, or who are interested in learning more about DPLA or TxHub.

Item The Archivo Histórico de la Policía Nacional Digital Preservation Project at UT Libraries (Texas Digital Library, 2019-05-22) Bliss, David A
This poster will detail the digital preservation workflow developed for the Archivo Histórico de la Policía Nacional de Guatemala (AHPN) Digital Archive. The AHPN is a collection of approximately 80 million records of police activity in Guatemala from the 19th century to the 1990s. It contains information vital to the study of Guatemalan history, particularly the National Police's role in human rights abuses that took place during the Guatemalan Civil War, which lasted from 1960 to 1996.
After the records were discovered in 2005, work began quickly to digitize and preserve them in Guatemala, and in 2011 UT Austin entered into a formal agreement with the AHPN to preserve and publish the digital collection. The sheer size of the collection means creating a digital surrogate is a complex undertaking. As of 2018, the AHPN has digitized approximately 21 million of the full archive's 80 million documents. The digital collection consists of more than 8 TB of small TIFF images with arbitrary file names and directory structures, both of which are generated automatically during the scanning process. The structure of the physical collection is recreated digitally via a complex SQL database, rather than in the file or directory names. As a result, the digital collection cannot be easily broken into discrete intellectual units; rather, it must be kept together even as it grows past 8 TB. This opaque digital collection structure, as well as the collection's size, presents a challenge for digital preservation. This poster will describe the collection and the heightened need to digitally preserve it in light of recent developments in Guatemala. It will then detail the digital preservation work on the collection undertaken at UT Libraries beginning in spring 2018. This preservation process involved several months of continuous technical metadata extraction, bagging, and writing to tape.
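The “bagging” step refers to the BagIt packaging format, in which a payload manifest records a checksum for every file under the bag’s data/ directory. A rough sketch of the idea, including a helper for comparing two manifests to find files added since a previous delivery (the file paths and helper are illustrative, not the AHPN workflow itself):

```python
import hashlib
from pathlib import Path

def payload_manifest(bag_root):
    """Return BagIt-style manifest lines ("<sha256>  <path>") for every
    file under <bag_root>/data, with paths relative to the bag root."""
    root = Path(bag_root)
    lines = []
    for f in sorted((root / "data").rglob("*")):
        if f.is_file():
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            lines.append(f"{digest}  {f.relative_to(root).as_posix()}")
    return lines

def new_entries(old_manifest, new_manifest):
    """Paths present in new_manifest but not in old_manifest --
    the additions that would need to be written to tape."""
    def paths(lines):
        return {line.split("  ", 1)[1] for line in lines}
    return sorted(paths(new_manifest) - paths(old_manifest))

# Hypothetical manifests from two deliveries:
old = ["aa11  data/box01/0001.tif", "bb22  data/box01/0002.tif"]
new = old + ["cc33  data/box02/0001.tif"]
print(new_entries(old, new))  # ['data/box02/0001.tif']
```

Diffing manifests this way is what makes it possible to copy only new additions instead of rewriting all 8 TB on every update.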
The poster will also outline a proposed workflow for future preservation of the AHPN digital archive, which uses a combination of BagIt payload manifests and OpenRefine processing to identify and copy only new additions to the collection, obviating the need to write a complete copy of the archive to tape every time an update is delivered from Guatemala.

Item Unlocking Access: Building an Inventory Control System for a High-Density Storage Facility (Texas Digital Library, 2019-05-22) Arnspiger, Crystal; Martin, Wendy
High-density storage facilities rely on inventory control software to locate and retrieve the physical library resources they hold. In 2018, the University of Texas Libraries rose to the challenge of updating the inventory system written for its high-density facility when it initially opened in 1993. The Library Storage Facility (LSF) at the University of Texas was built to address the challenge of managing growing collections on an urban campus constrained for space. Situated nine miles north of the main campus, it was the third facility of its kind to be constructed in the United States. The facility would allow the Libraries to move some materials off site to preservation-quality storage in order to make room for new acquisitions in campus libraries. Designed to make very efficient use of space, high-density storage facilities like LSF have densely packed shelves on ranges that are over 30 feet tall. Materials are arranged by size rather than call number. While this approach to arrangement maximizes the use of the space, it also limits the ability to find items simply by looking for them on the shelves. Instead, location information for each item is recorded in an inventory control system. While this back-end system is not public facing, it is a critical component in providing access to these materials for patrons.
As the number of items housed in LSF continues to grow, so does the necessity for an up-to-date, reliable inventory system so that we may continue to provide unfailing access to these materials. This presentation describes the work of a UT Libraries software development team to create a product that would replicate the critical business functions of the original inventory system while modernizing it for today’s users. Working within the Agile software development method, a product owner coordinated communication between key library stakeholders and the software development squad. The result is a web-based system that increases interoperability with our library service platform and allows for simplification of workflows and business processes at the facility. Looking forward, the new system will adapt as the Library Storage Facility expands and is able to feature storage areas for different types of materials. Together, a software developer and a library stakeholder will talk through the process of building this system, offering both technical information and a big-picture point of view.

Item Texas Data Repository - National Transportation Library Conformant Data Repository (Texas Digital Library, 2019-05-22) Nugent, Michael
In 2018, I started the process of submitting the Texas Data Repository (TDR) to the National Transportation Library (NTL) for inclusion in its list of data repositories conformant with the US Department of Transportation's (DOT) Public Access Plan. The NTL has a list of guidelines that a repository must meet before it can be included. These guidelines are based on the 16 CoreTrustSeal requirements, "which are intended to reflect the characteristics of trustworthy repositories." As a team, we had to document TDR's conformance, preferably using public-facing documentation (e.g. FAQs), for each of the 16 different requirements. The effort began in earnest in March, and the final documentation was delivered to NTL on December 13, 2018.
The DOT's Public Access Plan was published in 2015 and covers all DOT employees and awardees from non-DOT organizations working under a DOT grant, contract, or other agreement. As part of this plan, all publications must be submitted to the NTL digital repository (ROSA-P). Similarly, all data (to the extent feasible) must "be stored and publically accessible for search, retrieval, and analysis." When seeking funding from the DOT, research proposals "must include a supplementary document labeled 'Data Management Plan' (DMP)." In this DMP, researchers outline their strategies for depositing digital data sets in a repository that enables and allows for search, retrieval, and analysis. Since the TDR has been certified by the NTL as conformant (as of 1/22/19, awaiting documentation from the NTL attesting to this), DOT-funded researchers at TDR member institutions will be able to use the TDR and its services in the DMP. In my presentation, I plan to 1) inform the wider community of TDR's conformance; and 2) describe the steps taken to show conformance.

Item Core Metadata Elements: Guidelines to Promote Consistency and Access at TAMU (Texas Digital Library, 2019-05-22) Ho, Jeannette; Stokes, Charity
In 2018, a working group was formed at Texas A&M University Libraries to propose policies that would ensure the quality and consistency of descriptive metadata in the Libraries’ new DAME (Digital Asset Management Ecosystem), which consists of multiple repositories. This presentation will describe the process the group used to identify a “core” set of metadata elements for all resources that would be stored within the DAME, its recommendations regarding three levels of “fullness” for metadata records, and issues encountered in recommending metadata schemas, controlled vocabularies, and standards.
Finally, it will describe challenges for the future and plans to address them.

Item The State of Open Access in Texas Institutions (Texas Digital Library, 2019-05-22) Hawkins, Kevin; Herbert, Bruce; Lyon, Colleen; Park, Kristi; Thomas, Camille; White, Justin
This panel will include representatives of institutions across Texas engaged in Open Access and broader Open Agenda outreach and implementation. Attendees will hear about policy development, outreach, infrastructure, investments, and staffing at the University of Texas at Austin, Texas Tech, the University of North Texas, the University of Texas Rio Grande Valley, and Texas A&M University, including the integration and interoperability of repositories, authority control records, and other initiatives that support open scholarship. The panel will discuss future possibilities for statewide collaboration, building upon existing initiatives like the Texas Digital Library and the various Open Educational Resource adoption projects. Panelists will share successful outreach strategies from their respective campuses. One major focus point is the Promotion and Tenure process, and how this process may not reward, and sometimes even dissuades, faculty from publishing in OA journals or self-archiving manuscripts. Given the statewide interest in OA funds, there will also be opportunities to discuss sources of funding and cost-offsetting in negotiations with publishers who charge APCs.
The panel will also touch on concerns commonly raised by faculty members, including the OA publishing learning curve, mandates and other demands placed on faculty, and concerns about academic freedom that we see playing out in the debate over Plan S, and how that debate might influence the discussion in the United States.

Item On board with OnBase: Migrating a Digital Collection to an Enterprise Content Management System (Texas Digital Library, 2019-05-22) Williams, Elliot
What do you do when your library’s largest digital collection is only accessible to fewer than 15 people, most of whom are not part of the library? As the University of Miami Libraries prepare to migrate to a new hosted digital library system, we have faced that question in dealing with the digitized papers of the university’s first three presidents. Together, the digitized collection contains more than 842,000 images, which make up over 263,000 digital objects. Due to privacy concerns, the collection is accessible only to the University Archivist and staff in the President’s Office. To meet the unique needs of this collection, the library has partnered with the university’s central IT office to migrate the collection from CONTENTdm to OnBase, the enterprise content management system used by the university. Managing the collection in OnBase will allow the library to make the digitized materials available in the same system where the current president’s materials are stored, and allow the library to take advantage of systems and storage provided by the university. This poster will report on the process of reimagining a digital collection for a non-library repository system, and on the collaboration with the university IT department. In particular, it will address the reasons for migrating the collection to OnBase; comparisons between OnBase and digital library systems, such as metadata and OCR functionality; and challenges and opportunities in partnering with university IT staff.
The poster will also note some of the cleanup work necessary for migrating a large legacy digital collection to a new system, such as metadata remediation and reconciling access files with archival master files.

Item Evolution of a Service Management Framework - Spotlight at Stanford as a Use Case (Texas Digital Library, 2019-05-22) Aster, Catherine
The practice of service management is a growth area for libraries and archives, particularly for the support and sustainability of open source repository solutions and their allied applications in the digital library ecosystem. Opportunities exist to leverage practices from IT service management, as well as Agile methodologies, to craft effective and innovative service frameworks we can use to support digital library applications and, most importantly, our users. As the practical application of project management principles in libraries and archives has taught us, we need to mine these standards and communities of practice to create our own unique frameworks that are both feasible and sustainable. A good service framework also positions us to bring key value as collaborators, enabling us to work effectively with software development teams. Using Spotlight at Stanford as a service management use case, we’ll explore the inner workings of an institutional service team: our initial challenges, what it took to get us up and running effectively, our goal-setting process, our accomplishments to date, and our future aspirations. Spotlight was developed in 2013/14 to address a gap we had identified in our digital library ecosystem, where we needed a sustainable, repurposable, and configurable solution to feature and showcase selected digital collections. We also needed a solution that allowed curators and content experts to build their own websites without the need for developer support for each online exhibit.
Spotlight is a repository-agnostic plugin to Blacklight, an open source discovery platform. Spotlight allows curators, librarians, and collection managers to showcase digital collections and contextualize the content with “storytelling” features. In early 2016, a shift in management in Digital Library Systems and Services at Stanford, alongside a growing need to train more curators and grow the use of Spotlight at Stanford, resulted in the formal establishment of a service team and a named service manager. In April 2016, guided by a Service Charter created specifically for Spotlight at Stanford, we held our first service team meeting. As we wrestled with the well-known challenges of establishing a new workgroup, i.e. what is referred to as “forming, storming, norming, and performing,” we made adjustments during the remainder of our first year to diversify service team membership, better define the service team’s scope of influence and responsibility, and incorporate a newly learned Agile Scrum technique to coalesce the service team around identifying, prioritizing, and executing annual goals. With three years under our belt now as a service team, we’ll summarize some of our challenging moments as well as our accomplishments, alongside sharing generalizable lessons learned in the hope that allied communities of practice can benefit from our experiences.
Finally, we’ll also discuss the process we undertook to help establish a Spotlight Service Community with a core group of institutional partners, and summarize the progress that has been made over the past eighteen months.

Item Increasing Access to Content-Rich Publications from Web Archives with Machine Learning Models (Texas Digital Library, 2019-05-22) Caragea, Cornelia; Fox, Nathan; Patel, Krutarth; Phillips, Mark
The University of North Texas (UNT) Libraries, in partnership with the University of Illinois at Chicago, were awarded a National Leadership Grant (IMLS:LG-71-17-0202-17) from the Institute of Museum and Library Services (IMLS) to research the efficacy of using machine-learning models to identify and extract content-rich publications (publications considered to be “within scope” for a given collection or repository) located in web archives. This research project seeks to combine machine learning and traditional qualitative research methods in order to improve the team's ability to identify documents and publications from web archives that align with existing collections held by cultural heritage organizations. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns, and make decisions with minimal human intervention. It is posited that the use of qualitative methods will promote the identification and formulation of features that can leverage machine learning to pull content-rich publications from web archives. The study consists of two phases. The first phase of the research, being performed by researchers at the University of North Texas, focuses on interviewing subject matter experts with experience in collecting publications from the web. Efforts have been made to collect a representative sample of collection types that align with the three use cases in this study.
These use cases include: populating an institutional repository from a university domain crawl (unt.edu), extracting state publications from a texas.gov domain crawl, and identifying technical reports from a large web archive of a federal agency (usda.gov). The interviews and subsequent analysis are aimed at identifying potential features that can be used to inform the machine learning algorithms being developed and refined in phase two of the study. Interviews have been conducted with librarians and archivists to better understand how they approach collecting publications from the web and to determine what kinds of workflows and features aid these individuals in identifying documents of interest for the collections they are building. Interviews were subsequently transcribed and analyzed using qualitative analysis software (NVivo 12). Recommendations for features to be incorporated into the machine learning models were then made to the research partners to carry out in later stages of the study. The hope is that these features, when integrated with machine learning models, can be used to identify content-rich publications from the massive amount of material available in web archives, which can then, in turn, aid libraries and archives in their collection efforts. It is also hoped that these methods will inform future research in the pursuit of breaking down barriers to the access and utilization of the wealth of resources available through web archives. This poster will present the research design of the project and the workflow for the qualitative data collection, transcription, and analysis.
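The general shape of such interview-derived features feeding a scope classifier can be illustrated with a toy example. The keyword features, weights, and threshold below are invented for demonstration; the project’s actual models and feature sets are not described here:

```python
# Toy illustration of "in scope" classification for web-archived documents:
# bag-of-words counts over interviewee-suggested keywords, scored by a
# hand-set linear model. Everything below is invented for demonstration.

def extract_features(text, vocabulary):
    """Bag-of-words counts over a fixed feature vocabulary."""
    words = text.lower().split()
    return {term: words.count(term) for term in vocabulary}

def in_scope(text, weights, threshold=1.0):
    """Score a document with a linear model; flag it if above threshold."""
    feats = extract_features(text, weights)
    score = sum(weights[t] * count for t, count in feats.items())
    return score >= threshold

# Hypothetical features an interviewee might suggest for technical reports:
weights = {"report": 0.6, "abstract": 0.5, "appendix": 0.4, "login": -1.0}

print(in_scope("Technical report with abstract and appendix", weights))  # True
print(in_scope("Login to view your account", weights))                   # False
```

A trained model would learn such weights from labeled examples rather than setting them by hand, but the pipeline (features in, scope decision out) is the same.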
A discussion of findings from the analysis of the interviews will also be included, as well as examples of feature suggestions identified for further testing in machine learning models.

Item Collaborative Governance: Creating Shared Leadership for a Unified User Experience (Texas Digital Library, 2019-05-22) Bolton, Michael; German, Elizabeth; Mosbo, Julie; Potvin, Sarah
For years, the line between public and technical services has narrowed, blurred, and, in many ways, eroded. From a user-centric point of view, there is no difference between the libraries’ chat service, article access, instruction sessions, digitized archives, or group study rooms: it is all the library. While libraries continue to adapt their organizational structures to reflect the new realities, cross-departmental work and cooperation will always be necessary in order to provide positive user experiences. This presentation outlines different governance structures that exist within a single institution. We will reflect on the alignment and conflict that comes from multiple perspectives and purviews. The presentation will also use a case study of the launch of digital exhibit software, Spotlight, to demonstrate the types of barriers that exist and how they can be overcome.