Browsing by Subject "Human-computer interaction"
Now showing 1 - 18 of 18
Item A user learning based DSS implementation methodology (Texas Tech University, 1990-05) Chatfield, Akemi Takeoka
The behavioral problem of user resistance to change and its role in systems implementation failure has been raised in MIS and related fields. The opportunity cost of unused DSS technology is substantial, because it cannot improve decision performance. Despite this recognition, extant DSS design methodologies have directed much of their focus toward the technical issues related to the design of DSS technology. These methodologies are deficient from the perspective of managing the behavioral problem and motivating DSS utilization. The purpose of this research is to provide a conceptual understanding of the behavioral problem of user resistance to change, and to identify and develop a means of resolving user resistance to change and hence motivating DSS utilization. This research presents a user-learning-based DSS implementation methodology. The methodology development is built upon prior research in MIS and related fields. A user-learning-based DSS implementation methodology consists of a user-learning model of DSS implementation, a user-learning approach to DSS implementation, a set of implementation steps, and a generic architectural model of knowledge-based user-learning support systems (KULSS). This methodology is applicable to most DSS implementation situations where user resistance to change is observed at the onset of DSS implementation. The methodology facilitates user-cognitive learning to resolve user resistance to change and to develop a felt need for DSS utilization.
A set of generic KULSS commands enables the user to identify his actual decision performance, to learn a desired decision performance, and to understand how the actual decision performance differs from the desired decision performance.

Item A web-based software system to support academic engineering advising (Texas Tech University, 2004-05) Neek, Cyrus
The basic relationship for individual development and group experience (BRIDGE) is an advisory system designed and overseen by the Texas Tech University College of Engineering. It is intended to help incoming freshman students learn basic engineering principles, problem solving, teamwork, and time organization. The BRIDGE software tool, which is the subject of this thesis, is a web-based software system to support this academic engineering advising function. The software is implemented to help administrators coordinate mentors and students in synchronizing events and schedules and managing resources.

Item An investigation of the relationship between similarity in cognitive processing and time-sharing performance in a computer-windowing environment (Texas Tech University, 1994-05) Eaglin, Jennifer Willis
Recent advancements in computer systems have led to highly complex human-computer interfaces. One of the most notable developments is multiple-window display technology, which allows the user of a computer-generated display to simultaneously access and act upon multiple sources of information (Gaylin, 1986). Although preliminary research involving computer windowing has been favorable, little research is available concerning the effects of window management techniques on operator performance involving complex, concurrent task combinations. Furthermore, some information processing theories indicate that human time-sharing capabilities may be affected by various task factors.
This study examined possible performance variations that may result from combinations of cognitive tasks presented via computer-generated multiple-window displays. The experiment employed a computerized assessment battery, the Complex Cognitive Assessment Battery (CCAB), to generate four sets of dual tasks that varied in terms of cognitive processing similarity. The results suggested that similarity of cognitive processes affected time-sharing task performance. Specifically, subjects' performances were not affected by task similarity in the single-task conditions; however, performance levels for one of the dual tasks decreased as the similarity of cognitive processes in the multi-task conditions decreased. Additionally, test assessment procedures significantly affected subjects' response strategies. Subjects performed tasks with freestyle solutions more slowly but more accurately than tasks with multiple-choice answers. These findings were translated into design considerations for real-world systems.

Item Animation in user interface design for decision making: a research framework and empirical analysis (Texas Tech University, 1995-12) Gonzalez, Cleotilde
Animation is becoming an increasingly popular feature in user interfaces. Animation in information displays is expected to influence decision making by facilitating and improving human and computer interaction (HCI). Unfortunately, the use and effect of animated user interfaces for decision making are unknown. How should animated interfaces be designed to improve decision making performance? Answers to this question are crucial to designing effective information systems that support decision making. This research provides a new conceptual Animation User Interface Design (AUID) research framework for answering this question. In addition, this research empirically evaluates some of the AUID's propositions.
The AUID research framework suggests a definition of animation in HCI, defines animation design goals, and presents an architecture to illustrate decision making with animated interfaces. This framework proposes that animation may support decision making if its design accounts for the task domain and structure; individual difference factors such as visual imaging abilities and experience; and characteristics of the animated interface such as images, alterations, transitions, timing, and interactivity. To explain possible decision making effects, the AUID framework focuses on theories of visual perception and cognition of successive displays. Several research hypotheses are derived from the propositions of the AUID framework. Primary hypotheses test the relative effects of images (realistic and abstract), transitions (gradual and abrupt), and interactivity (parallel and sequential) in two different decision making domains. Secondary hypotheses test the interaction between the animation interface design elements, the task domain, and the individual difference factors. A laboratory experiment was conducted to investigate these hypotheses. The results show that decision making performance in animated interfaces is highly contingent on the properties of the animation user interface, such as image type, transition smoothness, and interactivity style, as well as sensitive to the task domain. In sum, this research suggests that a human information processing approach to designing animated interfaces is a powerful one for supporting decision making.
To be an effective decision support tool, animation must be smooth, simple, and interactive, and must explicitly account for the appropriateness of the user's mental model of the task.

Item Automated dispensing cabinets: A usability study using virtual reality simulation (2012-05) Linn, Colleen M.; Haq, Saif; Hill, Glenn E.; DeLucia, Patricia R.; Decker, Sharon
Automated Dispensing Cabinets (ADCs) are increasingly becoming essential technology in hospitals. Currently, the available research on ADCs falls primarily into two areas: the design and specifications of ADCs, and their role in reducing medical and inventory errors. Regarding the latter, 'before-after' studies are predominant. ADCs are a relatively new addition to hospital equipment, which means that there are fewer machine-human interaction studies. Machine-human interaction is important for assessing efficiency and errors, especially when taking into account nurses' long working hours and the associated cognitive load and fatigue. A study was devised that investigated the machine-human interrelationships of hospital ADCs. Since these machines are expensive, this study uses a Virtual Reality Simulation (VRS) of one particular brand. Experiments on machine-human interactions are carried out within the VRS. The first step of the study involved creating the VRS, which was then evaluated to determine its usability. This was done by asking nurses with prior knowledge and experience of ADCs to test the VRS and provide feedback through a 10-question modified System Usability Scale (SUS) survey. Nurses were given a task to complete within the VRS, one that is commonly performed in a real-world situation. At the conclusion of the study, the results from the usability data are used to report on the advantages, disadvantages, and implications of creating a VRS of an ADC.
Recommendations for future research are also included in this thesis.

Item Development of a goal-driven analysis for requirements definition in hypertext information systems supporting complex-problem solving (Texas Tech University, 1999-05) Albers, Michael Joel
When engaged in open-ended problem solving, the user must evaluate information from multiple sources. Unfortunately, people find it difficult to effectively search for and integrate multiple sources of information, requiring the system to provide the information in a manner that relates to the context of the problem. Also, rather than needing information in pre-defined ways, the viewing order and the specific information required change with each problem. As a result, the methods used in conventional task analysis, which focus on defining the individual steps of a well-defined sequence, fail to provide good requirements for systems intended to support open-ended problem solving. Rather than focusing on individual steps, this dissertation develops a goal-driven analysis methodology based on defining and relating users' goals and information needs. Unlike a task-based analysis, the goal-driven analysis methodology revolves around uncovering the user's goals, the information needed to achieve those goals, and the contextual relationships between information elements. The analysis strives to uncover the major potential problem-solving paths and the information required to support following those paths, to provide the problem solver with varied routes to solving a specific problem. The unique feature of goal-driven analysis is that, throughout the methodology, it focuses on maintaining a connection between the user's goals, information needs, and problem context. This dissertation integrates the technical communication, cognitive psychology, and situation awareness literature, and explores the socio-cognitive aspects of information design as they relate to complex problem solving.
It begins by arguing that effective information presentation requires a match between the user's mental model, the real-world context, and the factors which contribute to situation awareness. The dissertation then derives a four-step methodology (ethnography, interviews, scenario development, and group discussion) to develop a goal/information diagram which captures a graphical representation of the user's goals and information needs. The goal/information diagram then becomes the foundation for the analyst to use when developing system requirements. The dissertation also provides an extended example of how to perform a goal-driven analysis.

Item Dynamic voice user interface (Texas Tech University, 2002-08) Onal, Erhan
Ease of use has always been one of the most important goals in Human Computer Interaction (HCI) research [1]. Since speech is the most natural method of communication for humans, researchers of HCI have recently focused on voice UIs. Even though considerable advances have been made in the voice recognition field of HCI, ease of use is yet to be achieved. Users have certain goals in mind while using a computer. To reach these ends, they use specific functionality of certain applications. With current interfaces, users have to open an application first to access the functionality they want. If they want to activate the functionality of another application, they either have to launch that application or change the active window. Thus there is no unified and easy way of working at a feature level. Today, users are overwhelmed not only by the number of applications, but also by the number of features a typical application has. Finding the needed functionality on the screen among other functionality is a time-consuming task that hampers ease of use. Applications' features are presented on the screen even when they are not needed. It is not possible to customize the UI such that only the features the user wants are available.
Our tool, the Dynamic Voice User Interface (DV-UI) presented in this thesis, addresses these issues to create a unified, customizable voice user interface. DV-UI uses speech-to-text, speaker identification, and text-to-speech synthesis to provide an easy-to-use voice interface. It lets the user select any functionality from a set of applications and associate it with voice commands in real time. In this way, users can reach the functionality they need through a single voice command of their choice without having to open an application or look for the functionality on the screen. DV-UI provides a user-centric environment by allowing the user to add, modify, and delete voice commands and save these preferences under their voiceprints via speech. We expect that this approach of unification and customization will help the voice user interface become a mainstream mode of HCI.

Item Haptic rendering of volumetric soft-bodied objects (Texas Tech University, 1998-08) Burgin, Jonathan Ronald
The interfacing of force-feedback devices to computers adds touchability to computer interactions, a field called computer haptics. Computer haptics has two components: (1) collision detection of virtual objects with the haptic interface device, and (2) determining and displaying appropriate force feedback to the user via the haptic interface device. This is a new field, with most of the original work done in the fields of mechanical engineering and the biophysical sciences. As such, the computing model that incorporates haptics was, until recently, a secondary concern. Most of the data structures and algorithms applied to haptic rendering have been adopted from non-pliable surface-based graphics systems, which is not always appropriate because of the different characteristics required to render haptic systems. Two new algorithms are currently available that can be applied to haptics to improve the collision detection and force-feedback generation of computer haptics.
Currently, there are two basic methods available: (1) the occupancy-map algorithm (OMA), which is used for fast collision detection with solid, non-deformable, convex virtual objects, and (2) the chainmail algorithm (CMA), used for calculating the behavior of 2D (surface) convex objects. The work we have done uses advanced computer modeling and coding techniques to implement haptic rendering of 3D volumetric objects, using the OMA for collision detection and the CMA for the generation of real-time force feedback. A comparative analysis of this technique for haptic rendering versus more traditional methods has been provided. This work has enhanced the previous versions of this technique and has shown the viability and advantages of this new haptic rendering paradigm. These algorithms were implemented using the PHANToM haptic device from SensAble Technologies, a six-degree-of-freedom force-feedback device used with many haptic displays. Graphics were implemented using the version of OpenGL provided with the Windows NT operating system.

Item Haptic virtual environment (Texas Tech University, 2001-05) Acosta, Eric Javier
Virtual Reality is "the illusion of participation in a synthetic environment rather than external observation of such an environment" [12]. The concept of experiencing a virtual world that the user may otherwise never be able to experience has drawn an enormous amount of publicity for many years. This multi-sensory experience typically relies on three-dimensional (3D) graphics and sound, but now we are able to incorporate the sense of touch into these virtual worlds. Haptics is a technology that adds the sense of touch to virtual reality, and recent advancements in this field have spawned worldwide interest from different fields of study for both commercial and research purposes. Given the importance of the sense of touch for humans, it is desirable to combine tactile, visual, and audio cues to develop a more realistic environment.
Such cues would be applicable in a variety of applications ranging from entertainment to simulation training. The incorporation of haptic displays in virtual environments brings many new possibilities, but not without introducing a new dimension of problems that have to be overcome. One such problem is the formation of haptic virtual objects. Unfortunately, there are no high-level tools for the creation, visualization, and manipulation of complex haptic virtual environments, and the incorporation of haptics into a system usually requires low-level programming efforts by the developers, forcing them to be knowledgeable in 3D graphics and haptics programming. The goal of this research was to provide an underlying infrastructure that could replace the current labor-intensive methods of creating haptic virtual environments with an easier method equivalent to creating graphical virtual environments. This research demonstrates the feasibility of this concept by describing a prototype that was implemented as a plug-in for 3D Studio Max, a commercial graphics package. This plug-in transforms a graphical virtual environment into a haptic virtual environment without any additional programming effort, allowing developers of haptic scenes to model 3D scene objects graphically, or use preexisting models, and make them haptic with the press of a button. The plug-in also provides the user with the ability to dynamically define haptic materials and apply them to objects in the scene. The user can then modify the properties of the materials interactively to change how the objects feel, in an attempt to model more realistic materials. These materials can then be saved into a database for reuse when creating haptic virtual environments.

Item The relationship between learners' goal orientation and their cognitive tool use and achievement in an interactive hypermedia environment (2001-05) Katz, Heather Alicia; Liu, Min, Ed.D.

Item Software architecture for cross platform user interfaces (Texas Tech University, 1996-08) Keshavamurthy, Badriprasad
This research is a study of user interfaces, and in particular, cross-platform user interfaces. The problems associated with portability and the challenges and difficulties associated with developing user interfaces are examined. The models used for describing user interface architectures are evaluated and compared. The techniques used to build cross-platform user interfaces and the existing user interface industry standards are examined, and the requirements and characteristics of cross-platform user interface architectures are examined in great detail. Also, the steps necessary to convert an already-built GUI library into a cross-platform library are defined.

Item Speaker independent real-time speech recognition system (Texas Tech University, 1998-08) Jindani, Abid M
This thesis attempts to develop a real-time speaker-independent Automatic Speech Recognition (ASR) system. The system recognizes isolated utterances from a limited vocabulary, and is small and cost-efficient enough to be incorporated into a consumer appliance. The recognition is based on zero-crossing and energy content measurements on the speech waveforms. The algorithm is based on segmenting the speech waveform into ten equally spaced intervals and performing a match with the patterns in a reference template. The system was implemented on an IBM Personal Computer and achieved an error rate of 0% on a vocabulary of four words from an initial ten-word database of 16 speakers (8 male and 8 female). The system recognized unknown utterances in less than 0.3 seconds.

Item Speech system for a voice-impaired person (Texas Tech University, 1999-12) Sirigineedi, Ravi Kumar Anjani
This thesis attempts to develop a speaker-dependent speech system for voice-impaired people.
The system recognizes isolated utterances from a limited vocabulary, and is small and cost-efficient enough to be incorporated into a hand-held system. A 20-dimensional feature vector was generated based on zero-crossing and energy content measurements of the speech waveforms. The generated feature vectors were used to train a neural network, and the trained network was tested with known and unknown utterances. The system was implemented on an IBM Personal Computer and achieved a recognition rate of 76% on a ten-word database of 16 speakers (8 male and 8 female). A test database, which mimics a voice-impaired person's speech, was developed, and a recognition rate of 60% was observed. The system recognized utterances at an average rate of 0.15 seconds per recognition.

Item The effect of level of automation and adaptive automation on performance in dynamic control environments (Texas Tech University, 1996-08) Kaber, David B
Level of automation (LOA) designates the degree of human and computer interaction in controlling an automated system and has typically been examined in a binary fashion; that is, either the human or the computer is assigned to a given task. Recently, studies have investigated intermediary levels of automation with the intent of keeping both the human operator and the computer involved in system performance to promote operator situation awareness and reductions in out-of-the-loop performance problems. Allocating automated system (manual) control between human and computer servers for varying durations of time throughout system functioning, with the intent of improving human operator performance, has been labeled adaptive automation (AA). The effect of manual control allocations on operator performance during fully automated operations following various scheduling strategies has also been investigated as a means of promoting operator monitoring performance in working with automated systems.
The objective of this study was to examine the interaction between LOA and AA. An experiment was conducted to investigate the impact of various levels of automation and AA strategies on human/machine system performance, operator situation awareness, and workload in dynamic control tasks within a multitask environment. A secondary goal of this study was to assess the relation of primary task performance, situation awareness, and workload with secondary task performance. Thirty subjects performed a dynamic control simulation and a secondary gauge monitoring task simultaneously. Testing involved trials at five levels of automation ("Batch Processing," "Shared Control," "Blended Decision Making," "Supervisory Control," and "Full Automation") allocated during manual functioning for different automation allocation cycle times (AACT) comprising 0%, 20%, 40%, 60%, and 100% of total task time, respectively. Results revealed AACT to have little effect on SA and performance, as compared to LOA, yet it was the driving factor in changes in subjective workload and secondary task performance. Level of automation (as a main effect) had little influence on workload and secondary task performance, yet it accounted for a significant portion of the variance in primary task performance and SA. The combined effect of AACT and LOA on all response measures was not additive in nature. Interestingly, the LOA yielding the "best" overall performance ("Batch Processing") did not do so at the AACT producing superior functioning (100% automation cycle time).

Item The effect of unit task granularity on performance in teleoperations (Texas Tech University, 1997-08) Onal, Emrah
The choice of the command language in teleoperations has been left to systems designers, often resulting in arbitrary decisions. The granularity of the command language (unit task granularity) is one of the factors that determines the level of interaction in teleoperations.
A 3-D computer simulation was used to test the effect of two levels of unit task granularity (high and low) on performance in nuclear material handling. Data on human-machine performance, operator situation awareness, and operator workload were collected and studied. Results revealed that operator workload and level 2 situation awareness were higher under high unit task granularity than under low unit task granularity. The analysis of the data under the normal mode of operation revealed that human-machine performance (as measured in terms of time and number of errors) was higher under low unit task granularity than under high unit task granularity. During system failure, human-machine performance was higher under high unit task granularity than under low unit task granularity. This study revealed the trade-off, determined by unit task granularity, between performance under the normal mode of operation and performance during system failure.

Item The effects of field dependence-independence and graphical/non-graphical user interfaces upon word processing errors (Texas Tech University, 1991-12) Sparkman, Gerald Wayne
Recent studies suggest that word processing errors can arise from individual differences along the continuum of field dependence-independence, or from features of the word processing software per se. Experiment I compared the performance of field-dependent and field-independent subjects on three types of word processors: non-graphical, graphical, and one which combined features of these two major types. Naive subjects were asked to find errors which had been inserted into texts, and to create their own brief essays. Experiment I found that under high-resolution conditions, software type influenced the total number of errors found during the proofreading task. An interaction effect was found between software type and field dependence for the number of transposition errors remaining during the creation task.
For Experiment II, naive subjects were asked to perform the same tasks using a cluttered or uncluttered word processing environment. Experiment II found that under less-than-high-resolution conditions, field dependence interacted with interface type for the number of proofreading errors found and the total number of errors remaining during the creation task. Implications for future research are discussed.

Item The effects of software disruption on goal commitment, task self-efficacy, computer self-efficacy, and test performance in a computer-based instructional task (Texas Tech University, 2000-12) Lincecum, LeAnn
The societal value placed on the acquisition of computing skills is reflected in the increasing prevalence of computing technology in educational settings. Computers have purpose and value because they enable humans to accomplish valued tasks more efficiently. Since the computer decreased the amount of time required to produce documents, management's expectations in terms of product output increased. The value of task outcomes and the time to complete tasks have remained constant, but external expectations and the rate of output have increased, resulting in both physical and psychological ramifications for the user. The stress placed on those dealing with technology on a daily basis is heightened when something goes wrong with the technology. The trend in higher education is toward more computer-based instructional environments. However, the effects of technology-related problems experienced by learners are generally unknown. Cognized goals within the context of Social Cognitive Theory are one of the most important cognitive motivators in learning. However, there is little research regarding learner goal commitment within the context of instructional design. Further, there is little research on the effects of software disruptions on self-efficacy, goal commitment, or performance in computer-based instruction.
This study considered the effects of software disruptions on goal-directed behavior, computer self-efficacy, task self-efficacy, and performance. A one-way multivariate analysis of variance failed to find statistically significant differences between the treatment groups with regard to the dependent variables of task self-efficacy, computer self-efficacy, goal commitment, and test performance as a result of the software disruptions. Results indicate that participants were unaffected by the software disruptions. Supplemental one-way analyses of variance conducted on subscale measures of satisfaction, anxiety, and frustration also failed to find significant differences between the treatment groups. While not statistically significant, there appeared to be differences in test performance between treatment group one and treatment groups two and three, in that group one made more test performance gains on the second trial of the software program. Additionally, post-test goal commitment means for treatment group three, which experienced the longest disruption, dropped, while post-test goal commitment means for treatment groups one and two rose. Recommended further research includes studies that add time constraints and multiple disruption events to the treatment conditions to ascertain the effects of software disruptions.

Item Time in human-computer interaction: performance as a function of delay type, delay duration, and task difficulty (Texas Tech University, 1990-05) Stokes, Michael Thomas
Human-computer interaction comprises two time phases: System Response Time (SRT), the interval of time between user input and computer response, and User Response Time (URT), the time that elapses between computer response and user input. Delays within each of these time intervals have been shown to significantly impact performance of computer-mediated tasks.
The results of studies assessing this problem suggest that delay effects depend on the difficulty level, or information processing requirements, imposed by the task being performed. Time delays must be commensurate with the amount of time required to process the information associated with task decisions. As task difficulty increases, computer-imposed delays should facilitate performance by releasing the user from the time constraints associated with quick computer response rates. The effect of such delays should be to induce the user to take more time to think about the task. The current study assessed delays (0, 1, 2, 4, and 8 seconds) within the SRT and URT time phases and task difficulty level (low, medium, or high) as they relate to users' performance of a problem-solving task. Results suggest that moderate delays facilitate performance on tasks of greater difficulty, but serve no beneficial purpose for tasks that do not impose significant information processing loads.
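The two speech recognition items above both derive their features from zero-crossing and energy measurements over equally spaced segments of the waveform. A minimal sketch of that style of feature extraction follows; it is an illustration only, not the theses' actual code, and the function name and segment handling are assumptions:

```python
def speech_features(samples, n_segments=10):
    """Split a waveform into equal segments; for each segment, compute the
    zero-crossing count and the energy (sum of squared samples). With ten
    segments this yields a 20-dimensional feature vector, as in the
    speaker-dependent system described above."""
    seg_len = len(samples) // n_segments
    features = []
    for i in range(n_segments):
        seg = samples[i * seg_len:(i + 1) * seg_len]
        # Zero crossings: count sign changes between consecutive samples.
        zc = sum(1 for a, b in zip(seg, seg[1:]) if (a >= 0) != (b >= 0))
        # Energy: sum of squared sample amplitudes over the segment.
        energy = sum(x * x for x in seg)
        features.append(zc)
        features.append(energy)
    return features
```

The resulting vector could then feed a template match (as in the speaker-independent system) or a neural network classifier (as in the voice-impaired speech system).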