Browsing by Subject "Measurement"
Now showing 1 - 18 of 18
Item Are icons pictures or logographical words? Statistical, behavioral, and neuroimaging measures of semantic interpretations of four types of visual information (2012-05) Huang, Sheng-Cheng; Bias, Randolph G.; Dillon, Andrew; Francisco-Revilla, Luis; Schnyer, David; Sussman, Harvey
This dissertation is composed of three studies that use statistical, behavioral, and neuroimaging methods to investigate Chinese and English speakers’ semantic interpretations of four types of visual information: icons, single Chinese characters, single English words, and pictures. The goal is to examine whether people cognitively process icons as logographical words. Collecting survey data from 211 participants, the first study investigated, on a quantitative scale, how well each of these four types of visual information can express specific meanings without ambiguity. In the second study, 78 subjects participated in a behavioral experiment that measured how quickly people could correctly interpret the meaning of these four types of visual information, in order to estimate differences in the reaction times needed to process these stimuli. The third study employed functional magnetic resonance imaging (fMRI) with 20 participants selected from the second study to identify the brain regions engaged by these four types of visual information and to determine whether the same or different neural networks were required to process these stimuli.
Findings suggest that 1) like pictures, icons are statistically more ambiguous than English words and Chinese characters in conveying the immediate semantics of objects and concepts; 2) in terms of behavioral responses, English words and Chinese characters are more effective and efficient than icons and pictures at conveying the immediate semantics of objects and concepts; and 3) according to the neuroimaging data, icons and pictures require more brain resources than text, and the pattern of neural correlates when reading icons differs from that when reading Chinese characters. In conclusion, icons are not cognitively processed as logographical words like Chinese characters, although both stimulate the semantic system in the brain that is needed for language processing. Chinese characters and English words are more evolved and advanced symbols that are less ambiguous, more efficient, and easier for a literate brain to understand, whereas graphical representations of objects and concepts such as icons and pictures do not always provide immediate and unambiguous access to meanings and are prone to varying interpretations.

Item A BIST circuit for random jitter measurement (2012-05) Lee, Jae Wook; Abraham, Jacob A.; Gharpurey, Ranjit; Touba, Nur; Pan, David Z.; Driga, Mircea; Gerosa, Gianfranco
Jitter is a dominant factor contributing to a high bit error rate (BER) in high-speed I/O circuitry, and it degrades the quality of the clock signal from a phase-locked loop (PLL), subsequently eating into a given timing budget. The recent proliferation of systems-on-a-chip (SoCs), aided by technology scaling, makes jitter measurement more challenging as SoCs integrate more I/O circuitry and PLLs within a chip. Jitter has, however, been one of the most difficult parameters to measure accurately when validating high-speed serial I/O circuitry or PLLs, mostly because of its small magnitude.
External instruments with full-fledged high-precision measurement hardware and comprehensive analysis tools have been used for jitter measurement, but the increased test cost from long test times, signal-integrity issues, and the need for human intervention prevent this approach from being used in high-volume manufacturing testing. Built-in self-test (BIST) solutions have recently become attractive for overcoming these drawbacks, but complicated analog circuit designs that are sensitive to ever-increasing process variations, together with the associated complex analysis methods, impede their adoption in SoCs. This dissertation studies practical random jitter measurement methods that achieve measurement accuracy by exploiting a differential approach, making the proposed methods tester-friendly solutions for automatic test equipment (ATE). We first propose a method that measures the average value of the random jitter, rather than the jitter at every clock cycle; this average can be converted to the root-mean-square (RMS) value of the random jitter, the key indicator of the amount of random jitter. We then propose a simple but accurate delay measurement method that uses the proposed jitter measurement method for random jitter measurement when a reference signal, such as a golden PLL output in high-speed I/O validation, is not available. The validity of the proposed random jitter measurement method is supported by measurement results from a test chip. The impact of substrate noise on the signal of interest is also shown with measurements using a test chip.
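As background to the average-to-RMS conversion mentioned in this abstract: for zero-mean Gaussian random jitter, the RMS value σ and the mean absolute deviation are related by σ = E[|x|]·√(π/2). The following is a minimal numerical sketch of that statistical identity only, not the dissertation's circuit-level measurement method:

```python
import math
import random

def rms_from_mean_abs(jitter_samples):
    """Estimate the RMS of zero-mean Gaussian jitter from the
    average absolute deviation: sigma = mean(|x|) * sqrt(pi/2)."""
    mean_abs = sum(abs(x) for x in jitter_samples) / len(jitter_samples)
    return mean_abs * math.sqrt(math.pi / 2)

# Synthetic check with 2 ps RMS Gaussian jitter
random.seed(0)
true_rms = 2.0  # ps
samples = [random.gauss(0.0, true_rms) for _ in range(100_000)]
print(rms_from_mean_abs(samples))  # close to 2.0
```

This is why measuring only the *average* jitter magnitude, which is far easier to do in hardware, still recovers the RMS figure, provided the Gaussian assumption holds.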
To address the random jitter of a clock signal when the clock is operating in its functional mode, we demonstrate a novel method for random jitter measurement that exploits the shmoo capability of a low-cost production tester without relying on any BIST circuitry.

Item Client and practitioner perspectives on multicultural counseling competence (2012-05) Ihorn, Shasta Marie; Keith, Timothy, 1952-; Cokley, Kevin O.
As the population of the United States becomes more diverse, it is important that research be done to inform the implementation of psychological services that meet the needs of a wide variety of ethnic and socioeconomic groups. Research suggests that minority and low-SES clients with mental health disorders are underserved and receive inferior care when they do receive treatment. Although a large body of theory on multicultural counseling competence (MCC) has been developed over the last 30 years, little empirical research has been done in this area. This research proposal reviews the current research and theory and proposes the development and norming of a consumer measure of MCC.

Item Developing and testing a protocol for curricula feedback in higher education (Texas Tech University, 1997-05) Ergish, Gary A.
The purpose of this investigation was to develop and test a curriculum evaluation protocol to determine its validity, comprehensiveness, and usefulness for evaluating higher education academic curricula. While it has application in other higher education environments, the Curriculum-Evaluation Protocol was tested in the specific professional higher education environment of the United States Air Force's B-1B (strategic bomber) student academic curriculum. This study of the Curriculum-Evaluation Protocol included two independent groups, composed of 183 adult students and curriculum development experts, who were given an accompanying survey to determine the validity, comprehensiveness, and usefulness of the protocol.
One group was composed of 129 students in seven separate student classes over one academic year. The students were asked to use the protocol in an academic setting before completing the survey, and 116 students returned completed surveys. The same survey instrument was given to a second group of 54 curriculum development experts, of whom 52 returned completed surveys. Statistical analysis of the 116 student and 52 curriculum development expert survey results indicates that both populations rated the Curriculum-Evaluation Protocol, on a rating scale of 0 to 5, to be overall valid, overall comprehensive, and overall useful. Additionally, both populations rated the individual parts that comprise the protocol to be valid as well.

Item Development and validation of the cognitive vulnerability schemas questionnaire for anxious youth (2014-12) Winton, Samantha Marie; Stark, Kevin Douglas
According to cognitive theories of anxiety, anxiogenic schemata are a set of beliefs, rules, and assumptions that influence how those with anxiety make inferences and interpret threat. It is hypothesized that each anxiety disorder has a unique anxiogenic schema. This report describes the development of the Cognitive Vulnerability Schemas Questionnaire for Anxious Youth, an instrument used to measure anxiogenic schemata in youth aged 7-17 years. Factor analyses of the scale demonstrated two empirically distinct and relatively stable dimensions of anxiogenic schema. The two identified factors were: (1) Generalized Anxiety and Social Phobia Schema, and (2) Separation Anxiety Schema. The measure demonstrated good psychometric properties on a range of indices of reliability and validity. Results indicated that scores on the questionnaire subscales predicted anxiety symptomatology. Regression analyses showed that both factors were predictors of anxiety symptomatology; however, they did not predict anxiety diagnosis.
Significant differences in the Cognitive Vulnerability Schemas Questionnaire for Anxious Youth subscales were demonstrated between patients with clinically significant Generalized Anxiety Symptoms, Social Phobia Symptoms, and Separation Anxiety Symptoms. The implications of these findings for theories of cognitive vulnerability and schema development in youth are discussed.

Item Establishing the reliability and validity of a processing measure of big picture appraisal (2015-12) Haner, Morgynn Lynn; Rude, Stephanie Sandra; Allen, Greg
Three separate studies established the psychometric properties of the Scrambled Sentences Test for Big Picture Appraisal (SST-BPA), a performance measure that entails viewing difficult situations and one’s reactions to them in terms of a larger context that includes perspectives such as extended time, one’s broader life, and common human struggles. Study 1 established the content validity of the SST-BPA by showing that judges rated SST-BPA items as consistent with a description of the construct. In Studies 2 and 3, participants completed paper- and computer-administered versions (respectively) of the SST-BPA along with self-report measures of similar and dissimilar constructs. Item-total correlations supported the internal consistency of the SST-BPA, and correlations with other measures supported its convergent and discriminant validity.

Item Initial development and validation of the Assessment of Beliefs and Behaviors in Coping (ABC) (2012-08) Kulkarni, Monique Shah; McCarthy, Christopher J.; Cokley, Kevin; Dodd, Barbara; Keith, Timothy Z.; Parker, Randall
The central purpose of this study was to apply structural equation modeling techniques to a newly developed measure of religious coping, the Assessment of Beliefs and Behaviors in Coping (ABC), in order to confirm the factor structure previously established through exploratory factor analysis.
The ABC is a two-part, 40-item measure (each part containing 20 items) that assesses attitudes about the helpfulness of religious coping as well as the use of religious coping behaviors. Multi-group confirmatory factor analysis was conducted to determine whether the established factor structure is the same across religious groups. Participants were 885 undergraduate students from the Department of Educational Psychology subject pool. Confirmatory factor analysis (CFA) was used to assess the fit of the hypothesized structure as well as to explore the fit of competing models. The factor structure of the attitude portion of the measure was confirmed independently of the behavior portion. Both scales demonstrated the initially theorized four-factor model. Multi-group analyses were then conducted on each portion of the ABC, again independently. Partial scalar invariance was demonstrated for the ABC – Attitudes across three groups (Christians, Non-Christians, and Non-Believers). Partial scalar invariance was also demonstrated for the ABC – Behaviors, but only for the Christian and Non-Christian groups. Finally, participants’ scores on the ABC were compared with their scores on existing measures of similar constructs to assess convergent validity. The reliability of the instrument was also evaluated. By better understanding the role religion plays in coping with stressful life events, the objective is to help mental health professionals address religion, when applicable, with their clients. Limitations, directions for future research, and implications for counseling psychology are also discussed.

Item IRLbot: design and performance analysis of a large-scale web crawler (Texas A&M University, 2008-10-10) Lee, Hsin-Tsang
This thesis shares our experience in designing web crawlers that scale to billions of pages and models their performance.
We show that with the quadratically increasing complexity of verifying URL uniqueness, breadth-first search (BFS) crawl order, and fixed per-host rate limiting, current crawling algorithms cannot effectively cope with the sheer volume of URLs generated in large crawls, highly branching spam, legitimate multi-million-page blog sites, and infinite loops created by server-side scripts. We offer a set of techniques for dealing with these issues and test their performance in an implementation we call IRLbot. In our recent experiment, which lasted 41 days, IRLbot running on a single server successfully crawled 6.3 billion valid HTML pages (7.6 billion connection requests) and sustained an average download rate of 319 Mb/s (1,789 pages/s). Unlike our prior experiments with algorithms proposed in related work, this version of IRLbot did not experience any bottlenecks and successfully handled content from over 117 million hosts, parsed out 394 billion links, and discovered a subset of the web graph with 41 billion unique nodes.

Item Large-scale network analytics (2011-08) Song, Han Hee, 1978-; Zhang, Yin, doctor of computer science
Scalable and accurate analysis of networks is essential to a wide variety of existing and emerging network systems. Specifically, network measurement and analysis help us understand networks, improve existing services, and enable new data-mining applications. To support various services and applications in large-scale networks, network analytics must address the following challenges: (i) how to conduct scalable analysis in networks with a large number of nodes and links, (ii) how to flexibly accommodate various objectives from different administrative tasks, and (iii) how to cope with dynamic changes in the networks. This dissertation presents novel path analysis schemes that effectively address the above challenges in analyzing pair-wise relationships among networked entities.
In doing so, we make three major contributions, to large-scale IP networks, social networks, and application service networks. For IP networks, we propose an accurate and flexible framework for path property monitoring. Analyzing the performance of paths between pairs of nodes, our framework incorporates approaches that perform both exact and approximate reconstruction of path properties. The framework is highly scalable, supporting measurement experiments that span thousands of routers and end hosts, and flexible enough to accommodate a variety of design requirements. For social networks, we present scalable and accurate graph embedding schemes. Aimed at analyzing the pair-wise relationships of social network users, we present three dimensionality reduction schemes leveraging matrix factorization, count-min sketch, and graph clustering paired with spectral graph embedding. As concrete applications showing the practical value of our schemes, we apply them to the important social analysis tasks of proximity estimation, missing link inference, and link prediction. The results clearly demonstrate the accuracy, scalability, and flexibility of our schemes for analyzing social networks with millions of nodes and tens of millions of links. For application service networks, we provide a proactive service quality assessment scheme. Analyzing the relationship between the satisfaction level of subscribers to an IPTV service and network performance indicators, our proposed scheme proactively assesses user-perceived service quality (i.e., it detects issues before IPTV subscribers complain) using performance metrics collected from the network.
From our evaluation using network data collected from a commercial IPTV service provider, we show that our scheme is able to predict 60% of the service problems that customers complain about, with only 0.1% false positives.

Item Measurement of arsenic in water and soil based on gas-phase chemiluminescence (Texas Tech University, 2007-05) Idowu, Ademola David; Dasgupta, Purnendu K.; Liu, Shaorong; Quitevis, Edward L.
Arsenic occurs widely in nature and is a known human carcinogen. Developmental, immunological, and neurological defects are linked with chronic exposure to arsenic in drinking water. The safe limit prescribed by the United States Environmental Protection Agency (US EPA) is 10 μg/L. Standard atomic-spectrometry-based methods are expensive, while field wet techniques require large amounts of acid and other reagents, and paper strips impregnated with toxic mercury and lead compounds. This dissertation presents a new, fast, safe, affordable automated system configurable for laboratory or field use. Arsenic in the sample is chemically or electrochemically reduced to arsine, which reacts with ozone atop a photomultiplier tube, producing chemiluminescence. Direct chemical, electrochemical, and liquid chromatography methods are described. The first method uses sodium borohydride for the reduction of arsenic. Differential determination of arsenate and arsenite is based on the different pH dependence of their conversion to arsine. At pH ≤ 1, both arsenate and arsenite are quantitatively converted; at pH 4-5, only arsenite is converted. Under these conditions, the limit of detection (LOD) is 0.05 and 0.09 μg/L for total arsenic and arsenite, respectively, with a 3-mL water sample. The relative standard deviation for 3 determinations was 1.2 and 2.1% for 1 μg/L total arsenic and arsenite, respectively. The arsenic concentrations in this dissertation are all expressed as elemental arsenic.
The electrochemical method uses a platinum screen anode and a stainless steel cathode in two compartments separated by a Nafion membrane. Arsenite is selectively reduced on a stainless steel cathode, while a cadmium-coated cathode reduces both forms. The limit of detection is 1.5 and 4 μg/L for arsenite and total arsenic, respectively, with a 2-mL water sample. The relative standard deviation for 3 determinations was 2.6 and 4.5% for 10 μg/L arsenite and total arsenic, respectively. This environment-friendly method uses only reusable sulfuric acid electrolyte, air, water, and electricity, but requires further development. Arsenite, arsenate, dimethylarsinic acid (DMA), and monomethylarsonic acid (MMA) are separated on an anion-exchange column using carbonate and hydroxide eluents. The separated species are photolytically oxidized by UV light, converting the organic species to their respective inorganic forms. Subsequent online reaction with acid and borohydride produces arsine, detected by chemiluminescence. For arsenite, arsenate, MMA, and DMA the LOD is 0.4, 0.2, 0.5, and 0.3 μg/L, respectively, for a 100-μL injected sample. The relative standard deviation for 3 determinations was 3.5, 2.8, 2.2, and 4.1% for 10 μg/L of each of arsenite, arsenate, MMA, and DMA, respectively. The system has been tested successfully on water and soil samples, and can be adapted for matrices such as biological samples and body fluids. There are no significant practical interferences.

Item Measurement of internal and external geometric imperfections of lined pipes (2015-05) Harrison, Benjamin Duncan; Kyriakides, S.; Liechti, Kenneth
Carbon steel pipe is often lined with a thin layer of non-corrosive material to protect it against corrosion from sour hydrocarbons. The product is commonly assembled by mechanical expansion of a liner shell, bringing it into contact with the inner surface of a seamless steel pipe.
During installation and operation, lined pipelines can experience bending or compressive deformations large enough to cause the liner to buckle and collapse inside an intact outer pipe. It has been demonstrated that such buckling instabilities are very sensitive to small initial geometric imperfections in the liner. Liner imperfections in 8- and 12-inch lined pipes were measured using custom scanning devices and characterized by trigonometric Fourier series. These measurements revealed that the imperfection geometry is dominated by imperfections in the circumferential direction, whereas axial imperfections are of relatively small amplitude and short wavelength. Imperfection amplitudes were determined to be on the order of 0.2% of the OD for both pipes studied. Liner geometry of the 8-inch pipe can be approximated as a shape, and the 12-inch pipe can be approximated as a shape. In general, the imperfection geometry of the interior surface follows that of the exterior surface, presumably owing to the nature of the manufacturing process. The main source of the imperfections is the piercing, rolling, and external finishing of the carrier pipe. During the expansion process by which the liner is installed, the interior surface imperfection of the carrier pipe is “transferred” to the liner. Overall, the interior surface is found to be more imperfect than the exterior. Finite element models of a 12-inch lined pipe that incorporate liner imperfections defined by the results of this study demonstrate their detrimental effect on liner wrinkling and collapse.

Item Measuring customer contribution to the agile software development process : a case study (2010-12) Brockley, Susan Ragaz; Perry, Dewayne E.; Krasner, Herb
Agile project management and software development practices have become widely accepted in industry, and much of the currently published literature focuses on the developer's uptake of the methodology.
Although it is commonly known that customers play a key role in Agile project success, the extent to which they can influence a project is not as well understood. This case study measures the contribution of customer involvement to the success of Agile projects. The study demonstrates that active customer participation is one of the top three factors for successful Agile projects. It also demonstrates that successful Agile projects have customers who are "knowledgeable, committed, collaborative, representative, and empowered". Similarly, the study shows that successful Agile projects have customers who transfer domain knowledge to project team members efficiently and effectively. The study concludes with recommendations for developers and customers that maximize an Agile project's potential for success.

Item Model-Based Methodology for Building Confidence in a Dynamic Measuring System (2013-05-03) Reese, Isaac Mark
This thesis examines the special case in which a newly developed dynamic measurement system must be characterized when an accepted standard qualification procedure does not yet exist. To characterize this type of system, both physical experimentation and computational simulation methods are used to build trust in the measurement system. This process of establishing credibility is presented in the form of a proposed methodology, which uses verification and validation methods from the simulation community as the foundation for a multi-faceted approach. The methodology establishes the relationships between four key elements: physical experimentation, conceptual modeling, computational simulations, and data processing. The combination of these activities provides a comprehensive characterization study of the system. To illustrate the methodology, a case study was performed on a dynamic force measurement system owned by Sandia National Laboratories.
This system was designed to measure the force required to pull a specimen to failure in tension at a user-specified velocity. The case study found that significant measurement error occurred when pull events involved large break loads and high velocities. One hundred pull events were recorded using an experimental test assembly; the highest-load conditions revealed a force measurement error of over 100%. Using computational simulations designed to account for the inertial effects that skew the piezoelectric load cells, this measurement error was reduced to less than 10%. This thesis presents the raw data and the corrected data for five different pull settings; the simulations designed using the methodology significantly reduced the error in all five. In addition to the force analysis, the simulations provide insight into the complete system performance, including analysis of the maximum system velocity as well as several proposed design changes. The findings suggest that the dynamic measurement system has a maximum velocity of 28 fps, and that this maximum velocity is unaffected by the track length or the mass of the moving carriage.

Item A random jitter RMS measurement method using AND and OR operations (2009-12) Lee, Jae Wook, 1972-; Abraham, Jacob A.; Touba, Nur
Jitter is defined as the timing uncertainty of digital signals relative to their intended ideal positions in time. While it consumes valuable clock budget and limits the maximum clock frequency in I/O circuitry, it is one of the most difficult parameters to measure accurately because of its small value and randomness. This thesis proposes a random jitter RMS measurement method using AND and OR operations, which targets BIST applications. The thesis is organized as follows: Chapter 1 introduces the motivation for the proposed work and includes a comparison of the two major approaches to jitter measurement.
Chapter 2 explains the proposed random jitter estimation method in detail. Chapter 3 describes circuit implementations with design considerations. Chapter 4 demonstrates estimation results from circuit-level simulation runs. Chapter 5 discusses the sources of error in the jitter estimation and concludes.

Item Random or fixed testlet effects : a comparison of two multilevel testlet models (2010-08) Chen, Tzu-An, 1978-; Beretvas, Susan Natasha; Dodd, Barbara G.; Pituch, Keenan A.; Whittaker, Tiffany A.; Powers, Daniel A.
This simulation study compared the performance of two multilevel measurement testlet (MMMT) models: Beretvas and Walker’s (2008) two-level MMMT model and Jiao, Wang, and Kamata’s (2005) three-level model. Several conditions were manipulated (including testlet length, sample size, and the pattern of the testlet effects) to assess the impact on the estimation of fixed and random effect parameters. While testlets, in which items share the same stimulus, are common in educational tests, testlet item scores violate the assumption of local item independence (LID) underlying item response theory (IRT). Modeling LID has been widely discussed in previous studies (for example, Bradlow, Wainer, and Wang, 1999; Wang, Bradlow, and Wainer, 2002; Wang, Cheng, and Wilson, 2005). More recently, Jiao et al. (2005) proposed a three-level MMMT (MMMT-3r) in which items are modeled as nested within testlets (level two) and testlets as nested within persons (level three). Testlet effects have typically been modeled as random in previous studies involving LID. However, item effects (difficulties) are commonly modeled as fixed under IRT models: that is, persons with the same ability level are assumed to have the same probability of answering an item correctly. Therefore, it is also important that a testlet effects model permit modeling of item effects as fixed. Moreover, modeling testlet effects as random implies that testlets are being sampled from a larger population of testlets.
However, as with item effects, researchers are typically more interested in the particular set of items or testlets used in an assessment. Given the interests of the researcher or psychometrician using a testlet response model, it seems more useful to use a testlet response model that permits modeling testlet effects as fixed. An alternative MMMT that permits modeling testlet effects as fixed and/or randomly varying has been proposed (Beretvas and Walker, 2008). The MMMT-2f and MMMT-2r models treat testlet effects as item-set-specific but not person-specific. However, no simulation has been conducted to assess how this proposed model performs. The current study compared the performance of the MMMT-2f and MMMT-2r with that of the MMMT-3r. Results of the present simulation study showed that the MMMT-2r yielded the least biased estimates of fixed item effects, fixed testlet effects, and random testlet effects for conditions with a nonzero, equal pattern of random testlet effects’ variance, even when the MMMT-2r was not the generating model. However, random effects estimation did not perform well when unequal random testlet effects’ variances were generated, and fit indices did not perform well either, as other studies have found. It should also be emphasized that the model differences were of very little practical significance. From a modeling perspective, the MMMT-2r does allow the greatest flexibility in terms of modeling testlet effects as fixed, random, or both.

Item Study of microfluidic measurement techniques using novel optical imaging diagnostics (Texas A&M University, 2007-04-25) Park, Jaesung
Novel microscale velocity and temperature measurement techniques were studied based on confocal laser scanning microscopy (CLSM) and optical serial sectioning microscopy (OSSM).
Two microscopic measurement systems were developed: 1) a CLSM micro particle image velocimetry (PIV) system with a dual Nipkow disk confocal unit (CSU-10), a CW argon-ion laser, and an upright microscope, and 2) an OSSM micro particle tracking velocimetry (PTV) system with an epi-fluorescence microscope and a non-designed specimen to make a three-dimensional (3-D) diffraction particle image. The CLSM micro-PIV system has a unique optical slicing capability allowing true depth-wise resolved vector field mapping. A comparative study is presented between CLSM micro-PIV and conventional epi-fluorescence micro-PIV. Both have been applied to creeping Poiseuille flows in two different microtubes, of 99-μm (Re = 0.00275) and 516-μm (Re = 0.021) inner diameters. Compared with conventional micro-PIV, CLSM micro-PIV consistently shows significantly improved particle image contrast, better-defined optical slicing, and measured flow vector fields that agree more accurately with predictions based on the Poiseuille flow fields. The OSSM micro-PTV technique is applied to 3-D vector field mapping in a microscopic flow and to Brownian motion tracking of nanoparticles. This technique adapts the OSSM system for microfluidic experiments, and the imaging system captures a diffracted particle image having numerous circular fringes instead of an in-focus particle image. The 3-D particle tracking is based on a correlation between the 3-D diffraction pattern of a particle and its defocus distance from the focal plane. A computational program was developed for the OSSM micro-PTV that provides a 3-D velocity vector field with a spatial resolution of 5.16 μm. In addition, a concept of nonintrusive thermometry is presented based on the correlation of the Brownian motion of suspended nanoparticles with the surrounding fluid temperature.
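The Brownian-motion thermometry concept just described rests on the Stokes-Einstein relation D = kB·T/(6πηr) combined with the free-diffusion result ⟨r²⟩ = 6Dt in three dimensions. A minimal sketch of the inversion follows; the particle radius, viscosity, and MSD values are hypothetical, chosen only to illustrate the arithmetic, and in practice the viscosity itself depends on temperature, so an iterative solution is needed:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def diffusivity_from_msd(msd, lag_time, dims=3):
    """Free Brownian motion: <r^2> = 2 * dims * D * t, solved for D."""
    return msd / (2 * dims * lag_time)

def temperature_from_diffusivity(D, viscosity, radius):
    """Stokes-Einstein: D = kB * T / (6 * pi * eta * r), solved for T."""
    return D * 6 * math.pi * viscosity * radius / K_B

# Hypothetical example: 100-nm-radius particles in water (eta ~ 0.89 mPa*s),
# with an observed 3-D MSD of 1.47e-12 m^2 over a 0.1 s lag time
D = diffusivity_from_msd(1.47e-12, 0.1)              # m^2/s
T = temperature_from_diffusivity(D, 0.89e-3, 100e-9)  # K
print(T)  # roughly room temperature
```

The larger the MSD at a given lag time, the hotter the inferred fluid, which is the basis of the nonintrusive thermometry mentioned above.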
Detection of fully three-dimensional Brownian motion is possible with the OSSM, and the measured values of mean square displacement (MSD) compare fairly well with Einstein's predictions.

Item Temporal modeling of crowd work quality for quality assurance in crowdsourcing (2015-12) Jung, Hyun Joon; Lease, Matthew A.; Mooney, Raymond; Bennett, Paul; Fleischmann, Kenneth; Wallace, Byron C
While crowdsourcing offers potential traction on data collection at scale, it also poses new and significant quality concerns. Beyond the obvious issue of any new methodology being untested and often suffering initial growing pains, crowdsourcing has faced a very particular criticism since its inception: given the anonymity of crowd workers, it is questionable whether we can trust their contributions as much as work completed by trusted workers. To address this concern, recent studies have proposed a variety of methods. However, while temporal behavioral patterns can be discerned underlying real crowd work, prior studies have typically modeled worker performance under the assumption that the sequence of model variables is independent and identically distributed (i.i.d.). This dissertation focuses on the measurement and prediction of crowd work quality by considering its temporal properties. To better model such temporal worker behavior, we present a time-series prediction model for crowd work quality. This model captures and summarizes past worker label quality, enabling us to better predict the quality of each worker’s next label. Furthermore, we propose a crowd assessor model for predicting crowd work quality more accurately; by taking into account multi-dimensional features of a crowd assessor, we aim to build a better quality prediction model of crowd work. Finally, this dissertation explores how the proposed prediction models work under realistic scenarios. In particular, we consider a realistic use case in which limited gold labels are provided for learning our proposed model.
For this problem, we leverage instance weighting with soft labels, which accounts for the uncertainty of each training instance. Our empirical evaluation with synthetic datasets and a public crowdsourcing dataset shows that our proposed models significantly improve prediction quality of crowd work and lead to the acquisition of better-quality labels in crowdsourcing.

Item Thermoelectric and structural characterization of individual nanowires and patterned thin films (2008-12) Mavrokefalos, Anastassios Andreas; Shi, Li, Ph.D.

This dissertation presents the development of methods based on microfabricated devices for combined structural and thermoelectric characterization of individual nanowire and thin-film materials. These nanostructured materials are being investigated for improving the thermoelectric figure of merit, defined as ZT = S²σT/K, where S is the Seebeck coefficient, σ is the electrical conductivity, K is the thermal conductivity, and T is the absolute temperature. The objective of the work presented in this dissertation is to address the challenges in measuring all three intrinsic thermoelectric properties on the same individual nanowire sample, or along the in-plane direction of a thin film, and in correlating the measured properties with the crystal structure of the same nanowire or thin-film sample. This objective is accomplished through a four-probe thermoelectric measurement procedure, based on a micro-device, that measures the intrinsic K, σ, and S of the same nanowire or thin film while eliminating the contact thermal and electrical resistances from the measured properties. Additionally, the device has an etched through-hole that facilitates structural characterization of the sample using transmission electron microscopy (TEM) and energy-dispersive X-ray spectroscopy (EDS). This measurement method is employed to characterize individual electrodeposited Bi₁₋ₓTeₓ nanowires.
A method based on annealing the nanowire sample in a forming gas is demonstrated for making electrical contact between the nanowire and the underlying electrodes. The measurement results show that the thermoelectric properties of the nanowires are sensitive to the crystal quality and impurity doping concentration. The highest ZT found among three nanowires is about 0.3, which is still lower than that of bulk single crystals at the optimum carrier concentration. The lower ZT found in the nanowires is attributed to the high impurity or carrier concentration and defects in the nanowires. The micro-device is further modified to extend its use to characterization of the in-plane thermoelectric properties of thin films. Existing practice for thermoelectric characterization of thin films is to obtain K in the cross-plane direction using techniques such as the 3ω method or time-domain laser thermal reflectance, whereas σ and S are usually obtained in the in-plane direction. However, transport properties of nanostructured thin films can be highly anisotropic, making this combination of measurements along different directions unsuitable for obtaining the actual ZT value. Here, the micro-device is used to measure all three thermoelectric properties in the in-plane direction, thus obtaining the in-plane ZT. A procedure based on a nano-manipulator is developed to assemble etched thin-film segments on the micro-device. Measurement results for two different types of thin films are presented in this dissertation. The first type is mis-oriented, layered thin films grown by the Modulated Elemental Reactant Technique (MERT). Three different structures of such thin films are characterized, namely WSe₂, Wₓ(WSe₂)ᵧ, and (PbSe₀.₉₉)ₓ(WSe₂)ₓ superlattice films.
All three structures exhibit in-plane K values much higher than their cross-plane K values, with increased anisotropy compared to bulk single crystals in the case of the WSe₂ film. The increased anisotropy is attributed to the in-plane-ordered, cross-plane-disordered nature of the mis-oriented, layered structure. While the WSe₂ film is semi-insulating and the Wₓ(WSe₂)ᵧ films are metallic, the (PbSe₀.₉₉)ₓ(WSe₂)ₓ films are semiconducting, with their power factor (S²σ) greatly improved upon annealing in a Se vapor environment. The second type consists of semiconducting InGaAlAs films with and without embedded metallic ErAs nanoparticles. These nanoparticles introduce Schottky barriers that filter out low-energy electrons so as to increase the power factor, and they scatter long- to mid-range phonons and thus suppress K. The in-plane measurements show that both S and σ increase with increasing temperature because of the electron filtering effect. The films with nanoparticles exhibit an increase in σ of three orders of magnitude and a decrease in S of only fifty percent compared to the films without, suggesting that the nanoparticles act as dopants within the film. On the other hand, the measured in-plane K shows little difference between the films with and without nanoparticles, a finding that differs from published cross-plane thermal conductivity results.
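As a quick numerical illustration of the figure of merit ZT = S²σT/K defined in the thermoelectric abstract above, the sketch below evaluates the formula for hypothetical property values; the numbers are illustrative placeholders, not measurements from this work:

```python
def figure_of_merit(S, sigma, K, T):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / K.

    S: Seebeck coefficient [V/K], sigma: electrical conductivity [S/m],
    K: thermal conductivity [W/(m*K)], T: absolute temperature [K].
    """
    return S ** 2 * sigma * T / K

# Hypothetical values: S = 200 uV/K, sigma = 1e5 S/m, K = 1.5 W/(m*K), T = 300 K
zt = figure_of_merit(200e-6, 1e5, 1.5, 300.0)
print(round(zt, 2))  # -> 0.8
```

The units cancel exactly (V²·K⁻²·S·m⁻¹·K·m·K·W⁻¹ = 1), which is a useful sanity check when comparing S, σ, and K measured by different techniques, as the abstract cautions for anisotropic films.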