
Sketch Recognition Lab
Director: Dr. Tracy Anne Hammond

SRL Dissertations and Theses


 


PhD Doctoral Dissertations


Paul Taele. 2019. "A Sketch Recognition-Based Intelligent Tutoring System for Richer Instructor-Like Feedback on Chinese Characters." PhD Doctoral Dissertation. Texas A&M University (TAMU). College Station, TX, USA: December 2019. Advisor: Tracy Hammond. First Position: Visiting Assistant Professor, TAMU.
Abstract:

Students wishing to achieve strong fluency in East Asian languages such as Chinese and Japanese must master various language skills, including reading and writing the non-phonetic Chinese characters those languages use. For students with primary fluency in English who are learning an East Asian language as a foreign language, mastering the written component is challenging because of vast linguistic differences in reading and writing. In this dissertation, I developed a sketch recognition-based intelligent tutoring system that provides richer assessment and feedback emulating human language instructors, specifically for novice students' introductory study of East Asian languages and their written Chinese characters. The system relies on various sketch recognition heuristics to evaluate students' writing technique on introductory Chinese characters through features such as metric scores and visual animations. From evaluations of the proposed system with instructors, classroom students, and self-study learners, I provide a stylus-driven solution that lets novice language students study and practice introductory Chinese characters with deeper levels of assessment, giving them richer feedback to improve their writing performance.

BibTeX:

@phdthesis{paultaele2019ASketchRecognitionBasedIntelligentTutoringSystemforRicherInstructorLikeFeedbackonChineseCharactersPhD,
type = {{PhD Doctoral Dissertation}},
author = {Taele, Paul},
title = {A Sketch Recognition-Based Intelligent Tutoring System for Richer Instructor-Like Feedback on Chinese Characters},
year = {2019},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
 
Blake Williford. 2019. "Exploring methods for holistically improving drawing ability with artificial intelligence." PhD Doctoral Dissertation. Texas A&M University (TAMU). College Station, TX, USA: December 2019. Advisor: Tracy Hammond. First Position: Visiting Assistant Professor, TAMU.

BibTeX:

@phdthesis{blakewilliford2019ExploringmethodsforholisticallyimprovingdrawingabilitywithartificialintelligencePhD,
type = {{PhD Doctoral Dissertation}},
author = {Williford, Blake},
title = {Exploring methods for holistically improving drawing ability with artificial intelligence},
year = {2019},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
 
Vijay Rajanna. 2018. "Addressing Situational and Physical Impairments and Disabilities with a Gaze-assisted, Multi-modal, Accessible Interaction Paradigm." PhD Doctoral Dissertation. Texas A&M University (TAMU). College Station, TX, USA: December 2018. Advisor: Tracy Hammond. ORCID: 0000-0001-7550-0411.
Abstract:

Every day we encounter a variety of scenarios that lead to situationally induced impairments and disabilities, i.e., our hands are engaged in a task and hence unavailable for interacting with a computing device. For example, a surgeon performing an operation, a worker in a factory with greasy hands or thick gloves, a person driving a car (police officers have multiple screens around them in a car), and a musician playing an instrument all represent scenarios of situational impairments and disabilities. In such cases, performing point-and-click interactions, text entry, or authentication on a computer using conventional input methods like the mouse, keyboard, and touch is either inefficient or impossible. While being able to work on a computer amidst situational impairments and disabilities remains a problem to be addressed, individuals with physical impairments and disabilities, whether from birth or due to an injury, are forced to deal with these limitations every single day. Generally, these individuals experience difficulty with, or are completely unable to perform, basic operations on a computer. Technology should enable them to use computers as individuals without impairments or disabilities would. Therefore, to address situational and physical impairments and disabilities, it is crucial to develop hands-free, accessible interactions. Situational impairments often result in poor task performance and sometimes even lead to fatal accidents; this is becoming increasingly pressing as the growing pervasiveness of computing devices multiplies the scenarios of situational impairment. Hands-free interactions would not only make working on computers possible in scenarios of situational impairment, but also improve task efficiency and ensure user safety. For individuals with physical impairments, the solutions that do exist generally suffer from a number of limitations. Many are prohibitively expensive, and many public places such as schools, libraries, banks, and kiosks simply do not offer any of the available solutions because of their cost. Furthermore, many of these solutions are bulky and inefficient, limiting the range of operations users can perform on computing devices. Hence, as much as there is a need for solutions for individuals experiencing situational impairments, there is an equally great need for efficient and affordable accessible solutions for individuals who have physical impairments and disabilities. In this research, we address the limitations, inabilities, and challenges arising from situational and physical impairments and disabilities by developing a gaze-assisted, multi-modal, hands-free, accessible interaction paradigm. Specifically, we focus on three primary interactions: 1) point-and-click, 2) text entry, and 3) authentication. We present multiple ways in which gaze input can be modeled and combined with other input modalities to enable efficient and accessible interactions. In this regard, we have developed a gaze- and foot-based interaction framework to achieve accurate "point-and-click" interactions and to perform dwell-free text entry on computers. In addition, we have developed a gaze gesture-based framework for user authentication and for interacting with a wide range of computer applications using a common repository of gaze gestures.
The interaction methods and devices we have developed are a) evaluated using standard HCI procedures such as Fitts' Law, text entry metrics, authentication accuracy, and video analysis attacks, b) compared against the speed, accuracy, and usability of other gaze-assisted interaction methods, and c) qualitatively analyzed through user interviews. From these evaluations, we found that our solutions achieve higher efficiency than existing systems and also address their usability issues. First, the gaze- and foot-based system we developed supports point-and-click interactions while addressing the "Midas Touch" issue. The system performs at least as well (in time and precision) as the mouse, while enabling hands-free interaction. We have also investigated the feasibility, advantages, and challenges of using gaze- and foot-based point-and-click interactions on standard (up to 24") and large (up to 84") displays through Fitts' Law evaluations, and compared the performance of gaze input to standard inputs like the mouse and touch. Second, to support text entry, we developed a gaze- and foot-based dwell-free typing system and investigated foot-based activation methods like foot presses and foot gestures. We have demonstrated that our dwell-free typing methods are efficient and highly preferred over conventional dwell-based gaze typing methods. Using our gaze typing system, users type up to 14.98 words per minute (WPM), as opposed to 11.65 WPM with dwell-based typing. Importantly, our system addresses the critical usability issues associated with gaze typing in general. Third, we addressed the lack of an accessible and shoulder-surfing-resistant authentication method by developing a gaze gesture recognition framework and presenting two authentication strategies that use gaze gestures. Our authentication methods use static and dynamic transitions of objects on the screen, and they authenticate users with an accuracy of 99% (static) and 97.5% (dynamic). Furthermore, unlike other systems, our dynamic authentication method is not susceptible to single-video iterative attacks and has a lower success rate under dual-video iterative attacks. Lastly, we demonstrated how our gaze gesture recognition framework can be extended to let users design gaze gestures of their choice and associate them with commands like minimize, maximize, and scroll on the computer. We presented a template matching algorithm that achieved an accuracy of 93% and a geometric feature-based decision tree algorithm that achieved an accuracy of 90.2% in recognizing gaze gestures. In summary, our research demonstrates how situational and physical impairments and disabilities can be addressed with a gaze-assisted, multi-modal, accessible interaction paradigm.
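
To make the recognition idea concrete, here is a minimal, hypothetical sketch of template matching over gaze paths in the style the abstract describes; the resampling size, normalization, and mean point distance are illustrative assumptions, not the dissertation's actual parameters.

import numpy as np

def resample(points, n=64):
    # Resample a gaze path to n points equally spaced along its arc length.
    points = np.asarray(points, dtype=float)
    d = np.cumsum(np.r_[0.0, np.linalg.norm(np.diff(points, axis=0), axis=1)])
    t = np.linspace(0.0, d[-1], n)
    return np.c_[np.interp(t, d, points[:, 0]), np.interp(t, d, points[:, 1])]

def normalize(path):
    # Translate the path to its centroid and scale it to a unit bounding box.
    path = path - path.mean(axis=0)
    span = (path.max(axis=0) - path.min(axis=0)).max()
    return path / max(span, 1e-9)

def classify(gesture, templates):
    # Return the label of the template with the smallest mean point distance.
    g = normalize(resample(gesture))
    return min(templates, key=lambda label: np.linalg.norm(
        g - normalize(resample(templates[label])), axis=1).mean())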

BibTeX:

@phdthesis{vijayrajanna2018AddressingSituationalandPhysicalImpairmentsandDisabilitieswithaGazeassistedMultimodalAccessibleInteractionParadigmPhD,
type = {{PhD Doctoral Dissertation}},
author = {Rajanna, Vijay},
title = {Addressing Situational and Physical Impairments and Disabilities with a Gaze-assisted, Multi-modal, Accessible Interaction Paradigm},
year = {2018},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ORCID: 0000-0001-7550-0411},
}
 
Stephanie Valentine. 2016. "Design, Deployment, Identity, & Conformity: An Analysis of Children's Online Social Networks." PhD Doctoral Dissertation. Texas A&M University (TAMU). College Station, TX, USA: August 2016. Advisor: Tracy Hammond. First Position: University of Nebraska Asst. Professor of Practice. ORCID: 0000-0003-1956-8125.
Abstract:

Preadolescents (children aged 7 to 12 years) are participating in online social networks whether we, as a society, like it or not. The Children's Online Privacy Protection Act, enacted by the United States Congress in 1998, made illegal the collection of online data about children under the age of 13 without express parental consent. As such, most mainstream social networks, such as Twitter, Facebook, and Instagram, limit their registration by requiring new users to agree that they are at least 13 years of age, an assertion which is often falsified. Researchers, bound by the same legal requirements regarding online data collection, have resorted to surveys and interviews to understand how and why children interact on social networks. While valuable, these prior works explain only what children say they do online, and not what they actually do on a daily basis. In this work, we describe the design, development, deployment, and analysis of our own online social network for children, KidGab. This work explores common social networking affordances for adults and their suitability for child audiences. It analyzes the participatory behaviors of our users (Girl Scouts from around central Texas) and describes how they shaped KidGab's continuing growth. This work discusses our quantitative analysis of users' tendencies and proclivities toward identity exploration, and leverages graph algorithms and link analysis techniques to understand the sociality of conformity on the network. Finally, this work describes the lessons we learned about children's social networks and social networking throughout KidGab's 450 days of active deployment.

BibTeX:

@phdthesis{stephanievalentine2016DesignDeploymentIdentityConformityAnAnalysisofChildrensOnlineSocialNetworksPhD,
type = {{PhD Doctoral Dissertation}},
author = {Valentine, Stephanie},
title = {Design, Deployment, Identity, \& Conformity: An Analysis of Children's Online Social Networks},
year = {2016},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ORCID: 0000-0003-1956-8125},
}
 
Folami Alamudun. 2016. "Analysis of Visuo-cognitive Behavior in Screening Mammography." PhD Doctoral Dissertation. Texas A&M University (TAMU). College Station, TX, USA: May 2016. Advisor: Tracy Hammond. First Position: Oak Ridge National Laboratory. ORCID: 0000-0002-0803-4542. http://hdl.handle.net/1969.1/157040
Abstract:

Predictive modeling of human visual search behavior and the underlying metacognitive processes is now possible thanks to significant advances in bio-sensing device technology and machine intelligence. Eye tracking bio-sensors, for example, can measure psycho-physiological response through change events in the configuration of the human eye. These events include positional changes such as visual fixations, saccadic movements, and scanpaths, and non-positional changes such as blinks and pupil dilation and constriction. Using data from eye-tracking sensors, we can model human perception, cognitive processes, and responses to external stimuli. In this study, we investigated the visuo-cognitive behavior of clinicians during the diagnostic decision process for breast cancer screening under clinically equivalent experimental conditions involving multiple monitors and breast projection views. Using a head-mounted eye tracking device and a customized user interface, we recorded eye change events and diagnostic decisions from 10 clinicians (three breast-imaging radiologists and seven radiology residents) for a corpus of 100 screening mammograms (comprising cases of varied pathology and breast parenchyma density). We proposed novel features and gaze analysis techniques that help to encode discriminative pattern changes in positional and non-positional measures of eye events. These changes were shown to correlate with individual image readers' identity and experience level, mammographic case pathology and breast parenchyma density, and diagnostic decision. Furthermore, our results suggest that a combination of machine intelligence and bio-sensing modalities can provide adequate predictive capability for characterizing a mammographic case and an image reader's diagnostic performance. Lastly, features characterizing eye movements can be utilized for biometric identification purposes. These findings are impactful for real-time performance monitoring and personalized intelligent training and evaluation systems in screening mammography. Further, the developed algorithms are applicable in other domains involving high-risk visual tasks.
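
As an illustration of how positional gaze events are typically derived from raw eye-tracking samples, below is a simplified dispersion-threshold (I-DT) fixation detector. The algorithm choice and thresholds are common defaults assumed for illustration, not necessarily those used in the dissertation.

import numpy as np

def detect_fixations(gaze, timestamps, max_dispersion=1.0, min_duration=0.1):
    # gaze: (N, 2) array of gaze coordinates (e.g., degrees of visual angle).
    # Returns (start_time, end_time, centroid) tuples for detected fixations.
    fixations, start, n = [], 0, len(gaze)
    while start < n:
        end = start
        # Grow the window while its horizontal + vertical spread stays small.
        while end + 1 < n:
            w = gaze[start:end + 2]
            spread = (w[:, 0].max() - w[:, 0].min()) + (w[:, 1].max() - w[:, 1].min())
            if spread > max_dispersion:
                break
            end += 1
        if timestamps[end] - timestamps[start] >= min_duration:
            fixations.append((timestamps[start], timestamps[end],
                              gaze[start:end + 1].mean(axis=0)))
            start = end + 1
        else:
            start += 1
    return fixations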

BibTeX:

@phdthesis{folamialamudun2016AnalysisofVisuocognitiveBehaviorinScreeningMammographyPhD,
type = {{PhD Doctoral Dissertation}},
author = {Alamudun, Folami},
title = {Analysis of Visuo-cognitive Behavior in Screening Mammography},
year = {2016},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ORCID: 0000-0002-0803-4542},
}
 
Hong-Hoe (Ayden) Kim. 2016. "A Fine Motor Skill Classifying Framework to Support Children's Self-regulation Skills and School Readiness." PhD Doctoral Dissertation. Texas A&M University (TAMU). College Station, TX, USA: May 2016. Advisor: Tracy Hammond. First Position: Samsung. ORCID: 0000-0002-1175-8680.
Abstract:

Children's self-regulation skills predict their school readiness and social behaviors, and assessing these skills enables parents and teachers to target areas for improvement or prepare children to enter school ready to learn and achieve. Educators currently assess children's fine motor skills by determining shape-drawing correctness or measuring drawing-time durations through paper-based assessments. However, these methods require human experts to manually assess children's fine motor skills, which is time-consuming and prone to human error and bias. As many children use sketch-based applications on mobile and tablet devices, computer-based fine motor skill assessment has high potential to overcome the limitations of paper-based assessments. Furthermore, sketch recognition technology can offer more detailed, accurate, and immediate drawing-skill information, such as drawing time or curvature difference, than paper-based assessments. While a number of educational sketch applications exist for teaching children how to sketch, they lack the ability to assess children's fine motor skills, and the validity of the traditional methods has not been established in tablet environments. We introduce a fine motor skill classifying framework based on children's digital drawings on tablet computers. The framework contains two fine motor skill classifiers and a sketch-based educational interface (EasySketch). The classifiers are: (1) KimCHI, which determines children's fine motor skills based on their overall drawing skills, and (2) KimCHI2, which determines children's fine motor skills based on their curvature- and corner-drawing skills. Our classifiers determine children's fine motor skills by generating 131 sketch features, which can analyze their drawing ability (e.g., the DCR sketch feature can determine their curvature-drawing skills). We first implemented the KimCHI classifier. From our evaluation with 10-fold cross-validation, we found that this classifier can determine children's fine motor skills with an f-measure of 0.904. We then implemented the KimCHI2 classifier. From our evaluation with 10-fold cross-validation, we found that it can determine children's curvature-drawing skills with an f-measure of 0.82 and corner-drawing skills with an f-measure of 0.78. The KimCHI2 classifier outperformed the KimCHI classifier during the fine motor skill evaluation. EasySketch is a sketch-based educational interface that (1) determines children's fine motor skills based on their drawing skills and (2) teaches children how to draw basic shapes, such as alphabet letters or numbers, based on their learning progress. When we evaluated our interface with children, it determined children's fine motor skills more accurately than the conventional methodology, with f-measures of 0.907 and 0.744, respectively. Furthermore, children improved their drawing skills in response to our pedagogical feedback.
Finally, we introduce our findings that sketch features (DCR and the Polyline Test) can explain children's fine motor skill developmental stages. From the sketch feature distributions for each age group, we found notable fine motor skill development beginning at age 5.
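
For context, the evaluation style reported above (10-fold cross-validation summarized by an f-measure) can be reproduced with a few lines of scikit-learn; the classifier choice and the placeholder data below are assumptions, not the KimCHI implementation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(200, 131)        # placeholder for the 131 sketch features
y = np.random.randint(0, 2, 200)    # placeholder fine motor skill labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring="f1_weighted")
print(f"mean f-measure across folds: {scores.mean():.3f}")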

BibTeX:

@phdthesis{honghoekim2016AFineMotorSkillClassifyingFrameworktoSupportChildrensSelfregulationSkillsandSchoolReadinessPhD,
type = {{PhD Doctoral Dissertation}},
author = {Kim, Hong-Hoe (Ayden)},
title = {A Fine Motor Skill Classifying Framework to Support Children's Self-regulation Skills and School Readiness},
year = {2016},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ORCID: 0000-0002-1175-8680},
}
 
Manoj Prasad. 2014. "Designing Tactile Interfaces for Abstract Interpersonal Communication, Pedestrian Navigation and Motorcyclists Navigation." PhD Doctoral Dissertation. Texas A&M University (TAMU). College Station, TX, USA: May 2014. pp. 183. Advisor: Tracy Hammond. First Position: Microsoft. ORCID: 0000-0002-3554-2614.
Abstract:

The tactile medium of communication is appropriate for displaying information in situations where the auditory and visual mediums are saturated. There are situations where a subject's ability to receive information through either of these channels is severely restricted by the environment they are in or by physical impairments the subject may have. In this project, we focused on two groups of users who need sustained visual and auditory focus in their tasks: Soldiers on the battlefield and motorcyclists. Soldiers on the battlefield use their visual and auditory capabilities to maintain awareness of their environment and guard themselves from enemy assault. One of the major challenges to coordination in a hazardous environment is maintaining communication between team members while mitigating cognitive load. Compromised communication between team members may result in mistakes that adversely affect the outcome of a mission. We built two vibrotactile displays, Tactor I and Tactor II, each with nine actuators arranged in a three-by-three matrix with differing contact areas, which can represent a total of 511 shapes. We used two dimensions of the tactile medium, shapes and waveforms, to represent verb phrases, and evaluated the ability of users to perceive verb phrases from the tactile code. We also evaluated the effectiveness of communicating verb phrases while users performed two tasks simultaneously. The results showed that performing an additional visual task did not affect the accuracy or the time taken to perceive tactile codes. Another challenge in coordinating Soldiers on a battlefield is navigating them to their respective assembly areas. We developed HaptiGo, a lightweight haptic vest that provides pedestrians both navigational intelligence and obstacle detection capabilities. HaptiGo consists of optimally placed vibrotactile sensors that utilize natural and small-form-factor interaction cues, thus emulating the sensation of being passively guided toward the intended direction. We evaluated HaptiGo and found that it was able to successfully navigate users with timely alerts of incoming obstacles without increasing cognitive load, thereby increasing their environmental awareness. Additionally, we show that users are able to respond to directional information without training. The needs of motorcyclists are different from those of Soldiers. Motorcyclists' need to maintain visual and auditory situational awareness at all times is crucial since they are highly exposed on the road. Route guidance systems, such as the Garmin, have been well tested on automobile drivers, but remain much less safe for use by motorcyclists. Audio/visual routing systems decrease motorcyclists' situational awareness and vehicle control, and thus increase the chances of an accident. To enable motorcyclists to take advantage of route guidance while maintaining situational awareness, we created HaptiMoto, a wearable haptic route guidance system. HaptiMoto uses tactile signals to encode the distance and direction of approaching turns, thus avoiding interference with audio/visual awareness. Evaluations show that HaptiMoto is intuitive for motorcyclists, and a safer alternative to existing solutions.

BibTeX:

@phdthesis{manojprasad2014DesigningTactileInterfacesforAbstractInterpersonalCommunicationPedestrianNavigationandMotorcyclistsNavigationPhD,
type = {{PhD Doctoral Dissertation}},
author = {Prasad, Manoj},
title = {Designing Tactile Interfaces for Abstract Interpersonal Communication, Pedestrian Navigation and Motorcyclists Navigation},
pages = {183},
year = {2014},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ORCID: 0000-0002-3554-2614},
}
 
Danielle Cummings. 2013. "Multimodal Interaction for Enhancing Team Coordination on the Battlefield." PhD Doctoral Dissertation. Texas A&M University (TAMU). College Station, TX, USA: August 2013. pp. 201. Advisor: Tracy Hammond. First Position: DoD. http://hdl.handle.net/1969.1/151044
Abstract:

Team coordination is vital to the success of team missions. On the battlefield and in other hazardous environments, mission outcomes are often very unpredictable because of unforeseen circumstances and complications that adversely affect team coordination. In addition, the battlefield is constantly evolving as new technology, such as context-aware systems and unmanned drones, becomes available to assist teams in coordinating their efforts. As a result, we must re-evaluate the dynamics of teams that operate in high-stress, hazardous environments in order to learn how to use technology to enhance team coordination within this new context. In dangerous environments where multi-tasking is critical for the safety and success of the team operation, it is important to know what forms of interaction are most conducive to team tasks. We have explored interaction methods, including various types of user input and data feedback mediums, that can assist teams in performing unified tasks on the battlefield. We conducted an ethnographic analysis of Soldiers and researched technologies such as sketch recognition, physiological data classification, augmented reality, and haptics to arrive at a set of core principles to be used when designing technological tools for these teams. This dissertation provides support for these principles and addresses outstanding problems of team connectivity, mobility, cognitive load, team awareness, and hands-free interaction in mobile military applications. This research has resulted in the development of a multimodal solution that enhances team coordination by allowing users to synchronize their tasks while keeping an overall awareness of team status and their environment. The set of solutions we developed utilizes optimal interaction techniques implemented and evaluated in related projects; the ultimate goal of this research is to learn how to use technology to provide total situational awareness and team connectivity on the battlefield. This information can be used to aid the research and development of technological solutions for teams that operate in hazardous environments as more advanced resources become available.

BibTeX:

@phdthesis{daniellecummings2013MultimodalInteractionforEnhancingTeamCoordinationontheBattlefieldPhD,
type = {{PhD Doctoral Dissertation}},
author = {Cummings, Danielle},
title = {Multimodal Interaction for Enhancing Team Coordination on the Battlefield},
pages = {201},
year = {2013},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
 
Sashikanth Damaraju. 2013. "An Exploration of Multi-touch Interaction Techniques." PhD Doctoral Dissertation. Texas A&M University (TAMU). College Station, TX, USA: August 2013. pp. 145. Advisor: Tracy Hammond. First Position: Zillabyte. http://hdl.handle.net/1969.1/151366
Abstract:

Research in multi-touch interaction has typically focused on direct spatial manipulation; techniques have been created to produce the most intuitive mapping between the movement of the hand and the resultant change in the virtual object. As we attempt to design for more complex operations, the effectiveness of spatial manipulation as a metaphor weakens. We introduce two new platforms for multi-touch computing: a gesture recognition system and a new interaction technique. I present Multi-Tap Sliders, a new interaction technique for operation in what we call non-spatial parametric spaces. Such spaces do not have an obvious literal spatial representation (e.g., exposure, brightness, contrast, and saturation in image editing). The multi-tap sliders encourage the user to keep her visual focus on the target, instead of requiring her to look back at the interface. My research emphasizes ergonomics, clear visual design, and fluid transition between modes of operation. Through a series of iterations, I develop a new technique for quickly selecting and adjusting multiple numerical parameters. Evaluations of multi-tap sliders show improvements over traditional sliders. To facilitate further research on multi-touch gestural interaction, I developed mGestr: a training and recognition system using hidden Markov models for designing a multi-touch gesture set. Our evaluation shows successful recognition rates of up to 95%. The recognition framework is packaged into a service for easy integration with existing applications.
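
As a hedged sketch of how HMM-based gesture recognition of the kind mGestr performs can be set up with the hmmlearn library: one Gaussian HMM is trained per gesture class, and a new sequence is assigned to the class whose model scores it highest. The state count and raw-coordinate observations are assumptions, not mGestr's actual design.

import numpy as np
from hmmlearn import hmm

def train_models(gesture_sequences):
    # gesture_sequences: {label: list of (T_i, 2) arrays of touch coordinates}
    models = {}
    for label, seqs in gesture_sequences.items():
        X = np.vstack(seqs)                  # concatenated observations
        lengths = [len(s) for s in seqs]     # per-sequence lengths
        m = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, seq):
    # Pick the gesture whose model assigns the sequence the highest likelihood.
    return max(models, key=lambda label: models[label].score(seq))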

BibTeX:

@phdthesis{sashikanthdamaraju2013AnExplorationofMultitouchInteractionTechniquesPhD,
type = {{PhD Doctoral Dissertation}},
author = {Damaraju, Sashikanth},
title = {An Exploration of Multi-touch Interaction Techniques},
pages = {145},
year = {2013},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
 
Brandon Paulson. 2010. "Rethinking pen input interaction: Enabling freehand sketching through improved primitive recognition." PhD Doctoral Dissertation. Texas A&M University (TAMU). College Station, TX, USA: May 2010. pp. 217. Advisor: Tracy Hammond. First Position: Capsure. http://hdl.handle.net/1969.1/ETD-TAMU-2010-05-7808
Abstract:

Online sketch recognition uses machine learning and artificial intelligence techniques to interpret markings made by users via an electronic stylus or pen. The goal of sketch recognition is to understand the intention and meaning of a particular user's drawing. Diagramming applications have been the primary beneficiaries of sketch recognition technology, as it is commonplace for the users of these tools to first create a rough sketch of a diagram on paper before translating it into a machine-understandable model, using computer-aided design tools, which can then be used to perform simulations or other meaningful tasks. Traditional methods for performing sketch recognition can be broken down into three distinct categories: appearance-based, gesture-based, and geometric-based. Although each approach has its advantages and disadvantages, geometric-based methods have proven to be the most generalizable for multi-domain recognition. Tools such as the LADDER symbol description language have shown themselves capable of recognizing sketches from over 30 different domains using generalizable, geometric techniques. The LADDER system is limited, however, by the fact that it uses a low-level recognizer that supports only a few primitive shapes, the building blocks for describing higher-level symbols. Systems that support a larger number of primitive shapes have been shown to have questionable accuracy as the number of primitives increases, or they place constraints on how users must input shapes (e.g., circles can only be drawn in a clockwise motion; rectangles must be drawn starting at the top-left corner). This dissertation enables significant growth in the possibilities of free-sketch recognition systems, which place little to no drawing constraints on users. We describe multiple techniques to recognize upwards of 18 primitive shapes while maintaining high accuracy. We also provide methods for producing confidence values and generating multiple interpretations, and we explore the difficulties of recognizing multi-stroke primitives. In addition, we show the need for a standardized data repository for sketch recognition algorithm testing and propose SOUSA (sketch-based online user study application), our online system for performing and sharing user study sketch data. Finally, we show how the principles we have learned through our work extend to other domains, including activity recognition using trained hand posture cues.
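
As one concrete example of a geometric primitive test in this spirit, a stroke can be classified as a line by comparing the straight-line distance between its endpoints to its total path length; the threshold below is illustrative, not the dissertation's.

import numpy as np

def is_line(stroke, threshold=0.95):
    # stroke: (N, 2) array of sampled pen points.
    stroke = np.asarray(stroke, dtype=float)
    path_len = np.linalg.norm(np.diff(stroke, axis=0), axis=1).sum()
    endpoint_dist = np.linalg.norm(stroke[-1] - stroke[0])
    # A nearly straight stroke travels little farther than its endpoint distance.
    return path_len > 0 and endpoint_dist / path_len >= threshold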

BibTeX:

@phdthesis{brandonpaulson2010RethinkingpeninputinteractionEnablingfreehandsketchingthroughimprovedprimitiverecognitionPhD,
type = {{PhD Doctoral Dissertation}},
author = {Paulson, Brandon},
title = {Rethinking pen input interaction: Enabling freehand sketching through improved primitive recognition},
pages = {217},
year = {2010},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
 
Tracy Hammond. 2007. "LADDER: A Perceptually-Based Language to Simplify Sketch Recognition User Interface Development." PhD Doctoral Dissertation. Massachusetts Institute of Technology (MIT). Cambridge, MA, USA: February 2007. pp. 495. Advisor: Randall Davis. First Position: TAMU Asst. Professor.
Abstract:

Diagrammatic sketching is a natural modality of human-computer interaction that can be used for a variety of tasks, for example, conceptual design. Sketch recognition systems are currently being developed for many domains. However, they require signal-processing expertise if they are to handle the intricacies of each domain, and they are time-consuming to build. Our goal is to enable user interface designers and domain experts who may not have expertise in sketch recognition to build these sketch systems. We created and implemented a new framework (FLUID: facilitating user interface development) in which developers can specify a domain description indicating how domain shapes are to be recognized, displayed, and edited. This description is then automatically transformed into a sketch recognition user interface for that domain. LADDER, a language using a perceptual vocabulary based on Gestalt principles, was developed to describe how to recognize, display, and edit domain shapes. A translator and a customizable recognition system (GUILD: a generator of user interfaces using LADDER descriptions) are combined with a domain description to automatically create a domain-specific recognition system. With this new technology, by writing a domain description, developers are able to create a new sketch interface for a domain, greatly reducing the time and expertise the task requires. Continuing in pursuit of our goal to facilitate UI development, we noted that 1) human-generated descriptions contained syntactic and conceptual errors, and that 2) it is more natural for a user to specify a shape by drawing it than by editing text. However, computer-generated descriptions from a single drawn example are also flawed, as one cannot express all allowable variations in a single example. In response, we created a modification of the traditional model of active learning in which the system selectively generates its own near-miss examples and uses the human teacher as a source of labels. System-generated near-misses offer a number of advantages. Human-generated examples are tedious to create and may not expose problems in the current concept. It seems most effective for the near-miss examples to be generated by whichever learning participant (teacher or student) knows better where the deficiencies lie; this allows the concepts to be more quickly and effectively refined. When working in a closed domain such as this one, the computer learner knows exactly which conceptual uncertainties remain, and which hypotheses need to be tested and confirmed. The system uses these labeled examples to automatically build a LADDER shape description, using a modification of the version spaces algorithm that handles interrelated constraints, and which also has the ability to learn negative and disjunctive constraints.
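
To give a flavor of the idea (not LADDER's actual syntax), a domain description pairs primitive components with geometric constraints; below is a hypothetical rendering of an arrow shape as Python data structures.

# Invented illustration of a LADDER-style shape description: an arrow is
# three lines whose endpoints and relative lengths satisfy constraints.
arrow = {
    "components": {"shaft": "Line", "head1": "Line", "head2": "Line"},
    "constraints": [
        ("coincident", "shaft.p2", "head1.p1"),
        ("coincident", "shaft.p2", "head2.p1"),
        ("equalLength", "head1", "head2"),
        ("shorter", "head1", "shaft"),
    ],
}

A generated recognizer would then test a candidate grouping of recognized primitives against each constraint, accepting the grouping as an arrow only if every constraint holds within tolerance.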

BibTeX:

@phdthesis{tracyhammond2007LADDERAPerceptuallyBasedLanguagetoSimplifySketchRecognitionUserInterfaceDevelopmentPhD,
type = {{PhD Doctoral Dissertation}},
author = {Hammond, Tracy},
title = {LADDER: A Perceptually-Based Language to Simplify Sketch Recognition User Interface Development},
pages = {495},
year = {2007},
month = {February},
address = {Cambridge, MA, USA},
school = {Massachusetts Institute of Technology ({MIT})},
note = {Advisor: Randall Davis},
}
 



Master's Theses


Siddharth Subramaniyam. 2019. "Sketch Recognition Based Classification for Eye Movement Biometrics in Virtual Reality." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: June 2019. Advisor: Tracy Hammond.
Abstract:

Biometrics is an active area of research in the HCI, pattern recognition, and machine learning communities. In addition to physiological features such as fingerprints, DNA, and faces, there has been interest in behavioral biometric modalities such as gait, eye movement patterns, keystroke dynamics, and signatures. In this work, we explore the effectiveness of using eye movement as a biometric modality by treating it as a sketch and developing features using sketch recognition techniques. To test our methods, we built a system for authentication in virtual reality (VR) that combines the eye movement biometric with passcode-based authentication for an additional layer of security against spoofing attacks.
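
As a hedged illustration of treating a gaze trajectory as a sketch, the snippet below computes a few stroke-style features of the kind sketch recognition commonly uses; the actual feature set in the thesis is not reproduced here.

import numpy as np

def gaze_sketch_features(points):
    # points: (N, 2) array of gaze coordinates treated as one "stroke."
    points = np.asarray(points, dtype=float)
    segs = np.diff(points, axis=0)
    seg_lens = np.linalg.norm(segs, axis=1)
    path_len = seg_lens.sum()
    straightness = np.linalg.norm(points[-1] - points[0]) / max(path_len, 1e-9)
    angles = np.arctan2(segs[:, 1], segs[:, 0])
    turning = np.abs(np.diff(angles))
    turning = np.minimum(turning, 2 * np.pi - turning)   # wrap to [0, pi]
    return {"path_length": path_len,
            "straightness": straightness,
            "mean_turning_angle": float(turning.mean())}

Feature vectors like these, computed per user over many trajectories, would then feed a standard classifier for identification.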

BibTeX:

@mastersthesis{siddharthsubramaniyam2019SketchRecognitionBasedClassificationforEyeMovementBiometricsinVirtualRealityMS,
type = {{MS Master's Thesis}},
author = {Subramaniyam, Siddharth},
title = {Sketch Recognition Based Classification for Eye Movement Biometrics in Virtual Reality},
year = {2019},
month = {June},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
 
Megha Yadav. 2019. "Mitigating public speaking anxiety using virtual reality and population-specific models." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: June 2019. Advisor: Theodora Chaspari & Tracy Hammond.
Abstract:

Public speaking is essential for effectively exchanging ideas, persuading others, and making a tangible impact. Yet public speaking anxiety (PSA) ranks as a top social phobia for many people. This research utilizes wearable technologies and virtual reality (VR) to expose individuals to PSA stimuli and quantify their PSA levels via group-based machine learning models. These models leverage common information across individuals and fine-tune their decisions based on specific individual and contextual factors. In this way, prediction decisions are made for clusters of people with common individual-specific factors, which benefits overall system accuracy. The findings of this study will enable researchers to better understand the antecedents and causes of PSA, contributing to behavioral interventions using VR.

BibTeX:

@mastersthesis{meghayadav2019MitigatingpublicspeakinganxietyusingvirtualrealityandpopulationspecificmodelsMS,
type = {{MS Master's Thesis}},
author = {Yadav, Megha},
title = {Mitigating public speaking anxiety using virtual reality and population-specific models.},
year = {2019},
month = {June},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Theodora Chaspari \& Tracy Hammond},
}
 
Jake Leland. 2019. "Recognizing Seatbelt-Fastening Behavior with Wearable Technology and Machine Learning." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2019. pp. 136. Advisor: Tracy Hammond.
Abstract:

In the case of many fatal automobile accidents, the victims were found to have not been wearing a seatbelt. This occurs in spite of the numerous safety sensors and warning indicators embedded within modern vehicles. Indeed, there is yet room for improvement in terms of seatbelt adoption. This work aims to lay the foundation for a novel method of encouraging seatbelt use: the utilization of wearable technology. Wearable technology has enabled considerable advances in health and wellness. Specifically, fitness trackers have achieved widespread popularity for their ability to quantify and analyze patterns of physical activity. Thanks to wearable technology's ease of use and convenient integration with mobile phones, users are quick to adopt. Of course, the practicality of wearable technology depends on activity recognition—the models and algorithms which are used to identify a pattern of sensor data as a particular physical activity (e.g. running, sitting, sleeping). Activity recognition is the basis of this research. In order to utilize wearable trackers toward the cause of seatbelt usage, there must exist a system for identifying whether a user has buckled their seatbelt. This was our primary goal. To develop such a system, we collected motion data from 20 different users. From this data, we identified trends which inspired the development of novel features. With these features, machine learning was used to train models to identify the motion of fastening a seatbelt in real time. This model serves as the basis for future work in systems which may provide more intelligent feedback as well as methods for interventions in dangerous user behavior.
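
As an illustration of the kind of model described above, here is a hedged sketch of extracting simple statistical features from a labeled wrist-motion window and training a standard classifier; the feature set, window length, and classifier choice are assumptions, not the thesis's actual design.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc):
    # acc: (N, 3) accelerometer window -> simple per-axis and magnitude stats.
    mag = np.linalg.norm(acc, axis=1)
    return np.concatenate([acc.mean(axis=0), acc.std(axis=0),
                           [mag.mean(), mag.std(), mag.max() - mag.min()]])

# Placeholder training data: labeled windows (1 = fastening a seatbelt).
windows = [np.random.randn(128, 3) for _ in range(100)]
labels = np.random.randint(0, 2, 100)
X = np.array([window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)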

BibTeX:

@mastersthesis{jakeleland2019RecognizingSeatbeltFasteningBehaviorwithWearableTechnologyandMachineLearningMS,
type = {{MS Master's Thesis}},
author = {Leland, Jake},
title = {Recognizing Seatbelt-Fastening Behavior with Wearable Technology and Machine Learning},
pages = {136},
year = {2019},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
 
Sharmistha Maity. 2019. "Combining Paper-Pencil Techniques with Immediate Feedback for Learning Chemical Drawings." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2019. Advisor: Tracy Hammond.
Abstract:

Introductory chemistry courses teach the process of drawing basic chemical molecules with Lewis dot diagrams. Many beginner students, however, have difficulty mastering these diagrams. While several computer applications are being developed to help students learn Lewis dot diagrams, there is a potential hidden benefit of paper and pencil that many students may not realize. Sketch recognition has been used to identify advanced chemical diagrams; however, using recognition in an educational setting requires a focus beyond identifying the final drawing. The goal of this research is to determine whether paper-pencil techniques provide educational benefits for learning Lewis dot diagrams. An analysis of pre-post assessments shows how combining sketch recognition of paper-pencil techniques with immediate feedback provides greater benefits for students with a basic chemistry understanding.

BibTeX:

@mastersthesis{sharmisthamaity2019CombiningPaperPencilTechniqueswithImmediateFeedbackforLearningChemicalDrawingsMS,
type = {{MS Master's Thesis}},
author = {Maity, Sharmistha},
title = {Combining Paper-Pencil Techniques with Immediate Feedback for Learning Chemical Drawings},
year = {2019},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
 
Larry Powell. 2019. "The Evaluation of Recognizing Aquatic Activities Through Wearable Sensors and Machine Learning." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2019. pp. 112. Advisor: Tracy Hammond.
Abstract:

Swimming is a complex and dangerous sport. A recent study found that swimming is the third leading cause of death among children worldwide each year. A significant factor contributing to these statistics may be the limitations of current approaches to water-based education. As such, the Red Cross and Bangladesh have started investing in research into water-based education. Current technology monitors only the main swim styles: backstroke, breaststroke, butterfly, and freestyle. These existing systems miss additional activities, such as rest (treading water), transitions (flip turns), and low-energy strokes (sidestroke). These additional activities affect a person's swimming ability, and they form the baseline for what is taught by the Red Cross, Bangladesh, and the military. We developed and tested an aqua-tracker system for monitoring swimmers in all forms of activities expected from a swimming-based training session. Our system uses a waterproof mobile device to capture a swimmer's flip turns, ability to tread water, sidestroke, freestyle, backstroke, breaststroke, and butterfly strokes. Activities are recognized using a sliding-window framework, comparing both a deep learning and a feature-based recognition system. Our tracker has shown that the system can accurately detect each of the activities, from beginner to expert level, with an f-measure of 0.94. Equipped with the capabilities provided by our aqua-tracker system, people can monitor their own swimming ability, parents can monitor their children while they are in the water, and lifeguards and swimmers taking proficiency exams will be able to perform the exams without the need for a proctor.
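
To illustrate the sliding-window framework mentioned above, here is a minimal hypothetical sketch: the sensor stream is cut into overlapping windows, each window is classified, and the label sequence is smoothed by majority vote. Window size, step, and smoothing width are assumptions, not the thesis's parameters.

import numpy as np
from collections import Counter

def sliding_windows(stream, size=128, step=64):
    # Yield overlapping windows over a (T, channels) sensor stream.
    for start in range(0, len(stream) - size + 1, step):
        yield stream[start:start + size]

def smooth(labels, k=5):
    # Majority vote over k consecutive per-window predictions.
    out = []
    for i in range(len(labels)):
        lo, hi = max(0, i - k // 2), min(len(labels), i + k // 2 + 1)
        out.append(Counter(labels[lo:hi]).most_common(1)[0][0])
    return out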

BibTeX:

@mastersthesis{larrypowell2019TheEvaluationofRecognizingAquaticActivitiesThroughWearableSensorsandMachineLearningMS,
type = {{MS Master's Thesis}},
author = {Powell, Larry},
title = {The Evaluation of Recognizing Aquatic Activities Through Wearable Sensors and Machine Learning},
pages = {112},
year = {2019},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
 
Sabyasachi Chakraborty. 2018. "A Novel Methodology for Creating Auto Generated Spring-Based Truss Problems Through Mechanix." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: December 2018. Advisor: Maurice Rojas & Tracy Hammond. ORCID: 0000-0003-1640-845X.
Abstract:

Software that provides automated teaching assistance and instantaneous feedback for students has revolutionized the modern classroom. In addition to helping instructors manage large classes, the interactive experience can also benefit students. For instance, several existing systems incorporate recognition of students' hand-drawn solutions to problems. In these cases, the instructor sketches the solution to the problem and the students' sketches are expected to match this template. While this framework provides immediate feedback to students, it is still a constraint on instructors' time; additionally, it can be difficult to test conceptual understanding through only a small number of problems. There remains a strong need to generate questions automatically based on templates drawn by instructors so as to promote greater customization and variation in problems for students. The focus of our research is to develop a novel method that can automatically generate new valid problems from a given reference problem. We have chosen linear spring-based truss systems as our domain. Another outcome of our research is a method for recognizing a spring network sketched naturally by the user with commonly used symbols. We also generate different types of questions and boundary conditions from the recognized and auto-generated truss structures using the finite element method (FEM) in a novel way. Our system has been integrated with Mechanix, a tool developed at Texas A&M University which supports free body diagrams (FBDs) and the creative design of truss structures. Mechanix supports engineering learning by providing intelligent and immediate feedback on hand-drawn sketches, and it has already been actively deployed in a number of university classrooms. We build a problem generator on top of Mechanix to leverage its capabilities for instantaneous, personalized feedback while enabling more thorough testing of student abilities and providing them a limitless pool of practice problems.
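
For readers unfamiliar with FEM on spring networks, below is a minimal, self-contained sketch of the core computation for 1-D linear springs: assemble the global stiffness matrix K and solve K u = f for nodal displacements. The example values are invented, and the thesis's actual truss formulation is richer than this.

import numpy as np

def assemble_stiffness(n_nodes, springs):
    # springs: list of (node_i, node_j, stiffness) tuples.
    K = np.zeros((n_nodes, n_nodes))
    for i, j, k in springs:
        K[i, i] += k; K[j, j] += k
        K[i, j] -= k; K[j, i] -= k
    return K

# Example: three nodes in a chain, node 0 fixed, unit force at node 2.
K = assemble_stiffness(3, [(0, 1, 100.0), (1, 2, 50.0)])
f = np.array([0.0, 0.0, 1.0])
free = [1, 2]                       # drop the fixed degree of freedom
u = np.zeros(3)
u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
# u -> [0.0, 0.01, 0.03]: displacements consistent with springs in series.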

BibTeX:

@mastersthesis{sabyasachichakraborty2018ANovelMethodologyforCreatingAutoGeneratedSpringBasedTrussProblemsThroughMechanixMS,
type = {{MS Master's Thesis}},
author = {Chakraborty, Sabyasachi},
title = {A Novel Methodology for Creating Auto Generated Spring-Based Truss Problems Through Mechanix},
year = {2018},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Maurice Rojas \& Tracy Hammond, ORCID: 0000-0003-1640-845X},
}
 
Adil Malla. 2018. "A Gaze-Based Authentication System: From Authentication to Intrusion Detection." MA Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2018. Advisor: Tracy Hammond. ORCID: 0000-0002-9598-1635.
Abstract:

The use of authentication systems has increased significantly due to the advancement of technology, greater affordability of devices, increased ease of use, and enhanced functionality. These authentication systems help safeguard users' private personal information. There is a plethora of authentication systems based on a variety of inputs such as PINs, biometrics, and smart cards. All of these authentication systems face different threats and attacks. Shoulder surfing is an attack in which an intruder tries to see what a user is inputting into the authentication system, either by looking over the user's shoulder or by using video technology. PIN-based authentication systems are prone to shoulder surfing; e.g., at ATMs or other public places, an intruder can shoulder surf what a user is entering as their PIN/password. Biometric-based authentication systems are prone to spoofing attacks. Smart cards can be easily stolen, replicated, or even spoofed. Thus, the goal of this research is to explore, develop, and quantify an alternate authentication system that addresses the issues and attacks faced by the most commonly used authentication systems. We do this through the development of a gaze-based authentication system that addresses shoulder surfing, video analysis attacks, and spoofing attacks by an intruder. Results show an accuracy of 97.5% and an F-measure of 0.97 when authenticating a user, and an accuracy of 89.5% and an F-measure of 0.89 when attempting to detect an intruder trying to log in using someone else's password.

BibTeX:

@mastersthesis{adilhamidmalla2018AGazeBasedAuthenticationSystemFromAuthenticationtoIntrusionDetectionMA,
type = {{MA Master's Thesis}},
author = {Malla, Adil},
title = {A Gaze-Based Authentication System: From Authentication to Intrusion Detection},
year = {2018},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ORCID: 0000-0002-9598-1635},
}
 
Tianshu Chu. 2017. "A Sketch-based Educational System for Learning Chinese Handwriting." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: December 2017. Advisor: Tracy Hammond. First Position: Microsoft Redmond. ORCID: 0000-0002-9497-058X.
Abstract:

Learning Chinese as a Second Language (CSL) is a difficult task for students in English-speaking countries due to the large symbol set and complicated writing techniques. Traditional classroom methods of teaching Chinese handwriting have major drawbacks due to human experts' bias and the lack of assessment of writing techniques. In this work, we propose a sketch-based educational system to help CSL students learn Chinese handwriting faster and better in a novel way. Our system allows students to draw freehand symbols to answer questions, and uses sketch recognition and AI techniques to recognize, assess, and provide feedback in real time. Results show that the system reaches a recognition accuracy of 86% on novice learners' inputs, a detection rate higher than 95% for mistakes in writing techniques, and an F-measure of 80.3% on the classification between expert and novice handwriting.

BibTeX:

@mastersthesis{tianshuchu2017ASketchbasedEducationalSystemforLearningChineseHandwritingMS,
type = {{MS Master's Thesis}},
author = {Chu, Tianshu},
title = {A Sketch-based Educational System for Learning Chinese Handwriting},
year = {2017},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ORCID: 0000-0002-9497-058X},
}
 
Jung In Koh. 2017. "Developing a Hand Gesture Recognition System for Mapping Symbolic Hand Gestures to Analogous Emoji in Computer-Mediated Communication." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: August 2017. Advisor: Tracy Hammond. First Position: TAMU PhD Student. ORCID: 0000-0002-3909-0192.
Abstract:

Recent trends in computer-mediated communication (CMC) have not only expanded instant messaging (IM) through the use of images and videos, but have also enriched traditional text messaging with so-called visual communication markers (VCMs) such as emoticons, emojis, and stickers. VCMs can prevent the loss of subtle emotional content in CMC that is normally delivered by nonverbal cues conveying affective and emotional information. However, as the number of VCMs in the selection set grows, the problem of VCM entry needs to be addressed. Additionally, conventional ways of accessing VCMs continue to rely on input entry methods that are not directly and intimately tied to expressive nonverbal cues. One well-studied form of expressive nonverbal cue is the hand gesture. In this work, I propose a user-defined hand gesture set that is highly representative of VCMs and a two-stage hand gesture recognition system (feature-based, shape-based) that distinguishes the user-defined hand gestures. The goal of this research is to enable users to be more immersed, natural, and quick in generating VCMs through gestures. The idea is for users to maintain the lower-bandwidth online communication of text messaging, largely retaining its convenient and discreet properties, while also incorporating the advantages of the higher-bandwidth online communication of video messaging by having users naturally gesture their emotions, which are then closely mapped to VCMs. Results show that user-dependent accuracy is approximately 86% and user-independent accuracy is about 82%.

BibTeX:

@mastersthesis{junginkoh2017DevelopingaHandGestureRecognitionSystemforMappingSymbolicHandGesturestoAnalogousEmojiinComputerMediatedCommunicationMS,
type = {{MS Master's Thesis}},
author = {Koh, Jung In},
title = {Developing a Hand Gesture Recognition System for Mapping Symbolic Hand Gestures to Analogous Emoji in Computer-Mediated Communication},
year = {2017},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ISBN: ORCID id: 0000-0002-3909-0192},
}
PublicationImage
 
PublicationImage 2017 Seth Polsley. 2017. "Identifying outcomes of care from medical records to improve doctor-patient communication." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: June 2017. pp. 81. Advisor: Tracy Hammond. First Position: TAMU PhD Student. ISBN: ORCID id: 0000-0002-8805-8375.
Show Abstract:

Between appointments, healthcare providers have limited interaction with their patients, but patients have similar patterns of care. Medications have common side effects; injuries have an expected healing time; and so on. By modeling patient interventions with outcomes, healthcare systems can equip providers with better feedback. In this work, we present a pipeline for analyzing medical records according to an ontology directed at allowing closed-loop feedback between medical encounters. Working with medical data from multiple domains, we use a combination of data processing, machine learning, and clinical expertise to extract knowledge from patient records. While our current focus is on technique, the ultimate goal of this research is to inform development of a system using these models to provide knowledge-driven clinical decision-making.

Show BibTex

@mastersthesis{sethpolsley2017IdentifyingoutcomesofcarefrommedicalrecordstoimprovedoctorpatientcommunicationMS,
type = {{MS Master's Thesis}},
author = {Polsley, Seth},
title = {Identifying outcomes of care from medical records to improve doctor-patient communication},
pages = {81},
year = {2017},
month = {June},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ISBN: ORCID id: 0000-0002-8805-8375},
}
PublicationImage
 
PublicationImage 2017 Aqib Bhat. 2017. "Sketchography - Automatic grading of map sketches for geography education." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2017. Advisor: Tracy Hammond. First Position: Amazon. ISBN: ORCID id: 0000-0003-2718-7736. http://hdl.handle.net/1969.1/161656
Show Abstract:

Geography is a vital classroom subject that teaches students about the physical features of the planet we live on. Despite the importance of geographic knowledge, almost 75% of 8th graders scored below proficient in geography on the 2014 National Assessment of Educational Progress. Sketchography is a pen-based intelligent tutoring system that provides real-time feedback to students learning the locations, directions, and topography of rivers around the world. Sketchography uses sketch recognition and artificial intelligence to understand the user’s sketched intentions. As sketches are inherently messy, and even the most expert geographer will draw only a close approximation of the river’s flow, data has been gathered from both novice and expert sketchers. This data, in combination with professors’ grading rubrics and statistically driving AI-algorithms, provide real-time automatic grading that is similar to a human grader’s score. Results show the system to be 94.64% accurate compared to human grading.

Show BibTex

@mastersthesis{aqibbhat2017SketchographyAutomaticgradingofmapsketchesforgeographyeducationMS,
type = {{MS Master's Thesis}},
author = {Bhat, Aqib},
title = {Sketchography - Automatic grading of map sketches for geography education},
year = {2017},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ISBN: ORCID id: 0000-0003-2718-7736},
}
PublicationImage
 
PublicationImage 2017 Josh Cherian. 2017. "Recognition of Everyday Activities through Wearable Sensors and Machine Learning." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2017. Advisor: Tracy Hammond & Theresa Maldonado. First Position: TAMU PhD Student. ISBN: ORCID id: 0000-0002-7749-2109.
Show Abstract:

Over the past several years, the use of wearable devices has increased dramatically,primarily for fitness monitoring, largely due to their greater sensor reliability, increasedfunctionality, smaller size, increased ease of use, and greater affordability.These devices have helped many people of all ages live healthier lives and achieve their personal fitnessgoals, as they are able to see quantifiable and graphical results of their efforts every step of the way (i.e. in real-time). Yet, while these device systems work well within the fitnessdomain, they have yet to achieve a convincing level of functionality in the larger domainof healthcare. As an example, according to the Alzheimer’s Association, there are currently approxi-mately 5.5 million Americans with Alzheimer’s Disease and approximately 5.3 million ofthem are over the age of 65, comprising 10% of this age group in the U.S. The economictoll of this disease is estimated to be around $259 billion. By 2050 the number of Amer-icans with Alzheimer’s disease is predicted to reach around 16 million with an economictoll of over $1 trillion. There are other prevalent and chronic health conditions that arecritically important to monitor, such as diabetes, complications from obesity, congestiveheart failure, and chronic obstructive pulmonary disease (COPD) among others. The goal of this research is to explore and develop accurate and quantifiable sensingand machine learning techniques for eventual real-time health monitoring by wearabledevice systems. To that end, a two-tier recognition system is presented that is designed to identify health activities in a naturalistic setting based on accelerometer data of commonactivities. In Tier I a traditional activity recognition approach is employed to classify shortwindows of data, while in Tier II these classified windows are grouped to identify instancesof a specific activity. Everyday activities that were explored in this research include brushing one’s teeth, combing one’s hair, scratching one’s chin, washing one’s hands,taking medication, and drinking. Results show that an F-measure of 0.83 is achievablewhen identifying these activities from each other and an F-measure of of 0.82 is achievablewhen identifying instances of brushing teeth over the course of a day.

Show BibTex

@mastersthesis{joshcherian2017RecognitionofEverydayActivitiesthroughWearableSensorsandMachineLearningMS,
type = {{MS Master's Thesis}},
author = {Cherian, Josh},
title = {Recognition of Everyday Activities through Wearable Sensors and Machine Learning},
year = {2017},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond \& Theresa Maldonado, ISBN: ORCID id: 0000-0002-7749-2109},
}
PublicationImage
 
PublicationImage 2017 Jorge Ivan Camara. 2017. "Flow2Code - From Hand-drawn Flowchart to Code Execution." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2017. Advisor: Tracy Hammond. First Position: Yellowme & Lecturer at Universidad Autónoma de Yucatán. ISBN: ORCID id: 0000-0003-0922-5508.
Show Abstract:

Flowcharts play an important role when learning to program by conveying algorithms graphically and making them easy to read and understand. When learning how to code with flowcharts and transitioning between the two, people often use computer based software to design and execute the algorithm conveyed by the flowchart. This requires the users to learn how to use the computer-based software first, which often leads to a steep learning curve. We claim that the learning curve can be decremented by using off-line sketch recognition and computer vision algorithms on a mobile device. This can be done by drawing the flowchart on a piece of paper and using a mobile device with a camera to capture an image of the flowchart. Flow2Code is a code flowchart recognizer that allows the users to code simple scripts on a piece of paper by drawing flowcharts. This approach attempts to be more intuitive since the user does not need to learn how to use a system to design the flowchart. Only a pencil, a notebook with white pages, and a mobile device are needed to achieve the same result. The main contribution of this thesis is to provide a more intuitive and easy-to-use tool for people to translate flowcharts into code and then execute the code.

Show BibTex

@mastersthesis{jorgecamara2017Flow2CodeFromHanddrawnFlowcharttoCodeExecutionMS,
type = {{MS Master's Thesis}},
author = {Camara, Jorge Ivan},
title = {Flow2Code - From Hand-drawn Flowchart to Code Execution},
year = {2017},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ISBN: ORCID id: 0000-0003-0922-5508},
}
PublicationImage
 
PublicationImage 2017 Nahum Villanueva. 2017. "ARCaching: Using Augmented Reality on Mobile Devices to Improve Geocacher Experience." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2017. Advisor: Tracy Hammond. ISBN: ORCID id: 0000-0002-0451-4805.
Show Abstract:

Geocaching is an outdoor treasure-hunting game that uses GPS and mobile devices to assist players in the quest of finding a geocache — a cleverly hidden physical container with a log and other items inside. The current game’s smartphone interface provides the GPS location of a geocache on a map that updates as the user gets closer to the hidden location. However, constantly checking in with the map to correct one’s location can substantially reduce situational awareness, which can become a quite a danger, as the user wanders through the woods or up a cliff to find a geocache. ARCaching is an Android-based augmented reality (AR) mobile application that facilitates navigation to a geocache and also increases situational awareness by combining environmental information gathered by the camera and overlapping it with rendered images to aid the players in their quest. ARCaching uses BeyondAR as an augmented reality browser to guide players to a cache while still providing pertinent information about the environment to help reduce risk. ARCaching was developed and evaluated against the original Geocaching.com application to determine how the user experience is affected by the AR technology. Results showed that AR while geocaching can facilitate the task of searching for caches and improves the user experience.

Show BibTex

@mastersthesis{nahumvillanueva2017ARCachingUsingAugmentedRealityonMobileDevicestoImproveGeocacherExperienceMS,
type = {{MS Master's Thesis}},
author = {Villanueva, Nahum},
title = {ARCaching: Using Augmented Reality on Mobile Devices to Improve Geocacher Experience},
year = {2017},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ISBN: ORCID id: 0000-0002-0451-4805},
}
PublicationImage
 
PublicationImage 2016 Siddhartha Karthik. 2016. "Labeling by Example." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: August 2016. Advisor: Tracy Hammond. First Position: Uber. ISBN: ORCID id: 0000-0003-4445-8008.
Show Abstract:

Sketch recognition is the computer understanding of hand drawn diagrams. Recognizing sketches instantaneously is necessary to build beautiful interfaces with real time feedback. There are various techniques to quickly recognize sketches into ten or twenty classes. However for much larger datasets of sketches from a large number of classes, these existing techniques can take an extended period of time to accurately classify an incoming sketch and require significant computational overhead. Thus, to make classification of large datasets feasible, we propose using multiple stages of recognition. In the initial stage, gesture-based feature values are calculated and the trained model is used to classify the incoming sketch. Sketches with an accuracy less than a threshold value, go through a second stage of geometric recognition techniques. In the second geometric stage, the sketch is segmented, and sent to shape-specific recognizers. The sketches are matched against predefined shape descriptions, and confidence values are calculated. The system outputs a list of classes that the sketch could be classified as, along with the accuracy, and precision for each sketch. This process both significantly reduces the time taken to classify such huge datasets of sketches, and increases both the accuracy and precision of the recognition.

Show BibTex

@mastersthesis{siddharthakarthik2016LabelingbyExampleMS,
type = {{MS Master's Thesis}},
author = {Karthik, Siddhartha},
title = {Labeling by Example},
year = {2016},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ISBN: ORCID id: 0000-0003-4445-8008},
}
PublicationImage
 
PublicationImage 2016 Swarna Keshavabhotla. 2016. "PerSketchTivity: Recognition and Progressive Learning Analysis." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: August 2016. Advisor: Tracy Hammond. First Position: FactSet Research Systems. ISBN: ORCID id: 0000-0001-5892-858X.
Show Abstract:

PerSketchTivity is a sketch-based tutoring system for design sketching that allows students to hone their skills in design sketching and self-regulated learning through real-time feedback. Students learn design-sketching fundamentals through drawing exercises of reference shapes starting from basic to complex shapes in all dimensions and subsequently receive real-time feedback assessing their performance. PerSketchTivity consists of a recognition system that evaluates the correctness of a student's sketch and provides real-time feedback, evaluating the sketch based on error (accuracy), smoothness, and speed. The focus of this thesis is to evaluate the performance of the system in terms of the recognition accuracy (does the system correctly understand what the student intended to draw) as well as the educational impact on the sketching abilities of the students practicing with this system. Each student's increase in sketching ability is measured in terms of the accuracy, smoothness, and the speed at which the strokes. Data analysis comparing the early to late sketches showed a statistically significant increase in sketching ability.

Show BibTex

@mastersthesis{swarnakeshavabhotla2016PerSketchTivityRecognitionandProgressiveLearningAnalysisMS,
type = {{MS Master's Thesis}},
author = {Keshavabhotla, Swarna},
title = {PerSketchTivity: Recognition and Progressive Learning Analysis},
year = {2016},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ISBN: ORCID id: 0000-0001-5892-858X},
}
PublicationImage
 
PublicationImage 2016 Shalini Ashok Kumar. 2016. "Evaluation of Conceptual Sketches on Stylus-based Devices." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2016. Advisor: Tracy Hammond. First Position: Google. ISBN: ORCID id: 0000-0002-6044-790X.
Show Abstract:

Design Sketching is an important tool for designers and creative professionals to express their ideas and thoughts onto visual medium. Being a very critical and versatile skill for engineering students, this course is often taught in universities on pen and paper. However, this traditional pedagogy is limited by the availability of human instructors for their feedback. Also, students having low self-efficacy do not learn efficiently in traditional learning environment. Using intelligent interfaces this problem can be solved where we try to mimic the feedback given by an instructor and assess the student drawn sketches to give them insight of the areas they need to improve on. PerSketchTivity is an intelligent tutoring system which allows students to practice their drawing fundamentals and gives them real-time assessment and feedback. This research deals with finding the evaluation metrics that will enable us to grade students from their sketch data. There are seven metrics that we will work with to analyse how each of them contribute in deciding the quality of the sketches. The main contribution of this research is to identify the features of the sketch that can distinguish a good quality sketch from a poor one and design a grading metric for the sketches that can give a final score between 0 and 1 to the user sketches. Using these obtained features and our grading metric method, we grade all the sketches of students and experts.

Show BibTex

@mastersthesis{shaliniashokkumar2016EvaluationofConceptualSketchesonStylusbasedDevicesMS,
type = {{MS Master's Thesis}},
author = {Ashok Kumar, Shalini},
title = {Evaluation of Conceptual Sketches on Stylus-based Devices},
year = {2016},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ISBN: ORCID id: 0000-0002-6044-790X},
}
PublicationImage
 
PublicationImage 2016 Purnendu Kaul. 2016. "Gaze Assisted Classification of On-Screen Tasks (by Difficulty Level) and User Activities (Reading, Writing/Typing, Image-Gazing)." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2016. Advisor: Tracy Hammond. First Position: Walmart Technologies. ISBN: ORCID id: 0000-0002-7657-9616.
Show Abstract:

Efforts toward modernizing education are emphasizing the adoption of Intelligent Tutoring Systems (ITS) to complement conventional teaching methodologies. Intelligent tutoring systems empower instructors to make teaching more engaging by providing a platform to tutor, deliver learning material, and to assess students’ progress. Despite the advantages, existing intelligent tutoring systems do not automatically assess how students engage in problem solving? How do they perceive various activities, while solving a problem? and How much time they spend on each discrete activity leading to the solution? In this research, we present an eye tracking framework that can assess how eye movements manifest students’ perceived activities and overall engagement in a sketch based Intelligent tutoring system, “Mechanix.” Mechanix guides students in solving truss problems by supporting user initiated feedback. Through an evaluation involving 21 participants, we show the potential of leveraging eye movement data to recognize students’ perceived activities, “reading, gazing at an image, and problem solving,” with an accuracy of 97.12%. We are also able to leverage the user gaze data to classify problems being solved by students as difficult, medium, or hard with an accuracy of more than 80%. In this process, we also identify the key features of eye movement data, and discuss how and why these features vary across different activities.

Show BibTex

@mastersthesis{purnendukaul2016GazeAssistedClassificationofOnScreenTasksbyDifficultyLevelandUserActivitiesReadingWritingTypingImageGazingMS,
type = {{MS Master's Thesis}},
author = {Kaul, Purnendu},
title = {Gaze Assisted Classification of On-Screen Tasks (by Difficulty Level) and User Activities (Reading, Writing/Typing, Image-Gazing)},
year = {2016},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ISBN: ORCID id: 0000-0002-7657-9616},
}
PublicationImage
 
PublicationImage 2016 Jaideep Ray. 2016. "Finding Similar Sketches." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2016. Advisor: Tracy Hammond. First Position: Facebook. ISBN: ORCID id: 0000-0003-2266-576X. http://hdl.handle.net/1969.1/156837
Show Abstract:

Searching is an important tool for managing and navigating the massive amounts of data available in today’s information age. While new searching methods have be-come increasingly popular and reliable in recent years, such as image-based searching, these methods are more limited than text-based means in that they don’t allow generic user input. Sketch-based searching is a method that allows users to draw generic search queries and return similar drawn images, giving more user control over their search content. In this thesis, we present Sketchseeker, a system for indexing and searching across a large number of sketches quickly based on their similarity. The system includes several stages. First, sketches are indexed according to efficient and compact sketch descriptors. Second, the query retrieval subsystem considers sketches based on shape and structure similarity. Finally, a trained support vector machine classifier provides semantic filtering, which is then combined with median filtering to return the ranked results. SketchSeeker was tested on a large set of sketches against existing sketch similarity metrics, and it shows significant improvements in both speed and accuracy when compared to existing known techniques. The focus of this thesis is to outline the general components of a sketch retrieval system to find near similar sketches in real time.

Show BibTex

@mastersthesis{jaideepray2016FindingSimilarSketchesMS,
type = {{MS Master's Thesis}},
author = {Ray, Jaideep},
title = {Finding Similar Sketches},
year = {2016},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ISBN: ORCID id: 0000-0003-2266-576X},
}
PublicationImage
 
PublicationImage 2015 Shiqiang (Frank) Guo. 2015. "ResuMatcher: A Personalized Resume-Job Matching System." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2015. Advisor: Treacy Hammond. First Position: Amazon. ISBN: ORCID id: 0000-0002-6846-3234. http://hdl.handle.net/1969.1/154963
Show Abstract:

Today, online recruiting web sites such as Monster and Indeed.com have become one of the main channels for people to find jobs. These web platforms have provided their services for more than ten years, and have saved a lot of time and money for both job seekers and organizations who want to hire people. However, traditional information retrieval techniques may not be appropriate for users. The reason is because the number of results returned to a job seeker may be huge, so job seekers are required to spend a significant amount of time reading and reviewing their options. One popular approach to resolve this difficulty for users are recommender systems, which is a technology that has been studied for a long time. In this thesis we have made an effort to propose a personalized job-résumé matching system, which could help job seekers to find appropriate jobs more easily. We create a finite state transducer based information extraction library to extract models from résumés and job descriptions. We devised a new statistical-based ontology similarity measure to compare the résumé models and the job models. Since the most appropriate jobs will be returned first, the users of the system may get a better result than current job finding web sites. To evaluate the system, we computed Normalized Discounted Cumulative Gain (NDCG) and precision@k of our system, and compared to three other existing models as well as the live result from Indeed.com.

Show BibTex

@mastersthesis{shiqiangguo2015ResuMatcherAPersonalizedResumeJobMatchingSystemMS,
type = {{MS Master's Thesis}},
author = {Guo, Shiqiang (Frank)},
title = {ResuMatcher: A Personalized Resume-Job Matching System},
year = {2015},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Treacy Hammond, ISBN: ORCID id: 0000-0002-6846-3234},
}
PublicationImage
 
PublicationImage 2014 Zhengliang Yin. 2014. "Chinese Calligraphist: A Sketch Based Learning Tool for Learning Written Chinese." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: August 2014. Advisor: Tracy Hammond. First Position: Amazon. ISBN: ORCID id: 0000-0003-2996-3639. http://hdl.handle.net/1969.1/153841
Show Abstract:

Learning Chinese as a foreign language is becoming more and more popular in western countries, however it is also very hard to be proficient, especially in writing. The involvement of the teachers in the process of learning Chinese writing is extremely necessary because they can give timely critiques and feedbacks as well as correct the students’ bad writing habits. However, it is inadequate and inefficient of the large class capacity therefore it is urgent and necessary to design a computer-based system to help students in practice Chinese writing, correct their bad writing habits early, and give feedback personally. The current written Chinese learning tools such as online tutorials emphasize writing rules including stroke order, but it could not provide practicing sessions and feedback. Hashigo, a novel CALL (Computer Assisted Language Learning) system, introduced the concept of sketch-based learning, but it’s low level recognizer is not proper for Chinese character domain. Therefore in order to help western students learn Chinese with better understanding, we adopted LADDER description language, machine learning techniques, and sketch recognition algorithms to improve handwritten Chinese stroke recognition rate. With our multilayer perceptron recognizer, it improved Chinese stroke recognition accuracy by 15.7% than the average of the four basic recognizer. In feature selection study we found that the most important features were “the aspect of the bounding box”, and the “density metrics”, and “curviness”. We chose 8 most important features after the recursive selecting stabilized. We discovered that in most situations, feature recognition is more important than template recognition. Since the writing technique is emphasized while they are taught, only 2 templates is enough. It worked as well as 20 templates, which improved recognition speed dramatically. In conclusion, in this thesis our contribution is that we (1) proposed a natural way to describe Chinese characters; (2) implemented a hierarchical Chinese character recognizer combining LADDER with the multilayer perceptron low level recognizer; (3) analyzed the performance of different recognition schemes; (4) designed a sketch-based Chinese writing learning tool, Chinese Calligraphist; and (5) find the best feature combination to recognize Chinese strokes while improving the recognition accuracy.

Show BibTex

@mastersthesis{zhengliangyin2014ChineseCalligraphistASketchBasedLearningToolforLearningWrittenChineseMS,
type = {{MS Master's Thesis}},
author = {Yin, Zhengliang},
title = {Chinese Calligraphist: A Sketch Based Learning Tool for Learning Written Chinese},
year = {2014},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ISBN: ORCID id: 0000-0003-2996-3639},
}
PublicationImage
 
PublicationImage 2012 Hong-Hoe (Ayden) Kim. 2012. "Analysis of Children's Sketches to Improve Recognition Accuracy in Sketch-Based Applications." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: December 2012. pp. 105. Advisor: Tracy Hammond. First Position: TAMU PhD Student. ISBN: ORCID id: 0000-0002-1175-8680. http://hdl.handle.net/1969.1/156963
Show Abstract:

The current education systems in elementary schools are usually using traditional teaching methods such as paper and pencil or drawing on the board. The benefit of paper and pencil is their ease of use. Researchers have tried to bring this ease of use to computer-based educational systems through the use of sketch-recognition. Sketch-recognition allows students to draw naturally while at the same time receiving automated assistance and feedback from the computer. There are many sketch-based educational systems for children. However, current sketch-based educational systems use the same sketch recognizer for both adults and children. The problem of this approach is that the recognizers are trained by using sample data drawn by adults, even though the drawing patterns of children and adults are markedly different. We propose that if we make a separate recognizer for children, we can increase the recognition accuracy of shapes drawn by children. By creating a separate recognizer for children, we improved the recognition accuracy of children’s drawings from 81.25% (using the adults’ threshold) to 83.75% (using adjusted threshold for children). Additionally, we were able to automatically distinguish children’s drawings from adults’ drawings. We correctly identified the drawer’s age (age 3, 4, 7, or adult) with 78.3%. When distinguishing toddlers (age 3 and 4) from matures (age 7 and adult), we got a precision of 95.2% using 10-fold cross validation. When we removed adults and distinguished between toddlers and 7 year olds, we got a precision of 90.2%. Distinguishing between 3, 4, and 7 year olds, we got a precision of 86.8%. Furthermore, we revealed that there is a potential gender difference since our recognizer was more accurately able to recognize the drawings of female children (91.4%) than the male children (85.4%). Finally, this paper introduces a sketch-based teaching assistant tool for children, EasySketch, which teaches children how to draw digits and characters. Children can learn how to draw digits and characters by instructions and feedback.

Show BibTex

@mastersthesis{honghoekim2012AnalysisofChildrensSketchestoImproveRecognitionAccuracyinSketchBasedApplicationsMS,
type = {{MS Master's Thesis}},
author = {Kim, Hong-Hoe (Ayden)},
title = {Analysis of Children's Sketches to Improve Recognition Accuracy in Sketch-Based Applications},
pages = {105},
year = {2012},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ISBN: ORCID id: 0000-0002-1175-8680},
}
PublicationImage
 
PublicationImage 2012 Drew Logsdon. 2012. "Arm-Hand-Finger Video Game Interaction." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: December 2012. pp. 108. Advisor: Tracy Hammond. First Position: IBM. http://hdl.handle.net/1969.1/ETD-TAMU-2011-12-10567
Show Abstract:

Despite the growing popularity and expansion of video game interaction techniques and research in the area of hand gesture recognition, the application of hand gesture video game interaction using arm, hand, and finger motion has not been extensively explored. Most current gesture-based approaches to video game interaction neglect the use of the fingers for interaction, but inclusion of the fingers will allow for more natural and unique interaction and merits further research. To implement arm, hand and finger-based interaction for the video game domain, several problems must be solved including gesture recognition, segmentation, hand visualization, and video game interaction that responds to arm, hand, and finger input. Solutions to each of these problems have been implemented. The potential of this interaction style is illustrated through the introduction of an arm, hand, and finger controlled video game system that responds to players' hand gestures. It includes a finger-gesture recognizer as well as a video game system employing various interaction styles. This consists of a first person shooter game, a driving game, and a menu interaction system. Several users interacted with and played these games, and this form of interaction is especially suitable for real time interaction in first-person games. This is perhaps the first implementation of its kind for video game interaction. Based on test results, arm, hand, and finger interaction a viable form of interaction that deserves further research. This implementation bridges the gap between existing gesture interaction methods and more advanced virtual reality techniques. It successfully combines the solutions to each problem mentioned above into a single, working video game system. This type of interaction has proved to be more intuitive than existing gesture controls in many situations and also less complex to implement than a full virtual reality setup. It allows more control by using the hands' natural motion and allows each hand to interact independently. It can also be reliably implemented using today's technology. This implementation is a base system that can be greatly expanded on. Many possibilities for future work can be applied to this form of interaction.

Show BibTex

@mastersthesis{drewlogsdon2012ArmHandFingerVideoGameInteractionMS,
type = {{MS Master's Thesis}},
author = {Logsdon, Drew},
title = {Arm-Hand-Finger Video Game Interaction},
pages = {108},
year = {2012},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
PublicationImage
 
PublicationImage 2012 George Lucchese. 2012. "Sketch Recognition on Mobile Devices." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: December 2012. pp. 54. Advisor: Tracy Hammond. First Position: IBM. http://hdl.handle.net/1969.1/148264
Show Abstract:

Sketch recognition allows computers to understand and model hand drawn sketches and diagrams. Traditionally sketch recognition systems required a pen based PC interface, but powerful mobile devices such as tablets and smartphones can provide a new platform for sketch recognition systems. We describe a new sketch recognition library, Strontium (SrL) that combines several existing sketch recognition libraries modified to run on both personal computers and on the Android platform. We analyzed the recognition speed and accuracy implications of performing low-level shape recognition on smartphones with touch screens. We found that there is a large gap in recognition speed on mobile devices between recognizing simple shapes and more complex ones, suggesting that mobile sketch interface designers limit the complexity of their sketch domains. We also found that a low sampling rate on mobile devices can affect recognition accuracy of complex and curved shapes. Despite this, we found no evidence to suggest that using a finger as an input implement leads to a decrease in simple shape recognition accuracy. These results show that the same geometric shape recognizers developed for pen applications can be used in mobile applications, provided that developers keep shape domains simple and ensure that input sampling rate is kept as high as possible.

Show BibTex

@mastersthesis{georgelucchese2012SketchRecognitiononMobileDevicesMS,
type = {{MS Master's Thesis}},
author = {Lucchese, George},
title = {Sketch Recognition on Mobile Devices},
pages = {54},
year = {2012},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
PublicationImage
 
PublicationImage 2012 Wenzhe Li. 2012. "Acoustic Based Sketch Recognition." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: August 2012. pp. 91. Advisor: Tracy Hammond. First Position: USC PhD Student, Goldman Sachs. http://hdl.handle.net/1969.1/ETD-TAMU-2012-08-11880
Show Abstract:

Sketch recognition is an active research field, with the goal to automatically recognize hand-drawn diagrams by a computer. The technology enables people to freely interact with digital devices like tablet PCs, Wacoms, and multi-touch screens. These devices are easy to use and have become very popular in market. However, they are still quite costly and need more time to be integrated into existing systems. For example, handwriting recognition systems, while gaining in accuracy and capability, still must rely on users using tablet-PCs to sketch on. As computers get smaller, and smart-phones become more common, our vision is to allow people to sketch using normal pencil and paper and to provide a simple microphone, such as one from their smart-phone, to interpret their writings. Since the only device we need is a single simple microphone, the scope of our work is not limited to common mobile devices, but also can be integrated into many other small devices, such as a ring. In this thesis, we thoroughly investigate this new area, which we call acoustic based sketch recognition, and evaluate the possibilities of using it as a new interaction technique. We focus specifically on building a recognition engine for acoustic sketch recognition. We first propose a dynamic time wrapping algorithm for recognizing isolated sketch sounds using MFCC(Mel-Frequency Cesptral Coefficients). After analyzing its performance limitations, we propose improved dynamic time wrapping algorithms which work on a hybrid basis, using both MFCC and four global features including skewness, kurtosis, curviness and peak location. The proposed approaches provide both robustness and decreased computational cost. Finally, we evaluate our algorithms using acoustic data collected by the participants using a device's built-in microphone. Using our improved algorithm we were able to achieve an accuracy of 90% for a 10 digit gesture set, 87% accuracy for the 26 English characters and over 95% accuracy for a set of seven commonly used gestures.

Show BibTex

@mastersthesis{wenzheli2012AcousticBasedSketchRecognitionMS,
type = {{MS Master's Thesis}},
author = {Li, Wenzhe},
title = {Acoustic Based Sketch Recognition},
pages = {91},
year = {2012},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
PublicationImage
 
PublicationImage 2012 Francisco Vides. 2012. "TAYouKi: A Sketch-Based Tutoring System for Young Kids." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: August 2012. pp. 129. Advisor: Tracy Hammond. First Position: PayPal. http://hdl.handle.net/1969.1/ETD-TAMU-2012-08-11497
Show Abstract:

Intelligent tutoring systems (ITS) have proven to be effective tools for aiding in the instruction of new skills for young kids; however, interaction methods that employ traditional input devices such as the keyboard and mouse may present barriers to children who have yet learned how to write. Existing applications which utilize pen-input devices better mimic the physical act of writing, but few provide useful feedback to the users. This thesis presents a system specifically designed to serve as a useful tool in teaching children how to draw basic shapes, and helping them develop basic drawing and writing skills. The system uses a combination of sketch recognition techniques to interpret the handwritten strokes from sketches of the children, and then provides intelligent feedback based on what they draw. Our approach provides a virtual coach to assist teachers teaching the critical skills of drawing and handwriting. We do so by guiding children through a set of exercises of increasing complexity according to their progress, and at the same time keeping track of students' performance and engagement, giving them differentiated instruction and feedback. Our system would be like a virtual Teaching Assistant for Young Kids, hence we call it TAYouKi. We collected over five hundred hand-drawn shapes from grownups that had a clear understanding of what a particular geometric shape should look like. We used this data to test the recognition of our system. Following, we conducted a series of case studies with children in age group three to six to test the interactivity efficacy of the system. The studies served to gain important insights regarding the research challenges in different domains. Results suggest that our approach is appealable and engaging to children and can help in more effectively teach them how to draw and write.

Show BibTex

@mastersthesis{franciscovides2012TAYouKiASketchBasedTutoringSystemforYoungKidsMS,
type = {{MS Master's Thesis}},
author = {Vides, Francisco},
title = {TAYouKi: A Sketch-Based Tutoring System for Young Kids},
pages = {129},
year = {2012},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
PublicationImage
 
PublicationImage 2010 Paul Taele. 2010. "Freehand Sketch Recognition for Computer-Assisted Language Learning of Written East Asian Languages." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: December 2010. pp. 96. Advisor: Tracy Hammond. First Position: TAMU PhD Student. ISBN: ORCID id: 0000-0002-1271-0574. http://hdl.handle.net/1969.1/ETD-TAMU-2010-12-8977
Show Abstract:

One of the challenges students face in studying an East Asian (EA) language (e.g., Chinese, Japanese, and Korean) as a second language is mastering their selected language’s written component. This is especially true for students with native fluency of English and deficient written fluency of another EA language. In order to alleviate the steep learning curve inherent in the properties of EA languages’ complicated writing scripts, language instructors conventionally introduce various written techniques such as stroke order and direction to allow students to study writing scripts in a systematic fashion. Yet, despite the advantages gained from written technique instruction, the physical presence of the language instructor in conventional instruction is still highly desirable during the learning process; not only does it allow instructors to offer valuable real-time critique and feedback interaction on students’ writings, but it also allows instructors to correct students’ bad writing habits that would impede mastery of the written language if not caught early in the learning process. The current generation of computer-assisted language learning (CALL) applications specific to written EA languages have therefore strived to incorporate writing-capable modalities in order to allow students to emulate their studies outside the classroom setting. Several factors such as constrained writing styles, and weak feedback and assessment capabilities limit these existing applications and their employed techniques from closely mimicking the benefits that language instructors continue to offer. In this thesis, I describe my geometric-based sketch recognition approach to several writing scripts in the EA languages while addressing the issues that plague existing CALL applications and the handwriting recognition techniques that they utilize. The approach takes advantage of A Language to Describe, Display, and Editing in Sketch Recognition (LADDER) framework to provide users with valuable feedback and assessment that not only recognizes the visual correctness of students’ written EA Language writings, but also critiques the technical correctness of their stroke order and direction. Furthermore, my approach provides recognition independent of writing style that allows students to learn with natural writing through size- and amount-independence, thus bridging the gap between beginner applications that only recognize single-square input and expert tools that lack written technique critique.

Show BibTex

@mastersthesis{paultaele2010FreehandSketchRecognitionforComputerAssistedLanguageLearningofWrittenEastAsianLanguagesMS,
type = {{MS Master's Thesis}},
author = {Taele, Paul},
title = {Freehand Sketch Recognition for Computer-Assisted Language Learning of Written East Asian Languages},
pages = {96},
year = {2010},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond, ISBN: ORCID id: 0000-0002-1271-0574},
}
PublicationImage
 
PublicationImage 2010 Aaron Wolin. 2010. "Segmenting Hand-Drawn Strokes." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2010. pp. 160. Advisor: Tracy Hammond. First Position: Credera. http://hdl.handle.net/1969.1/ETD-TAMU-2010-05-7869
Show Abstract:

Pen-based interfaces utilize sketch recognition so users can create and interact with complex, graphical systems via drawn input. In order for people to freely draw within these systems, users' drawing styles should not be constrained. The low-level techniques involved with sketch recognition must then be perfected, because poor low-level accuracy can impair a user's interaction experience. Corner finding, also known as stroke segmentation, is one of the first steps to free-form sketch recognition. Corner finding breaks a drawn stroke into a set of primitive symbols such as lines, arcs, and circles, so that the original stoke data can be transformed into a more machine-friendly format. By working with sketched primitives, drawn objects can then be described in a visual language, noting what primitive shapes have been drawn and the shapes? geometric relationships to each other. We present three new corner finding techniques that improve segmentation accuracy. Our first technique, MergeCF, is a multi-primitive segmenter that splits drawn strokes into primitive lines and arcs. MergeCF eliminates extraneous primitives by merging them with their neighboring segments. Our second technique, ShortStraw, works with polyline-only data. Polyline segments are important since many domains use simple polyline symbols formed with squares, triangles, and arrows. Our ShortStraw algorithm is simple to implement, yet more powerful than previous polyline work in the corner finding literature. Lastly, we demonstrate how a combination technique can be used to pull the best corner finding results from multiple segmentation algorithms. This combination segmenter utilizes the best corners found from other segmentation techniques, eliminating many false negatives (missed primitive segmentations) from the final, low-level results. We will present the implementation and results from our new segmentation techniques, showing how they perform better than related work in the corner finding field. We will also discuss limitations of each technique, how we have sought to overcome those limitations, and where we believe the sketch recognition subfield of corner finding is headed.

Show BibTex

@mastersthesis{aaronwolin2010SegmentingHandDrawnStrokesMS,
type = {{MS Master's Thesis}},
author = {Wolin, Aaron},
title = {Segmenting Hand-Drawn Strokes},
pages = {160},
year = {2010},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
PublicationImage
 
PublicationImage 2009 Daniel Dixon. 2009. "A Methodology for Using Assistive Sketch Recognition For Improving a Person’s Ability to Draw." MS Master's Thesis. Texas A&M University (TAMU). College Station, TX, USA: December 2009. pp. 114. Advisor: Tracy Hammond. First Position: ReelFX. http://hdl.handle.net/1969.1/ETD-TAMU-2009-12-7289
Show Abstract:

When asked to draw, most people are hesitant because they believe themselves unable to draw well. A human instructor can teach students how to draw by encouraging them to practice established drawing techniques and by providing personal and directed feedback to foster their artistic intuition and perception. This thesis describes the first methodology for a computer application to mimic a human instructor by providing direction and feedback to assist a student in drawing a human face from a photograph. Nine design principles were discovered and developed for providing such instruction, presenting reference media, giving corrective feedback, and receiving actions from the student. Face recognition is used to model the human face in a photograph so that sketch recognition can map a drawing to the model and evaluate it. New sketch recognition techniques and algorithms were created in order to perform sketch understanding on such subjective content. After two iterations of development and user studies for this methodology, the result is a computer application that can guide a person toward producing his/her own sketch of a human model in a reference photograph with step-bystep instruction and computer generated feedback.

Show BibTex

@mastersthesis{danieldixon2009AMethodologyforUsingAssistiveSketchRecognitionForImprovingaPersonsAbilitytoDrawMS,
type = {{MS Master's Thesis}},
author = {Dixon, Daniel},
title = {A Methodology for Using Assistive Sketch Recognition For Improving a Person’s Ability to Draw},
pages = {114},
year = {2009},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
PublicationImage
 
PublicationImage 2001 Tracy Hammond. 2001. "Ethnomathematics: Concept Definition and Research Perspectives." MA Master's Thesis. Columbia University. New York, NY, USA: February 2001. pp. 57. Advisor: Ellen Marakowitz. First Position: MIT PhD Student.
Show Abstract:

Although the term ethnomathematics has been in use in the anthropological literature for quite sometime now, a standard definition of the construct has yet to emerge. More than one definition exists, causing confusion and inhibiting systematic research on the subject. Most definitions loosely refer to it as the study of mathematical ideas of non-literate peoples (e.g., Ascher and Ascher, 1997), thereby ignoring or underplaying its profound relationship to culture. More importantly, current definitions are restrictive and too narrow to adequately explain phenomena that rightfully fall within its realm. Providing a conceptually grounded definition is a necessary first step to galvanize the thinking and investigative activity on the subject. My aim in this thesis is to offer such a definition and to descriptively examine its relevance for theory building and research on ethnomathematics. I start with a brief review of the current definitions of ethnomathematics, highlighting their parochial nature. I then propose an over-arching definition that derives its grounding from interaction and reciprocity-based models. My definition suggests ethnomathematics as the study of the evolution of mathematics that has shaped, and in turn shaped by, the values of groups of people. I then use this definition to historically examine how mathematics, despite its universality and constancy themes, suffers from culture-based disparities and has been influenced in its development by various social groups over time. Specifically, I examine the role of culture in the learning and use of math, gender capabilities in math, and how even racism has played a significant part in the evolution of math.

Show BibTex

@mastersthesis{tracyhammond2001EthnomathematicsConceptDefinitionandResearchPerspectivesMA,
type = {{MA Master's Thesis}},
author = {Hammond, Tracy},
title = {Ethnomathematics: Concept Definition and Research Perspectives},
pages = {57},
year = {2001},
month = {February},
address = {New York, NY, USA},
school = {Columbia University},
note = {Advisor: Ellen Marakowitz},
}
PublicationImage
 



Undergraduate Honor's Theses


PublicationImage 2021 Castro Yuri. 2021. "Mathematical Sketching---Improving the Learning of Graphing Mathematical Equations with Intelligent Tutoring." Undergraduate Honor's Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2021. Advisor: Tracy Hammond.
Show Abstract:


Show BibTex

@mastersthesis{yuricastro2021MathematicalSketchingImprovingtheLearningofGraphingMathematicalEquationswithIntelligentTutoringBS,
type = {{Undergraduate Honors Thesis}},
author = {Yuri, Castro},
title = {Mathematical Sketching---Improving the Learning of Graphing Mathematical Equations with Intelligent Tutoring},
year = {2021},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
PublicationImage
 
PublicationImage 2021 Lina Zhang. 2021. "Vibrotactile Feedback for Understanding Time in Calendar Notifications." Undergraduate Honor's Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2021. Advisor: Tracy Hammond. in progress.
Show Abstract:


Show BibTex

@mastersthesis{linazhang2021VibrotactileFeedbackforUnderstandingTimeinCalendarNotificationsBS,
type = {{Undergraduate Honors Thesis}},
author = {Zhang, Lina},
title = {Vibrotactile Feedback for Understanding Time in Calendar Notifications},
year = {2021},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammondin progress},
}
PublicationImage
 
PublicationImage 2020 Chinaemere Ike. 2020. "Voice Recognition Systems and African Diaspora Accents." BA. Texas A&M University (TAMU). College Station, TX, USA: May 2020. Advisor: Tracy Hammond.
Show Abstract:


Show BibTex

@mastersthesis{chinaemereike2020VoiceRecognitionSystemsandAfricanDiasporaAccentsBA,
type = {BA},
author = {Ike, Chinaemere},
title = {Voice Recognition Systems and African Diaspora Accents},
year = {2020},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
PublicationImage
 
PublicationImage 2019 Benton Phillipy Guess. 2019. "Sketch Recognition Applications to the Rey-Osterrieth Complex Figure Test." Undergraduate Honor's Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2019. Advisor: Tracy Hammond.
Show Abstract:


Show BibTex

@mastersthesis{bentonguess2019SketchRecognitionApplicationstotheReyOsterriethComplexFigureTestBS,
type = {{Undergraduate Honors Thesis}},
author = {Guess, Benton Phillipy},
title = {Sketch Recognition Applications to the Rey-Osterrieth Complex Figure Test},
year = {2019},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
PublicationImage
 
PublicationImage 2019 Chinmay Milind Phulse. 2019. "Using Eye Tracking Data for User Identification." Undergraduate Honor's Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2019. Advisor: Tracy Hammond.
Show Abstract:


Show BibTex

@mastersthesis{chinmayphulse2019UsingEyeTrackingDataforUserIdentificationBS,
type = {{Undergraduate Honors Thesis}},
author = {Phulse, Chinmay Milind},
title = {Using Eye Tracking Data for User Identification},
year = {2019},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
PublicationImage
 
PublicationImage 2019 Piyush Tandon. 2019. "Identification of Swimming Strokes Using Smart Devices." Undergraduate Honor's Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2019. Advisor: Tracy Hammond.
Show Abstract:


Show BibTex

@mastersthesis{piyushtandon2019IdentificationofSwimmingStrokesUsingSmartDevicesBS,
type = {{Undergraduate Honors Thesis}},
author = {Tandon, Piyush},
title = {Identification of Swimming Strokes Using Smart Devices},
year = {2019},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
PublicationImage
 
PublicationImage 2018 Leslie Escalante-Trevino. 2018. "HapticDive: An Intuitive Warning System for Underwater Users." Undergraduate Honor's Thesis. Texas A&M University (TAMU). College Station, TX, USA: December 2018. pp. 55. Advisor: Tracy Hammond. Coauthored with Sneha Santani.
Show Abstract:

All divers—regardless of skill or activity—are constantly at risk of decompression sickness; mild symptoms can often go ignored, and can also be deadly if left untreated.Currently, divers receive training and carry a dive computer or a combination of a depth gauge and a depth watch for checking to avoid such situations. However, this equipment does not warn a user if they are in danger of decompression sickness, since users have to keep track of their ascension rates and since shallow-water divers often carry minimal equipment. This work proposes an application called HapticDive to keep track of a user’s depth in relation to the time passed underwater. The application paces their ascent to the surface by providing “stop” signals to users as an audio-visual combination, so that users avoid experiencing “the bends” (i.e., decompression sickness symptoms). Haptic-Dive aims to provide the foundation for a cost-effective application that warns divers—especially surface supported divers, free divers, and general shallow-water divers—when they are at risk of decompression sickness, so they may avoid symptoms.

Show BibTex

@mastersthesis{leslieescalante2018HapticDiveAnIntuitiveWarningSystemforUnderwaterUsersBS,
type = {{Undergraduate Honors Thesis}},
author = {Escalante-Trevino, Leslie},
title = {HapticDive: An Intuitive Warning System for Underwater Users},
pages = {55},
year = {2018},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy HammondCoauthored with Sneha Santani},
}
PublicationImage
 
PublicationImage 2018 Sneha Santani. 2018. "HapticDive: An Intuitive Warning System for Underwater Users." Undergraduate Honor's Thesis. Texas A&M University (TAMU). College Station, TX, USA: December 2018. pp. 55. Advisor: Tracy Hammond. Coauthored with Leslie Escalante-Trevino.
Show Abstract:

All divers—regardless of skill or activity—are constantly at risk of decompression sickness; mild symptoms often go ignored, yet the condition can be deadly if left untreated. Currently, divers receive training and carry a dive computer, or a combination of a depth gauge and a dive watch, to avoid such situations. However, this equipment does not warn users when they are in danger of decompression sickness, since users must track their own ascent rates and since shallow-water divers often carry minimal equipment. This work proposes an application called HapticDive that tracks a user’s depth in relation to the time passed underwater. The application paces their ascent to the surface by providing “stop” signals to users as an audio-visual combination, so that users avoid experiencing “the bends” (i.e., decompression sickness symptoms). HapticDive aims to provide the foundation for a cost-effective application that warns divers—especially surface-supported divers, free divers, and general shallow-water divers—when they are at risk of decompression sickness, so they may avoid symptoms.

Show BibTex

@mastersthesis{snehasantani2018HapticDiveAnIntuitiveWarningSystemforUnderwaterUsersBS,
type = {{Undergraduate Honors Thesis}},
author = {Santani, Sneha},
title = {HapticDive: An Intuitive Warning System for Underwater Users},
pages = {55},
year = {2018},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. Coauthored with Leslie Escalante-Trevino},
}
 
2018 Jiayao Li. 2018. "Compare Accuracy and Time Complexity of Machine Learning Algorithms for Eye Gesture Recognition." Undergraduate Honors Thesis. Texas A&M University (TAMU). College Station, TX, USA: August 2018. Advisor: Tracy Hammond.
Show Abstract:

Eye motion data can be utilized to perform behavior analysis and improve common applications such as accessible HCI, interactive interfaces, marketing, and remote control. This research project compares the accuracy and time complexity of three commonly used machine learning algorithms for eye gesture recognition. The importance of this project is to examine ways to improve efficiency in recognizing eye gestures. It was found that the template matching algorithm has the best accuracy, followed by the Pearson correlation algorithm, and lastly the decision tree algorithm. For time performance, the decision tree algorithm performs best, closely followed by the Pearson correlation algorithm, and lastly the template matching algorithm. The template matching algorithm is recommended for accuracy-sensitive situations; the decision tree and Pearson correlation algorithms are recommended for time-sensitive situations. The algorithms perform better when the directions and other relative properties of the input gestures are substantially different. One should consider the properties of the input gesture and the nature of the application when deciding which algorithm to use.
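
As a concrete reference point for the template-matching approach compared above, here is a minimal matcher in Python: it scores a gaze trace against stored gesture templates by mean point-to-point distance. The point counts, distance metric, and toy templates are assumptions for illustration, not the thesis's implementation.

import math

def template_distance(trace, template):
    """Mean Euclidean distance between two gaze traces.
    Assumes both are already resampled to the same number of points."""
    return sum(math.dist(p, q) for p, q in zip(trace, template)) / len(template)

def classify(trace, templates):
    """templates: dict mapping gesture label -> reference trace."""
    return min(templates, key=lambda label: template_distance(trace, templates[label]))

# Toy example: distinguish a leftward from a rightward eye sweep.
templates = {
    "left":  [(float(x), 0.0) for x in range(10, 0, -1)],
    "right": [(float(x), 0.0) for x in range(1, 11)],
}
print(classify([(float(x), 0.5) for x in range(1, 11)], templates))  # "right"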

Show BibTex

@mastersthesis{jiayaoli2018CompareAccuracyandTimeComplexityofMachineLearningAlgorithmsforEyeGestureRecognitionBS,
type = {{Undergraduate Honors Thesis}},
author = {Li, Jiayao},
title = {Compare Accuracy and Time Complexity of Machine Learning Algorithms for Eye Gesture Recognition},
year = {2018},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
 
2018 Eleanor Miller. 2018. "Recognizing Elementary Elements in Chemical Diagram Sketches." Undergraduate Honors Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2018. pp. 55. Advisor: Tracy Hammond.
Show Abstract:

Organic Chemistry is a challenging subject that requires dedicated practice to learn its meticulous rules; otherwise a student risks failure. Current software for teaching chemical structures relies on drag-and-drop components and fails to give students a true understanding of Organic Chemistry concepts. My solution is to integrate a sketch recognition interface that can learn to recognize the components of various user-sketched chemical structures with a back-propagation neural network that can be trained to interpret those components and determine correctness. The accuracy of the program will be rigorously tested to determine its correctness in interpreting chemical structures.
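
Since the abstract proposes training a back-propagation network on recognized sketch components, a minimal sketch of that classification stage follows. It assumes each component has already been rasterized to a small grid; the feature shapes, labels, and the scikit-learn stand-in for a hand-rolled back-propagation network are illustrative only.

# Classification stage only; features, labels, and network shape are
# placeholders, not the thesis's configuration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 16 * 16))    # stand-in for 16x16 rasterized components
y = rng.integers(0, 3, size=200)  # stand-in labels, e.g. atom / bond / ring

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X, y)  # gradient descent with backpropagation happens inside fit()
print(clf.predict(X[:5]))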

Show BibTex

@mastersthesis{eleanormiller2018RecognizingElementaryElementsinChemicalDiagramSketchesBS,
type = {{Undergraduate Honors Thesis}},
author = {Miller, Eleanor},
title = {Recognizing Elementary Elements in Chemical Diagram Sketches},
pages = {55},
year = {2018},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
 
2017 Jake Leland. 2017. "Recognizing Seatbelt-Fastening Activity Using Wearable Sensor Technology." Undergraduate Honors Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2017. pp. 43. Advisor: Tracy Hammond. Coauthored with Ellen Stanfill.
Show Abstract:

Many fatal car accidents involve victims who were not wearing a seatbelt, even though systems for detecting such behavior and intervening to correct it already exist. Activity recognition using wearable sensors has been previously applied to many health-related fields with high accuracy. In this paper, activity recognition is used to generate an algorithm for real-time recognition of putting on a seatbelt, using a smartwatch. Initial data was collected from twelve participants to determine the validity of the approach. Novel features were extracted from the data and used to classify the action, with a final accuracy of 1.000 and an F-measure of 1.000 using the MultilayerPerceptron classifier using laboratory collected data. Then, an iterative real-time recognition user study was conducted to investigate classification accuracy in a naturalistic setting. The F-measure of naturalistic classification was 0.825 with MultilayerPerceptron. This work forms the basis for further studies which will aim to provide user feedback to increase seatbelt use.
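
The pipeline the abstract outlines (window the smartwatch signal, extract features, classify) can be sketched briefly. The window length, the mean/standard-deviation feature set, and the use of scikit-learn's MLPClassifier as a stand-in for the MultilayerPerceptron classifier named above are all assumptions for illustration.

# Windowed feature extraction plus a neural classifier, in the spirit of the
# study; window size, features, and classifier stand-in are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(accel, window=50):
    """accel: (n, 3) array of x/y/z accelerometer samples.
    Returns mean and standard deviation per window, shape (n_windows, 6)."""
    feats = []
    for start in range(0, len(accel) - window + 1, window):
        w = accel[start:start + window]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)

rng = np.random.default_rng(1)
X = window_features(rng.standard_normal((1000, 3)))  # placeholder signal
y = rng.integers(0, 2, size=len(X))  # 1 = fastening seatbelt, 0 = other
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))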

Show BibTex

@mastersthesis{jakeleland2017RecognizingSeatbeltFasteningActivityUsingWearableSensorTechnologyBS,
type = {{Undergraduate Honors Thesis}},
author = {Leland, Jake},
title = {Recognizing Seatbelt-Fastening Activity Using Wearable Sensor Technology},
pages = {43},
year = {2017},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. Coauthored with Ellen Stanfill},
}
 
2017 Ellen Stanfill. 2017. "Recognizing Seatbelt-Fastening Activity Using Wearable Sensor Technology." Undergraduate Honors Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2017. pp. 43. Advisor: Tracy Hammond. Coauthored with Jake Leland.
Show Abstract:

Many fatal car accidents involve victims who were not wearing a seatbelt, even though systems for detecting such behavior and intervening to correct it already exist. Activity recognition using wearable sensors has been previously applied to many health-related fields with high accuracy. In this paper, activity recognition is used to generate an algorithm for real-time recognition of putting on a seatbelt, using a smartwatch. Initial data was collected from twelve participants to determine the validity of the approach. Novel features were extracted from the data and used to classify the action, with a final accuracy of 1.000 and an F-measure of 1.000 using the MultilayerPerceptron classifier using laboratory collected data. Then, an iterative real-time recognition user study was conducted to investigate classification accuracy in a naturalistic setting. The F-measure of naturalistic classification was 0.825 with MultilayerPerceptron. This work forms the basis for further studies which will aim to provide user feedback to increase seatbelt use.

Show BibTex

@mastersthesis{ellenstanfill2017RecognizingSeatbeltFasteningActivityUsingWearableSensorTechnologyBS,
type = {{Undergraduate Honors Thesis}},
author = {Stanfill, Ellen},
title = {Recognizing Seatbelt-Fastening Activity Using Wearable Sensor Technology},
pages = {43},
year = {2017},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. Coauthored with Jake Leland},
}
 
2016 David Brhlik. 2016. "Enhancing Blind Navigation with the Use of Wearable Sensor Technology." Undergraduate Honors Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2016. Advisor: Tracy Hammond & Theresa Maldonado. http://hdl.handle.net/1969.1/167913. Coauthored with Temiloluwa Otuyelu and Chad Young.
Show Abstract:

The goal of this research is to design and develop a wearable technology that will enable blind and visually impaired users to accomplish regular navigation without the assistance of another person, a guide animal, or a cane. This new technology will have the distinct advantage of being more discreet and user-friendly than an animal or cane, allowing users to feel more comfortable as they use the device. Extensive research will be performed to determine the best user interface; this includes the location of the sensors on the body and how the device will communicate with the user. Potential devices could be designed to be worn on the shoes, belt, hat, glasses, or any number of other locations. The device may communicate with the wearer using vibrations, pressure, or sound. The best combination of wearability and communication will be built for user testing. This research will enable the visually impaired population to navigate more quickly, easily, and discreetly, and help them learn their surroundings.

Show BibTex

@mastersthesis{davidbrhlik2016EnhancingBlindNavigationwiththeUseofWearableSensorTechnologyBS,
type = {{Undergraduate Honors Thesis}},
author = {Brhlik, David},
title = {Enhancing Blind Navigation with the Use of Wearable Sensor Technology},
year = {2016},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond \& Theresa Maldonado. Coauthored with Temiloluwa Otuyelu and Chad Young},
}
 
2016 Temiloluwa Otuyelu. 2016. "Enhancing Blind Navigation with the Use of Wearable Sensor Technology." Undergraduate Honors Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2016. Advisor: Tracy Hammond & Theresa Maldonado. http://hdl.handle.net/1969.1/167913. Coauthored with David Brhlik and Chad Young.
Show Abstract:

The goal of this research is to design and develop a wearable technology that will enable blind and visually impaired users to accomplish regular navigation without the assistance of another person, a guide animal, or a cane. This new technology will have the distinct advantage of being more discreet and user-friendly than an animal or cane, allowing users to feel more comfortable as they use the device. Extensive research will be performed to determine the best user interface; this includes the location of the sensors on the body and how the device will communicate with the user. Potential devices could be designed to be worn on the shoes, belt, hat, glasses, or any number of other locations. The device may communicate with the wearer using vibrations, pressure, or sound. The best combination of wearability and communication will be built for user testing. This research will enable the visually impaired population to navigate more quickly, easily, and discreetly, and help them learn their surroundings.

Show BibTex

@mastersthesis{temiotuyelu2016EnhancingBlindNavigationwiththeUseofWearableSensorTechnologyBS,
type = {{Undergraduate Honors Thesis}},
author = {Otuyelu, Temiloluwa},
title = {Enhancing Blind Navigation with the Use of Wearable Sensor Technology},
year = {2016},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond \& Theresa Maldonado. Coauthored with David Brhlik and Chad Young},
}
 
2016 Chad Young. 2016. "Enhancing Blind Navigation with the Use of Wearable Sensor Technology." Undergraduate Honors Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2016. Advisor: Tracy Hammond & Theresa Maldonado. http://hdl.handle.net/1969.1/167913. Coauthored with Temiloluwa Otuyelu and David Brhlik.
Show Abstract:

The goal of this research is to design and develop a wearable technology that will enable blind and visually impaired users to accomplish regular navigation without the assistance of another person, a guide animal, or a cane. This new technology will have the distinct advantage of being more discreet and user-friendly than an animal or cane, allowing users to feel more comfortable as they use the device. Extensive research will be performed to determine the best user interface; this includes the location of the sensors on the body and how the device will communicate with the user. Potential devices could be designed to be worn on the shoes, belt, hat, glasses, or any number of other locations. The device may communicate with the wearer using vibrations, pressure, or sound. The best combination of wearability and communication will be built for user testing. This research will enable the visually impaired population to navigate more quickly, easily, and discreetly, and help them learn their surroundings.

Show BibTex

@mastersthesis{chadyoung2016EnhancingBlindNavigationwiththeUseofWearableSensorTechnologyBS,
type = {{Undergraduate Honors Thesis}},
author = {Young, Chad},
title = {Enhancing Blind Navigation with the Use of Wearable Sensor Technology},
year = {2016},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond \& Theresa Maldonado. Coauthored with Temiloluwa Otuyelu and David Brhlik},
}
 
2012 Sarin Regmi. 2012. "Haptigo Tactile Navigation System." Undergraduate Honors Thesis. Texas A&M University (TAMU). College Station, TX, USA: May 2012. Advisor: Tracy Hammond. First Position: Motorola. http://hdl.handle.net/1969.1/154397
Show Abstract:

Tactile navigation systems employ one’s sense of touch, via haptic feedback, to communicate directions. This type of navigation presents a potentially faster and more accurate mode of navigation than preexisting visual or auditory forms. We developed a navigation system, HaptiGo, which uses a tactile harness controlled by an Android application to communicate directions. The use of a smartphone to provide GPS and compass information allows for a more compact and user-friendly system than previous tactile navigation systems. HaptiGo has been tested for functionality and user approval of tactile navigation. It was further tested to determine whether tactile navigation provides faster navigation times, increased path accuracy, and improved environmental awareness compared to traditional map-based navigation methods. We discuss the novel usage of smartphones for tactile navigation, the effectiveness of the HaptiGo navigation system, its accuracy compared to the use of static map-based navigation, and the potential benefits of tactile navigation.
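
The core of the navigation loop described above is turning a GPS fix and compass heading into a left/right/forward vibration cue. A minimal sketch follows; the forward-azimuth formula is the standard one, but the 15-degree dead zone and the tactor names are assumptions, not HaptiGo's actual parameters.

import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the user to the waypoint, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def tactor_cue(heading_deg, target_bearing_deg, tolerance=15.0):
    """Choose which vibration motor to fire; tolerance is an assumed dead zone."""
    delta = (target_bearing_deg - heading_deg + 180) % 360 - 180
    if abs(delta) <= tolerance:
        return "forward"
    return "right" if delta > 0 else "left"

# Facing due north (0 degrees) with the waypoint to the east: buzz the right tactor.
print(tactor_cue(0.0, bearing_deg(30.615, -96.341, 30.615, -96.331)))  # "right"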

Show BibTex

@mastersthesis{sarinregmi2012HaptigoTactileNavigationSystemBS,
type = {{Undergraduate Honors Thesis}},
author = {Regmi, Sarin},
title = {Haptigo Tactile Navigation System},
year = {2012},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}
 
2011 Stephanie Valentine. 2011. "A Shape Comparison Technique for Use in Sketch-based Tutoring Systems." Undergraduate Honors Thesis. St. Mary's University of Minnesota. Winona, MN, USA: May 2011. Advisor: Ann Smith & Tracy Hammond. First Position: TAMU PhD Student.
Show Abstract:

TBD

Show BibTex

@mastersthesis{stephanievalentine2011AShapeComparisonTechniqueforUseinSketchbasedTutoringSystemsBS,
type = {{Undergraduate Honors Thesis}},
author = {Valentine, Stephanie},
title = {A Shape Comparison Technique for Use in Sketch-based Tutoring Systems},
year = {2011},
month = {May},
address = {Winona, MN, USA},
school = {St. Mary's University of Minnesota},
note = {Advisor: Ann Smith \& Tracy Hammond},
}
 


Show All BibTex


@mastersthesis{paultaele2019ASketchRecognitionBasedIntelligentTutoringSystemforRicherInstructorLikeFeedbackonChineseCharactersPhD,
type = {{PhD Doctoral Dissertation}},
author = {Taele, Paul},
title = {A Sketch Recognition-Based Intelligent Tutoring System for Richer Instructor-Like Feedback on Chinese Characters},
year = {2019},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{blakewilliford2019ExploringmethodsforholisticallyimprovingdrawingabilitywithartificialintelligencePhD,
type = {{PhD Doctoral Dissertation}},
author = {Williford, Blake},
title = {Exploring methods for holistically improving drawing ability with artificial intelligence},
year = {2019},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{vijayrajanna2018AddressingSituationalandPhysicalImpairmentsandDisabilitieswithaGazeassistedMultimodalAccessibleInteractionParadigmPhD,
type = {{PhD Doctoral Dissertation}},
author = {Rajanna, Vijay},
title = {Addressing Situational and Physical Impairments and Disabilities with a Gaze-assisted, Multi-modal, Accessible Interaction Paradigm},
year = {2018},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0001-7550-0411},
}


@mastersthesis{stephanievalentine2016DesignDeploymentIdentityConformityAnAnalysisofChildrensOnlineSocialNetworksPhD,
type = {{PhD Doctoral Dissertation}},
author = {Valentine, Stephanie},
title = {Design, Deployment, Identity, \& Conformity: An Analysis of Children's Online Social Networks},
year = {2016},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0003-1956-8125},
}


@mastersthesis{folamialamudun2016AnalysisofVisuocognitiveBehaviorinScreeningMammographyPhD,
type = {{PhD Doctoral Dissertation}},
author = {Alamudun, Folami},
title = {Analysis of Visuo-cognitive Behavior in Screening Mammography},
year = {2016},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0002-0803-4542},
}


@mastersthesis{honghoekim2016AFineMotorSkillClassifyingFrameworktoSupportChildrensSelfregulationSkillsandSchoolReadinessPhD,
type = {{PhD Doctoral Dissertation}},
author = {Kim, Hong-Hoe (Ayden)},
title = {A Fine Motor Skill Classifying Framework to Support Children's Self-regulation Skills and School Readiness},
year = {2016},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0002-1175-8680},
}


@mastersthesis{manojprasad2014DesigningTactileInterfacesforAbstractInterpersonalCommunicationPedestrianNavigationandMotorcyclistsNavigationPhD,
type = {{PhD Doctoral Dissertation}},
author = {Prasad, Manoj},
title = {Designing Tactile Interfaces for Abstract Interpersonal Communication, Pedestrian Navigation and Motorcyclists Navigation},
pages = {183},
year = {2014},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0002-3554-2614},
}


@mastersthesis{daniellecummings2013MultimodalInteractionforEnhancingTeamCoordinationontheBattlefieldPhD,
type = {{PhD Doctoral Dissertation}},
author = {Cummings, Danielle},
title = {Multimodal Interaction for Enhancing Team Coordination on the Battlefield},
pages = {201},
year = {2013},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{sashikanthdamaraju2013AnExplorationofMultitouchInteractionTechniquesPhD,
type = {{PhD Doctoral Dissertation}},
author = {Damaraju, Sashikanth},
title = {An Exploration of Multi-touch Interaction Techniques},
pages = {145},
year = {2013},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{brandonpaulson2010RethinkingpeninputinteractionEnablingfreehandsketchingthroughimprovedprimitiverecognitionPhD,
type = {{PhD Doctoral Dissertation}},
author = {Paulson, Brandon},
title = {Rethinking pen input interaction: Enabling freehand sketching through improved primitive recognition},
pages = {217},
year = {2010},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{tracyhammond2007LADDERAPerceptuallyBasedLanguagetoSimplifySketchRecognitionUserInterfaceDevelopmentPhD,
type = {{PhD Doctoral Dissertation}},
author = {Hammond, Tracy},
title = {LADDER: A Perceptually-Based Language to Simplify Sketch Recognition User Interface Development},
pages = {495},
year = {2007},
month = {February},
address = {Cambridge, MA, USA},
school = {Massachusetts Institute of Technology ({MIT})},
note = {Advisor: Randall Davis},
}


@mastersthesis{siddharthsubramaniyam2019SketchRecognitionBasedClassificationforEyeMovementBiometricsinVirtualRealityMS,
type = {{MS Master's Thesis}},
author = {Subramaniyam, Siddharth},
title = {Sketch Recognition Based Classification for Eye Movement Biometrics in Virtual Reality},
year = {2019},
month = {June},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{meghayadav2019MitigatingpublicspeakinganxietyusingvirtualrealityandpopulationspecificmodelsMS,
type = {{MS Master's Thesis}},
author = {Yadav, Megha},
title = {Mitigating public speaking anxiety using virtual reality and population-specific models},
year = {2019},
month = {June},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Theodora Chaspari \& Tracy Hammond},
}


@mastersthesis{jakeleland2019RecognizingSeatbeltFasteningBehaviorwithWearableTechnologyandMachineLearningMS,
type = {{MS Master's Thesis}},
author = {Leland, Jake},
title = {Recognizing Seatbelt-Fastening Behavior with Wearable Technology and Machine Learning},
pages = {136},
year = {2019},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{sharmisthamaity2019CombiningPaperPencilTechniqueswithImmediateFeedbackforLearningChemicalDrawingsMS,
type = {{MS Master's Thesis}},
author = {Maity, Sharmistha},
title = {Combining Paper-Pencil Techniques with Immediate Feedback for Learning Chemical Drawings},
year = {2019},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{larrypowell2019TheEvaluationofRecognizingAquaticActivitiesThroughWearableSensorsandMachineLearningMS,
type = {{MS Master's Thesis}},
author = {Powell, Larry},
title = {The Evaluation of Recognizing Aquatic Activities Through Wearable Sensors and Machine Learning},
pages = {112},
year = {2019},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{sabyasachichakraborty2018ANovelMethodologyforCreatingAutoGeneratedSpringBasedTrussProblemsThroughMechanixMS,
type = {{MS Master's Thesis}},
author = {Chakraborty, Sabyasachi},
title = {A Novel Methodology for Creating Auto Generated Spring-Based Truss Problems Through Mechanix},
year = {2018},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Maurice Rojas \& Tracy Hammond. ORCID: 0000-0003-1640-845X},
}


@mastersthesis{adilhamidmalla2018AGazeBasedAuthenticationSystemFromAuthenticationtoIntrusionDetectionMA,
type = {{MA Master's Thesis}},
author = {Malla, Adil},
title = {A Gaze-Based Authentication System: From Authentication to Intrusion Detection},
year = {2018},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0002-9598-1635},
}


@mastersthesis{tianshuchu2017ASketchbasedEducationalSystemforLearningChineseHandwritingMS,
type = {{MS Master's Thesis}},
author = {Chu, Tianshu},
title = {A Sketch-based Educational System for Learning Chinese Handwriting},
year = {2017},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0002-9497-058X},
}


@mastersthesis{junginkoh2017DevelopingaHandGestureRecognitionSystemforMappingSymbolicHandGesturestoAnalogousEmojiinComputerMediatedCommunicationMS,
type = {{MS Master's Thesis}},
author = {Koh, Jung In},
title = {Developing a Hand Gesture Recognition System for Mapping Symbolic Hand Gestures to Analogous Emoji in Computer-Mediated Communication},
year = {2017},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0002-3909-0192},
}


@mastersthesis{sethpolsley2017IdentifyingoutcomesofcarefrommedicalrecordstoimprovedoctorpatientcommunicationMS,
type = {{MS Master's Thesis}},
author = {Polsley, Seth},
title = {Identifying outcomes of care from medical records to improve doctor-patient communication},
pages = {81},
year = {2017},
month = {June},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0002-8805-8375},
}


@mastersthesis{aqibbhat2017SketchographyAutomaticgradingofmapsketchesforgeographyeducationMS,
type = {{MS Master's Thesis}},
author = {Bhat, Aqib},
title = {Sketchography - Automatic grading of map sketches for geography education},
year = {2017},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0003-2718-7736},
}


@mastersthesis{joshcherian2017RecognitionofEverydayActivitiesthroughWearableSensorsandMachineLearningMS,
type = {{MS Master's Thesis}},
author = {Cherian, Josh},
title = {Recognition of Everyday Activities through Wearable Sensors and Machine Learning},
year = {2017},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond \& Theresa Maldonado. ORCID: 0000-0002-7749-2109},
}


@mastersthesis{jorgecamara2017Flow2CodeFromHanddrawnFlowcharttoCodeExecutionMS,
type = {{MS Master's Thesis}},
author = {Camara, Jorge Ivan},
title = {Flow2Code - From Hand-drawn Flowchart to Code Execution},
year = {2017},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0003-0922-5508},
}


@mastersthesis{nahumvillanueva2017ARCachingUsingAugmentedRealityonMobileDevicestoImproveGeocacherExperienceMS,
type = {{MS Master's Thesis}},
author = {Villanueva, Nahum},
title = {ARCaching: Using Augmented Reality on Mobile Devices to Improve Geocacher Experience},
year = {2017},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0002-0451-4805},
}


@mastersthesis{siddharthakarthik2016LabelingbyExampleMS,
type = {{MS Master's Thesis}},
author = {Karthik, Siddhartha},
title = {Labeling by Example},
year = {2016},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0003-4445-8008},
}


@mastersthesis{swarnakeshavabhotla2016PerSketchTivityRecognitionandProgressiveLearningAnalysisMS,
type = {{MS Master's Thesis}},
author = {Keshavabhotla, Swarna},
title = {PerSketchTivity: Recognition and Progressive Learning Analysis},
year = {2016},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0001-5892-858X},
}


@mastersthesis{shaliniashokkumar2016EvaluationofConceptualSketchesonStylusbasedDevicesMS,
type = {{MS Master's Thesis}},
author = {Ashok Kumar, Shalini},
title = {Evaluation of Conceptual Sketches on Stylus-based Devices},
year = {2016},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0002-6044-790X},
}


@mastersthesis{purnendukaul2016GazeAssistedClassificationofOnScreenTasksbyDifficultyLevelandUserActivitiesReadingWritingTypingImageGazingMS,
type = {{MS Master's Thesis}},
author = {Kaul, Purnendu},
title = {Gaze Assisted Classification of On-Screen Tasks (by Difficulty Level) and User Activities (Reading, Writing/Typing, Image-Gazing)},
year = {2016},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0002-7657-9616},
}


@mastersthesis{jaideepray2016FindingSimilarSketchesMS,
type = {{MS Master's Thesis}},
author = {Ray, Jaideep},
title = {Finding Similar Sketches},
year = {2016},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0003-2266-576X},
}


@mastersthesis{shiqiangguo2015ResuMatcherAPersonalizedResumeJobMatchingSystemMS,
type = {{MS Master's Thesis}},
author = {Guo, Shiqiang (Frank)},
title = {ResuMatcher: A Personalized Resume-Job Matching System},
year = {2015},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0002-6846-3234},
}


@mastersthesis{zhengliangyin2014ChineseCalligraphistASketchBasedLearningToolforLearningWrittenChineseMS,
type = {{MS Master's Thesis}},
author = {Yin, Zhengliang},
title = {Chinese Calligraphist: A Sketch Based Learning Tool for Learning Written Chinese},
year = {2014},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0003-2996-3639},
}


@mastersthesis{honghoekim2012AnalysisofChildrensSketchestoImproveRecognitionAccuracyinSketchBasedApplicationsMS,
type = {{MS Master's Thesis}},
author = {Kim, Hong-Hoe (Ayden)},
title = {Analysis of Children's Sketches to Improve Recognition Accuracy in Sketch-Based Applications},
pages = {105},
year = {2012},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0002-1175-8680},
}


@mastersthesis{drewlogsdon2012ArmHandFingerVideoGameInteractionMS,
type = {{MS Master's Thesis}},
author = {Logsdon, Drew},
title = {Arm-Hand-Finger Video Game Interaction},
pages = {108},
year = {2012},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{georgelucchese2012SketchRecognitiononMobileDevicesMS,
type = {{MS Master's Thesis}},
author = {Lucchese, George},
title = {Sketch Recognition on Mobile Devices},
pages = {54},
year = {2012},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{wenzheli2012AcousticBasedSketchRecognitionMS,
type = {{MS Master's Thesis}},
author = {Li, Wenzhe},
title = {Acoustic Based Sketch Recognition},
pages = {91},
year = {2012},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{franciscovides2012TAYouKiASketchBasedTutoringSystemforYoungKidsMS,
type = {{MS Master's Thesis}},
author = {Vides, Francisco},
title = {TAYouKi: A Sketch-Based Tutoring System for Young Kids},
pages = {129},
year = {2012},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{paultaele2010FreehandSketchRecognitionforComputerAssistedLanguageLearningofWrittenEastAsianLanguagesMS,
type = {{MS Master's Thesis}},
author = {Taele, Paul},
title = {Freehand Sketch Recognition for Computer-Assisted Language Learning of Written East Asian Languages},
pages = {96},
year = {2010},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. ORCID: 0000-0002-1271-0574},
}


@mastersthesis{aaronwolin2010SegmentingHandDrawnStrokesMS,
type = {{MS Master's Thesis}},
author = {Wolin, Aaron},
title = {Segmenting Hand-Drawn Strokes},
pages = {160},
year = {2010},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{danieldixon2009AMethodologyforUsingAssistiveSketchRecognitionForImprovingaPersonsAbilitytoDrawMS,
type = {{MS Master's Thesis}},
author = {Dixon, Daniel},
title = {A Methodology for Using Assistive Sketch Recognition For Improving a Person’s Ability to Draw},
pages = {114},
year = {2009},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{tracyhammond2001EthnomathematicsConceptDefinitionandResearchPerspectivesMA,
type = {{MA Master's Thesis}},
author = {Hammond, Tracy},
title = {Ethnomathematics: Concept Definition and Research Perspectives},
pages = {57},
year = {2001},
month = {February},
address = {New York, NY, USA},
school = {Columbia University},
note = {Advisor: Ellen Marakowitz},
}


@mastersthesis{yuricastro2021MathematicalSketchingImprovingtheLearningofGraphingMathematicalEquationswithIntelligentTutoringBS,
type = {{Undergraduate Honors Thesis}},
author = {Castro, Yuri},
title = {Mathematical Sketching---Improving the Learning of Graphing Mathematical Equations with Intelligent Tutoring},
year = {2021},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{linazhang2021VibrotactileFeedbackforUnderstandingTimeinCalendarNotificationsBS,
type = {{Undergraduate Honors Thesis}},
author = {Zhang, Lina},
title = {Vibrotactile Feedback for Understanding Time in Calendar Notifications},
year = {2021},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. In progress},
}


@mastersthesis{chinaemereike2020VoiceRecognitionSystemsandAfricanDiasporaAccentsBA,
type = {{Undergraduate Honors Thesis}},
author = {Ike, Chinaemere},
title = {Voice Recognition Systems and African Diaspora Accents},
year = {2020},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{bentonguess2019SketchRecognitionApplicationstotheReyOsterriethComplexFigureTestBS,
type = {{Undergraduate Honors Thesis}},
author = {Guess, Benton Phillipy},
title = {Sketch Recognition Applications to the Rey-Osterrieth Complex Figure Test},
year = {2019},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{chinmayphulse2019UsingEyeTrackingDataforUserIdentificationBS,
type = {{Undergraduate Honors Thesis}},
author = {Phulse, Chinmay Milind},
title = {Using Eye Tracking Data for User Identification},
year = {2019},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{piyushtandon2019IdentificationofSwimmingStrokesUsingSmartDevicesBS,
type = {{Undergraduate Honors Thesis}},
author = {Tandon, Piyush},
title = {Identification of Swimming Strokes Using Smart Devices},
year = {2019},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{leslieescalante2018HapticDiveAnIntuitiveWarningSystemforUnderwaterUsersBS,
type = {{Undergraduate Honors Thesis}},
author = {Escalante-Trevino, Leslie},
title = {HapticDive: An Intuitive Warning System for Underwater Users},
pages = {55},
year = {2018},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. Coauthored with Sneha Santani},
}


@mastersthesis{snehasantani2018HapticDiveAnIntuitiveWarningSystemforUnderwaterUsersBS,
type = {{Undergraduate Honors Thesis}},
author = {Santani, Sneha},
title = {HapticDive: An Intuitive Warning System for Underwater Users},
pages = {55},
year = {2018},
month = {December},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. Coauthored with Leslie Escalante-Trevino},
}


@mastersthesis{jiayaoli2018CompareAccuracyandTimeComplexityofMachineLearningAlgorithmsforEyeGestureRecognitionBS,
type = {{Undergraduate Honors Thesis}},
author = {Li, Jiayao},
title = {Compare Accuracy and Time Complexity of Machine Learning Algorithms for Eye Gesture Recognition},
year = {2018},
month = {August},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{eleanormiller2018RecognizingElementaryElementsinChemicalDiagramSketchesBS,
type = {{Undergraduate Honors Thesis}},
author = {Miller, Eleanor},
title = {Recognizing Elementary Elements in Chemical Diagram Sketches},
pages = {55},
year = {2018},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{jakeleland2017RecognizingSeatbeltFasteningActivityUsingWearableSensorTechnologyBS,
type = {{Undergraduate Honors Thesis}},
author = {Leland, Jake},
title = {Recognizing Seatbelt-Fastening Activity Using Wearable Sensor Technology},
pages = {43},
year = {2017},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. Coauthored with Ellen Stanfill},
}


@mastersthesis{ellenstanfill2017RecognizingSeatbeltFasteningActivityUsingWearableSensorTechnologyBS,
type = {{Undergraduate Honors Thesis}},
author = {Stanfill, Ellen},
title = {Recognizing Seatbelt-Fastening Activity Using Wearable Sensor Technology},
pages = {43},
year = {2017},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond. Coauthored with Jake Leland},
}


@mastersthesis{davidbrhlik2016EnhancingBlindNavigationwiththeUseofWearableSensorTechnologyBS,
type = {{Undergraduate Honors Thesis}},
author = {Brhlik, David},
title = {Enhancing Blind Navigation with the Use of Wearable Sensor Technology},
year = {2016},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond \& Theresa Maldonado. Coauthored with Temiloluwa Otuyelu and Chad Young},
}


@mastersthesis{temiotuyelu2016EnhancingBlindNavigationwiththeUseofWearableSensorTechnologyBS,
type = {{Undergraduate Honors Thesis}},
author = {Otuyelu, Temiloluwa},
title = {Enhancing Blind Navigation with the Use of Wearable Sensor Technology},
year = {2016},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond \& Theresa Maldonado. Coauthored with David Brhlik and Chad Young},
}


@mastersthesis{chadyoung2016EnhancingBlindNavigationwiththeUseofWearableSensorTechnologyBS,
type = {{Undergraduate Honors Thesis}},
author = {Young, Chad},
title = {Enhancing Blind Navigation with the Use of Wearable Sensor Technology},
year = {2016},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond \& Theresa Maldonado. Coauthored with Temiloluwa Otuyelu and David Brhlik},
}


@mastersthesis{sarinregmi2012HaptigoTactileNavigationSystemBS,
type = {{Undergraduate Honors Thesis}},
author = {Regmi, Sarin},
title = {Haptigo Tactile Navigation System},
year = {2012},
month = {May},
address = {College Station, TX, USA},
school = {Texas A\&M University ({TAMU})},
note = {Advisor: Tracy Hammond},
}


@mastersthesis{stephanievalentine2011AShapeComparisonTechniqueforUseinSketchbasedTutoringSystemsBS,
type = {{Undergraduate Honors Thesis}},
author = {Valentine, Stephanie},
title = {A Shape Comparison Technique for Use in Sketch-based Tutoring Systems},
year = {2011},
month = {May},
address = {Winona, MN, USA},
school = {St. Mary's University of Minnesota},
note = {Advisor: Ann Smith \& Tracy Hammond},
}


Show All LaTeX Include

\subsection{Dissertations and Theses}

\subsubsection{PhD Dissertations}
\begin{enumerate}
\item \bibentry{paultaele2019ASketchRecognitionBasedIntelligentTutoringSystemforRicherInstructorLikeFeedbackonChineseCharactersPhD}
\item \bibentry{blakewilliford2019ExploringmethodsforholisticallyimprovingdrawingabilitywithartificialintelligencePhD}
\item \bibentry{vijayrajanna2018AddressingSituationalandPhysicalImpairmentsandDisabilitieswithaGazeassistedMultimodalAccessibleInteractionParadigmPhD}
\item \bibentry{stephanievalentine2016DesignDeploymentIdentityConformityAnAnalysisofChildrensOnlineSocialNetworksPhD}
\item \bibentry{folamialamudun2016AnalysisofVisuocognitiveBehaviorinScreeningMammographyPhD}
\item \bibentry{honghoekim2016AFineMotorSkillClassifyingFrameworktoSupportChildrensSelfregulationSkillsandSchoolReadinessPhD}
\item \bibentry{manojprasad2014DesigningTactileInterfacesforAbstractInterpersonalCommunicationPedestrianNavigationandMotorcyclistsNavigationPhD}
\item \bibentry{daniellecummings2013MultimodalInteractionforEnhancingTeamCoordinationontheBattlefieldPhD}
\item \bibentry{sashikanthdamaraju2013AnExplorationofMultitouchInteractionTechniquesPhD}
\item \bibentry{brandonpaulson2010RethinkingpeninputinteractionEnablingfreehandsketchingthroughimprovedprimitiverecognitionPhD}
\item \bibentry{tracyhammond2007LADDERAPerceptuallyBasedLanguagetoSimplifySketchRecognitionUserInterfaceDevelopmentPhD}
\end{enumerate}

\subsubsection{Master's Theses}
\begin{enumerate}
\item \bibentry{siddharthsubramaniyam2019SketchRecognitionBasedClassificationforEyeMovementBiometricsinVirtualRealityMS}
\item \bibentry{meghayadav2019MitigatingpublicspeakinganxietyusingvirtualrealityandpopulationspecificmodelsMS}
\item \bibentry{jakeleland2019RecognizingSeatbeltFasteningBehaviorwithWearableTechnologyandMachineLearningMS}
\item \bibentry{sharmisthamaity2019CombiningPaperPencilTechniqueswithImmediateFeedbackforLearningChemicalDrawingsMS}
\item \bibentry{larrypowell2019TheEvaluationofRecognizingAquaticActivitiesThroughWearableSensorsandMachineLearningMS}
\item \bibentry{sabyasachichakraborty2018ANovelMethodologyforCreatingAutoGeneratedSpringBasedTrussProblemsThroughMechanixMS}
\item \bibentry{adilhamidmalla2018AGazeBasedAuthenticationSystemFromAuthenticationtoIntrusionDetectionMA}
\item \bibentry{tianshuchu2017ASketchbasedEducationalSystemforLearningChineseHandwritingMS}
\item \bibentry{junginkoh2017DevelopingaHandGestureRecognitionSystemforMappingSymbolicHandGesturestoAnalogousEmojiinComputerMediatedCommunicationMS}
\item \bibentry{sethpolsley2017IdentifyingoutcomesofcarefrommedicalrecordstoimprovedoctorpatientcommunicationMS}
\item \bibentry{aqibbhat2017SketchographyAutomaticgradingofmapsketchesforgeographyeducationMS}
\item \bibentry{joshcherian2017RecognitionofEverydayActivitiesthroughWearableSensorsandMachineLearningMS}
\item \bibentry{jorgecamara2017Flow2CodeFromHanddrawnFlowcharttoCodeExecutionMS}
\item \bibentry{nahumvillanueva2017ARCachingUsingAugmentedRealityonMobileDevicestoImproveGeocacherExperienceMS}
\item \bibentry{siddharthakarthik2016LabelingbyExampleMS}
\item \bibentry{swarnakeshavabhotla2016PerSketchTivityRecognitionandProgressiveLearningAnalysisMS}
\item \bibentry{shaliniashokkumar2016EvaluationofConceptualSketchesonStylusbasedDevicesMS}
\item \bibentry{purnendukaul2016GazeAssistedClassificationofOnScreenTasksbyDifficultyLevelandUserActivitiesReadingWritingTypingImageGazingMS}
\item \bibentry{jaideepray2016FindingSimilarSketchesMS}
\item \bibentry{shiqiangguo2015ResuMatcherAPersonalizedResumeJobMatchingSystemMS}
\item \bibentry{zhengliangyin2014ChineseCalligraphistASketchBasedLearningToolforLearningWrittenChineseMS}
\item \bibentry{honghoekim2012AnalysisofChildrensSketchestoImproveRecognitionAccuracyinSketchBasedApplicationsMS}
\item \bibentry{drewlogsdon2012ArmHandFingerVideoGameInteractionMS}
\item \bibentry{georgelucchese2012SketchRecognitiononMobileDevicesMS}
\item \bibentry{wenzheli2012AcousticBasedSketchRecognitionMS}
\item \bibentry{franciscovides2012TAYouKiASketchBasedTutoringSystemforYoungKidsMS}
\item \bibentry{paultaele2010FreehandSketchRecognitionforComputerAssistedLanguageLearningofWrittenEastAsianLanguagesMS}
\item \bibentry{aaronwolin2010SegmentingHandDrawnStrokesMS}
\item \bibentry{danieldixon2009AMethodologyforUsingAssistiveSketchRecognitionForImprovingaPersonsAbilitytoDrawMS}
\item \bibentry{tracyhammond2001EthnomathematicsConceptDefinitionandResearchPerspectivesMA}
\end{enumerate}

\subsubsection{Undergraduate Honors Theses}
\begin{enumerate}
\item \bibentry{yuricastro2021MathematicalSketchingImprovingtheLearningofGraphingMathematicalEquationswithIntelligentTutoringBS}
\item \bibentry{linazhang2021VibrotactileFeedbackforUnderstandingTimeinCalendarNotificationsBS}
\item \bibentry{chinaemereike2020VoiceRecognitionSystemsandAfricanDiasporaAccentsBA}
\item \bibentry{bentonguess2019SketchRecognitionApplicationstotheReyOsterriethComplexFigureTestBS}
\item \bibentry{chinmayphulse2019UsingEyeTrackingDataforUserIdentificationBS}
\item \bibentry{piyushtandon2019IdentificationofSwimmingStrokesUsingSmartDevicesBS}
\item \bibentry{leslieescalante2018HapticDiveAnIntuitiveWarningSystemforUnderwaterUsersBS}
\item \bibentry{snehasantani2018HapticDiveAnIntuitiveWarningSystemforUnderwaterUsersBS}
\item \bibentry{jiayaoli2018CompareAccuracyandTimeComplexityofMachineLearningAlgorithmsforEyeGestureRecognitionBS}
\item \bibentry{eleanormiller2018RecognizingElementaryElementsinChemicalDiagramSketchesBS}
\item \bibentry{jakeleland2017RecognizingSeatbeltFasteningActivityUsingWearableSensorTechnologyBS}
\item \bibentry{ellenstanfill2017RecognizingSeatbeltFasteningActivityUsingWearableSensorTechnologyBS}
\item \bibentry{davidbrhlik2016EnhancingBlindNavigationwiththeUseofWearableSensorTechnologyBS}
\item \bibentry{temiotuyelu2016EnhancingBlindNavigationwiththeUseofWearableSensorTechnologyBS}
\item \bibentry{chadyoung2016EnhancingBlindNavigationwiththeUseofWearableSensorTechnologyBS}
\item \bibentry{sarinregmi2012HaptigoTactileNavigationSystemBS}
\item \bibentry{stephanievalentine2011AShapeComparisonTechniqueforUseinSketchbasedTutoringSystemsBS}
\end{enumerate}
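
The \bibentry lists above only compile with the bibentry package loaded and the entries available in a .bib file. A minimal preamble sketch follows; srl.bib is an assumed file name for the BibTeX entries shown in this section.

% Minimal preamble sketch; srl.bib is an assumed file name.
\documentclass{article}
\usepackage{bibentry}

\begin{document}
\bibliographystyle{plain}

% ... the \subsection and enumerate blocks above go here ...

% Reads srl.bib so \bibentry can print full references inline,
% without emitting a reference list of its own.
\nobibliography{srl}
\end{document}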