Finished Thesis Topics

If you are interested in writing a thesis (in German or English) in the scope of one of our research topics, just come talk to us.

List of already finished thesis topics

04.02.2022

Universalizing the VR Experience - A Web Approach

VR technology has grown immensely in the last few years, yet the great majority of people have not experienced it. One reason is that the software is too specialized and the hardware is prohibitively costly. This Bachelor thesis investigates the feasibility of web-based VR applications in combination with accessible smartphone VR devices, contributing to the question of how universally accessible VR applications can be. As the subject of the web-based VR application, a simple chess game was chosen, which includes a single-player (AI) and a multiplayer mode. Individual pieces can be controlled either by voice commands or by a gaze pointer. As head-mounted devices (HMDs), two low-tech lens-based viewers were tested: the Zeiss VR One Plus and the Google Cardboard V2. These HMDs were combined with two different smartphones, the Google Pixel 6 (2021) and the Samsung Galaxy A5 (2016). This array of devices covers old and modern smartphones as well as cheap and more expensive HMDs, thus representing the equipment of the broader public. First, the thesis describes the game development process in detail, as the app was written from scratch. Second, the game performance is evaluated, measured by loading times, video quality, and the influence the hardware and different 3D resources have on these factors. Finally, it is assessed how much impact the hardware combination has on this more novel web VR approach. Altogether, newer smartphones like the Google Pixel 6 perform best on these quality measures, and in combination with moderately complex meshes in the 3D scene, the VR browser chess offers a satisfying user experience, especially in multiplayer mode. However, some weaker devices, such as the Samsung Galaxy A5, lack the computational power for the demanding rendering.
Thus, web VR is a promising technology branch that in theory everybody could experience with little money; nonetheless, some smartphones are excluded from an immersive experience. Based on the insights gained in this thesis, the user experience of web-based VR applications combined with smartphone HMDs can be investigated in actual user studies. Additionally, the program can be extended to make use of the camera to perform eye-tracking studies or realize hand tracking. These features would allow users to physically interact with the VR application and thus realize a novel, truly XR, low-tech web application.
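
The gaze-pointer selection mentioned above can be sketched as a simple ray test against a piece's bounding sphere. This is a minimal illustration, not the thesis's actual implementation (in a web VR app the ray would come from the headset pose), and all names here are hypothetical:

```python
import math

def gaze_hit(origin, direction, center, radius):
    """Return True if a gaze ray hits a sphere (a piece's bounding volume).

    origin, direction, center are 3-tuples; direction need not be normalized.
    """
    # Vector from ray origin to sphere center
    oc = tuple(c - o for o, c in zip(origin, center))
    d_len = math.sqrt(sum(d * d for d in direction))
    d = tuple(x / d_len for x in direction)  # normalize the ray direction
    # Projection of oc onto the ray gives the closest approach parameter
    t = sum(a * b for a, b in zip(oc, d))
    if t < 0:
        return False  # sphere is behind the viewer
    # Squared distance from sphere center to its closest point on the ray
    closest = tuple(o + t * di for o, di in zip(origin, d))
    dist2 = sum((c - p) ** 2 for c, p in zip(center, closest))
    return dist2 <= radius * radius

# The piece straight ahead is hit, the one off to the side is not
print(gaze_hit((0, 1.6, 0), (0, 0, -1), (0, 1.6, -2), 0.1))  # True
print(gaze_hit((0, 1.6, 0), (0, 0, -1), (1, 1.6, -2), 0.1))  # False
```

The selected piece would then be highlighted until a dwell time or voice command confirms the move.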

Read more ...

28.10.2021

Data augmentations in mixed reality machine learning applications

Machine learning models have accomplished much in the modern day. Nevertheless, they rely on big datasets to have practical relevance. Since it is not always possible to obtain masses of data, augmenting already available data has become an appealing alternative. In this thesis, a data augmentation system is proposed that uses a mixed reality environment to create augmented image data for classification tasks. To achieve this, ArUco markers are tracked in a picture and used to insert an arbitrary virtual object onto the marker via a homography. Finally, the augmentations are evaluated by training a neural network with the augmented and the real data as input datasets. The proposed system achieves augmentations that can partly substitute real data in a machine learning application. This indicates that a mixed reality approach can create data augmentations that expand or substitute existing datasets with augmented pictures in an image classification task.
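
The marker-based insertion step boils down to mapping points through a 3x3 homography (which, in practice, a library such as OpenCV would estimate from the detected ArUco corners, e.g. with `cv2.findHomography`). A minimal sketch of the mapping itself, with hypothetical values:

```python
def apply_homography(H, point):
    """Map a 2D point through a 3x3 homography (row-major nested lists)."""
    x, y = point
    # Homogeneous coordinates: divide by the projective component w
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return (u, v)

# Identity homography leaves the marker corner unchanged
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(apply_homography(I, (10.0, 20.0)))  # (10.0, 20.0)

# A translation-only homography shifts the corner by (5, -3)
T = [[1, 0, 5], [0, 1, -3], [0, 0, 1]]
print(apply_homography(T, (10.0, 20.0)))  # (15.0, 17.0)
```

In the full system, every corner of the virtual object's texture is mapped this way onto the marker plane before compositing.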

Read more ...

12.03.2021

Towards avatar interaction and teleportation in virtual environments

This work focuses on the design and implementation of a virtual environment intended to investigate avatar interaction as well as teleportation in virtual reality. Two teleportation methods are implemented: gaze teleportation and automatic teleportation. With gaze teleportation, the environment can further serve to explore gaze-based interaction. A gallery serves as a setting in which a virtual guide provides information about paintings. The interaction with the guide consists of giving speech or controller commands regarding the information about paintings. Furthermore, the environment can be customized in terms of its paintings and the possible teleportation positions. Overall, the environment serves as an extensible platform for research projects studying different topics in virtual reality. Using the environment as a template can save time in the required software development.
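
Gaze teleportation of the kind described above typically intersects the gaze ray with the floor to find a candidate target position. A minimal sketch under that assumption (not the thesis code; names are illustrative):

```python
def gaze_teleport_target(origin, direction, floor_y=0.0):
    """Intersect a gaze ray with a horizontal floor plane (y = floor_y).

    Returns the (x, y, z) teleport target, or None if the gaze does not
    point toward the floor."""
    dy = direction[1]
    if dy >= 0:
        return None  # looking parallel to or away from the floor
    t = (floor_y - origin[1]) / dy
    return tuple(o + t * d for o, d in zip(origin, direction))

# Looking down at 45 degrees from 1.6 m eye height lands 1.6 m ahead
print(gaze_teleport_target((0.0, 1.6, 0.0), (0.0, -1.0, -1.0)))
# (0.0, 0.0, -1.6)
```

The returned point would then be snapped to the nearest of the environment's configurable teleport positions before moving the avatar.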

Read more ...

01.03.2021

Development of an omnidirectional 2D/3D virtual reality experiment designer and player for eye tracking studies

The goal was to write software that can be used for scientific purposes. Different stimuli can be loaded in a UI (here called the "Experiment Designer"), are correctly classified, and are then assembled into a list. Individual stimuli can subsequently be adjusted, for example the initial camera rotation for 360-degree videos, or how long an image is shown before it automatically disappears for the participant. Further functions, such as saving and loading such a stimulus list so that it is quickly available again with the same adjusted values, have of course also been integrated. A button in the Experiment Designer then starts an automatic run through the list of stimuli. During playback it remains possible to pause, skip, or replay stimuli, as well as to abort early. While the stimuli are played back, the eyes are tracked in order to record both the origin and the direction of the gaze, always relative to the head movement, and to write this data automatically to a separate text file so that it can be analyzed afterwards.
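
Saving and loading a stimulus list with its adjusted settings, as described above, amounts to serializing the per-stimulus parameters. A minimal JSON-based sketch with hypothetical field names (the Experiment Designer's actual file format may differ):

```python
import json
import os
import tempfile

def save_playlist(path, stimuli):
    """Persist a stimulus list (with per-stimulus settings) as JSON."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump({"stimuli": stimuli}, f, indent=2)

def load_playlist(path):
    """Restore a previously saved stimulus list."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)["stimuli"]

# Per-stimulus settings: initial camera yaw for 360-degree video,
# display duration for still images
stimuli = [
    {"file": "forest_360.mp4", "type": "video360", "initial_yaw_deg": 90},
    {"file": "slide01.png", "type": "image", "duration_s": 8.0},
]
path = os.path.join(tempfile.gettempdir(), "playlist.json")
save_playlist(path, stimuli)
print(load_playlist(path) == stimuli)  # True
```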

Read more ...

23.10.2020

Towards understanding attention in virtual reality - Analysing visual attention in a VR-Classroom experiment

Attention can be seen as a key aspect of learning. Most of children's everyday learning takes place in a classroom, but investigating children's attention and learning in a real-world classroom can be difficult. Therefore, we used an immersive virtual reality classroom to investigate children's attention during a 14-minute virtual lesson. We collected information about the objects children had looked at. With this gazed-object information, we analysed the total time spent on specific objects of interest (peer learners, teacher, screen) and investigated children's visual attention behaviour with scanpath analysis (ScanMatch, SubsMatch). The study was conducted as a between-subjects design with three classroom manipulations: the participants' seating position, the avatar style of the peer learners, and the peer learners' hand-raising behaviour. We found significant differences in children's visual attention depending on the position they were seated in the classroom and on the visual appearance of the peer learners. Additionally, effects of the hand-raising condition on children's visual attention indicate that children also process social information in the virtual classroom. These findings can be seen as a first step towards understanding children's visual attention in an immersive virtual reality classroom.
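
ScanMatch-style scanpath comparison aligns sequences of area-of-interest labels with a Needleman-Wunsch algorithm. A simplified sketch of the core alignment (the real ScanMatch toolbox additionally uses substitution matrices and temporal binning):

```python
def scanmatch_similarity(a, b, match=1.0, mismatch=-1.0, gap=-1.0):
    """Needleman-Wunsch alignment score between two AOI-letter scanpaths,
    normalized to [0, 1] by the length of the longer sequence."""
    n, m = len(a), len(b)
    # DP table of best global alignment scores
    score = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return max(0.0, score[n][m] / (max(n, m) * match))

# 'T' teacher, 'S' screen, 'P' peer learner: identical paths score 1.0
print(scanmatch_similarity("TSSP", "TSSP"))        # 1.0
print(scanmatch_similarity("TSSP", "TPPP") < 1.0)  # True
```

Two children attending to the same objects in the same order thus produce a high similarity, regardless of exact fixation coordinates.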

Read more ...

01.09.2020

A Deep Learning Approach for Expertise Classification using Saccade Behavior

Eye movements reflect the cognitive advantage of experts over novices in domain-specific tasks. Current literature focuses on fixations but leaves out saccades. This research investigates the gaze behavior of dentistry students and expert dentists viewing orthopantomograms (OPTs). All proposed Long Short-Term Memory (LSTM) models were able to distinguish expert from novice gaze behavior based on saccade features above chance level, with the best-performing feature reaching an accuracy of 77.1%. The results provide further evidence for the holistic model of image perception, which proposes that experts initially analyze an image globally and then proceed with a focal analysis. Furthermore, our results show that saccade features are important for understanding expert gaze behavior and should therefore be integrated into current theories of expertise.
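
Saccade features such as amplitude and peak velocity, which serve as model inputs in work like this, can be computed directly from gaze samples. A minimal sketch with hypothetical parameters (a real pipeline would first detect saccade on- and offsets):

```python
import math

def saccade_features(samples, rate_hz=1000.0):
    """Amplitude (deg) and peak velocity (deg/s) of one saccade, given as
    a list of (x, y) gaze positions in degrees of visual angle."""
    dt = 1.0 / rate_hz
    x0, y0 = samples[0]
    x1, y1 = samples[-1]
    # Amplitude: straight-line distance from saccade onset to offset
    amplitude = math.hypot(x1 - x0, y1 - y0)
    # Peak velocity: fastest sample-to-sample displacement
    peak_velocity = max(
        math.hypot(b[0] - a[0], b[1] - a[1]) / dt
        for a, b in zip(samples, samples[1:])
    )
    return amplitude, peak_velocity

# A 4-sample rightward saccade recorded at 1000 Hz
amp, vel = saccade_features([(0, 0), (1, 0), (3, 0), (4, 0)])
print(amp)  # 4.0
print(vel)  # 2000.0
```

Sequences of such per-saccade feature vectors are what an LSTM would consume to classify expertise.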

Read more ...

01.01.2020

Manipulating Yarbus. Online adaptive feedback for scanpath redirection

Alfred L. Yarbus investigated eye movement patterns in the 1960s and discovered that top-down contextual cues affect our gaze behavior. Building on Bailey's subtle gaze direction method, we investigated whether gaze behavior can be manipulated using adaptive gaze guidance. By manipulating salient visual features in an image, gaze can be guided. Although we did not succeed in significantly manipulating gaze behavior, we developed stable software for manipulating gaze online. Considering our suggested improvements, it may be possible to manipulate gaze behavior with adaptive gaze guidance to a certain extent.
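
Subtle gaze direction relies on showing an image modulation only in the viewer's periphery and suppressing it as the gaze approaches, so it is never seen foveally. A minimal sketch of such an online modulation rule, with illustrative thresholds not taken from the thesis:

```python
import math

def modulation_alpha(gaze, target, inner_deg=5.0, outer_deg=10.0, max_alpha=0.3):
    """Strength of a luminance modulation at `target`, given the current
    gaze position (both in degrees of visual angle).

    Full strength in the far periphery, linear fade-out as the gaze
    approaches, fully suppressed near the fovea."""
    d = math.hypot(target[0] - gaze[0], target[1] - gaze[1])
    if d >= outer_deg:
        return max_alpha  # far periphery: full modulation
    if d <= inner_deg:
        return 0.0        # near fovea: suppress entirely
    return max_alpha * (d - inner_deg) / (outer_deg - inner_deg)

print(modulation_alpha((0, 0), (20, 0)))   # 0.3
print(modulation_alpha((0, 0), (7.5, 0)))  # 0.15
print(modulation_alpha((0, 0), (2, 0)))    # 0.0
```

In an online setting, this function is re-evaluated on every eye-tracker sample, which is what makes the guidance adaptive.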

Read more ...

01.01.2020

Differences in attention in near vs. far hand conditions during propaganda viewing

The propaganda images used during the active period of the Nationalsozialistische Deutsche Arbeiterpartei (NSDAP) were among the most influential images used to promote an ideology. In modern times these images seem outdated. However, do they still influence our perception of Adolf Hitler and his ideology? Using eye tracking, we analyzed the attention distribution of people touching these pictures to find influences of the "near-hand phenomenon", which describes a more sympathetic affect towards people we touch. We found that the near-hand viewing condition leads to a lower viewing duration and a higher number of fixations, indicating a tendency to devote more attention to the images when touching them compared to the control group.
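
Viewing duration and number of fixations, the measures compared above, are typically derived from raw gaze samples with a fixation detection algorithm. A minimal dispersion-threshold (I-DT) sketch, with illustrative parameters:

```python
def _dispersion(window):
    """Spatial spread of a gaze window: x-range plus y-range."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(samples, rate_hz, dispersion_max, min_duration_s):
    """Dispersion-threshold (I-DT) fixation detection.

    samples: list of (x, y) gaze points; returns a list of
    (start_index, end_index_exclusive) fixation windows."""
    min_len = int(min_duration_s * rate_hz)
    fixations = []
    i = 0
    while i + min_len <= len(samples):
        j = i + min_len
        if _dispersion(samples[i:j]) <= dispersion_max:
            # Grow the window while its dispersion stays below threshold
            while j < len(samples) and _dispersion(samples[i:j + 1]) <= dispersion_max:
                j += 1
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations

# One stable fixation followed by a jump to a new location
pts = [(0, 0)] * 5 + [(10, 10)] * 5
print(idt_fixations(pts, rate_hz=100, dispersion_max=1.0, min_duration_s=0.03))
# [(0, 5), (5, 10)]
```

Fixation count is the length of the returned list; total viewing duration per image is the summed window lengths divided by the sampling rate.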

Read more ...

10.07.2019

Vein extraction and eye rotation determination

The first step is setting up a recording environment with a fixed subject position. This environment is used for data acquisition with predefined head rotations of the subjects. Based on this data, an algorithm is to be developed that measures the eyeball rotation of the subject. The resulting angle is then compared and validated against the head rotation.
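
Once vein features are matched between two images, a 2D rotation component can be estimated from their angular displacement about the eyeball center. A minimal sketch of that idea (the thesis's actual algorithm may differ):

```python
import math

def eye_rotation_deg(pairs):
    """Estimate the rotation angle (degrees) about the image center from
    matched vein feature positions before and after the rotation.

    pairs: list of ((x0, y0), (x1, y1)) feature coordinates relative to
    the eyeball center. Returns the mean angular displacement."""
    angles = []
    for (x0, y0), (x1, y1) in pairs:
        a = math.atan2(y1, x1) - math.atan2(y0, x0)
        # Wrap the difference into (-pi, pi]
        a = (a + math.pi) % (2 * math.pi) - math.pi
        angles.append(a)
    return math.degrees(sum(angles) / len(angles))

# Two vein features, both rotated by 90 degrees around the center
pairs = [((1.0, 0.0), (0.0, 1.0)), ((0.0, 2.0), (-2.0, 0.0))]
print(round(eye_rotation_deg(pairs), 1))  # 90.0
```

Averaging over many matched features makes the estimate robust against individual mismatches, and the result can be validated against the known head rotation.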

Read more ...

10.07.2019

EyeTrace CUDA extension

EyeTrace is a software for gaze data visualization and analysis. Due to the increasing amount of data, these visualizations need more and more computation time. In this thesis, existing visualizations are to be implemented using CUDA for GPU computation. Additionally, this includes a data storage model that makes it possible to shift data between the GPU and the host computer. Since not all computers have a CUDA-capable graphics card, the module should also allow CPU computation, and the appropriate backend should be selected automatically by the module.
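
The automatic CPU/GPU selection described above follows a common dispatch pattern: probe for a CUDA device and fall back to a CPU implementation with the same interface. A minimal Python sketch of the pattern (EyeTrace itself is not written in Python; all names here are illustrative):

```python
def has_cuda():
    """Detect a CUDA device; degrades gracefully when no GPU stack exists."""
    try:
        import cupy
        return cupy.cuda.runtime.getDeviceCount() > 0
    except Exception:
        return False

def heatmap_counts(points, width, height):
    """CPU reference implementation of a gaze heatmap (per-pixel hit
    counts); a CUDA kernel would compute the same grid on the GPU."""
    grid = [[0] * width for _ in range(height)]
    for x, y in points:
        if 0 <= x < width and 0 <= y < height:
            grid[y][x] += 1
    return grid

def heatmap(points, width, height):
    """Dispatch automatically: GPU path when available, otherwise CPU."""
    if has_cuda():
        pass  # a CUDA kernel launch would replace the CPU call here
    return heatmap_counts(points, width, height)

print(heatmap([(0, 0), (1, 1), (1, 1)], 2, 2))  # [[1, 0], [0, 2]]
```

The key design point is that both backends share one data model, so visualizations never need to know where the computation ran.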

Read more ...

10.07.2019

A Dashboard for Eyetrace

The Eyetrace software offers a wide range of visualization options for eye-tracking data. In this project, a graphically appealing overview of the data currently loaded into the program is to be created.

Read more ...

10.07.2019

3D Eyeball generation based on vein motion

The first step is robust feature extraction. This can be done using SURF, SIFT, BRISK, or MSER features, if these prove sufficient. These features then have to be matched to features found in consecutive images. Based on the displacement, a 3D model has to be computed. This model is afterwards used for gaze position estimation.
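
Matching features between consecutive images is commonly done by nearest-neighbor descriptor matching with Lowe's ratio test to discard ambiguous correspondences. A minimal sketch with toy descriptors (real SURF/SIFT descriptors are 64- or 128-dimensional):

```python
def match_features(desc_a, desc_b, ratio=0.8):
    """Brute-force nearest-neighbor matching with Lowe's ratio test.

    desc_a, desc_b: lists of descriptor vectors (tuples of floats).
    Returns a list of (index_a, index_b) accepted matches."""
    def dist2(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))

    matches = []
    for i, d in enumerate(desc_a):
        # Rank candidates in desc_b by squared descriptor distance
        ranked = sorted(range(len(desc_b)), key=lambda j: dist2(d, desc_b[j]))
        if len(ranked) >= 2:
            best, second = ranked[0], ranked[1]
            # Accept only matches clearly better than the runner-up
            if dist2(d, desc_b[best]) < (ratio ** 2) * dist2(d, desc_b[second]):
                matches.append((i, best))
    return matches

a = [(0.0, 0.0), (5.0, 5.0)]
b = [(0.1, 0.0), (5.0, 5.1), (9.0, 9.0)]
print(match_features(a, b))  # [(0, 0), (1, 1)]
```

The displacements of the accepted matches across consecutive frames are then the input for estimating the 3D eyeball model.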

Read more ...

21.06.2019

Effectiveness of augmented reality for human performance in assembly

Augmented Reality (AR) has evolved rapidly in recent years and has emerged as one of the most promising technologies for assisting human operators with assembly, which is in high demand in today's manufacturing environments. However, there is still a lack of empirical studies investigating the effectiveness of AR for human performance in assembly tasks. An empirical study with 20 participants was conducted to address this gap by comparing printed instructions with AR as the instructional medium. Three models of a fischertechnik kit were assembled, using the same instructional medium for each assembly. Assembly times, error rates, cognitive load, and usability aspects were measured to compare both media. Even though task completion time was not significantly improved with AR, reading the instructions and finding the required storage boxes in particular benefited from AR. Moreover, error rates for multiple types of error were significantly decreased using AR instructions. Major limitations of this AR-aided assembly application arose from a lack of alignment between virtual and physical objects, along with the limited field of view. Transferring similar applications to the industrial environment can be considered in the near future.

Read more ...