Hyperhaptics is a research project investigating the following questions: How can haptics and materiality be transported into the virtual world? How can senses be meaningfully coupled to complement and expand physical reality? How can the imagination of users be stimulated while they mentally construct imaginary virtual worlds beyond spatial and temporal boundaries?
In three sprints of three weeks each, I experimented with micro motions, haptic vision, and pseudo-haptics. My research documentation and results are published in this project catalog.
1. HAPTIC VISION
Research question
Interacting with screens and augmented reality has become part of our everyday life. However, these interactions primarily target our visual and auditory perception. In this sprint, I investigated how to enhance them into a richer experience that also triggers our haptic sense – for example, in cybersports, where visual and auditory feedback is frequently not enough.
Methodology
I used the scenario of virtual fitness as a starting point and focused on two main aspects: the coordination of specific movements and the feeling of weight. Then, I designed small experimental programs with Processing and Arduino to visually communicate the ideas of coordination and weight. Finally, I attached a vibration motor to my index finger and connected it as the haptic output of the interaction.
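As an illustration of this basic setup, a minimal Processing sketch could look like the following. It is a reconstruction under stated assumptions, not the original code: it assumes the Arduino appears as the first serial port and runs a simple firmware that maps each received byte (0–255) to the PWM duty cycle driving the vibration motor.

```processing
import processing.serial.*;

Serial arduino;    // serial link to the Arduino driving the vibration motor
int pulseEnd = 0;  // time (ms) at which the current vibration pulse ends

void setup() {
  size(400, 400);
  // Assumption: the Arduino shows up as the first serial port at 9600 baud.
  arduino = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  background(255);
  ellipse(mouseX, mouseY, 40, 40);  // simple visual cursor

  // End the pulse after ca. 0.5 s by sending intensity 0.
  if (pulseEnd > 0 && millis() > pulseEnd) {
    arduino.write(0);
    pulseEnd = 0;
  }
}

void mousePressed() {
  // Trigger a ca. 0.5 s vibration pulse at a constant intensity.
  arduino.write(200);  // intensity byte, mapped to motor PWM on the Arduino
  pulseEnd = millis() + 500;
}
```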
Parameters
To approach the aspect of coordination, I started with a simple one-dimensional vibration stimulus (a ca. 0.5 s signal at a constant intensity level). It would be triggered by clicking the mouse, moving in a specific direction at a certain speed, or hitting a visual obstacle. Next, I added variety to the vibration output by defining patterns such as fade-outs and fade-ins, or by coupling the position or speed of the mouse to the intensity of the vibration feedback.

To approach the feeling of weight, I explored visual delays of objects moving on the screen, in the sense of gravity, and translated them into matching delayed vibration feedback. Moving an object up would cause a visible delay and a delayed vibration feedback that would fade in. Moving the object down would cause no delay, creating a plausible physical experience (since, due to gravity, it is easier to move an object down than up).
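The weight mapping could be sketched in Processing roughly as follows – again under the assumption of the same serial/PWM firmware as above. Dragging the object upwards makes it lag behind the cursor while the vibration fades in; dragging it downwards follows the cursor instantly.

```processing
import processing.serial.*;

Serial arduino;
float objY;           // vertical position of the dragged object
float vibration = 0;  // current vibration intensity (0-255)

void setup() {
  size(400, 400);
  objY = height / 2;
  arduino = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  background(255);

  if (mousePressed) {
    if (mouseY < objY) {
      // Moving up: the object lags behind the cursor and the
      // vibration fades in, suggesting weight against gravity.
      objY = lerp(objY, mouseY, 0.05);
      vibration = min(vibration + 5, 200);
    } else {
      // Moving down: no delay, no resistance - gravity "helps".
      objY = mouseY;
      vibration = 0;
    }
  } else {
    vibration = max(vibration - 20, 0);  // fade out on release
  }

  arduino.write(int(vibration));
  ellipse(width / 2, objY, 60, 60);
}
```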
Findings
During the process, I noticed that using a mouse and a screen is such an established and fast interaction that it was hard to integrate a haptic element into it and actually »wait for it«. Therefore, I had to slow down some of the experiments. It was also difficult to communicate a message or information with the vibration feedback when it was combined with the actual movement of the mouse. However, the setup worked well for connecting, interchanging, and accurately tweaking all parameters. Giving coordinative guidance through haptic feedback worked well. The pseudo-haptic effect created through delayed visual and vibration feedback was also convincing and showed high potential for further investigation.
My findings inspired me to use haptic and tactile experiences to guide workout routines. Read more about it on the project page of Metron.
2. HAPTIC SOUND
Research question
Digital technology often tries to convince with a high level of accuracy. But why, for instance, do traditional (analog) instruments have so much more charm and remain competitive with digital music software? In this sprint, I wanted to explore different ways of interacting with sound at the intersection of a digital, an analog, and an augmented reality environment.
Methodology
I created several sounds and some paper cubes with different visual targets attached. Then, with Unity, Vuforia, and an external camera, I set up a program that connects the sounds with the target cubes. Once the camera detects a target, the associated sound is played, and an augmented colorful sphere appears in front of the target on the screen to indicate successful detection.
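A minimal Unity component for this step might look like the C# sketch below. The class and field names are hypothetical, not the original project code; the two methods are meant to be wired in the inspector to the OnTargetFound/OnTargetLost events that Vuforia's DefaultObserverEventHandler exposes.

```csharp
using UnityEngine;

// Attach to an image-target GameObject together with an AudioSource.
// Wire PlaySound()/StopSound() to Vuforia's OnTargetFound/OnTargetLost
// events (DefaultObserverEventHandler) in the inspector.
public class TargetSound : MonoBehaviour
{
    public AudioSource sound;  // the sound associated with this target
    public GameObject sphere;  // augmented sphere indicating detection

    public void PlaySound()
    {
        sphere.SetActive(true);
        if (!sound.isPlaying) sound.Play();
    }

    public void StopSound()
    {
        sphere.SetActive(false);
        sound.Stop();
    }
}
```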
Parameters
I extended the basic setup by adding parameters that shape the sound on different levels, such as volume, pitch, and stereo pan. Next, I connected these parameters to the position of the paper cubes. This allowed me to explore different interactions – rotating, lifting up and down, moving back and forth, relating several cubes to each other – and to evaluate how well each mapped to the perceived sound.
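The mappings could be sketched like this – a hedged reconstruction that assumes the cube's pose is already provided by the tracking and that the AudioSource is a 2D source (spatialBlend = 0) so that panStereo takes effect; the maxDistance value is an arbitrary placeholder.

```csharp
using UnityEngine;

// Maps a tracked cube's pose to sound parameters each frame:
// camera-cube distance -> volume, rotation around Y -> stereo pan.
public class CubeSoundMapping : MonoBehaviour
{
    public AudioSource sound;
    public Transform arCamera;      // the AR camera tracking the targets
    public float maxDistance = 1f;  // distance (m) at which volume reaches 0

    void Update()
    {
        // The closer the cube is to the camera, the louder its sound.
        float distance = Vector3.Distance(transform.position, arCamera.position);
        sound.volume = Mathf.Clamp01(1f - distance / maxDistance);

        // Rotating the cube to the right moves the sound to the right:
        // map the cube's yaw (-90..90 degrees) onto the pan range (-1..1).
        float yaw = Mathf.DeltaAngle(0f, transform.eulerAngles.y);
        sound.panStereo = Mathf.Clamp(yaw / 90f, -1f, 1f);
    }
}
```

Clamping both mappings keeps the sound parameters stable when the tracking briefly jitters or a cube leaves the expected range.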
Findings
Eventually, I ended up with a setup in which I paired the distance between a cube and the camera with the volume parameter: the closer the cube is to the camera, the louder its sound plays. I also tested the same mapping with lifting the cube (the higher, the louder) and with the cubes' proximity to each other (the closer, the louder). Rotating a cube to the right moved its sound to the right within the stereo field, and vice versa.
I learned that relating several cubes to each other was not a good way to adjust the parameters of multiple sounds, because the correlation provided only one value (their distance), which was impractical for tweaking several sounds. The lifting interaction was also unsuitable, because it limited the setup to two cubes (one for each hand of the user), and their positions could not be fixed.
Although the correlation of cubes didn't work for the sound, it still produced a visible constellation. After some practice, this constellation would allow the user to read the sound out of the cubes' positions. Pairing the distance between the camera and the cubes with the volume worked best and allowed an intuitive interaction. The overall experience also had a playful and inviting character that encouraged interacting with and discovering sound.
Credits
The project is based on the research work of the Cutting project at the Cluster of Excellence Matters of Activity, which deals with the cultural practices of cutting and understands them as processes of dividing as well as composing – and, ultimately, of deciding.
The course is realized under the patronage of Prof. Carola Zwick by Judith Glaser, lecturer for interaction design (Studio NAND); Felix Rasehorn, pre-doctoral researcher (MoA); and Felix Groll, head of the eLAB (weissensee school of art and design berlin).