For my master's degree in Human Media Interaction I am required to participate in several projects throughout the year. This page briefly describes the projects I am most proud of, learnt the most from, or find particularly interesting. They are listed in reverse chronological order, so the most recent project comes first.
Critical thinking and problem solving are essential skills for people everywhere, and especially in developing countries. Being able to look around and work towards a better future for yourself and those around you can help communities and countries grow and evolve. Combine these analytic skills with practical skills in electronics, and people can start to build their own interventions. This combination of skills empowers people to work towards a better future for themselves and their surroundings. This is why I would like to set up a workshop on Physical Computing and Critical Thinking in Africa.
This workshop will have an additional goal. Besides teaching the participants the skills mentioned above, I would like to investigate how much user research I can do while giving the workshop. The aim is to look at the results of this integrated user research and compare them with those of traditional user research methods (e.g. a focus group). Gathering information through traditional user research methods takes time and can generally only give an indication of the user's needs and wants. This is because such methods are obtrusive: the users that a researcher wants to interview, observe or follow are aware of being studied. This awareness can influence users, who may change the way they behave or respond to questions. During the workshop, I would like to test whether user research can be integrated into the workshop subtly, such that it does not draw attention to itself. The results will then be compared to those of a more traditional method that will also be used during the workshop, but separately.
The user research will focus on the introduction of electronics in poor, low-educated parts of Africa (in this case Namibia). Many people are starting initiatives such as FabLabs in Africa, but what do the users need, expect and want in this field? What fields would they want to work in, and what information and technology do they want? This is not directly related to problem solving, but it does give an insight into what the participants aim to do and what they would like to achieve.
Human Media Interaction Project
The influence of a robot's exterior on child-robot interaction was researched in two iterations. This research focused on less humanoid robots with non-verbal behaviour. For the first iteration, three robot exteriors for the Robotino were designed, all based on the same dome shape. The least humanoid exterior was just an orange dome (a), the semi-humanoid exterior had a visor to give it a point of orientation (b), and the most humanoid exterior had eyes and a mouth (c). The first iteration was conducted with students from the University of Twente, due to the organizational difficulties of user testing with children.
This experiment was divided into three sections: drawing a robot, the task, and an interview. The 18 participants were divided equally over the robot categories such that the gender ratio was equal. The behaviour of each participant was coded as task-oriented or non-task-oriented. There were no significant results after these experiments. The drawings indicated that people think of square, human-like robots when asked to draw a robot. The results do indicate that people are willing to help the robot regardless of its exterior. They also indicate that people tend to talk more to the least humanoid robot, which contradicted our expectations. The shape and colour were both mentioned as contributors to the positive mood of the robot. The results of this iteration were the inspiration for the second iteration (d, e, f).
This iteration was conducted with six children from the after-school programme at "De Vlinder". The goal was to determine the influence of human-like features on the children's perception of the robot; this was done using focus groups. The children were also asked to draw a robot, and their drawings were similar to those of the students. The children were then asked several questions comparing the different robot exteriors. They were drawn towards the robots with eyes. They thought the square robot with eyes resembled a real robot, while the round robot with eyes was perceived as the friendliest.
In general, the square robot with the most human-like features was most in line with what the participants expected. Still, they thought of the round one with the eyes as the friendliest. Finally, a few side notes need to be made with regard to this conclusion. In both iterations the sample size was rather small, and neither iteration showed a clear result. Still, this research sheds some light on how students and children perceive robots and what they expect of them.
Lip synchronization, or lip sync for short, can be described as matching the lip movements of a human or virtual character to pre-recorded vocals, such as songs or spoken text. Since 1926, people have been trying to animate mouths so that they seem to be speaking the words heard at that moment. This art is being used more and more in virtual worlds such as games, where lip synchronisation enhances the perception and understanding of speech.
Often when a 3D model is used to accompany speech, the model has to be lip-synced beforehand to fit the spoken language. This is long and tedious work that requires a lot of skill. We looked at a real-time method of creating lip synchronisation. Research groups tend to assume that a many-to-one mapping from phonemes to visemes is preferable for computational reasons. We explored this assumption and compared the 'traditional' many-to-one technique with a more elaborate one-to-one technique. Our goal in this project was to create a real-time text-to-lip-sync conversion application.
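The difference between the two techniques can be sketched as a lookup problem. The minimal example below is only an illustration of the idea, not our actual implementation: the phoneme and viseme inventories shown here are made up for the sketch.

```python
# Many-to-one: several phonemes collapse onto one viseme (mouth shape),
# so fewer animation targets are needed. The phoneme/viseme names here
# are illustrative, not the sets used in the project.
MANY_TO_ONE = {
    "p": "closed_lips", "b": "closed_lips", "m": "closed_lips",
    "f": "lip_teeth",   "v": "lip_teeth",
    "a": "open_wide",
}

# One-to-one: every phoneme gets its own dedicated viseme,
# at the cost of more animation targets to model and blend.
ONE_TO_ONE = {p: f"viseme_{p}" for p in MANY_TO_ONE}

def phonemes_to_visemes(phonemes, mapping):
    """Convert a phoneme sequence into the viseme sequence to animate."""
    return [mapping[p] for p in phonemes]

# Under many-to-one, 'm', 'b' and 'p' all produce the same mouth shape;
# under one-to-one, each keeps a distinct shape.
print(phonemes_to_visemes(["m", "a", "p"], MANY_TO_ONE))
print(phonemes_to_visemes(["m", "a", "p"], ONE_TO_ONE))
```

The computational argument for many-to-one is visible in the table sizes: fewer distinct visemes mean fewer mouth shapes to model and blend in real time, which is exactly the trade-off our comparison examined.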
To accurately compare the effect of the different techniques on the participants, each participant was shown both techniques. For each technique they were given an interactive model that could visualize text typed in by the participant, and they were asked to type two standard sentences. After inputting the standard sentences they were allowed to explore the application and let the model speak whatever they wanted. The order of the models alternated between participants: odd-numbered participants started with the many-to-one model, and even-numbered participants started with the one-to-one model. The participants were not told which model was which, but the models had different hair colours so they could be told apart: the many-to-one model was blonde, and the one-to-one model was brunette. After interacting with a model, the participants were asked to answer questions about the animation.
We expected the animations of the brunette agent, with its one-to-one mapping between phonemes and visemes, to be more effective. The quantitative results supported this hypothesis, indicating that the brunette agent's animation was more fluent than the blonde agent's. This corresponded with the qualitative results, as all participants preferred the brunette agent.