ShapeTones is an audiovisual memory game for iOS (iPhone/iPad). A sequence of 3 shapes and tones (“ShapeTones”) is played, and the player tries to reproduce it. Tapping different areas of the screen triggers different ShapeTones. The game starts with 3 ShapeTones, and as the game evolves, more become available. When a new ShapeTone is added, a trial screen demonstrates where each ShapeTone is triggered. Some surprises happen along the way. In the one-player game, the sequence is created automatically. In the two-player game, one player creates a sequence and passes the device to the other player, who tries to repeat it; they then swap roles.
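The round logic described above can be sketched as follows; this is a minimal illustration, and the function and variable names are hypothetical, not taken from the actual app:

```python
import random

def new_sequence(available, length=3):
    """Randomly build a target sequence from the available ShapeTones."""
    return [random.choice(available) for _ in range(length)]

def check_attempt(target, attempt):
    """The player succeeds only by reproducing the sequence exactly."""
    return attempt == target

# One-player round with three ShapeTones available (labelled 0, 1, 2):
target = new_sequence([0, 1, 2])
print(check_attempt(target, list(target)))  # exact reproduction -> True
```

In the two-player variant, `new_sequence` would simply be replaced by the sequence of ShapeTones the first player taps in.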
[Notice: We are no longer recruiting participants for this study]
We are running a series of studies to evaluate an audio game that you can play on a tablet computer or a mobile device. The game is like a music puzzle: it involves listening to a sequence of musical tones and reproducing them with a sequence of taps.
We would like to invite you to help us evaluate this game concept by taking part in one of the evaluation sessions, which last between 45 minutes and 1 hour. Evaluation sessions are held at Queen Mary University of London. You will receive £10 for your participation. An evaluation session involves training you on how to play the game (it's very simple!), then playing different versions of the game and filling in questionnaires to give us your feedback about each version.
Having consulted with members of Bristol and District Blind Bowling Club, we are interested in ways we can use technology to improve accessibility for visually impaired bowlers. At present, sighted bowlers convey information such as the distance to the ‘jack’ and the positions of bowls (woods) in the head through verbal instructions. Each bowl's position relative to the jack and the other bowls has to be memorised (the clock-face strategy) before the bowler targets where to direct their wood. The two main goals are 1) to reduce the memory load involved in targeting, and 2) to provide a targeting system that directs where to bowl the wood.
Throughout the summer of 2014, we will be running a series of studies to investigate audio-haptic interaction techniques that we designed to support non-visual interaction with Digital Audio Workstations. The studies will take place at Queen Mary University of London, Mile End campus. Currently, we are running the Sonic Interaction for Point Estimation study, which compares audio display techniques that make the task of moving a visual object to a target position on a 2D plane accessible non-visually.
Check out this page for more details about this study and how to participate.
We are interested in the haptic representation of auditory slopes. A sonified diagonal line is easy to recognise even when no conversion instructions are given to the participant. The degree of slope depends on the duration and frequency components of the sonified image, so with a small amount of training it is relatively easy to discriminate lines with a shallow gradient from steeper ones. We are interested in the cross-modal conveyance of slope angle, i.e. when using a haptic input device (the Falcon), will participants under- or overestimate the degree of slope when transferring the sonified information to a haptic response device? If haptic devices are to be used to represent sounds or to move between areas of interest in a soundscape, standardisation is required to adapt features of the device (resistance/viscosity) and/or training protocols to ensure accurate performance.
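The idea that slope is carried by duration and frequency can be illustrated with a minimal sketch: a line y = m·x is mapped to a frequency trajectory over a fixed duration, so a steeper line produces a wider frequency sweep. All parameter names and values here are illustrative assumptions, not the study's actual mapping:

```python
def sonify_slope(m, duration=1.0, f0=440.0, f_per_unit=220.0, steps=5):
    """Map a line y = m*x to a frequency trajectory sampled at `steps`
    points over `duration` seconds: the steeper the slope m, the larger
    the total frequency change."""
    return [f0 + f_per_unit * m * (duration * i / (steps - 1))
            for i in range(steps)]

shallow = sonify_slope(0.5)
steep = sonify_slope(2.0)
# The steep line sweeps a wider frequency range than the shallow one.
print(steep[-1] - steep[0] > shallow[-1] - shallow[0])  # True
```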
Horizontal lines in a sonified image can be difficult to discriminate, depending on the frequency range of the gaps that separate them. We ran a study in which sighted participants had to discriminate horizontal-line soundscapes. Aside from a general linear pattern in which lines/frequencies further apart were more successfully discriminated, certain areas of interest within this pattern showed marked drops in performance. Analysis of the frequencies at these areas of interest showed that this occurred for octave and major fifth intervals. This implies that sonifications should pay attention to the imagery at such frequencies to give a true representation of the visual spatial components of the object. In a second part of the task, the procedure was repeated with simultaneous visual input. Performance was better than baseline when the visual and sonified inputs were congruent, and worse when they were incongruent. This demonstrates that congruent provision of sensory information in more than one modality has a positive effect on performance.
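The intervals mentioned above correspond to simple frequency ratios: an octave is 2:1 and a fifth is 3:2. Flagging line pairs at these ratios can be sketched as below; the function name and tolerance are illustrative assumptions, not part of the study's analysis code:

```python
def is_interval(f1, f2, ratio, tol=0.01):
    """Check whether two frequencies (Hz) form a given musical interval,
    within a relative tolerance."""
    hi, lo = max(f1, f2), min(f1, f2)
    return abs(hi / lo - ratio) < tol * ratio

OCTAVE = 2.0   # 2:1 frequency ratio
FIFTH = 3.0 / 2.0  # 3:2 frequency ratio

print(is_interval(440.0, 880.0, OCTAVE))  # True
print(is_interval(440.0, 660.0, FIFTH))   # True
print(is_interval(440.0, 700.0, OCTAVE))  # False
```

A sonification tool could run such a check over the frequencies assigned to adjacent lines and warn when a pair lands on one of these problematic intervals.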
On December 18th, the DePIC team will be running a Participatory Design Workshop with visually impaired musicians and audio production specialists. The workshop will be held at Queen Mary University of London and aims to explore design ideas for accessible tools that would assist visually impaired users when collaborating with sighted colleagues on tasks requiring the use of a Digital Audio Workstation, e.g. composing, recording, adding effects, etc.
During the workshop, we plan to discuss the problems people have encountered when using existing tools and when collaborating with others. We will also examine some technologies that enable users to interact with a computer using senses other than sight (e.g. audio and haptics), and together, we will start thinking about ways to use this technology in accessible music production tasks.