Original Post: 18th August, 2018
Feedback
Questions:
Original Post: 18th July, 2018

The Technology
See this in-depth guide to Bluetooth beacon technology from Blue Maestro.

Summary
Bluetooth beacons are Bluetooth Low Energy transmitters which broadcast, or ‘advertise’, their unique identifier to any nearby Bluetooth-capable device, such as a smartphone or tablet. When in close proximity to a beacon, or within a specified range, the receiving device can use this information to trigger a specific action or set of actions. Generally, beacons do not receive information from other devices, so they cannot store information about nearby devices.

Beacon Protocols
- iBeacon – Apple's protocol, released in 2013. Stable but locked down. Works with both Apple and Android devices, though more reliably on Apple devices.
- Eddystone – Google's open standard protocol, released in 2015. Can be implemented without restriction and offers developers more access to features. Works on both iOS and Android. (A small sketch decoding an Eddystone frame follows below.)
- AltBeacon – An open source specification that defines a message format for beacon advertisements. All features are available to developers at no cost, but the platform is underdeveloped.

What Can They Do?
- The Physical Web
- Proximity
- Telemetry
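Because Eddystone is an open specification, its advertisement frames are straightforward to decode by hand. Below is a minimal Python sketch of this, assuming the Bluetooth service data bytes have already been obtained from a BLE scan; the namespace and instance values in the example are invented:

```python
def parse_eddystone_uid(service_data: bytes):
    """Split an Eddystone-UID frame (the service data attached to UUID 0xFEAA)
    into its parts: calibrated TX power, 10-byte namespace id and
    6-byte instance id."""
    if len(service_data) < 18 or service_data[0] != 0x00:
        return None  # not a UID frame (0x10 is URL, 0x20 is telemetry)
    tx_power = int.from_bytes(service_data[1:2], "big", signed=True)
    namespace = service_data[2:12].hex()
    instance = service_data[12:18].hex()
    return tx_power, namespace, instance

# Example frame: TX power -20 dBm, all-zero namespace, made-up instance id.
frame = bytes([0x00, 0xEC]) + bytes(10) + bytes.fromhex("aabbccddeeff")
print(parse_eddystone_uid(frame))  # (-20, '00000000000000000000', 'aabbccddeeff')
```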
For more examples of beacon use cases and in-depth case studies, see this post from 'Lighthouse'.

Pre-existing Uses in Theatre
This talk by Dustin Freeman on his work on Joshua Marx's 'The Painting' is the only example of Bluetooth beacon technology being used in a theatre production that I could find (there is no useful record of the production of 'The Circle' Dustin mentions at the end of the talk). The production is a useful experiment in this type of work, and the talk does an excellent job of highlighting the issues the team faced, potential workarounds and their ideas for the future of this kind of work.

Notes on the Talk
Dustin's Experience with iBeacons
My Thoughts on a Bluetooth Beacon System
I would look to use Bluetooth beacons in a similar way to how audio events are triggered within a game. By using the radius of the signal as a type of box collider, it might be possible to replicate some of the features of the Wwise // Unity integration scripts, such as 'trigger enter' (the receiver enters the radius of a beacon) and 'trigger exit' (the receiver exits the radius). This would avoid the need to calculate the exact distance from a beacon, which would be advisable considering the margin of error (± 50% according to Dustin). I have seen examples which use more of the sensors available in a smartphone to better determine a position in space (see: https://estimote.com/) and, if this project were to develop to that point, this would be useful for triggering events or controlling RTPCs more accurately. As for connecting to my existing system, it would be possible to connect a phone to Pure Data by developing an app which transmits the beacon triggers through the Open Sound Control (OSC) protocol into the central system. Whether a new app would need to be developed for each show, or a universal app could be created, is an area which would need to be explored in more detail. It is also possible to use a Micro:bit as an Eddystone beacon. The sketch below gives a rough idea of how this trigger enter // exit logic might work.
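As an illustration of this collider-style triggering, here is a minimal Python sketch; it assumes RSSI readings arrive from some scanning source (a phone app, or a library such as bleak on a laptop), and the thresholds, OSC address scheme and port are placeholder assumptions. The python-osc package handles the OSC side:

```python
import random
import time

from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

client = SimpleUDPClient("127.0.0.1", 9000)  # Pure Data patch listening for OSC

# Hysteresis thresholds in dBm: 'enter' only fires above ENTER_RSSI and 'exit'
# only below EXIT_RSSI, so noisy readings near the boundary don't re-trigger.
ENTER_RSSI = -60
EXIT_RSSI = -75

inside = {}  # beacon id -> currently inside its 'collider'?

def on_rssi(beacon_id: str, rssi: int) -> None:
    """Treat the beacon radius like a box collider: send trigger_enter /
    trigger_exit messages on boundary crossings only."""
    was_inside = inside.get(beacon_id, False)
    if not was_inside and rssi > ENTER_RSSI:
        inside[beacon_id] = True
        client.send_message("/beacon/trigger_enter", beacon_id)
    elif was_inside and rssi < EXIT_RSSI:
        inside[beacon_id] = False
        client.send_message("/beacon/trigger_exit", beacon_id)

# Stand-in for a real scan loop: a random walk across the boundary.
if __name__ == "__main__":
    rssi = -70
    for _ in range(50):
        rssi = max(-95, min(-40, rssi + random.randint(-6, 6)))
        on_rssi("beacon_01", rssi)
        time.sleep(0.1)
```

The two-threshold approach also sidesteps the distance-estimation problem: nothing here depends on converting RSSI to metres.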
Perhaps a Bluetooth LilyPad Arduino could be made into a wearable receiver that could be incorporated into the story of the show. By controlling where the receiver is placed on participants, it might be possible to counter some of the triggering issues encountered by Dustin in 'The Painting'. Beacons could also provide further context to a show by broadcasting information to be displayed in the app, such as story cues, suggestions or directions. They could also prompt audio cues reserved for individuals, which could be listened to in either traditional or bone conduction headphones.

Conclusion
Overall, this technology could significantly advance this project and is worth exploring in future experiments. Giving the system some sense of the surrounding space brings the immersive experience one step closer to a virtual world scenario where everything can be strictly monitored and adapted to. Further study of 'The Physical Web' would also benefit this project.

More Resources
Original Post: 8th July, 2018
Description
This build is intended to showcase one type of narrative mechanism made possible by the unique combination of real world input, Pure Data and Wwise. In theory, this is a book that, when opened, narrates a story from a randomly selected position within the recording, and the recording deteriorates into digital noise as the story progresses. When the book is closed the voiceover fades away; when it is reopened, it starts the story at a new random position and at a greater level of deterioration. Eventually it becomes impossible to hear the story, no matter how many times the book is opened. This build was inspired by the idea of the ‘audio log’, a narrative device widely used in videogames to tell concurrent stories, extend the game world beyond what the player directly experiences, and provide clues or context for missions. In this example the audio is delivered non-linearly, in that the story could begin from any point, giving the impression that the audio log is deteriorating in front of you. This could obviously be changed into a more standard narrative-telling device if required.
Preparing Audio
For this demonstration, four stories were prepared using archive recordings from a previous project of mine. One copy of each recording was kept clean and the other was processed using Glitchmachines' ‘Fracture’ plugin to create a deteriorated, glitchy digital-noise version. The impression of gradual deterioration is achieved by crossfading between these two files.
When the circuit is broken, a chain of processes follows alongside the triggering of the ‘Play_Story_03’ event. These control the RTPC value assigned to the level of deterioration:
When the circuit is closed (i.e. when the magnet is close to the reed switch) the ‘Stop_Story_03’ event is triggered in Wwise to fade out the story. The count is reset but the previous starting level of deterioration is remembered by the patch unless manually reset.
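Here is a minimal Python sketch of this control logic as a whole, to show the state machine in one place: the 'Play_Story_03' and 'Stop_Story_03' event names come from the patch described above, while the OSC port, address scheme and the 'Deterioration' RTPC name are assumptions for illustration (in the real build, Pure Data itself holds this state and forwards values to Wwise):

```python
import random

from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

client = SimpleUDPClient("127.0.0.1", 9000)  # relay patch forwarding to Wwise

class DeterioratingBook:
    """Reed-switch logic for the book: opening it starts the story from a
    random position at the current deterioration level; each opening raises
    that level, and it is remembered until manually reset."""

    def __init__(self, step: float = 12.5):
        self.deterioration = 0.0  # RTPC value: 0 = clean, 100 = pure glitch
        self.step = step          # how much each re-opening degrades the story

    def circuit_broken(self):     # magnet moved away: the book was opened
        client.send_message("/wwise/event", "Play_Story_03")
        client.send_message("/wwise/seek", random.random())  # normalised start
        client.send_message("/wwise/rtpc/Deterioration", self.deterioration)
        self.deterioration = min(100.0, self.deterioration + self.step)

    def circuit_closed(self):     # magnet back on the reed switch: book closed
        client.send_message("/wwise/event", "Stop_Story_03")

    def reset(self):              # manual reset between participants
        self.deterioration = 0.0

book = DeterioratingBook()
book.circuit_broken()  # story plays; the next opening will be more degraded
book.circuit_closed()  # voiceover fades out
```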
‘Detroit: Become Human’ is a narrative-driven adventure game, developed by Quantic Dream (Heavy Rain, Beyond: Two Souls), with an emphasis on non-linear methods of experiencing a story.
Description of the game from their website: “Enter the near-future metropolis of Detroit in 2038 – a city rejuvenated by the introduction of highly advanced androids that exist only to serve mankind. But that’s all about to change… Step into the shoes of three distinct android characters as this brave new world teeters on the brink of chaos. Your decisions dramatically alter how the game’s intense, branching narrative plays out. With thousands of choices and dozens of endings, how will you affect the future of Detroit?”

Oops, did I accidentally watch all of this 9-hour walkthrough for 'research'...?
Analysis
This game is extremely cinematic, with music, ambiences, sound effects and dialogue deliberately falling into the hyper-real style of a Hollywood blockbuster. Its sound world fits its near-future setting by blending familiar sounds of present-day Detroit with ever-present digital elements to give the world a sci-fi flavour. Each playable character is an android, so there are also appropriately electronic interface sounds and blockbuster sci-fi style 'whooshes' when 'abilities' are enabled, including a complete sonic shift when Connor uses his heightened senses to investigate a crime scene.

Gameplay revolves around the idea of free will, complementing the narrative themes, and this is explored by providing the player with a continuous series of choices. These decisions might appear while in conversation with other story characters or during action sequences, where the option is usually between taking either a violent or non-violent approach. Music and sound both respond to a player's choice by matching the tone of the resulting consequences: for example, if a decision results in violent action, as opposed to a non-violent option, the music and sound will change to underscore this appropriately. This is possible because most of the action plays out like a series of cutscenes. While gameplay is relatively linear, in that you are ushered from one choice to the next with next to no element of exploration, it is the numerous choices and multiple story paths which provide the incentive to play. It is even possible for a narrative to end prematurely at any time, reinforcing in the player the importance of their decisions.
Once a particular scene of a character’s story has played out, players are shown a ‘narrative tree’ which displays any decisions made, how they impacted the story and where significant deviation from their narrative could have occurred. It does not show the contents of the alternative narrative path but does provide an option to load the game from a checkpoint to remake a decision. However, changing a decision at any point will have a knock-on effect for the entire story so these story-trees also remind the player that each decision matters, encouraging multiple playthroughs.
Using a story tree is a very clear method of mapping out a non-linear narrative: it provides increasing detail the closer it is inspected while also showing the story structure at a glance. If this project were to move in a similar direction, I would make use of a similar visual story mapping method. Perhaps a sound cue could be assigned to each junction, altering some part of the story; a rough sketch of this idea follows below.
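Here is a minimal sketch of such a tree in Python, assuming one Wwise-style event name per junction; all scene and cue names are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class StoryNode:
    """One junction in a narrative tree: the scene to play, an optional
    sound cue fired on arrival, and the choices branching away from it."""
    scene: str
    sound_cue: str | None = None               # e.g. a Wwise event name
    choices: dict[str, "StoryNode"] = field(default_factory=dict)

    def choose(self, decision: str) -> "StoryNode":
        return self.choices[decision]

# A toy one-decision tree in the violent // non-violent mould of Detroit.
root = StoryNode("confrontation", "Play_Tension_Rise", {
    "violent":     StoryNode("aftermath_dark", "Play_Underscore_Dark"),
    "non_violent": StoryNode("aftermath_calm", "Play_Underscore_Calm"),
})

node = root.choose("non_violent")
print(node.scene, node.sound_cue)  # aftermath_calm Play_Underscore_Calm
```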
Relevance to Project
Constructing a ‘non-linear’ narrative in this way requires choices to appear one after the other, each related to what came before and changing what might happen after. In a game world setting, players can be physically limited in their options. In a real-world space, however, the audience have no restrictions on what they might interact with. Unless they were led down a corridor or path which contained only the available options, one after another, it becomes impossible to predict // restrict a participant's engagement with a space. This causes particular issues with the sound of a production, which cannot behave in the same linear fashion of 'they chose this story path, now play this cue until they reach the next choice'. It must react dynamically and evolve depending on a participant's interaction with a space. This poses different challenges in terms of underscoring a narrative, which remains a key component of this project.

I have found that open world // sandbox games still have issues with linearity in their storytelling, i.e. the player has reached this quest so will complete it linearly (with perhaps a few deviations based on given choices) or abandon it entirely. However, the most recent Zelda game (The Legend of Zelda: Breath of the Wild (2017)) moves away from this type of open-world quest storytelling. It gives the player an unrivalled level of interaction with its game world while containing a story which players are encouraged to experience on their own terms. A future case study will explore the role of sound in this game and how elements might be implemented into my own project. For the record, I would class productions developed by Punchdrunk as 'open world' explorations of a linear narrative.

In this post on his site, Seb Chan describes his experience of Then She Fell by Third Rail Projects, another company focused on immersive theatre. This particular production seems to function in a similar way to the narrative structure displayed in 'Detroit: Become Human' (as is noted by Seb with his own example of 'Dragon Age: Origins'). While he makes no mention of sound's role in the production, it seems feasible to me that alternate cues could be triggered depending on the choices made by participants. Because participants are directed through the space by a cast member, they can be presented with a series of choices; the cast member can then lead them down their chosen path and, subsequently, the sound can be triggered to follow them.

The overall problem I can see with implementing this form of game sound in a real space lies precisely in this idea of the individual experience. Without the use of headphones (something I would like to avoid in this project), how can a participant be assured a unique sonic experience? Since my project will not take place in a space that large, or be a show that complex, this shouldn't be an issue for the moment. It will, however, be an area worth discussing in the final report.

Games and Companies Similar to Quantic Dream
Punchdrunk in their own words:
“Since 2000, Punchdrunk has pioneered a game changing form of theatre in which roaming audiences experience epic storytelling inside sensory theatrical worlds. Blending classic texts, physical performance, award-winning design installation and unexpected sites, the company's infectious format rejects the passive obedience usually expected of audiences. Punchdrunk has developed a phenomenal reputation for transformative productions that focus as much on the audience and the performance space as on the performers and narrative. Inspired designers occupy deserted buildings and apply a cinematic level of detail to immerse the audience in the world of the show. This is a unique theatrical experience where the lines between space, performer and spectator are constantly shifting. Audiences are invited to rediscover the childlike excitement and anticipation of exploring the unknown and experience a real sense of adventure. Free to encounter the installed environment in an individual imaginative journey, the choice of what to watch and where to go is theirs alone.”
Notes Taken
The Drowned Man
Summary
Based on the points raised in this lecture about the use of sound in Punchdrunk's productions, there does not appear to be any way the audience can influence the show, let alone the sound of it. The company focuses on its audience experiencing a story in a non-linear fashion, in the sense that they have agency over the order in which they view scenes and scenarios. The sound runs the length of the performance and, while the performers do react to sonic cues, each scene and the show as a whole still appear to move from point A to point B. However, it was interesting to hear about the intuitive ways in which the creative team worked around budget limitations, as is often the case when working in theatre, and around sound bleeding in from other rooms or 'stages'. Sound bleed will not be an issue for this project as it will be confined to one space, but it is something to keep in mind should the project expand.
Punchdrunk & MIT
See the MIT Media Projects site for information about this project and here for a first-hand account of the experience.

Key points:
- This project was primarily focused on ways in which virtual participants could experience a Punchdrunk show. This was achieved by pairing an audience member, through an internet connection, to a ‘player’.
- Very interesting work with the masks, which were “equipped with a microphone, a temperature sensor, a heart rate monitor, an EDA sensor, a Bluetooth location sensor, and a RFID tag to capture an onsite participant’s activities, expressions, and state of mind. Masks were retrofitted with bone conduction headsets to allow operators to send audio messages to onsite participants which kept their ears free to listen to the immersive audio experience”.
- A number of ‘portals’ were set up around the space where the virtually-connected user could influence the real-world environment, for example a ‘ghost’ typewriter which would tap out messages written by the companion. Other portals were automatically triggered when someone wearing an augmented mask was near, such as ghostly writing on a mirror or books flying off shelves.
- Each cue controlled by the offsite participant was processed by a “master logic system” called ‘cauldron’. These cues not only controlled actions in the physical space but also controlled all the audio elements experienced by the virtual audience member. “The audio systems for Sleep No More were based around a virtual streaming and mixing environment running inside Reaper. The audio experience was organized into cues, coded in XML, which were executed by the script engine. Each cue could smoothly alter parameters of effects, inputs and outputs, or play back pre-recorded material. Outputs were streamed in real-time to online participants, and android devices in the space using a combination of Icecast and Wowza streaming servers. Live inputs originated from performer microphones and could also be connected to telephones on the set and at participants’ homes. All content for the experience was encoded binaurally.”
- There is no mention of whether any main sound elements were directly affected (Punchdrunk are notoriously cagey about releasing details), but based on the interview with Stephen Dobbie (which took place three years after the project) it can be assumed that there was none. The audience also had no direct effect on the narrative of the performance.
Inspirations for Project
Position Tracking System
Using my own positioning system to track participants' positions would be very beneficial to this project. This would be possible using a Bluetooth system such as iBeacon. Some examples are listed below:

It also appears to be possible to build an iBeacon-type system using an Arduino or a Micro:bit (which already contains Bluetooth capabilities), as seen here:

This will be explored in greater detail and followed up in a separate blog post. It could be useful for determining where a participant is in relation to specific triggers, what they are looking at, and for controlling real-time parameter controls (RTPCs) within Wwise. It might be possible to track participants' mobile phones using their built-in Bluetooth capabilities; otherwise a small Bluetooth emitter could be used and either given to participants or worked into the narrative somehow, similar to the masks in ‘Sleep No More’.

Categories of Sound
From Stephen Dobbie's lecture it seems that the company works with sound based on three defined categories: songs, music and soundscape. Thinking about my own project in terms of sonic categories will help me to define levels of interaction. During a performance it seems participants only have influence over the sound in a very localised sense, triggering songs from car radios etc., whereas the goal of this project is to have an entire sonic world which evolves based on participants' actions within a space.

Further References
- 'The Immersed Audience: How Immersive Theatre is Taking its Cue from Video Games' by Thomas Mullen for The Guardian
- 'Welcome to Fallow Cross: Inside the Secret Village Made by Punchdrunk' by Lyn Gardner for The Guardian
- 'A Guinea Pig's Night at the Theatre' by Dave Itzkoff for The New York Times

Original Post: 12th June, 2018
New ideas previous to meeting:
Meeting:
Next Steps:
By next week:
By preview:
Original Post: 10th June, 2018
This post outlines some early reading, questions and areas of interest with regard to the overall aesthetic and the more general question of how the project might sound, look and feel. Throughout this project it will be necessary to research a number of separate, but related, areas:

Theatrical Sound Design
Site-specific discussions and examples of immersive sonic practices.

Sonic Installations
Examples that react to interaction or try to influence their surroundings, and how people experience them.

Game Sound Design
Participants will be able to directly influence their surroundings, allowing for the potential use of 'achievement' sounds or sound triggers // cues. Exploring the crossover points between this and theatre sound is where this project becomes especially interesting.

Nonlinear Storytelling
In a similar fashion to the player of a non-linear video game, participants will have the option to interact with elements of the space in any order they choose. This provides the potential for a non-linear, participant-led method of storytelling which could follow multiple possible paths.
Originally Posted: 9th June, 2018
The first week was dedicated to finding a connection between Arduino and Wwise. Below are three ways in which I believe it would be possible to control Wwise using Arduino sensors:
1) Using Unity to receive OSC messages and relay these to Wwise through scripting events.
Pros:
- The 'room' can be designed within Unity and parameters set up in this virtual space, without the need to change the commands when transferring into the real world.
Cons:
- Requires Unity to be running all the time, which is not my ideal solution. Eventually I would like the system to run itself from a Raspberry Pi which, while able to run Wwise (https://www.audiokinetic.com/library/2017.1.6_6446/?source=SDK&id=linux__specificinfo.html), does not support Unity without running as an Android build (http://www.instructables.com/id/Raspberry-Pi-Running-Unity/). It might, however, be possible to create a simple 2D 'app' which runs on the Raspberry Pi, shares the same SoundBanks and simply triggers the defined events.

2) Create a 'game engine' using the Wwise authoring API to trigger sounds (see the sketch after this list).
Pros:
Cons:
3) Use MIDI to directly control Wwise.
Pros:
Cons:
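To give a feel for the second option, here is a minimal sketch using the waapi-client Python package against the Wwise Authoring API; it assumes Wwise is running with WAAPI enabled on its default port, and the event and RTPC names are placeholders carried over from earlier posts. Note that this drives the authoring tool rather than a shipped runtime, so it suits prototyping more than a final installation:

```python
# pip install waapi-client; requires Wwise running with WAAPI enabled.
from waapi import WaapiClient

GAME_OBJECT = 1  # arbitrary id, registered once before use

with WaapiClient() as client:  # connects to ws://127.0.0.1:8080/waapi by default
    client.call("ak.soundengine.registerGameObj",
                {"gameObject": GAME_OBJECT, "name": "Room"})
    # Incoming Arduino // OSC readings could map straight onto calls like these:
    client.call("ak.soundengine.postEvent",
                {"event": "Play_Story_03", "gameObject": GAME_OBJECT})
    client.call("ak.soundengine.setRTPCValue",
                {"rtpc": "Deterioration", "value": 42.0,
                 "gameObject": GAME_OBJECT})
```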
Ultimately, I want to create a system which works as closely to game sound as possible (SoundBanks, event triggers etc.), is stable, has no limits on functionality and will work without the need for a laptop (i.e. on a Raspberry Pi). Through further testing, and attempts at the second option listed above, I will determine whether or not to continue with MIDI control.

Originally Posted: June 9th, 2018
The aim of my project will be to develop a real-world space which responds to interaction in a similar way to a game world. Using a combination of Arduino sensors, Wwise and Unity, I aim to devise methods of real world interaction which produce changes in overall atmosphere, one-off events or a developing soundscape. This idea was formulated with theatre applications in mind, the intention being that actors could react to cues triggered by an audience's interaction with a space. If a set designer were to design the space within the Unity platform, it would then be possible for a sound designer to create an immersive experience within Wwise, using a method similar to game design.

The crux of this project relies on communication between the Arduino sensors and Wwise, whether this be a direct connection or one using Unity as a 'way in'. I have seen an example connecting an Arduino board directly to FMOD but would prefer to use Wwise, as it is software I am more familiar with and appears to be the more flexible option. It might also be possible to work with a Design and Digital Media student to develop a standard asset pack of objects which could be used to design a prototype set, which could then be represented in the real world.

Ultimately, I would like to develop a non-linear method of experiencing theatre, or indeed any space, in a way which puts sound at the forefront. I can, however, see this project having uses outside of what we classify as theatre, for example in 'Escape Rooms' or installation work. Therefore an exploration and discussion of other possible uses will be another key element of this project.
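As a first step towards that Arduino-to-Wwise link, here is a minimal Python sketch of a serial-to-OSC bridge; the serial port, the one-reading-per-line format printed by the Arduino, and the OSC port and address scheme are all assumptions for illustration, with a Pure Data or Unity layer then mapping the OSC messages onto Wwise events:

```python
# pip install pyserial python-osc
import serial
from pythonosc.udp_client import SimpleUDPClient

arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # port is machine-specific
client = SimpleUDPClient("127.0.0.1", 9000)               # Pure Data relay patch

while True:
    # Expect lines like "reed 1" or "light 512" from the Arduino sketch.
    line = arduino.readline().decode("ascii", errors="ignore").strip()
    name, _, value = line.partition(" ")
    try:
        # e.g. "/sensor/reed 1" -> the patch maps this to an event or RTPC
        client.send_message(f"/sensor/{name}", int(value))
    except ValueError:
        continue  # ignore incomplete or malformed lines
```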