1. Reading Emotions Through Affective Computing: Rosalind Picard (Future of StoryTelling 2014)

    05:06

    from Future Of StoryTelling / Added

    29 Plays / 0 Comments

    MIT professor Rosalind Picard is the foremost expert on “affective computing.” Her groundbreaking research uses computers to read people’s emotions and offers incredible opportunities for storytellers to measure and understand their stories’ precise impact.

    • Fusiform Polyphony by Ken Rinaldo

      03:32

      from Ken Rinaldo / Added

      259 Plays / 0 Comments

      Fusiform Polyphony is a series of six interactive robotic sculptures that compose their own music from participants' facial images. Micro video cameras mounted on the ends of the robots move toward people's body heat and faces, capturing snapshots. These images are digitally processed and pixelated to produce a constantly evolving generative soundscape in which facial features and interaction are turned into melody, tone and rhythm. Fused together, these elements make the viewer a participant, actor and conductor, defining new ways of interacting with robots and allowing the robots to interact safely with humans in complex, natural environments. An important element of the installation is seeing oneself through the robots' artificial eyes: each robot tracks and captures images, exposing the nature of algorithmic robotic vision.

      The works are covered in human hair and explore new morphologies of soft robotics, an emerging field in which natural materials make the works approachable and friendly. The hair points to a human-robot hybrid moment in our own evolution, in which the intelligence of robots fuses more fully with our own and enables new forms of robotic augmentation. Each robot has differently colored hair, giving it an individual character.

      Live camera video from the robots is processed through Max/MSP and Jitter and projected onto five screens at the periphery of the installation. When a robot reaches head height, a sensor at its tip is triggered and a facial snapshot is taken; the snapshot is held in a small area at the upper right of the projected screen. That snapshot is broken down into a 300-pixel grid, and the variations in red, green and blue data of each pixel are extracted and passed from Max/MSP to Ableton Live, a sound-composition tool that selects the musical samples determining rhythm, tempo and dynamics.

      The robotic aspects of the work are controlled by six Mac Minis with solid-state drives, each wired to an individual MIDI-based controller driving sensor and motor units. The Mac Minis are networked to a Mac Pro tower that processes the video of the six selected images and interfaces them to Ableton Live. Changing pixel data constantly changes the Ableton virtual-instrument selection sets, with random seeds taken from the snapshots. The robotic structures were created from 3D-modeled cast urethane plastics, monofilament, carbon-fiber rod and laser-cut aluminum elements supporting the computers, microprocessors and motor-drive systems.

      These robots structure, inform, enhance and magnify people's behavior and interactions as they auto-generate a unique, constantly evolving generative soundscape. They take the unique multicultural makeup of each person and create "facial songs"; joined into six robotic/human soundscapes, those songs create an overall polyphonic human and video experience.
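      The snapshot-to-sound mapping described above (a facial snapshot reduced to a 300-pixel grid whose per-pixel red, green and blue values drive sample selection, rhythm, tempo and dynamics) can be sketched roughly as follows. This is a minimal illustration under assumed mappings (grid shape, MIDI ranges, brightness-to-tempo rule), not Rinaldo's actual Max/MSP and Ableton patch.

```python
# Rough sketch of the snapshot-to-music mapping described above.
# Assumptions (not from the installation): a 20x15 grid (300 pixels),
# red -> pitch, green -> velocity, blue -> note length, mean brightness -> tempo.
import random
from PIL import Image  # pip install pillow

GRID_W, GRID_H = 20, 15  # 300 "pixels", as in the description

def facial_song(snapshot_path):
    img = Image.open(snapshot_path).convert("RGB").resize((GRID_W, GRID_H))
    pixels = list(img.getdata())                 # 300 (r, g, b) tuples

    # The description mentions random seeds taken from the snapshot itself.
    random.seed(sum(sum(p) for p in pixels))

    brightness = sum(sum(p) for p in pixels) / (len(pixels) * 3)
    tempo_bpm = 60 + int(brightness / 255 * 80)  # map brightness to 60-140 BPM

    events = []
    for r, g, b in pixels:
        note = 36 + r * 48 // 256                # red channel -> MIDI pitch (36-83)
        velocity = 40 + g * 87 // 256            # green channel -> loudness
        length = 0.125 * (1 + b // 64)           # blue channel -> duration in beats
        events.append((note, velocity, length))

    random.shuffle(events)                       # evolving, non-repeating order
    return tempo_bpm, events

if __name__ == "__main__":
    tempo, song = facial_song("snapshot.jpg")
    print(f"{tempo} BPM, {len(song)} notes, first event: {song[0]}")
```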

      • I Am Disappointed

        01:41

        from Joseph Oak / Added

        101 Plays / 0 Comments

        This video is about a college student named Solomon who is using a machine-learning program to determine the source of his acne. The service, however, not only informs him but also tries to motivate him and change his behavior by delivering its message in a nagging (somewhat mother-like) tone. By touching on the fields of machine learning and affective computing, I hope to illustrate a future in which computers have become more human and "intelligent" in their interaction and deduction but, as a result, more temperamental.

        • The Drama Manager

          05:00

          from Anni Garza / Added

          60 Plays / 0 Comments

          Excerpt. Interactive installation featuring an animated character whose mood changes depending on how sweetly or roughly you communicate with her. Depending on her mood, she guides the user through a non-linear story.
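          The mood-driven branching described above could work along these lines; the word lists, mood decay and scene names below are illustrative assumptions, not taken from Garza's installation.

```python
# Minimal sketch of a mood-driven "drama manager": the character's mood
# drifts up or down with how sweetly or roughly the user talks to her,
# and the current mood picks the next branch of a non-linear story.
# Word lists, thresholds and branch names are illustrative assumptions.
SWEET = {"please", "thanks", "lovely", "sorry", "dear"}
ROUGH = {"stupid", "hurry", "shut", "useless", "no"}

class DramaManager:
    def __init__(self):
        self.mood = 0.0  # -1.0 (annoyed) .. +1.0 (delighted)

    def listen(self, utterance: str) -> None:
        words = set(utterance.lower().split())
        delta = 0.2 * len(words & SWEET) - 0.2 * len(words & ROUGH)
        self.mood = max(-1.0, min(1.0, 0.8 * self.mood + delta))

    def next_scene(self) -> str:
        if self.mood > 0.3:
            return "confide_secret"      # she opens up and advances the plot
        if self.mood < -0.3:
            return "withhold_and_sulk"   # she stalls the story
        return "neutral_small_talk"

dm = DramaManager()
dm.listen("please tell me more, thanks")
print(dm.mood, dm.next_scene())
```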

          • Affective Camera Control in Games

            01:42

            from Hector P. Martinez / Added

            158 Plays / 1 Comment

            This video showcases the idea of affective camera control. Several computational models of 'fun' were built from data collected from a number of players and then used to control the camera configuration in a new game. The online demo is available at http://www.itu.dk/people/hpma/MB/Affective_Camera_Control/Demos.html
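            A rough sketch of the loop described above: a model of 'fun' trained on player data scores candidate camera configurations, and the game keeps the highest-scoring one. The features, training data and camera parameters are assumptions for illustration, not the models used in the video.

```python
# Sketch of affective camera control: a model trained on annotated play
# sessions predicts "fun" for candidate camera settings, and the game
# picks the configuration the model scores highest. Features, data and
# parameter ranges below are illustrative assumptions.
import itertools
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: per-session gameplay features plus the camera
# distance/height used, paired with a reported fun rating (0..1).
X_train = [
    # [deaths_per_min, items_per_min, cam_distance, cam_height]
    [0.5, 2.0, 6.0, 3.0],
    [2.0, 0.5, 10.0, 5.0],
    [1.0, 1.5, 8.0, 4.0],
]
y_fun = [0.8, 0.3, 0.6]

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y_fun)

def best_camera(player_features):
    """Return the (distance, height) predicted to maximise fun for this player."""
    candidates = itertools.product([6.0, 8.0, 10.0], [3.0, 4.0, 5.0])
    return max(candidates,
               key=lambda c: model.predict([player_features + list(c)])[0])

print(best_camera([0.7, 1.8]))
```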

            • lykkemat

              00:56

              from csmpls / Added

              219 Plays / 0 Comments

              Lykkemat uses a Bluetooth EEG headset to entrain the user into elevated alpha- and theta-wave activity, a state associated with calm focus and mindful awareness. It analyzes the user's progress, generating sound and customizing meditation tapes to guide the user along. The software keeps a profile on its user, tweaking its meditation tapes over the course of months or years as the user's skill improves. Eventually, Lykkemat refuses to coach the user at all. http://cosmopol.is/lk

              Thanks to Irina Shklovski @ ITU Copenhagen: http://www.itu.dk/people/irsh/
              Shout out to my libraries: Jose Cardoso's MindsetProcessing (http://www.josecardoso.eu/), Minim (http://code.compartmental.net/tools/minim/) and controlP5 (www.sojamo.de/libraries/controlP5/).
              Written in Processing: http://processing.org
              Headset by NeuroSky: http://neurosky.com
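              The core loop described above (read EEG, estimate alpha/theta activity, adapt the guidance audio, track long-term progress) might look roughly like this. It uses a generic raw-EEG buffer and plain FFT band-power estimates rather than the NeuroSky/MindsetProcessing API the project actually uses; the sample rate, thresholds and adaptation rule are assumptions.

```python
# Rough sketch of lykkemat-style neurofeedback: estimate alpha (8-12 Hz) and
# theta (4-8 Hz) power from a raw EEG buffer and nudge the meditation audio.
# Generic DSP only; the real project reads a NeuroSky headset via
# MindsetProcessing. Sample rate, thresholds and adaptation are assumptions.
import numpy as np

FS = 256  # assumed EEG sample rate in Hz

def band_power(signal, lo, hi, fs=FS):
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return spectrum[(freqs >= lo) & (freqs < hi)].mean()

def feedback_step(eeg_window, profile):
    """One update: compute relative alpha+theta power and adapt the guidance."""
    alpha = band_power(eeg_window, 8, 12)
    theta = band_power(eeg_window, 4, 8)
    total = band_power(eeg_window, 1, 40)
    score = (alpha + theta) / total          # crude "calm focus" index

    profile["history"].append(score)         # long-term profile of the user
    skill = np.mean(profile["history"][-100:])

    if skill > 0.85:
        return "silence"                     # eventually it stops coaching
    if score > profile["target"]:
        return "soften_voice_and_slow_tempo"
    return "add_guiding_prompt"

profile = {"history": [], "target": 0.5}
window = np.random.randn(FS * 2)             # stand-in for 2 s of raw EEG
print(feedback_step(window, profile))
```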

              • BrainGain BiG talk: Toward Affective BCI

                03:11

                from BrainGain / Added

                30 Plays / 0 Comments

                Speaker in this video: Christian Muehl (m), German, University of Twente, Human Media Interaction
                Title: "Toward Affective BCI"
                Short summary: Recognizing a user's emotion is a prerequisite for, and a challenge of, truly natural human-computer interaction. We investigated whether the electrophysiology of the brain (EEG) can give insight into emotional states in a variety of situations: during active self-regulation in a computer game, during passive consumption of music clips, and during controlled auditory and visual stimulation.

                Why this video? BrainGain is a Dutch research consortium that researches and develops neurotechnologies. Research topics include, for example, Brain-Computer Interfaces (BCI), neurostimulation (e.g. Deep Brain Stimulation) and neurofeedback. http://www.braingain.nl On 18-20 December 2011 BrainGain held its annual meeting at the University of Maastricht (the Netherlands). The focus of this year's meeting was value creation and communication to patient organisations and industry. One highlight of the meeting was the BiG talks competition. Our idea was that BiG talks should be the opposite of small talk: a BiG talk is to-the-point communication of a clear, facilitating and enabling message in three minutes. The BiG talks forced us to reflect on the essential and most interesting aspects of our research, and the most persuasive speakers were rewarded with a prize.

                Photography and editing: Anna Sanmartí, Dr. Aleksander Väljamäe
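                The kind of investigation summarized above usually comes down to extracting band-power features per EEG channel and training a classifier on labelled emotional conditions. The sketch below shows that generic pipeline on stand-in data; it is not the study's actual feature set or protocol.

```python
# Generic sketch of EEG-based affect recognition as described in the talk:
# band-power features per channel -> classifier over labelled emotional
# conditions. Data shapes, bands and labels here are illustrative only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 256
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def trial_features(trial):
    """trial: array of shape (channels, samples) -> log band power per channel."""
    feats = []
    for ch in trial:
        freqs = np.fft.rfftfreq(ch.size, 1.0 / FS)
        psd = np.abs(np.fft.rfft(ch)) ** 2
        for lo, hi in BANDS.values():
            feats.append(np.log(psd[(freqs >= lo) & (freqs < hi)].mean()))
    return feats

# Stand-in dataset: 40 trials, 8 channels, 2 s each, labelled 0/1
# (e.g. low vs. high arousal condition).
rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 8, FS * 2))
labels = np.repeat([0, 1], 20)

X = np.array([trial_features(t) for t in trials])
print(cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean())
```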

                • EMO-Synth at M HKA OUTPUT II by CHAMP/EXP

                  01:52

                  from Valery Vermeulen / Added

                  114 Plays / 0 Comments

                  On the 4th of March, the interactive multimedia project Montage Cinema Revived by the EMO-Synth (Prototype 05.1) will be shown during the event M HKA OUTPUT II, organised by Champ d'Action and M HKA (Museum of Contemporary Art Antwerp). Montage Cinema Revived by the EMO-Synth is the final performance in a series realized by the collective Office Tamuraj and the curator Astrid David.

                  In Montage Cinema Revived by the EMO-Synth, an innovative interactive multimedia system, the EMO-Synth, plays a central role. This software and hardware system automatically generates music and manipulates images to bring the user into certain predefined emotional states. During performances of the project, the emotional reactions of the volunteers are measured and processed using biosensors that register various psychophysiological parameters such as heart rate (ECG) and stress level (GSR, galvanic skin response). Owing to its outspoken multidisciplinary character, the EMO-Synth project combines several techniques from artificial intelligence (genetic programming, machine learning, reinforcement learning, ...), affective computing, advanced statistical modeling, and automatic sound generation and image manipulation.

                  For the realisation of Montage Cinema Revived by the EMO-Synth, the EMO-Synth is integrated for the first time into a cinematographic setting. During the performances at M HKA OUTPUT II, the EMO-Synth will seek to automatically generate ideal personalized soundtracks with maximal emotional impact for re-edited versions of dedicated experimental video material. The automatic music generation makes use not only of synthesized sounds but also of live musicians directed by the system via virtual scores.

                  The performance of Montage Cinema Revived by the EMO-Synth (Prototype 05.1) relies on the most recent prototype of the EMO-Synth, Prototype 05.1, which was realized during the residency and exhibition Tangible Feelings at the Center for Digital Cultures and Technologies (iMAL) in September 2011. For the event M HKA OUTPUT II by CHAMP/EXP, a performance of Montage Cinema Revived by the EMO-Synth (Prototype 05.1) will be presented, realized in collaboration with Champ d'Action, directed by Serge Verstockt. The musicians of Champ d'Action involved are Kathleen Coessens (piano, keyboard), Ann Eysermans (double bass, harp), Tim Vets (e-guitars) and Ko Kowalsky (e-guitars). During the performances the video Black and White Life by Ann Eysermans will be remixed by the EMO-Synth.

                  For more info on the EMO-Synth project please visit www.emo-synth.com

                  Video editing by Kathleen Coessens

                  Montage Cinema Revived by the EMO-Synth (Prototype 05.1) was realised with the support of the Flemish Audiovisual Fund (www.vaf.be), Flanders Image (www.flandersimage.com) and iMAL (www.imal.org)
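                  The feedback loop described above (biosensors feeding an estimated emotional state, which in turn adjusts the generated music toward a predefined target state) can be sketched in simplified form. The arousal index and the mutate-and-keep search below stand in for the EMO-Synth's genetic programming and machine learning; all constants are illustrative assumptions.

```python
# Simplified sketch of an EMO-Synth-style loop: estimate arousal from heart
# rate (ECG) and skin conductance (GSR), compare it with a predefined target
# state, and keep the sound-parameter mutation that moves the listener
# closer to it. A stand-in for the real system's genetic programming and
# machine learning; all values below are illustrative assumptions.
import random

TARGET_AROUSAL = 0.7  # the predefined emotional state the system steers toward

def arousal(heart_rate_bpm, gsr_microsiemens):
    """Crude 0..1 arousal index from the two signals named in the description."""
    hr = min(max((heart_rate_bpm - 50) / 70.0, 0.0), 1.0)   # ~50-120 BPM range
    sc = min(max(gsr_microsiemens / 20.0, 0.0), 1.0)        # ~0-20 microsiemens
    return 0.5 * hr + 0.5 * sc

def mutate(params):
    """Small random change to the generative sound parameters."""
    return {k: min(max(v + random.uniform(-0.1, 0.1), 0.0), 1.0)
            for k, v in params.items()}

def measure_viewer(params):
    """Placeholder for playing the soundtrack and reading the biosensors."""
    return 60 + 60 * params["tempo"], 20 * params["brightness"]   # toy HR, GSR

params = {"tempo": 0.5, "density": 0.5, "brightness": 0.5}
best_error = abs(arousal(*measure_viewer(params)) - TARGET_AROUSAL)

for _ in range(50):
    candidate = mutate(params)
    error = abs(arousal(*measure_viewer(candidate)) - TARGET_AROUSAL)
    if error < best_error:                 # keep variants that move closer
        params, best_error = candidate, error

print(params, best_error)
```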

                  • BCI & Physiological Computing: Differences, Similarities & Intuitive Control

                    19:41

                    from Kiel Gilleade / Added

                     87 Plays / 0 Comments

                    For more information on Physiological Computing visit our research blog at http://physiologicalcomputing.net

                    • BCI, Physiological Computing and Gaming

                      19:41

                      from Stephen Fairclough / Added

                       15 Plays / 0 Comments

                      Conference presentation from the aBCI workshop at CHI 2008.

