During Dinacon I will create environment-specific sonifications using the Datapods (developed for Unnatural Language with Scott Kildall). Datapods are electronic devices that translate the unseen activity of plants and the environment into sounds for human appreciation. The troupe of Datapods will be spread into the jungle to converse with the environment, us, and each other. The Datapods are a modular system based on Arduino; I'm curious to see whether we can create new sensors and new ways for them to interact with nature!
Michael Ang (https://michaelang.com) is a Berlin-based artist and engineer who creates light objects, interactive installations, and technological tools that expand the possibilities of human expression and connection. Applying a hacker’s aesthetic, he often repurposes existing technology to create human-centered experiences in public space and the open field. Countering the trend for technology to dissociate us from ourselves and our surroundings, Michael’s works connect us to each other and to the experience of the present moment.
Hey everyone, I’m honored to be a part of Dinacon and will be joining you the first week of August. I’m hard to categorize; I’m an engineer and artist at heart and am happiest when I get to wear a lot of hats. I’ve built spacecraft and fighting robots, made costumes and movies, chased the shadows of asteroids and rare birds, and designed electronics both for the space station and just to make people laugh. I was most recently employed by SparkFun Electronics, where I was an engineer with strong proclivities towards education and citizen science. I’m currently doing freelance design work for the Boulder, CO aerospace community (will work for launch). At Dinacon I plan to serve as roving tech support; if you need electronics, coding, or media help, just let me know!
August 12-19th. Kris Casey is a visual artist and creative researcher from Chicago, IL. Her work draws heavily from various fields of philosophical and scientific inquiry, including evolutionary developmental biology, bio-aesthetics, evolutionary aesthetics, and genetics. Her research and practice examine relationships between biology and technology, natural and artificial, material and immaterial, subject and object. Her paintings can be seen as assemblages or accumulations of natural and technological elements whereby the biological concepts of mutation, contamination, decay, generation, emergence, and metamorphosis become modes of inquiry into the production of novel forms.
Hello! I am Stephanie, an artist and prof from Buffalo, NY. I will be experimenting with sensors and creatures for a larger project that looks at biology as technology and living machines in extreme landscapes. I am going to try to harness the aggressive energy of crocodiles to power the blockchain.
I am a behavioral ecologist currently based in Foz do Iguaçu, Brazil. I have a passion for artistically expressing research, and will be developing a piece that conveys how soil nutrients affect Cecropia trees and their symbiotic Azteca ants.
Project: Connected experience with feelings & wires
Bio: I’m a lowkey software engineer, part-time goofball, and full-time bricoleur with an interest in building a physical representation of the connection between two humans with wires, colorful headbands, and gauges.
Generative Dance in the Wild and EEG Sonification.
How do movements couple to sounds in the natural environment, and can pair dance communication be improvised in both the movement and musical composition realms? I use SonicPi to generatively sample sound recorded from nature to make musical beats and rhythms. These beats will be coupled to pair dance paradigms in salsa and zouk, which are popular dances in Panama. Specifically, the project consists of the following phases.
Record sounds in the natural environment of Panama and use them to construct simple phrases in SonicPi, choosing the right envelopes to synthesize beat sounds which, when live-looped together, produce Latin-like rhythms.
Begin recruiting conference attendees for a performance that involves dancing in sync to the collected beats. I will train those who are not familiar with the simple steps of salsa and bachata Latin dancing so that everyone can practice together even without formal training.
We will construct a wearable interface for switching between different SonicPi sketches that generate different sounds. We will prototype a Teensy-based device that uses accelerometer data to switch between beats. The choice will depend on the leader in the dance pair.
We will user-test with a pair of dancers, one of whom (the leader) can switch between rhythms and music that inspire different dance forms and speeds. The leader can choose both her steps and the musical rhythms being generated. For example, she can choose to dance bachata rather than salsa, or to add a dip in the salsa, and can choose the musical motifs appropriate to these specific actions.
If time permits, we will organize a Casino Rueda performance using pairs of dancers who can all control the music in different ways. If the technology does not permit it, we can prototype the process using calls much like in Casino Rueda, giving our DJ a cue to change the music.
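The first phase above — turning a field recording into a live-looped Latin rhythm — could be sketched in Sonic Pi roughly like this (a hypothetical sketch, not code from the project: the sample file name, the clave pattern, and the envelope values are all assumptions):

```ruby
# Sonic Pi sketch (runs inside Sonic Pi, not as plain Ruby).
# "jungle_hit.wav" stands in for a hypothetical field recording from Gamboa.
live_loop :son_clave do
  # 3-2 son clave over sixteen sixteenth-notes
  pattern = [1,0,0,1, 0,0,1,0, 0,0,1,0, 1,0,0,0]
  pattern.each do |hit|
    # a short attack/release envelope turns the raw recording into a beat sound
    sample "~/dinacon/jungle_hit.wav", attack: 0.01, release: 0.1 if hit == 1
    sleep 0.25
  end
end
```

Additional `live_loop`s with different samples and patterns run in sync automatically, which is what makes switching between rhythms during a dance practical.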
The project investigates whether improvisation in dance can also be coupled to improvisation in music. Can we create a system for changing both the musicality and the movements in dance? We aim to investigate this in a natural context where Latin rhythms and natural sounds can be used as samples to create a performance of higher-order improvisation.
Can EEG be used as a source of sound, and can this sound be used to harmonize with the environment? This project generates a work of symphonic sound using human EEG attention data and EEG data in the wild. I use a MindWave Mobile headset to get attention data from humans and translate that scale to pitch for the melody. I use plant electrical data recorded with plant electrodes (thanks to Seamus) to generate the tonic portion of the work. Combining the phasic EEG music with the tonic plant environmental music gives a voice to the way we operate in the universe. We humans make a lot of phasic noise, but the plants and environment of the world embody the tone and mood that form the substance of a work. We co-create with electrical recordings from the brain and the plant to make a symphony of Gamboa.
MindWave Mobile data is piped to the BrainWaveOSC app, which sends it to Unity. Unity uses an AudioSource to generate the pitch mapped from attention data. On the plant side, an Arduino is used to record and log plant electrical values. These two sources of EEG are part of the environment we exist in. Human EEG, as you can see in the video demo, is used to generate pitch, making directed musical phrases out of attention, so humans can exert some control (but not all). Plant EEG will be used to generate the subtext of the symphony, forming the chords that the human EEG plays on top of. Each has a life of its own, so that the final form of the work owes as much to the environment of Gamboa as to any conscious control by any party.
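The attention-to-pitch mapping could look something like this (a plain-Ruby sketch of the idea only; in the actual piece the mapping happens inside Unity, and the pentatonic scale and the 0-100 attention range used here are assumptions):

```ruby
# Map a MindWave attention value (0-100) onto a C-major pentatonic scale,
# so that higher attention produces higher pitches.
PENTATONIC = [60, 62, 64, 67, 69, 72, 74, 76, 79, 81] # MIDI note numbers

def attention_to_midi(attention)
  a = attention.clamp(0, 100)                         # MindWave reports 0-100
  index = (a * (PENTATONIC.length - 1) / 100.0).round # spread across the scale
  PENTATONIC[index]
end

# Convert a MIDI note number to a frequency for a synth or Unity AudioSource.
def midi_to_freq(note)
  440.0 * 2.0 ** ((note - 69) / 12.0)
end
```

Full attention lands at the top of the scale and zero attention at the bottom, so directed musical phrases track the performer's focus while the plant data supplies the underlying chords.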
Ray LC’s artistic practice incorporates cutting-edge neuroscience research for building bonds between humans and between humans and machines. He studied AI (Cal) and neuroscience (UCLA), building interactive art in Tokyo while publishing papers on PTSD. He’s Visiting Professor, Northeastern University College of Art, Media, Design. He was artist-in-residence at BankArt, 1_Wall_Tokyo, Brooklyn Fashion BFDA, Process Space LMCC, NYSCI, Saari Residence. He exhibited at Kiyoshi Saito Museum, Tokyo GoldenEgg, Columbia University Macy Gallery, Java Studios, CUHK, Elektra, NYSCI, Happieee Place ArtLab. He was awarded by Japan JSPS, National Science Foundation, National Institute of Health, Microsoft Imagine Cup, Adobe Design Achievement Award. http://www.raylc.org/
Saad is a professional geek with a passion for coffee, technology, and the OpenSource way of doing things. For a living he conceptualizes tech solutions for Tusitala, the digital publishing arm of Potato Productions. Tusitala means “story-teller”. Tusitala is on the lookout for Asian stories that adopt the interactivity of the digital medium to go beyond the page, without trying to replace it. “Trans-media storytelling,” as the marketese would have it. Saad also volunteers with several non-profits and strongly believes that social enterprises should be the key users of and contributors to OpenSource tech. Enough of that boring day-job stuff.
Saad is a self-confessed maker of sometimes brilliant but mostly useless things and coffee geek. There, that’s better.