LISA SCHONBERG (pattern ecology)

I am a percussionist and composer with a background in entomology and ecology. Through sound work and music composition I aim to bring attention to inconspicuous elements of ecosystems, focusing on insects and their habitats. I attended Dinacon with my collaborator in Pattern Ecology, musician/composer Kristina Dutton.

My focus at Dinacon 2019 was recording hidden sounds – those sounds that humans cannot hear without the aid of technology. How can human opinions on invertebrates be shifted through listening? Can listening encourage us to challenge our assumptions, and change our behaviour and decision-making processes concerning our relations to non-human species? Can it move us towards a biocentric viewpoint? In this time when talk about the ‘environment’ is all over the popular media, I wonder if people are becoming more open to paradigm shifts of this nature.

1. BUILT HIDDEN SOUNDSCAPE: Pipeline Road, Gamboa
https://vimeo.com/364574378

At Dinacon I worked on developing a process for constructing synthesized “built” soundscapes of hidden sounds. Built Hidden Soundscape: Pipeline Road, Gamboa is a video of a spectrogram of sounds that cannot be heard by humans without the use of technology. I built a sound work using field recordings I made on Pipeline Road, and synthesized an imagined soundscape represented by the spectrogram. Sounds that are easily heard by human ears are excluded from this soundscape. The Y axis represents frequency and the X axis represents time. This built soundscape includes ultrasonic sounds (above the range of human hearing, played back at lower frequency), substrate-borne vibrations, and otherwise very quiet sounds. I recorded all of the sounds on and around Pipeline Road, with the exception of one recording of a wasp nest, recorded in a field in downtown Gamboa.
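One common way to make such ultrasonic material audible is time expansion: writing the same samples out at a fraction of their recorded rate, so everything plays back slower and lower. A minimal sketch in Python, assuming the soundfile library and a hypothetical 192 kHz field recording (the actual rates and tools used for this piece aren’t specified here):

import soundfile as sf

# Rewrite a 192 kHz ultrasonic recording at 1/4 its nominal rate, so it
# plays back 4x slower and two octaves lower. File names are hypothetical.
data, rate = sf.read("ultrasonic_field_recording.wav")  # e.g. rate == 192000
sf.write("audible_version.wav", data, rate // 4)        # plays at 48 kHz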

Sounds featured, in rough order of appearance:
1. Ultrasonic component of dawn soundscape on Pipeline Road
2. Paper wasp nest on cecropia branch, through substrate
3. Atta (leaf-cutter ant) foraging trail, locomotion sounds
4. Azteca ants on Cecropia tree, locomotion sounds
5. Cicada, ultrasonic component
6. Odontomachus (trap-jaw) ant, stridulation (partially ultrasonic)
7. Labidus (army ant) trail, sounds of locomotion and aggressive behavior
8. Ultrasonic component of dusk soundscape, from canopy, Pipeline Road

2. PATTERN ECOLOGY – with Kristina Dutton

Much of my work at Dinacon was in collaboration with my Pattern Ecology partner Kristina Dutton. We produced two videos and conducted interviews with biologists and artists about the intersection of their practices. In 2018, Pattern Ecology composed a musical score to Rearing Anartia, a 1976 8mm documentary film produced by entomologists Robert Silberglied and Annette Aiello at the Smithsonian Tropical Research Institute’s Barro Colorado Island (BCI) research station in Panama. We had the opportunity to visit BCI and to meet with Annette while in Panama, and our work came full circle. At Dinacon, we worked on material for an eventual video-album in the spirit of Aiello and Silberglied’s efforts to share intimate details of field work and scientific processes with the public. We are embarking on a revision of the Rearing Anartia score with Annette this coming year, to carry out the vision they originally intended for the film.

We completed two short films while in Gamboa. These films feature Kristina’s immersive binaural audio and video recording and my field recordings using ultrasonic and substrate-borne techniques. The first video shows workers of an Atta (leaf-cutter ant) colony moving across Pipeline Road in Gamboa, and includes the ants’ locomotion and stridulation sounds along with binaural ambient sound from the trail environment. The second video was filmed while kayaking on the Rio Chagres one afternoon, and features ambient sound, underwater soundscapes, and video. These videos will be combined with ecological insights from the field and components of our interviews in future works.

a. ATTA ON PIPELINE ROAD: https://vimeo.com/359170608

b. RIO CHAGRES: https://vimeo.com/359166693

Myrmecologist Hannah Marti observing stridulation movements of workers of Atta columbica colony while these sounds are being recorded, at her lab at STRI in Gamboa.
Recording flora + fauna + machines in the Rio Chagres

My time at DINACON was truly transformative. I learned so much from the other dinasaurs, whether over meals or on trails. Dedicating ourselves to our practices in tight living quarters created this sort of synergistic creative energy, and it affected my process and my scope of possibilities. Everything started to seem possible, and I am excited.

web: lisaschonberg.com
instagram: @lisaannschonberg | @patternecology | @atta_________
twitter: @schonbergpdx

Inaccurate Nomenclature

Originally, the idea was to create a radiophonic journey through Gamboa—exploring the variety of birdsongs on Pipeline Road, lingering in the marshes of the Chagres and simply strolling around the neighborhood, capturing sound bites of both human and nonhuman residents.

The first time I ventured out alone to Pipeline Road, I brought along my bamboo flute. Back in the city, when I would play inside my room behind paper shoji screens, occasionally a brazen bird would perch outside on the balcony and vividly respond to my shrill notes with rhapsodic chirps.

In the jungle, however, it was a different story. The sheer immensity of the rainforest was humbling enough, but it was the symphonic richness of its soundscape that stopped me in my tracks: the competitive chatter of mealy parrots, the percussive taps of a woodpecker on a hollow tree trunk, rhythmically improvised clicks and chuckles counterpointed by cicada crescendi and glissandi, the four piercingly pure notes of an ant-thrush, clearly heard but never seen, always on cue with metronomic precision. Out there in the wild, the human arrogance of “music” produced by blowing through a lacquered reed of polished bamboo seemed superfluous. So I just listened.

Out on the river, I silently witnessed many creatures both up close and through binoculars: a caiman lurking just under the water’s surface, a cream-colored caracara tearing at its prey, red-headed turkey vultures preening on high branches, white egrets, blue herons, striped jacobins and yellow-winged jacanas. Yet it was the acousmatic motif of a hidden howler monkey that set the tempo adagio from deep within the forest.

The title of the sound piece was inspired by the insistently repetitive cry of what I have since identified as a red-lored parrot on the Río Chagres. “Accurate! Accurate!” it seemed to squawk, as if challenging our inevitably flawed human assessment of its species and its surroundings. I’m projecting, of course, but it’s hard not to associate sounds with signals, phonemes with meaning, utterances with intention.

So I continued to move along the river in a kayak, paddling through the dense marshwater with a splashproof smartphone on my lap recording in low-tech mono, clumsily picking up the rumbles of wind and bumps on the microphone, as well as the buzz of a persistently pesky fly.

“¿Cómo te llamas?” (“What’s your name?”) asks Jorge rhetorically, a Panamanian avifauna expert who already knows the appellations of every local bird he sees or hears. Returning from my excursions, I search through a handful of field guides, my superficial gateway to the vast database of human scientific knowledge about the resident species of central Panama.

And so I moved on to the naming of birds and other creatures—in learned English, in local Spanish, in scientific nomenclature:

variable seedeater / espiguero variable / Sporophila corvina
wattled jacana / jacana carunculada / Jacana jacana
white-necked jacobin / jacobino nuquiblanco / Florisuga mellivora
striated heron / garza listada / Butorides striata
mantled howler monkey / mono aullador / Alouatta palliata
yellow-headed caracara / caracara cabeciamarilla / Milvago chimachima
turkey vulture / gallinazo cabecirrojo / Cathartes aura
lineated woodpecker / carpintero lineado / Dryocopus lineatus
mealy parrot / loro harinoso / Amazona farinosa
northern tamandua / hormiguero norteño / Tamandua mexicana
coati / gato solo / Nasua narica
crowned tree frog / rana arbórea coronada / Anotheca spinosa
fer-de-lance / equis / Bothrops asper
black-faced ant-thrush / formicario carinegro / Formicarius analis

This surtitled multilingual nomenclature, with male and female voices uttering very different words to describe essentially the same species, is more a reflection of human cultural perceptions than of the individual encountered in the wild. Juxtaposed with the natural soundscape of the creatures’ respective habitats, are these words disruptive, intrusive, invasive? Or merely indicative of our endless efforts to identify, capture, classify and label through relentless accumulation of data?

Inside a house in Gamboa, the melodic strings of a cello mingle freely with a giggling chorus of parrots in the tree outside. Agoutis roam neighborhood backyards, sloths and owls hang out in the branches above the sidewalk, puddles of túngara frogs turn up the volume after dusk… Humans seem to cohabit seamlessly with our nonhuman neighbors.

“Permanece escuchando” repeats Jorge, reminding us that there is always more to hear, signal after silence: Keep listening.

Cherise Fong

Binaural Audio/Video Recordings – Kristina Dutton + collab with Lisa Schonberg

I wanted to capture what it feels like to wander in the forests of Gamboa during both the sunset and evening choruses. Once I spent a little time on the Rio Chagres, I decided to weave that into the mix as well.

Binaural recordings imitate the spatial dimensions of human hearing. In other words, they reproduce sound the way we actually hear it. Because of this, listening to binaural recordings works best with headphones.

The microphones I used are designed specifically for quiet environments and I found the noise level of the evening chorus on Laguna Trail was enough to occasionally blow out the mics.

The audio was recorded in tandem with the video, so I moved both camera and mics (since they were attached to my ears) in whatever direction I was looking. This way, when I turn, the viewer hears the sound of the howlers from behind just as I did, whereas a moment before they were to the left, etc.

I wanted to convey the experience of sonic density in contrast to how little we actually see with our eyes in these environments, and to explore the idea that listening would have been important for our ancestors in wildly different ways than it is for us in most situations in modern cities or suburbs. Our relationship to sound has lost much of the meaning it once carried and the understanding it once required. In cities we primarily filter out “noise,” whereas in the forest we lean in and listen to understand what is around us.

For example, acoustic ecologist Gordon Hempton has argued that the human hearing range is a perfect match for birdsong – that birds are indicators of habitat favorable to human survival. Hearing is vital to all animals’ survival, he notes, and our ears are most sensitive in the resonant band from 2.5 to 5 kHz, exactly where much birdsong sits.

I’ve only made it through about 25% of what I recorded, so I’ll continue to post more to my Vimeo page, including other parts of Gamboa and the surrounding areas.

https://vimeo.com/363182488
This video was taken between 6:30 and 8:00 am and includes sounds of the red-lored parrot, howler monkey, black-necked stilt, collared plover, ringed kingfisher, green heron, wattled jacana, northern waterthrush, smooth-billed ani, anhinga, and southern lapwing. And the occasional fish splashing around.
https://vimeo.com/363481273
https://vimeo.com/363706636
This 3rd vid is mostly made up of binaural recordings, but I threw in a moment recorded with a shotgun mic just to demonstrate the difference. Also this vid has a lot of great frog sounds!

The collaborative project I did with Lisa Schonberg involved combining technologies to let others experience the ecosystems in Gamboa from new perspectives. We made two videos. The first, of leaf-cutter ants, combined substrate-borne stridulation and locomotion sounds on Pipeline Road with binaural ambient sound. The second was filmed underwater in the Rio Chagres and uses a combination of hydrophone and iPhone recordings above and under water.

In the Rio Chagres, while putting my iPhone in the water to film, I discovered that many small fish were interested in sucking at my skin. I ended up playing with them for quite a long time, as the fish didn’t react much to me scooping them into my hand. Playing in the water reminded me of how I felt as a kid when I’d hang out at the edge of the pond near my house. I decided to make the video from that perspective – an intimate, playful view of the world just below the surface of the water – wonder and curiosity being two of the greatest assets of art/sci/tech folks.

https://vimeo.com/363220778
Recording Atta species on Pipeline Road
Wearing binaural headphone mics on Laguna Trail
Lisa Schonberg recording with ultrasonic mic on Laguna Trail
6:30 am on the Rio Chagres with fellow Dinasaur Cherise

Bamboo ROV – Jonas Kramer-Dickie

This project was an attempt to make a cheap DIY submarine: there are very few inexpensive kits that give teachers and researchers access to the underwater worlds around us. It was designed as a simple wired ROV with a camera, lights, and a few small DC motors, able to dive and maneuver while delivering a live camera feed to the surface.
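The write-up doesn’t specify the control scheme, but the dive-and-maneuver idea can be sketched as pure mixing logic – two horizontal thrusters steered by differential power plus one vertical motor – independent of any particular hardware (illustrative Python, not the project’s actual code):

# Illustrative thrust mixing for a three-motor wired ROV: two horizontal
# thrusters steer by differential power, one vertical motor dives.
def mix(throttle, steer, dive):
    # All inputs range -1..1; returns (left, right, vertical) power in -1..1.
    clamp = lambda v: max(-1.0, min(1.0, v))
    return clamp(throttle + steer), clamp(throttle - steer), clamp(dive)

print(mix(0.8, 0.3, -0.5))  # forward and to the right while diving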

The body is made out of bamboo because it was abundant and invasive in Gamboa. If I build this again I will use a water bottle or another sealable cylindrical object: the internals seemed to function well, but problems arose with the improvised body. It was still fun to play around with all of the electronic bits and learn about circuits, current, and motors. The whole project cost only about 70 dollars, and as a kit it could teach basic electronics and problem-solving. It also lets people see the water from different perspectives.

Overall this was a very good first test and prototype; I think with a small amount of tweaking I could have a functional, cheap ROV!

Completing this project in a beautiful and different location surrounded by beautiful and different people helped. What an amazing ‘Conference’!

Original plans made to use fire extinguisher as body.

Control Panel

control panel rear with wiring diagram

ROV body next to internal camera

ROV body

Sculpting Shadows

By Albert Thrower – albertthrower@gmail.com

OVERVIEW

In this project, I created three-dimensional sculptural artworks derived from the shadows cast by found objects.

BACKGROUND

I began creating 3D prints through unusual processes in 2018, when I used oils to essentially paint a 3D shape. For me, this was a fun way to dip my toes into 3D modeling and printing using the skills I already had (painting) rather than those I didn’t (3D modeling). I was very happy with the output of this process, which I think lent the 3D model a unique texture–it wore its paint-ishness proudly, with bumpy ridges and ravines born from brushstrokes. There was an organic quality that I didn’t often see in 3D models fabricated digitally. I immediately began thinking of other unconventional ways to arrive at 3D shapes, and cyanotype solar prints quickly rose to the top of the list of processes I was excited to try.

SHADOWS AND DIMENSIONS

My initial goal with this project was simply to test my theory that I could create interesting sculpture through the manipulation of shadow. However, a presentation by Josh Michaels on my first night at Dinacon got me thinking more about shadows and what they represent in the relationships between dimensions. Josh showed Carl Sagan’s famous explanation of the 4th dimension from Cosmos.

Sagan illustrates how a shadow is an imperfect two-dimensional projection of a three-dimensional object. I wondered–if all we had was a two-dimensional shadow, what could we theorize about the three-dimensional object? If we were the inhabitants of Plato’s cave, watching the shadows of the world play on the wall, what objects could we fashion from the clay at our feet to reflect what we imagined was out there? What stories could we ascribe to these imperfectly theorized forms? When early humans saw the night sky, we couldn’t see the three-dimensional reality of space and stars–we saw a two-dimensional tapestry from which we theorized three-dimensional creatures and heroes and villains and conflicts and passions. We looked up and saw our reflection. What does a rambutan shadow become without the knowledge of a rambutan, with instead the innate human impulse to project meaning and personality and story upon that which we cannot fully comprehend? That’s what I became excited to explore with this project. But first, how to make the darn things?

THE PROCESS

For those who want to try this at home, I have written a detailed How To about the process on my website. But the basic workflow I followed was this:

STEP 1: MAKE A SOLAR PRINT OF SOME INTERESTING OBJECTS

The areas that are more shaded by our objects stay white, and the areas that the sun hits become a darker blue. Note that the solar print that results from three-dimensional objects like these rambutans has some midtones that follow their curves: though the fruits cast hard shadows, some light leaks in from the sides. The closer an object gets to the solar paper, the more light it blocks. This effect will make a big difference in how these prints translate to 3D models.

A rambutan print soon after exposure and washing.

STEP 2: USE THE SOLAR PRINT AS A DEPTH MAP TO CREATE A 3D MODEL

For those unfamiliar with depth maps, essentially the software* interprets the luminance data of a pixel (how bright it is) as depth information. Depth maps can be used for a variety of applications, but in this case the lightest parts of the image become the more raised parts of the 3D model, and the darker parts become the more recessed parts. For our solar prints, this means that the areas where our objects touched the paper (or at least came very close to it) will be white and therefore raised, the areas that weren’t shaded at all by our objects will become dark and therefore recessed, and the areas that are shaded but into which some light leaks around the objects will be our midtones, leading to smooth graded surfaces in the 3D model.

 *I used Photoshop for this process, but if you have a suggestion for a free program that can do the same, please contact me. I’d like for this process to be accessible to as many people as possible.
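As one possible scriptable route for this step, here is a minimal sketch in Python using Pillow and numpy – file names, sizes and heights are invented, and it covers only the luminance-to-height conversion:

from PIL import Image
import numpy as np

def image_to_stl(image_path, stl_path, max_height_mm=5.0, base_mm=1.0, px_mm=0.5):
    # Brighter (more shaded) pixels become taller regions, as described above.
    img = Image.open(image_path).convert("L")
    img.thumbnail((200, 200))   # downsample; full-resolution scans make huge meshes
    z = np.asarray(img, dtype=float) / 255.0 * max_height_mm + base_mm
    rows, cols = z.shape
    with open(stl_path, "w") as f:
        f.write("solid heightfield\n")
        for r in range(rows - 1):
            for c in range(cols - 1):
                # Two triangles per grid cell; dummy normals (slicers recompute them).
                v = [(c * px_mm, r * px_mm, z[r, c]),
                     ((c + 1) * px_mm, r * px_mm, z[r, c + 1]),
                     ((c + 1) * px_mm, (r + 1) * px_mm, z[r + 1, c + 1]),
                     (c * px_mm, (r + 1) * px_mm, z[r + 1, c])]
                for a, b, d in ((v[0], v[1], v[2]), (v[0], v[2], v[3])):
                    f.write("facet normal 0 0 1\nouter loop\n")
                    for x, y, zz in (a, b, d):
                        f.write(f"vertex {x:.3f} {y:.3f} {zz:.3f}\n")
                    f.write("endloop\nendfacet\n")
        f.write("endsolid heightfield\n")

image_to_stl("rambutan_print.jpg", "rambutan.stl")

Note that this writes only the top surface, with no side walls or base, so like the Photoshop output it will likely need repair before printing (see Step 3).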

Below, you can play around with some 3D models alongside the solar prints from which they were derived. Compare them to see how subtle variations in the luminance information from the 2D image have been translated into depth information to create a 3D model.

In the below solar print, I laid a spiralled vine over the top of the other objects being printed. Because it was raised off the paper by the other objects, light leaked in and created a fainter shadow, resulting in a cool background swirl in the 3D model. Manipulating objects’ distance from the paper proved to be an effective method to create foreground/background separation in the final 3D model.

The objects to be solar printed, before I laid the spiralled vine on the other objects and exposed the paper.

Another variable that I manipulated to create different levels in the 3D model was exposure time. The fainter leaves coming into the below solar print weren’t any farther from the solar paper than the other leaves, but I placed them after the solar print had been exposed for a couple of minutes. This made their resulting imprint fainter/darker, and therefore more backgrounded than the leaves that had been there for the duration of the exposure. You can also see where some of the leaves moved during the exposure, as they have a faint double image that creates a cool “step” effect in the 3D model. You might also notice that the 3D model has more of a texture than the others on this page. That comes from the paper itself, which is a different brand than I used for the others. The paper texture creates slight variations in luminance which translate as bump patterns in the model. You run into a similar effect with camera grain–even at high ISOs, the slight variation in luminance from pixel to pixel can look very pronounced when translated to 3D. I discuss how to manage this in the How To page for this process.

One more neat thing about this one is that I made the print on top of a folder that had a barcode on it, and that reflected back enough light through the paper that it came out in the solar print and the 3D model (in the bottom right). After I noticed this I started exposing my prints on a solid black surface.

The below solar print was made later in the day–notice the long shadows. It was also in the partial shade of a tree, so the bottom left corner of the print darkens. If you turn the 3D model to its side you’ll see how that light falloff results in a thinning of the model. I also took this photo before the print had fully developed the deep blue it would eventually reach, and that lack of contrast results in the faint seedpod in the bottom left not differentiating itself much from the background in the 3D model. I found that these prints could take a couple days to fully “develop.”

STEP 3: 3D PRINT THE MODEL

The 3D models that Photoshop spits out through this process can sometimes have structural problems that a 3D printer doesn’t quite know how to deal with. I explain these problems and how to fix them in greater detail in the How To page for this process.

STEP 4: PAINT THE 3D PRINT

Now we get back to my musings about Plato’s cave. My goal in the painting stage was to find meaning and story in this extrapolation of 3D forms from a 2D projection. As of this writing I have only finished one of these paintings, pictured below.

FUTURE DIRECTIONS

– Carve the models out of wood with a CNC milling machine to reduce plastic use. I actually used PLA, which is derived from corn starch and is biodegradable under industrial conditions, but is still not ideal. This will also allow me to go BIGGER with the sculptural pieces, which wouldn’t be impossible with 3D printing but would require some tedious labor to bond together multiple prints. 

– Move away from right angles! Though I was attempting to make some unusual “canvasses” for painting, I ended up replicating the rectangular characteristics of traditional painting surfaces, which seems particularly egregious when modeling irregular organic shapes. Creating non-rectangular pieces will require making prints that capture the entire perimeter of the objects’ shadows without cutting them off. I can then tell the software to “drop out” the negative space. I have already made some prints that I think will work well for this; I’ll update this page once I 3D model them.

– Build a custom solar printing rig to allow for more flexibility in constructing interesting prints. A limitation of this process was that I wanted to create complex and delicate compositions of shadows but it was hard to not disturb the three-dimensional objects when moving between the composition and exposure phases. My general process in this iteration of the project was to arrange the objects on a piece of plexiglass on top of an opaque card on top of the solar print. This allowed me time to experiment with arrangements of the objects, but the process of pulling the opaque card out to reveal the print inevitably disrupted the objects and then I would have to scramble to reset them as best I could. Arranging the objects inside wasn’t a good option because I couldn’t see the shadows the sun would cast, which were essentially the medium I was working with. The rig I imagine to solve this would be a frame with a transparent top and a sliding opaque board which could be pulled out to reveal the solar paper below without disrupting the arrangement of objects on top. 

– Solar print living creatures! I attempted this at Dinacon with a centipede, as did Andy Quitmeyer with some leafcutter ants. It’s difficult to do! One reason is that living creatures tend to move around, and solar prints require a few minutes of exposure time. I was thinking that something like a frog – which might hop around a bit, stay still, then hop around some more – could work, but you would still need some kind of clear container to hold the animal without casting its own shadow. I also thought maybe a busy leafcutter ant “highway” would have dense enough traffic to leave behind ghostly ant trails, but Andy discovered that the ants are not keen to walk over solar paper laid in their path. A custom rig like the one discussed above could maybe be used–place the rig in their path, allow them time to acclimate to its presence and walk over it, then expose the paper underneath them without disturbing their work.

– Projection map visuals onto the 3D prints! These pieces were created to be static paintings, but they could also make for cool three-dimensional animated pieces. Bigger would be better for this purpose.

My project table at the end-of-Dinacon showcase.
This kiddo immediately began matching the objects I had on display to their respective solar prints!

Agouti, Agouti!

By Jason Bond, Blunderboffins

Agouti, Agouti! is a work of interactive digital art (i.e. a videogame) which aims to capture the spirit of the loveable agouti, a rodent commonly seen eating scraps and frolicking about in the backyards of Gamboa, Panama. They play an important role in the spread of seeds in the forest and are adorable to boot.

This prototype work can be played on a modern Mac or Windows computer with a two-stick game controller. The player is invited to explore a jungle, eat some fruit, and — as the agouti does when frightened — puff up the hair on their butt.

The humble Central American agouti.

The Virtual Agouti

The agouti featured in the game is an original model created in the modelling and animation software Blender. It has a small number of animations — enough to simulate some basic activities. In an effort to capture the agouti’s way of moving about, slow-motion video was taken of agoutis around Gamboa and a series of images were extracted as reference for the walking and running gaits.

Although the artist on this project has been working in videogames for many years, he is new to modelling and animating, making this work a significant learning exercise.

A low-poly agouti model created in Blender.
Frames of an agouti walking, extracted from slow-motion video.

The Forest

The environment of Agouti, Agouti! is filled with virtual “plants”. These forms are more impressionistic than replicative, bearing little resemblance to the actual plants of Panama, but they are meant to reflect the variety in Gamboa’s forest and to provide a suitable jungle world for the agouti to play in.

Each type of virtual plant is generated by algorithm using custom software designed for this project. In fact, this generator was intended to be the centrepiece of this project until the agouti charmed its way into the starring role. 

The generator began as a simple branching algorithm not dissimilar from L-Systems — a common procedural generation technique — beginning with a trunk and randomly splitting off branches to create a tree-like structure. Inspired by the epiphytes of Panama, this algorithm was modified to take a more additive approach: any number of different forms can be attached to any part of the structure.

Because the results of this generator can be quite chaotic, some crude tools were developed to rapidly filter through them for the best stuff. This includes a mutation tool which can take a plant with some potential and produce interesting variations on it until the user is happy with the results.
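As a rough sketch of that additive, mutation-driven idea (illustrative Python; the actual generator is custom software with its own forms and parameters):

import random

class Part:
    def __init__(self, kind, depth=0):
        self.kind = kind          # e.g. "trunk", "branch", "leaf", "epiphyte"
        self.depth = depth
        self.children = []

def grow(node, max_depth=4, chance=0.6):
    # Branching core: randomly split child forms off any part, then recurse.
    if node.depth < max_depth:
        for kind in ("branch", "leaf", "epiphyte"):
            if random.random() < chance / (node.depth + 1):
                node.children.append(grow(Part(kind, node.depth + 1), max_depth, chance))
    return node

def mutate(node, jitter=0.2):
    # Crude "mutation tool": copy the plant, randomly pruning and adding forms.
    clone = Part(node.kind, node.depth)
    for child in node.children:
        if random.random() > jitter:               # occasionally prune a part
            clone.children.append(mutate(child, jitter))
    if random.random() < jitter:                   # occasionally add a new form
        clone.children.append(Part("epiphyte", node.depth + 1))
    return clone

def size(node):
    return 1 + sum(size(c) for c in node.children)

plant = grow(Part("trunk"))
print(size(plant), "parts;", size(mutate(plant)), "after one mutation")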

A screenshot of the plant generator, showing three mutations of what was once the same plant.

Each plant is encoded with a growth animation so that it can begin as a simple seedling and gain branches and leaves over time. The agouti’s world can start out bare and grow a massive, abstract canopy.

The agouti’s planet with hundreds of small seedlings.

The planet after all plants have grown to full size.

Available Materials

The game and agouti model are freely available for download at:

https://blunderboffins.itch.io/agouti-agouti

Nom nom nom.

complexity + leafcutters: code/improvisation

The shimmering, industrious leafcutter ants that build highways on the forest floor make up a complex adaptive system – the sophisticated structures and patterns that they build are well beyond the sum of their individual parts. The ants’ collective intelligence emerges through the repetition of simple tasks, and somehow through self-organization they build cities without architects, roads without engineers. There’s something magnetic about their energetic movement as they carve through the jungle – wherever I found them at Gamboa, I found that I could not look away.

from pipeline trail and laguna trail, Gamboa
ant, Atlas
going around the stick barrier

I altered the code from a classic NetLogo simulation to model the behavior of the leafcutters. NetLogo allows you to code agent-based models and watch them play out over time – each of the ants acts as an autonomous “agent” with a simple task to perform, and the iteration of multiple ants performing these tasks begins to simulate how the ants behave in the jungle. What starts out as random walking drifts into road-like patterns as the ants pick up pixel leaves and deliver them to their digital fungus…

Ant Tasks (sketched in code after this list):
1. choose a random angle between -45 and 45 degrees
2. walk 1 unit in that direction
3. repeat.
4. IF there’s food (green leaves or pink flowers), pick it up by turning green, and deliver it back to the fungus at the center.
5. IF you sense digital pheromone (ants carrying food tag the pixels they walk over with digital “scent” as they head to the center), follow that pheromone.
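Here is a compact, purely illustrative translation of those tasks into Python – the project itself runs in NetLogo, and every number below is invented:

import math, random

SIZE = 60
CENTER = (SIZE // 2, SIZE // 2)
food = {(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(80)}
pheromone = {}  # (x, y) -> digital scent strength

class Ant:
    def __init__(self, ident):
        self.ident = ident
        self.x, self.y = CENTER
        self.heading = random.uniform(0, 360)
        self.carrying = False

    def forward(self):
        self.x = min(max(self.x + math.cos(math.radians(self.heading)), 0), SIZE - 1)
        self.y = min(max(self.y + math.sin(math.radians(self.heading)), 0), SIZE - 1)

    def cell(self):
        return (round(self.x), round(self.y))

    def step(self):
        if self.carrying:
            # Task 4, return leg: tag the ground with scent and head home.
            pheromone[self.cell()] = pheromone.get(self.cell(), 0) + 1.0
            self.heading = math.degrees(math.atan2(CENTER[1] - self.y, CENTER[0] - self.x))
            self.forward()
            if self.cell() == CENTER:
                self.carrying = False   # delivered: this ant sounds its note
        elif self.cell() in food:
            food.discard(self.cell())   # task 4, pickup
            self.carrying = True
        elif pheromone.get(self.cell(), 0) > 0.05:
            # Task 5, simplified: follow scent outward, away from the nest.
            self.heading = math.degrees(math.atan2(self.y - CENTER[1], self.x - CENTER[0]))
            self.forward()
        else:
            # Tasks 1-3: random wander.
            self.heading += random.uniform(-45, 45)
            self.forward()

ants = [Ant(i) for i in range(100)]
for tick in range(2000):
    for ant in ants:
        ant.step()
    for cell in list(pheromone):        # scent evaporates each tick
        pheromone[cell] *= 0.9

print(len(food), "food cells left")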

The Twist: music
A symphony of digital fungus stockpiling
An audio representation of the complex patterns and surprising order that arises from randomness…

Each ant in the simulation has an ID number, and that ID number corresponds to a note on the piano. When an ant picks up a leaf and successfully brings it back to the fungus in the middle, that ant sounds its unique note. I calibrated this so that extremely low and extremely high notes on the scale won’t play; instead, ants at those extremes are assigned the same middle C, which you can hear throughout the simulation over and over like a drum beat…
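That calibration might look something like this (an illustrative sketch with invented ranges):

# Hypothetical ID-to-note mapping: each ant owns one MIDI note, but IDs that
# would land outside the playable band all collapse to middle C (MIDI 60).
def note_for(ident, low=36, high=84):
    note = 21 + ident        # spread IDs across the 88-key piano (A0 = MIDI 21)
    return note if low <= note <= high else 60

print(note_for(30), note_for(95))  # 51, and 60 for the clamped extreme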

the simulation: turn up the sound!

The ants play their own bebop, they compose their own Xenakis-like songs. No two ant improvisations will be exactly alike; whenever you run the simulation, each ant makes different random choices and the behavior of the model will be different. But they sound like they spring from the same mind:

ant improv #1
ant improv #2
the ants start searching for food
making highways
one food source left…
starting the last highway

Our minds love patterns too – I find myself cheering the ants on when I watch the simulation, rooting for them to find the next leaf, hoping for them to route into the highway pattern, waiting to hear their eerie plunking, playful jazz…

coding in the jungle – on the balcony, adopta

extensions for this project:

- there is a web extension for NetLogo, but without sound; these ants could be translated into JavaScript/p5.js so users can press “play” themselves online and control different variables (how many ants? speed of ants?)

- connect the MIDI sound that the ants are making to a score, print out sheet music written by the ants, play it on the piano

- make the model more complex, closer to the structure of actual leafcutter colonies: different sizes of ants, different tasks…

- interactive projection version

you got this, ant.

Thanks to everyone at Dinacon!

-Madeline Blount
http://mab.space

NetLogo citation:
Wilensky, U. (1999). NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.

Balloon Environmental Sensing Takes to the Air

We have liftoff. My first Balloon Environmental Sensing test successfully “slipped the surly bonds of earth, and danced the skies on laughter-silvered wings” sending data back the whole time. First flight was at the Digital Naturalism Conference in Gamboa, Panama, featuring 10+ sensor values streaming from the balloon to an online data collection system and dashboard.

It was a big success!

This party-balloon platform is designed for inexpensive aerial environmental sensing. Balloon lofting is perfect for scientific research, educational programs, hacker workshops, technology art, and low-cost indoor or industrial monitoring. Is the humidity overhead the same as on the ground? Does wind speed change? Is it dusty up there? How much UV light penetrates the jungle canopy at different levels? These are all questions that can be answered with this platform.

Since advanced LTE wasn’t available in Panama and SigFox coverage was absent, I decided to use the Digital Naturalism Lab’s LoRaWAN gateway—long-range radio networking that uses very little battery power. The data collection firmware code was written in MicroPython running on a LoPy4 wireless microcontroller module from Pycom. This first set of tests used all the Pysense evaluation board sensors, including light, temperature, altitude, humidity, pitch, roll, and acceleration in three axes. The data was taken in real time at 30-second intervals and transmitted over LoRaWAN through The Things Network’s servers to be displayed on a Cayenne dashboard. The Pybytes cloud platform appears promising too; I’m looking forward to exploring it more in later phases of the project.
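The actual firmware is in the GitHub repository linked below; as a rough MicroPython sketch of the pattern on a LoPy4 – join via OTAA, read a Pysense sensor, transmit Cayenne-LPP-encoded readings every 30 seconds – with placeholder keys and only temperature and humidity shown:

import socket, struct, time, binascii
from network import LoRa
from pysense import Pysense
from SI7006A20 import SI7006A20

# Join the LoRaWAN network (region and keys here are placeholders).
lora = LoRa(mode=LoRa.LORAWAN, region=LoRa.US915)
app_eui = binascii.unhexlify('0000000000000000')                  # placeholder
app_key = binascii.unhexlify('00000000000000000000000000000000')  # placeholder
lora.join(activation=LoRa.OTAA, auth=(app_eui, app_key), timeout=0)
while not lora.has_joined():
    time.sleep(2.5)

s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
s.setsockopt(socket.SOL_LORA, socket.SO_DR, 5)

py = Pysense()
sensor = SI7006A20(py)

while True:
    t, h = sensor.temperature(), sensor.humidity()
    # Cayenne LPP: channel 1, type 0x67 = temperature in 0.1 C (signed 16-bit);
    #              channel 2, type 0x68 = humidity in 0.5 % (unsigned 8-bit).
    payload = struct.pack('>BBh', 1, 0x67, int(t * 10)) + \
              struct.pack('>BBB', 2, 0x68, int(h * 2))
    s.setblocking(True)
    s.send(payload)
    s.setblocking(False)
    time.sleep(30)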

Gamboa has one very small grocery store. It does not sell helium or any other noble gas. Luckily the generous David Bowen allowed our sensor package to hitch a ride on his drone during my first week, so up we went for initial testing. As so often happens, even this partial test resulted in lots of changes: I realized we needed a frame counter, better battery connections, and voltage monitoring before flying again. A second shakedown flight on Bowen’s drone proved the value of these additions, and gave us an excellent sampling of the data to come. We also did a bunch of range testing work, which is covered in a separate blog post.

A taxi trip into Panama City brought us to Mundo de los Globos (World of Balloons) where helium tanks are available, along with 1-meter balloons in plenty of colors. With a full tank of the squeaky gas, we returned to Gamboa and I started inflating our ride to the sky.

The next morning it was time for the sensor package to take its first balloon ride, and up we went. Andy Quitmeyer got some amazing footage from his drone and Trevor Silverstein shot high-end video from the ground (coming soon). I could not have asked for a better documentation team. The balloon reached 60 meters (about 200 feet) above ground level, which was the limit of the reel line I was using for a tether.

We got great data back from this flight, and soon made a second one—this time in a large field away from balloon-eating trees. It was easy to get LoRaWAN signal from altitude, since LoRa works best in line-of-sight conditions. We plan to do more with The Things Network to support the biology and ecology research in Gamboa spearheaded by the local Smithsonian Tropical Research Institute.

Here’s a screenshot of the data dashboard from the flight.

And a few graphs:

Another afternoon was set aside for a proper party-balloon experiment. Using a smaller battery I was able to loft the sensor package using 6 small balloons and the small amount of remaining helium. This worked too, though 7 balloons would have provided more lift and handled the wind better. Next time, more balloons!

Data from these flights can be downloaded, and the MicroPython code for the LoPy4 or FiPy can be found on my GitHub.

For the next version of the Balloon Environmental Sensing platform, my plan is to explore other sensors and wireless links. I’m especially interested in UV light, air quality, wind speed and loudness. In Gamboa we talked about trying some sound recording too. As the balloon itself is silent, it’s the perfect place to record. For wireless links I’m itching to explore some new cellular low-bandwidth, low-cost protocols, LTE Cat-M and NB-IoT, because they don’t require any dedicated base stations and should work great at the altitudes needed for balloon flights. Additional plans include extended day-long flights, free flight with GPS, and maybe a look at hydrogen gas – but not near any kids!

The initial prototype goal was to see if the full system would work, and it does! Gamboa was a great success for this project, giving me the time, venue and documentation assistance to bring this idea to life. If you get a chance to attend the next Dinacon, I strongly recommend it. And if you’re interested in balloon sensing for any experiment, class or project, let me know!

Unnatural Language – Michael Ang and Scott Kildall

By Scott (Seamus) Kildall and Michael Ang

Unnatural Language, a collaboration between Michael Ang and Scott Kildall, is a network of electronic organisms (“Datapods”) that create sonic improvisations from physical sensors in the natural environment. Each Datapod has custom electronics connected to sensors, a speaker, and a wireless network. The sensed data, for example from electrodes that measure the subtle electrical variations in the leaves of plants, is transformed into a unique synthesized sound. Encased in sculptural materials (natural fiber, leather, leaves, etc) and dispersed into a natural environment, the Datapods enter into a sonic dialogue with the existing ecosystem of plants and animals.

Unnatural Language proposes that technology and nature are forming a new hybrid ecology, where innovations such as intelligent devices that occupy the natural landscape are dissolving the traditional nature-culture dichotomy. This work repurposes such technology to amplify unseen processes such as plant intercommunication, river health and subtle microclimate changes.

We were at Dinacon in Gamboa, Panama for 18 days, and this was the first full development and installation of our project. After several adventures in the area, we decided to deploy eight Datapods in Lake Chagres, which feeds the Panama Canal, since it constitutes a transitional space: a brackish marshland that also showed signs of human outflow, such as garbage floating in it.

At Dinacon, we developed two types of sensor-synthesizers. The first detected electrical conductivity levels in water and modulated sampled sounds of rocks sinking in water, which we had recorded with a hydrophone. As the water quality fluctuated with the sensor readings, the synthesizer played higher- or lower-pitched samples accordingly.
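The mapping idea is simple enough to sketch (illustrative Python with invented ranges; the real synth is compiled C++ on the ESP32, as described below):

# Pick a higher- or lower-pitched sample from a raw sensor reading.
def pick_sample(adc_value, samples, adc_min=0, adc_max=4095):
    # samples: audio clips ordered from lowest to highest pitch;
    # the ADC range assumes a 12-bit reading, as on an ESP32.
    level = (adc_value - adc_min) / (adc_max - adc_min)
    return samples[min(int(level * len(samples)), len(samples) - 1)]

print(pick_sample(3000, ["rocks_low", "rocks_mid", "rocks_high"]))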

For the water-based Datapods, we mounted the speakers and electronics – custom software-synth code running on an ESP32 with an on-board amp and a water sensor – onto garbage flotillas that we constructed from litter collected by kayak.

The second sensor-synth combination was a plant sensor, which detected electrical activity in plants using electrodes. Plants tend to respond relatively rapidly (2-3 minutes) to various environmental triggers. The synth we developed acted as a drum machine, modulating tempo according to the plant it was attached to.

We learned many things at Dinacon! Making a compelling Datapod took much longer than we thought it would. To achieve the best type of synth effect, we recorded humans performing an activity with the thing being sensed: rocks being thrown into water and water being poured through a strainer onto a plant. We then cut these up into bite-sized pieces and ported them into our software, which uses compiled C++ code on the ESP32 to make dynamic effects.

We also found that the janky look of the sculptures themselves had broad appeal, and this will be a direction for the project in the future. We’re looking forward to further site-specific installations of Unnatural Language.

Many thanks to all our fabulous co-Dinasaurs for the wonderfully playful and productive atmosphere, and especially to our intrepid film crew (Monika, Ruben, Cherise, and Andy on the drone!)

Michael Ang & Scott (Seamus) Kildall

The Frog Show – by Mónica Rikić and Ruben Oya

Frog Show wants to elevate the singing frogs to an audiovisual experience.
Since our arrival in Gamboa, we were amazed every evening by their singing. It didn’t sound like the frogs we knew – it was more of an electronic, synth-like music performance. We saw an opportunity to join the frogs and develop some visuals to add to the show.

With the goals of minimizing environmental impact and not disturbing the frogs’ activity, we came up with this solar-powered red-LED installation. Solar power makes the system self-sufficient, and red light is known to be less perceptible to frogs.

The installation relies on the following hardware: microphone, Arduino board, battery pack, solar panel, and LED strip.

Testing

The light effects are audio-reactive and controlled by code on the Arduino board. Each frog sound triggers the LED strip according to its volume.
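The mapping can be sketched like this (illustrative Python with invented thresholds; the installation’s real code is an Arduino sketch built on the NeoPixel library, per the technical details below):

# Map a normalized microphone envelope (0..1) to one frame of LED colors.
def frame(peak, n_leds=45, noise_floor=0.05, red=(255, 0, 0), off=(0, 0, 0)):
    if peak < noise_floor:
        return [off] * n_leds             # no frog call: the strip stays dark
    lit = max(1, int(peak * n_leds))      # louder call, more red pixels
    return [red] * lit + [off] * (n_leds - lit)

print(frame(0.6)[:3])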

The result is an installation that charges during the daytime and activates at night with the frogs’ concert. You can read the intense activity of the animals in the light show.

Active show with frogs on a sidewalk

Technical details:

  • Arduino Nano
  • Adafruit MAX4466 microphone
  • 12,000 mAh 2.4 A 5 V battery pack
  • 7 W solar panel
  • 1.5 m WS2812B LED strip
  • Arduino code based on the NeoPixel library

Ruben Oya & Mónica Rikić