Watching Agoutis

I came to Dinacon already a devotee of agoutis. I had been observing them, photographing them, and following them around a city park in Rio de Janeiro for over a year.

In Rio the urban population of agoutis is not quite tame, but no longer quite wild – they are not afraid of humans. Humans bring them vegetable scraps, french fries, even piles of cat food that they congregate around to enjoy. These agoutis only rarely flare up their butt hair, the signature agouti skittish gesture of fear. They co-exist with the stray cats, ducks, pigeons, geese, and peacocks that call the park home…

getting close to agoutis, Campo de Santana, Rio de Janeiro, BR
french fries + agoutis, Campo de Santana, Rio de Janeiro, BR
the agouti crowds, Campo de Santana, Rio de Janeiro, BR

In Gamboa, I had planned to film the local agoutis. I knew on some level that they would be different from their quasi-domesticated Brazilian cousins, but I did not realize that my entire understanding of agouti behavior was skewed by the city population I knew.

In Gamboa, an agouti is approximately 7.92x more skittish (data forthcoming). They hear the crinkly sound of a human stepping on a decaying leaf on the ground, and they snap to attention, look up, and run away. The most common image I captured when I began agouti observation in Gamboa was that of a retreating rear end.

The biggest difference was that the jungle agoutis in Panama did not seem to crowd around in groups. I never observed more than two agoutis in the same place, and often if there were two grazing, one would attempt to dominate the other and scare it away (cue: flare butt hair). The urban agoutis act more like we do in cities, gathering, eating fried food. In the jungle, the agouti’s important job of burying and dispersing seeds around the forest seems to be a solo endeavor.

So: in order to observe the agoutis of Gamboa, I knew I needed to get closer, and get quieter.

I took note of a spot near the water on the Laguna trail where multiple agoutis had crossed the footpath – frantically, running from me. I went back to the same spot on different days, in the early afternoon, and saw agoutis retreating from me on multiple occasions. This was a place they liked. This would be my stakeout. I set up a very lo-fi camera trap: my Ricoh GR II fixed-lens camera, attached to a hanging vine with a gorilla tripod (approximate cost: R$15, or less than $4 USD).

Under the gaze of the camera, I set up an offering. This was not the french fries and cat food of the Rio park, but a near-rotting pile of orange peels, banana peels, and hibiscus flowers. I set the stage. The bright colors of my food offering lay against the greying palm underneath it. I walked away. I waited.

lo-fi agouti camera trap set up
for watching agoutis, Ricoh GR II and cheap gorilla tripod

I waited until the forest forgot I was there. Or until I forgot to consider myself different than the forest. I looked through my scopes at hummingbirds, at toucans in the canopy. I knelt until I no longer felt my quads burning. A blue-crowned motmot landed on a branch inches above my face. A Panamanian flycatcher looked at me, asking. I became like a stone, and when I quieted the forest came alive, dense and throbbing.

I stayed wilding myself for a little more than an hour. When I stood up creaking and walked back to my camera, I saw that some of the food had been taken. I realized in that moment I could have caught any creature in the act – who else might want that banana peel?! But after about 40 minutes of filming only the food pile, my camera caught this:

key moments in the video:

0:18 – the second agouti arrives, clucking
0:22 – brief moment of shared snacking
0:47 – agouti fight!
1:30 – paws out, digging underneath the palm
2:47 – agouti returns, from under the palm
5:42 – return of the agouti, part ii

stills from the video:

If the garbage-food offering was a step towards domestication for these jungle agoutis, my sitting in the woods was a step towards wildness. We met somewhere in the middle.

possible extensions of project:

-what would the urban agoutis of Rio have to say to the forest agoutis of Panama? with a similar simple set-up, a signal could be sent (Arduino connected to Internet) from one group to the other – an LED light, a banana peel being delivered…the above could have been Phase 1 of “Cross-Continental Cutia Communication” (cutia = the Brazilian Portuguese word for agouti)

-an agouti hide, like a birding hide, built to be able to disappear and observe like my camera

-more footage, and a full-on documentary about agoutis

Thanks to everyone at Dinacon! And to agoutis everywhere.

-Madeline Blount

Eco-Digital Survival (Redux) in Extreme Landscapes

by Stephanie Rothenberg

The first time I heard about Andy Q and Digital Naturalism was when I stumbled across a copy of “Hacking the Wild: Madagascar” from 2015 on the internet. I found it incredibly thought-provoking and inspiring. The hand-drawn zine illustrated a 10-day expedition by a small group of artists, designers, scientists and locals who were exploring the diverse ecosystem of Madagascar through the design of simple electronic hacks. The zine was a collection of photographs, sketches of prototypes, and personal and collective deep thoughts. The DIY convergence of nature with analog/digital media as a way to not only experience the wild but to exist within it continued to resonate in my mind. After Hurricane Maria hit Puerto Rico in 2017 and completely devastated the island, I started thinking more about DIY survivalist technologies — things you can quickly hack together in an emergency situation that could provide communication, power, food (especially things you can create with paper towel rolls).

Over the next year, I developed a project around this theme titled “Trading Systems: Bio-Economic Fairy Tales” that looked at the failures and inequities of human-designed systems. It raised the question — what might it look like if non-humans were put in the driver’s seat of Puerto Rico’s reconstruction? The project engaged rather whimsical solutions to underscore the severity of the destruction and the lack of support from the US government. Some of the design hacks included lemon batteries as a solution to the island’s non-functioning power grid, and leveraging the earth’s own electromagnetic waves for communication through self-powered crystal/foxhole radios made out of household items such as lead pencils and razor blades.

So when the opportunity emerged this summer to participate in a Dinacon I was more than excited! I had big project ambitions for my 2 weeks in Gamboa but as it happened I was so enthralled with the energized, lovely human and non-human community and lascivious landscape that I got just a tiny bit distracted. I will admit that some of my luxurious time was spent attempting the following: #1) impersonating a human laser frog chorus, #2) interspecies communication with agouti on best garbage foraging practices, #3) outracing a supermax ship in a slowly leaking kayak, #4) thinking about harvesting energy from baby crocodiles, and of course #5) swimming at the “tropical palace” every moment possible (you can IM me for details). 

But the majority of my time was spent reflecting on the wonderful hacks the Madagascar team created and seeing if I could recreate them. Although I made headway on a few, the one pictured here was most successful. I call it “Andy’s Ear” — a circuit and speaker made from a leaf, wax, metallic wire and magnets. Other experiments included exploring fiber optic threads to make an insect sensor, organic breadboards with giant mushroom caps, and a tactile way to analyze/collect data through your tongue using wire probes, a leaf and conductive thread. I am continuing to explore these digital-natural hybrid systems to incorporate into larger, future projects and am so thankful for the amazing time I had learning and sharing at Dinacon!

Special thanks to the marvelous Jana for her expert modeling skills!


Sculpting Shadows

By Albert Thrower – [email protected]


In this project, I created three-dimensional sculptural artworks derived from the shadows cast by found objects.


I began creating 3D prints through unusual processes in 2018, when I used oils to essentially paint a 3D shape. For me, this was a fun way to dip my toes into 3D modeling and printing using the skills I already had (painting) rather than those I didn’t (3D modeling). I was very happy with the output of this process, which I think lent the 3D model a unique texture–it wore its paint-ishness proudly, with bumpy ridges and ravines born from brushstrokes. There was an organic quality that I didn’t often see in 3D models fabricated digitally. I immediately began thinking of other unconventional ways to arrive at 3D shapes, and cyanotype solar prints quickly rose to the top of the list of processes I was excited to try.


My initial goal with this project was simply to test my theory that I could create interesting sculpture through the manipulation of shadow. However, a presentation by Josh Michaels on my first night at Dinacon got me thinking more about shadows and what they represent in the relationships between dimensions. Josh showed Carl Sagan’s famous explanation of the 4th dimension from Cosmos.

Sagan illustrates how a shadow is an imperfect two-dimensional projection of a three-dimensional object. I wondered–if all we had was a two-dimensional shadow, what could we theorize about the three-dimensional object? If we were the inhabitants of Plato’s cave, watching the shadows of the world play on the wall, what objects could we fashion from the clay at our feet to reflect what we imagined was out there? What stories could we ascribe to these imperfectly theorized forms? When early humans saw the night sky, we couldn’t see the three-dimensional reality of space and stars–we saw a two-dimensional tapestry from which we theorized three-dimensional creatures and heroes and villains and conflicts and passions. We looked up and saw our reflection. What does a rambutan shadow become without the knowledge of a rambutan, with instead the innate human impulse to project meaning and personality and story upon that which we cannot fully comprehend? That’s what I became excited to explore with this project. But first, how to make the darn things?


For those who want to try this at home, I have written a detailed How To about the process on my website. But the basic workflow I followed was this:


The areas that are more shaded by our objects stay white, and the areas that the sun hits become a darker blue. Note that the solar print that results from three-dimensional objects like these rambutans has some midtones that follow their curves, because though they cast hard shadows, some light leaks in from the sides. The closer an object gets to the solar paper, the more light it blocks. This effect will make a big difference in how these prints translate to 3D models.

A rambutan print soon after exposure and washing.


For those unfamiliar with depth maps, essentially the software* interprets the luminance data of a pixel (how bright it is) as depth information. Depth maps can be used for a variety of applications, but in this case the lightest parts of the image become the more raised parts of the 3D model, and the darker parts become the more recessed parts. For our solar prints, what this means is that the areas where our objects touched the paper (or at least came very close to it) will be white and therefore raised, the areas that weren’t shaded at all by our objects will become dark and therefore recessed, and the areas that are shaded but into which some light can leak around the objects will be our mid-tones, leading to some smooth graded surfaces in the 3D model.

 *I used Photoshop for this process, but if you have a suggestion for a free program that can do the same, please contact me. I’d like for this process to be accessible to as many people as possible.
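The core luminance-to-depth idea is simple enough to sketch in a few lines of code. This is a toy Python illustration of the concept (my own hypothetical helper, not the Photoshop implementation): each pixel's brightness becomes a height, so the white areas where an object touched the paper are raised and the dark, sun-exposed areas are recessed.

```python
# Toy sketch of a depth map: brightness (0-255) becomes height,
# so white = raised, dark = recessed. Illustrative only.

def luminance_to_heights(pixels, max_height=10.0):
    """Map a 2D grid of luminance values (0-255) to heights (0..max_height)."""
    return [[(v / 255.0) * max_height for v in row] for row in pixels]

# A tiny 3x3 "solar print": white where the object touched the paper,
# mid-gray where light leaked in around it, dark where the sun hit.
print_scan = [
    [255, 128,   0],
    [128, 128,   0],
    [  0,   0,   0],
]
heights = luminance_to_heights(print_scan)
# The white pixel becomes the tallest point of the model,
# the fully exposed corner stays at zero:
assert heights[0][0] == 10.0 and heights[2][2] == 0.0
```

The mid-tones (128 here) land at half height, which is exactly what produces the smooth graded surfaces described above.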

Below, you can play around with some 3D models alongside the solar prints from which they were derived. Compare them to see how subtle variations in the luminance information from the 2D image have been translated into depth information to create a 3D model.

In the below solar print, I laid a spiralled vine over the top of the other objects being printed. Because it was raised off the paper by the other objects, light leaked in and created a fainter shadow, resulting in a cool background swirl in the 3D model. Manipulating objects’ distance from the paper proved to be an effective method to create foreground/background separation in the final 3D model.

The objects to be solar printed, before I laid the spiralled vine on the other objects and exposed the paper.

Another variable that I manipulated to create different levels in the 3D model was exposure time. The fainter leaves coming into the below solar print weren’t any farther from the solar paper than the other leaves, but I placed them after the solar print had been exposed for a couple of minutes. This made their resulting imprint fainter/darker, and therefore more backgrounded than the leaves that had been there for the duration of the exposure. You can also see where some of the leaves moved during the exposure, as they have a faint double image that creates a cool “step” effect in the 3D model. You might also notice that the 3D model has more of a texture than the others on this page. That comes from the paper itself, which is a different brand than I used for the others. The paper texture creates slight variations in luminance which translate as bump patterns in the model. You run into a similar effect with camera grain–even at high ISOs, the slight variation in luminance from pixel to pixel can look very pronounced when translated to 3D. I discuss how to manage this in the How To page for this process.

One more neat thing about this one is that I made the print on top of a folder that had a barcode on it, and that reflected back enough light through the paper that it came out in the solar print and the 3D model (in the bottom right). After I noticed this I started exposing my prints on a solid black surface.

The below solar print was made later in the day–notice the long shadows. It was also in the partial shade of a tree, so the bottom left corner of the print darkens. If you turn the 3D model to its side you’ll see how that light falloff results in a thinning of the model. I also took this photo before the print had fully developed the deep blue it would eventually reach, and that lack of contrast results in the faint seedpod in the bottom left not differentiating itself much from the background in the 3D model. I found that these prints could take a couple days to fully “develop.”


The 3D models that Photoshop spits out through this process can sometimes have structural problems that a 3D printer doesn’t quite know how to deal with. I explain these problems and how to fix them in greater detail in the How To page for this process.


Now we get back to my musings about Plato’s cave. My goal in the painting stage was to find meaning and story in this extrapolation of 3D forms from a 2D projection. As of this writing I have only finished one of these paintings, pictured below.


– Carve the models out of wood with a CNC milling machine to reduce plastic use. I actually used PLA, which is derived from corn starch and is biodegradable under industrial conditions, but is still not ideal. This will also allow me to go BIGGER with the sculptural pieces, which wouldn’t be impossible with 3D printing but would require some tedious labor to bond together multiple prints. 

– Move away from right angles! Though I was attempting to make some unusual “canvasses” for painting, I ended up replicating the rectangular characteristics of traditional painting surfaces, which seems particularly egregious when modeling irregular organic shapes. Creating non-rectangular pieces will require making prints that capture the entire perimeter of the objects’ shadows without cutting them off. I can then tell the software to “drop out” the negative space. I have already made some prints that I think will work well for this; I’ll update this page once I 3D model them.

– Build a custom solar printing rig to allow for more flexibility in constructing interesting prints. A limitation of this process was that I wanted to create complex and delicate compositions of shadows but it was hard to not disturb the three-dimensional objects when moving between the composition and exposure phases. My general process in this iteration of the project was to arrange the objects on a piece of plexiglass on top of an opaque card on top of the solar print. This allowed me time to experiment with arrangements of the objects, but the process of pulling the opaque card out to reveal the print inevitably disrupted the objects and then I would have to scramble to reset them as best I could. Arranging the objects inside wasn’t a good option because I couldn’t see the shadows the sun would cast, which were essentially the medium I was working with. The rig I imagine to solve this would be a frame with a transparent top and a sliding opaque board which could be pulled out to reveal the solar paper below without disrupting the arrangement of objects on top. 

– Solar print living creatures! I attempted this at Dinacon with a centipede, as did Andy Quitmeyer with some leafcutter ants. It’s difficult to do! One reason is that living creatures tend to move around and solar prints require a few minutes of exposure time. I was thinking something like a frog – which might hop around a bit, stay still, hop around some more – could work, but you would still need some kind of clear container that would contain the animal without casting its own shadow. I also thought maybe a busy leafcutter ant “highway” would have dense enough traffic to leave behind ghostly ant trails, but Andy discovered that the ants are not keen to walk over solar paper laid in their path. A custom rig like the one discussed above could maybe be used–place the rig in their path, allow them time to acclimate to its presence and walk over it, then expose the paper underneath them without disturbing their work.

– Projection map visuals onto the 3D prints! These pieces were created to be static paintings, but they could also make for cool three-dimensional animated pieces. Bigger would be better for this purpose.

My project table at the end-of-Dinacon showcase.
This kiddo immediately began matching the objects I had on display to their respective solar prints!

Agouti, Agouti!

By Jason Bond, Blunderboffins

Agouti, Agouti! is a work of interactive digital art (i.e. a videogame) which aims to capture the spirit of the loveable agouti, a rodent commonly seen eating scraps and frolicking about in the backyards of Gamboa, Panama. Agoutis play an important role in the spread of seeds in the forest and are adorable to boot.

This prototype work can be played on a modern Mac or Windows computer with a two-stick game controller. The player is invited to explore a jungle, eat some fruit, and — as the agouti does when frightened — puff up the hair on their butt.

The humble Central American agouti.

The Virtual Agouti

The agouti featured in the game is an original model created in the modelling and animation software Blender. It has a small number of animations — enough to simulate some basic activities. In an effort to capture the agouti’s way of moving about, slow-motion video was taken of agoutis around Gamboa and a series of images were extracted as reference for the walking and running gaits.

Although the artist on this project has been working in videogames for many years, he is new to modelling and animating, making this work a significant learning exercise.

A low-poly agouti model created in Blender.
Frames of an agouti walking extracted from slow-motion video.

The Forest

The environment of Agouti, Agouti! is filled with virtual “plants”. These forms are more impressionistic than replicative, bearing little resemblance to the actual plants of Panama, but they are meant to reflect the variety in Gamboa’s forest and to provide a suitable jungle world for the agouti to play in.

Each type of virtual plant is generated by algorithm using custom software designed for this project. In fact, this generator was intended to be the centrepiece of this project until the agouti charmed its way into the starring role. 

The generator began as a simple branching algorithm not dissimilar from L-Systems — a common procedural generation technique — beginning with a trunk and randomly splitting off branches to create a tree-like structure. Inspired by the epiphytes of Panama, this algorithm was modified to take a more additive approach: any number of different forms can be attached to any part of the structure.

Because the results of this generator can be quite chaotic, some crude tools were developed to rapidly filter through them for the best stuff. This includes a mutation tool which can take a plant with some potential and produce interesting variations on it until the user is happy with the results.
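The additive branching-and-mutation idea can be sketched roughly like this. This is a loose Python reconstruction of the concept for illustration, not the project's actual generator code: grow a trunk, randomly split off branches, then "mutate" a plant by jittering a few of its branch parameters to get a variation.

```python
import random

# Loose sketch of a branching plant generator with mutation.
# The parameter ranges here are my own assumptions.

def grow(depth=4, spread=3, rng=random):
    """Return a nested dict describing a plant: each node may carry children."""
    node = {"length": rng.uniform(0.5, 2.0),
            "angle": rng.uniform(-45, 45),
            "children": []}
    if depth > 0:
        for _ in range(rng.randint(1, spread)):
            node["children"].append(grow(depth - 1, spread, rng))
    return node

def mutate(node, rate=0.3, rng=random):
    """Copy a plant, jittering some branch angles/lengths to get a variation."""
    out = {"length": node["length"], "angle": node["angle"],
           "children": [mutate(c, rate, rng) for c in node["children"]]}
    if rng.random() < rate:
        out["angle"] += rng.uniform(-20, 20)
        out["length"] *= rng.uniform(0.8, 1.2)
    return out

plant = grow()
variant = mutate(plant)   # same branching structure, slightly different shape
```

Mutation preserves the topology (which branch attaches where) while re-rolling the look, which is what lets you keep "a plant with some potential" and explore variations on it.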

A screenshot of the plant generator, showing three mutations of what was once the same plant.

Each plant is encoded with a growth animation so that it can begin as a simple seedling and gain branches and leaves over time. The agouti’s world can start out bare and grow a massive, abstract canopy.

The agouti’s planet with hundreds of small seedlings.

The planet after all plants have grown to full size.

Available Materials

The game and agouti model are freely available for download at:

Nom nom nom.

complexity + leafcutters: code/improvisation

The shimmering, industrious leafcutter ants that build highways on the forest floor make up a complex adaptive system – the sophisticated structures and patterns that they build are well beyond the sum of their individual parts. The ants’ collective intelligence emerges through the repetition of simple tasks, and somehow through self-organization they build cities without architects, roads without engineers. There’s something magnetic about their energetic movement as they carve through the jungle – wherever I found them at Gamboa, I found that I could not look away.

from pipeline trail and laguna trail, Gamboa
ant, Atlas
going around the stick barrier

I altered the code from a classic NetLogo simulation to model the behavior of the leafcutters. NetLogo allows you to code agent-based models and watch them play out over time – each of the ants acts as an autonomous “agent” with a simple task to perform, and the iteration of multiple ants performing these tasks begins to simulate how the ants behave in the jungle. What starts out as random walking drifts into road-like patterns as the ants pick up pixel leaves and deliver them to their digital fungus…

Ant Tasks:
1. choose a random angle between -45 and 45 degrees
2. walk 1 unit in that direction
3. IF there’s food (green leaves or pink flowers), pick it up by turning green, and deliver it back to the fungus at the center.
4. IF you sense digital pheromone (ants carrying food tag the pixels they walk over with digital “scent” as they head to the center), follow that pheromone.
5. repeat.
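The original model runs in NetLogo, but the ant rules above can be sketched in a few lines of Python. This is a loose translation for illustration only (it omits the pheromone-following step for brevity, though it does lay the "scent" down):

```python
import math
import random

# Loose Python sketch of the NetLogo ant rules, not the actual model code.

class Ant:
    def __init__(self):
        self.x = self.y = 0.0          # start at the fungus (center)
        self.heading = random.uniform(0, 360)
        self.carrying = False

    def step(self, food, pheromone):
        if self.carrying:
            # head straight home, tagging the current cell with "scent"
            cell = (round(self.x), round(self.y))
            pheromone[cell] = pheromone.get(cell, 0) + 1
            self.heading = math.degrees(math.atan2(-self.y, -self.x))
        else:
            # wander: random turn between -45 and 45 degrees
            self.heading += random.uniform(-45, 45)
        rad = math.radians(self.heading)
        self.x += math.cos(rad)        # walk 1 unit in that direction
        self.y += math.sin(rad)
        cell = (round(self.x), round(self.y))
        if not self.carrying and cell in food:
            food.remove(cell)          # pick up a pixel leaf
            self.carrying = True
        elif self.carrying and abs(self.x) < 1 and abs(self.y) < 1:
            self.carrying = False      # delivered to the fungus

food = {(5, 5), (-3, 7)}               # two pixel leaves
pheromone = {}
ants = [Ant() for _ in range(10)]
for _ in range(200):
    for ant in ants:
        ant.step(food, pheromone)
```

Even this stripped-down version shows the point of agent-based modeling: no ant knows about highways, yet the carrying ants' straight runs home are what etch the first trails.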

The Twist: music
A symphony of digital fungus stockpiling
An audio representation of the complex patterns and surprising order that arises from randomness…

Each ant in the simulation has an ID number, and that ID number corresponds to a note on the piano. When an ant picks up a leaf and successfully brings it back to the fungus in the middle, that ant will sound its unique note. I calibrated this so that extremely low notes and extremely high notes on the scale won’t play – instead of those extremes some ants are assigned the same middle C, which you can hear throughout the simulation over and over like a drum beat…
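Here is a tiny sketch of that ID-to-note mapping. The MIDI numbers and range used here are my own assumptions, not the exact calibration from the simulation:

```python
# Sketch of the ant-ID-to-piano-note mapping. The playable window and
# the naive ID offset are assumptions for illustration.

LOW, HIGH, MIDDLE_C = 36, 84, 60       # assumed playable MIDI window

def ant_note(ant_id):
    note = 21 + ant_id                 # hypothetical ID-to-key mapping
    return note if LOW <= note <= HIGH else MIDDLE_C

assert ant_note(40) == 61              # in range: the ant's unique note
assert ant_note(1) == 60               # too low: falls back to middle C
assert ant_note(90) == 60              # too high: falls back to middle C
```

Clamping the extremes to a shared middle C is what produces the repeating drum-beat note you hear under the improvisation.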

the simulation: turn up the sound!

The ants play their own bebop, they compose their own Xenakis-like songs. No two ant improvisations will be exactly alike; whenever you run the simulation, each ant makes different random choices and the behavior of the model will be different. But they sound like they spring from the same mind:

ant improv #1
ant improv #2
the ants start searching for food
making highways
one food source left…
starting the last highway

Our minds love patterns too – I find myself cheering the ants on when I watch the simulation, rooting for them to find the next leaf, hoping for them to route into the highway pattern, waiting to hear their eerie plunking, playful jazz…

coding in the jungle – on the balcony, adopta

extensions for this project:

-there is a web extension for NetLogo, but without sound; could translate these ants into Javascript/p5.js so users can press “play” themselves online and control different variables (how many ants? speed of ants?)

-connect the MIDI sound that the ants are making to a score, print out sheet music written by the ants, play it on the piano

-make the model more complex, closer to the structure of actual leafcutter colonies: different sizes of ants, different tasks…

-interactive projection version

you got this, ant.

Thanks to everyone at Dinacon!

-Madeline Blount

NetLogo citation:
Wilensky, U. (1999). NetLogo. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.

Balloon Environmental Sensing Takes to the Air

We have liftoff. My first Balloon Environmental Sensing test successfully “slipped the surly bonds of earth, and danced the skies on laughter-silvered wings” sending data back the whole time. First flight was at the Digital Naturalism Conference in Gamboa, Panama, featuring 10+ sensor values streaming from the balloon to an online data collection system and dashboard.

It was a big success!

This party-balloon platform is designed for inexpensive aerial environmental sensing. Balloon lofting is perfect for scientific research, educational programs, hacker workshops, technology art, as well as low-cost indoor or industrial monitoring. Is the humidity overhead the same as on the ground? Does wind speed change? Is it dusty up there? How much UV light penetrates the jungle canopy at different levels? These are all questions that can be answered with this platform.

Since advanced LTE wasn’t available in Panama and SigFox coverage was absent, I decided to use the Digital Naturalism Lab’s LoRaWAN gateway—long-range radio networking that uses very little battery power. The data collection firmware was written in MicroPython running on a LoPy4 wireless microcontroller module from Pycom. This first set of tests used all the Pysense evaluation board sensors, including light, temperature, altitude, humidity, pitch, roll and acceleration in three axes. This data was taken in real time at 30-second intervals and transmitted using LoRaWAN across The Things Network servers to be displayed on a Cayenne dashboard. The Pybytes cloud platform appears promising too; I’m looking forward to exploring it more in later phases of the project.
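For readers curious about the wire format, here is a rough sketch of how sensor readings can be packed for a Cayenne dashboard using the Cayenne LPP convention (a channel byte, a type byte, then the big-endian value). This is illustrative desktop Python, not the actual LoPy4 firmware (which is on my GitHub):

```python
import struct

# Sketch of Cayenne LPP payload packing. Type codes: temperature 0x67
# (0.1 degC, signed), humidity 0x68 (0.5 %, unsigned), barometer 0x73
# (0.1 hPa, unsigned). Channel numbers are arbitrary.

def lpp_temperature(channel, celsius):
    return struct.pack(">BBh", channel, 0x67, int(round(celsius * 10)))

def lpp_humidity(channel, percent):
    return struct.pack(">BBB", channel, 0x68, int(round(percent * 2)))

def lpp_barometer(channel, hpa):
    return struct.pack(">BBH", channel, 0x73, int(round(hpa * 10)))

payload = (lpp_temperature(1, 28.5)
           + lpp_humidity(2, 87.0)
           + lpp_barometer(3, 1009.1))
assert len(payload) == 11   # small enough for a single LoRaWAN uplink
```

Keeping each reading to a few bytes matters because LoRaWAN uplinks are tiny; a whole multi-sensor sample fits in one packet.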

Gamboa has one very small grocery store. It does not sell helium or any other noble gas. Luckily the generous David Bowen allowed our sensor package to hitch a ride on his drone during my first week, so up we went for initial testing. As is so often the case, even this partial test resulted in lots of changes. In this case I realized we needed a frame counter, better battery connections and voltage monitoring before flying again. A second shakedown flight on Bowen’s drone proved the value of these additions, and gave us an excellent sampling of the data to come. We also did a bunch of range testing work, which is covered in a separate blog post.

A taxi trip into Panama City brought us to Mundo de los Globos (World of Balloons) where helium tanks are available, along with 1-meter balloons in plenty of colors. With a full tank of the squeaky gas, we returned to Gamboa and I started inflating our ride to the sky.

The next morning it was time for the sensor package to take its first balloon ride, and up we went. Andy Quitmeyer got some amazing footage from his drone and Trevor Silverstein shot high-end video from the ground (coming soon). I could not have asked for a better documentation team. The balloon reached 60 meters (about 200 feet) above ground level, which was the limit of the reel line I was using for a tether.

We got great data back from this flight, and soon made a second one—this time in a large field away from balloon-eating trees. It was easy to get LoRaWAN signal from altitude since LoRa works best in line-of-sight conditions. We plan to do more with the Things Network to support the biology and ecology research in Gamboa that are spearheaded by the local Smithsonian Tropical Research Institute.

Here’s a screenshot of the data dashboard from the flight.

And a few graphs:

Another afternoon was set aside for a proper party-balloon experiment. Using a smaller battery I was able to loft the sensor package using 6 small balloons and the small amount of remaining helium. This worked too, though 7 balloons would have provided more lift and handled the wind better. Next time, more balloons!

Data from these flights can be downloaded, and the MicroPython code for the LoPy4 or FiPy can be found on my GitHub.

For the next version of the Balloon Environmental Sensing platform, my plan is to explore other sensors and wireless links. I’m especially interested in UV light, air quality, wind speed and loudness. In Gamboa we talked about trying some sound recording too. As the balloon itself is silent, it’s the perfect place to record. For wireless links I’m itching to explore some new low-bandwidth, low-cost cellular protocols, LTE Cat-M and NB-IoT, because they don’t require any dedicated base stations and should work great at the altitudes needed for balloon flights. Additional plans include extended day-long flights, free flight with GPS, and maybe a look at hydrogen gas (but not near any kids!).

The initial prototype goal was to see if the full system will work, and it does! Gamboa was a great success for this project, giving me the time, venue and documentation assistance to bring this idea to life. If you get a chance to attend the next Dinacon, I strongly recommend it. And if you’re interested in balloon sensing for any experiment, class or project, let me know!

Unnatural Language – Michael Ang and Scott Kildall

By Scott (Seamus) Kildall and Michael Ang

Unnatural Language, a collaboration between Michael Ang and Scott Kildall, is a network of electronic organisms (“Datapods”) that create sonic improvisations from physical sensors in the natural environment. Each Datapod has custom electronics connected to sensors, a speaker, and a wireless network. The sensed data, for example from electrodes that measure the subtle electrical variations in the leaves of plants, is transformed into a unique synthesized sound. Encased in sculptural materials (natural fiber, leather, leaves, etc) and dispersed into a natural environment, the Datapods enter into a sonic dialogue with the existing ecosystem of plants and animals.

Unnatural Language proposes that technology and nature are forming a new hybrid ecology, where innovations such as intelligent devices that occupy the natural landscape are dissolving the traditional nature-culture dichotomy. This work repurposes this technology to amplify unseen processes such as plant intercommunication, river health and subtle microclimate changes. 

We were at Dinacon in Gamboa, Panama for 18 days and this was our first full development and installation of our project. After several adventures in the area, we decided to deploy eight Datapods in Lake Chagres, which feeds the Panama Canal, since it constitutes a transitional space: a brackish marshland that also showed signs of human outflow, such as floating garbage.

At Dinacon, we developed two types of sensor-synthesizers. The first detected electrical conductivity levels in water and modulated sampled sounds of rocks sinking in water, which we had recorded with a hydrophone. As the water quality fluctuated with these sensor readings, the synthesizer played higher- and lower-pitched samples accordingly.

For the water-based Datapods, we mounted the speakers and electronics (custom software-synth code running on an ESP32 with an on-board amplifier and a water sensor) onto garbage flotillas, which we constructed from the litter we had collected by kayak.

The second sensor-synth combination was a plant sensor, which detected electrical activity in plants using electrodes. Plants tend to respond relatively rapidly (within 2-3 minutes) to various environmental triggers. The synth we developed acted as a drum machine, modulating its tempo according to the plant it was attached to.
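The actual ESP32 firmware isn’t reproduced here, but the core idea (mapping a slowly varying sensor reading onto a musical parameter) can be sketched in a few lines of Python. The ADC range and tempo bounds below are illustrative assumptions, not the values used in the Datapod firmware:

```python
def reading_to_bpm(reading, r_min=0, r_max=4095, bpm_min=60, bpm_max=180):
    """Map a raw sensor reading (e.g. a 12-bit ESP32 ADC value from a
    plant electrode) onto a drum-machine tempo in beats per minute.
    All ranges here are illustrative assumptions."""
    reading = max(r_min, min(r_max, reading))   # clamp to the ADC range
    t = (reading - r_min) / (r_max - r_min)     # normalize to 0..1
    return bpm_min + t * (bpm_max - bpm_min)    # linear map into tempo range
```

The same linear mapping is straightforward to express in the C++ firmware, where its output would set the drum machine’s step interval.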

We learned many things at Dinacon! Making a compelling Datapod took much longer than we thought it would. To achieve the best type of synth effect, we recorded humans performing an activity with the thing being sensed: rocks being thrown into water and water being poured through a strainer onto a plant. We then cut these up into bite-sized pieces and ported them into our software, which uses compiled C++ code on the ESP32 to make dynamic effects.

Also, the janky look of the sculptures themselves had broad appeal, and this will be a direction for the project in the future. We’re looking forward to further site-specific installations of Unnatural Language.

Many thanks to all our fabulous co-Dinasaurs for the wonderfully playful and productive atmosphere, and especially to our intrepid film crew (Monika, Ruben, Cherise, and Andy on the drone!)

Michael Ang & Scott (Seamus) Kildall

The Frog Show – by Mónica Rikić and Ruben Oya

Frog Show wants to elevate the singing frogs into an audiovisual experience.
From our arrival in Gamboa, we were amazed by their singing every evening. They didn’t sound like the frogs we knew; this was more of an electronic, synth-like music performance. We saw an opportunity to join the frogs and develop some visuals to add to the show.

To keep the environmental impact low and avoid disturbing the frogs’ activity, we came up with this solar-powered red LED installation. Solar power makes the system self-sufficient, and red light is known to be less visible to frogs.

The installation relies on the following hardware: a microphone, an Arduino board, a battery pack, a solar panel and an LED strip.


The light effects are audio-reactive and controlled by code on the Arduino board. Each frog sound triggers the LED strip according to its volume.
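The Arduino sketch itself isn’t shown here, but the volume-to-light mapping can be outlined in a few lines; this is sketched in Python for clarity, and the threshold and pixel count are illustrative guesses, not the installation’s exact values:

```python
def frog_to_led(volume, threshold=0.1, n_leds=45):
    """Map a normalized microphone envelope (0.0-1.0) to a number of lit
    pixels on a WS2812b strip. Threshold and strip length are
    illustrative guesses."""
    if volume < threshold:
        return 0                                   # quiet: strip stays dark
    span = (volume - threshold) / (1.0 - threshold)
    return round(span * n_leds)                    # louder call -> more pixels lit
```

On the Arduino, the equivalent logic would read the MAX4466 envelope on an analog pin and drive the strip through the NeoPixel library.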

The result is an installation that charges during the daytime and activates at night with the frogs’ concert. You can read the intense activity of the animals through the light show.

Active show with frogs on a sidewalk

Technical details:

  • Arduino Nano
  • Adafruit MAX4466 microphone
  • 12,000 mAh 2.4 A 5 V battery pack
  • 7 W solar panel
  • 1.5 m WS2812b LED strip
  • Arduino code based on the NeoPixel library

Ruben Oya & Mónica Rikić

Butterfly Wing Site-Specific Installation – Emily Volk

Inspired by the scale detail of butterfly wings, my project at Digital Naturalism centered on gathering microscope images and videos of butterfly wings, and using them for a site-specific projection installation.


Butterfly wings produce their detailed coloration and patterning through light refraction off the microscopic scales that cover the surface of the wings. Scales also cover the head, as well as parts of the thorax and abdomen, in many insect species, including butterflies. Beyond coloration, scales aid in flight and help waterproof the insect, and their delicate nature is a reason to avoid touching live butterfly wings (all of my specimens were deceased and gathered along the trails, roads, and buildings of Gamboa)! Through the varied optical properties of these microscopic scales, intricate and detailed patterns of color are created across butterfly and moth species.

Panama is home to a great diversity of the world’s butterflies and moths, many of which exhibit dramatic wing coloration. Panama is especially known for its diversity of neotropical Heliconius butterflies, which express an incredible array of wing colors and patterns. Panama is also known for its many mimics: species that express the same coloration for various hypothesized reasons, including harmless species imitating poisonous ones to deter predators, a process called Batesian mimicry. Exploring the genetic pathways behind scale expression, wing coloration, and patterning is an area of current research, aiming to better explain the relationships among the incredibly rich array of butterflies in areas such as Panama.

Overall, scales provide not only biologically useful functionality through meaningful coloration and mimicry, assisting in flight, and waterproofing, but also draw the eye with incredible aesthetic beauty. To expose an audience to the aesthetic and biological wonder I find in observing butterfly wing scale detail, I gathered an array of microscope images and video of butterfly wing scale detail, and displayed my media in a site-specific projection installation outside of Dinalab on a public exhibition evening.


Throughout my time at the conference, I gathered deceased butterfly specimens from around the Gamboa area, finding wings or fragments along trails, roads, and buildings. Importantly, all specimens I gathered were already deceased when found; absolutely no live butterfly or moth specimens were handled during my time at the Digital Naturalism Conference. Over the conference I collected wing fragments from a diversity of species. (As of October 1st, 2019, many of these I am still working on correctly identifying. Please reach out with correct scientific names for those shown!)

We found an incredible number of butterfly and moth fragments on the patio of the Gamboa Smithsonian Tropical Research Institute (STRI). Here are wings found by Tiare Ribeaux; her phone served as our collection plate after she was surprised by the number of wings here! Unfortunately, the STRI entryway seems to be an insect graveyard: a large, covered concrete area with consistent nighttime lights overhead.

Microscope Images and Video

I used a Plugable USB microscope (thanks Lee Wilkins, Dinasaur extraordinaire!!) and the free Plugable Digital Viewer software to gather both video and still images of microscopic detail on the collected specimens. (As a special shout-out to this microscope: it is relatively affordable, at about a $20 price point online! Get your own, and explore microscope imagery in your own area!!)

Here is a selection of my favorite still images:


Installation set-up scene. Computer, projector, bromeliad and vegetation galore!

To display the microscope video I collected of wings, I set up a site-specific projection installation at one of our evening Digital Naturalism public installations in and around our Gamboa Dinalab. Here, I projected my microscope videos of butterfly wing detail onto a utility box (shown below, from front and side). These utility structures are common throughout Panama, and appear to me to be open canvases for a variety of art! This type of public canvas is especially conducive to projection, which does not harm or modify its canvas.

I am drawn to projection art as a medium that seems to me to be both a light and a fluid. In working with projection, I seek to modify projection canvases to insert mobility, depth, and layers into a projection-based art installation. I’m interested in projection work that gives videos motion and disrupts a 2D canvas. I find that projection, through its light, motion, and ability to display on various surfaces, can be a uniquely dynamic and immersive medium for art installations. In using projection for my wing video installation, I seek to draw an audience into the colors and scale detail with a projection environment that blends technology, biology, and fascination.

I incorporated natural elements into the installation by arranging bromeliads (saved by Dinasaur Rabia from a local tree-trimming operation) and an adjacent tree into the projection surface. The process shots above and below show the location of my local Gamboa installation!

Still photos of installation


I received feedback on my installation from Panamanian artist Kevin Lim. For more feedback or project inquiries, please leave a comment below!

Future Work

Importantly, the media I collected of microscopic wing detail is now portable. With these images and video, I can create more site-specific installation pieces in different environments. I hope to explore a more static installation piece, in a gallery setting or outdoors, where these microscope videos are projected onto a mobile screen shaped like a wing, that can flutter in the wind.

Additionally, as always, I seek opportunities to continue to merge science and art in creative ways that showcase and promote the fascination and inquiry inherent to both disciplines.

For inquiries or collaborations, please comment on my bio page on the Digital Naturalism website, or reach out online through another medium.

Utter excitement with the flexibility of site-specific projection, and all of Dinacon:

Yay Dinacon!

Further Reading and Exploration

“Butterfly scale optics” Google Images search 🙂

Butterfly Wing Optics STEMvisions blog post

Deshmukh, R. , Baral, S. , Gandhimathi, A. , Kuwalekar, M. and Kunte, K. (2018), Mimicry in butterflies: co‐option and a bag of magnificent developmental genetic tricks. WIREs Dev Biol, 7: e291. doi:10.1002/wdev.291

Florida State University and Olympus’s collaboration site Butterfly Wing Scale Digital Image Gallery

Kolle, M., Salgard-cunha, P., Scherer, M. R. J., Huang, F., Vukusic, P., Mahajan, S., . . . Steiner, U. (2010). Mimicking the colourful wing scale structure of the Papilio blumei butterfly. Nature Nanotechnology, 5(7), 511-5. doi:10.1038/nnano.2010.101

Srinivasarao, M. (1999). Nano-Optics in the Biological World: Beetles, Butterflies, Birds, and Moths. Chem. Rev., 99(7), 1935-1962. doi: 10.1021/cr970080y.


Generous thanks to the Boulder Arts & Culture Professional Development Grant program for funding my 2019 Digital Naturalism Conference engagement.

Special appreciation goes to Betty Sargeant and Madeline Schwartzman, whose initial microscope exploration of insect wings and feathers drew me in to further exploring microscope wing detail! Thank you for sharing your incredible work, expertise, curiosity, and inspiration during my first days acclimatizing to Dinacon, and throughout our time together.

Thank you to Lee Wilkins for letting me use your rockin’ microscope!

Thank you to Tiare Ribeaux for mega wing collection during our few days of overlap (on-the-phone photo).

Thank you so much to Dina-captain Andrew Quitmeyer for tireless enthusiasm, and bringing us all together with your brilliant conference and curiosity!

And to all of you across the huge Digital Naturalism community, I’m so happy to have all of you new, inspiring friends and peers <3

Froggy camouflage handheld fans

Project by Anna Carreras. BAU Design College of Barcelona, Spain.

Hand fan (abanico) inspired by a glass frog. Photo by Anna Carreras. Gamboa, Panama.

Rainforests of Panama are some of the world’s most biologically diverse areas. Animals use camouflage tactics to blend in with their surroundings, to disguise their appearance. They mask their location, identity, and movement to avoid predators.

On the other hand, in cities in many countries, the increased use of surveillance technologies has made them part of the public and private landscape. Citizens lack camouflage tactics to avoid these forms of elevated vigilance. Can we learn and borrow tactics from animals to keep away from this constant monitoring?

The Froggy camouflage handheld fans project proposes a playful way to act upon our surveilled world while learning from frog camouflage in the Gamboa rainforest, Panama.

Hand fan inspired by a dart frog. Photo by Anna Carreras. Gamboa, Panama.

Nature in Gamboa

Exploring nature, animal watching in Laguna Trail. Photo by Marta Verde. Gamboa, Panama.

Attending the Digital Naturalism Conference (Dinacon) from August 26th to September 1st offered the chance to take several exploratory walks around the Adopta un Bosque station, the La Laguna trail in Gamboa, and Pipeline Road on the border of Soberania National Park. Animal watching included birds (thank you, Jorge), frogs, mammals, and several butterflies and other insects.

Bat sleeping place near the Panama Canal. Photo by Marta Verde. Gamboa, Panama.

A species’ camouflage depends on the physical characteristics of the organism and the behavior of the species, and it is influenced by the behavior of its predators. Background matching is perhaps the most common camouflage tactic, and animals using it are difficult to spot and study. Another tactic is disruptive coloration, which causes predators to misidentify what they are looking at. Other species use coloration tactics that highlight rather than hide their identity: warning coloration makes predators aware of the organism’s toxic or dangerous characteristics, a strategy called aposematism.

Animals with different camouflage tactics. Photos by Mónica Rikić, Marta Verde and Tomás Montes. Gamboa, Panama.

Studying camouflage tactics includes animal observation and some reading. Frogs are easier to spot and photograph in Gamboa than insects or snakes. The animal books at the Adopta un Bosque station and Dinalab gave the opportunity to classify the different species and gain some knowledge about their colors and skin patterns.

Frog spotted and photographed during Pipeline road walk. Photo by Tomás Montes. Gamboa, Panama.
Frogs spotted and photographed during walks. Photos by Tomás Montes, Päivi Maunu and Jorge Medina. Gamboa, Panama.
Frog identification triptych at Dinalab. Photos by Anna Carreras. Gamboa, Panama.
Frog identification book at Adopta un Bosque station. Photos by Anna Carreras. Gamboa, Panama.


Different frog skin patterns generated mathematically. Images and code by Anna Carreras.

The skin of some animals shows self-ordered spatial pattern formation. Cell growth and coloration create order resulting from the specific differentiation of cell groups. In such complex systems, cells are only in contact with their closest neighbors. What are the morphogenesis mechanisms by which order emerges from individual cells? What mathematical models can we use to achieve this kind of growing pattern and gain some knowledge about it? Can we simulate the visible regularities of some frogs’ skin with a coded system?

The mathematician Alan Turing predicted the mechanisms that give rise to patterns of spots and stripes. The model is quite simple: it places cells in a row, where each cell interacts only with its adjacent cells. Each cell synthesizes two different types of molecules, and these molecules can diffuse passively to the adjacent cells. By itself, diffusion makes the whole system more homogeneous and tends to destroy any ordered structure. Nevertheless, diffusion combined with the interactions between the cells’ molecules drives the formation of macroscopic ordered structures. This mechanism is called a reaction-diffusion system, and it drives the emergence of order in a chaotic dynamic system.
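The idea can be made concrete with the Gray-Scott model mentioned below. The following is a minimal pure-Python sketch of the update step: two chemicals U and V diffuse with a 4-neighbour Laplacian while the reaction U + 2V → 3V converts one into the other. The parameter values are common demo settings, not necessarily those used in the Processing frog-skin generator:

```python
def laplacian(grid, x, y, n):
    """Discrete Laplacian: 4-neighbour stencil with wrap-around edges."""
    return (grid[(x - 1) % n][y] + grid[(x + 1) % n][y]
            + grid[x][(y - 1) % n] + grid[x][(y + 1) % n]
            - 4 * grid[x][y])

def gray_scott(n=24, steps=100, Du=0.16, Dv=0.08, f=0.035, k=0.065):
    """Evolve the Gray-Scott reaction-diffusion system on an n x n grid.
    U starts at 1.0 everywhere; a small central patch is seeded with the
    second chemical V. Parameter values are typical demo settings."""
    U = [[1.0] * n for _ in range(n)]
    V = [[0.0] * n for _ in range(n)]
    for x in range(n // 2 - 2, n // 2 + 2):      # seed a 4x4 patch of V
        for y in range(n // 2 - 2, n // 2 + 2):
            U[x][y], V[x][y] = 0.5, 0.25
    for _ in range(steps):
        U2 = [row[:] for row in U]
        V2 = [row[:] for row in V]
        for x in range(n):
            for y in range(n):
                u, v = U[x][y], V[x][y]
                uvv = u * v * v                  # reaction term U + 2V -> 3V
                U2[x][y] = u + Du * laplacian(U, x, y, n) - uvv + f * (1 - u)
                V2[x][y] = v + Dv * laplacian(V, x, y, n) + uvv - (f + k) * v
        U, V = U2, V2
    return U, V
```

Mapping the final U or V field to two selected colors, as the Processing system does, turns these concentration grids into frog-skin-like images.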

Steps of a reaction-diffusion model evolving from chaotic randomness to structured patterns. Images by Anna Carreras
Steps of a reaction-diffusion model evolving from chaotic randomness to structured patterns. Images by Anna Carreras.
Steps of a reaction-diffusion model evolving from an organized grid to emergent patterns. Images by Anna Carreras.

Code and interface

Frog pattern generator using a reaction-diffusion system. Image and system by Anna Carreras.

A system using the Gray-Scott model and formulas was coded in the Processing language. The interface shows an animation of how a frog’s skin evolves, and the GUI also shows the system values that lead to that skin pattern formation. These values, together with two selected colors, generate a unique frog pattern each time the system is started. The spatial feeding options and the values that can be selected and adjusted are inspired by Gamboa’s frogs; they derive from the observed and photographed species and from the consulted books.

Frog pattern generator using a reaction-diffusion system with random feeding. Image and system by Anna Carreras.

Camouflage DIY hand fans

Two different hand fans. Photos by Anna Carreras. Gamboa, Panama.

The frog skin images are used to create light folding hand fans. They are suitable for Gamboa’s hot weather and help with camouflage inside the rainforest, and they can easily be taken home and used in cities around the world.

To build the hand fans, two parts are needed: the fan frame and the fan leaf. The DIY hand fan follows the design of a traditional Spanish hand fan. The frame structure is made of a thin material that can be waved back and forth, such as birch or pear wood.

Traditional Spanish hand fan structure for laser cut. Designed dxf file by Anna Carreras.

The produced hand fans use 0.8 mm thick birch wood to make sure the ribs can bend without breaking. Fabrication starts by laser-cutting the 16 fan ribs for the frame and printing the camouflage image. Then cut the fan leaf with scissors as a half circle with an exterior radius of 210 mm and an inner radius of 95 mm.
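The stated leaf dimensions pin down the rest of the geometry, which is handy when scaling the design or printing the camouflage image at the right size. A small Python sketch (the rib-spacing formula is a back-of-the-envelope derivation, not a value from the original design files):

```python
import math

def fan_leaf_dimensions(r_outer=210.0, r_inner=95.0, n_ribs=16, angle_deg=180.0):
    """Geometry of the half-circle fan leaf: 210 mm outer radius,
    95 mm inner radius, 16 ribs over a 180-degree opening.
    The rib-spacing figure is an illustrative derivation."""
    angle = math.radians(angle_deg)
    return {
        "outer_arc_mm": r_outer * angle,            # length of the paper's outer edge
        "leaf_depth_mm": r_outer - r_inner,         # radial width of the paper band
        "area_mm2": 0.5 * angle * (r_outer**2 - r_inner**2),  # annular half-ring area
        "rib_spacing_deg": angle_deg / (n_ribs - 1),  # angle between adjacent ribs
    }
```

For the default dimensions this gives an outer edge of roughly 660 mm, a 115 mm deep paper band, and about 12 degrees between adjacent ribs when the fan is fully open.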

Laser cutting the hand fan ribs structure. Photo by Anna Carreras.

When the parts are ready, stack the 16 fan ribs together, with one wide rib at the beginning and one at the end. Fix the fan ribs with an M3 screw and nut (a metric screw with a nominal diameter of 3 mm, or 0.12 in). Spread the fan ribs as an opened hand fan. Glue the fan leaf onto the thinner exterior part of each rib and allow the glue to dry. Finally, one rib at a time, fold each rib over the previous ones, creasing the paper carefully to create the folding shape.

Hand fan in action. Photos by Daniëlle Hoogendijk and Anna Carreras. Gamboa, Panama.
Resulting DIY hand fans. Photos by Anna Carreras. Gamboa, Panama.


Glass frog hand fan. Photo by Anna Carreras. Gamboa, Panama.

Two different models of the Froggy camouflage handheld fans were created. The green one is inspired by the glass frogs and the orange fan is inspired by the pumilio dart frog. Both frogs live in Panama.

Glass frog and Pumilio dart frog. Photos by Anna Carreras and Pavel Kirillov [CC BY-SA 2.0]. Gamboa and Bocas del Toro, Panama.
Glass frog hand fan. Photo by Anna Carreras. Gamboa, Panama
Pumilio dart frog hand fan. Photo by Pavel Kirillov [CC BY-SA 2.0] and Anna Carreras. Gamboa, Panama

The glass frog handheld fan and the pumilio dart frog handheld fan integrated quite well with Gamboa’s surroundings and the rainforest.

Glass frog hand fan. Photo by Anna Carreras. Gamboa, Panama.
Pumilio dart frog hand fan. Photo by Anna Carreras. Gamboa, Panama.
Glass frog hand fan camouflaged between leaves. Photo by Anna Carreras. Gamboa, Panama.

Conclusions and future work

Camouflage is one of the strategies we can play with to act upon our surveilled world. It raises issues of mimesis, crypsis, perception, privacy and identity. Some artistic projects about fashion and cosmetics have been developed with this idea, like CV Dazzle and HyperFace, among others. The Froggy camouflage handheld fans project continues in this direction, creating hand fans inspired by the camouflage strategies of Panama’s frogs.

We can gain knowledge and learn from animals and their hiding techniques. Some animal camouflage skin coloration can be modeled as a quite simple dynamic system that generates complex ordered patterns. We can mathematically model and code this system to simulate the growing process of frog skin coloration. It helps us better understand how different frog species come to have their particular patterns, and it gives us some insight into how order can emerge from random initial conditions.

Different animal patterns and camouflage tactics can be investigated further. This can help us develop diverse algorithms and color results, suited to different environments, that help us hide from the increasing number of surveillance systems: a battle between algorithms learned and borrowed from nature and vigilance algorithms.


Dinalab open Saturday exhibition. Photo by Anna Carreras. Gamboa, Panama.
Dinalab open Saturday exhibition. Photo by Anna Carreras. Gamboa, Panama.


First I would like to thank Dr. Andrew Quitmeyer for organizing the event, and all the participants I met at Dinacon Gamboa. Thanks also to Marta, Mónica, Tomás, Jorge, Päivi and Dani for helping me document the work.


Paper The Chemical Basis of Morphogenesis. Alan Turing. 1952.

Book Orden y Caos en Sistemas Complejos. Ricard V. Solé, Susanna C. Manrubia. 2000.

Video tutorial Coding Challenge #13: Reaction Diffusion Algorithm in p5.js. Daniel Shiffman. 2016.

Project CV Dazzle: Camouflage from face detection. 2010.

Project HyperFace: False-Face Camouflage. 2017.