Andrew Coates is a specialist in sustainable tropical architecture (Cresolus.com). Gamboa has been home, office and inspiration since 2002, when Cresolus moved to Panama from East Africa.
Andrew, his wife Beth and their team work around the tropical world creating infrastructure and buildings that function well in hot humid climates.
Cresolus’ main focus is National Parks facilities and systems; other projects range from low-income housing and schools to very high-end eco resorts.
Cresolus has completed projects across Central America and Africa, and is currently working in Panama, Belize and Gabon.
Andrew is passionate about creating buildings that keep the occupants comfortable even on the hottest, steamiest day during a power cut.
He also loves camouflage, and helped design the one in the photo for the Gabonese National Park rangers’ uniform.
In 2015 Andrew and Beth founded The Gamboa Discovery School (www.gamboaschool.org) for ages 4–10, which takes advantage of the natural and scientific assets of Gamboa.
Zines have been around for a long time, with the goal of making something cheap and easy to reproduce. This means there’s a big reliance on printer paper and classic photocopier inks. Though this makes production easy, it’s not the most environmentally friendly approach: bleached printer paper takes a long time to break down, and inks and toners can be fairly toxic and create environmental issues.
During DinaCon my goal was to create a fully sustainable zine. My concept of sustainability was making something that would have little environmental impact and would break down easily over time. This meant committing to foraging for the materials to make the paper, inks and binding for a zine.
Here’s a breakdown of the components and the processes involved with each part:
Mould – I made my paper mould on-site using scrap wood provided by Andrew Coates’ building team and screen material from DinaLab. This took two days.
Brown Paper – Foraged banana tree bark, boiled for two hours and then blended into pulp. This went into a large plastic packing container with a 2:1 water-to-pulp ratio, called a slurry. It took about 4 days of pulling and drying to make 12 sheets of paper.
White Paper – Foraged turkey tail mushrooms from a local trail (thanks to Blackii’s suggestion), blended with water into pulp. This went into the large plastic container as a 2:1 water-to-pulp slurry. Due to the natural moistness of mushrooms, it took 5 days of pulling and drying, with a dehumidifier’s help, to get 7 sheets of paper.
Black Ink – Foraged charcoal from a small fire, ground down and mixed with gum arabic to thicken. Gum arabic was the one item I brought with me from home, since I wasn’t sure how much access I would have to a similar product. This took about 15 minutes.
Blue/Green Ink – Algae pigment provided by the wonderful Elliot mixed with water/agar agar and gum arabic to try different textures and thicknesses. Each ink option took a few minutes to mix.
Brown Ink – Made by boiling down rambutan skins for one hour to get a gorgeous deep red burgundy, then mixing it with agar agar. This took one hour and 10 minutes to make.
Clear Ink – A mix of honey and coffee which made a transparent, reflective ink; it took the same amount of time as making a pot of coffee.
Wing Page – All the wings on the centrefold pages were foraged from dead insects that other Dinasaurs had used for their own experiments and projects. Once they were no longer being used, I removed the wings, put them in the slurry, and pulled the sheet with the wings embedded.
Bindings – The bindings are foraged vines from a plant in the backyard of DinaLab that I sewed through the pages for an easy bind. This took about ten minutes.
Night falls in Gamboa, Panama — site of the 2019 Digital Naturalism Conference. The Túngara frogs come to life, filling the air with their uncanny mating calls resonating from murky ponds and puddles. As day breaks, they retreat. I created a submersible infrared timelapse camera to capture the experience of dawn from beneath the surface of a muddy puddle, at the end of a long night of singing and mating.
Here are a few of the locations where the camera was deployed overnight:
This is the device I ultimately produced (photo left, 3D scan right by Grace Grothaus). The transparent plastic on the box confused the scanner, but I find the aesthetic fitting, as if the capsule were rescued from the bottom of the ocean after years of decay.
I arrived at Dinacon with a loose idea of what I would need to make this project happen and what the results would look like. I brought an infrared camera, a Raspberry Pi, a waterproof case and 100 feet of paracord.
I took an iterative approach, repeatedly testing versions of the prototype. The first thing I realized was that for an infrared camera to work properly in low-light environments, it needs an infrared light source. I tried using one, then three small infrared LEDs in series, powered by the Raspberry Pi. It quickly became clear that this was not enough light to penetrate the murky underwater depths.
The next step was to take apart a heavy-duty infrared floodlight used by local bat researchers for nocturnal imaging. I extracted the circuit board and LEDs from the internals of this light, disabled the ambient light sensor and rewired the power supply to run off the Pi power supply using a SparkFun Buck Boost.
With this arrangement, I experienced seemingly random issues where the Pi would stop taking images and lose network connectivity once running on battery power out in the field. After some investigation and discussion with other knowledgeable folks, I measured the current drawn by the infrared light and found it was drawing 1.8 amps. The battery pack I was using to power the Pi provides around 2.1 amps at peak capacity, so this arrangement only worked when the pack was fully charged. As soon as the battery was drawn down a bit, the Pi was left with too little current to operate (only around 80–100 mA reached it), so the camera ceased to work. The solution was to use a separate battery pack for the light.
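The arithmetic behind that fix can be sketched in a few lines of Python. The floodlight’s 1.8 A draw and the battery’s 2.1 A peak are the measurements described above; the Pi’s draw here is an assumed ballpark under load, not a measured value:

```python
IR_LIGHT_DRAW_A = 1.8   # measured draw of the repurposed floodlight
PI_DRAW_A = 1.2         # assumed draw of a Pi + camera under load
BATTERY_PEAK_A = 2.1    # rated peak output of the battery pack

def current_margin(supply_a, *loads_a):
    """Headroom (in amps) left after powering all loads; negative means brownout."""
    return supply_a - sum(loads_a)

print(f"one pack for both: {current_margin(BATTERY_PEAK_A, IR_LIGHT_DRAW_A, PI_DRAW_A):+.1f} A")
print(f"light on its own pack: {current_margin(BATTERY_PEAK_A, PI_DRAW_A):+.1f} A")
```

With both loads on one pack the margin is negative, which is exactly the brownout behavior observed in the field; splitting the light onto its own pack restores positive headroom.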
Even after solving this and other technical problems, like running out of space on the Pi’s SD card and figuring out the right cron / shell script configuration for timelapse images, a fundamental problem remained: infrared light doesn’t travel well underwater, since water absorbs those long wavelengths strongly. Therefore, there wasn’t much to see in the middle of the night in the images that crepuscle produced. I schemed about how to make the most of these initially disappointing results.
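As an aside, the SD-card space problem mentioned above is easy to budget for ahead of a deployment. A rough sketch in Python (the free space, per-frame size and capture interval below are illustrative assumptions, not measurements from this project):

```python
def frames_until_full(free_bytes, frame_bytes, interval_s):
    """Return how many frames fit in the free space, and how many
    hours of timelapse that buys at the given capture interval."""
    frames = free_bytes // frame_bytes
    return frames, frames * interval_s / 3600.0

# e.g. 4 GB free, ~3 MB per still, one frame every 30 seconds
frames, hours = frames_until_full(4_000_000_000, 3_000_000, 30)
print(f"{frames} frames, about {hours:.1f} hours of capture")
```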
At first, all the images seemed completely black. However, when I took a look at the histogram for a random image in the free image editing software GIMP, I noticed that there was some image data at the very low end of the tonal range. I experimented with bumping up the color curves so this information was more visible, and was pleased with the somewhat psychedelic effect.
I then applied this curve to each image with a command line batch process using ImageMagick, then compiled the images into videos using ffmpeg.
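That kind of shadow-lifting curve can be sketched in NumPy. This is only a stand-in for the GIMP/ImageMagick steps described above; the power-law shape and the gamma value are my assumptions, not the exact curve used:

```python
import numpy as np

def lift_shadows(pixels, gamma=0.25):
    """Power-law tone curve that expands the darkest values.

    pixels: uint8 array of shape (H, W, 3). A gamma below 1 drags the
    curve upward, much like raising the low end in GIMP's Curves tool.
    """
    normalized = pixels.astype(np.float64) / 255.0
    return (normalized ** gamma * 255.0).round().astype(np.uint8)

# A nearly-black frame (values around 5 of 255) becomes clearly visible.
dark_frame = np.full((2, 2, 3), 5, dtype=np.uint8)
print(lift_shadows(dark_frame)[0, 0, 0])
```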
Overall, this project was a great learning experience. I learned about the physics of light and water, efficient and appropriate use of batteries in electronics, batch image processing using open source software, and how to use local found materials like bamboo and cement blocks.
Before the sun had even set on my first day in Gamboa I had already heard excited chattering about the sound of the “Laser Frogs”. Once it got dark there was a seemingly ubiquitous chorus of these laser sounds, an asynchronous melange of descending glissandi. One might mistake this biophony for a retro video-game arcade, but it is in fact the revelry of an amphibious Bacchanalia.
I had other plans and project ideas before arriving at DINACON, but I found myself continuously drawn to the sound of these frogs. I was completely ignorant of them at first, having no idea what their actual name was, anything about their behavior, or even what they looked like. I just really liked the way they sounded and kept listening. Indeed, these frogs sound like a laser beam from a video game, and since I have worked as a sound designer and synthesized laser beam sounds for video games, I thought, “I bet I can synthesize these frogs!”
My first attempt was a very quick patch using the ES2 synthesizer in Logic Pro. I did this based entirely on listening to the frogs before analyzing the spectrogram in detail. It captured the general gesture of the descending tone, but didn’t capture the timbre or slope of the glissando very well.
Making the Whine
Although the first attempt was not convincing, it was close enough to encourage me to continue on my quest to synthesize the frog’s call. I began by inspecting an isolated call from one frog via the spectrogram in Audacity.
The spectrogram reveals several things that further inform what we hear. First, the frogs make not only the “laser” sound but also a percussive sound that follows it. At first I referred to these components as the “chirp” and “beep”, but after being clued in by some STRI researchers (thanks Amanda Savage!) I learned to use the terms “whine” and “chuck”. These are much better descriptions, in my opinion.
I decided to use the SuperCollider programming language so that I had absolute control over the synthesis of this sound. The first area of focus was on creating a convincing “whine” using a bank of sine wave oscillators.
Looking at the spectrogram above, we can see the “whine” portion of the call is a descending tone, starting around 1 kHz and ending about an octave below. It also appears to have harmonic overtones that decrease in intensity (up to about the 5th harmonic). Here is a recording and spectrogram of the first attempt in SuperCollider.
This was already sounding better, but looking at the spectrogram of the synthetic whine, some things are obviously still lacking. First, the slope of the glissando is still too linear; it needs more of an exponential (perhaps cubic?) curve to it. Additionally, the upper harmonics are too strong and need to be attenuated relative to their ordinality (the higher the harmonic, the quieter it is).
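I won’t reproduce the SuperCollider patch here, but the oscillator-bank idea can be sketched in Python/NumPy. The 1 kHz start and the octave drop come from the spectrogram reading above; the duration, the 1/n harmonic rolloff and the fade envelope are my assumptions:

```python
import numpy as np

SR = 44100  # sample rate, Hz

def synth_whine(dur=0.35, f_start=1000.0, f_end=500.0, n_harmonics=5):
    """Bank of sine oscillators sweeping down an octave.

    The glissando is exponential (frequency decays geometrically over
    the duration), and harmonic n is attenuated by 1/n.
    """
    t = np.linspace(0.0, dur, int(SR * dur), endpoint=False)
    freq = f_start * (f_end / f_start) ** (t / dur)  # exponential sweep
    phase = 2 * np.pi * np.cumsum(freq) / SR         # integrate frequency -> phase
    whine = sum(np.sin(n * phase) / n for n in range(1, n_harmonics + 1))
    return whine * np.linspace(1.0, 0.0, t.size)     # simple fade-out

whine = synth_whine()
```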
Making the Chuck
After a few more iterations of refining the whine, I moved onto the chuck.
Looking at the chucks in the above spectrogram we can see partials at relatively even spacing. We could perhaps model this by using a harmonic tone with a fundamental frequency of ~200 Hz or ~250 Hz, with the fundamental and first few overtones missing. The chucks seem to have their highest peak around 2.75 kHz. Is this sound produced via some sort of formant resonance? What mechanism makes it seemingly harmonic but with a missing fundamental? This is unclear to me, but I can recreate the sound nonetheless!
This group of three chucks has different durations, and the last one seems to have a downward pitch contour, though not nearly as pronounced as the whine’s. The first two chucks are approximately 50 milliseconds long, and the third is 40 ms.
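A sine-bank sketch of the chuck in Python/NumPy, for comparison. The ~250 Hz partial spacing, the missing lowest partials, the ~2.75 kHz peak and the ~50 ms duration all come from the spectrogram reading above; the Gaussian spectral envelope and the decay constant are my assumptions:

```python
import numpy as np

SR = 44100  # sample rate, Hz

def synth_chuck(dur=0.05, spacing=250.0, peak_hz=2750.0, n_partials=30):
    """Evenly spaced partials with the lowest few missing and a
    spectral peak near 2.75 kHz, shaped by a sharp percussive decay."""
    t = np.linspace(0.0, dur, int(SR * dur), endpoint=False)
    chuck = np.zeros_like(t)
    for k in range(4, n_partials + 1):  # skip fundamental + first overtones
        f = k * spacing
        amp = np.exp(-0.5 * ((f - peak_hz) / 800.0) ** 2)  # Gaussian envelope
        chuck += amp * np.sin(2 * np.pi * f * t)
    return chuck * np.exp(-t / (dur / 4.0))  # fast exponential decay

chuck = synth_chuck()
```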
Using the same approach as the whine, I used a bank of sine wave oscillators to recreate the chuck. Below is a recording and spectrogram along with the synthetic whine.
While the timing of the chucks is accurate, the tone is not convincing. I continued to iterate on the implementation until settling on the one below.
SuperCollider GUI Application
For the presentation at Dinalab I put together a simple GUI application which allows the user to play back a recording of an isolated Túngara call and compare it to the synthetic whine and chuck. Additionally, there are knobs to alter the pitch of both the whine and chuck, to hear what a GIANT or tiny Túngara frog might sound like.
Here is the audio of the final form the synthesizer took; there is still room for improvement, of course.
Playback in the Field
In order to see if my synthesizer was effective at blending in with Túngara frogs in the field, I did some simple, not very well controlled tests on the streets of Gamboa.
Basically, I walked up to a pond where I heard Túngara frogs calling; they would usually stop as I approached. Then, with my field recorder running, I would play the synthesized call from my cell phone and wait to hear a response.
Here is the first trial (that loud percussive sound in the background is a Gladiator frog, I think). The synth is mostly in the left channel, while the other frogs are mostly in the right.
After shuffling around a bit, the frogs got quiet and I tried again.
Now, can I conclusively say that the frogs responded to my call? I do not know; I am not a field biologist or experienced with phonotaxis studies, but the results of these simple tests seem promising. I think the frogs are buying my synthetic call.
I really enjoyed working on this project and am very interested in improving the audio synthesis and application interface so that it is useful to researchers both in the field and the laboratory. If you study frogs, bioacoustics, or phonotaxis, or have an interest in this project, please get in touch with me; I would love more feedback.
From my perspective, the synthesis could still use some refinement. First, it could use better filtering of the whine, perhaps via a resonant filter based on accurate resonances of the frog’s vocal apparatus? Additionally, more variability in chuck production would be useful. With more analysis of recordings and a bit more reading on the physiology of chuck production, I think I could better refine the synthesis.
Some final questions:
Perhaps I should port this synthesizer for use in a web/mobile app?
Maybe I could synthesize Gladiator (or other) frog calls?
What additional features would be useful?
Do you have comments, criticisms or any feedback?
First I would like to thank all the participants I met at Dinacon, and Dr. Andrew Quitmeyer for organizing the event.
Amanda Savage was very generous in talking with me about my project and in introducing me to the vast literature and research available on Túngara frogs.