Sculpting Shadows

By Albert Thrower – [email protected]


In this project, I created three-dimensional sculptural artworks derived from the shadows cast by found objects.


I began creating 3D prints through unusual processes in 2018, when I used oils to essentially paint a 3D shape. For me, this was a fun way to dip my toes into 3D modeling and printing using the skills I already had (painting) rather than those I didn’t (3D modeling). I was very happy with the output of this process, which I think lent the 3D model a unique texture–it wore its paint-ishness proudly, with bumpy ridges and ravines born from brushstrokes. There was an organic quality that I didn’t often see in 3D models fabricated digitally. I immediately began thinking of other unconventional ways to arrive at 3D shapes, and cyanotype solar prints quickly rose to the top of processes I was excited to try.


My initial goal with this project was simply to test my theory that I could create interesting sculpture through the manipulation of shadow. However, a presentation by Josh Michaels on my first night at Dinacon got me thinking more about shadows and what they represent in the relationships between dimensions. Josh showed Carl Sagan’s famous explanation of the 4th dimension from Cosmos.

Sagan illustrates how a shadow is an imperfect two-dimensional projection of a three-dimensional object. I wondered–if all we had was a two-dimensional shadow, what could we theorize about the three-dimensional object? If we were the inhabitants of Plato’s cave, watching the shadows of the world play on the wall, what objects could we fashion from the clay at our feet to reflect what we imagined was out there? What stories could we ascribe to these imperfectly theorized forms? When early humans saw the night sky, we couldn’t see the three-dimensional reality of space and stars–we saw a two-dimensional tapestry from which we theorized three-dimensional creatures and heroes and villains and conflicts and passions. We looked up and saw our reflection. What does a rambutan shadow become without the knowledge of a rambutan, with instead the innate human impulse to project meaning and personality and story upon that which we cannot fully comprehend? That’s what I became excited to explore with this project. But first, how to make the darn things?


For those who want to try this at home, I have written a detailed How To about the process on my website. But the basic workflow I followed was this:


The areas that are more shaded by our objects stay white, and the areas that the sun hits become a darker blue. Note that the solar prints that result from three-dimensional objects like these rambutans have some midtones that follow their curves, because though they cast hard shadows, some light leaks in from the sides. The closer an object gets to the solar paper, the more light it blocks. This effect will make a big difference in how these prints translate to 3D models.

A rambutan print soon after exposure and washing.


For those unfamiliar with depth maps, essentially the software* interprets the luminance data of a pixel (how bright it is) as depth information. Depth maps can be used for a variety of applications, but in this case the lightest parts of the image become the more raised parts of the 3D model, and the darker parts become the more recessed parts. For our solar prints, this means that the areas where our objects touched the paper (or at least came very close to it) will be white and therefore raised, the areas that weren’t shaded at all by our objects will become dark and therefore recessed, and the areas that are shaded but into which some light leaks around the objects will be our midtones, leading to some smooth graded surfaces in the 3D model.

 *I used Photoshop for this process, but if you have a suggestion for a free program that can do the same, please contact me. I’d like for this process to be accessible to as many people as possible.
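As a rough illustration of the idea, here is a minimal Python sketch of the luminance-to-depth mapping (this is not Photoshop’s actual algorithm, and the grid values and maximum height are made up for illustration):

```python
def luminance_to_height(pixel, max_height_mm=10.0):
    """Map an 8-bit luminance value (0-255) to a height in millimetres."""
    return (pixel / 255) * max_height_mm

def image_to_heightmap(pixels, max_height_mm=10.0):
    """Convert a 2D grid of luminance values into a grid of heights."""
    return [[luminance_to_height(p, max_height_mm) for p in row]
            for row in pixels]

# A toy "solar print": a white spot (object touching the paper)
# surrounded by midtones fading into the dark, fully exposed background.
print_scan = [
    [20,  60,  20],
    [60, 255,  60],
    [20,  60,  20],
]
heights = image_to_heightmap(print_scan)
```

The white center becomes the tallest point of the model, and the midtone ring slopes smoothly down into the recessed background, just as in the prints above.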

Below, you can play around with some 3D models alongside the solar prints from which they were derived. Compare them to see how subtle variations in the luminance information from the 2D image have been translated into depth information to create a 3D model.

In the below solar print, I laid a spiralled vine over the top of the other objects being printed. Because it was raised off the paper by the other objects, light leaked in and created a fainter shadow, resulting in a cool background swirl in the 3D model. Manipulating objects’ distance from the paper proved to be an effective method to create foreground/background separation in the final 3D model.

The objects to be solar printed, before I laid the spiralled vine on the other objects and exposed the paper.

Another variable that I manipulated to create different levels in the 3D model was exposure time. The fainter leaves coming into the below solar print weren’t any farther from the solar paper than the other leaves, but I placed them after the solar print had been exposed for a couple of minutes. This made their resulting imprint fainter/darker, and therefore more backgrounded than the leaves that had been there for the duration of the exposure. You can also see where some of the leaves moved during the exposure, as they have a faint double image that creates a cool “step” effect in the 3D model. You might also notice that the 3D model has more of a texture than the others on this page. That comes from the paper itself, which is a different brand than I used for the others. The paper texture creates slight variations in luminance which translate as bump patterns in the model. You run into a similar effect with camera grain–even at low ISOs, the slight variation in luminance from pixel to pixel can look very pronounced when translated to 3D. I discuss how to manage this in the How To page for this process.

One more neat thing about this one is that I made the print on top of a folder that had a barcode on it, and that reflected back enough light through the paper that it came out in the solar print and the 3D model (in the bottom right). After I noticed this I started exposing my prints on a solid black surface.

The below solar print was made later in the day–notice the long shadows. It was also in the partial shade of a tree, so the bottom left corner of the print darkens. If you turn the 3D model to its side you’ll see how that light falloff results in a thinning of the model. I also took this photo before the print had fully developed the deep blue it would eventually reach, and that lack of contrast results in the faint seedpod in the bottom left not differentiating itself much from the background in the 3D model. I found that these prints could take a couple days to fully “develop.”


The 3D models that Photoshop spits out through this process can sometimes have structural problems that a 3D printer doesn’t quite know how to deal with. I explain these problems and how to fix them in greater detail in the How To page for this process.


Now we get back to my musings about Plato’s cave. My goal in the painting stage was to find meaning and story in this extrapolation of 3D forms from a 2D projection. As of this writing I have only finished one of these paintings, pictured below.


Where I’d like to take this project next:

– Carve the models out of wood with a CNC milling machine to reduce plastic use. I actually used PLA, which is derived from corn starch and is biodegradable under industrial conditions, but it’s still not ideal. This will also allow me to go BIGGER with the sculptural pieces, which wouldn’t be impossible with 3D printing but would require some tedious labor to bond together multiple prints.

– Move away from right angles! Though I was attempting to make some unusual “canvasses” for painting, I ended up replicating the rectangular characteristics of traditional painting surfaces, which seems particularly egregious when modeling irregular organic shapes. Creating non-rectangular pieces will require making prints that capture the entire perimeter of the objects’ shadows without cutting them off. I can then tell the software to “drop out” the negative space. I have already made some prints that I think will work well for this; I’ll update this page once I 3D model them.

– Build a custom solar printing rig to allow for more flexibility in constructing interesting prints. A limitation of this process was that I wanted to create complex and delicate compositions of shadows but it was hard to not disturb the three-dimensional objects when moving between the composition and exposure phases. My general process in this iteration of the project was to arrange the objects on a piece of plexiglass on top of an opaque card on top of the solar print. This allowed me time to experiment with arrangements of the objects, but the process of pulling the opaque card out to reveal the print inevitably disrupted the objects and then I would have to scramble to reset them as best I could. Arranging the objects inside wasn’t a good option because I couldn’t see the shadows the sun would cast, which were essentially the medium I was working with. The rig I imagine to solve this would be a frame with a transparent top and a sliding opaque board which could be pulled out to reveal the solar paper below without disrupting the arrangement of objects on top. 

– Solar print living creatures! I attempted this at Dinacon with a centipede, as did Andy Quitmeyer with some leafcutter ants. It’s difficult to do! One reason is that living creatures tend to move around, and solar prints require a few minutes of exposure time. I thought something like a frog, which might hop around a bit, stay still, then hop around some more, could work, but you would still need some kind of clear container that would contain the animal without casting its own shadow. I also thought maybe a busy leafcutter ant “highway” would have dense enough traffic to leave behind ghostly ant trails, but Andy discovered that the ants are not keen to walk over solar paper laid in their path. A custom rig like the one discussed above could maybe be used–place the rig in their path, allow them time to acclimate to its presence and walk over it, then expose the paper underneath them without disturbing their work.

– Projection map visuals onto the 3D prints! These pieces were created to be static paintings, but they could also make for cool three-dimensional animated pieces. Bigger would be better for this purpose.

My project table at the end-of-Dinacon showcase.
This kiddo immediately began matching the objects I had on display to their respective solar prints!

Agouti, Agouti!

By Jason Bond, Blunderboffins

Agouti, Agouti! is a work of interactive digital art (i.e. a videogame) which aims to capture the spirit of the loveable agouti, a rodent commonly seen eating scraps and frolicking about in the backyards of Gamboa, Panama. They play an important role in the spread of seeds in the forest and are adorable to boot.

This prototype work can be played on a modern Mac or Windows computer with a two-stick game controller. The player is invited to explore a jungle, eat some fruit, and — as the agouti does when frightened — puff up the hair on their butt.

The humble Central American agouti.

The Virtual Agouti

The agouti featured in the game is an original model created in the modelling and animation software Blender. It has a small number of animations — enough to simulate some basic activities. In an effort to capture the agouti’s way of moving about, slow-motion video was taken of agoutis around Gamboa and a series of images were extracted as reference for the walking and running gaits.

Although the artist on this project has been working in videogames for many years, he is new to modelling and animating, making this work a significant learning exercise.

A low-poly agouti model created in Blender.
Frames of an agouti walking, extracted from slow-motion video.

The Forest

The environment of Agouti, Agouti! is filled with virtual “plants”. These forms are more impressionistic than replicative, bearing little resemblance to the actual plants of Panama, but they are meant to reflect the variety in Gamboa’s forest and to provide a suitable jungle world for the agouti to play in.

Each type of virtual plant is generated by algorithm using custom software designed for this project. In fact, this generator was intended to be the centrepiece of this project until the agouti charmed its way into the starring role. 

The generator began as a simple branching algorithm not dissimilar from L-Systems — a common procedural generation technique — beginning with a trunk and randomly splitting off branches to create a tree-like structure. Inspired by the epiphytes of Panama, this algorithm was modified to take a more additive approach: any number of different forms can be attached to any part of the structure.

Because the results of this generator can be quite chaotic, some crude tools were developed to rapidly filter through them for the best stuff. This includes a mutation tool which can take a plant with some potential and produce interesting variations on it until the user is happy with the results.

A screenshot of the plant generator, showing three mutations of what was once the same plant.
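In the same spirit as the generator and mutation tool described above, here is a toy Python sketch of additive branching with angle-jitter mutation (the data structure, split counts, and jitter amounts are invented for illustration, not the project’s actual code):

```python
import random

def grow(depth, rng, angle=0.0):
    """Recursively build a branch: a dict of angle, length, and children."""
    branch = {"angle": angle, "length": rng.uniform(0.5, 1.0), "children": []}
    if depth > 0:
        for _ in range(rng.randint(1, 3)):     # random number of splits
            child_angle = angle + rng.uniform(-45, 45)
            branch["children"].append(grow(depth - 1, rng, child_angle))
    return branch

def mutate(branch, rng, jitter=10.0):
    """Produce a variation: same topology, perturbed branch angles."""
    return {
        "angle": branch["angle"] + rng.uniform(-jitter, jitter),
        "length": branch["length"],
        "children": [mutate(c, rng, jitter) for c in branch["children"]],
    }

rng = random.Random(7)
plant = grow(depth=3, rng=rng)      # a random "plant"
variant = mutate(plant, rng)        # a mutation of the same plant
```

Calling `mutate` repeatedly on a promising plant is the essence of the filtering workflow: same overall structure, endlessly varied silhouettes.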

Each plant is encoded with a growth animation so that it can begin as a simple seedling and gain branches and leaves over time. The agouti’s world can start out bare and grow a massive, abstract canopy.

The agouti’s planet with hundreds of small seedlings.

The planet after all plants have grown to full size.

Available Materials

The game and agouti model are freely available for download at:

Nom nom nom.

complexity + leafcutters: code/improvisation

The shimmering, industrious leafcutter ants that build highways on the forest floor make up a complex adaptive system – the sophisticated structures and patterns that they build are well beyond the sum of their individual parts. The ants’ collective intelligence emerges through the repetition of simple tasks, and somehow through self-organization they build cities without architects, roads without engineers. There’s something magnetic about their energetic movement as they carve through the jungle – wherever I found them at Gamboa, I found that I could not look away.

from pipeline trail and laguna trail, Gamboa
ant, Atlas
going around the stick barrier

I altered the code from a classic NetLogo simulation to model the behavior of the leafcutters. NetLogo allows you to code agent-based models and watch them play out over time – each of the ants acts as an autonomous “agent” with a simple task to perform, and the iteration of multiple ants performing these tasks begins to simulate how the ants behave in the jungle. What starts out as random walking drifts into road-like patterns as the ants pick up pixel leaves and deliver them to their digital fungus…

Ant Tasks:
1. choose a random angle between -45 and 45 degrees
2. walk 1 unit in that direction
3. repeat.
4. IF there’s food (green leaves or pink flowers), pick it up by turning green, and deliver it back to the fungus at the center.
5. IF you sense digital pheromone (ants carrying food tag the pixels they walk over with digital “scent” as they head to the center), follow that pheromone.
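The task list above maps naturally onto a tiny agent-based loop. Here is a stripped-down, pure-Python sketch of tasks 1-4 (the pheromone-following step 5 is omitted for brevity, and the coordinates, counts, and distances are made up):

```python
import math
import random

class Ant:
    def __init__(self, rng):
        self.x = self.y = 0.0                  # nest/fungus at the origin
        self.heading = rng.uniform(0, 360)
        self.carrying = False
        self.rng = rng

    def step(self, food):
        if self.carrying:
            # head straight back toward the fungus at the center
            self.heading = math.degrees(math.atan2(-self.y, -self.x))
            if math.hypot(self.x, self.y) < 1:
                self.carrying = False          # leaf delivered
        else:
            self.heading += self.rng.uniform(-45, 45)   # random wiggle
        self.x += math.cos(math.radians(self.heading))  # walk 1 unit
        self.y += math.sin(math.radians(self.heading))
        # pick up food if we step close enough to a morsel
        for fx, fy in list(food):
            if not self.carrying and abs(self.x - fx) < 1 and abs(self.y - fy) < 1:
                food.remove((fx, fy))
                self.carrying = True

rng = random.Random(1)
food = {(5.0, 0.0)}
colony = [Ant(rng) for _ in range(10)]
for _ in range(200):
    for ant in colony:
        ant.step(food)
```

Even this skeleton shows the core of the model: order (delivery runs to the center) emerging from nothing but random walks and a pickup rule.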

The Twist: music
A symphony of digital fungus stockpiling
An audio representation of the complex patterns and surprising order that arises from randomness…

Each ant in the simulation has an ID number, and that ID number corresponds to a note on the piano. When an ant picks up a leaf and successfully brings it back to the fungus in the middle, that ant will sound its unique note. I calibrated this so that extremely low notes and extremely high notes on the scale won’t play – instead of those extremes some ants are assigned the same middle C, which you can hear throughout the simulation over and over like a drum beat…
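The ID-to-note calibration might be sketched like this in Python (the cutoff values here are illustrative assumptions, not the simulation’s actual numbers):

```python
# Each ant ID maps to a MIDI note on the piano; IDs that would land on
# extremely low or high keys fold onto middle C (MIDI note 60) instead,
# producing the repeating drum-beat-like C heard in the simulation.

MIDDLE_C = 60
LOW_CUTOFF, HIGH_CUTOFF = 36, 96    # assumed playable range

def ant_note(ant_id):
    """Map an ant ID onto the piano's MIDI range, folding extremes to C."""
    note = 21 + ant_id % 88          # 21 = A0, the lowest piano key
    if note < LOW_CUTOFF or note > HIGH_CUTOFF:
        return MIDDLE_C              # extremes all share middle C
    return note
```

Because many IDs fall outside the cutoffs, middle C sounds far more often than any other note, which is what gives the piece its pulse.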

the simulation: turn up the sound!

The ants play their own bebop, they compose their own Xenakis-like songs. No two ant improvisations will be exactly alike; whenever you run the simulation, each ant makes different random choices and the behavior of the model will be different. But they sound like they spring from the same mind:

ant improv #1
ant improv #2
the ants start searching for food
making highways
one food source left…
starting the last highway

Our minds love patterns too – I find myself cheering the ants on when I watch the simulation, rooting for them to find the next leaf, hoping for them to route into the highway pattern, waiting to hear their eerie plunking, playful jazz…

coding in the jungle – on the balcony, adopta

extensions for this project:

-there is a web extension for NetLogo, but without sound; could translate these ants into JavaScript/p5.js so users can press “play” themselves online and control different variables (how many ants? speed of ants?)

-connect the MIDI sound that the ants are making to a score, print out sheet music written by the ants, play it on the piano

-make the model more complex, closer to the structure of actual leafcutter colonies: different sizes of ants, different tasks…

-interactive projection version

you got this, ant.

Thanks to everyone at Dinacon!

-Madeline Blount

NetLogo citation:
Wilensky, U. (1999). NetLogo. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.

Balloon Environmental Sensing Takes to the Air

We have liftoff. My first Balloon Environmental Sensing test successfully “slipped the surly bonds of earth, and danced the skies on laughter-silvered wings” sending data back the whole time. First flight was at the Digital Naturalism Conference in Gamboa, Panama, featuring 10+ sensor values streaming from the balloon to an online data collection system and dashboard.

It was a big success!

This party-balloon platform is designed for inexpensive aerial environmental sensing. Balloon lofting is perfect for scientific research, educational programs, hacker workshops, technology art, as well as low-cost indoor or industrial monitoring. Is the humidity overhead the same as on the ground? Does wind speed change? Is it dusty up there? How much UV light penetrates the jungle canopy at different levels? These are all questions that can be answered with this platform.

Since advanced LTE wasn’t available in Panama and SigFox coverage was absent, I decided to use the Digital Naturalism Lab’s LoRaWAN gateway—long-range radio networking that uses very little battery power. The data collection firmware code was written in MicroPython running on a LoPy4 wireless microcontroller module from Pycom. This first set of tests used all the Pysense evaluation board sensors including light, temperature, altitude, humidity, pitch, roll and acceleration in three axes. This data was taken in real time at 30-second intervals and transmitted using LoRaWAN across The Things Network servers to be displayed on a Cayenne dashboard. The Pybytes cloud platform appears promising too; I’m looking forward to exploring that more in later phases of the project.
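LoRaWAN payloads are tiny, so readings are typically scaled to integers and packed into a few bytes before each uplink. Here is a hedged sketch of that idea in plain Python (this field layout is invented for illustration; the project’s dashboard used Cayenne, which defines its own LPP encoding):

```python
import struct

def pack_readings(frame, temp_c, humidity_pct, pressure_hpa, lux):
    """Pack a frame counter and four scaled readings into 10 bytes."""
    return struct.pack(
        ">HhHHH",
        frame,                        # uplink frame counter
        round(temp_c * 100),          # 0.01 degC resolution, signed
        round(humidity_pct * 100),    # 0.01 %RH
        round(pressure_hpa * 10),     # 0.1 hPa
        round(lux),                   # whole lux
    )

payload = pack_readings(42, 28.53, 81.20, 1006.3, 540)
```

Ten bytes every 30 seconds is well within LoRaWAN’s duty-cycle limits, and the frame counter (one of the additions prompted by the drone shakedown flights) makes dropped packets visible on the receiving end.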

Gamboa has one very small grocery store. It does not sell helium or any other noble gas. Luckily the generous David Bowen allowed our sensor package to hitch a ride on his drone during my first week, so up we went for initial testing. As is so often the case, even this partial test resulted in lots of changes. In this case I realized we needed a frame counter, better battery connections and voltage monitoring before flying again. A second shakedown flight on Bowen’s drone proved the value of these additions, and gave us an excellent sampling of the data to come. We also did a bunch of range testing work, which is covered in a separate blog post.

A taxi trip into Panama City brought us to Mundo de los Globos (World of Balloons) where helium tanks are available, along with 1-meter balloons in plenty of colors. With a full tank of the squeaky gas, we returned to Gamboa and I started inflating our ride to the sky.

The next morning it was time for the sensor package to take its first balloon ride, and up we went. Andy Quitmeyer got some amazing footage from his drone and Trevor Silverstein shot high-end video from the ground (coming soon). I could not have asked for a better documentation team. The balloon reached 60 meters (about 200 feet) above ground level, which was the limit of the reel line I was using for a tether.

We got great data back from this flight, and soon made a second one—this time in a large field away from balloon-eating trees. It was easy to get LoRaWAN signal from altitude since LoRa works best in line-of-sight conditions. We plan to do more with the Things Network to support the biology and ecology research in Gamboa that is spearheaded by the local Smithsonian Tropical Research Institute.

Here’s a screenshot of the data dashboard from the flight.

And a few graphs:

Another afternoon was set aside for a proper party-balloon experiment. Using a smaller battery I was able to loft the sensor package using 6 small balloons and the small amount of remaining helium. This worked too, though 7 balloons would have provided more lift and handled the wind better. Next time, more balloons!

Data from these flights can be downloaded, and the MicroPython code for the LoPy4 or FiPy can be found on my GitHub.

For the next version of the Balloon Environmental Testing platform, my plan is to explore other sensors and wireless links. I’m especially interested in UV light, air quality, wind speed and loudness. In Gamboa we talked about trying some sound recording too. As the balloon itself is silent, it’s the perfect place to record. For wireless links I’m itching to explore some new cellular low-bandwidth, low-cost protocols, LTE Cat-M and NB-IoT, because they don’t require any dedicated base stations and should work great at the altitudes needed for balloon flights. Additional plans include extended day-long flights, free flight with GPS, and maybe look at hydrogen gas but not near any kids!

The initial prototype goal was to see if the full system would work, and it does! Gamboa was a great success for this project, giving me the time, venue and documentation assistance to bring this idea to life. If you get a chance to attend the next Dinacon, I strongly recommend it. And if you’re interested in balloon sensing for any experiment, class or project, let me know!

Unnatural Language – Michael Ang and Scott Kildall

By Scott (Seamus) Kildall and Michael Ang

Unnatural Language, a collaboration between Michael Ang and Scott Kildall, is a network of electronic organisms (“Datapods”) that create sonic improvisations from physical sensors in the natural environment. Each Datapod has custom electronics connected to sensors, a speaker, and a wireless network. The sensed data, for example from electrodes that measure the subtle electrical variations in the leaves of plants, is transformed into a unique synthesized sound. Encased in sculptural materials (natural fiber, leather, leaves, etc) and dispersed into a natural environment, the Datapods enter into a sonic dialogue with the existing ecosystem of plants and animals.

Unnatural Language proposes that technology and nature are forming a new hybrid ecology, where innovations such as intelligent devices that occupy the natural landscape are dissolving the traditional nature-culture dichotomy. This work repurposes this technology to amplify unseen processes such as plant intercommunication, river health and subtle microclimate changes. 

We were at Dinacon in Gamboa, Panama for 18 days and this was our first full development and installation of our project. After several adventures in the area, we decided to deploy eight Datapods in the Chagres River, which feeds the Panama Canal, since it constitutes a transitional space: a brackish marshland that also showed signs of human outflow, such as garbage floating in it.

At Dinacon, we developed two types of sensor-synthesizers. The first detected electrical conductivity levels in water and modulated different sampled sounds that we recorded of rocks sinking in water from a hydrophone. As the water quality fluctuated with these sensor readings, the output of the synthesizer played higher and lower-pitched samples accordingly.
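The water synth’s mapping might be sketched like this in Python (the thresholds, sample names, and bank size are made up for illustration; the actual synth runs as compiled C++ on the ESP32):

```python
# A conductivity reading selects a higher- or lower-pitched sample from
# a bank of recorded rock-drop sounds: low readings play the low-pitched
# samples, high readings the high-pitched ones.

SAMPLES = ["rock_low.wav", "rock_mid.wav", "rock_high.wav"]

def choose_sample(conductivity, lo=200, hi=800):
    """Map a raw conductivity reading onto a sample from the bank."""
    if conductivity < lo:
        return SAMPLES[0]
    if conductivity > hi:
        return SAMPLES[-1]
    # linearly interpolate across the middle of the bank
    span = hi - lo
    idx = 1 + int((conductivity - lo) / span * (len(SAMPLES) - 2))
    return SAMPLES[min(idx, len(SAMPLES) - 1)]
```

As the water quality fluctuates, the chosen sample drifts up and down the bank, which is what produces the higher- and lower-pitched playback described above.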

For the water-based Datapods, we placed our speakers and electronics (custom software-synth code on an ESP32 chip with an on-board amp and a water sensor) onto various garbage flotillas, which we constructed from the litter that we had collected by kayak.

The second sensor-synth combination was a plant sensor, which detected electrical activity in plants using electrodes. Plants tend to respond relatively rapidly (2-3 minutes) to various environmental triggers. The synth we developed acted as a drum machine, modulating different tempos according to the plants it was attached to.

We learned many things at Dinacon! Making a compelling Datapod took much longer than we thought it would. To achieve the best type of synth effect, we recorded humans performing an activity with the thing being sensed: rocks being thrown into water and water being poured through a strainer onto a plant. We then cut these up into bite-sized pieces and ported them into our software, which uses compiled C++ code on the ESP32 to make dynamic effects.

Also, the janky look of the sculptures themselves had broad appeal, and this will be a direction for the project in the future. We’re looking forward to further site-specific installations of Unnatural Language.

Many thanks to all our fabulous co-Dinasaurs for the wonderfully playful and productive atmosphere, and especially to our intrepid film crew (Monika, Ruben, Cherise, and Andy on the drone!)

Michael Ang & Scott (Seamus) Kildall

The Frog Show – by Mónica Rikić and Ruben Oya

Frog Show wants to elevate the singing frogs to an audiovisual experience.
Since our arrival in Gamboa, we were amazed every evening by the frogs’ singing. It didn’t sound like the frogs we knew; this was more of an electronic, synth-like music performance. We saw an opportunity to join the frogs and develop some visuals to add to the show.

With the goal of having a low impact on the environment and not disturbing the frogs’ activity, we came up with this solar-powered red LED installation. Solar power makes the system self-sufficient, and red light is known to be less visible to frogs.

The installation relies on the following hardware: a microphone, an Arduino board, a battery pack, a solar panel and an LED strip.


The light effects are audio reactive and controlled through code on the Arduino board. Every single frog sound triggers the LED strip depending on its volume.
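The audio-reactive logic can be sketched in a few lines of Python (the installation itself runs equivalent logic as Arduino code driving the strip; the LED count, noise floor, and mapping here are illustrative assumptions):

```python
# Each microphone reading above a noise floor lights a number of LEDs
# proportional to its volume, so louder frog calls light more of the strip.

NUM_LEDS = 45          # assumed: ~1.5 m of 30-LED/m strip
NOISE_FLOOR = 0.05     # normalized mic level below which we stay dark

def leds_for_level(level):
    """Map a normalized mic level (0..1) to a count of lit LEDs."""
    if level < NOISE_FLOOR:
        return 0
    return min(NUM_LEDS, int(level * NUM_LEDS))
```

The noise floor keeps the strip dark between calls, which is what makes each frog sound read as a distinct flash.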

The result is an installation that charges during the daytime and activates at night with the frogs’ concert. You can read the intense activity of the animals through the light show.

Active show with frogs on a sidewalk

Technical details:

  • Arduino Nano
  • Adafruit MAX4466 microphone
  • 12,000 mAh 2.4 A 5 V battery pack
  • 7 W solar panel
  • 1.5 m WS2812B LED strip
  • Arduino code based on the NeoPixel library

Ruben Oya & Mónica Rikić

Butterfly Wing Site-Specific Installation – Emily Volk

Inspired by the scale detail of butterfly wings, my project at Digital Naturalism centered on gathering microscope images and videos of butterfly wings, and using them for a site-specific projection installation.


Butterfly wings produce their detailed coloration and patterning through the refraction of light off the microscopic scales that cover the wing surface. Scales also cover the head and parts of the thorax and abdomen in many insect species, including butterflies. Scales aid in flight and help waterproof the insect, and their delicate nature is one reason to avoid touching live butterfly wings (all of my specimens were deceased and gathered throughout the trails, roads, and buildings of Gamboa)! Through the various optical properties of these microscopic scales, intricate and detailed patterns of colors are created across butterfly and moth species.

Panama is home to a great diversity of the world’s butterflies and moths, many of which exhibit dramatic wing coloration. Panama is especially known for its diversity of neotropical Heliconius butterflies, which express an incredible array of wing colors and patterns. Panama is also known for its many mimics: species that express the same coloration for various hypothesized reasons, including imitating poisonous species to deter predators in a process called Batesian mimicry. Exploring the genetic pathways for scale expression, wing coloration, and patterning is an area of current research and interest, aimed at better explaining the relationships among the incredibly rich array of butterflies in areas such as Panama.

Overall, scales provide not only biologically useful functionality through meaningful coloration and mimicry, assisting in flight, and waterproofing, but also draw the eye with incredible aesthetic beauty. To expose an audience to the aesthetic and biological wonder I find in observing butterfly wing scale detail, I gathered an array of microscope images and video of butterfly wing scale detail, and displayed my media in a site-specific projection installation outside of Dinalab on a public exhibition evening.


Throughout my time at the conference, I gathered deceased butterfly specimens, whole wings or fragments, from the trails, roads, and buildings of the Gamboa area. Importantly, all specimens I gathered were already deceased when found; absolutely no live butterfly or moth specimens were handled during my time at the Digital Naturalism Conference. I collected wing fragments from a diversity of species. (As of October 1st, 2019, I am still working on correctly IDing many of these–please reach out with correct scientific names for those shown!)

We found an incredible number of butterfly and moth fragments on the patio of the Gamboa Smithsonian Tropical Research Institute (STRI). Here are wings found by Tiare Ribeaux–her phone served as our collection plate after we were surprised by the number of wings here! Unfortunately, the STRI entryway seems to be an insect graveyard: a large, covered concrete area with consistent nighttime lights overhead.

Microscope Images and Video

I used a Plugable USB microscope (thanks Lee Wilkins, Dinasaur extraordinaire!!) and the free Plugable Digital Viewer software to gather both video and still images of microscopic detail on collected specimens. (As a special shout-out to this microscope: it is relatively affordable, at about a $20 price point online! Get your own, and explore microscope imagery in your own area!!)

Here is a selection of my favorite still images:


Installation set-up scene. Computer, projector, bromeliad and vegetation galore!

To display the microscope video I collected of wings, I set up a site-specific projection installation at one of our evening Digital Naturalism public installations in and around our Gamboa Dinalab. Here, I projected my microscope videos of butterfly wing detail onto a utilities box (shown below, from front and side). These utilities structures are common throughout Panama, and appear to me to be open canvases for a variety of art! This type of public canvas is especially conducive to using projection, which does not harm or modify its canvas.

I am drawn to projection art as a medium that seems to me to be both a light and a fluid. In working with projection, I seek to modify projection canvases to insert mobility, depth, and layers into a projection-based art installation. I’m interested in projection work that gives videos motion and disrupts a 2D canvas. I find that projection, through its light, motion, and ability to display on various surfaces, can be a uniquely dynamic and immersive medium for art installations. In using projection for my wing video installation, I seek to draw an audience into the colors and scale detail with a projection environment that blends technology, biology, and fascination.

I incorporated natural elements into the installation by arranging bromeliads (saved by Dinasaur Rabia from a local tree-trimming operation) and an adjacent tree into the projection surface. The process shots above and below show the location of my local Gamboa installation!

Still photos of installation


I received feedback on my installation from Panamanian artist Kevin Lim. For more feedback or project inquiries, please leave a comment below!

Future Work

Importantly, the media I collected of microscopic wing detail are now portable. With these images and videos, I can create more site-specific installation pieces in different environments. I hope to explore a more static installation piece, in a gallery setting or outdoors, where these microscope videos are projected onto a wing-shaped mobile screen that can flutter in the wind.

Additionally, as always, I seek opportunities to continue to merge science and art in creative ways to showcase and promote the fascination and inquiry inherent to both disciplines.

For inquiries or collaborations, please comment on my bio page on the Digital Naturalism website, or reach out online through another medium.

Utter excitement with the flexibility of site-specific projection, and all of Dinacon:

Yay Dinacon!

Further Reading and Exploration

“Butterfly scale optics” Google Images search 🙂

Butterfly Wing Optics, STEMvisions blog post

Deshmukh, R., Baral, S., Gandhimathi, A., Kuwalekar, M., & Kunte, K. (2018). Mimicry in butterflies: Co-option and a bag of magnificent developmental genetic tricks. WIREs Developmental Biology, 7: e291. doi:10.1002/wdev.291

Florida State University and Olympus’s collaboration site, Butterfly Wing Scale Digital Image Gallery

Kolle, M., Salgard-cunha, P., Scherer, M. R. J., Huang, F., Vukusic, P., Mahajan, S., . . . Steiner, U. (2010). Mimicking the colourful wing scale structure of the Papilio blumei butterfly. Nature Nanotechnology, 5(7), 511-5. doi:10.1038/nnano.2010.101

Srinivasarao, M. (1999). Nano-Optics in the Biological World: Beetles, Butterflies, Birds, and Moths. Chem. Rev., 99(7), 1935-1962. doi: 10.1021/cr970080y.


Generous thanks to the Boulder Arts & Culture Professional Development Grant program for funding my 2019 Digital Naturalism Conference engagement.

Special appreciation goes to Betty Sargeant and Madeline Schwartzman, whose initial microscope exploration of insect wings and feathers drew me in to further exploring microscope wing detail! Thank you for sharing your incredible work, expertise, curiosity, and inspiration during my first days acclimatizing to Dinacon, and throughout our time together.

Thank you to Lee Wilkins for letting me use your rockin’ microscope!

Thank you to Tiare Ribeaux for mega wing collection during our few days of overlap (on-the-phone photo).

Thank you so much to Dina-captain Andrew Quitmeyer for tireless enthusiasm, and bringing us all together with your brilliant conference and curiosity!

And to all of you across the huge Digital Naturalism community, I’m so happy to have all of you new, inspiring friends and peers <3

Froggy camouflage handheld fans

Project by Anna Carreras. BAU Design College of Barcelona, Spain.

Hand fan (abanico) inspired by a glass frog. Photo by Anna Carreras. Gamboa, Panama.

Rainforests of Panama are some of the world’s most biologically diverse areas. Animals use camouflage tactics to blend in with their surroundings, to disguise their appearance. They mask their location, identity, and movement to avoid predators.

On the other hand, in cities in many countries the increased use of surveillance technologies has become part of the public and private landscape. Citizens lack camouflage tactics to avoid these forms of elevated vigilance. Can we learn and borrow tactics from animals to escape this constant monitoring?

The froggy camouflage handheld fans project proposes a playful way to act upon our surveilled world while learning from frog camouflage in the Gamboa rainforest of Panama.

Hand fan inspired by a dart frog. Photo by Anna Carreras. Gamboa, Panama.

Nature in Gamboa

Exploring nature, animal watching in Laguna Trail. Photo by Marta Verde. Gamboa, Panama.

Attending the Digital Naturalism Conference (Dinacon) from August 26th to September 1st offered the possibility of several exploratory walks around the Adopta un Bosque station, the La Laguna trail in Gamboa, and Pipeline Road on the border of Soberanía National Park. Animal sightings included birds (thank you, Jorge), frogs, mammals, and several butterflies and insects.

Bat sleeping place near the Panama Canal. Photo by Marta Verde. Gamboa, Panama.

A species’ camouflage depends on the physical characteristics and behavior of the organism, and is influenced by the behavior of its predators. Background matching is perhaps the most common camouflage tactic, and animals using it are difficult to spot and study. Another tactic is disruptive coloration, which causes predators to misidentify what they are looking at. Other species use coloration that highlights rather than hides their identity: warning coloration, also called aposematism, makes predators aware of the organism’s toxic or dangerous characteristics.

Animals with different camouflage tactics. Photos by Mónica Rikić, Marta Verde and Tomás Montes. Gamboa, Panama.

Studying camouflage tactics involves animal observation and some reading. Frogs are easier to spot and photograph in Gamboa than insects or snakes. The animal books at the Adopta un Bosque station and Dinalab made it possible to classify the different species and gain some knowledge about their colors and skin patterns.

Frog spotted and photographed during Pipeline road walk. Photo by Tomás Montes. Gamboa, Panama.
Frogs spotted and photographed during walks. Photos by Tomás Montes, Päivi Maunu and Jorge Medina. Gamboa, Panama.
Frog identification triptych at Dinalab. Photos by Anna Carreras. Gamboa, Panama.
Frog identification book at Adopta un Bosque station. Photos by Anna Carreras. Gamboa, Panama.


Different frog skin patterns generated mathematically. Images and code by Anna Carreras.

The skin of some animals shows self-ordered spatial pattern formation. Cell growth and coloration create order resulting from the specific differentiation of cell groups. In such complex systems, cells are only in contact with their closest neighbors. What are the morphogenesis mechanisms by which order emerges from individual cells? What mathematical models can we use to achieve this kind of growing pattern and gain some knowledge about it? Can we simulate the visible regularities of a frog’s skin with a coded system?

The mathematician Alan Turing predicted the mechanisms that give rise to patterns of spots and stripes. The model is quite simple: it places cells in a row that interact only with their adjacent cells. Each cell synthesizes two different types of molecules, and these molecules can diffuse passively to the adjacent cells. The diffusion process makes the system more homogeneous and tends to destroy any ordered structure. Nevertheless, diffusion combined with interaction between the cells’ molecules gives rise to macroscopic ordered structures. This mechanism is called a reaction–diffusion system, and it drives the emergence of order in a chaotic dynamic system.
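The reaction–diffusion mechanism can be sketched in a few dozen lines of code. Below is a minimal Gray-Scott simulation in plain Python (the installation itself was written in Processing); the grid size, diffusion rates, and feed/kill values here are illustrative assumptions, not the parameters of the actual piece.

```python
# Minimal Gray-Scott reaction-diffusion sketch in plain Python.
# Two chemicals, A and B, diffuse over a grid and react:
# B consumes A to reproduce, A is fed in, B decays away.

N = 32                       # tiny grid so the sketch runs fast
dA, dB = 0.2, 0.1            # diffusion rates (A spreads faster than B)
feed, kill = 0.055, 0.062    # feed/kill rates in a spot-forming regime (assumed)

# A starts everywhere; B is seeded in a small central square, like a drop of dye.
A = [[1.0] * N for _ in range(N)]
B = [[0.0] * N for _ in range(N)]
for y in range(N // 2 - 2, N // 2 + 2):
    for x in range(N // 2 - 2, N // 2 + 2):
        B[y][x] = 1.0

def laplacian(grid, x, y):
    """4-neighbour discrete Laplacian with wrap-around edges."""
    return (grid[(y - 1) % N][x] + grid[(y + 1) % N][x]
            + grid[y][(x - 1) % N] + grid[y][(x + 1) % N]
            - 4.0 * grid[y][x])

def step():
    """One explicit Euler update of the Gray-Scott equations, clamped to [0, 1]."""
    global A, B
    nA = [[0.0] * N for _ in range(N)]
    nB = [[0.0] * N for _ in range(N)]
    for y in range(N):
        for x in range(N):
            a, b = A[y][x], B[y][x]
            reaction = a * b * b
            nA[y][x] = min(1.0, max(0.0,
                a + dA * laplacian(A, x, y) - reaction + feed * (1.0 - a)))
            nB[y][x] = min(1.0, max(0.0,
                b + dB * laplacian(B, x, y) + reaction - (kill + feed) * b))
    A, B = nA, nB

for _ in range(200):
    step()

# Mapping each cell's B concentration to one of two colours is what turns
# this field into a frog-skin-like image.
total_B = sum(sum(row) for row in B)
print(f"total B after 200 steps: {total_B:.2f}")
```

Tuning the feed and kill rates moves the system between spots, stripes, and labyrinths, which is the kind of knob-turning a pattern-generator GUI exposes.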

Steps of a reaction-diffusion model evolving from chaotic randomness to structured patterns. Images by Anna Carreras
Steps of a reaction-diffusion model evolving from chaotic randomness to structured patterns. Images by Anna Carreras.
Steps of a reaction-diffusion model evolving from an organized grid to emergent patterns. Images by Anna Carreras.

Code and interface

Frog pattern generator using a reaction-diffusion system. Image and system by Anna Carreras.

A system using the Gray-Scott model and its formulas was coded in the Processing language. The interface shows an animation of how a frog’s skin evolves. The GUI also shows the system values that lead to that skin pattern formation. These values and two selected colors generate a unique frog pattern each time the system is started. The spatial feeding options and the adjustable values are inspired by Gamboa’s frogs; they derive from the observed and photographed species and from the consulted books.

Frog pattern generator using a reaction-diffusion system with random feeding. Image and system by Anna Carreras.

Camouflage DIY hand fans

Two different hand fans. Photos by Anna Carreras. Gamboa, Panama.

Frog skin images are used to create light folding hand fans. They are suitable for Gamboa’s hot weather and help with camouflage inside the rainforest. They can easily be taken home and used in cities around the world.

To build a hand fan, two parts are needed: the fan frame and the fan leaf. The DIY hand fan is designed like a traditional Spanish hand fan. The frame structure is made of a thin material that can be waved back and forth, such as birch or pear wood.

Traditional Spanish hand fan structure for laser cutting. DXF file designed by Anna Carreras.

The produced hand fans use 0.8 mm thick birch wood to make sure the ribs can bend without breaking. Fabrication starts by laser-cutting the 16 fan ribs for the frame and printing the camouflage image. Cut the fan leaf with scissors as a half circle with a 210 mm exterior radius and a 95 mm inner radius.
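As a sanity check before cutting, the flat dimensions of that half-circle leaf follow from the two radii given above. A quick sketch (the radii are the build’s values; everything else is simple geometry):

```python
import math

# Flat dimensions of the fan leaf: a half annulus with a 210 mm outer
# radius and a 95 mm inner radius (values from the build instructions).
r_outer = 210.0  # mm
r_inner = 95.0   # mm

depth = r_outer - r_inner                              # radial height of the paper band
outer_arc = math.pi * r_outer                          # length of the outer curved edge
area_mm2 = math.pi * (r_outer**2 - r_inner**2) / 2.0   # paper area of the half annulus

print(f"band depth: {depth:.0f} mm")          # 115 mm
print(f"outer arc length: {outer_arc:.0f} mm")  # ~660 mm
print(f"paper area: {area_mm2 / 100:.0f} cm^2")  # ~551 cm^2
```

The outer arc length (~660 mm) is useful for checking that the printed camouflage image covers the full curved edge of the leaf.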

Laser cutting the hand fan ribs structure. Photo by Anna Carreras.

When the parts are ready, put together the 16 fan ribs, with one wide rib at the beginning and one at the end. Fix the fan ribs with an M3 screw and nut (a metric screw with a nominal diameter of 3 mm, or 0.12 in). Extend the fan ribs as an opened hand fan. Glue the fan leaf onto the thinner exterior part of each rib and allow the glue to dry. Finally, one rib at a time, place it above the previous ones and fold the paper carefully to create the folding shape.

Hand fan in action. Photos by Daniëlle Hoogendijk and Anna Carreras. Gamboa, Panama.
Resulting DIY hand fans. Photos by Anna Carreras. Gamboa, Panama.


Glass frog hand fan. Photo by Anna Carreras. Gamboa, Panama.

Two different models of the Froggy camouflage handheld fans were created. The green one is inspired by the glass frogs and the orange fan is inspired by the pumilio dart frog. Both frogs live in Panama.

Glass frog and Pumilio dart frog. Photos by Anna Carreras and Pavel Kirillov [CC BY-SA 2.0]. Gamboa and Bocas del Toro, Panama.
Glass frog hand fan. Photo by Anna Carreras. Gamboa, Panama
Pumilio dart frog hand fan. Photo by Pavel Kirillov [CC BY-SA 2.0] and Anna Carreras. Gamboa, Panama

The glass frog handheld fan and the pumilio dart frog handheld fan integrated quite well with Gamboa’s surroundings and the rainforest.

Glass frog hand fan. Photo by Anna Carreras. Gamboa, Panama.
Pumilio dart frog hand fan. Photo by Anna Carreras. Gamboa, Panama.
Glass frog hand fan camouflaged between leaves. Photo by Anna Carreras. Gamboa, Panama.

Conclusions and future work

Camouflage is one of the tactics we can play to act upon our surveilled world. It raises issues of mimesis, crypsis, perception, privacy, and identity. Some artistic projects in fashion and cosmetics have been developed with this idea, like CV Dazzle and HyperFace, among others. The Froggy camouflage handheld fans project builds in this direction, creating hand fans inspired by the camouflage strategies of Panama’s frogs.

We can gain knowledge and learn from animals and their hiding techniques. Some animal camouflage skin coloration can be modeled as a quite simple dynamic system that generates complex ordered patterns. We can mathematically model and code the system to simulate the growing process of frog skin coloration. It helps us better understand why different frog species have certain particular patterns, and it gives us some insight into how order can emerge from random initial conditions.

Different animal patterns and camouflage tactics can be further investigated. This can help us develop diverse algorithms and colored results that suit different environments and help us camouflage from the increasing number of surveillance systems: a battle between algorithms learned and borrowed from nature and vigilance algorithms.


Dinalab open Saturday exhibition. Photo by Anna Carreras. Gamboa, Panama.
Dinalab open Saturday exhibition. Photo by Anna Carreras. Gamboa, Panama.


First I would like to thank Dr. Andrew Quitmeyer for organizing the event, and all the participants I met at Dinacon Gamboa. Thanks also to Marta, Mónica, Tomás, Jorge, Päivi and Dani for helping me document the work.


Paper The Chemical Basis of Morphogenesis. Alan Turing. 1952.

Book Orden y Caos en Sistemas Complejos. Ricard V. Solé, Susanna C. Manrubia. 2000.

Videotutorial Coding challenge #13: Reaction Diffusion Algorithm in p5.js. Daniel Shiffman. 2016.

Project CV Dazzle: Camouflage from face detection. 2010.

Project HyperFace: False-Face Camouflage. 2017.

Seedpod LED Hack (Easy, educational bio-augmentation project) – Emily Volk

Exploring around Gamboa on trails and streets, I became fascinated with these flower-shaped seedpods. They appear as woody flowers, nearly blooming to release their inner fruit, and then expanding further as they dry. This seedpod stalk was the first jungle object that I picked up as debris in the streets of Gamboa, and it served as my first inspiration for a basic bio-hacking LED light project. What follows is a quick and easy tutorial for a basic natural-object bio-augmentation project. It can serve as a simple lesson plan for exploring bio-hacking, merging technology with natural objects, and the directionality of LEDs.

Personal Process

Decorative Light: Personally, I explored various ways to rig this seedpod stalk as a full LED light that could decorate a space as a hanging decorative light. For this, I experimented with various conductive materials provided by Dinalab, including conductive thread and copper tape. I hoped to use a conductive wiring material that would either blend into the seedpod stalk or add aesthetic detail in the form of an attractive color or form. I did not settle on a favorite method for this full-stalk augmentation, and encourage others to pick up this process and explore different modes of creating a lamp with many seedpods!

Tactile Engagement: I also explored various interaction designs using LEDs to inspire tactile, up-close exploration of this seedpod, which I found to have such a fascinating shape and process of opening. In this exploration, I used LEDs activated by a DIY button, where squeeze intensity and location determined which LED would light and how brightly. These LEDs and the tactile button control were meant to encourage a viewer to pick up the seedpod stalk and explore both its structure and its LED light augmentation, as a way to encourage close observation of a natural structure.

Tactile exploration of bio-augmented LED seedpods, including fun Dinacon atmosphere of giggles and sharing work with an inspiring peer!

Project Tutorial: Quick educational lesson plan to explore bio-augmentation and LED basics!


In this quick tutorial, we explore a basic bio-augmentation project of adding an LED to a dried seedpod in order to make a quick and easy light. This project highlights the directionality of LEDs, and explores how technology and nature can merge to create new and innovative forms based on personal interest and exploration of natural objects.


  • Seedpod!
  • LED
  • 3V coin cell battery


The miraculous element of this project is how perfectly the base of one of these fully opened seedpods fits a standard 3V coin cell battery. This served as inspiration for the project, and allows the little LED product to be a compact and pretty sturdy unit!

Basics of LEDs: LED stands for “light-emitting diode.” A diode is a semiconductor device which only conducts electricity in one direction. An LED is a particular type of diode that emits light when current passes through it, in the positive to negative direction. On a basic LED, you can tell which side is positive for wiring because the positive prong is longer.
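A note on why no resistor appears in the parts list: a coin cell’s relatively high internal resistance limits the current through the LED on its own. The back-of-the-envelope estimate below uses assumed typical values for the forward voltage and internal resistance, not measurements of this particular build:

```python
# Rough estimate of LED current when driven directly from a coin cell.
# All component values below are assumed typical values for illustration.
v_batt = 3.0       # nominal coin cell voltage, e.g. a CR2032 lithium cell
v_forward = 2.0    # typical red LED forward voltage (assumed)
r_internal = 20.0  # coin cell internal resistance in ohms (assumed; rises as the cell drains)

# Ohm's law over the only resistance in the loop: the cell itself.
current_a = (v_batt - v_forward) / r_internal
print(f"Estimated LED current: {current_a * 1000:.0f} mA")
```

This self-limiting behavior is the same reason "LED throwie" projects can safely press an LED straight onto a coin cell, while a larger, stiffer battery would need a series resistor to protect the LED.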

To fashion your own seedpod light, first note which side of your LED is positive (longer wire) and which side is negative (shorter wire). Then, extend the prongs of your LED horizontally, and carefully place your LED into the center of your seedpod. Position the LED prongs as close to the base of the pod as possible, and between “petals” of the pod. To secure your LED in your seedpod, carefully bend the prongs of your LED down with tension, which will secure your LED in your seed pod.

LED positioned in the middle of the seedpod, like the center of a flower. LED prongs are positioned through the gaps in the seedpod “petals,” and bent downward to secure the LED in the center of the pod.

From here, bend your LED prongs. Bend the negative prong to lie horizontally across the back of your pod, as close to the base as possible. Then, bend your positive prong above this, but leave slightly more space from the back of the seedpod. Make sure the positive and negative prongs are not touching, as that would create a short circuit that bypasses your LED.

Negative prong is bent level with the seedpod base, very close to the surface (left wire). Positive prong is bent slightly above the surface (right wire, above).

This little pocket between LED wires forms the fixture for your coin cell battery! Place your 3V coin cell battery face up (positive side up), and secure it by clamping down the positive LED prong over the battery. Keep bending until the battery is snugly secured in the seedpod and firmly contacting the negative LED prong.

Your LED should now be lit, leaving you with a completed little bio-augmented seedpod light! Make as many as you want, now that you know the basics of LED directionality and can experiment beyond with bio-augmentation.


Feel free to reach out with any feedback or interest. Thanks!

The Future Within – Grace Grothaus

Grace Grothaus
THE FUTURE WITHIN: A digital seed archive and interactive sculpture series exploring threatened plant biodiversity in the Americas

“First and above all an explanation must do justice to the thing that is to be explained, must not devaluate it, interpret it away, belittle it, or garble it, in order to make it easier to understand. The question is not “At what view of the phenomenon must we arrive in order to explain it in accordance with one or another philosophy?” but precisely the reverse: “What philosophy is requisite if we are to live up to the subject, be on a level with it?” The question is not how the phenomenon must be turned, twisted, narrowed, crippled so as to be explicable, at all costs, upon principles that we have once and for all resolved not to go beyond. The question is: “To what point must we enlarge our thought so that it shall be in proportion to the phenomenon…” – Schelling

“The future is not in front of us, for it is here already in the shape of a germ (seed).” “What is not with us will not be, even in the future.” Čapek

A result of cumulative anthropogenic activity, global mass extinction is currently in progress, a phenomenon which many refer to as the sixth extinction. I am attempting to grapple with this phenomenon as an artist and to live up to the enormity of the subject. In Schelling’s expostulation I see the beginnings of a course of action. To enlarge my thought to be in proportion to the phenomenon, I must immerse myself in it, far beyond the four walls of my studio. In so doing I deepen my knowledge base and in turn the efficacy of my artistic practice upon return to the studio. We see more clearly by recording what we see firsthand. With this understanding, and via the support of the Digital Naturalism Conference, the Tinker Foundation, and the University of California San Diego, I conducted field research from July to September 2019 in forests across the Americas: South, Central, and North. Specifically, in the Panamanian canal zone tropical moist broadleaf forest (“rainforest”), the Brazilian Cerrado, the Mata Atlântica, and the North Atlantic forest of the Blue Ridge Parkway. Especially in Panama and in Brazil, these biodiversity hotspots are home to a great number of endemic species, some of which have not yet even been discovered. Especially in Brazil, they are also threatened. According to UNESCO, less than 30% of the natural vegetation of the Cerrado, the second largest biome in South America, remains, and it continues to shrink; the original Mata Atlântica has experienced 85% deforestation. These are places of irreplaceable biodiversity. For example, the Cerrado is the most biodiverse savannah in the world. Yet devastating losses continue. It is highly probable that many endemic species have already faced extinction before being recognized by the scientific community and the broader world at large, and even more are at risk today.

These past few months, during my hikes in these forests and grasslands, I sought out seeds, seedpods, and fruiting bodies of as many different plant species as possible, and from them created 3D digital models. In this way I digitally collected 60 unique specimens in Panama during Dinacon, another 152 in South America, and 45 thus far from North America, where I am working now. All together this represents 257 unique species. The digital models are composed of a staggering 26,000+ images taken of the specimens during the photogrammetry process. In addition, I have nearly seven thousand photographs, video, and audio recordings, and numerous field notes and sketches. Via field guides and discussions with generous researchers at the Inhotim Botanical Gardens, the Smithsonian Tropical Research Institute, and the University of California San Diego, I have been identifying the species of my specimens and learning about them.

In particular, a discussion with two of STRI’s post-doctoral research fellows about their research into seed dormancy in tropical forests was eye-opening. Seeds in more temperate forests are known for their lengthy fertile dormancy, and it is not unknown for specimens to lie dormant but still viable for even tens of thousands of years. In tropical forests the duration is much shorter, ending after only a few years and stretching to a decade or two at most, depending on species and soil conditions. The reasons are not wholly clear, yet in both locations the seeds are not impaired by the soil; rather, they actually need the soil microbes for the possibility of germination. Like the wildfires of the Cerrado, tropical soil is able to abrade the seed surface enough for germination. Seeds possess a chalazal area or plug, a round location on the surface that must break away in order for the plant embryo inside to emerge and grow. In discussing the mechanisms by which future climate change may affect the species they study, the researchers explained that it is not only microbial/soil abrasion that is sufficient to break physical dormancy in tropical seeds, but also fluctuations in the soil temperature. Surface soil temperature depends on ambient air temperature, so in a much warmer future the viability window for these seeds may well shorten. This is but one of the numerous concrete mechanisms by which species in this biome face future loss and potential extinction, and one I was previously wholly unaware of.

It can be cognitively difficult to focus on the slow-growing and near-silent plants around us. We have a tendency to look at plants as part of another and separate “natural world,” perhaps even a backdrop upon which we and other animals live out our lives, but this mindset is a fallacy. Plants are the keystone upon which all mammals rely. The pace of plant growth is so slow compared to human movement that it lends itself to the human impression of the plant world as a constant, reliable backdrop, yet everything is growing constantly, and all that I observed was constantly in flux. I’m grateful for the extended time dedicated to careful observation this field research provided, and reminded of Heraclitus’s truism that you can only step into the same river once. In the same way that it is difficult to focus attention for the length of time requisite to really witness a plant grow, the climate crisis is similarly difficult to observe, and yet with both the cumulative changes are unmistakable.

The photogrammetry process enables me to collect only digitally and leave no trace behind me, leaving the real specimens in their home environments where they belong. I am developing a growing digital record, but it is just a tiny fraction of the species diversity that was all around me. Many more species were not in seed during the time of my visit, and many more eluded my discovery, having already dispersed to the wind, soil, or animal digestion. The seed specimens I did collect vary greatly in size, from a paper-thin ~1/64 of an inch thick to 18 3/8 inches long. I was thorough in my collection, but still others were too small, lacking any dimension of length and therefore impossible to capture using the set-up available to me. Perhaps further in the future, with other support, I’ll be able to model them via micro-MRI imaging. In the near term I am now turning these digital models into sculptures: printing them into physical objects and embedding my motion-sensitive electronics inside, which will enable the finished sculptures to murmur sounds into our ears. If a seashell held to the ear presents to our imagination sounds of the ocean, though in fact amplifying the rush of our own blood circulating, then perhaps these seed sculpture sound compositions, composed from field recordings and whispered text, will amplify our hopes and fears about our planetary future to a level that we cannot ignore.