Cuttlefish are cephalopod mollusks that, together with octopuses and squid, have done away with their external shell and evolved both large brains and complex behaviours. One of their most striking behaviours is camouflage. These animals can change their skin pattern, colour and 3D texture within a fraction of a second so as to avoid detection by predators (or prey). Their camouflage fools even the human visual system, suggesting that it relies on widely shared principles of visual perception. Our study, published this week in Nature (http://dx.doi.org/10.1038/s41586-018-0591-3), describes cuttlefish skin patterning quantitatively and uses this description to reveal hidden aspects of neural control and development in this fascinating system.
I am a postdoc in the lab of Gilles Laurent at the Max Planck Institute for Brain Research. I became interested in working on cuttlefish in the winter of 2016, after seeing the progress made on an ongoing project in the lab, designed to capture cuttlefish skin state at chromatophore resolution. A video, taken by Jessica Eberle, a master's student in our lab, showed a zoomed-in view of cuttlefish skin (as in Fig. 1b), revealing aspects of the mechanistic generation of skin patterning in these animals. A young cuttlefish possesses tens of thousands of chromatophores, specialised pigment cells in the animal’s skin. Each chromatophore is surrounded radially by muscles that, when contracted, expand the chromatophore and thus the “pigmented pixel” (Fig. 1c). These muscles are themselves controlled by small sets of motor neurons projecting from the brain. Thus, cuttlefish have wired their brains to their skin in such a way as to allow fast and precise control of their skin patterning. Much of this is known from decades of work by Ernst Florey, Roger Hanlon, John Messenger, and others.
Figure 1 | a, Top-down view of a juvenile cuttlefish (head faces bottom right). b, Zoomed-in view of skin showing chromatophores of different expansion states and colours. c, Super-resolution (STED) microscopy of a single chromatophore and connected muscles (actin stain).
What was new was work on these images by Philipp Hülsdunk, a computer science graduate from Goethe University who was by then a joint graduate student in our lab and in that of Matthias Kaschube (our collaborator at Goethe University and the Frankfurt Institute for Advanced Studies). Philipp showed that the cuttlefish skin could be aligned over time by using computer vision techniques to remove larger-scale movements (e.g. breathing and swimming), while retaining smaller-scale changes in chromatophore expansion. This suggested to me that the goal of tracking large numbers of chromatophores in freely behaving cuttlefish had become reachable, thus enabling the objective description of a complex behaviour with a particularly clear relationship to the underlying neural control.
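To give a flavour of the alignment idea, here is a minimal sketch of one standard registration technique, phase correlation, which recovers the whole-frame translation between two images; the actual pipeline used in the study was far more elaborate, and the function name and parameters below are my own illustration, not the paper's code.

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the integer (dy, dx) translation that maps `ref`
    onto `moved`, via phase correlation: the cross-power spectrum
    of the two frames, normalised to unit magnitude, inverse-
    transforms to a sharp peak at the relative shift."""
    f_ref = np.fft.fft2(ref)
    f_mov = np.fft.fft2(moved)
    cross = f_mov * np.conj(f_ref)
    cross /= np.abs(cross) + 1e-12       # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the image wrap around; map them to negatives
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

In a tracking context, an estimate like this would be used to cancel the large rigid motion of each frame before looking at the much smaller, local changes in chromatophore expansion.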
In weekly meetings of our team, we began to iterate between improving the experimental conditions for optimal chromatophore tracking, spearheaded by Theodosia Woo, and improving the tracking algorithms and implementation to handle massive amounts of video data. In hindsight, this fast feedback loop between experiment and analysis was crucial, as it allowed us to bootstrap towards increasingly usable quantitative behavioural data. By the end of 2016, we could reliably track short stretches of video containing a full animal. At this point, we had no choice but to confront the problem that we knew was lurking in the background.
Our tracking technique relied on having the animal continuously in view, and on chromatophores not changing position by many pixels between frames. Unfortunately, cuttlefish are fast swimmers, capable of jetting out of our camera’s small, zoomed-in field of view. We could quickly bring the animal back into view using a motorised rail system, but this resulted in our dataset consisting of a series of ‘chunks’ of usable video. These chunks were separated by seemingly unbridgeable gaps where the animal would swim out of view, adopt a different orientation, and (annoyingly for us) often change skin patterning.
It turned out we could use small patches of skin as unique ‘fingerprints’ to ‘stitch’ chunks of video together. Taking a patch of skin from one image in one chunk and correlating it with small patches taken from images in another chunk at every possible translation and rotation usually resulted in only a single match, at the correct position. We used this fact to design a stitching algorithm that let us map hundreds of chunks of video into one common reference frame, and thus track tens of thousands of chromatophores simultaneously over hours. This opened the door to a range of fascinating questions about behavioural control and development that we begin to explore in our paper.
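The fingerprint idea can be sketched as a brute-force search: rotate the skin patch through a set of candidate angles and, at each angle, correlate it against the target image at every translation, keeping the single best placement. This is only a toy illustration under my own assumptions (names, angle grid, and the use of normalised cross-correlation are mine); the paper's stitching algorithm is more sophisticated.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import fftconvolve

def best_match(patch, scene, angles):
    """Find the rotation angle (degrees) and top-left position at
    which `patch` best matches `scene`, by exhaustive search over
    `angles` and all translations. Returns (score, angle, (y, x))."""
    # zero-mean, unit-variance template so scores are comparable
    patch = (patch - patch.mean()) / (patch.std() + 1e-12)
    best = (-np.inf, None, None)
    for angle in angles:
        templ = rotate(patch, angle, reshape=False)
        # cross-correlation = convolution with the flipped template
        corr = fftconvolve(scene, templ[::-1, ::-1], mode='valid')
        y, x = np.unravel_index(np.argmax(corr), corr.shape)
        if corr[y, x] > best[0]:
            best = (corr[y, x], angle, (int(y), int(x)))
    return best
```

Because the skin pattern is so rich, a patch of modest size typically produces one clear correlation peak and a sea of near-zero scores everywhere else, which is what makes it usable as a fingerprint for mapping chunks into a common reference frame.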
Several times throughout this project I was struck by how progress was only made possible by readily moving between traditional scientific disciplines. For example, early in the project Marcel Lauterbach (a postdoc in the lab) and Jessica Eberle became interested in chromatophore colour. It was clear at the time that there were multiple colours of chromatophores, but the number of colours, or whether a continuum of colours exists, had been debated for decades. After spectral measurements and pharmacology suggested the presence of two main colour classes (light and dark), we began collaborating with the chemists Jakob Meier-Credo and Julian Langer in our institute’s Proteomics facility. They used state-of-the-art mass spectrometry techniques to identify the pigment giving the light chromatophores their colour. Using this chemical validation of two colour classes, we started tracking chromatophores over development (days to weeks), finding that chromatophores were born light and systematically turned dark over time. This in turn allowed us, using analytical models and computer simulations, to describe many aspects of the system (its development, but also aspects of the distribution of light and dark pixels) that I initially didn’t think we would be able to address.
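The birth-and-conversion picture can be conveyed with a deliberately simple toy model; to be clear, this is my illustration of the qualitative idea, not the paper's actual model or parameters. Assume chromatophores are born light at a constant rate and that each light cell independently turns dark with some small probability per day:

```python
import numpy as np

def simulate(days, birth_rate=100, p_darken=0.05, seed=0):
    """Toy birth-and-conversion model: cells are born light at a
    fixed daily rate; each light cell darkens with probability
    p_darken per day. Returns a list of (n_light, n_dark) per day."""
    rng = np.random.default_rng(seed)
    light, dark = 0, 0
    history = []
    for _ in range(days):
        light += birth_rate                        # all new cells are light
        converted = rng.binomial(light, p_darken)  # some light cells darken
        light -= converted
        dark += converted
        history.append((light, dark))
    return history
```

Even this caricature reproduces two qualitative features: the light population saturates (births balance conversions), while the dark population keeps accumulating, so the dark fraction rises steadily with age.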
I think of our study as one step towards a larger goal of describing cuttlefish camouflage behaviour quantitatively: this behaviour is the expression of a mapping between an image of the local environment and an image generated on the cuttlefish’s skin. Understanding this mapping stands to teach us much about visual perception in these animals, in their predators, and perhaps in general. Additionally, this system offers an excellent setting for theorising about the relationship between animal behaviour and its underlying neural control. It has been a privilege to be able to work on these topics with a wonderful team of collaborators, and I would like to thank everyone at the Max Planck Institute for Brain Research, especially Fritz Kretschmer in the Scientific Computing Facility, and our amazing team of animal caretakers, for making this work possible.