Sometimes you read a paper and you just say to yourself: That is so cool! In Science magazine this week is one of those papers. A team from Harvard engineered a miniature swimming ray, built around an elastomer body and a gold skeleton and powered by heart muscle cells engineered to respond to blue light.
And here’s the first figure of the paper if you prefer a diagram with a little more technical detail:
Legend: (A) A live Little skate, Leucoraja erinacea, swimming and (B) its musculoskeletal structure. (C to E) Tissue-engineered ray with (C) four layers of body architecture, (D) concept, and (E) phototactic control. Upon optical stimulation, the tissue-engineered ray induces sequential muscle activation via serpentine-patterned muscle tissues, generates undulatory locomotion, and sustains steady forward swimming. It changes direction by generating asymmetric undulating motion between left and right fins, modulated by light pulse frequency.
Once my brain got over the initial excitement of “Cool! Cyborg organisms! It’s so science fiction!”, a couple of interesting things struck me. The first is that this is a nice example of the growing field of “soft robotics”: literally, the use of soft materials, inorganic or organic, to make robots, often, though not always, inspired by biology. Ask most people what they picture when they think of a “robot” and they will describe an image almost wholly generated by science fiction: a mechanical, humanoid figure, probably made of gleaming metal with glowing eyes; asked for a real-world example, they might correctly point to factory assembly robot arms, which are, again, mechanical. Soft materials have real advantages: an ability to deform makes a robot better able to cope with unpredictable environments, and safer when interacting with humans. With advances in materials technology, we can expect to see more of this field.
The other thought that occurred to me was sparked by that comment towards the end of the video about synthetic cognition: when we imagine creating a machine that can “think”, we tend to assume it’s all about what’s going on inside whatever matter is doing the thinking. But it’s not. An awful lot of human cognition is based on sensory input from the environment, and on mental maps of our own bodies. In less cognitively complex animals, what passes for “thinking” is essentially cycles of detecting and responding to sensory information from the environment. There’s also a school of thought, arising from this theory of embodied cognition, which suggests that our ability to predict future events depends on our having a mental model of our bodies and essentially simulating what those bodies will do in response to changing sensory stimuli, based on past experience of similar stimuli, and that even very simple animals do a simple version of this. This is leading to new kinds of artificial intelligence research, involving the use of neural nets and attempts to construct simple creatures like artificial insects. It’s also worth noting Moravec’s paradox: contrary to what one might think, high-level reasoning requires very little computation, while low-level sensorimotor skills require enormous computational resources.
This makes a lot of sense to me. It also implies that if we are to make artificial organisms, robots, if you will, that can interact with the environment and respond appropriately to changing situations, then this is what they will need to do too. Again, science fiction has often imagined the “thinking computer”, like HAL 9000; such brains-in-a-box might get sensory input from video cameras and microphones, but they have no bodies with which to act on the real world. I won’t rule out that a future artificial intelligence could manage without a body, but I do think that, if we are to achieve true AI, in the sense of something that can think for itself, we may well need to evolve it up from simple things like biohybrid rays into something that does, indeed, work rather like a human does.