ABSTRACT

Two bottlenosed dolphins (Tursiops truncatus) have been trained to carry out instructions conveyed through artificial languages (Herman, 1986, 1987; Herman, Richards & Wolz, 1984). The instructions were conveyed through sequences of gestures or sounds and were functionally similar to imperative sentences in human languages. For one dolphin, named Akeakamai, a gestural language was employed in which the discrete gestures produced by a trainer were analogous to the words of a natural language, referring to objects, simple actions, relationships, and indicants of spatial location. For the second dolphin, named Phoenix, an acoustical language was employed in which discrete, electronically generated sounds were used in place of gestures. Each language thus contained a vocabulary of gestures or sounds. These vocabulary items could be combined and recombined with one another according to a set of syntactic rules that governed the order in which the various semantic categories could be arranged to create expanded meanings. The syntactic rules allowed strings of two to five semantic entities to be constructed and allowed meaning to be varied by the ordering of these entities. In this way, many hundreds of unique, grammatically correct sequences could be formed from a limited vocabulary.