The assumption that a common set of linguistic principles, forming part of the human language capacity, underlies both spoken language and sign language phonology is shared by most phonologists working on sign language. See Sandler (1993a) for an extensive discussion of these issues. Even though this assumption is reasonable, given that both spoken and signed languages are products of the same human brain and fulfil the same function, it is not clear that theories of representation proposed for spoken languages can be applied directly to the structure of sign languages. Such representations have been developed on the basis of the spoken language modality alone, and they are often so close to the phonetics of spoken languages that we cannot rule out the possibility that non-trivial aspects of them are modality-specific. Therefore, rather than attempting, for example, to test various competing (spoken-language-based) theories of syllable structure, we must first investigate the structure of sign language in its own right. This strategy need not be pushed too far, however. In developing a model of signs we can benefit from general principles that have proved successful in the study of spoken languages, especially when these principles do not appear to be based directly on ‘spoken phonetics’.