But, although a statement of allophonic patterning necessarily establishes indirectly the overall distribution of each phoneme which exhibits variation of this sort, it is usually necessary (and certainly more transparent) to treat the constraints on phoneme distribution separately. This aspect of sound patterning is usually called phonotactics.
Imagine two languages (call them Language A and Language B) which differ in the phoneme sequences which they allow, but differ (at least phonologically) only this way. Let us suppose - to keep things simple but not misrepresent life in the real world - that the phonemes distinguished in these two languages are very few - say the consonants /p/, /t/, /k/, /l/, /n/, and /s/, and the vowels /i/, /a/, and /u/ - and let us imagine that these phonemes are pronounced more or less like the phonemes which would be represented the same way in a description of English.
Language A requires words to be made up of alternating sequences of single consonants and vowels. Words must furthermore begin with a consonant and end with a vowel. As a result of these constraints, the vocabulary of Language A might contain the words /palaka/, /masitu/ and /kunipa/. If you measure length in syllables (e.g. /palaka/ has 3, /pa-la-ka/), the words in such a language will tend to be rather 'long', for only in this way is it possible to provide for a sufficient variety of different word-forms.
Language B, by contrast, allows words with groups of consonants at the beginning and the end. The consonant groups may contain up to three elements and, as further evidence of the liberal regime, vowels are not required word-finally. Obeying these regulations, Language B is able to distinguish many more 'short' single syllable words than Language A. It can, for example, take advantage of quite complex single syllable forms like /skumps/, /tsukt/ and /pli/ to stock up its lexicon.
You need do no more than say out loud the words of Language A and Language B to feel forcibly enough the effects of a language's characteristic phonotactic pattern on the way it 'sounds'. And so, to return to our original question - Why do languages sound different? - we can now answer that two languages may well sound different because the sounds of the two languages are differently distributed - or, if you prefer, because the permissible sequences of phonemes are governed by different restrictions or constraints. When there is significant variation in the sequential patterning of phonemes, the difference in general auditory effect can be dramatic.
This sort of diversity can be seen as picturesque and, indeed, it causes no one a problem until two languages come into contact. But it will perhaps not come as a surprise that, like many languages of the Pacific area, Rennellese has met up with English and as a result of that ongoing relationship has frequently borrowed English words. What is interesting from our point of view is that the discrepancy between the phonotactic patterns of the two languages will more often than not prohibit direct, immediate borrowing. Many English words are simply not pronounceable as they stand by a Rennellese speaker (speaking Rennellese). In order to assimilate them into the native Rennellese structures, considerable modification of the English forms is required. English plumber, for just this reason, can only turn up if remodelled as palama or pulama while English school must resurface as sikolo.
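The kind of remodelling involved can be mimicked, very crudely, by a repair procedure that forces an arbitrary phoneme string into a strict CV template. The sketch below is a toy only: real Rennellese adaptation also substitutes phonemes and often copies neighbouring vowel qualities (hence sikolo rather than an invariant epenthetic vowel), which we ignore here by always inserting /a/:

```python
VOWELS = set("iau")

def repair_to_cv(phonemes: str, epenthetic: str = "a") -> str:
    """Break consonant clusters and pad a final consonant so the
    output is a string of CV syllables (a toy 'borrowing' repair)."""
    out = []
    for i, seg in enumerate(phonemes):
        out.append(seg)
        nxt = phonemes[i + 1] if i + 1 < len(phonemes) else None
        if seg not in VOWELS and (nxt is None or nxt not in VOWELS):
            out.append(epenthetic)  # vowel after a stranded consonant
    return "".join(out)
```

Fed the (already phoneme-substituted) input plama, this toy procedure does yield palama; for skul it yields sakula, where the real language, copying vowel qualities, gives sikolo.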
In describing Language A, above, we proposed a condition of just such a type, which allowed sounds of one kind - consonants - to appear only in one place and sounds of a different kind - vowels - to appear only in another. And if we had attempted to set up constraints for a real language of type B, we would not only have had to refer to consonants and vowels, we would also have had to distinguish between phonetically different sub-classes of consonants, since, even within the consonant groups, there is no free-for-all. In English, for instance, when three consonants appear at the beginning of a word, the restrictions are so tight that the first can only be /s/ and the third can only be /l/, /r/, /w/ or /j/ - or, to use a phonetic class which covers all four, an approximant.
Syllable structures, for example, vary considerably from language to language, as we have seen. But syllable structures are none the less universally governed by a sonority sequencing principle which imposes tight restrictions on the sequencing of segments on either side of the syllable nucleus. Taking vowels to be the most sonorous sound types and obstruents the least (with glides, liquids and nasals forming the intermediate steps) the principle prohibits anything but a decreasing pattern of sonority from the centre to the margins of a syllable. This principle therefore imposes clear limits on, say, the choice of consonants forming a group in the syllable onset position. (The simple syllable counter discussed in Part 1 was based on this principle.)
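The sonority sequencing principle lends itself to a direct computational check. The sketch below is ours; the five-step sonority scale and the phoneme classes assigned to each step are assumptions for the sake of illustration:

```python
# Assumed five-step sonority scale:
# obstruents < nasals < liquids < glides < vowels.
SONORITY = {}
for group, rank in [("ptkbdgfvsz", 1), ("mn", 2),
                    ("lr", 3), ("jw", 4), ("iaueo", 5)]:
    for seg in group:
        SONORITY[seg] = rank

def obeys_ssp(syllable: str) -> bool:
    """True if sonority rises strictly to a single peak (the nucleus)
    and falls strictly after it."""
    values = [SONORITY[seg] for seg in syllable]
    peak = values.index(max(values))
    rising = all(a < b for a, b in zip(values[:peak], values[1:peak + 1]))
    falling = all(a > b for a, b in zip(values[peak:], values[peak + 1:]))
    return rising and falling
```

Note that this strict version rejects /skumps/: /s/ + plosive clusters are well-known exceptions to the principle in many languages, including English.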
In other cases it seems as if a range of phonotactic patterns may be handled by supposing them to be determined by the setting of a limited number of parameters (or switches). Word-stress variation across languages, for example, may be governed by a set of simple binary choices of the type: Are syllables counted from the left or the right? Does the selection of the stressed syllable depend on the weight of the syllable? In this context, learning the patterns would involve picking up enough clues to check the appropriate boxes alongside the questions.
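A parameter-based account of this kind can be sketched as a single procedure whose behaviour is fixed by two boolean switches. The parameter inventory and the definition of a 'heavy' syllable below are illustrative toys, not a serious stress typology:

```python
def stress_position(syllables, from_right=True, weight_sensitive=False,
                    heavy=lambda syl: len(syl) > 2):
    """Return the index of the stressed syllable under two toy
    parameters: counting direction and weight-sensitivity."""
    order = (range(len(syllables) - 1, -1, -1) if from_right
             else range(len(syllables)))
    if weight_sensitive:
        for i in order:
            if heavy(syllables[i]):   # first heavy syllable wins
                return i
    # Default (or no heavy syllable found): the edge syllable
    # in the chosen counting direction.
    return len(syllables) - 1 if from_right else 0
```

Flipping the two switches yields four distinct 'languages' from one mechanism, which is the attraction of the parametric view.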
Lastly it is worth noting that we can find parallels in phonotactic patterning to the absolute, statistical and implicational universals we found in phonemic systems. All languages use a CV (Consonant Vowel) syllable template (an absolute universal). The syllable nucleus is normally occupied by a vowel (a statistical universal). A language which has onset clusters of the type plosive + nasal will also have plosive + liquid onset clusters (an implicational universal).
If for any language we can identify the specific phonotactic constraints clearly enough, and if they are expressed in terms of some phonological hierarchy, we also have the means to develop tools for more sophisticated parsing - i.e. the analysis, and possibly rejection, of phoneme sequences.
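For the simple case of Language A such a parser is easily sketched: it either returns an analysis of the input into syllables or rejects the sequence outright. The inventory is the one assumed throughout; the implementation is our own illustration:

```python
CONSONANTS = set("ptklns")
VOWELS = set("iau")

def parse_cv(word):
    """Syllabify a Language-A-style word into CV syllables,
    or return None if the sequence is phonotactically illegal."""
    syllables = []
    i = 0
    while i < len(word):
        if (i + 1 < len(word)
                and word[i] in CONSONANTS and word[i + 1] in VOWELS):
            syllables.append(word[i:i + 2])
            i += 2
        else:
            return None  # reject: not an alternating CV sequence
    return syllables
```

Unlike the simple yes/no match given earlier, a parser of this kind delivers structure - here, the syllabification /pa-la-ka/ - which a more sophisticated tool could build on.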
The phonotactic constraints we build into a parser can, of course, just as well be used in the development of procedures which would allow us to generate novel (but legal) word forms in that language - names for a new cosmetic or a new car.
There are two quite different ways in which this challenge might be taken up. We could attempt to provide a set of phonological word formation rules which were so tightly constrained at each stage that they could only ever generate possible words. Alternatively we could randomly put together sequences of phonemes and then subject these structures to a set of filters, which would block illegal combinations but allow the 'good' words to slip through.
For a simple CV language of type A, word formation rules are easy enough to implement. In the case of language type B, with more complex syllable structure, a filter-based approach is perhaps more tractable. Since filters can readily handle the simple patterns of a CV language, as well as the more complex forms, you might decide to opt for this approach as providing the most general mechanism. Here, however, is a production rule based syllable generator which opts for the constrained building approach.
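A generator of that kind can be sketched as a small set of production rules expanded top-down, each category rewriting as one of its listed expansions until only phonemes remain. The rule inventory below is illustrative (it allows only consonant + liquid as a complex onset), not a description of any particular language:

```python
import random

# Toy production rules: each category rewrites as one of the
# listed expansions; bare strings like "p" are terminal phonemes.
RULES = {
    "Syllable": [["Onset", "Nucleus"], ["Onset", "Nucleus", "Coda"]],
    "Onset":    [["C"], ["C", "L"]],   # single consonant, or consonant + liquid
    "Nucleus":  [["V"]],
    "Coda":     [["C"]],
    "C":        [["p"], ["t"], ["k"], ["s"]],
    "L":        [["l"]],
    "V":        [["i"], ["a"], ["u"]],
}

def expand(category, rng):
    """Recursively expand a category until only phonemes remain."""
    if category not in RULES:
        return category                # terminal phoneme
    expansion = rng.choice(RULES[category])
    return "".join(expand(symbol, rng) for symbol in expansion)

def generate_syllable(rng=None):
    return expand("Syllable", rng or random.Random())
```

Because every rule is legal by construction, no filtering step is needed: whatever the generator builds is a possible syllable, which is precisely the attraction of the constrained building approach.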