16.9.06

Slow and steady wins the race, right?

So, Josh established once and for all what is and what isn't linguistics a few days ago. I've been meaning to discuss some of what he says in that post, but because I am typically slow to response-blog, he has since written another language-related post I want to respond to as well.

(Of all the linguistic issues Josh would end up blogging about, VOT is perhaps the one I expected least. In any case, my responses to the two posts are related, so it turns out to be a good thing I 'waited'.)

The earlier post is concerned primarily with whether or not sociolinguistics is linguistics proper, but it also touches on the role of phonetics in linguistics. Josh makes a compelling case that a fair bit of sociolinguistics is sociology, only tangentially related to linguistics (he estimates it's an 80/20 split, but I'm not willing to commit to any such numbers). He also insists "that phonetics is more concerned with the interface than the subject proper".

For Josh, among many other linguists, capital-L Language is competence, not performance. That is, linguistics proper is concerned with the knowledge underlying language, as opposed to what is actually said on any given occasion. Even if you concede this point, you have to be careful when deciding what counts as competence and what doesn't. Perhaps the most famous sociolinguist - William Labov - argues in an unpublished manuscript on the foundations of linguistics that the competence/performance distinction is incoherent (emphasis added):
The terms 'idealism' and 'materialism' can be seen to be most appropriate in relation to the definitions of data involved. The idealist position is that the data of linguistics consists of speakers' opinions about how they should speak: judgments of grammaticality or acceptability that they make about sentences that are presented to them....

The materialist approach to the description of language is based on the objective methods of observation and experiment. Subjective judgments are considered a useful and even indispensable guide to forming hypotheses about language structure, but they cannot be taken as evidence to resolve conflicting views. The idealist response is that these objective observations of speech production are a form of 'data flux' which are not directly related to the grammar of the language at all....

.... The idealist position has more recently been reinforced by a distinction between 'performance' and 'competence'. What is actually said and communicated between people is said to be the product of 'language performance', which is governed by many other factors besides the linguistic faculty, and is profoundly distorted by speaker errors of various kinds. The goal of linguistics is to get at the underlying 'competence' of the speaker, and the study of performance is said to lie outside of linguistics proper. The materialist view is that 'competence' can only be understood through the study of 'performance', and that this dichotomy involves an infinite regress: if there are separate rules of performance to be analyzed, then they must also comprise a 'competence', and then new rules of 'performance' to use them, and so on.
While I don't agree that this constitutes an infinite regress (it seems clear to me that a lower bound on linguistically relevant and controllable production and perception variables is establishable in principle), the general point is important, and one could easily make the case that the partition between linguistically interesting competence and mere performance typically excludes linguistically relevant knowledge. It may be that the conditioning context for some pronunciation variant is more social (e.g., economic status of interlocutor) than traditionally linguistic (e.g., prosodic position), but this fact alone doesn't make systematic linguistic behavior any less indicative of underlying knowledge about the linguistic system. Even the most ardent 'idealists' (e.g., Chomsky) understand that we can only indirectly observe competence as it is 'filtered' through performance. From Aspects of the Theory of Syntax (p. 4):
The problem for the linguist, as well as for the child learning the language, is to determine from the data of performance the underlying system of rules that has been mastered by the speaker-hearer and that he puts to use in actual performance.
So when Josh says that
People like me, though we accept that Phonetics is also Linguistics, would insist that Phonetics is more concerned with the interface than the subject proper. Language is competence. Studies of articulatory motor functions and sound processing are valuable (especially in practical industry terms, for things like speech processing on those annoying telephone menus that have you say "one" rather than press the button), no doubt about it, but mostly as a way to explain how useable information gets to the language module and back out again. I do not seriously believe that Phonetics has any bearing on meaning or grammar (though there are certainly those that do) - though there are bound to be certain mathematical artefacts of the way articulators are arranged that sometimes cause a speaker to prefer one possible form over another, etc.
I think he's mistaken. 'Mathematical artefacts' are by no means the only way in which the interface has an effect on grammar. In the early days of generative theory (i.e., whence the popular resurgence of the competence-performance divide), the interface was seen as crucial, if not central, to the theory of language. As Katz inelegantly wrote in The Philosophy of Language (1966, p. 98):
Natural languages are vehicles for communication in which syntactically structured and acoustically realized objects transmit meaningful messages from one speaker to another....

Roughly, linguistic communication consists in the production of some external, publicly observable, acoustic phenomenon whose phonetic and syntactic structure encodes a speaker's inner, private thoughts or ideas and the decoding of the phonetic and syntactic structure exhibited in such a physical phenomenon by other speakers in the form of an inner, private experience of the same thoughts or ideas.
Had he been capable of linguistic communication, Katz could have written what he seems here to mean, and he could have avoided making silly mistakes. On the former hand, what he means is that communication is important to any understanding of language. On the latter hand, it should be obvious that not all linguistic communication depends on acoustic transmission. At the end of his discussion, Josh points this out to an unnamed 'sound person', which category I assume excludes Katz.

Katz's syntactic flourishes aside, it is my assertion that communication - and thereby the interface - should be central to a theory of language. I will provide support for this assertion by discussing an incompletely-asked question in phonology.

It is well known that certain combinations of phonological features are comparatively rare among the world's languages. Take, for example, the uneven distribution of certain place and glottal features. Voiceless, labial [p] and voiced, velar [g] are commonly missing from phoneme inventories. As Gussenhoven and Jacobs eventually describe it (in Understanding Phonology, the book I should have used when I taught undergraduate phonology, instead of this one, whose name I will not utter, or whatever the written version of 'utter' is), "... [p] is relatively hard to hear, and [g] is relatively hard to say." This is because of aeroacoustic effects in both cases. In the former case, the closure for [p] is at the lips, leaving essentially no resonating cavity in front of the closure, so the noise that accompanies closure release is very quiet, and thereby indistinct, relative to stops at other places of articulation. In the latter case, the closure for [g] is closer to the vocal folds than most other stops. Voicing requires a pressure drop across the glottis, and stopping airflow above the glottis inhibits this, particularly so when the supraglottal cavity is small, as it is with [g].
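
To make the [g] half of that a bit more concrete, here is a minimal sketch (mine, not Gussenhoven and Jacobs') of why a small supraglottal cavity is bad for voicing during a closure. Every parameter value is a rough assumption, and the cavity is treated as rigid, so the absolute times come out much shorter than in real speech (compliant vocal tract walls buy considerably more time); only the relative difference between the velar and labial cases is the point.

```python
import math

# All values below are ballpark assumptions for illustration only.
P_SUB = 800.0              # subglottal pressure, Pa (~8 cm H2O)
VOICING_THRESHOLD = 200.0  # minimum transglottal pressure drop to sustain voicing, Pa
RHO = 1.2                  # density of air, kg/m^3
P_ATM = 101325.0           # atmospheric pressure, Pa
GLOTTAL_AREA = 5e-6        # mean glottal opening during voicing, m^2
DT = 1e-5                  # integration time step, s

def ms_until_voicing_dies(cavity_volume):
    """Rigid-cavity toy model: air leaks through the glottis into a sealed
    supraglottal cavity, raising oral pressure until the transglottal drop
    is too small to keep the vocal folds vibrating."""
    p_oral = 0.0  # oral pressure above atmospheric, Pa
    t = 0.0
    while P_SUB - p_oral > VOICING_THRESHOLD:
        drop = P_SUB - p_oral
        flow = GLOTTAL_AREA * math.sqrt(2.0 * drop / RHO)        # orifice flow, m^3/s
        p_oral += (P_ATM + p_oral) * flow * DT / cavity_volume   # ideal gas, fixed volume
        t += DT
    return 1000.0 * t

# Rough cavity volumes behind the closure (assumed): a velar closure leaves
# much less room behind it than a labial one.
for label, vol in [("velar [g]", 40e-6), ("labial [b]", 100e-6)]:
    print(f"{label}: voicing sustainable for ~{ms_until_voicing_dies(vol):.1f} ms")
```

The smaller velar cavity 'fills up' sooner, which is the sense in which [g] is relatively hard to say.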

So, at least in some cases, the means by which sounds are produced and perceived directly, if not deterministically, affects the distribution of speech sounds across languages. Speech sounds - combinations of phonological features - are at the foundation of phonological theory. If you know the phonology of, say, Dutch, you know, among other things, which sounds are part of the language and (a subset of) which sounds are not. Which is another way of saying that you know which phonological features are functional in combination with which other features. Which means that you know which rules and constraints operate when. Which had better be part of competence, or the term risks losing all meaning, at least with regard to phonology. If this is part of competence, then at least some of phonetics is linguistics proper.
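
In case it helps to see what 'knowing which feature combinations are functional' amounts to, here is a toy sketch. The feature names and the (Dutch-like, /g/-less) stop inventory are illustrative assumptions on my part, not a serious analysis of Dutch.

```python
# A hypothetical stop inventory represented as bundles of feature values.
inventory = {
    "p": {"place": "labial",  "voice": False},
    "b": {"place": "labial",  "voice": True},
    "t": {"place": "coronal", "voice": False},
    "d": {"place": "coronal", "voice": True},
    "k": {"place": "velar",   "voice": False},
    # no voiced velar stop
}

places = {seg["place"] for seg in inventory.values()}
voices = {seg["voice"] for seg in inventory.values()}

# Knowing the inventory is, among other things, knowing which place/voice
# combinations are in use and which are systematic gaps.
attested = {(seg["place"], seg["voice"]) for seg in inventory.values()}
gaps = {(p, v) for p in places for v in voices} - attested
print("unattested feature combinations:", gaps)   # -> {('velar', True)}
```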

On a more abstract level, the central issue here is whether or not phonological features are independent. As soon as we ask the most obvious question - are they? - we realize that we must specify what we mean by independence. Phonological features certainly seem not to be independent with regard to cross-linguistic distribution or within-language rule and constraint application. In showing this, I presented an example of non-independence in production - [g] - and perception - [p].
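
To make 'independence with regard to cross-linguistic distribution' concrete, here is the kind of check one could run over an inventory database; the counts below are invented placeholders, not real typological data.

```python
# Toy contingency table: number of (hypothetical) inventories containing
# each voiced/voiceless stop at each place. Real counts would come from a
# UPSID-style inventory database; these are made up for illustration.
counts = {
    ("labial", "voiceless"): 60,   # [p]
    ("labial", "voiced"):    90,   # [b]
    ("velar",  "voiceless"): 95,   # [k]
    ("velar",  "voiced"):    55,   # [g]
}

total = sum(counts.values())
place_totals, voice_totals = {}, {}
for (place, voice), n in counts.items():
    place_totals[place] = place_totals.get(place, 0) + n
    voice_totals[voice] = voice_totals.get(voice, 0) + n

# Under independence of place and voicing, each cell should be close to
# (row total * column total) / grand total; large deviations mean the two
# features are not distributed independently across languages.
for (place, voice), observed in counts.items():
    expected = place_totals[place] * voice_totals[voice] / total
    print(f"{place:6s} {voice:9s}  observed={observed:3d}  expected={expected:6.1f}")
```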

With regard to the former, however underlying distinctive features map onto production parameters (not a simple issue), it seems unlikely in the extreme that independence would be the norm. Candidate violations of featural independence come readily to mind: for example, place of articulation affects all sorts of acoustic cues in all sorts of complicated (and likely non-independent) ways, such as VOT, burst amplitude and spectral shape, frication amplitude and spectral shape, frication and VOT duration, to name just a few.

With regard to the latter, the interactions in production are bound to have effects on perception. Whether independent or not, our perceptual systems are awfully good at perceiving relevant distinctions between speech sounds, although it is generally unknown precisely what sorts of (in)dependencies play what sorts of roles in perception. I am currently in the process of addressing exactly these issues. I recently posted about one of the tools I plan to use extensively. I will post in the future about others.

But enough horn autotooting. The point is, again, that, insofar as production and perception systems are understood to affect grammar, they are part of linguistics proper. This means that the models and theories we use to understand production and perception are necessary parts of our models and theories of language in general.

This all brings us nicely, if indirectly, to Josh's VOT post, in which the revised role of production and perception sheds some light on the issue discussed therein, namely shifts in bilingual voice onset times relative to monolingual ones. The basic observation is that, e.g., Spanish-English bilingual children produce more English-like 'Spanish' stops and more Spanish-like 'English' stops, at least in terms of VOT. Furthermore, the effects are apparently both reliably measurable and perceptually irrelevant to the adults around the children.
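
Just to fix ideas about what 'more English-like' and 'more Spanish-like' mean operationally, here is a sketch of the measurement; the VOT values are invented placeholders (nothing here is from Josh's post or any actual study), and only the direction of the shifts matters.

```python
from statistics import mean

# Approximate monolingual reference values (long-lag English /p/ vs.
# short-lag Spanish /p/); the exact numbers are assumptions.
monolingual_english_p = 60.0   # ms
monolingual_spanish_p = 15.0   # ms

# Hypothetical VOT measurements from one bilingual child's productions.
bilingual_english_p = [52.0, 48.0, 55.0, 50.0]   # ms
bilingual_spanish_p = [22.0, 25.0, 19.0, 24.0]   # ms

eng_shift = monolingual_english_p - mean(bilingual_english_p)
spa_shift = mean(bilingual_spanish_p) - monolingual_spanish_p

print(f"English /p/ shifted {eng_shift:.1f} ms toward Spanish (shorter VOT)")
print(f"Spanish /p/ shifted {spa_shift:.1f} ms toward English (longer VOT)")
```

The two categories drift toward each other without merging, which is exactly the pattern Josh is reacting to.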

Josh writes:
There is some level at which the child is storing its two /p/s in a similar place. We know this because they affect each other. If these two categories were truly language-independent, what we would expect to see, I would imagine, is phonemes that pattern exactly as they do for monolingual speakers. Instead, there is (admittedly minimal) overlap.

It will be objected by people (like this guy) who reject a phonemic level of representation (or used to, or do on Tuesdays except during Passover, or something - it's not terribly clear) that this is an artefact of pronouncing the two sounds in similar locations repeatedly. Motor memory stores exemplars of past productions, and these end up interfering with each other. As to the question of how the subject manages to continue to differentiate between the two distinct (realizations of) phonemic categories in the distinct languages, they would presumably say that this comes from associations with the other serial sounds being produced. The similarity in VOT is an articulatory effect - but only one of many, the others being effects that come from repeatedly producing series of sounds in the given language category.

Yes, but that's dodging the question in a sense. The point is that Spanish and English have this sound that is pronounced in similar enough ways that the subject becomes at least a little confused as to which is which. The two categories do exhibit an influence on each other, and they do so because they are similar across the two languages in some important sense.

If, indeed, there are language-universal phonemic categories that are defined with respect to things in addition to articulation, we should expect to see effects here that cannot be predicted by articulation alone. That would indeed be very interesting.
Josh points out the obvious (although he also points out, correctly I think, that it isn't obvious to everyone it should be obvious to): this is (at least indirect) evidence for, as Josh puts it, "a phonemic level of representation", although I think it is better described as evidence for features. Clearly, people cross-classify speech sounds. The Spanish and English [p]s in the bilingual child's mind have much in common. In fact, I am convinced that cross-classification is done along different dimensions, and to different degrees, depending on whether it is done with regard to production or perception. This is the independence issue again, but it's only indirectly relevant here.

Directly relevant is an understanding of the relationship between production (and perception) systems and the grammar. On the one hand, we've got pretty clear evidence of underlying cross-classification. On the other, we've got something that looks a lot like motor and exemplar memory effects. There's nothing inconsistent about maintaining both, and I don't think it's dodging the question to invoke motor or exemplar theories to explain a phenomenon such as this. In fact, I think it's exactly the right way to proceed.

Of course, it may be that this particular phenomenon (i.e., the shifts in bilingual VOTs) won't tell us much about competence, which would put it outside of linguistics proper. That's okay with me, as I've got a very different research agenda, and I hasten to add that it doesn't mean that this phenomenon is uninteresting in general.

Finally, I'm glad to see Josh posit a nice, if vague, phonetic hypothesis. If I weren't convinced of the value of investigating non-articulatory aspects of phonology and phonetics, I wouldn't have spent the last four years taking the classes I did.
