Addendum to yesterday's post (updated)

Yesterday's post was long. Maybe too long. It certainly took longer than I had expected to write.

There was one idea I forgot to include, and, since Josh has promised a response at the end of an excellent post about programming languages and political beliefs, I want to express my forgotten idea, address an important difference between me and Josh, and respond to an issue that Josh brings up in his post.

In yesterday's post, I made a case for the importance of the interface to grammar. The very short version says that production and perception systems affect the input to the rules and constraints part of the phonological grammar. These effects can be seen in, e.g., the gaps in distributions of distinctive features. Knowledge of the grammar of a language includes, among other things, knowledge of the functions of the distinctive features in that language. Therefore, interface systems affect grammar.

All of this was in the interest of establishing that the concern for the interface in phonetics does not put it outside of linguistics proper. In retrospect, this issue isn't very interesting to me, and I don't think it's crucial to our understanding of language to have a clearly defined division between linguistics and not-linguistics. It is inconsistent to be serious about understanding language and simultaneously rule out, in principle, the value of studying performance. The justification for focusing exclusively on competence has always been that performance relies, in part, on competence, so we need a theory of competence first. Performance will come later.

That's fine, but I think it held much more water in the early days of generative theory than it does now. We've got enough of an idea about the fundamental aspects of competence to build good performance models and theories today. Of course, there is still value in focusing on competence. If, as I argued yesterday, performance systems also affect competence, that's fine, too. Our goal, after all, is a thorough understanding of language.

I strongly believe that a big part of the difference in emphasis on competence vs. performance between me and Josh is the fact that I'm a phonetician and he's a syntactician. My arguments yesterday addressed phonology exclusively, and my argument immediately above, that we may as well develop performance models now, is to be understood with respect to my interest in fairly low-level perceptual and decisional processes. It seems reasonable to assume that the effects of the interface on grammar will be more numerous, and easier to detect and investigate, in phonology than in syntax.

But even within syntax, I think interfaces play a crucial role in the grammar. Because my post yesterday mostly concerned speech sounds, the interfaces in question were those between the phonological grammar and the perception and production systems. The interfaces most likely to affect syntactic grammar, by contrast, are those with the semantic, morphological, and phonological grammars, rather than with perception and production systems.

To re-paraphrase my argument from yesterday, interfaces affect grammar because they determine the input to and, therefore, the applicability of the rules and constraints. In discussing Scheme and abstraction (and libertarianism) this morning, Josh said that "[t]here's a lesson there for linguists like Noah who think that the details of the interface have something to say about the underlying engine."

On reading this, it occurred to me that we need to be very clear about what we think we're doing in creating theories of competence. It seems to me that these theories are about the function of grammar. That is, when we build, say, generative theories of linguistic universals, we're building descriptions and explanations of something akin to programmed functions over data arrays. The kind of data array used as input, and the kind needed as output, seem to me to have quite a bit of influence on the internal structure of the function.

If we're talking about discovering the functions and data arrays crucial to the operation of some system, in this case linguistic, we are manifestly not talking about which programming language these functions and data arrays are implemented in. The 'same' function can be implemented in Scheme, C, C++, or Java. How efficiently or easily it is implemented in each of these languages may make you decide to use one or avoid another, without a doubt. But when you're competence-theorizing in linguistics, you're not building a language from the top down, you're observing extant languages and trying, based on these observations, to infer the basic, universal structures and functions of big-L Language.
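To make the analogy concrete (a hypothetical sketch in a single language, since the point holds within one language just as well as across Scheme, C, C++, and Java): two implementations of the 'same' function can have completely different internal structure while agreeing on every input-output pair, so an observer who sees only inputs and outputs cannot tell which one is running.

```python
# Two internally different implementations of the 'same' function:
# both map a list of numbers to its sum, but one is iterative and
# one is recursive. From the outside, observing only inputs and
# outputs, they are indistinguishable.

def sum_iterative(xs):
    total = 0
    for x in xs:
        total += x
    return total

def sum_recursive(xs):
    if not xs:
        return 0
    return xs[0] + sum_recursive(xs[1:])

# Identical behavior on every observed input:
for test_input in ([], [1, 2, 3], [10, -4, 7]):
    assert sum_iterative(test_input) == sum_recursive(test_input)
```

This is the black-box problem in miniature: the input-output mapping pins down the function, but not the machinery behind it.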

The fact that Scheme is really good at abstraction of a given (set of) function(s) is neither here nor there with regard to the role of interfaces and grammars. Once you've settled on a language, the way you write functions will be determined by, among other things, the inputs and outputs to that particular function, proper (or improper, if you're no good at it) programming techniques, and the structure of that language. I don't see any reason to believe that big-L Language is necessarily implemented in one language or another. Observing inputs and outputs to infer the structure of a black-box function is hard, and limited, enough. Given this and the fact that multiple languages can perform the same function on the same input, observing inputs and outputs seems very unlikely to be able to tell us much about anything 'higher up' than the function itself.


I've changed my mind back to caring about the partition between competence and performance, at least indirectly, and, perhaps not surprisingly, I have also reconsidered my stance on the role of abstractness. Insofar as linguistic theories purport to explain the functions (and their inputs and outputs) of language, and insofar as the details of a programming language impact the form of these functions (and their inputs and outputs), it is certainly relevant how these functions are implemented. So, the level of abstractness of linguistic functions and variation in the ability of a given programming language to achieve a given level of abstractness are potentially interesting research questions. But whether or not a linguistic model has anything specific in common with a particular programming language is not so interesting, at least not to me.

Which brings us back to Labov's performance-competences and the difference in my and Josh's interests. I believe, and I think I made the case yesterday, that perception and production systems are relevant to phonological grammar. For what it's worth, plenty of OT and, to a lesser extent, generative phonologists see perception as relevant to phonology, as well. More specifically, I think that the perceptual models I work with are useful tools to study, well, perceptual systems. Insofar as I use these tools to describe the 'competence' that underlies perceptual performance, I'm making a claim to being on the 'proper' side of the partition dividing linguistics proper and linguistics, um, improper. Whether or not anyone else agrees with me is not all that important to me (unless a grant approval depends on it). I am confident that my research program is worth pursuing, whether or not there is consensus regarding its place in linguistics.

Which, in turn, brings us back to Josh's unhealthy obsession with syntax and my perfectly reasonable obsession with phonetics. Both of our perspectives on competence and performance are undoubtedly colored by our respective interests. Josh's view is likely influenced, though certainly not completely determined, by the fact that syntax is 'deeper' than phonology and phonetics. The phenomena that Josh is interested in are considerably more abstract than the phenomena that I'm interested in. Don't get me wrong, I make use of plenty of abstraction - multivariate perceptual distributions, stochastic information processing channels, and decision rules defined on them aren't exactly concrete. But I hope everyone agrees that these abstract constructs reside much closer to sub-linguistic performance systems than do models purporting to describe syntactic knowledge.
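To give a flavor of the kind of abstraction I mean (a toy sketch with hypothetical numbers, collapsing a multivariate perceptual space down to a single acoustic dimension): a decision rule defined over category-specific perceptual distributions assigns a stimulus to whichever category makes it most likely.

```python
import math

# Toy perceptual decision rule (all numbers hypothetical): two
# speech-sound categories are modeled as Gaussian distributions over
# one acoustic dimension (say, voice onset time in ms), and a
# stimulus is assigned to the category with the higher likelihood.

def gaussian_pdf(x, mean, sd):
    return math.exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

CATEGORIES = {"b": (10.0, 15.0), "p": (60.0, 15.0)}  # (mean, sd) per category

def classify(stimulus):
    # Maximum-likelihood decision rule over the category distributions.
    return max(CATEGORIES, key=lambda c: gaussian_pdf(stimulus, *CATEGORIES[c]))
```

A stimulus at 20 ms falls under the 'b' distribution, one at 55 ms under 'p'. The abstraction here (distributions plus a rule over them) sits very close to the perceptual performance system itself, which is the contrast with syntactic models I'm drawing above.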

I believe that is all.
