Belated Birthday Notice

Yesterday was the 125th birthday of Ludwig von Mises. Josh has a post about it, which includes a link to an excellent post by George Reisman about the importance of Mises' work. I wanted to add that the Mises Institute also has a nice biographical piece in honor of his birthday.


The Media Sucks. No. The Media Suck.

Listening to NPR this evening, I heard a good example of one of the most irritating and, frankly, damaging behaviors of the media - parroting assertions made by politicians with no accompanying evidence for, or against, the assertion.

The story that got me thinking was about the detainee interrogation bill that recently passed both houses of Congress. The parroting that got me irritated was the following quote:
"Our most important responsibility is to protect the American people from further attack," the president said. "And we cannot be able to tell the American people we're doing our full job unless we have the tools necessary to do so."
Perhaps it's the political reading I've been doing lately that makes me feel this way, but I think that this quote is utterly, and obviously, ridiculous. First, there is no single most important responsibility of any branch of government, unless you're dealing in extremely (and appropriately) vague obligations like 'upholding the constitution'. Second, even if there were a single most important responsibility of, say, the executive branch, it would not be at all straightforward to decide what it is. Third, even if the appropriate calculations have, somehow and in some trustworthy way, been done, no one in the Bush administration, the House, Senate, court system, or any state government has provided an ounce of evidence or argument that protecting the American people from attack is, in fact, the single most important responsibility. As stated in the Cato dispatch:
In "Assaults on Liberty," Robert A. Levy, senior fellow in constitutional studies at the Cato Institute, argues: "In the post-9/11 environment, no rational person believes that civil liberties are inviolable. After all, government's primary obligation is to secure the lives of American citizens. But when government begins to chip away at our liberties, we must insist that it jump through a couple of hoops. First, government must offer compelling evidence that its new and intrusive programs will make us safer. Second, government must convince us that there is no less invasive means of attaining the same ends. In too many instances, those dual burdens have not been met."
At first glance, it appears that even the Cato fellows (this one, anyway) are buying the assertion that so bothers me, but if you read carefully, it's clear that Levy's assertion is much broader than Bush's. Saying that "government's primary obligation is to secure the lives of American citizens" is vague, likely intentionally so. The case can easily be made that "securing the lives of American citizens" is not coextensive with waging a war on terror. For example, it also involves providing and maintaining a legal system - courts, police, and the like - to protect private property rights. It seems to me that this kind of security is every bit as important as, if not more important than, fighting a 'war' against a tactic, engaging, at extremely high cost, an enemy that is nowhere near as powerful as those prosecuting the 'war' would have us believe.

Bush's assertions - and the willingness of pretty much every media outlet to repeat them without critical commentary - are all the more galling given that our invasion and occupation of Iraq is making the threat of terrorism worse. Worse still, the Bush administration is not only not willing to put the security of basic constitutional rights on par with their favored narrow construal of security as pertaining only to the threat of terrorism, they are willing, even eager, to cause injury to these basic rights. From Unclaimed Territory:
...as Law Professors Marty Lederman and Bruce Ackerman each point out, many of the extraordinary powers vested in the President by this bill also apply to U.S. citizens, on U.S. soil.

As Ackerman put it: "The compromise legislation... authorizes the president to seize American citizens as enemy combatants, even if they have never left the United States. And once thrown into military prison, they cannot expect a trial by their peers or any other of the normal protections of the Bill of Rights." Similarly, Lederman explains: "this [subsection (ii) of the definition of 'unlawful enemy combatant'] means that if the Pentagon says you're an unlawful enemy combatant -- using whatever criteria they wish -- then as far as Congress, and U.S. law, is concerned, you are one, whether or not you have had any connection to 'hostilities' at all."

This last point means that even if there were a habeas corpus right inserted back into the legislation (which is unlikely at this point anyway), it wouldn't matter much, if at all, because the law would authorize your detention simply based on the DoD's decree that you are an enemy combatant, regardless of whether it was accurate. This is basically the legalization of the Jose Padilla treatment -- empowering the President to throw people into black holes with little or no recourse, based solely on his say-so.
The silver lining? The related warrantless eavesdropping bill likely will not be passed before recess. Let's hope we can get some good old-fashioned gridlock in place this November to keep this travesty from becoming law. And let's hope that, somehow, court challenges to the detainee bill start repairing the damage soon.

I would like to think that, agree with the point of view or not, if we had more of this kind of behavior in the media, we'd have less of the kind of behavior described above in the government. That's probably wishful thinking, but it bothers me greatly that the media, whose freedoms are ensured precisely so that they can be adversarial with respect to the government, are typically all too willing to abstain from critical thought.


The importance of property rights [updated]

As I wrote in one of my first posts, I plan on using this blog in part to document my "slide into the netherworld of classical liberalism," or libertarianism. Because it is difficult, if not impossible, to overstate the importance of private property rights in libertarian philosophy, if I am to execute this slide effectively, I will have to read up on the subject. I am currently reading Timothy Sandefur's Cornerstone of Liberty: Property Rights in 21st-Century America, which seems to be as good a place as any to begin.

As the title of this post suggests, though, I do have a nit to pick with an early portion of the book. The first proper chapter - 2, 'Why Property Rights Are Important' - is intended to lay the groundwork for the rest of the book. Unfortunately, Sandefur leads off with a pretty weak argument: the first subsection of the chapter is called 'Property Is Natural'. The gist of this section is that non-human animals and humans 'naturally' seek out private property, property is universal in human society, and depriving people of property has all sorts of negative effects. So, the nit I wish to pick is this: only the last of these has any hope of justifying (the importance of) property rights.

It is ironic that Sandefur attempts, initially, to justify property rights by way of a simple appeal to 'nature', as this is a fine example of the naturalistic fallacy. Even if we accept that private property is naturally sought out and universal among human societies, and I see no reason to believe otherwise, it does not follow that private property should be sought out or universal. It may turn out to be the case that private property should be sought out and that it should be universal (and I believe that this is, in fact, the case), but this conclusion must be arrived at via some other logical path.

I am optimistic that the book will be worth reading, though, for a couple of reasons. First, the next two subsections in chapter two have titles indicative of promising alternate logical paths: 'Property Is Good For Individuals' and 'Property Is Good For Society'. Second, despite my objections to the naturalistic fallacy, the 'Property is natural' subsection has some value. As stated above, this section discusses the negative effects of depriving people of their property. Insofar as these are well documented effects, their avoidance can serve as a justification for private property.

Sandefur quotes Dan Dennett, one of the more interesting philosophers of cognitive science, in a discussion of how humans use artefacts to establish their 'selves' as distinct from the world around them. The Dennett quote concerns the difficulties commonly encountered by elderly folks removed from familiar home environments to nursing 'homes'. Part of living in your own home is creating a familiar and useful environment. When removed from this, the elderly (and some young folks, to be sure) can have severe difficulty with basic daily activities. Our home environments come to mesh very closely with cognitive systems governing memory and perception.

On reading this, I was reminded of discussions in Philosophical Foundations of Cognitive Science (one of the two most blog-post inducing classes [with Friedman's class] that Josh is taking now) about the blurriness of the division between our 'selves' and our environments. Here's an example: when doing long division or multiplication by hand, most people use pencil (or pen) and paper to keep track of 'big picture' information while they perform simple calculations on subsets of the numbers (the digit in the 'ones' place, in the 'tens' place, in the 'hundreds' place, etc...). In a very real sense, then, that person's cognitive system straddles the skin, the most obvious and intuitive boundary between a person and his environment, to encompass the mind and part of the environment.

Although this particular situation only applies to people who have a (perceived) need to carry out long division and multiplication (and can't do it in their heads), the point is valid more generally, and it ties in with some of Sandefur's arguments about the personal value of 'home'. In addition to the cognitive value of 'home', Sandefur discusses its 'sentimental' value (and argues that all value is 'sentimental' insofar as it is subjective).

If we accept that our 'selves' - specifically our cognitive systems - extend into our environments, then the fact that there are negative effects of depriving someone of private property is clear. I can't imagine a justification for depriving an autonomous agent of his memory or perceptual faculties.

Update: Josh makes a good point that, if I'm remembering correctly, Sandefur does not (at least not explicitly), which is this: the onus is on those who would intervene in nature to provide evidence that such intervention is better than leaving it alone. This is, I think in retrospect, the point of Sandefur's discussion of the elderly in nursing homes, the problems faced by adults who were raised in property-free kibbutzim, and Soviet policy. The 'Property Is Natural' subsection would be improved a good bit if this line of reasoning were made explicit, as Josh has done.

This point of view brings up some interesting ethical questions that I will mention but not delve into at the moment. Any claim of 'better than' carries with it an implicit measure of 'good', about which reasonable people can potentially disagree. The end result is that property rights are put on a firmer foundation than the naturalistic fallacy can provide, although it is, perhaps, not as firm as we might want it to be.


One last time with celebrinerd

When I wrote about using links (lots of links) to get celebrinerd into popular usage, I wasn't thinking very carefully about where those links led. I had forgotten about google bombing, described with specific reference to ken-jennings.com here.

So: celebrinerd, celebrinerd, celebrinerd.


Promises, promises... (pt. 2)

I promised a while back to post something new on this blog every day. Technically, I have fulfilled this promise.

With this post, I am solemnly backing out of this promise. I will still write (for the blog) every day, but I might not post what I write every day.

As stated in my first post, I was inspired to blog primarily to communicate about the research that I read about and conduct. Thus far, I have found that posts about my research (and related topics) take a long time to write, if they're to be written well. Since I would like my posts to be written well, I have decided to relax my posting frequency requirements.

Note that, prior to making a promise to post every day, I wrote "[n]o promises regarding frequency (or quality) of posts" in my first post. Today, this gives me a nice 'out' with regard to posting frequency. Perhaps in the future it will give me a nice 'out' with regard to posting quality, but let's hope not.


Programming, perception, and a priori postulates

I'm in the middle of working on my third, and final, qualifying 'exam'. I took a sit-down exam (no scare quotes) a little over a year ago, and I designed and carried out a study of speaker focus and fricative production during the Spring and Summer of this year. The final 'exam' will be much like the second one - I will design and conduct a research project from the ground up.

This afternoon, I was writing a Matlab script to do some corpus analysis. The general plan with this study is to investigate a couple of different kinds of frequency effects in speech perception. In terms of word recognition, there are a number of well documented frequency effects - on average, frequent words are more accurately recognized in noise identification tasks and responded to more quickly in lexical decision tasks than are infrequent words. Lower level (i.e., sub-word) frequency effects are, as far as I know, less well documented.

With regard to the qual, I am primarily interested in what I have been calling (phonological) contrast frequency. Phonologists use the term minimal pair for two words with distinct meanings whose forms are identical aside from a single feature difference at a single location. For example, the words 'sue' and 'zoo' - [su] and [zu] - mean two very different things, and the only difference in form is that, at the beginning of the word, the former has a voiceless fricative whereas the latter has a voiced fricative.

In its simplest form, the contrast frequency for a given pair of speech sounds is the number of minimal pairs involving these sounds. You can very likely come up with other minimal pairs involving 's' and 'z', but it would be very hard for you to come up with many minimal pairs for, say, the sounds at the beginning of 'this' and 'think'.
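In its simplest form, the tally might look something like this. This is a Python sketch rather than the Matlab I'm actually writing, and the toy lexicon and function names are made up for illustration (the real counts come from the HML, and the real comparison is at the feature level rather than the whole-segment level used here):

```python
# Minimal sketch of counting minimal pairs for a pair of sounds.
# Words are tuples of phoneme symbols; the tiny lexicon below is
# illustrative, not the Hoosier Mental Lexicon.

def is_minimal_pair(w1, w2, s1, s2):
    """True if w1 and w2 differ in exactly one segment, and that
    difference is s1 vs. s2 (a stand-in for a single-feature difference)."""
    if len(w1) != len(w2):
        return False
    diffs = [(a, b) for a, b in zip(w1, w2) if a != b]
    return diffs == [(s1, s2)] or diffs == [(s2, s1)]

def contrast_frequency(lexicon, s1, s2):
    """Count minimal pairs in the lexicon involving sounds s1 and s2."""
    words = list(lexicon)
    return sum(
        1
        for i, w1 in enumerate(words)
        for w2 in words[i + 1:]
        if is_minimal_pair(w1, w2, s1, s2)
    )

# Toy phonemic transcriptions:
lexicon = [("s", "u"), ("z", "u"), ("s", "I", "N"), ("z", "I", "N"),
           ("DH", "I", "s"), ("TH", "I", "N", "k")]

print(contrast_frequency(lexicon, "s", "z"))    # 'sue'/'zoo', 'sing'/'zing' -> 2
print(contrast_frequency(lexicon, "DH", "TH"))  # 'this'/'think' aren't minimal -> 0
```

Even in this toy form, the 'this'/'think' case comes out as zero, which is exactly the kind of near-miss that the simple count throws away.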

My third qual will address at least one possible psychophysical effect of differences in contrast frequency. Of course, I first have to establish that there are suitable differences in contrast frequency for me to employ in a perception experiment. I was working on this today, using the Hoosier Mental Lexicon, a 20,000 word dictionary that includes machine readable phonemic transcriptions and word usage frequencies, among other information. It has a good track record, having been put to good use in, for example, word recognition work documenting the effects of lexical neighborhoods (I'll likely post about this at a later date).

I want to use the HML to tally some contrast frequencies so that I can use the best possible pairs of sounds (i.e., those that will maximize the effect I am looking for) to carry out a psychophysical experiment. It turns out to be less than entirely straightforward to tally contrast frequency, mostly because you have to make a number of potentially unwarranted assumptions about the organization of speech sounds (and words) in the mental lexicon.

In general, the idea of contrast frequency seems straightforward - simply count the number of minimal pairs for a given sound. Getting a machine to count the number of minimal pairs is reasonably easy. But what about pairs of words that are nearly minimal pairs (e.g., 'this' and 'think')? It seems to me that, if I'm interested in the relationship between 's' and 'z', I should take into account the relationship between every pair of words with one member containing an 's' and the other a 'z' - 'sue' vs. 'zoo', 'sing' vs. 'zing', sure, but 'ask' vs. 'as' and 'safe' vs. 'zap', and all the rest, too. But if I'm going to take all the occurrences of these sounds into account, I have to devise a measure of how similar these two words are (i.e., how important the differences are), and how the location of the 's' and the 'z' in their respective words affects this.

So far, I've written code that will find all the occurrences of any given pair of sounds. It then takes each occurrence of one of them and, for each occurrence of the other, compares their environments - the sounds that come before and after the pair of interest. I've been thinking of various ways to weight the value of a difference in environment according to how far from the pair of interest the difference occurs, as it seems reasonable to assume that the immediate environment plays a more important role in contrast frequency. If two sounds in a non-minimal pair are in completely different environments, they will hardly seem contrastive at all. If these sounds are in a minimal pair, they are the very definition of contrastive. In between these two extremes, I assume there is some in-between level of 'contrastiveness', so it seems like a good idea to take these cases into account along with the true minimal pairs.
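A rough sketch of that environment comparison, again in Python rather than my actual Matlab. The reciprocal weighting function and all the names here are illustrative placeholders, not settled decisions:

```python
# Align two words at the occurrences of the sounds of interest and tally
# environment mismatches, weighting each by its distance from the target.
# A phoneme compared against a word edge (None) counts as a mismatch.

def environment_difference(w1, i1, w2, i2, weight=lambda d: 1.0 / d):
    """Compare the environments of w1[i1] and w2[i2], summing weighted
    mismatches at each distance d to the left and right."""
    total = 0.0
    d = 1
    while True:
        left1 = w1[i1 - d] if i1 - d >= 0 else None
        left2 = w2[i2 - d] if i2 - d >= 0 else None
        right1 = w1[i1 + d] if i1 + d < len(w1) else None
        right2 = w2[i2 + d] if i2 + d < len(w2) else None
        if left1 is left2 is right1 is right2 is None:
            break  # ran off both word edges on both sides
        if left1 != left2:
            total += weight(d)
        if right1 != right2:
            total += weight(d)
        d += 1
    return total

# 'sue' [su] vs. 'zoo' [zu]: identical environments, so difference 0.
print(environment_difference(("s", "u"), 0, ("z", "u"), 0))        # -> 0.0
# 'safe' vs. 'zap' (toy transcriptions): mismatches at distances 1 and 2.
print(environment_difference(("s", "e", "f"), 0, ("z", "ae", "p"), 0))  # -> 1.5
```

A true minimal pair comes out as zero difference (maximally contrastive environments), and the score grows as the environments diverge, which gives the in-between cases the in-between status argued for above.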

I've also thought about how nice it would be if the transcriptions in the HML included syllable affiliation information for each consonant. It seems reasonable to assume that two sounds in a non-minimal pair would be 'more contrastive' in some sense if they were both in the 'same' syllable position in their respective words. Unless I code this into the HML myself, though, it isn't going to play a role in this project.

By writing code to get a computer to carry these functions out, I have forced myself to make explicit a number of assumptions about how speech sounds are organized in the mind. These assumptions inform a number of potentially important decisions I have to make. To name three, I have to decide how to weight segmental distances when tallying environment differences (should I weight with an exponential decrease or the reciprocal of the number of segments?), how to deal with word edges (if, after aligning occurrences of two sounds, the word edges do not line up, how many difference-tallies do the misaligned edges count for?), and how (or whether) to factor in usage frequencies and morphosyntactic properties (do I incorporate raw usage frequencies, the logarithm of raw usage frequencies, and/or the relative proportion of content vs. function words when tallying a pair's contrast frequency?).
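For illustration, here's how the first and third of those choices compare in a quick Python sketch. The decay rate and the frequency values are made-up numbers, and both weighting functions are just candidate forms, not decisions:

```python
import math

# Two candidate distance-weighting schemes for environment differences:
def exponential_weight(d, rate=0.5):
    return math.exp(-rate * (d - 1))   # distance 1 gets full weight

def reciprocal_weight(d):
    return 1.0 / d

# Exponential decay falls off faster at first, reciprocal has a longer tail:
for d in (1, 2, 3, 4):
    print(d, round(exponential_weight(d), 3), round(reciprocal_weight(d), 3))

# Raw vs. log usage frequency: the log compresses the enormous range of
# word frequencies, so a handful of very frequent words can't swamp a tally.
for freq in (1, 10, 1000, 100000):
    print(freq, round(math.log10(freq), 2))
```

Seeing the two curves side by side makes it clear that the choice isn't innocuous: with reciprocal weighting, a mismatch four segments away still contributes a quarter of a full mismatch, while the exponential scheme has largely forgotten it.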

The next step is to fix a silly indexing mistake I made (I had to leave promptly at 4:30 to go eat carnitas, and so could not finish the code today), see what the numbers look like for some potentially interesting pairs of sounds, then check the literature on 'functional load', a notion that is likely closely related to my 'contrast frequency'.


Tortured legal reasoning

The Cato blog has an excellent (and short) post on Bush's apparently imperiled pet legislation regarding torture.

It's frustrating to me that the vast majority of media outlets fail to discuss issues like this with the clarity and simplicity of this Cato post. The basic issue is a straightforward bit of legal philosophy.

Most people would agree that, generally speaking, torture is immoral. However, we can all imagine extreme circumstances in which we might be willing to sanction torture, cases in which the alternative is much worse. We've been hearing a lot about the 'ticking time bomb' scenario lately precisely because this is the kind of extreme circumstance that would cause most of us to reconsider an otherwise reasonable aversion to inhumane treatment of a prisoner who may well be innocent.

So, is it better to have a law that prohibits or authorizes the immoral act? The severity of prohibition would be alleviated greatly by the fact that, in the truly extreme case, it is likely that punishment would be minimal, while authorization for the sake of the rare extreme case opens a Pandora's toolbox for the everyday interrogator.

It all revolves around due process. In the case of prohibition, due process (e.g., the protections granted by the 6th amendment) would help to ensure that extreme circumstances can be presented and explained in an attempt to justify a possible violation of a different bit of due process (e.g., the 8th amendment). In the latter case, this violation of due process would be codified.

Once more with the celebrinerd...

Jennings has posted on celebrinerd fever again, and his post contains links to no less than three other blogs that are fighting the good fight.

I have also announced our efforts around these parts on his message boards.

Links! Lots of links will bring 'celebrinerd' to the masses!

My post about 'celebrinerd' has been linked in both editor-Amy-from-Ohio's post about 'celebrinerd' and Josh's post about 'celebrinerd'. I'm in the (very small and obscure corner of the) big time now!

Editor-Amy-from-Ohio also mentions that Cathy suggests creating wikipedia and urban dictionary entries for celebrinerd. This is a fine idea, as it gives celebrinerd a larger number of distinct URLs, and it would give us all a new place to which to link the word 'celebrinerd'.

It turns out that someone(s) went ahead and took Cathy's suggestion: celebrinerd! celebrinerd!

To make Mr. Jennings' task a bit easier, that's eight occurrences of 'celebrinerd' in this post alone (nine now), five of which link to distinct URLs.


Doing my part

Ken Jennings (the guy who won 74 days in a row on Jeopardy a while back) made some good fun yesterday of Time magazine's habit of cooking up stupid neologisms (they claim to be responsible for 'guesstimate', among other abominations, which makes me hate them with the fire of a thousand suns). Mr. Jennings is the subject of the newest Time neologism - celebrinerd. Today, he laments that it has yet to catch on.

Well, I'm here to do my part. He gave me, among others, his beautiful 'Dear Jeopardy' letter (also read the 'correction' at the bottom of this post). I've got the time and energy to give back, by gum.

Mr. Jennings says that at 10:30 AM, there were no google hits for celebrinerd. I tried at 9:30 PM and got two. Google displays these as "Results 1 - 2 of about 3 for celebrinerd" (emphasis mine). The first is Jennings' post from yesterday, the second a Time-internal search result, and the approximate third result has a URL distinct from the second, but takes you to the same place.

So far, I've used celebrinerd four times, including this sentence. That could very well make me a non-redundant third google hit by tomorrow. And it could verily double my readership (Ken, meet Josh. Josh, Ken.)

I'd hate for celebrinerd (5!) to go the way of 'radiorator.'

I can't stop me that easily!

Josh thinks he's caught me in a failure to live up to my sober guarantee to blog every day. Well, thanks to a technicality, he's wrong!

For those of you who do not blog (it's fun to write as if I have readers other than Josh) on blogspot, when you save a post as a draft, it saves the date and time at which you began writing the post. Whenever you finally publish it, it bears the original time stamp.

Armed with this knowledge, you can see that I started the carnitas post around 7 PM last night, and that Josh was, for some reason, up at 3:30 AM when he started his post about my imminent slide into non-blogging oblivion. So what if the carnitas post wasn't actually visible to the public (i.e., Josh) until this morning?

My alibi is, of course, airtight. First of all, all of the above is true. Join blogspot and find out for yourself if you don't believe me. Second of all (all of two), it is simply inconceivable that someone could change the nominal date and time of a blog post to make it appear as if he had blogged on a day he hadn't. That kind of technical know-how just doesn't exist.

Anyway, for what it's worth, I announced my earnest decree to provide myself with some measure of motivation. So far, so good, even if I did actually fail to post yesterday. Here I am with a follow-up, right? No blood, no foul.

I owe Josh a heartfelt 'thank you' for caring enough to keep me on my toes here at Source-Filter. I hadn't noticed today's scheduled outage. If not for his nocturnal emission (so to speak), I wouldn't have been so prompt to post today.

Now that I look at the details of the outage, though, I see that his warnings are unduly dire. The outage is scheduled to last from 4 PM to 4:15 PM.


Prefiero carnitas

When I lived in Prescott, Arizona, there was a Mexican restaurant called Maya two blocks from my house. Under the name, the sign said 'Fish Tacos'. At first, I considered this undesirable.

I later came to my senses. The fish tacos and shrimp enchiladas at Maya were superb. As good as they were, though, the carnitas beat them hands down. The carnitas at Maya were the perfect combination of crispy edges and tender, juicy middles. The sides of beans and rice were excellent, the salsa was exceptional, and the size of the servings was eminently reasonable.

You might not think that you could find good carnitas in Bloomington, Indiana, but you would be wrong to not think that. By which I mean that you would be right to think that you can get good carnitas here. At Casa Brava, in fact, you can get an unreasonably large portion of very tasty carnitas for about $10 (109 Mexican pesos). The beans and rice are fine at Casa Brava, but the salsa leaves a bit to be desired. With respect to the Maya carnitas, Casa Brava's are too heavy on the tender, juicy middles, too light on the crispy edges.

It turns out that you can also get a pretty good carnita taco at Tacos Don Chuy. Although the tacos at Tacos Don Chuy remind me of the food I ate in Mexico more than those at just about any other Mexican restaurant, the carnitas there are strictly taco fodder. They are juicy and tender, to be sure, but the portions are too small to function as a main course, whereas the carnitas at Maya and Casa Brava arrive as the proud centerpiece of a (careful! hot!) dinner plate. Tacos Don Chuy has the advantage of a Taco Tuesday special, though - $0.99 (10.78 MXN) per taco.

Until recently, I was satisfied with my midwestern carnita fix. Some time ago, a Chipotle Mexican Grill opened up about a block from Tacos Don Chuy. Until recently, I thought, "Who needs a corporate chain Chipotle with Casa Brava and Tacos Don Chuy around?" (There's a Qdoba Mexican Grill and a Moe's Southwestern Grill in town, too, but the former is mediocre aside from the cheese dip and both have stupid names, so I will speak of them no more.)

Well, it turns out that Chipotle has yet another worthy variant of this exalted pork product (although their value is diminished somewhat by an annoyingly snarky sign concerning the pork's former free range lifestyle). The Chipotle carnitas aren't carnitas in the typical sense - they aren't in any recognizable chunk form, but, rather, seem to be shredded post-roast. They lack crispy edges of note, but they taste fantastic.

I only ate at Chipotle after receiving a coupon for a free burrito. Two days after eating the free meal, I went back and paid full price for a slightly different version of it. Yesterday, I was back at Tacos Don Chuy for a carnita burrito. My dad's visiting tomorrow, so a visit to Casa Brava - and an order of carnitas - is likely to be in my near future, too.

The moral of the story? Even if you think you have enough carnitas in your life, you probably don't. Venture forth and find more.


Addendum to yesterday's post (updated)

Yesterday's post was long. Maybe too long. It certainly took longer than I had expected to write.

There was one idea I forgot to include, and, since Josh has promised a response at the end of an excellent post about programming languages and political beliefs, I want to express my forgotten idea, address an important difference between me and Josh, and respond to an issue that Josh brings up in his post.

In yesterday's post, I made a case for the importance of the interface to grammar. The very short version says that production and perception systems affect the input to the rules and constraints part of the phonological grammar. These effects can be seen in, e.g., the gaps in distributions of distinctive features. Knowledge of the grammar of a language includes, among other things, knowledge of the functions of the distinctive features in that language. Therefore, interface systems affect grammar.

All of this was in the interest of establishing that the concern for the interface in phonetics does not put it outside of linguistics proper. In retrospect, this issue isn't very interesting to me, and I don't think it's crucial to our understanding of language to have a clearly defined division between linguistics and not-linguistics. It is inconsistent to be serious about understanding language and simultaneously rule out, in principle, the value of studying performance. The justification for focusing exclusively on competence has always been that performance relies, in part, on competence, so we need a theory of competence first. Performance will come later.

That's fine, but I think it held much more water in the early days of generative theory than it does now. We've got enough of an idea about the fundamental aspects of competence to build good performance models and theories today. Of course, there is still value in focusing on competence. If, as I argued yesterday, performance systems also affect competence, that's fine, too. Our goal, after all, is a thorough understanding of language.

I strongly believe that a big part of the difference in emphasis on competence vs. performance between me and Josh is the fact that I'm a phonetician and he's a syntactician. My arguments yesterday addressed phonology exclusively, and my argument immediately above - that we may as well develop performance models now - is to be understood with respect to my interest in fairly low-level perceptual and decisional processes. It seems reasonable to assume that the effects of interface on grammar will be more numerous and easier to detect and investigate in phonology than in syntax.

But even within syntax, I think interfaces play a crucial role in the grammar. As my post yesterday mostly concerned speech sounds, the interfaces in question were those between phonological grammar and perception and production systems, while the interfaces that are likely to affect syntactic grammar are more likely those with semantic, morphological, and phonological grammars than with perception and production systems.

To re-paraphrase my argument from yesterday, interfaces affect grammar because they determine the input to and, therefore, the applicability of the rules and constraints. In discussing Scheme and abstraction (and libertarianism) this morning, Josh said that "[t]here's a lesson there for linguists like Noah who think that the details of the interface have something to say about the underlying engine."

On reading this, it occurred to me that we need to be very clear about what we think we're doing in creating theories of competence. It seems to me that these theories are about the function of grammar. That is, when we build, say, generative theories of linguistic universals, we're building descriptions and explanations of something akin to programmed functions over data arrays. The kind of data array used as input, and the kind needed as output, seem to me to have quite a bit of influence on the internal structure of the function.

If we're talking about discovering the functions and data arrays crucial to the operation of some system, in this case linguistic, we are manifestly not talking about which programming language these functions and data arrays are implemented in. The 'same' function can be implemented in Scheme, C, C++, or Java. How efficiently or easily it is implemented in each of these languages may make you decide to use one or avoid another, without a doubt. But when you're competence-theorizing in linguistics, you're not building a language from the top down, you're observing extant languages and trying, based on these observations, to infer the basic, universal structures and functions of big-L Language.

The fact that Scheme is really good at abstraction of a given (set of) function(s) is neither here nor there with regard to the role of interfaces and grammars. Once you've settled on a language, the way you write functions will be determined by, among other things, the inputs and outputs to that particular function, proper (or improper, if you're no good at it) programming techniques, and the structure of that language. I don't see any reason to believe that big-L Language is necessarily implemented in one language or another. Observing inputs and outputs to infer the structure of a black-box function is hard, and limited, enough. Given this and the fact that multiple languages can perform the same function on the same input, observing inputs and outputs seems very unlikely to be able to tell us much about anything 'higher up' than the function itself.
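To make the black-box point concrete, here's a quick sketch (hypothetical, and in Python rather than Scheme, but the point is language-independent): two implementations with different internal structure that are indistinguishable from their input-output behavior alone.

```python
# Two different "implementations" of the same function. An observer who
# sees only inputs and outputs cannot tell which one is behind the curtain.

def double_iterative(xs):
    """Build the result with an explicit loop."""
    out = []
    for x in xs:
        out.append(2 * x)
    return out

def double_comprehension(xs):
    """Same function, different internal structure."""
    return [2 * x for x in xs]

# For every observable input, the outputs are identical, so inferring the
# function from I/O behavior alone tells you nothing about the internals
# (let alone which language they're written in).
for xs in [[], [1, 2, 3], [-4, 0, 7]]:
    assert double_iterative(xs) == double_comprehension(xs)
```

If even this trivial function underdetermines its own implementation, inferring anything 'higher up' than the function itself from observed inputs and outputs seems hopeless indeed.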


I've changed my mind back to caring about the partition between competence and performance, at least indirectly, and, perhaps not surprisingly, I have also reconsidered my stance on the role of abstractness. Insofar as linguistic theories purport to explain the functions (and their inputs and outputs) of language, and insofar as the details of a programming language impact the form of these functions (and their inputs and outputs), it is certainly relevant how these functions are implemented. So, the level of abstractness of linguistic functions and variation in the ability of a given programming language to achieve a given level of abstractness are potentially interesting research questions. But whether or not a linguistic model has anything specific in common with a particular programming language is not so interesting, at least not to me.

Which brings us back to Labov's performance-competences and the difference in my and Josh's interests. I believe, and I think I made the case yesterday, that perception and production systems are relevant to phonological grammar. For what it's worth, plenty of OT and, to a lesser extent, generative phonologists see perception as relevant to phonology, as well. More specifically, I think that the perceptual models I work with are useful tools to study, well, perceptual systems. Insofar as I use these tools to describe the 'competence' that underlies perceptual performance, I'm making a claim to being on the 'proper' side of the partition dividing linguistics proper and linguistics, um, improper. Whether or not anyone else agrees with me is not all that important to me (unless a grant approval depends on it). I am confident that my research program is worth pursuing, whether or not there is consensus regarding its place in linguistics.

Which, in turn, brings us back to Josh's unhealthy obsession with syntax and my perfectly reasonable obsession with phonetics. Both of our perspectives on competence and performance are undoubtedly colored by our respective interests. Josh's view is likely, but certainly not completely, influenced by the fact that syntax is 'deeper' than phonology and phonetics. The phenomena that Josh is interested in are considerably more abstract than the phenomena that I'm interested in. Don't get me wrong, I make use of plenty of abstraction - multivariate perceptual distributions, stochastic information processing channels, and decision rules defined on them aren't exactly concrete. But I hope everyone agrees that these abstract constructs reside much closer to sub-linguistic performance systems than do models purporting to describe syntactic knowledge.

I believe that is all.


Slow and steady wins the race, right?

So, Josh established once and for all what is and what isn't linguistics a few days ago. I've been meaning to discuss some of what he says in that post, but because I am typically slow to response-blog, he now has another language-related post I want to respond to, as well.

(Of all the linguistic issues Josh would end up blogging about, VOT is perhaps the one I expected least. In any case, my responses to the two posts are related, so it turns out to be a good thing I 'waited'.)

The earlier post is concerned primarily with whether or not sociolinguistics is linguistics proper, but it also touches on the role of phonetics in linguistics. Josh makes a compelling case that a fair bit of sociolinguistics is sociology, only tangentially related to linguistics (he estimates it's an 80/20 split, but I'm not willing to commit to any such numbers). He also insists "that phonetics is more concerned with the interface than the subject proper".

For Josh, among many other linguists, capital-L Language is competence, not performance. That is, linguistics proper is concerned with the knowledge underlying language, as opposed to what is actually said on any given occasion. Even if you concede this point, you have to be careful when deciding what counts as competence and what doesn't. Perhaps the most famous sociolinguist - William Labov - argues in an unpublished manuscript on the foundations of linguistics that the competence/performance distinction is incoherent (emphasis added):
The terms 'idealism' and 'materialism' can be seen to be most appropriate in relation to the definitions of data involved. The idealist position is that the data of linguistics consists of speakers' opinions about how they should speak: judgments of grammaticality or acceptability that they make about sentences that are presented to them....

The materialist approach to the description of language is based on the objective methods of observation and experiment. Subjective judgments are considered a useful and even indispensable guide to forming hypotheses about language structure, but they cannot be taken as evidence to resolve conflicting views. The idealist response is that these objective observations of speech production are a form of 'data flux' which are not directly related to the grammar of the language at all....

.... The idealist position has more recently been reinforced by a distinction between 'performance' and 'competence'. What is actually said and communicated between people is said to be the product of 'language performance', which is governed by many other factors besides the linguistic faculty, and is profoundly distorted by speaker errors of various kinds. The goal of linguistics is to get at the underlying 'competence' of the speaker, and the study of performance is said to lie outside of linguistics proper. The materialist view is that 'competence' can only be understood through the study of 'performance', and that this dichotomy involves an infinite regress: if there are separate rules of performance to be analyzed, then they must also comprise a 'competence', and then new rules of 'performance' to use them, and so on.
While I don't agree that this constitutes an infinite regress (it seems clear to me that a lower bound on linguistically relevant and controllable production and perception variables is establishable in principle), the general point is important, and one could easily make the case that the partition between linguistically interesting competence and mere performance typically excludes linguistically relevant knowledge. It may be that the conditioning context for some pronunciation variant is more social (e.g., economic status of interlocutor) than traditionally linguistic (e.g., prosodic position), but this fact alone doesn't make systematic linguistic behavior any less indicative of underlying knowledge about the linguistic system. Even the most ardent 'idealists' (e.g., Chomsky) understand that we can only indirectly observe competence as it is 'filtered' through performance. From Aspects of the Theory of Syntax (p. 4):
The problem for the linguist, as well as for the child learning the language, is to determine from the data of performance the underlying system of rules that has been mastered by the speaker-hearer and that he puts to use in actual performance.
So when Josh says that
People like me, though we accept that Phonetics is also Linguistics, would insist that Phonetics is more concerned with the interface than the subject proper. Language is competence. Studies of articulatory motor functions and sound processing are valuable (especially in practical industry terms, for things like speech processing on those annoying telephone menus that have you say "one" rather than press the button), no doubt about it, but mostly as a way to explain how useable information gets to the language module and back out again. I do not seriously believe that Phonetics has any bearing on meaning or grammar (though there are certainly those that do) - though there are bound to be certain mathematical artefacts of the way articulators are arranged that sometimes cause a speaker to prefer one possible form over another, etc.
I think he's mistaken. 'Mathematical artefacts' are by no means the only way in which the interface has an effect on grammar. In the early days of generative theory (i.e., whence the popular resurgence of the competence-performance divide), the interface was seen as crucial, if not central, to the theory of language. As Katz inelegantly wrote in The Philosophy of Language (1966, p. 98):
Natural languages are vehicles for communication in which syntactically structured and acoustically realized objects transmit meaningful messages from one speaker to another....

Roughly, linguistic communication consists in the production of some external, publicly observable, acoustic phenomenon whose phonetic and syntactic structure encodes a speaker's inner, private thoughts or ideas and the decoding of the phonetic and syntactic structure exhibited in such a physical phenomenon by other speakers in the form of an inner, private experience of the same thoughts or ideas.
Had he been capable of linguistic communication, Katz could have written what he seems here to mean, and he could have avoided making silly mistakes. On the former hand, what he means is that communication is important to any understanding of language. On the latter hand, it should be obvious that not all linguistic communication depends on acoustic transmission. At the end of his discussion, Josh points this out to an unnamed 'sound person', which category I assume excludes Katz.

Katz's syntactic flourishes aside, it is my assertion that communication - and thereby the interface - should be central to a theory of language. I will provide support for this assertion by discussing an incompletely-asked question in phonology.

It is well known that certain combinations of phonological features are comparatively rare among the world's languages. Take, for example, the uneven distribution of certain place and glottal features. Voiceless, labial [p] and voiced, velar [g] are commonly missing from phoneme inventories. As Gussenhoven and Jacobs eventually describe it (in Understanding Phonology, the book I should have used when I taught undergraduate phonology, instead of this one, whose name I will not utter, or whatever the written-version of 'utter' is), "... [p] is relatively hard to hear, and [g] is relatively hard to say." This is because of aeroacoustic effects in both cases. In the former case, the shape and size of the sub-closure cavity in [p] causes the noise that accompanies closure release to be very quiet, and thereby indistinct, relative to stops at other places of articulation. In the latter case, the closure for [g] is closer to the vocal folds than most other stops. Voicing requires a pressure drop across the glottis, and stopping airflow above the glottis inhibits this, particularly so when the super-glottal cavity is small, as it is with [g].

So, at least in some cases, the means by which sounds are produced and perceived directly, if not deterministically, affects the distribution of speech sounds across languages. Speech sounds - combinations of phonological features - are at the foundation of phonological theory. If you know the phonology of, say, Dutch, you know, among other things, which sounds are part of the language and (a subset of) which sounds are not. Which is another way of saying that you know which phonological features are functional in combination with which other features. Which means that you know which rules and constraints operate when. Which had better be part of competence, or the term risks losing all meaning, at least with regard to phonology. If this is part of competence, then at least some of phonetics is linguistics proper.

On a more abstract level, the central issue here is whether or not phonological features are independent. As soon as we ask the most obvious question - are they? - we realize that we must specify what we mean by independence. Phonological features certainly seem not to be independent with regard to cross-linguistic distribution or within-language rule and constraint application. In showing this, I presented an example of non-independence in production - [g] - and perception - [p].
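One way to make the distributional sense of (non-)independence concrete is to treat it as statistical independence of features across inventories. A toy sketch in Python, with invented counts (not real typological data):

```python
# If place and voicing were independent across inventories, the joint
# probability P(place, voicing) would factor as P(place) * P(voicing).
# The [p] and [g] gaps mean it does not. Counts below are invented.

counts = {
    ("labial", "voiceless"): 60,   # [p]: often missing
    ("labial", "voiced"):    95,   # [b]
    ("velar",  "voiceless"): 95,   # [k]
    ("velar",  "voiced"):    60,   # [g]: often missing
}

total = sum(counts.values())
p_labial = (counts[("labial", "voiceless")] + counts[("labial", "voiced")]) / total
p_voiceless = (counts[("labial", "voiceless")] + counts[("velar", "voiceless")]) / total

observed = counts[("labial", "voiceless")] / total
expected = p_labial * p_voiceless  # what independence would predict

# [p] occurs less often than independence predicts: non-independence.
print(f"observed P(labial, voiceless) = {observed:.3f}")
print(f"expected under independence  = {expected:.3f}")
```

With these made-up numbers, the observed joint probability for [p] falls well below the product of the marginals, which is exactly the signature of featural non-independence at the distributional level.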

With regard to the former, however underlying distinctive features map onto production parameters (not a simple issue), it seems unlikely in the extreme that independence would be the norm. Candidate violations of featural independence come readily to mind: for example, place of articulation affects all sorts of acoustic cues in all sorts of complicated (and likely non-independent) ways, such as VOT, burst amplitude and spectral shape, frication amplitude and spectral shape, frication and VOT duration, to name just a few.

With regard to the latter, the interactions in production are bound to have effects on perception. Whether independent or not, our perceptual systems are awfully good at perceiving relevant distinctions between speech sounds, although it is generally unknown precisely what sorts of (in)dependencies play what sorts of roles in perception. I am currently in the process of addressing exactly these issues. I recently posted about one of the tools I plan to use extensively. I will post in the future about others.

But enough horn autotooting. The point is, again, that, insofar as production and perception systems are understood to affect grammar, they are part of linguistics proper. This means that the models and theories we use to understand production and perception are necessary parts of our models and theories of language in general.

This all brings us nicely, if indirectly, to Josh's VOT post, in which the revised role of production and perception sheds some light on the issue discussed therein: shifts in bilingual relative to monolingual voice onset times. The basic observation is that, e.g., Spanish-English bilingual children produce more English-like 'Spanish' stops and more Spanish-like 'English' stops, at least in terms of VOT. Furthermore, the effects are apparently both reliably measurable and perceptually irrelevant to the adults around the children.

Josh writes:
There is some level at which the child is storing its two /p/s in a similar place. We know this because they affect each other. If these two categories were truly language-independent, what we would expect to see, I would imagine, is phonemes that pattern exactly as they do for monolingual speakers. Instead, there is (admittedly minimal) overlap.

It will be objected by people (like this guy) who reject a phonemic level of representation (or used to, or do on Tuesdays except during Passover, or something - it's not terribly clear) that this is an artefact of pronouncing the two sounds in similar locations repeatedly. Motor memory stores exemplars of past productions, and these end up interfering with each other. As to the question of how the subject manages to continue to differentiate between the two distinct (realizations of) phonemic categories in the distinct languages, they would presumably say that this comes from associations with the other serial sounds being produced. The similarity in VOT is an articulatory effect - but only one of many, the others being effects that come from repeatedly producing series of sounds in the given language category.

Yes, but that's dodging the question in a sense. The point is that Spanish and English have this sound that is pronounced in similar enough ways that the subject becomes at least a little confused as to which is which. The two categories do exhibit an influence on each other, and they do so because they are similar across the two languages in some important sense.

If, indeed, there are language-universal phonemic categories that are defined with respect to things in addition to articulation, we should expect to see effects here that cannot be predicted by articulation alone. That would indeed be very interesting.
Josh points out the obvious (although he also notes, correctly I think, that it isn't obvious to everyone it should be obvious to): that this is (at least indirect) evidence for, as Josh puts it, "a phonemic level of representation", although I think it is better described as evidence for features. Clearly, people cross-classify speech sounds. The Spanish and English [p]s in the bilingual child's mind have much in common. In fact, I am convinced that cross-classification is done along different dimensions, and to different degrees, depending on whether it is done with regard to production or perception. This is the independence issue again, but it's only indirectly relevant here.
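The shifted-but-separate pattern is easy to sketch in a purely illustrative simulation (the VOT means, shift size, and spread below are my invented numbers, not measurements from the study Josh discusses):

```python
import random

random.seed(0)

# Illustrative VOT means in milliseconds -- invented for the sketch.
MONOLINGUAL = {"spanish_p": 10.0, "english_p": 60.0}
SHIFT = 8.0  # hypothetical drift of each bilingual category toward the other

def sample_vot(mean, sd=5.0, n=100):
    """Draw n VOT values from a normal distribution around the category mean."""
    return [random.gauss(mean, sd) for _ in range(n)]

# Bilingual categories drift toward each other but remain distinct:
bilingual_spanish = sample_vot(MONOLINGUAL["spanish_p"] + SHIFT)
bilingual_english = sample_vot(MONOLINGUAL["english_p"] - SHIFT)

mean_s = sum(bilingual_spanish) / len(bilingual_spanish)
mean_e = sum(bilingual_english) / len(bilingual_english)

# No merger: the categories stay separated, but each mean has moved
# toward the other language's category -- the pattern described above.
print(f"bilingual Spanish /p/ mean VOT: {mean_s:.1f} ms")
print(f"bilingual English /p/ mean VOT: {mean_e:.1f} ms")
```

The point of the sketch is just that 'influence without merger' is a perfectly coherent state of affairs: two category means can shift toward each other while the distributions remain reliably separable.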

Directly relevant is an understanding of the relationship between production (and perception) systems and the grammar. On the one hand, we've got pretty clear evidence of underlying cross-classification. On the other, we've got something that looks a lot like motor and exemplar memory effects. There's nothing inconsistent about maintaining both, and I don't think it's dodging the question to invoke motor or exemplar theories to explain a phenomenon such as this. In fact, I think it's exactly the right way to proceed.

Of course, it may be that this particular phenomenon (i.e., the shifts in bilingual VOTs) won't tell us much about competence, which would put it outside of linguistics proper. That's okay with me, as I've got a very different research agenda, and I hasten to add that it doesn't mean that this phenomenon is uninteresting in general.

Finally, I'm glad to see Josh posit a nice, if vague, phonetic hypothesis. If I wasn't convinced of the value of investigating non-articulatory aspects of phonology and phonetics, I wouldn't have spent the last four years taking the classes I did.


Big Cats

I took my daughter to the Exotic Feline Rescue Center yesterday. It should take about an hour to drive there from Bloomington, but the directions given on the website, while technically accurate, are missing some key information.

We took highway 46 west out of Bloomington. Easy enough, although, before we left the house, I did briefly forget about a four-year-old change in which road you take from downtown Bloomington to get to 46 west. I grew up here, but didn't get my driver's license until I was 21 and living in Arizona (just north of where Senator Highway became a gravel road, just south of Goldwater Lake), so I had no motorized automobility prior to moving away in the first place. In addition, I don't often have a reason to drive in that direction anyway, so that part of my mental map of my hometown is less developed than some others.

A few miles west of the very tiny town of Bowling Green, a normal right turn onto a nondescript perpendicular is called for. I passed it, then I turned around and drove back to the redolently named 200 E. I headed north, past Ashboro Road, along which is housed the EFRC, for about three miles. I looked, briefly, at state-sized maps of the whole state, then at the covers of maps of Ohio, Louisville, and similarly inappropriate locales. I headed south, back to a utility worker working on utilities along 200 E, and asked him if he knew where I could find an Ashboro Road.

We found the turn marked by a sign amid much lush flora, facing west such that, when you approach the Ashboro-200 E intersection from the South, it is nearly invisible. A half-mile or so east of the secret sign, we parked next to the entrance.

The facility is amazing and bizarre. They depend quite a bit on donations and volunteer labor, but despite the low budget, they've put together an impressive array of enclosures. The website says they have over 200 cats (the guide today told us 192 cats, and the 2006 mid-year report says something about some of the cats dying) in enclosures covering 30 acres.

First you see a shed and a fork in a gravel drive. The guide met us and told us the rules: no petting the cats, stay at least 3 feet from the fences, and a third rule I can't remember. Also, if a cat turns its ass toward you, it's likely to spray you with a very stinky liquid territory marker. We were advised to step to the side to avoid the spray, as moving back merely (slightly) delays the arrival of the stink.

The tour began on the left fork, with cougars in smallish enclosures on the left, lions in a large cyclone-fence enclosure to the right. The landscaping inside each enclosure is left up to the cats. Apparently, cougars like plenty of vegetation, whereas lions do not. Some of the enclosures have fencing-material tops, others have electrically live wires along the rim, still others are high-walled but otherwise open on top.

Because of the function of the EFRC (i.e., because it is not a zoo, but, rather, a life-long home for the cats), neither the facility itself nor the population of cats is designed for show. Whereas a zoo might have a single lion pride, a couple of tigers, and a few other cats, we saw something like 40 tigers (one of which is a genetically rare white tiger), 30 lions, a dozen and a half cougars, two bobcats, a couple of black 'panthers' (or melanistic leopards), some standard leopards, maybe a jaguar (?), a serval, and probably some others that I am forgetting.

Although a small number of the cats that live there were born there, the vast majority have very sad backgrounds. There are a bunch of ex-circus cats (including one 23-year-old tiger whose canine teeth were worn to nubs from incessant chewing on the bars in her small circus cage), and quite a few cats who belonged to people with very bad ideas about what makes a good pet. Three of the cougars were found in an apartment in (or near) Chicago. One of the tigers was found in a residence that also contained a meth lab. Another tiger was owned by a man who later pled guilty to over a hundred counts of child molestation (the tiger was child-bait, apparently, and had to be dealt with after the guy moved to the slammer). A disturbing number of them were found in various towns just wandering around.

Keeping track of my daughter (and myself), trying to enjoy seeing such amazing animals up close, and hearing horrible story after horrible story, I quickly became overwhelmed and felt a bit numb.

On a somewhat lighter note, they keep the names the animals arrive with. Somewhat surprisingly, there are no lions named Simba at the EFRC. There are, however, quite a few Tonys and Tiggers (and variations of Raja) among the tigers.

In one area, a male tiger tried to spray us. We hurried by and avoided any incident, although on the way back out of the area a few minutes later, my daughter was not happy to learn that we would have to walk back by the spray-happy fellow. Not helping matters was the fact that, while we listened to that and the other nearby cats' stories, a young lion pounced on the fence behind us. We must have looked like a fine set of meat-based toys. If this 'kitten' had weighed about 200 pounds less, its behavior would have been adorable. It was fairly terrifying instead.

I brought my camera in hopes of snapping some national-geographic-worthy photographs. I got one (with the help of an employee) late in the tour, which can be viewed here. Because of the rules and the fencing materials used in the enclosures, it is very difficult to get good pictures. After realizing this, I quickly came to wish I had brought an audio recording device instead. The sounds were simply amazing. I don't know why, but a number of the cats were very vocal throughout our visit. Some of the lions seemed to have important things to roar at each other. Some of them were clearly excited about feeding time (some of them excited enough that the yowls and roars repeatedly made me flinch and feel a bit panicky). Some were happy to see the woman guiding us, including a very chirpy cougar and a large number of 'chuffing' tigers (chuffing is kind of a snorted tiger greeting). In any case, the sounds were loud and clear. I plan on returning to record at a later date.

All in all, it was a fine field trip. A bit frustrating to find, but well worth the time and effort. They house cats from all over the country, and they do so with not a lot of money. I highly recommend visiting and, if you can afford it, donating or volunteering, too.

Clarification of an off-hand comment

Josh is taking me to task for a comment I made with regard to a line in a Walter Williams column. From my post:
he [Walter Williams] writes that "We've seen widespread condemnation of alleged atrocities and prisoner mistreatment by the U.S., but how much media condemnation have you seen of beheadings and other gross atrocities by Islamists?" I respond that we, rightly, hold ourselves to higher moral standards than we do the Islamists.
Josh points out, rightly, that the Islamists should be held to the same moral standards as our GIs, and anyone else, for that matter. He writes:
Either a thing is a moral outrage deserving of condemnation or it isn't.... if complaining about conditions at Gitmo is the right thing to do, then I think we have plenty of time left over to criticize beheadings....
I agree. I should have said that the imbalance of criticism is likely based on different expectations, not different moral standards. We expect Islamic extremists to do barbaric things like behead people, while, perhaps incorrectly, we expect our armed forces to not torture prisoners. Both acts deserve condemnation.

There is another component to this debate that neither Josh nor I have touched on yet. Another possible response to Williams' complaint concerning condemnation imbalance would be to lower the standard we hold our armed forces to; we could hold both the Islamists and 'our boys' to a uniformly low standard of moral conduct. Doing so would, of course, lose us whatever moral high-ground we have (or had - the Abu Ghraib torture has surely lost us a good bit already).

The New Look: Awesome

I had some technical problems last night changing the template here (from 'Rounders 3' to 'Simple II' with a few 'margin:1%'s thrown in for good measure). Blogspot stopped responding to my repeated previews and template saves last night, so I saved the html locally halfway through making some minor adjustments. I wasn't able to republish the entire blog until just now.

Josh found out (and posted) about the new look after I began making changes but before I finished, so there was disagreement between the front page and the archives. Everything here on Source-Filter should look the same now (i.e., sleek and sexy, with a touch of sophisticated).


Ultimate Hype!

Tonight's pre-TUF (warning: link goes to obnoxious website with automated talkie) hour on Spike TV is devoted to hyping a fight between Chuck Liddell and Jeremy Horn that happened in August of 2005.

Chuck says: "No one gets in my head like that. I go out and fight, I fight."

It is reminiscent of Igor Zinoviev's heavily accented "I like fight. Fight is win."

[Postscript: An earlier version of this post described tonight's show as hyping a fight scheduled to happen in October of this year. The whole damn show was presented as hype for a fight-to-come, then at the end, they showed about 20 seconds of the fight-that-went, at which point I remembered having seen it. I was getting excited. The hype was working, and I already knew the outcome.]

'Ernesto' se traduce como 'Dick'?

This is one of the funniest t-shirts I've seen in a long time. I might even buy one, and it has been many years since I bought a t-shirt with anything other than a pocket on the front.

¡Viva absurdismo!

(hat-tip to boingboing; link to boingboing post about the shirt)


The Cognitive Lunch Shark Tank

Matt Jones (with a bit more head-hair and quite a lot more beard than in the picture on the linked page) gave a very interesting talk on generalization and categorization at this week's cognitive lunch. He (with Todd Maddox [and others] I assume) has developed some interesting new experimental techniques and accompanying analyses that have, if you buy his arguments, produced some reasonably shocking new results. Not everyone at the talk seemed to buy his arguments.

IU is arguably the best place to study mathematical psychology, particularly if you are interested in categorization. If I'm remembering correctly, Jones cited three of the people attending the talk (John Kruschke, Rob Nosofsky, and Rob Goldstone), and his association with Maddox links him to F. Gregory Ashby, which in turn links him to Jim Townsend (which kind of links him to me, but not in any substantive way).

He didn't cite them to kiss any asses, mind you. He cited them because their names are impossible to avoid if you're going to discuss categorization. Their names are impossible to avoid in no small part because they've each put many years of effort into investigating categorization.

Without going into too much detail, Jones' basic point was that, in a categorization task, you can analyze the effects of the immediately preceding trial on the current trial to investigate the role of inter-stimulus similarity in stimulus generalization and categorization. He discussed the effects of the preceding stimulus, its structural relationship to the current stimulus, and, briefly, the feedback following the preceding stimulus on the probability of giving a particular response to the current stimulus. In setting things up at the beginning of the talk, he brought up some of the models that (some of) those present had developed. In discussing some of the results late in the talk, he brought up those models again in order to point out that they couldn't account for the data he had observed.
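A minimal, hypothetical version of that trial-by-trial analysis might look like this (the toy data are invented for illustration, and real analyses condition on much more, e.g. feedback and stimulus structure):

```python
from collections import defaultdict

# Estimate how the response on trial t depends on the stimulus category
# shown on trial t-1. Toy data: (stimulus_category, response) per trial.
trials = [
    ("A", "A"), ("A", "A"), ("B", "A"), ("B", "B"),
    ("A", "A"), ("B", "B"), ("B", "B"), ("A", "B"),
]

# counts[prev_stimulus][current_response] = number of such transitions
counts = defaultdict(lambda: defaultdict(int))
for (prev_stim, _), (_, response) in zip(trials, trials[1:]):
    counts[prev_stim][response] += 1

# Conditional response proportions, one row per preceding stimulus:
for prev_stim, resp_counts in sorted(counts.items()):
    total = sum(resp_counts.values())
    for response, k in sorted(resp_counts.items()):
        print(f"P(respond {response} | previous stimulus {prev_stim}) = {k}/{total}")
```

If inter-stimulus similarity matters, these conditional proportions should vary systematically with how similar the preceding stimulus is to the current one, which is the kind of dependency the existing models were then asked to account for.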

Thus far in my career as a cognitive scientist at IU, I have attended cognitive lunch only intermittently. I do know, however, that moments like these offer the series' most compelling, and, occasionally, most horrifying (to an up-and-coming scholar), moments. The speaker will make a claim, or gloss over an assumption, and before you know it, it's a feeding frenzy. As many of these professors have been working with, and around, each other for many years, they can, on occasion, get pretty snippy with each other. And they do not often hesitate to disagree vociferously with, well, anyone in the room.

Today wasn't bad. I've certainly seen worse. Jones dealt with the interruptions with aplomb (he seems to be very smart, is certainly well-versed in the field, and elicited a few laughs, although he talked too fast for much of the talk, as he was, in my opinion, a bit overambitious in planning the presentation [he gets a load of bonus points from me for using a very stripped-down black-on-white slide template]). Criticisms and questions were pointed, but appropriate.

I hope things go as well for me in January. Perhaps the fact that I don't study categorization will thin the herd a bit.


General Recognition Theory

I wrote in my first and second posts that I plan on writing 'science' posts describing research that is relevant to my own. Well, here's my first post doing just that, although I would like to note quickly that this post will not be as technical as some that follow. This post will be about a problem that I've set up for myself and my initial thoughts on how I plan on going about solving it.

First, a bit of background. I am working on a double major PhD in Linguistics and Cognitive Science at Indiana University. I also plan on getting the certificate in modeling in Cognitive Science (pdf link explaining the requirements, if you're interested). The combination of a double major and the certificate has allowed me to take, if I'm not mistaken, exactly zero elective classes. This is fine, though, as my coursework has for the most part been thoroughly enjoyable and extremely well tailored to my interests.

Anyway, my first exposure to the modeling that now consumes so much of my time occurred when I received an email about a reading group described as an "introduction to general recognition theory (GRT). GRT is a general, multi-dimensional signal detection theory of identification and categorization. We will focus on the original theory (Ashby & Townsend, Psy. Rev. 1986), which was applied to perceptual independence of psychological dimensions and features."

I recall thinking that a model that deals with perceptual independence of features was precisely the kind of model I needed to know about. I'm sure that the mention of signal detection theory influenced my decision to participate in the reading group, as well, as a professor I respect greatly said once, in an offhand manner, that signal detection theory was one of the most important ideas he ever learned about. Having now learned about it, I feel confident saying the same.

The bit about features caught my eye, though, because, within linguistics, I tend toward the phonology and phonetics side. Phonology and, to a lesser extent, phonetics are built on a foundation of distinctive features, yet I couldn't recall anyone in either phonology or phonetics ever having addressed the issue of (in)dependence between two (or more) features. Well, here was a reading group addressing the issue directly. So I joined in.

Here's a grossly over-simplified description of GRT. In the simplest case, the experimental task involves the identification of four stimuli. These four stimuli consist of the factorial combination of two features, each with two values. To provide a simple, concrete example, suppose that the two features of the stimuli are two 200 ms pure tones, one at 200 Hz, the other at 500 Hz, and that the two values are 'absent' and 'present'. If '0' indicates absence, '1' indicates presence, and the first position represents the lower frequency tone, the four stimuli are 00, 01, 10, and 11. Now, the stimuli in these sorts of experiments have to be confusable, either because they are similar enough that the noise of the perceptual system itself makes it difficult to always identify them correctly, or because they are purposefully embedded in noise. In the present case, it would probably work best to embed them in noise.
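The factorial tone design above is easy to make concrete. Here is a minimal sketch of generating the four stimuli as waveforms; the sampling rate, noise level, and function name are my own choices for illustration, not part of any actual experiment:

```python
import numpy as np

def make_stimulus(low_present, high_present, fs=44100, dur=0.2,
                  low_hz=200.0, high_hz=500.0, noise_sd=0.5):
    """Build one 200 ms stimulus from the 2x2 factorial tone design."""
    t = np.arange(int(fs * dur)) / fs
    wave = np.zeros_like(t)
    if low_present:
        wave += np.sin(2 * np.pi * low_hz * t)   # 200 Hz component
    if high_present:
        wave += np.sin(2 * np.pi * high_hz * t)  # 500 Hz component
    # Embedding the tones in noise is what makes the stimuli confusable.
    wave += noise_sd * np.random.randn(t.size)
    return wave

# The four stimuli, labeled with the low-frequency tone in first position.
stimuli = {f"{lo}{hi}": make_stimulus(lo, hi)
           for lo in (0, 1) for hi in (0, 1)}
```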

Confusable stimuli are important because the data we work with are confusion probabilities. For example, given that the stimulus '01' was presented, we tally the number of times a subject responded '00', '01', '10', and '11'. We then divide each tally by the total number of times that stimulus '01' was presented and find that, say, Pr( response = '10' | stimulus = '01' ) = 0.23. We do this for each stimulus and get a 4x4 confusion matrix (typically arranged so that the columns correspond to responses, rows to stimuli).
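The tally-and-normalize step above is just a row-wise division. A sketch with made-up tallies (the numbers are hypothetical, chosen only to show the arithmetic):

```python
import numpy as np

# Hypothetical raw tallies: rows = stimuli, columns = responses,
# both ordered '00', '01', '10', '11'.
labels = ["00", "01", "10", "11"]
tallies = np.array([
    [80, 10,  8,  2],
    [12, 70,  3, 15],
    [ 9,  4, 64, 23],
    [ 2, 14, 20, 64],
])

# Divide each row by that stimulus's presentation count to get
# Pr(response | stimulus); each row of the confusion matrix sums to 1.
confusion = tallies / tallies.sum(axis=1, keepdims=True)
```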

It is assumed that each physical stimulus corresponds to a probabilistic distribution in perceptual space and that the distributions corresponding to the four stimuli overlap. In the most general form of the theory, no assumptions about the particular character of these distributions are made. In the most common special case of the general theory, it is assumed that each perceptual distribution is bivariate Gaussian. In any case, the perceptual effect of each stimulus presentation is assumed to be a point in the perceptual space. Because the perceptual representations are overlapping and distributed, the point is ambiguous - there is a non-zero probability that it came from each of the four possible distributions. So, we need the multidimensional analogs of signal detection theory's scalar criterion, which turn out to be decision bounds - curves in the perceptual space that define response regions. If the perceptual effect of a given stimulus presentation falls, say, above the bound separating the 'absence' and 'presence' regions for the 200 Hz tone and below the bound separating the 'absence' and 'presence' regions for the 500 Hz tone, the subject responds '10'.
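The Gaussian special case of the model described above can be simulated directly: sample a perceptual effect from the stimulus's bivariate Gaussian, then classify it with bounds parallel to the axes (i.e., assuming decisional separability). All parameter values here (means, covariance, bound locations) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
labels = ["00", "01", "10", "11"]

# Hypothetical bivariate Gaussian perceptual distributions, one per
# stimulus: a mean on each dimension (low tone, high tone) and a shared
# covariance. Zero off-diagonal covariance = perceptual independence.
means = {"00": (0.0, 0.0), "01": (0.0, 1.0),
         "10": (1.0, 0.0), "11": (1.0, 1.0)}
cov = [[0.3, 0.0], [0.0, 0.3]]

def respond(point, bounds=(0.5, 0.5)):
    """Decision bounds parallel to the coordinate axes: respond
    'present' on a dimension when the perceptual effect exceeds
    that dimension's bound."""
    return "".join("1" if x > b else "0" for x, b in zip(point, bounds))

# Simulate a full 4x4 confusion matrix (rows = stimuli, cols = responses).
n_trials = 5000
confusion = np.zeros((4, 4))
for i, stim in enumerate(labels):
    samples = rng.multivariate_normal(means[stim], cov, size=n_trials)
    for point in samples:
        confusion[i, labels.index(respond(point))] += 1
confusion /= n_trials
```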

One of the great strengths of GRT is that it allows us to differentiate a number of different kinds of perceptual independence. To make a long and very interesting story very short, GRT provides three key definitions. Perceptual independence holds for a given stimulus if statistical independence holds in the corresponding perceptual distribution; perceptual separability holds for a feature if the perceptual effects of that feature do not depend on the level of the other feature; and decisional separability holds if the decision bounds are parallel to the coordinate axes in perceptual space.
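In the Gaussian special case, the first two definitions above reduce to simple checks on the distribution parameters: perceptual independence is zero covariance within a stimulus's distribution, and perceptual separability means a dimension's marginal means don't shift with the other feature's level. A sketch with hypothetical parameters (the violations below, in '01' and '11', are planted deliberately):

```python
import numpy as np

# Hypothetical (mean vector, covariance matrix) for each of the four
# perceptual distributions, on the (low-tone, high-tone) dimensions.
params = {
    "00": (np.array([0.0, 0.0]), np.array([[0.3, 0.00], [0.00, 0.3]])),
    "01": (np.array([0.0, 1.0]), np.array([[0.3, 0.12], [0.12, 0.3]])),
    "10": (np.array([1.0, 0.0]), np.array([[0.3, 0.00], [0.00, 0.3]])),
    "11": (np.array([1.2, 1.0]), np.array([[0.3, 0.00], [0.00, 0.3]])),
}

def perceptual_independence(stim):
    """For a bivariate Gaussian, statistical independence is
    equivalent to zero covariance between the two dimensions."""
    _, cov = params[stim]
    return bool(np.isclose(cov[0, 1], 0.0))

def perceptually_separable(dim):
    """Dimension `dim` is separable if its marginal means do not
    depend on the level of the other feature."""
    other = 1 - dim
    by_level = {0: [], 1: []}
    for stim, (mean, _) in params.items():
        by_level[int(stim[other])].append(mean[dim])
    return bool(np.allclose(sorted(by_level[0]), sorted(by_level[1])))
```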

Perhaps the greatest weakness of the theory, though, is the fact that a 4x4 confusion matrix does not suffice to determine the properties of the underlying perceptual distributions and decision bounds. Some very smart people have put a lot of time and effort into coaxing useful information out of such a matrix, though, so the theory is, in fact, quite useful. Significantly, if failures of independence and separability occur, observed probabilities can be used to demonstrate this fact. Evidence for independence or separability is much harder to come by.

Which brings me to my problem. Maybe the most important aspect of standard signal detection theory is the ability to transform response probabilities into separate measures of sensitivity and response bias. It is well documented that systematically manipulating the frequencies of the stimuli or the payoff values of the responses can induce changes in subjects' response biases without affecting sensitivity. Well, these same kinds of manipulations ought to have the same kinds of effects in the multidimensional generalization of the standard theory. Changing the stimulus frequencies should induce systematic changes in subjects' decision bounds without changing the relative positions or properties of the perceptual distributions.
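The sensitivity/bias decomposition mentioned above is standard unidimensional signal detection theory: from a hit rate and a false-alarm rate you recover d' (sensitivity) and c (criterion). The example rates below are made up, chosen so that d' stays put while the criterion shifts, as a payoff manipulation would predict:

```python
from statistics import NormalDist

Z = NormalDist().inv_cdf  # inverse of the standard normal CDF

def dprime_and_criterion(hit_rate, false_alarm_rate):
    """Split detection performance into sensitivity (d') and
    response criterion (c), per standard signal detection theory."""
    d_prime = Z(hit_rate) - Z(false_alarm_rate)
    criterion = -0.5 * (Z(hit_rate) + Z(false_alarm_rate))
    return d_prime, criterion

# A manipulation that makes the subject more liberal raises both hits
# and false alarms together: d' is (nearly) unchanged, c shifts.
d1, c1 = dprime_and_criterion(0.69, 0.31)  # neutral payoffs (hypothetical)
d2, c2 = dprime_and_criterion(0.84, 0.50)  # liberal payoffs (hypothetical)
```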

My problem is to figure out what kinds of manipulations of stimulus frequencies will produce what kinds of usable changes in the observed response probabilities. My intuition tells me that the additional degrees of freedom provided by the extra confusion matrices (one per stimulus-frequency condition) ought to enable more direct investigation of the underlying distributions. If shifts in stimulus frequencies induce predictable shifts in decision bounds, it ought to be possible to abstract away from the particulars of the decision bounds themselves to more directly get at properties of the perceptual distributions.
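For a single dimension with equal-variance Gaussians, the direction of the predicted bound shift is easy to derive: the optimal (likelihood-ratio) bound sits at the midpoint under equal priors and moves toward the rarer stimulus's mean as the priors tilt. This is only the unidimensional special case, offered as a sanity check on the intuition above, not the multidimensional result I'm after:

```python
import math

def optimal_bound(mu_absent, mu_present, sigma, p_absent, p_present):
    """Optimal likelihood-ratio bound between two equal-variance
    Gaussians on one perceptual dimension. Raising the 'absent'
    stimulus's frequency pushes the bound toward the 'present' mean."""
    midpoint = 0.5 * (mu_absent + mu_present)
    shift = sigma ** 2 * math.log(p_absent / p_present) / (mu_present - mu_absent)
    return midpoint + shift
```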

So, the first thing I need to do is figure out exactly what decision bound changes I expect when I change stimulus frequencies. Then I need to figure out what, if anything, this buys me in terms of the relationships between the underlying perceptual representations and the observable response statistics. At the very least, the additional data should allow for some Gaussian model fitting, but I would like to produce some respectable analytic results, too.

[Postscript: Okay, so this post was less about my problem and how I plan to solve it than it was a quick and dirty description of GRT. By quick and dirty, of course, I mean terribly incomplete. Among other things, I almost completely avoided explicit mathematical notation. I also mentioned only indirectly two of the many scholars who have developed the theory, without whom I wouldn't be thinking or writing about this stuff at all. I will rectify both of these shortcomings in the future with more complete presentations of a number of particular papers on GRT.

Also, although I am happy that I am, thus far, upholding yesterday's solemn oath, it seems likely that, by committing myself to daily blog posts, I have implicitly committed myself to less careful proofreading. For example, this post turned out to be longer than I expected it to be, and I don't really feel like proofing it again, but I don't want to withhold publication, so caveat lector.]


Promises, promises...

If you're familiar with my archive, you know that I have thus far posted only intermittently. Well, all that's about to change.

I hereby solemnly swear that I will post at least once a day.

It's not like there's a lack of subjects to write about. For example, there's a post on Josh's blog that I intend to respond to. And there's a line in a Walter Williams column that I take issue with (he writes that "We've seen widespread condemnation of alleged atrocities and prisoner mistreatment by the U.S., but how much media condemnation have you seen of beheadings and other gross atrocities by Islamists?" I respond that we, rightly, hold ourselves to higher moral standards than we do the Islamists.). And there's what looks to be a good article on the Cato website, but I haven't read it yet.

Okay, so I haven't exactly opened up the floodgates here.

As I mentioned in my second post, part of my motivation to start a blog at all was to give myself a reason to write on a regular basis. Thus far, my posting has been rather irregular (i.e., aperiodic), but having a blog with a readership of unknown size - likely statistically indistinguishable from zero - does seem to give me some measure of motivation.

Why, prior to starting this blog, I almost never wrote rambling, semi-coherent disquisitions about The Ultimate Fighter.

As a quick aside, now is as good a time as any to point out how good Josh's advice to me regarding topics per blog post was. Pretty much the only thing I've written about Ultimate Fighting can be found at the tail end of my August 31 post 'Chasing Josh, Reading Laudan, Watching Fights'. Had I had the sense to write a post solely about the gentlemanly art of fisticuffs instead of appending it as an afterthought to a post about that and two other things, I could have linked directly, in the preceding paragraph, to the rambling, semi-incoherent disquisition. Instead, my early blogging errors are catching up with me. You can think of this paragraph as a ripple, in the pond that is my blog, caused by the skipped stone of a poorly planned three-topic post.



Given that, in my freshman composition class, the readership was known to be exactly one, it wasn't the promise of fame that had me patiently seated thrice weekly in front of my dedicated word processor. It wasn't fear of bad grades, either, as I recall making all sorts of ludicrous excuses to avoid waking up early enough to make it to class.

Even though I had numerous mildly unpleasant encounters with a freshman-essayist variant of writer's block, I believe now that I was motivated primarily by the joy I felt once the snowball's momentum freed it from my push. I own a haze of indistinct memories, dozens of late afternoons and early evenings of waiting for inspiration to strike me, sitting and looking around my dorm room, pondering the topic-worthiness of my plaid sheets, my trombone, the window, the desk, my word-processor, my hands... until, inevitably, eventually, a thought would follow immediately from a stray antecedent, a third would join in, and I was off to the races. Once I had a topic, however trivial and asinine, the words would arrive more quickly than I could hunt-and-peck them into the 5-by-13 inch screen.

It's interesting to note that this composition class was entirely focused on mechanics. We had free rein to choose any topic. Our essays would be returned with small, neat, red marks accompanied by numbers in the margins indicating which sub-section of the Little, Brown Handbook's section on punctuation and mechanics we had violated. For each error, we were tasked to write the incorrect sentence, the broken rule, and the corrected version of the sentence. I grew very tired indeed of repeating the refrain regarding the use of commas and 'introductory elements'.

By the end of the semester, I was writing error-free essays. It felt quite good to see clear evidence of progress and accomplishment, I had established an invigorating (yet, sadly, short-lived) essay habit, and the professor singled me out for praise (he enjoyed the absurdity of my sense of humor - it was icing on the cake that he looked and kind of sounded like John Lithgow).

Unfortunately, I don't have copies of any of those essays. But I do have a burgeoning essay habit again.


The source, and justification, of rights

I'm a bit late responding, but the Mises blog had an interesting post a couple of days ago on the source of rights. Not surprisingly, then, it's largely about the writer's (Stephan Kinsella) belief about what the source of rights is, but it also touches briefly on the justification of rights.

In the end, Mr. Kinsella argues that, if we must choose a source, it's empathy. This makes sense - rights are inherently tenuous and inherently social. My rights hold insofar as others respect them; they can be violated by the unscrupulous and sociopathic with disturbing ease. The empathetic among us are the most likely to respect the rights of one another, as Mr. Kinsella says, almost by definition.

This is fine if all we want to do is consider the source of rights. But Mr. Kinsella isn't only concerned with the source - he is concerned with their rationale, as well. Actually, he's concerned with the idea that rights have no rationale. He writes

But rights don't really "come from" anything. When it is demonstrated that 2+2=4, this is a truth, a fact. Does it make sense to ask what is the "source" of this "truth"? Where does 2+2=4 "come from"? This is just nonsense. And it is similar with normative propositions--with moral truths.

Putting aside the poor choice of an example of a mathematical axiom, the equation of normative propositions with axioms is, well, just nonsense.

Any statement of the form 'one should do X' - that is, any normative proposition - is implicitly incomplete. A more complete version can be expressed as 'one should do X, because X will further Y.' Even more thoroughly, one could employ the ponderous 'one should do X, because X will further the goal Y, which is a worthy and attainable goal, as evidenced by Z and W, and, furthermore, we have good reason to believe that doing X will actually move us toward Y - see V, U, and T.'

Let's tease this apart a bit.

Any prescribed action X must further a worthy goal Y. Of course, there is no guarantee that anyone, much less everyone, will agree that Y is worthy. For example, I may value the individual's autonomy very highly, and so prescribe actions accordingly, but the mere fact that I - or any number of people - value individual autonomy does not make it a worthy goal for anyone other than me, and it certainly doesn't make it, or the normative statement aimed at furthering it, 'true'. Goals must themselves be justified.

Here's a quick, and probably deeply flawed, stab at justifying this one: individual autonomy is a worthy goal because our existence depends on (some measure of) it. I can't take care of my own needs, or pursue my desires, if I must take care of the needs, or pursue the desires, of (too many) others. I can't take care of my own needs, and pursue my desires, if someone else insists he is responsible for them.

Any prescribed action X must further an attainable goal Y. When stated plainly, this seems very obvious, but it's not at all obvious when left implicit in a typically elliptical normative statement. If we have no hope of attaining a goal, there is no point in attempting to pursue it.

This requirement sheds some light on the 'some measure of' modifier above, as, in this case, absolute individual autonomy is clearly not attainable. Putting aside the obvious exceptions - children, the infirm, and so on - even an otherwise unencumbered adult cannot attain absolute autonomy. If I want to hold onto this goal, it should be rephrased to reflect this fact. I could, for example, recast it in terms of the maximization of individual autonomy.

Given a worthy and attainable goal Y, the action X must actually contribute to the pursuit of Y. If you've got a goal that you value, and that you can justify, it does no good to prescribe action that won't get you any closer to it. If I value individual autonomy, I can't advise you to lie if that lie will, for example, prevent another from making an informed decision.

The point, to reiterate, is that normative statements aren't axioms. The normative proposition enjoining an actor to do X is justified only insofar as X will actually promote Y, and only insofar as Y is a justifiable and attainable goal.

[Postscript: For what it's worth, my thoughts on ethics are hugely influenced by Larry Laudan's model of scientific progress outlined in Science and Values. This despite the fact that he writes, in the preface, that

"...[T]his book is neither about how to make scientists more moral nor about how to make moral theory more scientific, however desirable at least one of those outcomes might be .... [A]lthough it is devoutly to be wished that moral philosophers knew more than they do about science, I would not know how to recognize a scientific ethics [if] I were confronted by one."

I make no claims to have accurately represented Laudan's model of science here, and clearly he doesn't think it applies to ethical issues (at least he didn't in 1984). Nonetheless, his framework for the justification of normative methodological statements in science seems to me to have direct parallels in moral philosophy.

[Post-postscript: while it is very likely that there are incomplete thoughts and problematic philosophical assertions in the above, I must leave now if I am to make it to the free performance of Book I of The Well Tempered Clavier on campus. This kind of thing is one of my favorite aspects of life in a university town, and I loves me some Bach. More later.]]


Some bias with a late night snack

Josh has taken issue with my having taken issue with his taking issue with, um, what was it again?

I bet that if we continue to link to each new salvo in this battle of wits and wisdom, we'll be in the top 40,000,000 or so results of a Google search for "liberal media". We used to have these kinds of exchanges via longform email, but now that we're bloggers, we offer to the internets on-loggers the opportunity to join in, too.

Since I don't want to get into this liberal media debate, I'll just finish by stating, for the record, that I agree with Josh's comment that it was appropriate for Rumsfeld to bring up a potentially valid historical episode when discussing current events. I also agree that, insofar as Rumsfeld's speech deserved a response at all, it deserved a reasoned response - for example, a response focusing on the differences between our current situation and that of the Allies-to-be prior to World War II. Instead, we got Barbara Boxer's faux outrage and Howard Dean's electoral prognostication.

I wrote above that "I don't want to get into this" half-seriously, because, on the one hand, I felt like giving a half-hearted shot at getting a rise out of Josh, and on the other, I believe that the debate over whether or not there is a liberal (or conservative) media bias is a statistical debate requiring careful, laborious consideration of, among other things, what constitutes bias in one direction or another, how many media outlets must exhibit bias, and how often they must do so to establish bias.

I will certainly concede that Josh believes that there is a liberal media bias (and I'll point out that Josh is not a lazy thinker - he comes by his beliefs the hard way), and I will concede that it's possible that there is, in fact, a liberal media bias. But I won't measure it.