I haven't been posting much to this blog lately, for obvious reasons. However, I did involve myself in a discussion in response to another blog's post recently.
To make a long(ish) story short(ish), Glenn Greenwald was disturbed to see the Washington Post praising recently deceased Chilean ex-dictator Augusto Pinochet. He drew parallels between US support for Pinochet's foreign lawlessness back then and support for domestic lawlessness today (note that he did not draw explicit parallels between Bush and Pinochet - he's not dumb, and he's not dishonest [Greenwald, not Bush or Pinochet]). I felt that the Post's editorial was less awful than Greenwald felt it was, and I posted a comment to that effect.
I argued that, as a historical case study (as opposed to a model on which to base one's own plans), Pinochet's 'free-market' economic policies are distinct from the violent political oppression of his regime. I made some facile comparisons between Castro and Pinochet and argued that the relative stability of Chile over the years was due, at least in part, to Pinochet's economic policies.
Others shot back that Pinochet's economic policies weren't even that beneficial, that they don't justify the political oppression (which I explicitly agreed with, even before this 'objection' was made to my argument), that welfare states 'just work', that laissez-faire capitalism is equivalent to Dickens's London, and that I am a lying Nazi-sympathizer (way to respect the level of discourse that Glenn studiously maintains, 'truth machine'!).
I don't actually know that much about Pinochet's economic policies. It may well be the case that they were not good for Chile. It does seem to be the case that Chile has been more economically stable, and more economically healthy, than most other Latin American countries for much longer, but I'm happy to admit that this could be for reasons independent of Pinochet's economics. I remain unconvinced that welfare states 'just work' and that laissez-faire capitalism is a bad idea. In addition, I value honesty very highly and, for what it's worth, I'm not a big fan of the Nazis.
All that said, it's kind of embarrassing to admit that this morning - a full two days after getting into the discussion at Unclaimed Territory - it occurred to me that Pinochet's economics and politics are not, in fact, separate. I am pro free market primarily because I don't like the idea of someone else making my decisions for me. It seems to me that no government official, whether democratically elected or installed from abroad, has the wisdom to plan an economy better than the mass of humanity participating in a market can. There's certainly no reason to think that any government officials are better suited than individuals are to make day to day decisions about who to associate with, what to buy, what to sell, or how hard to work. I think everyone would be better off, at least in the long run, if they had the opportunities afforded them by free markets.
It should have been obvious to me on Tuesday that imprisoning, torturing, and murdering political opponents is 100% antithetical to these values. It is as clear as day (today anyway) that Pinochet's political oppression of Chileans represents an utter lack of respect for private property, a crucial underpinning of any truly free market. After all, if a person's self is not owned by that person, then what is?
14.12.06
5.12.06
Solomon T Silbert
My son - Solomon T Silbert - was born on Monday, December 4, 2006 at 10 AM.
Here is a picture of him when he was about one minute old:
More here.
17.11.06
11.11.06
The Beauty of the Reductio ad Absurdum
Reductio ad absurdum can be striking, indeed beautiful, in its simplicity. Taking a bad premise to its logical extreme can concisely illustrate just how bad the premise is. Here are my two favorite 'reductios'.
The first is in response to epistemological relativism in philosophy of science. The basic claim (i.e., the bad premise) is that truth and knowledge are socially determined; I believe what I do because of my contingent history of social, economic, and cultural experiences.
If this assertion is made 'in good faith', it eats itself. If it's true, then, by necessity, the speaker believes it only by virtue of his social history. On the other hand, if the speaker has non-social reasons to believe the assertion, then the assertion can't be true (at least not in its strongest form).
Clearly, personal social histories play some role in what and why people believe what they do. Just as clearly, one's social history is not the sole determinant of one's beliefs. If it were, I'd be just as much a socialist and epistemological relativist as the middle-class suburbanites in the typical 'arts and science' department. But I'm not. Personal social histories play a role in the formation of belief systems, but they clearly do not determine them.
My second favorite (i.e., the second on my list of favorites, not my slightly-less-favorite) reductio applies to arguments for (raising the) minimum wage. Proponents of raising the minimum wage typically argue that it is necessary in order for poor folks to work their way out of poverty. From the official Democratic platform (p. 30, pdf here):
The dream of the middle class should belong to all Americans willing to work for it. We still have work to do as long as millions of Americans work full-time, fulfill their responsibilities, and continue to live in poverty. We will offer these Americans a ladder to the middle class. That means raising the minimum wage to $7.00....
Why stop at $7.00? If raising the minimum wage to $7.00 will help people ascend to the middle class, won't raising it to $7.50 make the ascension quicker? How about $10.00? $20.00? $1000?
The natural, and correct, response to this reductio ad absurdum is to point out that paying someone, say, $1000 per hour when their labor is worth far less than that is ridiculous. But the same logic applies to any stipulated wage floor. If someone is willing to work for $3.00 per hour, why stop them?
Granted, it wouldn't be easy to pay rent and buy food working for $3.00 an hour, especially if you have to house and feed more than just yourself, but it would be easier than doing so at $0.00 an hour. Legislating a lower limit on what members of the labor pool can accept for their services prices the least experienced out of the job market (here's an old Cato article about it that quotes Walter Williams' excellent State Against Blacks [which I first heard about when I read this article about the minimum wage at Mises.org]). It quickly becomes clear that the issue is less that of raising the minimum wage than that of having a minimum wage at all.
Of course, the reductio ad absurdum isn't the only, or even the most, useful logical tool. The cases against relativism and the minimum wage can be developed a good ways beyond the simple arguments made above. But the reductio's beauty is in its ability to allow a lousy premise to imply its own demise. It's like logical tai-chi.
2.11.06
John Kerry vs. Humor
So, John Kerry botched a joke on Monday. In a speech to some California college students, he said, "You know education, if you make the most of it, you study hard, you do your homework, and you make an effort to be smart, you can do well. If you don't, you get stuck in Iraq."
Not surprisingly, reaction was swift, loud, negative, and omnipresent. Despite Kerry's claim that the barb was aimed at the President, many were offended on behalf of our noble, selfless troops. Putting aside the commonly accepted absurdity that being in the military implies selflessness and nobility of purpose, the most obvious interpretation of the joke is that 'you' get stuck in Iraq as a grunt, signing up for military duty only because your lousy performance in school reduced the number of careers available to you to just the one.
It took me two full days to understand that the 'you' that gets stuck in Iraq is President Bush. He didn't do well in school, see, and now he's stuck in Iraq. That's almost funny.
Allow me to make a suggestion regarding how to make the joke actually funny. The problem is that the intended interpretation and the most likely interpretation are different. The most obvious solution, then, would be to include some uniquely presidential clue to the identity of the 'you' that gets stuck in Iraq.
For example, Kerry could have said, "You know education, if you make the most of it, you study hard, you do your homework, and you make an effort to be smart, you can do well. If you don't, you get your country stuck in Iraq."
While this is certainly an improvement on Kerry's weak effort, it is, perhaps, too subtle. A less subtle possibility: "You know education, if you make the most of it, you study hard, you do your homework, and you make an effort to be smart, you can do well. If you don't, you end up President of the United States and get your country stuck in Iraq."
This makes obvious another, even better option, namely to leave Iraq out of it: "You know education, if you make the most of it, you study hard, you do your homework, and you make an effort to be smart, you can do well. If you don't, you end up President of the United States."
This version has three important qualities: it's funny (at least, it's funnier than Kerry's joke), its intended referent and the most likely referent to be assumed by the listener are the same (you'd have to be truly dense to fail to get who it's about), and it does what Kerry said he was trying to do in the first place - take a jab at the President.
I know Iraq is 'topical', but it's also a very loaded issue to bring up, especially at the end of an election campaign (granted, it's only a midterm). Poking fun at the fact that the President cruised through an Ivy League 'education' never gets old, though.
As a final aside, I think I have shown that the old adage that 'a joke always dies on the operating table' is not necessarily true. If you start with a very unfunny joke, ineptly delivered, you may well be able to analyze your way to something funny. Funnier, anyway.
27.10.06
I don't exist. [updated]
More specifically,
Noah
There are 32,997 people in the U.S. with the first name Noah. Statistically the 1003rd most popular first name. (tied with 30 other first names) More than 99.9 percent of people with the first name Noah are male.
Silbert
There are 510 people in the U.S. with the last name Silbert. Statistically the 48525th most popular last name. (tied with 4686 other last names)
Noah Silbert
There are 0 people in the U.S. named Noah Silbert. While both names you entered were found in our database, neither was common enough to make it likely that someone in the U.S. has that name.
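The site's zero count squares with a quick back-of-the-envelope calculation. The sketch below is my own, not the site's method: it assumes, dubiously, that first and last names are assigned independently, and uses a ballpark U.S. population of 300 million.

```python
import math

US_POP = 300_000_000  # rough 2006 U.S. population; an assumption, not the site's figure
noahs = 32_997        # count of first-name Noahs, per the site
silberts = 510        # count of last-name Silberts, per the site

# If first and last names were independent, the expected number of
# people named "Noah Silbert" would be:
expected = US_POP * (noahs / US_POP) * (silberts / US_POP)

# Modeling the count as Poisson with that mean, the chance that at
# least one such person exists is:
p_at_least_one = 1 - math.exp(-expected)

print(f"expected Noah Silberts: {expected:.3f}")
print(f"P(at least one): {p_at_least_one:.3f}")
```

On the independence assumption, the expected count comes out well under one, so a living Noah Silbert is roughly a one-in-twenty event: unlikely, but hardly impossible.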
Update: The update should really be a revision of the title, to reflect what the more thorough analysis of my name indicates: as Josh points out, it's not that I don't exist, but that, as an allegedly attested occurrence of a "Noah Silbert", I am merely extremely statistically unlikely.
26.10.06
Mortality Surveillance
Perhaps appropriately, Josh has a death watch on this blog. He also has a second blog dedicated to a recent fit of self-discipline with regard to algorithm analysis. On his first blog, he describes this second blog as "a daily journal" on his reading of a foundational three volume algorithm analysis book.
Well, it has been some time since he posted to this second blog. In fact, it has been nearly as long as it was between my last post and today's unexpected flurry of bloggery here at Source-Filter. It is possible, perhaps plausible, that this delay, like the delays in my own posting schedule, indicates that Josh's secondary blog is terminally ill.
Thus, I could, in theory, return the favor to Josh and commence what I might call an 'expiration vigil' for his Knuth blog. I'll have to think about it for a while before making a final decision...
25.10.06
Medical Research and Signal Detection Theory
On my way home this afternoon, I heard an interesting story on NPR about a new medical study concerning a new and exceptionally effective lung cancer screening technique. The story was interesting for two distinct, though related, reasons: it can be used to illustrate the utility of signal detection theory, and it is a rare example of accurate (and precise) media coverage of scientific research.
Signal detection theory's utility resides both in its ability to tease sensitivity and decision bias apart and in what it tells us about how they relate. For a given level of sensitivity, making your decision criterion more liberal will increase both the probability of accurately detecting a signal that is, in fact, present (i.e., your 'hit' rate) and your probability of inaccurately 'detecting' a signal that isn't (i.e., your 'false alarm' rate), while making your decision criterion more conservative will have the opposite effect. Conversely, for a given decision criterion (defined in terms of hit rate), increasing sensitivity will lower the false alarm rate while decreasing sensitivity will increase it.
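These relationships can be sketched in a few lines of Python. This is a minimal illustration of the standard equal-variance Gaussian observer; the particular d′ and criterion values are made up for the example, not drawn from the NPR story or the study.

```python
from statistics import NormalDist  # standard library, Python 3.8+

Phi = NormalDist().cdf  # standard normal cumulative distribution function

def rates(d_prime, criterion):
    """Hit and false-alarm rates for an equal-variance Gaussian observer.

    The signal and noise distributions sit at +d'/2 and -d'/2, and the
    observer says 'signal present' whenever the observation exceeds the
    criterion. Negative criteria are liberal, positive are conservative.
    """
    hit = Phi(d_prime / 2 - criterion)
    false_alarm = Phi(-d_prime / 2 - criterion)
    return hit, false_alarm

# Fixed sensitivity (d' = 2): a more liberal criterion raises BOTH rates.
for c in (0.5, 0.0, -0.5):
    h, f = rates(2.0, c)
    print(f"criterion {c:+.1f}: hit rate {h:.3f}, false-alarm rate {f:.3f}")

# Fixed hit rate: higher sensitivity lowers the false-alarm rate. With
# d' = 2 and c = 0 the hit rate is Phi(1); with d' = 3, the same hit
# rate obtains at c = 0.5, where the false-alarm rate is smaller.
_, f_low = rates(2.0, 0.0)
_, f_high = rates(3.0, 0.5)
print(f"same hit rate, d'=2 vs d'=3: false alarms {f_low:.3f} vs {f_high:.3f}")
```

Sliding the criterion traces out the familiar ROC curve for a given d′; improving the test itself moves you to a better curve.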
How does this relate to the study discussed in the NPR story linked above? The study presents a new, more sensitive test for early cases of lung cancer. This higher level of sensitivity will enable doctors to detect many more cases of lung cancer much earlier than they could before, which has two effects. More lung cancer cases caught early could lead to more lung cancer cases treated successfully and more misdiagnosed false alarms and inappropriate, expensive, and stressful treatment.
Now, signal detection theory tells us that, at least in principle, sensitivity and decision bias are independent. In fact, there is a lot of experimental evidence that this is the case. For example, you can systematically shift people's decision criteria around by manipulating the relative frequency of occurrence of signal presence versus signal absence or the relative value of each type of response. Nonetheless, in a 'real world' situation like this, in which the stakes can be very high, decision bias and sensitivity can interact heavily.
The old (i.e., standard) tests are very insensitive to early lung cancer. Extreme insensitivity to the early stages of lung cancer precludes the utility of an adjustable decision criterion. Only relatively conclusive evidence of lung cancer even offers grounds for making a decision to get treatment or not. Now that a rather sensitive test is available, doctors are, in principle, free to set their decision criteria wherever they want. Hence, understanding the relationship between accurately catching and treating early cases and inaccurately mistreating non-cases becomes very important.
How does this relate to accurate (and precise) media coverage of a research issue? The NPR report does a good job of reporting these issues, which seems to me to be unusual in science reporting. There are those on the 'pro-hit' side who take this study to indicate that lung cancer is on par with other forms of cancer that have become very treatable, and there are those on the 'anti-false-alarm' side who warn of the danger of, well, false alarms. While I don't believe that balance for balance's sake makes for good reporting, in this case balance is appropriate. The relationship between hits and false alarms makes that clear.
The report also discusses a methodological limitation of the study, namely that the lack of a control group severely limits what this study tells us about the efficacy of early diagnosis and treatment of lung cancer. Again, this attention to detail with regard to research is unusual in the media.
Whence 'precision'? All this in less than five minutes of audio.
18.10.06
The British The Office, The American The Office
Prior to seeing it, I heard mixed reviews of the BBC show The Office. Most of these reviews came from friends who were working, or had worked, in offices. They didn't like it, not one little bit. Someone told me to stick it out, that it would get funny after a few episodes.
I don't remember who it was, but they were right. Episode one was painful to watch. I quickly learned to cringe every time Ricky Gervais' David Brent entered a scene. Episode two was excruciating. It was clear why anyone with real office experience would find the show repugnant, at least initially. By episode three, I loved it. I still cringed, squirmed, winced, and probably moaned. I watched both regular seasons; the Christmas special hadn't come out on DVD yet. You may or may not know that the second season ends on a very low note. I had gotten so involved in the emotional lives of the characters that it was, well, crushing. Thankfully, the Christmas special wraps it all up very nicely without cheapening anything. Before too long, I watched it all again. The (British The) Office is brilliant, in every respect.
Now, this was well after The Office had aired for the first time in Britain. In fact, it was just before the American version began its first season. I was wary of a new version, but Steve Carell is funny as hell, so I gave it a chance, and I watched the first episode.
I felt like it followed the first episode of the original series too closely. It was funny, but not as funny as the original. I liked the casting in the original version quite a lot, and felt like some of the differences in the new one were no good.
I didn't watch any more episodes until tonight, when I re-watched the first episode. I still felt like it followed the first British episode closely, but not so closely that it felt unoriginal. I noticed nice variations on jokes from the original (the particulars of an early exchange between boss and receptionist) and new American jokes I had missed the first time through. I found it plenty funny enough to watch the second episode.
The second episode is brutal. It reminded me of what I liked so much about the other version. It evoked out-loud laughter and uncomfortable shifting in my (office) chair, as The Office should. Steve Carell is, indeed, funny as hell. With the benefit of decaying memory and a couple of years distance, I see clearly now that the American The Office is well worth watching. I will continue to do so.
However, I am still a bit wary. One of the best, and most interesting, properties of the British version is its length. Two series of six episodes each, two 45 minute specials. It left me simultaneously wanting more and feeling very satisfied that there was no more to be had. I know that the American version lasts longer. This could be detrimental to the overall package, or it could point to a worthwhile divergence from the original. I'm sure I'll enjoy putting myself through the discomfort of finding out.
13.10.06
The Death of Solemnity [updated]
Josh is a good friend. He has started a death watch on my blog. Well, he can put that death watch right back where it came from. At least until a week or so from today, when I get around to blogging again.
Not that this very short post will do much to quell the rising tide of voices crying out in despair as my priorities shift away from blogging (Josh is right about that bit). For now I will merely initiate a moratorium on solemn promises, whether blogging related or not.
Anyway, here's what makes me blog tonight: ever since the "nuclear" test a few days ago in North Korea, I have been hoping that it turns out to have been a bluff - a great big pile of conventional explosives in a hole. First of all, this is much funnier than North Korea actually having nuclear weapons. Second, it's also much better for pretty much everyone other than Kim Jong-Il and his cronies.
The day of the test, there was evidence of either a bluff or simple incompetence on the part of the North Korean nuclear scientists: the explosion was unexpectedly small. Today, CNN reports that there is no radiological evidence for a nuclear explosion.
This is also consistent with either a bluff or incompetence. While I hope for the former, one of these two hypotheses is looking more and more likely.
Update: Now the CNN report linked above says they did find radioactive material flying around above Korea. Oh well. I suppose it's still possible that it was a dirty bomb (i.e., still a bluff), but this wouldn't be as funny as a radiation-free bluff. It's nice to know that the evidence still points to incompetence, though.
4.10.06
Political Science [updated 6.10.2006]
The Cato blog has an irritating new post (by Jerry Taylor) that criticizes what should be, but may well turn out not to be, a worthwhile new political organization with an adequately descriptive name - Scientists and Engineers for America.
Taylor is keen to complain about SEA, and the issues he raises are potentially valid, but very little on the SEA website and nothing Taylor presents about the organization provide reason for worry. Taylor quotes SEA, writing that its purpose is
to campaign for politicians “who respect evidence and understand the importance of using scientific and engineering advice in making public policy.” While the group professes to be nonpartisan, “the group will discuss the impact the Bush Administration’s science and technology policies have had in their fields and the need for voters to consider the science and technology policies by candidates in this year’s mid-term elections.”
While he undoubtedly has reason to be skeptical - many, many academics, scientists included, are, in fact, far left - it is entirely reasonable for a nonpartisan group to pay special attention to the Bush administration's policies. After all, Bush is in the sixth year of his presidency. It would make little sense for such an organization to focus primarily on the policies of former administrations. It is possible that SEA's singling out of the Bush administration is politically motivated, just as it is possible that it is completely reasonable. Taylor continues:
I imagine that most people would agree that, in the words of SEFA [sic], “Scientists and engineers have a right, indeed an obligation, to enter the political debate when the nation’s leaders systematically ignore scientific evidence and analysis, put ideological interests ahead of scientific truths, suppress valid scientific evidence and harass and threaten scientists for speaking honestly about their research.” But there’s more than a whiff of the sentiment here that Americans should just shut up and let the guys in the white coats run the country.
Again, while he may well have reason for concern, nothing in either of these quotes from SEA is disagreeable, at least not to me. In any case, whiffs don't make for coherent counter-arguments.
Case in point: Taylor points out two obvious truths about science - "...there is disagreement among scientists about many of the issues they are concerned about..." and "...scientific truth is not determined by.... majority votes within politicized professional bodies." - and makes a truly annoying move, linking to an outdated book by Thomas Kuhn as 'support' for the half-redundant, half-irrelevant assertion that "[v]irtually every single thing that the scientific "consensus" believes today was once a fringe minority perspective." (link in original).
I will see Taylor's "virtually every single thing" and raise him an unqualified "every single thing." New theories have to start somewhere, but no one with an ounce of sense believes they occur simultaneously to even a sizable plurality, much less a majority, of scientists. Instead, theories start small, conceived typically by one person, perhaps on occasion by a small integer larger than one people. This fact is utterly banal, and it is irrelevant to Taylor's complaints. I would even argue that for his first two assertions to bear much weight, they must be situated in a broad view of how science works in general, which, at the very least, accounts for non-miraculous theory generation.*
I followed Taylor's link to the SEA homepage and read the introduction page, the 'scientific bill of rights', and the 'issues' page, though I haven't followed all of the links on the issues page. Almost everything I read, I liked. The 'bill of rights' even deals mostly in negatives (i.e., 'thou shalt nots'), which Josh rightly points out is precisely how rights are best defined. I also have a nitpick with 'right six':
6. Appointments to federal scientific advisory committees shall be based on the candidate’s scientific qualifications, not political affiliation or ideology.
It should read "Appointments to federal scientific advisory committees shall be based on the candidate’s scientific qualifications." Full stop. It's no good listing all of the things that shouldn't serve as criteria.
The one part of SEA's site that gave me more serious pause was the 'Environment' paragraph on the issues page:
Environment: We need to push beyond our first generation of environmental laws and regulations and move to more modern environmental policies that spur continued technological innovation. Government-industry covenants could allow businesses, in consultation with regulators and the public, to craft the most effective and efficient strategies to meet broad national environmental goals through market-based limits and incentives that don't harm our economy.
This is incredibly vague, and where it's not vague, it's incoherent. It's even more vague than the rest of the site, which is plenty vague in its own right. Perhaps not surprisingly, the vagueness is part of what makes it agreeable. Most of what they say on the site is compatible with a variety of political agendas, including libertarianism. This seems entirely appropriate.
Taylor also links to two Cato papers that look to be pretty interesting (I haven't read them), so his 'argument' isn't completely limited to the silliness above. As far as I can tell, one paper is about the methodological underpinnings of environmental policy (pdf), and the other is about politics and science more generally (pdf). I imagine that these provide some support for Taylor's general position(s) on science and policy, but I can't imagine they have much to say about SEA directly.
I hope that SEA turns out to be a worthwhile organization. Although I am not as pessimistic about its chances as Taylor is, I do have enough reservations to withhold my 'signature' for now. My 'conversion' to classical liberalism is based largely on mistrust of political organizations (the government chief among them). I'll keep an eye on SEA, and I encourage you, my vast army of loyal, thoughtful readers, to read what they have to say instead of simply taking Jerry Taylor's word for it (I know you all use that Cato blog link at the top of this page on a regular basis).
* A further illustration of the pointlessness of Taylor's Kuhn reference (and of its needlessness) is the fact that every single theory that scientists don't currently believe also started out as a "fringe minority perspective." Failure to recognize the irrelevance of the size of the source of scientific ideas lends undeserved credence to hacks who point out the obvious truth that, as their theories are now ridiculed, so were Newton's. Taylor doesn't do this here, but some of what he did do is related to this, and it bugs me, so I wanted to address it.
Update: Josh brings up some damning material that I missed on the SEA site, and makes some good points about a number of other potential problems with the organization. It looks like Taylor's reaction to the organization was not as knee-jerk as I thought, although seeing Josh find such clear evidence of exactly what Taylor was complaining about makes it something of a mystery why Taylor chose the much less incriminating quotes that he used in his post.
30.9.06
Belated Birthday Notice
Yesterday was the 125th birthday of Ludwig von Mises. Josh has a post about it, which includes a link to an excellent post by George Reisman about the importance of Mises' work. I wanted to add that the Mises Institute also has a nice biographical piece in honor of his birthday.
28.9.06
The Media Sucks. No. The Media Suck.
Listening to NPR this evening, I heard a good example of one of the most irritating and, frankly, damaging behaviors of the media - parroting assertions made by politicians with no accompanying evidence for, or against, the assertion.
The story that got me thinking was about the detainee interrogation bill that recently passed both houses of Congress. The parroting that got me irritated was the following quote:
"Our most important responsibility is to protect the American people from further attack," the president said. "And we cannot be able to tell the American people we're doing our full job unless we have the tools necessary to do so."
Perhaps it's the political reading I've been doing lately that makes me feel this way, but I think that this quote is utterly, and obviously, ridiculous. First, there is no single most important responsibility of any branch of government, unless you're dealing in extremely (and appropriately) vague obligations like 'upholding the constitution'. Second, even if there were a single most important responsibility of, say, the executive branch, it would not be at all straightforward to decide what it is. Third, even if the appropriate calculations have, somehow and in some trustworthy way, been done, no one in the Bush administration, the House, Senate, court system, or any state government has provided an ounce of evidence or argument that protecting the American people from attack is, in fact, the single most important responsibility. As stated in the Cato dispatch:
In "Assaults on Liberty," Robert A. Levy, senior fellow in constitutional studies at the Cato Institute, argues: "In the post-9/11 environment, no rational person believes that civil liberties are inviolable. After all, government's primary obligation is to secure the lives of American citizens. But when government begins to chip away at our liberties, we must insist that it jump through a couple of hoops. First, government must offer compelling evidence that its new and intrusive programs will make us safer. Second, government must convince us that there is no less invasive means of attaining the same ends. In too many instances, those dual burdens have not been met."
At first glance, it appears that even the Cato fellows (this one, anyway) are buying the assertion that so bothers me, but if you read carefully, it's clear that Levy's assertion is much broader than Bush's. Saying that "government's primary obligation is to secure the lives of American citizens" is vague, likely intentionally so. The case can easily be made that "securing the lives of American citizens" is not coextensive with waging a war on terror. For example, it also involves providing and maintaining a legal system - courts, police, and the like - to protect private property rights. It seems to me that this kind of security is every bit as important as, if not more important than, fighting a 'war' against a tactic, engaging, at extremely high cost, an enemy that is nowhere near as powerful as those prosecuting the 'war' would have us believe.
Bush's assertions - and the willingness of pretty much every media outlet to repeat them without critical commentary - are all the more galling given that our invasion and occupation of Iraq is making the threat of terrorism worse. Worse still, the Bush administration is not only not willing to put the security of basic constitutional rights on par with their favored narrow construal of security as pertaining only to the threat of terrorism, they are willing, even eager, to cause injury to these basic rights. From Unclaimed Territory:
...as Law Professors Marty Lederman and Bruce Ackerman each point out, many of the extraordinary powers vested in the President by this bill also apply to U.S. citizens, on U.S. soil.
The silver lining? The related warrantless eavesdropping bill likely will not be passed before recess. Let's hope we can get some good old-fashioned gridlock in place this November to keep this travesty from becoming law. And let's hope that, somehow, court challenges to the detainee bill start repairing the damage soon.
As Ackerman put it: "The compromise legislation... authorizes the president to seize American citizens as enemy combatants, even if they have never left the United States. And once thrown into military prison, they cannot expect a trial by their peers or any other of the normal protections of the Bill of Rights." Similarly, Lederman explains: "this [subsection (ii) of the definition of 'unlawful enemy combatant'] means that if the Pentagon says you're an unlawful enemy combatant -- using whatever criteria they wish -- then as far as Congress, and U.S. law, is concerned, you are one, whether or not you have had any connection to 'hostilities' at all."
This last point means that even if there were a habeas corpus right inserted back into the legislation (which is unlikely at this point anyway), it wouldn't matter much, if at all, because the law would authorize your detention simply based on the DoD's decree that you are an enemy combatant, regardless of whether it was accurate. This is basically the legalization of the Jose Padilla treatment -- empowering the President to throw people into black holes with little or no recourse, based solely on his say-so.
I would like to think that, agree with the point of view or not, if we had more of this kind of behavior in the media, we'd have less of the kind of behavior described above in the government. That's probably wishful thinking, but it bothers me greatly that the media, whose freedoms are ensured precisely so that they can be adversarial with respect to the government, are typically all too willing to abstain from critical thought.
26.9.06
The importance of property rights [updated]
As I wrote in one of my first posts, I plan on using this blog in part to document my "slide into the netherworld of classical liberalism," or libertarianism. Because it is difficult, if not impossible, to overstate the importance of private property rights in libertarian philosophy, if I am to execute this slide effectively, I will have to read up on the subject. I am currently reading Timothy Sandefur's Cornerstone of Liberty: Property Rights in 21st-Century America, which seems to be as good a place as any to begin.
As the title of this post suggests, though, I do have a nit to pick with an early portion of the book. The first proper chapter - 2, 'Why Property Rights Are Important' - is intended to lay the groundwork for the rest of the book. Unfortunately, Sandefur leads off with a pretty weak argument: the first subsection of the chapter is called 'Property Is Natural'. The gist of this section is that non-human animals and humans 'naturally' seek out private property, property is universal in human society, and depriving people of property has all sorts of negative effects. So, the nit I wish to pick is this: only the last of these has any hope of justifying (the importance of) property rights.
It is ironic that Sandefur attempts, initially, to justify property rights by way of a simple appeal to 'nature', as this is a fine example of the naturalistic fallacy. Even if we accept that private property is naturally sought out and universal among human societies, and I see no reason to believe otherwise, it does not follow that private property should be sought out or universal. It may turn out to be the case that private property should be sought out and that it should be universal (and I believe that this is, in fact, the case), but this conclusion must be arrived at via some other logical path.
I am optimistic that the book will be worth reading, though, for a couple of reasons. First, the next two subsections in chapter two have titles indicative of promising alternate logical paths: 'Property Is Good For Individuals' and 'Property Is Good For Society'. Second, despite my objections to the naturalistic fallacy, the 'Property is natural' subsection has some value. As stated above, this section discusses the negative effects of depriving people of their property. Insofar as these are well documented effects, their avoidance can serve as a justification for private property.
Sandefur quotes Dan Dennett, one of the more interesting philosophers of cognitive science, in a discussion of how humans use artefacts to establish their 'selves' as distinct from the world around them. The Dennett quote concerns the difficulties commonly encountered by elderly folks removed from familiar home environments to nursing 'homes'. Part of living in your own home is creating a familiar and useful environment. When removed from this, the elderly (and some young folks, to be sure) can have severe difficulty with basic daily activities. Our home environments come to mesh very closely with cognitive systems governing memory and perception.
On reading this, I was reminded of discussions in Philosophical Foundations of Cognitive Science (one of the two most blog-post inducing classes [with Friedman's class] that Josh is taking now) about the blurriness of the division between our 'selves' and our environments. Here's an example: when doing long division or multiplication by hand, most people use pencil (or pen) and paper to keep track of 'big picture' information while they perform simple calculations on subsets of the numbers (the digit in the 'ones' place, in the 'tens' place, in the 'hundreds' place, etc...). In a very real sense, then, that person's cognitive system straddles the skin, the most obvious and intuitive boundary between a person and his environment, to encompass the mind and part of the environment.
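The pencil-and-paper example above can even be sketched in code. This is purely illustrative (the function and its name are mine, not anything from the class or the book): the list plays the role of the paper, holding the 'big picture' partial results outside the part of the system doing the single-digit work.

```python
# A toy sketch of grade-school long multiplication, assuming base-10 digits.
# The `partials` list stands in for the pencil and paper: it stores the
# intermediate results while each step only handles one digit of b.

def long_multiply(a: int, b: int) -> int:
    partials = []  # external scratch space, like the sheet of paper
    for place, digit_char in enumerate(reversed(str(b))):
        digit = int(digit_char)
        # one partial product per digit of b, shifted by its place value
        partials.append(a * digit * 10 ** place)
    return sum(partials)
```

The point of the analogy is that the final answer never exists 'in the head' of any single step; it only emerges from the externally stored partials, much as the pencil marks carry the load for the human calculator.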
Although this particular situation only applies to people who have a (perceived) need to carry out long division and multiplication (and can't do it in their heads), the point is valid more generally, and it ties in with some of Sandefur's arguments about the personal value of 'home'. In addition to the cognitive value of 'home', Sandefur discusses its 'sentimental' value (and argues that all value is 'sentimental' insofar as it is subjective).
If we accept that our 'selves' - specifically our cognitive systems - extend into our environments, then the fact that there are negative effects of depriving someone of private property is clear. I can't imagine a justification for depriving an autonomous agent of his memory or perceptual faculties.
Update: Josh makes a good point that, if I'm remembering correctly, Sandefur does not (at least not explicitly), which is this: the onus is on those who would intervene in nature to provide evidence that such intervention is better than leaving it alone. This is, I think in retrospect, the point of Sandefur's discussion of the elderly in nursing homes, the problems faced by adults who were raised in property-free kibbutzim, and Soviet policy. The 'Property Is Natural' subsection would be improved a good bit if this line of reasoning were made explicit, as Josh has done.
This point of view brings up some interesting ethical questions that I will mention but not delve into at the moment. Any claim of 'better than' carries with it an implicit measure of 'good', about which reasonable people can potentially disagree. The end result is that property rights are put on a firmer foundation than the naturalistic fallacy can provide, although it is, perhaps, not as firm as we might want it to be.
23.9.06
One last time with celebrinerd
When I wrote about using links (lots of links) to get celebrinerd into popular usage, I wasn't thinking very carefully about where those links led to. I had forgotten about google bombing, described with specific reference to ken-jennings.com here.
So: celebrinerd, celebrinerd, celebrinerd.
22.9.06
Promises, promises... (pt. 2)
I promised a while back to post something new on this blog every day. Technically, I have fulfilled this promise.
With this post, I am solemnly backing out of this promise. I will still write (for the blog) every day, but I might not post what I write every day.
As stated in my first post, I was inspired to blog primarily to communicate about the research that I read about and conduct. Thus far, I have found that posts about my research (and related topics) take a long time to write, if they're to be written well. Since I would like my posts to be written well, I have decided to relax my posting frequency requirements.
Note that, prior to making a promise to post every day, I wrote "[n]o promises regarding frequency (or quality) of posts" in my first post. Today, this gives me a nice 'out' with regard to posting frequency. Perhaps in the future it will give me a nice 'out' with regard to posting quality, but let's hope not.
21.9.06
Programming, perception, and a priori postulates
I'm in the middle of working on my third, and final, qualifying 'exam'. I took a sit-down exam (no scare quotes) a little over a year ago, and I designed and carried out a study of speaker focus and fricative production during the Spring and Summer of this year. The final 'exam' will be much like the second one - I will design and conduct a research project from the ground up.
This afternoon, I was writing a Matlab script to do some corpus analysis. The general plan with this study is to investigate a couple of different kinds of frequency effects in speech perception. In terms of word recognition, there are a number of well documented frequency effects - on average, frequent words are more accurately recognized in noise identification tasks and responded to more quickly in lexical decision tasks than are infrequent words. Lower level (i.e., sub-word) frequency effects are, as far as I know, less well documented.
With regard to the qual, I am primarily interested in what I have been calling (phonological) contrast frequency. Phonologists use the term minimal pair for two words with distinct meanings whose forms are identical aside from a single feature difference at a single location. For example, the words 'sue' and 'zoo' - [su] and [zu] - mean two very different things, and the only difference in form is that, at the beginning of the word, the former has a voiceless fricative whereas the latter has a voiced fricative.
In its simplest form, the contrast frequency for a given pair of speech sounds is the number of minimal pairs involving these sounds. You can very likely come up with other minimal pairs involving 's' and 'z', but it would be very hard for you to come up with many minimal pairs for, say, the sounds at the beginning of 'this' and 'think'.
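In its simplest form, this tally is easy to automate. Here's a minimal sketch (in Python rather than the Matlab mentioned below, and with an invented toy lexicon of one-character-per-phoneme transcriptions - not the HML's actual format) of counting minimal pairs for a pair of sounds:

```python
# Toy lexicon of phonemic transcriptions, one character per phoneme.
# These entries and the ASCII phoneme symbols (N = velar nasal, D/T = the
# voiced/voiceless 'th' sounds) are illustrative assumptions, not HML data.
lexicon = ["su", "zu", "sIN", "zIN", "DIs", "TINk", "sIp", "zIp"]

def minimal_pairs(lexicon, a, b):
    """Find word pairs that are identical except at one position,
    where one word has sound a and the other has sound b."""
    pairs = []
    for w1 in lexicon:
        for w2 in lexicon:
            # Only compare same-length words, and count each pair once.
            if len(w1) != len(w2) or w1 >= w2:
                continue
            diffs = [(x, y) for x, y in zip(w1, w2) if x != y]
            if len(diffs) == 1 and set(diffs[0]) == {a, b}:
                pairs.append((w1, w2))
    return pairs

print(minimal_pairs(lexicon, "s", "z"))  # three s/z minimal pairs
print(minimal_pairs(lexicon, "D", "T"))  # none for 'this'/'think' sounds
```

Run over this toy lexicon, 's'/'z' yields three minimal pairs while 'D'/'T' yields none, which is the asymmetry at issue.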
My third qual will address at least one possible psychophysical effect of differences in contrast frequency. Of course, I first have to establish that there are suitable differences in contrast frequency for me to employ in a perception experiment. I was working on this today, using the Hoosier Mental Lexicon, a 20,000 word dictionary that includes machine readable phonemic transcriptions and word usage frequencies, among other information. It has a good track record, having been put to good use in, for example, word recognition work documenting the effects of lexical neighborhoods (I'll likely post about this at a later date).
I want to use the HML to tally some contrast frequencies so that I can use the best possible pairs of sounds (i.e., those that will maximize the effect I am looking for) to carry out a psychophysical experiment. It turns out to be less than entirely straightforward to tally contrast frequency, mostly because you have to make a number of potentially unwarranted assumptions about the organization of speech sounds (and words) in the mental lexicon.
In general, the idea of contrast frequency seems straightforward - simply count the number of minimal pairs for a given sound. Getting a machine to count the number of minimal pairs is reasonably easy. But what about pairs of words that are nearly minimal pairs (e.g., 'this' and 'think')? It seems to me that, if I'm interested in the relationship between 's' and 'z', I should take into account the relationship between every pair of words with one member containing an 's' and the other a 'z' - 'sue' vs. 'zoo', 'sing' vs. 'zing', sure, but 'ask' vs. 'as' and 'safe' vs. 'zap', and all the rest, too. But if I'm going to take all the occurrences of these sounds into account, I have to devise a measure of how similar these two words are (i.e., how important the differences are), and how the location of the 's' and the 'z' in their respective words affects this.
So far, I've written code that will find all the occurrences of any given pair of sounds. It then takes each occurrence of one of them and, for each occurrence of the other, compares their environments - the sounds that come before and after the pair of interest. I've been thinking of various ways to weight the value of a difference in environment according to how far from the pair of interest the difference occurs, as it seems reasonable to assume that the immediate environment plays a more important role in contrast frequency. If two sounds in a non-minimal pair are in completely different environments, they will hardly seem contrastive at all. If these sounds are in a minimal pair, they are the very definition of contrastive. In between these two extremes, I assume there is some in-between level of 'contrastiveness', so it seems like a good idea to take these cases into account along with the true minimal pairs.
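The environment comparison just described can be sketched like so. This is a hedged illustration, not the actual code from the qual project: the exponential decay rate, the one-character-per-phoneme transcriptions, and the decision to treat two aligned word edges as matching are all assumptions made for the example.

```python
def environment_difference(w1, i, w2, j, decay=0.5):
    """Compare the environments of w1[i] and w2[j]. Sounds at the same
    distance d from the targets are compared in both directions, and a
    mismatch at distance d adds decay**d, so mismatches in the immediate
    environment count for more than distant ones. The decay value 0.5
    is an arbitrary illustrative choice."""
    score = 0.0
    max_d = max(i, j, len(w1) - i, len(w2) - j)
    for d in range(1, max_d):
        for p, q in ((i - d, j - d), (i + d, j + d)):
            # None marks a position past a word edge; two aligned edges
            # count as a match here (a simplifying assumption).
            s1 = w1[p] if 0 <= p < len(w1) else None
            s2 = w2[q] if 0 <= q < len(w2) else None
            if s1 != s2:
                score += decay ** d

    return score

# A true minimal pair has identical environments, so its difference is 0 -
# the maximally contrastive case.
print(environment_difference("su", 0, "zu", 0))    # 0.0
# A near-minimal pair gets a small penalty for its one distant mismatch.
print(environment_difference("sIN", 0, "zIp", 0))  # 0.25
```

A score of zero recovers the true minimal pairs; larger scores grade the in-between levels of 'contrastiveness' mentioned above.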
I've also thought about how nice it would be if the transcriptions in the HML included syllable affiliation information for each consonant. It seems reasonable to assume that two sounds in a non-minimal pair would be 'more contrastive' in some sense if they were both in the 'same' syllable position in their respective words. Unless I code this into the HML myself, though, it isn't going to play a role in this project.
By writing code to get a computer to carry these functions out, I have forced myself to make explicit a number of assumptions about how speech sounds are organized in the mind. These assumptions inform a number of potentially important decisions I have to make. To name three, I have to decide how to weight segmental distances when tallying environment differences (should I weight with an exponential decrease or the reciprocal of the number of segments?), how to deal with word edges (if, after aligning occurrences of two sounds, the word edges do not line up, how many difference-tallies do the misaligned edges count for?), and how (or whether) to factor in usage frequencies and morphosyntactic properties (do I incorporate raw usage frequencies, the logarithm of raw usage frequencies, and/or the relative proportion of content vs. function words when tallying a pair's contrast frequency?).
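To make the first of those three decisions concrete, here are the two candidate distance weightings side by side (the parameterizations are illustrative, not choices the project has settled on):

```python
def weight_exponential(d, rate=0.5):
    """Penalty for a mismatch d segments from the target sounds,
    decaying exponentially with distance (rate is an assumed value)."""
    return rate ** d

def weight_reciprocal(d):
    """Penalty decaying as the reciprocal of the number of segments."""
    return 1.0 / d

# The reciprocal scheme keeps distant context relevant for longer;
# the exponential scheme discounts it much more aggressively.
for d in (1, 2, 4):
    print(d, weight_exponential(d), weight_reciprocal(d))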
The next step is to fix a silly indexing mistake I made (I had to leave promptly at 4:30 to go eat carnitas, and so could not finish the code today), see what the numbers look like for some potentially interesting pairs of sounds, then check the literature on 'functional load', a notion that is likely closely related to my 'contrast frequency'.
20.9.06
Tortured legal reasoning
The cato blog has an excellent (and short) post on Bush's apparently imperiled pet legislation regarding torture.
It's frustrating to me that the vast majority of media outlets fail to discuss issues like this with the clarity and simplicity of this cato post. The basic issue is a straightforward bit of legal philosophy.
Most people would agree that, generally speaking, torture is immoral. However, we can all imagine extreme circumstances in which we might be willing to sanction torture, cases in which the alternative is much worse. We've been hearing a lot about the 'ticking time bomb' scenario lately precisely because this is the kind of extreme circumstance that would cause most of us to reconsider an otherwise reasonable aversion to inhumane treatment of a prisoner who may well be innocent.
So, is it better to have a law that prohibits or authorizes the immoral act? The severity of prohibition would be alleviated greatly by the fact that, in the truly extreme case, it is likely that punishment would be minimal, while authorization for the sake of the rare extreme case opens a Pandora's toolbox for the everyday interrogator.
It all revolves around due process. In the case of prohibition, due process (e.g., the protections granted by the 6th amendment) would help to ensure that extreme circumstances can be presented and explained in an attempt to justify a possible violation of a different bit of due process (e.g., the 8th amendment). In the latter case, this violation of due process would be codified.
Once more with the celebrinerd...
Jennings has posted on celebrinerd fever again, and his post contains links to no fewer than three other blogs that are fighting the good fight.
I have also announced our efforts around these parts on his message boards.
Links! Lots of links will bring 'celebrinerd' to the masses!
My post about 'celebrinerd' has been linked in both editor-Amy-from-Ohio's post about 'celebrinerd' and Josh's post about 'celebrinerd'. I'm in the (very small and obscure corner of the) big time now!
Editor-Amy-from-Ohio also mentions that Cathy suggests creating wikipedia and urban dictionary entries for celebrinerd. This is a fine idea, as it gives celebrinerd a larger number of distinct URLs, and it would give us all a new place to which to link the word 'celebrinerd'.
It turns out that someone(s) went ahead and took Cathy's suggestion: celebrinerd! celebrinerd!
To make Mr. Jennings' task a bit easier, that's eight occurrences of 'celebrinerd' in this post alone (nine now), five of which link to distinct URLs.
19.9.06
Doing my part
Ken Jennings (the guy who won 74 days in a row on Jeopardy a while back) made some good fun yesterday of Time magazine's habit of cooking up stupid neologisms (they claim to be responsible for 'guesstimate', among other abominations, which makes me hate them with the fire of a thousand suns). Mr. Jennings is the subject of the newest Time neologism - celebrinerd. Today, he laments that it has yet to catch on.
Well, I'm here to do my part. He gave me, among others, his beautiful 'Dear Jeopardy' letter (also read the 'correction' at the bottom of this post). I've got the time and energy to give back, by gum.
Mr. Jennings says that at 10:30 AM, there were no google hits for celebrinerd. I tried at 9:30 PM and got two. Google displays these as "Results 1 - 2 of about 3 for celebrinerd" (emphasis mine). The first is Jennings' post from yesterday, the second a Time-internal search result, and the approximate third result has a URL distinct from the second, but takes you to the same place.
So far, I've used celebrinerd four times, including this sentence. That could very well make me a non-redundant third google hit by tomorrow. And it could verily double my readership (Ken, meet Josh. Josh, Ken.)
I'd hate for celebrinerd (5!) to go the way of 'radiorator.'
I can't stop me that easily!
Josh thinks he's caught me in a failure to live up to my sober guarantee to blog every day. Well, thanks to a technicality, he's wrong!
For those of you who do not blog (it's fun to write as if I have readers other than Josh) on blogspot, when you save a post as a draft, it saves the time at and date on which you began writing the post. Whenever you finally publish it, it bears the original time stamp.
Armed with this knowledge, you can see that I started the carnitas post around 7 PM last night, and that Josh was, for some reason, up at 3:30 AM when he started his post about my imminent slide into non-blogging oblivion. So what if the carnitas post wasn't actually visible to the public (i.e., Josh) until this morning?
My alibi is, of course, airtight. First of all, all of the above is true. Join blogspot and find out for yourself if you don't believe me. Second of all (all of two), it is simply inconceivable that someone could change the nominal date and time of a blog post to make it appear as if he had blogged on a day he hadn't. That kind of technical know-how just doesn't exist.
Anyway, for what it's worth, I announced my earnest decree to provide myself with some measure of motivation. So far, so good, even if I did actually fail to post yesterday. Here I am with a follow-up, right? No blood, no foul.
I owe Josh a heartfelt 'thank you' for caring enough to keep me on my toes here at Source-Filter. I hadn't noticed today's scheduled outage. If not for his nocturnal emission (so to speak), I wouldn't have been so prompt to post today.
Now that I look at the details of the outage, though, I see that his warnings are unduly dire. The outage is scheduled to last from 4 PM to 4:15 PM.
18.9.06
Prefiero carnitas
When I lived in Prescott, Arizona, there was a Mexican restaurant called Maya two blocks from my house. Under the name, the sign said 'Fish Tacos'. At first, I considered this undesirable.
I later came to my senses. The fish tacos and shrimp enchiladas at Maya were superb. As good as they were, though, the carnitas beat them hands down. The carnitas at Maya were the perfect combination of crispy edges and tender, juicy middles. The sides of beans and rice were excellent, the salsa was exceptional, and the size of the servings was eminently reasonable.
You might not think that you could find good carnitas in Bloomington, Indiana, but you would be wrong to not think that. By which I mean that you would be right to think that you can get good carnitas here. At Casa Brava, in fact, you can get an unreasonably large portion of very tasty carnitas for about $10 (109 Mexican pesos). The beans and rice are fine at Casa Brava, but the salsa leaves a bit to be desired. With respect to the Maya carnitas, Casa Brava's are too heavy on the tender, juicy middles, too light on the crispy edges.
It turns out that you can also get a pretty good carnita taco at Tacos Don Chuy. Although the tacos at Tacos Don Chuy remind me of the food I ate in Mexico more than just about any other Mexican restaurant, they are strictly taco fodder. They are juicy and tender, to be sure, but they are too small to function as a main course, whereas the carnitas at Maya and Casa Brava arrive as the proud centerpiece of a (careful! hot!) dinner plate. Tacos Don Chuy has the advantage of a Taco Tuesday special, though - $0.99 (10.78 MXN) per taco.
Until recently, I was satisfied with my midwestern carnita fix. Some time ago, a Chipotle Mexican Grill opened up about a block from Tacos Don Chuy. Until recently, I thought, "Who needs a corporate chain Chipotle with Casa Brava and Tacos Don Chuy around?" (There's a Qdoba Mexican Grill and a Moe's Southwestern Grill in town, too, but the former is mediocre aside from the cheese dip and both have stupid names, so I will speak of them no more.)
Well, it turns out that Chipotle has yet another worthy variant of this exalted pork product (although their value is diminished somewhat by an annoyingly snarky sign concerning the pork's former free range lifestyle). The Chipotle carnitas aren't carnitas in the typical sense - they aren't in any recognizable chunk form, but, rather, seem to be shredded post-roast. They lack crispy edges of note, but they taste fantastic.
I only ate at Chipotle after receiving a coupon for a free burrito. Two days after eating the free meal, I went back and paid full price for a slightly different version of it. Yesterday, I was back at Tacos Don Chuy for a carnita burrito. My dad's visiting tomorrow, so a visit to Casa Brava - and an order of carnitas - is likely to be in my near future, too.
The moral of the story? Even if you think you have enough carnitas in your life, you probably don't. Venture forth and find more.
17.9.06
Addendum to yesterday's post (updated)
Yesterday's post was long. Maybe too long. It certainly took longer than I had expected to write.
There was one idea I forgot to include, and, since Josh has promised a response at the end of an excellent post about programming languages and political beliefs, I want to express my forgotten idea, address an important difference between me and Josh, and respond to an issue that Josh brings up in his post.
In yesterday's post, I made a case for the importance of the interface to grammar. The very short version says that production and perception systems affect the input to the rules and constraints part of the phonological grammar. These effects can be seen in, e.g., the gaps in distributions of distinctive features. Knowledge of the grammar of a language includes, among other things, knowledge of the functions of the distinctive features in that language. Therefore, interface systems affect grammar.
All of this was in the interest of establishing that the concern for the interface in phonetics does not put it outside of linguistics proper. In retrospect, this issue isn't very interesting to me, and I don't think it's crucial to our understanding of language to have a clearly defined division between linguistics and not-linguistics. It is inconsistent to be serious about understanding language and simultaneously rule out, in principle, the value of studying performance. The justification for focusing exclusively on competence has always been that performance relies, in part, on competence, so we need a theory of competence first. Performance will come later.
That's fine, but I think it held much more water in the early days of generative theory than it does now. We've got enough of an idea about the fundamental aspects of competence to build good performance models and theories today. Of course, there is still value in focusing on competence. If, as I argued yesterday, performance systems also affect competence, that's fine, too. Our goal, after all, is a thorough understanding of language.
I strongly believe that a big part of the difference in emphasis on competence vs. performance between me and Josh is the fact that I'm a phonetician and he's a syntactician. My arguments yesterday addressed phonology exclusively, and my argument immediately above - that we may as well develop performance models now - is to be understood with respect to my interest in fairly low-level perceptual and decisional processes. It seems reasonable to assume that the effects of interface on grammar will be more numerous and easier to detect and investigate in phonology than in syntax.
But even within syntax, I think interfaces play a crucial role in the grammar. As my post yesterday mostly concerned speech sounds, the interfaces in question were those between phonological grammar and perception and production systems, while the interfaces that are likely to affect syntactic grammar are more likely with semantic, morphological, and phonological grammars than with perception and production systems.
To re-paraphrase my argument from yesterday, interfaces affect grammar because they determine the input to and, therefore, the applicability of the rules and constraints. In discussing scheme and abstraction (and libertarianism) this morning, Josh said that "[t]here's a lesson there for linguists like Noah who think that the details of the interface have something to say about the underlying engine."
On reading this, it occurred to me that we need to be very clear about what we think we're doing in creating theories of competence. It seems to me that these theories are about the function of grammar. That is, when we build, say, generative theories of linguistic universals, we're building descriptions and explanations of something akin to programmed functions over data arrays. The kind of data array used as input, and the kind needed as output, seem to me to have quite a bit of influence on the internal structure of the function.
If we're talking about discovering the functions and data arrays crucial to the operation of some system, in this case linguistic, we are manifestly not talking about which programming language these functions and data arrays are implemented in. The 'same' function can be implented in Scheme, C, C++, or Java. How efficiently or easily it is implemented in each of these languages may make you decide to use one or avoid another, without a doubt. But when you're competence-theorizing in linguistics, you're not building a language from the top down, you're observing extant languages and trying, based on these observations, to infer the basic, universal structures and functions of big-L Language.
The fact that Scheme is really good at abstraction of a given (set of) function(s) is neither here nor there with regard to the role of interfaces and grammars. Once you've settled on a language, the way you write functions will be determined by, among other things, the inputs and outputs to that particular function, proper (or improper, if you're no good at it) programming techniques, and the structure of that language. I don't see any reason to believe that big-L Language is necessarily implemented in one language or another. Observing inputs and outputs to infer the structure of a black-box function is hard, and limited, enough. Given this and the fact that multiple languages can perform the same function on the same input, observing inputs and outputs seems very unlikely to be able to tell us much about anything 'higher up' than the function itself.
Update
I've changed my mind back to caring about the partition between competence and performance, at least indirectly, and, perhaps not surprisingly, I have also reconsidered my stance on the role of abstractness. Insofar as linguistic theories purport to explain the functions (and their inputs and outputs) of language, and insofar as the details of a programming language impact the form of these functions (and their inputs and outputs), it is certainly relevant how these functions are implemented. So, the level of abstractness of linguistic functions and variation in the ability of a given programming language to achieve a given level of abstractness are potentially interesting research questions. But whether or not a linguistic model has anything specific in common with a particular programming language is not so interesting, at least not to me.
Which brings us back to Labov's performance-competences and the difference in my and Josh's interests. I believe, and I think I made the case yesterday, that perception and production systems are relevant to phonological grammar. For what it's worth, plenty of OT and, to a lesser extent, generative phonologists see perception as relevant to phonology, as well. More specifically, I think that the perceptual models I work with are useful tools to study, well, perceptual systems. Insofar as I use these tools to describe the 'competence' that underlies perceptual performance, I'm making a claim to being on the 'proper' side of the the partition dividing linguistics proper and linguistics, um, improper. Whether or not anyone else agrees with me is not all that important to me (unless a grant approval depends on it). I am confident that my research program is worth pursuing, whether or not there is consensus regarding its place in linguistics.
Which, in turn, brings us back to Josh's unhealthy obsession with syntax and my perfectly reasonable obsession with phonetics. Both of our perspectives on competence and performance are undoubtedly colored by our respective interests. Josh's view is likely, but certainly not completely, influenced by the fact that syntax is 'deeper' than phonology and phonetics. The phenomena that Josh is interested in are considerably more abstract than the phenomena that I'm interested in. Don't get me wrong, I make use of plenty of abstraction - multivariate perceptual distributions, stochastic information processing channels, and decision rules defined on them aren't exactly concrete. But I hope everyone agrees that these abstract constructs reside much closer to sub-linguistic performance systems than do models purporting to describe syntactic knowledge.
I believe that is all.
There was one idea I forgot to include, and, since Josh has promised a response at the end of an excellent post about programming languages and political beliefs, I want to express my forgotten idea, address an important difference between me and Josh, and respond to an issue that Josh brings up in his post.
In yesterday's post, I made a case for the importance of the interface to grammar. The very short version says that production and perception systems affect the input to the rules and constraints part of the phonological grammar. These effects can be seen in, e.g., the gaps in distributions of distinctive features. Knowledge of the grammar of a language includes, among other things, knowledge of the functions of the distinctive features in that language. Therefore, interface systems affect grammar.
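To make the 'gaps in distributions of distinctive features' idea concrete, here is a toy sketch. The four-stop inventory below is invented for illustration, but the two gaps it contains are the combinations commonly missing from real phoneme inventories. Representing segments as feature bundles makes a gap simply a feature combination the inventory lacks.

```python
# Toy illustration (not a real phonological analysis): a stop inventory
# represented as (voicing, place) feature bundles. A 'gap' is a feature
# combination that the inventory does not contain.
from itertools import product

voicing = ["voiced", "voiceless"]
place = ["labial", "coronal", "velar"]

# A hypothetical inventory with two typologically common gaps.
inventory = {
    ("voiced", "labial"),      # [b]
    ("voiced", "coronal"),     # [d]
    ("voiceless", "coronal"),  # [t]
    ("voiceless", "velar"),    # [k]
}

gaps = [combo for combo in product(voicing, place) if combo not in inventory]
print(gaps)  # the feature combinations the grammar's input never contains
```

The gaps that fall out are voiced velar and voiceless labial, i.e., the [g] and [p] discussed elsewhere on this page.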
All of this was in the interest of establishing that the concern for the interface in phonetics does not put it outside of linguistics proper. In retrospect, this issue isn't very interesting to me, and I don't think it's crucial to our understanding of language to have a clearly defined division between linguistics and not-linguistics. It is inconsistent to be serious about understanding language and simultaneously rule out, in principle, the value of studying performance. The justification for focusing exclusively on competence has always been that performance relies, in part, on competence, so we need a theory of competence first. Performance will come later.
That's fine, but I think it held much more water in the early days of generative theory than it does now. We've got enough of an idea about the fundamental aspects of competence to build good performance models and theories today. Of course, there is still value in focusing on competence. If, as I argued yesterday, performance systems also affect competence, that's fine, too. Our goal, after all, is a thorough understanding of language.
I strongly believe that a big part of the difference in emphasis on competence vs. performance between me and Josh is the fact that I'm a phonetician and he's a syntactician. My arguments yesterday addressed phonology exclusively, and my argument immediately above - that we may as well develop performance models now - is to be understood with respect to my interest in fairly low-level perceptual and decisional processes. It seems reasonable to assume that the effects of interface on grammar will be more numerous and easier to detect and investigate in phonology than in syntax.
But even within syntax, I think interfaces play a crucial role in the grammar. Because my post yesterday mostly concerned speech sounds, the interfaces in question were those between the phonological grammar and the perception and production systems; the interfaces likely to affect syntactic grammar, by contrast, are those with the semantic, morphological, and phonological grammars rather than with perception and production systems.
To re-paraphrase my argument from yesterday, interfaces affect grammar because they determine the input to and, therefore, the applicability of the rules and constraints. In discussing Scheme and abstraction (and libertarianism) this morning, Josh said that "[t]here's a lesson there for linguists like Noah who think that the details of the interface have something to say about the underlying engine."
On reading this, it occurred to me that we need to be very clear about what we think we're doing in creating theories of competence. It seems to me that these theories are about the function of grammar. That is, when we build, say, generative theories of linguistic universals, we're building descriptions and explanations of something akin to programmed functions over data arrays. The kind of data array used as input, and the kind needed as output, seem to me to have quite a bit of influence on the internal structure of the function.
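A programming sketch of that point, with invented data: both functions below compute the same segment-frequency mapping, but the shape of the input array dictates the internal shape of the function.

```python
# A sketch of the claim that inputs shape a function's internals (the
# segments and syllable groupings here are invented for illustration).
# Both functions compute the same thing - segment frequencies - but the
# structure of the input dictates the structure of the code.
from collections import Counter

def segment_counts_flat(segments):
    # Flat input: a single pass suffices.
    return Counter(segments)

def segment_counts_nested(syllables):
    # Nested input (syllables as lists of segments): the function's
    # internal structure now mirrors the input's structure.
    counts = Counter()
    for syllable in syllables:
        counts.update(syllable)
    return counts

flat = ["p", "a", "t", "a"]
nested = [["p", "a"], ["t", "a"]]
assert segment_counts_flat(flat) == segment_counts_nested(nested)
```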
If we're talking about discovering the functions and data arrays crucial to the operation of some system, in this case linguistic, we are manifestly not talking about which programming language these functions and data arrays are implemented in. The 'same' function can be implemented in Scheme, C, C++, or Java. How efficiently or easily it is implemented in each of these languages may make you decide to use one or avoid another, without a doubt. But when you're competence-theorizing in linguistics, you're not building a language from the top down; you're observing extant languages and trying, based on these observations, to infer the basic, universal structures and functions of big-L Language.
The fact that Scheme is really good at abstraction of a given (set of) function(s) is neither here nor there with regard to the role of interfaces and grammars. Once you've settled on a language, the way you write functions will be determined by, among other things, the inputs and outputs to that particular function, proper (or improper, if you're no good at it) programming techniques, and the structure of that language. I don't see any reason to believe that big-L Language is necessarily implemented in one language or another. Observing inputs and outputs to infer the structure of a black-box function is hard, and limited, enough. Given this and the fact that multiple languages can perform the same function on the same input, observing inputs and outputs seems very unlikely to be able to tell us much about anything 'higher up' than the function itself.
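The black-box point can be made concrete with a minimal (and obviously artificial) sketch: two internally different implementations of the 'same' doubling function produce identical input-output tables, so no amount of I/O observation distinguishes them.

```python
# Two internally different implementations of the 'same' function are
# indistinguishable from their input-output behavior alone, so observing
# inputs and outputs tells us about the function, not about how it is
# implemented 'higher up'.

def double_arithmetic(n):
    # One internal structure: a single multiplication.
    return n * 2

def double_additive(n):
    # A very different internal structure: repeated addition.
    total = 0
    for _ in range(n):
        total += 2
    return total

# Probe both black boxes with the same inputs: identical I/O tables.
probes = range(10)
table_a = [double_arithmetic(n) for n in probes]
table_b = [double_additive(n) for n in probes]
assert table_a == table_b
```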
Update
I've changed my mind back to caring about the partition between competence and performance, at least indirectly, and, perhaps not surprisingly, I have also reconsidered my stance on the role of abstractness. Insofar as linguistic theories purport to explain the functions (and their inputs and outputs) of language, and insofar as the details of a programming language impact the form of these functions (and their inputs and outputs), it is certainly relevant how these functions are implemented. So, the level of abstractness of linguistic functions and variation in the ability of a given programming language to achieve a given level of abstractness are potentially interesting research questions. But whether or not a linguistic model has anything specific in common with a particular programming language is not so interesting, at least not to me.
Which brings us back to Labov's performance-competences and the difference in my and Josh's interests. I believe, and I think I made the case yesterday, that perception and production systems are relevant to phonological grammar. For what it's worth, plenty of OT and, to a lesser extent, generative phonologists see perception as relevant to phonology, as well. More specifically, I think that the perceptual models I work with are useful tools to study, well, perceptual systems. Insofar as I use these tools to describe the 'competence' that underlies perceptual performance, I'm making a claim to being on the 'proper' side of the partition dividing linguistics proper and linguistics, um, improper. Whether or not anyone else agrees with me is not all that important to me (unless a grant approval depends on it). I am confident that my research program is worth pursuing, whether or not there is consensus regarding its place in linguistics.
Which, in turn, brings us back to Josh's unhealthy obsession with syntax and my perfectly reasonable obsession with phonetics. Both of our perspectives on competence and performance are undoubtedly colored by our respective interests. Josh's view is likely, but certainly not completely, influenced by the fact that syntax is 'deeper' than phonology and phonetics. The phenomena that Josh is interested in are considerably more abstract than the phenomena that I'm interested in. Don't get me wrong, I make use of plenty of abstraction - multivariate perceptual distributions, stochastic information processing channels, and decision rules defined on them aren't exactly concrete. But I hope everyone agrees that these abstract constructs reside much closer to sub-linguistic performance systems than do models purporting to describe syntactic knowledge.
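As a sketch of the sort of abstraction I have in mind - all parameter values invented for illustration - perceptual categories can be modeled as probability distributions over a cue such as VOT, with a decision rule defined on them:

```python
# A toy version of 'perceptual distributions plus a decision rule':
# categories as Gaussian distributions over a single cue (VOT, in ms).
# The means and standard deviations are invented, not measured values.
import math

def gaussian_pdf(x, mean, sd):
    # Probability density of a normal distribution at x.
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Hypothetical English-like stop categories along the VOT dimension.
categories = {"/b/": (10.0, 10.0), "/p/": (60.0, 15.0)}  # (mean, sd)

def classify(vot):
    # Maximum-likelihood decision rule over the category distributions.
    return max(categories, key=lambda c: gaussian_pdf(vot, *categories[c]))

print(classify(5.0))   # a short-lag VOT
print(classify(70.0))  # a long-lag VOT
```

A real model would use multivariate distributions over several cues at once, but the structure - distributions plus a decision rule defined on them - is the same.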
I believe that is all.
16.9.06
Slow and steady wins the race, right?
So, Josh established once and for all what is and what isn't linguistics a few days ago. I've been meaning to discuss some of what he says in that post, but because I am typically slow to response-blog, he has another language-related post I want to respond to, as well.
(Of all the linguistic issues Josh would end up blogging about, VOT is perhaps the one I expected least. In any case, my responses to the two posts are related, so it turns out to be a good thing I 'waited'.)
The earlier post is concerned primarily with whether or not sociolinguistics is linguistics proper, but it also touches on the role of phonetics in linguistics. Josh makes a compelling case that a fair bit of sociolinguistics is sociology, only tangentially related to linguistics (he estimates it's an 80/20 split, but I'm not willing to commit to any such numbers). He also insists "that phonetics is more concerned with the interface than the subject proper".
For Josh, among many other linguists, capital-L Language is competence, not performance. That is, linguistics proper is concerned with the knowledge underlying language, as opposed to what is actually said on any given occasion. Even if you concede this point, you have to be careful when deciding what counts as competence and what doesn't. Perhaps the most famous sociolinguist - William Labov - argues in an unpublished manuscript on the foundations of linguistics that the competence/performance distinction is incoherent (emphasis added):
Katz's syntactic flourishes aside, it is my assertion that communication - and thereby the interface - should be central to a theory of language. I will provide support for this assertion by discussing an incompletely-asked question in phonology.
It is well known that certain combinations of phonological features are comparatively rare among the world's language. Take, for example, the uneven distribution of certain place and glottal features. Voiceless, labial [p] and voiced, velar [g] are commonly missing from phoneme inventories. As Gussenhoven and Jacobs eventually describe it (in Understanding Phonology, the book I should have used when I taught undergraduate phonology, instead of this one, whose name I will not utter, or whatever the written-version of 'utter' is), "... [p] is relatively hard to hear, and [g] is relatively hard to say." This is because of aeroacoustic effects in both cases. In the former case, the shape and size of the sub-closure cavity in [p] causes the noise that accompanies closure release to be very quiet, and thereby indistinct, relative to stops at other places of articulation. In the latter case, the closure for [g] is closer to the vocal folds than most other stops. Voicing requires a pressure drop across the glottis, and stopping airflow above the glottis inhibits this, particularly so when the super-glottal cavity is small, as it is with [g].
So, at least in some cases, the means by which sounds are produced and perceived directly, if not deterministically, affects the distribution of speech sounds across languages. Speech sounds - combinations of phonological features - are at the foundation of phonological theory. If you know the phonology of, say, Dutch, you know, among other things, which sounds are part of the language and (a subset of) which sounds are not. Which is another way of saying that you know which phonological features are functional in combination with which other features. Which means that you know which rules and constraints operate when. Which had better be part of competence, or the term risks losing all meaning, at least with regard to phonology. If this is part of competence, then at least some of phonetics is linguistics proper.
On a more abstract level, the central issue here is whether or not phonological features are independent. As soon as we ask the most obvious question - are they? - we realize that me must specify what we mean by independence. Phonological features certainly seem not to be independent with regard to cross-linguistic distribution or within-language rule and constraint application. In showing this, I presented an example of non-independence in production - [g] - and perception - [p].
With regard to the former, however underlying distinctive features map onto production parameters (not a simple issue), it seems unlikely in the extreme that independence would be the norm. Candidate violations of featural independence come readily to mind: for example, place of articulation affects all sorts of acoustic cues in all sorts of complicated (and likely non-independent) ways, such as VOT, burst amplitude and spectral shape, frication amplitude and spectral shape, frication and VOT duration, to name just a few.
With regard to the latter, the interactions in production are bound to have effects on perception. Whether independent or not, our perceptual systems are awfully good at perceiving relevant distinctions between speech sounds, although it is generally unknown precisely what sorts of (in)dependencies play what sorts of roles in perception. I am currently in the process of addressing exactly these issues. I recently posted about one of the tools I plan to use extensively. I will post in the future about others.
But enough horn autotooting. The point is, again, that, insofar as production and perception systems are understood to affect grammar, they are part of linguistics proper. This means that the models and theories we use to understand production and perception are necessary parts of our models and theories of language in general.
This all brings us nicely, if indirectly, to Josh's VOT post, in which the revised role of production and perception sheds some light on the issue discussed therein, which issue is shifts in bilingual relative to monolingual voice onset times. The basic observation is that, e.g., Spanish-English bilingual children produce more English-like 'Spanish' stops and more Spanish-like 'English' stops, at least in terms of VOT. Furthermore, the effects are apparently both reliably measureable and perceptually irrelevant to the adults around the children.
Josh writes:
Directly relevant is an understanding of the relationship between production (and perception) systems and the grammar. On the one hand, we've got pretty clear evidence of underlying cross-classification. On the other, we've got something that looks a lot like motor and exemplar memory effects. There's nothing inconsistent about maintaining both, and I don't think it's dodging the question to invoke motor or exemplar theories to explain a phenomenon such as this. In fact, I think it's exactly the right way to proceed.
Of course, it may be that this particular phenomenon (i.e., the shifts in bilingual VOTs) won't tell us much about competence, which would put it outside of linguistics proper. That's okay with me, as I've got a very different research agenda, and I hasten to add that it doesn't mean that this phenomenon is uninteresting in general.
Finally, I'm glad to see Josh posit a nice, if vague, phonetic hypothesis. If I wasn't convinced of the value of investigating non-articulatory aspects of phonology and phonetics, I wouldn't have spent the last four years taking the classes I did.
(Of all the linguistic issues Josh would end up blogging about, VOT is perhaps the one I expected least. In any case, my responses to the two posts are related, so it turns out to be a good thing I 'waited'.)
The earlier post is concerned primarily with whether or not sociolinguistics is linguistics proper or not, but it also touches on the role of phonetics in linguistics. Josh makes a compelling case that a fair bit of sociolinguistics is sociology, only tangentially related to linguistics (he estimates it's an 80/20 split, but I'm not willing to commit to any such numbers). He also insists "that phonetics is more concerned with the interface than the subject proper".
For Josh, among many other linguists, capital-L Language is competence, not performance. That is, linguistics proper is concerned with the knowledge underlying language, as opposed to what is actually said on any given occasion. Even if you concede this point, you have to be careful when deciding what counts as competence and what doesn't. Perhaps the most famous sociolinguist - William Labov - argues in in an unpublished manuscript on the foundations of linguistics that the competence/performance distinction is incoherent (emphasis added):
The terms 'idealism' and 'materialism' can be seen to be most appropriate in relation to the definitions of data involved. The idealist position is that the data of linguistics consists of speakers' opinions about how they should speak: judgments of grammatically or acceptability that they make about sentences that are presented to them....While I don't agree that this constitutes an infinite regress (it seems clear to me that a lower bound on linguistically relevant and controllable production and perception variables is establishable in principle), the general point is important, and one could easily make the case that the partition between linguistically interesting competence and mere performance typically excludes linguistically relevant knowledge. It may be that the conditioning context for some pronunciation variant is more social (e.g., economic status of interlocutor) than traditionally linguistic (e.g., prosodic position), but this fact alone doesn't make sysematic linguistic behavior any less indicative of underlying knowledge about the linguistic system. Even the most ardent 'idealists' (e.g., Chomsky) understand that we can only indirectly observe competence as it is 'filtered' through performance. From Apects of the Theory of Syntax (p. 4):
The materialist approach to the description of language is based on the objective methods of observation and experiment. Subjective judgments are considered a useful and even indispensable guide to forming hypotheses about language structure, but they cannot be taken as evidence to resolve conflicting views. The idealist response is that these objective observations of speech production are a form of 'data flux' which are not directly related to the grammar of the language at all....
.... The idealist position has more recently been reinforced by a distinction between 'performance' and 'competence'. What is actually said and communicated between people is said to be the product of 'language performance', which is governed by many other factors besides the linguistic faculty, and is profoundly distorted by speaker errors of various kinds. The goal of linguistics is to get at the underlying 'competence' of the speaker, and the study of performance is said to lie outside of linguistics proper. The materialist view is that 'competence' can only be understood through the study of 'performance', and that this dichotomy involves an infinite regress: if there are separate rules of performance to be analyzed, then they must also comprise a 'competence', and then new rules of 'performance' to use them, and so on.
The problem for the linguist, as well as for the child learning the language, is to determine from the data of performance the underlying system of rules that has been mastered by the speaker-hearer and that he puts to use in actual performance.So when Josh says that
People like me, though we accept that Phonetics is also Linguistics, would insist that Phonetics is more concerned with the interface than the subject proper. Language is competence. Studies of articulatory motor functions and sound processing are valuable (especially in practical industry terms, for things like speech processing on those annoying telephone menus that have you say "one" rather than press the button), no doubt about it, but mostly as a way to explain how useable information gets to the language module and back out again. I do not seriously believe that Phonetics has any bearing on meaning or grammar (though there are certainly those that do) - though there are bound to be certain mathematical artefacts of the way articulators are arranged that sometimes cause a speaker to prefer one possible form over another, etc.I think he's mistaken. 'Mathematical artefacts' are by no means the only way in which the interface has an effect on grammar. In the early days of generative theory (i.e., whence the popular resurgence of the competence-performance divide), the interface was seen as crucial, if not central, to the theory of language. As Katz inelegantly wrote in The Philosophy of Language (1966, p. 98):
Natural languages are vehicles for communication in which syntactically structured and acoustically realized onjects transmit meaningful messages from one speaker to another....Had he been capable of linguistic communication, Katz could have written what he seems here to mean, and he could have avoided making silly mistakes. On the former hand, what he means is that communication is important to any understanding of language. On the latter hand, it should be obvious that not all linguistic communication depends on acoustic transmission. At the end of his discussion, Josh points this out to an unnamed 'sound person', which category I assume excludes Katz.
Roughly, linguistic communication consists in the production of some external, publicly observable, acoustic phenomenon whose phonetic and syntactic structure encodes a speaker's inner, private thoughts or ideas and the decoding of the phonetic and syntactic structure exhibited in such a physical phenomenon by other speakers in the form of an inner, private experience of the same thoughts or ideas.
Katz's syntactic flourishes aside, it is my assertion that communication - and thereby the interface - should be central to a theory of language. I will provide support for this assertion by discussing an incompletely-asked question in phonology.
It is well known that certain combinations of phonological features are comparatively rare among the world's language. Take, for example, the uneven distribution of certain place and glottal features. Voiceless, labial [p] and voiced, velar [g] are commonly missing from phoneme inventories. As Gussenhoven and Jacobs eventually describe it (in Understanding Phonology, the book I should have used when I taught undergraduate phonology, instead of this one, whose name I will not utter, or whatever the written-version of 'utter' is), "... [p] is relatively hard to hear, and [g] is relatively hard to say." This is because of aeroacoustic effects in both cases. In the former case, the shape and size of the sub-closure cavity in [p] causes the noise that accompanies closure release to be very quiet, and thereby indistinct, relative to stops at other places of articulation. In the latter case, the closure for [g] is closer to the vocal folds than most other stops. Voicing requires a pressure drop across the glottis, and stopping airflow above the glottis inhibits this, particularly so when the super-glottal cavity is small, as it is with [g].
So, at least in some cases, the means by which sounds are produced and perceived directly, if not deterministically, affects the distribution of speech sounds across languages. Speech sounds - combinations of phonological features - are at the foundation of phonological theory. If you know the phonology of, say, Dutch, you know, among other things, which sounds are part of the language and (a subset of) which sounds are not. Which is another way of saying that you know which phonological features are functional in combination with which other features. Which means that you know which rules and constraints operate when. Which had better be part of competence, or the term risks losing all meaning, at least with regard to phonology. If this is part of competence, then at least some of phonetics is linguistics proper.
On a more abstract level, the central issue here is whether or not phonological features are independent. As soon as we ask the most obvious question - are they? - we realize that me must specify what we mean by independence. Phonological features certainly seem not to be independent with regard to cross-linguistic distribution or within-language rule and constraint application. In showing this, I presented an example of non-independence in production - [g] - and perception - [p].
With regard to the former, however underlying distinctive features map onto production parameters (not a simple issue), it seems unlikely in the extreme that independence would be the norm. Candidate violations of featural independence come readily to mind: for example, place of articulation affects all sorts of acoustic cues in all sorts of complicated (and likely non-independent) ways, such as VOT, burst amplitude and spectral shape, frication amplitude and spectral shape, frication and VOT duration, to name just a few.
With regard to the latter, the interactions in production are bound to have effects on perception. Whether independent or not, our perceptual systems are awfully good at perceiving relevant distinctions between speech sounds, although it is generally unknown precisely what sorts of (in)dependencies play what sorts of roles in perception. I am currently in the process of addressing exactly these issues. I recently posted about one of the tools I plan to use extensively. I will post in the future about others.
But enough horn autotooting. The point is, again, that, insofar as production and perception systems are understood to affect grammar, they are part of linguistics proper. This means that the models and theories we use to understand production and perception are necessary parts of our models and theories of language in general.
This all brings us nicely, if indirectly, to Josh's VOT post, in which the revised role of production and perception sheds some light on the issue discussed therein, which issue is shifts in bilingual relative to monolingual voice onset times. The basic observation is that, e.g., Spanish-English bilingual children produce more English-like 'Spanish' stops and more Spanish-like 'English' stops, at least in terms of VOT. Furthermore, the effects are apparently both reliably measureable and perceptually irrelevant to the adults around the children.
Josh writes:
There is some level at which the child is storing its two /p/s in a similar place. We know this because they affect each other. If these two categories were truly language-independent, what we would expect to see, I would imagine, is phonemes that pattern exactly as they do for monolingual speakers. Instead, there is (admittedly minimal) overlap.

Josh points out the obvious (although he also points out, correctly I think, that this isn't obvious to everyone it should be obvious to): this is (at least indirect) evidence for, as Josh puts it, "a phonemic level of representation", although I think it is better described as evidence for features. Clearly, people cross-classify speech sounds. The Spanish and English [p]s in the bilingual child's mind have much in common. In fact, I am convinced that cross-classification is done along different dimensions, and to different degrees, depending on whether it is done with regard to production or perception. This is the independence issue again, but it's only indirectly relevant here.
It will be objected by people (like this guy) who reject a phonemic level of representation (or used to, or do on Tuesdays except during Passover, or something - it's not terribly clear) that this is an artefact of pronouncing the two sounds in similar locations repeatedly. Motor memory stores exemplars of past productions, and these end up interfering with each other. As to the question of how the subject manages to continue to differentiate between the two distinct (realizations of) phonemic categories in the distinct languages, they would presumably say that this comes from associations with the other serial sounds being produced. The similarity in VOT is an articulatory effect - but only one of many, the others being effects that come from repeatedly producing series of sounds in the given language category.
Yes, but that's dodging the question in a sense. The point is that Spanish and English have this sound that is pronounced in similar enough ways that the subject becomes at least a little confused as to which is which. The two categories do exhibit an influence on each other, and they do so because they are similar across the two languages in some important sense.
If, indeed, there are language-universal phonemic categories that are defined with respect to things in addition to articulation, we should expect to see effects here that cannot be predicted by articulation alone. That would indeed be very interesting.
Directly relevant is an understanding of the relationship between production (and perception) systems and the grammar. On the one hand, we've got pretty clear evidence of underlying cross-classification. On the other, we've got something that looks a lot like motor and exemplar memory effects. There's nothing inconsistent about maintaining both, and I don't think it's dodging the question to invoke motor or exemplar theories to explain a phenomenon such as this. In fact, I think it's exactly the right way to proceed.
Of course, it may be that this particular phenomenon (i.e., the shifts in bilingual VOTs) won't tell us much about competence, which would put it outside of linguistics proper. That's okay with me, as I've got a very different research agenda, and I hasten to add that it doesn't mean that this phenomenon is uninteresting in general.
Finally, I'm glad to see Josh posit a nice, if vague, phonetic hypothesis. If I wasn't convinced of the value of investigating non-articulatory aspects of phonology and phonetics, I wouldn't have spent the last four years taking the classes I did.
15.9.06
Big Cats
I took my daughter to the Exotic Feline Rescue Center yesterday. It should take about an hour to drive there from Bloomington, but the directions given on the website, while technically accurate, are missing some key information.
We took highway 46 west out of Bloomington. Easy enough, although, before we left the house, I did briefly forget about a 4-year-old change in which road you take from downtown Bloomington to get to 46 west. I grew up here, but didn't get my driver's license until I was 21 and living in Arizona (just north of where Senator Highway became a gravel road, just south of Goldwater Lake), so I had no motorized automobility prior to moving away in the first place. In addition, I don't often have a reason to drive in that direction anyway, so that part of my mental map of my hometown is less developed than some others.
A few miles west of the very tiny town of Bowling Green, a normal right turn onto a nondescript perpendicular is called for. I passed it, then I turned around and drove back to the redolently named 200 E. I headed north, past Ashboro Road, along which is housed the EFRC, for about three miles. I looked, briefly, at state-sized maps of the whole state, then at the covers of maps of Ohio, Louisville, and similarly inappropriate locales. I headed south, back to a utility worker working on utilities along 200 E, and asked him if he knew where I could find an Ashboro Road.
We found the turn marked by a sign amid much lush flora, facing west such that, when you approach the Ashboro-200 E intersection from the South, it is nearly invisible. A half-mile or so east of the secret sign, we parked next to the entrance.
The facility is amazing and bizarre. They depend quite a bit on donations and volunteer labor, but despite the low budget, they've put together an impressive array of enclosures. The website says they have over 200 cats (the guide today told us 192 cats, and the 2006 mid-year report says something about some of the cats dying) in enclosures covering 30 acres.
First you see a shed and a fork in a gravel drive. The guide met us and told us the rules (no petting the cats; stay at least 3 feet from the fences; a third rule I can't remember), and warned us that if a cat turns its ass toward you, it's likely to spray you with a very stinky liquid territory marker. We were advised to step to the side to avoid the spray, as moving back merely (slightly) delays the arrival of the stink.
The tour began on the left fork, with cougars in smallish enclosures on the left, lions in a large cyclone-fence enclosure to the right. The landscaping inside each enclosure is left up to the cats. Apparently, cougars like plenty of vegetation, whereas lions do not. Some of the enclosures have fencing-material tops, others have electrically live wires along the rim, still others are high-walled but otherwise open on top.
Because of the function of the EFRC (i.e., because it is not a zoo but, rather, a life-long home for the cats), neither the facility itself nor the population of cats is designed for show. Whereas a zoo might have a single lion pride, a couple of tigers, and a few other cats, we saw something like 40 tigers (one of which is a genetically rare white tiger), 30 lions, a dozen and a half cougars, two bobcats, a couple of black 'panthers' (or melanistic leopards), some standard leopards, maybe a jaguar (?), a serval, and probably some others that I am forgetting.
Although a small number of the cats that live there were born there, the vast majority have very sad backgrounds. There are a bunch of ex-circus cats (including one 23-year-old tiger whose canine teeth were worn to nubs from incessant chewing on the bars in her small circus cage), and quite a few cats who belonged to people with very bad ideas about what makes a good pet. Three of the cougars were found in an apartment in (or near) Chicago. One of the tigers was found in a residence that also contained a meth lab. Another tiger was owned by a man who later pled guilty to over a hundred counts of child molestation (the tiger was child-bait, apparently, and had to be dealt with after the guy moved to the slammer). A disturbing number of them were found in various towns just wandering around.
Keeping track of my daughter (and myself), trying to enjoy seeing such amazing animals up close, and hearing horrible story after horrible story, I quickly became overwhelmed and felt a bit numb.
On a somewhat lighter note, they keep the names the animals arrive with. Somewhat surprisingly, there are no lions named Simba at the EFRC. There are, however, quite a few Tonys and Tiggers (and variations of Raja) among the tigers.
In one area, a male tiger tried to spray us. We hurried by and avoided any incident, although on the way back out of the area a few minutes later, my daughter was not happy to learn that we would have to walk back by the spray-happy fellow. Not helping matters was the fact that, while we listened to that and the other nearby cats' stories, a young lion pounced on the fence behind us. We must have looked like a fine set of meat-based toys. If this 'kitten' had weighed about 200 pounds less, its behavior would have been adorable. It was fairly terrifying instead.
I brought my camera in hopes of snapping some national-geographic-worthy photographs. I got one (with the help of an employee) late in the tour, which can be viewed here. Because of the rules and the fencing materials used in the enclosures, it is very difficult to get good pictures. After realizing this, I quickly came to wish I had brought an audio recording device instead. The sounds were simply amazing. I don't know why, but a number of the cats were very vocal throughout our visit. Some of the lions seemed to have important things to roar at each other. Some of them were clearly excited about feeding time (some of them excited enough that the yowls and roars repeatedly made me flinch and feel a bit panicky). Some were happy to see the woman guiding us, including a very chirpy cougar and a large number of 'chuffing' tigers (chuffing is kind of a snorted tiger greeting). In any case, the sounds were loud and clear. I plan on returning to record at a later date.
All in all, it was a fine field trip. A bit frustrating to find, but well worth the time and effort. They house cats from all over the country, and they do so with not a lot of money. I highly recommend visiting and, if you can afford it, donating or volunteering, too.