I have my biases when it comes to the way I evaluate a writer's style, diction, grammar, mechanics, etc. At the same time, despite my being something of a language Nazi, I don't call myself a total prescriptivist because I openly recognize that variations exist, and that certain current trends in language tend heavily in a particular direction, likely leaving curmudgeons like me behind one day (the fusion of the words "under way" into the odious "underway" is an example of such a trend).
Because my current job requires me to teach SAT test-taking strategy, I often find myself going through the "improve the sentence" and "find the error" sections with mixed feelings. The problem is that many of the "errors" on the SAT don't seem like errors at all; they're more like marginally accepted stylistic variants. This old article at Language Log expresses my frustration better than I can. Pay particular attention to the section on collective nouns. I often find myself telling my students that Yanks and Brits handle these nouns very differently, and that the SAT has a distinct (and understandable) Yank bias. That said, it's not always obvious whether a particular locution is US or UK in nature, so if the SAT prefers one locution over another, this preference may be the result of a bias that has nothing to do with received US/UK English.
Here are some of the problems I personally have encountered while teaching parts of the SAT:
1. The "due to" versus "because of" distinction. The fact of the matter is that these expressions are often used interchangeably; trying to figure out which one is more appropriate involves more mental pretzeling than is justifiable on a test like the SAT. Oh, yes: "rules" regarding these locutions do exist, but those "rules" are far from indisputable.
2. Serial commas. My own strong preference is to include the final comma, the one that comes right before the conjunction (e.g., and, or): A, B, and C. As it turns out, the SAT gurus agree with me on this: in the "find the error" section of the test, it's possible that a student will encounter a problem in which a sentence is missing a serial comma (e.g., A, B and C); if the student doesn't mark this as an error, he or she will get the problem wrong. As much as this pleases my own aesthetic sense, this sort of question makes me uncomfortable: in the world of style guides, it's by no means settled that a serial comma is necessary. Much as I hate it, the locution "A, B and C" is acceptable.
3. Abstruse rules about tense. In Korea, I made an effort to teach students the salient points about "if" conditional sentences, whose rules are fairly rigid in written English, though much more flexible in spoken English. (In spoken English, no one blinks if you begin a sentence with "If I could have...") But not all verb tenses are governed so strictly, and it's not always obvious that one particular tense must be used.
I agree with the Language Log post: the makers of the SAT have introduced "errors" that aren't really errors at all. To call them erroneous is to foist a certain grammatical ideology on the test-takers. The only errors that should appear on the test are the unambiguous ones-- the ones that are universally accepted as errors.
Friday, May 27, 2011
grammatical ideology
2 comments:
My off-the-top-of-my-head guess is that those items are in there for technical testing reasons, not substantive content-based reasons. On standardized tests, items are selected that produce the desired spread; if items do not produce that spread, they are tossed out. Unambiguous items are less likely to produce the desired spread, so they toss in the arguable ones to get it. After all, especially for the SAT, the whole point of the test is to differentiate among people--to declare this person is better than that person, and not as good as that one. Actual content/construct validity counts, but not as much as one might think.
-- Addofio
I suppose that's possible, but the frequency with which I encounter these items (many are from exercises related to the test, and not from sample tests themselves) makes me wonder whether there isn't another reason for their presence.
Besides-- when ETS does experimental questions on an actual test, don't they lump all those questions into experimental sections? (The GRE does this.) The ambiguous questions aren't normally scattered among the "normal" sections, are they? Or am I smoking something that's too strong for me?
I'll need to check into this.
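A quick afterthought on the commenter's point about "spread": as I understand it, psychometricians typically quantify an item's discriminating power as the point-biserial correlation between getting the item right (0/1) and the test-taker's total score. Items with near-zero correlation are the ones that get tossed. Here's a minimal sketch of that calculation; the numbers are invented for illustration and have nothing to do with actual ETS data:

```python
def point_biserial(item_scores, total_scores):
    """Correlation between a 0/1 item score and a continuous total score."""
    n = len(item_scores)
    mean_total = sum(total_scores) / n
    # population standard deviation of the total scores
    sd = (sum((t - mean_total) ** 2 for t in total_scores) / n) ** 0.5
    p = sum(item_scores) / n  # proportion who got the item right
    q = 1 - p
    # mean total score among those who got the item right vs. wrong
    mean_right = sum(t for i, t in zip(item_scores, total_scores) if i) / (p * n)
    mean_wrong = sum(t for i, t in zip(item_scores, total_scores) if not i) / (q * n)
    return (mean_right - mean_wrong) / sd * (p * q) ** 0.5

# Five hypothetical test-takers: item result (right/wrong) and total score.
item = [1, 1, 1, 0, 0]
totals = [52, 48, 45, 30, 28]
print(round(point_biserial(item, totals), 3))  # prints 0.972
```

A strongly positive value like this marks a "good" item: the people who get it right are largely the people who score well overall. An ambiguous grammar item that splits careful readers down the middle could still produce a healthy spread, which would fit the commenter's hypothesis.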