Last night, I bit the bullet and purchased some crossword puzzle software, which I plan to put to good use in my conversation classes as a way to help build vocabulary. I spent a good part of yesterday testing out some online demos, 90% of which sucked (caveat emptor; the free online services are pretty shitty). The Mac-only shareware I ended up buying* seems to work pretty well, even if it is a bit clunky and uncreative in how it puts words together. It also comes with a "word search builder" option, so I can create acres of word searches should the students tire of crosswords.
I'm always hunting for something new to do in class. In general, I try not to rely too much on any particular style of activity, nor any particular teaching method. The MTV generation is easily bored, so variety is key. In conversation class, straight lecture is right out: the students need time to talk, to flex their English muscles. If I monopolize their time by giving them The Kevin Show, they'll come away having learned little. I often alternate between partner work and group work, and am a big fan of mixer-style activities. Occasionally, I'll throw in something along the lines of Total Physical Response.
I've come to discover, though, that you can't let Korean students mix too freely, because they almost always settle into large clumps and clusters, perhaps preferring the benefits of communal effort to working in pairs (two people being a necessarily smaller "common fund of knowledge"** than a group of three or more). This was a problem during the last mixer quiz I gave my students, and it backfired on them. Because knowledge is viral, spreading an incorrect meme can have disastrous consequences. The latest quiz, for example, had a vocab question: "What is genealogy?" One student, conferring with her partners, had the brilliant thought that genealogy was the same as genetics, and so encouraged other students to answer that way. Because the students hadn't truly mixed but had instead formed large clusters, the wrong answer spread like wildfire, and as a result everyone got the question wrong on the quiz. I'd repeatedly warned the students about this danger, and they ignored me. Their grades suffered as a result.
Mixer quizzes have one other major disadvantage: a dumb or chronically unprepared student can do fairly well on them if she trusts that all her partners know the correct answers.
The mixer quiz works like this: Student A gets a page with ten problems on it. She is to find a partner for Question #1, and the partner is to write the correct answer on Student A's paper. Student A can write nothing on her own paper: instead, she has to watch her partner and make sure she [the partner] has written the correct answer. The partner makes whatever corrections Student A specifies, and also initials her work. Student A then takes her paper back and finds a different partner for Question #2. And so on. At the end of the mixer quiz, there should be ten different handwriting samples on everyone's paper.
Students were warned about what could happen should they simply let their papers circulate. A student who doesn't monitor and correct the answers put on her paper can end up with an error-ridden quiz and a bad grade-- none of which will be her partners' fault, as it was her responsibility to make sure the partners wrote correct answers (one major reason for this procedure is to keep people talking, because it makes little sense to give a conversation class a purely written quiz).
But students can abuse the format either by clustering in large groups, as mentioned above, or by talking in Korean (a recurrent problem, especially among the lower levels), or by letting their quizzes "float" among classmates. The unfortunate result of these abuses is widespread mediocre performance-- perhaps two "A" grades in a class of eighteen. This also means, though, that students who deserve an "F" are more likely to end up with a "C."
To compensate for these problems, I include a one-on-one quick interview: the students' chance to demonstrate their own knowledge and skill directly to me. This is easy to manage but time-consuming (as is the mixer quiz itself). I generally rate students on a ten-point scale, taking off a quarter-point for every mistake I hear. Four mistakes will mean a 9 out of 10-- still an "A" by most standards. This is much less generous than what I did last semester, when I used a far-too-merciful 100-point scale and took off a quarter-point for every mistake. Students could make 40 mistakes and still end up with a 90. That's just silly. This semester sees a somewhat stricter Kevin. In the spirit of strictness, I'm going to make student mixing more rule-governed from now on: no more clustering allowed!***
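For the numerically inclined, the arithmetic behind the two scales can be sketched in a few lines (a hypothetical illustration; the function name and defaults are mine, not anything I actually use in class):

```python
def interview_score(mistakes, scale=10, penalty=0.25):
    """Score an interview on the given scale, deducting a fixed
    penalty per mistake and never dropping below zero."""
    return max(0.0, scale - penalty * mistakes)

# This semester's ten-point scale: four mistakes cost a full point.
print(interview_score(4))              # 9.0 out of 10

# Last semester's too-merciful 100-point scale: even forty
# mistakes still left a student with a 90.
print(interview_score(40, scale=100))  # 90.0 out of 100
```

The sketch makes the problem with the old scale obvious: the penalty per mistake was fixed, but the scale it was subtracted from was ten times larger, so each mistake mattered ten times less.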
The combination of mixer-plus-interview has produced a more realistic grade distribution that also feels like a more accurate reflection of each student's performance. I do see one disadvantage in my interview scoring system, though: students who speak at length will inevitably make more errors, and it's not fair to penalize them for their extra effort. I compensate for this by telling students that the interview is brief (only about 2 minutes), so they should waste no time in answering. No hemming, hawing, or irrelevancy! Some might say that such a policy kills the very spirit of conversation, but with only 60 minutes to get through both a mixer quiz and the one-on-one interview, I don't have time for desultory chit-chat with all 12-20 students. I feel justified in adopting my current approach to quizzes.
*Yeah, I'm one of the stupid people who pays for shareware. Sue me.
**I'm borrowing the term from cognitive theorist Bernard Lonergan. The term refers to our collective knowledge. For example, the fact that the moon orbits the earth at a mean distance of about one and a quarter light-seconds isn't something I verified for myself; I take it on faith that the experts who contributed to the "common fund" got it right. My knowledge of the moon's distance from the earth derives from that common fund.
***Initially, this wasn't a problem. I suspect the clustering evolved as a way to finesse the test.