Sunday, October 02, 2005

crosswords and language teaching

Last night, I bit the bullet and purchased some crossword puzzle software, which I plan to put to good use in my conversation classes as a way to help build vocabulary. I spent a good part of yesterday testing out some online demos, 90% of which sucked (caveat emptor; the free online services are pretty shitty). The Mac shareware I ended up buying* seems to work pretty well, even if it is a bit clunky and uncreative in how it puts words together. It also comes with a "word search builder" option, so I can create acres of word searches should the students tire of crosswords.

I'm always hunting for something new to do in class. In general, I try not to rely too much on any particular style of activity, nor any particular teaching method. The MTV generation is easily bored, so variety is key. In conversation class, straight lecture is right out: the students need time to talk, to flex their English muscles. If I monopolize their time by giving them The Kevin Show, they'll come away having learned little. I often alternate between partner work and group work, and am a big fan of mixer-style activities. Occasionally, I'll throw in something along the lines of Total Physical Response.

I've come to discover, though, that you can't let Korean students mix too freely, because they almost always settle into large clumps and clusters, perhaps preferring the benefits of communal effort to working in pairs (two people being a necessarily smaller "common fund of knowledge"** than a group of three or more). This was a problem during the last mixer quiz I gave my students, and it backfired on them. Because knowledge is viral, spreading an incorrect meme can have disastrous consequences. The latest quiz, for example, had a vocab question: "What is genealogy?" One student, conferring with her partners, had the brilliant thought that genealogy was the same as genetics, and so encouraged other students to answer that way. Because the students hadn't truly mixed but had instead formed large clusters, the wrong answer spread like wildfire, and as a result everyone got the question wrong on the quiz. I'd repeatedly warned the students about this danger, and they ignored me. Their grades suffered as a result.

Mixer quizzes have one other major disadvantage: a dumb or chronically unprepared student can do fairly well on them if they trust that all their partners know the correct answers.

How so?

The mixer quiz works like this: Student A gets a page with ten problems on it. She is to find a partner for Question #1, and the partner is to write the correct answer on Student A's paper. Student A can write nothing on her own paper: instead, she has to watch her partner and make sure she [the partner] has written the correct answer. The partner makes whatever corrections Student A specifies, and also initials her work. Student A then takes her paper back and finds a different partner for Question #2. And so on. At the end of the mixer quiz, there should be ten different handwriting samples on everyone's paper.
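
If it helps to see the procedure spelled out, here's a rough sketch of the bookkeeping in code form (purely illustrative: the student names and answers are stand-ins, and the real quiz obviously happens on paper, not in Python):

    # Sketch of the mixer-quiz rule: every question on a student's paper must be
    # answered by a *different* partner, and never by the paper's owner herself.

    def record_answer(paper, question, partner, answer):
        """A partner writes (and signs off on) one answer on someone else's paper."""
        if partner == paper["owner"]:
            raise ValueError("You can't write answers on your own paper.")
        used = {entry["partner"] for entry in paper["answers"].values()}
        if partner in used:
            raise ValueError("Find a different partner for each question.")
        paper["answers"][question] = {"partner": partner, "answer": answer}

    # Student A's paper: ten questions, ten different handwriting samples by the end.
    paper_a = {"owner": "Student A", "answers": {}}
    record_answer(paper_a, 1, "Student B", "Genealogy is the study of family history.")
    record_answer(paper_a, 2, "Student C", "...")
    # record_answer(paper_a, 3, "Student B", "...")  # same partner twice: not allowed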

Students were warned about what could happen should they simply let their papers circulate. A student who doesn't monitor and correct the answers put on her paper can end up with an error-ridden quiz and a bad grade-- neither of which will be her partners' fault, as it was her responsibility to make sure the partners wrote correct answers (one major reason for this procedure is to keep people talking, because it makes little sense to give a conversation class a purely written quiz).

But students can abuse the format either by clustering in large groups, as mentioned above, or by talking in Korean (a recurrent problem, especially among the lower levels), or by letting their quizzes "float" among classmates. The unfortunate result of these abuses is widespread mediocre performance-- perhaps two "A" grades in a class of eighteen. This also means, though, that students who deserve an "F" are more likely to end up with a "C."

To compensate for these problems, I include a quick one-on-one interview: the students' chance to demonstrate their own knowledge and skill directly to me. This is easy to manage but time-consuming (as is the mixer quiz itself). I generally rate students on a ten-point scale, taking off a quarter-point for every mistake I hear. Four mistakes will mean a 9 out of 10-- still an "A" by most standards. This is much less generous than what I did last semester, when I used a far-too-merciful 100-point scale and took off a quarter-point for every mistake. Students could make 40 mistakes and still end up with a 90. That's just silly. This semester sees a somewhat stricter Kevin. In the spirit of strictness, I'm going to make student mixing more rule-governed from now on: no more clustering allowed!***
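
For the curious, the arithmetic of the two scales looks like this (a trivial sketch using the numbers above):

    # Quarter-point deduction per mistake, on the new 10-point scale and the old 100-point scale.

    def interview_score(mistakes, scale=10, penalty=0.25):
        """Start from the full scale and subtract a fixed penalty per mistake heard."""
        return max(0.0, scale - penalty * mistakes)

    print(interview_score(4))               # 9.0 out of 10: four mistakes, still an "A"
    print(interview_score(40, scale=100))   # 90.0 out of 100: the old, far-too-merciful scale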

The combination of mixer-plus-interview has produced a more realistic grade distribution that also feels like a more accurate reflection of each student's performance. I do see one disadvantage in my interview scoring system, though: students who speak at length will inevitably make more errors, and it's not fair to penalize them for their extra effort. I compensate for this by telling students that the interview is brief (only about 2 minutes), so they should waste no time in answering. No hemming, hawing, or irrelevancy! Some might say that such a policy kills the very spirit of conversation, but with only 60 minutes to get through both a mixer quiz and the one-on-one interview, I don't have time for desultory chit-chat with all 12-20 students. I feel justified in adopting my current approach to quizzes.






*Yeah, I'm one of the stupid people who pays for shareware. Sue me.

**I'm borrowing the term from cognitive theorist Bernard Lonergan. The term refers to our collective knowledge. For example, the fact that the moon orbits the earth at a mean distance of roughly 1.3 light-seconds (about 384,400 km) isn't something I verified for myself; I take it on faith that the experts who contributed to the "common fund" got it right. My knowledge of the moon's distance from the earth derives from that common fund.

***Initially, this wasn't a problem. I suspect the clustering evolved as a way to finesse the test.



4 comments:

  1. "Mixer quizzes have one other major disadvantage: a dumb or chronically unprepared student can do fairly well on them if they trust that all their partners know the correct answers."

    While this may be undesirable in an academic setting, this happens very often in real life--i.e., a person who may not be the cream of the crop surrounds him or herself with intelligent people and manages to succeed (or at least cover up failure). So while the original mixer test format may not have been ideal for learning English, it was probably teaching a valuable life lesson: you're only as good as the people with whom you surround yourself.

  2. Dis-moi qui tu hantes et je te dirai qui tu es (tell me whom you frequent, and I'll tell you who you are), as the saying goes.

    But, damn, is that the life lesson I want to be teaching?

    When I hit upon the mixer quiz idea while teaching French in an American Catholic high school in 1993, I was relieved. I'd been trying to find some method to get the students talking with each other as well as moving around, but I knew I wanted the quiz to reflect the student's own merits. This is why it's crucial to the quiz that each student be responsible for the rightness or wrongness of the answers appearing on his/her paper.

    Unfortunately, as the Korean example shows, deliberately collaborative efforts (e.g., through clustering in groups) can muddy the waters, making it difficult to determine to what extent the student's quiz answers are a reflection of her own merit. I plan to set up my next mixer quiz in such a way that mixing will occur according to a rigid procedure. I learned my lesson: spontaneity in a quiz isn't the best medicine.

    No method is flawless, though, and so I've built in measures, e.g., the one-on-one interview, to compensate for methodological flaws. I suppose one has to make compromises no matter how one goes about the teaching game.


    Kevin

  3. What crossword puzzle software did you buy, and would you recommend it?

  4. I'm using Crossword Forge 4, for Mac. It's also available for Windows. A writeup is here.

    So far, I'm enjoying the software. It's fairly easy to figure out and good for my limited needs. Actually, I like the word search builder more than the crossword builder.

    When making crosswords, the computer has to figure out the best way to make the words intersect with each other within a finite gridspace. This means there's a risk of crowding or awkwardness: some words might not fit into the grid at all. This happened on a recent crossword I made: my "word bank" had 25 words in it, but the puzzle fit only 24. The word search is better because it's easier to scrunch a lot of words together.
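
    If you're curious why a word gets left out, here's a deliberately crude sketch of the idea: it only tries vertical crossings and ignores the real rules about adjacent words, but the failure mode is the same-- a word with no workable overlap simply gets dropped. (This is my own toy illustration, not how Crossword Forge actually works under the hood.)

        # Toy word placement: lay the first word down horizontally, then try to cross
        # each remaining word vertically through a matching letter already on the grid.

        def try_place(grid, word):
            for (row, col), ch in list(grid.items()):
                for i, letter in enumerate(word):
                    if letter != ch:
                        continue
                    cells = [((row - i + j, col), word[j]) for j in range(len(word))]
                    if all(grid.get(pos, w) == w for pos, w in cells):
                        grid.update(dict(cells))
                        return True
            return False

        def place_words(words):
            grid = {(0, col): ch for col, ch in enumerate(words[0])}
            return [w for w in words[1:] if not try_place(grid, w)]

        print(place_words(["GENEALOGY", "GENETICS", "BUZZ"]))
        # ['BUZZ']-- it shares no letter with anything already placed, so it won't fit.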

    All in all, I'd recommend the software.


    Kevin

