Learning Assessment Should Not Mimic A Google Search

Publication date:

April 06, 2021

All through my 18 years of formal education, my progress was measured by how well I could answer exam questions. If I got enough answers correct, I was certified as being knowledgeable and could move on to the next level. This was probably the routine for most of you as well: drip, test, move, drip, test, move, and so on.

The problem with the standard testing approach is that answers to questions reveal very little about how knowledgeable one really is. I am sure we can all remember times when we aced a test knowing full well that we were on thin ice when it came to understanding the material we were being tested on. The fact is that answers reveal some knowledge but not the depth of understanding of that knowledge. Just think of it this way: do Siri and Alexa (or any other digital assistant you might use) understand anything of what they give you in response to your questions? Exactly.

In my years as a teacher, I was always curious about why some students would give the wrong answer to some of my questions. On one hand, I wanted to know if I had been the source of some confusion or misunderstanding and, hence, could improve on my delivery. On the other hand, I wanted to know what gaps in their knowledge and/or reasoning prevented them from fully comprehending what I taught them. Those insights gave me a better handle on what level to pitch my material at and how to best frame it. For me, teaching was always a co-learning opportunity.

As one would expect, the arguments my students gave to support erroneous answers usually revealed a lack of depth in their understanding of what I taught them. On a few occasions, however, I was surprised by the quality of their thought processes. Although they missed a question, those processes revealed a pretty good command of the material. In fact, I suspected that it might have been better than that of some of their classmates who aced the questions. Since the objective of education is effective learning and not acing tests, this revelation should give us pause about how we continue to assess learning outcomes.

Answering exam questions does not reveal depth of understanding – or lack thereof – but an ability to recall information. In the case of multiple-choice questions, that information is even handed to the student, who simply has to match keywords disguised in the question with the options given. In that sense, standard testing merely mimics a Google search; i.e., see if the keywords bring back what we were looking for.

There is no way to improve learning effectiveness without proper assessment of it. If we want students to actually comprehend what they learn so they can constructively and decisively work with it, we need to assess understanding and not just knowing. We need to go way beyond just the recall or identification of information. Much more effort should go into such assessment. As the saying goes, you can’t manage what you can’t measure.

What might sound a bit surprising at first is that, in contrast to answers, the quality of the questions one raises about acquired knowledge reveals quite a bit more about the depth of one's understanding of that knowledge. Let me illustrate what questions and answers reveal in a slightly different context. It will further support the need to drastically improve how we assess learning.

I am a big believer in gamified learning and have often used business simulation games as effective learning tools. One game I developed myself and used extensively over the years is eTales. It is a multi-team game with some teams playing the manufacturer role and others playing the brick-and-mortar retailer role. The manufacturers can sell their products directly online or through the retailers' stores. Hence, the teams can collaborate and/or compete. The implied vertical and horizontal competition makes for a rich learning environment.

In the eTales game, the retailers and the manufacturers negotiate on how they will work together if they decide to do so. It is quite interesting to see how both sides approach these negotiations. Manufacturers and retailers view the world very differently and, hence, arrive at the negotiation table with fundamentally different perspectives: retailers want all the products from all the manufacturers in all their stores but only in their stores; manufacturers want the retailers’ stores to carry all of their products but only their products. Hence, if they want to work together, then they need to find some common ground where both can expect to realize tangible benefits.

To arrive at such a point, both sides need to learn to understand each other. This is achieved through asking questions as both sides will naturally hold their own cards close to their chests. Part of the learning process is to identify what questions to ask and how. Clearly, either side wants to get as much information as possible about the other side but does not want to reveal much about themselves in the process of doing so.

The principle by which both sides should approach these negotiations is simple: act naive but don't be naive. If you start asking smart questions, that will put the other side on guard and they might clam up. In contrast, if you ask more naive questions, you might get the other side to talk more freely. Human behavior is quite predictable here: feeling at ease, one tends to say much more than one should. What participants learn (usually when it is too late) is that you easily reveal how smart you are by what questions you ask and how you ask them.

For some of us, this is what we intuitively do when we assess people in social settings; we judge them by the questions they ask. Just think of what most of us do when we evaluate job candidates. We take note of the answers they give to our questions but we are particularly tuned in to the type and quality of questions they ask us. Just think of how you feel about a candidate who has no questions to ask at all.

The lesson that education needs to learn is that knowing something is not equivalent to understanding it. We all witness that on almost a daily basis. The distinction between knowing and understanding is quite important when we consider how intelligent agents will increasingly empower our decision making in years to come. The contribution that human intelligence can bring to the table in such a collaborative environment is precisely the understanding that goes beyond simply knowing. Hence, the last thing we should be doing in education is cloning Siris or Alexas. Education should make sure that students have a robust understanding of the knowledge they acquire. If they just know, intelligent agents will beat them at every turn.

In education, we rarely if ever assess learning effectiveness based on the quality of the questions asked. We typically just mark off the correct (i.e., expected) answers. If the student recognizes the keywords in the question, something comes back, and that student is certified as being knowledgeable. Nothing could be further from the truth. All such answers reveal is an ability to recall knowledge based on keywords contained in the questions. Without any understanding, that knowledge is very fragile. The consequence is that we end up graduating students who don't really know what they know. That sets them up for a future where they will be easily replaced by intelligent agents that know much more, can produce it much faster, and can do so with no error and zero effort.
