
EDU624: Blog 2

  • Writer: Rosa Conti
  • Oct 2, 2022
  • 5 min read

Updated: Oct 15, 2022

Connecting and Assessing e-Learner Knowledge

My experience growing up as a student in the 70s was limited to paper and a sharp No. 2 pencil. We filled in bubbles on answer forms, wrote down answers in journals with our names across the front, and demonstrated our knowledge by writing on the blackboard or raising our hands.


While technology has carried us to a place we couldn’t previously imagine but now take for granted, the transformation from paper to digital has its own advantages and disadvantages. One consequence is that educators, designers, and developers have an obligation to understand how a learner learns and which elements and activities should be included in online learning. As I progress through this course on e-Learning design in diverse environments, I am impressed with this field of education planners because I realize that creating learning content and its subsequent testing activities is a minor work of art.


My last blog talked about how Absorb activities deliver the content that is intended to be learned, whereas Do activities take that information and allow the learner to practice and reinforce their knowledge and skills. I learned recently that the third branch of this planning framework involves Connect activities.

CONNECT ACTIVITIES

Connect activities ensure that students can make a connection between what is being taught and what is already known (Horton, 2012). They also help learners to see relationships between new ideas and previously acquired skills and knowledge. I struggled with understanding the clear difference between Do and Connect activities because each has the learner doing something, and I initially thought the purpose of both sets of activities was to keep students engaged by keeping them, well, busy.


Therefore, my biggest takeaway from studying Connect activities is that their purpose is to create a link, a calibration, an anchor between something newly learned and already known (Horton, 2012). Otherwise, if the purpose is to teach new information, it should be done through Absorb or Do activities.

  • New learning = Absorb and Do activities

  • Anchor learning = Connect Activities

According to Horton (2012), there are several different Connect activities that can be used, each depending on your course's objectives. In other words, not all e-Learning will benefit from using all of the available Connect activities, and not all Connect activities will make sense if inserted into an e-Learning course.


Let me give you an example. I am creating a mock e-Learning module using Storyline for this EDU624 graduate course. It is intended to teach PowerPoint basics to middle school students (sixth- to eighth-graders). In an effort to include Absorb, Do, and Connect activities in the learning, I am currently working out which activities fit.

Because my course is being designed as an asynchronous model, students would be unable to use a Connect activity such as story sharing unless I created a discussion board for the course. Likewise, the content does not lend itself to other types of Connect activities, such as scavenger hunts, guided research, or calculators.


Therefore, I believe a well-suited Connect activity for learning the steps of introductory PowerPoint would be stop-and-think questions. These "ponder" questions encourage learners to consider how they can apply their knowledge. For example: "Where do you think a PowerPoint presentation could be handy in your life?" or "Why do you think adding a photo to this page makes it more exciting?" or "What purpose do you think formatting text serves?" Offering a takeaway document that lists step-by-step instructions could also be a valuable Connect activity, though it might not be needed because the steps are rudimentary. I will flesh this out in the upcoming weeks.


TESTS & ASSESSMENT ACTIVITIES

According to Horton (2012), a test is an activity that provides feedback about how well students are meeting learning objectives. There are a handful of things to decide before writing test questions. Below are a few that stood out to me.


  • Is the test warranted? Will it help to measure performance, or is it included merely as an add-on activity? Remember, if self-practice is what you’re after, Do activities achieve this.

  • Will the test be used to enhance learning, or to measure, certify, or motivate learners? If a test is used to enhance learning, it should be offered up front to gauge a starting point before the content delivery (Absorb activity).

  • What kinds of tests will be used? Test questions define and clarify learning objectives, so it’s essential to be sure the ones used can do this.

  • Do you want the learner to “recall” or “recognize” information? Tests are suitable for subjects that require memorization and knowledge validation. Do activities help with recognition and practice opportunities.


My biggest takeaway this week is that tests and assessments aren’t always necessary (or even applicable), and it’s not a given that they will add value. In fact, an irrelevant test casually dropped into an e-Learning course can backfire.


For example, can you imagine how a redundant test can provoke learner anxiety, creating an inability to focus? Or how a student may let up on their concentration and effort if they think they are doing well (or “well enough”) because they scored high on an easy test? What about students who sufficiently understand the learning but “do badly” on timed tests (or tests in general)?


If you’re going to include test questions, be sure they make sense. Twice, in separate undergraduate courses, I experienced multiple-choice questions where the answers were ambiguous, and more than one answer was correct. Fortunately, my college agreed and returned my lost points for the two ill-designed questions.


Another thing to consider when including a test is whether it should be an open-book or closed-book activity. I believe people put too much emphasis on memorization and overlook resourcefulness. Take my colleague, Paul. He has a master's degree in mechanical engineering with four decades of field experience (and is wicked smart, in my opinion). Yet, he claims that he (and other engineers) would do poorly on a knowledge test because his work depends on reference charts and other resources. He says the real skill is knowing where to find relevant information and what it means, and I agree. Herein lies another example where testing may be misused or misinterpreted.


As for me, I will not include tests in my mock course model. The content is so straightforward that there aren't enough "asks" from the learning content to warrant an assessment, and the current Do activities suffice in offering interactive practice tasks. Creating anything more would be unnecessary for both the learners and the designer (me). It's worth noting that if I had designed my mock course as a synchronous model, a practical test at the end of the course could ask students to demonstrate their learning by creating a simulated (or original) PowerPoint slide for grading.

This video, presented by the Tennessee Department of Education, offers insight into how assessments can inform teaching by providing one form of feedback about students' academic progress.

References


Horton, W. (2012). E-Learning by design (2nd ed.). John Wiley & Sons.




© COPYRIGHT 2024 ROSA CONTI  |  ALL RIGHTS RESERVED
