Image: Chatbot Image, 2025

Contact North’s AI tools

I’ve noticed that Contact North is now providing free access to a suite of four AI tools. They can all be easily accessed from Contact North’s AI tool landing page.

I’ve decided to try to evaluate these tools over the next few blog posts and will share my results with you.

My evaluation criteria

To do this, I have constructed a fairly simple evaluation framework, and I would be really interested in your suggestions about it. Are these the right questions to ask about these tools, or are there other issues I should consider?

1. Target group (Scale: 0-5)

Is it clear who should make use of these tools and for what purpose? This is not a simple criterion. If the target group is too broad (everyone in the world, for instance), the tool is unlikely to satisfy or be useful to everyone. It may be useful for a particular type of student, potential student, or instructor, but this may not be clear without a lot of time wasted experimenting with the tool.

Given that Contact North is focused primarily on post-secondary students and instructors, and possibly Grade 12 students and teachers, mainly in Ontario but also across Canada, I will rank this on a five-point scale:

  • 0: not at all clear who should use this tool
  • 2-3: useful for a small group of (potential) students or instructors (2 if it is not clear which group; 3 if it is clearly useful to a small sector)
  • 4: useful for the majority of students and/or instructors, but for some it will not be useful
  • 5: all students and/or instructors would benefit from this tool.

I’m only using a five-point scale for this because of the difficulty of anticipating how useful it will be to people other than myself.

2. Ease of use (Scale: 0-10)

This is critical, so I am using a ten-point scale. The tool must be easy to use, especially if it is aimed at learners. The questions I will be considering are as follows:

  • Is it easy to find/log in?
  • Is it easy to ask questions?
  • Does it provide the necessary information quickly?
  • Can I follow my own line of inquiry?
  • If I get stuck, will it re-direct me?
  • Is it obvious what I must do once I’m engaged?

3. Accuracy/comprehensiveness of information (Scale: 0-10)

Again, this is a critical criterion, so I also rank it on a ten-point scale. The following are the types of questions I will be asking:

  • How accurate is the information provided?
  • Is the information correct within context?
  • Does it provide a range of possible answers where this is appropriate?
  • Does it provide relevant follow-up questions or activities?

If I have grave concerns about the accuracy or comprehensiveness of the answers, this criterion alone may be enough for me to ‘fail’ the tool.

4. Likely learning outcomes (Scale: 0-10)

This is an essential criterion, but one that will likely be difficult for me to assess, because it will depend very much on who is using the tool. However, there is a ‘scale’ of possible outcomes that I will use, for a total of 10 points:

  • provides accurate/essential information on the topic/question (1-3 points)
  • helps with understanding of key concepts or principles within the study area/topic (2-3 points)
  • enables/supports critical thinking about the topic, with good feedback (5 points) or without (3 points)
  • motivates the learner to continue learning about the topic (1 point)

5. Transparency (Scale: 0-5)

This is a simple question: where does this information come from? Who says? Will the tool provide references, facts or sources to justify its answers? What confidence can I have in the information it provides? I will use a 0-5 scale for this criterion.

6. Overall satisfaction (Scale: 0-10)

This will encompass a number of subjective issues. Above all, is the tool fun to use, and do I have a general feeling of satisfaction in using it? Or does it leave me feeling uneasy or concerned about its likely impact on learners (or, to a lesser extent, instructors)? Is there a coherence to the learning the tool provides? Does it encourage cheating?

This will leave me with a maximum score of 50, although a tool could fail completely on a couple of criteria (lack of accuracy or a failure to lead to learning). Any tool scoring more than 30 is likely to be useful.
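For anyone who likes to see the arithmetic spelled out, here is a rough sketch in Python of how the rubric could be totalled. The maximum scores per criterion and the 30-point threshold come from the framework above; the shorthand criterion names and the ‘fail’ rule are just my own illustration of the idea, not part of any of the tools being reviewed.

```python
# A minimal sketch of the rubric described above. The maximum scores and the
# 30-point threshold come from the post; the shorthand names and the 'fail'
# rule are illustrative assumptions only.

MAX_SCORES = {
    "target_group": 5,
    "ease_of_use": 10,
    "accuracy": 10,
    "learning_outcomes": 10,
    "transparency": 5,
    "overall_satisfaction": 10,
}  # maximum possible total: 50


def total_score(scores: dict) -> int:
    """Sum the criterion scores, checking each against its maximum."""
    total = 0
    for criterion, maximum in MAX_SCORES.items():
        value = scores.get(criterion, 0)
        if not 0 <= value <= maximum:
            raise ValueError(f"{criterion} must be between 0 and {maximum}")
        total += value
    return total


def verdict(scores: dict) -> str:
    """Apply the rough rules of thumb from the post: a tool can 'fail'
    outright on accuracy or learning outcomes; otherwise a total above
    30 suggests it is likely to be useful."""
    if scores.get("accuracy", 0) == 0 or scores.get("learning_outcomes", 0) == 0:
        return "fail"
    return "likely useful" if total_score(scores) > 30 else "of limited use"


# Example with hypothetical numbers, not a real evaluation:
example = {
    "target_group": 4, "ease_of_use": 7, "accuracy": 8,
    "learning_outcomes": 6, "transparency": 3, "overall_satisfaction": 7,
}
print(total_score(example), verdict(example))  # 35 likely useful
```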

I would add one more criterion – ethics and privacy. For instance, does it protect or anonymise student data? However, I have no way of knowing this.

Over to you

  • What criteria would you use instead of the ones I’ve listed?
  • Would you weight them differently?
  • Have you used these tools? What would your evaluation be?

Please use the comment box at the end of this post.

Up next

I will be writing my evaluation of aitutorpro.ca towards the end of this week.
