
Contact North’s AI tools
I’ve noticed that Contact North is now providing free access to a suite of four AI tools. They can all be easily accessed on Contact North’s AI tool landing page:
- AI Tutor Pro: aitutorpro.ca
- AI Teaching Assistant Pro: aiteachingassistantpro.ca
- AI Pathfinder Pro: aipathfinderpro.ca
- AI Trades Explorer Pro: aitradesexplorerpro.ca
I’ve decided to try out and evaluate these tools over the next few blog posts, and will share my results with you.
My evaluation criteria
To do this, I have constructed a fairly simple evaluation framework, and I would be really interested in your suggestions regarding this framework. Are these the right questions to ask about these tools, or are there other issues I should consider?
1. Target group (Scale: 0-5)
Is it clear who should make use of these tools and for what purpose? This is not a simple criterion. If the target group is too broad (everyone in the world, for instance) the tool is unlikely to satisfy or be useful to everyone. It may be useful for a particular type of student, potential student, or instructor, but this may not be clear without a lot of time wasted experimenting with the tool.
Given that Contact North is focused primarily on post-secondary students and instructors, and possibly Grade 12 students and teachers, mainly in Ontario but also across Canada, I will rank this on a five-point scale:
- 0: not at all clear who should use this tool
- 2-3: useful for a small number of (potential) students or instructors (2 if it is not clear which, 3 if it is clearly useful to a small sector)
- 4: useful for the majority of students and/or instructors, but not for all
- 5: all students and/or instructors would benefit from this tool
I’m using only a five-point scale here because of the difficulty of anticipating how useful the tool will be to anyone other than myself.
2. Ease of use (Scale: 0-10)
This is critical, so I am using a ten-point scale. The tool must be easy to use, especially if it is aimed at learners. The types of questions I will be considering are as follows:
- Is it easy to find/log in?
- Is it easy to ask questions?
- Does it provide the necessary information quickly?
- Can I follow my own line of inquiry?
- If I get stuck, will it re-direct me?
- Is it obvious what I must do once I’m engaged?
3. Accuracy/comprehensiveness of information (Scale: 0-10)
Again, this is a critical criterion, so I also rank it on a ten-point scale. The following are the types of questions I will be asking:
- How accurate is the information provided?
- Is the information correct within context?
- Does it provide a range of possible answers where this is appropriate?
- Does it provide relevant follow-up questions or activities?
If I have grave concerns about the accuracy or comprehensiveness of the answers, this criterion alone may be enough for me to ‘fail’ the tool.
4. Likely learning outcomes (Scale: 0-10)
This is an essential criterion, but one that will likely be difficult for me to assess, because it will depend very much on who is using the tool. However, there is a ‘scale’ of possible outcomes that I will use, for a total of 10 points:
- provides accurate/essential information on the topic/question (1-3 points)
- helps with understanding of key concepts or principles within the study area/topic (2-3 points)
- enables/supports critical thinking about the topic (5 points with good feedback, 3 points without)
- motivates the learner to continue learning about the topic (1 point)
5. Transparency (Scale: 0-5)
This is a simple question: where does this information come from? Who says? Will it provide references, facts or sources to justify its answers or the information it provides? What confidence can I have in the information provided? I will use a scale of 0 to 5 for this criterion.
6. Overall satisfaction (Scale: 0-10)
This will encompass a number of subjective issues. Most of all, is the tool fun to use, and do I have a general feeling of satisfaction when using it? Or does it leave me feeling uneasy or concerned about its likely impact on learners (or, to a lesser extent, instructors)? Is there a coherence to the learning provided by the tool? Does it encourage cheating?
This will leave me with a maximum score of 50, although a tool could fail completely on a couple of criteria (lack of accuracy or a failure to lead to learning). Any tool scoring more than 30 is likely to be useful.
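To make the arithmetic concrete, here is a minimal sketch in Python of how the six criteria combine into the 50-point total. Only the per-criterion maxima come from the framework above; the example scores are invented purely for illustration:

```python
# Maximum points per criterion, as set out in the framework above.
RUBRIC = {
    "Target group": 5,
    "Ease of use": 10,
    "Accuracy/comprehensiveness": 10,
    "Likely learning outcomes": 10,
    "Transparency": 5,
    "Overall satisfaction": 10,
}  # maximum total: 50

def total_score(scores: dict) -> int:
    """Sum the scores, checking each stays within its criterion's maximum."""
    for criterion, score in scores.items():
        maximum = RUBRIC[criterion]
        if not 0 <= score <= maximum:
            raise ValueError(f"{criterion}: {score} is outside 0-{maximum}")
    return sum(scores.values())

# Hypothetical scores, not a real evaluation of any tool:
example = {
    "Target group": 4,
    "Ease of use": 7,
    "Accuracy/comprehensiveness": 6,
    "Likely learning outcomes": 5,
    "Transparency": 2,
    "Overall satisfaction": 7,
}
print(total_score(example))  # 31 -- above the 30-point 'likely useful' threshold
```

Note that the total is not the whole story: as stated above, a failing score on accuracy or learning outcomes alone could disqualify a tool regardless of its sum.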
I would add one more criterion: ethics and privacy. For instance, does the tool protect or anonymise student data? However, I have no way of knowing this.
Over to you
- What criteria would you use instead of the ones I’ve listed?
- Would you weight them differently?
- Have you used these tools? What would your evaluation be?
Please use the comment box at the end of this post.
Up next
I will be writing my evaluation of aitutorpro.ca towards the end of this week.
Interesting offerings, and it looks pretty much like the stuff that promises the magical time savings and personalization of GenAI. Your criteria are sound; I look forward to seeing what you can come up with on Transparency. I hopped in, watched a few videos and demos, and created an account with the Learning Shorts tool.
The front-facing site just describes it as “powered by AI” and “free, private, confidential”, though with no details on exactly what is powering it nor how it is “private”. I am guessing it is OpenAI, as one of the tools was labeled as needing a “ChatGPT” account, and, buried way down in a list of updates from last year, it is described as “ChatGPT 4o powered”: https://www.aiteachingassistantpro.ca/whats-new
I have to admit, after watching a few of the videos it produced: first of all, the quality is rather impressive, but at the same time, after sitting through a few of them, it’s looking across the uncanny valley. There’s a sameness that I would find boring, and in most of the videos, even when they include the name of the university as “personalization”, the content is very vanilla. It is all bland summaries, lacking any context or links to sources.
But maybe that’s just me; I’d be really curious how valuable students find this talking robotic content. I wandered into the YouTube channel for the Learning Shorts, and there is a stark difference between the experience of the AI bots calmly, even with smiles, yammering away about APA formatting and Risk Management in almost exactly the same tone, versus the screencast LTI instructions, which are perhaps less slick, but there’s a person there.
Looking forward to your takes.
Oh, just one more thing: what is the source of your featured image labeled “Chatbot Image”? I keep a collection of AI as represented by robots 😉 The happy smiling boy in the front appears to have the proper number of fingers on his left hand, but note the wrong position of the thumb on his right hand (left side of image).