
Contact North’s AI tools
Contact North has made available for general use a suite of four AI tools for students, instructors and administrators:
- AI Tutor Pro: aitutorpro.ca
- AI Teaching Assistant Pro: aiteachingassistantpro.ca
- AI Pathfinder Pro: aipathfinderpro.ca
- AI Trades Explorer Pro: aitradesexplorerpro.ca
In a previous post, I set out some criteria for making this analysis. Thanks to several people who suggested improvements, most of which I have incorporated.
What is AI Tutor Pro?
AI Tutor Pro is basically an AI-powered chatbot with two functions:
- ‘Check’: a test of your current knowledge on a particular topic
- ‘Grow’: a tool to help you ‘grow’ or improve your knowledge on a particular topic
It asks you what you wish to learn or be tested on, and allows for several levels of response in multiple languages:
- elementary
- high school
- undergraduate
- professional
For the ‘Grow’ tool you can also choose topic difficulty:
- introductory
- intermediate
- advanced
Both tools are based on questions and answers. You are asked what you want to learn about; the tool then gives answers or feedback and asks you further questions. When you have finished with the ‘Check’ tool, it gives you a numerical mark.
What I did
I tried out AI Tutor Pro as follows:
The Check tool
This asked me initially to enter a subject. I chose two subjects:
- Art
- The use of Artificial Intelligence (AI) for teaching and learning.
The Grow tool
I chose:
- The use of Artificial Intelligence (AI) for teaching and learning.
I am a ‘newbie’ to acrylic painting, which is why I wanted to test my knowledge of Art.
I chose the AI topic because I wanted to stretch or ‘grow’ my knowledge of it, and I also wanted to check what I already knew.
So now to the evaluation.
1. Target group (Scale: 0-5)
Is it clear who should make use of these tools and for what purpose?
Both tools offer levels of education (elementary to professional) as a starting point, and the ‘Check’ tool helps you decide what questions to ask in the ‘Grow’ tool. However, I felt some preliminary questions, before getting into the meat of the interactions, would have helped. For instance, it would have helped if the tool had asked me at the start: what kind of art are you interested in? This would have filtered out questions on areas I wasn’t interested in, such as architecture or sculpture. As always, such tools are only as good as your own input allows them to be.
I would give both tools a score of 4 out of 5 on this criterion.
2. Ease of use (Scale: 0-10)
- Is it easy to find/log in? Yes, just click on the url and go.
- Is it easy to ask questions? Yes – almost too easy, as there are often no boundaries on what you can ask.
- Does it provide the necessary information quickly? Extremely fast (a couple of seconds) even for complex answers. This was the most impressive feature of both bots.
- Can I follow my own line of inquiry? In theory, yes. If you don’t like the question the bot asks after its feedback on the previous question, you can ask your own question. However, I definitely had the feeling of going down a rabbit hole that I had not designed and whose destination I didn’t know. This was particularly so for the ‘Grow’ tool on AI. I was asked what AI would find difficult to do, and I answered ‘dealing with complex problems’. It asked me for an example; I gave the drug overdose crisis, and found myself 10 minutes later still trying to solve the drug problem rather than learning more about AI.
- If I get stuck, will it re-direct me? Not really. If I said ‘don’t know’, it would give me the answer and then ask another question, but the next question was not an elaboration of the part I hadn’t understood.
- Is it obvious what I must do once I’m engaged? Yes.
I would give this a score of 8 out of 10.
3. Accuracy/comprehensiveness of information (Scale: 0-10)
- How accurate is the information provided? As far as I could tell, it was 100% accurate. I did disagree a couple of times on Art (‘Which artists used light to great effect?’ I said the Flemish Renaissance painters; the bot said Caravaggio. But hey, who’s to say?)
- Is the information correct within context? Yes, especially with regard to the AI questions.
- Does it provide a range of possible answers where this is appropriate? You usually get just one or two answers at best. It could improve on this count.
- Does it provide relevant follow-up questions or activities? Most of the time, but see the rabbit-hole effect above.
However, I had a major problem with saving the material for later use. There is a ‘copy’ button, but I have no idea where the record of the questions and answers is stored, or how to access a copy of the transactions. The only thing I could do was to print what was on the screen, one screen at a time, which was not satisfactory. This is a major fault, as it is essential for a learner to be able to go back and find the information whenever needed. There is a ‘Help me improve’ button at the end which allows you to provide an email address and then receive an analysis of your questions and answers, but users should be able to have an online store of all their transactions as they go; it is extremely frustrating to keep losing the information.
I would give it 7 out of 10 on this score, mainly because of the difficulty of retrieving one’s interactions.
4. Likely learning outcomes (Scale: 0-10)
- provides accurate/essential information on the topic/question (1-3 points): full marks (3 points) in most cases.
- helps with understanding of key concepts or principles within the study area/topic (1-3 points): I would give it 2 points, mainly because it depends heavily on the user asking the right questions in terms of their understanding of the topic. If you don’t know what to ask, it will prompt with another question, but that may not address the particular difficulty you have.
- enables/supports critical thinking about the topic (with good feedback: 1-5 points; without: 1-3 points): I would give it 2 on this. It depends too much on the learner coming up with the right questions.
- motivates the learner to continue learning about the topic (1 point): I lost interest in the ‘Check’ tool after about 10 minutes, and in the ‘Grow’ tool after 20 minutes, on each topic, so no points here.
I would give this a score of 7 out of 10.
5. Transparency (Scale: 0-5)
Where does this information come from? Who says? Will it provide references, facts or sources to justify its answers or the information it provides? What confidence can I have in the information provided? I really have to give a 1 for this. I did have some confidence in the information when it matched what I already knew, but I have no idea where the information was sourced. Whenever I asked for a source, I received the following:
‘my responses are generated based on a mixture of licensed data, data created by human trainers, and publicly available information. I do not have direct sources or citations for specific answers, as my knowledge is based on a wide range of information up to October 2023.’
On the ‘Check’ tool, I received a score of 70. I have no idea whether this is good or bad. The more questions I answered, the higher the score, so it is pretty meaningless. Some kind of comparison would be useful.
6. Ethics and privacy (Scale: 0-10)
Privacy
There is a link to Contact North’s data security and privacy policies at the bottom of the landing page of each AI tool. These policies are clear and reasonable, although data are sent to OpenAI, where they are removed after 30 days.
Ethics
Although these are important topics, I have difficulty knowing whether or not these tools will be used ethically. The awkwardness of saving the interactions at least makes cheating harder. There is not space here to deal with the general issues of cheating, but if instructors set questions that need paragraph answers or less on a particular topic, these could easily be answered by these tools (especially the ‘Check’ tool). However, instructors should be asking better questions that require students to do their own thinking.
I would give this 8 out of 10.
7. Overall satisfaction (Scale: 0-10)
Is the tool fun to use, and do I have a general feeling of satisfaction in using it? Or does it leave me feeling uneasy or concerned about its likely impact on learners (or, to a lesser extent, instructors)? It was fun for about 20 minutes (for Art) to an hour (for AI in teaching and learning). I did learn a little more about AI than I knew before, but I never got to the core of how it works, which was what I wanted; obviously, though, that would need more than an hour on the tool. I had a feeling of diminishing returns after an hour, and would need to use it a lot more on a specific topic to really understand its limitations.
Is there a coherence about the learning provided by the tool? I hate to say this, but I think there is a limit to the Socratic method of question and answer. Sometimes I wanted stronger guidance on what questions to ask, as the questions the bot asked didn’t always take me in the direction I wanted to go, and I didn’t know what alternative questions to ask. I suspect I may be a bit more sophisticated in my learning than high school students, so the less education one has, the more limited (or dangerous) these tools may be.
I was also disappointed that all the bot answers were textual. Some visuals would really have helped, particularly regarding Art.
I am giving this a 5 on overall satisfaction, but this is very personal. Some will love the tool and others will hate it.
Overall evaluation
I’m not sure how useful scoring is, but my analysis results in a score of 40 out of 60 (4 + 8 + 7 + 7 + 1 + 8 + 5 = 40, against a possible 5 + 10 + 10 + 10 + 5 + 10 + 10 = 60), or 66.6%. A score of 100% would indicate that these tools could stand entirely on their own for learning, so a score of 66% suggests they need to be supplemented with other forms of instruction. The main issues were the difficulty of tracking one’s transactions and, as always with AI tools, the lack of transparency. In general, though, AI Tutor Pro will be a very useful tool for many learners, and especially for independent learners.
The challenge for instructors will be integrating these tools within their teaching. Instructors will need to consider their learning objectives carefully, given that these tools mainly support comprehension, and are very good for this particular function of learning. Generally, the tools will be significantly enhanced if used for group discussion and group work, with guidance from the instructor, especially on the questions to ask.
Over to you
Have you used AI Tutor Pro? If so, what was your reaction?
Up next
I will be doing a similar evaluation of AI Teaching Assistant Pro, to be published towards the end of next week.