For the last several years I've wrapped up my 7th grade CS classes with a big robotics build. Students design, construct and code their own Raspberry Pi robots in teams and then showcase their work at our exhibition.
This year's teams really stepped it up, asking to learn all sorts of new tools, libraries, and programming techniques in order to make their robots more interactive.
From chatbots to voice-activated translators, several teams had great ideas about how A.I. could help them innovate to help others. And in the process of learning how to integrate A.I. capabilities into their code, students also learned about the limitations, challenges, and ethical considerations of current A.I. technology.
Our A.I. requirements
Since I had never integrated A.I. into my programming before either, I was as much a novice as the students. So we decided that each team interested in A.I. capabilities would use the same library (whether it exactly fit their original project pitch or not) so that we could all learn together.
What we needed was:
speech recognition
the ability to run the library locally/offline (for student privacy)
software that would work within the storage and processing limits of our Raspberry Pi 4s
What we used
After quite a bit of searching, I found PocketSphinx -- a small, open-source speech recognition engine developed at Carnegie Mellon University.
It requires just a couple of installs and runs locally, so I decided it was a good tool for the students to experiment with.
It's not the most sophisticated engine, and it was a little buggy (a couple of our computers required MANY installs and reinstalls before the software would recognize a word...), but it turned out to be a great speech recognition tool for students to integrate into their projects.
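For anyone curious what getting started looks like, here is a minimal sketch of the kind of listening loop our teams built on. This is my own simplified example (not any team's exact code), assuming the pocketsphinx Python package and its default US English model:

```python
# A minimal offline listening loop -- a sketch, not any team's exact code.
# Assumes `pip install pocketsphinx` and a working microphone (depending on
# the version, LiveSpeech uses PyAudio or sounddevice under the hood).
from pocketsphinx import LiveSpeech

# LiveSpeech reads from the default microphone and yields one recognized
# utterance at a time, all processed locally on the Pi.
for phrase in LiveSpeech():
    text = str(phrase).lower()
    print("Heard:", text)
    if "hello" in text:  # hypothetical trigger word for a companion bot
        print("Robot: hi there!")  # a real project would move or speak here
```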
Our learning: challenges & ethics in A.I.
In the process of working with PocketSphinx and designing their programs, students learned a lot more about how A.I. works, and especially some of the challenges and ethics related to A.I. development.
"why can't we just use Chat GPT in our projects? I already use it all the time..."
"this is so slow...."
"it's not understanding what I say!"
"why doesn't it know that word?"
"why isn't my code working? I got it from Chat GPT..."
These were some of the comments from our speech recognition teams as students worked with PocketSphinx to create their companion bot, automated translation bot, mathematics buddy, and robotic basketball trainer.
Data privacy -- In explaining my choice to use PocketSphinx, the students and I had a good conversation about our personal data and the ways in which online models use our conversation data to feed their LLMs (large language models). Several students were surprised to learn that their data is collected each time they use ChatGPT (and that there is technically an age recommendation for using ChatGPT).
Speed of model & energy use -- Why was our program so slow to run? We discussed how LLMs need enormous processing power to quickly analyze huge amounts of data, and how they use quite a lot of energy to process that data.
How are models trained, and by whom? -- While working with this limited model, I had a chance to reiterate for students what "machine learning" is and how A.I. models are trained. Students' observations of how well the A.I. worked in their projects (or didn't work the way they'd imagined) raised great questions about why an A.I. tool might not recognize one person's voice (or accent) as well as another's, or why it might not recognize certain phrases or colloquialisms in the ways we expect or want it to.
Effective prompt writing & A.I. literacy -- In the course of our robotics unit, some teams took the initiative to use A.I. to program their bots to do things they didn't know how to code themselves. It was exciting to see students moving their learning forward independently, and in the process they learned about the nuances of effective prompt writing and that A.I. output cannot be assumed to be 100% accurate or the best solution to the problem at hand.
For example, one team wanted to code their robot to "dance" and prompted A.I. to tell them how to move their robot's wheels in particular ways to create that dance. During their team check-in (and after significant work had already gone into copying ChatGPT's output into their robot's program), I had to deliver the bad news that the code was never going to run on our hardware. The program might have worked with another setup, but they had not prompted ChatGPT to write a program using the Python library needed to control the specific motors our robots used.
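To give a sense of how hardware-specific that code has to be, here is a hypothetical sketch using the gpiozero library (with made-up GPIO pin numbers). gpiozero is just one example of the kind of library a prompt would need to name explicitly so that the generated code matches the motors actually on the table:

```python
# Hypothetical wheel-control sketch using gpiozero, with invented pin
# numbers -- code generated for a different library or motor driver
# simply won't run on this hardware.
from time import sleep
from gpiozero import Motor

left = Motor(forward=4, backward=14)    # pin numbers are illustrative only
right = Motor(forward=17, backward=18)

# A tiny "dance": spin one way, then the other, then stop.
left.forward()
right.backward()
sleep(1)
left.backward()
right.forward()
sleep(1)
left.stop()
right.stop()
```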
What's next?
This experience of developing with A.I. helped students make meaningful, deep connections regarding how A.I. is built and used. In their previous year of computer science (as 6th graders), students engaged in a more direct-instruction unit on A.I. and machine learning: what it is, how it's trained, and how it works. After this deeper dive via making, I watched students speak more confidently about A.I. in their presentations -- both about positive uses of A.I. and about the challenges they experienced with it.
Encouraged by the outcomes of this experience, I am updating my project-based robotics unit to intentionally integrate A.I. exploration and learning experiences for all students. Mini-lessons on prompt writing and model training would integrate nicely into students' work of planning their programs, and offer students an opportunity to dive deeper into A.I. concepts in context. And playing with continuous live speech in PocketSphinx (with students able to see on-screen how the machine is collecting their conversations in real time; see the sketch below) provides a concrete example of data collection in action and launches the conversation around data privacy and ethics in computing.
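As a rough sketch of what that live demo could look like (again assuming the pocketsphinx Python package; the timestamping is my own addition for effect):

```python
# Planned "what is the machine hearing?" demo -- a sketch, assuming the
# pocketsphinx Python package and a microphone. Each finished utterance
# scrolls onto the screen with a timestamp, making the data collection
# visible to students as they talk.
import time
from pocketsphinx import LiveSpeech

for phrase in LiveSpeech():
    print(f"[{time.strftime('%H:%M:%S')}] captured: {phrase}")
```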