2/2
What I found was interesting.
I've been writing questions to assess knowledge and to keep students who try to BS their way through from succeeding.
For example, I ask a specific question about what dopamine does in a made-up experiment that is close to, but not identical to, one in the literature. If the student understands reward-prediction-error* and works through it carefully, they get the right answer. If they just make the analogy to the older experiment, they get the wrong answer.
Truth be told, I originally did this because a lot of students work like AI/LLMs themselves, and a big part of my goal is to teach them not to do that. So I started developing questions where, if you understand the material, you get it right, but if you just work by analogy to the examples, you get it wrong.
Turns out this is good for preventing AI from getting good grades.
* Yes, DA >> RPE (dopamine does far more than signal reward-prediction error), but this is an intro theory class, and that part of the class is about understanding how we get to RPE so we can go beyond it later.
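For anyone who hasn't met it, the core RPE idea is tiny: the error is just "reward you got minus reward you expected," and the expectation learns from that error. Here's a minimal sketch of a Rescorla-Wagner-style update in Python; the learning rate and numbers are mine for illustration, not from the class or any particular experiment.

```python
# Minimal reward-prediction-error (RPE) sketch: a single cue whose reward
# prediction is learned from the mismatch between delivered and expected reward.
# All values here are illustrative, not tied to any specific experiment.

alpha = 0.1   # learning rate
V = 0.0       # current reward prediction for the cue

for trial in range(1, 16):
    reward = 1.0            # reward actually delivered on this trial
    rpe = reward - V        # prediction error: got more or less than expected?
    V += alpha * rpe        # prediction moves toward the delivered reward
    print(f"trial {trial:2d}: prediction {V:.3f}, RPE {rpe:.3f}")

# Early trials: large positive RPE (unexpected reward, big dopamine-like burst).
# Late trials: RPE shrinks toward zero as the reward becomes fully predicted.
```

A made-up experimental setup forces students to run exactly that computation for the new conditions, instead of pattern-matching the answer from the classic experiment.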