Redish Lab
@adredish@neuromatch.social · 6 days ago

2/2

What I found was interesting.

I've been writing questions to assess knowledge and to prevent students from successfully BSing their way through.

For example, I ask a specific question about what dopamine does in a made-up experiment that is close to, but not identical to, one in the literature. If the student understands reward-prediction error* and works through it carefully, they get the right answer. If they just make the analogy to the older experiment, they get the wrong answer.

Truth be told, I originally did this because a lot of students work like AI/LLMs themselves, and a big part of my goal is teaching them not to do that. So I started developing questions that you get right if you understand the material, but get wrong if you just work by analogy to the examples.

Turns out this is also good at preventing AI from getting good grades.

* Yes, DA >> RPE, but this is an intro theory class, and that part of the class is about understanding how we get to RPE so we can go beyond it later.

#AI #teaching
