Ulrike Hahn
@UlrikeHahn@fediscience.org · 2 months ago

Another great talk at #cogsci25 yesterday was by Sean Trott, "Do we know enough to know what language models know?", on the difficulties of trying to make sense of LLMs.

One of his most useful points, I thought, was that we need to think more clearly about what it would mean if an LLM passes a human behavioural test, say a theory of mind test:
- do we bite the bullet and acknowledge the capacity?
- do we reject the capacity regardless (and if so, why)?
- do we change our views on the construct validity of the test, either for machines alone or for both machines and humans?

I think these are questions worth thinking about in advance; doing so could yield a lot of conceptual and methodological clarification.
