@xuv writes
@xuv_writes@p.xuv.be · last week

The problem is that, in LLMs, words (symbols) are not grounded in experiences of the real world, so any meaning of words needs to be inferred from the relations of words to each other, floating in some abstract space and being prone to misinterpretation, let alone hallucinations.

Max Riesenhuber is the co-director of Georgetown’s Center for Neuroengineering.

source

https://p.xuv.be/quoting-max-riesenhuber #AI #LLM
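
The "relations of words to each other" mechanism Riesenhuber points at is essentially distributional semantics: a word's representation is built only from the words it co-occurs with. A minimal sketch of that idea follows; the toy corpus, the one-word context window, and the word choices are illustrative assumptions, not anything from the quoted source. Note that "cat" and "dog" come out as similar purely because they appear in similar contexts; nothing in the model has ever encountered a cat or a dog.

```python
# Toy distributional model: word "meaning" from co-occurrence alone,
# with no grounding in the things the words refer to.
# (Hypothetical corpus and window size, for illustration only.)
from math import sqrt

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "the dog ate the bone",
]

# Build one co-occurrence vector per word: counts of its immediate neighbours.
vocab = sorted({w for line in corpus for w in line.split()})
index = {w: i for i, w in enumerate(vocab)}
vectors = {w: [0] * len(vocab) for w in vocab}

for line in corpus:
    words = line.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - 1), min(len(words), i + 2)):
            if j != i:
                vectors[w][index[words[j]]] += 1

def cosine(a, b):
    # Similarity is pure vector geometry over co-occurrence counts.
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

print("cat ~ dog:   ", round(cosine(vectors["cat"], vectors["dog"]), 3))
print("cat ~ chased:", round(cosine(vectors["cat"], vectors["chased"]), 3))
```

LLM embeddings are learned rather than counted and live in far higher-dimensional spaces, but the point of the sketch stands: whatever "meaning" the vectors carry is inferred from word-to-word relations, which is exactly the ungrounded, abstract space the quote describes.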