Discussion

Aaron
@hosford42@techhub.social  ·  last month

If you want a specific example of why many researchers in machine learning and natural language processing find the idea that LLMs like ChatGPT or Claude are "intelligent" or "conscious" laughable, this article describes one:

https://news.mit.edu/2025/shortcoming-makes-llms-less-reliable-1126

#LLM #ChatGPT #Claude #MachineLearning #NaturalLanguageProcessing #ML #AI #NLP

Dianora (Diane Bruce)
@Dianora@ottawa.place replied  ·  last month

@hosford42 "colorless green ideas sleep furiously!"

Aaron
@hosford42@techhub.social replied  ·  last month

There are quite a few examples of this sort of problem already documented in image processing. Examples include neural networks that learned to recognize *not* the thing they were being trained to recognize, but instead the background, setting, camera, lighting, or presence of markings like watermarks or logos.

It comes as no surprise at all that the same problems can be observed in language models.
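
A minimal sketch of the effect on toy data (assuming numpy and scikit-learn are available; the "watermark" feature and all numbers are invented for illustration, with logistic regression standing in for a neural network):

# Toy demonstration of shortcut learning: the model earns high training
# accuracy by latching onto a spurious "watermark" feature rather than
# the weak feature that actually reflects the target class.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

y_train = rng.integers(0, 2, n)
signal = y_train + rng.normal(0, 2.0, n)     # noisy feature truly tied to the class
watermark = y_train + rng.normal(0, 0.1, n)  # spurious feature, nearly perfect in training
X_train = np.column_stack([signal, watermark])

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # ~0.99

# At test time the watermark no longer tracks the class, so the
# shortcut collapses and accuracy falls to near chance.
y_test = rng.integers(0, 2, n)
X_test = np.column_stack([y_test + rng.normal(0, 2.0, n),
                          rng.normal(0, 0.1, n)])
print("test accuracy:", clf.score(X_test, y_test))

The training data makes the shortcut look far more reliable than the real feature, so the model weights it heavily, exactly the failure mode seen with backgrounds and watermarks in image classifiers.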
