Discussion
Aaron
@hosford42@techhub.social · 2 weeks ago

If you want a specific example of why many researchers in machine learning and natural language processing find the idea that LLMs like ChatGPT or Claude are "intelligent" or "conscious" laughable, this article describes one:

https://news.mit.edu/2025/shortcoming-makes-llms-less-reliable-1126

#LLM
#ChatGPT
#Claude
#MachineLearning
#NaturalLanguageProcessing
#ML
#AI
#NLP

Dianora (Diane Bruce)
@Dianora@ottawa.place replied · 2 weeks ago

@hosford42 "colorless green ideas sleep furiously!"

Aaron
@hosford42@techhub.social replied · 2 weeks ago

There are quite a few examples of this sort of problem already documented in image processing: neural networks that learn to recognize *not* the thing they are being trained to recognize, but instead the background, setting, camera, lighting, or the presence of markings like watermarks or logos.

It comes as no surprise at all that the same problems can be observed in language models.
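
A minimal sketch of that failure mode, as a toy Python/scikit-learn example of my own (synthetic data, not taken from the article): train a classifier on images where a watermark-like marking happens to correlate with the label, then remove the marking at test time and watch accuracy fall toward chance.

    # Toy demo of "shortcut learning": the model keys on a spurious
    # watermark pixel rather than the weak genuine class signal.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_data(n, watermark):
        # Tiny synthetic 8x8 "images" of background noise.
        X = rng.normal(size=(n, 8, 8))
        y = rng.integers(0, 2, size=n)
        X[y == 1] += 0.05              # weak genuine signal: class 1 slightly brighter
        if watermark:
            X[y == 1, 0, 0] += 4.0     # spurious shortcut: bright top-left "watermark"
        return X.reshape(n, -1), y

    X_train, y_train = make_data(4000, watermark=True)
    X_test_wm, y_test_wm = make_data(1000, watermark=True)
    X_test, y_test = make_data(1000, watermark=False)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # High accuracy while the watermark is present; sharply lower,
    # near chance, once it is removed.
    print("test acc, watermark present:", clf.score(X_test_wm, y_test_wm))
    print("test acc, watermark removed:", clf.score(X_test, y_test))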
