Discussion
CorinnaBalkow
@CorinnaBalkow@digitalcourage.social  ·  activity timestamp last week

Wondering if I have understood LLMs correctly. As a stochastic parrot, you can use one to create texts that sound plausible.
But there isn't any sort of proofing algorithm.

Can any GPT actually look for grammar mistakes? Spelling errors? What is important, or the main message of a text? Whether any of the statements in a text are true?
Whether any of the cited sources exist?

Please advise with any papers that would show that using an LLM for any of those would work.

#AI #AIResearch #AIEthics #LLM
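The distinction behind the question can be made concrete: spelling can be checked by a deterministic procedure (dictionary membership), which is a proofing algorithm in the strict sense, whereas an LLM only samples plausible tokens. A minimal sketch, assuming a toy word list (the dictionary and function name here are illustrative, not from any real spell-checking library):

```python
# Deterministic spell check: membership in a fixed dictionary.
# Same input always yields the same verdict -- unlike sampling
# from a language model's output distribution.

DICTIONARY = {"the", "cat", "sat", "on", "mat", "a"}

def misspelled(text: str) -> list[str]:
    """Return the words in `text` not found in the dictionary."""
    return [w for w in text.lower().split() if w not in DICTIONARY]

print(misspelled("the cat saat on the mat"))  # ['saat']
```

Whether an LLM can reliably emulate such a check is exactly the empirical question the post asks for papers on; the sketch only shows what a non-stochastic check looks like by contrast.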

Ulrike Hahn
@UlrikeHahn@fediscience.org replied  ·  activity timestamp last week

@CorinnaBalkow this question is complicated by the fact that GPTs can also make use of plug-ins that are not themselves transformer-based… so using something like ChatGPT isn’t just ‘using an LLM’, for example

https://gpt.wolfram.com
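The plug-in point can be sketched as a dispatch loop: the model emits a structured tool request, and a deterministic, non-transformer component produces the answer. All names below are hypothetical stand-ins; real ChatGPT plug-ins and the Wolfram plug-in use a JSON function-calling protocol, not this exact shape:

```python
# Sketch of tool-augmented generation: the "model" delegates a
# computational query to a deterministic tool, so the final answer
# is not produced by the LLM's sampling alone.

def fake_model(prompt: str) -> str:
    # Stand-in for an LLM that decides to emit a tool call.
    return "CALL sqrt 2"

def sqrt_tool(arg: str) -> str:
    # Deterministic computation, analogous to a Wolfram-style plug-in.
    return f"{float(arg) ** 0.5:.6f}"

TOOLS = {"sqrt": sqrt_tool}

def answer(prompt: str) -> str:
    out = fake_model(prompt)
    if out.startswith("CALL "):
        _, name, arg = out.split(maxsplit=2)
        return TOOLS[name](arg)  # tool result, not sampled text
    return out

print(answer("What is the square root of 2?"))  # 1.414214
```

The design point: once a tool sits in the loop, the reliability of the final output depends on the tool and on whether the model routes to it correctly, so "can a GPT check X?" splits into two separate empirical questions.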


bonfire.cafe

A space for Bonfire maintainers and contributors to communicate