I can't help but think creating an LLM to verify the claims of LLMs and thus reduce misinformation is the AI equivalent of fighting fire with fire...
Teach people to use their brains and do real research, surely that is the solution?
"So he asked his wife about it, and his wife said yes
Come back and see me when it's time to know less.
You do too much a-questioning of the world at large
Everybody knows the politician's in charge." — Incredible String Band, "Puppet song"
😂
@alexisbushnell
This should definitely be part of education systems, starting at an early age. Sort of an 'intellectual self-defence'. Until that happens however, I'm working on it another way:
https://techne-system.neocities.org/program/level-2/topic-8/l2t8-i-tech#evaluating-any-technology
There's also lessons on critical thinking as well in the program.
@alexisbushnell It is incredibly convenient for the people selling matches
@alexisbushnell well, sometimes fighting fire with fire works: firefighters use controlled counter-fires to stop the spread of uncontrolled ones, and explosions that consume all the oxygen can snuff out raging fires. So if it's truly the right analogy, it's more nuanced than it sounds.
I agree critical thinking education is certainly part of the solution, but I don't believe it's the full solution, and LLMs can be useful for showing how easy it is to convince people one way or another.
@alexisbushnell They can also misinform at scale for very cheap, and already are. If (and I really mean "if") we can use them to balance the scale by exposing the misinformation, then we probably should, because it's not even close to a fair fight at the moment.
@tshirtman I see that POV for sure, but usage is going to burn the world. What use is maybe reducing the spread of misinformation (which seems unlikely even with LLMs, for a variety of social reasons) if we've destroyed the planet and killed most of the people on it in the process?
@alexisbushnell I think there's misinformation about the energy costs of LLMs too: some uses (image and video generation) are much, much worse than others (chatbots), and some (code generation, which is basically a chatbot on steroids) are in the middle. Compared to a traditional search, a chat prompt might consume much less energy, though it's hard to know for sure given the lack of independent data. People assuming the worst doesn't help.