@inthehands This is why experienced developers can make use of LLMs, and why LLMs won't replace them.
Despite the obviousness of the larger conclusion (“LLMs don’t give accurate medical advice”), this passage is…if not surprising, exactly, then at least really, really interesting.
2/
There’s a lesson here, perhaps, about the tangled relationship between what is •typical• and what is •correct•, and what it is that LLMs actually do:
When medical professionals ask medical questions in technical medical language, the answers they get are typically correct.
When non-professionals ask medical questions in a perhaps medically ill-formed vernacular mode, the answers they get are typically wrong.
The LLM readily models both of these things. It has no notion of correctness in either case; correctness simply happens to be more statistically typical in one mode than in the other.
3/