I look at other people’s languages on here and I don’t have to understand them to see that they are just beautiful.
People modelling large language scopes are wondrous. Machines modelling languages for profit at the expense of habitability are shit
…as Hank Green asserts, language is our most precious technology.
My strong view is that if we delegate it to machines to feed to machines without oversight, we’re fucked.
So let’s not do that.
If we delegate it to machines to be reappraised by humans *before* relaying it on to other humans, and we have clear visibility of the consequences, that can be less bad.
@urlyman I agree with Hank. An LLM is never going to be halfway through writing something and realize there's another angle to it that's never been explored, for example. It's never going to be able to tell a story from an unusual viewpoint or a historical perspective. It's never going to be able to interpret events in terms of emotions or any other aspect of human psychology. It's never going to capture human experience in a few words.
LLMs don't seem to even know that they're repeating the same information, or that they've changed a person's gender, so how can we expect subtlety or nuance from them? The best they can do is make a chopped fruit salad out of pickings of text they've lifted from the internet.
And when they're making fruit salad out of their own fruit salad ... what then?
…What tends to happen is something like this: a culture of no care pumps an AV stream through an LLM to produce a transcript which, with *no oversight*, is offered up to people trained into a state of attention deficit.
That’s quite a large oversight