The number of people here - in my fedi feed - who uncritically say "I used genAI for this" is increasing.
I am genuinely surprised.
@neil
It depends what you mean, Neil. I've been using it a lot, but only in subjects where I understand more than just the fundamentals, and have a good success rate at detecting bullshit. I've found it excellent for helping me sort out IT configuration issues, but would never ask it to generate content that I could generate myself.
For me, the jury is still out on the efficiency question. It has saved me literally hours of conventional searching, but it's hard to judge either way overall.
It's not a panacea, that's for sure, but dismissing it out of hand is a bit harsh.
If you've used it, I'd be genuinely interested to learn from your experiences:
How did you get comfortable with how the models were trained, and the source of the training data?
How did you assess the environmental implications, which I understand are significant?
@neil I'll drop you an email Neil.
@neil I'm also noticing an increase in #GenAI image posts without noting it.
Saw one of a chinchilla the other day that was too perfect... clicked the link to the 'artist's' site they included, and it was nothing but prints of #AI slop, including #AIPorn / #Nudes.
Their 'artist site' even admitted to using it, and somehow people didn't take issue and kept boosting it here.
I think many here haven't yet developed the critical eye they need to avoid spreading it.
@neil I have some mixed feelings. Perhaps if the tool were more engineer-led and less CEO-led, we could end up with interesting things.
On one hand, I have to suffer through dealing with an energy provider whose AI often misses the point, and you can watch it hallucinate throughout the conversation, while the CEO bleats about how much time it saves them and how efficient it all is.
1/3
@neil On the other, I see people in the open-source community who write better test code because it's relatively easy to add and feels less of a slog.
My team of consultants often has to write one-off programs to migrate configurations. The code quality doesn't matter, since we check the output by hand, so we can work faster and still safely.
But at some point, one of us will get over-confident, and it'll go wrong...
Some things can be done faster, and generalist work becomes easier.
2/3
@neil Basically, I don't think it can come down to "AI Bad", I think as ever, it comes down to tech bros bad, and AI feeds very well into corporate enshittification.
I can be mad at corporate enshittification, at Elon building gas-powered datacentres so they can produce CSAM, at it being harder to speak to support, etc., while also being cautiously interested in the potential for improved open source: an easier point of entry for someone to make a small code change in a project rather than just raise an issue.
3/3
I guess I am more hesitant.
How did you get comfortable with how the models were trained, and the source of the training data?
How do you assess the environmental implications, which I understand are significant?
@neil @wishy the trouble is, a *lot* of the modern world is built on, at best, morally dubious origins. Hell, most of modern medicine is built on foundations made of stolen corpses.
The environmental aspects are trickier, and could be levelled at ... just about anything else online. It'll *probably* get better (because less energy means more profit), plus at what point does a single LLM query outweigh 50 search attempts that lead to just as much gibberish?
I can't speak for wishy, but my choice has been to use models from a European company (Mistral), since I have much higher trust that EU regulations work.
As for environmental impact, I run my models locally. The power usage, when the model is actively working, is about the same as when playing a game.
@neil It feels especially out of place on fedi IMO.