P.S. This is a short post, so I'm not going into the many problems I have with AI and the ethics around it. And to those concerned specifically about the resources AI uses: we should be concerned! However, applying such focus only to AI, in a world of wild overconsumption, feels like a huge distraction.
2/3
I find AI to be an exceptionally useful tool.
I am disabled, and for decades now I have spent a lot of time researching my health conditions and the law, searching for relevant studies, writing emails to the health board and the ombudsman, and filling out the many laborious forms I encounter. With AI, a task that would normally take a week or longer (thanks, fatigue and brain fog) takes just hours instead.
3/3
I save so much time and energy. And I can use that time and energy to work on things I am actually interested in doing. Like writing and photography.
I think we have to be careful not to completely demonise this tool. Used wisely, used ethically, it can go some way towards levelling the time and energy playing field.
END
@kristiedegaris Agreed, wholeheartedly. I've been thinking about how to approach this question here for a bit now but clearly you are braver than I am.
It's revolutionary tech; we need to figure out ways to democratize it, not bury our heads in the sand while the elite flattens us.
@renardboy Yes! I'm surprised this side of it is being ignored. And there are many ways to make it more sustainable, more ethical (mostly moving away from LLMs and towards domain specific models).
@kristiedegaris the issue with so-called AI (we aren't anywhere near real AI today), which is really just LLMs mimicking intelligence, is that we can't be sure the result is correct; we need to verify that it's correct.
Just a short example: if you ask an LLM whether the Swedish city of Gothenburg was bombed during the Second World War, you will most likely get a false answer. Many cities in Europe were bombed during that time, and in the data the LLM has been fed you will read about all the cities that were bombed, not the cities that weren't targets.
As many LLMs are fed with whatever is found online (easier to process than what you find in a library), a lot of false information gets mixed into their data; thus they have had a tendency to be pro-white-power, as such groups are broadly visible online.
Then you have the issue of wording: two prompts that, to us humans, ask for the same thing can generate different answers.
Sure, it can be a useful and helpful tool if you look past the environmental effects and the greedy people owning the systems.
@aho Sadly, like the rest, I have to spend most of my time looking past environmental effects and the greedy people owning the system.
If I shop at Tesco I have to ignore it; if I use Google I have to ignore it; and so on. Harm is built into the systems.
And I agree re inaccuracy, but there's a lot you can do to mitigate it, including checking. Even with that in the mix, I still save a lot of time.
@kristiedegaris your use case is not only defensible, it is exactly one of the key reasons #ai should exist - though I hope you guard yourself against hallucinations.
As Mark Twain said (paraphrased): be careful of reading medical journals, you may die of a typo.
There are some who rail against it. I think if it is used specifically to help humans it is a good use. And for those who are opposed, they need to build something better.
@kristiedegaris My main concern is over-reliance on LLMs. I don't think of them as inherently good or bad, any more than I think of a hammer as inherently good or bad. We do have to be careful to avoid expecting LLMs to do our thinking for us.
A hammer is a great tool used correctly. Used incorrectly it's worse than useless.
@kristiedegaris best practices: for every “answer” you get, publish and save the “query”
if using someone else's analysis, art, research, or search results, reference it in a bibliography. Seek permission first.
@kristiedegaris have academics and teachers published best practices for use? And I am from #virginia, where 70% of data traffic goes in and out, and #datacentre construction and operations gobble up farmland, water, electricity, and a few very nice dry-stack stone walls too.
@kristiedegaris I've not found the numbers yet, but I believe streaming (e.g. Netflix, YouTube, Amazon Prime) consumes much more power than AI, and no one is talking about shutting that down, moving to downloading once and storing locally, or even going back to DVDs and CDs!
@kristiedegaris every tool does some harm and some good. Helping people optimize that is much more constructive than shitting on them.
So in my opinion it always depends on the specific circumstances.
Maybe in your specific circumstances it would be feasible and better for a Luddite to help you, so you don't need to rely on 'ai'. But without knowing the entire truth, claiming that your use is immoral could be as bad as claiming a blind person's use of a screen reader is immoral.
Context matters.
@iwein Like so many I am financially restricted in what human help I can procure. Otherwise I'd have a personal assistant on the books.
I suppose I'm offering myself up here because I'd love to open the discussion up to some nuance and, like you say, context.