Which is closest to your view?
B3
Terminator 2: Judgment Day
As much as we talk shit about the "slop" it is actually a real technology that they are using to great effect in surveillance and automated combat.
@ZachWeinersmith I'm A2, but I find the works/doesn't-work binary too stark for my taste. I'd say something like "It has uses in some domains but is highly flawed and overhyped."
@ZachWeinersmith A2 leaning toward A1, in the sense that the domains where it can be useful are most of the time better served by ad-hoc tools anyway
@ZachWeinersmith A2-A3, I guess. The hype of idiots and dishonest businesspeople doesn't really have any bearing on how actually useful it is, though.
And I'll add that I don't think its badness is an intrinsic property of it, just an inevitable result of our totally corrupt system.
Overall I think it's revolutionary tech that will have a deep positive impact on the way research is done, once everyone catches up to what its strengths and (most importantly) limitations are.
@ZachWeinersmith A2, though like others I’ll caveat that I’m specifically talking about LLMs here. Artificial intelligence is a BIG field.
@ZachWeinersmith
A2, leaning C2 (depending on what kind of AI and how it is used).
A1 for the things currently being hyped as "AI".
C2 for the very different things that get conflated with them.
@ZachWeinersmith I’ll join the A2 crowd, with the proviso that much of the harm to society is a direct result of the excessive hype
@ZachWeinersmith
A2
I don't trust the men who control it.
I hope it will die financially without doing too much damage, but I think we are already past the magnitude of the 2008 crisis.
@ZachWeinersmith
A2, but only because a friend pays the bills by training them and finding uses for them in the process
@ZachWeinersmith A1 with an option on A2 - I can't say for sure whether there are workable situations, because of the hype. I suspect there may be some. But I also don't think the positives of any workable situation would outweigh the ethical, societal, and environmental negatives.
@ZachWeinersmith A1.
If you ignore cost/benefit, I can concede A2, in that there are narrow things it can do well. But it will be prohibitively expensive to use LLMs only in the niche applications where they work well, once we lose the VC expectation that AI will replace half the workforce and only a few select industries have to shoulder the entire bill. Hence A1, factoring in the cost.
@ZachWeinersmith I'm one of the lonely C2s.
It's like a lot of things. The technology itself could be grounded in a completely legal and moral landscape. We could imagine a moderately sized data farm powered by solar energy and consuming only public domain, open source, and Creative Commons works for training.
If it's not being built that way, it's because society, not the technology, made some bad choices.
A2, being mostly harmful to society in myriad ways, while being partially functional in narrow use cases for some domains.
@ZachWeinersmith A3, but why is this not a poll?
Also, is this basically an AD&D alignment chart?
@ZachWeinersmith I'm usually C3 with caveats about capital concentration (what do you mean you are an "AI first" company but you don't run your own models?).
However, having had peculiar experiences with AI psychos in managerial positions, I sometimes land on A3...
I also feel like I lack the theoretical framework to evaluate the impact (in software engineering). E.g.
1. It's nice to ask Claude what the syntax is for the git command I forgot, but is it worth letting it run the command for me without my oversight?
2. The steps we take to reduce context windows for humans are likely good for LLMs too. How do we measure human context windows?
3. If a procedure can be described to the LLM and saved as a "skill", isn't it better to make it into a deterministic script?
4. Eventually the uncertainty/error rates of a natural language processor will be extremely low -- will they ever be lower than the bugs introduced by a deterministic procedure?
5. If you need to add more and more details to get the LLM to do what you want, how is that different from writing lower level code?
If anyone has something related I'd love to read it.
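To illustrate point 3 above: a procedure precise enough to be written down and saved as a "skill" can often just be a deterministic script instead. A minimal sketch in Python (the procedure and function name here are hypothetical, chosen only to make the contrast concrete):

```python
def normalize_text(text: str) -> str:
    """Deterministic version of a procedure one might otherwise hand an
    LLM as a "skill": strip trailing whitespace from every line and
    guarantee exactly one trailing newline. Same input, same output,
    every time -- no inference cost, no uncertainty."""
    lines = [line.rstrip() for line in text.splitlines()]
    return "\n".join(lines) + "\n"
```

The point of the sketch: once the steps are unambiguous, the script gives you a zero-error-rate implementation, which is the bar the LLM version would have to beat.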
@ZachWeinersmith Yet another A2 here.
@ZachWeinersmith C2 I guess, but only in an abstract sense. I can use AI for good, but most uses are bad.
@ZachWeinersmith A1, with a touch of A2 for very specific use cases.
@ZachWeinersmith A2, A1 when I'm feeling particularly jaded or grumpy
@ZachWeinersmith A1, because "artificial intelligence" isn't a thing. Machine learning, including LLMs, solidly A2.
@ZachWeinersmith solidly A1
It's destroying people's critical thinking skills, in some cases causing irreversible harm (death), and has no real-world value.
The number of times I've seen people ask AI a question, then have to check it was right - meaning it would have taken the same or less time to just figure it out from the jump? Bananas.
I'm also so sick of jobs that will get zero benefit from AI having to find a way to integrate it to please leadership.
Fuck AI.
@ZachWeinersmith C2/C3, because in the context of setting all the fossil fuels on fire to make food and heat our houses, we have to burn fossil fuels to solve the problems created by being unwilling to stop burning fossil fuels for food and space heating. Everything we do is setting ancient forests on fire; just some of it is socially acceptable.
@ZachWeinersmith I suppose C2. I mean, in theory A2, but in practice, to me the relative wastefulness of AI (and by that I mean LLMs) does not actually make it a good fit for a lot of the tasks it is currently being used for.
@ZachWeinersmith Currently A1 but willing to nudge toward A2 if the evidence shows, e.g., fewer people are now dying of cancer.
The bubble can't burst too soon.
Between A1 and A2.
First of all, AI doesn't exist and it was recently mathematically proven that what we have now, machine learning / LLMs, can NEVER grow into full AIs.
What we have now is good at pattern recognition. You can feed it all of the drugs used to fight a certain disease, their effectiveness, their side effects, etc., and it can spit out other drugs that would be useful targets for study.
What we have would fit into A2. What we are being sold is a scam.
Assuming by "AI", you mean generative models such as LLMs and image generators.
Imho, the technology _is_ impressive but very overhyped. The harm to society and environment is greater than its usefulness, by far.
A1 - Cherry-picked results don't mean "working", even if it's the chatbot framework doing the cherry picking.