Which is closest to your view?
@ZachWeinersmith A2 except for Anthropic and OpenAI which are A12
@ZachWeinersmith Like everyone assuming "AI" means "generative AI".
I thought of saying A2, but I'm going A1. It's closer to the truth.
What the AI companies are doing is a scam. They don't care what the tech is actually good for, only what they can make people think it's good for. They are pushing it as an unscoped technology, "the everything machine", to be used as much as possible in all areas, both where it does good and where it does immense harm.
It looks like it's good at a few things (e.g. bug hunting), but that's incidental. Their effort goes into making generative AI appear good at things it is not.
@ZachWeinersmith A2. Especially insofar as "AI" actually has a meaning (to me and the rest of the olds) beyond just LLMs and their image and video analogues.
@ZachWeinersmith @iris_meredith
A2.
As a writer who has always played with tech, GPT2 was so much fun because it was weird and stupid. Models today are still nonsense (and can be fun for play) but have dangerous hyper-realistic human-shaped masks grafted on.
@ZachWeinersmith If we mean "AI" in the sense of the dominant discourse, A1. If we mean AI in the historical sense, C2 or C3.
Depends on what you mean by “AI”.
Assuming we’re talking about generative “AI” like LLMs and diffusion models, A2.
…with a caveat that few of the “good” use cases, if any, are really all that good, and most will likely get rapidly less good as major “AI” companies start charging closer to true costs for the services.
@ZachWeinersmith I bet you'd get very different answers if you asked about "works in my area of expertise" rather than "works" generally
@ZachWeinersmith Multiclassing all As and all Bs, somehow. #lang_en
@ZachWeinersmith depends on what you mean by AI:
If it's what the tech industry is trying to sell - A1.
If it's the specific technology behind what they're trying to sell - A2.
If it's machine learning in general, including LLMs (just not at the scale of the current models), then C3.
@ZachWeinersmith A mix of A1 & A2.
@ZachWeinersmith LLMs? Firmly in the A1 category. The "stochastic" part is what makes them seem like AI, but it's also what makes them a fraud. Randomness is what fools people, and it's also likely a big contributor to their worst fabrications.
Also, they are practically weapons-grade tools for making code harder to maintain, because they fill it with random but convincing nonsense that does nothing yet seems functional.