RE: https://cyberplace.social/@GossiTheDog/116011754710793421
Microsoft Research has similar studies. Using AI assistants hinders your acquisition of skills and is not even improving productivity. Surely we just have to do more of it, right?
@tante
This makes you think: if these are the results from the producers and sellers of AI services themselves, how bad would it look if an independent party had done the study?
I was in a virtual meeting with 2 others last week to plan a bigger bi-monthly virtual meeting. For reasons, we decided to cancel the bigger meeting. With 3 brains already on the case, 1 colleague shared their screen, opened an LLM, and started prompting it with "You're an organizer of a meeting you want to cancel by email .."
@tante To _some_ extent that's fine.
Calculators faced pushback from people who wanted to overweight drilling the multiplication tables, usually because they wanted the education system to rank and sort students (rather than teach them how to solve practical problems).
The problem is the specific differences between ML blobs (and their companies and their business models and capitalism etc etc etc) and calculators.
@tante One major one is that, even with calculators, students should know what multiplication is and how to do it on paper. I don't care how many they can correctly do in an hour on a test, though.
@tante Attention is all you need. If you delegate tedious work to LLMs, does it prevent you from learning? If you review your work with an LLM, or have it suggest and write tests that cover the edge cases for you, etc., does it prevent you from learning? In a way, yes, but the pros outweigh the cons. If you still use your brain and don't replace it with the LLM, you're still learning.
That, coupled with the impending knowledge collapse that #AI is creating & that almost nobody (save for folks active in that space) is talking about, is creating a turning point (read: an absolute clusterf*ck) in terms of how we do work, how we engage, and how we treat knowledge.
And I don't think society on the whole is ready to deal with those effects.
https://dev.to/dannwaneri/were-creating-a-knowledge-collapse-and-no-ones-talking-about-it-226d
@tante the boosters do not care. I had a coworker tell me to my face yesterday that we need to stop relying on "internal expertise" at our company, and instead hand that off to LLMs to write tests to validate everything. This was after I pointed out to him that the majority of the tests the LLM wrote for him were fake and tested nothing. You could write code that severely broke what the LLM had written for him, and the tests would still have passed. How can we give up the expertise we have, and give up on building further expertise, when the tools aren't capable of doing the work? If you can't validate the output, you're guaranteed to create a catastrophic failure in the future.
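A minimal sketch of what such a "fake" test can look like (hypothetical function and test names, not the coworker's actual code): the assertion is trivially true, so the test keeps passing even when the function under test is severely broken.

```python
def apply_discount(price, rate):
    # Deliberately broken: adds the discount instead of subtracting it.
    return price + price * rate


def test_apply_discount():
    # Looks like a test, but only checks that the function returned
    # *something* -- this assertion cannot fail for any numeric result,
    # so it validates nothing about the discount logic.
    result = apply_discount(100, 0.2)
    assert result is not None


test_apply_discount()  # passes despite the bug
print("test passed")
```

A test that actually validated behavior (e.g. `assert apply_discount(100, 0.2) == 80.0`) would immediately expose the broken implementation.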
@tante In product management terms - making your users dependent on your product is a feature, not a bug.
@tante not so loud! That's blasphemy!