Discussion

Florian
@zersiax@cupoftea.social replied · 2 days ago

However, the person in question also rather likes not being dependent on somebody else's timeline, or on others in general, and they are rather attached to the house they live in and to the ability to keep eating, drinking and breathing.

Then they learn that, through using AI, they could generate/vibe-code/pick your poison a fix for the issue, one that keeps them from being fired/evicted/kicked out of college (again, take your pick).

They feel doing so is unethical, steals people's work, and contributes to a larger global issue, but they also feel that they've tried everything else and that the people who left them in this mess, by not taking responsibility for the issue, just aren't willing to fix it.

What is this person to do?

alcinnz
@alcinnz@floss.social replied · yesterday

@zersiax I agree that it is tricky, and there are places where gen-AI provides a better accessible UX than previous approaches. If the issue is that the publisher lacks image descriptions, older machine-learning tech can list what's in the image, but LLMs can phrase it as a sentence.

Fundamentally I can't get around my answer boiling down to "learn more about tech", which can be hard when everyone's shouting about LLMs. And you probably have other interests.
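
A minimal sketch of the contrast described above (listing what's in the image versus phrasing it as a sentence), assuming Python with the torchvision and Hugging Face transformers libraries; the pretrained ResNet-50 weights, the Salesforce/blip-image-captioning-base model, and the photo.jpg file name are illustrative assumptions, not anything from this thread:

    # Sketch only: contrast a classic image classifier ("list what's in the
    # image") with an image-captioning model ("phrase it as a sentence").
    # Assumes torchvision and transformers are installed; photo.jpg is a
    # hypothetical input file.
    import torch
    from PIL import Image
    from torchvision.models import resnet50, ResNet50_Weights
    from transformers import BlipProcessor, BlipForConditionalGeneration

    image = Image.open("photo.jpg").convert("RGB")

    # Older approach: a classifier yields a bag of labels, not a description.
    weights = ResNet50_Weights.DEFAULT
    classifier = resnet50(weights=weights).eval()
    with torch.no_grad():
        scores = classifier(weights.transforms()(image).unsqueeze(0)).softmax(dim=1)
    top = scores.topk(3)
    print("labels:", [weights.meta["categories"][int(i)] for i in top.indices[0]])

    # Newer approach: a captioning model phrases the scene as one sentence.
    processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
    captioner = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        out = captioner.generate(**inputs, max_new_tokens=30)
    print("caption:", processor.decode(out[0], skip_special_tokens=True))

The point of the sketch is only the shape of the output: the classifier's label list still needs someone (or something) to turn it into usable alt text, while the captioner emits a sentence directly, with the context problems raised later in the thread.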

Florian
@zersiax@cupoftea.social replied · yesterday

@alcinnz There are discussions about LLMs lacking the correct context to generate appropriate alt text even now; AI is being used by, essentially, the "developer" in my example to make this problem go away, but that's a whole different conversation :)
Learning more about tech is absolutely a viable strategy; many accessibility issues can be circumvented somewhat by getting clever with the tools you have, but there comes a point where even that isn't going to work, e.g., a screen reader user trying to use a tool that just isn't compatible with screen readers at a college or professional level.
An example of this is the myriad mods being generated by AI for video games that could easily have been made accessible for the blind, say, but weren't. No life-and-death situation, but a good example of a door that was being held closed and that people are now forcing open one way or another.

alcinnz
@alcinnz@floss.social replied · yesterday

@zersiax I guess where I land is that the issue is more with how we discuss LLMs than with the tech itself, and with what it's being used to excuse.

As much as I wish to discourage their use... If that's the option you see available for you, go for it!

Florian
@zersiax@cupoftea.social replied · yesterday

@alcinnz I think that is where I am as well, yes. To be clear, I'm not necessarily the hypothetical person here; I see this all over the web in the communities I'm part of. But yeah, I feel at this point the lesser of two evils is going to be person-dependent, and unfortunately that is currently a very good way to polarize.

alcinnz
@alcinnz@floss.social replied · yesterday

@zersiax Ultimately I think we techies need to broaden our conversation beyond the current symbol of The Power of Computing; there's other tech to excite us. I do what I can to that end, but controversy acts like a perpetual motion machine!

Anyway, that's the job I've taken on; it doesn't have to be yours.

Florian
@zersiax@cupoftea.social replied · yesterday

I'm not asking this to be a dick or to troll; I am genuinely interested in what people feel is an acceptable way out of this conundrum.
And I'm asking because it IS, unfortunately, the reality for a huge number of people right now. Some people have waited months, years, even decades to get equal access to certain types of resources, applications, services, entertainment, the works. They have asked, they have begged, they have raged, and they have gone up and down every single chain of command they can think of in order to be acknowledged, only to be ignored, dismissed or deprioritized at every turn. Because people haven't cared. They were not the primary cash cow, they were not the easy path, and therefore they were OK to ignore.
So again, the question: people who would rather not use AI but are systematically considered dismissible by the people who should know better... what exactly are they meant to be doing right now?

Eric Eggert
@yatil@yatil.social replied · yesterday

@zersiax If "AI" solves a specific problem and removes a systemic barrier, then that is totally a valid use and is not in any way problematic, because the person knows what tool was used and where the pitfalls are.

The problematic aspect is when society determines that "AI"-generated solutions are sufficient, does not make it transparent to users that those solutions might have errors, and says "check, look how inclusive we are".

Florian
@zersiax@cupoftea.social replied · yesterday

@yatil 😊 Yep, the inverse of this conversation is people who are NOT running into #accessibility issues using #AI as a way to shut up those who do, and to shelve it under reasonable accommodation/inclusivity without having to put in actual effort. But that's a whole different conversation, one that often comes up when AI is used to, say, generate a bunch of alt text without context, leading to silliness like the various "misunderstood" food item descriptions at GrubHub, for example.
