Elena Brescacin
@elettrona@poliversity.it  ·  activity timestamp 2 weeks ago

Yesterday I experienced, once again, how dangerous AI hallucinations can be:
I missed Envision's last webinar from Thursday the 29th. And the first thing I did? Ask Gemini to summarize the 50-minute-long video. It had managed to do this once before, even extracting key points, and I double-checked that it was right. But this time it didn't succeed and returned a completely unrelated subject.
It basically INVENTED a story. Of course I'll take my time and listen to the entire conference (as I'm used to doing), but what if people PERMANENTLY rely on this, assuming "no need to listen to the whole video, let me just have it summarized"?
#ai #AiSlop #hallucination #blind #tech #webinar

Paul Sutton (zleap)
@zleap@techhub.social replied  ·  activity timestamp 2 weeks ago

@elettrona

I don't think it is a case of IF people rely on this; people DO rely on this. I still have not found a use case for AI.

Elena Brescacin
@elettrona@poliversity.it replied  ·  activity timestamp 2 weeks ago

@zleap No, unfortunately people do rely on Gemini, Perplexity, OpenAI. I have had helpful situations when the task was reading text: it helps correct the mistakes OCR makes. But not more than that. Not from my point of view.

Elena Brescacin
@elettrona@poliversity.it replied  ·  activity timestamp 2 weeks ago

@zleap I am blind, and use it to describe photos and read the text on them. Models trained for blind users, or for specific tasks (like the custom assistant for my self-hosting service provider), are quite good at standard, simple tasks. But when it comes to more specific topics, or complicated samples like the one I had, it's a disaster. In this specific case, the speaker does not have very clear English pronunciation, so it produced an unpleasant result. Don't misunderstand me, I'm not accusing him at all; I'm accusing the system, and myself for having wasted time on it. But this kind of test is part of my job. Things change their real meaning according to the name you give them:
"Artificial intelligence" is a scam of a name, because it's not intelligent at all. Call it a large language model and you realize what it is, and become aware of what it can really give you. It's a model, an empty box. No, a box with random contents.

Paul Sutton (zleap)
@zleap@techhub.social replied  ·  activity timestamp 2 weeks ago

@elettrona

I agree that AI is not even intelligent; a lot of the AI stuff, to me, is marketing hype to boost profits, and in some cases an excuse for job cuts.

Not a good or helpful situation. So, is your custom model built independently of the big tech companies? Is testing easier because you can give feedback to the developers, who can tweak the algorithm?


bonfire.cafe

A space for Bonfire maintainers and contributors to communicate

bonfire.cafe: About · Code of conduct · Privacy · Users · Instances
Bonfire social · 1.0.2-alpha.7 no JS en
Automatic federation enabled
Log in
  • Explore
  • About
  • Members
  • Code of Conduct