@neil 19 million on X leaves a "lot" of people who are not.
@neil I bet nothing even close to a quarter of a supposed 19m "people" on "X" in the UK are regularly relying on it as a key source of GOV.UK information. And of whatever number may do so, I expect most would just look elsewhere if it wasn't on "X".
@neil pathetic. They've got a website. People can visit that!
@neil pathetic isn't it.
@neil the point of the AI slop machines is that they are not designed to do specific things, but are given lots of data and asked to do things. I think proving that Grok's AI porn bot was designed to do that is going to be hard.
Will they ban paint brushes as well as someone could use them to paint a naked portrait?
I do think there is potential for a painting to be abusive. But it is rate-limited, and it would be quite rare to cross paths with someone with the time, motivation and skill to create an abusive painting, or even a realistic digitally edited image.
At the moment it seems that the only barrier is the ability to ask for the abusive image, and having a source image. That bothers me significantly.
Targeting tools that make it easy seems helpful.
@RadtkeJCJ @neil I think it's entirely reasonable to say Grok is legally responsible for all it creates. That is different from what is proposed though, which, as reported, will be very hard to police.
My initial reaction was the same as yours: it isn't designed to do anything particular.
However, design is also about intent... and I wonder if questions about their (limited) reaction, and the reasoning behind it having the functionality to create intimate images at all, might be enough to show it is "designed" to do this? I also wonder if language like "reckless" in the legislation might be useful here, to avoid the "it just does it" dodge.
@neil but that wouldn't fix the current problem, as Grok isn't designed to create those images.
@neil I don't believe that latter statistic for one moment. A quarter of that, perhaps - still a large number, but HMG would make a massive point by deleting their account.
@neil @chrisgerhard The service aspect of course makes it much easier to argue that because the provider can exercise some degree of control over what users do with the service, it ought to do so.
I think that would be interesting to explore. It seems like such an egregious abuse that designing against it would be a reasonable thing to do. Tech products already add things like rate limits to protect the technical infrastructure from abuse, and design for security.
I don't think it would be reasonable to have a minor bug (fixed rapidly) be a criminal offence though. So there is some nuance needed.
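For illustration, a minimal sketch of the kind of per-user rate limit I mean (a token bucket in Python; purely hypothetical, not how any particular platform actually implements it):

```python
import time

class TokenBucket:
    """Minimal per-user rate limiter: on average `rate` requests per second,
    with bursts of at most `capacity` requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity     # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Hypothetical example: allow each user 5 image requests per minute.
limiter = TokenBucket(rate=5 / 60, capacity=5)
print(limiter.allow())  # True while the user is within their allowance
```

The same mechanism that protects infrastructure from abuse could just as easily gate requests that generate images of identifiable people.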
@neil @RadtkeJCJ the company operating it. They have made a choice to do this. This should be their risk.
@neil @chrisgerhard devil will be in the details. Likely unenforceable fines will be levelled anyway. Are #ofcom then *really* going to deprive "almost a quarter of people" of access to news and ban #Twitter?
@neil @chrisgerhard CBS v Amstrad, for lawyers with long memories. Of course the debate is now distorted by the shift from selling products to providing services.