All the noise about alignment from the advocates of 'AI Safety' is completely specious because AI already aligns with the hierarchical dualisms that shape our society at the deepest levels, especially misogyny, racism and a contempt for nature.
@CriticalThinkingGames surely the task is to analyse the processes of garbage production so we can avoid repeating them?
1/2
Yes!
Question how a new system works - why it works that way - and the underlying decisions supporting this approach.
Here: a new system is designed to work, in part, by relying on a number of existing things which cause harm.
This new system is allowed to rely on those harm-causing things for several reasons (e.g. complacency, ignorance, fear of change, animus, greed).
Not all of these reasons are equally harmful... but they all result in a continued use of those things shown to cause harm.
This is a problem.
@danmcquillan One of my main points against #AI: it is trained by fallible humans, on fallible, false, problematic, and often outright wrong or offensive data.
Why do people keep expecting it to be perfect? It's a #fallacy of epic proportions. If humans are misogynistic (or misandric), racist, or biased in other ways, how the heck do you expect such a model not to be those things?
@rraggl I think it's important to clarify that AI is the expression of hegemonic cultural values rather than essentialised 'human' ones. Doing tech otherwise will be an important part of constructing worlds otherwise.