Discussion
Anthony
@abucci@buc.ci · 21 hours ago

I put the text below on LinkedIn in response to a post there and figured I'd share it here too because it's a bit of a step from what I've been posting previously on this topic and might be of some use to someone.

In retrospect I might have written non-sense in place of nonsense.

If you're in tech, the Han reference might be a bit out of your comfort zone, but Andrews is accessible and measured.



It's nonsense to say that coding will be replaced with "good judgment". There's a presupposition behind that claim, a worldview, that can't possibly fly. It's sometimes called the theory-free ideal: given enough data, we don't need theory to understand the world. It surfaces in AI/LLM/programming rhetoric in the form that we don't need to code anymore because LLMs can do most of it. Programming is a form of theory-building (and understanding), while LLMs are vast, fuzzy data storage and retrieval systems, so the theory-free ideal dictates that the latter can/should replace the former. But it only takes a moment's reflection to see that nothing, let alone programming, can be theory-free; it's a kind of "view from nowhere" way of thinking, an attempt to resurrect Laplace's demon that ignores everything we've learned in the more than 200 years since Laplace forwarded that idea. In that respect it's a (neo)reactionary viewpoint, and it's maybe not a coincidence that people with neoreactionary politics tend to hold it. Anyone who needs a more formal argument can read Mel Andrews's The Immortal Science of ML: Machine Learning & the Theory-Free Ideal, or Byung-Chul Han's Psychopolitics (which argues, among other things, that this worldview is nihilistic).
#AI #GenAI #GenerativeAI #LLM #coding #dev #tech #SoftwareDevelopment #programming #nihilism #LinkedIn

William Whitlow
@wwhitlow@indieweb.social replied · 20 hours ago

@abucci

I suppose to a certain extent this is the epistemological question of LLMs. Does data arrive at true understanding, or statistical associations? This insight seems to be growing in popularity, but it now has to climb the mountain of sunk costs from investments. As such, many of these conversations are on the fringe of being impossible to even have.

The greatest irony is believing we have moved beyond these challenges, whereas the reality is that we have merely stopped engaging with them.

Anthony
@abucci@buc.ci replied · 20 hours ago
@wwhitlow@indieweb.social
> Does data arrive at true understanding, or statistical associations?
Maybe you've read him, but Han digs into this, and his answer is a resounding "no". He refers to a data-only view as "total ignorance". He cites Hegel while making this argument, so we've been around this block for over 2 centuries now.

> The greatest irony is believing we have moved beyond these challenges, whereas the reality is that we have merely stopped engaging with them.
I couldn't agree more. There's a lot to engage with here, and it's frustrating at times that so many seem to be assuming the problems away rather than grappling with them. Among other things it's a wasted opportunity to learn and discover.
William Whitlow
@wwhitlow@indieweb.social replied · 20 hours ago

@abucci

I have not read Han yet, but I should probably add his work to my list.

I consider it fortunate to find myself indebted to the Aristotelian metaphysics of hylomorphism, which, while it does not address all questions, certainly helps to ground contemporary philosophy in a realist tradition. In that regard, one of the shortcomings of LLMs is the belief that associations of words without experience constitute knowledge, rather than language deriving from experience.

Anthony
@abucci@buc.ci replied · 19 hours ago
@wwhitlow@indieweb.social The only hylomorphism I'm familiar with is the computer science concept, so I'm not sure I'd be able to follow you there on first read. I do feel like I'm seeing a tendency in the fields I do follow towards a panpsychism or pancognitivism that I read as an attempt to breathe a form into non-living matter that explains how it becomes animate, alive, evolving, or what have you. The attempts I'm aware of feel circular, but then again I'm not super knowledgeable about such things.

Channeling Hegel, I think Han is saying that associations of words lack comprehension: there's no single concept that gathers what the words are saying into a unified whole. So word associations are accretive: you can only add more, never synthesize. In my mind it's a bit like gathering more and more 2-D points on a circle without ever realizing they lie on a circle; you have a growing mess of data with no comprehension, no finality or completion in the form of the circle. I think there's a case to be made that computers by themselves are not capable of bridging this gap in general: humans must be involved because we are able to make leaps of logic that are uncomputable. Which, I suppose, is a kind of experience.
William Whitlow
@wwhitlow@indieweb.social replied · 18 hours ago

@abucci

I like that analogy.

Unfortunately? I never got into functional programming, so I had to look up hylomorphism as a CS concept. I have found that computational and philosophical terms often have a high degree of similarity, but not so much here.

Aristotelian hylomorphism supposes that substances are composed of matter and form, wherein form is the principle of intelligibility and provides the end. The intellect extracts the form from material beings; hence language alone cannot suffice.

Anthony
@abucci@buc.ci replied · 18 hours ago
@wwhitlow@indieweb.social While I was noodling on what I read on Wikipedia about Aristotelian hylomorphism, what occurred to me is that the functional programming notion buries form, in a sense, and that might be the relation. A hylomorphism composes two processes that could be separable: one builds up a structure from parts; the other breaks down the structure, computing something about the parts plus structure along the way. What's lovely about hylomorphisms is that they can go directly from the initial part to the computed value without ever building up and breaking down the structure. The structure remains implicit, in other words. You might say the input/output process a hylomorphism encompasses has an implicit form built into it. I don't know if that's the reason for the use of the term in functional programming, but I suspect something along those lines might be the case.
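To make the build-up/break-down point concrete, here's a minimal sketch in Python (rather than a typed functional language; the names `hylo`, `count_down`, and `factorial` are my own illustration, not from the thread) of a hylomorphism that fuses an unfold with a fold so the intermediate structure never materializes:

```python
def hylo(unfold, fold, base, seed):
    """Fused unfold-then-fold over an implicit list.

    unfold(seed) returns None when the structure is exhausted,
    otherwise a pair (part, next_seed). fold(part, rest_result)
    combines one part with the folded remainder. Each part the
    unfold produces is consumed immediately by the fold, so the
    intermediate list is never built as data.
    """
    step = unfold(seed)
    if step is None:
        return base
    part, rest = step
    return fold(part, hylo(unfold, fold, base, rest))


def count_down(n):
    # Coalgebra: conceptually unfolds n into the list [n, n-1, ..., 1].
    return None if n == 0 else (n, n - 1)


def factorial(n):
    # Factorial as a hylomorphism: unfold n downward, fold with (*).
    return hylo(count_down, lambda x, acc: x * acc, 1, n)
```

The list [n, n-1, ..., 1] is the implicit "form" here: it shapes the computation without ever existing as a value, which is the sense in which the structure stays buried.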

bonfire.cafe

A space for Bonfire maintainers and contributors to communicate
