Discussion
Yogi Jaeger
@yoginho@spore.social · 2 months ago

"The real danger isn’t that machines will become intelligent—it’s that we’ll mistake impressive computation for understanding and surrender our judgment to those who control the servers." Mike Brock

https://www.notesfromthecircus.com/p/why-im-betting-against-the-agi-hype

#AGI is not achievable with existing architectures.

This is a good analysis.

Link preview: "Why I'm Betting Against the AGI Hype," a Substack essay by Mike Brock. Image: a little girl holding the hand of a robot.
Yogi Jaeger
@yoginho@spore.social replied · 2 months ago

But it misses one additional, fundamentally important point: true agency, judgment, creativity, and imagination are only possible for a self-manufacturing living system that must invest physical work into its own continued existence; you cannot get these capacities in an algorithmic framework:

https://arxiv.org/abs/2307.07515

https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1362658/full

#AI is #AlgorithmicMimicry

arXiv.org

Artificial intelligence is algorithmic mimicry: why artificial "agents" are not (and won't be) proper agents

What is the prospect of developing artificial general intelligence (AGI)? I investigate this question by systematically comparing living and algorithmic systems, with a special focus on the notion of "agency." There are three fundamental differences to consider: (1) Living systems are autopoietic, that is, self-manufacturing, and therefore able to set their own intrinsic goals, while algorithms exist in a computational environment with target functions that are both provided by an external agent. (2) Living systems are embodied in the sense that there is no separation between their symbolic and physical aspects, while algorithms run on computational architectures that maximally isolate software from hardware. (3) Living systems experience a large world, in which most problems are ill-defined (and not all definable), while algorithms exist in a small world, in which all problems are well-defined. These three differences imply that living and algorithmic systems have very different capabilities and limitations. In particular, it is extremely unlikely that true AGI (beyond mere mimicry) can be developed in the current algorithmic framework of AI research. Consequently, discussions about the proper development and deployment of algorithmic tools should be shaped around the dangers and opportunities of current narrow AI, not the extremely unlikely prospect of the emergence of true agency in artificial systems.

bonfire.cafe · A space for Bonfire maintainers and contributors to communicate