2 Comments

I can't recall if you've dealt with this in earlier posts, but there seems to be an implicit assumption that true AGI naturally requires that the machine also be self-aware and have consciousness. I'm not sure why that needs to be true - I'm willing to believe that a machine can be smarter (or maybe "smarter") than me without being aware of itself as an independent entity.

What would look like real intelligence is awareness of how people behave, and that comes easily to us because we are also people. We have all kinds of things we've learned over our lifetimes that we weren't born knowing, like the permanence of death or the ability to project future consequences of present behavior. I don't see why a machine couldn't also learn those things.

I guess the point is, I'm pretty sure we could mimic a human brain's problem-solving abilities (at least in theory - the energy required might make this a neat hypothetical rather than a useful achievement) without worrying about whether it has emotions or self-awareness.

Author:

That could certainly be the case! John Searle thinks they're intimately connected, but his area is philosophy, not science. I suspect there's a connection between human intelligence and consciousness, although neither one is very well understood. My functional definition of intelligence, part of which I quoted in the post, is meant more as a list of what a human mind can do, for comparison with any AI/AGI we have or might develop. It's an interesting question to ponder -- I'll probably discuss this more in a future post!
