Everyone knows what it is to be conscious, and we imagine that other people are aware too: that we each have a voice in our heads, apparent agency and free will, a little person inside who comments, makes decisions and stays in charge.
We’re not sure if dogs have this, and we’re pretty sure that sunflowers aren’t having deep philosophical misgivings about this year’s harvest. We’re very sure we have it, though.
But aside from a few philosophers, most of us avoid considering the nature of the most vivid part of our days: the noise in our heads that can narrate joy, and prolong and amplify stress.
AI is here now. And AI requires us to get clear about what’s actually going on in my head (and yours). Dennett’s work on the Intentional Stance is worth considering:
Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do.
It’s easier to win a game of chess if you imagine that the person you’re playing against is playing a game of chess. The rook doesn’t ‘want’ anything, but if we imagine the person who moved it does, we can play with more ease.
We give the driver of a car in the other lane or the person in a negotiation the consciousness we imagine that we have. It’s a powerful shortcut to survival, and might even be the way we ended up deciding we were conscious.
Because we might not be. There’s plenty of fascinating work suggesting that free will may be an illusion, and that the voice in our heads speaks up only after we’ve already made a decision.
It’s clear to most people who have used AI that it can’t possibly have consciousness, but it’s also human to imagine that it does. I can’t use ChatGPT for more than three minutes without imagining that there’s a weird little person inside. I know what it’s actually doing, but that knowledge doesn’t change how I interact with it.
It has no free will. It’s not conscious. There are turtles all the way down.
My guess is that we’ll revert to our habits and take an intentional stance with AI. Perhaps, though, this might be a moment to think hard (whatever that means in a world without active consciousness) about the fact that we might be exactly like an AI… that we’re nothing but wet electricity, and the intentional stance is an evolutionary hack, the product of living in a complicated world, not the cause of it.
If our brains are embeddings plus probabilities… if what we are doing is processing inputs and creating outputs, that might be all there is. And that could be fine. If it helps us find peace of mind and acceptance of our situation, it could end up being a net positive.
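If we take the "embeddings plus probabilities" metaphor literally, the whole loop fits in a few lines. This is a toy sketch, nothing more: the vocabulary, the vectors and the scoring are invented for illustration, and no real model (or brain) works this simply. But it shows what "processing inputs and creating outputs" means with no little person anywhere inside:

```python
import math

# A toy "mind": each word is an embedding (a list of numbers),
# and a response is just the input scored against those vectors.
# All of these values are made up for illustration.
embeddings = {
    "joy": [0.9, 0.1],
    "stress": [0.1, 0.9],
    "noise": [0.5, 0.5],
}

def dot(a, b):
    """Similarity between two vectors: multiply and sum."""
    return sum(x * y for x, y in zip(a, b))

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def respond(input_vector):
    """Process an input, produce an output: score every word
    against the input, convert scores to probabilities, and
    emit the most likely word. No deliberation, no narrator."""
    words = list(embeddings)
    scores = [dot(input_vector, embeddings[w]) for w in words]
    probs = softmax(scores)
    return max(zip(probs, words))[1]

print(respond([1.0, 0.0]))  # an input that leans toward "joy"
```

Inputs go in, probabilities come out, a word is chosen. Whether anything remotely like this is happening in our heads is exactly the open question.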
It could be that when we drop the story and simply focus on what is in front of us, we’re able to make the impact we seek. The noise in our heads might just be noise.