Epistemic hunger and A.I.
This morning, I managed to get through a couple more pages of Kinds of Minds by Daniel Dennett, mainly chapter 3.
While he sometimes makes assertions I can't jump on board with, he is pretty persuasive. I did have some thoughts about one topic in this chapter. He suggests that the fundamental quality differentiating animals from other things, plants for instance, is having an "epistemic hunger", in other words a hunger to acquire information. This is purported to exist from the ground floor on up: for instance, in the molecular locks and keys of our biology, expressed in the fact that there are not only different receptors throughout our bodies but also the specific molecules which 'seek' them. The epistemic hunger of animal life has its basis here, in being equipped with millions of these micro 'agents' seeking information within the body. But plants, too, have such molecular machinery, so how do animals differ? Their 'micro-agents' not only take in information from the environment; they are 'designed' and equipped with mechanisms to seek out and interpret that information, to give it meaning.
My only caution in reading this is that I'm not particularly an animal chauvinist, perhaps. I'm not disinclined to ascribe an information-seeking quality to plants, for one thing. They operate much more slowly than animals do, but we can't let time disguise the operations.
That's all peripheral to the post anyway. I wanted to take this idea of epistemic hunger and open it up to inquiry as it might relate to the holy grail of consciousness studies: artificial intelligence (A.I.).
I was recently asked whether (I'm modifying the question just a bit, for clarity) a machine with reasoning intelligence beyond that of a human would also have the capacity to be creative in the way human beings are. I took this creativity to mean artistic and poetic, as opposed to technological, although maybe they are really two sides of a coin. I responded by sharing the perspective of Thomas Metzinger in The Ego Tunnel: The Science of the Mind and the Myth of the Self. How about I just paste the dialogue…
The documentary I'm doing research and writing for has got me looking into ideas related to paradigm shift, accelerating change, and the idea of the technological singularity, so I've come across a lot of references to AI. Here's my question: assuming we get to a point where computers can reason beyond human capability, will they have the capacity for creativity? It's a question I honestly don't have the knowledge to confront in any substantive way. Is creativity beyond the sum of our rational processes? Is there something irrational to it, or do we just not have the understanding of the brain requisite to explain human creativity? Is creativity just a manifestation of the gap between our knowledge and reality? If human creativity is a consequence of some human lacking, can a computer create? I'm rambling, but I wanted to put the bug in your ear, and maybe it will start to drive you crazy as well. Maybe not.
"So this question actually gets me thinking about what I read in Thomas Metzinger's The Ego Tunnel, where as a philosopher he is curious about the boundaries of consciousness and about scientific inquiry into the mechanisms, or nature, of human consciousness. So here you posit a point in time where computers can reason beyond human capability, heh… What might be fundamental to the ability to be creative is, at the very basis, an awareness of conscious state. That is to say, if we create a computer that can report something about its 'state of mind' or state of consciousness, in other words be metacognizant, then it's also conceivable, according to Metzinger, that it is capable of placing a 'self model' within its internally, dynamically generated 'world model'. An ego, point of view, or perspective is born.
Now I'll just put an excerpt here where he describes the first 'Ego Machines' (which may not be necessary to have created, if our goal is only an artificial intelligence that can reason beyond human abilities):
If they had a stable bodily self-model, they would be able to feel sensory pain as their own pain. If their postbiotic self-model was directly anchored in the low-level, self-regulatory mechanisms of their hardware—just as our own emotional self-model is anchored in the upper brainstem and the hypothalamus—they would be consciously feeling selves. They would experience a loss of homeostatic control as painful, because they had an inbuilt concern about their own existence. They would have interests of their own, and they would subjectively experience this fact. They might suffer emotionally in qualitative ways completely alien to us or in degrees of intensity that we, their creators, could not even imagine. In fact, the first generations of such machines would very likely have many negative emotions, reflecting their failures in successful self-regulation because of various hardware deficits and higher-level disturbances. These negative emotions would be conscious and intensely felt, but in many cases we might not be able to understand or even recognize them.
An interesting bit, which illustrates some of the effects of giving a machine an ego. Now, something you asked struck me: whether human creativity is a consequence of some human lacking.
Well, I think that a machine such as this might also perceive some lacking, and may, in a self-interested manner, seek means to satisfy or fill that gap. I think once a machine can exhibit true learning, and the ability to integrate experiences from past to present, then the potential is there for creativity, simply put. I think today's computers already possess some creativity. AI today can do some pretty interesting things, if we ask them, of course. But is an AI solution to a problem any less a creative act than what a human might bring to it?
Engineering, for instance, does have rules that need to be followed for some structure to stand. An AI following those rules can come out with some 'thing'. That's minimally creative, ain't it?
Capacity for art is a whole 'nother story though, ain't it? I don't know if you're treading on that, but my inkling is that it's not inconceivable for a reasoning, ego-bearing AI. But we do need to have some dialogue in the sciences about why it is humans create, perceive, and indulge in the arts. It's such an idle-minded, non-adaptive kind of pursuit as far as I can see. But it's arguable that it is evolutionarily adaptive too; after all, everyone seems to do it, and it seems to point to what is meaningful and valuable in human life across the globe, and that's no small thing perhaps."
Now, back to epistemic hunger and A.I.
So, I'm beginning to incorporate this concept of epistemic hunger into the equation where X is 'whether a machine can possess consciousness (however poorly defined that still may be) and the capacity for art'. I suppose that, conceptually, if a system is designed which models this epistemic hunger on a few levels, or on sufficient levels (this argument is kinda familiar), then not only will the desire for creativity manifest, but perhaps it's an unavoidable and necessary companion for a system to function at all. Through a Darwinian lens this makes sense anyway, but practically, for us, is this feasible? Can we model epistemic hunger from the ground up? Is it perhaps only a semantic question, of how we choose to represent "meaning" for a post-biotic machine? No doubt it is in part semantic, and for a phenomenologist, concerned with the subject, that might be all that matters.
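Just to make "modeling epistemic hunger from the ground up" slightly more concrete, here is a toy sketch of my own (not anything from Dennett or Metzinger, and every name and number in it is invented for illustration): an agent whose only drive is to reduce its own uncertainty about its world. It keeps a Beta-Bernoulli belief about one unknown fact and keeps sampling until the variance of that belief, a crude stand-in for residual "hunger", drops below a threshold.

```python
import random

def beta_variance(a, b):
    """Variance of a Beta(a, b) belief -- our crude stand-in for 'uncertainty'."""
    return (a * b) / ((a + b) ** 2 * (a + b + 1))

def epistemically_hungry_agent(world, threshold=0.005, max_steps=1000):
    """Sample the world until uncertainty about it falls below the threshold."""
    a, b = 1, 1  # uniform Beta(1, 1) prior: maximal hunger
    steps = 0
    while beta_variance(a, b) > threshold and steps < max_steps:
        if world():       # take in one bit of information from the environment
            a += 1
        else:
            b += 1
        steps += 1
    estimate = a / (a + b)  # posterior mean: what the agent now 'believes'
    return estimate, steps, beta_variance(a, b)

# The 'world' here is a biased coin the agent probes; seeded for repeatability.
rng = random.Random(42)
hidden_bias = 0.7
world = lambda: rng.random() < hidden_bias

estimate, steps, residual = epistemically_hungry_agent(world)
print(f"after {steps} observations: estimate={estimate:.2f}, residual uncertainty={residual:.4f}")
```

Obviously this is hunger for exactly one bit-stream of information, nowhere near the layered molecular 'agents' Dennett describes, but it does show the shape of the idea: the drive to act comes entirely from the gap between belief and world, and the agent goes quiet only once that gap shrinks.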