Thursday, March 28, AD 2024 11:51am

The Promises of Artificial Intelligence

Most of us are familiar with some concept of artificial intelligence, be it Data from Star Trek: The Next Generation, C-3PO and R2-D2 from Star Wars, HAL from 2001: A Space Odyssey, Skynet from The Terminator, or Joshua from WarGames, to name a few popular examples. We’ve long been introduced to the struggle to determine whether artificial intelligence constitutes life, and whether these beings, which we have created, deserve rights. We’ve also come across the question of whether we need to restrict these beings so that they cannot turn and extinguish human life (think of Asimov’s Three Laws of Robotics, and movies like The Terminator and The Matrix, where the artificial intelligence has turned on humankind). Yet we very rarely hear debate as to whether such artificial intelligence can ever be a reality. In fact, partially due to the promises made in the ’50s and ’60s, many people think that super-intelligent machines are destined to arrive any day now.


One of the reasons I find the question of the possibility of artificial intelligence important is that it not only has practical ramifications but also touches the fundamental question of computer science, and indeed, our fundamental notions of the world. The whole field of computer science started back in 1900 with the question: “What can we automate?” As an ultimate goal, artificial intelligence asks if we can fully automate human thinking and reasoning.

The optimistic researcher in AI believes that eventually—maybe through massive parallel processing, neural networks, and ultra-sophisticated algorithms—we will accomplish that goal. Indeed, in the AI classes I took while working on my Master’s Degree, at least a third of our lectures eventually devolved into speculation about how to apply Asimov’s Three Laws of Robotics should we ever succeed in building self-aware machines.

Frankly, I found such discussions to be a complete waste of time. I’m not just skeptical about the possibilities of producing true artificial intelligence; I flatly believe it will never happen. Part of my skepticism is theological in origin: human intelligence and self-awareness are matters of spirit, thus beyond the reach of science. However, the rest has to do with concepts I deal with daily in my own research.

My particular branch of theoretical computer science—computational complexity—deals less with the question of “what can we compute”, and more with the question of “what resources are necessary to compute this particular problem?” Most often, the resource we worry about is time, since that’s the one we have a very hard time reusing, but we also consider space (i.e. how much RAM a problem needs), circuit complexity, advice, randomness, and (the crucial one to this post) nondeterminism.

I’m going to take some time to get technical on a few of these terms, so if this becomes too boring, feel free to skip ahead. I would recommend paying attention to nondeterminism, though, because that will be important for the rest of this post.

Time: This one should be obvious to anyone who starts up Windows and has to wait five and a half minutes before it fully boots up. Should it really take that long to boot up? Perhaps some of the algorithms are inefficient and do a lot of things that don’t need to be done. Is there a way to speed that up?

More specifically, though, let’s consider the problem of factoring a number into its prime factorization (since this has practical application in cryptography). For example, 15 = 3 * 5 and 24 = 2 * 2 * 2 * 3. A simple algorithm to do this runs as follows. For any input n, start at 2 and see if 2 divides n. If so, add 2 to the list, divide n by 2, and start over. If not, increase to 3 and check, and then 4, and then 5, and then 6, and so on up to n itself.
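As an illustration only (this sketch is mine, not from the original post), the procedure just described looks like this in Python:

```python
def factor(n):
    """Naive trial division, exactly as described: start at 2,
    divide out any candidate that divides n, otherwise move on."""
    factors = []
    d = 2
    while n > 1:
        if n % d == 0:
            factors.append(d)   # found a factor; record it
            n //= d             # divide it out and start over
        else:
            d += 1              # try the next candidate
    return factors
```

Here factor(15) yields [3, 5] and factor(24) yields [2, 2, 2, 3], matching the examples above.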

So what is the time requirement on this? If we’re lucky, it doesn’t take much time. For example, we could figure out 8 pretty quickly (only three checks: 2, 2, and 2). 19 would take much longer, since it is prime and we would have to check all eighteen numbers from 2 to 19 before concluding that. Now, this doesn’t seem too bad, especially for small numbers and the speed of computers today. But what if we’re checking 398,225,076,122,297,449,994,272,105,333,728? Or a number that is thousands of digits long? There’s not enough time in the lifespan of the universe to compute that!

But we can be more efficient. For example, we don’t have to check every number against n. We need only check the primes. So instead of looking at 2,3,4,5,6,7,8,… we would look at 2,3,5,7,11,… Also, we can make use of the observation that if we haven’t seen a prime factor by the square root of the number, the number must be prime. (Don’t worry if you don’t automatically see this; it’s not important to the discussion.) This gives us a huge amount of savings.
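To make the square-root observation concrete, here is a sketch of my own (not the post’s code) that folds in the cutoff. Composite candidates like 4 or 6 are harmless to test: their prime factors have already been divided out by the time we reach them, so they never divide what remains.

```python
def factor_with_cutoff(n):
    """Trial division with the square-root cutoff: once d*d exceeds n,
    whatever remains of n must itself be prime."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:       # divide d out as many times as it divides
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                   # the leftover is a prime factor
        factors.append(n)
    return factors
```

For 19, this stops after testing only 2, 3, and 4 (since 5 * 5 > 19), rather than climbing all the way to 19.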

Randomness: With this we ask if we can solve a problem with the help of some randomly generated numbers. For example, instead of iterating through primes sequentially, we could randomly select one, see if it divides our input, and then continue the process. (As you can imagine, for this problem, randomness doesn’t help very much.) Randomized algorithms are of particular importance in my field. For one, there are many problems that can be solved very simply—assuming we accept a high probability (rather than a certainty) of reaching the correct answer, and assuming we can actually generate random numbers.
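A standard example of this kind of algorithm (my illustration; the post doesn’t name one) is the Fermat primality test: it declares a number prime with high probability after a handful of random trials, though a rare class of composites called Carmichael numbers can fool it.

```python
import random

def probably_prime(n, trials=20):
    """Fermat primality test: if n is prime, pow(a, n-1, n) == 1 for
    every a coprime to n.  Most composites fail this for most random a,
    so surviving many trials makes 'n is prime' highly probable."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False        # witnessed: certainly composite
    return True                 # prime, with high probability
```

The answer is only probably right, but each extra trial shrinks the chance of error.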

Did you know that we cannot truly generate random numbers on a computer? At best, we can produce—by deterministic means—a sequence of numbers that looks random to the human eye, and might even fool some simple pattern checkers. But the problem we in computer science continually run up against is that everything in an algorithm is determined. That was supposed to be the whole point of an algorithm in the first place! An algorithm by definition is an automated, step-by-step process of solving problems. We try to evade the issue by “seeding” our number generator with events that are hard for others to calculate—the exact time the algorithm runs, or a user input like moving the mouse erratically—but after seeding, everything runs deterministically.
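To make the determinism concrete, here is a toy pseudorandom generator of the classic linear congruential family (a sketch of mine; the constants are the ones popularized by glibc, but any would do). Seed it twice with the same value and the two “random” streams are identical:

```python
def lcg(seed):
    """Linear congruential generator: the next 'random' number is a
    fixed arithmetic function of the current state, so the entire
    stream is determined the moment the seed is chosen."""
    state = seed
    while True:
        state = (1103515245 * state + 12345) % 2**31
        yield state

gen_a, gen_b = lcg(42), lcg(42)
same = [next(gen_a) for _ in range(5)] == [next(gen_b) for _ in range(5)]
# same is True: identical seeds, identical "randomness"
```

This is exactly why real systems scrounge for hard-to-predict seeds: the seed is the only place any unpredictability can enter.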

That’s the whole point of computing. Once the variables are initialized and all inputs have been taken in, the course of a program is set and unalterable. And that leads us to the final term we’ll consider: nondeterminism.

Nondeterminism: After having stated that all computers run completely deterministically (though their users certainly can act otherwise, and hardware is always prone to problems that affect computation), it doesn’t seem to make much sense to worry about a model of computation that isn’t physically possible. Yet nondeterminism is a crucial topic in computational complexity. (It’s so important that there’s a million-dollar prize—the P versus NP question—waiting for anyone who solves a specific problem dealing with the relation between nondeterminism and determinism.)

How do we define nondeterminism? Well, there are a number of ways. One is to say that the next step in computation is not uniquely determined. With our factorization example, instead of saying the next step after checking 2 is checking 3, we could have a range of options, like checking 3 or checking 5 or checking 7. Moreover, there is no mechanism determining which step will actually occur (as opposed to randomized algorithms, where we might, say, flip a coin to pick a step).
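One standard way to picture this (my gloss, again using the factoring example) is guess-and-check: a nondeterministic machine conjures a candidate divisor in a single step, and all a deterministic machine has to do is verify the guess. Simulating the guess on a real, deterministic computer means trying every candidate:

```python
def verify_divisor(n, d):
    """Deterministic verification of a 'guessed' certificate d:
    cheap and mechanical once the guess is in hand."""
    return 1 < d < n and n % d == 0

def has_nontrivial_divisor(n):
    """A real computer can only simulate the nondeterministic guess
    by exhaustively trying every possible one."""
    return any(verify_divisor(n, d) for d in range(2, n))
```

So has_nontrivial_divisor(15) is True (the machine could have “guessed” 3 in one lucky step), while has_nontrivial_divisor(19) is False no matter what is guessed.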

In terms of the human experience of solving problems, nondeterminism seems best explained as intuition. We humans typically operate on a set of rules for doing things, but at times we get ideas that seemingly come from nowhere, or we look at a problem and just know the solution. We talk about having a gut feeling about something, an unexplainable assurance that something will work. True, we may be wrong about these intuitions, but that does not explain where or how these intuitions came about.

In terms of human experience in general, nondeterminism would be best equated with free will. Given every indication that a person will do one thing, he can still surprise us and do something completely different. No matter the constraints, no matter how many factors are propelling him towards a particular action, he can always choose something different. Similarly, a nondeterministic computer, given a set of options, is free to pick any of them and is not fully constrained by whatever has come before.

We may ask then, how do we program this? How do we program something that, in effect, is not bound by its programming? The simple answer is that we can’t. It is essentially a contradiction to try.

So how does this affect the original question, that of the possibility of true artificial intelligence? To the true materialist, what I have just stated poses no problems. Given sophisticated enough tools and algorithms, we should be able to duplicate mechanically what nature has produced biologically. The problem of free will is either glossed over as something that can be copied, or dismissed as unimportant. Who is to say that free will and intelligence are interdependent? Who is to say that we can’t have intelligence without free will or self-awareness?

But what does it mean to have intelligence without self-awareness or free will? Can it rightfully be called intelligence at all (at least when we’re trying to replicate human intelligence)? Over the years, numerous definitions and standards have been proposed to handle this question. The most famous is the Turing Test, explained briefly as follows.

A person is placed in a room with a computer terminal and asked to interact with two unknown communicators, one human and one computer. By asking questions or simply making conversation, he is to determine which is which. If a computer can completely convince him that it is human, then it passes the test.

Of course, there are problems with this test. Humans can be quite gullible to clever algorithms that no one would claim are true intelligence. (See ELIZA and play a little with ALICE.) Conversely, humans can also be convinced that their human correspondent is a computer! Furthermore, this test does not necessarily demonstrate true intelligence, but instead clever algorithms of mimicry. A common objection along this line offers the following hypothetical. Suppose you were in a room with a large reference book and a door with a slot through which messages are passed. When you receive a message, it has a series of strange symbols on it. You open the reference book, find that particular series of symbols, and with it another series of symbols that you then copy down and slip back through the door. Later you find out that you were having an intelligible (or so it seemed to the person passing the slips) conversation in Chinese. Does this mean that you actually knew Chinese? (This is Searle’s Chinese Room argument.)

From this one could argue that self-awareness must somehow be involved with intelligence. The ability to deliberately place meaning in interaction seems somehow crucial in distinguishing true intelligence from a huge lookup table (no matter how quickly one can look things up).

What about free will? This is where most classroom discussion revolved, especially when it concerned the ethics of the Three Laws of Robotics. There is a general, if unspoken, consensus that an intelligent being will have at least a modicum of free will. But need this be the case? Limited to our own experience of both intelligence and free will, we find it difficult to conceive of things being any other way. Indeed, the denial of free will leaves only determinism, and if what we think and how we think are entirely determined, do we really think? And if we do not really think, do we really have intelligence?

The determinist, of course, will try to argue that we still have intelligence, but this becomes the intelligence of the giant lookup table, and that intelligence is the same as the intelligence of a mouse or an amoeba (though on different scales). By this rationale, our computers right now are already intelligent! We just need faster processors, bigger hard drives, and enormous databases to digitally create the intelligence of a human.

But the truth remains. We cannot program free will. We cannot program self-awareness. And to me that suggests we cannot program true intelligence. But then, as any Catholic knows, intelligence itself is a manifestation of a spiritual soul, and only the Divine Programmer knows how to write the scripts for that one!

Gabriel Austin
Friday, January 16, AD 2009 2:27pm

The books of Father Stanley Jaki pretty well cover the topic.

j. christian
Friday, January 16, AD 2009 6:30pm

“Did you know that we cannot truly generate random numbers on a computer?”

Ryan! It warms my heart to see this post and that statement. I was just having a conversation with my wife the other day about this very thing. (I think we’d just watched an episode of Battlestar Galactica and the whole Cylon thing sparked it.) I was telling her about my grad school class in math modeling and operations research, and how random number generators always need algorithms with seeds. My take on the whole problem is the same as yours. If the cosmos is just colliding atoms without supernature, how do we escape determinism? Just how sophisticated would a computer have to be to mimic a human mind and be self-aware? What is “understanding” and “meaning” in such a universe???

Sometimes I just don’t get materialists…

Barnaby Dawson
Sunday, January 18, AD 2009 3:41am

In your introduction you state that we rarely debate whether AI is actually possible. Actually I think that there is way too much time spent on this question. All the available evidence indicates that the universe is Turing computable. If anyone can prove, or even find any evidence at all that there was a part of the universe (such as the human mind) that was not Turing computable that would be a huge revolution in physics bigger than anything since Newton.

And that’s the problem with any contention that AI is not possible. A scientific demonstration that AI is not possible would amount to such new physics as I just mentioned above. Without a scientific demonstration you are left with saying that you could have something which passes every test you can devise for intelligence and yet you do not regard as being intelligent (likewise conscious, etc.). This has the standard solipsistic problems. So unless this is the possibility you are considering then the idea that AI is impossible (rather than just very very difficult) is mere wishful speculation and will remain so until some actual evidence is presented.

I should also point out that Turing computation isn’t the only possible determinist framework for physical theories. But for you to be right would really imply that some form of hypercomputation is at work within the human brain/mind. Hypercomputation is a research interest of mine and take it from me there is no evidence that my research is physically relevant (let alone relevant to the philosophy of mine)!

Gabriel Austin
Sunday, January 18, AD 2009 3:03pm

“This has the standard solipsistic problems. So unless this is the possibility you are considering then the idea that AI is impossible (rather than just very very difficult) is mere wishful speculation and will remain so until some actual evidence is presented”.

This is asking to prove a negative. If AI is possible, it is AI that must be demonstrated. Among the great problems [as usual] is that of defining intelligence. I take it to be the ability to make connections [inter legere] without having to install the connections in the machine. In a phrase, can the machine make its own connections.

Barnaby Dawson
Sunday, January 18, AD 2009 4:08pm

It isn’t asking you to prove a negative because there are examples of evidence that would make the contention that AI is impossible more plausible:

1) Finding a problem class which can be solved by minds (reliably) which is not Turing soluble. An example would be the Turing halting problem and another the word problem.

Technically you’d need to show that the minds can do this without significant external input to rule out nature containing the necessary information but this is a logical subtlety.

2) You could find new laws of physics that are not Turing computable (or Turing computable with some random noise added).

If the laws of physics, relevant to the functioning of the human brain/mind, are Turing computable and we reject a solipsistic position then artificial intelligence is possible (or at least as possible as normal intelligence!). Now in order to contend that it is not, one would have to show that there are laws of physics that are relevant to the human brain/mind which are not Turing computable. A solipsistic position wouldn’t help because then you could not demonstrate that other people were intelligent.

As I said before, demonstrating either (1) or (2) would qualify you for a Nobel Prize. This doesn’t mean you can’t! But it does make me doubtful.

Furthermore the argument I am trying to make is for the possibility of AI in principle. Thus it is not necessary for me to exhibit an AI to prove my point. I doubt anyone will do that for at least another decade or two.

Incidentally I meant “the philosophy of mind” in my original comment.

C. Le Sueur
Sunday, January 18, AD 2009 5:45pm

“Did you know that we cannot truly generate random numbers on a computer?”

This is not quite correct. As far as we know, nuclear decay is non-deterministic and has been, and can be, used in random number generators. Other sources of (as far as we know) truly random or random-enough numbers exist, including taking photographs of incoming cosmic rays, the time and type of user input, and so on. This is not limited to seeding the generator: for example, the UNIX device /dev/random will force anything reading bits from it to wait until it has gathered enough entropy before continuing.

But anyway, you don’t provide anything to tie together free will and self-awareness on the one hand, and intelligence on the other. You equate free will with nondeterminism – very dubious since it gets the “free” bit right but what happens to the will? A computer program which uses true randomness in combination with algorithmic rules does not have free will. Self-awareness is apparently something more than just “having information about oneself” (more generally, I presume you think that awareness is more than possessing information) since computers are already aware in this sense of their internal environments such as their temperature, and are easily made aware of other things.

But even so, you don’t set up any implications between lack of these qualities and lack of intelligence. The Chinese Room thought experiment is interesting but hardly settling!

Matt
Sunday, January 18, AD 2009 8:30pm

I will insert my admittedly uneducated, and largely intuitive perspective on this.

If AI is possible, it would not look like human intelligence, making it a questionable possibility. Take, for example, this discussion: it demonstrates considerable intelligence among other capabilities in both interlocutors…. AI may be able to calculate amazing scientific possibilities, but when it comes to non-material ideas there is no comparison between man and animal, nor do I think that there could be a reasonable comparison between man and machine.

As a common person, in order to accept true intelligence in a machine it would have to be capable of developing abstract, non-material, and original ideas.

God Bless,

Matt
ps. the computer’s self-awareness (as in its temperature) is not really the computer’s but the programmer’s awareness, encoded in the system in order to respond to a future event.

Barnaby Dawson
Monday, January 19, AD 2009 6:58am

Response to Matt: My intuitions and yours differ here, so I’m not prepared to accept an argument based just on your intuitions.

I think the problem with your argument lies in the very dubious assumption that people have an unbounded capacity for abstract reasoning and for creating novel ideas (in the absence of significant environmental input). Sure, we have some capability, but your argument needs that capacity to be unlimited. Given what we know about the human brain/mind this would be a very speculative assumption.

Artificial intelligence programs may well have limits to their ability to engage in abstract reasoning, create new ideas or understand concepts, but the issue is whether it’s possible in principle to produce a program which has about the same level of limitation that humans have.

In summary in order to show that AIs could not be intelligent (at the same level that humans are) you must not only show that artificial intelligence will be limited but you must also show that human intelligence is not likewise limited. But the same reasoning (based on the halting problem) that shows that AIs will have certain limits can be applied to humans if the laws of physics that are relevant to the brain/mind are Turing computable.

Matt
Monday, January 19, AD 2009 1:15pm

Barnaby,

I guess if you put up enough artificial constraints then it’s impossible to prove ANYTHING is impossible.

We know that man’s capacity to “engage in abstract reasoning, create new ideas or understand concepts” is not limitless, because that would make us God. But you’ve yet to show that AI is capable of ANY original thought, let alone limitless thought.

It seems to me that AI could achieve the level of intelligence of the highest animals short of humans, and with massive computational power, but that is distinct from human thought.

Just curious, are you a materialist? It seems that you’re treating man as just a higher animal, rather than possessing an eternal soul.

If you are arguing from a purely materialist perspective then it would be impossible to demonstrate the impossibility of AI achieving human intelligence.

Matt
ps. snootiness aside, do you REALLY believe intuitively that AI could ever participate in such a discussion?

Matt
Monday, January 19, AD 2009 2:19pm

Barnaby,

I meant no offense by the “snootiness”, but a little sarcasm, and for that I apologize. I guess I was just trying to reject the idea that intuitive ideas ought to be rejected out of hand, or are not worth discussing. It’s my understanding that Einstein developed the special theory of relativity triggered by an intuition that it was the case.

I think Ryan has very effectively placed a lot more intellectual rigor into the points I was trying to make.

Matt

Barnaby Dawson
Tuesday, January 20, AD 2009 7:44am

Response to Matt:

No offense taken. I’m arguing that if AI is impossible then that would imply a revolution in physics. And I am concluding that until further evidence emerges we should assume that AI is possible.

“..AI capable of ANY original thought..”. I would argue that you have not shown that people are capable of any original thought either by the exceedingly stringent definition you appear to be using. I am arguing that by any reasonable definition if people can reach a certain level of intelligence then that level can be reached by a suitably programmed, and powerful enough, computer.

I don’t think the term materialist is very well defined so I wouldn’t call myself one. I do think that the laws of physics are Turing computable where they are relevant to the human brain/mind.

I think there is a much bigger difference between today’s computers and ‘higher’ animals than between ‘higher’ animals and people. But nevertheless I really am convinced that artificial intelligence is possible! Furthermore my intuition that AI is possible is as strong as my intuition that other people think and feel. I am fascinated by the fact that others lack this intuition or have an opposing one. I try not to be over-reliant on my intuitions, however, even when they are this strong.

“If you are arguing from a purely materialist perspective then it would be impossible to demonstrate the impossibility of AI achieving human intelligence.”

This is only true if you think the idea that the universe involves hypercomputation is not compatible with being a materialist. Do you assume a materialist must believe the universe has a finite number of laws of physics? Because if not then a materialist could in principle reject the possibility of AI (realised by faster computers of the type we have today rather than hypercomputers).

Response to Ryan:

“You’ll have to clarify universe in this dialogue”.

I normally use the definition: “Causally connected region” and for ‘our universe’ I use “The unique, and smallest, causally connected region including myself”. I do not try to separate the universe up into domains such as material and spiritual.

“Do you hold to hidden variable theory?”

I meant to add the caveat: OR Turing computable with some random noise added. In any case I understand Feynman proved that the predictions of quantum mechanics can be computably calculated which I think is enough for the purposes of my argument.

“What is your field?”

I am a mathematician working within set theory on hypercomputation. If I have misused the term Turing computable it is through carelessness, not a lack of understanding. Nevertheless I think that at worst I have failed to specify what I meant rigorously enough. I didn’t say at any point that I work in the philosophy of mind (I don’t). I just mentioned the area.

“What specifically do you mean by laws being computable?”

I mean that the predictions of those laws can be calculated (with initial conditions as input) by a Turing computer. Richard Feynman proved that quantum mechanics is computable in this sense. Strictly speaking the same is only true of general relativity under the assumption of a space time like the one we observe in our universe (but this is enough).

“Intelligence, thought, etc. are actually phenomena…”

Hmmm, I didn’t really mean to say this. I really ought to have said: But for you to be right would really imply that the physics relevant to the mind is not just a combination of Turing computation and randomness. This doesn’t really affect my argument though.

Now that was a very long response! I’ve enjoyed this discussion and regret I may not have the time to continue it (I have my research to write up).

Matt
Tuesday, January 20, AD 2009 9:05am

Barnaby,

The philosophy of materialism holds that the only thing that can be truly proven to exist is matter, and is considered a form of physicalism. Fundamentally, all things are composed of material and all phenomena (including consciousness) are the result of material interactions; therefore, matter is the only substance.

What I am saying is that we believe that there is more to man than the sum of his biological parts. Our thought processes extend beyond the material world to the non-material world. We possess an immortal soul which gives us this ability, which a purely material creature or construct could not. I suggest that this capacity is a critical component of human intelligence.

Matt
