Wintery Knight

…integrating Christian faith and knowledge in the public square

Will robots and machines ever have consciousness like humans?

There is a very famous thought experiment from UC Berkeley philosopher John Searle that all Christian apologists should know about. And now everyone who reads the Wall Street Journal knows about it, because of this article.

In that article, Searle is writing about the IBM computer that was programmed to play Jeopardy. Can a robot that wins on Jeopardy be “human”? Searle says no, and his famous Chinese room example (discussed in the article) explains why: there is no thinking computer, there never will be a thinking computer, and you cannot build up to a thinking computer by adding more hardware and software.

Excerpt:

Imagine that a person—me, for example—knows no Chinese and is locked in a room with boxes full of Chinese symbols and an instruction book written in English for manipulating the symbols. Unknown to me, the boxes are called “the database” and the instruction book is called “the program.” I am called “the computer.”

People outside the room pass in bunches of Chinese symbols that, unknown to me, are questions. I look up in the instruction book what I am supposed to do and I give back answers in Chinese symbols.

Suppose I get so good at shuffling the symbols and passing out the answers that my answers are indistinguishable from a native Chinese speaker’s. I give every indication of understanding the language despite the fact that I actually don’t understand a word of Chinese.

And if I do not, neither does any digital computer, because no computer, qua computer, has anything I do not have. It has stocks of symbols, rules for manipulating symbols, a system that allows it to rapidly transition from zeros to ones, and the ability to process inputs and outputs. That is it. There is nothing else.

Here is a link to the full article by John Searle on the Chinese room illustration.
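If you program, the setup may be easier to see as code. Here is a minimal sketch – my own toy illustration, not anything from Searle’s article – in which the “room” is nothing but a lookup table and the “computer” is a function that matches symbol shapes:

```python
# A toy "Chinese room": the program maps input symbols to output symbols
# by rule-following alone. The phrases and rule book here are hypothetical
# stand-ins; nothing in this code has access to what the symbols mean.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    """Look up the input string in the rule book and return the listed output.

    The matching is done purely by the form of the strings (syntax);
    the function never touches their meaning (semantics).
    """
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))  # a fluent reply, with zero understanding
```

The function gives every outward sign of answering in Chinese, yet it has no access to what any of the strings mean – syntax without semantics, exactly as Searle argues.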

By the way, Searle is a naturalist – not a theist, not a Christian. But he does oppose postmodernism. So he isn’t all bad. But let’s hear from a Christian scholar who can make more sense of this for us.

Here’s a related article on “strong AI” by Christian philosopher Jay Richards, who has a Ph.D in philosophy from Princeton.

Excerpt:

Popular discussions of AI often suggest that if you keep increasing weak AI, at some point, you’ll get strong AI. That is, if you get enough computation, you’ll eventually get consciousness.

The reasoning goes something like this: There will be a moment at which a computer will be indistinguishable from a human intelligent agent in a blind test. At that point, we will have intelligent, conscious machines.

This does not follow. A computer may pass the Turing test, but that doesn’t mean that it will actually be a self-conscious, free agent.

The point seems obvious, but we can easily be beguiled by the way we speak of computers: We talk about computers learning, making mistakes, becoming more intelligent, and so forth. We need to remember that we are speaking metaphorically.

We can also be led astray by unexamined metaphysical assumptions. If we’re just computers made of meat, and we happened to become conscious at some point, what’s to stop computers from doing the same? That makes sense if you accept the premise—as many AI researchers do. If you don’t accept the premise, though, you don’t have to accept the conclusion.

In fact, there’s no good reason to assume that consciousness and agency emerge by accident at some threshold of speed and computational power in computers. We know by introspection that we are conscious, free beings—though we really don’t know how this works. So we naturally attribute consciousness to other humans. We also know generally what’s going on inside a computer, since we build them, and it has nothing to do with consciousness. It’s quite likely that consciousness is qualitatively different from the type of computation that we have developed in computers (as the “Chinese Room” argument, by philosopher John Searle, seems to show). Remember that, and you’ll suffer less anxiety as computers become more powerful.

Even if computer technology provides accelerating returns for the foreseeable future, it doesn’t follow that we’ll be replacing ourselves anytime soon. AI enthusiasts often make highly simplistic assumptions about human nature and biology. Rather than marveling at the ways in which computation illuminates our understanding of the microscopic biological world, many treat biological systems as nothing but clunky, soon-to-be-obsolete conglomerations of hardware and software. Fanciful speculations about uploading ourselves onto the Internet and transcending our biology rest on these simplistic assumptions. This is a common philosophical blind spot in the AI community, but it’s not a danger of AI research itself, which primarily involves programming and computers.

AI researchers often mix topics from different disciplines—biology, physics, computer science, robotics—and this causes critics to do the same. For instance, many critics worry that AI research leads inevitably to tampering with human nature. But different types of research raise different concerns. There are serious ethical questions when we’re dealing with human cloning and research that destroys human embryos. But AI research in itself does not raise these concerns. It normally involves computers, machines, and programming. While all technology raises ethical issues, we should be less worried about AI research—which has many benign applications—than research that treats human life as a means rather than an end.

When I am playing a game on the computer, I know exactly why what I am doing is fun – I am conscious of it. But the computer has no idea what I am doing. It is just matter in motion, acting on its programming and the inputs I supply to it. And that’s all computers will ever do. Trust me, this is my field. I have a BS and an MS in computer science, and I have studied this area. AI has applications for machine learning and search problems, but consciousness is not on the radar. You can’t get there from here.
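To make that concrete, here is a minimal sketch – my own illustration, not from either article – of how a game-playing program actually “decides”: exhaustive minimax search over tic-tac-toe. It picks perfect moves by mechanically scoring every continuation, and nothing in it knows that it is playing:

```python
# A toy game-playing "AI": minimax search over tic-tac-toe.
# The board is a 9-character string; "X" maximizes, "O" minimizes.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move): +1 if X wins, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w is not None:
        return (1 if w == "X" else -1), None
    moves = [i for i, square in enumerate(board) if square == " "]
    if not moves:
        return 0, None  # board full: draw
    best_score, best_move = None, None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, "O" if player == "X" else "X")
        better = best_score is None or (
            score > best_score if player == "X" else score < best_score)
        if better:
            best_score, best_move = score, m
    return best_score, best_move

print(minimax(" " * 9, "X"))  # (0, 0): perfect play from an empty board is a draw
```

Every “decision” here is string slicing and integer comparison – precisely the stocks of symbols and rules for manipulating them that Searle describes.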

Filed under: Polemics


Can Darwinian evolution create new functional biological information?

Here’s a great article from Evolution News that explains the trouble Darwinian evolution has building up new functional biological information through random mutation and natural selection.

Casey Luskin takes a look at a peer-reviewed paper that claims Darwinian evolution can do the job of creating new information, then explains what’s wrong with the paper.

Excerpt:

In Wilf and Ewens’s evolutionary scheme there is a smooth fitness function. Under this view, there is no epistasis, where one mutation can effectively interact with another to affect (whether positively or negatively) fitness. As a result, any mutations that move the search toward its “target” are assumed to provide an immediate and irrevocable advantage, and are thus highly likely to become fixed. Ewert et al. compare the model to playing Wheel of Fortune:

The evolutionary model that Wilf and Ewens have chosen is similar to the problem of guessing letters in a word or phrase, as on the television game show Wheel of Fortune. They specify a phrase 20,000 letters long, with each letter in the phrase corresponding to a gene locus that can be transformed from its initial “primitive” state to a more advanced state. Finding the correct letter for a particular position in the target phrase roughly corresponds to finding a beneficial mutation in the corresponding gene. During each round of mutation all positions in the phrase are subject to mutation, and the results are selected based on whether the individual positions match the final target phrase. Those that match are preserved for the next round. … After each round, all “advanced” alleles in the population are treated as fixed, and therefore preserved in the next round. Evolution to the fully “advanced” state is complete when all 20,000 positions match the target phrase.

The problem with this approach is that a string of biological information in which only some of the letters match a useful sequence has no present function, and therefore gives the organism nothing that helps it survive and reproduce.
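To see how target-directed the scheme is, here is a minimal sketch of the model as the excerpt describes it – my own scaled-down illustration (a 28-letter phrase instead of 20,000 loci), not code from the paper:

```python
import random
import string

# Wilf-Ewens "Wheel of Fortune" scheme, scaled down: every round, each
# unmatched position mutates, and any position that happens to match the
# target is treated as a fixed "advanced" allele and kept forever.

ALPHABET = string.ascii_lowercase + "_"   # 27 symbols
TARGET = "methinks_it_is_like_a_weasel"   # 28 loci

def wilf_ewens_rounds(target, seed=0):
    """Count the rounds until every locus matches the target phrase."""
    rng = random.Random(seed)
    phrase = [rng.choice(ALPHABET) for _ in target]
    fixed = [p == t for p, t in zip(phrase, target)]
    rounds = 0
    while not all(fixed):
        rounds += 1
        for i, done in enumerate(fixed):
            if not done:
                phrase[i] = rng.choice(ALPHABET)     # mutate this locus
                fixed[i] = (phrase[i] == target[i])  # matches are kept forever
    return rounds

print(wilf_ewens_rounds(TARGET))  # on the order of 100 rounds, not 27**28 guesses
```

Because every matching position is frozen and fitness is measured as proximity to a known target, the search finishes in roughly a hundred rounds rather than the ~27^28 tries a blind search would face.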

Look:

Thus, Wilf and Ewens ignore the problem of non-functional intermediates. They assume that all intermediate stages will be functional, or lead to some functional advantage. But is this how all fitness functions look? Not necessarily. It’s well known that in many instances, no benefit is derived until multiple mutations are present all at once. In such a case, there’s no evolutionary advantage until multiple mutations are present. The “correct” mutations might occur in parallel, but the odds of this happening are extremely low. Ewert et al. illustrate this problem in the model by using the example of the difficulty of one phrase evolving into another:

Suppose it would be beneficial for the phrase

“all_the_world_is_a_stage___”

to evolve into the phrase

“methinks_it_is_like_a_weasel.”

What phrase do we get if we simply alternate letters from the two phrases?

“mlt_ihk__otli__siaesaaw_a_e_.”

Under the assumptions in the Wilf and Ewens model, the “fitness” of this nonsense phrase ought to be exactly half-way between the fitnesses of “all the world is a stage” and “methinks it is like a weasel.” Such a result only makes sense if we are measuring the fitness of the current phrase by its proximity to the target phrase.

But the gibberish of the intermediate phrase doesn’t cause any problem under Wilf and Ewens’s model. Not unlike Richard Dawkins, they assume that intermediate stages will always yield some functional advantage. And as more and more characters in the phrase match the target, it becomes more and more fit. This yields a nice, smooth fitness function — rich in active information — not truly a blind search.
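You can check the “half-way” arithmetic yourself. Here is a small sketch – my own illustration; the paper’s exact splicing rule may differ at the phrases’ uneven tails – that builds the hybrid and scores it both ways:

```python
# The two phrases from the excerpt, spliced by alternating letters.
A = "all_the_world_is_a_stage___"     # 27 characters
B = "methinks_it_is_like_a_weasel"    # 28 characters

def splice(a, b):
    """Alternate letters from two phrases (one hypothetical splicing rule)."""
    return "".join((b if i % 2 == 0 else a)[i]
                   for i in range(min(len(a), len(b))))

def proximity_fitness(phrase, target):
    """Wilf-Ewens-style fitness: fraction of positions matching the target."""
    return sum(p == t for p, t in zip(phrase, target)) / len(target)

def functional_fitness(phrase, target):
    """All-or-nothing fitness: a phrase either works or it doesn't."""
    return 1.0 if phrase == target else 0.0

hybrid = splice(A, B)
print(hybrid)                          # gibberish resembling neither phrase
print(proximity_fitness(hybrid, B))    # about 0.5: "half as fit" by proximity
print(functional_fitness(hybrid, B))   # 0.0: no function at all
```

By proximity to the target the gibberish scores about half fit, but by any functional, all-or-nothing standard it scores zero – which is exactly the problem of non-functional intermediates.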

Not only is there that first problem, but here’s a second:

Wilf and Ewens endowed their mathematical model of evolution with foresight. It is directed toward a target — an advantage that natural selection conspicuously lacks. And what, in our experience, is the only known cause that is goal-directed and has foresight? It’s intelligence. This means that once again, the Evolutionary Informatics Lab has shown that simulations of evolution seem to work only because they’ve been intelligently designed.

This is worth the read. If Darwinian mechanisms really could generate code, then there would be no software engineers. The truth is, the mechanisms don’t work to create new information. For that, you need an intelligent designer.

Filed under: Polemics

A look at how a former skeptic changed his mind about God’s existence

Reformed Seth sent me this post from the ultimate object blog.

Excerpt:

“There has been some confusion and more than a few requests for explanation about what is going on with my core beliefs. Some time last week, I realized that I could no longer call myself a skeptic. After fifteen years away from Christianity, most of which was spent as an atheist with an active, busy intent on destroying the faith, I returned to a church (with a real intention of going for worship) last Sunday. Although I know I may struggle with doubt for the rest of my life, my life as an atheist is over.

The primary motivator in my change of heart from a Christ-hater to a card-carrying Disciples of Christ member was apologetic arguments for God’s existence. Those interested in these arguments may pursue them in the comments section, but I don’t want to muddle this explanation up with formal philosophical proofs. Briefly, I grew tired of the lack of explanation for: the existence of the universe, moral values and duties, objective human worth, consciousness and will, and many other topics. The only valid foundation for many of those ideas is a personal, immaterial, unchanging and unchangeable entity. As I fought so desperately to come up with refutations of these arguments – even going out of my way to personally meet many of their originators, defenders, and opponents – I realized that I could not answer them no matter how many long nights I spent hitting the books. The months of study rolled on to years, and eventually I found an increasing comfort around my God-believing enemies and a growing discontent and even anger at my atheist friends’ inability to kill off these fleas in debate and in writing, an anger that gave birth to my first feeling of separateness from skepticism after reading comments related to a definitively refuted version of the Christ Myth theory, the idea that Jesus Christ never even existed as a person at all. Line after line after line of people hating Christianity and laughing at its “lie,” when solid scholarship refuting their idea was ignored completely. It showed that the motive of bashing and hating Christianity for some skeptics wasn’t based in reason and “free thinking” at all, although it would be unfair to lump many of my more intellectually rigorous and mentally cool skeptic friends in this way.

As time went on, I reverted the path I traced after giving up Christianity so long ago: I went from atheist to agnostic to … gulp … *leaning* in the direction of God, to finally accepting that he very well could exist, and then to coming out and admitting (quietly) He did exist. After considering Deism (the belief in a God who abandons His creation), Islam, Hinduism (yes, Krishna, don’t laugh), Baha’i, and even Jainism briefly, I have decided to select Christianity due to its superior model for human evil and its reconciliation, coupled with the belief that God interacted with man directly and face-to-face and had *the* crucial role in this reconciliation. This, of course, doesn’t prove that Christianity is absolutely true (although I can prove that God exists), but rather reflects my recognition that Christianity is exactly what I would expect to be the case given that God exists.”

I feel guilty when I read posts like that… I think to myself “you shouldn’t be so mean to people who disagree with you, maybe they are like this guy – honestly thinking things through and willing to change their minds”. Sigh. I feel so guilty right now.

I really like what he had to say about reconciliation, though. I feel the pressure to reconcile people to God through Christ’s offer of forgiveness – that’s why I work so hard on apologetics, and why I earn money to buy people the things they need for their studies. To really get people to be reconciled, you have to be convincing. You have to be persuasive. And you can’t do that without having studied the arguments and the evidence.

I also agree with him about the reconciliation. The resurrection is a good argument, but it’s inductive – it’s the best explanation based on the historical bedrock that we have. But what clinches the case for Christianity in the end is Christ descending from his glory to suffer with us – and for us, too.

In case any of you haven’t read my testimony, it’s right here.

Seth also found evidence that this guy really was a skeptic before. (That link goes to John Loftus’ “Debunking Christianity” web site)

Filed under: Commentary
