OFF) no "strangers" in here, just lettin' ya'll know/ A.ANDROID voiceover

Nathan Gilbert nathan.gilbert at GMAIL.COM
Fri Dec 20 09:45:05 EST 2013


I think that it will be possible to simulate (or duplicate) human-like
intelligence someday. We don't yet understand the mind well enough, nor
do we have the computing capacity required to do so. I think Searle's
thought experiment is good at elucidating that there is more to
"intelligence" than simple symbol manipulation, which is what all
current chat bots, Watsons, and Deep Blues are doing.
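
For what it's worth, that "simple symbol manipulation" really is the
whole trick in an ELIZA-style bot: surface pattern matching plus canned
if/then replies. Here's a toy Python sketch (purely illustrative, the
patterns and responses are made up by me) to show there's no model of
meaning anywhere in it:

    # Toy ELIZA-style bot: pattern matching plus canned if/then replies.
    import re

    RULES = [
        (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
        (re.compile(r"\b(hello|hi)\b", re.I), "Hello. What shall we talk about?"),
    ]

    def reply(utterance: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                # Echo the captured fragment back inside a canned template.
                return template.format(*match.groups())
        return "Tell me more."  # default when nothing matches

    print(reply("I feel like HAL is watching me"))
    # -> Why do you feel like HAL is watching me?

Scale that up to a few thousand rules and you can fool a casual judge
for a while, but it's still a thermostat with a thesaurus.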

I also prefer the term "sentience" when we are talking about
human-like, HAL 9000-style "intelligence".

On Fri, Dec 20, 2013 at 6:45 AM, Mike Holmes <fofp at staffmail.ed.ac.uk> wrote:
> On 19/12/2013 17:06, Nathan Gilbert wrote:
>
>> Also, the Turing Test isn't very good at determining actual
>> intelligence in the sense most of us look for in AIs. There are very
>> simple chat bots that can hold seemingly meaningful conversations for
>> extended periods and thus pass the Turing Test. These bots are nothing
>> more than a collection of pattern recognition algorithms and if/then
>> clauses, basically on a par with the intelligence of a thermostat.
>> Not the kind of intelligence we find meaningful.
>
>
> You're begging the question there in assuming that a digital algorithmic
> system cannot be "intelligent". I think that's the error Searle makes in his
> "Chinese Room" analogy.
>
> The kind of intelligence we find "meaningful" is perhaps better signified by
> the word "sentience"?
>
> FoFP
>


