Symbol Grounding and its Implications for Artificial Intelligence
Mayo, M.
In response to Searle's well-known Chinese room argument against Strong AI (and more generally, computationalism), Harnad proposed that if the symbols manipulated by a robot were sufficiently grounded in the real world, then the robot could be said to literally understand. In this article, I expand on the notion of symbol groundedness in three ways. Firstly, I show how a robot might select the best set of categories describing the world, given that fundamentally continuous sensory data can be categorised in an almost infinite number of ways. Secondly, I discuss the notion of grounded abstract (as opposed to concrete) concepts. Thirdly, I give an objective criterion for deciding when a robot's symbols become sufficiently grounded for 'understanding' to be attributed to it. This deeper analysis of what symbol groundedness actually is weakens Searle's position in significant ways; in particular, whilst Searle may be able to refute Strong AI in the specific context of present-day digital computers, he cannot refute computationalism in general.
Cite as: Mayo, M. (2003). Symbol Grounding and its Implications for Artificial Intelligence. In Proc. Twenty-Sixth Australasian Computer Science Conference (ACSC2003), Adelaide, Australia. CRPIT, 16. Oudshoorn, M. J., Ed. ACS. 55-60.