Abstract
Many recent papers claim that the symbol grounding problem (SGP) remains unsolved. Most AI researchers ignore this, and the autonomous agents (or robots) they design indeed do not seem to have any “problem”. Nevertheless, these claims should be taken seriously, since nearly all of these papers make “robots” a subject of the discussion, leaving the impression that what many roboticists do must in the long run fail because the SGP has not yet been solved. Starting from Searle’s Chinese Room Argument (CRA) and Harnad’s reformulation of the problem, we take a look at proposed solutions and at the concretization of the problem in Taddeo and Floridi’s “Z condition”. We then refer to two works which have recently shown that the Z-conditioned SGP is unsolvable. We conclude that the original, hard SGP is not relevant in the context of designing goal-directed autonomous agents.