I guess I didn't really notice until now the trend in the readings of computer programmers and computers dealing in some kind of pure information or data, but since reading this line I've been strongly reminded of the theory of Platonic Ideals. What these programmers and mathematicians seem to be getting at is that, through their research and development of computer languages, they have stripped away the trappings of mediated information and broken through to capital-"I" Information. Perhaps I'm just unused to this kind of tech-based discussion, but it seemed unnerving to think that the realm of pure forms was being tapped into through the further development of computers, as if the world and all of its data could be stripped down to 0's and 1's. I'm sure I'm exaggerating this point, but the language used to describe code should, if anything, be thought of as a further abstraction and removal from pure information, rather than a step in the direction of the "pure." To me, this discourse suggests a shift in our societies and governments toward placing investment and trust in AI and away from "pure" human intelligence, and that is a little unsettling.
But I found another unsettling trend while reading this selection: the personification, the altogether human-ification, of these machines' characteristics, to the point where they are no longer tools at all but rather colleagues, "thinking aids, cyborg partners" and "second selves" (259). It makes me question the use of the term "artificial" when describing these machines' intelligence. Is it artificial because it is human-made? Are we not all human-made as well? Is it artificial because it is programmable? Are humans not programmable in much the same way? Is it artificial because it has finite memory? Artificial because it can be replicated? Controlled? Because it ages and becomes obsolete? Obviously humans share all of these attributes. So, really, what is the point in calling it "artificial intelligence" when all signs point to an intelligence so closely modeled after human intelligence that it could be a person's second self - that it can have a selfhood?
Some great points here, James, both about the term "artificial" and the rhetoric around "information." Remember, information as defined by the Shannon-Weaver model of communication is not pure, since it is subject to noise/interference, and it is also divorced from what we commonly think of as information (in terms of content). Invoking the Platonic ideals isn't a bad move, considering both historical and recent investments in presenting technology as somehow removed from the material world ("in the cloud" or "bits and bytes"). Your consideration of both the role of linguistic mediation and the typical human/machine divide is worth pursuing, maybe in your next assignment? Perhaps "artificial" can become a bit of a protective barrier between us and the possibility of our technology exceeding or revolting against our capacities.