
Notes: Artifictional Intelligence


Collins’s book strikes at the heart of two questions that are important beyond inquiry into the field of AI alone: i) How is knowledge produced? ii) What do we mean by intelligence?

Collins offers the idea that there is no single “AI” as commonly portrayed in the media; rather, there are many levels of AI. I think this is congruent with the broader framework of contemporary cognitive science: cognition, or more narrowly, whatever we label as “intelligence,” can be augmented by tools of our invention. Collins proposes that there are six levels of AI, with examples of the first level being simple things like calculators and tractors. Under the view of cognitive science, tractors amplify our physical capacities and calculators help us offload certain cognitive functions; in both cases, they are simply objects of our creation that help us do our jobs faster and better.

Level II AI, on the other hand, hinges on the ease of anthropomorphizing the tool being used. That is also why Collins insists that there is a huge overlap between Level I and Level II. If one can think of turn signals as the facial expressions of cars, then surely it’s not difficult to imagine Siri or some social robot having some ability to infer the mental states of others. Collins calls Level II AI an asymmetrical prosthesis because we do a huge amount of “repair” work when the AI system does not work the way it should. We have a huge capacity for anthropomorphizing animals and objects in the world, just as we “make good the mistakes of con artists and bogus doctors.” I find the latter part of that sentence deeply fascinating because it bears on trust, a concept that is treated in detail later in the book. It seems like a huge leap from anthropomorphizing to the notion of trust, but the two are connected: to anthropomorphize an object, in other words, to perceive the existence of a mind in it, is to grant it i) the experience of perception and ii) agency, and these are the basic ingredients of trust.

In any case, in order to move from Level II AI to Level III, we need systems that are symmetrical prostheses. Such systems are the subject of today’s AI hype; at this point, they still belong to the realm of science fiction, as Collins points out with his examples of Ava (of Ex Machina), HAL (of 2001: A Space Odyssey), and Samantha (of Her). These three characters are such fluent interlocutors of human culture and conversation that they are truly indistinguishable from other human beings, and they can pass the severest Turing Test (as in Ex Machina, I suppose?). They “may be psychopaths, but so are some humans, and as with humans, you don’t find such things out until way down the line.”

An interesting note on the “repair work” we do every day with these machines: Collins also points out that because we are so used to doing such repair work, we don’t even notice that we are doing it. We put up with the awkwardness of online banking and hotel booking, which can lead to somewhat disastrous mistakes. But I think what is more interesting, and far more dangerous, is the crippling of our own ability to “repair” when we become habituated to the awkwardness of our tools. Thought experiment: the reverse of Ava in Ex Machina is when we piece together discrete bits of direct information (e.g., text exchanges) and indirect information (e.g., social media data) about a person, ignore the conflicting information and problems that wouldn’t go unaddressed in a face-to-face conversation full of the richness of multimodal information, and conjure a notion of the person that is drastically different from what they are really like in person. Think of horror scenarios in online dating and couch-surfing.

In the case of repair with AI, we aren’t even aware that we are correcting the mistakes the system makes and projecting ideas of intelligence onto it; in the latter case of non-repair with other humans, we are aware that we are not correcting some glaring problems in the interaction, but we ignore them because, hypothetically, we are so used to the glaring mistakes produced by artificial systems. Perhaps there is a connection between the two cases, and I wonder whether anthropological methods would be the best way of assessing whether such a connection exists.

