
From Error to Autonomy: artistic potentialities of digital error

To this day, the idea of computational autonomy remains contested and has not reached consensus in the scientific community. At the dawn of computation and data analysis in the nineteenth century, Ada Lovelace argued that the machine could legitimately produce diverse compositions; she nevertheless held that it could not "originate" anything that had not been programmed in advance (LOVELACE, 1842). The possibility of computational autonomy was taken up again in the twentieth century, notably by Alan Turing, who opposed Lovelace's position.

Alan Turing believed that the newly re-programmable computer could simulate human intelligence if it were programmed sufficiently for that purpose. To that end, the mathematician encouraged the inclusion of pseudo-random algorithms in computer programming (TURING, 1950). These are now described as stochastic processes: algorithms that make seemingly random decisions which are nonetheless calculable through mathematical probability. On the one hand, they became one of the foundations of computational art, as the device began to be explored as an assistant to creativity through its random results, which appear to simulate human creativity. On the other hand, the artists' experimental methodology exposed one of the great problems of computational art: the artist stopped recognizing their own output.
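Turing's suggestion can be illustrated with a minimal sketch (the generator and its constants below are my own illustrative choices, not any historical algorithm): a fully deterministic formula whose output merely looks random, and therefore remains calculable.

```python
# A minimal sketch of a pseudo-random ("stochastic") decision process.
# A linear congruential generator produces a deterministic sequence that
# nonetheless appears random — the kind of simulated unpredictability
# Turing proposed including in programs.

def lcg(seed: int):
    """Deterministic pseudo-random stream of floats in [0, 1)."""
    state = seed
    modulus, multiplier, increment = 2**31 - 1, 48271, 0  # Park-Miller-style constants
    while True:
        state = (multiplier * state + increment) % modulus
        yield state / modulus

def compose(seed: int, steps: int = 8):
    """Make 'seemingly random' compositional choices from the stream."""
    stream = lcg(seed)
    palette = ["black", "white", "red", "blue"]
    return [palette[int(next(stream) * len(palette))] for _ in range(steps)]

# The same seed always reproduces the same "random" composition:
print(compose(seed=42))
print(compose(seed=42))  # identical output: calculable, not truly random
```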

Both Vilém Flusser and Edmond Couchot address this relation of artists to technology through the analogy of the black box: the artist works without knowing the functioning of the mechanism that generates the images. Couchot regards any image produced by technical mediation as a simulation of the modelled process, because coding and decoding shape and interpose the specificity of the medium. Concerning the representational content of images, Hito Steyerl later conceptualized the poor image: an image that tends towards abstraction in an aesthetic of «digital ruin». Such images suffer unexpected failures and accumulate coding artifacts, i.e. visible distortions caused by the coding and decoding of their contents. In Steyerl's view they are like bruises on the images, left by the compression and transmission of data, which attest to their dynamics and movement (STEYERL, 2009).

While the medium reads and decodes an image, data failures sometimes occur that alter the digital content. It is from this fact that Steyerl reflects on the desire to become like digital images. She argues that these images live by themselves as things: they show themselves to be autonomous and participate in the construction of our reality, things that accumulate force but also degenerate (STEYERL, 2010). In parallel, Daniel Rourke, in his essay "Digital Autonomy", rethinks the computational error as an improbable redefinition of the digital by its own laws, one that drives progress. In the author's words, "the image as thing maintains its autonomy through the glitches it harbours", a glitch being a coding artefact resulting from an error (ROURKE, 2011).
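In practice, a glitch of this kind can be reproduced by hand. The sketch below (file names are hypothetical, and the outcome depends on the format and the decoder) flips a few bytes inside a compressed image so that decoding yields exactly the kind of coding artefacts described above.

```python
# A minimal sketch of a deliberate glitch: corrupt a few bytes of a
# compressed image file so the decoder produces visible coding artifacts.
# File names are hypothetical; some corruptions simply make the file unreadable.

import random

def glitch_file(src: str, dst: str, flips: int = 5, seed: int = 1) -> None:
    data = bytearray(open(src, "rb").read())
    rng = random.Random(seed)
    # Skip the first kilobyte to leave the header mostly intact,
    # so the image still decodes but its payload is visibly damaged.
    for _ in range(flips):
        pos = rng.randrange(1024, len(data))
        data[pos] ^= 0xFF  # invert one byte of compressed data
    open(dst, "wb").write(bytes(data))

glitch_file("input.jpg", "glitched.jpg")  # hypothetical paths
```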

The transformation of the digital causes unexpected and inevitable errors, generally produced by the manipulation of contents and their constant spatiotemporal updating. In practice, the error originates in the constant re-adaptation of digital information, whose "apparent" malfunction reveals evidence of the medium. For this reason, investigating the coded artefacts of the image led me to consider their potential in the artistic field. Artists such as Rosa Menkman try to preserve and reproduce this moment of error. From her perspective, failure can encourage a new understanding of how a system functions; it can lead to a reflexive moment that may reveal itself as an exoskeleton of progress, a moment of catharsis (MENKMAN, 2011). The condition of the content is open to different interpretations, which need not be entirely negative, since it can guide the user towards a critical dimension or reveal some important factor of the computational system.

“Mistake as a transgression of norms which, in fact, represents a huge potential for new creative solutions.”

Interestingly, this was the theme of the Ars Electronica festival held in September 2018, where the current «need for error» was debated as a way to advance digital autonomy: for example, in the functioning of neural networks and the structuring of predictive analyses. These processes are optimized through calculations and measures of error, as well as random degrees of mutation, in order to self-govern their actions and simulate more autonomous systems.
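A minimal sketch of those two ingredients, an error measure and random mutation, might look like the following (the target values and parameters are illustrative only): a candidate is repeatedly perturbed at random and kept only when its measured error drops.

```python
# Illustrative only: a crude "self-governing" loop that couples an error
# measure with random mutation, keeping a mutant only when it lowers the error.

import random

def error(candidate, target):
    return sum((c - t) ** 2 for c, t in zip(candidate, target))

def optimize(target, steps=2000, mutation=0.1, seed=0):
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in target]   # random starting point
    for _ in range(steps):
        mutant = [b + rng.gauss(0, mutation) for b in best]
        if error(mutant, target) < error(best, target):
            best = mutant                          # keep only improvements
    return best

print(optimize(target=[0.3, -0.7, 0.5]))           # converges near the target
```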

At this point, I admit that the dystopian idea of a technology entirely capable of surpassing human capabilities will probably never come true. However, the increasing complexity of the data fed into simulations of machine behavior hints at its future capabilities. The susceptibility of computational systems to failure is linked to the formal character of their behavior, since the computer is taken as a closed system that acts only through deterministic processes. In this sense, the existence of limits on the manipulation of any technical process seems obvious, given the finitude of codification; in practice, however, those limits are continually expanding their possibilities. The probability of digital autonomy thus depends on the future development of computing and on the complexity of the data inserted.

Expectations for A.I.-based computational creation remain high and are developing rapidly. More intelligent things and agents are appearing, and they can apparently operate (semi-)autonomously in unsupervised environments. One can envision a generation of mechanisms that incorporate and adapt their findings in order to adjust their autonomous actions, not with the aim of replacing humans but of aiding production by simulating our behavior. Assuming these autonomous capacities in the artistic sphere, I wonder whether the computational device could come to simulate creativity in a way similar to the artist. As Turing would say, if it were "taught enough"; I imagine the result would be, in substance, a creativity very different from ours. Still, I admit it is «risky» to attribute any creative capacity to algorithmic computation, because creativity is directly related to the inner complexity of the human being, which, for now, cannot be simulated.

Source: Artificial Intelligence on Medium
