
How artificial intelligence is reshaping global power and Canadian foreign policy

Artificial intelligence has a reputation for being a buzzword dangled in front of venture capitalists. A recent UK study found that 40 percent of European ‘AI startups’ did not actually use AI in any “material” way, a mislabelling sometimes introduced by third-party analytics websites, but one that businesses were in no rush to correct. According to the study, AI companies attract between 15 and 50 percent more funding than non-AI startups.

Perhaps complicating the problem is that no single definition of artificial intelligence exists.

When an Australian news outlet described the Boeing 737 Max’s sensor malfunction as a “‘confused’ AI,” technical professionals on Twitter protested that the term is now misapplied to seemingly any technology that uses an algorithm.

But what does AI really mean, and when should we use the term? AI is better understood as a disciplinary ecosystem populated by various subfields that use (often big) data to train goal-seeking technologies and simulate human intelligence. A few of these subfields include machine learning, machine vision, and natural language processing. These technologies are often predictive, designed to anticipate social, political or economic risk and transfer the burden of human decision-making onto a model. In fact, the Treasury Board of Canada, tasked with drafting the directive that will guide AI integration into the federal civil service, prefers the term “automated decision-making” to describe how AI will operate within the Canadian government once the directive takes effect.
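That “automated decision-making” framing is easier to see with a toy example. The sketch below is illustrative only: the data, feature names and threshold are hypothetical, and scikit-learn is simply one common tool for this pattern, not anything the Treasury Board directive prescribes. It shows a predictive model trained on past cases whose output then stands in for a human judgment.

```python
# A minimal sketch of "automated decision-making": a predictive model is
# trained on historical data and then used to score new cases, shifting
# the decision burden from a human reviewer onto the model.
# All data and the 0.5 threshold below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical historical records: two risk indicators per case, plus a
# binary label recording whether intervention turned out to be needed.
X_train = rng.normal(size=(200, 2))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# A new case arrives; the model's predicted probability becomes the
# "decision" (flag for review or not) in place of human judgment.
new_case = np.array([[0.8, -0.2]])
risk = model.predict_proba(new_case)[0, 1]
print("flag for review" if risk > 0.5 else "no action", f"(risk={risk:.2f})")
```

The point of the sketch is the structure, not the model: once a threshold is fixed, the system decides without a human in the loop, which is exactly what regulation of “automated decision-making” is meant to govern.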

Because of its scientific veneer, emerging technology has historically been shielded from social scrutiny. Yet AI applications belong to a class of physical and digital objects used to project Canadian influence abroad and at home. They have a number of foreign policy uses, ranging from trade to defence to development work. In no particular order, AI allocates commercial resources, enables large-scale surveillance of often vulnerable populations, radicalizes extremists, fights extremism, and predicts and reduces climate change vulnerability.

Despite AI’s widespread use, we are only just beginning to decide how its social impact should be regulated. Canada’s most visible commitments to AI have come through the G7, a group whose members possess close to 60 percent of global wealth and who use the platform to cultivate shared norms on topics ranging from security to economics. Less visible, though equally important, are the intersections between AI and Canadian national security. So far, Canadian legislation has focused on the standards that govern data collection, a move that directly, if not obviously, shapes AI’s relationship to security. Because algorithms (and yes, sometimes AI) are enmeshed in political decision-making, these technologies also offer a vision of ‘social good’ that can compete with liberal democratic commitments.

In Ottawa, decision-makers sprinkle evidence of AI’s socio-technical impact across political speeches and reports. Foreign Minister Chrystia Freeland’s 2017 address on Canada’s foreign policy priorities points to the transformative impact that automation and the digital revolution have had on the workforce to explain rising populist disaffection towards free trade and globalization (though Freeland maintains that free trade is still overwhelmingly beneficial).

Similarly, the Department of National Defence’s position, outlined in its Strong, Secure, Engaged policy, acknowledges that western military forces hold a strategic and tactical advantage because their operations use space-enabled systems to process and manipulate big data. (Drones and metadata harvesting are probably the most frequently cited examples here, though many other common uses exist that don’t incite the same level of public concern. For instance, the navy is developing voice-enabled assistants for Canadian warships.)

And in the aftermath of the New Zealand mosque shooting, Public Safety Minister Ralph Goodale called on digital platforms to better recognize the ways they propagate right-wing extremism and terrorism. (Curiously, right-wing extremism and terrorism appear as separate categories in his speech, even though the New Zealand shooting, the deadliest in that nation’s history, was itself an act of right-wing terrorism.) Goodale went further, telling his G7 colleagues that platforms that could not temper their algorithms “should expect public regulation…if they fail to protect the public interest.”
