
Blog: HUBweek Change Maker: Mauro Martino

Founder, Visual AI Lab, IBM Research


Mauro Martino is a scientist and artist who focuses on information technology related to the exploration, dissemination, and sharing of knowledge. He uses artificial intelligence to explore and enhance understanding of the world, transforming any type of information, whether it is visual, acoustic, or semantic, into interactive tools that are beautiful and simple to use. Originally from Italy, Martino is the creator and director of the Visual Artificial Intelligence Lab at IBM Research, and he is also a Professor of Practice at Northeastern University.

His AI Portraits project was voted the coolest of 2018 in this year’s HUB Madness competition, presented by BNY Mellon.

Zoe Dobuler: What is your background, and how did you find your way to your current role and area of research?

Mauro Martino: I started out in interaction design, and then my Ph.D. was in design technology, so a bit more mathematically oriented. My first job was focused on urban studies at MIT’s Senseable City Lab. Later, I moved from the Senseable City Lab to a staff position at Albert-László Barabási’s Center for Complex Network Research. I realized that the experience of visualizing not just data but models was so complex and interesting that it became my new language, my new way of thinking, and now I see it everywhere. And I’m still good friends with Barabási — we still work together on projects.

Later, I became a professor at Northeastern University, but I immediately stepped off the tenure track to lead a new team at IBM Research focused on visualization in the AI field. So, now it’s been five years that I’ve been leading this visual AI lab, and the problem is always the same — how can we best visualize models, the flow of data from reality? And the models are incredibly complicated, not easy to visualize. But this is a new frontier for artificial intelligence: In 90% of cases, we’re trying to visualize a neural network. And neural networks are a type of mathematical model that is very hard to explain and to interact with.

ZD: What are you working on now? How does the AI Portraits project fit in?

MM: I do spend a lot of my time now thinking about the ethics of the field. We are trying to make models accessible to everybody. And AI Portraits is one of these examples. I think of myself as working with three main audiences: normal people, people who work in AI, and people who build AI. So, the most sophisticated interface is for people who are building new AI models. Then, there is the problem of explaining, and making accessible, AI to normal people who don’t know anything, like my mother, or anyone on the street.

So, AI Portraits is one of these types of experiments, where we tried to create a game to give these kinds of people the experience of being portrayed by an AI system. In reality, there are many things around us that use neural networks, like our iPhones — for example, apps that make our skin look smoother. But we don’t even realize they are there, that AI is seeing us in those moments. So with AI Portraits, we wanted to make it clearer that there is AI at work.

AI Portraits places faces in a canonical position — facing forward. That means, whatever the position of your face in the photo you upload to AI Portraits, you will always come out the same — not with the same expression, because the expression will come from you — but in the same position. If you try to mask your chin, the AI will build your chin. If you have glasses, it takes the glasses out. And we try to show, by way of gamification, that the AI is straining to do something, but only certain things: The AI is straining just to put you in that canonical position, and it is able to do that endlessly, and only that. So, if some information is missing, it tries to rebuild it from what it knows about faces.
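The canonical-position idea Martino describes can be illustrated with a much simpler stand-in than the neural network AI Portraits actually uses: a similarity transform (rotation, scale, and shift) that maps the detected eye landmarks of any input face onto fixed template positions, so every face comes out "facing" the same way. This is a minimal sketch under that assumption — the landmark indices and canonical coordinates below are illustrative, not AI Portraits' real pipeline.

```python
import numpy as np

def align_to_canonical(landmarks, left_eye_idx, right_eye_idx,
                       canon_left=(0.35, 0.4), canon_right=(0.65, 0.4)):
    """Map 2D face landmarks so the eyes land exactly on fixed
    canonical positions. Two point pairs determine an exact similarity
    transform; complex arithmetic keeps the math to one line."""
    pts = landmarks[:, 0] + 1j * landmarks[:, 1]   # points as complex numbers
    p1, p2 = pts[left_eye_idx], pts[right_eye_idx]
    q1, q2 = complex(*canon_left), complex(*canon_right)
    z = (q2 - q1) / (p2 - p1)                      # combined scale + rotation
    out = z * (pts - p1) + q1                      # apply to every landmark
    return np.stack([out.real, out.imag], axis=1)

# A tilted face: two eyes plus a chin point, in pixel coordinates.
face = np.array([[100.0, 120.0],   # left eye
                 [160.0, 100.0],   # right eye
                 [140.0, 220.0]])  # chin
canon = align_to_canonical(face, 0, 1)
# The eyes now sit at (0.35, 0.4) and (0.65, 0.4); the chin follows
# under the same rotation/scale, so the whole face is "straightened".
```

Whatever tilt or scale the input face has, the output eyes land on the same template coordinates — the geometric analogue of what Martino describes: every upload comes out in the same position, with only the expression carried over from the original photo.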

And then, we tried to introduce other interesting topics by playing with AI Portraits. There are kinds of biases that are sophisticated, invisible. There is a lot of focus on AI’s inability to identify Black women. But this is a problem that we can — slowly, with effort — correct, using data that actually represents the population.

But there are other types of biases we can think about, too. What if I trained the system with just actors, from any nationality, any ethnicity? At that point you have good representation of all races and genders in this specific dataset; we’ve corrected that particular type of bias in this particular situation. But there’s still another one — do you look like an actor in some way? If the collection is just actors, what happens to your face? And this bias is almost invisible, since we used so many faces from so many countries, including people of all genders and races. But the photos we used are posed like an actor’s, not like someone in normal daily life. So, this pushed the visualization to show you looking more like a character. And technically, this is another type of bias.

Hopefully, using AI Portraits will push people to think about all these other little biases — I think it will be very meaningful for the future of AI for people to be aware that there are many, many types of big and small biases that can be present. So, in my daily work, we’re really trying to do as much as we can to make AI accessible and free of any type of bias. It’s hard work, but in some way, with this little game and visual experience, we hope to push people to think about it. It’s much better than just creating a didactic slideshow that tries to explain the problem to you. If you gamify the experience, maybe you’ll start to think about it more yourself.

ZD: Much of your work with AI has to do with the arts. What has been your experience at the intersection of art and artificial intelligence? How can each discipline inform the other?

MM: It’s very easy to be an artist who claims to use AI, because it’s a buzzword that gets a lot of attention. I always try to visualize data or a model, or both sometimes. It’s a very interesting moment, because we don’t really know where AI is going, and what kind of contribution it’s going to have in the art field. I use AI to communicate something that’s so innovative that it justifies this effort. So, if I build an AI model to generate an abstract work, I don’t think I’m in the art field, because I’m doing something with code that is possible to do without code — and that other people did many years ago without code. But, there is a type of aesthetic that really comes from AI, from collaboration with models, that is in some way unpredictable, because the structure is harder to decode. So, my effort in terms of aesthetics is to understand when I can build something that is really different, and that cannot be made without using AI.

Right now, I’m working on another project, AI sculptures, that will be released in a few weeks. AI sculptures use a GAN model to generate novel 3D objects. The idea is to make a new type of sculpture — new because it’s not like anything I’ve seen before, and new because it’s impossible to make by any method except with AI. And you have to be careful, because it’s not enough to just reproduce humankind’s existing aesthetic. You need to find a new type of aesthetic; otherwise there’s no reason to use these tools. Otherwise, it can be impressive for my mom when I show her a portrait made by AI that looks like it was made by Giotto, but that’s not the point.

Of course, with AI Portraits, the point is different. I’m not working to make something aesthetic; I’m working to democratize the experience of being portrayed by an AI engine. Obviously, people can have a portrait made — like the one that comes from AI Portraits — if they pay an artist to do it. But you need to pay, and to find a good artist — it’s not something everyone can do. AI Portraits democratizes the experience of being portrayed. And often, when you upload your image, you don’t feel that what comes back is you. That’s a very common experience when you have your portrait painted by a real artist. My father always did portraits of his friends, and people would complain that the portraits didn’t look exactly like them, and he would say, “This is my way of seeing you.” It’s just in his DNA. And I wanted to share that experience with everybody. You may not recognize yourself, but your friends will recognize you — try this experiment. Try it with your parents, your friends; ask them, “Do you think this is me?” And they’ll say, “Yes, of course there’s something of you there, I recognize you.” There’s always something. So, democratization and original output are two of the directions. I can’t imagine a better way to mix AI and art.

ZD: When thinking about the future of AI, where do you think the field will be in 10 years? In 50?

MM: My activity in the field is very limited to a specific kind of work; it’s just a tiny fraction of the whole AI universe. I’m not one of the guys who can predict the future of AI. But in my little piece of land, I can try my best.

I feel that AI will be pervasive, and will go everywhere in the future. For example, if you are a journalist, your experience of writing an article will be very different. AI will write the article for you, and you will just need to highlight what information you want it to include. The writing part will just be for you to add some surprising joke or comment. But the idea that we write to describe what’s going on will become obsolete. And how we define “writing” will be different, too: Instead of sitting down and writing, you will be coordinating the system that writes the article for you. And of course, you can add to it to make it better. The same thing will happen, say, with a choreographer. The system will write the choreography for you, and you will be in dialogue with the system.

ZD: So you see it as being very collaborative.

MM: I think the future of AI is a future with a better type of management. We’ll be focused on being the director of things — everyone will become a director of something. Like a music director, but all the violins and other instruments will be different types of AI engines. But being a good director will not be easy, and we will all need to learn how to direct this technology well. Of course, there are many types of activities that will remain a symbiosis with other people, not with an AI engine — like sports, for instance. But I hope AI will be so ingrained that people will never have to experience inhuman work. AI will give us freedom, and time, and maybe the luxury to be bored.

The HUBweek Change Maker series showcases the most innovative minds in art, science, and technology making an impact in Boston and around the world.

To stay up to date on our Change Makers, events in Boston, and everything else at HUBweek, subscribe to our newsletter, and follow us on Twitter, Facebook, and Instagram.

Source: Artificial Intelligence on Medium
