Blog: Documenting the world with Gado Images and Watson

Thomas Smith is co-founder and CEO of Gado Images, a media and technology company based in the San Francisco Bay Area that digitizes, captures and shares the world’s visual history.

Gado Images is a media and technology company that works with photographers, archives, and collectors worldwide to help them digitize, annotate, and monetize their photographs, illustrations, sheet music, and other visual materials. We work with everyone from large historical archives, such as Johns Hopkins University and the Afro-American Newspapers, to small historical collectors. We also have a network of contemporary photographers worldwide, providing news photographs documenting technology, business, finance, and travel. Our images routinely appear in CNN, The New York Times, Entrepreneur, Vanity Fair, Forbes, Fortune, and many other outlets.

A big part of our business is taking in imagery and quickly making sense of it — who is depicted, what they are doing, when the photo was taken or illustration created, etc. This kind of information and context is essential to helping our customers find the perfect image to illustrate their news stories, documentaries, products, etc. As you can imagine, this is no small task, especially when dealing with everything from a news photo shot today to a historical engraving that might be more than 200 years old.

To quickly process and make sense of our customers’ materials — and to make them more searchable and, thus, more valuable on the image licensing market — we use our Cognitive Metadata Platform (CMP), which automates many tasks related to understanding images. The CMP, in turn, draws heavily on the cognitive computing capabilities of IBM Watson™.

One of the first steps we perform when we ingest a new client image is to run it through the Watson Visual Recognition service. This provides us with tags describing the content of the image, along with a confidence figure for each tag assigned. The tags are cross-referenced against our own controlled vocabulary, which draws on ImageNet and WordNet. We then run the image through our own recognition services, as well as several other automatic tagging platforms, again referencing the results against our controlled vocabulary to arrive at a standard set of terms and confidence values.
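The merging step above — collecting (tag, confidence) pairs from several services and mapping them onto one controlled vocabulary — can be sketched as follows. This is a minimal illustration, not Gado's actual implementation or the Watson SDK; the vocabulary mapping, threshold, and function names here are all hypothetical.

```python
# Hypothetical sketch: normalize tags from multiple auto-tagging services
# against a controlled vocabulary, keeping the best confidence per term.

# Maps raw service tags to a standard vocabulary term (illustrative entries).
CONTROLLED_VOCAB = {
    "automobile": "car",
    "car": "car",
    "seashore": "beach",
    "beach": "beach",
}

def normalize_tags(service_outputs, min_confidence=0.5):
    """Merge lists of (raw_tag, confidence) pairs from several tagging
    services into one standard set of terms, keeping the highest
    confidence seen for each vocabulary term."""
    merged = {}
    for tags in service_outputs:
        for raw_tag, confidence in tags:
            term = CONTROLLED_VOCAB.get(raw_tag.lower())
            if term is None or confidence < min_confidence:
                continue  # drop out-of-vocabulary or low-confidence tags
            merged[term] = max(merged.get(term, 0.0), confidence)
    return merged

if __name__ == "__main__":
    watson_tags = [("automobile", 0.92), ("seashore", 0.61)]
    other_service_tags = [("car", 0.88), ("beach", 0.45)]
    print(normalize_tags([watson_tags, other_service_tags]))
    # {'car': 0.92, 'beach': 0.61}
```

In this sketch, "seashore" and "beach" collapse to the same standard term, and the low-confidence duplicate is discarded, which is the kind of consolidation that makes the final keyword set consistent and searchable.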

You can read the full blog post on IBM Developer.

Source: Artificial Intelligence on Medium
