GPT-2: Too Dangerous For the Public


If you have been following our favorite extravagant billionaire Elon Musk’s latest adventures, reading OpenAI’s latest research, or even browsing tech news articles, you have likely seen something about GPT-2. GPT-2 is the latest in text-generation AI technology. The research project was completed by OpenAI, which, of course, Elon Musk helped fund. Text generation has been around for years now, but the sheer power behind the GPT-2 model is astounding. Not only can it produce realistic text in seconds, it does so convincingly enough that it can be nearly impossible to tell the text was machine-generated.

GPT-2 stands for Generative Pretrained Transformer 2, since it is the second of its kind. Generative: the model was trained, completely unsupervised, to predict the next token in a sequence of tokens; that is, given a sequence, it predicts what should come next. Pretrained: OpenAI trained a general-purpose language model that can then be adapted to downstream tasks; imagine ImageNet-style transfer learning, only this time for text generation. Transformer: GPT-2 does not use traditional sequence models like LSTMs or hidden Markov models; instead, it is built on the Transformer’s attention-based architecture. So you might be wondering, what did they actually train this on? Simply, the internet. They trained GPT-2 on over 40 GB of raw text scraped from links shared on Reddit, scaling up the approach of its older brother, GPT. Combine that with plenty of time and computing power, and this is how GPT-2 was trained.
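To make the “predict the next token” idea concrete, here is a minimal sketch of the generation loop: score every vocabulary token, pick one, append it, and repeat. It assumes the Hugging Face transformers library and the publicly released small GPT-2 weights; this wrapper is not part of OpenAI’s original release, just a convenient way to poke at the same idea.

```python
# Minimal sketch of autoregressive generation with the released small GPT-2.
# Assumes the Hugging Face "transformers" and "torch" packages are installed.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Encode a short prompt into token ids, shape (1, sequence_length).
tokens = tokenizer.encode("The scientist named the population", return_tensors="pt")

for _ in range(20):
    with torch.no_grad():
        logits = model(tokens).logits      # a score for every vocabulary token
    next_id = logits[0, -1].argmax()       # greedy: take the single most likely token
    tokens = torch.cat([tokens, next_id.view(1, 1)], dim=1)  # append and repeat

print(tokenizer.decode(tokens[0]))
```

Greedy decoding like this is the simplest choice; the samples OpenAI published were produced with sampling rather than always taking the top token.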

All this is great, but why is it so dangerous? Take this sample from GPT-2:

System Prompt (human-written)

In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

Model Completion (machine-written, 10 tries)

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.

Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.

Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them — they were so close they could touch their horns.

While examining these bizarre creatures the scientists discovered that the creatures also spoke some fairly regular English. Pérez stated, “We can see, for example, that they have a common ‘language,’ something like a dialect or dialectic.”

Dr. Pérez believes that the unicorns may have originated in Argentina, where the animals were believed to be descendants of a lost race of people who lived there before the arrival of humans in those parts of South America.

While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization. According to Pérez, “In South America, such incidents seem to be quite common.”

However, Pérez also pointed out that it is likely that the only way of knowing for sure if unicorns are indeed the descendants of a lost alien race is through DNA. “But they seem to be able to communicate in English quite well, which I believe is a sign of evolution, or at least a change in social organization,” said the scientist.

It might be hard to imagine an AI system generating text this clean, coherent, and confidently speculative, but GPT-2 did indeed generate the text above. What is most astounding is that it was given only a small prompt of text. Does this seem more dangerous now? If not, imagine giving it a prompt taken from someone’s speech: it would be very simple to defame someone or fabricate a completely false story. How about sending out fake emails or tweets? Frankly, the OpenAI team can’t predict how or when people might misuse the technology. It’s possible to fine-tune the model as well, so imagine what sorts of models people could create by training it on text carrying their own biases. Luckily, OpenAI has been nice enough to release a miniature version for public use. The model has a full research release on GitHub with notes, so you can check it out yourself.
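If you want to try the released small model yourself, here is a hedged sketch of prompting it with the unicorn setup from above. OpenAI’s official GitHub release is TensorFlow code; this example instead goes through the Hugging Face transformers pipeline, which wraps the same published weights, and the top_k=40 sampling setting mirrors what OpenAI described using for its samples.

```python
# Sample a few continuations of the unicorn prompt from the released small GPT-2.
# Assumes the Hugging Face "transformers" package (not OpenAI's original TF release).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "In a shocking finding, scientist discovered a herd of unicorns"
samples = generator(prompt, max_length=100, num_return_sequences=3,
                    do_sample=True, top_k=40)

for sample in samples:
    print(sample["generated_text"])
    print("---")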

The future of text generation lies with OpenAI’s GPT-2. Users will generate text with the model, and countless more models will be trained from GPT-2. With text generation coming to the masses, the problem partially seems solved. Of course, that means it is nowhere near solved, and we will keep finding new and exciting applications for the technology. So here’s to Elon Musk’s wild behavior, and here’s to OpenAI’s constant, meticulous research into the world of AI. Bring on the future, and bring out the full GPT-2 soon.

Source: Artificial Intelligence on Medium
