Blog: Facebook Advertisement Failure — True Story
This is a true story from a couple of years ago…
Unfortunately, our family dog Lucy passed away yesterday. It was a very sad day. My wife posted about our loss on another social media channel. I of course responded to her post with words of great sadness and loss for our four-legged child. We loved her with all our hearts and will miss her tremendously.
So later that evening I went back on that social media service just to check things out, and up came the following ad offering me dog treats and toys:
TO QUOTE A VERY WELL-KNOWN CARTOON DOG, SCOOBY-DOO…
Now I am sure BarkBox had nothing to do with this; they purchase ads and trust the channel to serve them to the right person at the right time.
As for Facebook, I am not sure exactly which recommendation engine or machine learning algorithm they use for ad placement, but if they had done a bit more than a dictionary search on the keyword ‘DOG’, they might have prevented this very poorly timed ad. A little sentiment analysis, some TF-IDF, or simply checking for other critical words like ‘PASSED’ or ‘DIED’ could have made the difference! So let’s discuss the failure and some potential alternatives.
- Go give a big old hug to your dog or pet! (spouse and children apply too)
- Personalization is very critical to ad effectiveness.
- It is all about the right content to the right person at the right time that matters.
- Dictionary based models are RISKY! Get some better Machine Learning programmers and tools!
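To make the failure concrete, here is a minimal, hypothetical sketch of the difference between a bare dictionary lookup and one that also checks for critical words like ‘died’ or ‘passed’. The word lists below are my own illustrative assumptions, not Facebook’s actual logic:

```python
# Minimal sketch: naive keyword targeting vs. a critical-word guard.
# Word lists here are illustrative assumptions, not any real ad system's logic.

TOPIC_WORDS = {"dog", "puppy", "pup"}
CRITICAL_WORDS = {"died", "passed", "loss", "sad"}  # suppress ads when present

def tokenize(post: str) -> set:
    """Lower-case and split, stripping simple punctuation."""
    return {w.strip(".,!?") for w in post.lower().split()}

def naive_should_serve_dog_ad(post: str) -> bool:
    """Dictionary-only match: serves the ad if a topic word appears anywhere."""
    return bool(tokenize(post) & TOPIC_WORDS)

def guarded_should_serve_dog_ad(post: str) -> bool:
    """Same match, but suppressed when critical words signal bad context."""
    words = tokenize(post)
    return bool(words & TOPIC_WORDS) and not (words & CRITICAL_WORDS)

post = "Our family dog Lucy died today, we are so sad."
print(naive_should_serve_dog_ad(post))    # True  -> the poorly timed ad
print(guarded_should_serve_dog_ad(post))  # False -> ad suppressed
```

Even this crude guard would have caught my wife’s post; a production system would of course need far more than a hand-built blocklist.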
So what are some alternatives to this very lazy approach by Facebook? My wife’s original post included the following text and a picture of our pup Lucy: “Our family dog Lucy died today, we are so sad.” Obviously they found ‘DOG’ and just went with it. What could they have done better?
This is a big one! Concordance provides the context and instances of a word or set of words. The NLTK package implements concordance easily in Python. Obviously, the words ‘DIED’ and ‘SAD’ didn’t make it into their algorithm, and the context was lost. Here are the steps you would use to implement this methodology:
STEP 1: Read a text file as a string of raw text
STEP 2: Lower case all words from STEP 1
STEP 3: Tokenize the text from STEP 2 into a list of words where each entry is a word
STEP 4: Remove punctuation and other characters like @#$%^_&* from STEP 3.
STEP 5: Use the NLTK “frequency distribution” to find the frequency of each bigram from STEP 4.
STEP 6: Call NLTK’s concordance() function on a target word and inspect the result.
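The six steps above can be sketched in plain Python. NLTK’s Text.concordance() wraps the same idea; this dependency-free stand-in uses a simplified tokenizer and context window of my own choosing:

```python
import string
from collections import Counter

# STEP 1: raw text (stands in for reading a text file as a string)
raw = "Our family dog Lucy died today, we are so sad."

# STEP 2: lower-case all words
text = raw.lower()

# STEP 3 + 4: tokenize into a list of words and strip punctuation/specials
tokens = [w.strip(string.punctuation) for w in text.split()]
tokens = [w for w in tokens if w]

# STEP 5: frequency distribution of bigrams (NLTK would use FreqDist)
bigrams = Counter(zip(tokens, tokens[1:]))

# STEP 6: a tiny concordance - show each occurrence of a word in context
def concordance(tokens, word, window=3):
    lines = []
    for i, t in enumerate(tokens):
        if t == word:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left} [{t}] {right}")
    return lines

print(concordance(tokens, "dog"))  # ['our family [dog] lucy died today']
print(bigrams[("dog", "lucy")])    # 1
```

Notice that the concordance line for ‘dog’ immediately surfaces ‘died’ in its context window; that is exactly the signal a smarter ad filter could have used.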
Sentiment Analysis is also known as ‘opinion mining.’ It is a process using text analysis, computational linguistics, and biometrics. Sentiment Analysis attempts to identify and quantify affective states in highly subjective streams of text.
Again, there is a very nice implementation of this in the NLTK toolkit. There is also a very easy-to-use web demo located here: https://text-processing.com/demo/sentiment/ . So I took my text: “Our family dog Lucy died today, we are so sad.”
The SA results were pretty obvious: the text is negative. There was no doubt that the ad should never have been served, as the algorithm scored the polarity as roughly 70% negative. See the image below for this implementation.
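The NLTK toolkit and the web demo above do the real work; to show just the mechanics, here is a toy lexicon-based scorer in the spirit of that approach. The word weights are invented purely for illustration:

```python
# Toy lexicon-based sentiment scorer - a sketch of the idea only.
# A real system would use a full lexicon such as NLTK's VADER.
LEXICON = {"died": -3.0, "sad": -2.0, "loss": -2.0, "love": 2.0, "sweet": 2.0}

def polarity(text: str) -> str:
    """Sum invented word weights and map the total to a polarity label."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum(LEXICON.get(w, 0.0) for w in words)
    if score < 0:
        return "negative"
    if score > 0:
        return "positive"
    return "neutral"

print(polarity("Our family dog Lucy died today, we are so sad."))  # negative
```

Even this five-word lexicon flags my wife’s post as negative, which is all the signal needed to suppress a treats-and-toys ad.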
This is not rocket science. This is just plain lazy work. Facebook, with all of your assets and power, you can do much better, and with pretty simple technology. As for my dog, she was the sweetest thing ever, and I am not upset with Facebook or BarkBox. In fact, when I get another dog, I will be sure to take a look at them again!
Today I cry for my dog as we celebrate her life and the gifts she gave our family!