Three weeks ago, we began an experiment to study the reliability of visitor tagging in relation to our "Champ Problem". At the time, I pulled numbers for the previous three weeks and said I'd be back in three weeks with new numbers to compare. Well, those three weeks have flown by, so it's now time to take a look at the results!
There were two questions I was looking to answer with this experiment. The first was whether the tags we present to visitors before they tell their story influence what they tell us about. We already know how many people talked about Champ overall during the control period (20), and we know how many people talked about Champ for the question we modified during the same period (9). We can therefore evaluate the influence by comparing the numbers we get from the test time period. The second question related to how accurately visitors tagged their videos. Could we automate a response based on how someone tagged their video?
Before we start looking at data, there are a few things we need to be aware of. First, ECHO's "Champ Week" just so happened to fall in the middle of my test period. With presentations and activities going on in the building about Champ, it would not be hard to imagine some inflation in the numbers. Second, visitation and kiosk activity during the two time periods was not identical: the total number of videos recorded decreased 25% in the test period compared to the control period. Clearly, the Champ tag discourages people from using the Kiosk! (Or maybe not...) Regardless, we should expect up to 25% fewer videos about Champ during this period simply because there were 25% fewer videos overall.
To start off, we'll look at survey responses. The "What would you like to talk about" poll is asked before visitors record a story. If a visitor abandons the recording process after that, or indicates they are a child without an adult present, their poll response still counts, but they don't show up in the actual video counts. During the three-week test period, 9 people selected Blue-green algae (1 recorded a video; it was not retained, because it consisted entirely of the top of someone's head as they repeatedly said "Balls"), 5 selected Invasive Fish (1 recorded a video; it was not retained, because it was a young child asking how many fish are in Lake Champlain, a question which has been asked and answered several times already), and 3 selected Phosphorus or Nitrogen (1 recorded a video; it was not retained, as it contained a child repeating "I like mud" over and over). The infamous "Something Else" got 26 selections (3 videos recorded, none kept), and finally the new "Champ" selection got 41 responses (8 videos recorded, none kept on the Kiosk).
The first interesting thing I noticed is that we did not keep a single video for this question in all three weeks! During the control window, we kept 10 of the 26 videos recorded. During the test period, 14 videos were recorded, and we kept 0. That 46% drop in recordings for this question is well above the overall 25% drop in videos recorded.
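A quick back-of-the-envelope check of that drop, using the numbers above (the variable names are mine):

```python
# Videos recorded on the modified question, per the counts in the post.
control_recorded = 26  # control period
test_recorded = 14     # test period

drop = (control_recorded - test_recorded) / control_recorded
print(f"Drop in recordings for this question: {drop:.0%}")  # 46%
```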
So, now that we've got THOSE numbers out of the way, let's get back to our two main questions. During our test period, we had 13 videos recorded about Champ, 6 of which were on the Researcher question. That is a 35% DECREASE from the previous time period (20 videos). Even after subtracting the 25% decrease in overall Kiosk usage, we're still getting 10% fewer Champ videos than before we added the tag. The decrease in people asking researchers about Champ (that's the question we modified) was 33%, very close to the overall Champ decrease and still above the 25% baseline. While far from conclusive, this initial research does not suggest that including a tag for a topic causes an increase in people talking about that topic.
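The same arithmetic, spelled out for the Champ comparisons (numbers from the post; subtracting the baseline percentage directly is the post's rough method, not a rigorous adjustment):

```python
control_champ, test_champ = 20, 13          # all Champ videos, control vs. test
control_researcher, test_researcher = 9, 6  # Champ videos on the modified question
overall_kiosk_drop = 0.25                   # overall drop in kiosk recordings

champ_drop = (control_champ - test_champ) / control_champ
researcher_drop = (control_researcher - test_researcher) / control_researcher

print(f"Champ videos overall: {champ_drop:.0%} decrease")       # 35%
print(f"Researcher question:  {researcher_drop:.0%} decrease")  # 33%
# Naively subtracting the kiosk-wide baseline, as the post does:
print(f"Beyond the overall drop: {champ_drop - overall_kiosk_drop:.0%}")  # 10%
```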
Our second question was on the accuracy of tagging. 100% of the Champ videos on the researcher question used the Champ tag. However, two videos using that tag did not make it into the YouTube playlist. Most likely, those videos didn't even mention Champ, and I cleaned them up without realizing it. So while visitor sorting is successful, the moderating step can't be automated away - and that comes as no surprise at all, as there doesn't seem to be any way to convince certain people that their jokes aren't funny enough to feature on our Kiosk.
So what is next? Nothing right away. I'm going to leave the Champ tag in place on the question, and continue to monitor videos which use it. At some point, I will probably modify the upload script to place videos with that tag in the appropriate playlist, but I want more than 3 weeks of testing before I take that step. For now, I'm off to review the latest videos!
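If I do get around to that upload-script change, it could be as simple as a tag-to-playlist lookup before the upload call. A minimal sketch, assuming each video carries a list of tag names; the playlist IDs here are placeholders I made up, not our real ones:

```python
# Hypothetical mapping from visitor tags to YouTube playlist IDs.
PLAYLISTS = {
    "Champ": "PL_champ_placeholder",  # placeholder ID, not a real playlist
}
DEFAULT_PLAYLIST = "PL_default_placeholder"  # also a placeholder

def playlist_for(tags):
    """Return the playlist ID for the first tag with a dedicated playlist."""
    for tag in tags:
        if tag in PLAYLISTS:
            return PLAYLISTS[tag]
    return DEFAULT_PLAYLIST

print(playlist_for(["Champ"]))           # routed to the Champ playlist
print(playlist_for(["Something Else"]))  # falls through to the default
```

The upload script would then pass the returned ID to whatever playlist-insert call it already uses.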
~Travis Cook, Information Technology Coordinator