Monday, March 31, 2014

The Problem With Rankings

There has been a lot of buzz this week over ranking systems. Wherever you go, whatever you do, someone will find a way to rank your skills and your accomplishments. In a country filled with the Billboard Hot 100, iTunes' top-selling singles, and even Letterman's humorous Top Ten lists, our culture is used to ranking one thing above another and assigning intrinsic value to each rank.
Before we start thinking that rankings are rank, let's take a moment to examine the purpose and practical application of rankings.
1) Rankings are only as important as we determine them to be.
A rank is kind of like a collectible item: it's only as important as we make it. The first comic appearance of Superman goes for millions of dollars because Superman is the ultimate American superhero icon. Even people who aren't into comic books know who Superman is. The first comic appearance of Electro the Robot is probably worth nothing, since no one knows who that is.
The same goes for ranking websites. The AFI (American Film Institute) ranked "Citizen Kane" as the number one film of all time. I disagree, so does that make them right? Well, no, not objectively, because all rankings are subjective, meaning they are based on opinion rather than factual data. It just so happens that the AFI is a well-respected organization with film scholars and critics on its panel, so the fact that it named "Citizen Kane" the number one film of all time probably means it had significant cause to do so. But I don't agree, so right there you cannot state, as a fact, that the AFI's ranking is universally accepted.
2) What's the basis?
First, we need to understand the difference between objective and subjective. Objective means free of any bias: a completely objective scoring system is based on measurable data and proven fact. Subjective is the exact opposite: it is based on opinion.
The ICCA, Harmony Sweepstakes, and all other a cappella competitions, try as they might, are subjective. The ICCA does the best job of assigning a point value to each individual aspect of a group's performance, but the score is still subjective, based on the opinion of the judge. It is nearly (but not totally) impossible to make any kind of singing competition objective. The Barbershop Harmony Society comes closest, with each judge studying the arrangement as the group sings and marking off points for blend, balance, or whatever else they deem fit.
For a judging system to be completely objective, you would need to score a group on a 100-point scale and take a point off for every wrong note, every blend issue, every tempo change performed incorrectly, and so on. That is really hard to do, and too complicated to be accurate. So when you get judged in a competition, keep this in the back of your mind: your score, your placement, and the overall impression you made are based solely on the opinions of the judges and nothing else, so it is very hard to take any of it at face value.
And who determines what gets scored and how? Human beings, who are inherently flawed. So if you put too much stake in a scoring system, you are saying that you care far too much what five essentially random people think about you, and that's just silly.
3) Rankings often contradict each other.
You should never take one ranking system at face value. It is one ranking, written by one group of people. To decide whether a ranking is credible, you need to look at several systems and compare the results.
For example, almost every group that ranks the best movies of all time puts "Citizen Kane" at number one, so it's safe to assume that the majority of film scholars agree and that the consensus merits some weight. But television rankings are all over the place. One group says "Breaking Bad" is the greatest television show of all time, one says "The Wire," one says "The Simpsons," and so on. Who is right? No one. Who should you trust? Whomever you want.
When it comes to competition feedback, here is the method I use to determine which judge comments are worth examining and which are total bunk. Let's say you just competed in the ICCA and you have five feedback sheets from judges. Use these three rules to weigh the feedback on your actual performance (a rough sketch of the rules follows the list):
1-If there is a comment that appears on all five judging sheets, take it to heart. Judges don’t compare notes, so if all five judges noticed the same thing, then it is either really good or really bad.
2-If there is a comment on one sheet that is completely contradicted on another sheet, cross both of those comments out and ignore them. If one judge liked your dance move and another judge hated it, how exactly are you supposed to react to that? You cross them both out and move on.
3-If one judge noticed a very specific thing and none of the other judges mentioned it, then you have to look at who that judge is and what kind of background they have. If the judge who once choreographed a Broadway show thinks your dance is too simple, you can probably take that opinion to the bank. If someone like me thinks the dance is too simple, you can probably shrug it off.
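For the spreadsheet-minded, here is a minimal sketch of those three rules in Python. It assumes each judge's sheet has been boiled down to a set of short comment tags, and that you decide for yourself which comments contradict each other and which judges count as subject-matter experts; the judge names, comments, and function name are all hypothetical, just for illustration.

```python
# A rough sketch of the three rules above (hypothetical names and data).
# Each judge's sheet is reduced to a set of short comment tags.

def triage_feedback(judge_sheets, contradictions=(), expert_judges=()):
    """judge_sheets: dict of judge name -> set of comment strings.
    contradictions: pairs of comments that directly contradict each other.
    expert_judges: judges whose lone, specialized comments still carry weight.
    Returns (take_to_heart, ignore, weigh_by_judge)."""
    all_comments = set().union(*judge_sheets.values())

    # Rule 1: a comment on every sheet is either really good or really bad.
    take_to_heart = {c for c in all_comments
                     if all(c in sheet for sheet in judge_sheets.values())}

    # Rule 2: comments that directly contradict each other cancel out.
    ignore = {c for pair in contradictions for c in pair}

    # Rule 3: a comment only one judge made depends on who that judge is.
    weigh_by_judge = {}
    for comment in all_comments - take_to_heart - ignore:
        judges = [name for name, sheet in judge_sheets.items() if comment in sheet]
        if len(judges) == 1:
            verdict = "take it to the bank" if judges[0] in expert_judges else "shrug it off"
            weigh_by_judge[comment] = (judges[0], verdict)

    return take_to_heart, ignore, weigh_by_judge


# Example with five made-up feedback sheets.
sheets = {
    "Judge A": {"great blend", "choreo too simple"},
    "Judge B": {"great blend", "loved the choreo"},
    "Judge C": {"great blend", "soloist rushed the bridge"},
    "Judge D": {"great blend"},
    "Judge E": {"great blend"},
}
print(triage_feedback(
    sheets,
    contradictions=[("choreo too simple", "loved the choreo")],
    expert_judges={"Judge C"},
))
```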
4) What do you get if you are number 1?
Here’s a fun fact: If you are at the top of any ranking list, if you win the ICCA, if you win Harmony Sweepstakes, guess what prize you get? NOTHING! You get nothing. No record deal. No prize money. No special plaque. You get nothing but the satisfaction of being first. If you get nothing, then why do you care so much that you are on top?
Rankings are subjective. No matter what the critics tell you, no matter what kind of formula they use, rankings are always subjective unless there is hard data to back them up. So please, take them for what they are and don't let a ranking ruin your day.
Marc Silverberg
Follow the Quest For The A cappella Major:
Twitter.com/docacappella
Acappellaquest.blogspot.com
Docacappella.tumblr.com
http://www.casa.org/content/quest-cappella-major-problem-rankings
