2 Things I think will help the circuit's judging

arjunK

New Member
Messages
29
I've been around BTF for a while, but I haven't posted yet. However, there are two things that I think could positively impact the methods used to judge any competition and would eliminate a lot of the problems/discontent with judging as of late. Here goes:
1. Score Normalization -
Normalizing scores in judging would not only eliminate the need to drop the highest and lowest scores (e.g. at Elite 8), but also give teams a better idea of where they stood with respect to the competition. It would go as follows: suppose Team A, Team B, and Team C are the only 3 teams competing, and the judges are Person 1, Person 2, and Person 3. Let us also assume that the final scoresheet with total scores (out of 100) looks as follows:



Now, instead of leaving the scores "raw" like this, we simply normalize them to a scale of 10 (or whichever number you choose). For each judge, we set the highest score they give as a 10 and the lowest score they give as a 0, and we can easily calculate all of the in-between scores using simple math. For example, if a judge's three scores are 95, 75, and 55: 95 - 55 = 40 and 75 - 55 = 20, so 20/40 = x/10, making x = 5. If you do this for all three score sheets, the normalized score sheet would look as follows:



What problem does this solve? It eliminates the problem of judges judging on DIFFERENT scales. Some judges will give consistently high marks, and their scale might be between 80 and 90, while another judge may give scores between 40 and 75. I believe this can be used in place of dropping the highest and lowest scores at Elite 8 and any other competition that does so (because if one judge's scale is lower than the others', his or her marks will always be the ones dropped; the same goes for a judge whose scale is higher). Last but not least, it gives each team a good idea of WHERE THEY STOOD in comparison to the other teams, because, after all, scoring is all relative. If you win with an average score of 50/100, you still win. If you lose with an average score of 90/100, you still lose. I realize this could have been explained pretty easily using some statistics terms (z-scores, etc.), but I want everyone to be able to follow this.
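To make the arithmetic above concrete, here is a minimal sketch of that per-judge normalization. The assignment of teams to scores is just the hypothetical 95/75/55 example from above, not real data:

```python
# Minimal sketch of per-judge score normalization: each judge's highest
# score maps to 10, their lowest to 0, and everything else scales linearly.

def normalize(raw_scores, top=10):
    """Rescale one judge's scores so that their max -> top and min -> 0."""
    lo, hi = min(raw_scores.values()), max(raw_scores.values())
    if hi == lo:  # judge gave every team the identical score
        return {team: float(top) for team in raw_scores}
    return {team: top * (score - lo) / (hi - lo)
            for team, score in raw_scores.items()}

# Hypothetical sheet for Person 1, matching the 95/75/55 worked example.
person1 = {"Team A": 95, "Team B": 75, "Team C": 55}
print(normalize(person1))  # {'Team A': 10.0, 'Team B': 5.0, 'Team C': 0.0}
```

Applying this to every judge's sheet and then summing (or averaging) the normalized columns gives the final relative ranking.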


2. There should be a place on every rubric for ORIGINALITY -
Think back to 2008-2009: there didn't seem to be much of a problem with teams recycling other teams' choreography ideas. I believe this is mostly because choreography was much more traditional and simple in the past - you can't say YO THIS TEAM DID DOUBLE CHAAFA WE DID THAT FIRST R U KIDDING ME? But now, as the circuit has evolved, we see much more thought and tweaking going into the choreography of certain teams. And yes, I do agree that, to some extent, taking someone else's idea as inspiration and making it your own or doing it better has its pros. I try my best not to do so when choreographing, but still have the same level of respect for those who do. HOWEVER, especially as of late, I have been seeing so many ideas used in their ORIGINAL form: no tweaks, no "making it your own", nothing. And these are not ideas that people can argue they haven't seen - some have even gone so far as to use ideas from 1st place performances at HUGE competitions (Elite 8, Idols, etc.) completely unmodified. I personally have no problem with this; it is up to you whether you choose to put an idea into your set. I do, however, feel that a team who uses an original idea should be given more credit than a team who recycles one. It's extremely difficult to come up with a new way to tweak a move, or a new effect for a certain step that still remains within the realm of bhangra, and efforts placed toward this should be rewarded.

It is because of the reasons listed above that I think there should never be a rubric without some place for originality. Yes, most have points for creativity, but how often have teams been called out by judges for being unoriginal? I've never even heard of it. The only big CHANGE that implementing this would require is making sure that those who are selected to judge a competition have seen several videos and are aware of the trends going on today, especially the very mainstream performances. I don't think it's difficult to find judges who are "bhangra-heads" like the rest of us, and who can pick up on these things. In fact, I think most of the judges my team has been judged by are more than capable of spotting these things, but the lack of an explicit mention of originality in every rubric I have seen impedes their ability to do so.

That's my 2 cents on judging these days. I'm not trying to call out any team or judge in particular, and this is not a result of me being discontented with any specific placings. I think most of the judges selected to judge competitions are more than qualified. These are just some things I've been noticing as of late, and I would like to hear what others think!
 

J Wong

Member
Messages
301
A major problem judges have in calculating scores is time. The window between the end of the last team and the end of the performing act isn't much time at all. In that window they need to tally, average, and deliberate, which does not leave a lot of room for anything extra. For example, at AKD the bhangra judges did not have enough time to calculate true scores and were forced to turn in their raw scores, while the other judges had already added points to their rubrics; as a result, the placings were skewed.
 

arjunK

New Member
Messages
29
J Wong said:
A major problem judges have in calculating scores is time. The window between the end of the last team and the end of the performing act isn't much time at all. In that window they need to tally, average, and deliberate, which does not leave a lot of room for anything extra. For example, at AKD the bhangra judges did not have enough time to calculate true scores and were forced to turn in their raw scores, while the other judges had already added points to their rubrics; as a result, the placings were skewed.


I had something to solve this too, which I forgot to include! After each performance, the judging sheets should be collected and entered into an Excel sheet with all of this pre-loaded (it would not be hard at ALL to make) - therefore solving the issue of time, eh?
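A rough sketch of what that pre-loaded sheet could do, assuming the per-judge normalization from the first post (the sheets entered below are made up for illustration): each sheet is normalized the moment it's collected and folded into a running total, so nothing is left to compute after the last performance.

```python
# Sketch of the "enter sheets as they come in" idea: each judge's sheet is
# normalized on entry and added to a running tally, so the ranking is ready
# as soon as the final sheet is typed in.

from collections import defaultdict

totals = defaultdict(float)

def enter_sheet(raw_scores, top=10):
    """Normalize one judge's sheet (max -> top, min -> 0) and tally it."""
    lo, hi = min(raw_scores.values()), max(raw_scores.values())
    for team, score in raw_scores.items():
        totals[team] += top if hi == lo else top * (score - lo) / (hi - lo)

# Made-up sheets, entered as they are collected during the show:
enter_sheet({"Team A": 95, "Team B": 75, "Team C": 55})  # Person 1
enter_sheet({"Team A": 82, "Team B": 88, "Team C": 80})  # Person 2
enter_sheet({"Team A": 60, "Team B": 71, "Team C": 65})  # Person 3

ranking = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)  # highest normalized total first
```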
 

KushK

Well-Known Member
Messages
1,161
arjunK said:
J Wong said:
A major problem judges have in calculating scores is time. The window between the end of the last team and the end of the performing act isn't much time at all. In that window they need to tally, average, and deliberate, which does not leave a lot of room for anything extra. For example, at AKD the bhangra judges did not have enough time to calculate true scores and were forced to turn in their raw scores, while the other judges had already added points to their rubrics; as a result, the placings were skewed.


I had something to solve this too, which I forgot to include! After each performance, the judging sheets should be collected and entered into an Excel sheet with all of this pre-loaded (it would not be hard at ALL to make) - therefore solving the issue of time, eh?
arjunk on a roll ;)
 

voxanimus

<('.'<) (>'.')>
Messages
1,685
I completely agree with this method, and have been thinking of posting something along these lines for a while now.


linear normalization of judging scores allows judge "range" to play a much lesser role, and also allows there to be a true emphasis on relativity in scoring. teams, at any particular competition, are simply competing against the other teams present. they are not competing against the ideas and conceptions of bhangra a judge may hold; they aren't attempting to live up to a nebulous, varying standard that none of them know about. for example, if a judge decides to award every team 5 or fewer points in a 10-point vardiyan category because he or she, consciously or unconsciously, decides that no vardi can ever match the sheer perfection of the one he or she designed at home in photoshop, that fact (or, to be precise, prejudice) shouldn't put a team out of the running. if every team at the competition received 5 or fewer points out of 10 for vardiyan, then any and all 5s should be scaled to 10s, and the other scores should also be proportionately scaled up.


swi and "sue" brought up an argument in the elite 8 2010 thread about how linear normalization of scores allows what a judge intended to be a minor penalty to be magnified into a detraction that costs a team a trophy. that is, it doesn't preserve standard deviation/variance, and therefore causes differences to be blown out of proportion. to use an extreme example, let's say there was a 50-point chadar-tying category, and every team but one got 50 points, with that one receiving 49. linear normalization would cause the one team to get a 0 for chadar-tying, while every other team would keep their 50; an undoubtedly severe punishment.


i think that specific problem can be remedied by normalizing with specificity; that is, with category- and judge-specific normalizations. for example, if john, tim, and bob are judging elite 8 2013, and the competition has 4 categories (energy, nakhra, choreo, and creativity), there would need to be 12 independent normalizations. that is, bob's energy scores would be normalized independently of tim's energy scores, etc. doing this prevents overly large totals from being normalized. any remaining disparity is validly justified by the paramount importance of relativity: at a competition, you're competing against the other teams. bottom line. if your whatever (turlas, let's say) are worse than every other team's at the competition, then, as far as turlas go, IN RELATION TO THE OTHER TEAMS IN THE COMPETITION, your turlas have no merit.
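A sketch of what that category- and judge-specific version could look like, trimmed to 2 judges and 2 categories to keep it short (the full elite 8 2013 example would just be more entries in the dict; all of the scores below are invented): each (judge, category) column is min-max scaled on its own, and a team's total is the sum of its scaled scores.

```python
# Category- and judge-specific normalization: every (judge, category) column
# is rescaled independently (max -> 10, min -> 0), then summed per team.
# Note: within a single column, the 49-vs-50 chadar example above still maps
# the 49 to a 0; the post accepts that as the price of pure relativity.

def normalize_column(column, top=10):
    lo, hi = min(column.values()), max(column.values())
    if hi == lo:  # every team got the identical score -> all get top marks
        return {team: float(top) for team in column}
    return {team: top * (score - lo) / (hi - lo) for team, score in column.items()}

# sheets[judge][category] = {team: raw score}  (all numbers invented)
sheets = {
    "bob": {"energy":   {"A": 8, "B": 6, "C": 7},
            "vardiyan": {"A": 5, "B": 4, "C": 3}},  # stingy scorer: the best vardi still maps to 10
    "tim": {"energy":   {"A": 9, "B": 9, "C": 8},
            "vardiyan": {"A": 7, "B": 6, "C": 7}},
}

totals = {}
for judge, categories in sheets.items():
    for category, column in categories.items():
        for team, scaled in normalize_column(column).items():
            totals[team] = totals.get(team, 0.0) + scaled

print(sorted(totals.items(), key=lambda kv: kv[1], reverse=True))
```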
 

komil

New Member
Messages
6
arjunK said:
I've been around BTF for a while, but I haven't posted yet. However, there are two things that I think could positively impact the methods used to judge any competition and would eliminate a lot of the problems/discontent with judging as of late. Here goes:
1. Score Normalization -
Normalizing scores in judging would not only eliminate the need to drop the highest and lowest scores (e.g. at Elite 8), but also give teams a better idea of where they stood with respect to the competition. It would go as follows: suppose Team A, Team B, and Team C are the only 3 teams competing, and the judges are Person 1, Person 2, and Person 3. Let us also assume that the final scoresheet with total scores (out of 100) looks as follows:



Now, instead of leaving the scores "raw" like this, we simply normalize them to a scale of 10 (or whichever number you choose). For each judge, we set the highest score they give as a 10 and the lowest score they give as a 0, and we can easily calculate all of the in-between scores using simple math. For example, if a judge's three scores are 95, 75, and 55: 95 - 55 = 40 and 75 - 55 = 20, so 20/40 = x/10, making x = 5. If you do this for all three score sheets, the normalized score sheet would look as follows:



What problem does this solve? It eliminates the problem of judges judging on DIFFERENT scales. Some judges will give consistently high marks, and their scale might be between 80 and 90, while another judge may give scores between 40 and 75. I believe this can be used in place of dropping the highest and lowest scores at Elite 8 and any other competition that does so (because if one judge's scale is lower than the others', his or her marks will always be the ones dropped; the same goes for a judge whose scale is higher). Last but not least, it gives each team a good idea of WHERE THEY STOOD in comparison to the other teams, because, after all, scoring is all relative. If you win with an average score of 50/100, you still win. If you lose with an average score of 90/100, you still lose. I realize this could have been explained pretty easily using some statistics terms (z-scores, etc.), but I want everyone to be able to follow this.


2. There should be a place on every rubric for ORIGINALITY -
Think back to 2008-2009: there didn't seem to be much of a problem with teams recycling other teams' choreography ideas. I believe this is mostly because choreography was much more traditional and simple in the past - you can't say YO THIS TEAM DID DOUBLE CHAAFA WE DID THAT FIRST R U KIDDING ME? But now, as the circuit has evolved, we see much more thought and tweaking going into the choreography of certain teams. And yes, I do agree that, to some extent, taking someone else's idea as inspiration and making it your own or doing it better has its pros. I try my best not to do so when choreographing, but still have the same level of respect for those who do. HOWEVER, especially as of late, I have been seeing so many ideas used in their ORIGINAL form: no tweaks, no "making it your own", nothing. And these are not ideas that people can argue they haven't seen - some have even gone so far as to use ideas from 1st place performances at HUGE competitions (Elite 8, Idols, etc.) completely unmodified. I personally have no problem with this; it is up to you whether you choose to put an idea into your set. I do, however, feel that a team who uses an original idea should be given more credit than a team who recycles one. It's extremely difficult to come up with a new way to tweak a move, or a new effect for a certain step that still remains within the realm of bhangra, and efforts placed toward this should be rewarded.

It is because of the reasons listed above that I think there should never be a rubric without some place for originality. Yes, most have points for creativity, but how often have teams been called out by judges for being unoriginal? I've never even heard of it. The only big CHANGE that implementing this would require is making sure that those who are selected to judge a competition have seen several videos and are aware of the trends going on today, especially the very mainstream performances. I don't think it's difficult to find judges who are "bhangra-heads" like the rest of us, and who can pick up on these things. In fact, I think most of the judges my team has been judged by are more than capable of spotting these things, but the lack of an explicit mention of originality in every rubric I have seen impedes their ability to do so.

That's my 2 cents on judging these days. I'm not trying to call out any team or judge in particular, and this is not a result of me being discontented with any specific placings. I think most of the judges selected to judge competitions are more than qualified. These are just some things I've been noticing as of late, and I would like to hear what others think!
+1 love you june :)

I strongly agree with Arjun's point regarding originality; judges should really be able to spot choreo that is clearly jacked from another team. It's completely unfair to the original team, as they put so much time and effort into their choreo, which is all discredited in a matter of seconds when another team jacks it. So, for future reference, I really think this matter should be carefully recognized by judges at their respective competitions.
 

Ajay.H

New Member
Messages
142
Deciding if something in particular was jacked or not is nowhere close to an objective decision, and I think you'll run into a lot of trouble with that. Teams will always be able to say "we never saw that video/performance" and "we thought of it on our own." Some teams are probably telling the truth and maybe some aren't, I don't know, but I don't think we can make that call. Hopefully, if they blatantly copied something obvious, they'll lose respect over it and they won't have the pride in their set that they could have. It sucks, but I don't know what you can do about it besides hope that breaking the norms of the bhangra world is enough of a deterrent.

I think you can still judge on originality, as it kind of falls under creativity, but it seems hard to try to dock points for "stealing" specific things.
 

dheerja

Member
Messages
607
Yeah definitely agree that plagiarism is an issue these days, but that means you're also asking judges to watch every single bhangra performance on YouTube. That's a lot to expect, and it's actually something we should deter judges from doing, since they should walk into a competition without any preconceived notions about teams and their routines.
 

Saleem

Well-Known Member
Staff member
Messages
1,928
I agree w/ arjunK and dheerja... sue, swi, and I have discussed score normalization, and I've had to lay down the geek pain when some people came up with other ideas on how to help scoring. The Excel route was taken by Burgh in 2010, I remember, and it was good because they collected the sheets at the end (meaning we could compare teams' sheets and make sure we were consistent), then they threw them into Excel, then they gave us the top 4 teams (which were clustered) so we could deliberate.

I can't even fathom how a judge would take the team that is #1 numerically, with some separation from the other scores, and not place them. Normally it is just a discussion between 1st, 2nd, and 3rd, and there is general consensus amongst the judges.
 

Basim

♥ BTF ♥
Staff member
Messages
1,459
dheerja said:
you're also asking judges to watch every single bhangra performance on YouTube. That's a lot to expect...
I agree with this. It shouldn't be the judges' job to figure out if teams hijacked a move/segment from another team (or dance group).

I personally don't have a problem with judges watching videos on YouTube (not sure how you can control that aspect since technology is such a big part of our lives these days). The judges need to be able to differentiate what they might know about a team or have seen in the past from what that team presents on stage.

Ideally, the BEST way of tackling this issue would be for competition organizers to select judges based on what type of competition they are planning to host - by that I mean a hardcore live traditional competition, a collegiate competition primarily with music teams, etc. Then, WORK WITH THE JUDGES to formulate a rubric rather than "force-feed" them some set rubric and make them follow something they might not favor/agree with. Once the judges & competition organizers come to a conclusion on the rubric, they should release the names of the judges & the rubric to the public.

It's really up to the organizers whether they wish to have a judges' "deliberation" session or not. My take on it would be that the top 3 teams are selected based on the rubric, then discussion is used to finalize the top 3 placings based not only on the "points" the judges assigned to various categories, but also on the notes they wrote down about individual performances.

~ Basim :)

Edit: Saleem beat me by a few minutes while I typed out my entire response ;)
 

smehta313

Active Member
Messages
382
dheerja said:
Yeah definitely agree that plagiarism is an issue these days, but that means you're also asking judges to watch every single bhangra performance on YouTube. That's a lot to expect, and it's actually something we should deter judges from doing, since they should walk into a competition without any preconceived notions about teams and their routines.

No one is asking a judge to watch every single video on YouTube, but as bhangra fanatics it is fair, and almost expected, for you to be up to date and able to recognize when moves are being taken from last year's Elite 8 winners.


Judges are by no means expected to watch "every single bhangra performance on YouTube," but they, like most of us here, should be relatively up to date with what is happening.


There are sooooo many bhangra performances and competitions around North America at this point that no one has watched every routine, but the big comps (Boston, Burgh, Bruin, Elite, Jashan) and the big name teams (or teams doing well in the circuit for the time being) should be recognized.


What's being asked is simply to watch big comps and big teams, not every team being created left and right at every local vaisakhi competition. I don't think there is anyone that does that.
 

dheerja

Member
Messages
607
smehta313 said:
dheerja said:
Yeah definitely agree that plagiarism is an issue these days, but that means you're also asking judges to watch every single bhangra performance on YouTube. That's a lot to expect, and it's actually something we should deter judges from doing, since they should walk into a competition without any preconceived notions about teams and their routines.

No one is asking a judge to watch every single video on YouTube, but as bhangra fanatics it is fair, and almost expected, for you to be up to date and able to recognize when moves are being taken from last year's Elite 8 winners.


Judges are by no means expected to watch "every single bhangra performance on YouTube," but they, like most of us here, should be relatively up to date with what is happening.


There are sooooo many bhangra performances and competitions around North America at this point that no one has watched every routine, but the big comps (Boston, Burgh, Bruin, Elite, Jashan) and the big name teams (or teams doing well in the circuit for the time being) should be recognized.


What's being asked is simply to watch big comps and big teams, not every team being created left and right at every local vaisakhi competition. I don't think there is anyone that does that.

Understood, but if you're going to expect judges to recognize stolen moves, there needs to be some sort of standard set. They might stay up to date on the bhangra scene but not watch any competition videos from, say, April 2011, and miss things that you'd expect them to see. It's a tough line to set: there are plenty of really creative teams out there that don't compete as often or place as highly as the top-tier teams, but their creative integrity is just as important.

Personally I don't think judges should actively watch current videos. It's impossible to stay unbiased when you walk into a competition and have seen a team's set already. You lose the surprise factor, and you tend to lean towards teams you favored going into the competition.
 

Ajay.H

New Member
Messages
142
dheerja said:
smehta313 said:
dheerja said:
Yeah definitely agree that plagiarism is an issue these days, but that means you're also asking judges to watch every single bhangra performance on YouTube. That's a lot to expect, and it's actually something we should deter judges from doing, since they should walk into a competition without any preconceived notions about teams and their routines.

No one is asking a judge to watch every single video on YouTube, but as bhangra fanatics it is fair, and almost expected, for you to be up to date and able to recognize when moves are being taken from last year's Elite 8 winners.


Judges are by no means expected to watch "every single bhangra performance on YouTube," but they, like most of us here, should be relatively up to date with what is happening.


There are sooooo many bhangra performances and competitions around North America at this point that no one has watched every routine, but the big comps (Boston, Burgh, Bruin, Elite, Jashan) and the big name teams (or teams doing well in the circuit for the time being) should be recognized.


What's being asked is simply to watch big comps and big teams, not every team being created left and right at every local vaisakhi competition. I don't think there is anyone that does that.

Understood, but if you're going to expect judges to recognize stolen moves, there needs to be some sort of standard set. They might stay up to date on the bhangra scene but not watch any competition videos from, say, April 2011, and miss things that you'd expect them to see. It's a tough line to set: there are plenty of really creative teams out there that don't compete as often or place as highly as the top-tier teams, but their creative integrity is just as important.

Personally I don't think judges should actively watch current videos. It's impossible to stay unbiased when you walk into a competition and have seen a team's set already. You lose the surprise factor, and you tend to lean towards teams you favored going into the competition.
Agreed, but also consider the teams' side of things. What happens when I, as a team captain, haven't seen a certain SGPD routine and I choreo a phumnian move similar to one they used? There is absolutely no objective standard you could set for calling a team out on something being "stolen" unless you make it a rule that every team captain watch and know every subjectively "big" performance.

If your team does a move that is known to be another team's, you'll lose respect over it. If you want to avoid that, watch the big videos, keep up with the big performances, whatever. But if you don't watch the videos, that doesn't mean you should lose points from a judging point of view. Maybe that lets a few teams get away with it, but the other option can screw over honest teams, too.
 

arjunK

New Member
Messages
29
Ajay.H said:
dheerja said:
smehta313 said:
dheerja said:
Yeah definitely agree that plagiarism is an issue these days, but that means you're also asking judges to watch every single bhangra performance on YouTube. That's a lot to expect, and it's actually something we should deter judges from doing, since they should walk into a competition without any preconceived notions about teams and their routines.

No one is asking a judge to watch every single video on YouTube, but as bhangra fanatics it is fair, and almost expected, for you to be up to date and able to recognize when moves are being taken from last year's Elite 8 winners.


Judges are by no means expected to watch "every single bhangra performance on YouTube," but they, like most of us here, should be relatively up to date with what is happening.


There are sooooo many bhangra performances and competitions around North America at this point that no one has watched every routine, but the big comps (Boston, Burgh, Bruin, Elite, Jashan) and the big name teams (or teams doing well in the circuit for the time being) should be recognized.


What's being asked is simply to watch big comps and big teams, not every team being created left and right at every local vaisakhi competition. I don't think there is anyone that does that.

Understood, but if you're going to expect judges to recognize stolen moves, there needs to be some sort of standard set. They might stay up to date on the bhangra scene but not watch any competition videos from, say, April 2011, and miss things that you'd expect them to see. It's a tough line to set: there are plenty of really creative teams out there that don't compete as often or place as highly as the top-tier teams, but their creative integrity is just as important.

Personally I don't think judges should actively watch current videos. It's impossible to stay unbiased when you walk into a competition and have seen a team's set already. You lose the surprise factor, and you tend to lean towards teams you favored going into the competition.
Agreed, but also consider the teams' side of things. What happens when I, as a team captain, haven't seen a certain SGPD routine and I choreo a phumnian move similar to one they used? There is absolutely no objective standard you could set for calling a team out on something being "stolen" unless you make it a rule that every team captain watch and know every subjectively "big" performance.

If your team does a move that is known to be another team's, you'll lose respect over it. If you want to avoid that, watch the big videos, keep up with the big performances, whatever. But if you don't watch the videos, that doesn't mean you should lose points from a judging point of view. Maybe that lets a few teams get away with it, but the other option can screw over honest teams, too.
I see what you mean, BUT there are a few things I'd like to say. SIMILAR is one thing; I'm talking about when a team does the EXACT same thing (and the probability of independently choreographing the EXACT same thing is much lower). I don't think it's ridiculous to assume that almost everyone on the circuit has seen the HUGE videos, for example the winners of Elite 8 and so on. I do see your point though - coincidences can happen.
 

priyab

New Member
Messages
38
Wow, this is great. I'm sure this isn't a brand new idea, but you presented it very well and were considerate of your audience. I don't understand statistics at all, so I appreciate that. :)

Also, I think it's interesting to see that people can see both the pros and cons of judges having seen a lot of videos prior to watching a performance. I agree, it's way too subjective to screen judges on this criterion for the sake of originality. I think it's a bigger issue when they've seen a lot of your videos and you haven't changed your set dramatically from competition to competition. Then again, that's something for teams to take upon themselves.
 