Queensland Junior Rating List – How it works
by David McKinnon email@example.com
FAQs for players & parents & FAQs for event organisers
The QJRL is a rating list for junior chess players in Queensland and Northern NSW. It is different from the Australian Chess Federation (ACF) ratings, which form a national adult and junior list published three times a year. I produce the QJ ratings six times a year; they generally come out on or near the 1st of January, March, May, July, September and November.
The QJ rating system (QJRS) began in June 1993 with the rating of the 1993 Queensland Junior Championships. The first official list was in October 1993 and included a little over 100 players. It was started as a joint CAQ/Qld Junior Chess League project. Its aims were to more accurately reflect the playing strength of juniors while also including many more players from school/club chess who were not ACF rated.
Presently there are over 2200 players on the list. About 100 players are removed from each published list due to inactivity (no rated games for 18 months). Each year 50,000-60,000 games are rated. The list appears on both the CAQ (http://www.caq.org.au/) and Gardiner Chess (http://www.gardinerchess.com/) websites.
1. How QJ rating changes are calculated: for players with current ratings
It is quite easy to understand how players with established ratings gain and lose points. It depends on three things…
1. Your score in that tournament
2. Your rating
3. The average of your opponents’ ratings (“average opposition”)
· If you get a 50% score (for example 3/6, 4/8…) against people who are rated the same as you (on average), you would neither gain nor lose points
· If you score 50% against people rated higher than you, then you have played better than your rating predicts and you would gain points
· On the other hand, if you score 50% against people rated lower than you, you would lose points
· Consequently, if you score 75% against people who are rated the same as you, you would gain points
Obviously there are many variations on this theme, all of which are governed by the Percentage Expected Table (Table 1).
The amount your rating changes is determined by …
i. The score you achieved : the achieved score (AS)
ii. The score you were expected to get : the expected score (ES)
iii. The “K-factor”
So that …
Rating change = (AS – ES) x K
The K-factor is simply a number that determines the size of the change.
The QJRL uses different K-factors for different tournaments…
· Lightning (5 mins per side) is 5;
· Open events are 20, and
· Events with shorter time controls 10 (most club/school games) or 15.
Here is an example of using the above formula to calculate your rating change from a tournament.
Let’s say you scored 8 points in a 10 round school tournament in which your expected score was 6.5. You therefore scored more points than expected and your rating will go up.
Rating change = (AS-ES) x K = (8.0-6.5) x 10 = + 15
You will therefore gain 15 points from this tournament.
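The worked example above can be sketched in a few lines of Python (illustrative only; the function name is mine, not part of the QJRS software):

```python
# Rating change = (Achieved Score - Expected Score) x K-factor.
def rating_change(achieved_score, expected_score, k_factor):
    """Return the rounded rating change for one tournament."""
    return round((achieved_score - expected_score) * k_factor)

# The example from the text: scored 8/10, expected 6.5, school event (K = 10).
print(rating_change(8.0, 6.5, 10))  # -> 15
```

Changing the K-factor to 20 (an open event) would double the same result to +30, which is exactly why faster time controls carry smaller K-factors.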
2. How new (unrated) players get ratings
When a player plays in a tournament, they achieve a score against a field with a certain average rating. A table similar to the one above is used to calculate their ‘performance rating’ for that event (Table 2).
Performance ratings for players without a rating are looked at across all events over a 4-month period (two rating periods). This gives those who play in few rated events a chance to play enough games to get a rating, and also averages out ‘good’ tournaments with ‘bad’ ones. A minimum of 8 (non-lightning) games must be played to get a rating (generally two tournaments). If fewer than 8 games are played, a ‘provisional’ rating (indicated by ‘P’) is given. Ratings are published only if over 500.
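The publication rules for a new player can be sketched as follows (a minimal sketch; the function and the string format are mine, assumed purely for illustration):

```python
# A new rating is the average of the player's event performance ratings over
# the 4-month window. Under 8 non-lightning games it is provisional ('P'),
# and nothing is published at all if the result is not over 500.
def new_rating_entry(performances, games_played):
    rating = sum(performances) / len(performances)
    if rating <= 500:
        return None                      # below the list's floor: not published
    suffix = "P" if games_played < 8 else ""
    return f"{round(rating)}{suffix}"

print(new_rating_entry([620, 680], 6))   # -> 650P (provisional: only 6 games)
print(new_rating_entry([450, 480], 9))   # -> None (below 500, not listed)
```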
3. How QJ ratings compare with ACF and FIDE ratings
The ACF system is the national rating system, which also includes a rapid rating (for rapid events). The FIDE system is for international ratings. One aim of QJ ratings is to tie in with both systems: someone with a QJ rating of 1200 should be playing at 1200 ACF level. QJ and ACF ratings should ideally be similar but will rarely be exactly the same, as different events are rated on the two systems (eg. the ACF does not rate lightning events). Also, the QJ system uses the ‘Elo’ method of calculating ratings whereas the ACF system uses the ‘Glicko2’ method, which considers the ‘reliability’ of a rating: in the ACF system a new player’s rating will change more rapidly than that of one who has played thousands of games.
Juniors playing in ACF rated/open events (and international events for that matter) can have these events rated also. All Queensland Open events, the Australian Open and the Doeberl Cup are rated automatically every year. In these events, when juniors play adults, their average opposition is calculated based on the ACF ratings of their adult opponents; if an opponent happens to have a QJ rating then this is used instead. I also rate the Australian Juniors and Australian Junior School Championships each year. The interstate junior events are somewhat more difficult to rate and I won’t go into this here. I am always happy to rate other interstate events in which Queensland junior players participate; I just need to know via email.
4. Some more technical issues
a. Rating ‘all games’
All games submitted to the QJRL are rated. By this I mean that even games played against unrated players count. I do this by assigning new players a provisional rating based on their performance in that same event. For the mathematicians out there, the event is simply run through the ratings program over and over again, so that by the 4th or 5th ‘cycle’ the unrated players have a very accurate performance rating from the event. This matters because a player who lost 3 games to unrated players but won 3 against rated players would otherwise be fortunate (only the wins would count), while a player with the opposite results would not be. It is simply more accurate and fairer to rate all games.
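The ‘cycle’ idea can be sketched like this. Everything here is an assumption for illustration: the names, the seed value, and the use of the standard Elo logistic curve as a stand-in for the QJRL’s published Table 2.

```python
import math

def perf_rating(score_pct, avg_opp):
    """Elo-style performance rating; a stand-in for the QJRL's Table 2."""
    p = min(max(score_pct, 0.01), 0.99)   # cap the 0%/100% extremes
    return avg_opp + 400.0 * math.log10(p / (1.0 - p))

# Established ratings, plus one unrated player (Carol) with her event result.
ratings = {"Alice": 1200.0, "Bob": 1000.0}
results = {"Carol": (0.5, ["Alice", "Bob"])}   # (score %, opponents faced)

# Seed each unrated player, then re-rate the event repeatedly; the cycling
# matters most when several unrated players have played each other.
for name in results:
    ratings.setdefault(name, 1000.0)
for _ in range(5):                              # 4-5 cycles settle the values
    for name, (pct, opps) in results.items():
        avg = sum(ratings[o] for o in opps) / len(opps)
        ratings[name] = perf_rating(pct, avg)

print(round(ratings["Carol"]))  # 50% against an average of 1100 -> 1100
```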
b. Prevention of ratings deflation
The injection of points into a junior rating system is vital as there are a large number of rapidly improving players participating. Without the prevention of ratings deflation players would simply “swap” ratings points. Players who were improving slowly would actually go down because more quickly improving (and therefore underrated) players would take rating points off them. The QJRS therefore has mechanisms for prevention of ratings deflation.
i. The ‘3% rule’
Simply speaking, this states that the “Percentage expected” table (Table 1) presented previously is shifted by -3%. Therefore a player facing average opposition rated the same as him/herself is expected to score only 47% (not 50%) in order to neither gain nor lose points.
ii. The ratings ‘ceiling’
In Swiss Perfect (the most frequently used tournament pairing software) it is possible to assign a ‘ceiling’ to a ratings difference. This is basically to protect top players from losing points. A player rated 2000 who plays opponents rated 2100, 1900, 1800, 1950 and 500 would have their ‘average opposition’ dragged down significantly by the 500-rated player (met, say, in the first round). The ceiling for the system is 500 points for opponents rated above you and 400 points for opponents rated below you. In other words, the 2000-rated player would in effect be playing someone rated 1600, and the 500-rated player someone rated 1000. A ceiling is also used when calculating performance ratings for players scoring close to 100% or 0%.
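The ceiling amounts to clamping each opponent’s rating before averaging. A sketch, assuming the 500-above/400-below caps described above (the function name is mine):

```python
# Clamp an opponent's rating into the window [my_rating - 400, my_rating + 500]
# before it enters the 'average opposition' calculation.
def effective_opponent(my_rating, opp_rating, above_cap=500, below_cap=400):
    return max(my_rating - below_cap, min(opp_rating, my_rating + above_cap))

# The 2000-rated player's 500-rated opponent counts as 1600 ...
print(effective_opponent(2000, 500))   # -> 1600
# ... while for the 500-rated player, the 2000 opponent counts as 1000.
print(effective_opponent(500, 2000))   # -> 1000
```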
iii. Prevention of ‘overshooting’
It is quite possible for players to “overshoot” their true rating. A player rated 1000 who plays in several tournaments performing at 1100 strength can accrue enough points by the end of the rating period to rise above 1100. This happens because ratings only change every 2 months; if ratings updated after every tournament (or even after every game!) it would not. To prevent overshooting, players who play a large number of games and gain many points have their performance rating calculated for each tournament and do not rise above the average of these. Overshooting in the negative direction is theoretically possible but very uncommon.
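The anti-overshoot cap can be sketched as below (the function and the threshold for “gaining many points” are my assumptions; the QJRS program’s exact trigger is not described here):

```python
# A player gaining points over the period cannot end above the average of
# their per-tournament performance ratings.
def capped_new_rating(old_rating, total_change, performance_ratings):
    new_rating = old_rating + total_change
    cap = sum(performance_ratings) / len(performance_ratings)
    return min(new_rating, cap) if new_rating > old_rating else new_rating

# A 1000-rated player accrues +130 but only performed at ~1100 strength:
print(capped_new_rating(1000, 130, [1100, 1090, 1110]))  # -> 1100.0
```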
FAQs for players/parents
1. What do I have to do to get a rating?
Simply play in events which are rated. Many events are rated each year – a list is available at the bottom of each rating list. You do not need to pay a rating fee or be a member of any specific organisation to qualify. Just play chess.
2. Why do I not have a rating?
The most common reason is that your performance ratings in the events you played were below 500, the lower limit for the list. Perform above 500 strength and you will go on the list. The other reason is that the events you played in either were not rated (i.e. not sent to me) or were not rated in time for the current list (but will be in the following list). If you think I have made a mistake despite this, feel free to send me an email.
3. Why did I lose points when I won lots of games?
Read the section above on how QJ ratings are calculated. You may have won 4/6 games, but if you played opponents with much lower ratings than you, you may have been expected to score more than 4/6.
4. Why is my rating change different to what was on the Swiss Perfect rating calculations at the end of the event?
The Swiss Perfect rating calculations do not include games against unrated players and do not apply the ‘3% rule’. They also may or may not have the correct K-factor and ‘ceiling’ applied (these settings can be changed).
5. The information on the list is incorrect or I do not want it there for privacy reasons.
Please send me an email with the correction/request and I will change/remove the information. There are 60,000 games played in hundreds of events each year, so to some degree I rely on club/school organisers to notify me of changes to schools etc. I personally check for change-of-school information (eg. primary to high school) only twice a year. Whilst this information is useful, its original primary purpose was simply to identify the correct player when we have several with the same name.
FAQs for event organisers
1. How do I send events for rating?
I only accept events submitted in Swiss Perfect format. For people who are unaware, this is chess tournament pairing software. Three files, ending in .ini, .trn and .sco, need to be sent via email for each event. I also need to know the time control for the event; I generally assume all junior club/school events have a K-factor of 10 (i.e. 15-20 minute games).
2. What events can be rated?
Basically, any event with at least 1 rated player in it can be rated. Obviously, the more rated players in the event, the more accurate the ratings generated will be. Generally if it is a new club/school I need at least a few rated players for accuracy reasons. However there are ways around this, so (particularly for those outside SE Qld) feel free to email me and ask.
3. Do I need to input the tournament information in a special way?
Yes is the brief answer. I never refuse events run on Swiss Perfect, but it does make things a lot easier if the following format is followed…
4. Is there an easy way to input players?
A Swiss Perfect input file is on the CAQ site at: http://www.caq.org.au/htm/ratings.htm
A similar one exists for the ACF ratings, and instructions for both are found under ‘Instructions for use’ on the left-hand side of the same webpage. Just follow the instructions. The QJRL file, however, is under a different filename and is already in .trn format (not .exe). I am told it takes time to get used to, but for people running large events with many rated players it is worth learning. It also means that when events come to me for rating they are already in the format I need.
5. What is the last date I can send events for rating and still have it included on the next list?
Generally this is the 20th of the month prior: for the list due out March 1st, this is 20th February. Obviously this depends on what else I am doing and on what the event is. If it is an important event I may wait a day or two for it before finalising the ratings; however, if it is a 6-player club event sent to me on the 25th, it will be held over to the following list. There are times when I simply cannot produce a list on the 1st of the month, but I try to keep this to a minimum.
6. What information goes on the list
In order it is: rating, number of games played in the rating period, date of birth (YYYYMMDD), M/F, school, HS/PS/club, location, surname, first name. There is also a file with top players/most improved etc. Once you send me an event for rating, unless you tell me otherwise, players with ratings will appear on the list with this information (if available/sent). If players need information withheld for privacy reasons, you need to let me know at the time of sending the event in for rating. Be aware that any information you send me ends up on the web.
7. How do I get a copy of the list?
No hardcopies are produced. I email the list automatically to people who send me events for rating. If I have missed you then please let me know. I email the list in text and excel format so you can search/sort players from your school/club. I probably only personally check for change of school information twice a year; so if people have changed school or you have a date of birth that is not there on the list then please let me know. Soon after I email the list they appear on the CAQ and Gardiner Chess websites.
8. What if I have technical enquiries?
Send me an email. If it is too technical especially to do with Swiss Perfect I will direct it to Pat Byrom who very kindly writes the software that integrates event output files with the rating system.
Table 1 – Percentage Expected
This table is based on the Elo System and is used for many rating systems around the world. It shows us what percentage(%) you would be expected to score against any group of opponents.
The first column (RD) is the difference between your rating and the average of your opponents’. The second column (+) shows us what percentage we would be expected to score if our rating was the amount in the first column above our opponents’ average rating. The third column (-) shows us what percentage we would be expected to score if our rating was the amount in the first column below our opponents’ average rating.
For example, take the line that says 107-113; 65; 35. This means that if you played a field whose average rating was 110 above yours (eg. Your rating was 1000 and the average of your opponents’ was 1100), you would be expected to score 35%. If they were 110 points below you then you would be expected to score 65%. This leads to the idea of the “expected score”.
If you were expected to score 65% in a 10-round tournament, then your expected score (ES) would be 0.65 x 10 = 6.5, i.e. 6.5/10. If you score 6.5 points you have done exactly what was expected of you and your rating will not change. The more points you score above 6.5, the more your rating increases; the fewer points you score (below 6.5), the more you lose.
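Since Table 1 is based on the Elo system, the expected score can also be computed directly from the standard Elo curve rather than looked up. A sketch (the function name is mine; the table’s actual entries are banded and so will differ very slightly):

```python
# Expected score from the rating difference: the standard Elo logistic curve.
def expected_score(rating_diff, rounds):
    pct = 1.0 / (1.0 + 10 ** (-rating_diff / 400.0))
    return pct * rounds

# 110 points above the field over 10 rounds, matching the 65% example:
print(round(expected_score(110, 10), 1))  # -> about 6.5
```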
Table 2 – Performance Rating
% – percentage score, eg. 6/8 = 75%
Diff – amount added to or subtracted from the average opposition to calculate the performance rating
So if a player scores 3/5 (60%) against an average opposition of 600, their performance rating for that event is 600 + 72 = 672.
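The Table 2 lookup can likewise be approximated with the inverse Elo curve (a sketch only: the function is mine, and the published table’s banded values differ slightly from the smooth curve, e.g. it gives 72 at 60% where the curve gives about 70):

```python
import math

# Performance rating = average opposition + a difference driven by the
# percentage score; the inverse Elo curve stands in for the published table.
def performance_rating(score_pct, avg_opposition):
    p = min(max(score_pct, 0.01), 0.99)   # cap the 0%/100% extremes
    return avg_opposition + 400.0 * math.log10(p / (1.0 - p))

print(round(performance_rating(0.60, 600)))  # close to the 672 of the example
```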