Some numbers and reflections.

So the final decisions are out. 30 papers were accepted, and we – that is, the program chairs – are super happy about the selection of papers that will be presented.

While Augmented Humans has no interest in being another CHI, we thought it might be interesting to compare statistics. The average rating of submissions at Augmented Humans was 2.9 (SD = 0.83), compared to an average of 2.6 (SD = 0.43) at CHI this year. While the averages are relatively similar, the difference in standard deviation is striking. One explanation might be that the variability in the quality of submissions was higher at AH2020. Another explanation, which we find more likely, is that because most of our reviews were provided by the Program Committee, these reviewers felt more confident in taking clear positions and giving either a very high or a very low score, to ease the work for us chairs (thank you!). The average score of the rejected papers at Augmented Humans was 2, while the average for the accepted papers was 3.1. See the figure below for more details on how the scores and decisions were distributed:

[Figure: distribution of review scores and accept/reject decisions]
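
For anyone curious how numbers like these come together, here is a minimal sketch of computing group averages and a standard deviation over per-submission scores; the submission IDs, scores and decisions below are made up for illustration and are not the actual review data.

```python
import statistics

# Hypothetical per-submission average review scores and decisions
# (made-up illustration, not the actual AH2020 review data).
submissions = {
    "sub-01": (3.7, "accept"),
    "sub-02": (1.7, "accept"),   # e.g. a low-scored paper accepted on its merits
    "sub-03": (3.0, "reject"),   # e.g. a well-scored paper rejected over a flagged flaw
    "sub-04": (2.0, "reject"),
}

all_scores = [score for score, _ in submissions.values()]
accepted = [score for score, decision in submissions.values() if decision == "accept"]
rejected = [score for score, decision in submissions.values() if decision == "reject"]

print(f"overall mean = {statistics.mean(all_scores):.1f}, "
      f"SD = {statistics.stdev(all_scores):.2f}")
print(f"accepted mean = {statistics.mean(accepted):.1f}, "
      f"rejected mean = {statistics.mean(rejected):.1f}")
```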

While we are happy about the accepted papers, we are also sad about the many submissions we had to turn away, and painfully aware that many of you will be disappointed to see your work come back with three short reviews and a ‘reject’. For the sake of transparency, we want to share the behind-the-scenes process that went into selecting the papers: In discussion with steering committee members and general chairs, we established that we could accommodate a maximum of 36 papers, based on time constraints. We also agreed that we would like an acceptance rate of no lower than 30%, but below 50%.

Once the reviews came in, the Program Chairs met in person in Saarbrücken. Starting with submission #1 and going up to submission #75, we discussed every single paper. Without concerning ourselves too much with the specific review scores, we used the reviews to figure out which parts of each paper were most relevant for us to read in order to make an informed decision. If there were aspects of a paper we wanted more input on, we went back to the reviews to see if any of the reviewers had discussed the questions we had. Then, based on our own assessment and the written reports of the reviewers, we marked each paper as ‘accept’, ‘reject’ or ‘decide in second pass’.

After completing our first pass, we had marked 24 papers as ‘accept’. 24 papers would fit comfortably in the allotted time, and, at 35% of the submissions, they would also meet our target. We were happy with this, because it meant that we did not have to reject any of the papers we wanted to accept, and that we could revisit our ‘decide in second pass’ papers without pressure to accept any of them. At this stage, we also grouped the papers by theme, to make sure that our decisions were not skewed towards specific topics. In our second pass, we then chose another 6 papers to include. This pushed our acceptance rate to 43% and fit two days well, without requiring us to be overly strict with time-keeping.
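
As a quick sanity check on those percentages (our own back-of-the-envelope arithmetic, not an official count), both figures point to a pool of roughly 69–70 papers under review:

```python
# Back-of-the-envelope check of the quoted acceptance rates; the implied
# pool size is an inference from the percentages, not an official number.
first_pass_accepts, first_pass_rate = 24, 0.35
final_accepts, final_rate = 30, 0.43

print(round(first_pass_accepts / first_pass_rate))  # ~69 papers in the pool
print(round(final_accepts / final_rate))            # ~70 papers in the pool
```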

In a final step, we double-checked our decisions against the average review scores, to see if we had missed a potentially strong paper or accepted a potentially weak one. The lowest-scored paper we accepted had a score of 1.7. Here, the reviewers primarily felt that the contribution was too far outside the domain of HCI to be accepted. However, we felt the paper was valid and interesting and would complement some of the other accepted papers nicely. The highest-rated paper we rejected had a score of 3. Such papers were typically strong contributions that received a high rating from at least one reviewer and a low rating from a single reviewer who identified a problem that cast doubt on some or all of the results presented in the paper. Please understand that our primary interest was to accept strong submissions, so we did not make these decisions lightly.

At this point we’d also like to thank the Program Committee, who delivered 209 reviews in record time. For those of you who are fast at math: yes, in the end we received more reviews than requested :-)

Thanks also to the poster and demo chairs, who then gave all submissions one final pass: as many of the projects we rejected were interesting and relevant to the conference, the poster and demo chairs will consider whether any of these works might be invited to present in other forms. We will reach out to several authors with an invitation to present their work as a demonstration or as a poster.