4.   Results


 

Overall, the players completed 20,173 trials, with 8,347 of these trials resulting in correct responses (41.37%). There were 11,369 (56.36%) incorrect responses and an additional 457 (2.27%) timeouts recorded. The task was designed to promote a success rate between 40% and 60%, allowing a sufficient number of both correct and incorrect trials for analysis (Williams, Nesbitt, Eidels and Elliott 2011). The results showed that this preliminary goal was achieved, so we moved on to two further types of analysis. We first performed further analysis on the pooled data (section 4.1) in order to gain an overall appreciation of the data. This pooling process resulted in an unequal number of trials for each condition, so within-group analysis could not be used. A more traditional within-group analysis of the data is reported in section 4.2.

 

4.1.       Pooled Trial Results

The average response time of participants was 3.97 seconds (SD = 1.98). Because the numbers of winning and losing trials were unequal, an independent-samples t-test was conducted to compare the response times in winning and losing trials (excluding timeouts). There was a significant difference between the response times for winning responses (M = 4.40, SD = 1.81) and losing responses (M = 3.50, SD = 1.88): t(19718) = 1.96, p < .05. This was expected, as the task was designed so that responding more slowly would improve the player's chance of success.
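As an illustration, this comparison can be sketched in Python with SciPy. The trial-level data are not reproduced here, so the sketch draws synthetic response times whose group sizes, means, and SDs mimic the reported values; the exact t statistic therefore depends on the simulated data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic response times (seconds): group sizes match the trial counts
# reported above, and the means/SDs mimic the reported values.
wins = rng.normal(4.40, 1.81, 8347)     # winning trials
losses = rng.normal(3.50, 1.88, 11369)  # losing trials

# Welch's variant avoids assuming equal variances across the two groups.
t, p = stats.ttest_ind(wins, losses, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3g}")
```

With a mean difference this large relative to the spread, any reasonable seed yields a highly significant result.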

 

Next, we considered all trials in relation to the three experimental conditions. Overall, the 48 players completed 6,818 trials in the no sound condition, 6,661 trials in the constant sound condition, and 6,694 trials in the increasing sound condition. In the no sound condition there were 2,794 (40.98%) correct responses, 3,830 (56.17%) incorrect responses, and 194 (2.85%) timeouts. In the constant sound condition there were 2,717 (40.79%) correct responses, 3,773 (56.64%) incorrect responses, and 171 (2.57%) timeouts. In the increasing sound condition there were 2,836 (42.37%) correct responses, 3,766 (56.26%) incorrect responses, and 92 (1.37%) timeouts.
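The per-condition percentages above follow directly from the raw counts; a quick sanity check:

```python
# Correct/incorrect/timeout counts per condition, as reported above.
counts = {
    "no sound":   (2794, 3830, 194),
    "constant":   (2717, 3773, 171),
    "increasing": (2836, 3766, 92),
}

percentages = {}
for condition, (correct, incorrect, timeout) in counts.items():
    total = correct + incorrect + timeout  # trials in this condition
    percentages[condition] = tuple(round(100 * n / total, 2)
                                   for n in (correct, incorrect, timeout))
    print(condition, percentages[condition])
```

Each condition's counts sum to the reported trial totals (6,818, 6,661, and 6,694), and the rounded percentages match those in the text.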

 

We designed the increasing sound as a temporal cue to reduce timeouts; it seemed to be effective, considering the drop from 2.85% timeouts in the no sound condition and 2.57% in the constant sound condition to 1.37% in the increasing sound condition. A chi-square goodness-of-fit test was performed to determine whether timeouts occurred equally across all sound conditions. Timeouts were not equally distributed in the experiments: X2(2, N=457) = 49.09, p < .05. Unlike the timeouts, there were no significant differences in the number of correct responses – X2(2, N=8,347) = 2.37, p = .30 – or incorrect responses – X2(2, N=11,369) = 0.15, p = .93 – across the three conditions. This suggests that, apart from the reduction in timeouts, there were no statistically significant changes in players' hit rate (accuracy) when either of the sounds was included in the display.
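The goodness-of-fit test can be sketched with SciPy. The expected frequencies used in the original analysis are not stated; the sketch below assumes timeouts would be spread equally across the three conditions, so its statistic need not match the reported value exactly.

```python
from scipy import stats

# Observed timeout counts (no sound, constant sound, increasing sound),
# as reported above; N = 457 in total.
observed = [194, 171, 92]

# With no `f_exp` argument, scipy assumes equal expected counts (457/3).
chi2, p = stats.chisquare(observed)
print(f"X2(2, N=457) = {chi2:.2f}, p = {p:.3g}")
```

Under the equal-expectation assumption the imbalance is still clearly significant, consistent with the conclusion in the text.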

 

However, using a one-way ANOVA we found a significant effect of sound on mean response time for all trials at the p < .05 level for the three conditions: F(2, 19713) = 15.26, p < .001. Post hoc comparisons using the Tukey HSD test indicated that the mean response time for the no sound condition (M = 3.77, SD = 1.93) was significantly faster than both the constant sound condition (M = 3.93, SD = 1.89) and the increasing sound condition (M = 3.94, SD = 1.88). The constant sound condition did not differ significantly from the increasing sound condition.
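This analysis can be sketched with SciPy on synthetic data drawn to mimic the reported condition sizes, means, and SDs (the real trial-level data are not reproduced here). The original analysis used Tukey's HSD for the post hoc comparisons; as a simpler stand-in, the sketch runs pairwise t-tests with a Bonferroni correction.

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic response times per condition; sizes, means, and SDs mimic
# the values reported above.
groups = {
    "no sound":   rng.normal(3.77, 1.93, 6818),
    "constant":   rng.normal(3.93, 1.89, 6661),
    "increasing": rng.normal(3.94, 1.88, 6694),
}

F, p = stats.f_oneway(*groups.values())
print(f"F(2, {sum(len(g) for g in groups.values()) - 3}) = {F:.2f}, p = {p:.3g}")

# Pairwise post hoc comparisons, Bonferroni-corrected for 3 tests.
for a, b in combinations(groups, 2):
    _, p_pair = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: corrected p = {min(p_pair * 3, 1.0):.3g}")
```

On data like these, the no sound condition separates from both sound conditions while the two sound conditions remain close, mirroring the pattern reported above.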

 

We then considered the mean response time for winning trials separately from that for losing trials. Again there was a significant effect of sound on mean response time at the p < .05 level for both the winning trial data – F(2, 8344) = 12.31, p < .001 – and the losing trial data – F(2, 11366) = 5.03, p = .01.

 

In terms of wins, post hoc comparisons indicated that the mean response time for the no sound condition (M=4.26, SD=1.85) was significantly faster than the constant sound condition (M=4.48, SD=1.81) and the increasing sound condition (M = 4.47, SD = 1.76). The constant sound condition did not significantly differ from the increasing sound condition. This overall pattern was consistent with the losses data, where post hoc comparisons indicated that the mean time for the no sound condition (M=3.42, SD=1.91) was significantly faster than the constant sound condition (M=3.53, SD=1.85) and the increasing sound condition (M=3.54, SD=1.87). Again, the constant sound condition did not significantly differ from the increasing sound condition. 

 

These results were pleasing with respect to our design goals, as they provided further indication that 1) players avoided timeouts in the increasing sound condition and, simultaneously, 2) were able to wait longer to respond than in the original no sound condition. What was most surprising about these results was that players also seemed to wait longer to respond in the constant sound condition, although this produced no significant reduction in timeouts. The constant sound condition was included as a control and was not expected to produce any variation in the way players performed the task.

 

4.2.       Player by Player Results

After examining effects from pooled data, we also considered the player-by-player results. That is, the mean result for each player in each condition was calculated before analyzing these results in a one-way repeated measures design. On average, players completed 420.27 (SD=81.43) trials: 142.04 (SD=33.40) in the no sound condition, 138.77 (SD=27.69) in the constant sound condition, and 139.46 (SD=31.54) in the increasing sound condition. The minimum number of trials completed by a player was 318. The maximum number of trials by a single player was 697.

 

Given the variation in the number of trials that players completed, we were concerned that our overall results could be biased, or over-represented, by individual performance. We therefore repeated our pooled-data analysis using the averaged results for the 48 players. This entailed averaging all trials for each individual player and then averaging these 48 per-player results.
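The two-stage averaging can be sketched with pandas on a toy trial table (the column names here are illustrative, not from the study's actual data files):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Toy trial-level table standing in for the real data: one row per trial,
# with a player id, a condition label, and a response time in seconds.
trials = pd.DataFrame({
    "player": rng.integers(0, 48, 2000),
    "condition": rng.choice(["none", "constant", "increasing"], 2000),
    "rt": rng.normal(4.0, 1.9, 2000),
})

# Average within each player first, then across players, so that players
# who completed more trials do not dominate the condition means.
per_player = trials.groupby(["player", "condition"])["rt"].mean()
condition_means = per_player.groupby("condition").mean()
print(condition_means)
```

Averaging per player before averaging across players is what removes the bias from unequal trial counts: each player contributes exactly one value per condition.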

 

First, we considered the average number of winning trials for each player in the three conditions: no sound (M=58.21, SD=17.82), constant sound (M=56.60, SD=18.86), and increasing sound (M=59.08, SD=17.01). A repeated measures (within subjects) one-way ANOVA showed no significant difference between the number of wins in the three sound conditions: F(2,47) = 0.70, p = .497.

 

Next, we considered the number of losses per player in the three conditions: no sound (M=79.79, SD=39.37), constant sound (M=78.60, SD=7.62), and increasing sound (M=78.46, SD=39.18). Again, a repeated measures (within subjects) one-way ANOVA showed no significant difference: F(2,47) = 0.06, p = .946.

 

We then analyzed the number of timeouts in the three conditions: no sound (M=4.33, SD=4.87), constant sound (M=3.54, SD=3.79), and increasing sound (M=1.65, SD=2.55). In this case a significant difference was found between the number of timeouts in the three sound conditions: F(2,47) = 13.36, p < .001. Post hoc comparisons with Bonferroni correction confirmed that the increasing sound condition resulted in significantly fewer timeouts than the no sound condition (p = .01). There were no significant differences between either the no sound and constant sound or the constant sound and increasing sound conditions.
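A one-way repeated-measures ANOVA of this kind can be hand-rolled in a few lines of NumPy (statsmodels' `AnovaRM` offers the same analysis off the shelf). The per-player timeout data below are synthetic, drawn to mimic the reported condition means, so the F value is illustrative only.

```python
import numpy as np
from scipy import stats

def rm_anova_oneway(data):
    """One-way repeated-measures ANOVA on an (n_subjects, k_conditions) array."""
    n, k = data.shape
    grand = data.mean()
    ss_total = ((data - grand) ** 2).sum()
    ss_subjects = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_conditions = n * ((data.mean(axis=0) - grand) ** 2).sum()
    # Residual variation after removing stable per-subject differences.
    ss_error = ss_total - ss_subjects - ss_conditions
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    F = (ss_conditions / df_cond) / (ss_error / df_err)
    return F, df_cond, df_err, stats.f.sf(F, df_cond, df_err)

rng = np.random.default_rng(3)
# Synthetic per-player timeout counts for 48 players in 3 conditions;
# condition means mimic those reported above (4.33, 3.54, 1.65).
subject_effect = rng.normal(0, 2, (48, 1))  # stable per-player tendency
timeouts = subject_effect + rng.normal([4.33, 3.54, 1.65], 1.5, (48, 3))

F, df1, df2, p = rm_anova_oneway(timeouts)
print(f"F({df1}, {df2}) = {F:.2f}, p = {p:.3g}")
```

Removing the subject effect from the error term is what gives the repeated-measures design its power: each player serves as their own baseline across the three sound conditions.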

 

We then considered the average response time for all trials per player (n=48) in the three conditions: no sound (M=4.06, SD=1.19), constant sound (M=4.16, SD=1.12), and increasing sound (M=4.21, SD=1.20). A repeated measures (within subjects) one-way ANOVA showed no significant difference between the response times in the three sound conditions: F(2,47) = 0.51, p = .599.


Next, we compared response times for all winning trials per player (n=48) in the three conditions: no sound (M=4.22, SD=1.23), constant sound (M=4.40, SD=1.11), and increasing sound (M=4.40, SD=1.20). No significant difference was found for the players' average winning response times: F(2,47) = 0.98, p = .379.

 

Finally, we considered the response time only for losing trials per player in the three conditions: no sound (M=3.88, SD=1.16), constant sound (M=3.94, SD=1.08), and increasing sound (M=4.04, SD=1.19). Again, no significant difference was found for the players' average losing response times: F(2,47) = 0.72, p = .492.

Figure 5: Box plot of timeouts for players over the three conditions.

Figure 6: Box plot of response times for players over the three conditions.
