Accessing Toolbox Data

We try to make data collection as user-friendly and flexible as possible. At present, there are three options.

1)   By default, the apps display summary statistics on the end screen after each run-through of an app. This provides some of the most common indices for summarising a child’s performance on these measures. To further contextualise a child’s performance, you can request a performance chart from the settings menu [⚙︎], found in the bottom-right corner of the app’s home screen. This plots a child’s score in relation to our preliminary Australian norms. Note, however, that these scores and charts are for indicative purposes only, and are not advised for research purposes. For more formal or research purposes, one of the following two options is recommended.  

2)   In the settings menu [⚙︎] you can also toggle on e-mail logging and enter the address to which you would like item-by-item data to be sent. This uses the iPad’s ‘Mail’ app to send the data to the location you specify, so you will need to ensure the Mail app is set up. This provides accuracy (and, wherever relevant, response time) data that is suitable for research purposes. It does, however, require some data entry at the end to transfer data from the e-mails into your analytic software. As such, for larger-scale or repeated use, we can also facilitate option 3 below.   

3)   The final option, also configured in the settings menu [⚙︎], is to specify the URL of an online database to which your data will be sent. We do not collect any demographic or performance data entered into or generated by the apps, so for this option you would set up your own database. We have a database of our own that we use for our projects, and if you have the technological savvy to set up a database yourself, we are happy to provide you with the source code for our database for easy setup. If you do not have this skill set, we can put you in touch with our database developers, who can set this up for you at their hourly rate (which is minimised because we have already invested in creating the database code they will use). In either case, for support in setting up an online database, please contact eyt_dev@sockii.com.

We have tried to create data collection options that are as diverse, low-cost and flexible as possible, so we hope one of these fits your needs.


Fail Safes
As researchers ourselves, we understand the importance of ensuring no data is lost. For this reason we have implemented a number of fail safes to assist in data management.  

1)   Each app's settings menu has a 'Last Data Sync' time stamp so you can quickly and easily see whether the data has been submitted to the specified database;
2)   When database or e-mail logging is enabled, the iPad also keeps local logs of the data files that can be manually extracted in the unusual case that they do not submit to the database. As long as the app is not deleted, these logs remain accessible through iTunes. Please click here for a step-by-step guide to manual data extraction;
3)   If you have a database set up, you can also have e-mail alerts sent whenever any data attempts to submit to the database (as another check for whether data has been sent and whether the database successfully received it);
4)   If you have a database set up, you can also create daily backups of this database to ensure that any data that is deleted or lost can be reinstated from one of these earlier backups.

Preparing and Processing Toolbox Data

While it is possible to simply use the summary statistics we provide on the end screen after each administration, note that these scores are derived from unprocessed data and should be interpreted accordingly. For larger-scale and more formalised uses, we recommend using the full trial-by-trial data for analysis.  

For the Mr Ant task, scores are calculated as a point score: beginning from Level 1, one point is awarded for each consecutive level in which at least two of the three trials were performed accurately, plus 1/3 of a point for each correct trial thereafter. For a sample spreadsheet to facilitate this data cleaning process, please click here.   

(Microsoft Excel, 589.43 KB)
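
To illustrate, here is a minimal Python sketch of this point-score calculation, assuming you have already extracted trial-level accuracy (1 = correct, 0 = incorrect) grouped by level from the item-by-item data. The function name and data format are our own assumptions, not part of the apps or spreadsheets; correct trials in and after the first level that misses the criterion are credited at the per-trial rate.

    def point_score(trials_by_level, pass_threshold=2, points_per_trial=1/3):
        """One point per consecutive level passed (from Level 1),
        plus a fraction of a point for each correct trial thereafter."""
        score = 0.0
        still_consecutive = True
        for level in sorted(trials_by_level):          # e.g., {1: [1, 1, 1], 2: [1, 0, 1], ...}
            n_correct = sum(trials_by_level[level])
            if still_consecutive and n_correct >= pass_threshold:
                score += 1.0                           # level passed within the consecutive run
            else:
                still_consecutive = False              # run has ended; credit remaining correct trials
                score += n_correct * points_per_trial
        return score

    # Mr Ant: three trials per level, pass = at least two correct, 1/3 point per later correct trial
    mr_ant_score = point_score({1: [1, 1, 1], 2: [1, 0, 1], 3: [0, 1, 0]},
                               pass_threshold=2, points_per_trial=1/3)   # 2.33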

For the Not This task, scores are calculated as a point score: beginning from Level 1, one point is awarded for each consecutive level in which at least three of the five trials were performed accurately, plus 1/5 of a point for each correct trial thereafter. For a sample spreadsheet to facilitate this data cleaning process, please click here.   

(Microsoft Excel, 678.36 KB)
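
The point_score sketch given in the Mr Ant section above can be reused here by supplying the Not This rules (five trials per level, pass = at least three correct, 1/5 of a point per correct trial thereafter); again, the data format is an assumption on our part.

    # Not This: five trials per level, pass = at least three correct, 1/5 point per later correct trial
    not_this_score = point_score({1: [1, 1, 1, 1, 1], 2: [1, 1, 0, 0, 0]},
                                 pass_threshold=3, points_per_trial=1/5)   # 1.4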

For the Go/No-Go task, we start by removing all trials on which the response was faster than 300 ms (and thus was unlikely to have been made in response to the stimulus). We then remove all blocks in which the child was largely non-responsive (go accuracy below 20% and no-go accuracy above 80%) or indiscriminately responsive (go accuracy above 80% and no-go accuracy below 20%). From the resultant data, we then calculate an impulse control score (% Go Accuracy x % No-Go Accuracy), which reflects the child’s ability to withhold a response in the context of the strength of that typical (pre-potent) response. For a sample spreadsheet to facilitate this data cleaning and calculation process, please click here.

(Microsoft Excel, 2.70 MB)
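
Below is a rough Python sketch of this cleaning and scoring sequence, assuming each trial record holds a block number, a trial type ('go' or 'nogo'), an accuracy flag (1/0) and a response time in milliseconds (None where no response was made). The field names and record layout are our own assumptions about how you might arrange the exported data.

    from collections import defaultdict

    def gonogo_impulse_control(trials):
        # 1. Remove anticipatory responses (faster than 300 ms).
        kept = [t for t in trials if t["rt_ms"] is None or t["rt_ms"] >= 300]

        # 2. Group trials by block and compute go / no-go accuracy per block.
        blocks = defaultdict(lambda: {"go": [], "nogo": []})
        for t in kept:
            blocks[t["block"]][t["trial_type"]].append(t["correct"])

        def acc(xs):
            return sum(xs) / len(xs) if xs else 0.0

        # 3. Drop largely non-responsive or indiscriminately responsive blocks.
        usable_go, usable_nogo = [], []
        for d in blocks.values():
            go_acc, nogo_acc = acc(d["go"]), acc(d["nogo"])
            non_responsive = go_acc < 0.20 and nogo_acc > 0.80
            indiscriminate = go_acc > 0.80 and nogo_acc < 0.20
            if not (non_responsive or indiscriminate):
                usable_go.extend(d["go"])
                usable_nogo.extend(d["nogo"])

        # 4. Impulse control = overall go accuracy x overall no-go accuracy.
        return acc(usable_go) * acc(usable_nogo)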

For the Card Sorting task, we tend to review the accuracy of Block 1 (pre-switch) and Block 2 (post-switch). Since the post-switch accuracy score is intended to index the extent to which a child could successfully switch from one sorting rule to the next, we swap the two scores if the post-switch accuracy is higher than the pre-switch accuracy. This ensures that the final post-switch score (Block 2 + Block 3) reflects the child’s ability to successfully switch between sorting rules. For a sample spreadsheet to facilitate this, please click here (note, however, that the score swapping is done manually).   

(Microsoft Excel, 261.61 KB)
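
As a small illustration of the swap described above (which is performed manually in the sample spreadsheet), assuming you have pre- and post-switch accuracy expressed as proportions or percentages:

    def ordered_switch_scores(pre_switch_acc, post_switch_acc):
        # If post-switch accuracy exceeds pre-switch accuracy, swap the two so the
        # post-switch score indexes the child's ability to switch sorting rules.
        if post_switch_acc > pre_switch_acc:
            pre_switch_acc, post_switch_acc = post_switch_acc, pre_switch_acc
        return pre_switch_acc, post_switch_acc

    pre, post = ordered_switch_scores(0.50, 0.90)   # returns (0.90, 0.50)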

For the Expressive Vocabulary task, if the ‘specify’ option was used for any of a child’s responses to indicate more clearly what they said, these will automatically be scored as 0 (incorrect). When re-scoring these responses, we give credit if the word produced contains the target word (e.g., the word is snow but the child said snow globe), because the child knew and was able to produce that word. We also give credit for highly common alternatives to the target word (e.g., the word is hippopotamus but the child said hippo). We do not give credit for alternative viable words, because the words have been carefully selected to follow a particular developmental sequence. Rather, if the child was unable to produce a target word initially, we would prompt with ‘What else might this be called?’ until they produce the word, or until satisfied that the child will not produce it (e.g., the child says ‘I don’t know’). This balances our desire to establish whether the child can independently produce the word with ensuring that the child does not become frustrated and disengage from the task.  

(Microsoft Excel, 39.79 KB)
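
If you would prefer to re-score ‘specify’ responses programmatically rather than by hand, a sketch along the following lines could apply the rules above. The response format and the list of accepted alternatives (e.g., ‘hippo’ for ‘hippopotamus’) are choices you would need to make yourself, and borderline cases may still warrant manual judgement.

    def rescore_specified_response(response, target, accepted_alternatives=()):
        # The app scores any 'specify' response as 0; this applies the manual rules above
        # for single-word targets.
        r = response.strip().lower()
        t = target.strip().lower()
        if t in r.split():                                               # response contains the target word
            return 1                                                     # e.g., 'snow globe' for 'snow'
        if r in {a.strip().lower() for a in accepted_alternatives}:      # highly common alternative
            return 1                                                     # e.g., 'hippo' for 'hippopotamus'
        return 0                                                         # other viable words still score 0

    rescore_specified_response("snow globe", "snow")                     # 1
    rescore_specified_response("hippo", "hippopotamus", ("hippo",))      # 1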


For the Child Self-Regulation and Behaviour Questionnaire, a number of items must be reverse-scored prior to combination into sub-scales. The sub-scales are derived as follows:

· Sociability: Items 1, 4, 9, 16(reversed), 22(reversed), 27, 32
· Externalising: Items 3, 20, 23, 26, 28
· Internalising: Items 17, 21, 25, 33, 34
· Prosocial: Items 15, 19, 24, 27, 30
· Behavioural Self-Regulation: Items 7(reversed), 13, 15, 29(reversed), 30, 31(reversed)
· Cognitive Self-Regulation: Items 5, 6, 8, 12, 18
· Emotional Self-Regulation: Items 2, 10, 11(reversed), 14(reversed), 23(reversed), 26(reversed)

For a sample spreadsheet to facilitate this data cleaning process, please click here.

(Microsoft Excel, 138.20 KB)
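
For those scoring the questionnaire outside the spreadsheet, here is a rough Python sketch of the reverse-scoring and sub-scale derivation above. It assumes responses are stored as a dict keyed by item number on a 5-point scale (so a reversed item becomes 6 minus the raw rating; adjust scale_min and scale_max if your response scale differs), and it reports each sub-scale as the mean of its items; use a sum instead if that matches your scoring convention.

    SUBSCALES = {
        # Negative item numbers mark reverse-scored items.
        "Sociability":                 [1, 4, 9, -16, -22, 27, 32],
        "Externalising":               [3, 20, 23, 26, 28],
        "Internalising":               [17, 21, 25, 33, 34],
        "Prosocial":                   [15, 19, 24, 27, 30],
        "Behavioural Self-Regulation": [-7, 13, 15, -29, 30, -31],
        "Cognitive Self-Regulation":   [5, 6, 8, 12, 18],
        "Emotional Self-Regulation":   [2, 10, -11, -14, -23, -26],
    }

    def csbq_subscales(responses, scale_min=1, scale_max=5):
        # responses: dict mapping item number to the raw rating for that item
        scores = {}
        for name, items in SUBSCALES.items():
            values = []
            for item in items:
                raw = responses[abs(item)]
                if item < 0:                           # reverse-score this item
                    raw = scale_min + scale_max - raw
                values.append(raw)
            scores[name] = sum(values) / len(values)   # sub-scale mean
        return scores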