We have tried to make data collection as user-friendly and flexible as possible. At present, there are three options for data collection.
1) The 'I' menu can be found in the bottom-right corner of each of the app's title screens. In this menu, you can specify a database URL to which collected data will be sent remotely. If the iPad is connected to WiFi during collection, this transfer occurs immediately. If the iPad is not connected to WiFi during collection, all collected data will queue on the iPad and will be sent to the database the next time it is connected to WiFi and the app is launched.
NOTE: This option requires a small in-app purchase to access the 'I' menu (a one-time cost, which funds further toolbox development) and the establishment of a database. If you are sufficiently savvy in web coding, we have provided some sample code on our support site to help you establish your own database. Alternatively, you could hire someone to create this for you (we have provided the contact details of our database developer, who can create the same version he built for us at a small cost: currently ~$350).
2) If you'd rather not go down the database route, you can instead have the data sent to a user-specified e-mail address (also enabled and specified in the 'I' menu). Each e-mail is formatted to give the trial-by-trial data most relevant to the task (e.g., accuracy, response, response time).
NOTE: This option requires a small in-app purchase to access the 'I' menu (a one-time cost, which funds further toolbox development), and manual entry of the data from each e-mail into your preferred analysis software at the end of data collection.
3) For those using the apps for more informal purposes, we have also added some summary statistics on the end screen (this does not require an in-app purchase). These provide some of the most common indices for summarising performance on these measures. NOTE: These scores are derived from unprocessed data, and thus should be considered in this light. For larger-scale and more formalised uses, we recommend using full trial-by-trial data for analysis.
We have tried to create as many data collection options as possible, so hopefully one of these works for you.
A Note on In-App Purchases:
Note that our most recent app update involves a small in-app purchase to access the 'I' menu, as a means to collect some funds to continue toolbox development (to date, this has all been funded by one-off small grants). It is a one-off purchase that grants lifetime access. The purchase is associated with an Apple ID, so you can run multiple iPads with a single in-app purchase (just ensure all iPads use the same Apple ID). Please note that 100% of all proceeds will be put towards further development of the toolbox.
As researchers ourselves, we understand the importance of ensuring no data is lost. For this reason, we have implemented a number of fail-safes to assist in data management.
1) Each app's 'I' menu has a 'Last Data Sync' time stamp so you can quickly and easily see whether the data has been submitted to the specified database;
2) When database or e-mail logging is enabled, the iPad also keeps local logs of the data files that can be manually extracted in the unusual case that they do not submit to the database. As long as the app is not deleted, these logs remain accessible through iTunes. Please click here for a step-by-step guide to manual data extraction;
3) If you have a database set up, you can also have e-mail alerts sent whenever any data attempts to submit to the database (another check on whether data has been sent and whether the database successfully received it);
4) If you have a database set up, you can also create daily backups of this database to ensure that any data that is deleted or lost can be reinstated from one of these earlier backups.
While it is possible to simply use the summary statistics we provide on the end screen after each administration, note that these scores are derived from unprocessed data, and thus should be considered in this light. For larger-scale and more formalised uses, we recommend using full trial-by-trial data for analysis.
For the Mr Ant task, a point score is calculated as follows: beginning from level 1, one point for each consecutive level in which at least two of the three trials were performed accurately, plus 1/3 of a point for each correct trial thereafter. For a sample spreadsheet to facilitate this data cleaning process, please click here.
For the Not This task, a point score is calculated as follows: beginning from level 1, one point for each consecutive level in which at least three of the five trials were performed accurately, plus 1/5 of a point for each correct trial thereafter. For a sample spreadsheet to facilitate this data cleaning process, please click here.
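The Mr Ant and Not This point scores follow the same pattern and can be sketched in a single function. This is a minimal illustration under our own assumptions about data layout (trial-level correct/incorrect flags grouped by level, in order from level 1); the function and argument names are ours and not part of the apps.

```python
def point_score(levels, pass_threshold, trials_per_level):
    """Sketch of the point score: 1 point per consecutive passed level
    from level 1, then a fractional point per correct trial thereafter.

    `levels` is an ordered list (level 1 first) of lists of booleans,
    one boolean per trial (True = correct).
    """
    score = 0.0
    streak = True
    for trials in levels:
        correct = sum(trials)
        if streak and correct >= pass_threshold:
            score += 1.0  # consecutive level passed: full point
        else:
            streak = False  # streak broken: fractional credit only
            score += correct / trials_per_level
    return score

# Mr Ant: 3 trials per level, pass = at least 2 correct.
# Two passed levels, then one correct trial in the failed level.
mr_ant = point_score(
    [[True, True, True], [True, False, True], [False, True, False]],
    pass_threshold=2, trials_per_level=3)

# Not This would use pass_threshold=3, trials_per_level=5.
```

Note that, on this reading of the rule, correct trials in the level that breaks the streak also earn fractional credit, as do any later correct trials.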
For the Go/No-Go task, we start by removing all trials in which responding was faster than 300 ms (and thus unlikely to have been in response to the stimulus). We then remove all blocks in which the child was largely non-responsive (go accuracy below 20% and no-go accuracy above 80%) or indiscriminately responsive (go accuracy above 80% and no-go accuracy below 20%). From the resulting data, we then calculate an impulse control score (% Go Accuracy x % No-Go Accuracy), which reflects the child’s ability to withhold a response in the context of the strength of that typical (pre-potent) response. For a sample spreadsheet to facilitate this data cleaning and calculation process, please click here.
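The cleaning and scoring steps above can be sketched as follows. This assumes a hypothetical trial format of our own devising (dicts with 'type', 'correct', and 'rt' keys, where rt is in milliseconds and None for a correctly withheld response); it is not the apps' own code or export format.

```python
def impulse_control(blocks):
    """Sketch of Go/No-Go cleaning and the impulse control score.

    `blocks` is a list of blocks; each block is a list of trial dicts:
    {'type': 'go' or 'nogo', 'correct': bool, 'rt': ms or None}.
    """
    def accuracy(trials, kind):
        sub = [t for t in trials if t['type'] == kind]
        return sum(t['correct'] for t in sub) / len(sub) if sub else 0.0

    kept = []
    for block in blocks:
        # 1) Drop anticipatory responses (faster than 300 ms).
        trials = [t for t in block if t['rt'] is None or t['rt'] >= 300]
        go, nogo = accuracy(trials, 'go'), accuracy(trials, 'nogo')
        # 2) Drop largely non-responsive or indiscriminately
        #    responsive blocks.
        if (go < 0.20 and nogo > 0.80) or (go > 0.80 and nogo < 0.20):
            continue
        kept.extend(trials)
    # 3) Impulse control = go accuracy x no-go accuracy.
    return accuracy(kept, 'go') * accuracy(kept, 'nogo')
```

Accuracies are computed over the pooled retained trials; if your preferred convention is to average per-block scores instead, adjust accordingly.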
For the Card Sorting task, we tend to review the accuracy of Block 1 (pre-switch) and Block 2 (post-switch). Since the post-switch accuracy score is intended to index the extent to which a child could successfully switch from one sorting rule to the next, we swap the two scores if the post-switch accuracy is larger than the pre-switch accuracy. This ensures that the final post-switch score (Block 2 + Block 3) reflects the child’s ability to successfully switch between sorting rules. For a sample spreadsheet to facilitate this, please click here (note that the score swapping is done manually, however).
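The manual score swap described above can be sketched as a short function. Block accuracies here are assumed to be proportions (0 to 1); the function name is ours, for illustration only.

```python
def post_switch_score(block1_acc, block2_acc, block3_acc):
    """Sketch of the Card Sorting adjustment: if post-switch (Block 2)
    accuracy exceeds pre-switch (Block 1) accuracy, swap the two so
    the post-switch score indexes switching ability; the final
    post-switch score is then Block 2 + Block 3."""
    pre, post = block1_acc, block2_acc
    if post > pre:
        pre, post = post, pre  # swap so post-switch <= pre-switch
    return post + block3_acc
```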
For the Expressive Vocabulary task, if the ‘specify’ option was used for
any of a child’s responses to more clearly indicate what they said, these will automatically be scored as a 0
(incorrect). However, we give credit if the word produced contains the target word (e.g., the word is snow but
the child said snow globe), because they knew and were able to produce that word. We will also give credit for
highly common alternatives to the target word (e.g., the word is hippopotamus but the child said hippo). We do
not provide credit for alternate viable words, because words have been carefully selected to follow a particular
developmental sequence. Rather, if the child was unable to produce a target word initially, we would prompt with ‘What else might this be called?’ until they produce the word, or until we are satisfied that the child will not produce the target word (e.g., the child saying ‘I don’t know’). This balances our desire to establish whether the child can independently produce the word with ensuring that the child does not become frustrated and disengage from the task.
For the Child Self-Regulation and Behaviour Questionnaire, a number of items must be reverse-scored prior to combination into sub-scales. The following sub-scales can be derived as follows:
· Sociability: Items 1, 4, 9, 16(reversed), 22(reversed), 27, 32
· Externalising: Items 3, 20, 23, 26, 28
· Internalising: Items 17, 21, 25, 33, 34
· Prosocial: Items 15, 19, 24, 27, 30
· Behavioural Self-Regulation: Items 7(reversed), 13, 15, 29(reversed), 30, 31(reversed)
· Cognitive Self-Regulation: Items 5, 6, 8, 12, 18
· Emotional Self-Regulation: Items 2, 10, 11(reversed), 14(reversed), 23(reversed), 26(reversed)
For a sample spreadsheet to facilitate this data cleaning process, please click here.
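The reverse-scoring and sub-scale derivation above can be sketched as follows. Two assumptions to flag: item responses are taken to sit on a 1-5 scale (reversed items recoded as lo + hi - score; adjust `lo` and `hi` to the questionnaire's actual scale), and sub-scale scores are computed as item means (swap the mean for a sum if your scoring convention differs). Negative item numbers below are our own shorthand for reverse-scored items.

```python
# Negative item numbers mark reverse-scored items.
SUBSCALES = {
    "Sociability":                 [1, 4, 9, -16, -22, 27, 32],
    "Externalising":               [3, 20, 23, 26, 28],
    "Internalising":               [17, 21, 25, 33, 34],
    "Prosocial":                   [15, 19, 24, 27, 30],
    "Behavioural Self-Regulation": [-7, 13, 15, -29, 30, -31],
    "Cognitive Self-Regulation":   [5, 6, 8, 12, 18],
    "Emotional Self-Regulation":   [2, 10, -11, -14, -23, -26],
}

def csbq_subscales(responses, lo=1, hi=5):
    """`responses` maps item number -> raw score on the lo..hi scale.
    Returns each sub-scale as the mean of its (recoded) item scores."""
    out = {}
    for name, items in SUBSCALES.items():
        scores = []
        for item in items:
            raw = responses[abs(item)]
            # Recode reversed items: lo + hi - raw.
            scores.append(lo + hi - raw if item < 0 else raw)
        out[name] = sum(scores) / len(scores)
    return out
```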