Measuring Development and Behavior
Even though many domains of development and behavior work together for a child to demonstrate a skill, when we are measuring development we want to look at each domain as separately as we can. Why? Consider this example: If we want to see how well a child stacks blocks (a fine motor task) but give lengthy and complex verbal directions (meaning lots of receptive language skill is needed to understand them) when we ask him to stack the blocks, and he then fails to perform, is it a motor problem or a receptive language problem? We can’t tell, because he might have failed because of receptive language deficits. So for screening motor skills, we use very simple commands, including gestures and examples, to make sure we are measuring just motor skills (e.g., “Watch this. Now you do it.”). Once we can see that motor skills are OK, we can try more complicated commands that tap receptive language skills more heavily (e.g., “Put the block ON the chair”).
The same effort to scrutinize development in each domain as separately as possible also applies to use of a tool like PEDS, which elicits parents’ concerns. For example, if a mom raises a concern such as, “My son won’t mind me or do anything I ask,” PEDS questions are designed to help her think carefully about each domain and thus about other possible reasons for her child’s misbehavior. We want her to consider things like, “Is my child able to hear me?” (health/sensory issues); “Does my child understand what I’m saying?” (receptive language); “Is my child physically able to do what I’ve asked?” (motor + health); “Is my child able to tell me when he doesn’t understand or needs help to do what I’ve asked?” (expressive language). Answers to PEDS questions help parents and professionals consider whether there are other likely causes for behavior issues so we can plan the most appropriate interventions.
So ideally, when viewing how well children are doing, we want to look at each domain carefully to determine whether there are delays compared to same-age children. Yes, we all have strengths and weaknesses, but when children have substantial delays or disabilities in one domain, we also need to anticipate that there will probably be problems acquiring skills in other areas. Ideally, we want to see relatively even development across all domains (while also recognizing that children with enormous intellectual strengths will usually still have motor and social skills more in keeping with those of their same-age peers and so may be understandably frustrated at times).
Types of Measures: ‘Different Strokes for Different…Settings’
There are three broad types of tools: Diagnostic, Assessment, and Screening. This is how these tools work:
Diagnostic Measures: This refers to a battery of in-depth tests administered by a range of skilled professionals. Diagnostic measures are required for determining eligibility for public school special education services and thus used mostly for children 3 years and older. Typically each professional on a multi-disciplinary team administers one or more diagnostic tests within their area of expertise in order to produce a range of scores including quotients for each domain. Some examples of the professionals often involved and the kinds of skills they test are:
for physicians—a detailed neuromotor exam viewing muscle tone, strength, balance, coordination; health/health problems; family/child medical history, attention and hyperactivity issues, etc.
for social workers—a lengthy family history that focuses on current living arrangements, social support, domestic violence, substance abuse, parents’ mental health, social service needs, etc.
for speech-language pathologists—measures of pragmatics (everyday communication); articulation (ability to produce the sounds of words); prosody (the flow of language and how most of us rely on the rise and fall of pitch to convey meaning, e.g., questions rise in pitch at the end of a sentence); receptive language skills (including understanding various types of sentences and receptive vocabulary); and expressive language (the words we can speak at will and the types of sentences we can construct for optimal communication);
for psychologists—detailed evaluation of various types of thinking skills such as how we solve problems with objects/pictures, versus words or numbers; rate of learning and taking in new information, memory of various kinds, etc.; and self-help/adaptive behavior skills.
for educators—measures of various kinds of academic skills, including awareness of sounds in words, association of sounds with letters, ability to recognize common words in books and safety signs, ability to understand what is read, knowledge of math facts and ability to use those in practical ways, spelling and handwriting, ability to express one’s self in writing, and ability to study well and stay organized (e.g., find and turn in homework), etc.
Diagnostic measures lead to a diagnosis (such as the ones described in the next section, which covers disabilities and delays). Extensive multi-page reports are borne out of such testing. Needless to say, diagnostic testing is slow (often 1 to 2 days of testing is needed) and expensive, and should be reserved for children who are clearly struggling with age-appropriate tasks.
Assessment Measures: A less expensive alternative to diagnostic testing is to use an assessment test. A single assessment test suffices, rather than a battery, and it requires administration by only a single professional. Assessment measures:
Provide age-equivalent scores across many domains (e.g., one for receptive language and one for expressive language, one for fine motor and one for gross motor, and one each for self-help, social-emotional, and pre-academic/academic skills).
Are able to indicate the extent of strengths and weaknesses (e.g., about how far below average a child is performing AND often how far above).
Usually take 30 – 60 minutes to complete.
Are often used to enhance the speed and efficiency of Early Intervention Intake and for foster care/child welfare services or Neonatal Intensive Care Unit Follow-up Clinics where children have elevated risks for problems and are thus more likely to have substantive strengths and weaknesses.
Are helpful for research projects because they are much less expensive than diagnostic measures (described above) but still offer results that are responsive to statistical analyses (e.g., a continuous metric to describe outcomes rather than a binary cutoff score).
Although assessment tools are not short enough for busy clinics and do not lead to a diagnosis, they are helpful for Early Intervention services because they can determine delays, require only short reports, and are easy to use longitudinally to track progress. More information on assessment tools is provided in the NICU/EI module on www.pedstest.com.
Screening Measures: Screening tools are very brief measures (5 – 15 minutes) that:
Sort children who probably have problems from children who probably do not.
Render a pass/fail or cutoff score to decide when further testing is needed.
Do not provide a diagnosis but should still give us a bit of information about what kinds of further testing would be helpful (e.g., whether a speech-language pathologist, an autism specialist, or a developmental or school psychologist should evaluate further). In response to all failed screens, vision, hearing, and lead levels should be checked before referring.
Are usually used in busy public health and primary care clinics.
May under-detect (meaning that a few children will be missed). BUT screens should be repeated over time (e.g., at each well-visit) and are short enough to make that easy to do.
May over-refer (meaning that some children who fail screens will not be found eligible for EI services). BUT… over-identified children should be watched closely because they sometimes have emerging problems, and so wherever possible should be enrolled in quality prevention services such as Head Start, good day care programs, etc.
Screening tests come in two types: broad-band (also called general), meaning that all or most domains are measured; and narrow-band (also called specific), meaning a focus on detecting a single disability (e.g., autism spectrum disorders, articulation impairment, etc.). For most of us, a broad-band screen is sufficient, although recent policy mandates addressing better detection of autism spectrum disorder require administration of both a broad-band screen and an autism-specific screen.
More information on quality tools for early detection appears later in this module.
Methods for Measuring Child Development
There are three broad measurement methods, described below along with the pros and cons of each approach. Please note that some tests use a combination of approaches, which can be helpful depending on circumstances.
Hands-on (also called direct-elicitation)
With hands-on measures, an examiner asks the child to perform various tasks. Most diagnostic tools rely on this approach, with the exception of mental health and self-help skills, since these require information from parents (e.g., we are probably not able to ask a child to show us how he or she takes a bath: adjusts the water temperature, washes, dries off, picks out clothes to wear, brushes teeth, etc. Nor, in diagnostic testing, which is almost always a one-on-one encounter, most typically within an exam room with only a table and chairs, can we observe much about interactions with other children, siblings, etc.).
The disadvantages of hands-on measurement include:
Children may not be cooperative (especially infants and toddlers, who may well be asleep, hungry, unaccustomed to strangers, not feeling well, or just in a very oppositional phase and so none too compliant with requests to perform).
Examiners need more skill in managing children during testing (e.g., they need to know how to build rapport with children, manage challenging behavior during testing, and have abundant familiarity and lots of practice with test materials and questions so as to present test items swiftly without fumbling around with scoring or what to present next).
Examiners need to be sensitive to behavioral cues that can indicate a child is struggling with certain skills and is well aware of his/her weaknesses. For example, if a child balks when presented with one type of task (e.g., fine motor), examiners need to consider switching to measurement of a different domain (e.g., language) and, once success, and thus rapport and compliance, is established, decide when to return to the difficult areas (most often starting with much easier items).
Examiners are usually much aided in deciding where to start with items and domains by interviewing parents a bit before testing. This can give examiners a sense of the areas in which a child is most likely to be successful and thus “a very good place to start.”
Hands-on tools usually take more time than other approaches.
Hands-on administration has advantages at times, such as when:
A child is newly placed in foster care and so the new caretaker doesn’t know much about what the child can do
The accompanying parent isn’t a knowledgeable caretaker (e.g., a teen mom whose parents or relatives do more of the child’s care)
The examiner feels the caretaker isn’t a reliable reporter of children’s skills (e.g., a parent who is obviously high or just defensive--which may occur if testing occurs during the process of determining whether a child should be removed from the home)
When we want to teach young professionals about child development and help them acquire testing skills (e.g., building rapport, managing children’s behavior, etc.).
Observation (sometimes called play-based assessment). Some test questions can be scored by displaying tempting materials and then simply watching a child’s behaviors and skills. For example, if we give an infant or toddler a bit of cereal, we can score items such as whether they use a superior pincer grasp (tip of index finger plus tip of thumb) to pick up the cereal, while we also watch how well the child chews it (gumming versus chomping with teeth) and how well he/she swallows (chokes, or uses the tongue to get food into the back of the mouth). If we follow that task by offering a cup of water, we can then note how well a child holds and controls the cup, and whether the water is sucked out or instead poured into the mouth in small amounts, as older children and adults do. We can also see how the child conveys an interest in getting more food (e.g., gazing at cereal out of reach, looking at the examiner and then back at the food, vocalizing while looking at the examiner, using single words like “more” or phrases like “want more,” and whether he or she names the food, like “Cheerios,” etc.). We can also observe how well a child sits unsupported, reaches for things, and performs other simple motor skills.
Observation methods are often used with very young children (and with older children when measuring interactions with others, i.e., social-emotional and behavioral skills). But observation methods don’t work well when we need to look at a child’s ability to name or understand specific words or at their ability to demonstrate skills in reading, math or handwriting. For measuring these skills, we need to use hands-on measurement, information from parents or a combination of the two approaches.
Information from parents
There are two different methods of gathering information from parents—parent report and parents’ concerns. Each approach offers different kinds of information and both have enormous value.
Parent Report: This approach presents parents with descriptions of milestone-type tasks that most children of the same age can perform. Parents are asked to read (or sometimes listen to) a description of a skill and then to tell whether their child can perform it or not. Parents, by virtue of the substantive time they have to observe their children, can be quite accurate reporters of current skills. Some examples of parent-report measures are: PEDS:Developmental Milestones, the Ages and Stages Questionnaire, the Modified Checklist for Autism in Toddlers, etc.
The disadvantages/challenges with parent-report measures may include:
a) Literacy demands (when parents are required to read and respond to written measures), although this is true for any measure that doesn’t have an interview option. When an interview option is available, carefully ask parents a question such as “Would you like to complete this on your own or have someone go through it with you?” Otherwise, parents can usually read words like “yes” or “no,” and if their literacy is poor they may just circle answers, so results will be suspect unless parental literacy is probed.
b) Length: Some parent-report measures (although not all) exceed the span of time we allocate for parents to answer questions (e.g., the time we expect them to spend in waiting/exam rooms or the time we expect to spend during a home visit). When tools aren’t completed in the anticipated time frame, this complicates work flow, lengthens visit time, etc.
c) Time and expense: Parent-report tools must present varying sets of items at various ages, and so some tests (but not all) require photocopying and retrieving different forms for different ages (e.g., copying in pink for one age, orange for the next). This requires time for photocopying, organizing copies, allocating space to store them, and then making sure examiners have the correct form for the child’s age.
d) Reporting challenges: Parents often report success with skills that are only emerging and not yet mastered (meaning not fully generalized and demonstrable in unfamiliar settings). However, quality parent-report tools account for this, usually by having at least a three-option multiple-choice response (e.g., “rarely—sometimes—most of the time”).
e) Ability to capture disordered development: Although skill-focused multiple-choice questions are helpful in teaching parents and providers about child development, they may not capture disordered development (e.g., a child may be using age-appropriate three-word utterances, but if he is just saying “Wheel of Fortune” over and over, that’s a problem that might not be detected by skill-focused tools).
f) Lack of information about parents’ unique challenges: Skill-focused questions do not give parents an opportunity to describe their specific developmental-behavioral challenges (e.g., bed-time or eating problems). Lacking information about parents’ unique issues, professionals are less able to respond with appropriate advice and specific referrals. As a consequence, true parent-professional collaboration and communication is lacking, and encounters may lack relevance to families, which in turn may deter follow-through with recommendations.
g) Lack of hands-on options: Hands-on administration options are lacking for some parent-report tools (e.g., the ASQ) but not for all (e.g., PEDS:DM). While letting parents complete skill-focused questions on their own (by having their children demonstrate) is an efficient approach to measurement, there are times when professionals rightly worry about the accuracy of parents’ reporting (see the hands-on measurement discussion above); so a tool that encourages parents to elicit children’s skills but also allows professionals to use interview as well as hands-on measurement (including observation) offers much-needed flexibility.
The advantages of parent-report measures are that they:
a) Often require less time than hands-on or observation measures;
b) Are easy to administer (since we don’t have to grapple with children’s behavioral issues, fatigue, hunger, etc.);
c) Are known to help parents learn about age-appropriate skills (something most parents want information about anyway);
d) Help track progress (e.g., via the longitudinal growth forms for the ASQ and PEDS:DM), which is important for all professional services but critical for early childhood programs and for follow-up studies where we want to see that children are learning, i.e., acquiring skills;
e) Are, when quality tools are deployed, as accurate as hands-on measures.
Parents’ Concerns: This approach to early detection involves eliciting parents’ observations and child-rearing issues in their own words and then addressing those specific concerns with referrals, parenting information, etc. We can all relate to the appreciation we feel when we are asked our opinions and have them listened to and taken seriously. Parents are no exception. So eliciting and addressing parents’ concerns is the most collaborative approach to early detection. Yet this method too has advantages and disadvantages:
Informal/ad-hoc lines of questioning are a mess and create needless disadvantages such as:
a) Use of informal, non-validated questions may not “speak to parents.” For example, the word “worries” doesn’t encourage families to talk (since they may not be sure yet that they are, in fact, worried; maybe they are just noticing and thus just concerned).
b) Many professional terms are not known to families (e.g., “development,” “gross motor,” “expressive language”). So carefully tested and validated questions are needed.
c) Translations, if poorly done, often use questions that don’t work in various cultures. For example, the word “concerns,” which is prominent in PEDS, didn’t work in Somali (because it turned out to be a popular warlord slogan, e.g., “we are concerned about you,” and parents thought providers were spying on their families back home)! Careful vetting of translations among providers and families is essential, and that’s one of the things validated measures offer.
d) With informal questions, professionals inevitably flounder in making accurate decisions. In such cases, they tend to under-refer and “wait and see,” and so will miss ~70% of children with problems. With the evidence validated tools offer, providers are shown when it is better to refer. Again, quality tools are much needed!
Validated approaches to eliciting/addressing parents’ concerns have challenges as well:
e) Parents’ concerns do not offer a detailed way to track children’s progress in learning new skills. Decreases in parents’ concerns over time suggest satisfaction with services and a sense of the child’s improvement, but early childhood programs will need skill-focused tools to also illustrate children’s growth.
f) Order effects can occur. For example, if parents’ concerns are elicited after they’ve been asked to report on children’s skills (as is the case with the ASQ), parents sometimes think they themselves are being tested on knowledge of child development and thus may raise needless concerns. Eliciting parents’ concerns first is needed to ensure accuracy.
g) Some professionals, no matter how often they are shown the validity of parents’ concerns, just want skill-focused tools, and so may mistrust the accuracy of a tool like PEDS and over-ride its evidence with “junk science.” (Ideally we should encourage the collaboration afforded by eliciting parents’ concerns, but for providers who prefer skill-based measures, we should also encourage use of tools such as PEDS:Developmental Milestones, the ASQ, etc.) And, as you will see in the policy/mandate section of this module, this is the wise recommendation of the American Academy of Pediatrics, whose policy is widely adopted by other professional societies.
h) As with parent-report measures, parents may not have thought about development as professionals do, i.e., as a range of domains. They may not feel comfortable revealing their concerns, especially to a stranger, the first time they are asked. Nevertheless, by the second administration parents have usually contemplated and observed more carefully and so are better able to answer. Repeated screening is needed.
i) A parents’-concerns measure like PEDS sometimes (~20% of the time in the case of PEDS) calls for additional screening using a different approach. This isn’t a serious disadvantage, since it is wise both to elicit/address parents’ concerns and to view children’s milestones, and both types of screens can be conducted swiftly. But for trainers/trainees, it means that learning two screens is valuable (e.g., PEDS + PEDS:DM; PEDS + ASQ; PEDS + Brigance Screens).
The strengths of eliciting and addressing parents’ concerns, if using a standardized validated screen, are many:
a) Professionals come to understand parents’ unique child-rearing issues and are better able to discern disorder from delay (e.g., a child may well pass a fine motor item about the ability to pick up a Cheerio, but only the parent’s own description will alert you to the presence of a concerning tremor). Similarly, only if we ask parents will we hear their concerns about fatigue, misbehavior, frequent illnesses, and other critical indicators that something’s not right.
b) Parents learn that their observations and parenting questions are truly of interest to professionals;
c) As a consequence of improved collaboration with professionals, parents, especially those with limited education who tend to be problematically reticent, are more likely to raise concerns.
d) When parents have a chance to express their concerns, they are far more likely to keep their appointments (e.g., for well-visits and parent-teacher conferences).
e) Questions about concerns, if carefully written, probing all domains, and proven to work, i.e., standardized, reliable, and validated, help parents think about development as professionals do—as relatively discrete skill areas.
f) Professionals are better able, given parents’ precise worries, to focus child-rearing advice and referral recommendations toward parents’ and children’s unique needs.
Summary and Recommendations about Measurement Approaches
Bottom line: it is wise to elicit and address parents’ concerns at each encounter. It is also wise to periodically view children’s skills (via a parent-report tool, an observational measure, or a hands-on measure).