Pre-conference workshops

Monday, July 2, 2012


Barbara Byrne (University of Ottawa, Canada)

Testing measurement and structural equivalence across cultures: Procedures, issues, and complexities

A critical prerequisite to cross-cultural comparisons is knowledge that the measuring instrument is operating equivalently (i.e., is invariant) across the groups of interest. This structural equation modeling (SEM) workshop demonstrates the procedures involved in testing for evidence of both measurement and structural equivalence, including testing for latent mean differences. Participants are “walked through” all phases of the analytic process, from model specification in the computer input to interpretation of results in the computer output. Issues and complexities encountered when these tests encompass different cultural groups are identified and discussed. To gain the most from this workshop, some knowledge of, and experience with, SEM is recommended.
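
For readers who want a concrete preview of the analytic sequence, the sketch below shows how such invariance tests are commonly specified in the open-source R package lavaan. The workshop description does not prescribe particular software, and the data set (mydata), the indicator names, and the grouping variable (country) are hypothetical.

```r
# Minimal sketch of multi-group invariance testing with lavaan.
# All data and variable names are hypothetical.
library(lavaan)

# One-factor measurement model with four hypothetical indicators
model <- ' wellbeing =~ x1 + x2 + x3 + x4 '

# Configural model: same factor structure, all parameters free per group
fit.configural <- cfa(model, data = mydata, group = "country")

# Metric (weak) invariance: factor loadings constrained equal
fit.metric <- cfa(model, data = mydata, group = "country",
                  group.equal = "loadings")

# Scalar (strong) invariance: loadings and intercepts constrained equal;
# a prerequisite for comparing latent means across groups
fit.scalar <- cfa(model, data = mydata, group = "country",
                  group.equal = c("loadings", "intercepts"))

# Chi-square difference tests between the nested models
anova(fit.configural, fit.metric, fit.scalar)
```

A nonsignificant chi-square difference at each step is evidence that the added equality constraints hold, clearing the way for latent mean comparisons.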

 

Gary Canivez (Eastern Illinois University, USA)

Measurement matters: Applying psychological measurement principles in modern clinical testing and assessment

Measurement Matters is an INTERMEDIATE workshop designed to extend the knowledge and application of basic measurement principles to tests and assessment methods frequently used in applied psychological practice. As Weiner (1989) cogently noted, psychologists must “(a) know what their tests can do and (b) act accordingly” (p. 829). To follow Weiner’s advice, psychologists must possess fundamental competencies in psychological measurement: test score reliability, validity, utility, and norms. The importance of these competencies for ethical assessment and clinical practice cannot be overstated (Dawes, 2005; McFall, 2000). Scientific and ethical principles provide the foundation, and specific research methods and empirically supported interpretation practices are discussed in the context of tests of intelligence, psychopathology, and achievement, then extended to other measures. A brief review of commonly used reliability and validity approaches will be provided, but the primary focus of this workshop will be to illustrate the importance of more advanced considerations such as assessment of hierarchical test structures, incremental validity, diagnostic utility, differential diagnosis, and norms. Participants will be better able to critically evaluate psychometric information provided in test manuals as well as in the extant literature, which is essential for determining which tests or which test scores are adequately supported for specific clinical uses.
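
As one concrete illustration of the incremental-validity theme, the base-R sketch below asks whether a subtest score adds predictive value beyond a composite score. The variable names (achievement, fsiq, working_memory) are hypothetical and not drawn from the workshop materials.

```r
# Incremental validity via nested regression models (hypothetical data)
base.model <- lm(achievement ~ fsiq, data = scores)
full.model <- lm(achievement ~ fsiq + working_memory, data = scores)

# Is the R-squared change statistically significant?
anova(base.model, full.model)

# How large is the R-squared change?
summary(full.model)$r.squared - summary(base.model)$r.squared
```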

 

Kurt Geisinger (Buros Center for Testing, University of Nebraska-Lincoln, USA)

Evaluating tests: Fundamental concepts and skills for psychologists and researchers

This workshop will outline the ways in which tests should be evaluated. The procedures used by the Buros Center for Testing/Institute of Mental Measurements will be described. The primary orientation will be on current conceptions of validity and validation, including different approaches to producing validity evidence. Consideration will also be given to test development procedures, pre-testing, reliability, fairness, and scoring procedures. The adaptation of tests from one language and culture to another will be addressed. Finally, the "places" that those reviewing tests need to go to acquire information about tests will be discussed.

 

Ron Hambleton (University of Massachusetts at Amherst, USA)

Item Response Theory: Concepts, models, and applications

Many testing agencies and researchers would like to use item response theory (IRT) models for developing, scoring, and equating aptitude, achievement, and personality tests. IRT models can also provide the measurement underpinnings for new test designs such as multi-stage testing and computer-adaptive testing. In this half-day workshop, we will survey the following topics: (1) shortcomings of classical test theory that inspired the development of IRT models, (2) specific IRT models for fitting binary and polytomously scored data, (3) basics of item and ability parameter estimation, (4) graphical and statistical approaches for assessing model fit, (5) an introduction to IRT software, (6) test development using item information, and (7) equating of test scores. Because of the limited time available to cover these topics comprehensively, we will provide a bibliography to facilitate follow-up reading.
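
By way of preview, the sketch below fits the two-parameter logistic (2PL) model from topic (2) with the R package mirt, one of several IRT packages (the workshop's software survey is broader). The 0/1 response matrix responses is hypothetical.

```r
# Fitting a two-parameter logistic (2PL) model, in which the probability
# of a correct response is P(theta) = 1 / (1 + exp(-a * (theta - b))),
# with discrimination a and difficulty b. Data are hypothetical.
library(mirt)

fit <- mirt(responses, model = 1, itemtype = "2PL")

coef(fit, IRTpars = TRUE)   # item discriminations (a) and difficulties (b)
plot(fit, type = "trace")   # item characteristic curves
plot(fit, type = "info")    # test information (topic 6)
itemfit(fit)                # item-level model-fit statistics (topic 4)
fscores(fit)                # ability (theta) estimates for scoring
```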

 

Dragos Iliescu (SNSPA University & OS/D&D/Testcentral, Romania)

Test adaptation: the quest for equivalence

The workshop will address a number of topics related to test adaptation. The introduction will cover terminology, legal constraints, and the general test adaptation sequence. The bulk of the workshop will be dedicated to the problem of equivalence, focusing on linguistic equivalence, cultural equivalence, and psychological equivalence (construct and measurement equivalence). Each of these points will be illustrated with case studies. Finally, caveats in the test adaptation process will be underlined through a discussion of the sources of bias affecting test adaptation (Hambleton et al., 2004). The sources of bias will be discussed under three headings: (a) cultural and language differences, (b) technical aspects (design of the test, design of the adaptation process), and (c) interpretation of test results.
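
To make the bias discussion concrete, the base-R sketch below runs a Mantel-Haenszel screen for differential item functioning (DIF) between two language versions of an item, one common technique for flagging item-level bias. The workshop does not commit to this particular method, and the data frame dat with columns resp (0/1 item score), lang (language version), and total (matching score) is hypothetical.

```r
# Mantel-Haenszel DIF screen for one item (hypothetical data)
score.band <- cut(dat$total, breaks = 5)      # stratify on total score
tab <- table(dat$resp, dat$lang, score.band)  # 2 x 2 x K table
mantelhaen.test(tab)  # common odds ratio far from 1 suggests possible DIF
```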

 

Rob Meijer & Iris Egberink (University of Groningen, The Netherlands)

Analyzing non-cognitive data with item response theory: What can we learn from modern test theory?

Tests and questionnaires play a crucial role in psychological assessment. Both cognitive measures (e.g., intelligence tests) and non-cognitive measures (e.g., mood questionnaires, personality questionnaires) belong to the practitioner's toolkit in different fields of psychology. For example, in personnel selection procedures, personality questionnaires are often used alongside intelligence tests to assess whether a candidate is suited for a particular job. In the clinical field, both cognitive and non-cognitive measures are used for diagnostic purposes and to select the most appropriate treatment for the diagnosed disorder. Because psychological tests and questionnaires are used to make important decisions, high-quality standards for the construction and evaluation of these instruments are necessary. One framework for meeting such standards is item response theory (IRT). Although there has been no shortage of researchers demonstrating the potential of IRT in the cognitive domain, its use in non-cognitive measurement (e.g., personality, attitude, and psychopathology) has lagged behind. Nevertheless, applied researchers have begun to use IRT with greater frequency in recent years. The overarching aim of this workshop is to present different IRT modeling approaches and methods for analyzing non-cognitive data, to discuss applications of IRT modeling, and to comment on recent debates on whether to use dominance, unfolding, or forced-choice IRT models.
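
As one concrete example of a dominance IRT model for non-cognitive data, the sketch below fits Samejima's graded response model with the R package mirt. The package choice and the matrix likert of 1-5 responses are illustrative assumptions, not workshop prescriptions.

```r
# Graded response model for Likert-type personality items
library(mirt)

fit.grm <- mirt(likert, model = 1, itemtype = "graded")

coef(fit.grm, IRTpars = TRUE)  # discriminations and category thresholds
plot(fit.grm, type = "trace")  # category response curves per item
personfit(fit.grm)             # person-fit statistics, e.g., for spotting
                               # aberrant response patterns
```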

 

Richard Morey (University of Groningen, The Netherlands)

Programming in R for applied purposes

The R language is a versatile tool for data analysis with a growing user base. R's many strengths include interactive data analysis with powerful graphical capabilities, the flexibility of a scripting language, easily reproducible analyses, and a large body of contributed packages offering cutting-edge methodology. These strengths, however, are sometimes overshadowed by the fact that R's learning curve is steeper than that of other software. In this workshop, attendees will get an overview of data analysis in R, with an eye toward helping users move to R for everyday analyses. Topics will include graphical data analysis, common inferential techniques (such as linear models), and Monte Carlo simulations. Practical guidance will also be offered, including how to use R efficiently in your workflow and how to get help when you need it. At the end of the workshop, users will have enough familiarity with R to perform most of their analyses in R.
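
The flavor of these topics can be previewed in a few lines of base R; the data and effect size below are simulated purely for illustration.

```r
# Simulate data and fit a linear model
set.seed(1)
x <- rnorm(100)
y <- 0.5 * x + rnorm(100)
fit <- lm(y ~ x)
summary(fit)

# Graphical data analysis
plot(x, y, main = "Simulated data")
abline(fit, col = "red")

# Monte Carlo simulation: sampling distribution of the slope estimate
slopes <- replicate(1000, {
  x <- rnorm(100)
  y <- 0.5 * x + rnorm(100)
  coef(lm(y ~ x))[2]
})
hist(slopes, main = "Monte Carlo distribution of the slope")
```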

 

Aletta Odendaal (University of Johannesburg, South Africa) & Marié de Beer (University of South Africa)

Modern advances in dynamic testing: Practical solutions to identified concerns

The practice of dynamic assessment and the measurement of learning potential have received increased attention in multicultural contexts as a fair and equitable means of assessing learning capacity that is less influenced by socioeconomic background, educational background, and prior learning experiences. Various definitions attempt to capture the essence of dynamic assessment, with the test-train-retest strategy as the most universally recognizable element (Murphy & Maree, 2006). Given historical developments in the field of dynamic assessment, two broad approaches are distinguished: 1) the application of dynamic testing from a clinical, diagnostic, and remedial perspective, and 2) a psychometric, measurement-orientated, and comparative approach (De Beer, 2010). The two approaches share specific similarities as well as distinct differences. In addition, several concerns have been raised about the practical application and psychometric properties of such measurements. The workshop aims to explore how developments in modern measurement techniques and the use of computer technology in adaptive testing can address some of the problems typically associated with dynamic assessment. The following areas will be addressed in the workshop: (a) a brief overview of the history and development of dynamic assessment, (b) the value of dynamic assessment versus classical assessment, (c) identification of concerns and challenges from a developing-country perspective, (d) measurement of learning potential: can we assume equivalence?, (e) the use of Item Response Theory (IRT) and Computerised Adaptive Testing (CAT), and (f) supporting examples and exercises illustrating the use of IRT and CAT.
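
To give a feel for point (e), the sketch below implements a bare-bones CAT loop under a 2PL model in base R, with simulated item parameters; operational adaptive (and dynamic) testing systems involve considerably more machinery.

```r
# Minimal CAT loop: maximum-information item selection with maximum-
# likelihood ability estimation under a 2PL model (simulated items).
set.seed(42)
a <- runif(50, 0.8, 2.0)   # item discriminations
b <- rnorm(50)             # item difficulties
theta.true <- 0.7          # simulee's true ability
p <- function(theta, a, b) 1 / (1 + exp(-a * (theta - b)))

theta <- 0
administered <- integer(0)
resp <- integer(0)
for (step in 1:20) {
  # Select the unused item with maximum Fisher information at theta
  info <- a^2 * p(theta, a, b) * (1 - p(theta, a, b))
  info[administered] <- -Inf
  item <- which.max(info)
  administered <- c(administered, item)
  # Simulate the examinee's response to the selected item
  resp <- c(resp, rbinom(1, 1, p(theta.true, a[item], b[item])))
  # Re-estimate theta by maximum likelihood (bounded to [-4, 4])
  negll <- function(t) -sum(dbinom(resp, 1,
                      p(t, a[administered], b[administered]), log = TRUE))
  theta <- optimize(negll, c(-4, 4))$minimum
}
theta  # final ability estimate after 20 adaptive items
```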

 

Important Dates and Deadlines

Conference Dates:

July 3-5, 2012

July 2, 2012 (Pre-Conference Workshops)

 

Deadlines:

Submissions closed on 20 January 2012

Early bird registration closed on 15 April 2012

 

Second announcement of conference:

Download the 2nd Announcement of the 8th Conference of the ITC

 

_____________________________________

 

DIAMOND SPONSORS:

 

GMAC

 

NIP

 

SHL

 

_____________________________________

 

PLATINUM SPONSORS:

 

BPS

 

BUROS

 

Thomas