Welcome!    Welcome to the website for the Language Assessment Lab at Indiana University. Directed by Dr. Sun-Young Shin, the Language Assessment Lab (or LAL) is devoted to understanding and advancing the theory,...

Trip to MwALT    We Language Testing Lab folks went to the University of Illinois at Urbana-Champaign to attend the annual MwALT conference held on October 5-6th. The...

SLRF    Dr. Shin and Ryan Lidster presented their work, “Dictogloss as dynamic assessment?” at Second Language Research Forum...

AAAL (2013, Dallas, TX)    American Association for Applied Linguistics (AAAL) 2013, Dallas, TX. Dr. Shin, Ryan, and our former LAL members Stacy and Rebecca presented their recent research project on The influence of the learners’...

MwALT 2013 at MSU    Dr. Shin, Ryan, and our former LAL members Rebecca and Stacy presented their papers on “The Effects of L2 Proficiency Differences in Pairs on Idea Units in a Collaborative Text Reconstruction Task”...

Projects

We are currently working on the following language assessment projects:

1. Operationalizing and Investigating the Measurement of Intelligibility of Different Varieties of English to Inform the Design of Listening Assessment
Introducing non-native varieties of English into a second language (L2) listening comprehension test may enhance authenticity and broaden construct representation, but at the same time it may cause test bias against test takers who do not share a first language (L1) background with the speaker. Unfortunately, this dilemma for ESL listening test developers and users has not yet been resolved, because previous research on the effects of a shared L1 on L2 listening test scores has produced only conflicting results without explicating the circumstances under which a shared-L1 effect takes hold. To our knowledge, this is mainly due to a lack of systematic control of important variables affecting test takers’ performance on a listening test. Thus, the proposed study investigates the potential for a shared-L1 effect on TOEFL iBT (internet-based test) listening test scores, not only by conducting Differential Item Functioning (DIF) analyses to understand how accented speech may become a source of test bias, but also by controlling key factors, including texts, item types, and the degree of intelligibility, comprehensibility, and accentedness of the L2 speech, along with pronunciation variables that may affect test takers’ performance on a listening test. Results of this study will help test developers make an informed decision about whether L2 varieties can be included in the TOEFL listening section and, if so, to what extent and under which conditions they should be implemented.
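To make the DIF step concrete, below is a minimal sketch of a Mantel-Haenszel DIF check for a single dichotomous listening item, written in Python. It assumes NumPy arrays holding 0/1 item scores, examinees' total scores (used as matching strata), and a boolean reference/focal split by L1 background; the function name, data layout, and flagging note are illustrative assumptions, not the study's actual instruments or procedures.

import numpy as np

def mantel_haenszel_dif(item_scores, total_scores, is_focal):
    """Return the MH common odds ratio (alpha_MH) and ETS delta for one item."""
    num = 0.0  # sum over score strata of A_k * D_k / N_k
    den = 0.0  # sum over score strata of B_k * C_k / N_k
    for stratum in np.unique(total_scores):
        in_stratum = total_scores == stratum
        ref = in_stratum & ~is_focal
        foc = in_stratum & is_focal
        a = np.sum(item_scores[ref] == 1)   # reference group, correct
        b = np.sum(item_scores[ref] == 0)   # reference group, incorrect
        c = np.sum(item_scores[foc] == 1)   # focal group, correct
        d = np.sum(item_scores[foc] == 0)   # focal group, incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    alpha_mh = num / den if den > 0 else float("nan")
    delta_mh = -2.35 * np.log(alpha_mh)  # ETS delta metric; larger |delta| indicates larger DIF
    return alpha_mh, delta_mh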

2. Developing and Validating Achievement-Based Assessments of Student Learning Outcomes
Until recently, our Intensive English Program (IEP), like many, used a test of global proficiency to assess student readiness for advancement, despite the curriculum’s stated goal and the requirement of many language program accreditation agencies, including the Commission on English Language Program Accreditation, that students be advanced based on their level of achievement of student learning outcomes (SLOs). Typically, to measure SLO achievement, IEPs use course grades either alone or in combination with a standardized assessment tool. However, grades are composite measures that include many factors beyond SLO mastery and are subject to variation in the means of assessment and in rating severity across instructors. Meanwhile, score gains on standardized proficiency tests cannot be linked directly to SLO achievement. Assessing SLO mastery in a reliable and valid manner therefore presents a formidable challenge for language programs, but doing so is crucial for evaluating program effectiveness and for demonstrating to stakeholders what students in the program actually become able to do through instruction. We are therefore examining the development and validation of a set of in-house language assessments specifically designed to measure mastery of curricular learning outcomes in a multi-level IEP housed at a large Midwestern university. There were three primary goals for developing a battery of achievement-oriented assessment instruments for each level. First, developing the assessments would lead instructors and administrators to refine and operationalize the curriculum’s stated goals and to evaluate the appropriateness of its current design and implementation. Second, the assessments would provide models that enable students and novice instructors to understand what is expected of successful students at each level. Third, performance data would provide valuable information on how students were actually performing at different levels over time, which in turn could inform curriculum revision.

3. Divisibility of Textual Knowledge in L2 Reading and Writing Tasks
The purpose of this study is to help us understand how English textual knowledge is engaged in students’ performance on reading and writing tasks. Interpreting and producing extended discourse in a cohesive and coherent manner are important to both second language (L2) readers and writers when they organize and synthesize ideas in a given text. It has been well documented in previous research that cohesion and coherence are two distinct constructs of discourse competence or textual knowledge, particularly in writing. Cohesion is often conceptualized as representing intersentential relationships at the surface level of a text. In contrast, coherence concerns the quality of the mental representation of the text based on semantic relationships. In an effort to understand the relationship between these two traits of textual knowledge, previous research has focused primarily on the text rather than on the reader or the writer, and the majority of studies have examined exclusively the role of cohesive devices and coherence in writing quality. However, to date, the extent to which different traits of textual knowledge are captured in both reading and writing tasks has not yet been investigated. In that vein, we are investigating the nature and relationship of two components of textual knowledge, cohesion and coherence, as measured by a multiple-choice reading test and essay tasks.
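As a purely illustrative sketch of the surface-versus-semantic distinction described above, the Python below computes two crude text indices: the share of sentences containing an explicit connective as a surface cohesion proxy, and average word overlap between adjacent sentences as a rough coherence proxy. The connective list and overlap measure are assumptions for illustration only, not the coding schemes or instruments used in this study.

import re

# Illustrative connective list; a stand-in, not the study's coding scheme.
CONNECTIVES = {"however", "therefore", "moreover", "furthermore", "thus",
               "in addition", "for example", "on the other hand"}

def split_sentences(text):
    return [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]

def cohesion_index(text):
    """Surface proxy: share of sentences containing an explicit connective."""
    sentences = split_sentences(text)
    if not sentences:
        return 0.0
    hits = sum(any(c in s.lower() for c in CONNECTIVES) for s in sentences)
    return hits / len(sentences)

def coherence_index(text):
    """Semantic proxy: average word overlap (Jaccard) between adjacent sentences."""
    words = [set(re.findall(r"[a-z]+", s.lower())) for s in split_sentences(text)]
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    return sum(len(a & b) / max(len(a | b), 1) for a, b in pairs) / len(pairs)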
The purpose of this study is to help us understand how English textual knowledge is engaged in students’ performance on reading and writing tasks. Interpreting and producing extended discourse in a cohesive and coherent manner are important to both to the second language (L2) readers and writers when they organize and synthesize ideas in a given text. It has been well documented on previous research that cohesion and coherence are two distinct constructs of discourse competence or textual knowledge particularly in writing. Cohesion is often conceptualized as representing intersentential relationships at the surface level of a text. In contrast, coherence is related to producing a quality of the mental representation of the text based on semantic relationship. In an effort to understand the relationships of these two traits of textual knowledge, previous research has focused primarily on a text rather than either a reader or a writer, and the majority of research has been exclusively conducted on the role of cohesive devices and coherence in writing quality. However, to date, it has not yet been investigated that the extent to which different traits of textual knowledge are captured in both reading and writing tasks. In that vein, we are investigating the nature and relationship of two components of textual knowledge, cohesion and coherence, as measured by a multiple-choice reading test and essay tasks.