One of the important issues for our schools is the ability to optimize their students' performance on state tests. One of my (perhaps overused) comments is that effective education will be reflected on any well-constructed test. If this is true, then we can utilize our own assessment program as a substitute for the state tests and, by optimizing performance on STAR, also optimize performance on the state tests.


That is the theory, so let’s look at the data that we have and the studies conducted by Renaissance to examine whether there is a strong enough relationship between STAR and the state tests to conclude that using STAR to determine students’ strengths and weaknesses and to monitor their progress during the year will result in optimal performance on the state tests.


Imagine Schools has conducted a study of the relationship between STAR results and state test outcomes both on a predictive (Fall STAR to state scores) basis and a concurrent (Spring STAR to state scores) basis. Renaissance has conducted linking studies to determine the usability of STAR to facilitate:

1. Early identification of students at risk of failing to meet yearly progress goals in reading and math, which could help teachers decide to adjust instruction for selected students.
2. Forecasting of the percentage of students at each performance level on the state assessments sufficiently in advance to permit redirection of resources and to serve as an early warning system for administrators at the building and district level.
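To make these two uses concrete, here is a minimal sketch of both. The student names, scores, cut score, and performance-level bands are all invented for illustration; in practice, the cut points come from each state's linking study.

```python
# Illustrative only: invented fall screening scores and cut points,
# not values from any actual linking study.
fall_scores = {"Student A": 540, "Student B": 615, "Student C": 588,
               "Student D": 655, "Student E": 702}

# 1. Early identification: flag students whose fall score projects
#    below a hypothetical proficiency cut.
PROFICIENT_CUT = 600  # hypothetical projection cut score
at_risk = [name for name, score in fall_scores.items()
           if score < PROFICIENT_CUT]

# 2. Forecasting: percentage of students projected into each
#    hypothetical performance level.
levels = {"Below Basic": (0, 550), "Basic": (550, 600),
          "Proficient": (600, 675), "Advanced": (675, 10_000)}
forecast = {level: sum(lo <= s < hi for s in fall_scores.values())
                   / len(fall_scores) * 100
            for level, (lo, hi) in levels.items()}

print(at_risk)   # students flagged for adjusted instruction
print(forecast)  # projected distribution across performance levels
```

With these invented numbers, Students A and C would be flagged for intervention, and the forecast would project 40% of the group at Proficient, which is the kind of building-level summary an administrator could act on before the spring test.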

Here are the conclusions from each of these studies that discuss the strength of the relationship between STAR results and state test results on both a predictive and concurrent basis.

Imagine Study
The relationships between the results of the STAR assessment and the state assessments in these states (Arizona, Florida and Ohio) are so strong that they would be considered good when looking at the SAME TEST given twice to the same subjects (test-retest reliability).

The relationships between the Fall administration of STAR and the Spring administration of the state tests (Arizona, Florida and Ohio) are an OUTSTANDING indication of predictive validity. The STAR assessment can be considered a proxy for the state assessments. Full text available on Inside Imagine: Click here

Renaissance Studies

STAR PARCC – Ohio

Correlations indicated a strong relationship between the STAR and PARCC tests. On average, the correlation between PARCC and concurrent STAR scores (i.e., STAR tests taken within +/- 30 days of the PARCC mid-date) was .79 for reading and .79 for math. Similarly, the average correlation between PARCC and predictive STAR scores (i.e., STAR tests taken earlier and projected to the PARCC mid-date) was .81 for reading and .81 for math. When projecting STAR scores to estimate PARCC performance, students were correctly classified as either proficient or not 83% of the time for reading and 84% for math.  Full text available: Click here
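As an illustration of what these two statistics measure, the following sketch computes a Pearson correlation and a proficiency classification-accuracy rate for paired scores. The scores and cut points are invented for illustration, not data from any of the studies cited here.

```python
# Illustrative only: invented STAR and state-test score pairs.
from statistics import mean

star = [475, 512, 538, 560, 591, 610, 644, 672, 698, 730]
state = [650, 668, 690, 701, 720, 735, 748, 761, 779, 795]

# Pearson correlation: covariance divided by the product of the
# standard deviations of the two score distributions.
mx, my = mean(star), mean(state)
cov = sum((x - mx) * (y - my) for x, y in zip(star, state))
var_x = sum((x - mx) ** 2 for x in star)
var_y = sum((y - my) ** 2 for y in state)
r = cov / (var_x * var_y) ** 0.5

# Classification accuracy: how often the two tests agree on
# proficient vs. not proficient, given hypothetical cut scores.
STAR_CUT, STATE_CUT = 585, 725  # hypothetical proficiency cuts
agree = sum((x >= STAR_CUT) == (y >= STATE_CUT)
            for x, y in zip(star, state))
accuracy = agree / len(star)

print(round(r, 2), accuracy)
```

A correlation near .80, as reported in these studies, means the two tests rank students very similarly; the classification-accuracy figure adds the practical question of whether a STAR projection puts each student on the correct side of the proficiency line.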

STAR PARCC – Colorado

Correlations indicated a strong relationship between the STAR and PARCC tests. On average, the correlation between PARCC and concurrent STAR scores (i.e., STAR tests taken within +/- 30 days of the PARCC mid-date) was .79 for reading and .79 for math. Similarly, the average correlation between PARCC and predictive STAR scores (i.e., STAR tests taken earlier and projected to the PARCC mid-date) was .81 for reading and .81 for math. When projecting STAR scores to estimate PARCC performance, students were correctly classified as either proficient or not 86% of the time for reading and 91% for math. Full text available: Click here

STAR PARCC – DC

Correlations indicated a strong relationship between the STAR and PARCC tests. On average, the correlation between PARCC and concurrent STAR scores (i.e., STAR tests taken within +/- 30 days of the PARCC mid-date) was .79 for reading and .79 for math. Similarly, the average correlation between PARCC and predictive STAR scores (i.e., STAR tests taken earlier and projected to the PARCC mid-date) was .81 for reading and .81 for math. When projecting STAR scores to estimate PARCC performance, students were correctly classified as either proficient or not 86% of the time for reading and 91% for math. Full text available: Click here

STAR PARCC – Maryland

Correlations indicated a strong relationship between the STAR and PARCC tests. On average, the correlation between PARCC and concurrent STAR scores (i.e., STAR tests taken within +/- 30 days of the PARCC mid-date) was .79 for reading and .79 for math. Similarly, the average correlation between PARCC and predictive STAR scores (i.e., STAR tests taken earlier and projected to the PARCC mid-date) was .81 for reading and .81 for math. When projecting STAR scores to estimate PARCC performance, students were correctly classified as either proficient or not 86% of the time for reading and 91% for math. Full text available: Click here

STAR FSA – Florida

Correlations indicated a strong relationship between the STAR and FSA tests. On average, the correlation between FSA and concurrent STAR scores (i.e., STAR tests taken within +/- 30 days of the FSA mid-date) was .81 for reading and .79 for math. Similarly, the average correlation between FSA and predictive STAR scores (i.e., STAR tests taken earlier and projected to the FSA mid-date) was .84 for reading and .81 for math. When projecting STAR scores to estimate FSA performance, students were correctly classified as either proficient or not 83% of the time for reading and 84% for math. Full text available: Click here

STAR STAAR – Texas

Correlations indicated a strong relationship between the STAR and STAAR tests. On average, the correlation between STAAR and concurrent STAR scores (i.e., STAR tests taken within +/- 30 days of the STAAR mid-date) was .77 for reading and .76 for math. Similarly, the average correlation between STAAR and predictive STAR scores (i.e., STAR tests taken earlier and projected to the STAAR mid-date) was .79 for reading and .77 for math. When projecting STAR scores to estimate STAAR performance, students were correctly classified as either proficient or not 83% of the time for reading and 84% for math. Full text available: Click here

The strength of the relationships reported in the Imagine study and in all of the state-specific Renaissance studies supports the conclusion in the Imagine study that STAR can be utilized as a proxy for any of the state tests. This means that by utilizing the STAR results to drive instruction at both the classroom and individual-student intervention level, and to monitor the progress of students, classrooms, grade levels and schools, the school's performance on the state assessments will be optimized as well.
