What type of student achievement data did you analyze?
I used My Lexia, an "actionable, norm-referenced performance data" program that supports students with phonological awareness, phonics, structural analysis, automaticity/fluency, vocabulary, and comprehension. I also used two programs from Renaissance Learning: STAR and AR 360 (Accelerated Reader 360), which assess students in foundational skills, reading informational text, reading literature, and language. The hard data from these programs lets me see where my students are, where they are headed, and how I can support them. I analyzed multiple reports from these three sources, which gave me data showing how my students were performing. This summative and formative data showed me the current reading level of each of my students.
What were the main findings generated from the analysis of your data?
I noticed that there was some discrepancy among the three programs. Some of my students seemed to do a lot better in STAR compared to AR and Lexia. I think this is probably because STAR is a program some of them are more familiar with, and perhaps its questions and ongoing formative assessments focus more on vocabulary and certain comprehension skills. I also noticed that the students who struggle in reading are the ones who place low in all three of the data sources I was looking at. Lastly, I was able to get a more detailed understanding of each of the programs and what they claim to do for students. Both companies claim to provide actionable, norm-referenced performance data that allows me to see what my students need.
Share 5 questions that the data sparked.
How is the data gathered?
I'm aware of some of the questions and the progression for one of the programs, but I'm wondering how they gather this data.
Are these programs aligned with the CCSS?
I was looking at this information and began to think about its alignment with the Common Core State Standards.
How similar is the content that assesses the students across the three programs?
I was wondering whether one is easier than the others, and maybe that's why some students do a lot better.
Which of the three is more detailed in the data it gathers?
The layout of each of these data reports is very different, and some offer very specific information. I am wondering which one is giving me more information than the others.
How accurate is the data gathered by each of these sources?
I know that all tests have various factors that can affect the outcome of the results, but I'm wondering how they compute the data that gives me the reading levels of each of my students.
*How do they compare with my weekly in-class assessments?
Name 3 priority needs and mention which one of these seemed most urgent.
Inferencing, vocabulary (multiple-meaning words), and point of view
I noticed that these were the areas most of my students were scoring low in. Even though a lot of them met the benchmark, these areas were their lowest in comparison with the other skills.
I would say that vocabulary and inferencing seem most urgent. Multiple-meaning words would be a simple one to tackle because I can review various words through simple mini lessons that focus on looking at context and memorizing some of these words and their multiple meanings. I would then create more opportunities for the students to work on inferencing, which is a reading skill that will require more planning but is of equal importance.
Which target group did you select to work with and why?
I decided to focus on my four lowest students based on the data that I have. I want to make sure that I support all my students so that they don't keep falling behind in their reading. I see how tough it is for students who struggle to catch up in the higher grades, so I want to make sure that this doesn't happen.