The Death of Testing and the Rise of Learning Analytics

I know that it is sad news for some, but more than a few of us have assessed the situation, and the prognosis is not good for our friend (or, for others of us, the archenemy), the test. We might be witnessing the death of testing. Tests are not going away tomorrow or even next year, but their value will fade over the upcoming years until, finally, tests are, once and for all, a thing of the past. At least that is one possible future.

Tests are largely a 20th century educational technology that had no small impact on learning organizations around the world, not to mention teachers and students. They’ve increased anxiety, kept people up all night (often with the assistance of caffeine), and consumed large chunks of people’s formative years.

They’ve also made people lots of money. There are the companies that help create and administer high-stakes tests. There are the companies that created those bubble tests and the machines that grade them. There are the test proctoring companies, along with the many others that have created high-tech ways to prevent and/or detect cheating on tests. There are the test preparation companies. There are even researchers who’ve done well as consultants, helping people design robust, valid, and reliable tests. Testing is a multi-billion-dollar industry.

Given this fact, why am I pointing to the death of the test? It is because of the explosion of big data, learning analytics, adaptive learning technology, developments around integrated assessments in games and simulations, and much more. These technologies are making, and will continue to make, it possible to constantly monitor learner progress. Assessment will be embedded in the learning experiences. When you know how a student is making progress and exactly where that student is in terms of reaching a given goal, why do you need a test at the end? The student doesn’t even need to know that it is happening, and the data can be incredibly rich, giving insights and details often not afforded by traditional tests.
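
To make the idea concrete, here is a minimal sketch (in Python, with invented events and field names) of what embedded assessment can look like at the data level: every practice interaction becomes an event, and a learner’s current standing can be read from the activity stream at any moment, with no end-of-unit test required.

```python
from collections import defaultdict

# Hypothetical event stream: each practice interaction is captured as it happens.
# The field names are invented for illustration, not taken from any product.
events = [
    {"student": "ana", "skill": "fractions", "correct": True},
    {"student": "ana", "skill": "fractions", "correct": True},
    {"student": "ana", "skill": "decimals", "correct": False},
    {"student": "ben", "skill": "fractions", "correct": False},
    {"student": "ben", "skill": "fractions", "correct": True},
]

def mastery_estimates(event_stream):
    """Return a running proportion correct for each (student, skill) pair."""
    attempts = defaultdict(int)
    correct = defaultdict(int)
    for event in event_stream:
        key = (event["student"], event["skill"])
        attempts[key] += 1
        correct[key] += int(event["correct"])
    return {key: correct[key] / attempts[key] for key in attempts}

# At any moment, we can ask where each learner stands, without a separate test.
for (student, skill), score in sorted(mastery_estimates(events).items()):
    print(f"{student} / {skill}: {score:.0%} correct so far")
```

A real product would use a richer model than a running proportion correct (Bayesian knowledge tracing, for instance), but the principle is the same: the evidence of learning is gathered while the learning happens.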

Such embedded assessment is the exception today, but not for long. That is why many testing companies and services are moving quickly into the broader assessment space. They realize that their survival depends upon their capacity to integrate in seamless ways with content, learning activities and experiences, simulations, and learning environments. This is also why I have been urging educational publishing companies to start investing in feedback and assessment technologies. This is going to be critical for their long-term success.

At the same time, I’m not convinced that all testing will die. Some learning communities will continue to use tests even if they are technically unnecessary. Tests still play a cultural role in some learning contexts. My son is in martial arts, and the “testing day” is an important and valued benchmark in that community. Yes, there are plenty of other ways to assess, but the test is part of the experience in this community. The same is true in other learning contexts. Testing is not always used because it is the best way to measure learning. In these situations, testing will likely remain a valued part of the community. In some ways, however, this helps to make my point. Traditional testing is most certainly not the best or most effective means of measuring learning today. As the alternatives expand and the tools and resources for these alternatives become more readily available, tests will start the slow but certain journey to the educational technology cemetery, finding a plot alongside the slide rule and the overhead projector.


5 Strategies for a Balanced Approach to Big Data in Education

We are in the decade of big data. During this second decade of the 21st century, many are grappling with the challenges and opportunities of massive data and the emergence of tools to mine and analyze these data. Within education, this is not new. It started long before No Child Left Behind, with the 20th century growth of modern educational psychology and measurement movements. From that era we saw IQ and aptitude testing, standardized and multiple-choice tests, the Bell Curve, and countless efforts to quantify almost anything about students: achievement, retention, reading proficiency, performance by demographic data, etc. While some of these ideas have a much longer history (China used proficiency exams for civil service as early as 2200 B.C.), they certainly gained a new level of attention and importance over the last 150-175 years. Consider how things have changed, as explained by David McArthur in his 1983 report, Educational Testing and Measurement: A Brief History.

In the mid-1800s, Horace Mann launched the use of written exams in the United States, and promotion to the next grade came to depend on performance on these exams. Prior to that, it was oral exams and the personal recommendation of the teacher. Testing was not a central aspect of American education before this.

Already by the end of the 19th century, because of these tests and the negative impact perceived by some, we saw the birth of a new concept: “teaching to the test.” In places like Chicago, there was even a ban on using tests for grade promotion, with critics arguing that the teacher’s recommendation was the better option. The concern was that we would lose much of the “magic” in teaching and learning environments if we used a reductionist approach that focused simply upon students performing well on the tests. Nonetheless, even today there is an entire industry around test preparation, equipping people to perform as well as they can on tests ranging from the SAT to the GRE, LSAT, and MCAT.

At this point in history, with more teaching and learning happening partly or fully through technology-enhanced means, we have even more student data to track and analyze. Every action on a device can be captured and reviewed. Similarly, external agencies are requiring the tracking of data about students: data ranging from demographics to attendance, vaccination records, and academic progress.

The advocates for big data point to many affordances. We can identify people at risk before it is too late, sometimes even proactively. We can use data to drive improvements in one or more areas. We can use data to more quickly identify and address problems. We can use data sets to personalize learning, conduct research on best and promising practices, measure progress, and prevent students from slipping between the cracks (any number of cracks: socially, academically…).
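
As a simple illustration of that first affordance, here is a deliberately simplified, rule-based sketch in Python (the names, numbers, and thresholds are invented). Production early-warning systems typically use predictive models trained on historical data, but the basic flow of flagging students for proactive follow-up is the same.

```python
# Hypothetical student records; the thresholds below are illustrative only.
students = [
    {"name": "Jordan", "absences_last_month": 5, "avg_quiz_score": 0.55},
    {"name": "Priya", "absences_last_month": 1, "avg_quiz_score": 0.92},
    {"name": "Marcus", "absences_last_month": 3, "avg_quiz_score": 0.68},
]

def at_risk(record, max_absences=3, min_score=0.60):
    """Flag a student when attendance or quiz performance crosses a threshold.

    Real early-warning systems usually replace hand-set thresholds with a
    model trained on historical outcomes, but the flow is the same:
    data in, a flag out, and a person follows up.
    """
    return (record["absences_last_month"] >= max_absences
            or record["avg_quiz_score"] < min_score)

flagged = [s["name"] for s in students if at_risk(s)]
print("Worth a proactive check-in:", flagged)  # ['Jordan', 'Marcus']
```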

Critics bring plenty of concerns to the conversation as well. Large data sets might inform policy, but while those policies help many, there are always losers with some policies as well. For example, perhaps predictive analytics allow learning organizations to predict who is likely to succeed in an upper-level math course. As such, they use this to place students on pathways that are more likely to work out for them. That might exclude a student who is passionate about a STEM field and is willing to work hard enough to overcome the risks and alerts that discouraged such a path. Then there are concerns about data privacy, misinterpretation of data, and losing sight of the people…the faces behind the numbers. Empathy and personal connection can easily be disregarded as important parts of informing policy. Numbers matter, but so do the people represented in those numbers. There is an important difference between knowing that 80% of a given population is performing below grade level on reading and knowing the stories, challenges, and lived experiences of the people in that 80%.

How do we pursue the benefits of big data while also avoiding some of the limitations or negative elements? There is no easy answer to such a question, but I offer the following five suggestions.

  1. Persistently challenge the assumption that quantitative data are more important than qualitative data. Get adept at arguing for the benefits of both qualitative and quantitative measures. There are plenty of stories and examples from which we can pull to make our point.
  2. Learn about the stories of big data success and invest just as much time in learning about big data disasters. Specific cases and examples can help inform practice. Push for much higher levels of big data fluency. If we are going to be increasingly data-driven, then we need people who have higher levels of quantitative fluency. Without that, we either relegate important thought and work to a new quantitative technocracy, or we risk drawing flawed, even dangerous, conclusions by misreading the data. Anyone arguing for increased use of data must also be ready to put in the hard work of becoming more literate and fluent.
  3. Beware of the drive to value that which is easier to measure. This starts by persistently bringing the group back to mission, vision, values and goals. If we do not do this, it is easy enough for missions and goals to change just because some goals are more neatly and easily measured than others. Big data is not just about numbers. You can have big quantitative and qualitative data. Be a firm voice in starting with mission. We want to be mission-driven, data-informed, not the other way around.
  4. Consider an equal treatment approach to data usage. If teachers insist on using big data to analyze students, then shouldn’t big data be used to inform policies for teachers as well? What about the same thing for administrators and board members? While this will never be perfect, pushing for an equal treatment approach is likely to nurture empathy and more balanced consideration by decision-makers. For example, consider how many educators insist on the value of frequent tests, quizzes, and grading practices that they would vehemently oppose if the same practices were applied to them. Take this and apply it to state agencies, federal agencies, and politicians as well. Any politician committed to arguing for big data at the state or federal level in education should be just as open and welcoming to a careful data-driven analysis of their own success, record, and behavior in office.
  5. Champion the highest possible ethical standards when it comes to data. Sometimes it is tempting to use data, even for noble purposes, when we really should pass for security reasons or to protect various parties. We must hold to the highest possible standard in this regard, even when personal loss is involved.

Big data in education will continue to have affordances and limitations, but these five strategies are at least a good start in promoting a more balanced approach.

 


Educational Publishers & Content Providers: The Future is About Analytics, Feedback & Assessment

What is the future of educational publishers and content providers? As more content becomes freely distributed online, and as more creative (and sometimes free) products and services help aggregate, curate, chunk, edit, and beautify this content, there are questions about the role of educational publishers and content providers. While there is something to be said for a one-stop shop for content, that might not be enough to secure a solid spot in the marketplace of the future, especially given that content is not the only thing for which people are shopping.

Some fear or simply predict the demise of such groups, but I expect a long and vibrant future. In fact, over the past decade or two, we’ve already witnessed publishing companies rebrand themselves as education companies with a broader portfolio of offerings than ever before. They’ve done so by adding experts in everything from educational psychology and brain research to instructional design, software development to game design, educational assessment to statistics, analytics, and testing. These are exactly the types of moves that will help them establish, maintain, and extend their role in the field of education. This is a shift from a time when many educational publishers and content providers would suggest that it is best to leave the “teaching” up to the professional educators. Now, more realize that there is not (nor has there really ever been) a clear distinction between the design of educational products and services and the use of them for teaching. Each influences the other, and an understanding of educational research is critical for those who design and develop the products and services that inform what and how educators teach students.

According to this article, the preK-12 testing and assessment market is almost a 2.5 billion dollar market, “making them the single largest category of education sales” in 2012-2013! A good amount of this is the result of efforts to nationalize and standardize curriculum across geographic regions (like with the Common Core), allowing education companies to design a single product that aligns with the needs of a larger client base. However, even apart from such moves for standardization, more people are becoming aware of the possibilities and impact of using feedback loops and rich data to inform educational decisions.

This is just the beginning. If you are in educational publishing or a startup in the education sector, this is not only a trend to watch, but one to embrace. Start thinking about the next version of your products and services and how learning analytics and feedback loops fit with them. If you look at the K-12 Horizon Report’s 5-year predictions, you see learning analytics, the Internet of Everything, and wearable technology. What do all three of these have in common? They are an extension of the Internet’s revolution of increased access to information, but this time it is increasing access to a new type of information and making it possible to analyze and make important decisions based on the data. Now we have come full circle. Data is experienced by learners. The actions and changes of the learner become new data points, which give feedback directly to the learner, to a teacher, or to the product that provided the initial data. There is a new action taken by the learner, teacher, and/or interactive product, and the cycle continues (see the following image for three sample scenarios).

[Image: three sample feedback loop scenarios]
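
One version of that cycle, where the product itself closes the loop, might look roughly like the toy sketch below (plain Python, with invented behavior and numbers): each learner action becomes a new data point, the product adapts the next item, and the same signal could just as easily flow to a learner view or a teacher dashboard.

```python
import random

def run_feedback_loop(answer_correctly, rounds=5, difficulty=3):
    """Toy adaptive loop: serve an item, record the response, adjust, repeat."""
    log = []  # every learner action becomes a new data point
    for _ in range(rounds):
        correct = answer_correctly(difficulty)  # the learner acts on the content
        log.append({"difficulty": difficulty, "correct": correct})
        # Feedback to the product: pick the next item's difficulty.
        difficulty = min(5, difficulty + 1) if correct else max(1, difficulty - 1)
        # Feedback to the learner (and, in practice, to a teacher dashboard).
        print(f"{'Correct' if correct else 'Missed'}; next item at level {difficulty}")
    return log

# Simulated learner who reliably handles items up to difficulty 3.
random.seed(0)
history = run_feedback_loop(lambda d: d <= 3 or random.random() < 0.3)
```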

Some (although an increasingly small number) still think of the Internet and digital revolution in terms of widespread access to rich content. Those are the people who think that digitizing content is adequate. Since the 2000s, we’ve experienced the social web, one that is both read and write. Now we live in a time where those two are merged, and each action individually and collectively becomes a new data point that can be mined and analyzed for important insights.

While there are hundreds of analytics, data warehousing and mining, adaptive learning, and analytic dashboard providers, there is a powerful opportunity for educational content providers who find ways to animate their content with feedback, reporting features, assessment tools, dashboards, early alert features, and adaptive learning pathways. Education’s future is largely one of blended learning, and a growing number of education providers (from K-12 schools to corporate trainers) are learning to design experiences that are constantly adjusting and adapting.

The idea that we are just making products for the true experts, teachers, is noble and respectable, but the 21st-century teacher will be looking for new content and learning experiences that interact with them (and their students), and for tools that give them rich and important data (often real-time or nearly now) about what is working, what is not, who is learning, who is not, and why. They will be looking for ways to track and monitor learning progress. A content provider that does not offer such things will be in jeopardy, unless it has extremely scarce or high-demand content that can’t be easily accessed elsewhere.

As such, content still matters. It always will. However, the thriving educational content providers and publishers of the 21st century understand that the features in highest demand will involve analytics, feedback (to the learner, to the teacher, or back to the content for real-time or nearly now adjustments), assessment, and tracking.


Do We Want Purpose-Driven or Data-Driven Learning Organizations?

  • How many students have failed their last two math quizzes?
  • Which students have missed three or more days of school in the last month?
  • What is our four- or five-year graduation rate? How about our first-year retention rate?
  • Which students are most at risk of dropping out?
  • What percentage of students are first-generation college students?
  • What factors most lead to student engagement and improved learning?
  • How much class time is “on task” for each student?
  • What is the average cost to recruit a student for a given program?

Ask any question about student learning, motivation, or engagement. Then find data to help answer that question. Now what? What do you do with the data? How will it inform your decisions? This is what people refer to as data-driven decision-making, and it can be wonderfully valuable. However, data cannot drive decisions by themselves. Decisions are not data-driven. They are driven by mission, vision, values, and goals. We want purpose-driven organizations, not data-driven ones.
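
Answering one of the questions above is often the easy part, as a small sketch (with hypothetical records and field names) shows for the second question in the list. The harder work, which data alone cannot do, is deciding what to do with the answer.

```python
from datetime import date, timedelta

# Hypothetical attendance records; in practice these would come from a
# student information system rather than a hard-coded list.
absences = [
    {"student": "Lena", "date": date(2015, 2, 3)},
    {"student": "Lena", "date": date(2015, 2, 10)},
    {"student": "Lena", "date": date(2015, 2, 12)},
    {"student": "Theo", "date": date(2015, 2, 11)},
    {"student": "Theo", "date": date(2014, 12, 1)},
]

def missed_three_or_more(records, today=date(2015, 2, 16), window_days=30):
    """Which students have missed three or more days of school in the last month?"""
    cutoff = today - timedelta(days=window_days)
    counts = {}
    for record in records:
        if record["date"] >= cutoff:
            counts[record["student"]] = counts.get(record["student"], 0) + 1
    return [name for name, misses in counts.items() if misses >= 3]

print(missed_three_or_more(absences))  # ['Lena']
```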

Without clarifying one’s goals and values, the data are of little value. Or, perhaps even worse, the data lead us to function with a set of values or goals that we do not want. I’ve seen many organizations embrace the data-driven movement by purchasing new software and tools to collect and analyze data, but without first figuring out how data will help them achieve their goals and live out their values. I’ve seen organizations that value a flat and decentralized culture be drawn into a centralized and largely authoritarian structure…because the systems were easier to use that way or it was less expensive. I’ve seen organizations that value the individual and personal touch abandon those emphases when data analysis tools were purchased. I’ve also seen organizations spend large sums on analytic software, only for it to be largely unused. These outcomes may not all be bad, but it is wise to recognize how data will influence an organization.

An important part of any organizational plan to collect, analyze, and use data sets is to establish some ground rules, working principles, and key performance indicators. These should reflect the organization’s values and mission. Yet, it is easy to set up some key performance indicators over others simply because they are easier to measure, because that is how another organization did it, because they are valued and demanded by external stakeholders, or because a small but influential core wants them. As such, data analysis can lead us away from our mission, vision, values, and goals as much as it can help us achieve or remain faithful to them. The data that we see and analyze have a way of establishing institutional priorities. The data not collected or analyzed cease to have a voice amid such outspoken data sets.

In addition to this, data analysis is not neutral. The methods and technologies associated with it are values-laden. They typically amplify values like efficiency and effectiveness. Few people will disagree that both of those have a role in learning organizations, but not at the expense of other core values. As such, I contend that, alongside key performance indicators, it is wise to establish core value indicators when implementing a data analytics plan. What indicators let us know that our values are visible, strong, and amplified?
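
As a sketch of what that pairing might look like in practice (the metric names, numbers, and targets below are invented for illustration), core value indicators can sit in the same review cycle as the key performance indicators rather than being an afterthought.

```python
# Invented metrics and targets, for illustration only; each organization
# would define indicators that reflect its own mission and values.
key_performance_indicators = {
    "four_year_graduation_rate": {"actual": 0.81, "target": 0.85},
    "first_year_retention_rate": {"actual": 0.90, "target": 0.88},
}

core_value_indicators = {
    # Value: personal attention (share of students with an advising conversation this term)
    "students_with_advising_conversation": {"actual": 0.72, "target": 0.80},
    # Value: access (share of enrolled students who are first-generation)
    "first_generation_share": {"actual": 0.34, "target": 0.30},
}

def review(indicators, label):
    """Print a simple status line for each indicator in a review cycle."""
    for name, metric in indicators.items():
        status = "on track" if metric["actual"] >= metric["target"] else "needs attention"
        print(f"[{label}] {name}: {metric['actual']:.0%} vs. target {metric['target']:.0%} ({status})")

review(key_performance_indicators, "KPI")
review(core_value_indicators, "core value")
```

The point is not the code but the placement: if the value signals are reviewed with the same regularity as the performance numbers, they are much harder to lose sight of.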

In the end, behind any decision is a mission, vision, set of values, and list of goals. Not data. Start with the goals and values. Then ask how data can serve those values and goals…not lead them.


3 Helpful Resources About Data-Driven Decisions in Education

Do you go to mechanics who try to fix your car without doing diagnostics?

What about doctors who give prescriptions or recommend surgeries without analyzing the health concern first?

Would you use a financial advisor who did not take the time to learn about your financial situation?

Or, what would you think of a consultant who didn’t take the time to ask questions and figure out your needs or the necessary boundaries for a project?

When some teachers hear the phrase data-driven decision-making, they instantly think of No Child Left Behind and, more often than not, that evokes a negative reaction. That is unfortunate, because data-driven decision-making is a powerful tool for teaching and learning and does not have anything to do with the flaws of No Child Left Behind. Data-driven decision-making is about making informed decisions that benefit learners. Are you interested? Here are three useful readings to get you started.

Data Driven Teachers – This article will introduce you to the concept of data-driven decision-making. It will provide you with a good foundation on the subject.

Making Sense of Data Driven Decision Making in Education – This article will give you a helpful framework for using data to make decisions in education.

10 Things You Always Wanted to Know About Data-Driven Decision Making – This is a solid introductory article on the subject.
