Educational Publishers & Content Providers: The Future is About Analytics, Feedback & Assessment

What is the future of educational publishers and content providers? As more content becomes freely distributed online, and as more creative (and sometimes free) products and services help aggregate, curate, chunk, edit, and beautify that content, questions arise about the role of educational publishers and content providers. While there is something to be said for a one-stop shop for content, that alone might not be enough to secure a solid spot in the marketplace of the future, especially since content is not the only thing for which people are shopping.

Some fear, or simply predict, the demise of such groups, but I expect a long and vibrant future. In fact, over the past decade or two, we’ve already watched publishing companies rebrand themselves as education companies with a broader portfolio of offerings than ever before. They’ve done so by adding experts in everything from educational psychology and brain research to instructional design, from software development to game design, and from educational assessment to statistics, analytics, and testing. These are exactly the types of moves that will help them establish, maintain, and extend their role in the field of education. This is a shift from a time when many educational publishers and content providers would suggest that it was best to leave the “teaching” up to the professional educators. Now, more realize that there is not (nor has there ever really been) a clear distinction between designing educational products and services and using them for teaching. Each influences the other, and an understanding of educational research is critical for those who design and develop the products and services that shape what and how educators teach students.

According to this article, preK-12 testing and assessment was nearly a $2.5 billion market, “making them the single largest category of education sales” in 2012-2013. A good amount of this is the result of efforts to nationalize and standardize curricula across geographic regions (as with the Common Core), which allow education companies to design a single product that aligns with the needs of a larger client base. However, even apart from such moves toward standardization, more people are becoming aware of the possibilities and impact of using feedback loops and rich data to inform educational decisions.

This is just the beginning. If you are in educational publishing or a startup in the education sector, this is not only a trend to watch, but one to embrace. Start thinking about the next version of your products and services and how learning analytics and feedback loops fit with them. Look at the K-12 Horizon Report’s five-year predictions and you will see learning analytics, the Internet of Everything, and wearable technology. What do all three have in common? They extend the Internet’s revolution of increased access to information, but this time the revolution is about a new type of information, and about making it possible to analyze that information and base important decisions on it. The loop comes full circle: data are experienced by learners; the actions and changes of the learner become new data points, which provide feedback to the learner, to a teacher, or to the product that provided the initial data; a new action is taken by the learner, teacher, and/or interactive product; and the cycle continues (see the following image for three sample scenarios).

[Image: three sample feedback-loop scenarios]
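To make the loop concrete, here is a minimal sketch in Python. Every function and field name is a hypothetical illustration (this is not any particular product’s API): the learner experiences a content item, the response becomes a new data point, and that data point informs the next step of the experience.

    # A toy version of the feedback loop described above. All names are invented.
    def present_item(item):
        """Deliver a content item and record the learner's response (stubbed)."""
        correct = item.startswith("reteach-") or "fractions" not in item
        return {"item": item, "correct": correct}   # pretend the learner struggles with fractions

    def choose_next(history, queue):
        """Adaptive step: re-teach the last missed item, otherwise move on."""
        last = history[-1]
        if not last["correct"]:
            return "reteach-" + last["item"]
        return queue.pop(0) if queue else None

    history = []                                    # every action becomes a new data point
    queue = ["decimals-1", "percents-1"]
    item = "fractions-1"
    while item is not None and len(history) < 5:    # present -> record -> adapt -> repeat
        history.append(present_item(item))
        item = choose_next(history, queue)

    print(history)   # the same data points can feed learner, teacher, or product dashboards

The same handful of data points can drive three different loops: a hint shown to the learner, an alert on a teacher’s dashboard, or an automatic adjustment to the content itself.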

Some (an increasingly small number) still think of the Internet and the digital revolution in terms of widespread access to rich content. Those are the people who think that digitizing content is adequate. Since the 2000s, we have experienced the social web, one that is both read and write. Now we live in a time when those two are merged, and each action, individually and collectively, becomes a new data point that can be mined and analyzed for important insights.

While there are hundreds of analytics, data warehousing and mining, adaptive learning, and analytics dashboard providers, there is a powerful opportunity for educational content providers who find ways to animate their content with feedback, reporting features, assessment tools, dashboards, early alert features, and adaptive learning pathways. Education’s future is largely one of blended learning, and a growing number of education providers (from K-12 schools to corporate trainers) are learning to design experiences that constantly adjust and adapt.

The notion that we are just making products for the true experts, the teachers, is noble and respectable, but 21st-century teachers will be looking for content and learning experiences that interact with them (and their students): tools that give them rich and important data (often in real time or nearly now) about what is working, what is not, who is learning, who is not, and why. They will be looking for ways to track and monitor learning progress. A content provider that does not offer such things will be in jeopardy, with the exception of those holding extremely scarce or high-demand content that can’t easily be accessed elsewhere.

As such, content still matters. It always will. However, the thriving educational content providers and publishers of the 21st century understand that the most in-demand features will involve analytics, feedback (to the learner, to the teacher, or back to the content for real-time or nearly-now adjustments), assessment, and tracking.


The Limitations of Course Evaluations: Identifying Helpful, Accurate & Holistic Measures

As learning organizations venture further into the use of learning analytics and data-driven decision-making, I find it increasingly important to consider the danger of simply collecting and analyzing the data that are available or easiest to collect. I will use the example of course evaluations in schools to illustrate my point, drawing largely on the insights from An Evaluation of Course Evaluations by Stark and Freishtat (which I learned about and located because of this article in the Chronicle of Higher Education). Amid their critique of evaluations, they share the following story.

Three statisticians go hunting. They spot a deer. The first statistician shoots; the shot passes a yard to the left of the deer. The second shoots; the shot passes a yard to the right of the deer. The third one yells, “We got it!” (Stark and Freishtat, p. 4)

As the story suggests, relying on averages can lead to flawed conclusions. At some point, there is a need to put faces and stories to the data, which calls for additional forms of data collection. The problem is that not all data are equally easy to collect. So, we often settle for pre-developed templates, for what our analytics software can most easily collect and display, or for what we (individually or collectively) can most easily understand. We may establish key performance indicators and identify measures based on what data are available or easiest to collect, analyze, and understand. In doing so, we draw flawed conclusions about how we are doing as an institution. Our numbers look good, so we must be making progress. Or, our numbers are down, so we must do what we can to raise them.

Note the potential flaw in that last statement. If our numbers are down, we must do something to raise them. When we hear something like this, we have signs of a subtle but important shift in an organization. There may be hundreds of ways to increase the numbers so that we seem to be making progress, yet not all of these options are equally valuable. Consider a course evaluation where an instructor’s overall ratings go down one semester. The only obvious change that the instructor can identify from the previous term (where ratings were much higher) was the addition of a weekly learning journal. So, she drops the learning journal assignment the next term, and the evaluation averages go back up. Problem solved. Look more closely, however, and we find that student performance had actually increased during the term with the lower evaluation average. The ratings are now higher, but students are not performing as well on the assessments. The teacher sticks with that strategy anyway, knowing that rank and promotion depend partly on course evaluation averages.

Most course evaluations are based upon self-reporting, because that is easy to do. In the scenario from the last paragraph, note that discovering this potential problem would only happen if we collected actual student performance data alongside the evaluations. Yet, I am not aware of organizations that do that. It is a more complex task to carry out. So, we settle for the easy route, despite the fact that it may lead us down the wrong path.
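To make the idea concrete: once both kinds of data sit side by side, even a rough comparison can flag the divergence described above. Here is a minimal sketch with invented numbers and field names (not data from any real course):

    # Invented term-by-term records: evaluation averages next to assessment results.
    terms = [
        {"term": "Fall",   "eval_avg": 4.6, "exam_avg": 78},
        {"term": "Spring", "eval_avg": 3.9, "exam_avg": 85},  # journals added: ratings down, learning up
        {"term": "Summer", "eval_avg": 4.5, "exam_avg": 77},  # journals dropped: ratings up, learning down
    ]

    for prev, curr in zip(terms, terms[1:]):
        eval_change = curr["eval_avg"] - prev["eval_avg"]
        exam_change = curr["exam_avg"] - prev["exam_avg"]
        if eval_change * exam_change < 0:   # the two measures moved in opposite directions
            print(f"{curr['term']}: evaluations {eval_change:+.1f}, exams {exam_change:+.1f} -- look closer")

Nothing in the comparison itself is sophisticated; the hard part is the organizational work of gathering the performance data in the first place.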

Please know that I am not arguing against the benefit of quantitative data in learning organizations. These data sets can indeed open our eyes to important patterns, trends, and relationships. They are quite valuable. Instead, I’m suggesting that we put careful thought and planning into what data we collect and how we collect them, and that we do the hard work of identifying measures that will give us the most complete and accurate picture. We want the complete (or as complete as possible) story. We want to see human faces in the data. This will help us use the data to make decisions that truly support our organizational mission, vision, values, and goals.

Self-reported data in course evaluations have any number of limitations, as Stark and Freishtat point out. The ratings do not mean the same thing to all students. What one student considers “excellent” may only be “very good” to another. What one student considers “very challenging” may be “not very challenging” to another. Given this reality, what do the averages tell us?
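A small, made-up illustration shows how little the averages can tell us. Two sections earn exactly the same mean rating, yet the experiences behind the numbers could hardly be more different:

    from statistics import mean, stdev

    # Invented ratings on a 1-5 scale for two course sections.
    section_a = [3, 3, 3, 3, 3]    # everyone is lukewarm
    section_b = [5, 5, 3, 1, 1]    # some are delighted, some are deeply frustrated

    print(mean(section_a), mean(section_b))    # 3 and 3 -- identical averages
    print(stdev(section_a), stdev(section_b))  # 0.0 vs. 2.0 -- very different stories

Reporting either section as “3.00” hides exactly the faces and stories that matter.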

As Stark and Freishtat explain,

To a great extent, this is what we do with student evaluations of teaching effectiveness. We do not measure teaching effectiveness. We measure what students say, and pretend it’s the same thing. We dress up the responses by taking averages to one or two decimal places, and call it a day (p. 6).

In the end, I must confess that I was favorably disposed toward Stark and Freishtat’s work because it affirms my own values and convictions. They conclude that a better way of evaluating teaching effectiveness includes observations, narrative feedback, and artifacts that serve as evidence of teaching effectiveness, along with insights gleaned from course evaluations (p. 11). This sort of triangulation tells a story. It puts a face on the data. It provides context and something from which a teacher can more readily learn. The problem is that it takes more time and effort. Yet, if we truly want to create key performance indicators for our learning organizations, and we genuinely want to know how we are doing with regard to those indicators, then it requires this type of work. And from another perspective, what example do learning organizations set for students if the people in those organizations build an entire system of measurement on cutting corners and doing what is easy and available?


Do We Want Purpose-Driven or Data-Driven Learning Organizations?

  • How many students have failed their last two math quizzes?
  • Which students have missed three or more days of school in the last month?
  • What is our 4- or 5-year graduation rate? How about our first-year retention rate?
  • Which students are most at risk of dropping out?
  • What percentage of students are first-generation college students?
  • What factors most contribute to student engagement and improved learning?
  • How much class time is “on task” for each student?
  • What is the average cost to recruit a student for a given program?
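Several of these questions can be answered with a few lines of code once the relevant records live in one place. Here is a minimal sketch in plain Python, with invented records and field names (a real system would pull these from a student information system or LMS):

    # Invented student records for illustration only.
    students = [
        {"name": "Ana",   "quiz_scores": [55, 48], "absences_last_month": 4, "first_gen": True},
        {"name": "Ben",   "quiz_scores": [88, 91], "absences_last_month": 0, "first_gen": False},
        {"name": "Chloe", "quiz_scores": [72, 58], "absences_last_month": 3, "first_gen": True},
    ]

    failed_last_two = [s["name"] for s in students if all(q < 60 for q in s["quiz_scores"][-2:])]
    chronic_absence = [s["name"] for s in students if s["absences_last_month"] >= 3]
    first_gen_pct = 100 * sum(s["first_gen"] for s in students) / len(students)

    print("Failed last two quizzes:", failed_last_two)           # ['Ana']
    print("Missed 3+ days last month:", chronic_absence)         # ['Ana', 'Chloe']
    print(f"First-generation students: {first_gen_pct:.0f}%")    # 67%

The technical step is the easy part. The harder questions come next.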

Ask any question about student learning, motivation, or engagement. Then find data to help answer that question. Now what? What do you do with the data? How will it inform your decisions? This is what people refer to as data-driven decision-making, and it can be wonderfully valuable. However, data can’t drive decisions, not by themselves. Decisions are not data-driven. They are driven by mission, vision, values, and goals. We want purpose-driven organizations, not data-driven ones.

Without clarifying one’s goals and values, the data are of little value. Or, perhaps even worse, the data lead us to function with a set of values or goals that we do not want. I’ve seen many organizations embrace the data-driven movement by purchasing new software and tools to collect and analyze data without first figuring out how the data will help them achieve their goals and live out their values. I’ve seen organizations that value a flat and decentralized culture be drawn into a centralized and largely authoritarian structure…because the systems were easier to use that way or it was less expensive. I’ve seen organizations that value the individual and personal touch abandon those emphases once data analysis tools were purchased. I’ve also seen organizations spend large sums on analytics software, only for it to go largely unused. These outcomes may not all be bad, but it is wise to recognize how data will influence an organization.

An important part of any organizational plan to collect, analyze, and use data is to establish ground rules, working principles, and key performance indicators. These should reflect the organization’s values and mission. Yet, it is easy to choose some key performance indicators over others simply because they are easier to measure, because that is how another organization did it, because they are valued and demanded by external stakeholders, or because a small but influential core wants them. As such, data analysis can lead us away from our mission, vision, values, and goals as much as it can help us achieve or remain faithful to them. The data that we see and analyze have a way of establishing institutional priorities. The data not collected or analyzed cease to have a voice amid such outspoken data sets.

In addition to this, data analysis is not neutral. The methods and technologies associated with it are value-laden. They typically amplify values like efficiency and effectiveness. Few people will disagree that both of those have a role in learning organizations, but not at the expense of other core values. As such, I contend that, alongside key performance indicators, it is wise to establish core value indicators when implementing a data analytics plan. What indicators let us know that our values are visible, strong, and amplified?
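One lightweight way to keep that question on the table is to write the core value indicators down right next to the key performance indicators, so neither set can quietly crowd out the other. A hypothetical sketch (every value and indicator below is illustrative, not a recommendation):

    # Hypothetical pairing of KPIs with core value indicators.
    indicators = {
        "retention": {
            "kpi": "first-year retention rate",
            "core_value_indicators": [
                "share of at-risk students contacted personally within one week",
                "advising notes that reference the student's own stated goals",
            ],
        },
        "course_quality": {
            "kpi": "average course evaluation score",
            "core_value_indicators": [
                "courses reviewed with observations and artifacts, not ratings alone",
            ],
        },
    }

    for area, spec in indicators.items():
        print(area, "->", spec["kpi"], "| values check:", "; ".join(spec["core_value_indicators"]))

The point is not the data structure; it is that the values get measured, reviewed, and reported with the same regularity as the performance numbers.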

In the end, behind any decision is a mission, vision, set of values, and list of goals. Not data. Start with the goals and values. Then ask how data can serve those values and goals…not lead them.


3 Helpful Resources About Data-Driven Decisions in Education

Do you go to mechanics who try to fix your car without doing diagnostics?

What about doctors who give prescriptions or recommend surgeries without analyzing the health concern first?

Would you use a financial advisor who did not take the time to learn about your financial situation?

Or, what would you think of a consultant who didn’t take the time to ask questions and figure out your needs or the necessary boundaries for a project?

When some teachers hear the phrase data-driven decision-making, they instantly think of No Child Left Behind and, more often than not, that evokes a negative reaction. That is unfortunate, because data-driven decision-making is a powerful tool for teaching and learning and has nothing to do with the flaws of No Child Left Behind. Data-driven decision-making is about making informed decisions that benefit learners. Are you interested? Here are three useful readings to get you started.

Data Driven Teachers – This article will introduce you to the concept of data-driven decision-making. It will provide you with a good foundation on the subject.

Making Sense of Data Driven Decision Making in Education – This article will give you a helpful framework for using data to make decisions in education.

10 Things You Always Wanted to Know About Data-Driven Decision Making – This is a solid introductory article on the subject.
