This is a brilliant analysis of what’s behind the so-called Common Core Curriculum – a brilliant plan to monopolize and monetize education for the benefit of a few. What do you think?
by Robert Shepherd
(A term from the gaming world, pwned, derived from owned, is a neologism meaning “achieved total control and/or domination over.” If an opponent uses you, against your better interests, to achieve his or her own objectives, or if you are obliterated within seconds of the beginning of game play, then you have been pwned.)
The last state has now pulled out of the proposed national database of student responses and scores. Those who were horrified at the prospect of such a privately held, Orwellian Total Information Awareness system for K-12 public school education, one that would have served as a de facto checkpoint and censor librorum for curricula, are cheering.
But don’t think for a moment that Big Data has been beaten. I am going to explain why. I hope that you will make the effort to follow the connections in the story below. The story is a bit complicated, and some of it hinges on matters of business and economics that make for dull reading. I think, however, that you’ll find the story as a whole both shocking and extraordinarily consequential and so worth the effort. The tale I am going to tell is a birth narrative. It’s the story of a monstrous birth, like that of the monsters that sprang from the primordial ocean in ancient Mesopotamian mythology. But this is a true story, and the monstrous birth was engineered. This is the story, as I understand it, of the birth of the Common [sic] Core [sic].
And what rough beast, its hour come round at last,
Slouches towards Bethlehem to be born?
The emergence of the Internet presented a challenge to the business model of the big educational publishers. It presented the very real possibility that they might go the way of the Dodo and the Passenger Pigeon. Why? With a bit of effort, you will be able to find, right now, if you choose to look, some 80 complete, high-quality, absolutely FREE open-source textbooks on the Internet–textbooks written by various professors–textbooks in geology, biology, astronomy, physics, law, grammar, foreign languages, every conceivable topic in mathematics, and other subjects.
The development of the possibility of publishing via the Internet, combined with the wiring of all public schools for broadband access, removed an important barrier to entry to the educational publishing business–paper, printing, binding, sampling, warehousing, and shipping costs. Pixels are cheap. Objects made of dead trees aren’t. In the Internet Age, small publishers with alternative texts could easily flourish. Some of those—academic self-publishers interested not in making money but in spreading knowledge of their subjects—would even do substantive work for free. Many have, already. There are a dozen great intro statistics texts, some with complete answer keys and practice books and teachers’ guides, available for FREE on the Web today.
Think of what Wikipedia did to the Encyclopedia Britannica. That’s what open-source textbooks were poised to do to the K-12 educational materials monopolists. The process had already begun in college textbook publishing. The big publishers were starting to lose sales to free, open-source competitors. The number of open-source alternatives would grow exponentially, and the phenomenon would spread down through the grade levels. Soon. . . .
How were the purveyors of textbooks going to compete with FREE?
What’s a monopolist to do in such a situation?
Answer: Create a computer-adaptive ed tech revolution. The monopolists figured out that they could create computer-adaptive software keyed to student responses in databases that they, and they alone, could get access to. No open-source providers admitted. They could also team up with tablet providers and sell districts tablets with their curricula preloaded, tablets locked to prevent access to other publishers’ materials.
Added benefit: By switching to computerized delivery of their materials, the educational publishing monopolists would dramatically reduce their costs and increase their profits, for the biggest items on the textbook P&L, after the profits, are costs related to the physical nature of their products–costs for paper, printing, binding, sampling, warehousing, and shipping.
By engineering the computer-adaptive ed tech revolution and having that ed tech keyed to responses in proprietary databases that only they had access to, the ed book publishers could kill open source in its cradle and keep themselves from going the way of typewriter and telephone booth manufacturers.
The Big Data model for educational publishing would prevent the REAL DISRUPTIVE REVOLUTION in education that the educational publishers saw looming–the disruption of THEIR BUSINESS MODEL posed by OPEN-SOURCE TEXTBOOKS.
A little history:
2007 was the fiftieth anniversary of the Standard & Poor’s 500 Index. On the day the S&P turned fifty, 70 percent of the companies that were originally on the Index no longer existed. They had been killed by disruptions that they didn’t see coming.
The educational materials monopolists were smarter. They saw coming at them the threat to their business model that open-source textbooks presented. And so they cooked up computer-adaptive ed tech keyed to standards, with responses in proprietary databases that they would control, to prevent that. The adaptive ed tech/big data/big database transition would maintain and even strengthen their monopoly position.
But to make that computer-adaptive ed tech revolution happen and so prevent open-source textbooks from killing their business model, the publishers would first need ONE SET OF NATIONAL STANDARDS. And that’s why they, and their new tech partners, paid to have the Common [sic] Core [sic] created. That one set of national standards would provide the tags for their computer-adaptive software. That set of standards would be the list of skills that the software would keep track of in the databases that open-source providers could not get access to. Only they would have access to the BIG DATA.
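To make the mechanism concrete, here is a minimal sketch of how standards can serve as the “tags” that drive computer-adaptive software: every practice item is labeled with a standard code, every response is logged against that code in the publisher’s database, and the next item is chosen to target the student’s weakest standard. All of the names, codes, and the scoring rule below are illustrative assumptions, not any real vendor’s system.

```python
# Illustrative sketch: standards-tagged adaptive item selection.
# Item IDs, standard codes, and the mastery rule are all hypothetical.
from collections import defaultdict

# Each practice item carries a standard tag (codes here are made up
# to resemble Common Core identifiers).
ITEM_BANK = {
    "item-001": "CCSS.MATH.6.RP.A.1",
    "item-002": "CCSS.MATH.6.RP.A.1",
    "item-003": "CCSS.MATH.6.RP.A.2",
    "item-004": "CCSS.MATH.6.RP.A.3",
}

def record_response(history, item_id, correct):
    """Log one response against the item's standard tag."""
    history[ITEM_BANK[item_id]].append(correct)

def next_item(history, served):
    """Serve an unseen item from the least-mastered standard."""
    def mastery(standard):
        responses = history.get(standard, [])
        # No data yet: treat mastery as unknown (0.5), so a standard
        # the student has actually missed (0.0) is targeted first.
        if not responses:
            return 0.5
        return sum(responses) / len(responses)
    candidates = [i for i in ITEM_BANK if i not in served]
    return min(candidates, key=lambda i: mastery(ITEM_BANK[i]), default=None)

history = defaultdict(list)  # standard code -> list of correct/incorrect
served = set()

first = next_item(history, served)
served.add(first)
record_response(history, first, correct=False)  # student misses it
second = next_item(history, served)
# The follow-up item targets the same weak standard.
```

The point of the sketch is the architecture, not the scoring: whoever controls the tag set (the standards) and the response database controls which items can be served, which is why a single national tag set mattered so much to the publishers.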
In other words, the Common [sic] Core [sic] was the first step in A BUSINESS PLAN.
A certain extraordinarily wealthy computer mogul described that business plan DECADES ago–the coming disruptive programmed learning model in education, the model now commonly referred to as computer-adaptive learning based on Big Data.
So, that’s the story, in a nutshell. And it’s not an education story. It’s a business story.
And a WHOLE LOTTA EDUCRATS haven’t figured that out and have been totally PLAYED. They are dutifully working for PARCC or SBAC and dutifully attending conferences on implementing the “new, higher standards” and are basically unaware that they have been USED to implement a business plan. They don’t understand that the national standards were simply a necessary part of that plan.
And here’s the kicker: The folks behind this plan also see it as a way to reduce, dramatically, the cost of U.S. education. How? Well, the biggest cost, by far, in education is teachers’ salaries and benefits. But, imagine 300 students in a room, all using software, with a single “teacher” walking around to make sure that the tablets are working and to assist when necessary. Good-enough training for the children of the proles. Fewer teachers’ salaries. More money for data systems and software. Ironically, the publishers and their high-tech Plutocratic partners were able to enlist both major teachers’ unions to serve as propaganda ministries for their new national bullet list of standards, even though the game plan for those standards is to reduce the number of teachers’ salaries that have to be paid. Thus the education deform mantra: “Class size doesn’t matter.”
Think of the money to be saved.
And the money to be made.
The wrinkle in the publishers’ plan, of course, is that people don’t like the idea of a single, Orwellian national database. From the point of view of the monopolists, that’s a BIG problem. The database is, after all, the part of the plan that keeps the real disruption, open-source textbooks, from happening–the disruption that would end the traditional textbook business as surely as MP3 downloads ended the music CD business and video killed the radio star.
So, with the national database dead, for now, the education deformers have to go to plan B.
What will they do? Here’s something that’s VERY likely: They will sell database systems state by state, to state education departments, or district by district. Those database systems will simply be each state’s or district’s system (who could object to that?), and only approved vendors’ materials (guess whose?) will flow through each. Which vendors? Well, the ones with the lobbying bucks and with the money to navigate whatever arcane procedures are created by the states and districts implementing them, with the monopolists’ help, of course. So, the new state and district database systems will work basically as the old textbook adoption system did, as an educational materials monopoly protection plan.
So, to recap: to hold onto their monopolies in the age of the Internet, the publishers would use the Big Data ed tech model, which would shut out competitors, and for that, they would need a single set of national standards.
In business, such thinking as I have outlined above is called Strategic Planning.
The plan that a certain computer mogul had long had for ed tech proved to be just what the monopolist educational publishers needed. That plan and the publishers’ need to disrupt the open-source disruption before it happened proved to be a perfect confluence of interest–a confluence that would become a great river of green.
The educational publishing monopolists would not only survive but thrive. There would be billions to be made in the switch from textbooks to Big Data and computer-adaptive ed tech. Billions and billions and billions.
And that’s why you have the Common [sic] Core [sic].