Pearson, PARCC, Smarter Balanced, and the Money Exchange

On January 31, 2012, Pearson announced it had won a contract from Smarter Balanced and PARCC to develop a "Technology Readiness Tool":

The SMARTER Balanced Assessment Consortium and the Partnership for Assessment of Readiness for College and Careers (PARCC) today announced they have awarded a contract to Pearson to develop a new Technology Readiness Tool to support states as they transition to next-generation assessments.


According to Pearson's press release, this tool is open source:

This new open source tool, with the assistance of the State Educational Technology Directors Association (SETDA), will support state education agencies as they work with local education agencies to evaluate and determine needed technology and infrastructure upgrades for the new online assessments to be launched by the two consortia in the 2014-15 school year.

And, this contract was paid for out of Race to the Top money initially awarded to Smarter Balanced and PARCC:

SMARTER Balanced and PARCC both received grants from the federal Race to the Top Assessment Program to work with states to create next-generation, comprehensive assessment systems. The development of the Technology Readiness Tool is one component of their initiatives to establish infrastructure and content for common online assessments. Intended to launch in spring 2012, the tool will be developed using open source technology, allowing the consortia free access to the source code.

If I understand this correctly, the consortia paid Pearson to develop a diagnostic tool using open source code so that the consortia could have access to the source code that they were paying Pearson to develop?

On second thought, I don't know if it's possible to understand this correctly.

As the dust settles, it looks like Smarter Balanced and PARCC - two consortia who won grants to develop tests aligned to the Common Core - handed over federal funds to Pearson - who is also developing assessments and services aligned to the Common Core. This is what corporate welfare looks like.

I would love to be surprised, but I suspect that the codebase from the "open source tool" will never see the light of day. If anyone can point me to more information, please leave a comment.

Image Credit: Image from 401(K) 2012 shared under an Attribution Share-Alike license.


Corporate America, the new welfare Queen!

It is available under the Apache 2.0 license.

Good to see that the codebase for this got into the wild.

The first commit to the repo was on April 24, 2014, when the Apache license was checked in.

It looks like the full project was checked in a month later, on May 22. While it would have been nice to have this development take place in the open, it's good that the codebase is out there.
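For anyone who wants to verify those dates themselves, here is a quick sketch of how to read a repo's history oldest-first with git. The repo URL isn't given in this thread, so the demo below builds a throwaway repository; the same `git log --reverse` command works on any local clone.

```shell
#!/bin/sh
# Demo: find a repository's first commit. The commands here run against a
# throwaway repo (the actual Technology Readiness Tool repo URL isn't given
# in the post); on a real clone you would just run the final git log line.
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "add full project"

# --reverse lists history oldest-first, so the first line is the first commit;
# %ad is the author date, %s the subject line.
git log --reverse --date=short --format='%ad  %s' | head -n 1
```

On the real repo, the first line printed should match the April 24, 2014 license check-in described above.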

An additional benefit of releasing this code sooner is that it would have allowed districts to use it to test readiness. By the end of May 2014, implementation plans for the new tests were well underway - and in many districts, the plans were already complete.

Given that the grant award was in January 2012, what prevented this code from being released in, say, January 2013? That would have allowed more schools to benefit from it. Even January 2014 would have been better.

Bill, have you ever researched IMS Global? I think their story might explain Pearson's coding specs. Please see the blog I posted about this at Utahns Against Common Core:

Hello, JaKell,

I am familiar with various IMS Global standards - they have been around for a while, and they have issued a range of standards over the years. Many of them are used within corporate training, as well as higher ed, and within K12 (although I'd estimate that K12 adoption isn't as consistent as in other sectors).

IMS isn't the only player in this space, and various orgs are working on standards that make it easier to reuse learning materials - curriculum, texts, assessments, plans, etc. Other standards that support different types of interoperability include SCORM and SIF. It should also be noted that the different standards do different things: they overlap to a greater or lesser extent, but (as just one VERY GENERAL example) while both SCORM and IMS's LTI attempt to solve issues related to content reuse across systems, they do it in very different ways.

I'm not a huge fan of these standards for many reasons, but not for the ones you cite in your blog post. Many (okay, maybe all :) ) of the standards are incredibly overengineered, and tilt toward supporting publisher/content creator needs over the needs of learner autonomy. They also tend to enforce a pretty traditional version of learning and instructional design - again, that's a huge generalization.

Just about all of the publishers and many learning companies have supported versions of these standards dating back to NCLB and earlier. SIF was one of the earlier standards here, along with SCORM; IMS Global has been around since 1997. The companies behind these different standards bodies have collaborated with each other over the years, but the bodies have remained distinct entities.
