A couple of weeks ago, I participated in the White House’s Open Data Licensing Jam. Organized in concert with our friends over at 18F, the Jam was focused on addressing issues with respect to open government data. Participants from agencies, media, watchdogs, and the private sector put their heads together, summer camp ice-breaker style, and identified the most pressing challenges in making government information more open and usable. It was great fun and we were honored to have been invited to participate.

The Data Quality Challenge

Being the kind of company we are, GovTribe is most interested in one challenge in particular: data quality. Put simply, government data can be as open and machine-readable as any consumer might want, but if the primary data sources are of poor quality, if the individual data entry systems are poorly designed or misused, it’s still going to be of limited value. In other words, the GIGO problem: Garbage In, Garbage Out.

To some degree, solving the government’s GIGO problem is part of our mission statement. The principle: if you use overlapping data sources, apply smart techniques to extract the highest-quality portions of each source, and then integrate them, you might just be able to make some sense of this market. But there are limits. And I began to run up against some of those limits in my latest Purse String Index analysis.

Purse String Index: Agency for International Development

Any company operating in the U.S. Agency for International Development (USAID) space could probably say a good deal (in private) about its (let’s say) atypical procurement experience. Among other things, the agency’s acquisition processes tend to produce inconsistencies in the manner and reliability with which procurement information is posted to FBO and other systems. As a result, in our experience, USAID procurement data is probably the most difficult to analyze.

All of this is a preamble to our Mission-level Purse String Index analysis of USAID. We took the previous five years of contract award data, cleaned it up (a lot), and applied the same throughput analysis we conducted for the Department of Homeland Security and Health and Human Services. Not included in this analysis are non-competitive awards, task orders awarded under existing vehicles, personal service contracts, and contracts for goods purchases. This is an evaluation of competitive service contract procurements for which a procurement history was available (or inferable by mining multiple data sources). In short, we compute the Purse String Index using solicitations that go through a complete procurement process on FBO, from solicitation to award.
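The exclusion criteria above can be sketched as a simple filter. This is a minimal illustration only: the field names and values are hypothetical, not GovTribe’s actual data schema.

```python
# Hypothetical award records; field names are illustrative assumptions,
# not an actual FBO or GovTribe schema.
EXCLUDED_TYPES = {"non-competitive", "task order", "personal services"}

def is_analyzable(award: dict) -> bool:
    """Keep only competitive service contracts with a full
    solicitation-to-award procurement history."""
    if award.get("competition_type") in EXCLUDED_TYPES:
        return False
    if award.get("category") == "goods":  # goods purchases are excluded
        return False
    # Require a complete procurement history (available or inferable)
    return bool(award.get("has_procurement_history"))

awards = [
    {"competition_type": "full and open", "category": "services",
     "has_procurement_history": True},   # kept
    {"competition_type": "task order", "category": "services",
     "has_procurement_history": True},   # dropped: task order
    {"competition_type": "full and open", "category": "goods",
     "has_procurement_history": True},   # dropped: goods purchase
]
kept = [a for a in awards if is_analyzable(a)]
print(len(kept))  # 1
```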

To recap, the Purse String Index is an evaluation of performance across the three variables that, in our experience, speak to productivity and efficiency:

  • Frequency – Compared to the agency average, how often does the Mission award contracts?
  • Velocity – Compared to the agency average, how short or long is the procurement process?
  • Magnitude – Compared to the agency average, what is the Mission’s mean contract award value?

We then modify the Mission’s score based on what we call the Annoyance Factor. The Annoyance Factor is derived from two things: the number of amendments or modifications issued prior to award, and the number of times the due date changed. A Purse String Index score of one (1.00) indicates a Mission is performing at the agency average.
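The post does not give the exact formula, but the mechanics might look something like the sketch below. Here we assume each variable is expressed as a ratio to the agency average, the three ratios are averaged, and the Annoyance Factor is applied as a small penalty weight; all of those choices, and every number, are illustrative assumptions. A Mission exactly at the agency average on every variable comes out at 1.00, consistent with the definition above.

```python
# Hedged sketch of a Purse String Index computation. The combination
# rule (simple mean of ratios, multiplicative annoyance penalty) and
# the 0.05 weight are assumptions, not GovTribe's actual formula.

def purse_string_index(mission, agency_avg, annoyance_weight=0.05):
    # Ratios above 1.0 mean the Mission beats the agency average.
    frequency = mission["awards_per_year"] / agency_avg["awards_per_year"]
    # A shorter procurement process is better, so invert this ratio.
    velocity = agency_avg["days_to_award"] / mission["days_to_award"]
    magnitude = mission["mean_award_value"] / agency_avg["mean_award_value"]
    base = (frequency + velocity + magnitude) / 3.0

    # Annoyance Factor: pre-award amendments plus due-date changes,
    # relative to the agency average; above-average annoyance trims
    # the score, below-average annoyance boosts it slightly.
    annoyance = (
        (mission["amendments"] + mission["due_date_changes"])
        / (agency_avg["amendments"] + agency_avg["due_date_changes"])
    )
    return base * (1.0 - annoyance_weight * (annoyance - 1.0))

agency_avg = {"awards_per_year": 4.0, "days_to_award": 120.0,
              "mean_award_value": 2.0e6, "amendments": 3.0,
              "due_date_changes": 2.0}
mission = {"awards_per_year": 8.0, "days_to_award": 90.0,
           "mean_award_value": 3.0e6, "amendments": 3.0,
           "due_date_changes": 2.0}
print(round(purse_string_index(mission, agency_avg), 2))  # 1.61
```

An average Mission (all ratios 1.0, annoyance 1.0) scores exactly 1.00 under this sketch, which is the sanity check the real index would also have to pass.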

The chart below shows the USAID Missions active in the last five years, ordered by the Purse String Index. Again, by active we mean a complete procurement process posted to FBO.

USAID/Washington tops the list by a large margin. The home office has a long procurement process on average (a low Velocity score) but puts out contracts most often and has the highest average contract value.

For a bit more context, the table below shows the stats driving the Purse String Index scores for the top and bottom five Missions.

Once again, we included in this analysis only the professional service contract awards for which a procurement history was available on FBO.gov. The India Mission, for instance, may have awarded additional contracts during the five-year period we analyzed, but only these two contracts had a public procurement history. The same goes for the Bosnia Mission.

Bosnia earned a high score with stats that are the inverse of USAID/Washington’s: very few contracts and a very low average contract size, but by far the fastest procurement process of any Mission. The next two Missions, Hungary and the Philippines, have pretty high frequency and velocity with moderate average contract values.

At the other end, Cambodia, India, and the other Missions with low Purse String Index scores award contracts infrequently, at moderate or low values, and take a long time to do so.

Regarding the Annoyance Factor (which seems to be the most compelling part of the Purse String Index in the blogosphere), it had little effect on the overall rankings among Missions. USAID puts out a lot of large-scope, complex procurements, and all of the Missions amend solicitations and change due dates with some regularity. Afghanistan, Kazakhstan, and the Philippines are slightly higher than average on both counts, as is clear in the bar chart.

What's next?

We’ve gotten lots of requests for Purse String Index analyses of other agencies and regularly provide deep-dives for our custom report customers. We can’t get to everything on our blog, but please don’t hesitate to reach out if you have a request – Purse String-related or otherwise.