IT Vendor Performance Score Cards

More state technology departments are considering formalizing a process for evaluating the performance of information technology (IT) vendors. The overarching goal is certainly laudable – providing the state procurement community with a comprehensive tool to reduce risk in the contract award process. But while such a tool may identify vendors who exceed performance expectations, it may also wrongly disadvantage some vendors in future IT procurement competitions.

Arizona is one of the more recent states to launch a vendor performance system and engage stakeholders in the process. Arizona Procurement Administrator Barbara M. Corella convened a stakeholder meeting at the end of September to consider the design of a vendor evaluation program. Some of the evaluation questions under consideration are:

  • How long should a poor evaluation remain part of a vendor’s record?
  • When is an evaluation conducted?
  • When is it appropriate to ask for input from others involved in the project?
  • Is the review confidential under state public records disclosure laws?

Corella will reconvene the stakeholder group at the end of November to obtain further input regarding these questions.

California is also launching an IT vendor performance evaluation tool. Under the direction of California Chief Information Officer Carlos Ramos, the Statewide Technology Procurement Division of the Department of Technology has drafted an IT vendor performance “score card” and has been soliciting input. The proposed score card includes an overall rating, measurement criteria, and corresponding weights. Ratings run from 1 (does not meet expectations) to 5 (significantly exceeds expectations). The draft score card’s measurement criteria include software development lifecycle, project management methodology, contract fulfillment, and vendor performance. A performance assessor will also select the measurement criteria, the corresponding weights, and the frequency of assessment at the project kickoff; that frequency will be based on the project’s complexity and the contract term.
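The draft does not spell out how the per-criterion ratings and their weights roll up into the overall rating. As a minimal sketch, assuming the overall rating is a simple weighted average of criterion ratings on the 1-to-5 scale – the criterion names below come from the draft, but the weights, the sample scores, and the weighted-average aggregation itself are illustrative assumptions, not California’s published method:

    # Illustrative sketch only: the weighted-average aggregation, the weights,
    # and the sample scores are assumptions, not California's actual method.
    def overall_rating(scores: dict[str, int], weights: dict[str, float]) -> float:
        """Weighted average of per-criterion ratings on the 1-5 scale."""
        if set(scores) != set(weights):
            raise ValueError("scores and weights must cover the same criteria")
        return sum(scores[c] * weights[c] for c in scores) / sum(weights.values())

    # Hypothetical assessor-selected weights for one project (sum to 1.0)
    weights = {
        "software development lifecycle": 0.30,
        "project management methodology": 0.25,
        "contract fulfillment": 0.25,
        "vendor performance": 0.20,
    }
    # Hypothetical per-criterion ratings from a single assessment
    scores = {
        "software development lifecycle": 4,
        "project management methodology": 3,
        "contract fulfillment": 5,
        "vendor performance": 2,
    }

    print(f"Overall rating: {overall_rating(scores, weights):.2f}")  # 3.60

Even this toy example shows why the assessor’s weight selections matter: shifting weight among criteria at kickoff can move a vendor’s overall rating without any change in the underlying scores.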

A vendor task force established to help develop this new tool held several meetings over the summer to discuss the score card, and additional meetings are anticipated. California is also expected to phase in these assessment requirements, initially applying them to the dozen or so large “reportable” IT projects each year.

There are a number of challenges in creating an IT vendor score card. One main challenge is avoiding the use of subjective evaluation standards. Two questions on California’s proposed score card demonstrate how difficult that is to achieve. One asks, “How well does the vendor demonstrate knowledge and experience, bring innovative ideas, and provide value-added suggestions to your organization?” Another asks, “How well does the vendor understand government and public sector and provide you with best practices?”

Clearly, questions like these open the door to subjectivity and personal bias. They unrealistically assume the assessor has a core understanding of state procurement policies generally and IT business procurement needs in particular. Answers to either question will also be influenced by the assessor’s individual knowledge of complex IT systems. And the questions assume an assessor has the bandwidth to stay abreast of a fast-paced technology product environment and of best practices in IT.

Other concerns that have been raised also need to be addressed. For example, will vendors have advance notice of the weighting so they can gauge how to craft a response to a solicitation? Will vendors have to satisfy two different score cards – one used to win the solicitation and a second to rate performance? How will due process be incorporated into the assessments, and will that include the ability to challenge an assessor’s rating? Without answers to these questions, IT vendors may have a difficult time establishing a solid past performance record, leaving some IT companies at a significant competitive disadvantage.

IT vendors should monitor and engage in these efforts to ensure that each creates a standard, accurate vendor performance assessment process and score card. In particular, IT vendors must advocate for a vendor performance evaluation framework that includes a review of factors that affected the project’s success but were not within the vendor’s control, including:

  • outside influence from legislative oversight of the project;
  • insufficient public funding;
  • inadequate project requirements or scope;
  • changing project requirements or scope, including requirement creep in response to innovative changes in technology;
  • delays imposed by the government customer; and
  • inadequate or ineffective government project management.

That is not to say that solutions to these problem areas cannot be found. States like Arizona and California can look to the approaches taken and lessons learned by the federal government through its years of experience evaluating the performance of its contracting partners. ITAPS has been monitoring both state efforts and intends to provide input into both processes.
