Proposal Evaluation and Vendor Selection

Proposal evaluation is always, implicitly or explicitly, a two-step process. In the first step you establish each proposal’s compliance with basic requirements and, thus, its eligibility for the second step: full technical, commercial and financial evaluation.

Compliance
You may have already listed all “must comply” criteria in the RFP. These could be technical, commercial, legal or financial. Even if they were not explicitly listed, a typical RFP will give you leeway to exclude vendors who do not comply with the most important of your requirements.
For example:

  • Unacceptable level of technical non-compliance
  • Proposed technology not compliant with P25 standard
  • Proposed user equipment lacks CAP compliance certificate
  • Proposed technology has not yet been commercially deployed
  • Non-compliance with commercial terms and conditions such as payment terms or non-performance penalties
  • Poor customer service references

Make sure that any decision to exclude a vendor from further evaluation is justified and thoroughly documented. Exclusion is likely to be protested, either formally or informally.

Evaluation
Some entities split the evaluation into two tracks:

  • cost,
  • all other criteria,

and then compare the results. This may not work well for complex system RFPs, because the proposals are likely to differ significantly in scope, performance and functionality, so normalizing the proposed prices may be very difficult for anyone but subject matter experts.

Any significant opportunity is likely to attract between three and five competing vendors, each with a proposal of 1,000 to 2,000 pages. Processing such a massive amount of information is a challenge in itself, so the work has to be divided and allocated.

You can do this in three ways:

  1. You can assign the work by proposal – one member of your team is responsible for evaluating Proposal A, one for Proposal B, one for Proposal C. But that often creates an imbalance between evaluators, and you may end up evaluating the evaluators rather than the proposals.
  2. You can assign more than one person to each proposal. This partly remedies the problem, but it assumes you have a sufficient number of competent evaluators.
  3. A better way is to assign a specific topic or topics – preferably corresponding to your evaluation criteria – to each team member and ask them to provide a detailed comparison of all proposals on their allocated topic: coverage design, system resiliency or pricing, for example. This may still be difficult, but it is fairer and more manageable, as any differences between evaluators are applied consistently across all competitors.

Wherever possible, involve a consultant in this process. Being familiar with the standard proposal documentation, she/he can process the material much faster and ensure that individual evaluators’ oversights, mistakes or biases are corrected.

Evaluation criteria
It is also important to agree in advance on how to evaluate categories that are not easily quantified – such as system resiliency – or that can be quantified in several different ways. For example, when evaluating pricing, are you looking at the total initial investment? Total cost over 5, 10 or 15 years? Or do you want to evaluate the proposed costs of infrastructure, maintenance services, licensing and subscriber units separately? These decisions need to be made before evaluation begins and should reflect your organization’s prioritized needs.
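
To see why the horizon matters, consider the minimal Python sketch below, which compares total cost of ownership over 5, 10 and 15 years. The vendor names and all figures are invented purely for illustration, not taken from any real proposal.

    # Minimal total-cost-of-ownership comparison over different horizons.
    # Vendor names and all figures are hypothetical, for illustration only.
    proposals = {
        "Vendor A": {"initial": 4_500_000, "annual_maintenance": 250_000, "annual_licensing": 80_000},
        "Vendor B": {"initial": 3_200_000, "annual_maintenance": 400_000, "annual_licensing": 120_000},
    }

    def total_cost(p, years):
        """Initial investment plus recurring costs over the given horizon."""
        return p["initial"] + years * (p["annual_maintenance"] + p["annual_licensing"])

    for years in (5, 10, 15):
        print(f"{years}-year horizon:")
        for name, p in proposals.items():
            print(f"  {name}: ${total_cost(p, years):,}")
    # Vendor B is cheaper over 5 years; Vendor A is cheaper over 10 and 15 years.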

Weighting the criteria
The evaluation criteria should be very well defined for internal purposes. It is good practice to assign weights to specific criteria.

The figures below are an example only. Your weightings may differ significantly.

Coverage 50%
Functionality/Reliability 30%
Price 10%
Local support 5%
Misc. 5%
Total 100%
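
As a minimal Python sketch of how such weights might be applied, one common convention is to have evaluators score each category on a 0–100 scale and then multiply by the weight. The raw scores below are invented purely for illustration.

    # Weights from the list above, expressed as fractions.
    weights = {
        "Coverage": 0.50,
        "Functionality/Reliability": 0.30,
        "Price": 0.10,
        "Local support": 0.05,
        "Misc.": 0.05,
    }

    # Hypothetical raw evaluator scores (0-100) for a single proposal.
    raw_scores = {
        "Coverage": 70,
        "Functionality/Reliability": 85,
        "Price": 60,
        "Local support": 90,
        "Misc.": 50,
    }

    weighted_total = sum(weights[c] * raw_scores[c] for c in weights)
    print(f"Weighted total: {weighted_total:.1f} / 100")   # 73.5 / 100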

Unfortunately, defining the weights alone is not sufficient in practice. You should also agree in advance on how points will be assigned within each category. For example, it is rarely sufficient to simply rank the proposals by price and assign points by place – 1st: 100 points, 2nd: 80, 3rd: 60 – because price differentials are never evenly spread.
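
One common alternative, sketched below in Python with hypothetical bid figures, is to score price proportionally against the lowest bid, so that a small price gap and a large one are not rewarded as if they were each a single ranking place apart.

    # Hypothetical bid prices, for illustration only.
    bids = {"Vendor A": 5_600_000, "Vendor B": 5_700_000, "Vendor C": 7_900_000}

    def price_points(bids, max_points=100):
        """Lowest bid gets max_points; others get max_points * (lowest / bid)."""
        lowest = min(bids.values())
        return {name: round(max_points * lowest / price, 1) for name, price in bids.items()}

    print(price_points(bids))
    # {'Vendor A': 100.0, 'Vendor B': 98.2, 'Vendor C': 70.9}
    # Rank-based scoring (100 / 80 / 60) would have treated the small gap between
    # A and B the same as the much larger gap between B and C.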

Your evaluators should write reports on their areas of responsibility and present them to the team so their conclusions are well understood and can be queried. The team’s meetings should be recorded.

If the winning vendor cannot be clearly selected by proposal evaluations alone, you may consider creating a short list and asking for vendor presentations to clarify any outstanding questions. Occasionally, it may be necessary to go back to the drawing board, issue a new RFP and repeat the entire process. The best way to avoid this risk is to write a good, clear, detailed RFP document and keep the entire process fair and transparent.

The process should be designed so that the outcome of the evaluation can be summarized in a simple table, clearly identifying the winner. For example:

Competitor     | Functionality (30%) | Coverage (50%) | Price (10%) | Support (5%) | Misc. (5%) | Total
Competitor A   | 20                  | 10             | 5           | 8            | 3          | 46
Competitor B   | 18                  | 17             | 3           | 12           | 1          | 51
Competitor C   | 14                  | 22             | 8           | 10           | 2          | 56 – winner
Competitor D   | 17                  | 16             | 10          | 10           | 1          | 54
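
The totals in such a table can be produced mechanically from the agreed category points. Below is a minimal Python sketch using the example scores above.

    # Category points from the example table above.
    scores = {
        "Competitor A": {"Functionality": 20, "Coverage": 10, "Price": 5, "Support": 8, "Misc": 3},
        "Competitor B": {"Functionality": 18, "Coverage": 17, "Price": 3, "Support": 12, "Misc": 1},
        "Competitor C": {"Functionality": 14, "Coverage": 22, "Price": 8, "Support": 10, "Misc": 2},
        "Competitor D": {"Functionality": 17, "Coverage": 16, "Price": 10, "Support": 10, "Misc": 1},
    }

    totals = {name: sum(points.values()) for name, points in scores.items()}
    winner = max(totals, key=totals.get)

    for name, total in totals.items():
        print(f"{name}: {total}" + (" – winner" if name == winner else ""))
    # Competitor C wins with 56 points, matching the table above.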

The selection process outcome should not be communicated to the vendor community until it is approved by the authorized officials.
