

2.0 Participatory Integrated Development Process

  1. Input Tracking Matrix

The input tracking matrix is developed jointly by project beneficiaries and service providers in a community meeting, following the steps below.

  1. First, to be able to track inputs, budgets, or entitlements, one must start with data from the supply side.
  2. Take this information to the community and the project/facility staff and discuss it with them. This is the initial stage of letting the community know their ‘rights’ and providers their ‘commitments.’
  3. Using the supply-side information above and the discussions in the sub-groups, finalize a set of measurable input indicators that will be tracked. These will depend on which project or service is under scrutiny.
  4. With the input indicators finalized, the next step is to ask for and record the actual data for each input from all of the groups and enter this in an input tracking scorecard, as shown in Table 1 below.
  5. Wherever possible, each statement by a group member should be substantiated with some form of concrete evidence (a receipt, an account record, the actual drugs or food, etc.). Claims can also be triangulated or validated across different participants. In the case of physical inputs or assets, one can inspect the input (such as a community water tank) to see whether it is complete and of adequate quality. The same applies to countable physical inputs, such as the number of mosquito nets present at the CDDC office, in order to provide first-hand evidence about project and service delivery.
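As a rough illustration of the arithmetic behind steps 4 and 5, the entitlement-versus-actual comparison in an input tracking scorecard can be sketched in Python. The indicator names and figures below are illustrative only, not taken from an actual scorecard:

```python
# Minimal sketch of an input tracking scorecard: supply-side commitments
# (entitlements) compared against what the community verifies on the ground.
# All names and numbers here are invented for illustration.
from dataclasses import dataclass


@dataclass
class InputRecord:
    indicator: str       # e.g. "Fingerlings per pond"
    entitlement: float   # what the supply side committed to deliver
    actual: float        # what the community recorded as actually delivered

    @property
    def shortfall(self) -> float:
        """Gap between the committed and the delivered quantity."""
        return self.entitlement - self.actual


def tracking_matrix(records):
    """Return (indicator, entitlement, actual, shortfall) rows for the scorecard."""
    return [(r.indicator, r.entitlement, r.actual, r.shortfall) for r in records]


records = [
    InputRecord("Fingerlings per pond", 1000, 850),
    InputRecord("Ken Brew chicks per welfare group", 200, 200),
]
for row in tracking_matrix(records):
    print(row)
```

Each shortfall row is then a concrete discussion point for the community meeting, to be substantiated with evidence as described in step 5.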


Table 1: A Sample Input Tracking Scorecard

| Input Indicator                   |   |   |   |   |
| Fingerlings per pond              |   |   |   |   |
| Ken Brew chicks per welfare group |   |   |   |   |
| Project Funds                     |   |   |   |   |

  2. Community Scoring of Performance

Community scoring of performance is done through the following process:

  1. Once the community has gathered, the facilitators (both local and external) face the task of classifying participants systematically into focus groups. The most important basis for classification must be usage of the facility or service, to ensure that each focus group contains a significant number of users. Without this critical mass, no useful data can be solicited. Each group should further have a heterogeneous mix of members by age, gender, and occupation so that a healthy discussion can ensue.
  2. Each focus group must brainstorm to develop the performance criteria with which to evaluate the facility and services under consideration. The facilitators must use appropriate guiding or ‘lead-in’ questions to steer this group discussion. Based on the community discussion that ensues, the facilitators need to list all the issues mentioned and assist the groups to organize them into measurable or observable performance indicators. The facilitating team must ensure that everyone participates in developing the indicators so that a critical mass of objective criteria emerges.
  3. The set of community-generated performance indicators needs to be finalized and prioritized. In the end, the number of indicators should not exceed 5-8.
  4. Having decided upon the performance criteria, the facilitators must ask the focus groups to give relative scores for each of them. The scoring process can take different forms: either a consensus within the focus group, or individual voting followed by group discussion. A scale of 1-5 or 1-100 is usually used for scoring, with a higher score being ‘better’.
  5. To draw out people’s perceptions better, it is necessary to ask the reasons behind both low and high scores. This helps explain outliers and provides valuable information and useful anecdotes regarding service delivery.
  6. The process of seeking user perceptions alone would not be fully productive without asking the community to come up with its own suggestions for how things can be improved, based on the performance criteria they developed. This is the last task during the community gathering, and it completes the generation of data needed for the CSC. The next two stages involve the feedback and responsiveness components of the process.
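As a rough sketch of the scoring arithmetic in step 4 above, individual votes on a 1-5 scale can be aggregated into a mean score per indicator. The indicator names and votes here are invented for illustration:

```python
# Minimal sketch: aggregating individual focus-group votes (1-5 scale)
# into a mean score per performance indicator. Names and votes are
# illustrative, not real field data.
from statistics import mean

votes = {
    "Positive attitude of farmers": [4, 5, 4, 3, 5],
    "Management of green houses":   [2, 3, 2, 2, 3],
}


def score_indicators(votes):
    """Mean score per indicator, rounded to one decimal place."""
    return {indicator: round(mean(v), 1) for indicator, v in votes.items()}


print(score_indicators(votes))
```

Low or high means then prompt the follow-up question in step 5: why did members score this indicator the way they did?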


Table-2: A Sample Community-Generated Performance Scorecard for TCB Farming



| Input indicator | Current situation (baseline) | Measurable result (target), month | Actual result achieved | Difference between target and achieved | Explanation for the difference | Rating as per what is achieved |
| 60 farmers trained on TCB husbandry                          |   |   |   |   |   |   |
| Positive Attitude of Farmers                                 |   |   |   |   |   |   |
| Management of Green Houses                                   |   |   |   |   |   |   |
| Quality Management of Farms                                  |   |   |   |   |   |   |
| Equal access to Green houses for all community farmers       |   |   |   |   |   |   |
| Equal access to Extension services for all community farmers |   |   |   |   |   |   |


Table-3: An Example of a Community Scorecard within a Focus Group

| Community-Generated Criteria | Very Bad | Bad | Ok | Good | Very Good | Remarks |
| 1) Positive Attitude of Farmers |   |   |   |   |   | Farmers were trained well |
| 2) Management of Green Houses |   |   |   |   |   | Rain water seeps through the green house mesh; disinfectant available and in use; parts of the mesh worn out |
| 3) Quality Management of Farms |   |   |   |   |   | Farms are clean; manure applied on TCB stems |
| 4) Equal access to Green Houses for all community farmers |   |   |   |   |   | Restricted access, depending on the availability of the community member responsible for opening |
| 5) Equal access to Extension services for all community farmers |   |   |   |   |   | Farmers have equal chances to be visited and advised, unless a farmer is absent from their farm |



Start by scoring the table using the following rating categories:

Very Bad - 20%    Bad - 35%    Ok - 50%    Good - 60%    Very Good - 80% and above
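The rating bands above can be expressed as a small lookup. Treating each listed percentage as the upper bound of its band is an assumption, as is placing scores between 60% and 80% in the "Good" band:

```python
# Sketch: mapping a percentage score to the rating bands listed above.
# Assumptions: each listed value is the upper bound of its band, and
# scores between 60% and 80% also count as "Good".
def rating(score_pct: float) -> str:
    """Return the rating band for a percentage score."""
    if score_pct >= 80:
        return "Very Good"
    if score_pct > 50:
        return "Good"
    if score_pct > 35:
        return "Ok"
    if score_pct > 20:
        return "Bad"
    return "Very Bad"


print(rating(45))   # a mid-range score falls in the "Ok" band
```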
