Identifying eGov Failure Causes: Design-Reality Gap Analysis

"Why did my e-government project fail?" This page offers one technique for answering this question by identifying causes of failure. Follow this link for other techniques. A
gap exists for all e-government projects between the design
assumptions/requirements and the reality of the client public agency.
The larger this gap between design and reality, the greater the risk
that the project will fail. The identification technique presented here
asks you to rate the size of design-reality gaps along a set of seven
'ITPOSMO' dimensions. The dimensions that show the largest gap are the
most likely causes of project failure. Follow this link for a detailed
explanation of design-reality gaps (and some related case examples).

Identifying the Factors Underlying Failure

Identification
consists of questions relating to a series of seven 'ITPOSMO'
dimensions - information, technology, processes, objectives &
values, staffing & skills, management systems and structures, and
other resources - with attached rating numbers. Using each of
the seven ITPOSMO dimensions in turn, analyze two things. First, the organizational reality relating to each dimension that existed just
prior to the implementation of the application. Second, the
conceptions/requirements within the design of the e-government
application for that dimension (if the design has changed significantly
over time, choose the final design that was actually implemented). For
each one of the dimensions, give a numerical rating to indicate the
size of the design-reality gap on that dimension. The larger the gap,
the more likely it is to have been an important cause of the failure.
The rating for each dimension's gap can be anywhere on a scale from zero
to ten. As a guide, illustrations are given here only for gaps
corresponding to ratings of zero, five and ten, but all numbers in the
range are possible. Illustrative ratings:

- A rating of 0 would indicate 'no difference between the application design and organizational reality'.
- A rating of 5 would indicate 'some degree of difference between the application design and organizational reality'.
- A rating of 10 would indicate 'complete and radical difference between the application design and organizational reality'.
Note
that it may be necessary to conduct further data gathering (interviews,
document analysis, observations, questionnaires, etc.) in order to
adequately evaluate the size of each dimension's gap. Thus,
for example, taking the first dimension - information - 0 would
indicate that the information usage required by the e-government
application's design was exactly the same as the information really
being used in the organization just prior to implementation. 5 would
indicate that the information usage required by the e-government
application's design was somewhat different from the information really
being used in the organization just prior to implementation. 10 would
indicate that the information usage required by the e-government
application's design was completely and radically different from the
information really being used in the organization just prior to
implementation.

The other six dimensions to be rated from zero to ten are:

- the technology used in the agency (comparing the requirements contained within the design of the e-government application vs. the real situation just prior to implementation);
- the work processes undertaken in the agency (comparing the processes needed for successful implementation of the e-government application vs. the real situation just prior to implementation);
- the objectives and values that key stakeholders would have needed for successful implementation of the e-government application vs. their real objectives and values just prior to implementation;
- the staffing numbers and skill levels/types required in/by the agency (comparing the requirements for successful implementation of the e-government application vs. the real situation just prior to implementation);
- the management systems and structures required in the agency (comparing the requirements for successful implementation of the e-government application vs. the real situation just prior to implementation);
- the time and money required to successfully implement and operate the new application compared with the time and money really available just prior to implementation.
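The rating-and-ranking step described above can be sketched as a small routine. This is a minimal illustration only, not part of the technique itself; the example ratings are hypothetical:

```python
# Sketch of the ITPOSMO rating step: record one 0-10 design-reality gap
# rating per dimension, validate it, and rank dimensions by gap size.
# The example ratings below are hypothetical, not from a real project.

ITPOSMO_DIMENSIONS = [
    "Information",
    "Technology",
    "Processes",
    "Objectives & Values",
    "Staffing & Skills",
    "Management Systems & Structures",
    "Other Resources",
]

def rank_gaps(ratings):
    """Return (dimension, rating) pairs sorted from largest gap to smallest."""
    for dim in ITPOSMO_DIMENSIONS:
        if dim not in ratings:
            raise ValueError(f"missing rating for dimension: {dim}")
        if not 0 <= ratings[dim] <= 10:
            raise ValueError(f"{dim}: rating must be on the 0-10 scale")
    return sorted(ratings.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical ratings for an imaginary project:
example = {"Information": 3, "Technology": 8, "Processes": 5,
           "Objectives & Values": 9, "Staffing & Skills": 6.5,
           "Management Systems & Structures": 4, "Other Resources": 2}

for dimension, rating in rank_gaps(example):
    print(f"{dimension}: {rating}")
```

The dimensions at the top of the printed list are the candidate causes of failure that warrant closest attention.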
Presenting, Analyzing and Using the Results

The
gap rating scores for each dimension are ranked in a table in numerical
order. An illustration is provided in the worked example below. Those
dimensions which receive the highest gap rating are most likely to
represent the causes of failure in the e-government project. Rating
is a subjective process, but a rough guide to likelihood of a
particular dimensional gap being a cause of failure is shown in the
table below.

| Gap Score | Likelihood of Dimension Being Contributor to Failure |
|---|---|
| 8.1 - 10.0 | Very likely |
| 6.1 - 8.0 | Likely |
| 4.1 - 6.0 | Possible |
| 2.1 - 4.0 | Unlikely |
| 0.0 - 2.0 | Very unlikely |
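The banding in this table can be written as a small helper function. A minimal sketch, with the band boundaries taken directly from the table above:

```python
def likelihood(gap_score):
    """Map a 0-10 design-reality gap score to the likelihood bands
    in the guide table above."""
    if not 0 <= gap_score <= 10:
        raise ValueError("gap score must be on the 0-10 scale")
    if gap_score > 8.0:
        return "Very likely"
    if gap_score > 6.0:
        return "Likely"
    if gap_score > 4.0:
        return "Possible"
    if gap_score > 2.0:
        return "Unlikely"
    return "Very unlikely"

print(likelihood(7.5))  # -> Likely
```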
Rather
than simply accepting the scores and rankings, it makes sense to first
discuss the order and scores that have emerged, to see whether they
reflect the perceptions of key stakeholders. For example, the ranked
list of dimensions - with likely failure causes clearly identified - can
be circulated for further comments to those stakeholders, or can be
used as the basis for a 'lessons learned' workshop. Knowledge
about the likely causes of failure can then be disseminated to others
who would find it useful, and subsequently applied. It can be applied in
two main situations.

i. To salvage the existing e-government project

Application
of this knowledge will typically mean using the knowledge to try to
revive the existing e-government project, and turn it from failure to
success. This will mean paying special attention to the identified
causal dimensions in order to reduce the gap between design and reality
on those dimensions. For example, imagine the staffing and
skills dimension emerges with the highest gap score as the strongest
individual cause of failure. What should the current e-government
project team do? They can do one or both of two things:

- Revise the staffing and skills assumptions within the e-government application's design in order to make them more like the current reality of staffing and skills within the organization. For example, they could simplify the interface and processes within the e-government application in order to reduce the complexity of skills required to use the application.
- Take actions that change the reality of existing staffing and skills within the organization in order to make that reality closer to the staffing and skills assumptions within the e-government application's design. For example, they could undertake an intensive training programme to develop necessary skills, knowledge and attitudes.
ii. To assist future e-government projects

You
can try to use the knowledge developed from identification of failure
causes on subsequent e-government projects, in an attempt to reduce
their risk of failure. This will mean paying special attention to the
identified causal dimensions in future projects to ensure that the gap
between design and reality remains small. However, this makes the
assumption that gaps causing failure on one project will be the main
cause of failure on later projects. This might be true, but it might not
be. Design-reality gap analysis assumes that one size does not fit all,
and that gaps are different on different projects. In
other words, it is better to start design-reality gap analysis from
scratch for any new project rather than try to rely on the results from a
previous project. Those previous results may give you some background
guidance, but no more than that. Variations on the Basic Technique1. Who does itThese
seven rating scales can be used by a single individual, such as a
project consultant or project manager, to help them with their own
understanding and recommendations. Alternatively, a more participative
approach can be used. The seven scales can be presented to a group of
key project stakeholders in a facilitated workshop. The stakeholders
discuss and rate each dimension. The largest design-reality gaps - the
ones that are most likely causes of the failure - are identified. The
workshop would then move on to work out what to do with this knowledge.

2. Emphasizing particular dimensions

The
basic technique makes a questionable assumption - that all dimensional
gaps are equally important; for example, that if the information
dimension and the process dimension both show a design-reality gap of 7,
then both were equally important causes of failure. A more complex
variation - already noted above - uses the raw scores as the basis for
further reflection and discussion, leading to a revised ranking list of
factors. From experience, gaps on the objectives and values
dimension may be more significant as a cause of failure than equivalent
gaps on other dimensions because this dimension incorporates key
elements such as politics, culture, self-interest, motivation, and the
aspirations that a whole variety of different stakeholder groups seek to
achieve from the new e-government system.

3. More complex dimensions

The
use of just seven rating scales is very much a 'blunt instrument'. A
more sophisticated - also more time-consuming - approach is to break
each main dimension down into a series of sub-dimensions. Each
sub-dimension is then allocated its own rating scale. For instance:

- The 'technology' dimension could be broken down into three sub-dimensions: software, hardware and networks.
- The 'staffing and skills' dimension could be broken down into one sub-dimension for each significant staff grouping involved and/or one sub-dimension for each of the six key e-government competencies (strategic, change/project management, information systems development and management, hands-on, interpersonal, 'intelligent customer' (contracts, suppliers, procurement)).
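One simple way to roll sub-dimension ratings back up into a single dimension score is a (possibly weighted) average. A sketch; the sub-dimensions, ratings and weights below are hypothetical:

```python
# Sketch: combining sub-dimension gap ratings into one 0-10 dimension
# score via a weighted average. All names and numbers are hypothetical.

def dimension_score(sub_ratings, weights=None):
    """Weighted average of a dimension's sub-dimension ratings (each 0-10).

    If no weights are given, all sub-dimensions count equally."""
    if not sub_ratings:
        raise ValueError("at least one sub-dimension rating is required")
    if weights is None:
        weights = {name: 1.0 for name in sub_ratings}
    total_weight = sum(weights[name] for name in sub_ratings)
    return sum(rating * weights[name]
               for name, rating in sub_ratings.items()) / total_weight

technology = {"software": 8, "hardware": 6, "networks": 7}
print(dimension_score(technology))  # -> 7.0

# Weighting 'hardware' more heavily pulls the score toward its rating:
print(dimension_score(technology, {"software": 1, "hardware": 2, "networks": 1}))  # -> 6.75
```

The weighting option also offers one way of handling the 'emphasizing particular dimensions' variation noted above, by letting a workshop assign more weight to sub-dimensions it judges more important.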
Such
sub-dimensions can either be pre-set or they can be determined within a
facilitated workshop. In the latter case, sub-dimensions can be attuned
to the particular organizational context.

4. Creating your own dimensions

This
'attuning' just mentioned can go further: stakeholders can use the
seven suggested ITPOSMO dimensions merely as a starting point for
discussion, and can then develop their own particular dimensions and
sub-dimensions that are seen to be relevant to the specific context.
Design-reality gaps can then be assessed for each one of those
dimensions/sub-dimensions.

Pros and Cons of this Technique

This
technique is relatively simple and quick to understand and put into
practice. One key advantage is that it matches the unique situation of
each individual e-government project, rather than imposing a "one size
fits all" concept. On the downside, it tries to cram a lot of issues
into each single dimension (particularly into 'objectives and values'
and 'staffing and skills'), and it will not work well if there are
competing designs or competing ideas about what counts as 'reality'. The
approach also takes no account of possible interaction between
dimensions as a cause of failure.

Real-World Examples

A
number of real-world cases of e-government project gap analysis and
related learning about failure cause identification are provided on this
site:

- Automating Public Sector Bank Transactions in South Asia
- A Single Personnel Information System for a Southern African Government
- An Integrated Information System for Defence Force Management in the Middle East
Worked Example

A
new Web-based procurement system was implemented last year by the
Ministry of Transportation in Gedactia. Introduction of the system was
promoted and partly funded by an external donor, which put in place many
of the formal skills and technology required, but the project had
relatively little internal support. Evaluation shows that
the e-procurement system is little used. It has fallen significantly
short of its objectives, which set clear goals for the value of
electronically-made purchases to be achieved within one year, and for
the savings to be made through e-procurement.

Why did this e-government project partially fail? An answer is given below.

Questions, Answers & Ratings

Information

Question:
What was the gap between the information assumptions/requirements of
the new e-procurement system design, and the information in use in
reality in the Ministry just prior to implementation?

Answer:
The project consultants made use of a fairly 'generic' design for the
e-procurement system. In reality, this matched some core elements of
information used in Gedactian procurement. However, the Ministry made
use of slightly different information to this 'one size fits all'
assumption. In reality also, there were shortcomings in availability of
information that the design assumed would be present - a list of all
government suppliers, accurate pricing information, and a clear set of
guidelines on procurement. Thus there was a fair-sized gap between the
information assumptions of the design and organisational realities.

Gap rating: 6.5

Technology

Question:
What was the gap between the technology assumptions/requirements of the
new e-procurement system design, and the technology in use in reality
in the Ministry just prior to implementation?

Answer:
The e-procurement system design assumed the presence of a set of robust
Internet connections, Web servers, and procurement software within the
Ministry; it also assumed the presence of Internet-connected systems in a
broad range of suppliers. In reality, the Ministry made fairly limited
use of ICTs, the telecommunications infrastructure in the country was
somewhat limited, and many smaller suppliers lacked access to ICTs.

Gap rating: 7

Processes

Question:
What was the gap between the work processes required for successful
implementation of the new e-procurement system design, and the work
processes in use in reality in the Ministry just prior to
implementation?

Answer:
The e-procurement system
design required a set of formal, rational work processes that dealt
efficiently with procurement. These proposed work processes under the
new system design followed roughly the same lines as the earlier real
procurement system, and that system did function, but with a number of
'hiccups' and inefficiencies in the way that work was carried out.

Gap rating: 2.5

Objectives and Values

Question:
What was the gap between the objectives and values that key
stakeholders required for successful implementation of the new
e-procurement system design, and their real objectives and values just
prior to implementation?

Answer:
The e-procurement
system design assumed a procurement system that values rational
functioning within public agencies, such as freedom of procurement from
political interventions. The design assumed objectives of greater
efficiency (whatever the impact on jobs), and of the spread of
e-government. The reality was somewhat different, though it varied from
stakeholder to stakeholder. The donors - who were driving the project -
largely shared these objectives and values; as did the project
consultants and IT suppliers working for the donors. Many senior
officials did not share them: they were either happy with the status quo
or had other priority objectives than e-procurement; they supported a
politicised rather than rational culture within the Ministry; and they
were not particularly keen on the spread of ICTs
within government. Many clerical staff within the Ministry similarly did
not share the design objectives and values: they feared the new system
and they could not see its value.

Gap rating: 7.5

Staffing and Skills

Question:
What was the gap between the staffing numbers and skills levels/types
required for successful implementation of the new e-procurement system
design, and real staffing and skills just prior to implementation?

Answer:
The e-procurement system design assumed the presence of a whole range
of competencies for both its implementation and its ongoing operation.
For example, it assumed a reasonable-sized team with good experience of
designing and implementing e-procurement systems; it assumed good
knowledge within that team of Gedactian public sector specificities; it
assumed some capacities within the Ministry to manage the implementation
contract and the procurement system; it assumed a set of hands-on IT
skills among clerical staff in the Ministry. In reality, some of those
competencies were present and some were not. The project team had good
experience, but knew little about Gedactia; the Ministry had limited
management expertise; and clerical staff had a few basic IT
skills but not the higher-level skills that operation of the Web-based
system would require.

Gap rating: 6

Management Systems and Structures

Question:
What was the gap between the management systems and structures required
for successful implementation of the new e-procurement system design,
and real management systems and structures just prior to implementation?

Answer:
The e-procurement system design assumed some limited changes to
management systems compared with organisational reality, with the
introduction of some IT management of the Web systems, and some changes
to oversight mechanisms for procurement. The design assumed no
significant changes to Ministry structures.

Gap rating: 2.5

Other Resources

Question:
What was the gap between the other resources (money, time, other)
required for successful implementation of the new e-procurement system
design, and real availability of those resources just prior to
implementation?

Answer:
The e-procurement system
design assumed two sets of financing to be available. First, a larger
sum for introduction of the system; second, a smaller ongoing sum for
system operation and maintenance. In reality, the donor made the first
sum available, and covered the second only for the first two years. The
design also assumed a relatively gentle timescale, using an
incremental approach in roll-out of the system. This seems to match
fairly well with the amount of time that staff had available (and that
political timescales imposed).

Gap rating: 2.5

Results

The dimensions are sorted into numerical order, and presented as a table.

| Dimension | Gap Score | Likelihood as Cause of Failure |
|---|---|---|
| Objectives & Values | 7.5 | Likely |
| Technology | 7 | Likely |
| Information | 6.5 | Likely |
| Staffing & Skills | 6 | Likely |
| Processes | 2.5 | Unlikely |
| Management Systems & Structures | 2.5 | Unlikely |
| Other Resources | 2.5 | Unlikely |
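The ranked results table can be generated mechanically from the raw gap ratings. A minimal sketch using this worked example's scores; note that, following the worked example, it treats a score of exactly 6 as 'Likely', although under the guide table's strict 4.1-6.0 band it would be 'Possible':

```python
# Sketch: regenerating the ranked results table from the worked
# example's gap ratings.

RATINGS = {
    "Information": 6.5,
    "Technology": 7,
    "Processes": 2.5,
    "Objectives & Values": 7.5,
    "Staffing & Skills": 6,
    "Management Systems & Structures": 2.5,
    "Other Resources": 2.5,
}

def band(score):
    """Likelihood of a dimension being a contributor to failure.

    Follows the worked example in counting a score of exactly 6 as
    'Likely'; otherwise uses the guide table's band boundaries."""
    if score > 8.0:
        return "Very likely"
    if score >= 6.0:
        return "Likely"
    if score > 4.0:
        return "Possible"
    if score > 2.0:
        return "Unlikely"
    return "Very unlikely"

for dim, score in sorted(RATINGS.items(), key=lambda kv: kv[1], reverse=True):
    print(f"| {dim} | {score} | {band(score)} |")
```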
Conclusions and Action

Design-reality
gaps on four dimensions - objectives & values, technology,
information, and staffing & skills - have been identified as the
most likely causes of this e-government failure. These dimensional gaps
can now form the focus either for remedial action on the existing
project and/or for risk reduction strategies in future projects. Follow
this link for more details about actions to take to close design-reality gaps.