Risk management for laboratory automation projects: Part 3

A Tutorial from the Journal of the Association for Laboratory Automation

Originally appearing in JALA 2004, 9, 72-86.

Authored by: Robert D. McDowall, McDowall Consulting

Go to Part 1; Part 2


Evaluation and selection of the system

In this section, it has been assumed that a commercial LIMS or laboratory automation software package is being selected. However, if an in-house system is being developed from components that the organization will integrate and develop, a few modifications to Table 5 are necessary. The general factors involved, such as the technology used in the system, the mode of selection, and the impact of the vendor, are discussed separately.

Technology Components 

Presented below are factors involving the technical components of a system that influence the risk during the selection process.

New or non-standard system components 

Increased risk to a project will be incurred if new or non-standard system components are selected for the application. This category includes:

  • Hardware and operating system
  • Networking protocols or components
  • Application software, including languages, databases, tools, techniques, or utilities

The risk in selecting non-standard components is manifested in several ways. The development team and the support staff need to become familiar with the respective components, which may require training that is both extensive and costly. Integrating these new components with any existing applications may, at the very least, raise technical problems. Training to use these packages, and allowance for delays while technical problems are solved, must therefore be essential elements of the project plan.

Establishing contact with the vendor's technical experts for specialist information and advice may be a way of gaining information to reduce risk or to obtain solutions to actual problems experienced. It is preferable to keep to the corporate standards wherever they are established for easier implementation and maintenance.

The choice of packages that do not conform to corporate guidelines must be made carefully:

  • Does the database have sufficient flexibility to undertake the tasks now and in the future?
  • Is the application development language suitable for the task?

The choice of the wrong database or development language will have a major impact on the project's ability to deliver the expected benefits.

Type of system and processing 

The greater the complexity of the system, the higher the risk that something will go wrong: this is Murphy's Law of laboratory automation. A risk-management approach should be adopted: choose the simplest design consistent with supporting the application effectively. One avenue is a pilot system used to size the processor, memory, and disc I/O accurately before an operational system is installed. If distributed processing is required, implementing core functions in two locations first is preferable to discovering that the completed package does not work as anticipated. Some applications may require on-line data capture in real time, which may entail a failure-resistant hardware configuration. The need for, and justification of, every requirement should be investigated thoroughly.

Response time 

The faster the response time required by the application and its users, the higher the risk: failure to meet this performance criterion may result in loss of user involvement and interest. The sizing of hardware components and the design of rapid database searches for urgently required data may be crucial, but remember that not all data are needed rapidly. Ensure that the computer has the expansion capacity to cope with increased demand over the next three to five years, either by purchasing sufficient capacity up front or by planned incremental growth.
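A three- to five-year capacity check reduces to a simple compound-growth projection. The sketch below illustrates the arithmetic; the 15% growth rate and the transaction figures are made-up assumptions, not values from this tutorial.

```python
# Project future load under compound annual growth, to check whether
# purchased capacity will still cope at the end of the planning horizon.
# All figures are illustrative assumptions.

def projected_load(current_load: float, annual_growth: float,
                   years: int) -> float:
    """Load after compound growth at annual_growth per year."""
    return current_load * (1.0 + annual_growth) ** years

# e.g. 1,000 transactions per hour today, growing 15% per year,
# over a 5-year planning horizon:
peak_in_5_years = projected_load(1000, 0.15, 5)
```

If the projected peak exceeds the platform's headroom, either buy the larger configuration now or plan the incremental upgrades explicitly.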

System availability 

The need for high system availability should be investigated and justified; there is often a stated requirement for 24/7 availability but few systems actually require it (best justified are manufacturing processes for raw materials and active ingredients).

If a high degree of availability is demanded of the system, the supporting hardware and network also need a high level of availability; fault-tolerant hardware may therefore be justified where near-100% availability is required. Procedures for identifying and solving problems should be developed, as should effective and rapid contingency plans and disaster recovery procedures; for example, consideration should be given to keeping spare hardware available to be started up after an unplanned system or IT infrastructure failure.
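Availability requirements are easier to challenge when restated as a downtime budget. A minimal sketch of the conversion follows; the availability tiers shown are illustrative examples, not requirements from the text.

```python
# Convert an availability target into an annual downtime budget.
# The targets below are illustrative, not from the tutorial.

HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours_per_year(availability: float) -> float:
    """Hours per year a system at this availability may be down."""
    return HOURS_PER_YEAR * (1.0 - availability)

for target in (0.99, 0.999, 0.9999):
    print(f"{target:.2%} availability allows "
          f"{downtime_hours_per_year(target):.1f} h/year of downtime")
```

Asking a user group whether they can justify the cost of moving from roughly 88 hours of downtime a year (99%) to under 9 hours (99.9%) is usually more productive than debating "24/7" in the abstract.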

Technology mix 

The greater the number of technologies that have to be integrated into an application environment, the greater the risk becomes. Wherever possible, keep to the simplest approach that is consistent with the requirements of the application and that meets the needs of the user and organization. Wherever possible, use components, or proven technology, that the organization has knowledge of and has used successfully before. Here, the success rate of the organization in implementing IT and automation projects plays a role. An organization with a successful and innovative track record in implementing projects can probably justify the risk involved with a range of technologies. However, a less-evolved organization should lower its sights and err on the side of caution.

Risks associated with the system selection 

Selecting the system 

When a system or an application package is chosen, there are a number of parties with vested interests within an organization, such as the IT department, the user laboratory, and, in a regulated industry, the quality assurance unit (QAU). Using an effective project team approach, all parties should be represented and have input.

From the IT department's viewpoint, the users may have chosen the wrong package for a number of reasons, such as non-standard components, new technology, or a database that does not appear to fit the requirements. The IT department's input should be to check the requirements and the package to assess the degree of fit from an IT perspective. It is possible that the package may not meet user expectations, or that it may take considerably longer to implement than anticipated; this can be resolved by ensuring that the package is tested fully with tests that represent the functions carried out by the users.

If the IT department selects the package with little input from the users, the greatest risk is that the users will reject the system: the users know their own environment best and appreciate the functions they require. The QAU's interest is that the selected system can be validated and that it supports effective audits.

The closer the package matches the requirements, the less risk incurred by the project. The further the package is from the requirements, the more customization will be required, or the users will have to modify their working practices to use the package. Both instances increase the risk of the project and can lead to excessive time delays or user rejection. It may be appropriate in this instance to consider a custom-designed system rather than a package. Alternatively, redefine the scope of the project and reassess the fit to the modified requirements.

Seduction by technology 

Evaluating a system or application package without a system or user requirements specification is asking for trouble. There is no baseline from which to make a value judgement; the author likens this to being seduced by technology, since it is unknown whether the system can actually meet the business needs, as they have not been documented. Therefore, before evaluating systems, document the requirements and have objective means of evaluating the systems under review.

The vendor 

The project will have increased risk if the organization has no experience dealing with a specific vendor. Without first-hand knowledge of their contract negotiating techniques and their willingness to modify their system (if required), a laboratory could end up with a system that does not support the business and that incurs expensive delays. This risk is increased if the company is new or has only a relatively small number of installations.

To a certain extent, an indication of a vendor's attitude toward existing customers and their problems can be obtained from site visits. However, it is important to remember that a vendor will not usually take potential customers to a site at which they have had many problems. The site most likely to be selected will be one with which the vendor has a positive relationship.

To counter this, it may be prudent to insist that all agreements with the vendor are in writing; this may also be true of statements made by sales personnel who are attempting to win an order. Access to the vendor's technical specialists can build confidence in dealing with a vendor and be the start of a good working relationship. Communications, both formal and informal, should be established, and any issues discussed should be entered into a log as a formal record of progress.

Vendor failure 

This risk covers the failure of a vendor due to either commercial failure of the company or, more often, the withdrawal from a market sector for commercial reasons or changes in business direction. If any of these problems occur, it is important that the organization does not suffer a loss of its investment—both money and time. Contingency plans may be drawn up for the maintenance of the system, at least until a replacement can be found and implemented (possibly a one- to three-year period).

To try to avoid failure of this type, obtain financial statements from each vendor under consideration before any order is placed, and preferably during the selection process itself. Non-disclosure agreements may be essential to obtain this information, especially if it is not part of a published annual report. Key indicators are the length of time the vendor has been in business and the growth of the company over that period. Within the IT area, many companies may have relatively short track records and may be relatively small; the impact that their products have made in the time they have been available can be used instead. When considering automation, the company is more likely to be larger, but this is no guarantee of minimized risk, as some of the largest companies have changed direction and left customers with little or no future direction.

While care is needed in vendor selection, a track record and growth with a successful product is ideal, but these factors should not be used as exclusion criteria against smaller companies that may be emerging with a superior product.

Safeguarding the investment can be achieved by the use of clauses in the purchase contract—for example, all software and documentation should be provided or put into escrow with a third party in the event of failure. Access to source code is a contentious issue, but in the event of corporate failure, this may be the only way of maintaining the system. To protect the laboratory, it may be prudent to include a clause allowing the maintenance of the software by a third party if the vendor cannot or will not fulfil the contract. Incorporating items such as those outlined above is a long and complex process that should be undertaken carefully.

Make or buy a system? 

The best approach to minimizing risk is to buy a commercial system rather than make or program your own. With this preferred approach, system development and maintenance costs are spread over the whole customer base, and the system usually continues to be developed, with new features added, especially in competitive market segments of laboratory automation. However, a system often does not exactly match a laboratory's requirements, and here the compromise begins: do I change my ways of working (cheaper and better in the long run) or change the way the system works (a short-term gain that costs more in the long run as the laboratory upgrades between versions)?

Many laboratory automation projects are custom or bespoke (unique and built specifically for a single group). The benefit is a tailored approach that matches current ways of working, but such projects are expensive (the laboratory meets the full development, maintenance, and support costs) and high risk. Often these projects are driven by organizational ego, because there is sufficient money and resource to fund the work, yet they will usually take longer than implementing a commercial system. Custom projects are only truly justified when there are no commercial offerings.

Risks associated with development and rollout

Of all areas of the SDLC, development and implementation are the stages with the highest associated risk. To a certain extent, a project can cope with poor sponsorship or the suboptimal selection of a system; however, development and implementation are where the majority of projects fail. Even a technically perfect system that matches user needs can be lost through user indifference or hostility. Some common risk factors that could occur during development and implementation are presented in Table 6.

There are some factors that are unique to development and implementation. However, it is also the part of the project where many of the earlier risks will have their full effect if they have not been managed properly.

Fixed-system scope 

By the time development of the system starts, it is imperative that the scope of the system is fixed and that the functions to be customized are prioritized and agreed upon by user management. If the scope is not fixed, users or managers could add functions without control; this is one of the major reasons for the failure of many projects. Almost certainly, the system will be delayed, and the added functions may not produce any meaningful business benefit. During any implementation, the core laboratory functions should be configured first; additional functions must only be added later according to business need and under change control.

System scope matches laboratory working practices 

When development starts, the scope should either match the working practices in the laboratory, or changes in the manual practices should have been instigated so that they match the new system functions. System credibility can easily be lost amongst the users by an unplanned mismatch between system and working practices. Liaison between the user representatives on the project team and the system developers should help to alleviate this problem.

Change-control procedures 

Once the scope has been fixed, change-control procedures should be set up to debate and approve any additions, deletions, or modifications to the scope. Without change-control procedures in place, there is a significant risk of uncontrolled development of the system.

The change-control process involves a set of procedures and a review group; the latter can be either a subgroup of the project team or a separate group whose purpose is to review and prioritize any modification of the scope. Submissions detailing the changes to be made should be in writing, with the business benefit laid out. Change control should prevent trivial functions from being added at the expense of more urgent ones, thus delaying the project. The corollary is that occasionally an important function is missed from a specification, and this mechanism provides the means to have it authorized for inclusion.

Implementation of change control is also useful when the system is fully operational, as all changes to the system configuration should be proposed and authorized in this manner.
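The written-submission, review, and prioritization steps described above can be sketched as a minimal record structure. The field names, statuses, and priority scale here are illustrative assumptions, not part of any specific change-control standard cited in the text.

```python
# Minimal sketch of a change-control record and review log.
# Field names, statuses, and the priority scale are illustrative.
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    title: str
    business_benefit: str      # written justification, as the text requires
    status: str = "submitted"  # submitted -> approved / rejected
    priority: int = 3          # 1 = urgent ... 5 = trivial

class ChangeLog:
    """Formal record of submissions and review decisions."""
    def __init__(self):
        self.entries: list[ChangeRequest] = []

    def submit(self, request: ChangeRequest) -> None:
        self.entries.append(request)

    def review(self, request: ChangeRequest, approve: bool) -> None:
        request.status = "approved" if approve else "rejected"

    def approved(self) -> list[ChangeRequest]:
        # Work approved changes in priority order so trivial additions
        # do not delay urgent ones.
        return sorted((e for e in self.entries if e.status == "approved"),
                      key=lambda e: e.priority)

log = ChangeLog()
cr = ChangeRequest("Add stability-study module",
                   "Supports new regulatory submissions", priority=1)
log.submit(cr)
log.review(cr, approve=True)
```

In practice the same log serves after go-live, since all changes to the operational configuration should pass through the identical propose-and-authorize route.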


Documentation 

The documentation of the system is a key quality issue. The main document required for development is the scope, describing the functions to be customized. Additionally, an outline of the change-control procedures, draft testing and validation plans, draft procedures for start-up and shutdown of the system, and outlines of the user manuals are needed.

Documentation is required for validation of a system but, more importantly, it is essential for a smooth transition from development to operation. The time required to write good-quality user and support documentation is usually longer than anticipated. Therefore, these tasks should be started well in advance of when the documents are required, and enough time should be allowed for the job to be completed to sufficient quality for the documentation to serve as the first line of support for the system.

Involvement of users in prototyping and testing 

Before development starts, the project team should have identified a group of sympathetic users who will be used to test prototypes or functions developed via conventional programming. The users should represent all groups within the laboratory environment. Note the use of the word “sympathetic.” Credibility is easily lost during development by word of mouth and by the actual performance of a system. There is little point in selecting a group of users who do not want the system to succeed or who are skeptical toward the use of automation. What is required is constructive comment and criticism that will allow development of functions to proceed without detrimental comments about the system being made.

Implementation and rollout 

Detailed planning and availability of personnel in this phase of the SDLC are crucial to the credibility of management and success of the overall implementation.

Detailed implementation plan available 

Before commencing this phase of work, a detailed plan covering the implementation must be available. Details covered should be as follows:

  • The implementation style for an LIMS should be clearly defined, and the implications of each approach thought through before starting.
  • Training, which can be carried out in various ways (e.g., internal or external), should be planned and costed. Any external staff from a vendor should be informed of when and where they are required.
  • External groups who submit work to the laboratory should know when training takes place and its impact on work schedules. The latter should be rearranged to allow for the immediate post-implementation period, when productivity will be lower than normal.

The aim of the plan is to remove most of the uncertainty involved during implementation and direct resources to where they are most needed and when they are most required.

Training plans agreed 

Once the implementation style has been agreed upon, the training schedule can be developed relatively easily. In the implementation plan, the groups of workers who will be trained to use the system and the order of training should be identified along with the support staff who must be on hand to augment training and solve any problems. Obviously, risk increases dramatically if staff are not trained to use the system.

Many vendors offer standard courses; however, these may not meet the needs of users where the system has been customized from the core system offered for sale. It may be beneficial to consider customizing training courses and holding them on site if there is sufficient demand or the cost benefit is good.

Training is an easy target when it comes to budget cuts: instead of training all system users, only key ones are trained, with the aim of cascading the training from one or two key users to the rest of the user community. This is often a false economy; the key users may be technically very capable but are not professional trainers, so skills and knowledge may not transfer effectively from them to others. Ensure training is properly budgeted and professionally carried out so that the organization has a good opportunity to gain the best benefit from its investment in the system.

Implementation delays 

There are a number of possible causes of delay, including a vendor's failure to deliver a package within an agreed time frame, or in-house software taking longer to write than expected. More problematic are instances where the functions of the system do not match the current working practices in the laboratory, necessitating a delay while software is rewritten. A lack of suitably qualified staff, either in-house or from a vendor, may also impact the project at a crucial time. Regardless of cause, delays in implementation are frustrating, damage morale, and have a negative impact on the credibility of the system.

Using staff to work on the project in their spare time increases risk due to conflicting interests. It is preferable to have dedicated staff working on a project to ensure implementation in a timely manner.

The easiest way to manage this risk is to build slack or contingency periods into the project plan. These can be used to offset delays and avoid reissuing the project plan; if they are not required, the project delivers ahead of schedule.

Poor system performance 

This is a classic reason for project failure during implementation: the system was sized either by estimation or by a formula, and the overall platform performance is not sufficient to operate the system effectively and provide adequate performance to the users and, ultimately, the laboratory's customers. Effectively, the system is useless and unable to perform its function. This can be due to a combination of factors:

  • Hardware related: undersized processor, insufficient memory, insufficient disc input/output capacity
  • Software related: inefficient or non-optimized software routines, slow and non-optimized database searches, underestimated laboratory workloads

There are a number of approaches to overcoming these problems. One is to define the overall workload of the laboratory accurately, and to define in unambiguous terms what a sample, a test, an analysis, and a result mean within the context of the specific laboratory; this should allow a vendor to size a system more accurately. Note, however, that vendors work on average system sizes. If a laboratory's application is below average, performance should not be affected and may even be enhanced; if it is above average, performance will be affected, often quite dramatically.
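Once sample, test, and result are defined unambiguously, the annual workload handed to a vendor for sizing is simple multiplication. The sketch below illustrates this; all the figures are invented examples, to be replaced with a laboratory's own numbers.

```python
# Rough workload estimate for sizing discussions with a vendor.
# All figures are made-up illustrations; substitute your laboratory's
# own definitions and counts of samples, tests, and results.

samples_per_day = 200          # samples logged in per working day
tests_per_sample = 4           # analyses requested per sample
results_per_test = 6           # individual results per analysis
working_days_per_year = 250

def annual_results(samples, tests, results, days):
    """Total result records generated per year."""
    return samples * tests * results * days

total = annual_results(samples_per_day, tests_per_sample,
                       results_per_test, working_days_per_year)
print(f"{total:,} results/year")  # prints 1,200,000 results/year
```

Stating the workload this explicitly also exposes whether your laboratory sits above or below the vendor's "average" sizing assumptions.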

Visits to existing users are a more practical way of discovering how effective the vendor has been at sizing a system. If this approach is taken, it is imperative that the site visited is a laboratory in the same industry and, wherever possible, one using the same software modules as the vendor is proposing for you. It can often be very difficult to find a site that operates even in the same industry as your laboratory, or the one found may be located in a different country. Nevertheless, many aspects of site visits are very useful.

Alternatives exist to avoid performance problems. The approach taken in the author's laboratory was to purchase a small development computer system, develop the software, and carry out performance tests that predicted the size of computer system required to support the intended user base. Another is to carry out a performance test on the proposed system configuration and time the responses obtained. A third is to specify the minimum response times required in the contract, with penalties for failure to achieve them. The author prefers the more practical approach of direct sizing, as it removes an area of uncertainty during the most critical phase of a project and should also eliminate the need to seek additional funds for a processor upgrade or additional discs soon after the system is operational.
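Timing responses on a candidate configuration, as described above, needs nothing more elaborate than a stopwatch around representative operations. A minimal sketch follows; the stand-in operation and the 2-second target are illustrative assumptions, not figures from the text.

```python
# Sketch of a response-time check: time a representative operation
# and compare it against a contractual or requirements target.
# The operation and the 2-second target are illustrative assumptions.
import time

def timed(operation, *args):
    """Run an operation and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = operation(*args)
    return result, time.perf_counter() - start

def representative_search(n):
    # Stand-in for a database search; replace with a real query
    # against realistic data volumes on the candidate hardware.
    return sum(i * i for i in range(n))

TARGET_SECONDS = 2.0
result, elapsed = timed(representative_search, 100_000)
meets_target = elapsed <= TARGET_SECONDS
```

Running such checks at realistic data volumes, and repeating them as load grows during a pilot, is what turns sizing from estimation into measurement.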


Hardware obsolescence 

Given the rapid development and life cycle of hardware and communications components, it will not come as a surprise that system hardware can be replaced in a product line before the organization's depreciation period is completed. If the equipment has been purchased from a recognized supplier, service support should not be a problem, but expansion may be. To reduce, though not completely eliminate, this risk, ensure that the development plans of the hardware supplier are known, especially if a proposed hardware system has been available for over two years.

Learning from failure

Learning from our mistakes is a common saying, but the temptation when faced with a project failure is to keep quiet or sweep the issue under the carpet. Take another view, as it is unrealistic to anticipate or expect that all automation projects undertaken will be completely successful. The culture of an organization and the attitudes of immediate line management will usually dictate how failure is dealt with: some organizations may encourage risk taking and allow the undertaking of leading-edge projects, with the expectation that some will fail, whilst others may be more circumspect and not encourage risks to be taken. Whatever the organization, it is rare to undertake an investigation of the reasons for failure. This is unfortunate, as failure is a valuable learning experience that should be used and fed back into the cycle of laboratory automation projects for the benefit of the organization.

The causes of automation failures can appear to be many and varied, and failure can also come in varying degrees. However, failure can be classified into four main categories that are presented and discussed below.

Failure to learn 

When a project has been undertaken, the lessons and experience should have been learnt and passed to any new project before the latter starts. Knowing the reasons for failure should help a similar project succeed by avoiding the obvious pitfalls.

The corollary, of course, is to know the reasons for a project being successful, which can be just as helpful. Yet we rarely bother to understand why a project was successful; usually it is congratulations and plaudits all round and down to the bar for a pint. However, the dividing line between success and failure can be agonizingly thin.

Failure to anticipate 

The essence of a failure to anticipate is not ignorance of the future, which obviously cannot be foretold, but the failure to take precautions against known hazards or events. Examples include the rapid development of equipment (scientific, automated, and computer); it should be possible to anticipate the introduction of new models, usually through vendor briefings under a non-disclosure agreement. Failure to take note of these events could mean purchasing an item of equipment or a model that is obsolete before the automated system is operational. Furthermore, the introduction of any automated system requires careful management of the expectations of the potential user base.

Failure to adapt 

Adapting can be defined as identifying and taking full advantage of opportunities that arise during the course of an automation project. Exploiting opportunities requires people who have the authority and ability to work independently and use their initiative. Working practices and organizations in a changing environment are not immutable and should alter to meet the new challenges that arise. This needs to be actively managed; never forget that the human element is one of the keys to success in automation projects.

Catastrophic failure 

As the title suggests, this is a total failure of an automation project, which can result from mistaken scientific principles, the wrong technology, non-involvement of the users in the project, or incompetent management. Knowing the general reasons for failure listed above, and encouraging a culture of openness and honesty in investigating and explaining the failure of individual projects, will benefit all future projects within an organization.


Summary

When considering risk assessment and management throughout the lifetime of an automation project, a number of common threads emerge:

  • Effective planning is needed, which includes allowances for slippages and tasks that were not identified at the start of the project. The plan should go to a depth that allows the project to progress on strong technical and human grounds. This is not always done, and project plans are usually overoptimistic.
  • Communication among all parties (e.g., users, vendor, management, QA, and IT) is an essential element of reducing risk by the transfer of information.
  • Discussion of the business benefits of a new system should be realistic to manage user expectations.
  • Experience and skills on automation and IT projects are valuable resources within many organizations. Too often they are not used to their full extent because experience is not passed to other functional groups undertaking similar projects; consequently, many projects waste time and resources overcoming problems that other groups have already resolved.
  • Common sense and flexible management approaches are essential, both from user management and from the project manager.
  • User involvement is essential for a successful project and must be matched by management commitment.
