Risk Based Validation of Laboratory Information Management Systems (LIMS)

From LabAutopedia

A LabAutopedia invited article

Authored by: R.D.McDowall, McDowall Consulting



The purpose of this section is to present the options available for the validation of a Laboratory Information Management System (LIMS) within a regulated GMP environment. It is important to note that the content of this paper describes good computing practices that apply equally outside the pharmaceutical industry. The only difference for the pharmaceutical industry is the need for Quality Assurance to approve some of the key documents written during the project.

First, I will present what a LIMS is and then briefly introduce the laboratory and organisation environment that a system will operate under. This is important because the entity to be validated is not a single LIMS application: it also includes the other applications that the LIMS interfaces to, as well as the analytical instruments and systems connected to it. Second, we will look at the life cycles and software that constitute a LIMS, followed, third, by the roles and responsibilities of the personnel involved in a validation project. Fourth, we will discuss the processes that the LIMS could automate and how to use the LIMS to make them more efficient and effective. Fifth, we will present the life cycle stages and discuss the documented evidence from concept to roll-out of the system, and finally, sixth, the measures necessary to maintain the validated status of an operational LIMS.

It is assumed that a commercial system will be implemented and validated. Depending on the specific LIMS being implemented, there will be additional configuration of the software and / or writing of custom software using either a recognised commercial language or an internal scripting language. Writing a LIMS application in-house is no longer justifiable on the basis of time, cost and support effort.

Different 4Qs Terminology for Computerised System Validation

The following 4Qs terminology and abbreviations are used in this chapter:

  • Design Qualification (DQ): Definition of the intended purpose of the system to be validated. The document written is a user requirements specification (URS) rather than a DQ.
  • Installation Qualification (IQ): Installation of the system components (hardware, software and instruments) and their integration together
  • Operational Qualification (OQ): Testing that the software works as the supplier intended
  • Performance Qualification (PQ) or User Acceptance Testing: Testing of the system in the way it is intended to be used, together with verification that the requirements in the URS have been fulfilled

Note that this terminology is the same as that of USP <1058> on Analytical Instrument Qualification BUT the meanings differ. The terminology used in this section relates solely to computerised system validation and is also linked to my AIQ section.

LIMS Do Not Have On Buttons

Mahaffey [1] stated in his book that LIMS do not have on buttons. This is a vital concept, as true today as it was when it was written nearly 20 years ago, and it must be understood by management and project team members involved in the implementation or operation of any LIMS. There is substantial software configuration and / or customisation involved in getting the system to match the current or planned laboratory working practices, as well as populating the database with specifications and migrating data from legacy systems if required. This takes time, which management is not likely to appreciate, but the work still needs to be done and included in the project plan for the system. The size of the data population may mean that the implementation is phased over time.

Project Timescales and Phased Implementation

As a consequence of having no on button, the project timescales for a LIMS implementation can be rather lengthy and the system may be implemented over a number of phases. It is important that management understand this: there can be no quick roll-out unless quality is compromised or project resources are increased accordingly. The first phase of the project should deliver a functioning system for at least part of the laboratory (e.g. raw materials, finished products etc) and also have the major instruments interfaced to the system.

Typically the phases of the project will take the following times:

  • Selection may take between 6 and 9 months. This could be reduced to zero if the laboratory has to implement a system that has been selected by the organisation.
  • After key personnel training, there will be 6 – 12 months before the first phase of the system is implemented, validated and rolled out.

So a typical project could run between 12 and 24 months for the first phase of work and will therefore need to be adequately resourced in terms of money, time and personnel. To maintain project momentum it is better to have shorter rather than longer timescales. Further phases of the LIMS project will follow, extending the timeline depending on the complexity of the work to be performed.

Project Risk

Risk management is a key element in any LIMS project. However, it is important to understand that there are two types of risk to be managed. The first is regulatory risk, associated with compliance with applicable external regulations and corporate policies, which will be covered in this chapter. Space does not permit a detailed discussion of the second, business risk, which is associated with the project itself. As over 50% of LIMS implementations fail to meet initial expectations, the reader is referred to the paper by McDowall [2], which contains risk tables where various business risk scenarios for laboratory automation projects are presented and discussed. This will help project teams identify potential risks and develop plans to mitigate them before they occur.

References for LIMS Validation

There are a number of references that can be used for help when considering the validation of a LIMS. United States Pharmacopoeia general chapter <1058> on Analytical Instrument Qualification [3] references the FDA guidance on General Principles of Software Validation [4]. Using the FDA guidance is a flawed approach for a LIMS, as it is written in the context of medical devices that are not configured or customised by the users; omitting the configuration and customisation phase of work from the validation opens the laboratory to unacceptable business and regulatory risks.

The GAMP Forum has published a good practice guide on the Validation of Laboratory Computerised Systems [5]. However this suffers a number of drawbacks in terms of over-complex risk management and its simpler implementation life cycle for a LIMS [6]. In the author’s view it is better to adapt the various life cycle models presented in GAMP 5 [7] which will be discussed later in this chapter or adapt the validation approach for chromatography data systems [8] for the additional work required for a LIMS.

What is a LIMS and the LIMS Environment?

Before we begin a discourse on the validation of a LIMS it is important to understand two terms: LIMS and the LIMS environment. The first refers to the LIMS application that is purchased from a commercial supplier and then implemented in the laboratory. The term LIMS implies that only the application is to be validated; this will not be the case in most LIMS implementations.

What is a LIMS?

A LIMS is a computer application designed for the analytical laboratory to administer samples, acquire and manipulate data and report results via a database [9]. It automates the process of sampling, analysis and reporting; in its simplest concept, shown in Figure 1, samples are generated outside of the laboratory and submitted for analysis. The laboratory analyses these samples and generates data, which are interpreted; the laboratory then produces information, in the shape of a report, for the individuals who will use it to make decisions. Therefore it is important to realize that a LIMS should impact both the laboratory where it is implemented and the organization that the laboratory serves. To be effective a system should deliver benefit to both. Thus a LIMS has two targets:

  • The laboratory: the information generator
  • The organisation: the sample provider and the information user

Figure 1: A LIMS has Two Targets

Figure 1 shows a LIMS sited at the interface between a laboratory and an organisation. Samples are generated in the organisation and received in the LIMS, followed by laboratory analysis. The data produced during analysis are reduced within the LIMS environment to information, which is transmitted back into the organization. This represents the ideal placement of the LIMS: both the organisation and the laboratory benefit via the system. The line dividing the organisation and the laboratory shows that the LIMS is of equal benefit to both. There are other versions of this diagram that can be implemented that provide virtually no benefit to the laboratory or the organisation; these are discussed in more detail elsewhere [10].

Therefore, to hit these two targets, integration of the LIMS with other applications both inside and outside the laboratory is the key to success. Hence we need to consider the term LIMS environment, which encompasses the IT environment inside and outside of the laboratory.

The LIMS Environment

In reality a LIMS is more complex than just a single application and hence I prefer the term LIMS environment to describe at least two of the following elements:

  • LIMS application
  • Analytical instruments interfaced directly with the LIMS
  • Laboratory data systems and computer systems interfaced with the LIMS (chromatography data systems, scientific data management systems, electronic laboratory notebooks etc)
  • Applications outside of the laboratory that are also interfaced to the LIMS (enterprise resource planning systems)

This is the full scope of the computerised system that could be validated within a LIMS project and is shown in Figure 2.

Figure 2: Diagram of a LIMS environment

Designing the LIMS environment means that you need to consider all the other systems in the laboratory that must interface with the LIMS. This includes other applications such as scientific data management systems, chromatography data systems (CDS) and electronic laboratory notebooks (ELN), as well as various data systems that may be attached to those or run independently. It also includes analytical instruments, chromatographs and laboratory observations, shown in the lower half of Figure 2. Data can be transferred to the LIMS by a variety of means:

  • Direct data capture from an instrument connected directly to the LIMS
  • Data capture from an instrument with analysis and interpretation by the attached data system, with only a result transferred to the LIMS
  • As above, but the results or electronic records are transferred to the LIMS via a scientific data management system
  • Laboratory observations can be written into a notebook then entered manually into the LIMS, or captured electronically via an ELN and transferred electronically to the LIMS.
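To make the "result only" transfer path concrete, the minimal sketch below parses a line of instrument output and posts it into a simulated LIMS store without any re-typing. The file format, field names and functions are illustrative assumptions, not those of any real instrument or LIMS interface.

```python
def parse_balance_line(raw: str) -> dict:
    """Parse one line of a hypothetical balance export,
    e.g. 'SAMPLE-0001,weight_mg,105.32' (format is illustrative only)."""
    sample_id, test, value = raw.strip().split(",")
    return {"sample_id": sample_id, "test": test, "value": float(value)}

def post_result(record: dict, lims_store: dict) -> None:
    """Append the parsed result to a simulated LIMS store, keyed by sample ID,
    so the value is never re-typed by an analyst."""
    lims_store.setdefault(record["sample_id"], []).append(
        {"test": record["test"], "value": record["value"]}
    )

# Capture a single result electronically from instrument output to LIMS.
lims_store: dict = {}
post_result(parse_balance_line("SAMPLE-0001,weight_mg,105.32"), lims_store)
```

In a real interface the parsing routine would itself be validated code supplied or specified as part of the LIMS project, but the principle is the same: the number recorded by the instrument is the number stored in the LIMS.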

Once the laboratory side of the LIMS environment has been designed, the LIMS needs to be integrated into the organization. Some of the systems that construct the LIMS environment here and interface with the system are:

  • E-mail for transmission of reports to customers or keeping them aware of progress with their analysis
  • LIMS web servers for customers to view approved results
  • Applications maintaining product specifications
  • Enterprise Resource Planning (ERP) systems for linking the laboratory with production planning and batch release
  • Electronic Document Management Systems (EDMS)
  • Electronic Batch Record Systems (EBRS)

These are just a few of the possible applications that a LIMS could be interfaced to; the list of potential candidates will be based on the nature of the laboratory and the organization it serves.

The LIMS Application

We also need to consider the LIMS application itself in more detail, and what functions the software can undertake on its own. Here the discussion is general, as individual commercial LIMS applications will differ in their scope and functional offerings. In addition there may be functional overlap between the LIMS and applications that could be interfaced with it: either the LIMS or another application could automate a specific portion of a process. This overlap needs to be resolved with an overall strategic plan for the LIMS environment to determine whether the LIMS or another application will undertake a specific function. In more detail, the functions that could typically be automated within a LIMS are:

  • Specification management
  • Scheduling analytical work
  • Sample management including sample labelling
  • Analysis management: definition of methods and procedures
  • Instrument interfacing and communication
  • Results calculation, management and reporting
  • Stability studies management and reporting including calculation of storage times
  • Environmental sample planning, analysis and reporting
  • Instrument calibration and maintenance records
  • Trending results versus specifications
  • Laboratory out of specification investigations
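At the heart of the sample management function is a sample record that moves through a controlled set of statuses. The sketch below shows this idea with a deliberately simplified lifecycle (logged → in analysis → approved / rejected); the status names and transition rules are assumptions for illustration, as every commercial LIMS defines its own.

```python
from dataclasses import dataclass, field

# Hypothetical lifecycle: which status changes are permitted from each state.
ALLOWED_TRANSITIONS = {
    "logged": {"in_analysis"},
    "in_analysis": {"approved", "rejected"},
}

@dataclass
class Sample:
    sample_id: str
    status: str = "logged"
    results: dict = field(default_factory=dict)

    def move_to(self, new_status: str) -> None:
        """Enforce the lifecycle: an illegal status change is rejected."""
        if new_status not in ALLOWED_TRANSITIONS.get(self.status, set()):
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status

# A sample is logged, analysed and approved in the permitted order.
s = Sample("S-001")
s.move_to("in_analysis")
s.results["assay_pct"] = 99.2
s.move_to("approved")
```

A real LIMS enforces such transitions in its database and workflow engine rather than in application code, but the design question is identical: which status changes are legal, and who may make them.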

Figure 3 shows a high level process flow of a typical LIMS that starts with the development and validation of an analytical method, followed by sample analysis. The dashed lines around method development / validation and sample analysis highlight the implied need to interface with instrument data systems such as a CDS and also analytical balances. However, the options implemented for a specific LIMS installation will depend on the functions of the laboratory being automated and the other applications operational or planned to be installed in the LIMS environment.

Figure 3: LIMS workflow for method development and sample analysis

The LIMS Matrix

In addition to process mapping, another tool to help plan the implementation of a system is the LIMS matrix [10], which is useful to document and visualise the high level needs of the LIMS environment. The matrix consists of a three by six grid of eighteen cells: nine represent the laboratory and the other nine the organisation. This is a way of depicting the LIMS application together with the LIMS environment, as each application, either planned or existing, is mapped on to the matrix. The matrix is most useful in getting senior management and the LIMS project team to agree on the overall scope of the system.

Understanding and Simplifying Laboratory Processes

A pre-requisite before implementing a LIMS, or indeed any major computerised system, is to map and optimise the laboratory processes that the LIMS will automate. The laboratory needs to understand each process, identify any bottlenecks in it and establish the reasons for each one occurring. This is especially important when moving from a paper based to an electronic environment, because most laboratory processes have evolved over time rather than been designed, and the processes are paper based rather than electronic. (Refer to reference 10 for further information about the LIMS matrix.) The aim for any LIMS implementation is a simplified and streamlined electronic process rather than the automation of an inefficient, paper based status quo.

The Way We Are

The first stage in considering the paperless laboratory is to look at the basic processes and computerised systems: how do they currently operate and how do they integrate together? A laboratory may have many computerised systems, such as a chromatography data system and the data systems associated with the main analytical techniques (MS, UV, NIR etc). As such, the laboratory can appear on the surface to be very effective, but in practice these are islands of automation in an ocean of paper. The main way that data are transferred from system to system is via manual input using paper as the transport medium: a slow and inefficient process. Furthermore, the process will have evolved over time and may have acquired additional tasks that do not add any value to the laboratory output, making it very slow and inefficient. The diagnostic approach is to map your current process and then redesign and optimise it to use IT systems, including the LIMS, effectively and efficiently, ensuring that they deliver business benefit and regulatory compliance. The process maps of the current working practices help you understand what you do and why you do it. In many instances the reason will be one or more of the following:

  • Custom & practice (we have always worked this way)
  • Evolution over time (we have had new projects or new tasks to do)
  • Extensive quality control checks (the FDA didn’t like our previous way of working)

Figure 4: The Multiple Current Processes for Sample Management

For example, Figure 4 shows a process map for sample receipt in a laboratory. There are two process flows: the first for internally generated samples and the second for externally generated ones. Originally the two process flows were the same, but when a shipment of samples was lost after delivery to the site, the manager of the laboratory instigated the second process flow to prevent this happening again. Also shown on the diagram is an undocumented process developed by one individual to streamline their work: simple and easy to perform, it is nevertheless non-compliant as there is no SOP written for it. The problem when implementing a LIMS is that incorporating all three process flows into the system brings three times the effort in specifying, implementing and validating. In all probability the resulting workflows will be less efficient than the paper system they replace. Hence the need to map and redesign, standardise, harmonise or optimise the processes in the laboratory to ensure a simpler implementation.

Operating Principles of an Electronic Laboratory

There are three basic operating principles of the electronic laboratory that should be used to redesign or optimise the laboratory processes [4]. These are:

  1. Capture Data at the Point of Origin: If you are going to work electronically, then data must be electronic from first principles. However, there is a wide range of data types that include observational data (e.g., odour, colour, size), instrument data (e.g., pH, LC, UV, NMR, etc.), and computer data (e.g., manipulation or calculation of previous data). The principle of interfacing must be balanced with the business reality of cost-effective interfacing: what are the data volumes and numbers of samples coupled with the frequency of the instrument use?
  2. Eliminate Transcription Error Checks: The principles for design are as follows: never re-type data and design simple electronic workflows to transfer data and information seamlessly between systems. This requires automatic checks to ensure that data are transferred and manipulated correctly. Where appropriate, implement security and audit trails for data integrity and only have networked systems for effective data and information sharing.
  3. Know Where the Data Will Go: Design data locations before implementing any part of the LIMS and the LIMS environment. The fundamental information required is what volumes of data are generated by the instrumentation and where the data will be stored: in an archive system, with the individual data systems, or on a networked drive? The corollary is that security of the data and backup are of paramount importance in this electronic environment. In addition, file naming conventions are essential to ensure that all data are uniquely numbered, either manually or automatically. If required, any archive and restore processes must be designed and tested so that they are reliable and robust.

The key message when designing electronic workflows is to ensure that once data are acquired they are not printed out or transcribed again but transferred electronically between systems using validated routines.
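Two of the principles above, automatic checks on data transfer and unique file naming, can be sketched in a few lines. This is a minimal illustration, not a validated routine: the checksum scheme, function names and naming convention are all assumptions made for the example.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Checksum used to confirm that data arrived intact."""
    return hashlib.sha256(data).hexdigest()

def validated_transfer(data: bytes) -> bytes:
    """Simulate an electronic transfer that verifies its own integrity:
    the checksum is computed at source and re-checked on receipt."""
    source_checksum = sha256_hex(data)
    received = bytes(data)  # stand-in for a network or file-share hop
    if sha256_hex(received) != source_checksum:
        raise IOError("transfer corrupted: checksums differ")
    return received

def unique_file_name(instrument: str, run_number: int) -> str:
    """Build a unique raw-data file name from an instrument identifier and a
    zero-padded run number; the convention shown is illustrative only."""
    return f"{instrument}-{run_number:06d}.raw"
```

The point of the sketch is the design pattern: every electronic hand-off should carry its own integrity check, and every data file should have exactly one possible name.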

The Way We Want to Be

The main aim is to understand where there are bottlenecks and issues in the process. Analyse and find the root causes of these bottlenecks, as they will help you to challenge and improve the process. When the current process is redesigned and optimised the aim must be to have, as far as is practicable:

  • Electronic ways of working
  • Effective and efficient hand-offs and transfers between applications and organisational units

This will enable a laboratory to get the process right.

Figure 5 shows the improved sample management process that will be implemented by the LIMS. Just by visual comparison of the current and new processes you can see that the new process is simpler and easier to understand. Here, the existing formal and informal process flows have been merged into a single, simplified process and sample labels contain bar codes to enable better sample tracking and management. There is no differentiation between the source of the samples – all samples are treated the same. This means that user training and validation work is only spent on a single unified process saving time and effort which more than pays for the cost of redesigning the process.
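The barcoded labels mentioned above work best when the sample identifier itself can reveal a mis-scan or mis-typed entry. A common way to do this is a check character appended to the identifier; the sketch below uses a simple mod-36 scheme. Both the label format and the checksum are illustrative assumptions, not a barcode standard.

```python
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def check_char(body: str) -> str:
    """Simple mod-36 check character so a corrupted label can be detected
    (illustrative scheme only, not Code 39 or any other standard)."""
    total = sum(ALPHABET.index(c) for c in body)
    return ALPHABET[total % 36]

def make_label(year: int, sequence: int) -> str:
    """Compose a unified sample label (hypothetical format: 'S' + year +
    five-digit sequence + check character) used for all sample sources."""
    body = f"S{year}{sequence:05d}"
    return body + check_char(body)

def label_is_valid(label: str) -> bool:
    """Recompute the check character to catch scanning or typing errors."""
    return check_char(label[:-1]) == label[-1]
```

Because every sample, internal or external, gets the same label format, the single unified process in Figure 5 needs only one labelling and tracking mechanism.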

Figure 5: The Optimised Sample Management Process prior to LIMS Implementation

Therefore, look at your basic laboratory process and design electronic ways of working: see what changes could be made to remove inefficient tasks and improve speed. Knowledge and interpretation of the GLP or GMP regulations that the laboratory works to is also very important: knowing which records need to be signed and when. However, working electronically requires that any application used is technically compliant with 21 CFR 11. Furthermore, it is essential that the LIMS be interfaced with the analytical systems in the laboratory to ensure that data are captured electronically. Unfortunately this is not always the case: one report has stated that less than 50% of systems are connected to instruments [11]. In such a situation it is doubtful whether any LIMS implementation will be cost effective, as the resulting manually driven process around the LIMS has few advantages over the paper system it replaces.

GAMP Software Categories and System Life Cycle for a LIMS

To define the risk and amount of work that we need to do when validating a LIMS we need to understand the categories of software present in a LIMS. Once this is determined, the life cycle that is necessary to implement a LIMS can be defined.

GAMP Software Categories in a LIMS

The Good Automated Manufacturing Practice (GAMP) guide is an industry-written document for the validation of computerised systems used in the pharmaceutical industry, now in its 5th version [7]. In all versions there is a classification of software into one of five categories, presented in Table 1. Further discussion and debate on the GAMP software categories as applied to laboratory computerised systems can be found in the paper by McDowall [12].

Table 1: GAMP 5 Software Categories
Category 1: Infrastructure Software
  • Established or commercially available layered software including operating systems, databases, office applications etc
  • Infrastructure Software Tools including antivirus, network management tools etc
Category 2: Firmware
  • Discontinued – firmware now treated as category 3, 4 or 5.
  • Clash with USP <1058> over approach for Group B laboratory instruments: validate or qualify?
Category 3: Non-Configured Products
  • Off the shelf products that cannot be changed to match the business processes.
  • Can also include products that are configurable, but only if the default configuration is used.
Category 4: Configured Products
  • Configured products provide standard interfaces and functions that enable configuration of the application to meet user specific business processes.
  • Configuration using a vendor supplied scripting language should be handled as custom components (Category 5).
Category 5: Custom Applications
  • These applications are developed to meet the specific needs of the regulated company.
  • Implicitly includes internal application macros, LIMS scripting language customisations, VBA spreadsheet macros
  • High inherent risk with this type of software.

Therefore a LIMS could contain the following categories of software:

  • LIMS Application software which is configured (category 4)
  • Customisation of the product using the internal scripting language (category 5)
  • Writing custom code using a recognised computer language to connect the LIMS to another application or instrument (category 5)

As a minimum a LIMS could consist of only category 4 software (option one above), but in a GMP environment it will also contain at least one type of category 5 software: the scripting language option for customisation. This mixed environment affects the life cycle and will lengthen the time for implementation of the system. Therefore, wherever possible, the laboratory should change the business process to match the LIMS to reduce implementation time and validation cost, as discussed in the previous section.

LIMS Life Cycle Model

The combination of category 4 and category 5 software in a typical LIMS means that a more complex life cycle is required to accommodate these two categories. This means that the life cycle must control the implementation of the configuration of the commercial LIMS software as well as control the writing of the custom elements. The GAMP guide version 5 [7] and the GAMP Good Practice Guide on Testing [13] have a number of life cycle models including a model of software category 4 integrated with category 5 extensions. It is this model that we will consider for a LIMS life cycle model.

The life cycle model that has been adapted for a LIMS is presented in Figure 6. The category 4 life cycle follows the stages linked by the bold lines; the category 5 software modules for the extensions to the configured system are nested within the category 4 life cycle and are linked by the thinner lines. This is the most common approach to a LIMS system life cycle.

Figure 6: LIMS Life cycle model for Category 4 and Category 5 Software (Adapted from reference 7)

Some explanation of the life cycle is required. The category 4 and 5 life cycles are highly integrated and dependent on one another. The main part of the life cycle is for the category 4 portion of the application and requires the following phases, connected by the heavy solid line:

  • Writing a User Requirements Specification (URS) to define the overall system and business requirements of the LIMS
  • Configuration of the system is documented in a configuration specification (CS) that details how the application will be configured to meet the requirements
  • A technical specification will define the computing platform that will run the production system and any other environments
  • The hardware platform and infrastructure software, including the operating system, will be installed and qualified (IQ and OQ) in a data centre, followed by the installation and qualification of the database and LIMS software (IQ and perhaps the OQ)
  • After qualification of the LIMS software it will be configured as specified in the configuration specification. This process needs to be documented either at this stage or during the PQ.

Category 5 custom elements will have the nested life cycle shown within Figure 6 with the light solid line:

  • Customisation of the system using the LIMS scripting language needs to be specified in a software module specification
  • Modules of custom code will be written using the scripting language and follow the coding conventions devised by the LIMS vendor
  • Testing of each module will be undertaken against the software module specification to show that it works, including tests to show that the code is integrated with the main LIMS application

To ensure that the configured application (category 4) and the custom modules (category 5) work together, the whole system undergoes user acceptance testing (PQ), carried out by the users to show that the whole system works as specified in the URS.
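Demonstrating that the PQ covers the URS is usually done with a requirements traceability matrix: each URS requirement must be verified by at least one test case. The minimal sketch below shows the underlying check; the requirement and test identifiers are hypothetical examples, not drawn from any specific project.

```python
def coverage_gaps(urs_ids, test_cases):
    """Return URS requirement IDs not exercised by any PQ test case.
    `test_cases` maps a test ID to the list of requirement IDs it verifies.
    (IDs and structure are illustrative only.)"""
    covered = {req for reqs in test_cases.values() for req in reqs}
    return sorted(set(urs_ids) - covered)

# Three requirements, two PQ test scripts: one requirement is untested.
urs = ["URS-001", "URS-002", "URS-003"]
pq_tests = {"PQ-01": ["URS-001"], "PQ-02": ["URS-003"]}
gaps = coverage_gaps(urs, pq_tests)  # "URS-002" still needs a test case
```

Running such a check before test execution, however it is implemented, means the validation summary report can state with evidence that every URS requirement was verified.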

Note that the life cycle model depicted in Figure 6 does not show all the documents required for a LIMS validation, only the main phases of a project. A functional specification is not shown in the diagram as this will typically be written by the LIMS supplier; it can be used by the laboratory to reduce the amount of testing, based on whether a LIMS function is standard, configured or customised.

Validation Roles and Responsibilities for a LIMS Project

There will be a number of individuals involved in a LIMS project; the range of roles is discussed below. The reason for this list is that a multidisciplinary approach is essential for the implementation of a LIMS. The following personnel may be involved with a project depending on the company, its organisational structure and the LIMS project size.

  • Senior management: this individual or individuals will be responsible for the budget and authorisation of the project. Summary reports of the progress of the project against the project plan should be generated regularly to ensure continued support.
  • Laboratory management: resource allocation between the project and the normal work. Typically the laboratory manager will be the system owner and responsible for the LIMS and the overall approach taken in the validation.
  • Laboratory users: these will constitute the bulk of the project team and typically will split their time between their normal work and the LIMS. Allowance needs to be made by management to allow time on the LIMS project otherwise the project will suffer and fail to meet milestones and deadlines.
  • LIMS project manager: this individual is the single point of responsibility for the whole project and responsible for planning and organising the work with the available resources. Ideally this role should be full time [2] to ensure dedication to the delivery of the project on time and on budget. The project team members will be tasked through the project manager. If there is a conflict between the normal role of the project team members and the LIMS, the project manager will have to negotiate with the laboratory manager. It is these conflicts that will result in delays to the LIMS project and possible budget escalation.
  • Sample providers and information users: these are the customers of the laboratory who generate samples and use the information provided by the laboratory. In some cases they may be the same individuals, in others different people. They must be included in the scope of the project, as otherwise the project will benefit only the laboratory and not the organisation.
  • Quality Assurance: the role is to ensure compliance with applicable regulations and company procedures. To ensure efficient document production, not every document needs to be authorised by QA before release. As a minimum, the validation plan, summary report, user requirements specification and any test plans should be authorised by QA, but not all the individual test scripts, as there is limited value in a QA review before execution as opposed to a post execution review.
  • IT Department: there will be input into the technical architecture and ensuring that corporate IT standards are met by the new system. They will be involved with the installation and qualification of the computer platform and the operation of the platform of the validated system. There may be a role in allocating user identities and access privileges, backing up the system and maintaining the platform including patching and database administration.
  • LIMS Supplier: provides the LIMS application, supports customer audits of how the system was developed and supported, and provides technical expertise to help configure or customise the system to meet the business requirements. The vendor may offer consultants or contractors to carry out this work, who will work alongside the project team members and the users to achieve a configured / customised system. Familiarity with the LIMS application, its configuration possibilities and the scripting language are key factors in deciding whether to use the supplier in a greater role.
  • Internal Validation Group: if involved in the LIMS project this team will provide validation advice and may also write many of the documents. In many cases the use of a validation group may be an unwritten statement to the users that validation is not their problem – nothing could be further from the truth. The responsibility for validation, as stated above, belongs with the system owner. This group is a repository of validation expertise that can be used in the validation of any system, but they rely on the input from the users, supported by management, to achieve their aims and objectives.
  • Consultants can be used either for overall direction and phasing of a LIMS project or for advice about specific topics such as validation of the system. The laboratory should use consultants to add value to a project and to advise on better ways of undertaking the work. This group can also be used in place of an internal validation group or work alongside one. Care should be taken to ensure that corporate standards are followed when using consultants unless there is a good business reason for not doing so.

System Life Cycle Detail and Documented Evidence

The life cycle and the documented evidence discussed in this section are based upon the validation of a number of systems but need to be understood in the context of an organisation’s computer validation policies and procedures. Each organisation can have different approaches and terminology, so the terminology that I use here may differ from that of some organisations. However, what matters is whether you have performed the work described in each section rather than arguing over the name of a specific document. The key message is that you can demonstrate that the system was developed under control and is validated. The main documents needed for validation of a LIMS are presented in Table 2 below and each one will be discussed in the subsequent sections of the life cycle.

Table 2. Typical Documentation for a LIMS Validation
Document Name, followed by sub-bullets indicating Outline Function in Validation
  1. System Risk Assessment
    • Documents the decision to validate the LIMS or not and the extent of validation work to be undertaken
  2. Validation Plan
    • Documents the scope and boundaries of the validation effort
    • Defines the life cycle tasks for the system
    • Defines documentation for validation package
    • Defines roles and responsibilities of parties involved
  3. Project Plan
    • Outlines all tasks in the project
    • Allocates responsibilities for tasks to individuals or functional units
    • Several versions as progress is updated
  4. User Requirements Specification (URS)
    • Defines the functions that the LIMS will undertake
    • Defines the scope, boundary and interfaces of the system
    • Defines the scope of tests for system evaluation and qualification
  5. System Selection Report
    • Outlines the systems evaluated on paper or in-house
    • Summarises experience of evaluation testing
    • Outlines the criteria for selecting chosen system
  6. Functional Risk Assessment and Traceability Matrix
    • Prioritising system requirements: mandatory and desirable
    • Classifying requirements as either critical or non-critical
    • Tracing testable requirements to specific PQ test scripts
  7. Vendor Audit Report
    • Defines the quality of the software from the supplier's perspective (certificates)
    • Confirms that quality procedures match practice (audit report)
    • Confirms overall quality of the system before purchase
  8. Purchase Order
    • From supplier quotation selects software and peripherals to be ordered
    • Delivery note used to confirm actual delivery against purchase order
    • Defines the initial configuration items of the LIMS
  9. Configuration Specification
    • Defining the configuration of the system policies
    • User types and access privileges
    • Default entries into the audit trail defined
  10. Software Module Specifications
    • Specifying a custom module and how it will integrate within the LIMS
    • Coding and documenting the module to pre-defined standards
    • Informal developer testing and correction of the module code
  11. Technical Architecture (Technical Specification)
    • IT platform(s) defined e.g. terminal servers, database server together with resilience features
    • Operating systems and service packs
    • Operating environments: production, validation etc.
  12. Installation Qualification (IQ)
    • Installation of the components of the system by the IT and the LIMS supplier after approval
    • Testing of individual components
    • Documentation of the work carried out
  13. Operational Qualification (OQ)
    • Testing of the installed system
    • Use of an approved suppliers protocol or test scripts
    • Documentation of the work carried out
  14. LIMS Application Configuration and Database Population
    • Configuration of the LIMS application according to the configuration specification
    • Controlled input of methods to the LIMS
    • Controlled input of raw material, intermediate, in-process control sample and finished product specifications to the LIMS
  15. Module Testing and Integration of Custom Software
    • Formal testing of the module against the software design specification
    • Integration testing with the LIMS application
  16. Data Migration
    • Identification of the data elements and fields to migrate from an old LIMS e.g. specifications, results, on-going stability studies
    • Planning and executing the work
    • Confirming the successful data migration
  17. User Acceptance Test (e.g. PQ) Test Plan
    • Defines user testing on the system against the URS functions
    • Highlights features to test and those not to test
    • Outlines the assumptions, exclusions and limitations of approach
  18. PQ Test Scripts
    • Confirmation of software configuration
    • Test script written to cover key functions defined in test plan
    • Scripts used to collect evidence and observations as testing is carried out
    • Documents any changes to test procedure and if test passed or failed
  19. User Training, SOPs and System Documentation
    • Procedures defined for users and system administrators including definition and validation of custom calculations, input of specifications, account management and logical security
    • Procedures written for IT related functions
    • Practice must match the procedure
  20. Service Level Agreement (SLA) and User Training Material
    • Agreement between the laboratory and IT for IT and infrastructure services for the LIMS
    • Initial material used to train super users and all users available
    • Refresher or advanced training documented
    • Training records updated accordingly
  21. Validation Summary Report
    • Summarises the whole life cycle of the LIMS
    • Discusses any deviations from validation plan and quality issues found
    • Management authorisation to use the system
    • Release of the system for operational use (this can be a separate release certificate in some organisations)

System Risk Assessment

Before starting a LIMS project a risk assessment should be undertaken at the level of the system itself to determine if any or all functions of it are under GMP regulations. One such methodology has been described [14][15] which is based upon the records generated by the system and the GAMP category of software. This process determines if the system needs to be validated or not and if so the approach to validating the system. There are alternative approaches used by GAMP [7] and Scolofino and Bishop [16] amongst others.

Initial Definition of User, Business and System Requirements

The key document in the whole of the LIMS validation is the User Requirements Specification (URS), as this defines the user acceptance tests and can also influence the validation strategy to be outlined in the validation plan. The life cycle model shown in Figure 6 shows that the user and system requirements are linked to the tests carried out in the user acceptance tests or performance qualification. Therefore, it is important to define in the URS the requirements for the basic functions of the LIMS, adequate size, 21 CFR 11 [17] requirements and consistent intended performance. Remember that the URS provides a laboratory with the predefined specifications to validate the LIMS; without this document the system cannot be validated [4].

The main elements in a URS should include the following major areas; each requirement must be individually numbered and written so that it can be traced to where it is either tested or verified later in the life cycle.

  • Overall system requirements such as: number of users, locations where the system will be used and the instruments connected to the system; will terminal emulation be used?
  • Compliance requirements from the predicate rule and 21 CFR 11 such as: logical security, audit trail, user types and access privileges, requirements for data integrity, time and date stamp requirements, electronic signature requirements.
  • LIMS functions defined using the workflow outlined in Figure 3, ensuring that capacity requirements are defined, such as the maximum number of samples to be run, custom calculations and reports for the initial implementation and roll-out, etc.
  • IT support requirements such as: database support, backup and recovery, archive and restore.
  • Interface requirements, e.g. will the LIMS be a standalone system or will it interface with instruments, data systems such as a CDS within the laboratory, or an ERP system outside of it?

The completed document will be an initial step for the selection of a system as it will be refined as the project progresses and after the final system has been purchased. Developing requirements is a continual process and the final version of the document will reflect the specific LIMS and version number that will be implemented. A URS is a living document [4][18].

Vendor Audit

The majority of the system development life cycle for a commercial LIMS will be undertaken by a third party: the vendor. The European Union GMP Annex 11 on computerised systems states [19]:

  • Section 5: The software is a critical component of a computerised system. The user of such software should take all reasonable steps to ensure that it has been produced in accordance with a system of Quality Assurance.

This regulation is currently in the process of being revised and the draft issued in 2008 for industry comment and contains further requirements for vendor audits [18]:

  • Section 5.1: … The supplier of software should be qualified appropriately; this may include assessment and/or audit.
  • Section 5.3: Quality system and audit information relating to suppliers or developers of software and systems implemented by the manufacturing authorisation holder should be made available to inspectors on request, as supporting material intended to demonstrate the quality of the development processes.

The GAMP Guide version 5 [7] recommends that a vendor audit be undertaken for category 4 software to ensure that the system was developed in a quality manner and is supported adequately by the vendor; this is a reasonable interpretation of the proposed update of Annex 11. The vendor audit should take place once the product has been selected but before the system has been purchased, in case issues discovered during the audit impact the purchase decision. The purpose of the audit is to see if an adequate quality management system is in place and operated effectively for the development and support of the LIMS. The evaluation and audit process is a very important part of the life cycle as it ensures the design, build and testing stages (which are under the control of the supplier) have been checked for compliance with the regulations. The audit should be planned in advance and cover items such as the design and programming phases, product testing and release, documentation and support; a report of the audit should be produced after the visit and, if the EU GMP update is unchanged, will be available to inspectors [18].

Many LIMS suppliers are certified to ISO 9001 [20] or ISO 90003 [21] and offer a certificate that the system conforms to their quality processes. This is adequate for supporting the development phase of the life cycle, but remember that there is no requirement for product quality in ISO 9000, and product warranties do not guarantee that the system is either fit for purpose or error free [8]. If the system is critical to GMP operations it is better to consider a vendor audit; for further reading, see the relevant chapter in reference [8].

Selecting and Purchasing the System

The selection and purchase of a new LIMS should be a formal selection process to see if an application matches the main requirements of the URS. The outline tests can be used to screen and select the system. An in-house test of a system is strongly advised if there is sufficient time and resources to do this as this will give increased confidence that a system can undertake the laboratory’s work. A selection report would be the outcome of this phase of the work and would form part of the supporting evidence for the LIMS validation.

The company’s internal procurement processes should be followed to write the capital expenditure justification to be circulated for approval. At the same time, the vendor’s contract terms and conditions should be reviewed and changed to correct any issues, such as payment terms, before the purchase order is placed. Once the LIMS request is approved, the purchase order can be placed. This provides the first phase of the configuration management of the system as it defines the components of the system to be delivered.

Controlling the Work: The Validation and Project Plans

Note that writing the validation plan appears relatively late in the life cycle, typically after the decision to purchase the LIMS. This is because there may be issues discovered during the selection of the initial system, or in training to use the selected system, that impact the way that the system is implemented or rolled out. Therefore the controlling document for the validation work comes in here to avoid the need to issue amendments or a new version of the plan. However, it is important that the LIMS project team remember that the earlier phases of the project should be adequately documented so that the selection process is not documented retrospectively.

The name for this document varies so much from laboratory to laboratory: validation plan, master validation plan or validation master plan or even quality plan. Regardless of what it is called in an organisation it should cover the work to be done so that the system provides business benefits and also meets regulatory compliance.

The validation plan should define:

  • The system to be validated (name and version number) including its scope and boundaries
  • The roles and responsibilities of the people involved in the project
  • The life cycle to be followed and the documented evidence to be produced when this is followed
  • How to deal with and document any deviations from the plan

There will typically be a separate project plan with a further breakdown of tasks in a project planning application, e.g. MS Project. This plan should be referred to in the validation plan, but do not include the dates and tasks in the validation plan itself. The reason is that project timescales can be rather optimistic; keeping them out of the validation plan gives the project manager the flexibility to update the project plan separately without the need to update and reauthorise the validation plan.

Refining the User Requirements Specification during Implementation

It is important to realize that the URS is a living document that must be updated as the system requirements change and evolve; for example, a URS should be written to select a system. Then the document will be reviewed and updated to reflect the selected LIMS and version that will be validated, the functions specific to the laboratory and the systems interfaced to it. The reason for this is that the selected system may have more or fewer features than are contained in the original URS and therefore the document must be updated to reflect the system to be validated.

It is also not unknown for the URS to be updated at least once more especially after piloting requirements and during writing the user acceptance tests (PQ) to reflect further changes in the understanding of the system by the users, changes in the way that the laboratory wants to operate and also to correct requirements that were incorrectly written or not understood fully earlier in the life cycle. This is normal and expected.

Piloting Your Requirements

Whilst a URS is essential for defining requirements for system selection, it is not usually sufficiently detailed to define the best way of working with the purchased system. Therefore an excellent way of refining and defining requirements further is through a pilot phase of the LIMS. However, this needs to be structured and managed well so that the project and the specification documentation benefit. From a practical perspective, limit piloting to two phases so that it does not go on for ever. Each phase needs to be carefully defined in terms of the scope of functions to be prototyped, the time to be spent on the work and how requirements and test documents will be written or updated.

Piloting requirements is usually informal in that any documents written will not be checked by Quality Assurance, and this fact should be stated in the validation plan. However, the URS needs to be updated and outline test scripts written at the end of each phase to confirm that the final prototyped functions work as intended. This approach enables the final URS and any configuration specifications to be finalised relatively shortly after the end of the prototyping phase and does not delay the overall project.

The system prototyping can be conducted on an unqualified or a qualified system. However, the risk with either approach is that when the prototyping work is completed, management may think that the implementation is complete; this misunderstanding has to be managed accordingly.

Functional Risk Assessment and Traceability Matrix

Although not a current regulatory requirement, traceability of requirements from the URS through all subsequent phases is a regulatory expectation [22][23]. More importantly, it is a vital business tool for ensuring that all requirements captured in the URS are managed in subsequent phases of the life cycle. To achieve this effectively, it is important that requirements are presented correctly and in a manner that facilitates traceability. It is all very well the regulations stating that a user must define their requirements in a URS, but what does this mean in practice? Table 3 illustrates a way that capacity requirements can be documented; note that each requirement is:

  • Uniquely numbered
  • Written so that it can be tested, if required, in the PQ or verified later in the life cycle
  • Prioritised as either mandatory (M = essential for system functionality) or desirable (D = nice to have but the system could be used without it). This prioritisation can be used in risk analysis of the functions and also for tracing the requirements through the rest of the life cycle.
Table 3: Some Capacity Requirements for a LIMS
Req No. Requirement Priority M/D
3.2.1 LIMS has the capacity to support up to 20 named users using the system at the same time from any laboratory network location. M
3.2.2 LIMS has the capacity to hold 800 GB of live data on the system. M
3.2.3 The system can print to any of 4 network printers M
3.2.4 The LIMS will be available for a minimum of 98% per month, with down time for maintenance and backup (e.g. 2% per month). M
3.2.5 The LIMS response time is <  30 seconds for a successful log-in and this must not be degraded by fluctuations in network traffic. D

Each requirement must be written so that it is either testable or verifiable. Testable means that, based on the requirement as written, a test or tests can be devised to show that the delivered LIMS does or does not meet the requirement. Verifiable means that a requirement is met by carrying out an activity, e.g. the installation of a component, writing a procedure, auditing a vendor. To write requirements that are testable or verifiable the URS writers should follow the process defined in IEEE standard 1233 [24]. This states that a well-defined requirement must address capability and condition, and may include a constraint. Remember, as shown in the life cycle model in Figure 6, that the URS requirements are the input to the tests carried out in the PQ phase of the life cycle. If the requirements are not specified in sufficient detail, they cannot be tested.

Table 4: Functional Risk Assessment and Traceability Matrix
Req No. Requirement Priority M/D Risk C/N Test Req.
3.2.1 LIMS has the capacity to support up to 20 named users using the system at the same time from any laboratory network location. M C TS05
3.2.2 LIMS has the capacity to hold 800 GB of live data on the system. M C IQ
3.2.3 The system can print to any of 4 network printers M C IQ
3.2.4 The LIMS will be available for a minimum of 98% per month, with down time for maintenance and backup (e.g. 2% per month). M C SLA
3.2.5 The LIMS response time is < 30 seconds for a successful log-in and this must not be degraded by fluctuations in network traffic. D N  -

The next stage in the process is to carry out a risk assessment of each function to determine if the function is business and / or regulatory risk critical (C) or not (N). The URS extract in Table 3 now has two additional columns added, as shown in Table 4. This approach allows priority and risk to be assessed together. Only those functions that are classified as both Mandatory and Critical are tested in the qualification phase of the validation [8][14]. Therefore in Table 4 requirement 3.2.5 will not be considered for testing, as it does not meet the selection criteria. Of the remaining requirements, some are traced to the installation of system components to meet requirements 3.2.2 and 3.2.3 and the service level agreement (SLA) to meet requirement 3.2.4. Requirement 3.2.1 will be tested in a capacity test script, which in this example is called Test Script 05 (TS05). In this way, requirements are prioritised and classified for risk and the most critical ones can be traced to the PQ test scripts.
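The selection logic described above, where only requirements that are both Mandatory and Critical are carried forward to a test or verification activity, can be sketched as a simple filter. This is an illustrative sketch only, not part of any LIMS product; the requirement numbers and trace targets are taken from Table 4.

```python
# Sketch of the functional risk assessment filter: only requirements
# that are both Mandatory (M) and Critical (C) are traced to a test
# script or verification activity; all others are excluded.
from dataclasses import dataclass

@dataclass
class Requirement:
    req_no: str
    text: str
    priority: str   # "M" (mandatory) or "D" (desirable)
    risk: str       # "C" (critical) or "N" (non-critical)
    traced_to: str  # PQ test script, IQ, SLA, or "-" if not tested

requirements = [
    Requirement("3.2.1", "20 concurrent named users", "M", "C", "TS05"),
    Requirement("3.2.2", "800 GB of live data", "M", "C", "IQ"),
    Requirement("3.2.3", "Print to 4 network printers", "M", "C", "IQ"),
    Requirement("3.2.4", "98% monthly availability", "M", "C", "SLA"),
    Requirement("3.2.5", "Log-in response < 30 s", "D", "N", "-"),
]

def traceability_matrix(reqs):
    """Keep only Mandatory + Critical requirements for testing/verification."""
    return [r for r in reqs if r.priority == "M" and r.risk == "C"]

for r in traceability_matrix(requirements):
    print(f"{r.req_no} -> {r.traced_to}")
```

Running the filter over the Table 4 requirements excludes 3.2.5, matching the worked example in the text.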

Specifying the Configuration of the System

The way that the LIMS will be configured and / or customised must be documented and for a LIMS application a configuration specification is the best way to do this. The typical configuration elements of a LIMS covered in such a document will encompass:

  • Definition of user types and their access privileges
  • Configuration of any system policies: functions turned on and any settings e.g. password length, use of electronic signatures, audit trail configuration etc
  • Definition of context-sensitive default entries for the audit trail, such as the reason for data changes
  • Information about the instruments interfaced with in the laboratory
  • Identification of any instruments and systems for which training records, maintenance and qualification status will be maintained.

There will need to be links between the URS and the configuration specification to aid traceability. For example, a URS requirement could state that five user types would be set up in the LIMS application and name them with a reference to the configuration specification. In the latter document there would be a reference back to the URS, and the access privileges would be defined for each user type. Because the functions available in a LIMS typically vary with each version of the software from a vendor, it is important that the configuration specification is linked to the specific LIMS version being validated. The configuration specification is another example of a living document that has to be kept current.
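As an illustration, the cross-references between the configuration specification and the URS can be captured as structured data, so that every configured item carries the number of the requirement it implements and a complete list of references can be extracted for the traceability check. The user types, privileges, policy names and URS numbers below are all hypothetical.

```python
# Hypothetical extract of a configuration specification: each configured
# item is stored as (value, URS requirement reference) so traceability
# back to the URS can be collated automatically.
config_spec = {
    "user_types": {
        # user type -> (access privileges, URS reference)
        "Administrator": (["manage_accounts", "configure_system"], "URS-4.1.1"),
        "Analyst":       (["enter_results", "review_data"],        "URS-4.1.2"),
        "Reviewer":      (["review_data", "approve_results"],      "URS-4.1.3"),
    },
    "system_policies": {
        "password_min_length": (10,   "URS-4.2.1"),
        "audit_trail_enabled": (True, "URS-4.2.2"),
    },
}

def urs_references(spec):
    """Collect every URS reference cited in the configuration specification."""
    refs = []
    for section in spec.values():
        for _name, (_value, ref) in section.items():
            refs.append(ref)
    return sorted(set(refs))

print(urs_references(config_spec))
```

A list produced this way can be compared against the URS itself to spot requirements that have no corresponding configuration entry, or entries with no requirement.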

Writing the Technical Architecture

This document, which can also be called a technical specification, is typically written by the IT department, taking into consideration the recommendations of the supplier in terms of server sizing (minimum processor power, memory and disk sizing etc.) and the organisation’s corporate standards. A technical architecture will document the servers and their operating systems that will constitute the system e.g.

  • Database server
  • Application server
  • Terminal emulation for the application e.g. Citrix farm for the application (the advantage of this, especially for a large system, is that the installation of the application on clients is reduced to a single task)
  • Use of virtual servers for some LIMS instances
  • Operating system and service packs installed on each server
  • Other applications, tools and utilities to be installed on the servers e.g. antivirus, backup agent, etc.

The use of diagrams is a very useful way of illustrating how the components come together to constitute the overall system and helps to collate the individual server specifications.

The number of environments or instances for the LIMS will need to be specified in this document. For example there could be the following instances for the system:

  • Sandbox or development
  • Training
  • Validation
  • Production or operational

At least two, if not three, instances are required for a LIMS. The validation and production instances should be mirror images so that most of the testing can be conducted in the validation instance rather than the operational one. If the hardware platforms of these two instances are identical then the validation instance could serve as a disaster recovery platform for the operational one. User training should be conducted in an instance that is identical to the operational system.

Installing and Qualifying the System Components

The installation of the components of the LIMS will be undertaken in a number of layers, starting with the hardware platform and proceeding through the operating system, utilities and tools, and the database, finishing with the LIMS application software. The work carried out in this phase of the project will be based upon the technical architecture, which will detail the nature and number of instances to be established. First, each server will be installed, qualified and documented by the IT department, and there may be an option for IT to install the database that the LIMS will use if the organisation has the appropriate licence. Then the LIMS vendor will install the database (if not done by IT) and the LIMS application software. The LIMS vendor should supply the IQ and OQ documentation in advance so that laboratory staff can review and approve the documents before execution.

During the IQ the initial configuration baseline should be established by taking an inventory of the whole system including hardware, software and documentation. For a LIMS the IQ should cover:

  • Server (for data storage) installation by the IT department, server supplier or manufacturer
  • Interfacing of any instruments to the system (either as part of the initial LIMS IQ or separate from it)
  • Installation of bar code readers and printers
  • Processing or data review workstations installed by either the IT department or contractors working on their behalf (typically with an operating system configured to corporate requirements)
  • Connection of the servers and any new workstations to the corporate network
  • Installation of the LIMS application software for data processing on the workstations

The IQ work around the LIMS is typically supported by the vendor, system administrators from the laboratory, and IT department depending on the complexity of the configuration of the system and the LIMS environment. Planning is essential as retrospective documentation of any phase of this work is far more costly and time consuming.
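The configuration baseline taken during the IQ can be supported by a simple inventory script that records the relative path, size and checksum of every installed file, giving a verifiable snapshot to compare against after any later change. This is a generic sketch, not vendor tooling; the installation path shown in the usage comment is a placeholder.

```python
# Sketch: record an installation inventory (path, size, SHA-256) at IQ
# time so the configuration baseline can be re-verified later.
import hashlib
import os

def file_checksum(path, chunk_size=65536):
    """SHA-256 of a file, read in chunks to cope with large binaries."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def installation_inventory(root):
    """Return {relative_path: (size_bytes, sha256)} for every file under root."""
    inventory = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            inventory[rel] = (os.path.getsize(full), file_checksum(full))
    return inventory

# Example usage against a placeholder installation directory:
# baseline = installation_inventory("/opt/lims")
```

Comparing two such inventories taken before and after a patch or upgrade shows exactly which components changed, which also supports the change control discussed later.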

During this phase of the project, analytical instruments and instrument data systems will also be connected and interfaced to the LIMS. There needs to be appropriate qualification documentation to cover the instruments and data systems interfaced. Care needs to be taken to include any configuration of the LIMS or middleware used to communicate between the LIMS and the instrument / data system, especially if data are extracted from an instrument data system.

Do I Need an OQ for the Installed LIMS Application?

The operational qualification (OQ) is carried out shortly after the IQ and is intended to demonstrate that the application works the way the vendor says it should. Note that the OQ will be carried out on the unconfigured and non-customised LIMS application that has just been installed. Most LIMS suppliers will supply OQ scripts and if required the staff to execute the scripts. The depth and coverage of these OQ packages vary enormously, and the problem, from the laboratory perspective, is deciding if such an OQ package offers value for money.

The decision process should be risk based and documented. It will be helped by asking questions like these:

  • How close will you operate to the core LIMS being installed?
  • Will you be using standard LIMS functionality?
  • Will you be configuring the system?
  • Will you be customising the system?
  • Am I repeating the vendor’s internal testing in the OQ?
  • Can I leverage any work in the OQ and reduce my PQ effort?

Subject to a satisfactory vendor audit report, standard LIMS functionality can be assumed to work, as the vendor has tested this, and the standard functionality used by the laboratory will be tested implicitly during the PQ phase of the project. Therefore why bother to test functions in the OQ that the vendor has already tested? Configured and custom elements cannot be tested in this phase of the work as they are specific to the laboratory and will be input into the system after this phase of the project. Therefore the OQ of a LIMS should be a limited test to indicate that the installed system works.

The US GMP regulations (clause 160 [25]) require that before execution the test protocols have to be approved by the QC/QA unit and also that whatever is written in them needs to be scientifically sound. Here is an example of a Warning Letter sent by FDA to Spolana [26], a Czech company, in October 2000:

Furthermore, calibration data and results provided by an outside contractor were not checked, reviewed and approved by a responsible Q.C. or Q.A. official.

Therefore, never accept IQ or OQ documentation from a supplier without evaluating and approving it. Check not only the coverage of testing but also that test results are quantified (i.e. have supporting evidence) rather than relying solely on qualified terms (e.g. pass / fail). Quantified results allow for subsequent review and independent evaluation of the test results. Further, ensure that vendor personnel involved with IQ and OQ work are appropriately trained by checking that documented evidence of such training (e.g. certificates) is current before the work is carried out.

Configuring the LIMS Application

Following the installation and qualification of the LIMS application software, it needs to be configured according to the parameters documented in the configuration specification:

  • Defining user types and the access privileges for each one
  • Setting up user accounts and allocating each one a user type
  • Turning on or off the system policies and inputting the defined settings from the configuration specification
  • Entering the default entries for the audit trail

The set-up of the system should be documented and traced against the approved configuration specification. This has two advantages: first, it documents the application configuration for the validation document suite; second, it aids disaster recovery, as it enables a rebuild of the system from the application disks.
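One way to make this traceability checkable is to hold the approved configuration specification as structured data and compare it against the settings actually applied to the system. The sketch below illustrates the idea; the parameter names are invented and do not correspond to any particular LIMS:

```python
# Hypothetical configuration specification held as structured data,
# compared against the settings actually applied to the system.
CONFIG_SPEC = {
    "password_min_length": 8,
    "failed_logins_before_lock": 3,
    "audit_trail_enabled": True,
    "session_timeout_minutes": 15,
}

def verify_configuration(applied_settings):
    """Return a list of deviations between the approved
    specification and the applied settings."""
    deviations = []
    for parameter, expected in CONFIG_SPEC.items():
        actual = applied_settings.get(parameter)
        if actual != expected:
            deviations.append((parameter, expected, actual))
    return deviations

# A settings dump containing one deviation from the specification
applied = {
    "password_min_length": 8,
    "failed_logins_before_lock": 5,   # deviates from the specification
    "audit_trail_enabled": True,
    "session_timeout_minutes": 15,
}
print(verify_configuration(applied))
# [('failed_logins_before_lock', 3, 5)]
```

An empty deviation list provides the documented evidence that the set-up matches the approved configuration specification.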

Specifying and Coding Custom Software Modules

Custom coding using the scripting language provided by the LIMS vendor also needs to be specified, coded, tested and integrated within the LIMS application. One control mechanism for this is the validation plan, and another could be change control once the system is operational. An alternative approach is to use an SOP that defines exactly what is required in the way of specification, coding and testing for each custom module, both during implementation and once the system is operational. This latter approach may be the best one from the start, as there is then a single way of coding the LIMS.

From both business and validation perspectives, custom coding is the highest risk software (software category 5) as it is unique and the whole process is in the user’s hands. Therefore keep custom elements to an absolute minimum unless there is a good business case for taking this path. There have been many cases where too much customisation has made it difficult for a laboratory to upgrade from a particular version of a LIMS.

Customisation of the LIMS will need a software design specification written for each custom module required. This specification will need to detail:

  • Data inputs and how this will occur – manually or automatically and from where in the system
  • Handling the data with specification of how data will be manipulated including specification of the equations with ranges of the values being input and how outputs will be presented
  • Outputs from the module
  • Integration of the custom module with the LIMS application
  • Appropriate error handling including verification of data entry
  • Compliance issues such as access security and audit trail

Customisation of the LIMS can also include new tables in the database and new screens; these need to be documented as well.
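As an illustration of the specification items above, the following sketch shows a hypothetical custom calculation module. The equation, input ranges and rounding rule are invented examples of the kind of detail a software design specification would state; they are not taken from any real method:

```python
# Hypothetical custom calculation module: percentage of label claim.
# The equation, ranges and rounding rule are illustrative only.

class DataEntryError(ValueError):
    """Raised when an input fails the verification defined
    in the software design specification."""

def percent_label_claim(measured_mg, declared_mg):
    """Inputs: measured and declared content in mg (entered manually
    or transferred from an instrument). Output: % of label claim,
    rounded to one decimal place as specified for reporting."""
    # Verification of data entry: reject values outside the
    # ranges stated in the design specification.
    if not (0.0 < declared_mg <= 1000.0):
        raise DataEntryError(f"declared content out of range: {declared_mg}")
    if not (0.0 <= measured_mg <= 2000.0):
        raise DataEntryError(f"measured content out of range: {measured_mg}")
    return round(100.0 * measured_mg / declared_mg, 1)

print(percent_label_claim(98.2, 100.0))   # 98.2
```

Each of these elements (inputs, equation, ranges, output format, error handling) would be tested formally against the specification, as described in the next section.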

The LIMS development environment is where the initial custom code will be written against the software design specification. The writing of the code using the LIMS internal scripting language should follow the standards laid out by the software vendor. There will be informal, undocumented testing of the code by the developer. This is a normal part of the software development process: an iteration of programming, followed by testing and recoding to enable the software to work correctly. When the software module is ready for formal testing it will be transported from the development environment to the validation instance and formal testing will be carried out against the software design specification.

Testing and Integrating Custom Software Modules

The formal testing of the custom modules is carried out in the validation environment against the requirements contained in the software design specification. Depending on the complexity of the custom module there may be white box testing of the algorithms (testing that the inputs, calculations and the outputs are as expected) as well as black box testing (the overall module and its integration with the LIMS). This work will be documented and the documentation suite subject to QA approval before release of the module into the production environment.

Population of the Database

Population of the database with the laboratory methods, analytical procedures and product specifications can be a long process, especially for laboratories that have a large number of products. This phase of the work is often not planned, overlooked or underestimated. The scope of work will cover the raw materials, including active ingredients, intermediates and in-process testing, and primary and secondary finished product testing. In some cases a LIMS implementation has been phased based on the population of the database, e.g. raw materials first, primary finished products second and secondary product testing third; the actual order will depend on laboratory and business priorities.

Entry of this material must be controlled and checked before its use as an incorrect test or specification could result in product being released that was either under or over strength with potential impact on patient safety. This process will be on-going after the end of the application validation and the best way to control this is via an SOP. Note that this is the normal way that a LIMS operates and therefore does not need to be under change control as the main procedure should have the input specification, testing and release process contained within it.

User Acceptance Testing (Performance Qualification)

The performance qualification (PQ) stage of the overall validation of the system can also be considered as the user acceptance testing. This should be undertaken by trained users and based upon the way that the system is used (including configuration and customisation) in a particular laboratory and the surrounding environment. Therefore, a LIMS cannot be considered validated simply because another laboratory has validated the same software: the operations of two laboratories may differ markedly even within the same organisation. The functions to be tested in the PQ must be based on the prioritised requirements defined in the URS and, via the numbering of individual requirements, can be traced back to the system requirements through the traceability matrix [22][23]. Documentation of the PQ can be done in a number of ways, but the one preferred by the author is to have a controlling PQ test plan that describes the overall approach to testing and a number of PQ test scripts (test cases) that sit underneath the plan. There will be QA approval of the plan as this is a high level document, but not of the individual test scripts as there is little added value that QA can provide. However, the whole PQ package will be reviewed by QA after execution of the work to check that compliance and standards have been adhered to.
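A traceability matrix can be as simple as a mapping from URS requirement numbers to the PQ test scripts that exercise them, which also makes coverage gaps easy to find. A minimal sketch with invented requirement and script identifiers:

```python
# Hypothetical traceability matrix: each URS requirement number is
# mapped to the PQ test script(s) that exercise it.
TRACEABILITY = {
    "URS-4.1": ["PQ-01"],            # sample registration
    "URS-4.2": ["PQ-01", "PQ-03"],   # result entry and review
    "URS-7.3": ["PQ-05"],            # audit trail entries
    "URS-9.1": [],                   # security: not yet covered
}

def untested_requirements(matrix):
    """Return requirements with no test script, i.e. gaps in coverage."""
    return [req for req, scripts in matrix.items() if not scripts]

print(untested_requirements(TRACEABILITY))   # ['URS-9.1']
```

Running such a check before executing the PQ gives documented evidence that every prioritised requirement is covered by at least one test script.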

PQ Test Plan

A way to document the PQ is to use an overall PQ test plan that outlines the scope of the system to be tested, the features of the LIMS to test as well as those that will not be tested with a discussion of the assumptions, exclusions and limitations of the testing undertaken. A documentation standard for the PQ test plan can be found in the IEEE standard 829-1998 [27] and adaptation of this for practical use is presented in Table 5.

Table 5: Outline of a Test Plan Adapted from IEEE Standard 829-1998[27]
  1. Introduction
  2. Test system
  3. Test environment(s)
  4. Features to be tested including description of the test scripts and the test procedures contained in each
  5. Features not to be tested with a rationale for each feature excluded
  6. Test Approach including assumptions, exclusions and limitations of testing for each test script
  7. Pass/fail acceptance criteria
  8. Suspension criteria and resumption requirements
  9. Test deliverables

The key sections of a PQ test plan are the features to test and those that will not be tested; associated with the features to be tested are the written notes of the assumptions, exclusions and limitations of the testing undertaken. These are recorded in the appropriate section of the PQ test plan to provide contemporaneous notes of why particular approaches were taken. This is very useful if an inspection occurs in the future, as there is a reference back to the rationale for the testing. It is also very important, as no user can fully test a LIMS or any other software application. For example, the operating system and any database used would be explicitly excluded from testing, as the LIMS application software implicitly or indirectly tests these elements of the system.

Release notes for the LIMS version being validated will document the known features or errors of the system and may be a reference document for the overall validation. However, PQ tests carried out in any validation effort should not be designed to confirm the existence of known errors but to test how the system is used by the users on a day to day basis. The role of the user in the testing is to demonstrate intended purpose; it is the role of the software development team to find and fix errors. If known or other software errors are found during PQ testing, the test scripts have space to record the fact and describe the steps that will be taken to resolve the problem.

PQ Test Scripts

The same IEEE standard [27] also provides the basis for the test documentation that is at the heart of any PQ effort, i.e. the test script. This document consists of one or more test procedures for testing specific requirements, and each test procedure will contain test execution instructions, collation of documented evidence and the acceptance criteria for the test procedure, as follows:

  • Outline one or more test procedures that are required to test the specific LIMS requirements
  • Each test procedure will consist of a number of test steps that define how the test will be carried out
  • For each key test step the expected results must be defined (not all test steps need to contain expected results especially if they are instructions to move from one part of the system to another)
  • There will be space to write the observed results and note if the test step passes or fails when compared with the expected results
  • There is a test log to highlight any deviations from the testing
  • Sections will collate any documented evidence produced during the testing; this must include both paper and electronic documented evidence
  • Definition of the acceptance criteria for each test procedure and if the test passes or fails
  • A test summary log collating the results of all testing
  • A sign off of the test script stating if the script has passed or failed
  • Sections throughout the document for a reviewer to check and approve the work

Testing Overview

One key point is that to ensure that the PQ stage progresses quickly, a test script should test as many functions as possible as simply as possible (great coverage and simple design). Software testing has four main features, known as the 4Es [28]:

  • Effective: demonstrating that the system tested meets both the defined system requirements and also finds errors
  • Exemplary: test more than one function simultaneously, where feasible
  • Economical: tests are quick to design and quick to perform
  • Evolvable: able to change to cope with new versions of the software and changes in the user interface

It is an abject failure if the PQ testing documentation is written to test one requirement per script as this does not test the system as it is intended to be used.

Write the PQ Test Scripts

It is difficult to estimate the number of test scripts that a LIMS implementation requires, as the number of features used within the system can vary, as will the range and extent of configuration and the applications and instruments interfaced with the system. However, testing LIMS functionality should consider:

  • Analytical process flows as configured by the laboratory: raw materials, in-process analysis and release
  • Specification management
  • Stability protocols and reports with alerts
  • Interfaces and data transfer between the LIMS and instruments and between the LIMS and applications
  • Sample management from registration to disposal
  • Unavailability of the network: buffering of data and prevention of data loss
  • Custom calculations implemented within the system
  • System capacity tests e.g. analysing the largest expected number of samples in a batch, number of users on the system
  • Interfaces between the LIMS and other software applications e.g. CDS

Testing should also consider any electronic record/signature requirements (e.g. 21 CFR Part 11) and other regulatory requirements:

  • System security and access control including between departments or remote sites
  • Preservation of electronic records e.g. Backup and Recovery; Archive and Retrieve
  • Data integrity
  • Audit trail functions along with date and time stamps especially if the system is used between different time zones
  • Electronic signatures
  • Identifying altered and invalid records

Some of these compliance requirements can be integrated with some of the main LIMS functionality testing listed in the previous section. For example, electronic signing of results by the tester and reviewer can be integrated into the analysis of a sample, and audit trail entries can be generated by altering incorrect data entries. The aim of this approach is to test the system as it is intended to be used.

Considerations when Designing Test Procedures

Some of the considerations for designing test procedures for a LIMS will be discussed here; note that all aspects of the system that need to be tested must be defined in the URS. The simplest example is a configured aspect of the LIMS: logical security and access control. Whilst logical security appears at first glance to be a very mundane subject, the inclusion of this topic as a test is very important for regulatory reasons, as it is explicitly stated in 21 CFR 11 and the predicate rules that access to a system should be limited to authorised individuals. When explored in more depth it also provides a good example of test case design.

The test design for access control should consist of the following basic test components as a minimum:

  • An incorrect account name fails to gain access to the system
  • The correct account name and password gain access to the system
  • A correct account name with minor modifications of the password fails to gain access to the software
  • Account locking after three failed attempts to access, with an alert sent to the system administrator
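These four test components can be illustrated with a small Python model of the access-control behaviour. In a real PQ the tests would of course be executed against the LIMS itself, not a model, and the account details below are invented:

```python
# Minimal model of the access-control behaviour under test.
class AccessControl:
    def __init__(self, accounts, max_failures=3):
        self.accounts = accounts          # {account_name: password}
        self.max_failures = max_failures
        self.failures = {}
        self.locked = set()
        self.alerts = []                  # alerts sent to the administrator

    def login(self, account, password):
        if account in self.locked:
            return False
        if self.accounts.get(account) == password:
            self.failures[account] = 0
            return True
        self.failures[account] = self.failures.get(account, 0) + 1
        if self.failures[account] >= self.max_failures:
            self.locked.add(account)
            self.alerts.append(f"account locked: {account}")
        return False

ac = AccessControl({"analyst1": "S3cret!pw"})
assert not ac.login("no_such_user", "S3cret!pw")   # wrong account fails
assert ac.login("analyst1", "S3cret!pw")           # correct credentials pass
assert not ac.login("analyst1", "S3cret!pW")       # near-miss password fails
ac.login("analyst1", "bad"); ac.login("analyst1", "bad")
assert "analyst1" in ac.locked and ac.alerts       # locked after 3 failures
```

Note that most of the assertions are designed to fail to gain access; this mirrors the point made below that good test cases are designed to fail as well as to pass.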

The important considerations in this test design are:

  • Successful test cases are not just those that are designed to pass but also are designed to fail. Good test case design is a key success factor in the quality of LIMS validation efforts. Of the test cases above most are designed to fail to demonstrate the effectiveness of the logical security of the system
  • The test relies on good computing practices being implemented by the system administrator to ensure that users change or are forced to change their passwords on a regular basis and that these are of reasonable length (minimum 6 – 8 characters).
  • Locking an account can also ensure that a requirement of Part 11 is tested, e.g. alerting a system administrator to a potential problem

Other test case designs that should be used are defined below:

  • Boundary test: the entry of valid data within the known range of a field e.g. a pH value would only have acceptable values within 0-14.
  • Stress test: entering data outside of designed limits e.g. a pH value of 15.
  • Predicted output: knowing the function of the module to be tested, a known input should have a predicted output.
  • Consistent operation: important tests of major functions should have repetition built into them to demonstrate that the operation of the system is reproducible.
  • Common problems: both the operational and support aspects of the computer system should be part of any validation plan, e.g. backup works, and incorrect data inputs can be corrected in a compliant way with corresponding audit trail entries. The predictability of the system under these tests must generate confidence in the LIMS operations (i.e. trustworthiness and reliability of electronic records and electronic signatures) and the IT support.

For more information about the format of the document and more detail of PQ testing see the book by McDowall [8].
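The boundary and stress test designs can be illustrated with the pH example above. The validator below is a hypothetical stand-in for the LIMS field check, written only to show how the test cases line up with the designs:

```python
# Hypothetical validator for a pH result field, used to illustrate
# boundary, stress and consistent-operation test design.
def validate_ph(value):
    """Accept a pH result only within the physically meaningful 0-14 range."""
    return 0.0 <= value <= 14.0

# Boundary tests: valid data at and within the field limits should pass
assert validate_ph(0.0) and validate_ph(7.2) and validate_ph(14.0)
# Stress tests: data outside the designed limits should be rejected
assert not validate_ph(-0.1) and not validate_ph(15.0)
# Consistent operation: repeating an important test gives the same result
assert all(validate_ph(7.2) for _ in range(5))
```

The values at exactly 0 and 14 are the most informative boundary cases, as off-by-one errors in range checks typically appear at the limits themselves.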

Migrating Data from the Existing LIMS or another System

Data migration from another system (an existing LIMS or a different application) into the new LIMS can often be a component of a LIMS project or a separate project in its own right. This can be a difficult phase of the work, as the understanding of a legacy system may have been lost by the company, especially following reorganisation, merger or acquisition. In essence, the work needs to be planned and the data to be migrated from the originating system identified and mapped to the fields in the LIMS database. Do not expect a 1:1 relationship between the data, as the two systems were developed independently; there may therefore be work to do to ensure fit or a consistent reduction of data. In the latter case not all data elements may be transferred, and the organisation needs to determine what it will do with data that cannot be migrated. If scripts are to be written to automate the data transfer, they will have to be specified, developed and tested so that they are validated before use.
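The field-mapping exercise can be sketched as follows. The legacy and LIMS field names are invented for illustration, and a real migration script would be specified, tested and validated as described above:

```python
# Hypothetical field mapping from a legacy system to the new LIMS.
# A value of None marks a legacy field with no LIMS counterpart; the
# organisation must decide separately what happens to that data.
FIELD_MAP = {
    "smp_id":    "sample_id",
    "login_dt":  "registered_on",
    "prod_code": "product_code",
    "operator":  None,               # no matching field in the new LIMS
}

def migrate_record(legacy_record):
    """Map one legacy record onto LIMS fields; return the migrated
    record and the fields that could not be carried across."""
    migrated, unmapped = {}, []
    for legacy_field, value in legacy_record.items():
        target = FIELD_MAP.get(legacy_field)
        if target is None:
            unmapped.append(legacy_field)
        else:
            migrated[target] = value
    return migrated, unmapped

record = {"smp_id": "S-1001", "login_dt": "2008-03-14", "operator": "jsmith"}
print(migrate_record(record))
# ({'sample_id': 'S-1001', 'registered_on': '2008-03-14'}, ['operator'])
```

Recording the unmapped fields for every record provides the evidence needed to document what was, and was not, transferred.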

Writing SOPs and Training the Users

All personnel involved with the selection, installation, operation and use of a LIMS should have training records to demonstrate that they are suitably qualified to carry out their functions, and these records must be maintained. It is especially important to have training records and curricula vitae of installers and operators of a system, as this is a particularly weak area that can generate an observation for non-compliance. Major suppliers of LIMS will usually provide certificates of training for their engineers, but the IT Department staff responsible for the network and utility operations also require evidence of their education, experience and training.

The types of personnel that could be involved in a validation are:

  • Supplier staff: those responsible for the installation and initial testing of the system software should leave copies of their training certificates listing the products they were trained to work on. These should be checked to confirm they are current and cover the relevant products, and then included in the validation package.
  • System managers: training in the use of the system and administration tasks is provided by the supplier and documented in the validation package.
  • Users: analytical chemists or technicians who have their initial training from the supplier staff in using the system, documented in their training records.
  • Consultants: any consultants involved in aiding a validation effort must provide a curriculum vitae (resume) and a written summary of skills to include in the validation package for the system, as required by the GXP regulations e.g. §211.25 [25].
  • IT staff: training records and job descriptions outlining the combination of education, training and skills that each member has.

Training records for LIMS users are usually updated at the launch of a system but can lapse as a system becomes mature. To demonstrate operational control, training records need to be updated regularly, especially after software changes to the system. Error fixes do not usually require additional training; however, a major enhancement or upgrade should trigger the consideration of additional training. The prudent laboratory will document the decision and the reasons if additional training is not offered in this event. To get the best out of the investment in a LIMS, periodic retraining, refresher training or even advanced training courses can be very useful, particularly for large or complex systems. Again, this additional training must be documented.

System Documentation

Vendor Documents

The documentation supplied with the LIMS application or system (both hardware and software), release notes, user guides and user standard operating procedures will not be discussed here as it is too specific and also depends upon the management approach in an individual laboratory. However, the importance of this system specific documentation for validation should not be underestimated. Keeping this documentation current should be considered a vital part of ensuring the operational validation of any computerised system. The users should know where to find the current copies of documentation to enable them to do their job. The old versions of user SOPs, system and user documentation should be archived.

Standard Operating Procedures (SOPs)

Standard Operating Procedures are required for the operation of both the LIMS application software and the system itself. SOPs are the main medium for formalising procedures by describing the exact steps to be followed to achieve a defined outcome. Procedures have the advantage that the same task is undertaken consistently and correctly, nothing is omitted, and new employees are trained faster [8]. The aim is to ensure a quality operation.

The FDA Guidance for Industry on Computerised Systems in Clinical Investigations provides a minimum list of SOPs expected for a computerised system in a GCP environment [29]. This list is presented below following editing for a LIMS operating in a GLP or GMP laboratory.

  • System setup/installation (including the description and specific use of software, hardware, and physical environment and their relationship)
  • System operating manual (user guide)
  • Archive and restore (including the associated audit trails)
  • System maintenance
  • System security measures
  • Change control
  • Data backup, recovery, and contingency plans
  • Alternative recording methods (in the case of system unavailability)
  • Computer user training and roles and responsibilities of staff using the system

Note that this is a generalised list of SOPs and more procedures may be required if the operating environment is more complex. Conversely, some of the procedures above could be condensed into a single SOP with more scope. The key issue is that all areas for the operation and maintenance of the system are controlled by procedure.

IT Service Level Agreement

In the case of outsourcing the support for the hardware platforms and network that run the LIMS, either to the internal IT Department or to an outsourced IT function, a Service Level Agreement (SLA) must be written to ensure that IT does not destroy the validation status of the system. This SLA should cover procedures such as:

  • Controlling and implementing changes to the system and the IT infrastructure
  • Database administration activities
  • Backup and recovery including media management
  • Storage and long term archive of data
  • Disaster recovery

This SLA will cover the minimum service levels agreed together with any performance metrics so that the IT department can be monitored for effectiveness.

Reporting the Work: The Validation Summary Report

The validation summary report brings together all of the documentation collected throughout the whole of the life cycle and presents a recommendation for management approval when the system is validated. The emphasis is on using a summary report as a rapid and efficient means of presenting results as the detail is contained in the other documentation in the validation package.

The report should summarise the work undertaken by the project team, checked against the intentions in the validation plan. This gives the writer an opportunity to document and discuss any changes to the plan. A list of all the documents produced during the validation should be generated as an appendix to the report. Finally, there should be a release statement signed by the system owner and QA authorising the release of the system for operational use. Some organisations have a separate release certificate that achieves the same end point and is quicker than waiting for approval of the validation summary report.

Maintaining the Validated Status

After operational release of the LIMS comes the most difficult part of computerised system validation: maintaining the validation status of the system throughout its whole operational life. Some of the types of changes that will impact an operational LIMS are:

  • Software bugs will be found and associated fixes installed
  • Application software, operating system, plus any software tools or middleware used by the LIMS will be upgraded
  • Network improvements: changes in hardware, cabling, routers and switches to cope with increased traffic and volume
  • Hardware changes: PCs and servers upgraded or increases in memory, disk storage etc.
  • Interfaces to new applications e.g. spreadsheets or laboratory data systems
  • Expansion or contraction of the system due to work or organisational reasons
  • Environmental changes: moving or renovating laboratories

All of these changes need to be controlled to maintain the validation status of the LIMS. In addition there are other factors that impact the system from a validation perspective, such as:

  • Problem reporting and resolution
  • Software errors and maintenance
  • Backup and recovery of data
  • Archive and restore of data
  • Maintenance of hardware
  • Disaster recovery (business continuity planning)
  • Written procedures for all of the above

In this section, a number of measures will be discussed that need to be in place to maintain the validation status of a LIMS.

Change Control and Configuration Management

Changes will occur throughout the lifetime of the system from a variety of sources such as:

  • Service packs for the LIMS software to fix software errors
  • New versions of the LIMS software offering new functions
  • Interfacing of new instruments to the LIMS
  • Upgrades of network and operating system software
  • Changes to the hardware: additional memory, processor upgrade, disc increases etc.
  • Extension of the system for new users

Change control is the key activity from the release of the system for operational use to its retirement and is essential for maintaining the validation status of the LIMS. All changes must be controlled. From a regulatory perspective there are specific references to the control of change in both the OECD consensus document [30] and the EU GMP regulations [19]. Therefore change control is the primary means of ensuring that unauthorised changes, which would immediately forfeit the system’s validation status, cannot be made.

A change request form is the means of requesting and assessing a change to the LIMS or a system in the LIMS environment:

  • The change requested is described first by the submitter
  • The business and regulatory benefits should be described along with the cost estimate of making the change
  • The impact of the change should be assessed by the system managers and then approved or rejected by management
  • Changes that are approved are implemented, tested and qualified before operational release

The degree of revalidation work to be done is determined during the impact analysis. Changes that impact the configuration (hardware, software and documentation) should be recorded in a configuration log, which can be maintained, for example, within Excel.
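The change request workflow described above can be modelled as a simple record passing through states; the sketch below is illustrative only and is not a prescription for any particular change control tool:

```python
# Hypothetical change request record following the workflow above:
# submitted -> assessed -> approved/rejected -> implemented.
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    description: str
    benefit: str
    cost_estimate: str
    impact: str = ""            # completed by the system manager
    status: str = "submitted"

    def assess(self, impact):
        self.impact = impact
        self.status = "assessed"

    def decide(self, approved):
        assert self.status == "assessed", "assess impact before deciding"
        self.status = "approved" if approved else "rejected"

cr = ChangeRequest("Interface new balance to LIMS",
                   "removes manual transcription of weights",
                   "2 days effort")
cr.assess("new instrument interface; retest weight capture in the PQ")
cr.decide(approved=True)
print(cr.status)   # approved
```

The assertion in `decide` enforces the ordering of the workflow: a decision cannot be recorded before the impact assessment has been completed.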

Revalidation Criteria

Any change to a LIMS should trigger consideration of whether revalidation of the system is required. Note the use of the word "consider". There is usually a knee-jerk reaction that any change means that the whole system should be revalidated. One should make a more objective evaluation of the change and its impact before deciding if full revalidation is necessary. First, if revalidation is necessary, what is the extent of testing required: only the feature undergoing change, the module within the system, or the whole application? There may even be instances where no revalidation is necessary after a change. However, the decision must be documented together with the rationale for it.

Therefore a procedure is required to evaluate the impact of any change to a system and act accordingly. One way to evaluate a change is to review the impact that it would make to data accuracy, security and integrity [2]. This will give an indication of the impact of the change on the system and the areas of the application affected. This allows the revalidation effort to target the change being made.

Operational Logbooks

To document the basic operations of the computer system a number of logbooks are required. The term logbook is used flexibly in this context; the actual physical form that the information takes is not the issue, rather the information that is required to demonstrate that the procedure actually occurred. The physical form of the log can be a bound notebook, a pro-forma sheet, a database or anything else that records the information needed, as long as security and integrity of the records (paper or electronic) are maintained.

Backup Log

The aim of a backup log is to provide a written record of data backup and location of duplicate copies of the system (operating system and application software programs) and the data held on the computer. The backup schedule for the discs can vary. In a larger system, the operating system and applications software will be separated from the data that are stored on separate discs. The data change on a fast timescale that reflects the progress of the samples through the laboratory and must be backed up more frequently. In contrast, the operating system and application programs change at a slower pace and are therefore more static; the backup schedule can therefore reflect this.

Some of the key questions to ask when determining the backup requirements for the LIMS are:

  • How long should the time between backups be? Ideally this should be daily.
  • Nature of the backup? Full, incremental or differential. The best security is daily full backups, but this takes considerable time; a practical compromise is therefore a full backup once per week with either incremental or differential backups daily.
  • Who is authorised to perform backups and who signs off the log? The laboratory manager in conjunction with the person responsible for the system should decide this. The authorisation and any review signature required should be defined in an SOP
  • When should duplicate copies be made for security of the data? This question is related to the security of data and programs. Duplicate copies should be part of the backup procedure at predetermined intervals. The duplicate copies should be stored in a separate location in case of a hazard to the computer and the original backups located nearby. Duplicate backups are also necessary to overcome problems reading the primary backup copies.
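The full-plus-incremental schedule discussed above can be expressed as a trivial rule; a sketch (the choice of Sunday for the weekly full backup is an assumption, not a recommendation):

```python
# Hypothetical backup scheduler: one full backup per week plus
# incremental backups on the remaining days.
def backup_type(weekday, full_backup_day=6):
    """weekday: 0=Monday ... 6=Sunday. Returns the backup type
    for that day under the weekly-full / daily-incremental scheme."""
    return "full" if weekday == full_backup_day else "incremental"

week = [backup_type(day) for day in range(7)]
print(week)   # six 'incremental' entries followed by one 'full'
```

Encoding the schedule as data makes it straightforward to check the backup log against what should have been run on each day.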

Problem Recording and Recovery

During the operation of a computer system (boot up, backup or other system functions) it is inevitable that errors will occur. It is essential that these errors are recorded and the solutions used to resolve them are also written down. Over time, this can provide a useful historical record of the operation of the computer system and the location of any problem areas in the basic operation. An example is a peripheral where a print queue has stalled: a relatively minor case. However, there may be cases where the application fails due to a previously undetected error; here there is a need to link the error resolution to the change control system.

Software Error Logging and Resolution

As it is impossible to completely test all of the pathways through LIMS software, or any software [31], it is inevitable that errors will occur during the operation of the system. These must be recorded and tracked until there is a resolution. The key elements of this process are to record the error, notify the support group (in-house or supplier), classify the problem and identify a way to resolve it. Not all reported problems of a LIMS will be resolved: some might be minor, have no fundamental effect on the operation of the system and may never be fixed. Alternatively, a work-around may be required, which should be documented; sometimes even retraining may be necessary. Other errors may be fatal or major, meaning the system cannot be used until they are fixed. In these cases the revalidation policy will be triggered and the fix tested and validated before the LIMS can be operational again.

Maintenance Records

All quality systems need to demonstrate that the equipment used is properly maintained and, in some instances, calibrated. Computers are no exception. Therefore, records of the maintenance of the LIMS need to be set up and updated in line with the work carried out on it. The main emphasis of the maintenance records is on the physical components of a system: hardware, networking and peripherals; software maintenance is covered under the error logging system described above.

If the hardware has a preventative maintenance contract, the service records from each call should be placed in a file to create a historical record. Any additional problems that occur and require maintenance will be recorded in the system log, with cross-references to the appropriate record there. Many smaller computer systems have few, if any, preventative maintenance requirements, but this does not absolve the laboratory from keeping records of the maintenance of the system. If a fault occurs that requires a service engineer to visit, this must be recorded as well.

On sites where the maintenance of personal computers is managed centrally for reasons of cost or convenience, maintenance records may be held centrally. The remit of the central maintenance group may cover all areas of a site or organisation, including regulated or accredited as well as non-accredited groups. It is important that the central maintenance group keeps records sufficient to demonstrate to an inspector the work it undertakes. As defined in EU GMP Annex 11, a third party undertaking this work should have a service agreement and should also keep the curricula vitae of its service personnel available and up to date.

Disaster Recovery

Good computing practices require that a documented AND tested disaster recovery plan is available for all major computerised systems. It rarely is. Failure to have a disaster recovery plan places the data and information stored by major systems at risk, the ultimate losers being the workers in the laboratory and the organisation. Here it is usually the IT department, not the laboratory, that is responsible for the disaster recovery plan.

Disaster recovery is usually forgotten, or not considered, on the assumption that "it will never happen to me". The recovery plan should document several grades of disaster: from the loss of a disc drive (how will data be restored from tape or backup store and then updated with the data not on the backup?) through to the complete loss of the computer room or building through fire or, depending on where the facility is sited, natural disaster. Once the plans have been formulated, they should be tested, and the tests documented, to see if they work. Failure to test the recovery plan gives a false sense of security and will compound any disaster. The plan also needs to be updated as the IT technologies used by the organisation change over time.
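
One concrete way to make a restore test objective, rather than a visual check, is to compare checksums of the restored files against the originals. The sketch below is illustrative only (the function names and the choice of SHA-256 are assumptions); the same idea applies whether the restore comes from tape, disc or a remote store.

```python
import hashlib
from pathlib import Path
from typing import Dict, List

def checksum_manifest(root: Path) -> Dict[str, str]:
    """Record a SHA-256 digest for every file under a directory tree."""
    manifest: Dict[str, str] = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            rel = str(path.relative_to(root))
            manifest[rel] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest

def verify_restore(original: Path, restored: Path) -> List[str]:
    """Compare a restored tree against the original; return a list of discrepancies."""
    before, after = checksum_manifest(original), checksum_manifest(restored)
    problems = [f"missing after restore: {rel}" for rel in before if rel not in after]
    problems += [f"content differs: {rel}" for rel in before
                 if rel in after and after[rel] != before[rel]]
    return problems  # an empty list means the restore test passed
```

The discrepancy list, produced and filed after each rehearsal of the plan, is exactly the kind of documented test evidence the paragraph above calls for.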

Chapter Summary

The first stage in a LIMS validation is to map the current business processes inside the laboratory, and outside the laboratory where samples are generated and information is used, and then to simplify and optimise them for electronic working. The time and effort spent on the mapping process is repaid because the overall implementation becomes simpler and faster. Risk based validation of a LIMS environment is presented based on a life cycle for GAMP category 4 software. The first stage is to determine if the system needs to be validated; the second stage assesses the risk presented by each requirement to determine whether or not it needs to be tested; third, as much as possible of the vendor's development work is leveraged to reduce the amount of testing that the users have to perform. However, a LIMS can also have configurable and custom elements that are specific to the laboratory where it is being implemented, so the validation effort is directed towards these software elements as this is where the highest risk resides. Measures to maintain the validated status of the system once the LIMS is operational are also discussed; the major one of these is change control.


References

  1. R.R. Mahaffey (1990), LIMS: Applied Information Technology for the Laboratory, Van Nostrand Reinhold, New York
  2. R.D. McDowall (2004), Risk Management for Automation Projects, Journal of the Association for Laboratory Automation VOL, 72 – 86
  3. United States Pharmacopoeia, General Chapter <1058> Analytical Instrument Qualification, First Supplement to USP XXXI, p. 3587 (2008)
  4. FDA Guidance for Industry (2002), General Principles of Software Validation
  5. GAMP Good Practice Guide, Validation of Laboratory Computerised Systems, International Society for Pharmaceutical Engineering, Tampa, Florida, 2005
  6. R.D. McDowall (2006), Validation of Spectrometry Software: Critique of the GAMP Good Practice Guide for Validation of Laboratory Computerized Systems, Spectroscopy 21 (4), 14 – 30
  7. Good Automated Manufacturing Practice Guidelines, Version 5, International Society for Pharmaceutical Engineering, Tampa, Florida, 2008
  8. R.D. McDowall (2005), Validation of Chromatography Data Systems: Meeting Business Needs and Regulatory Requirements, RSC Chromatography Monographs (Series Editor R.M. Smith), Royal Society of Chemistry, Cambridge
  9. R.D. McDowall and D.C. Mattes (1990), Architecture for a Comprehensive Laboratory Information Management System, Analytical Chemistry 62, 1069A – 1076A
  10. R.D. McDowall (1995), A Matrix for a LIMS with a Strategic Focus, Laboratory Automation and Information Management 31, 57 – 64
  11. Strategic Analysis of US Laboratory Information Management Systems Markets, Frost and Sullivan, 2008, cited by K. Shah, Pharmaceutical Technology Europe, XX (2009), May, 31 – 32
  12. R.D. McDowall (2009), Understanding and Interpreting the New GAMP 5 Software Categories, Spectroscopy, June issue
  13. GAMP Good Practice Guide, Testing GXP Systems, International Society for Pharmaceutical Engineering, Tampa, Florida, 2006 (available to members only as an electronic version)
  14. R.D. McDowall (2005), Effective and Practical Risk Management Options for Computerised System Validation, Quality Assurance Journal 9, 196 – 227
  15. R.D. McDowall (2009), Validation of Commercial Computerised Systems Using a Single Life Cycle Document (Integrated Validation Document), Quality Assurance Journal, accepted for publication
  16. R.M. Sicnolfi and S. Bishop (2007), RAMP (Risk Assessment and Management Process): An Approach to Risk-Based Computer System Validation and Part 11 Compliance, Drug Information Journal 41, 69 – 79
  17. Electronic Records; Electronic Signatures Final Rule (1997) (21 CFR 11), Federal Register 62, 13430 – 13466
  18. Good Manufacturing Practice for Medicinal Products in the European Community, Annex 11 – Computerised Systems, Commission of the European Communities, Brussels (2007)
  19. Good Manufacturing Practice for Medicinal Products in the European Community, proposed revision for public comment, Annex 11 – Computerised Systems, Commission of the European Communities, Brussels (2008)
  20. International Standards Organisation, ISO Standard 9001:2005, Quality Management Systems – Requirements, Geneva, 2005
  21. International Standards Organisation, ISO Standard 90003, Software Engineering – Guidelines for the Application of ISO 9001:2000 to Computer Software, Geneva, 2004
  22. R.D. McDowall (2008), Validation of Spectrometry Software: The Proactive Use of a Traceability Matrix in Spectrometry Software Validation, Part 1: Principles, Spectroscopy 23 (11), 22 – 27
  23. R.D. McDowall (2008), Validation of Spectrometry Software: The Proactive Use of a Traceability Matrix in Spectrometry Software Validation, Part 2: Practice, Spectroscopy 23 (12), 78 – 86
  24. IEEE Standard 1233-1998 (1998), Guide for Developing Software Requirements Specifications, Institute of Electrical and Electronics Engineers
  25. US Current Good Manufacturing Practice Regulations for Finished Pharmaceuticals (21 CFR 211), with revisions as of 2008
  26. Spolana, FDA Warning Letter, October 2000
  27. IEEE Standard 829-1998 (1998), Software Test Documentation, Institute of Electrical and Electronics Engineers
  28. M. Fewster and D. Graham (1999), Software Test Automation: Effective Use of Test Execution Tools, Addison Wesley, London
  29. FDA Guidance for Industry (2007), Computerised Systems in Clinical Investigations
  30. OECD (1995), Consensus Document on Principles of Good Laboratory Practice Applied to Computerised Systems, Organisation for Economic Co-operation and Development, Paris
  31. B. Boehm (1970), Some Information Processing Implications of Air Force Missions: 1970 – 1980, RAND Corporation, Santa Monica, California
Author information: R.D. McDowall, Principal, McDowall Consulting, 73 Murray Avenue, Bromley, BR1 3DJ, UK
