
Project Management

From LabAutopedia

A LabAutopedia invited article

Authored by: Steven D. Hamilton

Today, laboratory automation is, or should be, all about integrating processes, systems, technology and people to achieve improvements in laboratory operations, capability, quality and productivity - at the bench, laboratory and enterprise level. We have seen the same evolution occur with regard to desktop computers and office automation, and in most organizations it is now accepted that there must be an overall plan and guiding strategy. Laboratory automation should be no different. It is an endeavor that must be carefully planned and researched to provide the maximum benefit. Project management, therefore, has become a key part of practicing lab automation. While the process and skills that are part of typical project management training apply, there are aspects of lab automation project management that are unique to this field.

Choosing the right project

Detailed articles: Justifying laboratory automation; Evaluation of bottlenecks and process flow; Risk management for laboratory automation projects

Probably the most important step in any laboratory automation project is that of choosing the "right" project. In an industrial or corporate environment, that means choosing a project that has strong potential to be of positive benefit to the organization. One should evaluate the economic justification of a project to determine if the completed project offers an acceptable financial Return on Investment (ROI). Every organization has different formulas for economic justification, or ROI calculations.  Please refer to the article on Justifying Laboratory Automation for a more detailed discussion.
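
As a purely illustrative example, the sketch below shows one simple payback/ROI calculation of the kind such formulas often reduce to. The figures, function name and formula are hypothetical; any real justification should use the organization's own model (NPV, IRR, hurdle rates, and so on).

# Minimal sketch of a simple ROI / payback calculation for an automation
# proposal. The numbers and formula are illustrative only; every organization
# has its own justification model (NPV, IRR, hurdle rates, etc.).

def simple_roi(investment, annual_benefit, years):
    """Return (ROI fraction, payback period in years) for a flat annual benefit."""
    total_benefit = annual_benefit * years
    roi = (total_benefit - investment) / investment
    payback_years = investment / annual_benefit
    return roi, payback_years

if __name__ == "__main__":
    # Hypothetical numbers: $250k system, $90k/yr labor and consumable savings, 5-year life
    roi, payback = simple_roi(investment=250_000, annual_benefit=90_000, years=5)
    print(f"ROI over 5 years: {roi:.0%}, payback in {payback:.1f} years")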

A strategic justification, i.e. the potential impact on intangible factors such as safety, skill retention, capabilities and quality, should also be evaluated. For instance, there are cases where a given type of science cannot be done without an investment in a new automated technology. Microarray analysis of DNA and RNA is such an example: no one attempts to create such incredibly dense arrays of biochemical material manually.  It is often difficult to put a monetary value on strategic justification factors, and different organizations, or even different parts of the same organization, will have differing levels of recognition and acceptance of such factors.

The project's impact on, and fit within, the overall workflow of the organization should also be evaluated. The ideal goal would be to use automation to address "bottlenecks", or segments of the organization that are rate-limiting to the entire enterprise. It may be of questionable benefit to the organization to automate a process that is not rate limiting, unless of course there is a strong economic or strategic justification. One must also be careful not to create bottlenecks where none previously existed via the enhanced capability that automation may bring to an individual step of the process. More information can be found in the article about Evaluation of Bottlenecks and Process Flow.

Developing a strategy and plan

Once the right project has been chosen, the next step is to develop a strategy and plan for the entire project. The first step towards this should be the formation of a project team consisting of the project's "stakeholders". Membership in this team must go beyond those immediately involved in the nuts and bolts of implementation. It must include whoever will ultimately own and operate the automated system. Many are the projects that have succeeded technically but failed operationally because the "hands-on" laboratory staff were not on board with the project. It is vital that the eventual system owners/operators be identified at this stage of project planning and that they have a positive attitude toward the endeavor.

The project team must also include the recipients of whatever the automated system produces - a processed sample, assay data, etc. They must understand and be prepared for any impact on their work that may result from the new automation and should be encouraged to voice any concerns early. A person who is the next cog in sample processing should certainly know that pending automation may result in many more samples in their inbox or a change in the timing of their arrival. A person who depends on the results of a manual assay that is planned to be automated must be comfortable with any changes or shifts that may occur in the data.

Finally, the project team must include secondary upstream and downstream people who may be indirectly impacted by the new automation. If the new system will greatly increase the use of consumables, then the person responsible for purchasing and maintaining that consumable supply should be included. If the new system will greatly increase raw data output from the laboratory, then the person responsible for data processing and management should be included. If the new system will greatly increase chemical or biological waste, then the resources responsible for waste handling and disposal need to be involved.

The key function of this project team is to go through a Requirements Analysis, resulting in a Functional Requirements Document (FRD), i.e. a document formally outlining the operational requirements of a proposed system and specific conditions or stipulations related to the project in general.  Many example documents exist from various disciplines. This document should define the following in detail:

  • Project objective: What is the overall project goal? What factors will define success?  How does this project map to business objectives, requirements and goals? 
  • Performance requirements: What are the key measurements of performance necessary for the system to be successful? What factors constitute failure risks? Examples include:
    • Required system throughput:  From the start of operation, what is the expected time for the processing of the first unit or sample to be complete (ramp-up time)?  How many units or samples must be processed per unit time thereafter?
    • System availability: How many hours/day and days/week is the system expected to be available for use?
    • Total capacity: What amount of work can the system be loaded with at any one time (i.e. its sample capacity)?
    • Expected frequency of user interaction: How often is it desired for the system to require user interaction?  What "walk away" time is expected? 
    • Reliability:  What is the acceptable level of reliability?  What is the consequence of unreliability?  Are the samples being processed irreplaceable?  Are timely results critical to the enterprise?  Reliability can be stated as:
      • Mean-Time-To-Failure (MTTF): The mean (average) time the system is operable from start of operations before the first failure occurs.
      • Mean-Time-Between-Failure (MTBF): The mean (average) time between failures of a system, including the time of repair and recovery from such failures. (A simple way to estimate MTTF and MTBF from an operations log is sketched after this list.)
    • Recoverability: In the event of system failure, what type of recovery capability is expected?  Must the system self-recover?  Is it sufficient to allow errors to be discovered at the time of the next scheduled user interaction?  Should the system announce an error condition, and if so how?  What is the criticality of tending to an error situation? 
    • Audit trail: What level of automatic documentation of operations is required?  List the included data.  
    • Software interfaces:  Name the applications with which the system must interface and what data is to be exchanged. 
    • Hardware interfaces:  Name any specific hardware which must be part of or interfaced to the system. For instance, a given assay might require a very specific detection device. 
  • Validation plan and test protocols: Plans for actually testing and measuring key performance criteria. Include statistical expectations of precision and accuracy, throughput, failure rate, etc.  Note: Specifics of equipment are unknown at this point, so protocols must focus on known measurables in the process to be automated which will exist regardless of the equipment involved.
  • Timeline: Overall project timetable, including checkpoints and milestones.
  • Financial considerations: Overall project budget, including on-going operational costs.
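
As an illustration of the reliability measures defined above, the following sketch estimates MTTF and MTBF from a simple operations log. The log format, timestamps and field names are hypothetical; a real system would draw such events from the audit trail specified in the FRD.

# Illustrative sketch of how MTTF and MTBF might be estimated from a simple
# operations log. Timestamps and the log format are hypothetical; real systems
# would pull these events from the audit trail defined in the FRD.

from datetime import datetime

# (event, timestamp) pairs: "start", "fail", "restored"
log = [
    ("start",    datetime(2008, 3, 1,  8, 0)),
    ("fail",     datetime(2008, 3, 9, 14, 30)),
    ("restored", datetime(2008, 3, 10, 9, 0)),
    ("fail",     datetime(2008, 3, 28, 11, 15)),
    ("restored", datetime(2008, 3, 28, 16, 45)),
]

start = next(t for e, t in log if e == "start")
failures = [t for e, t in log if e == "fail"]

# MTTF: mean operating time from start of operations to the first failure
mttf_hours = (failures[0] - start).total_seconds() / 3600

# MTBF: mean time between successive failures, including repair/recovery time
gaps = [(b - a).total_seconds() / 3600 for a, b in zip(failures, failures[1:])]
mtbf_hours = sum(gaps) / len(gaps) if gaps else float("nan")

print(f"MTTF: {mttf_hours:.1f} h, MTBF: {mtbf_hours:.1f} h")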

This document must be written to be understood by scientists and managers as well as those in other technical fields - e.g. engineers or computer scientists - depending on the exact nature of the project.  Note that the FRD does not specify technical solutions.  All too often, scientists like to jump immediately into problem solving before they have adequately defined the problem itself.  No one likes creating a functional requirements document. It is tedious and time consuming, and it is a stage that is often rushed through in the haste to "get on with the project". However, the process that the team must go through to create a quality functional requirements document is essential to uncovering and understanding all the factors that could affect project success or failure. The document itself is then essential for communicating those factors to whoever will be involved in actually supplying or creating the automated system.

Choosing the right technology

Many people involved in laboratory automation projects often jump to this step as soon as a potential project is envisioned. It's natural for technology-oriented people to want to get to the part of a project they enjoy the most. However, good project planning demands that one not skip or short-change essential steps. Once the functional requirements document has been written, the team will have all the information necessary to do a good job of choosing the right technology for the project. The following factors must be considered:

  • Process complexity: Simple processes point toward simple technology, and vice versa.
  • Stability of procedures: Complex systems are often not easily and quickly adapted to rapidly changing procedures.
  • Capacity and unattended throughput needs: High capacity and long periods of unattended operation will require more complex or at least larger systems.
  • Staff experience with automation: Take small steps, build up experience and succeed.
  • Funding: Small budgets mean purchasing simple, off-the-shelf technology.
  • Timeframe of need: Fast project timelines are not compatible with complex, custom,  or unproven technology.

One must also consider how these factors may change over the life cycle of the system. For instance, will the need for capacity increase? Will more walk-away operation time be desired? If so, can this growth capacity be built into the system from the outset or can it be added as an upgrade later? Some features are only available at the time of manufacture, or may require an extended downtime or return to the factory for implementation. Will this be acceptable? What is the possibility that such equipment may no longer be available in the future? Is the vendor you have chosen to work with stable? If not, take precautions to protect yourself, such as putting software source code in escrow or purchasing key spare parts. Is there likely to be a need to add new processes or procedures to the system? If so, what LUOs (laboratory unit operations) might they encompass? Are they likely to be more or less complex? Will the precision and accuracy performance of the current system suffice for future needs? Such things are hard to predict, but some level of what-if planning should be attempted. Many systems meet an untimely end because no adaptability was considered at project planning time.

The 2006 ALA Survey on Industrial Laboratory Automation[1] gathered the following data regarding respondents' desired point of adoption of new technology:

Entry point for adoption of new laboratory automation technology (ALA 2006 survey)

Technology Entry Point | Description of Entry Point Technology | % of Respondents
Bleeding Edge | Technology showing high potential, but yet to demonstrate value or practicality | 7%
Leading Edge | Technology proven in the marketplace, but with few knowledgeable personnel to implement or support it | 61%
State of the Art | Technology which everyone agrees is the right solution | 32%
Dated | Technology still useful and implemented, but a replacement leading-edge technology is readily available | 0%
Obsolete | Technology which has been superseded by state-of-the-art technology and is rarely implemented anymore | 0%

 

Resourcing and executing the project

Detailed article: The laboratory automation expert

As mentioned above, organizations with little or no experience with laboratory automation should take small steps and build up their expertise before getting involved in more complex projects. The same 2006 ALA survey explored the question of internal automation staffing:

Does your company / organization employ dedicated internal staff serving as automation resources? ALA 2006 survey
Answer 2006 2003 1998
Yes 75% 66% 64%
No 25% 34% 36%


The availability of knowledgeable, internal resources increases the options available for executing a project and the benefit of institutional learning enhances the odds of success. What type of people do organizations employ as laboratory automation experts? What skill sets are important? Where can one get training and/or find qualified people? The LabAutopedia article on The Laboratory Automation Expert delves into this subject.

The classic build-vs.-buy decision must be made when considering the sourcing of the automation technology. The 2006 survey data indicate that the percentage of laboratory automation purchased off the shelf has remained fairly constant over the past decade, at just under 50%.

Describe the percentage of automation sourcing in your laboratory / organization from each of the listed sources.  ALA 2006 survey
Answer 2006 2003 1998
Off the shelf (no customization): 47% 46% 48%
Developed in-house: 15% 20% 22%
Developed/customized via original vendor: 26% 24% 25%
Developed/customized via 3rd party integrator: 11% 9% 4%
Other: 1% 1% 1%


Thus slightly over half of the automation reported in the survey required some level of customization; off-the-shelf solutions alone were not adequate.  The primary source of laboratory automation customization throughout this period has been the original equipment manufacturer, which is consistently reported to have supplied about half of all customized automation.  The 1998 survey reported that in-house development was the second-ranked source of customized automation, with third-party integrators a distant third.  The 2006 survey indicated that the gap between in-house and third-party-sourced customization had narrowed significantly.

Rely on the Functional Requirements Document as the best way to transfer knowledge about the project to any technology provider, internal or external. The FRD should clearly describe to the provider the process to be automated, the key performance factors and the methods of testing that performance once the system is complete.  It should be included in any purchase contract or work agreement.  Some of the performance criteria may need to be re-defined based on feedback from the technology provider.  This is acceptable, provided that such changes do not endanger the success of the project.  Defined milestones and checkpoints are often written into the contract as payment points. If the FRD was done well, there should ideally be no surprises due to lack of communication or specification.  Potential technical surprises are usually more than enough to deal with.

System testing and validation

Detailed Articles: Factory Acceptance Testing, Risk management for laboratory automation projects 

There are two basic types of testing and validation - FAT, or Factory Acceptance Testing, and SAT, or Site Acceptance Testing. FAT is done at the location where system engineering and fabrication take place, which most likely is not the final laboratory location. SAT is done at the final location for the system, typically the laboratory of the system "owner". Both types of testing should be done according to the protocols and requirements originally set out in the Functional Requirements Document.  Again, the goal is to have no surprises. Why do both FAT and SAT? Testing at the "factory", or fabrication/engineering site, gives the best access to all the technical resources that participated in the system creation, plus others. It is the best location to discover and iron out bugs quickly because of that access. Typically the ultimate "owner" of the system will be present for final FAT testing, although such testing can and should have been conducted by the engineering staff prior to the owner's arrival, according to the FRD testing specifications. What often cannot be done during FAT testing is the actual chemistry or biology that the system is designed for. The site may not be equipped for such testing, nor may the means be available to test the ultimate chemical or biological outcome of the process. Therefore biological and/or chemical testing is usually done as the last part of SAT. Since the system may have been affected in some way by the transport from factory to site, SAT begins by simply repeating the FAT protocol, and then focuses on testing that can only be done at the final site.
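
One practical way to keep FAT and SAT results directly comparable is to express the FRD acceptance criteria as data and execute the same checks at both sites. The sketch below is purely illustrative; the criteria names, limits and measurement values are hypothetical.

# Hypothetical sketch: expressing FRD acceptance criteria as data so the same
# checks can be executed at FAT and then repeated verbatim at SAT.

from statistics import mean, stdev

# Acceptance criteria as they might appear in the FRD (illustrative values)
criteria = {
    "throughput_samples_per_hour": {"min": 120},
    "dispense_cv_percent":         {"max": 5.0},
}

def check(results):
    """Compare measured results against the FRD criteria; return pass/fail per item."""
    report = {}
    for name, limits in criteria.items():
        value = results[name]
        ok = limits.get("min", float("-inf")) <= value <= limits.get("max", float("inf"))
        report[name] = (value, "PASS" if ok else "FAIL")
    return report

# Example: measurements collected during a FAT (or repeated SAT) run
dispense_volumes = [99.2, 100.4, 98.8, 101.1, 100.0]   # microlitres, gravimetric check
results = {
    "throughput_samples_per_hour": 132,
    "dispense_cv_percent": 100 * stdev(dispense_volumes) / mean(dispense_volumes),
}

for name, (value, status) in check(results).items():
    print(f"{name}: {value:.2f} {status}")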

If the system will be operating in a regulated environment, there are government-mandated requirements regarding controls, audits, system validations, audit trails, electronic signatures, and documentation for software and systems. Such specifications should have been included in the FRD, but the brunt of the work comes at the time of SAT and during ongoing operation. You can learn more about regulatory compliance for automated systems in this article.

Not to be overlooked at this time is delivery of documentation. Good system documentation is essential to on-going performance. Unfortunately such documentation is often the last thing to be done and is often incomplete, especially for work that is more custom and prototypical. Desired documentation should be included in the FRD and delivery of such should be included in the actual purchase contract.  A system should not be considered to have fully completed SAT until the agreed-upon documentation is complete.

Ongoing operation and support

Contrary to popular opinion, automation projects are not done once SAT is complete. After such an investment, systems are expected to and should perform well for many years to achieve the return that was projected early in the planning process. Unfortunately, as the project team dissolves and people focus on other projects, measures are often not put into place to assure excellent on-going operation. What often happens?

  • The project “Champion” moves on: This often can't be avoided, but a broad team, with broad buy-in is the best way to maintain focus and interest.
  • System documentation is never completed, and after time, no one understands it: Be sure documentation needs are listed in the FRD and don't declare SAT done without delivery of said documentation.
  • The system is just not reliable:  Did your reliability requirements match your technology entry point?  High reliability requires more mature technology.   
  • The physical location and layout are not flexible:  Did you consider possible system changes when you were planning?
  • Appropriate long-term support not available:  Build support funding into the original funding request. Be up-front about on-going costs.  Evaluate the potential longevity of products and providers.
  • The original system provider bites the dust:  Occasionally this happens, but good planning and research on providers can minimize the risk. There are other ways to protect yourself, such as escrowing of software or parts.
  • The system is not adaptable to changing needs: You can't anticipate everything, but you know whether you work in a slow or fast changing laboratory environment.  If the latter, then you must place system flexibility high on the requirements list. 

Compiling a rigorous FRD and having internal laboratory automation expertise is the best hedge against all these scenarios.

Performance metrics monitoring

Detailed Articles: Lean Sigma in the Lab; Improving your lab with simulation

Automated systems can process prodigious numbers of samples and generate great quantities of data in short periods of time. Such systems can appear to be functioning normally, not triggering any error-detection mechanisms, yet be producing out-of-specification data due to some subtle operational deviation. It is vital to catch such deviations quickly, lest bad data creep into scientific decision-making processes and samples, reagents and time be wasted. One approach is to design automated systems to monitor themselves, recording and reporting key operational parameters as checks of on-going performance quality.  In general, such approaches follow this functional outline (a minimal sketch of such a monitoring loop follows the list):

  • Standard tests are performed by automated systems at regular intervals
  • Results are captured, stored and analyzed automatically for non-conforming performance
  • Immediate automatic notifications (e.g. email) are sent regarding non-conforming performance or other errors
  • Critical information is tracked with control charts and electronically posted regularly
  • Periodic summary reports are generated and distributed automatically
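
The following sketch illustrates the outline above in minimal form. The run_standard_test() and send_alert() callables are hypothetical placeholders for a system-specific standard test and notification mechanism, and the plus/minus three standard deviation control limits are simply the conventional control-chart choice, not a prescribed rule.

# Minimal sketch of the monitoring outline above, assuming a hypothetical
# run_standard_test() that returns a measured value (e.g. a dispensed volume)
# and a hypothetical send_alert() notification hook. Control limits are the
# conventional mean +/- 3 standard deviations from a validation data set.

import statistics
import time

def control_limits(validation_values):
    """Derive lower/upper control limits from historical validation data."""
    m = statistics.mean(validation_values)
    sd = statistics.stdev(validation_values)
    return m - 3 * sd, m + 3 * sd

def monitor(run_standard_test, send_alert, validation_values,
            interval_s=3600, history=None):
    """Run the standard test at a regular interval, store results, and alert
    on non-conforming performance."""
    lcl, ucl = control_limits(validation_values)
    history = history if history is not None else []
    while True:
        value = run_standard_test()           # e.g. gravimetric check of a dispense
        history.append((time.time(), value))  # captured and stored for control charts
        if not (lcl <= value <= ucl):
            send_alert(f"Out-of-control result: {value:.3f} "
                       f"(limits {lcl:.3f}-{ucl:.3f})")
        time.sleep(interval_s)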

Such an automated program was developed at Amgen Inc. for monitoring the performance of automated liquid handling systems[2].  Their Automatic Metric Monitoring Program (AMMP) is implemented on a PC server using a Visual Basic program as the controller. All e-mail, distribution, and list processing is handled by Microsoft Outlook, while analysis, graphics, and reports are generated by JMP (SAS Institute) and MATLAB (MathWorks) scripts invoked by the controlling Visual Basic application. The metrics database is Oracle-based. The system is reported to have led to measurable performance improvements.

A similar system has been developed at Bristol-Myers Squibb to display, via the Web, the current operational state of a diverse set of analytical instruments as well as the state of the sample queues currently running on those instruments.[3] The Web-based approach allows both analysts and support personnel to access information about instruments of interest from any standard computer workstation in the company. Analysts can use the information to determine which walk-up instrument has the shortest sample queue or to see whether their sample analyses are complete. Support personnel can quickly scan a relevant group of instruments to determine where to focus their maintenance efforts. As many as 140 instruments are monitored by their OmniQueue application.

These and other quality-monitoring techniques can benefit from knowledge and implementation of selected principles of Six Sigma or Lean Sigma, the former focusing on identifying and removing the causes of defects and errors in manufacturing and business processes and the latter focusing on process flow improvement.  The use of Six Sigma techniques has been described by Liu in a case study focused on reducing cycle time and entry defects in the Case Report Form entry process.[4]

Automated systems and regulated environments

Detailed Articles: Automated systems and regulated environments; Considerations_When_Implementing_Automated_Methods_into_GxP_Laboratories

Regulated laboratory environments, such as some in the pharmaceutical industry, pose unique challenges for system project management.  The project planning practices described in the sections above roughly equate to specific practices with defined terminology (often referred to as 4Q) in regulated environments.

  • Developing a Functional Requirements Document (FRD) = Design Qualification, or User Requirements Specification (URS) in a regulated environment 
  • Factory Acceptance Testing (FAT) and Site Acceptance Testing (SAT) = Installation Qualification (IQ) and Operational Qualification (OQ) in a regulated environment
  • Ongoing operation and support / Performance Metrics Monitoring = Performance Qualification (PQ) in a regulated environment

It should be noted that usage of the 4Q terms is not universally uniform, which can lead to confusion.  Nonetheless, the overall intent is to build quality into the process of planning, acquiring, installing and operating laboratory automation systems by assuring that sound procedures and practices have been followed and are documented and traceable.

Governmental regulations tend to focus more on the goal than on functional specifics, and so are open to interpretation.  The United States Food and Drug Administration (FDA), for instance, publishes guidelines for Good Laboratory Practices (GLP) and Good Manufacturing Practices (GMP) that contain subsections addressing laboratory instrumentation and computers, but the language can be broad and lacking in specifics.  It is important to note that the original FDA validation guidelines were developed for pharmaceutical manufacturing process validation and were subsequently adapted to apply to analytical instrumentation, and then to computers.  This has resulted in varied interpretations.  In 1983 the FDA published a guide to the inspection of Computerised Systems in Pharmaceutical Processing, also known as the 'bluebook' (FDA 1983).  Both the American FDA and the UK MHRA have added sections to the regulations specifically for the use of computer systems.  For the MHRA this is Annex 11 of the EU GMP regulations (EMEA 1998).  The FDA introduced 21 CFR Part 11[5] for rules on the use of electronic records and electronic signatures in 1997.  Based on the 2003 Guidance for Industry (Scope and Application), 21 CFR Part 11 has been reworded but is currently undergoing extensive review within the FDA prior to any public review.   With regard to instrumentation, FDA regulation 21 CFR Part 211.160[6] specifies that "Laboratory controls shall include: The calibration of instruments, apparatus, gauges, and recording devices at suitable intervals in accordance with an established written program containing specific directions, schedules, limits for accuracy and precision, and provisions for remedial action in the event accuracy and/or precision limits are not met."

Certain non-governmental groups develop guidelines and standards that are in effect adopted and enforced by government regulatory agencies.  In the United States, standards and guidance documents related to food and drug regulations are developed and published by the not-for-profit United States Pharmacopeia (USP) in their National Formulary (USP-NF), a book of public pharmacopeial standards.  The FDA is then charged with enforcing compliance.  Their latest compilation (USP 31 / NF 28, effective August 1, 2008) contains a General Chapter (USP <1058>) on Analytical Instrument Qualification (AIQ), which provides a 5-page, very flexible, pragmatic and much simplified approach to laboratory equipment qualification. It also generally considers software a core part of the instrument, so that in qualifying the instrument the software is also qualified.  The International Society for Pharmaceutical Engineering (ISPE), via its subcommittee on Good Automated Manufacturing Practice (GAMP), publishes a series of Good Practice Guides (GPG).   In 2008 they published the 352-page document GAMP 5 - A Risk-Based Approach to Compliant GxP Computerised Systems[7].  GAMP 5 is a rigorous project management approach to Computer System Validation (CSV), derived from a software-development perspective, which can be applied across the range from simple instruments to complex software.

The challenge in testing laboratory automation (for quality assurance purposes) lies in the different approaches described in the above two documents and in the hybrid instrument/computer nature of much of what we call laboratory automation.  One approach (USP) is derived from the "instrument" perspective and the other (GAMP) from the "computer" perspective.  Most laboratory automation falls in between those two perspectives, in equal measure instrument and computer.  Terminology in the two approaches is similar but subtly different, which adds to confusion.  Until guidelines are developed that suitably address the full spectrum of laboratory automation, a hybrid approach toward testing is probably advised, in consultation with organizational QA staff.  The project management fundamentals of developing a detailed Functional Requirements Document (including testing plans) early in the process and doing FAT and SAT testing at the end still apply, but must be tailored to fit the exact quality-testing approach being taken.

The topic is explained in further detail in articles in the category on regulatory compliance.

References

  1. Hamilton, S.D. 2006 ALA Survey on Industrial Laboratory Automation. J. Assoc. Lab. Autom. 2007, 12, 239-246.
  2. Schultz, H.; Alexander, J.; Petersen, J.; Hartmann, T.; Overland, D.; Gigante, B.; Grandsard, P. The Automatic Metric Monitoring Program. J. Assoc. Lab. Autom. 2004, 9, 398-403
  3. Echols, M.; Smith, D.; Nirschl, D. A Web-based Instrument Monitoring System. J. Assoc. Lab. Autom. 2004, 9, 398-403
  4. Liu, E. Clinical Research the Six Sigma Way. J. Assoc. Lab. Autom. 2006, 11, 42-49.
  5. Guidance for Industry Part 11, Electronic Records; Electronic Signatures - Scope and Application; August 2003 http://www.fda.gov/cder/guidance/5667fnl.htm
  6. 21 CFR Part 211, Current Good Manufacturing Practice for Finished Pharmaceuticals, Subpart I - Laboratory Controls.
  7. GAMP 5, A Risk-Based Approach to Compliant GxP Computerised Systems, ISPE.