
Understanding and Interpreting the New GAMP 5 Software Categories
Jun 01, 2009
By R.D. McDowall
Spectroscopy
Volume 24, Issue 6

The GAMP (Good Automated Manufacturing Practice) guide version 5 was released in
March 2008, and one of the changes was that the classification of software was revised again.
This column will look at what the changes mean for the laboratory and whether all of these
changes should be implemented.
Version 5 of the Good Automated Manufacturing Practice (GAMP) guide (1)
was released last year. This publication has been available since 1994, when
version 1 was informally published in the UK, and since its inception it has
always contained a classification of software. This is one of the best parts of
the guide as it has an in-built risk assessment, as we shall see in this column.
We will explore version 5 of the software classification and see what changes
we need to make to ensure that it can be implemented practically and
effectively in any laboratory.
However, before we continue much further I should also declare a vested
interest: I have a love-hate relationship with the GAMP guide. I love the classification of
software outlined in Appendix M4 and hate the life cycle V model. My rationale for this position
is that versions 1-4 of the guide presented a single life cycle V model that was really only
applicable to process equipment and manufacturing systems. It had very little to do with
computerized systems, especially laboratory ones. Therefore, every validation was shoehorned
into an inappropriate model because little thought or intelligence was applied and the
model was followed blindly. For example, when a commercially available laboratory system was
validated, functional and design specifications were written for virtually no gain but at a great
cost in time and resources. The problem lay in the origins of the GAMP guide. The first version
was written by a group of volunteers in the UK in the early 1990s as a mechanism to control
suppliers of process equipment to the pharmaceutical industry, and this legacy survived through
to version 4. However, the model does not make it into version 5 of the guide, which is a shame;
as mentioned above, the model is very good for process equipment.
However, in GAMP version 5, I'm very pleased to say that the "one size fits all" approach has
been replaced by a breath of fresh air: different life cycles depending on the classification of
the software being implemented. The key message is that a single life cycle model no longer
fits all systems. Note that GAMP is a guide and you can deviate from it; all that is required
is the application of thought and intelligence coupled with effective risk management that is well
documented. OK, perhaps this is a step too far . . . .
Software Classification Categories
As I mentioned earlier, the software categories in GAMP 5
have been revised (1). To appreciate the scope of these changes
fully we need to look at the classification of software from
GAMP 4 (2) and compare this with GAMP 5, as shown in
Table I.
In the beginning, or at least in GAMP 4, there were five categories of software (Table I
compares the software categories in GAMP 4 and GAMP 5):

Category 1: Operating systems
Category 2: Firmware
Category 3: Standard software
Category 4: Configured software
Category 5: Custom software

The constituents of each category are outlined in Table I; however, there was always a debate
about some commercial software packages: were they category 3 or 4? Many spectroscopists
would argue that an application should be classified as category 3 and not 4, as it should be less
work to validate, thereby evading the real classification. To help resolve this debate, in GAMP 5
the software categories have been revised and refined: most for the better and one for the worse.
This is a natural evolution of this approach to software classification. So we now have the
following four categories:

Category 1: Infrastructure Software
Category 3: Nonconfigured products
Category 4: Configured products
Category 5: Custom applications

Refer to Table I as we discuss the changes in the software classification in more detail in the next
section.

Why Classify Software?


Before we go into a detailed discussion of the software categories, perhaps we should ask the
question "Why bother to classify software?" What benefit does this software classification
provide?
If you look at Table I there is a built-in risk assessment. The least risky and most widely available
software is in category 1 (operating systems, databases, office software, and other widely
available software). This is software that can be used by anyone and in any industry. As we
progress down the categories shown in Table I, the software generally becomes more specialized
in its function (from a general office application to software that can control a spectrometer to
acquire and process data and then report the results). As we go down the list, the ability of the
users to change the operation of the software and the processing of results increases until we
reach category 5: a unique solution that is conceived, specified, written, tested, and maintained
by the users or the organization. Here lies the greatest risk. Let's now take a detailed look at
each of the software categories and see what has changed and if there are any problems we need
to discuss.
Software Classification Changes and Their Laboratory Impact
Presented and discussed here are the various changes to the software classifications in the new
GAMP guide.
Category 1: Greatly Expanded Scope for Infrastructure Software
Category 1 has undergone a radical change and expansion, from simply operating systems, which
had been constant in GAMP versions 1 to 4, to infrastructure software. This category is broken
down into two subcategories:

Established or commercially available layered software, and
Infrastructure software tools.

The intention is that the infrastructure software in this category provides the computing
environment for running both regulated and nonregulated applications within an organization. All
software in this category needs to be controlled and qualified so that the IT department does not
apply dual standards; if this is not done, the status of validated applications can be called into
question.
Software in the subcategory of Established or Commercially Available Layered Software still
includes operating systems from GAMP 4, but this has also been expanded to encompass a
greater scope:

databases,
programming languages,
middleware,
office software,
ladder logic interpreters (for manufacturing systems), and
statistical programming tools and spreadsheet packages.

The key issue is that many of these software tools are the base products for the applications used
in the laboratory, or they are the foundation layer on which the laboratory applications operate.
For example, if your spectrometer's application software has a database to manage the
methods, data, and results you generate, that database is configured by the spectrometer supplier
from the out-of-the-box product to operate with their application software. Languages are used as
a means to write the applications, each of which will be validated for its intended use.
Note that category 1 also includes office software such as word processing, spreadsheet, database,
and presentation applications. Now before you rush off thinking that Excel templates and macros
do not need to be validated, think again, as the guide notes that "applications developed using
these packages" are excluded from category 1; these can be category 3, 4, or 5 (1), depending on
their complexity.
Note also the phrasing of the subcategory "established or commercially available". This means
that both open source and commercial software can be used, which ratifies the status quo (open
source operating systems such as Linux, databases such as MySQL, and source code
management tools such as Subversion). In many IT departments and research groups open source
software is used, and often this use can be extensive. Some people may argue that open source
software is hacked code, but when the code can be reviewed by many programmers it may be
argued that the quality of the finished application could be better than that of some commercially
available software. Regardless of the debate, the word established allows the use of open source
applications within category 1.
The second subcategory is infrastructure software tools that comprise a wide variety of software,
such as

network monitoring software,
anti-virus software,
backup software,
help desk tools, and
IT configuration management tools and other network software.

While this sounds like a shopping list, it provides the IT group with tools to establish, protect,
monitor, and manage the computing environment and networks where you store your laboratory
data and electronic records. However, care needs to be taken with applications in this
subcategory, as the way an application is used could drastically change the category to which it is
allocated. Take, for example, help desk or configuration management applications. If no
regulatory data are held exclusively there, then they are category 1; but if they are your only tools
for the problem management or change control processes of regulated applications, then this
changes the category and the validation group comes galloping round the bend.
From the laboratory, audit, and inspection perspectives, what is required is control of the
applications that comprise category 1. Some of the typical controls will be

identification of the software (name, version, and supplier),
where it is installed, including the path to the server or virtual server,
configuration to operate in your environment,
demonstration that the software has been installed correctly, and
a simple demonstration that the software works.
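
To make these controls concrete, here is a minimal sketch of how such qualification evidence might be captured in a script. The package name, supplier, install path, and expected checksum are all hypothetical assumptions for illustration; only Python standard-library calls are used.

```python
# Hypothetical qualification record for an item of category 1 software.
# All identifying details below are invented for illustration.
import hashlib
import platform
from pathlib import Path

record = {
    "software": "ExampleBackupTool",   # name (assumption)
    "version": "4.2.1",                # version from the supplier's media
    "supplier": "Example Vendor Ltd.",
    "host": platform.node(),           # server or virtual server identity
    "install_path": r"C:\Program Files\ExampleBackupTool",
}

# Demonstrate correct installation by comparing a file checksum against a
# supplier-stated value (hypothetical placeholder here).
expected_sha256 = "supplier-stated value goes here"
installed_file = Path(record["install_path"]) / "backup.exe"
if installed_file.exists():
    actual_sha256 = hashlib.sha256(installed_file.read_bytes()).hexdigest()
    print("Installation verified:", actual_sha256 == expected_sha256)
else:
    print("Installed file not found; record a deviation.")
```

A simple demonstration that the software works (the last control in the list) would then follow, for example by running one representative backup and restore.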

All of this should be done regardless of whether the software originates as open source or
commercial software. Furthermore, change control and configuration management are essential
elements of control, both for category 1 and for the other software categories. So we qualify these
items of software, not validate them. In contrast, we validate software in categories 3, 4, and 5.
Note that I have omitted category 2 software; we will now discuss this in more detail.
Category 2: Ignore the Discontinuation of the Firmware Classification, but with Care
As you can see in Table I, GAMP 4 had five categories of software, which have been reduced to
four in the latest version. The category that has gone missing in action is category 2 (firmware).
The argument from the GAMP Forum for this category's discontinuation is that firmware, which
can vary from simple to custom software, can be accommodated in the other categories
depending on its nature. To understand why this category was eliminated from GAMP 5 we need
to consider what we mean by the term firmware. In its original form, firmware was a set of
operating instructions for an instrument embedded in a read-only memory (ROM) chip or used to
start more complicated programs on an instrument or device. An alternative term, favored by
IBM, is microcode. This is software, but instead of being delivered on a disk or USB stick, it
comes preinstalled on a chip built into the instrument during the manufacturing process.
FOR: Firmware is still software, and different chip versions will be produced over time as bugs
are found and fixed by the manufacturer. When this occurs, a new version of the firmware ROM
is produced. During a maintenance visit by a service engineer (aka the Angel of Death), the
firmware ROM may be replaced with a new one. Therefore, change control must include
firmware, as the upgraded chip may contain new functions not present in the existing chip as
well as the bug fixes for the original software. One instance where firmware may need to be
upgraded is when you install a new version of instrument/data system application software. It is
possible that the drivers for your spectrometer have changed, and to ensure that the new software
works correctly, the firmware needs to be upgraded at the same time. You need to know in
advance that the firmware will be upgraded so you can complete the change request before the
work starts.
Over time, firmware chips have become bigger to allow more instructions to be stored, and the
ROM can be replaced with an erasable, programmable chip: older EPROMs were erased with UV
light, whereas modern flash memory is erased electrically and rewritten through a firmware
updater. Other forms of firmware can be upgraded via a download; the best example of this is the
BIOS firmware chip in your PC, which typically can be updated online to fix errors, improve
functionality, and keep it current.

The biggest issue with firmware now is the ability to define programs or parameters to
produce user-defined routines that can be kept in memory and recalled at will. A typical example
is a dispenser-dilutor, in which a user can prepare different routines for different analytical
methods. So instead of a simple set of instructions there is now the ability of the users to change
the way the instrument works. However, this flexibility comes at a potential price: the impact on
the results and errors if a user-defined program is incorrect. The parallel in the conventional
software world is Excel and the ability to produce incorrect templates or macros if proper controls
are not in place.
Therefore, the rationale for the discontinuation in GAMP 5 is that the software sitting on the
firmware can be classified in the remaining software categories. In doing so, the classification
should address the impact of the user-defined programs that are possible with the more complex
types of firmware systems.
AGAINST: Now let's look at the argument for retaining category 2 software and ignoring the
GAMP 5 advice. Looking around the laboratory you'll see many common laboratory instruments
(such as balances, pH meters, and dispenser-dilutors) that still use a firmware chip to operate the
instrument. Furthermore, the firmware cannot be changed by the user, and the only way to update
a chip is via the service engineer. Under these circumstances I believe that there is no need for
excessive validation, and these instruments can still be qualified.
However, we now run into a slight problem. The issue is that the GAMP 5 approach is now in
direct conflict with USP <1058> for analytical instrument qualification (AIQ) (3), which we
discussed in detail in the last "Focus on Quality" column (4).
GAMP 5 says validate and USP <1058> says qualify. Oh dear, what shall we do?
This is a common problem when different professional groups develop guidance; the individual
participants sit in a silo and fail to consider anything outside their own boundaries. Hence, when
each new guidance is unveiled with a fanfare and the chink of a cash register, it is not until later
that we find that, due to professional myopia, there are conflicts and problems. Ironically, GAMP
4 was congruent with USP <1058>: these instruments would be classified as category 2 software
and qualified to demonstrate their intended purpose, a simpler and quicker process than
validation. Under this approach the instrument's software is implicitly validated as part of the
qualification to demonstrate intended use. Job done. Why do more?
POTENTIAL SOLUTION: So how do we resolve this situation? Which takes precedence: a
USP general chapter or the GAMP guide? I don't think we need to phone a friend or go 50:50 to
answer this question, do we? In the laboratory you'll want to ignore GAMP 5, retain category 2
software for laboratory instruments, and be consistent with USP <1058>. This approach will also
make your life easier in these instances.
WARNING! Implement this approach with some caution and the application of intelligence. The
reason for the warning is that most of the laboratory instruments in this section are indeed
category 2 software, but some instruments can have additional functionality that needs further
control (for example, they can have programmable firmware for user-defined procedures that
automate tasks over and above the basic functionality of the instrument). The ability of the users
to change the function of the instrument has many advantages but also comes with a significant
downside: errors and incorrect procedures. Therefore each laboratory needs to take this into
consideration when deciding the software category and hence how much qualification or
validation work is undertaken.
Therefore you'll need to adopt a two-stage process if you develop user-defined procedures.

First, qualify the basic operation of the dispenser-dilutor as category 2 software and undertake
any calibration necessary to ensure that the instrument is fit for its basic intended use.
Second, any user-defined procedures must be documented and validated (specified and tested) as
separate activities. This user-defined software is category 5 but can be validated using a simpler
life cycle model, as the development platform (that is, the instrument) has already been qualified.

As you move away from simple firmware that is implicitly tested as you qualify the instrument,
you will need to take this two-stage approach, although user-defined procedures can be controlled
by an SOP rather than by writing a validation plan every time you develop one. You will also
need to control the versions of each of these procedures and introduce change control to ensure
that changes to each one are managed effectively.
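
As an illustration of the second stage, the sketch below shows what "specified and tested" might look like for a hypothetical user-defined dilution routine. The function, its version number, and the expected values are all assumptions made for this example, not any instrument's actual programming interface.

```python
# Hypothetical user-defined dispenser-dilutor routine (category 5 software).
PROCEDURE_VERSION = "1.0"  # version controlled; changes go through change control

def diluent_volume(sample_volume_ml: float, dilution_factor: float) -> float:
    """Return the diluent volume needed to achieve the stated dilution factor.

    Specification: final volume = sample volume x dilution factor, so the
    diluent to add is sample volume x (dilution factor - 1).
    """
    if dilution_factor < 1:
        raise ValueError("dilution factor must be >= 1")
    return sample_volume_ml * (dilution_factor - 1)

# Testing against independently hand-calculated values: the "specified and
# tested" step of the two-stage process described above.
assert diluent_volume(1.0, 10) == 9.0   # 1 mL sample diluted 1:10
assert diluent_volume(0.5, 4) == 1.5    # 0.5 mL sample diluted 1:4
print("User-defined procedure v%s passed its checks" % PROCEDURE_VERSION)
```

The docstring acts as the specification, the version constant supports change control, and the assertions are the documented test cases, which is all the life cycle this kind of routine needs once the instrument itself is qualified.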
The other issue that you may encounter with some instruments in this category is the
incorporation of calculations in the operation of an instrument. A balance is the classic example
here if it is being used for content uniformity testing. Any calculations used as part of YOUR use
of the instrument should be checked as part of the qualification to ensure compliance with the
requirement of 21 CFR 211.68(b) (". . . Input to and output from the computer or related system
of formulas or other records or data shall be checked for accuracy. The degree and frequency of
input/output verification shall be based on the complexity and reliability of the computer or
related system . . .") (6). However, the calculation checks and testing should be integrated into
the overall instrument qualification and not become a separate calculation validation in a
multilayered validation approach.
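
A hedged sketch of such an integrated input/output accuracy check follows. The weights, the tolerance, and the reported value are invented for illustration; in practice the "system" value would be read from the balance or data system under qualification rather than hard-coded.

```python
# Input/output accuracy check in the spirit of 21 CFR 211.68(b): feed the
# calculation known inputs and compare the system's output with an
# independently calculated value. All numbers here are illustrative.
known_weights_mg = [101.2, 99.8, 100.5, 100.1, 99.9]

# Independently (hand-)calculated reference value: 501.5 / 5 = 100.3 mg.
expected_mean_mg = sum(known_weights_mg) / len(known_weights_mg)

# In a real check this would be the value reported by the balance or data
# system; it is simulated here for the sketch.
system_mean_mg = 100.3

tolerance_mg = 0.05  # acceptance criterion (assumption)
assert abs(system_mean_mg - expected_mean_mg) <= tolerance_mg, "I/O check failed"
print("Calculation check passed within +/- %.2f mg" % tolerance_mg)
```

The result of such a check is simply filed as one more test within the instrument qualification, keeping the single-layer approach argued for above.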
Software Silos or Software Continuum?
The next three software categories we will discuss are intended as a continuum rather than
discrete silos, so some interpretation may be necessary as to which category a system falls into.
This will need to be documented in your system risk assessments or validation plans, so "think
things through" is the take-home message.
Category 3 Software: What's in a Name?
Category 3 in GAMP 5 has been renamed from Standard Software to Nonconfigured Product to
sharpen the difference between this and category 4 software. This means that software that is
used as installed falls into category 3 and may (note the careful use of the word may) also include
software that is configurable (category 4) but is used either unconfigured or with the standard
defaults provided by the software supplier.
Despite the name, category 3 software is also configured, but only for the environment (run-time
configuration). It is this fact that distinguishes category 3 from category 4 software. What is
run-time configuration?

First, upon installation of a category 3 application, the software is capable of operating and
automating the business process without any modification; in fact, as noted in the GAMP 5
column of Table I, it cannot be changed in this respect. Other terminology used to describe
this type of application is canned software, commercial off-the-shelf (COTS) software, or even
off-the-shelf (OTS) software, but these are badly abused terms that can be misused to mislead
users and therefore will not be used in this column.
Second, run-time configuration is only the definition of items in the software to enable the system
to operate within the installed environment. Some typical run-time configuration parameters are
the definition of users and user types for authorized individuals, entry of the department or
company name into report headers, selection of units to present or report data, the default data
storage location (either a local or network directory), and the default printer. Reiterating the
statement above, the key characteristic of software in this category is that run-time configuration
does not change the automation of the business process or the collection and analysis of the data
and records generated by the software. This is in contrast with category 4 software, in which the
actual operation of the software is changed to match the laboratory business process.
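
To make the distinction tangible, here is a minimal sketch of the kind of run-time configuration just described, written as a plain Python dictionary. Every key and value is an invented example rather than any vendor's actual settings; the point is that nothing in it changes how the application acquires or processes data, which is precisely why the software stays in category 3.

```python
# Hypothetical run-time configuration for a category 3 spectroscopy
# application; it adapts the installed product to its environment only.
run_time_config = {
    "users": {                        # authorized individuals and user types
        "asmith": "analyst",
        "bjones": "administrator",
    },
    "report_header": "QC Laboratory, Anytown Site",  # department/company name
    "report_units": "mAU",                            # units for reporting data
    "data_directory": r"\\labserver\spectra",         # default storage location
    "default_printer": "LAB-PRINTER-01",
}

# Changing any of these affects where and how results appear, not how the
# business process is automated; a change of that second kind would push the
# software into category 4.
print(len(run_time_config), "run-time settings defined")
```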

As well as a simpler life cycle, which we will discuss in a later column, the software development
work of the vendor or supplier can be used to save validation effort within the laboratory, as
functional and design specifications are not expected from the user. Note that this life cycle does
not absolve the user from defining requirements and demonstrating intended use, but the testing
is only a single phase. This change helps practitioners in the field to interpret software better: you
could have the same software in category 3 or category 4, depending on whether the default
settings are used or the application is configured. To illustrate this point, I have been involved in
validating the same software application that was category 3 software when operating a
time-of-flight mass spectrometry (TOF-MS) system (here the application was used as installed)
but category 4 when used for bioanalysis with quantitative analysis using a triple-quadrupole MS
instrument (here electronic signatures were configured for use in the application).
Category 4: Configured Products Refined
The name of this category has changed to help refine and redefine the software categories; the
guide has moved from defining software as a "package" in GAMP version 4 to a "product" in
GAMP version 5. I believe this is to emphasize the commercial nature of category 3 and 4
software, which constitutes the bulk of the software used in laboratories today.
The major difference between category 3 and category 4 software, as mentioned earlier, is the
ability to modify the function of the software to match a business process. The user has the means
and knowledge to change the functionality of the software in a way that changes the results it
outputs. As a direct consequence, this triggers increased validation effort. There are many ways to
achieve this, but the essence is to take standard software modules that provide the basic
functionality to automate a process and change them with configuration tools. These tools are
provided by the vendor of the product, hence configuration, rather than using an external
language to write custom code that is attached to the product. However, these tools can vary in
their nature from simple configuration buttons that turn a feature on or off, to graphical
drag-and-drop, to a modular "configuration" language that typically writes large blocks of
software, hence custom code, which raises the debate of configuration versus customization.
Understanding the difference between configuration and customization is the key to managing
software and validation risk. However, many in the laboratory are seduced by suppliers'
marketing literature that talks of configuration when in reality it is customization, as we'll discuss
now. The main point I would like to make is caveat emptor: buyer beware. The user is responsible
under the regulations, and if you are seduced by the marketing literature, it's your problem.
Category 4 and 5 Software: Configure versus Customize, Where Is the Line?
Configuration and customization are terms that are poorly defined in the validation world and
frequently used interchangeably, especially in a vendor's marketing literature. It is important to
understand the difference between these two terms, as they mean entirely different things and
consequently can have a dramatic impact on the amount of validation work that you undertake.
The problem is that even the FDA and GAMP have not been able to define these terms:
configuration and custom are missing from the glossaries of the FDA (5) and GAMP 5 (1).
However, an issue arises when a "configuration" or scripting language is provided by a software
vendor to enable the modification of a program's function to fit the business process. Regardless
of terminology, this language writes custom code within the application. The problem is that a
vendor's marketing department has typically cottoned on to the idea that custom code is a bad
idea and decided to call it "configuration" instead or, worst of all, COTS software, without
defining what the latter term means.
Note what GAMP 5 says on this point in Appendix M4 (1): "custom software components . . .
developed with an internal scripting language, written or modified to satisfy specific user
business requirements, should be treated as Category 5." In plain English this means that the
so-called configuration is really customization. Therefore, any definition of customization needs
to include the problem of internal language customizations. So, here is my attempt at defining
these two terms.

Configuration: The modification of the function of a software product to meet business process or
user requirements using tools provided by the supplier. These tools can include input of
user-defined text strings for drop-down menus, turning software functions on or off, graphical
dragging and dropping of information elements, and creation of specific reports using the
standard functionality of the package.
Customization: The writing of software modules, scripts, procedures, or applications to meet
business requirements. This can be achieved using an external programming language (such as
C++, Visual Basic for Applications, or PL/SQL for database procedures), macro instructions, or
an internal scripting language specific to a commercial application.
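
The sketch below tries to make the boundary concrete with a deliberately simple, hypothetical example: the first block only selects among supplier-built options, whereas the second introduces new executable logic and would therefore be category 5, however a vendor's brochure labels it.

```python
# CONFIGURATION: selecting among options the supplier already built and
# tested; no new executable logic is created.
configuration = {
    "electronic_signatures": True,   # turn a standard feature on or off
    "result_units": "ng/mL",         # pick from supplier-provided choices
    "report_title": "Bioanalytical Results",  # user-defined text string
}

# CUSTOMIZATION: new executable logic written by or for the user, even if it
# is typed into a screen the vendor calls "configuration". This is custom
# code (category 5) and must be specified and tested in its own right.
def custom_reported_result(raw_result: float, dilution_factor: float) -> float:
    """Hypothetical user-written correction applied to every reported result."""
    return raw_result * dilution_factor

print(custom_reported_result(12.5, 2.0))  # 25.0
```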

These two definitions are very important, as you will have to determine whether the software
you have is customized or configured. Getting it wrong can result in generating the wrong data or
receiving a noncompliance during an inspection, as we will discuss in the next section.
Category 5: Custom Macros, Modules, and Applications
Not much has changed in GAMP 5 apart from the name change from custom software to custom
application. This is the highest risk software, as it may be unique and may not be subject to the
same rigors of specification and testing as commercial products, because it could be undertaken
in-house or outsourced to a commercial software company.
However, does the name of this category really reflect the situation? The problem with this
category name is the use of the word application. This implies a single application, but this does
not reflect the whole reality. As GAMP 5 notes (1), software is a continuum, and therefore you
can have a configurable product that has custom software or custom modules written to aid the
functionality of the installed application. A LIMS is a case in point: there is a language provided
by the LIMS vendor to change the software function (this is category 5 software, as discussed in
the previous section) and the output is custom code. Note that the LIMS marketing people would
have you think that it is configuration. No, it is custom software, as it uses an internal scripting
language.
This software category implicitly includes macros written, say, within a spectrometer application
to produce a shortcut for processing or manipulating data; these macros are custom software and
not an application per se. Each macro needs to be validated and controlled to ensure that it does
what it is supposed to do. Also in this software category are the user-defined programs written on
category 2 instruments; again, each will be validated.
Therefore, each application, module, user-defined program, or macro needs, as a minimum, to be
specified, version controlled, built, and tested (including integration testing with the commercial
application, as applicable) to ensure the quality of the software. Furthermore, the custom code
modules will need to have their source code managed and backed up to prevent overwriting or
loss, respectively.

Users and the Software Screwup Factor


Let us return to the discussion of software classification from a slightly different perspective. At
this point I would like to introduce you to a rather vile and obnoxious four-letter word. The word
is user. We now enter the realm of the user (defined as either someone in the laboratory or in IT;
I'm not fussy here) who has a multitude of possibilities to screw up software and the analytical
results generated by it. So we'll take another look at the GAMP software categories from the
perspective of the ability of the user to change the operation of software and hence influence,
manipulate, or screw up the results.
Category 1: The ability of users to influence results with infrastructure software is the lowest of
all five software categories. Software in this category can evolve through patching, service packs,
and the occasional new version, but there is no validation in the classical sense. So instead of
validating a steady state, one qualifies each upgrade. Furthermore, the layered structure here
ensures that there is no impact, or only an isolated impact, on the data due to bugs or mistakes. So
the ability of the users to influence results is typically limited to "does the software work or not."
Category 2: Traditional firmware is similar to category 1 software in that the instrument's
operations are fixed and the users cannot change the functions. Therefore, the screwup factor is
relatively low and hence the risk to the data. However, user-defined programs are different and
will be discussed under category 5, as there is a larger risk for incorrect data generation.
Category 3: With category 3 software we start to enter specialized rather than general software
applications. Here the software is used virtually as installed, with only the run-time configuration
changeable by the user. As there are relatively few changes to be made to the software (perhaps
where the data and records are stored being the most critical), the risk is greater than in the last
two categories but still relatively low.
Category 4: In this category there is a wider spectrum of configuration tools available to the users
(both trained and untrained in their use) and hence the screwup factor rises accordingly. With the
most configurable systems the risk rises to a medium level, and with more sophisticated
configuration tools it increases further, as more mistakes can be made. Alternatively, to reduce
the screwup factor, use the software with as many default settings as practicable.
Category 5: Here is the software category in which the user screwup factor is at its highest and
the impact of errors the greatest. There must be proper and effective controls in place for the
overall validation of the macros, modules, and applications to prevent users from doing anything
stupid. However, useless users can be told by clueless management to cut corners in the name of
speed and efficiency and not to fully specify or test the software. As custom software can contain
data handling routines, these could act unpredictably or errors could be introduced that change
data and records in subtle and undocumented ways that have not been considered due to poor
specifications. Hence the need for control of the overall process.

Let's summarize the impact of users on software. The further down the software classification
you go, the more specific in function the software becomes, coupled with an increasing ability of
the software to directly influence the data and calculations and hence the final results. In other
words, the ability of the users to screw up the software and the data increases as you go further
down the software categories. Therefore, controlling and regulating the ability of users to
configure or customize software is the key here. From a personal perspective, avoid writing any
software unless there is a good business case for doing so. My rationale is that your function in
the laboratory is analytical science, not software development.
A Modified Software Classification
So if you have been following the discussion so far, we need to revise the GAMP 5 software
categories to take account of the world in the laboratory. Presented in Table II is my modified
classification of revised software categories, a combination of the GAMP 4 and GAMP 5
classifications for the laboratory that is intended to be pragmatic and usable while managing risk
effectively.
First, we return to five categories of software but use, or in one case modify, the GAMP 5 titles
for them. The five categories are shown in Table II together with the types of software examples
found in each one.
Second, let us review each of the categories to see if and how they have been modified:

Category 1: Use the existing GAMP 5 classification; no changes proposed.
Category 2: Firmware. This category has been reinstated for many laboratory instruments to
make the software classification congruent with the qualification approach in USP <1058> for
Group B instruments. However, care must be taken with some systems that allow user-defined
programs to be written; these must be validated and controlled separately in addition to the basic
instrument qualification.
Category 3: Use the existing GAMP 5 classification; no changes proposed.
Category 4: Use the existing GAMP 5 classification; no changes proposed.
Category 5: Modify the name to Custom Modules and Applications and expand the scope to be
more explicit regarding what constitutes custom software, especially in a laboratory environment.
Summary
We have reviewed the GAMP 5 software categories and highlighted where there are
strengths in the changes but also noted the problems with some of the categories from the
laboratory perspective. I have also suggested an alternative classification that would
benefit the laboratory and reinstate category 2 software. As with all my writing, the ideas
and suggestions in this column are to get you to think, adopt, adapt, or reject as you see
fit.
Acknowledgment
I would like to thank Lajos Hadju for his very helpful review and constructive comments
during the preparation of this column. It was his input on the software classification and
the impact of the users that has improved the content of this column.
R.D. McDowall is principal of McDowall Consulting and director of R.D. McDowall
Limited, and "Questions of Quality" column editor for LCGC Europe, Spectroscopy's
sister magazine. Address correspondence to him at 73 Murray Avenue, Bromley, Kent,
BR1 3DJ, UK.
References
(1) Good Automated Manufacturing Practice (GAMP) Guide, version 5 (International Society
for Pharmaceutical Engineering, Tampa, Florida, 2008).
(2) Good Automated Manufacturing Practice (GAMP) Guide, version 4 (International Society
for Pharmaceutical Engineering, Tampa, Florida, 2001).
(3) United States Pharmacopeia, General Chapter <1058>, "Analytical Instrument Qualification."
(4) R.D. McDowall, Spectroscopy 24(4), 20-27 (2009).
(5) Glossary of Computerized System and Software Development Terminology (US Food and
Drug Administration, 1995).
(6) Current Good Manufacturing Practice for Finished Pharmaceuticals, 21 CFR 211.68(b).
