Have you ever looked at the expiry date on a drug package and wondered how it is determined? In today’s blog article we will discuss this very topic from an analytical perspective.
Every drug product has a shelf life after which it starts to degrade. This shelf life is determined by evaluating storage conditions during drug development. Formal stability studies (long-term studies of at least 12 months and accelerated stability studies of at least 6 months) are required before submission of the drug’s dossier to the regulatory authorities. These timelines are far too long to wait for when developing stability-indicating analytical methods, i.e. methods used to quantitate the decrease of the active pharmaceutical ingredient (API) over time due to degradation and the corresponding increase of degradants. To save time, the analyst therefore has to generate degraded samples artificially: the API is forced to degrade by stressing it with different means such as acid, base, oxidants, heat, humidity and light, and the resulting degradants are then analyzed. Two basic aspects of the drug are considered afterwards:
- Potency: This essentially means the evaluation of the API in the sample. It stands to reason that under stress conditions the API will degrade and hence become less effective.
- Determination of the degradation products (by-products): As the API degrades, degradation products arise. These products might be toxic and must be identified and analyzed. As a rule of thumb, impurities amounting to more than 0.1% of the parent drug must be quantifiable.
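To make the 0.1% rule of thumb concrete, here is a minimal sketch that flags impurity peaks whose area exceeds 0.1% of the parent peak. All peak areas and names are invented for illustration, and area percent relative to the parent is used as a simple stand-in for a properly calibrated quantitation.

```python
def flag_impurities(peak_areas, threshold_pct=0.1):
    """Return {impurity: area % of parent} for peaks above the threshold.

    peak_areas: dict of peak name -> integrated HPLC peak area,
    with the parent drug stored under the key "API".
    """
    api_area = peak_areas["API"]
    flagged = {}
    for name, area in peak_areas.items():
        if name == "API":
            continue
        pct = 100.0 * area / api_area  # area percent relative to parent peak
        if pct > threshold_pct:
            flagged[name] = round(pct, 2)
    return flagged

# Hypothetical integration results from a stressed sample:
areas = {"API": 985_000, "impurity_A": 1_400, "impurity_B": 620}
print(flag_impurities(areas))  # → {'impurity_A': 0.14}
```

Here impurity_A (0.14% of parent) would need to be quantified and identified, while impurity_B (about 0.06%) falls below the threshold.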
Designing a forced degradation study and the corresponding stability-indicating detection method requires three main considerations:
- selection of the degradation conditions,
- choosing appropriate drug concentration(s), and
- selection of an appropriate method and detector.
A clear understanding of the drug’s synthesis pathway and of some of its physicochemical properties often allows a better understanding of the components that can be anticipated in the final product. As mentioned above, degradation can be achieved by applying different stress types such as hydrolytic, oxidative, photolytic or thermal stress. Accordingly, different reagents, exposure and storage conditions, and sampling days must be considered. The drug concentration should be high enough that degradation products are generated at detectable levels; on the other hand, studies should also be performed at the concentration the drug will have in its final formulation. As for the method, reversed-phase HPLC (RP-HPLC) is the most widely used choice, but other techniques such as capillary electrophoresis (CE) or gas chromatography coupled with mass spectrometry (GC-MS) may also be applicable. The aim is to separate as many components as possible and determine them as distinct peaks. Common method variables are the choice of solvents, pH, temperature, and column type. Regarding the detector, a UV detector covering a detection range from 0.1–0.5% up to 100% of the parent drug concentration is usually suitable, although a diode array detector (DAD) is sometimes the better option.
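How well two neighbouring peaks are separated is commonly expressed by the chromatographic resolution Rs = 2(t2 − t1)/(w1 + w2), where t is the retention time and w the baseline peak width; an Rs of at least 1.5 is generally taken to indicate baseline separation. A short sketch with invented retention times and widths:

```python
def resolution(t1, w1, t2, w2):
    """Resolution between two adjacent peaks.

    t1, t2: retention times (t2 > t1), w1, w2: baseline peak widths,
    all in the same time unit (e.g. minutes).
    """
    return 2.0 * (t2 - t1) / (w1 + w2)

# Hypothetical degradant peak at 5.4 min next to the API peak at 6.2 min:
rs = resolution(t1=5.4, w1=0.4, t2=6.2, w2=0.5)
print(f"Rs = {rs:.2f}")  # → Rs = 1.78, above the ~1.5 baseline-separation mark
```

During method development, solvent, pH, temperature, and column type are varied until all relevant peak pairs reach an acceptable resolution.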
Validating stability-indicating test methods is more complex than validating other method types. The usual validation parameters for quantitative impurity tests according to the ICH Q2(R1) guideline must be evaluated. How much degradation is required to validate the corresponding stability-indicating method is widely debated, and no general answer can be given. During forced degradation studies, degradants that do not normally occur in aged products may be formed. Hence the determination of peak purity is required. Peak purity provides additional information about method specificity: it demonstrates how well the main peak is separated from the degradants and ensures that no other peak underlies, overlaps, or co-elutes with the API peak. This point is also addressed by the FDA’s 2015 guidance for industry “Analytical Procedures and Methods Validation for Drugs and Biologics”, which recommends demonstrating the specificity of a stability-indicating test by performing a combination of challenges, such as “use of samples spiked with target analytes and all known interferences; samples that have undergone various laboratory stress conditions; and actual product samples (produced by the final manufacturing process) that are either aged or have been stored under accelerated temperature and humidity conditions”. Once the method is validated, it can be used to support the formal stability studies under specific temperature and humidity conditions to determine the shelf life.
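One simple way to illustrate the idea behind a DAD-based peak purity check is to compare the UV spectra recorded at different points across the peak (apex versus up-slope or down-slope): if a degradant with a different spectrum co-elutes, the spectra no longer match. The sketch below uses made-up absorbance vectors and a plain cosine similarity; real chromatography data systems use more elaborate, vendor-specific purity algorithms.

```python
def spectral_similarity(a, b):
    """Cosine similarity between two absorbance spectra sampled at the
    same wavelengths; 1.0 means identical spectral shape."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

# Hypothetical spectra taken across one chromatographic peak:
apex     = [0.10, 0.45, 0.80, 0.40, 0.05]
up_slope = [0.09, 0.44, 0.79, 0.41, 0.06]  # nearly identical -> pure peak
shoulder = [0.30, 0.20, 0.60, 0.70, 0.25]  # different shape -> possible co-elution

print(f"apex vs. up-slope: {spectral_similarity(apex, up_slope):.4f}")
print(f"apex vs. shoulder: {spectral_similarity(apex, shoulder):.4f}")
```

A similarity close to 1.0 across the whole peak supports spectral homogeneity; a noticeably lower value on one flank hints at an underlying co-eluting component.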
To summarize, forced degradation studies are designed to generate product-related degradation variants and to develop stability-indicating analytical methods for the determination of the degradation products formed during formal stability studies. During validation, forced degradation study samples are used to demonstrate specificity.
According to the FDA’s recently issued answers to cGMP questions regarding laboratory controls, the question was raised whether cGMP requires forced degradation studies in every case when establishing stability-indicating methods for drug products (DPs). The answer is no, provided that the degradation pathways and the suitability of the method can be substantiated by other data (such as stress tests of the drug substance (DS), or accelerated and long-term studies of the DS or DP), with a focus on specificity. In addition, the decision on the extent of forced degradation studies must be documented with a rationale.