It seems like the most elementary of research principles: Make sure the cells and reagents in your experiment are what they claim to be and behave as expected. But when it comes to antibodies—the immune proteins used in all kinds of experiments to tag a molecule of interest in a sample—that validation process is not straightforward. Research antibodies from commercial vendors are often screened and optimized for narrow experimental conditions, which means they may not work as advertised for many scientists. Indeed, problems with antibodies are thought to have led many drug developers astray and generated a host of misleading or irreproducible scientific results.
This week, more than 100 researchers, antibody manufacturers, journal editors, and funders met in Pacific Grove, California, to hash out standardized approaches to antibody testing. “Cell authentication is a walk in the park compared to what we need to do with antibodies,” says Leonard Freedman, president of the Global Biological Standards Institute (GBSI), a Washington, D.C.–based nonprofit that advocates for better basic research practices and that sponsored the meeting. In the coming months, the attendees hope to come up with a scoring system that will identify the most reliable antibodies for a given type of experiment and, ultimately, make results more reproducible across experiments.
Antibodies are typically made in animals such as rabbits or goats: a protein of interest is injected, and the animal’s B cells respond to the foreign molecule by producing the Y-shaped proteins, which can then be isolated from its blood. But batches of the same antibody from different animals may cross-react with different proteins. And it’s hard to trace a given batch to its origin, because antibodies are often relabeled and resold by another vendor under a new name, Freedman says.