BestDosage
Blog / Authority · 2026-04-21 · 12 min read

How an Analytical Chemist Evaluates Wellness Claims: My Framework

An analytical chemist's personal framework for cutting through wellness BS. How Chad reads studies, evaluates supplements, spots scams, and built a scoring system for 36,000+ practitioners.

Chad Waldman

Founder & Analytical Chemist

Most wellness claims fail basic scientific scrutiny. An analytical chemist's evaluation framework: check the study design (RCT > observational > case report), verify the dose matches research (most supplements are underdosed), look for conflicts of interest, and never trust a claim that can't cite a PMID.

I have an analytical chemistry degree. I spent years in labs where precision wasn't a preference — it was the point. When you're running gas chromatography on a pharmaceutical intermediate and your result is off by 0.3%, you don't ship it. You rerun it. You find out why.

Then I got into wellness marketing.

I don't mean I became a wellness influencer. I mean I took a job at a company that marketed supplements. And the gap between the science I'd been trained on and the science the marketing team was citing was so wide you could park a clinical trial in it.

I kept seeing brands cite studies that, when I actually read them, said the opposite of what the marketing claimed. A study showing modest benefit at 500mg of magnesium glycinate cited to sell a product containing 50mg of magnesium oxide. A "clinically proven" label slapped on a product whose referenced study used a completely different formulation. A "peer-reviewed" claim linking to a pay-to-publish journal with no impact factor.

I wasn't angry at first. I was confused. How does an entire industry get away with this? The answer turned out to be simple: most people don't read the studies. Most people can't read the studies. The language is designed to be impenetrable. The industry counts on that.

So I started building a framework. Not because I'm smarter than everyone — I'm not — but because I had the training to read clinical literature and the stubbornness to actually do it. That framework became the foundation of everything BestDosage does today.

This article is that framework. All of it. How I read studies. How I evaluate supplements. How I spot scams. How I went from evaluating products to evaluating practitioners. And why dosage — not the ingredient — is the thing that actually matters.

Why a Chemist Cares About Wellness

I didn't set out to build a wellness directory. I set out to answer a question: why is there a gap at all between what the science says and what wellness brands claim?

The answer is money. Obviously. But the mechanism is more interesting than that. Here's how it works: a researcher at a university publishes a study showing that curcumin reduces inflammatory markers in a specific population at a specific dose over a specific timeframe. The study is careful. The conclusions are hedged. The limitations section is longer than the results section.

Then a supplement company's marketing team reads the abstract — just the abstract — and turns it into "CLINICALLY PROVEN ANTI-INFLAMMATORY." The dose in the study was 1,000mg of a patented bioavailable form. The dose in the product is 200mg of standard turmeric extract with 3% curcuminoids. The study ran for 12 weeks with physician oversight. The product's label says "take daily for best results."

Same ingredient. Completely different reality. And this is the gap I couldn't stop seeing once I started looking.

The name BestDosage came from this realization. The ingredient is almost never the problem. The dose is the problem. The form is the problem. The bioavailability is the problem. Every wellness conversation starts with "what should I take?" The right question is "how much, in what form, and does the research support that specific protocol?"

I went from skeptic to obsessive researcher to builder. I couldn't fix the supplement industry. But I could build tools that help people evaluate claims the way I do — with a framework, not with feelings.

My Study-Reading Framework (5 Steps)

This is the exact process I use every time someone sends me a study, a supplement makes a claim, or a practitioner cites research. Five steps. Takes me about 10 minutes per study now. Used to take an hour.

Step 1: Check the Study Design

Not all studies are created equal. The hierarchy of evidence matters more than anything else. Here it is, from strongest to weakest:

  1. Systematic review / meta-analysis — Pools data from multiple studies. The gold standard. If a meta-analysis says something works, that's the strongest signal you'll get.
  2. Randomized controlled trial (RCT) — Participants randomly assigned to treatment vs. placebo. Controlled for confounders. This is what most people mean when they say "the science says."
  3. Cohort study — Follows a group over time, observes outcomes. Strong for identifying associations. Can't prove causation.
  4. Case-control study — Compares people with a condition to people without. Retrospective. Useful but prone to bias.
  5. Case series / case report — "We saw this happen in these 5 patients." Interesting. Not evidence.
  6. In vitro / animal study — "It killed cancer cells in a petri dish." So does a handgun. Preclinical data is hypothesis-generating, not conclusion-generating.
  7. Expert opinion — "Dr. Famous thinks this works." That's nice. Where's the data?

A systematic review from the Cochrane Library (PMID: 14583977 — their methodology paper explains why this matters) carries more weight than a hundred Instagram testimonials. A single in vitro study cited to sell a supplement is essentially meaningless for clinical decision-making.

When a brand says "studies show," I immediately ask: what kind of studies? If the answer is in vitro or animal models only, that's not a lie — but it's not what most consumers think "studies show" means.
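To make the ranking concrete, here's a toy Python sketch that encodes the hierarchy above as an ordered scale. The tier order mirrors the list; everything else (names, the comparison helper) is just illustration, not any formal methodology.

```python
# Illustrative only: the evidence hierarchy as an ordered scale,
# weakest first. A higher index means a stronger study design.
EVIDENCE_HIERARCHY = [
    "expert opinion",
    "in vitro / animal",
    "case report",
    "case-control",
    "cohort",
    "rct",
    "meta-analysis",
]

def evidence_rank(study_type: str) -> int:
    """Return 0 (weakest) through 6 (strongest) for a study type."""
    return EVIDENCE_HIERARCHY.index(study_type.lower())

def stronger(a: str, b: str) -> str:
    """Of two study types, return the one higher on the hierarchy."""
    return a if evidence_rank(a) >= evidence_rank(b) else b
```

So when a brand's citation list is all in vitro work, `stronger("rct", "in vitro / animal")` tells you exactly what's missing.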

Step 2: Look at Sample Size and Duration

A study with 12 participants that ran for 4 weeks is not the same as a study with 300 participants that ran for 12 months. Both are technically "clinical studies." One is meaningful. The other is a pilot at best.

My rough thresholds: for supplements and lifestyle interventions, I want at least 50 participants per arm and at least 8 weeks of intervention. For anything making serious health claims, I want 100+ per arm and 6+ months. Anything smaller is exploratory — worth noting, not worth citing as proof.

Duration matters because biology is slow. Inflammatory markers can shift in 2-4 weeks. Hormonal changes take 6-12 weeks to stabilize. Cardiovascular adaptations take months. A 2-week supplement trial showing "improvements" is probably measuring noise.
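My rough thresholds above translate into a one-function screen. This is a sketch of my personal cutoffs, not a statistical power calculation; the numbers are the ones stated in this step.

```python
def classify_study_size(n_per_arm: int, weeks: int,
                        serious_claim: bool = False) -> str:
    """Rough screen using my personal thresholds:
    supplements/lifestyle: >= 50 per arm and >= 8 weeks;
    serious health claims: >= 100 per arm and >= 26 weeks (~6 months).
    Anything below is exploratory, not proof."""
    min_n, min_weeks = (100, 26) if serious_claim else (50, 8)
    if n_per_arm >= min_n and weeks >= min_weeks:
        return "adequate"
    return "exploratory"
```

A 12-person, 4-week trial comes back `"exploratory"` every time, no matter how breathless the press release.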

Step 3: Check Who Funded It

This is the step most people skip. It's the most important one after study design.

A 2003 systematic review in BMJ (PMID: 12468485) found that industry-sponsored studies were significantly more likely to report results favorable to the sponsor. This has been confirmed repeatedly since. It doesn't mean industry-funded research is automatically wrong. It means the incentive structure creates bias, and you should read the methods section more carefully when the funder has a financial interest in positive results.

I look for the funding disclosure — usually at the end of the paper. I check whether any authors have equity, consulting agreements, or board positions with the company whose product was studied. Conflicts of interest don't invalidate a study. They're a flag to scrutinize harder.

The cleanest evidence comes from independent academic research funded by NIH, NCCIH, or equivalent bodies with no product-level financial interest. When independent and industry-funded studies agree, confidence goes up. When only industry-funded studies exist, I stay cautious.

Step 4: Read the Actual Conclusion, Not the Abstract

Abstracts are marketing documents for scientists. I'm being slightly hyperbolic. Slightly.

The abstract tells you what the authors want you to know. The discussion and limitations sections tell you what you need to know. I've read studies where the abstract says "significant improvement in primary endpoint" and the limitations section says "the improvement was clinically marginal and may not persist beyond the intervention period."

Both statements are true. One is useful for supplement marketing. The other is useful for making health decisions. I always read the full text. If I can't access the full text, I check Sci-Hub or email the corresponding author. If a claim is based on a study I can't fully read, I don't trust it.

Specific things I look for in the discussion: Did the authors acknowledge limitations? Did they overstate their findings? Did they note that replication is needed? A good study is humble about what it proved. A bad study pretends it proved more than it did.

Step 5: Check if the Dose in the Study Matches the Dose in the Product

This is my pet obsession. This is the step that made me build BestDosage.

Real example: ashwagandha. A well-known RCT (PMID: 23439798) showed that 300mg twice daily of KSM-66 ashwagandha root extract significantly reduced cortisol levels and improved stress scores. Total daily dose: 600mg of a specific patented extract with standardized withanolide content.

Go look at 10 random ashwagandha supplements on Amazon. At least half will contain a different extract type, a lower dose, or an unstandardized preparation — and cite this exact study. The study used 600mg of KSM-66. The product contains 250mg of generic ashwagandha root powder. Those are not the same thing. Not even close.

Another example: vitamin D. A major RCT (PMID: 30415629 — the VITAL study) used 2,000 IU daily. The old RDA of 400 IU? Based on preventing rickets in children, not optimizing adult health. Same ingredient. Five-fold dose difference. Completely different clinical relevance.

Every time I evaluate a product or a claim, I check: does the dose in the product match the dose in the study being cited? If the answer is no, the citation is decoration, not evidence.
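That check is mechanical enough to write down. Here's a minimal sketch; the 80% tolerance is an arbitrary cutoff I'm choosing for illustration, not a published standard.

```python
def dose_matches_study(product_mg: float, study_mg: float,
                       product_form: str, study_form: str,
                       tolerance: float = 0.8) -> bool:
    """A citation only counts as evidence if the extract form matches
    and the product delivers at least `tolerance` (here 80%, an
    arbitrary illustrative cutoff) of the studied daily dose."""
    if product_form.lower() != study_form.lower():
        return False  # different form = different intervention
    return product_mg >= tolerance * study_mg
```

Run the ashwagandha example through it: 250mg of generic root powder against a 600mg KSM-66 study fails on form alone, before the dose is even considered.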

How I Read a Supplement Label in 60 Seconds

I do this almost reflexively now. Here's the checklist that runs through my head every time I pick up a bottle.

Active ingredient dose. Not total ingredient weight — active compound dose. If a label says "Turmeric 750mg" but doesn't specify curcuminoid content or percentage, that's a red flag. The curcuminoids are the active compounds. The turmeric root powder is the delivery vehicle. I want to know how much of what actually matters is in the capsule.

Match to clinical dose. Once I know the active dose, I check it against the literature. Is 750mg of turmeric with 95% curcuminoids in the range that studies used? Or is it a fraction of the studied dose dressed up in a big capsule?
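The arithmetic behind those two checks is one multiplication. A quick sketch, using the turmeric numbers above (the ~3% figure for plain root powder is a typical ballpark, not a measured value):

```python
def active_dose_mg(total_mg: float, standardized_pct: float) -> float:
    """Active compound delivered: label weight x standardization fraction.
    750 mg extract at 95% curcuminoids -> 712.5 mg of active compound;
    750 mg plain root powder at ~3% curcuminoids -> ~22.5 mg."""
    return total_mg * standardized_pct / 100.0
```

Two bottles, both labeled "Turmeric 750mg," can differ thirty-fold in what actually reaches you.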

Proprietary blend. Two words that should make you suspicious. "Proprietary blend" is the supplement industry's way of saying "we don't want you to know how much of each ingredient is in here." They list the ingredients but not the individual amounts — only the total blend weight. A "focus and energy blend" containing caffeine, L-theanine, and alpha-GPC at a combined 500mg could be 490mg caffeine and 5mg each of the other two. You have no way to know. That's the point.

Fillers and flow agents. Magnesium stearate, silicon dioxide, rice flour — these are manufacturing aids. They help powders flow through capsule-filling machines. They're generally safe in the amounts used. Titanium dioxide is more debated — the EU banned it as a food additive in 2022 (EFSA opinion, PMID: 33982120) while the FDA still permits it. I don't panic over flow agents, but I prefer brands that minimize them and are transparent about why they're there.

Third-party testing. This is the fastest quality signal. USP Verified, NSF Certified for Sport, and ConsumerLab approved all mean an independent lab has verified that what's on the label is what's in the bottle. No contamination. No underdosed ingredients. No label fraud. If a supplement doesn't have third-party testing, I ask why.

Want the full checklist as a printable? Download the practitioner evaluation worksheet — it includes supplement evaluation criteria alongside practitioner scoring.

Free Resource

The Chemist's Wellness Evaluation Toolkit

Chad's personal study-reading checklist + ingredient red flags + BestDosage scoring criteria summary. The framework behind the directory.


The 5 Wellness Red Flags I Spot Immediately

After years of evaluating products, practitioners, and claims, I've developed a pattern recognition that's almost involuntary. These five red flags are the ones I see most often — and the ones that should make you walk away fastest.

1. "Clinically Proven" Without a PMID

If a product or practitioner claims something is "clinically proven" and can't point you to a specific study with a PubMed ID, the claim is marketing copy. Full stop. "Clinically proven" is not a regulated term. Anyone can use it. The question is: clinically proven where? By whom? Published in what journal? With what study design?

I saw a supplement brand claiming their collagen peptide was "clinically proven to reduce wrinkles by 28%." I asked for the PMID. They sent me a white paper — not a peer-reviewed study — funded by their own company, with 23 participants, no control group, and a 4-week duration. That's not clinically proven. That's a brochure.

2. Before-and-After Photos With Different Lighting

This one is so common it's almost funny. The "before" photo: fluorescent overhead lighting, no makeup, frowning expression. The "after" photo: warm golden lighting, professional styling, smiling. The transformation isn't the product. It's the photographer.

I look for consistent lighting, same angle, same camera distance, no filters, same time of day. When those conditions are met, before-and-after comparisons can actually be informative. When they're not — and they almost never are in wellness marketing — you're looking at a lighting demo, not a product demo.

3. "Natural" Used as a Safety Claim

Arsenic is natural. Lead is natural. Hemlock is natural. Socrates died from something natural. The word "natural" tells you nothing about safety, efficacy, or quality. Absolutely nothing.

This is the one that trips up smart, educated people. The logical chain goes: natural = from nature → nature = good → therefore natural products are safe. Every link in that chain is broken. Some of the most toxic substances on Earth are entirely natural. Some of the safest medications are entirely synthetic.

When I evaluate a product or a practitioner who leads with "all-natural" as their primary selling point, I know they're marketing to emotion, not evidence. Natural can be fine. Natural can also be dangerous. The question isn't whether it's natural — it's whether it works and whether it's safe at the dose being recommended.

4. Proprietary Blends Hiding Underdosed Ingredients

I mentioned this in the label-reading section, but it deserves its own red flag because it's so pervasive. A blend of 15 ingredients totaling 800mg means the average dose per ingredient is 53mg. Some of those ingredients — say, lion's mane mushroom — need 500-1000mg to reach clinically studied doses. If it's hiding inside a 15-ingredient blend at 800mg total, it's fairy dust. It's there for the label, not for your body.

The worst offenders list 10-20 ingredients on the front label in big font, making it look comprehensive, while the Supplement Facts panel reveals a single proprietary blend with no individual doses. This is legal. It should not be.
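The blend math is worth writing out, because it's the fastest way to call a bluff. A sketch of the two checks I run (the example clinical doses below are placeholders, not a real product's numbers):

```python
def blend_average_mg(total_blend_mg: float, n_ingredients: int) -> float:
    """Best-case average per ingredient if the blend were split evenly.
    In practice the split is undisclosed -- which is the whole problem."""
    return total_blend_mg / n_ingredients

def blend_red_flag(total_blend_mg: float, clinical_doses_mg: list) -> bool:
    """Flag the blend if even the ENTIRE blend weight couldn't cover the
    clinically studied doses of its listed ingredients."""
    return total_blend_mg < sum(clinical_doses_mg)
```

An 800mg blend with 15 ingredients averages about 53mg each, and if just three of those ingredients need 500mg, 300mg, and 200mg to match their studies, the whole blend can't contain even those three at studied doses.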

5. Celebrity Endorsements Replacing Clinical Evidence

A famous person holding a product is not a study. A famous person saying "I use this every morning" is not a meta-analysis. Celebrity endorsements work because humans are wired for social proof — if someone I admire does something, it must be good. But social proof is a cognitive shortcut, not an evidence standard.

I've seen wellness brands spend more on a single celebrity partnership than on all their product testing combined. The ROI makes sense for them — endorsements sell. But they tell you nothing about whether the product works. When I see a brand leading with celebrity endorsements and burying or omitting clinical evidence, I know where their priorities are. And they're not with your health.

Why I Built a Scoring System

For years my framework was a tool I used for myself. I'd evaluate a supplement, share my findings with friends who asked, move on. Then the requests started scaling. "Hey Chad, can you look at this practitioner for me?" "What do you think of this naturopath?" "Is this functional medicine doctor legit?"

The questions shifted from products to people. And I realized the same framework applied.

If I could spot a bad supplement label in 60 seconds, could I spot a bad practitioner in 60 data points?

Turns out, yes. The signals are the same. Evidence orientation. Transparency. Conflicts of interest. Scope awareness. Whether they can cite research when asked. Whether they acknowledge limitations. Whether they coordinate with conventional care or operate in a silo.

So I built the BDS Score. A structured evaluation across 12 criteria, applied to every practitioner in our directory. Not a vibe check. Not star ratings from patients who loved the waiting room snacks. A systematic assessment of the things that actually predict whether a practitioner will help you or harm you.

It's the same framework I use for studies and supplements, translated into a practitioner evaluation system. Check the credentials (study design). Check the training depth (sample size). Check for conflicts of interest (funding). Read what they actually do, not what their website says (full text, not abstract). And verify that what they deliver matches what they promise (dose matches study).
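The shape of a system like that is a weighted average over criteria. The sketch below is illustrative only: the criterion names echo the signals listed above, but the weights are invented for the example and are not the actual BDS Score methodology.

```python
# Hypothetical weights, for illustration only -- not the real BDS Score.
WEIGHTS = {
    "credentials": 0.15,
    "evidence_orientation": 0.25,
    "transparency": 0.20,
    "conflict_of_interest": 0.20,
    "scope_awareness": 0.10,
    "care_coordination": 0.10,
}

def weighted_score(ratings: dict) -> float:
    """Weighted average of 0-100 criterion ratings. Weights sum to 1,
    so a practitioner rated 100 on everything scores 100."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)
```

The design choice that matters: weights are explicit and published, so anyone can see that, say, evidence orientation counts more than credentials alone.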

The full methodology is public: How the BDS Score works. I publish it because transparency is the whole point. If I'm asking practitioners to be transparent, the scoring system has to be transparent too.

Why Dosage Matters More Than the Ingredient

The name. People ask about it. Here's the answer.

BestDosage isn't about finding the best supplement. It never was. It's about a principle: the dose makes the medicine. Paracelsus said it in the 1500s. It's still the most important idea in pharmacology. And it applies to everything — not just pills.

Vitamin D: 400 IU daily versus 5,000 IU daily. Same molecule. Same supplement aisle. One prevents rickets in children. The other addresses clinical deficiency in adults and has been associated with immune modulation, mood regulation, and reduced respiratory infection risk (PMID: 28202713 — a meta-analysis of 25 RCTs on vitamin D and acute respiratory infections). Same ingredient. A 12.5-fold dose difference. Completely different clinical significance.

But here's where it gets interesting — and why BestDosage became a practitioner directory instead of a supplement review site.

Dosage applies to practitioners too. The average conventional primary care visit lasts 11-18 minutes. A functional medicine first visit lasts 60-90 minutes. Same patient. Same body. Same symptoms. Radically different amount of data collected. The "dose" of attention matters. The "dose" of investigation matters. An 11-minute appointment and a 90-minute appointment are not the same intervention, even if both practitioners have MD after their name.

Dosage applies to modalities. Two minutes in a cryotherapy chamber versus 30 minutes in an infrared sauna — both are "thermal therapies" but the dose, the mechanism, and the evidence are completely different. Three minutes of cold exposure triggers norepinephrine release (PMID: 10751106). Thirty minutes of infrared heat triggers cardiovascular adaptations comparable to moderate exercise (PMID: 25705824). Same category. Different doses. Different outcomes.

The name is the philosophy. Everything in wellness — every supplement, every practitioner, every modality, every protocol — comes down to dose. The right thing at the wrong dose is the wrong thing. The wrong thing at any dose is still wrong. And the right thing at the right dose, verified by the right evidence, administered by the right practitioner? That's what we're trying to help you find.

What I Got Wrong

If I'm asking you to trust my framework, I owe you my mistakes. Here they are.

I dismissed functional medicine for 3 years. I was wrong.

My chemistry training made me allergic to anything that sounded "alternative." Functional medicine sounded alternative. The marketing language was full of words that made my scientist brain itch — "root cause," "heal your gut," "detox pathways." I assumed it was all pseudoscience dressed in a lab coat.

Then I actually looked at the data. The Cleveland Clinic Center for Functional Medicine published a study in JAMA Network Open (PMID: 31651969) showing significantly improved patient-reported outcomes compared to propensity-matched conventional care patients. It wasn't a perfect study — it was observational, single-center, and couldn't be blinded. But it was published in a legitimate journal, used reasonable methodology, and showed real effects.

I also sat in a functional medicine appointment. Ninety minutes. A comprehensive health timeline from birth to present. Lab panels I'd never seen in conventional care. The experience was so different from my 11-minute annual physical that I couldn't dismiss it anymore. The model — investigate root causes, use a hierarchy of interventions, treat the whole system — makes scientific sense even where the RCT evidence is still catching up. I was wrong to dismiss it wholesale, and I said so publicly. Read my full assessment: Functional Medicine: The Complete Guide.

I thought all supplements were useless. Some genuinely fill gaps.

Coming from pharma-adjacent chemistry, I had a bias: if it's not a drug, it doesn't work. That's an overcorrection. The research on specific supplements at specific doses for specific populations is actually quite strong. Vitamin D in deficient individuals (PMID: 28202713). Magnesium glycinate for sleep quality (PMID: 33865376). Omega-3 fatty acids at 2-4g/day for triglyceride reduction (PMID: 12438303). Creatine monohydrate for strength and cognitive function (PMID: 12945830).

The issue was never "supplements don't work." The issue is "most supplements are poorly formulated, underdosed, and marketed with claims that exceed their evidence." Those are different problems. One is about the category. The other is about the industry. I conflated them for years.

I overweighted RCT evidence and underweighted clinical observation.

RCTs are the gold standard for a reason. But they're not the only standard. Some interventions can't be easily randomized — you can't double-blind a dietary change or a 90-minute clinical encounter. Observational studies, well-designed case series, and consistent clinical patterns across practitioners carry real information, even if they sit lower on the evidence hierarchy. I was too rigid about this early on. I've loosened — not my standards, but my willingness to consider the full evidence picture.

I underestimated how much practitioner quality varies.

I thought the credential was the signal. MD = good. No credential = bad. Reality is far more nuanced. I've evaluated MDs who sell proprietary supplement lines to every patient and naturopathic doctors who practice more evidence-based medicine than anyone in their zip code. The credential matters — but the practice philosophy, the evidence orientation, and the transparency matter more. That's why the BDS Score weights behavior and practice patterns, not just letters after a name.

The Bottom Line

I didn't build BestDosage because I think I'm smarter than everyone. I built it because I think everyone deserves the same tools I use to evaluate claims. A PMID. A dose check. A conflict scan. A framework.

The wellness industry is a $6 trillion global market built on a foundation of inconsistent regulation, misrepresented research, and consumer confusion. That's not cynicism — that's the data. The Global Wellness Institute publishes the numbers. The FTC publishes the enforcement actions. The gap between the two is enormous.

But within that market, there are real practitioners doing real work. Functional medicine doctors running comprehensive investigations that conventional care doesn't offer. Wellness technology centers using evidence-backed modalities at effective protocols. Supplement companies formulating honest products at clinical doses with third-party verification. They exist. They're just hard to find in the noise.

That's what the scoring system does. That's what the directory does. That's what this entire framework is for — not to tear down an industry, but to surface the people and products within it that actually hold up to scrutiny.

Five steps to evaluate any claim. Sixty seconds to read a supplement label. Five red flags that should make you walk away. And a directory of 36,000+ scored practitioners if you want someone who's already been through the framework.

Take the quiz if you're not sure where to start. Read the functional medicine guide if that's your interest. Browse the wellness technology guide if you're exploring modalities. Check the practitioner selection guide if you want to vet someone yourself. Or visit the about page if you want to know more about the chemist behind the curtain.

I'm Chad. Your chemist.

