Palmitoylethanolamide and Chronic Pain

Hi docs,

I believe Dr. Baraki mentioned a year or so ago that he hadn’t really encountered this supplement before, but I’m curious if anything has changed since. I was recommended this for chronic pain by my GP, and it is in use by local pain clinics as well.

I’ve read a number of studies on this, most showing a statistically significant effect, though the clinical effect was hit or miss. The dosing was rather low in some of those studies, however (300–600 mg/day for 14 days, for instance).

This study compared a wider range of applications with the same higher dosing (1200 mg for 3 weeks and 600 mg for 4 weeks). The obvious weakness is that it’s an observational study, with no blinding or placebo group.

However, the results seem to be in line with meta-analyses of RCTs.

Any comments on this from your own familiarity with pain research?

I didn’t find too many randomized controlled trials, but PEA does seem to have some effect on diabetic neuropathic pain. I am not sure of the magnitude of expected benefit compared with other available therapies for this condition.

I would not put much stock in observational data for this purpose, but I don’t have any other thoughts or expertise on this topic at this time.

Thanks for looking and for the response.

Re: observational data. I’m aware of the pitfalls of non-RCT data on interventions, but in this case the consistency of findings and the magnitude of effect were quite large across a broad range of conditions, both in populations already on medication and in those on none at all. Is potentially a common phenomenon with interventions in observational data when there’s no placebo comparison, even if other interventions are present and ineffective? Won’t hang my hat on that particular study in any case.

Is potentially a common phenomenon with interventions in observational data when there’s no placebo comparison, even if other interventions are present and ineffective?

Can you rephrase this question?

Sure, apologies for the nonsensical typo, too quick on the trigger when I can’t edit. My ESL is showing.

Is it common for observational studies of interventions to show exaggerated magnitudes of effect, even when the effect is found across a relatively wide range of subjects? For instance, this study consistently showed a fairly dramatic decrease in pain-scale ratings for a supplement (in some cases months after PEA supplementation stopped) across a variety of subjects with differing medication statuses. Does this sort of thing commonly, or at least occasionally, happen in the field, with subsequent RCTs showing little to no effect, or an effect limited to a narrow range of conditions?

Correct; it is not unusual for observational studies to find impressive apparent magnitudes of effect that are subsequently found to be null using more rigorous designs.