CK compares “What is Good Science?” to mental health (MH) service standards and finds a stark contrast: MH practice falls far short of quality science. If MH wants the respect of the scientific community it feels it deserves, it needs to step up. The current landscape is anything but good science.


It’s important to first note that clinical mental health (MH) studies can be, and usually are, good data-driven science, at least to whatever extent modern brain/psychiatric measurement allows. But this is NOT what happens in day-to-day MH practice. Instead, treatment on the street is wildly subjective, fraught with bias and untested, unreviewed opinions… Peer review? Not a chance.

Notice that MH in practice negates so many core fundamentals of “good science”: peer review, objectivity, & the separation of fact from opinion. Can “subjective science” even exist? Not really. Yet it sure does in MH; it’s the gold standard, and billions are made on questionable conclusions no other science would ever accept. It’s that simple.

Compounding this problem are the MH professionals themselves: they are so biased toward their own subjectively driven industry practices that they rarely challenge their own data’s integrity; they believe in it, as they were taught, trickle-down style. Step into this madness from the hard data sciences and you see exactly that: madness. The irony is palpable.

Let’s break it down…

Google: “What is good science?”

Tools & measurements are scientific mantras. Entire cities are filled with lab equipment: precision instruments you can touch, quantify, analyze & replicate data with.

The tools in MH? Questionnaires patients fill out in a 30–50-minute session after work on some grumpy Tuesday, or notes scribbled in a provider’s field book.

Let’s preface with that little gem, without even discussing the bias and flaws it introduces on its own.

(Partial list for brevity)

1) Facts vs Opinions:

Therapy & psychiatry base conclusions on subjective opinions. That’s no secret. Usually one provider in a quiet room, alone with one patient. Unchallenged. Unverified. Untested. Little quantitative data present, if any.

2) Acceptance of scientific ideas based on a process of publication & peer review:

Do I even need to elaborate here? Therapy & psychiatry conclusions are almost never peer reviewed. In my 20 years of MH services, not one thing a therapist told me was ever double-checked, including life-altering diagnoses drifting unchallenged to this very day.

Do patient treatment plans get published for peer review beyond clinical studies and temporary licensing supervision? Nay. Should they? Yes. Why not? No one makes them.

3) Science is only as good as the data it uses; bad science can lead us astray:

MH professionals are the first to admit it: they wish they had better data to base conclusions on. But the brain is hard; the soul, tricky. Understandable.

What is not OK is persisting the false narrative that MH is indeed a high-quality science full of accuracy & precision. We hear stories of bad therapy & broken MH services every day: it ruins lives.

My own family and lover have been led astronomically astray by exactly this: horrible untested opinions that derailed my life, logistically & spiritually. The therapist is long gone, completely unaware of the consequences she handed us all. Lather, rinse, repeat: zero repercussions or consequences for failure.

4) Science follows certain rules & guidelines:

Anyone who has sampled the landscape of MH knows one thing well: treatment approach varies wildly from clinician to clinician.

If MH were McDonald’s, burgers and fries would be cooked differently and taste different at every location. Lack of quality control, routine supervision, and peer review ensures this variability indefinitely. It’s appalling, not good science.

5) Replication is vital to good science:

Try describing how your therapist arrived at your diagnosis or treatment, then imagine relaying it to a new therapist to replicate verbatim, measurements and all. Good luck.

I’ve seen my own diagnosis reached from a slurry of subjectively driven observations over arbitrary lengths of time; no data measured. Since then, new MH providers simply take this old diagnosis at its word, or fall into confirmation bias they do nothing to mitigate.

Compare this to analyzing kidney function with a blood test. Yuck. Of course, no one is claiming MH should have the precision of a blood test; the brain is too hard to measure (currently). What I am saying is that MH conclusions should always come with a huge asterisk indicating their lack of precision and accuracy, not the overconfident scientific process MH claims to be.

What you find instead is a half-baked science full of overconfident therapists performing their own self-assessments. Junk science.


I’ve worked in the earth, engineering & computer sciences for over 20 years, hence my outrage. The reason MH gets deemed “junk science” is how horribly subjective and opinion-based its data are. The conclusions drawn from them are appalling.

Data integrity in science can be objectively measured. Apply data analysis to MH practice data and you get exactly that: awful data, aka junk science.
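As one hypothetical illustration of what “objectively measured” means here (my example, not the article’s): a standard statistic for this is Cohen’s kappa, which scores how often two raters, say two clinicians diagnosing the same patients independently, agree beyond chance. A minimal sketch with made-up diagnosis labels:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance (1.0 = perfect)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of cases where the two raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical diagnoses of ten patients by two independent clinicians.
a = ["MDD", "GAD", "MDD", "PTSD", "GAD", "MDD", "GAD", "PTSD", "MDD", "GAD"]
b = ["MDD", "GAD", "GAD", "PTSD", "GAD", "MDD", "MDD", "PTSD", "MDD", "GAD"]
print(cohens_kappa(a, b))  # ≈ 0.69: moderate agreement, well short of replication
```

The point isn’t the formula; it’s that agreement can be quantified at all. In everyday MH practice no second rater ever looks, so even this basic check never happens.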
