One of the reasons I think therapy & psychiatry get critiqued (possibly unfairly) as a lower-quality science relates to how data is collected & assessed within the industry, as well as the lack of transparent risk/failure assessments relative to other sciences.

The bottom line is: it’s just not great data to base conclusions on. No other science tolerates so much subjective flexibility, much of it entirely outside clinician control, and some of it simply disregarded or left unmentioned as a matter of industry culture.

This should be no surprise (or offense); providers would surely love access to better data.

When you spend enough time with other data-driven sciences, you learn that standards apply to all data collection, standards that therapy/psychiatry inherently disregards, whether intentionally or inadvertently (the brain is simply hard to measure).

Instead, what I see is an overdependence on subjectively driven, poorly measured conclusions, paired with an overemphasis on positive attitude.

You see this a lot on Twitter (for example), where modalities are shared without any mention of their respective probabilities. Doing so can sometimes be deemed “negative attitude”. The same occurs in practice, sometimes negating failure altogether. I’ve never heard a provider say “our stuff doesn’t always work”.

Hard analytical data is difficult to find in mental health treatment. Compounding the issue: the data we do have is fraught with subjectivity & bias. Again, in many cases unavoidably so.

That said, I never hear the concepts of data bias/subjectivity systematically discussed in my treatments. It’s like no one wants to admit the lower-quality data we are stuck with exists (respectfully).

For example, a medical doctor may offer a surgical solution with a 75% chance of success and a 10% mortality rate, and relays these probabilities to patients and family very clearly.

In contrast, Cognitive Behavioral Therapy (CBT) modalities treating depression (as an example) can carry failure rates as high as ~30-50%, with statistical support that they can sometimes even make patients worse (a phenomenon that is itself poorly studied).

Something I would like to hear in my own treatments is: “Dear patient, we are going to try CBT for your depression. I am going to give you a lot of advice that is not 100% proven effective, with an X% probability of failing and a Y% chance of making you worse. Here are the clinical stats.”
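Coming from a data-collection background, I tend to think of that disclosure as a function of the outcome statistics. Here is a minimal sketch of what I mean; the function name and every number in the example call are hypothetical placeholders I made up for illustration, not real clinical statistics:

```python
def disclosure(treatment: str, condition: str,
               p_respond: float, p_no_benefit: float, p_worse: float) -> str:
    """Format a plain-language outcome disclosure from outcome probabilities.

    Probabilities are fractions in [0, 1]; they need not sum to 1 exactly,
    since real outcome categories often overlap or are incompletely studied.
    """
    for p in (p_respond, p_no_benefit, p_worse):
        if not 0.0 <= p <= 1.0:
            raise ValueError("probabilities must be between 0 and 1")
    return (
        f"We are going to try {treatment} for your {condition}. "
        f"Based on the available clinical data, roughly {p_respond:.0%} of "
        f"patients respond, {p_no_benefit:.0%} see no benefit, and "
        f"{p_worse:.0%} get worse. Here are the underlying studies."
    )

# Hypothetical numbers, for illustration only:
print(disclosure("CBT", "depression", 0.55, 0.35, 0.10))
```

The point isn’t the code itself; it’s that once the disclosure is a function of the stats, omitting the stats becomes a visible choice rather than a cultural default.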

It’s important that we manage expectations with respect to treatment limitations and failure, both clinically and culturally.

In some cases, the presence of subjectivity can be haphazardly used by mediocre therapists to excuse failure; a side effect of subjectively driven data, imo. Other sciences simply don’t allow this opportunity, due to their heavy reliance on logic-based mathematics.

One of my goals is seeing treatment acknowledge this subjectivity and bias more directly, while communicating probabilities of failure, harm, and stigma as standard operating procedure, not waiting for each therapist to make these decisions independently or for each patient to ask.

One practical issue this causes in my own life is mismanaged treatment expectations: my family and friends think therapy is a no-risk, win-win solution that always helps. And it’s just not.

Note: I’ve worked in data collection for 20 years as an engineer/scientist/coder.
