Any human endeavor reveals a plethora of life advice. Like water off a duck’s back, most of this wisdom rolls off unheeded. But every once in a blue-mad-moon a sliver of knowledge snakes through, carved into mind-stone forever. Mental health is no exception. Only two statements in my decades-long quest through “professional” mental health stand out, both offering a stark reality prescribers like to duck:

  1. 1 in 5 bipolar patients dies by suicide; up to 60% attempt suicide at least once. (NIH)
  2. Managing bipolar is like training for the Olympics every single day. (pending)

These two stats pack a powerful and sobering punch.

These stats sum up the whole shebang in just two sentences: why this illness must be taken so seriously (it is life or death), and why managing it is so hard to pull off. We are not splitting atoms here, but there is even less room for error. It becomes easy to underestimate how much weight the words in any given session actually carry.

Many of the risk factors bipolar patients face are controllable with good old-fashioned cognitive and behavioral health: i.e., not being a fat, reckless (falling) Karen.

Yup, that’s right: that soul-crushing laziness you aloofly yawn around with all day is not allowed here. And that shitty, fitness-free diet? Please. Weakness cannot be afforded on a razor’s edge. Not here. No excuses. Out here you die all alone, cream-filled and Twinkie-soft. The irony of David Goggins is pure: you need to be this crazy to manage being this (almost) crazy. Now giddyup, Chuck.

The point is the lack of stats in psychiatric care.

The fact that large swaths of the bipolar population want to hopelessly leap off bridges offers insight into the persistent madness everyone suffering faces: it’s hard… and the statistical failures are real numbers. People are dying.

Even more alarming: for every bipolar person who has tried to die, millions more have pondered the demons but never pulled that trigger: blam. Wandering through life wanting to die is still a huge human failure; a failure notoriously unmeasured clinically and unspoken socially.

Not failing isn’t success. It just looks that way.

Here’s the kicker: beneath these stats lie the (often unspoken) measurements of psychiatric flailing; its successes and its failures. 60% of any patient bracket is alarmingly high, pointing to an entire industry operating, objectively, ineffectively.

One of the inherent downsides of operating with bias inside a vacuum is a problematic fact: critiquing the failure critiques the provider; the two entities are intimately intertwined in this “science”, glued together by ego. The statistical lines blur into the personal.

Imagine 1 in every 5 homes built exploding catastrophically without warning, killing everyone inside. Would we still celebrate the housing market so openly? Would we debate Victorian versus Contemporary? Or would we shift attention to the bigger problem: 20% of homes are fatally dangerous and people are dying. This is failure. There is nothing to celebrate about that. Yet psychiatrists are all heart and smiles, hugging and clapping at their own pride-filled efforts, openly encouraging evaded responsibility.

Psychiatrists don’t admit it: they are broken & archaic.

The painful truth is this: psychiatrists don’t like admitting the arena they operate within is embarrassingly broken and archaic. The stats are bad. Really bad. Yet therapy and medications still get dictated socially as if they were pure patient good.

“It can’t hurt,” they might say. “You should be in therapy,” they plead… and my personal favorite: “You need to take your meds ASAP.” The massive issues notwithstanding, the industry’s terminal failures are rarely discussed openly with patients, let alone addressed directly with quantifiable (scientific) care.

The takeaway is worse: something in psychiatric care isn’t working. That said, trying to hold any med provider to these stats is a quick route to pissing someone off. You will be met with wishy-washy replies insisting that treatment drastically shifts these stats in the patient’s favor.

Though you will almost never get actual data as to how or why; that responsibility is left to vanish into the treatment-ether. Push the issue and you risk coming off as negative or uncooperative. Or, god forbid, anti-Big-Pharma and against it all.

Replacing quantifiable stats with qualitative shrugs.

Case in point: I used to bring up the 1-in-5 mortality rate to my practitioner all the time, as a real concern I was actively pushing against. She got outwardly annoyed with my persistent “negativity”, replacing my quantifiable stats with qualitative shrugs:

“You really like that stat, don’t you, Mizz?” as if I were leaning on it to excuse my failures.

“Well Doc, it does define your industry’s success rate, does it not?”

“Err… that number goes way down if you are in treatment. Treatment helps,” they always defend.

“Oh yeah, how much does it help exactly?” I beg.

“It helps a lot… I see people get better all the time,” she pleads.

“How many don’t get better?”

“Umm… some, but it’s rare. I don’t know exactly offhand. It’s usually helpful.”

A forced mantra washes all the science away.

As this forced mantra of “treatment always helps” washes all the science away, you quickly discover there is little room for legitimate science in psychiatric care. Bias and subjectivity rule the roost.

As an ex-scientist, ex-engineer, and current computer programmer, I find this aversion to statistical analysis truly alarming, yet it is an industry-wide experience nonetheless.

Combine this with the sheer lack of peer review in treatment (#), and you start to see an entire industry shifting subjectively on loose sand; hold the professionals to these real numbers, and watch them squirm out of involvement: “We help people feel better. But you have to want to be here.”

Without solutions, we become the problem.

Is nothing in your treatment working? Have you reached a dead end psychiatrically? Are you scared it will all fail? Worried no one is actually helping? Ask your provider how they measure their own success, and whether any mechanisms are in place for measuring the failures.

Hold prescribers accountable to real-life numbers, countable without bias. Ask why, and how much. And just like that, your 20-30 minutes of treatment are up; squandered uselessly.

Good luck, data-mongers; it’s a shit show.

Don’t believe me? Try it for yourself: ask your prescriber for data and statistics, and have them identify where their instrumental precision and accuracy stand with respect to your treatment. Or don’t ask for anything, and just quietly notice the inherent lack of stats; notice how it all shifts, suddenly and qualitatively, into “good and helpful.”

The process by which med providers collect, analyze, assess, and compile their patient data is utterly heartbreaking by even half-baked scientific standards. Add to this the subjectivity and bias inherent in patient feedback, and you watch the whole house of cards start to crumble quickly.

Conclusion: the “science” behind bipolar treatment.

Remember learning the precision versus accuracy of scientific instruments? Sig figs and standard deviations of error? Yeah, there is none of that here. At best, you might get a short form rating your stress and inattention from 1 to 10, but these are usually embedded within the formalities of managing controlled substances and preventing abuse, nothing more.

At worst, your data never gets collected at all, and you spend entire decades in “treatment” completely devoid of scientific data or peer review. In the world of science, it’s a train wreck to witness. And your prescriber will surely refuse to acknowledge any sentiment that they are flailing.

Voicing the industry’s problems will not make you psychiatric friends, warranted as those problems may be. That in itself is a huge industry failure.

Dozens of medications tried over 10 years.

As someone who has tried dozens of medications over 10 years, I have witnessed their breakdowns firsthand. The failure is not the problem; the issue is that this failure has no mechanism by which to be measured. “The new med is helping great! Thank you, doctor” is an easy statement for professionals to handle. On to the next. No equivalent exists for failure.

A great psychiatric fallback gets applied universally: when all else fails, discuss the limitations of treatment under the guise of “managing patient expectations,” while pleading that none of this was ever meant to solve your problems at all, only to patch them.

Indeed, try finding a prescriber willing to admit that everything they do is half-assed and wrong, and that their industry is in grave trouble. Then watch them all call you (almost) crazy.
