Episode Overview
Dr. A. John Rush, renowned for leading the landmark STAR*D depression study, addresses a critical challenge in modern psychiatry: while physicians often rely on clinical intuition to treat complex depression, research shows this approach has a significant blind spot. Experience alone can miss the full extent of a patient’s suffering, leaving crucial progress untracked.
Dr. Rush describes a way to close this clinical blind spot: measurement-based care. He explains how clinicians can implement simple assessment tools to gather objective data, leading to more precise treatment adjustments. This straightforward method lets physicians see what is truly working and can significantly boost patient remission rates.
Key Episode Highlights
🎯 BIGGEST LESSON [12:17]
“By bringing measurement to the bedside, we bring precision and science. The evidence is very clear right now. We make better decisions about what to do with patients.”
🎯 OTHER KEY TAKEAWAYS:
⚠️ THE HIDDEN GAP IN PSYCHIATRIC CARE [8:10]
“We don’t know anything about in what order, in what combination, and by what methodology we implement that ‘what’.”
Dr. Rush explains why knowing a treatment can work is only half the battle. This is the crucial gap between research and real-world results that most clinicians overlook.
✨ WHY ‘PROVEN’ TREATMENTS FAIL YOUR PATIENTS [20:45]
“Does this apply to everybody with depression, no matter how they show up? Absolutely not. That’s where it really gets very, very interesting because now we’re going from efficacy research to effectiveness research.”
Learn the critical difference between a treatment working in a controlled trial versus in your complex, real-world patient population.
⚡ THE LAW OF DIMINISHING RETURNS IN DEPRESSION [34:57]
“The more steps you take, the problem is, the less likely you are to get into remission. So remission rates were like 35% in the first step, 28% in the second step, 15% in the third step, 15% in the fourth step.”
Dr. Rush reveals the stark data from the STAR*D study. Use this critical insight to set realistic expectations with patients about the challenges of treatment-resistant depression.
Episode Chapters
00:00 – Introducing Dr. A. John Rush
02:20 – Why Dr. Rush Chose Psychiatry & a Career in Clinical Research
05:45 – How Cognitive Therapy Shaped Evidence-Based Psychiatry
07:10 – Strategies, Tactics, and the Research Gap
10:59 – Using AI & Clinical Data to Guide Treatment Decisions
18:39 – Why Clinical Trial Results Don’t Match Real-World Patients
22:54 – Pragmatic Trials That Reflect Everyday Psychiatric Practice
30:48 – The STAR*D Trial: Sequencing Treatments for Depression
36:32 – Dose Optimization & Long-Term Depression Recovery
39:56 – Building a Learning Healthcare System in Psychiatry
43:48 – Dr. Rush’s Advice for Researchers and Clinicians
Additional Resources
Further Reading
Journal of Clinical Psychiatry: psychiatrist.com/jcp/
Dr. A. John Rush: https://www.linkedin.com/in/a-john-rush-8aa46042/
The Host
Ben Everett, PhD, is the creator and host of The JCP Podcast, a series that brings together leading voices in psychiatry to explore the latest research and its clinical implications. Everett earned his PhD in Biochemistry with an emphasis in Neuroscience from the University of Tennessee Health Science Center. Over a two-decade career spanning academia, publishing, and the pharmaceutical industry, he has helped launch more than a dozen new treatments across psychiatry, neurology, and cardiometabolic medicine. His current work focuses on translating complex scientific advances into accessible, evidence-based insights that inform clinical practice and foster meaningful dialogue among mental health professionals.
Full Episode Transcript
This transcript has been auto-generated and may contain errors. Please refer to the original recording for full accuracy.
00:00 Introducing Dr. A. John Rush
Dr. Ben Everett: Welcome to The JCP Podcast, where we explore the science and stories shaping mental health care today. I am your host, Dr. Ben Everett, and on this podcast we speak with clinicians, researchers, and thought leaders advancing the field of psychiatry with a focus on not just what is new, but what is meaningful for our listeners in their clinical practice.
My guest today is Dr. John Rush, one of the most influential clinical researchers in modern psychiatry. Dr. Rush is internationally recognized for his work in mood disorders, treatment-resistant depression, and the design of large pragmatic clinical trials that bridge research and real-world care. Over the course of a career spanning several decades, he has held leadership roles at institutions including the University of Texas Southwestern Medical Center, Duke-National University of Singapore, and numerous national and international research initiatives. He is perhaps best known for leading the landmark Sequenced Treatment Alternatives to Relieve Depression or STAR*D study, which fundamentally reshaped how clinicians think about antidepressant treatment strategies and treatment-resistant depression.
Dr. Rush has authored hundreds of scientific publications and has played a major role in developing measurement-based care approaches and outcomes tools that are now widely used in psychiatric research and practice. Much of his recent work has focused on how we can better translate clinical research into everyday clinical care and how routine clinical practice itself can become a source of new knowledge.
Today, we are going to talk about this challenge, but I would rather frame it as an opportunity: the opportunity to translate clinical research into real-world practice to provide better care for patients, and to explore how clinicians and researchers can work together to close the gap between what we know and what we actually do in psychiatry.
So with that, Dr. Rush, welcome to The JCP Podcast.
Dr. A. John Rush: Well, thank you very much, Ben. Glad to be here and excited to talk to you about a whole variety of things.
Dr. Ben Everett: Yeah, this is going to be a lot to get through today.
Well, look, we like to start every episode with a couple of icebreaker questions. They are pretty similar in format each time, but we like to make our guests feel at home, and also to let everybody get to know people on a bit of a personal level, not just as a name on a textbook or a bunch of papers. So let us start at the beginning. As a young child, did you know medicine and science were for you, or is this something you came to a little later in life?
02:20 Why Dr. Rush Chose Psychiatry & a Career in Clinical Research
Dr. A. John Rush: I was a science person in sixth grade. I loved experimenting in the chemistry laboratory. I have the honor of having made dynamite in my junior chemistry laboratory class and had a little explosion there. Luckily it did not get on my transcript. So I knew I was going to be a science guy for sure. Medicine was a little bit later. I was going to be a math guy. But one of the guys in my math class, who was really very much a genius, solved an insoluble math problem that was on our test. So I said, “I am going to flunk math.” And I did not fall in love with E. coli, although I spent a year with them. There were millions of them. Not one of them did I fall in love with. So I said, “I like people, I want to do people research and people care.” That got me into medicine.
There was one other factor, which I asked my father, “How long would you keep paying for my tuition?” He says, “As long as you are a full-time student.” I said, “I am going to medical school. That is the longest course.” So I love learning. So that is the other factor.
Dr. Ben Everett: Well, I tell you, lifelong learning is something that I hope is near and dear to all of our listeners’ hearts, because when you get into science and medicine and healthcare you really need to stay on top of things. And what a gift from your father to keep paying for tuition. I mean, the amount of student debt that people take on going through these long advanced practitioner programs and medical school now is staggering.
So at what point going through medical school and rotations did you decide on psychiatry? Did you say, “Wow, this is really what I am meant to do”?
Dr. A. John Rush: Well, I knew in medicine that the most important organ in the body is the brain, unless you are an adolescent, in which case it could be your reproductive organs, but when you grow up it is your brain. And I was really thinking about neurology, but I was interested in the neurology that made you a person. So that was basically how I got into psychiatry. I was heading to neurology. I tried the Berry Plan to get out of Vietnam, applying to be a neurologist, because they defer you. And then later they said, “We do not need any more neurologists, we need a general medical officer.” So that is how I ended up in Army general medicine after my internship.
But I was in Germany and I hooked up with a psychiatrist trained at Walter Reed and I became an on-the-job trainee under him. His name is Hal Gillespie. Turned out to be the best man in my first wedding. I just loved psychiatry. It was so fascinating. The patients actually got better. The mystery of the mind and how it mismanages us and how we mismanage it was just too intriguing to pass up. So psychiatry was a slam dunk.
Dr. Ben Everett: I think that is an answer that will resonate with most of our listeners. I think we are all drawn to this for different reasons. But yeah, I am a scientist, a neuroscientist of sorts, and I got into it for some of the same reasons. It is like, how does this ball of tissue end up manifesting an ego and a self and all that? It is really pretty fascinating.
So you have done a lot with pragmatic research, large clinical trials, the STAR*D trial. At what point in time did you say, you know, I am really interested in this kind of clinical research as opposed to just being a clinician? I will not say just being a clinician. Obviously, it is a very important job.
05:45 How Cognitive Therapy Shaped Evidence-Based Psychiatry
Dr. A. John Rush: It was really in residency, because I was supervised by a psychopharmacologist named Joe Mendels. I was trained on structured interviewing with a very old version called the SADS, the Schedule for Affective Disorders and Schizophrenia. And another mentor was Tim Beck, and Tim Beck was working on something that had not yet been called ‘cognitive therapy’, but it became that while I was a resident and he was supervising us, doing stuff that other people said you should not do: talk to the patient directly, sit them up, give them homework, measure what they are doing. And of course, he had the Beck Depression Inventory.
And so I said, “This guy is doing something really quite different.” And that was when I was enamored of his treatment. I mean, he supervised me. They got better. I said to him, “Dr. Beck, you really need to figure out if this works.” That is impudence beyond belief. Right? I am a second-year resident. But he said, “What do you mean?” I said, “You know, like a trial, like they do with drugs.” He said, “Well, that is a good idea.” I said, “Are you kidding? Because you have been working on this for twenty years. You are actually going to put this on the line? I mean, it could undo everything.” He said, “You know, you have got to go with the evidence.” And that is why he is one of my research heroes. So that got me into having to learn trials from lots and lots of wonderful people over the years.
07:10 Strategies, Tactics, and the Research Gap
Dr. Ben Everett: That is fascinating. You have had a number of very influential big names that were very pivotal in your training in your early career. And you can see how that shaped the trajectory of your career and what you ended up doing.
So a theme that runs through your work is what you call the gap between what we know and what we do in psychiatry. So I am curious if you could just define this research gap for the listeners who may not be familiar with this concept.
Dr. A. John Rush: Well, a lot of our research, certainly in the world of randomized controlled trials, is designed to find out whether something works and whether it is safe. So that is what I call the “what,” the “what to do.” “Should this be on our list of what to do?” And you get a yes or a no. No, it is too dangerous, it does not work, it does not keep working: off the list. Or yes, it works: it is on the list, it is a “what to do.”
But we still do not know anything about in what order, in what combination, and by what methodology we implement that “what.” So I call the “whats” the strategies. Those are the things you do. And the “how to” is the tactics. How do you do that? What is the dose escalation scheme? How long do you go before you fold? How long should you keep on if they have responded well? What combinations work better? If something is somewhat better and I add something, do I make it even better, or do I make it worse? So that is the way I think about strategies and tactics. The strategies are what to do, and the tactics are how and who and when and where, and obviously to what effect.
Dr. Ben Everett: So why do you think the practice of psychiatry has maybe struggled more than some other areas of medicine in translating research findings into clinical practice? Is it a disconnect in the endpoints used in these regulatory trials, endpoints that clinicians do not routinely use themselves? Or is it something else? Is it more nuanced?
Dr. A. John Rush: I think it is a combination of history and habits. The history of psychiatry, of course, has been largely bizarre behaviors that are threatening to the public. “Alienists” was the old name for us. “We have got a bunch of aliens in here. What do you do with them? Put them as far away from you as possible, and you do not want to catch it,” because, remember, infectious disease, syphilis among them, was a big cause of bizarre behavior and all kinds of other problems.
So we were already in the behavioral business, but we did not have any measurement tools. We did not have an EKG, which has been around for a long time. We did not have blood pressure. I mean, we took it, but it did not help us diagnose or manage. So we developed the habit of having a really good understanding of symptoms and course and how people behave, but we did not really have a set of tools. And then the world did develop a lot of symptom tools. And now we have measures of function and quality of life, other key outcomes, but we have the habit of not needing them. And experience says maybe we do not need them in a lot of patients. Maybe our judgment is pretty good.
But I would say within the last twenty or thirty years, we have now gotten, I think, fairly sophisticated research to say we may be good, but we can get better if we apply some additional tools because they tell us things that our impression is not bad at, but is maybe not as good as it should be, particularly in certain circumstances. I think that is where it is a habit issue. But now we have the tools.
10:59 Using AI & Clinical Data to Guide Treatment Decisions
The other thing that I would throw into the mix, and I think it is important, is we now have the tools to compile outcomes across people, and we could not do that before. I mean, when I was a resident, how did you figure out what to do? You asked for a second opinion, but you did not ask, “Do you have a lot of people in your database at the University of Pennsylvania who have had three episodes of depression and have failed on drug A and drug B? Can you tell me what drug C is going to do?”
But now with these large language models and all of our AI technology, I think we are really literally very shortly on the verge of being able to search databases of millions of people to pick the fifty thousand people that look like the patient in front of me who have been treated in different kinds of ways. And we can ask, “How did they do and how were they treated? Does the dose matter? Did people with fifty milligrams of sertraline do as well as people in one hundred and fifty?” Very simple question. In the old days, I had to send that patient to another person. The inventory that he gave me was his experiential inventory, but not a million people. And it is his impression of what he did that helps me with my impression of what to do. So by bringing measurement to the bedside, we are bringing precision and science.
I think the evidence is very clear now that we do make better decisions about what to do with patients. There was a recent study by Husain in JAMA Network that replicated an older study by Dr. Guo in The American Journal of Psychiatry. It was exactly the same design. They said, “Use mirtazapine and paroxetine. Choose either. If that does not work, switch to the other. Use measurement-based care. Here is the tool. Measure the side effects, measure the symptoms. If you see this at this point, do this. If you see that at that point, do that.” And they had exactly the same outcome. A totally different sample, from a different country even, had the same substantial benefit: the percent who remitted was higher by thirty or forty percent, and the percent who responded was higher by thirty or forty percent. Not small. You do not need a statistician to see something that big. And the dose was higher and the side effects were not.
So I think we really know how to do this better. It is not surprising. Think about treating blood pressure when you have a blood pressure cuff versus going by impression. When I was in the Army, I would take the pulse and guess at the blood pressure, and then I would take the blood pressure. I was not too far off, but that is not the best way to do it. It is much more precise if we use measurement, and it helps us realize that sometimes we are not as smart as we think we are. The other thing is, I think it gives patients a chance to say, “You know, this is not going as well as I think it should, Doc. Can you help me with this problem?” It is more formal, but patients are afraid to tell the doctor that things are not going quite as well as they wish. So I think it also gives the patient a way of talking to us.
Dr. Ben Everett: That is a lot to think through, and I think very well said. I was thinking of follow-up questions, but I think you really spoke to the consequences for patients where this gap exists, and to using big data and LLMs now to get at the problem, replicate findings, and publish them. But then I think we still have a gap, right? Let us say I am a clinician who is ten or fifteen years out of residency, and I have got my experience, which you spoke to, and I think I am pretty good at treating anxiety and depression in my patient population. But maybe I could be doing better. So how do we get to where clinicians really rely on evidence, where new evidence validated in very large cohorts might inform better patient care? How do we move the needle there?
Dr. A. John Rush: I think you are talking about either off-label use or new use of new agents. Is that where you are going?
Dr. Ben Everett: No, just generally speaking, we have got experience and we have got evidence-based medicine. So let us say I have got a robust amount of experience, ten or fifteen years out of residency. It would not be me, again; I am just a scientist, and I always want to make clear that I am not a physician when I say this. But there are these new datasets that come out that say, “Hey, maybe this combination, paroxetine and mirtazapine, is better for this subset of patients,” and I did not see that. How do we ensure that people are aware of the evidence, and address this learning gap for those who are done with residency, beyond the continuing education hours they have to do?
Dr. A. John Rush: Yeah, I think the EHR can be made to search for the questions in front of you. I can ask ChatGPT even now. Obviously the answers are not perfect; it does hallucinate, it makes mistakes, it is getting better, et cetera. But it saves me from having to go to the library. “Is there any evidence that such and such and such and such together in this kind of patient does anything good? How many patients? How long have they been followed?” So I think that as we become more precise and more thoughtful about what we are doing, we should become more curious about what we are doing, and be a little less certain that we have the answer, but more certain that the patient literally has the answer, or that other patients and other doctors are developing answers to our question. And now our job is to get all of those second opinions available to me and my patient right now, today, before I say, “Thank you very much, I will see you in a week.”
And I really think that the technology is very ripe to do that. It is already doing it in a number of areas. Our biggest difficulty, as I am sure you have seen me say before, is if we do not have outcomes, it is hard to say how the patient is doing. And we may hope that someone will read our notes or some natural language processing tool will read our notes for us. And there are people developing these. But as we all know, notetakers, some are very diligent and very specific. Some people are copy and paste and hope for the best. Some people do not want to put something in the record because it embarrasses the patient, or they do not want to be subpoenaed because it is a marital dispute or some other issue. So it is a very uneven process.
And to be honest, I personally have had to read over a thousand notes for a research project a while ago. I found about half of them were not interpretable. Example, “Patient doing well. Increase olanzapine by six milligrams.” Wait a minute – the patient is doing well-
Dr. Ben Everett: Patient’s doing well, why do I need to increase the olanzapine?
Dr. A. John Rush: Yeah, exactly. And I think really, that is a shorthand. Doing well, but not well enough. Still room to improve. Thirty percent better, still has fifty percent to go. But he did not write all that stuff down. So I am not saying the doctor made a mistake, but it is a real pain to write these notes in a way that is illuminating. And because we are under such time pressure, I do think that our computers can now start to compile our own experiences and those of other people as long as we have some kind of basic outcome for each case, each time we make a decision.
18:39 Why Clinical Trial Results Don’t Match Real-World Patients
Dr. Ben Everett: That is really good stuff right there.
So let us transition a little bit. I know you have written about some structural reasons why research does not always translate well into clinical practice. One issue you raise is that many clinical trials study patients who look very different from the patients we actually see in clinical practice. I hope you can maybe give us some perspectives on why that is.
Dr. A. John Rush: Sure. Well, it is by design, and it is very important. And the design, I am sure many of the listeners know is when you are starting out with a study to determine whether a drug is safe and whether it is effective, you want to be sure it is for a particular condition. And so you do not want to have people with multiple conditions in that sample, because let us say half the people have an anxiety disorder and depression and half do not. And you are studying an antidepressant. But what if the drug is only good for anxiety disorders? You could still find an effect that looks like an antidepressant effect because the anxiety is getting better and the depressive symptoms do respond a little bit to that. But it is an uncertain kind of answer to the question.
So what we want to do is be sure we have just the right diagnosis. And then we want to be sure that we do not have people that are so difficult to treat that it would take a gigantic number of people to find a small effect. So that means we want to have people that are not too treatment-resistant, but not too easy to treat. They have been ill for at least several months, not two weeks, and they do not have a lot of other comorbidity.
Well, guess what comes in the door. People that have been sick for five or ten years, been in and out of episodes, have lots of comorbidity. They are currently under a lot of stress. They are undergoing a career change on top of being depressed. Those individuals are not suitable for the trials because it threatens what we call the internal validity of the study. If we get an effect, we are not sure that it is a sound conclusion, that it is valid. So by having an internally very, very valid study, we have to sacrifice what we call generalizability. “Does this apply to everybody with depression, no matter how they show up? Absolutely not.”
And that is where it really gets very interesting, because now we are going from efficacy research to effectiveness research. Is it effective in the real world with real patients? Does it have efficacy under what I would call greenhouse conditions, where we have controlled a lot of different things. And when I tell you it works, I am absolutely certain it works, but I am not certain it works for everybody. And that is a major gap between the populations that we study and the populations we see.
And the other gap is that in a research process, we augment the treatment. We have a research nurse, we are making sure the patient does not drop out, we are measuring outcome, we are adjusting dose based on the outcome. We are not doing that in most clinical practices even now. So that is another difference: we have a set of procedural differences on top of the population differences. And the third issue, which is even more important, is: does it keep working? Just because I get you better in eight weeks, how is your disease at six months or a year? As we know, some of our treatments start off with a bang but poop out over time. Those are also things you cannot get out of the acute regulatory trials.
Dr. Ben Everett: Yeah, especially if you are just looking at eight to twelve weeks or something like that, maybe six months of follow-up. But sometimes you do not even get that much follow-up.
And to your second point about the procedural differences, I think that is one of the things that can drive a lot of the placebo effect we see throughout clinical medicine, but especially in psychiatry, where it is like, “Man, that is a heck of a placebo they are using in this trial.” We have seen a number of promising things for schizophrenia in the past couple of years where, going from phase two to phase three, it is not that the active arm did not deliver. It looked about like what we expected from phase two. But the placebo response was so markedly different that there was really no delta between active and placebo anymore. So I think that can be a challenge with this type of clinical development.
22:54 Pragmatic Trials That Reflect Everyday Psychiatric Practice
So this begs the question- we have got all of these constraints that go into regulatory approval for a new drug or a new intervention of some type to get something on the market that has a label and an indication and some idea about safety and efficacy and a very specific population. So then how do we design clinical trials or get to clinical research that gets to the more generalizable part? I think this kind of gets back to the experience part, maybe that you were talking about when we first started. So how do we now bridge this gap between the regulatory trials and what people really need to inform clinical practice?
Dr. A. John Rush: Right, and this is a challenge, by the way, not just in psychiatry but throughout medicine, as you know. So there are several different things that have been done. One is what they call a pragmatic trial, or a point-of-care trial. They take patients who are literally part of who is being seen. They have a depression. They may have other comorbidities. They could be chronically depressed, acutely depressed, treatment-resistant, not resistant, lots of different kinds of depression. And they say, “Let’s just measure one or two simple outcomes and let’s randomize.” So if you randomize and you are in equipoise, meaning you do not have a favorite dog in the hunt, so to speak, because it is not clear that either is better than the other, then you look at what happens. And now you have solved the sample problem, because you are not excluding anyone; these are people that you would ordinarily see in practice and think, “Well, if it works for depression and they happen to have depression and substance abuse or depression and panic disorder, let’s see if it works.”
And we do know that in fact there is evidence that, with depression at least, when they have comorbid panic or other anxiety disorders, they do not do as well. They do better than not getting the treatment, but not as well as if they did not have that comorbidity. Well, it kind of makes sense, you know, if you have diabetes and arthritis and other kinds of problems and I am trying to treat you, very likely that too many disorders will be too much of a weight. And so I will still get the effect, but not quite as big as I had hoped for. So I think that is one approach.
The other approach is to go into electronic health records. If we have outcomes in the records, and now we can look at what we have actually done already because we are already treating people- what is their outcome? And so we do not have to sort of sign up for a formal trial as long as we have outcomes in the record and it is ethical to use those outcomes, anonymizing the data, that is just learning from what we do, learning by what we do.
Dr. Ben Everett: Right. So how can practicing clinicians in different settings, academic versus community, help address or solve this problem? If I am in a small community-based practice with a handful of other practitioners, is there some way I can, within HIPAA constraints, appropriately upload anonymized data from my EHR so that other people could analyze it, perhaps with an LLM? Or do we need to lean on the Stanfords and the Penns that have these large patient databases?
Dr. A. John Rush: Oh, great question. A couple of answers. The APA actually has a practice network and that is available to clinicians to sign up and they can upload and anonymize the data. It is not yet at the level of returning the information to you immediately, but it is on the verge of being able to do that. So you can at least compare your patients with some other persons or other groups of patients. That is helpful.
But I think the most important step would be to have the EHR itself accept a range of outcomes that we put in, just like you put in a PSA or a blood pressure or whatever it happens to be. That allows us to track, for every patient we see individually, how that patient is doing and what happens to their measurements over time. We did a study recently using just a global measure, the Clinical Global Impression of Severity (CGIS), and what we found was that the higher the CGIS, that is, the worse the patient, the more likely they were to crash and burn within six months after the first six months of observation. But the other interesting thing was the waxing and waning of the CGIS, the beta, if you will, on the regression line. The bigger the wobble, like one time I am a three, now I am a five, now a seven, now a two, the worse the sign, and the combination of the two is worse still. If you add a global assessment of function, another zero-to-one-hundred scale, just a function scale, it adds to that prediction, so that we can identify the most difficult or most worrisome patients, those most likely to crash and burn. It is the bottom sixth of the sample.
And now we can say, “You know, you are in this group, I have got to take a double look at what is going on with you, maybe change the treatment, certainly see you more often in order to head off what I think is going to happen to you.” And that is just a simple global measure. So there is hope. However, you have to have the EHR receive the data and then you have to automate that predictive model. The doctor cannot sit there and start doing some kind of computer programming in the middle of seeing the patient. No, you need a dashboard and you need something that would pop up and say, kind of like, I guess, if you are a stock dealer. I do not know much about stocks, but you know, “Sell this one, buy that one, and watch this one carefully, like hold, sell, hold, buy.” Some sort of indicator so that when we get a clue from what the patient is doing, we are making individualized predictions and we give you some probability: ten percent chance of getting worse, fifty percent chance of getting worse. If I hear those numbers, I am going to do something differently depending on how big the differences are in the numbers. So that is where the EHR has got to be shaped up to help us.
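The dashboard logic Dr. Rush describes can be sketched in a few lines. This is a minimal illustration only, not the study's actual predictive model: the function name, thresholds, and risk labels below are invented for this example, to show how average CGIS severity plus visit-to-visit "wobble" could drive a hold/watch-style indicator.

```python
# Illustrative sketch only: thresholds and the risk rule are invented here,
# not taken from the study Dr. Rush describes.
from statistics import mean, pstdev

def risk_flag(cgis_scores):
    """Flag patients whose CGI-S history suggests elevated risk.

    cgis_scores: CGI-S ratings (1-7) from successive visits.
    Higher average severity and bigger visit-to-visit 'wobble'
    both push the flag toward 'watch closely'.
    """
    avg = mean(cgis_scores)
    # Standard deviation stands in for the regression-slope/variance idea.
    wobble = pstdev(cgis_scores)
    if avg >= 5 or wobble >= 1.5:
        return "watch closely"
    if avg >= 4 or wobble >= 1:
        return "hold"
    return "stable"

print(risk_flag([3, 5, 7, 2]))  # big wobble -> "watch closely"
print(risk_flag([2, 2, 3, 2]))  # low and steady -> "stable"
```

A real dashboard would replace these hand-set cutoffs with a fitted model and report calibrated probabilities, as the interview suggests.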
Dr. Ben Everett: Yeah, I mean, just me thinking as potentially a patient, I would love that, if my doctor could send me an alert or I could just get a system-level alert: “Hey, based on X, Y and Z, we are kind of concerned about where you are right now. Why do you not come in? I think we have some things that we can do to hopefully head off what we think is about to happen.” I mean, you can see on the heart disease side there are wearables that make all kinds of claims, but we are seeing these types of things happen. And I know even for PTSD there are wearables that are making strides with this type of stuff. And look, if it results in better patient care, who would not opt in for that type of thing?
Dr. A. John Rush: Exactly. But we have got to make our EHR help us. And that still takes some engineering and that takes some investment by Epic or Cerner or whoever the right company is or combination of companies.
And I think the other thing is we have got to develop the habit of measuring and then deciding rather than deciding and then measuring, because we are so used to the latter. We are pretty good at what we do, but we are not as good as we could be. So good is good, but better is even better. And I think getting that habit, you know, “Is it really as I think it is?”, I think those are important decisions. I think a proactive part by the APA and other people would really help us all become better collaborators with each other through the magic of the computer.
Dr. Ben Everett: I live in Mississippi. We are a big AG state. And I know it is often said ‘you cannot manage what you do not measure,’ and so, yeah, you have got to have some objective measures to be able to make informed decisions about what you are going to do. So interesting.
30:48 The STAR*D Trial: Sequencing Treatments for Depression
Well, look, I would be remiss if I did not bring up the STAR*D trial – very influential study and you were very involved in that. You are the primary author of a number of the papers. Definitely one of the more clinically relevant studies ever conducted in depression. And it gets to these kind of sequence-based actions. If for some reason somebody is not familiar with it, maybe could you just give us a little background on STAR*D and maybe we will dive into it a little bit deeper.
Dr. A. John Rush: Sure, yeah. So I was the PI on the study. I lived and breathed this thing for seven to eight years. That is all I did. I did not have a CRO. I was very fortunate to have some wonderful collaborators. Dr. Trivedi and Mike Thase and Maurizio Fava and Andy Nierenberg and John Stewart and Fred Quitkin. And the list goes on and on. I am sure I left out wonderful people- Steve Wisniewski, the key database manager at Pittsburgh. It was a real privilege to do this very long study.
It was launched for an interesting reason, which was the deficit, the deficiency, in our information. So what did we need to know? We needed to know what to do if the first treatment does not work, and we had no idea. There were almost no second-step studies. We had a lot of first-step studies because those were FDA regulatory studies. Perfect. But we had no idea what to do at the next step. And then we thought, well, what about the next step? And what about the next step? So the “what next” question was the primary one. And the second question was how long should we go? When do people really get better? When do you pull the plug, so to speak? They are getting a little better. How about eight weeks? When do I stop? Three weeks? Eight weeks? There was a big dispute about that. So those were the two very practical questions.
We had all kinds of impressions about what would happen. We had seven treatments at the second level, four treatments at the third, and, as you know, two cells at the fourth. And we included psychotherapy in the second step as well because some people wanted to have that, and there was evidence that it is comparable to medications, at least in the first step at that time. Pretty good evidence. So that was our driving principle, and we said it has to be real patients with real doctors in real clinics. None of this research stuff, none of this academic stuff. So we went into primary care, forty percent of our patients, and specialty care, sixty percent of our patients. And it was also public sector and private sector; we tried to be very generalizable.
Dr. Ben Everett: Yeah. So you really checked almost all the boxes of what we talked about earlier and how to really design a study that could be real-world, generalizable. So you were able to do that. So looking back, what did you find again? In case people are, for some reason or another, not familiar. Let you talk about the results just for a minute.
Dr. A. John Rush: So I do not know how long you have, but there are one hundred and forty-two publications, and that is not all of them. There are some I do not even know. I will say the big takeaways. Number one, there are people who respond and remit only after six weeks of treatment. This is hard to believe, but for some people it might be better to go beyond six weeks, even to eight weeks. If they are getting somewhat better, hang in there, adjust the dose; things happen. So that is the first one.
Secondly, the choices between the treatments are not that profoundly different. It was not like the second step sertraline was so much better than bupropion, or that was much better than venlafaxine or whatever. It really has much more to do with how you deliver the treatment than what treatment you deliver. So we really did not find a large difference between- these are large samples, like two hundred people in a cell. So if you do not find a difference at two hundred people, you do not care if it is a thousand people, you do not have a difference. It could be different people are responding, but the comparable efficacy was a big finding.
The third was the more steps you take, the problem is the less likely you are to get into remission. So remission rates were like thirty-five percent in the first step, twenty-eight percent in the second step, fifteen percent in the third step, fifteen percent in the fourth step. So the more you go, the more treatments you fail, the more treatment-resistant you are, the less optimistic you should be about the initial treatment. It is not that you should not take the step, but you need to be prepared for a much smaller return on investment.
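The diminishing per-step rates above also imply a cumulative figure. A minimal sketch of the arithmetic, using the per-step rates quoted in the interview (this is illustrative arithmetic, not a re-analysis of the STAR*D data, and it assumes each step is tried only after the previous one fails):

```python
# Sketch: cumulative chance of remission across sequential treatment steps,
# using the per-step rates quoted in the interview.
def cumulative_remission(step_rates):
    """Probability of remitting by the end of the sequence,
    assuming each step is attempted only after the previous one fails."""
    still_ill = 1.0
    for rate in step_rates:
        still_ill *= (1.0 - rate)
    return 1.0 - still_ill

rates = [0.35, 0.28, 0.15, 0.15]  # steps 1-4 as quoted above
print(round(cumulative_remission(rates), 2))  # about 0.66
```

That lands near the roughly two-thirds cumulative remission figure often cited for STAR*D, while showing how little each later step adds.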
And the other thing is, if you get better after the first step, you will pretty likely keep it; at least half keep the response out to a year. But by the third or the fourth step, you are looking at very few people, like ten percent of the fifteen percent, keeping the response. So it is not just whether you can get there, it is whether you can stay there that matters.
The other thing is that we used measurement-based care in both primary care and psychiatric practices, and the outcomes at the first step were exactly the same. In other words, if you do the same stuff using a precise methodology, you get the same results, because we were using it in the same patient population; they all had the same entry criteria. So it does suggest that a lot of our depressed patients can be very well managed with measurement-based care in primary care. But for psychiatric care, you need to be prepared for more complex patients: harder to get well, harder to stay well. And that obviously leads into where we are now with interventional psychiatry.
36:32 Dose Optimization & Long-Term Depression Recovery
Dr. Ben Everett: All right, so this was what I was going to ask a second ago. I think you just got to it, but looking back, what do you think the most important lesson is? I think I heard a couple of things there. Is it make sure you give it long enough to work, assess your patient in the meantime? But some patients might need six to eight weeks on that first step. Or is it something a little bit different?
Dr. A. John Rush: Yes, I would say persist. And secondly, get the dose up and use measurement-based care. Because the reason I say that is the doses that we got to in STAR*D were about forty percent higher than what was being used in practice. And there is pretty good evidence that the dose matters in a lot of these medications. Obviously not everybody should get one hundred and fifty milligrams of sertraline or whatever the drug of the day is. But if you are measuring symptoms and side effects, whether you are getting better, maybe you hold at fifty for the person that got a bunch of side effects and they may do fine at fifty, but there may be other people, you have got to go to one hundred and fifty or two hundred before you get there. And they do get there and they do not have really bad side effects because that is I think the measurement-based care thing. And the third is the long-term follow-up. Durability is everything.
Dr. Ben Everett: Let me ask you real quick because you have mentioned measurement-based care. I think specifically we have talked about efficacy, but what about safety? I think there are some really well-validated, very simple adverse event questionnaires that clinicians can use to objectively capture that. Also, do you have some favorites or anything that you would like to throw out?
Dr. A. John Rush: I think that the overall judgment as to whether to raise the dose can be done on a global measure. The one that is in the public domain and easy to use is called the FIBSER: Frequency, Intensity, and Burden of Side Effects. It is basically three seven-point scales, scored zero to six, running from none through mild, moderate, and severe. You can ask the patient. I like to talk to patients on a penny-to-dollar rating scale: if your side effects run from a penny up to a dollar, how bad are your side effects today? If they say it is a dollar, that is a lot. Or just use zero to ten, or just use language: none, mild, moderate, severe, or disaster. That is what we want the patient to tell us.
So I do not think you need very specific itemized side effect rating scales because they can go out to forty or fifty items. It is just not possible to use it in practice and it does not really inform the decision. The decision is: can I go up in dose given the global impact of this drug on your life? And I ask the patient, “If I raise the dose, would you give me a smiley face or you would frown or you would run out of the office?” “I am running out of the office.” I am not raising the dose. “I would frown.” “How bad is the frown? Maybe we try it for a few days. If you do not like it, back down, okay?” Or “No, I am okay. I mean, I am not sleeping so well and the sexual dysfunction is there, but I can tolerate. We can go up. I would rather get rid of the depression. I do not worry about those side effects.” So making the patient the collaborator in those decisions with whatever scale one needs. I think we have been preoccupied with which scales and we really should be preoccupied with how to get the patient to communicate to us in a way that we can understand and they are comfortable. And if it is a penny to dollar rating scale or it is a FIBSER, or it is something or other, pick your scale, but tell me how we are doing.
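The FIBSER-style judgment Dr. Rush outlines can be sketched as a simple rule. The cutoffs and advice strings below are invented for illustration and are not published guidance; the only assumption carried over from the interview is that FIBSER yields three ratings (frequency, intensity, burden), each scored zero to six.

```python
# Hedged sketch: turning FIBSER-style ratings into a dose decision.
# Cutoffs and wording are invented for illustration, not clinical guidance.
def dose_advice(frequency, intensity, burden):
    """Map three 0-6 side-effect ratings to a rough dosing suggestion."""
    for score in (frequency, intensity, burden):
        if not 0 <= score <= 6:
            raise ValueError("FIBSER items are rated 0-6")
    if burden >= 5 or intensity >= 5:
        return "do not raise; consider lowering or switching"
    if burden >= 3:
        return "hold the dose, reassess next visit"
    return "side effects tolerable; raising the dose is an option"

print(dose_advice(2, 1, 0))  # mild, infrequent, no burden
```

As the interview stresses, the point is not the specific scale or cutoff but getting the patient's global side-effect burden into the decision at all.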
39:56 Building a Learning Healthcare System in Psychiatry
Dr. Ben Everett: It is good stuff. I know FIBSER is the one that I am the most familiar with and have done some writing about.
So we have just got a couple of minutes left this afternoon. So maybe we will have you back on and we will dive deeper on a couple of other topics. But let us kind of skip into this idea. You have written about psychiatry as a learning healthcare system. Maybe you can talk to us a little bit about what do you think about healthcare as a learning healthcare system?
Dr. A. John Rush: This has been around a long time. It is from the Institute of Medicine. It is not an original idea by me. It has been pioneered in non-psychiatric conditions and shown to be effective. What it really entails is you back away from the position of ‘I know what to do with you because I am smart and I went to school and I have a lot of experience and I am good. That is why you are seeing me. Right.’ We still hold that in our fantasy, but we say to ourselves, “I am not exactly sure in this particular instance because it is a patient I have never seen before how you are going to do on that particular medication or that particular intervention.”
So you re-establish humility and be very honest about it. Remember we were talking about the group-to-individual gap? How do I go from group data to the individual? You actually run an experiment. We do not do informed consent, but we do have a consent for treatment. Now, when we see patients and we say, “Okay, here is what I think might be best. What do you think?” And then you try it out.
And as we accumulate that, we individual practitioners and groups of practitioners in a care system do that repeatedly over time and share that information with each other, we are now starting to learn stuff that I could not learn otherwise. That is the ‘a thousand second opinions’ happening. Because now we can, if you will, harvest the experience of all of the other doctors having to deal with very similar situations that I could not possibly ever have a lifetime of experience doing. And I can say with reasonable certainty, “There is an X percent chance overall, but for you, there is a Y percent chance that this is going to work. And how do I know that? I am looking at thirty thousand people that kind of look like you, and that is sort of what happened to them.”
But I cannot guarantee an outcome. I can guarantee that I will learn and you will learn. And by the way, I have data for what happens if the drug does not work. I have another time at bat with you. But I think that makes the patient aware that we are doing the best we can with as much knowledge as we can get our hands on. But they then become the collaborators in telling us how it is doing, what are the side effects, is it really working as expected, what are the exceptions. And you put that together systematically so I can learn from everybody else’s and they can learn from me. That is a learning healthcare system where you can find things that you otherwise could never find with an NIH trial, you just do not have the data.
Dr. Ben Everett: See, I love that from the patient standpoint too, because there are so many people where there might only be one psychiatrist or a handful in their area, and there are waiting lists to get in if you are trying to get a second opinion. I mean, the patient is suffering the whole time. And this model, it is like everybody is learning from everybody. And I love just the way that you talk about, the way you are talking to your patients. I think we can all learn so much about that. And this idea of shared decision-making and bringing so much data into the conversation is, “Yes, I have got my experience, but like a thousand second opinions kind of built into this right here.” I would be, the patient me is really reassured by that. I like that idea of just that collaborative approach and the shared decision-making is really important.
43:48 Dr. Rush’s Advice for Researchers and Clinicians
All right, maybe one last thing, but in two short parts. So let us just leave with advice. We will think about it maybe two different ways. So advice for the people that are doing research right now in this way of thinking through things, if you might have already said it, but what kind of advice would you give to a young assistant professor at a teaching hospital in the department of psychiatry right now who is interested in doing pragmatic research?
Dr. A. John Rush: So the only research worth doing is either research that changes your mind or changes your behavior. Curiosity research is a waste of time. And so I live and die by that mantra. If it does not change minds or has the ability to change minds, that is the way we think about things, how we conceptualize things. It may change how we do things, like a different rating scale, or it may change things that we do in the clinic, like which drug to use, how to use it, when to use it, what sequence, what combination, which kinds of people, how to better target it. All those things, I think of them as engineering problems. They are not as sexy as a Nobel Prize G-protein story. But we’ve got these treatments. They are pretty darn good. We can make them a lot better if we can address the questions of the when, how, etc., the sequence of things. The tactical questions are really very, very important. So I would say think from the patient perspective.
Dr. Ben Everett: Yeah. And then what about just your community clinicians, what can they be doing right now to help this? We talked about the APA has that portal, whatever it is. I’ll find the link and I’ll make sure it is in the show notes for anybody that is interested. But what advice do you have for clinicians?
Dr. A. John Rush: I would say if you are not measuring outcome with some kind of a global scale or an itemized scale each time you make a decision, try it out and you will- I know a lot of people, many of my friends, say, “I do not need to know it. I already got the idea.” But I have found twenty, thirty percent of patients do not tell you how poorly they are doing when they talk to you. But when they fill out a scale, even if it is a simple little scale like the PHQ or something, they can let you know that you are not quite as good as you think you are. And because you are asking them for it, not in a perfunctory way, but, “I want you to fill out the scale. It is important that I know how you are doing,” and that you use it, I think you will be surprised as to what you can learn.
And I would say get the patient to participate in that same measurement of outcome. I think giving the patient the agency, the ability to say, “Here are my side effects, zero to one hundred, here is my disease severity,” whatever it happens to be. Now you are really talking about making patient-informed decisions as a clinician, and those are the ones that really count because we do not know how they feel. We have to trust them to tell us.
Dr. Ben Everett: That is great stuff. And I know there is plenty of literature on how sometimes we do not listen quite as well as we think we do. When they do the surveys of patients who are in practice or actually being treated by psychiatrists, and sometimes the patients say, “I feel like maybe they did not listen quite as well as I wanted them to,” or, “I think I said this,” but they did not. So I think it is really important advice.
Dr. A. John Rush: Thank you.
Dr. Ben Everett: All right, well, look, in closing, Dr. Rush, I want to thank you for being here on The JCP Podcast, sharing your experiences, your vast knowledge of clinical research, and all these different things that we explored today. I think a lot more that we could get through, so maybe we will have you back on in the future. I had a lot of fun today. I hope this was good for you too, and I hope the listeners enjoy it as well.
Dr. A. John Rush: Thank you, Ben. It was a lot of fun.
Dr. Ben Everett: This has been The JCP Podcast. Insightful, evidence-based, human-centered.